[ { "msg_contents": "This is the best write-up I've seen yet on quantifying what SSDs are good \nand bad at in a database context:\n\nhttp://www.bigdbahead.com/?p=37\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 29 Apr 2008 14:55:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "SSD database benchmarks" }, { "msg_contents": "On 29.04.2008, at 12:55, Greg Smith wrote:\n\n> This is the best write-up I've seen yet on quantifying what SSDs are \n> good and bad at in a database context:\n>\n> http://www.bigdbahead.com/?p=37\n\nThey totally missed \"mainly write\" applications which most of my \napplications are. Reads in a OLTP setup are typically coming from a \ncache (or, like in our case an index like Solr) while writes go \nthrough ... So you might get some decent IO from the SSD when the \ndatabase just started up without caches filled, but as long as your \ncache hit ratio is good, it doesn't matter anymore after a couple of \nminutes.\n\nNevertheless it's an interesting development.\n\ncug\n\n-- \nhttp://www.event-s.net\n\n", "msg_date": "Tue, 29 Apr 2008 14:00:37 -0600", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD database benchmarks" }, { "msg_contents": "On Tue, 29 Apr 2008, Guido Neitzer wrote:\n\n> They totally missed \"mainly write\" applications which most of my applications \n> are.\n\nNot totally--the very first graph shows that even on random data, 100% \nwrite apps are 1/2 the speed of a regular hard drive.\n\nAfter poking around the site a bit more they also did some tests with MFT, \nsome kernel software from Easy Computing that basically uses a write log \ndisk to improve the write situation:\n\nhttp://www.bigdbahead.com/?p=44\n\nThat version also has a better summary of the results.\n\nMFT is clearly not ready for prime time yet from the general buginess, but \nas that approach matures (and more vendors do something similar in \nhardware) it will be really interesting to see what happens.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 29 Apr 2008 19:26:02 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SSD database benchmarks" }, { "msg_contents": "On Tue, Apr 29, 2008 at 2:55 PM, Greg Smith <[email protected]> wrote:\n> This is the best write-up I've seen yet on quantifying what SSDs are good\n> and bad at in a database context:\n>\n> http://www.bigdbahead.com/?p=37\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\nhere is another really informative article (pdf)\nhttp://www.storagesearch.com/easyco-flashperformance-art.pdf.\n\nThe conclusions are different...they claim it's hard to make good use\nof ssd advantages if you do any random writing at all. Of course,\nthey are also developing an automagical solution to the problem. The\ninformation is good though...it's certainly worth a read. The problem\nis probably solvable.\n\nmerlin\n", "msg_date": "Tue, 29 Apr 2008 19:27:20 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD database benchmarks" } ]
[ { "msg_contents": "I hope I am posting to the right list.\nI am running Postgresql 8.1.9 and don't understand the behavior of\nhistograms for data items not in the MVC list. I teach databases and\nwant to use Postgres as an example. I will appreciate any help that\nanyone can provide.\n\nHere is the data I am using. I am interested only in the \"rank\" attribute.\n\nCREATE TABLE Sailors (\n sid Integer NOT NULL,\n sname varchar(20),\n rank integer,\n age real,\n PRIMARY KEY (sid));\n\nI insert 30 sailor rows:\n\nINSERT INTO Sailors VALUES (3, 'Andrew', 10, 30.0);\nINSERT INTO Sailors VALUES (17, 'Bart', 5, 30.2);\nINSERT INTO Sailors VALUES (29, 'Beth', 3, 30.4);\nINSERT INTO Sailors VALUES (28, 'Bryant', 3, 30.6);\nINSERT INTO Sailors VALUES (4, 'Cynthia', 9, 30.8);\nINSERT INTO Sailors VALUES (16, 'David', 9, 30.9);\nINSERT INTO Sailors VALUES (27, 'Fei', 3, 31.0);\nINSERT INTO Sailors VALUES (12, 'James', 3, 32.0);\nINSERT INTO Sailors VALUES (30, 'Janice', 3, 33.0);\nINSERT INTO Sailors VALUES (2, 'Jim', 8, 34.5);\nINSERT INTO Sailors VALUES (15, 'Jingke', 10, 35.0);\nINSERT INTO Sailors VALUES (26, 'Jonathan',9, 36.0);\nINSERT INTO Sailors VALUES (24, 'Kal', 3, 36.6);\nINSERT INTO Sailors VALUES (14, 'Karen', 8, 37.8);\nINSERT INTO Sailors VALUES (8, 'Karla',7, 39.0);\nINSERT INTO Sailors VALUES (25, 'Kristen', 10, 39.5);\nINSERT INTO Sailors VALUES (19, 'Len', 8, 40.0);\nINSERT INTO Sailors VALUES (7, 'Lois', 8, 41.0);\nINSERT INTO Sailors VALUES (13, 'Mark', 7, 43.0);\nINSERT INTO Sailors VALUES (18, 'Melanie', 1, 44.0);\nINSERT INTO Sailors VALUES (5, 'Niru', 5, 46.0);\nINSERT INTO Sailors VALUES (23, 'Pavel', 3, 48.0);\nINSERT INTO Sailors VALUES (1, 'Sergio', 7, 50.0);\nINSERT INTO Sailors VALUES (6, 'Suhui', 1, 51.0);\nINSERT INTO Sailors VALUES (22, 'Suresh',9, 52.0);\nINSERT INTO Sailors VALUES (20, 'Tim',7, 54.0);\nINSERT INTO Sailors VALUES (21, 'Tom', 10, 56.0);\nINSERT INTO Sailors VALUES (11, 'Warren', 3, 58.0);\nINSERT INTO Sailors VALUES (10, 'WuChang',9, 59.0);\nINSERT INTO Sailors VALUES (9, 'WuChi', 10, 60.0);\n\nafter analyzing, I access the pg_stats table with\n\nSELECT n_distinct, most_common_vals,\n most_common_freqs, histogram_bounds\nFROM pg_stats WHERE tablename = 'sailors' AND attname = 'rank';\n\nand I get:\n\nn_distinct most_common_vals most_common_freqs \nhistogram_bounds\n-0.233333\n {3,9,10,7,8} {0.266667,0.166667,0.166667,0.133333,0.133333} \n{1,5}\n\nI have two questions. I'd appreciate any info you can provide,\nincluding pointers to the source code.\n\n1. Why does Postgres come up with a negative n_distinct? It\napparently thinks that the number of rank values will increase as the\nnumber of sailors increases. What/where is the algorithm that decides\nthat?\n\n2. The most_common_vals and their frequencies make sense. They say\nthat the values {3,9,10,7,8} occur a total of 26 times, so other\nvalues occur a total of 4 times. 
The other, less common, values are 1\nand 5, each occuring twice, so the histogram {1,5} is appropriate.\nIf I run the query\nEXPLAIN SELECT * from sailors where rank = const;\nfor any const not in the MVC list, I get the plan\n\nSeq Scan on sailors (cost=0.00..1.38 rows=2 width=21)\n Filter: (rank = const)\n\nThe \"rows=2\" estimate makes sense when const = 1 or 5, but it makes no\nsense to me for other values of const not in the MVC list.\nFor example, if I run the query\nEXPLAIN SELECT * from sailors where rank = -1000;\nPostgres still gives an estimate of \"row=2\".\nCan someone please explain?\n\nThanks,\n\nLen Shapiro\nPortland State University\n", "msg_date": "Tue, 29 Apr 2008 21:56:32 -0700", "msg_from": "Len Shapiro <[email protected]>", "msg_from_op": true, "msg_subject": "Understanding histograms" }, { "msg_contents": "Len Shapiro <[email protected]> writes:\n> 1. Why does Postgres come up with a negative n_distinct?\n\nIt's a fractional representation. Per the docs:\n\n> stadistinct\tfloat4\t \tThe number of distinct nonnull data values in the column. A value greater than zero is the actual number of distinct values. A value less than zero is the negative of a fraction of the number of rows in the table (for example, a column in which values appear about twice on the average could be represented by stadistinct = -0.5). A zero value means the number of distinct values is unknown\n\n> The \"rows=2\" estimate makes sense when const = 1 or 5, but it makes no\n> sense to me for other values of const not in the MVC list.\n> For example, if I run the query\n> EXPLAIN SELECT * from sailors where rank = -1000;\n> Postgres still gives an estimate of \"row=2\".\n\nI'm not sure what estimate you'd expect instead? The code has a built in\nassumption that no value not present in the MCV list can be more\nfrequent than the last member of the MCV list, so it's definitely not\ngonna guess *more* than 2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Apr 2008 01:19:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding histograms " }, { "msg_contents": "Tom,\n\nThank you for your prompt reply.\n\nOn Tue, Apr 29, 2008 at 10:19 PM, Tom Lane <[email protected]> wrote:\n> Len Shapiro <[email protected]> writes:\n> > 1. Why does Postgres come up with a negative n_distinct?\n>\n> It's a fractional representation. Per the docs:\n>\n> > stadistinct float4 The number of distinct nonnull data values in the column. A value greater than zero is the actual number of distinct values. A value less than zero is the negative of a fraction of the number of rows in the table (for example, a column in which values appear about twice on the average could be represented by stadistinct = -0.5). A zero value means the number of distinct values is unknown\n\nI asked about n_distinct, whose documentation reads in part \"The\nnegated form is used when ANALYZE believes that the number of distinct\nvalues is likely to increase as the table grows\". and I asked about\nwhy ANALYZE believes that the number of distinct values is likely to\nincrease. 
I'm unclear why you quoted to me the documentation on\nstadistinct.\n>\n>\n> > The \"rows=2\" estimate makes sense when const = 1 or 5, but it makes no\n> > sense to me for other values of const not in the MVC list.\n> > For example, if I run the query\n> > EXPLAIN SELECT * from sailors where rank = -1000;\n> > Postgres still gives an estimate of \"row=2\".\n>\n> I'm not sure what estimate you'd expect instead?\n\nInstead I would expect an estimate of \"rows=0\" for values of const\nthat are not in the MCV list and not in the histogram. When the\nhistogram has less than the maximum number of entries, implying (I am\nguessing here) that all non-MCV values are in the histogram list, this\nseems like a simple strategy and has the virtue of being accurate.\n\nWhere in the source is the code that manipulates the histogram?\n\n> The code has a built in\n> assumption that no value not present in the MCV list can be more\n> frequent than the last member of the MCV list, so it's definitely not\n> gonna guess *more* than 2.\n\nThat's interesting. Where is this in the source code?\n\nThanks for all your help.\n\nAll the best,\n\nLen Shapiro\n\n> regards, tom lane\n>\n", "msg_date": "Tue, 29 Apr 2008 23:32:18 -0700", "msg_from": "\"Len Shapiro\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding histograms" }, { "msg_contents": "\"Len Shapiro\" <[email protected]> writes:\n> I asked about n_distinct, whose documentation reads in part \"The\n> negated form is used when ANALYZE believes that the number of distinct\n> values is likely to increase as the table grows\". and I asked about\n> why ANALYZE believes that the number of distinct values is likely to\n> increase. I'm unclear why you quoted to me the documentation on\n> stadistinct.\n\nn_distinct is just a view of stadistinct. I assumed you'd poked around\nin the code enough to know that ...\n\n>>> The \"rows=2\" estimate makes sense when const = 1 or 5, but it makes no\n>>> sense to me for other values of const not in the MVC list.\n>> \n>> I'm not sure what estimate you'd expect instead?\n\n> Instead I would expect an estimate of \"rows=0\" for values of const\n> that are not in the MCV list and not in the histogram.\n\nSurely that's not very sane? The MCV list plus histogram generally\ndon't include every value in the table. IIRC the estimate for values\nnot present in the MCV list is (1 - sum(MCV frequencies)) divided by\n(n_distinct - number of MCV entries), which amounts to assuming that\nall values not present in the MCV list occur equally often. The weak\nspot of course is that the n_distinct estimate may be pretty inaccurate.\n\n> Where in the source is the code that manipulates the histogram?\n\ncommands/analyze.c builds it, and most of the estimation with it\nhappens in utils/adt/selfuncs.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Apr 2008 10:43:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding histograms " }, { "msg_contents": "On Wed, 2008-04-30 at 10:43 -0400, Tom Lane wrote:\n> > Instead I would expect an estimate of \"rows=0\" for values of const\n> > that are not in the MCV list and not in the histogram.\n> \n> Surely that's not very sane? The MCV list plus histogram generally\n> don't include every value in the table. 
IIRC the estimate for values\n> not present in the MCV list is (1 - sum(MCV frequencies)) divided by\n> (n_distinct - number of MCV entries), which amounts to assuming that\n> all values not present in the MCV list occur equally often. The weak\n> spot of course is that the n_distinct estimate may be pretty inaccurate.\n\nMy understanding of Len's question is that, although the MCV list plus\nthe histogram don't include every distinct value in the general case,\nthey do include every value in the specific case where the histogram is\nnot full.\n\nEssentially, this seems like using the histogram to extend the MCV list\nsuch that, together, they represent all distinct values. This idea only\nseems to help when the number of distinct values is greater than the\nmax size of MCVs, but less than the max size of MCVs plus histogram\nbounds.\n\nI'm not sure how much of a gain this is, because right now that could\nbe accomplished by increasing the statistics for that column (and\ntherefore all of your distinct values would fit in the MCV list). Also\nthe statistics aren't guaranteed to be perfectly up-to-date, so an\nestimate of zero might be risky.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Wed, 30 Apr 2008 15:47:02 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding histograms" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> On Wed, 2008-04-30 at 10:43 -0400, Tom Lane wrote:\n>> Surely that's not very sane? The MCV list plus histogram generally\n>> don't include every value in the table.\n\n> My understanding of Len's question is that, although the MCV list plus\n> the histogram don't include every distinct value in the general case,\n> they do include every value in the specific case where the histogram is\n> not full.\n\nI don't believe that's true. It's possible that a small histogram means\nthat you are seeing every value that was in ANALYZE's sample, but it's\na mighty long leap from that to the assumption that there are no other\nvalues in the table. In any case that seems more an artifact of the\nimplementation than a property the histogram would be guaranteed to\nhave.\n\n> ... the statistics aren't guaranteed to be perfectly up-to-date, so an\n> estimate of zero might be risky.\n\nRight. As a matter of policy we never estimate less than one matching\nrow; and I've seriously considered pushing that up to at least two rows\nexcept when we see that the query condition matches a unique constraint.\nYou can get really bad join plans from overly-small estimates.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Apr 2008 19:17:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding histograms " }, { "msg_contents": "\n\"Tom Lane\" <[email protected]> writes:\n\n> Right. As a matter of policy we never estimate less than one matching\n> row; and I've seriously considered pushing that up to at least two rows\n> except when we see that the query condition matches a unique constraint.\n> You can get really bad join plans from overly-small estimates.\n\nThis is something that needs some serious thought though. In the case of\npartitioned tables I've seen someone get badly messed up plans because they\nhad a couple hundred partitions each of which estimated to return 1 row. In\nfact of course they all returned 0 rows except the correct partition. 
(This\nwas in a join so no constraint exclusion)\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Wed, 30 Apr 2008 20:53:44 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding histograms" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> This is something that needs some serious thought though. In the case of\n> partitioned tables I've seen someone get badly messed up plans because they\n> had a couple hundred partitions each of which estimated to return 1 row. In\n> fact of course they all returned 0 rows except the correct partition. (This\n> was in a join so no constraint exclusion)\n\nYeah, one of the things we need to have a \"serious\" partitioning\nsolution is to get the planner's estimation code to understand\nwhat's happening there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 May 2008 00:41:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding histograms " } ]
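A worked check of the "rows=2" estimate discussed in the thread above, plugging the pg_stats numbers quoted for the sailors example into the selectivity rule Tom Lane describes (the constants are copied from the thread; the query itself is only an illustration):

    -- rows = (1 - sum(most_common_freqs)) / (n_distinct - number of MCV entries) * table row count
    SELECT round(
             (1 - (0.266667 + 0.166667 + 0.166667 + 0.133333 + 0.133333))  -- 1 - sum of the five MCV frequencies
             / (0.233333 * 30 - 5)                                         -- n_distinct of -0.233333 on 30 rows is ~7 distinct values, minus 5 MCV entries
             * 30                                                          -- times the 30 sailors rows
           ) AS estimated_rows;                                            -- returns 2, matching the planner's "rows=2"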
[ { "msg_contents": "Hi all,\n\nlooking for a HA master/master or master/slave replication solution. Our \nsetup consists of two databases and we want to use them both for queries.\n\n\nAside from pgpool II there seems no advisable replication solution. But \nthe problem seems to be that we will have a single point of failure with \npgpool. slony also has the disadvantage not to cover a real failover \nsolution. Are there any other manageable and well tested tools/setups for \nour scenario?\n\n\nBest regards\nGernot\n\n\n", "msg_date": "Wed, 30 Apr 2008 17:45:59 +0200", "msg_from": "Gernot Schwed <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres replication" }, { "msg_contents": "On Wed, 30 Apr 2008, Gernot Schwed wrote:\n\n> Hi all,\n>\n> looking for a HA master/master or master/slave replication solution. Our\n> setup consists of two databases and we want to use them both for queries.\n>\n>\n> Aside from pgpool II there seems no advisable replication solution. But\n> the problem seems to be that we will have a single point of failure with\n> pgpool. slony also has the disadvantage not to cover a real failover\n> solution. Are there any other manageable and well tested tools/setups for\n> our scenario?\n\nI'm about to setup a similar config and what I was intending to do is to \nrun pgpool on both boxes and use heartbeat (from http://linux-ha.org ) to \nmove an IP address from one box to the other. clients connect to this \nvirtual IP and then pgpool will distribute the connections to both systems \nfrom there.\n\nDavid Lang\n", "msg_date": "Wed, 30 Apr 2008 13:47:00 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgres replication" }, { "msg_contents": "On Thu, May 1, 2008 at 5:47 AM, <[email protected]> wrote:\n> I'm about to setup a similar config and what I was intending to do is to\n> run pgpool on both boxes and use heartbeat (from http://linux-ha.org ) to\n> move an IP address from one box to the other. clients connect to this\n> virtual IP and then pgpool will distribute the connections to both systems\n> from there.\n\nHow about pgpool-HA? It's a script that integrates pgpool and heartbeat.\nhttp://pgfoundry.org/projects/pgpool/\n\n-- \nFujii Masao\nNIPPON TELEGRAPH AND TELEPHONE CORPORATION\nNTT Open Source Software Center\n", "msg_date": "Thu, 1 May 2008 18:33:54 +0900", "msg_from": "\"Fujii Masao\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres replication" } ]
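A minimal pgpool-II configuration sketch for the two-node, load-balanced setup discussed in the thread above; the hostnames are placeholders, and the heartbeat-managed virtual IP that clients connect to is configured separately, as David describes:

    # pgpool.conf fragment (db1/db2 are assumed hostnames, not taken from the thread)
    backend_hostname0 = 'db1'
    backend_port0     = 5432
    backend_weight0   = 1
    backend_hostname1 = 'db2'
    backend_port1     = 5432
    backend_weight1   = 1
    replication_mode  = true     # send writes to both backends
    load_balance_mode = true     # spread read-only queries across both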
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nSomeone on this list has one of those 'confirm your email' filters on their \nmailbox, which is bouncing back messages ... this is an attempt to try and \nnarrow down the address that is causing this ...\n\n- -- \nMarc G. Fournier Hub.Org Hosting Solutions S.A. (http://www.hub.org)\nEmail . [email protected] MSN . [email protected]\nYahoo . yscrappy Skype: hub.org ICQ . 7615664\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.8 (FreeBSD)\n\niEYEARECAAYFAkgZRAAACgkQ4QvfyHIvDvNHrwCcDdlkjAXSyfyOBa5vgfLVOrSb\nJyoAn005bSbY6lnyjGmlOQzj7fSMNSKV\n=n5PC\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 01 May 2008 01:16:00 -0300", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Please ignore ..." }, { "msg_contents": "On Thu, 01 May 2008 01:16:00 -0300\n\"Marc G. Fournier\" <[email protected]> wrote:\n> Someone on this list has one of those 'confirm your email' filters on their \n\nArgh! Why do people think that it is OK to make their spam problem\neveryone else's problem? Whenever I see one of those I simply\nblackhole the server sending them.\n\nPeople, please, I know the spam you get isn't your fault but it isn't my\nfault either. You clean up your mailbox and I'll clean up mine.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 1 May 2008 01:23:31 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please ignore ..." }, { "msg_contents": "On Thu, 1 May 2008, D'Arcy J.M. Cain wrote:\n\n> Whenever I see one of those I simply blackhole the server sending them.\n\nAh, the ever popular vigilante spam method. What if the message is coming \nfrom, say, gmail.com, and it's getting routed so that you're not sure \nwhich account is originating it? Do you blackhole everybody on *that* \nserver just because there's one idiot?\n\nThis is the same problem on a smaller scale. It's not clear which account \nis reponsible, and I believe I saw that there are other people using the \nsame ISP who also subscribe to the list. That's why Marc is testing who \nthe guilty party is rather than unsubscribing everyone there.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 1 May 2008 02:55:10 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please ignore ..." }, { "msg_contents": "D'Arcy J.M. Cain wrote:\n> On Thu, 01 May 2008 01:16:00 -0300\n> \"Marc G. Fournier\" <[email protected]> wrote:\n> \n>> Someone on this list has one of those 'confirm your email' filters on their \n>> \n>\n> Argh! Why do people think that it is OK to make their spam problem\n> everyone else's problem? Whenever I see one of those I simply\n> blackhole the server sending them.\n>\n> People, please, I know the spam you get isn't your fault but it isn't my\n> fault either. You clean up your mailbox and I'll clean up mine.\n>\n> \nI second that completely\n\nWe use http://www.commtouch.com/\nwhich is built into GMS along with black holes\nWhen they added commtouch to the server our spam went to maybe 2 to 5 \nspam messages a day per mailbox with only a handful of false positives \nover the past 2 years. \n\nNow if i can get them to dump MySQL as the backend\n\n\n\n\n\n\nD'Arcy J.M. 
Cain wrote:\n\nOn Thu, 01 May 2008 01:16:00 -0300\n\"Marc G. Fournier\" <[email protected]> wrote:\n \n\nSomeone on this list has one of those 'confirm your email' filters on their \n \n\n\nArgh! Why do people think that it is OK to make their spam problem\neveryone else's problem? Whenever I see one of those I simply\nblackhole the server sending them.\n\nPeople, please, I know the spam you get isn't your fault but it isn't my\nfault either. You clean up your mailbox and I'll clean up mine.\n\n \n\nI second that completely\n\nWe use http://www.commtouch.com/ \nwhich is built into GMS along with black holes \nWhen they added commtouch  to the server our spam went to maybe 2 to 5\nspam messages  a day per mailbox with only a handful of false positives\nover the past 2 years.  \n\nNow if i can get them to dump MySQL as the backend", "msg_date": "Thu, 01 May 2008 02:21:18 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please ignore ..." }, { "msg_contents": "Hi all the ignorers, ;)\n\nGreg Smith wrote:\n> On Thu, 1 May 2008, D'Arcy J.M. Cain wrote:\n> \n>> Whenever I see one of those I simply blackhole the server sending them.\n> \n> Ah, the ever popular vigilante spam method. What if the message is \n> coming from, say, gmail.com, and it's getting routed so that you're not \n> sure which account is originating it? Do you blackhole everybody on \n> *that* server just because there's one idiot?\n> \n> This is the same problem on a smaller scale. It's not clear which \n> account is reponsible, and I believe I saw that there are other people \n> using the same ISP who also subscribe to the list. That's why Marc is \n> testing who the guilty party is rather than unsubscribing everyone there.\n\nyes, blackholing is bad as well as accepting everything and then sending\nout errors. Unfortunaly, email resembles the ideas of the decade when it\nwas invented (freedom of speach over regulating) so security is only\navailable as ad on. I wish however everybody would go by cryptography,\nmeaning in our case the sender signs and the list checks (1) and also\nthe list signs (2) when sending out, which makes it easy to check for\nthe receiver if to accept the mail or decline in band...\n\nCheers\nTino\n\nPS: happy 1st of may :-)", "msg_date": "Thu, 01 May 2008 09:35:31 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please ignore ..." }, { "msg_contents": "On Thu, 1 May 2008 02:55:10 -0400 (EDT)\nGreg Smith <[email protected]> wrote:\n> On Thu, 1 May 2008, D'Arcy J.M. Cain wrote:\n> \n> > Whenever I see one of those I simply blackhole the server sending them.\n> \n> Ah, the ever popular vigilante spam method. What if the message is coming \n> from, say, gmail.com, and it's getting routed so that you're not sure \n> which account is originating it? Do you blackhole everybody on *that* \n> server just because there's one idiot?\n\nWell, I actually do block gmail groups on another list that is\ngatewayed to a newsgroup due to the volume of spam that originates from\nthere but in this case my experience has been that it is done by a\nservice. For example, I reject all email from spamarrest.com. There\nis nothing I want to see from them.\n\n> This is the same problem on a smaller scale. It's not clear which account \n> is reponsible, and I believe I saw that there are other people using the \n> same ISP who also subscribe to the list. 
That's why Marc is testing who \n> the guilty party is rather than unsubscribing everyone there.\n\nOf course. If someone is running it on a server independent of the ISP\nthat's a different story. However, it is pretty hard to run that code\non most ISPs without the cooperation of the ISP. That's why there are\ncompanies like SpamArrest. People who run their own server and are in\na position to do this themself tend to also be smart enough to\nunderstand why it is a bad idea.\n\nOn the other hand, this type of thing is no different than spam and in\nthis day and age every ISP, no matter how big, has a responsibility to\ndeal with spammers on their own system and if they don't they deserve\nto be blocked just like any other spam-friendly system.\n\nThe fact that Marc has to run this test and does not immediately know\nwho the guilty party is suggests to me that they are using a service. I\nnever saw the offending message myself so perhaps it is coming from\nSpamArrest and I just rejected the email on my SMTP server.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 1 May 2008 08:35:37 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please ignore ..." }, { "msg_contents": "Marc G. Fournier wrote:\n\n> Someone on this list has one of those 'confirm your email' filters on their \n> mailbox, which is bouncing back messages ... this is an attempt to try and \n> narrow down the address that is causing this ...\n\nDid you find out?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 2 May 2008 10:29:34 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please ignore ..." }, { "msg_contents": "Marc G. Fournier wrote:\n\n> Someone on this list has one of those 'confirm your email' filters on their \n> mailbox, which is bouncing back messages ... this is an attempt to try and \n> narrow down the address that is causing this ...\n\nSo it seems you're still unable to determine the problematic address?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 16 May 2008 17:03:15 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please ignore ..." } ]
[ { "msg_contents": "Hi,\n\nCan anyone who have started using 8.3.1 list out the pros and cons.\n\nThanx in advance\n\n~ Gauri\n\nHi,Can anyone who have started using 8.3.1 list out the pros and cons.Thanx in advance~ Gauri", "msg_date": "Fri, 2 May 2008 13:01:28 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Pros and Cons of 8.3.1" }, { "msg_contents": "> Can anyone who have started using 8.3.1 list out the pros and cons.\n\nI upgraded to 8.3.1 yesterday from 8.3.0. I've used 8.3.0 since it was\nreleased and it's working fine. I upgraded from 7.4 (dump/restore) and\nit was working out of the box. We have somewhat simple sql-queries so\nthere was no need to change/alter these. The largest table has approx.\n85 mill. records (image-table).\n\nOne thing I had newer seen before was that duplicate rows was inserted\ninto our order-table but I don't know whether this is due to changes\nin the web-app or 8.3.0. Now that I upgraded to 8.3.1 I will wait a\nfew weeks and see if I get the same error before I alter the column\nand add a unique contraint.\n\nSo from a 7.4-perspective and fairly simple queries I don't see any issues.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Fri, 2 May 2008 09:51:09 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pros and Cons of 8.3.1" }, { "msg_contents": "On Fri, 02 May 2008 12:11:34 -0500\nJustin <[email protected]> wrote:\n\n> \n> don't know for sure if it is windows to linux but we moved to 8.2\n> that was install on windows and moved to 8.3.1 on Ubuntu using the\n> compiled version from Ubuntu\n>\n> We had minor annoying problem with implicit data conversion no longer \n> happens\n\n\nIt is 8.3.x and the change was documented in the release notes.\n\nJoshua D. Drake\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate", "msg_date": "Fri, 2 May 2008 09:30:47 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pros and Cons of 8.3.1" }, { "msg_contents": ">>> On Fri, May 2, 2008 at 2:31 AM, in message\n<[email protected]>, \"Gauri\nKanekar\"\n<[email protected]> wrote: \n \n> Can anyone who have started using 8.3.1 list out the pros and cons.\n \nThere are bugs in the 8.3.1 release which bit us when we started using\nit; however, these are fixed in the 8.3 stable branch of cvs. We are\nrunning successfully with that. These fixes will be in 8.3.2 when it is\nreleased.\n \nhttp://archives.postgresql.org/pgsql-bugs/2008-04/msg00168.php\n \nIt's generally a good idea to test with a new release before putting it\ninto production, especially a major release.\n \nSince you asked on the performance list -- we have found performance to\nbe significantly better under 8.3 than earlier releases. 
Also, the data\ntakes less space on the disk, and checkpoint disk activity spikes are\nreduced in 8.3.\n \n-Kevin\n \n\n", "msg_date": "Fri, 02 May 2008 11:35:54 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pros and Cons of 8.3.1" }, { "msg_contents": "\n\nGauri Kanekar wrote:\n>\n> Hi,\n>\n> Can anyone who have started using 8.3.1 list out the pros and cons.\n>\n> Thanx in advance\n>\n> ~ Gauri\n\ndon't know for sure if it is windows to linux but we moved to 8.2 that \nwas install on windows and moved to 8.3.1 on Ubuntu using the compiled \nversion from Ubuntu\n\nWe had minor annoying problem with implicit data conversion no longer \nhappens\n\nHad several pl/pgsql functions called each other where a programmer got \nlazy to not making sure the variable typed matched the parameter type so \nwe get an error yelling at us can't find function due to data type \nmismatch . Its was very easy to fix. \n\nThere is allot changes to how Text searches work and the indexes\n\n", "msg_date": "Fri, 02 May 2008 12:11:34 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pros and Cons of 8.3.1" } ]
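A small, invented illustration of the 8.3 implicit-cast change Justin mentions above (the function and the calls are examples of mine, not code from the thread):

    CREATE FUNCTION log_msg(msg text) RETURNS void AS $$
    BEGIN
        RAISE NOTICE '%', msg;
    END;
    $$ LANGUAGE plpgsql;

    SELECT log_msg(42);        -- 8.2 accepted this via an implicit integer-to-text cast;
                               -- 8.3 fails with "function log_msg(integer) does not exist"
    SELECT log_msg(42::text);  -- an explicit cast keeps the call working on 8.3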
[ { "msg_contents": "Attempting to resend. My first attempt was rejected with this\nexplanation:\n \nYour message to the pgsql-performance list has been denied\nfor the following reason(s):\n\nA message was previous posted with this Message-ID\nDuplicate Message-ID - <[email protected]> (Fri May 2\n13:36:52 2008)\nDuplicate Partial Message Checksum (Fri May 2 13:36:52 2008)\n \n \n>>> On Fri, May 2, 2008 at 2:31 AM, in message\n<[email protected]>, \"Gauri\nKanekar\"\n<[email protected]> wrote: \n \n> Can anyone who have started using 8.3.1 list out the pros and cons.\n \nThere are bugs in the 8.3.1 release which bit us when we started using\nit; however, these are fixed in the 8.3 stable branch of cvs. We are\nrunning successfully with that. These fixes will be in 8.3.2 when it is\nreleased.\n \nhttp://archives.postgresql.org/pgsql-bugs/2008-04/msg00168.php\n \nIt's generally a good idea to test with a new release before putting it\ninto production, especially a major release.\n \nSince you asked on the performance list -- we have found performance to\nbe significantly better under 8.3 than earlier releases. Also, the data\ntakes less space on the disk, and checkpoint disk activity spikes are\nreduced in 8.3.\n \n-Kevin\n\n", "msg_date": "Fri, 02 May 2008 11:46:19 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pros and Cons of 8.3.1" } ]
[ { "msg_contents": "Attempting to resend. My first attempt was rejected with this\nexplanation:\n \nYour message to the pgsql-performance list has been denied\nfor the following reason(s):\n\nA message was previous posted with this Message-ID\nDuplicate Message-ID - <[email protected]> (Fri May 2\n13:36:52 2008)\nDuplicate Partial Message Checksum (Fri May 2 13:36:52 2008)\n \n \n>>> \"Gauri Kanekar\" wrote: \n \n> Can anyone who have started using 8.3.1 list out the pros and cons.\n \nThere are bugs in the 8.3.1 release which bit us when we started using\nit; however, these are fixed in the 8.3 stable branch of cvs. We are\nrunning successfully with that. These fixes will be in 8.3.2 when it is\nreleased.\n \nhttp://archives.postgresql.org/pgsql-bugs/2008-04/msg00168.php\n \nIt's generally a good idea to test with a new release before putting it\ninto production, especially a major release.\n \nSince you asked on the performance list -- we have found performance to\nbe significantly better under 8.3 than earlier releases. Also, the data\ntakes less space on the disk, and checkpoint disk activity spikes are\nreduced in 8.3.\n \n-Kevin\n\n\n", "msg_date": "Fri, 02 May 2008 11:54:14 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pros and Cons of 8.3.1" } ]
[ { "msg_contents": "Greetings -- I have an UPDATE query updating a 100 million row table, \nand allocate enough memory via shared_buffers=1500MB. However, I see \ntwo processes in top, the UPDATE process eating about 850 MB and the \nwriter process eating about 750 MB. The box starts paging. Why is \nthere the writer taking almost as much space as the UPDATE, and how \ncan I shrink it?\n\nCheers,\nAlexy\n", "msg_date": "Fri, 2 May 2008 12:24:35 -0700", "msg_from": "Alexy Khrabrov <[email protected]>", "msg_from_op": true, "msg_subject": "two memory-consuming postgres processes" }, { "msg_contents": "On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov <[email protected]> wrote:\n> Greetings -- I have an UPDATE query updating a 100 million row table, and\n> allocate enough memory via shared_buffers=1500MB. However, I see two\n> processes in top, the UPDATE process eating about 850 MB and the writer\n> process eating about 750 MB. The box starts paging. Why is there the\n> writer taking almost as much space as the UPDATE, and how can I shrink it?\n\nShared_buffers is NOT the main memory pool for all operations in\npgsql, it is simply the buffer pool used to hold data being operated\non.\n\nThings like sorts etc. use other memory and can exhaust your machine.\nHowever, I'd like to see the output of vmstat 1 or top while this is\nhappening.\n\nHow much memory does this machine have?\n", "msg_date": "Fri, 2 May 2008 13:30:38 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "\nOn May 2, 2008, at 12:30 PM, Scott Marlowe wrote:\n\n> On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov \n> <[email protected]> wrote:\n>> Greetings -- I have an UPDATE query updating a 100 million row \n>> table, and\n>> allocate enough memory via shared_buffers=1500MB. However, I see two\n>> processes in top, the UPDATE process eating about 850 MB and the \n>> writer\n>> process eating about 750 MB. The box starts paging. Why is there \n>> the\n>> writer taking almost as much space as the UPDATE, and how can I \n>> shrink it?\n>\n> Shared_buffers is NOT the main memory pool for all operations in\n> pgsql, it is simply the buffer pool used to hold data being operated\n> on.\n>\n> Things like sorts etc. use other memory and can exhaust your machine.\n> However, I'd like to see the output of vmstat 1 or top while this is\n> happening.\n>\n> How much memory does this machine have?\n\nIt's a 2GB RAM MacBook. Here's the top for postgres\n\nProcesses: 117 total, 2 running, 6 stuck, 109 sleeping... 
459 \nthreads \n 12 \n:34:27\nLoad Avg: 0.27, 0.24, 0.32 CPU usage: 8.41% user, 11.06% sys, \n80.53% idle\nSharedLibs: num = 15, resident = 40M code, 2172K data, 3172K \nlinkedit.\nMemRegions: num = 20719, resident = 265M + 12M private, 1054M shared.\nPhysMem: 354M wired, 1117M active, 551M inactive, 2022M used, 19M \nfree.\nVM: 26G + 373M 1176145(160) pageins, 1446482(2) pageouts\n\n PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD \nRSIZE VSIZE\n51775 postgres 6.8% 2:40.16 1 9 39 1504K 896M 859M+ \n1562M\n51767 postgres 0.0% 0:39.74 1 8 28 752K 896M 752M \n1560M\n\nthe first is the UPDATE, the second is the writer.\n\nThe query is very simple,\n\nnetflix=> create index movs_mid_idx on movs(mid);\nCREATE INDEX\nnetflix=> update ratings set offset1=avg-rating from movs where \nmid=movie_id;\n\nwhere the table ratings has about 100 million rows, movs has about \n20,000.\n\nI randomly increased values in postgresql.conf to\n\nshared_buffers = 1500MB\nmax_fsm_pages = 2000000\nmax_fsm_relations = 10000\n\nShould I set the background writer parameters somehow to decrease the \nRAM consumed by the writer?\n\nCheers,\nAlexy\n", "msg_date": "Fri, 2 May 2008 12:38:37 -0700", "msg_from": "Alexy Khrabrov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "On Fri, May 2, 2008 at 1:38 PM, Alexy Khrabrov <[email protected]> wrote:\n>\n>\n> On May 2, 2008, at 12:30 PM, Scott Marlowe wrote:\n>\n>\n> > On Fri, May 2, 2008 at 1:24 PM, Alexy Khrabrov <[email protected]>\n> wrote:\n> >\n> > > Greetings -- I have an UPDATE query updating a 100 million row table,\n> and\n> > > allocate enough memory via shared_buffers=1500MB. However, I see two\n> > > processes in top, the UPDATE process eating about 850 MB and the writer\n> > > process eating about 750 MB. The box starts paging. Why is there the\n> > > writer taking almost as much space as the UPDATE, and how can I shrink\n> it?\n> > >\n> >\n> > Shared_buffers is NOT the main memory pool for all operations in\n> > pgsql, it is simply the buffer pool used to hold data being operated\n> > on.\n> >\n> > Things like sorts etc. use other memory and can exhaust your machine.\n> > However, I'd like to see the output of vmstat 1 or top while this is\n> > happening.\n> >\n> > How much memory does this machine have?\n> >\n>\n> It's a 2GB RAM MacBook. Here's the top for postgres\n>\n> Processes: 117 total, 2 running, 6 stuck, 109 sleeping... 459 threads\n> 12:34:27\n> Load Avg: 0.27, 0.24, 0.32 CPU usage: 8.41% user, 11.06% sys, 80.53%\n> idle\n> SharedLibs: num = 15, resident = 40M code, 2172K data, 3172K linkedit.\n> MemRegions: num = 20719, resident = 265M + 12M private, 1054M shared.\n> PhysMem: 354M wired, 1117M active, 551M inactive, 2022M used, 19M free.\n> VM: 26G + 373M 1176145(160) pageins, 1446482(2) pageouts\n>\n> PID COMMAND %CPU TIME #TH #PRTS #MREGS RPRVT RSHRD RSIZE VSIZE\n> 51775 postgres 6.8% 2:40.16 1 9 39 1504K 896M 859M+\n> 1562M\n> 51767 postgres 0.0% 0:39.74 1 8 28 752K 896M 752M\n> 1560M\n\nSOME snipping here.\n\n> I randomly increased values in postgresql.conf to\n>\n> shared_buffers = 1500MB\n> max_fsm_pages = 2000000\n> max_fsm_relations = 10000\n\nOn a laptop with 2G ram, 1.5Gig shared buffers is probably WAY too high.\n\n> Should I set the background writer parameters somehow to decrease the RAM\n> consumed by the writer?\n\nNo, the background writer reads through the shared buffers for dirty\nones and writes them out. 
so, it's not really using MORE memory, it's\njust showing that it's attached to the ginormous shared_buffer pool\nyou've set up.\n\nLower your shared_buffers to about 512M or so and see how it works.\n", "msg_date": "Fri, 2 May 2008 13:53:56 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "\"Scott Marlowe\" <[email protected]> writes:\n> On Fri, May 2, 2008 at 1:38 PM, Alexy Khrabrov <[email protected]> wrote:\n>> I randomly increased values in postgresql.conf to\n>> \n>> shared_buffers = 1500MB\n>> max_fsm_pages = 2000000\n>> max_fsm_relations = 10000\n\n> On a laptop with 2G ram, 1.5Gig shared buffers is probably WAY too high.\n\ns/probably/definitely/, especially seeing that OS X is a bit of a memory\nhog itself. I don't think you should figure on more than 1GB being\nusefully available to Postgres, and you can't give all or even most of\nthat space to shared_buffers.\n\n> No, the background writer reads through the shared buffers for dirty\n> ones and writes them out. so, it's not really using MORE memory, it's\n> just showing that it's attached to the ginormous shared_buffer pool\n> you've set up.\n\nYeah. You have to be aware of top's quirky behavior for shared memory:\non most platforms it will count the shared memory against *each*\nprocess, but only as much of the shared memory as that process has\ntouched so far. So over time the reported size of any PG process will\ntend to climb to something over the shared memory size, but most of that\nisn't \"real\".\n\nI haven't directly checked whether OS X's top behaves that way, but\ngiven your report I think it does.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 May 2008 16:13:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes " }, { "msg_contents": "On Fri, 2 May 2008, Alexy Khrabrov wrote:\n\n> I have an UPDATE query updating a 100 million row table, and \n> allocate enough memory via shared_buffers=1500MB.\n\nIn addition to reducing that as you've been advised, you'll probably need \nto increase checkpoint_segments significantly from the default (3) in \norder to get good performance on an update that large. Something like 30 \nwould be a reasonable starting point.\n\nI'd suggest doing those two things, seeing how things go, and reporting \nback if you still think performance is unacceptable. We'd need to know \nyour PostgreSQL version in order to really target future suggestions.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 2 May 2008 16:22:29 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "\nOn May 2, 2008, at 1:13 PM, Tom Lane wrote:\n> I don't think you should figure on more than 1GB being\n> usefully available to Postgres, and you can't give all or even most of\n> that space to shared_buffers.\n\n\nSo how should I divide say a 512 MB between shared_buffers and, um, \nwhat else? (new to pg tuning :)\n\nI naively thought that if I have a 100,000,000 row table, of the form \n(integer,integer,smallint,date), and add a real coumn to it, it will \nscroll through the memory reasonably fast. Yet when I had \nshared_buffers=128 MB, it was hanging there 8 hours before I killed \nit, and now with 1500MB is paging again for several hours with no end \nin sight. 
Why can't it just add a column to a row at a time and be \ndone with it soon enough? :) It takes inordinately long compared to a \nFORTRAN or even python program and there's no index usage for this \ntable, a sequential scan, why all the paging?\n\nCheers,\nAlexy\n", "msg_date": "Fri, 2 May 2008 13:26:47 -0700", "msg_from": "Alexy Khrabrov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two memory-consuming postgres processes " }, { "msg_contents": "On May 2, 2008, at 1:22 PM, Greg Smith wrote:\n\n> On Fri, 2 May 2008, Alexy Khrabrov wrote:\n>\n>> I have an UPDATE query updating a 100 million row table, and \n>> allocate enough memory via shared_buffers=1500MB.\n>\n> In addition to reducing that as you've been advised, you'll probably \n> need to increase checkpoint_segments significantly from the default \n> (3) in order to get good performance on an update that large. \n> Something like 30 would be a reasonable starting point.\n>\n> I'd suggest doing those two things, seeing how things go, and \n> reporting back if you still think performance is unacceptable. We'd \n> need to know your PostgreSQL version in order to really target \n> future suggestions.\n\nPostgreSQL 8.3.1, compiled from source on Mac OSX 10.5.2 (Leopard). \nSaw the checkpoint_segments warning every ~20sec and increased it to \n100 already. Will see what 512 MB buys me, but 128 MB was paging \nmiserably.\n\nCheers,\nAlexy\n", "msg_date": "Fri, 2 May 2008 13:28:42 -0700", "msg_from": "Alexy Khrabrov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "Interestingly, after shutting down the server with \nshared_buffer=1500MB in the middle of that UPDATE, I see this:\n\nbash-3.2$ /opt/bin/pg_ctl -D /data/pgsql/ stop\nwaiting for server to shut down....LOG: received smart shutdown request\nLOG: autovacuum launcher shutting down\n........................................................... failed\npg_ctl: server does not shut down\nbash-3.2$ /opt/bin/pg_ctl -D /data/pgsql/ stop\nwaiting for server to shut \ndown..........................................................LOG: \nshutting down\nLOG: database system is shut down\n done\nserver stopped\n\n-- had to do it twice, the box was paging for a minute or two.\n\nShould I do something about the autovacuum e.g. to turn it off \ncompletely? I thought it's not on as all of it was still commented \nout in postgresql.conf as shipped, only tweaked a few numbers as \nreported before.\n\nCheers,\nAlexy\n\n", "msg_date": "Fri, 2 May 2008 13:35:32 -0700", "msg_from": "Alexy Khrabrov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov <[email protected]> wrote:\n>\n> So how should I divide say a 512 MB between shared_buffers and, um, what\n> else? (new to pg tuning :)\n\nDon't worry so much about the rest of the settings. Maybe increase\nsort_mem (aka work_mem) to something like 16M or so. that's about it.\n\n> I naively thought that if I have a 100,000,000 row table, of the form\n> (integer,integer,smallint,date), and add a real coumn to it, it will scroll\n> through the memory reasonably fast.\n\nThis is a database. It makes changes on disk in such a way that they\nwon't be lost should power be cut off. 
If you're just gonna be batch\nprocessing data that it's ok to lose halfway through, then python /\nperl / php etc might be a better choice.\n\n> Yet when I had shared_buffers=128 MB,\n> it was hanging there 8 hours before I killed it, and now with 1500MB is\n> paging again for several hours with no end in sight.\n\nYou went from kinda small to WAY too big. 512M should be a happy medium.\n\n> Why can't it just add\n> a column to a row at a time and be done with it soon enough? :)\n\nAdding a column is instantaneous. populating it is not.\n\n> It takes\n> inordinately long compared to a FORTRAN or even python program and there's\n> no index usage for this table, a sequential scan, why all the paging?\n\nAgain, a database protects your data from getting scrambled should the\nprogram updating it quit halfway through etc...\n\nHave you been vacuuming between these update attempts? Each one has\ncreated millions of dead rows and bloated your data store. vacuum\nfull / cluster / reindex may be needed.\n", "msg_date": "Fri, 2 May 2008 14:40:51 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "\nOn May 2, 2008, at 1:40 PM, Scott Marlowe wrote:\n> Again, a database protects your data from getting scrambled should the\n> program updating it quit halfway through etc...\n\nRight -- but this is a data mining work, I add a derived column to a \nrow, and it's computed from that very row and a small second table \nwhich should fit in RAM.\n\n> Have you been vacuuming between these update attempts? Each one has\n> created millions of dead rows and bloated your data store. vacuum\n> full / cluster / reindex may be needed.\n\nI've read postgresql.conf better and see autovacuum = on is commented \nout, so it's on. That explains why shutting down was taking so long \nto shut autovacuum down too.\n\nBasically, the derived data is not critical at all, -- can I turn (1) \noff transactional behavior for an UPDATE, (2) should I care about \nvacuuming being done on the fly when saving RAM, or need I defer it/ \nmanage it manually?\n\nI wonder what MySQL would do here on MyISAM tables without \ntransactional behavior -- perhaps this is the case more suitable for \nthem?\n\nCheers,\nAlexy\n", "msg_date": "Fri, 2 May 2008 13:51:44 -0700", "msg_from": "Alexy Khrabrov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov <[email protected]> wrote:\n> I naively thought that if I have a 100,000,000 row table, of the form\n> (integer,integer,smallint,date), and add a real coumn to it, it will scroll\n> through the memory reasonably fast.\n\nIn Postgres, an update is the same as a delete/insert. That means that changing the data in one column rewrites ALL of the columns for that row, and you end up with a table that's 50% dead space, which you then have to vacuum.\n\nSometimes if you have a \"volatile\" column that goes with several \"static\" columns, you're far better off to create a second table for the volatile data, duplicating the primary key in both tables. 
In your case, it would mean the difference between 10^8 inserts of (int, float), very fast, compared to what you're doing now, which is 10^8 insert and 10^8 deletes of (int, int, smallint, date, float), followed by a big vacuum/analyze (also slow).\n\nThe down side of this design is that later on, it requires a join to fetch all the data for each key.\n\nYou do have a primary key on your data, right? Or some sort of index?\n\nCraig\n", "msg_date": "Fri, 02 May 2008 14:02:24 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "\nOn May 2, 2008, at 2:02 PM, Craig James wrote:\n\n> On Fri, May 2, 2008 at 2:26 PM, Alexy Khrabrov \n> <[email protected]> wrote:\n>> I naively thought that if I have a 100,000,000 row table, of the form\n>> (integer,integer,smallint,date), and add a real coumn to it, it \n>> will scroll\n>> through the memory reasonably fast.\n>\n> In Postgres, an update is the same as a delete/insert. That means \n> that changing the data in one column rewrites ALL of the columns for \n> that row, and you end up with a table that's 50% dead space, which \n> you then have to vacuum.\n>\n> Sometimes if you have a \"volatile\" column that goes with several \n> \"static\" columns, you're far better off to create a second table for \n> the volatile data, duplicating the primary key in both tables. In \n> your case, it would mean the difference between 10^8 inserts of \n> (int, float), very fast, compared to what you're doing now, which is \n> 10^8 insert and 10^8 deletes of (int, int, smallint, date, float), \n> followed by a big vacuum/analyze (also slow).\n>\n> The down side of this design is that later on, it requires a join to \n> fetch all the data for each key.\n>\n> You do have a primary key on your data, right? Or some sort of index?\n\nI created several indices for the primary table, yes. Sure I can do a \ntable for a volatile column, but then I'll have to create a new such \ntable for each derived column -- that's why I tried to add a column to \nthe existing table. Yet seeing this is really slow, and I need to to \nmany derived analyses like this -- which are later scanned in other \ncomputations, so should persist -- I indeed see no other way but to \nprocreate derived tables with the same key, one column per each...\n\nCheers,\nAlexy\n", "msg_date": "Fri, 2 May 2008 14:09:52 -0700", "msg_from": "Alexy Khrabrov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "On Fri, 2 May 2008, Alexy Khrabrov wrote:\n\n> I created several indices for the primary table, yes.\n\nThat may be part of your problem. All of the indexes all are being \nupdated along with the main data in the row each time you touch a record. \nThere's some optimization there in 8.3 but it doesn't make index overhead \ngo away completely. 
As mentioned already, the optimal solution to \nproblems in this area is to adjust table normalization as much as feasible \nto limit what you're updating.\n\n> Basically, the derived data is not critical at all, -- can I turn (1) \n> off transactional behavior for an UPDATE,\n\nWhat you can do is defer transaction commits to only happen periodically \nrather than all the time by turning off syncronous_commit and increasing \nwal_writer_delay; see \nhttp://www.postgresql.com.cn/docs/8.3/static/wal-async-commit.html\n\n> (2) should I care about vacuuming being done on the fly when saving RAM, \n> or need I defer it/manage it manually?\n\nIt's hard to speculate from here about what optimal vacuum behavior will \nbe. You might find it more efficient to turn autovacuum off when doing \nthese large updates. The flip side is that you'll be guaranteed to end up \nwith more dead rows in the table and that has its own impact later.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 2 May 2008 17:23:45 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "On May 2, 2008, at 2:23 PM, Greg Smith wrote:\n\n> On Fri, 2 May 2008, Alexy Khrabrov wrote:\n>\n>> I created several indices for the primary table, yes.\n>\n> That may be part of your problem. All of the indexes all are being \n> updated along with the main data in the row each time you touch a \n> record. There's some optimization there in 8.3 but it doesn't make \n> index overhead go away completely. As mentioned already, the \n> optimal solution to problems in this area is to adjust table \n> normalization as much as feasible to limit what you're updating.\n\nWas wondering about it, too -- intuitively I 'd like to say, \"stop all \nindexing\" until the column is added, then say \"reindex\", is it \ndoable? Or would it take longer anyways? SInce I don't index on that \nnew column, I'd assume my old indices would do -- do they change \nbecause of rows deletions/insertions, with the effective new rows \naddresses?\n\nCheers,\nAlexy\n", "msg_date": "Fri, 2 May 2008 14:30:17 -0700", "msg_from": "Alexy Khrabrov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "\n> I created several indices for the primary table, yes. Sure I can do a \n> table for a volatile column, but then I'll have to create a new such \n> table for each derived column -- that's why I tried to add a column to \n> the existing table. Yet seeing this is really slow, and I need to to \n> many derived analyses like this -- which are later scanned in other \n> computations, so should persist -- I indeed see no other way but to \n> procreate derived tables with the same key, one column per each...\n\n\tOK, so in that case, if you could do all of your derived column \ncalculations in one query like this :\n\nCREATE TABLE derived AS SELECT ... FROM ... (perform all your derived \ncalculations here)\n\n\tor :\n\nBEGIN;\t<-- this is important to avoid writing xlog\nCREATE TABLE derived AS ...\nINSERT INTO derived SELECT ... FROM ... (perform all your derived \ncalculations here)\nCOMMIT;\n\n\tBasically, updating the entire table several times to add a few simple \ncolumns is a bad idea. If you can compute all the data you need in one \nquery, like above, it will be much faster. 
Especially if you join one \nlarge table to several smaller ones, and as long as the huge data set \ndoesn't need to be sorted (check the query plan using EXPLAIN). Try to do \nas much as possible in one query to scan the large dataset only once.\n\n\tNote that the above will be faster than updating the entire table since \nit needs to write much less data : it doesn't need to delete the old rows, \nand it doesn't need to write the transaction log, since if the transaction \nrolls back, the table never existed anyway. Also since your newly created \ntable doesn't have any indexes, they won't need to be updated.\n\n\tIf you really need to update an entire table multiple times, you will \nneed to :\n\n\t- Use hardware that can handle disk writes at a decent speed (that isn't \na characteristic of a laptop drive)\n\t- use MyIsam, yes (but if you need to make complex queries on the data \nafterwards, it could suck).\n\n", "msg_date": "Fri, 02 May 2008 23:30:44 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": ">>> Alexy Khrabrov wrote: \n \n> SInce I don't index on that \n> new column, I'd assume my old indices would do -- do they change \n> because of rows deletions/insertions, with the effective new rows \n> addresses?\n \nEvery update is a delete and insert. The new version of the row must\nbe added to the index. Every access through the index then has to\nlook at both versions of the row to see which one is \"current\" for its\ntransaction. Vacuum will make the space used by the dead rows\navailable for reuse, as well as removing the old index entries and\nmaking that space available for new index entries.\n \n-Kevin\n \n\n", "msg_date": "Fri, 02 May 2008 16:43:58 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "On May 2, 2008, at 2:43 PM, Kevin Grittner wrote:\n\n>>>> Alexy Khrabrov wrote:\n>\n>> SInce I don't index on that\n>> new column, I'd assume my old indices would do -- do they change\n>> because of rows deletions/insertions, with the effective new rows\n>> addresses?\n>\n> Every update is a delete and insert. The new version of the row must\n> be added to the index. Every access through the index then has to\n> look at both versions of the row to see which one is \"current\" for its\n> transaction. Vacuum will make the space used by the dead rows\n> available for reuse, as well as removing the old index entries and\n> making that space available for new index entries.\n\nOK. I've cancelled all previous attempts at UPDATE and will now \ncreate some derived tables. See no changes in the previous huge table \n-- the added column was completely empty. Dropped it. Should I \nvacuum just in case, or am I guaranteed not to have any extra rows \nsince no UPDATE actually went through and none are showing?\n\nCheers,\nAlexy\n", "msg_date": "Fri, 2 May 2008 15:03:12 -0700", "msg_from": "Alexy Khrabrov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": ">>> Alexy Khrabrov wrote: \n \n> OK. I've cancelled all previous attempts at UPDATE and will now \n> create some derived tables. See no changes in the previous huge\ntable \n> -- the added column was completely empty. Dropped it. 
Should I \n> vacuum just in case, or am I guaranteed not to have any extra rows \n> since no UPDATE actually went through and none are showing?\n \nThe canceled attempts would have left dead space. If you have\nautovacuum running, it probably made the space available for reuse,\nbut depending on exactly how you got to where you are, you may have\nbloat. Personally, I would do a VACUUM ANALYZE VERBOSE and capture\nthe output. If bloat is too bad, you may want to CLUSTER the table\n(if you have the free disk space for a temporary extra copy of the\ntable) or VACUUM FULL followed by REINDEX (if you don't have that much\nfree disk space).\n \nLet us know if you need help interpreting the VERBOSE output.\n \n-Kevin\n \n\n", "msg_date": "Fri, 02 May 2008 17:29:00 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "On Fri, 2 May 2008, PFC wrote:\n> CREATE TABLE derived AS SELECT ... FROM ... (perform all your derived \n> calculations here)\n\nGiven what you have said (that you really want all the data in one table) \nit may be best to proceed like this:\n\nFirst, take your original table, create an index on the primary key field, \nand CLUSTER on that index.\n\nCREATE TABLE derived AS SELECT ... FROM ... ORDER BY primary key field\nCREATE INDEX derived_pk ON derived(primary key field)\n\nRepeat those last two commands ad nauseum.\n\nThen, when you want a final full table, run:\n\nCREATE TABLE new_original AS SELECT * FROM original, derived, derived2,\n ... WHERE original.pk = derived.pk ...\n\nThat should be a merge join, which should run really quickly, and you can \nthen create all the indexes you want on it.\n\nMatthew\n\n-- \nWhen I first started working with sendmail, I was convinced that the cf\nfile had been created by someone bashing their head on the keyboard. After\na week, I realised this was, indeed, almost certainly the case.\n -- Unknown\n", "msg_date": "Sat, 3 May 2008 10:25:28 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" }, { "msg_contents": "On Fri, May 2, 2008 at 4:51 PM, Alexy Khrabrov <[email protected]> wrote:\n>\n> On May 2, 2008, at 1:40 PM, Scott Marlowe wrote:\n>\n> > Again, a database protects your data from getting scrambled should the\n> > program updating it quit halfway through etc...\n> >\n>\n> Right -- but this is a data mining work, I add a derived column to a row,\n> and it's computed from that very row and a small second table which should\n> fit in RAM.\n\nFull table update of a single field is one of the worst possible\noperations with PostgreSQL. mysql is better at this because lack of\nproper transactions and full table locking allow the rows to be\n(mostly) updated in place. Ideally, you should be leveraging the\npower of PostgreSQL so that you can avoid the full table update if\npossible. Maybe if you step back and think about the problem you may\nbe able to come up with a solution that is more efficient.\n\nAlso, if you must do it this way, (as others suggest), do CREATE TABLE\nnew_table AS SELECT...., then create keys, and drop the old table when\ndone. This is much faster than update.\n\nmerlin\n", "msg_date": "Sat, 3 May 2008 11:07:11 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two memory-consuming postgres processes" } ]
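Pulling the advice in this thread together, a minimal sketch of the CREATE TABLE AS approach looks like the following. All table and column names (ratings, weights, item_id, derived_score) are hypothetical, since the real schema was never posted; the point is the shape of the statements, not the particular derivation.

BEGIN;

-- Write the derived data once, into a fresh table with no indexes to maintain.
CREATE TABLE ratings_derived AS
SELECT r.item_id,
       r.raw_score * w.weight AS derived_score   -- stand-in for the real computation
FROM   ratings r
JOIN   weights w ON w.item_id = r.item_id        -- the small lookup table that fits in RAM
ORDER  BY r.item_id;                             -- presorted so later joins can merge

CREATE INDEX ratings_derived_item_idx ON ratings_derived (item_id);

COMMIT;

ANALYZE ratings_derived;

-- When all the derived columns exist, fold everything back into one table in a
-- single pass, as suggested above, instead of running repeated full-table UPDATEs:
CREATE TABLE ratings_full AS
SELECT r.*, d.derived_score
FROM   ratings r
JOIN   ratings_derived d USING (item_id);

Each row is written once into an index-free table instead of being deleted and re-inserted in the wide table, and the original table never accumulates dead rows for a later VACUUM to clean up.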
[ { "msg_contents": "\nHi,\n\nI'm porting an application written with pretty portable SQL, but tested \nalmost exclusively on MySQL.\n\nI'm wondering why would this query take about 90 seconds to return 74 rows?\n\n\nSELECT INFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_NAME, \nINFORMATION_SCHEMA.TABLE_CONSTRAINTS.TABLE_NAME, \nINFORMATION_SCHEMA.TABLE_CONSTRAINTS.TABLE_NAME, \nINFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_TYPE,\n INFORMATION_SCHEMA.KEY_COLUMN_USAGE.COLUMN_NAME, \nINFORMATION_SCHEMA.KEY_COLUMN_USAGE.REFERENCED_TABLE_NAME, \nINFORMATION_SCHEMA.KEY_COLUMN_USAGE.REFERENCED_COLUMN_NAME\n FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS, \nINFORMATION_SCHEMA.KEY_COLUMN_USAGE\n WHERE \nINFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_NAME=INFORMATION_SCHEMA.KEY_COLUMN_USAGE.CONSTRAINT_NAME\n AND \nINFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_SCHEMA=INFORMATION_SCHEMA.KEY_COLUMN_USAGE.CONSTRAINT_SCHEMA\n AND \nINFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_SCHEMA='mydbname'\n AND \nINFORMATION_SCHEMA.TABLE_CONSTRAINTS.CONSTRAINT_TYPE='FOREIGN KEY'\n ORDER BY INFORMATION_SCHEMA.TABLE_CONSTRAINTS.TABLE_NAME, \nINFORMATION_SCHEMA.KEY_COLUMN_USAGE.ORDINAL_POSITION\n\nAn equivalent query with the same data set on the same server takes a \ncouple of milliseconds on MySQL 5.\nIs it something I'm doing wrong or it's just that PostgreSQL \nINFORMATION_SCHEMA is not optimized for speed? BTW, what I'm trying to \ndo is get some info on every FOREIGN KEY in a database.\n\nIt's PostgreSQL 8.2.7 on Fedora 8 64, Athlon 64 X2 3600+.\n\nErnesto\n\n", "msg_date": "Fri, 02 May 2008 18:07:58 -0300", "msg_from": "Ernesto <[email protected]>", "msg_from_op": true, "msg_subject": "Very slow INFORMATION_SCHEMA" }, { "msg_contents": "Ernesto <[email protected]> writes:\n> I'm wondering why would this query take about 90 seconds to return 74 rows?\n\nEXPLAIN ANALYZE might tell you something.\n\nIs this really the query you're running? Because these two columns\ndon't exist:\n\n> INFORMATION_SCHEMA.KEY_COLUMN_USAGE.REFERENCED_TABLE_NAME, \n> INFORMATION_SCHEMA.KEY_COLUMN_USAGE.REFERENCED_COLUMN_NAME\n\nLeaving those out, I get sub-second runtimes for 70-odd foreign key\nconstraints, on much slower hardware than I think you are using.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 May 2008 18:20:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow INFORMATION_SCHEMA " }, { "msg_contents": "Tom Lane schrieb:\n> Ernesto <[email protected]> writes:\n> \n>> I'm wondering why would this query take about 90 seconds to return 74 rows?\n>> \n>\n> EXPLAIN ANALYZE might tell you something.\n>\n> Is this really the query you're running? 
Because these two columns\n> don't exist:\n>\n> \n>> INFORMATION_SCHEMA.KEY_COLUMN_USAGE.REFERENCED_TABLE_NAME, \n>> INFORMATION_SCHEMA.KEY_COLUMN_USAGE.REFERENCED_COLUMN_NAME\n>> \n>\n> Leaving those out, I get sub-second runtimes for 70-odd foreign key\n> constraints, on much slower hardware than I think you are using.\n> \nI can confirm this for a quite larger result set (4020 rows) for a DB \nwith 410 tables and a lot of foreign key constraints.\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------------------------\n Sort (cost=1062.00..1062.00 rows=1 width=192) (actual \ntime=30341.257..30343.195 rows=4020 loops=1)\n Sort Key: table_constraints.table_name, \n((ss.x).n)::information_schema.cardinal_number\n -> Nested Loop (cost=889.33..1061.99 rows=1 width=192) (actual \ntime=308.004..30316.302 rows=4020 loops=1)\n -> Nested Loop (cost=889.33..1057.70 rows=1 width=132) \n(actual time=307.984..30271.700 rows=4020 loops=1)\n Join Filter: ((table_constraints.constraint_name)::text = \n((ss.conname)::information_schema.sql_identifier)::text)\n -> Subquery Scan table_constraints (cost=887.99..926.75 \nrows=1 width=128) (actual time=278.247..293.392 rows=554 loops=1)\n Filter: (((constraint_schema)::text = \n'public'::text) AND ((constraint_type)::text = 'FOREIGN KEY'::text))\n -> Unique (cost=887.99..912.21 rows=969 \nwidth=259) (actual time=276.915..288.848 rows=4842 loops=1)\n -> Sort (cost=887.99..890.41 rows=969 \nwidth=259) (actual time=276.911..279.536 rows=4842 loops=1)\n Sort Key: constraint_catalog, \nconstraint_schema, constraint_name, table_catalog, table_schema, \ntable_name, constraint_type, is_deferr\nable, initially_deferred\n -> Append (cost=118.46..839.92 \nrows=969 width=259) (actual time=1.971..48.601 rows=4842 loops=1)\n -> Subquery Scan \"*SELECT* 1\" \n(cost=118.46..238.72 rows=224 width=259) (actual time=1.970..14.931 \nrows=1722 loops=1)\n -> Hash Join \n(cost=118.46..236.48 rows=224 width=259) (actual time=1.968..12.556 \nrows=1722 loops=1)\n Hash Cond: \n(c.connamespace = nc.oid)\n -> Hash Join \n(cost=116.97..227.42 rows=224 width=199) (actual time=1.902..8.206 \nrows=1722 loops=1)\n Hash Cond: \n(r.relnamespace = nr.oid)\n -> Hash Join \n(cost=115.50..222.49 rows=329 width=139) (actual time=1.839..5.760 \nrows=1722 loops=1)\n Hash \nCond: (c.conrelid = r.oid)\n -> Seq \nScan on pg_constraint c (cost=0.00..97.23 rows=1723 width=75) (actual \ntime=0.004..1.195 rows=1\n723 loops=1)\n -> Hash \n(cost=110.28..110.28 rows=418 width=72) (actual time=1.823..1.823 \nrows=458 loops=1)\n -> \nSeq Scan on pg_class r (cost=0.00..110.28 rows=418 width=72) (actual \ntime=0.012..1.437 rows=\n458 loops=1)\n \nFilter: ((relkind = 'r'::\"char\") AND (pg_has_role(relowner, \n'USAGE'::text) OR has_table_pri\nvilege(oid, 'INSERT'::text) OR has_table_privilege(oid, 'UPDATE'::text) \nOR has_table_privilege(oid, 'DELETE'::text) OR has_table_privilege(oid, \n'REFERENCES'::text) OR\n has_table_privilege(oid, 'TRIGGER'::text)))\n -> Hash \n(cost=1.27..1.27 rows=15 width=68) (actual time=0.051..0.051 rows=15 \nloops=1)\n -> Seq \nScan on pg_namespace nr (cost=0.00..1.27 rows=15 width=68) (actual \ntime=0.010..0.032 rows=15 
l\noops=1)\n \nFilter: (NOT pg_is_other_temp_schema(oid))\n -> Hash \n(cost=1.22..1.22 rows=22 width=68) (actual time=0.049..0.049 rows=22 \nloops=1)\n -> Seq Scan on \npg_namespace nc (cost=0.00..1.22 rows=22 width=68) (actual \ntime=0.008..0.022 rows=22 loops=1\n)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=125.45..601.21 rows=745 width=138) (actual time=2.621..29.380 \nrows=3120 loops=1)\n -> Hash Join \n(cost=125.45..593.76 rows=745 width=138) (actual time=2.618..24.938 \nrows=3120 loops=1)\n Hash Cond: \n(a.attrelid = r.oid)\n -> Seq Scan on \npg_attribute a (cost=0.00..419.55 rows=5551 width=6) (actual \ntime=0.009..8.111 rows=3399 loops=1)\n Filter: \n(attnotnull AND (attnum > 0) AND (NOT attisdropped))\n -> Hash \n(cost=121.78..121.78 rows=294 width=136) (actual time=2.578..2.578 \nrows=458 loops=1)\n -> Hash Join \n(cost=1.46..121.78 rows=294 width=136) (actual time=0.073..2.085 \nrows=458 loops=1)\n Hash \nCond: (r.relnamespace = nr.oid)\n -> Seq \nScan on pg_class r (cost=0.00..115.76 rows=431 width=72) (actual \ntime=0.011..1.358 rows=458 lo\nops=1)\n \nFilter: ((relkind = 'r'::\"char\") AND (pg_has_role(relowner, \n'USAGE'::text) OR has_table_privilege\n(oid, 'SELECT'::text) OR has_table_privilege(oid, 'INSERT'::text) OR \nhas_table_privilege(oid, 'UPDATE'::text) OR has_table_privilege(oid, \n'DELETE'::text) OR has_table\n_privilege(oid, 'REFERENCES'::text) OR has_table_privilege(oid, \n'TRIGGER'::text)))\n -> Hash \n(cost=1.27..1.27 rows=15 width=68) (actual time=0.051..0.051 rows=15 \nloops=1)\n\n -> \nSeq Scan on pg_namespace nr (cost=0.00..1.27 rows=15 width=68) (actual \ntime=0.010..0.033 row\ns=15 loops=1)\n \nFilter: (NOT pg_is_other_temp_schema(oid))\n -> Nested Loop (cost=1.34..130.80 rows=6 width=321) \n(actual time=0.040..52.244 rows=1845 loops=554)\n -> Nested Loop (cost=1.34..128.42 rows=8 \nwidth=261) (actual time=0.017..16.949 rows=1251 loops=554)\n Join Filter: (pg_has_role(r.relowner, \n'USAGE'::text) OR has_table_privilege(c.oid, 'SELECT'::text) OR \nhas_table_privilege(c.oid, 'INSERT'::\ntext) OR has_table_privilege(c.oid, 'UPDATE'::text) OR \nhas_table_privilege(c.oid, 'REFERENCES'::text))\n -> Hash Join (cost=1.34..109.27 rows=46 \nwidth=193) (actual time=0.009..5.149 rows=1251 loops=554)\n Hash Cond: (c.connamespace = nc.oid)\n -> Seq Scan on pg_constraint c \n(cost=0.00..103.69 rows=1008 width=133) (actual time=0.007..2.765 \nrows=1302 loops=554)\n Filter: (contype = ANY \n('{p,u,f}'::\"char\"[]))\n -> Hash (cost=1.33..1.33 rows=1 \nwidth=68) (actual time=0.022..0.022 rows=1 loops=1)\n -> Seq Scan on pg_namespace nc \n(cost=0.00..1.33 rows=1 width=68) (actual time=0.016..0.019 rows=1 loops=1)\n Filter: ('public'::text = \n((nspname)::information_schema.sql_identifier)::text)\n -> Index Scan using pg_class_oid_index on \npg_class r (cost=0.00..0.39 rows=1 width=76) (actual time=0.005..0.006 \nrows=1 loops=693054)\n Index Cond: (r.oid = c.conrelid)\n Filter: (relkind = 'r'::\"char\")\n -> Index Scan using pg_namespace_oid_index on \npg_namespace nr (cost=0.00..0.28 rows=1 width=68) (actual \ntime=0.004..0.005 rows=1 loops=693054)\n Index Cond: (nr.oid = r.relnamespace)\n Filter: (NOT pg_is_other_temp_schema(oid))\n -> Index Scan using pg_attribute_relid_attnum_index on \npg_attribute a (cost=0.00..4.27 rows=1 width=70) (actual \ntime=0.006..0.007 rows=1 loops=4020)\n Index Cond: ((ss.roid = a.attrelid) AND (a.attnum = \n(ss.x).x))\n Filter: (NOT attisdropped)\n Total runtime: 30346.174 ms\n(60 rows)\nX-AntiVirus: checked by AntiVir MailGuard (Version: 
8.0.0.18; AVE: 8.1.0.37; VDF: 7.0.3.243)\n\nThis is Postgresql 8.2.4, on a Dual-Core XEON 3.6GHz. With nested_loops \noff, I get a very fast response (330ms).\n\nRegards,\n Mario Weilguni\n", "msg_date": "Mon, 05 May 2008 13:30:28 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow INFORMATION_SCHEMA" }, { "msg_contents": "Mario Weilguni <[email protected]> writes:\n> I can confirm this for a quite larger result set (4020 rows) for a DB \n> with 410 tables and a lot of foreign key constraints.\n> ...\n> This is Postgresql 8.2.4, on a Dual-Core XEON 3.6GHz. With nested_loops \n> off, I get a very fast response (330ms).\n\nFWIW, it looks like 8.3 is significantly smarter about this example\n--- it's able to push the toplevel conditions on CONSTRAINT_SCHEMA\nand CONSTRAINT_TYPE down inside the UNION, where 8.2 fails to do so.\n\nWhich is not to say that there's not more left to do on optimizing\nthe information_schema views. In this particular case, for example,\nI wonder why the UNION in INFORMATION_SCHEMA.TABLE_CONSTRAINTS isn't a\nUNION ALL. There's probably a lot more such micro-optimizations that\ncould be done if anyone was motivated to look at it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 May 2008 20:56:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow INFORMATION_SCHEMA " } ]
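For the original goal of listing every foreign key with its table and column definition, querying the system catalogs directly sidesteps the information_schema views and their planning problems entirely (or simply upgrade to 8.3, as Tom notes). A sketch against the standard catalogs (pg_constraint, pg_class, pg_namespace); adjust the schema name as needed:

SELECT n.nspname                      AS schema_name,
       rel.relname                    AS table_name,
       con.conname                    AS constraint_name,
       frel.relname                   AS referenced_table,
       pg_get_constraintdef(con.oid)  AS definition  -- shows the column lists on both sides
FROM   pg_constraint con
JOIN   pg_class      rel  ON rel.oid  = con.conrelid
JOIN   pg_class      frel ON frel.oid = con.confrelid
JOIN   pg_namespace  n    ON n.oid    = rel.relnamespace
WHERE  con.contype = 'f'
  AND  n.nspname = 'public'
ORDER  BY rel.relname, con.conname;

This also yields the referenced table that the MySQL-only KEY_COLUMN_USAGE columns were being used for. If the queries must stay on the information_schema views under 8.2, Mario's timing above suggests a session-level workaround: SET enable_nestloop = off before running them and RESET enable_nestloop afterwards.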
[ { "msg_contents": "Rauan Maemirov wrote:\n>> I want to ask whether anyone has used the query_cache of pgPool. The problem is\n>> there are no detailed installation steps on how to configure it\n>> correctly. I tried to follow it, but I suspect that it doesn't cache my\n>> queries. So, maybe someone could advise me or give a link.\n> \n> Does nobody use it??\n\nI'd say caching is better done on a higher level -- like HTML fragments \nor whatever you generate from those queries.\n\nJust my two cents.\n\n\n-- \nBest regards,\nHannes Dorbath\n", "msg_date": "Sun, 04 May 2008 13:13:07 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgPool query cache" } ]
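If the cache really has to live inside the database rather than in the application layer, one workable substitute for pgPool's query cache on PostgreSQL of this vintage (which has no built-in materialized views) is a hand-maintained results table refreshed on a schedule. This is only a sketch with made-up names (orders, sales_summary_cache), not anything pgPool itself provides:

-- Build the cache once.
CREATE TABLE sales_summary_cache AS
SELECT product_id, count(*) AS order_count, sum(amount) AS revenue
FROM   orders
GROUP  BY product_id;

CREATE INDEX sales_summary_cache_pid_idx ON sales_summary_cache (product_id);

-- Refresh from cron (or from the application) as often as staleness allows.
-- TRUNCATE takes an exclusive lock, so readers block briefly during the refresh
-- rather than ever seeing a half-built table.
BEGIN;
TRUNCATE sales_summary_cache;
INSERT INTO sales_summary_cache
SELECT product_id, count(*), sum(amount)
FROM   orders
GROUP  BY product_id;
COMMIT;

Queries then read the small cache table instead of re-running the expensive aggregate on every request.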
[ { "msg_contents": "The subject basically says it all, I'm looking for the fastest \n(indexable) way to calculate the next birthdays relative to NOW() from a \ndataset of about 1 million users.\n\nI'm currently using a function based index, but leap year handling / \nmapping February 29 to February 28 gives me some headaches.\n\nIs there any best practice to do that in PostgreSQL?\n\n\n-- \nBest regards,\nHannes Dorbath\n", "msg_date": "Sun, 04 May 2008 14:29:58 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "Hannes Dorbath wrote:\n> The subject basically says it all, I'm looking for the fastest \n> (indexable) way to calculate the next birthdays relative to NOW() from a \n> dataset of about 1 million users.\n> \n> I'm currently using a function based index, but leap year handling / \n> mapping February 29 to February 28 gives me some headaches.\n> \n> Is there any best practice to do that in PostgreSQL?\n\npostgres=# SELECT current_date|| ' a ' || to_char(current_date, 'Day'), \n\ncurrent_date + '1 Year'::interval || ' a ' || to_char(current_date + '1 \nYear'::interval, 'Day') as next_birthday;\n ?column? | next_birthday\n------------------------+---------------------------------\n 2008-05-04 a Sunday | 2009-05-04 00:00:00 a Monday\n\n?\n\n\nSincerely,\n\nJoshua D. Drake\n\n", "msg_date": "Sun, 04 May 2008 09:11:34 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "Joshua D. Drake wrote:\n> postgres=# SELECT current_date|| ' a ' || to_char(current_date, 'Day'),\n> current_date + '1 Year'::interval || ' a ' || to_char(current_date + '1 \n> Year'::interval, 'Day') as next_birthday;\n> ?column? | next_birthday\n> ------------------------+---------------------------------\n> 2008-05-04 a Sunday | 2009-05-04 00:00:00 a Monday\n> \n> ?\n\nSorry, I think I phrased the question badly. What I'm after basically is:\n\nhttp://www.depesz.com/index.php/2007/10/26/who-has-birthday-tomorrow/\n\n\n-- \nBest regards,\nHannes Dorbath\n", "msg_date": "Sun, 04 May 2008 19:50:37 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "Hannes Dorbath írta:\n> Joshua D. Drake wrote:\n>> postgres=# SELECT current_date|| ' a ' || to_char(current_date, 'Day'),\n>> current_date + '1 Year'::interval || ' a ' || to_char(current_date + \n>> '1 Year'::interval, 'Day') as next_birthday;\n>> ?column? | next_birthday\n>> ------------------------+---------------------------------\n>> 2008-05-04 a Sunday | 2009-05-04 00:00:00 a Monday\n>>\n>> ?\n>\n> Sorry, I think I phrased the question badly. 
What I'm after basically is:\n>\n> http://www.depesz.com/index.php/2007/10/26/who-has-birthday-tomorrow/\n\nIf you define the same functional index as in the above link:\n\nCREATE OR REPLACE FUNCTION indexable_month_day(date) RETURNS TEXT as $BODY$\nSELECT to_char($1, 'MM-DD');\n$BODY$ language 'sql' IMMUTABLE STRICT;\n\ncreate table user_birthdate (\n id serial not null primary key,\n birthdate date\n);\ncreate index user_birthdate_day_idx on user_birthdate ( \nindexable_month_day(birthdate) );\n\nThen you can use this query:\n\nselect count(*) from user_birthdate where indexable_month_day(birthdate) \n > '02-28' and indexable_month_day(birthdate) <= '03-01';\n\nIn a generic and parametrized way:\n\nselect * from user_birthdate\nwhere\n indexable_month_day(birthdate) > indexable_month_day(now()::date) and\n indexable_month_day(birthdate) <= indexable_month_day((now() + '1 \ndays'::interval)::date);\n\nThis will still use the index and it will work for the poor ones\nwho have birthday every 4 years, too. Assume, it's 02-08 today, 03-01 \nthe next day.\nThe now() < X <= now() + 1 day range will find 02-29.\n\n-- \n----------------------------------\nZoltán Böszörményi\nCybertec Schönig & Schönig GmbH\nhttp://www.postgresql.at/\n\n\n", "msg_date": "Sun, 04 May 2008 21:05:21 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "Hannes Dorbath wrote:\n\n> Sorry, I think I phrased the question badly. What I'm after basically is:\n> \n> http://www.depesz.com/index.php/2007/10/26/who-has-birthday-tomorrow/\n> \n\nOK So what I came up with is - (the times are from a G4 1.25Ghz)\n\nCREATE TABLE birthdaytest\n(\n id serial PRIMARY KEY,\n birthdate date\n);\n\n\nCREATE INDEX idx_bday_month ON birthdaytest\nUSING btree(extract(month from birthdate));\n\nCREATE INDEX idx_bday_day ON birthdaytest\nUSING btree(extract(day from birthdate));\n\n\ninsert into birthdaytest (birthdate) values \n('1930-01-01'::date+generate_series(0,365*70));\n\n... 
I repeated this another 15 times to load some data\n\n\nvacuum analyse birthdaytest;\n\n\\timing\n\nselect count(*) from birthdaytest;\n\n> count \n> --------\n> 408816\n> (1 row)\n> \n> Time: 233.501 ms\n\n\nselect * from birthdaytest\nwhere extract(month from birthdate) = 5\nand extract(day from birthdate) between 6 and 12;\n\n> id | birthdate \n> --------+------------\n> 126 | 1930-05-06\n> 127 | 1930-05-07\n> 128 | 1930-05-08\n> ...\n> ...\n> 408613 | 1999-05-11\n> 408614 | 1999-05-12\n> (7840 rows)\n> \n> Time: 211.237 ms\n\n\nselect * from birthdaytest\nwhere extract(month from birthdate) = extract(month from current_date)\nand extract(day from birthdate) between extract(day from current_date) \nand extract(day from current_date+14);\n\n> id | birthdate \n> --------+------------\n> 125 | 1930-05-05\n> 126 | 1930-05-06\n> 127 | 1930-05-07\n> ...\n> ...\n> 408619 | 1999-05-17\n> 408620 | 1999-05-18\n> 408621 | 1999-05-19\n> (16800 rows)\n> \n> Time: 483.915 ms\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Mon, 05 May 2008 15:11:38 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "If I have to find upcoming birthdays in current week and the current week\nfall into different months - how would you handle that?\n\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Fastest-way-best-practice-to-calculate-next-birthdays-tp2068398p5849705.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 18 May 2015 02:30:15 -0700 (MST)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "On Monday, May 18, 2015, [email protected] <\[email protected]> wrote:\n\n> If I have to find upcoming birthdays in current week and the current week\n> fall into different months - how would you handle that?\n>\n\nExtract(week from timestamptz_column)\n\nISO weeks are not affected by month boundaries but do start on Monday.\n\nDavid J.\n\nOn Monday, May 18, 2015, [email protected] <[email protected]> wrote:If I have to find upcoming birthdays in current week and the current week\nfall into different months - how would you handle that?\nExtract(week from timestamptz_column)ISO weeks are not affected by month boundaries but do start on Monday.David J.", "msg_date": "Wed, 20 May 2015 20:22:15 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "On 05/20/15 20:22, David G. Johnston wrote:\n> On Monday, May 18, 2015, [email protected] <\n> [email protected]> wrote:\n> \n>> If I have to find upcoming birthdays in current week and the current week\n>> fall into different months - how would you handle that?\n>>\n> \n> Extract(week from timestamptz_column)\n> \n> ISO weeks are not affected by month boundaries but do start on Monday.\n\nThere is the year start/end boundary conditions to worry about there.\n\nIf the current week covers Dec28-Jan02 then week of year won't help for\na birthday on Jan01 or Jan02 if 'today' is in the Dec portion. 
Ditto\nfor birthday in Dec portion when 'today' is in the Jan portion.\n\nThere is probably a better way to do it than what I'm showing here, but\nhere's an example:\n\nwith x as (\n select now() - (extract(dow from now()) || ' days')::interval as\nweekstart\n)\nselect to_char(x.weekstart, 'YYYY-MM-DD') as first_day,\n to_char(x.weekstart + '6 days', 'YYYY-MM-DD') as last_day\n from x;\n\nYou could probably make some of that into a function that accepts a\ntimestamptz and generates the two days. Or even does the compare too.\n\nHTH.\n\nBosco.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 May 2015 09:15:04 -0700", "msg_from": "Bosco Rama <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "On Thursday, May 21, 2015, Bosco Rama <[email protected]> wrote:\n\n> On 05/20/15 20:22, David G. Johnston wrote:\n> > On Monday, May 18, 2015, [email protected] <javascript:;> <\n> > [email protected] <javascript:;>> wrote:\n> >\n> >> If I have to find upcoming birthdays in current week and the current\n> week\n> >> fall into different months - how would you handle that?\n> >>\n> >\n> > Extract(week from timestamptz_column)\n> >\n> > ISO weeks are not affected by month boundaries but do start on Monday.\n>\n> There is the year start/end boundary conditions to worry about there.\n>\n> If the current week covers Dec28-Jan02 then week of year won't help for\n> a birthday on Jan01 or Jan02 if 'today' is in the Dec portion. Ditto\n> for birthday in Dec portion when 'today' is in the Jan portion.\n>\n>\nYou need to read the documentation regarding ISO year and ISO week more\ncarefully. There is no issue with years only ensuring that your definition\nof week starts with Monday and contains 7 days. The ISO year for January\n1st can be different than the Gregorian year for the same.\n\nDavid J.\n\nOn Thursday, May 21, 2015, Bosco Rama <[email protected]> wrote:On 05/20/15 20:22, David G. Johnston wrote:\n> On Monday, May 18, 2015, [email protected] <\n> [email protected]> wrote:\n>\n>> If I have to find upcoming birthdays in current week and the current week\n>> fall into different months - how would you handle that?\n>>\n>\n> Extract(week from timestamptz_column)\n>\n> ISO weeks are not affected by month boundaries but do start on Monday.\n\nThere is the year start/end boundary conditions to worry about there.\n\nIf the current week covers Dec28-Jan02 then week of year won't help for\na birthday on Jan01 or Jan02 if 'today' is in the Dec portion.  Ditto\nfor birthday in Dec portion when 'today' is in the Jan portion.\nYou need to read the documentation regarding ISO year and ISO week more carefully.  There is no issue with years only ensuring that your definition of week starts with Monday and contains 7 days.  The ISO year for January 1st can be different than the Gregorian year for the same.David J.", "msg_date": "Thu, 21 May 2015 09:50:04 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "On 5/21/15 11:15 AM, Bosco Rama wrote:\n> You could probably make some of that into a function that accepts a\n> timestamptz and generates the two days.\n\nYou'll be better off if instead of 2 days it gives you a daterange: \nhttp://www.postgresql.org/docs/9.4/static/rangetypes.html\n\nI don't know about the exact ISO details, but your approach is the \ncorrect one: find the date that the current week started on and then \nbuild a range of [week start, week start + 7 days).\n\nAlso, note the use of [ vs ). That is the ONLY correct way to do this if \nyou're comparing to a timestamp.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 22 May 2015 16:23:19 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest way / best practice to calculate \"next birthdays\"" }, { "msg_contents": "\"[email protected]\" <[email protected]> wrote:\n\n> If I have to find upcoming birthdays in current week and the\n> current week fall into different months - how would you handle\n> that?\n\nIf you don't need to cross from December into January, I find the\neasiest is:\n\nSELECT * FROM person\n WHERE (EXTRACT(MONTH FROM dob), EXTRACT(DAY FROM dob))\n BETWEEN (6, 28) AND (7, 4);\n\nThat is logicically the same as:\n\nSELECT * FROM person\n WHERE (EXTRACT(MONTH FROM dob) >= 6\n AND (EXTRACT(MONTH FROM dob) > 6\n OR (EXTRACT(DAY FROM dob) >= 28)))\n AND (EXTRACT(MONTH FROM dob) <= 7\n AND (EXTRACT(MONTH FROM dob) < 7\n OR (EXTRACT(DAY FROM dob) <= 4)));\n\nThat's the generalized case; with the months adjacent, this simpler\nform is also equivalent:\n\nSELECT * FROM person\n WHERE (EXTRACT(MONTH FROM dob) = 6\n AND EXTRACT(DAY FROM dob) >= 28)\n OR (EXTRACT(MONTH FROM dob) = 7\n AND EXTRACT(DAY FROM dob) <= 4);\n\nThe first query I showed is faster than either of the alternatives,\nespecially if there is an index on dob.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 1 Jun 2015 19:11:15 +0000 (UTC)", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest way / best practice to calculate \"next\n birthdays\"" } ]
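Tying the last few replies together: Kevin's row-comparison form can be driven straight from current_date, which handles a week that spans a month boundary with no extra logic; only a window that wraps from late December into January still needs the generalized OR form he shows. A sketch, assuming the same person(dob date) table:

-- Birthdays in the next 7 days (valid as long as the window does not cross Dec 31 -> Jan 1).
SELECT *
FROM   person
WHERE  (EXTRACT(MONTH FROM dob), EXTRACT(DAY FROM dob))
       BETWEEN (EXTRACT(MONTH FROM current_date),
                EXTRACT(DAY   FROM current_date))
           AND (EXTRACT(MONTH FROM current_date + 6),
                EXTRACT(DAY   FROM current_date + 6));

-- An expression index on the same pair should keep the comparison indexable;
-- worth confirming with EXPLAIN on real data.
CREATE INDEX person_bday_idx
    ON person ((EXTRACT(MONTH FROM dob)), (EXTRACT(DAY FROM dob)));

Note that a February 29 birthdate only matches in leap years with this form; mapping it to February 28 would need an explicit CASE on the extracted day.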
[ { "msg_contents": "PostgreSQL: 8.2.4\n\n \n\nWe currently backup all of our database tables per schema using pg_dump\nevery half hour. We have been noticing that the database performance\nhas been very poor during the backup process. How can I improve the\nperformance?\n\n \n\nServer Specs:\n\nDedicated DB server\n\nDatabase takes up 8.0 Gig of disk space\n\n2 Xeon 5160 dual cores 3.0 \n\n16 G of memory\n\nTwo disks in raid 1 are used for the OS, database and backups. SAS\n10,000 RPM drives.\n\nOS: Linux AS 4.x 64 bit\n\nshared_buffers = 1 GB\n\nwork_mem = 20MB\n\nmax_fsm_pages = 524288\n\nrandom_page_cost=1.0\n\neffective_cache_size=16GB\n\nmax_connections=150\n\n \n\nAll other settings are the default settings. \n\n \n\nI have tried doing backups to a second set of disks but the performance\nonly improved somewhat.\n\n \n\nDoes anyone have advice on how to improve my performance during backup?\nWould adding two quad core processors improve performance?\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n", "msg_date": "Mon, 5 May 2008 09:59:57 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Backup causing poor performance - suggestions" }, { "msg_contents": "On Mon, May 5, 2008 at 8:59 AM, Campbell, Lance <[email protected]> wrote:\n>\n> PostgreSQL: 8.2.4\n\nYou should update to 8.2.7 as a matter of periodic maintenance. It's\na very short and easy update.\n\n> We currently backup all of our database tables per schema using pg_dump\n> every half hour. We have been noticing that the database performance has\n> been very poor during the backup process. How can I improve the\n> performance?\n>\n>\n>\n> Server Specs:\n>\n> Dedicated DB server\n>\n> Database takes up 8.0 Gig of disk space\n>\n> 2 Xeon 5160 dual cores 3.0\n>\n> 16 G of memory\n>\n> Two disks in raid 1 are used for the OS, database and backups. SAS 10,000\n> RPM drives.\n>\n> OS: Linux AS 4.x 64 bit\n\nSo, what kind of RAID controller are you using?
And can you add more\ndrives and / or battery backed cache to it?\n", "msg_date": "Mon, 5 May 2008 09:05:32 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup causing poor performance - suggestions" }, { "msg_contents": "Scott,\nThe server is a Dell PowerEdge 2900 II with the standard Perc 6/I SAS\ncontroller with 256 MB cache. \n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Monday, May 05, 2008 10:06 AM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] Backup causing poor performance - suggestions\n\nOn Mon, May 5, 2008 at 8:59 AM, Campbell, Lance <[email protected]> wrote:\n>\n> PostgreSQL: 8.2.4\n\nYou should update to 8.2.7 as a matter of periodic maintenance. It's\na very short and easy update.\n\n> We currently backup all of our database tables per schema using\npg_dump\n> every half hour. We have been noticing that the database performance\nhas\n> been very poor during the backup process. How can I improve the\n> performance?\n>\n>\n>\n> Server Specs:\n>\n> Dedicated DB server\n>\n> Database takes up 8.0 Gig of disk space\n>\n> 2 Xeon 5160 dual cores 3.0\n>\n> 16 G of memory\n>\n> Two disks in raid 1 are used for the OS, database and backups. SAS\n10,000\n> RPM drives.\n>\n> OS: Linux AS 4.x 64 bit\n\nSo, what kind of RAID controller are you using? And can you add more\ndrives and / or battery backed cache to it?\n", "msg_date": "Mon, 5 May 2008 10:10:22 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Backup causing poor performance - suggestions" }, { "msg_contents": "On Mon, May 5, 2008 at 9:10 AM, Campbell, Lance <[email protected]> wrote:\n> Scott,\n> The server is a Dell PowerEdge 2900 II with the standard Perc 6/I SAS\n> controller with 256 MB cache.\n\nIt's probably not gonna win any awards, but it's not too terrible.\n\nWhat does vmstat 1 say during your backups / normal operation? The\nlast four or five columns have the most useful data for\ntroubleshooting.\n", "msg_date": "Mon, 5 May 2008 09:54:15 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup causing poor performance - suggestions" }, { "msg_contents": "Campbell, Lance wrote:\n> We currently backup all of our database tables per schema using pg_dump \n> every half hour. We have been noticing that the database performance \n> has been very poor during the backup process. How can I improve the \n> performance?\n\nIt sounds like the goal is to have frequent, near-real-time backups of your databases for recovery purposes. Maybe instead of looking at pg_dump's performance, a better solution would be a replication system such as Slony, or a \"warm backup\" using Skype Tools.\n\nBacking up the database every half hour puts a large load on the system during the dump, and means you are re-dumping the same data, 48 times per day. If you use a replication solution, the backup process is continuous (spread out through the day), and you're not re-dumping static data; the only data that moves around is the new data.\n\nI've used Slony with mixed success; depending on the complexity and size of your database, it can be quite effective. 
I've heard very good reports about Skype Tools, which has both a Slony-like replicator (not as configurable as Slony, but easier to set up and use), plus an entirely separate set of scripts that simplifies \"warm standby\" using WAL logging.\n\nCraig\n", "msg_date": "Mon, 05 May 2008 09:10:16 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup causing poor performance - suggestions" }, { "msg_contents": "On Mon, May 5, 2008 at 10:11 AM, Campbell, Lance <[email protected]> wrote:\n> Scott,\n> The last 6 entries are when the system is not backing up. The system\n> was running fine. But the other entries are when it was backing up.\n> Reads seem to be fine but any operations that need to write data just\n> hang.\n\nCould you repost that as an attachment? the wrapping applied by your\nemail client makes it very hard to read.\n\nJust perusing it, it doesn't look like you're CPU bound, but I/O bound.\n\nAs Craig mentioned, you may do better with some form of replication\nsolution here than pg_dumps.\n\nGiven that your db can fit in memory (when you say it's 8G do you mean\nON DISK, or in a backup? Big diff) then the only thing the backups\nshould be slowing down are update queries. Select queries shouldn't\neven notice.\n\nHowever, there's a LOT of wait state, and only blocks out, not really\nmany in, so I'm guessing that you've got a fair bit of writing going\non at the same time as your backups.\n", "msg_date": "Mon, 5 May 2008 11:05:03 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup causing poor performance - suggestions" }, { "msg_contents": "On Mon, 2008-05-05 at 09:10 -0700, Craig James wrote:\n> Campbell, Lance wrote:\n> > We currently backup all of our database tables per schema using pg_dump \n> > every half hour. We have been noticing that the database performance \n> > has been very poor during the backup process. How can I improve the \n> > performance?\n> \n> It sounds like the goal is to have frequent, near-real-time backups of\n> your databases for recovery purposes. Maybe instead of looking at\n> pg_dump's performance, a better solution would be a replication system\n> such as Slony, or a \"warm backup\" using Skype Tools.\n> \n> Backing up the database every half hour puts a large load on the\n> system during the dump, and means you are re-dumping the same data, 48\n> times per day. If you use a replication solution, the backup process\n> is continuous (spread out through the day), and you're not re-dumping\n> static data; the only data that moves around is the new data.\n> \n> I've used Slony with mixed success; depending on the complexity and\n> size of your database, it can be quite effective. I've heard very\n> good reports about Skype Tools, which has both a Slony-like replicator\n> (not as configurable as Slony, but easier to set up and use), plus an\n> entirely separate set of scripts that simplifies \"warm standby\" using\n> WAL logging.\n\nI think we should mention Warm Standby via pg_standby, which is part of\ncore software and documentation. 
Seems strange not to mention it at all.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Mon, 05 May 2008 18:43:52 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup causing poor performance - suggestions" }, { "msg_contents": "On Mon, May 5, 2008 at 11:19 AM, Campbell, Lance <[email protected]> wrote:\n> Scott,\n> When I do a df -h I see that the database takes up a total of 8Gig of\n> disk space. This is not the size of the backup file of the database.\n\nOk. Just wanted to make sure. Looking at vmstat, with little or no\nblocks read in, it would appear your database fits in memory.\n\n> I have attached the vmstats as a text file. Are there any properties I\n> could adjust that would help with writing data during a backup? Are\n> there any properties that might help with improving pg_dump performance?\n\nWith as much writing as you have going on, it might help to crank up\nyour setting for checkpoint segments to something like 100 or more.\nWith as much disk space as you've got you can afford it. Other than\nthat, you might wanna look at a faster / better RAID controller in the\nfuture. One with lots of battery backed cache set to write back.\n\n> Ideally I want to keep the backups as simple a process as possible\n> before we have to go to the next level of backup. My next strategy is\n> to put some of the tables in tablespaces on a couple different disks. I\n> will also backup to a dedicated set of disks. If there are a few\n> performance tweaks I can make I will. I was hoping to wait a few years\n> before we have to go to a more involved backup process.\n\nIt may well be cheaper to go the route of a faster RAID controller\nwith battery backed cache, since it's a one time investment, not an\nongoing maintenance issue like running PITR or slony can become. Plus\nit helps overall performance quite a bit.\n", "msg_date": "Mon, 5 May 2008 13:44:16 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup causing poor performance - suggestions" }, { "msg_contents": "On Mon, 5 May 2008, Campbell, Lance wrote:\n\n> We currently backup all of our database tables per schema using pg_dump\n> every half hour. We have been noticing that the database performance\n> has been very poor during the backup process. How can I improve the\n> performance?\n\nUh, don't do that? pg_dump works OK doing periodic backups during \nrelatively calm periods, it's not really appropriate to run all the time \nlike that.\n\nIf you need a backup to stay that current, you might instead consider a \nreplication solution that does a periodic full snapshot and then just \nmoves incrementals around from there. WAL shipping is the most obvious \ncandidate as it doesn't necessarily require an additional server and the \nmain components are integrated into the core now. You could just save the \nfiles necessary to recover the database just about anywhere. Most other \nreplication solutions would require having another server just to run that \nwhich is probably not what you want.\n\n> I have tried doing backups to a second set of disks but the performance \n> only improved somewhat.\n\nThen the real problem you're having is probably contention against the \ndatabase you're dumping from rather than having enough write capacity on \nthe output side. This isn't a surprise with pgdump as it's not exactly \ngentle on the server you're dumping from. 
To improve things here, you'd \nneed to add the second set of disks as storage for some of the main \ndatabase.\n\n> Would adding two quad core processors improve performance?\n\nDoubt it. pg_dump is basically taking up a processor and some percentage \nof disk resources when you're running it. If your server has all of the \nother 3 processors pegged at the same time, maybe adding more processors \nwould help, but that seems pretty unlikely.\n\nA message from Scott alluded to you showing some vmstat output, but I \ndidn't see that show up on the list. That would give more insight here. \nAlso, if you still have checkpoint_segments at its default (3, you didn't \nmention adjusting it) that could be contributing to this general problem; \nthat should be much higher on your hardware.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 5 May 2008 16:03:58 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup causing poor performance - suggestions" }, { "msg_contents": "On Mon, 5 May 2008, Scott Marlowe wrote:\n\n> Other than that, you might wanna look at a faster / better RAID \n> controller in the future. One with lots of battery backed cache set to \n> write back.\n\nHopefully Lance's PERC 6/I SAS already has its battery installed. The 6/I \nwith 256MB of cache is decent enough that I'm not sure getting a better \ncontroller would be a better investment than, say, adding more disks and \nsplitting the database I/O usefully among them.\n\nUpgrading to PG 8.3 comes to mind as another option I'd consider before \ngetting desparate enough to add/swap controller cards, which is always a \nscary thing on production servers.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 5 May 2008 16:11:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup causing poor performance - suggestions" }, { "msg_contents": "On Mon, May 5, 2008 at 2:03 PM, Greg Smith <[email protected]> wrote:\n> On Mon, 5 May 2008, Campbell, Lance wrote:\n>\n>\n> > We currently backup all of our database tables per schema using pg_dump\n> > every half hour. We have been noticing that the database performance\n> > has been very poor during the backup process. How can I improve the\n> > performance?\n> >\n>\n> Uh, don't do that? pg_dump works OK doing periodic backups during\n> relatively calm periods, it's not really appropriate to run all the time\n> like that.\n\nWow, my reading comprehension must be dropping. I totally missed the\nevery half hour bit and read it as every day. If you're backing up a\nlive database every hour, then yes, it's a very good idea to switch to\na hot or cold standby method with PITR\n", "msg_date": "Mon, 5 May 2008 15:50:30 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup causing poor performance - suggestions" } ]
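Since most of the replies converge on WAL shipping / warm standby as the replacement for half-hourly pg_dump runs, here is roughly what the moving parts look like on 8.2/8.3. The archive path and values are placeholders; the details are in the warm-standby documentation Simon points to.

# primary, postgresql.conf  (paths and values are placeholders)
archive_mode        = on     # 8.3 only; on 8.2 setting archive_command is sufficient
archive_command     = 'cp %p /backup/wal_archive/%f'
checkpoint_segments = 30     # also eases the write stalls seen while pg_dump runs

# one-time base backup of the primary:
#   psql -c "SELECT pg_start_backup('base');"
#   copy $PGDATA to the standby (rsync/tar)
#   psql -c "SELECT pg_stop_backup();"

# standby, recovery.conf
restore_command = 'pg_standby /backup/wal_archive %f %p'

With that in place, pg_dump can drop back to a nightly (or less frequent) logical backup, while the WAL archive covers the half-hourly recovery requirement without hammering the primary.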
[ { "msg_contents": "i've had to write queries to get trail balance values out of the GL \ntransaction table and i'm not happy with its performance \n\nThe table has 76K rows growing about 1000 rows per working day so the \nperformance is not that great it takes about 20 to 30 seconds to get all \nthe records for the table and when we limit it to single accounting \nperiod it drops down to 2 seconds\n\nHere is the query and explain . PostgreSql is 8.3.1 on new server with \nraid 10 Serial SCSI.\n\nSELECT period.period_id,\n period.period_start,\n period.period_end,\n accnt.accnt_id,\n accnt.accnt_number,\n accnt.accnt_descrip,\n period.period_yearperiod_id,\n accnt.accnt_type,\n COALESCE(( SELECT sum(gltrans.gltrans_amount) AS sum\n FROM gltrans\n WHERE gltrans.gltrans_date < period.period_start\n AND gltrans.gltrans_accnt_id = accnt.accnt_id\n AND gltrans.gltrans_posted = true), 0.00)::text::money AS \nbeginbalance,\n COALESCE(( SELECT sum(gltrans.gltrans_amount) AS sum\n FROM gltrans\n WHERE gltrans.gltrans_date <= period.period_end\n AND gltrans.gltrans_date >= period.period_start\n AND gltrans.gltrans_amount <= 0::numeric\n AND gltrans.gltrans_accnt_id = accnt.accnt_id\n AND gltrans.gltrans_posted = true), 0.00)::text::money AS \nnegative,\n COALESCE(( SELECT sum(gltrans.gltrans_amount) AS sum\n FROM gltrans\n WHERE gltrans.gltrans_date <= period.period_end\n AND gltrans.gltrans_date >= period.period_start\n AND gltrans.gltrans_amount >= 0::numeric\n AND gltrans.gltrans_accnt_id = accnt.accnt_id\n AND gltrans.gltrans_posted = true), 0.00)::text::money AS \npositive,\n COALESCE(( SELECT sum(gltrans.gltrans_amount) AS sum\n FROM gltrans\n WHERE gltrans.gltrans_date <= period.period_end\n AND gltrans.gltrans_date >= period.period_start\n AND gltrans.gltrans_accnt_id = accnt.accnt_id\n AND gltrans.gltrans_posted = true), 0.00)::text::money AS \ndifference,\n COALESCE(( SELECT sum(gltrans.gltrans_amount) AS sum\n FROM gltrans\n WHERE gltrans.gltrans_date <= period.period_end\n AND gltrans.gltrans_accnt_id = accnt.accnt_id\n AND gltrans.gltrans_posted = true), 0.00)::text::money AS \nendbalance\nFROM period, accnt\nORDER BY period.period_id, accnt.accnt_number;\n\n\"Sort (cost=4083970.56..4083974.89 rows=1729 width=57) (actual \ntime=24680.402..24681.386 rows=1729 loops=1)\"\n\" Sort Key: period.period_id, accnt.accnt_number\"\n\" Sort Method: quicksort Memory: 292kB\"\n\" -> Nested Loop (cost=1.14..4083877.58 rows=1729 width=57) (actual \ntime=4.043..24674.258 rows=1729 loops=1)\"\n\" -> Seq Scan on accnt (cost=0.00..4.33 rows=133 width=41) \n(actual time=0.011..0.158 rows=133 loops=1)\"\n\" -> Materialize (cost=1.14..1.27 rows=13 width=16) (actual \ntime=0.001..0.010 rows=13 loops=133)\"\n\" -> Seq Scan on period (cost=0.00..1.13 rows=13 \nwidth=16) (actual time=0.005..0.023 rows=13 loops=1)\"\n\" SubPlan\"\n\" -> Aggregate (cost=1093.64..1093.65 rows=1 width=8) (actual \ntime=6.039..6.039 rows=1 loops=1729)\"\n\" -> Bitmap Heap Scan on gltrans (cost=398.21..1092.18 \nrows=585 width=8) (actual time=5.171..5.623 rows=428 loops=1729)\"\n\" Recheck Cond: ((gltrans_accnt_id = $1) AND \n(gltrans_date <= $3))\"\n\" Filter: gltrans_posted\"\n\" -> BitmapAnd (cost=398.21..398.21 rows=636 \nwidth=0) (actual time=5.158..5.158 rows=0 loops=1729)\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_accnt_id_idx (cost=0.00..30.57 rows=1908 width=0) \n(actual time=0.078..0.078 rows=574 loops=1729)\"\n\" Index Cond: (gltrans_accnt_id = $1)\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_date_idx 
(cost=0.00..367.10 rows=25446 width=0) (actual \ntime=7.407..7.407 rows=63686 loops=1183)\"\n\" Index Cond: (gltrans_date <= $3)\"\n\" -> Aggregate (cost=58.19..58.20 rows=1 width=8) (actual \ntime=0.920..0.921 rows=1 loops=1729)\"\n\" -> Bitmap Heap Scan on gltrans (cost=38.90..58.16 \nrows=9 width=8) (actual time=0.843..0.878 rows=40 loops=1729)\"\n\" Recheck Cond: ((gltrans_date <= $3) AND \n(gltrans_date >= $0) AND (gltrans_accnt_id = $1))\"\n\" Filter: gltrans_posted\"\n\" -> BitmapAnd (cost=38.90..38.90 rows=10 \nwidth=0) (actual time=0.839..0.839 rows=0 loops=1729)\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_date_idx (cost=0.00..8.08 rows=382 width=0) (actual \ntime=0.782..0.782 rows=5872 loops=1729)\"\n\" Index Cond: ((gltrans_date <= $3) AND \n(gltrans_date >= $0))\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_accnt_id_idx (cost=0.00..30.57 rows=1908 width=0) \n(actual time=0.076..0.076 rows=574 loops=798)\"\n\" Index Cond: (gltrans_accnt_id = $1)\"\n\" -> Aggregate (cost=58.20..58.21 rows=1 width=8) (actual \ntime=0.897..0.898 rows=1 loops=1729)\"\n\" -> Bitmap Heap Scan on gltrans (cost=38.89..58.19 \nrows=4 width=8) (actual time=0.845..0.874 rows=20 loops=1729)\"\n\" Recheck Cond: ((gltrans_date <= $3) AND \n(gltrans_date >= $0) AND (gltrans_accnt_id = $1))\"\n\" Filter: (gltrans_posted AND (gltrans_amount >= \n0::numeric))\"\n\" -> BitmapAnd (cost=38.89..38.89 rows=10 \nwidth=0) (actual time=0.840..0.840 rows=0 loops=1729)\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_date_idx (cost=0.00..8.08 rows=382 width=0) (actual \ntime=0.783..0.783 rows=5872 loops=1729)\"\n\" Index Cond: ((gltrans_date <= $3) AND \n(gltrans_date >= $0))\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_accnt_id_idx (cost=0.00..30.57 rows=1908 width=0) \n(actual time=0.077..0.077 rows=574 loops=798)\"\n\" Index Cond: (gltrans_accnt_id = $1)\"\n\" -> Aggregate (cost=58.20..58.21 rows=1 width=8) (actual \ntime=0.908..0.909 rows=1 loops=1729)\"\n\" -> Bitmap Heap Scan on gltrans (cost=38.89..58.19 \nrows=4 width=8) (actual time=0.854..0.885 rows=20 loops=1729)\"\n\" Recheck Cond: ((gltrans_date <= $3) AND \n(gltrans_date >= $0) AND (gltrans_accnt_id = $1))\"\n\" Filter: (gltrans_posted AND (gltrans_amount <= \n0::numeric))\"\n\" -> BitmapAnd (cost=38.89..38.89 rows=10 \nwidth=0) (actual time=0.843..0.843 rows=0 loops=1729)\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_date_idx (cost=0.00..8.08 rows=382 width=0) (actual \ntime=0.785..0.785 rows=5872 loops=1729)\"\n\" Index Cond: ((gltrans_date <= $3) AND \n(gltrans_date >= $0))\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_accnt_id_idx (cost=0.00..30.57 rows=1908 width=0) \n(actual time=0.078..0.078 rows=574 loops=798)\"\n\" Index Cond: (gltrans_accnt_id = $1)\"\n\" -> Aggregate (cost=1093.64..1093.65 rows=1 width=8) (actual \ntime=5.485..5.485 rows=1 loops=1729)\"\n\" -> Bitmap Heap Scan on gltrans (cost=398.21..1092.18 \nrows=585 width=8) (actual time=4.699..5.110 rows=388 loops=1729)\"\n\" Recheck Cond: ((gltrans_accnt_id = $1) AND \n(gltrans_date < $0))\"\n\" Filter: gltrans_posted\"\n\" -> BitmapAnd (cost=398.21..398.21 rows=636 \nwidth=0) (actual time=4.687..4.687 rows=0 loops=1729)\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_accnt_id_idx (cost=0.00..30.57 rows=1908 width=0) \n(actual time=0.079..0.079 rows=574 loops=1729)\"\n\" Index Cond: (gltrans_accnt_id = $1)\"\n\" -> Bitmap Index Scan on \ngltrans_gltrans_date_idx (cost=0.00..367.10 rows=25446 width=0) (actual \ntime=6.717..6.717 rows=57814 loops=1183)\"\n\" Index Cond: 
(gltrans_date < $0)\"\n\"Total runtime: 24682.580 ms\"\n\n\n", "msg_date": "Mon, 05 May 2008 21:01:49 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "need to speed up query" }, { "msg_contents": "You're joining these two tables: period, accnt, but I'm not seeing an\non () clause or a where clause joining them. Is the cross product\nintentional?\n\nBut what I'm seeing that seems like the lowest hanging fruit would be\ntwo column indexes on the bits that are showing up in those bit map\nscans. Like this part:\n\n\" Recheck Cond: ((gltrans_date <= $3) AND\n(gltrans_date >= $0) AND gltrans_accnt_id = $1))\"\n\" Filter: gltrans_posted\"\n\" -> BitmapAnd (cost=38.90..38.90 rows=10\nwidth=0) (actual time=0.839..0.839 rows=0 loops=1729)\"\n\" -> Bitmap Index Scan on\ngltrans_gltrans_date_idx (cost=0.00..8.08 rows=382 width=0) (actual\ntime=0.782..0.782 rows=5872 loops=1729)\"\n\" Index Cond: ((gltrans_date <= $3)\nAND (gltrans_date >= $0))\"\n\" -> Bitmap Index Scan on\ngltrans_gltrans_accnt_id_idx (cost=0.00..30.57 rows=1908 width=0)\n(actual time=0.076..0.076 rows=574 loops=798)\"\n\" Index Cond: (gltrans_accnt_id = $1)\"\n\nYou are looking through 574 rows in one column and 5872 in another.\nBut when they're anded together, you get 0 rows. A two column index\nthere should really help.\n", "msg_date": "Mon, 5 May 2008 21:27:03 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need to speed up query" }, { "msg_contents": "Justin --\n\nYou wrote:\n> \n> i've had to write queries to get trail balance values out of the GL \n> transaction table and i'm not happy with its performance \n> \n> \n> The table has 76K rows growing about 1000 rows per working day so the \n> performance is not that great it takes about 20 to 30 seconds to get all \n> the records for the table and when we limit it to single accounting \n> period it drops down to 2 seconds\n\nSo 30 seconds for 76 days (roughly) worth of numbers ? Not terrible but not great.\n\n> Here is the query and explain . PostgreSql is 8.3.1 on new server with \n> raid 10 Serial SCSI.\n<... snipped 'cause I have a lame reader ...>\n\n> \" Sort Method: quicksort Memory: 292kB\"\n<...snip...>\n> \"Total runtime: 24682.580 ms\"\n\n\nI don't have any immediate thoughts but maybe you could post the table schemas and indexes. It looks to my untutored eye as if most of the estimates are fair so I am guessing that you have run analyze recently.\n\nWhat is your sort memory set to ? If work_mem is too low then you'll go to disk (if you see tmp files under the postgres $PGDATA/base directory you might be seeing the result of this) ...\n\nHTH\n\nGreg Williamson\nSenior DBA\nDigitalGlobe\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. 
If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)", "msg_date": "Mon, 5 May 2008 22:08:35 -0600", "msg_from": "\"Gregory Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need to speed up query" }, { "msg_contents": "yes the cross join is intentional.\n\nThanks creating the two column index drop processing time to 15 to 17 \nseconds\nput per period down to 1 second\n\n\n\nScott Marlowe wrote:\n> You're joining these two tables: period, accnt, but I'm not seeing an\n> on () clause or a where clause joining them.  Is the cross product\n> intentional?\n>\n> But what I'm seeing that seems like the lowest hanging fruit would be\n> two column indexes on the bits that are showing up in those bit map\n> scans.  Like this part:\n>\n> \"          Recheck Cond: ((gltrans_date <= $3) AND\n> (gltrans_date >= $0) AND gltrans_accnt_id = $1))\"\n> \"          Filter: gltrans_posted\"\n> \"          ->  BitmapAnd  (cost=38.90..38.90 rows=10\n> width=0) (actual time=0.839..0.839 rows=0 loops=1729)\"\n> \"            ->  Bitmap Index Scan on\n> gltrans_gltrans_date_idx  (cost=0.00..8.08 rows=382 width=0) (actual\n> time=0.782..0.782 rows=5872 loops=1729)\"\n> \"                  Index Cond: ((gltrans_date <= $3)\n> AND (gltrans_date >= $0))\"\n> \"            ->  Bitmap Index Scan on\n> gltrans_gltrans_accnt_id_idx  (cost=0.00..30.57 rows=1908 width=0)\n> (actual time=0.076..0.076 rows=574 loops=798)\"\n> \"                  Index Cond: (gltrans_accnt_id = $1)\"\n>\n> You are looking through 574 rows in one column and 5872 in another.\n> But when they're anded together, you get 0 rows. 
A two column index\n> there should really help.\n>\n> \n", "msg_date": "Tue, 06 May 2008 00:36:36 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need to speed up query" }, { "msg_contents": "Gregory Williamson wrote:\n>\n> Justin --\n>\n> You wrote:\n> > \n> > i've had to write queries to get trail balance values out of the GL\n> > transaction table and i'm not happy with its performance\n> >\n> >\n> > The table has 76K rows growing about 1000 rows per working day so the\n> > performance is not that great it takes about 20 to 30 seconds to get all\n> > the records for the table and when we limit it to single accounting\n> > period it drops down to 2 seconds\n>\n> So 30 seconds for 76 days (roughly) worth of numbers ? Not terrible \n> but not great.\n>\n> > Here is the query and explain . PostgreSql is 8.3.1 on new server with\n> > raid 10 Serial SCSI.\n> <... snipped 'cause I have a lame reader ...>\n>\nnot according to the bench marks i have done, which were posted a \ncouple of months ago.\n>\n>\n> > \" Sort Method: quicksort Memory: 292kB\"\n> <...snip...>\n> > \"Total runtime: 24682.580 ms\"\n>\n>\n> I don't have any immediate thoughts but maybe you could post the table \n> schemas and indexes. It looks to my untutored eye as if most of the \n> estimates are fair so I am guessing that you have run analyze recently.\n>\n> What is your sort memory set to ? If work_mem is too low then you'll \n> go to disk (if you see tmp files under the postgres $PGDATA/base \n> directory you might be seeing the result of this) ...\n>\ni need to look into work mem its set at 25 megs which is fine for most \nwork unless we get into the accounting queries which have to be more \ncomplicated than they need to be because how some of the tables are laid \nout which i did not lay out.\n>\n>\n> HTH\n>\n> Greg Williamson\n> Senior DBA\n> DigitalGlobe\n>\n> Confidentiality Notice: This e-mail message, including any \n> attachments, is for the sole use of the intended recipient(s) and may \n> contain confidential and privileged information and must be protected \n> in accordance with those provisions. Any unauthorized review, use, \n> disclosure or distribution is prohibited. If you are not the intended \n> recipient, please contact the sender by reply e-mail and destroy all \n> copies of the original message.\n>\n> (My corporate masters made me say this.)\n>\n\n\n\n\n\n\nGregory Williamson wrote:\n\n\n\nRE: [PERFORM] need to speed up query\n\nJustin --\n\nYou wrote:\n> \n> i've had to write queries to get trail balance values out of the GL\n> transaction table and i'm not happy with its performance\n>\n>\n> The table has 76K rows growing about 1000 rows per working day so\nthe\n> performance is not that great it takes about 20 to 30 seconds to\nget all\n> the records for the table and when we limit it to single accounting\n> period it drops down to 2 seconds\n\nSo 30 seconds for 76 days (roughly) worth of numbers ? Not terrible but\nnot great.\n\n> Here is the query and explain .  PostgreSql  is 8.3.1 on new\nserver with\n> raid 10 Serial SCSI.\n<... snipped 'cause I have a lame reader ...>\n\n\nnot according to the bench marks i have done,  which were posted a\ncouple of months ago. \n\n\n> \"  Sort Method:  quicksort  Memory: 292kB\"\n<...snip...>\n> \"Total runtime: 24682.580 ms\"\n\n\nI don't have any immediate thoughts but maybe you could post the table\nschemas and indexes. 
It looks to my untutored eye as if most of the\nestimates are fair so I am guessing that you have run analyze recently.\n\nWhat is your sort memory set to ? If work_mem is too low then you'll go\nto disk (if you see tmp files under the postgres $PGDATA/base directory\nyou might be seeing the result of this) ...\n\n\ni need to look into work mem its set at 25 megs which is fine for most\nwork unless we get into the accounting queries which have to be more\ncomplicated than they need to be because how some of the tables are\nlaid out which i did not lay out.\n\n\nHTH\n\nGreg Williamson\nSenior DBA\nDigitalGlobe\n\nConfidentiality Notice: This e-mail message, including any attachments,\nis for the sole use of the intended recipient(s) and may contain\nconfidential and privileged information and must be protected in\naccordance with those provisions. Any unauthorized review, use,\ndisclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply e-mail and destroy all\ncopies of the original message.\n\n(My corporate masters made me say this.)", "msg_date": "Tue, 06 May 2008 00:48:29 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need to speed up query" }, { "msg_contents": "\n> i've had to write queries to get trail balance values out of the GL \n> transaction table and i'm not happy with its performance The table has \n> 76K rows growing about 1000 rows per working day so the performance is \n> not that great it takes about 20 to 30 seconds to get all the records \n> for the table and when we limit it to single accounting period it drops \n> down to 2 seconds\n\n\tWhat is a \"period\" ? Is it a month, or something more \"custom\" ? Can \nperiods overlap ?\n\n> COALESCE(( SELECT sum(gltrans.gltrans_amount) AS sum\n> FROM gltrans\n> WHERE gltrans.gltrans_date < period.period_start\n> AND gltrans.gltrans_accnt_id = accnt.accnt_id\n> AND gltrans.gltrans_posted = true), 0.00)::text::money AS \n> beginbalance,\n\n\tNote that here you are scanning the entire table multiple times, the \ncomplexity of this is basically (rows in gltrans)^2 which is something \nyou'd like to avoid.\n", "msg_date": "Tue, 06 May 2008 09:02:42 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need to speed up query" }, { "msg_contents": "\n\nPFC wrote:\n>\n>> i've had to write queries to get trail balance values out of the GL \n>> transaction table and i'm not happy with its performance The table \n>> has 76K rows growing about 1000 rows per working day so the \n>> performance is not that great it takes about 20 to 30 seconds to get \n>> all the records for the table and when we limit it to single \n>> accounting period it drops down to 2 seconds\n>\n> What is a \"period\" ? Is it a month, or something more \"custom\" ? \n> Can periods overlap ?\nNo periods can never overlap. If the periods did you would be in\nviolation of many tax laws around the world. Plus it you would not know\nhow much money you are making or losing.\nGenerally yes a accounting period is a normal calendar month. but you\ncan have 13 periods in a normal calendar year. 
52 weeks in a year / 4\nweeks in month = 13 periods or 13 months in a Fiscal Calendar year.\nThis means if someone is using a 13 period fiscal accounting year the\nstart and end dates are offset from a normal calendar.\nTo make this really funky you can have a Fiscal Calendar year start\nJune 15 2008 and end on June 14 2009\n\nhttp://en.wikipedia.org/wiki/Fiscal_year\n>\n>> COALESCE(( SELECT sum(gltrans.gltrans_amount) AS sum\n>> FROM gltrans\n>> WHERE gltrans.gltrans_date < period.period_start\n>> AND gltrans.gltrans_accnt_id = accnt.accnt_id\n>> AND gltrans.gltrans_posted = true), 0.00)::text::money AS \n>> beginbalance,\n>\n> Note that here you are scanning the entire table multiple times, \n> the complexity of this is basically (rows in gltrans)^2 which is \n> something you'd like to avoid.\n>\nFor accounting purposes you need to know the Beginning Balances,\nDebits, Credits, Difference between Debits to Credits and the Ending\nBalance for each account. We have 133 accounts with presently 12\nperiods defined so we end up 1596 rows returned for this query.\n\nSo period 1 should have for the most part have Zero for Beginning\nBalances for most types of Accounts. Period 2 is Beginning Balance is\nPeriod 1 Ending Balance, Period 3 is Period 2 ending balance so and so\non forever.\n\n\n\n\n\n", "msg_date": "Tue, 06 May 2008 08:22:02 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need to speed up query" }, { "msg_contents": "\n>> What is a \"period\" ? Is it a month, or something more \"custom\" ? \n>> Can periods overlap ?\n\n> No periods can never overlap. If the periods did you would be in \n> violation of many tax laws around the world. Plus it you would not know \n> how much money you are making or losing.\n\n\tI was wondering if you'd be using the same query to compute how much was \ngained every month and every week, which would have complicated things.\n\tBut now it's clear.\n\n> To make this really funky you can have a Fiscal Calendar year start \n> June 15 2008 and end on June 14 2009\n\n\tDon't you just love those guys ? Always trying new tricks to make your \nlife more interesting.\n\n>> Note that here you are scanning the entire table multiple times, \n>> the complexity of this is basically (rows in gltrans)^2 which is \n>> something you'd like to avoid.\n>>\n> For accounting purposes you need to know the Beginning Balances, \n> Debits, Credits, Difference between Debits to Credits and the Ending \n> Balance for each account. We have 133 accounts with presently 12 \n> periods defined so we end up 1596 rows returned for this query.\n\n\tAlright, I propose a solution which only works when periods don't overlap.\n\tIt will scan the entire table, but only once, not many times as your \ncurrent query does.\n\n> So period 1 should have for the most part have Zero for Beginning \n> Balances for most types of Accounts. Period 2 is Beginning Balance is \n> Period 1 Ending Balance, Period 3 is Period 2 ending balance so and so \n> on forever.\n\n\tPrecisely. 
So, it is not necessary to recompute everything for each \nperiod.\n\tUse the previous period's ending balance as the current period's starting \nbalance...\n\n\tThere are several ways to do this.\n\tFirst, you could use your current query, but only compute the sum of what \nhappened during a period, for each period, and store that in a temporary \ntable.\n\tThen, you use a plpgsql function, or you do that in your client, you take \nthe rows in chronological order, you sum them as they come, and you get \nyour balances. Use a NUMERIC type, not a FLOAT, to avoid rounding errors.\n\n\tThe other solution does the same thing but optimizes the first step like \nthis :\n\tINSERT INTO temp_table SELECT period, sum(...) GROUP BY period\n\n\tTo do this you must be able to compute the period from the date and not \nthe other way around. You could store a period_id in your table, or use a \nfunction.\n\n\tAnother much more efficient solution would be to have a summary table \nwhich keeps the summary data for each period, with beginning balance and \nend balance. This table will only need to be updated when someone finds an \nold receipt in their pocket or something.\n\n> This falls under the stupid question and i'm just curious what other \n> people think what makes a query complex?\n\n\tI have some rather complex queries which postgres burns in a few \nmilliseconds.\n\tYou could define complexity as the amount of brain sweat that went into \nwriting that query.\n\tYou could also define complexity as O(n) or O(n^2) etc, for instance your \nquery (as written) is O(n^2) which is something you don't want, I've seen \nstuff that was O(2^n) or worse, O(n!) in software written by drunk \nstudents, in this case getting rid of it is an emergency...\n", "msg_date": "Tue, 06 May 2008 18:35:16 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need to speed up query" }, { "msg_contents": "On Tue, 2008-05-06 at 03:01 +0100, Justin wrote:\n\n> i've had to write queries to get trail balance values out of the GL\n> transaction table and i'm not happy with its performance\n\nGo ahead and give this a try:\n\nSELECT p.period_id, p.period_start, p.period_end, a.accnt_id,\n a.accnt_number, a.accnt_descrip, p.period_yearperiod_id,\n a.accnt_type,\n SUM(CASE WHEN g.gltrans_date < p.period_start\n THEN g.gltrans_amount ELSE 0.0\n END)::text::money AS beginbalance,\n SUM(CASE WHEN g.gltrans_date < p.period_end\n AND g.gltrans_date >= p.period_start\n AND g.gltrans_amount <= 0::numeric\n THEN g.gltrans_amount ELSE 0.0\n END)::text::money AS negative,\n SUM(CASE WHEN g.gltrans_date <= p.period_end\n AND g.gltrans_date >= p.period_start\n AND g.gltrans_amount >= 0::numeric\n THEN g.gltrans_amount ELSE 0.0\n END)::text::money AS positive,\n SUM(CASE WHEN g.gltrans_date <= p.period_end\n AND g.gltrans_date >= p.period_start\n THEN g.gltrans_amount ELSE 0.0\n END)::text::money AS difference,\n SUM(CASE WHEN g.gltrans_date <= p.period_end\n THEN g.gltrans_amount ELSE 0.0\n END)::text::money AS endbalance,\n FROM period p\n CROSS JOIN accnt a\n LEFT JOIN gltrans g ON (g.gltrans_accnt_id = a.accnt_id\n AND g.gltrans_posted = true)\n ORDER BY period.period_id, accnt.accnt_number;\n\nDepending on how the planner saw your old query, it may have forced\nseveral different sequence or index scans to get the information from\ngltrans. One thing all of your subqueries had in common was a join on\nthe account id and listing only posted transactions. 
It's still a big\ngulp, but it's only one gulp.\n\nThe other thing I did was that I guessed you added the coalesce clause\nbecause the subqueries individually could return null rowsets for\nvarious groupings, and you wouldn't want that. This left-join solution\nonly lets it add to your various sums if it matches all the conditions,\notherwise it falls through the list of cases until nothing matches. If\nsome of your transactions can have null amounts, you might consider\nturning g.gltrans into COALESCE(g.gltrans, 0.0) instead.\n\nOtherwise, this *might* work; without knowing more about your schema,\nit's only a guess. I'm a little skeptical about the conditionless\ncross-join, but whatever.\n\nEither way, by looking at this query, it looks like some year-end\nsummary piece, or an at-a-glance idea of your account standings. The\nproblem you're going to have with this is that there's no way to truly\noptimize this. One way or another, you're going to incur some\ncombination of three sequence scans or three index scans; if those\ntables get huge, you're in trouble. You might want to consider a\ndenormalized summary table that contains this information (and maybe\nmore) maintained by a trigger or regularly invoked stored-procedure and\nthen you can select from *that* with much less agony.\n\nThen there's fact-tables, but that's beyond the scope of this email. ;)\n\nGood luck!\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\n", "msg_date": "Tue, 6 May 2008 11:43:29 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need to speed up query" }, { "msg_contents": "it worked it had couple missing parts but it worked and ran in 3.3 \nseconds. *Thanks for this *\ni need to review the result and balance it to my results as the \nAccountant already went through and balanced some accounts by hand to \nverify my results\n\n<<begin quote>>\n\n You might want to consider a\ndenormalized summary table that contains this information (and maybe\nmore) maintained by a trigger or regularly invoked stored-procedure and\nthen you can select from *that* with much less agony.\n\n<<end quote>>\n\nI just dumped the summary table because it kept getting out of balance \nall the time and was missing accounts that did not have transaction in \nthem for given period. Again i did not lay out the table nor the old \ncode which was terrible and did not work correctly. I tried several \ntimes to fix the summary table but to many things allowed it to get \nout of sync. Keeping the Ending and Beginning Balance correct was to \nmuch trouble and i needed to get numbers we can trust to the accountant. \n\nThe developers of the code got credits and debits backwards so instead \nof fixing the code they just added code to flip the values on the front \nend. Its really annoying. At this point if i could go back 7 months \nago i would not purchased this software if i had known what i know now.\n\nI've had to make all kinds of changes i never intended to make in order \nto get the stuff to balance and agree. I've spent the last 3 months in \ncode review fixing things that allow accounts to get out of balance and \nstop stupid things from happening, like posting GL Transactions into \nnon-existing accounting periods. the list of things i have to fix is \ngetting dam long.\n\n\n\n\n\n\n\n\nit worked it had couple missing parts but it worked and ran in 3.3\nseconds.  
Thanks for this \ni need to review the result and balance it to my results as the\nAccountant already went through and balanced some accounts by hand to\nverify my results \n\n<<begin quote>>\n You might want to consider a\ndenormalized summary table that contains this information (and maybe\nmore) maintained by a trigger or regularly invoked stored-procedure and\nthen you can select from *that* with much less agony.\n<<end quote>>\n\nI just dumped the summary table because it kept getting out of balance\nall the time and was missing accounts that did not have transaction in\nthem for given period.  Again i did not lay out the table nor the old\ncode which was terrible and did not work  correctly.   I tried several\ntimes to fix the summary table  but to many  things allowed it to get\nout of sync.  Keeping the Ending and Beginning Balance correct was to\nmuch trouble and i needed to get numbers we can trust to the\naccountant.  \n\nThe developers of the code got credits and debits backwards so instead\nof fixing the code they just added code to flip the values on the front\nend.  Its really annoying.  At this point if i could go back 7 months\nago i would not purchased this software if i had known what i know now.\n\n\nI've had to make all kinds of changes i never intended to make in order\nto get the stuff to balance and agree. I've spent the last 3 months in\ncode review fixing things that allow accounts to get out of balance and\nstop stupid things from happening, like posting GL Transactions into\nnon-existing accounting periods.  the list of things i have to fix is\ngetting dam long.", "msg_date": "Tue, 06 May 2008 12:22:11 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need to speed up query" }, { "msg_contents": "\n\nPFC wrote:\n>\n>>> What is a \"period\" ? Is it a month, or something more \"custom\" ? \n>>> Can periods overlap ?\n>\n>> No periods can never overlap. If the periods did you would be in \n>> violation of many tax laws around the world. Plus it you would not \n>> know how much money you are making or losing.\n>\n> I was wondering if you'd be using the same query to compute how \n> much was gained every month and every week, which would have \n> complicated things.\n> But now it's clear.\n>\n>> To make this really funky you can have a Fiscal Calendar year start \n>> June 15 2008 and end on June 14 2009\n>\n> Don't you just love those guys ? Always trying new tricks to make \n> your life more interesting.\n\nThats been around been around a long time. You can go back a few \nhundreds years\n\n\n>>> Note that here you are scanning the entire table multiple times, \n>>> the complexity of this is basically (rows in gltrans)^2 which is \n>>> something you'd like to avoid.\n>>>\n>> For accounting purposes you need to know the Beginning Balances, \n>> Debits, Credits, Difference between Debits to Credits and the \n>> Ending Balance for each account. We have 133 accounts with \n>> presently 12 periods defined so we end up 1596 rows returned for this \n>> query.\n>\n> Alright, I propose a solution which only works when periods don't \n> overlap.\n> It will scan the entire table, but only once, not many times as \n> your current query does.\n>\n>> So period 1 should have for the most part have Zero for Beginning \n>> Balances for most types of Accounts. Period 2 is Beginning Balance \n>> is Period 1 Ending Balance, Period 3 is Period 2 ending balance so \n>> and so on forever.\n>\n> Precisely. 
So, it is not necessary to recompute everything for \n> each period.\n> Use the previous period's ending balance as the current period's \n> starting balance...\n>\n> There are several ways to do this.\n> First, you could use your current query, but only compute the sum \n> of what happened during a period, for each period, and store that in a \n> temporary table.\n> Then, you use a plpgsql function, or you do that in your client, \n> you take the rows in chronological order, you sum them as they come, \n> and you get your balances. Use a NUMERIC type, not a FLOAT, to avoid \n> rounding errors.\n>\n> The other solution does the same thing but optimizes the first \n> step like this :\n> INSERT INTO temp_table SELECT period, sum(...) GROUP BY period\n>\n> To do this you must be able to compute the period from the date \n> and not the other way around. You could store a period_id in your \n> table, or use a function.\n>\n> Another much more efficient solution would be to have a summary \n> table which keeps the summary data for each period, with beginning \n> balance and end balance. This table will only need to be updated when \n> someone finds an old receipt in their pocket or something.\n>\n\nAs i posted earlier the software did do this but it has so many bugs \nelse where in the code it allows it get out of balance to what really is \nhappening. I spent a several weeks trying to get this working and find \nall the places it went wrong. I gave up and did this query which took \na day write and balance to a point that i turned it over to the \naccountant. I redid the front end and i'm off to the races and Fixing \nother critical problems.\n\nAll i need to do is take Shanun Thomas code and replace the View this \nselect statement creates\n\n\n>> This falls under the stupid question and i'm just curious what other \n>> people think what makes a query complex?\n>\n> I have some rather complex queries which postgres burns in a few \n> milliseconds.\n> You could define complexity as the amount of brain sweat that went \n> into writing that query.\n> You could also define complexity as O(n) or O(n^2) etc, for \n> instance your query (as written) is O(n^2) which is something you \n> don't want, I've seen stuff that was O(2^n) or worse, O(n!) in \n> software written by drunk students, in this case getting rid of it is \n> an emergency...\n>\n\nThanks for your help and ideas i really appreciate it.\n", "msg_date": "Tue, 06 May 2008 12:41:55 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need to speed up query" } ]
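For reference, a sketch (not part of the original thread) of the two-column index that resolved the bitmap-AND problem discussed above. The index names are invented here; the table and column names are taken from the posted query plans:

CREATE INDEX gltrans_accnt_id_date_idx
    ON gltrans (gltrans_accnt_id, gltrans_date);

-- Every aggregate in the posted query also filters on gltrans_posted, so a
-- partial index is a possible further refinement (an assumption, not
-- something tested in the thread):
CREATE INDEX gltrans_accnt_id_date_posted_idx
    ON gltrans (gltrans_accnt_id, gltrans_date)
    WHERE gltrans_posted;

ANALYZE gltrans;

Putting the account id first places the equality condition (gltrans_accnt_id = $1) ahead of the range condition on gltrans_date, which is the usual column order for this kind of composite index.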
[ { "msg_contents": "L.S.\n\nI'm noticing a difference in planning between a join and an in() clause, \nbefore trying to create an independent test-case, I'd like to know if there's \nan obvious reason why this would be happening:\n\n\n=> the relatively simple PLPGSQL si_credit_tree() function has 'ROWS 5' in \nit's definition\n\n\ndf=# select version();\n version\n------------------------------------------------------------------------\n PostgreSQL 8.3.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2\n(1 row)\n\n\n\ndb=# explain analyse\n\tselect sum(si.base_total_val)\n\tfrom sales_invoice si, si_credit_tree(80500007) foo(id)\n\twhere si.id = foo.id;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------\n Aggregate (cost=42.73..42.74 rows=1 width=8) (actual time=0.458..0.459 \nrows=1 loops=1)\n -> Nested Loop (cost=0.00..42.71 rows=5 width=8) (actual \ntime=0.361..0.429 rows=5 loops=1)\n -> Function Scan on si_credit_tree foo (cost=0.00..1.30 rows=5 \nwidth=4) (actual time=0.339..0.347 rows=5 loops=1)\n -> Index Scan using sales_invoice_pkey on sales_invoice si \n(cost=0.00..8.27 rows=1 width=12) (actual time=0.006..0.008 rows=1 loops=5)\n Index Cond: (si.id = foo.id)\n\nTotal runtime: 0.562 ms\n\n\n\n\ndb=# explain analyse\n\tselect sum(base_total_val)\n\tfrom sales_invoice\n\twhere id in (select id from si_credit_tree(80500007));\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------\n Aggregate (cost=15338.31..15338.32 rows=1 width=8) (actual \ntime=3349.401..3349.402 rows=1 loops=1)\n -> Seq Scan on sales_invoice (cost=0.00..15311.19 rows=10846 width=8) \n(actual time=0.781..3279.046 rows=21703 loops=1)\n Filter: (subplan)\n SubPlan\n -> Function Scan on si_credit_tree (cost=0.00..1.30 rows=5 \nwidth=0) (actual time=0.146..0.146 rows=1 loops=21703)\n\nTotal runtime: 3349.501 ms\n\n\n\n\n\nI'd hoped the planner would use the ROWS=5 knowledge a bit better:\n\n\ndb=# explain analyse\n\tselect sum(base_total_val)\n\tfrom sales_invoice\n\twhere id in (80500007,80500008,80500009,80500010,80500011);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=40.21..40.22 rows=1 width=8) (actual time=0.105..0.106 \nrows=1 loops=1)\n -> Bitmap Heap Scan on sales_invoice (cost=21.29..40.19 rows=5 width=8) \n(actual time=0.061..0.070 rows=5 loops=1)\n Recheck Cond: (id = ANY \n('{80500007,80500008,80500009,80500010,80500011}'::integer[]))\n -> Bitmap Index Scan on sales_invoice_pkey (cost=0.00..21.29 rows=5 \nwidth=0) (actual time=0.049..0.049 rows=5 loops=1)\n Index Cond: (id = ANY \n('{80500007,80500008,80500009,80500010,80500011}'::integer[]))\n\nTotal runtime: 0.201 ms\n\n\n\n\n\n\n-- \nBest,\n\n\n\n\nFrank.\n", "msg_date": "Tue, 6 May 2008 10:21:43 +0200", "msg_from": "Frank van Vugt <[email protected]>", "msg_from_op": true, "msg_subject": "plan difference between set-returning function with ROWS within IN()\n\tand a plain join" }, { "msg_contents": "On Tue, 06 May 2008 10:21:43 +0200, Frank van Vugt <[email protected]> \nwrote:\n\n> L.S.\n>\n> I'm noticing a difference in planning between a join and an in() clause,\n> before trying to create an independent test-case, I'd like to know if \n> there's\n> an obvious reason why this would be happening:\n\nIs the function STABLE ?\n", "msg_date": "Tue, 06 May 2008 11:53:17 +0200", 
"msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan difference between set-returning function with ROWS within\n\tIN() and a plain join" }, { "msg_contents": "> > I'm noticing a difference in planning between a join and an in() clause,\n> > before trying to create an independent test-case, I'd like to know if\n> > there's\n> > an obvious reason why this would be happening:\n>\n> Is the function STABLE ?\n\nYep.\n\nFor the record, even changing it to immutable doesn't make a difference in \nperformance here.\n\n\n\n-- \nBest,\n\n\n\n\nFrank.\n", "msg_date": "Tue, 6 May 2008 13:55:52 +0200", "msg_from": "Frank van Vugt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plan difference between set-returning function with ROWS within\n\tIN() and a plain join" }, { "msg_contents": "Frank van Vugt <[email protected]> writes:\n> db=# explain analyse\n> \tselect sum(base_total_val)\n> \tfrom sales_invoice\n> \twhere id in (select id from si_credit_tree(80500007));\n\nDid you check whether this query even gives the right answer? The\nEXPLAIN output shows that 21703 rows of sales_invoice are being\nselected, which is a whole lot different than the other behavior.\n\nI think you forgot the alias foo(id) in the subselect and it's\nactually reducing to \"where id in (id)\", ie, TRUE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 May 2008 10:17:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan difference between set-returning function with ROWS within\n\tIN() and a plain join" }, { "msg_contents": "> > db=# explain analyse\n> > \tselect sum(base_total_val)\n> > \tfrom sales_invoice\n> > \twhere id in (select id from si_credit_tree(80500007));\n>\n> Did you check whether this query even gives the right answer?\n\nYou knew the right answer to that already ;)\n\n> I think you forgot the alias foo(id) in the subselect and it's\n> actually reducing to \"where id in (id)\", ie, TRUE.\n\nTricky, but completely obvious once pointed out, that's _exactly_ what was \nhappening.\n\n\ndb=# explain analyse\n\tselect sum(base_total_val)\n\tfrom sales_invoice\n\twhere id in (select id from si_credit_tree(80500007) foo(id));\n QUERY \nPLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=42.79..42.80 rows=1 width=8) (actual time=0.440..0.441 \nrows=1 loops=1)\n -> Nested Loop (cost=1.31..42.77 rows=5 width=8) (actual \ntime=0.346..0.413 rows=5 loops=1)\n -> HashAggregate (cost=1.31..1.36 rows=5 width=4) (actual \ntime=0.327..0.335 rows=5 loops=1)\n -> Function Scan on si_credit_tree foo (cost=0.00..1.30 \nrows=5 width=4) (actual time=0.300..0.306 rows=5 loops=1)\n -> Index Scan using sales_invoice_pkey on sales_invoice \n(cost=0.00..8.27 rows=1 width=12) (actual time=0.006..0.008 rows=1 loops=5)\n Index Cond: (sales_invoice.id = foo.id)\n\nTotal runtime: 0.559 ms\n\n\n\n\nThanks for the replies!\n\n\n-- \nBest,\n\n\n\n\nFrank.\n", "msg_date": "Tue, 6 May 2008 17:27:40 +0200", "msg_from": "Frank van Vugt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plan difference between set-returning function with ROWS within\n\tIN() and a plain join" }, { "msg_contents": "On Tue, May 6, 2008 at 11:27 AM, Frank van Vugt <[email protected]> wrote:\n>> > db=# explain analyse\n>> > select sum(base_total_val)\n>> > from sales_invoice\n>> > where id in (select id from si_credit_tree(80500007));\n>>\n>> 
Did you check whether this query even gives the right answer?\n>\n> You knew the right answer to that already ;)\n>\n>> I think you forgot the alias foo(id) in the subselect and it's\n>> actually reducing to \"where id in (id)\", ie, TRUE.\n>\n> Tricky, but completely obvious once pointed out, that's _exactly_ what was\n> happening.\n\nThis is one of the reasons why, for a table named 'foo', I name the\ncolumns 'foo_id', not 'id'. Also, if you prefix the id column with\nthe table name, you can usually use JOIN USING which is a little bit\ntighter and easier than JOIN ON.\n\nmerlin\n", "msg_date": "Sat, 10 May 2008 07:42:28 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan difference between set-returning function with ROWS within\n\tIN() and a plain join" } ]
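A minimal illustration (not taken from the original messages) of the aliasing pitfall identified above, using the table and function names from the thread; the comments restate Tom Lane's explanation:

-- Broken form: without a column alias, "id" inside the subquery resolves to
-- sales_invoice.id, so the predicate degenerates to "id IN (id)" and every
-- non-null row passes, forcing a scan of the whole table.
SELECT sum(base_total_val)
  FROM sales_invoice
 WHERE id IN (SELECT id FROM si_credit_tree(80500007));

-- Fixed form: the alias foo(id) exposes the function's output column, so the
-- planner can use the function's ROWS 5 estimate and the primary key index.
SELECT sum(base_total_val)
  FROM sales_invoice
 WHERE id IN (SELECT id FROM si_credit_tree(80500007) AS foo(id));

Merlin's naming convention (foo_id rather than id) sidesteps this kind of accidental capture and also makes JOIN ... USING possible.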
[ { "msg_contents": "I've just discovered a problem with quite simple query. It's really \nconfusing me.\nPostgresql 8.3.1, random_page_cost=1.1. All tables were analyzed before \nquery.\n\nEXPLAIN ANALYZE\nSELECT i.c, d.r\nFROM i\n JOIN d ON d.cr = i.c\nWHERE i.dd between '2007-08-01' and '2007-08-30'\n\nHash Join (cost=2505.42..75200.16 rows=98275 width=16) (actual \ntime=2728.959..23118.632 rows=93159 loops=1)\n Hash Cond: (d.c = i.c)\n -> Seq Scan on d d (cost=0.00..61778.75 rows=5081098 width=16) \n(actual time=0.075..8859.807 rows=5081098 loops=1)\n -> Hash (cost=2226.85..2226.85 rows=89862 width=8) (actual \ntime=416.526..416.526 rows=89473 loops=1)\n -> Index Scan using i_dd on i (cost=0.00..2226.85 rows=89862 \nwidth=8) (actual time=0.078..237.504 rows=89473 loops=1)\n Index Cond: ((dd >= '2007-08-01'::date) AND (dd <= \n'2007-08-30'::date))\nTotal runtime: 23246.640 ms\n\nEXPLAIN ANALYZE\nSELECT i.*, d.r\nFROM i\n JOIN d ON d.c = i.c\nWHERE i.dd between '2007-08-01' and '2007-08-30'\n\nNested Loop (cost=0.00..114081.69 rows=98275 width=416) (actual \ntime=0.114..1711.256 rows=93159 loops=1)\n -> Index Scan using i_dd on i (cost=0.00..2226.85 rows=89862 \nwidth=408) (actual time=0.075..207.574 rows=89473 loops=1)\n Index Cond: ((dd >= '2007-08-01'::date) AND (dd <= \n'2007-08-30'::date))\n -> Index Scan using d_uniq on d (cost=0.00..1.24 rows=2 width=16) \n(actual time=0.007..0.009 rows=1 loops=89473)\n Index Cond: (d.c = i.c)\nTotal runtime: 1839.228 ms\n\nAnd this never happened with LEFT JOIN.\n\nEXPLAIN ANALYZE\nSELECT i.c, d.r\nFROM i\n LEFT JOIN d ON d.cr = i.c\nWHERE i.dd between '2007-08-01' and '2007-08-30'\n\nNested Loop Left Join (cost=0.00..114081.69 rows=98275 width=16) \n(actual time=0.111..1592.225 rows=93159 loops=1)\n -> Index Scan using i_dd on i (cost=0.00..2226.85 rows=89862 \nwidth=8) (actual time=0.072..210.421 rows=89473 loops=1)\n Index Cond: ((dd >= '2007-08-01'::date) AND (dd <= \n'2007-08-30'::date))\n -> Index Scan using d_uniq on d (cost=0.00..1.24 rows=2 width=16) \n(actual time=0.007..0.009 rows=1 loops=89473)\n Index Cond: (d.c = i.c)\n\"Total runtime: 1720.185 ms\"\n\nd_uniq is unique index on d(r, ...).\n\n", "msg_date": "Tue, 06 May 2008 20:14:03 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Seqscan problem" }, { "msg_contents": "Vlad Arkhipov <[email protected]> writes:\n> I've just discovered a problem with quite simple query. It's really \n> confusing me.\n> Postgresql 8.3.1, random_page_cost=1.1. All tables were analyzed before \n> query.\n\nWhat have you got effective_cache_size set to?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 May 2008 10:22:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan problem " }, { "msg_contents": "Tom Lane writes:\n> Vlad Arkhipov <[email protected]> writes:\n> \n>> I've just discovered a problem with quite simple query. It's really \n>> confusing me.\n>> Postgresql 8.3.1, random_page_cost=1.1. All tables were analyzed before \n>> query.\n>> \n>\n> What have you got effective_cache_size set to?\n>\n> \t\t\tregards, tom lane\n>\n> \n\n1024M\n\n\n\n\n\n\nTom Lane writes:\n\nVlad Arkhipov <[email protected]> writes:\n \n\nI've just discovered a problem with quite simple query. It's really \nconfusing me.\nPostgresql 8.3.1, random_page_cost=1.1. 
All tables were analyzed before \nquery.\n>> \n>\n> What have you got effective_cache_size set to?\n>\n> \t\t\tregards, tom lane\n>\n> \n\n1024M", "msg_date": "Wed, 07 May 2008 09:48:36 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seqscan problem" } ]
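The thread above ends without a resolution. A sketch of how the comparison could be continued — these commands are an editorial assumption and do not appear in the original messages; the settings shown are only the ones already mentioned in the thread (random_page_cost = 1.1, effective_cache_size = 1024MB):

-- Session-local experiment only.
SET random_page_cost = 1.1;
SET effective_cache_size = '1024MB';

EXPLAIN ANALYZE
SELECT i.c, d.r
  FROM i
  JOIN d ON d.cr = i.c
 WHERE i.dd BETWEEN '2007-08-01' AND '2007-08-30';

-- Temporarily rule out the hash join to see what the nested-loop plan
-- actually costs under the same settings.
SET enable_hashjoin = off;
EXPLAIN ANALYZE
SELECT i.c, d.r
  FROM i
  JOIN d ON d.cr = i.c
 WHERE i.dd BETWEEN '2007-08-01' AND '2007-08-30';
RESET enable_hashjoin;

Comparing the two EXPLAIN ANALYZE outputs shows whether the hash join is genuinely cheaper or only estimated to be.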
[ { "msg_contents": "Hello,\n\nI have a query that runs for hours when joining 4 tables but takes \nmilliseconds when joining one MORE table to the query.\nI have One big table, t_event (8 million rows) and 4 small tables \n(t_network,t_system,t_service, t_interface, all < 1000 rows). This \nquery takes a few milliseconds :\n[code]\nselect * from t_Event event\ninner join t_Service service on event.service_id=service.id\ninner join t_System system on service.system_id=system.id\ninner join t_Interface interface on system.id=interface.system_id\ninner join t_Network network on interface.network_id=network.id\nwhere (network.customer_id=1) order by event.c_date desc limit 25\n\n\"Limit (cost=23981.18..23981.18 rows=1 width=977)\"\n\" -> Sort (cost=23981.18..23981.18 rows=1 width=977)\"\n\" Sort Key: this_.c_date\"\n\" -> Nested Loop (cost=0.00..23981.17 rows=1 width=977)\"\n\" -> Nested Loop (cost=0.00..23974.89 rows=1 width=961)\"\n\" -> Nested Loop (cost=0.00..191.42 rows=1 \nwidth=616)\"\n\" Join Filter: (service_s3_.system_id = \nservice1_.system_id)\"\n\" -> Nested Loop (cost=0.00..9.29 rows=1 \nwidth=576)\"\n\" -> Seq Scan on t_network \nservice_s4_ (cost=0.00..1.01 rows=1 width=18)\"\n\" Filter: (customer_id = 1)\"\n\" -> Index Scan using \ninterface_network_id_idx on t_interface service_s3_ (cost=0.00..8.27 \nrows=1 width=558)\"\n\" Index Cond: \n(service_s3_.network_id = service_s4_.id)\"\n\" -> Seq Scan on t_service service1_ \n(cost=0.00..109.28 rows=5828 width=40)\"\n\" -> Index Scan using event_svc_id_idx on t_event \nthis_ (cost=0.00..23681.12 rows=8188 width=345)\"\n\" Index Cond: (this_.service_id = \nservice1_.id)\"\n\" -> Index Scan using t_system_pkey on t_system \nservice_s2_ (cost=0.00..6.27 rows=1 width=16)\"\n\" Index Cond: (service_s2_.id = service1_.system_id)\"\n[/code]\n\nThis one takes HOURS, but I'm joining one table LESS :\n\n[code]\nselect * from t_Event event\ninner join t_Service service on event.service_id=service.id\ninner join t_System system on service.system_id=system.id\ninner join t_Interface interface on system.id=interface.system_id\nwhere (interface.network_id=1) order by event.c_date desc limit 25\n\n\"Limit (cost=147.79..2123.66 rows=10 width=959)\"\n\" -> Nested Loop (cost=147.79..2601774.46 rows=13167 width=959)\"\n\" Join Filter: (service1_.id = this_.service_id)\"\n\" -> Index Scan Backward using event_date_idx on t_event \nthis_ (cost=0.00..887080.22 rows=8466896 width=345)\"\n\" -> Materialize (cost=147.79..147.88 rows=9 width=614)\"\n\" -> Hash Join (cost=16.56..147.79 rows=9 width=614)\"\n\" Hash Cond: (service1_.system_id = service_s2_.id)\"\n\" -> Seq Scan on t_service service1_ \n(cost=0.00..109.28 rows=5828 width=40)\"\n\" -> Hash (cost=16.55..16.55 rows=1 width=574)\"\n\" -> Nested Loop (cost=0.00..16.55 rows=1 \nwidth=574)\"\n\" -> Index Scan using \ninterface_network_id_idx on t_interface service_s3_ (cost=0.00..8.27 \nrows=1 width=558)\"\n\" Index Cond: (network_id = 1)\"\n\" -> Index Scan using t_system_pkey on \nt_system service_s2_ (cost=0.00..8.27 rows=1 width=16)\"\n\" Index Cond: (service_s2_.id = \nservice_s3_.system_id)\"\n[/code]\n\nMy understanding is that in the first case the sort is done after all \nthe table joins and filtering, but in the second case ALL the rows in \nt_event are scanned and sorted before the join. There is an index on \nthe sorting column. If I remove this index, the query runs very fast. 
\nBut I still need this index for other queries.So I must force the \nplanner to do the sort after the join, in the second case. How can i \ndo that?\n\nThanks a lot for your help,\n\nAntoine\n\n", "msg_date": "Tue, 6 May 2008 17:03:44 +0200", "msg_from": "Antoine Baudoux <[email protected]>", "msg_from_op": true, "msg_subject": "multiple joins + Order by + LIMIT query performance issue" }, { "msg_contents": "Antoine,\n\nOn Tue, May 6, 2008 at 5:03 PM, Antoine Baudoux <[email protected]> wrote:\n> \"Limit (cost=23981.18..23981.18 rows=1 width=977)\"\n> \" -> Sort (cost=23981.18..23981.18 rows=1 width=977)\"\n> \" Sort Key: this_.c_date\"\n\nCan you please provide the EXPLAIN ANALYZE output instead of EXPLAIN?\n\nThanks.\n\n-- \nGuillaume\n", "msg_date": "Tue, 6 May 2008 17:38:23 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance issue" }, { "msg_contents": "Here is the explain analyse for the first query, the other is still \nrunning...\n\n\nexplain analyse select * from t_Event event\ninner join t_Service service on event.service_id=service.id\ninner join t_System system on service.system_id=system.id\ninner join t_Interface interface on system.id=interface.system_id\ninner join t_Network network on interface.network_id=network.id\nwhere (network.customer_id=1) order by event.c_date desc limit 25\n\nLimit (cost=11761.44..11761.45 rows=1 width=976) (actual \ntime=0.047..0.047 rows=0 loops=1)\n -> Sort (cost=11761.44..11761.45 rows=1 width=976) (actual \ntime=0.045..0.045 rows=0 loops=1)\n Sort Key: event.c_date\n Sort Method: quicksort Memory: 17kB\n -> Nested Loop (cost=0.00..11761.43 rows=1 width=976) \n(actual time=0.024..0.024 rows=0 loops=1)\n -> Nested Loop (cost=0.00..11755.15 rows=1 width=960) \n(actual time=0.024..0.024 rows=0 loops=1)\n -> Nested Loop (cost=0.00..191.42 rows=1 \nwidth=616) (actual time=0.024..0.024 rows=0 loops=1)\n Join Filter: (interface.system_id = \nservice.system_id)\n -> Nested Loop (cost=0.00..9.29 rows=1 \nwidth=576) (actual time=0.023..0.023 rows=0 loops=1)\n -> Seq Scan on t_network network \n(cost=0.00..1.01 rows=1 width=18) (actual time=0.009..0.009 rows=1 \nloops=1)\n Filter: (customer_id = 1)\n -> Index Scan using \ninterface_network_id_idx on t_interface interface (cost=0.00..8.27 \nrows=1 width=558) (actual time=0.011..0.011 rows=0 loops=1)\n Index Cond: \n(interface.network_id = network.id)\n -> Seq Scan on t_service service \n(cost=0.00..109.28 rows=5828 width=40) (never executed)\n -> Index Scan using event_svc_id_idx on t_event \nevent (cost=0.00..11516.48 rows=3780 width=344) (never executed)\n Index Cond: (event.service_id = service.id)\n -> Index Scan using t_system_pkey on t_system system \n(cost=0.00..6.27 rows=1 width=16) (never executed)\n Index Cond: (system.id = service.system_id)\nTotal runtime: 0.362 ms\n\n\n\nOn May 6, 2008, at 5:38 PM, Guillaume Smet wrote:\n\n> Antoine,\n>\n> On Tue, May 6, 2008 at 5:03 PM, Antoine Baudoux <[email protected]> wrote:\n>> \"Limit (cost=23981.18..23981.18 rows=1 width=977)\"\n>> \" -> Sort (cost=23981.18..23981.18 rows=1 width=977)\"\n>> \" Sort Key: this_.c_date\"\n>\n> Can you please provide the EXPLAIN ANALYZE output instead of EXPLAIN?\n>\n> Thanks.\n>\n> -- \n> Guillaume\n\n", "msg_date": "Tue, 6 May 2008 18:42:34 +0200", "msg_from": "Antoine Baudoux <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance issue" }, { "msg_contents": 
"On Tue, 2008-05-06 at 16:03 +0100, Antoine Baudoux wrote:\n\n> My understanding is that in the first case the sort is\n> done after all the table joins and filtering, but in the\n> second case ALL the rows in t_event are scanned and sorted\n> before the join.\n\nYou've actually run into a problem that's bitten us in the ass a couple\nof times. The problem with your second query is that it's *too*\nefficient. You'll notice the first plan uses a bevy of nest-loops,\nwhich is very risky if the row estimates are not really really accurate.\nThe planner says \"Hey, customer_id=1 could be several rows in the\nt_network table, but not too many... I better check them one by one.\"\nI've turned off nest-loops sometimes to avoid queries that would run\nseveral hours due to mis-estimation, but it looks like yours was just\nfine.\n\nThe second query says \"Awesome! Only one network... I can just search\nthe index of t_event backwards for this small result set!\"\n\nBut here's the rub... try your query *without* the limit clause, and you\nmay find it's actually faster, because the planner suddenly thinks it\nwill have to scan the whole table, so it choses an alternate plan\n(probably back to the nest-loop). Alternatively, take off the order-by\nclause, and it'll remove the slow backwards index-scan.\n\nI'm not sure what causes this, but the problem with indexes is that\nthey're not necessarily in the order you want unless you also cluster\nthem, so a backwards index scan is almost always the wrong answer.\nPersonally I consider this a bug, and it's been around since at least\nthe 8.1 tree. The only real answer is that you have a fast version of\nthe query, so try and play with it until it acts the way you want.\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\n", "msg_date": "Tue, 6 May 2008 11:43:40 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance\n\tissue" }, { "msg_contents": "\tThanks a lot for your answer, there are some points I didnt understand\n\nOn May 6, 2008, at 6:43 PM, Shaun Thomas wrote:\n\n>\n> The second query says \"Awesome! Only one network... I can just search\n> the index of t_event backwards for this small result set!\"\n>\n\nShouldnt It be the opposite? considering that only a few row must be \n\"joined\" (Sorry but I'm not familiar with DBMS terms) with the\nt_event table, why not simply look up the corresponding rows in the \nt_event table using the service_id foreign key, then do the sort? Isnt \nthe planner fooled by the index on the sorting column? If I remove the \nindex the query runs OK.\n\n\n> But here's the rub... try your query *without* the limit clause, and \n> you\n> may find it's actually faster, because the planner suddenly thinks it\n> will have to scan the whole table, so it choses an alternate plan\n> (probably back to the nest-loop). Alternatively, take off the order- \n> by\n> clause, and it'll remove the slow backwards index-scan.\n\nYou are right, if i remove the order-by clause It doesnt backwards \nindex-scan.\n\nAnd if I remove the limit and keep the order-by clause, the backwards \nindex-scan is gone too, and the query runs in a few millisecs!!\n\nThis is crazy, so simply by adding a LIMIT to a query, the planning is \nchanged in a very bad way. 
Does the planner use the LIMIT as a sort of \nhint?\n\n\nThank you for your explanations,\n\n\nAntoine Baudoux\n", "msg_date": "Tue, 6 May 2008 19:24:28 +0200", "msg_from": "Antoine Baudoux <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance issue" }, { "msg_contents": "Shaun Thomas <[email protected]> writes:\n> I'm not sure what causes this, but the problem with indexes is that\n> they're not necessarily in the order you want unless you also cluster\n> them, so a backwards index scan is almost always the wrong answer.\n\nWhether the scan is forwards or backwards has nothing to do with it.\nThe planner is using the index ordering to avoid having to do a\nfull-table scan and sort. It's essentially betting that it will find\n25 (or whatever your LIMIT is) rows that satisfy the other query\nconditions soon enough in the index scan to make this faster than the\nfull-scan approach. If there are a lot fewer matching rows than it\nexpects, or if the target rows aren't uniformly scattered in the index\nordering, then this way can be a loss; but when it's a win it can be\na big win, too, so \"it's a bug take it out\" is an unhelpful opinion.\n\nIf a misestimate of this kind is bugging you enough that you're willing\nto change the query, I think you can fix it like this:\n\n\tselect ... from foo order by x limit n;\n=>\n\tselect ... from (select ... from foo order by x) ss limit n;\n\nThe subselect will be planned without awareness of the LIMIT, so you\nshould get a plan using a sort rather than one that bets on the LIMIT\nbeing reached quickly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 May 2008 13:59:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance issue " }, { "msg_contents": "Antoine Baudoux wrote:\n> Here is the explain analyse for the first query, the other is still \n> running...\n> \n> \n> explain analyse select * from t_Event event\n> inner join t_Service service on event.service_id=service.id\n> inner join t_System system on service.system_id=system.id\n> inner join t_Interface interface on system.id=interface.system_id\n> inner join t_Network network on interface.network_id=network.id\n> where (network.customer_id=1) order by event.c_date desc limit 25\n> \n> Limit (cost=11761.44..11761.45 rows=1 width=976) (actual \n> time=0.047..0.047 rows=0 loops=1)\n> -> Sort (cost=11761.44..11761.45 rows=1 width=976) (actual \n> time=0.045..0.045 rows=0 loops=1)\n> Sort Key: event.c_date\n> Sort Method: quicksort Memory: 17kB\n> -> Nested Loop (cost=0.00..11761.43 rows=1 width=976) (actual \n> time=0.024..0.024 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..11755.15 rows=1 width=960) \n> (actual time=0.024..0.024 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..191.42 rows=1 \n> width=616) (actual time=0.024..0.024 rows=0 loops=1)\n> Join Filter: (interface.system_id = \n> service.system_id)\n> -> Nested Loop (cost=0.00..9.29 rows=1 \n> width=576) (actual time=0.023..0.023 rows=0 loops=1)\n> -> Seq Scan on t_network network \n> (cost=0.00..1.01 rows=1 width=18) (actual time=0.009..0.009 rows=1 loops=1)\n> Filter: (customer_id = 1)\n> -> Index Scan using \n> interface_network_id_idx on t_interface interface (cost=0.00..8.27 \n> rows=1 width=558) (actual time=0.011..0.011 rows=0 loops=1)\n> Index Cond: (interface.network_id \n> = network.id)\n> -> Seq Scan on t_service service \n> (cost=0.00..109.28 rows=5828 width=40) (never 
executed)\n> -> Index Scan using event_svc_id_idx on t_event \n> event (cost=0.00..11516.48 rows=3780 width=344) (never executed)\n> Index Cond: (event.service_id = service.id)\n> -> Index Scan using t_system_pkey on t_system system \n> (cost=0.00..6.27 rows=1 width=16) (never executed)\n> Index Cond: (system.id = service.system_id)\n> Total runtime: 0.362 ms\n\nAre the queries even returning the same results (except for the extra \ncolumns coming from t_network)? It looks like in this version, the \nnetwork-interface join is performed first, which returns zero rows, so \nthe rest of the joins don't need to be performed at all. That's why it's \nfast.\n\nWhich version of PostgreSQL is this, BTW?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 06 May 2008 19:04:17 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance issue" }, { "msg_contents": "Antoine Baudoux wrote:\n> Here is the explain analyse for the first query, the other is still \n> running...\n> \n> \n> explain analyse select * from t_Event event\n> inner join t_Service service on event.service_id=service.id\n> inner join t_System system on service.system_id=system.id\n> inner join t_Interface interface on system.id=interface.system_id\n> inner join t_Network network on interface.network_id=network.id\n> where (network.customer_id=1) order by event.c_date desc limit 25\n> \n> Limit (cost=11761.44..11761.45 rows=1 width=976) (actual \n> time=0.047..0.047 rows=0 loops=1)\n> -> Sort (cost=11761.44..11761.45 rows=1 width=976) (actual \n> time=0.045..0.045 rows=0 loops=1)\n> Sort Key: event.c_date\n> Sort Method: quicksort Memory: 17kB\n> -> Nested Loop (cost=0.00..11761.43 rows=1 width=976) (actual \n> time=0.024..0.024 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..11755.15 rows=1 width=960) \n> (actual time=0.024..0.024 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..191.42 rows=1 \n> width=616) (actual time=0.024..0.024 rows=0 loops=1)\n> Join Filter: (interface.system_id = \n> service.system_id)\n> -> Nested Loop (cost=0.00..9.29 rows=1 \n> width=576) (actual time=0.023..0.023 rows=0 loops=1)\n> -> Seq Scan on t_network network \n> (cost=0.00..1.01 rows=1 width=18) (actual time=0.009..0.009 rows=1 loops=1)\n> Filter: (customer_id = 1)\n> -> Index Scan using \n> interface_network_id_idx on t_interface interface (cost=0.00..8.27 \n> rows=1 width=558) (actual time=0.011..0.011 rows=0 loops=1)\n> Index Cond: (interface.network_id \n> = network.id)\n> -> Seq Scan on t_service service \n> (cost=0.00..109.28 rows=5828 width=40) (never executed)\n> -> Index Scan using event_svc_id_idx on t_event \n> event (cost=0.00..11516.48 rows=3780 width=344) (never executed)\n> Index Cond: (event.service_id = service.id)\n> -> Index Scan using t_system_pkey on t_system system \n> (cost=0.00..6.27 rows=1 width=16) (never executed)\n> Index Cond: (system.id = service.system_id)\n> Total runtime: 0.362 ms\n\nAre the queries returning the same results (except for the extra columns \ncoming from t_network)? It looks like in this version, the \nnetwork-interface join is performed first, which returns zero rows, so \nthe rest of the joins don't need to be performed at all. 
That's why it's \nfast.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 06 May 2008 19:06:35 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance issue" }, { "msg_contents": "\nOn Tue, 2008-05-06 at 18:59 +0100, Tom Lane wrote:\n\n> Whether the scan is forwards or backwards has nothing\n> to do with it. The planner is using the index ordering\n> to avoid having to do a full-table scan and sort.\n\nOh, I know that. I just noticed that when this happened to us, more\noften than not, it was a reverse index scan that did it. The thing that\nannoyed me most was when it happened on an index that, even on a table\nhaving 20M rows, the cardinality is < 10 on almost every value of that\nindex. In our case, having a \"LIMIT 1\" was much worse than just getting\nback 5 or 10 rows and throwing away everything after the first one.\n\n> but when it's a win it can be a big win, too, so \"it's\n> a bug take it out\" is an unhelpful opinion.\n\nThat's just it... it *can* be a big win. But when it's a loss, you're\nindex-scanning a 20M+ row table for no reason. We got around it,\nobviously, but it was a definite surprise when a query that normally\nruns in 0.5ms time randomly and inexplicably runs at 4-120s. This is\ndisaster for a feed loader chewing through a few ten-thousand entries.\n\nBut that's just me grousing about not having query hints or being able\nto tell Postgres to never, ever, ever index-scan certain tables. :)\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\n", "msg_date": "Tue, 6 May 2008 13:14:29 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance\n\tissue" }, { "msg_contents": "\nOn Tue, 2008-05-06 at 18:24 +0100, Antoine Baudoux wrote:\n\n> Isnt the planner fooled by the index on the sorting column?\n> If I remove the index the query runs OK.\n\nIn your case, for whatever reason, the stats say doing the index scan on\nthe sorted column will give you the results faster. That isn't always\nthe case, and sometimes you can give the same query different where\nclauses and that same slow-index-scan will randomly be fast. It's all\nbased on the index distribution and the particular values being fetched.\n\nThis goes back to what Tom said. If you know a \"miss\" can result in\nterrible performance, it's best to just recode the query to avoid the\nsituation.\n\n> This is crazy, so simply by adding a LIMIT to a query, the planning is\n> changed in a very bad way. Does the planner use the LIMIT as a sort of\n> hint?\n\nYes. That's actually what tells it the index scan can be a \"big win.\"\nIf it scans the index backwards on values returned from some of your\njoins, it may just have to find 25 rows and then it can immediately stop\nscanning and just give you the results. In normal cases, this is a\nmassive performance boost when you have an order clause and are\nexpecting a ton of results, (say you're getting the first 25 rows of\n10000 or something). But if it would be faster to generate the results\nand *then* sort, but Postgres thinks otherwise, you're pretty much\nscrewed.\n\nBut that's the long answer. You have like 3 ways to get around this\nnow, so pick one. 
;)\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\n", "msg_date": "Tue, 6 May 2008 13:24:33 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance\n\tissue" }, { "msg_contents": ">\n> If a misestimate of this kind is bugging you enough that you're \n> willing\n> to change the query, I think you can fix it like this:\n>\n> \tselect ... from foo order by x limit n;\n> =>\n> \tselect ... from (select ... from foo order by x) ss limit n;\n>\n> The subselect will be planned without awareness of the LIMIT, so you\n> should get a plan using a sort rather than one that bets on the LIMIT\n> being reached quickly.\n\nI tried that, using a subquery. Unfortunately this does not change \nanything :\n\nselect * from (select * from t_Event event\ninner join t_Service service on event.service_id=service.id\ninner join t_System system on service.system_id=system.id\ninner join t_Interface interface on system.id=interface.system_id\nwhere (interface.network_id=1) order by event.c_date desc ) ss limit 25\n\n\"Limit (cost=147.79..5563.93 rows=25 width=3672)\"\n\" -> Subquery Scan ss (cost=147.79..2896263.01 rows=13368 \nwidth=3672)\"\n\" -> Nested Loop (cost=147.79..2896129.33 rows=13368 \nwidth=958)\"\n\" Join Filter: (service.id = event.service_id)\"\n\" -> Index Scan Backward using event_date_idx on t_event \nevent (cost=0.00..1160633.69 rows=8569619 width=344)\"\n\" -> Materialize (cost=147.79..147.88 rows=9 width=614)\"\n\" -> Hash Join (cost=16.56..147.79 rows=9 \nwidth=614)\"\n\" Hash Cond: (service.system_id = system.id)\"\n\" -> Seq Scan on t_service service \n(cost=0.00..109.28 rows=5828 width=40)\"\n\" -> Hash (cost=16.55..16.55 rows=1 \nwidth=574)\"\n\" -> Nested Loop (cost=0.00..16.55 \nrows=1 width=574)\"\n\" -> Index Scan using \ninterface_network_id_idx on t_interface interface (cost=0.00..8.27 \nrows=1 width=558)\"\n\" Index Cond: (network_id = \n1)\"\n\" -> Index Scan using \nt_system_pkey on t_system system (cost=0.00..8.27 rows=1 width=16)\"\n\" Index Cond: (system.id = \ninterface.system_id)\"\n\n\nThe worst thing about all this is that there are ZERO rows to join \nwith the t_event table. So the planner decide to index-scan 8 millions \nrow, where there is no hope of finding a match!\nThis seems a very ,very , very poor decision\n", "msg_date": "Wed, 7 May 2008 09:23:51 +0200", "msg_from": "Antoine Baudoux <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance issue " }, { "msg_contents": "On Tue, 6 May 2008, Tom Lane wrote:\n> If a misestimate of this kind is bugging you enough that you're willing\n> to change the query, I think you can fix it like this:\n>\n> \tselect ... from foo order by x limit n;\n> =>\n> \tselect ... from (select ... from foo order by x) ss limit n;\n>\n> The subselect will be planned without awareness of the LIMIT, so you\n> should get a plan using a sort rather than one that bets on the LIMIT\n> being reached quickly.\n\nSurely if that's the case, that in itself is a bug? 
Apart from being \n\"useful\", I mean.\n\nMatthew\n\n-- \n\"Television is a medium because it is neither rare nor well done.\" \n -- Fred Friendly\n", "msg_date": "Wed, 7 May 2008 11:42:23 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance\n issue" }, { "msg_contents": "Ok, I've tried everything, and the planner keeps choosing index scans \nwhen it shouldnt.\n\nIs there a way to disable index scans?\n\n\nAntoine\n", "msg_date": "Fri, 9 May 2008 09:18:39 +0200", "msg_from": "Antoine Baudoux <[email protected]>", "msg_from_op": true, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance issue" }, { "msg_contents": "On Fri, May 9, 2008 at 1:18 AM, Antoine Baudoux <[email protected]> wrote:\n> Ok, I've tried everything, and the planner keeps choosing index scans when\n> it shouldnt.\n>\n> Is there a way to disable index scans?\n\nYou can use \"set enable_indexscan off;\" as the first command I've had\none or two reporting queries in the past that it was a necessity to do\nthat before running certain queries on very large datasets where a seq\nscan would kill performance.\n", "msg_date": "Fri, 9 May 2008 01:53:18 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple joins + Order by + LIMIT query performance issue" } ]
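A quick way to confirm whether the backward index scan chosen because of the LIMIT is really the problem, without rewriting the query, is to disable index scans for a single transaction and compare plans. This is only a sketch built from the settings and table names mentioned in the thread, not a recommended permanent setting:

    BEGIN;
    SET LOCAL enable_indexscan = off;  -- affects only this transaction
    EXPLAIN ANALYZE
    SELECT * FROM t_event event
      JOIN t_service service ON event.service_id = service.id
      JOIN t_system system ON service.system_id = system.id
      JOIN t_interface interface ON system.id = interface.system_id
     WHERE interface.network_id = 1
     ORDER BY event.c_date DESC
     LIMIT 25;
    COMMIT;

If the plan changes shape and the runtime drops, that points at the misestimated backward scan on event_date_idx rather than at the joins themselves.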
[ { "msg_contents": "This falls under the stupid question and i'm just curious what other \npeople think what makes a query complex?\n\n\n\n\n", "msg_date": "Tue, 06 May 2008 10:45:42 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "What constitutes a complex query " }, { "msg_contents": "Justin wrote:\n> This falls under the stupid question and i'm just curious what other \n> people think what makes a query complex?\n\nThere are two kinds:\n\n1. Hard for Postgres to get the answer.\n\n2. Hard for a person to comprehend.\n\nWhich do you mean?\n\nCraig\n", "msg_date": "Tue, 06 May 2008 09:26:59 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What constitutes a complex query" }, { "msg_contents": "On Tue, May 6, 2008 at 9:45 AM, Justin <[email protected]> wrote:\n> This falls under the stupid question and i'm just curious what other people\n> think what makes a query complex?\n\nWell, as mentioned, there's two kinds. some that look big and ugly\nare actually just shovelling data with no fancy interactions between\nsets. Some reporting queries are like this. I've made simple\nreporting queries that took up many pages that were really simple in\nnature and fast on even older pgsql versions (7.2-7.4)\n\nI'd say that the use of correlated subqueries qualifies a query as\ncomplicated. Joining on non-usual pk-fk stuff. the more you're\nmashing one set of data against another, and the odder the way you\nhave to do it, the more complex the query becomes.\n", "msg_date": "Tue, 6 May 2008 10:41:40 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What constitutes a complex query" }, { "msg_contents": "\nOn May 6, 2008, at 8:45 AM, Justin wrote:\n\n> This falls under the stupid question and i'm just curious what other \n> people think what makes a query complex?\n\nIf I know in advance exactly how the planner will plan the query (and \nbe right), it's a simple query.\n\nOtherwise it's a complex query.\n\nAs I get a better feel for the planner, some queries that used to be \ncomplex become simple. :)\n\nCheers,\n Steve\n\n", "msg_date": "Tue, 6 May 2008 09:56:16 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What constitutes a complex query " }, { "msg_contents": "On Tue, May 6, 2008 at 9:41 AM, Scott Marlowe <[email protected]> wrote:\n> I'd say that the use of correlated subqueries qualifies a query as\n> complicated. Joining on non-usual pk-fk stuff. 
the more you're\n> mashing one set of data against another, and the odder the way you\n> have to do it, the more complex the query becomes.\n\nI would add that data analysis queries that have multiple level of\naggregation analysis can be complicated also.\n\nFor example, in a table of racer times find the average time for each\nteam while only counting teams whom at least have greater than four\nteam members and produce an ordered list displaying the ranking for\neach team according to their average time.\n\n\n-- \nRegards,\nRichard Broersma Jr.\n\nVisit the Los Angles PostgreSQL Users Group (LAPUG)\nhttp://pugs.postgresql.org/lapug\n", "msg_date": "Tue, 6 May 2008 09:58:39 -0700", "msg_from": "\"Richard Broersma\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What constitutes a complex query" }, { "msg_contents": "\n\nCraig James wrote:\n> Justin wrote:\n>> This falls under the stupid question and i'm just curious what other \n>> people think what makes a query complex?\n>\n> There are two kinds:\n>\n> 1. Hard for Postgres to get the answer.\nthis one\n>\n> 2. Hard for a person to comprehend.\n>\n> Which do you mean?\n>\n> Craig\n>\n", "msg_date": "Tue, 06 May 2008 12:23:00 -0500", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What constitutes a complex query" }, { "msg_contents": "On Tue, May 6, 2008 at 11:23 AM, Justin <[email protected]> wrote:\n>\n>\n> Craig James wrote:\n>\n> > Justin wrote:\n> >\n> > > This falls under the stupid question and i'm just curious what other\n> people think what makes a query complex?\n> > >\n> >\n> > There are two kinds:\n> >\n> > 1. Hard for Postgres to get the answer.\n> >\n> this one\n\nSometimes, postgresql makes a bad choice on simple queries, so it's\nhard to say what all the ones are that postgresql tends to get wrong.\nPlus the query planner is under constant improvement thanks to the\nfolks who find poor planner choices and Tom for making the changes.\n", "msg_date": "Tue, 6 May 2008 11:37:59 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What constitutes a complex query" } ]
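Richard's racer-times example can be written as a single aggregate query. This is just an illustrative sketch; the racer_times table and its columns are made up for the example, and the team ranking falls out of the ORDER BY (on releases of that era without window functions, an explicit rank column would need a self-join):

    SELECT team, avg(finish_time) AS avg_time
      FROM racer_times
     GROUP BY team
    HAVING count(*) > 4
     ORDER BY avg_time;

The HAVING clause keeps only teams with more than four members, and the ordered output is the ranking by average time.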
[ { "msg_contents": "Right now, we have a few servers that host our databases. None of them\nare redundant. Each hosts databases for one or more applications.\nThings work reasonably well but I'm worried about the availability of\nsome of the sites. Our hardware is 3-4 years old at this point and I'm \nnot naive to the possibility of drives, memory, motherboards or whatever \nfailing.\n\nI'm toying with the idea of adding a little redundancy and maybe some\nperformance to our setup. First, I'd replace are sata hard drives with\na scsi controller and two scsi hard drives that run raid 0 (probably \nrunning the OS and logs on the original sata drive). Then I'd run the \nprevious two databases on one cluster of two servers with pgpool in \nfront (using the redundancy feature of pgpool).\n\nOur applications are mostly read intensive. I don't think that having \ntwo databases on one machine, where previously we had just one, would \nadd too much of an impact, especially if we use the load balance feature \nof pgpool as well as the redundancy feature.\n\nCan anyone comment on any gotchas or issues we might encounter? Do you \nthink this strategy has possibility to accomplish what I'm originally \nsetting out to do?\n\nTIA\n-Dennis\n", "msg_date": "Tue, 06 May 2008 10:33:13 -0600", "msg_from": "Dennis Muhlestein <[email protected]>", "msg_from_op": true, "msg_subject": "Possible Redundancy/Performance Solution" }, { "msg_contents": "On Tue, 6 May 2008, Dennis Muhlestein wrote:\n\n> First, I'd replace are sata hard drives with a scsi controller and two \n> scsi hard drives that run raid 0 (probably running the OS and logs on \n> the original sata drive).\n\nRAID0 on two disks makes a disk failure that will wipe out the database \ntwice as likely. If you goal is better reliability, you want some sort of \nRAID1, which you can do with two disks. That should increase read \nthroughput a bit (not quite double though) while keeping write throughput \nabout the same.\n\nIf you added four disks, then you could do a RAID1+0 combination which \nshould substantially outperform your existing setup in every respect while \nalso being more resiliant to drive failure.\n\n> Our applications are mostly read intensive. I don't think that having two \n> databases on one machine, where previously we had just one, would add too \n> much of an impact, especially if we use the load balance feature of pgpool as \n> well as the redundancy feature.\n\nA lot depends on how much RAM you've got and whether it's enough to keep \nthe cache hit rate fairly high here. A reasonable thing to consider here \nis doing a round of standard performance tuning on the servers to make \nsure they're operating efficient before increasing their load.\n\n> Can anyone comment on any gotchas or issues we might encounter?\n\nGetting writes to replicate to multiple instances of the database usefully \nis where all the really nasty gotchas are in this area. 
Starting with \nthat part and working your way back toward the front-end pooling from \nthere should crash you into the hard parts early in the process.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 6 May 2008 13:39:27 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible Redundancy/Performance Solution" }, { "msg_contents": "Greg Smith wrote:\n> On Tue, 6 May 2008, Dennis Muhlestein wrote:\n> \n> \n> RAID0 on two disks makes a disk failure that will wipe out the database \n> twice as likely. If you goal is better reliability, you want some sort \n> of RAID1, which you can do with two disks. That should increase read \n> throughput a bit (not quite double though) while keeping write \n> throughput about the same.\n\nI was planning on pgpool being the cushion between the raid0 failure \nprobability and my need for redundancy. This way, I get protection \nagainst not only disks, but cpu, memory, network cards,motherboards etc. \n Is this not a reasonable approach?\n\n> \n> If you added four disks, then you could do a RAID1+0 combination which \n> should substantially outperform your existing setup in every respect \n> while also being more resiliant to drive failure.\n> \n>> Our applications are mostly read intensive. I don't think that having \n>> two databases on one machine, where previously we had just one, would \n>> add too much of an impact, especially if we use the load balance \n>> feature of pgpool as well as the redundancy feature.\n> \n> A lot depends on how much RAM you've got and whether it's enough to keep \n> the cache hit rate fairly high here. A reasonable thing to consider \n> here is doing a round of standard performance tuning on the servers to \n> make sure they're operating efficient before increasing their load.\n> \n>> Can anyone comment on any gotchas or issues we might encounter?\n> \n> Getting writes to replicate to multiple instances of the database \n> usefully is where all the really nasty gotchas are in this area. \n> Starting with that part and working your way back toward the front-end \n> pooling from there should crash you into the hard parts early in the \n> process.\n\n\nThanks for the tips!\nDennis\n", "msg_date": "Tue, 06 May 2008 12:31:01 -0600", "msg_from": "Dennis Muhlestein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible Redundancy/Performance Solution" }, { "msg_contents": "On Tue, 6 May 2008, Dennis Muhlestein wrote:\n\n> I was planning on pgpool being the cushion between the raid0 failure \n> probability and my need for redundancy. This way, I get protection against \n> not only disks, but cpu, memory, network cards,motherboards etc. Is this \n> not a reasonable approach?\n\nSince disks are by far the most likely thing to fail, I think it would be \nbad planning to switch to a design that doubles the chance of a disk \nfailure taking out the server just because you're adding some server-level \nredundancy. 
Anybody who's been in this business for a while will tell you \nthat seemingly improbable double failures happen, and if were you'd I want \na plan that survived a) a single disk failure on the primary and b) a \nsingle disk failure on the secondary at the same time.\n\nLet me strengthen that--I don't feel comfortable unless I'm able to \nsurvive a single disk failure on the primary and complete loss of the \nsecondary (say by power supply failure), because a double failure that \nstarts that way is a lot more likely than you might think. Especially \nwith how awful hard drives are nowadays.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 6 May 2008 16:35:02 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible Redundancy/Performance Solution" }, { "msg_contents": "Greg Smith wrote:\n> On Tue, 6 May 2008, Dennis Muhlestein wrote:\n> \n > Since disks are by far the most likely thing to fail, I think it would\n> be bad planning to switch to a design that doubles the chance of a disk \n> failure taking out the server just because you're adding some \n> server-level redundancy. Anybody who's been in this business for a \n> while will tell you that seemingly improbable double failures happen, \n> and if were you'd I want a plan that survived a) a single disk failure \n> on the primary and b) a single disk failure on the secondary at the same \n> time.\n> \n> Let me strengthen that--I don't feel comfortable unless I'm able to \n> survive a single disk failure on the primary and complete loss of the \n> secondary (say by power supply failure), because a double failure that \n> starts that way is a lot more likely than you might think. Especially \n> with how awful hard drives are nowadays.\n\nThose are good points. So you'd go ahead and add the pgpool in front \n(or another redundancy approach, but then use raid1,5 or perhaps 10 on \neach server?\n\n-Dennis\n", "msg_date": "Tue, 06 May 2008 15:39:02 -0600", "msg_from": "Dennis Muhlestein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible Redundancy/Performance Solution" }, { "msg_contents": "On Tue, May 6, 2008 at 3:39 PM, Dennis Muhlestein\n<[email protected]> wrote:\n\n> Those are good points. So you'd go ahead and add the pgpool in front (or\n> another redundancy approach, but then use raid1,5 or perhaps 10 on each\n> server?\n\nThat's what I'd do. specificall RAID10 for small to medium drive sets\nused for transactional stuff, and RAID6 for very large reporting\ndatabases that are mostly read.\n", "msg_date": "Tue, 6 May 2008 19:57:53 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible Redundancy/Performance Solution" }, { "msg_contents": "On Tue, 6 May 2008, Dennis Muhlestein wrote:\n\n> Those are good points. So you'd go ahead and add the pgpool in front (or \n> another redundancy approach, but then use raid1,5 or perhaps 10 on each \n> server?\n\nRight. I don't advise using the fact that you've got some sort of \nreplication going as an excuse to reduce the reliability of individual \nsystems, particularly in the area of disks (unless you're really creating \na much larger number of replicas than 2).\n\nRAID5 can be problematic compared to other RAID setups when you are doing \nwrite-heavy scenarios of small blocks, and it should be avoided for \ndatabase use. 
You can find stories on this subject in the archives here \nand some of the papers at http://www.baarf.com/ go over why; \"Is RAID 5 \nReally a Bargain?\" is the one I like best.\n\nIf you were thinking about 4 or more disks, there's a number of ways to \ndistribute those:\n\n1) RAID1+0 to make one big volume\n2) RAID1 for OS/apps/etc, RAID1 for database\n3) RAID1 for OS+xlog, RAID1 for database\n4) RAID1 for OS+popular tables, RAID1 for rest of database\n\nExactly which of these splits is best depends on your application and the \ntradeoffs important to you, but any of these should improve performance \nand reliability over what you're doing now. I personally tend to create \ntwo separate distinct volumes rather than using any striping here, create \na tablespace or three right from the start, and then manage the underlying \nmapping to disk with symbolic links so I can shift the allocation around. \nThat does require you have a steady hand and good nerves for when you \nscrew up, so I wouldn't recommend that to everyone.\n\nAs you get more disks it gets less practical to handle things this way, \nand it becomes increasingly sensible to just make one big array out of \nthem and stopping worrying about it.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 6 May 2008 22:37:09 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible Redundancy/Performance Solution" }, { "msg_contents": "\n> \n> 1) RAID1+0 to make one big volume\n> 2) RAID1 for OS/apps/etc, RAID1 for database\n> 3) RAID1 for OS+xlog, RAID1 for database\n> 4) RAID1 for OS+popular tables, RAID1 for rest of database\n\nLots of good info, thanks for all the replies. It seems to me then, \nthat the speed increase you'd get from raid0 is not worth the downtime \nrisk, even when you have multiple servers. I'll start pricing things \nout and see what options we have.\n\nThanks again,\nDennis\n", "msg_date": "Wed, 07 May 2008 09:36:55 -0600", "msg_from": "Dennis Muhlestein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible Redundancy/Performance Solution" } ]
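For anyone wanting to try the two-separate-RAID1-volumes layout Greg describes, the tablespace commands themselves are short. A sketch only: the mount points and table names are invented, and the directories must already exist and be writable by the postgres user:

    CREATE TABLESPACE fast_pair LOCATION '/mnt/raid1_a/pgdata';
    CREATE TABLESPACE bulk_pair LOCATION '/mnt/raid1_b/pgdata';

    -- put a hot table on one mirrored pair at creation time
    CREATE TABLE hot_events (id integer PRIMARY KEY, payload text) TABLESPACE fast_pair;

    -- or move an existing table later (this rewrites the table and takes a lock)
    ALTER TABLE some_big_table SET TABLESPACE bulk_pair;

The symbolic-link trick Greg mentions works underneath this layer; tablespaces are the supported way to get the same placement control.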
[ { "msg_contents": "We are using pgfouine to try and optimize our database at this time. Is\nthere a way to have pgfouine show examples or breakout commits? \n\n \n\nQueries that took up the most time\n\nRank Total duration Times executed Av. duration (s)\nQuery\n\n1 26m54s 222,305 0.01\nCOMMIT;\n\n \n\nPerhaps we need to tweak what is being logged by postgresql?\n\n \n\nlog_destination = 'syslog' \n\nlogging_collector = on \n\nlog_directory = 'pg_log' \n\nlog_truncate_on_rotation = on \n\nlog_rotation_age = 1d \n\n \n\nsyslog_facility = 'LOCAL0'\n\nsyslog_ident = 'postgres'\n\n \n\nclient_min_messages = notice \n\nlog_min_messages = notice \n\nlog_error_verbosity = default \n\nlog_min_error_statement = notice \n\nlog_min_duration_statement = 2 \n\n#silent_mode = off \n\n \n\ndebug_print_parse = off\n\ndebug_print_rewritten = off\n\ndebug_print_plan = off\n\ndebug_pretty_print = off\n\nlog_checkpoints = off\n\nlog_connections = off\n\nlog_disconnections = off\n\nlog_duration = off\n\nlog_hostname = off\n\n#log_line_prefix = '' \n\nlog_lock_waits = on \n\nlog_statement = 'none' \n\n#log_temp_files = -1 \n\n#log_timezone = unknown \n\n \n\n#track_activities = on\n\n#track_counts = on\n\n#update_process_title = on\n\n \n\n#log_parser_stats = off\n\n#log_planner_stats = off\n\n#log_executor_stats = off\n\n#log_statement_stats = off\n\n \n\nRegards,\n\nJosh\n\n\n\n\n\n\n\n\n\n\nWe are using pgfouine to try and optimize our database at\nthis time.  Is there a way to have pgfouine show examples or breakout\ncommits? \n \nQueries that took up the most time\nRank     Total duration    Times\nexecuted             Av.\nduration (s)             Query\n1          26m54s             222,305                         0.01\n                             COMMIT;\n \nPerhaps we need to tweak what is being logged by postgresql?\n \nlog_destination =\n'syslog'             \n\nlogging_collector =\non                 \n\nlog_directory =\n'pg_log'               \n\nlog_truncate_on_rotation =\non           \nlog_rotation_age = 1d                  \n\n \nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n \nclient_min_messages =\nnotice            \nlog_min_messages =\nnotice              \n\nlog_error_verbosity =\ndefault           \nlog_min_error_statement =\nnotice        \nlog_min_duration_statement =\n2          \n#silent_mode =\noff                     \n\n \ndebug_print_parse = off\ndebug_print_rewritten = off\ndebug_print_plan = off\ndebug_pretty_print = off\nlog_checkpoints = off\nlog_connections = off\nlog_disconnections = off\nlog_duration = off\nlog_hostname = off\n#log_line_prefix =\n''                  \n\nlog_lock_waits =\non                    \n\nlog_statement =\n'none'                 \n\n#log_temp_files =\n-1                   \n\n#log_timezone =\nunknown                \n\n \n#track_activities = on\n#track_counts = on\n#update_process_title = on\n \n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n \nRegards,\nJosh", "msg_date": "Tue, 6 May 2008 16:10:47 -0500", "msg_from": "\"Josh Cole\" <[email protected]>", "msg_from_op": true, "msg_subject": "pgfouine - commit details?" }, { "msg_contents": "Josh,\n\nOn Tue, May 6, 2008 at 11:10 PM, Josh Cole <[email protected]> wrote:\n> We are using pgfouine to try and optimize our database at this time. Is\n> there a way to have pgfouine show examples or breakout commits?\n\nI hesitated before not implementing this idea. 
The problem is that you\noften don't log everything and use log_min_duration_statement and thus\nyou don't have all the queries of the transaction in your log file\n(and you usually don't have the BEGIN; command in the logs).\n\n-- \nGuillaume\n", "msg_date": "Wed, 7 May 2008 02:31:52 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgfouine - commit details?" }, { "msg_contents": "We are shipping the postgres.log to a remote syslog repository to take the I/O burden off our postgresql server. As such, if we set log_min_duration_statement to 0 this would allow us to get more detailed information about our commits using pgfouine...correct?\n \n--\nJosh\n\n________________________________\n\nFrom: Guillaume Smet [mailto:[email protected]]\nSent: Tue 5/6/2008 7:31 PM\nTo: Josh Cole\nCc: [email protected]\nSubject: Re: [PERFORM] pgfouine - commit details?\n\n\n\nJosh,\n\nOn Tue, May 6, 2008 at 11:10 PM, Josh Cole <[email protected]> wrote:\n> We are using pgfouine to try and optimize our database at this time. Is\n> there a way to have pgfouine show examples or breakout commits?\n\nI hesitated before not implementing this idea. The problem is that you\noften don't log everything and use log_min_duration_statement and thus\nyou don't have all the queries of the transaction in your log file\n(and you usually don't have the BEGIN; command in the logs).\n\n--\nGuillaume", "msg_date": "Wed, 7 May 2008 00:29:38 -0500", "msg_from": "\"Josh Cole\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgfouine - commit details?" } ]
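For reference, getting complete transaction detail into the logs for pgfouine mostly comes down to logging every statement's duration instead of only the slow ones. A minimal postgresql.conf sketch along the lines discussed above; the values are examples only, and logging everything on a busy server generates a lot of log volume:

    log_destination = 'syslog'
    syslog_facility = 'LOCAL0'
    syslog_ident = 'postgres'
    log_min_duration_statement = 0   # log every statement with its duration
    # log_line_prefix: pgfouine expects an identifying prefix when reading
    # stderr logs; check the pgfouine documentation for the exact format

With syslog output each line already carries the backend's identity, which is one reason the remote-syslog setup in this thread fits pgfouine well.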
[ { "msg_contents": "Hello friends,\n\nI'm working on optimizing queries using the Kruskal algorithm (\nhttp://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4318118). I did several\ntests in the database itself and saw interesting results.\nI did 10 executions with each query using unchanged source of Postgres and\nthen adapted to the algorithm of Kruskal.\nThe query I used is composed of 12 tables and 11 joins.\n\nResults Postgresql unchanged (ms): (\\ timing)\n\n170,690\n168,214\n182,832\n166,172\n174,466\n167,143\n167,287\n172,891\n170,452\n165,665\naverage=> 170,5812 ms\n\n\nResults of Postgresql with the Kruskal algorithm (ms): (\\ timing)\n\n520,590\n13,533\n8,410\n5,162\n5,543\n4,999\n9,871\n4,984\n5,010\n8,883\naverage=> 58,6985 ms\n\n\nAs you can see the result, using the Kruskal algorithm, the first query\ntakes more time to return results. This does not occur when using the\noriginal source of Postgres.\nSo how is the best method to conduct the tests? I take into consideration\nthe average of 10 executions or just the first one?\nDo you think I must clean the cache after each query? (because the other (9)\nexecutions may have information in memory).\n\nregards, Tarcizio Bini.\n\nHello friends,I'm working on optimizing queries using the Kruskal algorithm (http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4318118). I did several tests in the database itself and saw interesting results.\nI did 10 executions with each query using unchanged source of Postgres and then adapted to the algorithm of Kruskal.The query I used is composed of 12 tables and 11 joins. Results Postgresql unchanged (ms): (\\ timing)\n170,690168,214182,832166,172174,466167,143167,287172,891170,452165,665average=> 170,5812 msResults of Postgresql with the Kruskal algorithm (ms): (\\ timing)\n520,59013,5338,4105,1625,5434,9999,8714,9845,0108,883average=> 58,6985 msAs you can see the result, using the Kruskal algorithm, the first query takes more time to return results. This does not occur when using the original source of Postgres.\nSo how is the best method to conduct the tests? I take into consideration the average of 10 executions or just the first one?Do you think I must clean the cache after each query? (because the other (9) executions may have information in memory).\nregards, Tarcizio Bini.", "msg_date": "Wed, 7 May 2008 13:28:04 -0300", "msg_from": "\"Tarcizio Bini\" <[email protected]>", "msg_from_op": true, "msg_subject": "=?WINDOWS-1252?Q?Query_Optimization_with_Kruskal=92s_Algorithm?=" }, { "msg_contents": "On 5/7/08, Tarcizio Bini <[email protected]> wrote:\n> I'm working on optimizing queries using the Kruskal algorithm\n> (http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4318118).\n\nThat paper looks very interesting. I would love to hear what the\nPostgreSQL committers think of this algorithm.\n\nAlexander.\n", "msg_date": "Wed, 7 May 2008 22:09:30 +0200", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re:__Query_Optimiza?=\n\t=?UTF-8?Q?tion_with_Kruskal=E2=80=99s_Algorithm?=" }, { "msg_contents": "On May 8, 2:09 am, [email protected] (\"Alexander Staubo\") wrote:\n> On 5/7/08, Tarcizio Bini <[email protected]> wrote:\n>\n> > I'm working on optimizing queries using the Kruskal algorithm\n> > (http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4318118).\n>\n> That paper looks very interesting. 
I would love to hear what the\n> PostgreSQL committers think of this algorithm.\n>\n> Alexander.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\nI also would like to hear from them. But seems like the thread is\nloosed in tonn of other threads.\n", "msg_date": "Sat, 10 May 2008 10:31:22 -0700 (PDT)", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": false, "msg_subject": "\n =?windows-1252?Q?Re=3A_Query_Optimization_with_Kruskal=92s_Algorithm?=" }, { "msg_contents": "\nOn May 10, 2008, at 1:31 PM, Rauan Maemirov wrote:\n\n> I also would like to hear from them. But seems like the thread is\n> loosed in tonn of other threads.\n\nIt's also the middle of a commit fest, when a lot of the developers \nare focussed on processing the current patches in the queue, rather \nthan actively exploring new, potential features.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n", "msg_date": "Sat, 10 May 2008 14:26:51 -0400", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "=?WINDOWS-1252?Q?Re:__Re:_Query_Optimization_with_Krusk?=\n\t=?WINDOWS-1252?Q?al=92s_Algorithm?=" }, { "msg_contents": "Repost to -hackers, you're more likely to get a response on this topic.\n\nOn Sat, May 10, 2008 at 1:31 PM, Rauan Maemirov <[email protected]> wrote:\n> On May 8, 2:09 am, [email protected] (\"Alexander Staubo\") wrote:\n>> On 5/7/08, Tarcizio Bini <[email protected]> wrote:\n>>\n>> > I'm working on optimizing queries using the Kruskal algorithm\n>> > (http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4318118).\n>>\n>> That paper looks very interesting. I would love to hear what the\n>> PostgreSQL committers think of this algorithm.\n>>\n>> Alexander.\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n>\n> I also would like to hear from them. But seems like the thread is\n> loosed in tonn of other threads.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Sat, 10 May 2008 16:27:12 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?WINDOWS-1252?Q?Re:__Re:_Query_Optimi?=\n\t=?WINDOWS-1252?Q?zation_with_Kruskal=92s_Algorithm?=" }, { "msg_contents": "\"Jonah H. Harris\" <[email protected]> writes:\n> Repost to -hackers, you're more likely to get a response on this topic.\n\nProbably not, unless you cite a more readily available reference.\n(I dropped my IEEE membership maybe fifteen years ago ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 May 2008 17:12:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: =?WINDOWS-1252?Q?Re:__Re:_Query_Optimi?=\n\t=?WINDOWS-1252?Q?zation_with_Kruskal=92s_Algorithm?=" }, { "msg_contents": "On Sat, May 10, 2008 at 5:12 PM, Tom Lane <[email protected]> wrote:\n> \"Jonah H. 
Harris\" <[email protected]> writes:\n>> Repost to -hackers, you're more likely to get a response on this topic.\n>\n> Probably not, unless you cite a more readily available reference.\n> (I dropped my IEEE membership maybe fifteen years ago ...)\n\nYeah, I don't have one either. Similarly, I couldn't find anything\napplicable to the PG implementation except references to the paper.\nWikipedia has the algorithm itself\n(http://en.wikipedia.org/wiki/Kruskal's_algorithm), but I was more\ninterested in the actual applicability to PG and any issues they ran\ninto.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Sat, 10 May 2008 17:17:21 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?WINDOWS-1252?Q?Re:_Re:__Re:_Query_Opti?=\n\t=?WINDOWS-1252?Q?mization_with_Kruskal=92s_Algorithm?=" }, { "msg_contents": "\"Jonah H. Harris\" <[email protected]> writes:\n> Wikipedia has the algorithm itself\n> (http://en.wikipedia.org/wiki/Kruskal's_algorithm), but I was more\n> interested in the actual applicability to PG and any issues they ran\n> into.\n\nHmm ... minimum spanning tree of a graph, eh? Right offhand I'd say\nthis is a pretty terrible model of the join order planning problem.\nThe difficulty with trying to represent join order as a weighted\ngraph is that it assumes the cost to join two relations (ie, the\nweight on the arc between them) is independent of what else you have\njoined first. Which is clearly utterly wrong for join planning.\n\nOur GEQO optimizer has a similar issue --- it uses a search algorithm\nthat is designed to solve traveling-salesman, which is almost the same\nthing as minimum spanning tree. The saving grace for GEQO is that its\nTSP orientation is only driving a heuristic; when it considers a given\noverall join order it is at least capable of computing the right cost.\nIt looks to me like Kruskal's algorithm is entirely dependent on the\nassumption that minimizing the sum of some predetermined pairwise costs\ngives the correct plan.\n\nIn short, I'm sure it's pretty fast compared to either of our current\njoin planning methods, but I'll bet a lot that it often picks a much\nworse plan. Color me unexcited, unless they've found some novel way\nof defining the graph representation that avoids this problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 May 2008 20:30:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: =?WINDOWS-1252?Q?Re:_Re:__Re:_Query_Opti?=\n\t=?WINDOWS-1252?Q?mization_with_Kruskal=92s_Algorithm?=" } ]
[ { "msg_contents": "Hi,\n\nI have a large database (multiple TBs) where I'd like to be able to do\na backup/restore of just a particular table (call it foo). Because\nthe database is large, the time for a full backup would be\nprohibitive. Also, whatever backup mechanism we do use needs to keep\nthe system online (i.e., users must still be allowed to update table\nfoo while we're taking the backup).\n\nAfter reading the documentation, it seems like the following might\nwork. Suppose the database has two tables foo and bar, and we're only\ninterested in backing up table foo:\n\n1. Call pg_start_backup\n\n2. Use the pg_class table in the catalog to get the data file names\nfor tables foo and bar.\n\n3. Copy the system files and the data file for foo. Skip the data file for bar.\n\n4. Call pg_stop_backup()\n\n5. Copy WAL files generated between 1. and 4. to another location.\n\nLater, if we want to restore the database somewhere with just table\nfoo, we just use postgres's normal recovery mechanism and point it at\nthe files we backed up in 2. and the WAL files from 5.\n\nDoes anyone see a problem with this approach (e.g., correctness,\nperformance, etc.)? Or is there perhaps an alternative approach using\nsome other postgresql mechanism that I'm not aware of?\n\nThanks!\n- John\n", "msg_date": "Wed, 7 May 2008 13:02:57 -0700", "msg_from": "\"John Smith\" <[email protected]>", "msg_from_op": true, "msg_subject": "Backup/Restore of single table in multi TB database" }, { "msg_contents": "On Wed, May 7, 2008 at 4:02 PM, John Smith <[email protected]> wrote:\n\n> Does anyone see a problem with this approach (e.g., correctness,\n> performance, etc.)? Or is there perhaps an alternative approach using\n> some other postgresql mechanism that I'm not aware of?\n\nDid you already look at and reject pg_dump for some reason? You can\nrestrict it to specific tables to dump, and it can work concurrently\nwith a running system. Your database is large, but how large are the\nindividual tables you're interested in backing up? pg_dump will be\nslower than a file copy, but may be sufficient for your purpose and\nwill have guaranteed correctness.\n\nI'm fairly certain that you have to be very careful about doing simple\nfile copies while the system is running, as the files may end up out\nof sync based on when each individual one is copied. I haven't done it\nmyself, but I do know that there are a lot of caveats that someone\nwith more experience doing that type of backup can hopefully point you\nto.\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Wed, 7 May 2008 16:09:45 -0400", "msg_from": "\"David Wilson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup/Restore of single table in multi TB database" }, { "msg_contents": "On Wed, 7 May 2008 13:02:57 -0700\n\"John Smith\" <[email protected]> wrote:\n\n> Hi,\n> \n> I have a large database (multiple TBs) where I'd like to be able to do\n> a backup/restore of just a particular table (call it foo). Because\n> the database is large, the time for a full backup would be\n> prohibitive. Also, whatever backup mechanism we do use needs to keep\n> the system online (i.e., users must still be allowed to update table\n> foo while we're taking the backup).\n\n> Does anyone see a problem with this approach (e.g., correctness,\n> performance, etc.)? Or is there perhaps an alternative approach using\n> some other postgresql mechanism that I'm not aware of?\n\nWhy are you not just using pg_dump -t ? 
Are you saying the backup of\nthe single table pg_dump takes to long? Perhaps you could use slony\nwith table sets?\n\nJoshua D. Drake\n\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate", "msg_date": "Wed, 7 May 2008 13:11:25 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup/Restore of single table in multi TB database" }, { "msg_contents": "On Wed, 7 May 2008 16:09:45 -0400\n\"David Wilson\" <[email protected]> wrote:\n\n> I'm fairly certain that you have to be very careful about doing simple\n> file copies while the system is running, as the files may end up out\n> of sync based on when each individual one is copied. I haven't done it\n> myself, but I do know that there are a lot of caveats that someone\n> with more experience doing that type of backup can hopefully point you\n> to.\n\nBesides the fact that it seems to be a fairly hacky thing to do... it\nis going to be fragile. Consider:\n\n(serverA) create table foo();\n(serverB) create table foo();\n\n(serverA) Insert stuff;\n(serverA) Alter table foo add column;\n\nOops...\n\n(serverA) alter table foo drop column;\n\nYou now have different version of the files than on serverb regardless\nof the table name.\n\nJoshua D. Drake\n\n \n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate", "msg_date": "Wed, 7 May 2008 13:16:35 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup/Restore of single table in multi TB database" }, { "msg_contents": "On Wed, 2008-05-07 at 13:02 -0700, John Smith wrote:\n\n> I have a large database (multiple TBs) where I'd like to be able to do\n> a backup/restore of just a particular table (call it foo). Because\n> the database is large, the time for a full backup would be\n> prohibitive. Also, whatever backup mechanism we do use needs to keep\n> the system online (i.e., users must still be allowed to update table\n> foo while we're taking the backup). \n\nHave a look at pg_snapclone. It's specifically designed to significantly\nimprove dump times for very large objects.\n\nhttp://pgfoundry.org/projects/snapclone/\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Wed, 07 May 2008 22:28:44 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup/Restore of single table in multi TB database" }, { "msg_contents": "\"John Smith\" <[email protected]> writes:\n> After reading the documentation, it seems like the following might\n> work. Suppose the database has two tables foo and bar, and we're only\n> interested in backing up table foo:\n\n> 1. Call pg_start_backup\n\n> 2. Use the pg_class table in the catalog to get the data file names\n> for tables foo and bar.\n\n> 3. Copy the system files and the data file for foo. Skip the data file for bar.\n\n> 4. Call pg_stop_backup()\n\n> 5. Copy WAL files generated between 1. and 4. 
to another location.\n\n> Later, if we want to restore the database somewhere with just table\n> foo, we just use postgres's normal recovery mechanism and point it at\n> the files we backed up in 2. and the WAL files from 5.\n\n> Does anyone see a problem with this approach\n\nYes: it will not work, not even a little bit, because the WAL files will\ncontain updates for all the tables. You can't just not have the tables\nthere during restore.\n\nWhy are you not using pg_dump?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 May 2008 17:41:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup/Restore of single table in multi TB database " }, { "msg_contents": "Hi Tom,\n\nActually, I forgot to mention one more detail in my original post.\nFor the table that we're looking to backup, we also want to be able to\ndo incremental backups. pg_dump will cause the entire table to be\ndumped out each time it is invoked.\n\nWith the pg_{start,stop}_backup approach, incremental backups could be\nimplemented by just rsync'ing the data files for example and applying\nthe incremental WALs. So if table foo didn't change very much since\nthe first backup, we would only need to rsync a small amount of data\nplus the WALs to get an incremental backup for table foo.\n\nBesides picking up data on unwanted tables from the WAL (e.g., bar\nwould appear in our recovered database even though we only wanted\nfoo), do you see any other problems with this pg_{start,stop}_backup\napproach? Admittedly, it does seem a bit hacky.\n\nThanks,\n- John\n\nOn Wed, May 7, 2008 at 2:41 PM, Tom Lane <[email protected]> wrote:\n> \"John Smith\" <[email protected]> writes:\n> > After reading the documentation, it seems like the following might\n> > work. Suppose the database has two tables foo and bar, and we're only\n> > interested in backing up table foo:\n>\n> > 1. Call pg_start_backup\n>\n> > 2. Use the pg_class table in the catalog to get the data file names\n> > for tables foo and bar.\n>\n> > 3. Copy the system files and the data file for foo. Skip the data file for bar.\n>\n> > 4. Call pg_stop_backup()\n>\n> > 5. Copy WAL files generated between 1. and 4. to another location.\n>\n> > Later, if we want to restore the database somewhere with just table\n> > foo, we just use postgres's normal recovery mechanism and point it at\n> > the files we backed up in 2. and the WAL files from 5.\n>\n> > Does anyone see a problem with this approach\n>\n> Yes: it will not work, not even a little bit, because the WAL files will\n> contain updates for all the tables. You can't just not have the tables\n> there during restore.\n>\n> Why are you not using pg_dump?\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 7 May 2008 15:24:22 -0700", "msg_from": "\"John Smith\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Backup/Restore of single table in multi TB database" }, { "msg_contents": "On Wed, 2008-05-07 at 15:24 -0700, John Smith wrote:\n\n> Actually, I forgot to mention one more detail in my original post.\n> For the table that we're looking to backup, we also want to be able to\n> do incremental backups. pg_dump will cause the entire table to be\n> dumped out each time it is invoked.\n> \n> With the pg_{start,stop}_backup approach, incremental backups could be\n> implemented by just rsync'ing the data files for example and applying\n> the incremental WALs. 
So if table foo didn't change very much since\n> the first backup, we would only need to rsync a small amount of data\n> plus the WALs to get an incremental backup for table foo.\n> \n> Besides picking up data on unwanted tables from the WAL (e.g., bar\n> would appear in our recovered database even though we only wanted\n> foo), do you see any other problems with this pg_{start,stop}_backup\n> approach? Admittedly, it does seem a bit hacky.\n\nYou wouldn't be the first to ask to restore only a single table.\n\nI can produce a custom version that does that if you like, though I'm\nnot sure that feature would be accepted into the main code.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 08 May 2008 07:25:16 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup/Restore of single table in multi TB database" }, { "msg_contents": "Hello,\n\nI had postgresql 7.4 on ubuntu and over one year ago I moved to 8.2\nTill now I was backing up my db via pgadmin remotely from windows but \nnow I want to do it from the ubuntu server.\n\nWhen I run the command pgdump it said that the database is 8.2 but the \ntool is 7.4 - my question is, where in the world is the pgdump for 8.2 - \nI can't find it.\n\npg_dump, pg_dumpall are all in /usr/bin but where are the 8.2 ones ?\n\nTIA,\nQ\n\n\n", "msg_date": "Thu, 08 May 2008 01:52:17 -0500", "msg_from": "Q Master <[email protected]>", "msg_from_op": false, "msg_subject": "Ubuntu question" }, { "msg_contents": "On Thu, May 08, 2008 at 01:52:17AM -0500, Q Master wrote:\n> I had postgresql 7.4 on ubuntu and over one year ago I moved to 8.2\n> Till now I was backing up my db via pgadmin remotely from windows but \n> now I want to do it from the ubuntu server.\n\nI suggest looking at the README.Debian for postgres, it contains much\nimportant information you need to understand how multiple concurrently\ninstalled versions work.\n\n> When I run the command pgdump it said that the database is 8.2 but the \n> tool is 7.4 - my question is, where in the world is the pgdump for 8.2 - \n> I can't find it.\n> \n> pg_dump, pg_dumpall are all in /usr/bin but where are the 8.2 ones ?\n\nFirst, check what you have installed with pg_lsclusters (this will give\nyou the port number). Normally you can specify the cluster directly to\npg_dump but if you want the actual binary go to:\n\n/usr/lib/postgresql/<version>/bin/pg_dump.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Please line up in a tree and maintain the heap invariant while \n> boarding. Thank you for flying nlogn airlines.", "msg_date": "Thu, 8 May 2008 09:01:02 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ubuntu question" }, { "msg_contents": "\n\nQ Master wrote:\n> Hello,\n>\n> I had postgresql 7.4 on ubuntu and over one year ago I moved to 8.2\n> Till now I was backing up my db via pgadmin remotely from windows but \n> now I want to do it from the ubuntu server.\n>\n> When I run the command pgdump it said that the database is 8.2 but the \n> tool is 7.4 - my question is, where in the world is the pgdump for 8.2 \n> - I can't find it.\n>\n> pg_dump, pg_dumpall are all in /usr/bin but where are the 8.2 ones ?\nYou need to download the pgcontrib package from ubuntu package site. 
I\nuse the gnome package manager from ubuntu to handle this plus it\nautomatically handles the updates if any apply\n\n>\n> TIA,\n> Q\n>\n>\n>\n\n\n", "msg_date": "Thu, 08 May 2008 03:47:59 -0400", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Ubuntu question" }, { "msg_contents": "\n\nQ Master wrote:\n> Hello,\n>\n> I had postgresql 7.4 on ubuntu and over one year ago I moved to 8.2\n> Till now I was backing up my db via pgadmin remotely from windows but \n> now I want to do it from the ubuntu server.\n>\n> When I run the command pgdump it said that the database is 8.2 but the \n> tool is 7.4 - my question is, where in the world is the pgdump for 8.2 \n> - I can't find it.\n>\n> pg_dump, pg_dumpall are all in /usr/bin but where are the 8.2 ones ?\nYou need to download the pgcontrib package from ubuntu package site. I\nuse the gnome package manager from ubuntu to handle this plus it\nautomatically handles the updates if any apply\n\n>\n> TIA,\n> Q\n>\n>\n>\n\n\n\n\n", "msg_date": "Thu, 08 May 2008 03:48:43 -0400", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ubuntu question" }, { "msg_contents": "sorry all i accident cross posted\nfat fingered it\n\nJustin wrote:\n>\n>\n> Q Master wrote:\n>> Hello,\n>>\n>> I had postgresql 7.4 on ubuntu and over one year ago I moved to 8.2\n>> Till now I was backing up my db via pgadmin remotely from windows but \n>> now I want to do it from the ubuntu server.\n>>\n>> When I run the command pgdump it said that the database is 8.2 but \n>> the tool is 7.4 - my question is, where in the world is the pgdump \n>> for 8.2 - I can't find it.\n>>\n>> pg_dump, pg_dumpall are all in /usr/bin but where are the 8.2 ones ?\n> You need to download the pgcontrib package from ubuntu package site. I\n> use the gnome package manager from ubuntu to handle this plus it\n> automatically handles the updates if any apply\n>\n>>\n>> TIA,\n>> Q\n>>\n>>\n>>\n>\n>\n>\n", "msg_date": "Thu, 08 May 2008 03:49:25 -0400", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Ubuntu question" }, { "msg_contents": "Simon Riggs wrote:\n> Have a look at pg_snapclone. It's specifically designed to significantly\n> improve dump times for very large objects.\n>\n> http://pgfoundry.org/projects/snapclone/\n> \nAlso, in case the original poster is not aware, by default pg_dump \nallows to backup single tables.\nJust add -t <table name>.\n\n\n\nDoes pg_snapclone works mostly on large rows or will it also be faster \nthan pg_dump for narrow tables?\n", "msg_date": "Fri, 18 Jul 2008 20:25:50 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup/Restore of single table in multi TB database" }, { "msg_contents": "\nOn Fri, 2008-07-18 at 20:25 -0400, Francisco Reyes wrote:\n\n> Does pg_snapclone works mostly on large rows or will it also be faster \n> than pg_dump for narrow tables?\n\nIt allows you to run your dump in multiple pieces. Thats got nothing to\ndo with narrow or wide.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Sat, 19 Jul 2008 10:02:32 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Backup/Restore of single table in multi TB database" } ]
[ { "msg_contents": "PostgreSQL: 8.2\n\n \n\nWhen you create a foreign key to a table is there an index that is\ncreated on the foreign key automatically?\n\n \n\nExample:\n\nTable A has a field called ID.\n\n \n\nTable B has a field called fk_a_id which has a constraint of being a\nforeign key to table A to field ID.\n\n \n\nIs there an index automatically created on field fk_a_id in table B when\nI create a foreign key constraint?\n\n \n\n \n\nI assume yes. But I wanted to check. I did not see it specifically\nmentioned in the documentation.\n\n \n\nI also see \"CREATE TABLE / PRIMARY KEY will create implicit index\" when\ncreating a primary key but I don't see any similar statement when\ncreating a foreign key.\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL: 8.2\n \nWhen you create a foreign key to a table is there an index\nthat is created on the foreign key automatically?\n \nExample:\nTable A has a field called ID.\n \nTable B has a field called fk_a_id which has a constraint of\nbeing a foreign key to table A to field ID.\n \nIs there an index automatically created on field fk_a_id in\ntable B when I create a foreign key constraint?\n \n \nI assume yes.  But I wanted to check.  I did not\nsee it specifically mentioned in the documentation.\n \nI also see “CREATE TABLE / PRIMARY KEY will create\nimplicit index” when creating a primary key but I don’t see any similar\nstatement when creating a foreign key.\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Thu, 8 May 2008 11:52:50 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Creating a foreign key" }, { "msg_contents": "On Thu, 8 May 2008 11:52:50 -0500\n\"Campbell, Lance\" <[email protected]> wrote:\n\n> PostgreSQL: 8.2\n> \n> \n> \n> When you create a foreign key to a table is there an index that is\n> created on the foreign key automatically?\n\nNo.\n\nJoshua D. Drake\n\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate", "msg_date": "Thu, 8 May 2008 10:00:14 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating a foreign key" }, { "msg_contents": "\nOn Thu, 2008-05-08 at 17:52 +0100, Campbell, Lance wrote:\n\n> Is there an index automatically created on field fk_a_id in table B\n> when I create a foreign key constraint?\n\nNo. The problem with doing this is it assumes certain things about your\ninfrastructure that may be entirely false. Indexes are to speed up\nqueries by logarithmically reducing the result set to matched index\nparameters, and pretty much nothing else. Indexes are also not free,\ntaking up both disk space and CPU time to maintain, slowing down\ninserts.\n\nForeign keys are not bi-directional either. They actually check the\nindex in the *source* table to see if the value exists. Having an index\non a column referring to another table may be advantageous, but it's not\nalways necessary. 
If you never use that column in a where clause, or it\nisn't restrictive enough, you gain nothing and lose speed in table\nmaintenance. It's totally up to the focus of your table schema design,\nreally. Only careful app management and performance analysis can really\ntell you where indexes need to go, beyond the rules-of-thumb concepts,\nanyway.\n\n> I also see “CREATE TABLE / PRIMARY KEY will create implicit index”\n> when creating a primary key but I don’t see any similar statement when\n> creating a foreign key.\n\nThat's because the definition of a primary key is an index that acts as\nthe primary lookup for the table. This is required to be an index,\npartially because it has an implied unique constraint, and also because\nit has a search-span of approximately 1 when locating a specific row\nfrom that table.\n\nBut indexes aren't some kind of magical \"make a query faster\" sauce.\nWith too many values, the cost of scanning them individually becomes\nprohibitive, and the database will fall-back to a faster sequence-scan,\nwhich can take advantage of the block-fetch nature of most storage\ndevices to just blast through all the results for the values it's\nlooking for. It's restrictive where clauses *combined* with well-chosen\nindexes that give you good performance, with a little tweaking here and\nthere to make the query-planner happy.\n\nBut that's the long version. Postgres is by no means bare-bones, but it\nassumes DBAs are smart enough to manage the structures they bolt onto\nthe metal. :)\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\n", "msg_date": "Thu, 8 May 2008 12:18:59 -0500", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating a foreign key" }, { "msg_contents": "Shaun,\nThanks for the very detailed description of why posgres does not auto\ncreate indexes. That makes a lot of sense.\n\nThanks again,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n-----Original Message-----\nFrom: Shaun Thomas [mailto:[email protected]] \nSent: Thursday, May 08, 2008 12:19 PM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] Creating a foreign key\n\n\nOn Thu, 2008-05-08 at 17:52 +0100, Campbell, Lance wrote:\n\n> Is there an index automatically created on field fk_a_id in table B\n> when I create a foreign key constraint?\n\nNo. The problem with doing this is it assumes certain things about your\ninfrastructure that may be entirely false. Indexes are to speed up\nqueries by logarithmically reducing the result set to matched index\nparameters, and pretty much nothing else. Indexes are also not free,\ntaking up both disk space and CPU time to maintain, slowing down\ninserts.\n\nForeign keys are not bi-directional either. They actually check the\nindex in the *source* table to see if the value exists. Having an index\non a column referring to another table may be advantageous, but it's not\nalways necessary. If you never use that column in a where clause, or it\nisn't restrictive enough, you gain nothing and lose speed in table\nmaintenance. It's totally up to the focus of your table schema design,\nreally. 
Only careful app management and performance analysis can really\ntell you where indexes need to go, beyond the rules-of-thumb concepts,\nanyway.\n\n> I also see \"CREATE TABLE / PRIMARY KEY will create implicit index\"\n> when creating a primary key but I don't see any similar statement when\n> creating a foreign key.\n\nThat's because the definition of a primary key is an index that acts as\nthe primary lookup for the table. This is required to be an index,\npartially because it has an implied unique constraint, and also because\nit has a search-span of approximately 1 when locating a specific row\nfrom that table.\n\nBut indexes aren't some kind of magical \"make a query faster\" sauce.\nWith too many values, the cost of scanning them individually becomes\nprohibitive, and the database will fall-back to a faster sequence-scan,\nwhich can take advantage of the block-fetch nature of most storage\ndevices to just blast through all the results for the values it's\nlooking for. It's restrictive where clauses *combined* with well-chosen\nindexes that give you good performance, with a little tweaking here and\nthere to make the query-planner happy.\n\nBut that's the long version. Postgres is by no means bare-bones, but it\nassumes DBAs are smart enough to manage the structures they bolt onto\nthe metal. :)\n\n-- \n\nShaun Thomas\nDatabase Administrator\n\nLeapfrog Online \n807 Greenwood Street \nEvanston, IL 60201 \nTel. 847-440-8253\nFax. 847-570-5750\nwww.leapfrogonline.com\n\n\n", "msg_date": "Thu, 8 May 2008 13:11:49 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Creating a foreign key" }, { "msg_contents": "\n> When you create a foreign key to a table is there an index that is\n> created on the foreign key automatically?\n\n\tNo, Postgres doesn't do it for you, because if you create (ref_id) \nreferences table.id, you will perhaps create an index on (ref_id, date) \nwhich would then fill the purpose (and other purposes), or perhaps your \ntable will have 10 rows (but postgres doesnt' know that when you create \nit) and having an index would be useless, or your table could have many \nrows but only a few distinct referenced values, in which case again the \nindex would only slow things down.\n\tPG does not presume to know better than yourself what you're gonna do \nwith your data ;)\n\tUNIQUE and PRIMARY KEY do create UNIQUE INDEXes, of course.\n", "msg_date": "Thu, 08 May 2008 22:02:47 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating a foreign key" } ]
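To make the index discussion above concrete, here is a short sketch using the table names from the first message (the index name is arbitrary): the REFERENCES clause alone creates no index on the referencing column, so one has to be added by hand if queries filter or join on it (or if deletes/updates on the referenced table need fast FK checks):

    CREATE TABLE a (id integer PRIMARY KEY);        -- PRIMARY KEY builds an implicit unique btree index
    CREATE TABLE b (
        id      integer PRIMARY KEY,
        fk_a_id integer REFERENCES a (id)           -- no index is created here automatically
    );
    CREATE INDEX b_fk_a_id_idx ON b (fk_a_id);      -- optional, only if the workload needs it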
[ { "msg_contents": "Hi, all. I want to ask what type of index is better to create for\nbigint types. I have table with bigint (bigserial) primary key. What\ntype is better to use for it? I tried btree and hash, but didn't\nnotice any differences in execution time. For GiST and GIN there is a\ntrouble that I must create operator class, so I limited myself to use\nbtree or hash. But if it's better to use gist or gin, coment are\nwelcome.\n", "msg_date": "Thu, 8 May 2008 12:00:39 -0700 (PDT)", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": true, "msg_subject": "Creating indexes" }, { "msg_contents": "Hi,\n\n> Hi, all. I want to ask what type of index is better to create for\n> bigint types. I have table with bigint (bigserial) primary key. What\n\nhttp://www.postgresql.org/docs/8.3/static/sql-createtable.html\n\nPostgreSQL automatically creates an index for each unique constraint \nand primary key constraint to enforce uniqueness. Thus, it is not \nnecessary to create an index explicitly for primary key columns.\n\n> type is better to use for it? I tried btree and hash, but didn't\n\nYou already have an index on your bigint primary key. I think it is of \ntype btree.\n\nJan\nHi,Hi, all. I want to ask what type of index is better to create forbigint types. I have table with bigint (bigserial) primary key. Whathttp://www.postgresql.org/docs/8.3/static/sql-createtable.htmlPostgreSQL automatically creates an index for each unique constraint and primary key constraint to enforce uniqueness. Thus, it is not necessary to create an index explicitly for primary key columns.type is better to use for it? I tried btree and hash, but didn'tYou already have an index on your bigint primary key. I think it is of type btree.Jan", "msg_date": "Thu, 8 May 2008 21:49:26 +0200", "msg_from": "Asche <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating indexes" }, { "msg_contents": "On Thursday 08 May 2008, Rauan Maemirov <[email protected]> wrote:\n> Hi, all. I want to ask what type of index is better to create for\n> bigint types. I have table with bigint (bigserial) primary key. What\n> type is better to use for it? I tried btree and hash, but didn't\n> notice any differences in execution time. \n\nA primary key is a unique btree index, and it's as about as good as it gets \nfor a bigint.\n\n-- \nAlan", "msg_date": "Thu, 8 May 2008 12:52:30 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating indexes" }, { "msg_contents": "> Hi, all. I want to ask what type of index is better to create for\n> bigint types. I have table with bigint (bigserial) primary key. What\n> type is better to use for it? I tried btree and hash, but didn't\n> notice any differences in execution time. For GiST and GIN there is a\n> trouble that I must create operator class, so I limited myself to use\n> btree or hash. But if it's better to use gist or gin, coment are\n> welcome.\n\n\tIf you use BIGINT, I presume you will have lots of different values, in \nthat case the best one is the btree. It is the most common and most \noptimized index type.\n\tGiST's strength is in using indexes for stuff that can't be done with a \nsimple btree : geometry, full text, ltree, etc, but gist is slower in the \ncase of indexing a simple value.\n\tGIN indexes are more compact and very fast for reads but updating is very \nslow (they are meant for mostly read-only tables).\n\tHash is a bit of a fossil. 
Also it does not support range queries, so if \nyou need that, btree is definitely better.\n\n\n", "msg_date": "Thu, 08 May 2008 22:08:17 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creating indexes" }, { "msg_contents": "On May 9, 1:49 am, [email protected] (Asche) wrote:\n> Hi,\n>\n> > Hi, all. I want to ask what type of index is better to create for\n> > bigint types. I have table with bigint (bigserial) primary key. What\n>\n> http://www.postgresql.org/docs/8.3/static/sql-createtable.html\n>\n> PostgreSQL automatically creates an index for each unique constraint  \n> and primary key constraint to enforce uniqueness. Thus, it is not  \n> necessary to create an index explicitly for primary key columns.\n>\n> > type is better to use for it? I tried btree and hash, but didn't\n>\n> You already have an index on your bigint primary key. I think it is of  \n> type btree.\n>\n> Jan\n\nAah, I understand. Thanks to all for detailed response.\n", "msg_date": "Thu, 8 May 2008 22:41:22 -0700 (PDT)", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Creating indexes" } ]
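A quick way to see the point above for yourself (the bigserial primary key already carries a btree index, so nothing further needs to be created) is to look it up in the catalog; the table name is chosen just for the example:

    CREATE TABLE events (id bigserial PRIMARY KEY, payload text);
    SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'events';
    -- returns something like:
    --   events_pkey | CREATE UNIQUE INDEX events_pkey ON events USING btree (id)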
[ { "msg_contents": "Hello,\n\nI have a strange performance problem with postgresql 8.3 (shipped with ubuntu \nhardy) and a query that seems very simple:\n\nexplain analyze \n\tSELECT * FROM (part LEFT OUTER JOIN part_lang ON part.id = part_lang.id) \n\tWHERE part.parent = 49110;\n\nquery plan here: http://front7.smartlounge.be/~kervel/queryplan.txt\n\nthe query does not return a single row\n\nthe table \"part\" and \"part_lang\" have a whole lot of tables inheriting them, \nmost of the inheriting tables only contain a few rows.\n\nthis turns this query in an append of a whole lot of seq scan/ index scan's. \nThese scans are predictably quick, but the \"append\" takes 5 seconds (and the \nnumbers of the scans do not add up to the append actual time)\n\nif i leave out the \"outer join\" performance is okay:\n SELECT * FROM part WHERE part.parent = 49110;\n\nif i then add a \"order by sequence number\" the performance is bad again:\nSELECT * FROM part WHERE part.parent = 49110 order by sequencenumber;\n\nI'm a bit stuck with this problem, and i don't know how to continue finding \nout why.\n\nDoes someone have an explanation / possible solution for this performance ?\n\nWe use a similar scheme on a lot of other projects without problems (but this \ntime, the number of tables inheriting from part is a bit bigger).\n\nThanks a lot in advance, \ngreetings,\nFrank\n\n-- \n \n=========================\nFrank Dekervel\[email protected]\n=========================\nSmartlounge\nJP Minckelersstraat 78\n3000 Leuven\nphone:+32 16 311 413\nfax:+32 16 311 410\nmobile:+32 473 943 421\n=========================\nhttp://www.smartlounge.be\n=========================\n", "msg_date": "Fri, 9 May 2008 10:09:46 +0200", "msg_from": "Frank Dekervel <[email protected]>", "msg_from_op": true, "msg_subject": "\"append\" takes a lot of time in a query" }, { "msg_contents": "Frank Dekervel <[email protected]> writes:\n> this turns this query in an append of a whole lot of seq scan/ index scan's. \n> These scans are predictably quick, but the \"append\" takes 5 seconds (and the \n> numbers of the scans do not add up to the append actual time)\n\nIt says 5 milliseconds, not 5 seconds.\n\n> Does someone have an explanation / possible solution for this performance ?\n\nRethink your schema --- this is pushing the inheritance feature far\nbeyond what it's designed to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 May 2008 10:12:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"append\" takes a lot of time in a query " }, { "msg_contents": "Hello,\n\nSmall update on this problem:\n\nWouter Verhelst came to help debugging, and he determined that the 2 seconds \nwere spent planning the query and not executing the query. (executing the \nquery is quick as seen in the query plan).\n\nTo avoid replanning this query all the time, Wouter suggest replacing the \nquery with a stored procedure (so that the query plan was saved). We did a \nproof of concept, and it seemed to work very well.\n\nAnother approach would be caching of prepared statements, but i found no \nimplementations of this on the net.\n\nWe now still need to find a way to hook the stored procs in our O-R mapper: \ngenerating them the first time a query is done (fairly easy), and \nmaking \"select procname(param1,param2)\" behave like a normal query. 
\n\nWe tried a stored procedure returning a cursor and this seemed to work, but \nwe'd like to avoid this as, to use cursors, we need to change the core logic \nof our system that decides whether to use transactions and cursors (now the \nsystem does not create a cursor if it does not expect too many rows coming \nback and so on, and no transaction if no \"update\" or \"insert\" queries have to \nbe done). \n\nSo i'll look further to see if i can make \"select foo()\" behave exactly like a \nnormal query.\n\nAnother thing we saw was this: on a project where the query generated a \n451-line query plan, the query took 30 milliseconds. On a project where the \nsame query generated a 1051-line query plan (more tables inheriting \nthe \"part\" table), the query took 2 seconds. Something of exponential \ncomplexity in the query planner ?\n\ngreetings,\nFrank\n\nOn Friday 16 May 2008 17:59:37 Frank Dekervel wrote:\n> Hello,\n>\n> Thanks for the explanation. You were right, i misread the query plan.\n> But the strange thing is this query really takes a long time (see below),\n> contrary to what the query plan indicates. This makes me believe we are\n> doing something very wrong...\n>\n> xxx => select now(); SELECT * FROM (part LEFT OUTER JOIN part_lang ON\n> part.id = part_lang.id) WHERE part.parent= 49110; select now(); now\n> -------------------------------\n> 2008-05-16 17:51:15.525056+02\n> (1 row)\n>\n> parent | id | dirindex | permissions | sequencenumber | partname | lang |\n> id | online\n> --------+----+----------+-------------+----------------+----------+------+-\n>---+-------- (0 rows)\n>\n> now\n> -------------------------------\n> 2008-05-16 17:51:17.179043+02\n> (1 row)\n>\n> As for postgresql inherited tables: we are moving to joined inheritance\n> already, but we still have a lot of \"inherited tables\" implementations. It\n> is the first time we see this kind of problem ...\n>\n> I'm the original e-mail for reference.\n>\n> thanks already !\n>\n> greetings,\n> Frank\n>\n> On Friday 09 May 2008 16:12:46 Tom Lane wrote:\n> > Frank Dekervel <[email protected]> writes:\n> > > this turns this query in an append of a whole lot of seq scan/ index\n> > > scan's. These scans are predictably quick, but the \"append\" takes 5\n> > > seconds (and the numbers of the scans do not add up to the append\n> > > actual time)\n> >\n> > It says 5 milliseconds, not 5 seconds.\n> >\n> > > Does someone have an explanation / possible solution for this\n> > > performance ?\n> >\n> > Rethink your schema --- this is pushing the inheritance feature far\n> > beyond what it's designed to do.\n> >\n> > \t\t\tregards, tom lane\n\n\t\n\n-- \n \n=========================\nFrank Dekervel\[email protected]\n=========================\nSmartlounge\nJP Minckelersstraat 78\n3000 Leuven\nphone:+32 16 311 413\nfax:+32 16 311 410\nmobile:+32 473 943 421\n=========================\nhttp://www.smartlounge.be\n=========================\n", "msg_date": "Wed, 21 May 2008 12:37:22 +0200", "msg_from": "Frank Dekervel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"append\" takes a lot of time in a query" }, { "msg_contents": "I think you're looking to return set of record or something like that.\n\nOn Wed, May 21, 2008 at 4:37 AM, Frank Dekervel\n<[email protected]> wrote:\n> Hello,\n>\n> Small update on this problem:\n>\n> Wouter Verhelst came to help debugging, and he determined that the 2 seconds\n> were spent planning the query and not executing the query. 
(executing the\n> query is quick as seen in the query plan).\n>\n> To avoid replanning this query all the time, Wouter suggest replacing the\n> query with a stored procedure (so that the query plan was saved). We did a\n> proof of concept, and it seemed to work very well.\n>\n> Another approach would be caching of prepared statements, but i found no\n> implementations of this on the net.\n>\n> We now still need to find a way to hook the stored procs in our O-R mapper:\n> generating them the first time a query is done (fairly easy), and\n> making \"select procname(param1,param2)\" behave like a normal query.\n>\n> We tried a stored procedure returning a cursor and this seemed to work, but\n> we'd like to avoid this as, to use cursors, we need to change the core logic\n> of our system that decides whether to use transactions and cursors (now the\n> system does not create a cursor if it does not expect too many rows coming\n> back and so on, and no transaction if no \"update\" or \"insert\" queries have to\n> be done).\n", "msg_date": "Wed, 21 May 2008 12:44:44 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"append\" takes a lot of time in a query" } ]
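Besides the stored-procedure route discussed above, a plain prepared statement also pays the planning cost only once per session; a sketch using the query from the first message:

    PREPARE part_children(integer) AS
        SELECT * FROM part LEFT OUTER JOIN part_lang ON part.id = part_lang.id
        WHERE part.parent = $1;

    EXECUTE part_children(49110);

This only helps if the driver keeps the session (and its prepared plans) alive between calls, and it does not make the first planning pass over the ~1000-line inheritance plan any cheaper.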
[ { "msg_contents": "I figure this subject belongs on the performance mailing list because it is\nabout partitioning, which is a performance issue.\n\nI'm working on partitioning some of the tables used by an application that\nuses OpenJPA. It turns out that OpenJPA is sensitive to the numbers\nreturned when you do an insert. So I put together a test and attached it.\nMy postgres version is 8.3.1 compiled from source.\n\nMy problem is that this:\ntest=> INSERT INTO ttt (a, b) VALUES ('5-5-08', 'test11212');\nINSERT 0 0\nTime: 21.646 ms\nneeds to show:\nINSERT 0 1\n\nor OpenJPA will not accept it. The insert works, but OpenJPA does not\nbelieve it and aborts the current transaction.\n\nIs it possible to have partitioning and have insert show the right number of\nrows inserted?\n\nThanks,\n\n--Nik", "msg_date": "Mon, 12 May 2008 12:18:57 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Partitioning: INSERT 0 0 but want INSERT 0 1" }, { "msg_contents": "I have the same problem in PG 8.2\n\nTo resolve this issue I had to create a new table with the same\nstructure than the partitioned table with a trigger for insert and\nupdate. All the operations the application have to do are directed to\nthis new table.\n\nWhen a new record is inserted in the new table the trigger insert a\nnew record with the same values into the partitioned table and then\ndelete all records from this new table. In updates operations the\ntrigger redirect the operation to the partitioned table too.\n\nWith this _not elegant_ solution our Java application is able to do its job.\n\nIf you find a better solution please let me know.\n\n----\nNeil Peter Braggio\[email protected]\n\n\nOn Tue, May 13, 2008 at 11:48 AM, Nikolas Everett <[email protected]> wrote:\n> I figure this subject belongs on the performance mailing list because it is\n> about partitioning, which is a performance issue.\n>\n> I'm working on partitioning some of the tables used by an application that\n> uses OpenJPA. It turns out that OpenJPA is sensitive to the numbers\n> returned when you do an insert. So I put together a test and attached it.\n> My postgres version is 8.3.1 compiled from source.\n>\n> My problem is that this:\n> test=> INSERT INTO ttt (a, b) VALUES ('5-5-08', 'test11212');\n> INSERT 0 0\n> Time: 21.646 ms\n> needs to show:\n> INSERT 0 1\n>\n> or OpenJPA will not accept it. The insert works, but OpenJPA does not\n> believe it and aborts the current transaction.\n>\n> Is it possible to have partitioning and have insert show the right number of\n> rows inserted?\n>\n> Thanks,\n>\n> --Nik\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n", "msg_date": "Tue, 13 May 2008 17:57:55 +1930", "msg_from": "\"Neil Peter Braggio\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioning: INSERT 0 0 but want INSERT 0 1" }, { "msg_contents": "If I can't find an answer in the next day or so I'll crack open OpenJPA and\ndisable that check. Its a very simple, if ugly, hack.\n\n--Nik\n\n\nOn 5/12/08, Neil Peter Braggio <[email protected]> wrote:\n>\n> I have the same problem in PG 8.2\n>\n> To resolve this issue I had to create a new table with the same\n> structure than the partitioned table with a trigger for insert and\n> update. 
All the operations the application have to do are directed to\n> this new table.\n>\n> When a new record is inserted in the new table the trigger insert a\n> new record with the same values into the partitioned table and then\n> delete all records from this new table. In updates operations the\n> trigger redirect the operation to the partitioned table too.\n>\n> With this _not elegant_ solution our Java application is able to do its\n> job.\n>\n> If you find a better solution please let me know.\n>\n> ----\n> Neil Peter Braggio\n> [email protected]\n>\n>\n> On Tue, May 13, 2008 at 11:48 AM, Nikolas Everett <[email protected]>\n> wrote:\n> > I figure this subject belongs on the performance mailing list because it\n> is\n> > about partitioning, which is a performance issue.\n> >\n> > I'm working on partitioning some of the tables used by an application\n> that\n> > uses OpenJPA. It turns out that OpenJPA is sensitive to the numbers\n> > returned when you do an insert. So I put together a test and attached\n> it.\n> > My postgres version is 8.3.1 compiled from source.\n> >\n> > My problem is that this:\n> > test=> INSERT INTO ttt (a, b) VALUES ('5-5-08', 'test11212');\n> > INSERT 0 0\n> > Time: 21.646 ms\n> > needs to show:\n> > INSERT 0 1\n> >\n> > or OpenJPA will not accept it. The insert works, but OpenJPA does not\n> > believe it and aborts the current transaction.\n> >\n> > Is it possible to have partitioning and have insert show the right\n> number of\n> > rows inserted?\n> >\n> > Thanks,\n> >\n> > --Nik\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> >\n> >\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIf I can't find an answer in the next day or so I'll crack open OpenJPA and disable that check.  Its a very simple, if ugly, hack.\n \n--Nik \nOn 5/12/08, Neil Peter Braggio <[email protected]> wrote:\nI have the same problem in PG 8.2To resolve this issue I had to create a new table with the same\nstructure than the partitioned table with a trigger for insert andupdate. All the operations the application have to do are directed tothis new table.When a new record is inserted in the new table the trigger insert a\nnew record with the same values into the partitioned table and thendelete all records from this new table. In updates operations thetrigger redirect the operation to the partitioned table too.With this _not elegant_ solution our Java application is able to do its job.\nIf you find a better solution please let me know.----Neil Peter [email protected] Tue, May 13, 2008 at 11:48 AM, Nikolas Everett <[email protected]> wrote:\n> I figure this subject belongs on the performance mailing list because it is> about partitioning, which is a performance issue.>> I'm working on partitioning some of the tables used by an application that\n> uses OpenJPA.  It turns out that OpenJPA is sensitive to the numbers> returned when you do an insert.  So I put together a test and attached it.> My postgres version is 8.3.1 compiled from source.>\n> My problem is that this:> test=> INSERT INTO ttt (a, b) VALUES ('5-5-08', 'test11212');> INSERT 0 0> Time: 21.646 ms> needs to show:> INSERT 0 1>> or OpenJPA will not accept it.  
The insert works, but OpenJPA does not\n> believe it and aborts the current transaction.>> Is it possible to have partitioning and have insert show the right number of> rows inserted?>> Thanks,>> --Nik>\n>>  -->  Sent via pgsql-performance mailing list ([email protected])>  To make changes to your subscription:>  http://www.postgresql.org/mailpref/pgsql-performance\n>>--Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 12 May 2008 23:46:54 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partitioning: INSERT 0 0 but want INSERT 0 1" } ]
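A rough sketch of the staging-table workaround Neil describes, with names invented for the example (ttt is the partitioned parent from the attached test, ttt_entry is the extra plain table the application writes to); because the row really is inserted into ttt_entry and the trigger returns NEW, the client sees INSERT 0 1:

    CREATE TABLE ttt_entry (LIKE ttt);

    CREATE OR REPLACE FUNCTION ttt_entry_redirect() RETURNS trigger AS $$
    BEGIN
        INSERT INTO ttt VALUES (NEW.*);   -- the existing rules/triggers on ttt route this to a partition
        RETURN NEW;                       -- keeps the reported row count at 1
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER ttt_entry_redirect BEFORE INSERT ON ttt_entry
        FOR EACH ROW EXECUTE PROCEDURE ttt_entry_redirect();

The copies left behind in ttt_entry still have to be deleted periodically (or by a second trigger), which is why the thread calls this workable but not elegant.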
[ { "msg_contents": "Inheritted a number of servers and I am starting to look into the hardware.\n\nSo far what I know from a few of the servers\nRedhat servers.\n15K rpm disks, 12GB to 32GB of RAM.\nAdaptec 2120 SCSI controller (64MB of cache).\n\nThe servers have mostly have 12 drives in RAID 10.\nWe are going to redo one machine to compare RAID 10 vs RAID 50. \nMostly to see if the perfomance is close, the space gain may be usefull.\n\nThe usage pattern is mostly large set of transactions ie bulk loads of \nmillions of rows, queries involving tens of millions of rows. There are \nusually only a handfull of connections at once, but I have seen it go up to \n10 in the few weeks I have been at the new job. The rows are not very wide. \nMostly 30 to 90 bytes. The few that will be wider will be summary tables \nthat will be read straight up without joins and indexed on the fields we \nwill be quering them. Most of the connections will all be doing bulk \nreads/updates/writes.\n\nSome of the larger tables have nearly 1 billion rows and most have tens of \nmillions. Most DBs are under 500GB, since they had split the data as to keep \neach machine somewhat evenly balanced compared to the others.\n\nI noticed the machine we are about to redo doesn't have a BBU.\n\nA few questions.\nWill it pay to go to a controller with higher memory for existing machines? \nThe one machine I am about to redo has PCI which seems to \nsomewhat limit our options. So far I have found another Adaptec controller, \n2130SLP, that has 128MB and is also just plain PCI. I need to decide whether \nto buy the BBU for the 2120 or get a new controller with more memory and a \nBBU. For DBs with bulk updates/inserts is 128MB write cache even enough to \nachieve reasonable rates? (ie at least 5K inserts/sec) \n\nA broader question\nFor large setups (ie 500GB+ per server) does it make sense to try to get a \ncontroller in a machine or do SANs have better throughput even if at a much \nhigher cost?\n\nFor future machines I plan to look into controllers with at least 512MB, \nwhich likely will be PCI-X/PCI-e.. not seen anything with large caches for \nPCI. Also the machines in question have SCSI drives, not SAS. I believe the \nmost recent machine has SAS, but the others may be 15K rpm scsi \n\nWhether a SAN or just an external enclosure is 12disk enough to substain 5K \ninserts/updates per second on rows in the 30 to 90bytes territory? At \n5K/second inserting/updating 100 Million records would take 5.5 hours. That \nis fairly reasonable if we can achieve. Faster would be better, but it \ndepends on what it would cost to achieve.\n", "msg_date": "Mon, 12 May 2008 22:04:03 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "RAID controllers for Postgresql on large setups" }, { "msg_contents": "On Mon, 12 May 2008 22:04:03 -0400\nFrancisco Reyes <[email protected]> wrote:\n\n> Inheritted a number of servers and I am starting to look into the\n> hardware.\n> \n> So far what I know from a few of the servers\n> Redhat servers.\n> 15K rpm disks, 12GB to 32GB of RAM.\n> Adaptec 2120 SCSI controller (64MB of cache).\n> \n> The servers have mostly have 12 drives in RAID 10.\n> We are going to redo one machine to compare RAID 10 vs RAID 50. \n> Mostly to see if the perfomance is close, the space gain may be\n> usefull.\n\nMost likely you have a scsi onboard as well I am guessing. You\nshouldn't bother with the 2120. 
My tests show it is a horrible\ncontroller for random writes.\n\nComparing software raid on an LSI onboard for an IBM 345 versus a 2120s\nusing hardware raid 10, the software raid completely blew the adaptec\naway.\n\nJoshua D. Drake\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate", "msg_date": "Mon, 12 May 2008 19:11:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "Joshua,\n\ndid you try to run the 345 on an IBM ServeRAID 6i?\nI have one in mine, but I never actually ran any speed test.\nDo you have any benchmarks that I could run and compare?\n\nbest regards,\nchris\n-- \nchris ruprecht\ndatabase grunt and bit pusher extraordina�re\n\n\nOn May 12, 2008, at 22:11, Joshua D. Drake wrote:\n\n> On Mon, 12 May 2008 22:04:03 -0400\n> Francisco Reyes <[email protected]> wrote:\n>\n>> Inheritted a number of servers and I am starting to look into the\n>>\n\n[snip]\n\n> Comparing software raid on an LSI onboard for an IBM 345 versus a \n> 2120s\n> using hardware raid 10, the software raid completely blew the adaptec\n> away.\n\n[more snip]", "msg_date": "Mon, 12 May 2008 22:56:20 -0400", "msg_from": "Chris Ruprecht <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "Joshua D. Drake writes:\n\n> Most likely you have a scsi onboard as well I am guessing.\n\nWill check.\n\n\n> shouldn't bother with the 2120. My tests show it is a horrible\n> controller for random writes.\n\nThanks for the feedback..\n \n> Comparing software raid on an LSI onboard for an IBM 345 versus a 2120s\n> using hardware raid 10, the software raid completely blew the adaptec\n> away.\n\nAny PCI controller you have had good experience with?\nHow any other PCI-X/PCI-e controller that you have had good results?\n", "msg_date": "Mon, 12 May 2008 23:24:09 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "Chris Ruprecht wrote:\n> Joshua,\n> \n> did you try to run the 345 on an IBM ServeRAID 6i?\n\nNo the only controllers I had at the time were the 2120 and the LSI on \nboard that is limited to RAID 1. I put the drives on the LSI in JBOD and \nused Linux software raid.\n\nThe key identifier for me was using a single writer over 6 (RAID 10) \ndrives with the 2120 I could get ~ 16 megs a second. The moment I went \nto multiple writers it dropped exponentially.\n\nHowever with software raid I was able to sustain ~ 16 megs a second over \nmultiple threads. I stopped testing at 4 threads when I was getting 16 \nmegs per thread :). I was happy at that point.\n\n\nJoshua D. Drake\n\n\n", "msg_date": "Mon, 12 May 2008 21:40:15 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "Francisco Reyes wrote:\n> Joshua D. 
Drake writes:\n> \n> \n> Any PCI controller you have had good experience with?\n\nI don't have any PCI test data.\n\n> How any other PCI-X/PCI-e controller that you have had good results?\n\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n\nIf you are digging for used see if you can pick up a 64xx series from \nHP. A very nice card that can generally be had for reasonable dollars.\n\nhttp://cgi.ebay.com/HP-Compaq-SMART-ARRAY-6402-CTRL-128MB-SCSI-273915-B21_W0QQitemZ120259020765QQihZ002QQcategoryZ11182QQssPageNameZWDVWQQrdZ1QQcmdZViewItem\n\nIf you want new, definitely go with the P800.\n\nSincerely,\n\nJoshua D. Drake\n\n", "msg_date": "Mon, 12 May 2008 21:42:16 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "On Mon, 12 May 2008, Francisco Reyes wrote:\n\n> We are going to redo one machine to compare RAID 10 vs RAID 50. Mostly to \n> see if the perfomance is close, the space gain may be usefull.\n\nGood luck with that, you'll need it.\n\n> Will it pay to go to a controller with higher memory for existing \n> machines? The one machine I am about to redo has PCI which seems to \n> somewhat limit our options. So far I have found another Adaptec \n> controller, 2130SLP, that has 128MB and is also just plain PCI. I need \n> to decide whether to buy the BBU for the 2120 or get a new controller \n> with more memory and a BBU.\n\nThese options are both pretty miserable. I hear rumors that Adaptec makes \ncontrollers that work OK under Linux , I've never seen one. A quick \nsearch suggests both the 2120 and 2130SLP are pretty bad. The suggestions \nJoshua already gave look like much better ideas.\n\nConsidering your goals here, I personally wouldn't put a penny into a \nsystem that wasn't pretty modern. I think you've got too aggressive a \ntarget for database size combined with commit rate to be playing with \nhardware unless it's new enough to support PCI-Express cards.\n\n> For DBs with bulk updates/inserts is 128MB write cache even enough to \n> achieve reasonable rates? (ie at least 5K inserts/sec)\n\nThis really depends on how far the data is spread across disk. You'll \nprobably be OK on inserts. Let's make a wild guess and say we fit 80 \n100-byte records in each 8K database block. If you have 5000/second, \nthat's 63 8K blocks/second which works out to 0.5MB/s of writes. Pretty \neasy, unless there's a lot of indexes involved as well. But an update can \nrequire reading in a 8K block, modifying it, then writing another back out \nagain. In the worst case, if your data was sparse enough (which is \nfrighteningly possible when I hear you mention a billion records) that \nevery update was hitting a unique block, 5K/sec * 8K = 39MB/second of \nreads *and* writes. That doesn't sound like horribly much, but that's \npretty tough if there's a lot of seeking involved in there.\n\nNow, in reality, many of your small records will be clumped into each \nblock on these updates and a lot of writes are deferred until checkpoint \ntime which gives more time to aggregate across shared blocks. You'll \nactually be somewhere in the middle of 0.5 and 78MB/s, which is a pretty \nwide range. 
It's hard to estimate too closely here without a lot more \ninformation about the database, the application, what version of \nPostgreSQL you're using, all sorts of info.\n\nYou really should be thinking in terms of benchmarking the current \nhardware first to try and draw some estimates you can extrapolate from. \nTheoretical comments are a very weak substitute for real-world \nbenchmarking on the application itself, even if that benchmarking is done \non less capable hardware. Run some tests, measure your update rate while \nalso measuring real I/O rate with vmstat, compare that I/O rate to the \ndisk's sequential/random performance as measured via bonnie++, and now \nthere's a set of figures that mean something you can estimate based on.\n\n> For large setups (ie 500GB+ per server) does it make sense to try to get a \n> controller in a machine or do SANs have better throughput even if at a much \n> higher cost?\n\nThat's not a large setup nowadays, certainly not large enough that a SAN \nwould be required to get reasonable performance. You may need an array \nthat's external to the server itself, but a SAN includes more than just \nthat.\n\nThere are a lot of arguments on both sides for using SANs; see \nhttp://wiki.postgresql.org/wiki/Direct_Storage_vs._SAN for a summary and \nlink to recent discussion where this was thrashed about heavily. If \nyou're still considering RAID5 and PCI controllers you're still a bit in \ndenial about the needs of your situation here, but jumping right from \nthere to assuming you need a SAN is likely overkill.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 13 May 2008 01:58:35 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "On Mon, May 12, 2008 at 8:04 PM, Francisco Reyes <[email protected]> wrote:\n> Inheritted a number of servers and I am starting to look into the hardware.\n>\n> So far what I know from a few of the servers\n> Redhat servers.\n> 15K rpm disks, 12GB to 32GB of RAM.\n> Adaptec 2120 SCSI controller (64MB of cache).\n\nConsidering the generally poor performance of adaptec RAID\ncontrollers, you'd probably be better off with 12 SATA drives hooked\nup to an escalade or Areca card (or cards). Since you seem to want a\nlot of storage, a large array of SATA disks may be a better balance\nbetween performance and economy.\n\n> The servers have mostly have 12 drives in RAID 10.\n> We are going to redo one machine to compare RAID 10 vs RAID 50. Mostly to\n> see if the perfomance is close, the space gain may be usefull.\n\nSee the remark about SATA drives above. With 12 750Gig drives, you'd\nhave 6*750G of storage in RAID-10, or about 4.5 Terabytes of redundant\nstorage.\n\n> The usage pattern is mostly large set of transactions ie bulk loads of\n> millions of rows, queries involving tens of millions of rows.\n> A few questions.\n> Will it pay to go to a controller with higher memory for existing machines?\n\nThen no matter how big your cache on your controller, it's likely NOT\nbig enough to ever hope to just swallow the whole set at once. Bigger\nmight be better for a lot of things, but for loading, a good\ncontroller is more important. 
An increase from 64M to 256M is not\nthat big in comparison to how big your datasets are likely to be.\n\n> The one machine I am about to redo has PCI which seems to somewhat limit our\n> options.\n\nYou do know that you can plug a PCI-X card into a PCI slot, right?\n(see the second paragraph here:\nhttp://en.wikipedia.org/wiki/PCI-X#Technical_description) So, you can\nget a nice card today, and if needs be, a better server to toss it in\ntomorrow.\n\nWhere I work we have a nice big machine in production with a very nice\nPCI-X card (Not sure which one, my cohort ordered it) and we wanted\nout in house testing machine to have the same card, but that machine\nis much less powerful. Same card fit, so we can get some idea about\nI/O patterns on the test box before beating on production.\n\n> A few questions.\n> Will it pay to go to a controller with higher memory for existing machines?\n\nPay more attention to the performance metrics the card gets from\npeople testing it here. Areca, Escalade / 3Ware, and LSI get good\nreviews, with LSI being solid but a little slower than the other two\nfor most stuff.\n\n> For large setups (ie 500GB+ per server) does it make sense to try to get a\n> controller in a machine or do SANs have better throughput even if at a much\n> higher cost?\n\nSANs generally don't have much better performance, and cost MUCH more\nper meg stored. They do however have some nice management options.\nIf a large number of disks in discrete machines presents a problem of\nmaintenance, the SAN might help, but given the higher cost, it's often\njust cheaper to keep a box of disks handy and have a hardware person\nreplace them.\n\n> For future machines I plan to look into controllers with at least 512MB,\n> which likely will be PCI-X/PCI-e.. not seen anything with large caches for\n> PCI.\n\nSee remark about PCI-X / PCI\n\n> Also the machines in question have SCSI drives, not SAS. I believe the\n> most recent machine has SAS, but the others may be 15K rpm scsi\n> Whether a SAN or just an external enclosure is 12disk enough to substain 5K\n> inserts/updates per second on rows in the 30 to 90bytes territory?\n\nYou'll only know by testing, and a better RAID controller can make a\nWORLD of difference here. Just make sure whatever controller you get\nhas battery backed cache, and preferably a fair bit of it. Some\ncontrollers can handle 1G+ of memory.\n", "msg_date": "Tue, 13 May 2008 02:17:41 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "\n> Will it pay to go to a controller with higher memory for existing \n> machines? The one machine I am about to redo has PCI which seems to \n> somewhat limit our options.\n\n\tUrgh.\n\n\tYou say that like you don't mind having PCI in a server whose job is to \nperform massive query over large data sets.\n\n\tYour 12 high-end expensive SCSI drives will have a bandwidth of ... say \n800 MB/s total (on reads), perhaps more.\n\tPCI limits you to 133 MB/s (theoretical), actual speed being around \n100-110 MB/s.\n\n\tConclusion : 85% of the power of your expensive drives is wasted by \nhooking them up to the slow PCI bus ! 
(and hence your money is wasted too)\n\n\tFor instance here I have a box with PCI, Giga Ethernet and a software \nRAID5 ; reading from the RAID5 goes to about 110 MB/s (actual disk \nbandwidth is closer to 250 but it's wasted) ; however when using the giga \nethernet to copy a large file over a LAN, disk and ethernet have to share \nthe PCI bus, so throughput falls to 50 MB/s. Crummy, eh ?\n\n\t=> If you do big data imports over the network, you lose 50% speed again \ndue to the bus sharing between ethernet nic and disk controller.\n\n\tIn fact for bulk IO a box with 2 SATA drives would be just as fast as \nyour monster RAID, lol.\n\n\tAnd for bulk imports from network a $500 box with a few SATA drives and a \ngiga-ethernet, all via PCIexpress (any recent Core2 chipset) will be \nfaster than your megabuck servers.\n\n\tLet me repeat this : at the current state of SATA drives, just TWO of \nthem is enough to saturate a PCI bus. I'm speaking desktop SATA drives, \nnot high-end SCSI ! (which is not necessarily faster for pure throughput \nanyway).\n\tAdding more drives will help random reads/writes but do nothing for \nthroughput since the tiny PCI pipe is choking.\n\n\tSo, use PCIe, PCIx, whatever, but get rid of the bottleneck.\n\tYour money is invested in disk drives... keep those, change your RAID \ncontroller which sucks anyway, and change your motherboard ...\n\n\tIf you're limited by disk throughput (or disk <-> giga ethernet PCI bus \ncontention), you'll get a huge boost by going PCIe or PCIx. You might even \nneed less servers.\n\n> For future machines I plan to look into controllers with at least 512MB, \n> which likely will be PCI-X/PCI-e..\n\n> not seen anything with large caches for PCI.\n\n\tThat's because high performance != PCI\n\n> Whether a SAN or just an external enclosure is 12disk enough to substain \n> 5K inserts/updates per second on rows in the 30 to 90bytes territory? At \n> 5K/second inserting/updating 100 Million records would take 5.5 hours. \n> That is fairly reasonable if we can achieve. Faster would be better, but \n> it depends on what it would cost to achieve.\n\n\tIf you mean 5K transactions with begin / insert or update 1 row / commit, \nthat's a lot, and you are going to need cache, BBU, and 8.3 so fsync isn't \na problem anymore.\n\tOn your current setup with 15K drives if you need 1 fsync per INSERT you \nwon't do more than 250 per second, which is very limiting... PG 8.3's \"one \nfsync per second instead of one at each commit\" feature is a really cheap \nalternative to a BBU (not as good as a real BBU, but much better than \nnothing !)\n\n\tIf you mean doing large COPY or inserting/updating lots of rows using one \nSQL statement, you are going to need disk bandwidth.\n\n\tFor instance if you have your 100M x 90 byte rows + overhead, that's \nabout 11 GB\n\tThe amount of data to write is twice that because of the xlog, so 22 GB \nto write, and 11 GB to read, total 33 GB.\n\n\tOn your setup you have a rather low 110 MB/s throughput it would take a \nbit more than 3 min 20 s. With 800 MB/s bandwidth it would take 45 \nseconds. 
(but I don't know if Postgres can process data this fast, \nalthough I'd say probably).\n\tOf course if you have many indexes which need to be updated this will add \nrandom IO and more WAL traffic to the mix.\n\tCheckpoints andbgwriter also need to be tuned so they don't kill your \nperformance when writing lots of data.\n\n\tFor your next servers as the other on the list will tell you, a good RAID \ncard, and lots of SATA drives is a good choice. SATA is cheap, so you can \nget more drives for the same price, which means more bandwidth :\n\nhttp://tweakers.net/reviews/557/17/comparison-of-nine-serial-ata-raid-5-adapters-pagina-17.html\n\n\tOf course none of those uses PCI.\n\tRAID5 is good for read speed, and big sequential writes. So if the only \nthing that you do is load up a multi-gigabyte dump and process it, it's \ngood.\n\tNow if you do bulk UPDATEs (like updating all the rows in one of the \npartitions of your huge table) RAID5 is good too.\n\tHowever RAID5 will choke and burn on small random writes, which will come \n from UPDATing random rows in a large table, updating indexes, etc. Since \nyou are doing this apparently, RAID5 is therefore NOT advised !\n\n\tAlso consider the usual advice, like CLUSTER, or when you load a large \namount of data in the database, COPY it to a temp table, then INSERT it in \nthe main table with INSERT INTO table SELECT FROM temp_table ORDER BY \n(interesting_fields). If the \"interesting_fields\" are something like the \ndate and you often select or update on a date range, for instance, you'll \nget more performance if all the rows from the same day are close on disk.\n\n\tHave you considered Bizgres ?\n\n\n\n\n\n", "msg_date": "Tue, 13 May 2008 11:48:29 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "PFC writes:\n\n> \tYou say that like you don't mind having PCI in a server whose job is to \n> perform massive query over large data sets.\n\nI am in my 4th week at a new job. Trying to figure what I am working with.\n>From what I see I will likely get as much improvement from new hardware as \nfrom re-doing some of the database design. Can't get everything done at \nonce, not to mention I have to redo one machine sooner rather than later so \nI need to prioritize.\n\n>In fact for bulk IO a box with 2 SATA drives would be just as fast as \n> your monster RAID, lol.\n\nI am working on setting up a standard test based on the type of operations \nthat the company does. This will give me a beter idea. Specially I will work \nwith the developers to make sure the queries I create for the benchmark are \nrepresentative of the workload.\n \n>Adding more drives will help random reads/writes but do nothing for \n> throughput since the tiny PCI pipe is choking.\n\nUnderstood, but right now I have to use the hardware they already have. Just \ntrying to make the most of it. I believe another server is due in some \nmonths so then I can better plan.\n\nIn your opinion if we get a new machine with PCI-e, at how many spindles \nwill the SCSI random access superiority start to be less notable? 
Specially \ngiven the low number of connections we usually have running against these \nmachines.\n \n>If you mean doing large COPY or inserting/updating lots of rows using one \n> SQL statement, you are going to need disk bandwidth.\n\nWe are using one single SQL statement.\n \n> http://tweakers.net/reviews/557/17/comparison-of-nine-serial-ata-raid-5-adapters-pagina-17.html\n\nI have heard great stories about Areca controllers. That is definitely one \nin my list to research and consider.\n \n> \tHowever RAID5 will choke and burn on small random writes, which will come \n> from UPDATing random rows in a large table, updating indexes, etc. Since \n> you are doing this apparently, RAID5 is therefore NOT advised !\n\nI thought I read a while back in this list that as the number of drives \nincreased that RAID 5 was less bad. Say an external enclosure with 20+ \ndrives.\n\n \n>Have you considered Bizgres ?\n\nYes. In my todo list, to check it further. I have also considered Greenplums \nmay DB offering that has clustering, but when I initially mentioned it there \nwas some reluctance because of cost. Also will look into Enterprise DB.\n\nRight now I am trying to learn usage patterns, what DBs need to be \nre-designed and what hardware I have to work with. Not to mention learning \nwhat all these tables are. Also need to make time to research/get a good \nER-diagram tool and document all these DBs. :(\n", "msg_date": "Tue, 13 May 2008 08:00:25 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "On Tue, May 13, 2008 at 8:00 AM, Francisco Reyes <[email protected]> wrote:\n> PFC writes:\n>\n>\n> > You say that like you don't mind having PCI in a server whose job\n> is to perform massive query over large data sets.\n> >\n>\n> I am in my 4th week at a new job. Trying to figure what I am working with.\n> From what I see I will likely get as much improvement from new hardware as\n> from re-doing some of the database design. Can't get everything done at\n> once, not to mention I have to redo one machine sooner rather than later so\n> I need to prioritize.\n>\n>\n>\n> > In fact for bulk IO a box with 2 SATA drives would be just as fast as\n> your monster RAID, lol.\n> >\n>\n> I am working on setting up a standard test based on the type of operations\n> that the company does. This will give me a beter idea. Specially I will work\n> with the developers to make sure the queries I create for the benchmark are\n> representative of the workload.\n>\n>\n>\n> > Adding more drives will help random reads/writes but do nothing for\n> throughput since the tiny PCI pipe is choking.\n> >\n>\n> Understood, but right now I have to use the hardware they already have.\n> Just trying to make the most of it. I believe another server is due in some\n> months so then I can better plan.\n>\n> In your opinion if we get a new machine with PCI-e, at how many spindles\n> will the SCSI random access superiority start to be less notable? Specially\n> given the low number of connections we usually have running against these\n> machines.\n>\n>\n>\n> > However RAID5 will choke and burn on small random writes, which\n> will come from UPDATing random rows in a large table, updating indexes,\n> etc. Since you are doing this apparently, RAID5 is therefore NOT advised !\n> >\n>\n> I thought I read a while back in this list that as the number of drives\n> increased that RAID 5 was less bad. 
Say an external enclosure with 20+\n> drives.\n\nmaybe, but I don't think very many people run that many drives in a\nraid 5 configuration...too dangerous. with 20 drives in a single\nvolume, you need to be running raid 10 or raid 6. 20 drive raid 50 is\npushing it as well..I'd at least want a hot spare.\n\nmerlin\n", "msg_date": "Tue, 13 May 2008 08:07:22 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "\nOn May 12, 2008, at 10:04 PM, Francisco Reyes wrote:\n\n> Adaptec 2120 SCSI controller (64MB of cache).\n>\n> The servers have mostly have 12 drives in RAID 10.\n> We are going to redo one machine to compare RAID 10 vs RAID 50. \n> Mostly to see if the perfomance is close, the space gain may be \n> usefull.\n\nwith only 64Mb of cache, you will see degradation of performance. \nfrom my experience, the adaptec controllers are not the best choice, \nbut that's mostly FreeBSD experience. And if you don't have a BBU, \nyou're not benefitting from the write-back cache at all so it is kind \nof moot.\n\nIf you want to buy a couple of 2230SLP cards with 256Mb of RAM, I have \nthem for sale. They're identical to the 2130SLP but have two SCSI \nchannels per card instead of one. they both have BBUs, and are in \nworking condition. I retired them in favor of an external RAID \nattached via Fibre Channel.\n\n", "msg_date": "Tue, 13 May 2008 11:01:34 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "\nOn May 12, 2008, at 11:24 PM, Francisco Reyes wrote:\n\n> Any PCI controller you have had good experience with?\n> How any other PCI-X/PCI-e controller that you have had good results?\n\nThe LSI controllers are top-notch, and always my first choice. They \nhave PCI-X and PCI-e versions.\n\n", "msg_date": "Tue, 13 May 2008 11:02:53 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "PFC wrote:\n> PCI limits you to 133 MB/s (theoretical), actual speed being \n> around 100-110 MB/s.\nMany servers do have more than one bus. You have to process that data \ntoo so its not going to be as much of a limit as you are suggesting. It \nmay be possible to stream a compressed data file to the server and copy \nin from that after decompression, which will free LAN bandwidth. Or \neven if you RPC blocks of compressed data and decompress in the proc and \ninsert right there.\n\n> On your current setup with 15K drives if you need 1 fsync per \n> INSERT you won't do more than 250 per second, which is very limiting... \nWell, thats 250 physical syncs. But if you have multiple insert streams \n(for group commit), or can batch the rows in each insert or copy, its \nnot necessarily as much of a problem as you seem to be implying. \nParticularly if you are doing the holding table trick.\n\nJames\n\n", "msg_date": "Tue, 13 May 2008 21:12:58 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": ">\n>> \tYou say that like you don't mind having PCI in a server whose job is \n>> to perform massive query over large data sets.\n>\n> I am in my 4th week at a new job. 
Trying to figure what I am working \n> with.\n\n\tLOOL, ok, hehe, not exactly the time to have a \"let's change everything\" \nfit ;)\n\n> From what I see I will likely get as much improvement from new hardware \n> as from re-doing some of the database design. Can't get everything done \n> at once, not to mention I have to redo one machine sooner rather than \n> later so I need to prioritize.\n>\n>> In fact for bulk IO a box with 2 SATA drives would be just as fast as \n>> your monster RAID, lol.\n>\n> I am working on setting up a standard test based on the type of \n> operations that the company does. This will give me a beter idea. \n> Specially I will work with the developers to make sure the queries I \n> create for the benchmark are representative of the workload.\n\n\twatching vmstat (or iostat) while running a very big seq scan query will \ngive you information about the reading speed of your drives.\n\tSame for writes, during one of your big updates, watch vmstat, you'll \nknow if you are CPU bound or IO bound...\n\n- one core at 100% -> CPU bound\n- lots of free CPU but lots of iowait -> disk bound\n\t- disk throughput decent (in your setup, 100 MB/s) -> PCI bus saturation\n\t- disk throughput miserable (< 10 MB/s) -> random IO bound (either random \nreads or fsync() or random writes depending on the case)\n\t\t\n> In your opinion if we get a new machine with PCI-e, at how many spindles \n> will the SCSI random access superiority start to be less notable? \n> Specially given the low number of connections we usually have running \n> against these machines.\n\n\tSorting of random reads depends on multiple concurrent requests (which \nyou don't have). Sorting of random writes does not depend on concurrent \nrequests so, you'll benefit on your updates. About SCSI vs SATA vs number \nof spindles : can't answer this one.\n\n> We are using one single SQL statement.\n\n\tOK, so forget about fsync penalty, but do tune your checkpoints so they \nare not happening all the time... and bgwriter etc.\n\n\n\n", "msg_date": "Tue, 13 May 2008 22:46:04 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" }, { "msg_contents": "PFC schrieb:\n> PCI limits you to 133 MB/s (theoretical), actual speed being around \n> 100-110 MB/s.\n\n\"Current\" PCI 2.1+ implementations allow 533MB/s (32bit) to 1066MB/s \n(64bit) since 6-7 years ago or so.\n\n> For instance here I have a box with PCI, Giga Ethernet and a \n> software RAID5 ; reading from the RAID5 goes to about 110 MB/s (actual \n> disk bandwidth is closer to 250 but it's wasted) ; however when using \n> the giga ethernet to copy a large file over a LAN, disk and ethernet \n> have to share the PCI bus, so throughput falls to 50 MB/s. Crummy, eh ?\n\nSounds like a slow Giga Ethernet NIC...\n\n> Let me repeat this : at the current state of SATA drives, just TWO \n> of them is enough to saturate a PCI bus. I'm speaking desktop SATA \n> drives, not high-end SCSI ! (which is not necessarily faster for pure \n> throughput anyway).\n> Adding more drives will help random reads/writes but do nothing for \n> throughput since the tiny PCI pipe is choking.\n\nIn my experience, SATA drives are very slow for typical database work \n(which is heavy on random writes). 
They often have very slow access \ntimes, bad or missing NCQ implementation (controllers / SANs as well) \nand while I am not very familiar with the protocol differences, they \nseem to add a hell of a lot more latency than even old U320 SCSI drives.\n\nSequential transfer performance is a nice indicator, but not very \nuseful, since most serious RAID arrays will have bottlenecks other than \nthe theoretical cumulated transfer rate of all the drives (from \ncontroller cache speed to SCSI bus to fibre channel). Thus, lower \nsequential transfer rate and lower access times scale much better.\n\n>> Whether a SAN or just an external enclosure is 12disk enough to \n>> substain 5K inserts/updates per second on rows in the 30 to 90bytes \n>> territory? At 5K/second inserting/updating 100 Million records would \n>> take 5.5 hours. That is fairly reasonable if we can achieve. Faster \n>> would be better, but it depends on what it would cost to achieve.\n\n5K/s inserts (with no indexes) are easy with PostgreSQL and typical \n(current) hardware. We are copying about 175K rows/s with our current \nserver (Quad core Xeon 2.93GHz, lots of RAM, meagre performance SATA SAN \nwith RAID-5 but 2GB writeback cache). Rows are around 570b each on \naverage. Performance is CPU-bound with a typical number of indexes on \nthe table and much lower than 175K/s though, for single row updates we \nget about 9K/s per thread (=5.6MB/s) and that's 100% CPU-bound on the \nserver - if we had to max this out, we'd thus use several clients in \nparallel and/or collect inserts in text files and make bulk updates \nusing COPY. The slow SAN isn't a problem now.\n\nOur SATA SAN suffers greatly when reads are interspersed with writes, \nfor that you want more spindles and faster disks.\n\nTo the OP I have 1 hearty recommendation: if you are using the \nRAID-functionality of the 2120, get rid of it. If you can wipe the \ndisks, try using Linux software-RAID (yes, it's an admin's nightmare \netc. but should give much better performance even though the 2120's \nplain SCSI won't be hot either) and then start tuning your PostgreSQL \ninstallation (there's much to gain here). Your setup looks decent \notherwise for what you are trying to do (but you need a fast CPU) and \nyour cheapest upgrade path would be a decent RAID controller or at least \na decent non-RAID SCSI controller for software-RAID (at least 2 ports \nfor 12 disks), although the plain PCI market is dead.\n\n-mjy\n", "msg_date": "Tue, 27 May 2008 00:44:05 +0200", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAID controllers for Postgresql on large setups" } ]
[ { "msg_contents": "Hi,\n\nWe want to migrate from postgres 8.1.3 to postgres 8.3.1.\nCan anybody list out the installation steps to be followed for migration.\nDo we require to take care of something specially.\n\nThanks in advance\n~ Gauri\n\nHi,We want to migrate from postgres 8.1.3 to postgres 8.3.1.Can anybody list out the installation steps to be followed for migration.Do we require to take care of something specially.\nThanks in advance~ Gauri", "msg_date": "Tue, 13 May 2008 11:10:25 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Installation Steps to migrate to Postgres 8.3.1" }, { "msg_contents": "> We want to migrate from postgres 8.1.3 to postgres 8.3.1.\n> Can anybody list out the installation steps to be followed for migration.\n> Do we require to take care of something specially.\n\nPerform a pg_dump, do a restore and validate your sql-queries on a test-server.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Tue, 13 May 2008 08:42:19 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation Steps to migrate to Postgres 8.3.1" }, { "msg_contents": "On Mon, May 12, 2008 at 11:40 PM, Gauri Kanekar\n<[email protected]> wrote:\n> Hi,\n>\n> We want to migrate from postgres 8.1.3 to postgres 8.3.1.\n> Can anybody list out the installation steps to be followed for migration.\n> Do we require to take care of something specially.\n\nFirst, I'd recommend updating your 8.1.x install to 8.1.11 or whatever\nthe latest is right now.\n\nThere are some ugly bugs hiding in 8.1.3 if I remember correctly (Tom\njust mentioned one that could do things like leaving orphaned objects\nin the db in another thread.) It's always a good idea to keep up to\ndate on the updates of pgsql. Some updates aren't critical, but most\nearly ones in the 7.x through 8.1 tended to have a lot of bugs fixed\nin them in the first few updates.\n\nThen, your migration to 8.3.x can be done a bit more leisurely and\nwell planned and tested, without putting your current data in danger.\n", "msg_date": "Tue, 13 May 2008 02:25:44 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation Steps to migrate to Postgres 8.3.1" }, { "msg_contents": "Hi,\nAlong these lines, the usual upgrade path is a pg_dump/pg_restore set.\nHowever, what if your database is large (> 50GB), and you have to\nminimize your downtime (say less than an hour or two). Any suggestions\non how to handle that kind of situation? 
It sure would be nice to have\nsome kind of tool to update in-place a database, though I know that's\nnot a likely path.\n\nDoug\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Scott\nMarlowe\nSent: Tuesday, May 13, 2008 4:26 AM\nTo: Gauri Kanekar\nCc: [email protected]\nSubject: Re: [PERFORM] Installation Steps to migrate to Postgres 8.3.1\n\nOn Mon, May 12, 2008 at 11:40 PM, Gauri Kanekar\n<[email protected]> wrote:\n> Hi,\n>\n> We want to migrate from postgres 8.1.3 to postgres 8.3.1.\n> Can anybody list out the installation steps to be followed for\nmigration.\n> Do we require to take care of something specially.\n\nFirst, I'd recommend updating your 8.1.x install to 8.1.11 or whatever\nthe latest is right now.\n\nThere are some ugly bugs hiding in 8.1.3 if I remember correctly (Tom\njust mentioned one that could do things like leaving orphaned objects\nin the db in another thread.) It's always a good idea to keep up to\ndate on the updates of pgsql. Some updates aren't critical, but most\nearly ones in the 7.x through 8.1 tended to have a lot of bugs fixed\nin them in the first few updates.\n\nThen, your migration to 8.3.x can be done a bit more leisurely and\nwell planned and tested, without putting your current data in danger.\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 13 May 2008 08:00:46 -0400", "msg_from": "\"Knight, Doug\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation Steps to migrate to Postgres 8.3.1" }, { "msg_contents": "On Tue, May 13, 2008 at 6:00 AM, Knight, Doug <[email protected]> wrote:\n> Hi,\n> Along these lines, the usual upgrade path is a pg_dump/pg_restore set.\n> However, what if your database is large (> 50GB), and you have to\n> minimize your downtime (say less than an hour or two). Any suggestions\n> on how to handle that kind of situation? It sure would be nice to have\n> some kind of tool to update in-place a database, though I know that's\n> not a likely path.\n\nlook up Slony\n", "msg_date": "Tue, 13 May 2008 08:37:59 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Installation Steps to migrate to Postgres 8.3.1" } ]
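Since the advice above boils down to pg_dump from the old cluster, restore into 8.3, then validate the queries on a test server, one thing worth checking during that validation is that 8.3 removed many implicit casts to text, which is one of the more common sources of breakage when leaving 8.1. A hypothetical example of the kind of adjustment that can be needed (table and column names are made up):

  -- worked on 8.1, where the integer literal was implicitly cast to text;
  -- on 8.3 it fails with: operator does not exist: character varying = integer
  SELECT * FROM orders WHERE order_code = 1234;

  -- portable form: make the comparison type explicit
  SELECT * FROM orders WHERE order_code = '1234';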
[ { "msg_contents": "Hi everybody,\n\nI'm fairly new to PostgreSQL and I have a problem with\na query:\n\nSELECT * FROM \"LockerEvents\" LIMIT 10000 OFFSET\n10990000\n\nThe table LockerEvents has 11 Mlillions records on it\nand this query takes about 60 seconds to complete.\nMoreover, even after making for each column in the\ntable a index the EXPLAIN still uses sequential scan\ninstead of indexes.\n\nThe EXPLAIN is:\n\"Limit (cost=100245579.54..100245803.00 rows=10000\nwidth=60) (actual time=58414.753..58482.661 rows=10000\nloops=1)\"\n\" -> Seq Scan on \"LockerEvents\" \n(cost=100000000.00..100245803.00 rows=11000000\nwidth=60) (actual time=12.620..45463.222 rows=11000000\nloops=1)\"\n\"Total runtime: 58493.648 ms\"\n\nThe table is:\n\nCREATE TABLE \"LockerEvents\"\n(\n \"ID\" serial NOT NULL,\n \"IDMoneySymbol\" integer NOT NULL,\n \"IDLocker\" integer NOT NULL,\n \"IDUser\" integer NOT NULL,\n \"IDEventType\" integer NOT NULL,\n \"TimeBegin\" timestamp(0) without time zone NOT NULL,\n \"Notes\" character varying(200),\n \"Income\" double precision NOT NULL DEFAULT 0,\n \"IncomeWithRate\" double precision NOT NULL DEFAULT\n0,\n CONSTRAINT pk_lockerevents_id PRIMARY KEY (\"ID\"),\n CONSTRAINT fk_lockerevents_ideventtype_eventtypes_id\nFOREIGN KEY (\"IDEventType\")\n REFERENCES \"EventTypes\" (\"ID\") MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_lockerevents_idlocker_lockers_id\nFOREIGN KEY (\"IDLocker\")\n REFERENCES \"Lockers\" (\"ID\") MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT\nfk_lockerevents_idmoneysymbol_moneysymbols_id FOREIGN\nKEY (\"IDMoneySymbol\")\n REFERENCES \"MoneySymbols\" (\"ID\") MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT fk_lockerevents_iduser_users_id FOREIGN\nKEY (\"IDUser\")\n REFERENCES \"Users\" (\"ID\") MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (OIDS=FALSE);\n\n\nCREATE INDEX idx_col_lockerevents_income\n ON \"LockerEvents\"\n USING btree\n (\"Income\");\n\nCREATE INDEX idx_col_lockerevents_incomewithrate\n ON \"LockerEvents\"\n USING btree\n (\"IncomeWithRate\");\n\nCREATE INDEX idx_col_lockerevents_notes\n ON \"LockerEvents\"\n USING btree\n (\"Notes\");\n\nCREATE INDEX idx_col_lockerevents_timebegin\n ON \"LockerEvents\"\n USING btree\n (\"TimeBegin\");\n\nCREATE INDEX\nidx_fk_lockerevents_ideventtype_eventtypes_id\n ON \"LockerEvents\"\n USING btree\n (\"IDEventType\");\n\nCREATE INDEX idx_fk_lockerevents_idlocker_lockers_id\n ON \"LockerEvents\"\n USING btree\n (\"IDLocker\");\n\nCREATE INDEX\nidx_fk_lockerevents_idmoneysymbol_moneysymbols_id\n ON \"LockerEvents\"\n USING btree\n (\"IDMoneySymbol\");\n\nCREATE INDEX idx_fk_lockerevents_iduser_users_id\n ON \"LockerEvents\"\n USING btree\n (\"IDUser\");\n\nCREATE UNIQUE INDEX idx_pk_lockerevents_id\n ON \"LockerEvents\"\n USING btree\n (\"ID\");\n\n\nIf I do the query :\nSELECT * FROM \"LockerEvents\" LIMIT 10000 OFFSET 0\nthen this query takes under a second to complete - I\nbelieve this is because the sequential scan starts\nfrom beginning.\n\nI need the query to complete under 10 seconds and I do\nnot know how to do it. 
\nPlease help me!\n\nThank you,\nDanny\n\n\n \n", "msg_date": "Tue, 13 May 2008 09:57:03 -0700 (PDT)", "msg_from": "idc danny <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with 11 M records table" }, { "msg_contents": "In response to idc danny <[email protected]>:\n\n> Hi everybody,\n> \n> I'm fairly new to PostgreSQL and I have a problem with\n> a query:\n> \n> SELECT * FROM \"LockerEvents\" LIMIT 10000 OFFSET\n> 10990000\n\nThis query makes no sense, and I can't blame PostgreSQL for using a\nseq scan, since you've given it no reason to do otherwise. If you\nwant a random sampling of rows, you should construct your query more\nto that effect, as this query is going to give you a random sampling\nof rows, and the LIMIT/OFFSET are simply junk that confuses the\nquery planner.\n\nI suspect that you don't really want a random sampling of rows, although\nI can't imagine what you think you're going to get from that query.\nHave you tried putting an ORDER BY clause in?\n\n> \n> The table LockerEvents has 11 Mlillions records on it\n> and this query takes about 60 seconds to complete.\n> Moreover, even after making for each column in the\n> table a index the EXPLAIN still uses sequential scan\n> instead of indexes.\n> \n> The EXPLAIN is:\n> \"Limit (cost=100245579.54..100245803.00 rows=10000\n> width=60) (actual time=58414.753..58482.661 rows=10000\n> loops=1)\"\n> \" -> Seq Scan on \"LockerEvents\" \n> (cost=100000000.00..100245803.00 rows=11000000\n> width=60) (actual time=12.620..45463.222 rows=11000000\n> loops=1)\"\n> \"Total runtime: 58493.648 ms\"\n> \n> The table is:\n> \n> CREATE TABLE \"LockerEvents\"\n> (\n> \"ID\" serial NOT NULL,\n> \"IDMoneySymbol\" integer NOT NULL,\n> \"IDLocker\" integer NOT NULL,\n> \"IDUser\" integer NOT NULL,\n> \"IDEventType\" integer NOT NULL,\n> \"TimeBegin\" timestamp(0) without time zone NOT NULL,\n> \"Notes\" character varying(200),\n> \"Income\" double precision NOT NULL DEFAULT 0,\n> \"IncomeWithRate\" double precision NOT NULL DEFAULT\n> 0,\n> CONSTRAINT pk_lockerevents_id PRIMARY KEY (\"ID\"),\n> CONSTRAINT fk_lockerevents_ideventtype_eventtypes_id\n> FOREIGN KEY (\"IDEventType\")\n> REFERENCES \"EventTypes\" (\"ID\") MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT fk_lockerevents_idlocker_lockers_id\n> FOREIGN KEY (\"IDLocker\")\n> REFERENCES \"Lockers\" (\"ID\") MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT\n> fk_lockerevents_idmoneysymbol_moneysymbols_id FOREIGN\n> KEY (\"IDMoneySymbol\")\n> REFERENCES \"MoneySymbols\" (\"ID\") MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT fk_lockerevents_iduser_users_id FOREIGN\n> KEY (\"IDUser\")\n> REFERENCES \"Users\" (\"ID\") MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (OIDS=FALSE);\n> \n> \n> CREATE INDEX idx_col_lockerevents_income\n> ON \"LockerEvents\"\n> USING btree\n> (\"Income\");\n> \n> CREATE INDEX idx_col_lockerevents_incomewithrate\n> ON \"LockerEvents\"\n> USING btree\n> (\"IncomeWithRate\");\n> \n> CREATE INDEX idx_col_lockerevents_notes\n> ON \"LockerEvents\"\n> USING btree\n> (\"Notes\");\n> \n> CREATE INDEX idx_col_lockerevents_timebegin\n> ON \"LockerEvents\"\n> USING btree\n> (\"TimeBegin\");\n> \n> CREATE INDEX\n> idx_fk_lockerevents_ideventtype_eventtypes_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"IDEventType\");\n> \n> CREATE INDEX idx_fk_lockerevents_idlocker_lockers_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"IDLocker\");\n> \n> CREATE INDEX\n> 
idx_fk_lockerevents_idmoneysymbol_moneysymbols_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"IDMoneySymbol\");\n> \n> CREATE INDEX idx_fk_lockerevents_iduser_users_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"IDUser\");\n> \n> CREATE UNIQUE INDEX idx_pk_lockerevents_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"ID\");\n> \n> \n> If I do the query :\n> SELECT * FROM \"LockerEvents\" LIMIT 10000 OFFSET 0\n> then this query takes under a second to complete - I\n> believe this is because the sequential scan starts\n> from beginning.\n> \n> I need the query to complete under 10 seconds and I do\n> not know how to do it. \n> Please help me!\n> \n> Thank you,\n> Danny\n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Tue, 13 May 2008 13:03:27 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 11 M records table" }, { "msg_contents": "\n\nidc danny wrote:\n> Hi everybody,\n> \n> I'm fairly new to PostgreSQL and I have a problem with\n> a query:\n> \n> SELECT * FROM \"LockerEvents\" LIMIT 10000 OFFSET\n> 10990000\n> \n> The table LockerEvents has 11 Mlillions records on it\n> and this query takes about 60 seconds to complete.\n> Moreover, even after making for each column in the\n> table a index the EXPLAIN still uses sequential scan\n> instead of indexes.\n> \n> The EXPLAIN is:\n> \"Limit (cost=100245579.54..100245803.00 rows=10000\n> width=60) (actual time=58414.753..58482.661 rows=10000\n> loops=1)\"\n> \" -> Seq Scan on \"LockerEvents\" \n> (cost=100000000.00..100245803.00 rows=11000000\n> width=60) (actual time=12.620..45463.222 rows=11000000\n> loops=1)\"\n> \"Total runtime: 58493.648 ms\"\n> \n> The table is:\n> \n> CREATE TABLE \"LockerEvents\"\n> (\n> \"ID\" serial NOT NULL,\n> \"IDMoneySymbol\" integer NOT NULL,\n> \"IDLocker\" integer NOT NULL,\n> \"IDUser\" integer NOT NULL,\n> \"IDEventType\" integer NOT NULL,\n> \"TimeBegin\" timestamp(0) without time zone NOT NULL,\n> \"Notes\" character varying(200),\n> \"Income\" double precision NOT NULL DEFAULT 0,\n> \"IncomeWithRate\" double precision NOT NULL DEFAULT\n> 0,\n> CONSTRAINT pk_lockerevents_id PRIMARY KEY (\"ID\"),\n> CONSTRAINT fk_lockerevents_ideventtype_eventtypes_id\n> FOREIGN KEY (\"IDEventType\")\n> REFERENCES 
\"EventTypes\" (\"ID\") MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT fk_lockerevents_idlocker_lockers_id\n> FOREIGN KEY (\"IDLocker\")\n> REFERENCES \"Lockers\" (\"ID\") MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT\n> fk_lockerevents_idmoneysymbol_moneysymbols_id FOREIGN\n> KEY (\"IDMoneySymbol\")\n> REFERENCES \"MoneySymbols\" (\"ID\") MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT fk_lockerevents_iduser_users_id FOREIGN\n> KEY (\"IDUser\")\n> REFERENCES \"Users\" (\"ID\") MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (OIDS=FALSE);\n> \n> \n> CREATE INDEX idx_col_lockerevents_income\n> ON \"LockerEvents\"\n> USING btree\n> (\"Income\");\n> \n> CREATE INDEX idx_col_lockerevents_incomewithrate\n> ON \"LockerEvents\"\n> USING btree\n> (\"IncomeWithRate\");\n> \n> CREATE INDEX idx_col_lockerevents_notes\n> ON \"LockerEvents\"\n> USING btree\n> (\"Notes\");\n> \n> CREATE INDEX idx_col_lockerevents_timebegin\n> ON \"LockerEvents\"\n> USING btree\n> (\"TimeBegin\");\n> \n> CREATE INDEX\n> idx_fk_lockerevents_ideventtype_eventtypes_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"IDEventType\");\n> \n> CREATE INDEX idx_fk_lockerevents_idlocker_lockers_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"IDLocker\");\n> \n> CREATE INDEX\n> idx_fk_lockerevents_idmoneysymbol_moneysymbols_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"IDMoneySymbol\");\n> \n> CREATE INDEX idx_fk_lockerevents_iduser_users_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"IDUser\");\n> \n> CREATE UNIQUE INDEX idx_pk_lockerevents_id\n> ON \"LockerEvents\"\n> USING btree\n> (\"ID\");\n> \n> \n> If I do the query :\n> SELECT * FROM \"LockerEvents\" LIMIT 10000 OFFSET 0\n> then this query takes under a second to complete - I\n> believe this is because the sequential scan starts\n> from beginning.\n> \n> I need the query to complete under 10 seconds and I do\n> not know how to do it. \n> Please help me!\n> \n> Thank you,\n> Danny\n> \n\nI recall it being mentioned on one of these lists that with offset, all \nthe rows in between still have to be read. So, you may get better \nresults if you use a 'where id > 10000' clause in the query.\n\n-salman\n\n", "msg_date": "Tue, 13 May 2008 13:09:29 -0400", "msg_from": "salman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 11 M records table" }, { "msg_contents": "idc danny wrote:\n> Hi everybody,\n> \n> I'm fairly new to PostgreSQL and I have a problem with\n> a query:\n> \n> SELECT * FROM \"LockerEvents\" LIMIT 10000 OFFSET\n> 10990000\n> \n> The table LockerEvents has 11 Mlillions records on it\n> and this query takes about 60 seconds to complete.\n\nThe OFFSET clause is almost always inefficient for anything but very small tables or small offsets. In order for a relational database (not just Postgres) to figure out which row is the 11000000th row, it has to actually retrieve the first 10999999 rows and and discard them. There is no magical way to go directly to the 11-millionth row. Even on a trivial query such as yours with no WHERE clause, the only way to determine which row is the 11 millionths is to scan the previous 10999999.\n\nThere are better (faster) ways to achieve this, but it depends on why you are doing this query. 
That is, do you just want this one block of data, or are you scanning the whole database in 10,000-row blocks?\n\nCraig\n", "msg_date": "Tue, 13 May 2008 10:17:04 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 11 M records table" }, { "msg_contents": "On Tue, May 13, 2008 at 10:57 AM, idc danny <[email protected]> wrote:\n> Hi everybody,\n>\n> I'm fairly new to PostgreSQL and I have a problem with\n> a query:\n>\n> SELECT * FROM \"LockerEvents\" LIMIT 10000 OFFSET\n> 10990000\n>\n> The table LockerEvents has 11 Mlillions records on it\n> and this query takes about 60 seconds to complete.\n> Moreover, even after making for each column in the\n> table a index the EXPLAIN still uses sequential scan\n> instead of indexes.\n\nYep. The way offset limit works is it first materializes the data\nneeded for OFFSET+LIMIT rows, then throws away OFFSET worth's of data.\nSo, it has to do a lot of retrieving.\n\nBetter off to use something like:\n\nselect * from table order by indexfield where indexfield between\n10000000 and 10001000;\n\nwhich can use an index on indexfield, as long as the amount of data is\nsmall enough, etc...\n", "msg_date": "Tue, 13 May 2008 12:02:20 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 11 M records table" }, { "msg_contents": "\nHi all,\n\nThis sql is taking too long for the size of my tiny db. Any tips from \nthis alias? I tried moving the sort to the first left outer join\n(between projects and features tables) using a nested subquery, but \npostgres tells me only one column could be returned from a subqueyr.\n\nTIA,\n\nfdo\n\nSELECT projects.\"id\" AS t0_r0, projects.\"name\" AS t0_r1, projects.\"display_name\"\n AS t0_r2, projects.\"description\" AS t0_r3, projects.\"community_id\" AS t0_r4, projects.\"parent_id\" AS t0_r5, \nprojects.\"visible\" AS t0_r6, projects.\"created_at\" AS t0_r7, projects.\"updated_at\" AS t0_r8, projects.\"image_path\"\n AS t0_r9, projects.\"with_navigation\" AS t0_r10, projects.\"static_home\" AS t0_r11, projects.\"active\" AS t0_r12, \nprojects.\"image_id\" AS t0_r13, projects.\"request_message\" AS t0_r14, projects.\"response_message\" AS t0_r15, \nprojects.\"approval_status\" AS t0_r16, projects.\"approved_by_id\" AS t0_r17, projects.\"owner_id\" AS t0_r18,\n project_tags.\"id\" AS t1_r0, project_tags.\"project_id\" AS t1_r1, project_tags.\"name\" AS t1_r2, \nproject_tags.\"created_at\" AS t1_r3, project_tags.\"updated_at\" AS t1_r4, person_roles.\"id\" AS t2_r0, \nperson_roles.\"project_id\" AS t2_r1, person_roles.\"person_id\" AS t2_r2, person_roles.\"role_id\" AS t2_r3, \nperson_roles.\"authorized\" AS t2_r4, person_roles.\"created_at\" AS t2_r5, person_roles.\"updated_at\" AS t2_r6, \nperson_roles.\"request_message\" AS t2_r7, person_roles.\"response_message\" AS t2_r8, features.\"id\" AS t3_r0, \nfeatures.\"project_id\" AS t3_r1, features.\"name\" AS t3_r2, features.\"display_name\" AS t3_r3,\n features.\"feature_uri\" AS t3_r4, features.\"provisioned\" AS t3_r5, features.\"service_name\" AS t3_r6,\n features.\"created_at\" AS t3_r7, features.\"updated_at\" AS t3_r8, features.\"active\" AS t3_r9, \nfeatures.\"description\" AS t3_r10, features.\"type\" AS t3_r11, features.\"forum_topic_count\" AS t3_r12,\n features.\"forum_post_count\" AS t3_r13, features.\"forum_last_post_at\" AS t3_r14, \nfeatures.\"forum_last_post_by_id\" AS t3_r15, features.\"wiki_default_page_id\" AS t3_r16, 
\nfeatures.\"wiki_default_page_name\" AS t3_r17, features.\"wiki_format\" AS t3_r18,\n features.\"service_id\" AS t3_r19, features.\"service_type_id\" AS t3_r20 FROM projects\n LEFT OUTER JOIN project_tags ON project_tags.project_id = projects.id \nLEFT OUTER JOIN person_roles ON person_roles.project_id = projects.id \nLEFT OUTER JOIN features ON features.project_id = projects.id \nWHERE (projects.\"visible\" = 't') AND projects.id IN (3, 4, 5, 6, 10, 7, 8, 9, 13, 11) \nORDER BY projects.name asc;\n\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=57.17..57.20 rows=12 width=4925) (actual time=147.880..148.325 rows=846 loops=1)\n Sort Key: projects.name\n -> Hash Left Join (cost=45.53..56.95 rows=12 width=4925) (actual time=1.374..6.694 rows=846 loops=1)\n Hash Cond: (projects.id = project_tags.project_id)\n -> Hash Left Join (cost=22.48..33.48 rows=4 width=4819) (actual time=1.243..3.018 rows=222 loops=1)\n Hash Cond: (projects.id = person_roles.project_id)\n -> Hash Left Join (cost=10.90..21.86 rows=4 width=3754) (actual time=1.121..1.702 rows=78 loops=1)\n Hash Cond: (projects.id = features.project_id)\n -> Seq Scan on projects (cost=0.00..10.90 rows=4 width=1884) (actual time=0.039..0.109 rows=10 loops=1)\n Filter: (visible AND (id = ANY ('{3,4,5,6,10,7,8,9,13,11}'::integer[])))\n -> Hash (cost=10.40..10.40 rows=40 width=1870) (actual time=1.048..1.048 rows=101 loops=1)\n -> Seq Scan on features (cost=0.00..10.40 rows=40 width=1870) (actual time=0.026..0.464 rows=101 loops=1)\n -> Hash (cost=10.70..10.70 rows=70 width=1065) (actual time=0.098..0.098 rows=29 loops=1)\n -> Seq Scan on person_roles (cost=0.00..10.70 rows=70 width=1065) (actual time=0.014..0.037 rows=29 loops=1)\n -> Hash (cost=15.80..15.80 rows=580 width=106) (actual time=0.105..0.105 rows=32 loops=1)\n -> Seq Scan on project_tags (cost=0.00..15.80 rows=580 width=106) (actual time=0.013..0.036 rows=32 loops=1)\n Total runtime: 149.622 ms\n(17 rows)\n\n\n\n", "msg_date": "Tue, 13 May 2008 21:40:40 -0700", "msg_from": "fernando castano <[email protected]>", "msg_from_op": false, "msg_subject": "can I move sort to first outer join ?" }, { "msg_contents": "On Wed, 14 May 2008 06:40:40 +0200, fernando castano \n<[email protected]> wrote:\n\n>\n> Hi all,\n>\n> This sql is taking too long for the size of my tiny db. Any tips from \n> this alias? I tried moving the sort to the first left outer join\n> (between projects and features tables) using a nested subquery, but \n> postgres tells me only one column could be returned from a subqueyr.\n\n\tInstead of :\n\n\tSELECT * FROM a LEFT JOIN b LEFT JOIN c WHERE c.column=... ORDER BY c.x \nLIMIT N\n\n\tYou could write :\n\n\tSELECT * FROM a LEFT JOIN b LEFT JOIN (SELECT * FROM c WHERE c.column=... \nORDER BY c.x LIMIT N) AS cc ORDER BY cc.x LIMIT N\n\n\tThis is only interesting of you use a LIMIT and this allows you to reduce \nthe number of rows sorted/joined.\n\n\tHowever in your case this is not the right thing to do since you do not \nuse LIMIT, and sorting your 846 rows will only take a very small time. 
\nYour problem are those seq scans, you need to optimize that query so it \ncan use indexes.\n\n> -> Seq Scan on projects (cost=0.00..10.90 rows=4 \n> width=1884) (actual time=0.039..0.109 rows=10 loops=1)\n> Filter: (visible AND (id = ANY \n> ('{3,4,5,6,10,7,8,9,13,11}'::integer[])))\n> -> Hash (cost=10.40..10.40 rows=40 width=1870) \n> (actual time=1.048..1.048 rows=101 loops=1)\n> -> Seq Scan on features (cost=0.00..10.40 \n> rows=40 width=1870) (actual time=0.026..0.464 rows=101 loops=1)\n> -> Hash (cost=10.70..10.70 rows=70 width=1065) (actual \n> time=0.098..0.098 rows=29 loops=1)\n> -> Seq Scan on person_roles (cost=0.00..10.70 \n> rows=70 width=1065) (actual time=0.014..0.037 rows=29 loops=1)\n> -> Hash (cost=15.80..15.80 rows=580 width=106) (actual \n> time=0.105..0.105 rows=32 loops=1)\n> -> Seq Scan on project_tags (cost=0.00..15.80 rows=580 \n> width=106) (actual time=0.013..0.036 rows=32 loops=1)\n> Total runtime: 149.622 ms\n\n\tAll those seq scans !!!\n\n\tPlease post, for each of those tables :\n\n\t- The total number of rows (SELECT count(*) is fine)\n\t- The table definitions with indexes (\\d table)\n\n\tEXPLAIN ANALYZE tells you the number of rows it picked out of a seq scan \n(that's the \"rows=\") but not the number of rows scanned... this is \nimportant, because a seq scan on a small table isn't a problem, but on a \nbig one, it is.\n", "msg_date": "Wed, 14 May 2008 11:58:09 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can I move sort to first outer join ?" }, { "msg_contents": "Hi everybody,\n\nI know that this group deals with performance but is the only one on which I'm subscribed, so my apologize in advance for the question.\n\nI want to allow everybody in the world, all IP's, to connect to my server. How do I accomplish that? Definitely, it's not a good solution to enter all them manually in pg_hba.conf :).\n\nCurrently, if above question cannot be answered, I want to achieve to allow the IP's of Hamachi network, which all are of the form 5.*.*.* - but in the future it can expand to all IP's.\n\nThank you,\nDanny\n\n\n \n\n\n \n", "msg_date": "Thu, 3 Jul 2008 21:52:03 -0700 (PDT)", "msg_from": "idc danny <[email protected]>", "msg_from_op": true, "msg_subject": "Define all IP's in the world in pg_hba.conf" }, { "msg_contents": "idc danny wrote:\n> Hi everybody,\n>\n> I know that this group deals with performance but is the only one on which I'm subscribed, so my apologize in advance for the question.\n>\n> I want to allow everybody in the world, all IP's, to connect to my server. How do I accomplish that? Definitely, it's not a good solution to enter all them manually in pg_hba.conf :).\n> \nwhat's wrong with 0.0.0.0/0 ?\n> Currently, if above question cannot be answered, I want to achieve to allow the IP's of Hamachi network, which all are of the form 5.*.*.* - but in the future it can expand to all IP's.\n>\n> Thank you,\n> Danny\n>\n>\n> \n>\n>\n> \n>\n> \n\n", "msg_date": "Fri, 04 Jul 2008 15:30:50 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Define all IP's in the world in pg_hba.conf" } ]
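For the pg_hba.conf question just above, this is roughly what the poster seems to be after; the database/user columns and the md5 method are assumptions (anything is safer than trust when opening access this wide), and listen_addresses in postgresql.conf must also allow remote connections:

  # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
  host    all       all   5.0.0.0/8     md5     # the Hamachi 5.*.*.* range
  host    all       all   0.0.0.0/0     md5     # every IPv4 address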
[ { "msg_contents": "idc danny wrote:\n> Hi James,\n> \n> Than you for your response.\n> \n> What I want to achieve is to give to the application\n> user 10k rows where the records are one after another\n> in the table, and the application has a paginating GUI\n> (\"First page\", \"Previous page\", \"Next page\", \"Last\n> page\" - all links & \"Jump to page\" combobox) where\n> thsi particular query gets to run if the user clicks\n> on the \"Last page\" link.\n> The application receive the first 10k rows in under a\n> second when the user clicks on \"First page\" link and\n> receive the last 10k rows in about 60 seconds when he\n> clicks on \"Last page\" link.\n\nYou need a sequence that automatically assigns an ascending \"my_rownum\" to each row as it is added to the table, and an index on that my_rownum column. Then you select your page by (for example)\n\n select * from my_table where my_rownum >= 100 and id < 110;\n\nThat will do what you want, with instant performance that's linear over your whole table.\n\nIf your table will have deletions, then you have to update the row numbering a lot, which will cause you terrible performance problems due to the nature of the UPDATE operation in Postgres. If this is the case, then you should keep a separate table just for numbering the rows, which is joined to your main table when you want to retrieve a \"page\" of data. When you delete data (which should be batched, since this will be expensive), then you truncate your rownum table, reset the sequence that generates your row numbers, then regenerate your row numbers with something like \"insert into my_rownum_table (select id, nextval('my_rownum_seq') from my_big_table)\". To retrieve a page, just do \"select ... from my_table join my_rownum_table on (...)\", which will be really fast since you'll have indexes on both tables.\n\nNote that this method requires that you have a primary key, or at least a unique column, on your main table, so that you have something to join with your row-number table.\n\nCraig\n", "msg_date": "Tue, 13 May 2008 10:57:08 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with 11 M records table" } ]
[ { "msg_contents": "Hi Guys,\n\nI'm using postgresql 8.3.1 and I'm seeing weird behavior between what \nI expect and what's happening when the query is executed\n\nI'm trying to match a table that contains regexps against another \ntable that is full of the text to match against so my query is:\n\nselect wc_rule.id from classifications, wc_rule where \nclassifications.classification ~* wc_rule.regexp;\n\nWhen I run that the query takes a very very long time (never ending so \nfar 20 minutes or so) to execute.\n\nBut if I loop through all of the rules and a query for each rule:\n\nselect wc_rule.id from classifications, wc_rule where \nclassifications.classification ~* wc_rule.regexp and wc_rule.id = ?\n\nAll of the rules when run individually can be matched in a little \nunder then 3 minutes. I'd assume postgres would be equal to or faster \nwith the single row execution method.\n\nThe table schema:\n\nCREATE TABLE wc_rule (\n id integer NOT NULL,\n regexp text,\n);\n\nCREATE TABLE classifications (\n id integer NOT NULL,\n classification text NOT NULL\n);\n\ngb_render_1_db=# explain select wc_rule.id from classifications, \nwc_rule where classifications.classification ~* wc_rule.regexp;\n QUERY PLAN\n-----------------------------------------------------------------------------\n Nested Loop (cost=13.71..891401.71 rows=197843 width=4)\n Join Filter: (classifications.classification ~* wc_rule.regexp)\n -> Seq Scan on classifications (cost=0.00..1093.46 rows=56446 \nwidth=42)\n -> Materialize (cost=13.71..20.72 rows=701 width=22)\n -> Seq Scan on wc_rule (cost=0.00..13.01 rows=701 width=22)\n(5 rows)\n\n\ngb_render_1_db=# select count(*) from classifications;\n count\n-------\n 56446\n(1 row)\n\ngb_render_1_db=# select count(*) from wc_rule;\n count\n-------\n 701\n(1 row)\n\nI have exports of the tables up at so you can try it if you'd like.\n\nhttp://rusty.devel.infogears.com/regexp-tables.tar.bz2\n\nAny insight is greatly appreciated, even if it's just showing me how I \nmade a mistake in the query.\n\nThanks,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n\n\nAn example script that shows how each rule was run individually in perl.\n\n$dbh->begin_work();\neval {\n my $all_rules = $dbh->selectall_arrayref(\"select id from wc_rule\");\n foreach my $row (@$all_rules) {\n print \"Doing rule: $row->[0]\\n\";\n eval {\n local $SIG{ALRM} = sub { die(\"Alarm\") };\n alarm(5);\n my $results = $dbh->selectall_arrayref(\"select wc_rule.id from \nclassifications, wc_rule where classifications.classification ~* \nwc_rule.regexp and wc_rule.id = ?\", undef, $row->[0]);\n alarm(0);\n };\n if ($@) {\n alarm(0);\n print \"Got bad rule id of : $row->[0]\\n\";\n exit(0);\n }\n alarm(0);\n print \"ok rule: $row->[0]\\n\";\n }\n};\nif ($@) {\n print \"Failed to run rules:\\n$@\\n\";\n $dbh->rollback();\n $dbh->disconnect();\n exit(-1);\n}\n\n$dbh->commit();\n$dbh->disconnect();\nexit(0);\n\n\n\n\n\n\n\n", "msg_date": "Tue, 13 May 2008 23:45:26 -0600", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": true, "msg_subject": "Regexps - never completing join." 
}, { "msg_contents": "\n\n\nOn May 13, 2008, at 11:45 PM, Rusty Conover wrote:\n\n> Hi Guys,\n>\n> I'm using postgresql 8.3.1 and I'm seeing weird behavior between \n> what I expect and what's happening when the query is executed\n>\n> I'm trying to match a table that contains regexps against another \n> table that is full of the text to match against so my query is:\n>\n> select wc_rule.id from classifications, wc_rule where \n> classifications.classification ~* wc_rule.regexp;\n>\n> When I run that the query takes a very very long time (never ending \n> so far 20 minutes or so) to execute.\n>\n> But if I loop through all of the rules and a query for each rule:\n>\n> select wc_rule.id from classifications, wc_rule where \n> classifications.classification ~* wc_rule.regexp and wc_rule.id = ?\n>\n> All of the rules when run individually can be matched in a little \n> under then 3 minutes. I'd assume postgres would be equal to or \n> faster with the single row execution method.\n>\n> The table schema:\n>\n> CREATE TABLE wc_rule (\n> id integer NOT NULL,\n> regexp text,\n> );\n>\n> CREATE TABLE classifications (\n> id integer NOT NULL,\n> classification text NOT NULL\n> );\n>\n> gb_render_1_db=# explain select wc_rule.id from classifications, \n> wc_rule where classifications.classification ~* wc_rule.regexp;\n> QUERY PLAN\n> -----------------------------------------------------------------------------\n> Nested Loop (cost=13.71..891401.71 rows=197843 width=4)\n> Join Filter: (classifications.classification ~* wc_rule.regexp)\n> -> Seq Scan on classifications (cost=0.00..1093.46 rows=56446 \n> width=42)\n> -> Materialize (cost=13.71..20.72 rows=701 width=22)\n> -> Seq Scan on wc_rule (cost=0.00..13.01 rows=701 width=22)\n> (5 rows)\n>\n>\n\n\nAs a followup I did some digging:\n\nby editing:\n\nsrc/backend/utils/adt/regexp.c\n\nand increasing the cache size for regular expressions to an \narbitrarily large number\n\n#define MAX_CACHED_RES 3200\n\nRather then the default of\n\n#define MAX_CACHED_RES 32\n\nI was able to get the query to complete in a respectable amount of time:\n\ngb_render_1_db=# explain analyze select wc_rule.id from \nclassifications, wc_rule where classifications.classification ~* \nwc_rule.regexp;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=13.71..891401.71 rows=197843 width=4) (actual \ntime=72.714..366899.913 rows=55052 loops=1)\n Join Filter: (classifications.classification ~* wc_rule.regexp)\n -> Seq Scan on classifications (cost=0.00..1093.46 rows=56446 \nwidth=42) (actual time=28.820..109.895 rows=56446 loops=1)\n -> Materialize (cost=13.71..20.72 rows=701 width=22) (actual \ntime=0.000..0.193 rows=701 loops=56446)\n -> Seq Scan on wc_rule (cost=0.00..13.01 rows=701 \nwidth=22) (actual time=0.030..0.593 rows=701 loops=1)\n Total runtime: 366916.632 ms\n(6 rows)\n\nWhich is still > 6 minutes, but at least it completed.\n\nI'll keep digging into what is causing this bad performance.\n\nThanks,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n\n", "msg_date": "Wed, 14 May 2008 01:08:37 -0600", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regexps - never completing join." 
}, { "msg_contents": "Returning to this problem this morning, I made some more insight.\n\nThe regexp cache isn't getting very many hits because the executor is \nlooping through all of the classification rows then looping through \nall of the regular expressions, causing each expression to be \nrecompiled every time since the cache limit is only for 32 cached \nregular expressions. You can think of the behavior like:\n\nforeach classification {\n foreach regexp {\n do match\n }\n}\n\nObviously to make this perform better without requiring a bigger \nregexp cache I'd like it to run like:\n\nforeach regexp {\n foreach classification {\n do match\n }\n}\n\nThat way the cache wouldn't have to be very big at all since the last \nused regular expression would be at the top of the cache.\n\nVarious methods of changing the query don't seem to have the desired \neffect. Even with setting join_collapse_limit to 1.\n\nselect wc_rule.id from wc_rule cross join classifications on \nclassifications.classification ~* wc_rule.regexp;\n\n QUERY PLAN\n-----------------------------------------------------------------------------\n Nested Loop (cost=13.71..891401.71 rows=197843 width=4)\n Join Filter: (classifications.classification ~* wc_rule.regexp)\n -> Seq Scan on classifications (cost=0.00..1093.46 rows=56446 \nwidth=42)\n -> Materialize (cost=13.71..20.72 rows=701 width=22)\n -> Seq Scan on wc_rule (cost=0.00..13.01 rows=701 width=22)\n(5 rows)\n\n\n\nselect wc_rule.id from classifications cross join wc_rule on \nclassifications.classification ~* wc_rule.regexp;\n\n QUERY PLAN\n-----------------------------------------------------------------------------\n Nested Loop (cost=13.71..891401.71 rows=197843 width=4)\n Join Filter: (classifications.classification ~* wc_rule.regexp)\n -> Seq Scan on classifications (cost=0.00..1093.46 rows=56446 \nwidth=42)\n -> Materialize (cost=13.71..20.72 rows=701 width=22)\n -> Seq Scan on wc_rule (cost=0.00..13.01 rows=701 width=22)\n(5 rows)\n\n\n\nBoth of those queries execute in the same looping order, there doesn't \nseem to be a control to say use this table as the inner table and this \ntable as the outer table for the join that I could find.\n\nOne way I did find that worked to control the loop (but doesn't yield \nthe same results because its a left join)\n\nselect wc_rule.id from wc_rule left join classifications on \nclassifications.classification ~* wc_rule.regexp;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=1149.91..891457.45 rows=197843 width=4) \n(actual time=0.627..149051.505 rows=55126 loops=1)\n Join Filter: (classifications.classification ~* wc_rule.regexp)\n -> Seq Scan on wc_rule (cost=0.00..13.01 rows=701 width=22) \n(actual time=0.030..1.272 rows=701 loops=1)\n -> Materialize (cost=1149.91..1714.37 rows=56446 width=42) \n(actual time=0.001..14.244 rows=56446 loops=701)\n -> Seq Scan on classifications (cost=0.00..1093.46 \nrows=56446 width=42) (actual time=0.022..29.913 rows=56446 loops=1)\n Total runtime: 149067.764 ms\n(6 rows)\n\nThanks,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n\n\n\n\n\n\n", "msg_date": "Wed, 14 May 2008 09:33:31 -0600", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regexps - never completing join." 
}, { "msg_contents": "On Wed, May 14, 2008 at 9:33 AM, Rusty Conover <[email protected]> wrote:\n> Returning to this problem this morning, I made some more insight.\n>\n> One way I did find that worked to control the loop (but doesn't yield the\n> same results because its a left join)\n>\n> select wc_rule.id from wc_rule left join classifications on\n> classifications.classification ~* wc_rule.regexp;\n\nIf you do that and exclude the extra rows added to the right with somthing like\n\nand wc_rule.somefield IS NOT NULL\n\ndoes it run fast and give you the same answers as the regular join?\n\nI'm guessing that this could be optimized to use a hash agg method of\njoining for text, but I'm no expert on the subject.\n", "msg_date": "Fri, 16 May 2008 14:35:24 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regexps - never completing join." }, { "msg_contents": "\nOn May 16, 2008, at 2:35 PM, Scott Marlowe wrote:\n\n> On Wed, May 14, 2008 at 9:33 AM, Rusty Conover \n> <[email protected]> wrote:\n>> Returning to this problem this morning, I made some more insight.\n>>\n>> One way I did find that worked to control the loop (but doesn't \n>> yield the\n>> same results because its a left join)\n>>\n>> select wc_rule.id from wc_rule left join classifications on\n>> classifications.classification ~* wc_rule.regexp;\n>\n> If you do that and exclude the extra rows added to the right with \n> somthing like\n>\n> and wc_rule.somefield IS NOT NULL\n>\n> does it run fast and give you the same answers as the regular join?\n>\n> I'm guessing that this could be optimized to use a hash agg method of\n> joining for text, but I'm no expert on the subject.\n\nHi Scott,\n\nIt's not really a hash agg problem really just a looping inside/ \noutside table selection problem.\n\nThe slowdown is really the compilation of the regexp repeatedly by \nRE_compile_and_cache() because the regexps are being run on the inside \nof the loop rather then the outside. And since the regexp cache is \nonly 32 items big, the every match is resulting in a recompilation of \nthe regexp since I have about 700 regexps.\n\nThanks,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n\n\n", "msg_date": "Fri, 16 May 2008 15:37:29 -0600", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regexps - never completing join." 
}, { "msg_contents": "On Fri, May 16, 2008 at 3:37 PM, Rusty Conover <[email protected]> wrote:\n>\n> On May 16, 2008, at 2:35 PM, Scott Marlowe wrote:\n>\n>> On Wed, May 14, 2008 at 9:33 AM, Rusty Conover <[email protected]>\n>> wrote:\n>>>\n>>> Returning to this problem this morning, I made some more insight.\n>>>\n>>> One way I did find that worked to control the loop (but doesn't yield the\n>>> same results because its a left join)\n>>>\n>>> select wc_rule.id from wc_rule left join classifications on\n>>> classifications.classification ~* wc_rule.regexp;\n>>\n>> If you do that and exclude the extra rows added to the right with somthing\n>> like\n>>\n>> and wc_rule.somefield IS NOT NULL\n>>\n>> does it run fast and give you the same answers as the regular join?\n>>\n>> I'm guessing that this could be optimized to use a hash agg method of\n>> joining for text, but I'm no expert on the subject.\n>\n> Hi Scott,\n>\n> It's not really a hash agg problem really just a looping inside/outside\n> table selection problem.\n>\n> The slowdown is really the compilation of the regexp repeatedly by\n> RE_compile_and_cache() because the regexps are being run on the inside of\n> the loop rather then the outside. And since the regexp cache is only 32\n> items big, the every match is resulting in a recompilation of the regexp\n> since I have about 700 regexps.\n\nThat's not what I meant. What I meant was it seems like a good\ncandidate for a hash aggregate solution. I'm pretty sure pgsql can't\nuse hashagg for something like this right now.\n\nIf you hashagged each regexp and each column fed through it, you could\nprobably get good performance. but that's a backend hacker thing, not\nsomething I'd know how to do.\n", "msg_date": "Fri, 16 May 2008 15:44:26 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regexps - never completing join." } ]
[ { "msg_contents": "Hi ,\n\nSet this parameter in psotgresql.conf set enable_seqscan=off;\nAnd try:\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Scott\nMarlowe\nSent: Tuesday, May 13, 2008 11:32 PM\nTo: idc danny\nCc: [email protected]\nSubject: Re: [PERFORM] Problem with 11 M records table\n\nOn Tue, May 13, 2008 at 10:57 AM, idc danny <[email protected]> wrote:\n> Hi everybody,\n>\n> I'm fairly new to PostgreSQL and I have a problem with\n> a query:\n>\n> SELECT * FROM \"LockerEvents\" LIMIT 10000 OFFSET\n> 10990000\n>\n> The table LockerEvents has 11 Mlillions records on it\n> and this query takes about 60 seconds to complete.\n> Moreover, even after making for each column in the\n> table a index the EXPLAIN still uses sequential scan\n> instead of indexes.\n\nYep. The way offset limit works is it first materializes the data\nneeded for OFFSET+LIMIT rows, then throws away OFFSET worth's of data.\nSo, it has to do a lot of retrieving.\n\nBetter off to use something like:\n\nselect * from table order by indexfield where indexfield between\n10000000 and 10001000;\n\nwhich can use an index on indexfield, as long as the amount of data is\nsmall enough, etc...\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 14 May 2008 11:28:08 +0530", "msg_from": "\"Ramasubramanian G\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with 11 M records table" } ]
[ { "msg_contents": "HI,\n\nI have an application that maintains 150 open connections to a Postgres DB server. The application works fine without a problem for the most time. \n\nThe problem seem to arise when a SELECT that returns a lot of rows is executed or the SELECT is run on a large object. These selects are run from time to time by a separate process whose purpose is to generate reports from the db data.\n\nThe problem is that when the SELECTs are run the main application starts running out of available connections which means that postgres is not returning the query results fast enough. What I find a little bit starnge is that the report engine's SELECTs operate on a different set of tables than the ones the main application is using. Also the db box is hardly breaking a sweat, CPU and memory utilization are ridiculously low and IOwaits are typically less than 10%.\n\nHas anyone experienced this? Are there any settings I can change to improve throughput? Any help will be greatly appreciated.\n\n\nThanks,\nval\n\n\n __________________________________________________________\nSent from Yahoo! Mail.\nA Smarter Email http://uk.docs.yahoo.com/nowyoucan.html\n", "msg_date": "Wed, 14 May 2008 10:00:39 +0000 (GMT)", "msg_from": "Valentin Bogdanov <[email protected]>", "msg_from_op": true, "msg_subject": "postgres overall performance seems to degrade when large SELECT are\n\trequested" }, { "msg_contents": "\n> The problem seem to arise when a SELECT that returns a lot of rows is\n\n\tDoes the SELECT return a lot of rows, or does it scan a lot of rows ? \n(for instance, if you use aggregates, it might scan lots of data but only \nreturn few rows).\n\n> The problem is that when the SELECTs are run the main application starts \n> running out of available connections which means that postgres is not \n> returning the query results fast enough. What I find a little bit \n> starnge is that the report engine's SELECTs operate on a different set \n> of tables than the ones the main application is using. Also the db box \n> is hardly breaking a sweat, CPU and memory utilization are ridiculously \n> low and IOwaits are typically less than 10%.\n\n\tIs it swapping ? (vmstat -> si/so)\n\tIs it locking ? (probably not from what you say)\n\tIs the network connection between the client and DB server saturated ? \n(easy with 100 Mbps connections, SELECT with a large result set will \nhappily blast your LAN)\n\tIs the reporting tool running on the same machine as the DB client and \nkilling it ? (swapping, etc)\n\n\tIf it's a saturated network, solutions are :\n\t- install Gb ethernet\n\t- run the report on the database server (no bandwidth problems...)\n\t- rewrite the reporting tool to use SQL aggregates to transfer less data \nover the network\n\t- or use a cursor to fetch your results in chunks, and wait a little \nbetween chunks\n\t\n> Has anyone experienced this?\n\n\tYeah on benchmarks sometimes the LAN gave up before Postgres broke a \nsweat... Gb ethernet solved that...\n\n> Are there any settings I can change to improve throughput? Any help \n> will be greatly appreciated.\n\n\tiptraf will tell you all about your network traffic\n\tvmstat will tell you if your server or client is io-cpu-swap bound\n\tyou'd need to post output from those...\n\n>\n>\n> Thanks,\n> val\n>\n>\n> __________________________________________________________\n> Sent from Yahoo! 
Mail.\n> A Smarter Email http://uk.docs.yahoo.com/nowyoucan.html\n>\n\n\n", "msg_date": "Wed, 14 May 2008 13:10:34 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres overall performance seems to degrade when large SELECT\n\tare requested" } ]
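A sketch of the cursor-based option suggested above, so the reporting job streams its result set in chunks instead of pulling everything across the LAN in one go; the query, cursor name and fetch size are illustrative:

  BEGIN;
  DECLARE report_cur CURSOR FOR
      SELECT * FROM report_rows WHERE report_day = CURRENT_DATE;
  FETCH FORWARD 1000 FROM report_cur;   -- repeat, pausing briefly between chunks
  -- ... process each chunk in the reporting client ...
  CLOSE report_cur;
  COMMIT;

Combined with SQL-side aggregation, so fewer raw rows cross the wire in the first place, this spreads the reporting traffic out and makes it much less likely that a single report saturates a 100 Mbps link.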
[ { "msg_contents": "I have a large table (~ 2B rows) that contains an indexed timestamp column. I am attempting to run a query to determine the number of rows for a given day using something like \"select count(*) from tbl1 where ts between '2008-05-12 00:00:00.000' and '2008-05-12 23:59:59.999'\". Explain tells me that the query will be done using an index scan (as I would expect), and I realize that it is going to take a while. My question concerns some unusual I/O activity on the box (SUSE) when I run the query.\n\nFor the first couple of minutes I see reads only. After that vmstat shows mixed reads and writes in a ratio of about 1 block read to 5 blocks written. We have determined that files in our data and log partitions are being hit, but the file system itself is not growing during this time (it appears to be writing over the same chunk of space over and over again). Memory on the box is not being swapped while all of this is happening. I would have guessed that a \"select count(*)\" would not require a bunch of writes, and I can't begin to figure out why the number of blocks written are so much higher than the blocks read. If I modify the where clause to only count the rows for a given minute or two, I see the reads but I never see the unusual write behavior.\n\nAny thoughts into what could be going on? Thanks in advance for your help.\n\nDoug\n\n\n\n \nI have a large table (~ 2B rows) that contains an indexed timestamp column.  I am attempting to run a query to determine the number of rows for a given day using something like \"select count(*) from tbl1 where ts between '2008-05-12 00:00:00.000' and '2008-05-12 23:59:59.999'\".  Explain tells me that the query will be done using an index scan (as I would expect), and I realize that it is going to take a while.  My question concerns some unusual I/O activity on the box (SUSE)  when I run the query.For the first couple of minutes I see reads only.  After that vmstat shows mixed reads and writes in a ratio of about 1 block read to 5 blocks written.  We have determined that files in our data and log partitions are being hit, but the file system itself is\n not growing during this time (it appears to be writing over the same chunk of space over and over again).  Memory on the box is not being swapped while all of this is happening.  I would have guessed that a \"select count(*)\" would not require a bunch of writes, and I can't begin to figure out why the number of blocks written are so much higher than the blocks read.  If I modify the where clause to only count the rows for a given minute or two, I see the reads but I never see the unusual write behavior.Any thoughts into what could be going on?  Thanks in advance for your help.Doug", "msg_date": "Wed, 14 May 2008 13:09:48 -0700 (PDT)", "msg_from": "Doug Eck <[email protected]>", "msg_from_op": true, "msg_subject": "I/O on select count(*)" }, { "msg_contents": "On Wed, May 14, 2008 at 4:09 PM, Doug Eck <[email protected]> wrote:\n> I have a large table (~ 2B rows) that contains an indexed timestamp column.\n> I am attempting to run a query to determine the number of rows for a given\n> day using something like \"select count(*) from tbl1 where ts between\n> '2008-05-12 00:00:00.000' and '2008-05-12 23:59:59.999'\". Explain tells me\n> that the query will be done using an index scan (as I would expect), and I\n> realize that it is going to take a while. 
My question concerns some unusual\n> I/O activity on the box (SUSE) when I run the query.\n>\n> For the first couple of minutes I see reads only. After that vmstat shows\n> mixed reads and writes in a ratio of about 1 block read to 5 blocks\n> written. We have determined that files in our data and log partitions are\n> being hit, but the file system itself is not growing during this time (it\n> appears to be writing over the same chunk of space over and over again).\n> Memory on the box is not being swapped while all of this is happening. I\n> would have guessed that a \"select count(*)\" would not require a bunch of\n> writes, and I can't begin to figure out why the number of blocks written are\n> so much higher than the blocks read. If I modify the where clause to only\n> count the rows for a given minute or two, I see the reads but I never see\n> the unusual write behavior.\n>\n> Any thoughts into what could be going on? Thanks in advance for your help.\n\ncan you post the exact output of explain analyze? (or, at least,\nexplain if the query takes too long)\n\nmerlin\n", "msg_date": "Wed, 14 May 2008 16:38:23 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": ">>> Doug Eck <[email protected]> wrote: \n \n> I am attempting to run a query to determine the number of rows for a\ngiven \n> day using something like \"select count(*) from tbl1 where ts between\n\n> '2008-05-12 00:00:00.000' and '2008-05-12 23:59:59.999'\". Explain\ntells me \n> that the query will be done using an index scan (as I would expect),\nand I \n> realize that it is going to take a while. My question concerns some\nunusual \n> I/O activity on the box (SUSE) when I run the query.\n> \n> For the first couple of minutes I see reads only. After that vmstat\nshows \n> mixed reads and writes in a ratio of about 1 block read to 5 blocks\nwritten. \n \n> Any thoughts into what could be going on? Thanks in advance for your\nhelp.\n \nOdd as it may seem, a SELECT can cause a page to be rewritten.\n \nIf this is the first time that the rows are being read since they were\ninserted (or since the database was loaded, including from backup), it\nmay be rewriting the rows to set hint bits, which can make subsequent\naccess faster.\n \nThe best solution may be to vacuum more often.\n \nhttp://archives.postgresql.org/pgsql-performance/2007-12/msg00206.php\n \n-Kevin\n \n\n", "msg_date": "Wed, 14 May 2008 17:11:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On Wed, 14 May 2008, Kevin Grittner wrote:\n\n> If this is the first time that the rows are being read since they were\n> inserted (or since the database was loaded, including from backup), it\n> may be rewriting the rows to set hint bits, which can make subsequent\n> access faster.\n\nThis is the second time this has come up recently, and I know it used to \npuzzle me too. This is a particularly relevant area to document better \nfor people doing benchmarking. As close I've found to a useful commentary \non this subject is the thread at \nhttp://archives.postgresql.org/pgsql-patches/2005-07/msg00390.php\n\nI still don't completely understand this myself though, if I did I'd add a \nFAQ on it. Anyone want to lecture for a minute on the birth and care of \nhint bits? 
I'll make sure any comments here get onto the wiki.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 14 May 2008 21:39:59 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Greg Smith wrote:\n> On Wed, 14 May 2008, Kevin Grittner wrote:\n>\n>> If this is the first time that the rows are being read since they were\n>> inserted (or since the database was loaded, including from backup), it\n>> may be rewriting the rows to set hint bits, which can make subsequent\n>> access faster.\n>\n> This is the second time this has come up recently, and I know it used to \n> puzzle me too. This is a particularly relevant area to document better \n> for people doing benchmarking. As close I've found to a useful \n> commentary on this subject is the thread at \n> http://archives.postgresql.org/pgsql-patches/2005-07/msg00390.php\n>\n> I still don't completely understand this myself though, if I did I'd add \n> a FAQ on it. Anyone want to lecture for a minute on the birth and care \n> of hint bits? I'll make sure any comments here get onto the wiki.\n\nHint bits are used to mark tuples as created and/or deleted by\ntransactions that are know committed or aborted. To determine the\nvisibility of a tuple without such bits set, you need to consult pg_clog\nand possibly pg_subtrans, so it is an expensive check. On the other\nhand, if the tuple has the bits set, then it's state is known (or, at\nworst, it can be calculated easily from your current snapshot, without\nlooking at pg_clog.)\n\nThere are four hint bits:\nXMIN_COMMITTED -- creating transaction is known committed\nXMIN_ABORTED -- creating transaction is known aborted\nXMAX_COMMITTED -- same, for the deleting transaction\nXMAX_ABORTED -- ditto\n\nIf neither of the bits is set, then the transaction is either in\nprogress (which you can check by examining the list of running\ntransactions in shared memory) or your process is the first one to check\n(in which case, you need to consult pg_clog to know the status, and you\ncan update the hint bits if you find out a permanent state).\n\n\nRegarding FAQs, I'm having trouble imagining putting this in the user\nFAQ; I think it belongs into the developer's FAQ. However, a\nbenchmarker is not going to look there. Maybe we should start \"a\nbenchmarker's FAQ\"?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 14 May 2008 22:05:53 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On Wed, 14 May 2008, Alvaro Herrera wrote:\n\n> If neither of the bits is set, then the transaction is either in \n> progress (which you can check by examining the list of running \n> transactions in shared memory) or your process is the first one to check \n> (in which case, you need to consult pg_clog to know the status, and you \n> can update the hint bits if you find out a permanent state).\n\nSo is vacuum helpful here because it will force all that to happen in one \nbatch? To put that another way: if I've run a manual vacuum, is it true \nthat it will have updated all the hint bits to XMIN_COMMITTED for all the \ntuples that were all done when the vacuum started?\n\n> Regarding FAQs, I'm having trouble imagining putting this in the user\n> FAQ; I think it belongs into the developer's FAQ. 
However, a\n> benchmarker is not going to look there. Maybe we should start \"a\n> benchmarker's FAQ\"?\n\nOn the wiki I've started adding a series of things that are \nperformance-related FAQs. There's three of them mixed in the bottom of \nhttp://wiki.postgresql.org/wiki/Frequently_Asked_Questions right now, \nabout slow count(*) and dealing with slow queries.\n\nHere the FAQ would be \"Why am I seeing all these writes when I'm just \ndoing selects on my table?\", and if it's mixed in with a lot of other \nperformance related notes people should be able to find it. The answer \nand suggestions should be simple enough to be useful to a user who just \nnoticed this behavior, while perhaps going into developer land for those \nwho want to know more about the internals.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 14 May 2008 22:21:45 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On 5/14/08, Greg Smith <[email protected]> wrote:\n> On Wed, 14 May 2008, Alvaro Herrera wrote:\n>\n>\n> > If neither of the bits is set, then the transaction is either in progress\n> (which you can check by examining the list of running transactions in shared\n> memory) or your process is the first one to check (in which case, you need\n> to consult pg_clog to know the status, and you can update the hint bits if\n> you find out a permanent state).\n> >\n>\n> So is vacuum helpful here because it will force all that to happen in one\n> batch? To put that another way: if I've run a manual vacuum, is it true\n> that it will have updated all the hint bits to XMIN_COMMITTED for all the\n> tuples that were all done when the vacuum started?\n\n From my benchmarking experience: Yes, vacuum helps. See also below.\n\n>\n>\n> > Regarding FAQs, I'm having trouble imagining putting this in the user\n> > FAQ; I think it belongs into the developer's FAQ. However, a\n> > benchmarker is not going to look there. Maybe we should start \"a\n> > benchmarker's FAQ\"?\n> >\n>\n> On the wiki I've started adding a series of things that are\n> performance-related FAQs. There's three of them mixed in the bottom of\n> http://wiki.postgresql.org/wiki/Frequently_Asked_Questions\n> right now, about slow count(*) and dealing with slow queries.\n>\n> Here the FAQ would be \"Why am I seeing all these writes when I'm just doing\n> selects on my table?\", and if it's mixed in with a lot of other performance\n> related notes people should be able to find it. The answer and suggestions\n> should be simple enough to be useful to a user who just noticed this\n> behavior, while perhaps going into developer land for those who want to know\n> more about the internals.\n\nObviously, this issue is tied to the slow count(*) one, as I found out\nthe hard way. Consider the following scenario:\n* Insert row\n* Update that row a couple of times\n* Rinse and repeat many times\n\nNow somewhere during that cycle, do a select count(*) just to see\nwhere you are. You will be appalled by how slow that is, due to not\nonly the usual 'slow count(*)' reasons. 
This whole hint bit business\nmakes it even worse, as demonstrated by the fact that running a vacuum\nbefore the count(*) makes the latter noticably faster.\n\njan\n", "msg_date": "Wed, 14 May 2008 22:38:08 -0400", "msg_from": "\"Jan de Visser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On Thu, May 15, 2008 at 7:51 AM, Greg Smith <[email protected]> wrote:\n>\n>\n> So is vacuum helpful here because it will force all that to happen in one\n> batch? To put that another way: if I've run a manual vacuum, is it true\n> that it will have updated all the hint bits to XMIN_COMMITTED for all the\n> tuples that were all done when the vacuum started?\n>\n\nYes. For that matter, even a plain SELECT or count(*) on the entire\ntable is good enough. That will check every tuple for visibility and\nset it's hint bits.\n\nAnother point to note is that the hint bits are checked and set on a\nper tuple basis. So especially during index scan, the same heap page\nmay get rewritten many times. I had suggested in the past that\nwhenever we set hint bits for a tuple, we should check all other\ntuples in the page and set their hint bits too to avoid multiple\nwrites of the same page. I guess the idea got rejected because of lack\nof benchmarks to prove the benefit.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 15 May 2008 08:10:58 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "BTW – we've removed HINT bit checking in Greenplum DB and improved the\nvisibility caching which was enough to provide performance at the same level\nas with the HINT bit optimization, but avoids this whole "write the data,\nwrite it to the log also, then write it again just for good measure"\nbehavior.\n\nFor people doing data warehousing work like the poster, this Postgres\nbehavior is miserable. It should be fixed for 8.4 for sure (volunteers?)\n\nBTW – for the poster's benefit, you should implement partitioning by date,\nthen load each partition and VACUUM ANALYZE after each load. You probably\nwon't need the date index anymore – so your load times will vastly improve\n(no indexes), you'll store less data (no indexes) and you'll be able to do\nsimpler data management with the partitions.\n\nYou may also want to partition AND index if you do a lot of short range\nselective date predicates. Example would be: partition by day, index on\ndate field, queries selective on date ranges by hour will then select out\nonly the day needed, then index scan to get the hourly values. Typically\ntime-oriented data is nearly time sorted anyway, so you'll also get the\nbenefit of a clustered index.\n\n- Luke\n\n\nOn 5/15/08 10:40 AM, \"Pavan Deolasee\" <[email protected]> wrote:\n\n> On Thu, May 15, 2008 at 7:51 AM, Greg Smith <[email protected]> wrote:\n>> >\n>> >\n>> > So is vacuum helpful here because it will force all that to happen in one\n>> > batch? To put that another way: if I've run a manual vacuum, is it true\n>> > that it will have updated all the hint bits to XMIN_COMMITTED for all the\n>> > tuples that were all done when the vacuum started?\n>> >\n> \n> Yes. For that matter, even a plain SELECT or count(*) on the entire\n> table is good enough. That will check every tuple for visibility and\n> set it's hint bits.\n> \n> Another point to note is that the hint bits are checked and set on a\n> per tuple basis. 
So especially during index scan, the same heap page\n> may get rewritten many times. I had suggested in the past that\n> whenever we set hint bits for a tuple, we should check all other\n> tuples in the page and set their hint bits too to avoid multiple\n> writes of the same page. I guess the idea got rejected because of lack\n> of benchmarks to prove the benefit.\n> \n> Thanks,\n> Pavan\n> \n> --\n> Pavan Deolasee\n> EnterpriseDB http://www.enterprisedb.com\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> ", "msg_date": "Thu, 15 May 2008 10:52:01 +0800", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On Thu, 15 May 2008 10:52:01 +0800\nLuke Lonergan <[email protected]> wrote:\n\n> BTW – we've removed HINT bit checking in Greenplum DB and improved the\n> visibility caching which was enough to provide performance at the\n> same level as with the HINT bit optimization, but avoids this whole\n> "write the data, write it to the log also, then write it again just\n> for good measure" behavior.\n> \n> For people doing data warehousing work like the poster, this Postgres\n> behavior is miserable. It should be fixed for 8.4 for sure\n> (volunteers?)\n\nDonations? You have the code Luke :)\n\nSincerely,\n\nJoshua D. Drake\n\nP.S. Sorry for the almost bad Star Wars pun.\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate", "msg_date": "Wed, 14 May 2008 20:13:30 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Hi,\n\nLuke Lonergan wrote:\n> BTW – we’ve removed HINT bit checking in Greenplum DB and improved the \n> visibility caching which was enough to provide performance at the same \n> level as with the HINT bit optimization, but avoids this whole “write \n> the data, write it to the log also, then write it again just for good \n> measure” behavior.\n\ncan you go a bit deeper into how you implemented this or is it some IP\nof greenplum you cannot reveal?\n\nBtw, is there something with your eyes:\n<FONT SIZE=\"4\"><FONT FACE=\"Verdana, Helvetica, Arial\"><SPAN \nSTYLE='font-size:14pt'> ? :-))\n\nCheers\nTino", "msg_date": "Thu, 15 May 2008 07:20:11 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "\"Jan de Visser\" <[email protected]> writes:\n> Obviously, this issue is tied to the slow count(*) one, as I found out\n> the hard way. Consider the following scenario:\n> * Insert row\n> * Update that row a couple of times\n> * Rinse and repeat many times\n\n> Now somewhere during that cycle, do a select count(*) just to see\n> where you are. You will be appalled by how slow that is, due to not\n> only the usual 'slow count(*)' reasons. This whole hint bit business\n> makes it even worse, as demonstrated by the fact that running a vacuum\n> before the count(*) makes the latter noticably faster.\n\nUh, well, you can't blame that entirely on hint-bit updates. The vacuum\nhas simply *removed* two-thirds of the rows in the system, resulting in\na large drop in the number of rows that the select even has to look at.\n\nIt's certainly true that hint-bit updates cost something, but\nquantifying how much isn't easy. The off-the-cuff answer is to do the\nselect count(*) twice and see how much cheaper the second one is. But\nthere are two big holes in that answer: the first is the possible cache\neffects from having already read in the pages, and the second is that\nthe follow-up scan gets to avoid the visits to pg_clog that the first\nscan had to make (which after all is the point of the hint bits).\n\nI don't know any easy way to disambiguate the three effects that are at\nwork here. But blaming it all on the costs of writing out hint-bit\nupdates is wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 May 2008 03:02:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> ... To put that another way: if I've run a manual vacuum, is it true \n> that it will have updated all the hint bits to XMIN_COMMITTED for all the \n> tuples that were all done when the vacuum started?\n\nAny examination whatsoever of a tuple --- whether by vacuum or any\nordinary DML operation --- will update its hint bits to match the\ncommit/abort status of the inserting/deleting transaction(s) as of\nthe instant of the examination. Your statement above is true but\nis weaker than reality.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 May 2008 03:07:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "On Wed, 14 May 2008, Alvaro Herrera wrote:\n> Hint bits are used to mark tuples as created and/or deleted by\n> transactions that are know committed or aborted. 
To determine the\n> visibility of a tuple without such bits set, you need to consult pg_clog\n> and possibly pg_subtrans, so it is an expensive check.\n\nSo, as I understand it, Postgres works like this:\n\n1. You begin a transaction. Postgres writes an entry into pg_clog.\n2. You write some tuples. Postgres writes them to the WAL, but doesn't\n bother fsyncing.\n3. At some point, the bgwriter or a checkpoint may write the tuples to the\n database tables, and fsync the lot.\n4. You commit the transaction. Postgres alters pg_clog again, writes that\n to the WAL, and fsyncs the WAL.\n5. If the tuples hadn't already made it to the database tables, then a\n checkpoint or bgwriter will do it later on, and fsync the lot.\n6. You read the tuples. Postgres reads them from the database table, looks\n in pg_clog, notices that the transaction has been committed, and\n writes the tuples to the database table again with the hint bits set.\n This write is not WAL protected, and is not fsynced.\n\nThis seems like a good architecture, with some cool characteristics, \nmainly that at no point does Postgres have to hold vast quantities of data \nin memory. I have two questions though:\n\nIs it really safe to update the hint bits in place? If there is a power \ncut in the middle of writing a block, is there a guarantee from the disc \nthat the block will never be garbled?\n\nIs there a way to make a shortcut and have the hint bits written the first \ntime the data is written to the table? One piece of obvious low-hanging \nfruit would be to enhance step five above, so that the bgwriter or \ncheckpoint that writes the data to the database table checks the pg_clog \nand writes the correct hint bits. In fact, if the tuple's creating \ntransaction has aborted, then the tuple can be vacuumed right there and \nthen before it is even written. For OLTP, almost all the hint bits will be \nwritten first time, and also the set of transactions that will be looked \nup in the pg_clog will be small (the set of transactions that were active \nsince the last checkpoint), so its cache coherency will be good.\n\nHowever, this idea does not deal well with bulk data loads, where the data \nis checkpointed before transaction is committed or aborted.\n\nMatthew\n\n-- \nNow, you would have thought these coefficients would be integers, given that\nwe're working out integer results. Using a fraction would seem really\nstupid. Well, I'm quite willing to be stupid here - in fact, I'm going to\nuse complex numbers. -- Computer Science Lecturer\n", "msg_date": "Thu, 15 May 2008 13:37:34 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Matthew Wakeling wrote:\n> Is it really safe to update the hint bits in place? If there is a power \n> cut in the middle of writing a block, is there a guarantee from the disc \n> that the block will never be garbled?\n\nDon't know, to be honest. We've never seen any reports of corrupted data \nthat would suggest such a problem, but it doesn't seem impossible to me \nthat some exotic storage system might do that.\n\n> Is there a way to make a shortcut and have the hint bits written the \n> first time the data is written to the table? One piece of obvious \n> low-hanging fruit would be to enhance step five above, so that the \n> bgwriter or checkpoint that writes the data to the database table checks \n> the pg_clog and writes the correct hint bits.\n\nYep, that's an idea that's been suggested before. 
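The shape of it is roughly the toy sketch below (invented names, nothing
that exists in the tree): just before a page goes out, walk its tuples,
consult the clog for any that are still unhinted, and set the bits while the
page is in memory anyway:

#include <stdbool.h>
#include <stdint.h>

#define TOY_XMIN_COMMITTED 0x01
#define TOY_XMIN_ABORTED   0x02

typedef struct { uint32_t xmin; uint16_t infomask; } ToyTuple;

/* stand-ins for the real clog lookups */
extern bool toy_clog_committed(uint32_t xid);
extern bool toy_clog_aborted(uint32_t xid);

static void
toy_set_hints_before_writeout(ToyTuple *tuples, int ntuples)
{
    for (int i = 0; i < ntuples; i++)
    {
        ToyTuple *t = &tuples[i];

        if (t->infomask & (TOY_XMIN_COMMITTED | TOY_XMIN_ABORTED))
            continue;                          /* already hinted */

        if (toy_clog_committed(t->xmin))
            t->infomask |= TOY_XMIN_COMMITTED;
        else if (toy_clog_aborted(t->xmin))
            t->infomask |= TOY_XMIN_ABORTED;
        /* else still in progress: leave it for a later visit */
    }
    /* ...then write the page out as usual */
}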
In fact, I seem to \nremember a patch to do just that. Don't remember what happened to it,\n\n> In fact, if the tuple's \n> creating transaction has aborted, then the tuple can be vacuumed right \n> there and then before it is even written. \n\nNot if you have any indexes on the table. To vacuum, you'll have to scan \nall indexes to remove pointers to the tuple.\n\n> However, this idea does not deal well with bulk data loads, where the \n> data is checkpointed before transaction is committed or aborted.\n\nYep, that's the killer :-(.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 15 May 2008 13:52:31 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On Thu, 15 May 2008, Luke Lonergan wrote:\n> BTW � we�ve removed HINT bit checking in Greenplum DB and improved the\n> visibility caching which was enough to provide performance at the same level\n> as with the HINT bit optimization, but avoids this whole �write the data,\n> write it to the log also, then write it again just for good measure�\n> behavior.\n\nThis sounds like a good option. I believe I suggested this a few months \nago, however it was rejected because in the worst case (when the hints are \nnot cached), if you're doing an index scan, you can do twice the number of \nseeks as before.\n\nhttp://archives.postgresql.org/pgsql-performance/2007-12/msg00217.php\n\nThe hint data will be four bits per tuple plus overheads, so it could be \nmade very compact, and therefore likely to stay in the cache fairly well. \nEach tuple fetched would have to be spaced really far apart in the \ndatabase table in order to exhibit the worst case, because fetching a page \nof hint cache will cause 64kB or so of disc to appear in the disc's \nread-ahead buffer, which will be equivalent to 128MB worth of database \ntable (assuming eight tuples per block and no overhead). As soon as you \naccess another tuple in the same 128MB bracket, you'll hit the disc \nread-ahead buffer for the hints.\n\nOn balance, to me it still seems like a good option.\n\nMatthew\n\n-- \nThose who do not understand Unix are condemned to reinvent it, poorly.\n -- Henry Spencer", "msg_date": "Thu, 15 May 2008 13:54:13 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On Thursday 15 May 2008 03:02:19 Tom Lane wrote:\n> \"Jan de Visser\" <[email protected]> writes:\n> > Obviously, this issue is tied to the slow count(*) one, as I found out\n> > the hard way. Consider the following scenario:\n> > * Insert row\n> > * Update that row a couple of times\n> > * Rinse and repeat many times\n> >\n> > Now somewhere during that cycle, do a select count(*) just to see\n> > where you are. You will be appalled by how slow that is, due to not\n> > only the usual 'slow count(*)' reasons. This whole hint bit business\n> > makes it even worse, as demonstrated by the fact that running a vacuum\n> > before the count(*) makes the latter noticably faster.\n>\n> Uh, well, you can't blame that entirely on hint-bit updates. The vacuum\n> has simply *removed* two-thirds of the rows in the system, resulting in\n> a large drop in the number of rows that the select even has to look at.\n>\n> It's certainly true that hint-bit updates cost something, but\n> quantifying how much isn't easy. 
The off-the-cuff answer is to do the\n> select count(*) twice and see how much cheaper the second one is. But\n> there are two big holes in that answer: the first is the possible cache\n> effects from having already read in the pages, and the second is that\n> the follow-up scan gets to avoid the visits to pg_clog that the first\n> scan had to make (which after all is the point of the hint bits).\n>\n> I don't know any easy way to disambiguate the three effects that are at\n> work here. But blaming it all on the costs of writing out hint-bit\n> updates is wrong.\n>\n> \t\t\tregards, tom lane\n\nTrue. But it still contributes to the fact that queries sometimes behave in a \nnon-deterministic way, which IMHO is the major annoyance when starting to \nwork with pgsql. And contrary to other causes (vacuum, checkpoints) this is \nwoefully underdocumented.\n\njan\n", "msg_date": "Thu, 15 May 2008 09:15:40 -0400", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On Thu, 15 May 2008, Heikki Linnakangas wrote:\n> > Is it really safe to update the hint bits in place? If there is a \n> > power cut in the middle of writing a block, is there a guarantee from \n> > the disc that the block will never be garbled?\n>\n> Don't know, to be honest. We've never seen any reports of corrupted data \n> that would suggest such a problem, but it doesn't seem impossible to me \n> that some exotic storage system might do that.\n\nHmm. That problem is what WAL full-page-writes is meant to handle, isn't \nit? So basically, if you're telling people that WAL full-page-writes is \nsafer than partial WAL, because it avoids updating pages in-place, then \nyou shouldn't be updating pages in-place for the hint bits either. You \ncan't win!\n\n>> In fact, if the tuple's creating transaction has aborted, then the tuple \n>> can be vacuumed right there and then before it is even written. \n>\n> Not if you have any indexes on the table. To vacuum, you'll have to scan all \n> indexes to remove pointers to the tuple.\n\nAh. Well, would that be so expensive? After all, someone has to do it \neventually, and these are index entries that have only just been added \nanyway.\n\nI can understand index updating being a bit messy in the middle of a \ncheckpoint though, as you would have to write the update to the WAL, which \nyou are checkpointing...\n\nSo, I don't know exactly how the WAL updates to indexes work, but my guess \nis that it has been implemented as \"write the blocks that we would change \nto the WAL\". The problem with this is that all the changes to the index \nare done individually, so there's no easy way to \"undo\" one of them later \non when you find out that the transaction has been aborted during the \ncheckpoint.\n\nAn alternative would be to build a \"list of changes\" in the WAL without \nactually changing the underlying index at all. When reading the index, you \nwould read the \"list\" first (which would be in memory, and in an \nefficient-to-search structure), then read the original index and add the \ntwo. Then when checkpointing, vet all the changes against known aborted \ntransactions before making all the changes to the index together. This is \nlikely to speed up index writes quite a bit, and also allow you to \neffectively vacuum aborted tuples before they get written to the disc.\n\nMatthew\n\n-- \nVacuums are nothings. 
We only mention them to let them know we know\nthey're there.\n", "msg_date": "Thu, 15 May 2008 14:38:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Thu, 15 May 2008, Luke Lonergan wrote:\n>> ...HINT bit optimization, but avoids this whole �write the data,\n>> write it to the log also, then write it again just for good measure�\n> ...\n> The hint data will be four bits per tuple plus overheads, so it could be \n> made very compact, and therefore likely to stay in the cache fairly \n> well. \n\nDoes it seem like these HINT bits would be good candidates to move\noff to map forks similar to how the visibility map stuff will be handled?\n\nSince (if I understand right) only the hint bits change during the\nselect(*) it seems a lot less write-IO would happen if such a map\nwere updated rather than the data pages themselves.\n", "msg_date": "Thu, 15 May 2008 06:50:44 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Greg Smith escribi�:\n> On Thu, 15 May 2008, Pavan Deolasee wrote:\n>\n>> I had suggested in the past that whenever we set hint bits for a tuple, \n>> we should check all other tuples in the page and set their hint bits \n>> too to avoid multiple writes of the same page. I guess the idea got \n>> rejected because of lack of benchmarks to prove the benefit.\n>\n> From glancing at http://www.postgresql.org/docs/faqs.TODO.html I got the \n> impression the idea was to have the background writer get involved to \n> help with this particular situation.\n\nThe problem is that the bgwriter does not understand about the content\nof the pages it is writing -- they're opaque pages for all it knows. So\nit cannot touch the hint bits.\n\nI agree with Pavan that it's likely that setting hint bits in batches\ninstead of just for the tuple being examined is a benefit. However,\nit's perhaps not so good to be doing it in a foreground process, because\nyou're imposing extra cost to the client queries which we want to be as\nfast as possible. Perhaps the thing to do is have a \"database-local\nbgwriter\" which would scan pages and do this kind of change ...\na different kind of process to be launched by autovacuum perhaps.\n\nIf we had the bitmask in a separate map fork, this could be cheap.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 15 May 2008 10:41:06 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Hmm. That problem is what WAL full-page-writes is meant to handle, isn't \n> it? So basically, if you're telling people that WAL full-page-writes is \n> safer than partial WAL, because it avoids updating pages in-place, then \n> you shouldn't be updating pages in-place for the hint bits either. You \n> can't win!\n\nThis argument ignores the nature of the data change. With a hint-bit\nupdate, no data is being shuffled around, so there is no danger from a\npartial page write.\n\nA disk that leaves an individual sector corrupt would be a problem,\nbut I don't think that's a huge risk. Keep in mind that disks aren't\ndesigned to just stop dead when power dies --- they are made to be able\nto park their heads before the juice is entirely gone. 
I think it's\nreasonable to assume they'll finish writing the sector in progress\nbefore they start parking.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 May 2008 10:52:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "Matthew Wakeling wrote:\n> On Thu, 15 May 2008, Heikki Linnakangas wrote:\n>> > Is it really safe to update the hint bits in place? If there is a > \n>> power cut in the middle of writing a block, is there a guarantee from \n>> > the disc that the block will never be garbled?\n>>\n>> Don't know, to be honest. We've never seen any reports of corrupted \n>> data that would suggest such a problem, but it doesn't seem impossible \n>> to me that some exotic storage system might do that.\n> \n> Hmm. That problem is what WAL full-page-writes is meant to handle, isn't \n> it? So basically, if you're telling people that WAL full-page-writes is \n> safer than partial WAL, because it avoids updating pages in-place, then \n> you shouldn't be updating pages in-place for the hint bits either. You \n> can't win!\n\nFull-page-writes protect from torn pages, that is, when one half of an \nupdate hits the disk but the other one doesn't. In particular, if the \nbeginning of the page where the WAL pointer (XLogRecPtr) is flushed to \ndisk, but the actual changes elsewhere in the page aren't, you're in \ntrouble. WAL replay will look at the WAL pointer, and think that the \npage doesn't need to be replayed, while other half of the update is \nstill missing.\n\nHint bits are different. We're only updating a single bit, and it \ndoesn't matter from correctness point of view whether the hint bit \nupdate hits the disk or not. But what would spell trouble is if the disk \ncontroller/whatever garbles the whole sector, IOW changes something else \nthan the changed bit, while doing the update.\n\n>>> In fact, if the tuple's creating transaction has aborted, then the \n>>> tuple can be vacuumed right there and then before it is even written. \n>>\n>> Not if you have any indexes on the table. To vacuum, you'll have to \n>> scan all indexes to remove pointers to the tuple.\n> \n> Ah. Well, would that be so expensive? After all, someone has to do it \n> eventually, and these are index entries that have only just been added \n> anyway.\n\nScanning all indexes? Depends on your table of course, but yes it would \nbe expensive in general.\n\n> An alternative would be to build a \"list of changes\" in the WAL without \n> actually changing the underlying index at all. When reading the index, \n> you would read the \"list\" first (which would be in memory, and in an \n> efficient-to-search structure), then read the original index and add the \n> two. Then when checkpointing, vet all the changes against known aborted \n> transactions before making all the changes to the index together. This \n> is likely to speed up index writes quite a bit, and also allow you to \n> effectively vacuum aborted tuples before they get written to the disc.\n\nThere's not much point optimizing something that only helps with aborted \ntransactions.\n\nThe general problem with any idea that involves keeping a list of \nchanges made in a transaction is that that list will grow big during \nbulk loads, so you'll have to overflow to disk or abandon the list \napproach. 
Which means that it won't help with bulk loads.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 15 May 2008 16:08:36 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Alvaro Herrera wrote:\n> Greg Smith escribi�:\n>> On Thu, 15 May 2008, Pavan Deolasee wrote:\n>>\n>>> I had suggested in the past that whenever we set hint bits for a tuple, \n>>> we should check all other tuples in the page and set their hint bits \n>>> too to avoid multiple writes of the same page. I guess the idea got \n>>> rejected because of lack of benchmarks to prove the benefit.\n>>\n>> From glancing at http://www.postgresql.org/docs/faqs.TODO.html I got the \n>> impression the idea was to have the background writer get involved to \n>> help with this particular situation.\n> \n> The problem is that the bgwriter does not understand about the content\n> of the pages it is writing -- they're opaque pages for all it knows. So\n> it cannot touch the hint bits.\n\nWe know what kind of a relation we're dealing with in ReadBuffer, so we \ncould add a flag to BufferDesc to mark heap pages.\n\n> If we had the bitmask in a separate map fork, this could be cheap.\n\nI don't buy that. The point of a hint bit is that it's right there along \nwith the tuple you're looking at. If you have to look at a separate \nbuffer, you might as well just look at clog.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 15 May 2008 16:15:50 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On Thu, 15 May 2008, Heikki Linnakangas wrote:\n> There's not much point optimizing something that only helps with aborted \n> transactions.\n\nThat's fair enough, but this list method is likely to speed up index \nwrites anyway.\n\n> The general problem with any idea that involves keeping a list of changes \n> made in a transaction is that that list will grow big during bulk loads, so \n> you'll have to overflow to disk or abandon the list approach. Which means \n> that it won't help with bulk loads.\n\nYeah, it wouldn't be a list of changes for the transaction, it would be a \nlist of changes since the last checkpoint. Keeping data in memory for the \nlength of the transaction is doomed to failure, because there is no bound \non its size, so bulk loads are still going to miss out on hint \noptimisation.\n\nMatthew\n\n-- \nfor a in past present future; do\n for b in clients employers associates relatives neighbours pets; do\n echo \"The opinions here in no way reflect the opinions of my $a $b.\"\ndone; done\n", "msg_date": "Thu, 15 May 2008 16:17:29 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Heikki Linnakangas escribi�:\n> Alvaro Herrera wrote:\n\n>> The problem is that the bgwriter does not understand about the content\n>> of the pages it is writing -- they're opaque pages for all it knows. 
So\n>> it cannot touch the hint bits.\n>\n> We know what kind of a relation we're dealing with in ReadBuffer, so we \n> could add a flag to BufferDesc to mark heap pages.\n\nHmm, I was thinking that it would need access to the catalogs to know\nwhere the tuples are, but that's certainly not true, so perhaps it could\nbe made to work.\n\n>> If we had the bitmask in a separate map fork, this could be cheap.\n>\n> I don't buy that. The point of a hint bit is that it's right there along \n> with the tuple you're looking at. If you have to look at a separate \n> buffer, you might as well just look at clog.\n\nTrue -- I was confusing this with the idea of having the tuple MVCC\nheader (xmin, xmax, etc) in a separate fork, which would make the idea\nof index-only scans more feasible at the expense of seqscans.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 15 May 2008 11:49:32 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Matthew Wakeling wrote:\n\nAside from the rest of commentary, a slight clarification:\n\n> So, as I understand it, Postgres works like this:\n>\n> 1. You begin a transaction. Postgres writes an entry into pg_clog.\n\nStarting a transaction does not write anything to pg_clog.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 15 May 2008 11:52:32 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Heikki Linnakangas escribi�:\n>> We know what kind of a relation we're dealing with in ReadBuffer, so we \n>> could add a flag to BufferDesc to mark heap pages.\n\n> Hmm, I was thinking that it would need access to the catalogs to know\n> where the tuples are, but that's certainly not true, so perhaps it could\n> be made to work.\n\nThe issue in my mind is not so much could bgwriter physically do it\nas that it's a violation of module layering. That has real\nconsequences, like potential for deadlocks. It'll become particularly\npressing if we go forward with the plans to get rid of the separate\ndedicated buffers for pg_clog etc and have them work in the main\nshared-buffer pool.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 May 2008 12:19:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "Tom Lane wrote:\n> It's certainly true that hint-bit updates cost something, but\n> quantifying how much isn't easy. \nMaybe we can instrument the code with DTrace probes to quantify the \nactual costs. I'm not familiar with the code, but if I know where to \nplace the probes, I can easily do a quick test and provide the data.\n> The off-the-cuff answer is to do the\n> select count(*) twice and see how much cheaper the second one is. \nDoesn't seem the second run is cheaper as shown in the results below. 
\nThe data came from the probes I've added recently.\n\n*************** Run #1 **********************\nSQL Statement : select count(*) from accounts;\nExecution time : 1086.58 (ms)\n\n============ Buffer Read Counts ============\nTablespace Database Table Count\n 1663 16384 1247 1\n 1663 16384 2600 1\n 1663 16384 2703 1\n 1663 16384 1255 2\n 1663 16384 2650 2\n 1663 16384 2690 3\n 1663 16384 2691 3\n 1663 16384 16397 8390\n\n======== Dirty Buffer Write Counts =========\nTablespace Database Table Count\n 1663 16384 16397 2865\n\nTotal buffer cache hits : 1932\nTotal buffer cache misses : 6471\nAverage read time from cache : 5638 (ns)\nAverage read time from disk : 143371 (ns)\nAverage write time to disk : 20368 (ns)\n\n\n*************** Run #2 **********************\nSQL Statement : select count(*) from accounts;\nExecution time : 1115.94 (ms)\n\n============ Buffer Read Counts ============\nTablespace Database Table Count\n 1663 16384 16397 8390\n\n======== Dirty Buffer Write Counts =========\nTablespace Database Table Count\n 1663 16384 16397 2865\n\nTotal buffer cache hits : 1931\nTotal buffer cache misses : 6459\nAverage read time from cache : 4357 (ns)\nAverage read time from disk : 154127 (ns)\nAverage write time to disk : 20368 (ns)\n\n\n-Robert\n", "msg_date": "Thu, 15 May 2008 11:55:00 -0500", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Robert Lor <[email protected]> writes:\n> Tom Lane wrote:\n>> It's certainly true that hint-bit updates cost something, but\n>> quantifying how much isn't easy. \n\n> Maybe we can instrument the code with DTrace probes to quantify the \n> actual costs.\n\nHmm, the problem would be trying to figure out what percentage of writes\ncould be blamed solely on hint-bit updates and not any other change to\nthe page. I don't think that the bufmgr currently keeps enough state to\nknow that, but you could probably modify it easily enough, since callers\ndistinguish MarkBufferDirty from SetBufferCommitInfoNeedsSave. Define\nanother flag bit that's set only by the first, and test it during\nwrite-out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 May 2008 13:42:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "Tom Lane wrote:\n> Hmm, the problem would be trying to figure out what percentage of writes\n> could be blamed solely on hint-bit updates and not any other change to\n> the page. I don't think that the bufmgr currently keeps enough state to\n> know that, but you could probably modify it easily enough, since callers\n> distinguish MarkBufferDirty from SetBufferCommitInfoNeedsSave. Define\n> another flag bit that's set only by the first, and test it during\n> write-out.\n> \nOk, I made a few changes to bufmgr per my understanding of your \ndescription above and with my limited understanding of the code. Patch \nis attached.\n\nAssuming the patch is correct, the effect of writes due to hint bits is \nquite significant. 
I collected the data below by runing pgbench in one \nterminal and psql on another to run the query.\n\nIs the data plausible?\n\n-Robert\n\n--------------\n\n\nBackend PID : 16189\nSQL Statement : select count(*) from accounts;\nExecution time : 17.33 sec\n\n============ Buffer Read Counts ============\nTablespace Database Table Count\n 1663 16384 2600 1\n 1663 16384 2601 1\n 1663 16384 2615 1\n 1663 16384 1255 2\n 1663 16384 2602 2\n 1663 16384 2603 2\n 1663 16384 2616 2\n 1663 16384 2650 2\n 1663 16384 2678 2\n 1663 16384 1247 3\n 1663 16384 1249 3\n 1663 16384 2610 3\n 1663 16384 2655 3\n 1663 16384 2679 3\n 1663 16384 2684 3\n 1663 16384 2687 3\n 1663 16384 2690 3\n 1663 16384 2691 3\n 1663 16384 2703 4\n 1663 16384 1259 5\n 1663 16384 2653 5\n 1663 16384 2662 5\n 1663 16384 2663 5\n 1663 16384 2659 7\n 1663 16384 16397 8390\n\n======== Dirty Buffer Write Counts =========\nTablespace Database Table Count\n 1663 16384 16402 2\n 1663 16384 16394 11\n 1663 16384 16397 4771\n\n========== Hint Bits Write Counts ==========\nTablespace Database Table Count\n 1663 16384 16397 4508\n\nTotal buffer cache hits : 732\nTotal buffer cache misses : 7731\nAverage read time from cache : 9136 (ns)\nAverage read time from disk : 384201 (ns)\nAverage write time to disk : 210709 (ns)\n\n\nBackend PID : 16189\nSQL Statement : select count(*) from accounts;\nExecution time : 12.72 sec\n\n============ Buffer Read Counts ============\nTablespace Database Table Count\n 1663 16384 16397 8392\n\n======== Dirty Buffer Write Counts =========\nTablespace Database Table Count\n 1663 16384 16394 6\n 1663 16384 16402 7\n 1663 16384 16397 2870\n\n========== Hint Bits Write Counts ==========\nTablespace Database Table Count\n 1663 16384 16402 2\n 1663 16384 16397 2010\n\nTotal buffer cache hits : 606\nTotal buffer cache misses : 7786\nAverage read time from cache : 6949 (ns)\nAverage read time from disk : 706288 (ns)\nAverage write time to disk : 90426 (ns)", "msg_date": "Thu, 15 May 2008 16:23:10 -0500", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Alvaro Herrera wrote:\n> Hint bits are used to mark tuples as created and/or deleted by\n> transactions that are know committed or aborted. To determine the\n> visibility of a tuple without such bits set, you need to consult pg_clog\n> and possibly pg_subtrans, so it is an expensive check. On the other\n> \nSo, how come there is this outstanding work to do, which will inevitably \nbe done, and it\nhasn't been done until it is 'just too late' to avoid getting in the way \nof the query?\n\nThe OP didn't suggest that he had just loaded the data.\n\nAlso - is it the case that this only affects the case where updated \npages were spilled\nduring the transaction that changed them? ie, if we commit a \ntransaction and there\nare changed rows still in the cache since their pages are not evicted \nyet, are the hint\nbits set immediately so that page is written just once? Seems this \nwould be common\nin most OLTP systems.\n\nHeikki points out that the list might get big and need to be abandoned, \nbut then you\nfall back to scheduling a clog scan that can apply the bits, which does \nwhat you have\nnow, though hopefully in a way that fills slack disk IO rather than \nwaiting for the\nread.\n\nMatthew says: 'it would be a list of changes since the last checkpoint' \nbut I don't\nsee why you can't start writing hints to in-memory pages as soon as the \ntransaction\nends. 
You might fall behind, but I doubt it with modern CPU speeds.\n\nI can't see why Pavan's suggestion to try to update as many of the bits \nas possible\nwhen a dirty page is evicted would be contentious.\n\nI do think this is something of interest to users, not just developers, \nsince it\nmay influence the way updates are processed where it is reasonable to do\nso in 'bite sized chunks' as a multipart workflow.\n\n\n", "msg_date": "Thu, 15 May 2008 23:11:52 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n\n> BTW ­ we¹ve removed HINT bit checking in Greenplum DB and improved the\n> visibility caching which was enough to provide performance at the same level\n> as with the HINT bit optimization, but avoids this whole ³write the data,\n> write it to the log also, then write it again just for good measure²\n> behavior.\n>\n> For people doing data warehousing work like the poster, this Postgres\n> behavior is miserable. It should be fixed for 8.4 for sure (volunteers?)\n\nFor people doing data warehousing I would think the trick would be to do\nsomething like what we do to avoid WAL logging for tables created in the same\ntransaction. \n\nThat is, if you're loading a lot of data at the same time then all of that\ndata is going to be aborted or committed and that will happen at the same\ntime. Ideally we would find a way to insert the data with the hint bits\nalready set to committed and mark the section of the table as being only\nprovisionally extended so other transactions wouldn't even look at those pages\nuntil the transaction commits.\n\nThis is similar to the abortive attempt to have the abovementioned WAL logging\ntrick insert the records pre-frozen. I recall there were problems with that\nidea though but I don't recall if they were insurmountable or just required\nmore work.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Thu, 15 May 2008 23:30:41 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": ">>> On Thu, May 15, 2008 at 5:11 PM, in message\n<[email protected]>, James Mansion\n<[email protected]> wrote: \n> Alvaro Herrera wrote:\n>> Hint bits are used to mark tuples as created and/or deleted by\n>> transactions that are know committed or aborted. To determine the\n>> visibility of a tuple without such bits set, you need to consult\npg_clog\n>> and possibly pg_subtrans, so it is an expensive check. On the\nother\n>> \n> So, how come there is this outstanding work to do, which will\ninevitably \n> be done, and it\n> hasn't been done until it is 'just too late' to avoid getting in the\nway \n> of the query?\n \nThere has been discussion from time to time about setting the hint\nbits for tuple inserts which occur within the same database\ntransaction as the creation of the table into which they're being\ninserted. That would allow people to cover many of the bulk load\nsituations. I don't see it on the task list. (I would also argue\nthat there is little information lost, even from a forensic\nperspective, to writing such rows as \"frozen\".) Is this idea done,\ndead, or is someone working on it?\n \nIf we could set hint bits on dirty buffer pages after the commit, we'd\ncover the OLTP situation. 
In many situations, there is a bigger OS\ncache than PostgreSQL shared memory, and an attempt to set the bits\nsoon after the commit would coalesce the two writes into one physical\nwrite using RAM-based access, which would be almost as good. I don't\nknow if it's feasible to try to do that after the pages have moved\nfrom the PostgreSQL cache to the OS cache, but it would likely be a\nperformance win.\n \nIf we are going to burden any requester process with the job of\nsetting the hint bits, it would typically be better to burden the one\ndoing the data modification rather than some random thread later\ntrying to read data from the table. Of course, getting work off the\nrequester processes onto some background worker process is generally\neven better.\n \n-Kevin\n\n", "msg_date": "Thu, 15 May 2008 17:38:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On Thu, 15 May 2008, Alvaro Herrera wrote:\n\n> Starting a transaction does not write anything to pg_clog.\n\nFor Matt and others, some details here are in \nsrc/backend/access/transam/README:\n\n\"pg_clog records the commit status for each transaction that has been \nassigned an XID.\"\n\n\"Transactions and subtransactions are assigned permanent XIDs only when/if \nthey first do something that requires one --- typically, \ninsert/update/delete a tuple, though there are a few other places that \nneed an XID assigned.\"\n\nAfter reading the code and that documentation a bit, the part I'm still \nnot sure about is whether the CLOG entry is created when the XID is \nassigned and then kept current as the state changes, or whether that isn't \neven in CLOG until the transaction is committed. It seems like the \nlatter, but there's some ambiguity in the wording and too many code paths \nfor me to map right now.\n\n From there, it doesn't make its way out to disk until the internal CLOG \nbuffers are filled, at which point the least recently used buffer there is \nevicted to permanent storage.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 16 May 2008 14:05:49 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "\nOn Fri, 2008-05-16 at 14:05 -0400, Greg Smith wrote:\n> After reading the code and that documentation a bit, the part I'm\n> still not sure about is whether the CLOG entry is created when the XID\n> is assigned and then kept current as the state changes, or whether\n> that isn't even in CLOG until the transaction is committed. It seems\n> like the latter, but there's some ambiguity in the wording and too\n> many code paths for me to map right now.\n\nAlvaro already said this, I thought? The clog is updated only at sub or\nmain transaction end, thank goodness. 
When the transactionid is assigned\nthe page of the clog that contains that transactionid is checked to see\nif it already exists and if not, it is initialised.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Fri, 16 May 2008 19:23:55 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Greg Smith wrote:\n\n> After reading the code and that documentation a bit, the part I'm still \n> not sure about is whether the CLOG entry is created when the XID is \n> assigned and then kept current as the state changes, or whether that \n> isn't even in CLOG until the transaction is committed. It seems like the \n> latter, but there's some ambiguity in the wording and too many code paths \n> for me to map right now.\n\npg_clog is allocated in pages of 8kB apiece(*). On allocation, pages are\nzeroed, which is the bit pattern for \"transaction in progress\". So when\na transaction starts, it only needs to ensure that the pg_clog page that\ncorresponds to it is allocated, but it need not write anything to it.\n\n(*) Each transaction needs 2 bits, so on a 8 kB page there is space for\n4 transactions/byte * 8 pages * 1kB/page = 32k transactions.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 16 May 2008 16:15:28 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Alvaro Herrera wrote:\n\n> pg_clog is allocated in pages of 8kB apiece(*). On allocation, pages are\n> zeroed, which is the bit pattern for \"transaction in progress\". So when\n> a transaction starts, it only needs to ensure that the pg_clog page that\n> corresponds to it is allocated, but it need not write anything to it.\n\nOf course, in 8.3 it's not when the transaction starts, but when the Xid\nis assigned (i.e. when the transaction first calls a read-write\ncommand). In previous versions it happens when the first snapshot is\ntaken (i.e. normally on the first command of any type, with very few\nexceptions.)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 16 May 2008 17:00:50 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Greg Smith wrote:\n>> After reading the code and that documentation a bit, the part I'm still \n>> not sure about is whether the CLOG entry is created when the XID is \n>> assigned and then kept current as the state changes, or whether that \n>> isn't even in CLOG until the transaction is committed.\n\n> pg_clog is allocated in pages of 8kB apiece(*). On allocation, pages are\n> zeroed, which is the bit pattern for \"transaction in progress\". So when\n> a transaction starts, it only needs to ensure that the pg_clog page that\n> corresponds to it is allocated, but it need not write anything to it.\n\nOne additional point: this means that one transaction in every 32K\nwriting transactions *does* have to do extra work when it assigns itself\nan XID, namely create and zero out the next page of pg_clog. 
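The 32K comes straight from the layout: two status bits per XID on a
BLCKSZ-sized page. As a sketch, with the constants spelled more or less the
way clog.c spells them but the accessor macros renamed here for illustration:

#define BLCKSZ               8192                  /* default block size */
#define CLOG_BITS_PER_XACT   2
#define CLOG_XACTS_PER_BYTE  4                     /* 8 bits / 2 bits    */
#define CLOG_XACTS_PER_PAGE  (BLCKSZ * CLOG_XACTS_PER_BYTE)    /* 32768 */

#define ToyXidToPage(xid)  ((xid) / CLOG_XACTS_PER_PAGE)
#define ToyXidToByte(xid)  (((xid) % CLOG_XACTS_PER_PAGE) / CLOG_XACTS_PER_BYTE)
#define ToyXidToShift(xid) (((xid) % CLOG_XACTS_PER_BYTE) * CLOG_BITS_PER_XACT)

Only the XID that crosses a 32768 boundary finds its page not there yet and
has to do the zeroing.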
And that\ndoesn't just slow down the transaction in question, but the next few\nguys that would like an XID but arrive on the scene while the\nzeroing-out is still in progress.\n\nThis probably contributes to the behavior that Simon and Josh regularly\ncomplain about, that our transaction execution time is subject to\nunpredictable spikes. I'm not sure how to get rid of it though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 May 2008 22:45:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "Tom Lane wrote:\n> One additional point: this means that one transaction in every 32K\n> writing transactions *does* have to do extra work when it assigns itself\n> an XID, namely create and zero out the next page of pg_clog. And that\n> doesn't just slow down the transaction in question, but the next few\n> guys that would like an XID but arrive on the scene while the\n> zeroing-out is still in progress.\n> \n> This probably contributes to the behavior that Simon and Josh regularly\n> complain about, that our transaction execution time is subject to\n> unpredictable spikes. I'm not sure how to get rid of it though.\n\nA thread maintaining a pool of assigned and cleared pg_clog pages, ahead\nof the immediate need? Possibly another job for an existing daemon\nthread.\n\n- Jeremy\n", "msg_date": "Sat, 17 May 2008 14:13:52 +0100", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "I just collected all the good internals information included in this \nthread and popped it onto http://wiki.postgresql.org/wiki/Hint_Bits where \nI'll continue to hack away at the text until it's readable. Thanks to \neveryone who answered my questions here, that's good progress toward \nclearing up a very underdocumented area.\n\nI note a couple of potential TODO items not on the official list yet that \ncame up during this discussion:\n\n-Smooth latency spikes when switching commit log pages by preallocating \ncleared pages before they are needed\n\n-Improve bulk loading by setting \"frozen\" hint bits for tuple inserts \nwhich occur within the same database transaction as the creation of the \ntable into which they're being inserted\n\nDid I miss anything? I think everything brought up falls either into one \nof those two or the existing \"Consider having the background writer update \nthe transaction status hint bits...\" TODO.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 18 May 2008 01:28:26 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "> Alvaro Herrera <[email protected]> writes:\n>> pg_clog is allocated in pages of 8kB apiece(*). On allocation, pages are\n>> zeroed, which is the bit pattern for \"transaction in progress\". So when\n>> a transaction starts, it only needs to ensure that the pg_clog page that\n>> corresponds to it is allocated, but it need not write anything to it.\n\nYeah, that's pretty-much how I imagined it would be. Nice and compact. I \nwould imagine that if there are only a few transactions, doing the pg_clog \nlookup would be remarkably quick. 
However, once there have been a \nbazillion transactions, with tuples pointing to the whole range of them, \nit would degenerate into having to perform an extra seek for each tuple, \nand that's why you added the hint bits.\n\nOn Fri, 16 May 2008, Tom Lane wrote:\n> One additional point: this means that one transaction in every 32K\n> writing transactions *does* have to do extra work when it assigns itself\n> an XID, namely create and zero out the next page of pg_clog. And that\n> doesn't just slow down the transaction in question, but the next few\n> guys that would like an XID but arrive on the scene while the\n> zeroing-out is still in progress.\n>\n> This probably contributes to the behavior that Simon and Josh regularly\n> complain about, that our transaction execution time is subject to\n> unpredictable spikes. I'm not sure how to get rid of it though.\n\nDoes it really take that long to zero out 8kB of RAM? I thought CPUs were \nreally quick at doing that!\n\nAnyway, the main thing you need to avoid is all the rest of the \ntransactions waiting for the new pg_clog page. The trick is to generate \nthe new page early, outside any locks on existing pages. It doesn't \nnecessarily need to be done by a daemon thread at all.\n\nMatthew\n\n-- \nI'm NOT paranoid! Which of my enemies told you this?\n", "msg_date": "Mon, 19 May 2008 13:32:32 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "Matthew Wakeling wrote:\n> On Fri, 16 May 2008, Tom Lane wrote:\n> > One additional point: this means that one transaction in every 32K\n> > writing transactions *does* have to do extra work when it assigns itself\n> > an XID, namely create and zero out the next page of pg_clog. And that\n> > doesn't just slow down the transaction in question, but the next few\n> > guys that would like an XID but arrive on the scene while the\n> > zeroing-out is still in progress.\n> >\n> > This probably contributes to the behavior that Simon and Josh regularly\n> > complain about, that our transaction execution time is subject to\n> > unpredictable spikes. I'm not sure how to get rid of it though.\n> \n> Does it really take that long to zero out 8kB of RAM? I thought CPUs were \n> really quick at doing that!\n\nYea, that was my assumption too.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 19 May 2008 10:54:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Matthew Wakeling wrote:\n>> Does it really take that long to zero out 8kB of RAM? I thought CPUs were \n>> really quick at doing that!\n\n> Yea, that was my assumption too.\n\nYou have to write the page (to be sure there is space for it on disk)\nnot only zero it.\n\nThis design is kind of a holdover, though, from back when we had one\never-growing clog file. 
Today I'd be inclined to think about managing\nit more like pg_xlog, ie, have some background process pre-create a\nwhole segment file at a time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 May 2008 11:37:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "On Mon, 19 May 2008, Matthew Wakeling wrote:\n\n> Does it really take that long to zero out 8kB of RAM? I thought CPUs were \n> really quick at doing that!\n\nYou don't get the whole CPU--you get time slices of one. Some of the \ncases complaints have come in about have over a thousand connections all \nfighting for CPU time, and making every one of them block for one guy who \nneeds to fiddle with memory for a while can be a problem. If you're \nunlucky you won't even be on the same CPU you started on each time you get \na little check of time, and you'll run at the speed of RAM rather than \nthat of the CPU--again, fighting for RAM access with every other process \non the server.\n\nThe real question in my mind is why this turns into a bottleneck before \nthe similar task of cleaning the 16MB XLOG segment does. I expected that \none would need to be cracked before the CLOG switch time could possibly be \nan issue, but reports from the field seem to suggest otherwise.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 19 May 2008 11:47:38 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> The real question in my mind is why this turns into a bottleneck before \n> the similar task of cleaning the 16MB XLOG segment does.\n\nBecause we do the latter off-line, or at least try to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 May 2008 11:53:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "\n> The real question in my mind is why this turns into a bottleneck before \n> the similar task of cleaning the 16MB XLOG segment does. I expected \n> that one would need to be cracked before the CLOG switch time could \n> possibly be an issue, but reports from the field seem to suggest \n> otherwise.\n\n\tHm, on current CPUs zeroing 8kB of RAM should take less than 2 us... now \nif it has to be written to disk, that's another story !\n", "msg_date": "Mon, 19 May 2008 19:14:48 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "On May 18, 2008, at 1:28 AM, Greg Smith wrote:\n> I just collected all the good internals information included in \n> this thread and popped it onto http://wiki.postgresql.org/wiki/ \n> Hint_Bits where I'll continue to hack away at the text until it's \n> readable. Thanks to everyone who answered my questions here, \n> that's good progress toward clearing up a very underdocumented area.\n>\n> I note a couple of potential TODO items not on the official list \n> yet that came up during this discussion:\n>\n> -Smooth latency spikes when switching commit log pages by \n> preallocating cleared pages before they are needed\n>\n> -Improve bulk loading by setting \"frozen\" hint bits for tuple \n> inserts which occur within the same database transaction as the \n> creation of the table into which they're being inserted\n>\n> Did I miss anything? 
I think everything brought up falls either \n> into one of those two or the existing \"Consider having the \n> background writer update the transaction status hint bits...\" TODO.\n\n-Evaluate impact of improved caching of CLOG per Greenplum:\n\nPer Luke Longergan:\nI'll find out if we can extract our code that did the work. It was \nsimple but scattered in a few routines. In concept it worked like this:\n\n1 - Ignore if hint bits are unset, use them if set. This affects \nheapam and vacuum I think.\n2 - implement a cache for clog lookups based on the optimistic \nassumption that the data was inserted in bulk. Put the cache one \ncall away from heapgetnext()\n\nI forget the details of (2). As I recall, if we fall off of the \nassumption, the penalty for long scans get large-ish (maybe 2X), but \nsince when do people full table scan when they're updates/inserts are \nso scattered across TIDs? It's an obvious big win for DW work.\n\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 24 May 2008 15:06:56 -0400", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "On May 18, 2008, at 1:28 AM, Greg Smith wrote:\n> I just collected all the good internals information included in \n> this thread and popped it onto http://wiki.postgresql.org/wiki/ \n> Hint_Bits where I'll continue to hack away at the text until it's \n> readable. Thanks to everyone who answered my questions here, \n> that's good progress toward clearing up a very underdocumented area.\n>\n> I note a couple of potential TODO items not on the official list \n> yet that came up during this discussion:\n>\n> -Smooth latency spikes when switching commit log pages by \n> preallocating cleared pages before they are needed\n>\n> -Improve bulk loading by setting \"frozen\" hint bits for tuple \n> inserts which occur within the same database transaction as the \n> creation of the table into which they're being inserted\n>\n> Did I miss anything? I think everything brought up falls either \n> into one of those two or the existing \"Consider having the \n> background writer update the transaction status hint bits...\" TODO.\n\nBlah, sorry for the double-post, but I just remembered a few things...\n\nDid we completely kill the idea of the bg_writer *or some other \nbackground process* being responsible for setting all hint-bits on \ndirty pages before they're written out?\n\nAlso, Simon and Tom had an idea at PGCon: Don't set hint-bits in the \nback-end if the page isn't already dirty. We'd likely need some \nheuristics on this... based on Luke's comments about improved CLOG \ncaching maybe we want to set the bits anyway if the tuples without \nthem set are from old transactions (idea being that pulling those \nCLOG pages would be pretty expensive). Or better yet; if we have to \nread a CLOG page off disk, set the bits.\n\nThis could still potentially be a big disadvantage for data \nwarehouses; though perhaps the way to fix that is recommend a \nbackgrounded vacuum after data load.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 24 May 2008 15:15:34 -0400", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "Decibel! 
wrote:\n> Also, Simon and Tom had an idea at PGCon: Don't set hint-bits in the \n> back-end if the page isn't already dirty. \n\nOr even better: set the hint-bits, but don't dirty the page.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 26 May 2008 14:03:17 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" }, { "msg_contents": "\"Heikki Linnakangas\" <[email protected]> writes:\n> Decibel! wrote:\n>> Also, Simon and Tom had an idea at PGCon: Don't set hint-bits in the \n>> back-end if the page isn't already dirty. \n\n> Or even better: set the hint-bits, but don't dirty the page.\n\nWhich in fact is what Simon suggested, not the other thing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 May 2008 11:36:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*) " }, { "msg_contents": "\nOn Mon, 2008-05-26 at 11:36 -0400, Tom Lane wrote:\n> \"Heikki Linnakangas\" <[email protected]> writes:\n> > Decibel! wrote:\n> >> Also, Simon and Tom had an idea at PGCon: Don't set hint-bits in the \n> >> back-end if the page isn't already dirty. \n> \n> > Or even better: set the hint-bits, but don't dirty the page.\n> \n> Which in fact is what Simon suggested, not the other thing.\n\nJust raised this on -hackers, BTW. \n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 27 May 2008 21:54:26 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I/O on select count(*)" } ]
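The hint-bit behaviour discussed in this thread can be reproduced with a short psql session. This is only a sketch: the table name, row count and filler column below are invented for illustration, and the timings depend on hardware and shared_buffers.

-- bulk load in a single transaction
CREATE TABLE hint_demo AS
    SELECT g AS id, repeat('x', 100) AS filler
    FROM generate_series(1, 1000000) g;

CHECKPOINT;        -- push the dirty pages out so the next scans start clean

\timing on
SELECT count(*) FROM hint_demo;   -- first scan: sets hint bits, dirties pages,
                                  -- and shows up as write I/O in vmstat/iostat
SELECT count(*) FROM hint_demo;   -- second scan: bits already set, reads only

-- alternatively, a VACUUM right after the load sets the bits up front,
-- so the first reader does not pay for them
VACUUM hint_demo;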
[ { "msg_contents": "----- Original Message ----\nFrom: Merlin Moncure <[email protected]>\nTo: Doug Eck <[email protected]>\nCc: [email protected]\nSent: Wednesday, May 14, 2008 3:38:23 PM\nSubject: Re: [PERFORM] I/O on select count(*)\n\nOn Wed, May 14, 2008 at 4:09 PM, Doug Eck <[email protected]> wrote:\n> I have a large table (~ 2B rows) that contains an indexed timestamp column.\n> I am attempting to run a query to determine the number of rows for a given\n> day using something like \"select count(*) from tbl1 where ts between\n> '2008-05-12 00:00:00.000' and '2008-05-12 23:59:59.999'\". Explain tells me\n> that the query will be done using an index scan (as I would expect), and I\n> realize that it is going to take a while. My question concerns some unusual\n> I/O activity on the box (SUSE) when I run the query.\n>\n> For the first couple of minutes I see reads only. After that vmstat shows\n> mixed reads and writes in a ratio of about 1 block read to 5 blocks\n> written. We have determined that files in our data and log partitions are\n> being hit, but the file system itself is not growing during this time (it\n> appears to be writing over the same chunk of space over and over again).\n> Memory on the box is not being swapped while all of this is happening. I\n> would have guessed that a \"select count(*)\" would not require a bunch of\n> writes, and I can't begin to figure out why the number of blocks written are\n> so much higher than the blocks read. If I modify the where clause to only\n> count the rows for a given minute or two, I see the reads but I never see\n> the unusual write behavior.\n>\n> Any thoughts into what could be going on? Thanks in advance for your help.\n\ncan you post the exact output of explain analyze? (or, at least,\nexplain if the query takes too long)\n\nmerlin\n\nThe query takes a long time to run, so I'll start with the explain output. I\ncan run explain analyze (given enough time) if you believe its output\ncould hold some clues.\n\ndb_2008=> explain select count(*) from ot_2008_05 where\ntransact_time between '2008-05-12 00:00:00.000' and '2008-05-12\n23:59:59.999';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=10368613.47..10368613.48 rows=1 width=0)\n -> Index Scan using ot_2008_05_ak2 on ot_2008_05 (cost=0.00..10011333.27 rows=142912078 width=0)\n Index Cond: ((transact_time >= '2008-05-12\n00:00:00-04'::timestamp with time zone) AND (transact_time <=\n'2008-05-12 23:59:59.999-04'::timestamp with time zone))\n(3 rows)\n\ndb_2008=>\n\nDoug\n\n\n\n \n----- Original Message ----From: Merlin Moncure <[email protected]>To: Doug Eck <[email protected]>Cc: [email protected]: Wednesday, May 14, 2008 3:38:23 PMSubject: Re: [PERFORM] I/O on select count(*)\nOn Wed, May 14, 2008 at 4:09 PM, Doug Eck <[email protected]> wrote:> I have a large table (~ 2B rows) that contains an indexed timestamp column.> I am attempting to run a query to determine the number of rows for a given> day using something like \"select count(*) from tbl1 where ts between> '2008-05-12 00:00:00.000' and '2008-05-12 23:59:59.999'\".  Explain tells me> that the query will be done using an index scan (as I would expect), and I> realize that it is going to take a while.  My question concerns some unusual> I/O activity on the box (SUSE)  when I run the query.>> For the first couple of minutes I see reads only.  
After that vmstat shows> mixed reads and writes in a ratio of about 1 block read to 5 blocks> written.  We have determined that files in our data and log\n partitions are> being hit, but the file system itself is not growing during this time (it> appears to be writing over the same chunk of space over and over again).> Memory on the box is not being swapped while all of this is happening.  I> would have guessed that a \"select count(*)\" would not require a bunch of> writes, and I can't begin to figure out why the number of blocks written are> so much higher than the blocks read.  If I modify the where clause to only> count the rows for a given minute or two, I see the reads but I never see> the unusual write behavior.>> Any thoughts into what could be going on?  Thanks in advance for your help.can you post the exact output of explain analyze? (or, at least,explain if the query takes too long)merlinThe query takes a long time to run, so I'll start with the explain output.  I\ncan run explain analyze (given enough time) if you believe its output\ncould hold some clues.\n\n\n\ndb_2008=> explain select count(*) from ot_2008_05 where\ntransact_time between '2008-05-12 00:00:00.000' and '2008-05-12\n23:59:59.999';\n\n\n                                                                                QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n Aggregate  (cost=10368613.47..10368613.48 rows=1 width=0)\n\n\n   ->  Index Scan using ot_2008_05_ak2 on ot_2008_05  (cost=0.00..10011333.27 rows=142912078 width=0)\n\n\n         Index Cond: ((transact_time >= '2008-05-12\n00:00:00-04'::timestamp with time zone) AND (transact_time <=\n'2008-05-12 23:59:59.999-04'::timestamp with time zone))\n\n\n(3 rows)\n\n\n\ndb_2008=>\n\n\nDoug", "msg_date": "Wed, 14 May 2008 14:23:49 -0700 (PDT)", "msg_from": "Doug Eck <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I/O on select count(*)" } ]
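For a range count like the one in the plan above, two small adjustments are worth trying. This is only a sketch built from the quoted query: the VACUUM step assumes the mystery writes are hint-bit updates (as the previous thread suggests), and a half-open range replaces BETWEEN so the upper bound does not rely on the .999-millisecond trick.

-- set hint bits (and reclaim dead rows) before the big read-only scan;
-- expensive on a table this size, but it only has to happen once
VACUUM ot_2008_05;

SELECT count(*)
FROM ot_2008_05
WHERE transact_time >= '2008-05-12 00:00:00'
  AND transact_time <  '2008-05-13 00:00:00';   -- same index, half-open range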
[ { "msg_contents": "Hi All,\n \nWe are doing some load tests with our application running postgres\n8.2.4. At times we see updates on a table taking longer (around\n11-16secs) than expected sub-second response time. The table in question\nis getting updated constantly through the load tests. In checking the\ntable size including indexes, they seem to be bloated got it confirmed\nafter recreating it (stats below). We have autovacuum enabled with\ndefault parameters. I thought autovaccum would avoid bloating issues but\nlooks like its not aggressive enough. Wondering if table/index bloating\nis causing update slowness in over a period of time. Any ideas how to\ntroubleshoot this further.\n \nNo IO waits seen during load tests and cpu usage on the server seem to\nbe 85% idle. This is a v445 sol10 with 4 cpu box attached to SAN\nstorage.\n \nHere is the update statement and table/index/instance stats.\n \nshared_buffers=4000MB\nmax_fsm_pages = 2048000\nmaintenance_work_mem = 512MB\ncheckpoint_segments = 128 \neffective_cache_size = 4000MB\n \nupdate tablexy set col2=$1,col9=$2, col10=$3,col11=$4,col3=$5 WHERE\nID=$6;\n \nBloated\n relname | relowner | relpages | reltuples \n ------------------------------+----------+----------+-----------\n tablexy | 10 | 207423 | 502627\n ix_tablexy_col1_col2 | 10 | 38043 | 502627\n ix_tablexy_col3 | 10 | 13944 | 502627\n ix_tablexy_col4 | 10 | 17841 | 502627\n ix_tablexy_col5 | 10 | 19669 | 502627\n ix_tablexy_col6 | 10 | 3865 | 502627\n ix_tablexy_col7 | 10 | 12359 | 502627\n ix_tablexy_col8_col7 | 10 | 26965 | 502627\n ct_tablexy_id_u1 | 10 | 6090 | 502627\n \nRecreating tablexy (compact),\n \n relname | relowner | relpages | reltuples \n------------------------------+----------+----------+-----------\n tablexy | 10 | 41777 | 501233\n ix_tablexy_col3 | 10 | 2137 | 501233\n ix_tablexy_col8_col7 | 10 | 4157 | 501233\n ix_tablexy_col6 | 10 | 1932 | 501233\n ix_tablexy_col7 | 10 | 1935 | 501233\n ix_tablexy_col1_col2 | 10 | 1933 | 501233\n ix_tablexy_col5 | 10 | 2415 | 501233\n ix_tablexy_col6 | 10 | 1377 | 501233\n ct_tablexy_id_u1 | 10 | 3046 | 501233\n \nThanks,\nStalin\n\n\n\n\n\nHi \nAll,\n \nWe are doing \nsome load tests with our application running postgres 8.2.4. At times \nwe see updates on a table taking longer (around \n11-16secs) than expected sub-second response time. The table in \nquestion is getting updated constantly through the load tests. In checking the \ntable size including indexes, they seem to be bloated got it confirmed after \nrecreating it (stats below). We have autovacuum enabled with default parameters. \nI thought autovaccum would avoid bloating issues but looks like its not \naggressive enough. Wondering if table/index bloating is causing update \nslowness in over a period of time. Any ideas how to troubleshoot this \nfurther.\n \nNo IO waits seen \nduring load tests and cpu usage on the server seem to be 85% idle. 
This is a \nv445 sol10 with 4 cpu box attached to SAN storage.\n \nHere is the update \nstatement and table/index/instance stats.\n \nshared_buffers=4000MB\nmax_fsm_pages = \n2048000\nmaintenance_work_mem = 512MB\ncheckpoint_segments = 128 \neffective_cache_size = 4000MB\n \nupdate \ntablexy set col2=$1,col9=$2, col10=$3,col11=$4,col3=$5 WHERE \nID=$6;\n \nBloated\n            \nrelname            | \nrelowner | relpages | reltuples \n ------------------------------+----------+----------+-----------  \ntablexy                      \n|       10 |   207423 \n|    502627  \nix_tablexy_col1_col2         \n|       10 |    38043 \n|    502627  \nix_tablexy_col3              \n|       10 |    13944 \n|    502627  \nix_tablexy_col4              \n|       10 |    17841 \n|    502627  \nix_tablexy_col5              \n|       10 |    19669 \n|    502627  \nix_tablexy_col6              \n|       10 |     3865 \n|    502627  \nix_tablexy_col7              \n|       10 |    12359 \n|    502627  \nix_tablexy_col8_col7         \n|       10 |    26965 \n|    502627  \nct_tablexy_id_u1             \n|       10 |     6090 \n|    502627\n \nRecreating tablexy \n(compact),\n \n           \nrelname            | \nrelowner | relpages | reltuples \n------------------------------+----------+----------+----------- tablexy                      \n|       10 |    41777 \n|    \n501233 ix_tablexy_col3              \n|       10 |     2137 \n|    \n501233 ix_tablexy_col8_col7         \n|       10 |     4157 \n|    \n501233 ix_tablexy_col6              \n|       10 |     1932 \n|    \n501233 ix_tablexy_col7              \n|       10 |     1935 \n|    \n501233 ix_tablexy_col1_col2         \n|       10 |     1933 \n|    \n501233 ix_tablexy_col5              \n|       10 |     2415 \n|    \n501233 ix_tablexy_col6              \n|       10 |     1377 \n|    \n501233 ct_tablexy_id_u1             \n|       10 |     3046 \n|    501233\n \nThanks,\nStalin", "msg_date": "Wed, 14 May 2008 18:31:30 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": true, "msg_subject": "Update performance degrades over time" }, { "msg_contents": "On Wed, May 14, 2008 at 6:31 PM, Subbiah Stalin-XCGF84\n<[email protected]> wrote:\n> Hi All,\n>\n> We are doing some load tests with our application running postgres 8.2.4. At\n> times we see updates on a table taking longer (around\n> 11-16secs) than expected sub-second response time. The table in question is\n> getting updated constantly through the load tests. In checking the table\n> size including indexes, they seem to be bloated got it confirmed after\n> recreating it (stats below). We have autovacuum enabled with default\n> parameters. I thought autovaccum would avoid bloating issues but looks like\n> its not aggressive enough. Wondering if table/index bloating is causing\n> update slowness in over a period of time. Any ideas how to troubleshoot this\n> further.\n\nSometimes it is necessary to not only VACUUM, but also REINDEX. If\nyour update changes an indexed column to a new, distinct value, you\ncan easily get index bloat.\n\nAlso, you should check to see if you have any old, open transactions\non the same instance. 
If you do, it's possible that VACUUM will have\nno beneficial effect.\n\n-jwb\n", "msg_date": "Thu, 15 May 2008 09:56:24 -0400", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Update performance degrades over time" }, { "msg_contents": "Yes we are updating one of indexed timestamp columns which gets unique\nvalue on every update. We tried setting autovacuum_vacuum_scale_factor =\n0.1 from default to make autovacuum bit aggressive, we see bloating on\nboth table and it's indexes but it's creeping up slowly though. \n\nAnyways, even with slower bloating, I still see update performance to\ndegrade with 15 sec response time captured by setting\nlog_min_duration_stmt. Looks like bloating isn't causing slower updates.\nAny help/ideas to tune this is appreciated.\n\nExplain plan seems reasonable for the update statement.\n\nupdate tablexy set col2=$1,col9=$2, col10=$3,col11=$4,col3=$5 WHERE\nID=$6;\n\n QUERY PLAN\n\n------------------------------------------------------------------------\n----------------------------------------------------\n Index Scan using ct_tablexy_id_u1 on tablexy (cost=0.00..8.51 rows=1\nwidth=194) (actual time=0.162..0.166 rows=1 loops=1)\n Index Cond: ((id)::text = '32xka8axki8'::text)\n\nThanks in advance.\n\nStalin\n\n-----Original Message-----\nFrom: Jeffrey Baker [mailto:[email protected]] \nSent: Thursday, May 15, 2008 6:56 AM\nTo: Subbiah Stalin-XCGF84; [email protected]\nSubject: Re: [PERFORM] Update performance degrades over time\n\nOn Wed, May 14, 2008 at 6:31 PM, Subbiah Stalin-XCGF84\n<[email protected]> wrote:\n> Hi All,\n>\n> We are doing some load tests with our application running postgres \n> 8.2.4. At times we see updates on a table taking longer (around\n> 11-16secs) than expected sub-second response time. The table in \n> question is getting updated constantly through the load tests. In \n> checking the table size including indexes, they seem to be bloated got\n\n> it confirmed after recreating it (stats below). We have autovacuum \n> enabled with default parameters. I thought autovaccum would avoid \n> bloating issues but looks like its not aggressive enough. Wondering if\n\n> table/index bloating is causing update slowness in over a period of \n> time. Any ideas how to troubleshoot this further.\n\nSometimes it is necessary to not only VACUUM, but also REINDEX. If your\nupdate changes an indexed column to a new, distinct value, you can\neasily get index bloat.\n\nAlso, you should check to see if you have any old, open transactions on\nthe same instance. If you do, it's possible that VACUUM will have no\nbeneficial effect.\n\n-jwb\n", "msg_date": "Thu, 15 May 2008 12:28:19 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update performance degrades over time" }, { "msg_contents": "Any system catalog views I can check for wait events causing slower\nresponse times.\n\nThanks in advance.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Subbiah\nStalin\nSent: Thursday, May 15, 2008 9:28 AM\nTo: Jeffrey Baker; [email protected]\nSubject: Re: [PERFORM] Update performance degrades over time\n\nYes we are updating one of indexed timestamp columns which gets unique\nvalue on every update. We tried setting autovacuum_vacuum_scale_factor =\n0.1 from default to make autovacuum bit aggressive, we see bloating on\nboth table and it's indexes but it's creeping up slowly though. 
\n\nAnyways, even with slower bloating, I still see update performance to\ndegrade with 15 sec response time captured by setting\nlog_min_duration_stmt. Looks like bloating isn't causing slower updates.\nAny help/ideas to tune this is appreciated.\n\nExplain plan seems reasonable for the update statement.\n\nupdate tablexy set col2=$1,col9=$2, col10=$3,col11=$4,col3=$5 WHERE\nID=$6;\n\n QUERY PLAN\n\n------------------------------------------------------------------------\n----------------------------------------------------\n Index Scan using ct_tablexy_id_u1 on tablexy (cost=0.00..8.51 rows=1\nwidth=194) (actual time=0.162..0.166 rows=1 loops=1)\n Index Cond: ((id)::text = '32xka8axki8'::text)\n\nThanks in advance.\n\nStalin\n\n-----Original Message-----\nFrom: Jeffrey Baker [mailto:[email protected]]\nSent: Thursday, May 15, 2008 6:56 AM\nTo: Subbiah Stalin-XCGF84; [email protected]\nSubject: Re: [PERFORM] Update performance degrades over time\n\nOn Wed, May 14, 2008 at 6:31 PM, Subbiah Stalin-XCGF84\n<[email protected]> wrote:\n> Hi All,\n>\n> We are doing some load tests with our application running postgres \n> 8.2.4. At times we see updates on a table taking longer (around\n> 11-16secs) than expected sub-second response time. The table in \n> question is getting updated constantly through the load tests. In \n> checking the table size including indexes, they seem to be bloated got\n\n> it confirmed after recreating it (stats below). We have autovacuum \n> enabled with default parameters. I thought autovaccum would avoid \n> bloating issues but looks like its not aggressive enough. Wondering if\n\n> table/index bloating is causing update slowness in over a period of \n> time. Any ideas how to troubleshoot this further.\n\nSometimes it is necessary to not only VACUUM, but also REINDEX. If your\nupdate changes an indexed column to a new, distinct value, you can\neasily get index bloat.\n\nAlso, you should check to see if you have any old, open transactions on\nthe same instance. If you do, it's possible that VACUUM will have no\nbeneficial effect.\n\n-jwb\n\n--\nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n______________________________________________________________________\nThis email has been scanned by the MessageLabs Email Security System.\nFor more information please visit http://www.messagelabs.com/email\n______________________________________________________________________\n", "msg_date": "Thu, 15 May 2008 15:27:48 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Update performance degrades over time" } ]
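The statistics views can answer the two questions raised in this thread: whether autovacuum is keeping up, and whether something is blocking the updates. A sketch against the 8.2-era catalogs follows (column names as of that release; 'tablexy' as used above):

-- when was the table last vacuumed/analyzed, by hand or by autovacuum?
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'tablexy';

-- long-lived sessions (including "<IDLE> in transaction") keep dead rows
-- from being reclaimed and can hold locks the updates wait on
SELECT procpid, usename, current_query, query_start, backend_start
FROM pg_stat_activity
ORDER BY backend_start;

-- any ungranted lock requests at the moment of a slow update?
SELECT locktype, relation::regclass, mode, granted, pid
FROM pg_locks
WHERE NOT granted;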
[ { "msg_contents": "The following query produces some fairly off estimates for the number of rows \nthat should be returned (this is based on a much more complex query, but \nwhittling it down to this which seems to be the heart of the problem) \n\npeii=# explain analyze select * from adv.peii_fast_lookup pfl1 join \nadv.lsteml_m le1 on (pfl1.ctm_nbr = le1.ctm_nbr and pfl1.emal_id = \nle1.emal_id) ;\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\nHash Join (cost=386721.95..1848154.67 rows=7 width=100) (actual \ntime=11407.555..103368.646 rows=18348993 loops=1)\nHash Cond: (((le1.ctm_nbr)::text = (pfl1.ctm_nbr)::text) AND \n((le1.emal_id)::text = (pfl1.emal_id)::text))\n-> Seq Scan on lsteml_m le1 (cost=0.00..435026.44 rows=18712844 width=67) \n(actual time=0.027..7057.486 rows=18703401 loops=1)\n-> Hash (cost=172924.18..172924.18 rows=9371918 width=33) (actual \ntime=11387.413..11387.413 rows=9368565 loops=1)\n-> Seq Scan on peii_fast_lookup pfl1 (cost=0.00..172924.18 rows=9371918 \nwidth=33) (actual time=0.006..2933.512 rows=9368565 loops=1)\nTotal runtime: 108132.205 ms\n\ndefault_stats_target is 100, both tables freshly analyzed\nall join columns on both sides are varchar(12)\nand we're on 8.3.1\n\nI notice that it seems to give a better number of rows when doing single \ncolumn joins (explain only, didnt want to wait for it to actually run this) \n\npeii=# explain select * from adv.peii_fast_lookup pfl1 join adv.lsteml_m le1 \non (pfl1.ctm_nbr = le1.ctm_nbr) ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Merge Join (cost=7243997.70..8266364.43 rows=65065332 width=100)\n Merge Cond: ((pfl1.ctm_nbr)::text = (le1.ctm_nbr)::text)\n -> Sort (cost=1917159.20..1940589.00 rows=9371918 width=33)\n Sort Key: pfl1.ctm_nbr\n -> Seq Scan on peii_fast_lookup pfl1 (cost=0.00..172924.18 \nrows=9371918 width=33)\n -> Materialize (cost=5326833.82..5560745.31 rows=18712919 width=67)\n -> Sort (cost=5326833.82..5373616.12 rows=18712919 width=67)\n Sort Key: le1.ctm_nbr\n -> Seq Scan on lsteml_m le1 (cost=0.00..435028.19 \nrows=18712919 width=67)\n(9 rows)\n\npeii=# explain select * from adv.peii_fast_lookup pfl1 join adv.lsteml_m le1 \non (pfl1.emal_id = le1.emal_id) ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Hash Join (cost=363292.16..1754557.17 rows=18712919 width=100)\n Hash Cond: ((le1.emal_id)::text = (pfl1.emal_id)::text)\n -> Seq Scan on lsteml_m le1 (cost=0.00..435028.19 rows=18712919 width=67)\n -> Hash (cost=172924.18..172924.18 rows=9371918 width=33)\n -> Seq Scan on peii_fast_lookup pfl1 (cost=0.00..172924.18 \nrows=9371918 width=33)\n(5 rows)\n\n\nfor kicks, I upped the stats target and reran everything...\n\npeii=# set default_statistics_target = 400;\nSET\npeii=# analyze verbose adv.peii_fast_lookup;\nINFO: analyzing \"adv.peii_fast_lookup\"\nINFO: \"peii_fast_lookup\": scanned 79205 of 79205 pages, containing 9368569 \nlive rows and 316 dead rows; 120000 rows in sample, 9368569 estimated total \nrows\nANALYZE\npeii=# analyze verbose adv.lsteml_m;\nINFO: analyzing \"adv.lsteml_m\"\nINFO: \"lsteml_m\": scanned 120000 of 247899 pages, containing 9050726 live \nrows and 110882 dead rows; 120000 rows in sample, 18697216 estimated total \nrows\nANALYZE\npeii=# explain analyze select * from adv.peii_fast_lookup pfl1 join \nadv.lsteml_m le1 on (pfl1.ctm_nbr = 
le1.ctm_nbr and pfl1.emal_id = \nle1.emal_id) ;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=386611.22..1847063.87 rows=4 width=100) (actual \ntime=11169.338..95460.560 rows=18348993 loops=1)\n Hash Cond: (((le1.ctm_nbr)::text = (pfl1.ctm_nbr)::text) AND \n((le1.emal_id)::text = (pfl1.emal_id)::text))\n -> Seq Scan on lsteml_m le1 (cost=0.00..434871.16 rows=18697216 width=67) \n(actual time=0.008..7012.533 rows=18703401 loops=1)\n -> Hash (cost=172890.69..172890.69 rows=9368569 width=33) (actual \ntime=11160.329..11160.329 rows=9368569 loops=1)\n -> Seq Scan on peii_fast_lookup pfl1 (cost=0.00..172890.69 \nrows=9368569 width=33) (actual time=0.005..2898.336 rows=9368569 loops=1)\n Total runtime: 100223.220 ms\n(6 rows)\n\npeii=# set enable_hashjoin = false;\nSET\npeii=# explain analyze select * from adv.peii_fast_lookup pfl1 join \nadv.lsteml_m le1 on (pfl1.ctm_nbr = le1.ctm_nbr and pfl1.emal_id = \nle1.emal_id) ;\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=5322783.26..5972103.39 rows=4 width=100) (actual \ntime=415082.543..606999.689 rows=18348993 loops=1)\n Merge Cond: (((pfl1.emal_id)::text = (le1.emal_id)::text) AND \n((pfl1.ctm_nbr)::text = (le1.ctm_nbr)::text))\n -> Index Scan using peii_fast_lookup_pkey on peii_fast_lookup pfl1 \n(cost=0.00..462635.50 rows=9368569 width=33) (actual time=0.031..7342.227 \nrows=9368569 loops=1)\n -> Materialize (cost=5322446.84..5556162.04 rows=18697216 width=67) \n(actual time=414700.258..519877.718 rows=18703401 loops=1)\n -> Sort (cost=5322446.84..5369189.88 rows=18697216 width=67) \n(actual time=414700.254..506652.718 rows=18703401 loops=1)\n Sort Key: le1.emal_id, le1.ctm_nbr\n Sort Method: external merge Disk: 1620632kB\n -> Seq Scan on lsteml_m le1 (cost=0.00..434871.16 \nrows=18697216 width=67) (actual time=0.006..6776.725 rows=18703401 loops=1)\n Total runtime: 611728.059 ms\n(9 rows)\n\nStill the same issue, so this doesn't seem like something specific to hash \njoins. I'll note that this is the behavior I recall from 8.2, so I'm not sure \nif this is a bug, or just an outright deficiancy, but thought I would post to \nsee if anyone had any thoughts on it. (If there is some additional info I can \nprovide, please lmk). \n\n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Wed, 14 May 2008 18:34:54 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "poor row estimates with multi-column joins" } ]
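The collapse from millions of rows to "rows=7" falls out of the planner's independence assumption: the estimated selectivities of the two join clauses are simply multiplied. A rough reconstruction using only numbers already printed in the plans above (illustrative arithmetic, not a trace of the planner's exact code path):

  join on ctm_nbr alone      -> rows ~= 65,065,332
  emal_id treated as unique  -> sel(emal_id) ~= 1 / 9,371,918
  both clauses together      -> 65,065,332 * (1 / 9,371,918) ~= 6.9  ->  "rows=7"

Because ctm_nbr and emal_id are strongly correlated (they identify the same underlying rows), multiplying the per-column selectivities undercounts the real join size (18,348,993 rows) by roughly six orders of magnitude, and every plan built on top of that estimate misjudges the work involved.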
[ { "msg_contents": "Hi all,\nusing mkfs.ext3 I can use \"-T\" to tune the filesytem\n\nmkfs.ext3 -T fs_type ...\n\nfs_type are in /etc/mke2fs.conf (on debian)\n\nis there a recommended setting for this parameter ???\n\nthanks\n", "msg_date": "Thu, 15 May 2008 12:11:02 +0200", "msg_from": "\"Philippe Amelant\" <[email protected]>", "msg_from_op": true, "msg_subject": "which ext3 fs type should I use for postgresql" }, { "msg_contents": "On Thu, 15 May 2008, Philippe Amelant wrote:\n> using mkfs.ext3 I can use \"-T\" to tune the filesytem\n>\n> mkfs.ext3 -T fs_type ...\n>\n> fs_type are in /etc/mke2fs.conf (on debian)\n\nIf you look at that file, you'd see that tuning really doesn't change that \nmuch. In fact, the only thing it does change (if you avoid \"small\" and \n\"floppy\") is the number of inodes available in the filesystem. Since \nPostgres tends to produce few large files, you don't need that many \ninodes, so the \"largefile\" option may be best. However, note that the \nnumber of inodes is a hard limit of the filesystem - if you try to create \nmore files on the filesystem than there are available inodes, then you \nwill get an out of space error even if the filesystem has space left.\nThe only real benefit of having not many inodes is that you waste a little \nless space, so many admins are pretty generous with this setting.\n\nProbably of more use are some of the other settings:\n\n -m reserved-blocks-percentage - this reserves a portion of the filesystem\n that only root can write to. If root has no need for it, you can kill\n this by setting it to zero. The default is for 5% of the disc to be\n wasted.\n -j turns the filesystem into ext3 instead of ext2 - many people say that\n for Postgres you shouldn't do this, as ext2 is faster.\n\nMatthew\n\n-- \nThe surest protection against temptation is cowardice.\n -- Mark Twain\n", "msg_date": "Thu, 15 May 2008 12:29:40 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "On Thu, 15 May 2008, Matthew Wakeling wrote:\n\n> On Thu, 15 May 2008, Philippe Amelant wrote:\n>> using mkfs.ext3 I can use \"-T\" to tune the filesytem\n>> \n>> mkfs.ext3 -T fs_type ...\n>> \n>> fs_type are in /etc/mke2fs.conf (on debian)\n>\n> If you look at that file, you'd see that tuning really doesn't change that \n> much. In fact, the only thing it does change (if you avoid \"small\" and \n> \"floppy\") is the number of inodes available in the filesystem. Since Postgres \n> tends to produce few large files, you don't need that many inodes, so the \n> \"largefile\" option may be best. However, note that the number of inodes is a \n> hard limit of the filesystem - if you try to create more files on the \n> filesystem than there are available inodes, then you will get an out of space \n> error even if the filesystem has space left.\n> The only real benefit of having not many inodes is that you waste a little \n> less space, so many admins are pretty generous with this setting.\n\nIIRC postgres likes to do 1M/file, which isn't very largeas far as the -T \nsetting goes.\n\n> Probably of more use are some of the other settings:\n>\n> -m reserved-blocks-percentage - this reserves a portion of the filesystem\n> that only root can write to. If root has no need for it, you can kill\n> this by setting it to zero. The default is for 5% of the disc to be\n> wasted.\n\nthink twice about this. 
ext2/3 get slow when they fill up (they have \nfragmentation problems when free space gets too small), this 5% that \nonly root can use also serves as a buffer against that as well.\n\n> -j turns the filesystem into ext3 instead of ext2 - many people say that\n> for Postgres you shouldn't do this, as ext2 is faster.\n\nfor the partition with the WAL on it you may as well do ext2 (the WAL is \nwritten synchronously and sequentially so the journal doesn't help you), \nbut for the data partition you may benifit from the journal.\n\nDavid Lang\n", "msg_date": "Thu, 15 May 2008 05:20:09 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "On Thu, 15 May 2008, [email protected] wrote:\n> IIRC postgres likes to do 1M/file, which isn't very largeas far as the -T \n> setting goes.\n\nITYF it's actually 1GB/file.\n\n> think twice about this. ext2/3 get slow when they fill up (they have \n> fragmentation problems when free space gets too small), this 5% that only \n> root can use also serves as a buffer against that as well.\n\nIt makes sense to me that the usage pattern of Postgres would be much less \nsusceptible to causing fragmentation than normal filesystem usage. Has \nanyone actually tested this and found out?\n\nMatthew\n\n-- \nIsn't \"Microsoft Works\" something of a contradiction?\n", "msg_date": "Thu, 15 May 2008 13:23:57 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "Matthew Wakeling wrote:\n> Probably of more use are some of the other settings:\n> \n> -m reserved-blocks-percentage - this reserves a portion of the filesystem\n> that only root can write to. If root has no need for it, you can kill\n> this by setting it to zero. The default is for 5% of the disc to be\n> wasted.\n\nThis is not a good idea. The 5% is NOT reserved for root's use, but rather is to prevent severe file fragmentation. As the disk gets full, the remaining empty spaces tend to be small spaces scattered all over the disk, meaning that even for modest-sized files, the kernel can't allocate contiguous disk blocks. If you reduce this restriction to 0%, you are virtually guaranteed poor performance when you fill up your disk, since those files that are allocated last will be massively fragmented.\n\nWorse, the fragmented files that you create remain fragmented even if you clean up to get back below the 95% mark. If Postgres happened to insert a lot of data on a 99% full file system, those blocks could be spread all over the place, and they'd stay that way forever, even after you cleared some space.\n\nCraig\n", "msg_date": "Thu, 15 May 2008 07:57:01 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "Craig James <craig_james 'at' emolecules.com> writes:\n\n> Matthew Wakeling wrote:\n>> Probably of more use are some of the other settings:\n>>\n>> -m reserved-blocks-percentage - this reserves a portion of the filesystem\n>> that only root can write to. If root has no need for it, you can kill\n>> this by setting it to zero. The default is for 5% of the disc to be\n>> wasted.\n>\n> This is not a good idea. The 5% is NOT reserved for root's\n> use, but rather is to prevent severe file fragmentation. 
As\n\nAlso, IIRC when PG writes data up to a full filesystem,\npostmaster won't be able to then restart if the filesystem is\nstill full (it needs some free disk space for its startup).\n\nOr maybe this has been fixed in recent versions?\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Thu, 15 May 2008 17:08:16 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "On Thu, 15 May 2008, Guillaume Cottenceau wrote:\n> Also, IIRC when PG writes data up to a full filesystem,\n> postmaster won't be able to then restart if the filesystem is\n> still full (it needs some free disk space for its startup).\n>\n> Or maybe this has been fixed in recent versions?\n\nAh, the \"not enough space to delete file, delete some files and try again\" \nproblem. Anyway, that isn't relevant to the reserved percentage, as that \nwill happen whether or not the filesystem is 5% smaller.\n\nMatthew\n\n-- \nLet's say I go into a field and I hear \"baa baa baa\". Now, how do I work \nout whether that was \"baa\" followed by \"baa baa\", or if it was \"baa baa\"\nfollowed by \"baa\"?\n - Computer Science Lecturer\n", "msg_date": "Thu, 15 May 2008 16:21:24 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "Matthew Wakeling <matthew 'at' flymine.org> writes:\n\n> On Thu, 15 May 2008, Guillaume Cottenceau wrote:\n>> Also, IIRC when PG writes data up to a full filesystem,\n>> postmaster won't be able to then restart if the filesystem is\n>> still full (it needs some free disk space for its startup).\n>>\n>> Or maybe this has been fixed in recent versions?\n>\n> Ah, the \"not enough space to delete file, delete some files and try\n> again\" problem. Anyway, that isn't relevant to the reserved\n> percentage, as that will happen whether or not the filesystem is 5%\n> smaller.\n\nIt is still relevant, as with 5% margin, you can afford changing\nthat to 0% with tune2fs, just the time for you to start PG and\nremove some data by SQL, then shutdown and set the margin to 5%\nagain.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Thu, 15 May 2008 17:32:37 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "Guillaume Cottenceau wrote:\n> Matthew Wakeling <matthew 'at' flymine.org> writes:\n\n> It is still relevant, as with 5% margin, you can afford changing\n> that to 0% with tune2fs, just the time for you to start PG and\n> remove some data by SQL, then shutdown and set the margin to 5%\n> again.\n> \n\nI find that if you actually reach that level of capacity failure it is \ndue to lack of management and likely there is much lower hanging fruit \nleft over by a lazy dba or sysadmin than having to adjust filesystem \nlevel parameters.\n\nManage actively and the above change is absolutely irrelevant.\n\nJoshua D. Drake\n", "msg_date": "Thu, 15 May 2008 08:38:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "\"Joshua D. 
Drake\" <jd 'at' commandprompt.com> writes:\n\n> Guillaume Cottenceau wrote:\n>> Matthew Wakeling <matthew 'at' flymine.org> writes:\n>\n>> It is still relevant, as with 5% margin, you can afford changing\n>> that to 0% with tune2fs, just the time for you to start PG and\n>> remove some data by SQL, then shutdown and set the margin to 5%\n>> again.\n>\n> I find that if you actually reach that level of capacity failure it is\n> due to lack of management and likely there is much lower hanging fruit\n> left over by a lazy dba or sysadmin than having to adjust filesystem\n> level parameters.\n>\n> Manage actively and the above change is absolutely irrelevant.\n\nOf course. I didn't say otherwise. I only say that it's useful in\nthat case. E.g. if you're using a dedicated partition for PG,\nthen a good solution is what I describe, rather than horrifyingly\ntrying to remove some random PG files, or when you cannot\ntemporarily move some of them and symlink from the PG partition.\nI don't praise that kind of case, it should of course be avoided\nby sane management. A bad management is not a reason for hiding\nsolutions to the problems that can happen!\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Thu, 15 May 2008 17:56:31 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "On Thu, 15 May 2008, [email protected] wrote:\n\n> On Thu, 15 May 2008, Matthew Wakeling wrote:\n>\n>> On Thu, 15 May 2008, Philippe Amelant wrote:\n>>> using mkfs.ext3 I can use \"-T\" to tune the filesytem\n>>> \n>>> mkfs.ext3 -T fs_type ...\n>>> \n>>> fs_type are in /etc/mke2fs.conf (on debian)\n>> \n>> If you look at that file, you'd see that tuning really doesn't change that \n>> much. In fact, the only thing it does change (if you avoid \"small\" and \n>> \"floppy\") is the number of inodes available in the filesystem. Since \n>> Postgres tends to produce few large files, you don't need that many inodes, \n>> so the \"largefile\" option may be best. However, note that the number of \n>> inodes is a hard limit of the filesystem - if you try to create more files \n>> on the filesystem than there are available inodes, then you will get an out \n>> of space error even if the filesystem has space left.\n>> The only real benefit of having not many inodes is that you waste a little \n>> less space, so many admins are pretty generous with this setting.\n>\n> IIRC postgres likes to do 1M/file, which isn't very largeas far as the -T \n> setting goes.\n>\n>> Probably of more use are some of the other settings:\n>> \n>> -m reserved-blocks-percentage - this reserves a portion of the filesystem\n>> that only root can write to. If root has no need for it, you can kill\n>> this by setting it to zero. The default is for 5% of the disc to be\n>> wasted.\n>\n> think twice about this. 
ext2/3 get slow when they fill up (they have \n> fragmentation problems when free space gets too small), this 5% that only \n> root can use also serves as a buffer against that as well.\n>\n>> -j turns the filesystem into ext3 instead of ext2 - many people say that\n>> for Postgres you shouldn't do this, as ext2 is faster.\n>\n> for the partition with the WAL on it you may as well do ext2 (the WAL is \n> written synchronously and sequentially so the journal doesn't help you), but \n> for the data partition you may benifit from the journal.\n\na fairly recent article on the subject\n\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n\nDavid Lang\n", "msg_date": "Fri, 16 May 2008 02:58:08 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "On Thu, May 15, 2008 at 9:38 AM, Joshua D. Drake <[email protected]> wrote:\n> Guillaume Cottenceau wrote:\n>>\n>> Matthew Wakeling <matthew 'at' flymine.org> writes:\n>\n>> It is still relevant, as with 5% margin, you can afford changing\n>> that to 0% with tune2fs, just the time for you to start PG and\n>> remove some data by SQL, then shutdown and set the margin to 5%\n>> again.\n>>\n>\n> I find that if you actually reach that level of capacity failure it is due\n> to lack of management and likely there is much lower hanging fruit left over\n> by a lazy dba or sysadmin than having to adjust filesystem level parameters.\n>\n> Manage actively and the above change is absolutely irrelevant.\n\nSorry, but that's like saying that open heart surgery isn't a fix for\nclogged arteries because you should have been taking aspirin everyday\nand exercising. It might not be the best answer, but sometimes it's\nthe only answer you've got.\n\nI know that being able to drop the margin from x% to 0% for 10 minutes\nhas pulled more than one db back from the brink for me (usually\nconsulting on other people's databases, only once or so on my own) :)\n", "msg_date": "Fri, 16 May 2008 11:07:17 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" }, { "msg_contents": "On Fri, 16 May 2008 11:07:17 -0600\n\"Scott Marlowe\" <[email protected]> wrote:\n\n> Sorry, but that's like saying that open heart surgery isn't a fix for\n> clogged arteries because you should have been taking aspirin everyday\n> and exercising. It might not be the best answer, but sometimes it's\n> the only answer you've got.\n> \n> I know that being able to drop the margin from x% to 0% for 10 minutes\n> has pulled more than one db back from the brink for me (usually\n> consulting on other people's databases, only once or so on my own) :)\n\nMy point is, if you are adjusting that parameter you probably have a\nstray log or a bunch of rpms etc... that can be truncated to get\nyou where you need to be.\n\nOf course there is always the last ditch effort of what you suggest but\nfirst you should look for the more obvious possible solution.\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate", "msg_date": "Fri, 16 May 2008 10:15:05 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which ext3 fs type should I use for postgresql" } ]
[ { "msg_contents": "Hi List;\n\nI have a table with 9,961,914 rows in it (see the describe of \nbigtab_stats_fact_tmp14 below)\n\nI also have a table with 7,785 rows in it (see the describe of \nxsegment_dim below)\n\nI'm running the join shown below and it takes > 10 hours and \neventually runs out of disk space on a 1.4TB file system\n\nI've included below a describe of both tables, the join and an explain \nplan, any help / suggestions would be much appreciated !\n\nI need to get this beast to run as quickly as possible (without \nfilling up my file system)\n\n\nThanks in advance...\n\n\n\n\n\n\n\n\n\n\n\nselect\nf14.xpublisher_dim_id,\nf14.xtime_dim_id,\nf14.xlocation_dim_id,\nf14.xreferrer_dim_id,\nf14.xsite_dim_id,\nf14.xsystem_cfg_dim_id,\nf14.xaffiliate_dim_id,\nf14.customer_id,\npf_dts_id,\nepisode_id,\nsessionid,\nbytes_received,\nbytes_transmitted,\ntotal_played_time_sec,\nsegdim.xsegment_dim_id as episode_level_segid\nfrom\nbigtab_stats_fact_tmp14 f14,\nxsegment_dim segdim\nwhere\nf14.customer_id = segdim.customer_srcid\nand f14.show_id = segdim.show_srcid\nand f14.season_id = segdim.season_srcid\nand f14.episode_id = segdim.episode_srcid\nand segdim.segment_srcid is NULL;\n\n\n\n\n\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nMerge Join (cost=1757001.74..73569676.49 rows=3191677219 width=118)\nMerge Cond: ((segdim.episode_srcid = f14.episode_id) AND \n(segdim.customer_srcid = f14.customer_id) AND (segdim.show_srcid = \nf14.show_id) AND (segdim.season_srcid = f14.season_id))\n-> Sort (cost=1570.35..1579.46 rows=3643 width=40)\nSort Key: segdim.episode_srcid, segdim.customer_srcid, \nsegdim.show_srcid, segdim.season_srcid\n-> Seq Scan on xsegment_dim segdim (cost=0.00..1354.85 rows=3643 \nwidth=40)\nFilter: (segment_srcid IS NULL)\n-> Sort (cost=1755323.26..1780227.95 rows=9961874 width=126)\nSort Key: f14.episode_id, f14.customer_id, f14.show_id, f14.season_id\n-> Seq Scan on bigtab_stats_fact_tmp14 f14 (cost=0.00..597355.74 \nrows=9961874 width=126)\n(9 rows)\n\n\n\n\n\n\n\n\n\n# \\d bigtab_stats_fact_tmp14\nTable \"public.bigtab_stats_fact_tmp14\"\nColumn | Type | Modifiers\n--------------------------+-----------------------------+-----------\npf_dts_id | bigint |\npf_device_id | bigint |\nsegment_id | bigint |\ncdn_id | bigint |\ncollector_id | bigint |\ndigital_envoy_id | bigint |\nmaxmind_id | bigint |\nquova_id | bigint |\nwebsite_id | bigint |\nreferrer_id | bigint |\naffiliate_id | bigint |\ncustom_info_id | bigint |\nstart_dt | timestamp without time zone |\ntotal_played_time_sec | numeric(18,5) |\nbytes_received | bigint |\nbytes_transmitted | bigint |\nstall_count | integer |\nstall_duration_sec | numeric(18,5) |\nhiccup_count | integer |\nhiccup_duration_sec | numeric(18,5) |\nwatched_duration_sec | numeric(18,5) |\nrewatched_duration_sec | numeric(18,5) |\nrequested_start_position | numeric(18,5) |\nrequested_stop_position | numeric(18,5) |\npost_position | numeric(18,5) |\nis_vod | numeric(1,0) |\nsessionid | bigint |\ncreate_dt | timestamp without time zone |\nsegment_type_id | bigint |\ncustomer_id | bigint |\ncontent_publisher_id | bigint |\ncontent_owner_id | bigint |\nepisode_id | bigint |\nduration_sec | numeric(18,5) |\ndevice_id | bigint |\nos_id | bigint |\nbrowser_id | bigint |\ncpu_id | bigint |\nxsystem_cfg_dim_id | bigint |\nxreferrer_dim_id | bigint |\nxaffiliate_dim_id | bigint |\nxsite_dim_id | 
bigint |\nxpublisher_dim_id | bigint |\nseason_id | bigint |\nshow_id | bigint |\nxsegment_dim_id | bigint |\nlocation_id | bigint |\nzipcode | character varying(20) |\nxlocation_dim_id | bigint |\nlocation_srcid | bigint |\ntimezone | real |\nxtime_dim_id | bigint |\nIndexes:\n\"bigtab_stats_fact_tmp14_idx1\" btree (customer_id)\n\"bigtab_stats_fact_tmp14_idx2\" btree (show_id)\n\"bigtab_stats_fact_tmp14_idx3\" btree (season_id)\n\"bigtab_stats_fact_tmp14_idx4\" btree (episode_id)\n\n\n\n\n\n\n# \\d xsegment_dim\nTable \"public.xsegment_dim\"\nColumn | Type | \nModifiers\n----------------------+----------------------------- \n+-------------------------------------------------------------\nxsegment_dim_id | bigint | not null default \nnextval('xsegment_dim_seq'::regclass)\ncustomer_srcid | bigint | not null\nshow_srcid | bigint | not null\nshow_name | character varying(500) | not null\nseason_srcid | bigint | not null\nseason_name | character varying(500) | not null\nepisode_srcid | bigint | not null\nepisode_name | character varying(500) | not null\nsegment_type_id | integer |\nsegment_type | character varying(500) |\nsegment_srcid | bigint |\nsegment_name | character varying(500) |\neffective_dt | timestamp without time zone | not null default \nnow()\ninactive_dt | timestamp without time zone |\nlast_update_dt | timestamp without time zone | not null default \nnow()\nIndexes:\n\"xsegment_dim_pk\" PRIMARY KEY, btree (xsegment_dim_id)\n\"seg1\" btree (customer_srcid)\n\"seg2\" btree (show_srcid)\n\"seg3\" btree (season_srcid)\n\"seg4\" btree (episode_srcid)\n\"seg5\" btree (segment_srcid)\n\"xsegment_dim_ix1\" btree (customer_srcid)\n\n\n\n\n\n", "msg_date": "Fri, 16 May 2008 00:31:08 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Join runs for > 10 hours and then fills up >1.3TB of disk space" }, { "msg_contents": "Sorry I goofed on the query text Here's the correct query:\n\nselect\nf14.xpublisher_dim_id,\nf14.xtime_dim_id,\nf14.xlocation_dim_id,\nf14.xreferrer_dim_id,\nf14.xsite_dim_id,\nf14.xsystem_cfg_dim_id,\nf14.xaffiliate_dim_id,\nf14.customer_id,\nf14.pf_dts_id,\nf14.episode_id,\nf14.sessionid,\nf14.bytes_received,\nf14.bytes_transmitted,\nf14.total_played_time_sec,\nsegdim.xsegment_dim_id as episode_level_segid\nfrom\nbigtab_stats_fact_tmp14 f14,\nxsegment_dim segdim\nwhere\nf14.customer_id = segdim.customer_srcid\nand f14.show_id = segdim.show_srcid\nand f14.season_id = segdim.season_srcid\nand f14.episode_id = segdim.episode_srcid\nand segdim.segment_srcid is NULL;\n\n\n\n\n\n\nOn May 16, 2008, at 12:31 AM, kevin kempter wrote:\n\n> Hi List;\n>\n> I have a table with 9,961,914 rows in it (see the describe of \n> bigtab_stats_fact_tmp14 below)\n>\n> I also have a table with 7,785 rows in it (see the describe of \n> xsegment_dim below)\n>\n> I'm running the join shown below and it takes > 10 hours and \n> eventually runs out of disk space on a 1.4TB file system\n>\n> I've included below a describe of both tables, the join and an \n> explain plan, any help / suggestions would be much appreciated !\n>\n> I need to get this beast to run as quickly as possible (without \n> filling up my file system)\n>\n>\n> Thanks in advance...\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> select\n> f14.xpublisher_dim_id,\n> f14.xtime_dim_id,\n> f14.xlocation_dim_id,\n> f14.xreferrer_dim_id,\n> f14.xsite_dim_id,\n> f14.xsystem_cfg_dim_id,\n> f14.xaffiliate_dim_id,\n> f14.customer_id,\n> pf_dts_id,\n> episode_id,\n> sessionid,\n> bytes_received,\n> 
bytes_transmitted,\n> total_played_time_sec,\n> segdim.xsegment_dim_id as episode_level_segid\n> from\n> bigtab_stats_fact_tmp14 f14,\n> xsegment_dim segdim\n> where\n> f14.customer_id = segdim.customer_srcid\n> and f14.show_id = segdim.show_srcid\n> and f14.season_id = segdim.season_srcid\n> and f14.episode_id = segdim.episode_srcid\n> and segdim.segment_srcid is NULL;\n>\n>\n>\n>\n>\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=1757001.74..73569676.49 rows=3191677219 width=118)\n> Merge Cond: ((segdim.episode_srcid = f14.episode_id) AND \n> (segdim.customer_srcid = f14.customer_id) AND (segdim.show_srcid = \n> f14.show_id) AND (segdim.season_srcid = f14.season_id))\n> -> Sort (cost=1570.35..1579.46 rows=3643 width=40)\n> Sort Key: segdim.episode_srcid, segdim.customer_srcid, \n> segdim.show_srcid, segdim.season_srcid\n> -> Seq Scan on xsegment_dim segdim (cost=0.00..1354.85 rows=3643 \n> width=40)\n> Filter: (segment_srcid IS NULL)\n> -> Sort (cost=1755323.26..1780227.95 rows=9961874 width=126)\n> Sort Key: f14.episode_id, f14.customer_id, f14.show_id, f14.season_id\n> -> Seq Scan on bigtab_stats_fact_tmp14 f14 (cost=0.00..597355.74 \n> rows=9961874 width=126)\n> (9 rows)\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> # \\d bigtab_stats_fact_tmp14\n> Table \"public.bigtab_stats_fact_tmp14\"\n> Column | Type | Modifiers\n> --------------------------+-----------------------------+-----------\n> pf_dts_id | bigint |\n> pf_device_id | bigint |\n> segment_id | bigint |\n> cdn_id | bigint |\n> collector_id | bigint |\n> digital_envoy_id | bigint |\n> maxmind_id | bigint |\n> quova_id | bigint |\n> website_id | bigint |\n> referrer_id | bigint |\n> affiliate_id | bigint |\n> custom_info_id | bigint |\n> start_dt | timestamp without time zone |\n> total_played_time_sec | numeric(18,5) |\n> bytes_received | bigint |\n> bytes_transmitted | bigint |\n> stall_count | integer |\n> stall_duration_sec | numeric(18,5) |\n> hiccup_count | integer |\n> hiccup_duration_sec | numeric(18,5) |\n> watched_duration_sec | numeric(18,5) |\n> rewatched_duration_sec | numeric(18,5) |\n> requested_start_position | numeric(18,5) |\n> requested_stop_position | numeric(18,5) |\n> post_position | numeric(18,5) |\n> is_vod | numeric(1,0) |\n> sessionid | bigint |\n> create_dt | timestamp without time zone |\n> segment_type_id | bigint |\n> customer_id | bigint |\n> content_publisher_id | bigint |\n> content_owner_id | bigint |\n> episode_id | bigint |\n> duration_sec | numeric(18,5) |\n> device_id | bigint |\n> os_id | bigint |\n> browser_id | bigint |\n> cpu_id | bigint |\n> xsystem_cfg_dim_id | bigint |\n> xreferrer_dim_id | bigint |\n> xaffiliate_dim_id | bigint |\n> xsite_dim_id | bigint |\n> xpublisher_dim_id | bigint |\n> season_id | bigint |\n> show_id | bigint |\n> xsegment_dim_id | bigint |\n> location_id | bigint |\n> zipcode | character varying(20) |\n> xlocation_dim_id | bigint |\n> location_srcid | bigint |\n> timezone | real |\n> xtime_dim_id | bigint |\n> Indexes:\n> \"bigtab_stats_fact_tmp14_idx1\" btree (customer_id)\n> \"bigtab_stats_fact_tmp14_idx2\" btree (show_id)\n> \"bigtab_stats_fact_tmp14_idx3\" btree (season_id)\n> \"bigtab_stats_fact_tmp14_idx4\" btree (episode_id)\n>\n>\n>\n>\n>\n>\n> # \\d xsegment_dim\n> Table \"public.xsegment_dim\"\n> Column | Type \n> | Modifiers\n> ----------------------+----------------------------- \n> 
+-------------------------------------------------------------\n> xsegment_dim_id | bigint | not null default \n> nextval('xsegment_dim_seq'::regclass)\n> customer_srcid | bigint | not null\n> show_srcid | bigint | not null\n> show_name | character varying(500) | not null\n> season_srcid | bigint | not null\n> season_name | character varying(500) | not null\n> episode_srcid | bigint | not null\n> episode_name | character varying(500) | not null\n> segment_type_id | integer |\n> segment_type | character varying(500) |\n> segment_srcid | bigint |\n> segment_name | character varying(500) |\n> effective_dt | timestamp without time zone | not null \n> default now()\n> inactive_dt | timestamp without time zone |\n> last_update_dt | timestamp without time zone | not null \n> default now()\n> Indexes:\n> \"xsegment_dim_pk\" PRIMARY KEY, btree (xsegment_dim_id)\n> \"seg1\" btree (customer_srcid)\n> \"seg2\" btree (show_srcid)\n> \"seg3\" btree (season_srcid)\n> \"seg4\" btree (episode_srcid)\n> \"seg5\" btree (segment_srcid)\n> \"xsegment_dim_ix1\" btree (customer_srcid)\n>\n>\n>\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 16 May 2008 00:58:23 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of disk space" }, { "msg_contents": "Also, I'm running version 8.3 on a centOS box with 2 dual core CPU's \nand 32Gig of ram\n\n\nOn May 16, 2008, at 12:58 AM, kevin kempter wrote:\n\n> Sorry I goofed on the query text Here's the correct query:\n>\n> select\n> f14.xpublisher_dim_id,\n> f14.xtime_dim_id,\n> f14.xlocation_dim_id,\n> f14.xreferrer_dim_id,\n> f14.xsite_dim_id,\n> f14.xsystem_cfg_dim_id,\n> f14.xaffiliate_dim_id,\n> f14.customer_id,\n> f14.pf_dts_id,\n> f14.episode_id,\n> f14.sessionid,\n> f14.bytes_received,\n> f14.bytes_transmitted,\n> f14.total_played_time_sec,\n> segdim.xsegment_dim_id as episode_level_segid\n> from\n> bigtab_stats_fact_tmp14 f14,\n> xsegment_dim segdim\n> where\n> f14.customer_id = segdim.customer_srcid\n> and f14.show_id = segdim.show_srcid\n> and f14.season_id = segdim.season_srcid\n> and f14.episode_id = segdim.episode_srcid\n> and segdim.segment_srcid is NULL;\n>\n>\n>\n>\n>\n>\n> On May 16, 2008, at 12:31 AM, kevin kempter wrote:\n>\n>> Hi List;\n>>\n>> I have a table with 9,961,914 rows in it (see the describe of \n>> bigtab_stats_fact_tmp14 below)\n>>\n>> I also have a table with 7,785 rows in it (see the describe of \n>> xsegment_dim below)\n>>\n>> I'm running the join shown below and it takes > 10 hours and \n>> eventually runs out of disk space on a 1.4TB file system\n>>\n>> I've included below a describe of both tables, the join and an \n>> explain plan, any help / suggestions would be much appreciated !\n>>\n>> I need to get this beast to run as quickly as possible (without \n>> filling up my file system)\n>>\n>>\n>> Thanks in advance...\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> select\n>> f14.xpublisher_dim_id,\n>> f14.xtime_dim_id,\n>> f14.xlocation_dim_id,\n>> f14.xreferrer_dim_id,\n>> f14.xsite_dim_id,\n>> f14.xsystem_cfg_dim_id,\n>> f14.xaffiliate_dim_id,\n>> f14.customer_id,\n>> pf_dts_id,\n>> episode_id,\n>> sessionid,\n>> bytes_received,\n>> bytes_transmitted,\n>> total_played_time_sec,\n>> segdim.xsegment_dim_id as episode_level_segid\n>> from\n>> bigtab_stats_fact_tmp14 f14,\n>> xsegment_dim 
segdim\n>> where\n>> f14.customer_id = segdim.customer_srcid\n>> and f14.show_id = segdim.show_srcid\n>> and f14.season_id = segdim.season_srcid\n>> and f14.episode_id = segdim.episode_srcid\n>> and segdim.segment_srcid is NULL;\n>>\n>>\n>>\n>>\n>>\n>>\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Merge Join (cost=1757001.74..73569676.49 rows=3191677219 width=118)\n>> Merge Cond: ((segdim.episode_srcid = f14.episode_id) AND \n>> (segdim.customer_srcid = f14.customer_id) AND (segdim.show_srcid = \n>> f14.show_id) AND (segdim.season_srcid = f14.season_id))\n>> -> Sort (cost=1570.35..1579.46 rows=3643 width=40)\n>> Sort Key: segdim.episode_srcid, segdim.customer_srcid, \n>> segdim.show_srcid, segdim.season_srcid\n>> -> Seq Scan on xsegment_dim segdim (cost=0.00..1354.85 rows=3643 \n>> width=40)\n>> Filter: (segment_srcid IS NULL)\n>> -> Sort (cost=1755323.26..1780227.95 rows=9961874 width=126)\n>> Sort Key: f14.episode_id, f14.customer_id, f14.show_id, f14.season_id\n>> -> Seq Scan on bigtab_stats_fact_tmp14 f14 (cost=0.00..597355.74 \n>> rows=9961874 width=126)\n>> (9 rows)\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> # \\d bigtab_stats_fact_tmp14\n>> Table \"public.bigtab_stats_fact_tmp14\"\n>> Column | Type | Modifiers\n>> --------------------------+-----------------------------+-----------\n>> pf_dts_id | bigint |\n>> pf_device_id | bigint |\n>> segment_id | bigint |\n>> cdn_id | bigint |\n>> collector_id | bigint |\n>> digital_envoy_id | bigint |\n>> maxmind_id | bigint |\n>> quova_id | bigint |\n>> website_id | bigint |\n>> referrer_id | bigint |\n>> affiliate_id | bigint |\n>> custom_info_id | bigint |\n>> start_dt | timestamp without time zone |\n>> total_played_time_sec | numeric(18,5) |\n>> bytes_received | bigint |\n>> bytes_transmitted | bigint |\n>> stall_count | integer |\n>> stall_duration_sec | numeric(18,5) |\n>> hiccup_count | integer |\n>> hiccup_duration_sec | numeric(18,5) |\n>> watched_duration_sec | numeric(18,5) |\n>> rewatched_duration_sec | numeric(18,5) |\n>> requested_start_position | numeric(18,5) |\n>> requested_stop_position | numeric(18,5) |\n>> post_position | numeric(18,5) |\n>> is_vod | numeric(1,0) |\n>> sessionid | bigint |\n>> create_dt | timestamp without time zone |\n>> segment_type_id | bigint |\n>> customer_id | bigint |\n>> content_publisher_id | bigint |\n>> content_owner_id | bigint |\n>> episode_id | bigint |\n>> duration_sec | numeric(18,5) |\n>> device_id | bigint |\n>> os_id | bigint |\n>> browser_id | bigint |\n>> cpu_id | bigint |\n>> xsystem_cfg_dim_id | bigint |\n>> xreferrer_dim_id | bigint |\n>> xaffiliate_dim_id | bigint |\n>> xsite_dim_id | bigint |\n>> xpublisher_dim_id | bigint |\n>> season_id | bigint |\n>> show_id | bigint |\n>> xsegment_dim_id | bigint |\n>> location_id | bigint |\n>> zipcode | character varying(20) |\n>> xlocation_dim_id | bigint |\n>> location_srcid | bigint |\n>> timezone | real |\n>> xtime_dim_id | bigint |\n>> Indexes:\n>> \"bigtab_stats_fact_tmp14_idx1\" btree (customer_id)\n>> \"bigtab_stats_fact_tmp14_idx2\" btree (show_id)\n>> \"bigtab_stats_fact_tmp14_idx3\" btree (season_id)\n>> \"bigtab_stats_fact_tmp14_idx4\" btree (episode_id)\n>>\n>>\n>>\n>>\n>>\n>>\n>> # \\d xsegment_dim\n>> Table \"public.xsegment_dim\"\n>> Column | Type \n>> | Modifiers\n>> ----------------------+----------------------------- \n>> 
+-------------------------------------------------------------\n>> xsegment_dim_id | bigint | not null default \n>> nextval('xsegment_dim_seq'::regclass)\n>> customer_srcid | bigint | not null\n>> show_srcid | bigint | not null\n>> show_name | character varying(500) | not null\n>> season_srcid | bigint | not null\n>> season_name | character varying(500) | not null\n>> episode_srcid | bigint | not null\n>> episode_name | character varying(500) | not null\n>> segment_type_id | integer |\n>> segment_type | character varying(500) |\n>> segment_srcid | bigint |\n>> segment_name | character varying(500) |\n>> effective_dt | timestamp without time zone | not null \n>> default now()\n>> inactive_dt | timestamp without time zone |\n>> last_update_dt | timestamp without time zone | not null \n>> default now()\n>> Indexes:\n>> \"xsegment_dim_pk\" PRIMARY KEY, btree (xsegment_dim_id)\n>> \"seg1\" btree (customer_srcid)\n>> \"seg2\" btree (show_srcid)\n>> \"seg3\" btree (season_srcid)\n>> \"seg4\" btree (episode_srcid)\n>> \"seg5\" btree (segment_srcid)\n>> \"xsegment_dim_ix1\" btree (customer_srcid)\n>>\n>>\n>>\n>>\n>>\n>>\n>> -- \n>> Sent via pgsql-performance mailing list ([email protected] \n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n", "msg_date": "Fri, 16 May 2008 01:08:37 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of disk space" }, { "msg_contents": "> I have a table with 9,961,914 rows in it (see the describe of\n> bigtab_stats_fact_tmp14 below)\n>\n> I also have a table with 7,785 rows in it (see the describe of xsegment_dim\n> below)\n>\n> I'm running the join shown below and it takes > 10 hours and eventually runs\n> out of disk space on a 1.4TB file system\n>\n> I've included below a describe of both tables, the join and an explain plan,\n> any help / suggestions would be much appreciated !\n>\n> I need to get this beast to run as quickly as possible (without filling up\n> my file system)\n>\n>\n> Thanks in advance...\n\nWhat version of postgresql are you using? According to\nhttp://www.postgresql.org/docs/8.2/static/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-MEMORY\nyou may benefit from adjusting work_mem.\n\nYou also index segment_srcid (in table xsegment_dim) but if you search\nfor NULL and you have enough of those it defaults to a seq. 
scan:\n\nSeq Scan on xsegment_dim segdim (cost=0.00..1354.85 rows=3643 width=40)\n> Filter: (segment_srcid IS NULL)\n\nMaby you could insert some default value into segment_srcid (some\narbitrary large numbers) instead of NULL and then search for values\ngreater than??\n\nYou could also try to lower random_page_cost from default to 2.\n\n> select\n> f14.xpublisher_dim_id,\n> f14.xtime_dim_id,\n> f14.xlocation_dim_id,\n> f14.xreferrer_dim_id,\n> f14.xsite_dim_id,\n> f14.xsystem_cfg_dim_id,\n> f14.xaffiliate_dim_id,\n> f14.customer_id,\n> pf_dts_id,\n> episode_id,\n> sessionid,\n> bytes_received,\n> bytes_transmitted,\n> total_played_time_sec,\n> segdim.xsegment_dim_id as episode_level_segid\n> from\n> bigtab_stats_fact_tmp14 f14,\n> xsegment_dim segdim\n> where\n> f14.customer_id = segdim.customer_srcid\n> and f14.show_id = segdim.show_srcid\n> and f14.season_id = segdim.season_srcid\n> and f14.episode_id = segdim.episode_srcid\n> and segdim.segment_srcid is NULL;\n>\n>\n>\n>\n>\n>\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=1757001.74..73569676.49 rows=3191677219 width=118)\n> Merge Cond: ((segdim.episode_srcid = f14.episode_id) AND\n> (segdim.customer_srcid = f14.customer_id) AND (segdim.show_srcid =\n> f14.show_id) AND (segdim.season_srcid = f14.season_id))\n> -> Sort (cost=1570.35..1579.46 rows=3643 width=40)\n> Sort Key: segdim.episode_srcid, segdim.customer_srcid, segdim.show_srcid,\n> segdim.season_srcid\n> -> Seq Scan on xsegment_dim segdim (cost=0.00..1354.85 rows=3643 width=40)\n> Filter: (segment_srcid IS NULL)\n> -> Sort (cost=1755323.26..1780227.95 rows=9961874 width=126)\n> Sort Key: f14.episode_id, f14.customer_id, f14.show_id, f14.season_id\n> -> Seq Scan on bigtab_stats_fact_tmp14 f14 (cost=0.00..597355.74\n> rows=9961874 width=126)\n> (9 rows)\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> # \\d bigtab_stats_fact_tmp14\n> Table \"public.bigtab_stats_fact_tmp14\"\n> Column | Type | Modifiers\n> --------------------------+-----------------------------+-----------\n> pf_dts_id | bigint |\n> pf_device_id | bigint |\n> segment_id | bigint |\n> cdn_id | bigint |\n> collector_id | bigint |\n> digital_envoy_id | bigint |\n> maxmind_id | bigint |\n> quova_id | bigint |\n> website_id | bigint |\n> referrer_id | bigint |\n> affiliate_id | bigint |\n> custom_info_id | bigint |\n> start_dt | timestamp without time zone |\n> total_played_time_sec | numeric(18,5) |\n> bytes_received | bigint |\n> bytes_transmitted | bigint |\n> stall_count | integer |\n> stall_duration_sec | numeric(18,5) |\n> hiccup_count | integer |\n> hiccup_duration_sec | numeric(18,5) |\n> watched_duration_sec | numeric(18,5) |\n> rewatched_duration_sec | numeric(18,5) |\n> requested_start_position | numeric(18,5) |\n> requested_stop_position | numeric(18,5) |\n> post_position | numeric(18,5) |\n> is_vod | numeric(1,0) |\n> sessionid | bigint |\n> create_dt | timestamp without time zone |\n> segment_type_id | bigint |\n> customer_id | bigint |\n> content_publisher_id | bigint |\n> content_owner_id | bigint |\n> episode_id | bigint |\n> duration_sec | numeric(18,5) |\n> device_id | bigint |\n> os_id | bigint |\n> browser_id | bigint |\n> cpu_id | bigint |\n> xsystem_cfg_dim_id | bigint |\n> xreferrer_dim_id | bigint |\n> xaffiliate_dim_id | bigint |\n> xsite_dim_id | bigint |\n> xpublisher_dim_id | bigint |\n> season_id | bigint |\n> show_id | bigint |\n> 
xsegment_dim_id | bigint |\n> location_id | bigint |\n> zipcode | character varying(20) |\n> xlocation_dim_id | bigint |\n> location_srcid | bigint |\n> timezone | real |\n> xtime_dim_id | bigint |\n> Indexes:\n> \"bigtab_stats_fact_tmp14_idx1\" btree (customer_id)\n> \"bigtab_stats_fact_tmp14_idx2\" btree (show_id)\n> \"bigtab_stats_fact_tmp14_idx3\" btree (season_id)\n> \"bigtab_stats_fact_tmp14_idx4\" btree (episode_id)\n>\n>\n>\n>\n>\n>\n> # \\d xsegment_dim\n> Table \"public.xsegment_dim\"\n> Column | Type |\n> Modifiers\n> ----------------------+-----------------------------+-------------------------------------------------------------\n> xsegment_dim_id | bigint | not null default\n> nextval('xsegment_dim_seq'::regclass)\n> customer_srcid | bigint | not null\n> show_srcid | bigint | not null\n> show_name | character varying(500) | not null\n> season_srcid | bigint | not null\n> season_name | character varying(500) | not null\n> episode_srcid | bigint | not null\n> episode_name | character varying(500) | not null\n> segment_type_id | integer |\n> segment_type | character varying(500) |\n> segment_srcid | bigint |\n> segment_name | character varying(500) |\n> effective_dt | timestamp without time zone | not null default now()\n> inactive_dt | timestamp without time zone |\n> last_update_dt | timestamp without time zone | not null default now()\n> Indexes:\n> \"xsegment_dim_pk\" PRIMARY KEY, btree (xsegment_dim_id)\n> \"seg1\" btree (customer_srcid)\n> \"seg2\" btree (show_srcid)\n> \"seg3\" btree (season_srcid)\n> \"seg4\" btree (episode_srcid)\n> \"seg5\" btree (segment_srcid)\n> \"xsegment_dim_ix1\" btree (customer_srcid)\n>\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Fri, 16 May 2008 09:15:14 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of disk space" }, { "msg_contents": "\nOn Fri, 2008-05-16 at 00:31 -0600, kevin kempter wrote:\n\n> I'm running the join shown below and it takes > 10 hours and \n> eventually runs out of disk space on a 1.4TB file system\n\nWell, running in 10 hours doesn't mean there's a software problem, nor\ndoes running out of disk space.\n\nPlease crunch some numbers before you ask, such as how much disk space\nwas used by the query, how big you'd expect it to be etc, plus provide\ninformation such as what the primary key of the large table is and what\nis your release level is etc..\n\nAre you sure you want to retrieve an estimated 3 billion rows? Can you\ncope if that estimate is wrong and the true figure is much higher? 
Do\nyou think the estimate is realistic?\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n\n", "msg_date": "Fri, 16 May 2008 08:38:30 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of\n\tdisk space" }, { "msg_contents": "kevin kempter wrote:\n> Hi List;\n> \n> I have a table with 9,961,914 rows in it (see the describe of \n> bigtab_stats_fact_tmp14 below)\n> \n> I also have a table with 7,785 rows in it (see the describe of \n> xsegment_dim below)\n> \n> I'm running the join shown below and it takes > 10 hours and eventually \n> runs out of disk space on a 1.4TB file system\n\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n> \n> Merge Join (cost=1757001.74..73569676.49 rows=3191677219 width=118)\n\nDumb question Kevin, but are you really expecting 3.2 billion rows in \nthe result-set? Because that's approaching 400GB of result-set without \nany overheads.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 16 May 2008 08:40:16 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of\n disk space" }, { "msg_contents": "I'm expecting 9,961,914 rows returned. Each row in the big table \nshould have a corresponding key in the smaller tale, I want to \nbasically \"expand\" the big table column list by one, via adding the \nappropriate key from the smaller table for each row in the big table. \nIt's not a cartesion product join.\n\n\n\nOn May 16, 2008, at 1:40 AM, Richard Huxton wrote:\n\n> kevin kempter wrote:\n>> Hi List;\n>> I have a table with 9,961,914 rows in it (see the describe of \n>> bigtab_stats_fact_tmp14 below)\n>> I also have a table with 7,785 rows in it (see the describe of \n>> xsegment_dim below)\n>> I'm running the join shown below and it takes > 10 hours and \n>> eventually runs out of disk space on a 1.4TB file system\n>\n>> QUERY PLAN\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Merge \n>> Join (cost=1757001.74..73569676.49 rows=3191677219 width=118)\n>\n> Dumb question Kevin, but are you really expecting 3.2 billion rows \n> in the result-set? Because that's approaching 400GB of result-set \n> without any overheads.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n\n", "msg_date": "Fri, 16 May 2008 02:00:41 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of disk space" }, { "msg_contents": "kevin kempter wrote:\n> I'm expecting 9,961,914 rows returned. Each row in the big table should \n> have a corresponding key in the smaller tale, I want to basically \n> \"expand\" the big table column list by one, via adding the appropriate \n> key from the smaller table for each row in the big table. It's not a \n> cartesion product join.\n\nDidn't seem likely, to be honest.\n\nWhat happens if you try the query as a cursor, perhaps with an order-by \non customer_id or something to encourage index use? Do you ever get a \nfirst row back?\n\nIn fact, what happens if you slap an index over all your join columns on \nxsegment_dim? 
With 7,000 rows that should make it a cheap test.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 16 May 2008 09:16:19 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of\n disk space" }, { "msg_contents": "kevin kempter wrote:\n> Hi List;\n> \n> I have a table with 9,961,914 rows in it (see the describe of \n> bigtab_stats_fact_tmp14 below)\n> \n> I also have a table with 7,785 rows in it (see the describe of \n> xsegment_dim below)\n\nSomething else is puzzling me with this - you're joining over four fields.\n\n> from\n> bigtab_stats_fact_tmp14 f14,\n> xsegment_dim segdim\n> where\n> f14.customer_id = segdim.customer_srcid\n> and f14.show_id = segdim.show_srcid\n> and f14.season_id = segdim.season_srcid\n> and f14.episode_id = segdim.episode_srcid\n> and segdim.segment_srcid is NULL;\n\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n> \n> Merge Join (cost=1757001.74..73569676.49 rows=3191677219 width=118)\n\n> -> Sort (cost=1570.35..1579.46 rows=3643 width=40)\n\n> -> Sort (cost=1755323.26..1780227.95 rows=9961874 width=126)\n\nHere it's still expecting 320 matches against each row from the large \ntable. That's ~ 10% of the small table (or that fraction of it that PG \nexpects) which seems very high for four clauses ANDed together.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 16 May 2008 09:18:12 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of\n disk space" }, { "msg_contents": "On further investigation it turns out that I/we have a serious data \nissue in that my small table is full of 'UNKNOWN' tags so my query \ncannot associate the data correctly - thus I will end up with 2+ \nbillion rows.\n\n\nThanks everyone for your help\n\n\n\n\nOn May 16, 2008, at 1:38 AM, Simon Riggs wrote:\n\n>\n> On Fri, 2008-05-16 at 00:31 -0600, kevin kempter wrote:\n>\n>> I'm running the join shown below and it takes > 10 hours and\n>> eventually runs out of disk space on a 1.4TB file system\n>\n> Well, running in 10 hours doesn't mean there's a software problem, nor\n> does running out of disk space.\n>\n> Please crunch some numbers before you ask, such as how much disk space\n> was used by the query, how big you'd expect it to be etc, plus provide\n> information such as what the primary key of the large table is and \n> what\n> is your release level is etc..\n>\n> Are you sure you want to retrieve an estimated 3 billion rows? Can you\n> cope if that estimate is wrong and the true figure is much higher? Do\n> you think the estimate is realistic?\n>\n> -- \n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 16 May 2008 02:38:04 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of disk space" } ]
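A quick sanity check that would have caught the problem kevin describes at the end of this thread (a dimension table full of duplicate 'UNKNOWN' rows) is to verify that the join columns really are unique on the small side before running the big join. A minimal sketch, using only the table and column names already shown above; any combination returned here multiplies the matching fact rows in the join result:

select customer_srcid, show_srcid, season_srcid, episode_srcid, count(*)
from xsegment_dim
where segment_srcid is null
group by customer_srcid, show_srcid, season_srcid, episode_srcid
having count(*) > 1
order by count(*) desc;

If this returns rows, the estimated 3.2 billion-row result is not a planner mistake but a data problem, which is exactly what turned out to be the case here.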
[ { "msg_contents": "Try 'set enable-mergejoin=false' and see if you get a hashjoin.\r\n\r\n- Luke\r\n\r\n----- Original Message -----\r\nFrom: [email protected] <[email protected]>\r\nTo: Richard Huxton <[email protected]>\r\nCc: [email protected] <[email protected]>\r\nSent: Fri May 16 04:00:41 2008\r\nSubject: Re: [PERFORM] Join runs for > 10 hours and then fills up >1.3TB of disk space\r\n\r\nI'm expecting 9,961,914 rows returned. Each row in the big table \r\nshould have a corresponding key in the smaller tale, I want to \r\nbasically \"expand\" the big table column list by one, via adding the \r\nappropriate key from the smaller table for each row in the big table. \r\nIt's not a cartesion product join.\r\n\r\n\r\n\r\nOn May 16, 2008, at 1:40 AM, Richard Huxton wrote:\r\n\r\n> kevin kempter wrote:\r\n>> Hi List;\r\n>> I have a table with 9,961,914 rows in it (see the describe of \r\n>> bigtab_stats_fact_tmp14 below)\r\n>> I also have a table with 7,785 rows in it (see the describe of \r\n>> xsegment_dim below)\r\n>> I'm running the join shown below and it takes > 10 hours and \r\n>> eventually runs out of disk space on a 1.4TB file system\r\n>\r\n>> QUERY PLAN\r\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Merge \r\n>> Join (cost=1757001.74..73569676.49 rows=3191677219 width=118)\r\n>\r\n> Dumb question Kevin, but are you really expecting 3.2 billion rows \r\n> in the result-set? Because that's approaching 400GB of result-set \r\n> without any overheads.\r\n>\r\n> -- \r\n> Richard Huxton\r\n> Archonet Ltd\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n\n\n\n\nRe: [PERFORM] Join runs for > 10 hours and then fills up >1.3TB of disk space\n\n\n\nTry 'set enable-mergejoin=false' and see if you get a hashjoin.\n\r\n- Luke\n\r\n----- Original Message -----\r\nFrom: [email protected] <[email protected]>\r\nTo: Richard Huxton <[email protected]>\r\nCc: [email protected] <[email protected]>\r\nSent: Fri May 16 04:00:41 2008\r\nSubject: Re: [PERFORM] Join runs for > 10 hours and then fills up >1.3TB of disk space\n\r\nI'm expecting 9,961,914 rows returned. Each row in the big table \r\nshould have a corresponding key in the smaller tale, I want to \r\nbasically \"expand\" the big table column list by one, via adding the \r\nappropriate key from the smaller table for each row in the big table. \r\nIt's not a cartesion product join.\n\n\n\r\nOn May 16, 2008, at 1:40 AM, Richard Huxton wrote:\n\r\n> kevin kempter wrote:\r\n>> Hi List;\r\n>> I have a table with 9,961,914 rows in it (see the describe of \r\n>> bigtab_stats_fact_tmp14 below)\r\n>> I also have a table with 7,785 rows in it (see the describe of \r\n>> xsegment_dim below)\r\n>> I'm running the join shown below and it takes > 10 hours and \r\n>> eventually runs out of disk space on a 1.4TB file system\r\n>\r\n>> QUERY PLAN\r\n>> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Merge\r\n>>  Join  (cost=1757001.74..73569676.49 rows=3191677219 width=118)\r\n>\r\n> Dumb question Kevin, but are you really expecting 3.2 billion rows \r\n> in the result-set? 
Because that's approaching 400GB of result-set \r\n> without any overheads.\r\n>\r\n> --\r\n>  Richard Huxton\r\n>  Archonet Ltd\n\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 16 May 2008 04:36:11 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Join runs for > 10 hours and then fills up >1.3TB of disk space" } ]
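Note that the parameter in the suggestion above is spelled enable_mergejoin (an underscore, not a hyphen); as written, set enable-mergejoin=false is a syntax error. A hedged sketch of how such planner toggles are normally used, for diagnosis only and scoped to the current session, with the select list abbreviated from the query earlier in the thread:

set enable_mergejoin = off;
explain
select f14.customer_id, segdim.xsegment_dim_id   -- remaining output columns omitted here
from bigtab_stats_fact_tmp14 f14, xsegment_dim segdim
where f14.customer_id = segdim.customer_srcid
  and f14.show_id = segdim.show_srcid
  and f14.season_id = segdim.season_srcid
  and f14.episode_id = segdim.episode_srcid
  and segdim.segment_srcid is NULL;
reset enable_mergejoin;

This only changes which join method the planner prefers; given the duplicate dimension rows discussed in the previous thread, a hash join would still produce the same oversized result set.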
[ { "msg_contents": "I've inherited an Oracle database that I'm porting to Postgres, and this \nhas been going quite well until now. Unfortunately, I've found one view (a \nlargish left join) that runs several orders of magnitude slower on \nPostgres than it did on Oracle.\n\n=> select version();\n version\n----------------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.4 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.1 20070105 (Red Hat 4.1.1-52)\n(1 row)\n\n\nAfter analyzing the database, the explain analyze output for the query is:\n\n Nested Loop Left Join (cost=133.51..15846.99 rows=1 width=312) (actual time=109.131..550711.374 rows=1248 loops=1)\n Join Filter: (log.logkey = ln.logkey)\n -> Nested Loop (cost=133.51..267.44 rows=1 width=306) (actual time=15.316..74.074 rows=1248 loops=1)\n -> Merge Join (cost=133.51..267.16 rows=1 width=325) (actual time=15.300..60.332 rows=1248 loops=1)\n Merge Cond: (log.eventkey = e.eventkey)\n Join Filter: ((e.clientkey = log.clientkey) AND (e.premiseskey = log.premiseskey))\n -> Index Scan using log_eventkey_idx on log (cost=0.00..3732.14 rows=36547 width=167) (actual time=0.015..25.385 rows=36547 loops=1)\n Filter: (logicaldel = 'N'::bpchar)\n -> Sort (cost=133.51..135.00 rows=595 width=328) (actual time=15.185..16.379 rows=1248 loops=1)\n Sort Key: e.eventkey\n -> Hash Join (cost=1.30..106.09 rows=595 width=328) (actual time=0.073..2.033 rows=1248 loops=1)\n Hash Cond: ((e.clientkey = p.clientkey) AND (e.premiseskey = p.premiseskey))\n -> Seq Scan on event e (cost=0.00..89.48 rows=1248 width=246) (actual time=0.005..0.481 rows=1248 loops=1)\n -> Hash (cost=1.14..1.14 rows=11 width=82) (actual time=0.059..0.059 rows=11 loops=1)\n -> Seq Scan on premises p (cost=0.00..1.14 rows=11 width=82) (actual time=0.004..0.020 rows=11 loops=1)\n Filter: (logicaldel = 'N'::bpchar)\n -> Index Scan using severity_pk on severity s (cost=0.00..0.27 rows=1 width=49) (actual time=0.007..0.009 rows=1 loops=1248)\n Index Cond: (e.severitykey = s.severitykey)\n -> Seq Scan on lognote ln1 (cost=0.00..15552.67 rows=1195 width=175) (actual time=1.173..440.695 rows=1244 loops=1248)\n Filter: ((logicaldel = 'N'::bpchar) AND (subplan))\n SubPlan\n -> Limit (cost=4.30..8.58 rows=1 width=34) (actual time=0.171..0.171 rows=1 loops=2982720)\n InitPlan\n -> GroupAggregate (cost=0.00..4.30 rows=1 width=110) (actual time=0.089..0.089 rows=1 loops=2982720)\n -> Index Scan using lognote_pk on lognote (cost=0.00..4.28 rows=1 width=110) (actual time=0.086..0.087 rows=1 loops=2982720)\n Index Cond: ((clientkey = $0) AND (premiseskey = $1) AND (logkey = $2))\n Filter: ((logicaldel = 'N'::bpchar) AND ((lognotetext ~~ '_%;%'::text) OR (lognotetext ~~ '_%has modified Respond Status to%'::text)))\n -> Index Scan using lognote_pk on lognote (cost=0.00..4.28 rows=1 width=34) (actual time=0.170..0.170 rows=1 loops=2982720)\n Index Cond: ((clientkey = $0) AND (premiseskey = $1) AND (logkey = $2))\n Filter: ((logicaldel = 'N'::bpchar) AND (lognotetime = $3))\n Total runtime: 550712.393 ms\n(31 rows)\n\n\nEither side of the left join runs quite fast independently. (The full \nquery also runs well when made into an inner join, but that's not the \nlogic I want.) 
The biggest difference between running each side \nindpendently and together in a left join is that this line in the plan for \nthe right side of the left join:\n\n-> Index Scan using lognote_pk on lognote (cost=0.00..4.28 rows=1 width=110) (actual time=0.086..0.087 rows=1 loops=2982720)\n\n...becomes this line when run independantly:\n\n-> Index Scan using lognote_pk on lognote (cost=0.00..4.28 rows=1 width=110) (actual time=0.086..0.087 rows=1 loops=2390)\n\nThat's quite a few more loops in the left join. Am I right to think that \nit's looping so much because the analyzer is so far off when guessing the \nrows for the left side of the join (1 vs. 1248)? Or is there something \nelse going on? I've tried bumping up analyze stats on a few columns, but \nI'm not too sure how to spot which columns it might help with and, sure \nenough, it didn't help.\n\n\nThe actual query:\n\nselect *\nfrom\n (\n select *\n from\n event e,\n severity s,\n premises p,\n log\n where\n p.clientkey = e.clientkey and\n p.premiseskey = e.premiseskey and\n p.logicaldel = 'N' and\n log.logicaldel = 'N' and\n e.clientkey = log.clientkey and\n e.premiseskey = log.premiseskey and\n e.eventkey = log.eventkey and\n e.severitykey = s.severitykey\n ) lj\n left join\n (\n select\n clientkey, premiseskey, logkey, lognotetime, logicaldel,\n case\n when\n (case when instr(lognotetext,';') = 0 then instr(lognotetext,' has modified')\n else instr(lognotetext,';') end) = 0 then NULL\n else\n substr(lognotetext,1,\n (\n case when instr(lognotetext,';') = 0 then\n instr(lognotetext,' has modified') else\n instr(lognotetext,';') end\n ) - 1)\n end as responderid\n from lognote ln1\n where\n logicaldel = 'N' and\n lognotekey in\n (\n select lognotekey\n from lognote\n where\n logicaldel = 'N' and\n clientkey = ln1.clientkey and\n premiseskey = ln1.premiseskey and\n logkey = ln1.logkey and\n lognotetime =\n (\n select min(lognotetime)\n from lognote\n where\n logicaldel = 'N' and\n (\n lognotetext like '_%;%' or\n lognotetext like '_%has modified Respond Status to%'\n ) and\n clientkey = ln1.clientkey and\n premiseskey = ln1.premiseskey and\n logkey = ln1.logkey\n group by clientkey, premiseskey, logkey\n )\n order by lognotekey limit 1\n )\n ) ln on\n (\n lj.logkey = ln.logkey\n )\n\n\nThe instr() function calls are calling this version of instr:\nhttp://www.postgresql.org/docs/8.2/interactive/plpgsql-porting.html#PLPGSQL-PORTING-APPENDIX\n\n\n\nThe relevent schema:\n\n Table \"public.event\"\n Column | Type | Modifiers\n----------------+-----------------------------+------------------------\n clientkey | character(30) | not null\n premiseskey | character(30) | not null\n eventkey | character(30) | not null\n severitykey | character(30) |\nIndexes:\n \"event_pk\" PRIMARY KEY, btree (clientkey, premiseskey, eventkey), tablespace \"data\"\nForeign-key constraints:\n \"premisesevent\" FOREIGN KEY (clientkey, premiseskey) REFERENCES premises(clientkey, premiseskey) DEFERRABLE INITIALLY DEFERRED\n\n Table \"public.severity\"\n Column | Type | Modifiers\n--------------------+---------------+-----------\n severitykey | character(30) | not null\n severityname | text |\nIndexes:\n \"severity_pk\" PRIMARY KEY, btree (severitykey), tablespace \"data\"\n\n Table \"public.premises\"\n Column | Type | Modifiers\n-----------------+-----------------------------+---------------------\n clientkey | character(30) | not null\n premiseskey | character(30) | not null\n logicaldel | character(1) | default 'N'::bpchar\nIndexes:\n \"premises_pk\" PRIMARY 
KEY, btree (clientkey, premiseskey), tablespace \"data\"\nForeign-key constraints:\n \"clientpremises\" FOREIGN KEY (clientkey) REFERENCES client(clientkey) DEFERRABLE INITIALLY DEFERRED\n\n Table \"public.log\"\n Column | Type | Modifiers\n----------------+-----------------------------+---------------------\n clientkey | character(30) | not null\n premiseskey | character(30) | not null\n logkey | character(30) | not null\n logicaldel | character(1) | default 'N'::bpchar\n eventkey | character(30) |\nIndexes:\n \"log_pk\" PRIMARY KEY, btree (clientkey, premiseskey, logkey), tablespace \"data\"\n \"log_ak1\" btree (clientkey, premiseskey, logtime, logkey), tablespace \"data\"\n \"log_eventkey_idx\" btree (eventkey), tablespace \"data\"\nForeign-key constraints:\n \"premiseslog\" FOREIGN KEY (clientkey, premiseskey) REFERENCES premises(clientkey, premiseskey) DEFERRABLE INITIALLY DEFERRED\n\n Table \"public.lognote\"\n Column | Type | Modifiers\n-------------+-----------------------------+---------------------\n clientkey | character(30) | not null\n premiseskey | character(30) | not null\n logkey | character(30) | not null\n lognotekey | character(30) | not null\n logicaldel | character(1) | default 'N'::bpchar\n lognotetext | text |\n lognotetime | timestamp without time zone |\nIndexes:\n \"lognote_pk\" PRIMARY KEY, btree (clientkey, premiseskey, logkey, lognotekey), tablespace \"data\"\n \"lognotekey_idx\" UNIQUE, btree (lognotekey), tablespace \"data\"\nForeign-key constraints:\n \"log_lognote_fk1\" FOREIGN KEY (clientkey, premiseskey, logkey) REFERENCES log(clientkey, premiseskey, logkey) DEFERRABLE INITIALLY DEFERRED\n\n\n\nAny help would be appreciated!\n", "msg_date": "Fri, 16 May 2008 10:56:03 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "very slow left join" }, { "msg_contents": "On Fri, May 16, 2008 at 11:56 AM, Ben <[email protected]> wrote:\n> I've inherited an Oracle database that I'm porting to Postgres, and this has\n> been going quite well until now. Unfortunately, I've found one view (a\n> largish left join) that runs several orders of magnitude slower on Postgres\n> than it did on Oracle.\n>\n> => select version();\n> version\n> ----------------------------------------------------------------------------------------------------------\n> PostgreSQL 8.2.4 on x86_64-redhat-linux-gnu, compiled by GCC gcc (GCC)\n> 4.1.1 20070105 (Red Hat 4.1.1-52)\n> (1 row)\n\n1: Update to 8.2.7. It's pretty painless, and who knows what\nperformance bugs you might be fighting that you don't really need to.\n\n> After analyzing the database, the explain analyze output for the query is:\n>\n> Nested Loop Left Join (cost=133.51..15846.99 rows=1 width=312) (actual\n> time=109.131..550711.374 rows=1248 loops=1)\n> Join Filter: (log.logkey = ln.logkey)\n> -> Nested Loop (cost=133.51..267.44 rows=1 width=306) (actual\n> time=15.316..74.074 rows=1248 loops=1)\nSNIP\n> Total runtime: 550712.393 ms\n\nJust for giggles, try running the query like so:\n\nset enable_nestloop = off;\nexplain analyze ...\n\nand see what happens. I'm guessing that the nested loops are bad choices here.\n\n> (case when instr(lognotetext,';') = 0 then instr(lognotetext,' has\n> modified')\n> else instr(lognotetext,';') end) = 0 then NULL\n\nTry creating indexes on the functions above, and make sure you're\nrunning the db in the C local if you can. Note you may need to dump /\ninitdb --locale=C / reload your data if you're not in the C locale\nalready. 
text_pattern_ops may be applicable here, but I'm not sure\nhow to use it in the above functions.\n\n> Table \"public.event\"\n> Column | Type | Modifiers\n> ----------------+-----------------------------+------------------------\n> clientkey | character(30) | not null\n> premiseskey | character(30) | not null\n> eventkey | character(30) | not null\n> severitykey | character(30) |\n\nDo these really need to be character and not varchar? varchar / text\nare better optimized in pgsql, and character often need to be cast\nanyway, so you might as well start with varchar. Unless you REALLY\nneed padding in your db, avoid char(x).\n\nDon't see anything else, but who knows what someone else might see.\n", "msg_date": "Fri, 16 May 2008 12:09:46 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow left join" }, { "msg_contents": "On Fri, 16 May 2008, Scott Marlowe wrote:\n\n> Just for giggles, try running the query like so:\n>\n> set enable_nestloop = off;\n> explain analyze ...\n>\n> and see what happens. I'm guessing that the nested loops are bad choices here.\n\nYou guess correctly, sir! Doing so shaves 3 orders of magnitude off the \nruntime. That's nice. :) But that brings up the question of why postgres \nthinks nested loops are the way to go? It would be handy if I could make \nit guess correctly to begin with and didn't have to turn nested loops off \neach time I run this.\n\n\n>> Table \"public.event\"\n>> Column | Type | Modifiers\n>> ----------------+-----------------------------+------------------------\n>> clientkey | character(30) | not null\n>> premiseskey | character(30) | not null\n>> eventkey | character(30) | not null\n>> severitykey | character(30) |\n>\n> Do these really need to be character and not varchar? varchar / text\n> are better optimized in pgsql, and character often need to be cast\n> anyway, so you might as well start with varchar. Unless you REALLY\n> need padding in your db, avoid char(x).\n\nUnfortuantely, the people who created this database made all keys 30 \ncharacter strings, and we're not near a place in our release cycle where \nwe can fix that.\n", "msg_date": "Fri, 16 May 2008 11:21:04 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very slow left join" }, { "msg_contents": "On Fri, May 16, 2008 at 12:21 PM, Ben <[email protected]> wrote:\n> On Fri, 16 May 2008, Scott Marlowe wrote:\n>\n>> Just for giggles, try running the query like so:\n>>\n>> set enable_nestloop = off;\n>> explain analyze ...\n>>\n>> and see what happens. I'm guessing that the nested loops are bad choices\n>> here.\n>\n> You guess correctly, sir! Doing so shaves 3 orders of magnitude off the\n> runtime. That's nice. :) But that brings up the question of why postgres\n> thinks nested loops are the way to go? It would be handy if I could make it\n> guess correctly to begin with and didn't have to turn nested loops off each\n> time I run this.\n\nWell, I'm guessing that you aren't in locale=C and that the text\nfunctions in your query aren't indexed. 
Try creating an index on them\nsomething like:\n\ncreate index abc_txtfield_func on mytable (substring(textfield,1,5));\n\netc and see if that helps.\n\nAs for the char type, I totally understand the issue, having inherited\noracle dbs before...\n", "msg_date": "Fri, 16 May 2008 12:27:09 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow left join" }, { "msg_contents": "On Fri, 16 May 2008, Scott Marlowe wrote:\n\n> Well, I'm guessing that you aren't in locale=C and that the text\n\nCorrect, I am not. And my understanding is that by moving to the C locale, \nI would loose utf8 validation, so I don't want to go there. Though, it's \nnews to me that I would get any kind of select performance boost with \nlocale=C. Why would it help?\n\n> functions in your query aren't indexed. Try creating an index on them\n> something like:\n>\n> create index abc_txtfield_func on mytable (substring(textfield,1,5));\n>\n> etc and see if that helps.\n\nIt does not. :(\n", "msg_date": "Fri, 16 May 2008 11:43:12 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "Re: very slow left join" }, { "msg_contents": "Ben wrote:\n> On Fri, 16 May 2008, Scott Marlowe wrote:\n> \n>> Well, I'm guessing that you aren't in locale=C and that the text\n> \n> Correct, I am not. And my understanding is that by moving to the C \n> locale, I would loose utf8 validation, so I don't want to go there. \n> Though, it's news to me that I would get any kind of select performance \n> boost with locale=C. Why would it help?\n\nAs far as I know the difference is that in the \"C\" locale PostgreSQL can \nuse simple byte-ordinal-oriented rules for sorting, character access, \netc. It can ignore the possibility of a character being more than one \nbyte in size. It can also avoid having to consider pairs of characters \nwhere the ordinality of the numeric byte value of the characters is not \nthe same as the ordinality of the characters in the locale (ie they \ndon't sort in byte-value order).\n\nIf I've understood it correctly ( I don't use \"C\" locale databases \nmyself and I have not tested any of this ) that means that two UTF-8 \nencoded strings stored in a \"C\" locale database might not compare how \nyou expect. They might sort in a different order to what you expect, \nespecially if one is a 2-byte or more char and the other is only 1 byte. \nThey might compare non-equal even though they contain the same sequence \nof Unicode characters because one is in a decomposed form and one is in \na precomposed form. The database neither knows the encoding of the \nstrings nor cares about it; it's just treating them as byte sequences \nwithout any interest in their meaning.\n\nIf you only ever work with 7-bit ASCII, that might be OK. Ditto if you \nnever rely on the database for text sorting and comparison.\n\nSomeone please yell at me if I've mistaken something here.\n\n--\nCraig Ringer\n", "msg_date": "Sat, 17 May 2008 03:25:17 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: very slow left join" } ]
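The nested-loop choice in this thread is driven by the planner expecting 1 row where 1248 actually come back, most likely because the selectivities of the correlated clientkey/premiseskey/eventkey join conditions are multiplied as if they were independent, which per-column statistics cannot fully correct. Ben mentions having already bumped statistics on a few columns without success; for reference, the mechanics look roughly like the sketch below (the specific columns and the target value are only an illustration, not a recommendation from the thread), alongside the session-level workaround that did help:

set enable_nestloop = off;   -- the session-level workaround that worked above, useful for testing

alter table log alter column eventkey set statistics 1000;   -- 1000 is the maximum in 8.2
alter table lognote alter column logkey set statistics 1000;
analyze log;
analyze lognote;

If the row estimate for the outer side of the left join gets anywhere near the real 1248, the planner is far more likely to pick a hash or merge plan on its own.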
[ { "msg_contents": "Hi all;\n\nI have a query that does this:\n\nupdate tab_x set (inactive_dt, last_update_dt) =\n((select run_dt from current_run_date), (select run_dt from \ncurrent_run_date))\nwhere\ncust_id::text || loc_id::text in\n(select cust_id::text || loc_id::text from summary_tab);\n\n\nThe current_run_date table has only 1 row in it\nthe summary_tab table has 0 rows and the tab_x had 450,000 rows\n\nThe update takes 45min even though there is no rows to update.\nI have a compound index (cust_id, loc_id) on both tables (summary_tab \nand tab_x)\n\nHow can I speed this up ?\n\n\nThanks in advance for any thoughts, suggestions, etc...\n\n\n/Kevin\n", "msg_date": "Mon, 19 May 2008 23:56:27 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "slow update" }, { "msg_contents": "am Mon, dem 19.05.2008, um 23:56:27 -0600 mailte kevin kempter folgendes:\n> Hi all;\n> \n> I have a query that does this:\n> \n> update tab_x set (inactive_dt, last_update_dt) =\n> ((select run_dt from current_run_date), (select run_dt from \n> current_run_date))\n> where\n> cust_id::text || loc_id::text in\n> (select cust_id::text || loc_id::text from summary_tab);\n> \n> \n> The current_run_date table has only 1 row in it\n> the summary_tab table has 0 rows and the tab_x had 450,000 rows\n> \n> The update takes 45min even though there is no rows to update.\n> I have a compound index (cust_id, loc_id) on both tables (summary_tab \n> and tab_x)\n> \n> How can I speed this up ?\n\nPlease show us more details, for instance the data-types for cust_id and\nloc_id. Wild guess: these columns are INT-Values and you have an Index.\nOkay, but in the quere there is a CAST to TEXT -> Index not used.\n\nVerfify this with EXPLAIN.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Tue, 20 May 2008 08:15:48 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update" }, { "msg_contents": "On Mon, May 19, 2008 at 11:56 PM, kevin kempter\n<[email protected]> wrote:\n> Hi all;\n>\n> I have a query that does this:\n>\n> update tab_x set (inactive_dt, last_update_dt) =\n> ((select run_dt from current_run_date), (select run_dt from\n> current_run_date))\n> where\n> cust_id::text || loc_id::text in\n> (select cust_id::text || loc_id::text from summary_tab);\n\nI think what you're looking for in the where clause is something like:\n\nwhere (cust_id, loc_id) in (select cust_id, loc_id from summary_tab);\n\nwhich should let it compare the native types all at once. Not sure if\nthis works on versions before 8.2 or not.\n\nIf you MUST use that syntax, then create indexes on them, i.e.:\n\ncreate index tab_x_multidx on tab_x ((cust_id::text||loc_id::text));\ncreate index summary_tab_x_multidx on summary_tab\n((cust_id::text||loc_id::text));\n", "msg_date": "Tue, 20 May 2008 11:47:13 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow update" } ]
[ { "msg_contents": "Hello List,\n\nAs an editor for the german Linux Magazine I am looking for Authors who would \nwant to write an article of about 5-7 pages (=15-20000 characters) on Postgres \nTroubleshooting and Performance issues. The Article is planned for our \nforthcoming \"Linux Technical Review 09 - Datenbanken\", which aims at \nIT-Managers, Technicians and Administrators at a very high skill level. The \ndeadline is in four weeks.\n\nI would feel very happy if one or two of you would like to write for us, and \nif you have good ideas about relevant topics, please let me know.\n\nPlease let me know, I'd be very happy if you could contribute... \n(Don't worry about language!)\n:-)\n\n-- \n\nBest Regards - Mit freundlichen Gruessen\nMarkus Feilner\n\n-------------------------\nFeilner IT Linux & GIS\nLinux Solutions, Training, Seminare und Workshops - auch Inhouse\nKoetztingerstr 6c 93057 Regensburg\nTelefon: +49 941 8 10 79 89\nMobil: +49 170 3 02 70 92\nWWW: www.feilner-it.net mail: [email protected]\n--------------------------------------\nMy OpenVPN book - http://www.packtpub.com/openvpn/book\nOPENVPN : Building and Integrating Virtual Private Networks\nMy new book - Out now: http://www.packtpub.com/scalix/book\nSCALIX Linux Administrator's Guide\n", "msg_date": "Tue, 20 May 2008 13:21:06 +0200", "msg_from": "Markus Feilner <[email protected]>", "msg_from_op": true, "msg_subject": "Author Wanted" }, { "msg_contents": "In response to Markus Feilner <[email protected]>:\n> \n> As an editor for the german Linux Magazine I am looking for Authors who would \n> want to write an article of about 5-7 pages (=15-20000 characters) on Postgres \n> Troubleshooting and Performance issues. The Article is planned for our \n> forthcoming \"Linux Technical Review 09 - Datenbanken\", which aims at \n> IT-Managers, Technicians and Administrators at a very high skill level. The \n> deadline is in four weeks.\n> \n> I would feel very happy if one or two of you would like to write for us, and \n> if you have good ideas about relevant topics, please let me know.\n> \n> Please let me know, I'd be very happy if you could contribute... \n> (Don't worry about language!)\n> :-)\n\nCan you provide detailed information on submission guidelines as well as\npay rates?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 20 May 2008 08:32:25 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Author Wanted" } ]
[ { "msg_contents": "Hi all;\n\nI have 2 tables where I basically want to delete from the first table \n(seg_id_tmp7) any rows where the entire row already exists in the \nsecond table (sl_cd_segment_dim)\n\nI have a query that looks like this (and it's slow):\n\n\ndelete from seg_id_tmp7\nwhere\n\tcustomer_srcid::text ||\n\tshow_srcid::text ||\n\tshow_name::text ||\n\tseason_srcid::text ||\n\tseason_name::text ||\n\tepisode_srcid::text ||\n\tepisode_name::text ||\n\tsegment_type_id::text ||\n\tsegment_type::text ||\n\tsegment_srcid::text ||\n\tsegment_name::text\nin\n\t( select\n\t\tcustomer_srcid::text ||\n\t\tshow_srcid::text ||\n\t\tshow_name::text ||\n\t\tseason_srcid::text ||\n\t\tseason_name::text ||\n\t\tepisode_srcid::text ||\n\t\tepisode_name::text ||\n\t\tsegment_type_id::text ||\n\t\tsegment_type::text ||\n\t\tsegment_srcid::text ||\n\t\tsegment_name::text\n\t\tfrom sl_cd_location_dim )\n;\n\n\n\n\n\nHere's the query plan for it:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Seq Scan on seg_id_tmp7 (cost=0.00..138870701.56 rows=2136 width=6)\n Filter: (subplan)\n SubPlan\n -> Seq Scan on sl_cd_location_dim (cost=0.00..63931.60 \nrows=433040 width=8)\n(4 rows)\n\n\n\n\n\n\n\n\nI also tried this:\n\ndelete from seg_id_tmp7\nwhere\n\t( customer_srcid ,\n\tshow_srcid ,\n\tshow_name ,\n\tseason_srcid ,\n\tseason_name ,\n\tepisode_srcid ,\n\tepisode_name ,\n\tsegment_type_id ,\n\tsegment_type ,\n\tsegment_srcid ,\n\tsegment_name )\nin\n\t( select\n\t\tcustomer_srcid ,\n\t\tshow_srcid ,\n\t\tshow_name ,\n\t\tseason_srcid ,\n\t\tseason_name ,\n\t\tepisode_srcid ,\n\t\tepisode_name ,\n\t\tsegment_type_id ,\n\t\tsegment_type ,\n\t\tsegment_srcid ,\n\t\tsegment_name\n\t\tfrom sl_cd_location_dim )\n;\n\n\nand I get this query plan:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Seq Scan on seg_id_tmp7 (cost=0.00..87997034.20 rows=2136 width=6)\n Filter: (subplan)\n SubPlan\n -> Seq Scan on sl_cd_location_dim (cost=0.00..40114.40 \nrows=433040 width=8)\n(4 rows)\n\n\n\nIf it helps here's the describe's (including indexes) for both tables:\n\n# \\d seg_id_tmp7\n Table \"public.seg_id_tmp7\"\n Column | Type | Modifiers\n-----------------+-----------------------------+-----------\n customer_srcid | bigint |\n show_srcid | bigint |\n show_name | character varying |\n season_srcid | bigint |\n season_name | character varying |\n episode_srcid | bigint |\n episode_name | character varying |\n segment_type_id | bigint |\n segment_type | character varying |\n segment_srcid | bigint |\n segment_name | character varying |\n create_dt | timestamp without time zone |\n\n\n\n\n# \\d sl_cd_segment_dim\n Table \n\"public.sl_cd_segment_dim\"\n Column | Type \n| Modifiers\n----------------------+----------------------------- \n+-------------------------------------------------------------\n sl_cd_segment_dim_id | bigint | not null \ndefault nextval('sl_cd_segment_dim_seq'::regclass)\n customer_srcid | bigint | not null\n show_srcid | bigint | not null\n show_name | character varying(500) | not null\n season_srcid | bigint | not null\n season_name | character varying(500) | not null\n episode_srcid | bigint | not null\n episode_name | character varying(500) | not null\n segment_type_id | integer |\n segment_type | character varying(500) |\n segment_srcid | bigint |\n segment_name | character varying(500) |\n effective_dt | timestamp without time zone | not null \ndefault now()\n inactive_dt | 
timestamp without time zone |\n last_update_dt | timestamp without time zone | not null \ndefault now()\nIndexes:\n \"sl_cd_segment_dim_pk\" PRIMARY KEY, btree (sl_cd_segment_dim_id)\n \"seg1\" btree (customer_srcid)\n \"seg2\" btree (show_srcid)\n \"seg3\" btree (season_srcid)\n \"seg4\" btree (episode_srcid)\n \"seg5\" btree (segment_srcid)\n \"sl_cd_segment_dim_ix1\" btree (customer_srcid)\n\n\n\n\n\n\nAny thoughts, suggestions, etc on how to improve performance for this \ndelete ?\n\n\nThanks in advance..\n\n/Kevin\n\n\n", "msg_date": "Tue, 20 May 2008 13:51:45 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "improving performance for a delete" }, { "msg_contents": "Version 8.3.1\n\n\nOn May 20, 2008, at 1:51 PM, kevin kempter wrote:\n\n> Hi all;\n>\n> I have 2 tables where I basically want to delete from the first \n> table (seg_id_tmp7) any rows where the entire row already exists in \n> the second table (sl_cd_segment_dim)\n>\n> I have a query that looks like this (and it's slow):\n>\n>\n> delete from seg_id_tmp7\n> where\n> \tcustomer_srcid::text ||\n> \tshow_srcid::text ||\n> \tshow_name::text ||\n> \tseason_srcid::text ||\n> \tseason_name::text ||\n> \tepisode_srcid::text ||\n> \tepisode_name::text ||\n> \tsegment_type_id::text ||\n> \tsegment_type::text ||\n> \tsegment_srcid::text ||\n> \tsegment_name::text\n> in\n> \t( select\n> \t\tcustomer_srcid::text ||\n> \t\tshow_srcid::text ||\n> \t\tshow_name::text ||\n> \t\tseason_srcid::text ||\n> \t\tseason_name::text ||\n> \t\tepisode_srcid::text ||\n> \t\tepisode_name::text ||\n> \t\tsegment_type_id::text ||\n> \t\tsegment_type::text ||\n> \t\tsegment_srcid::text ||\n> \t\tsegment_name::text\n> \t\tfrom sl_cd_location_dim )\n> ;\n>\n>\n>\n>\n>\n> Here's the query plan for it:\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------\n> Seq Scan on seg_id_tmp7 (cost=0.00..138870701.56 rows=2136 width=6)\n> Filter: (subplan)\n> SubPlan\n> -> Seq Scan on sl_cd_location_dim (cost=0.00..63931.60 \n> rows=433040 width=8)\n> (4 rows)\n>\n>\n>\n>\n>\n>\n>\n>\n> I also tried this:\n>\n> delete from seg_id_tmp7\n> where\n> \t( customer_srcid ,\n> \tshow_srcid ,\n> \tshow_name ,\n> \tseason_srcid ,\n> \tseason_name ,\n> \tepisode_srcid ,\n> \tepisode_name ,\n> \tsegment_type_id ,\n> \tsegment_type ,\n> \tsegment_srcid ,\n> \tsegment_name )\n> in\n> \t( select\n> \t\tcustomer_srcid ,\n> \t\tshow_srcid ,\n> \t\tshow_name ,\n> \t\tseason_srcid ,\n> \t\tseason_name ,\n> \t\tepisode_srcid ,\n> \t\tepisode_name ,\n> \t\tsegment_type_id ,\n> \t\tsegment_type ,\n> \t\tsegment_srcid ,\n> \t\tsegment_name\n> \t\tfrom sl_cd_location_dim )\n> ;\n>\n>\n> and I get this query plan:\n>\n> QUERY PLAN\n> -----------------------------------------------------------------------------------\n> Seq Scan on seg_id_tmp7 (cost=0.00..87997034.20 rows=2136 width=6)\n> Filter: (subplan)\n> SubPlan\n> -> Seq Scan on sl_cd_location_dim (cost=0.00..40114.40 \n> rows=433040 width=8)\n> (4 rows)\n>\n>\n>\n> If it helps here's the describe's (including indexes) for both tables:\n>\n> # \\d seg_id_tmp7\n> Table \"public.seg_id_tmp7\"\n> Column | Type | Modifiers\n> -----------------+-----------------------------+-----------\n> customer_srcid | bigint |\n> show_srcid | bigint |\n> show_name | character varying |\n> season_srcid | bigint |\n> season_name | character varying |\n> episode_srcid | bigint |\n> episode_name | character varying |\n> segment_type_id | bigint |\n> segment_type | 
character varying |\n> segment_srcid | bigint |\n> segment_name | character varying |\n> create_dt | timestamp without time zone |\n>\n>\n>\n>\n> # \\d sl_cd_segment_dim\n> Table \n> \"public.sl_cd_segment_dim\"\n> Column | Type \n> | Modifiers\n> ----------------------+----------------------------- \n> +-------------------------------------------------------------\n> sl_cd_segment_dim_id | bigint | not null \n> default nextval('sl_cd_segment_dim_seq'::regclass)\n> customer_srcid | bigint | not null\n> show_srcid | bigint | not null\n> show_name | character varying(500) | not null\n> season_srcid | bigint | not null\n> season_name | character varying(500) | not null\n> episode_srcid | bigint | not null\n> episode_name | character varying(500) | not null\n> segment_type_id | integer |\n> segment_type | character varying(500) |\n> segment_srcid | bigint |\n> segment_name | character varying(500) |\n> effective_dt | timestamp without time zone | not null \n> default now()\n> inactive_dt | timestamp without time zone |\n> last_update_dt | timestamp without time zone | not null \n> default now()\n> Indexes:\n> \"sl_cd_segment_dim_pk\" PRIMARY KEY, btree (sl_cd_segment_dim_id)\n> \"seg1\" btree (customer_srcid)\n> \"seg2\" btree (show_srcid)\n> \"seg3\" btree (season_srcid)\n> \"seg4\" btree (episode_srcid)\n> \"seg5\" btree (segment_srcid)\n> \"sl_cd_segment_dim_ix1\" btree (customer_srcid)\n>\n>\n>\n>\n>\n>\n> Any thoughts, suggestions, etc on how to improve performance for \n> this delete ?\n>\n>\n> Thanks in advance..\n>\n> /Kevin\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 20 May 2008 14:03:30 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improving performance for a delete" }, { "msg_contents": "On Tue, 20 May 2008 22:03:30 +0200, kevin kempter \n<[email protected]> wrote:\n\n> Version 8.3.1\n>\n>\n> On May 20, 2008, at 1:51 PM, kevin kempter wrote:\n>\n>> Hi all;\n>>\n>> I have 2 tables where I basically want to delete from the first table \n>> (seg_id_tmp7) any rows where the entire row already exists in the \n>> second table (sl_cd_segment_dim)\n>>\n>> I have a query that looks like this (and it's slow):\n>>\n>>\n>> delete from seg_id_tmp7\n>> where\n>> \tcustomer_srcid::text ||\n\n\tBesides being slow as hell and not able to use any indexes, the string \nconcatenation can also yield incorrect results, for instance :\n\nseason_name::text || episode_srcid::text\n\n\tWill have the same contents for\n\nseason_name='season 1' episode_srcid=12\nseason_name='season 11' episode_srcid=2\n\n\tI suggest doing it the right way, one possibility being :\n\ntest=> EXPLAIN DELETE from test where (id,value) in (select id,value from \ntest2);\n QUERY PLAN\n-------------------------------------------------------------------------\n Hash IN Join (cost=2943.00..6385.99 rows=2 width=6)\n Hash Cond: ((test.id = test2.id) AND (test.value = test2.value))\n -> Seq Scan on test (cost=0.00..1442.99 rows=99999 width=14)\n -> Hash (cost=1443.00..1443.00 rows=100000 width=8)\n -> Seq Scan on test2 (cost=0.00..1443.00 rows=100000 width=8)\n\n\tThanks to the hash it is very fast, one seq scan on both tables, instead \nof one seq scan PER ROW in your query.\n\n\tAnother solution would be :\n\ntest=> EXPLAIN DELETE FROM test USING test2 WHERE test.id=test2.id AND \ntest.value=test2.value;\n QUERY 
PLAN\n-------------------------------------------------------------------------\n Hash Join (cost=2943.00..6385.99 rows=2 width=6)\n Hash Cond: ((test.id = test2.id) AND (test.value = test2.value))\n -> Seq Scan on test (cost=0.00..1442.99 rows=99999 width=14)\n -> Hash (cost=1443.00..1443.00 rows=100000 width=8)\n -> Seq Scan on test2 (cost=0.00..1443.00 rows=100000 width=8)\n\t\n\tWhich chooses the same plan here, quite logically, as it is the best one \nin this particular case.\n", "msg_date": "Tue, 20 May 2008 22:54:23 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving performance for a delete" } ]
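A footnote to the thread above: PFC's DELETE ... USING suggestion, applied to the poster's own tables, might look like the sketch below. This is only a sketch under assumptions — it targets sl_cd_segment_dim (the table the original post says should drive the delete, even though the posted queries referenced sl_cd_location_dim), and it uses plain equality matching; since most of these columns are nullable, rows where a column is NULL on both sides would not be treated as matching unless those comparisons are written with IS NOT DISTINCT FROM.

-- Sketch only: hash-joinable delete of rows already present in
-- sl_cd_segment_dim, matching column by column instead of comparing
-- concatenated text.
DELETE FROM seg_id_tmp7 t
USING sl_cd_segment_dim d
WHERE t.customer_srcid  = d.customer_srcid
  AND t.show_srcid      = d.show_srcid
  AND t.show_name       = d.show_name
  AND t.season_srcid    = d.season_srcid
  AND t.season_name     = d.season_name
  AND t.episode_srcid   = d.episode_srcid
  AND t.episode_name    = d.episode_name
  AND t.segment_type_id = d.segment_type_id
  AND t.segment_type    = d.segment_type
  AND t.segment_srcid   = d.segment_srcid
  AND t.segment_name    = d.segment_name;

Matching on the column list keeps the comparison indexable and avoids the concatenation pitfall PFC points out (different rows producing the same concatenated string).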
[ { "msg_contents": "Hi,\n\nI am currently designing a database and wanted to know something that may\nsound trivial, but I thought its still good to confirm before dumping\nmillions of rows in it.\n\nThe design requires a few master tables with very limited rows, for e.g.\ncurrency_denomination table could at the max have a few records like million\n/ billion / crore (used in india) / lacs (india specific) and so on.\n\nNow what I wanted to ask was whether its any different to have the\nprimary-keys in such master tables as text/varchar rather than integer ?\ni.e. Can I use a character varying(10) and use the text 'million' /\n'billion' instead of a serial / integer type ?\n\np.s.: I am not as much concerned with the size that it'd take on the data\ntables, as much as the fact that the select / insert performances shouldn't\nsuffer. However, if that increase in size (per data record) may make a\nconsiderable impact on the performance, I would certainly want to take that\ninto account during design phase.\n\nAny pointers / replies appreciated.\n\nRegards,\n*Robins Tharakan*\n\nHi,I am currently designing a database and wanted to know something that may sound trivial, but I thought its still good to confirm before dumping millions of rows in it.\nThe design requires a few master tables with very limited rows, for e.g. currency_denomination table could at the max have a few records like million / billion / crore (used in india) / lacs (india specific) and so on.\nNow what I wanted to ask was whether its any different to have the primary-keys in such master tables as text/varchar rather than integer ? i.e. Can I use a character varying(10) and use the text 'million' / 'billion' instead of a serial / integer type ?\np.s.: I am not as much concerned with the size that it'd take on the data tables, as much as the fact that the select / insert performances shouldn't suffer. However, if that increase in size (per data record) may make a considerable impact on the performance, I would certainly want to take that into account during design phase.\nAny pointers / replies appreciated.\nRegards,Robins Tharakan", "msg_date": "Wed, 21 May 2008 11:10:43 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Varchar pkey instead of integer" }, { "msg_contents": "Robins Tharakan wrote:\n> Hi,\n\n> Now what I wanted to ask was whether its any different to have the \n> primary-keys in such master tables as text/varchar rather than integer ? \n> i.e. Can I use a character varying(10) and use the text 'million' / \n> 'billion' instead of a serial / integer type ?\n\nOne should ask themselves why before can I. :)\n\nIf you want to use a varchar() for a primary key that is fine but make \nit a natural key not an arbitrary number. If you are going to use \narbitrary numbers, use a serial or bigserial.\n\nJoshua D. Drake\n\n", "msg_date": "Tue, 20 May 2008 23:16:34 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar pkey instead of integer" }, { "msg_contents": "Robins Tharakan wrote:\n> Hi,\n> \n> I am currently designing a database and wanted to know something that may\n> sound trivial, but I thought its still good to confirm before dumping\n> millions of rows in it.\n> \n> The design requires a few master tables with very limited rows, for e.g.\n> currency_denomination table could at the max have a few records like million\n> / billion / crore (used in india) / lacs (india specific) and so on.\n> \n> Now what I wanted to ask was whether its any different to have the\n> primary-keys in such master tables as text/varchar rather than integer ?\n\nAs far as I know it's just slower to compare (ie for fkey checks, index \nlookups, etc) and uses more storage. However, if you're only using the \nother table to limit possible values in a field rather than storing \nother information and you can avoid doing a join / index lookup by \nstoring the string directly in the master table then that might well be \nworth it. It's a tradeoff between the storage cost (seq scan speed, \nindex size, etc) of using the text values directly vs the savings made \nby avoiding having to constantly hit a lookup table.\n\nI have several places in the database I'm presently working on where I \nstore meaningful integers directly in a \"main\" table and reference a \nsingle-field table as a foreign key just to limit acceptable values. It \nworks very well, though it's only suitable in limited situations.\n\nOne of the places I'm doing that is for in-database postcode validation. \nMy current app only needs to validate Australian post codes (as per the \nspec) and other post/zip codes are just stored in the address text. I \nstore the integer representation of the post code directly in address \nrecords but use a foreign key to the single-field \"aust_post_code\" table \nto enforce the use of only valid postcodes. There's an ON DELETE SET \nNULL cascade on the fkey because for this app's purpose a postcode \nthat's no longer accepted by the postal service is bad data.\n\nThis means that the postcode list can't be updated by a TRUNCATE and \nrepopulate. No big deal; I prefer to do a compare between the current \ndatabase contents and the latest postcode data and insert/delete as \nappropriate anyway; especially as the app needs to be able to record and \nflag tentative entries for postcodes that the user *insists* exist but \nthe latest (possibly even weeks old) australia post data says do not.\n\nYou could reasonably do the same sort of thing with a text postcode if \nyour app had to care about non-numeric postal codes.\n\nIt's nice being able to work on something that doesn't have to handle \npedal-post in some awful corner of the earth where they identify postal \nregions by coloured tags. OK, not really, but sometimes addressing seems \nalmost that bad.\n\n> i.e. Can I use a character varying(10) and use the text 'million' /\n> 'billion' instead of a serial / integer type ?\n\nIf you're looking at a small set of possible values an enumeration \n*might* be an option. 
Be aware that they're painful and slow to change \nlater, though, especially when used in foreign keys, views, etc.\n\nI certainly wouldn't use one for your currency denomination table, which \nis likely to see values added to it over time.\n\n> p.s.: I am not as much concerned with the size that it'd take on the data\n> tables, as much as the fact that the select / insert performances shouldn't\n> suffer. However, if that increase in size (per data record) may make a\n> considerable impact on the performance, I would certainly want to take that\n> into account during design phase.\n\nI suspect it's just another tradeoff - table size increase (and thus \nscan performance cost) from storing the text vs avoiding the need to \naccess the lookup table for most operations.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 21 May 2008 14:34:45 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar pkey instead of integer" }, { "msg_contents": "Craig Ringer wrote:\n\n>> p.s.: I am not as much concerned with the size that it'd take on the data\n>> tables, as much as the fact that the select / insert performances \n>> shouldn't\n>> suffer. However, if that increase in size (per data record) may make a\n>> considerable impact on the performance, I would certainly want to take \n>> that\n>> into account during design phase.\n> \n> I suspect it's just another tradeoff - table size increase (and thus \n> scan performance cost) from storing the text vs avoiding the need to \n> access the lookup table for most operations.\n> \n\nSize can affect performance as much as anything else. In your case of \nlimited rows it will make little difference, though the larger table \nwith millions of rows will have this key entered for each row and be \nindexed as the foreign key.\n\nThe real question is how you want to use the column, if you wish to \nquery for rows of a certain currency then you will notice the difference.\n\nYou could use a smallint of 2 bytes each (or a varchar(1) with an int \nvalue instead of a real char) or an integer of 4 bytes, compared to your \nvarchar(10)\n\nData size on the column could be less than half of the size of the \nvarchar(10) so there will be less disk reads (the biggest slow down) and \nsmaller indexes which can increase chances of caching.\n\nWithout storage overheads each million rows will have 10*1000000=10M \nbytes of data compared to 4*1000000=4M bytes - you can see that the \nchance of caching and the time to read off disk will come into effect \neach time you reference that column.\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Wed, 21 May 2008 17:03:34 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar pkey instead of integer" }, { "msg_contents": "\nOn May 21, 2008, at 12:33 AM, Shane Ambler wrote:\n>\n> Size can affect performance as much as anything else.\n\nFor a brief moment, I thought the mailing list had been spammed. ;-)\n\nJ. Andrew Rogers\n\n", "msg_date": "Wed, 21 May 2008 00:57:35 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar pkey instead of integer" }, { "msg_contents": "Shane Ambler wrote:\n\n> Size can affect performance as much as anything else. 
In your case of \n> limited rows it will make little difference, though the larger table \n> with millions of rows will have this key entered for each row and be \n> indexed as the foreign key.\n> \n> The real question is how you want to use the column, if you wish to \n> query for rows of a certain currency then you will notice the difference.\n> \n> You could use a smallint of 2 bytes each (or a varchar(1) with an int \n> value instead of a real char) or an integer of 4 bytes, compared to your \n> varchar(10)\n\n... and if there are only a few records in the currency column, it \nrarely changes, and you put a trigger in place to prevent the re-use of \npreviously assigned keys you may be able to cache that data in your \napplication.\n\nThat way you avoid a join or subquery on your lookup table to get the \ntext description of the currency AND get the storage/performance of a \nsmall integer key.\n\nIt's something I'm doing in other places in my current DB where I have \nessentially static lookup tables. You do have to watch out for lookup \ntable changes, though.\n\nIt's worth noting that my database is rather puny (the largest table has \n500,000 records) and I'm very, very far from an expert on any of this, \nso there might be some hidden downside to doing things this way that I \njust haven't hit yet.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 21 May 2008 16:00:46 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Varchar pkey instead of integer" }, { "msg_contents": "On Wed, May 21, 2008 at 1:27 PM, J. Andrew Rogers <[email protected]>\nwrote:\n\n>\n> On May 21, 2008, at 12:33 AM, Shane Ambler wrote:\n>\n>>\n>> Size can affect performance as much as anything else.\n>>\n>\n> For a brief moment, I thought the mailing list had been spammed. ;-)\n>\n\nAnd that sums up why I wish to thank everyone for the responses.. :)\n\n*Robins*\n\nOn Wed, May 21, 2008 at 1:27 PM, J. Andrew Rogers <[email protected]> wrote:\n\nOn May 21, 2008, at 12:33 AM, Shane Ambler wrote:\n\n\nSize can affect performance as much as anything else.\n\n\nFor a brief moment, I thought the mailing list had been spammed. ;-)\nAnd that sums up why I wish to thank everyone for the responses.. :)Robins", "msg_date": "Thu, 22 May 2008 06:24:36 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Varchar pkey instead of integer" } ]
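A small sketch of the two designs discussed in the thread above. Everything except currency_denomination is an illustrative name, not something from the original posts; the point is the trade-off between a natural text key (no join needed to read the label, but wider rows and indexes in the referencing table) and a small surrogate key (narrower rows, but a lookup join or an application-side cache whenever the label is needed).

-- Natural key: the label itself is the primary key, so referencing
-- rows already carry the readable value.
CREATE TABLE currency_denomination (
    denomination varchar(10) PRIMARY KEY
);

CREATE TABLE amounts (
    amount_id    bigserial PRIMARY KEY,
    amount       numeric NOT NULL,
    denomination varchar(10) NOT NULL
        REFERENCES currency_denomination (denomination)
);

-- Surrogate-key alternative: a much narrower key in each fact row,
-- at the cost of a join (or cached lookup) to recover the label.
CREATE TABLE currency_denomination_s (
    denomination_id smallint PRIMARY KEY,
    denomination    varchar(10) NOT NULL UNIQUE
);

CREATE TABLE amounts_s (
    amount_id       bigserial PRIMARY KEY,
    amount          numeric NOT NULL,
    denomination_id smallint NOT NULL
        REFERENCES currency_denomination_s (denomination_id)
);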
[ { "msg_contents": "I've got a query similar to this:\n\nselect * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n\nThat took > 84 minutes (the query was a bit longer but this is the part that \nmade the difference) after a little change the query took ~1 second:\n\nselect * from t1, t2 where t1.id > 158507 and t2.id > 158507 and t1.id = \nt2.id;\n\nThe change is pretty simple and it seems (note I don't have a clue on how the \nplanner works) it'd be possible for the planner to make this assumption \nitself. Do you think it is really feasible/appropiate?\n\n \n", "msg_date": "Wed, 21 May 2008 12:28:47 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": true, "msg_subject": "Posible planner improvement?" }, { "msg_contents": "Albert Cervera Areny wrote:\n> I've got a query similar to this:\n> \n> select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n> \n> That took > 84 minutes (the query was a bit longer but this is the part that \n> made the difference) after a little change the query took ~1 second:\n> \n> select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and t1.id = \n> t2.id;\n\nTry posting EXPLAIN ANALYSE SELECT ... for both of those queries and \nwe'll see why it's better at the second one.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 21 May 2008 11:48:05 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posible planner improvement?" }, { "msg_contents": "A Dimecres 21 Maig 2008, Richard Huxton va escriure:\n> Albert Cervera Areny wrote:\n> > I've got a query similar to this:\n> >\n> > select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n> >\n> > That took > 84 minutes (the query was a bit longer but this is the part\n> > that made the difference) after a little change the query took ~1 second:\n> >\n> > select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and t1.id =\n> > t2.id;\n>\n> Try posting EXPLAIN ANALYSE SELECT ... for both of those queries and\n> we'll see why it's better at the second one.\n\nRight, attached an example of such a difference.", "msg_date": "Wed, 21 May 2008 13:11:24 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posible planner improvement?" }, { "msg_contents": "A Dimecres 21 Maig 2008, Richard Huxton va escriure:\n>> Albert Cervera Areny wrote:\n>> \n>>> I've got a query similar to this:\n>>>\n>>> select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n>>>\n>>> That took > 84 minutes (the query was a bit longer but this is the part\n>>> that made the difference) after a little change the query took ~1 second:\n>>>\n>>> select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and t1.id =\n>>> t2.id;\n>>> \n>> Try posting EXPLAIN ANALYSE SELECT ... 
for both of those queries and\n>> we'll see why it's better at the second one.\n>> \n\nEven if the estimates were off (they look a bit off for the first \ntable), the above two queries are logically identical, and I would \nexpect the planner to make the same decision for both.\n\nI am curious - what is the result of:\n\n select * from t1, t2 where t2.id > 158507 and t1.id = t2.id;\n\nIs it the same speed as the first or second, or is a third speed entirely?\n\nIf t1.id = t2.id, I would expect the planner to substitute them freely \nin terms of identities?\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Wed, 21 May 2008 07:24:55 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posible planner improvement?" }, { "msg_contents": "A Dimecres 21 Maig 2008, Mark Mielke va escriure:\n> A Dimecres 21 Maig 2008, Richard Huxton va escriure:\n> >> Albert Cervera Areny wrote:\n> >>> I've got a query similar to this:\n> >>>\n> >>> select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n> >>>\n> >>> That took > 84 minutes (the query was a bit longer but this is the part\n> >>> that made the difference) after a little change the query took ~1\n> >>> second:\n> >>>\n> >>> select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and t1.id\n> >>> = t2.id;\n> >>\n> >> Try posting EXPLAIN ANALYSE SELECT ... for both of those queries and\n> >> we'll see why it's better at the second one.\n>\n> Even if the estimates were off (they look a bit off for the first\n> table), the above two queries are logically identical, and I would\n> expect the planner to make the same decision for both.\n>\n> I am curious - what is the result of:\n>\n> select * from t1, t2 where t2.id > 158507 and t1.id = t2.id;\n>\n> Is it the same speed as the first or second, or is a third speed entirely?\n\nAttached the same file with the third result at the end. The result is worst \nthan the other two cases. Note that I've analyzed both tables but results are \nthe same. One order of magnitude between the two first queries.\n\n>\n> If t1.id = t2.id, I would expect the planner to substitute them freely\n> in terms of identities?\n>\n> Cheers,\n> mark", "msg_date": "Wed, 21 May 2008 13:30:16 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posible planner improvement?" }, { "msg_contents": "A Dimecres 21 Maig 2008, Albert Cervera Areny va escriure:\n> A Dimecres 21 Maig 2008, Mark Mielke va escriure:\n> > A Dimecres 21 Maig 2008, Richard Huxton va escriure:\n> > >> Albert Cervera Areny wrote:\n> > >>> I've got a query similar to this:\n> > >>>\n> > >>> select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n> > >>>\n> > >>> That took > 84 minutes (the query was a bit longer but this is the\n> > >>> part that made the difference) after a little change the query took\n> > >>> ~1 second:\n> > >>>\n> > >>> select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and\n> > >>> t1.id = t2.id;\n> > >>\n> > >> Try posting EXPLAIN ANALYSE SELECT ... 
for both of those queries and\n> > >> we'll see why it's better at the second one.\n> >\n> > Even if the estimates were off (they look a bit off for the first\n> > table), the above two queries are logically identical, and I would\n> > expect the planner to make the same decision for both.\n> >\n> > I am curious - what is the result of:\n> >\n> > select * from t1, t2 where t2.id > 158507 and t1.id = t2.id;\n> >\n> > Is it the same speed as the first or second, or is a third speed\n> > entirely?\n>\n> Attached the same file with the third result at the end. The result is\n> worst than the other two cases. Note that I've analyzed both tables but\n> results are the same. One order of magnitude between the two first queries.\n\nSorry, it's not worse than the other two cases as shown in the file. However, \nafter repetition it seems the other two seem to decrease more than the third \none whose times vary a bit more and some times take up to 5 seconds.\n\nOther queries are running in the same machine, so take times with a grain of \nsalt. What's clear is that always there's a big difference between first and \nsecond queries.\n\n>\n> > If t1.id = t2.id, I would expect the planner to substitute them freely\n> > in terms of identities?\n> >\n> > Cheers,\n> > mark\n\n\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n", "msg_date": "Wed, 21 May 2008 13:37:49 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posible planner improvement?" 
}, { "msg_contents": "Hello Albert,\n\nAlbert Cervera Areny <[email protected]> wrote:\n\n> I've got a query similar to this:\n> \n> select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n> \n> That took > 84 minutes (the query was a bit longer but this is the part that \n> made the difference) after a little change the query took ~1 second:\n> \n> select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and t1.id = \n> t2.id;\n\nI had a similar problem here:\n http://archives.postgresql.org/pgsql-general/2007-02/msg00850.php\nand added a redundant inequality explicitly to make it work well.\n\nI think it is worth trying to improve, but I'm not sure we can do it\nagainst user defined types. Does postgres always require transitive law\nto all types?\n\nRegards,\n---\nITAGAKI Takahiro\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 26 May 2008 20:30:00 +0900", "msg_from": "ITAGAKI Takahiro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posible planner improvement?" }, { "msg_contents": ">>> On Mon, May 26, 2008 at 6:30 AM, ITAGAKI Takahiro\n<[email protected]> wrote: \n> Albert Cervera Areny <[email protected]> wrote:\n> \n>> I've got a query similar to this:\n>> \n>> select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n>> \n>> That took > 84 minutes (the query was a bit longer but this is the\npart that \n>> made the difference) after a little change the query took ~1\nsecond:\n>> \n>> select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and\nt1.id = \n>> t2.id;\n> \n> I had a similar problem here:\n> http://archives.postgresql.org/pgsql-general/2007-02/msg00850.php\n> and added a redundant inequality explicitly to make it work well.\n> \n> I think it is worth trying to improve, but I'm not sure we can do it\n> against user defined types. Does postgres always require transitive\nlaw\n> to all types?\n \nI've recently run into this. It would be a nice optimization,\nif feasible.\n \n-Kevin\n", "msg_date": "Mon, 04 Aug 2008 13:16:45 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posible planner improvement?" } ]
[ { "msg_contents": "The problem is that the implied join predicate is not being propagated. This is definitely a planner deficiency.\r\n\r\n- Luke\r\n\r\n----- Original Message -----\r\nFrom: [email protected] <[email protected]>\r\nTo: [email protected] <[email protected]>\r\nSent: Wed May 21 07:37:49 2008\r\nSubject: Re: [PERFORM] Posible planner improvement?\r\n\r\nA Dimecres 21 Maig 2008, Albert Cervera Areny va escriure:\r\n> A Dimecres 21 Maig 2008, Mark Mielke va escriure:\r\n> > A Dimecres 21 Maig 2008, Richard Huxton va escriure:\r\n> > >> Albert Cervera Areny wrote:\r\n> > >>> I've got a query similar to this:\r\n> > >>>\r\n> > >>> select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\r\n> > >>>\r\n> > >>> That took > 84 minutes (the query was a bit longer but this is the\r\n> > >>> part that made the difference) after a little change the query took\r\n> > >>> ~1 second:\r\n> > >>>\r\n> > >>> select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and\r\n> > >>> t1.id = t2.id;\r\n> > >>\r\n> > >> Try posting EXPLAIN ANALYSE SELECT ... for both of those queries and\r\n> > >> we'll see why it's better at the second one.\r\n> >\r\n> > Even if the estimates were off (they look a bit off for the first\r\n> > table), the above two queries are logically identical, and I would\r\n> > expect the planner to make the same decision for both.\r\n> >\r\n> > I am curious - what is the result of:\r\n> >\r\n> > select * from t1, t2 where t2.id > 158507 and t1.id = t2.id;\r\n> >\r\n> > Is it the same speed as the first or second, or is a third speed\r\n> > entirely?\r\n>\r\n> Attached the same file with the third result at the end. The result is\r\n> worst than the other two cases. Note that I've analyzed both tables but\r\n> results are the same. One order of magnitude between the two first queries.\r\n\r\nSorry, it's not worse than the other two cases as shown in the file. However, \r\nafter repetition it seems the other two seem to decrease more than the third \r\none whose times vary a bit more and some times take up to 5 seconds.\r\n\r\nOther queries are running in the same machine, so take times with a grain of \r\nsalt. What's clear is that always there's a big difference between first and \r\nsecond queries.\r\n\r\n>\r\n> > If t1.id = t2.id, I would expect the planner to substitute them freely\r\n> > in terms of identities?\r\n> >\r\n> > Cheers,\r\n> > mark\r\n\r\n\r\n\r\n-- \r\nAlbert Cervera Areny\r\nDept. Informàtica Sedifa, S.L.\r\n\r\nAv. Can Bordoll, 149\r\n08202 - Sabadell (Barcelona)\r\nTel. 93 715 51 11\r\nFax. 93 715 51 12\r\n\r\n====================================================================\r\n........................ AVISO LEGAL ............................\r\nLa presente comunicación y sus anexos tiene como destinatario la\r\npersona a la que va dirigida, por lo que si usted lo recibe\r\npor error debe notificarlo al remitente y eliminarlo de su\r\nsistema, no pudiendo utilizarlo, total o parcialmente, para\r\nningún fin. Su contenido puede tener información confidencial o\r\nprotegida legalmente y únicamente expresa la opinión del\r\nremitente. El uso del correo electrónico vía Internet no\r\npermite asegurar ni la confidencialidad de los mensajes\r\nni su correcta recepción. En el caso de que el\r\ndestinatario no consintiera la utilización del correo electrónico,\r\ndeberá ponerlo en nuestro conocimiento inmediatamente.\r\n====================================================================\r\n........................... 
DISCLAIMER .............................\r\nThis message and its attachments are intended exclusively for the\r\nnamed addressee. If you receive this message in error, please\r\nimmediately delete it from your system and notify the sender. You\r\nmay not use this message or any part of it for any purpose.\r\nThe message may contain information that is confidential or\r\nprotected by law, and any opinions expressed are those of the\r\nindividual sender. Internet e-mail guarantees neither the\r\nconfidentiality nor the proper receipt of the message sent.\r\nIf the addressee of this message does not consent to the use\r\nof internet e-mail, please inform us inmmediately.\r\n====================================================================\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n
", "msg_date": "Wed, 21 May 2008 07:52:28 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Posible planner improvement?" }, { "msg_contents": "Luke Lonergan wrote:\n> The problem is that the implied join predicate is not being\n> propagated. This is definitely a planner deficiency.\n\nIIRC only equality conditions are propagated and gt, lt, between aren't. 
\n> I seem to remember that the argument given was that the cost of \n> checking for the ability to propagate was too high for the frequency \n> when it ocurred.\n>\n> Of course, what was true for code and machines of 5 years ago might not \n> be so today.\n>\n\n\tSuggestion : when executing a one-off sql statement, optimizer should try \nto offer \"best effort while being fast\" ; when making a plan that will be \nreused many times (ie PREPARE, functions...) planning time could be \nmuuuuch longer...\n", "msg_date": "Wed, 21 May 2008 18:18:27 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posible planner improvement?" }, { "msg_contents": "A Dimecres 21 Maig 2008, Richard Huxton va escriure:\n> Luke Lonergan wrote:\n> > The problem is that the implied join predicate is not being\n> > propagated. This is definitely a planner deficiency.\n>\n> IIRC only equality conditions are propagated and gt, lt, between aren't.\n> I seem to remember that the argument given was that the cost of\n> checking for the ability to propagate was too high for the frequency\n> when it ocurred.\n>\n> Of course, what was true for code and machines of 5 years ago might not\n> be so today.\n\nHope this can be revisited given the huge difference in this case: 80 minutes \nto 1 second.\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n", "msg_date": "Wed, 21 May 2008 18:22:42 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posible planner improvement?" }, { "msg_contents": "Moving to -hackers...\n\nOn May 21, 2008, at 9:09 AM, Richard Huxton wrote:\n> Luke Lonergan wrote:\n>> The problem is that the implied join predicate is not being\n>> propagated. 
This is definitely a planner deficiency.\n>\n> IIRC only equality conditions are propagated and gt, lt, between \n> aren't. I seem to remember that the argument given was that the \n> cost of checking for the ability to propagate was too high for the \n> frequency when it ocurred.\n>\n> Of course, what was true for code and machines of 5 years ago might \n> not be so today.\n\nDefinitely...\n\nHow hard would it be to propagate all conditions (except maybe \nfunctions, though perhaps the new function cost estimates make that \nmore practical) in cases of equality?\n\nFor reference, the original query as posted to -performance:\n\nselect * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n\nThat took > 84 minutes (the query was a bit longer but this is the \npart that made the difference) after a little change the query took \n~1 second:\n\nselect * from t1, t2 where t1.id > 158507 and t2.id > 158507 and \nt1.id = t2.id;\n\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 24 May 2008 14:49:46 -0400", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Posible planner improvement?" }, { "msg_contents": "Decibel! wrote:\n>For reference, the original query as posted to -performance:\n\n>select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n\n>That took > 84 minutes (the query was a bit longer but this is the \n>part that made the difference) after a little change the query took \n>~1 second:\n\nJust out of curiosity, would predefining the order of join have solved\nthe issue, as in:\n\na. select * from t1 join t2 using(id) where t1.id > 158507;\nvs.\nb. select * from t2 join t1 using(id) where t1.id > 158507;\n\nI'd expect a to be faster than b, is it?\n-- \nSincerely, [email protected]\n Stephen R. van den Berg.\n\"Technology is stuff that doesn't work yet.\" -- Bran Ferren\n\"We no longer think of chairs as technology.\" -- Douglas Adams\n", "msg_date": "Sun, 25 May 2008 10:21:33 +0200", "msg_from": "\"Stephen R. van den Berg\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Posible planner improvement?" } ]
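Stephen's closing question about join order can be checked directly with EXPLAIN. A sketch, again borrowing the thread's t1/t2 names: with the default join_collapse_limit the planner is free to reorder explicit joins of this size, so both spellings would normally be expected to produce the same plan — the interesting difference remains whether the range condition is stated on one or on both join columns.

-- a. restriction written against t1
EXPLAIN ANALYZE
SELECT * FROM t1 JOIN t2 USING (id) WHERE t1.id > 158507;

-- b. tables listed in the opposite order
EXPLAIN ANALYZE
SELECT * FROM t2 JOIN t1 USING (id) WHERE t1.id > 158507;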
[ { "msg_contents": "Does anyone know if there is a source that provides \"Big O\" notation for \npostgres's aggregate functions and operations? For example is count(*) \n= O(1) or O(n)?\n\nDo the developers for postgres use Big O when selecting algorithms? If \nso, is the info easily available?\n\nThanks,\nHH\n\n\n\n\n-- \nH. Hall\nReedyRiver Group LLC\nsite: reedyriver.com\n\n", "msg_date": "Wed, 21 May 2008 10:10:53 -0400", "msg_from": "\"H. Hall\" <[email protected]>", "msg_from_op": true, "msg_subject": "\"Big O\" notation for postgres?" }, { "msg_contents": "On Wed, May 21, 2008 at 10:10 AM, H. Hall <[email protected]> wrote:\n> Does anyone know if there is a source that provides \"Big O\" notation for\n> postgres's aggregate functions and operations? For example is count(*) =\n> O(1) or O(n)?\n\nI don't know of any document containing the complexity of each\naggregate, but it's sometimes left as a comment in the souce code.\n\nIIRC, COUNT (non-distinct) is currently O(n), where n also includes\nevaluation of tuples not represented in the final count (due to\nPostgres' MVCC design).\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 21 May 2008 10:28:59 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Big O\" notation for postgres?" }, { "msg_contents": "Jonah H. Harris wrote:\n> On Wed, May 21, 2008 at 10:10 AM, H. Hall <[email protected]> wrote:\n>> Does anyone know if there is a source that provides \"Big O\" notation for\n>> postgres's aggregate functions and operations? For example is count(*) =\n>> O(1) or O(n)?\n> \n> I don't know of any document containing the complexity of each\n> aggregate, but it's sometimes left as a comment in the souce code.\n\nRecent max() and min() can be O(n) or O(1) depending on the where-clause \nand presence of an index too, just to muddy the waters.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 21 May 2008 15:39:54 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Big O\" notation for postgres?" }, { "msg_contents": "On Wed, 21 May 2008 16:10:53 +0200, H. Hall <[email protected]> \nwrote:\n\n> Does anyone know if there is a source that provides \"Big O\" notation for \n> postgres's aggregate functions and operations? For example is count(*) \n> = O(1) or O(n)?\n>\n> Do the developers for postgres use Big O when selecting algorithms? If \n> so, is the info easily available?\n\n\tYou can't do any better than O( n rows examined by the aggregate ) except \nfor max() and min() on an indexed expression, which in this case aren't \nreally aggrgates anymore since they are internally rewritten as an index \nlookup to get the value you want... but stuff like sum() or avg() or \ncount() will always have to see all the rows selected (and some more) \nunless you use clever hacks like materialized views etc, in which case the \nthing in the O() will change, or at least the O() constant will change...\n", "msg_date": "Wed, 21 May 2008 18:27:14 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Big O\" notation for postgres?" }, { "msg_contents": "PFC wrote:\n> On Wed, 21 May 2008 16:10:53 +0200, H. 
Hall wrote:\n>\n>> Does anyone know if there is a source that provides \"Big O\" notation \n>> for postgres's aggregate functions and operations? For example is \n>> count(*) = O(1) or O(n)?\n>>\n>> Do the developers for postgres use Big O when selecting algorithms? \n>> If so, is the info easily available?\n>\n> You can't do any better than O( n rows examined by the aggregate ) \n> except for max() and min() on an indexed expression, which in this \n> case aren't really aggrgates anymore since they are internally \n> rewritten as an index lookup to get the value you want... but stuff \n> like sum() or avg() or count() will always have to see all the rows \n> selected (and some more) unless you use clever hacks like materialized \n> views etc, in which case the thing in the O() will change, or at least \n> the O() constant will change...\n>\nThank you PFC and also Jonah, and Richard for your replies.\n\nIt occurs to me that Big O might be a useful way to understand/explain \nwhat is happening with situations like Albert's earlier today:\n\nI've got a query similar to this:\n> >\n> > select * from t1, t2 where t1.id > 158507 and t1.id = t2.id;\n> >\n> > That took > 84 minutes (the query was a bit longer but this is the part\n> > that made the difference) after a little change the query took ~1 second:\n> >\n> > select * from t1, t2 where t1.id > 158507 and t2.id > 158507 and t1.id =\n> > t2.id;\n\nBTW, anyone reading this and not familiar with Big O notation might want \nto check out these links. All are intro type articles:\n\n * An informal introduction to O(N) notation:\n http://www.perlmonks.org/?node_id=227909\n * Analysis of Algorithms and Selection of Algorithms:\n \nhttp://www.cs.utk.edu/~parker/Courses/CS302-Fall06/Notes/complexity.html\n * Complexity and Big-O Notation\n http://pages.cs.wisc.edu/~hasti/cs367-common/notes/COMPLEXITY.html\n\n-- \nH. Hall\nReedyRiver Group LLC\nhttp://www.reedyriver.com\n\n", "msg_date": "Wed, 21 May 2008 15:14:51 -0400", "msg_from": "\"H. Hall\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"Big O\" notation for postgres?" }, { "msg_contents": "\"Richard Huxton\" <[email protected]> writes:\n\n> Jonah H. Harris wrote:\n>> On Wed, May 21, 2008 at 10:10 AM, H. Hall <[email protected]> wrote:\n>>> Does anyone know if there is a source that provides \"Big O\" notation for\n>>> postgres's aggregate functions and operations? For example is count(*) =\n>>> O(1) or O(n)?\n>>\n>> I don't know of any document containing the complexity of each\n>> aggregate, but it's sometimes left as a comment in the souce code.\n>\n> Recent max() and min() can be O(n) or O(1) depending on the where-clause and\n> presence of an index too, just to muddy the waters.\n\nHm, true. But excluding those special cases all Postgres aggregate functions\nwill be O(n) unless they're doing something very odd. None of the built-in\nfunctions (except min/max as noted) do anything odd like that.\n\nThe reason way is because of the basic design of Postgres aggregate functions.\nThey are fed every tuple one by one and have to keep their state in a single\nvariable. Most of the aggregate functions like count(*) etc just keep a static\nnon-growing state and the state transition function is a simple arithmetic\noperation which is O(1). 
So the resulting operation is O(n).\n\nActually one exception would be something like\n\nCREATE AGGREGATE array_agg(anyelement) (SFUNC = array_append, STYPE = anyarray, INITCOND='{}');\n\nSince the state variable has to keep accumulating more and more stuff the\narray_append becomes more and more expensive (it has to generate a new array\nso it has to copy the existing stuff). So actually it woul dbe O(n^2).\n\nThe only builtin aggregate which looks like it falls in this category would be\nxmlagg()\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Thu, 22 May 2008 16:59:46 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Big O\" notation for postgres?" }, { "msg_contents": "Gregory Stark wrote:\n> \"Richard Huxton\" <[email protected]> writes:\n>\n> \n>> Jonah H. Harris wrote:\n>> \n>>> On Wed, May 21, 2008 at 10:10 AM, H. Hall wrote:\n>>> \n>>>> Does anyone know if there is a source that provides \"Big O\" notation for\n>>>> postgres's aggregate functions and operations? For example is count(*) =\n>>>> O(1) or O(n)?\n>>>> \n>>> I don't know of any document containing the complexity of each\n>>> aggregate, but it's sometimes left as a comment in the souce code.\n>>> \n>> Recent max() and min() can be O(n) or O(1) depending on the where-clause and\n>> presence of an index too, just to muddy the waters.\n>> \nWhen I first read the above from Jonah I just assumed some Postgres \nmagic was producing O(1). After seeing this again, I believe that \nPostgres must be doing something like the following for max and min:\nMax: ORDER BY colName DESC LIMIT 1\nMin: ORDER BY coName ASC LIMIT 1\n\nThus Jonah's caveat about using an index. If postgres is using an index \nas in the above then the Max and Min functions would both be O(log N) , \nthis is log base2 of N, which is the time it takes to search a balanced \nbinary tree, not O(1) which implies a constant time to perform, \nregardless of the size of the dataset N.\n\n>\n> Hm, true. But excluding those special cases all Postgres aggregate functions\n> will be O(n) unless they're doing something very odd. None of the built-in\n> functions (except min/max as noted) do anything odd like that.\n>\n> The reason way is because of the basic design of Postgres aggregate functions.\n> They are fed every tuple one by one and have to keep their state in a single\n> variable. Most of the aggregate functions like count(*) etc just keep a static\n> non-growing state and the state transition function is a simple arithmetic\n> operation which is O(1). So the resulting operation is O(n).\n>\n> Actually one exception would be something like\n>\n> CREATE AGGREGATE array_agg(anyelement) (SFUNC = array_append, STYPE = anyarray, INITCOND='{}');\n>\n> Since the state variable has to keep accumulating more and more stuff the\n> array_append becomes more and more expensive (it has to generate a new array\n> so it has to copy the existing stuff). So actually it woul dbe O(n^2).\n>\n> The only builtin aggregate which looks like it falls in this category would be\n> xmlagg()\n>\n> \nFunctions with O(N^2) scale very badly of course.\n\nIt would be very handy if the Planner could kick out Big O with its \nestimates. This would allow one to quickly tell how a query scales with \na large number of rows.\n\nThanks,\nHH\n\n-- \nH. 
Hall\nReedyRiver Group LLC\nhttp://www.reedyriver.com\n\n", "msg_date": "Mon, 26 May 2008 11:00:18 -0400", "msg_from": "\"H. Hall\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"Big O\" notation for postgres?" } ]
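A quick way to see the distinction drawn in this thread is to compare plans. The table below is invented purely for illustration, and the plan shapes in the comments are only what one would typically expect on 8.1 and later; exact output depends on version and statistics.

CREATE TABLE measurements (id serial PRIMARY KEY, val numeric);

-- min()/max() on an indexed expression can be rewritten as an index lookup,
-- so the cost no longer grows with the number of rows:
EXPLAIN SELECT max(id) FROM measurements;
-- typically: Result -> InitPlan -> Limit -> Index Scan Backward using measurements_pkey

-- count() and sum() still feed every selected row through their transition
-- function, so they stay O(n rows examined):
EXPLAIN SELECT count(*), sum(val) FROM measurements;
-- typically: Aggregate -> Seq Scan on measurements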
[ { "msg_contents": "Hi -performance,\n\nI experienced this morning a performance problem when we imported a\ndump in a 8.1 database.\n\nThe table is 5 millions rows large and when the dump creates an index\non a specific text column called clazz it takes 27 minutes while on\nthe other columns, it only takes a couple of seconds:\nLOG: duration: 1636301.317 ms statement: CREATE INDEX\nindex_journal_clazz ON journal USING btree (clazz);\nLOG: duration: 20613.009 ms statement: CREATE INDEX\nindex_journal_date ON journal USING btree (date);\nLOG: duration: 10653.290 ms statement: CREATE INDEX\nindex_journal_modifieur ON journal USING btree (modifieur);\nLOG: duration: 15031.579 ms statement: CREATE INDEX\nindex_journal_objectid ON journal USING btree (objectid);\n\nThe only weird thing about this column is that 4.7 millions of rows\nhave the exact same value. A partial index excluding this value is\nreally fast to create but, as the database is used via JDBC and\nprepared statements, this index is totally useless (the plan is\ncreated before the BIND so it can't use the partial index). FWIW we\ncan't use ?protocolVersion=2 with this application so it's not an\noption.\n\nAs part of the deployment process of this application, we often need\nto drop/create/restore the database and 25 minutes is really longer\nthan we can afford.\n\nSo my questions are:\n- is the index creation time so correlated with the distribution? I\nwas quite surprised by this behaviour. The time is essentially CPU\ntime.\n- if not, what can I check to diagnose this problem?\n- IIRC, 8.3 could allow me to use the partial index as the query\nshould be planned after the BIND (plans are unnamed). Am I right?\n\nThanks for any input.\n\n-- \nGuillaume\n", "msg_date": "Thu, 22 May 2008 14:32:27 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index creation time and distribution" }, { "msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> I experienced this morning a performance problem when we imported a\n> dump in a 8.1 database.\n> The table is 5 millions rows large and when the dump creates an index\n> on a specific text column called clazz it takes 27 minutes while on\n> the other columns, it only takes a couple of seconds:\n> The only weird thing about this column is that 4.7 millions of rows\n> have the exact same value.\n\nDo you have maintenance_work_mem set large enough that the index\ncreation sort is done in-memory? 8.1 depends on the platform's qsort\nand a lot of them are kinda pessimal for input like this.\n\n8.2 (which uses our own qsort) seems to perform better in a quick\ntest.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 May 2008 09:14:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index creation time and distribution " }, { "msg_contents": "On Thu, May 22, 2008 at 3:14 PM, Tom Lane <[email protected]> wrote:\n> Do you have maintenance_work_mem set large enough that the index\n> creation sort is done in-memory? 8.1 depends on the platform's qsort\n> and a lot of them are kinda pessimal for input like this.\n\nFWIW, it's a 32 bits CentOS 4.6 box.\n\nmaintenance_work_mem is set to 256 MB and the size of the index is 400 MB.\n\nShould I try to raise it up to 512 MB? The server only has 2GB of RAM\nso it seems a bit high.\n\n> 8.2 (which uses our own qsort) seems to perform better in a quick\n> test.\n\nMmmmh OK. 
I was considering an upgrade to 8.3 in the next months anyway.\n\nDo we agree that in the case of unnamed prepared statement, 8.3 plans\nthe query after the BIND? The partial index seems to be a better\nsolution anyway, considering that it's 12 MB vs 400 MB.\n\nThanks.\n\n-- \nGuillaume\n", "msg_date": "Thu, 22 May 2008 15:38:25 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index creation time and distribution" }, { "msg_contents": "On Thu, 22 May 2008, Tom Lane wrote:\n> Do you have maintenance_work_mem set large enough that the index\n> creation sort is done in-memory? 8.1 depends on the platform's qsort\n> and a lot of them are kinda pessimal for input like this.\n\nLooking at the fact that other indexes on the same table are created \nquickly, it seems that the maintenance_work_mem isn't the issue - the sort \nalgorithm is.\n\nHaving lots of elements the same value is a worst-case-scenario for a \nnaive quicksort. I am in the middle of testing sorting algorithms for a \nperformance lecture I'm going to give, and one of the best algorithms I \nhave seen yet is that used in Java's java.util.Arrays.sort(). I haven't \nbeen able to beat it with any other comparison sort yet (although I have \nbeaten it with a bucket sort, but I wouldn't recommend such an algorithm \nfor a database).\n\n From the JavaDoc:\n\n> The sorting algorithm is a tuned quicksort, adapted from Jon L. Bentley \n> and M. Douglas McIlroy's \"Engineering a Sort Function\", \n> Software-Practice and Experience, Vol. 23(11) P. 1249-1265 (November \n> 1993). This algorithm offers n*log(n) performance on many data sets that \n> cause other quicksorts to degrade to quadratic performance.\n\nMatthew\n\n-- \nFirst law of computing: Anything can go wro\nsig: Segmentation fault. core dumped.\n", "msg_date": "Thu, 22 May 2008 15:10:06 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index creation time and distribution " }, { "msg_contents": "On Thu, May 22, 2008 at 6:32 AM, Guillaume Smet\n<[email protected]> wrote:\n> Hi -performance,\n>\n>\n> LOG: duration: 1636301.317 ms statement: CREATE INDEX\n> index_journal_clazz ON journal USING btree (clazz);\n> LOG: duration: 20613.009 ms statement: CREATE INDEX\n> index_journal_date ON journal USING btree (date);\n\nJust curious, what happens if you create the date index first, then\nthe clazz one?\n", "msg_date": "Thu, 22 May 2008 10:50:33 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index creation time and distribution" }, { "msg_contents": "On Thu, May 22, 2008 at 6:50 PM, Scott Marlowe <[email protected]> wrote:\n> Just curious, what happens if you create the date index first, then\n> the clazz one?\n\nIt's not due to any cache effect if it's your question. It's mostly\nCPU time and changing the order doesn't change the behaviour.\n\nI'll make some tests with 8.3 in a few weeks (I'll be out of town next\nweek) to see if using PostgreSQL qsort reduces the problem.\n\n-- \nGuillaume\n", "msg_date": "Thu, 22 May 2008 20:34:39 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index creation time and distribution" }, { "msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> On Thu, May 22, 2008 at 3:14 PM, Tom Lane <[email protected]> wrote:\n>> Do you have maintenance_work_mem set large enough that the index\n>> creation sort is done in-memory? 
8.1 depends on the platform's qsort\n>> and a lot of them are kinda pessimal for input like this.\n\n> maintenance_work_mem is set to 256 MB and the size of the index is 400 MB.\n\n> Should I try to raise it up to 512 MB? The server only has 2GB of RAM\n> so it seems a bit high.\n\nHmm, that's most likely not going to be enough to get it to do an\nin-memory sort ... try turning on trace_sort to see. But anyway,\nif you are in the on-disk sort regime, 8.3 is only going to be\nmarginally faster for such a case --- it's going to have to write\nall the index entries out and read 'em back in anyway.\n\n>> 8.2 (which uses our own qsort) seems to perform better in a quick\n>> test.\n\n> Mmmmh OK. I was considering an upgrade to 8.3 in the next months anyway.\n\n> Do we agree that in the case of unnamed prepared statement, 8.3 plans\n> the query after the BIND? The partial index seems to be a better\n> solution anyway, considering that it's 12 MB vs 400 MB.\n\nErmm .. this is in fact mostly broken in 8.3.0 and 8.3.1. If you don't\nwant to wait for 8.3.2, you need this patch:\nhttp://archives.postgresql.org/pgsql-committers/2008-03/msg00566.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 May 2008 15:18:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index creation time and distribution " }, { "msg_contents": "On Thu, May 22, 2008 at 9:18 PM, Tom Lane <[email protected]> wrote:\n> Ermm .. this is in fact mostly broken in 8.3.0 and 8.3.1. If you don't\n> want to wait for 8.3.2, you need this patch:\n> http://archives.postgresql.org/pgsql-committers/2008-03/msg00566.php\n\nThat's what I had in mind. We have to test a lot of things before even\nconsidering an upgrade so that's not really a problem for us to wait\nfor 8.3.2.\n\nThanks.\n\n-- \nGuillaume\n", "msg_date": "Thu, 22 May 2008 23:52:11 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index creation time and distribution" } ]
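A sketch of the two suggestions made above, using the journal/clazz names from the thread; the memory figure and the 'common_value' literal are placeholders rather than values taken from the original system.

-- See whether the CREATE INDEX sort runs in memory or spills to disk
-- (trace_sort logs the sort strategy, as suggested above):
SET trace_sort = on;
SET maintenance_work_mem = '512MB';   -- on 8.1, use the kB form instead: SET maintenance_work_mem = 524288;
CREATE INDEX index_journal_clazz ON journal USING btree (clazz);

-- The much smaller partial index discussed in the thread: leave out the value
-- shared by ~4.7 million rows ('common_value' stands in for the real one):
CREATE INDEX index_journal_clazz_part ON journal (clazz)
WHERE clazz <> 'common_value';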
[ { "msg_contents": "Hi Hannu,\r\n\r\nInteresting suggestion on the partial index!\r\n\r\nI'll find out if we can extract our code that did the work. It was simple but scattered in a few routines.\r\n\r\nIn concept it worked like this:\r\n\r\n1 - Ignore if hint bits are unset, use them if set. This affects heapam and vacuum I think.\r\n2 - implement a cache for clog lookups based on the optimistic assumption that the data was inserted in bulk. Put the cache one call away from heapgetnext()\r\n\r\nI forget the details of (2). As I recall, if we fall off of the assumption, the penalty for long scans get large-ish (maybe 2X), but since when do people full table scan when they're updates/inserts are so scattered across TIDs? It's an obvious big win for DW work.\r\n\r\nWe also have a GUC to turn it off if needed, in which case a vacuum will write the hint bits.\r\n\r\n- Luke\r\n\r\n----- Original Message -----\r\nFrom: Hannu Krosing <[email protected]>\r\nTo: Luke Lonergan\r\nCc: Pavan Deolasee <[email protected]>; Greg Smith <[email protected]>; Alvaro Herrera <[email protected]>; [email protected] <[email protected]>\r\nSent: Thu May 22 12:10:02 2008\r\nSubject: Re: [PERFORM] I/O on select count(*)\r\n\r\nOn Thu, 2008-05-15 at 10:52 +0800, Luke Lonergan wrote:\r\n> BTW – we’ve removed HINT bit checking in Greenplum DB and improved the\r\n> visibility caching which was enough to provide performance at the same\r\n> level as with the HINT bit optimization, but avoids this whole “write\r\n> the data, write it to the log also, then write it again just for good\r\n> measure” behavior.\r\n> \r\n> For people doing data warehousing work like the poster, this Postgres\r\n> behavior is miserable. It should be fixed for 8.4 for sure\r\n> (volunteers?)\r\n\r\nI might try it. I think I have told you about my ideas ;)\r\nI plan to first do \"cacheing\" (for being able to doi index only scans\r\namong other things) and then if the cache works reliably, use the\r\n\"cacheing\" code as the main visibility / MVCC mechanism.\r\n\r\nIs Greenplums code available, or should I roll my own ?\r\n\r\n> BTW – for the poster’s benefit, you should implement partitioning by\r\n> date, then load each partition and VACUUM ANALYZE after each load.\r\n> You probably won’t need the date index anymore – so your load times\r\n> will vastly improve (no indexes), you’ll store less data (no indexes)\r\n> and you’ll be able to do simpler data management with the partitions.\r\n> \r\n> You may also want to partition AND index if you do a lot of short\r\n> range selective date predicates. Example would be: partition by day,\r\n> index on date field, queries selective on date ranges by hour will\r\n> then select out only the day needed, then index scan to get the \r\n> hourly values.\r\n\r\nIf your queries allow it, you may try indexing on \r\nint2::extract('HOUR' from date)\r\nso the index may be smaller\r\n\r\nstoring the date as type abstime is another way to reduce index size.\r\n\r\n> Typically time-oriented data is nearly time sorted anyway, so you’ll\r\n> also get the benefit of a clustered index.\r\n\r\n----------------\r\nHannu\r\n\r\n\r\n\n\n\n\n\nRe: [PERFORM] I/O on select count(*)\n\n\n\nHi Hannu,\n\r\nInteresting suggestion on the partial index!\n\r\nI'll find out if we can extract our code that did the work.  It was simple but scattered in a few routines.\n\r\nIn concept it worked like this:\n\r\n1 - Ignore if hint bits are unset, use them if set.  
This affects heapam and vacuum I think.\r\n2 - implement a cache for clog lookups based on the optimistic assumption that the data was inserted in bulk.  Put the cache one call away from heapgetnext()\n\r\nI forget the details of (2).  As I recall, if we fall off of the assumption, the penalty for long scans get large-ish (maybe 2X), but since when do people full table scan when they're updates/inserts are so scattered across TIDs?  It's an obvious big win for DW work.\n\r\nWe also have a GUC to turn it off if needed, in which case a vacuum will write the hint bits.\n\r\n- Luke\n\r\n----- Original Message -----\r\nFrom: Hannu Krosing <[email protected]>\r\nTo: Luke Lonergan\r\nCc: Pavan Deolasee <[email protected]>; Greg Smith <[email protected]>; Alvaro Herrera <[email protected]>; [email protected] <[email protected]>\r\nSent: Thu May 22 12:10:02 2008\r\nSubject: Re: [PERFORM] I/O on select count(*)\n\r\nOn Thu, 2008-05-15 at 10:52 +0800, Luke Lonergan wrote:\r\n> BTW – we’ve removed HINT bit checking in Greenplum DB and improved the\r\n> visibility caching which was enough to provide performance at the same\r\n> level as with the HINT bit optimization, but avoids this whole “write\r\n> the data, write it to the log also, then write it again just for good\r\n> measure” behavior.\r\n>\r\n> For people doing data warehousing work like the poster, this Postgres\r\n> behavior is miserable.  It should be fixed for 8.4 for sure\r\n> (volunteers?)\n\r\nI might try it. I think I have told you about my ideas ;)\r\nI plan to first do \"cacheing\" (for being able to doi index only scans\r\namong other things) and then if the cache works reliably, use the\r\n\"cacheing\" code as the main visibility / MVCC mechanism.\n\r\nIs Greenplums code available, or should I roll my own ?\n\r\n> BTW – for the poster’s benefit, you should implement partitioning by\r\n> date, then load each partition and VACUUM ANALYZE after each load.\r\n>  You probably won’t need the date index anymore – so your load times\r\n> will vastly improve (no indexes), you’ll store less data (no indexes)\r\n> and you’ll be able to do simpler data management with the partitions.\r\n>\r\n> You may also want to partition AND index if you do a lot of short\r\n> range selective date predicates.  Example would be: partition by day,\r\n> index on date field, queries selective on date ranges by hour will\r\n> then select out only the day needed, then index scan to get the\r\n> hourly values.\n\r\nIf your queries allow it, you may try indexing on\r\nint2::extract('HOUR' from date)\r\nso the index may be smaller\n\r\nstoring the date as type abstime is another way to reduce index size.\n\r\n> Typically time-oriented data is nearly time sorted anyway, so you’ll\r\n> also get the benefit of a clustered index.\n\r\n----------------\r\nHannu", "msg_date": "Thu, 22 May 2008 23:11:20 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I/O on select count(*)" } ]
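The per-day partitioning and hour-expression index suggested above look roughly like this; every object name, the column layout and the date are invented for the sketch, since the original schema is not shown, and on 8.x constraint_exclusion = on is also needed for the CHECK constraints to prune partitions.

CREATE TABLE events (created timestamp NOT NULL, payload text);

-- one partition per day, loaded in bulk:
CREATE TABLE events_2008_05_22 (
    CHECK (created >= '2008-05-22' AND created < '2008-05-23')
) INHERITS (events);

-- index on just the hour (a smaller int2 key), as suggested above:
CREATE INDEX events_2008_05_22_hour_idx
    ON events_2008_05_22 ((extract(hour FROM created)::int2));

-- after each bulk load, as recommended:
VACUUM ANALYZE events_2008_05_22;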
[ { "msg_contents": "I wonder why the join_collapse_limit default value is set to 8 while \ngeqo_threshold is 12. The optimizer doesn't reorder the JOINs of queries that \ncontain from 8 to 11 tables, so why is that considered a 'wise' choice, as the \ndocumentation puts it?\n\nfrom_collapse_limit (integer)\n\n The planner will merge sub-queries into upper queries if the\n resulting FROM list would have no more than this many items. Smaller\n values reduce planning time but might yield inferior query plans.\n The default is eight. It is usually wise to keep this less than\n geqo_threshold\n <http://www.postgresql.org/docs/8.3/static/runtime-config-query.html#GUC-GEQO-THRESHOLD>.\n For more information see Section 14.3\n <http://www.postgresql.org/docs/8.3/static/explicit-joins.html>.", "msg_date": "Fri, 23 May 2008 18:01:16 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "join/from_collapse_limit and geqo_threshold default values" } ]
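For reference, these are the settings in question; the values in the comments are the stock 8.3 defaults, and raising them trades extra planning time for the chance of a better join order.

SHOW from_collapse_limit;   -- 8
SHOW join_collapse_limit;   -- 8
SHOW geqo_threshold;        -- 12

-- let the planner reorder joins for queries of up to 11 relations,
-- at the price of more planning work, by raising the collapse limits:
SET from_collapse_limit = 12;
SET join_collapse_limit = 12;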
[ { "msg_contents": "Hello,\n\n We're planning new production server for PostgreSQL and I'm wondering\nwhich processor (or even platform) will be better: Quad Xeon or Quad\nOpteron (for example SUN now has a new offer Sun Fire X4440 x64).\n\nWhen I was buying my last database server, then SUN v40z was a really\nvery good choice (Intel's base server was slower). This v40z still works\npretty good but I need one more.\n\nAFAIK Intel made some changes in chipset but... is this better then AMD\nHyperTransport and Direct Connect Architecture from database point of\nview? How about L3 cache - is this important for performance?\n\nDo You have any opinions? Suggestions?\n\nThanks,\n\nBest regards\n\n-- \nAndrzej Zawadzki\n", "msg_date": "Fri, 23 May 2008 12:41:29 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": true, "msg_subject": "Quad Xeon or Quad Opteron?" }, { "msg_contents": "Andrzej Zawadzki wrote:\n> Hello,\n> \n> We're planning new production server for PostgreSQL and I'm wondering\n> which processor (or even platform) will be better: Quad Xeon or Quad\n> Opteron (for example SUN now has a new offer Sun Fire X4440 x64).\n\n[snip]\n\n> Suggestions?\n\nTo get a more useful response here, you might want to include some\ninformation about your workload and database size, and report on your\nplanned disk subsystem and RAM.\n\nAlso, based on what I've seen on this list rather than personal\nexperience, you might want to give more thought to your storage than to\nCPU power. The usual thrust of advice seems to be: Get a fast, battery\nbacked RAID controller. \"Fast\" does not mean \"fast sequential I/O in\nideal conditions so marketing can print a big number on the box\"; you\nneed to consider random I/O too. Get lots of fast disks. Get enough RAM\nto ensure that your indexes fit in RAM if possible.\n\nNote, however, that I have no direct experience with big Pg databases;\nI'm just trying to provide you with a guide of what information to\nprovide and what to think about so you can get better answers here from\npeople who actually have a clue.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 23 May 2008 19:34:41 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Xeon or Quad Opteron?" }, { "msg_contents": "> Also, based on what I've seen on this list rather than personal\n> experience, you might want to give more thought to your storage than to\n> CPU power. The usual thrust of advice seems to be: Get a fast, battery\n> backed RAID controller. \"Fast\" does not mean \"fast sequential I/O in\n> ideal conditions so marketing can print a big number on the box\"; you\n> need to consider random I/O too. Get lots of fast disks. Get enough RAM\n> to ensure that your indexes fit in RAM if possible.\n> Note, however, that I have no direct experience with big Pg databases;\n> I'm just trying to provide you with a guide of what information to\n> provide and what to think about so you can get better answers here from\n> people who actually have a clue.\n\nYep, we've had PostreSQL databases for a long time. The various\ncurrent generation processors, IMO, have no substantive difference in\npractice; at least not relative to the bang-for-the-buck or more RAM\nand good I/O.\n\n", "msg_date": "Fri, 23 May 2008 08:21:57 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Xeon or Quad Opteron?" 
}, { "msg_contents": "Hi,\nAs a gauge, we recently purchased several servers as our systems get\nclose to going operational. We bought Dell 2900s, with the cheapest quad\ncore processors (dual) and put most of the expense into lots of drives\n(8 15K 146GB SAS drives in a RAID 10 set), and the PERC 6 embedded\ncontroller with 512MB battery backed cache. That gives us more spindles,\nthe RAID redundancy we want, plus the high, reliable throughput of the\nBBC. The OS (and probably WAL) will run on a RAID 1 pair of 15K 76GB\ndrives. We also went with 8GB memory, which seemed to be the price cost\npoint in these systems (going above 8GB had a much higher cost).\nBesides, in our prototyping, or systems had 2GB, which we rarely\nexceeded, so 8GB should be plently (and we can always expand). \n\nSo really, if you can save money on processors by going Opteron (and\nyour IT department doesn't have an Intel-based system requirement like\nours), put what you save into a good disk I/O subsystem. Hope that\nhelps.\n\nDoug\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Adam Tauno\nWilliams\nSent: Friday, May 23, 2008 8:22 AM\nTo: pgsql-performance\nSubject: Re: [PERFORM] Quad Xeon or Quad Opteron?\n\n> Also, based on what I've seen on this list rather than personal\n> experience, you might want to give more thought to your storage than\nto\n> CPU power. The usual thrust of advice seems to be: Get a fast, battery\n> backed RAID controller. \"Fast\" does not mean \"fast sequential I/O in\n> ideal conditions so marketing can print a big number on the box\"; you\n> need to consider random I/O too. Get lots of fast disks. Get enough\nRAM\n> to ensure that your indexes fit in RAM if possible.\n> Note, however, that I have no direct experience with big Pg databases;\n> I'm just trying to provide you with a guide of what information to\n> provide and what to think about so you can get better answers here\nfrom\n> people who actually have a clue.\n\nYep, we've had PostreSQL databases for a long time. The various\ncurrent generation processors, IMO, have no substantive difference in\npractice; at least not relative to the bang-for-the-buck or more RAM\nand good I/O.\n\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 23 May 2008 08:36:38 -0400", "msg_from": "\"Knight, Doug\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Xeon or Quad Opteron?" }, { "msg_contents": "This may be of interest...\n\n\nhttp://weblog.infoworld.com/yager/archives/2008/05/ahead_of_the_cu_4.html\n\n-----Original Message-----\nFrom: [email protected] on behalf of Andrzej Zawadzki\nSent: Fri 5/23/2008 6:41 AM\nTo: [email protected]\nSubject: [PERFORM] Quad Xeon or Quad Opteron?\n \nHello,\n\n We're planning new production server for PostgreSQL and I'm wondering\nwhich processor (or even platform) will be better: Quad Xeon or Quad\nOpteron (for example SUN now has a new offer Sun Fire X4440 x64).\n\nWhen I was buying my last database server, then SUN v40z was a really\nvery good choice (Intel's base server was slower). This v40z still works\npretty good but I need one more.\n\nAFAIK Intel made some changes in chipset but... is this better then AMD\nHyperTransport and Direct Connect Architecture from database point of\nview? How about L3 cache - is this important for performance?\n\nDo You have any opinions? 
Suggestions?\n\nThanks,\n\nBest regards\n\n-- \nAndrzej Zawadzki\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\nRE: [PERFORM] Quad Xeon or Quad Opteron?\n\n\n\nThis may be of interest...\n\n\nhttp://weblog.infoworld.com/yager/archives/2008/05/ahead_of_the_cu_4.html\n\n-----Original Message-----\nFrom: [email protected] on behalf of Andrzej Zawadzki\nSent: Fri 5/23/2008 6:41 AM\nTo: [email protected]\nSubject: [PERFORM] Quad Xeon or Quad Opteron?\n\nHello,\n\n We're planning new production server for PostgreSQL and I'm wondering\nwhich processor (or even platform) will be better: Quad Xeon or Quad\nOpteron (for example SUN now has a new offer Sun Fire X4440 x64).\n\nWhen I was buying my last database server, then SUN v40z was a really\nvery good choice (Intel's base server was slower). This v40z still works\npretty good but I need one more.\n\nAFAIK Intel made some changes in chipset but... is this better then AMD\nHyperTransport and Direct Connect Architecture from database point of\nview? How about L3 cache - is this important for performance?\n\nDo You have any opinions? Suggestions?\n\nThanks,\n\nBest regards\n\n--\nAndrzej Zawadzki\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 23 May 2008 08:50:23 -0400", "msg_from": "\"Reid Thompson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Xeon or Quad Opteron?" }, { "msg_contents": "Craig Ringer wrote:\n> Andrzej Zawadzki wrote:\n> \n>> Hello,\n>>\n>> We're planning new production server for PostgreSQL and I'm wondering\n>> which processor (or even platform) will be better: Quad Xeon or Quad\n>> Opteron (for example SUN now has a new offer Sun Fire X4440 x64).\n>> \n>\n> [snip]\n>\n> \n>> Suggestions?\n>> \n>\n> To get a more useful response here, you might want to include some\n> information about your workload and database size, and report on your\n> planned disk subsystem and RAM.\n> \nDisk subsystem:\nHitachi AMS200, 12x10krpm SAS drives in RAID 10 (+1 hot spare), 1GB mem\nwith battery\nDatabase is ~60GB and growing ;-)\nWorkloads: ~94% - SELECTS\nQ/sek: Avg~300 (1000 in peak)\n\nServer:\nv40z is a 4xdouble core with 16GB RAM\n\n> Also, based on what I've seen on this list rather than personal\n> experience, you might want to give more thought to your storage than to\n> CPU power. The usual thrust of advice seems to be: Get a fast, battery\n> backed RAID controller. \"Fast\" does not mean \"fast sequential I/O in\n> ideal conditions so marketing can print a big number on the box\"; you\n> need to consider random I/O too. Get lots of fast disks. Get enough RAM\n> to ensure that your indexes fit in RAM if possible.\n> \nYes, of course You are right: disks are very important - I know that\nespecially after switch to SAN.\nBut server is getting older ;-) - I need good warranty - I have 3 years\nfrom SUN for example.\n\nps. After reading about HP: SA P800 with StorageWorks MSA70 I'm\nconsidering buying such storage with ~20 disks.\n\n[...]\n\n-- \nAndrzej Zawadzki\n\n", "msg_date": "Sat, 24 May 2008 18:39:16 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Quad Xeon or Quad Opteron?" 
}, { "msg_contents": "Knight, Doug wrote:\n> Hi,\n> As a gauge, we recently purchased several servers as our systems get\n> close to going operational. We bought Dell 2900s, with the cheapest quad\n> core processors (dual) and put most of the expense into lots of drives\n> (8 15K 146GB SAS drives in a RAID 10 set), and the PERC 6 embedded\n> controller with 512MB battery backed cache. That gives us more spindles,\n> the RAID redundancy we want, plus the high, reliable throughput of the\n> BBC. The OS (and probably WAL) will run on a RAID 1 pair of 15K 76GB\n> drives. We also went with 8GB memory, which seemed to be the price cost\n> point in these systems (going above 8GB had a much higher cost).\n> Besides, in our prototyping, or systems had 2GB, which we rarely\n> exceeded, so 8GB should be plently (and we can always expand). \n>\n> So really, if you can save money on processors by going Opteron (and\n> your IT department doesn't have an Intel-based system requirement like\n> ours), put what you save into a good disk I/O subsystem. Hope that\n> helps.\n> \nTop posting? Bleee ;-) How to read now?\n\nOK I know that IO is most important for database but: I'm sorry, my\nquestion is about processor/platform choice? :-)\nI have to buy new server and I want optimal one.\nLike I've wrote in different email my IO subsystem is quite good for now.\n\nps. To admin of that list: what is with Reply-to on that list?\n\n\n\n> Doug\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Adam Tauno\n> Williams\n> Sent: Friday, May 23, 2008 8:22 AM\n> To: pgsql-performance\n> Subject: Re: [PERFORM] Quad Xeon or Quad Opteron?\n>\n> \n>> Also, based on what I've seen on this list rather than personal\n>> experience, you might want to give more thought to your storage than\n>> \n> to\n> \n>> CPU power. The usual thrust of advice seems to be: Get a fast, battery\n>> backed RAID controller. \"Fast\" does not mean \"fast sequential I/O in\n>> ideal conditions so marketing can print a big number on the box\"; you\n>> need to consider random I/O too. Get lots of fast disks. Get enough\n>> \n> RAM\n> \n>> to ensure that your indexes fit in RAM if possible.\n>> Note, however, that I have no direct experience with big Pg databases;\n>> I'm just trying to provide you with a guide of what information to\n>> provide and what to think about so you can get better answers here\n>> \n> from\n> \n>> people who actually have a clue.\n>> \n>\n> Yep, we've had PostreSQL databases for a long time. The various\n> current generation processors, IMO, have no substantive difference in\n> practice; at least not relative to the bang-for-the-buck or more RAM\n> and good I/O.\n>\n>\n> \n\n", "msg_date": "Sat, 24 May 2008 18:49:52 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Quad Xeon or Quad Opteron?" }, { "msg_contents": "On Fri, May 23, 2008 at 3:41 AM, Andrzej Zawadzki <[email protected]> wrote:\n\n> Hello,\n>\n> We're planning new production server for PostgreSQL and I'm wondering\n> which processor (or even platform) will be better: Quad Xeon or Quad\n> Opteron (for example SUN now has a new offer Sun Fire X4440 x64).\n>\n> When I was buying my last database server, then SUN v40z was a really\n> very good choice (Intel's base server was slower). This v40z still works\n> pretty good but I need one more.\n>\n> AFAIK Intel made some changes in chipset but... 
is this better then AMD\n> HyperTransport and Direct Connect Architecture from database point of\n> view? How about L3 cache - is this important for performance?\n>\n\nIntel's chipset is still broken when using dual sockets and quad core\nprocessors. The problem manifests itself as excessive cache line bouncing.\nIn my opinion the best bang/buck combo on the CPU side is the fastest\ndual-core Xeon CPUs you can find. You get excellent single-thread\nperformance and you still have four processors, which was a fantasy for most\npeople only 5 years ago. In addition you can put a ton of memory in the new\nXeon machines. 64GB is completely practical.\n\nI still run several servers on Opterons but in my opinion they don't make\nsense right now unless you truly need the CPU parallelism.\n\n-jwb\n\nOn Fri, May 23, 2008 at 3:41 AM, Andrzej Zawadzki <[email protected]> wrote:\nHello,\n\n We're planning new production server for PostgreSQL and I'm wondering\nwhich processor (or even platform) will be better: Quad Xeon or Quad\nOpteron (for example SUN now has a new offer Sun Fire X4440 x64).\n\nWhen I was buying my last database server, then SUN v40z was a really\nvery good choice (Intel's base server was slower). This v40z still works\npretty good but I need one more.\n\nAFAIK Intel made some changes in chipset but... is this better then AMD\nHyperTransport and Direct Connect Architecture from database point of\nview? How about L3 cache - is this important for performance?\nIntel's chipset is still broken when using dual sockets and quad core processors.  The problem manifests itself as excessive cache line bouncing.  In my opinion the best bang/buck combo on the CPU side is the fastest dual-core Xeon CPUs you can find.  You get excellent single-thread performance and you still have four processors, which was a fantasy for most people only 5 years ago.  In addition you can put a ton of memory in the new Xeon machines.  64GB is completely practical.\nI still run several servers on Opterons but in my opinion they don't make sense right now unless you truly need the CPU parallelism.-jwb", "msg_date": "Sat, 24 May 2008 13:39:15 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Xeon or Quad Opteron?" } ]
[ { "msg_contents": "I have a large table with about 2 million rows and it will keep growing...\n\nI need to do update/inserts, and select as well.\n\nAn index will speed up the select, but it will slow down the updates.\n\nAre all Postgres indexes ordered? i.e., with every update, the index pages will have to be physically reordered?\n\nDoes Postgres have any kind of non-ordered indexes (like Syabse's non-clustered index)?\n\nWhat is the common way to take care of the performance issue when you have to do both update and select on the same large table?\n\nThanks,\nJessica\n\n\n\n \nI have a large table with about 2 million rows and it will keep growing...\n\nI need to do update/inserts, and select as well.\n\nAn index will speed up the select, but it will slow down the updates.\n\nAre all Postgres indexes ordered? i.e., with every update, the index pages will have to be physically reordered?\n\nDoes Postgres have any kind of non-ordered indexes (like Syabse's non-clustered index)?\n\nWhat is the common way to take care of the performance issue when you have to do both update and select on the same large table?\n\nThanks,\nJessica", "msg_date": "Fri, 23 May 2008 06:21:17 -0700 (PDT)", "msg_from": "Jessica Richard <[email protected]>", "msg_from_op": true, "msg_subject": "index performance on large tables with update and insert" }, { "msg_contents": "Jessica Richard wrote:\n> I have a large table with about 2 million rows and it will keep\n> growing...\n> \n> I need to do update/inserts, and select as well.\n> \n> An index will speed up the select, but it will slow down the updates.\n> \n> Are all Postgres indexes ordered? i.e., with every update, the index\n> pages will have to be physically reordered?\n> \n> Does Postgres have any kind of non-ordered indexes (like Syabse's\n> non-clustered index)?\n\nAll PostgreSQL indexes are like the non-clustered ones in Sybase or SQL\nServer.\n\n\n> What is the common way to take care of the performance issue when you\n> have to do both update and select on the same large table?\n\nCreate the indexes you actually need to make the selects and updates\nfast, just make sure you don't create any unnecessary ones. Usually,\nyour UPDATEs will also require indexes - only the INSERTs actually are\nlosing.\n\n//Magnus\n", "msg_date": "Fri, 23 May 2008 12:35:17 -0400", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index performance on large tables with update and\n insert" } ]
[ { "msg_contents": "I have a large table with about 2 million rows and it will keep growing...\n\nI need to do update/inserts, and select as well.\n\nAn index will speed up the select, but it will slow down the updates.\n\nAre all Postgres indexes ordered? i.e., with every update, the index pages will have to be physically reordered?\n\nDoes Postgres have any kind of non-ordered indexes (like Syabse's non-clustered index)?\n\nWhat is the common way to take care of the performance issue when you have to do both update and select on the same large table?\n\nThanks,\nJessica", "msg_date": "Fri, 23 May 2008 06:21:17 -0700 (PDT)", "msg_from": "Jessica Richard <[email protected]>", "msg_from_op": true, "msg_subject": "index performance on large tables with update and insert" }, { "msg_contents": "Jessica Richard wrote:\n> I have a large table with about 2 million rows and it will keep\n> growing...\n> \n> I need to do update/inserts, and select as well.\n> \n> An index will speed up the select, but it will slow down the updates.\n> \n> Are all Postgres indexes ordered? i.e., with every update, the index\n> pages will have to be physically reordered?\n> \n> Does Postgres have any kind of non-ordered indexes (like Syabse's\n> non-clustered index)?\n\nAll PostgreSQL indexes are like the non-clustered ones in Sybase or SQL\nServer.\n\n\n> What is the common way to take care of the performance issue when you\n> have to do both update and select on the same large table?\n\nCreate the indexes you actually need to make the selects and updates\nfast, just make sure you don't create any unnecessary ones. Usually,\nyour UPDATEs will also require indexes - only the INSERTs actually are\nlosing.\n\n//Magnus\n", "msg_date": "Fri, 23 May 2008 12:35:17 -0400", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index performance on large tables with update and\n insert" } ]
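One way to follow the "no unnecessary indexes" advice is to watch the statistics views. This is a generic sketch rather than something taken from the thread, and it assumes row-level statistics collection is enabled so the counters are populated.

-- indexes that are never scanned still cost something on every INSERT/UPDATE;
-- idx_scan = 0 over a representative period marks candidates for dropping:
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan, indexrelname;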
[ { "msg_contents": "PostgreSQL: 8.2.X\n\nOS: Red Hat Linux 4.X\n\n \n\nI have started to define tablespaces on different disks. Is there any\nperformance issues related to referencing tablespaces on different disks\nwith symbolic links? By using symbolic links to tablespaces can I then\nstop the database and move a particular tablespace from one location to\nanother without causing a problem? This would seem to give a lot of\nflexibility to the location of tablespaces. \n\n \n\nThanks,\n\n \n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\nMy e-mail address has changed to [email protected]\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL: 8.2.X\nOS: Red Hat Linux 4.X\n \nI have started to define tablespaces on different\ndisks.  Is there any performance issues related to referencing tablespaces\non different disks with symbolic links?  By using symbolic links to\ntablespaces can I then stop the database and move a particular tablespace from\none location to another without causing a problem?  This would seem to give\na lot of flexibility to the location of tablespaces.  \n \nThanks,\n \n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\nMy e-mail address has changed to [email protected]", "msg_date": "Mon, 26 May 2008 09:11:51 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Symbolic Links to Tablespaces" }, { "msg_contents": "\"Campbell, Lance\" <[email protected]> writes:\n> I have started to define tablespaces on different disks. Is there any\n> performance issues related to referencing tablespaces on different disks\n> with symbolic links? By using symbolic links to tablespaces can I then\n> stop the database and move a particular tablespace from one location to\n> another without causing a problem? This would seem to give a lot of\n> flexibility to the location of tablespaces. \n\nA tablespace already is a symbolic link --- read\nhttp://www.postgresql.org/docs/8.2/static/storage.html\n\nPutting another one into the path will eat cycles and doesn't seem like\nit could buy anything.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 May 2008 11:08:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Symbolic Links to Tablespaces " }, { "msg_contents": "Once I have assigned tables and indexes to a particular tablespace that\npoints to a particular location on disk is there a simple way to move\nthe files to a new location?\n\nExample:\nTable xyz is using tablespace xyz_tbl which is located at\n/somedir/xyz_tbl on the disk. If I want to move it to a new disk\nlocated at /someotherdir/xyz_tbl/ how can I do that easily? \n\nDo I have to backup all of the tables using the tablespace xyz_tbl, drop\nthe tables, drop the tablespace, recreate the tablespace with a\ndifferent disk location and then finally reload the tables and data? Or\nis there an easier way? 
Is there a move tablespace disk location\ncommand?\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\nMy e-mail address has changed to [email protected]\n \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, May 26, 2008 10:09 AM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] Symbolic Links to Tablespaces \n\n\"Campbell, Lance\" <[email protected]> writes:\n> I have started to define tablespaces on different disks. Is there any\n> performance issues related to referencing tablespaces on different\ndisks\n> with symbolic links? By using symbolic links to tablespaces can I\nthen\n> stop the database and move a particular tablespace from one location\nto\n> another without causing a problem? This would seem to give a lot of\n> flexibility to the location of tablespaces. \n\nA tablespace already is a symbolic link --- read\nhttp://www.postgresql.org/docs/8.2/static/storage.html\n\nPutting another one into the path will eat cycles and doesn't seem like\nit could buy anything.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 May 2008 12:10:07 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Symbolic Links to Tablespaces " }, { "msg_contents": "am Mon, dem 26.05.2008, um 12:10:07 -0500 mailte Campbell, Lance folgendes:\n> Once I have assigned tables and indexes to a particular tablespace that\n> points to a particular location on disk is there a simple way to move\n> the files to a new location?\n> \n> Example:\n> Table xyz is using tablespace xyz_tbl which is located at\n> /somedir/xyz_tbl on the disk. If I want to move it to a new disk\n> located at /someotherdir/xyz_tbl/ how can I do that easily? \n\nALTER TABLE SET TABLESPACE\n\nhttp://www.postgresql.org/docs/8.3/static/sql-altertable.html\n\n\nPS.: please no top-posting.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Mon, 26 May 2008 19:32:12 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Symbolic Links to Tablespaces" }, { "msg_contents": "Campbell, Lance wrote:\n> Once I have assigned tables and indexes to a particular tablespace that\n> points to a particular location on disk is there a simple way to move\n> the files to a new location?\n> \n> Example:\n> Table xyz is using tablespace xyz_tbl which is located at\n> /somedir/xyz_tbl on the disk. If I want to move it to a new disk\n> located at /someotherdir/xyz_tbl/ how can I do that easily? \n\nShut down the database server, replace the symbolic link in \ndata/pg_tblspc to the new location, and start the server again. The \nlocation is also stored in pg_tablespace catalog; you'll need to fix it \nwith \"UPDATE pg_tablespace SET spclocation ='/someotherdir/xyz_tbl' \nWHERE spcname='xyz_tbl'\", or pg_dumpall will still show the old location.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 26 May 2008 19:26:11 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Symbolic Links to Tablespaces" }, { "msg_contents": "Heikki Linnakangas wrote:\n...\n> Shut down the database server, replace the symbolic link in \n> data/pg_tblspc to the new location, and start the server again. 
The \n> location is also stored in pg_tablespace catalog; you'll need to fix it \n> with \"UPDATE pg_tablespace SET spclocation ='/someotherdir/xyz_tbl' \n> WHERE spcname='xyz_tbl'\", or pg_dumpall will still show the old location.\n> \nwouldn't alter tablespace be not more easy and less fragile?\n\nT.", "msg_date": "Mon, 26 May 2008 21:19:03 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Symbolic Links to Tablespaces" }, { "msg_contents": "Tino Wildenhain wrote:\n> Heikki Linnakangas wrote:\n> ...\n>> Shut down the database server, replace the symbolic link in \n>> data/pg_tblspc to the new location, and start the server again. The \n>> location is also stored in pg_tablespace catalog; you'll need to fix \n>> it with \"UPDATE pg_tablespace SET spclocation ='/someotherdir/xyz_tbl' \n>> WHERE spcname='xyz_tbl'\", or pg_dumpall will still show the old location.\n>>\n> wouldn't alter tablespace be not more easy and less fragile?\n\nYes. But it requires copying all the data.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 26 May 2008 21:26:15 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Symbolic Links to Tablespaces" } ]
[ { "msg_contents": "Hi, is there anyway this can be made faster? id is the primary key,\nand there is an index on uid..\n\nthanks\n\n\nEXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\nDESC limit 6;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..9329.02 rows=6 width=135) (actual\ntime=13612.247..13612.247 rows=0 loops=1)\n -> Index Scan Backward using pokes_pkey on pokes\n(cost=0.00..5182270.69 rows=3333 width=135) (actual\ntime=13612.245..13612.245 rows=0 loops=1)\n Filter: (uid = 578439028)\n Total runtime: 13612.369 ms\n(4 rows)\n", "msg_date": "Mon, 26 May 2008 15:49:35 -0700", "msg_from": "mark <[email protected]>", "msg_from_op": true, "msg_subject": "select query takes 13 seconds to run with index" }, { "msg_contents": "\n\nmark wrote:\n> Hi, is there anyway this can be made faster? id is the primary key,\n> and there is an index on uid..\n>\n> thanks\n>\n>\n> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n> DESC limit 6;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..9329.02 rows=6 width=135) (actual\n> time=13612.247..13612.247 rows=0 loops=1)\n> -> Index Scan Backward using pokes_pkey on pokes\n> (cost=0.00..5182270.69 rows=3333 width=135) (actual\n> time=13612.245..13612.245 rows=0 loops=1)\n> Filter: (uid = 578439028)\n> Total runtime: 13612.369 ms\n> (4 rows)\n>\n> \nFirst this should be posted on performance list.\n\nhow many records are in this table? The uid 578,439,028 assuming this \nis auto incremented key that started 1 this means there is 578 million \nrecords in the table. \n\nThe estimate is way off, when was the last time Vaccum was on the table?\n\nWhat verison of Postgresql are you running\nSize of the Table\nTable layout\nLoad on the database\n\n\n\n", "msg_date": "Mon, 26 May 2008 19:26:33 -0400", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query takes 13 seconds to run with index" }, { "msg_contents": "On Mon, May 26, 2008 at 4:26 PM, Justin <[email protected]> wrote:\n> mark wrote:\n>> Hi, is there anyway this can be made faster? 
id is the primary key,\n>> and there is an index on uid..\n>> thanks\n>> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n>> DESC limit 6;\n>> QUERY\n>> PLAN\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..9329.02 rows=6 width=135) (actual\n>> time=13612.247..13612.247 rows=0 loops=1)\n>> -> Index Scan Backward using pokes_pkey on pokes\n>> (cost=0.00..5182270.69 rows=3333 width=135) (actual\n>> time=13612.245..13612.245 rows=0 loops=1)\n>> Filter: (uid = 578439028)\n>> Total runtime: 13612.369 ms\n>> (4 rows)\n> First this should be posted on performance list.\nsorry about this.\n\n> how many records are in this table?\n22334262, 22 million records.\n\n> The estimate is way off, when was the last time Vaccum was on the table?\nabout a week ago i ran this VACUUM VERBOSE ANALYZE;\nthis table is never updated or deleted, rows are just inserted...\n\n\n> What verison of Postgresql are you running\n8.3.1\n\n> Size of the Table\n22 million rows approximately\n\n> Table layout\nCREATE TABLE pokes\n(\n id serial NOT NULL,\n uid integer,\n action_id integer,\n created timestamp without time zone DEFAULT now(),\n friend_id integer,\n message text,\n pic text,\n \"name\" text,\n CONSTRAINT pokes_pkey PRIMARY KEY (id)\n)\nWITH (OIDS=FALSE);\nALTER TABLE pokes OWNER TO postgres;\n\n-- Index: idx_action_idx\n\n-- DROP INDEX idx_action_idx;\n\nCREATE INDEX idx_action_idx\n ON pokes\n USING btree\n (action_id);\n\n-- Index: idx_friend_id\n\n-- DROP INDEX idx_friend_id;\n\nCREATE INDEX idx_friend_id\n ON pokes\n USING btree\n (friend_id);\n\n-- Index: idx_pokes_uid\n\n-- DROP INDEX idx_pokes_uid;\n\nCREATE INDEX idx_pokes_uid\n ON pokes\n USING btree\n (uid);\n\n\n> Load on the database\nhow do i measure load on database?\n", "msg_date": "Mon, 26 May 2008 16:32:50 -0700", "msg_from": "mark <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select query takes 13 seconds to run with index" }, { "msg_contents": "mark wrote:\n> On Mon, May 26, 2008 at 4:26 PM, Justin <[email protected]> wrote:\n> \n>> mark wrote:\n>> \n>>> Hi, is there anyway this can be made faster? 
id is the primary key,\n>>> and there is an index on uid..\n>>> thanks\n>>> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n>>> DESC limit 6;\n>>> QUERY\n>>> PLAN\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Limit (cost=0.00..9329.02 rows=6 width=135) (actual\n>>> time=13612.247..13612.247 rows=0 loops=1)\n>>> -> Index Scan Backward using pokes_pkey on pokes\n>>> (cost=0.00..5182270.69 rows=3333 width=135) (actual\n>>> time=13612.245..13612.245 rows=0 loops=1)\n>>> Filter: (uid = 578439028)\n>>> Total runtime: 13612.369 ms\n>>> (4 rows)\n>>> \n>> First this should be posted on performance list.\n>> \n> sorry about this.\n>\n> \n>> how many records are in this table?\n>> \n> 22334262, 22 million records.\n>\n> \n>> The estimate is way off, when was the last time Vaccum was on the table?\n>> \n> about a week ago i ran this VACUUM VERBOSE ANALYZE;\n> this table is never updated or deleted, rows are just inserted...\n>\n>\n> \n>> What verison of Postgresql are you running\n>> \n> 8.3.1\n>\n> \n>> Size of the Table\n>> \n> 22 million rows approximately\n> \nI have no experience on large datasets so people with more experience \nin this area are going to have to chime in.\nMy gut feel is 13 seconds for Postgresql to sort through an index of \nthat size and table is not bad. \n\nyou may need to take a look at hardware and postgresql.config settings \nto improve the performance for this query\n\nThis query is very simple where changing it around or adding index \nresults massive improvements is not going to help in this case.\n> \n>> Table layout\n>> \n> CREATE TABLE pokes\n> (\n> id serial NOT NULL,\n> uid integer,\n> action_id integer,\n> created timestamp without time zone DEFAULT now(),\n> friend_id integer,\n> message text,\n> pic text,\n> \"name\" text,\n> CONSTRAINT pokes_pkey PRIMARY KEY (id)\n> )\n> WITH (OIDS=FALSE);\n> ALTER TABLE pokes OWNER TO postgres;\n>\n> -- Index: idx_action_idx\n>\n> -- DROP INDEX idx_action_idx;\n>\n> CREATE INDEX idx_action_idx\n> ON pokes\n> USING btree\n> (action_id);\n>\n> -- Index: idx_friend_id\n>\n> -- DROP INDEX idx_friend_id;\n>\n> CREATE INDEX idx_friend_id\n> ON pokes\n> USING btree\n> (friend_id);\n>\n> -- Index: idx_pokes_uid\n>\n> -- DROP INDEX idx_pokes_uid;\n>\n> CREATE INDEX idx_pokes_uid\n> ON pokes\n> USING btree\n> (uid);\n>\n>\n> \n>> Load on the database\n>> \n> how do i measure load on database?\n> \n\nHow many users are attached to the server at any given time. how many \ninserts, deletes selects are being done on the server. Its number TPS \non the server.\n\n\n\n\n\n\n\n\nmark wrote:\n\nOn Mon, May 26, 2008 at 4:26 PM, Justin <[email protected]> wrote:\n \n\nmark wrote:\n \n\nHi, is there anyway this can be made faster? 
id is the primary key,\nand there is an index on uid..\nthanks\nEXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\nDESC limit 6;\n QUERY\nPLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..9329.02 rows=6 width=135) (actual\ntime=13612.247..13612.247 rows=0 loops=1)\n -> Index Scan Backward using pokes_pkey on pokes\n(cost=0.00..5182270.69 rows=3333 width=135) (actual\ntime=13612.245..13612.245 rows=0 loops=1)\n Filter: (uid = 578439028)\n Total runtime: 13612.369 ms\n(4 rows)\n \n\nFirst this should be posted on performance list.\n \n\nsorry about this.\n\n \n\nhow many records are in this table?\n \n\n22334262, 22 million records.\n\n \n\nThe estimate is way off, when was the last time Vaccum was on the table?\n \n\nabout a week ago i ran this VACUUM VERBOSE ANALYZE;\nthis table is never updated or deleted, rows are just inserted...\n\n\n \n\nWhat verison of Postgresql are you running\n \n\n8.3.1\n\n \n\nSize of the Table\n \n\n22 million rows approximately\n \n\nI have no experience  on large datasets so people with more experience\nin this area are going to have to chime in.\nMy gut feel is 13 seconds for Postgresql to sort through an index of\nthat size and table is not bad.  \n\nyou may need to take a look at hardware and postgresql.config settings\nto improve the performance for this query\n\nThis query is very simple where changing it around or adding index\nresults massive improvements is not going to help in this case. \n\n\n \n\nTable layout\n \n\nCREATE TABLE pokes\n(\n id serial NOT NULL,\n uid integer,\n action_id integer,\n created timestamp without time zone DEFAULT now(),\n friend_id integer,\n message text,\n pic text,\n \"name\" text,\n CONSTRAINT pokes_pkey PRIMARY KEY (id)\n)\nWITH (OIDS=FALSE);\nALTER TABLE pokes OWNER TO postgres;\n\n-- Index: idx_action_idx\n\n-- DROP INDEX idx_action_idx;\n\nCREATE INDEX idx_action_idx\n ON pokes\n USING btree\n (action_id);\n\n-- Index: idx_friend_id\n\n-- DROP INDEX idx_friend_id;\n\nCREATE INDEX idx_friend_id\n ON pokes\n USING btree\n (friend_id);\n\n-- Index: idx_pokes_uid\n\n-- DROP INDEX idx_pokes_uid;\n\nCREATE INDEX idx_pokes_uid\n ON pokes\n USING btree\n (uid);\n\n\n \n\nLoad on the database\n \n\nhow do i measure load on database?\n \n\n\nHow many users are attached to the server at any given time.  how many\ninserts, deletes selects are being done on the server.  Its number \nTPS  on the server.", "msg_date": "Mon, 26 May 2008 19:49:08 -0400", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query takes 13 seconds to run with index" }, { "msg_contents": "On Mon, May 26, 2008 at 4:49 PM, Justin <[email protected]> wrote:\n> mark wrote:\n>\n> On Mon, May 26, 2008 at 4:26 PM, Justin <[email protected]> wrote:\n> mark wrote:\n> Hi, is there anyway this can be made faster? 
id is the primary key,\n> and there is an index on uid..\n> thanks\n> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n> DESC limit 6;\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..9329.02 rows=6 width=135) (actual\n> time=13612.247..13612.247 rows=0 loops=1)\n> -> Index Scan Backward using pokes_pkey on pokes\n> (cost=0.00..5182270.69 rows=3333 width=135) (actual\n> time=13612.245..13612.245 rows=0 loops=1)\n> Filter: (uid = 578439028)\n> Total runtime: 13612.369 ms\n> (4 rows)\n>\n>\n> First this should be posted on performance list.\n>\n>\n> sorry about this.\n>\n>\n>\n> how many records are in this table?\n>\n>\n> 22334262, 22 million records.\n>\n>\n>\n> The estimate is way off, when was the last time Vaccum was on the table?\n>\n>\n> about a week ago i ran this VACUUM VERBOSE ANALYZE;\n> this table is never updated or deleted, rows are just inserted...\n>\n>\n>\n>\n> What verison of Postgresql are you running\n>\n>\n> 8.3.1\n>\n>\n>\n> Size of the Table\n>\n>\n> 22 million rows approximately\n>\n>\n> I have no experience on large datasets so people with more experience in\n> this area are going to have to chime in.\n> My gut feel is 13 seconds for Postgresql to sort through an index of that\n> size and table is not bad.\n>\n> you may need to take a look at hardware and postgresql.config settings to\n> improve the performance for this query\n>\n> This query is very simple where changing it around or adding index results\n> massive improvements is not going to help in this case.\nthe hardware is e5405 dual quad core on a 16GB RAM machine, with 8.3.1\ndefault settings except maximum connections increased...\n", "msg_date": "Mon, 26 May 2008 16:57:06 -0700", "msg_from": "mark <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select query takes 13 seconds to run with index" }, { "msg_contents": "Justin wrote:\n>\n>\n> mark wrote:\n>> On Mon, May 26, 2008 at 4:26 PM, Justin <[email protected]> wrote:\n>> \n>>> mark wrote:\n>>> \n>>>> Hi, is there anyway this can be made faster? 
id is the primary key,\n>>>> and there is an index on uid..\n>>>> thanks\n>>>> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n>>>> DESC limit 6;\n>>>> QUERY\n>>>> PLAN\n>>>>\n>>>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> Limit (cost=0.00..9329.02 rows=6 width=135) (actual\n>>>> time=13612.247..13612.247 rows=0 loops=1)\n>>>> -> Index Scan Backward using pokes_pkey on pokes\n>>>> (cost=0.00..5182270.69 rows=3333 width=135) (actual\n>>>> time=13612.245..13612.245 rows=0 loops=1)\n>>>> Filter: (uid = 578439028)\n>>>> Total runtime: 13612.369 ms\n>>>> (4 rows)\n>>>> \n>>> First this should be posted on performance list.\n>>> \n>> sorry about this.\n>>\n>> \n>>> how many records are in this table?\n>>> \n>> 22334262, 22 million records.\n>>\n>> \n>>> The estimate is way off, when was the last time Vaccum was on the table?\n>>> \n>> about a week ago i ran this VACUUM VERBOSE ANALYZE;\n>> this table is never updated or deleted, rows are just inserted...\n>>\n>>\n>> \n>>> What verison of Postgresql are you running\n>>> \n>> 8.3.1\n>>\n>> \n>>> Size of the Table\n>>> \n>> 22 million rows approximately\n>> \n> I have no experience on large datasets so people with more experience in this area are going to have to chime in.\n> My gut feel is 13 seconds for Postgresql to sort through an index of that size and table is not bad. \n>\n> you may need to take a look at hardware and postgresql.config settings to improve the performance for this query\n>\n> This query is very simple where changing it around or adding index results massive improvements is not going to help in this case.\n>> \n>>> Table layout\n>>> \n>> CREATE TABLE pokes\n>> (\n>> id serial NOT NULL,\n>> uid integer,\n>> action_id integer,\n>> created timestamp without time zone DEFAULT now(),\n>> friend_id integer,\n>> message text,\n>> pic text,\n>> \"name\" text,\n>> CONSTRAINT pokes_pkey PRIMARY KEY (id)\n>> )\n>> WITH (OIDS=FALSE);\n>> ALTER TABLE pokes OWNER TO postgres;\n>>\n>> -- Index: idx_action_idx\n>>\n>> -- DROP INDEX idx_action_idx;\n>>\n>> CREATE INDEX idx_action_idx\n>> ON pokes\n>> USING btree\n>> (action_id);\n>>\n>> -- Index: idx_friend_id\n>>\n>> -- DROP INDEX idx_friend_id;\n>>\n>> CREATE INDEX idx_friend_id\n>> ON pokes\n>> USING btree\n>> (friend_id);\n>>\n>> -- Index: idx_pokes_uid\n>>\n>> -- DROP INDEX idx_pokes_uid;\n>>\n>> CREATE INDEX idx_pokes_uid\n>> ON pokes\n>> USING btree\n>> (uid);\n>>\n>>\n>> \n>>> Load on the database\n>>> \n>> how do i measure load on database?\n>> \n>\n> How many users are attached to the server at any given time. how many inserts, deletes selects are being done on the server. Its number TPS on the server.\n\nJustin wrote:\n>\n>\n> mark wrote:\n>> On Mon, May 26, 2008 at 4:26 PM, Justin <[email protected]> wrote:\n>> \n>>> mark wrote:\n>>> \n>>>> Hi, is there anyway this can be made faster? 
id is the primary key,\n>>>> and there is an index on uid..\n>>>> thanks\n>>>> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n>>>> DESC limit 6;\n>>>> QUERY\n>>>> PLAN\n>>>>\n>>>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>>>> Limit (cost=0.00..9329.02 rows=6 width=135) (actual\n>>>> time=13612.247..13612.247 rows=0 loops=1)\n>>>> -> Index Scan Backward using pokes_pkey on pokes\n>>>> (cost=0.00..5182270.69 rows=3333 width=135) (actual\n>>>> time=13612.245..13612.245 rows=0 loops=1)\n>>>> Filter: (uid = 578439028)\n>>>> Total runtime: 13612.369 ms\n>>>> (4 rows)\n>>>> \n>>> First this should be posted on performance list.\n>>> \n>> sorry about this.\n>>\n>> \n>>> how many records are in this table?\n>>> \n>> 22334262, 22 million records.\n>>\n>> \n>>> The estimate is way off, when was the last time Vaccum was on the table?\n>>> \n>> about a week ago i ran this VACUUM VERBOSE ANALYZE;\n>> this table is never updated or deleted, rows are just inserted...\n>>\n>>\n>> \n>>> What verison of Postgresql are you running\n>>> \n>> 8.3.1\n>>\n>> \n>>> Size of the Table\n>>> \n>> 22 million rows approximately\n>> \n> I have no experience on large datasets so people with more experience in this area are going to have to chime in.\n> My gut feel is 13 seconds for Postgresql to sort through an index of that size and table is not bad. \n>\n> you may need to take a look at hardware and postgresql.config settings to improve the performance for this query\n>\n> This query is very simple where changing it around or adding index results massive\n> improvements is not going to help in this case.\n\nI just ran a test on not particularly impressive hardware (8.2.6) on a table with 58980741 rows:\nbilling=# explain analyze select * from stats_asset_use where date = '2006-03-12' order by tracking_id desc limit 6;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..5.45 rows=6 width=38) (actual time=0.028..0.037 rows=6 loops=1)\n -> Index Scan Backward using stats_day_ndx on stats_asset_use (cost=0.00..61279.91 rows=67437 width=38) (actual time=0.026..0.032 rows=6 loops=1)\n Index Cond: (date = '2006-03-12'::date)\n Total runtime: 5.957 ms\n(4 rows)\n\nThere is an index on date (only). A typical day might have anywhere from a few thousand entries to a few hundred thousand with the average in the low thousands. 
Inserts only, no deletes or updates.\n\nThis table gets analyzed daily (overkill) so the stats are up to date; I wonder if that's a problem in your case ?\n\n>> \n>>> Table layout\n>>> \n>> CREATE TABLE pokes\n>> (\n>> id serial NOT NULL,\n>> uid integer,\n>> action_id integer,\n>> created timestamp without time zone DEFAULT now(),\n>> friend_id integer,\n>> message text,\n>> pic text,\n>> \"name\" text,\n>> CONSTRAINT pokes_pkey PRIMARY KEY (id)\n>> )\n>> WITH (OIDS=FALSE);\n>> ALTER TABLE pokes OWNER TO postgres;\n>>\n>> -- Index: idx_action_idx\n>>\n>> -- DROP INDEX idx_action_idx;\n>>\n>> CREATE INDEX idx_action_idx\n>> ON pokes\n>> USING btree\n>> (action_id);\n>>\n>> -- Index: idx_friend_id\n>>\n>> -- DROP INDEX idx_friend_id;\n>>\n>> CREATE INDEX idx_friend_id\n>> ON pokes\n>> USING btree\n>> (friend_id);\n>>\n>> -- Index: idx_pokes_uid\n>>\n>> -- DROP INDEX idx_pokes_uid;\n>>\n>> CREATE INDEX idx_pokes_uid\n>> ON pokes\n>> USING btree\n>> (uid);\n>>\n>>\n>> \n>>> Load on the database\n>>> \n>> how do i measure load on database?\n>> \n>\n> How many users are attached to the server at any given time. how many inserts, deletes\n> selects are being done on the server. Its number TPS on the server.\n\nOn Windoze I don't know; on *NIX variants the utility \"top\" can show useful information on load and active processes; iostat or vmstat can give detailed looks over time (use period of between 1 and 5 seconds maybe and discard the first row as nonsense); they show disk i/o and context switching, etc.\n\nHTH,\n\nGreg Williamson\nSenior DBA\nDigitalGlobe\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)\n\n\n\n\n\n\nRE: [GENERAL] select query takes 13 seconds to run with index\n\n\n\nJustin wrote:\n>\n>\n> mark wrote:\n>> On Mon, May 26, 2008 at 4:26 PM, Justin <[email protected]> wrote:\n>>  \n>>> mark wrote:\n>>>    \n>>>> Hi, is there anyway this can be made faster?  
Inserts only, no deletes or updates.\n\nThis table gets analyzed daily (overkill) so the stats are up to date; I wonder if that's a problem in your case ?\n\n>>  \n>>> Table layout\n>>>    \n>> CREATE TABLE pokes\n>> (\n>>   id serial NOT NULL,\n>>   uid integer,\n>>   action_id integer,\n>>   created timestamp without time zone DEFAULT now(),\n>>   friend_id integer,\n>>   message text,\n>>   pic text,\n>>   \"name\" text,\n>>   CONSTRAINT pokes_pkey PRIMARY KEY (id)\n>> )\n>> WITH (OIDS=FALSE);\n>> ALTER TABLE pokes OWNER TO postgres;\n>>\n>> -- Index: idx_action_idx\n>>\n>> -- DROP INDEX idx_action_idx;\n>>\n>> CREATE INDEX idx_action_idx\n>>   ON pokes\n>>   USING btree\n>>   (action_id);\n>>\n>> -- Index: idx_friend_id\n>>\n>> -- DROP INDEX idx_friend_id;\n>>\n>> CREATE INDEX idx_friend_id\n>>   ON pokes\n>>   USING btree\n>>   (friend_id);\n>>\n>> -- Index: idx_pokes_uid\n>>\n>> -- DROP INDEX idx_pokes_uid;\n>>\n>> CREATE INDEX idx_pokes_uid\n>>   ON pokes\n>>   USING btree\n>>   (uid);\n>>\n>>\n>>  \n>>> Load on the database\n>>>    \n>> how do i measure load on database?\n>>  \n>\n> How many users are attached to the server at any given time.  how many inserts, deletes\n> selects are being done on the server.  Its number  TPS  on the server.\n\nOn Windoze I don't know; on *NIX variants the utility \"top\" can show useful information on load and active processes; iostat or vmstat can give detailed looks over time (use period of between 1 and 5 seconds maybe and discard the first row as nonsense); they show disk i/o and context switching, etc.\n\nHTH,\n\nGreg Williamson\nSenior DBA\nDigitalGlobe\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)", "msg_date": "Mon, 26 May 2008 18:04:39 -0600", "msg_from": "\"Gregory Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query takes 13 seconds to run with index" }, { "msg_contents": "mark wrote:\n>>\n>>\n>>\n>> Size of the Table\n>>\n>>\n>> 22 million rows approximately\n>>\n>>\n>> I have no experience on large datasets so people with more experience in\n>> this area are going to have to chime in.\n>> My gut feel is 13 seconds for Postgresql to sort through an index of that\n>> size and table is not bad.\n>>\n>> you may need to take a look at hardware and postgresql.config settings to\n>> improve the performance for this query\n>>\n>> This query is very simple where changing it around or adding index results\n>> massive improvements is not going to help in this case.\n>> \n> the hardware is e5405 dual quad core on a 16GB RAM machine, with 8.3.1\n> default settings except maximum connections increased...\n> \nThat could be problem, Postgresql default settings are very conservative.\n\nYou need to read http://www.postgresqldocs.org/wiki/Performance_Optimization\nand tune posgtresql.config settings. \n\nWhat OS are you running?\nWhat is Disk Subsystem setup??? 
\n\n\n\n\n\n\nmark wrote:\n\n\n\n\n\n\nSize of the Table\n\n\n22 million rows approximately\n\n\nI have no experience on large datasets so people with more experience in\nthis area are going to have to chime in.\nMy gut feel is 13 seconds for Postgresql to sort through an index of that\nsize and table is not bad.\n\nyou may need to take a look at hardware and postgresql.config settings to\nimprove the performance for this query\n\nThis query is very simple where changing it around or adding index results\nmassive improvements is not going to help in this case.\n \n\nthe hardware is e5405 dual quad core on a 16GB RAM machine, with 8.3.1\ndefault settings except maximum connections increased...\n \n\nThat could be problem, Postgresql default settings are very\nconservative.\n\nYou need to read\nhttp://www.postgresqldocs.org/wiki/Performance_Optimization\nand tune posgtresql.config settings.  \n\nWhat OS are you running?\nWhat is Disk Subsystem setup???", "msg_date": "Mon, 26 May 2008 20:05:20 -0400", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query takes 13 seconds to run with index" }, { "msg_contents": "On Mon, May 26, 2008 at 7:26 PM, Justin <[email protected]> wrote:\n> The estimate is way off, when was the last time Vaccum was on the table?\n\nI'm going to second this- run \"ANALYZE pokes;\" and then test the query\nagain; I'll bet you'll get much better results.\n\nIt's not the VACUUM that matters so much as the ANALYZE, and it\ndefinitely needs to be done on occasion if you're adding a lot of\nrecords. Do you have the autovacuum daemon running? (And if not, why\nnot?)\n\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Mon, 26 May 2008 20:09:30 -0400", "msg_from": "\"David Wilson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query takes 13 seconds to run with index" }, { "msg_contents": ">\n> >\n> > How many users are attached to the server at any given time. how \n> many inserts, deletes\n> > selects are being done on the server. Its number TPS on the server.\n>\n> On Windoze I don't know; on *NIX variants the utility \"top\" can show \n> useful information on load and active processes; iostat or vmstat can \n> give detailed looks over time (use period of between 1 and 5 seconds \n> maybe and discard the first row as nonsense); they show disk i/o and \n> context switching, etc.\n>\n> HTH,\n>\n> Greg Williamson\n> Senior DBA\n> DigitalGlobe\n>\n>\nThere are several tools to do this process explorer which has to be down \nloaded, and performance monitor. The problem with performance monitor \nis posgresql keeps spawning new exe which makes reading the result real \na pain.\n\n\n\n\n\n\n \n\n>\n> How many users are attached to the server at any given time.  how\nmany inserts, deletes\n> selects are being done on the server.  Its number  TPS  on the\nserver.\n\nOn Windoze I don't know; on *NIX variants the utility \"top\" can show\nuseful information on load and active processes; iostat or vmstat can\ngive detailed looks over time (use period of between 1 and 5 seconds\nmaybe and discard the first row as nonsense); they show disk i/o and\ncontext switching, etc.\n\nHTH,\n\nGreg Williamson\nSenior DBA\nDigitalGlobe\n\n\n\n\nThere are several tools to do this process explorer which has to be\ndown loaded, and performance monitor.  
The problem with performance\nmonitor is posgresql keeps spawning new exe which makes reading the\nresult real a pain.", "msg_date": "Mon, 26 May 2008 20:13:54 -0400", "msg_from": "Justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query takes 13 seconds to run with index" }, { "msg_contents": "mark <[email protected]> writes:\n> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n> DESC limit 6;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..9329.02 rows=6 width=135) (actual\n> time=13612.247..13612.247 rows=0 loops=1)\n> -> Index Scan Backward using pokes_pkey on pokes\n> (cost=0.00..5182270.69 rows=3333 width=135) (actual\n> time=13612.245..13612.245 rows=0 loops=1)\n> Filter: (uid = 578439028)\n> Total runtime: 13612.369 ms\n> (4 rows)\n\nThe problem is the vast disconnect between the estimated and actual\nrowcounts for the indexscan (3333 vs 0). The planner thinks there\nare three thousand rows matching uid = 578439028, and that encourages\nit to try a plan that's only going to be fast if at least six such\nrows show up fairly soon while scanning the index in reverse id order.\nWhat you really want it to do here is scan on the uid index and then\nsort the result by id ... but that will be slow in exactly the case\nwhere this plan is fast, ie, when there are a lot of matching uids.\n\nBottom line: the planner cannot make the right choice between these\nalternatives unless it's got decent statistics about the frequency\nof uid values. \"I analyzed the table about a week ago\" is not good\nenough maintenance policy --- you need current stats, and you might need\nto bump up the statistics target to get enough data about less-common\nvalues of uid.\n\n(Since it's 8.3, the autovac daemon might have been analyzing for you,\nif you didn't turn off autovacuum. In that case increasing the\nstatistics target is the first thing to try.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 May 2008 20:36:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query takes 13 seconds to run with index " }, { "msg_contents": "On Mon, May 26, 2008 at 5:36 PM, Tom Lane <[email protected]> wrote:\n> mark <[email protected]> writes:\n>> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n>> DESC limit 6;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=0.00..9329.02 rows=6 width=135) (actual\n>> time=13612.247..13612.247 rows=0 loops=1)\n>> -> Index Scan Backward using pokes_pkey on pokes\n>> (cost=0.00..5182270.69 rows=3333 width=135) (actual\n>> time=13612.245..13612.245 rows=0 loops=1)\n>> Filter: (uid = 578439028)\n>> Total runtime: 13612.369 ms\n>> (4 rows)\n>\n> The problem is the vast disconnect between the estimated and actual\n> rowcounts for the indexscan (3333 vs 0). The planner thinks there\n> are three thousand rows matching uid = 578439028, and that encourages\n> it to try a plan that's only going to be fast if at least six such\n> rows show up fairly soon while scanning the index in reverse id order.\n> What you really want it to do here is scan on the uid index and then\n> sort the result by id ... 
but that will be slow in exactly the case\n> where this plan is fast, ie, when there are a lot of matching uids.\n>\n> Bottom line: the planner cannot make the right choice between these\n> alternatives unless it's got decent statistics about the frequency\n> of uid values. \"I analyzed the table about a week ago\" is not good\n> enough maintenance policy --- you need current stats, and you might need\n> to bump up the statistics target to get enough data about less-common\n> values of uid.\nhow do i do this? bump up the statistics target?\n\n> (Since it's 8.3, the autovac daemon might have been analyzing for you,\n> if you didn't turn off autovacuum. In that case increasing the\n> statistics target is the first thing to try.)\ni did not turn it off..\nand my OS is fedora 9\n\ni ran vacuum verbose analyze pokes, and then ran the same query, and\nthere is no improvement..\n\nEXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id limit 6;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..8446.80 rows=6 width=130) (actual\ntime=12262.779..12262.779 rows=0 loops=1)\n -> Index Scan using pokes_pkey on pokes (cost=0.00..5149730.49\nrows=3658 width=130) (actual time=12262.777..12262.777 rows=0 loops=1)\n Filter: (uid = 578439028)\n Total runtime: 12262.817 ms\n\nVACUUM VERBOSE ANALYZE pokes ;\nINFO: vacuuming \"public.pokes\"\nINFO: index \"pokes_pkey\" now contains 22341026 row versions in 61258 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.24s/0.06u sec elapsed 1.61 sec.\nINFO: index \"idx_action_idx\" now contains 22341026 row versions in 61548 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.38s/0.09u sec elapsed 7.21 sec.\nINFO: index \"idx_friend_id\" now contains 22341026 row versions in 60547 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.44s/0.11u sec elapsed 9.13 sec.\nINFO: index \"idx_pokes_uid\" now contains 22341026 row versions in 62499 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.41s/0.09u sec elapsed 7.44 sec.\nINFO: \"pokes\": found 0 removable, 22341026 nonremovable row versions\nin 388144 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n1923 pages contain useful free space.\n0 pages are entirely empty.\nCPU 3.02s/2.38u sec elapsed 29.21 sec.\nINFO: vacuuming \"pg_toast.pg_toast_43415\"\nINFO: index \"pg_toast_43415_index\" now contains 12 row versions in 2 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_43415\": found 0 removable, 12 nonremovable row\nversions in 2 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n2 pages contain useful free space.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.pokes\"\nINFO: \"pokes\": scanned 3000 of 388144 pages, containing 172933 live\nrows and 0 dead rows; 3000 rows in sample, 22374302 estimated total\n", "msg_date": "Mon, 26 May 2008 19:58:51 -0700", "msg_from": "mark <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select query takes 13 seconds to run with 
index" }, { "msg_contents": "On Mon, May 26, 2008 at 7:58 PM, mark <[email protected]> wrote:\n> On Mon, May 26, 2008 at 5:36 PM, Tom Lane <[email protected]> wrote:\n>> mark <[email protected]> writes:\n>>> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n>>> DESC limit 6;\n>>> QUERY PLAN\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Limit (cost=0.00..9329.02 rows=6 width=135) (actual\n>>> time=13612.247..13612.247 rows=0 loops=1)\n>>> -> Index Scan Backward using pokes_pkey on pokes\n>>> (cost=0.00..5182270.69 rows=3333 width=135) (actual\n>>> time=13612.245..13612.245 rows=0 loops=1)\n>>> Filter: (uid = 578439028)\n>>> Total runtime: 13612.369 ms\n>>> (4 rows)\n>>\n>> The problem is the vast disconnect between the estimated and actual\n>> rowcounts for the indexscan (3333 vs 0). The planner thinks there\n>> are three thousand rows matching uid = 578439028, and that encourages\n>> it to try a plan that's only going to be fast if at least six such\n>> rows show up fairly soon while scanning the index in reverse id order.\n>> What you really want it to do here is scan on the uid index and then\n>> sort the result by id ... but that will be slow in exactly the case\n>> where this plan is fast, ie, when there are a lot of matching uids.\n>>\n>> Bottom line: the planner cannot make the right choice between these\n>> alternatives unless it's got decent statistics about the frequency\n>> of uid values. \"I analyzed the table about a week ago\" is not good\n>> enough maintenance policy --- you need current stats, and you might need\n>> to bump up the statistics target to get enough data about less-common\n>> values of uid.\n> how do i do this? bump up the statistics target?\n>\n>> (Since it's 8.3, the autovac daemon might have been analyzing for you,\n>> if you didn't turn off autovacuum. In that case increasing the\n>> statistics target is the first thing to try.)\n> i did not turn it off..\n> and my OS is fedora 9\n>\n> i ran vacuum verbose analyze pokes, and then ran the same query, and\n> there is no improvement..\n>\n> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id limit 6;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..8446.80 rows=6 width=130) (actual\n> time=12262.779..12262.779 rows=0 loops=1)\n> -> Index Scan using pokes_pkey on pokes (cost=0.00..5149730.49\n> rows=3658 width=130) (actual time=12262.777..12262.777 rows=0 loops=1)\n> Filter: (uid = 578439028)\n> Total runtime: 12262.817 ms\n\nOK I did this\n\nALTER TABLE pokes ALTER uid set statistics 500;\nALTER TABLE\n\nANALYZE pokes;\nANALYZE\n\nand then it became super fast!! thanks a lot!!!\nmy question:\n-> is 500 too high? 
what all does this affect?\n-> now increasing this number does it affect only when i am running\nanalyze commands, or will it slow down inserts and other operations?\nEXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\ndesc limit 6;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=467.80..467.81 rows=6 width=134) (actual\ntime=0.016..0.016 rows=0 loops=1)\n -> Sort (cost=467.80..468.09 rows=117 width=134) (actual\ntime=0.016..0.016 rows=0 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_pokes_uid on pokes\n(cost=0.00..465.70 rows=117 width=134) (actual time=0.011..0.011\nrows=0 loops=1)\n Index Cond: (uid = 578439028)\n Total runtime: 0.037 ms\n", "msg_date": "Mon, 26 May 2008 20:12:26 -0700", "msg_from": "mark <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select query takes 13 seconds to run with index" }, { "msg_contents": "\n>> The estimate is way off, when was the last time Vaccum was on the table?\n> about a week ago i ran this VACUUM VERBOSE ANALYZE;\n> this table is never updated or deleted, rows are just inserted...\n\n\tYou should analyze it more often, then... Postgres probably thinks the \ntable has the same data distribution as last week !\n\tAnalyze just takes a couple seconds...\n\n>> Load on the database\n> how do i measure load on database?\n\n\tJust look at vmstat.\n\n\tAlso if you very often do SELECT .. WHERE x = ... ORDER BY id DESC you'll \nbenefit from an index on (x,id) instead of just (x).\n\n\n", "msg_date": "Tue, 27 May 2008 10:19:37 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] select query takes 13 seconds to run with index" }, { "msg_contents": "On Mon, May 26, 2008 at 04:32:50PM -0700, mark wrote:\n> >> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n> >> DESC limit 6;\n> > The estimate is way off, when was the last time Vaccum was on the table?\n> about a week ago i ran this VACUUM VERBOSE ANALYZE;\n> this table is never updated or deleted, rows are just inserted...\n\n1. boost default_statistics_target\n2. run analyze more often - daily job for example\n3. create index q on pokes (uid, id); should help\n\ndepesz\n", "msg_date": "Tue, 27 May 2008 10:22:01 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] select query takes 13 seconds to run with index" }, { "msg_contents": "On Tue, May 27, 2008 at 1:22 AM, hubert depesz lubaczewski\n<[email protected]> wrote:\n> On Mon, May 26, 2008 at 04:32:50PM -0700, mark wrote:\n>> >> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n>> >> DESC limit 6;\n>> > The estimate is way off, when was the last time Vaccum was on the table?\n>> about a week ago i ran this VACUUM VERBOSE ANALYZE;\n>> this table is never updated or deleted, rows are just inserted...\n>\n> 1. boost default_statistics_target\n> 2. run analyze more often - daily job for example\n> 3. create index q on pokes (uid, id); should help\n\nOK I did this\n\nALTER TABLE pokes ALTER uid set statistics 500;\nALTER TABLE\n\nANALYZE pokes;\nANALYZE\n\nand then it became super fast!! thanks a lot!!!\nmy question:\n-> is 500 too high? 
what all does this affect?\n-> now increasing this number does it affect only when i am running\nanalyze commands, or will it slow down inserts and other operations?\nEXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\ndesc limit 6;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=467.80..467.81 rows=6 width=134) (actual\ntime=0.016..0.016 rows=0 loops=1)\n -> Sort (cost=467.80..468.09 rows=117 width=134) (actual\ntime=0.016..0.016 rows=0 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 25kB\n -> Index Scan using idx_pokes_uid on pokes\n(cost=0.00..465.70 rows=117 width=134) (actual time=0.011..0.011\nrows=0 loops=1)\n Index Cond: (uid = 578439028)\n Total runtime: 0.037 ms\n", "msg_date": "Tue, 27 May 2008 07:46:05 -0700", "msg_from": "mark <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] select query takes 13 seconds to run with index" }, { "msg_contents": "On Tue, May 27, 2008 at 07:46:05AM -0700, mark wrote:\n> and then it became super fast!! thanks a lot!!!\n> my question:\n> -> is 500 too high? what all does this affect?\n\ni usually dont go over 100. it affects number of elements in statistics\nfor fields. you can see the stats in:\nselect * from pg_stats;\n\n> -> now increasing this number does it affect only when i am running\n> analyze commands, or will it slow down inserts and other operations?\n> EXPLAIN ANALYZE select * from pokes where uid = 578439028 order by id\n> desc limit 6;\n\nit (theoretically) can slow down selects to to the fact that it now has\nto load more data to be able to plan (i.e. it loads the statistics, and\nsince there are more values - the statistics are larger).\n\ngenerally - in most cases this shouldn't be an issue.\n\nadditionally - i think that the 2-column index would work in this\nparticular case even better.\n\nregards,\n\ndepesz\n", "msg_date": "Tue, 27 May 2008 16:58:19 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] select query takes 13 seconds to run with index" } ]
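Pulling this thread's advice together, a minimal sketch of the fix (a consolidation, not a tuned configuration): the statistics bump and ANALYZE are what mark ran above, the two-column index is the one PFC and depesz suggested, 500 is the value from the thread (depesz notes he usually doesn't go over 100), and the index name here is illustrative — depesz's message just calls it q.

-- Give the planner enough per-column statistics to see how rare an
-- individual uid value is, then refresh the stats.
ALTER TABLE pokes ALTER COLUMN uid SET STATISTICS 500;
ANALYZE pokes;

-- Composite index suggested in the thread: it matches both the uid filter
-- and the ORDER BY id, so the LIMIT 6 can stop after six index entries
-- instead of filtering a backward scan of pokes_pkey.
CREATE INDEX idx_pokes_uid_id ON pokes (uid, id);

-- The query being tuned, unchanged:
EXPLAIN ANALYZE
SELECT * FROM pokes
 WHERE uid = 578439028
 ORDER BY id DESC
 LIMIT 6;

With the composite index, a backward scan within the uid = 578439028 range returns rows already in id DESC order, which addresses Tom's point that the fast plan otherwise depends on matching rows showing up early in the pokes_pkey scan.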
[ { "msg_contents": "\nI have a complex query where making a small change to the SQL increases\nrun-time by > 1000 times.\n\nThe first SQL statement is of the form\n\nA JOIN B ON (a.id = b.id) LEFT JOIN C ON (a.id = c.id) \n\nand the second is like this\n\nA JOIN B ON (a.id = b.id) LEFT JOIN C ON (b.id = c.id)\n\nthe only difference is the substitution of a -> b\n\nThis has been verified by examining EXPLAIN of SQL1, SQL2 and SQL1. The\nfirst and third EXPLAINs are equivalent. All ANALYZE etc has been run.\nAll relevant join columns are INTEGERs. So we have a repeatable\ndifference in plans attributable to a single change.\n\nThe difference in run time occurs because the second form of the query\nuses a SeqScan of a large table, whereas the first form is able to use a\nnested loops join to access the large table, which then allows it to\naccess just 3 rows rather than 85 million rows.\n\nThere is a clear equivalence between the two forms of SQL, since the\nequivalence a = b is derived from a natural rather than an outer join.\nThis can be applied from the left side to the right side of the join. \n\nSo this looks to me like either a bug or just an un-implemented\noptimizer feature. The code I've just looked at for equivalent class\nrelationships appears to refer to using this to propagate constant info\nonly, so I'm thinking it is not a bug. and hence why it is reported here\nand not to pgsql-bugs.\n\nI do recognise that we would *not* be able to deduce this form of SQL\n\nA JOIN B ON (a.id = c.id) LEFT JOIN C ON (b.id = c.id)\n\nthough that restriction on outer join equivalence is not relevant here.\n\n(SQL, EXPLAINs etc available off-list only, by request).\n\nI'm looking into this more now.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Tue, 27 May 2008 20:59:18 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Outer joins and equivalence" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> I have a complex query where making a small change to the SQL increases\n> run-time by > 1000 times.\n\n> The first SQL statement is of the form\n\n> A JOIN B ON (a.id = b.id) LEFT JOIN C ON (a.id = c.id) \n\n> and the second is like this\n\n> A JOIN B ON (a.id = b.id) LEFT JOIN C ON (b.id = c.id)\n\n> the only difference is the substitution of a -> b\n\nPlease provide an actual test case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 May 2008 17:43:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Outer joins and equivalence " }, { "msg_contents": "On Tue, 27 May 2008, Simon Riggs wrote:\n> I do recognise that we would *not* be able to deduce this form of SQL\n>\n> A JOIN B ON (a.id = c.id) LEFT JOIN C ON (b.id = c.id)\n\nSurely that would not be valid SQL?\n\nMatthew\n\n-- \nPsychotics are consistently inconsistent. 
The essence of sanity is to\nbe inconsistently inconsistent.\n", "msg_date": "Wed, 28 May 2008 11:45:14 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Outer joins and equivalence" }, { "msg_contents": "\nOn Wed, 2008-05-28 at 11:45 +0100, Matthew Wakeling wrote:\n> On Tue, 27 May 2008, Simon Riggs wrote:\n> > I do recognise that we would *not* be able to deduce this form of SQL\n> >\n> > A JOIN B ON (a.id = c.id) LEFT JOIN C ON (b.id = c.id)\n> \n> Surely that would not be valid SQL?\n\nYou are right, but my point was about inferences during SQL planning,\nnot about initial analysis of the statement.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 28 May 2008 14:39:10 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Outer joins and equivalence" }, { "msg_contents": "\nOn Tue, 2008-05-27 at 17:43 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > I have a complex query where making a small change to the SQL increases\n> > run-time by > 1000 times.\n> \n> > The first SQL statement is of the form\n> \n> > A JOIN B ON (a.id = b.id) LEFT JOIN C ON (a.id = c.id) \n> \n> > and the second is like this\n> \n> > A JOIN B ON (a.id = b.id) LEFT JOIN C ON (b.id = c.id)\n> \n> > the only difference is the substitution of a -> b\n> \n> Please provide an actual test case.\n\nGetting closer, but still not able to produce a moveable test case.\n\nSymptoms are\n\n* using partitioning\n* when none of the partitions are excluded\n* when equivalence classes ought to be able to reconcile join \n\nStill working on it\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Mon, 02 Jun 2008 18:10:44 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Outer joins and equivalence" }, { "msg_contents": "On Mon, 2008-06-02 at 18:10 +0100, Simon Riggs wrote:\n> On Tue, 2008-05-27 at 17:43 -0400, Tom Lane wrote:\n> > Simon Riggs <[email protected]> writes:\n> > > I have a complex query where making a small change to the SQL increases\n> > > run-time by > 1000 times.\n> > \n> > > The first SQL statement is of the form\n> > \n> > > A JOIN B ON (a.id = b.id) LEFT JOIN C ON (a.id = c.id) \n> > \n> > > and the second is like this\n> > \n> > > A JOIN B ON (a.id = b.id) LEFT JOIN C ON (b.id = c.id)\n> > \n> > > the only difference is the substitution of a -> b\n> > \n> > Please provide an actual test case.\n> \n> Getting closer, but still not able to produce a moveable test case.\n\nI've got a test case which shows something related and weird, though not\nthe exact case.\n\nThe queries shown here have significantly different costs, depending\nupon whether we use tables a or b in the query. 
Since a and b are\nequivalent this result isn't expected at all.\n\nI suspect the plan variation in the original post is somehow cost\nrelated and we are unlikely to discover the exact plan.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support", "msg_date": "Mon, 02 Jun 2008 20:47:11 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Outer joins and equivalence" }, { "msg_contents": "[ redirecting thread from -performance to -hackers ]\n\nSimon Riggs <[email protected]> writes:\n> I've got a test case which shows something related and weird, though not\n> the exact case.\n\n> The queries shown here have significantly different costs, depending\n> upon whether we use tables a or b in the query. Since a and b are\n> equivalent this result isn't expected at all.\n\nHmm. I had been guessing that there was something about your original\nquery that prevented the system from applying best_appendrel_indexscan,\nbut after fooling with this a bit, I don't believe that's the issue\nat all. The problem is that these two cases \"should\" be equivalent:\n\n select ... from a join b on (a.id = b.id) left join c on (a.id = c.id);\n\n select ... from a join b on (a.id = b.id) left join c on (b.id = c.id);\n\nbut they are not seen that way by the current planner. It correctly\nforms an EquivalenceClass consisting of a.id and b.id, but it cannot put\nc.id into that same class, and so the clause a.id = c.id is just left\nalone; there is noplace that can generate \"b.id = c.id\" as an\nalternative join condition. This means that (for the first query)\nwe can consider the join orders \"(a join b) leftjoin c\" and\n\"(a leftjoin c) join b\", but there is no way to consider the join\norder \"(b leftjoin c) join a\"; to implement that we'd need to have the\nalternative join clause available. So if that join order is\nsignificantly better than the other two, we lose.\n\nThis is going to take a bit of work to fix :-(. I am toying with the\nidea that we could go ahead and put c.id into the EquivalenceClass\nas a sort of second-class citizen that's labeled as associated with this\nparticular outer join --- the implication being that we can execute the\nouter join using a generated clause that equates c.id to any one of the\nfirst-class members of the EquivalenceClass, but above the outer join\nwe can't assume that c.id hasn't gone to null, so it's not really equal\nto anything else in the class. I think it might also be possible\nto get rid of the reconsider_outer_join_clauses() kluge in favor of\ndriving transitive-equality-to-a-constant off of this representation.\n\nHowever there's a larger issue here, which is the very identity of an\nouter join :-(. Currently, for the first query above, the left join\nis defined as being between a and c, with a being the minimum\nleft-hand-side needed to form the join. To be able to handle a case\nlike this, it seems that the notion of a \"minimum left hand side\"\nfalls apart altogether. We can execute the OJ using either a or b\nas left hand side. 
So the current representation of OuterJoinInfo,\nand the code that uses it to enforce valid join orders, needs a serious\nrethink.\n\nLooks like 8.4 development material to me, rather than something we\ncan hope to back-patch a fix for...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Jun 2008 22:18:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Outer joins and equivalence " }, { "msg_contents": "\nOn Wed, 2008-06-04 at 22:18 -0400, Tom Lane wrote:\n> [ redirecting thread from -performance to -hackers ]\n> \n> Simon Riggs <[email protected]> writes:\n> > I've got a test case which shows something related and weird, though not\n> > the exact case.\n> \n> > The queries shown here have significantly different costs, depending\n> > upon whether we use tables a or b in the query. Since a and b are\n> > equivalent this result isn't expected at all.\n> \n> Hmm. I had been guessing that there was something about your original\n> query that prevented the system from applying best_appendrel_indexscan,\n> but after fooling with this a bit, I don't believe that's the issue\n> at all. The problem is that these two cases \"should\" be equivalent:\n> \n> select ... from a join b on (a.id = b.id) left join c on (a.id = c.id);\n> \n> select ... from a join b on (a.id = b.id) left join c on (b.id = c.id);\n> \n> but they are not seen that way by the current planner. It correctly\n> forms an EquivalenceClass consisting of a.id and b.id, but it cannot put\n> c.id into that same class, and so the clause a.id = c.id is just left\n> alone; there is noplace that can generate \"b.id = c.id\" as an\n> alternative join condition. This means that (for the first query)\n> we can consider the join orders \"(a join b) leftjoin c\" and\n> \"(a leftjoin c) join b\", but there is no way to consider the join\n> order \"(b leftjoin c) join a\"; to implement that we'd need to have the\n> alternative join clause available. So if that join order is\n> significantly better than the other two, we lose.\n> \n> This is going to take a bit of work to fix :-(. I am toying with the\n> idea that we could go ahead and put c.id into the EquivalenceClass\n> as a sort of second-class citizen that's labeled as associated with this\n> particular outer join --- the implication being that we can execute the\n> outer join using a generated clause that equates c.id to any one of the\n> first-class members of the EquivalenceClass, but above the outer join\n> we can't assume that c.id hasn't gone to null, so it's not really equal\n> to anything else in the class. I think it might also be possible\n> to get rid of the reconsider_outer_join_clauses() kluge in favor of\n> driving transitive-equality-to-a-constant off of this representation.\n\nYes, EquivalenceClass allows an implication to be made either way\naround, which is wrong for this class of problem. I was imagining a\nhigher level ImplicationClass that was only of the form A => B but not B\n=> A. So we end up with an ImplicationTree, rather than a just a flat\nClass. Which is where I punted...\n\n> However there's a larger issue here, which is the very identity of an\n> outer join :-(. Currently, for the first query above, the left join\n> is defined as being between a and c, with a being the minimum\n> left-hand-side needed to form the join. To be able to handle a case\n> like this, it seems that the notion of a \"minimum left hand side\"\n> falls apart altogether. 
We can execute the OJ using either a or b\n> as left hand side. So the current representation of OuterJoinInfo,\n> and the code that uses it to enforce valid join orders, needs a serious\n> rethink.\n\nHadn't seen that at all.\n\n> Looks like 8.4 development material to me, rather than something we\n> can hope to back-patch a fix for...\n\nDefinitely.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Thu, 05 Jun 2008 06:17:47 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Outer joins and equivalence" } ]
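The actual query and EXPLAIN output stayed off-list, so everything below is a hypothetical skeleton only: the table names follow the A/B/C shorthand used in the thread, the real tables were partitioned and far larger, and whether the plans actually diverge depends on sizes and statistics.

-- Minimal stand-ins for A, B, C (placeholders, not the original schema).
CREATE TABLE a (id integer PRIMARY KEY, payload text);
CREATE TABLE b (id integer PRIMARY KEY, payload text);
CREATE TABLE c (id integer PRIMARY KEY, payload text);

-- Form 1: outer-join clause written against a.id.
EXPLAIN SELECT *
  FROM a JOIN b ON (a.id = b.id)
         LEFT JOIN c ON (a.id = c.id);

-- Form 2: same clause written against b.id.  Logically equivalent, since
-- a.id = b.id comes from an inner join, but per Tom's analysis the planner
-- (as of this thread) cannot swap one clause for the other, so each form
-- rules out one otherwise-valid join order and the two can cost out very
-- differently.
EXPLAIN SELECT *
  FROM a JOIN b ON (a.id = b.id)
         LEFT JOIN c ON (b.id = c.id);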
[ { "msg_contents": "Hi all,\n\nI'm using the TPC-H Benchmark for testing of performance in PostgreSQL.\nBut it is composed of only 8 tables, which is not enough to call the GEQO\nalgorithm.\n\nI don't want to change any of the 22 queries provided by the TPC-H to call\nthe GEQO, and not lose the credibility of the TPC's tool.\n\nIs there a specific tool for testing queries with large number of tables\n(12 or more), or some a documentation of tests carried out in GEQO\nAlgorithm?\n\n--\nTarcizio Bini\n\n\n", "msg_date": "Wed, 28 May 2008 10:50:48 -0300 (BRT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "GEQO Benchmark" }, { "msg_contents": "[email protected] writes:\n> I'm using the TPC-H Benchmark for testing of performance in PostgreSQL.\n> But it is composed of only 8 tables, which is not enough to call the GEQO\n> algorithm.\n\nSee\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-GEQO\n\nparticularly geqo_threshold.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 May 2008 10:02:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GEQO Benchmark " }, { "msg_contents": "Hi,\n\nOf course, the geqo_threshold can be changed so that the geqo be performed\nin queries that have less than 12 tables. However, we aim to test the GEQO\nalgorithm in conditions where the standard algorithm (dynamic programming)\nhas a high cost to calculate the query plan.\n\n--\n\nTarcizio Bini\n\n2008/5/28 Tom Lane <[email protected]>:\n\n> [email protected] writes:\n> > I'm using the TPC-H Benchmark for testing of performance in PostgreSQL.\n> > But it is composed of only 8 tables, which is not enough to call the GEQO\n> > algorithm.\n>\n> See\n>\n> http://www.postgresql.org/docs/8.3/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-GEQO\n>\n> particularly geqo_threshold.\n>\n> regards, tom lane\n>\n\nHi,Of course, the geqo_threshold can be changed so that the geqo be performed in queries that have less than 12 tables. However, we aim to test the GEQO algorithm in conditions where the standard algorithm (dynamic programming) has a high cost to calculate the query plan.\n--Tarcizio Bini2008/5/28 Tom Lane <[email protected]>:\[email protected] writes:\n> I'm using the TPC-H Benchmark for testing of performance in PostgreSQL.\n> But it is composed of only 8 tables, which is not enough to call the GEQO\n> algorithm.\n\nSee\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-GEQO\n\nparticularly geqo_threshold.\n\n                        regards, tom lane", "msg_date": "Wed, 28 May 2008 13:13:38 -0300", "msg_from": "\"Tarcizio Bini\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GEQO Benchmark" }, { "msg_contents": "\nOn Wed, 2008-05-28 at 13:13 -0300, Tarcizio Bini wrote:\n\n> Of course, the geqo_threshold can be changed so that the geqo be\n> performed in queries that have less than 12 tables. However, we aim to\n> test the GEQO algorithm in conditions where the standard algorithm\n> (dynamic programming) has a high cost to calculate the query plan.\n\nMy understanding is the GEQO cannot arrive at a better plan than the\nstandard optimizer, so unless you wish to measure planning time there\nisn't much to test. What is the quality of the plans it generates? 
Well\nthat varies according to the input; sometimes it gets the right plan,\nother times it doesn't get close.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Wed, 28 May 2008 17:52:40 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GEQO Benchmark" }, { "msg_contents": "I'm interested in the response time instead of the quality of it, that is,\nif this response is (or not) a best plan. Thus, my interest on the response\ntime is to compare the GEQO algorithm with other solutions for optimization\nof queries, more specificaly when there are 12 or more tables evolved.\nAccordingly, the algorithm of dynamic programming becomes hard because of\nthe computational cost to obtain the optimal plan.\n\nMy question is to know what tool or method was used to test the GEQO\nalgorithm or if there is some documentation that explain how the geqo was\ntested.\n\nRegards,\n\nTarcizio Bini\n\n2008/5/28 Simon Riggs <[email protected]>:\n\n>\n> On Wed, 2008-05-28 at 13:13 -0300, Tarcizio Bini wrote:\n>\n> > Of course, the geqo_threshold can be changed so that the geqo be\n> > performed in queries that have less than 12 tables. However, we aim to\n> > test the GEQO algorithm in conditions where the standard algorithm\n> > (dynamic programming) has a high cost to calculate the query plan.\n>\n> My understanding is the GEQO cannot arrive at a better plan than the\n> standard optimizer, so unless you wish to measure planning time there\n> isn't much to test. What is the quality of the plans it generates? Well\n> that varies according to the input; sometimes it gets the right plan,\n> other times it doesn't get close.\n>\n> --\n> Simon Riggs www.2ndQuadrant.com\n> PostgreSQL Training, Services and Support\n>\n>\n\nI'm interested in the response time instead of the quality of it, that is, if this response is (or not) a best plan. Thus, my interest on the response time is to compare the GEQO algorithm with other solutions for optimization of queries, more specificaly when there are 12 or more tables evolved. Accordingly, the algorithm of dynamic programming becomes hard because of the computational cost to obtain the optimal plan.\nMy question is to know what tool or method was used to test the GEQO algorithm or if there is some documentation that explain how the geqo was tested.Regards,Tarcizio Bini\n2008/5/28 Simon Riggs <[email protected]>:\n\nOn Wed, 2008-05-28 at 13:13 -0300, Tarcizio Bini wrote:\n\n> Of course, the geqo_threshold can be changed so that the geqo be\n> performed in queries that have less than 12 tables. However, we aim to\n> test the GEQO algorithm in conditions where the standard algorithm\n> (dynamic programming) has a high cost to calculate the query plan.\n\nMy understanding is the GEQO cannot arrive at a better plan than the\nstandard optimizer, so unless you wish to measure planning time there\nisn't much to test. What is the quality of the plans it generates? Well\nthat varies according to the input; sometimes it gets the right plan,\nother times it doesn't get close.\n\n--\n Simon Riggs           www.2ndQuadrant.com\n PostgreSQL Training, Services and Support", "msg_date": "Wed, 28 May 2008 14:54:56 -0300", "msg_from": "\"Tarcizio Bini\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GEQO Benchmark" } ]
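For reference, the knobs involved are ordinary GUCs from the manual page Tom linked, so they can be flipped per session without touching the 22 TPC-H queries. This is only a sketch of how to push the existing 8-table joins through GEQO and time the planner — it does not create the 12-plus-table workload Tarcizio is after.

-- Force GEQO for the 8-table TPC-H joins (default geqo_threshold is 12).
SET geqo = on;
SET geqo_threshold = 2;
SET geqo_effort = 5;      -- 1..10: planning-time vs. plan-quality trade-off

-- Baseline: exhaustive (dynamic-programming) join search on the same query.
SET geqo = off;

-- With \timing enabled in psql, running EXPLAIN (not EXPLAIN ANALYZE) of an
-- unchanged TPC-H query under each setting measures planning time alone,
-- which is the comparison Simon suggests is the meaningful one here.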
[ { "msg_contents": "Folks,\n\nSubsequent to my presentation of the new annotated.conf at pgCon last week, \nthere's been some argument about the utility of certain memory settings \nabove 2GB. I'd like to hash those out on this list so that we can make \nsome concrete recomendations to users.\n\nshared_buffers: according to witnesses, Greg Smith presented at East that \nbased on PostgreSQL's buffer algorithms, buffers above 2GB would not \nreally receive significant use. However, Jignesh Shah has tested that on \nworkloads with large numbers of connections, allocating up to 10GB \nimproves performance. \n\nsort_mem: My tests with 8.2 and DBT3 seemed to show that, due to \nlimitations of our tape sort algorithm, allocating over 2GB for a single \nsort had no benefit. However, Magnus and others have claimed otherwise. \nHas this improved in 8.3?\n\nSo, can we have some test evidence here? And workload descriptions?\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Wed, 28 May 2008 16:59:26 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "2GB or not 2GB" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n>\n> Subsequent to my presentation of the new annotated.conf at pgCon last week,...\n\nAvailable online yet? At?...\n\nCheers,\nSteve\n\n", "msg_date": "Wed, 28 May 2008 17:04:37 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "\"Josh Berkus\" <[email protected]> writes:\n\n> sort_mem: My tests with 8.2 and DBT3 seemed to show that, due to \n> limitations of our tape sort algorithm, allocating over 2GB for a single \n> sort had no benefit. However, Magnus and others have claimed otherwise. \n> Has this improved in 8.3?\n\nSimon previously pointed out that we have some problems in our tape sort\nalgorithm with large values of work_mem. If the tape is \"large enough\" to\ngenerate some number of output tapes then increasing the heap size doesn't buy\nus any reduction in the future passes. And managing very large heaps is a\nfairly large amount of cpu time itself.\n\nThe problem of course is that we never know if it's \"large enough\". We talked\nat one point about having a heuristic where we start the heap relatively small\nand double it (adding one row) whenever we find we're starting a new tape. Not\nsure how that would work out though.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Wed, 28 May 2008 20:25:57 -0400", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "On Wed, 28 May 2008, Josh Berkus wrote:\n\n> shared_buffers: according to witnesses, Greg Smith presented at East that\n> based on PostgreSQL's buffer algorithms, buffers above 2GB would not\n> really receive significant use. However, Jignesh Shah has tested that on\n> workloads with large numbers of connections, allocating up to 10GB\n> improves performance.\n\nLies! The only upper-limit for non-Windows platforms I mentioned was \nsuggesting those recent tests at Sun showed a practical limit in the low \nmulti-GB range.\n\nI've run with 4GB usefully for one of the multi-TB systems I manage, the \nmain index on the most frequently used table is 420GB and anything I can \ndo to keep the most popular parts of that pegged in memory seems to help. 
\nI haven't tried to isolate the exact improvement going from 2GB to 4GB \nwith benchmarks though.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 28 May 2008 21:06:06 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "\n\nJosh Berkus wrote:\n> Folks,\n>\n> Subsequent to my presentation of the new annotated.conf at pgCon last week, \n> there's been some argument about the utility of certain memory settings \n> above 2GB. I'd like to hash those out on this list so that we can make \n> some concrete recomendations to users.\n>\n> shared_buffers: according to witnesses, Greg Smith presented at East that \n> based on PostgreSQL's buffer algorithms, buffers above 2GB would not \n> really receive significant use. However, Jignesh Shah has tested that on \n> workloads with large numbers of connections, allocating up to 10GB \n> improves performance. \n> \nI have certainly seen improvements in performance upto 10GB using \nEAStress. The delicate balance is between file system cache and shared \nbuffers. I think the initial ones are more beneficial at shared buffers \nlevel and after that file system cache.\nI am trying to remember Greg's presentation where I think he suggested \nmore like 50% of available RAM (eg in 4GB system used just for \nPostgreSQL, it may not help setting more than 2GB since you need memory \nfor other stuff also).. Right Greg?\n\nBut if you have 32GB RAM .. I dont mind allocating 10GB to PostgreSQL \nbeyond which I find lots of other things that begin to impact..\n\nBTW I am really +1 for just setting AvailRAM tunable for PostgreSQL \n(example that you showed in tutorials) and do default derivations for \nall other settings unless overridden manually. So people dont forget to \nbump up wal_buffers or one of them while bumping the rest and trying to \nfight why the hell they are not seeing what they are expecting.\n\n-Jignesh\n\n\n", "msg_date": "Wed, 28 May 2008 22:54:13 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "\n\nGreg Smith wrote:\n> On Wed, 28 May 2008, Josh Berkus wrote:\n>\n>> shared_buffers: according to witnesses, Greg Smith presented at East \n>> that\n>> based on PostgreSQL's buffer algorithms, buffers above 2GB would not\n>> really receive significant use. However, Jignesh Shah has tested \n>> that on\n>> workloads with large numbers of connections, allocating up to 10GB\n>> improves performance.\n>\n> Lies! The only upper-limit for non-Windows platforms I mentioned was \n> suggesting those recent tests at Sun showed a practical limit in the \n> low multi-GB range.\n>\n> I've run with 4GB usefully for one of the multi-TB systems I manage, \n> the main index on the most frequently used table is 420GB and anything \n> I can do to keep the most popular parts of that pegged in memory seems \n> to help. I haven't tried to isolate the exact improvement going from \n> 2GB to 4GB with benchmarks though.\n>\nYep its always the index that seems to benefit with high cache hits.. In \none of the recent tests what I end up doing is writing a select \ncount(*) from trade where t_id >= $1 and t_id < SOMEMAX just to kick in \nindex scan and get it in memory first. So higher the bufferpool better \nthe hit for index in it better the performance.\n\n-Jignesh\n\n\n\n\n\n", "msg_date": "Wed, 28 May 2008 23:01:37 -0400", "msg_from": "\"Jignesh K. 
Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "\n\nOn Wed, 2008-05-28 at 16:59 -0700, Josh Berkus wrote:\n> Folks,\n\n> shared_buffers: according to witnesses, Greg Smith presented at East that \n> based on PostgreSQL's buffer algorithms, buffers above 2GB would not \n> really receive significant use. However, Jignesh Shah has tested that on \n> workloads with large numbers of connections, allocating up to 10GB \n> improves performance. \n\nI have seen multiple production systems where upping the buffers up to\n6-8GB helps. What I don't know, and what I am guessing Greg is referring\nto is if it helps as much as say upping to 2GB. E.g; the scale of\nperformance increase goes down while the actual performance goes up\n(like adding more CPUs).\n\n\n> \n> sort_mem: My tests with 8.2 and DBT3 seemed to show that, due to \n> limitations of our tape sort algorithm, allocating over 2GB for a single \n> sort had no benefit. However, Magnus and others have claimed otherwise. \n> Has this improved in 8.3?\n\nI have never see work_mem (there is no sort_mem Josh) do any good above\n1GB. Of course, I would never willingly use that much work_mem unless\nthere was a really good reason that involved a guarantee of not calling\nme at 3:00am.\n\n> \n> So, can we have some test evidence here? And workload descriptions?\n> \n\nIts all, tune now buddy :P\n\nSinceerely,\n\nJoshua D. Drake\n\n\n\n", "msg_date": "Thu, 29 May 2008 08:45:14 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "Joshua D. Drake wrote:\n> \n> \n> On Wed, 2008-05-28 at 16:59 -0700, Josh Berkus wrote:\n> > Folks,\n> \n> > shared_buffers: according to witnesses, Greg Smith presented at\n> > East that based on PostgreSQL's buffer algorithms, buffers above\n> > 2GB would not really receive significant use. However, Jignesh\n> > Shah has tested that on workloads with large numbers of\n> > connections, allocating up to 10GB improves performance. \n> \n> I have seen multiple production systems where upping the buffers up to\n> 6-8GB helps. What I don't know, and what I am guessing Greg is\n> referring to is if it helps as much as say upping to 2GB. E.g; the\n> scale of performance increase goes down while the actual performance\n> goes up (like adding more CPUs).\n\nThat could be it. I'm one of the people who recall *something* about\nit, but I don't remember any specifics :-)\n\n\n> > sort_mem: My tests with 8.2 and DBT3 seemed to show that, due to \n> > limitations of our tape sort algorithm, allocating over 2GB for a\n> > single sort had no benefit. However, Magnus and others have\n> > claimed otherwise. Has this improved in 8.3?\n> \n> I have never see work_mem (there is no sort_mem Josh) do any good\n> above 1GB. Of course, I would never willingly use that much work_mem\n> unless there was a really good reason that involved a guarantee of\n> not calling me at 3:00am.\n\nI have. Not as a system-wide setting, but for a single batch job doing\n*large* queries. Don't recall exactly, but it wasn't necessarily for\nsort - might have been for hash. 
I've seen it make a *big* difference.\n\nmaintenance_work_mem, however, I didn't see much difference upping it\npast 1Gb or so.\n\n\n//Magnus\n", "msg_date": "Thu, 29 May 2008 21:50:48 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "\nOn Wed, 2008-05-28 at 16:59 -0700, Josh Berkus wrote:\n\n> sort_mem: My tests with 8.2 and DBT3 seemed to show that, due to \n> limitations of our tape sort algorithm, allocating over 2GB for a single \n> sort had no benefit. However, Magnus and others have claimed otherwise. \n> Has this improved in 8.3?\n\nThere is an optimum for each specific sort. \n\nYour results cannot be used to make a global recommendation about the\nsetting of work_mem. So not finding any benefit in your tests *and*\nMagnus seeing an improvement are not inconsistent events.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Sat, 31 May 2008 08:44:57 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "Simon,\n\n> There is an optimum for each specific sort.\n\nWell, if the optimum is something other than \"as much as we can get\", then we \nstill have a pretty serious issue with work_mem, no?\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Sat, 31 May 2008 11:53:13 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "\"Josh Berkus\" <[email protected]> writes:\n\n> Simon,\n>\n>> There is an optimum for each specific sort.\n>\n> Well, if the optimum is something other than \"as much as we can get\", then we \n> still have a pretty serious issue with work_mem, no?\n\nWith the sort algorithm. The problem is that the database can't predict the\nfuture and doesn't know how many more records will be arriving and how out of\norder they will be.\n\nWhat appears to be happening is that if you give the tape sort a large amount\nof memory it keeps a large heap filling that memory. If that large heap\ndoesn't actually save any passes and doesn't reduce the number of output tapes\nthen it's just wasted cpu time to maintain such a large heap. If you have any\nclever ideas on how to auto-size the heap based on how many output tapes it\nwill create or avoid then by all means speak up.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n", "msg_date": "Sat, 31 May 2008 20:41:14 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" }, { "msg_contents": "\nOn Sat, 2008-05-31 at 11:53 -0700, Josh Berkus wrote:\n> Simon,\n> \n> > There is an optimum for each specific sort.\n> \n> Well, if the optimum is something other than \"as much as we can get\", then we \n> still have a pretty serious issue with work_mem, no?\n\nDepends upon your view of serious I suppose. I would say it is an\nacceptable situation, but needs further optimization. I threw some ideas\naround on Hackers around Dec/New Year, but I don't have time to work on\nthis further myself in this dev cycle. 
Further contributions welcome.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Sun, 01 Jun 2008 10:10:40 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 2GB or not 2GB" } ]
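One practical way to act on the distinction drawn above -- keep the server-wide work_mem modest and give only the big batch job a large value -- is to set it per transaction instead of in postgresql.conf. This is only a sketch: the 2GB figure is an illustration rather than a recommendation, the table and query are made up, and 32-bit builds cap work_mem well below that.

BEGIN;
SET LOCAL work_mem = '2GB';               -- applies to this transaction only
SELECT customer_id, sum(amount) AS total  -- hypothetical reporting query with a
  FROM sales_history                      -- large hash aggregate and sort
 GROUP BY customer_id
 ORDER BY total DESC;
COMMIT;                                   -- SET LOCAL reverts here automatically

For index builds it is maintenance_work_mem rather than work_mem that matters, and the same SET LOCAL trick applies. shared_buffers, by contrast, can only be changed in postgresql.conf with a restart, which is part of why the 2GB-versus-10GB question above has to be settled by benchmarking rather than per-session experiments.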
[ { "msg_contents": "\n\n\n\nHello everybody!\n\nI have found a performance issue with 2 equivalent queries stably\ntaking different (~x2) time to finish. In just a few words it can be\ndescribed like this: if you have a lot of values in an IN() statement,\nyou should put most heavy (specifying most number of rows) ids first.\nThis is mostly just a bug submit, than looking for help.\n\nSo this is what I have:\n\n RHEL\n PostgreSQL 8.3.1\n A table ext_feeder_item with ~4.6M records.\nkia=# \\d+ ext_feeder_item;\nTable\n\"public.ext_feeder_item\"\nColumn | Type | Modifiers | Description\n----------+--------------------------+--------------------------------------------------------------+-------------\nid | bigint | not null default\nnextval('ext_feeder_item_id_seq'::regclass) |\nfeed_id | bigint | not\nnull |\npub_date | timestamp with time zone\n| |\nIndexes:\n\"ext_feeder_item_pkey\" PRIMARY KEY, btree (id)\n\"ext_feeder_item_feed_id_pub_date_idx\" btree (feed_id, pub_date)\n\"ext_feeder_item_idx\" btree (feed_id)\nTriggers:\n....\nHas OIDs: no\n\nStatistics for the fields feed_id and pub_date are set to 1000;\nThe table have just been vacuumed and analyzed.\nA simple query to the table:\n SELECT\nid\nFROM\next_feeder_item AS i\nWHERE\ni.feed_id IN (...)\nORDER BY pub_date DESC, id DESC\nLIMIT 11 OFFSET 0;\n\nwith many (~1200) ids in the IN() statement.\nThe count of rows distribution for these ids (may be thought of\nas foreign keys in this table) is the following:\nid = 54461: ~180000 - actually the most heavy id in the whole table.\nother ids: a single id at most specifies 2032 rows; 6036 rows total.\nIf I perform a query with\nIN(54461, ...)\nit stably (5 attempts) takes 4.5..4.7 secs. to perform.\nQUERY PLAN\nLimit  (cost=1463104.22..1463104.25 rows=11 width=16) (actual\ntime=4585.420..4585.452 rows=11 loops=1)\n  ->  Sort  (cost=1463104.22..1464647.29 rows=617228 width=16)\n(actual time=4585.415..4585.425 rows=11 loops=1)\n        Sort Key: pub_date, id\n        Sort Method:  top-N heapsort  Memory: 17kB\n        ->  Bitmap Heap Scan on ext_feeder_item i\n(cost=13832.40..1449341.79 rows=617228 width=16) (actual\ntime=894.622..4260.441 rows=185625 loops=1)\n              Recheck Cond: (feed_id = ANY ('{54461, ...}'::bigint[]))\n              ->  Bitmap Index Scan on ext_feeder_item_idx\n(cost=0.00..13678.10 rows=617228 width=0) (actual time=884.686..884.686\nrows=185625 loops=1)\n                    Index Cond: (feed_id = ANY ('{54461,\n...}'::bigint[]))\nTotal runtime: 4585.852 ms\n\nIf I perform a query with\nIN(..., 54461)\nit stably (5 attempts) takes 9.3..9.5 secs. to perform.\nQUERY PLAN\nLimit  (cost=1463104.22..1463104.25 rows=11 width=16) (actual\ntime=9330.267..9330.298 rows=11 loops=1)\n  ->  Sort  (cost=1463104.22..1464647.29 rows=617228 width=16)\n(actual time=9330.263..9330.273 rows=11 loops=1)\n        Sort Key: pub_date, id\n        Sort Method:  top-N heapsort  Memory: 17kB\n        ->  Bitmap Heap Scan on ext_feeder_item i\n(cost=13832.40..1449341.79 rows=617228 width=16) (actual\ntime=1018.401..8971.029 rows=185625 loops=1)\n              Recheck Cond: (feed_id = ANY ('{... 
,54461}'::bigint[]))\n              ->  Bitmap Index Scan on ext_feeder_item_idx\n(cost=0.00..13678.10 rows=617228 width=0) (actual\ntime=1008.791..1008.791 rows=185625 loops=1)\n                    Index Cond: (feed_id = ANY ('{...\n,54461}'::bigint[]))\nTotal runtime: 9330.729 ms\n\n\nI don't know what are the roots of the problem, but I think that some\nsymptomatic healing could be applied: the PostgreSQL could sort the IDs\ndue to the statistics.\nSo currently I tend to select the IDs from another table ordering them\ndue to their weights: it's easy for me thanks to denormalization.\n\nAlso I would expect from PostgreSQL that it sorted the values to make\nindex scan more sequential, but this expectation already conflicts with\nthe bug described above :)\n\n\n", "msg_date": "Thu, 29 May 2008 16:38:38 +0700", "msg_from": "Alexey Kupershtokh <[email protected]>", "msg_from_op": true, "msg_subject": "IN() statement values order makes 2x performance hit" }, { "msg_contents": "You may try contrib/intarray, which we developed specially for\ndenormalization.\n\nOleg\nOn Thu, 29 May 2008, Alexey Kupershtokh wrote:\n\n> Hello everybody!\n> \n> I have found a performance issue with 2 equivalent queries stably taking\n> different (~x2) time to finish. In just a few words it can be described\n> like this: if you have a lot of values in an IN() statement, you should\n> put most heavy (specifying most number of rows) ids first.\n> This is mostly just a bug submit, than looking for help.\n> \n> So this is what I have:\n> * RHEL\n> * PostgreSQL 8.3.1\n> * A table ext_feeder_item with ~4.6M records.\n> kia=# \\d+ ext_feeder_item;\n> Table \"public.ext_feeder_item\"\n> Column | Type | Modifiers | Description\n> ----------+--------------------------+------------------------------------------\n> --------------------+-------------\n> id | bigint | not null default\n> nextval('ext_feeder_item_id_seq'::regclass) |\n> feed_id | bigint | not null |\n> pub_date | timestamp with time zone | |\n> Indexes:\n> \"ext_feeder_item_pkey\" PRIMARY KEY, btree (id)\n> \"ext_feeder_item_feed_id_pub_date_idx\" btree (feed_id, pub_date)\n> \"ext_feeder_item_idx\" btree (feed_id)\n> Triggers:\n> ....\n> Has OIDs: no\n> * Statistics for the fields feed_id and pub_date are set to 1000;\n> * The table have just been vacuumed and analyzed.\n> * A simple query to the table:\n> SELECT\n> id\n> FROM\n> ext_feeder_item AS i\n> WHERE\n> i.feed_id IN (...)\n> ORDER BY pub_date DESC, id DESC\n> LIMIT 11 OFFSET 0;\n>\n> with many (~1200) ids in the IN() statement.\n> * The count of rows distribution for these ids (may be thought of as\n> foreign keys in this table) is the following:\n> id = 54461: ~180000 - actually the most heavy id in the whole table.\n> other ids: a single id at most specifies 2032 rows; 6036 rows total.\n> * If I perform a query with\n> IN(54461, ...)\n> it stably (5 attempts) takes 4.5..4.7 secs. 
to perform.\n> QUERY PLAN\n> Limit  (cost=1463104.22..1463104.25 rows=11 width=16) (actual\n> time=4585.420..4585.452 rows=11 loops=1)\n>   ->  Sort  (cost=1463104.22..1464647.29 rows=617228 width=16)\n> (actual time=4585.415..4585.425 rows=11 loops=1)\n>         Sort Key: pub_date, id\n>         Sort Method:  top-N heapsort  Memory: 17kB\n>         ->  Bitmap Heap Scan on ext_feeder_item i\n> (cost=13832.40..1449341.79 rows=617228 width=16) (actual\n> time=894.622..4260.441 rows=185625 loops=1)\n>               Recheck Cond: (feed_id = ANY ('{54461,\n> ...}'::bigint[]))\n>               ->  Bitmap Index Scan on ext_feeder_item_idx\n> (cost=0.00..13678.10 rows=617228 width=0) (actual\n> time=884.686..884.686 rows=185625 loops=1)\n>                     Index Cond: (feed_id = ANY ('{54461,\n> ...}'::bigint[]))\n> Total runtime: 4585.852 ms\n> * If I perform a query with\n> IN(..., 54461)\n> it stably (5 attempts) takes 9.3..9.5 secs. to perform.\n> QUERY PLAN\n> Limit  (cost=1463104.22..1463104.25 rows=11 width=16) (actual\n> time=9330.267..9330.298 rows=11 loops=1)\n>   ->  Sort  (cost=1463104.22..1464647.29 rows=617228 width=16)\n> (actual time=9330.263..9330.273 rows=11 loops=1)\n>         Sort Key: pub_date, id\n>         Sort Method:  top-N heapsort  Memory: 17kB\n>         ->  Bitmap Heap Scan on ext_feeder_item i\n> (cost=13832.40..1449341.79 rows=617228 width=16) (actual\n> time=1018.401..8971.029 rows=185625 loops=1)\n>               Recheck Cond: (feed_id = ANY ('{...\n> ,54461}'::bigint[]))\n>               ->  Bitmap Index Scan on ext_feeder_item_idx\n> (cost=0.00..13678.10 rows=617228 width=0) (actual\n> time=1008.791..1008.791 rows=185625 loops=1)\n>                     Index Cond: (feed_id = ANY ('{...\n> ,54461}'::bigint[]))\n> Total runtime: 9330.729 ms\n> I don't know what are the roots of the problem, but I think that some\n> symptomatic healing could be applied: the PostgreSQL could sort the IDs\n> due to the statistics.\n> So currently I tend to select the IDs from another table ordering them\n> due to their weights: it's easy for me thanks to denormalization.\n> \n> Also I would expect from PostgreSQL that it sorted the values to make\n> index scan more sequential, but this expectation already conflicts with\n> the bug described above :)\n> \n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83", "msg_date": "Thu, 29 May 2008 13:39:57 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN() statement values order makes 2x performance hit" }, { "msg_contents": "Thanks for the response.\nI've taken a look at this feature. But it seems unapplicable to my case: \nthis table is not a many2many relation which seems the most common case \nof the intarray usage.\nThe table just stores an information about items (rss posts): what feeds \n(rss) are they from, and their publication date. Id in the table is PK.\n\nOleg Bartunov wrote:\n> You may try contrib/intarray, which we developed specially for\n> denormalization.\n>\n> Oleg\n> On Thu, 29 May 2008, Alexey Kupershtokh wrote:\n>\n>> Hello everybody!\n>>\n>> I have found a performance issue with 2 equivalent queries stably taking\n>> different (~x2) time to finish. 
In just a few words it can be described\n>> like this: if you have a lot of values in an IN() statement, you should\n>> put most heavy (specifying most number of rows) ids first.\n>> This is mostly just a bug submit, than looking for help.\n>>\n>> So this is what I have:\n>> * RHEL\n>> * PostgreSQL 8.3.1\n>> * A table ext_feeder_item with ~4.6M records.\n>> kia=# \\d+ ext_feeder_item;\n>> Table \"public.ext_feeder_item\"\n>> Column | Type | Modifiers | Description\n>> ----------+--------------------------+------------------------------------------ \n>>\n>> --------------------+-------------\n>> id | bigint | not null default\n>> nextval('ext_feeder_item_id_seq'::regclass) |\n>> feed_id | bigint | not null |\n>> pub_date | timestamp with time zone | |\n>> Indexes:\n>> \"ext_feeder_item_pkey\" PRIMARY KEY, btree (id)\n>> \"ext_feeder_item_feed_id_pub_date_idx\" btree (feed_id, pub_date)\n>> \"ext_feeder_item_idx\" btree (feed_id)\n>> Triggers:\n>> ....\n>> Has OIDs: no\n>> * Statistics for the fields feed_id and pub_date are set to 1000;\n>> * The table have just been vacuumed and analyzed.\n>> * A simple query to the table:\n>> SELECT\n>> id\n>> FROM\n>> ext_feeder_item AS i\n>> WHERE\n>> i.feed_id IN (...)\n>> ORDER BY pub_date DESC, id DESC\n>> LIMIT 11 OFFSET 0;\n>>\n>> with many (~1200) ids in the IN() statement.\n>> * The count of rows distribution for these ids (may be thought of as\n>> foreign keys in this table) is the following:\n>> id = 54461: ~180000 - actually the most heavy id in the whole table.\n>> other ids: a single id at most specifies 2032 rows; 6036 rows total.\n>> * If I perform a query with\n>> IN(54461, ...)\n>> it stably (5 attempts) takes 4.5..4.7 secs. to perform.\n>> QUERY PLAN\n>> Limit (cost=1463104.22..1463104.25 rows=11 width=16) (actual\n>> time=4585.420..4585.452 rows=11 loops=1)\n>> -> Sort (cost=1463104.22..1464647.29 rows=617228 width=16)\n>> (actual time=4585.415..4585.425 rows=11 loops=1)\n>> Sort Key: pub_date, id\n>> Sort Method: top-N heapsort Memory: 17kB\n>> -> Bitmap Heap Scan on ext_feeder_item i\n>> (cost=13832.40..1449341.79 rows=617228 width=16) (actual\n>> time=894.622..4260.441 rows=185625 loops=1)\n>> Recheck Cond: (feed_id = ANY ('{54461,\n>> ...}'::bigint[]))\n>> -> Bitmap Index Scan on ext_feeder_item_idx\n>> (cost=0.00..13678.10 rows=617228 width=0) (actual\n>> time=884.686..884.686 rows=185625 loops=1)\n>> Index Cond: (feed_id = ANY ('{54461,\n>> ...}'::bigint[]))\n>> Total runtime: 4585.852 ms\n>> * If I perform a query with\n>> IN(..., 54461)\n>> it stably (5 attempts) takes 9.3..9.5 secs. 
to perform.\n>> QUERY PLAN\n>> Limit (cost=1463104.22..1463104.25 rows=11 width=16) (actual\n>> time=9330.267..9330.298 rows=11 loops=1)\n>> -> Sort (cost=1463104.22..1464647.29 rows=617228 width=16)\n>> (actual time=9330.263..9330.273 rows=11 loops=1)\n>> Sort Key: pub_date, id\n>> Sort Method: top-N heapsort Memory: 17kB\n>> -> Bitmap Heap Scan on ext_feeder_item i\n>> (cost=13832.40..1449341.79 rows=617228 width=16) (actual\n>> time=1018.401..8971.029 rows=185625 loops=1)\n>> Recheck Cond: (feed_id = ANY ('{...\n>> ,54461}'::bigint[]))\n>> -> Bitmap Index Scan on ext_feeder_item_idx\n>> (cost=0.00..13678.10 rows=617228 width=0) (actual\n>> time=1008.791..1008.791 rows=185625 loops=1)\n>> Index Cond: (feed_id = ANY ('{...\n>> ,54461}'::bigint[]))\n>> Total runtime: 9330.729 ms\n>> I don't know what are the roots of the problem, but I think that some\n>> symptomatic healing could be applied: the PostgreSQL could sort the IDs\n>> due to the statistics.\n>> So currently I tend to select the IDs from another table ordering them\n>> due to their weights: it's easy for me thanks to denormalization.\n>>\n>> Also I would expect from PostgreSQL that it sorted the values to make\n>> index scan more sequential, but this expectation already conflicts with\n>> the bug described above :)\n>>\n>>\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Thu, 29 May 2008 16:52:59 +0700", "msg_from": "Alexey Kupershtokh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN() statement values order makes 2x performance hit" } ]
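A sketch of the workaround Alexey mentions at the end -- building the literal IN() list already ordered by weight, heaviest feed first -- since the planner itself does not reorder the array. The ext_feeder_feed table and its item_count column are assumptions (the denormalized table is never shown in the thread); the idea is simply:

SELECT f.id
  FROM ext_feeder_feed AS f            -- hypothetical per-feed table carrying a
 WHERE f.id IN (54461, 54462, 54463)   -- denormalized count of items per feed
 ORDER BY f.item_count DESC;           -- heaviest feeds first

and then paste the ids in that order into the big IN (...) list of the real query, so that the behaviour reported above (heavy id first running roughly twice as fast as heavy id last) works in your favour.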
[ { "msg_contents": "[Attn list-queue maintainers: Please drop the earlier version\nof this email that I accidentally sent from an unsubscribed address. ]\n\nHi, \n\nI'm having a strange problem with a slow-running select query. The\nquery I use in production ends in \"LIMIT 1\", and it runs very slowly.\nBut when I remove the \"LIMIT 1\", the query runs quite quickly. This\nbehavior has stumped a couple smart DBAs.\n\nThe full queries and EXPLAIN ANALYZE plans are included below, but by\nway of explanation/observation:\n\n1) The \"LIMIT 1\" case will sometimes be quicker (but still much slower\nthan the non-\"LIMIT 1\" case) for different values of\ncalendar_group_id.\n\n2) The query below is a slightly simplified version of the one I\nactually use. The real one includes more conditions which explain why\neach table is joined. For reference, the original query is quoted at\nthe end [1]. The original query exhibits the same behavior as the\nsimplified versions w.r.t. the \"LIMIT 1\" case taking _much_ longer\n(even longer than the simplified version) than the non-\"LIMIT 1\" case,\nand uses the same plans.\n\n\nCan anyone explain why such a slow plan is chosen when the \"LIMIT 1\"\nis present? Is there anything I can do to speed this query up?\nThanks.\n\n-chris\n\n\nproduction=> select version();\n version \n------------------------------------------------------------------------------\n PostgreSQL 8.2.6 on x86_64-pc-linux-gnu, compiled by GCC x86_64-pc-linux-gnu-gcc (GCC) 4.1.2 (Gentoo 4.1.2 p1.0.2)\n(1 row)\n\nproduction=> analyze calendar_groups;\nANALYZE\nproduction=> analyze calendar_links;\nANALYZE\nproduction=> analyze calendars;\nANALYZE\nproduction=> analyze event_updates;\nANALYZE\nproduction=> EXPLAIN ANALYZE SELECT event_updates.*\n FROM event_updates\n INNER JOIN calendars ON event_updates.feed_id = calendars.id\n INNER JOIN calendar_links ON calendars.id = calendar_links.source_tracker_id\n WHERE (calendar_links.calendar_group_id = 3640)\n ORDER BY event_updates.id DESC\n LIMIT 1;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------\n Limit (cost=16.55..91.73 rows=1 width=2752) (actual time=27810.058..27810.059 rows=1 loops=1)\n -> Nested Loop (cost=16.55..695694.18 rows=9254 width=2752) (actual time=27810.054..27810.054 rows=1 loops=1)\n Join Filter: (event_updates.feed_id = calendars.id)\n -> Index Scan Backward using event_updates_pkey on event_updates (cost=0.00..494429.30 rows=8944370 width=2752) (actual time=0.030..7452.142 rows=5135706 loops=1)\n -> Materialize (cost=16.55..16.56 rows=1 width=8) (actual time=0.001..0.002 rows=1 loops=5135706)\n -> Nested Loop (cost=0.00..16.55 rows=1 width=8) (actual time=0.029..0.034 rows=1 loops=1)\n -> Index Scan using index_calendar_links_on_calendar_group_id_and_source_tracker_id on calendar_links (cost=0.00..8.27 rows=1 width=4) (actual time=0.012..0.013 rows=1 loops=1)\n Index Cond: (calendar_group_id = 3640)\n -> Index Scan using harvest_trackers_pkey on calendars (cost=0.00..8.27 rows=1 width=4) (actual time=0.012..0.013 rows=1 loops=1)\n Index Cond: (calendars.id = calendar_links.source_tracker_id)\n Total runtime: 27810.161 ms\n(11 rows)\n\nproduction=> EXPLAIN ANALYZE SELECT event_updates.* FROM event_updates\n INNER JOIN calendars ON event_updates.feed_id = calendars.id\n INNER JOIN calendar_links ON calendars.id = calendar_links.source_tracker_id\n WHERE (calendar_links.calendar_group_id = 3640)\n ORDER BY event_updates.id DESC;\n QUERY PLAN 
\n-----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=43376.36..43399.50 rows=9256 width=2752) (actual time=10.178..10.205 rows=36 loops=1)\n Sort Key: event_updates.id\n -> Nested Loop (cost=249.86..31755.56 rows=9256 width=2752) (actual time=9.957..10.098 rows=36 loops=1)\n -> Nested Loop (cost=0.00..16.55 rows=1 width=8) (actual time=9.868..9.873 rows=1 loops=1)\n -> Index Scan using index_calendar_links_on_calendar_group_id_and_source_tracker_id on calendar_links (cost=0.00..8.27 rows=1 width=4) (actual time=9.824..9.825 rows=1 loops=1)\n Index Cond: (calendar_group_id = 3640)\n -> Index Scan using harvest_trackers_pkey on calendars (cost=0.00..8.27 rows=1 width=4) (actual time=0.034..0.036 rows=1 loops=1)\n Index Cond: (calendars.id = calendar_links.source_tracker_id)\n -> Bitmap Heap Scan on event_updates (cost=249.86..31623.01 rows=9280 width=2752) (actual time=0.080..0.138 rows=36 loops=1)\n Recheck Cond: (event_updates.feed_id = calendars.id)\n -> Bitmap Index Scan on index_event_updates_on_feed_id_and_feed_type (cost=0.00..247.54 rows=9280 width=0) (actual time=0.056..0.056 rows=36 loops=1)\n Index Cond: (event_updates.feed_id = calendars.id)\n Total runtime: 10.337 ms\n(13 rows)\n\n\n\n---------\n[1] The original, unsimplified query: \nSELECT event_updates.* FROM event_updates\nINNER JOIN calendars ON (event_updates.feed_id = calendars.id AND event_updates.feed_type = E'Calendar')\nINNER JOIN calendar_links ON (calendars.id = calendar_links.source_tracker_id AND calendars.type = E'SourceTracker')\nWHERE (calendar_links.calendar_group_id = 3640 AND calendars.deactivated_at IS NULL)\nORDER BY event_updates.id DESC\nLIMIT 1\n", "msg_date": "Thu, 29 May 2008 11:47:34 -0400", "msg_from": "Chris Shoemaker <[email protected]>", "msg_from_op": true, "msg_subject": "Adding \"LIMIT 1\" kills performance." }, { "msg_contents": "Chris Shoemaker wrote:\n> [Attn list-queue maintainers: Please drop the earlier version\n> of this email that I accidentally sent from an unsubscribed address. ]\n> \n> Hi, \n> \n> I'm having a strange problem with a slow-running select query. The\n> query I use in production ends in \"LIMIT 1\", and it runs very slowly.\n> But when I remove the \"LIMIT 1\", the query runs quite quickly. This\n> behavior has stumped a couple smart DBAs.\n> \n\n> Can anyone explain why such a slow plan is chosen when the \"LIMIT 1\"\n> is present? Is there anything I can do to speed this query up?\n> Thanks.\n> \n\n From what I know using an ORDER BY and a LIMIT can often prevent \n*shortening* the query as it still needs to find all rows to perform the \norder by before it limits.\nThe difference in plans eludes me.\n\n> production=> EXPLAIN ANALYZE SELECT event_updates.*\n> FROM event_updates\n> INNER JOIN calendars ON event_updates.feed_id = calendars.id\n> INNER JOIN calendar_links ON calendars.id = calendar_links.source_tracker_id\n> WHERE (calendar_links.calendar_group_id = 3640)\n> ORDER BY event_updates.id DESC\n> LIMIT 1;\n\nDoes removing the DESC from the order by give the same variation in \nplans? Or is this only when using ORDER BY ... 
DESC LIMIT 1?\n\n\nOne thing that interests me is try -\n\nEXPLAIN ANALYZE SELECT * FROM (\n\nSELECT event_updates.*\nFROM event_updates\nINNER JOIN calendars ON event_updates.feed_id = calendars.id\nINNER JOIN calendar_links ON calendars.id = calendar_links.source_tracker_id\nWHERE (calendar_links.calendar_group_id = 3640)\nORDER BY event_updates.id DESC\n) AS foo\n\nLIMIT 1;\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Fri, 30 May 2008 02:23:46 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding \"LIMIT 1\" kills performance." }, { "msg_contents": "On Fri, May 30, 2008 at 02:23:46AM +0930, Shane Ambler wrote:\n> Chris Shoemaker wrote:\n>> [Attn list-queue maintainers: Please drop the earlier version\n>> of this email that I accidentally sent from an unsubscribed address. ]\n>>\n>> Hi, \n>> I'm having a strange problem with a slow-running select query. The\n>> query I use in production ends in \"LIMIT 1\", and it runs very slowly.\n>> But when I remove the \"LIMIT 1\", the query runs quite quickly. This\n>> behavior has stumped a couple smart DBAs.\n>>\n>\n>> Can anyone explain why such a slow plan is chosen when the \"LIMIT 1\"\n>> is present? Is there anything I can do to speed this query up?\n>> Thanks.\n>>\n>\n> From what I know using an ORDER BY and a LIMIT can often prevent \n> *shortening* the query as it still needs to find all rows to perform the \n> order by before it limits.\n\nThat makes complete sense, of course.\n\n> The difference in plans eludes me.\n>\n>> production=> EXPLAIN ANALYZE SELECT event_updates.*\n>> FROM event_updates\n>> INNER JOIN calendars ON event_updates.feed_id = calendars.id\n>> INNER JOIN calendar_links ON calendars.id = calendar_links.source_tracker_id\n>> WHERE (calendar_links.calendar_group_id = 3640)\n>> ORDER BY event_updates.id DESC\n>> LIMIT 1;\n>\n> Does removing the DESC from the order by give the same variation in plans? \n> Or is this only when using ORDER BY ... DESC LIMIT 1?\n\nExcept for using Index Scan instead of Index Scan Backward, the plan\nis the same with ORDER BY ... or ORDER BY ... ASC as with ORDER BY\n... DESC. 
In case you're wondering what would happen without the\nORDER BY at all:\n\nproduction=> EXPLAIN SELECT event_updates.*\nFROM event_updates\nINNER JOIN calendars ON event_updates.feed_id = calendars.id\nINNER JOIN calendar_links ON calendars.id = calendar_links.source_tracker_id\nWHERE (calendar_links.calendar_group_id = 3640)\nLIMIT 1; \n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3.95 rows=1 width=2752)\n -> Nested Loop (cost=0.00..36992.38 rows=9362 width=2752)\n -> Nested Loop (cost=0.00..16.55 rows=1 width=8)\n -> Index Scan using index_calendar_links_on_calendar_group_id_and_source_tracker_id on calendar_links (cost=0.00..8.27 rows=1 width=4)\n Index Cond: (calendar_group_id = 3640)\n -> Index Scan using harvest_trackers_pkey on calendars (cost=0.00..8.27 rows=1 width=4)\n Index Cond: (calendars.id = calendar_links.source_tracker_id)\n -> Index Scan using index_event_updates_on_feed_id_and_feed_type on event_updates (cost=0.00..36858.50 rows=9386 width=2752)\n Index Cond: (event_updates.feed_id = calendars.id)\n(9 rows)\n\n\n>\n>\n> One thing that interests me is try -\n>\n> EXPLAIN ANALYZE SELECT * FROM (\n>\n> SELECT event_updates.*\n> FROM event_updates\n> INNER JOIN calendars ON event_updates.feed_id = calendars.id\n> INNER JOIN calendar_links ON calendars.id = calendar_links.source_tracker_id\n> WHERE (calendar_links.calendar_group_id = 3640)\n> ORDER BY event_updates.id DESC\n> ) AS foo\n>\n> LIMIT 1;\n\nThat's an interesting experiment. Here are the results:\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=16.55..91.74 rows=1 width=6027) (actual time=490709.355..490709.357 rows=1 loops=1)\n -> Nested Loop (cost=16.55..703794.95 rows=9361 width=2752) (actual time=490709.352..490709.352 rows=1 loops=1)\n Join Filter: (event_updates.feed_id = calendars.id)\n -> Index Scan Backward using event_updates_pkey on event_updates (cost=0.00..500211.53 rows=9047416 width=2752) (actual time=0.222..469082.071 rows=5251179 loops=1)\n -> Materialize (cost=16.55..16.56 rows=1 width=8) (actual time=0.001..0.002 rows=1 loops=5251179)\n -> Nested Loop (cost=0.00..16.55 rows=1 width=8) (actual time=0.240..0.246 rows=1 loops=1)\n -> Index Scan using index_calendar_links_on_calendar_group_id_and_source_tracker_id on calendar_links (cost=0.00..8.27 rows=1 width=4) (actual time=0.108..0.109 rows=1 loops=1)\n Index Cond: (calendar_group_id = 3640)\n -> Index Scan using harvest_trackers_pkey on calendars (cost=0.00..8.27 rows=1 width=4) (actual time=0.127..0.129 rows=1 loops=1)\n Index Cond: (calendars.id = calendar_links.source_tracker_id)\n Total runtime: 490709.576 ms\n(11 rows)\n\n\nThat is, no real change in the performance.\n\nStill stumped,\n-chris\n", "msg_date": "Thu, 29 May 2008 13:47:12 -0400", "msg_from": "Chris Shoemaker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Adding \"LIMIT 1\" kills performance." 
}, { "msg_contents": "Chris Shoemaker <[email protected]> writes:\n> Still stumped,\n\nThe short answer here is that the planner is guessing that scanning the\nindex in ID order will come across the desired row (ie, the first one\nmatching the join condition) in less time than it will take to select\nall the joinable rows, sort them by ID, and take the first one. It's\nwrong in this case, but the plan is not unreasonable on its face.\nThe problem boils down to a misestimate of how many join rows there are.\nYou might get better results by increasing the statistics targets.\n\nThere are plenty of similar cases in the list archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 May 2008 13:59:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Adding \"LIMIT 1\" kills performance. " } ]
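A concrete way to try the statistics-target suggestion above (a sketch: 1000 is only an illustrative target, and which columns matter most is a guess -- the join and filter columns of the problem query are the natural candidates):

ALTER TABLE calendar_links ALTER COLUMN calendar_group_id SET STATISTICS 1000;
ALTER TABLE calendar_links ALTER COLUMN source_tracker_id SET STATISTICS 1000;
ALTER TABLE event_updates  ALTER COLUMN feed_id           SET STATISTICS 1000;
ANALYZE calendar_links;
ANALYZE event_updates;

Then re-run the EXPLAIN ANALYZE of the LIMIT 1 form: if the join-row estimate (rows=9254 in the plan above, against the 36 rows the join actually produces) comes down, the planner should stop betting that a backward scan of event_updates_pkey will hit a matching row early.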
[ { "msg_contents": "I'm doing some analysis on temporal usages, and was hoping to make use\nof OVERLAPS, but it does not appear that it makes use of indices.\n\nCouching this in an example... I created a table, t1, thus:\n\nmetadata=# \\d t1\n Table \"public.t1\"\n Column | Type | Modifiers \n--------+--------------------------+-------------------------------------------------------\n id | integer | not null default nextval('t1_id_seq'::regclass)\n t1 | timestamp with time zone | not null default now()\n t2 | timestamp with time zone | not null default 'infinity'::timestamp with time zone\n data | text | not null\nIndexes:\n \"t1_pkey\" PRIMARY KEY, btree (id)\n \"f2\" btree (id) WHERE t2 = 'infinity'::timestamp with time zone\n \"t1t1\" btree (t1)\n \"t1t2\" btree (t2)\n\nWhen entries go in, they default to having an effective date range\nfrom now() until 'infinity'.\n\nI then went off and seeded a bunch of data into the table, inserting\nvalues:\n\nfor i in `cat /etc/dictionaries-common/words | head 2000`; do\n psql -d metadata -c \"insert into t1 (data) values ('$i');\"\ndone\n\nThen, I started doing temporal updates, thus:\n\nfor i in `cat /etc/dictionaries-common/words`; do\npsql -d metadata -c \"insert into t1 (data) values ('$i');update t1 set t2 = now() where t2 = 'infinity' and id in (select id from t1 where t2 = 'infinity' order by random() limit 1);\"\ndone\n\nThis terminates many of those entries, and creates a new one that is\neffective \"to infinity.\"\n\nAfter running this for a while, I have a reasonably meaningful amount\nof data in the table:\n\nmetadata=# select count(*) from t1; select count(*) from t1 where t2 = 'infinity';\n count \n--------\n 125310\n(1 row)\n\n count \n-------\n 2177\n(1 row)\n\nSearching for the \"active\" items in the table, via a constructed 'overlap':\n\nmetadata=# explain analyze select count(*) from t1 where t1 <= now() and t2 >= now();\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=98.13..98.14 rows=1 width=0) (actual time=8.104..8.105 rows=1 loops=1)\n -> Index Scan using t1t2 on t1 (cost=0.00..93.95 rows=1671 width=0) (actual time=0.116..6.374 rows=2177 loops=1)\n Index Cond: (t2 >= now())\n Filter: (t1 <= now())\n Total runtime: 8.193 ms\n(5 rows)\n\nNote, that makes use of the index on column t2, and runs nice and\nquick. 
(And notice that the rows found, 2177, agrees with the earlier\ncount.)\n\nUnfortunately, when I try using OVERLAPS, it reverts to a Seq Scan.\n\nmetadata=# explain analyze select * from t1 where (t1,t2) overlaps (now(), now());\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------\n Seq Scan on t1 (cost=0.00..3156.59 rows=43135 width=24) (actual time=171.248..205.941 rows=2177 loops=1)\n Filter: \"overlaps\"(t1, t2, now(), now())\n Total runtime: 207.508 ms\n(3 rows)\n\nI would surely think that I have enough data in the table for the\nstats to be good, and the first query certainly does harness the index\non t2 to determine if records are overlapping (now(),now()).\n\nIs it possible that we need to have some improvement to the optimizer\nso that OVERLAPS could make use of the indices?\n-- \nselect 'cbbrowne' || '@' || 'cbbrowne.com';\nhttp://linuxfinances.info/info/lsf.html\n\"Very little is known about the War of 1812 because the Americans lost\nit.\" -- Eric Nicol\n", "msg_date": "Thu, 29 May 2008 12:46:39 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": true, "msg_subject": "OVERLAPS is slow" } ]
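For reference, the manual expansion used in the fast query above generalizes from a point in time to an arbitrary window, and it is this plain-comparison form -- not OVERLAPS itself -- that the btree indexes on t1 and t2 can serve. A sketch against the same table (the window bounds are arbitrary, and treating both intervals as closed is close to, but not exactly, the SQL-spec OVERLAPS boundary behaviour):

SELECT count(*)
  FROM t1
 WHERE t1 <= '2008-05-29 12:00'::timestamptz   -- became effective before the window ends
   AND t2 >= '2008-05-01 00:00'::timestamptz;  -- and not terminated before the window starts

Until the optimizer learns to expand OVERLAPS into index-usable clauses, writing the comparisons out by hand like this is the practical answer to the question at the end of the message.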
[ { "msg_contents": "Based on feedback after the sessions I did few more tests which might be \nuseful to share\n\nOne point that was suggested to get each clients do more work and reduce \nthe number of clients.. The igen benchmarks was flexible and what I did \nwas remove all think time from it and repeated the test till the \nscalability stops (This was done with CVS downloaded yesterday)\n\nNote with this no think time concept, each clients can be about 75% CPU \nbusy from what I observed. running it I found the clients scaling up \nsaturates at about 60 now (compared to 500 from the original test). The \npeak throughput was at about 50 users (using synchrnous_commit=off)\n\nHere is the interesting DTrace Lock Ouput state (lock id, mode of lock \nand time in ns spent waiting for lock in a 10-sec snapshot (Just taking \nthe last few top ones in ascending order):\n\nWith less than 20 users it is WALInsert at the top:\n52 Exclusive 721950129\n4 Exclusive 768537190\n46 Exclusive 842063837\n7 Exclusive 1031851713\n\nWith 35 Users:\n52 Exclusive 2599074739\n4 Exclusive 2647927574\n46 Exclusive 2789581991\n7 Exclusive 3220008691\n\nAt the peak at about 50 users that I saw earlier (PEAK Throughput):\n46 Exclusive 3669210393\n4 Exclusive 6024966938\n52 Exclusive 6529168107\n7 Exclusive 9408290367\n\nWith about 60 users where the throughput actually starts to drop \n(throughput drops)\n41 Exclusive 4570660567\n52 Exclusive 10706741643\n46 Exclusive 13152005125\n 4 Exclusive 13550187806\n 7 Exclusive 22146882562\n\n\nWith about 100 users ( below the peak value)\n42 Exclusive 4238582775\n46 Exclusive 6773515243\n7 Exclusive 7467346038\n52 Exclusive 9846216440\n4 Shared 22528501166\n4 Exclusive 223043774037\n\nSo it seems when both shared and exclusive time for ProcArrayLock wait \nare the top 2 it is basically saturated in terms of throughput it can \nhandle.\n\nOptimizing wait queues will help improve shared which might help \nExclusive a bit but eventually Exclusive for ProcArray will limit \nscaling with as few as 60-70 users.\n\n\nLock hold times are below (though taken from different run)\nwith 30 users:\n\n Lock Id Mode Combined Time (ns)\n 1616992 Exclusive 1199791629\n 4 Exclusive 1399371867\n 34 Exclusive 1426153620\n 1616978 Exclusive 1528327035\n 1616990 Exclusive 1546374298\n 1616988 Exclusive 1553461559\n 5 Exclusive 2477558484\n\nWith 50+ users\n Lock Id Mode Combined Time (ns)\n 4 Exclusive 1438509198\n 1616992 Exclusive 1450973466\n 1616978 Exclusive 1505626978\n 1616990 Exclusive 1850432217\n 1616988 Exclusive 2033226225\n 34 Exclusive 2098542547\n 5 Exclusive 3280151374\n\nWith 100 users\n\n Lock Id Mode Combined Time (ns)\n 1616992 Exclusive 1206516505\n 1616988 Exclusive 1486704087\n 1616990 Exclusive 1521900997\n 34 Exclusive 1532815803\n 1616978 Exclusive 1541986895\n 5 Exclusive 2179043424\n 5 2395098279\n\n(Why 5 was printing with blank??)\nRerunning it with slight variation of the script\n\n\n Lock Id Mode Combined Time (ns)\n 1616996 0 1167708953\n 36 0 1291958451\n 5 4299305160 1344486968\n 4 0 1347557908\n 1616978 0 1377931882\n 34 0 1724752938\n 5 0 2079012548\n\nLooks like trend of 4's hold time looks similar to previous ones.. \nthough the new kid is 5 with mode <> 0,1 .. not sure if that is causing \nproblems..What mode is \"4299305160\" for Lock 5 (SInvalLock) ? 
Anyway at \nthis point the wait time for 4 increases to a point where the database \nis not scaling anymore\n\nany thoughts?\n\n\n-Jignesh\n\n\n\n", "msg_date": "Thu, 29 May 2008 18:08:10 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "ProcArrayLock (The Saga continues)" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n\n> Note with this no think time concept, each clients can be about 75% CPU busy\n> from what I observed. running it I found the clients scaling up saturates at\n> about 60 now (compared to 500 from the original test). The peak throughput was\n> at about 50 users (using synchrnous_commit=off)\n\nSo to get the maximum throughput on the benchmark with think times you want to\naggregate the clients about 10:1 with a connection pooler or some middleware\nlayer of some kind, it seems.\n\nIt's still interesting to find the choke points for large numbers of\nconnections. But I think not because it's limiting your benchmark results --\nthat would be better addressed by using fewer connections -- just for the sake\nof knowing where problems loom on the horizon.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Fri, 30 May 2008 01:19:11 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ProcArrayLock (The Saga continues)" } ]
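A rough way to see how far apart connection count and concurrent work really are, which is the kind of ratio behind the ~10:1 pooling suggestion above (a sketch, not part of the igen harness; on 8.3 an idle backend reports the literal string '<IDLE>' in pg_stat_activity.current_query):

SELECT count(*) AS backends,
       sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle
  FROM pg_stat_activity;

Sampling that repeatedly while the benchmark runs gives the busy fraction per client; with think time in the workload it is usually low, which is the argument for funnelling many application connections through a much smaller pool of backends instead of letting ProcArrayLock arbitrate hundreds of them directly.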
[ { "msg_contents": "I have a big table that is used in many queries. Most used index is\ncreated on date field. Number of records in this table when date field\nis saturday is about 5 times smaller than other days, on sunday this\nnumber is always 0. Statistics target is 1000. Many queries have\nproblems when condition on this table looks like \"d between '2007-05-12'\nand '2007-05-12'\" (saturday).\n\nEXPLAIN ANALYZE\nSELECT *\nFROM i\nWHERE d BETWEEN '2007-05-12' AND '2007-05-12'\n\nIndex Scan using i_d on i (cost=0.00..2.39 rows=1 width=402) (actual\ntime=0.053..4.284 rows=1721 loops=1)\n Index Cond: ((d >= '2007-05-12'::date) AND (d <= '2007-05-12'::date))\nTotal runtime: 6.645 ms\n\nEXPLAIN ANALYZE\nSELECT *\nFROM i\nWHERE d = '2007-05-12'\n\nIndex Scan using i_d on i (cost=0.00..38.97 rows=1572 width=402)\n(actual time=0.044..4.250 rows=1721 loops=1)\n Index Cond: (d = '2007-05-12'::date)\nTotal runtime: 6.619 ms\n\nIs there a way to solve this problem?\n\n", "msg_date": "Fri, 30 May 2008 20:02:41 +0900", "msg_from": "Vlad Arkhipov <[email protected]>", "msg_from_op": true, "msg_subject": "Statistics issue" }, { "msg_contents": "Vlad Arkhipov <[email protected]> writes:\n> EXPLAIN ANALYZE\n> SELECT *\n> FROM i\n> WHERE d BETWEEN '2007-05-12' AND '2007-05-12'\n\n> Index Scan using i_d on i (cost=0.00..2.39 rows=1 width=402) (actual\n> time=0.053..4.284 rows=1721 loops=1)\n> Index Cond: ((d >= '2007-05-12'::date) AND (d <= '2007-05-12'::date))\n> Total runtime: 6.645 ms\n\n> EXPLAIN ANALYZE\n> SELECT *\n> FROM i\n> WHERE d = '2007-05-12'\n\n> Index Scan using i_d on i (cost=0.00..38.97 rows=1572 width=402)\n> (actual time=0.044..4.250 rows=1721 loops=1)\n> Index Cond: (d = '2007-05-12'::date)\n> Total runtime: 6.619 ms\n\nHmm, I wonder whether we shouldn't do something like this\nhttp://archives.postgresql.org/pgsql-committers/2008-03/msg00128.php\nfor all range conditions, not just those made up by\nprefix_selectivity().\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 May 2008 11:51:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Statistics issue " } ]
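Until something like the planner change Tom mentions exists, the workaround on the application side is to collapse a zero-width BETWEEN into an equality test, since (as the two plans above show) the equality estimate is accurate while the degenerate range is badly underestimated. A sketch of the rule, using the table from the thread (the non-degenerate dates are arbitrary):

-- when :from and :to differ, the range form is handled by the histogram as usual:
SELECT * FROM i WHERE d >= '2007-05-07' AND d <= '2007-05-12';
-- when they collapse to the same day, issue equality instead of BETWEEN:
SELECT * FROM i WHERE d = '2007-05-12';

BETWEEN x AND y is expanded internally to d >= x AND d <= y, so only the equality rewrite changes the estimate.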
[ { "msg_contents": "As i've been looking over the more complicated queries that i have \nwritten and gotten allot of help in redoing the quires from you all, \nthanks again.\n\nI have noticed that estimated Cost to do the query is way off from \nActual. The queries don't run slow at least not to me. \nThe Estimated Cost is way higher than the actual time on Hash joins but \non the scan through the tables the Estimate Cost to Actual flips where \nActual is way higher than Estimated Cost\n\nI have tried increasing and decreasing the Stats on the important \ncolumns with no changes\n\nChanged the stats from 10 to 50, 100 and 150, 200 and 250.\n\nThe Estimated Cost always stays the same. What is the process to track \ndown what is going on why the estimate cost is off\n\n----------------Query/View----------------\nSELECT (wo.wo_number::text || '-'::text) || wo.wo_subnumber::text AS \nwo_number, wo.wo_qtyord, 'Labor' AS valuetype, item.item_number AS \nparentitem, wooper.wooper_descrip1 AS wooper_des, \nwooperpost.wooperpost_seqnumber AS wooperpost, wooperpost.wooperpost_qty \nAS qty, wooperpost.wooperpost_sutime AS setuptime_matcost, \nwooperpost.wooperpost_sucost AS setupcost_issuecost, \nwooperpost.wooperpost_rntime AS runtime_scrapqty, \nwooperpost.wooperpost_rncost AS runcost_scrapcost, wo.wo_status, \nwooperpost.wooperpost_timestamp::date AS opposteddate, \nwo.wo_completed_date::date AS wocompletedate, \nwo.wo_processstart_date::date AS wostarteddated\n FROM wo, wooper, wooperpost, itemsite, item\n WHERE wo.wo_id = wooper.wooper_wo_id AND wooper.wooper_id = \nwooperpost.wooperpost_wooper_id AND wo.wo_itemsite_id = \nitemsite.itemsite_id AND itemsite.itemsite_item_id = item.item_id\nUNION\n SELECT (wo.wo_number::text || '-'::text) || wo.wo_subnumber::text AS \nwo_number, wo.wo_qtyord, 'Material' AS valuetype, pitem.item_number AS \nparentitem,\n CASE\n WHEN womatl.womatl_type = 'I'::bpchar THEN citem.item_number\n ELSE ( SELECT costelem.costelem_type\n FROM costelem, itemcost, womatl\n WHERE womatl.womatl_itemcost_id = itemcost.itemcost_id AND \nitemcost.itemcost_costelem_id = costelem.costelem_id\n LIMIT 1)\n END AS wooper_des, 0 AS wooperpost, \nwomatlpost.womatlpost_qtyposted AS qty, round(( SELECT \nsum(womatlpost.womatlpost_cost) / sum(womatlpost.womatlpost_qtyposted) \nAS unitcost\n FROM womatlpost\n WHERE womatlpost.womatlpost_womatl_id = womatl.womatl_id AND \nwomatlpost.womatlpost_qtyposted > 0::numeric), 4) AS setuptime_matcost, \nwomatlpost.womatlpost_cost AS setupcost_issuecost, 0.0 AS \nruntime_scrapqty, 0.0 AS runcost_scrapcost, wo.wo_status, \nwomatlpost.womatlpost_dateposted::date AS opposteddate, \nwo.wo_completed_date::date AS wocompletedate, \nwo.wo_processstart_date::date AS wostarteddated\n FROM womatl, wo, itemsite citemsite, item citem, itemsite pitemsite, \nitem pitem, womatlpost\n WHERE wo.wo_id = womatl.womatl_wo_id AND citemsite.itemsite_id = \nwomatl.womatl_itemsite_id AND citem.item_id = citemsite.itemsite_item_id \nAND pitemsite.itemsite_id = wo.wo_itemsite_id AND pitem.item_id = \npitemsite.itemsite_item_id AND womatlpost.womatlpost_womatl_id = \nwomatl.womatl_id\n ORDER BY 1;\n\n-------------End Query-----------\n\n-------------Begin Analyze---------\n\n\"Unique (cost=76456.48..77934.64 rows=36954 width=115) (actual \ntime=1618.244..1729.004 rows=36747 loops=1)\"\n\" -> Sort (cost=76456.48..76548.86 rows=36954 width=115) (actual \ntime=1618.241..1641.059 rows=36966 loops=1)\"\n\" Sort Key: \"*SELECT* 1\".wo_number, \"*SELECT* 1\".wo_qtyord, 
\n('Labor'::text), \"*SELECT* 1\".parentitem, \"*SELECT* 1\".wooper_des, \n\"*SELECT* 1\".wooperpost, \"*SELECT* 1\".qty, \"*SELECT* \n1\".setuptime_matcost, \"*SELECT* 1\".setupcost_issuecost, \"*SELECT* \n1\".runtime_scrapqty, \"*SELECT* 1\".runcost_scrapcost, \"*SELECT* \n1\".wo_status, \"*SELECT* 1\".opposteddate, \"*SELECT* 1\".wocompletedate, \n\"*SELECT* 1\".wostarteddated\"\n\" Sort Method: quicksort Memory: 8358kB\"\n\" -> Append (cost=2844.41..73652.88 rows=36954 width=115) \n(actual time=117.263..809.691 rows=36966 loops=1)\"\n\" -> Subquery Scan \"*SELECT* 1\" (cost=2844.41..4916.09 \nrows=21835 width=115) (actual time=117.261..311.658 rows=21847 loops=1)\"\n\" -> Hash Join (cost=2844.41..4697.74 rows=21835 \nwidth=115) (actual time=117.250..277.481 rows=21847 loops=1)\"\n\" Hash Cond: (wooper.wooper_wo_id = \npublic.wo.wo_id)\"\n\" -> Hash Join (cost=2090.82..3125.34 \nrows=21835 width=75) (actual time=83.903..156.356 rows=21847 loops=1)\"\n\" Hash Cond: \n(wooperpost.wooperpost_wooper_id = wooper.wooper_id)\"\n\" -> Seq Scan on wooperpost \n(cost=0.00..596.08 rows=22008 width=45) (actual time=0.024..17.068 \nrows=22020 loops=1)\"\n\" -> Hash (cost=1503.70..1503.70 \nrows=46970 width=38) (actual time=83.793..83.793 rows=46936 loops=1)\"\n\" -> Seq Scan on wooper \n(cost=0.00..1503.70 rows=46970 width=38) (actual time=0.024..42.876 \nrows=46936 loops=1)\"\n\" -> Hash (cost=723.91..723.91 rows=2374 \nwidth=48) (actual time=33.265..33.265 rows=2328 loops=1)\"\n\" -> Hash Join (cost=434.74..723.91 \nrows=2374 width=48) (actual time=19.562..30.708 rows=2328 loops=1)\"\n\" Hash Cond: (item.item_id = \nitemsite.itemsite_item_id)\"\n\" -> Seq Scan on item \n(cost=0.00..196.38 rows=6138 width=15) (actual time=0.024..4.672 \nrows=6140 loops=1)\"\n\" -> Hash (cost=405.07..405.07 \nrows=2374 width=41) (actual time=19.522..19.522 rows=2328 loops=1)\"\n\" -> Hash Join \n(cost=264.85..405.07 rows=2374 width=41) (actual time=10.300..17.043 \nrows=2328 loops=1)\"\n\" Hash Cond: \n(public.wo.wo_itemsite_id = itemsite.itemsite_id)\"\n\" -> Seq Scan on wo \n(cost=0.00..92.74 rows=2374 width=41) (actual time=0.019..1.988 \nrows=2328 loops=1)\"\n\" -> Hash \n(cost=188.82..188.82 rows=6082 width=8) (actual time=10.259..10.259 \nrows=6084 loops=1)\"\n\" -> Seq Scan on \nitemsite (cost=0.00..188.82 rows=6082 width=8) (actual \ntime=0.021..5.469 rows=6084 loops=1)\"\n\" -> Subquery Scan \"*SELECT* 2\" (cost=2081.69..68736.79 \nrows=15119 width=83) (actual time=96.372..456.864 rows=15119 loops=1)\"\n\" -> Hash Join (cost=2081.69..68585.60 rows=15119 \nwidth=83) (actual time=96.365..429.660 rows=15119 loops=1)\"\n\" Hash Cond: (public.womatl.womatl_itemsite_id \n= citemsite.itemsite_id)\"\n\" InitPlan\"\n\" -> Limit (cost=0.00..0.60 rows=1 \nwidth=12) (never executed)\"\n\" -> Nested Loop (cost=0.00..10306.58 \nrows=17196 width=12) (never executed)\"\n\" -> Nested Loop \n(cost=0.00..5478.44 rows=17196 width=4) (never executed)\"\n\" -> Seq Scan on womatl \n(cost=0.00..452.96 rows=17196 width=4) (never executed)\"\n\" -> Index Scan using \nitemcost_pkey on itemcost (cost=0.00..0.28 rows=1 width=8) (never \nexecuted)\"\n\" Index Cond: \n(itemcost.itemcost_id = public.womatl.womatl_itemcost_id)\"\n\" -> Index Scan using \ncostelem_pkey on costelem (cost=0.00..0.27 rows=1 width=16) (never \nexecuted)\"\n\" Index Cond: \n(costelem.costelem_id = itemcost.itemcost_costelem_id)\"\n\" -> Hash Join (cost=1421.50..2295.65 \nrows=15119 width=76) (actual time=67.342..141.405 rows=15119 loops=1)\"\n\" Hash Cond: 
(public.womatl.womatl_wo_id \n= public.wo.wo_id)\"\n\" -> Hash Join (cost=667.91..1315.28 \nrows=15119 width=36) (actual time=35.971..82.704 rows=15119 loops=1)\"\n\" Hash Cond: \n(public.womatlpost.womatlpost_womatl_id = public.womatl.womatl_id)\"\n\" -> Seq Scan on womatlpost \n(cost=0.00..307.19 rows=15119 width=26) (actual time=0.026..12.373 \nrows=15119 loops=1)\"\n\" -> Hash (cost=452.96..452.96 \nrows=17196 width=14) (actual time=35.911..35.911 rows=17199 loops=1)\"\n\" -> Seq Scan on womatl \n(cost=0.00..452.96 rows=17196 width=14) (actual time=0.017..21.804 \nrows=17199 loops=1)\"\n\" -> Hash (cost=723.91..723.91 \nrows=2374 width=48) (actual time=31.340..31.340 rows=2328 loops=1)\"\n\" -> Hash Join \n(cost=434.74..723.91 rows=2374 width=48) (actual time=18.197..28.794 \nrows=2328 loops=1)\"\n\" Hash Cond: (pitem.item_id = \npitemsite.itemsite_item_id)\"\n\" -> Seq Scan on item pitem \n(cost=0.00..196.38 rows=6138 width=15) (actual time=0.006..4.172 \nrows=6140 loops=1)\"\n\" -> Hash \n(cost=405.07..405.07 rows=2374 width=41) (actual time=18.172..18.172 \nrows=2328 loops=1)\"\n\" -> Hash Join \n(cost=264.85..405.07 rows=2374 width=41) (actual time=9.441..15.807 \nrows=2328 loops=1)\"\n\" Hash Cond: \n(public.wo.wo_itemsite_id = pitemsite.itemsite_id)\"\n\" -> Seq Scan on \nwo (cost=0.00..92.74 rows=2374 width=41) (actual time=0.007..1.668 \nrows=2328 loops=1)\"\n\" -> Hash \n(cost=188.82..188.82 rows=6082 width=8) (actual time=9.410..9.410 \nrows=6084 loops=1)\"\n\" -> Seq \nScan on itemsite pitemsite (cost=0.00..188.82 rows=6082 width=8) \n(actual time=0.013..4.726 rows=6084 loops=1)\"\n\" -> Hash (cost=583.57..583.57 rows=6082 \nwidth=15) (actual time=28.856..28.856 rows=6084 loops=1)\"\n\" -> Hash Join (cost=273.11..583.57 \nrows=6082 width=15) (actual time=10.017..23.614 rows=6084 loops=1)\"\n\" Hash Cond: \n(citemsite.itemsite_item_id = citem.item_id)\"\n\" -> Seq Scan on itemsite \ncitemsite (cost=0.00..188.82 rows=6082 width=8) (actual \ntime=0.008..3.992 rows=6084 loops=1)\"\n\" -> Hash (cost=196.38..196.38 \nrows=6138 width=15) (actual time=9.987..9.987 rows=6140 loops=1)\"\n\" -> Seq Scan on item citem \n(cost=0.00..196.38 rows=6138 width=15) (actual time=0.009..4.928 \nrows=6140 loops=1)\"\n\" SubPlan\"\n\" -> Aggregate (cost=4.28..4.29 rows=1 \nwidth=14) (actual time=0.009..0.009 rows=1 loops=15119)\"\n\" -> Index Scan using \nwomatlpost_womatl_id_index on womatlpost (cost=0.00..4.27 rows=1 \nwidth=14) (actual time=0.004..0.005 rows=1 loops=15119)\"\n\" Index Cond: \n(womatlpost_womatl_id = $1)\"\n\" Filter: (womatlpost_qtyposted > \n0::numeric)\"\n\"Total runtime: 1751.218 ms\"\n\n-------------End Analyze ------------------\n", "msg_date": "Mon, 02 Jun 2008 17:43:09 -0400", "msg_from": "Justin <[email protected]>", "msg_from_op": true, "msg_subject": "getting estimated cost to agree with actual" }, { "msg_contents": "On Mon, Jun 2, 2008 at 3:43 PM, Justin <[email protected]> wrote:\n> As i've been looking over the more complicated queries that i have written\n> and gotten allot of help in redoing the quires from you all, thanks again.\n>\n> I have noticed that estimated Cost to do the query is way off from Actual.\n> The queries don't run slow at least not to me. 
The Estimated Cost is way\n> higher than the actual time on Hash joins but on the scan through the tables\n> the Estimate Cost to Actual flips where Actual is way higher than Estimated\n> Cost\n>\n> I have tried increasing and decreasing the Stats on the important columns\n> with no changes\n\nWell, they're not measured in the same units. estimated costs are in\nterms of the cost to sequentially scan a single tuple, while actual\ncosts are in milliseconds.\n\nYou might be able to change the cost of sequential scan from 1 to\nsomething else and everything else to reflect that change to get them\nclose. But they aren't supposed to match directly up.\n", "msg_date": "Mon, 2 Jun 2008 17:38:21 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting estimated cost to agree with actual" }, { "msg_contents": "\"Scott Marlowe\" <[email protected]> writes:\n\n> On Mon, Jun 2, 2008 at 3:43 PM, Justin <[email protected]> wrote:\n>>\n>> I have noticed that estimated Cost to do the query is way off from Actual.\n>\n> Well, they're not measured in the same units. estimated costs are in\n> terms of the cost to sequentially scan a single tuple, while actual\n> costs are in milliseconds.\n\ns/tuple/page/\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Tue, 03 Jun 2008 02:20:10 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting estimated cost to agree with actual" }, { "msg_contents": "On Mon, Jun 2, 2008 at 7:20 PM, Gregory Stark <[email protected]> wrote:\n> \"Scott Marlowe\" <[email protected]> writes:\n>\n>> On Mon, Jun 2, 2008 at 3:43 PM, Justin <[email protected]> wrote:\n>>>\n>>> I have noticed that estimated Cost to do the query is way off from Actual.\n>>\n>> Well, they're not measured in the same units. estimated costs are in\n>> terms of the cost to sequentially scan a single tuple, while actual\n>> costs are in milliseconds.\n>\n> s/tuple/page/\n\nDangit! I knew that too. time for some sleep I guess. :)\n", "msg_date": "Mon, 2 Jun 2008 21:06:20 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting estimated cost to agree with actual" } ]
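The unit mismatch explained above is easy to see for yourself. A small sketch, assuming the wo table from the posted plan and PostgreSQL 8.2 or later: the planner's cost constants can be inspected with SHOW, and EXPLAIN ANALYZE prints the unit-less estimate next to the measured milliseconds.

-- Estimated costs are multiples of seq_page_cost (the cost of one sequential
-- page fetch); actual times are wall-clock milliseconds, so the two numbers
-- are not expected to line up.
SHOW seq_page_cost;      -- 1.0 by default, the unit everything else is scaled to
SHOW random_page_cost;   -- 4.0 by default
SHOW cpu_tuple_cost;     -- 0.01 by default

EXPLAIN ANALYZE SELECT * FROM wo;  -- cost=... is in page-fetch units, actual time=... is in ms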
[ { "msg_contents": "Hello,\n\nI have a table (transactions) containing 61 414 503 rows. The basic \ncount query (select count(transid) from transactions) takes 138226 \nmilliseconds.\nThis is the query analysis output:\n\nAggregate (cost=2523970.79..2523970.80 rows=1 width=8) (actual \ntime=268964.088..268964.090 rows=1 loops=1);\n -> Seq Scan on transactions (cost=0.00..2370433.43 rows=61414943 \nwidth=8) (actual time=13.886..151776.860 rows=61414503 loops=1);\nTotal runtime: 268973.248 ms;\n\nQuery has several indexes defined, including one on transid column:\n\nnon-unique;index-qualifier;index-name;type;ordinal-position;column-name;asc-or-desc;cardinality;pages;filter-condition\n\nf;<null>;transactions_id_key;3;1;transid;<null>;61414488;168877;<null>;\nt;<null>;trans_ip_address_index;3;1;ip_address;<null>;61414488;168598;<null>;\nt;<null>;trans_member_id_index;3;1;member_id;<null>;61414488;169058;<null>;\nt;<null>;trans_payment_id_index;3;1;payment_id;<null>;61414488;168998;<null>;\nt;<null>;trans_status_index;3;1;status;<null>;61414488;169005;<null>;\nt;<null>;transactions__time_idx;3;1;time;<null>;61414488;168877;<null>;\nt;<null>;transactions_offer_id_idx;3;1;offer_id;<null>;61414488;169017;<null>;\n\nI'm not a dba so I'm not sure if the time it takes to execute this query \nis OK or not, it just seems a bit long to me.\nI'd appreciate it if someone could share his/her thoughts on this. Is \nthere a way to make this table/query perform better?\nAny query I'm running that joins with transactions table takes forever \nto complete, but maybe this is normal for a table this size.\nRegards,\n\nMarcin", "msg_date": "Tue, 03 Jun 2008 09:57:15 +0200", "msg_from": "Marcin Citowicki <[email protected]>", "msg_from_op": true, "msg_subject": "query performance question" }, { "msg_contents": "On Tue, Jun 03, 2008 at 09:57:15AM +0200, Marcin Citowicki wrote:\n> I'm not a dba so I'm not sure if the time it takes to execute this query \n> is OK or not, it just seems a bit long to me.\n\nThis is perfectly OK. count(*) from table is generally slow. There are\nsome ways to make it faster (depending if you need exact count, or some\nestimate).\n\n> I'd appreciate it if someone could share his/her thoughts on this. Is \n> there a way to make this table/query perform better?\n\nYou can keep the count of elements in this table in separate table, and\nupdate it with triggers.\n\n> Any query I'm running that joins with transactions table takes forever \n> to complete, but maybe this is normal for a table this size.\n\nAs for other queries - show them, and their explain analyze.\n\nPerformance of count(*) is dependent basically only on size of table. In\ncase of other queries - it might be simple to optimize them. Or\nimpossible - without knowing the queries it's impossible to tell.\n\nDo you really care about count(*) from 60m+ record table? How often do\nyou count the records?\n\nBest regards,\n\ndepesz\n\n", "msg_date": "Tue, 3 Jun 2008 10:31:46 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query performance question" }, { "msg_contents": "Hello Hubert,\n\nThank you for your reply. 
I don't really need to count rows in \ntransactions table, I just thought this was a good example to show how \nslow the query was.\nBut based on what you wrote it looks like count(*) is slow in general, \nso this seems to be OK since the table is rather large.\nI just ran other queries (joining transactions table) and they returned \nquickly, which leads me to believe that there could be a problem not \nwith the database, but with the box\nthe db is running on. Sometimes those same queries take forever and now \nthey complete in no time at all, so perhaps there is a process that is \nrunning periodically which is slowing the db down.\nI'll need to take a look at this.\nThank you for your help!\n\nMarcin\n\n\nhubert depesz lubaczewski wrote:\n> On Tue, Jun 03, 2008 at 09:57:15AM +0200, Marcin Citowicki wrote:\n> \n>> I'm not a dba so I'm not sure if the time it takes to execute this query \n>> is OK or not, it just seems a bit long to me.\n>> \n>\n> This is perfectly OK. count(*) from table is generally slow. There are\n> some ways to make it faster (depending if you need exact count, or some\n> estimate).\n>\n> \n>> I'd appreciate it if someone could share his/her thoughts on this. Is \n>> there a way to make this table/query perform better?\n>> \n>\n> You can keep the count of elements in this table in separate table, and\n> update it with triggers.\n>\n> \n>> Any query I'm running that joins with transactions table takes forever \n>> to complete, but maybe this is normal for a table this size.\n>> \n>\n> As for other queries - show them, and their explain analyze.\n>\n> Performance of count(*) is dependent basically only on size of table. In\n> case of other queries - it might be simple to optimize them. Or\n> impossible - without knowing the queries it's impossible to tell.\n>\n> Do you really care about count(*) from 60m+ record table? How often do\n> you count the records?\n>\n> Best regards,\n>\n> depesz\n>\n>", "msg_date": "Tue, 03 Jun 2008 10:55:22 +0200", "msg_from": "Marcin Citowicki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query performance question" }, { "msg_contents": "Hi,\n\nHubert already answered your question - it's expected behavior, the\ncount(*) has to read all the tuples from the table (even dead ones!). So\nif you have a really huge table, it will take a long time to read it.\n\nThere are several ways to speed it up - some of them are simple (but the\nspeedup is limited), some of them require change of application logic and\nrequires to rewrite part of the application (using triggers to count the\nrows, etc.)\n\n1) If the transactions have sequential ID without gaps, you may easily\nselect MAX(id) and that'll give the count. This won't work if some of the\ntransactions were deleted or if you need to use other filtering criteria.\nThe needed changes in the application are quite small (basically just a\nsingle SQL query).\n\n2) Move the table to a separate tablespace (a separate disk if possible).\nThis will speed up the reads, as the table will be 'compact'. This is just\na db change, it does not require change in the application logic. This\nwill give you some speedup, but not as good as 1) or 3).\n\n3) Build a table with totals or maybe subtotals, updated by triggers. This\nrequires serious changes in application as well as in database, but solves\nissues of 1) and may give you even better results.\n\nTomas\n\n> Hello,\n>\n> I have a table (transactions) containing 61 414 503 rows. 
The basic\n> count query (select count(transid) from transactions) takes 138226\n> milliseconds.\n> This is the query analysis output:\n>\n> Aggregate (cost=2523970.79..2523970.80 rows=1 width=8) (actual\n> time=268964.088..268964.090 rows=1 loops=1);\n> -> Seq Scan on transactions (cost=0.00..2370433.43 rows=61414943\n> width=8) (actual time=13.886..151776.860 rows=61414503 loops=1);\n> Total runtime: 268973.248 ms;\n>\n> Query has several indexes defined, including one on transid column:\n>\n> non-unique;index-qualifier;index-name;type;ordinal-position;column-name;asc-or-desc;cardinality;pages;filter-condition\n>\n> f;<null>;transactions_id_key;3;1;transid;<null>;61414488;168877;<null>;\n> t;<null>;trans_ip_address_index;3;1;ip_address;<null>;61414488;168598;<null>;\n> t;<null>;trans_member_id_index;3;1;member_id;<null>;61414488;169058;<null>;\n> t;<null>;trans_payment_id_index;3;1;payment_id;<null>;61414488;168998;<null>;\n> t;<null>;trans_status_index;3;1;status;<null>;61414488;169005;<null>;\n> t;<null>;transactions__time_idx;3;1;time;<null>;61414488;168877;<null>;\n> t;<null>;transactions_offer_id_idx;3;1;offer_id;<null>;61414488;169017;<null>;\n>\n> I'm not a dba so I'm not sure if the time it takes to execute this query\n> is OK or not, it just seems a bit long to me.\n> I'd appreciate it if someone could share his/her thoughts on this. Is\n> there a way to make this table/query perform better?\n> Any query I'm running that joins with transactions table takes forever\n> to complete, but maybe this is normal for a table this size.\n> Regards,\n>\n> Marcin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Tue, 3 Jun 2008 11:04:34 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: query performance question" }, { "msg_contents": "\n> Thank you for your reply. I don't really need to count rows in\n> transactions table, I just thought this was a good example to show how\n> slow the query was.\n\n\tUsually you're more interested in the performance of the queries you need \nto make rather than the ones you don't need to make ;)\n\n> But based on what you wrote it looks like count(*) is slow in general,\n> so this seems to be OK since the table is rather large.\n\n\tWell any query that needs to scan 60 million rows will be slow...\n\tNow understand that this is not a problem with count(*) which can be very \nfast if you \"select count(*) where...\" and the condition in the where \nproduces a reasonable number of rows to count, it is just a problem of \nhaving to scan the 60 million rows. But fortunately since it is perfectly \nuseless to know the rowcount of this 60 million table with a perfect \nprecision you never need to make this query ;)\n\n> I just ran other queries (joining transactions table) and they returned\n> quickly, which leads me to believe that there could be a problem not\n> with the database, but with the box\n> the db is running on. Sometimes those same queries take forever and now\n> they complete in no time at all, so perhaps there is a process that is\n> running periodically which is slowing the db down.\n\n\tThen if you have specific queries that you need to optimize you will need \nto run EXPLAIN ANALYZE on them and post the results, when they are fast \nand when they are slow to see if there is a difference in plans. 
Also the \noutput from vmstat in times of big slowness can provide useful \ninformation. Crosschecking with your cron jobs, etc is a good idea. Also \nthe usual suspects, like are your tables VACUUM'd and ANALYZE'd etc.\n", "msg_date": "Wed, 04 Jun 2008 02:10:45 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query performance question" }, { "msg_contents": "[email protected] wrote:\n>\n> 3) Build a table with totals or maybe subtotals, updated by triggers. This\n> requires serious changes in application as well as in database, but solves\n> issues of 1) and may give you even better results.\n>\n> Tomas\n>\n> \nI have tried this. It's not a magic bullet. We do our billing based on \ncounts from huge tables, so accuracy is important to us. I tried \nimplementing such a scheme and ended up abandoning it because the \nsummary table became so full of dead tuples during and after large bulk \ninserts that it slowed down selects on that table to an unacceptable \nspeed. Even with a VACUUM issued every few hundred inserts, it still \nbogged down due to the constant churn of the inserts. \n\nI ended up moving this count tracking into the application level. It's \nmessy and only allows a single instance of an insert program due to the \nlocalization of the counts in program memory, but it was the only way I \nfound to avoid the penalty of constant table churn on the triggered inserts.\n\n-Dan\n", "msg_date": "Thu, 05 Jun 2008 09:43:06 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query performance question" }, { "msg_contents": "Dan,\n\nDid you try this with 8.3 and its new HOT functionality? \n\nKen\n\nOn Thu, Jun 05, 2008 at 09:43:06AM -0600, Dan Harris wrote:\n> [email protected] wrote:\n>>\n>> 3) Build a table with totals or maybe subtotals, updated by triggers. This\n>> requires serious changes in application as well as in database, but solves\n>> issues of 1) and may give you even better results.\n>>\n>> Tomas\n>>\n>> \n> I have tried this. It's not a magic bullet. We do our billing based on \n> counts from huge tables, so accuracy is important to us. I tried \n> implementing such a scheme and ended up abandoning it because the summary \n> table became so full of dead tuples during and after large bulk inserts \n> that it slowed down selects on that table to an unacceptable speed. Even \n> with a VACUUM issued every few hundred inserts, it still bogged down due to \n> the constant churn of the inserts. \n> I ended up moving this count tracking into the application level. It's \n> messy and only allows a single instance of an insert program due to the \n> localization of the counts in program memory, but it was the only way I \n> found to avoid the penalty of constant table churn on the triggered \n> inserts.\n>\n> -Dan\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 5 Jun 2008 12:16:39 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query performance question" }, { "msg_contents": "Kenneth Marshall wrote:\n> Dan,\n>\n> Did you try this with 8.3 and its new HOT functionality? \n>\n> Ken\n> \nI did not. I had to come up with the solution before we were able to \nmove to 8.3. But, Tom did mention that the HOT might help and I forgot \nabout that when writing the prior message. 
I'm in the midst of moving \n30 databases from 8.0 to 8.3 at the moment but when I'm finished, I \nmight have time to test it.\n\n-Dan\n\n", "msg_date": "Thu, 05 Jun 2008 14:22:49 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query performance question" } ]
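For reference, a minimal sketch of the trigger-maintained counter (option 3 above), assuming the transactions table from the original question; the counter table, function and trigger names are invented. As Dan found, the single summary row becomes a dead-tuple hot spot under bulk inserts, so it needs very frequent vacuuming (or 8.3's HOT) to stay usable.

-- plpgsql must be installed first on pre-9.0 servers: CREATE LANGUAGE plpgsql;
CREATE TABLE transactions_count (n bigint NOT NULL);
INSERT INTO transactions_count SELECT count(transid) FROM transactions;  -- one-time seed

CREATE OR REPLACE FUNCTION transactions_count_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE transactions_count SET n = n + 1;
    ELSIF TG_OP = 'DELETE' THEN
        UPDATE transactions_count SET n = n - 1;
    END IF;
    RETURN NULL;  -- the return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER transactions_count_maint
    AFTER INSERT OR DELETE ON transactions
    FOR EACH ROW EXECUTE PROCEDURE transactions_count_trig();

SELECT n FROM transactions_count;  -- the cheap replacement for the full-table count(*)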
[ { "msg_contents": "Running postgres 8.2.5\n \nI have a table that has 5 indices, no foreign keys or any \ndependency on any other table. If delete the database and \nstart entering entries, everything works very well until I get\nto some point (let's say 1M rows). Basically, I have a somewhat\nconstant rate of inserts/updates that go into a work queue and then\nget passed to postgres. The work queue starts filling up as the\nresponsiveness slows down. For example at 1.5M \nrows it takes >2 seconds for 300 inserts issued in one transaction. \n \nPrior to this point I had added regular VACUUM ANALYZE on \nthe table and it did help.  I increased maintenance work memory to \n128M. I also set the fillfactor on the table indices to 50% (not sure \nif that made any difference have to study results more closely).  \n \nIn an effort to figure out the bottleneck, I DROPed 4 of the indices \non the table and the tps increased to over 1000. I don't really know \nwhich index removal gave the best performance improvement. I \ndropped 2 32-bit indices and 2 text indices which all using btree. \n \nThe cpu load is not that high, i.e. plenty of idle cpu. I am running an older\nversion of freebsd and the iostat output is not very detailed.\nDuring this time, the number is low < 10Mbs. The system has an \nLSI Logic MegaRAID controller with 2 disks.\n \nAny ideas on how to find the bottleneck/decrease overhead of index usage. \n \nThanks.\n\n\n \nRunning postgres 8.2.5\n \nI have a table that has 5 indices, no foreign keys or any \ndependency on any other table. If delete the database and \nstart entering entries, everything works very well until I get\nto some point (let's say 1M rows). Basically, I have a somewhat\nconstant rate of inserts/updates that go into a work queue and then\nget passed to postgres. The work queue starts filling up as the\nresponsiveness slows down. For example at 1.5M \nrows it takes >2 seconds for 300 inserts issued in one transaction. \n \nPrior to this point I had added regular VACUUM ANALYZE on \nthe table and it did help.  I increased maintenance work memory to \n128M. I also set the fillfactor on the table indices to 50% (not sure \nif that made any difference have to study results more closely).  \n \nIn an effort to figure out the bottleneck, I DROPed 4 of the indices \non the table and the tps increased to over 1000. I don't really know \nwhich index removal gave the best performance improvement. I \ndropped 2 32-bit indices and 2 text indices which all using btree. \n \nThe cpu load is not that high, i.e. plenty of idle cpu. I am running an older\nversion of freebsd and the iostat output is not very detailed.\nDuring this time, the number is low < 10Mbs. The system has an \nLSI Logic MegaRAID controller with 2 disks.\n \nAny ideas on how to find the bottleneck/decrease overhead of index usage. \n \nThanks.", "msg_date": "Tue, 3 Jun 2008 15:36:09 -0700 (PDT)", "msg_from": "andrew klassen <[email protected]>", "msg_from_op": true, "msg_subject": "insert/update tps slow with indices on table > 1M rows" }, { "msg_contents": "On Wed, 04 Jun 2008 00:36:09 +0200, andrew klassen <[email protected]> \nwrote:\n\n> Running postgres 8.2.5\n>  \n> I have a table that has 5 indices, no foreign keys or any\n> dependency on any other table. If delete the database and\n> start entering entries, everything works very well until I get\n> to some point (let's say 1M rows). 
Basically, I have a somewhat\n> constant rate of inserts/updates that go into a work queue and then\n> get passed to postgres. The work queue starts filling up as the\n> responsiveness slows down. For example at 1.5M\n> rows it takes >2 seconds for 300 inserts issued in one transaction.\n>  \n> Prior to this point I had added regular VACUUM ANALYZE on\n> the table and it did help.  I increased maintenance work memory to\n> 128M. I also set the fillfactor on the table indices to 50% (not sure\n> if that made any difference have to study results more closely). \n>  \n> In an effort to figure out the bottleneck, I DROPed 4 of the indices\n> on the table and the tps increased to over 1000. I don't really know\n> which index removal gave the best performance improvement. I\n> dropped 2 32-bit indices and 2 text indices which all using btree.\n>  \n> The cpu load is not that high, i.e. plenty of idle cpu. I am running an \n> older\n> version of freebsd and the iostat output is not very detailed.\n> During this time, the number is low < 10Mbs. The system has an\n> LSI Logic MegaRAID controller with 2 disks.\n>  \n> Any ideas on how to find the bottleneck/decrease overhead of index usage.\n>  \n> Thanks.\n\n\tIf you are filling an empty table it is generally faster to create the \nindexes after the data import.\n\tOf course if this is a live table or you need the indexes during the \nimport, this is not an option.\n\tI find it generally faster to lightly preprocess the data and generate \ntext files that I then import using COPY, then doing the rest of the \nprocessing in SQL.\n\n\tHow much RAM in the box ? size of the data & indexes ?\n", "msg_date": "Wed, 04 Jun 2008 02:15:10 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" }, { "msg_contents": "On Tue, Jun 3, 2008 at 4:36 PM, andrew klassen <[email protected]> wrote:\n\n> The cpu load is not that high, i.e. plenty of idle cpu. I am running an\n> older\n> version of freebsd and the iostat output is not very detailed.\n> During this time, the number is low < 10Mbs. The system has an\n> LSI Logic MegaRAID controller with 2 disks.\n>\n> Any ideas on how to find the bottleneck/decrease overhead of index usage.\n\nOlder versions of BSD can be pretty pokey compared to the 6.x and 7.x\nbranches. I seriously consider upgrading to 7 if possible.\n\nThe cost of maintaining indexes is always an issue. There are a few\nthings you can do to help out.\n\nPartitioning and having fewer indexes are what I'd recommend.\n", "msg_date": "Wed, 4 Jun 2008 00:02:34 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" }, { "msg_contents": "On Tue, 3 Jun 2008, andrew klassen wrote:\n> Basically, I have a somewhat constant rate of inserts/updates that go \n> into a work queue and then get passed to postgres.\n\n> The cpu load is not that high, i.e. plenty of idle cpu. I am running an older\n> version of freebsd and the iostat output is not very detailed.\n\nIf you're running a \"work queue\" architecture, that probably means you \nonly have one thread doing all the updates/inserts? It might be worth \ngoing multi-threaded, and issuing inserts and updates through more than \none connection. Postgres is designed pretty well to scale performance by \nthe number of simultaneous connections.\n\nMatthew\n\n-- \nContrary to popular belief, Unix is user friendly. 
It just happens to be\nvery selective about who its friends are. -- Kyle Hearn", "msg_date": "Wed, 4 Jun 2008 11:31:22 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" }, { "msg_contents": "Matthew Wakeling wrote:\n> If you're running a \"work queue\" architecture, that probably means you \n> only have one thread doing all the updates/inserts? It might be worth \n> going multi-threaded, and issuing inserts and updates through more \n> than one connection. Postgres is designed pretty well to scale \n> performance by the number of simultaneous connections.\nThat would explain a disappointing upper limit on insert rate, but not \nany sort of cliff for the rate. Nor, really, any material slowdown, if \nthe single thread implies that we're stuck on round trip latency as a \nmaterial limiting factor.\n\nJames\n\n", "msg_date": "Wed, 04 Jun 2008 21:15:45 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" } ]
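A short sketch of the load pattern suggested above (indexes created after the import, data fed in with COPY); the table, index, column and file names are placeholders rather than anything from the original post.

DROP INDEX IF EXISTS big_table_col1_idx;              -- drop the nonessential indexes first
COPY big_table FROM '/tmp/big_table.txt';             -- bulk-load the preprocessed text file
CREATE INDEX big_table_col1_idx ON big_table (col1);  -- rebuild the indexes in one pass afterwards
ANALYZE big_table;                                    -- refresh statistics for the planner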
[ { "msg_contents": "I am not currently using copy, but  I am using prepared statements  \nfor table insert/updates so the overhead for the actual data transfer \nshould be pretty good. I am sending at most  300 inserts/updates \nper transaction, but that is just an arbitrary value. When the queue \ngrows, I could easily send more per transaction. I  did experiment \na little, but it did not seem to help significantly at the time.\n \nThe system has 4G total memory. Shared memory is locked by the OS,\ni.e. not paged so I am only using shared_buffers=28MB.\n \nThe maximum data per row is 324 bytes assuming maximum expected length of two \ntext fields. There are 5 total indices: 1 8-byte, 2 4-byte and 2 text fields. \nAs mentioned all indices are btree.\n \n\n \n----- Original Message ----\nFrom: PFC <[email protected]>\nTo: andrew klassen <[email protected]>; [email protected]\nSent: Tuesday, June 3, 2008 7:15:10 PM\nSubject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows\n\nOn Wed, 04 Jun 2008 00:36:09 +0200, andrew klassen <[email protected]>  \nwrote:\n\n> Running postgres 8.2.5\n>  \n> I have a table that has 5 indices, no foreign keys or any\n> dependency on any other table. If delete the database and\n> start entering entries, everything works very well until I get\n> to some point (let's say 1M rows). Basically, I have a somewhat\n> constant rate of inserts/updates that go into a work queue and then\n> get passed to postgres. The work queue starts filling up as the\n> responsiveness slows down. For example at 1.5M\n> rows it takes >2 seconds for 300 inserts issued in one transaction.\n>  \n> Prior to this point I had added regular VACUUM ANALYZE on\n> the table and it did help.  I increased maintenance work memory to\n> 128M. I also set the fillfactor on the table indices to 50% (not sure\n> if that made any difference have to study results more closely). \n>  \n> In an effort to figure out the bottleneck, I DROPed 4 of the indices\n> on the table and the tps increased to over 1000. I don't really know\n> which index removal gave the best performance improvement. I\n> dropped 2 32-bit indices and 2 text indices which all using btree.\n>  \n> The cpu load is not that high, i.e. plenty of idle cpu. I am running an  \n> older\n> version of freebsd and the iostat output is not very detailed.\n> During this time, the number is low < 10Mbs. The system has an\n> LSI Logic MegaRAID controller with 2 disks.\n>  \n> Any ideas on how to find the bottleneck/decrease overhead of index usage.\n>  \n> Thanks.\n\n    If you are filling an empty table it is generally faster to create the  \nindexes after the data import.\n    Of course if this is a live table or you need the indexes during the  \nimport, this is not an option.\n    I find it generally faster to lightly preprocess the data and generate  \ntext files that I then import using COPY, then doing the rest of the  \nprocessing in SQL.\n\n    How much RAM in the box ? size of the data & indexes ?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nI am not currently using copy, but  I am using prepared statements  \nfor table insert/updates so the overhead for the actual data transfer \nshould be pretty good. I am sending at most  300 inserts/updates \nper transaction, but that is just an arbitrary value. When the queue \ngrows, I could easily send more per transaction. 
I  did experiment \na little, but it did not seem to help significantly at the time.\n \nThe system has 4G total memory. Shared memory is locked by the OS,\ni.e. not paged so I am only using shared_buffers=28MB.\n \nThe maximum data per row is 324 bytes assuming maximum expected length of two \ntext fields. There are 5 total indices: 1 8-byte, 2 4-byte and 2 text fields. \nAs mentioned all indices are btree.\n \n \n----- Original Message ----From: PFC <[email protected]>To: andrew klassen <[email protected]>; [email protected]: Tuesday, June 3, 2008 7:15:10 PMSubject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rowsOn Wed, 04 Jun 2008 00:36:09 +0200, andrew klassen <[email protected]>  wrote:> Running postgres 8.2.5>  > I have a table that has 5 indices, no foreign keys or any> dependency on any other table. If delete the database and> start entering entries, everything works very well until I get> to some point (let's say 1M rows). Basically, I have a somewhat> constant rate of inserts/updates that go into a work queue and then> get passed to\n postgres. The work queue starts filling up as the> responsiveness slows down. For example at 1.5M> rows it takes >2 seconds for 300 inserts issued in one transaction.>  > Prior to this point I had added regular VACUUM ANALYZE on> the table and it did help.  I increased maintenance work memory to> 128M. I also set the fillfactor on the table indices to 50% (not sure> if that made any difference have to study results more closely). >  > In an effort to figure out the bottleneck, I DROPed 4 of the indices> on the table and the tps increased to over 1000. I don't really know> which index removal gave the best performance improvement. I> dropped 2 32-bit indices and 2 text indices which all using btree.>  > The cpu load is not that high, i.e. plenty of idle cpu. I am running an  > older> version of\n freebsd and the iostat output is not very detailed.> During this time, the number is low < 10Mbs. The system has an> LSI Logic MegaRAID controller with 2 disks.>  > Any ideas on how to find the bottleneck/decrease overhead of index usage.>  > Thanks.    If you are filling an empty table it is generally faster to create the  indexes after the data import.    Of course if this is a live table or you need the indexes during the  import, this is not an option.    I find it generally faster to lightly preprocess the data and generate  text files that I then import using COPY, then doing the rest of the  processing in SQL.    How much RAM in the box ? size of the data & indexes ?-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 3 Jun 2008 19:44:12 -0700 (PDT)", "msg_from": "andrew klassen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" } ]
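At the SQL level the batching described here looks roughly like the sketch below; the statement name, table and columns are invented for illustration, and the batch size is just the 300 mentioned above.

PREPARE ins_row (bigint, integer, text, text) AS
    INSERT INTO big_table (id, code, note1, note2) VALUES ($1, $2, $3, $4);

BEGIN;
EXECUTE ins_row (1, 10, 'aaa', 'bbb');
EXECUTE ins_row (2, 11, 'ccc', 'ddd');
-- ... up to roughly 300 EXECUTEs per transaction, as described above ...
COMMIT;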
[ { "msg_contents": "hello,\n\nI am wondering if actually, does exists any rule to determine\nthe amount of RAM, according to the hardware disk size\n(in prevision to use all or nearly available space),\nwhen designing a server?\n\nOf course fsm settings implies that when db size grows, more memory will \nbe in use\nfor tracking free space.\nSame thing applies from RAM cache effectiveness: the bigger db size is, \nthe more RAM is needed.\n\nDo be more specific, we have an heavy loaded server with SCSI disks \n(RAID 0 on a SAS controller),\nmaking a total of 500GB. Actually, there are 16GB RAM, representing \nabout 2,5% of db size.\n\nany experiences to share?\n\nin advance, thank you.\n\n\nMathieu\n\n\n\n", "msg_date": "Wed, 04 Jun 2008 16:02:23 +0200", "msg_from": "Mathieu Gilardet <[email protected]>", "msg_from_op": true, "msg_subject": "RAM / Disk ratio, any rule? " }, { "msg_contents": "On Wed, 4 Jun 2008, Mathieu Gilardet wrote:\n\n> Do be more specific, we have an heavy loaded server with SCSI disks \n> (RAID 0 on a SAS controller), making a total of 500GB. Actually, there \n> are 16GB RAM, representing about 2,5% of db size.\n\nThat's a reasonable ratio. Being able to hold somewhere around 1 to 5% of \nthe database in RAM seems to common nowadays, and that works OK. But it's \nimpossible to have a hard \"rule\" here because the working set needed to \noperate the queries and other activities on your server is completely \ndependant on the code you're running, your performance expectations, and \nyour expected user load.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 4 Jun 2008 20:32:29 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RAM / Disk ratio, any rule? " } ]
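If it helps to put numbers behind the ratio being discussed, the on-disk sizes can be read straight from the catalog (these functions exist from 8.1 on); 'some_big_table' is a placeholder.

SELECT pg_size_pretty(pg_database_size(current_database()));      -- whole database on disk
SELECT pg_size_pretty(pg_total_relation_size('some_big_table'));  -- one table plus its indexes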
[ { "msg_contents": "I have a Windows application which connects to a Postgres (8.3) database \nresiding on our company server. Most of the application's users work \nfrom their homes, so the application was developed with a lot of \nsecurity checks.\n\nWhen a client connects to the database, a random hash is generated and \nsent to the client; this hash is also saved in a Postgres table along \nwith the user id and the return value of pg_backend_pid(). When the \nclient submits queries, it presents its hash value and the server \ncross-checks this, and the current value of pg_backend_pid(), against \nthe values that were stored previously.\n\nIf there is a mismatch, the client is instructed to obtain a new hash \nand begin again. The information about the mismatch is also recorded \nfor future inspection. By examining the logs, I have observed that the \nbackend pid for a particular client sometimes changes during a session. \n This seems to happen about a dozen times a day, total. Usually this \nis not a problem, as the client will get a new hash and keep going.\n\nSometimes, however, this seems to happen in the middle of an operation. \n This happens when the client has sent a large chunk of data that is to \nbe stored in the database. The client sends its authorization \ninformation immediately before sending the data, and also with the data \nchunk. On rare occasions, the backend pid somehow seems to change \nduring the time it takes for the data to be sent. This causes errors \nand loss of time for the user.\n\nI'm sure there are more details that would be needed to give a complete \npicture of what is going on, yet this message is pretty long already. I \nam going to stop here and ask whether anyone can make sense of this. \nThat is, make sense of what I have written, and also of why the backend \npid would change during an operation as I have described. Thanks to any \nwho can offer information on this.\n\nLewis\n", "msg_date": "Wed, 04 Jun 2008 10:44:25 -0400", "msg_from": "Lewis Kapell <[email protected]>", "msg_from_op": true, "msg_subject": "backend pid changing" }, { "msg_contents": "On Wed, 4 Jun 2008, Lewis Kapell wrote:\n> The client sends its authorization information immediately before \n> sending the data, and also with the data chunk.\n\nWell, I have no idea why the backend pid is changing, but here it looks \nlike you have a classic concurrency problem caused by checking a variable \ntwice. It seems you have client-side error recovery on the initial check, \nbut not on the second check. The solution is to eliminate the first check, \nand implement proper error recovery on the second check, so that the \nclient can just get a new hash and try again.\n\nMatthew\n\n-- \nIt's one of those irregular verbs - \"I have an independent mind,\" \"You are\nan eccentric,\" \"He is round the twist.\"\n -- Bernard Woolly, Yes Prime Minister\n", "msg_date": "Wed, 4 Jun 2008 16:06:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backend pid changing" }, { "msg_contents": "Lewis Kapell <[email protected]> writes:\n> ... By examining the logs, I have observed that the \n> backend pid for a particular client sometimes changes during a session. 
\n\nThat is just about impossible to believe, unless perhaps you have a\nconnection pooler in the loop somewhere?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Jun 2008 11:10:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: backend pid changing " }, { "msg_contents": "We are not using connection pooling, however the clients are tunneling \nthrough SSH. Forgot to mention that in my first message. Does that \nmake any difference to it?\n\nThank you,\n\nLewis Kapell\nComputer Operations\nSeton Home Study School\n\n\nTom Lane wrote:\n> Lewis Kapell <[email protected]> writes:\n>> ... By examining the logs, I have observed that the \n>> backend pid for a particular client sometimes changes during a session. \n> \n> That is just about impossible to believe, unless perhaps you have a\n> connection pooler in the loop somewhere?\n> \n> \t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Jun 2008 11:18:28 -0400", "msg_from": "Lewis Kapell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: backend pid changing" }, { "msg_contents": "Hi,\n I'm trying to make use of a cluster of 40 nodes that my group has, \nand I'm curious if anyone has experience with PgPool's parallel query \nmode. Under what circumstances could I expect the most benefit from \nquery parallelization as implemented by PgPool?\n", "msg_date": "Wed, 04 Jun 2008 11:58:32 -0400", "msg_from": "John Beaver <[email protected]>", "msg_from_op": false, "msg_subject": "PgPool parallel query performance rules of thumb" }, { "msg_contents": "Hi John,\n\nIt has been a while since I played around with PgPool-II. In the tests\nthat I did, it did help with load balancing. For parallel query, it\nhelped for simple queries, such as when querying a single table. If that\nis your typical use case, you may benefit. For other queries, it was not\nas effective. For example:\n\nSELECT t1.col1, t1.col2\nFROM t1 inner join t2 on t1.col1 = t2.col1\nWHERE t2.col3 > 1000\nORDER BY t1.col1\n\nAssume that the optimizer decided to process t2 first. It would apply\nthe where predicate t2.col3 > 1000 in parallel across all the nodes,\nwhich is a good thing, and pull in those results. But, for t1, it will\nquery all of the nodes, then pull in all of the rows (just t1.col1 and\nt1.col2 though) into a single node and perform the join and sort there\nas well. You are not getting much parallelism on that step, particularly\nnoticeable if it is a large table.\n\nSo, there is some benefit, but it is limited. Also, again, it has been a\nwhile since I ran this. It may have since improved (I apologize if this\nis inaccurate), and I do like the other features of PgPool and what SRA\nhas done.\n\nIn contrast, GridSQL would parallelize this better. (Full disclosure: I\nwork on the free and open source GridSQL project.) It would likely\nprocess t2 first, like pgpool. However, it would send the intermediate\nresults to the other nodes in the cluster. If it turns out that t1.col1\nwas also the column on which a distribution hash was based for t1, it\nwould ship those intermediate rows to only those nodes that it needs to\nfor joining. Then, on this second step, all of these joins would happen\nin parallel, with ORDER BY applied. Back at the coordinator, since an\nORDER BY is present, GridSQL would do a merge-sort from the results of\nthe other nodes and return them to the client.\n\nI hope that helps. On pgfoundry.org there are forums within the pgpool\nproject where they can probably better answer your questions. 
If you\nhave any questions about GridSQL, please feel free to post in the forums\nat enterprisedb.com or email me directly.\n\nRegards,\n\nMason\n\n\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of John Beaver\n> Sent: Wednesday, June 04, 2008 11:59 AM\n> To: Pgsql-Performance\n> Subject: [PERFORM] PgPool parallel query performance rules of thumb\n> \n> Hi,\n> I'm trying to make use of a cluster of 40 nodes that my group has,\n> and I'm curious if anyone has experience with PgPool's parallel query\n> mode. Under what circumstances could I expect the most benefit from\n> query parallelization as implemented by PgPool?\n> \n> --\n> Sent via pgsql-performance mailing list\n([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 6 Jun 2008 11:01:27 -0400", "msg_from": "\"Mason Sharp\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool parallel query performance rules of thumb" } ]
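Going back to the hash-plus-backend-pid scheme described at the start of this exchange, a rough sketch of what the server-side cross-check could look like; the table and column names are invented, and only pg_backend_pid() is an actual built-in.

CREATE TABLE client_session (
    session_hash text PRIMARY KEY,
    user_id      integer NOT NULL,
    backend_pid  integer NOT NULL DEFAULT pg_backend_pid()
);

-- On each request: does the presented hash still belong to this backend?
SELECT 1
  FROM client_session
 WHERE session_hash = 'hash-presented-by-the-client'
   AND backend_pid  = pg_backend_pid();
-- No row back means the client has to fetch a new hash; a mismatch usually means
-- the connection was re-established somewhere along the way (for example by a
-- pooler), as Tom suggests.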
[ { "msg_contents": "I am using multiple threads, but only one worker thread for insert/updated to this table.\nI don't mind trying to add multiple threads for this table, but my guess is it would not \nhelp because basically the overall tps rate is decreasing so dramatically. Since\nthe cpu time consumed by the corresponding postgres server process for my thread is\nsmall it does not seem to be the bottleneck. There has to be a bottleneck somewhere else. \nDo you agree or is there some flaw in my reasoning?\n\n----- Original Message ----\nFrom: Matthew Wakeling <[email protected]>\nTo: andrew klassen <[email protected]>\nCc: [email protected]\nSent: Wednesday, June 4, 2008 5:31:22 AM\nSubject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows\n\nOn Tue, 3 Jun 2008, andrew klassen wrote:\n> Basically, I have a somewhat constant rate of inserts/updates that go \n> into a work queue and then get passed to postgres.\n\n> The cpu load is not that high, i.e. plenty of idle cpu. I am running an older\n> version of freebsd and the iostat output is not very detailed.\n\nIf you're running a \"work queue\" architecture, that probably means you \nonly have one thread doing all the updates/inserts? It might be worth \ngoing multi-threaded, and issuing inserts and updates through more than \none connection. Postgres is designed pretty well to scale performance by \nthe number of simultaneous connections.\n\nMatthew\n\n-- \nContrary to popular belief, Unix is user friendly. It just happens to be\nvery selective about who its friends are.                -- Kyle Hearn\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \n \nI am using multiple threads, but only one worker thread for insert/updated to this table.\nI don't mind trying to add multiple threads for this table, but my guess is it would not \nhelp because basically the overall tps rate is decreasing so dramatically. Since\nthe cpu time consumed by the corresponding postgres server process for my thread is\nsmall it does not seem to be the bottleneck. There has to be a bottleneck somewhere else. \n \nDo you agree or is there some flaw in my reasoning?\n \n----- Original Message ----From: Matthew Wakeling <[email protected]>To: andrew klassen <[email protected]>Cc: [email protected]: Wednesday, June 4, 2008 5:31:22 AMSubject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rowsOn Tue, 3 Jun 2008, andrew klassen wrote:> Basically, I have a somewhat constant rate of inserts/updates that go > into a work queue and then get passed to postgres.> The cpu load is not that high, i.e. plenty of idle cpu. I am running an older> version of freebsd and the iostat output is not very detailed.If you're running a \"work queue\" architecture, that probably means you only have one thread doing all the updates/inserts? It might be worth going multi-threaded, and issuing inserts and updates through more than\n one connection. Postgres is designed pretty well to scale performance by the number of simultaneous connections.Matthew-- Contrary to popular belief, Unix is user friendly. It just happens to bevery selective about who its friends are.                
-- Kyle Hearn-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 4 Jun 2008 08:05:24 -0700 (PDT)", "msg_from": "andrew klassen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" }, { "msg_contents": "On Wed, 4 Jun 2008, andrew klassen wrote:\n> I am using multiple threads, but only one worker thread for insert/updated to this table.\n> I don't mind trying to add multiple threads for this table, but my guess is it would not\n> help because basically the overall tps rate is decreasing so dramatically. Since\n> the cpu time consumed by the corresponding postgres server process for my thread is\n> small it does not seem to be the bottleneck. There has to be a bottleneck somewhere else.\n> Do you agree or is there some flaw in my reasoning?\n\nThere is indeed a flaw in your reasoning - there may be very little CPU \ntime consumed, but that just indicates that the discs are busy. Getting \nPostgres to do multiple things at once will cause a more efficient use of \nthe disc subsystem, resulting in greater overall throughput. This is \nespecially the case if you have multiple discs in your box.\n\nMatthew\n\n-- \nContrary to popular belief, Unix is user friendly. It just happens to be\nvery selective about who its friends are. -- Kyle Hearn", "msg_date": "Wed, 4 Jun 2008 16:10:38 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" } ]
[ { "msg_contents": "I agree that the discs are probably very busy. I do have 2 disks but they are\nfor redundancy. Would it help to put the data, indexes and xlog on separate \ndisk partitions? \nI'll try adding more threads to update the table as you suggest.\n\n----- Original Message ----\nFrom: Matthew Wakeling <[email protected]>\nTo: [email protected]\nSent: Wednesday, June 4, 2008 10:10:38 AM\nSubject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows\n\nOn Wed, 4 Jun 2008, andrew klassen wrote:\n> I am using multiple threads, but only one worker thread for insert/updated to this table.\n> I don't mind trying to add multiple threads for this table, but my guess is it would not\n> help because basically the overall tps rate is decreasing so dramatically. Since\n> the cpu time consumed by the corresponding postgres server process for my thread is\n> small it does not seem to be the bottleneck. There has to be a bottleneck somewhere else.\n> Do you agree or is there some flaw in my reasoning?\n\nThere is indeed a flaw in your reasoning - there may be very little CPU \ntime consumed, but that just indicates that the discs are busy. Getting \nPostgres to do multiple things at once will cause a more efficient use of \nthe disc subsystem, resulting in greater overall throughput. This is \nespecially the case if you have multiple discs in your box.\n\nMatthew\n\n-- \nContrary to popular belief, Unix is user friendly. It just happens to be\nvery selective about who its friends are.                -- Kyle Hearn\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nI agree that the discs are probably very busy. I do have 2 disks but they are\nfor redundancy. Would it help to put the data, indexes and xlog on separate \ndisk partitions? \n \nI'll try adding more threads to update the table as you suggest.\n \n \n----- Original Message ----From: Matthew Wakeling <[email protected]>To: [email protected]: Wednesday, June 4, 2008 10:10:38 AMSubject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rowsOn Wed, 4 Jun 2008, andrew klassen wrote:> I am using multiple threads, but only one worker thread for insert/updated to this table.> I don't mind trying to add multiple threads for this table, but my guess is it would not> help because basically the overall tps rate is decreasing so dramatically. Since> the cpu time consumed by the corresponding postgres server process for my thread is> small it does not seem to be the bottleneck. There has to be a bottleneck somewhere else.> Do you agree or is there some flaw in my reasoning?There is indeed a flaw in your\n reasoning - there may be very little CPU time consumed, but that just indicates that the discs are busy. Getting Postgres to do multiple things at once will cause a more efficient use of the disc subsystem, resulting in greater overall throughput. This is especially the case if you have multiple discs in your box.Matthew-- Contrary to popular belief, Unix is user friendly. It just happens to bevery selective about who its friends are.                
-- Kyle Hearn-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 4 Jun 2008 10:24:09 -0700 (PDT)", "msg_from": "andrew klassen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" }, { "msg_contents": "andrew klassen wrote:\n> I'll try adding more threads to update the table as you suggest.\nYou could try materially increasing the update batch size too. As an \nexercise you could\nsee what the performance of COPY is by backing out the data and \nreloading it from\na suitable file.\n\n", "msg_date": "Wed, 04 Jun 2008 21:20:26 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" } ]
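The COPY comparison suggested above can be run from psql in a few lines; the path and table name are placeholders, and it should only be done against a disposable copy of the data. Note that server-side COPY TO/FROM a file requires superuser rights; psql's \copy is the client-side alternative.

\timing
COPY big_table TO '/tmp/big_table.copy';    -- dump the current contents
TRUNCATE big_table;
COPY big_table FROM '/tmp/big_table.copy';  -- reload with the indexes still in place and
                                            -- compare the elapsed time with the INSERT path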
[ { "msg_contents": "I am using the c-library interface and for these particular transactions\nI preload PREPARE statements. Then as I get requests, I issue a BEGIN, \nfollowed by at most 300 EXECUTES and then a COMMIT. That is the\ngeneral scenario. What value beyond 300 should I try? \nAlso, how might COPY (which involves file I/O) improve the \nabove scenario? \nThanks.\n\n\n----- Original Message ----\nFrom: James Mansion <[email protected]>\nTo: andrew klassen <[email protected]>\nCc: [email protected]\nSent: Wednesday, June 4, 2008 3:20:26 PM\nSubject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rows\n\nandrew klassen wrote:\n> I'll try adding more threads to update the table as you suggest.\nYou could try materially increasing the update batch size too.  As an \nexercise you could\nsee what the performance of COPY is by backing out the data and \nreloading it from\na suitable file.\n\n\n \n \nI am using the c-library interface and for these particular transactions\nI preload PREPARE statements. Then as I get requests, I issue a BEGIN, \nfollowed by at most 300 EXECUTES and then a COMMIT. That is the\ngeneral scenario. What value beyond 300 should I try? \n \nAlso, how might COPY (which involves file I/O) improve the \nabove scenario? \n \nThanks.\n----- Original Message ----From: James Mansion <[email protected]>To: andrew klassen <[email protected]>Cc: [email protected]: Wednesday, June 4, 2008 3:20:26 PMSubject: Re: [PERFORM] insert/update tps slow with indices on table > 1M rowsandrew klassen wrote:> I'll try adding more threads to update the table as you suggest.You could try materially increasing the update batch size too.  As an exercise you couldsee what the performance of COPY is by backing out the data and reloading it froma suitable file.", "msg_date": "Wed, 4 Jun 2008 14:30:13 -0700 (PDT)", "msg_from": "andrew klassen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" }, { "msg_contents": "\n> I am using the c-library interface and for these particular transactions\n> I preload PREPARE statements. Then as I get requests, I issue a BEGIN,\n> followed by at most 300 EXECUTES and then a COMMIT. That is the\n> general scenario. What value beyond 300 should I try?\n> Thanks.\n\n\tDo you have PREPARE statements whose performance might change as the \ntable grows ?\n\n\tI mean, some selects, etc... in that case if you start with an empty \ntable, after inserting say 100K rows you might want to just disconnect, \nreconnect and analyze to trigger replanning of those statements.\n\n> Also, how might COPY (which involves file I/O) improve the\n> above scenario?\n\n\tIt won't but if you see that COPY is very much faster than your INSERT \nbased process it will give you a useful piece of information.\n\n\tI understand your problem is :\n\n- Create table with indexes\n- Insert batches of rows\n- After a while it gets slow\n\n\tTry :\n\n- Create table with indexes\n- COPY huge batch of rows\n- Compare time with above\n\n\tSince COPY also updates the indexes just like your inserts do it will \ntell you if it's the indexes which slow you down or something else.\n\n\tAlso for insert heavy loads it's a good idea to put the xlog on a \nseparate disk (to double your disk bandwidth) unless you have a monster \ndisk setup.\n\n\tDuring your INSERTs, do you also make some SELECTs ? Do you have triggers \non the table ? Foreign keys ? Anything ?\n\tHow much RAM you have ? 
And can you measure the size of the table+indexes \nwhen it gets slow ?\n\n\n>\n>\n> ----- Original Message ----\n> From: James Mansion <[email protected]>\n> To: andrew klassen <[email protected]>\n> Cc: [email protected]\n> Sent: Wednesday, June 4, 2008 3:20:26 PM\n> Subject: Re: [PERFORM] insert/update tps slow with indices on table > 1M \n> rows\n>\n> andrew klassen wrote:\n>> I'll try adding more threads to update the table as you suggest.\n> You could try materially increasing the update batch size too.  As an\n> exercise you could\n> see what the performance of COPY is by backing out the data and\n> reloading it from\n> a suitable file.\n>\n>\n>\n\n\n", "msg_date": "Wed, 04 Jun 2008 23:50:09 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" }, { "msg_contents": "andrew klassen <[email protected]> writes:\n> I am using the c-library interface and for these particular transactions\n> I preload PREPARE statements. Then as I get requests, I issue a BEGIN, \n> followed by at most 300 EXECUTES and then a COMMIT. That is the\n> general scenario. What value beyond 300 should I try? \n\nWell, you could try numbers in the low thousands, but you'll probably\nget only incremental improvement.\n\n> Also, how might COPY (which involves file I/O) improve the \n> above scenario? \n\nCOPY needn't involve file I/O. If you are using libpq you can push\nanything you want into PQputCopyData. This would involve formatting\nthe data according to COPY's escaping rules, which are rather different\nfrom straight SQL, but I doubt it'd be a huge amount of code. Seems\nworth trying.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Jun 2008 18:52:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows " }, { "msg_contents": "andrew klassen wrote:\n> I am using the c-library interface and for these particular transactions\n> I preload PREPARE statements. Then as I get requests, I issue a BEGIN, \n> followed by at most 300 EXECUTES and then a COMMIT. That is the\n> general scenario. What value beyond 300 should I try? \n\nMake sure you use the asynchronous PQsendQuery, instead of plain PQexec. \nOtherwise you'll be doing a round-trip for each EXECUTE anyway \nregardless of the batch size. Of course, if the bottleneck is somewhere \nelse, it won't make a difference..\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 05 Jun 2008 09:18:29 +0300", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert/update tps slow with indices on table > 1M rows" } ]
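For what it's worth, the stream a client hands to COPY (from a file, from STDIN, or pushed through PQputCopyData as Tom describes) is plain rows in COPY text format rather than SQL: by default the columns are tab-separated, NULL is spelled \N, and the data is terminated by a line containing just a backslash and a period. A sketch with an explicit delimiter and invented table and values:

COPY big_table (id, code, note) FROM STDIN WITH DELIMITER '|';
1|10|first row
2|\N|second row with a NULL code
\.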
[ { "msg_contents": "One of the things to hit my mailbox this week is from someone who is \nfrustrated not only by their database server but by issues sending \nmessages to this list; I'm forwarding to here for them, please reply to \nall so they get a copy.\n\nHere's the basic server information:\n\n> PostgreSQL version - 8.2.4, RHEL4 Linux\n> 64-bit, 8 cpu(s), 16GB memory, raid 5 storage.\n> The tuning objective is to optimize the PostgreSQL database to handle \n> both reads and writes. The database receives continuous inserts, \n> updates and deletes on tables with 140+ million records.\n\nThe primary problem they're having are really awful checkpoint spikes, \nwhich is how I got conne^H^Hvinced into helping out here. I belive this \nis hardware RAID with a caching controller.\n\nFirst off, the bad news nobody ever wants to hear: you can't really make \nthis problem go completely away in many situations with 8.2, whereas the \nnew spread checkpoint feature in 8.3 is aimed specifically at this \nproblem. \nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm goes \nover all that, along with introducing some of the ideas I'll toss in below \nabout how to optimize for 8.2. A related paper I did talks about reducing \nhow much memory Linux caches for you when writing heavily, that might be \nappropriate here as well: \nhttp://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\nThe other obvious suggestion is that RAID5 is known to be poor at heavy \nwrite performance, which makes it really the wrong choice here as well. \nWhat this system really wants to have done to it is to be reconfigured \nwith RAID10 and PostgreSQL 8.3 instead. But since as always that's \nimpractical for now, let's take a look at the postgresql.settings to see \nwhat might be improved immediately:\n\nmax_connections = 128\nshared_buffers = 400\ntemp_buffers = 1000\neffective_cache_size = 50000\nrandom_page_cost = 2.5\n\nThey've experimented with lowering shared_buffers so much here because it \nhelps the problem, but 400 is going a bit too far. You should be able to \nget at to least a few thousand for that setting without making the problem \nmuch worse, and that will help lower general I/O that might block the \ncheckpoint work a bit. Something like 5000 to 20000 would be my guess for \na good setting here.\n\nsort_mem = 4194304\nvacuum_mem = 2097152\nwork_mem = 4194304\nmaintenance_work_mem = 256000\n\nThere is no sort_mem or vacuum_mem in 8.2 anymore, so those can be \ndeleted: replaced by work_mem and maintenance_work_mem. The values for \nall the active *_mem settings here are on the low side for a system with \n16GB of RAM. If we re-cast these with more useful units this is obvious:\n\nwork_mem = 4MB\nmaintenance_work_mem = 256KB\n\nTry work_mem=16MB and maintenance_work_mem=256MB instead as starting \nvalues. work_mem could go a lot higher, but you have to have to be \ncareful to consider how many connections are involved because this is a \nper-session parameter.\n\neffective_cache_size is wildly low here; something >8GB is likely more \naccurate. 
While not directly causing checkpoint issues, getting better \nplans can lower overall system I/O through more efficient use of available \nresources and therefore leave more bandwidth for the writes.\n\nbgwriter_lru_percent = 70\nbgwriter_lru_maxpages = 800\nbgwriter_all_percent = 50\nbgwriter_all_maxpages = 800\n\nAh, the delicate scent of someone on IRC suggesting \"oh, checkpoints \nspikes are taken care of by the background writer, just make that more \naggressive and they'll go away\". These values are crazy big, and the only \nreason they work at all is that with shared_buffers=400 and 8 CPUs you can \nafford to scan them every single time and nobody cares. The settings \nKevin Grittner settled on that I mentioned in the 8.2->8.3 paper are about \nas aggressive as I've ever seen work well in the real world:\n\n> bgwriter_delay = 200\n> bgwriter_lru_percent = 20.0\n> bgwriter_lru_maxpages = 200\n> bgwriter_all_percent = 10.0\n> bgwriter_all_maxpages = 600\n\nI personally will often just turn the background writer off all together \nby setting both maxpages parameters to zero, and wait for the surprised \nlooks as the checkpoint spikes get smaller. The 8.2 BGW just isn't \neffective in modern systems with gigabytes of RAM. It writes the same \nblocks over and over into the gigantic OS cache, in a way that competes \ninefficiently for I/O resources with how buffers are naturally evicted \nanyway when you use the kind of low shared_buffers settings that are a \nmust on 8.2.\n\nfsync = off\n\nWell, this is asking for trouble. The first time your server crashes, I \nhope you're feeling lucky. I think this system is setup so that it can \neasily be replaced if there's a problem, so this may not be a huge \nproblem, but it is dangerous to turn fsync off.\n\ncheckpoint_segments = 40\ncheckpoint_timeout = 300\ncheckpoint_warning = 15\n\nsetting checkpoint_segments to 40 is likely too large for an 8.2 system \nthat's writing heavily. That keeps the number of checkpoints down, so you \nget less spikes, but each one of them will be much larger. Something in \nthe 5-20 range is likely more appropriate here.\n\nvacuum_cost_delay = 750\nautovacuum = true\nautovacuum_naptime = 3600\nautovacuum_vacuum_threshold = 1000\nautovacuum_analyze_threshold = 500\nautovacuum_vacuum_scale_factor = 0.4\nautovacuum_analyze_scale_factor = 0.2\nautovacuum_vacuum_cost_delay = -1\nautovacuum_vacuum_cost_limit = -1\nmax_fsm_pages = 5000000\nmax_fsm_relations = 2000\n\nNow, when I was on the phone about this system, I recall hearing that \nthey've fallen into that ugly trap where they are forced to reload this \ndatabase altogether regularly to get performance to stay at a reasonable \nlevel. That's usually a vacuum problem, and yet another reason to upgrade \nto 8.3 so you get the improved autovacuum there. 
Vacuum tuning isn't \nreally my bag, and I'm out of time here tonight; anybody else want to make \nsome suggestions on what might be changed here based on what I've shared \nabout the system?\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 6 Jun 2008 02:30:23 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Checkpoint tuning on 8.2.4" }, { "msg_contents": "On Fri, Jun 6, 2008 at 12:30 AM, Greg Smith <[email protected]> wrote:\n\n> vacuum_cost_delay = 750\n> autovacuum = true\n> autovacuum_naptime = 3600\n> autovacuum_vacuum_threshold = 1000\n> autovacuum_analyze_threshold = 500\n> autovacuum_vacuum_scale_factor = 0.4\n> autovacuum_analyze_scale_factor = 0.2\n> autovacuum_vacuum_cost_delay = -1\n> autovacuum_vacuum_cost_limit = -1\n> max_fsm_pages = 5000000\n> max_fsm_relations = 2000\n\nThese are terrible settings for a busy database. A cost delay\nanything over 10 or 20 is usually WAY too big, and will make vacuums\ntake nearly forever. Naptime of 3600 is 1 hour, right? That's also\nfar too long to be napping between just checking to see if you should\nrun another vacuum.\n\nI'd recommend:\nvacuum_cost_delay = 20\nautovacuum = true\nautovacuum_naptime = 300 # 5 minutes.\n\nNote that I'm used to 8.2 where such settings are in more easily\nreadable settings like 5min. So if 3600 is in some other unit, I\ncould be wrong here.\n\n> Now, when I was on the phone about this system, I recall hearing that\n> they've fallen into that ugly trap where they are forced to reload this\n> database altogether regularly to get performance to stay at a reasonable\n> level. That's usually a vacuum problem, and yet another reason to upgrade\n> to 8.3 so you get the improved autovacuum there. Vacuum tuning isn't really\n> my bag, and I'm out of time here tonight; anybody else want to make some\n> suggestions on what might be changed here based on what I've shared about\n> the system?\n\nIt may well be that their best option is to manually vacuum certain\ntables more often (i.e. the ones that bloat). you can write a script\nthat connects, sets vacuum_cost_delay to something higher, like 20 or\n30, and then run the vacuum by hand. Such a vacuum may need to be run\nin an almost continuous loop if the update rate is high enough.\n\nI agree with what you said earlier, the biggest mistake here is\nrunning a db on a RAID-5 array.\n", "msg_date": "Fri, 6 Jun 2008 11:22:27 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint tuning on 8.2.4" }, { "msg_contents": "I concur with most of what was already posted. Some additions below.\n \n>>> \"Scott Marlowe\" <[email protected]> wrote: \n> On Fri, Jun 6, 2008 at 12:30 AM, Greg Smith <[email protected]>\nwrote:\n> \n>> vacuum_cost_delay = 750\n>> autovacuum = true\n>> autovacuum_naptime = 3600\n>> autovacuum_vacuum_threshold = 1000\n>> autovacuum_analyze_threshold = 500\n>> autovacuum_vacuum_scale_factor = 0.4\n>> autovacuum_analyze_scale_factor = 0.2\n>> autovacuum_vacuum_cost_delay = -1\n>> autovacuum_vacuum_cost_limit = -1\n>> max_fsm_pages = 5000000\n>> max_fsm_relations = 2000\n> \n> These are terrible settings for a busy database. A cost delay\n> anything over 10 or 20 is usually WAY too big, and will make vacuums\n> take nearly forever. Naptime of 3600 is 1 hour, right? 
That's also\n> far too long to be napping between just checking to see if you\nshould\n> run another vacuum.\n> \n> I'd recommend:\n> vacuum_cost_delay = 20\n> autovacuum = true\n> autovacuum_naptime = 300 # 5 minutes.\n \nI would also reduce the autovacuum thresholds and scale factors;\nmany small vacuums are more efficient than a few big ones.\nAlso, you stand a chance to force the hint bit writing to coalesce\nwith the initial page write if you are more aggressive here.\n \nI'd probably go all the way down to a vacuum cost delay of 10 and then\nsee if you need to go higher. That has worked best for us in a\nwrite-heavy environment with hundreds of millions of rows.\n \nA nightly database vacuum is good if it can complete off-hours and\ndoesn't interfere with the application; otherwise, some regular\nschedule, by table.\n \nIt's hard to give more advice without more specifics.\n \n-Kevin\n", "msg_date": "Mon, 23 Jun 2008 17:42:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpoint tuning on 8.2.4" } ]
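As a small illustration of the vacuum advice in this thread, here is the kind of per-table maintenance snippet Scott and Kevin describe, suitable for running from cron via psql against the tables that bloat fastest. The table name busy_table is hypothetical and the cost-delay value is only the thread's suggested starting point, not a tuned recommendation.

SET vacuum_cost_delay = 10;   -- milliseconds; raise it if the I/O impact is still too visible
VACUUM ANALYZE busy_table;

-- Sanity-check the memory settings discussed above, in readable units:
SHOW work_mem;
SHOW maintenance_work_mem;
SHOW effective_cache_size;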
[ { "msg_contents": "Hi,\n\nAm I wrong or AGE() always gets directed to a sequential scan?\n\n # BEGIN;\n ] SET enable_seqscan TO off;\n ] EXPLAIN ANALYZE\n ] SELECT count(1)\n ] FROM incomingmessageslog\n ] WHERE AGE(time) < '1 year';\n ] ROLLBACK;\n BEGIN\n SET\n QUERY PLAN \n ----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=100000528.33..100000528.34 rows=1 width=0) (actual time=13.789..13.790 rows=1 loops=1)\n -> Seq Scan on incomingmessageslog (cost=100000000.00..100000520.00 rows=3333 width=0) (actual time=13.783..13.783 rows=0 loops=1)\n Filter: (age((('now'::text)::date)::timestamp without time zone, \"time\") < '1 year'::interval)\n Total runtime: 13.852 ms\n (4 rows)\n \n ROLLBACK\n\nAs far as I know, AGE() can take advantage of a very simple equation for\nconstant comparisons:\n\n = AGE(field) < constant_criteria\n = AGE(field, constant_ts) < constant_criteria\n = AGE(field) < constant_criteria + constant_ts\n = AGE(field) < CONSTANT_CRITERIA\n\nHow much does such a hack into optimizer cost? I don't know about its\nimplications but I'll really appreciate such a functionality. At the\nmoment, I'm trying replace every AGE() usage in my code and it really\nfeels a PITA.\n\n\nRegards.\n", "msg_date": "Fri, 06 Jun 2008 16:20:59 +0300", "msg_from": "Volkan YAZICI <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing AGE()" }, { "msg_contents": "On Fri, Jun 6, 2008 at 7:20 AM, Volkan YAZICI <[email protected]> wrote:\n> Hi,\n>\n> Am I wrong or AGE() always gets directed to a sequential scan?\n>\n> # BEGIN;\n> ] SET enable_seqscan TO off;\n> ] EXPLAIN ANALYZE\n> ] SELECT count(1)\n> ] FROM incomingmessageslog\n> ] WHERE AGE(time) < '1 year';\n> ] ROLLBACK;\n> BEGIN\n> SET\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=100000528.33..100000528.34 rows=1 width=0) (actual time=13.789..13.790 rows=1 loops=1)\n> -> Seq Scan on incomingmessageslog (cost=100000000.00..100000520.00 rows=3333 width=0) (actual time=13.783..13.783 rows=0 loops=1)\n> Filter: (age((('now'::text)::date)::timestamp without time zone, \"time\") < '1 year'::interval)\n> Total runtime: 13.852 ms\n> (4 rows)\n>\n> ROLLBACK\n>\n> As far as I know, AGE() can take advantage of a very simple equation for\n> constant comparisons:\n>\n> = AGE(field) < constant_criteria\n> = AGE(field, constant_ts) < constant_criteria\n> = AGE(field) < constant_criteria + constant_ts\n> = AGE(field) < CONSTANT_CRITERIA\n>\n> How much does such a hack into optimizer cost? I don't know about its\n> implications but I'll really appreciate such a functionality. At the\n> moment, I'm trying replace every AGE() usage in my code and it really\n> feels a PITA.\n\nYeah, age() isn't real performent in such situations. I generally\nstick to simpler date math like:\n\nwhere timestampvalue < now() - interval '2 year'\n\nwhich can use an index on timestampvalue\n\nThe problem with age is it's always compared to now, so there's always\ngonna be some math.\n", "msg_date": "Sun, 8 Jun 2008 00:09:19 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing AGE()" } ]
[ { "msg_contents": "Hi,\n\nI got a question about scalability in high volume insert situation\nwhere the table has a primary key and several non-unique indexes\non other columns of the table. How does PostgreSQL behave\nin terms of scalability? The high volume of inserts comes from\nmultiple transactions.\n\nBest regards,\nZoltďż˝n Bďż˝szďż˝rmďż˝nyi\n\n-- \n----------------------------------\nZoltďż˝n Bďż˝szďż˝rmďż˝nyi\nCybertec Schďż˝nig & Schďż˝nig GmbH\nhttp://www.postgresql.at/\n\n", "msg_date": "Wed, 11 Jun 2008 11:56:04 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": true, "msg_subject": "Scalability question" }, { "msg_contents": "> Hi,\n>\n> I got a question about scalability in high volume insert situation\n> where the table has a primary key and several non-unique indexes\n> on other columns of the table. How does PostgreSQL behave\n> in terms of scalability? The high volume of inserts comes from\n> multiple transactions.\n>\n> Best regards,\n> Zoltďż˝n Bďż˝szďż˝rmďż˝nyi\n\nWell, that's a difficult question as it depends on hardware and software,\nbut with a proper tunning the results may be very good. Just do the basic\nPostgreSQL tuning and then tune it for the INSERT performance if needed.\nIt's difficult to give any other recommendations without a more detailed\nknowledge of the problem, but consider these hints:\n\n1) move the pg_xlog to a separate drive (so it's linear)\n2) move the table with large amount of inserts to a separate tablespace\n3) minimize the amount of indexes etc.\n\nThe basic rule is that each index adds some overhead to the insert, but it\ndepends on datatype, etc. Just prepare some data to import, and run the\ninsert with and without the indexes and compare the time.\n\nTomas\n\n", "msg_date": "Wed, 11 Jun 2008 12:15:41 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Scalability question" }, { "msg_contents": "[email protected] ďż˝rta:\n>> Hi,\n>>\n>> I got a question about scalability in high volume insert situation\n>> where the table has a primary key and several non-unique indexes\n>> on other columns of the table. How does PostgreSQL behave\n>> in terms of scalability? The high volume of inserts comes from\n>> multiple transactions.\n>>\n>> Best regards,\n>> Zoltďż˝n Bďż˝szďż˝rmďż˝nyi\n>> \n>\n> Well, that's a difficult question as it depends on hardware and software,\n> but with a proper tunning the results may be very good. Just do the basic\n> PostgreSQL tuning and then tune it for the INSERT performance if needed.\n> It's difficult to give any other recommendations without a more detailed\n> knowledge of the problem, but consider these hints:\n>\n> 1) move the pg_xlog to a separate drive (so it's linear)\n> 2) move the table with large amount of inserts to a separate tablespace\n> 3) minimize the amount of indexes etc.\n>\n> The basic rule is that each index adds some overhead to the insert, but it\n> depends on datatype, etc. Just prepare some data to import, and run the\n> insert with and without the indexes and compare the time.\n>\n> Tomas\n> \n\nThanks. The question is more about theoretical working.\nE.g. if INSERTs add \"similar\" records with identical index records\n(they are non-unique indexes) does it cause contention? 
Because\nthese similar records add index tuples that supposed to be near\nto each other in the btree.\n\n-- \n----------------------------------\nZoltďż˝n Bďż˝szďż˝rmďż˝nyi\nCybertec Schďż˝nig & Schďż˝nig GmbH\nhttp://www.postgresql.at/\n\n", "msg_date": "Wed, 11 Jun 2008 12:35:54 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability question" }, { "msg_contents": "Zoltan Boszormenyi wrote:\n> Hi,\n> \n> I got a question about scalability in high volume insert situation\n> where the table has a primary key and several non-unique indexes\n> on other columns of the table. How does PostgreSQL behave\n> in terms of scalability? The high volume of inserts comes from\n> multiple transactions.\n\nbtree and gist indexes can have multiple concurrent insertions in\nflight. A potential for blocking is in UNIQUE indexes: if two\ntransactions try to insert the same value in the unique index, the\nsecond one will block until the first transaction finishes.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 11 Jun 2008 09:34:38 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability question" }, { "msg_contents": "On Wed, Jun 11, 2008 at 3:56 AM, Zoltan Boszormenyi <[email protected]> wrote:\n> Hi,\n>\n> I got a question about scalability in high volume insert situation\n> where the table has a primary key and several non-unique indexes\n> on other columns of the table. How does PostgreSQL behave\n> in terms of scalability? The high volume of inserts comes from\n> multiple transactions.\n\nPostgreSQL supports initial fill rates of < 100% for indexes, so set\nit to 50% filled and new entries that live near current entries will\nhave room to be added without having the split the btree.\n\nPostgreSQL also allows you to easily put your indexes on other\nparitions / drive arrays etc...\n\nPostgreSQL does NOT store visibility info in the indexes, so they stay\nsmall and updates to them are pretty fast.\n", "msg_date": "Wed, 11 Jun 2008 08:13:44 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability question" } ]
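A hypothetical example tying together the points above: a non-unique index created with a reduced fillfactor, so its pages keep free space for neighbouring insertions, and placed in its own tablespace on a separate drive array. All object names and the path are invented, and 70 is just an illustrative fillfactor rather than a recommendation.

CREATE TABLESPACE fast_index_space LOCATION '/mnt/index_array';

CREATE INDEX orders_customer_idx
    ON orders (customer_id)
    WITH (fillfactor = 70)
    TABLESPACE fast_index_space;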
[ { "msg_contents": "We run GCC-compiled postgresql on a number\nof HP-UX and Linux boxes.\n\nOur measurements to date show 8.3.1\nperformance to be about 30% *worse*\nthan 8.2 on HP-UX for the same \"drink the firehose\"\ninsert/update/delete benchmarks. Linux\nperformance is fine. \n\nTweaking the new 8.3.1 synchronous_commit\nand bg writer delays that *should* speed\nthings up actually makes them a bit worse,\nagain only on HP-UX PA-RISK 11.11 and 11.23.\n\nRight now it's 32 bit, both for 8.2 and 8.3. \n\nAny hints?\n\n\n\nP. J. Rovero \n", "msg_date": "Wed, 11 Jun 2008 21:40:20 -0400", "msg_from": "\"Josh Rovero\" <[email protected]>", "msg_from_op": true, "msg_subject": "8.3.1 vs 8.2.X on HP-UX PA-RISC 11.11/11.23" }, { "msg_contents": "Are you using the same locales for both?\n\nKen\n\nOn Wed, Jun 11, 2008 at 09:40:20PM -0400, Josh Rovero wrote:\n> We run GCC-compiled postgresql on a number\n> of HP-UX and Linux boxes.\n> \n> Our measurements to date show 8.3.1\n> performance to be about 30% *worse*\n> than 8.2 on HP-UX for the same \"drink the firehose\"\n> insert/update/delete benchmarks. Linux\n> performance is fine. \n> \n> Tweaking the new 8.3.1 synchronous_commit\n> and bg writer delays that *should* speed\n> things up actually makes them a bit worse,\n> again only on HP-UX PA-RISK 11.11 and 11.23.\n> \n> Right now it's 32 bit, both for 8.2 and 8.3. \n> \n> Any hints?\n> \n> \n> \n> P. J. Rovero \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n", "msg_date": "Thu, 12 Jun 2008 07:28:03 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.3.1 vs 8.2.X on HP-UX PA-RISC 11.11/11.23" }, { "msg_contents": "Yes. Locale is 'C' for both 8.3.1 and 8.2.1.\n\nKenneth Marshall wrote:\n> Are you using the same locales for both?\n> \n> Ken\n> \n> On Wed, Jun 11, 2008 at 09:40:20PM -0400, Josh Rovero wrote:\n>> We run GCC-compiled postgresql on a number\n>> of HP-UX and Linux boxes.\n>>\n>> Our measurements to date show 8.3.1\n>> performance to be about 30% *worse*\n>> than 8.2 on HP-UX for the same \"drink the firehose\"\n>> insert/update/delete benchmarks. \n\n-- \nP. J. \"Josh\" Rovero Sonalysts, Inc.\nEmail: [email protected] www.sonalysts.com 215 Parkway North\nWork: (860)326-3671 or 442-4355 Waterford CT 06385\n***********************************************************************\n\n", "msg_date": "Thu, 12 Jun 2008 08:54:27 -0400", "msg_from": "Josh Rovero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.3.1 vs 8.2.X on HP-UX PA-RISC 11.11/11.23" } ]
[ { "msg_contents": "Hi,\n\nI read a lot about postgres tuning and did some of it. But one of the \nthings, when you start tuning a system that is completely new to you, is \nchecking which sql statement(s) cost most of the resources.\n\ncpu instensive sql seems easy to find.\nBut how do I find i/o intensive sql as fast as possible?\n\nTuning a sql statements I'm familiar with. Finding a sql statement which \ntakes too long due to i/o is probably easy as well. But how about \nstatements that take about 100 ms, that read a lot and that are executed \nseveral times per second?\n\nTo ask the same question just different - Is there a possibility to \ncheck how many pages/kB a sql reads from shared buffer and how many \npages/kB from disk?\nIs there a possibility to read or create historical records about how \noften, how fast one sql is run and how many reads it used from shared \nbuffers, from disk an so on.\n\n\nBest regards,\nUwe\n", "msg_date": "Sun, 15 Jun 2008 15:48:32 +0200", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "how to find the sql using most of the i/o in an oltp system" }, { "msg_contents": "Hi Alexander,\n\nthanks for you answer.\nWhat you wrote in terms of postgres I knew. I just tested to log all\nstatements with statistics. This is a lot of unstructured data in a logfile.\nBut this is the best I found as far.\n\nThe database is running on a solaris box. So DTrace is no problem. I\ncouldn't find any dtrace scripts for postgres. Do you know any scripts\nexcept this sample script?\n\nThanks.\nUwe\n\nOn Sun, Jun 15, 2008 at 4:03 PM, Alexander Staubo <[email protected]> wrote:\n\n> On Sun, Jun 15, 2008 at 3:48 PM, Uwe Bartels <[email protected]>\n> wrote:\n> > Tuning a sql statements I'm familiar with. Finding a sql statement which\n> > takes too long due to i/o is probably easy as well. But how about\n> statements\n> > that take about 100 ms, that read a lot and that are executed several\n> times\n> > per second?\n>\n> Take a look at the PostgreSQL manual chapter on monitoring and statistics:\n>\n> http://www.postgresql.org/docs/8.3/interactive/monitoring.html\n>\n> If you have access to DTrace (available on Solaris, OS X and possibly\n> FreeBSD), you could hook the low-level system calls to reads and\n> writes. If you don't have access to DTrace, the pg_statio_* set of\n> tables is your main option. In particular, pg_statio_user_tables and\n> pg_statio_user_indexes. See the documentation for the meaning of the\n> individual columns.\n>\n> Unfortunately, the statistics tables are not transaction-specific\n> (indeed I believe they only update once you commit the transaction,\n> and then only after a delay), meaning they capture statistics about\n> everything currently going on in the database. The only way to capture\n> statistics about a single query, then, is to run it in complete\n> isolation.\n>\n> Alexander.\n>\n\nHi Alexander,\n\nthanks for you answer.\nWhat you wrote in terms of postgres I knew. I just tested to log all\nstatements with statistics. This is a lot of unstructured data in a\nlogfile. But this is the best I found as far. \n\nThe database is running on a solaris box. So DTrace is no problem. I\ncouldn't find any dtrace scripts for postgres. Do you know any scripts\nexcept this sample script?\n\nThanks.\nUweOn Sun, Jun 15, 2008 at 4:03 PM, Alexander Staubo <[email protected]> wrote:\nOn Sun, Jun 15, 2008 at 3:48 PM, Uwe Bartels <[email protected]> wrote:\n> Tuning a sql statements I'm familiar with. 
Finding a sql statement which\n> takes too long due to i/o is probably easy as well. But how about statements\n> that take about 100 ms, that read a lot and that are executed several times\n> per second?\n\nTake a look at the PostgreSQL manual chapter on monitoring and statistics:\n\n  http://www.postgresql.org/docs/8.3/interactive/monitoring.html\n\nIf you have access to DTrace (available on Solaris, OS X and possibly\nFreeBSD), you could hook the low-level system calls to reads and\nwrites. If you don't have access to DTrace, the pg_statio_* set of\ntables is your main option. In particular, pg_statio_user_tables and\npg_statio_user_indexes. See the documentation for the meaning of the\nindividual columns.\n\nUnfortunately, the statistics tables are not transaction-specific\n(indeed I believe they only update once you commit the transaction,\nand then only after a delay), meaning they capture statistics about\neverything currently going on in the database. The only way to capture\nstatistics about a single query, then, is to run it in complete\nisolation.\n\nAlexander.", "msg_date": "Sun, 15 Jun 2008 16:41:25 +0200", "msg_from": "\"Uwe Bartels\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to find the sql using most of the i/o in an oltp system" }, { "msg_contents": "Check out pgFouine.\n\nSent from my iPhone\n\nOn Jun 15, 2008, at 10:41 AM, \"Uwe Bartels\" <[email protected]> \nwrote:\n\n> Hi Alexander,\n>\n> thanks for you answer.\n> What you wrote in terms of postgres I knew. I just tested to log all \n> statements with statistics. This is a lot of unstructured data in a \n> logfile. But this is the best I found as far.\n>\n> The database is running on a solaris box. So DTrace is no problem. I \n> couldn't find any dtrace scripts for postgres. Do you know any \n> scripts except this sample script?\n>\n> Thanks.\n> Uwe\n>\n> On Sun, Jun 15, 2008 at 4:03 PM, Alexander Staubo <[email protected]> \n> wrote:\n> On Sun, Jun 15, 2008 at 3:48 PM, Uwe Bartels <[email protected]> \n> wrote:\n> > Tuning a sql statements I'm familiar with. Finding a sql statement \n> which\n> > takes too long due to i/o is probably easy as well. But how about \n> statements\n> > that take about 100 ms, that read a lot and that are executed \n> several times\n> > per second?\n>\n> Take a look at the PostgreSQL manual chapter on monitoring and \n> statistics:\n>\n> http://www.postgresql.org/docs/8.3/interactive/monitoring.html\n>\n> If you have access to DTrace (available on Solaris, OS X and possibly\n> FreeBSD), you could hook the low-level system calls to reads and\n> writes. If you don't have access to DTrace, the pg_statio_* set of\n> tables is your main option. In particular, pg_statio_user_tables and\n> pg_statio_user_indexes. See the documentation for the meaning of the\n> individual columns.\n>\n> Unfortunately, the statistics tables are not transaction-specific\n> (indeed I believe they only update once you commit the transaction,\n> and then only after a delay), meaning they capture statistics about\n> everything currently going on in the database. The only way to capture\n> statistics about a single query, then, is to run it in complete\n> isolation.\n>\n> Alexander.\n>\n\nCheck out pgFouine.Sent from my iPhoneOn Jun 15, 2008, at 10:41 AM, \"Uwe Bartels\" <[email protected]> wrote:Hi Alexander,\n\nthanks for you answer.\nWhat you wrote in terms of postgres I knew. I just tested to log all\nstatements with statistics. This is a lot of unstructured data in a\nlogfile. 
But this is the best I found as far. \n\nThe database is running on a solaris box. So DTrace is no problem. I\ncouldn't find any dtrace scripts for postgres. Do you know any scripts\nexcept this sample script?\n\nThanks.\nUweOn Sun, Jun 15, 2008 at 4:03 PM, Alexander Staubo <[email protected]> wrote:\nOn Sun, Jun 15, 2008 at 3:48 PM, Uwe Bartels <[email protected]> wrote:\n> Tuning a sql statements I'm familiar with. Finding a sql statement which\n> takes too long due to i/o is probably easy as well. But how about statements\n> that take about 100 ms, that read a lot and that are executed several times\n> per second?\n\nTake a look at the PostgreSQL manual chapter on monitoring and statistics:\n\n  http://www.postgresql.org/docs/8.3/interactive/monitoring.html\n\nIf you have access to DTrace (available on Solaris, OS X and possibly\nFreeBSD), you could hook the low-level system calls to reads and\nwrites. If you don't have access to DTrace, the pg_statio_* set of\ntables is your main option. In particular, pg_statio_user_tables and\npg_statio_user_indexes. See the documentation for the meaning of the\nindividual columns.\n\nUnfortunately, the statistics tables are not transaction-specific\n(indeed I believe they only update once you commit the transaction,\nand then only after a delay), meaning they capture statistics about\neverything currently going on in the database. The only way to capture\nstatistics about a single query, then, is to run it in complete\nisolation.\n\nAlexander.", "msg_date": "Sun, 15 Jun 2008 10:53:06 -0400", "msg_from": "\"Gavin M. Roy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to find the sql using most of the i/o in an oltp system" } ]
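To complement the pointers above, here is a rough starting query against pg_statio_user_tables that ranks tables by blocks read from outside shared buffers. The usual caveat applies: heap_blks_read counts reads that missed PostgreSQL's buffer cache but may still have been served from the OS cache, and per-statement attribution still needs statement logging (or a tool such as pgFouine) on top of this.

SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS heap_hit_ratio,
       idx_blks_read,
       idx_blks_hit
FROM pg_statio_user_tables
ORDER BY heap_blks_read + coalesce(idx_blks_read, 0) DESC
LIMIT 20;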
[ { "msg_contents": "Hi,\nIn my pgsql procedure, i use the function\n\ngeometryDiff := difference\n(geometry1,geometry2);\n\nbut this function is very slow!!!\nWhat can I do to \nspeed this function?\nExists a special index for it?\n\nThanks in advance!\nLuke\n\n", "msg_date": "Mon, 16 Jun 2008 11:06:44 +0200 (CEST)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "function difference(geometry,geometry) is SLOW!" } ]
[ { "msg_contents": "Hi, I am looking to improve the initial query speed for the following query:\n\nselect email_id from email, to_tsquery('default','example') as q where \nq@@fts;\n\nThis is running on 8.2.4 on Windows Server 2K3.\n\nThe initial output from explain analyse is as follows.\n\n\"Nested Loop (cost=8.45..76.70 rows=18 width=8) (actual \ntime=5776.347..27364.248 rows=14938 loops=1)\"\n\" -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) (actual \ntime=0.023..0.024 rows=1 loops=1)\"\n\" -> Bitmap Heap Scan on email (cost=8.45..76.46 rows=18 width=322) \n(actual time=5776.314..27353.344 rows=14938 loops=1)\"\n\" Filter: (q.q @@ email.fts)\"\n\" -> Bitmap Index Scan on email_fts_index (cost=0.00..8.44 \nrows=18 width=0) (actual time=5763.355..5763.355 rows=15118 loops=1)\"\n\" Index Cond: (q.q @@ email.fts)\"\n\"Total runtime: 27369.091 ms\"\n\nSubsequent output is considerably faster. (I am guessing that is because \nemail_fts_index is cached.\n\n\"Nested Loop (cost=8.45..76.70 rows=18 width=8) (actual \ntime=29.241..264.712 rows=14938 loops=1)\"\n\" -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) (actual \ntime=0.008..0.010 rows=1 loops=1)\"\n\" -> Bitmap Heap Scan on email (cost=8.45..76.46 rows=18 width=322) \n(actual time=29.224..256.135 rows=14938 loops=1)\"\n\" Filter: (q.q @@ email.fts)\"\n\" -> Bitmap Index Scan on email_fts_index (cost=0.00..8.44 \nrows=18 width=0) (actual time=28.344..28.344 rows=15118 loops=1)\"\n\" Index Cond: (q.q @@ email.fts)\"\n\"Total runtime: 268.663 ms\"\n\nThe table contains text derived from emails and therefore its contents \nand the searches can vary wildly.\n\nTable construction as follows:\n\nCREATE TABLE email\n(\n email_id bigint NOT NULL DEFAULT \nnextval(('public.email_email_id_seq'::text)::regclass),\n send_to text NOT NULL DEFAULT ''::text,\n reply_from character varying(100) NOT NULL DEFAULT ''::character varying,\n cc text NOT NULL DEFAULT ''::text,\n bcc text NOT NULL DEFAULT ''::text,\n subject text NOT NULL DEFAULT ''::text,\n \"content\" text NOT NULL DEFAULT ''::text,\n time_tx_rx timestamp without time zone NOT NULL DEFAULT now(),\n fts tsvector,\n CONSTRAINT email_pkey PRIMARY KEY (email_id),\n)\nWITH (OIDS=FALSE);\n\n-- Index: email_fts_index\n\nCREATE INDEX email_fts_index\n ON email\n USING gist\n (fts);\n\nCREATE INDEX email_mailbox_id_idx\n ON email\n USING btree\n (mailbox_id);\n\n\n-- Trigger: fts_trigger on email\nCREATE TRIGGER fts_trigger\n BEFORE INSERT OR UPDATE\n ON email\n FOR EACH ROW\n EXECUTE PROCEDURE tsearch2('fts', 'send_to', 'reply_from', 'cc', \n'content', 'subject');\n\n", "msg_date": "Mon, 16 Jun 2008 18:55:43 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Tsearch2 Initial Search Speed" }, { "msg_contents": "On Monday 16 June 2008, Howard Cole <[email protected]> wrote:\n> Hi, I am looking to improve the initial query speed for the following\n> query:\n>\n> select email_id from email, to_tsquery('default','example') as q where\n> q@@fts;\n>\n> This is running on 8.2.4 on Windows Server 2K3.\n>\n> The initial output from explain analyse is as follows.\n>\n> \"Nested Loop (cost=8.45..76.70 rows=18 width=8) (actual\n> time=5776.347..27364.248 rows=14938 loops=1)\"\n> \" -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) (actual\n> time=0.023..0.024 rows=1 loops=1)\"\n> \" -> Bitmap Heap Scan on email (cost=8.45..76.46 rows=18 width=322)\n> (actual time=5776.314..27353.344 rows=14938 loops=1)\"\n> \" Filter: (q.q @@ email.fts)\"\n> \" -> 
Bitmap Index Scan on email_fts_index (cost=0.00..8.44\n> rows=18 width=0) (actual time=5763.355..5763.355 rows=15118 loops=1)\"\n> \" Index Cond: (q.q @@ email.fts)\"\n> \"Total runtime: 27369.091 ms\"\n>\n> Subsequent output is considerably faster. (I am guessing that is because\n> email_fts_index is cached.\n\nIt's because everything is cached, in particular the relevant rows from \nthe \"email\" table (accessing which took 22 of the original 27 seconds).\n\nThe plan looks good for what it's doing.\n\nI don't see that query getting much faster unless you could add a lot more \ncache RAM; 30K random IOs off disk is going to take a fair bit of time \nregardless of what you do. \n\n-- \nAlan\n", "msg_date": "Mon, 16 Jun 2008 11:24:57 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "Alan Hodgson wrote:\n> It's because everything is cached, in particular the relevant rows from \n> the \"email\" table (accessing which took 22 of the original 27 seconds).\n>\n> The plan looks good for what it's doing.\n>\n> I don't see that query getting much faster unless you could add a lot more \n> cache RAM; 30K random IOs off disk is going to take a fair bit of time \n> regardless of what you do. \n>\n> \n\nThanks Alan, I guessed that the caching was the difference, but I do not \nunderstand why there is a heap scan on the email table? The query seems \nto use the email_fts_index correctly, which only takes 6 seconds, why \ndoes it then need to scan the email table?\n\nSorry If I sound a bit stupid - I am not very experienced with the \nanalyse statement.\n", "msg_date": "Tue, 17 Jun 2008 10:54:09 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "I think I may have answered my own question partially, the problem may \nbe how I structure the query.\n\nI always structured my tsearch queries as follows following my initial \nread of the tsearch2 instructions...\n\nselect email_id from email, to_tsquery('default', 'howard') as q where \nq@@fts;\n\nHowever if I construct them in the following way, as stipulated in the \n8.3 documentation....\n\nselect email_id from email where fts@@to_tsquery('default','howard')\n\nThen the results are better due to the fact that the email table is not \nnecessarily scanned as can be seen from the two analyse statements:\n\nOriginal statement:\n\n\"Nested Loop (cost=4.40..65.08 rows=16 width=8)\"\n\" -> Function Scan on q (cost=0.00..0.01 rows=1 width=32)\"\n\" -> Bitmap Heap Scan on email (cost=4.40..64.87 rows=16 width=489)\"\n\" Filter: (email.fts @@ q.q)\"\n\" -> Bitmap Index Scan on email_fts_index (cost=0.00..4.40 \nrows=16 width=0)\"\n\" Index Cond: (email.fts @@ q.q)\"\n\nSecond statement:\n\n\"Bitmap Heap Scan on email (cost=4.40..64.91 rows=16 width=8)\"\n\" Filter: (fts @@ '''howard'''::tsquery)\"\n\" -> Bitmap Index Scan on email_fts_index (cost=0.00..4.40 rows=16 \nwidth=0)\"\n\" Index Cond: (fts @@ '''howard'''::tsquery)\"\n\nThis misses out the random access of the email table, turning my 27 \nsecond query into 6 seconds.\n\nI guess the construction of the first statement effectively stops the \nquery optimisation from working.\n\n", "msg_date": "Tue, 17 Jun 2008 11:54:12 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "On Tue, 17 Jun 2008, Howard Cole wrote:\n> Alan 
Hodgson wrote:\n>> It's because everything is cached, in particular the relevant rows from the \n>> \"email\" table (accessing which took 22 of the original 27 seconds).\n\n> Thanks Alan, I guessed that the caching was the difference, but I do not \n> understand why there is a heap scan on the email table? The query seems to \n> use the email_fts_index correctly, which only takes 6 seconds, why does it \n> then need to scan the email table?\n\nIt's not a sequential scan - that really would take a fair time. It's a \nbitmap heap scan - that is, it has built a bitmap of the rows needed by \nusing the index, and now it needs to fetch all those rows from the email \ntable. There's 14938 of them, and they're likely scattered all over the \ntable, so you'll probably have to do 14938 seeks on the disc. At 5ms a \npop, that would be 70 seconds, so count yourself lucky it only takes 22 \nseconds instead!\n\nIf you aren't actually interested in having all 14938 rows splurged at \nyou, try using the LIMIT keyword at the end of the query. That would make \nit run a bit faster, and would make sense if you only want to display the \nfirst twenty on a web page or something.\n\nMatthew\n\n-- \nFor every complex problem, there is a solution that is simple, neat, and wrong.\n -- H. L. Mencken \n", "msg_date": "Tue, 17 Jun 2008 12:00:24 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "On Tue, 17 Jun 2008, Howard Cole wrote:\n> I think I may have answered my own question partially, the problem may be how \n> I structure the query.\n>\n> Original statement:\n>\n> \"Nested Loop (cost=4.40..65.08 rows=16 width=8)\"\n> \" -> Function Scan on q (cost=0.00..0.01 rows=1 width=32)\"\n> \" -> Bitmap Heap Scan on email (cost=4.40..64.87 rows=16 width=489)\"\n> \" Filter: (email.fts @@ q.q)\"\n> \" -> Bitmap Index Scan on email_fts_index (cost=0.00..4.40 rows=16 width=0)\"\n> \" Index Cond: (email.fts @@ q.q)\"\n>\n> Second statement:\n>\n> \"Bitmap Heap Scan on email (cost=4.40..64.91 rows=16 width=8)\"\n> \" Filter: (fts @@ '''howard'''::tsquery)\"\n> \" -> Bitmap Index Scan on email_fts_index (cost=0.00..4.40 rows=16 width=0)\"\n> \" Index Cond: (fts @@ '''howard'''::tsquery)\"\n\nAs far as I can see, that shouldn't make any difference. Both queries \nstill do the bitmap heap scan, and have almost exactly the same cost.\n\nMatthew\n\n-- \nLord grant me patience, and I want it NOW!\n", "msg_date": "Tue, 17 Jun 2008 12:04:06 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "\n> As far as I can see, that shouldn't make any difference. Both queries \n> still do the bitmap heap scan, and have almost exactly the same cost.\n>\n> Matthew\n>\nYou may have a point there Matthew, they both appear to do a scan on the \nemail table (Why?). But for whatever reason, I swear the second method \nis significantly faster! 
If I run the new style query first, then the \noriginal style (to_tsquery as q) then the original style still takes \nlonger, even with the new style cached!\n\nIncidentally, how can I clear the cache in between queries?\n", "msg_date": "Tue, 17 Jun 2008 12:23:29 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "On Tue, 17 Jun 2008, Howard Cole wrote:\n> They both appear to do a scan on the email table (Why?).\n\nThe indexes don't contain copies of the row data. They only contain \npointers to the rows in the table. So once the index has been consulted, \nPostgres still needs to look at the table to fetch the actual rows. Of \ncourse, it only needs to bother looking where the index points, and that \nis the benefit of an index.\n\nMatthew\n\n-- \nI've run DOOM more in the last few days than I have the last few\nmonths. I just love debugging ;-) -- Linus Torvalds\n", "msg_date": "Tue, 17 Jun 2008 12:30:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Tue, 17 Jun 2008, Howard Cole wrote:\n>> They both appear to do a scan on the email table (Why?).\n>\n> The indexes don't contain copies of the row data. They only contain \n> pointers to the rows in the table. So once the index has been \n> consulted, Postgres still needs to look at the table to fetch the \n> actual rows. Of course, it only needs to bother looking where the \n> index points, and that is the benefit of an index.\n>\n> Matthew\n>\nThanks for your patience with me here Matthew, But what I don't \nunderstand is why it needs to do a scan on email. If I do a query that \nuses another index, then it uses the index only and does not scan the \nemail table. The scan on the fts index takes 6 seconds, which presumably \nreturns email_id's (the email_id being the primary key) - what does it \nthen need from the email table that takes 22 seconds?\n\ne.g.\n\ntriohq=> explain select email_id from email where email_directory_id=1;\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n-------------\n Index Scan using email_email_directory_id_idx on email \n(cost=0.00..129.01 rows\n=35 width=8)\n Index Cond: (email_directory_id = 1)\n(2 rows)\n", "msg_date": "Tue, 17 Jun 2008 12:59:53 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "On Tue, 17 Jun 2008, Howard Cole wrote:\n> If I do a query that uses another index, then it uses the index only and \n> does not scan the email table.\n\nNot true. It only looks a little bit like that from the explain output. \nHowever, if you look closely:\n\n> Index Scan using email_email_directory_id_idx on email (cost=0.00..129.01 rows=35 width=8)\n> Index Cond: (email_directory_id = 1)\n> (2 rows)\n\nIt's a scan *using* the index, but *on* the table \"email\". This index scan \nis having to read the email table too.\n\n> The scan on the fts index takes 6 seconds, which presumably returns \n> email_id's (the email_id being the primary key) - what does it then need \n> from the email table that takes 22 seconds?\n\nActually, the index returns page numbers in the table on disc which may \ncontain one or more rows that are relevant. 
Postgres has to fetch the \nwhole row to find out the email_id and any other information, including \nwhether the row is visible in your current transaction (concurrency \ncontrol complicates it all). Just having a page number isn't much use to \nyou!\n\nMatthew\n\n-- \nFirst law of computing: Anything can go wro\nsig: Segmentation fault. core dumped.\n", "msg_date": "Tue, 17 Jun 2008 14:13:27 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "\n> Actually, the index returns page numbers in the table on disc which \n> may contain one or more rows that are relevant. Postgres has to fetch \n> the whole row to find out the email_id and any other information, \n> including whether the row is visible in your current transaction \n> (concurrency control complicates it all). Just having a page number \n> isn't much use to you!\n>\n> Matthew\n>\nI learn something new every day.\n\nThanks Matthew.\n", "msg_date": "Tue, 17 Jun 2008 14:28:23 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "On Tuesday 17 June 2008, Howard Cole <[email protected]> wrote:\n> This misses out the random access of the email table, turning my 27\n> second query into 6 seconds.\n\nIt took less time because it retrieved a lot less data - it still has to \nlook at the table.\n\n-- \nAlan\n", "msg_date": "Tue, 17 Jun 2008 08:55:21 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "On Tuesday 17 June 2008, Howard Cole <[email protected]> wrote:\n> Incidentally, how can I clear the cache in between queries?\n\nStop PostgreSQL, unmount the filesystem it's on, remount it, restart \nPostgreSQL. Works under Linux.\n\nIf it's on a filesystem you can't unmount hot, you'll need to reboot.\n\n-- \nAlan\n", "msg_date": "Tue, 17 Jun 2008 08:56:32 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "On Tue, 17 Jun 2008, Alan Hodgson wrote:\n> On Tuesday 17 June 2008, Howard Cole <[email protected]> wrote:\n>> Incidentally, how can I clear the cache in between queries?\n>\n> Stop PostgreSQL, unmount the filesystem it's on, remount it, restart\n> PostgreSQL. Works under Linux.\n>\n> If it's on a filesystem you can't unmount hot, you'll need to reboot.\n\nNot true - on recent Linux kernels, you can drop the OS cache by running\n\necho \"1\" >/proc/sys/vm/drop_caches\n\nas root. You'll still need to restart Postgres to drop its cache too.\n\nMatthew\n\n-- \nRichards' Laws of Data Security:\n 1. Don't buy a computer.\n 2. If you must buy a computer, don't turn it on.\n", "msg_date": "Tue, 17 Jun 2008 17:04:07 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "\n>\n> Actually, the index returns page numbers in the table on disc which \n> may contain one or more rows that are relevant. Postgres has to fetch \n> the whole row to find out the email_id and any other information, \n> including whether the row is visible in your current transaction \n> (concurrency control complicates it all). 
Just having a page number \n> isn't much use to you!\n>\n> Matthew\n>\nOut of interest, if I could create a multicolumn index with both the \nprimary key and the fts key (I don't think I can create a multi-column \nindex using GIST with both the email_id and the fts field), would this \nreduce access to the table due to the primary key being part of the index?\n\nMore importantly, are there other ways that I can improve performance on \nthis? I am guessing that a lot of the problem is that the email table is \nso big. If I cut out some of the text fields that are not needed in the \nsearch and put them in another table, presumably the size of the table \nwill be reduced to a point where it will reduce the number of disk hits \nand speed the query up.\n\nSo I could split the table into two parts:\n\ncreate table email_part2 (\nemail_id int8 references email_part1 (email_id),\nfts ...,\nemail_directory_id ...,\n)\n\ncreate table email_part1(\nemail_id serial8 primary key,\ncc text,\nbcc text,\n...\n)\n\nand the query will be\nselect email_id from email_part2 where to_tsquery('default', 'howard') \n@@ fts;\n", "msg_date": "Wed, 18 Jun 2008 11:40:11 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "On Wed, 18 Jun 2008, Howard Cole wrote:\n> Out of interest, if I could create a multicolumn index with both the primary \n> key and the fts key (I don't think I can create a multi-column index using \n> GIST with both the email_id and the fts field), would this reduce access to \n> the table due to the primary key being part of the index?\n\nUnfortunately not, since the indexes do not contain information on whether \na particular row is visible in your current transaction. Like I said, \nconcurrency control really complicates things!\n\n> More importantly, are there other ways that I can improve performance on \n> this? I am guessing that a lot of the problem is that the email table is so \n> big. If I cut out some of the text fields that are not needed in the search \n> and put them in another table, presumably the size of the table will be \n> reduced to a point where it will reduce the number of disk hits and speed the \n> query up.\n\nGood idea. Note that Postgres is already doing this to some extent with \nTOAST - read \nhttp://www.postgresql.org/docs/8.3/interactive/storage-toast.html - \nunfortunately, there doesn't seem to be an option to always move \nparticular columns out to TOAST. Your idea will produce an even smaller \ntable. However, are email_ids all that you want from the query?\n\nMatthew\n\n-- \nOkay, I'm weird! But I'm saving up to be eccentric.\n", "msg_date": "Wed, 18 Jun 2008 13:38:42 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "\n> Good idea. Note that Postgres is already doing this to some extent \n> with TOAST - read \n> http://www.postgresql.org/docs/8.3/interactive/storage-toast.html - \n> unfortunately, there doesn't seem to be an option to always move \n> particular columns out to TOAST. Your idea will produce an even \n> smaller table. However, are email_ids all that you want from the query?\n>\n> Matthew\n>\nAs you point out - I will need more then the email_ids in the query, but \nif I remove just the content, to, cc fields then the size of the table \nshould shrink dramatically. 
Just remains to be seen if the TOAST has \nalready done that optimisation for me.\n\nAgain. Thanks Matthew - I owe you a beer.\n", "msg_date": "Thu, 19 Jun 2008 10:03:37 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 Initial Search Speed" }, { "msg_contents": "PFC wrote:\n>> Hi, I am looking to improve the initial query speed for the following \n>> query:\n>\n> Try Xapian full text search engine, it behaves much better than \n> tsearch when the dataset exceeds your memory cache size.\n>\n> __________ Information from ESET NOD32 Antivirus, version of virus \n> signature database 3202 (20080620) __________\n>\n> The message was checked by ESET NOD32 Antivirus.\n>\n> http://www.eset.com\n>\n>\n>\nThanks for the heads-up PFC, but I prefer the tsearch2 license.\n", "msg_date": "Fri, 20 Jun 2008 14:10:49 +0100", "msg_from": "Howard Cole <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tsearch2 Initial Search Speed" } ]
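A sketch of the split Howard proposes above, with invented constraint and index names. The idea is that the table visited by the bitmap heap scan holds only the id and the tsvector, so far fewer and smaller pages have to be fetched; the wide text columns stay in the original email table and are joined in only for the matching rows.

CREATE TABLE email_search (
    email_id  int8 PRIMARY KEY REFERENCES email (email_id),
    fts       tsvector
);

CREATE INDEX email_search_fts_idx ON email_search USING gist (fts);

-- The heap behind the bitmap scan is now much narrower, and LIMIT keeps the
-- number of heap fetches down when only one page of results is displayed:
SELECT e.email_id, e.subject
FROM email_search s
JOIN email e USING (email_id)
WHERE s.fts @@ to_tsquery('default', 'howard')
LIMIT 20;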
[ { "msg_contents": " >\n > Date: Mon, 16 Jun 2008 11:06:44 +0200 (CEST)\n > From: \"[email protected]\" <[email protected]>\n > To: [email protected]\n > Subject: function difference(geometry,geometry) is SLOW!\n > Message-ID:\n > <10574478.390451213607204584.JavaMail.defaultUser@defaultHost>\n >\n > Hi,\n > In my pgsql procedure, i use the function\n >\n > geometryDiff := difference\n > (geometry1,geometry2);\n >\n > but this function is very slow!!!\n > What can I do to\n > speed this function?\n > Exists a special index for it?\n >\n > Thanks in advance!\n > Luke\n\nHi,\n\nthis is a postgis function. Postgis is an independent project\nand you might want to ask there:\n\nhttp://www.postgis.org/mailman/listinfo/postgis-users\n\nor\n\nhttp://www.faunalia.com/cgi-bin/mailman/listinfo/gfoss\n(italian).\n\nAnyway, as long as you just compute the difference between\n2 given shapes, no index can help you. Indices speed up\nsearches...\n\nBye,\nChris.\n\n\n", "msg_date": "Mon, 16 Jun 2008 21:26:14 +0200", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] function difference(geometry,\n\tgeometry) is SLOW!" } ]
[ { "msg_contents": "Dear All,\n\nAm going to do migration of database from one version to another., is there\nany article or any other document explaining the possibilities and other\nthings.\n\nFurther Explanation:\n\nI have a database in postgres X.Y which has around 90 tables, and lot of\ndata in it.\nIn the next version of that product, i had some more tables, so how to\nmigrate that,. there may be 150 tables., in that 90 tables, 70 may be the\nsame, 20 got deleted, and 80 may be new., i want the 70 tables to have same\ndata as it is.,\n\nHow to do this migration ??? any ways ???\n\nDear All,Am going to do migration of database from one version to another., is there any article or any other document explaining the possibilities and other things.Further Explanation:I have a database in postgres X.Y which has around 90 tables, and lot of data in it.\nIn the next version of that product, i had some more tables, so how to migrate that,. there may be 150 tables., in that 90 tables, 70 may be the same, 20 got deleted, and 80 may be new., i want the 70 tables to have same data as it is.,\nHow to do this migration ??? any ways ???", "msg_date": "Tue, 17 Jun 2008 18:43:18 +0530", "msg_from": "\"sathiya psql\" <[email protected]>", "msg_from_op": true, "msg_subject": "Migration Articles.. ???" }, { "msg_contents": "On Tue, 17 Jun 2008, sathiya psql wrote:\n> I have a database in postgres X.Y which has around 90 tables, and lot of\n> data in it.\n> In the next version of that product, i had some more tables, so how to\n> migrate that,. there may be 150 tables., in that 90 tables, 70 may be the\n> same, 20 got deleted, and 80 may be new., i want the 70 tables to have same\n> data as it is.,\n\nPlease do not cross-post. This question has nothing to do with \nperformance. (Cross-posting answer so everyone else doesn't answer the \nsame.)\n\nYou'll want to dump the source database selectively, and then reload the \ndump into a new database. RTFM on pg_dump, especially the \"-t\" and \"-T\" \noptions.\n\nMatthew\n\n-- \nAll of this sounds mildly turgid and messy and confusing... but what the\nheck. That's what programming's all about, really\n -- Computer Science Lecturer\n", "msg_date": "Tue, 17 Jun 2008 14:23:00 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Migration Articles.. ???" } ]
[ { "msg_contents": "Hi,\n\nI need to install a 8.3 database and was wondering which hardware would be \nsufficient to have good performances (less than 30s for� slowest select).\n\nDatabase size: 25 Go /year, 5 years of history\nOne main table containing 40 million lines per year.\nBatch inserts of 100000 lines. Very very few deletes, few updates.\n\n30 other tables, 4 levels of hierarchy, containing from 10 lines up to 20000 \nlines.\n5 of them have forein keys on the main table.\n\nI will use table partitionning on the year column.\n\nStatements will mainly do sums on the main table, grouped by whatever column \nof the database (3-5 joined tables, or join on join), with some criterions \nthat may vary, lots of \"joined varchar in ('a','b',...,'z')\".\nIt's almost impossible to predict what users will do via the webapplication \nthat queries this database: almost all select, join, group by, where... \npossibilities are available.\n\nUp to 4 simultaneous users.\n\nI'm planning to host it on a quad xeon 2.66Ghz with 8Go of DDR2, and a dual \n(RAID1) SATA2 750Go HD.\nPerharps with another HD for indexes.\n\nDo you think it will be enough ?\nIs another RAID for better performances a minimum requirement ?\nWill a secondary HD for indexes help ?\n\nWhich OS would you use ? (knowing that there will be a JDK 1.6 installed \ntoo)\n\nWith 5 millions of lines, the same application runs quite fast on windows \n2000 on a single P4 2.8 GHz (very few statements last more than 10s, mostly \nwhen concurrent statements are made). Each statement consumes 100% of the \nCPU.\n\n\nthanks for advices.\n\n\n", "msg_date": "Tue, 17 Jun 2008 15:38:59 +0200", "msg_from": "\"Lionel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Which hardware ?" }, { "msg_contents": "On Tue, Jun 17, 2008 at 7:38 AM, Lionel <[email protected]> wrote:\n> Hi,\n>\n> I need to install a 8.3 database and was wondering which hardware would be\n> sufficient to have good performances (less than 30s for² slowest select).\n>\n> Database size: 25 Go /year, 5 years of history\n> One main table containing 40 million lines per year.\n> Batch inserts of 100000 lines. Very very few deletes, few updates.\n>\n> 30 other tables, 4 levels of hierarchy, containing from 10 lines up to 20000\n> lines.\n> 5 of them have forein keys on the main table.\n>\n> I will use table partitionning on the year column.\n>\n> Statements will mainly do sums on the main table, grouped by whatever column\n> of the database (3-5 joined tables, or join on join), with some criterions\n> that may vary, lots of \"joined varchar in ('a','b',...,'z')\".\n> It's almost impossible to predict what users will do via the webapplication\n> that queries this database: almost all select, join, group by, where...\n> possibilities are available.\n>\n> Up to 4 simultaneous users.\n>\n> I'm planning to host it on a quad xeon 2.66Ghz with 8Go of DDR2, and a dual\n> (RAID1) SATA2 750Go HD.\n> Perharps with another HD for indexes.\n>\n> Do you think it will be enough ?\n> Is another RAID for better performances a minimum requirement ?\n> Will a secondary HD for indexes help ?\n\nMore drives, all in the same RAID-10 setup. For reporting like this\nwriting speed often isn't that critical, so you are often better off\nwith software RAID-10 than using a mediocre hardware RAID controller\n(most adapatecs, low end LSI, etc...)\n\nYou'd be surprised what going from a 2 disk RAID1 to a 4 disk RAID10\ncan do in these circumstances. 
Going up to 6, 8, 10 or more disks\nreally makes a difference.\n\n> Which OS would you use ? (knowing that there will be a JDK 1.6 installed\n> too)\n\nI'd use RHEL5 because it's what I'm familiar with. Any stable flavor\nof linux or FreeBSD7 are good performance choices if you know how to\ndrive them.\n\n> With 5 millions of lines, the same application runs quite fast on windows\n> 2000 on a single P4 2.8 GHz (very few statements last more than 10s, mostly\n> when concurrent statements are made). Each statement consumes 100% of the\n> CPU.\n\nThat statement about concurrent statements REALLY sells me on the idea\nof a many disk RAID10 here. I'd take that over quad cores for what\nyou're doing any day. Not that I'd turn down quad cores here either.\n:)\n", "msg_date": "Tue, 17 Jun 2008 08:23:22 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, Jun 17, 2008 at 03:38:59PM +0200, Lionel wrote:\n> Hi,\n> \n> I need to install a 8.3 database and was wondering which hardware would be \n> sufficient to have good performances (less than 30s for� slowest select).\n\n> Statements will mainly do sums on the main table, grouped by whatever column \n> of the database (3-5 joined tables, or join on join), with some criterions \n> that may vary, lots of \"joined varchar in ('a','b',...,'z')\".\n> It's almost impossible to predict what users will do via the webapplication \n> that queries this database: almost all select, join, group by, where... \n> possibilities are available.\n\nI'm not sure that I have any specific recommendation to make in the\nface of such sweeping requirements. But I'd say you need to make I/O\ncheap, which means piles of memory and extremely fast disk\nsubsystems.\n\nAlso, there's another important question (which never gets asked in\nthese discussions), which is, \"How much is the performance worth to\nyou?\" If the last 10% of users get something longer than 30s, but\nless than 40s, and they will pay no more to get the extra 10s response\ntime, then it's worth nothing to you, and you shouldn't fix it.\n \n> Up to 4 simultaneous users.\n\nYou won't need lots of processer, then.\n \n> I'm planning to host it on a quad xeon 2.66Ghz with 8Go of DDR2, and a dual \n> (RAID1) SATA2 750Go HD.\n> Perharps with another HD for indexes.\n\nHow big's the database? If you can have enough memory to hold the\nwhole thing, including all indexes, in memory, that's what you want.\nApart from that, \"dual SATA2\" is probably underpowered. But. . .\n \n> Which OS would you use ? (knowing that there will be a JDK 1.6 installed \n> too)\n\n. . .I think this is the real mistake. Get a separate database box.\nIt's approximately impossible to tune a box correctly for both your\napplication and your database, in my experience.\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Tue, 17 Jun 2008 10:25:02 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, Jun 17, 2008 at 8:25 AM, Andrew Sullivan <[email protected]> wrote:\n> On Tue, Jun 17, 2008 at 03:38:59PM +0200, Lionel wrote:\n\n>> Which OS would you use ? (knowing that there will be a JDK 1.6 installed\n>> too)\n>\n> . . .I think this is the real mistake. 
Get a separate database box.\n> It's approximately impossible to tune a box correctly for both your\n> application and your database, in my experience.\n\nHaving had to install jvm 1.6 for a few weird admin bits we were using\nin the past, I didn't even think the same thing as you on this. Good\ncatch.\n", "msg_date": "Tue, 17 Jun 2008 08:31:58 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, 17 Jun 2008, Lionel wrote:\n> I need to install a 8.3 database and was wondering which hardware would be\n> sufficient to have good performances (less than 30s for� slowest select).\n\n> It's almost impossible to predict what users will do via the webapplication\n> that queries this database: almost all select, join, group by, where...\n> possibilities are available.\n\nWell, Scott has given you some good pointers on how to make a fast system. \nHowever, your original question (\"is this fast enough\") is impossible to \nanswer, especially if the users are allowed to run absolutely anything \nthey want. I bet you I could craft a query that takes more than 30 seconds \nregardless of how fast you make your system.\n\nHaving said that, I'll add the suggestion that you should put as much RAM \nin the box as possible. It can only help. As others have said, if you only \nhave four users, then CPU power isn't going to be such a problem, and \ngiven that, I'd disagree with Andrew and say as long as you have plenty of \nRAM, Java can play well with a database on the same box. Depends what it \nis doing, of course.\n\nMatthew\n\n-- \nTo be or not to be -- Shakespeare\nTo do is to be -- Nietzsche\nTo be is to do -- Sartre\nDo be do be do -- Sinatra", "msg_date": "Tue, 17 Jun 2008 15:33:40 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "Andrew Sullivan wrote:\n> You won't need lots of processer, then.\n\ncan't find less than quad core for this price range...\n\n> How big's the database?\n\nwith 20 millions of rows, the main table is 3.5 Go on win XP.\nWith 8 Go of indexes.\n\nI estimate the whole database around 30 Go / year\n\n> If you can have enough memory to hold the\n> whole thing, including all indexes, in memory, that's what you want.\n> Apart from that, \"dual SATA2\" is probably underpowered. But. . .\n\nRAID is twice more expansive.\n(600euros/month for a 5x750Go SATA2 with 12Gb of ram and unnecessary 2x quad \ncore)\n\ndidn't find any RAID 10 \"not too expansive\" dedicated server.\n\nIf this setup is twice as fast, I can afford it. But if it a 30sec VS \n40sec...I'm not sure my customer will pay.\n\n>> Which OS would you use ? (knowing that there will be a JDK 1.6\n>> installed too)\n>\n> . . .I think this is the real mistake. Get a separate database box.\n> It's approximately impossible to tune a box correctly for both your\n> application and your database, in my experience.\n\nMy tomcat webapp is well coded and consumes nearly nothing.\nOn such powerful hardware, I prefer to run both on the same server.\nI could eventually run it on a different server, much less powerfull, but \nit's not on the same network, I guess this would be an issue. \n\n\n", "msg_date": "Tue, 17 Jun 2008 16:49:17 +0200", "msg_from": "\"Lionel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, 17 Jun 2008, Andrew Sullivan wrote:\n\n>> Which OS would you use ? 
(knowing that there will be a JDK 1.6 installed\n>> too)\n>\n> . . .I think this is the real mistake. Get a separate database box.\n> It's approximately impossible to tune a box correctly for both your\n> application and your database, in my experience.\n\nI can't remember the last time I deployed a PG box that didn't have a Java \napp or three on it, too. You've got even odds that putting it a separate \nsystem will even be a improvement. Yes, if the Java app is a pig and the \nmachine doesn't have enough resources, separating it out to another system \nwill help. But there are plenty of these buggers that will end up so much \nslower from the additional network latency that it's a net loss (depends \non how the app groups its requests for rows).\n\nIf you know enough about Java to watch things like how much memory the \nJVMs are taking up, I wouldn't hesitate to put them all on the same \nmachine. Considering that Lionel's system seems pretty overpowered for \nwhat he's doing--runs plenty fast on a much slower system, enough RAM to \nhold a large portion of the primary tables and database, all batch updates \nthat don't really need a good RAID setup--I'd say \"looks good\" here and \nrecommend he just follow the plan he outlined. Just watch the system with \ntop for a bit under load to make sure the Java processes are staying under \ncontrol.\n\nAs for OS, a RHEL5 or clone like CentOS should work fine here, which is \nmore appropriate depends on your support requirements. I would recommend \nagainst using FreeBSD as it's not the friendliest Java platform, and the \nadditional complexity of Solaris seems like overkill for your app. \nBasically, evem though it's available for more of them, I only consider \ndeploying a Java app on one of the mainstream platforms listed at \nhttp://www.java.com/en/download/manual.jsp right now because those are the \nmature releases.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 17 Jun 2008 11:32:16 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, Jun 17, 2008 at 04:49:17PM +0200, Lionel wrote:\n> My tomcat webapp is well coded and consumes nearly nothing.\n\nIf I were ever inclined to say, \"Nonsense,\" about code I've never\nseen, this is probably the occasion on which I'd do it. A running JVM\nis necessarily going to use some memory, and that is memory use that\nyou won't be able to factor out properly when developing models of\nyour database system performance.\n\n> I could eventually run it on a different server, much less powerfull, but \n> it's not on the same network, I guess this would be an issue. \n\nThe power of the system is hard to know about in the context (with\nonly 8Go of memory, I don't consider this a powerful box at all,\nnote). But why wouldn't it be on the same network? You're using the\nnetwork stack anyway, note: JVMs can't go over domain sockets.\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Tue, 17 Jun 2008 11:42:15 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" 
}, { "msg_contents": "On Tue, Jun 17, 2008 at 9:32 AM, Greg Smith <[email protected]> wrote:\n\n> Considering that Lionel's system seems pretty overpowered for what he's\n> doing--runs plenty fast on a much slower system, enough RAM to hold a large\n> portion of the primary tables and database, all batch updates that don't\n> really need a good RAID setup--I'd say \"looks good\" here and recommend he\n> just follow the plan he outlined. Just watch the system with top for a bit\n> under load to make sure the Java processes are staying under control.\n\nIn the original post he mentioned that he had 5 years of data at about\n25G / year.\n\nWith 125G of data, it's likely that if most queries are on recent data\nit'll be in RAM, but anything that hits older data will NOT have that\nluxury. Which is why I recommended RAID-10. It doesn't have to be on\na $1200 card with 44 disks or something, but even 4 disks in a sw\nRAID-10 will be noticeably faster (about 2x) than a simple RAID-1 at\nhitting that old data.\n\nWe had a reporting server with about 80G of data on a machine with 4G\nram last place I worked, and it could take it a few extra seconds to\nhit the old data, but the SW RAID-10 on it made it much faster at\nreporting than it would have been with a single disk.\n", "msg_date": "Tue, 17 Jun 2008 09:42:34 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, Jun 17, 2008 at 9:42 AM, Andrew Sullivan <[email protected]> wrote:\n> On Tue, Jun 17, 2008 at 04:49:17PM +0200, Lionel wrote:\n>> My tomcat webapp is well coded and consumes nearly nothing.\n>\n> If I were ever inclined to say, \"Nonsense,\" about code I've never\n> seen, this is probably the occasion on which I'd do it. A running JVM\n> is necessarily going to use some memory, and that is memory use that\n> you won't be able to factor out properly when developing models of\n> your database system performance.\n\nBut if that amount of memory is 256 Megs and it only ever acts as a\ncontrol panel or data access point, it's probably not a huge issue.\nIf it's 2 Gig it's another issue. It's all about scale. The real\nperformance hog for me on all in one boxes has been perl / fastcgi\nsetups.\n\n> The power of the system is hard to know about in the context (with\n> only 8Go of memory, I don't consider this a powerful box at all,\n> note).\n\nI always think of main memory in terms of how high a cache hit rate it\ncan get me. If 8G gets you a 50% hit rate, and 16G gets you a 95% hit\nrate, then 16G is the way to go. But if 8G gets you to 75% and 32G\ngets you to 79% because of your usage patterns (the world isn't always\nbell curve shaped) then 8G is plenty and it's time to work on faster\ndisk subsystems if you need more performance.\n", "msg_date": "Tue, 17 Jun 2008 10:22:14 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, 17 Jun 2008, Andrew Sullivan wrote:\n\n> A running JVM is necessarily going to use some memory, and that is \n> memory use that you won't be able to factor out properly when developing \n> models of your database system performance.\n\nNow you've wandered into pure FUD. Tuning maximum memory usage on a Java \napp so you can model it is straightforward (albeit a little confusing at \nfirst), and in most cases you can just sample it periodically to get a \ngood enough estimate for database tuning purposes. 
JVMs let you adjust \nmaximum memory use with -Xmx , and if anything the bigger problem I run \ninto is that using too much memory hits that limit and crashes Java long \nbefore it becomes a hazard to the database.\n\nThis is a system with 8GB of RAM here; having some Tomcat instances \nco-existing with the database when there's that much room to work is not \nthat hard.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 17 Jun 2008 12:32:10 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, 17 Jun 2008, Scott Marlowe wrote:\n\n> We had a reporting server with about 80G of data on a machine with 4G\n> ram last place I worked, and it could take it a few extra seconds to\n> hit the old data, but the SW RAID-10 on it made it much faster at\n> reporting than it would have been with a single disk.\n\nI agree with your statement above, that query time could likely be dropped \na few seconds with a better disk setup. I just question whether that's \nnecessary given the performance target here.\n\nRight now the app is running on an underpowered Windows box and is \nreturning results in around 10s, on a sample data set that sounds like 1/8 \nof a year worth of data (1/40 of the total). It is seemingly CPU bound \nwith not enough processor to handle concurrent queries being the source of \nthe worst-case behavior. The target is keeping that <30s on more powerful \nhardware, with at least 6X as much processor power and a more efficient \nOS, while using yearly partitions to keep the amount of data to juggle at \nonce under control. That seems reasonable to me, and while better disks \nwould be nice I don't see any evidence they're really needed here. This \napplication sounds a batch processing/reporting one where plus or minus a \nfew seconds doesn't have a lot of business value.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 17 Jun 2008 12:56:26 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, Jun 17, 2008 at 10:56 AM, Greg Smith <[email protected]> wrote:\n> On Tue, 17 Jun 2008, Scott Marlowe wrote:\n>\n>> We had a reporting server with about 80G of data on a machine with 4G\n>> ram last place I worked, and it could take it a few extra seconds to\n>> hit the old data, but the SW RAID-10 on it made it much faster at\n>> reporting than it would have been with a single disk.\n>\n> I agree with your statement above, that query time could likely be dropped a\n> few seconds with a better disk setup. I just question whether that's\n> necessary given the performance target here.\n>\n> Right now the app is running on an underpowered Windows box and is returning\n> results in around 10s, on a sample data set that sounds like 1/8 of a year\n> worth of data (1/40 of the total). It is seemingly CPU bound with not\n> enough processor to handle concurrent queries being the source of the\n> worst-case behavior. The target is keeping that <30s on more powerful\n> hardware, with at least 6X as much processor power and a more efficient OS,\n> while using yearly partitions to keep the amount of data to juggle at once\n> under control. That seems reasonable to me, and while better disks would be\n> nice I don't see any evidence they're really needed here. 
This application\n> sounds a batch processing/reporting one where plus or minus a few seconds\n> doesn't have a lot of business value.\n\nI think you're making a big assumption that this is CPU bound. And it\nmay be that when all the queries are operating on current data that it\nis. But as soon as a few ugly queries fire that need to read tens of\ngigs of data off the drives, then you'll start to switch to I/O bound\nand the system will slow a lot.\n\nWe had a single drive box doing work on an 80G set that was just fine\nwith the most recent bits. Until I ran a report that ran across the\nlast year instead of the last two days, and took 2 hours to run.\n\nAll the queries that had run really quickly on all the recent data\nsuddenly were going from 1 or 2 seconds to 2 or 3 minutes. And I'd\nhave to kill my reporting query.\n\nMoved it to the same exact hardware but with a 4 disc RAID-10 and the\nlittle queries stayed 1-2 seconds while th reporting queries were cut\ndown by factors of about 4 to 10. RAID-1 will be somewhere between\nthem I'd imagine. RAID-10 has an amazing ability to handle parallel\naccesses without falling over performance-wise.\n\nYou're absolutely right though, we really need to know the value of\nfast performance here.\n\nIf you're monitoring industrial systems you need fast enough response\nto spot problems before they escalate to disasters.\n\nIf you're running aggregations of numbers used for filling out\nquarterly reports, not so much.\n", "msg_date": "Tue, 17 Jun 2008 11:07:31 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "\"Scott Marlowe\" wrote:\n> You're absolutely right though, we really need to know the value of\n> fast performance here.\n\nthe main problem is that my customers are used to have their reporting after \nfew seconds.\nThey want do have 10 times more data but still have the same speed, which \nis, I think, quite impossible.\n\n> If you're running aggregations of numbers used for filling out\n> quarterly reports, not so much.\n\nThe application is used to analyse products sales behaviour, display charts, \nperform comparisons, study progression...\n10-40 seconds seems to be a quite good performance.\nMore than 1 minute will be too slow (meaning they won't pay for that).\n\nI did some test with a 20 millions lines database on a single disk dual core \n2GB win XP system (default postgresql config), most of the time is spent in \nI/O: 50-100 secs for statements that scan 6 millions of lines, which will \nhappen. Almost no CPU activity.\n\nSo here is the next question: 4 disks RAID10 (did not find a french web host \nyet) or 5 disk RAID5 (found at 600euros/month) ?\nI don't want to have any RAID issue...\nI did not have any problem with my basic RAID1 since many years, and don't \nwant that to change. \n\n\n", "msg_date": "Tue, 17 Jun 2008 19:59:45 +0200", "msg_from": "\"Lionel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which hardware ?" 
}, { "msg_contents": "On Tue, Jun 17, 2008 at 11:59 AM, Lionel <[email protected]> wrote:\n> \"Scott Marlowe\" wrote:\n>> You're absolutely right though, we really need to know the value of\n>> fast performance here.\n>\n> the main problem is that my customers are used to have their reporting after\n> few seconds.\n> They want do have 10 times more data but still have the same speed, which\n> is, I think, quite impossible.\n>\n>> If you're running aggregations of numbers used for filling out\n>> quarterly reports, not so much.\n>\n> The application is used to analyse products sales behaviour, display charts,\n> perform comparisons, study progression...\n> 10-40 seconds seems to be a quite good performance.\n> More than 1 minute will be too slow (meaning they won't pay for that).\n>\n> I did some test with a 20 millions lines database on a single disk dual core\n> 2GB win XP system (default postgresql config), most of the time is spent in\n> I/O: 50-100 secs for statements that scan 6 millions of lines, which will\n> happen. Almost no CPU activity.\n>\n> So here is the next question: 4 disks RAID10 (did not find a french web host\n> yet) or 5 disk RAID5 (found at 600euros/month) ?\n> I don't want to have any RAID issue...\n> I did not have any problem with my basic RAID1 since many years, and don't\n> want that to change.\n\nDo you have root access on your servers? then just ask for 5 disks\nwith one holding the OS / Apps and you'll do the rest. Software RAID\nis probably a good fit for cheap right now.\n\nIf you can set it up yourself, you might be best off with >2 disk\nRAID-1. 5 750G disks in a RAID-1 yields 750G of storage (duh) but\nallows for five different readers to operate without the heads having\nto seek. large amounts of data can be read at a medium speed from a\nRAID-1 like this. But most RAID implementations don't aggregate\nbandwidth for RAID-1.\n\nThey do for RAID-0. So, having a huge RAID-0 zero array allows for\nreading a large chunk of data really fast from all disks at once.\n\nRAID1+0 gives you the ability to tune this in either direction. But\nthe standard config of a 4 disk setup (striping two mirrors, each made\nfrom two disks, is a good compromise to start with. Average read\nspeed of array is doubled, and the ability to have two reads not\nconflict helps too.\n\nRAID5 is a comproise to provide the most storage while having mediocre\nperformance or, when degraded, horrifficaly poor performance.\n\nHard drives are cheap, hosting not as much.\n\nAlso, always look at optimizing their queries. A lot of analysis is\ndone by brute force queries that rewritten intelligently suddenly run\nin minutes not hours. or seconds not minutes.\n", "msg_date": "Tue, 17 Jun 2008 12:28:14 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "On Tue, 17 Jun 2008, Lionel wrote:\n\n> I did some test with a 20 millions lines database on a single disk dual core\n> 2GB win XP system (default postgresql config), most of the time is spent in\n> I/O: 50-100 secs for statements that scan 6 millions of lines, which will\n> happen. Almost no CPU activity.\n\nI hope you're aware that the default config is awful, and there are all \nsorts of possible causes for heavy I/O churn that might improve if you \nsetup the postgresql.conf file to use the server's resources more \naggressively (the default is setup for machines with a very small amount \nof RAM). 
There are lots of links to articles that cover the various areas \nyou might improve at \nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 17 Jun 2008 17:15:14 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "\n\n----------------------------------------\n> From: [email protected]\n> Subject: [PERFORM] Which hardware ?\n> Date: Tue, 17 Jun 2008 15:38:59 +0200\n> To: [email protected]\n> \n> Hi,\n> \n> I need to install a 8.3 database and was wondering which hardware would be \n> sufficient to have good performances (less than 30s for² slowest select).\n> \n> Database size: 25 Go /year, 5 years of history\n> One main table containing 40 million lines per year.\n> Batch inserts of 100000 lines. Very very few deletes, few updates.\n> \n> 30 other tables, 4 levels of hierarchy, containing from 10 lines up to 20000 \n> lines.\n> 5 of them have forein keys on the main table.\n> \n> I will use table partitionning on the year column.\n> \n> Statements will mainly do sums on the main table, grouped by whatever column \n> of the database (3-5 joined tables, or join on join), with some criterions \n> that may vary, lots of \"joined varchar in ('a','b',...,'z')\".\n> It's almost impossible to predict what users will do via the webapplication \n> that queries this database: almost all select, join, group by, where... \n> possibilities are available.\n> \n> Up to 4 simultaneous users.\n> \n> I'm planning to host it on a quad xeon 2.66Ghz with 8Go of DDR2, and a dual \n> (RAID1) SATA2 750Go HD.\n> Perharps with another HD for indexes.\n> \n> Do you think it will be enough ?\n> Is another RAID for better performances a minimum requirement ?\n> Will a secondary HD for indexes help ?\n> \n> Which OS would you use ? (knowing that there will be a JDK 1.6 installed \n> too)\n> \n> With 5 millions of lines, the same application runs quite fast on windows \n> 2000 on a single P4 2.8 GHz (very few statements last more than 10s, mostly \n> when concurrent statements are made). Each statement consumes 100% of the \n> CPU.\n> \n> \n> thanks for advices.\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\nI think hardware isnt going to solve your problem, especially the cpu. You only have four users.. and postgres can only use 1 core per query. If you have sequential scans that span this table and say it has 60-80 million rows, It can could take longer then 30 seconds. Even if you have alot of ram. Just imagine what postgres is doing... if its target search is going to end in searching 40 million rows and it has to aggregate on two, or three columns its going to be slow. No amount of hardware is going to fix this. Sure you can gain some speed by having entire tables in ram. No magic bullet here. Disk is definitely not a magic bullet. Even if you have a bunch of fast disks its still much slower then RAM in performing reads. So if you read heavy then adding more disk isnt going to just solve all your problems. RAM is nice. The more pages you can keep in ram the less reading from the disk. \n\nEven with that all said and done... aggregating lots of rows takes time. I suggest you come up with a system from preaggregating your data if possible. Identify all of your target dimensions. 
If your lucky, you only have a few key dimensions which can reduce size of table by lots and reduce queries to 1-2 seconds. There are a number of ways to tackle this, but postgres is a nice db to do this with, since writers do not block readers. \n\nI think you should focus on getting this system to work well with minimal hardware first. Then you can upgrade. Over the next few years the db is only going to get larger. You have 4 users now.. but who's to say what it will evolve into. \n_________________________________________________________________\nEarn cashback on your purchases with Live Search - the search that pays you back!\nhttp://search.live.com/cashback/?&pkw=form=MIJAAF/publ=HMTGL/crea=earncashback", "msg_date": "Wed, 18 Jun 2008 01:14:58 +0000", "msg_from": "Jon D <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which hardware ?" }, { "msg_contents": "\"Scott Marlowe\" wrote:\n> We had a reporting server with about 80G of data on a machine with 4G\n> ram last place I worked, and it could take it a few extra seconds to\n> hit the old data, but the SW RAID-10 on it made it much faster at\n> reporting than it would have been with a single disk.\n\nWould this be a nice choice ?\n\nHP Proliant DL320 G5p Xeon DC 3 GHz - 8 Go RAM DDR2 ECC\n- 4 x 146 Go SAS 15k rpm - RAID-10 HP Smart Array (128 Mo cache)\n\nI finally choose to have 2 data tables:\n- one with pre aggregated (dividing size by 10), unpartitionned (=the \ndatabase they currently use)\n- one with original data, yearly partitionned\n\nI will choose before each statement which table will be used depending on \nwhich select/joins/where/groupby the user choosed.\nThe aggregated datas will allow me to maintain actual performances (and even \nimprove it using the new hardware twice more powerfull).\n\nI think lines aggregation will be handled by the java application (excel/csv \nfile loaded in memory),\nwhich will be much faster than using a trigger on insertion in the full \ntable.\n\nThanks.\n\n\n", "msg_date": "Fri, 20 Jun 2008 12:53:45 +0200", "msg_from": "\"Lionel\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which hardware ?" } ]
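A side note on the partitioning plan discussed in this thread: in 8.3, yearly partitions are normally built with table inheritance plus CHECK constraints, and constraint_exclusion has to be enabled before the planner will skip the years a query does not touch. The sketch below only illustrates the idea; the table and column names are made up, since the real schema is not shown in the thread.

-- hypothetical parent table; it holds no rows itself
CREATE TABLE sales (
    sale_year  integer NOT NULL,
    account_id integer NOT NULL,
    amount     numeric NOT NULL
);

-- one child table per year; the CHECK constraint is what allows
-- the planner to exclude this partition for other years
CREATE TABLE sales_2008 (
    CHECK (sale_year = 2008)
) INHERITS (sales);

CREATE INDEX sales_2008_account_idx ON sales_2008 (account_id);

-- must be enabled (in postgresql.conf or per session) for pruning to happen
SET constraint_exclusion = on;

-- a typical aggregation now only scans the 2008 partition
SELECT account_id, sum(amount)
FROM sales
WHERE sale_year = 2008
GROUP BY account_id;

With this layout the batch loads would usually be aimed directly at the child table for the current year (or routed through a rule or trigger), which keeps the 100000-row inserts from touching the older partitions at all.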
[ { "msg_contents": "Hi, i'm new to this ML, i'll try to explain my issue:\n\nI've two tables defined as is (postgresql 8.1):\n\nCREATE TABLE table1\n(\n _id serial,\n num1 int4 not null,\n num2 int4 not null,\n\n primary key(_id)\n);\n\nCREATE INDEX table1IDX1 ON table1(num1);\n\nCREATE TABLE table2\n (\n _id serial,\n _table1_id int not null,\n num3 int4 not null,\n num4 int4 not null,\n\n primary key(_id),\n\n foreign key(_table1_id) references table1(_id) on delete CASCADE\n );\n\nCREATE INDEX table2IDX1 ON table2(_table1_id);\n\n\nI need to select only a subset of table1/table2 records and backup \nthem (to disk).\n\nI proceed as following:\n\n1. Create equivalent tables with _tmp name with indexes and cascade;\n\nCREATE TABLE table1_tmp\n(\n _id serial,\n num1 int4 not null,\n num2 int4 not null,\n\n primary key(_id)\n);\n\nCREATE INDEX table1_tmpIDX1 ON table1_tmp(num1);\n\nCREATE TABLE table2_tmp\n (\n _id serial,\n _table1_id int not null,\n num3 int4 not null,\n num4 int4 not null,\n\n primary key(_id),\n\n foreign key(_table1_id) references table1_tmp(_id) on delete CASCADE\n );\n\nCREATE INDEX table2_tmpIDX1 ON table2_tmp(_table1_id);\n\n\n2. Select and insert into table1_tmp a subset of table1 based on a \nquery (num1 < 10)\n\nINSERT INTO table1_tmp SELECT * from table1 WHERE num1 < 10;\n\n\n3. Populate other tables with a foreign key;\n\nINSERT INTO table2_tmp SELECT table2.* from table2, table1_tmp WHERE \ntable2._table1_id = table1_tmp._id;\n\n\n4. Copy each table into a file (i don't have an 8.2, so that i can't \nexecute pg_dump with several -t options)\n\nCOPY table1_tmp TO \"/tmp/table1_tmp.data\";\nCOPY table2_tmp TO \"/tmp/table2_tmp.data\";\n\n\nThis is only an example, i've more complex tables, but schema is \nequivalent to previous.\n\nMy question is: There'are some optimization/tips that i can do for \nachieve better performance?\nWhen i have several rows (10^6 or greater) returned by query into \ntable1, that starts to hogs time and CPU.\n\nDoing an EXPLAIN, all queries on join are performed using indexes.\n\nThanks in advance,\nCisko\n", "msg_date": "Wed, 18 Jun 2008 18:18:00 +0200", "msg_from": "Cisko <[email protected]>", "msg_from_op": true, "msg_subject": "Partial backup of linked tables" } ]
[ { "msg_contents": "Tengo una pregunta, y este es el escenario de lo que tengo\n\n\n\n\n\n\t \n\t\n\t\n\t\n\t\n\t\n\nSe crea una instancia de\n\tpostgreSQL\n\tSe crea un directorio\n\t$PGDATA/walback donde se almacenararn los wal antiguos\n\tSe exporta una variable $PGDATA2\n\tque es la ubicacion del respaldo del contenido de $PGDATA\n\tSe activa el wal\n\tSe crea una BD y una tabla\n\tEn psql  se ejecuta\n\tpg_start_backup('etiqueta');\n\tSe realiza una copia de todo lo\n\tque esta en $PGDATA hacia otro directorio ($PGDATA2)\n\tEn psql  se ejecuta\n\tpg_stop_backup();\n\tSe actualiza el valor de un\n\tregistro en la tabla que se creo\n\tSe baja la instancia\n\tSe copia todo el contenido de\n\t$PGDATA/pg_xlog y $PGDATA/walback en $PGDATA2/pg_xlog y\n\t$PGDATA2/walback\n\tSe inicia la instancia con pg_ctl\n\t-D $PGDATA2 --log $PGDATA2/log.log start\n\tSe ejecuta psql\n\tSe consulta la tabla y no existen\n\tregistro\n\t\n\tSi alguien sabe el porque pasa esto me\n\tavisan. Gracias\n\nTengo una pregunta, y este es el escenario de lo que tengo\nSe crea una instancia de\n\tpostgreSQL\nSe crea un directorio\n\t$PGDATA/walback donde se almacenararn los wal antiguos\nSe exporta una variable $PGDATA2\n\tque es la ubicacion del respaldo del contenido de $PGDATA\nSe activa el wal\nSe crea una BD y una tabla\nEn psql  se ejecuta\n\tpg_start_backup('etiqueta');\nSe realiza una copia de todo lo\n\tque esta en $PGDATA hacia otro directorio ($PGDATA2)\nEn psql  se ejecuta\n\tpg_stop_backup();\nSe actualiza el valor de un\n\tregistro en la tabla que se creo\nSe baja la instancia\nSe copia todo el contenido de\n\t$PGDATA/pg_xlog y $PGDATA/walback en $PGDATA2/pg_xlog y\n\t$PGDATA2/walback\nSe inicia la instancia con pg_ctl\n\t-D $PGDATA2 --log $PGDATA2/log.log start\nSe ejecuta psql\nSe consulta la tabla y no existen\n\tregistro\nSi alguien sabe el porque pasa esto me\n\tavisan. Gracias", "msg_date": "Wed, 18 Jun 2008 12:43:51 -0700 (PDT)", "msg_from": "Antonio Perez <[email protected]>", "msg_from_op": true, "msg_subject": "WAL DUDAS" }, { "msg_contents": "2008/6/18 Antonio Perez <[email protected]>:\n> Tengo una pregunta, y este es el escenario de lo que tengo\n>\n\n-performance es una lista en ingles, estoy redirigiendo tu pregunta a\nla lista en español ([email protected])\n\n>\n> Se crea una instancia de postgreSQL\n>\n> Se crea un directorio $PGDATA/walback donde se almacenararn los wal antiguos\n>\n> Se exporta una variable $PGDATA2 que es la ubicacion del respaldo del\n> contenido de $PGDATA\n>\n> Se activa el wal\n>\n> Se crea una BD y una tabla\n>\n> En psql se ejecuta pg_start_backup('etiqueta');\n>\n> Se realiza una copia de todo lo que esta en $PGDATA hacia otro directorio\n> ($PGDATA2)\n>\n> En psql se ejecuta pg_stop_backup();\n>\n> Se actualiza el valor de un registro en la tabla que se creo\n>\n> Se baja la instancia\n>\n> Se copia todo el contenido de $PGDATA/pg_xlog y $PGDATA/walback en\n> $PGDATA2/pg_xlog y $PGDATA2/walback\n>\n> Se inicia la instancia con pg_ctl -D $PGDATA2 --log $PGDATA2/log.log start\n>\n> Se ejecuta psql\n>\n> Se consulta la tabla y no existen registro\n>\n> Si alguien sabe el porque pasa esto me avisan. Gracias\n>\n\n\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nGuayaquil - Ecuador\nCel. 
(593) 87171157\n", "msg_date": "Wed, 18 Jun 2008 18:19:06 -0500", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] WAL DUDAS" }, { "msg_contents": "Antonio Perez wrote:\r\n[wonders why his online backup / recovery test didn't work]\r\n\r\n> 1.\tA PostgreSQL instance is created\r\n> \r\n> 2.\tA directory $PGDATA/walback is created, where the old WAL files will be stored\r\n> \r\n> 3.\tA variable $PGDATA2 is exported, which is the location of the backup of the contents of $PGDATA\r\n> \r\n> 4.\tWAL archiving is enabled\r\n> \r\n> 5.\tA database and a table are created\r\n> \r\n> 6.\tIn psql, pg_start_backup('etiqueta'); is executed\r\n> \r\n> 7.\tA copy of everything in $PGDATA is made to another directory ($PGDATA2)\r\n> \r\n> 8.\tIn psql, pg_stop_backup(); is executed\r\n> \r\n> 9.\tThe value of a row in the table that was created is updated\r\n> \r\n> 10.\tThe instance is shut down\r\n> \r\n> 11.\tThe whole contents of $PGDATA/pg_xlog and $PGDATA/walback is copied to $PGDATA2/pg_xlog and $PGDATA2/walback\r\n> \r\n> 12.\tThe instance is started with pg_ctl -D $PGDATA2 --log $PGDATA2/log.log start\r\n> \r\n> 13.\tpsql is run\r\n> \r\n> 14.\tThe table is queried and no rows exist\r\n> \r\n> \tIf anyone knows why this happens, please let me know. Thanks\r\n\r\nFirst, you are supposed to use English on this list.\r\n\r\nWhat you did with your copy of the cluster files is a crash recovery, basically\r\nthe same thing that will take place if you kill -9 the postmaster and restart it.\r\n\r\nThis is not the correct way to restore, let alone to recover the database.\r\n\r\nThere are step-by-step instructions at\r\nhttp://www.postgresql.org/docs/current/static/continuous-archiving.html#BACKUP-PITR-RECOVERY\r\n\r\nThe important step you missed is step number 7, in which you create a recovery.conf\r\nfile that tells the server where it should look for archived WAL files, how to restore\r\nthem and until what point in time it should recover.\r\n\r\nYours,\r\nLaurenz Albe\r\n", "msg_date": "Thu, 19 Jun 2008 09:13:42 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: WAL DUDAS" } ]
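To make the missing piece concrete: before starting the copied cluster, a recovery.conf file has to be placed in the restored data directory ($PGDATA2 in the scenario above) so that the server replays archived WAL segments through restore_command rather than performing a plain crash recovery. A minimal sketch, assuming the archived segments were written into the walback directory by archive_command while the instance was running (the path below is a placeholder):

# contents of $PGDATA2/recovery.conf (read at startup, renamed to recovery.done when recovery completes)
restore_command = 'cp /path/to/walback/%f "%p"'
# optional: stop replay at a given timestamp instead of recovering to the end of WAL
#recovery_target_time = '2008-06-18 12:00:00'

This assumes WAL archiving (archive_command) was actually copying completed segments into that directory; without it, recovery has nothing to restore beyond whatever happens to sit in pg_xlog, which is essentially the crash recovery described in the reply above.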
[ { "msg_contents": "Hello,\n\nI am experiencing a query for which an useful index is not being used by \nPostgreSQL. The query is in the form:\n\n select count(*) from foo\n where foo.account_id in (\n select id from accounts where system = 'abc');\n\nand the size of the tables it works on is:\n\n - 270 records in \"accounts\" 22 of which match the condition 'abc';\n - 5.3M records in \"foo\", 92K of which match the query condition.\n\nThere is an index in the field \"foo.account_id\" but is not used. The resulting \nquery plan is:\n\n Aggregate (cost=300940.70..300940.71 rows=1 width=0) (actual\ntime=13412.088..13412.089 rows=1 loops=1)\n -> Hash IN Join (cost=11.97..299858.32 rows=432953 width=0) (actual\ntime=0.678..13307.074 rows=92790 loops=1)\n Hash Cond: (foo.account_id = accounts.id)\n -> Seq Scan on foo (cost=0.00..275591.14 rows=5313514 width=4)\n(actual time=0.014..7163.538 rows=5313514 loops=1)\n -> Hash (cost=11.70..11.70 rows=22 width=4) (actual\ntime=0.199..0.199 rows=22 loops=1)\n -> Bitmap Heap Scan on accounts (cost=1.42..11.70 rows=22\nwidth=4) (actual time=0.092..0.160 rows=22 loops=1)\n Recheck Cond: ((\"system\")::text = 'abc'::text)\n -> Bitmap Index Scan on iaccounts_x1\n(cost=0.00..1.42 rows=22 width=0) (actual time=0.077..0.077 rows=22\nloops=1)\n Index Cond: ((\"system\")::text = 'abc'::text)\n Total runtime: 13412.226 ms\n\n\nThere is a seqscan on the large table. If seqscans are disabled, the plan \nbecomes the more acceptable:\n\n Aggregate (cost=2471979.99..2471980.00 rows=1 width=0) (actual\ntime=630.977..630.978 rows=1 loops=1)\n -> Nested Loop (cost=1258.12..2470897.61 rows=432953 width=0) (actual\ntime=0.164..526.174 rows=92790 loops=1)\n -> HashAggregate (cost=12.75..12.97 rows=22 width=4) (actual\ntime=0.131..0.169 rows=22 loops=1)\n -> Bitmap Heap Scan on accounts (cost=2.42..12.70 rows=22\nwidth=4) (actual time=0.047..0.091 rows=22 loops=1)\n Recheck Cond: ((\"system\")::text = 'abc'::text)\n -> Bitmap Index Scan on iaccounts_x1\n(cost=0.00..2.42 rows=22 width=0) (actual time=0.036..0.036 rows=22\nloops=1)\n Index Cond: ((\"system\")::text = 'abc'::text)\n -> Bitmap Heap Scan on foo (cost=1245.37..111275.14 rows=83024\nwidth=4) (actual time=3.086..14.391 rows=4218 loops=22)\n Recheck Cond: (foo.account_id = accounts.id)\n -> Bitmap Index Scan on ifoo_x1 (cost=0.00..1224.61\nrows=83024 width=0) (actual time=2.962..2.962 rows=4218 loops=22)\n Index Cond: (foo.account_id = accounts.id)\n Total runtime: 631.121 ms\n\nwhere the index \"ifoo_x1\" is used.\n\n\nA similar query plan can be also obtained performing first the internal query \nand hardcoding the result in a new query:\n\n explain analyze select count(*) from foo\n where account_id in\n(70,33,190,21,191,223,203,202,148,246,85,281,280,319,234,67,245,310,318,279,320,9);\n\n\nI have tried to:\n\n - rewrite the query with a JOIN instead of an IN (no change in the plan),\n - rewrite the query using EXISTS (it gets worse),\n - raise the statistics for the foo.account_id field to 100 and to 1000,\n - decrease the random_page_cost down to 1,\n - vacuum-analyze the tables at each change,\n\nnone of which has changed the situation.\n\nThe system is an Ubuntu Hardy 64 bits running PG 8.3. The issue has been \nconfirmed on Mac OS 1.5/PG 8.3. Although I made fewer tests on a PG 8.2 we \nrecently switched from, I think the issue presents on that version too.\n\nThis is the first time I see the query planner failing a plan rather obvious: \nis there any other setting to tweak to force it to do good? 
(but a sensible \ntweaking: the random_page_cost to 1 was just a try to have the index used, \nnothing to be really put in production)\n\nIf you want to try the issue, an anonimized dataset is available on \nhttp://piro.develer.com/test.sql.bz2 . The file size is 46MB (1.5GB \nuncompressed). Chris Mair, who tested it on Mac OS, also noticed that PG \nbehaved correctly with the freshly imported data: as soon as he VACUUMed the \ndatabase he started experiencing the described issue.\n\nThank you very much.\n\n-- \nDaniele Varrazzo - Develer S.r.l.\nhttp://www.develer.com\n", "msg_date": "Thu, 19 Jun 2008 02:07:11 +0100", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": true, "msg_subject": "An \"obvious\" index not being used" }, { "msg_contents": "Daniele Varrazzo <[email protected]> writes:\n> There is an index in the field \"foo.account_id\" but is not used. The resulting \n> query plan is:\n\n> Aggregate (cost=300940.70..300940.71 rows=1 width=0) (actual\n> time=13412.088..13412.089 rows=1 loops=1)\n> -> Hash IN Join (cost=11.97..299858.32 rows=432953 width=0) (actual\n> time=0.678..13307.074 rows=92790 loops=1)\n> Hash Cond: (foo.account_id = accounts.id)\n> -> Seq Scan on foo (cost=0.00..275591.14 rows=5313514 width=4)\n> (actual time=0.014..7163.538 rows=5313514 loops=1)\n\nWell, if the estimate of 432953 rows selected were correct, it'd be\nright not to use the index. Fetching one row in ten is not a chore\nfor an indexscan. (I'm not sure it'd prefer an indexscan even with an\naccurate 92K-row estimate, but at least you'd be in the realm where\ntweaking random_page_cost would make a difference.)\n\nI'm not sure why that estimate is so bad, given that you said you\nincreased the stats target on the table. Is there anything particularly\nskewed about the distribution of the account IDs?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Jun 2008 21:43:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An \"obvious\" index not being used " }, { "msg_contents": "Tom Lane ha scritto:\n> Daniele Varrazzo <[email protected]> writes:\n>> There is an index in the field \"foo.account_id\" but is not used. The resulting \n>> query plan is:\n> \n>> Aggregate (cost=300940.70..300940.71 rows=1 width=0) (actual\n>> time=13412.088..13412.089 rows=1 loops=1)\n>> -> Hash IN Join (cost=11.97..299858.32 rows=432953 width=0) (actual\n>> time=0.678..13307.074 rows=92790 loops=1)\n>> Hash Cond: (foo.account_id = accounts.id)\n>> -> Seq Scan on foo (cost=0.00..275591.14 rows=5313514 width=4)\n>> (actual time=0.014..7163.538 rows=5313514 loops=1)\n> \n> Well, if the estimate of 432953 rows selected were correct, it'd be\n> right not to use the index. Fetching one row in ten is not a chore\n> for an indexscan. (I'm not sure it'd prefer an indexscan even with an\n> accurate 92K-row estimate, but at least you'd be in the realm where\n> tweaking random_page_cost would make a difference.)\n\nLet me guess: because the account tables has an estimated (and correct) guess \nof 22 records fetched out from 270 =~ 8%, it assumes that it will need to \nfetch the 8% of 5.3M records (which... yes, it matches the estimate of 433K). \nWell, this seems terribly wrong for this data set :(\n\n> I'm not sure why that estimate is so bad, given that you said you\n> increased the stats target on the table. 
Is there anything particularly\n> skewed about the distribution of the account IDs?\n\nProbably there is, in the sense that the relatively many accounts of 'abc' \ntype are referred by relatively few records. In the plan for the hardcoded \nquery the estimate is:\n\n-> Bitmap Index Scan on ifoo_x1 (cost=0.00..4115.67 rows=178308\nwidth=0) (actual time=89.766..89.766 rows=92790 loops=1)\n\nwhich is actually more accurate.\n\nI suspect the foo.account_id statistical data are not used at all in query: \nthe query planner can only estimate the number of accounts to look for, not \nhow they are distributed in the referencing tables. It seems the only way to \nget the proper plan is to add a load of fake accounts! Well, I'd rather have \nthe query executed in 2 times, in order to have the stats correctly used: this \nis the first time it happens to me.\n\n-- \nDaniele Varrazzo - Develer S.r.l.\nhttp://www.develer.com\n", "msg_date": "Thu, 19 Jun 2008 03:19:59 +0100", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: An \"obvious\" index not being used" }, { "msg_contents": ">>> Daniele Varrazzo <[email protected]> wrote: \n \n> select count(*) from foo\n> where foo.account_id in (\n> select id from accounts where system = 'abc');\n \n> Total runtime: 13412.226 ms\n \nOut of curiosity, how does it do with the logically equivalent?:\n \nselect count(*) from foo\nwhere exists (select * from accounts\n where accounts.id = foo.account_id\n and accounts.system = 'abc');\n \n-Kevin\n", "msg_date": "Thu, 19 Jun 2008 08:46:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An \"obvious\" index not being used" }, { "msg_contents": "\n>>>> Daniele Varrazzo <[email protected]> wrote:\n>\n>> select count(*) from foo\n>> where foo.account_id in (\n>> select id from accounts where system = 'abc');\n>\n>> Total runtime: 13412.226 ms\n>\n> Out of curiosity, how does it do with the logically equivalent?:\n>\n> select count(*) from foo\n> where exists (select * from accounts\n> where accounts.id = foo.account_id\n> and accounts.system = 'abc');\n\nI tried it: it is slower and the query plan still includes the seqscan:\n\n Aggregate (cost=44212346.30..44212346.31 rows=1 width=0) (actual\ntime=21510.468..21510.469 rows=1 loops=1)\n -> Seq Scan on foo (cost=0.00..44205704.40 rows=2656760 width=0)\n(actual time=0.058..21402.752 rows=92790 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using accounts_pkey on accounts (cost=0.00..8.27\nrows=1 width=288) (actual time=0.002..0.002 rows=0 loops=5313519)\n Index Cond: (id = $0)\n Filter: ((\"system\")::text = 'abc'::text)\n Total runtime: 21510.531 ms\n\nHere the estimate is even more gross: 2656760 is exactly the 50% of the\nrecords in the table.\n\n-- \nDaniele Varrazzo - Develer S.r.l.\nhttp://www.develer.com\n", "msg_date": "Thu, 19 Jun 2008 16:03:38 +0200 (CEST)", "msg_from": "\"Daniele Varrazzo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An \"obvious\" index not being used" }, { "msg_contents": "Daniele Varrazzo writes:\n\n> I suspect the foo.account_id statistical data are not used at all in query: \n> the query planner can only estimate the number of accounts to look for, not \n\nYou mentioned you bumped your default_statistics_target.\nWhat did you increase it to?\nMy data sets are so \"strange\" that anything less than 350 gives many bad \nplans. 
\n", "msg_date": "Fri, 18 Jul 2008 21:45:07 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An =?ISO-8859-1?B?Im9idmlvdXMi?= index not being used" }, { "msg_contents": "Francisco Reyes writes:\n> Daniele Varrazzo writes:\n> \n>> I suspect the foo.account_id statistical data are not used at all in \n>> query: the query planner can only estimate the number of accounts to \n>> look for, not \n> \n> You mentioned you bumped your default_statistics_target.\n> What did you increase it to?\n> My data sets are so \"strange\" that anything less than 350 gives many bad \n> plans.\n\nNot default_statistics_target: I used \"ALTER TABLE SET STATISTICS\" to change \nthe stats only for the tables I was interested in, arriving up to 1000. I \nthink the result is the same, but it was a red herring anyway: these stats \ncouldn't be used at all in my query.\n\nIn my problem I had 2 tables: a small one (accounts), a large one (foo). The \nway the query is written doesn't allow the stats from the large table to be \nused at all, unless the records from the small table are fetched. This is \nindependent from the stats accuracy.\n\nWhat the planner does is to assume an even distribution in the data in the \njoined fields. The assumption is probably better than not having anything, but \nin my data set (where there were a bunch of accounts with many foo each,but \nmany accounts with too little foo) this proved false.\n\nThe stats can be used only if at planning time the planner knows what values \nto look for in the field: this is the reason for which, if the query is split \nin two parts, performances become acceptable. In this case we may fall in your \nsituation: a data set may be \"strange\" and thus require an increase in the \nstats resolution. I can't remember if the default 10 was too low, but 100 was \ndefinitely enough for me.\n\nIt would be nice if the planner could perform the \"split query\" optimization \nautomatically, i.e. fetch records from small tables to plan the action on \nlarger tables. But I suspect this doesn't fit at all in the current PostgreSQL \nquery pipeline... or does it?\n\n-- \nDaniele Varrazzo - Develer S.r.l.\nhttp://www.develer.com\n", "msg_date": "Sat, 19 Jul 2008 18:21:43 +0100", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: An \"obvious\" index not being used" }, { "msg_contents": "Daniele Varrazzo <[email protected]> writes:\n> In my problem I had 2 tables: a small one (accounts), a large one (foo). The \n> way the query is written doesn't allow the stats from the large table to be \n> used at all, unless the records from the small table are fetched. This is \n> independent from the stats accuracy.\n\n> What the planner does is to assume an even distribution in the data in the \n> joined fields.\n\nSir, you don't know what you're talking about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Jul 2008 00:44:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An \"obvious\" index not being used " }, { "msg_contents": "Tom Lane ha scritto:\n> Daniele Varrazzo <[email protected]> writes:\n>> In my problem I had 2 tables: a small one (accounts), a large one (foo). The \n>> way the query is written doesn't allow the stats from the large table to be \n>> used at all, unless the records from the small table are fetched. 
This is \n>> independent from the stats accuracy.\n> \n>> What the planner does is to assume an even distribution in the data in the \n>> joined fields.\n> \n> Sir, you don't know what you're talking about.\n\nThis is probably correct, I am not into the PG internals.\n\nI was just reporting the analysis I proposed in my previous message in this \nthread \n(http://archives.postgresql.org/pgsql-performance/2008-06/msg00095.php). You \ngave me an hint of where the backend was missing to correctly estimate, and I \ndeduced a guess of the strategy the backend could have used to reach that \nresult - not matching the reality of my data set but I think matching the \npicture it could have using the stats data but not performing any further fetch.\n\nNobody confuted that message, of course that may have happened because it was \nlaughable:\n\nDaniele Varrazzo ha scritto:\n > Tom Lane ha scritto:\n >> Daniele Varrazzo <[email protected]> writes:\n >>> There is an index in the field \"foo.account_id\" but is not used. The\n >>> resulting query plan is:\n >>\n >>> Aggregate (cost=300940.70..300940.71 rows=1 width=0) (actual\n >>> time=13412.088..13412.089 rows=1 loops=1)\n >>> -> Hash IN Join (cost=11.97..299858.32 rows=432953 width=0)\n >>> (actual\n >>> time=0.678..13307.074 rows=92790 loops=1)\n >>> Hash Cond: (foo.account_id = accounts.id)\n >>> -> Seq Scan on foo (cost=0.00..275591.14 rows=5313514\n >>> width=4)\n >>> (actual time=0.014..7163.538 rows=5313514 loops=1)\n >>\n >> Well, if the estimate of 432953 rows selected were correct, it'd be\n >> right not to use the index. Fetching one row in ten is not a chore\n >> for an indexscan. (I'm not sure it'd prefer an indexscan even with an\n >> accurate 92K-row estimate, but at least you'd be in the realm where\n >> tweaking random_page_cost would make a difference.)\n >\n > Let me guess: because the account tables has an estimated (and correct)\n > guess of 22 records fetched out from 270 =~ 8%, it assumes that it will\n > need to fetch the 8% of 5.3M records (which... yes, it matches the\n > estimate of 433K).\n\nThis is the idea I had about how the query planner behaved in that query, and \nwhy the query performs as I expect when the joined items are explicit. Was it \nwrong?\n\nThank you very much. Again, the only reason for which I think I was right is \nbecause nobody confuted my previous email.\n\nRegards,\n\n-- \nDaniele Varrazzo - Develer S.r.l.\nhttp://www.develer.com\n", "msg_date": "Mon, 21 Jul 2008 00:07:04 +0100", "msg_from": "Daniele Varrazzo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: An \"obvious\" index not being used" } ]
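As a follow-up to the "split query" workaround discussed in this thread: the two-step approach can be wrapped in a pl/pgsql function so the application still issues a single call. EXECUTE builds the second statement with the account ids inlined as literals, which lets the planner consult the per-value statistics on foo.account_id instead of estimating the IN-join selectivity without knowing which ids will be fetched. This is only a sketch: the function and variable names are invented, and it assumes accounts.id is an integer key as in the example schema.

CREATE OR REPLACE FUNCTION count_foo_for_system(p_system text) RETURNS bigint AS $$
DECLARE
    id_list text;
    result  bigint;
BEGIN
    -- step 1: fetch the matching account ids from the small table
    SELECT array_to_string(array(
               SELECT id FROM accounts WHERE system = p_system), ',')
      INTO id_list;

    IF id_list = '' THEN
        RETURN 0;
    END IF;

    -- step 2: plan the query against the big table with the ids as literals
    EXECUTE 'SELECT count(*) FROM foo WHERE account_id IN (' || id_list || ')'
       INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- usage: SELECT count_foo_for_system('abc');

Since the ids come straight from the accounts table and are integers, building the statement by string concatenation is safe here; with user-supplied text values quote_literal() would be needed.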
[ { "msg_contents": "Hello to list,\n\nWe have a CentOS-5 server with postgresql-8.1.8 installed. I am struggling with postgresql performance. Any query say select * from tablename takes 10-15 mins to give the output, and while executing the query system loads goes up like anything. After the query output, system loads starts decresing.\n\nAny query select,insert,update simple or complex behaves in the same way, what i have explained above.\n\nSystem Specification:\n\nOS :- CentOs 5\nPostgresql 8.1.8\nRAM :- 1 GB\nSWAP 2 GB\n\nSome relevent part(uncommented) of my /var/lib/pgsql/data/postgresql.conf\n\nlisten_addresses = 'localhost'\nmax_connections = 100\nshared_buffers = 1000\n\nThe one more strange thing is that with the same setting on another server, postgresql is running very smooth. I had run vacum also some times back.\n\nPlease help me out and let me know if you need any other information.\n\nThanks & Regards,\n\nBijayant Kumar\n\nSend instant messages to your online friends http://uk.messenger.yahoo.com \n", "msg_date": "Mon, 23 Jun 2008 04:06:54 -0700 (PDT)", "msg_from": "bijayant kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql is very slow" }, { "msg_contents": "On Monday 23 June 2008 07:06:54 bijayant kumar wrote:\n> Hello to list,\n>\n> We have a CentOS-5 server with postgresql-8.1.8 installed. I am struggling\n> with postgresql performance. Any query say select * from tablename takes\n> 10-15 mins to give the output, and while executing the query system loads\n> goes up like anything. After the query output, system loads starts\n> decresing.\n\nSounds like a vacuum problem.\n\n>\n> Any query select,insert,update simple or complex behaves in the same way,\n> what i have explained above.\n>\n> System Specification:\n>\n> OS :- CentOs 5\n> Postgresql 8.1.8\n> RAM :- 1 GB\n> SWAP 2 GB\n>\n> Some relevent part(uncommented) of my /var/lib/pgsql/data/postgresql.conf\n>\n> listen_addresses = 'localhost'\n> max_connections = 100\n> shared_buffers = 1000\n\nYou shared_buffers seems low.\n\n>\n> The one more strange thing is that with the same setting on another server,\n> postgresql is running very smooth. I had run vacum also some times back.\n\nYou are aware that vacuum is supposed to be an ongoing maintenance activity, \nright? \n\n>\n> Please help me out and let me know if you need any other information.\n>\n> Thanks & Regards,\n>\n> Bijayant Kumar\n>\n> Send instant messages to your online friends http://uk.messenger.yahoo.com\n\njan\n", "msg_date": "Mon, 23 Jun 2008 08:35:58 -0400", "msg_from": "Jan de Visser <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "bijayant kumar wrote:\n> select * from tablename takes 10-15 mins to give the output\n\n\nThere are better ways to dump data than using a database; that's\nnot a useful query.\n\n\n> Any query select,insert,update simple or complex behaves in the same way\n\nHave you set up suitable indexes for your operations (and then run analyze)?\n\nCheers,\n Jeremy\n", "msg_date": "Mon, 23 Jun 2008 14:36:28 +0100", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "Hi,\n\n> Hello to list,\n>\n> We have a CentOS-5 server with postgresql-8.1.8 installed. I am struggling\n> with postgresql performance. Any query say select * from tablename takes\n> 10-15 mins to give the output, and while executing the query system loads\n> goes up like anything. 
After the query output, system loads starts\n> decresing.\n\nI doubt the 'select * from tablename' is a good candidate for tuning, but\ngive us more information about the table. What is it's size - how many\nrows does it have and how much space does it occupy on the disk? What is a\ntypical usage of the table - is it modified (update / delete) frequently?\nHow is it maintained - is there a autovacuum running, or did you set a\nroutine vacuum (and analyze) job to maintain the database?\n\nI guess one of the servers (the slow one) is running for a long time\nwithout a proper db maintenance (vacuum / analyze) and you dumped / loaded\nthe db onto a new server. So the 'new server' has much more 'compact'\ntables and thus gives the responses much faster. And this holds for more\ncomplicated queries (with indexes etc) too.\n\nAn output from 'EXPLAIN' (or 'EXPLAIN ANALYZE') command would give a much\nbetter overview.\n\nTomas\n\n", "msg_date": "Mon, 23 Jun 2008 15:50:14 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "\n> System Specification:\n\n> OS :- CentOs 5\n> Postgresql 8.1.8\n> RAM :- 1 GB\n> SWAP 2 GB\n\n[Greg says] \nHow much memory is actually free, can you include the output from the command \"free\" in your reply? What else runs on this server? What is the system load before and during your query?\n\nWhile it's more likely the other comments about vacuum will be the ultimate cause, performance will also be degraded, sometimes significantly, if your system has too many other things running and you are actively using swap space.\n\nGreg\n\n", "msg_date": "Mon, 23 Jun 2008 09:06:31 -0700", "msg_from": "\"Gregory S. Youngblood\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "Hi,\n\nThanks for the reply. Many gentlemans have replied to my question, thanks to all of them. I have tried to answer all questions in one mail.\n\n--- On Mon, 23/6/08, [email protected] <[email protected]> wrote:\n\n> From: [email protected] <[email protected]>\n> Subject: Re: [PERFORM] Postgresql is very slow\n> To: [email protected]\n> Cc: [email protected]\n> Date: Monday, 23 June, 2008, 7:20 PM\n> Hi,\n> \n> > Hello to list,\n> >\n> > We have a CentOS-5 server with postgresql-8.1.8\n> installed. I am struggling\n> > with postgresql performance. Any query say select *\n> from tablename takes\n> > 10-15 mins to give the output, and while executing the\n> query system loads\n> > goes up like anything. After the query output, system\n> loads starts\n> > decresing.\n> \n> I doubt the 'select * from tablename' is a good\n> candidate for tuning, but\n> give us more information about the table. What is it's\n> size - how many\n> rows does it have and how much space does it occupy on the\n> disk? What is a\n> typical usage of the table - is it modified (update /\n> delete) frequently?\n> How is it maintained - is there a autovacuum running, or\n> did you set a\n> routine vacuum (and analyze) job to maintain the database?\n> \n> I guess one of the servers (the slow one) is running for a\n> long time\n> without a proper db maintenance (vacuum / analyze) and you\n> dumped / loaded\n> the db onto a new server. So the 'new server' has\n> much more 'compact'\n> tables and thus gives the responses much faster. 
And this\n> holds for more\n> complicated queries (with indexes etc) too.\n> \n> An output from 'EXPLAIN' (or 'EXPLAIN\n> ANALYZE') command would give a much\n> better overview.\n> \n\nWe maintains mail server, for this datas are stored in postgresql. There are total 24 tables but only two are used. Basically one table say USER stores the users information like mailid and his password, and there are 1669 rows in this table. The other table stores the domains name and no updation/deletion/insertion happens very frequently. Once in a month this table is touched.\nBut the second table USER is modified frequently(like on an average 10 times daily) because users changes their password, new users are being added, old ones are deleted.\n\nWe have created this database with the dump of our old server, and with the same dump the database is running fine on the new server but not on the slow server.\n\nI was not aware of the VACUUM functionality earlier, but some times back i read and run this on the server but i did not achieve anything in terms of performance. The server is running from 1 to 1.5 years and we have done VACUUM only once.\n\nIs this the problem of slow database? One more thing if i recreate the database, will it help?\n\nThe output of ANALYZE\n\nANALYZE verbose USERS;\nINFO: analyzing \"public.USERS\"\nINFO: \"USERS\": scanned 3000 of 54063 pages, containing 128 live rows and 1 dead rows; 128 rows in sample, 2307 estimated total rows\nANALYZE\n\nThe output of EXPLAIN query;\n\nselect * from USERS where email like '%bijayant.kumar%';\nThis simplest query tooks 10 minutes and server loads goes from 0.35 to 16.94.\n\nEXPLAIN select * from USERS where email like '%bijayant.kumar%';\n QUERY PLAN\n--------------------------------------------------------------\n Seq Scan on USERS (cost=0.00..54091.84 rows=1 width=161)\n Filter: ((email)::text ~~ '%bijayant.kumar%'::text)\n(2 rows)\n\n\nI hope i have covered everything in my mail to troubleshoot my problem.\n\n> Tomas\n> \n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nSend instant messages to your online friends http://uk.messenger.yahoo.com \n", "msg_date": "Mon, 23 Jun 2008 22:48:53 -0700 (PDT)", "msg_from": "bijayant kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "On Mon, Jun 23, 2008 at 11:48 PM, bijayant kumar <[email protected]> wrote:\n\nOK, you don't have a ton of updates each day, but they add up over time.\n\n> I was not aware of the VACUUM functionality earlier, but some times back i read and run this on the server but i did not achieve anything in terms of performance. The server is running from 1 to 1.5 years and we have done VACUUM only once.\n\nvacuuming isn't so much about performance as about maintenance. You\ndon't change the oil in your car to make it go faster, you do it to\nkeep it running smoothly. Don't change it for 1.5 years and you could\nhave problems. sludge build up / dead tuple build up. Kinda similar.\n\n> Is this the problem of slow database? One more thing if i recreate the database, will it help?\n\nMost likely. 
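(For ongoing maintenance, a minimal sketch would be a nightly entry in the postgres user's crontab -- the schedule, binary path and log path below are only examples:\n\n# vacuum and re-analyze every database once a night\n30 2 * * * /usr/bin/vacuumdb --all --analyze --quiet >> /var/log/pg_maintenance.log 2>&1\n\nor, better still, enabling autovacuum so this happens continuously.)\n\n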
What does\n\nvacuum verbose;\n\non the main database say?\n\n> The output of ANALYZE\n>\n> ANALYZE verbose USERS;\n> INFO: analyzing \"public.USERS\"\n> INFO: \"USERS\": scanned 3000 of 54063 pages, containing 128 live rows and 1 dead rows; 128 rows in sample, 2307 estimated total rows\n> ANALYZE\n\nSo, 54963 pages hold 128 live database rows. A page is 8k. that\nmeans you're storing 128 live rows in approximately a 400+ megabyte\nfile.\n\n> The output of EXPLAIN query;\n>\n> select * from USERS where email like '%bijayant.kumar%';\n> This simplest query tooks 10 minutes and server loads goes from 0.35 to 16.94.\n>\n> EXPLAIN select * from USERS where email like '%bijayant.kumar%';\n> QUERY PLAN\n> --------------------------------------------------------------\n> Seq Scan on USERS (cost=0.00..54091.84 rows=1 width=161)\n> Filter: ((email)::text ~~ '%bijayant.kumar%'::text)\n> (2 rows)\n\nYou're scanning ~ 54094 sequential pages to retrieve 1 row. Note\nthat explain analyze is generally a better choice, it gives more data\nuseful for troubleshooting.\n\nDefinitely need a vacuum full on this table, likely followed by a reindex.\n", "msg_date": "Tue, 24 Jun 2008 01:51:54 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "> Definitely need a vacuum full on this table, likely followed by a reindex.\n\nOr a cluster on the table...\n", "msg_date": "Tue, 24 Jun 2008 01:52:28 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": ">> I was not aware of the VACUUM functionality earlier, but some times back\n>> i read and run this on the server but i did not achieve anything in\n>> terms of performance. The server is running from 1 to 1.5 years and we\n>> have done VACUUM only once.\n>\n> vacuuming isn't so much about performance as about maintenance. You\n> don't change the oil in your car to make it go faster, you do it to\n> keep it running smoothly. Don't change it for 1.5 years and you could\n> have problems. sludge build up / dead tuple build up. Kinda similar.\n>\n\nI have to disagree - the VACUUM is a maintenance task, but with a direct\nimpact on performance. The point is that Postgresql holds dead rows (old\nversions, deleted, etc.) until freed by vacuum, and these rows need to be\nchecked every time (are they still visible to the transaction?). So on a\nheavily modified table you may easily end up with most of the tuples being\ndead and table consisting of mostly dead tuples.\n\n>> The output of EXPLAIN query;\n>>\n>> select * from USERS where email like '%bijayant.kumar%';\n>> This simplest query tooks 10 minutes and server loads goes from 0.35 to\n>> 16.94.\n>>\n>> EXPLAIN select * from USERS where email like '%bijayant.kumar%';\n>> QUERY PLAN\n>> --------------------------------------------------------------\n>> Seq Scan on USERS (cost=0.00..54091.84 rows=1 width=161)\n>> Filter: ((email)::text ~~ '%bijayant.kumar%'::text)\n>> (2 rows)\n>\n> You're scanning ~ 54094 sequential pages to retrieve 1 row. Note\n> that explain analyze is generally a better choice, it gives more data\n> useful for troubleshooting.\n\nNot necessarily, the 'cost' depends on seq_page_cost and there might be\nother value than 1 (which is the default). 
A better approach is\n\nSELECT relpages, reltuples FROM pg_class WHERE relname = 'users';\n\nwhich reads the values from system catalogue.\n\n> Definitely need a vacuum full on this table, likely followed by a reindex.\n\nYes, that's true. I guess the table holds a lot of dead tuples. I'm not\nsure why this happens on one server (the new one) and not on the other\none. I guess the original one uses some automatic vacuuming (autovacuum,\ncron job, or something like that).\n\nAs someone already posted, clustering the table (by primary key for\nexample) should be much faster than vacuuming and give better performance\nin the end. See\n\nhttp://www.postgresql.org/docs/8.3/interactive/sql-cluster.html\n\nThe plain reindex won't help here - it won't remove dead tuples.\n\nTomas\n\n", "msg_date": "Tue, 24 Jun 2008 10:17:33 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "--- On Tue, 24/6/08, [email protected] <[email protected]> wrote:\n\n> From: [email protected] <[email protected]>\n> Subject: Re: [PERFORM] Postgresql is very slow\n> To: \"Scott Marlowe\" <[email protected]>\n> Cc: [email protected], [email protected], [email protected]\n> Date: Tuesday, 24 June, 2008, 1:47 PM\n> >> I was not aware of the VACUUM functionality\n> earlier, but some times back\n> >> i read and run this on the server but i did not\n> achieve anything in\n> >> terms of performance. The server is running from 1\n> to 1.5 years and we\n> >> have done VACUUM only once.\n> >\n> > vacuuming isn't so much about performance as about\n> maintenance. You\n> > don't change the oil in your car to make it go\n> faster, you do it to\n> > keep it running smoothly. Don't change it for 1.5\n> years and you could\n> > have problems. sludge build up / dead tuple build up.\n> Kinda similar.\n> >\n> \n> I have to disagree - the VACUUM is a maintenance task, but\n> with a direct\n> impact on performance. The point is that Postgresql holds\n> dead rows (old\n> versions, deleted, etc.) until freed by vacuum, and these\n> rows need to be\n> checked every time (are they still visible to the\n> transaction?). So on a\n> heavily modified table you may easily end up with most of\n> the tuples being\n> dead and table consisting of mostly dead tuples.\n> \n> >> The output of EXPLAIN query;\n> >>\n> >> select * from USERS where email like\n> '%bijayant.kumar%';\n> >> This simplest query tooks 10 minutes and server\n> loads goes from 0.35 to\n> >> 16.94.\n> >>\n> >> EXPLAIN select * from USERS where email like\n> '%bijayant.kumar%';\n> >> QUERY PLAN\n> >>\n> --------------------------------------------------------------\n> >> Seq Scan on USERS (cost=0.00..54091.84 rows=1\n> width=161)\n> >> Filter: ((email)::text ~~\n> '%bijayant.kumar%'::text)\n> >> (2 rows)\n> >\n> > You're scanning ~ 54094 sequential pages to\n> retrieve 1 row. Note\n> > that explain analyze is generally a better choice, it\n> gives more data\n> > useful for troubleshooting.\n> \n> Not necessarily, the 'cost' depends on\n> seq_page_cost and there might be\n> other value than 1 (which is the default). 
A better\n> approach is\n> \n> SELECT relpages, reltuples FROM pg_class WHERE relname =\n> 'users';\n> \n> which reads the values from system catalogue.\n> \nThe Output of query on the Slow Server\n\nSELECT relpages, reltuples FROM pg_class WHERE relname ='users';\n relpages | reltuples\n----------+-----------\n 54063 | 2307\n(1 row)\n\nThe Output of query on the old server which is fast\n\n relpages | reltuples\n----------+-----------\n 42 | 1637\n\n\n> > Definitely need a vacuum full on this table, likely\n> followed by a reindex.\n> \n\nThe Slow server load increases whenever i run a simple query, is it the good idea to run VACUUM full on the live server's database now or it should be run when the traffic is very low may be in weekend.\n\n> Yes, that's true. I guess the table holds a lot of dead\n> tuples. I'm not\n> sure why this happens on one server (the new one) and not\n> on the other\n> one. I guess the original one uses some automatic vacuuming\n> (autovacuum,\n> cron job, or something like that).\n\nThere was nothing related to VACUUM of database in the crontab.\n> \n> As someone already posted, clustering the table (by primary\n> key for\n> example) should be much faster than vacuuming and give\n> better performance\n> in the end. See\n> \n> http://www.postgresql.org/docs/8.3/interactive/sql-cluster.html\n> \n> The plain reindex won't help here - it won't remove\n> dead tuples.\n> \nI am new to Postgres database, i didnt understand the \"indexing\" part. Is it related to PRIMARY_KEY column of the table?\n\nShould i have to run:- CLUSTER USERS using 'username';\n\n> Tomas\n\nSend instant messages to your online friends http://uk.messenger.yahoo.com \n", "msg_date": "Tue, 24 Jun 2008 02:17:34 -0700 (PDT)", "msg_from": "bijayant kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "2008/6/24 Scott Marlowe <[email protected]>:\n> On Mon, Jun 23, 2008 at 11:48 PM, bijayant kumar <[email protected]> wrote:\n(...)\n>> The output of EXPLAIN query;\n>>\n>> select * from USERS where email like '%bijayant.kumar%';\n>> This simplest query tooks 10 minutes and server loads goes from 0.35 to 16.94.\n>>\n>> EXPLAIN select * from USERS where email like '%bijayant.kumar%';\n>> QUERY PLAN\n>> --------------------------------------------------------------\n>> Seq Scan on USERS (cost=0.00..54091.84 rows=1 width=161)\n>> Filter: ((email)::text ~~ '%bijayant.kumar%'::text)\n>> (2 rows)\n>\n> You're scanning ~ 54094 sequential pages to retrieve 1 row. Note\n> that explain analyze is generally a better choice, it gives more data\n> useful for troubleshooting.\n>\n> Definitely need a vacuum full on this table, likely followed by a reindex.\n\nThis is a LIKE query with a wildcard at the start of the string to\nmatch, reindexing won't help much.\n\n\nIan Barwick\n", "msg_date": "Tue, 24 Jun 2008 18:33:10 +0900", "msg_from": "\"Ian Barwick\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": ">> Not necessarily, the 'cost' depends on\n>> seq_page_cost and there might be\n>> other value than 1 (which is the default). 
A better\n>> approach is\n>>\n>> SELECT relpages, reltuples FROM pg_class WHERE relname =\n>> 'users';\n>>\n>> which reads the values from system catalogue.\n>>\n> The Output of query on the Slow Server\n>\n> SELECT relpages, reltuples FROM pg_class WHERE relname ='users';\n> relpages | reltuples\n> ----------+-----------\n> 54063 | 2307\n> (1 row)\n>\n> The Output of query on the old server which is fast\n>\n> relpages | reltuples\n> ----------+-----------\n> 42 | 1637\n>\n>\n\nThis definitely confirms the suspicion about dead tuples etc. On the old\nserver the table has 1637 tuples and occupies just 42 pages (i.e. 330kB\nwith 8k pages), which gives about 0.025 of a page (0.2kB per) per row.\n\nLet's suppose the characteristics of data (row sizes, etc.) are the same\non both servers - in that case the 2307 rows should occuppy about 58\npages, but as you can see from the first output it occupies 54063, i.e.\n400MB instead of 450kB.\n\n>> > Definitely need a vacuum full on this table, likely\n>> followed by a reindex.\n>>\n>\n> The Slow server load increases whenever i run a simple query, is it the\n> good idea to run VACUUM full on the live server's database now or it\n> should be run when the traffic is very low may be in weekend.\n\nThe load increases because with the queries you've sent the database has\nto read the whole table (sequential scan) and may be spread through the\ndisk (thus the disk has to seek).\n\nI'd recommend running CLUSTER instead of VACUUM - that should be much\nfaster in this case. It will lock the table, but the performance already\nsucks, so I'd probably prefer a short downtime with a much faster\nprocessing after that.\n\n>\n>> Yes, that's true. I guess the table holds a lot of dead\n>> tuples. I'm not\n>> sure why this happens on one server (the new one) and not\n>> on the other\n>> one. I guess the original one uses some automatic vacuuming\n>> (autovacuum,\n>> cron job, or something like that).\n>\n> There was nothing related to VACUUM of database in the crontab.\n\nIn that case there's something running vacuum - maybe autovacuum (see\npostgresql.conf), or so.\n\n>> As someone already posted, clustering the table (by primary\n>> key for\n>> example) should be much faster than vacuuming and give\n>> better performance\n>> in the end. See\n>>\n>> http://www.postgresql.org/docs/8.3/interactive/sql-cluster.html\n>>\n>> The plain reindex won't help here - it won't remove\n>> dead tuples.\n>>\n> I am new to Postgres database, i didnt understand the \"indexing\" part. Is\n> it related to PRIMARY_KEY column of the table?\n\nNot sure what you mean by the 'nd\n\nPrinciple of clustering is quite simple - by sorting the table according\nto an index (by the columns in the index) you may get better performance\nwhen using the index. Another 'bonus' is that it compacts the table on the\ndisk, so disk seeking is less frequent. These two effects may mean a\nserious increase of performance. You may cluster according to any index on\nthe table, not just by primary key - just choose the most frequently used\nindex.\n\nSure, there are some drawbacks - it locks the table, so you may not use it\nwhen the command is running. It's not an incremental operation, the order\nis not enforced when modifying the table - when you modify a row the new\nversion won't respect the order and you have to run the CLUSTER command\nfrom time to time. 
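(As a purely illustrative sketch -- assuming the primary key index is called users_pkey, which may not match your schema -- the periodic cleanup could be as simple as:\n\nCLUSTER users_pkey ON users;\nANALYZE users;\n\nand on 8.1 you can avoid most of the bloat in the first place by enabling autovacuum in postgresql.conf:\n\nstats_start_collector = on\nstats_row_level = on    # required for autovacuum on 8.1\nautovacuum = on\n\n)\n\n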
And it's possible to cluster by one index only.\n\n>\n> Should i have to run:- CLUSTER USERS using 'username';\n\nI guess 'username' is a column, so it won't work. You have to choose an\nindex (I'd recommend the primary key index, i.e. the one with _pk at the\nend).\n\nTomas\n\n", "msg_date": "Tue, 24 Jun 2008 12:02:08 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "\n> SELECT relpages, reltuples FROM pg_class WHERE relname ='users';\n> relpages | reltuples\n> ----------+-----------\n> 54063 | 2307\n> (1 row)\n\n\tThis is a horribly bloated table.\n\n> The Output of query on the old server which is fast\n>\n> relpages | reltuples\n> ----------+-----------\n> 42 | 1637\n\n\n\tThis is a healthy table.\n\n\tYou need to clean up the users table.\n\tFor this the easiest way is either to VACUUM FULL or CLUSTER it. CLUSTER \nwill be faster in your case. Use whatever index makes sense, or even the \nPK.\n\n> The Slow server load increases whenever i run a simple query, is it the \n> good idea to run VACUUM full on the live server's database now or it \n> should be run when the traffic is very low may be in weekend.\n\n\tUse CLUSTER.\n\tIt is blocking so your traffic will suffer during the operation, which \nshould not take very long. Since you have very few rows, most of the \nneeded time will be reading the table from disk. I would suggest to do it \nright now.\n\n\tCLUSTER users_pk ON users;\n\n\tThen, configure your autovacuum so it runs often enough. On a small table \nlike this (once cleaned up) VACUUM will be very fast, 42 pages should take \njust a couple tens of ms to vacuum, so you can do it often.\n\n\n\n\n", "msg_date": "Tue, 24 Jun 2008 13:25:45 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "Thank you and all very much for your support. Now i have understood the problem related to my server. I will try the suggested thing like CLUSTER and then let you all know what happens after that.\n\nOnce again Thanking you all.\n\nBijayant Kumar\n\n\n--- On Tue, 24/6/08, [email protected] <[email protected]> wrote:\n\n> From: [email protected] <[email protected]>\n> Subject: Re: [PERFORM] Postgresql is very slow\n> To: [email protected]\n> Cc: [email protected]\n> Date: Tuesday, 24 June, 2008, 3:32 PM\n> >> Not necessarily, the 'cost' depends on\n> >> seq_page_cost and there might be\n> >> other value than 1 (which is the default). A\n> better\n> >> approach is\n> >>\n> >> SELECT relpages, reltuples FROM pg_class WHERE\n> relname =\n> >> 'users';\n> >>\n> >> which reads the values from system catalogue.\n> >>\n> > The Output of query on the Slow Server\n> >\n> > SELECT relpages, reltuples FROM pg_class WHERE relname\n> ='users';\n> > relpages | reltuples\n> > ----------+-----------\n> > 54063 | 2307\n> > (1 row)\n> >\n> > The Output of query on the old server which is fast\n> >\n> > relpages | reltuples\n> > ----------+-----------\n> > 42 | 1637\n> >\n> >\n> \n> This definitely confirms the suspicion about dead tuples\n> etc. On the old\n> server the table has 1637 tuples and occupies just 42 pages\n> (i.e. 330kB\n> with 8k pages), which gives about 0.025 of a page (0.2kB\n> per) per row.\n> \n> Let's suppose the characteristics of data (row sizes,\n> etc.) 
are the same\n> on both servers - in that case the 2307 rows should occuppy\n> about 58\n> pages, but as you can see from the first output it occupies\n> 54063, i.e.\n> 400MB instead of 450kB.\n> \n> >> > Definitely need a vacuum full on this table,\n> likely\n> >> followed by a reindex.\n> >>\n> >\n> > The Slow server load increases whenever i run a simple\n> query, is it the\n> > good idea to run VACUUM full on the live server's\n> database now or it\n> > should be run when the traffic is very low may be in\n> weekend.\n> \n> The load increases because with the queries you've sent\n> the database has\n> to read the whole table (sequential scan) and may be spread\n> through the\n> disk (thus the disk has to seek).\n> \n> I'd recommend running CLUSTER instead of VACUUM - that\n> should be much\n> faster in this case. It will lock the table, but the\n> performance already\n> sucks, so I'd probably prefer a short downtime with a\n> much faster\n> processing after that.\n> \n> >\n> >> Yes, that's true. I guess the table holds a\n> lot of dead\n> >> tuples. I'm not\n> >> sure why this happens on one server (the new one)\n> and not\n> >> on the other\n> >> one. I guess the original one uses some automatic\n> vacuuming\n> >> (autovacuum,\n> >> cron job, or something like that).\n> >\n> > There was nothing related to VACUUM of database in the\n> crontab.\n> \n> In that case there's something running vacuum - maybe\n> autovacuum (see\n> postgresql.conf), or so.\n> \n> >> As someone already posted, clustering the table\n> (by primary\n> >> key for\n> >> example) should be much faster than vacuuming and\n> give\n> >> better performance\n> >> in the end. See\n> >>\n> >>\n> http://www.postgresql.org/docs/8.3/interactive/sql-cluster.html\n> >>\n> >> The plain reindex won't help here - it\n> won't remove\n> >> dead tuples.\n> >>\n> > I am new to Postgres database, i didnt understand the\n> \"indexing\" part. Is\n> > it related to PRIMARY_KEY column of the table?\n> \n> Not sure what you mean by the 'nd\n> \n> Principle of clustering is quite simple - by sorting the\n> table according\n> to an index (by the columns in the index) you may get\n> better performance\n> when using the index. Another 'bonus' is that it\n> compacts the table on the\n> disk, so disk seeking is less frequent. These two effects\n> may mean a\n> serious increase of performance. You may cluster according\n> to any index on\n> the table, not just by primary key - just choose the most\n> frequently used\n> index.\n> \n> Sure, there are some drawbacks - it locks the table, so you\n> may not use it\n> when the command is running. It's not an incremental\n> operation, the order\n> is not enforced when modifying the table - when you modify\n> a row the new\n> version won't respect the order and you have to run the\n> CLUSTER command\n> from time to time. And it's possible to cluster by one\n> index only.\n> \n> >\n> > Should i have to run:- CLUSTER USERS using\n> 'username';\n> \n> I guess 'username' is a column, so it won't\n> work. You have to choose an\n> index (I'd recommend the primary key index, i.e. 
the\n> one with _pk at the\n> end).\n> \n> Tomas\n> \n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nSend instant messages to your online friends http://uk.messenger.yahoo.com \n", "msg_date": "Tue, 24 Jun 2008 05:12:53 -0700 (PDT)", "msg_from": "bijayant kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql is very slow" }, { "msg_contents": "\nI've a table with about 34601755 rows ,when I execute 'update msg_table set\ntype=0;' is very very slow, cost several hours, but still not complete?\n\nWhy postgresql is so slowly? Is the PG MVCC problem? \n\nBut I try it on Mysql, the same table and rows, it only cost about 340\nseconds.\n\nAny idea for the problem?\n\n\nMy machine config:\n\tMemory 8G, 8 piece 15K disk , 2CPU(Quad-Core) AMD\t\n\tOS: Red Hat AS4\n\nMy postgres.conf main parameter is following:\n\n\nshared_buffers = 5GB # min 128kB or max_connections*16kB\n # (change requires restart)\ntemp_buffers = 512MB # min 800kB\nwork_mem = 400MB # min 64kB\nmaintenance_work_mem = 600MB # min 1MB\nmax_fsm_pages = 262144 # 2G min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 2000 # min 100, ~70 bytes each\n\nbgwriter_delay = 20ms # 10-10000ms between rounds\nbgwriter_lru_maxpages = 500 # 0-1000 max buffers written/round\nbgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers \n\n", "msg_date": "Wed, 25 Jun 2008 11:12:03 +0800", "msg_from": "\"jay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Postgresql update op is very very slow" }, { "msg_contents": "\nOn Jun 24, 2008, at 9:12 PM, jay wrote:\n\n>\n> I've a table with about 34601755 rows ,when I execute 'update \n> msg_table set\n> type=0;' is very very slow, cost several hours, but still not \n> complete?\n>\n> Why postgresql is so slowly? Is the PG MVCC problem?\n>\n> But I try it on Mysql, the same table and rows, it only cost about 340\n> seconds.\n>\n> Any idea for the problem?\n>\n>\n> My machine config:\n> \tMemory 8G, 8 piece 15K disk , 2CPU(Quad-Core) AMD\t\n> \tOS: Red Hat AS4\n>\n> My postgres.conf main parameter is following:\n>\n>\n\nHi Jay,\n\nIs the \"type\" used in an index? Have you properly increased your \nnumber of checkpoint segments? Any warnings in in your log file about \nexcessive checkpointing?\n\nCheers,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n\n\n\n\n\n\n", "msg_date": "Tue, 24 Jun 2008 23:02:04 -0600", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql update op is very very slow" }, { "msg_contents": "Thank you all very very much. After running CLUSTER on the \"USERS\" table, now the speed is very very good. Now i have also understood the importance of VACUUM and ANALYZE.\n\nOnce again thank you all very very much. You guys rock.\n\n\n--- On Tue, 24/6/08, [email protected] <[email protected]> wrote:\n\n> From: [email protected] <[email protected]>\n> Subject: Re: [PERFORM] Postgresql is very slow\n> To: [email protected]\n> Cc: [email protected]\n> Date: Tuesday, 24 June, 2008, 3:32 PM\n> >> Not necessarily, the 'cost' depends on\n> >> seq_page_cost and there might be\n> >> other value than 1 (which is the default). 
A\n> better\n> >> approach is\n> >>\n> >> SELECT relpages, reltuples FROM pg_class WHERE\n> relname =\n> >> 'users';\n> >>\n> >> which reads the values from system catalogue.\n> >>\n> > The Output of query on the Slow Server\n> >\n> > SELECT relpages, reltuples FROM pg_class WHERE relname\n> ='users';\n> > relpages | reltuples\n> > ----------+-----------\n> > 54063 | 2307\n> > (1 row)\n> >\n> > The Output of query on the old server which is fast\n> >\n> > relpages | reltuples\n> > ----------+-----------\n> > 42 | 1637\n> >\n> >\n> \n> This definitely confirms the suspicion about dead tuples\n> etc. On the old\n> server the table has 1637 tuples and occupies just 42 pages\n> (i.e. 330kB\n> with 8k pages), which gives about 0.025 of a page (0.2kB\n> per) per row.\n> \n> Let's suppose the characteristics of data (row sizes,\n> etc.) are the same\n> on both servers - in that case the 2307 rows should occuppy\n> about 58\n> pages, but as you can see from the first output it occupies\n> 54063, i.e.\n> 400MB instead of 450kB.\n> \n> >> > Definitely need a vacuum full on this table,\n> likely\n> >> followed by a reindex.\n> >>\n> >\n> > The Slow server load increases whenever i run a simple\n> query, is it the\n> > good idea to run VACUUM full on the live server's\n> database now or it\n> > should be run when the traffic is very low may be in\n> weekend.\n> \n> The load increases because with the queries you've sent\n> the database has\n> to read the whole table (sequential scan) and may be spread\n> through the\n> disk (thus the disk has to seek).\n> \n> I'd recommend running CLUSTER instead of VACUUM - that\n> should be much\n> faster in this case. It will lock the table, but the\n> performance already\n> sucks, so I'd probably prefer a short downtime with a\n> much faster\n> processing after that.\n> \n> >\n> >> Yes, that's true. I guess the table holds a\n> lot of dead\n> >> tuples. I'm not\n> >> sure why this happens on one server (the new one)\n> and not\n> >> on the other\n> >> one. I guess the original one uses some automatic\n> vacuuming\n> >> (autovacuum,\n> >> cron job, or something like that).\n> >\n> > There was nothing related to VACUUM of database in the\n> crontab.\n> \n> In that case there's something running vacuum - maybe\n> autovacuum (see\n> postgresql.conf), or so.\n> \n> >> As someone already posted, clustering the table\n> (by primary\n> >> key for\n> >> example) should be much faster than vacuuming and\n> give\n> >> better performance\n> >> in the end. See\n> >>\n> >>\n> http://www.postgresql.org/docs/8.3/interactive/sql-cluster.html\n> >>\n> >> The plain reindex won't help here - it\n> won't remove\n> >> dead tuples.\n> >>\n> > I am new to Postgres database, i didnt understand the\n> \"indexing\" part. Is\n> > it related to PRIMARY_KEY column of the table?\n> \n> Not sure what you mean by the 'nd\n> \n> Principle of clustering is quite simple - by sorting the\n> table according\n> to an index (by the columns in the index) you may get\n> better performance\n> when using the index. Another 'bonus' is that it\n> compacts the table on the\n> disk, so disk seeking is less frequent. These two effects\n> may mean a\n> serious increase of performance. You may cluster according\n> to any index on\n> the table, not just by primary key - just choose the most\n> frequently used\n> index.\n> \n> Sure, there are some drawbacks - it locks the table, so you\n> may not use it\n> when the command is running. 
It's not an incremental\n> operation, the order\n> is not enforced when modifying the table - when you modify\n> a row the new\n> version won't respect the order and you have to run the\n> CLUSTER command\n> from time to time. And it's possible to cluster by one\n> index only.\n> \n> >\n> > Should i have to run:- CLUSTER USERS using\n> 'username';\n> \n> I guess 'username' is a column, so it won't\n> work. You have to choose an\n> index (I'd recommend the primary key index, i.e. the\n> one with _pk at the\n> end).\n> \n> Tomas\n> \n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nSend instant messages to your online friends http://uk.messenger.yahoo.com \n", "msg_date": "Tue, 24 Jun 2008 22:41:40 -0700 (PDT)", "msg_from": "bijayant kumar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SOLVED] Postgresql is very slow" }, { "msg_contents": "Hi Rusty,\n\n The \"type\" is not in a index. The number of checkpoint segement is 64\nand PG version is 8.3.3\n\nAfter turn on log, I found something about checkpoints.\n\n \n\nLOG: 00000: checkpoint complete: wrote 174943 buffers (26.7%); 0\ntransaction log file(s) added, 0 removed, 0 recycled; write=207.895 s,\nsync=12.282 s, total=220.205 s\n\nLOCATION: LogCheckpointEnd, xlog.c:5640\n\nLOG: 00000: checkpoint starting: xlog\n\nLOCATION: LogCheckpointStart, xlog.c:5604\n\nLOG: 00000: duration: 11060.593 ms statement: select * from\npg_stat_bgwriter;\n\nLOCATION: exec_simple_query, postgres.c:1063\n\nLOG: 00000: checkpoint complete: wrote 173152 buffers (26.4%); 0\ntransaction log file(s) added, 0 removed, 64 recycled; write=217.455 s,\nsync=5.059 s, total=222.874 s\n\nLOCATION: LogCheckpointEnd, xlog.c:5640\n\nLOG: 00000: checkpoint starting: xlog\n\nLOCATION: LogCheckpointStart, xlog.c:5604\n\n \n\npostgres=# select * from pg_stat_bgwriter;\n\n checkpoints_timed | checkpoints_req | buffers_checkpoint | buffers_clean |\nmaxwritten_clean | buffers_backend | buffers_alloc \n\n-------------------+-----------------+--------------------+---------------+-\n-----------------+-----------------+---------------\n\n 292 | 93 | 16898561 | 243176 |\n2303 | 3989550 | 3694189\n\n(1 row)\n\n \n\n Is checkpoint too frequency lead the problem?\n\nIf it’s, how to solve it ?\n\n \n\n \n\n \n\n \n\n-----邮件原件-----\n发件人: [email protected]\n[mailto:[email protected]] 代表 Rusty Conover\n发送时间: 2008年6月25日 13:02\n收件人: jay\n抄送: [email protected]\n主题: Re: [PERFORM] Postgresql update op is very very slow\n\n \n\n \n\nOn Jun 24, 2008, at 9:12 PM, jay wrote:\n\n \n\n> \n\n> I've a table with about 34601755 rows ,when I execute 'update \n\n> msg_table set\n\n> type=0;' is very very slow, cost several hours, but still not \n\n> complete?\n\n> \n\n> Why postgresql is so slowly? Is the PG MVCC problem?\n\n> \n\n> But I try it on Mysql, the same table and rows, it only cost about 340\n\n> seconds.\n\n> \n\n> Any idea for the problem?\n\n> \n\n> \n\n> My machine config:\n\n> Memory 8G, 8 piece 15K disk , 2CPU(Quad-Core) AMD \n\n> OS: Red Hat AS4\n\n> \n\n> My postgres.conf main parameter is following:\n\n> \n\n> \n\n \n\nHi Jay,\n\n \n\nIs the \"type\" used in an index? Have you properly increased your \n\nnumber of checkpoint segments? 
Any warnings in in your log file about \n\nexcessive checkpointing?\n\n \n\nCheers,\n\n \n\nRusty\n\n--\n\nRusty Conover\n\nInfoGears Inc.\n\nhttp://www.infogears.com\n\n \n\n \n\n \n\n \n\n \n\n \n\n \n\n-- \n\nSent via pgsql-performance mailing list ([email protected])\n\nTo make changes to your subscription:\n\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi Rusty,\n     The \"type\" is\nnot in a index. The number of checkpoint segement is 64 and PG version is 8.3.3\nAfter turn on log, I found something about checkpoints.\n \nLOG:  00000: checkpoint complete: wrote 174943 buffers (26.7%); 0\ntransaction log file(s) added, 0 removed, 0 recycled; write=207.895 s,\nsync=12.282 s, total=220.205 s\nLOCATION:  LogCheckpointEnd, xlog.c:5640\nLOG:  00000: checkpoint starting: xlog\nLOCATION:  LogCheckpointStart, xlog.c:5604\nLOG:  00000: duration: 11060.593 ms  statement: select * from\npg_stat_bgwriter;\nLOCATION:  exec_simple_query, postgres.c:1063\nLOG:  00000: checkpoint complete: wrote 173152 buffers (26.4%); 0\ntransaction log file(s) added, 0 removed, 64 recycled; write=217.455 s,\nsync=5.059 s, total=222.874 s\nLOCATION:  LogCheckpointEnd, xlog.c:5640\nLOG:  00000: checkpoint starting: xlog\nLOCATION:  LogCheckpointStart, xlog.c:5604\n \npostgres=# select * from pg_stat_bgwriter;\n checkpoints_timed | checkpoints_req | buffers_checkpoint |\nbuffers_clean | maxwritten_clean | buffers_backend | buffers_alloc \n-------------------+-----------------+--------------------+---------------+------------------+-----------------+---------------\n              \n292\n|             \n93 |           16898561\n|        243176\n|             2303\n|         3989550\n|       3694189\n(1 row)\n \n     Is checkpoint too frequency lead the problem?\nIf it’s, how to\nsolve it ?\n \n \n \n \n-----邮件原件-----\n发件人: [email protected]\n[mailto:[email protected]] 代表 Rusty\nConover\n发送时间: 2008年6月25日 13:02\n收件人: jay\n抄送: [email protected]\n主题: Re: [PERFORM] Postgresql update op is very very\nslow\n \n \nOn Jun 24, 2008, at 9:12 PM, jay wrote:\n \n> \n> I've a table with about 34601755 rows ,when I execute 'update \n\n> msg_table set\n> type=0;' is very very slow, cost several hours, but still not \n\n> complete?\n> \n> Why postgresql is so slowly? Is the PG MVCC problem?\n> \n> But I try it on Mysql, the same table and rows, it only cost about\n340\n> seconds.\n> \n> Any idea for the problem?\n> \n> \n> My machine config:\n>    Memory 8G,\n8 piece 15K disk , 2CPU(Quad-Core) AMD  \n>    OS: Red Hat AS4\n> \n> My postgres.conf main parameter is following:\n> \n> \n \nHi Jay,\n \nIs the \"type\" used in an index?  Have you properly\nincreased your  \nnumber of checkpoint segments?  
Any warnings in in your log file\nabout  \nexcessive checkpointing?\n \nCheers,\n \nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n \n \n \n \n \n \n \n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 25 Jun 2008 15:39:31 +0800", "msg_from": "\"jay\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?gb2312?B?tPC4tDogW1BFUkZPUk1dIFBvc3RncmVzcWwgdXBkYXRlIG9wIGlzIA==?=\n\t=?gb2312?B?dmVyeSB2ZXJ5IHNsb3c=?=" }, { "msg_contents": "Hi, Could anybody comment on the postgres-pr driver, from performance point \nof view, is it faster than others?\n\nWhat other options are available to access postgresql in ruby/ruby on rails?\n\nwhich of them is most popular, better?\n\nregards\nAmol\n\n\nDISCLAIMER\n==========\nThis e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.\n\n\n\n\n\n\n\n\n\n\nHi, Could anybody comment on the postgres-pr driver, from\nperformance point \nof view, is it faster than others?\n\nWhat other options are available to access postgresql in ruby/ruby on rails? \nwhich of them is most popular, better?\n\nregards\nAmol\n\nDISCLAIMER\n==========\nThis e-mail may contain privileged and confidential information which is the property of Persistent Systems Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Ltd. does not accept any liability for virus infected mails.", "msg_date": "Wed, 25 Jun 2008 13:16:12 +0530", "msg_from": "\"Amol Pujari\" <[email protected]>", "msg_from_op": false, "msg_subject": "PostgreSQL and Ruby on Rails - better accessibility" }, { "msg_contents": "Hi,\n\ni, Could anybody comment on the postgres-pr driver, from performance point\n> of view, is it faster than others?\n>\nI guess, a more appropriate place to check out for ruby/rails postgres\ndrivers would be rubyforge.org itself. There is a libpq based postgres\ndriver available there (ruby-postgres) but YMMV.\n\n>\n> What other options are available to access postgresql in ruby/ruby on\n> rails?\n> which of them is most popular, better?\n>\nAgain refer to rubyforge.org. There is a RubyES project amongst others. And\nlastly and more importantly I think this list is appropriate for Postgres\ndatabase backend related performance questions only.\n\nRegards,\nNikhils\n-- \nEnterpriseDB http://www.enterprisedb.com\n\nHi, \ni, Could anybody comment on the postgres-pr driver, from\nperformance point \nof view, is it faster than others?\nI guess, a more appropriate place to check out for ruby/rails postgres drivers would be rubyforge.org itself. There is a libpq based postgres driver available there (ruby-postgres) but YMMV. \n\nWhat other options are available to access postgresql in ruby/ruby on rails? 
\nwhich of them is most popular, better?\nAgain refer to rubyforge.org. There is a RubyES project amongst others. And lastly and more importantly I think this list is appropriate for Postgres database backend related performance questions only.\nRegards,Nikhils-- EnterpriseDB http://www.enterprisedb.com", "msg_date": "Wed, 25 Jun 2008 14:03:36 +0530", "msg_from": "Nikhils <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and Ruby on Rails - better accessibility" }, { "msg_contents": "jay wrote:\n> I've a table with about 34601755 rows ,when I execute 'update msg_table set\n> type=0;' is very very slow, cost several hours, but still not complete?\n> \n> Why postgresql is so slowly? Is the PG MVCC problem? \n\nPossibly. Because of MVCC, a full-table update will actually create a \nnew version of each row.\n\nI presume that's a one-off query, or a seldom-run batch operation, and \nnot something your application needs to do often. In that case, you \ncould drop all indexes, and recreate them after the update, which should \nhelp a lot:\n\nBEGIN;\nDROP INDEX <index name>, <index name 2>, ...; -- for each index\nUPDATE msg_table SET type = 0;\nCREATE INDEX ... -- Recreate indexes\nCOMMIT;\n\nOr even better, instead of using UPDATE, do a SELECT INTO a new table, \ndrop the old one, and rename the new one in its place. That has the \nadvantage that the new table doesn't contain the old row version, so you \ndon't need to vacuum right away to reclaim the space.\n\nActually, there's an even more clever trick to do roughly the same thing:\n\nALTER TABLE msg_table ALTER COLUMN type TYPE int4 USING 0;\n\n(assuming type is int4, replace with the actual data type if necessary)\n\nThis will rewrite the table, similar to a DROP + CREATE, and rebuild all \nindexes. But all in one command.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 25 Jun 2008 13:11:02 +0300", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql update op is very very slow" }, { "msg_contents": "On Wed, 25 Jun 2008, jay wrote:\n\n> Why postgresql is so slowly? Is the PG MVCC problem?\n\nUpdate is extremely intensive not just because of MVCC, but because a \nnew version of all the rows are being written out. This forces both lots \nof database commits and lots of complicated disk I/O to accomplish.\n\nCouple of suggestions:\n-Increase checkpoint_segments a lot; start with a 10X increase to 30.\n-If you can afford some potential for data loss in case of a crash, \nconsider using async commit:\nhttp://www.postgresql.org/docs/8.3/static/wal-async-commit.html\n\n> \tMemory 8G, 8 piece 15K disk , 2CPU(Quad-Core) AMD\n\nIs there any sort of write cache on the controller driving those disks? \nIf not, or if you've turned it off, that would explain your problem right \nthere, because you'd be limited by how fast you can sync to disk after \neach update. Async commit is the only good way around that. If you have \na good write cache, that feature won't buy you as much improvement.\n\n> bgwriter_delay = 20ms # 10-10000ms between rounds\n> bgwriter_lru_maxpages = 500 # 0-1000 max buffers written/round\n> bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers\n\nThis a bit much and the background writer can get in the way in this \nsituation. You might turn it off (bgwriter_lru_maxpages = 0) until you've \nsorted through everything else, then increase that parameter again. 
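Putting those pieces together, a first-pass postgresql.conf sketch for this kind of bulk-update load might look like the following; the values are illustrative starting points, not numbers tuned for your hardware:\n\ncheckpoint_segments = 30              # ~10X the default, watch the logs\ncheckpoint_completion_target = 0.7    # spread checkpoint writes out\nsynchronous_commit = off              # only if losing the last moments of work in a crash is acceptable\nbgwriter_lru_maxpages = 0             # temporarily, until the rest is sorted out\n\nAll of these should be picked up with a reload rather than a full restart.\n\n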
The \ncombination of 20ms and 500 pages is far faster than your disk system can \npossibly handle anyway; 100ms/500 or 20ms/100 (those two are approximately \nthe same) would be as aggressive as I'd even consider with an 8-disk \narray, and something lower is probably more appropriate for you.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 25 Jun 2008 09:50:39 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql update op is very very slow" }, { "msg_contents": "\tI know the problem, because there are about 35 million rows , which\ncost about 12G disk space and checkpoint segments use 64, but update\noperation is in one transaction which lead fast fill up the checkpoint\nsegments and lead do checkpoints frequently, but checkpoints will cost lots\nresources, so update operation become slowly and slowly and bgwrite won't\nwrite because it's not commit yet.\nCreate a new table maybe a quick solution, but it's not appropriated in some\ncases.\n\tIf we can do commit very 1000 row per round, it may resolve the\nproblem.\nBut PG not support transaction within function yet? \n\n-----邮件原件-----\n发件人: [email protected]\n[mailto:[email protected]] 代表 Heikki Linnakangas\n发送时间: 2008年6月25日 18:11\n收件人: jay\n抄送: [email protected]\n主题: Re: [PERFORM] Postgresql update op is very very slow\n\njay wrote:\n> I've a table with about 34601755 rows ,when I execute 'update msg_table\nset\n> type=0;' is very very slow, cost several hours, but still not complete?\n> \n> Why postgresql is so slowly? Is the PG MVCC problem? \n\nPossibly. Because of MVCC, a full-table update will actually create a \nnew version of each row.\n\nI presume that's a one-off query, or a seldom-run batch operation, and \nnot something your application needs to do often. In that case, you \ncould drop all indexes, and recreate them after the update, which should \nhelp a lot:\n\nBEGIN;\nDROP INDEX <index name>, <index name 2>, ...; -- for each index\nUPDATE msg_table SET type = 0;\nCREATE INDEX ... -- Recreate indexes\nCOMMIT;\n\nOr even better, instead of using UPDATE, do a SELECT INTO a new table, \ndrop the old one, and rename the new one in its place. That has the \nadvantage that the new table doesn't contain the old row version, so you \ndon't need to vacuum right away to reclaim the space.\n\nActually, there's an even more clever trick to do roughly the same thing:\n\nALTER TABLE msg_table ALTER COLUMN type TYPE int4 USING 0;\n\n(assuming type is int4, replace with the actual data type if necessary)\n\nThis will rewrite the table, similar to a DROP + CREATE, and rebuild all \nindexes. But all in one command.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 26 Jun 2008 18:04:18 +0800", "msg_from": "\"jay\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?gb2312?B?tPC4tDogW1BFUkZPUk1dIFBvc3RncmVzcWwgdXBkYXRlIG9wIGlzIA==?=\n\t=?gb2312?B?dmVyeSB2ZXJ5IHNsb3c=?=" }, { "msg_contents": "2008/6/26 jay <[email protected]>:\n\n> If we can do commit very 1000 row per round, it may resolve the\n> problem.\n> But PG not support transaction within function yet?\n>\n\nYeah, transaction control is not supported inside functions. There are\nsome hacks using dblink to do transactions inside functions. 
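If the goal is just to avoid one huge transaction, an untested sketch of the usual client-side alternative -- assuming here that msg_table has an integer primary key named id, which may not be true -- is to walk the key range in batches and commit between them:\n\nUPDATE msg_table SET type = 0\n WHERE id BETWEEN 1 AND 100000\n   AND type IS DISTINCT FROM 0;   -- skip rows that already have the target value\n-- COMMIT here in the driving script, then repeat for the next id range\n\nThe dblink hack achieves a similar effect from inside the server. 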
You may\nwant to check that out.\n\nI had suggested another hack in the past for very simplistic updates,\nwhen you are sure that the tuple length does not change between\nupdates and you are ready to handle half updated table if there is a\ncrash or failure in between. May be for your case, where you are\nupdating a single column of the entire table and setting it to some\ndefault value for all the rows, it may work fine. But please be aware\nof data consistency issues before you try that. And it must be once in\na lifetime kind of hack.\n\nhttp://postgresql-in.blogspot.com/2008/04/postgresql-in-place-update.html\n\nThanks,\nPavan\n\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 26 Jun 2008 16:01:42 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?GB2312?B?UmU6IFtQRVJGT1JNXSC08Li0OiBbUEVSRk9STV0gUG9zdGc=?=\n\t=?GB2312?B?cmVzcWwgdXBkYXRlIG9wIGlzIHZlcnkgdmVyeSBzbG93?=" }, { "msg_contents": "jay wrote:\n> \tI know the problem, because there are about 35 million rows , which\n> cost about 12G disk space and checkpoint segments use 64, but update\n> operation is in one transaction which lead fast fill up the checkpoint\n> segments and lead do checkpoints frequently, but checkpoints will cost lots\n> resources, so update operation become slowly and slowly and bgwrite won't\n> write because it's not commit yet.\n> Create a new table maybe a quick solution, but it's not appropriated in some\n> cases.\n> \tIf we can do commit very 1000 row per round, it may resolve the\n> problem.\n\nCommitting more frequently won't help you with checkpoints. The updates \nwill generate just as much WAL regardless of how often you commit, so \nyou will have to checkpoint just as often. And commits have no effect on \nbgwriter either; bgwriter will write just as much regardless of how \noften you commit.\n\nOne idea would be to partition the table vertically, that is, split the \ntable into two tables, so that the columns that you need to update like \nthat are in one table, together with the primary key, and the rest of \nthe columns are in another table. That way the update won't need to scan \nor write the columns that are not changed. You can create a view on top \nof the two tables to make them look like the original table to the \napplication.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 26 Jun 2008 15:19:05 +0300", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ??: Postgresql update op is very very slow" }, { "msg_contents": "\nHi -\n\nI have been following this thread and find some of the recommendations\nreally surprising. I understand that MVCC necessarily creates overhead,\nin-place updates would not be safe against crashes etc. but have a hard\ntime believing that this is such a huge problem for RDBMS in 2008. How do\nlarge databases treat mass updates? AFAIK both DB2 and Oracle use MVCC\n(maybe a different kind?) 
as well, but I cannot believe that large updates\nstill pose such big problems.\nAre there no options (algorithms) for adaptively choosing different\nupdate strategies that do not incur the full MVCC overhead?\n\nHolger\n\n(Disclaimer: I'm not a professional DBA, just a curious developer).\n\n\n", "msg_date": "Thu, 26 Jun 2008 14:40:59 +0200", "msg_from": "\"Holger Hoffstaette\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ??: Postgresql update op is very very slow" }, { "msg_contents": "Holger Hoffstaette wrote:\n> Hi -\n> \n> I have been following this thread and find some of the recommendations\n> really surprising. I understand that MVCC necessarily creates overhead,\n> in-place updates would not be safe against crashes etc. but have a hard\n> time believing that this is such a huge problem for RDBMS in 2008. How do\n> large databases treat mass updates? AFAIK both DB2 and Oracle use MVCC\n> (maybe a different kind?) as well, but I cannot believe that large updates\n> still pose such big problems.\n> Are there no options (algorithms) for adaptively choosing different\n> update strategies that do not incur the full MVCC overhead?\n\nI think Pg already does in place updates, or close, if the tuples being \nreplaced aren't referenced by any in-flight transaction. I noticed a \nwhile ago that if I'm doing bulk load/update work, if there aren't any \nother transactions no MVCC bloat seems to occur and updates are faster.\n\nI'd be interested to have this confirmed, as I don't think I've seen it \ndocumented anywhere. Is it a side-effect/benefit of HOT somehow?\n\n--\nCraig Ringer\n\n", "msg_date": "Thu, 26 Jun 2008 21:16:25 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ??: Postgresql update op is very very slow" }, { "msg_contents": "On Thu, Jun 26, 2008 at 02:40:59PM +0200, Holger Hoffstaette wrote:\n\n> large databases treat mass updates? AFAIK both DB2 and Oracle use MVCC\n> (maybe a different kind?) as well, but I cannot believe that large updates\n> still pose such big problems.\n\nDB2 does not use MVCC. This is why lock escalation is such a big\nproblem for them.\n\nOracle uses a kind of MVCC based on rollback segments: your work goes\ninto the rollback segment, so that it can be undone, and the update\nhappens in place. This causes a different kind of pain: you can run\nout of rollback segments (part way through a long-running transaction,\neven) and then have to undo everything in order to do any work at\nall. Every system involves trade-offs, and different systems make\ndifferent ones. The bulk update problem is PostgreSQL's weak spot,\nand for that cost one gets huge other benefits. \n\n> Are there no options (algorithms) for adaptively choosing different\n> update strategies that do not incur the full MVCC overhead?\n\nHow would you pick? But one thing you could do is create the table\nwith a non-standard fill factor, which might allow HOT to work its magic.\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Thu, 26 Jun 2008 09:53:41 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ??: Postgresql update op is very very slow" }, { "msg_contents": "On Thu, Jun 26, 2008 at 09:16:25PM +0800, Craig Ringer wrote:\n\n> I think Pg already does in place updates, or close, if the tuples being \n> replaced aren't referenced by any in-flight transaction. 
I noticed a while \n> ago that if I'm doing bulk load/update work, if there aren't any other \n> transactions no MVCC bloat seems to occur and updates are faster.\n\nAre you on 8.3? That may be HOT working for you. MVCC doesn't get\nturned off if there are no other transactions (it can't: what if\nanother transaction starts part way through yours?).\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Thu, 26 Jun 2008 09:55:01 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ??: Postgresql update op is very very slow" }, { "msg_contents": "Holger Hoffstaette wrote:\n> Hi -\n>\n> I have been following this thread and find some of the recommendations\n> really surprising. I understand that MVCC necessarily creates overhead,\n> in-place updates would not be safe against crashes etc. but have a hard\n> time believing that this is such a huge problem for RDBMS in 2008. How do\n> large databases treat mass updates? AFAIK both DB2 and Oracle use MVCC\n> (maybe a different kind?) as well, but I cannot believe that large updates\n> still pose such big problems.\n> Are there no options (algorithms) for adaptively choosing different\n> update strategies that do not incur the full MVCC overhead?\n> \n\nMy opinion:\n\nAny system that provides cheap UPDATE operations is either not ACID \ncompliant, or is not designed for highly concurrent access, possibly \nboth. By ACID compliant I mean that there both the OLD and NEW need to \ntake space on the hard disk in order to guarantee that if a failure \noccurs in the middle of the transaction, one can select only the OLD \nversions for future transactions, or if it fails after the end fo the \ntransaction, one can select only the NEW versions for future \ntransactions. If both must be on disk, it follows that updates are \nexpensive. Even with Oracle rollback segments - the rollback segments \nneed to be written. Perhaps they will be more sequential, and able to be \nwritten more efficiently, but the data still needs to be written. The \nother option is to make sure that only one person is doing updates at a \ntime, and in this case it becomes possible (although not necessarily \nsafe unless one implements the ACID compliant behaviour described in the \nprevious point) for one operation to complete before the next begins.\n\nThe HOT changes introduced recently into PostgreSQL should reduce the \ncost of updates in many cases (but not all - I imagine that updating ALL \nrows is still expensive).\n\nThere is a third system I can think of, but I think it's more \ntheoretical than practical. That is, remember the list of changes to \neach row/column and \"replay\" them on query. The database isn't ever \nstored in a built state, but is only kept as pointers that allow any \npart of the table to be re-built on access. The UPDATE statement could \nbe recorded cheaply, but queries against the UPDATE statement might be \nvery expensive. 
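Coming back to HOT for a moment, a hedged illustration of a HOT-friendly table on 8.3 -- the column list here is invented for the example -- is to leave slack in each page and keep the frequently updated column out of every index:\n\nCREATE TABLE msg_table (\n    id    bigint PRIMARY KEY,\n    body  text,\n    type  integer      -- updated often, deliberately not indexed\n) WITH (fillfactor = 70);\n\nWith free space on the page and no index on the updated column, the new row version can stay on the same page and the indexes never need to be touched. 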
:-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Thu, 26 Jun 2008 10:53:21 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ??: Postgresql update op is very very slow" }, { "msg_contents": "\"jay\" <[email protected]> writes:\n> \tI know the problem, because there are about 35 million rows , which\n> cost about 12G disk space and checkpoint segments use 64, but update\n> operation is in one transaction which lead fast fill up the checkpoint\n> segments and lead do checkpoints frequently, but checkpoints will cost lots\n> resources, so update operation become slowly and slowly and bgwrite won't\n> write because it's not commit yet.\n> Create a new table maybe a quick solution, but it's not appropriated in some\n> cases.\n> \tIf we can do commit very 1000 row per round, it may resolve the\n> problem.\n\nNo, that's utterly unrelated. Transaction boundaries have nothing to do\nwith checkpoints.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Jun 2008 11:04:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re:\n =?gb2312?B?tPC4tDogW1BFUkZPUk1dIFBvc3RncmVzcWwgdXBkYXRlIG9wIGlzIA==?=\n\t=?gb2312?B?dmVyeSB2ZXJ5IHNsb3c=?=" }, { "msg_contents": "2008/6/26 Pavan Deolasee <[email protected]>:\n> 2008/6/26 jay <[email protected]>:\n>\n>> If we can do commit very 1000 row per round, it may resolve the\n>> problem.\n>> But PG not support transaction within function yet?\n>>\n>\n> Yeah, transaction control is not supported inside functions. There are\n> some hacks using dblink to do transactions inside functions. You may\n> want to check that out.\n\nIf you need autonomous transactions. For most people save points and\ncatching seem to be a n acceptable form of transaction control.\n\n> I had suggested another hack in the past for very simplistic updates,\n> when you are sure that the tuple length does not change between\n> updates and you are ready to handle half updated table if there is a\n> crash or failure in between. May be for your case, where you are\n> updating a single column of the entire table and setting it to some\n> default value for all the rows, it may work fine. But please be aware\n> of data consistency issues before you try that. And it must be once in\n> a lifetime kind of hack.\n>\n> http://postgresql-in.blogspot.com/2008/04/postgresql-in-place-update.html\n\nIn a way that's what pg_bulkloader does.\n", "msg_date": "Thu, 26 Jun 2008 09:59:53 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?GB2312?B?UmU6IFtQRVJGT1JNXSBSZTogW1BFUkZPUk1dILTwuLQ6IFtQRVJGT1JN?=\n\t=?GB2312?B?XSBQb3N0Z3Jlc3FsIHVwZGF0ZSBvcCBpcyB2ZXJ5IHZlcnkgc2xvdw==?=" }, { "msg_contents": "2008/6/26 Tom Lane <[email protected]>:\n> \"jay\" <[email protected]> writes:\n>> I know the problem, because there are about 35 million rows , which\n>> cost about 12G disk space and checkpoint segments use 64, but update\n>> operation is in one transaction which lead fast fill up the checkpoint\n>> segments and lead do checkpoints frequently, but checkpoints will cost lots\n>> resources, so update operation become slowly and slowly and bgwrite won't\n>> write because it's not commit yet.\n>> Create a new table maybe a quick solution, but it's not appropriated in some\n>> cases.\n>> If we can do commit very 1000 row per round, it may resolve the\n>> problem.\n>\n> No, that's utterly unrelated. 
Transaction boundaries have nothing to do\n> with checkpoints.\n\nTrue. But if you update 10000 rows and vacuum you can keep the bloat\nto something reasonable.\n\nOn another note, I haven't seen anyone suggest adding the appropriate\nwhere clause to keep from updating rows that already match. Cheap\ncompared to updating the whole table even if a large chunk aren't a\nmatch. i.e.\n\n... set col=0 where col <>0;\n\nThat should be the first thing you reach for in this situation, if it can help.\n", "msg_date": "Thu, 26 Jun 2008 10:02:31 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?GB2312?B?UmU6IFtQRVJGT1JNXSC08Li0OiBbUEVSRk9STV0gUG9zdGc=?=\n\t=?GB2312?B?cmVzcWwgdXBkYXRlIG9wIGlzIHZlcnkgdmVyeSBzbG93?=" }, { "msg_contents": "On Thu, 26 Jun 2008, Holger Hoffstaette wrote:\n\n> How do large databases treat mass updates? AFAIK both DB2 and Oracle use \n> MVCC (maybe a different kind?) as well\n\nAn intro to the other approaches used by Oracle and DB2 (not MVCC) is at\n\nhttp://wiki.postgresql.org/wiki/Why_PostgreSQL_Instead_of_MySQL:_Comparing_Reliability_and_Speed_in_2007#Transaction_Locking_and_Scalability\n\n(a URL which I really need to shorten one day).\n\n> Are there no options (algorithms) for adaptively choosing different \n> update strategies that do not incur the full MVCC overhead?\n\nIf you stare at the big picture of PostgreSQL's design, you might notice \nthat it usually aims to do things one way and get that implementation \nright for the database's intended audience. That intended audience cares \nabout data integrity and correctness and is willing to suffer the overhead \nthat goes along with operating that way. There's few \"I don't care about \nreliability here so long as it's fast\" switches you can flip, and not \nhaving duplicate code paths to support them helps keep the code simpler \nand therefore more reliable.\n\nThis whole area is one of those good/fast/cheap trios. If you want good \ntransaction guarantees on updates, you either get the hardware and \nsettings right to handle that (!cheap), or it's slow. The idea of \nproviding a !good/fast/cheap option for updates might have some \ntheoretical value, but I think you'd find it hard to get enough support \nfor that idea to get work done on it compared to the other things \ndeveloper time is being spent on right now.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 26 Jun 2008 18:15:14 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ??: Postgresql update op is very very slow" }, { "msg_contents": "On Thu, 26 Jun 2008, Craig Ringer wrote:\n\n> I'd be interested to have this confirmed, as I don't think I've seen it \n> documented anywhere. Is it a side-effect/benefit of HOT somehow?\n\nThe documentation is in README.HOT, for example: \nhttp://anoncvs.postgresql.org/cvsweb.cgi/pgsql/src/backend/access/heap/README.HOT?rev=1.3;content-type=text%2Fplain\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 26 Jun 2008 18:20:15 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ??: Postgresql update op is very very slow" } ]
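To make the batching advice in the thread above concrete, here is a minimal sketch of the chunked-update approach the posters describe. The table name big_table, its integer key id, the column col, and the 10000-row batch size are illustrative assumptions rather than details from the original messages; each batch is issued and committed from the client (a plain function cannot commit partway through), with a VACUUM between batches so later batches can reuse the space freed by earlier ones.

-- one batch, committed on its own, skipping rows that already hold the target value
UPDATE big_table
   SET col = 0
 WHERE col <> 0                    -- avoid rewriting rows that need no change
   AND id BETWEEN 1 AND 10000;     -- hypothetical key range for this batch

-- reclaim dead tuples before moving on to the next id range
VACUUM big_table;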
[ { "msg_contents": "Hi list,\n\n\nWe have a database with lots of small simultaneous writes and reads \n(millions every day) and are looking at buying a good hardware for this.\n\nWhat are your suggestions. What we are currently looking at is.\n\nDual Quad Core Intel\n8 - 12 GB RAM\n\n10 disks total.\n\n4 x 146 GB SAS disk in RAID 1+0 for database\n6 x 750 GB SATA disks in RAID 1+0 or RAID 5 for OS and transactions \nlogs.\n\nGood RAID controller with lots of memory and BBU.\n\nAny hints, recommendations would be greatly appreciated.\n\nCheers,\nHenke\n", "msg_date": "Wed, 25 Jun 2008 12:16:10 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware suggestions for high performance 8.3" }, { "msg_contents": "> We have a database with lots of small simultaneous writes and reads\n> (millions every day) and are looking at buying a good hardware for this.\n>\n> What are your suggestions. What we are currently looking at is.\n>\n> Dual Quad Core Intel\n> 8 - 12 GB RAM\n>\n> 10 disks total.\n>\n> 4 x 146 GB SAS disk in RAID 1+0 for database\n> 6 x 750 GB SATA disks in RAID 1+0 or RAID 5 for OS and transactions logs.\n>\n> Good RAID controller with lots of memory and BBU.\n\nI have very positive experiences with HP's DL360 and DL380. The latter\nslightly more expandable (2U vs. 1U). I have used the internal\np400i-controller with 512 MB cache on the DL380 and bought an external\np800-controller (512 MB cache as well) and a MSA-70-cabinet. I've have\n11 disks in raid-6 (one hotspare).\n\nI don't see any reason to mix sas- and sata-disks with different\nsizes. I'd go for sas-disks, smaller and faster, less power and heat.\nRaid 1+0 or raid-6 does not seem to make much of a difference today as\nit used to if you have more than 6-7 disks.\n\nThe DL380 is a 4-way woodcrest at 3 GHz and 16 GB ram and the DL360 is\na two-way woodcrest at 2.66 GHz with 16 GB.\n\nMy personal preference is FreeBSD and the DL3x0-servers all run\nwithout problems on this platform. But choose your OS depending on\nwhat you're most comfortable with. And choose hardware according to\nwhat your OS supports.\n\nAreca-controllers may also be worth looking into but I haven't tried\nthese myself.\n\nOur largest table has 85 mill. entries.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Wed, 25 Jun 2008 12:56:59 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "On Wed, 25 Jun 2008, Henrik wrote:\n> What are your suggestions. What we are currently looking at is.\n>\n> Dual Quad Core Intel\n> 8 - 12 GB RAM\n\nMore RAM would be helpful. It's not that expensive, compared to the rest \nof your system.\n\n> 10 disks total.\n>\n> 4 x 146 GB SAS disk in RAID 1+0 for database\n> 6 x 750 GB SATA disks in RAID 1+0 or RAID 5 for OS and transactions logs.\n>\n> Good RAID controller with lots of memory and BBU.\n\nIf you have a good RAID controller with BBU cache, then there's no point \nsplitting the discs into two sets. You're only creating an opportunity to \nunder-utilise the system. I'd get ten identical discs and put them in a \nsingle array, probably RAID 10.\n\nAlso, do you really need 6*750GB for OS and transaction logs? How big can \nthey be?\n\nHowever, the most important factor is that you get a good BBU cache.\n\nMatthew\n\n-- \nI don't want the truth. 
I want something I can tell parliament!\n -- Rt. Hon. Jim Hacker MP\n", "msg_date": "Wed, 25 Jun 2008 12:15:56 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "\n25 jun 2008 kl. 12.56 skrev Claus Guttesen:\n\n>> We have a database with lots of small simultaneous writes and reads\n>> (millions every day) and are looking at buying a good hardware for \n>> this.\n>>\n>> What are your suggestions. What we are currently looking at is.\n>>\n>> Dual Quad Core Intel\n>> 8 - 12 GB RAM\n>>\n>> 10 disks total.\n>>\n>> 4 x 146 GB SAS disk in RAID 1+0 for database\n>> 6 x 750 GB SATA disks in RAID 1+0 or RAID 5 for OS and transactions \n>> logs.\n>>\n>> Good RAID controller with lots of memory and BBU.\n>\n> I have very positive experiences with HP's DL360 and DL380. The latter\n> slightly more expandable (2U vs. 1U). I have used the internal\n> p400i-controller with 512 MB cache on the DL380 and bought an external\n> p800-controller (512 MB cache as well) and a MSA-70-cabinet. I've have\n> 11 disks in raid-6 (one hotspare).\nMmm I've used DL380 and I also have had good experience with them.\n\nI guess that the nees of splitting up the transactions logs are not \nthat important if you have enought disks in a raid 10 or raid 6.\n>\n\n> My personal preference is FreeBSD and the DL3x0-servers all run\n> without problems on this platform. But choose your OS depending on\n> what you're most comfortable with. And choose hardware according to\n> what your OS supports.\n>\nI like BDS also but this time its a 64bit Linux system which wil be \nused.\n>\n>\n> Our largest table has 85 mill. entries.\n>\nI believe we will be running in the 200 mill. area.\n\nThanks for your input!\n\n//Henke\n", "msg_date": "Wed, 25 Jun 2008 14:12:15 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "\n25 jun 2008 kl. 13.15 skrev Matthew Wakeling:\n\n> On Wed, 25 Jun 2008, Henrik wrote:\n>> What are your suggestions. What we are currently looking at is.\n>>\n>> Dual Quad Core Intel\n>> 8 - 12 GB RAM\n>\n> More RAM would be helpful. It's not that expensive, compared to the \n> rest of your system.\n\nTrue, as long as I can build the system on 2G or 4G modules I can max \nout the banks.\n\n>\n>\n>> 10 disks total.\n>>\n>> 4 x 146 GB SAS disk in RAID 1+0 for database\n>> 6 x 750 GB SATA disks in RAID 1+0 or RAID 5 for OS and transactions \n>> logs.\n>>\n>> Good RAID controller with lots of memory and BBU.\n>\n> If you have a good RAID controller with BBU cache, then there's no \n> point splitting the discs into two sets. You're only creating an \n> opportunity to under-utilise the system. I'd get ten identical discs \n> and put them in a single array, probably RAID 10.\nOK, thats good to know. Really want to keep it as simple as possible. \nWould you turn off fsync if you had a controller with BBU? =)\n\n>\n>\n> Also, do you really need 6*750GB for OS and transaction logs? 
How \n> big can they be?\n\nAhh, we are going to save a lot of other datafiles on those also but \nmaybe i'll just get a cabinett.\n\n>\n>\n> However, the most important factor is that you get a good BBU cache.\nHere that!\n\nThanks for your input!\n\n//Henke\n", "msg_date": "Wed, 25 Jun 2008 14:15:07 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": ">> If you have a good RAID controller with BBU cache, then there's no point\n>> splitting the discs into two sets. You're only creating an opportunity to\n>> under-utilise the system. I'd get ten identical discs and put them in a\n>> single array, probably RAID 10.\n>\n> OK, thats good to know. Really want to keep it as simple as possible. Would\n> you turn off fsync if you had a controller with BBU? =)\n\nNo, don't do that. Leaving this setting on is *highly* recommended\nunless you have data which can easily be reproduced. :-)\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Wed, 25 Jun 2008 14:40:30 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "On Wed, 25 Jun 2008, Henrik wrote:\n> Would you turn off fsync if you had a controller with BBU? =)\n\nNo, certainly not. Fsync is what makes the data move from the volatile OS \ncache to the non-volatile disc system. It'll just be a lot quicker on a \ncontroller with a BBU cache, because it won't need to actually wait for \nthe discs. But you still need the fsync to move the data from main OS \ncache to BBU cache.\n\n>> Also, do you really need 6*750GB for OS and transaction logs? How big can \n>> they be?\n>\n> Ahh, we are going to save a lot of other datafiles on those also but maybe \n> i'll just get a cabinett.\n\nOr you could just get 10 large SATA drives. To be honest, the performance \ndifference is not large, especially if you ensure the database data is \nheld compactly on the discs, so the seeks are small.\n\nMatthew\n\n-- \nIt's one of those irregular verbs - \"I have an independent mind,\" \"You are\nan eccentric,\" \"He is round the twist.\"\n -- Bernard Woolly, Yes Prime Minister\n", "msg_date": "Wed, 25 Jun 2008 13:47:02 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "On Wed, 25 Jun 2008, Henrik wrote:\n\n> 4 x 146 GB SAS disk in RAID 1+0 for database\n> 6 x 750 GB SATA disks in RAID 1+0 or RAID 5 for OS and transactions logs.\n\nThe transaction logs are not that big, and there's very little value to \nstriping them across even two disks. You should just get more SAS disks \ninstead and make them available to the database, adding more spindles for \nrandom I/O is much more important. 
Separating out a single RAID-1 pair \nfrom that set to hold the logs is a reasonable practice, with a good \nbattery-backed controller even that might not buy you anything useful.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 25 Jun 2008 09:56:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "On Wed, 25 Jun 2008, Henrik wrote:\n\n> Would you turn off fsync if you had a controller with BBU? =)\n\nTurning off fsync has some potential to introduce problems even in that \nenvironment, so better not to do that. The issue is that you might have, \nsay, 1GB of OS-level cache but 256MB of BBU cache, and if you turn fsync \noff it won't force the OS cache out to the controller when it's supposed \nto and that can cause corruption.\n\nAlso, if you've got a controller with BBU, the overhead of fsync for \nregular writes is low enough that you don't really need to turn it off. \nIf writes are cached the fsync is almost free.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 25 Jun 2008 11:45:43 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "\n25 jun 2008 kl. 17.45 skrev Greg Smith:\n\n> On Wed, 25 Jun 2008, Henrik wrote:\n>\n>> Would you turn off fsync if you had a controller with BBU? =)\n>\n> Turning off fsync has some potential to introduce problems even in \n> that environment, so better not to do that. The issue is that you \n> might have, say, 1GB of OS-level cache but 256MB of BBU cache, and \n> if you turn fsync off it won't force the OS cache out to the \n> controller when it's supposed to and that can cause corruption.\n>\n> Also, if you've got a controller with BBU, the overhead of fsync for \n> regular writes is low enough that you don't really need to turn it \n> off. If writes are cached the fsync is almost free.\nThanks for a thoroughly answer. I guess I wont be turning of fsync. :)\n\nThanks Greg!\n\n\n\n>\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 26 Jun 2008 00:32:03 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "I've seen some concerns about buying database performance hardware \nfrom DELL. Are there at least some of the RAID cards that work well \nwith Linux or should I stay clear of DELL permanently?\n\nThanks!\n\n//Henke\n25 jun 2008 kl. 17.45 skrev Greg Smith:\n\n> On Wed, 25 Jun 2008, Henrik wrote:\n>\n>> Would you turn off fsync if you had a controller with BBU? =)\n>\n> Turning off fsync has some potential to introduce problems even in \n> that environment, so better not to do that. The issue is that you \n> might have, say, 1GB of OS-level cache but 256MB of BBU cache, and \n> if you turn fsync off it won't force the OS cache out to the \n> controller when it's supposed to and that can cause corruption.\n>\n> Also, if you've got a controller with BBU, the overhead of fsync for \n> regular writes is low enough that you don't really need to turn it \n> off. 
If writes are cached the fsync is almost free.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 26 Jun 2008 15:35:34 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "On Thu, 26 Jun 2008, Henrik wrote:\n\n> I've seen some concerns about buying database performance hardware from DELL. \n> Are there at least some of the RAID cards that work well with Linux or should \n> I stay clear of DELL permanently?\n\nPeople seem to be doing OK if the RAID card is their Perc/6i, which has an \nLSI Logic MegaRAID SAS 1078 chipset under the hood. There's some helpful \nbenchmark results and follow-up meesages related to one of those at \nhttp://archives.postgresql.org/message-id/[email protected]\n\nThat said, I consider the rebranded LSI cards a pain and hate the quality \nof Dell's hardware. Seems like everybody I talk to lately is buying HP's \nDL380 instead of Dells for this level of Linux installs nowadays, I \nhaven't gotten one of those HP boxes myself yet to comment.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 27 Jun 2008 00:47:41 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "On Thu, Jun 26, 2008 at 10:47 PM, Greg Smith <[email protected]> wrote:\n> On Thu, 26 Jun 2008, Henrik wrote:\n>\n>> I've seen some concerns about buying database performance hardware from\n>> DELL. Are there at least some of the RAID cards that work well with Linux or\n>> should I stay clear of DELL permanently?\n>\n> People seem to be doing OK if the RAID card is their Perc/6i, which has an\n> LSI Logic MegaRAID SAS 1078 chipset under the hood. There's some helpful\n> benchmark results and follow-up meesages related to one of those at\n> http://archives.postgresql.org/message-id/[email protected]\n\nYeah, the problems I've had have been with the internal RAID (perc\n5???) lsi based controllers. They kick their drives offline. Dell\nhas a firmware update but we haven't had a chance to install it just\nyet to see if it fixes the problem with that one.\n\n> That said, I consider the rebranded LSI cards a pain and hate the quality of\n> Dell's hardware.\n\nYeah, I'd just as soon get a regular LSI bios as the remade one Dell\nseems intent on pushing. Also, we just discovered the broadcom\nchipsets we have in our Dell 1950s and 1850s will not negotiate to\ngigabit with our Nortel switches. Everything else I've plugged in\njust worked. Went looking at Dell's site, and for the 1950 they\nrecommend buying a dual port Intel NIC for it. Why couldn't they just\nbuild in better NICS to start?\n", "msg_date": "Thu, 26 Jun 2008 23:40:20 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for high performance 8.3" }, { "msg_contents": "Greg Smith wrote:\n> On Thu, 26 Jun 2008, Henrik wrote:\n> \n>> I've seen some concerns about buying database performance hardware \n>> from DELL. 
Are there at least some of the RAID cards that work well \n>> with Linux or should I stay clear of DELL permanently?\n> \n> People seem to be doing OK if the RAID card is their Perc/6i, which has \n> an LSI Logic MegaRAID SAS 1078 chipset under the hood. There's some \n> helpful benchmark results and follow-up meesages related to one of those \n> at \n> http://archives.postgresql.org/message-id/[email protected]\n> \n> That said, I consider the rebranded LSI cards a pain and hate the \n> quality of Dell's hardware. Seems like everybody I talk to lately is \n> buying HP's DL380 instead of Dells for this level of Linux installs \n> nowadays, I haven't gotten one of those HP boxes myself yet to comment.\n> \n\nThe HP P800 controller is a top notch performer.\n\nJoshua D. Drake\n\nP.S. The DL360-380 series is very nice as well\n\n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n\n", "msg_date": "Thu, 26 Jun 2008 23:21:31 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware suggestions for high performance 8.3" } ]
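Since the recurring advice in this thread is to keep fsync enabled and let the battery-backed cache absorb the cost, a reasonable first step on a newly built box is simply to confirm the durability-related settings before benchmarking. This is only a sketch using standard 8.3 parameter names; the values returned depend on your own postgresql.conf.

SHOW fsync;                -- should stay 'on', per the discussion above
SHOW synchronous_commit;   -- new in 8.3; 'off' trades some durability for commit speed
SHOW wal_sync_method;      -- how WAL writes are pushed down to the controller
SHOW checkpoint_segments;  -- often raised on write-heavy systems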
[ { "msg_contents": "Hi\n\nHas anyone done some benchmarks between hardware RAID vs Linux MD \nsoftware RAID?\n\nI'm keen to know the result.\n\n-- \nAdrian Moisey\nSystems Administrator | CareerJunction | Your Future Starts Here.\nWeb: www.careerjunction.co.za | Email: [email protected]\nPhone: +27 21 686 6820 | Mobile: +27 82 858 7830 | Fax: +27 21 686 6842\n", "msg_date": "Wed, 25 Jun 2008 13:05:04 +0200", "msg_from": "Adrian Moisey <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware vs Software RAID" }, { "msg_contents": "On Wed, Jun 25, 2008 at 7:05 AM, Adrian Moisey\n<[email protected]> wrote:\n> Has anyone done some benchmarks between hardware RAID vs Linux MD software\n> RAID?\n>\n> I'm keen to know the result.\n\nI have here:\nhttp://merlinmoncure.blogspot.com/2007/08/following-are-results-of-our-testing-of.html\n\nI also did some pgbench tests which I unfortunately did not record.\nThe upshot is I don't really see a difference in performance. I\nmainly prefer software raid because it's flexible and you can use the\nsame set of tools across different hardware. One annoying thing about\nsoftware raid that comes up periodically is that you can't grow raid 0\nvolumes.\n\nmerlin\n", "msg_date": "Wed, 25 Jun 2008 08:52:21 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, 25 Jun 2008, Merlin Moncure wrote:\n>> Has anyone done some benchmarks between hardware RAID vs Linux MD software\n>> RAID?\n>\n> I have here:\n> http://merlinmoncure.blogspot.com/2007/08/following-are-results-of-our-testing-of.html\n>\n> The upshot is I don't really see a difference in performance.\n\nThe main difference is that you can get hardware RAID with \nbattery-backed-up cache, which means small writes will be much quicker \nthan software RAID. Postgres does a lot of small writes under some use \ncases.\n\nWithout a BBU cache, it is sensible to put the transaction logs on a \nseparate disc system to the main database, to make the transaction log \nwrites fast (due to no seeking on those discs). However, with a BBU cache, \nthat advantage is irrelevant, as the cache will absorb the writes.\n\nHowever, not all hardware RAID will have such a battery-backed-up cache, \nand those that do tend to have a hefty price tag.\n\nMatthew\n\n-- \n$ rm core\nSegmentation Fault (core dumped)\n", "msg_date": "Wed, 25 Jun 2008 14:03:58 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "\"Also sprach Matthew Wakeling:\"\n> >> Has anyone done some benchmarks between hardware RAID vs Linux MD software\n> >> RAID?\n ...\n> > The upshot is I don't really see a difference in performance.\n> \n> The main difference is that you can get hardware RAID with \n> battery-backed-up cache, which means small writes will be much quicker \n> than software RAID. Postgres does a lot of small writes under some use \n\nIt doesn't \"mean\" that, I'm afraid. You can put the log/bitmap wherever\nyou want in software raid, including on a battery-backed local ram disk\nif you feel so inclined. 
So there is no intrinsic advantage to be\ngained there at all.\n\n> However, not all hardware RAID will have such a battery-backed-up cache, \n> and those that do tend to have a hefty price tag.\n\nWhereas software raid and a firewire-attached log device does not.\n\n\nPeter\n", "msg_date": "Wed, 25 Jun 2008 15:11:23 +0200 (MET DST)", "msg_from": "\"Peter T. Breuer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, 25 Jun 2008, Peter T. Breuer wrote:\n\n> You can put the log/bitmap wherever you want in software raid, including \n> on a battery-backed local ram disk if you feel so inclined. So there is \n> no intrinsic advantage to be gained there at all.\n\nYou are technically correct but this is irrelevant. There are zero \nmainstream battery-backed local RAM disk setups appropriate for database \nuse that don't cost substantially more than the upgrade cost to just \ngetting a good hardware RAID controller with cache integrated and using \nregular disks.\n\nWhat I often do is get a hardware RAID controller, just to accelerate disk \nwrites, but configure it in JBOD mode and use Linux or other software RAID \non that platform.\n\nAdvantages of using software RAID, in general and in some cases even with \na hardware disk controller:\n\n-Your CPU is inevitably faster than the one on the controller, so this can \ngive better performance than having RAID calcuations done on the \ncontroller itself.\n\n-If the RAID controllers dies, you can move everything to another machine \nand know that the RAID setup will transfer. Usually hardware RAID \ncontrollers use a formatting process such that you can't read the array \nwithout such a controller, so you're stuck with having a replacement \ncontroller around if you're paranoid. As long as I've got any hardware \nthat can read the disks, I can get a software RAID back again.\n\n-There is a transparency to having the disks directly attached to the OS \nyou lose with most hardware RAID. Often with hardware RAID you lose the \nability to do things like monitor drive status and temperature without \nusing a special utility to read SMART and similar data.\n\nDisadvantages:\n\n-Maintenance like disk replacement rebuilds will be using up your main CPU \nand its resources (like I/O bus bandwidth) that might be offloaded onto \nthe hardware RAID controller.\n\n-It's harder to setup a redundant boot volume with software RAID that \nworks right with a typical PC BIOS. If you use hardware RAID it tends to \ninsulate you from the BIOS quirks.\n\n-If a disk fails, I've found a full hardware RAID setup is less likely to \nresult in an OS crash than a software RAID is. The same transparency and \nvisibility into what the individual disks are doing can be a problem when \na disk goes crazy and starts spewing junk the OS has to listen to. \nHardware controllers tend to do a better job planning for that sort of \nfailure, and some of that is lost even by putting them into JBOD mode.\n\n>> However, not all hardware RAID will have such a battery-backed-up cache,\n>> and those that do tend to have a hefty price tag.\n>\n> Whereas software raid and a firewire-attached log device does not.\n\nA firewire-attached log device is an extremely bad idea. First off, \nyou're at the mercy of the firewire bridge's write guarantees, which may \nor may not be sensible. 
It's not hard to find reports of people whose \ndisks were corrupted when the disk was accidentally disconnected, or of \nbuggy drive controller firmware causing problems. I stopped using \nFirewire years ago because it seems you need to do some serious QA to \nfigure out which combinations are reliable and which aren't, and I don't \nuse external disks enough to spend that kind of time with them.\n\nSecond, there's few if any Firewire setups where the host gets to read \nSMART error data from the disk. This means that you can continue to use a \nflaky disk long past the point where a direct connected drive would have \nbeen kicked out of an array for being unreliable. SMART doesn't detect \n100% of drive failures in advance, but you'd be silly to setup a database \nsystem where you don't get to take advantage of the ~50% it does catch \nbefore you lose any data.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 25 Jun 2008 11:24:23 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, Jun 25, 2008 at 11:24 AM, Greg Smith <[email protected]> wrote:\n> SMART doesn't detect 100% of drive failures in advance, but you'd be silly\n> to setup a database system where you don't get to take advantage of the\n> ~50% it does catch before you lose any data.\n\nCan't argue with that one.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 25 Jun 2008 11:30:14 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "\n\nOn Wed, 2008-06-25 at 11:30 -0400, Jonah H. Harris wrote:\n> On Wed, Jun 25, 2008 at 11:24 AM, Greg Smith <[email protected]> wrote:\n> > SMART doesn't detect 100% of drive failures in advance, but you'd be silly\n> > to setup a database system where you don't get to take advantage of the\n> > ~50% it does catch before you lose any data.\n> \n> Can't argue with that one.\n\nSMART has certainly saved our butts more than once.\n\nJoshua D. Drake\n\n\n", "msg_date": "Wed, 25 Jun 2008 08:35:28 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, 25 Jun 2008, Greg Smith wrote:\n> A firewire-attached log device is an extremely bad idea.\n\nAnyone have experience with IDE, SATA, or SAS-connected flash devices like \nthe Samsung MCBQE32G5MPP-0VA? I mean, it seems lovely - 32GB, at a \ntransfer rate of 100MB/s, and doesn't degrade much in performance when \nwriting small random blocks. But what's it actually like, and is it \nreliable?\n\nMatthew\n\n-- \nTerrorists evolve but security is intelligently designed? 
-- Jake von Slatt\n", "msg_date": "Wed, 25 Jun 2008 16:35:37 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, Jun 25, 2008 at 5:05 AM, Adrian Moisey\n<[email protected]> wrote:\n> Hi\n>\n> Has anyone done some benchmarks between hardware RAID vs Linux MD software\n> RAID?\n>\n> I'm keen to know the result.\n\nI've had good performance from sw RAID-10 in later kernels, especially\nif it was handling a mostly read type load, like a reporting server.\nThe problem with hw RAID is that the actual performance delivered\ndoesn't always match up to the promise, due to issues like driver\nbugs, mediocre implementations, etc. Years ago when the first\nmegaraid v2 drivers were coming out they were pretty buggy. Once a\nstable driver was out they worked quite well.\n\nI'm currently having a problem with a \"well known very large\nservermanufacturer who shall remain unnamed\" and their semi-custom\nRAID controller firmware not getting along with the driver for ubuntu.\n\nThe machine we're ordering to replace it will have a much beefier RAID\ncontroller with a better driver / OS match and I expect better\nbehavior from that setup.\n", "msg_date": "Wed, 25 Jun 2008 09:53:18 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "\n\nOn Wed, 2008-06-25 at 09:53 -0600, Scott Marlowe wrote:\n> On Wed, Jun 25, 2008 at 5:05 AM, Adrian Moisey\n> <[email protected]> wrote:\n> > Hi\n\n> I'm currently having a problem with a \"well known very large\n> servermanufacturer who shall remain unnamed\" and their semi-custom\n> RAID controller firmware not getting along with the driver for ubuntu.\n\n/me waves to Dell.\n\nJoshua D. Drake\n\n\n", "msg_date": "Wed, 25 Jun 2008 08:55:31 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "\"Also sprach Greg Smith:\"\n> On Wed, 25 Jun 2008, Peter T. Breuer wrote:\n> \n> > You can put the log/bitmap wherever you want in software raid, including \n> > on a battery-backed local ram disk if you feel so inclined. So there is \n> > no intrinsic advantage to be gained there at all.\n> \n> You are technically correct but this is irrelevant. There are zero \n> mainstream battery-backed local RAM disk setups appropriate for database \n> use that don't cost substantially more than the upgrade cost to just \n\nI refrained from saying in my reply that I would set up a firewire-based\nlink to ram in a spare old portable (which comes with a battery) if I\nwanted to do this cheaply.\n\nOne reason I refrained was because I did not want to enter into a\ndiscussion of transport speeds vs latency vs block request size. GE,\nfor example, would have horrendous performance at 1KB i/o blocks. Mind\nyou, it still would be over 20MB/s (I measure 70MB/s to a real scsi\nremote disk across GE at 64KB blocksize).\n\n> getting a good hardware RAID controller with cache integrated and using \n> regular disks.\n> \n> What I often do is get a hardware RAID controller, just to accelerate disk \n> writes, but configure it in JBOD mode and use Linux or other software RAID \n> on that platform.\n\nI wonder what \"JBOD mode\" is ... :) Journaled block over destiny? Oh ..\n\"Just a Bunch of Disks\". So you use the linux software raid driver\ninstead of the hardware or firmware driver on the raid assembly. 
Fair\nenough.\n\n> Advantages of using software RAID, in general and in some cases even with \n> a hardware disk controller:\n> \n> -Your CPU is inevitably faster than the one on the controller, so this can \n> give better performance than having RAID calcuations done on the \n> controller itself.\n\nIt's not clear. You take i/o bandwidth out of the rest of your system,\nand cpu time too. In a standard dual core machine which is not a\nworkstation, it's OK. On my poor ol' 1GHz P3 TP x24 laptop, doing two\nthings at once is definitely a horrible strain on my X responsiveness.\nOn a risc machine (ARM, 250MHz) I have seen horrible cpu loads from\nsoftware raid.\n\n> -If the RAID controllers dies, you can move everything to another machine \n> and know that the RAID setup will transfer. Usually hardware RAID \n\nOh, I agree with that. You're talking about the proprietary formatting\nin hw raid assemblies, I take it? Yah.\n\n> -There is a transparency to having the disks directly attached to the OS \n\nAgreed. \"It's alright until it goes wrong\".\n\n> Disadvantages:\n> \n> -Maintenance like disk replacement rebuilds will be using up your main CPU \n\nAgreed (above).\n\n> \n> -It's harder to setup a redundant boot volume with software RAID that \n\nYeah. I don't bother. A small boot volume in readonly mode with a copy\non another disk is what I use.\n\n> works right with a typical PC BIOS. If you use hardware RAID it tends to \n> insulate you from the BIOS quirks.\n\nUntil the machine dies? (and fries a disk or two on the way down ..\nhappens, has happend to me).\n\n> -If a disk fails, I've found a full hardware RAID setup is less likely to \n> result in an OS crash than a software RAID is. The same transparency and \n\nNot sure. \n\n> >> However, not all hardware RAID will have such a battery-backed-up cache,\n> >> and those that do tend to have a hefty price tag.\n> >\n> > Whereas software raid and a firewire-attached log device does not.\n> \n> A firewire-attached log device is an extremely bad idea. First off, \n> you're at the mercy of the firewire bridge's write guarantees, which may \n> or may not be sensible.\n\nThe log is sync. Therefore it doesn't matter what the guarantees are,\nor at least I assume you are worrying about acks coming back before the\nwrite has been sent, etc. Only an actual net write will be acked by the\nfirewire transport as far as I know. If OTOH you are thinking of \"a\nfirewire attached disk\" as a complete black box, then yes, I agree, you\nare at the mercy of the driver writer for that black box. But I was not\nthinking of that. I was only choosing firewire as a transport because\nof its relatively good behaviour with small requests, as opposed to GE\nas a transport, or 100BT as a transport, or whatever else as a\ntransport...\n\n\n> It's not hard to find reports of people whose \n> disks were corrupted when the disk was accidentally disconnected, or of \n> buggy drive controller firmware causing problems. 
I stopped using \n> Firewire years ago because it seems you need to do some serious QA to \n> figure out which combinations are reliable and which aren't, and I don't \n> use external disks enough to spend that kind of time with them.\n\nSync operation of the disk should make you immune to any quirks, even\nif you are thinking of \"firewire plus disk\" as a black-box unit.\n\n> Second, there's few if any Firewire setups where the host gets to read \n> SMART error data from the disk.\n\nAn interesting point, but I really was considering firewire only as the\ntransport (I'm the author of the ENBD - enhanced network block device -\ndriver, which makes any remote block device available over any\ntransport, so I guess that accounts for the different assumption).\n\nPeter\n", "msg_date": "Wed, 25 Jun 2008 18:05:18 +0200 (MET DST)", "msg_from": "\"Peter T. Breuer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, Jun 25, 2008 at 11:55 AM, Joshua D. Drake <[email protected]> wrote:\n> On Wed, 2008-06-25 at 09:53 -0600, Scott Marlowe wrote:\n>> On Wed, Jun 25, 2008 at 5:05 AM, Adrian Moisey\n>> <[email protected]> wrote:\n>\n>> I'm currently having a problem with a \"well known very large\n>> servermanufacturer who shall remain unnamed\" and their semi-custom\n>> RAID controller firmware not getting along with the driver for ubuntu.\n\n> /me waves to Dell.\n\nnot just ubuntu...the dell perc/x line software utilities also\nexplicitly check the hardware platform so they only run on dell\nhardware. However, the lsi logic command line utilities run just\nfine. As for ubuntu sas support, ubuntu suports the mpt fusion/sas\nline directly through the kernel.\n\nIn fact, installing ubuntu server fixed an unrelated issue relating to\na qlogic fibre hba that was causing reboots under heavy load with a\npci-x fibre controller on centos. So, based on this and other\nexperiences, i'm starting to be more partial to linux distributions\nwith faster moving kernels, mainly because i trust the kernel drivers\nmore than the vendor provided drivers. The in place distribution\nupgrade is also very nice.\n\nmerlin\n", "msg_date": "Wed, 25 Jun 2008 13:35:49 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, Jun 25, 2008 at 9:03 AM, Matthew Wakeling <[email protected]> wrote:\n> On Wed, 25 Jun 2008, Merlin Moncure wrote:\n>>>\n>>> Has anyone done some benchmarks between hardware RAID vs Linux MD\n>>> software\n>>> RAID?\n>>\n>> I have here:\n>>\n>> http://merlinmoncure.blogspot.com/2007/08/following-are-results-of-our-testing-of.html\n>>\n>> The upshot is I don't really see a difference in performance.\n>\n> The main difference is that you can get hardware RAID with battery-backed-up\n> cache, which means small writes will be much quicker than software RAID.\n> Postgres does a lot of small writes under some use cases.\n\nAs discussed down thread, software raid still gets benefits of\nwrite-back caching on the raid controller...but there are a couple of\nthings I'd like to add. First, if your sever is extremely busy, the\nwrite back cache will eventually get overrun and performance will\neventually degrade to more typical ('write through') performance.\nSecondly, many hardware raid controllers have really nasty behavior in\nthis scenario. 
Linux software raid has decent degradation in overload\nconditions but many popular raid controllers (dell perc/lsi logic sas\nfor example) become unpredictable and very bursty in sustained high\nload conditions.\n\nAs greg mentioned, I trust the linux kernel software raid much more\nthan the black box hw controllers. Also, contrary to vast popular\nmythology, the 'overhead' of sw raid in most cases is zero except in\nvery particular conditions.\n\nmerlin\n", "msg_date": "Wed, 25 Jun 2008 13:46:14 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, Jun 25, 2008 at 01:35:49PM -0400, Merlin Moncure wrote:\n> experiences, i'm starting to be more partial to linux distributions\n> with faster moving kernels, mainly because i trust the kernel drivers\n> more than the vendor provided drivers.\n\nWhile I have some experience that agrees with this, I'll point out\nthat I've had the opposite experience, too: upgrading the kernel made\na perfectly stable system both unstable and prone to data loss. I\nthink this is a blade that cuts both ways, and the key thing to do is\nto ensure you have good testing infrastructure in place to check that\nthings will work before you deploy to production. (The other way to\nsay that, of course, is \"Linux is only free if your time is worth\nnothing.\" Substitute your favourite free software for \"Linux\", of\ncourse. ;-) )\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Wed, 25 Jun 2008 14:00:29 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": ">>> Andrew Sullivan <[email protected]> wrote: \n \n> this is a blade that cuts both ways, and the key thing to do is\n> to ensure you have good testing infrastructure in place to check\nthat\n> things will work before you deploy to production. (The other way to\n> say that, of course, is \"Linux is only free if your time is worth\n> nothing.\" Substitute your favourite free software for \"Linux\", of\n> course. ;-) )\n \nIt doesn't have to be free software to cut that way. I've actually\nfound the free software to waste less of my time. If you depend on\nyour systems, though, you should never deploy any change, no matter\nhow innocuous it seems, without testing.\n \n-Kevin\n", "msg_date": "Wed, 25 Jun 2008 13:07:25 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, Jun 25, 2008 at 01:07:25PM -0500, Kevin Grittner wrote:\n> \n> It doesn't have to be free software to cut that way. I've actually\n> found the free software to waste less of my time. \n\nNo question. But one of the unfortunate facts of the\nno-charge-for-licenses world is that many people expect the systems to\nbe _really free_. It appears that some people think, because they've\nalready paid $smallfortune for a license, it's therefore ok to pay\nanother amount in operation costs and experts to run the system. Free\nsystems, for some reason, are expected also magically to run\nthemselves. This tendency is getting better, but hasn't gone away.\nIt's partly because the budget for the administrators is often buried\nin the overall large system budget, so nobody balks when there's a big\nfigure attached there. 
When you present a budget for \"free software\"\nthat includes the cost of a few administrators, the accounting people\nwant to know why the free software costs so much. \n\n> If you depend on your systems, though, you should never deploy any\n> change, no matter how innocuous it seems, without testing.\n\nI agree completely.\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Wed, 25 Jun 2008 14:27:04 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, 25 Jun 2008, Peter T. Breuer wrote:\n\n> I refrained from saying in my reply that I would set up a firewire-based\n> link to ram in a spare old portable (which comes with a battery) if I\n> wanted to do this cheaply.\n\nMaybe, but this is kind of a weird setup. Not many people are going to \nrun a production database that way and us wandering into the details too \nmuch risks confusing everybody else.\n\n> The log is sync. Therefore it doesn't matter what the guarantees are, or \n> at least I assume you are worrying about acks coming back before the \n> write has been sent, etc. Only an actual net write will be acked by the \n> firewire transport as far as I know.\n\nThat's exactly the issue; it's critical for database use that a disk not \nlie to you about writes being done if they're actually sitting in a cache \nsomewhere. (S)ATA disks do that, so you have to turn that off for them to \nbe safe to use. Since the firewire enclosure is a black box, it's \ndifficult to know exactly what it's doing here, and history here says that \nevery type (S)ATA disk does the wrong in the default case. I expect that \nfor any Firewire/USB device, if I write to the disk, then issue a fsync, \nit will return success from that once the data has been written to the \ndisk's cache--which is crippling behavior from the database's perspective \none day when you get a crash.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 25 Jun 2008 15:59:59 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, 25 Jun 2008, Merlin Moncure wrote:\n\n> So, based on this and other experiences, i'm starting to be more partial \n> to linux distributions with faster moving kernels, mainly because i \n> trust the kernel drivers more than the vendor provided drivers.\n\nDepends on how fast. I find it takes a minimum of 3-6 months before any \nnew kernel release stabilizes (somewhere around 2.6.X-5 to -10), and some \ndistributions push them out way before that. Also, after major changes, \nit can be a year or more before a new kernel is not a regression either in \nreliability, performance, or worst-case behavior.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 25 Jun 2008 16:05:49 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "\"Also sprach Merlin Moncure:\"\n> write back: raid controller can lie to host o/s. when o/s asks\n\nThis is not what the linux software raid controller does, then. 
It \ndoes not queue requests internally at all, nor ack requests that have\nnot already been acked by the components (modulo the fact that one can\ndeliberately choose to have a slow component not be sync by allowing\n\"write-behind\" on it, in which case the \"controller\" will ack the\nincoming request after one of the compionents has been serviced,\nwithout waiting for both).\n\n> integrity and performance. 'write back' caching provides insane burst\n> IOPS (because you are writing to controller cache) and somewhat\n> improved sustained IOPS because the controller is reorganizing writes\n> on the fly in (hopefully) optimal fashion.\n\nThis is what is provided by Linux file system and (ordinary) block\ndevice driver subsystem. It is deliberately eschewed by the soft raid\ndriver, because any caching will already have been done above and below\nthe driver, either in the FS or in the components. \n\n> > However the lack of extra buffering is really deliberate (double\n> > buffering is a horrible thing in many ways, not least because of the\n> \n> <snip>\n> completely unconvincing. \n\nBut true. Therefore the problem in attaining conviction must be at your\nend. Double buffering just doubles the resources dedicated to a single\nrequest, without doing anything for it! It doubles the frequency with\nwhich one runs out of resources, it doubles the frequency of the burst\nlimit being reached. It's deadly (deadlockly :) in the situation where\nthe receiving component device also needs resources in order to service\nthe request, such as when the transport is network tcp (and I have my\nsuspicions about scsi too).\n\n> the overhead of various cache layers is\n> completely minute compared to a full fault to disk that requires a\n> seek which is several orders of magnitude slower.\n\nThat's aboslutely true when by \"overhead\" you mean \"computation cycles\"\nand absolutely false when by overhead you mean \"memory resources\", as I\ndo. Double buffering is a killer.\n\n> The linux software raid algorithms are highly optimized, and run on a\n\nI can confidently tell you that that's balderdash both as a Linux author\nand as a software RAID linux author (check the attributions in the\nkernel source, or look up something like \"Raiding the Noosphere\" on\ngoogle).\n\n> presumably (much faster) cpu than what the controller supports.\n> However, there is still some extra oomph you can get out of letting\n> the raid controller do what the software raid can't...namely delay\n> sync for a time.\n\nThere are several design problems left in software raid in the linux kernel.\nOne of them is the need for extra memory to dispatch requests with and\nas (i.e. buffer heads and buffers, both). bhs should be OK since the\nsmall cache per device won't be exceeded while the raid driver itself\nserialises requests, which is essentially the case (it does not do any\nbuffering, queuing, whatever .. and tries hard to avoid doing so). The\nneed for extra buffers for the data is a problem. On different\nplatforms different aspects of that problem are important (would you\nbelieve that on ARM mere copying takes so much cpu time that one wants\nto avoid it at all costs, whereas on intel it's a forgettable trivium).\n\nI also wouldn't aboslutely swear that request ordering is maintained\nunder ordinary circumstances.\n\nBut of course we try.\n\n\nPeter\n", "msg_date": "Thu, 26 Jun 2008 07:03:38 +0200 (MET DST)", "msg_from": "\"Peter T. 
Breuer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software Raid" }, { "msg_contents": "\nOn Jun 25, 2008, at 11:35 AM, Matthew Wakeling wrote:\n\n> On Wed, 25 Jun 2008, Greg Smith wrote:\n>> A firewire-attached log device is an extremely bad idea.\n>\n> Anyone have experience with IDE, SATA, or SAS-connected flash \n> devices like the Samsung MCBQE32G5MPP-0VA? I mean, it seems lovely - \n> 32GB, at a transfer rate of 100MB/s, and doesn't degrade much in \n> performance when writing small random blocks. But what's it actually \n> like, and is it reliable?\n\nNone of these manufacturers rates these drives for massive amounts of \nwrites. They're sold as suitable for laptop/desktop use, which \nnormally is not a heavy wear and tear operation like a DB. Once they \nclaim suitability for this purpose, be sure that I and a lot of others \nwill dive into it to see how well it really works. Until then, it \nwill just be an expensive brick-making experiment, I'm sure.\n", "msg_date": "Thu, 26 Jun 2008 09:43:03 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Wed, 25 Jun 2008, Andrew Sullivan wrote:\n\n> the key thing to do is to ensure you have good testing infrastructure in \n> place to check that things will work before you deploy to production.\n\nThis is true whether you're using Linux or completely closed source \nsoftware. There are two main differences from my view:\n\n-OSS software lets you look at the code before a typical closed-source \ncompany would have pushed a product out the door at all. Downside is that \nyou need to recognize that. Linux kernels for example need significant \namounts of encouters with the real world after release before they're \nready for most people.\n\n-If your OSS program doesn't work, you can potentially find the problem \nyourself. I find that I don't fix issues when I come across them very \nmuch, but being able to browse the source code for something that isn't \nworking frequently makes it easier to understand what's going on as part \nof troubleshooting.\n\nIt's not like closed source software doesn't have the same kinds of bugs. \nThe way commercial software (and projects like PostgreSQL) get organized \ninto a smaller number of official releases tends to focus the QA process a \nbit better though, so that regular customers don't see as many rough \nedges. Linux used to do a decent job of this with their development vs. \nstable kernels, which I really miss. Unfortunately there's just not \nenough time for the top-level developers to manage that while still \nkeeping up with the pace needed just for new work. Sorting out which are \nthe stable kernel releases seems to have become the job of the \ndistributors (RedHat, SuSE, Debian, etc.) instead of the core kernel \ndevelopers.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 26 Jun 2008 09:45:21 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "\"Also sprach Merlin Moncure:\"\n> As discussed down thread, software raid still gets benefits of\n> write-back caching on the raid controller...but there are a couple of\n\n(I wish I knew what write-back caching was!)\n\nWell, if you mean the Linux software raid driver, no, there's no extra\ncaching (buffering). 
Every request arriving at the device is duplicated\n(for RAID1), using a local finite cache of buffer head structures and\nreal extra buffers from the kernel's general resources. Every arriving\nrequest is dispatched to its subtargets as it arrives (as two or more\nnew requests). On reception of both (or more) acks, the original\nrequest is acked, and not before.\n\nThis imposes a considerable extra resource burden. It's a mystery to me\nwhy the driver doesn't deadlock against other resource eaters that it\nmay depend on. Writing to a device that also needs extra memory per\nrequest in its driver should deadlock it, in theory. Against a network\ndevice as component, it's a problem (tcp needs buffers).\n\nHowever the lack of extra buffering is really deliberate (double\nbuffering is a horrible thing in many ways, not least because of the\nprobable memory deadlock against some component driver's requirement).\nThe driver goes to the lengths of replacing the kernel's generic\nmake_request function just for itself in order to make sure full control\nresides in the driver. This is required, among other things, to make\nsure that request order is preserved, and that requests.\n\nIt has the negative that standard kernel contiguous request merging does\nnot take place. But that's really required for sane coding in the\ndriver. Getting request pages into general kernel buffers ... may happen.\n\n\n> things I'd like to add. First, if your sever is extremely busy, the\n> write back cache will eventually get overrun and performance will\n> eventually degrade to more typical ('write through') performance.\n\nI'd like to know where this 'write back cache' is! (not to mention what\nit is :). What on earth does `write back' mean? Perhaps you mean the\nkernel's general memory system, which has the effect of buffering\nand caching requests on the way to drivers like raid. Yes, if you write\nto a device, any device, you will only write to the kernel somewhere,\nwhich may or may not decide now or later to send the dirty buffers thus\ncreated on to the driver in question, either one by one or merged. But\nas I said, raid replaces most of the kernel's mechanisms in that area\n(make_request, plug) to avoid losing ordering. I would be surprised if\nthe raw device exhibited any buffering at all after getting rid of the\ngeneric kernel mechanisms. Any buffering you see would likely be\nhappening at file system level (and be a darn nuisance).\n\nReads from the device are likely to hit the kernel's existing buffers\nfirst, thus making them act as a \"cache\".\n\n\n> Secondly, many hardware raid controllers have really nasty behavior in\n> this scenario. Linux software raid has decent degradation in overload\n\nI wouldn't have said so! If there is any, it's sort of accidental. On\nmemory starvation, the driver simply couldn't create and despatch\ncomponent requests. Dunno what happens then. It won't run out of buffer\nhead structs though, since it's pretty well serialised on those, per\ndevice, in order to maintain request order, and it has its own cache.\n\n> conditions but many popular raid controllers (dell perc/lsi logic sas\n> for example) become unpredictable and very bursty in sustained high\n> load conditions.\n\nWell, that's because they can't tell the linux memory manager to quit\nstoring data from them in memory and let them have it NOW (a general\nproblem .. how one gets feedback on the mm state, I don't know). Maybe one\ncould .. 
one can control buffer aging pretty much per device nowadays.\nPerhaps one can set the limit to zero for buffer age in memory before \nbeing sent to the device. That would help. Also one can lower the\nbdflush limit at which the device goes sync. All that would help against\nbursty performance, but it would slow ordinary operation towards sync\nbehaviour.\n\n\n> As greg mentioned, I trust the linux kernel software raid much more\n> than the black box hw controllers. Also, contrary to vast popular\n\nWell, it's readable code. That's the basis for my comments!\n\n> mythology, the 'overhead' of sw raid in most cases is zero except in\n> very particular conditions.\n\nIt's certainly very small. It would be smaller still if we could avoid\nneeding new buffers per device. Perhaps the dm multipathing allows that.\n\nPeter\n", "msg_date": "Thu, 26 Jun 2008 15:49:44 +0200 (CEST)", "msg_from": "\"Peter T. Breuer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Thu, 26 Jun 2008, Vivek Khera wrote:\n>> Anyone have experience with IDE, SATA, or SAS-connected flash devices like \n>> the Samsung MCBQE32G5MPP-0VA? I mean, it seems lovely - 32GB, at a transfer \n>> rate of 100MB/s, and doesn't degrade much in performance when writing small \n>> random blocks. But what's it actually like, and is it reliable?\n>\n> None of these manufacturers rates these drives for massive amounts of writes. \n> They're sold as suitable for laptop/desktop use, which normally is not a \n> heavy wear and tear operation like a DB. Once they claim suitability for \n> this purpose, be sure that I and a lot of others will dive into it to see how \n> well it really works. Until then, it will just be an expensive brick-making \n> experiment, I'm sure.\n\nIt claims a MTBF of 2,000,000 hours, but no further reliability \ninformation seems forthcoming. I thought the idea that flash couldn't cope \nwith many writes was no longer true these days?\n\nMatthew\n\n-- \nI work for an investment bank. I have dealt with code written by stock\nexchanges. I have seen how the computer systems that store your money are\nrun. If I ever make a fortune, I will store it in gold bullion under my\nbed. -- Matthew Crosby\n", "msg_date": "Thu, 26 Jun 2008 17:14:06 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Thu, Jun 26, 2008 at 10:14 AM, Matthew Wakeling <[email protected]> wrote:\n> On Thu, 26 Jun 2008, Vivek Khera wrote:\n>>>\n>>> Anyone have experience with IDE, SATA, or SAS-connected flash devices\n>>> like the Samsung MCBQE32G5MPP-0VA? I mean, it seems lovely - 32GB, at a\n>>> transfer rate of 100MB/s, and doesn't degrade much in performance when\n>>> writing small random blocks. But what's it actually like, and is it\n>>> reliable?\n>>\n>> None of these manufacturers rates these drives for massive amounts of\n>> writes. They're sold as suitable for laptop/desktop use, which normally is\n>> not a heavy wear and tear operation like a DB. Once they claim suitability\n>> for this purpose, be sure that I and a lot of others will dive into it to\n>> see how well it really works. Until then, it will just be an expensive\n>> brick-making experiment, I'm sure.\n>\n> It claims a MTBF of 2,000,000 hours, but no further reliability information\n> seems forthcoming. 
I thought the idea that flash couldn't cope with many\n> writes was no longer true these days?\n\nWhat's mainly happened is a great increase in storage capacity has\nallowed flash based devices to spread their writes out over so many\ncells that the time it takes to overwrite all the cells enough to get\ndead ones is measured in much longer intervals. Instead of dieing in\nweeks or months, they'll now die, for most work loads, in years or\nmore.\n\nHowever, I've tested a few less expensive solid state storage and for\nsome transactional loads it was much faster, but then for things like\nreport queries scanning whole tables they were factors slower than a\nsw RAID-10 array of just 4 spinning disks. But pg_bench was quite\nsnappy using the solid state storage for pg_xlog.\n", "msg_date": "Thu, 26 Jun 2008 10:31:52 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Thu, Jun 26, 2008 at 9:49 AM, Peter T. Breuer <[email protected]> wrote:\n> \"Also sprach Merlin Moncure:\"\n>> As discussed down thread, software raid still gets benefits of\n>> write-back caching on the raid controller...but there are a couple of\n>\n> (I wish I knew what write-back caching was!)\n\nhardware raid controllers generally have some dedicated memory for\ncaching. the controllers can be configured in one of two modes: (the\njargon is so common it's almost standard)\nwrite back: raid controller can lie to host o/s. when o/s asks\ncontroller to sync, controller can hold data in cache (for a time)\nwrite through: raid controller can not lie. all sync requests must\npass through to disk\n\nThe thinking is, the bbu on the controller can hold scheduled writes\nin memory (for a time) and replayed to disk when server restarts in\nevent of power failure. This is a reasonable compromise between data\nintegrity and performance. 'write back' caching provides insane burst\nIOPS (because you are writing to controller cache) and somewhat\nimproved sustained IOPS because the controller is reorganizing writes\non the fly in (hopefully) optimal fashion.\n\n> This imposes a considerable extra resource burden. It's a mystery to me\n> However the lack of extra buffering is really deliberate (double\n> buffering is a horrible thing in many ways, not least because of the\n\n<snip>\ncompletely unconvincing. the overhead of various cache layers is\ncompletely minute compared to a full fault to disk that requires a\nseek which is several orders of magnitude slower.\n\nThe linux software raid algorithms are highly optimized, and run on a\npresumably (much faster) cpu than what the controller supports.\nHowever, there is still some extra oomph you can get out of letting\nthe raid controller do what the software raid can't...namely delay\nsync for a time.\n\nmerlin\n", "msg_date": "Thu, 26 Jun 2008 13:26:00 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Thu, Jun 26, 2008 at 12:14 PM, Matthew Wakeling <[email protected]> wrote:\n>> None of these manufacturers rates these drives for massive amounts of\n>> writes. They're sold as suitable for laptop/desktop use, which normally is\n>> not a heavy wear and tear operation like a DB. Once they claim suitability\n>> for this purpose, be sure that I and a lot of others will dive into it to\n>> see how well it really works. 
Until then, it will just be an expensive\n>> brick-making experiment, I'm sure.\n>\n> It claims a MTBF of 2,000,000 hours, but no further reliability information\n> seems forthcoming. I thought the idea that flash couldn't cope with many\n> writes was no longer true these days?\n\nFlash and disks have completely different failure modes, and you can't\ndo apples to apples MTBF comparisons. In addition there are many\ndifferent types of flash (MLC/SLC) and the flash cells themselves can\nbe organized in particular ways involving various trade-offs.\n\nThe best flash drives combined with smart wear leveling are\nanecdotally believed to provide lifetimes that are good enough to\nwarrant use in high duty server environments. The main issue is lousy\nrandom write performance that basically makes them useless for any\nkind of OLTP operation. There are a couple of software (hacks?) out\nthere which may address this problem if the technology doesn't get\nthere first.\n\nIf the random write problem were solved, a single ssd would provide\nthe equivalent of a stack of 15k disks in a raid 10.\n\nsee:\nhttp://www.bigdbahead.com/?p=44\nhttp://feedblog.org/2008/01/30/24-hours-with-an-ssd-and-mysql/\n\nmerlin\n", "msg_date": "Thu, 26 Jun 2008 13:35:09 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Thu, Jun 26, 2008 at 1:03 AM, Peter T. Breuer <[email protected]> wrote:\n> \"Also sprach Merlin Moncure:\"\n>> write back: raid controller can lie to host o/s. when o/s asks\n>\n> This is not what the linux software raid controller does, then. It\n> does not queue requests internally at all, nor ack requests that have\n> not already been acked by the components (modulo the fact that one can\n> deliberately choose to have a slow component not be sync by allowing\n> \"write-behind\" on it, in which case the \"controller\" will ack the\n> incoming request after one of the compionents has been serviced,\n> without waiting for both).\n>\n>> integrity and performance. 'write back' caching provides insane burst\n>> IOPS (because you are writing to controller cache) and somewhat\n>> improved sustained IOPS because the controller is reorganizing writes\n>> on the fly in (hopefully) optimal fashion.\n>\n> This is what is provided by Linux file system and (ordinary) block\n> device driver subsystem. It is deliberately eschewed by the soft raid\n> driver, because any caching will already have been done above and below\n> the driver, either in the FS or in the components.\n>\n>> > However the lack of extra buffering is really deliberate (double\n>> > buffering is a horrible thing in many ways, not least because of the\n>>\n>> <snip>\n>> completely unconvincing.\n>\n> But true. Therefore the problem in attaining conviction must be at your\n> end. Double buffering just doubles the resources dedicated to a single\n> request, without doing anything for it! It doubles the frequency with\n> which one runs out of resources, it doubles the frequency of the burst\n> limit being reached. It's deadly (deadlockly :) in the situation where\n\nOnly if those resources are drawn from the same pool. You are\noversimplifying a calculation that has many variables such as cost.\nCPUs for example are introducing more cache levels (l1, l2, l3), etc.\n Also, the different levels of cache have different capabilities.\nOnly the hardware controller cache is (optionally) allowed to delay\nacknowledgment of a sync. 
In postgresql terms, we get roughly the\nsame effect with the computers entire working memory with fsync\ndisabled...so that we are trusting, rightly or wrongly, that all\nwrites will eventually make it to disk. In this case, the raid\ncontroller cache is redundant and marginally useful.\n\n> the receiving component device also needs resources in order to service\n> the request, such as when the transport is network tcp (and I have my\n> suspicions about scsi too).\n>\n>> the overhead of various cache layers is\n>> completely minute compared to a full fault to disk that requires a\n>> seek which is several orders of magnitude slower.\n>\n> That's aboslutely true when by \"overhead\" you mean \"computation cycles\"\n> and absolutely false when by overhead you mean \"memory resources\", as I\n> do. Double buffering is a killer.\n\nDouble buffering is most certainly _not_ a killer (or at least, _the_\nkiller) in practical terms. Most database systems that do any amount\nof writing (that is, interesting databases) are bound by the ability\nto randomly read and write to the storage medium, and only that.\n\nThis is why raid controllers come with a relatively small amount of\ncache...there are diminishing returns from reorganizing writes. This\nis also why up and coming storage technologies (like flash) are so\ninteresting. Disk drives have made only marginal improvements in\nspeed since the early 80's.\n\n>> The linux software raid algorithms are highly optimized, and run on a\n>\n> I can confidently tell you that that's balderdash both as a Linux author\n\nI'm just saying here that there is little/no cpu overhead for using\nsoftware raid on modern hardware.\n\n> believe that on ARM mere copying takes so much cpu time that one wants\n> to avoid it at all costs, whereas on intel it's a forgettable trivium).\n\nThis is a database list. The main area of interest is in dealing with\nserver class hardware.\n\nmerlin\n", "msg_date": "Thu, 26 Jun 2008 16:01:34 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software Raid" }, { "msg_contents": "On Thu, 26 Jun 2008, Peter T. Breuer wrote:\n\n> \"Also sprach Merlin Moncure:\"\n>> The linux software raid algorithms are highly optimized, and run on a\n>\n> I can confidently tell you that that's balderdash both as a Linux author\n> and as a software RAID linux author (check the attributions in the\n> kernel source, or look up something like \"Raiding the Noosphere\" on\n> google).\n>\n>> presumably (much faster) cpu than what the controller supports.\n>> However, there is still some extra oomph you can get out of letting\n>> the raid controller do what the software raid can't...namely delay\n>> sync for a time.\n>\n> There are several design problems left in software raid in the linux kernel.\n> One of them is the need for extra memory to dispatch requests with and\n> as (i.e. buffer heads and buffers, both). bhs should be OK since the\n> small cache per device won't be exceeded while the raid driver itself\n> serialises requests, which is essentially the case (it does not do any\n> buffering, queuing, whatever .. and tries hard to avoid doing so). The\n> need for extra buffers for the data is a problem. 
On different\n> platforms different aspects of that problem are important (would you\n> believe that on ARM mere copying takes so much cpu time that one wants\n> to avoid it at all costs, whereas on intel it's a forgettable trivium).\n>\n> I also wouldn't aboslutely swear that request ordering is maintained\n> under ordinary circumstances.\n\nwhich flavor of linux raid are you talking about (the two main families I \nam aware of are the md and dm ones)\n\nDavid Lang\n", "msg_date": "Thu, 26 Jun 2008 13:17:28 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software Raid" }, { "msg_contents": "On Thu, 26 Jun 2008, Peter T. Breuer wrote:\n\n> Double buffering is a killer.\n\nNo, it isn't; it's a completely trivial bit of overhead. It only exists \nduring the time when blocks are queued to write but haven't been written \nyet. On any database system, in those cases I/O congestion at the disk \nlevel (probably things backed up behind seeks) is going to block writes \nway before the memory used or the bit of CPU time making the extra copy \nbecomes a factor on anything but minimal platforms.\n\nYou seem to know quite a bit about the RAID implementation, but you are a) \nextrapolating from that knowledge into areas of database performance you \nneed to spend some more time researching first and b) extrapolating based \non results from trivial hardware, relative to what the average person on \nthis list is running a database server on in 2008. The weakest platform I \ndeploy PostgreSQL on and consider relevant today has two cores and 2GB of \nRAM, for a single-user development system that only has to handle a small \namount of data relative to what the real servers handle. If you note the \nkind of hardware people ask about here that's pretty typical.\n\nYou have some theories here, Merlin and I have positions that come from \nrunning benchmarks, and watching theories suffer a brutal smack-down from \nthe real world is one of those things that happens every day. There is \nabsolutely some overhead from paths through the Linux software RAID that \nconsume resources. 
But you can't even measure that in database-oriented \ncomparisions against hardware setups that don't use those resources, which \nmeans that for practical purposes the overhead doesn't exist in this \ncontext.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 26 Jun 2008 19:15:23 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software Raid" }, { "msg_contents": "On Wednesday 25 June 2008 11:24:23 Greg Smith wrote:\n> What I often do is get a hardware RAID controller, just to accelerate disk\n> writes, but configure it in JBOD mode and use Linux or other software RAID\n> on that platform.\n>\n\nJBOD + RAIDZ2 FTW ;-)\n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Thu, 26 Jun 2008 22:17:50 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Thu, 26 Jun 2008, Merlin Moncure wrote:\n> In addition there are many different types of flash (MLC/SLC) and the \n> flash cells themselves can be organized in particular ways involving \n> various trade-offs.\n\nYeah, I wouldn't go for MLC, given it has a tenth the lifespan of SLC.\n\n> The main issue is lousy random write performance that basically makes \n> them useless for any kind of OLTP operation.\n\nFor the mentioned device, they claim a sequential read speed of 100MB/s, \nsequential write speed of 80MB/s, random read speed of 80MB/s and random \nwrite speed of 30MB/s. This is *much* better than figures quoted for many \nother devices, but of course unless they publish the block size they used \nfor the random speed tests, the figures are completely useless.\n\nMatthew\n\n-- \nsed -e '/^[when][coders]/!d;/^...[discover].$/d;/^..[real].[code]$/!d\n' <`locate dict/words`\n", "msg_date": "Fri, 27 Jun 2008 12:00:25 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" }, { "msg_contents": "On Fri, Jun 27, 2008 at 7:00 AM, Matthew Wakeling <[email protected]> wrote:\n> On Thu, 26 Jun 2008, Merlin Moncure wrote:\n>>\n>> In addition there are many different types of flash (MLC/SLC) and the\n>> flash cells themselves can be organized in particular ways involving various\n>> trade-offs.\n>\n> Yeah, I wouldn't go for MLC, given it has a tenth the lifespan of SLC.\n>\n>> The main issue is lousy random write performance that basically makes them\n>> useless for any kind of OLTP operation.\n>\n> For the mentioned device, they claim a sequential read speed of 100MB/s,\n> sequential write speed of 80MB/s, random read speed of 80MB/s and random\n> write speed of 30MB/s. This is *much* better than figures quoted for many\n> other devices, but of course unless they publish the block size they used\n> for the random speed tests, the figures are completely useless.\n\nright. not likely completely truthful. here's why:\n\nA 15k drive can deliver around 200 seeks/sec (under worst case\nconditions translating to 1-2mb/sec with 8k block size). 30mb/sec\nrandom performance would then be rough equivalent to around 40 15k\ndrives configured in a raid 10. Of course, I'm assuming the block\nsize :-).\n\nUnless there were some other mitigating factors (lifetime, etc), this\nwould demonstrate that flash ssd would crush disks in any reasonable\ncost/performance metric. 
It's probably not so cut and dried, otherwise\nwe'd be hearing more about them (pure speculation on my part).\n\nmerlin\n", "msg_date": "Fri, 27 Jun 2008 09:16:13 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware vs Software RAID" } ]
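A rough way to check for the write-back caching behaviour discussed in this thread is to time single-row commits from psql. This is only a sketch, not something from the thread itself: the table name is invented, and SET synchronous_commit assumes 8.3 or later.

-- Hypothetical scratch table; any tiny table will do.
CREATE TABLE commit_probe (id serial PRIMARY KEY, t timestamptz DEFAULT now());

\timing

-- With default settings every commit waits for a WAL flush, so each of
-- these should cost on the order of one disk rotation (several ms on a
-- single 7200rpm drive) unless some write cache is acknowledging early.
INSERT INTO commit_probe DEFAULT VALUES;
INSERT INTO commit_probe DEFAULT VALUES;
INSERT INTO commit_probe DEFAULT VALUES;

-- For comparison, stop waiting on the flush; this approximates what a
-- write-back cache that ignores fsync does for every commit.
SET synchronous_commit = off;
INSERT INTO commit_probe DEFAULT VALUES;
INSERT INTO commit_probe DEFAULT VALUES;

If the first batch already shows sub-millisecond commits on plain disks, some layer in the write path is buffering the writes, and it should have a battery behind it before being trusted with pg_xlog.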
[ { "msg_contents": "The query optimizer fails to use a hash aggregate most of the time. This is\nan inconsistent behavior -- the queries below were happily using\nhash_aggregate on a previous pg_restore from the data.\n\nOn one particular class of tables this is especially painful. The example\ntable has 25 million rows, and when aggregating on a column that the\noptimizer expects only a few unique values, it chooses a full sort of those\n25 million rows before a group aggregate, rather than using a hash aggregate\nthat would be 2 to 4 orders of magnitude faster and use less memory.\n\nThe simple statement of this bug is the following EXPLAIN output and\ncorresponding output from the statistics tables. The actual query used has\na more complicated GROUP BY and aggregation (and joins, etc), but if it\ncan't get the most simple version of a sub query correct, of course the\ncomposite will be worse.\n\nThe condition will occur for any column used to group by regardless of the\nestimated # of unique items on that column. Even one that has only two\nunique values in a 25 million row table.\n\nrr=# explain SELECT count(distinct v_guid) as view_count, p_type FROM\np_log.creative_display_logs_012_2008_06_15 GROUP BY\np_type;\n QUERY\nPLAN\n------------------------------------------------------------\n--------------------------------------------------\n GroupAggregate (cost=5201495.80..5395385.38 rows=7 width=47)\n -> Sort (cost=5201495.80..5266125.63 rows=25851932 width=47)\n Sort Key: p_type\n -> Seq Scan on creative_display_logs_012_2008_06_15\n(cost=0.00..1223383.32 rows=25851932 width=47)\n\nrr=# select attname, null_frac, avg_width,n_distinct\n,correlation from pg_stats where\ntablename='creative_display_logs_012_2008_06_15'\nand attname in ('g_id', 'p_type', 'strat', 'datetime', 'ext_s_id', 't_id');\n attname | null_frac | avg_width | n_distinct | correlation\n----------------+-----------+-----------+------------+--------------\n g_id | 0 | 8 | 14 | 0.221548\n p_type | 0 | 4 | 7 | 0.350718\n datetime | 0 | 8 | 12584 | 0.977156\n ext_s_id | 0.001 | 38 | 11444 | -0.000842848\n strat | 0 | 13 | 11 | 0.147418\n t_id | 0 | 8 | 2 | 0.998711\n\n(5 rows)\n\nI have dumped, dropped, and restored this table twice recently. Both times\nfollowed by a full vacuum analyze. And in both cases the query optimizer\nbehaves differently. In one case the poor plan only occures when using the\npartition table inheritance facade rather than the direct-to-table version\nabove. In the other case (the current condition), all variants on the query\nare bad.\nThis definitely occurs in general and its reproducibility is affected by\npartitioning but not dependent on it as far as I can tell.\n\nThe database is tuned with the default optimizer settings for 8.3.3 plus\nconstraint exclusion for the partition tables enabled. Yes, hash_agg is on\n(actually, commented out so the default of on is active, verified in\npg_settings)\n\nThe configuration has ample RAM and all the memory tuning parameters are\ngenerous (shared_mem 7g, temp space 200m, sort/agg space 500m -- I've tried\nvarious settings here with no effect on the plan, just the execution of it\nw.r.t. 
disk based sort or mem based sort).\n\n\nThe table definition is the following, if that helps:\n Column | Type | Modifiers\n--------------------+-----------------------------+-----------\n v_guid | character varying(255) |\n site_id | bigint |\n c_id | bigint |\n item_id | bigint |\n creative_id | bigint |\n camp_id | bigint |\n p_type | integer |\n datetime | timestamp without time zone |\n date | date |\n ext_u_id | character varying(50) |\n ext_s_id | character varying(50) |\n u_guid | character varying(50) |\n strat | character varying(50) |\n sub_p_type | character varying(32) |\n exp_id | bigint |\n t_id | bigint |\n htmlpi_id | bigint |\n p_score | double precision |\n\n\nOf course DB hints would solve this. So would some sort of tuning parameter\nthat lets you dial up or down the tendency to do a hash aggregate rather\nthan a full sort followed by a group aggregate. This is broken rather\nseverely, especially in combination with partitions (where it is about 3x as\nlikely to fail to use a hash_aggregate where appropriate in limited\nexperiments so far -- there are a few thousand partition tables).\n\nAll I want is it to stop being brain-dead and deciding to sort large tables\nto produce aggregates. In fact, given the rarity in which a sort is\npreferred over a hash_agg with large tables, and the tendancy for aggregates\nto reduce the count by a factor of 10 or more -- i'd turn off the group\naggregate if possible!\n\nThanks for any help!\n\n-Scott", "msg_date": "Wed, 25 Jun 2008 13:42:38 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query Planner not choosing hash_aggregate appropriately." } ]
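One hedged observation to add here, since the post received no follow-up in this archive: a GROUP BY whose target list contains a DISTINCT aggregate, such as count(distinct v_guid), generally rules out hashed aggregation in the 8.3-era planner, because the executor handles the DISTINCT by sorting each group's input. If that is the trigger, restating the aggregate in two steps can bring the hash path back. The sketch below reuses the table and columns from the post; whether it actually wins still depends on the planner's row estimates and on work_mem.

-- Step 1: collapse to distinct (v_guid, p_type) pairs, which is plain
-- grouping and therefore eligible for a HashAggregate.
-- Step 2: count the surviving pairs per p_type.
SELECT count(*) AS view_count, p_type
FROM (
    SELECT DISTINCT v_guid, p_type
    FROM p_log.creative_display_logs_012_2008_06_15
) AS dedup
GROUP BY p_type;

The result matches count(distinct v_guid) ... GROUP BY p_type, since each distinct v_guid is counted once within each p_type.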
[ { "msg_contents": "Hi List;\n\nAnyone have any experiences to share per setting up a federated \narchitecture with PostgreSQL ? I wonder if the dblink contrib works \nwell in a federated scenario, specifically in the setup of the \nfederated views which equate to a select * from the same table on each \nfederated server ?\n\nThanks in advance...\n\n\n/Kevin\n", "msg_date": "Thu, 26 Jun 2008 14:33:51 -0600", "msg_from": "kevin kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Federated Postgresql architecture ?" }, { "msg_contents": "On Thu, Jun 26, 2008 at 4:33 PM, kevin kempter\n<[email protected]> wrote:\n> Anyone have any experiences to share per setting up a federated architecture\n> with PostgreSQL ? I wonder if the dblink contrib works well in a federated\n> scenario, specifically in the setup of the federated views which equate to a\n> select * from the same table on each federated server ?\n\nBecause Postgres currently lacks the ability to push down predicates\nto individual nodes over a database link, you have to spend a good\namount of time writing PL set-returning functions capable of adding\nappropriate WHERE clauses to queries sent over the link. There are\nother things you can do, but it's mostly hackery at this point in\ntime. IIRC, David Fetter is trying to get some of the required\npredicate information exposed for use in DBI-Link.\n\nNot to self-plug, but if you require it, EnterpriseDB includes\nOracle-style database links (SELECT col FROM table@node) which support\npredicate push-down.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 26 Jun 2008 16:57:23 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Federated Postgresql architecture ?" }, { "msg_contents": "On Thu, Jun 26, 2008 at 5:41 PM, Josh Berkus <[email protected]> wrote:\n>> Not to self-plug, but if you require it, EnterpriseDB includes\n>> Oracle-style database links (SELECT col FROM table@node) which support\n>> predicate push-down.\n>\n> Also check out Skytools: http://skytools.projects.postgresql.org/doc/\n\nHmm, I didn't think the Skype tools could really provide federated\ndatabase functionality without a good amount of custom work. Or, am I\nmistaken?\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 26 Jun 2008 17:40:54 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Federated Postgresql architecture ?" }, { "msg_contents": "Kevin,\n\n> Not to self-plug, but if you require it, EnterpriseDB includes\n> Oracle-style database links (SELECT col FROM table@node) which support\n> predicate push-down.\n\nAlso check out Skytools: http://skytools.projects.postgresql.org/doc/\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 26 Jun 2008 14:41:11 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Federated Postgresql architecture ?" }, { "msg_contents": "Jonah,\n\n> Hmm, I didn't think the Skype tools could really provide federated\n> database functionality without a good amount of custom work. 
Or, am I\n> mistaken?\n\nSure, what do you think pl/proxy is for?\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 26 Jun 2008 15:31:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Federated Postgresql architecture ?" }, { "msg_contents": "On Thu, Jun 26, 2008 at 6:31 PM, Josh Berkus <[email protected]> wrote:\n> Sure, what do you think pl/proxy is for?\n\nWell, considering that an application must be written specifically to\nmake use of it, and for very specific scenarios, I wouldn't consider\nit as making PostgreSQL a federated database. The pl/proxy\narchitecture certainly doesn't resemble federated in the sense of the\nother database vendors.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 26 Jun 2008 22:05:14 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Federated Postgresql architecture ?" }, { "msg_contents": "[email protected] (Josh Berkus) writes:\n> Jonah,\n>\n>> Hmm, I didn't think the Skype tools could really provide federated\n>> database functionality without a good amount of custom work. Or, am I\n>> mistaken?\n>\n> Sure, what do you think pl/proxy is for?\n\nAh, but the thing is, it changes the model from a relational one,\nwhere you can have fairly arbitrary \"where clauses,\" to one where\nparameterization of queries must be predetermined.\n\nThe \"hard part\" of federated database functionality at this point is\nthe [parenthesized portion] of...\n\n select * from table@node [where criterion = x];\n\nWhat we'd like to be able to do is to ascertain that [where criterion\n= x] portion, and run it on the remote DBMS, so that only the relevant\ntuples would come back.\n\nConsider...\n\nWhat if table@node is a remote table with 200 million tuples, and\n[where criterion = x] restricts the result set to 200 of those.\n\nIf you *cannot* push the \"where clause\" down to the remote node, then\nyou're stuck with pulling all 200 million tuples, and filtering out,\non the \"local\" node, the 200 tuples that need to be kept.\n\nTo do better, with pl/proxy, requires having a predetermined function\nthat would do that filtering, and if it's missing, you're stuck\npulling 200M tuples, and throwing out nearly all of them.\n\nIn contrast, with the work David Fetter's looking at, the [where\ncriterion = x] clause would get pushed to the node which the data is\nbeing drawn from, and so the query, when running on \"table@node,\"\ncould use indices, and return only the 200 tuples that are of\ninterest. \n\nIt's a really big win, if it works.\n-- \nselect 'cbbrowne' || '@' || 'cbbrowne.com';\nhttp://cbbrowne.com/info/lisp.html\n\"The avalanche has started, it is too late for the pebbles to vote\" \n-- Kosh, Vorlon Ambassador to Babylon 5\n", "msg_date": "Fri, 27 Jun 2008 14:16:57 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Federated Postgresql architecture ?" }, { "msg_contents": "On 6/27/08, Chris Browne <[email protected]> wrote:\n> [email protected] (Josh Berkus) writes:\n> > Jonah,\n> >\n> >> Hmm, I didn't think the Skype tools could really provide federated\n> >> database functionality without a good amount of custom work. 
Or, am I\n> >> mistaken?\n> >\n> > Sure, what do you think pl/proxy is for?\n>\n>\n> Ah, but the thing is, it changes the model from a relational one,\n> where you can have fairly arbitrary \"where clauses,\" to one where\n> parameterization of queries must be predetermined.\n>\n> The \"hard part\" of federated database functionality at this point is\n> the [parenthesized portion] of...\n>\n> select * from table@node [where criterion = x];\n>\n> What we'd like to be able to do is to ascertain that [where criterion\n> = x] portion, and run it on the remote DBMS, so that only the relevant\n> tuples would come back.\n>\n> Consider...\n>\n> What if table@node is a remote table with 200 million tuples, and\n> [where criterion = x] restricts the result set to 200 of those.\n>\n> If you *cannot* push the \"where clause\" down to the remote node, then\n> you're stuck with pulling all 200 million tuples, and filtering out,\n> on the \"local\" node, the 200 tuples that need to be kept.\n>\n> To do better, with pl/proxy, requires having a predetermined function\n> that would do that filtering, and if it's missing, you're stuck\n> pulling 200M tuples, and throwing out nearly all of them.\n>\n> In contrast, with the work David Fetter's looking at, the [where\n> criterion = x] clause would get pushed to the node which the data is\n> being drawn from, and so the query, when running on \"table@node,\"\n> could use indices, and return only the 200 tuples that are of\n> interest.\n>\n> It's a really big win, if it works.\n\nI agree that for doing free-form queries on remote database,\nthe PL/Proxy is not the right answer. (Although the recent patch\nto support dynamic records with AS clause at least makes them work.)\n\nBut I want to clarify it's goal - it is not to run \"pre-determined\nqueries.\" It is to run \"pre-determined complex transactions.\"\n\nAnd to make those work in a \"federated database\" takes huge amount\nof complexity that PL/Proxy simply sidesteps. At the price of\nrequiring function-based API. But as the function-based API has\nother advantages even without PL/Proxy, it seems fine tradeoff.\n\n-- \nmarko\n", "msg_date": "Mon, 30 Jun 2008 16:16:26 +0300", "msg_from": "\"Marko Kreen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Federated Postgresql architecture ?" }, { "msg_contents": "On Mon, Jun 30, 2008 at 9:16 AM, Marko Kreen <[email protected]> wrote:\n> But I want to clarify it's goal - it is not to run \"pre-determined\n> queries.\" It is to run \"pre-determined complex transactions.\"\n\nYes.\n\n> And to make those work in a \"federated database\" takes huge amount\n> of complexity that PL/Proxy simply sidesteps. At the price of\n> requiring function-based API. But as the function-based API has\n> other advantages even without PL/Proxy, it seems fine tradeoff.\n\nAgreed. PL/Proxy has its own set of advantages.\n\nAs usual, it really just depends on the application and its requirements.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Mon, 30 Jun 2008 09:34:27 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Federated Postgresql architecture ?" } ]
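For anyone staying on contrib/dblink while predicate push-down support matures, the usual workaround is to put the restriction inside the query text shipped to the remote node, so the filtering (and any index use) happens there. The connection string, table, and columns below are invented for illustration; only the dblink calling convention is the real one.

-- Filters locally: the remote node ships every row across the wire first.
SELECT *
FROM dblink('host=node1 dbname=sales',
            'SELECT order_id, total FROM orders')
     AS t(order_id int, total numeric)
WHERE t.order_id = 42;

-- Pushes the predicate by hand: the remote node can use its own index and
-- returns only the matching rows.
SELECT *
FROM dblink('host=node1 dbname=sales',
            'SELECT order_id, total FROM orders WHERE order_id = 42')
     AS t(order_id int, total numeric);

It is manual and fragile compared to real push-down, which is exactly the limitation described above, but it keeps the data transfer proportional to the result rather than to the table.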
[ { "msg_contents": "Hello,\n\nI have been searching on the net on how to tune and monitor performance \nof my postgresql server but not met with success. A lot of information \nis vague and most often then not the answer is \"it depends\". Can anyone \nof you refer me a nice guide or tutorial on this?\n\nThanks.\n\n", "msg_date": "Fri, 27 Jun 2008 19:53:03 +0530", "msg_from": "\"Nikhil G. Daddikar\" <[email protected]>", "msg_from_op": true, "msg_subject": "A guide/tutorial to performance monitoring and tuning" }, { "msg_contents": "Nikhil G. Daddikar <[email protected]> schrieb:\n\n> Hello,\n>\n> I have been searching on the net on how to tune and monitor performance \n> of my postgresql server but not met with success. A lot of information \n> is vague and most often then not the answer is \"it depends\". Can anyone \n> of you refer me a nice guide or tutorial on this?\n\nDepends ;-)\n\nYou can log queries with an execution time more than N milliseconds via\nlog_min_duration. You can analyse the log with tools like pgfouine. And\nyou can analyse such queries with EXPLAIN ANALYSE.\n\nBut, i don't know your current problem, that's why my answer are a little\nbit vague...\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Fri, 27 Jun 2008 17:37:49 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A guide/tutorial to performance monitoring and tuning" }, { "msg_contents": "On Fri, Jun 27, 2008 at 8:23 AM, Nikhil G. Daddikar <[email protected]> wrote:\n> Hello,\n>\n> I have been searching on the net on how to tune and monitor performance of\n> my postgresql server but not met with success. A lot of information is vague\n> and most often then not the answer is \"it depends\". Can anyone of you refer\n> me a nice guide or tutorial on this?\n\nIf you run nagios, lookup the pgsql nagios plugin. it's quite an\nimpressive little bit of code.\n", "msg_date": "Fri, 27 Jun 2008 11:21:32 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A guide/tutorial to performance monitoring and tuning" }, { "msg_contents": "On Fri, 27 Jun 2008, Nikhil G. Daddikar wrote:\n\n> I have been searching on the net on how to tune and monitor performance of my \n> postgresql server but not met with success. A lot of information is vague and \n> most often then not the answer is \"it depends\".\n\nThat's because it does depend. I collect up the best of resources out \nthere and keep track of them at \nhttp://wiki.postgresql.org/wiki/Performance_Optimization so if you didn't \nfind that yet there's probably some good ones you missed.\n\nRight now I'm working with a few other people to put together a more \nstraightforward single intro guide that should address some of the \nvagueness you point out here, but that's still a few weeks away from being \nready.\n\nMonitoring performance isn't really covered in any of this though. 
Right \nnow the best simple solution out there is probably Nagios with the \nPostgreSQL plug-in.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 29 Jun 2008 14:59:39 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A guide/tutorial to performance monitoring and tuning" }, { "msg_contents": "Le Friday 27 June 2008, Scott Marlowe a écrit :\n> On Fri, Jun 27, 2008 at 8:23 AM, Nikhil G. Daddikar <[email protected]> wrote:\n> > Hello,\n> >\n> > I have been searching on the net on how to tune and monitor performance\n> > of my postgresql server but not met with success. A lot of information is\n> > vague and most often then not the answer is \"it depends\". Can anyone of\n> > you refer me a nice guide or tutorial on this?\n>\n> If you run nagios, lookup the pgsql nagios plugin. it's quite an\n> impressive little bit of code.\n\nhttp://bucardo.org/check_postgres/ but it only supervise afaik\n\nyou can collect data and monitor with munin : \nhttp://pgfoundry.org/projects/muninpgplugins/\n\n\n-- \nCédric Villemain\nAdministrateur de Base de Données\nCel: +33 (0)6 74 15 56 53\nhttp://dalibo.com - http://dalibo.org", "msg_date": "Mon, 30 Jun 2008 15:54:08 +0200", "msg_from": "=?iso-8859-1?q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A guide/tutorial to performance monitoring and tuning" }, { "msg_contents": "On 2:59 pm 06/29/08 Greg Smith <[email protected]> wrote:\n> Right now I'm working with a few other people to put together a more\n> straightforward single intro guide that should address some of the\n> vagueness you point out here,\n\nWas that ever completed?\n\n", "msg_date": "Mon, 21 Jul 2008 17:27:31 -0400", "msg_from": "\"Francisco Reyes\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A guide/tutorial to performance monitoring and tuning" }, { "msg_contents": "On Mon, 21 Jul 2008, Francisco Reyes wrote:\n\n> On 2:59 pm 06/29/08 Greg Smith <[email protected]> wrote:\n>> Right now I'm working with a few other people to put together a more\n>> straightforward single intro guide that should address some of the\n>> vagueness you point out here,\n>\n> Was that ever completed?\n\nNot done yet; we're planning to have a first rev done in another couple of \nweeks. The work in progress is at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and I'm due \nto work out another set of improvements to that this week during OSCON.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 22 Jul 2008 01:24:54 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A guide/tutorial to performance monitoring and tuning" }, { "msg_contents": "On Mon, Jul 21, 2008 at 10:24 PM, Greg Smith <[email protected]> wrote:\n> On Mon, 21 Jul 2008, Francisco Reyes wrote:\n>\n>> On 2:59 pm 06/29/08 Greg Smith <[email protected]> wrote:\n>>>\n>>> Right now I'm working with a few other people to put together a more\n>>> straightforward single intro guide that should address some of the\n>>> vagueness you point out here,\n>>\n>> Was that ever completed?\n>\n> Not done yet; we're planning to have a first rev done in another couple of\n> weeks. 
The work in progress is at\n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and I'm due to\n> work out another set of improvements to that this week during OSCON.\n\nI'd also like to point out we're putting together some data revolving\nabout software raid, hardware raid, volume management, and filesystem\nperformance on a system donated by HP here:\n\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nNote that it's also a living guide and we've haven't started covering\nsome of the things I just mentioned.\n\nRegards,\nMark\n", "msg_date": "Mon, 28 Jul 2008 13:17:08 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: A guide/tutorial to performance monitoring and tuning" } ]
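To make the "it depends" a little more concrete while that guide is being written: besides log_min_duration_statement feeding a log analyzer such as pgfouine, and the Nagios plugin mentioned above, the built-in statistics views give a quick first look at where a server hurts. Two hedged examples, assuming the stats collector is running with its default settings:

-- Rough buffer cache hit ratio per database; persistently low numbers
-- usually mean the working set does not fit in shared_buffers plus OS cache.
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
FROM pg_stat_database
ORDER BY hit_pct;

-- Tables read mostly by sequential scans; candidates for a closer look at
-- indexing or at the queries that touch them.
SELECT relname, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;

Neither number is a verdict on its own; they are starting points for the EXPLAIN ANALYZE work the earlier replies describe.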
[ { "msg_contents": "Hi all,\n\nI need to specify servers and storage to run PostgreSQL. Does anyone\nknow any source of information (articles, presentations, books, etc.)\nwhich describes methods of hardware sizing for running a large\nPostgreSLQ installation?\n\nThank you in advance.\n\nSergio.\n", "msg_date": "Fri, 27 Jun 2008 14:56:21 -0300", "msg_from": "\"=?ISO-8859-1?Q?S=E9rgio_R_F_Oliveira?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Sources of information about sizing of hardwares to run PostgreSQL" }, { "msg_contents": "On Fri, 27 Jun 2008, Sérgio R F Oliveira wrote:\n\n> I need to specify servers and storage to run PostgreSQL. Does anyone\n> know any source of information (articles, presentations, books, etc.)\n> which describes methods of hardware sizing for running a large\n> PostgreSLQ installation?\n\nThere aren't any, just a fair number of people who know how to do it and \nsome scattered bits of lore on the subject. The quickest way to get some \nsort of estimate that is actually useful is to create a small prototype of \nsome tables you expect will be the larger ones for the application, load \nsome data into them, measure how big they are, and then extrapolate from \nthere. I'm dumping links and notes on the subject of measurements like \nthat http://wiki.postgresql.org/wiki/Disk_Usage that should get you \nstarted with such a simulation.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD", "msg_date": "Sun, 29 Jun 2008 15:04:21 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sources of information about sizing of hardwares to run\n PostgreSQL" } ]
[ { "msg_contents": "Hi,\nI am new to SQL and have two tables..., \"processor\" and \n\"users_processors\". The first table contains Processors:\n\nCREATE TABLE processor (\nid SERIAL,\nspeed varchar(50) NOT NULL,\ntype int2 NOT NULL,\nPRIMARY KEY (id)\n);\nCREATE UNIQUE INDEX processor_speed_index ON processors(lower(speed));\n\nExample:\n1 \"100MHz\" 0\n2 \"36GHz\" 7\n...\n\n\nThe second Table defines which processor one user has got:\n\nCREATE TABLE users_processors (\nuserid int REFERENCES users ON UPDATE CASCADE ON DELETE CASCADE,\nprocessorid int REFERENCES processors ON UPDATE CASCADE ON DELETE CASCADE,\nPRIMARY KEY(userid, processorid)\n);\nCREATE INDEX users_processors_processorid_index ON \nusers_processors(processorid);\nCREATE INDEX users_processors_processorid_index ON \nusers_processors(processorid);\n\nExample:\n1 2\n1 3\n1 4\n...\n2 1\n2 2\n...\n(The user \"1\" own processors 2,3,4 and the user 2 owns processors 1,2)\n\n\n__________________________________________________________\n\nNow, I would like to list all processors user \"1\" has got. The following \nquery does that:\nSELECT speed FROM processors WHERE id IN (SELECT processorid FROM \nusers_processors WHERE userid=1) ORDER BY speed ASC LIMIT 10 OFFSET 2;\n\nThis would return 10 processors beginning with number 3. I have read, \nthat this query is slow and can be faster. I analyzed it:\nLimit (cost=22.90..22.90 rows=1 width=118) (actual time=0.344..0.349 \nrows=9 loops=1)\n -> Sort (cost=22.90..22.90 rows=2 width=118) (actual \ntime=0.341..0.341 rows=11 loops=1)\n Sort Key: processors.speed\n Sort Method: quicksort Memory: 17kB\n -> Nested Loop (cost=15.03..22.89 rows=2 width=118) (actual \ntime=0.225..0.289 rows=11 loops=1)\n -> HashAggregate (cost=15.03..15.05 rows=2 width=4) \n(actual time=0.207..0.214 rows=11 loops=1)\n -> Bitmap Heap Scan on users_processors \n(cost=4.34..15.01 rows=11 width=4) (actual time=0.175..0.179 rows=11 \nloops=1)\n Recheck Cond: (userid = 1)\n -> Bitmap Index Scan on \nusers_processors_userid_index (cost=0.00..4.33 rows=11 width=0) (actual \ntime=0.159..0.159 rows=12 loops=1)\n Index Cond: (userid = 1)\n -> Index Scan using processors_pkey on processors \n(cost=0.00..3.90 rows=1 width=122) (actual time=0.004..0.004 rows=1 \nloops=11)\n Index Cond: (processors.id = \nusers_processors.processorid)\n Total runtime: 0.478 ms\n(13 rows)\n\n\n__________________________________________________________\n\n\nPeople say that this query is faster:\nSELECT speed FROM processors WHERE EXISTS (SELECT 1 FROM \nusers_processors WHERE userid=1 AND processorid=processors.id) ORDER BY \nspeed ASC LIMIT 10 OFFSET 2;\n\nAnalyze returns:\n Limit (cost=4404.52..4404.55 rows=10 width=118) (actual \ntime=0.179..0.184 rows=9 loops=1)\n -> Sort (cost=4404.52..4405.18 rows=265 width=118) (actual \ntime=0.176..0.177 rows=11 loops=1)\n Sort Key: processors.speed\n Sort Method: quicksort Memory: 17kB\n -> Seq Scan on processors (cost=0.00..4398.44 rows=265 \nwidth=118) (actual time=0.056..0.118 rows=11 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using users_processors_pkey on \nusers_processors (cost=0.00..8.27 rows=1 width=0) (actual \ntime=0.006..0.006 rows=1 loops=11)\n Index Cond: ((userid = 1) AND (processorid = $0))\n Total runtime: 0.267 ms\n(10 rows)\n\n\n\n\nThe second query is faster, but I have only used a very small table with \nless than 20 items. In real-world I will have tables with thousands of \nentries. 
I wonder if the second query is also faster in cases where I \nhave big tables, because it does a \"Seq Scan\", for me this looks like a \ncomplete table scan. This seams reasonable if we look at the query I do \nnot expect that it is possible to use an INDEX for the second query. So, \nis it slower?\n\nWhich query would you use, the first or the second one?\n\n\nI would also like to know the total number of processors one user has \ngot. I would use one of those queries and replace the \"SELECT speed\" \nwith \"SELECT count(*)\" and remove the LIMIT and OFFSET. Is this good? I \nhave read that count(*) is slow.\n\nKind regards\nUlrich\n", "msg_date": "Sat, 28 Jun 2008 17:22:41 +0200", "msg_from": "Ulrich <[email protected]>", "msg_from_op": true, "msg_subject": "Subquery WHERE IN or WHERE EXISTS faster?" }, { "msg_contents": "Ulrich <[email protected]> writes:\n> People say that [EXISTS is faster]\n\nPeople who say that are not reliable authorities, at least as far as\nPostgres is concerned. But it is always a bad idea to extrapolate\nresults on toy tables to large tables --- quite aside from measurement\nnoise and caching issues, the planner might pick a different plan when\nfaced with large tables. Load up a realistic amount of data and then\nsee what you get.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Jun 2008 11:53:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster? " }, { "msg_contents": "Hi,\nI have added a bit of dummy Data, 100000 processors, 10000 users, each \nuser got around 12 processors.\n\nI have tested both queries. First of all, I was surprised that it is \nthat fast :) Here are the results:\n\n\nEXPLAIN ANALYZE SELECT speed FROM processors WHERE id IN (SELECT \nprocessorid FROM users_processors WHERE userid=4040) ORDER BY speed ASC \nLIMIT 10 OFFSET 1;\n\nLimit (cost=113.73..113.75 rows=7 width=5) (actual time=0.335..0.340 \nrows=10 loops=1)\n -> Sort (cost=113.73..113.75 rows=8 width=5) (actual \ntime=0.332..0.333 rows=11 loops=1)\n Sort Key: processors.speed\n Sort Method: quicksort Memory: 17kB\n -> Nested Loop (cost=47.22..113.61 rows=8 width=5) (actual \ntime=0.171..0.271 rows=13 loops=1)\n -> HashAggregate (cost=47.22..47.30 rows=8 width=4) \n(actual time=0.148..0.154 rows=13 loops=1)\n -> Bitmap Heap Scan on users_processors \n(cost=4.36..47.19 rows=12 width=4) (actual time=0.074..0.117 rows=13 \nloops=1)\n Recheck Cond: (userid = 4040)\n -> Bitmap Index Scan on \nusers_processors_userid_index (cost=0.00..4.35 rows=12 width=0) (actual \ntime=0.056..0.056 rows=13 loops=1)\n Index Cond: (userid = 4040)\n -> Index Scan using processors_pkey on processors \n(cost=0.00..8.28 rows=1 width=9) (actual time=0.006..0.007 rows=1 loops=13)\n Index Cond: (processors.id = \nusers_processors.processorid)\n Total runtime: 0.471 ms\n(13 rows)\n\n___________\n\nEXPLAIN ANALYZE SELECT speed FROM processors WHERE EXISTS (SELECT 1 FROM \nusers_processors WHERE userid=4040 AND processorid=processors.id) ORDER \nBY speed ASC LIMIT 10 OFFSET 1;\n\n Limit (cost=831413.86..831413.89 rows=10 width=5) (actual \ntime=762.475..762.482 rows=10 loops=1)\n -> Sort (cost=831413.86..831538.86 rows=50000 width=5) (actual \ntime=762.471..762.473 rows=11 loops=1)\n Sort Key: processors.speed\n Sort Method: quicksort Memory: 17kB\n -> Seq Scan on processors (cost=0.00..830299.00 rows=50000 \nwidth=5) (actual time=313.591..762.411 rows=13 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using 
users_processors_pkey on \nusers_processors (cost=0.00..8.29 rows=1 width=0) (actual \ntime=0.006..0.006 rows=0 loops=100000)\n Index Cond: ((userid = 4040) AND (processorid = $0))\n Total runtime: 762.579 ms\n(10 rows)\n\n\n\n\nAs you can see the second query is much slower. First I thought \"Just a \ndifference of 0.3ms?\", but then I realized that it was 762ms not 0.762 ;-).\nBoth queries return the same result, so I will use #1 and count(*) takes \njust 0.478ms if I use query #1.\n\nKind Regards,\nUlrich\n\nTom Lane wrote:\n> Ulrich <[email protected]> writes:\n> \n>> People say that [EXISTS is faster]\n>> \n>\n> People who say that are not reliable authorities, at least as far as\n> Postgres is concerned. But it is always a bad idea to extrapolate\n> results on toy tables to large tables --- quite aside from measurement\n> noise and caching issues, the planner might pick a different plan when\n> faced with large tables. Load up a realistic amount of data and then\n> see what you get.\n>\n> \t\t\tregards, tom lane\n>\n> \n\n", "msg_date": "Sun, 29 Jun 2008 00:07:32 +0200", "msg_from": "Ulrich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" }, { "msg_contents": "\"Ulrich\" <[email protected]> writes:\n\n> EXPLAIN ANALYZE SELECT speed FROM processors WHERE id IN (SELECT processorid\n> FROM users_processors WHERE userid=4040) ORDER BY speed ASC LIMIT 10 OFFSET 1;\n>\n> Limit (cost=113.73..113.75 rows=7 width=5) (actual time=0.335..0.340 rows=10 loops=1)\n> -> Sort (cost=113.73..113.75 rows=8 width=5) (actual time=0.332..0.333 rows=11 loops=1)\n\n ^^\n\n> Sort Key: processors.speed\n> Sort Method: quicksort Memory: 17kB\n> -> Nested Loop (cost=47.22..113.61 rows=8 width=5) (actual time=0.171..0.271 rows=13 loops=1)\n> -> HashAggregate (cost=47.22..47.30 rows=8 width=4) (actual time=0.148..0.154 rows=13 loops=1)\n> -> Bitmap Heap Scan on users_processors (cost=4.36..47.19 rows=12 width=4) (actual time=0.074..0.117 rows=13 loops=1)\n\n ^^\n\n> Index Cond: (userid = 4040)\n> -> Index Scan using processors_pkey on processors (cost=0.00..8.28 rows=1 width=9) (actual time=0.006..0.007 rows=1 loops=13)\n> Index Cond: (processors.id = users_processors.processorid)\n\n\nIt looks to me like you have some processors which appear in\n\"users_processors\" but not in \"processors\". I don't know your data model but\nthat sounds like broken referential integrity to me.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Sun, 29 Jun 2008 00:01:08 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" }, { "msg_contents": "Hi,\nYes that looks strange. But it is not possible that I have processors in \n\"users_processors\" which do not appear in \"processors\", because \n\"users_processors\" contains foreign keys to \"processors\".\n\nIf I remove the LIMIT 10 OFFSET 1 the line \"Sort (cost=.... rows=11..\" \ndisappears and the query return 13 correct processors from \"processors\". \nThen, I have tested different values for OFFSET. 
If I set Offset to \"2\" \nand LIMIT=10 the line is:\n Sort (cost=113.73..113.75 rows=8 width=5) (actual \ntime=0.322..0.330 rows=12 loops=1)\nIf I set Offset to \"3\" and LIMIT=10 it is\n Sort (cost=113.73..113.75 rows=8 width=5) (actual \ntime=0.321..0.328 rows=13 loops=1)\n\nIt looks like if this \"row\" is something like min(max_rows=13, \nLIMIT+OFFSET). But I do not completely understand the Syntax... ;-)\n\nKind regards\nUlrich\n\nGregory Stark wrote:\n> \"Ulrich\" <[email protected]> writes:\n>\n> \n>> EXPLAIN ANALYZE SELECT speed FROM processors WHERE id IN (SELECT processorid\n>> FROM users_processors WHERE userid=4040) ORDER BY speed ASC LIMIT 10 OFFSET 1;\n>>\n>> Limit (cost=113.73..113.75 rows=7 width=5) (actual time=0.335..0.340 rows=10 loops=1)\n>> -> Sort (cost=113.73..113.75 rows=8 width=5) (actual time=0.332..0.333 rows=11 loops=1)\n>> \n>\n> ^^\n>\n> \n>> Sort Key: processors.speed\n>> Sort Method: quicksort Memory: 17kB\n>> -> Nested Loop (cost=47.22..113.61 rows=8 width=5) (actual time=0.171..0.271 rows=13 loops=1)\n>> -> HashAggregate (cost=47.22..47.30 rows=8 width=4) (actual time=0.148..0.154 rows=13 loops=1)\n>> -> Bitmap Heap Scan on users_processors (cost=4.36..47.19 rows=12 width=4) (actual time=0.074..0.117 rows=13 loops=1)\n>> \n>\n> ^^\n>\n> \n>> Index Cond: (userid = 4040)\n>> -> Index Scan using processors_pkey on processors (cost=0.00..8.28 rows=1 width=9) (actual time=0.006..0.007 rows=1 loops=13)\n>> Index Cond: (processors.id = users_processors.processorid)\n>> \n>\n>\n> It looks to me like you have some processors which appear in\n> \"users_processors\" but not in \"processors\". I don't know your data model but\n> that sounds like broken referential integrity to me.\n>\n> \n\n", "msg_date": "Sun, 29 Jun 2008 12:15:25 +0200", "msg_from": "Ulrich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" }, { "msg_contents": "\"Ulrich\" <[email protected]> writes:\n\n> Hi,\n> Yes that looks strange. But it is not possible that I have processors in\n> \"users_processors\" which do not appear in \"processors\", because\n> \"users_processors\" contains foreign keys to \"processors\".\n>\n> If I remove the LIMIT 10 OFFSET 1 the line \"Sort (cost=.... rows=11..\"\n> disappears and the query return 13 correct processors from \"processors\". \n\nOh, er, my bad. That makes perfect sense. The \"actual\" numbers can be affected\nby what records are actually requested. The LIMIT prevents the records beyond\n11 from ever being requested even though they exist. \n\nWhile the bitmap heap scan has to fetch all the records even though they don't\nall get used, the nested loop only fetches the records as requested.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Sun, 29 Jun 2008 13:08:57 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" }, { "msg_contents": "On Sat, Jun 28, 2008 at 10:53 AM, Tom Lane <[email protected]> wrote:\n> Ulrich <[email protected]> writes:\n>> People say that [EXISTS is faster]\n>\n> People who say that are not reliable authorities, at least as far as\n> Postgres is concerned. But it is always a bad idea to extrapolate\n> results on toy tables to large tables --- quite aside from measurement\n> noise and caching issues, the planner might pick a different plan when\n> faced with large tables. 
Load up a realistic amount of data and then\n> see what you get.\n>\n\ni've made some queries run faster using EXISTS instead of large IN\nclauses... actually, it was NOT EXISTS replacing a NOT IN\n\nwhile i'm not telling EXISTS is better i actually know in some cases is better\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nGuayaquil - Ecuador\nCel. (593) 87171157\n", "msg_date": "Sun, 29 Jun 2008 23:33:57 -0500", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" }, { "msg_contents": "\"Jaime Casanova\" <[email protected]> writes:\n> i've made some queries run faster using EXISTS instead of large IN\n> clauses... actually, it was NOT EXISTS replacing a NOT IN\n\nThat's just about entirely unrelated ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jun 2008 00:48:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster? " }, { "msg_contents": "\n\nOn Jun 28, 2008, at 4:07 PM, Ulrich wrote:\n\n> Hi,\n> I have added a bit of dummy Data, 100000 processors, 10000 users, \n> each user got around 12 processors.\n>\n> I have tested both queries. First of all, I was surprised that it is \n> that fast :) Here are the results:\n>\n>\n> EXPLAIN ANALYZE SELECT speed FROM processors WHERE id IN (SELECT \n> processorid FROM users_processors WHERE userid=4040) ORDER BY speed \n> ASC LIMIT 10 OFFSET 1;\n>\n> Limit (cost=113.73..113.75 rows=7 width=5) (actual \n> time=0.335..0.340 rows=10 loops=1)\n> -> Sort (cost=113.73..113.75 rows=8 width=5) (actual \n> time=0.332..0.333 rows=11 loops=1)\n> Sort Key: processors.speed\n> Sort Method: quicksort Memory: 17kB\n> -> Nested Loop (cost=47.22..113.61 rows=8 width=5) (actual \n> time=0.171..0.271 rows=13 loops=1)\n> -> HashAggregate (cost=47.22..47.30 rows=8 width=4) \n> (actual time=0.148..0.154 rows=13 loops=1)\n> -> Bitmap Heap Scan on users_processors \n> (cost=4.36..47.19 rows=12 width=4) (actual time=0.074..0.117 rows=13 \n> loops=1)\n> Recheck Cond: (userid = 4040)\n> -> Bitmap Index Scan on \n> users_processors_userid_index (cost=0.00..4.35 rows=12 width=0) \n> (actual time=0.056..0.056 rows=13 loops=1)\n> Index Cond: (userid = 4040)\n> -> Index Scan using processors_pkey on processors \n> (cost=0.00..8.28 rows=1 width=9) (actual time=0.006..0.007 rows=1 \n> loops=13)\n> Index Cond: (processors.id = \n> users_processors.processorid)\n> Total runtime: 0.471 ms\n> (13 rows)\n>\n> ___________\n>\n> EXPLAIN ANALYZE SELECT speed FROM processors WHERE EXISTS (SELECT 1 \n> FROM users_processors WHERE userid=4040 AND \n> processorid=processors.id) ORDER BY speed ASC LIMIT 10 OFFSET 1;\n>\n> Limit (cost=831413.86..831413.89 rows=10 width=5) (actual \n> time=762.475..762.482 rows=10 loops=1)\n> -> Sort (cost=831413.86..831538.86 rows=50000 width=5) (actual \n> time=762.471..762.473 rows=11 loops=1)\n> Sort Key: processors.speed\n> Sort Method: quicksort Memory: 17kB\n> -> Seq Scan on processors (cost=0.00..830299.00 rows=50000 \n> width=5) (actual time=313.591..762.411 rows=13 loops=1)\n> Filter: (subplan)\n> SubPlan\n> -> Index Scan using users_processors_pkey on \n> users_processors (cost=0.00..8.29 rows=1 width=0) (actual \n> time=0.006..0.006 rows=0 loops=100000)\n> Index Cond: ((userid = 4040) AND (processorid = \n> $0))\n> Total runtime: 762.579 ms\n> (10 rows)\n>\n>\n>\n>\n> As you can see the second query is much slower. 
First I thought \n> \"Just a difference of 0.3ms?\", but then I realized that it was 762ms \n> not 0.762 ;-).\n> Both queries return the same result, so I will use #1 and count(*) \n> takes just 0.478ms if I use query #1.\n>\n\n\nThis is what I've found with tables ranging in the millions of rows.\n\nUsing IN is better when you've got lots of rows to check against the \nIN set and the IN set may be large and possibly complicated to \nretrieve (i.e. lots of joins, or expensive functions).\n\nPostgres will normally build a hash table of the IN set and just \nsearch that hash table. It's especially fast if the entire hash table \nthat is built can fit into RAM. The cpu/io cost of building the IN \nset can be quite large because it needs to fetch every tuple to hash \nit, but this can be faster then searching tuple by tuple through \npossibly many indexes and tables like EXISTS does. I like to increase \nwork_mem a lot (512mb and up) if I know I'm going to be doing a lot of \nmatches against a large IN set of rows because I'd prefer for that \nhash table to never to be written to disk.\n\nEXISTS is better when you're doing fewer matches because it will pull \nthe rows out one at a time from its query possibly using indexes, its \nmain advantage is that it doesn't pull all of the tuples before it \nstarts processing matches.\n\nSo in summary both are good to know how to use, but choosing which one \nto use can really depend on your data set and resources.\n\nCheers,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n", "msg_date": "Mon, 30 Jun 2008 00:50:27 -0600", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" }, { "msg_contents": "I think it will be fast, because the \"IN set\", which is the result of \n\"SELECT processorid FROM users_processors WHERE userid=4040\", is limited \nto a maximum of ~500 processors which is not very big. Increasing \nPostgres' RAM would be difficult for me, because I am only running a \nvery small server with 256MB RAM and the webserver also likes to use \nsome RAM.\n\nDoes Postgre cache the HASH-Table for later use? For example when the \nuser reloads the website.\n\nKind regards\nUlrich\n\nRusty Conover wrote:\n> This is what I've found with tables ranging in the millions of rows.\n>\n> Using IN is better when you've got lots of rows to check against the \n> IN set and the IN set may be large and possibly complicated to \n> retrieve (i.e. lots of joins, or expensive functions).\n>\n> Postgres will normally build a hash table of the IN set and just \n> search that hash table. It's especially fast if the entire hash table \n> that is built can fit into RAM. The cpu/io cost of building the IN \n> set can be quite large because it needs to fetch every tuple to hash \n> it, but this can be faster then searching tuple by tuple through \n> possibly many indexes and tables like EXISTS does. 
I like to increase \n> work_mem a lot (512mb and up) if I know I'm going to be doing a lot of \n> matches against a large IN set of rows because I'd prefer for that \n> hash table to never to be written to disk.\n>\n> EXISTS is better when you're doing fewer matches because it will pull \n> the rows out one at a time from its query possibly using indexes, its \n> main advantage is that it doesn't pull all of the tuples before it \n> starts processing matches.\n>\n> So in summary both are good to know how to use, but choosing which one \n> to use can really depend on your data set and resources.\n>\n> Cheers,\n>\n> Rusty\n> -- \n> Rusty Conover\n> InfoGears Inc.\n> http://www.infogears.com\n>\n\n", "msg_date": "Mon, 30 Jun 2008 09:29:08 +0200", "msg_from": "Ulrich <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" }, { "msg_contents": "\nOn Jun 30, 2008, at 1:29 AM, Ulrich wrote:\n\n> I think it will be fast, because the \"IN set\", which is the result \n> of \"SELECT processorid FROM users_processors WHERE userid=4040\", is \n> limited to a maximum of ~500 processors which is not very big. \n> Increasing Postgres' RAM would be difficult for me, because I am \n> only running a very small server with 256MB RAM and the webserver \n> also likes to use some RAM.\n>\n> Does Postgre cache the HASH-Table for later use? For example when \n> the user reloads the website.\n>\n\nNo the hash table only lives as long as the query is being executed. \nIf you're looking for generic caching, I'd suggest memcached may be \nable to fill your needs.\n\nCheers,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n\n\n\n\n\n\n", "msg_date": "Mon, 30 Jun 2008 01:44:45 -0600", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" }, { "msg_contents": "Hi Ulrich, do you try with\n\nSELECT p.speed FROM processor p\n INNER JOIN users_processors up ON p.id=up.processorid\n AND up.userid=1\n?\nOr your question is only about IN and EXIST?\n\nregards,\n\nSergio Gabriel Rodriguez\nCorrientes - Argentina\nhttp://www.3trex.com.ar\n\nOn Mon, Jun 30, 2008 at 4:44 AM, Rusty Conover <[email protected]> wrote:\n>\n> On Jun 30, 2008, at 1:29 AM, Ulrich wrote:\n>\n>> I think it will be fast, because the \"IN set\", which is the result of\n>> \"SELECT processorid FROM users_processors WHERE userid=4040\", is limited to\n>> a maximum of ~500 processors which is not very big. Increasing Postgres' RAM\n>> would be difficult for me, because I am only running a very small server\n>> with 256MB RAM and the webserver also likes to use some RAM.\n>>\n>> Does Postgre cache the HASH-Table for later use? For example when the user\n>> reloads the website.\n>>\n>\n> No the hash table only lives as long as the query is being executed. If\n> you're looking for generic caching, I'd suggest memcached may be able to\n> fill your needs.\n>\n> Cheers,\n>\n> Rusty\n> --\n> Rusty Conover\n> InfoGears Inc.\n> http://www.infogears.com\n>\n>\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sat, 5 Jul 2008 09:02:18 -0300", "msg_from": "\"Sergio Gabriel Rodriguez\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" 
}, { "msg_contents": "Hi Ulrich, do you try with\n\nSELECT p.speed FROM processor p\n INNER JOIN users_processors up ON p.id=up.processorid\n AND up.userid=1\n?\nOr your question is only about IN and EXIST?\n\nregards,\n\nSergio Gabriel Rodriguez\nCorrientes - Argentina\nhttp://www.3trex.com.ar\n\nOn Sat, Jun 28, 2008 at 7:07 PM, Ulrich <[email protected]> wrote:\n> Hi,\n> I have added a bit of dummy Data, 100000 processors, 10000 users, each user\n> got around 12 processors.\n>\n> I have tested both queries. First of all, I was surprised that it is that\n> fast :) Here are the results:\n>\n>\n> EXPLAIN ANALYZE SELECT speed FROM processors WHERE id IN (SELECT processorid\n> FROM users_processors WHERE userid=4040) ORDER BY speed ASC LIMIT 10 OFFSET\n> 1;\n>\n> Limit (cost=113.73..113.75 rows=7 width=5) (actual time=0.335..0.340\n> rows=10 loops=1)\n> -> Sort (cost=113.73..113.75 rows=8 width=5) (actual time=0.332..0.333\n> rows=11 loops=1)\n> Sort Key: processors.speed\n> Sort Method: quicksort Memory: 17kB\n> -> Nested Loop (cost=47.22..113.61 rows=8 width=5) (actual\n> time=0.171..0.271 rows=13 loops=1)\n> -> HashAggregate (cost=47.22..47.30 rows=8 width=4) (actual\n> time=0.148..0.154 rows=13 loops=1)\n> -> Bitmap Heap Scan on users_processors\n> (cost=4.36..47.19 rows=12 width=4) (actual time=0.074..0.117 rows=13\n> loops=1)\n> Recheck Cond: (userid = 4040)\n> -> Bitmap Index Scan on\n> users_processors_userid_index (cost=0.00..4.35 rows=12 width=0) (actual\n> time=0.056..0.056 rows=13 loops=1)\n> Index Cond: (userid = 4040)\n> -> Index Scan using processors_pkey on processors\n> (cost=0.00..8.28 rows=1 width=9) (actual time=0.006..0.007 rows=1 loops=13)\n> Index Cond: (processors.id =\n> users_processors.processorid)\n> Total runtime: 0.471 ms\n> (13 rows)\n>\n> ___________\n>\n> EXPLAIN ANALYZE SELECT speed FROM processors WHERE EXISTS (SELECT 1 FROM\n> users_processors WHERE userid=4040 AND processorid=processors.id) ORDER BY\n> speed ASC LIMIT 10 OFFSET 1;\n>\n> Limit (cost=831413.86..831413.89 rows=10 width=5) (actual\n> time=762.475..762.482 rows=10 loops=1)\n> -> Sort (cost=831413.86..831538.86 rows=50000 width=5) (actual\n> time=762.471..762.473 rows=11 loops=1)\n> Sort Key: processors.speed\n> Sort Method: quicksort Memory: 17kB\n> -> Seq Scan on processors (cost=0.00..830299.00 rows=50000 width=5)\n> (actual time=313.591..762.411 rows=13 loops=1)\n> Filter: (subplan)\n> SubPlan\n> -> Index Scan using users_processors_pkey on\n> users_processors (cost=0.00..8.29 rows=1 width=0) (actual time=0.006..0.006\n> rows=0 loops=100000)\n> Index Cond: ((userid = 4040) AND (processorid = $0))\n> Total runtime: 762.579 ms\n> (10 rows)\n>\n>\n>\n>\n> As you can see the second query is much slower. First I thought \"Just a\n> difference of 0.3ms?\", but then I realized that it was 762ms not 0.762 ;-).\n> Both queries return the same result, so I will use #1 and count(*) takes\n> just 0.478ms if I use query #1.\n>\n> Kind Regards,\n> Ulrich\n>\n> Tom Lane wrote:\n>>\n>> Ulrich <[email protected]> writes:\n>>\n>>>\n>>> People say that [EXISTS is faster]\n>>>\n>>\n>> People who say that are not reliable authorities, at least as far as\n>> Postgres is concerned. But it is always a bad idea to extrapolate\n>> results on toy tables to large tables --- quite aside from measurement\n>> noise and caching issues, the planner might pick a different plan when\n>> faced with large tables. 
Load up a realistic amount of data and then\n>> see what you get.\n>>\n>> regards, tom lane\n>>\n>>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sat, 5 Jul 2008 09:14:05 -0300", "msg_from": "\"Sergio Gabriel Rodriguez\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Subquery WHERE IN or WHERE EXISTS faster?" } ]
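A compact sketch of the three formulations discussed in the thread above, using the table and column names from the posted plans. It is illustrative only, not the posters' final code, and it assumes users_processors has a primary key on (userid, processorid), which is what the index scan on users_processors_pkey in the plans suggests, so the join form needs no DISTINCT.

-- Sketch only: IN, EXISTS and JOIN forms of the same question.

-- IN form (the fast plan above: HashAggregate feeding a nested loop):
SELECT speed
FROM processors
WHERE id IN (SELECT processorid FROM users_processors WHERE userid = 4040)
ORDER BY speed ASC LIMIT 10 OFFSET 1;

-- EXISTS form (planned here as a correlated subplan, probed once per
-- processors row, hence the sequential scan and the 762 ms runtime):
SELECT speed
FROM processors p
WHERE EXISTS (SELECT 1 FROM users_processors up
              WHERE up.userid = 4040 AND up.processorid = p.id)
ORDER BY speed ASC LIMIT 10 OFFSET 1;

-- Plain join, as suggested later in the thread; with (userid, processorid)
-- unique, each matching processor appears exactly once:
SELECT p.speed
FROM processors p
JOIN users_processors up ON up.processorid = p.id
WHERE up.userid = 4040
ORDER BY p.speed ASC LIMIT 10 OFFSET 1;

-- Counting a user's processors: if every users_processors row references an
-- existing processor, as the foreign key mentioned upthread guarantees,
-- no join is needed at all:
SELECT count(*) FROM users_processors WHERE userid = 4040;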
[ { "msg_contents": "Hi,\n\nI have a query\n\nselect count(*)\n from result\n where exists\n (select * from item where item.url LIKE result.url || '%' limit 1);\n\nwhich basically returns the number of items which exist in table \nresult and match a URL in table item by its prefix.\nI read all about idexes (http://www.postgresql.org/docs/8.3/static/indexes-types.html \n) and especially this part:\n\"The optimizer can also use a B-tree index for queries involving the \npattern matching operators LIKE and ~ if the pattern is a constant and \nis anchored to the beginning of the string � for example, col LIKE 'foo \n%' or col ~ '^foo', but not col LIKE '%bar'.\"\n\nSince my server does not use the C locale I created the index with\n\nCREATE INDEX test_index\n ON item\n USING btree\n (url varchar_pattern_ops);\n\nwhich works fine for queries like\n\n SELECT distinct url from item where url like 'http://www.micro%' \nlimit 10;\n\nexplain analyze shows:\n\"Limit (cost=9.53..9.54 rows=1 width=34) (actual time=80.809..80.856 \nrows=10 loops=1)\"\n\" -> Unique (cost=9.53..9.54 rows=1 width=34) (actual \ntime=80.806..80.835 rows=10 loops=1)\"\n\" -> Sort (cost=9.53..9.53 rows=1 width=34) (actual \ntime=80.802..80.812 rows=11 loops=1)\"\n\" Sort Key: url\"\n\" Sort Method: quicksort Memory: 306kB\"\n\" -> Index Scan using test_index on item \n(cost=0.00..9.52 rows=1 width=34) (actual time=0.030..6.165 rows=2254 \nloops=1)\"\n\" Index Cond: (((url)::text ~>=~ 'http:// \nwww.micro'::text) AND ((url)::text ~<~ 'http://www.micrp'::text))\"\n\" Filter: ((url)::text ~~ 'http://www.micro%'::text)\"\n\"Total runtime: 80.908 ms\"\n\nwhich is great but if I run the query with the subselect it uses a \nsequence scan:\n\nselect *\n from result\n where exists\n (select * from item where item.url LIKE result.url || '%' limit 1) \nlimit 10;\n\n\"Limit (cost=0.00..96.58 rows=10 width=36) (actual \ntime=12.660..35295.928 rows=10 loops=1)\"\n\" -> Seq Scan on result (cost=0.00..93886121.77 rows=9721314 \nwidth=36) (actual time=12.657..35295.906 rows=10 loops=1)\"\n\" Filter: (subplan)\"\n\" SubPlan\"\n\" -> Limit (cost=0.00..4.81 rows=1 width=42) (actual \ntime=2715.061..2715.061 rows=1 loops=13)\"\n\" -> Seq Scan on item (cost=0.00..109589.49 \nrows=22781 width=42) (actual time=2715.055..2715.055 rows=1 loops=13)\"\n\" Filter: ((url)::text ~~ (($0)::text || \n'%'::text))\"\n\"Total runtime: 35295.994 ms\"\n\n\nThe only explaination is that I don't use a constant when comparing \nthe values. But actually it is a constant...\n\n\nany help?\n\nusing postgres 8.3.3 on ubuntu.\n\n\n\nCheers,\n\nmoritz\n\n", "msg_date": "Sat, 28 Jun 2008 18:24:42 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": "On Sat, Jun 28, 2008 at 06:24:42PM +0200, Moritz Onken wrote:\n> SELECT distinct url from item where url like 'http://www.micro%' limit \n> 10;\n\nHere, the planner knows the pattern beforehand, and can see that it's a\nsimple prefix.\n> select *\n> from result\n> where exists\n> (select * from item where item.url LIKE result.url || '%' limit 1) \n> limit 10;\n\nHere it cannot (what if result.url was '%foo%'?).\n\nTry using something like (item.url >= result.url && item.url <= result.url ||\n'z'), substituting an appropriately high character for 'z'.\n\n> The only explaination is that I don't use a constant when comparing the \n> values. 
But actually it is a constant...\n\nIt's not a constant at planning time.\n\nAlso note that you'd usually want to use IN instead of a WHERE EXISTS.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 28 Jun 2008 21:19:31 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": "\n\nAnfang der weitergeleiteten E-Mail:\n\n> Von: Moritz Onken <[email protected]>\n> Datum: 30. Juni 2008 09:16:06 MESZ\n> An: Steinar H. Gunderson <[email protected]>\n> Betreff: Re: [PERFORM] Planner should use index on a LIKE 'foo%' query\n>\n>\n> Am 28.06.2008 um 21:19 schrieb Steinar H. Gunderson:\n>\n>> On Sat, Jun 28, 2008 at 06:24:42PM +0200, Moritz Onken wrote:\n>>> SELECT distinct url from item where url like 'http://www.micro%' \n>>> limit\n>>> 10;\n>>\n>> Here, the planner knows the pattern beforehand, and can see that \n>> it's a\n>> simple prefix.\n>>> select *\n>>> from result\n>>> where exists\n>>> (select * from item where item.url LIKE result.url || '%' limit 1)\n>>> limit 10;\n>>\n>> Here it cannot (what if result.url was '%foo%'?).\n>\n> That's right. Thanks for that hint. Is there a Postgres function \n> which returns a constant (possibly an escape function)?\n>>\n>>\n>> Try using something like (item.url >= result.url && item.url <= \n>> result.url ||\n>> 'z'), substituting an appropriately high character for 'z'.\n>>\n>>> The only explaination is that I don't use a constant when \n>>> comparing the\n>>> values. But actually it is a constant...\n>\n> I created a new column in \"item\" where I store the shortened url \n> which makes \"=\" comparisons possible.\n>\n> the result table has 20.000.000 records and the item table 5.000.000.\n> The query\n>\n> select count(1) from result where url in (select shorturl from item \n> where shorturl = result.url);\n>\n> will take about 8 hours (still running, just guessing). Is this \n> reasonable on a system with 1 GB of RAM and a AMD Athlon 64 3200+ \n> processor? (1 SATA HDD)\n>\n> regards,\n>\n> moritz\n>\n\n", "msg_date": "Mon, 30 Jun 2008 09:16:44 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": "\nAm 28.06.2008 um 21:19 schrieb Steinar H. Gunderson:\n\n> On Sat, Jun 28, 2008 at 06:24:42PM +0200, Moritz Onken wrote:\n>> SELECT distinct url from item where url like 'http://www.micro%' \n>> limit\n>> 10;\n>\n> Here, the planner knows the pattern beforehand, and can see that \n> it's a\n> simple prefix.\n>> select *\n>> from result\n>> where exists\n>> (select * from item where item.url LIKE result.url || '%' limit 1)\n>> limit 10;\n>\n> Here it cannot (what if result.url was '%foo%'?).\n\nThat's right. Thanks for that hint. Is there a Postgres function which \nreturns a constant (possibly an escape function)?\n>\n>\n> Try using something like (item.url >= result.url && item.url <= \n> result.url ||\n> 'z'), substituting an appropriately high character for 'z'.\n>\n>> The only explaination is that I don't use a constant when comparing \n>> the\n>> values. 
But actually it is a constant...\n\nI created a new column in \"item\" where I store the shortened url which \nmakes \"=\" comparisons possible.\n\nthe result table has 20.000.000 records and the item table 5.000.000.\nThe query\n\nselect count(1) from result where url in (select shorturl from item \nwhere shorturl = result.url);\n\nwill take about 8 hours (still running, just guessing). Is this \nreasonable on a system with 1 GB of RAM and a AMD Athlon 64 3200+ \nprocessor? (1 SATA HDD)\n\nregards,\n\nmoritz\n\n\n", "msg_date": "Mon, 30 Jun 2008 09:22:12 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": "On Mon, 30 Jun 2008, Moritz Onken wrote:\n> I created a new column in \"item\" where I store the shortened url which makes \n> \"=\" comparisons possible.\n\nGood idea. Now create an index on that column.\n\n> select count(1) from result where url in (select shorturl from item where \n> shorturl = result.url);\n\nWhat on earth is wrong with writing it like this?\n\nSELECT COUNT(*) FROM (SELECT DISTINCT result.url FROM result, item WHERE\n item.shorturl = result.url) AS a\n\nThat should do a fairly sensible join plan. There's no point in using \nfancy IN or EXISTS syntax when a normal join will do.\n\nMatthew\n\n-- \nI have an inferiority complex. But it's not a very good one.\n", "msg_date": "Mon, 30 Jun 2008 11:19:47 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": "Hi,\n\nLe samedi 28 juin 2008, Moritz Onken a écrit :\n> select count(*)\n> from result\n> where exists\n> (select * from item where item.url LIKE result.url || '%' limit 1);\n>\n> which basically returns the number of items which exist in table\n> result and match a URL in table item by its prefix.\n\nIt seems you could benefit from the prefix project, which support indexing \nyour case of prefix searches. Your query would then be:\n SELECT count(*) FROM result r JOIN item i ON r.url @> i.url;\n\nThe result.url column would have to made of type prefix_range, which casts \nautomatically to text when needed.\n\nFind out more about the prefix projects at those urls:\n http://pgfoundry.org/projects/prefix\n http://prefix.projects.postgresql.org/README.html\n\nRegards,\n-- \ndim", "msg_date": "Mon, 30 Jun 2008 12:20:40 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": "\nAm 30.06.2008 um 12:19 schrieb Matthew Wakeling:\n>\n>> select count(1) from result where url in (select shorturl from item \n>> where shorturl = result.url);\n>\n> What on earth is wrong with writing it like this?\n>\n> SELECT COUNT(*) FROM (SELECT DISTINCT result.url FROM result, item \n> WHERE\n> item.shorturl = result.url) AS a\n\nI tried the this approach but it's slower than WHERE IN in my case.\n\n>\n> It seems you could benefit from the prefix project, which support \n> indexing\n> your case of prefix searches. 
Your query would then be:\n> SELECT count(*) FROM result r JOIN item i ON r.url @> i.url;\n>\n> The result.url column would have to made of type prefix_range, which \n> casts\n> automatically to text when needed.\n>\n> Find out more about the prefix projects at those urls:\n> http://pgfoundry.org/projects/prefix\n> http://prefix.projects.postgresql.org/README.html\n>\n> Regards,\n> -- \n> dim\n\nThanks for that! looks interesting.\n\nregards\n", "msg_date": "Mon, 30 Jun 2008 14:46:22 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": "On Mon, 30 Jun 2008, Moritz Onken wrote:\n>> SELECT COUNT(*) FROM (SELECT DISTINCT result.url FROM result, item WHERE\n>> item.shorturl = result.url) AS a\n>\n> I tried the this approach but it's slower than WHERE IN in my case.\n\nHowever there's a lot more scope for improving a query along these lines, \nlike adding indexes, or CLUSTERing on an index. It depends what other \nqueries you are wanting to run.\n\nI don't know how much update/insert activity there will be on your \ndatabase. However, if you were to add an index on the URL on both tables, \nthen CLUSTER both tables on those indexes, and ANALYSE, then this query \nshould run as a merge join, and be pretty quick.\n\nHowever, this is always going to be a long-running query, because it \naccesses at least one whole table scan of a large table.\n\nMatthew\n\n-- \n\"Finger to spiritual emptiness underlying everything.\"\n -- How a foreign C manual referred to a \"pointer to void.\"\n", "msg_date": "Mon, 30 Jun 2008 13:52:08 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": ">\n>\n> However there's a lot more scope for improving a query along these \n> lines, like adding indexes, or CLUSTERing on an index. It depends \n> what other queries you are wanting to run.\n>\n> I don't know how much update/insert activity there will be on your \n> database. However, if you were to add an index on the URL on both \n> tables, then CLUSTER both tables on those indexes, and ANALYSE, then \n> this query should run as a merge join, and be pretty quick.\n>\n> However, this is always going to be a long-running query, because it \n> accesses at least one whole table scan of a large table.\n>\n> Matthew\n\nThere are already indexes on the url columns. I didn't cluster yet but \nthis is a pretty good idea, thanks. There will be no updates or \ninserts. It's static data for research purposes.\n\nmoritz\n", "msg_date": "Mon, 30 Jun 2008 14:56:57 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": "\nAm 30.06.2008 um 16:59 schrieb Steinar H. Gunderson:\n\n> On Mon, Jun 30, 2008 at 09:16:06AM +0200, Moritz Onken wrote:\n>> the result table has 20.000.000 records and the item table 5.000.000.\n>> The query\n>>\n>> select count(1) from result where url in (select shorturl from item\n>> where shorturl = result.url);\n>>\n>> will take about 8 hours (still running, just guessing). Is this\n>> reasonable on a system with 1 GB of RAM and a AMD Athlon 64 3200+\n>> processor? (1 SATA HDD)\n>\n> I really don't see what your query tries to accomplish. Why would \n> you want\n> \"url IN (... where .. = url)\"? 
Wouldn't you want a different qualifier\n> somehow?\n\nwell, it counts the number of rows with urls which already exist in \nanother\ntable.\nHow would you describe the query?\nIf the \"(select shorturl from item where shorturl = result.url)\"\nclause is empty the row is not counted, that's what I want...\n\ngreetings,\n\nmoritz\n", "msg_date": "Mon, 30 Jun 2008 18:11:15 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": "On Mon, 30 Jun 2008, Moritz Onken wrote:\n>>> select count(1) from result where url in (select shorturl from item\n>>> where shorturl = result.url);\n>> \n>> I really don't see what your query tries to accomplish. Why would you want\n>> \"url IN (... where .. = url)\"? Wouldn't you want a different qualifier\n>> somehow?\n>\n> well, it counts the number of rows with urls which already exist in another\n> table.\n> How would you describe the query?\n> If the \"(select shorturl from item where shorturl = result.url)\"\n> clause is empty the row is not counted, that's what I want...\n\nThe thing here is that you are effectively causing Postgres to run a \nsub-select for each row of the \"result\" table, each time generating either \nan empty list or a list with one or more identical URLs. This is \neffectively forcing a nested loop. In a way, you have two constraints \nwhere you only need one.\n\nYou can safely take out the constraint in the subquery, so it is like \nthis:\n\nSELECT COUNT(*) FROM result WHERE url IN (SELECT shorturl FROM item);\n\nThis will generate equivalent results, because those rows that didn't \nmatch the constraint wouldn't have affected the IN anyway. However, it \nwill alter the performance, because the subquery will contain more \nresults, but it will only be run once, rather than multiple times. This is \neffectively forcing a hash join (kind of).\n\nWhereas if you rewrite the query as I demonstrated earlier, then you allow \nPostgres to make its own choice about which join algorithm will work best.\n\nMatthew\n\n-- \nAnyone who goes to a psychiatrist ought to have his head examined.\n", "msg_date": "Mon, 30 Jun 2008 17:21:19 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" }, { "msg_contents": ">\n> The thing here is that you are effectively causing Postgres to run a \n> sub-select for each row of the \"result\" table, each time generating \n> either an empty list or a list with one or more identical URLs. This \n> is effectively forcing a nested loop. In a way, you have two \n> constraints where you only need one.\n>\n> You can safely take out the constraint in the subquery, so it is \n> like this:\n>\n> SELECT COUNT(*) FROM result WHERE url IN (SELECT shorturl FROM item);\n>\n> This will generate equivalent results, because those rows that \n> didn't match the constraint wouldn't have affected the IN anyway. \n> However, it will alter the performance, because the subquery will \n> contain more results, but it will only be run once, rather than \n> multiple times. This is effectively forcing a hash join (kind of).\n>\n> Whereas if you rewrite the query as I demonstrated earlier, then you \n> allow Postgres to make its own choice about which join algorithm \n> will work best.\n>\n> Matthew\n\nThank you! 
I learned a lot today :-)\nI thought the subquery will be run on every row thus I tried to make \nit as fast as possible by using a where clause. I didn't try your \nfirst query on the hole table so it could be faster than mine approach.\n\ngreetings,\n\nmoritz\n", "msg_date": "Mon, 30 Jun 2008 18:42:07 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner should use index on a LIKE 'foo%' query" } ]
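A sketch of the rewrites suggested in the thread above, keeping the original table names (result, item) and the shorturl column added later. The appended upper-bound character and the question of which index the planner will actually use for the range are assumptions to verify with EXPLAIN under your own locale; that caveat is exactly the one raised upthread.

-- Prefix match written as an anchored range instead of LIKE with a
-- non-constant pattern, per the suggestion upthread:
SELECT count(*)
FROM result r
WHERE EXISTS (
        SELECT 1
        FROM item i
        WHERE i.url >= r.url
          AND i.url <= (r.url || 'z')   -- assumption: 'z' sorts above anything
);                                      -- that can follow these URL prefixes

-- With the exact-match shorturl column, a plain join lets the planner choose
-- a hash or merge join on its own:
SELECT count(*)
FROM (SELECT DISTINCT r.url
      FROM result r
      JOIN item i ON i.shorturl = r.url) AS matched;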
[ { "msg_contents": "I'm having a strange problem with a query. The query is fairly simple, \nwith a few constants and two joins. All relevant columns should be \nindexed, and I'm pretty sure there aren't any type conversion issues. \nBut the query plan includes a fairly heavy seq scan. The only possible \ncomplication is that the tables involved are fairly large - hundreds of \nmillions of rows each.\n\nCan anyone explain this? There should only ever be a maximum of about 50 \nrows returned when the query is executed.\n\nQuery:\n\nselect fls.function_verified, fls.score, fls.go_category_group_ref, \nfs1.gene_ref, fs1.function_verified_exactly, fs2.gene_ref, \nfs2.function_verified_exactly from functional_linkage_scores fls, \ngene_prediction_view fs1, gene_prediction_view fs2 where fls.gene_ref1 = \nfs1.gene_ref and fls.gene_ref2 = fs2.gene_ref and fs1.go_term_ref = 2 \nand fs2.go_term_ref = 2\n\nExplain on query:\nMerge Join (cost=1331863800.16..6629339921.15 rows=352770803726 width=22)\n Merge Cond: (fs2.gene_ref = fls.gene_ref2)\n -> Index Scan using gene_prediction_view_gene_ref on \ngene_prediction_view fs2 (cost=0.00..6235287.98 rows=197899 width=5)\n Index Cond: (go_term_ref = 2)\n -> Materialize (cost=1331794730.41..1416453931.72 rows=6772736105 \nwidth=21)\n -> Sort (cost=1331794730.41..1348726570.67 rows=6772736105 \nwidth=21)\n Sort Key: fls.gene_ref2\n -> Merge Join (cost=38762951.04..146537410.33 \nrows=6772736105 width=21)\n Merge Cond: (fs1.gene_ref = fls.gene_ref1)\n -> Index Scan using gene_prediction_view_gene_ref \non gene_prediction_view fs1 (cost=0.00..6235287.98 rows=197899 width=5)\n Index Cond: (go_term_ref = 2)\n -> Materialize (cost=38713921.60..41618494.20 \nrows=232365808 width=20)\n -> Sort (cost=38713921.60..39294836.12 \nrows=232365808 width=20)\n Sort Key: fls.gene_ref1\n -> Seq Scan on \nfunctional_linkage_scores fls (cost=0.00..3928457.08 rows=232365808 \nwidth=20)\n\n\n\\d on functional_linkage_scores (232241678 rows):\n Table \"public.functional_linkage_scores\"\n Column | Type | \nModifiers \n-----------------------+---------------+------------------------------------------------------------------------\n id | integer | not null default \nnextval('functional_linkage_scores_id_seq'::regclass)\n gene_ref1 | integer | not null\n gene_ref2 | integer | not null\n function_verified | boolean | not null\n score | numeric(12,4) | not null\n go_category_group_ref | integer | not null\n go_term_ref | integer |\nIndexes:\n \"functional_linkage_scores_pkey\" PRIMARY KEY, btree (id)\n \"functional_linkage_scores_gene_ref1_key\" UNIQUE, btree (gene_ref1, \ngene_ref2, go_category_group_ref, go_term_ref)\n \"ix_functional_linkage_scores_gene_ref2\" btree (gene_ref2)\nForeign-key constraints:\n \"functional_linkage_scores_gene_ref1_fkey\" FOREIGN KEY (gene_ref1) \nREFERENCES genes(id)\n \"functional_linkage_scores_gene_ref2_fkey\" FOREIGN KEY (gene_ref2) \nREFERENCES genes(id)\n \"functional_linkage_scores_go_category_group_ref_fkey\" FOREIGN KEY \n(go_category_group_ref) REFERENCES go_category_groups(id)\n\n\\d on gene_prediction_view (568654245 rows):\n Table \n\"public.gene_prediction_view\"\n Column | Type \n| Modifiers \n----------------------------------+------------------------+-------------------------------------------------------------------\n id | integer | not null \ndefault nextval('gene_prediction_view_id_seq'::regclass)\n gene_ref | integer | not null\n go_term_ref | integer | not null\n go_description | character varying(200) | not null\n go_category | 
character varying(50) | not null\n function_verified_exactly | boolean | not null\n function_verified_with_parent_go | boolean | not null\n score | numeric(12,4) | not null\n prediction_method_ref | integer |\n functional_score_ref | integer |\nIndexes:\n \"gene_prediction_view_pkey\" PRIMARY KEY, btree (id)\n \"gene_prediction_view_functional_score_ref_key\" UNIQUE, btree \n(functional_score_ref)\n \"gene_prediction_view_gene_ref\" UNIQUE, btree (gene_ref, \ngo_term_ref, prediction_method_ref)\nForeign-key constraints:\n \"gene_prediction_view_functional_score_ref_fkey\" FOREIGN KEY \n(functional_score_ref) REFERENCES functional_scores(id)\n \"gene_prediction_view_gene_ref_fkey\" FOREIGN KEY (gene_ref) \nREFERENCES genes(id)\n \"gene_prediction_view_go_term_ref_fkey\" FOREIGN KEY (go_term_ref) \nREFERENCES go_terms(term)\n\n...and just in case someone can give advice on more aggressive settings \nthat might help out the planner for this particular comptuer...\nThis computer: Mac Pro / 4 gigs ram / software Raid 0 across two hard \ndrives.\nProduction computer: Xeon 3ghz / 32 gigs ram / Debian\n\n", "msg_date": "Sun, 29 Jun 2008 17:52:24 -0400", "msg_from": "John Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "sequence scan problem" }, { "msg_contents": "John Beaver wrote:\n> I'm having a strange problem with a query. The query is fairly simple, \n> with a few constants and two joins. All relevant columns should be \n> indexed, and I'm pretty sure there aren't any type conversion issues. \n> But the query plan includes a fairly heavy seq scan. The only possible \n> complication is that the tables involved are fairly large - hundreds of \n> millions of rows each.\n> \n> Can anyone explain this? There should only ever be a maximum of about 50 \n> rows returned when the query is executed.\n\nYou didn't say when you last vacuumed?\nIf there should only be 50 rows returned then the estimates from the\nplanner are way out.\n\nIf that doesn't help, we'll need version info, and (if you can afford\nthe time) an \"explain analyze\"\n\nCheers,\n Jeremy\n", "msg_date": "Sun, 29 Jun 2008 23:10:40 +0100", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequence scan problem" }, { "msg_contents": "\n\nJeremy Harris wrote:\n> John Beaver wrote:\n>> I'm having a strange problem with a query. The query is fairly \n>> simple, with a few constants and two joins. All relevant columns \n>> should be indexed, and I'm pretty sure there aren't any type \n>> conversion issues. But the query plan includes a fairly heavy seq \n>> scan. The only possible complication is that the tables involved are \n>> fairly large - hundreds of millions of rows each.\n>>\n>> Can anyone explain this? There should only ever be a maximum of about \n>> 50 rows returned when the query is executed.\n>\n> You didn't say when you last vacuumed?\nI ran 'vacuum analyze' on both tables directly after I finished building \nthem, and I haven't updated their contents since.\n> If there should only be 50 rows returned then the estimates from the\n> planner are way out.\n>\n> If that doesn't help, we'll need version info, and (if you can afford\n> the time) an \"explain analyze\"\nSure, I'm running it now. 
I'll send the results when it's done, but yes, \nit could take a while.\n>\n> Cheers,\n> Jeremy\n>\n", "msg_date": "Sun, 29 Jun 2008 18:20:20 -0400", "msg_from": "John Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sequence scan problem" }, { "msg_contents": "Oh, and the version is 8.3.3.\n\nJeremy Harris wrote:\n> John Beaver wrote:\n>> I'm having a strange problem with a query. The query is fairly \n>> simple, with a few constants and two joins. All relevant columns \n>> should be indexed, and I'm pretty sure there aren't any type \n>> conversion issues. But the query plan includes a fairly heavy seq \n>> scan. The only possible complication is that the tables involved are \n>> fairly large - hundreds of millions of rows each.\n>>\n>> Can anyone explain this? There should only ever be a maximum of about \n>> 50 rows returned when the query is executed.\n>\n> You didn't say when you last vacuumed?\n> If there should only be 50 rows returned then the estimates from the\n> planner are way out.\n>\n> If that doesn't help, we'll need version info, and (if you can afford\n> the time) an \"explain analyze\"\n>\n> Cheers,\n> Jeremy\n>\n", "msg_date": "Sun, 29 Jun 2008 18:32:03 -0400", "msg_from": "John Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sequence scan problem" }, { "msg_contents": "John Beaver <[email protected]> writes:\n> Can anyone explain this? There should only ever be a maximum of about 50 \n> rows returned when the query is executed.\n\nIs the estimate that 197899 rows of gene_prediction_view have\ngo_term_ref = 2 about right? If not, then we need to talk about\nfixing your statistics. If it is in the right ballpark then I do\nnot see *any* plan for this query that runs in small time.\nThe only way to avoid a seqscan on functional_linkage_scores would\nbe to do 198K^2 index probes into it, one for each combination of\nmatching fs1 and fs2 rows; I can guarantee you that that's not a win.\n\nThe fact that the planner is estimating 352770803726 result rows\ncompared to your estimate of 50 offers some hope that it's a stats\nproblem, but ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Jun 2008 23:31:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequence scan problem " }, { "msg_contents": "Ok, here's the explain analyze result. 
Again, this is Postgres 8.3.3 and \nI vacuumed-analyzed both tables directly after they were created.\n\n\n# explain analyze select fls.function_verified, fls.score, \nfls.go_category_group_ref, fs1.gene_ref, fs1.function_verified_exactly, \nfs2.gene_ref, fs2.function_verified_exactly from \nfunctional_linkage_scores fls, gene_prediction_view fs1, \ngene_prediction_view fs2 where fls.gene_ref1 = fs1.gene_ref and \nfls.gene_ref2 = fs2.gene_ref and fs1.go_term_ref = 2 and fs2.go_term_ref \n= 2;\n \nQUERY \nPLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=1399203593.41..6702491234.74 rows=352770803726 \nwidth=22) (actual time=6370194.467..22991303.434 rows=15610535128 loops=1)\n Merge Cond: (fs2.gene_ref = fls.gene_ref2)\n -> Index Scan using gene_prediction_view_gene_ref on \ngene_prediction_view fs2 (cost=0.00..12111899.77 rows=197899 width=5) \n(actual time=29.592..469838.583 rows=180629 loops=1)\n Index Cond: (go_term_ref = 2)\n -> Materialize (cost=1399069432.20..1483728633.52 rows=6772736105 \nwidth=21) (actual time=6370164.864..16623552.417 rows=15610535121 loops=1)\n -> Sort (cost=1399069432.20..1416001272.47 rows=6772736105 \nwidth=21) (actual time=6370164.860..13081970.248 rows=1897946790 loops=1)\n Sort Key: fls.gene_ref2\n Sort Method: external merge Disk: 61192240kB\n -> Merge Join (cost=40681244.97..154286110.62 \nrows=6772736105 width=21) (actual time=592112.778..2043161.851 \nrows=1897946790 loops=1)\n Merge Cond: (fs1.gene_ref = fls.gene_ref1)\n -> Index Scan using gene_prediction_view_gene_ref \non gene_prediction_view fs1 (cost=0.00..12111899.77 rows=197899 \nwidth=5) (actual time=0.015..246613.129 rows=180644 loops=1)\n Index Cond: (go_term_ref = 2)\n -> Materialize (cost=40586010.10..43490582.70 \nrows=232365808 width=20) (actual time=592112.755..1121366.375 \nrows=1897946783 loops=1)\n -> Sort (cost=40586010.10..41166924.62 \nrows=232365808 width=20) (actual time=592112.721..870349.308 \nrows=232241678 loops=1)\n Sort Key: fls.gene_ref1\n Sort Method: external merge Disk: \n7260856kB\n -> Seq Scan on \nfunctional_linkage_scores fls (cost=0.00..3928457.08 rows=232365808 \nwidth=20) (actual time=14.221..86455.902 rows=232241678 loops=1)\n Total runtime: 24183346.271 ms\n(18 rows)\n\n\n\n\n\nJeremy Harris wrote:\n> John Beaver wrote:\n>> I'm having a strange problem with a query. The query is fairly \n>> simple, with a few constants and two joins. All relevant columns \n>> should be indexed, and I'm pretty sure there aren't any type \n>> conversion issues. But the query plan includes a fairly heavy seq \n>> scan. The only possible complication is that the tables involved are \n>> fairly large - hundreds of millions of rows each.\n>>\n>> Can anyone explain this? There should only ever be a maximum of about \n>> 50 rows returned when the query is executed.\n>\n> You didn't say when you last vacuumed?\n> If there should only be 50 rows returned then the estimates from the\n> planner are way out.\n>\n> If that doesn't help, we'll need version info, and (if you can afford\n> the time) an \"explain analyze\"\n>\n> Cheers,\n> Jeremy\n>\n", "msg_date": "Mon, 30 Jun 2008 06:59:00 -0400", "msg_from": "John Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sequence scan problem" }, { "msg_contents": "John Beaver <[email protected]> writes:\n> Ok, here's the explain analyze result. 
Again, this is Postgres 8.3.3 and \n> I vacuumed-analyzed both tables directly after they were created.\n\n> Merge Join (cost=1399203593.41..6702491234.74 rows=352770803726 \n> width=22) (actual time=6370194.467..22991303.434 rows=15610535128 loops=1)\n ^^^^^^^^^^^\n\nWeren't you saying that only 50 rows should be returned? I'm thinking\nthe real problem here is pilot error: you missed out a needed join\ncondition or something. SQL will happily execute underconstrained\nqueries ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jun 2008 11:25:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequence scan problem " }, { "msg_contents": "\n\n\n\n\n<chuckle> You're right - for some reason I was looking at the (18\nrows) at the bottom. Pilot error indeed - I'll have to figure out\nwhat's going on with my data.\n\nThanks!\n\nTom Lane wrote:\n\nJohn Beaver <[email protected]> writes:\n \n\nOk, here's the explain analyze result. Again, this is Postgres 8.3.3 and \nI vacuumed-analyzed both tables directly after they were created.\n \n\n\n \n\n Merge Join (cost=1399203593.41..6702491234.74 rows=352770803726 \nwidth=22) (actual time=6370194.467..22991303.434 rows=15610535128 loops=1)\n \n\n ^^^^^^^^^^^\n\nWeren't you saying that only 50 rows should be returned? I'm thinking\nthe real problem here is pilot error: you missed out a needed join\ncondition or something. SQL will happily execute underconstrained\nqueries ...\n\n\t\t\tregards, tom lane\n\n \n\n\n\n", "msg_date": "Mon, 30 Jun 2008 20:37:58 -0400", "msg_from": "John Beaver <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sequence scan problem" } ]
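Two cheap sanity checks, sketched with the table and column names from the thread above, that would have flagged the under-constrained join before the multi-hour EXPLAIN ANALYZE. They are illustrative only and not a fix, since only the author knows which extra join condition the data model actually needs.

-- 1. Plain EXPLAIN first (no ANALYZE): the planner's own estimate of
--    352,770,803,726 result rows already says the join is effectively
--    unconstrained, before any data is read.
EXPLAIN
SELECT fls.function_verified, fls.score, fls.go_category_group_ref,
       fs1.gene_ref, fs1.function_verified_exactly,
       fs2.gene_ref, fs2.function_verified_exactly
FROM functional_linkage_scores fls,
     gene_prediction_view fs1,
     gene_prediction_view fs2
WHERE fls.gene_ref1 = fs1.gene_ref
  AND fls.gene_ref2 = fs2.gene_ref
  AND fs1.go_term_ref = 2
  AND fs2.go_term_ref = 2;

-- 2. Check the fan-out on the join key: the unique index on
--    (gene_ref, go_term_ref, prediction_method_ref) allows several rows per
--    gene_ref even with go_term_ref = 2, and every functional_linkage_scores
--    row is multiplied by the counts for both of its gene_refs.
SELECT gene_ref, count(*) AS predictions
FROM gene_prediction_view
WHERE go_term_ref = 2
GROUP BY gene_ref
ORDER BY count(*) DESC
LIMIT 10;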
[ { "msg_contents": "All,\n\nWhile running a Select query we get the below error:\n\nERROR: out of memory\nDETAIL: Failed on request of size 192.\n\nPostgres Conf details:\nshared_buffers = 256000\nwork_mem =150000\nmax_stack_depth = 16384\nmax_fsm_pages = 400000\nversion: 8.1.3\n\nWe are using 8gb of Primary memory for the server which is used as a\ndedicated database machine.\n\nThe data log shows the below message after getting the Out of memory error.\nAlso attached the explain for the query. Can someone let us know , if have\nsome worng parameter setup or any solution to the problem?\n\nRegards,\nNimesh.\n\n\nTopMemoryContext: 57344 total in 6 blocks; 9504 free (12 chunks); 47840 used\nTopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336\nused\nType information cache: 8192 total in 1 blocks; 1864 free (0 chunks); 6328\nused\nOperator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256\nused\nMessageContext: 1040384 total in 7 blocks; 263096 free (4 chunks); 777288\nused\nJoinRelHashTable: 8192 total in 1 blocks; 3888 free (0 chunks); 4304 used\nsmgr relation table: 8192 total in 1 blocks; 1840 free (0 chunks); 6352 used\nPortal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used\nPortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used\nPortalHeapMemory: 1024 total in 1 blocks; 856 free (0 chunks); 168 used\nExecutorState: 122880 total in 4 blocks; 51840 free (6 chunks); 71040 used\nHashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nHashBatchContext: 2089044 total in 8 blocks; 573232 free (12 chunks);\n1515812 used\nHashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nHashBatchContext: 2080768 total in 7 blocks; 749448 free (11 chunks);\n1331320 used\nHashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nHashBatchContext: 245760 total in 4 blocks; 109112 free (4 chunks); 136648\nused\nHashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nHashBatchContext: 1032192 total in 6 blocks; 504104 free (8 chunks); 528088\nused\nHashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nHashBatchContext: 1032192 total in 6 blocks; 474456 free (8 chunks); 557736\nused\nHashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nHashBatchContext: 2080768 total in 7 blocks; 783856 free (11 chunks);\n1296912 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\n.\n.\n.\n\nAggContext: 941613056 total in 129 blocks; 13984 free (154 chunks);\n941599072 used\nTupleHashTable: 113303576 total in 24 blocks; 1347032 free (74 chunks);\n111956544 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\nRelcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816 used\nCacheMemoryContext: 516096 total in 6 blocks; 12080 free (0 chunks); 504016\nused\nrg_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nrg_idx: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nrg_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nrc_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_c_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_c_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_ch_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_ch_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_ch_cd: 1024 total in 1 blocks; 392 free (0 chunks); 632 
used\nr_cm_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_c_m_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_s_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_p_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_p_cd_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_a_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_a_v_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_d_sqldt_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_da_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_nw_key_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_n_id_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nr_m_network_date_idx: 1024 total in 1 blocks; 328 free (0 chunks); 696 used\npg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696\nused\npg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_auth_members_member_role_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_auth_members_role_member_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 192 free (0\nchunks); 832 used\npg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_opclass_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 256 free (0 chunks);\n768 used\npg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_language_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_language_name_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_inherits_relid_seqno_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_authid_rolname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_database_datname_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_conversion_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\npg_conversion_name_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_conversion_default_index: 1024 total in 1 blocks; 192 free (0 chunks);\n832 used\npg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); 696\nused\npg_class_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_cast_source_target_index: 1024 total in 1 blocks; 328 free (0 chunks);\n696 used\npg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_attribute_relid_attnam_index: 1024 total in 1 blocks; 328 free (0\nchunks); 696 used\npg_amproc_opc_proc_index: 1024 total in 1 blocks; 
256 free (0 chunks); 768\nused\npg_amop_opr_opc_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used\npg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); 768\nused\npg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\nMdSmgr: 8192 total in 1 blocks; 5584 free (0 chunks); 2608 used\nLockTable (locallock hash): 8192 total in 1 blocks; 3912 free (0 chunks);\n4280 used\nTimezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used\nErrorContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\n2008-06-29 20:48:25 PDT [13980]: [5-1] ERROR: out of memory\n2008-06-29 20:48:25 PDT [13980]: [6-1] DETAIL: Failed on request of size\n192.", "msg_date": "Mon, 30 Jun 2008 09:50:15 +0530", "msg_from": "\"Nimesh Satam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Out of memory for Select query." }, { "msg_contents": "\n\nOn Jun 29, 2008, at 10:20 PM, Nimesh Satam wrote:\n\n> All,\n>\n> While running a Select query we get the below error:\n>\n> ERROR: out of memory\n> DETAIL: Failed on request of size 192.\n>\n> Postgres Conf details:\n> shared_buffers = 256000\n> work_mem =150000\n> max_stack_depth = 16384\n> max_fsm_pages = 400000\n> version: 8.1.3\n>\n> We are using 8gb of Primary memory for the server which is used as a \n> dedicated database machine.\n>\n> The data log shows the below message after getting the Out of memory \n> error. Also attached the explain for the query. Can someone let us \n> know , if have some worng parameter setup or any solution to the \n> problem?\n>\n> Regards,\n> Nimesh.\n>\n\n\nHi Nimesh,\n\nI'd try decreasing work_mem (try something smaller like 16384 and work \nup if you'd like), since you have lots of hashes being built for this \nquery, you may simply be running into a limit on process size \ndepending on your platform. Also look at \"ulimit -a\" as the postgres \nuser to make sure you aren't running into any administrative limits.\n\nCheers,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n\n", "msg_date": "Mon, 30 Jun 2008 00:59:16 -0600", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Out of memory for Select query." } ]
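A minimal sketch of the advice above, written for the 8.1 server described in this thread. The value 16384 is only an example to start from; 8.1 expects work_mem as an integer number of kilobytes (the '16MB'-style unit suffixes arrived in later releases), so adjust it and re-test against the real workload.

-- Lower work_mem for this session only, rather than editing postgresql.conf.
-- work_mem is a per-sort/per-hash budget, so each of the hash nodes shown in
-- the memory-context dump above may use roughly that much on its own.
BEGIN;
SET LOCAL work_mem = 16384;   -- 16 MB per sort/hash node, down from ~146 MB
-- ... re-run the failing SELECT inside this transaction ...
COMMIT;

-- From the shell, as the user the postmaster runs as, also check the
-- administrative limits mentioned above:
--   ulimit -a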
[ { "msg_contents": "Hello,\n\nmy understanding, and generally my experience, has been that VACUUM\nand VACUUM ANALYZE (but not VACUUM FULL) are never supposed to block\nneither SELECT:s nor UPDATE:s/INSERT:s/DELETE:s to a table.\n\nThis is seemingly confirmed by reading the \"explicit locking\"\ndocumentation, in terms of the locks acquired by various forms of\nvacuuming, and with which other lock modes they conflict.\n\nI have now seen it happen twice that a VACUMM ANALYZE has seemingly\nbeen the triggering factor to blocking queries.\n\nIn the first instance, we had two particularly interesting things\ngoing on:\n\n VACUUM ANALYZE thetable\n LOCK TABLE thetable IN ACCESS SHARE MODE\n\nIn addition there was one SELECT from the table, and a bunch of\nINSERT:s (this is based on pg_stat_activity).\n\nWhile I am unsure of why there is an explicit LOCK going on with\nACCESS SHARE MODE (no explicit locking is ever done on this table by\nthe application), it is supposed to be the locking used for selects. I\nsuspect it may be a referential integrity related acquisition\ngenerated by PG.\n\nThe second time it happned, there was again a single SELECT, a bunch\nof INSERT:s, and then:\n\n VACUUM ANALYZE thetable\n\nThis time there was no explicit LOCK visible.\n\nIn both cases, actitivy was completely blocked until the VACUUM\nANALYZE completed.\n\nDoes anyone have input on why this could be happening? The PostgreSQL\nversion is 8.2.4[1]. Am I correct in that it *should* not be possible\nfor this to happen?\n\nFor the next time this happens I will try to have a query prepared\nthat will dump as much relevant information as possible regarding\nacquired locks. \n\nIf it makes a difference the SELECT does have a subselect that also\nselcts from the same table - a MAX(colum) on an indexed column.\n\n[1] I did check the ChangeLog for 8.2.x releases above .4, and the 8.3\nreleases, but did not see anything that indicated locking/conflict\nrelated fixes in relation to vacuums.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Mon, 30 Jun 2008 16:59:03 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM ANALYZE blocking both reads and writes to a table" }, { "msg_contents": "Peter Schuller wrote:\n\n> Does anyone have input on why this could be happening? The PostgreSQL\n> version is 8.2.4[1]. Am I correct in that it *should* not be possible\n> for this to happen?\n\nNo. VACUUM takes an exclusive lock at the end of the operation to\ntruncate empty pages. (If it cannot get the lock then it'll just skip\nthis step.) In 8.2.4 there was a bug that caused it to sleep\naccording to vacuum_delay during the scan to identify possibly empty\npages. This was fixed in 8.2.5:\n\n revision 1.81.2.1\n date: 2007-09-10 13:58:50 -0400; author: alvherre; state: Exp; lines: +6 -2;\n Remove the vacuum_delay_point call in count_nondeletable_pages, because we hold\n an exclusive lock on the table at this point, which we want to release as soon\n as possible. This is called in the phase of lazy vacuum where we truncate the\n empty pages at the end of the table.\n\n An alternative solution would be to lower the vacuum delay settings before\n starting the truncating phase, but this doesn't work very well in autovacuum\n due to the autobalancing code (which can cause other processes to change our\n cost delay settings). 
This case could be considered in the balancing code, but\n it is simpler this way.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 30 Jun 2008 11:25:15 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE blocking both reads and writes to a\n\ttable" }, { "msg_contents": "Hello,\n\n> No. VACUUM takes an exclusive lock at the end of the operation to\n> truncate empty pages. (If it cannot get the lock then it'll just skip\n> this step.) In 8.2.4 there was a bug that caused it to sleep\n> according to vacuum_delay during the scan to identify possibly empty\n> pages. This was fixed in 8.2.5:\n\n[snip revision log]\n\nThank you very much! This does indeed seem to be the likely\nculprit. Will try to either upgrade, or if not possible in time for\nthe next occurance, confirm that this is what is happening based on\npg_locks.\n\nThanks again for the very informative response.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Mon, 30 Jun 2008 17:34:35 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM ANALYZE blocking both reads and writes to a\n\ttable" }, { "msg_contents": "Actually, while on the topic:\n\n> date: 2007-09-10 13:58:50 -0400; author: alvherre; state: Exp; lines: +6 -2;\n> Remove the vacuum_delay_point call in count_nondeletable_pages, because we hold\n> an exclusive lock on the table at this point, which we want to release as soon\n> as possible. This is called in the phase of lazy vacuum where we truncate the\n> empty pages at the end of the table.\n\nEven with the fix the lock is held. Is the operation expected to be\n\"fast\" (for some definition of \"fast\") and in-memory, or is this\nsomething that causes significant disk I/O and/or scales badly with\ntable size or similar?\n\nI.e., is this enough that, even without the .4 bug, one should not\nreally consider VACUUM ANALYZE non-blocking with respect to other\ntransactions?\n\n(I realize various exclusive locks are taken for short periods of time\neven for things that are officially declared non-blocking; the\nquestion is whether this falls into this category.)\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Mon, 30 Jun 2008 17:43:18 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM ANALYZE blocking both reads and writes to a\n\ttable" }, { "msg_contents": "Peter Schuller wrote:\n> Actually, while on the topic:\n> \n> > date: 2007-09-10 13:58:50 -0400; author: alvherre; state: Exp; lines: +6 -2;\n> > Remove the vacuum_delay_point call in count_nondeletable_pages, because we hold\n> > an exclusive lock on the table at this point, which we want to release as soon\n> > as possible. This is called in the phase of lazy vacuum where we truncate the\n> > empty pages at the end of the table.\n> \n> Even with the fix the lock is held. 
Is the operation expected to be\n> \"fast\" (for some definition of \"fast\") and in-memory, or is this\n> something that causes significant disk I/O and/or scales badly with\n> table size or similar?\n\nIt is fast.\n\n> I.e., is this enough that, even without the .4 bug, one should not\n> really consider VACUUM ANALYZE non-blocking with respect to other\n> transactions?\n\nYou should consider it non-blocking.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 30 Jun 2008 12:23:06 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE blocking both reads and writes to a\n\ttable" }, { "msg_contents": "Alvaro Herrera wrote:\n> Peter Schuller wrote:\n> > Actually, while on the topic:\n> > \n> > > date: 2007-09-10 13:58:50 -0400; author: alvherre; state: Exp; lines: +6 -2;\n> > > Remove the vacuum_delay_point call in count_nondeletable_pages, because we hold\n> > > an exclusive lock on the table at this point, which we want to release as soon\n> > > as possible. This is called in the phase of lazy vacuum where we truncate the\n> > > empty pages at the end of the table.\n> > \n> > Even with the fix the lock is held. Is the operation expected to be\n> > \"fast\" (for some definition of \"fast\") and in-memory, or is this\n> > something that causes significant disk I/O and/or scales badly with\n> > table size or similar?\n> \n> It is fast.\n\nTo elaborate: it scans the relation backwards and makes note of how many\nare unused. As soon as it finds a non-empty one, it stops scanning.\nTypically this should be quick. It is not impossible that there are a\nlot of empty blocks at the end though, but I have never heard a problem\nreport about this.\n\nIt could definitely cause I/O though.\n\n> > I.e., is this enough that, even without the .4 bug, one should not\n> > really consider VACUUM ANALYZE non-blocking with respect to other\n> > transactions?\n> \n> You should consider it non-blocking.\n\nThe lock in conditionally acquired: as I said earlier, the code would\nrather skip this part than block. So if there's some other operation\ngoing on, there's no lock held at all. If this grabs the lock, then\nother operations are going to block behind it, but the time holding the\nlock should be short. Note, however, that sleeping for 20ms or more\nbecause of vacuum_delay (the bug fixed above) clearly falls out of this\ncategory, and easily explains the behavior you're seeing with 8.2.4.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 30 Jun 2008 14:58:48 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE blocking both reads and writes to a\n\ttable" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Peter Schuller wrote:\n>> Even with the fix the lock is held. Is the operation expected to be\n>> \"fast\" (for some definition of \"fast\") and in-memory, or is this\n>> something that causes significant disk I/O and/or scales badly with\n>> table size or similar?\n\n> It is fast.\n\nWell, it's *normally* fast. In a situation where there are a whole lot\nof empty pages at the end of the table, it could be slow. That's\nprobably not very likely on a heavily used table. 
One should also note\nthat\n\n(1) The only way vacuum will be able to obtain an exclusive lock in the\nfirst place is if there are *no* other transactions using the table at\nthe time.\n\n(2) If it's autovacuum we're talking about, it will get kicked off the\ntable if anyone else comes along and wants a conflicting lock.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jun 2008 15:00:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE blocking both reads and writes to a table " }, { "msg_contents": "Tom Lane wrote:\n\n> (2) If it's autovacuum we're talking about, it will get kicked off the\n> table if anyone else comes along and wants a conflicting lock.\n\nNot on 8.2 though.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 30 Jun 2008 15:04:19 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE blocking both reads and writes to a\n\ttable" }, { "msg_contents": "> > (2) If it's autovacuum we're talking about, it will get kicked off the\n> > table if anyone else comes along and wants a conflicting lock.\n> \n> Not on 8.2 though.\n\nThat is also nice to know. One more reason to upgrade to 8.3.\n\nThank you very much, both Alvaro and Tom, for the very insightful\ndiscussion!\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Tue, 1 Jul 2008 17:26:35 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACUUM ANALYZE blocking both reads and writes to a\n\ttable" } ]
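Peter mentions preparing a query to dump lock information the next time the blocking occurs; the sketch below is one way to do that and is not something posted in the thread. It assumes the 8.2/8.3 catalog column names (pg_stat_activity.procpid and current_query, which were renamed in later releases) and reuses the literal table name from the original report.

SELECT l.pid, l.mode, l.granted, a.current_query
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE c.relname = 'thetable'
ORDER BY l.granted DESC, l.pid;

Rows with granted = false are the waiters; the pid holding the conflicting granted lock (for the truncation phase discussed above, an AccessExclusiveLock held by the VACUUM backend) is the one blocking them.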
[ { "msg_contents": "Hi,\n\nI have problems with my database becoming huge in size (around 150 GB\nright now, and 2/3 for only three tables, each having around 30 millions\ntuples. Space is spent mainly on indices.).\n\nI have a lot of multi-column varchar primary keys (natural keys), and\nlot of foreign keys on these tables and thus a lot of indices.\n\nWhen using VARCHAR, we defaulted to VARCHAR(32) (because on _some_ of\nthe identifiers, we have to apply md5).\n\nWe assumed that using VARCHAR(32) but having values at most 4 characters\nlong (for example) wouldn't influence indices size, ie it would be the\nsame as using VARCHAR(4) to keep the example.\n\nNow I really doubt if we were right :)\n\nSo, what should we expect ? And are there other factors influencing\nindices size ?\n\nThanks,\nFranck\n\n\n", "msg_date": "Mon, 30 Jun 2008 18:57:58 +0200", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": true, "msg_subject": "Does max size of varchar influence index size" }, { "msg_contents": "\nOn Mon, 2008-06-30 at 18:57 +0200, Franck Routier wrote:\n> Hi,\n> \n> I have problems with my database becoming huge in size (around 150 GB\n> right now, and 2/3 for only three tables, each having around 30 millions\n> tuples. Space is spent mainly on indices.).\n> \n> I have a lot of multi-column varchar primary keys (natural keys), and\n> lot of foreign keys on these tables and thus a lot of indices.\n> \n> When using VARCHAR, we defaulted to VARCHAR(32) (because on _some_ of\n> the identifiers, we have to apply md5).\n> \n> We assumed that using VARCHAR(32) but having values at most 4 characters\n> long (for example) wouldn't influence indices size, ie it would be the\n> same as using VARCHAR(4) to keep the example.\n> \n> Now I really doubt if we were right :)\n> \n> So, what should we expect ? And are there other factors influencing\n> indices size ?\n> \n> Thanks,\n> Franck\n\nIs there any particular reason that you're not using a surrogate key? I\nfound that switching from natural to surrogate keys in a similar\nsituation made the indexes not only smaller, but faster.\n\nIt really only became an issue after our individual tables got larger\nthan 20-25G, but I think we got lucky and headed the issue off at the\npass.\n\nI think it should be fairly trivial* to set up a test case using\npg_total_relation_size() to determine whether your suspicions are\ncorrect.\n\n-Mark\n\n* It may not be as trivial as I say, or I'd have done it in the 5\nminutes it took to write this email.\n\n", "msg_date": "Mon, 30 Jun 2008 13:24:54 -0700", "msg_from": "Mark Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does max size of varchar influence index size" }, { "msg_contents": "Le lundi 30 juin 2008 à 13:24 -0700, Mark Roberts a écrit :\n\nHi Mark,\n\n> Is there any particular reason that you're not using a surrogate key?\n\nWell, human readability is the main reason, no standard way to handle\nsequences between databases vendors being the second... 
(and also\nproblems when copying data between different instances of the database).\n\nSo surrogate keys could be a way, and I am considering this, but I'd\nrather avoid it :)\n\nFranck\n\n\n\n\n", "msg_date": "Tue, 01 Jul 2008 17:05:36 +0200", "msg_from": "Franck Routier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does max size of varchar influence index size" }, { "msg_contents": "Franck Routier wrote:\n> Le lundi 30 juin 2008 à 13:24 -0700, Mark Roberts a écrit :\n> \n> Hi Mark,\n> \n>> Is there any particular reason that you're not using a surrogate key?\n> \n> Well, human readability is the main reason, no standard way to handle\n> sequences between databases vendors being the second... (and also\n> problems when copying data between different instances of the database).\n> \n> So surrogate keys could be a way, and I am considering this, but I'd\n> rather avoid it :)\n\nMight be worth looking at 8.3 - that can save you significant space with \nshort varchar's - the field-length is no longer fixed at 32 bits but can \n adjust itself automatically. Apart from the overheads, you need the \nspace to store the text in each string, not the maximum possible.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 01 Jul 2008 16:33:54 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does max size of varchar influence index size" } ]
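Mark's suggested pg_total_relation_size() test can be sketched as follows; this is not from the thread and the table names are made up. Both tables are keyed on the same 4-character values and differ only in the declared varchar length, so comparing the two primary-key indexes shows whether the declared maximum by itself costs anything.

CREATE TABLE key_v4  (id varchar(4)  PRIMARY KEY);
CREATE TABLE key_v32 (id varchar(32) PRIMARY KEY);

INSERT INTO key_v4  SELECT lpad(i::text, 4, '0') FROM generate_series(1, 9999) AS g(i);
INSERT INTO key_v32 SELECT lpad(i::text, 4, '0') FROM generate_series(1, 9999) AS g(i);

SELECT pg_size_pretty(pg_relation_size('key_v4_pkey'))   AS idx_varchar4,
       pg_size_pretty(pg_relation_size('key_v32_pkey'))  AS idx_varchar32,
       pg_size_pretty(pg_total_relation_size('key_v4'))  AS tbl_varchar4,
       pg_size_pretty(pg_total_relation_size('key_v32')) AS tbl_varchar32;

The expected outcome is that the two indexes come out essentially identical, since varchar stores only the actual string plus a short length header; the declared maximum is a constraint, not a storage reservation. Bloat from wide multi-column natural keys is a separate issue, which is where the surrogate-key suggestion comes in.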
[ { "msg_contents": "Hi,\ni have a table with a huge amount of rows (actually 4 millions and a half),\ndefined like this:\n\nCREATE TABLE rtp_frame (\n i_len integer NOT NULL,\n i_file_offset bigint NOT NULL,\n i_file_id integer NOT NULL, -- foreign key\n i_timestamp bigint NOT NULL,\n i_loop integer NOT NULL,\n i_medium_id integer NOT NULL, -- foreign key\n PRIMARY KEY(i_medium_id, i_loop, i_timestamp)\n);\n\nThe primary key creates the btree index.\n\nIf I ask the database something like this:\n\nSELECT ((max(i_timestamp) - min(i_timestamp))::double precision / <rate>)\nFROM rtp_frame\nWHERE i_medium_id = <medium> AND i_loop = <loop>;\n\nit replies istantaneously.\n\nBut if i ask\n\nDECLARE blablabla INSENSITIVE NO SCROLL CURSOR WITHOUT HOLD FOR\nSELECT i_file_id, i_len, i_file_offset, i_timestamp\nFROM rtp_frame WHERE i_medium_id = <medium>\nAND i_loop = <loop>\nAND i_timestamp BETWEEN 0 and 5400000\nORDER BY i_timestamp\n\non a medium with, say, 4 millions rows co-related, it takes 15 seconds to\nreply, even with a different clause on i_timestamp (say i_timestamp >= 0),\neven with the ORDER BY clause specified on the three indexed columns (ORDER\nBY i_medium_id, i_loop, i_timestamp).\n\nIssued on a medium with \"just\" some hundred thousand rows, it runs\ninstantaneously.\n\nIf I add a single btree index on i_timestamp, it runs instantaneously event\non a medium with millions rows (so having a btree(i_medium_id, i_loop,\ni_timestamp) and btree(i_timestamp)).\n\nWith (btree(i_medium_id, i_loop) and btree(i_timestamp)), the first for sure\ntakes 15 seconds to run, the second i think too but not sure atm.\n\ncan anybody explain me why this happens ? and if i should try different\nindexes ?\n\nthanks a lot\n\nEmiliano\n\nHi,i have a table with a huge amount of rows (actually 4 millions and a half), defined like this:CREATE TABLE rtp_frame (    i_len integer NOT NULL,    i_file_offset bigint NOT NULL,    i_file_id integer NOT NULL,  -- foreign key\n    i_timestamp bigint NOT NULL,    i_loop integer NOT NULL,    i_medium_id integer NOT NULL, -- foreign key    PRIMARY KEY(i_medium_id, i_loop, i_timestamp));The primary key creates the btree index.\nIf I ask the database something like this:SELECT ((max(i_timestamp) - min(i_timestamp))::double precision / <rate>)FROM rtp_frameWHERE i_medium_id = <medium> AND i_loop = <loop>;\nit replies istantaneously.But if i askDECLARE blablabla INSENSITIVE NO SCROLL CURSOR WITHOUT HOLD FORSELECT i_file_id, i_len, i_file_offset, i_timestampFROM rtp_frame WHERE i_medium_id = <medium>\nAND i_loop = <loop>AND i_timestamp BETWEEN 0 and 5400000ORDER BY i_timestampon a medium with, say, 4 millions rows co-related, it takes 15 seconds to reply, even with a different clause on i_timestamp (say i_timestamp >= 0), even with the ORDER BY clause specified on the three indexed columns (ORDER BY i_medium_id, i_loop, i_timestamp).\nIssued on a medium with \"just\" some hundred thousand rows, it runs instantaneously.If I add a single btree index on i_timestamp, it runs instantaneously event on a medium with millions rows (so having a btree(i_medium_id, i_loop, i_timestamp) and btree(i_timestamp)).\nWith (btree(i_medium_id, i_loop) and btree(i_timestamp)), the first for sure takes 15 seconds to run, the second i think too but not sure atm.can anybody explain me why this happens ? 
", "msg_date": "Tue, 1 Jul 2008 12:49:19 +0200", "msg_from": "\"Emiliano Leporati\" <[email protected]>", "msg_from_op": true, "msg_subject": "un-understood index performance behaviour" }, { "msg_contents": "On Tue, Jul 1, 2008 at 4:49 AM, Emiliano Leporati\n<[email protected]> wrote:\n> Hi,\n> I have a table with a huge number of rows (actually four and a half million),\n> defined like this:\n>\n> CREATE TABLE rtp_frame (\n>     i_len integer NOT NULL,\n>     i_file_offset bigint NOT NULL,\n>     i_file_id integer NOT NULL,  -- foreign key\n>     i_timestamp bigint NOT NULL,\n>     i_loop integer NOT NULL,\n>     i_medium_id integer NOT NULL, -- foreign key\n>     PRIMARY KEY(i_medium_id, i_loop, i_timestamp)\n> );\n>\n> The primary key creates the btree index.\n>\n> If I ask the database something like this:\n>\n> SELECT ((max(i_timestamp) - min(i_timestamp))::double precision / <rate>)\n> FROM rtp_frame\n> WHERE i_medium_id = <medium> AND i_loop = <loop>;\n>\n> it replies instantaneously.\n>\n> But if I ask\n>\n> DECLARE blablabla INSENSITIVE NO SCROLL CURSOR WITHOUT HOLD FOR\n> SELECT i_file_id, i_len, i_file_offset, i_timestamp\n> FROM rtp_frame WHERE i_medium_id = <medium>\n> AND i_loop = <loop>\n> AND i_timestamp BETWEEN 0 and 5400000\n> ORDER BY i_timestamp\n>\n> on a medium with, say, 4 million related rows, it takes 15 seconds to\n> reply, even with a different clause on i_timestamp (say i_timestamp >= 0),\n> even with the ORDER BY clause specified on the three indexed columns (ORDER\n> BY i_medium_id, i_loop, i_timestamp).\n>\n> Issued on a medium with \"just\" some hundred thousand rows, it runs\n> instantaneously.\n>\n> If I add a single btree index on i_timestamp, it runs instantaneously even\n> on a medium with millions of rows (so having a btree(i_medium_id, i_loop,\n> i_timestamp) and btree(i_timestamp)).\n>\n> With (btree(i_medium_id, i_loop) and btree(i_timestamp)), the first query definitely\n> takes 15 seconds to run, and I think the second one does too, but I am not sure at the moment.\n>\n> Can anybody explain to me why this happens, and whether I should try different\n> indexes?\n\nNot yet, we don't have enough information, although I'm guessing that\nthe db is switching from an index scan to a sequential scan, perhaps\nprematurely.\n\nTo see what's happening, run your queries with explain analyze in front...\n\nexplain analyze select ...\n\nand see what you get. Post the output as an attachment here and we'll\nsee what we can do.\n", "msg_date": "Tue, 1 Jul 2008 08:17:17 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: un-understood index performance behaviour" }, { "msg_contents": "\"Emiliano Leporati\" <[email protected]> writes:\n> Can anybody explain to me why this happens, and whether I should try different\n> indexes?\n\nShowing EXPLAIN ANALYZE output would probably make things a lot clearer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Jul 2008 10:18:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: un-understood index performance behaviour " } ]
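Both replies ask for EXPLAIN ANALYZE output; a concrete invocation, with made-up values standing in for <medium> and <loop>, would look like the following (not taken from the thread). Comparing the output with and without the extra btree(i_timestamp) index, ideally after an ANALYZE of rtp_frame, should show whether the slow case is falling back to a sequential scan or sorting millions of rows.

EXPLAIN ANALYZE
SELECT i_file_id, i_len, i_file_offset, i_timestamp
FROM rtp_frame
WHERE i_medium_id = 1                     -- placeholder for <medium>
  AND i_loop = 0                          -- placeholder for <loop>
  AND i_timestamp BETWEEN 0 AND 5400000
ORDER BY i_timestamp;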
[ { "msg_contents": "Hi,\n\n We are seeing system hang-up issue when we do continuous\nupdate on table ( 2-3 records/sec) within 10-12 hours. Memory parameter\nInact_dirty( shown in /proc/meminfo) is increasing continuously and\ncausing the system to hang-up(not responding state). This issue is not\nhappening when we stop the continuous update.\n\n \n\nPlease help us to resolve this issue.\n\n \n\nSystem details follows:\n\nOS : Linux kernel version 2.4.7-10\n\nRAM; 256MB (but 64MB used by RAM file system)\n\nPostgreSQL version:7.4.3\n\npostgresql.conf settings : default settings \n\n \n\nBest Regards,\n\nJeeva\n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n            We\nare seeing system hang-up issue when we do continuous update on table ( 2-3\nrecords/sec) within 10-12 hours. Memory parameter Inact_dirty( shown in\n/proc/meminfo) is increasing continuously and causing the system to hang-up(not\nresponding state). This issue is not happening when we stop the continuous\nupdate.\n \nPlease help us to resolve this issue.\n \nSystem details follows:\nOS : Linux kernel version  2.4.7-10\nRAM; 256MB (but 64MB used by RAM file system)\nPostgreSQL version:7.4.3\npostgresql.conf settings : default settings \n            \nBest Regards,\nJeeva", "msg_date": "Tue, 1 Jul 2008 17:29:13 +0530", "msg_from": "\"Kathirvel, Jeevanandam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Inact_dirty is increasing continuously and causing the system to\n hang." }, { "msg_contents": "On Tue, 1 Jul 2008, Kathirvel, Jeevanandam wrote:\n\n> We are seeing system hang-up issue when we do continuous update on table \n> ( 2-3 records/sec) within 10-12 hours. Memory parameter Inact_dirty( \n> shown in /proc/meminfo) is increasing continuously and causing the \n> system to hang-up(not responding state).\n\nWhen you update a row, what it does is write a new version of that row out \nto disk and then mark the old version dead afterwards. That process \ngenerates disk writes, which show up as Inact_dirty data while they're in \nmemory. Eventually your system should be writing those out to disk. The \nmost helpful thing you could post here to narrow down what's going on is a \nsnippet of the output from \"vmstat 1\" during a period where things are \nrunning slowly.\n\nDirty memory growing continuously suggests you're updating faster than \nyour disk(s) can keep up. The main thing you can usefully do in \nPostgreSQL 7.4.3 to lower how much I/O is going on during updates is to \nincrease the checkpoint_segments parameters in your postgresql.conf file. \nA modest increase there, say going from the default of 3 to 10, may reduce \nthe slowdowns you're seeing. Note that this will cause the database to \nget larger and it will take longer to recover from a crash.\n\nGiven how old the versions of all the software you're using are, it's \nquite possible what you're actually running into is a Linux kernel bug or \neven a PostgreSQL bug. If this problem is getting annoying enough to \ndisrupt your operations you should be considering an upgrade of your whole \nsoftware stack. 
Start with going from PostgreSQL 7.4.3 to 7.4.21, try and \nadd more RAM to the server, look into whether you can re-install on a more \nmodern Linux, and try to get onto PostgreSQL 8.3 one day.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 1 Jul 2008 11:03:22 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inact_dirty is increasing continuously and causing\n\tthe system to hang." } ]
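The specific change Greg describes would look like the excerpt below in postgresql.conf; the value is just the modest bump he mentions, not a tuned figure, and on 7.4 the extra WAL space is bounded by roughly 2 * checkpoint_segments + 1 segments of 16 MB each. Capturing "vmstat 1" output during a slow period, as he asks, remains the more important diagnostic step.

# postgresql.conf excerpt (assumed placement; reload or restart the server afterwards)
checkpoint_segments = 10      # default is 3; fewer but larger checkpoints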