[ { "msg_contents": "\nI have a table with a 3 column key. I noticed that when I update a non-key field\nin a record of the table that the update was taking longer than I thought it \nshould. After much experimenting I discovered that if I changed the data\ntypes of two of the key columns to FLOAT8 that I got vastly improved\nperformance.\n\nOrignally the data types of the 3 columns were FLOAT4, FLOAT4 and INT4.\nMy plaform is a PowerPC running Linux. I speculated that the performance\nimprovement might be because the PowePC is a 64 bit processor but when\nI changed the column data types to INT8, INT8 and INT4 I din't see any\nimprovement. I also ran my test code on a Pentium 4 machine with the same\nresults in all cases.\n\nThis doesn't make any sense to me. Why would FLOAT8 keys ever result\nin improved performance?\n\nI verified with EXPLAIN that the index is used in every case for the update.\n\nMy postmaster version is 7.1.3.\n\nAny help will be greatly appreciated.\n\n***********************************************************************\nMedora Schauer\nSr. Software Engineer\n\nFairfield Industries\n14100 Southwest Freeway\nSuite 600\nSugar Land, Tx 77478-3469\nUSA\n\[email protected]\n***********************************************************************\n\n\n", "msg_date": "Thu, 31 Jul 2003 15:29:14 -0500", "msg_from": "\"Medora Schauer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Odd performance results" }, { "msg_contents": "\"Medora Schauer\" <[email protected]> writes:\n> I have a table with a 3 column key. I noticed that when I update a non-key field\n> in a record of the table that the update was taking longer than I thought it \n> should. After much experimenting I discovered that if I changed the data\n> types of two of the key columns to FLOAT8 that I got vastly improved\n> performance.\n\nAre there any foreign key linkages to or from this table? Maybe the\nother end of the foreign key is float8?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 31 Jul 2003 17:10:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd performance results " } ]
[ { "msg_contents": "\nOrignally there were but in the process of trying to figure\nout what is going on I stripped everything out of the database\nexcept the table being queried.\n\n> \n> \"Medora Schauer\" <[email protected]> writes:\n> > I have a table with a 3 column key. I noticed that when I \n> update a non-key field\n> > in a record of the table that the update was taking longer \n> than I thought it \n> > should. After much experimenting I discovered that if I \n> changed the data\n> > types of two of the key columns to FLOAT8 that I got vastly improved\n> > performance.\n> \n> Are there any foreign key linkages to or from this table? Maybe the\n> other end of the foreign key is float8?\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Thu, 31 Jul 2003 16:32:16 -0500", "msg_from": "\"Medora Schauer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd performance results " } ]
[ { "msg_contents": "This is stepping back quite a while; let me point people to the thread\nof 2003-02 where Mariusz Czu\\x{0142}ada <[email protected]> was\nlooking for a way of optimizing a VIEW that was a UNION.\n\n<http://archives.postgresql.org/pgsql-performance/2003-02/msg00095.php>\n\nThe subject has come up a few times through PostgreSQL history, and\nI'd imagine to think I may have a little something new to offer to it.\n\nLet's consider a table used to store log information:\n\ncreate table log_table (\n request_time timestamp with time zone,\n object character varying, -- What they asked for\n request_type character(8), -- What they did to it\n request_size integer,\n requestor inet,\n request_status integer,\n result_size integer,\n request_detail character varying\n);\ncreate index log_times on log_table(request_time);\ncreate index log_object on log_table(object);\n\nEvery time \"something happens,\" an entry goes into this table.\nUnfortunately, the table is likely to grow to tremendous size, over\ntime, and there are all sorts of troublesome things about purging it:\n\n -> Fragmentation may waste space and destroy the usefulness of\n indices;\n\n -> Deleting data row by row will cause replication logic to go mad,\n as triggers get invoked for every single row modified;\n\n -> The action of deletion will draw the data we just decided was\n _useless_ into memory, injuring cache utilization badly as we fill\n the cache with trash.\n\nThe obvious thought: Create several tables, and join them together\ninto a view. So instead of log_table being a table, we have\nlog_table_1 thru log_table_3, each with the schema describe above, and\ndefine the view:\n\ncreate view log_table as select * from log_table_1 union all \n select * from log_table_2 union all \n select * from log_table_3;\n\nIt's easy enough (modulo a little debugging and pl/pgsql work :-)) to\nturn this into an updatable view so that inserts into log_table use a\ndifferent log table every (day|week|month). And we can TRUNCATE the\neldest one, which is a cheap operation.\n\nThis approach also resembles the way the \"O guys\" handle partitioned\ntables, so it's not merely about \"logs.\"\n\nUnfortunately, selects on the VIEW are, at present, unable to make use\nof the indices. So if we want all log entries for June 11th, the\nquery:\n\n select * from log_table where request_time between 'june 11 2003' and\n 'june 12 2003';\n\nreturns a plan:\nSubquery Scan log_table (cost=0.00..10950.26 rows=177126 width=314)\n -> Append (cost=0.00..10950.26 rows=177126 width=314)\n -> Subquery Scan *SELECT* 1 (cost=0.00..3089.07 rows=50307 width=71)\n -> Seq Scan on log_table_1 (cost=0.00..3089.07 rows=50307 width=71)\n -> Subquery Scan *SELECT* 2 (cost=0.00..602.92 rows=9892 width=314)\n -> Seq Scan on log_table_2 (cost=0.00..602.92 rows=9892 width=314)\n -> Subquery Scan *SELECT* 3 (cost=0.00..2390.09 rows=39209 width=314)\n -> Seq Scan on log_table_3 (cost=0.00..2390.09 rows=39209 width=314)\n\nIn effect, the query is materialized into:\n\nselect * from \n (select * from log_table_1 union all select * from log_table_2\n union all select * from log_table_3) as merger\nwhere [request_time between 'june 11 2003' and 'june 12 2003'];\n\nWhat would perform better would be to attach the WHERE clause to each\nof the union members. 
(Everyone stop and sing \"Solidarity Forever\"\n:-))\n\nE.g.:\n\nselect * from\n (\n select * from log_table_1 where request_time between 'june 11 2003' and 'june 12 2003' union all\n select * from log_table_2 where request_time between 'june 11 2003' and 'june 12 2003' union all\n select * from log_table_3 where request_time between 'june 11 2003' and 'june 12 2003' union all\n ) as merged_version;\n\nSubquery Scan merged_version (cost=0.00..947.04 rows=247 width=314) (actual time=55.86..1776.42 rows=20124 loops=1)\n -> Append (cost=0.00..947.04 rows=247 width=314) (actual time=55.84..1483.60 rows=20124 loops=1)\n -> Subquery Scan *SELECT* 1 (cost=0.00..3.02 rows=1 width=71) (actual time=55.83..289.81 rows=3422 loops=1)\n -> Index Scan using log_table_1_trans_on_idx on log_table_1 (cost=0.00..3.02 rows=1 width=71) (actual time=55.80..239.84 rows=3422 loops=1)\n -> Subquery Scan *SELECT* 2 (cost=0.00..191.38 rows=49 width=314) (actual time=62.32..1115.15 rows=16702 loops=1)\n -> Index Scan using log_table_2_trans_on_idx on log_table_2 (cost=0.00..191.38 rows=49 width=314) (actual time=62.29..873.63 rows=16702 loops=1)\n -> Subquery Scan *SELECT* 3 (cost=0.00..752.64 rows=196 width=314) (actual time=26.69..26.69 rows=0 loops=1)\n -> Index Scan using log_table_3_trans_on_idx on log_table_3 (cost=0.00..752.64 rows=196 width=314) (actual time=26.69..26.69 rows=0 loops=1)\nTotal runtime: 1806.39 msec\n\nWhich is nice and quick, as it cuts each set down to size _before_\nmerging them.\n\nMariusz had been looking, back in February, for an optimization that\nwould, in effect, throw away the UNION ALL clauses that were\nunnecessary. Tom Lane and Stephan Szabo, in discussing this,\nobserved, quite rightly, that this is liable to be an obscure sort of\noptimization:\n\nTom Lane writes:\n> Stephan Szabo <[email protected]> writes:\n> > Yeah, but I think what he's hoping is that it'll notice that\n> > \"key=1 and key=3\" would be noticed as a false condition so that it doesn't\n> > scan those tables since a row presumably can't satisify both. 
The question\n> > would be, is the expense of checking the condition for all queries\n> > greater than the potential gain for these sorts of queries.\n\n> Yes, this is the key point: we won't put in an optimization that\n> wins on a small class of queries unless there is no material cost\n> added for planning cases where it doesn't apply.\n\nIn contrast, I would argue that adding the WHERE clause in as an extra\ncondition on each of the UNION subqueries is an optimization that is\nlikely to win in _most_ cases.\n\nIt helps with the example I illustrated; it would help with Mariusz'\nscenario, not by outright eliminating UNION subqueries, but rather by\nmaking their result sets empty.\n\nselect key, value from view123 where key = 2\n\ntransforms into...\n\nselect key, value from tab1 where key=1 [and key = 2] \nunion all\nselect key, value from tab2 where key=2 [and key = 2] \nunion all\nselect key, value from tab3 where key=3 [and key = 2];\n\nThe generalization is that:\n\n select * from\n (select [fields1] from t1 where [cond1] (UNION|UNION ALL|INTERSECT) \n select [fields2] from t2 where [cond2] (UNION|UNION ALL|INTERSECT) \n ...\n select [fieldsn] from tn where [condn]) as COMBINATION\n WHERE [globalcond];\n\nis equivalent to:\n\n select * from\n (select [fields1] from t1 where ([cond1]) and [globalcond] (UNION|UNION ALL|INTERSECT) \n select [fields2] from t2 where ([cond2]) and [globalcond] (UNION|UNION ALL|INTERSECT) \n ...\n select [fieldsn] from tn where ([condn]) and [globalcond]\n ) as COMBINATION;\n\n[globalcond] has to be expressed in terms of the fields available for\neach subquery, but that already needs to be true, because the global\ncondition at present is being applied to the fields that are given by\nthe UNION/INTERSECT/UNION ALL.\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Thu, 31 Jul 2003 18:24:11 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": true, "msg_subject": "Views With Unions" }, { "msg_contents": "\nOn Thu, 31 Jul 2003, Christopher Browne wrote:\n\n> select * from log_table where request_time between 'june 11 2003' and\n> 'june 12 2003';\n>\n> returns a plan:\n> Subquery Scan log_table (cost=0.00..10950.26 rows=177126 width=314)\n> -> Append (cost=0.00..10950.26 rows=177126 width=314)\n> -> Subquery Scan *SELECT* 1 (cost=0.00..3089.07 rows=50307 width=71)\n> -> Seq Scan on log_table_1 (cost=0.00..3089.07 rows=50307 width=71)\n> -> Subquery Scan *SELECT* 2 (cost=0.00..602.92 rows=9892 width=314)\n> -> Seq Scan on log_table_2 (cost=0.00..602.92 rows=9892 width=314)\n> -> Subquery Scan *SELECT* 3 (cost=0.00..2390.09 rows=39209 width=314)\n> -> Seq Scan on log_table_3 (cost=0.00..2390.09 rows=39209 width=314)\n\nWhat version are you using? 
In 7.3 and up it should be willing to\nconsider moving the clause down, unless there's something like a type\nmismatch (because in that case it may not be equivalent without a bunch\nmore work on the clause).\n\n\n", "msg_date": "Thu, 31 Jul 2003 15:34:27 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views With Unions" }, { "msg_contents": "Stephan Szabo wrote:\n\n>On Thu, 31 Jul 2003, Christopher Browne wrote:\n>\n> \n>\n>> select * from log_table where request_time between 'june 11 2003' and\n>> 'june 12 2003';\n>>\n>>returns a plan:\n>>Subquery Scan log_table (cost=0.00..10950.26 rows=177126 width=314)\n>> -> Append (cost=0.00..10950.26 rows=177126 width=314)\n>> -> Subquery Scan *SELECT* 1 (cost=0.00..3089.07 rows=50307 width=71)\n>> -> Seq Scan on log_table_1 (cost=0.00..3089.07 rows=50307 width=71)\n>> -> Subquery Scan *SELECT* 2 (cost=0.00..602.92 rows=9892 width=314)\n>> -> Seq Scan on log_table_2 (cost=0.00..602.92 rows=9892 width=314)\n>> -> Subquery Scan *SELECT* 3 (cost=0.00..2390.09 rows=39209 width=314)\n>> -> Seq Scan on log_table_3 (cost=0.00..2390.09 rows=39209 width=314)\n>> \n>>\n>\n>What version are you using? In 7.3 and up it should be willing to\n>consider moving the clause down, unless there's something like a type\n>mismatch (because in that case it may not be equivalent without a bunch\n>more work on the clause).\n>\nDear Chris,\n\n I had the same problem(type mismatch) and it was solved finally. \ncheck the list\n\"factoring problem ... \" subject only 2 weeks back .\n\nregds\nmallah.\n\n\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n>\n\n\n", "msg_date": "Fri, 01 Aug 2003 09:10:31 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views With Unions" }, { "msg_contents": "\nStephan Szabo said:\n>\n> On Thu, 31 Jul 2003, Christopher Browne wrote:\n>\n>> select * from log_table where request_time between 'june 11 2003'\n>> and\n>> 'june 12 2003';\n>>\n>> returns a plan:\n>> Subquery Scan log_table (cost=0.00..10950.26 rows=177126 width=314)\n>> -> Append (cost=0.00..10950.26 rows=177126 width=314)\n>> -> Subquery Scan *SELECT* 1 (cost=0.00..3089.07 rows=50307\n>> width=71)\n>> -> Seq Scan on log_table_1 (cost=0.00..3089.07\n>> rows=50307 width=71)\n>> -> Subquery Scan *SELECT* 2 (cost=0.00..602.92 rows=9892\n>> width=314)\n>> -> Seq Scan on log_table_2 (cost=0.00..602.92\n>> rows=9892 width=314)\n>> -> Subquery Scan *SELECT* 3 (cost=0.00..2390.09 rows=39209\n>> width=314)\n>> -> Seq Scan on log_table_3 (cost=0.00..2390.09\n>> rows=39209 width=314)\n>\n> What version are you using? In 7.3 and up it should be willing to\n> consider moving the clause down, unless there's something like a type\n> mismatch (because in that case it may not be equivalent without a bunch\n> more work on the clause).\n\nThat was 7.2.4, although I had also tried it on 7.4 (yesterday's CVS).\n\nWhich provides four findings:\n\n1. On 7.2.4, adding additional type info just doesn't help, fitting with\nthe notion that, consistent with your comment, improvement wouldn't happen\nearlier than 7.3.\n\nThere's no help on 7.2 :-(, and the system I'm initially most interested\nin using this on is still on 7.2.\n\n2. When I retried on 7.4, it _did_ find search paths based on Index Scan,\nwhen I added in additional type information. 
So the optimization I was\nwishing for _is_ there :-). In the longer term, that's very good news.\n\n3. I'll have to test this out on 7.3.4, now, as I hadn't, and it sounds\nas though that is an interesting case.\n\n4. It's often necessary to expressly specify type information in queries\nto get the optimizer to do the Right Thing.\n-- \n(reverse (concatenate 'string \"ofni.smrytrebil@\" \"enworbbc\"))\n<http://dev6.int.libertyrms.info/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n\n\n", "msg_date": "Fri, 1 Aug 2003 08:20:02 -0400 (EDT)", "msg_from": "\"Christopher Browne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views With Unions" }, { "msg_contents": "On Fri, 1 Aug 2003, Christopher Browne wrote:\n\n> Stephan Szabo said:\n> >\n> >\n> > What version are you using? In 7.3 and up it should be willing to\n> > consider moving the clause down, unless there's something like a type\n> > mismatch (because in that case it may not be equivalent without a bunch\n> > more work on the clause).\n>\n> That was 7.2.4, although I had also tried it on 7.4 (yesterday's CVS).\n>\n> Which provides four findings:\n>\n> 1. On 7.2.4, adding additional type info just doesn't help, fitting with\n> the notion that, consistent with your comment, improvement wouldn't happen\n> earlier than 7.3.\n>\n> There's no help on 7.2 :-(, and the system I'm initially most interested\n> in using this on is still on 7.2.\n\nIf you really wanted you could try going back and finding the diffs\nassociated with this in the CVS history or committers archives and see if\nyou can make equivalent changes to 7.2, but that's possibly going to be\ndifficult.\n\n> 2. When I retried on 7.4, it _did_ find search paths based on Index Scan,\n> when I added in additional type information. So the optimization I was\n> wishing for _is_ there :-). In the longer term, that's very good news.\n>\n> 3. I'll have to test this out on 7.3.4, now, as I hadn't, and it sounds\n> as though that is an interesting case.\n>\n> 4. It's often necessary to expressly specify type information in queries\n> to get the optimizer to do the Right Thing.\n\nEspecially for cases like this. It takes the safer route of not pushing\nthings down when it's not sure if pushing down might change the semantics\n(for example if a union piece has a different type from the union\noutput, simply pushing clauses down unchanged could change the results)\n\nTom would probably be willing to relax conditions if it could be proven\nsafe even for the wierd outlying cases with char and varchar and such.\n\n\n", "msg_date": "Fri, 1 Aug 2003 08:10:32 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views With Unions" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> On Fri, 1 Aug 2003, Christopher Browne wrote:\n>> Stephan Szabo said:\n>> > What version are you using? In 7.3 and up it should be willing to\n>> > consider moving the clause down, unless there's something like a type\n>> > mismatch (because in that case it may not be equivalent without a bunch\n>> > more work on the clause).\n>>\n>> That was 7.2.4, although I had also tried it on 7.4 (yesterday's CVS).\n>>\n>> Which provides four findings:\n>>\n>> 1. 
On 7.2.4, adding additional type info just doesn't help, fitting with\n>> the notion that, consistent with your comment, improvement wouldn't happen\n>> earlier than 7.3.\n>>\n>> There's no help on 7.2 :-(, and the system I'm initially most interested\n>> in using this on is still on 7.2.\n>\n> If you really wanted you could try going back and finding the diffs\n> associated with this in the CVS history or committers archives and see if\n> you can make equivalent changes to 7.2, but that's possibly going to be\n> difficult.\n\nSomehow I don't think that'll fly; I have taken a brief look at some\nof the optimizer code, and I very much don't want to leap into that at\nthe moment. (I don't imagine I'd be able to muster much enthusiasm\nfor the idea from others that are involved, either. More likely, I'm\nunderstating the probable opposition to the idea... :-))\n\nI was hoping there would be some help on 7.2, but can live without it.\nThis approach to improving log purgeability is NOT the sort of thing\nthat you deploy on a day's notice because it seems like a \"neat idea.\"\nIf it waits a couple months to be implemented, that's doubtless OK.\n\n>> 2. When I retried on 7.4, it _did_ find search paths based on Index Scan,\n>> when I added in additional type information. So the optimization I was\n>> wishing for _is_ there :-). In the longer term, that's very good news.\n>>\n>> 3. I'll have to test this out on 7.3.4, now, as I hadn't, and it sounds\n>> as though that is an interesting case.\n\nIt turns out nicely on 7.3.4, using index scans for the subqueries in\nthe query:\n\n select count(*) from log_table where event_date between\n '2003-04-01' and '2003-05-01';\n\nWhich is a Good Thing.\n\n>> 4. It's often necessary to expressly specify type information in\n>> queries to get the optimizer to do the Right Thing.\n>\n> Especially for cases like this. It takes the safer route of not\n> pushing things down when it's not sure if pushing down might change\n> the semantics (for example if a union piece has a different type\n> from the union output, simply pushing clauses down unchanged could\n> change the results)\n>\n> Tom would probably be willing to relax conditions if it could be\n> proven safe even for the wierd outlying cases with char and varchar\n> and such.\n\nEvidently the dates of the form '2003-04-01' and such are getting\ntypes promoted properly enough. I don't see anything to \"lobby\" for\nat this point.\n\nThe DOMAIN case I mentioned the other day had something odd going on\nthat LOST the type information associated with the domain. Albeit\nthat was on 7.3, whereas the changes in DOMAIN functionality that make\nthem meaningfully useful come in 7.4...\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in name ^ \"@\" ^ tld;;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Fri, 01 Aug 2003 11:48:01 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Views With Unions" }, { "msg_contents": "Christopher Browne <[email protected]> writes:\n> The DOMAIN case I mentioned the other day had something odd going on\n> that LOST the type information associated with the domain. 
Albeit\n> that was on 7.3, whereas the changes in DOMAIN functionality that make\n> them meaningfully useful come in 7.4...\n\nDomains were a work-in-progress in 7.3, and to some extent still are.\nPlease try to test out 7.4beta and let us know about deficiencies you\nfind.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 01 Aug 2003 12:51:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Views With Unions " }, { "msg_contents": "On Fri, 1 Aug 2003, Tom Lane wrote:\n\n> Domains were a work-in-progress in 7.3, and to some extent still are.\n> Please try to test out 7.4beta and let us know about deficiencies you\n> find.\n\nAre domains user defined types? That they seem to be based on what I see\non the docs.\n\nAny drawbacks to using them?\n\nRight now I have a new database I am making and wanted some consistency\naccros some tables. Currently I used inheritance to enforce the consitency\nsince a good number of fields needed to be common among the tables AND the\ninheritted tables are basically a supperset of the data, so some times I\nwould want to access the inheritted tables and other times the parent/main\ntable.\n", "msg_date": "Fri, 1 Aug 2003 13:26:47 -0400 (EDT)", "msg_from": "Francisco J Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Domains (Was [PERFORM] Views With Unions)" }, { "msg_contents": "On Fri, 2003-08-01 at 12:26, Francisco J Reyes wrote:\n> On Fri, 1 Aug 2003, Tom Lane wrote:\n[snip]\n> accros some tables. Currently I used inheritance to enforce the consitency\n> since a good number of fields needed to be common among the tables AND the\n> inheritted tables are basically a supperset of the data, so some times I\n> would want to access the inheritted tables and other times the parent/main\n> table.\n\nIsn't this when you'd really want child tables, instead?\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n", "msg_date": "01 Aug 2003 18:56:33 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Domains (Was [PERFORM] Views With Unions)" }, { "msg_contents": "On Fri, 1 Aug 2003, Ron Johnson wrote:\n\n> On Fri, 2003-08-01 at 12:26, Francisco J Reyes wrote:\n> > On Fri, 1 Aug 2003, Tom Lane wrote:\n> [snip]\n> > Currently I used inheritance to enforce the consitency\n> > since a good number of fields needed to be common among the tables AND the\n> > inheritted tables are basically a supperset of the data, so some times I\n> > would want to access the inheritted tables and other times the parent/main\n> > table.\n>\n> Isn't this when you'd really want child tables, instead?\n\n\nI think both ways can accomplish the same (if not very simmilar\nfunctionality), however I find using inherittance easier.\nNot really sure about efficiency though.\n\nA simple example of the type of design I am planning to do would be:\n\nTable A\nUserid\ndate entered\nlast changed\n\n\nTable B inherited from A(additional fields)\nperson name\nbirthday\n\nTable C inherited from A(additional fields)\nbook\nisbn\ncomment\n\nI plan to keep track of how many records a user has so with inherittance\nit's easy to do this. 
I can count for the user in Table A and find out how\nmany records he/she has or I can count in each of the inheritted tables\nand see how many there are for that particular table.\n\nInheritance makes it easier to see everything for a userid or just a\nparticular type of records.\n", "msg_date": "Sat, 2 Aug 2003 14:22:39 -0400 (EDT)", "msg_from": "Francisco J Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Inheritance vs child tables (Was Domains)" }, { "msg_contents": "On Sat, 2003-08-02 at 13:22, Francisco J Reyes wrote:\n> On Fri, 1 Aug 2003, Ron Johnson wrote:\n> \n> > On Fri, 2003-08-01 at 12:26, Francisco J Reyes wrote:\n> > > On Fri, 1 Aug 2003, Tom Lane wrote:\n> > [snip]\n> > > Currently I used inheritance to enforce the consitency\n> > > since a good number of fields needed to be common among the tables AND the\n> > > inheritted tables are basically a supperset of the data, so some times I\n> > > would want to access the inheritted tables and other times the parent/main\n> > > table.\n> >\n> > Isn't this when you'd really want child tables, instead?\n> \n> \n> I think both ways can accomplish the same (if not very simmilar\n> functionality), however I find using inherittance easier.\n> Not really sure about efficiency though.\n> \n> A simple example of the type of design I am planning to do would be:\n> \n> Table A\n> Userid\n> date entered\n> last changed\n> \n> \n> Table B inherited from A(additional fields)\n> person name\n> birthday\n> \n> Table C inherited from A(additional fields)\n> book\n> isbn\n> comment\n> \n> I plan to keep track of how many records a user has so with inherittance\n> it's easy to do this. I can count for the user in Table A and find out how\n> many records he/she has or I can count in each of the inheritted tables\n> and see how many there are for that particular table.\n> \n> Inheritance makes it easier to see everything for a userid or just a\n> particular type of records.\n\nBut isn't this what LEFT OUTER JOIN is for?\n\nAttached is a zip of the sql and results of what I mean.\n\nPlain inner joins or LOJ with \"WHERE {B|C}.whatever IS NOT NULL\"\nalso pare things dawn.\n\nOf course, just yesterday, in a post on -general or -performance,\nI read that LEFT OUTER JOIN isn't particularly efficient in PG.\n\nAlso, wouldn't it be odd to have a userid without a name? So,\nwhy isn't table_b combined with table_a? But all circumstances\nare different, I guess...\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+", "msg_date": "02 Aug 2003 16:50:33 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inheritance vs child tables (Was Domains)" }, { "msg_contents": "On Sat, 2 Aug 2003, Ron Johnson wrote:\n\n> > Inheritance makes it easier to see everything for a userid or just a\n> > particular type of records.\n>\n> But isn't this what LEFT OUTER JOIN is for?\n\n\nYes but the more tables you have the more cumbersome it would become to do\nwith outer joins.\nImagine a parent table and 20 children tables. 
To get a count of all\nrecords the user has I either have to do a nasty/ugly union or do 20\ncounts and then add them (ie doing the separate counts and keeping\ntrack of them with a language like PHP)\n\n> Of course, just yesterday, in a post on -general or -performance,\n> I read that LEFT OUTER JOIN isn't particularly efficient in PG.\n\nAnd it's probably worse when many tables are involved.\n\n\n> Also, wouldn't it be odd to have a userid without a name? So,\n> why isn't table_b combined with table_a?\n\nI have a separate table with user information.\nThe main reason I thought of inherittance was because I need to do\naccounting and keep track of how many records a user has for certain type\nof data or in total. Inheritance makes this really easy.\n\nTable A, B and C are not combined because B, C and onward have totally\ndifferent type of data and they are not one to one.\n\nThere are times when children tables make more sense like:\n\n*person table\n-person id\n-name\n-address\n\n*phones\n-person id\n-phone type (ie fax, home, work)\n-area\n-phone\n\n*emails\n-person id\n-email type (home, work)\n-email\n\nIn my opinion a case like that is best handled with children tables.\nSpecially if there are only a couple of childre tables.\n\nOn my case I have about 8 inherited tables and what I believe inheritance\ndoes for me is:\n* Easy way to count both a grand total or a table per inherited table.\n* Easy to work with each inheritted table, which will be very often.\n* Much simpler queries/reporting\n\n", "msg_date": "Sun, 3 Aug 2003 10:27:11 -0400 (EDT)", "msg_from": "Francisco J Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inheritance vs child tables (Was Domains)" } ]
[ { "msg_contents": "If a table which will be heavily used has numerous fields, yet only a\nhandfull of them will be used heavily, would it make sense performance wise to split it?\n\nExample\nTable 1\nField 1\n....\nField 100\n\nTable 2\nReferences Field 1 of table1\n.....\n\nTable n\nReferences Field 1 of table 1\n\nSo table 1 basically will be referenced by many tables and most of the\ntime only a handfull of fields of table 1 are needed. Don't have exact\nnumbers, but let's say that more than 60% of queries to table 1 queries\nonly use 20 fields or less.\n\nIf I split Table 1 then the second table will basically be a 1 to 1 to\nTable 1.\n\nI have this simmilar scenario for two tables. One is close to 1 Million\nrecords and the other is about 300,000 records.\n\nProgramming wise it is much easier to only have one table, but I am just\nconcerned about performance.\n\nMost access to these tables will be indexed with some occassional\nsequential scans. Number of concurrent users now is probably 10 or less.\nExpect to grow to 20+ concurrent connections. Will this be more of an\nissue if I had hundreds/thousands of users?\n", "msg_date": "Fri, 1 Aug 2003 12:08:18 -0400 (EDT)", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": true, "msg_subject": "How number of columns affects performance" }, { "msg_contents": "On Fri, 2003-08-01 at 11:08, Francisco Reyes wrote:\n> If a table which will be heavily used has numerous fields, yet only a\n> handfull of them will be used heavily, would it make sense performance wise to split it?\n> \n> Example\n> Table 1\n> Field 1\n> ....\n> Field 100\n> \n> Table 2\n> References Field 1 of table1\n> .....\n> \n> Table n\n> References Field 1 of table 1\n> \n> So table 1 basically will be referenced by many tables and most of the\n> time only a handfull of fields of table 1 are needed. Don't have exact\n> numbers, but let's say that more than 60% of queries to table 1 queries\n> only use 20 fields or less.\n> \n> If I split Table 1 then the second table will basically be a 1 to 1 to\n> Table 1.\n\nDo all 100 fields *really* all refer to the same *one* entity,\nwith no repeating values, etc?\nIf not, then good design says to split the table.\n\nAlso, if it's a high-activity table, but you only rarely need fields\n60-90, then splitting them out to their own table might be useful\n(especially if some of those fields are large *CHAR or TEXT).\n\n> I have this simmilar scenario for two tables. One is close to 1 Million\n> records and the other is about 300,000 records.\n> \n> Programming wise it is much easier to only have one table, but I am just\n> concerned about performance.\n> \n> Most access to these tables will be indexed with some occassional\n> sequential scans. Number of concurrent users now is probably 10 or less.\n> Expect to grow to 20+ concurrent connections. Will this be more of an\n> issue if I had hundreds/thousands of users?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. 
Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n", "msg_date": "01 Aug 2003 11:34:48 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How number of columns affects performance" }, { "msg_contents": "On Fri, 1 Aug 2003, Ron Johnson wrote:\n\n> Do all 100 fields *really* all refer to the same *one* entity,\n> with no repeating values, etc?\n\nYes all fields belong to the same entity. I used 100 as an example it may\nbe something like 60 to 80 fields (there are two tables in question). I\ndon't formally do 3rd normal form, but for the most part I do most of\nthe general concepts of normalization.\n\n> If not, then good design says to split the table.\n\nThe original data was in Foxpro tables and I have made better normalized\ntables in PostgreSQL.\n\n\n> Also, if it's a high-activity table, but you only rarely need fields\n> 60-90, then splitting them out to their own table might be useful\n> (especially if some of those fields are large *CHAR or TEXT).\n\nYes some of the fields are varchars. 5 fields are varchar(22) and 3 longer\n(35, 58, 70). The total row length is a little over 400 characters in\nFoxpro. In postgreSQL may be less than 300 (ie Foxpro uses ASCII\nrepresentation for numbers so to store \"1234567\" it uses 7 bytes, whereas\nin PostgreSQL I can just make it an int and use 4 bytes)\n", "msg_date": "Fri, 1 Aug 2003 13:14:53 -0400 (EDT)", "msg_from": "Francisco J Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How number of columns affects performance" }, { "msg_contents": "On Fri, 2003-08-01 at 12:14, Francisco J Reyes wrote:\n> On Fri, 1 Aug 2003, Ron Johnson wrote:\n> \n> > Do all 100 fields *really* all refer to the same *one* entity,\n> > with no repeating values, etc?\n> \n> Yes all fields belong to the same entity. I used 100 as an example it may\n> be something like 60 to 80 fields (there are two tables in question). I\n> don't formally do 3rd normal form, but for the most part I do most of\n> the general concepts of normalization.\n\nWoo hoo!!\n\n> Yes some of the fields are varchars. 5 fields are varchar(22) and 3 longer\n> (35, 58, 70). The total row length is a little over 400 characters in\n> Foxpro. In postgreSQL may be less than 300 (ie Foxpro uses ASCII\n> representation for numbers so to store \"1234567\" it uses 7 bytes, whereas\n> in PostgreSQL I can just make it an int and use 4 bytes)\n\nBut I'd only split if these big field are rarely used. Note that\nVARCHAR(xx) removes trailing spaces, so that also is a factor.\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n", "msg_date": "01 Aug 2003 12:32:13 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How number of columns affects performance" }, { "msg_contents": "Francisco,\n\n> Yes all fields belong to the same entity. I used 100 as an example it may\n> be something like 60 to 80 fields (there are two tables in question). 
I\n> don't formally do 3rd normal form, but for the most part I do most of\n> the general concepts of normalization.\n>\n> > If not, then good design says to split the table.\n\nActually, no, it doesn't. If all 60-80 fields are unitary and required \ncharacteristics of the row-entity, normalization says keep them in one table.\n\nThe only time NF would recommend splitting the table is for fields which are \nfrequenly NULL for reasons other than missing data entry. For those, you'd \ncreate a child table. Although while this is good 4NF, it's impractical in \nPostgreSQL, where queries with several LEFT OUTER JOINs tend to be very slow \nindeed.\n\nMy attitude toward these normalization vs. performance issues is consistenly \nthe same: First, verify that you have a problem. That is, build the \ndatabase with everything in one table (or with child tables for Nullable \nfields, as above) and try to run your application. If performance is \nappalling, *then* take denormalization steps to improve it.\n\nI'm frequently distressed by the number of developers who make questionable \ndesign decisions \"for performance reasons\" without every verifying that they \nwere, in fact, improving performance ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 1 Aug 2003 10:44:03 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How number of columns affects performance" }, { "msg_contents": "On Fri, 2003-08-01 at 12:44, Josh Berkus wrote:\n> Francisco,\n> \n> > Yes all fields belong to the same entity. I used 100 as an example it may\n> > be something like 60 to 80 fields (there are two tables in question). I\n> > don't formally do 3rd normal form, but for the most part I do most of\n> > the general concepts of normalization.\n> >\n> > > If not, then good design says to split the table.\n> \n> Actually, no, it doesn't. If all 60-80 fields are unitary and required \n> characteristics of the row-entity, normalization says keep them in one table.\n\nYou snipped out too much, because that's exactly what I said...\nAnother way of writing it: only split the table if some of the fields\nare not unitary to the entity.\n\n> The only time NF would recommend splitting the table is for fields which are \n> frequenly NULL for reasons other than missing data entry. For those, you'd \n> create a child table. Although while this is good 4NF, it's impractical in \n> PostgreSQL, where queries with several LEFT OUTER JOINs tend to be very slow \n> indeed.\n\nGood to know.\n\n> My attitude toward these normalization vs. performance issues is consistenly \n> the same: First, verify that you have a problem. That is, build the \n> database with everything in one table (or with child tables for Nullable \n> fields, as above) and try to run your application. If performance is \n> appalling, *then* take denormalization steps to improve it.\n\nThe OP was not talking about denormalizing ...\n\nIt was: will vertically partitioning a table increase performance.\nAnd the answer is \"sometimes\",\n\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. 
Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n", "msg_date": "01 Aug 2003 14:19:20 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How number of columns affects performance" }, { "msg_contents": "Ron,\n\n> You snipped out too much, because that's exactly what I said...\n> Another way of writing it: only split the table if some of the fields\n> are not unitary to the entity.\n\nSorry! No offense meant.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 1 Aug 2003 12:21:43 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How number of columns affects performance" }, { "msg_contents": "On Fri, 1 Aug 2003, Josh Berkus wrote:\n\n> My attitude toward these normalization vs. performance issues is consistenly\n> the same: First, verify that you have a problem. That is, build the\n> database with everything in one table (or with child tables for Nullable\n> fields, as above) and try to run your application. If performance is\n> appalling, *then* take denormalization steps to improve it.\n\nI think I understand your point, however it would be very laborious after\nyou do all development to find out you need to de-normalize.\n\nOn your experience at which point it would actually help to do this\nde-normalization in PostgreSQL? I know there are numerous factors ,but any\nfeedback based on previous experiences would help.\n\nRight now the work I am doing is only for company internal use. If I was\nto ever do work that outside users would have access to then I would be\nlooking at 100+ concurrent users.\n", "msg_date": "Fri, 1 Aug 2003 16:56:08 -0400 (EDT)", "msg_from": "Francisco J Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How number of columns affects performance" }, { "msg_contents": "Francisco,\n\n> I think I understand your point, however it would be very laborious after\n> you do all development to find out you need to de-normalize.\n\nNot terribly. Views and Rules are good for this.\n\n> On your experience at which point it would actually help to do this\n> de-normalization in PostgreSQL? I know there are numerous factors ,but any\n> feedback based on previous experiences would help.\n\nMy experience? If you're running on good hardware, it's completely \nunnecessary to vertically partition the table. The only thing I'd do would \nbe to look for columns which are frequently NULL and can be grouped together, \nand spin those off into a sub-table. That is, if you have 4 columns which \nare generally either all null or all filled, and are all null for 70% of \nrecords then those 4 could make a nice child table.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Fri, 1 Aug 2003 14:18:51 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How number of columns affects performance" } ]
[ { "msg_contents": "OSDL has ported OSDL Database Test Suite 3(OSDL-DBT3) to PostgreSQL. \nIt drives the database with an ad-hoc decision support workload. It\nhelps database developers and users to identify database performance\nissues.\n\nOSDL-DBT3 is derived from TPC-H benchmark. TPC-H is an ad-hoc decision\nsupport benchmark. It has 22 complicated queries doing a lot of\ntable/index scanning, sorting and grouping by. Details about TPC-H can\nbe found at: http://www.tpc.org/tpch/\n\nThough OSDL-DBT3 is based on TPC-H, it deviates from TPC-H\nsignificantly. It skipped many of the requirements for audit purpose,\nas well as added flexibility. OSDL-DBT3 performance test results are\nnot comparable to TPC-H results.\n\nOSDL-DBT3 tarball can be downloaded from:\nhttp://sourceforge.net/projects/osdldbt/\nThe source can be downloaded from source forge cvs tree and osdl bk tree\nhttp://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/osdldbt/dbt3/\nbk://developer.osdl.org/dbt3\n\nOSDL-DBT3 is also implemented on OSDL Scalable Test Platform(STP). The\nplatform provides a framework where users run a set of performance and\nscalability tests on various hardware platforms. Currently, OSDL-DBT3\nis running against PostgreSQL 7.3.3 only. We plan to improve it so that\nit can run against PostgreSQL patches. To find more information about\nSTP, visit: http://www.osdl.org/stp/. \n\nA sample OSDL-DBT3 test result report can be found at:\nhttp://khack.osdl.org/stp/276912/\n\nYour comments are welcome,\nRegards,\nJenny\n-- \nJenny Zhang\nOpen Source Development Lab Inc \n12725 SW Millikan Way\nSuite 400\nBeaverton, OR 97005\n(503)626-2455 ext 31\n\n", "msg_date": "01 Aug 2003 11:04:10 -0700", "msg_from": "Jenny Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "OSDL Database Test Suite 3 is available on PostgreSQL" }, { "msg_contents": "Jenny,\n\n> OSDL has ported OSDL Database Test Suite 3(OSDL-DBT3) to PostgreSQL.\n> It drives the database with an ad-hoc decision support workload. It\n> helps database developers and users to identify database performance\n> issues.\n\nWay, way, cool! We've been waiting for something like this eagerly. I \nreally look forward to trying it out.\n\nFurther, OSDL-DBT3 can hopefully serve as the kernel of our internal \nperformance option testing suite. We'll evaluate very soon.\n\nThanks so much for your hard work!\n\n-- \nJosh Berkus\nPostgreSQL Core Team\nSan Francisco\n", "msg_date": "Fri, 1 Aug 2003 11:57:44 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] OSDL Database Test Suite 3 is available on PostgreSQL" }, { "msg_contents": "On 01 Aug 2003 11:04:10 -0700, Jenny Zhang <[email protected]> wrote:\n>A sample OSDL-DBT3 test result report can be found at:\n>http://khack.osdl.org/stp/276912/\n>\n>Your comments are welcome,\n\n| effective_cache_size | 1000\n\nWith 4GB of memory this is definitely too low and *can* (note that I\ndon't say *must*) lead the planner to wrong decisions.\n\n| shared_buffers | 15200\n\n... looks reasonable. 
Did you test with other values?\n\n| sort_mem | 524288\n\nThis is a bit high, IMHO, but might be ok given that DBT3 is not run\nwith many concurrent sessions (right?).\nhttp://khack.osdl.org/stp/276912/results/plot/vmstat_swap.png shows\nsome swapping activity towards the end of the run which could be\ncaused by a too high sort_mem setting.\n\nServus\n Manfred\n", "msg_date": "Mon, 04 Aug 2003 15:33:49 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] OSDL Database Test Suite 3 is available on PostgreSQL" }, { "msg_contents": "On Mon, 4 Aug 2003, Manfred Koizar wrote:\n\n> On 01 Aug 2003 11:04:10 -0700, Jenny Zhang <[email protected]> wrote:\n> >A sample OSDL-DBT3 test result report can be found at:\n> >http://khack.osdl.org/stp/276912/\n> >\n> >Your comments are welcome,\n> \n> | effective_cache_size | 1000\n> \n> With 4GB of memory this is definitely too low and *can* (note that I\n> don't say *must*) lead the planner to wrong decisions.\n> \n> | shared_buffers | 15200\n> \n> ... looks reasonable. Did you test with other values?\n> \n> | sort_mem | 524288\n> \n> This is a bit high, IMHO, but might be ok given that DBT3 is not run\n> with many concurrent sessions (right?).\n> http://khack.osdl.org/stp/276912/results/plot/vmstat_swap.png shows\n> some swapping activity towards the end of the run which could be\n> caused by a too high sort_mem setting.\n\nAnd, as always, don't forget to set effect_cache_size.\n\n", "msg_date": "Mon, 4 Aug 2003 10:07:01 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] OSDL Database Test Suite 3 is available on PostgreSQL" }, { "msg_contents": "On Mon, 4 Aug 2003, Manfred Koizar wrote:\n\n> On 01 Aug 2003 11:04:10 -0700, Jenny Zhang <[email protected]> wrote:\n> >A sample OSDL-DBT3 test result report can be found at:\n> >http://khack.osdl.org/stp/276912/\n> >\n> >Your comments are welcome,\n> \n> | effective_cache_size | 1000\n> \n> With 4GB of memory this is definitely too low and *can* (note that I\n> don't say *must*) lead the planner to wrong decisions.\n> \n> | shared_buffers | 15200\n> \n> ... looks reasonable. Did you test with other values?\n> \n> | sort_mem | 524288\n> \n> This is a bit high, IMHO, but might be ok given that DBT3 is not run\n> with many concurrent sessions (right?).\n> http://khack.osdl.org/stp/276912/results/plot/vmstat_swap.png shows\n> some swapping activity towards the end of the run which could be\n> caused by a too high sort_mem setting.\n\nSorry, my last email shot off out of the gun before it was completed...\n\nTo repeat: Don't forget to set effective_cache_size the same way as \nshared buffers (i.e. it's in 8k blocks for most systems.) If you have a \nmachine with 4 gig ram, and 3 gigs is available as disk cache, then \ndivide out 3 gigs by 8k to get the right number. My quick calculation \nshows that being about 393216 blocks.\n\n", "msg_date": "Mon, 4 Aug 2003 10:09:09 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] OSDL Database Test Suite 3 is available on PostgreSQL" }, { "msg_contents": "On 4 Aug 2003 at 15:33, Manfred Koizar wrote:\n\n> On 01 Aug 2003 11:04:10 -0700, Jenny Zhang <[email protected]> wrote:\n> >A sample OSDL-DBT3 test result report can be found at:\n> >http://khack.osdl.org/stp/276912/\n> >\n> >Your comments are welcome,\n\nI could not get postgresql .conf so I will combine the comments.\n\n1. 
Effective cache size, already mentioned\n2. Sort memory already mentioned.\n3. Was WAL put on different drive?\n4. Can you try with autovacuum daemon and 7.4beta when it comes out..\n5. What was the file system? Ext2/Ext3/reiser/XFS?\n\n<Scratching head>\n\nIs there any comparison available for other databases.. Could be interesting to \nsee..:-)\n\n</Scratching head>\n\nThanks for the good work. I understand it must have been quite an effort to run \nit..\n\nKeep it up..\n\nBye\n Shridhar\n\n--\nFourth Law of Revision:\tIt is usually impractical to worry beforehand about\t\ninterferences -- if you have none, someone will make one for you.\n\n", "msg_date": "Mon, 04 Aug 2003 22:09:27 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] OSDL Database Test Suite 3 is available on PostgreSQL" }, { "msg_contents": "On 4 Aug 2003, Jenny Zhang wrote:\n\n> On Mon, 2003-08-04 at 06:33, Manfred Koizar wrote:\n> > | effective_cache_size | 1000\n> > \n> > With 4GB of memory this is definitely too low and *can* (note that I\n> > don't say *must*) lead the planner to wrong decisions.\n> > \n> I changed the default to effective_cache_size=393216 as calculated by\n> Scott. Another way to check the execution plan is to go to the results\n> dir:\n> http://khack.osdl.org/stp/276917/results\n> There is a 'power_plan.out' file to record the execution plan. I am\n> running a test with the changed effective_cache_size, I will see how it\n> affect the plan.\n> \n> > | shared_buffers | 15200\n> > \n> > ... looks reasonable. Did you test with other values?\n> I have only one with shared_buffers=1200000 at:\n> http://khack.osdl.org/stp/276847/\n> The performance degraded. \n\nWell, that's truly huge, even for a machine with lots-o-ram. Most tests \nfind that once the shared_buffers are big enough to use more than about 25 \nto 33% of RAM, they're too big, as you get little return.\n\n> > | sort_mem | 524288\n> > \n> > This is a bit high, IMHO, but might be ok given that DBT3 is not run\n> > with many concurrent sessions (right?).\n> > http://khack.osdl.org/stp/276912/results/plot/vmstat_swap.png shows\n> > some swapping activity towards the end of the run which could be\n> > caused by a too high sort_mem setting.\n> Right, I run only 4 streams. Setting this parameter lower caused more\n> reading/writing to the pgsql/tmp. I guess the database has to do it if\n> it can not do sorting in memory. \n\nNote that IF your sortmem really is 1/2 gig, then you'll likely have LESS \nthan 3 gigs left for OS system cache. About how big does top show buff \nand cached to be on that box under load? Not that it's a big deal if you \nget the effective cache size off by a little bit, it's more of a rubber \nmallet setting than a jeweler's screw driver setting.\n\n\nThanks a bunch for all the great testing. 
It's a very nice tool to have \nfor convincing the bosses to go with Postgresql.\n\n", "msg_date": "Mon, 4 Aug 2003 16:40:11 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [osdldbt-general] Re: [PERFORM] OSDL Database Test Suite 3 is" }, { "msg_contents": "On Mon, 2003-08-04 at 09:39, Shridhar Daithankar wrote:\n> On 4 Aug 2003 at 15:33, Manfred Koizar wrote:\n> \n> > On 01 Aug 2003 11:04:10 -0700, Jenny Zhang <[email protected]> wrote:\n> > >A sample OSDL-DBT3 test result report can be found at:\n> > >http://khack.osdl.org/stp/276912/\n> > >\n> > >Your comments are welcome,\n> \n> I could not get postgresql .conf so I will combine the comments.\n> \n> 1. Effective cache size, already mentioned\n> 2. Sort memory already mentioned.\n> 3. Was WAL put on different drive?\n> 4. Can you try with autovacuum daemon and 7.4beta when it comes out..\n> 5. What was the file system? Ext2/Ext3/reiser/XFS?\n> \n> <Scratching head>\n> \n> Is there any comparison available for other databases.. Could be interesting to \n> see..:-)\n\nOSDL has run workloads using the SAP DB and PostgreSQL. However, each\nof the workloads have been tweaked to work around deficiencies of each\ndatabase with respect to the TPC benchmarks from with the DBT workloads\nare derrived. Since there are extensive modifications to each workload\nan that the fact that each workload operates under different situations\n(SAP uses raw disk, PostgreSQL uses a file system), it is not beneficial\nto compare numbers between different databases.\n\nRemember, the intent is to provide a tool kit that can be used to \nbenefit the community. From other postings, it appears that these \nworkloads we have available can be used to help the PostgreSQl community\ndevelop a better database; one that is better able to handle the kinds\nof stress these workloads can produce when scaled to large database\nsizes.\n\nWe have been using these kits to characterize the abilities of the Linux\nkernel. To show that these workloads work with two different databases\nimplies that Linux is capable of supporting these two databases.\n\nThe other tool kits, by the way, are being ported to PostgreSQL as well.\nHelp is needed to tune the workloads to exercise PostgreSQL better. It\nwould be great if you could get involved with the porting efforts and\nassist with the tuning of the PostgreSQL kit.\n\n\n> \n> </Scratching head>\n> \n> Thanks for the good work. 
I understand it must have been quite an effort to run \n> it..\n> \n> Keep it up..\n> \n> Bye\n> Shridhar\n> \n> --\n> Fourth Law of Revision:\tIt is usually impractical to worry beforehand about\t\n> interferences -- if you have none, someone will make one for you.\n> \n> \n> \n> -------------------------------------------------------\n> This SF.Net email sponsored by: Free pre-built ASP.NET sites including\n> Data Reports, E-commerce, Portals, and Forums are available now.\n> Download today and enter to win an XBOX or Visual Studio .NET.\n> http://aspnet.click-url.com/go/psa00100003ave/direct;at.aspnet_072303_01/01\n> _______________________________________________\n> osdldbt-general mailing list\n> [email protected]\n> https://lists.sourceforge.net/lists/listinfo/osdldbt-general\n-- \nCraig Thomas\[email protected]\n\n", "msg_date": "04 Aug 2003 15:48:31 -0700", "msg_from": "Craig Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [osdldbt-general] Re: [PERFORM] OSDL Database Test Suite 3 is" }, { "msg_contents": "Thanks all for your feedback.\n\nI think I should explain more about how to use this test kit. \n\nThe main purpose of putting the test kit on Scalability Test\nPlatform(STP) is that testers can run the workload against the database\nwith different parameters and Linux kernels to see performance\ndifferences. Though the test kit picks up default parameters if they\nare not provided, the command line parameters overwrite the default\nones. Currently, the following parameters are supported:\n-s <scale_factor> -n <number of streams> -d '<database parameters>' -r\n<{0|1}> -x <{0|1}> \n\nwhere:\n-s <scale_factor> is tpc-h database scale factor, right now, only SF=1\nis available.\n\n-n <number of streams> is the number of throughput test streams, which\ncorresponds number of simultaneous database connections during\nthroughput test.\n\n-d <database parameters> is the database parameters used when starting\npostmaster. for example: \n-B 120000 -c effective_cache_size=393216 -c sort_mem=524288 -c\nstats_command_string=true -c stats_row_level=true -c\nstats_block_level=true\n\n-r {0|1}: indicates if the database dir base/<database dir>/pgsql_tmp is\nput on a separate disk drive\n\n-x {0|1}: indicates if the WAL is put on a separate disk drive.\n\nThe other comments are in-lined:\n\nOn Mon, 2003-08-04 at 06:33, Manfred Koizar wrote:\n> | effective_cache_size | 1000\n> \n> With 4GB of memory this is definitely too low and *can* (note that I\n> don't say *must*) lead the planner to wrong decisions.\n> \nI changed the default to effective_cache_size=393216 as calculated by\nScott. Another way to check the execution plan is to go to the results\ndir:\nhttp://khack.osdl.org/stp/276917/results\nThere is a 'power_plan.out' file to record the execution plan. I am\nrunning a test with the changed effective_cache_size, I will see how it\naffect the plan.\n\n> | shared_buffers | 15200\n> \n> ... looks reasonable. Did you test with other values?\nI have only one with shared_buffers=1200000 at:\nhttp://khack.osdl.org/stp/276847/\nThe performance degraded. \n> \n> | sort_mem | 524288\n> \n> This is a bit high, IMHO, but might be ok given that DBT3 is not run\n> with many concurrent sessions (right?).\n> http://khack.osdl.org/stp/276912/results/plot/vmstat_swap.png shows\n> some swapping activity towards the end of the run which could be\n> caused by a too high sort_mem setting.\nRight, I run only 4 streams. 
Setting this parameter lower caused more\nreading/writing to the pgsql/tmp. I guess the database has to do it if\nit can not do sorting in memory. \n\nOn 4 Aug 2003 at 15:33, Manfred Koizar wrote:\n> \n> I could not get postgresql .conf so I will combine the comments.\nIt is under database monitor data: database parameters\n> \n> 1. Effective cache size, already mentioned\n> 2. Sort memory already mentioned.\n> 3. Was WAL put on different drive?\nThat run did not put WAL on different drive. I changed it this morning\nso that it is configurable. Also I changed the result page so that the\ntesters can tell from the result page.\n> 4. Can you try with autovacuum daemon and 7.4beta when it comes out..\nI'd be happy to run it. We would like to improve out Patch Life\nManagement(PLM) system so that it can accept PG patches and run\nperformance tests on those patches. Right now PLM only manages Linux\nKernel patches. I would like to ask the PostgreSQL community if this\nkind of tools is of interest.\n> 5. What was the file system? Ext2/Ext3/reiser/XFS?\n> \n> <Scratching head>\n> \nIt is Ext2. Yeah, it is not reported on the page.\n> Is there any comparison available for other databases.. Could be interesting to \n> see..:-)\n> \n> </Scratching head>\n> \n\nLet me know if you have any suggestions about how to improve the test\nkit (parameters, reported information, etc.), or how to make it more\nuseful to PG community.\n\nThanks,\n-- \nJenny Zhang\nOpen Source Development Lab Inc \n12725 SW Millikan Way\nSuite 400\nBeaverton, OR 97005\n(503)626-2455 ext 31\n\n", "msg_date": "04 Aug 2003 15:48:55 -0700", "msg_from": "Jenny Zhang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [osdldbt-general] Re: [PERFORM] OSDL Database Test Suite 3 is" } ]
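A side note on the effective_cache_size figure mentioned above, assuming the default 8 kB block size: the parameter is expressed in disk pages rather than bytes, so on a 4 GB machine that leaves roughly 3 GB to the OS buffer cache, the calculation behind 393216 is presumably something like

    3 GB / 8 kB per page = 3221225472 / 8192 = 393216 pages

which in postgresql.conf reads

    effective_cache_size = 393216
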
[ { "msg_contents": "Hi all!\nReally I don't know what happened with this query. I'm running PG 7.3.1\non solaris, vaccumed (full) every nigth.\nThe cardinality of each table was:\n \ncont_contenido: 97 rows\njuegos_config: 40 rows\ncont_publicacion: 446 rows\nnot huge tables...\n \nhowever, this query took a lot of time to run: Total runtime: 432478.44\nmsec\nI made a explain analyze, but really I don't undertand why...\n \nesdc=> explain analyze\nSELECT\n cont_contenido.id_contenido\n ,cont_contenido.pertenece_premium\n ,cont_contenido.Titulo_esp as v_sufix \n ,cont_contenido.url_contenido\n ,cont_contenido.tipo_acceso\n ,cont_contenido.id_sbc\n ,cont_contenido.cant_vistos\n ,cont_contenido.cant_votos \n ,cont_contenido.puntaje_total \n ,cont_contenido.id_contenido_padre \n ,juegos_config.imagen_tapa_especial \n ,juegos_config.info_general_esp as info_general \n ,juegos_config.ayuda \n ,juegos_config.tips_tricks_esp as tips_tricks \n ,juegos_config.mod_imagen_tapa_especial \n ,cont_publicacion.fecha_publicacion as fecha_publicacion \n ,cont_publicacion.generar_Vainilla \n FROM \n cont_contenido \n ,juegos_config \n ,cont_publicacion \nWHERE \n cont_contenido.id_instalacion = 2\n AND cont_contenido.id_sbc = 619\n AND cont_contenido.id_tipo = 2\n AND cont_contenido.id_instalacion = juegos_config.id_instalacion \n AND cont_contenido.id_contenido = juegos_config.id_contenido \n AND upper(cont_publicacion.generar_Vainilla) = 'S'\n AND cont_publicacion.id_instalacion = cont_contenido.id_instalacion \n AND cont_publicacion.id_contenido = cont_contenido.id_contenido \n AND cont_publicacion.fecha_publicacion = (SELECT\nmax(cp1.fecha_publicacion) \n FROM cont_publicacion cp1 \n WHERE cp1.id_instalacion =\ncont_publicacion.id_instalacion \n AND cp1.id_contenido = cont_publicacion.id_contenido\n\n AND cp1.generar_vainilla =\ncont_publicacion.generar_vainilla) \n ORDER BY cont_publicacion.fecha_publicacion desc \n LIMIT 10\n OFFSET 0\nesdc->;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n---------\n Limit (cost=8.72..8.73 rows=1 width=478) (actual\ntime=432473.69..432473.72 rows=8 loops=1)\n -> Sort (cost=8.72..8.73 rows=1 width=478) (actual\ntime=432473.67..432473.68 rows=8 loops=1)\n Sort Key: cont_publicacion.fecha_publicacion\n -> Merge Join (cost=8.69..8.71 rows=1 width=478) (actual\ntime=197393.80..432471.92 rows=8 loops=1)\n Merge Cond: ((\"outer\".id_instalacion =\n\"inner\".id_instalacion) AND (\"outer\".id_contenido =\n\"inner\".id_contenido))\n -> Nested Loop (cost=0.00..281713.36 rows=1 width=367)\n(actual time=7524.66..432454.11 rows=40 loops=1)\n Join Filter: ((\"inner\".id_contenido =\n\"outer\".id_contenido) AND (\"inner\".id_instalacion =\n\"outer\".id_instalacion))\n -> Index Scan using jue_conf_pk on juegos_config\n(cost=0.00..12.19 rows=40 width=332) (actual time=0.39..7.81 rows=40\nloops=1)\n -> Seq Scan on cont_publicacion\n(cost=0.00..7042.51 rows=1 width=35) (actual time=23.64..10807.83\nrows=96 loops=40)\n Filter: ((upper((generar_vainilla)::text) =\n'S'::text) AND (fecha_publicacion = (subplan)))\n SubPlan\n -> Aggregate (cost=15.79..15.79 rows=1\nwidth=8) (actual time=24.16..24.16 rows=1 loops=17800)\n -> Seq Scan on cont_publicacion cp1\n(cost=0.00..15.79 rows=1 width=8) (actual time=10.14..24.01 rows=7\nloops=17800)\n Filter: ((id_instalacion = $0)\nAND (id_contenido = $1) AND (generar_vainilla = $2))\n -> Sort (cost=8.69..8.70 rows=3 width=111) 
(actual\ntime=11.14..11.18 rows=8 loops=1)\n Sort Key: cont_contenido.id_instalacion,\ncont_contenido.id_contenido\n -> Seq Scan on cont_contenido (cost=0.00..8.66\nrows=3 width=111) (actual time=0.57..8.62 rows=8 loops=1)\n Filter: ((id_instalacion = 2::numeric) AND\n(id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n Total runtime: 432478.44 msec\n(19 rows)\n \nesdc=> \n \n \n\nIf I replace the subquery with a fixed date\n \n\"AND cont_publicacion.fecha_publicacion = '17/01/2003'::timestamp\"\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------\n Limit (cost=8.72..8.73 rows=1 width=478) (actual time=797.26..797.26\nrows=0 loops=1)\n -> Sort (cost=8.72..8.73 rows=1 width=478) (actual\ntime=797.25..797.25 rows=0 loops=1)\n Sort Key: cont_publicacion.fecha_publicacion\n -> Merge Join (cost=8.69..8.71 rows=1 width=478) (actual\ntime=796.45..796.45 rows=0 loops=1)\n Merge Cond: ((\"outer\".id_instalacion =\n\"inner\".id_instalacion) AND (\"outer\".id_contenido =\n\"inner\".id_contenido))\n -> Nested Loop (cost=0.00..644.29 rows=1 width=367)\n(actual time=796.44..796.44 rows=0 loops=1)\n Join Filter: ((\"inner\".id_contenido =\n\"outer\".id_contenido) AND (\"inner\".id_instalacion =\n\"outer\".id_instalacion))\n -> Index Scan using jue_conf_pk on juegos_config\n(cost=0.00..12.19 rows=40 width=332) (actual time=0.23..6.71 rows=40\nloops=1)\n -> Seq Scan on cont_publicacion (cost=0.00..15.79\nrows=1 width=35) (actual time=19.70..19.70 rows=0 loops=40)\n Filter: ((upper((generar_vainilla)::text) =\n'S'::text) AND (fecha_publicacion = '17/01/2003 00:00:00'::timestamp\nwithout time zone))\n -> Sort (cost=8.69..8.70 rows=3 width=111) (never\nexecuted)\n Sort Key: cont_contenido.id_instalacion,\ncont_contenido.id_contenido\n -> Seq Scan on cont_contenido (cost=0.00..8.66\nrows=3 width=111) (never executed)\n Filter: ((id_instalacion = 2::numeric) AND\n(id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n Total runtime: 798.79 msec\n \nrun very smooth.\n \nI have another query similar to this query (include more tables, but\nhave the same subquery) but I don't have any problems.\n \nSomebody can help me with this mess? Thanks in advance!!!\n \nFernando.-\n\nMensaje\n\n\n\n \nHi \nall!\nReally I don't know \nwhat happened with this query. 
Thanks in advance!!!\n \nFernando.-", "msg_date": "Fri, 1 Aug 2003 18:17:17 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "I can't wait too much: Total runtime 432478.44 msec" }, { "msg_contents": "From: \"\"Fernando Papa\"\" <[email protected]>\n\n> AND upper(cont_publicacion.generar_Vainilla) = 'S'\n\n\n> Filter: ((upper((generar_vainilla)::text) = 'S'::text) AND\n(fecha_publicacion = (subplan)))\n\nusing a functional index on this field should help\n\ncreate index idx_generar_vainilla_ci on cont_publicacion (\nupper(generar_Vainilla) )\n\n\n\nRegards\nGaetano Mendola\n\n\n", "msg_date": "Sat, 2 Aug 2003 12:36:12 +0200", "msg_from": "\"Mendola Gaetano\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" }, { "msg_contents": "On Fri, 1 Aug 2003 18:17:17 -0300, \"Fernando Papa\" <[email protected]>\nwrote:\n> AND cont_publicacion.fecha_publicacion = (SELECT\n>max(cp1.fecha_publicacion) \n> FROM cont_publicacion cp1 \n> WHERE cp1.id_instalacion =\n>cont_publicacion.id_instalacion \n> AND cp1.id_contenido = cont_publicacion.id_contenido\n>\n> AND cp1.generar_vainilla =\n>cont_publicacion.generar_vainilla) \n\nIf certain uniqueness conditions are met, the Postgres specific\nDISTINCT ON clause could help totally eliminating the subselect:\n\nSELECT DISTINCT ON (\n cp.id_instalacion,\n cp.id_contenido,\n cp.generar_vainilla,\n cp.fecha_publicacion\n )\n cc.id_contenido\n ,cc.pertenece_premium\n ,cc.Titulo_esp as v_sufix \n ,cc.url_contenido\n ,cc.tipo_acceso\n ,cc.id_sbc\n ,cc.cant_vistos\n ,cc.cant_votos \n ,cc.puntaje_total \n ,cc.id_contenido_padre \n ,jc.imagen_tapa_especial \n ,jc.info_general_esp as info_general \n ,jc.ayuda \n ,jc.tips_tricks_esp as tips_tricks \n ,jc.mod_imagen_tapa_especial \n ,cp.fecha_publicacion as fecha_publicacion \n ,cp.generar_Vainilla \n FROM \n cont_contenido cc\n ,juegos_config jc\n ,cont_publicacion cp\nWHERE \n cc.id_instalacion = 2\n AND cc.id_sbc = 619\n AND cc.id_tipo = 2\n AND cc.id_instalacion = jc.id_instalacion \n AND cc.id_contenido = jc.id_contenido \n AND upper(cp.generar_Vainilla) = 'S'\n AND cp.id_instalacion = cc.id_instalacion \n AND cp.id_contenido = cc.id_contenido \n ORDER BY \n cp.id_instalacion,\n cp.id_contenido,\n cp.generar_vainilla,\n cp.fecha_publicacion desc\n\nHowever, this doesn't get the result in the original order, so you\nhave to wrap another SELECT ... ORDER BY ... LIMIT around it. 
Or try\nto move the subselect into the FROM clause:\n\nSELECT\n cc.id_contenido\n ,cc.pertenece_premium\n ,cc.Titulo_esp as v_sufix \n ,cc.url_contenido\n ,cc.tipo_acceso\n ,cc.id_sbc\n ,cc.cant_vistos\n ,cc.cant_votos \n ,cc.puntaje_total \n ,cc.id_contenido_padre \n ,jc.imagen_tapa_especial \n ,jc.info_general_esp as info_general \n ,jc.ayuda \n ,jc.tips_tricks_esp as tips_tricks \n ,jc.mod_imagen_tapa_especial \n ,cp.fecha_publicacion as fecha_publicacion \n ,cp.generar_Vainilla \n FROM \n cont_contenido cc\n ,juegos_config jc\n ,(SELECT DISTINCT ON (\n id_instalacion,\n id_contenido,\n generar_vainilla,\n fecha_publicacion\n )\n *\n FROM cont_publicacion\n ORDER BY\n id_instalacion,\n id_contenido,\n generar_vainilla,\n fecha_publicacion desc\n ) AS cp\nWHERE \n cc.id_instalacion = 2\n AND cc.id_sbc = 619\n AND cc.id_tipo = 2\n AND cc.id_instalacion = jc.id_instalacion \n AND cc.id_contenido = jc.id_contenido \n AND upper(cp.generar_Vainilla) = 'S'\n AND cp.id_instalacion = cc.id_instalacion \n AND cp.id_contenido = cc.id_contenido \n ORDER BY \n cp.fecha_publicacion desc\n LIMIT 10\n OFFSET 0\n \n[completely untested]\n\nServus\n Manfred\n", "msg_date": "Mon, 04 Aug 2003 16:10:18 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" }, { "msg_contents": "On Mon, 04 Aug 2003 16:10:18 +0200, I wrote:\n>SELECT DISTINCT ON (\n> cp.id_instalacion,\n> cp.id_contenido,\n> cp.generar_vainilla,\n> cp.fecha_publicacion\n> )\n\nCut'n'paste error! fecha_publicacion should not be in the DISTINCT ON\nlist. The same error is in my second suggestion (FROM (subselect)).\n\nServus\n Manfred\n", "msg_date": "Tue, 05 Aug 2003 01:36:27 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" } ]
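Putting the two messages above together (fecha_publicacion taken out of the DISTINCT ON list, and an outer query wrapped around the whole thing to restore the descending date order), the first suggestion could end up looking roughly like this; select list shortened here, untested sketch:

    SELECT *
      FROM (SELECT DISTINCT ON (cp.id_instalacion, cp.id_contenido, cp.generar_vainilla)
                   cc.id_contenido
                  ,cc.Titulo_esp as v_sufix
                  ,cp.fecha_publicacion
                  ,cp.generar_Vainilla
              FROM cont_contenido cc
                  ,juegos_config jc
                  ,cont_publicacion cp
             WHERE cc.id_instalacion = 2
               AND cc.id_sbc = 619
               AND cc.id_tipo = 2
               AND cc.id_instalacion = jc.id_instalacion
               AND cc.id_contenido = jc.id_contenido
               AND upper(cp.generar_Vainilla) = 'S'
               AND cp.id_instalacion = cc.id_instalacion
               AND cp.id_contenido = cc.id_contenido
             ORDER BY cp.id_instalacion, cp.id_contenido, cp.generar_vainilla,
                      cp.fecha_publicacion desc
           ) AS latest
     ORDER BY fecha_publicacion desc
     LIMIT 10
     OFFSET 0;
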
[ { "msg_contents": "\nI'd point at the following as being a sterling candidate for being a\ncause of this being slow...\n\n AND cont_publicacion.fecha_publicacion = (SELECT max(cp1.fecha_publicacion) \n                  FROM cont_publicacion cp1  \n                  WHERE cp1.id_instalacion = cont_publicacion.id_instalacion \n                    AND cp1.id_contenido = cont_publicacion.id_contenido  \n                    AND cp1.generar_vainilla = cont_publicacion.generar_vainilla)     \n\nMay I suggest changing it to:\n\n AND cont_publicacion.fecha_publicacion = (SELECT cp1.fecha_publicacion\n                  FROM cont_publicacion cp1  \n                  WHERE cp1.id_instalacion = cont_publicacion.id_instalacion \n                    AND cp1.id_contenido = cont_publicacion.id_contenido  \n                    AND cp1.generar_vainilla = cont_publicacion.generar_vainilla\n ORDER BY fecha_publicacion LIMIT 1)\n\nThat would get rid of the aggregate that's sitting deep in the query.\n-- \nselect 'cbbrowne' || '@' || 'libertyrms.info';\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Fri, 01 Aug 2003 17:27:08 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" }, { "msg_contents": "Fernando,\n\n> AND cont_publicacion.fecha_publicacion = (SELECT max(cp1.fecha_publicacion) \n> FROM cont_publicacion cp1 \n> WHERE cp1.id_instalacion = cont_publicacion.id_instalacion \n> AND cp1.id_contenido = cont_publicacion.id_contenido \n> AND cp1.generar_vainilla = \ncont_publicacion.generar_vainilla) \n\nOr event changing it to:\n\nAND EXISTS (SELECT max(cp1.fecha_publicacion)\n\tFROM cont_publicacion cp1\n\tWHERE cp1.id_instalacion = cont_publicacion.id_instalacion \n\tAND cp1.id_contenido = cont_publicacion.id_contenido \n\tAND cp1.generar_vainilla = cont_publicacion.generar_vainilla\n\tHAVING max(cp1.fecha_publicacion) = cont_publicacion.fecha_publicacion)\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 1 Aug 2003 14:31:52 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" } ]
[ { "msg_contents": "\n-- \n+-----------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"I'm not a vegetarian because I love animals, I'm a vegetarian |\n| because I hate vegetables!\" |\n| unknown |\n+-----------------------------------------------------------------+\n\n\n", "msg_date": "03 Aug 2003 02:09:51 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": true, "msg_subject": "testing" } ]
[ { "msg_contents": "\nSorry Chris... a little slower...\n\nesdc=> EXPLAIN ANALYZE \nSELECT\n cont_contenido.id_contenido\n ,cont_contenido.pertenece_premium\n ,cont_contenido.Titulo_esp as v_sufix \n ,cont_contenido.url_contenido\n ,cont_contenido.tipo_acceso\n ,cont_contenido.id_sbc\n ,cont_contenido.cant_vistos\n ,cont_contenido.cant_votos \n ,cont_contenido.puntaje_total \n ,cont_contenido.id_contenido_padre \n ,juegos_config.imagen_tapa_especial \n ,juegos_config.info_general_esp as info_general \n ,juegos_config.ayuda \n ,juegos_config.tips_tricks_esp as tips_tricks \n ,juegos_config.mod_imagen_tapa_especial \n ,cont_publicacion.fecha_publicacion as fecha_publicacion \n ,cont_publicacion.generar_Vainilla \nFROM \n cont_contenido \n ,juegos_config \n ,cont_publicacion \nWHERE \n cont_contenido.id_instalacion = 2\n AND cont_contenido.id_sbc = 619\n AND cont_contenido.id_tipo = 2\n AND cont_contenido.id_instalacion = juegos_config.id_instalacion \n AND cont_contenido.id_contenido = juegos_config.id_contenido \n AND upper(cont_publicacion.generar_Vainilla) = 'S'\n AND cont_publicacion.id_instalacion = cont_contenido.id_instalacion \n AND cont_publicacion.id_contenido = cont_contenido.id_contenido \n AND cont_publicacion.fecha_publicacion = (SELECT cp1.fecha_publicacion\n FROM cont_publicacion cp1\n WHERE cp1.id_instalacion = cont_publicacion.id_instalacion\n AND cp1.id_contenido = cont_publicacion.id_contenido\n AND cp1.generar_vainilla = cont_publicacion.generar_vainilla\n ORDER BY fecha_publicacion LIMIT 1)\nORDER BY cont_publicacion.fecha_publicacion desc \n LIMIT 10\n OFFSET 0\n ;\n\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=9.75..9.76 rows=1 width=479) (actual time=465085.25..465085.27 rows=8 loops=1)\n -> Sort (cost=9.75..9.76 rows=1 width=479) (actual time=465085.23..465085.24 rows=8 loops=1)\n Sort Key: cont_publicacion.fecha_publicacion\n -> Merge Join (cost=9.73..9.74 rows=1 width=479) (actual time=210743.83..465083.31 rows=8 loops=1)\n Merge Cond: ((\"outer\".id_instalacion = \"inner\".id_instalacion) AND (\"outer\".id_contenido = \"inner\".id_contenido))\n -> Nested Loop (cost=0.00..284756.79 rows=1 width=367) (actual time=8319.87..464981.68 rows=40 loops=1)\n Join Filter: ((\"inner\".id_contenido = \"outer\".id_contenido) AND (\"inner\".id_instalacion = \"outer\".id_instalacion))\n -> Index Scan using jue_conf_pk on juegos_config (cost=0.00..12.19 rows=40 width=332) (actual time=52.93..142.31 rows=40 loops=1)\n -> Seq Scan on cont_publicacion (cost=0.00..7118.60 rows=1 width=35) (actual time=51.79..11617.12 rows=97 loops=40)\n Filter: ((upper((generar_vainilla)::text) = 'S'::text) AND (fecha_publicacion = (subplan)))\n SubPlan\n -> Limit (cost=15.85..15.85 rows=1 width=8) (actual time=25.86..25.86 rows=1 loops=17880)\n -> Sort (cost=15.85..15.86 rows=1 width=8) (actual time=25.82..25.82 rows=2 loops=17880)\n Sort Key: fecha_publicacion\n -> Seq Scan on cont_publicacion cp1 (cost=0.00..15.84 rows=1 width=8) (actual time=10.68..25.32 rows=7 loops=17880)\n Filter: ((id_instalacion = $0) AND (id_contenido = $1) AND (generar_vainilla = $2))\n -> Sort (cost=9.73..9.74 rows=3 width=112) (actual time=94.91..94.93 rows=8 loops=1)\n Sort Key: cont_contenido.id_instalacion, cont_contenido.id_contenido\n -> Seq Scan on cont_contenido (cost=0.00..9.70 rows=3 width=112) (actual time=21.70..92.96 rows=8 loops=1)\n Filter: ((id_instalacion = 
2::numeric) AND (id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n Total runtime: 465088.66 msec\n(21 rows)\n\n\n\n-----Mensaje original-----\nDe: Christopher Browne [mailto:[email protected]] \nEnviado el: viernes, 01 de agosto de 2003 18:27\nPara: Fernando Papa\nCC: [email protected]\nAsunto: Re: [PERFORM] I can't wait too much: Total runtime 432478.44 msec\n\n\n\nI'd point at the following as being a sterling candidate for being a cause of this being slow...\n\n AND cont_publicacion.fecha_publicacion = (SELECT max(cp1.fecha_publicacion) \n                  FROM cont_publicacion cp1  \n                  WHERE cp1.id_instalacion = cont_publicacion.id_instalacion \n                    AND cp1.id_contenido = cont_publicacion.id_contenido  \n                    AND cp1.generar_vainilla = cont_publicacion.generar_vainilla)     \n\nMay I suggest changing it to:\n\n AND cont_publicacion.fecha_publicacion = (SELECT cp1.fecha_publicacion\n                  FROM cont_publicacion cp1  \n                  WHERE cp1.id_instalacion = cont_publicacion.id_instalacion \n                    AND cp1.id_contenido = cont_publicacion.id_contenido  \n                    AND cp1.generar_vainilla = cont_publicacion.generar_vainilla\n ORDER BY fecha_publicacion LIMIT 1)\n\nThat would get rid of the aggregate that's sitting deep in the query.\n-- \nselect 'cbbrowne' || '@' || 'libertyrms.info'; <http://dev6.int.libertyrms.com/> Christopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Mon, 4 Aug 2003 11:01:41 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" } ]
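Two details stand out in that plan. The rewritten subquery still runs once per outer row (loops=17880 on the subplan), so the overall shape of the query has not changed. Also, to stay equivalent to max(), the LIMIT 1 form presumably needs a descending sort, something like:

    AND cont_publicacion.fecha_publicacion = (SELECT cp1.fecha_publicacion
                      FROM cont_publicacion cp1
                      WHERE cp1.id_instalacion = cont_publicacion.id_instalacion
                        AND cp1.id_contenido = cont_publicacion.id_contenido
                        AND cp1.generar_vainilla = cont_publicacion.generar_vainilla
                      ORDER BY cp1.fecha_publicacion DESC LIMIT 1)
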
[ { "msg_contents": "\nHi Josh... a little worse time:\n\nEXPLAIN ANALYZE \nSELECT\n cont_contenido.id_contenido\n ,cont_contenido.pertenece_premium\n ,cont_contenido.Titulo_esp as v_sufix \n ,cont_contenido.url_contenido\n ,cont_contenido.tipo_acceso\n ,cont_contenido.id_sbc\n ,cont_contenido.cant_vistos\n ,cont_contenido.cant_votos \n ,cont_contenido.puntaje_total \n ,cont_contenido.id_contenido_padre \n ,juegos_config.imagen_tapa_especial \n ,juegos_config.info_general_esp as info_general \n ,juegos_config.ayuda \n ,juegos_config.tips_tricks_esp as tips_tricks \n ,juegos_config.mod_imagen_tapa_especial \n ,cont_publicacion.fecha_publicacion as fecha_publicacion \n ,cont_publicacion.generar_Vainilla \n FROM \n cont_contenido \n ,juegos_config \n,cont_publicacion \n WHERE \n cont_contenido.id_instalacion = 2\n AND cont_contenido.id_sbc = 619\n AND cont_contenido.id_tipo = 2\n AND cont_contenido.id_instalacion = juegos_config.id_instalacion \n AND cont_contenido.id_contenido = juegos_config.id_contenido \n AND upper(cont_publicacion.generar_Vainilla) = 'S'\n AND cont_publicacion.id_instalacion = cont_contenido.id_instalacion \n AND cont_publicacion.id_contenido = cont_contenido.id_contenido \n AND EXISTS (SELECT max(cp1.fecha_publicacion)\n FROM cont_publicacion cp1\n WHERE cp1.id_instalacion = cont_publicacion.id_instalacion\n AND cp1.id_contenido = cont_publicacion.id_contenido\n AND cp1.generar_vainilla = cont_publicacion.generar_vainilla\n HAVING max(cp1.fecha_publicacion) =\ncont_publicacion.fecha_publicacion)\nORDER BY cont_publicacion.fecha_publicacion desc \n LIMIT 10\nOFFSET 0\n;\n\n\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n---------\n Limit (cost=9.75..9.76 rows=1 width=479) (actual\ntime=449760.88..449760.91 rows=8 loops=1)\n -> Sort (cost=9.75..9.76 rows=1 width=479) (actual\ntime=449760.87..449760.88 rows=8 loops=1)\n Sort Key: cont_publicacion.fecha_publicacion\n -> Merge Join (cost=9.73..9.74 rows=1 width=479) (actual\ntime=202257.20..449759.00 rows=8 loops=1)\n Merge Cond: ((\"outer\".id_instalacion =\n\"inner\".id_instalacion) AND (\"outer\".id_contenido =\n\"inner\".id_contenido))\n -> Nested Loop (cost=0.00..284556.86 rows=1 width=367)\n(actual time=7794.28..449741.85 rows=40 loops=1)\n Join Filter: ((\"inner\".id_contenido =\n\"outer\".id_contenido) AND (\"inner\".id_instalacion =\n\"outer\".id_instalacion))\n -> Index Scan using jue_conf_pk on juegos_config\n(cost=0.00..12.19 rows=40 width=332) (actual time=0.43..8.12 rows=40\nloops=1)\n -> Seq Scan on cont_publicacion\n(cost=0.00..7113.60 rows=1 width=35) (actual time=24.10..11239.67\nrows=97 loops=40)\n Filter: ((upper((generar_vainilla)::text) =\n'S'::text) AND (subplan))\n SubPlan\n -> Aggregate (cost=15.85..15.85 rows=1\nwidth=8) (actual time=25.03..25.03 rows=0 loops=17880)\n Filter: (max(fecha_publicacion) = $3)\n -> Seq Scan on cont_publicacion cp1\n(cost=0.00..15.84 rows=1 width=8) (actual time=10.51..24.85 rows=7\nloops=17880)\n Filter: ((id_instalacion = $0)\nAND (id_contenido = $1) AND (generar_vainilla = $2))\n -> Sort (cost=9.73..9.74 rows=3 width=112) (actual\ntime=10.49..10.52 rows=8 loops=1)\n Sort Key: cont_contenido.id_instalacion,\ncont_contenido.id_contenido\n -> Seq Scan on cont_contenido (cost=0.00..9.70\nrows=3 width=112) (actual time=0.59..8.07 rows=8 loops=1)\n Filter: ((id_instalacion = 2::numeric) AND\n(id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n Total 
runtime: 449765.69 msec\n(20 rows)\n\n\n\n-----Mensaje original-----\nDe: Josh Berkus [mailto:[email protected]] \nEnviado el: viernes, 01 de agosto de 2003 18:32\nPara: Christopher Browne; Fernando Papa\nCC: [email protected]\nAsunto: Re: [PERFORM] I can't wait too much: Total runtime 432478.44\nmsec\n\n\nFernando,\n\n> AND cont_publicacion.fecha_publicacion = (SELECT\nmax(cp1.fecha_publicacion) \n> FROM cont_publicacion cp1 \n> WHERE cp1.id_instalacion =\ncont_publicacion.id_instalacion \n> AND cp1.id_contenido =\ncont_publicacion.id_contenido \n> AND cp1.generar_vainilla =\ncont_publicacion.generar_vainilla) \n\nOr event changing it to:\n\nAND EXISTS (SELECT max(cp1.fecha_publicacion)\n\tFROM cont_publicacion cp1\n\tWHERE cp1.id_instalacion = cont_publicacion.id_instalacion \n\tAND cp1.id_contenido = cont_publicacion.id_contenido \n\tAND cp1.generar_vainilla = cont_publicacion.generar_vainilla\n\tHAVING max(cp1.fecha_publicacion) =\ncont_publicacion.fecha_publicacion)\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 4 Aug 2003 11:13:43 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" } ]
[ { "msg_contents": "\nI create the index, but doesn't help too much:\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=9.75..9.76 rows=1 width=479) (actual time=486421.35..486421.38 rows=8 loops=1)\n -> Sort (cost=9.75..9.76 rows=1 width=479) (actual time=486421.33..486421.34 rows=8 loops=1)\n Sort Key: cont_publicacion.fecha_publicacion\n -> Merge Join (cost=9.73..9.74 rows=1 width=479) (actual time=220253.76..486420.35 rows=8 loops=1)\n Merge Cond: ((\"outer\".id_instalacion = \"inner\".id_instalacion) AND (\"outer\".id_contenido = \"inner\".id_contenido))\n -> Nested Loop (cost=0.00..1828.35 rows=1 width=367) (actual time=8347.78..486405.02 rows=40 loops=1)\n Join Filter: ((\"inner\".id_contenido = \"outer\".id_contenido) AND (\"inner\".id_instalacion = \"outer\".id_instalacion))\n -> Index Scan using jue_conf_pk on juegos_config (cost=0.00..12.19 rows=40 width=332) (actual time=0.23..6.73 rows=40 loops=1)\n -> Index Scan using idx_generar_vainilla_ci on cont_publicacion (cost=0.00..45.39 rows=1 width=35) (actual time=56.01..12156.48 rows=97 loops=40)\n Index Cond: (upper((generar_vainilla)::text) = 'S'::text)\n Filter: (fecha_publicacion = (subplan))\n SubPlan\n -> Aggregate (cost=15.84..15.84 rows=1 width=8) (actual time=27.03..27.03 rows=1 loops=17880)\n -> Seq Scan on cont_publicacion cp1 (cost=0.00..15.84 rows=1 width=8) (actual time=11.21..26.86 rows=7 loops=17880)\n Filter: ((id_instalacion = $0) AND (id_contenido = $1) AND (generar_vainilla = $2))\n -> Sort (cost=9.73..9.74 rows=3 width=112) (actual time=9.28..9.32 rows=8 loops=1)\n Sort Key: cont_contenido.id_instalacion, cont_contenido.id_contenido\n -> Seq Scan on cont_contenido (cost=0.00..9.70 rows=3 width=112) (actual time=0.47..7.48 rows=8 loops=1)\n Filter: ((id_instalacion = 2::numeric) AND (id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n Total runtime: 486445.19 msec\n(20 rows)\n\n\n-----Mensaje original-----\nDe: Mendola Gaetano [mailto:[email protected]] \nEnviado el: sábado, 02 de agosto de 2003 7:36\nPara: [email protected]\nCC: Fernando Papa\nAsunto: Re: I can't wait too much: Total runtime 432478.44 msec\n\n\nFrom: \"\"Fernando Papa\"\" <[email protected]>\n\n> AND upper(cont_publicacion.generar_Vainilla) = 'S'\n\n\n> Filter: ((upper((generar_vainilla)::text) = 'S'::text) AND\n(fecha_publicacion = (subplan)))\n\nusing a functional index on this field should help\n\ncreate index idx_generar_vainilla_ci on cont_publicacion (\nupper(generar_Vainilla) )\n\n\n\nRegards\nGaetano Mendola\n\n\n", "msg_date": "Mon, 4 Aug 2003 11:51:30 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" } ]
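Judging from that plan, the new index is used for the outer scan, but the time still goes into the correlated subquery: a sequential scan over cont_publicacion repeated 17880 times. An index covering the columns that subquery filters on might be worth trying as well, for example (the index name here is just illustrative):

    CREATE INDEX idx_cont_pub_inst_cont_vainilla
        ON cont_publicacion (id_instalacion, id_contenido, generar_vainilla);
    ANALYZE cont_publicacion;

This is only a guess from the EXPLAIN output, and with only ~450 rows in cont_publicacion the planner may keep choosing a seq scan for the inner query anyway.
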
[ { "msg_contents": "Hi Volker!!! I think you're right. Look at times:\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=23.37..23.37 rows=1 width=487) (actual time=2245.61..2245.61 rows=0 loops=1)\n -> Sort (cost=23.37..23.37 rows=1 width=487) (actual time=2245.60..2245.60 rows=0 loops=1)\n Sort Key: cont_publicacion.fecha_publicacion\n -> Nested Loop (cost=23.33..23.36 rows=1 width=487) (actual time=2244.10..2244.10 rows=0 loops=1)\n Join Filter: (\"outer\".fecha_publicacion = \"inner\".max_pub)\n -> Merge Join (cost=9.73..9.74 rows=1 width=479) (actual time=918.73..1988.43 rows=16 loops=1)\n Merge Cond: ((\"outer\".id_instalacion = \"inner\".id_instalacion) AND (\"outer\".id_contenido = \"inner\".id_contenido))\n -> Nested Loop (cost=0.00..409.35 rows=1 width=367) (actual time=35.44..1967.20 rows=82 loops=1)\n Join Filter: ((\"inner\".id_contenido = \"outer\".id_contenido) AND (\"inner\".id_instalacion = \"outer\".id_instalacion))\n -> Index Scan using jue_conf_pk on juegos_config (cost=0.00..12.19 rows=40 width=332) (actual time=0.42..6.73 rows=40 loops=1)\n -> Index Scan using idx_generar_vainilla_ci on cont_publicacion (cost=0.00..9.90 rows=2 width=35) (actual time=0.20..35.19 rows=447 loops=40)\n Index Cond: (upper((generar_vainilla)::text) = 'S'::text)\n -> Sort (cost=9.73..9.74 rows=3 width=112) (actual time=10.42..10.48 rows=15 loops=1)\n Sort Key: cont_contenido.id_instalacion, cont_contenido.id_contenido\n -> Seq Scan on cont_contenido (cost=0.00..9.70 rows=3 width=112) (actual time=0.57..8.11 rows=8 loops=1)\n Filter: ((id_instalacion = 2::numeric) AND (id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n -> Subquery Scan a (cost=13.60..13.60 rows=1 width=8) (actual time=15.89..15.90 rows=1 loops=16)\n -> Aggregate (cost=13.60..13.60 rows=1 width=8) (actual time=15.87..15.88 rows=1 loops=16)\n -> Seq Scan on cont_publicacion cp1 (cost=0.00..12.48 rows=448 width=8) (actual time=0.05..11.62 rows=448 loops=16)\n Total runtime: 2250.92 msec\n(20 rows)\nThe problem was the subquery, no doubt.\n \n\n\t-----Mensaje original-----\n\tDe: Volker Helm [mailto:[email protected]] \n\tEnviado el: lunes, 04 de agosto de 2003 11:45\n\tPara: Fernando Papa\n\tAsunto: AW: [PERFORM] I can't wait too much: Total runtime 432478.44 msec\n\t\n\t\n\tHi,\n\t \n\tjust use the subquery as inline-View an join the tables:\n\t \n\tSELECT\n\t cont_contenido.id_contenido\n\t ,cont_contenido.pertenece_premium\n\t ,cont_contenido.Titulo_esp as v_sufix \n\t ,cont_contenido.url_contenido\n\t ,cont_contenido.tipo_acceso\n\t ,cont_contenido.id_sbc\n\t ,cont_contenido.cant_vistos\n\t ,cont_contenido.cant_votos \n\t ,cont_contenido.puntaje_total \n\t ,cont_contenido.id_contenido_padre \n\t ,juegos_config.imagen_tapa_especial \n\t ,juegos_config.info_general_esp as info_general \n\t ,juegos_config.ayuda \n\t ,juegos_config.tips_tricks_esp as tips_tricks \n\t ,juegos_config.mod_imagen_tapa_especial \n\t ,cont_publicacion.fecha_publicacion as fecha_publicacion \n\t ,cont_publicacion.generar_Vainilla \n\t FROM \n\t cont_contenido \n\t ,juegos_config \n\t ,cont_publicacion \n\t ,(SELECT max(cp1.fecha_publicacion) as max_pub --change here\n\t FROM cont_publicacion cp1) a --change here\n\tWHERE \n\t cont_contenido.id_instalacion = 2\n\t AND cont_contenido.id_sbc = 619\n\t AND cont_contenido.id_tipo = 2\n\t AND cont_contenido.id_instalacion = juegos_config.id_instalacion \n\t 
AND cont_contenido.id_contenido = juegos_config.id_contenido \n\t AND upper(cont_publicacion.generar_Vainilla) = 'S'\n\t AND cont_publicacion.id_instalacion = cont_contenido.id_instalacion \n\t AND cont_publicacion.id_contenido = cont_contenido.id_contenido \n\t AND cont_publicacion.fecha_publicacion = a.max_pub -- change here \n\t ORDER BY cont_publicacion.fecha_publicacion desc\n\t \n\thope it helps,\n\t \n\tVolker Helm\n\n\t\t-----Ursprüngliche Nachricht-----\n\t\tVon: [email protected] [mailto:[email protected]]Im Auftrag von Fernando Papa\n\t\tGesendet: Freitag, 1. August 2003 23:17\n\t\tAn: [email protected]\n\t\tBetreff: [PERFORM] I can't wait too much: Total runtime 432478.44 msec\n\t\t\n\t\t\n\t\t \n\t\tHi all!\n\t\tReally I don't know what happened with this query. I'm running PG 7.3.1 on solaris, vaccumed (full) every nigth.\n\t\tThe cardinality of each table was:\n\t\t \n\t\tcont_contenido: 97 rows\n\t\tjuegos_config: 40 rows\n\t\tcont_publicacion: 446 rows\n\t\tnot huge tables...\n\t\t \n\t\thowever, this query took a lot of time to run: Total runtime: 432478.44 msec\n\t\tI made a explain analyze, but really I don't undertand why...\n\t\t \n\t\tesdc=> explain analyze\n\t\tSELECT\n\t\t cont_contenido.id_contenido\n\t\t ,cont_contenido.pertenece_premium\n\t\t ,cont_contenido.Titulo_esp as v_sufix \n\t\t ,cont_contenido.url_contenido\n\t\t ,cont_contenido.tipo_acceso\n\t\t ,cont_contenido.id_sbc\n\t\t ,cont_contenido.cant_vistos\n\t\t ,cont_contenido.cant_votos \n\t\t ,cont_contenido.puntaje_total \n\t\t ,cont_contenido.id_contenido_padre \n\t\t ,juegos_config.imagen_tapa_especial \n\t\t ,juegos_config.info_general_esp as info_general \n\t\t ,juegos_config.ayuda \n\t\t ,juegos_config.tips_tricks_esp as tips_tricks \n\t\t ,juegos_config.mod_imagen_tapa_especial \n\t\t ,cont_publicacion.fecha_publicacion as fecha_publicacion \n\t\t ,cont_publicacion.generar_Vainilla \n\t\t FROM \n\t\t cont_contenido \n\t\t ,juegos_config \n\t\t ,cont_publicacion \n\t\tWHERE \n\t\t cont_contenido.id_instalacion = 2\n\t\t AND cont_contenido.id_sbc = 619\n\t\t AND cont_contenido.id_tipo = 2\n\t\t AND cont_contenido.id_instalacion = juegos_config.id_instalacion \n\t\t AND cont_contenido.id_contenido = juegos_config.id_contenido \n\t\t AND upper(cont_publicacion.generar_Vainilla) = 'S'\n\t\t AND cont_publicacion.id_instalacion = cont_contenido.id_instalacion \n\t\t AND cont_publicacion.id_contenido = cont_contenido.id_contenido \n\t\t AND cont_publicacion.fecha_publicacion = (SELECT max(cp1.fecha_publicacion) \n\t\t FROM cont_publicacion cp1 \n\t\t WHERE cp1.id_instalacion = cont_publicacion.id_instalacion \n\t\t AND cp1.id_contenido = cont_publicacion.id_contenido \n\t\t AND cp1.generar_vainilla = cont_publicacion.generar_vainilla) \n\t\t ORDER BY cont_publicacion.fecha_publicacion desc \n\t\t LIMIT 10\n\t\t OFFSET 0\n\t\tesdc->;\n\t\t QUERY PLAN \n\t\t---------------------------------------------------------------------------------------------------------------------------------------------------------\n\t\t Limit (cost=8.72..8.73 rows=1 width=478) (actual time=432473.69..432473.72 rows=8 loops=1)\n\t\t -> Sort (cost=8.72..8.73 rows=1 width=478) (actual time=432473.67..432473.68 rows=8 loops=1)\n\t\t Sort Key: cont_publicacion.fecha_publicacion\n\t\t -> Merge Join (cost=8.69..8.71 rows=1 width=478) (actual time=197393.80..432471.92 rows=8 loops=1)\n\t\t Merge Cond: ((\"outer\".id_instalacion = \"inner\".id_instalacion) AND (\"outer\".id_contenido = \"inner\".id_contenido))\n\t\t -> Nested Loop 
(cost=0.00..281713.36 rows=1 width=367) (actual time=7524.66..432454.11 rows=40 loops=1)\n\t\t Join Filter: ((\"inner\".id_contenido = \"outer\".id_contenido) AND (\"inner\".id_instalacion = \"outer\".id_instalacion))\n\t\t -> Index Scan using jue_conf_pk on juegos_config (cost=0.00..12.19 rows=40 width=332) (actual time=0.39..7.81 rows=40 loops=1)\n\t\t -> Seq Scan on cont_publicacion (cost=0.00..7042.51 rows=1 width=35) (actual time=23.64..10807.83 rows=96 loops=40)\n\t\t Filter: ((upper((generar_vainilla)::text) = 'S'::text) AND (fecha_publicacion = (subplan)))\n\t\t SubPlan\n\t\t -> Aggregate (cost=15.79..15.79 rows=1 width=8) (actual time=24.16..24.16 rows=1 loops=17800)\n\t\t -> Seq Scan on cont_publicacion cp1 (cost=0.00..15.79 rows=1 width=8) (actual time=10.14..24.01 rows=7 loops=17800)\n\t\t Filter: ((id_instalacion = $0) AND (id_contenido = $1) AND (generar_vainilla = $2))\n\t\t -> Sort (cost=8.69..8.70 rows=3 width=111) (actual time=11.14..11.18 rows=8 loops=1)\n\t\t Sort Key: cont_contenido.id_instalacion, cont_contenido.id_contenido\n\t\t -> Seq Scan on cont_contenido (cost=0.00..8.66 rows=3 width=111) (actual time=0.57..8.62 rows=8 loops=1)\n\t\t Filter: ((id_instalacion = 2::numeric) AND (id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n\t\t Total runtime: 432478.44 msec\n\t\t(19 rows)\n\t\t \n\t\tesdc=> \n\t\t \n\t\t \n\t\t\n\t\t\n\t\tIf I replace the subquery with a fixed date\n\t\t \n\t\t\"AND cont_publicacion.fecha_publicacion = '17/01/2003'::timestamp\"\n\t\t\n\t\t QUERY PLAN \n\t\t----------------------------------------------------------------------------------------------------------------------------------------------------------------\n\t\t Limit (cost=8.72..8.73 rows=1 width=478) (actual time=797.26..797.26 rows=0 loops=1)\n\t\t -> Sort (cost=8.72..8.73 rows=1 width=478) (actual time=797.25..797.25 rows=0 loops=1)\n\t\t Sort Key: cont_publicacion.fecha_publicacion\n\t\t -> Merge Join (cost=8.69..8.71 rows=1 width=478) (actual time=796.45..796.45 rows=0 loops=1)\n\t\t Merge Cond: ((\"outer\".id_instalacion = \"inner\".id_instalacion) AND (\"outer\".id_contenido = \"inner\".id_contenido))\n\t\t -> Nested Loop (cost=0.00..644.29 rows=1 width=367) (actual time=796.44..796.44 rows=0 loops=1)\n\t\t Join Filter: ((\"inner\".id_contenido = \"outer\".id_contenido) AND (\"inner\".id_instalacion = \"outer\".id_instalacion))\n\t\t -> Index Scan using jue_conf_pk on juegos_config (cost=0.00..12.19 rows=40 width=332) (actual time=0.23..6.71 rows=40 loops=1)\n\t\t -> Seq Scan on cont_publicacion (cost=0.00..15.79 rows=1 width=35) (actual time=19.70..19.70 rows=0 loops=40)\n\t\t Filter: ((upper((generar_vainilla)::text) = 'S'::text) AND (fecha_publicacion = '17/01/2003 00:00:00'::timestamp without time zone))\n\t\t -> Sort (cost=8.69..8.70 rows=3 width=111) (never executed)\n\t\t Sort Key: cont_contenido.id_instalacion, cont_contenido.id_contenido\n\t\t -> Seq Scan on cont_contenido (cost=0.00..8.66 rows=3 width=111) (never executed)\n\t\t Filter: ((id_instalacion = 2::numeric) AND (id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n\t\t Total runtime: 798.79 msec\n\t\t \n\t\trun very smooth.\n\t\t \n\t\tI have another query similar to this query (include more tables, but have the same subquery) but I don't have any problems.\n\t\t \n\t\tSomebody can help me with this mess? Thanks in advance!!!\n\t\t \n\t\tFernando.-\n\n\nMensaje\n\n\n\nHi \nVolker!!! I think you're right. 
Fernando.-", "msg_date": "Mon, 4 Aug 2003 12:02:46 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" }, { "msg_contents": "On Mon, 4 Aug 2003 12:02:46 -0300, \"Fernando Papa\" <[email protected]>\nwrote:\n>\t FROM \n>\t cont_contenido \n>\t ,juegos_config \n>\t ,cont_publicacion \n>\t ,(SELECT max(cp1.fecha_publicacion) as max_pub --change here\n>\t FROM cont_publicacion cp1) a --change here\n\nBut this calculates the global maximum, not per id_instalacion,\nid_contenido, and generar_vainilla as in\n\n>\t\t AND cont_publicacion.fecha_publicacion = (SELECT max(cp1.fecha_publicacion) \n>\t\t FROM cont_publicacion cp1 \n>\t\t WHERE cp1.id_instalacion = cont_publicacion.id_instalacion \n>\t\t AND cp1.id_contenido = cont_publicacion.id_contenido \n>\t\t AND cp1.generar_vainilla = cont_publicacion.generar_vainilla) \n\nServus\n Manfred\n", "msg_date": "Mon, 04 Aug 2003 17:17:06 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" } ]
[ { "msg_contents": "Err... you're right... one of us say the same thing when I show the\nVolker mail...\n\n-----Mensaje original-----\nDe: Manfred Koizar [mailto:[email protected]] \nEnviado el: lunes, 04 de agosto de 2003 12:17\nPara: Fernando Papa\nCC: Volker Helm; [email protected]\nAsunto: Re: [PERFORM] I can't wait too much: Total runtime 432478.44\nmsec\n\n\nOn Mon, 4 Aug 2003 12:02:46 -0300, \"Fernando Papa\" <[email protected]>\nwrote:\n>\t FROM \n>\t cont_contenido \n>\t ,juegos_config \n>\t ,cont_publicacion \n>\t ,(SELECT max(cp1.fecha_publicacion) as max_pub\n--change here\n>\t FROM cont_publicacion cp1) a --change here\n\nBut this calculates the global maximum, not per id_instalacion,\nid_contenido, and generar_vainilla as in\n\n>\t\t AND cont_publicacion.fecha_publicacion = (SELECT\nmax(cp1.fecha_publicacion) \n>\t\t FROM cont_publicacion cp1 \n>\t\t WHERE cp1.id_instalacion =\ncont_publicacion.id_instalacion \n>\t\t AND cp1.id_contenido =\ncont_publicacion.id_contenido \n>\t\t AND cp1.generar_vainilla =\ncont_publicacion.generar_vainilla) \n\nServus\n Manfred\n", "msg_date": "Mon, 4 Aug 2003 12:26:38 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" } ]
[ { "msg_contents": "\nI was play with nested loops, and I found this:\n\n\nOriginal explain:\n\n Limit (cost=9.75..9.76 rows=1 width=479) (actual\ntime=436858.90..436858.93 rows=8 loops=1)\n -> Sort (cost=9.75..9.76 rows=1 width=479) (actual\ntime=436858.88..436858.89 rows=8 loops=1)\n Sort Key: cont_publicacion.fecha_publicacion\n -> Merge Join (cost=9.73..9.74 rows=1 width=479) (actual\ntime=196970.93..436858.04 rows=8 loops=1)\n Merge Cond: ((\"outer\".id_instalacion =\n\"inner\".id_instalacion) AND (\"outer\".id_contenido =\n\"inner\".id_contenido))\n -> Nested Loop (cost=0.00..1828.46 rows=1 width=367)\n(actual time=7525.51..436843.27 rows=40 loops=1)\n Join Filter: ((\"inner\".id_contenido =\n\"outer\".id_contenido) AND (\"inner\".id_instalacion =\n\"outer\".id_instalacion))\n -> Index Scan using jue_conf_pk on juegos_config\n(cost=0.00..12.19 rows=40 width=332) (actual time=0.38..6.63 rows=40\nloops=1)\n -> Index Scan using idx_generar_vainilla_ci on\ncont_publicacion (cost=0.00..45.39 rows=1 width=35) (actual\ntime=48.81..10917.53 rows=97 loops=40)\n Index Cond: (upper((generar_vainilla)::text)\n= 'S'::text)\n Filter: (subplan)\n SubPlan\n -> Aggregate (cost=15.85..15.85 rows=1\nwidth=8) (actual time=24.30..24.30 rows=0 loops=17880)\n Filter: (max(fecha_publicacion) = $3)\n -> Seq Scan on cont_publicacion cp1\n(cost=0.00..15.84 rows=1 width=8) (actual time=10.17..24.12 rows=7\nloops=17880)\n Filter: ((id_instalacion = $0)\nAND (id_contenido = $1) AND (generar_vainilla = $2))\n -> Sort (cost=9.73..9.74 rows=3 width=112) (actual\ntime=8.91..8.95 rows=8 loops=1)\n Sort Key: cont_contenido.id_instalacion,\ncont_contenido.id_contenido\n -> Seq Scan on cont_contenido (cost=0.00..9.70\nrows=3 width=112) (actual time=0.45..7.59 rows=8 loops=1)\n Filter: ((id_instalacion = 2::numeric) AND\n(id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n Total runtime: 436860.84 msec\n(21 rows)\n\nWith set enable_nestloop to off :\n \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------\n Limit (cost=55.15..55.16 rows=1 width=479) (actual\ntime=11394.79..11394.82 rows=8 loops=1)\n -> Sort (cost=55.15..55.16 rows=1 width=479) (actual\ntime=11394.77..11394.79 rows=8 loops=1)\n Sort Key: cont_publicacion.fecha_publicacion\n -> Merge Join (cost=55.13..55.14 rows=1 width=479) (actual\ntime=11380.12..11394.01 rows=8 loops=1)\n Merge Cond: ((\"outer\".id_instalacion =\n\"inner\".id_instalacion) AND (\"outer\".id_contenido =\n\"inner\".id_contenido))\n -> Merge Join (cost=45.40..45.41 rows=1 width=367)\n(actual time=11358.48..11380.18 rows=40 loops=1)\n Merge Cond: ((\"outer\".id_instalacion =\n\"inner\".id_instalacion) AND (\"outer\".id_contenido =\n\"inner\".id_contenido))\n -> Index Scan using jue_conf_pk on juegos_config\n(cost=0.00..12.19 rows=40 width=332) (actual time=0.23..5.62 rows=40\nloops=1)\n -> Sort (cost=45.40..45.40 rows=1 width=35)\n(actual time=11357.48..11357.68 rows=97 loops=1)\n Sort Key: cont_publicacion.id_instalacion,\ncont_publicacion.id_contenido\n -> Index Scan using idx_generar_vainilla_ci\non cont_publicacion (cost=0.00..45.39 rows=1 width=35) (actual\ntime=48.81..11339.80 rows=97 loops=1)\n Index Cond:\n(upper((generar_vainilla)::text) = 'S'::text)\n Filter: (fecha_publicacion = (subplan))\n SubPlan\n -> Aggregate (cost=15.84..15.84\nrows=1 width=8) (actual time=25.21..25.22 rows=1 loops=447)\n -> Seq Scan on\ncont_publicacion cp1 (cost=0.00..15.84 rows=1 
width=8) (actual\ntime=10.21..25.07 rows=7 loops=447)\n Filter: ((id_instalacion\n= $0) AND (id_contenido = $1) AND (generar_vainilla = $2))\n -> Sort (cost=9.73..9.74 rows=3 width=112) (actual\ntime=8.77..8.79 rows=8 loops=1)\n Sort Key: cont_contenido.id_instalacion,\ncont_contenido.id_contenido\n -> Seq Scan on cont_contenido (cost=0.00..9.70\nrows=3 width=112) (actual time=0.45..7.41 rows=8 loops=1)\n Filter: ((id_instalacion = 2::numeric) AND\n(id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n Total runtime: 11397.66 msec\n(22 rows)\n\n\n\nWhy postgresql don't choose not to use nested loop? Why is more cheap to\nuse nested loops but It's take a lot of time? \n", "msg_date": "Mon, 4 Aug 2003 17:17:36 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" }, { "msg_contents": "\"Fernando Papa\" <[email protected]> writes:\n> -> Nested Loop (cost=0.00..1828.46 rows=1 width=367)\n> (actual time=7525.51..436843.27 rows=40 loops=1)\n> Join Filter: ((\"inner\".id_contenido =\n> \"outer\".id_contenido) AND (\"inner\".id_instalacion =\n> \"outer\".id_instalacion))\n> -> Index Scan using jue_conf_pk on juegos_config\n> (cost=0.00..12.19 rows=40 width=332) (actual time=0.38..6.63 rows=40\n> loops=1)\n> -> Index Scan using idx_generar_vainilla_ci on\n> cont_publicacion (cost=0.00..45.39 rows=1 width=35) (actual\n> time=48.81..10917.53 rows=97 loops=40)\n> Index Cond: (upper((generar_vainilla)::text)\n> = 'S'::text)\n> Filter: (subplan)\n> SubPlan\n> -> Aggregate (cost=15.85..15.85 rows=1\n> width=8) (actual time=24.30..24.30 rows=0 loops=17880)\n\nAs best I can tell, the problem here is coming from a drastic\nunderestimate of the number of rows selected by\n\"upper(generar_vainilla) = 'S'\". Evidently there are about 450 such\nrows (since in 40 repetitions of the inner index scan, the aggregate\nsubplan gets evaluated 17880 times), but the planner seems to think\nthere will be only about two such rows. Had it made a more correct\nestimate, it would never have picked a plan that required multiple\nrepetitions of the indexscan.\n\nOne thing I'm wondering is if you've VACUUM ANALYZEd cont_publicacion\nlately --- the cost estimate seems on the small side, and I'm wondering\nif the planner thinks the table is much smaller than it really is. But\nassuming you didn't make that mistake, the only solution I can see is to\nnot use a functional index. The planner is not good about making row\ncount estimates for functional indexes. You could replace the index on \nupper(generar_vainilla) with a plain index on generar_vainilla, and\nchange the query condition from \"upper(generar_vainilla) = 'S'\" to\n\"generar_vainilla IN ('S', 's')\". I think the planner would have a lot\nbetter chance at understanding the statistics that way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Aug 2003 17:28:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec " } ]
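A sketch of the change Tom suggests; the existing index name comes from the plan above, but the replacement DDL and the new index name are assumptions:

    DROP INDEX idx_generar_vainilla_ci;   -- the functional index on upper(generar_vainilla)
    CREATE INDEX idx_generar_vainilla ON cont_publicacion (generar_vainilla);
    ANALYZE cont_publicacion;

    -- and in the query, replace
    --     upper(generar_vainilla) = 'S'
    -- with
    --     generar_vainilla IN ('S', 's')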
[ { "msg_contents": "On Mon, Aug 04, 2003 at 15:05:08 -0600,\n \"Valsecchi, Patrick\" <[email protected]> wrote:\n> Sir,\n> \n> I did a search with the \"index\" keyword on the mailing list archive and it did come with no result. Sorry if it's a known bug. \n\nIt isn't a bug. It is a design trade-off. The database has no special\nknowledge about the min and max aggregate functions that would allow it\nto use indexes.\n\n> But in general, I think the indexes are under used. I have several queries that are taking at least 30 minutes and that would take less than one minute if indexes where used (comes with a comparison I made with Oracle). In particular, I have the feeling that indexes are not used for \"IN\" statements (within a where clauses).\n\nThere are known performance problems with IN. These are addressed in 7.4,\nwhich will be going into beta any time now. You can usually rewrite IN\nqueries to use EXISTS instead, which will speed things up.\n\nAlso be sure to run analyze (or vacuum analyze) so that the database\nserver has accurate statistics on which to base its decisions.\n\n> On the same subject, I'd add that the ability to provide plan \"hints\" within the queries (like provided in Oracle) would be helpful. I know that the Postgres optimizer is supposed to do a better job than the one from Oracle, but an optimizer cannot be perfect for every cases.\n\n\"Hints\" aren't going to happen. They cause maintenance problems.\nYou can disable features for a session (such as sequential scans)\nand try to get a plan you like. But generally, rather than adding\nthis to your application code, you should try to find out why the\nplanner is making the wrong choice. Adjusting the relative costs\nfor doing things might allow the planner to do a much better job for\nyou.\n\nThis kind of thing gets discussed on the performance list.\n", "msg_date": "Mon, 4 Aug 2003 16:19:51 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes not used for \"min()\"" } ]
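Two concrete rewrites along the lines Bruno describes, with made-up table and column names. The ORDER BY/LIMIT form is a common workaround for min()/max() on this vintage of PostgreSQL rather than something stated in the post, and it handles NULLs differently than the aggregate:

    -- instead of SELECT min(created) FROM log_entries;
    SELECT created FROM log_entries ORDER BY created LIMIT 1;

    -- instead of ... WHERE customer_id IN (SELECT id FROM customers WHERE active)
    SELECT o.*
    FROM orders o
    WHERE EXISTS (SELECT 1 FROM customers c
                  WHERE c.id = o.customer_id AND c.active);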
[ { "msg_contents": "\nThanks Tom. I vaccumed full every night. Now I drop function index and\nchange the upper. Nothing change (I know, total time rise because we are\ndoing other things on database now). But you can see, if was any\nperformace gain i didn't see. Actually I get better results when I\ndisable nested loops or disable merge joins, as I write in a older post.\n\nThanks!\n\n Limit (cost=9.76..9.76 rows=1 width=479) (actual\ntime=720480.00..720480.03 rows=8 loops=1)\n -> Sort (cost=9.76..9.76 rows=1 width=479) (actual\ntime=720479.99..720480.00 rows=8 loops=1)\n Sort Key: cont_publicacion.fecha_publicacion\n -> Merge Join (cost=9.73..9.75 rows=1 width=479) (actual\ntime=323197.81..720477.96 rows=8 loops=1)\n Merge Cond: ((\"outer\".id_instalacion =\n\"inner\".id_instalacion) AND (\"outer\".id_contenido =\n\"inner\".id_contenido))\n -> Nested Loop (cost=0.00..213197.04 rows=4 width=367)\n(actual time=12136.55..720425.66 rows=40 loops=1)\n Join Filter: ((\"inner\".id_contenido =\n\"outer\".id_contenido) AND (\"inner\".id_instalacion =\n\"outer\".id_instalacion))\n -> Index Scan using jue_conf_pk on juegos_config\n(cost=0.00..12.19 rows=40 width=332) (actual time=34.13..92.02 rows=40\nloops=1)\n -> Seq Scan on cont_publicacion\n(cost=0.00..5329.47 rows=10 width=35) (actual time=41.74..18004.75\nrows=97 loops=40)\n Filter: (((generar_vainilla = 'S'::character\nvarying) OR (generar_vainilla = 's'::character varying)) AND\n(fecha_publicacion = (subplan)))\n SubPlan\n -> Aggregate (cost=11.86..11.86 rows=1\nwidth=8) (actual time=40.15..40.15 rows=1 loops=17880)\n -> Index Scan using\ncont_pub_gen_vainilla on cont_publicacion cp1 (cost=0.00..11.86 rows=1\nwidth=8) (actual time=16.89..40.01 rows=7 loops=17880)\n Index Cond: (generar_vainilla =\n$2)\n Filter: ((id_instalacion = $0)\nAND (id_contenido = $1))\n -> Sort (cost=9.73..9.74 rows=3 width=112) (actual\ntime=30.96..31.00 rows=8 loops=1)\n Sort Key: cont_contenido.id_instalacion,\ncont_contenido.id_contenido\n -> Seq Scan on cont_contenido (cost=0.00..9.70\nrows=3 width=112) (actual time=0.65..28.98 rows=8 loops=1)\n Filter: ((id_instalacion = 2::numeric) AND\n(id_sbc = 619::numeric) AND (id_tipo = 2::numeric))\n Total runtime: 720595.77 msec\n(20 rows)\n\n-----Mensaje original-----\nDe: Tom Lane [mailto:[email protected]] \nEnviado el: lunes, 04 de agosto de 2003 18:28\nPara: Fernando Papa\nCC: [email protected]\nAsunto: Re: [PERFORM] I can't wait too much: Total runtime 432478.44\nmsec \n\n\n\"Fernando Papa\" <[email protected]> writes:\n> -> Nested Loop (cost=0.00..1828.46 rows=1 width=367) \n> (actual time=7525.51..436843.27 rows=40 loops=1)\n> Join Filter: ((\"inner\".id_contenido =\n> \"outer\".id_contenido) AND (\"inner\".id_instalacion =\n> \"outer\".id_instalacion))\n> -> Index Scan using jue_conf_pk on juegos_config\n\n> (cost=0.00..12.19 rows=40 width=332) (actual time=0.38..6.63 rows=40\n> loops=1)\n> -> Index Scan using idx_generar_vainilla_ci on \n> cont_publicacion (cost=0.00..45.39 rows=1 width=35) (actual \n> time=48.81..10917.53 rows=97 loops=40)\n> Index Cond: \n> (upper((generar_vainilla)::text) = 'S'::text)\n> Filter: (subplan)\n> SubPlan\n> -> Aggregate (cost=15.85..15.85 rows=1\n> width=8) (actual time=24.30..24.30 rows=0 loops=17880)\n\nAs best I can tell, the problem here is coming from a drastic\nunderestimate of the number of rows selected by\n\"upper(generar_vainilla) = 'S'\". 
Evidently there are about 450 such\nrows (since in 40 repetitions of the inner index scan, the aggregate\nsubplan gets evaluated 17880 times), but the planner seems to think\nthere will be only about two such rows. Had it made a more correct\nestimate, it would never have picked a plan that required multiple\nrepetitions of the indexscan.\n\nOne thing I'm wondering is if you've VACUUM ANALYZEd cont_publicacion\nlately --- the cost estimate seems on the small side, and I'm wondering\nif the planner thinks the table is much smaller than it really is. But\nassuming you didn't make that mistake, the only solution I can see is to\nnot use a functional index. The planner is not good about making row\ncount estimates for functional indexes. You could replace the index on \nupper(generar_vainilla) with a plain index on generar_vainilla, and\nchange the query condition from \"upper(generar_vainilla) = 'S'\" to\n\"generar_vainilla IN ('S', 's')\". I think the planner would have a lot\nbetter chance at understanding the statistics that way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 4 Aug 2003 19:01:50 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec " }, { "msg_contents": "\"Fernando Papa\" <[email protected]> writes:\n> Thanks Tom. I vaccumed full every night. Now I drop function index and\n> change the upper. Nothing change (I know, total time rise because we are\n> doing other things on database now).\n\n> -> Seq Scan on cont_publicacion\n> (cost=0.00..5329.47 rows=10 width=35) (actual time=41.74..18004.75\n> rows=97 loops=40)\n> Filter: (((generar_vainilla = 'S'::character\n> varying) OR (generar_vainilla = 's'::character varying)) AND\n> (fecha_publicacion = (subplan)))\n> SubPlan\n> -> Aggregate (cost=11.86..11.86 rows=1\n> width=8) (actual time=40.15..40.15 rows=1 loops=17880)\n\nSomething fishy going on here. Why did it switch to a seqscan,\nconsidering it still (mistakenly) thinks there are only going to be 10\nor 20 rows matching the generar_vainilla condition? How many rows have\ngenerar_vainilla equal to 's' or 'S', anyway?\n\nIn any case, the real problem is to get rid of the subselect at the \nNow that I look at your original query, I see that what you really seem\nto be after is the publications with latest pub date among each group with\nidentical id_instalacion, id_contenido, and generar_vainilla. You would\nprobably do well to reorganize the query using SELECT DISTINCT ON, viz\n\nSELECT * FROM\n(SELECT DISTINCT ON (id_instalacion, id_contenido, generar_vainilla)\n ...\n FROM ...\n WHERE ...\n ORDER BY\n id_instalacion, id_contenido, generar_vainilla, fecha_publicacion DESC)\nAS ss\nORDER BY fecha_publicacion desc \nLIMIT 10\nOFFSET 0\n\nSee the \"weather reports\" example in the SELECT reference page for\nmotivation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 04 Aug 2003 18:30:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec " } ]
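Filled in for cont_publicacion alone, Tom's DISTINCT ON skeleton would look roughly like this; the real query also joins cont_contenido and juegos_config, so treat it only as a sketch:

    SELECT DISTINCT ON (id_instalacion, id_contenido, generar_vainilla)
           id_instalacion, id_contenido, generar_vainilla, fecha_publicacion
    FROM cont_publicacion
    ORDER BY id_instalacion, id_contenido, generar_vainilla,
             fecha_publicacion DESC;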
[ { "msg_contents": "Sorry Guy,\n\nwas just a little tired yesterday.\n\n> Err... you're right... one of us say the same thing when I show the\n> Volker mail...\n\nTry to make a group by in the inline-view, so you will get something\nlike this:\n\n> \n> On Mon, 4 Aug 2003 12:02:46 -0300, \"Fernando Papa\" <[email protected]>\n> wrote:\n> >\t FROM \n> >\t cont_contenido \n> >\t ,juegos_config \n> >\t ,cont_publicacion \n> >\t ,(SELECT id_instalacion,\n id_contenido,\n generar_vainilla,\n max(cp1.fecha_publicacion) as max_pub \n FROM cont_publicacion cp1\n GROUP BY id_instalacion,id_contenido,generar_vainilla) a \nwhere \n ...\n AND a.id_instalacion = cont_publicacion.id_instalacion\n AND a.id_contenido = cont_publicacion.id_contenido\n AND a.generar_vainilla = cont_publicacion.generar_vainilla\n AND a.max_pub = cont_publicacion.fecha_publicacion \n\n\nSorry for this missing group.\n\nBye,\n\nVolker\n", "msg_date": "Tue, 5 Aug 2003 08:22:58 +0200", "msg_from": "\"Volker Helm\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: I can't wait too much: Total runtime 432478.44 msec" } ]
[ { "msg_contents": "I have a table\npermissions\nwith the fields (party_id integer, permission varchar, key_name varchar,\nkey_value integer)\nfor which I need to a query to see if a person has permission to carry out a\nparticular action.\nThe query looks like:\nSELECT 1\nFROM permissions\nWHERE party_id in (4, 7, 11, 26)\n AND\n permission = 'permission8'\n AND\n ((key_name = 'keyname8' AND key_value = 223) OR\n (key_name = 'keyname1' AND key_value = 123) OR\n (key_name = 'keyname5' AND key_value = 212) OR\n (key_name = 'keyname7' AND key_value = 523) OR\n (key_name = 'keyname0' AND key_value = 123) OR\n (key_name = 'keyname10' AND key_value = 400));\n\nwould a permissions(party_id, permission) index work best here?\nor should I index all 4 columns?\n\nAlso,\nAnother alternative is to combine the key_name and key_value fields into a\nvarchar\nfield key (e. g. 'keyname8=223'), in which case the equilalent query would\njust check\n1 field 6 times instead of having 6 ANDstatements.\n\nI expect the table to have about 1 million rows at the most, and I need this\nquery to run as fast\nas possible since it will be run many, many times.\nSo, from a design standpoint, what is the the best way to go, should I have\ntwo fields key_name, and key_value,\nor just one field key. And how should I index this table best. I guess the\nfundamental question here is, is it faster\nto check a varchar(60) field for equality, or to check two check an integer\nand then a varchar(30). Or does having\none varchar field replace an integer and a varchar field, allow for some\nnice optimization not practical otherwise (i.e a 3-column index).\n\nI'd greatly appreciate any insight into this matter.\n-Ara Anjargolian\n\n\n\n\n", "msg_date": "Tue, 5 Aug 2003 02:33:17 -0700", "msg_from": "\"Ara Anjargolian\" <[email protected]>", "msg_from_op": true, "msg_subject": "query/table design help" }, { "msg_contents": "On Tuesday 05 August 2003 15:03, Ara Anjargolian wrote:\n> I have a table\n> permissions\n> with the fields (party_id integer, permission varchar, key_name varchar,\n> key_value integer)\n> for which I need to a query to see if a person has permission to carry out\n> a particular action.\n> The query looks like:\n> SELECT 1\n> FROM permissions\n> WHERE party_id in (4, 7, 11, 26)\n> AND\n> permission = 'permission8'\n> AND\n> ((key_name = 'keyname8' AND key_value = 223) OR\n> (key_name = 'keyname1' AND key_value = 123) OR\n> (key_name = 'keyname5' AND key_value = 212) OR\n> (key_name = 'keyname7' AND key_value = 523) OR\n> (key_name = 'keyname0' AND key_value = 123) OR\n> (key_name = 'keyname10' AND key_value = 400));\n>\n> would a permissions(party_id, permission) index work best here?\n> or should I index all 4 columns?\n>\n> Also,\n> Another alternative is to combine the key_name and key_value fields into a\n> varchar\n> field key (e. g. 'keyname8=223'), in which case the equilalent query would\n> just check\n> 1 field 6 times instead of having 6 ANDstatements.\n>\n> I expect the table to have about 1 million rows at the most, and I need\n> this query to run as fast\n> as possible since it will be run many, many times.\n\nI would suggest a 3 column table with party id, action and permission. Index \non partyid and action.\n\nIf table is static enough clustering should help.\n\n But this is one of many possible ways to design it. 
There could be other \ndetails that can affect this decision.\n\n Shridhar\n\n", "msg_date": "Tue, 5 Aug 2003 15:27:50 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query/table design help" } ]
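Putting the suggestions above into DDL might look like the following; the index names, and the choice between the two variants, are assumptions to be checked with EXPLAIN against real data:

    -- composite index matching the WHERE clause of the permission check:
    CREATE INDEX permissions_party_perm_idx ON permissions (party_id, permission);

    -- if the combined-key variant is used instead (e.g. 'keyname8=223' in one column),
    -- the six OR'ed pairs collapse into a single IN list:
    --   ... AND perm_key IN ('keyname8=223', 'keyname1=123', 'keyname5=212',
    --                        'keyname7=523', 'keyname0=123', 'keyname10=400')
    -- with an index such as:
    --   CREATE INDEX permissions_lookup_idx ON permissions (party_id, permission, perm_key);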
[ { "msg_contents": "I've been trying to search through the archives, but it hasn't been\nsuccessful.\n\nWe recently upgraded from pg7.0.2 to 7.3.4 and things were happy. I'm\ntrying to fine tune things to get it running a bit better and I'm trying\nto figure out how vacuum output correlates to tuning parameters.\n\nHere's the msot recent vacuum for the \"active\" table. It gets a few\nhundred updates/inserts a minute constantly throughout the day.\n\nINFO: Pages 27781: Changed 0, Empty 0; Tup 2451648: Vac 0, Keep 0, UnUsed\n1003361.\n Total CPU 2.18s/0.61u sec elapsed 2.78 sec.\n\nI see unused is quite high. This morning I bumped max_fsm_pages to 500000.\nIf I'm thinking right you want unused and max_fsm to be closish, right?\n(Yesterday it was down around.. oh.. 600k?)\n\nI'm thinking vacuum full's may be in order. Which stinks because I was\nhoping to do away with the db essentially down for 10 minutes (includes\nall the db's on that machine) while it vacuum'd.\n\nThe upside is: it is performing great. During the vacuum analyze I do get\na few multi-second pauses while something occurs. I figured it was a\ncheckpoint, so I bumped checkpoint_timeout to 30 seconds and wal_buffers\nto 128. (I'm just guessing on wal_buffers).\n\nMachine is weenucks 2.2.17 on a dual p3 800, 2gb ram, 18gb drive (mirrored).\nIf you guys need other info (shared_buffers, etc) I'll be happy to funish\nthem. but the issue isn't query slowness.. just want to get this thing\noiled).\n\nthanks\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Tue, 5 Aug 2003 08:09:19 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Some vacuum & tuning help" }, { "msg_contents": "On 5 Aug 2003 at 8:09, Jeff wrote:\n\n> I've been trying to search through the archives, but it hasn't been\n> successful.\n> \n> We recently upgraded from pg7.0.2 to 7.3.4 and things were happy. I'm\n> trying to fine tune things to get it running a bit better and I'm trying\n> to figure out how vacuum output correlates to tuning parameters.\n> \n> Here's the msot recent vacuum for the \"active\" table. It gets a few\n> hundred updates/inserts a minute constantly throughout the day.\n\nI would suggest autovacuum daemon which is in CVS contrib works for 7.3.x as \nwell.. Or schedule a vacuum analyze every 15 minutes or so..\n> \n> INFO: Pages 27781: Changed 0, Empty 0; Tup 2451648: Vac 0, Keep 0, UnUsed\n> 1003361.\n> Total CPU 2.18s/0.61u sec elapsed 2.78 sec.\n> \n> I see unused is quite high. This morning I bumped max_fsm_pages to 500000.\n> If I'm thinking right you want unused and max_fsm to be closish, right?\n> (Yesterday it was down around.. oh.. 600k?)\n> \n> I'm thinking vacuum full's may be in order. Which stinks because I was\n> hoping to do away with the db essentially down for 10 minutes (includes\n> all the db's on that machine) while it vacuum'd.\n\nI think vacuum full is required.\n \n> The upside is: it is performing great. During the vacuum analyze I do get\n> a few multi-second pauses while something occurs. I figured it was a\n> checkpoint, so I bumped checkpoint_timeout to 30 seconds and wal_buffers\n> to 128. (I'm just guessing on wal_buffers).\n\nIf it is couple of tables that are that heavily killed, I would suggest to a \npg_dump, drop table and reload table. That should take less time. Your downtime \nmight not be 10 minutes but more like 15 say. 
That's a rough estimate..\n\n> \n> Machine is weenucks 2.2.17 on a dual p3 800, 2gb ram, 18gb drive (mirrored).\n\nYou mean linux? I guess you need a kernel revision for a long time. How about \n2.4.21?\n\n> If you guys need other info (shared_buffers, etc) I'll be happy to funish\n> them. but the issue isn't query slowness.. just want to get this thing\n> oiled).\n\nSee if this helps..\n\nBye\n Shridhar\n\n--\nQOTD:\t\"I thought I saw a unicorn on the way over, but it was just a\thorse with \none of the horns broken off.\"\n\n", "msg_date": "Tue, 05 Aug 2003 18:05:26 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "On Tue, 5 Aug 2003, Shridhar Daithankar wrote:\n\n> On 5 Aug 2003 at 8:09, Jeff wrote:\n> \n> I would suggest autovacuum daemon which is in CVS contrib works for 7.3.x as \n> well.. Or schedule a vacuum analyze every 15 minutes or so..\n\n\tI've just got autovacum up and Since we have had a lot of talk \nabout it recently..... I thought some feed back might be useful.\n\tIt seams to work quite well. But can be rather zelous on its \nanalysing for the first few hours. Curretly its analysig static (ie \nnothigs changed) tables every 10minites. Vacuums seam to be about right.\n\tI think that many vacuums may be slowing does my database....\n\nPeter Childs\n\n", "msg_date": "Tue, 5 Aug 2003 14:15:36 +0100 (BST)", "msg_from": "Peter Childs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "On Tue, 5 Aug 2003, Shridhar Daithankar wrote:\n>\n> I would suggest autovacuum daemon which is in CVS contrib works for 7.3.x as\n> well.. Or schedule a vacuum analyze every 15 minutes or so..\n> >\n\nGood Call. I'll give that a whirl and let you know.\n\n> I think vacuum full is required.\n>\nD'oh. Would this be a regular thing? I suppose we could do it weekly.\n\nAs for the pg_dumping of it. I suppose it would work on this table as it\nis only a couple million rows and not terribly big data-wise. The other\ntables in this db are rather big and a load is not fast. (It is about\n8GB).\n\nthanks\n\n> You mean linux? I guess you need a kernel revision for a long time. How about\n> 2.4.21?\n>\nYeah, linux. We're planning on upgrading when we relocate datacenters at\nthe end of August. This machine has actually been up for 486 days (We're\nhoping to reach linux's uptime wraparound of 496 days :) and the only\nreason it went down then was because the power supply failed. (That can\nbe read: pg7.0.2 had over a year of uptime. lets hope 7.3 works as good :)\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Tue, 5 Aug 2003 09:18:37 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "On 5 Aug 2003 at 9:18, Jeff wrote:\n> As for the pg_dumping of it. I suppose it would work on this table as it\n> is only a couple million rows and not terribly big data-wise. The other\n> tables in this db are rather big and a load is not fast. (It is about\n> 8GB).\n\nYou need to dump only those table which has unusualy high unused stats. If that \nis a small table, dump/reload it would be far faster than vacuum.. For others \nvacuum analyse should do..\n\n> > You mean linux? I guess you need a kernel revision for a long time. How about\n> > 2.4.21?\n> >\n> Yeah, linux. 
We're planning on upgrading when we relocate datacenters at\n> the end of August. This machine has actually been up for 486 days (We're\n> hoping to reach linux's uptime wraparound of 496 days :) and the only\n> reason it went down then was because the power supply failed. (That can\n> be read: pg7.0.2 had over a year of uptime. lets hope 7.3 works as good :)\n\nGood to know that. AFAIK, the 496 wraparound is fixed in 2.6. So that won't be \na complaint any longer..\n\nBye\n Shridhar\n\n--\nGravity:\tWhat you get when you eat too much and too fast.\n\n", "msg_date": "Tue, 05 Aug 2003 19:05:05 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "On 5 Aug 2003 at 14:15, Peter Childs wrote:\n\n> On Tue, 5 Aug 2003, Shridhar Daithankar wrote:\n> \n> > On 5 Aug 2003 at 8:09, Jeff wrote:\n> > \n> > I would suggest autovacuum daemon which is in CVS contrib works for 7.3.x as \n> > well.. Or schedule a vacuum analyze every 15 minutes or so..\n> \n> \tI've just got autovacum up and Since we have had a lot of talk \n> about it recently..... I thought some feed back might be useful.\n> \tIt seams to work quite well. But can be rather zelous on its \n> analysing for the first few hours. Curretly its analysig static (ie \n> nothigs changed) tables every 10minites. Vacuums seam to be about right.\n> \tI think that many vacuums may be slowing does my database....\n\nIIRC there is per operation threshold. If update threshold is 5% and table is \n2% updatedit, then it should not look at it at all.\n\nIt's worth mentioning that you should start auto vacuum daemon on a clean \ndatabase. i.e. no pending vacuum. It is not supposed to start with a database \nwhich has lots of clean up pending. The essence of auto vacuum daemon is to \nmaintain a clean database in clean state..\n\nI agree, specifying per table thresholds would be good in autovacuum..\n\n\nBye\n Shridhar\n\n--\nWYSIWYG:\tWhat You See Is What You Get.\n\n", "msg_date": "Tue, 05 Aug 2003 19:18:33 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "On Tue, 5 Aug 2003, Shridhar Daithankar wrote:\n\n> On 5 Aug 2003 at 14:15, Peter Childs wrote:\n> \n> > On Tue, 5 Aug 2003, Shridhar Daithankar wrote:\n> > \n> > > On 5 Aug 2003 at 8:09, Jeff wrote:\n> > > \n> > > I would suggest autovacuum daemon which is in CVS contrib works for 7.3.x as \n> > > well.. Or schedule a vacuum analyze every 15 minutes or so..\n> > \n> > \tI've just got autovacum up and Since we have had a lot of talk \n> > about it recently..... I thought some feed back might be useful.\n> > \tIt seams to work quite well. But can be rather zelous on its \n> > analysing for the first few hours. Curretly its analysig static (ie \n> > nothigs changed) tables every 10minites. Vacuums seam to be about right.\n> > \tI think that many vacuums may be slowing does my database....\n\n\tSorry typo big time\n\nit should read\n\n\"I think that may analysing may may be slowing down my database.\n\n> \n> IIRC there is per operation threshold. If update threshold is 5% and table is \n> 2% updatedit, then it should not look at it at all.\n\n\tI left it with debug over night and it improved to that after 5 \nhours. switch the debug down (to 1) this morning and it has not settled \ndown yet.\n\n> \n> It's worth mentioning that you should start auto vacuum daemon on a clean \n> database. i.e. 
no pending vacuum. It is not supposed to start with a database \n> which has lots of clean up pending. The essence of auto vacuum daemon is to \n> maintain a clean database in clean state..\n> \n> I agree, specifying per table thresholds would be good in autovacuum..\n\nPeter Childs\n\n", "msg_date": "Tue, 5 Aug 2003 15:17:09 +0100 (BST)", "msg_from": "Peter Childs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "Jeff <[email protected]> writes:\n> Here's the msot recent vacuum for the \"active\" table. It gets a few\n> hundred updates/inserts a minute constantly throughout the day.\n\n> INFO: Pages 27781: Changed 0, Empty 0; Tup 2451648: Vac 0, Keep 0, UnUsed\n> 1003361.\n> Total CPU 2.18s/0.61u sec elapsed 2.78 sec.\n\n> I see unused is quite high. This morning I bumped max_fsm_pages to 500000.\n> If I'm thinking right you want unused and max_fsm to be closish, right?\n\nNo, they're unrelated. UnUsed is the number of currently-unused tuple\npointers in page headers, whereas the FSM parameters are measured in\npages. 30000 FSM slots would be more than enough for this table.\n\nThe above numbers don't seem terribly unreasonable to me, although\nprobably UnUsed would be smaller if you'd been vacuuming more often.\nIf you see UnUsed continuing to increase then you definitely ought to\nshorten the intervacuum time.\n\nVACUUM FULL does not reclaim unused tuple pointers AFAIR, except where\nit is able to release entire pages at the end of the relation. So if\nyou really wanted to get back down to nil UnUsed, you'd need to do a\ndump and reload of the table (or near equivalent, such as CLUSTER).\nNot sure it's worth the trouble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Aug 2003 10:37:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "Try the pg_autovacuum daemon in CVS contrib dir. It works fine with 7.3.\n\nChris\n\n----- Original Message ----- \nFrom: \"Jeff\" <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, August 05, 2003 8:09 PM\nSubject: [PERFORM] Some vacuum & tuning help\n\n\n> I've been trying to search through the archives, but it hasn't been\n> successful.\n>\n> We recently upgraded from pg7.0.2 to 7.3.4 and things were happy. I'm\n> trying to fine tune things to get it running a bit better and I'm trying\n> to figure out how vacuum output correlates to tuning parameters.\n>\n> Here's the msot recent vacuum for the \"active\" table. It gets a few\n> hundred updates/inserts a minute constantly throughout the day.\n>\n> INFO: Pages 27781: Changed 0, Empty 0; Tup 2451648: Vac 0, Keep 0, UnUsed\n> 1003361.\n> Total CPU 2.18s/0.61u sec elapsed 2.78 sec.\n>\n> I see unused is quite high. This morning I bumped max_fsm_pages to 500000.\n> If I'm thinking right you want unused and max_fsm to be closish, right?\n> (Yesterday it was down around.. oh.. 600k?)\n>\n> I'm thinking vacuum full's may be in order. Which stinks because I was\n> hoping to do away with the db essentially down for 10 minutes (includes\n> all the db's on that machine) while it vacuum'd.\n>\n> The upside is: it is performing great. During the vacuum analyze I do get\n> a few multi-second pauses while something occurs. I figured it was a\n> checkpoint, so I bumped checkpoint_timeout to 30 seconds and wal_buffers\n> to 128. 
(I'm just guessing on wal_buffers).\n>\n> Machine is weenucks 2.2.17 on a dual p3 800, 2gb ram, 18gb drive\n(mirrored).\n> If you guys need other info (shared_buffers, etc) I'll be happy to funish\n> them. but the issue isn't query slowness.. just want to get this thing\n> oiled).\n>\n> thanks\n>\n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Wed, 6 Aug 2003 09:19:45 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" } ]
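The "vacuum analyze every 15 minutes or so" advice amounts to running something like this against the busy table from cron or another scheduler; the table name is a placeholder, and the right interval depends on whether UnUsed keeps climbing in the VERBOSE output between runs, as Tom notes above:

    VACUUM ANALYZE active_log_table;

    -- occasionally, to check how UnUsed is trending:
    VACUUM VERBOSE active_log_table;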
[ { "msg_contents": "Scott\n\n>For example, a dedicated database for a webserver would be tuned\n>differently from a server that was running both the webserver and the database on\n>the same machine. \n\nThis is the situation I'm having fun and games with so I'd be very interested. (Client has made the mistake of putting Mr Gates firewall in between the two to add to my woes!!) \n\nLooking forward to seeing the outcome of your documentation immensely.\nMany thanks\n\nHilary\n\n\n\nHilary Forbes\n-------------\nDMR Computer Limited: http://www.dmr.co.uk/\nDirect line: 01689 889950\nSwitchboard: (44) 1689 860000 Fax: (44) 1689 860330\nE-mail: [email protected]\n\n**********************************************************\n\n", "msg_date": "Tue, 05 Aug 2003 14:20:01 +0100", "msg_from": "Hilary Forbes <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Re: postgresql.conf" } ]
[ { "msg_contents": "Shridhar Daithankar wrote: \n> I agree, specifying per table thresholds would be good in autovacuum.. \n \nWhich begs the question of what the future direction is for pg_autovacuum.\n\nThere would be some merit to having pg_autovacuum throw in some tables\nin which to store persistent information, and at that point, it would\nmake sense to add some flags to support the respective notions that:\n\n -> Some tables should _never_ be touched;\n\n -> Some tables might get \"reset\" to indicate that they should be\n considered as having been recently vacuumed, or perhaps that they\n badly need vacuuming;\n\n -> As you suggest, per-table thresholds;\n\n -> pg_autovacuum would know when tables were last vacuumed by\n it...\n\n -> You could record vacuum times to tell pg_autovacuum that you\n vacuumed something \"behind its back.\"\n\n -> If the system queued up proposed vacuums by having a \"queue\"\n table, you could request that pg_autovacuum do a vacuum on a\n particular table at the next opportunity.\n\nAll well and interesting stuff that could be worth implementing.\n\nBut the usual talk has been about ultimately integrating the\nfunctionality into the backend, making it fairly futile to enhance\npg_autovacuum terribly much.\n\nUnfortunately, the \"integrate into the backend\" thing has long seemed\n\"just around the corner.\" I think we should either:\n a) Decide to enhance pg_autovacuum, or\n b) Not.\n\nIn view of how long the \"better answers\" seem to be taking to emerge,\nI think it makes sense to add functionality to pg_autovacuum.\n-- \noutput = reverse(\"ofni.smrytrebil\" \"@\" \"enworbbc\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Tue, 05 Aug 2003 10:29:26 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "On 5 Aug 2003 at 10:29, Christopher Browne wrote:\n\n> Shridhar Daithankar wrote: \n> > I agree, specifying per table thresholds would be good in autovacuum.. \n> \n> Which begs the question of what the future direction is for pg_autovacuum.\n> \n> There would be some merit to having pg_autovacuum throw in some tables\n> in which to store persistent information, and at that point, it would\n> make sense to add some flags to support the respective notions that:\n\nWell, the C++ version I wrote quite a while back, which resides on gborg and \nunmaintained, did that. It was considered as table pollution. However whenever \nautovacuum stuff goes in backend as such, it is going to need a catalogue.\n \n> -> Some tables should _never_ be touched;\n\nThat can be determined runtime from stats. Not required as a special feature \nIMHO..\n\n> \n> -> Some tables might get \"reset\" to indicate that they should be\n> considered as having been recently vacuumed, or perhaps that they\n> badly need vacuuming;\n\nWell, stats collector takes care of that. Autovacuum daemon reads that \nstatistics, maintain a periodic snapshot of the same to determine whether or \nnot it needs to vacuum.\n\nWhy it crawls for a dirty database is as follows. Autovauum daemon starts, read \nstatistics, sets it as base level and let a cycle pass, which is typically few \nminutes. When it goes again, it finds that lots of things are modified and need \nvacuum and so it triggers vacuum.\n\nNow vacuum goes on cleaning entire table which might be days job continously \npostponed some one reason or another. Oops.. 
your database is on it's knees..\n\n \n> -> As you suggest, per-table thresholds;\n\nI would rather put it in terms of pages. If any table wastes 100 pages each, it \ndeserves a vacuum..\n\n \n> -> pg_autovacuum would know when tables were last vacuumed by\n> it...\n\nIf you maintain a table in database, there are lot of things you can maintain. \nAnd you need to connect to database anyway to fire vacuum..\n\n \n> -> You could record vacuum times to tell pg_autovacuum that you\n> vacuumed something \"behind its back.\"\n\nIt should notice..\n\n> -> If the system queued up proposed vacuums by having a \"queue\"\n> table, you could request that pg_autovacuum do a vacuum on a\n> particular table at the next opportunity.\n\nThat won't ever happen if autovacuum is constantly running..\n\n> Unfortunately, the \"integrate into the backend\" thing has long seemed\n> \"just around the corner.\" I think we should either:\n> a) Decide to enhance pg_autovacuum, or\n> b) Not.\n\nIn fact, I would say that after we have autovacuum, we should not integrate it. \nIt is a very handy tool of tighting a database. Other database go other way \nround. They develop maintance functionality built in and then create tool on \ntop of it. Here we have it already done.\n\nIt's just that it should be triggered by default. That would rock..\n\nBye\n Shridhar\n\n--\nBubble Memory, n.:\tA derogatory term, usually referring to a person's intelligence.\tSee also \"vacuum tube\".\n\n", "msg_date": "Tue, 05 Aug 2003 20:13:39 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "From: \"Christopher Browne\" <[email protected]>\n\n> Shridhar Daithankar wrote:\n> > I agree, specifying per table thresholds would be good in autovacuum..\n>\n> Which begs the question of what the future direction is for pg_autovacuum.\n\nThis is a good question.\n\n> There would be some merit to having pg_autovacuum throw in some tables\n> in which to store persistent information\n\nAs long as pg_autovacuum is either a contrib module, or not integrated into\nthe backend, we can't do this. I don't think we should require that tables\nare added to your database in order to run pg_autovacuum, I have thought\nthat a \"helper table\" could be used, this table, if found by pg_autovacuum\nwould use it for per table defaults, exclusion list etc.... That way\npg_autovacuum can run without a polluted database, or can be tuned.\n\nIf pg_autovacuum in made official, moves out of contrib and becomes a core\ntool, then we can either add columns to some system catalogs to track this\ninformation or add a new system table.\n\n> All well and interesting stuff that could be worth implementing.\n>\n> But the usual talk has been about ultimately integrating the\n> functionality into the backend, making it fairly futile to enhance\n> pg_autovacuum terribly much.\n>\n> Unfortunately, the \"integrate into the backend\" thing has long seemed\n> \"just around the corner.\" I think we should either:\n> a) Decide to enhance pg_autovacuum, or\n> b) Not.\n\nI have been talking about \"integraging it into the backend\" for a while, and\nI used to think it was \"just around the corner\" unfortunately, work\nschedule and my C skills have prevented me from getting anything useful\nworking. 
If you would like to work on it, I would help as much as possible.\n\nI chose to leave pg_autovacuum simple and not add too many features because\nthe core team has said that it needs to be integrated into the backend\nbefore it can be considered a core tool.\n\nps, please cc me as I'm not subscribed to the list.\n\n", "msg_date": "Tue, 5 Aug 2003 11:06:22 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "From: \"Shridhar Daithankar\" <[email protected]>\n> On 5 Aug 2003 at 10:29, Christopher Browne wrote:\n>\n> > Shridhar Daithankar wrote:\n> > There would be some merit to having pg_autovacuum throw in some tables\n> > in which to store persistent information,\n>\n> Well, the C++ version I wrote quite a while back, which resides on gborg\nand\n> unmaintained, did that. It was considered as table pollution. However\nwhenever\n> autovacuum stuff goes in backend as such, it is going to need a catalogue.\n\nright, I think there is a distinction between adding a system catalogue\nneeded for core functionality, and requiring a user to put a table in the\nusers name space of their database just to run a utility. As I mentioned in\nmy other email, I do think that a non-required helper table could be a good\nidea, the question is should we do this considering autovacuum should be\nintegrated into the backend at which point pg_autovacuum will be scrapped.\n\n> > -> Some tables should _never_ be touched;\n>\n> That can be determined runtime from stats. Not required as a special\nfeature\n> IMHO..\n>\n> >\n> > -> Some tables might get \"reset\" to indicate that they should be\n> > considered as having been recently vacuumed, or perhaps that they\n> > badly need vacuuming;\n>\n> Well, stats collector takes care of that. Autovacuum daemon reads that\n> statistics, maintain a periodic snapshot of the same to determine whether\nor\n> not it needs to vacuum.\n\nActually I don't think that pg_autovacuum will notice. The stats that it\nwatches no nothing about when a table is vacuumed outside of pg_autovacuum.\nI agree this is a deficiency, but I don't know how to get that information\nwithout being part of the backend.\n\n> Why it crawls for a dirty database is as follows. Autovauum daemon starts,\nread\n> statistics, sets it as base level and let a cycle pass, which is typically\nfew\n> minutes. When it goes again, it finds that lots of things are modified and\nneed\n> vacuum and so it triggers vacuum.\n>\n> Now vacuum goes on cleaning entire table which might be days job\ncontinously\n> postponed some one reason or another. Oops.. your database is on it's\nknees..\n\nIf one table takes days (or even hours) to vacuum, then most probably it\nrequires *a lot* of activity before pg_autovacuum will try to vacuum the\ntable. The thresholds are based on two factors, a base value (default =\n1,000), and a multiplier (default = 2) of the total number of tuples. So\nusing the default pg_autovacuum settings, a table with 1,000,000 rows will\nnot be vacuumed until the number of rows updated or deleted = 2,001,000.\nSo, a table shouldn't be vacuumed until it really needs it. This setup\nworks well since a small table of say 100 rows, will be updated every 1,200\n(updates or deletes).\n>\n> > -> As you suggest, per-table thresholds;\n>\n> I would rather put it in terms of pages. 
If any table wastes 100 pages\neach, it\n> deserves a vacuum..\n\nunfortunately I don't know of an efficient method of looking at how many\npages have free space without running vacuum or without using the\npgstattuple contrib module which in my testing took about 90% as long to run\nas vacuum.\n\n> > -> pg_autovacuum would know when tables were last vacuumed by\n> > it...\n\npg_autovacuum already does this, but the data does not persist through\npg_autovacuum restarts.\n\n> If you maintain a table in database, there are lot of things you can\nmaintain.\n> And you need to connect to database anyway to fire vacuum..\n>\n> > -> You could record vacuum times to tell pg_autovacuum that you\n> > vacuumed something \"behind its back.\"\n>\n> It should notice..\n\nI don't think it does.\n\n> > -> If the system queued up proposed vacuums by having a \"queue\"\n> > table, you could request that pg_autovacuum do a vacuum on a\n> > particular table at the next opportunity.\n\nThat would be a design changes as right now pg_autovacuum doesn't keep a\nlist of tables to vacuum at all, it just decides to vacuum or not vacuum a\ntable as it loops through the database.\n\n> > Unfortunately, the \"integrate into the backend\" thing has long seemed\n> > \"just around the corner.\" I think we should either:\n> > a) Decide to enhance pg_autovacuum, or\n> > b) Not.\n\nI have been of the opinion to not enhance pg_autovacuum because it needs to\nbe intgrated, and enhancing it will only put that off. Also, I think many\nof the real enhancements can only come from being integrated (using the FSM\nto make decisions, keepting track of external vacuums, modifying system\ncatalogs to keep autovacuum information etc...)\n\n> In fact, I would say that after we have autovacuum, we should not\nintegrate it.\n> It is a very handy tool of tighting a database. Other database go other\nway\n> round. They develop maintance functionality built in and then create tool\non\n> top of it. Here we have it already done.\n\nI'm not sure I understand your point.\n\n> It's just that it should be triggered by default. That would rock..\n\n\nI agree that if pg_autovacuum becomes a core tool (not contrib and not\nintegrated into backend) then pg_ctl should fire it up and kill it\nautomatically.\n\n", "msg_date": "Tue, 5 Aug 2003 11:30:12 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "\"Matthew T. O'Connor\" <[email protected]> writes:\n> I chose to leave pg_autovacuum simple and not add too many features because\n> the core team has said that it needs to be integrated into the backend\n> before it can be considered a core tool.\n\nI think actually it makes plenty of sense to enhance pg_autovacuum while\nit's still contrib stuff. My guess is it'll be much less painful to\nwhack it around in minor or major ways while it's standalone code.\nOnce it's integrated in the backend, making significant changes will be\nharder and more ticklish. So, now is precisely the time to be\nexperimenting to find out what works well and what features are needed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Aug 2003 11:35:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "From: \"Tom Lane\" <[email protected]>\n> \"Matthew T. 
O'Connor\" <[email protected]> writes:\n> > I chose to leave pg_autovacuum simple and not add too many features\nbecause\n> > the core team has said that it needs to be integrated into the backend\n> > before it can be considered a core tool.\n>\n> I think actually it makes plenty of sense to enhance pg_autovacuum while\n> it's still contrib stuff. My guess is it'll be much less painful to\n> whack it around in minor or major ways while it's standalone code.\n> Once it's integrated in the backend, making significant changes will be\n> harder and more ticklish. So, now is precisely the time to be\n> experimenting to find out what works well and what features are needed.\n\nFair point, my only concern is that a backend integrated pg_autovacuum would\nbe radically different from the current libpq based client application.\nWhen integrated into the backend you have access to a lot of information\nthat you don't have access to as a client. I know one goal I have for the\nbackend version is to be based on the FSM and not require the stats\ncollector since it has a measurable negative effect on performance.\n\nBut in the more general sense of learning what features people want\n(exclusion lists, per table defaults etc) I agree the current version is a\nsufficient testing ground.\n\n", "msg_date": "Tue, 5 Aug 2003 11:45:04 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "From: \"Tom Lane\" <[email protected]>\n> \"Matthew T. O'Connor\" <[email protected]> writes:\n> So, now is precisely the time to be experimenting to find out what works\nwell and what features are needed.\n\nAnother quick question while I have your attention :-)\n\nSince pg_autovaccum is a contrib module does that mean I can make functional\nchanges that will be included in point release of 7.4?\n\n", "msg_date": "Tue, 5 Aug 2003 12:01:22 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "\"Matthew T. O'Connor\" <[email protected]> writes:\n> Since pg_autovaccum is a contrib module does that mean I can make functional\n> changes that will be included in point release of 7.4?\n\nWell, the bar is lower for contrib stuff than for core, but you'd better\nget such changes in PDQ, I'd say ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Aug 2003 12:18:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "Tom Lane wrote:\n> \"Matthew T. O'Connor\" <[email protected]> writes:\n> > Since pg_autovaccum is a contrib module does that mean I can make functional\n> > changes that will be included in point release of 7.4?\n> \n> Well, the bar is lower for contrib stuff than for core, but you'd better\n> get such changes in PDQ, I'd say ...\n\nThe contrib stuff is usually at the control of the author, so you can\nmake changes relatively late. However, the later the changes, the less\ntesting they get, but the decision is mostly yours, rather than core.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 5 Aug 2003 12:49:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "On Tue, 2003-08-05 at 12:49, Bruce Momjian wrote:\n> > Well, the bar is lower for contrib stuff than for core, but you'd better\n> > get such changes in PDQ, I'd say ...\n> \n> The contrib stuff is usually at the control of the author, so you can\n> make changes relatively late. However, the later the changes, the less\n> testing they get, but the decision is mostly yours, rather than core.\n\nWell I don't have anything in the hopper right now, so there is little\nchance anything would be ready before the release. My really question\nwas can I make large changes to a contrib module to a point release,\nmeaning, 7.4.0 will have what is in beta, but 7.4.1 would have a much\nimproved version. Does that sound possible? Or too radical for a point\nrelease?\n\n", "msg_date": "06 Aug 2003 00:21:51 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "Matthew T. O'Connor wrote:\n> On Tue, 2003-08-05 at 12:49, Bruce Momjian wrote:\n> > > Well, the bar is lower for contrib stuff than for core, but you'd better\n> > > get such changes in PDQ, I'd say ...\n> > \n> > The contrib stuff is usually at the control of the author, so you can\n> > make changes relatively late. However, the later the changes, the less\n> > testing they get, but the decision is mostly yours, rather than core.\n> \n> Well I don't have anything in the hopper right now, so there is little\n> chance anything would be ready before the release. My really question\n> was can I make large changes to a contrib module to a point release,\n> meaning, 7.4.0 will have what is in beta, but 7.4.1 would have a much\n> improved version. Does that sound possible? Or too radical for a point\n> release?\n\nYes, that is possible, but you should try to get lots of testers because\nthere is little testing in minor releases.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 6 Aug 2003 00:32:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "\"Matthew T. O'Connor\" <[email protected]> writes:\n> ... My really question\n> was can I make large changes to a contrib module to a point release,\n> meaning, 7.4.0 will have what is in beta, but 7.4.1 would have a much\n> improved version. Does that sound possible?\n\nFor core code, the answer would be a big NYET. We do not do feature\nadditions in point releases, only bug fixes. While contrib code is more\nunder the author's control than the core committee's control, I'd still\nsay that you'd be making a big mistake to not follow that basic\nguideline. People expect release x.y.z+1 to be the same as x.y.z except\nfor bug fixes. 
Introducing any new bugs into x.y.z+1 would cause a\nlarge loss in your credibility.\n\n(speaking as one who's introduced new bugs into a point-release\nrecently, and is still embarrassed about it, even though the intent\nwas only to fix older bugs...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Aug 2003 00:45:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "On Wed, Aug 06, 2003 at 12:45:34AM -0400, Tom Lane wrote:\n> For core code, the answer would be a big NYET. We do not do feature\n> additions in point releases, only bug fixes. While contrib code is more\n> under the author's control than the core committee's control, I'd still\n> say that you'd be making a big mistake to not follow that basic\n> guideline. People expect release x.y.z+1 to be the same as x.y.z except\n> for bug fixes. Introducing any new bugs into x.y.z+1 would cause a\n> large loss in your credibility.\n\n... and since contrib packages are distributed along with PG, it would\nalso be a loss to PG's credibility. IMHO, core should disallow feature\nadditions in point releases for contrib modules, as well as the core\ncode, except for very unusual situations. If contrib authors don't like\nthis facet of our release engineering process, they can always\ndistribute their code via some other outlet (gborg, SF, etc.).\n\n-Neil\n\n", "msg_date": "Wed, 6 Aug 2003 01:35:52 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "On Wed, 2003-08-06 at 00:45, Tom Lane wrote:\n> For core code, the answer would be a big NYET. We do not do feature\n> additions in point releases, only bug fixes. While contrib code is more\n> under the author's control than the core committee's control, I'd still\n> say that you'd be making a big mistake to not follow that basic\n> guideline. People expect release x.y.z+1 to be the same as x.y.z except\n> for bug fixes. Introducing any new bugs into x.y.z+1 would cause a\n> large loss in your credibility.\n> \n> (speaking as one who's introduced new bugs into a point-release\n> recently, and is still embarrassed about it, even though the intent\n> was only to fix older bugs...)\n\nRight, OK, that is basically the answer I was expecting, but thought I\nwould ask.\n\nMatthew\n\n", "msg_date": "06 Aug 2003 08:54:09 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" } ]
[ { "msg_contents": "I was wondering if anyone found a sweet spot regarding how many inserts to \ndo in a single transaction to get the best performance? Is there an \napproximate number where there isn't any more performance to be had or \nperformance may drop off?\n\nIt's just a general question...I don't have any specific scenario, other\nthan there are multiple backends doing many inserts.\n\nThanks,\n\nTrevor\n\n", "msg_date": "Tue, 5 Aug 2003 11:59:11 -0400 (EDT)", "msg_from": "Trevor Astrope <[email protected]>", "msg_from_op": true, "msg_subject": "How Many Inserts Per Transactions" }, { "msg_contents": "On Tue, 5 Aug 2003, Trevor Astrope wrote:\n\n> I was wondering if anyone found a sweet spot regarding how many inserts to \n> do in a single transaction to get the best performance? Is there an \n> approximate number where there isn't any more performance to be had or \n> performance may drop off?\n> \n> It's just a general question...I don't have any specific scenario, other\n> than there are multiple backends doing many inserts.\n\nI've found that after 1,000 or so inserts, there's no great increase in \nspeed.\n\n", "msg_date": "Tue, 5 Aug 2003 10:02:45 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Many Inserts Per Transactions" }, { "msg_contents": "Trevor Astrope kirjutas T, 05.08.2003 kell 18:59:\n> I was wondering if anyone found a sweet spot regarding how many inserts to \n> do in a single transaction to get the best performance? Is there an \n> approximate number where there isn't any more performance to be had or \n> performance may drop off?\n> \n> It's just a general question...I don't have any specific scenario, other\n> than there are multiple backends doing many inserts.\n\nI did test on huge (up to 60 million rows) simple table (5 fields with\nprimary key) and found that at that size many inserts per transaction\nwas actually a little slower than single inserts. It probably had to do\nwith inserting/checking new index entries and moving index pages from/to\ndisk. \n\nWith small sizes or no index ~100 inserts/transaction was significantly\nfaster though.\n\nI did run several (10-30) backends in parallel.\n\nThe computer was quad Xeon with 2GB RAM and ~50 MB/sec RAID.\n\n------------------\nHannu\n\n", "msg_date": "06 Aug 2003 09:42:33 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Many Inserts Per Transactions" } ]
[ { "msg_contents": "Trevor Astrope wrote:\n> I was wondering if anyone found a sweet spot regarding how many\n> inserts to do in a single transaction to get the best performance?\n> Is there an approximate number where there isn't any more\n> performance to be had or performance may drop off?\n> \n> It's just a general question...I don't have any specific scenario,\n> other than there are multiple backends doing many inserts.\n\nThe ideal should be enough to make the work involved in establishing\nthe transaction context be a small part of the cost of processing the\nqueries.\n\nThus, 2 inserts should be twice as good as 1, by virtue of dividing\nthe cost of the transaction 2 ways.\n\nIncreasing the number of inserts/updates to 10 means splitting the\ncost 10 ways.\n\nIncreasing the number to 1000 means splitting the cost 1000 ways,\nwhich, while better than merely splitting the cost 10 ways, probably\n_isn't_ a stunningly huge further improvement.\n\nThe benefits of grouping more together drops off; you'll probably NOT\nnotice much difference between grouping 10,000 updates together into a\ntransaction as compared to grouping 15,000 updates together.\n\nFortunately, it doesn't drop off to being downright WORSE.\n\nOn Oracle, I have seen performance Suck Badly when using SQL*Load; if\nI grouped too many updates together, it started blowing up the\n\"rollback segment,\" which was a Bad Thing. And in that kind of\ncontext, there will typically be some \"sweet spot\" where you want to\ncommit transactions before they grow too big.\n\nIn contrast, pg_dump/pg_restore puts the load of each table into a\nsingle COPY statement, so that if there are 15,000,000 entries in the\ntable, that gets grouped into a single (rather enormous) transaction.\nAnd doing things that way presents no particular problem.\n-- \noutput = (\"cbbrowne\" \"@\" \"libertyrms.info\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Tue, 05 Aug 2003 12:28:10 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How Many Inserts Per Transactions " }, { "msg_contents": "On 5 Aug 2003 at 12:28, Christopher Browne wrote:\n> On Oracle, I have seen performance Suck Badly when using SQL*Load; if\n> I grouped too many updates together, it started blowing up the\n> \"rollback segment,\" which was a Bad Thing. And in that kind of\n> context, there will typically be some \"sweet spot\" where you want to\n> commit transactions before they grow too big.\n> \n> In contrast, pg_dump/pg_restore puts the load of each table into a\n> single COPY statement, so that if there are 15,000,000 entries in the\n> table, that gets grouped into a single (rather enormous) transaction.\n> And doing things that way presents no particular problem.\n\nWell, psotgresql recycles WAL files and use data files as well to store \nuncommitted transaction. Correct me if I am wrong.\n\nOracle does not do this. \n\nWhat does this buy? Oracle has constant time commits. I doubt if postgresql has \nthem with such a design.\n\nFor what hassle that is worth, I would buy expand-on-disk as you go approach of \npostgresql rather than spending time designing rollback segments for each \napplication.\n\nIt's not nice when customer reports rollback segment overflow. 
Tablespaces over \nfile is royal pain when it does not work.\n\nJust a thought..\n\nBye\n Shridhar\n\n--\nboss, n:\tAccording to the Oxford English Dictionary, in the Middle Ages the\t\nwords \"boss\" and \"botch\" were largely synonymous, except that boss,\tin addition \nto meaning \"a supervisor of workers\" also meant \"an\tornamental stud.\"\n\n", "msg_date": "Wed, 06 Aug 2003 15:46:06 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How Many Inserts Per Transactions " } ]
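As a companion to the single-COPY point made above (the approach pg_restore takes), here is a hedged sketch of loading a whole file in one statement; the table bulk_data and the file paths are made up for the example.

    CREATE TABLE bulk_data (id integer, payload text);

    -- Server-side load: the backend reads the file directly (superuser only).
    COPY bulk_data FROM '/tmp/bulk_data.copy';

    -- Client-side equivalent from psql, streaming the file over the connection:
    \copy bulk_data from 'bulk_data.copy'

Either way the load runs as a single statement, so the per-statement and per-transaction costs discussed above are paid once for the entire file; the default input format is tab-separated columns, one row per line.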
[ { "msg_contents": "\nI didn't get any helpful responses to my previous email so I thought\nI would try again, this time with example code.\n\nBelow is my orignal email and code for a dead simple ASCII menu driven \napplication that demonstrates the problem. The app starts up with a\nmenu of 4 items to create the test databases, to run the float or int\ntest or to quit. The databases created are named \"int_db\" and \"float_db\"\nand contain just 1 table and an index on that table, The tables are\npopulated with 2000 records and the \"test\" menu options perform\nan UPDATE on each of the 2000 records. Postmaster is assumed\nto be running on \"localhost\". You only need to select the \"Create \ndatabases\" option on the menu once. \n\nI would greatly appreciate it if someone could run this code in their\nenvironment and let me know if you get results similiar to mine.\nThe INT test results in execution times of 11 - 50+ secs increasing\neach time the test is run. The FLOAT test execution times are\nconsistently < 3 secs regardless of how many times it is run.\n\nI hope this piques someones curiosity. I'd really like to know\nwhat is going on here...\n\n\n***********************************************************************\nMedora Schauer\nSr. Software Engineer\n\nFairfield Industries\nSugar Land, Tx 77478-3469\n\[email protected]\n***********************************************************************\n\noriginal email:\n\n************************************************************************************\nI have a table with a 3 column key. I noticed that when I update a non-key field\nin a record of the table that the update was taking longer than I thought it \nshould. After much experimenting I discovered that if I changed the data\ntypes of two of the key columns to FLOAT8 that I got vastly improved\nperformance.\n\nOrignally the data types of the 3 columns were FLOAT4, FLOAT4 and INT4.\nMy plaform is a PowerPC running Linux. I speculated that the performance\nimprovement might be because the PowePC is a 64 bit processor but when\nI changed the column data types to INT8, INT8 and INT4 I din't see any\nimprovement. I also ran my test code on a Pentium 4 machine with the same\nresults in all cases.\n\nThis doesn't make any sense to me. 
Why would FLOAT8 keys ever result\nin improved performance?\n\nI verified with EXPLAIN that the index is used in every case for the update.\n\nMy postmaster version is 7.1.3.\n\nAny help will be greatly appreciated.\n\n*************************************************************************************\n\ntest code:\n\n#include <stdlib.h>\n#include <errno.h>\n#include <string.h>\n#include <libpq-fe.h>\n#include <time.h>\n#include <sys/time.h>\n#include <unistd.h>\n\n#ifndef TRUE\n#define TRUE (1)\n#define FALSE (0)\n#endif\n\n#define INT_TYPE (0)\n#define FLOAT_TYPE (1)\n\nchar buffer[200];\nPGconn *fconn = NULL, *iconn = NULL;\n\nPGconn *CreateTestDb(int dtype);\nvoid AddRecords(PGconn *conn, int dtype);\nPGconn *Connect(char *dbname);\nint DoSql(PGconn *conn, char *query, PGresult **result);\nlong FindExeTime(struct timeval *start_time, struct timeval *end_time);\nvoid UpdateTraceCounts(int dtype);\n\n \nint\nmain (int argc, char **argv) {\n char database[40];\n char option[10];\n int quit = FALSE;\n char response[100];\n\n while (!quit){\n printf (\"\\n\\n\\n\" \\\n \"c - Create databases.\\n\" \\\n \"f - Run FLOAT8 test.\\n\" \\\n \"i - Run INT8 test.\\n\" \\\n \"q - Quit\\n\\n\" \\\n \"Selection : \");\n gets(option);\n \n switch (option[0]){\n case 'c' :\n printf (\"Starting creation of FLOAT8 database...\\n\");\n if ((fconn = CreateTestDb(FLOAT_TYPE)) == NULL){\n printf (\"\\n#### ERROR #### : Counldn't build FLOAT8 db.\\n\");\n } else {\n printf (\"Adding 2000 records to FLOAT8 database...\\n\");\n AddRecords(fconn, FLOAT_TYPE);\n printf (\"\\nFLOAT8 Db created.\\n\");\n } \n\n printf (\"Starting creation of INT8 database...\\n\");\n if ((iconn = CreateTestDb(INT_TYPE)) == NULL){\n printf (\"\\n#### ERROR #### : Counldn't build INT8 db.\\n\");\n } else {\n printf (\"Adding 2000 records to INT8 database...\\n\");\n AddRecords(iconn, INT_TYPE);\n printf (\"\\nINT8 Db created.\\n\");\n } \n break;\n \n case 'f' :\n printf (\"Updating 2000 records in FLOAT8 database...\\n\");\n UpdateTraceCounts(FLOAT_TYPE);\n break;\n\n case 'i' :\n printf (\"Updating 2000 records in INT8 database...\\n\");\n UpdateTraceCounts(INT_TYPE);\n break;\n\n case 'q' :\n quit = TRUE;\n break;\n\n default:\n printf (\"Invalid option.\\n\");\n } \n }\n\n if (iconn) PQfinish(iconn);\n if (fconn) PQfinish(fconn);\n exit (0);\n}\n \n \nvoid\nUpdateTraceCounts(int dtype){\n int i, last, status;\n PGresult *result;\n struct timeval exe_begin, exe_end, exe_begin2, exe_end2;\n long exe_time, exe_time2;\n int shotpoint, shotline;\n PGconn *conn;\n \n if (dtype == FLOAT_TYPE){\n if (fconn == NULL){\n if ((fconn = Connect(\"float_db\")) == NULL){\n printf (\"#### ERROR #### : Cannot connect to float_db database.\\n\");\n return;\n }\n }\n conn = fconn;\n } else {\n if (iconn == NULL){\n if ((iconn = Connect(\"int_db\")) == NULL){\n printf (\"#### ERROR #### : Cannot connect to int_db database.\\n\");\n return;\n } \n }\n conn = iconn;\n }\n\n last = 2000; \n\n gettimeofday(&exe_begin2, NULL);\n \n if ((status = DoSql(conn, \"BEGIN TRANSACTION\", &result)) != 0){\n printf(\"#### ERROR #### Error starting database transaction.\\n\"); \n if (result) PQclear(result);\n }\n \n gettimeofday(&exe_begin, NULL);\n \n shotline = 1;\n shotpoint = 10001;\n for (i = 0; i < last; i++){\n if (dtype == INT_TYPE){\n snprintf(buffer, sizeof(buffer), \n \"UPDATE shot_record SET trace_count = %d \" \\\n \"WHERE shot_line_num = %d \" \\\n \" AND shotpoint = %d \" \\\n \" AND index = %d\" ,\n 0, shotline, shotpoint + i, 0);\n } else {\n 
snprintf(buffer, sizeof(buffer),\n \"UPDATE shot_record SET trace_count = %d \" \\\n \"WHERE shot_line_num = %f \" \\\n \" AND shotpoint = %f \" \\\n \" AND index = %d\" ,\n 0, (float)shotline, (float)shotpoint + (float)i, 0);\n }\n\n status = DoSql(conn, buffer, &result);\n if (status != 0){\n printf (\"#### ERROR #### : Error updating db.\\n\");\n break;\n }\n }\n \n gettimeofday(&exe_end, NULL);\n\n if ((status = DoSql(conn, \"COMMIT TRANSACTION\", &result)) != 0){\n printf(\"#### ERROR #### : Error commiting database transaction.\\n\"); \n if (result) PQclear(result);\n }\n \n gettimeofday(&exe_end2, NULL);\n \n exe_time = FindExeTime(&exe_begin, &exe_end);\n exe_time2 = FindExeTime(&exe_begin2, &exe_end2);\n\n printf (\"time to complete updates: %ld msec\\n\", exe_time);\n printf (\"total time: %ld msec\\n\", exe_time2);\n}\n\nPGconn *\nCreateTestDb(int dtype){\n PGconn *conn;\n int status;\n PGresult *result;\n char *dbname;\n char *icmd_strings[] = { \n \"CREATE TABLE shot_record ( \" \\\n \"shot_line_num INT8, \" \\\n \"shotpoint INT8, \" \\\n \"index INT2, \" \\\n \"trace_count INT4, \" \\\n \"PRIMARY KEY (shot_line_num, shotpoint, index)) \",\n NULL}; \n char *fcmd_strings[] = {\n \"CREATE TABLE shot_record ( \" \\\n \"shot_line_num FLOAT8, \" \\\n \"shotpoint FLOAT8, \" \\\n \"index INT2, \" \\\n \"trace_count INT4, \" \\\n \"PRIMARY KEY (shot_line_num, shotpoint, index))\", \n NULL};\n \n char **cmdP;\n\n if (dtype == INT_TYPE){\n cmdP = icmd_strings;\n dbname = \"int_db\";\n } else {\n cmdP = fcmd_strings;\n dbname = \"float_db\";\n }\n \n /* Open a connection to the template1 database. */\n if ((conn = Connect(\"template1\")) == NULL){\n fprintf(stderr, \"Database connect query: %s\\n\", buffer);\n return(NULL);\n }\n\n /* Create the new database. */\n snprintf(buffer, sizeof(buffer), \"CREATE DATABASE %s\", dbname);\n result = PQexec(conn, buffer);\n \n if (!result || PQresultStatus(result) != PGRES_COMMAND_OK){\n if (result){\n PQclear(result);\n }\n fprintf(stderr, \"%s\\n %s\\n\", buffer, PQresultErrorMessage(result));\n PQfinish(conn);\n return(NULL);\n }\n \n PQclear(result); \n PQfinish(conn);\n \n /* Open a connection to the new database. */\n if ((conn = Connect(dbname)) == NULL){\n return (NULL);\n }\n \n /* Create the database tables. */\n while (*cmdP){\n if ((status = DoSql(conn, *cmdP, &result)) != 0){\n fprintf(stderr,\"%s\\n%s\\n\", \n *cmdP, PQresultErrorMessage(result));\n PQfinish(conn);\n return(NULL);\n } else {\n PQclear(result);\n }\n ++cmdP;\n }\n \n return(conn);\n}\n\nvoid\nAddRecords(PGconn *conn, int dtype){\n int num_shots, i;\n PGresult *result;\n int shotpoint;\n \n /* Add a bunch of records to the table. 
*/\n num_shots = 2000; \n \n shotpoint = 10001;\n for (i = 0; i < num_shots; i++){\n \n if (dtype == INT_TYPE){\n sprintf (buffer, \"INSERT INTO Shot_Record VALUES (1, %d, 0, 0)\",\n shotpoint + i, shotpoint + i);\n } else {\n sprintf (buffer, \"INSERT INTO Shot_Record VALUES (1.0, %f, 0, 0) \",\n shotpoint + (float)i, shotpoint + (float)i);\n }\n result = PQexec(conn, buffer);\n \n if (!result || PQresultStatus(result) != PGRES_COMMAND_OK){\n if (result){\n PQclear(result);\n }\n fprintf(stderr, \"%s\\n %s\\n\", buffer,PQresultErrorMessage(result));\n PQfinish(conn);\n return;\n }\n }\n}\n \nPGconn *\nConnect(char *dbname){\n char *func = \"Connect()\";\n PGconn *conn;\n\n snprintf (buffer, sizeof(buffer), \"host='localhost' dbname='%s'\", dbname);\n conn = PQconnectdb(buffer);\n \n if (PQstatus(conn) != CONNECTION_OK){\n fprintf(stderr, \"%s: Database connect query: %s\\n\", func, buffer);\n return(NULL);\n }\n \n return (conn);\n} \n\nint\nDoSql(PGconn *conn, char *query, PGresult **result){\n char *func = \"DoSql()\";\n int status = 0;\n \n if (!conn){\n return(-1);\n }\n \n *result = PQexec(conn, query);\n if (!*result || ((PQresultStatus(*result) != PGRES_COMMAND_OK) \n && (PQresultStatus(*result) != PGRES_TUPLES_OK))){\n if (*result){\n fprintf(stderr, \"%s: %s\\n%s\\n\", func, query, PQresultErrorMessage(*result));\n PQclear(*result);\n *result = NULL;\n }\n\n /* See if the database connection is valid. */\n if (PQstatus(conn) == CONNECTION_BAD){\n status = -1;\n } else {\n status = -2;\n }\n } else {\n status = 0;\n }\n \n return(status);\n}\n\n#define DB_TIME_CONVERT (1000)\n\nlong\nFindExeTime(struct timeval *start_time, struct timeval *end_time){\n long exe_time;\n\n exe_time = ((1000000 * end_time->tv_sec) + end_time->tv_usec) -\n ((1000000 * start_time->tv_sec) + start_time->tv_usec);\n\n return(exe_time/DB_TIME_CONVERT);\n}\n\n\n", "msg_date": "Tue, 5 Aug 2003 12:30:46 -0500", "msg_from": "\"Medora Schauer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Odd performance results - more info" }, { "msg_contents": "Medora Schauer wrote:\n> I would greatly appreciate it if someone could run this code in their\n> environment and let me know if you get results similiar to mine.\n> The INT test results in execution times of 11 - 50+ secs increasing\n> each time the test is run. The FLOAT test execution times are\n> consistently < 3 secs regardless of how many times it is run.\n\nWithout actually trying the code, I'd bet that an index is getting used \nfor the float8 case, but not in the int8 case:\n\n if (dtype == INT_TYPE){\n snprintf(buffer, sizeof(buffer),\n \"UPDATE shot_record SET trace_count = %d \" \\\n \"WHERE shot_line_num = %d \" \\\n \" AND shotpoint = %d \" \\\n \" AND index = %d\" ,\n 0, shotline, shotpoint + i, 0);\n } else {\n snprintf(buffer, sizeof(buffer),\n \"UPDATE shot_record SET trace_count = %d \" \\\n \"WHERE shot_line_num = %f \" \\\n \" AND shotpoint = %f \" \\\n \" AND index = %d\" ,\n 0, (float)shotline, (float)shotpoint + (float)i, 0);\n }\n\nTry running EXPLAIN ANALYZE on these update statements manually. It also \nmight help to run VACUUM ANALYZE after populating the tables.\n\nHTH,\n\nJoe\n\n", "msg_date": "Tue, 05 Aug 2003 10:41:43 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd performance results - more info" }, { "msg_contents": "\nOn Tue, 5 Aug 2003, Medora Schauer wrote:\n\n> I hope this piques someones curiosity. 
I'd really like to know\n> what is going on here...\n\nI think you're getting caught by the typing of constants preventing\nindex scans.\n\n> \"UPDATE shot_record SET trace_count = %d \" \\\n> \"WHERE shot_line_num = %d \" \\\n> \" AND shotpoint = %d \" \\\n> \" AND index = %d\" ,\n> 0, shotline, shotpoint + i, 0);\n\nI believe that the int constants are going to generally be treated as\nint4. If you're comparing them to an int8 you're not going to get\nan index scan probably. Try explicitly casting the constants to\nthe appropriate type: CAST(%d AS int8).\n\n\n> snprintf(buffer, sizeof(buffer),\n> \"UPDATE shot_record SET trace_count = %d \" \\\n> \"WHERE shot_line_num = %f \" \\\n> \" AND shotpoint = %f \" \\\n> \" AND index = %d\" ,\n> 0, (float)shotline, (float)shotpoint + (float)i, 0);\n\nSame general issue here, I think the floats are going to get treated\nas float8 in 7.1, so you'll probably need an explicit cast.\n\nAs Joe said, try explain on the queries for more details.\n\n\n", "msg_date": "Tue, 5 Aug 2003 10:50:05 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd performance results - more info" } ]
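A short illustration of the cast Stephan suggests, written against the shot_record table from the test program; the literal values are arbitrary. On 7.x an unadorned integer constant is typed int4, so comparing it to an int8 key column keeps the planner off the index (later releases relaxed this).

    -- Constant typed int4: no index scan against the int8 key columns on 7.x.
    UPDATE shot_record SET trace_count = 0
     WHERE shot_line_num = 1 AND shotpoint = 10001 AND index = 0;

    -- Explicit casts (or quoted literals such as shotpoint = '10001')
    -- let the primary-key index be matched.
    UPDATE shot_record SET trace_count = 0
     WHERE shot_line_num = CAST(1 AS int8)
       AND shotpoint = CAST(10001 AS int8)
       AND index = 0;

A bare decimal literal, by contrast, is treated as float8 on these versions (as Stephan notes), which would explain why the FLOAT8 version of the table happened to match its index while the original FLOAT4 columns did not.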
[ { "msg_contents": "\n \n> Medora Schauer wrote:\n> > I would greatly appreciate it if someone could run this \n> code in their\n> > environment and let me know if you get results similiar to mine.\n> > The INT test results in execution times of 11 - 50+ secs increasing\n> > each time the test is run. The FLOAT test execution times are\n> > consistently < 3 secs regardless of how many times it is run.\n> \n> Without actually trying the code, I'd bet that an index is \n> getting used \n> for the float8 case, but not in the int8 case:\n> \n\nI've already verifed that the index is used in both cases.\n\nMedora\n", "msg_date": "Tue, 5 Aug 2003 12:58:23 -0500", "msg_from": "\"Medora Schauer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd performance results - more info" } ]
[ { "msg_contents": "\n> \n> > Medora Schauer wrote:\n> > > I would greatly appreciate it if someone could run this \n> > code in their\n> > > environment and let me know if you get results similiar to mine.\n> > > The INT test results in execution times of 11 - 50+ secs \n> increasing\n> > > each time the test is run. The FLOAT test execution times are\n> > > consistently < 3 secs regardless of how many times it is run.\n> > \n> > Without actually trying the code, I'd bet that an index is \n> > getting used \n> > for the float8 case, but not in the int8 case:\n> > \n> \n> I've already verifed that the index is used in both cases.\n> \n\nI stand corrected. A sequential scan was being used in the INT8 case. \nWhen I changed it to INT4 I got better results. I got confused cuz \nI had changed the types so often.\n\nThanks for your help,\n\nMedora\n\n", "msg_date": "Tue, 5 Aug 2003 13:18:29 -0500", "msg_from": "\"Medora Schauer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd performance results - more info" } ]
[ { "msg_contents": "Matthew T. O'Connor wrote:\n> Fair point, my only concern is that a backend integrated\n> pg_autovacuum would be radically different from the current libpq\n> based client application.\n\nUnfortunately, a \"configurable-via-tables\" pg_autovacuum is also going\nto be quite different from the current \"unconfigurable\" version.\n\nIf we were to make it configurable, I would suggest doing so via\nspecifying a database and schema into which it would then insert a set\nof tables to provide whatever information was considered worth\n'fiddling' with.\n\nBut at that point, it makes sense to add in quite a bit of\n\"configurable\" behaviour, such as:\n\n -> Specifying that certain tables should _never_ be automatically \n vacuumed.\n\n -> Establishing a \"queue\" of tables that pg_autovacuum plans to\n vacuum, so that users could add in desired vacuums (\"after the\n other stuff being handled, force in a vacuum of app_table_foo\").\n That way, vacuums can be 'forced in' without introducing the\n possibility that multiple vacuums might be done at once...\n\n -> Making information about what vacuums have been done/planned\n persistent across runs of pg_autovacuum, and even across\n shutdowns of the DBMS.\n\nThis changes behaviour enough that I'm not sure it's the same\n\"program\" as the unconfigurable version. Almost every option would be\nsubstantially affected by the logic:\n\n if (CONFIG_DATA_IN_DB) {\n /* Logic path that uses data in Vacuum Schema */\n } else {\n /* More banal logic */\n }\n\nIf I can store configuration in the database, then I'd like to also\nmake up a view or two, and possibly even base the logic used on views\nthat combine configuration tables with system views. In effect, that\nmakes for a _third_ radically different option.\n-- \noutput = reverse(\"ofni.smrytrebil\" \"@\" \"enworbbc\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Tue, 05 Aug 2003 17:40:16 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some vacuum & tuning help " }, { "msg_contents": "On Tue, 2003-08-05 at 17:40, Christopher Browne wrote:\n> Unfortunately, a \"configurable-via-tables\" pg_autovacuum is also going\n> to be quite different from the current \"unconfigurable\" version.\n\ntrue, however I would like to preserve the \"unconfigured\" functionality\nso that it can be run against a totally unmodified database cluster. If\nit finds configuration information on the server then it uses it,\notherwise it just acts as it does now.\n\n> But at that point, it makes sense to add in quite a bit of\n> \"configurable\" behaviour, such as:\n> \n> -> Specifying that certain tables should _never_ be automatically \n> vacuumed.\n\nagreed\n\n> -> Establishing a \"queue\" of tables that pg_autovacuum plans to\n> vacuum, so that users could add in desired vacuums (\"after the\n> other stuff being handled, force in a vacuum of app_table_foo\").\n> That way, vacuums can be 'forced in' without introducing the\n> possibility that multiple vacuums might be done at once...\n\nmakes sense.\n\n> -> Making information about what vacuums have been done/planned\n> persistent across runs of pg_autovacuum, and even across\n> shutdowns of the DBMS.\n\ngood.\n\n> This changes behaviour enough that I'm not sure it's the same\n> \"program\" as the unconfigurable version. 
Almost every option would be\n> substantially affected by the logic:\n> \n> if (CONFIG_DATA_IN_DB) {\n> /* Logic path that uses data in Vacuum Schema */\n> } else {\n> /* More banal logic */\n> }\n\nI'm not so sure it's that different. In either case we are going to\nhave a threshold and decide to vacuum based on that threshold. The\nchange is only that the data would be persistent, and could be\ncustomized on a per table basis. The logic only really changes if\nrunning unconfigured uses different data than the configured version,\nwhich I don't see as being proposed.\n\n> If I can store configuration in the database, then I'd like to also\n> make up a view or two, and possibly even base the logic used on views\n> that combine configuration tables with system views. In effect, that\n> makes for a _third_ radically different option.\n\nNot sure I see what all you are implying here. Please expand on this if\nyou deem it worthy.\n\n\nI guess I'll start coding again. I'll send an email to the hackers list\ntomorrow evening with as much info / design as I can think of.\n\nMatthew\n\n", "msg_date": "06 Aug 2003 00:36:12 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" }, { "msg_contents": "\n> On Tue, 2003-08-05 at 17:40, Christopher Browne wrote:\n> > Unfortunately, a \"configurable-via-tables\" pg_autovacuum is also going\n> > to be quite different from the current \"unconfigurable\" version.\n\nYou don't need to create actual tables - just use 'virtual' tables, like the\npg_settings one. That's all based off a set-returning-function. You can\nuse updates and inserts to manipulate internal data structures or\nsomething...\n\nChris\n\n", "msg_date": "Wed, 6 Aug 2003 12:41:06 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some vacuum & tuning help" } ]
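To give the table-driven configuration idea some shape, here is one possible sketch; everything in it (the avac schema, the column names, the defaults) is invented for illustration and is not part of the shipped pg_autovacuum.

    CREATE SCHEMA avac;

    -- Per-table overrides; absence of a row means "use the built-in defaults".
    CREATE TABLE avac.table_settings (
        relid        oid PRIMARY KEY,                  -- pg_class OID of the table
        enabled      boolean NOT NULL DEFAULT true,    -- false = exclusion list
        vac_base     integer NOT NULL DEFAULT 1000,    -- base vacuum threshold
        vac_scale    real    NOT NULL DEFAULT 2.0,     -- multiplier on row count
        last_vacuum  timestamp                         -- persisted across restarts
    );

    -- Pending work; users can force a table into the queue by inserting a row.
    CREATE TABLE avac.vacuum_queue (
        queued_at    timestamp NOT NULL DEFAULT now(),
        relid        oid NOT NULL REFERENCES avac.table_settings (relid)
    );

A view joining avac.table_settings to pg_class and the statistics views would then give the daemon (and curious DBAs) one place to see thresholds next to current activity, which seems to be the third option hinted at above.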
[ { "msg_contents": "Hi All!\n\n\n\tI have installed PostgreSQL 7.3.2 on FreeBSD 4.7, running on PC with \nCPU Pentium II 400MHz and 384Mb RAM.\n\n\tProblem is that SQL statement (see below) is running too long. With \ncurrent WHERE clause 'SUBSTR(2, 2) IN ('NL', 'NM') return 25 records. \nWith 1 record, SELECT time is about 50 minutes and takes approx. 120Mb \nRAM. With 25 records SELECT takes about 600Mb of memory and ends after \nabout 10 hours with error: \"Memory exhausted in AllocSetAlloc(32)\".\n\n\t***\n\tHow can I speed up processing? Why query (IMHO not too complex) \nexecutes so long? :(\n\t***\n\n\tInformation about configuration, data structures and table sizes see \nbelow. Model picture attached.\n\n\tCurrent postgresql.conf settings (some) are:\n\n=== Cut ===\nmax_connections = 8\n\nshared_buffers = 8192\nmax_fsm_relations = 256\nmax_fsm_pages = 65536\nmax_locks_per_transaction = 16\nwal_buffers = 256\n\nsort_mem = 131072\nvacuum_mem = 16384\n\ncheckpoint_segments = 4\ncheckpoint_timeout = 300\ncommit_delay = 32000\ncommit_siblings = 4\nfsync = false\n\nenable_seqscan = false\n\neffective_cache_size = 65536\n=== Cut ===\n\n\tSELECT statement is:\n\nSELECT\tshowcalc('B00204', dd, r020, t071) AS s04\nFROM\tv_file02wide\nWHERE\ta011 = 3\n\tAND inrepdate(data)\n\tAND SUBSTR(ncks, 2, 2) IN ('NL', 'NM')\n\tAND r030 = 980;\n\n\tQuery plan is:\n \n \n QUERY PLAN \n \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=174200202474.99..174200202474.99 rows=1 width=143)\n -> Hash Join (cost=174200199883.63..174200202474.89 rows=43 width=143)\n Hash Cond: (\"outer\".id_k041 = \"inner\".id_k041)\n -> Hash Join (cost=174200199880.57..174200202471.07 rows=43 \nwidth=139)\n Hash Cond: (\"outer\".id_r030 = \"inner\".id_r030)\n -> Hash Join (cost=174200199865.31..174200202410.31 \nrows=8992 width=135)\n Hash Cond: (\"outer\".id_r020 = \"inner\".id_r020)\n -> Hash Join \n(cost=174200199681.91..174200202069.55 rows=8992 width=124)\n Hash Cond: (\"outer\".id_dd = \"inner\".id_dd)\n -> Merge Join \n(cost=174200199676.04..174200201906.32 rows=8992 width=114)\n Merge Cond: (\"outer\".id_v = \"inner\".id_v)\n Join Filter: ((\"outer\".data >= CASE \nWHEN (\"inner\".dataa IS NOT NULL) THEN \"inner\".dataa WHEN (\"outer\".data \nIS NOT NULL) THEN \"outer\".data ELSE NULL::date END) AND (\"outer\".data <= \nCASE WHEN (\"inner\".datab IS NOT NULL) THEN \"inner\".datab WHEN \n(\"outer\".data IS NOT NULL) THEN \"outer\".data ELSE NULL::date END))\n -> Sort (cost=42528.39..42933.04 \nrows=161858 width=65)\n Sort Key: filexxr.id_v\n -> Hash Join \n(cost=636.25..28524.10 rows=161858 width=65)\n Hash Cond: (\"outer\".id_obl \n= \"inner\".id_obl)\n -> Hash Join \n(cost=632.67..25687.99 rows=161858 width=61)\n Hash Cond: \n(\"outer\".id_r = \"inner\".id_r)\n -> Index Scan using \nindex_file02_k041 on file02 (cost=0.00..18951.63 rows=816093 width=32)\n -> Hash \n(cost=615.41..615.41 rows=6903 width=29)\n -> Index Scan \nusing index_filexxr_a011 on filexxr (cost=0.00..615.41 rows=6903 width=29)\n Index \nCond: (id_a011 = 3)\n Filter: \ninrepdate(data)\n -> Hash (cost=3.47..3.47 \nrows=43 width=4)\n -> Index Scan using \nkod_obl_pkey on kod_obl obl (cost=0.00..3.47 rows=43 width=4)\n -> Sort 
\n(cost=174200157147.65..174200157150.57 rows=1167 width=49)\n Sort Key: dov_tvbv.id_v\n -> Merge Join \n(cost=0.00..174200157088.20 rows=1167 width=49)\n Merge Cond: \n(\"outer\".id_bnk = \"inner\".id_bnk)\n -> Index Scan using \ndov_bank_pkey on dov_bank (cost=0.00..290100261328.45 rows=1450 width=13)\n Filter: (subplan)\n SubPlan\n -> Materialize \n(cost=100000090.02..100000090.02 rows=29 width=11)\n -> Seq Scan \non dov_bank (cost=100000000.00..100000090.02 rows=29 width=11)\n \nFilter: ((substr((box)::text, 2, 2) = 'NL'::text) OR \n(substr((box)::text, 2, 2) = 'NM'::text))\n -> Index Scan using \nindex_dov_tvbv_bnk on dov_tvbv (cost=0.00..142.42 rows=2334 width=36)\n -> Hash (cost=5.83..5.83 rows=16 width=10)\n -> Index Scan using ek_pok_r_pkey on \nek_pok_r epr (cost=0.00..5.83 rows=16 width=10)\n -> Hash (cost=178.15..178.15 rows=2100 width=11)\n -> Index Scan using kl_r020_pkey on kl_r020 \n (cost=0.00..178.15 rows=2100 width=11)\n -> Hash (cost=15.26..15.26 rows=1 width=4)\n -> Index Scan using kl_r030_pkey on kl_r030 r030 \n (cost=0.00..15.26 rows=1 width=4)\n Filter: ((r030)::text = '980'::text)\n -> Hash (cost=3.04..3.04 rows=4 width=4)\n -> Index Scan using kl_k041_pkey on kl_k041 \n(cost=0.00..3.04 rows=4 width=4)\n(45 rows)\n\n\tFunction showcalc definition is:\n\nCREATE OR REPLACE FUNCTION showcalc(VARCHAR(10), VARCHAR(2), VARCHAR(4), \nNUMERIC(16)) RETURNS NUMERIC(16)\nLANGUAGE SQL AS '\n-- Parameters: code, dd, r020, t071\n\tSELECT COALESCE(\n\t\t(SELECT sc.koef * $4\n\t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n\t\t\tWHERE s.kod LIKE $1\n\t\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) LIKE ''[''\n\t\t\t\tAND SUBSTR(acc_mask, 1, 4) LIKE $3\n\t\t\t\tAND SUBSTR(acc_mask, 5, 1) LIKE SUBSTR($2, 1, 1)),\n\t\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2, \nLENGTH(acc_mask) - 2), $2, $3, $4), 0))\n\t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n\t\t\tWHERE s.kod LIKE $1\n\t\t\t\tAND SUBSTR(acc_mask, 1, 1) LIKE ''[''),\n\t\t0) AS showing;\n';\n\n\tView v_file02wide is:\n\nCREATE VIEW v_file02wide AS\nSELECT id_a011 AS a011, data, obl.ko, obl.nazva AS oblast, b030, \nbanx.box AS ncks, banx.nazva AS bank,\n\tepr.dd, r020, r030, a3, r030.nazva AS valuta, k041,\n\t-- Sum equivalent in national currency\n\tt071 * get_kurs(id_r030, data) AS t070,\n\tt071\nFROM v_file02 AS vf02\n\tJOIN kod_obl AS obl USING(id_obl)\n\tJOIN (dov_bank NATURAL JOIN dov_tvbv) AS banx\n\t\tON banx.id_v = vf02.id_v\n\t\t\tAND data BETWEEN COALESCE(banx.dataa, data)\n\t\t\t\tAND COALESCE(banx.datab, data)\n\tJOIN ek_pok_r AS epr USING(id_dd)\n\tJOIN kl_r020 USING(id_r020)\n\tJOIN kl_r030 AS r030 USING(id_r030)\n\tJOIN kl_k041 USING(id_k041);\n\n\tFunction inrepdate is:\n\nCREATE OR REPLACE FUNCTION inrepdate(DATE) RETURNS BOOL\nLANGUAGE SQL AS '\n\t-- Returns true if given date is in repdate\n\tSELECT (SELECT COUNT(*) FROM repdate\n\t\tWHERE $1 BETWEEN COALESCE(data1, CURRENT_DATE)\n\t\t\tAND COALESCE(data2, CURRENT_DATE))\n\t\t> 0;\n';\n\n\tTable sizes (records)\n\tfilexxr\t\t34712\n\tfile02\t\t816589\n\tv_file02\t816589\n\tkod_obl\t\t43\n\tbanx\t\t2334\n\tek_pok_r\t16\n\tkl_r020\t\t2100\n\tkl_r030\t\t208\n\tkl_r041\t\t4\n\tv_file02wide\t\n\tshowing\t\t2787\n\tshowcomp\t13646\n\trepdate\t\t1\n\n\tTable has indexes almost for all selected fields.\n\tshowcalc in this query selects and uses 195 rows.\n\tTotal query size is 8066 records (COUNT(*) executes about 33 seconds \nand uses 120Mb RAM).\n\n\nWith best regards\n\tYaroslav Mazurak.", "msg_date": "Wed, 06 Aug 2003 10:34:54 +0300", 
"msg_from": "Yaroslav Mazurak <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL performance problem -> tuning" }, { "msg_contents": "On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:\n> \t\tHi All!\n>\n>\n> \tI have installed PostgreSQL 7.3.2 on FreeBSD 4.7, running on PC with\n> CPU Pentium II 400MHz and 384Mb RAM.\n\nVersion 7.3.4 is just out - probably worth upgrading as soon as it's \nconvenient.\n\n> \tProblem is that SQL statement (see below) is running too long. With\n> current WHERE clause 'SUBSTR(2, 2) IN ('NL', 'NM') return 25 records.\n> With 1 record, SELECT time is about 50 minutes and takes approx. 120Mb\n> RAM. With 25 records SELECT takes about 600Mb of memory and ends after\n> about 10 hours with error: \"Memory exhausted in AllocSetAlloc(32)\".\n[snip]\n>\n> \tCurrent postgresql.conf settings (some) are:\n>\n> === Cut ===\n> max_connections = 8\n>\n> shared_buffers = 8192\n> max_fsm_relations = 256\n> max_fsm_pages = 65536\n> max_locks_per_transaction = 16\n> wal_buffers = 256\n>\n> sort_mem = 131072\nThis sort_mem value is *very* large - that's 131MB for *each sort* that gets \ndone. I'd suggest trying something in the range 1,000-10,000. What's probably \nhappening with the error above is that PG is allocating ridiculous amounts of \nmemory, the machines going into swap and everything eventually grinds to a \nhalt.\n\n> vacuum_mem = 16384\n>\n> checkpoint_segments = 4\n> checkpoint_timeout = 300\n> commit_delay = 32000\n> commit_siblings = 4\n> fsync = false\n\nI'd turn fsync back on - unless you don't mind losing your data after a crash.\n\n> enable_seqscan = false\n\nDon't tinker with these in a live system, they're only really for \ntesting/debugging.\n\n> effective_cache_size = 65536\n\nSo you typically get about 256MB cache usage in top/free?\n\n> === Cut ===\n>\n> \tSELECT statement is:\n>\n> SELECT\tshowcalc('B00204', dd, r020, t071) AS s04\n> FROM\tv_file02wide\n> WHERE\ta011 = 3\n> \tAND inrepdate(data)\n> \tAND SUBSTR(ncks, 2, 2) IN ('NL', 'NM')\n> \tAND r030 = 980;\n\nHmm - mostly views and function calls, OK - I'll read on.\n\n> (cost=174200202474.99..174200202474.99 rows=1 width=143) -> Hash Join \n ^^^^^^^\nThis is a BIG cost estimate and you've got lots more like them. I'm guessing \nit's because of the sort_mem / enable_seqscan settings you have. 
The numbers \ndon't make sense to me - it sounds like you've pushed the cost estimator into \na very strange corner.\n\n> \tFunction showcalc definition is:\n>\n> CREATE OR REPLACE FUNCTION showcalc(VARCHAR(10), VARCHAR(2), VARCHAR(4),\n> NUMERIC(16)) RETURNS NUMERIC(16)\n> LANGUAGE SQL AS '\n> -- Parameters: code, dd, r020, t071\n> \tSELECT COALESCE(\n> \t\t(SELECT sc.koef * $4\n> \t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n> \t\t\tWHERE s.kod LIKE $1\n> \t\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) LIKE ''[''\n> \t\t\t\tAND SUBSTR(acc_mask, 1, 4) LIKE $3\n> \t\t\t\tAND SUBSTR(acc_mask, 5, 1) LIKE SUBSTR($2, 1, 1)),\nObviously, you could use = for these 3 rather than LIKE ^^^\nSame below too.\n\n> \t\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2,\n> LENGTH(acc_mask) - 2), $2, $3, $4), 0))\n> \t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n> \t\t\tWHERE s.kod LIKE $1\n> \t\t\t\tAND SUBSTR(acc_mask, 1, 1) LIKE ''[''),\n> \t\t0) AS showing;\n> ';\n>\n> \tView v_file02wide is:\n>\n> CREATE VIEW v_file02wide AS\n> SELECT id_a011 AS a011, data, obl.ko, obl.nazva AS oblast, b030,\n> banx.box AS ncks, banx.nazva AS bank,\n> \tepr.dd, r020, r030, a3, r030.nazva AS valuta, k041,\n> \t-- Sum equivalent in national currency\n> \tt071 * get_kurs(id_r030, data) AS t070,\n> \tt071\n> FROM v_file02 AS vf02\n> \tJOIN kod_obl AS obl USING(id_obl)\n> \tJOIN (dov_bank NATURAL JOIN dov_tvbv) AS banx\n> \t\tON banx.id_v = vf02.id_v\n> \t\t\tAND data BETWEEN COALESCE(banx.dataa, data)\n> \t\t\t\tAND COALESCE(banx.datab, data)\n> \tJOIN ek_pok_r AS epr USING(id_dd)\n> \tJOIN kl_r020 USING(id_r020)\n> \tJOIN kl_r030 AS r030 USING(id_r030)\n> \tJOIN kl_k041 USING(id_k041);\nYou might want to rewrite the view so it doesn't use explicit JOIN statements, \ni.e FROM a,b WHERE a.id=b.ref rather than FROM a JOIN b ON id=ref\nAt the moment, this will force PG into making the joins in the order you write \nthem (I think this is changed in v7.4)\n\n\n> \tFunction inrepdate is:\n>\n> CREATE OR REPLACE FUNCTION inrepdate(DATE) RETURNS BOOL\n> LANGUAGE SQL AS '\n> \t-- Returns true if given date is in repdate\n> \tSELECT (SELECT COUNT(*) FROM repdate\n> \t\tWHERE $1 BETWEEN COALESCE(data1, CURRENT_DATE)\n> \t\t\tAND COALESCE(data2, CURRENT_DATE))\n>\n> \t\t> 0;\n\nYou can probably replace this with:\n SELECT true FROM repdate WHERE $1 ...\nYou'll need to look at where it's used though.\n\n[snip table sizes]\n> \tTable has indexes almost for all selected fields.\n\nThat's not going to help you for the SUBSTR(...) stuff, although you could use \nfunctional indexes (see manuals/list archives for details).\n\nFirst thing is to get those two configuration settings somewhere sane, then we \ncan tune properly. You might like the document at:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 6 Aug 2003 10:30:36 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "\"Yaroslav Mazurak\" <[email protected]>\n> Problem is that SQL statement (see below) is running too long. With \n> current WHERE clause 'SUBSTR(2, 2) IN ('NL', 'NM') return 25 records. \n> With 1 record, SELECT time is about 50 minutes and takes approx. 120Mb \n> RAM. 
With 25 records SELECT takes about 600Mb of memory and ends after \n> about 10 hours with error: \"Memory exhausted in AllocSetAlloc(32)\".\n\nDid you try to use a functional index on that field ?\n\ncreate or replace function my_substr(varchar)\nreturns varchar AS'\nbegin\n return substr($1,2,2);\nend;\n' language 'plpgsql'\nIMMUTABLE;\n\n\ncreate index idx on <table> ( my_substr(<field>) );\n\n\nand after you should use in your where:\n\nwhere my_substr(<field>) = 'NL'\n\n", "msg_date": "Wed, 6 Aug 2003 13:48:06 +0200", "msg_from": "\"Mendola Gaetano\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "Hi All!\n\n\tFirst, thanks for answers.\n\nRichard Huxton wrote:\n\n> On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:\n\n> Version 7.3.4 is just out - probably worth upgrading as soon as it's \n> convenient.\n\n\tHas version 7.3.4 significant performance upgrade relative to 7.3.2?\nI've downloaded version 7.3.4, but not installed yet.\n\n>>sort_mem = 131072\n\n> This sort_mem value is *very* large - that's 131MB for *each sort* that gets \n> done. I'd suggest trying something in the range 1,000-10,000. What's probably \n> happening with the error above is that PG is allocating ridiculous amounts of \n> memory, the machines going into swap and everything eventually grinds to a \n> halt.\n\n\tWhat mean \"each sort\"? Each query with SORT clause or some internal\n(invisible to user) sorts too (I can't imagine: indexed search or\nwhatever else)?\n\tI'm reduced sort_mem to 16M.\n\n>>fsync = false\n\n> I'd turn fsync back on - unless you don't mind losing your data after a crash.\n\n\tThis is temporary performance solution - I want get SELECT query result\nfirst, but current performance is too low.\n\n>>enable_seqscan = false\n\n> Don't tinker with these in a live system, they're only really for \n> testing/debugging.\n\n\tThis is another strange behavior of PostgreSQL - he don't use some\ncreated indexes (seq_scan only) after ANALYZE too. OK, I'm turned on\nthis option back.\n\n>>effective_cache_size = 65536\n\n> So you typically get about 256MB cache usage in top/free?\n\n\tNo, top shows 12-20Mb.\n\tI'm reduced effective_cache_size to 4K blocks (16M?).\n\n>>\tSELECT statement is:\n>>\n>>SELECT\tshowcalc('B00204', dd, r020, t071) AS s04\n>>FROM\tv_file02wide\n>>WHERE\ta011 = 3\n>>\tAND inrepdate(data)\n>>\tAND SUBSTR(ncks, 2, 2) IN ('NL', 'NM')\n>>\tAND r030 = 980;\n\n> Hmm - mostly views and function calls, OK - I'll read on.\n\n\tMy data are distributed accross multiple tables to integrity and avoid\nredundancy. During SELECT query these data rejoined to be presented in\n\"human-readable\" form. :)\n\t\"SUBSTR\" returns about 25 records, I'm too lazy for write 25 numbers.\n:) I'm also worried for errors.\n\n>>(cost=174200202474.99..174200202474.99 rows=1 width=143) -> Hash Join \n\n> ^^^^^^^\n> This is a BIG cost estimate and you've got lots more like them. I'm guessing \n> it's because of the sort_mem / enable_seqscan settings you have. 
The numbers \n> don't make sense to me - it sounds like you've pushed the cost estimator into \n> a very strange corner.\n\n\tI think that cost estimator \"pushed into very strange corner\" by himself.\n\n>>\tFunction showcalc definition is:\n\n>>CREATE OR REPLACE FUNCTION showcalc(VARCHAR(10), VARCHAR(2), VARCHAR(4),\n>>NUMERIC(16)) RETURNS NUMERIC(16)\n>>LANGUAGE SQL AS '\n>>-- Parameters: code, dd, r020, t071\n>>\tSELECT COALESCE(\n>>\t\t(SELECT sc.koef * $4\n>>\t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n>>\t\t\tWHERE s.kod LIKE $1\n>>\t\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) LIKE ''[''\n>>\t\t\t\tAND SUBSTR(acc_mask, 1, 4) LIKE $3\n>>\t\t\t\tAND SUBSTR(acc_mask, 5, 1) LIKE SUBSTR($2, 1, 1)),\n\n> Obviously, you could use = for these 3 rather than LIKE ^^^\n> Same below too.\n\n>>\t\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2,\n>>LENGTH(acc_mask) - 2), $2, $3, $4), 0))\n>>\t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n>>\t\t\tWHERE s.kod LIKE $1\n>>\t\t\t\tAND SUBSTR(acc_mask, 1, 1) LIKE ''[''),\n>>\t\t0) AS showing;\n>>';\n\n\tOK, all unnecessary \"LIKEs\" replaced by \"=\", JOIN removed too:\nCREATE OR REPLACE FUNCTION showcalc(VARCHAR(10), VARCHAR(2), VARCHAR(4),\nNUMERIC(16)) RETURNS NUMERIC(16)\nLANGUAGE SQL AS '\n-- Parameters: code, dd, r020, t071\n\tSELECT COALESCE(\n\t\t(SELECT sc.koef * $4\n\t\t\tFROM showing AS s, showcomp AS sc\n\t\t\tWHERE sc.kod = s.kod\n\t\t\t\tAND s.kod LIKE $1\n\t\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) = ''[''\n\t\t\t\tAND SUBSTR(acc_mask, 1, 4) = $3\n\t\t\t\tAND SUBSTR(acc_mask, 5, 1) = SUBSTR($2, 1, 1)),\n\t\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2,\nLENGTH(acc_mask) - 2), $2, $3, $4), 0))\n\t\t\tFROM showing AS s, showcomp AS sc\n\t\t\tWHERE sc.kod = s.kod\n\t\t\t\tAND s.kod = $1\n\t\t\t\tAND SUBSTR(acc_mask, 1, 1) = ''[''),\n\t\t0) AS showing;\n';\n\n>>\tView v_file02wide is:\n\n>>CREATE VIEW v_file02wide AS\n>>SELECT id_a011 AS a011, data, obl.ko, obl.nazva AS oblast, b030,\n>>banx.box AS ncks, banx.nazva AS bank,\n>>\tepr.dd, r020, r030, a3, r030.nazva AS valuta, k041,\n>>\t-- Sum equivalent in national currency\n>>\tt071 * get_kurs(id_r030, data) AS t070,\n>>\tt071\n>>FROM v_file02 AS vf02\n>>\tJOIN kod_obl AS obl USING(id_obl)\n>>\tJOIN (dov_bank NATURAL JOIN dov_tvbv) AS banx\n>>\t\tON banx.id_v = vf02.id_v\n>>\t\t\tAND data BETWEEN COALESCE(banx.dataa, data)\n>>\t\t\t\tAND COALESCE(banx.datab, data)\n>>\tJOIN ek_pok_r AS epr USING(id_dd)\n>>\tJOIN kl_r020 USING(id_r020)\n>>\tJOIN kl_r030 AS r030 USING(id_r030)\n>>\tJOIN kl_k041 USING(id_k041);\n\n> You might want to rewrite the view so it doesn't use explicit JOIN statements, \n> i.e FROM a,b WHERE a.id=b.ref rather than FROM a JOIN b ON id=ref\n> At the moment, this will force PG into making the joins in the order you write \n> them (I think this is changed in v7.4)\n\n\tI think this is a important remark. 
Can \"JOIN\" significantly reduce\nperformance of SELECT statement relative to \", WHERE\"?\n\tOK, I'm changed VIEW to this text:\n\nCREATE VIEW v_file02 AS\nSELECT filenum, data, id_a011, id_v, id_obl, id_dd, id_r020, id_r030,\nid_k041, t071\nFROM filexxr, file02\nWHERE file02.id_r = filexxr.id_r;\n\nCREATE VIEW v_file02wide AS\nSELECT id_a011 AS a011, data, obl.ko, obl.nazva AS oblast, b030,\nbanx.box AS ncks, banx.nazva AS bank,\n\tepr.dd, r020, vf02.id_r030 AS r030, a3, kl_r030.nazva AS valuta, k041,\n\t-- Sum equivalent in national currency\n\tt071 * get_kurs(vf02.id_r030, data) AS t070, t071\nFROM v_file02 AS vf02, kod_obl AS obl, v_banx AS banx,\n\tek_pok_r AS epr, kl_r020, kl_r030, kl_k041\nWHERE obl.id_obl = vf02.id_obl\n\tAND banx.id_v = vf02.id_v\n\tAND data BETWEEN COALESCE(banx.dataa, data)\n\t\tAND COALESCE(banx.datab, data)\n\tAND epr.id_dd = vf02.id_dd\n\tAND kl_r020.id_r020 = vf02.id_r020\n\tAND kl_r030.id_r030 = vf02.id_r030\n\tAND kl_k041.id_k041 = vf02.id_k041;\n\n\tNow (with configuration and view definition changed) \"SELECT COUNT(*)\nFROM v_file02wide;\" executes about 6 minutes and 45 seconds instead of\n30 seconds (previous).\n\tAnother annoying \"feature\" is impossibility writing \"SELECT * FROM...\"\n- duplicate column names error. In NATURAL JOIN joined columns hiding\nautomatically. :-|\n\n>>\tFunction inrepdate is:\n\n>>CREATE OR REPLACE FUNCTION inrepdate(DATE) RETURNS BOOL\n>>LANGUAGE SQL AS '\n>>\t-- Returns true if given date is in repdate\n>>\tSELECT (SELECT COUNT(*) FROM repdate\n>>\t\tWHERE $1 BETWEEN COALESCE(data1, CURRENT_DATE)\n>>\t\t\tAND COALESCE(data2, CURRENT_DATE))\n>>\n>>\t\t> 0;\n\n> You can probably replace this with:\n> SELECT true FROM repdate WHERE $1 ...\n> You'll need to look at where it's used though.\n\n\tHmm... table repdate contain date intervals. For example:\n\tdata1\t\tdata2\n\t2003-01-01\t2003-01-10\n\t2003-05-07\t2003-05-24\n\t...\n\tI need single value (true or false) about given date as parameter -\nreport includes given date or not. COUNT used as aggregate function for\nthis. Can you write this function more simpler?\n\tBTW, I prefer SQL language if possible, then PL/pgSQL. This may be mistake?\n\n>>\tTable has indexes almost for all selected fields.\n\n> That's not going to help you for the SUBSTR(...) stuff, although you could use \n> functional indexes (see manuals/list archives for details).\n\n\tYes, I'm using functional indexes, but not in this case... now in this\ncase too! :)\n\n> First thing is to get those two configuration settings somewhere sane, then we \n> can tune properly. 
You might like the document at:\n\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n\tThanks, it's interesting.\n\n\tCurrent query plan:\n\n\n QUERY PLAN\n\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=188411.98..188411.98 rows=1 width=151)\n -> Hash Join (cost=186572.19..188398.39 rows=5437 width=151)\n Hash Cond: (\"outer\".id_obl = \"inner\".id_obl)\n -> Hash Join (cost=186570.65..188301.70 rows=5437 width=147)\n Hash Cond: (\"outer\".id_dd = \"inner\".id_dd)\n -> Hash Join (cost=186569.45..188205.34 rows=5437\nwidth=137)\n Hash Cond: (\"outer\".id_k041 = \"inner\".id_k041)\n -> Hash Join (cost=186568.40..188109.14\nrows=5437 width=133)\n Hash Cond: (\"outer\".id_r020 = \"inner\".id_r020)\n -> Hash Join (cost=186499.15..187944.74\nrows=5437 width=122)\n Hash Cond: (\"outer\".id_r030 =\n\"inner\".id_r030)\n -> Merge Join\n(cost=186493.55..187843.99 rows=5437 width=118)\n Merge Cond: (\"outer\".id_v =\n\"inner\".id_v)\n Join Filter: ((\"outer\".data >=\nCASE WHEN (\"inner\".dataa IS NOT NULL) THEN \"inner\".dataa WHEN\n(\"outer\".data IS NOT NULL) THEN \"outer\".data ELSE NULL::date END) AND\n(\"outer\".data <= CASE WHEN (\"inner\".datab IS NOT NULL) THEN\n\"inner\".datab WHEN (\"outer\".data IS NOT NULL) THEN \"outer\".data ELSE\nNULL::date END))\n -> Sort\n(cost=29324.30..29568.97 rows=97870 width=61)\n Sort Key: filexxr.id_v\n -> Hash Join\n(cost=632.67..21211.53 rows=97870 width=61)\n Hash Cond:\n(\"outer\".id_r = \"inner\".id_r)\n -> Seq Scan on\nfile02 (cost=0.00..16888.16 rows=493464 width=32)\n Filter:\n(id_r030 = 980)\n -> Hash\n(cost=615.41..615.41 rows=6903 width=29)\n -> Index Scan\nusing index_filexxr_a011 on filexxr (cost=0.00..615.41 rows=6903 width=29)\n Index\nCond: (id_a011 = 3)\n Filter:\ninrepdate(data)\n -> Sort\n(cost=157169.25..157172.17 rows=1167 width=57)\n Sort Key: v.id_v\n -> Hash Join\n(cost=1.18..157109.80 rows=1167 width=57)\n Hash Cond:\n(\"outer\".id_oz = \"inner\".id_oz)\n -> Merge Join\n(cost=0.00..157088.20 rows=1167 width=53)\n Merge Cond:\n(\"outer\".id_bnk = \"inner\".id_bnk)\n -> Index Scan\nusing dov_bank_pkey on dov_bank b (cost=0.00..261328.45 rows=1450 width=17)\n Filter:\n(subplan)\n SubPlan\n ->\nMaterialize (cost=90.02..90.02 rows=29 width=11)\n\n-> Seq Scan on dov_bank (cost=0.00..90.02 rows=29 width=11)\n\n Filter: ((dov_bank_box_22(box) = 'NL'::character varying) OR\n(dov_bank_box_22(box) = 'NM'::character varying))\n -> Index Scan\nusing index_dov_tvbv_bnk on dov_tvbv v (cost=0.00..142.42 rows=2334\nwidth=36)\n -> Hash\n(cost=1.14..1.14 rows=14 width=4)\n -> Seq Scan\non ozkb o (cost=0.00..1.14 rows=14 width=4)\n -> Hash (cost=5.08..5.08 rows=208\nwidth=4)\n -> Seq Scan on kl_r030\n(cost=0.00..5.08 rows=208 width=4)\n -> Hash (cost=64.00..64.00 rows=2100 width=11)\n -> Seq Scan on kl_r020\n(cost=0.00..64.00 rows=2100 width=11)\n -> Hash (cost=1.04..1.04 rows=4 width=4)\n -> Seq Scan on kl_k041 (cost=0.00..1.04\nrows=4 width=4)\n -> Hash (cost=1.16..1.16 rows=16 width=10)\n -> Seq Scan on ek_pok_r epr (cost=0.00..1.16\nrows=16 width=10)\n -> Hash (cost=1.43..1.43 rows=43 width=4)\n -> Seq Scan on kod_obl obl (cost=0.00..1.43 rows=43\nwidth=4)\n(49 rows)\n\n\tNow (2K shared_buffers blocks, 16K 
effective_cache_size blocks, 16Mb\nsort_mem) PostgreSQL uses much less memory, about 64M... it's not good,\nI want using all available RAM if possible - PostgreSQL is the main task\non this PC.\n\tMay set effective_cache_size to 192M (48K blocks) be better? I don't\nunderstand exactly: effective_cache_size tells PostgreSQL about OS cache\nsize or about available free RAM?\n\n\nWith best regards\n\tYaroslav Mazurak.", "msg_date": "Wed, 06 Aug 2003 15:42:55 +0300", "msg_from": "Yaroslav Mazurak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On 6 Aug 2003 at 15:42, Yaroslav Mazurak wrote:\n> >>sort_mem = 131072\n> > This sort_mem value is *very* large - that's 131MB for *each sort* that gets \n> > done. I'd suggest trying something in the range 1,000-10,000. What's probably \n> > happening with the error above is that PG is allocating ridiculous amounts of \n> > memory, the machines going into swap and everything eventually grinds to a \n> > halt.\n> \n> \tWhat mean \"each sort\"? Each query with SORT clause or some internal\n> (invisible to user) sorts too (I can't imagine: indexed search or\n> whatever else)?\n> \tI'm reduced sort_mem to 16M.\n\nGood call. I would say start with 4M if you time to experiment. \n\n> >>enable_seqscan = false\n> \n> > Don't tinker with these in a live system, they're only really for \n> > testing/debugging.\n> \n> \tThis is another strange behavior of PostgreSQL - he don't use some\n> created indexes (seq_scan only) after ANALYZE too. OK, I'm turned on\n> this option back.\n\nAt times it thinks correct as well. An index scan might be costly. It does not \nhurt leaving this option on. If your performance improves by turning off this \noption, usually the problem is somewhere else..\n\n> \n> >>effective_cache_size = 65536\n> \n> > So you typically get about 256MB cache usage in top/free?\n> \n> \tNo, top shows 12-20Mb.\n> \tI'm reduced effective_cache_size to 4K blocks (16M?).\n\nAre you on linux?( I lost OP). Don't trust top. Use free to find out how much \ntrue free memory you have.. Look at second line of free..\n\nHTH\n\nBye\n Shridhar\n\n--\nmillihelen, n.:\tThe amount of beauty required to launch one ship.\n\n", "msg_date": "Wed, 06 Aug 2003 18:23:40 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": ">> On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:\n>\n>> Version 7.3.4 is just out - probably worth upgrading as soon as it's\n>> convenient.\n>\n> \tHas version 7.3.4 significant performance upgrade relative to 7.3.2?\n> I've downloaded version 7.3.4, but not installed yet.\n\nNo, but there are some bug fixes.\n\n>>>sort_mem = 131072\n>\n>> This sort_mem value is *very* large - that's 131MB for *each sort* that\n\n> \tWhat mean \"each sort\"? Each query with SORT clause or some internal\n> (invisible to user) sorts too (I can't imagine: indexed search or\n> whatever else)?\n> \tI'm reduced sort_mem to 16M.\n\nIt means each sort - if you look at your query plan and see three \"sort\"\nclauses that means that query might allocate 48MB to sorting. Now, that's\ngood because sorting items on disk is much slower. 
It's bad because that's\n48MB less for everything else that's happening.\n\n>>>fsync = false\n>\n>> I'd turn fsync back on - unless you don't mind losing your data after a\n>> crash.\n>\n> \tThis is temporary performance solution - I want get SELECT query result\n> first, but current performance is too low.\n>\n>>>enable_seqscan = false\n>\n>> Don't tinker with these in a live system, they're only really for\n>> testing/debugging.\n>\n> \tThis is another strange behavior of PostgreSQL - he don't use some\n> created indexes (seq_scan only) after ANALYZE too. OK, I'm turned on\n> this option back.\n\nFair enough, we can work on those. With 7.3.x you can tell PG to examine\nsome tables more thouroughly to get better plans.\n\n>>>effective_cache_size = 65536\n>\n>> So you typically get about 256MB cache usage in top/free?\n>\n> \tNo, top shows 12-20Mb.\n> \tI'm reduced effective_cache_size to 4K blocks (16M?).\n\nCache size is in blocks of 8KB (usually) - it's a way of telling PG what\nthe chances are of disk blocks being already cached by Linux.\n\n>>>\tSELECT statement is:\n>>>\n>>>SELECT\tshowcalc('B00204', dd, r020, t071) AS s04\n>>>FROM\tv_file02wide\n>>>WHERE\ta011 = 3\n>>>\tAND inrepdate(data)\n>>>\tAND SUBSTR(ncks, 2, 2) IN ('NL', 'NM')\n>>>\tAND r030 = 980;\n>\n>> Hmm - mostly views and function calls, OK - I'll read on.\n>\n> \tMy data are distributed accross multiple tables to integrity and avoid\n> redundancy. During SELECT query these data rejoined to be presented in\n> \"human-readable\" form. :)\n> \t\"SUBSTR\" returns about 25 records, I'm too lazy for write 25 numbers.\n> :) I'm also worried for errors.\n\nSounds like good policy.\n\n>\n>>>(cost=174200202474.99..174200202474.99 rows=1 width=143) -> Hash Join\n>\n>> ^^^^^^^\n>> This is a BIG cost estimate and you've got lots more like them. I'm\n>> guessing\n>> it's because of the sort_mem / enable_seqscan settings you have. 
The\n>> numbers\n>> don't make sense to me - it sounds like you've pushed the cost estimator\n>> into\n>> a very strange corner.\n>\n> \tI think that cost estimator \"pushed into very strange corner\" by himself.\n>\n>>>\tFunction showcalc definition is:\n>\n>>>CREATE OR REPLACE FUNCTION showcalc(VARCHAR(10), VARCHAR(2), VARCHAR(4),\n>>>NUMERIC(16)) RETURNS NUMERIC(16)\n>>>LANGUAGE SQL AS '\n>>>-- Parameters: code, dd, r020, t071\n>>>\tSELECT COALESCE(\n>>>\t\t(SELECT sc.koef * $4\n>>>\t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n>>>\t\t\tWHERE s.kod LIKE $1\n>>>\t\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) LIKE ''[''\n>>>\t\t\t\tAND SUBSTR(acc_mask, 1, 4) LIKE $3\n>>>\t\t\t\tAND SUBSTR(acc_mask, 5, 1) LIKE SUBSTR($2, 1, 1)),\n>\n>> Obviously, you could use = for these 3 rather than LIKE ^^^\n>> Same below too.\n>\n>>>\t\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2,\n>>>LENGTH(acc_mask) - 2), $2, $3, $4), 0))\n>>>\t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n>>>\t\t\tWHERE s.kod LIKE $1\n>>>\t\t\t\tAND SUBSTR(acc_mask, 1, 1) LIKE ''[''),\n>>>\t\t0) AS showing;\n>>>';\n>\n> \tOK, all unnecessary \"LIKEs\" replaced by \"=\", JOIN removed too:\n> CREATE OR REPLACE FUNCTION showcalc(VARCHAR(10), VARCHAR(2), VARCHAR(4),\n> NUMERIC(16)) RETURNS NUMERIC(16)\n> LANGUAGE SQL AS '\n> -- Parameters: code, dd, r020, t071\n> \tSELECT COALESCE(\n> \t\t(SELECT sc.koef * $4\n> \t\t\tFROM showing AS s, showcomp AS sc\n> \t\t\tWHERE sc.kod = s.kod\n> \t\t\t\tAND s.kod LIKE $1\n> \t\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) = ''[''\n> \t\t\t\tAND SUBSTR(acc_mask, 1, 4) = $3\n> \t\t\t\tAND SUBSTR(acc_mask, 5, 1) = SUBSTR($2, 1, 1)),\n> \t\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2,\n> LENGTH(acc_mask) - 2), $2, $3, $4), 0))\n> \t\t\tFROM showing AS s, showcomp AS sc\n> \t\t\tWHERE sc.kod = s.kod\n> \t\t\t\tAND s.kod = $1\n> \t\t\t\tAND SUBSTR(acc_mask, 1, 1) = ''[''),\n> \t\t0) AS showing;\n> ';\n>\n>>>\tView v_file02wide is:\n>\n>>>CREATE VIEW v_file02wide AS\n>>>SELECT id_a011 AS a011, data, obl.ko, obl.nazva AS oblast, b030,\n>>>banx.box AS ncks, banx.nazva AS bank,\n>>>\tepr.dd, r020, r030, a3, r030.nazva AS valuta, k041,\n>>>\t-- Sum equivalent in national currency\n>>>\tt071 * get_kurs(id_r030, data) AS t070,\n>>>\tt071\n>>>FROM v_file02 AS vf02\n>>>\tJOIN kod_obl AS obl USING(id_obl)\n>>>\tJOIN (dov_bank NATURAL JOIN dov_tvbv) AS banx\n>>>\t\tON banx.id_v = vf02.id_v\n>>>\t\t\tAND data BETWEEN COALESCE(banx.dataa, data)\n>>>\t\t\t\tAND COALESCE(banx.datab, data)\n>>>\tJOIN ek_pok_r AS epr USING(id_dd)\n>>>\tJOIN kl_r020 USING(id_r020)\n>>>\tJOIN kl_r030 AS r030 USING(id_r030)\n>>>\tJOIN kl_k041 USING(id_k041);\n>\n>> You might want to rewrite the view so it doesn't use explicit JOIN\n>> statements,\n>> i.e FROM a,b WHERE a.id=b.ref rather than FROM a JOIN b ON id=ref\n>> At the moment, this will force PG into making the joins in the order you\n>> write\n>> them (I think this is changed in v7.4)\n>\n> \tI think this is a important remark. Can \"JOIN\" significantly reduce\n> performance of SELECT statement relative to \", WHERE\"?\n> \tOK, I'm changed VIEW to this text:\n\nIt can sometimes. What it means is that PG will follow whatever order you\nwrite the joins in. If you know joining a to b to c is the best order,\nthat can be a good thing. 
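For example, the difference is just in how the same join is spelled
(table and column names below are only placeholders, not your schema):

  -- explicit JOINs: 7.3 performs the joins in exactly the order written
  SELECT * FROM a JOIN b ON a.id = b.a_id JOIN c ON b.id = c.b_id;

  -- implicit joins: the planner is free to pick the cheapest order
  SELECT * FROM a, b, c WHERE a.id = b.a_id AND b.id = c.b_id;
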
Unfortunately, it means the planner can't make a\nbetter guess based on its statistics.\n\n> CREATE VIEW v_file02 AS\n> SELECT filenum, data, id_a011, id_v, id_obl, id_dd, id_r020, id_r030,\n> id_k041, t071\n> FROM filexxr, file02\n> WHERE file02.id_r = filexxr.id_r;\n>\n> CREATE VIEW v_file02wide AS\n> SELECT id_a011 AS a011, data, obl.ko, obl.nazva AS oblast, b030,\n> banx.box AS ncks, banx.nazva AS bank,\n> \tepr.dd, r020, vf02.id_r030 AS r030, a3, kl_r030.nazva AS valuta, k041,\n> \t-- Sum equivalent in national currency\n> \tt071 * get_kurs(vf02.id_r030, data) AS t070, t071\n> FROM v_file02 AS vf02, kod_obl AS obl, v_banx AS banx,\n> \tek_pok_r AS epr, kl_r020, kl_r030, kl_k041\n> WHERE obl.id_obl = vf02.id_obl\n> \tAND banx.id_v = vf02.id_v\n> \tAND data BETWEEN COALESCE(banx.dataa, data)\n> \t\tAND COALESCE(banx.datab, data)\n> \tAND epr.id_dd = vf02.id_dd\n> \tAND kl_r020.id_r020 = vf02.id_r020\n> \tAND kl_r030.id_r030 = vf02.id_r030\n> \tAND kl_k041.id_k041 = vf02.id_k041;\n>\n> \tNow (with configuration and view definition changed) \"SELECT COUNT(*)\n> FROM v_file02wide;\" executes about 6 minutes and 45 seconds instead of\n> 30 seconds (previous).\n\nOK - don't worry if it looks like we're going backwards, we should be able\nto get everything running nicely soon.\n\n> \tAnother annoying \"feature\" is impossibility writing \"SELECT * FROM...\"\n> - duplicate column names error. In NATURAL JOIN joined columns hiding\n> automatically. :-|\n>\n>>>\tFunction inrepdate is:\n>\n>>>CREATE OR REPLACE FUNCTION inrepdate(DATE) RETURNS BOOL\n>>>LANGUAGE SQL AS '\n>>>\t-- Returns true if given date is in repdate\n>>>\tSELECT (SELECT COUNT(*) FROM repdate\n>>>\t\tWHERE $1 BETWEEN COALESCE(data1, CURRENT_DATE)\n>>>\t\t\tAND COALESCE(data2, CURRENT_DATE))\n>>>\n>>>\t\t> 0;\n>\n>> You can probably replace this with:\n>> SELECT true FROM repdate WHERE $1 ...\n>> You'll need to look at where it's used though.\n>\n> \tHmm... table repdate contain date intervals. For example:\n> \tdata1\t\tdata2\n> \t2003-01-01\t2003-01-10\n> \t2003-05-07\t2003-05-24\n> \t...\n> \tI need single value (true or false) about given date as parameter -\n> report includes given date or not. COUNT used as aggregate function for\n> this. Can you write this function more simpler?\n> \tBTW, I prefer SQL language if possible, then PL/pgSQL. This may be\n> mistake?\n\nNo - not really. You can do things in plpgsql that you can't in sql, but I\nuse both depending on the situation.\n\n>>>\tTable has indexes almost for all selected fields.\n>\n>> That's not going to help you for the SUBSTR(...) stuff, although you\n>> could use\n>> functional indexes (see manuals/list archives for details).\n>\n> \tYes, I'm using functional indexes, but not in this case... now in this\n> case too! :)\n>\n>> First thing is to get those two configuration settings somewhere sane,\n>> then we\n>> can tune properly. 
You might like the document at:\n>\n>> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>\n> \tThanks, it's interesting.\n>\n> \tCurrent query plan:\n>\n>\n> QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=188411.98..188411.98 rows=1 width=151)\n> -> Hash Join (cost=186572.19..188398.39 rows=5437 width=151)\n> Hash Cond: (\"outer\".id_obl = \"inner\".id_obl)\n> -> Hash Join (cost=186570.65..188301.70 rows=5437 width=147)\n[snip]\nWell the cost estimates look much more plausible. You couldn't post\nEXPLAIN ANALYSE could you? That actually runs the query.\n\n> \tNow (2K shared_buffers blocks, 16K effective_cache_size blocks, 16Mb\n> sort_mem) PostgreSQL uses much less memory, about 64M... it's not good,\n> I want using all available RAM if possible - PostgreSQL is the main task\n> on this PC.\n\nDon't forget that any memory PG is using the operating-system can't. The\nOS will cache frequently accessed disk blocks for you, so it's a question\nof finding the right balance.\n\n> \tMay set effective_cache_size to 192M (48K blocks) be better? I don't\n> understand exactly: effective_cache_size tells PostgreSQL about OS cache\n> size or about available free RAM?\n\nIt needs to reflect how much cache the system is using - try the \"free\"\ncommand to see figures.\n\nIf you could post the output of EXPLAIN ANALYSE rather than EXPLAIN, I'll\ntake a look at it this evening (London time). There's also plenty of other\npeople on this list who can help too.\n\nHTH\n\n- Richard Huxton\n", "msg_date": "Wed, 6 Aug 2003 16:38:24 +0100 (BST)", "msg_from": "\"Richard Huxton\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "Yaroslav Mazurak <[email protected]> writes:\n> \tCurrent postgresql.conf settings (some) are:\n\n> max_locks_per_transaction = 16\n\nThis strikes me as a really bad idea --- you save little space by\nreducing it from the default, and open yourself up to unexpected failures.\n\n> wal_buffers = 256\n\nThat is almost certainly way more than you need.\n\n> sort_mem = 131072\n\nPeople have already told you that one's a bad idea.\n\n> commit_delay = 32000\n\nI'm unconvinced that setting this nonzero is a good idea. Have you done\nexperiments to prove that you get a benefit?\n\n> enable_seqscan = false\n\nThis is the cause of the bizarre-looking cost estimates. I don't\nrecommend setting it false as a system-wide setting. If you want\nto nudge the planner towards indexscans, reducing random_page_cost\na little is probably a better way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Aug 2003 14:13:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning " }, { "msg_contents": "Yaroslav Mazurak <[email protected]> writes:\n>>> fsync = false\n\n>> I'd turn fsync back on - unless you don't mind losing your data after a crash.\n\n> \tThis is temporary performance solution - I want get SELECT query result\n> first, but current performance is too low.\n\nDisabling fsync will not help SELECT performance one bit. 
It would only\naffect transactions that modify the database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Aug 2003 14:16:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning " }, { "msg_contents": "\t\tHi All!\n\n\nRichard Huxton wrote:\n\n>>>On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:\n\n>>>>sort_mem = 131072\n\n>>>This sort_mem value is *very* large - that's 131MB for *each sort* that\n\n\tIt's not TOO large *for PostgreSQL*. When I'm inserting a large amount \nof data into tables, sort_mem helps. Value of 192M speeds up inserting \nsignificantly (verified :))!\n\n>>\tWhat mean \"each sort\"? Each query with SORT clause or some internal\n>>(invisible to user) sorts too (I can't imagine: indexed search or\n>>whatever else)?\n\n>>\tI'm reduced sort_mem to 16M.\n\n> It means each sort - if you look at your query plan and see three \"sort\"\n> clauses that means that query might allocate 48MB to sorting. Now, that's\n> good because sorting items on disk is much slower. It's bad because that's\n> 48MB less for everything else that's happening.\n\n\tOK, I'm preparing to fix this value. :)\n\tIMHO this is PostgreSQL's lack of memory management. I think that \nPostgreSQL can finally allocate enough memory by himself! :-E\n\n>>\tThis is another strange behavior of PostgreSQL - he don't use some\n>>created indexes (seq_scan only) after ANALYZE too. OK, I'm turned on\n>>this option back.\n\n> Fair enough, we can work on those. With 7.3.x you can tell PG to examine\n> some tables more thouroughly to get better plans.\n\n\tYou might EXPLAIN ANALYZE?\n\n>>>>effective_cache_size = 65536\n\n>>>So you typically get about 256MB cache usage in top/free?\n\n>>\tNo, top shows 12-20Mb.\n>>\tI'm reduced effective_cache_size to 4K blocks (16M?).\n\n> Cache size is in blocks of 8KB (usually) - it's a way of telling PG what\n> the chances are of disk blocks being already cached by Linux.\n\n\tPostgreSQL is running on FreeBSD, memory block actually is 4Kb, but in \nmost cases documentation says about 8Kb... I don't know exactly about \nreal disk block size, but suspect that it's 4Kb. :)\n\n>>\tI think this is a important remark. Can \"JOIN\" significantly reduce\n>>performance of SELECT statement relative to \", WHERE\"?\n>>\tOK, I'm changed VIEW to this text:\n\n> It can sometimes. What it means is that PG will follow whatever order you\n> write the joins in. If you know joining a to b to c is the best order,\n> that can be a good thing. Unfortunately, it means the planner can't make a\n> better guess based on its statistics.\n\n\tAt this moment this don't helps. :(\n\n> Well the cost estimates look much more plausible. You couldn't post\n> EXPLAIN ANALYSE could you? That actually runs the query.\n\n>>\tNow (2K shared_buffers blocks, 16K effective_cache_size blocks, 16Mb\n>>sort_mem) PostgreSQL uses much less memory, about 64M... it's not good,\n>>I want using all available RAM if possible - PostgreSQL is the main task\n>>on this PC.\n\n> Don't forget that any memory PG is using the operating-system can't. The\n> OS will cache frequently accessed disk blocks for you, so it's a question\n> of finding the right balance.\n\n\tPostgreSQL is the primary task for me on this PC - I don't worry about \nother tasks except OS. ;)\n\n>>\tMay set effective_cache_size to 192M (48K blocks) be better? 
I don't\n>>understand exactly: effective_cache_size tells PostgreSQL about OS cache\n>>size or about available free RAM?\n\n> It needs to reflect how much cache the system is using - try the \"free\"\n> command to see figures.\n\n\tI'm not found \"free\" utility on FreeBSD 4.7. :(\n\n> If you could post the output of EXPLAIN ANALYSE rather than EXPLAIN, I'll\n> take a look at it this evening (London time). There's also plenty of other\n> people on this list who can help too.\n\n\tI'm afraid that this may be too long. :-(((\n\tYesterday I'm re-execute my query with all changes... after 700 (!) \nminutes query failed with: \"ERROR: Memory exhausted in AllocSetAlloc(104)\".\n\tI don't understand: result is actually 8K rows long only, but \nPostgreSQL failed! Why?!! Function showcalc is recursive, but in my \nquery used with level 1 depth only (I know exactly).\n\tAgain: I think that this is PostgreSQL's lack of quality memory \nmanagement. :-(\n\n> - Richard Huxton\n\nWith best regards\n\tYaroslav Mazurak.\n\n", "msg_date": "Thu, 07 Aug 2003 10:05:23 +0300", "msg_from": "Yaroslav Mazurak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On 7 Aug 2003 at 10:05, Yaroslav Mazurak wrote:\n> > It needs to reflect how much cache the system is using - try the \"free\"\n> > command to see figures.\n> \n> \tI'm not found \"free\" utility on FreeBSD 4.7. :(\n\n<rant>\nGrr.. I don't like freeBSD for it's top output.Active/inactive/Wired.. Grr.. \nwhy can't it be shared buffered and cached? Same goes for HP-UX top. Looking at \nit one gets hardly any real information.. Anyway that's just me..\n</rant>\n\n Top on freeBSD seems pretty unintuituive em but if you find any documentation \non that, that would help you. ( Haven't booted in freeBSD in ages so no data \nout of my head..)\n\nYou can try various sysctls on freeBSD. Basicalyl idea is to find out how much \nof memory is used and how much is cached. FreeBSD must be providing that one in \nsome form..\n\nIIRC there is a limit on filesystem cache on freeBSD. 300MB by default. If that \nis the case, you might have to raise it to make effective_cache_size really \neffective..\n\nHTH\n\nBye\n Shridhar\n\n--\nAnother war ... must it always be so? How many comrades have we lostin this \nway? ... Obedience. Duty. Death, and more death ...\t\t-- Romulan Commander, \n\"Balance of Terror\", stardate 1709.2\n\n", "msg_date": "Thu, 07 Aug 2003 13:07:36 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "\t\tHi All!\n\nTom Lane wrote:\n\n> Yaroslav Mazurak <[email protected]> writes:\n\n>>>>fsync = false\n\n>>>I'd turn fsync back on - unless you don't mind losing your data after a crash.\n\n>>\tThis is temporary performance solution - I want get SELECT query result\n>>first, but current performance is too low.\n\n> Disabling fsync will not help SELECT performance one bit. It would only\n> affect transactions that modify the database.\n\n\tFixed. But at this moment primary tasks are *get result* (1st) from \nSELECT in *reasonable* time (2nd). 
:)\n\n> \t\t\tregards, tom lane\n\nWith best regards\n\tYaroslav Mazurak.\n\n", "msg_date": "Thu, 07 Aug 2003 10:41:17 +0300", "msg_from": "Yaroslav Mazurak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On Thursday 07 August 2003 08:05, Yaroslav Mazurak wrote:\n> Hi All!\n>\n> Richard Huxton wrote:\n> >>>On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:\n> >>>>sort_mem = 131072\n> >>>\n> >>>This sort_mem value is *very* large - that's 131MB for *each sort* that\n>\n> \tIt's not TOO large *for PostgreSQL*. When I'm inserting a large amount\n> of data into tables, sort_mem helps. Value of 192M speeds up inserting\n> significantly (verified :))!\n\nAnd what about every other operation?\n\n> >>\tWhat mean \"each sort\"? Each query with SORT clause or some internal\n> >>(invisible to user) sorts too (I can't imagine: indexed search or\n> >>whatever else)?\n> >>\n> >>\tI'm reduced sort_mem to 16M.\n> >\n> > It means each sort - if you look at your query plan and see three \"sort\"\n> > clauses that means that query might allocate 48MB to sorting. Now, that's\n> > good because sorting items on disk is much slower. It's bad because\n> > that's 48MB less for everything else that's happening.\n>\n> \tOK, I'm preparing to fix this value. :)\n> \tIMHO this is PostgreSQL's lack of memory management. I think that\n> PostgreSQL can finally allocate enough memory by himself! :-E\n\nBut this parameter controls how much memory can be allocated to sorts - I \ndon't see how PG can figure out a reasonable maximum by itself.\n\n> >>\tThis is another strange behavior of PostgreSQL - he don't use some\n> >>created indexes (seq_scan only) after ANALYZE too. OK, I'm turned on\n> >>this option back.\n> >\n> > Fair enough, we can work on those. With 7.3.x you can tell PG to examine\n> > some tables more thouroughly to get better plans.\n>\n> \tYou might EXPLAIN ANALYZE?\n\nNo - I meant altering the number of rows used to gather stats (ALTER \nTABLE...SET STATISTICS) - this controls how many rows PG looks at when \ndeciding the \"shape\" of the data in the table.\n\n[snip]\n\n> > Don't forget that any memory PG is using the operating-system can't. The\n> > OS will cache frequently accessed disk blocks for you, so it's a question\n> > of finding the right balance.\n>\n> \tPostgreSQL is the primary task for me on this PC - I don't worry about\n> other tasks except OS. ;)\n\nYou still want the OS to cache your database files. If you try and allocate \ntoo much memory to PG you will only hurt performance.\n\n> >>\tMay set effective_cache_size to 192M (48K blocks) be better? I don't\n> >>understand exactly: effective_cache_size tells PostgreSQL about OS cache\n> >>size or about available free RAM?\n> >\n> > It needs to reflect how much cache the system is using - try the \"free\"\n> > command to see figures.\n>\n> \tI'm not found \"free\" utility on FreeBSD 4.7. :(\n\nSorry - I don't know what the equivalent is in FreeBSD.\n\n> > If you could post the output of EXPLAIN ANALYSE rather than EXPLAIN, I'll\n> > take a look at it this evening (London time). There's also plenty of\n> > other people on this list who can help too.\n>\n> \tI'm afraid that this may be too long. :-(((\n> \tYesterday I'm re-execute my query with all changes... after 700 (!)\n> minutes query failed with: \"ERROR: Memory exhausted in\n> AllocSetAlloc(104)\". I don't understand: result is actually 8K rows long\n> only, but\n> PostgreSQL failed! Why?!! 
Function showcalc is recursive, but in my\n> query used with level 1 depth only (I know exactly).\n\nI must say I'm puzzled as to how this can happen. In fact, if the last EXPLAIN \noutput was accurate, it couldn't run out of memory, not with the settings \nyou've got now.\n\n> \tAgain: I think that this is PostgreSQL's lack of quality memory\n> management. :-(\n\nIf it's allocating all that memory (do you see the memory usage going up in \ntop) then there's something funny going on now.\n\nWell sir, I can only think of two options now:\n 1. simplify the query until it works and then build it back up again - that \nshould identify where the problem is.\n 2. If you can put together a pg_dump with a small amount of sample data, I \ncan take a look at it here.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 7 Aug 2003 09:10:58 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On Thu, 7 Aug 2003, Richard Huxton wrote:\n\n> But this parameter controls how much memory can be allocated to sorts - I \n> don't see how PG can figure out a reasonable maximum by itself.\n\nOne could have one setting for the total memory usage and pg could use\nstatistics or some heuristics to use the memory for different things in a \ngood way.\n\nThen that setting could have an auto setting so it uses 40% of all memory \nor something like that. Not perfect but okay for most people.\n\n-- \n/Dennis\n\n", "msg_date": "Thu, 7 Aug 2003 10:23:04 +0200 (CEST)", "msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "\t\tHi All!\n\nShridhar Daithankar wrote:\n\n> On 7 Aug 2003 at 10:05, Yaroslav Mazurak wrote:\n\n>>>It needs to reflect how much cache the system is using - try the \"free\"\n>>>command to see figures.\n\n>>\tI'm not found \"free\" utility on FreeBSD 4.7. :(\n\n> <rant>\n> Grr.. I don't like freeBSD for it's top output.Active/inactive/Wired.. Grr.. \n> why can't it be shared buffered and cached? Same goes for HP-UX top. Looking at \n> it one gets hardly any real information.. Anyway that's just me..\n> </rant>\n\n\tGrr... I don't like PostgreSQL for it's memory usage parameters. In \nSybase ASA, I say for example: \"use 64Mb RAM for cache\". I don't worry \nabout data in this cache - this may be queries, sort areas, results etc. \nI think that server know better about it's memory requirements. I know \nthat Sybase *use*, and use *only this* memory and don't trap with \n\"Memory exhausted\" error.\n\tI'm not remember 700 minutes queries (more complex that my query), \nfollowing with \"memory exhausted\" error, on Sybase.\n\tAdvertising, he? :(\n\n> Top on freeBSD seems pretty unintuituive em but if you find any documentation \n> on that, that would help you. (Haven't booted in freeBSD in ages so no data \n> out of my head..)\n\n> You can try various sysctls on freeBSD. Basicalyl idea is to find out how much \n> of memory is used and how much is cached. FreeBSD must be providing that one in \n> some form..\n\n> IIRC there is a limit on filesystem cache on freeBSD. 300MB by default. If that \n> is the case, you might have to raise it to make effective_cache_size really \n> effective..\n\n\t\"Try various sysctls\" says nothing for me. 
I want use *all available \nRAM* (of course, without needed for OS use) for PostgreSQL.\n\n\tWhile idle time top says:\n\nMem: 14M Active, 1944K Inact, 28M Wired, 436K Cache, 48M Buf, 331M Free\nSwap: 368M Total, 17M Used, 352M Free, 4% Inuse\n\n\tAfter 1 minute of \"EXPLAIN ANALYZE SELECT SUM(showcalc('B00204', dd, \nr020, t071)) FROM v_file02wide WHERE a011 = 3 AND inrepdate(data) AND \nb030 IN (SELECT b030 FROM dov_bank WHERE dov_bank_box_22(box) IN ('NL', \n'NM')) AND r030 = 980;\" executing:\n\nMem: 64M Active, 17M Inact, 72M Wired, 436K Cache, 48M Buf, 221M Free\nSwap: 368M Total, 3192K Used, 365M Free\n\n PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND\n59063 postgres 49 0 65560K 55492K RUN 1:06 94.93% 94.63% postgres\n\n\tAfter 12 minutes of query executing:\n\nMem: 71M Active, 17M Inact, 72M Wired, 436K Cache, 48M Buf, 214M Free\nSwap: 368M Total, 3192K Used, 365M Free\n\n PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND\n59063 postgres 56 0 73752K 62996K RUN 12:01 99.02% 99.02% postgres\n\n\tI suspect that swap-file size is too small for my query... but query \nisn't too large, about 8K rows only. :-|\n\n> Shridhar\n\nWith best regards\n\tYaroslav Mazurak.\n\n", "msg_date": "Thu, 07 Aug 2003 11:24:17 +0300", "msg_from": "Yaroslav Mazurak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On Thursday 07 August 2003 09:24, Yaroslav Mazurak wrote:\n> > IIRC there is a limit on filesystem cache on freeBSD. 300MB by default.\n> > If that is the case, you might have to raise it to make\n> > effective_cache_size really effective..\n>\n> \t\"Try various sysctls\" says nothing for me. I want use *all available\n> RAM* (of course, without needed for OS use) for PostgreSQL.\n\nPG will be using the OS' disk caching.\n\n> \tWhile idle time top says:\n>\n> Mem: 14M Active, 1944K Inact, 28M Wired, 436K Cache, 48M Buf, 331M Free\n> Swap: 368M Total, 17M Used, 352M Free, 4% Inuse\n>\n> \tAfter 1 minute of \"EXPLAIN ANALYZE SELECT SUM(showcalc('B00204', dd,\n> r020, t071)) FROM v_file02wide WHERE a011 = 3 AND inrepdate(data) AND\n> b030 IN (SELECT b030 FROM dov_bank WHERE dov_bank_box_22(box) IN ('NL',\n> 'NM')) AND r030 = 980;\" executing:\n>\n> Mem: 64M Active, 17M Inact, 72M Wired, 436K Cache, 48M Buf, 221M Free\n> Swap: 368M Total, 3192K Used, 365M Free\n>\n> PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU\n> COMMAND 59063 postgres 49 0 65560K 55492K RUN 1:06 94.93% 94.63%\n> postgres\n>\n> \tAfter 12 minutes of query executing:\n>\n> Mem: 71M Active, 17M Inact, 72M Wired, 436K Cache, 48M Buf, 214M Free\n> Swap: 368M Total, 3192K Used, 365M Free\n>\n> PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU\n> COMMAND 59063 postgres 56 0 73752K 62996K RUN 12:01 99.02% 99.02%\n> postgres\n>\n> \tI suspect that swap-file size is too small for my query... but query\n> isn't too large, about 8K rows only. :-|\n\nLooks fine - PG isn't growing too large and your swap usage seems steady. 
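Before we touch anything else it's worth double-checking what the backend
actually has in force - something like this in psql (7.3 parameter names):

  SHOW shared_buffers;
  SHOW sort_mem;
  SHOW effective_cache_size;
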
We \ncan try upping the sort memory later, but given the amount of data you're \ndealing with I'd guess 64MB should be fine.\n\nI think we're going to have to break the query down a little and see where the \nissue is.\n\nWhat's the situation with:\nEXPLAIN ANALYZE SELECT <some_field> FROM v_file02wide WHERE a011 = 3 AND \ninrepdate(data) AND b030 IN (SELECT b030 FROM dov_bank WHERE \ndov_bank_box_22(box) IN ('NL', 'NM')) AND r030 = 980;\n\nand:\nEXPLAIN ANALYZE SELECT SUM(showcalc(<parameters>)) FROM <something simple>\n\nHopefully one of these will run in a reasonable time, and the other will not. \nThen we can examine the slow query in more detail. Nothing from your previous \nEXPLAIN (email of yesterday 13:42) looks unreasonable but something must be \ngoing wild in the heart of the query, otherwise you wouldn't be here.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 7 Aug 2003 15:52:48 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On Thu, 7 Aug 2003, Yaroslav Mazurak wrote:\n\n> \t\tHi All!\n> \n> \n> Richard Huxton wrote:\n> \n> >>>On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:\n> \n> >>>>sort_mem = 131072\n> \n> >>>This sort_mem value is *very* large - that's 131MB for *each sort* that\n> \n> \tIt's not TOO large *for PostgreSQL*. When I'm inserting a large amount \n> of data into tables, sort_mem helps. Value of 192M speeds up inserting \n> significantly (verified :))!\n\nIf I remember right, this is on a PII-400 with 384 Megs of RAM. On a \nmachine that small, 128Meg is probably too big for ensuring there are no \nswap storms. Once you force the box to swap you loose.\n\n> >>>>effective_cache_size = 65536\n> \n> >>>So you typically get about 256MB cache usage in top/free?\n> \n> >>\tNo, top shows 12-20Mb.\n> >>\tI'm reduced effective_cache_size to 4K blocks (16M?).\n> \n> > Cache size is in blocks of 8KB (usually) - it's a way of telling PG what\n> > the chances are of disk blocks being already cached by Linux.\n> \n> \tPostgreSQL is running on FreeBSD, memory block actually is 4Kb, but in \n> most cases documentation says about 8Kb... I don't know exactly about \n> real disk block size, but suspect that it's 4Kb. :)\n\nFYI effective cache size and shared_buffers are both measured in \nPostgresql sized blocks, which default to 8k but can be changed upon \ncompile. So, effective_cache size for a machine that shows 128 Meg kernel \ncache and 20 meg buffers would be (138*2^20)/(8*2^10) -> (138*2^10)/8 -> \n17664.\n\n> \tI'm afraid that this may be too long. :-(((\n> \tYesterday I'm re-execute my query with all changes... after 700 (!) \n> minutes query failed with: \"ERROR: Memory exhausted in AllocSetAlloc(104)\".\n> \tI don't understand: result is actually 8K rows long only, but \n> PostgreSQL failed! Why?!! Function showcalc is recursive, but in my \n> query used with level 1 depth only (I know exactly).\n> \tAgain: I think that this is PostgreSQL's lack of quality memory \n> management. :-(\n\nCan you run top while this is happening and see postgresql's memory usage \nclimb or df the disks to see if they're filling up? could be swap is \nfilling even. How much swap space do you have allocated, by the way?\n\nAlso, you have to restart postgresql to get the changes to postgresql.conf \nto take effect. Just in case you haven't. 
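Note that sort_mem can also be changed for just your session without a
restart (handy while testing), e.g.:

  SET sort_mem = 8192;

but shared_buffers only takes effect after a full postmaster restart.
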
Do a show all; in psql to see \nif the settings are what they should be.\n\n", "msg_date": "Thu, 7 Aug 2003 10:15:20 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On Thu, 7 Aug 2003, Yaroslav Mazurak wrote:\n\n> \t\tHi All!\n> \n> Shridhar Daithankar wrote:\n> \n> > On 7 Aug 2003 at 10:05, Yaroslav Mazurak wrote:\n> \n> >>>It needs to reflect how much cache the system is using - try the \"free\"\n> >>>command to see figures.\n> \n> >>\tI'm not found \"free\" utility on FreeBSD 4.7. :(\n> \n> > <rant>\n> > Grr.. I don't like freeBSD for it's top output.Active/inactive/Wired.. Grr.. \n> > why can't it be shared buffered and cached? Same goes for HP-UX top. Looking at \n> > it one gets hardly any real information.. Anyway that's just me..\n> > </rant>\n> \n> \tGrr... I don't like PostgreSQL for it's memory usage parameters. In \n> Sybase ASA, I say for example: \"use 64Mb RAM for cache\". I don't worry \n> about data in this cache - this may be queries, sort areas, results etc. \n> I think that server know better about it's memory requirements. I know \n> that Sybase *use*, and use *only this* memory and don't trap with \n> \"Memory exhausted\" error.\n> \tI'm not remember 700 minutes queries (more complex that my query), \n> following with \"memory exhausted\" error, on Sybase.\n> \tAdvertising, he? :(\n> \n> > Top on freeBSD seems pretty unintuituive em but if you find any documentation \n> > on that, that would help you. (Haven't booted in freeBSD in ages so no data \n> > out of my head..)\n> \n> > You can try various sysctls on freeBSD. Basicalyl idea is to find out how much \n> > of memory is used and how much is cached. FreeBSD must be providing that one in \n> > some form..\n> \n> > IIRC there is a limit on filesystem cache on freeBSD. 300MB by default. If that \n> > is the case, you might have to raise it to make effective_cache_size really \n> > effective..\n> \n> \t\"Try various sysctls\" says nothing for me. I want use *all available \n> RAM* (of course, without needed for OS use) for PostgreSQL.\n\nThat's a nice theory, but it doesn't work out that way. About every two \nmonths someone shows up wanting postgresql to use all the memory in their \nbox for caching and we wind up explaining that the kernel is better at \ncaching than postgresql is, and how it's better not to push the usage of \nthe memory right up to the limit.\n\nThe reason you don't want to use every bit for postgresql is that, if you \nuse add load after that you may make the machine start to swap out and \nslow down considerably.\n\nMy guess is that this is exactly what's happening to you, you're using so \nmuch memory that the machine is running out and slowing down.\n\nDrop shared_buffers to 1000 to 4000, sort_mem to 8192 and start over from \nthere. Then, increase them each one at a time until there's no increase \nin speed, or stop if it starts getting slower and back off.\n\nbigger is NOT always better.\n\n", "msg_date": "Thu, 7 Aug 2003 10:20:26 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "\t\tHi All!\n\n\n\tFirst, thanks for answers!\n\nRichard Huxton wrote:\n\n> On Thursday 07 August 2003 09:24, Yaroslav Mazurak wrote:\n\n>>>IIRC there is a limit on filesystem cache on freeBSD. 
300MB by default.\n>>>If that is the case, you might have to raise it to make\n>>>effective_cache_size really effective..\n\n>>\t\"Try various sysctls\" says nothing for me. I want use *all available\n>>RAM* (of course, without needed for OS use) for PostgreSQL.\n\n> PG will be using the OS' disk caching.\n\n\tI think all applications using OS disk caching. ;)\n\tOr you want to say that PostgreSQL tuned for using OS-specific cache \nimplementation?\n\tDo you know method for examining real size of OS filesystem cache? If I \nunderstood right, PostgreSQL dynamically use all available RAM minus \nshared_buffers minus k * sort_mem minus effective_cache_size?\n\tI want configure PostgreSQL for using _maximum_ of available RAM.\n\n> Looks fine - PG isn't growing too large and your swap usage seems steady. We \n> can try upping the sort memory later, but given the amount of data you're \n> dealing with I'd guess 64MB should be fine.\n\n> I think we're going to have to break the query down a little and see where the \n> issue is.\n\n> What's the situation with:\n> EXPLAIN ANALYZE SELECT <some_field> FROM v_file02wide WHERE a011 = 3 AND \n> inrepdate(data) AND b030 IN (SELECT b030 FROM dov_bank WHERE \n> dov_bank_box_22(box) IN ('NL', 'NM')) AND r030 = 980;\n\n> and:\n> EXPLAIN ANALYZE SELECT SUM(showcalc(<parameters>)) FROM <something simple>\n\n> Hopefully one of these will run in a reasonable time, and the other will not. \n> Then we can examine the slow query in more detail. Nothing from your previous \n> EXPLAIN (email of yesterday 13:42) looks unreasonable but something must be \n> going wild in the heart of the query, otherwise you wouldn't be here.\n\n\tYes, you're right. I've tested a few statements and obtain interesting \nresults.\n\tSELECT * FROM v_file02wide WHERE... executes about 34 seconds.\n\tSELECT showcalc(...); executes from 0.7 seconds (without recursion) up \nto 6.3 seconds if recursion is used! :(\n\tThis mean, that approximate execute time for fully qualified SELECT \nwith about 8K rows is... about 13 hours! :-O\n\tHence, problem is in my function showcalc:\n\nCREATE OR REPLACE FUNCTION showcalc(VARCHAR(10), VARCHAR(2), VARCHAR(4), \nNUMERIC(16)) RETURNS NUMERIC(16)\nLANGUAGE SQL STABLE AS '\n-- Parameters: code, dd, r020, t071\n\tSELECT COALESCE(\n\t\t(SELECT sc.koef * $4\n\t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n\t\t\tWHERE s.kod = $1\n\t\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) = ''[''\n\t\t\t\tAND SUBSTR(acc_mask, 1, 4) = $3\n\t\t\t\tAND SUBSTR(acc_mask, 5, 1) = SUBSTR($2, 1, 1)),\n\t\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2, \nLENGTH(acc_mask) - 2), $2, $3, $4), 0))\n\t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n\t\t\tWHERE s.kod = $1\n\t\t\t\tAND SUBSTR(acc_mask, 1, 1) = ''[''),\n\t\t0) AS showing;\n';\n\n\tBTW, cross join \",\" with WHERE clause don't improve performance \nrelative to NATURAL JOIN.\n\tAdditionally, with user-defined function beginchar (SUBSTR(..., 1, 1)), \nused for indexing, showcalc executes about 16 seconds. With function \nSUBSTR the same showcalc executes 6 seconds.\n\n\tTable showing contain information about showing: showing id (id_show), \ncode (kod) and description (opys). Table showcomp contain information \nabout showing components (accounts): showing id (id_show), coefficient \n(koef) and account_mask (acc_mask). 
Account mask is 4-char balance \naccount mask || 1-char account characteristics or another showing in \nsquare bracket.\n\tExample:\n\tshowing\n\t=========+==========+===========\n\t id_show | kod | opys\n\t=========+==========+===========\n\t 1 | 'A00101' | 'Received'\n\t 2 | 'A00102' | 'Sent'\n\t 3 | 'A00103' | 'Total'\n\t=========+==========+===========\n\tshowcomp\n\t=========+======+===========\n\t id_show | koef | acc_mask\n\t=========+======+===========\n\t 1 | 1.0 | '60102'\n\t 1 | 1.0 | '60112'\n\t 2 | 1.0 | '70011'\n\t 2 | 1.0 | '70021'\n\t 3 | 1.0 | '[A00101]'\n\t 3 | -1.0 | '[A00102]'\n\t=========+======+===========\n\tThis mean that: A00101 includes accounts 6010 and 6011 with \ncharacteristics 2, A00102 includes accounts 7001 and 7002 with \ncharacteristics 1, and A00103 = A00102 - A00101. In almost all cases \nrecursion depth not exceed 1 level, but I'm not sure. :)\n\n\tView v_file02wide contain account (r020) and 2-char characteristics \n(dd). Using showcalc I want to sum numbers (t071) on accounts included \nin appropriate showings. I.e SELECT SUM(showcalc('A00101', dd, r020, \nt071)) FROM ... must return sum on accounts 6010 and 6011 with \ncharacteristics 2 etc.\n\n\tNow I think about change function showcalc or/and this data \nstructures... :)\n\tAnyway, 600Mb is too low for PostgreSQL for executing my query - DBMS \nraise error after 11.5 hours (of estimated 13?). :(\n\n\nWith best regards\n\tYaroslav Mazurak.\n\n", "msg_date": "Thu, 07 Aug 2003 19:30:49 +0300", "msg_from": "Yaroslav Mazurak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "scott.marlowe wrote:\n\n> On Thu, 7 Aug 2003, Yaroslav Mazurak wrote:\n\n>>Shridhar Daithankar wrote:\n\n> That's a nice theory, but it doesn't work out that way. About every two \n> months someone shows up wanting postgresql to use all the memory in their \n> box for caching and we wind up explaining that the kernel is better at \n> caching than postgresql is, and how it's better not to push the usage of \n> the memory right up to the limit.\n\n\tI'm reading this mailing list just few days. :)))\n\n> The reason you don't want to use every bit for postgresql is that, if you \n> use add load after that you may make the machine start to swap out and \n> slow down considerably.\n\n\tWhat kind of load? PostgreSQL or another? I say that for this PC \nprimary task and critical goal is DBMS and it's performance.\n\n> My guess is that this is exactly what's happening to you, you're using so \n> much memory that the machine is running out and slowing down.\n\n> Drop shared_buffers to 1000 to 4000, sort_mem to 8192 and start over from \n> there. Then, increase them each one at a time until there's no increase \n> in speed, or stop if it starts getting slower and back off.\n\n> bigger is NOT always better.\n\n\tLet I want to use all available RAM with PostgreSQL.\n\tWithout executing query (PostgreSQL is running) top say now:\n\nMem: 71M Active, 23M Inact, 72M Wired, 436K Cache, 48M Buf, 208M Free\nSwap: 368M Total, 2852K Used, 366M Free\n\n\tIt's right that I can figure that I can use 384M (total RAM) - 72M \n(wired) - 48M (buf) = 264M for PostgreSQL.\n\tHence, if I set effective_cache_size to 24M (3072 8K blocks), \nreasonable value (less than 240M, say 48M) for sort_mem, some value for \nshared_buffers (i.e. 
24M, or 6144 4K blocks (FreeBSD), or 3072 8K blocks \n(PostgreSQL)), and rest of RAM 264M (total free with OS cache) - 24M \n(reserved for OS cache) - 48M (sort) - 24M (shared) = 168M PostgreSQL \nallocate dynamically by himself?\n\n\nWith best regards\n\tYaroslav Mazurak.\n\n", "msg_date": "Thu, 07 Aug 2003 20:04:03 +0300", "msg_from": "Yaroslav Mazurak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "\n> Mem: 71M Active, 23M Inact, 72M Wired, 436K Cache, 48M Buf, 208M Free\n> Swap: 368M Total, 2852K Used, 366M Free\n> \n> \tIt's right that I can figure that I can use 384M (total RAM) - 72M \n> (wired) - 48M (buf) = 264M for PostgreSQL.\n> \tHence, if I set effective_cache_size to 24M (3072 8K blocks), \n> reasonable value (less than 240M, say 48M) for sort_mem, some value for \n> shared_buffers (i.e. 24M, or 6144 4K blocks (FreeBSD), or 3072 8K blocks \n> (PostgreSQL)), and rest of RAM 264M (total free with OS cache) - 24M \n> (reserved for OS cache) - 48M (sort) - 24M (shared) = 168M PostgreSQL \n> allocate dynamically by himself?\n\nTotally, utterly the wrong way around.\n\nStart with 384M, subtract whatever is in use by other processes,\nexcepting kernel disk cache, subtract your PG shared buffers, subtract\n(PG proc size + PG sort mem)*(max number of PG processes you need to run\n- should be same as max_connections if thinking conservatively), leave\nsome spare room so you can ssh in without swapping, and *the remainder*\nis what you should set effective_cache_size to. This is all in the\ndocs.\n\nThe key thing is: set effective_cache_size *last*. Note that Postgres\nassumes your OS is effective at caching disk blocks, so if that\nassumption is wrong you lose performance.\n\nAlso, why on _earth_ would you need 48MB for sort memory? Are you\nseriously going to run a query that returns 48M of data and then sort\nit, on a machine with 384M of RAM?\n\nM\n\n\n\n", "msg_date": "07 Aug 2003 19:06:05 +0100", "msg_from": "matt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On Thu, 7 Aug 2003, Yaroslav Mazurak wrote:\n\n> scott.marlowe wrote:\n> \n> > On Thu, 7 Aug 2003, Yaroslav Mazurak wrote:\n> \n> >>Shridhar Daithankar wrote:\n> \n> > That's a nice theory, but it doesn't work out that way. About every two \n> > months someone shows up wanting postgresql to use all the memory in their \n> > box for caching and we wind up explaining that the kernel is better at \n> > caching than postgresql is, and how it's better not to push the usage of \n> > the memory right up to the limit.\n> \n> \tI'm reading this mailing list just few days. :)))\n\nWe all get started somewhere. Glad to have you on the list.\n\n> > The reason you don't want to use every bit for postgresql is that, if you \n> > use add load after that you may make the machine start to swap out and \n> > slow down considerably.\n> \n> \tWhat kind of load? PostgreSQL or another? I say that for this PC \n> primary task and critical goal is DBMS and it's performance.\n\nJust Postgresql. Imagine that you set up the machine with 64 Meg sort_mem \nsetting, and it has only two or three users right now. 
If the number of \nusers jumps up to 16 or 32, then it's quite possible that all those \nconnections can each spawn a sort or two, and if they are large sorts, \nthen poof, all your memory is gone and your box is swapping out like mad.\n\n> > My guess is that this is exactly what's happening to you, you're using so \n> > much memory that the machine is running out and slowing down.\n> \n> > Drop shared_buffers to 1000 to 4000, sort_mem to 8192 and start over from \n> > there. Then, increase them each one at a time until there's no increase \n> > in speed, or stop if it starts getting slower and back off.\n> \n> > bigger is NOT always better.\n> \n> \tLet I want to use all available RAM with PostgreSQL.\n> \tWithout executing query (PostgreSQL is running) top say now:\n> \n> Mem: 71M Active, 23M Inact, 72M Wired, 436K Cache, 48M Buf, 208M Free\n> Swap: 368M Total, 2852K Used, 366M Free\n> \n> \tIt's right that I can figure that I can use 384M (total RAM) - 72M \n> (wired) - 48M (buf) = 264M for PostgreSQL.\n> \tHence, if I set effective_cache_size to 24M (3072 8K blocks), \n> reasonable value (less than 240M, say 48M) for sort_mem, some value for \n> shared_buffers (i.e. 24M, or 6144 4K blocks (FreeBSD), or 3072 8K blocks \n> (PostgreSQL)), and rest of RAM 264M (total free with OS cache) - 24M \n> (reserved for OS cache) - 48M (sort) - 24M (shared) = 168M PostgreSQL \n> allocate dynamically by himself?\n\nIt's important to understand that effective_cache_size is simply a number \nthat tells the query planner about how big the kernel cache is for \npostgresql.\n\nNote that in your top output, it shows 48 M buffer, and 208M free, and \n436k cache. Adding those up comes to about 256 Megs of available cache to \nthe OS.\n\nBut that's assuming postgresql isn't gonna use some of that for sorts or \nbuffers, so assuming some of the memory will get used for that, then it's \nlikely that effective_cache_size will really be about 100 to 150 Meg.\n\nLike someone else said, you set effective cache size last. First set \nbuffers to a few thousand (1000 to 5000 is usually a good number) and set \nsort_mem to 8 to 32 meg to start, and adjust it as you test the database \nunder parallel load. Then, take the numbers you get for free/buffer/cache \nfrom top to figure out effective_cache_size.\n\nAgain, I'll repeat what I said in an earlier post on this, the size of \nbuffers and effective_cache_size are set in POSTGRESQL blocks. i.e. your \nkernel page block size is meaningless here. If you have 100 Meg left \nover, then you need to do the math as:\n\n100*2^20\n--------- \n8*2^10\n\nbecomes \n\n100*2^10\n---------\n8\n\nbecomes \n\n12800 (8k blocks.)\n\nReading your other response I got the feeling you may have been under the \nimpression that this is set in OS blocks, so I just wanted to make sure it \nwas clear it's not.\n\n", "msg_date": "Thu, 7 Aug 2003 12:52:09 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On Thu, 2003-08-07 at 12:04, Yaroslav Mazurak wrote:\n> scott.marlowe wrote:\n> \n> > On Thu, 7 Aug 2003, Yaroslav Mazurak wrote:\n> \n> >>Shridhar Daithankar wrote:\n> \n[snip]\n> > My guess is that this is exactly what's happening to you, you're using so \n> > much memory that the machine is running out and slowing down.\n> \n> > Drop shared_buffers to 1000 to 4000, sort_mem to 8192 and start over from \n> > there. 
Then, increase them each one at a time until there's no increase \n> > in speed, or stop if it starts getting slower and back off.\n> \n> > bigger is NOT always better.\n> \n> \tLet I want to use all available RAM with PostgreSQL.\n> \tWithout executing query (PostgreSQL is running) top say now:\n\nYou're missing the point. PostgreSQL is not designed like Oracle,\nSybase, etc. \n\nThey say, \"Give me all the RAM; I will cache everything myself.\"\n\nPostgreSQL says \"The kernel programmers have worked very hard on\ndisk caching. Why should I duplicate their efforts?\"\n\nThus, give PG only a \"little\" RAM, and let the OS' disk cache hold\nthe rest.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"Man, I'm pretty. Hoo Hah!\" |\n| Johnny Bravo |\n+---------------------------------------------------------------+\n\n\n", "msg_date": "07 Aug 2003 13:59:14 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "On Thursday 07 August 2003 17:30, Yaroslav Mazurak wrote:\n> Hi All!\n>\n>\n> \tFirst, thanks for answers!\n>\n> Richard Huxton wrote:\n> > On Thursday 07 August 2003 09:24, Yaroslav Mazurak wrote:\n> >>>IIRC there is a limit on filesystem cache on freeBSD. 300MB by default.\n> >>>If that is the case, you might have to raise it to make\n> >>>effective_cache_size really effective..\n> >>\n> >>\t\"Try various sysctls\" says nothing for me. I want use *all available\n> >>RAM* (of course, without needed for OS use) for PostgreSQL.\n> >\n> > PG will be using the OS' disk caching.\n>\n> \tI think all applications using OS disk caching. ;)\n> \tOr you want to say that PostgreSQL tuned for using OS-specific cache\n> implementation?\n> \tDo you know method for examining real size of OS filesystem cache? If I\n> understood right, PostgreSQL dynamically use all available RAM minus\n> shared_buffers minus k * sort_mem minus effective_cache_size?\n> \tI want configure PostgreSQL for using _maximum_ of available RAM.\n\nPG's memory use can be split into four areas (note - I'm not a developer so \nthis could be wrong).\n1. Shared memory - vital so that different connections can communicate with \neach other. Shouldn't be too large, otherwise PG spends too long managing its \nshared memory rather than working on your queries.\n2. Sort memory - If you have to sort results during a query it will use up to \nthe amount you define in sort_mem and then use disk if it needs any more. \nThis is for each sort.\n3. Results memory - If you're returning 8000 rows then PG will assemble these \nand send them to the client which also needs space to store the 8000 rows.\n4. Working memory - to actually run the queries - stack and heap space to keep \ntrack of its calculations etc.\n\nYour best bet is to start off with some smallish reasonable values and step \nthem up gradually until you don't see any improvement. What is vital is that \nthe OS can cache enough disk-space to keep all your commonly used tables and \nindexes in memory - if it can't then you'll see performance drop rapidly as \nPG has to keep accessing the disk.\n\nFor the moment, I'd leave the settings roughly where they are while we look at \nthe query, then once that's out of the way we can fine-tune the settings.\n\n[snip suggestion to break the query down]\n> \tYes, you're right. 
I've tested a few statements and obtain interesting\n> results.\n> \tSELECT * FROM v_file02wide WHERE... executes about 34 seconds.\n> \tSELECT showcalc(...); executes from 0.7 seconds (without recursion) up\n> to 6.3 seconds if recursion is used! :(\n> \tThis mean, that approximate execute time for fully qualified SELECT\n> with about 8K rows is... about 13 hours! :-O\n\nHmm - not good.\n\n> \tHence, problem is in my function showcalc:\n\nThat's certainly the place to start, although we might be able to do something \nwith v_file02wide later.\n\n> CREATE OR REPLACE FUNCTION showcalc(VARCHAR(10), VARCHAR(2), VARCHAR(4),\n> NUMERIC(16)) RETURNS NUMERIC(16)\n> LANGUAGE SQL STABLE AS '\n> -- Parameters: code, dd, r020, t071\n> \tSELECT COALESCE(\n> \t\t(SELECT sc.koef * $4\n> \t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n> \t\t\tWHERE s.kod = $1\n> \t\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) = ''[''\n> \t\t\t\tAND SUBSTR(acc_mask, 1, 4) = $3\n> \t\t\t\tAND SUBSTR(acc_mask, 5, 1) = SUBSTR($2, 1, 1)),\n> \t\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2,\n> LENGTH(acc_mask) - 2), $2, $3, $4), 0))\n> \t\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n> \t\t\tWHERE s.kod = $1\n> \t\t\t\tAND SUBSTR(acc_mask, 1, 1) = ''[''),\n> \t\t0) AS showing;\n> ';\n>\n> \tBTW, cross join \",\" with WHERE clause don't improve performance\n> relative to NATURAL JOIN.\n> \tAdditionally, with user-defined function beginchar (SUBSTR(..., 1, 1)),\n> used for indexing, showcalc executes about 16 seconds. With function\n> SUBSTR the same showcalc executes 6 seconds.\n\nFair enough - substr should be fairly efficient.\n\n[snip explanation of table structures and usage]\n\nI'm not going to claim I understood everything in your explanation, but there \nare a couple of things I can suggest. However, before you go and do any of \nthat, can I ask you to post an EXPLAIN ANALYSE of two calls to your \nshowcalc() function (once for a simple account, once for one with recursion)? \nYou'll need to cut and paste the query as standard SQL since the explain \nwon't look inside the function body.\n\nOK - bear in mind that these suggestions are made without the benefit of the \nexplain analyse:\n\n1. You could try splitting out the various tags of your mask into different \nfields - that will instantly eliminate all the substr() calls and might make \na difference. If you want to keep the mask for display purposes, we could \nbuild a trigger to keep it in sync with the separate flags.\n\n2. Use a \"calculations\" table and build your results step by step. So - \ncalculate all the simple accounts, then calculate the ones that contain the \nsimple accounts.\n\n3. You could keep a separate \"account_contains\" table that might look like:\n acc_id | contains\n A001 | A001\n A002 | A002\n A003 | A003\n A003 | A001\n A004 | A004\n A004 | A003\n A004 | A001\n\nSo here A001/A002 are simple accounts but A003 contains A001 too. A004 \ncontains A003 and A001. The table can be kept up to date automatically using \nsome triggers.\nThis should make it simple to pick up all the accounts contained within the \ntarget account and might mean you can eliminate the recursion.\n\n> \tNow I think about change function showcalc or/and this data\n> structures... :)\n\nPost the EXPLAIN ANALYSE first - maybe someone smarter than me will have an \nidea.\n\n> \tAnyway, 600Mb is too low for PostgreSQL for executing my query - DBMS\n> raise error after 11.5 hours (of estimated 13?). :(\n\nI think the problem is the 13 hours, not the 600MB. 
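To make suggestion 3 above a little more concrete, the rough idea is
something like this (completely untested, and the extra koef column is only
my guess at how you'd carry the coefficient down the chain):

 CREATE TABLE account_contains (
     acc_id   VARCHAR(10) NOT NULL,  -- the showing you ask for
     contains VARCHAR(10) NOT NULL,  -- a simple showing it expands to
     koef     NUMERIC(16) NOT NULL,  -- accumulated coefficient
     PRIMARY KEY (acc_id, contains)
 );

 -- showcalc('B00204', '20', '6010', 100) then becomes one flat query:
 SELECT SUM(ac.koef * sc.koef * 100)
 FROM account_contains ac, showing s, showcomp sc
 WHERE ac.acc_id = 'B00204'
   AND s.kod = ac.contains
   AND sc.id_show = s.id_show
   AND SUBSTR(sc.acc_mask, 1, 1) <> '['
   AND SUBSTR(sc.acc_mask, 1, 4) = '6010'
   AND SUBSTR(sc.acc_mask, 5, 1) = '2';

Keeping account_contains up to date is what the triggers would be for.
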
Once we've got the query \nrunning in a reasonable length of time (seconds) then the memory requirements \nwill go down, I'm sure.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 7 Aug 2003 20:06:34 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance problem -> tuning" }, { "msg_contents": "\t\tHi, All!\n\n\nRichard Huxton wrote:\n\n> On Thursday 07 August 2003 17:30, Yaroslav Mazurak wrote:\n\n>>Richard Huxton wrote:\n\n>>>On Thursday 07 August 2003 09:24, Yaroslav Mazurak wrote:\n\n> PG's memory use can be split into four areas (note - I'm not a developer so \n> this could be wrong).\n> 1. Shared memory - vital so that different connections can communicate with \n> each other. Shouldn't be too large, otherwise PG spends too long managing its \n> shared memory rather than working on your queries.\n> 2. Sort memory - If you have to sort results during a query it will use up to \n> the amount you define in sort_mem and then use disk if it needs any more. \n> This is for each sort.\n> 3. Results memory - If you're returning 8000 rows then PG will assemble these \n> and send them to the client which also needs space to store the 8000 rows.\n> 4. Working memory - to actually run the queries - stack and heap space to keep \n> track of its calculations etc.\n\n\tHence, total free RAM - shared_buffers - k * sort_mem - \neffective_cache_size == (results memory + working memory)?\n\n> For the moment, I'd leave the settings roughly where they are while we look at \n> the query, then once that's out of the way we can fine-tune the settings.\n\n\tOK.\n\n>>\tAdditionally, with user-defined function beginchar (SUBSTR(..., 1, 1)),\n>>used for indexing, showcalc executes about 16 seconds. With function\n>>SUBSTR the same showcalc executes 6 seconds.\n\n> Fair enough - substr should be fairly efficient.\n\n\tCost of user-defined SQL function call in PostgreSQL is high?\n\n> OK - bear in mind that these suggestions are made without the benefit of the \n> explain analyse:\n\n> 1. You could try splitting out the various tags of your mask into different \n> fields - that will instantly eliminate all the substr() calls and might make \n> a difference. If you want to keep the mask for display purposes, we could \n> build a trigger to keep it in sync with the separate flags.\n\n\tThis will be next step. :)\n\n> 2. Use a \"calculations\" table and build your results step by step. So - \n> calculate all the simple accounts, then calculate the ones that contain the \n> simple accounts.\n\n\tI give to SQL to user and few helper functions. Therefore single step \nis required for building results.\n\n> 3. You could keep a separate \"account_contains\" table that might look like:\n> acc_id | contains\n> A001 | A001\n> A002 | A002\n> A003 | A003\n> A003 | A001\n> A004 | A004\n> A004 | A003\n> A004 | A001\n\n> So here A001/A002 are simple accounts but A003 contains A001 too. A004 \n> contains A003 and A001. The table can be kept up to date automatically using \n> some triggers.\n> This should make it simple to pick up all the accounts contained within the \n> target account and might mean you can eliminate the recursion.\n\n\tThanks, sounds not so bad, but I suspect that this method don't improve \nperformance essentially.\n\tI think about another secondary table for showcomp (compshow :)) with \nshowings \"compiled\" into account numbers and characteritics. 
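	Maybe something like this (only an idea for now, not written or
tested, column names are examples only):

CREATE TABLE compshow (
    kod  VARCHAR(10) NOT NULL,  -- showing code, i.e. 'B00204'
    koef NUMERIC(16) NOT NULL,  -- coefficient after expanding brackets
    r020 VARCHAR(4)  NOT NULL,  -- plain account number from acc_mask
    d1   CHAR(1)     NOT NULL   -- first char of characteristics (dd)
);
CREATE INDEX index_compshow_kod ON compshow (kod, r020, d1);
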
After \ninserting or updating new or old showing this showing will be \n\"recompiled\" by explicit function call or trigger into atomary account \nnumbers and characteristics.\n\n> Post the EXPLAIN ANALYSE first - maybe someone smarter than me will have an \n> idea.\n\n\tFirst result - simple showing 'B00202' (without recursion).\n\tSecond result - complex showing 'B00204' with recursion (1 level depth).\n\tShowing 'B00202' contains 85 accounts, 'B00203' - 108 accounts, and \n'B00204' = 'B00202' - 'B00203'.\n\tQuery text:\n\nEXPLAIN ANALYZE SELECT COALESCE(\n\t(SELECT sc.koef * 100\n\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n\t\tWHERE s.kod = 'B00202'\n\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) = '['\n\t\t\tAND SUBSTR(acc_mask, 1, 4) = '6010'\n\t\t\tAND SUBSTR(acc_mask, 5, 1) = SUBSTR('20', 1, 1)),\n\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2, \nLENGTH(acc_mask) - 2), '20', '6010', 100), 0))\n\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n\t\tWHERE s.kod = 'B00202'\n\t\t\tAND SUBSTR(acc_mask, 1, 1) = '['),\n\t0) AS showing;\n\nEXPLAIN ANALYZE SELECT COALESCE(\n\t(SELECT sc.koef * 100\n\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n\t\tWHERE s.kod = 'B00204'\n\t\t\tAND NOT SUBSTR(acc_mask, 1, 1) = '['\n\t\t\tAND SUBSTR(acc_mask, 1, 4) = '6010'\n\t\t\tAND SUBSTR(acc_mask, 5, 1) = SUBSTR('20', 1, 1)),\n\t(SELECT SUM(sc.koef * COALESCE(showcalc(SUBSTR(acc_mask, 2, \nLENGTH(acc_mask) - 2), '20', '6010', 100), 0))\n\t\tFROM showing AS s NATURAL JOIN showcomp AS sc\n\t\tWHERE s.kod = 'B00204'\n\t\t\tAND SUBSTR(acc_mask, 1, 1) = '['),\n\t0) AS showing;\n \n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=704.39..704.39 \nrows=1 loops=1)\n InitPlan\n -> Hash Join (cost=5.22..449.63 rows=1 width=19) (actual \ntime=167.28..352.90 rows=1 loops=1)\n Hash Cond: (\"outer\".id_show = \"inner\".id_show)\n -> Seq Scan on showcomp sc (cost=0.00..444.40 rows=1 \nwidth=15) (actual time=23.29..350.17 rows=32 loops=1)\n Filter: ((substr((acc_mask)::text, 1, 1) <> '['::text) \nAND (substr((acc_mask)::text, 1, 4) = '6010'::text) AND \n(substr((acc_mask)::text, 5, 1) = '2'::text))\n -> Hash (cost=5.22..5.22 rows=1 width=4) (actual \ntime=0.67..0.67 rows=0 loops=1)\n -> Index Scan using index_showing_kod on showing s \n(cost=0.00..5.22 rows=1 width=4) (actual time=0.61..0.64 rows=1 loops=1)\n Index Cond: (kod = 'B00202'::character varying)\n -> Hash Join (cost=5.22..449.63 rows=1 width=19) (actual \ntime=166.20..351.28 rows=1 loops=1)\n Hash Cond: (\"outer\".id_show = \"inner\".id_show)\n -> Seq Scan on showcomp sc (cost=0.00..444.40 rows=1 \nwidth=15) (actual time=23.36..349.24 rows=32 loops=1)\n Filter: ((substr((acc_mask)::text, 1, 1) <> '['::text) \nAND (substr((acc_mask)::text, 1, 4) = '6010'::text) AND \n(substr((acc_mask)::text, 5, 1) = '2'::text))\n -> Hash (cost=5.22..5.22 rows=1 width=4) (actual \ntime=0.17..0.17 rows=0 loops=1)\n -> Index Scan using index_showing_kod on showing s \n(cost=0.00..5.22 rows=1 width=4) (actual time=0.12..0.14 rows=1 loops=1)\n Index Cond: (kod = 'B00202'::character varying)\n -> Aggregate (cost=312.61..312.61 rows=1 width=28) (never executed)\n -> Hash Join (cost=5.22..312.61 rows=1 width=28) (never \nexecuted)\n Hash Cond: (\"outer\".id_show = \"inner\".id_show)\n -> Seq Scan on showcomp sc (cost=0.00..307.04 \nrows=69 width=24) (never executed)\n Filter: 
(substr((acc_mask)::text, 1, 1) = '['::text)\n -> Hash (cost=5.22..5.22 rows=1 width=4) (never \nexecuted)\n -> Index Scan using index_showing_kod on \nshowing s (cost=0.00..5.22 rows=1 width=4) (never executed)\n Index Cond: (kod = 'B00202'::character \nvarying)\n -> Aggregate (cost=312.61..312.61 rows=1 width=28) (never executed)\n -> Hash Join (cost=5.22..312.61 rows=1 width=28) (never \nexecuted)\n Hash Cond: (\"outer\".id_show = \"inner\".id_show)\n -> Seq Scan on showcomp sc (cost=0.00..307.04 \nrows=69 width=24) (never executed)\n Filter: (substr((acc_mask)::text, 1, 1) = '['::text)\n -> Hash (cost=5.22..5.22 rows=1 width=4) (never \nexecuted)\n -> Index Scan using index_showing_kod on \nshowing s (cost=0.00..5.22 rows=1 width=4) (never executed)\n Index Cond: (kod = 'B00202'::character \nvarying)\n Total runtime: 706.82 msec\n(33 rows)\n\n \n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=6256.20..6256.21 \nrows=1 loops=1)\n InitPlan\n -> Hash Join (cost=5.22..449.63 rows=1 width=19) (actual \ntime=357.43..357.43 rows=0 loops=1)\n Hash Cond: (\"outer\".id_show = \"inner\".id_show)\n -> Seq Scan on showcomp sc (cost=0.00..444.40 rows=1 \nwidth=15) (actual time=23.29..355.41 rows=32 loops=1)\n Filter: ((substr((acc_mask)::text, 1, 1) <> '['::text) \nAND (substr((acc_mask)::text, 1, 4) = '6010'::text) AND \n(substr((acc_mask)::text, 5, 1) = '2'::text))\n -> Hash (cost=5.22..5.22 rows=1 width=4) (actual \ntime=0.22..0.22 rows=0 loops=1)\n -> Index Scan using index_showing_kod on showing s \n(cost=0.00..5.22 rows=1 width=4) (actual time=0.16..0.19 rows=1 loops=1)\n Index Cond: (kod = 'B00204'::character varying)\n -> Hash Join (cost=5.22..449.63 rows=1 width=19) (never executed)\n Hash Cond: (\"outer\".id_show = \"inner\".id_show)\n -> Seq Scan on showcomp sc (cost=0.00..444.40 rows=1 \nwidth=15) (never executed)\n Filter: ((substr((acc_mask)::text, 1, 1) <> '['::text) \nAND (substr((acc_mask)::text, 1, 4) = '6010'::text) AND \n(substr((acc_mask)::text, 5, 1) = '2'::text))\n -> Hash (cost=5.22..5.22 rows=1 width=4) (never executed)\n -> Index Scan using index_showing_kod on showing s \n(cost=0.00..5.22 rows=1 width=4) (never executed)\n Index Cond: (kod = 'B00204'::character varying)\n -> Aggregate (cost=312.61..312.61 rows=1 width=28) (actual \ntime=2952.69..2952.69 rows=1 loops=1)\n -> Hash Join (cost=5.22..312.61 rows=1 width=28) (actual \ntime=12.59..264.69 rows=2 loops=1)\n Hash Cond: (\"outer\".id_show = \"inner\".id_show)\n -> Seq Scan on showcomp sc (cost=0.00..307.04 \nrows=69 width=24) (actual time=0.09..251.52 rows=1035 loops=1)\n Filter: (substr((acc_mask)::text, 1, 1) = '['::text)\n -> Hash (cost=5.22..5.22 rows=1 width=4) (actual \ntime=0.17..0.17 rows=0 loops=1)\n -> Index Scan using index_showing_kod on \nshowing s (cost=0.00..5.22 rows=1 width=4) (actual time=0.12..0.14 \nrows=1 loops=1)\n Index Cond: (kod = 'B00204'::character \nvarying)\n -> Aggregate (cost=312.61..312.61 rows=1 width=28) (actual \ntime=2945.79..2945.80 rows=1 loops=1)\n -> Hash Join (cost=5.22..312.61 rows=1 width=28) (actual \ntime=12.02..263.63 rows=2 loops=1)\n Hash Cond: (\"outer\".id_show = \"inner\".id_show)\n -> Seq Scan on showcomp sc (cost=0.00..307.04 \nrows=69 width=24) (actual time=0.09..251.09 rows=1035 loops=1)\n Filter: (substr((acc_mask)::text, 1, 1) = '['::text)\n -> Hash (cost=5.22..5.22 
rows=1 width=4) (actual \ntime=0.17..0.17 rows=0 loops=1)\n -> Index Scan using index_showing_kod on \nshowing s (cost=0.00..5.22 rows=1 width=4) (actual time=0.12..0.14 \nrows=1 loops=1)\n Index Cond: (kod = 'B00204'::character \nvarying)\n Total runtime: 6257.35 msec\n(33 rows)\n\n>>\tAnyway, 600Mb is too low for PostgreSQL for executing my query - DBMS\n>>raise error after 11.5 hours (of estimated 13?). :(\n\n> I think the problem is the 13 hours, not the 600MB. Once we've got the query \n> running in a reasonable length of time (seconds) then the memory requirements \n> will go down, I'm sure.\n\n\tOK, that's right.\n\n\nWith best regards\n\tYaroslav Mazurak.\n\n", "msg_date": "Fri, 08 Aug 2003 10:53:57 +0300", "msg_from": "Yaroslav Mazurak <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance problem -> tuning" } ]
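The containment-table idea suggested in this thread can be made concrete with a small sketch; the names below are illustrative assumptions, not taken from the original showing/showcomp schema. Once the expansion of each composite showing is materialised, the calculation needs only an indexed lookup instead of a recursive function call:

    -- Hypothetical table, kept in sync by triggers on the real schema.
    CREATE TABLE account_contains (
        acc_id   VARCHAR(10) NOT NULL,   -- the composite showing
        contains VARCHAR(10) NOT NULL,   -- a simple showing it ultimately includes
        PRIMARY KEY (acc_id, contains)
    );

    -- Expanding a composite showing is then a single index scan, no recursion:
    SELECT contains FROM account_contains WHERE acc_id = 'A004';
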
[ { "msg_contents": "Hello.\nI have this problem: i'm running the postgre 7.3 on a windows 2000 server with P3 1GHZ DUAL/1gb ram with good performance. For best performance i have change the server for a XEON 2.4/1gb ram and for my suprise the performance decrease 80%. anybody have a similar experience? does exist any special configuration to postgre running on a Xeon processor? Any have any idea to help-me? Excuse-me my bad english.\nVery Thanks\nWilson\nicq 77032308\nmsn [email protected]\n\n\n\n\n\n\n\n\nHello.\nI have this problem: i'm running the \npostgre 7.3 on a windows 2000 server with  P3 1GHZ DUAL/1gb \nram with good performance. For best performance i have \nchange the server for a  XEON 2.4/1gb ram and for  my \nsuprise the performance decrease 80%. anybody have a similar \nexperience? does exist any special configuration to postgre running on \na Xeon processor? Any have any idea to help-me? Excuse-me my bad \nenglish.\nVery Thanks\nWilson\nicq 77032308\nmsn \[email protected]", "msg_date": "Wed, 6 Aug 2003 10:50:38 -0300", "msg_from": "\"Wilson A. Galafassi Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql slow on XEON 2.4ghz/1gb ram" }, { "msg_contents": "On Wed, Aug 06, 2003 at 10:50:38AM -0300, Wilson A. Galafassi Jr. wrote:\n> Hello.\n\n> I have this problem: i'm running the postgre 7.3 on a windows 2000 server\n> with P3 1GHZ DUAL/1gb ram with good performance. For best performance i\n> have change the server for a XEON 2.4/1gb ram and for my suprise the\n> performance decrease 80%. anybody have a similar experience? does exist\n> any special configuration to postgre running on a Xeon processor? Any have\n> any idea to help-me? Excuse-me my bad english.\n\nI assume you've done the vacuums, analyze, configured the wal and shmem\noption appropriate for that size machine.\n\nBut in any case, without specific examples about what you're seeing we can't\nhelp you.\n\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> \"All that is needed for the forces of evil to triumph is for enough good\n> men to do nothing.\" - Edmond Burke\n> \"The penalty good people pay for not being interested in politics is to be\n> governed by people worse than themselves.\" - Plato", "msg_date": "Thu, 7 Aug 2003 19:30:07 +1000", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Postgresql slow on XEON 2.4ghz/1gb ram" } ]
[ { "msg_contents": "Hello people.\n\nI'm installing Postgresql under linux for better performance and i want to know how is the best configuration.\n\nMy server is a dual pentium3 1ghz/1gb ram/36gb scsi. running only postgresql. My question is:\n1. What is the best linux distribuition for better performance?\n2. Does exists any compilation options to better performance on this machine?\n\nThanks\n\nWilson Galafassi\n\n\n\n\n\n\nHello people.\n \nI'm installing Postgresql under linux for better \nperformance and i want to know how is the best configuration.\n \nMy server is a dual pentium3 1ghz/1gb ram/36gb \nscsi. running only postgresql. My question is:\n1. What is the best linux distribuition for better \nperformance?\n2. Does exists any compilation options to better \nperformance on this machine?\n \nThanks\n \nWilson Galafassi", "msg_date": "Wed, 6 Aug 2003 15:03:41 -0300", "msg_from": "\"Wilson A. Galafassi Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSql under Linux" }, { "msg_contents": "On Wed, Aug 06, 2003 at 03:03:41PM -0300, Wilson A. Galafassi Jr. wrote:\n> I'm installing Postgresql under linux for better performance and i want to know how is the best configuration.\n\n> 1. What is the best linux distribuition for better performance?\n\nThe Linux distribution itself isn't that important, IMHO. Spend some time\nselecting the right filesystem (check the archives for threads on this\ntopic), the right kernel (and perhaps compiling your own from scratch),\nperhaps some kernel tuning (I/O scheduler, etc.), and so forth.\n\n> 2. Does exists any compilation options to better performance on this machine?\n\nNot compilation options, but there are plenty of configuration settings\nyou should be tweaking to ensure good performance. You can find a list\nof configuration options here:\n\n http://www.postgresql.org/docs/7.3/static/runtime-config.html\n\n-Neil\n\n", "msg_date": "Wed, 6 Aug 2003 17:33:01 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql under Linux" }, { "msg_contents": "On Wed, 6 Aug 2003, Wilson A. Galafassi Jr. wrote:\n\n> Hello people.\n>\n> I'm installing Postgresql under linux for better performance and i want to know how is the best configuration.\n>\n> My server is a dual pentium3 1ghz/1gb ram/36gb scsi. running only postgresql. My question is:\n> 1. What is the best linux distribuition for better performance?\n\nLFS (http://www.linuxfromscratch.org/) if you have the time. WARNING: it\nis a major undertaking, but for a machine dedicated to one task, where\nvirtually everything can be well-optimised for the machine, it has major\nperformance and admin benefits.\n\nOnly try it if you have a lot of time and/or already know linux well. If\nyou have a *lot* of time, you will gain the knowledge as you go. 
If you\ndecide to, I'll see you on *that* mailing list.\n\nAlso, I recommend slackware as an LFS base.\n\n-- \n\nSam Barnett-Cormack\nSoftware Developer | Student of Physics & Maths\nUK Mirror Service (http://www.mirror.ac.uk) | Lancaster University\n", "msg_date": "Wed, 6 Aug 2003 22:39:52 +0100 (BST)", "msg_from": "Sam Barnett-Cormack <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql under Linux" }, { "msg_contents": "I say go for GENTOO :)\n\nYou can compile all the system exactly for your mechine / cpu.\n--------------------------\nCanaan Surfing Ltd.\nInternet Service Providers\nBen-Nes Michael - Manager\nTel: 972-4-6991122\nFax: 972-4-6990098\nhttp://www.canaan.net.il\n--------------------------\n\n ----- Original Message ----- \n From: Wilson A. Galafassi Jr. \n To: \"Undisclosed-Recipient:;\"@svr1.postgresql.org \n Sent: Wednesday, August 06, 2003 8:03 PM\n Subject: [GENERAL] PostgreSql under Linux\n\n\n Hello people.\n\n I'm installing Postgresql under linux for better performance and i want to know how is the best configuration.\n\n My server is a dual pentium3 1ghz/1gb ram/36gb scsi. running only postgresql. My question is:\n 1. What is the best linux distribuition for better performance?\n 2. Does exists any compilation options to better performance on this machine?\n\n Thanks\n\n Wilson Galafassi\n\n\n\n\n\n\n\nI say go for GENTOO :)\n \nYou can compile all the system exactly for your \nmechine / cpu.\n--------------------------Canaan Surfing Ltd.Internet Service \nProvidersBen-Nes Michael - ManagerTel: 972-4-6991122Fax: \n972-4-6990098http://www.canaan.net.il--------------------------\n\n----- Original Message ----- \nFrom:\nWilson \n A. Galafassi Jr. \nTo: \"Undisclosed-Recipient:;\"@svr1.postgresql.org\n\nSent: Wednesday, August 06, 2003 8:03 \n PM\nSubject: [GENERAL] PostgreSql under \n Linux\n\nHello people.\n \nI'm installing Postgresql under linux for better \n performance and i want to know how is the best configuration.\n \nMy server is a dual pentium3 1ghz/1gb ram/36gb \n scsi. running only postgresql. My question is:\n1. What is the best linux distribuition for \n better performance?\n2. Does exists any compilation options to better \n performance on this machine?\n \nThanks\n \nWilson \nGalafassi", "msg_date": "Thu, 07 Aug 2003 10:51:03 +0200", "msg_from": "Ben-Nes Michael <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSql under Linux" } ]
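Whichever distribution is chosen, the runtime settings Neil points to matter far more than compiler flags. A purely illustrative 7.3-era starting point for a dedicated box with 1 GB of RAM (assumed values to be tuned against the real workload, not figures from this thread):

    # postgresql.conf -- illustrative only
    shared_buffers = 4096            # 8 kB pages, roughly 32 MB
    sort_mem = 8192                  # kB available to each sort operation
    effective_cache_size = 65536     # 8 kB pages, roughly 512 MB of expected OS cache
    max_connections = 64
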
[ { "msg_contents": "Hi,\n\n I'm running on Redhat 7.2 with postgresql 7.3.2 and I have two schema in\nthe same database 'db' and 'db_dev'. Both contain a set of >20 tables for\na total of less than 50 Mb of data each (on the order of 50k rows in\ntotal). Once in a while (often these days!), I need to synchronize the\ndev version from the production 'db'. Currently, I do this by setting\nconstraints to deferred, deleting everything in db_dev, then issue a serie\nof insert ... select ... to copy data from each table in db to the\nequivalent table in db_dev.\n\n This approach used to run in less than 30 seconds in MySQL, but in \nPostgreSQL it currently takes around 30 minutes. The postmaster process \nis running at 100% cpu all the time. I enclosed all the delete statement \nin one transaction and all the insert statements in a second transaction. \nAll the time is taken at the commit of both transaction.\n\n Is there a more straightforward way to synchronize a development \ndatabase to a production one? Is there anyway to increase the performance \nof this delete/insert combination? I've got indexes and constraints on \nmost tables, could that be the problem? At some point in the future, I \nwill also need to make a copy of a whole schema ('db' into 'db_backup'), \nwhat would be an efficient way to do that?\n\n These are the parameters I've adjusted in the postgresql.conf:\n\nmax_connections = 16\nshared_buffers = 3000\nmax_fsm_relations = 2000\nmax_fsm_pages = 20000\nsort_mem = 20000\nvacuum_mem = 20000\neffective_cache_size = 15000\n\n And this is the memory state of the machine:\n\nslemieux@neptune> free\n total used free shared buffers cached\nMem: 2059472 2042224 17248 24768 115712 1286572\n-/+ buffers/cache: 639940 1419532\nSwap: 2096440 490968 1605472\n\nthanks,\n\n-- \nSebastien Lemieux\nBioinformatics, post-doc\nElitra-canada\n\n", "msg_date": "Wed, 6 Aug 2003 14:56:14 -0400 (EDT)", "msg_from": "Sebastien Lemieux <[email protected]>", "msg_from_op": true, "msg_subject": "How to efficiently duplicate a whole schema?" }, { "msg_contents": "On Wed, 6 Aug 2003, Tom Lane wrote:\n\n> Sebastien Lemieux <[email protected]> writes:\n> > All the time is taken at the commit of both transaction.\n> \n> Sounds like the culprit is foreign-key checks.\n> \n> One obvious question is whether you have your foreign keys set up\n> efficiently in the first place. As a rule, the referenced and\n> referencing columns should have identical datatypes and both should\n> be indexed. (PG will often let you create foreign key constraints\n> that don't meet these rules ... but performance will suffer.)\n\nIs this one of those things that should spit out a NOTICE when it happens? \nI.e. when a table is created with a references and uses a different type \nthan the parent, would it be a good idea to issue a \"NOTICE: parent and \nchild fields are not of the same type\"\n\n\n", "msg_date": "Wed, 6 Aug 2003 13:09:47 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to efficiently duplicate a whole schema? " }, { "msg_contents": "Sebastien-\n\nI have a similar nightly process to keep our development system synched with\nproduction. I just do a complete pg_dump of production, do a dropdb &\ncreatedb to empty the database for development, and then restore the whole\ndb from the pg_dump file. Our database is about 12 GB currently, and it\ntakes less than one hour to dump & restore back into dev if I go through a\nfile. 
I can go even faster by piping the data to eliminate one set of reads\n& writes to the disk:\n\ndropdb dev\ncreatedb dev\npg_dump prod | psql dev\n\nThis of course only works if you haven't changed your data structure in the\ndevelopment area, but it is very simple and reasonably quick.\n\nin situations where the data structure has changed, I run a more complex\nsystem that deletes data rather than drop the whole db, but I always drop\nthe indexes in development before restoring data and recreate them\nafterwards.\n\n-Nick\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Sebastien\n> Lemieux\n> Sent: Wednesday, August 06, 2003 1:56 PM\n> To: Postgresql-performance\n> Subject: [PERFORM] How to efficiently duplicate a whole schema?\n>\n>\n> Hi,\n>\n> I'm running on Redhat 7.2 with postgresql 7.3.2 and I have two schema in\n> the same database 'db' and 'db_dev'. Both contain a set of >20 tables for\n> a total of less than 50 Mb of data each (on the order of 50k rows in\n> total). Once in a while (often these days!), I need to synchronize the\n> dev version from the production 'db'. Currently, I do this by setting\n> constraints to deferred, deleting everything in db_dev, then issue a serie\n> of insert ... select ... to copy data from each table in db to the\n> equivalent table in db_dev.\n>\n> This approach used to run in less than 30 seconds in MySQL, but in\n> PostgreSQL it currently takes around 30 minutes. The postmaster process\n> is running at 100% cpu all the time. I enclosed all the delete statement\n> in one transaction and all the insert statements in a second\n> transaction.\n> All the time is taken at the commit of both transaction.\n>\n> Is there a more straightforward way to synchronize a development\n> database to a production one? Is there anyway to increase the\n> performance\n> of this delete/insert combination? I've got indexes and constraints on\n> most tables, could that be the problem? At some point in the future, I\n> will also need to make a copy of a whole schema ('db' into 'db_backup'),\n> what would be an efficient way to do that?\n>\n> These are the parameters I've adjusted in the postgresql.conf:\n>\n> max_connections = 16\n> shared_buffers = 3000\n> max_fsm_relations = 2000\n> max_fsm_pages = 20000\n> sort_mem = 20000\n> vacuum_mem = 20000\n> effective_cache_size = 15000\n>\n> And this is the memory state of the machine:\n>\n> slemieux@neptune> free\n> total used free shared buffers cached\n> Mem: 2059472 2042224 17248 24768 115712 1286572\n> -/+ buffers/cache: 639940 1419532\n> Swap: 2096440 490968 1605472\n>\n> thanks,\n>\n> --\n> Sebastien Lemieux\n> Bioinformatics, post-doc\n> Elitra-canada\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Wed, 6 Aug 2003 14:09:48 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to efficiently duplicate a whole schema?" }, { "msg_contents": "Sebastien Lemieux <[email protected]> writes:\n> All the time is taken at the commit of both transaction.\n\nSounds like the culprit is foreign-key checks.\n\nOne obvious question is whether you have your foreign keys set up\nefficiently in the first place. As a rule, the referenced and\nreferencing columns should have identical datatypes and both should\nbe indexed. 
(PG will often let you create foreign key constraints\nthat don't meet these rules ... but performance will suffer.)\n\nAlso, what procedure are you using to delete all the old data? What\nI'd recommend is\n\tANALYZE table;\n\tTRUNCATE table;\n\tINSERT new data;\nThe idea here is to make sure that the planner's statistics reflect the\n\"full\" state of the table, not the \"empty\" state. Otherwise it may pick\nplans for the foreign key checks that are optimized for small tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Aug 2003 15:13:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to efficiently duplicate a whole schema? " }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> On Wed, 6 Aug 2003, Tom Lane wrote:\n>> One obvious question is whether you have your foreign keys set up\n>> efficiently in the first place. As a rule, the referenced and\n>> referencing columns should have identical datatypes and both should\n>> be indexed. (PG will often let you create foreign key constraints\n>> that don't meet these rules ... but performance will suffer.)\n\n> Is this one of those things that should spit out a NOTICE when it happens? \n> I.e. when a table is created with a references and uses a different type \n> than the parent, would it be a good idea to issue a \"NOTICE: parent and \n> child fields are not of the same type\"\n\nI could see doing that for unequal data types, but I'm not sure if it's\nreasonable to do it for lack of index. Usually you won't have created\nthe referencing column's index yet when you create the FK constraint,\nso any warning would just be noise. (The referenced column's index *is*\nchecked for, since we require it to be unique.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Aug 2003 15:29:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to efficiently duplicate a whole schema? " }, { "msg_contents": "On Wed, 6 Aug 2003, Tom Lane wrote:\n\n> \"scott.marlowe\" <[email protected]> writes:\n> > On Wed, 6 Aug 2003, Tom Lane wrote:\n> >> One obvious question is whether you have your foreign keys set up\n> >> efficiently in the first place. As a rule, the referenced and\n> >> referencing columns should have identical datatypes and both should\n> >> be indexed. (PG will often let you create foreign key constraints\n> >> that don't meet these rules ... but performance will suffer.)\n> \n> > Is this one of those things that should spit out a NOTICE when it happens? \n> > I.e. when a table is created with a references and uses a different type \n> > than the parent, would it be a good idea to issue a \"NOTICE: parent and \n> > child fields are not of the same type\"\n> \n> I could see doing that for unequal data types, but I'm not sure if it's\n> reasonable to do it for lack of index. Usually you won't have created\n> the referencing column's index yet when you create the FK constraint,\n> so any warning would just be noise. (The referenced column's index *is*\n> checked for, since we require it to be unique.)\n\nSure. I wasn't thinking of the index issue anyway, just the type \nmismatch.\n\n", "msg_date": "Wed, 6 Aug 2003 14:00:59 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to efficiently duplicate a whole schema? 
" }, { "msg_contents": "On Wed, 6 Aug 2003, Tom Lane wrote:\n\n> Sebastien Lemieux <[email protected]> writes:\n> > All the time is taken at the commit of both transaction.\n> \n> Sounds like the culprit is foreign-key checks.\n> \n> One obvious question is whether you have your foreign keys set up\n> efficiently in the first place. As a rule, the referenced and\n> referencing columns should have identical datatypes and both should\n> be indexed. (PG will often let you create foreign key constraints\n> that don't meet these rules ... but performance will suffer.)\n\nI've checked and all the foreign keys are setup between 'serial' (the \nprimary key of the referenced table) and 'integer not null' (the foreign \nkey field). Would that be same type? A couple of my foreign keys are not \nindexed, I'll fix that. The latter seems to do the job, since I can now \nsynchronize in about 75 seconds (compared to 30 minutes), which seems good \nenough.\n\n> Also, what procedure are you using to delete all the old data? What\n> I'd recommend is\n> \tANALYZE table;\n> \tTRUNCATE table;\n> \tINSERT new data;\n> The idea here is to make sure that the planner's statistics reflect the\n> \"full\" state of the table, not the \"empty\" state. Otherwise it may pick\n> plans for the foreign key checks that are optimized for small tables.\n\nI added the 'analyze' but without any noticable gain in speed. I can't\nuse 'truncate' since I need to 'set constraints all deferred'. I guess\nthe bottom line is that I really need to first drop all constraints and\nindexes, synchronize and then rebuild indexes and check constraints. But\nfor that I'll need to reorganize my code a little bit!\n\nIn the meantime, how bad a decision would it be to simply remove all \nforeign key constraints? Because, currently I think they are causing more \nproblems than they are avoiding...\n\nthanks,\n\n-- \nSebastien Lemieux\nBioinformatics, post-doc\nElitra-canada\n\n\n\n\n\n", "msg_date": "Wed, 6 Aug 2003 17:22:05 -0400 (EDT)", "msg_from": "Sebastien Lemieux <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to efficiently duplicate a whole schema? " }, { "msg_contents": "On Wed, 6 Aug 2003, Sebastien Lemieux wrote:\n\n> On Wed, 6 Aug 2003, Tom Lane wrote:\n>\n> > Sebastien Lemieux <[email protected]> writes:\n> > > All the time is taken at the commit of both transaction.\n> >\n> > Sounds like the culprit is foreign-key checks.\n> >\n> > One obvious question is whether you have your foreign keys set up\n> > efficiently in the first place. As a rule, the referenced and\n> > referencing columns should have identical datatypes and both should\n> > be indexed. (PG will often let you create foreign key constraints\n> > that don't meet these rules ... but performance will suffer.)\n>\n> I've checked and all the foreign keys are setup between 'serial' (the\n> primary key of the referenced table) and 'integer not null' (the foreign\n> key field). Would that be same type? A couple of my foreign keys are not\n> indexed, I'll fix that. The latter seems to do the job, since I can now\n> synchronize in about 75 seconds (compared to 30 minutes), which seems good\n> enough.\n\nAnother thing might be the management of the trigger queue. I don't think\n7.3.2 had the optimization for limiting the scans of the queue when you\nhave lots of deferred triggers. 
It looks like 7.3.4 may though.\n\n", "msg_date": "Wed, 6 Aug 2003 14:41:43 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to efficiently duplicate a whole schema? " }, { "msg_contents": "Sebastien Lemieux <[email protected]> writes:\n> On Wed, 6 Aug 2003, Tom Lane wrote:\n>> The idea here is to make sure that the planner's statistics reflect the\n>> \"full\" state of the table, not the \"empty\" state. Otherwise it may pick\n>> plans for the foreign key checks that are optimized for small tables.\n\n> I added the 'analyze' but without any noticable gain in speed. I can't\n> use 'truncate' since I need to 'set constraints all deferred'.\n\nWhat are you using, exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Aug 2003 17:47:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to efficiently duplicate a whole schema? " }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> Another thing might be the management of the trigger queue. I don't think\n> 7.3.2 had the optimization for limiting the scans of the queue when you\n> have lots of deferred triggers. It looks like 7.3.4 may though.\n\nGood point. We put that in in 7.3.3, according to the CVS logs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Aug 2003 18:06:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to efficiently duplicate a whole schema? " }, { "msg_contents": "\n> >> The idea here is to make sure that the planner's statistics reflect the\n> >> \"full\" state of the table, not the \"empty\" state. Otherwise it may pick\n> >> plans for the foreign key checks that are optimized for small tables.\n> \n> > I added the 'analyze' but without any noticable gain in speed. I can't\n> > use 'truncate' since I need to 'set constraints all deferred'.\n> \n> What are you using, exactly?\n\nWhat I want to do:\n\n let t be the list of tables\n\n for t in tables:\n delete from db_dev.t;\n\n for t in tables:\n insert into db_dev.t (...) select ... from db.t;\n\nSome of my foreign keys are creating references loops in my schema, thus\nthere is no correct order to do the deletes and inserts so that the\nconstraints are satisfied at all time. I have to enclose those two loops\nin a 'set constraints all deferred' to avoid complaints from the \nconstraints.\n\nI tried dropping the indexes first, doing the transfer and recreating the \nindexes: no gain. So computing the indexes doesn't take significant time.\n\nI then tried removing all the foreign keys constraints, replacing delete\nby truncate and it now runs in about 25 seconds. Downside is that I lose \nthe foreign keys integrity verification, but because of this reference \nloop in my schema it has caused me more problem than it has avoided until \nnow. So I can live with that!\n\nThanks all!\n\n-- \nSebastien Lemieux\nBioinformatics, post-doc\nElitra-canada\n\n", "msg_date": "Thu, 7 Aug 2003 11:04:40 -0400 (EDT)", "msg_from": "Sebastien Lemieux <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to efficiently duplicate a whole schema? " } ]
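Tom's two checks can be sketched with hypothetical table and column names (the real names are in Sebastien's schema). A serial column is simply an integer with an attached sequence, so an integer NOT NULL foreign key does match its type:

    -- Index the referencing column so each FK check is an index probe, not a seq scan.
    CREATE INDEX detail_master_id_idx ON db_dev.detail (master_id);

    -- Let the planner see the "full" state of the table before the big delete/insert cycle.
    ANALYZE db_dev.detail;
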
[ { "msg_contents": "On Wed, 6 Aug 2003, Wilson A. Galafassi Jr. wrote:\n\n> hello!!!\n> what is suggested partitioning schema for postgresql??\n> the size of my db is 5BG and i have 36GB scsi disk!\n\nThe first recommendation is to run Postgresql on a RAID set for \nreliability. \n\nI'm assuming you're building a machine and need to put both the OS and \nPostgresql database on that one disk.\n\nIf that's the case, just put the OS on however you like (lotsa different \nways to partition for the OS) and leave about 30 gig for Postgresql to run \nin, then just put the whole database $PGDATA directory on that partition.\n\nI'd recommend running ext3 with meta data journaling only for speed,\nsecurity, and ease of setup and use. XFS is the next choice, which is a \nlittle harder to setup, as it's not included in most distros, but is \ndefinitely faster than ext3 at most stuff.\n\n", "msg_date": "Wed, 6 Aug 2003 14:09:14 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning for postgresql" }, { "msg_contents": "hello!!!\nwhat is suggested partitioning schema for postgresql??\nthe size of my db is 5BG and i have 36GB scsi disk!\nthanks\nwilson\n\n\n\n\n\n\nhello!!!\nwhat is suggested partitioning schema for \npostgresql??\nthe size of my db is 5BG and i have 36GB scsi \ndisk!\nthanks\nwilson", "msg_date": "Wed, 6 Aug 2003 17:09:59 -0300", "msg_from": "\"Wilson A. Galafassi Jr.\" <[email protected]>", "msg_from_op": false, "msg_subject": "partitioning for postgresql" } ]
[ { "msg_contents": "hello.\nmy database size is 5GB. what is the block size recommend?\nthanks\nwilson\n\n\n\n\n\n\n\nhello.\nmy database size is 5GB. what is the block size \nrecommend?\nthanks\nwilson", "msg_date": "Wed, 6 Aug 2003 17:34:39 -0300", "msg_from": "\"Wilson A. Galafassi Jr.\" <[email protected]>", "msg_from_op": true, "msg_subject": "ext3 block size" }, { "msg_contents": "On Wed, 6 Aug 2003, Wilson A. Galafassi Jr. wrote:\n\n> hello.\n> my database size is 5GB. what is the block size recommend?\n\nWell, the biggest block size currently supported by stock linux distros is \n4k, so I'd go with that. Postgresql's default block size of 8k is fine \nalso. Note that linux page/system/file block sizes are NOT related to \nPostgresql block sizes.\n\n", "msg_date": "Wed, 6 Aug 2003 14:37:34 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ext3 block size" } ]
[ { "msg_contents": "> what is suggested partitioning schema for postgresql?? \n> the size of my db is 5BG and i have 36GB scsi disk! \n\nThe interesting forms of partitioning kind of assume that you have\nmultiple disk drives.\n\nIf you only have one drive, then there is not terribly much reason to\nprefer anything much over simply having one big partition, and letting\nthe data fall where it will.\n\nIf you had two drives, then it would make sense to have data on one\ndrive and WAL on the other. With three drives, having other I/O (such\nas database logging) on a third drive would have merit.\n\nBut none of those approaches are useful if you only have one disk\ndrive.\n-- \n\"cbbrowne\",\"@\",\"libertyrms.info\"\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Wed, 06 Aug 2003 16:51:07 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning for postgresql " } ]
[ { "msg_contents": "http://kerneltrap.org/node/view/715\n\nMight be interesting for people running 2.6. Last I heard, the anticipatory \nscheduler did not yield it's maximum throughput for random reads. So they said \ndatabase guys would not want it right away.\n\nAnybody using it for testing? Couple of guys are running it here in my company \non a moderate desktop-cum-server. So far it's good.. In fact far better..\n\n\nBye\n Shridhar\n\n--\nWedding, n:\tA ceremony at which two persons undertake to become one, one \nundertakes\tto become nothing and nothing undertakes to become supportable.\t\t-- \nAmbrose Bierce\n\n", "msg_date": "Thu, 07 Aug 2003 13:58:37 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Simple filesystem benchmark on Linux 2.6" } ]
[ { "msg_contents": "\nIn theory, the news2mail gateway is back in place ...\n\n", "msg_date": "Thu, 7 Aug 2003 17:49:43 +0000 (UTC)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": true, "msg_subject": "Testing gateway" } ]
[ { "msg_contents": "I have just installed redhat linux 9 which ships with Pg\n7.3.2. Pg has to be setup so that data inserts (blobs) should\nbe able to handle at least 8M at a time. The machine has\ntwo P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and\na 36 Gig hd with 1 Gig swap and 3 equal size ext3 partitions.\nWhat would be the recomended setup for good performance\nconsidering that the db will have about 15 users for\n9 hours in a day, and about 10 or so users throughout the day\nwho wont be conistenly using the db.\n\n\n", "msg_date": "Fri, 08 Aug 2003 12:28:09 +0200", "msg_from": "mixo <[email protected]>", "msg_from_op": true, "msg_subject": "Perfomance Tuning" }, { "msg_contents": "> be able to handle at least 8M at a time. The machine has\n> two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and\n> a 36 Gig hd with 1 Gig swap and 3 equal size ext3 partitions.\n> What would be the recomended setup for good performance\n> considering that the db will have about 15 users for\n> 9 hours in a day, and about 10 or so users throughout the day\n> who wont be conistenly using the db.\n\nFor 15 users you won't need great tuning at all. Just make sure, that you\nhave the right indizes on the tables and that you have good queries (query\nplan).\n\nAbout the 8Meg blobs, I don't know. Other people on this list may be able to\ngive you hints here.\n\nRegards,\nBjoern\n\n", "msg_date": "Fri, 8 Aug 2003 12:40:13 +0200", "msg_from": "\"Bjoern Metzdorf\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On 8 Aug 2003 at 12:28, mixo wrote:\n\n> I have just installed redhat linux 9 which ships with Pg\n> 7.3.2. Pg has to be setup so that data inserts (blobs) should\n> be able to handle at least 8M at a time. The machine has\n> two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and\n> a 36 Gig hd with 1 Gig swap and 3 equal size ext3 partitions.\n> What would be the recomended setup for good performance\n> considering that the db will have about 15 users for\n> 9 hours in a day, and about 10 or so users throughout the day\n> who wont be conistenly using the db.\n\nYou can look at http://www.varlena.com/GeneralBits/Tidbits/perf.html to start \nwith, although that would not take careof anything specifics to BLOB.\n\nI would suggest some pilot benchmark about how system performs after initial \ntuning. We could discuss this in detail after you have a set of initial \nbenchmark.\n\nHTH\n\nBye\n Shridhar\n\n--\nwolf, n.:\tA man who knows all the ankles.\n\n", "msg_date": "Fri, 08 Aug 2003 16:22:52 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "\nOn 08/08/2003 11:28 mixo wrote:\n> I have just installed redhat linux 9 which ships with Pg\n> 7.3.2. Pg has to be setup so that data inserts (blobs) should\n> be able to handle at least 8M at a time. The machine has\n> two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and\n> a 36 Gig hd with 1 Gig swap and 3 equal size ext3 partitions.\n> What would be the recomended setup for good performance\n> considering that the db will have about 15 users for\n> 9 hours in a day, and about 10 or so users throughout the day\n> who wont be conistenly using the db.\n\n\nIt doesn't sound like a particlarly heavy loading to me. 
I'd start off \nwith something like\n\nshared_buffers = 2000\nsort_mem = 1024\nmax_coonections = 100\n\nand see how it performs under normal business loading.\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Fri, 8 Aug 2003 12:28:00 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Fri, 8 Aug 2003, mixo wrote:\n\n> I have just installed redhat linux 9 which ships with Pg\n> 7.3.2. Pg has to be setup so that data inserts (blobs) should\n> be able to handle at least 8M at a time.\n\nNothing has to be done to tune postgresql to handle this, 8 Meg blobs are \nno problem as far as I know.\n\n> The machine has\n> two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and\n> a 36 Gig hd with 1 Gig swap and 3 equal size ext3 partitions.\n> What would be the recomended setup for good performance\n> considering that the db will have about 15 users for\n> 9 hours in a day, and about 10 or so users throughout the day\n> who wont be conistenly using the db.\n\nSeeing as you have only one hard drive, how you arrange things on it \ndoesn't really make a big difference. If you can get another drive and \nmirror your data partition that will help speed up selects as well as \nprovide some redundancy should one drive fail.\n\nHow many queries per second are you looking at handling? If it's 1 or \nless, you probably don't have much to worry about with this setup. We run \ndual PIII-750s at work with 1.5 Gig ram, and while we're going to upgrade \nthe servers (they're currently handling apache/php/postgresql & ldap) \nwe'll keep the dual PIII-750 machines as the database boxes with nothing \nelse on them. Postgresql is quite snappy on such hardware.\n\n", "msg_date": "Fri, 8 Aug 2003 07:28:13 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Friday 08 August 2003 03:28, mixo wrote:\n> I have just installed redhat linux 9 which ships with Pg\n> 7.3.2. Pg has to be setup so that data inserts (blobs) should\n> be able to handle at least 8M at a time. The machine has\n> two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and\n> a 36 Gig hd with 1 Gig swap and 3 equal size ext3 partitions.\n> What would be the recomended setup for good performance\n> considering that the db will have about 15 users for\n> 9 hours in a day, and about 10 or so users throughout the day\n> who wont be conistenly using the db.\n>\n\nRedhat puts ext3 on by default. Consider switching to a non-journaling FS \n(ext2?) with the partition that holds your data and WAL.\n\nConsider having a seperate partition for the WAL as well.\n\nThese are things that are more difficult to change later on. Everything else \nis tweaking.\n\nIs it absolutely necessary to store 8MB files in the database? I find it \ncumbersome. 
Storing them on a file server has been a better alternative for \nme.\n\n- -- \nJonathan Gardner <[email protected]>\nLive Free, Use Linux!\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.1 (GNU/Linux)\n\niD8DBQE/M9J0WgwF3QvpWNwRAlT5AJ9EmDourbCiqj7MFOqfBospc2dW7gCfZKz0\nJQjn/2KAeh1SPJfN601LoFg=\n=PW6k\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 8 Aug 2003 09:40:20 -0700", "msg_from": "Jonathan Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:\n> \n> Redhat puts ext3 on by default. Consider switching to a non-journaling FS \n> (ext2?) with the partition that holds your data and WAL.\n\nI would give you exactly the opposite advice: _never_ use a\nnon-journalling fs for your data and WAL. I suppose if you can\nafford to lose some transactions, you can do without journalling. \nOtherwise, you're just borrowing trouble, near as I can tell.\n\nA\n\n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 8 Aug 2003 14:53:23 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Fri, 2003-08-08 at 14:53, Andrew Sullivan wrote:\n> On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:\n> > \n> > Redhat puts ext3 on by default. Consider switching to a non-journaling FS \n> > (ext2?) with the partition that holds your data and WAL.\n> \n> I would give you exactly the opposite advice: _never_ use a\n> non-journalling fs for your data and WAL. I suppose if you can\n> afford to lose some transactions, you can do without journalling. \n> Otherwise, you're just borrowing trouble, near as I can tell.\n\nAgreed.. WAL cannot recover something when WAL no longer exists due to a\nfilesystem corruption.", "msg_date": "Fri, 08 Aug 2003 15:13:47 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Rod Taylor wrote:\n-- Start of PGP signed section.\n> On Fri, 2003-08-08 at 14:53, Andrew Sullivan wrote:\n> > On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:\n> > > \n> > > Redhat puts ext3 on by default. Consider switching to a non-journaling FS \n> > > (ext2?) with the partition that holds your data and WAL.\n> > \n> > I would give you exactly the opposite advice: _never_ use a\n> > non-journalling fs for your data and WAL. I suppose if you can\n> > afford to lose some transactions, you can do without journalling. \n> > Otherwise, you're just borrowing trouble, near as I can tell.\n> \n> Agreed.. WAL cannot recover something when WAL no longer exists due to a\n> filesystem corruption.\n\nIt is true that ext2 isn't good because the file system may not recover,\nbut BSD UFS isn't a journalled file system, but does guarantee file\nsystem recovery after a crash --- it is especially good using soft\nupdates.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 8 Aug 2003 15:34:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "> > Agreed.. 
WAL cannot recover something when WAL no longer exists due to a\n> > filesystem corruption.\n> \n> It is true that ext2 isn't good because the file system may not recover,\n> but BSD UFS isn't a journalled file system, but does guarantee file\n> system recovery after a crash --- it is especially good using soft\n> updates.\n\nYes, UFS(2) is an excellent filesystem for PostgreSQL, especially if you\ncan use background fsck & softupdates.", "msg_date": "Fri, 08 Aug 2003 15:46:44 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Fri, Aug 08, 2003 at 03:34:44PM -0400, Bruce Momjian wrote:\n> \n> It is true that ext2 isn't good because the file system may not recover,\n> but BSD UFS isn't a journalled file system, but does guarantee file\n> system recovery after a crash --- it is especially good using soft\n> updates.\n\nSorry. I usually write \"journalled or equivalent\" for this reason. \nI think UFS with soft updates is a good example of this. You also\ndon't need complete journalling in most cases -- metadata is probably\nsufficient, given fsync.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 8 Aug 2003 15:56:58 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Rod Taylor wrote:\n>> On Fri, 2003-08-08 at 14:53, Andrew Sullivan wrote:\n>>> I would give you exactly the opposite advice: _never_ use a\n>>> non-journalling fs for your data and WAL. I suppose if you can\n>>> afford to lose some transactions, you can do without journalling. \n>>> Otherwise, you're just borrowing trouble, near as I can tell.\n>> \n>> Agreed.. WAL cannot recover something when WAL no longer exists due to a\n>> filesystem corruption.\n\n> It is true that ext2 isn't good because the file system may not recover,\n> but BSD UFS isn't a journalled file system, but does guarantee file\n> system recovery after a crash --- it is especially good using soft\n> updates.\n\nThe main point here is that the filesystem has to be able to take care\nof itself; we expect it not to lose any files or forget where the data\nis. If it wants to use journalling to accomplish that, fine.\n\nJournalling file contents updates, as opposed to filesystem metadata,\nshould be redundant with what we do in WAL. So I'd recommend\njournalling metadata only, if that option is available (and if Postgres\nstuff is the only stuff on the disk...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Aug 2003 16:13:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning " }, { "msg_contents": "On Fri, 8 Aug 2003, Andrew Sullivan wrote:\n\n> On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:\n> > \n> > Redhat puts ext3 on by default. Consider switching to a non-journaling FS \n> > (ext2?) with the partition that holds your data and WAL.\n> \n> I would give you exactly the opposite advice: _never_ use a\n> non-journalling fs for your data and WAL. I suppose if you can\n> afford to lose some transactions, you can do without journalling. 
\n> Otherwise, you're just borrowing trouble, near as I can tell.\n\nI'd argue that a reliable filesystem (ext2) is still better than a \nquestionable journaling filesystem (ext3 on kernels <2.4.20).\n\nThis isn't saying to not use jounraling, but I would definitely test it \nunder load first to make sure it's not gonna lose data or get corrupted.\n\n", "msg_date": "Mon, 11 Aug 2003 08:47:07 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Mon, Aug 11, 2003 at 08:47:07AM -0600, scott.marlowe wrote:\n> This isn't saying to not use jounraling, but I would definitely test it \n> under load first to make sure it's not gonna lose data or get corrupted.\n\nWell, yeah. But given the Linux propensity for introducing major\nfeatures in \"minor\" releases (and thereby introducing all the\nattendant bugs), I'd think twice about using _any_ Linux feature\nuntil it's been through a major version (e.g. things introduced in\n2.4.x won't really be stable until 2.6.x) -- and even there one is\ntaking a risk[1].\n\nA\n\nMy laptop's PCMCIA network card recently stopped working during a\n\"minor\" version upgrade, even though it's almost 6 years old. \nSomeone decided that \"cleaning up\" the code required complete\nredesign, and so all the bugs that had been shaken out during the 2.2\nseries will now be reimplemented in a new and interesting way. Sigh.\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 11 Aug 2003 11:24:14 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "scott.marlowe wrote:\n> On Fri, 8 Aug 2003, Andrew Sullivan wrote:\n> \n> > On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:\n> > > \n> > > Redhat puts ext3 on by default. Consider switching to a non-journaling FS \n> > > (ext2?) with the partition that holds your data and WAL.\n> > \n> > I would give you exactly the opposite advice: _never_ use a\n> > non-journalling fs for your data and WAL. I suppose if you can\n> > afford to lose some transactions, you can do without journalling. \n> > Otherwise, you're just borrowing trouble, near as I can tell.\n> \n> I'd argue that a reliable filesystem (ext2) is still better than a \n> questionable journaling filesystem (ext3 on kernels <2.4.20).\n> \n> This isn't saying to not use jounraling, but I would definitely test it \n> under load first to make sure it's not gonna lose data or get corrupted.\n\nThat _would_ work if ext2 was a reliable file system --- it is not.\n\nThis is the problem of Linux file systems --- they have unreliable, and\njournalled, with nothing in between, except using a journalling file\nsystem and having it only journal metadata.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 11 Aug 2003 18:16:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Mon, 2003-08-11 at 15:16, Bruce Momjian wrote:\n\n> That _would_ work if ext2 was a reliable file system --- it is not.\n\n\nBruce-\n\nI'd like to know your evidence for this. 
I'm not refuting it, but I'm a\n>7 year linux user (including several clusters, all of which have run\next2 or ext3) and keep a fairly close ear to kernel newsgroups,\nannouncements, and changelogs. I am aware that there have very\noccasionally been corruption problems, but my understanding is that\nthese are fixed (and quickly). In any case, I'd say that your assertion\nis not widely known and I'd appreciate some data or references.\n\nAs for PostgreSQL on ext2 and ext3, I recently switched from ext3 to\next2 (Stephen Tweedy was insightful to facilitate this backward\ncompatibility). I did this because I had a 45M row update on one table\nthat was taking inordinate time (killed after 10 hours), even though\ncreating the database from backup takes ~4 hours including indexing (see\npgsql-perform post on 2003/07/22). CPU usage was ~2% on an otherwise\nunloaded, fast, SCSI160 machine. vmstat io suggested that PostgreSQL was\nwriting something on the order of 100x as many blocks as being read. My\nuntested interpretation was that the update bookkeeping as well as data\nupdate were all getting journalled, the journal space would fill, get\nsync'd, then repeat. In effect, all blocks were being written TWICE just\nfor the journalling, never mind the overhead for PostgreSQL\ntransactions. This emphasizes that journals probably work best with\nshort burst writes and syncing during lulls rather than sustained\nwrites.\n\nI ended up solving the update issue without really updating, so ext2\ntimings aren't known. So, you may want to test this yourself if you're\nconcerned.\n\n-Reece\n\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0\n\n\n\n\n\n\n\nOn Mon, 2003-08-11 at 15:16, Bruce Momjian wrote:\n\nThat _would_ work if ext2 was a reliable file system --- it is not.\n\n\nBruce-\n\nI'd like to know your evidence for this. I'm not refuting it, but I'm a >7 year linux user (including several clusters, all of which have run ext2 or ext3) and keep a fairly close ear to kernel newsgroups, announcements, and changelogs. I am aware that there have very occasionally been corruption problems, but my understanding is that these are fixed (and quickly). In any case, I'd say that your assertion is not widely known and I'd appreciate some data or references.\n\nAs for PostgreSQL on ext2 and ext3, I recently switched from ext3 to ext2 (Stephen Tweedy was insightful to facilitate this backward compatibility). I did this because I had a 45M row update on one table that was taking inordinate time (killed after 10 hours), even though creating the database from backup takes ~4 hours including indexing (see pgsql-perform post on 2003/07/22). CPU usage was ~2% on an otherwise unloaded, fast, SCSI160 machine. vmstat io suggested that PostgreSQL was writing something on the order of 100x as many blocks as being read. My untested interpretation was that the update bookkeeping as well as data update were all getting journalled, the journal space would fill, get sync'd, then repeat. In effect, all blocks were being written TWICE just for the journalling, never mind the overhead for PostgreSQL transactions. 
", "msg_date": "Mon, 11 Aug 2003 15:56:07 -0700", "msg_from": "Reece Hart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "\nUh, the ext2 developers say it isn't 100% reliable --- at least that is\nwhat I was told. I don't know any personally, but I mentioned it while I\nwas visiting Red Hat, and they didn't refute it.\n\nNow, the failure window might be quite small, but I have seen it happen\nmyself, and have heard it from others.\n\n---------------------------------------------------------------------------\n\nReece Hart wrote:\n> On Mon, 2003-08-11 at 15:16, Bruce Momjian wrote:\n> \n> > That _would_ work if ext2 was a reliable file system --- it is not.\n> \n> \n> Bruce-\n> \n> I'd like to know your evidence for this. I'm not refuting it, but I'm a\n> >7 year linux user (including several clusters, all of which have run\n> ext2 or ext3) and keep a fairly close ear to kernel newsgroups,\n> announcements, and changelogs. I am aware that there have very\n> occasionally been corruption problems, but my understanding is that\n> these are fixed (and quickly). In any case, I'd say that your assertion\n> is not widely known and I'd appreciate some data or references.\n> \n> As for PostgreSQL on ext2 and ext3, I recently switched from ext3 to\n> ext2 (Stephen Tweedy was insightful to facilitate this backward\n> compatibility). I did this because I had a 45M row update on one table\n> that was taking inordinate time (killed after 10 hours), even though\n> creating the database from backup takes ~4 hours including indexing (see\n> pgsql-perform post on 2003/07/22). CPU usage was ~2% on an otherwise\n> unloaded, fast, SCSI160 machine. vmstat io suggested that PostgreSQL was\n> writing something on the order of 100x as many blocks as being read. My\n> untested interpretation was that the update bookkeeping as well as data\n> update were all getting journalled, the journal space would fill, get\n> sync'd, then repeat. In effect, all blocks were being written TWICE just\n> for the journalling, never mind the overhead for PostgreSQL\n> transactions. This emphasizes that journals probably work best with\n> short burst writes and syncing during lulls rather than sustained\n> writes.\n> \n> I ended up solving the update issue without really updating, so ext2\n> timings aren't known. So, you may want to test this yourself if you're\n> concerned.\n> \n> -Reece\n> \n> \n> -- \n> Reece Hart, Ph.D. [email protected], http://www.gene.com/\n> Genentech, Inc. 650/225-6133 (voice), -5389 (fax)\n> Bioinformatics and Protein Engineering\n> 1 DNA Way, MS-93 http://www.in-machina.com/~reece/\n> South San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 11 Aug 2003 18:59:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "> Well, yeah. But given the Linux propensity for introducing major\n> features in \"minor\" releases (and thereby introducing all the\n> attendant bugs), I'd think twice about using _any_ Linux feature\n> until it's been through a major version (e.g. things introduced in\n> 2.4.x won't really be stable until 2.6.x) -- and even there one is\n> taking a risk[1].\n\nDudes, seriously - switch to FreeBSD :P\n\nChris\n\n", "msg_date": "Tue, 12 Aug 2003 08:50:48 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Mon, Aug 11, 2003 at 06:59:30PM -0400, Bruce Momjian wrote:\n> Uh, the ext2 developers say it isn't 100% reliable --- at least that is\n> that was told. I don't know any personally, but I mentioned it while I\n> was visiting Red Hat, and they didn't refute it.\n\nIMHO, if we're going to say \"don't use X on production PostgreSQL\nsystems\", we need to have some better evidene than \"no one has\nsaid anything to the contrary, and I heard X is bad\". If we can't\nproduce such evidence, we shouldn't say anything at all, and users\ncan decide what to use for themselves.\n\n(Not that I'm agreeing or disagreeing about ext2 in particular...)\n\n> > My\n> > untested interpretation was that the update bookkeeping as well as data\n> > update were all getting journalled, the journal space would fill, get\n> > sync'd, then repeat. In effect, all blocks were being written TWICE just\n> > for the journalling, never mind the overhead for PostgreSQL\n> > transactions.\n\nJournalling may or may not have been the culprit, but I doubt everything\nwas being written to disk twice:\n\n(a) ext3 does metadata-only journalling by default\n\n(b) PostgreSQL only fsyncs WAL records to disk, not the data itself\n\n-Neil\n\n", "msg_date": "Tue, 12 Aug 2003 00:37:19 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:\n> > Well, yeah. But given the Linux propensity for introducing major\n> > features in \"minor\" releases (and thereby introducing all the\n> > attendant bugs), I'd think twice about using _any_ Linux feature\n> > until it's been through a major version (e.g. things introduced in\n> > 2.4.x won't really be stable until 2.6.x) -- and even there one is\n> > taking a risk[1].\n> \n> Dudes, seriously - switch to FreeBSD :P\n\nBut, like, we want a *good* OS... 8-0\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"Man, I'm pretty. Hoo Hah!\" |\n| Johnny Bravo |\n+---------------------------------------------------------------+\n\n\n", "msg_date": "11 Aug 2003 23:42:21 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Neil Conway wrote:\n> On Mon, Aug 11, 2003 at 06:59:30PM -0400, Bruce Momjian wrote:\n> > Uh, the ext2 developers say it isn't 100% reliable --- at least that is\n> > that was told. 
I don't know any personally, but I mentioned it while I\n> > was visiting Red Hat, and they didn't refute it.\n> \n> IMHO, if we're going to say \"don't use X on production PostgreSQL\n> systems\", we need to have some better evidene than \"no one has\n> said anything to the contrary, and I heard X is bad\". If we can't\n> produce such evidence, we shouldn't say anything at all, and users\n> can decide what to use for themselves.\n> \n> (Not that I'm agreeing or disagreeing about ext2 in particular...)\n\nI don't use Linux and was just repeating what I had heard from others,\nand read in postings. I don't have any first-hand experience with ext2\n(except for a laptop I borrowed that wouldn't boot after being shut\noff), but others on this mailing list have said the same thing.\n\nHere is another email talking about corrupting ext2 file systems:\n\n\thttp://groups.google.com/groups?q=ext2+corrupt+%22power+failure%22&start=10&hl=en&lr=&ie=UTF-8&selm=20021128061318.GE18980%40ursine&rnum=11\n\n From his wording, I assume he is not talking about fsck-correctable\ncorrupting.\n\n From what I remember, the ext2 failure cases were quite small, but known\nby the ext2 developers, and considered too large a performance hit to\ncorrect.\n\n> > > My\n> > > untested interpretation was that the update bookkeeping as well as data\n> > > update were all getting journalled, the journal space would fill, get\n> > > sync'd, then repeat. In effect, all blocks were being written TWICE just\n> > > for the journalling, never mind the overhead for PostgreSQL\n> > > transactions.\n> \n> Journalling may or may not have been the culprit, but I doubt everything\n> was being written to disk twice:\n> \n> (a) ext3 does metadata-only journalling by default\n\nIf that is true, why was I told people have to mount their ext3 file\nsystems with metadata-only. Again, I have no experience myself, but why\nare people telling me this?\n\n> (b) PostgreSQL only fsyncs WAL records to disk, not the data itself\n\nRight. WAL recovers the data.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 12 Aug 2003 00:52:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Tue, Aug 12, 2003 at 12:52:46AM -0400, Bruce Momjian wrote:\n I don't use Linux and was just repeating what I had heard from others,\n> and read in postings. I don't have any first-hand experience with ext2\n> (except for a laptop I borrowed that wouldn't boot after being shut\n> off), but others on this mailing list have said the same thing.\n\nRight, and I understand the need to answer users asking about\nwhich filesystem to use, but I'd be cautious of bad-mouthing\nanother OSS project without any hard evidence to back up our\nclaim (of course if we have such evidence, then fine -- I\njust haven't seen it). It would be like $SOME_LARGE_OSS\nproject saying \"Don't use our project with PostgreSQL, as\[email protected] had data corruption with PostgreSQL 6.3 on\nUnixWare\" -- kind of annoying, right?\n\n> > (a) ext3 does metadata-only journalling by default\n> \n> If that is true, why was I told people have to mount their ext3 file\n> systems with metadata-only. 
Again, I have no experience myself, but why\n> are people telling me this?\n\nPerhaps they were suggesting that people mount ext2 using\ndata=writeback, rather than the default of data=ordered.\n\nBTW, I've heard from a couple different people that using\next3 with data=journalled (i.e. enabling journalling of both\ndata and metadata) actually makes PostgreSQL faster, as\nit means that ext3 can skip PostgreSQL's fsync request\nsince ext3's log is flushed to disk already. I haven't\ntested this myself, however.\n\n-Neil\n\n", "msg_date": "Tue, 12 Aug 2003 01:08:09 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On 11 Aug 2003 at 23:42, Ron Johnson wrote:\n\n> On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:\n> > > Well, yeah. But given the Linux propensity for introducing major\n> > > features in \"minor\" releases (and thereby introducing all the\n> > > attendant bugs), I'd think twice about using _any_ Linux feature\n> > > until it's been through a major version (e.g. things introduced in\n> > > 2.4.x won't really be stable until 2.6.x) -- and even there one is\n> > > taking a risk[1].\n> > \n> > Dudes, seriously - switch to FreeBSD :P\n> \n> But, like, we want a *good* OS... 8-0\n\nJoke aside, I guess since postgresql is pretty much reliant on file system for \nbasic file functionality, I guess it's time to test Linux 2.6 and compare it.\n\nAnd don't forget, for large databases, there is still XFS out there which is \nprobably the ruler at upper end..\n\nBye\n Shridhar\n\n--\nUnfair animal names:-- tsetse fly\t\t\t-- bullhead-- booby\t\t\t-- duck-billed \nplatypus-- sapsucker\t\t\t-- Clarence\t\t-- Gary Larson\n\n", "msg_date": "Tue, 12 Aug 2003 12:13:18 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Thanks to everyone who responded. It's a pity that the discussion has gone\nthe ext2 vs ext3 route. The main reason I asked my original question is\nthat I am currently importing data into Pg which is about 2.9 Gigs.\nUnfortunately, to maintain data intergrity, data is inserted into a table\none row at a time. This exercise took ~7 days on the same system with\nslightly different setup(PIII 1.0GHZ, 512M RAM -- CPU speed was down graded\ndue to serveral over heating problems which have since been fixed, and RAM\nwas added for good measure). I have just reloaded the machine, and started\nthe import. So far ~ 6000 record have been imported, and there is 32000\nleft.\n\nP.S. Importing the same data on Mysql took ~2 days.\n\nBjoern Metzdorf wrote:\n\n>>be able to handle at least 8M at a time. The machine has\n>>two P III 933MHz CPU's, 1.128G RAM (512M*2 + 128M), and\n>>a 36 Gig hd with 1 Gig swap and 3 equal size ext3 partitions.\n>>What would be the recomended setup for good performance\n>>considering that the db will have about 15 users for\n>>9 hours in a day, and about 10 or so users throughout the day\n>>who wont be conistenly using the db.\n>> \n>>\n>\n>For 15 users you won't need great tuning at all. Just make sure, that you\n>have the right indizes on the tables and that you have good queries (query\n>plan).\n>\n>About the 8Meg blobs, I don't know. 
Other people on this list may be able to\n>give you hints here.\n>\n>Regards,\n>Bjoern\n> \n>\n\n\n", "msg_date": "Tue, 12 Aug 2003 09:16:25 +0200", "msg_from": "mixo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Tue, 12 Aug 2003, Christopher Kings-Lynne wrote:\n\n> > Well, yeah. But given the Linux propensity for introducing major\n> > features in \"minor\" releases (and thereby introducing all the\n> > attendant bugs), I'd think twice about using _any_ Linux feature\n> > until it's been through a major version (e.g. things introduced in\n> > 2.4.x won't really be stable until 2.6.x) -- and even there one is\n> > taking a risk[1].\n> \n> Dudes, seriously - switch to FreeBSD :P\n\nYeah, it's nice to have a BUG FREE OS huh? ;^)\n\nAnd yes, I've used FreeBSD, it's quite good, but I kept getting the \nfeeling it wasn't quite done. Especially the installation documentation.\n\n", "msg_date": "Tue, 12 Aug 2003 08:27:52 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Mon, 11 Aug 2003, Bruce Momjian wrote:\n\n> scott.marlowe wrote:\n> > On Fri, 8 Aug 2003, Andrew Sullivan wrote:\n> > \n> > > On Fri, Aug 08, 2003 at 09:40:20AM -0700, Jonathan Gardner wrote:\n> > > > \n> > > > Redhat puts ext3 on by default. Consider switching to a non-journaling FS \n> > > > (ext2?) with the partition that holds your data and WAL.\n> > > \n> > > I would give you exactly the opposite advice: _never_ use a\n> > > non-journalling fs for your data and WAL. I suppose if you can\n> > > afford to lose some transactions, you can do without journalling. \n> > > Otherwise, you're just borrowing trouble, near as I can tell.\n> > \n> > I'd argue that a reliable filesystem (ext2) is still better than a \n> > questionable journaling filesystem (ext3 on kernels <2.4.20).\n> > \n> > This isn't saying to not use jounraling, but I would definitely test it \n> > under load first to make sure it's not gonna lose data or get corrupted.\n> \n> That _would_ work if ext2 was a reliable file system --- it is not.\n> \n> This is the problem of Linux file systems --- they have unreliable, and\n> journalled, with nothing in between, except using a journalling file\n> system and having it only journal metadata.\n\nNever the less, on LINUX, which is what we use, it is by far more reliable \nthan ext3 or reiserfs. In four years of use I've lost zero files to any \nof its bugs. Of course, maybe that's RedHat patching the kernel for me or \nsomething. :-) they seem to hire some pretty good hackers.\n\n", "msg_date": "Tue, 12 Aug 2003 08:31:57 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On 11 Aug 2003, Ron Johnson wrote:\n\n> On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:\n> > > Well, yeah. But given the Linux propensity for introducing major\n> > > features in \"minor\" releases (and thereby introducing all the\n> > > attendant bugs), I'd think twice about using _any_ Linux feature\n> > > until it's been through a major version (e.g. things introduced in\n> > > 2.4.x won't really be stable until 2.6.x) -- and even there one is\n> > > taking a risk[1].\n> > \n> > Dudes, seriously - switch to FreeBSD :P\n> \n> But, like, we want a *good* OS... 8-0\n\nWhat, like Unixware? 
(ducking quickly) (*_*)\n\n", "msg_date": "Tue, 12 Aug 2003 08:34:56 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Tue, 12 Aug 2003, Neil Conway wrote:\n\n> On Tue, Aug 12, 2003 at 12:52:46AM -0400, Bruce Momjian wrote:\n> I don't use Linux and was just repeating what I had heard from others,\n> > and read in postings. I don't have any first-hand experience with ext2\n> > (except for a laptop I borrowed that wouldn't boot after being shut\n> > off), but others on this mailing list have said the same thing.\n> \n> Right, and I understand the need to answer users asking about\n> which filesystem to use, but I'd be cautious of bad-mouthing\n> another OSS project without any hard evidence to back up our\n> claim (of course if we have such evidence, then fine -- I\n> just haven't seen it). It would be like $SOME_LARGE_OSS\n> project saying \"Don't use our project with PostgreSQL, as\n> [email protected] had data corruption with PostgreSQL 6.3 on\n> UnixWare\" -- kind of annoying, right?\n\nWow, you put my thoughts exactly into words for me, thanks Neil.\n\n> > > (a) ext3 does metadata-only journalling by default\n> > \n> > If that is true, why was I told people have to mount their ext3 file\n> > systems with metadata-only. Again, I have no experience myself, but why\n> > are people telling me this?\n> \n> Perhaps they were suggesting that people mount ext2 using\n> data=writeback, rather than the default of data=ordered.\n> \n> BTW, I've heard from a couple different people that using\n> ext3 with data=journalled (i.e. enabling journalling of both\n> data and metadata) actually makes PostgreSQL faster, as\n> it means that ext3 can skip PostgreSQL's fsync request\n> since ext3's log is flushed to disk already. I haven't\n> tested this myself, however.\n\nNow that you mention it, that makes sense. I might have to test ext3 now \nthat the 2.6 kernel is on the way, i.e. the 2.4 kernel should be settling \ndown by now.\n\n", "msg_date": "Tue, 12 Aug 2003 08:36:22 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Shridhar Daithankar wrote:\n> On 11 Aug 2003 at 23:42, Ron Johnson wrote:\n> \n> \n>>On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:\n>>\n>>>>Well, yeah. But given the Linux propensity for introducing major\n>>>>features in \"minor\" releases (and thereby introducing all the\n>>>>attendant bugs), I'd think twice about using _any_ Linux feature\n>>>>until it's been through a major version (e.g. things introduced in\n>>>>2.4.x won't really be stable until 2.6.x) -- and even there one is\n>>>>taking a risk[1].\n>>>\n>>>Dudes, seriously - switch to FreeBSD :P\n>>\n>>But, like, we want a *good* OS... 8-0\n> \n> \n> Joke aside, I guess since postgresql is pretty much reliant on file system for \n> basic file functionality, I guess it's time to test Linux 2.6 and compare it.\n> \n> And don't forget, for large databases, there is still XFS out there which is \n> probably the ruler at upper end..\n\nThis is going to push the whole thing a little off-topic, but I'm curious to\nknow the answer.\n\nHas it ever been proposed or attemped to run PostgreSQL without any filesystem\n(or any other database for that matter ...).\n\nMeaning ... just tell it a raw partition to keep the data on and Postgre would\ncreate its own \"filesystem\" ... 
obviously, doing that would allow Postgre to\nbypass all the failings of all filesystems and rely entirely apon its own\nrules.\n\nOr are modern filesystems advanced enough that doing something like that would\nlose more than it would gain?\n\nJust thinking out loud.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Tue, 12 Aug 2003 14:39:19 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "\nOK, I got some hard evidence. Here is a discussion on the Linux kernel\nmailing list with postings from Allen Cox (ac Linux kernels) and Stephen\nTweedie (ext3 author).\n\n\thttp://www.tux.org/hypermail/linux-kernel/1999week14/subject.html#start\n\nSearch for \"softupdates and ext2\".\n\nHere is the original email in the thread:\n\n\thttp://www.tux.org/hypermail/linux-kernel/1999week14/0498.html\n\nSummary is at:\n\n\thttp://www.tux.org/hypermail/linux-kernel/1999week14/0571.html\n\nand conclusion in:\n\n\thttp://www.tux.org/hypermail/linux-kernel/1999week14/0504.html\n\nI now remember the issue --- ext2 makes all disk changes asynchonously\n(unless you mount it via sync, which is slow). This means that the file\nsystem isn't always consistent on disk. \n\nUFS has always sync metadata (file/directory creation) to the disk so\nthe disk was always consistent, but doesn't sync the data to the disk,\nfor performance reasons. With soft updates, the metadata writes are\ndelayed, and written to disk in an order that keeps the file system\nconsistent.\n \nIs this enough evidence, or should I keep researching?\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> On Tue, Aug 12, 2003 at 12:52:46AM -0400, Bruce Momjian wrote:\n> I don't use Linux and was just repeating what I had heard from others,\n> > and read in postings. I don't have any first-hand experience with ext2\n> > (except for a laptop I borrowed that wouldn't boot after being shut\n> > off), but others on this mailing list have said the same thing.\n> \n> Right, and I understand the need to answer users asking about\n> which filesystem to use, but I'd be cautious of bad-mouthing\n> another OSS project without any hard evidence to back up our\n> claim (of course if we have such evidence, then fine -- I\n> just haven't seen it). It would be like $SOME_LARGE_OSS\n> project saying \"Don't use our project with PostgreSQL, as\n> [email protected] had data corruption with PostgreSQL 6.3 on\n> UnixWare\" -- kind of annoying, right?\n> \n> > > (a) ext3 does metadata-only journalling by default\n> > \n> > If that is true, why was I told people have to mount their ext3 file\n> > systems with metadata-only. Again, I have no experience myself, but why\n> > are people telling me this?\n> \n> Perhaps they were suggesting that people mount ext2 using\n> data=writeback, rather than the default of data=ordered.\n> \n> BTW, I've heard from a couple different people that using\n> ext3 with data=journalled (i.e. enabling journalling of both\n> data and metadata) actually makes PostgreSQL faster, as\n> it means that ext3 can skip PostgreSQL's fsync request\n> since ext3's log is flushed to disk already. 
I haven't\n> tested this myself, however.\n> \n> -Neil\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 12 Aug 2003 14:39:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Tue, Aug 12, 2003 at 02:39:19PM -0400, Bill Moran wrote:\n> Meaning ... just tell it a raw partition to keep the data on and\n> Postgre would create its own \"filesystem\" ... obviously, doing that\n> would allow Postgre to bypass all the failings of all filesystems\n> and rely entirely apon its own rules.\n> \n> Or are modern filesystems advanced enough that doing something like\n> that would lose more than it would gain?\n\nThe latter, mostly. This has been debated repeatedly on -hackers. \nIf you want \"raw\" access, then you have to implement some other kind\nof specialised filesystem of your own. And you have to have all\nsorts of nice tools to cope with the things that (for instance) fsck\nhandles. I think the reaction of most developers has been, \"Why\nreinvent the wheel?\"\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 12 Aug 2003 14:55:39 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "> > > Well, yeah. But given the Linux propensity for introducing major\n> > > features in \"minor\" releases (and thereby introducing all the\n> > > attendant bugs), I'd think twice about using _any_ Linux feature\n> > > until it's been through a major version (e.g. things introduced in\n> > > 2.4.x won't really be stable until 2.6.x) -- and even there one is\n> > > taking a risk[1].\n> > \n> > Dudes, seriously - switch to FreeBSD :P\n> \n> Yeah, it's nice to have a BUG FREE OS huh? ;^)\n> \n> And yes, I've used FreeBSD, it's quite good, but I kept getting the\n> feeling it wasn't quite done. Especially the installation\n> documentation.\n\nWhile the handbook isn't the same as reading the actual source or the\nonly FreeBSD documentation, it certainly is quite good (to the point\nthat publishers see small market to publish FreeBSD books because the\ndocumentation provided by the project is so good), IMHO.\n\nhttp://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/\n\nIf anyone on this list has any issues with the documentation, please\ntake them up with me _privately_ and I will do my best to either\naddress or correct the problem.\n\nNow, back to our regularly scheduled and on topic programming... -sc\n\n-- \nSean Chittenden\n\"(PostgreSQL|FreeBSD).org - The Power To Serve\"\n", "msg_date": "Tue, 12 Aug 2003 12:46:09 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Tue, 2003-08-12 at 13:39, Bruce Momjian wrote:\n> OK, I got some hard evidence. 
Here is a discussion on the Linux kernel\n> mailing list with postings from Allen Cox (ac Linux kernels) and Stephen\n> Tweedie (ext3 author).\n> \n> \thttp://www.tux.org/hypermail/linux-kernel/1999week14/subject.html#start\n> \n> Search for \"softupdates and ext2\".\n> \n> Here is the original email in the thread:\n> \n> \thttp://www.tux.org/hypermail/linux-kernel/1999week14/0498.html\n> \n> Summary is at:\n> \n> \thttp://www.tux.org/hypermail/linux-kernel/1999week14/0571.html\n> \n> and conclusion in:\n> \n> \thttp://www.tux.org/hypermail/linux-kernel/1999week14/0504.html\n> \n> I now remember the issue --- ext2 makes all disk changes asynchonously\n> (unless you mount it via sync, which is slow). This means that the file\n> system isn't always consistent on disk. \n> \n> UFS has always sync metadata (file/directory creation) to the disk so\n> the disk was always consistent, but doesn't sync the data to the disk,\n> for performance reasons. With soft updates, the metadata writes are\n> delayed, and written to disk in an order that keeps the file system\n> consistent.\n> \n> Is this enough evidence, or should I keep researching?\n\nThis is all 4 years old, though. Isn't that why the ext3 \"layer\" was\ncreated, and filesystems like reiserFS, XFS and (kinda) JFS were added\nto Linux?\n\n> ---------------------------------------------------------------------------\n> \n> Neil Conway wrote:\n> > On Tue, Aug 12, 2003 at 12:52:46AM -0400, Bruce Momjian wrote:\n> > I don't use Linux and was just repeating what I had heard from others,\n> > > and read in postings. I don't have any first-hand experience with ext2\n> > > (except for a laptop I borrowed that wouldn't boot after being shut\n> > > off), but others on this mailing list have said the same thing.\n> > \n> > Right, and I understand the need to answer users asking about\n> > which filesystem to use, but I'd be cautious of bad-mouthing\n> > another OSS project without any hard evidence to back up our\n> > claim (of course if we have such evidence, then fine -- I\n> > just haven't seen it). It would be like $SOME_LARGE_OSS\n> > project saying \"Don't use our project with PostgreSQL, as\n> > [email protected] had data corruption with PostgreSQL 6.3 on\n> > UnixWare\" -- kind of annoying, right?\n> > \n> > > > (a) ext3 does metadata-only journalling by default\n> > > \n> > > If that is true, why was I told people have to mount their ext3 file\n> > > systems with metadata-only. Again, I have no experience myself, but why\n> > > are people telling me this?\n> > \n> > Perhaps they were suggesting that people mount ext2 using\n> > data=writeback, rather than the default of data=ordered.\n> > \n> > BTW, I've heard from a couple different people that using\n> > ext3 with data=journalled (i.e. enabling journalling of both\n> > data and metadata) actually makes PostgreSQL faster, as\n> > it means that ext3 can skip PostgreSQL's fsync request\n> > since ext3's log is flushed to disk already. I haven't\n> > tested this myself, however.\n> > \n> > -Neil\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"Man, I'm pretty. 
Hoo Hah!\" |\n| Johnny Bravo |\n+---------------------------------------------------------------+\n\n\n", "msg_date": "12 Aug 2003 22:30:33 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Tue, 12 Aug 2003, mixo wrote:\n\n> that I am currently importing data into Pg which is about 2.9 Gigs.\n> Unfortunately, to maintain data intergrity, data is inserted into a table\n> one row at a time.'\n\nSo you don't put a number of inserts into one transaction?\n\nIf you don't do that then postgresql will treat each command as a\ntransaction and each insert is going to be forced out on disk (returning\nwhen the data is just in some cache is not safe even if other products\nmight do that). If you don't do this then the server promise the client\nthat the row have been stored but then the server goes down and the row\nthat was in the cache is lost. It's much faster but not what you expect\nfrom a real database.\n\nSo, group the inserts in transactions with maybe 1000 commands each. It \nwill go much faster. It can then cache the rows and in the end just make \nsure all 1000 have been written out on disk.\n\nThere is also a configuration variable that can tell postgresql to not \nwait until the insert is out on disk, but that is not recomended if you \nvalue your data.\n\nAnd last, why does it help integrity to insert data one row at a time?\n\n-- \n/Dennis\n\n", "msg_date": "Wed, 13 Aug 2003 08:30:12 +0200 (CEST)", "msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "> So, group the inserts in transactions with maybe 1000 commands each. It\n> will go much faster. It can then cache the rows and in the end just make\n> sure all 1000 have been written out on disk.\n\nMore than that, he should be using COPY - it's 10x faster than even grouped\ninserts.\n\nChris\n\n", "msg_date": "Wed, 13 Aug 2003 14:47:03 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Wed, 2003-08-13 at 01:47, Christopher Kings-Lynne wrote:\n> > So, group the inserts in transactions with maybe 1000 commands each. It\n> > will go much faster. It can then cache the rows and in the end just make\n> > sure all 1000 have been written out on disk.\n> \n> More than that, he should be using COPY - it's 10x faster than even grouped\n> inserts.\n\nI have a table which has a foreign key reference to a properly indexed\ntable, and needed to load 15GB of uncompressed data into that table.\n\nSince the machine is minimal (60GB 5400RPM IDE HDD, 1GB RAM, 1GHz\nAthlon), to save precious disk space, I had the data compressed into\n22 files totaling 641GiB. The records are approximately 275 bytes\nin size.\n\nAlso, because date transformations needed to be made, I had to 1st\ninsert into a temp table, and insert from there into the main table.\nThus, in essence, I had to insert each record twice.\n\nSo, in 8:45 (not 8 minutes 45 seconds!, decompressed 641MiB worth of\n96% compressed files, inserted 30M rows, and inserted 30M rows again,\nwhile doing foreign key checks to another table. 
And the data files\nplus database are all on the same disk.\n\nPretty impressive: 1,920 inserts/second.\n\nfor f in ltx_*unl.gz;\ndo\n psql test1 -c \"truncate table t_lane_tx2;\" ;\n (zcat $f | sed \"s/\\\"//g\" | \\\n psql test1 -c \"copy t_lane_tx2 from stdin delimiter ',';\");\n time psql -a -f sel_into_ltx.sql -d test1 ;\ndone\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"Man, I'm pretty. Hoo Hah!\" |\n| Johnny Bravo |\n+---------------------------------------------------------------+\n\n\n", "msg_date": "13 Aug 2003 05:33:47 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Ron Johnson wrote:\n> On Tue, 2003-08-12 at 13:39, Bruce Momjian wrote:\n> > OK, I got some hard evidence. Here is a discussion on the Linux kernel\n> > mailing list with postings from Allen Cox (ac Linux kernels) and Stephen\n> > Tweedie (ext3 author).\n> > \n> > \thttp://www.tux.org/hypermail/linux-kernel/1999week14/subject.html#start\n> > \n> > Search for \"softupdates and ext2\".\n> > \n> > Here is the original email in the thread:\n> > \n> > \thttp://www.tux.org/hypermail/linux-kernel/1999week14/0498.html\n> > \n> > Summary is at:\n> > \n> > \thttp://www.tux.org/hypermail/linux-kernel/1999week14/0571.html\n> > \n> > and conclusion in:\n> > \n> > \thttp://www.tux.org/hypermail/linux-kernel/1999week14/0504.html\n> > \n> > I now remember the issue --- ext2 makes all disk changes asynchonously\n> > (unless you mount it via sync, which is slow). This means that the file\n> > system isn't always consistent on disk. \n> > \n> > UFS has always sync metadata (file/directory creation) to the disk so\n> > the disk was always consistent, but doesn't sync the data to the disk,\n> > for performance reasons. With soft updates, the metadata writes are\n> > delayed, and written to disk in an order that keeps the file system\n> > consistent.\n> > \n> > Is this enough evidence, or should I keep researching?\n> \n> This is all 4 years old, though. Isn't that why the ext3 \"layer\" was\n\nYes, it is four years old, but no one has told me ext2 has changed in\nthis regard, and seeing that they created ext3 to fix these aspects, I\nwould think ext2 hasn't changed.\n\n> created, and filesystems like reiserFS, XFS and (kinda) JFS were added\n> to Linux?\n\nYes, it is those ext2 limitations that caused the development of ext3\nand the others. However, they went much father than implementing a\ncrash-safe file system, but rather enabled a file system that doesn't\nneed fsck on crash reboot. This causes fsync of data and metadata (file\ncreation), which slows down the file system, and PostgreSQL doesn't need\nit.\n\nYou can mount ext3 and others with data=writeback to fsync only\nmetadata, but it isn't the default.\n\nI am not sure what the ext3 layer is.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 13 Aug 2003 11:37:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" } ]
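To make the loading advice in the thread above concrete, here is a minimal sketch of the two techniques suggested -- batching plain INSERTs into one transaction, and replacing them with COPY. The database, table, and file names are hypothetical:

#!/bin/sh
# 1) Group many INSERTs into a single transaction, so the server syncs the
#    WAL once per batch instead of once per row:
psql mydb <<'EOF'
BEGIN;
INSERT INTO import_table VALUES (1, 'first row');
INSERT INTO import_table VALUES (2, 'second row');
-- ... a few hundred to a few thousand rows per transaction ...
COMMIT;
EOF

# 2) Better still, stream the whole file through COPY, as in the script above:
psql mydb -c "copy import_table from stdin delimiter ','" < data.csv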
[ { "msg_contents": "\nSuSE 7.3, PostgreSQL cvshead (7.4)\n\nThis is as far as I've gotten with 7.4.\nIt is built and configured like my 7.3 installation\non the same machine. I have built from CVS previously. \nAnd the production sources always builds very nice and clean.\n\nNothing runs. gbd output is below. It is balking\non loading a suspect library.\n\nI ordinarily would suspect some minor permission \nor configuration issue, but I cannot see one. \n\nJoe Conway suggested it might be a problem with the\nthe windows configuration changes.\n\nMy machine has WINE on it (windows) although\nI never use it. There really should be no\nreason for anything to be trying to link\nin a windows socket library.\n\nI don't have time to follow it further right now.\nBut if someone else has a similar configuration\nand/or wine installed, it would be very helpful\nto compare 7.4 installations. I can provide\nconfig arguments and log if someone would find\nthem useful.\n\n>gdb bin/postmaster\nGNU gdb 20010316\nCopyright 2001 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you are\nwelcome to change it and/or distribute copies of it under certain conditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show warranty\" for details.\nThis GDB was configured as \"i386-suse-linux\"...\n(gdb) run -i -D /local/pghead/data\nStarting program: /local/pghead/bin/postmaster -i -D /local/pghead/data\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x40099ac5 in dllname () from /usr/lib/libwsock32.so\n(gdb) bt\n#0 0x40099ac5 in dllname () from /usr/lib/libwsock32.so\n#1 0x08121821 in StreamServerPort ()\n#2 0x08150996 in PostmasterMain ()\n#3 0x08123353 in main ()\n#4 0x4010e7ee in __libc_start_main () from /lib/libc.so.6\n\n\[email protected]\n", "msg_date": "Sun, 10 Aug 2003 20:04:40 -0700", "msg_from": "elein <[email protected]>", "msg_from_op": true, "msg_subject": "Windows on SuSE? 7.4" }, { "msg_contents": "\nThat line is certainly strange:\n\n\t#0 0x40099ac5 in dllname () from /usr/lib/libwsock32.so\n\nWhen you run configure, it says you are on Linux, right?\n\nMy guess is that gdb is getting confused because there is no dllopen\ncall in StreamServerPort().\n\n---------------------------------------------------------------------------\n\nelein wrote:\n> \n> SuSE 7.3, PostgreSQL cvshead (7.4)\n> \n> This is as far as I've gotten with 7.4.\n> It is built and configured like my 7.3 installation\n> on the same machine. I have built from CVS previously. \n> And the production sources always builds very nice and clean.\n> \n> Nothing runs. gbd output is below. It is balking\n> on loading a suspect library.\n> \n> I ordinarily would suspect some minor permission \n> or configuration issue, but I cannot see one. \n> \n> Joe Conway suggested it might be a problem with the\n> the windows configuration changes.\n> \n> My machine has WINE on it (windows) although\n> I never use it. There really should be no\n> reason for anything to be trying to link\n> in a windows socket library.\n> \n> I don't have time to follow it further right now.\n> But if someone else has a similar configuration\n> and/or wine installed, it would be very helpful\n> to compare 7.4 installations. 
I can provide\n> config arguments and log if someone would find\n> them useful.\n> \n> >gdb bin/postmaster\n> GNU gdb 20010316\n> Copyright 2001 Free Software Foundation, Inc.\n> GDB is free software, covered by the GNU General Public License, and you are\n> welcome to change it and/or distribute copies of it under certain conditions.\n> Type \"show copying\" to see the conditions.\n> There is absolutely no warranty for GDB. Type \"show warranty\" for details.\n> This GDB was configured as \"i386-suse-linux\"...\n> (gdb) run -i -D /local/pghead/data\n> Starting program: /local/pghead/bin/postmaster -i -D /local/pghead/data\n> \n> Program received signal SIGSEGV, Segmentation fault.\n> 0x40099ac5 in dllname () from /usr/lib/libwsock32.so\n> (gdb) bt\n> #0 0x40099ac5 in dllname () from /usr/lib/libwsock32.so\n> #1 0x08121821 in StreamServerPort ()\n> #2 0x08150996 in PostmasterMain ()\n> #3 0x08123353 in main ()\n> #4 0x4010e7ee in __libc_start_main () from /lib/libc.so.6\n> \n> \n> [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 11 Aug 2003 00:08:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows on SuSE? 7.4" }, { "msg_contents": "elein <[email protected]> writes:\n> This is as far as I've gotten with 7.4.\n\nWould you rebuild with --enable-debug (perhaps also --enable-cassert)\nso that the gdb backtrace is more informative?\n\nAlso, it seems likely that the issue is in or around the recently-added\nIPv6 support, so I'd suggest using CVS tip or last night's snapshot\nrather than the beta1 tarball. We've already made some portability\nfixes there since beta1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 11:54:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows on SuSE? 7.4 " }, { "msg_contents": "cassert was on. Now debug is on, too.\nI updated from cvs-head just now.\n\nconfigure knows it is a linux box.\nShould it be trying to link to libwsock32.so\nor not? If this is a legitimate link, then\nthe problem is different than if it is trying\nto link it in erroneously.\n\n--elein\n\nThe is the top of the config.log\n---------------------\nThis file contains any messages produced by compilers while\nrunning configure, to aid debugging if configure makes a mistake.\n\nIt was created by PostgreSQL configure 7.4beta1, which was\ngenerated by GNU Autoconf 2.53. Invocation command line was\n\n $ ./configure --prefix=/local/pghead --with-perl --with-python --enable-depend --enable-cassert --enable-d\nebug\n\n## --------- ##\n## Platform. ##\n## --------- ##\n\nhostname = cookie\nuname -m = i686\nuname -r = 2.4.16-4GB\nuname -s = Linux\nuname -v = #1 Mon Apr 15 08:57:26 GMT 2002\n---------------------\n\n\n$ gdb postmaster\nGNU gdb 20010316\nCopyright 2001 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you are\nwelcome to change it and/or distribute copies of it under certain conditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. 
Type \"show warranty\" for details.\nThis GDB was configured as \"i386-suse-linux\"...\n(gdb) run -i -D /local/pghead/data\nStarting program: /local/pghead/bin/postmaster -i -D /local/pghead/data\n\nProgram received signal SIGSEGV, Segmentation fault.\n0x40099ac5 in dllname () from /usr/lib/libwsock32.so\n(gdb) bt\n#0 0x40099ac5 in dllname () from /usr/lib/libwsock32.so\n#1 0x081218b1 in StreamServerPort (family=0, hostName=0x0, portNumber=5432, unixSocketName=0x82d6e68 \"\", \n ListenSocket=0x829b420, MaxListen=10) at pqcomm.c:279\n#2 0x08150a26 in PostmasterMain (argc=4, argv=0x82cfae8) at postmaster.c:765\n#3 0x081233e3 in main (argc=4, argv=0xbffff414) at main.c:215\n#4 0x4010e7ee in __libc_start_main () from /lib/libc.so.6\n\n\nOn Mon, Aug 11, 2003 at 11:54:39AM -0400, Tom Lane wrote:\n> elein <[email protected]> writes:\n> > This is as far as I've gotten with 7.4.\n> \n> Would you rebuild with --enable-debug (perhaps also --enable-cassert)\n> so that the gdb backtrace is more informative?\n> \n> Also, it seems likely that the issue is in or around the recently-added\n> IPv6 support, so I'd suggest using CVS tip or last night's snapshot\n> rather than the beta1 tarball. We've already made some portability\n> fixes there since beta1.\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Mon, 11 Aug 2003 10:07:40 -0700", "msg_from": "elein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Windows on SuSE? 7.4" }, { "msg_contents": "elein <[email protected]> writes:\n> configure knows it is a linux box.\n> Should it be trying to link to libwsock32.so\n> or not? If this is a legitimate link, then\n> the problem is different than if it is trying\n> to link it in erroneously.\n\nconfigure is unconditionally including libwsock32 if it can find one.\nAFAICT from the CVS logs, this was only expected to happen on win32\n(Bruce, that was your commit, configure.in v1.250; please confirm).\nSo it would probably make sense to not look for libwsock32 unless\nPORTNAME is \"win32\".\n\nI take it you actually have a libwsock32? What's it supposed to do?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 13:29:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows on SuSE? 7.4 " }, { "msg_contents": "Yes, I actually have a libwsock32 because my\nsystem has wine on it. Wine is a windows \nemulator. \n\nSo the assumption that any system with that\nfile is a windows system will break on \nsystems with windows emulators.\n\nIt sounds like Joe's guess on this was right.\n\n--elein\n\n\nOn Mon, Aug 11, 2003 at 01:29:19PM -0400, Tom Lane wrote:\n> elein <[email protected]> writes:\n> > configure knows it is a linux box.\n> > Should it be trying to link to libwsock32.so\n> > or not? If this is a legitimate link, then\n> > the problem is different than if it is trying\n> > to link it in erroneously.\n> \n> configure is unconditionally including libwsock32 if it can find one.\n> AFAICT from the CVS logs, this was only expected to happen on win32\n> (Bruce, that was your commit, configure.in v1.250; please confirm).\n> So it would probably make sense to not look for libwsock32 unless\n> PORTNAME is \"win32\".\n> \n> I take it you actually have a libwsock32? What's it supposed to do?\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Mon, 11 Aug 2003 10:44:47 -0700", "msg_from": "elein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Windows on SuSE? 7.4" }, { "msg_contents": "I blame SuSE. 
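For anyone who wants to see exactly where a stray library like this crept in, something along these lines should show it (paths as in the build above; these are standard tools, output omitted):

ldd /local/pghead/bin/postmaster | grep wsock32   # runtime dependency on /usr/lib/libwsock32.so
grep wsock32 config.log                           # configure's AC_CHECK_LIB test picking it up
grep '^LIBS' src/Makefile.global                  # -lwsock32 ending up in the final link flags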
\n\nThank you for the fix and confirmation of the problem.\n\nelein\n\nOn Mon, Aug 11, 2003 at 01:53:31PM -0400, Tom Lane wrote:\n> elein <[email protected]> writes:\n> > Yes, I actually have a libwsock32 because my\n> > system has wine on it. Wine is a windows \n> > emulator. \n> \n> And they drop windows-only libraries into /usr/lib? Yech.\n> \n> Anyway, I can't see a need to include libwsock32 on non-win32 platforms.\n> Will modify configure.\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Mon, 11 Aug 2003 10:53:22 -0700", "msg_from": "elein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows on SuSE? 7.4" }, { "msg_contents": "elein <[email protected]> writes:\n> Yes, I actually have a libwsock32 because my\n> system has wine on it. Wine is a windows \n> emulator. \n\nAnd they drop windows-only libraries into /usr/lib? Yech.\n\nAnyway, I can't see a need to include libwsock32 on non-win32 platforms.\nWill modify configure.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 13:53:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows on SuSE? 7.4 " }, { "msg_contents": "elein <[email protected]> writes:\n> It sounds like Joe's guess on this was right.\n\nI've committed this fix in configure.in:\n\n***************\n*** 631,637 ****\n AC_CHECK_LIB(gen, main)\n AC_CHECK_LIB(PW, main)\n AC_CHECK_LIB(resolv, main)\n- AC_CHECK_LIB(wsock32, main)\n AC_SEARCH_LIBS(getopt_long, [getopt gnugetopt])\n # QNX:\n AC_CHECK_LIB(unix, main)\n--- 636,641 ----\n***************\n*** 645,650 ****\n--- 649,659 ----\n AC_SEARCH_LIBS(fdatasync, [rt posix4])\n # Cygwin:\n AC_CHECK_LIB(cygipc, shmget)\n+ # WIN32:\n+ if test \"$PORTNAME\" = \"win32\"\n+ then\n+ \tAC_CHECK_LIB(wsock32, main)\n+ fi\n \n if test \"$with_readline\" = yes; then\n PGAC_CHECK_READLINE\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 14:14:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows on SuSE? 7.4 " }, { "msg_contents": "On Mon, 2003-08-11 at 13:44, elein wrote:\n> Yes, I actually have a libwsock32 because my\n> system has wine on it. Wine is a windows \n> emulator. \n> \n\nWine Is Not an Emulator :-)\n\nRobert Treat\n-- \nPostgreSQL :: The Enterprise Open Source Database\n\n", "msg_date": "11 Aug 2003 14:53:12 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows on SuSE? 7.4" }, { "msg_contents": "\nYes, this is the right fix. 
I never suspected wsock32 would exist on a\nnon-MS WIn machine.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> elein <[email protected]> writes:\n> > It sounds like Joe's guess on this was right.\n> \n> I've committed this fix in configure.in:\n> \n> ***************\n> *** 631,637 ****\n> AC_CHECK_LIB(gen, main)\n> AC_CHECK_LIB(PW, main)\n> AC_CHECK_LIB(resolv, main)\n> - AC_CHECK_LIB(wsock32, main)\n> AC_SEARCH_LIBS(getopt_long, [getopt gnugetopt])\n> # QNX:\n> AC_CHECK_LIB(unix, main)\n> --- 636,641 ----\n> ***************\n> *** 645,650 ****\n> --- 649,659 ----\n> AC_SEARCH_LIBS(fdatasync, [rt posix4])\n> # Cygwin:\n> AC_CHECK_LIB(cygipc, shmget)\n> + # WIN32:\n> + if test \"$PORTNAME\" = \"win32\"\n> + then\n> + \tAC_CHECK_LIB(wsock32, main)\n> + fi\n> \n> if test \"$with_readline\" = yes; then\n> PGAC_CHECK_READLINE\n> \n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 11 Aug 2003 17:45:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows on SuSE? 7.4" }, { "msg_contents": "Bruce Momjian commented:\n\n \"Uh, the ext2 developers say it isn't 100% reliable\" ... \"I mentioned\n it while I was visiting Red Hat, and they didn't refute it.\"\n\n1. Nobody has gone through any formal proofs, and there are few\nsystems _anywhere_ that are 100% reliable. NASA has occasionally lost\nspacecraft to software bugs, so nobody will be making such rash claims\nabout ext2.\n\n2. Several projects have taken on the task of introducing journalled\nfilesystems, most notably ext3 (sponsored by RHAT via Stephen Tweedy)\nand ReiserFS (oft sponsored by SuSE). (I leave off JFS/XFS since they\nexisted long before they had any relationship with Linux.)\n\nParticipants in such projects certainly have interest in presenting\nthe notion that they provide improved reliability over ext2.\n\n3. There is no \"apologist\" for ext2 that will either (stupidly and\nfutilely) claim it to be flawless. Nor is there substantial interest\nin improving it; the sort people that would be interested in that sort\nof thing are working on the other FSes.\n\nThis also means that there's no one interested in going into the\nguaranteed-to-be-unsung effort involved in trying to prove ext2 to be\n\"formally reliable.\"\n\n4. It would be silly to minimize the impact of commercial interest.\nRHAT has been paying for the development of a would-be ext2 successor.\nFor them to refute your comments wouldn't be in their interests.\n\nNote that these are \"warm and fuzzy\" comments, the whole lot. The\n80-some thousand lines of code involved in ext2, ext3, reiserfs, and\njfs are no more amenable to absolute mathematical proof of reliability\nthan the corresponding BSD FFS code.\n\n6. Such efforts would be futile, anyways. Disks are mechanical\ndevices, and, as such, suffer from substantial reliability issues\nirrespective of the reliability of the software. 
I have lost sleep on\ntoo many occasions due to failures of:\n a) Disk drives,\n b) Disk controllers [the worst Oracle failure I encountered resulted\n from this], and\n c) OS memory management.\n\nI used ReiserFS back in its \"bleeding edge\" days, and find myself a\nlot more worried about losing data to flakey disk controllers.\n\nIt frankly seems insulting to focus on ext2 in this way when:\n\n a) There aren't _hard_ conclusions to point to, just soft ones;\n\n b) The reasons for you hearing vaguely negative things about ext2\n are much more likely political than they are technical.\n\nI wish there were more \"hard and fast\" conclusions to draw, to be able\nto conclusively say that one or another Linux filesystem was\nunambiguously preferable for use with PostgreSQL. There are not\nconclusive metrics, either in terms of speed or of some notion of\n\"reliability.\" I'd expect ReiserFS to be the poorest choice, and for\nXFS to be the best, but I only have fuzzy reasons, as opposed to\nmetrics.\n\nThe absence of measurable metrics of the sort is _NOT_ a proof that\n(say) FreeBSD is conclusively preferable, whatever your own\npreferences (I'll try to avoid characterizing it as \"prejudices,\" as\nthat would be unkind) may be. That would represent a quite separate\ndebate, and one that doesn't belong here, certainly not on a thread\nwhere the underlying question was \"Which Linux FS is preferred?\"\n\nIf the OSDB TPC-like benchmarks can get \"packaged\" up well enough to\neasily run and rerun them, there's hope of getting better answers,\nperhaps even including performance metrics for *BSD. That, not\nLinux-baiting, is the answer...\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://www.ntlug.org/~cbbrowne/sap.html\n(eq? 'truth 'beauty) ; to avoid unassigned-var error, since compiled code\n ; will pick up previous value to var set!-ed,\n ; the unassigned object.\n-- from BBN-CL's cl-parser.scm\n", "msg_date": "Mon, 11 Aug 2003 22:58:18 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "On Linux Filesystems" }, { "msg_contents": "\nAs I remember, there were clear cases that ext2 would fail to recover,\nand it was known to be a limitation of the file system implementation. \nSome of the ext2 developers were in the room at Red Hat when I said\nthat, so if it was incorrect, they would hopefully have spoken up. I\naddressed the comments directly to them.\n\nTo be recoverasble, you have to be careful how you sync metadata to\ndisk. All the journalling file systems, and the BSD UFS do that. I am\ntold ext2 does not. I don't know much more than that.\n\nAs I remember years ago, ext2 was faster than UFS, but it was true\nbecause ext2 didn't guarantee failure recovery. Now, with UFS soft\nupdates, the have similar performance characteristics, but UFS is still\ncrash-safe.\n\nHowever, I just tried google and couldn't find any documented evidence\nthat ext2 isn't crash-safe, so maybe I am wrong.\n\n---------------------------------------------------------------------------\n\nChristopher Browne wrote:\n> Bruce Momjian commented:\n> \n> \"Uh, the ext2 developers say it isn't 100% reliable\" ... \"I mentioned\n> it while I was visiting Red Hat, and they didn't refute it.\"\n> \n> 1. Nobody has gone through any formal proofs, and there are few\n> systems _anywhere_ that are 100% reliable. NASA has occasionally lost\n> spacecraft to software bugs, so nobody will be making such rash claims\n> about ext2.\n> \n> 2. 
Several projects have taken on the task of introducing journalled\n> filesystems, most notably ext3 (sponsored by RHAT via Stephen Tweedy)\n> and ReiserFS (oft sponsored by SuSE). (I leave off JFS/XFS since they\n> existed long before they had any relationship with Linux.)\n> \n> Participants in such projects certainly have interest in presenting\n> the notion that they provide improved reliability over ext2.\n> \n> 3. There is no \"apologist\" for ext2 that will either (stupidly and\n> futilely) claim it to be flawless. Nor is there substantial interest\n> in improving it; the sort people that would be interested in that sort\n> of thing are working on the other FSes.\n> \n> This also means that there's no one interested in going into the\n> guaranteed-to-be-unsung effort involved in trying to prove ext2 to be\n> \"formally reliable.\"\n> \n> 4. It would be silly to minimize the impact of commercial interest.\n> RHAT has been paying for the development of a would-be ext2 successor.\n> For them to refute your comments wouldn't be in their interests.\n> \n> Note that these are \"warm and fuzzy\" comments, the whole lot. The\n> 80-some thousand lines of code involved in ext2, ext3, reiserfs, and\n> jfs are no more amenable to absolute mathematical proof of reliability\n> than the corresponding BSD FFS code.\n> \n> 6. Such efforts would be futile, anyways. Disks are mechanical\n> devices, and, as such, suffer from substantial reliability issues\n> irrespective of the reliability of the software. I have lost sleep on\n> too many occasions due to failures of:\n> a) Disk drives,\n> b) Disk controllers [the worst Oracle failure I encountered resulted\n> from this], and\n> c) OS memory management.\n> \n> I used ReiserFS back in its \"bleeding edge\" days, and find myself a\n> lot more worried about losing data to flakey disk controllers.\n> \n> It frankly seems insulting to focus on ext2 in this way when:\n> \n> a) There aren't _hard_ conclusions to point to, just soft ones;\n> \n> b) The reasons for you hearing vaguely negative things about ext2\n> are much more likely political than they are technical.\n> \n> I wish there were more \"hard and fast\" conclusions to draw, to be able\n> to conclusively say that one or another Linux filesystem was\n> unambiguously preferable for use with PostgreSQL. There are not\n> conclusive metrics, either in terms of speed or of some notion of\n> \"reliability.\" I'd expect ReiserFS to be the poorest choice, and for\n> XFS to be the best, but I only have fuzzy reasons, as opposed to\n> metrics.\n> \n> The absence of measurable metrics of the sort is _NOT_ a proof that\n> (say) FreeBSD is conclusively preferable, whatever your own\n> preferences (I'll try to avoid characterizing it as \"prejudices,\" as\n> that would be unkind) may be. That would represent a quite separate\n> debate, and one that doesn't belong here, certainly not on a thread\n> where the underlying question was \"Which Linux FS is preferred?\"\n> \n> If the OSDB TPC-like benchmarks can get \"packaged\" up well enough to\n> easily run and rerun them, there's hope of getting better answers,\n> perhaps even including performance metrics for *BSD. That, not\n> Linux-baiting, is the answer...\n> -- \n> select 'cbbrowne' || '@' || 'acm.org';\n> http://www.ntlug.org/~cbbrowne/sap.html\n> (eq? 
'truth 'beauty) ; to avoid unassigned-var error, since compiled code\n> ; will pick up previous value to var set!-ed,\n> ; the unassigned object.\n> -- from BBN-CL's cl-parser.scm\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 12 Aug 2003 00:07:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Linux Filesystems" }, { "msg_contents": "On Mon, Aug 11, 2003 at 10:58:18PM -0400, Christopher Browne wrote:\n> 1. Nobody has gone through any formal proofs, and there are few\n> systems _anywhere_ that are 100% reliable. \n\nI think the problem is that ext2 is known to be not perfectly crash\nsafe. That is, fsck on reboot after a crash can cause, in some\nextreme cases, recently-fscynced data to end up in lost+found/. The\ndata may or may not be recoverable from there.\n\nI don't think anyone would object to such a characterisation of ext2. \nIt was not designed, ever, for perfect data safety -- it was designed\nas a reasonably good compromise for most cases. _Every_ filesystem\nentails some compromises. This happens to be the one entailed by\next2.\n\nFor production use with valuable data, for my money (or, more\nprecisely, my time when a system panics for no good reason), it is\nalways worth the additional speed penalty to use something like\nmetadata journalling. Maybe others have more time to spare.\n\n> perhaps even including performance metrics for *BSD. That, not\n> Linux-baiting, is the answer...\n\nI didn't see anyone Linux-baiting.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 12 Aug 2003 10:28:56 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Linux Filesystems" }, { "msg_contents": "On Tue, 12 Aug 2003, Andrew Sullivan wrote:\n\n> On Mon, Aug 11, 2003 at 10:58:18PM -0400, Christopher Browne wrote:\n> > 1. Nobody has gone through any formal proofs, and there are few\n> > systems _anywhere_ that are 100% reliable. \n> \n> I think the problem is that ext2 is known to be not perfectly crash\n> safe. That is, fsck on reboot after a crash can cause, in some\n> extreme cases, recently-fscynced data to end up in lost+found/. The\n> data may or may not be recoverable from there.\n> \n> I don't think anyone would object to such a characterisation of ext2. \n> It was not designed, ever, for perfect data safety -- it was designed\n> as a reasonably good compromise for most cases. _Every_ filesystem\n> entails some compromises. This happens to be the one entailed by\n> ext2.\n> \n> For production use with valuable data, for my money (or, more\n> precisely, my time when a system panics for no good reason), it is\n> always worth the additional speed penalty to use something like\n> metadata journalling. Maybe others have more time to spare.\n\nI think the issue here is if you are running with the async mount option, \nthen it is quite likely that your volume will be corrupted if there are \nwrites going on and power fails.\n\nI'm pretty sure that as long as the partition is mounted sync, this isn't \na problem. 
\n\nI have seen reports where ext3 caused the data corruption (old kernels, \n2.4.4 and before I believe) problem, not ext2. I.e. the addition of \njournaling caused data loss.\n\nGiven that possibility, it may well have been at one time that ext2 was a \nsafer bet than ext3.\n\n> > perhaps even including performance metrics for *BSD. That, not\n> > Linux-baiting, is the answer...\n> \n> I didn't see anyone Linux-baiting.\n\nNo more than the typical, light hearted stuff we toss back and forth. I \ncertainly wasn't upset by any of it.\n\n", "msg_date": "Tue, 12 Aug 2003 09:06:14 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Linux Filesystems" }, { "msg_contents": "People:\n\n> On Mon, Aug 11, 2003 at 10:58:18PM -0400, Christopher Browne wrote:\n> > 1. Nobody has gone through any formal proofs, and there are few\n> > systems _anywhere_ that are 100% reliable.\n>\n> I think the problem is that ext2 is known to be not perfectly crash\n> safe. That is, fsck on reboot after a crash can cause, in some\n> extreme cases, recently-fscynced data to end up in lost+found/. The\n> data may or may not be recoverable from there.\n\nAside from that, as recently as eighteen months ago I had to manually fsck an \next2 system after an unexpected power-out. After my interactive session the \nsystem recovered and no data was lost. However, the client lost 3.5 hours of \nwork time ... 2.5 hours for me to get to the site, and 1 hour to recover the \nserver (mostly waiting time). \n\nSo it's a tradeoff with loss of performance vs. recovery time. In a server \nroom with redundant backup power supplies, \"clean room\" security and \nfail-over services, I can certainly imagine that data journalling would not \nbe needed. That is, however, the minority ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 12 Aug 2003 09:36:21 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Linux Filesystems" }, { "msg_contents": "On Tue, Aug 12, 2003 at 09:36:21AM -0700, Josh Berkus wrote:\n> So it's a tradeoff with loss of performance vs. recovery time. In\n> a server room with redundant backup power supplies, \"clean room\"\n> security and fail-over services, I can certainly imagine that data\n> journalling would not be needed. \n\nYou can have all the redundant power, high availability hardware, and\nultra-Robocop security going, and still have crashes: so far as I\nknow, _nobody_ makes perfectly reliable hardware, and the harder you\npush it, the more likely you are to find trouble. And certainly,\nwhen you have a surprise outage because the CPU where the kernel\nhappened to be burned itself up, an extra hour or two offline while\nyou do fsck is liable to make you cry out variations of those four\nletters more than once. :-/ \n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 12 Aug 2003 12:47:18 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Linux Filesystems" }, { "msg_contents": "Christopher, I appreciate your comments. At the end it goes down to personal\nexperience with one or the other file system. From that I can tell, that \nI have\nmade good experience with UFS, EXT2, and XFS. 
I made catastrophic ex-\nperience with ReiserFS (not during operation but you are a looser when \nit fails\nbecause the recovery methods are likely to be insufficient)\n\nSo at the end if somebody runs technical equipment, regardless whether it's\na computer or a chemical fab. It can fail and you need to make up your mind\nabout contingency.\n\nThis is due even before you start operating the equipment.\n\nSo waste too much time on thinking about the perfect file system. But \nevaluate\nthe potential damage that can result from failure. Develop a Backup&Recovery\nstrategy and test it, test it and test it again, so that you can do it \nblindly when it's\ndue.\n\nCiao, Toni\n\n>I wish there were more \"hard and fast\" conclusions to draw, to be able\n>to conclusively say that one or another Linux filesystem was\n>unambiguously preferable for use with PostgreSQL. There are not\n>conclusive metrics, either in terms of speed or of some notion of\n>\"reliability.\" I'd expect ReiserFS to be the poorest choice, and for\n>XFS to be the best, but I only have fuzzy reasons, as opposed to\n>metrics.\n>\n>The absence of measurable metrics of the sort is _NOT_ a proof that\n>(say) FreeBSD is conclusively preferable, whatever your own\n>preferences (I'll try to avoid characterizing it as \"prejudices,\" as\n>that would be unkind) may be. That would represent a quite separate\n>debate, and one that doesn't belong here, certainly not on a thread\n>where the underlying question was \"Which Linux FS is preferred?\"\n>\n>If the OSDB TPC-like benchmarks can get \"packaged\" up well enough to\n>easily run and rerun them, there's hope of getting better answers,\n>perhaps even including performance metrics for *BSD. That, not\n>Linux-baiting, is the answer...\n> \n>\n\n", "msg_date": "Fri, 15 Aug 2003 12:06:04 +0200", "msg_from": "Toni Schlichting <[email protected]>", "msg_from_op": false, "msg_subject": "Re: On Linux Filesystems" } ]
[ { "msg_contents": "Hi PostrgeSQL team,\n\nMy PostrgeSQL installed as part of CYGWIN (Windows XP).\nI have compared performance PostrgeSQL to MS SQL (I used a little Java\nprogram with number of inserts in table).\nMS SQL is faster in 12 times :-(\nIt's very strange results.\nGuys who developed this server: what you can tell in this - what\ncustomizations needs to increase of productivity?\nHow to force PostgreeSQL to work faster? \n\n\n \t Speed (inserts/sec)\t Elapsed time (ms)\t\nMS SQL (Average):\t 295\t 39 869\t\n \t testInsert 5000\t \t\n \t 263\t 18 977\t\n \t 255\t 19 619\t\n \t 306\t 16 334\t\n \t \t \t\n \t testInsert 10000\t \t\n \t 315\t 31 716\t\n \t 324\t 30 905\t\n \t 319\t 31 325\t\n \t \t \t\n \t testInsert 20000\t \t\n \t 241\t 82 919\t\n \t 313\t 63 922\t\n \t 317\t 63 101\t\n \t \t \t\nPostrgreSQL (Average):\t 24\t 520 160\t\n \t testInsert 5000\t \t\n \t 26\t 191 434\t\n \t 26\t 191 264\t\n \t 26\t 192 295\t\n \t \t \t\n \t testInsert 10000\t \t\n \t 22\t 463 669\t\n \t 25\t 393 510\t\n \t 24\t 409 528\t\n \t \t \t\n \t testInsert 20000\t \t\n \t 24\t 834 911\t\n \t 17\t 1 184 613\t\n \t 24\t 820 218\t\nMS SQL is faster (times):\t 12\t 13\t\n\n\n\n______________________________________________________\nWith regards,\nSerge.\n", "msg_date": "Mon, 11 Aug 2003 11:59:45 +0300", "msg_from": "\"Serge Dorofeev\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to force PostgreeSQL to work faster? " }, { "msg_contents": "Hi!\n\nPlease send me the test db and the queries, with precise information\nmaybe the developers can help.\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n\n", "msg_date": "Mon, 11 Aug 2003 13:37:50 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force PostgreeSQL to work faster? 
" }, { "msg_contents": "\nOn 11/08/2003 09:59 Serge Dorofeev wrote:\n> Hi PostrgeSQL team,\n> \n> My PostrgeSQL installed as part of CYGWIN (Windows XP).\n> I have compared performance PostrgeSQL to MS SQL (I used a little Java\n> program with number of inserts in table).\n> MS SQL is faster in 12 times :-(\n> It's very strange results.\n> Guys who developed this server: what you can tell in this - what\n> customizations needs to increase of productivity?\n> How to force PostgreeSQL to work faster?\n> \n> \n> \t Speed (inserts/sec)\t Elapsed time (ms)\t \n> MS SQL (Average):\t 295\t 39 869\t \n> \t testInsert 5000\t \t \n> \t 263\t 18 977\t \n> \t 255\t 19 619\t \n> \t 306\t 16 334\t \n> \t \t \t \n> \t testInsert 10000\t \t \n> \t 315\t 31 716\t \n> \t 324\t 30 905\t \n> \t 319\t 31 325\t \n> \t \t \t \n> \t testInsert 20000\t \t \n> \t 241\t 82 919\t \n> \t 313\t 63 922\t \n> \t 317\t 63 101\t \n> \t \t \t \n> PostrgreSQL (Average):\t 24\t 520 160\t \n> \t testInsert 5000\t \t \n> \t 26\t 191 434\t \n> \t 26\t 191 264\t \n> \t 26\t 192 295\t \n> \t \t \t \n> \t testInsert 10000\t \t \n> \t 22\t 463 669\t \n> \t 25\t 393 510\t \n> \t 24\t 409 528\t \n> \t \t \t \n> \t testInsert 20000\t \t \n> \t 24\t 834 911\t \n> \t 17\t 1 184 613\t \n> \t 24\t 820 218\t \n> MS SQL is faster (times):\t 12\t 13\t \n\nYou don't give any details about your test code or how the databases are \nconfigured so I'm guessing that you're inserts use an autocommitting \nconnection. For PostgreSQL, this causes each insert to be run inside a \ntranaction and the transaction is then immediately written to disk. My \nguess is that MS SQL behaves differently and doesn't immediately write to \ndisk (faster maybe but could cause data corruption). Try modifying your \nprogram to have connection.setAutoCommit(false) and do a \nconnection.commit() after say every 100 inserts.\n\nHTH\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Mon, 11 Aug 2003 13:30:33 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force PostgreeSQL to work faster?" }, { "msg_contents": "On 11 Aug 2003 at 11:59, Serge Dorofeev wrote:\n\n> \n> Hi PostrgeSQL team,\n> \n> My PostrgeSQL installed as part of CYGWIN (Windows XP).\n> I have compared performance PostrgeSQL to MS SQL (I used a little Java program \n> with number of inserts in table).\n> MS SQL is faster in 12 times :-(\n> It's very strange results.\n> Guys who developed this server: what you can tell in this - what customizations \n> needs to increase of productivity?\n> How to force PostgreeSQL to work faster? \n\nFirst of all, get a unix. Cygwin is nowhere near any unix OS as far as \nperformance goes. Get linux and test. Testing postgresql under cygwin is like \ntesting MSSQL server under wine. May be wine is faster than cygwin but you got \nthe idea..\n\nSecond tune postgresql. Since you have not given any details, I would assume \nyou are runnning stock install of postgresql, which is not made for a benchmark \nto say the least.\n\n\nCheck http://www.varlena.com/GeneralBits/Tidbits/perf.html for starters. 
Let us \nknow if that makes any difference..\n\n\n\nBye\n Shridhar\n\n--\nBaker's First Law of Federal Geometry:\tA block grant is a solid mass of money surrounded on all sides by\tgovernors.\n\n", "msg_date": "Mon, 11 Aug 2003 18:54:29 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force PostgreeSQL to work faster? " } ]
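
The thread above boils down to two fixes: stop paying a commit for every single row, and tune the stock configuration. A minimal SQL sketch of the batching side follows; the table name insert_test and the row values are hypothetical and only stand in for whatever the benchmark program writes:

    -- Hypothetical table standing in for the benchmark target.
    CREATE TABLE insert_test (
        id      integer PRIMARY KEY,
        payload text
    );

    -- Slow pattern: under autocommit every INSERT is its own
    -- transaction, so each row waits for its own commit.
    INSERT INTO insert_test VALUES (1, 'row 1');
    INSERT INTO insert_test VALUES (2, 'row 2');

    -- Faster: batch many rows per explicit transaction, paying the
    -- commit cost once per batch instead of once per row.
    BEGIN;
    INSERT INTO insert_test VALUES (3, 'row 3');
    INSERT INTO insert_test VALUES (4, 'row 4');
    -- ... rest of the batch ...
    COMMIT;

    -- Fastest for bulk loads: COPY streams all rows in one statement
    -- (tab-separated columns, terminated by a line containing \.).
    COPY insert_test FROM STDIN;
    5	row 5
    6	row 6
    \.

From JDBC the same effect comes from connection.setAutoCommit(false) plus connection.commit() every hundred or so rows, as suggested above.
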
[ { "msg_contents": "Hi,\n Currently we are using postgresql 7.3 with Redhat linux 9. We find that when we try to execute 200,000 update statement through JDBC, the performance of degraded obviously for each update statement when comparing with less update statement(eg. 5000). Is there any suggestion that we can improve the performance for executing update statement at postgresql ? Thanks.\n\nRegards,\nRicky.\n", "msg_date": "Mon, 11 Aug 2003 19:33:32 +0800", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Peformance of Update" }, { "msg_contents": "On 11 Aug 2003 at 19:33, [email protected] wrote:\n> Currently we are using postgresql 7.3 with Redhat linux 9. We find that \n> when we try to execute 200,000 update statement through JDBC, the performance \n> of degraded obviously for each update statement when comparing with less update \n> statement(eg. 5000). Is there any suggestion that we can improve the \n> performance for executing update statementat postgresql ? Thanks.\n\nHow are you bunching your transactions? I mean how many updates per \ntransaction?\n\nAnd have you tried moving WAL to separate disk for such a update heavy \nenvironment? Have you are tuned FSM to take care of dead tuples generated in \nvacuum? Are you running autovacuum daemon?\n\nAll these things are almost a must for such update heavy environment..\n\nBye\n Shridhar\n\n--\nMoore's Constant:\tEverybody sets out to do something, and everybody\tdoes \nsomething, but no one does what he sets out to do.\n\n", "msg_date": "Mon, 11 Aug 2003 18:56:08 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance of Update" } ]
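
For readers hitting the same slowdown, here is a rough SQL sketch of the reply above; the table, the batch size and the free-space-map numbers are placeholders, not measured recommendations:

    -- Hypothetical table standing in for the one being updated.
    CREATE TABLE order_status (
        order_id integer PRIMARY KEY,
        status   text
    );
    INSERT INTO order_status VALUES (1, 'new');
    INSERT INTO order_status VALUES (2, 'new');

    -- Bunch the updates: one transaction per batch of statements
    -- rather than one implicit transaction per UPDATE.
    BEGIN;
    UPDATE order_status SET status = 'shipped' WHERE order_id = 1;
    UPDATE order_status SET status = 'shipped' WHERE order_id = 2;
    -- ... rest of the batch ...
    COMMIT;

    -- Every UPDATE leaves a dead row version behind, so vacuum often
    -- enough that the free space map can recycle that space.
    VACUUM ANALYZE order_status;

    -- Related postgresql.conf entries mentioned above (placeholder
    -- values; size them from what VACUUM VERBOSE reports):
    --   max_fsm_relations = 1000
    --   max_fsm_pages     = 200000
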
[ { "msg_contents": "Hi all,\n\nI posted this problem on the sql list, and was referred to this list in stead.\nI have attached an sql statement that normally runs under 3 minutes.\nThat is, until I vacuum analyze the database (or just the tables in the query),\nthen the same query runs longer than 12 hours, and I have to kill it.\n\nHowever 90% of queries are faster after analyzing on this database,\nthere are two or three, including this one that takes for ever.\n\nI have tried to reverse engineer the explain plan from before analyzing,\nto come up with an sql statement, using proper joins, to force the planner\nto do the original join, but although I came close, I never got the same \nresult as the original query.\n\nI suspect that this might be caused by some of the crazy indexes that \nwere built on some of these tables, but I can't really do much about that,\nunless I can come up with a very good reason to nuke them.\n\nI also attached the \"create table\" statements for all the tables, as well\nas a row count of each.\n\nCan somebody help me with guidelines or something similar, \nto understand exactly what is happening in the explain plan.\n\nTIA\nStefan", "msg_date": "Mon, 11 Aug 2003 15:58:41 +0200", "msg_from": "Stef <[email protected]>", "msg_from_op": true, "msg_subject": "Analyze makes queries slow..." }, { "msg_contents": "Stef <[email protected]> writes:\n> I have attached an sql statement that normally runs under 3 minutes.\n> That is, until I vacuum analyze the database (or just the tables in the query),\n> then the same query runs longer than 12 hours, and I have to kill it.\n\nCould we see the results of \"EXPLAIN ANALYZE\", rather than just EXPLAIN,\nfor the un-analyzed case? I won't make you do it for the analyzed case ;-)\nbut when dealing with a plan-selection problem the planner's estimates\nare obviously not to be trusted.\n\nAlso, what do you see in pg_stats (after analyzing) for each of the\ntables used in the query?\n\nAnd what PG version is this, exactly?\n\n\t\t\tregards, tom lane\n\nPS: in case you don't know this already, an easy way to get back to the\nun-analyzed state is \"DELETE FROM pg_statistics\".\n", "msg_date": "Mon, 11 Aug 2003 11:43:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyze makes queries slow... " }, { "msg_contents": "Hi Tom,\n\nThanks for responding.\nI got as much info as I could :\n\nOn Mon, 11 Aug 2003 11:43:45 -0400\nTom Lane <[email protected]> wrote:\n\n=> Could we see the results of \"EXPLAIN ANALYZE\", rather than just EXPLAIN,\n=> for the un-analyzed case? \n\nAttached the output of this.\n\n=> Also, what do you see in pg_stats (after analyzing) for each of the\n=> tables used in the query?\n\nI attached a file in csv format of pg_stats after analyzing.\n(With the columns selected on the top line)\n\nIt looks like cached values for (quite a lot of?) the table columns.\nI would assume it stores the most commonly selected \nvalues for every column with an index. Don't know if I'm correct.\n\n=> And what PG version is this, exactly?\n\nPostgreSQL 7.3.1\n\nKind regards\nStefan", "msg_date": "Mon, 11 Aug 2003 18:47:36 +0200", "msg_from": "Stef <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analyze makes queries slow..." }, { "msg_contents": "Stef <[email protected]> writes:\n> => Could we see the results of \"EXPLAIN ANALYZE\", rather than just EXPLAIN,\n> => for the un-analyzed case? \n\n> Attached the output of this.\n\nHmm... 
not immediately obvious where it's going wrong. Could you try\nthis (after ANALYZE):\n\n\tset enable_mergejoin to off;\n\texplain analyze ... query ...\n\nIf it finishes in a reasonable amount of time, send the explain output.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 14:25:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyze makes queries slow... " }, { "msg_contents": "Stef <[email protected]> writes:\n> => And what PG version is this, exactly?\n\n> PostgreSQL 7.3.1\n\nAh, I think I see it: you are getting burnt by a mergejoin estimation\nbug that was fixed in 7.3.2. Please update (you might as well go to\n7.3.4 while you're at it) and see if the results improve.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 14:50:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyze makes queries slow... " }, { "msg_contents": "On Mon, Aug 11, 2003 at 03:58:41PM +0200, Stef wrote:\n\n> I have attached an sql statement that normally runs under 3 minutes.\n> That is, until I vacuum analyze the database (or just the tables in the query),\n> then the same query runs longer than 12 hours, and I have to kill it.\n\nHmm, I have noticed similar problem with a query with order by ... limit clause.Although it runs only 10 times slower after analyze :)\n\nThe query joins one big table (20 000 rows) with several small tables\n(200-4000 rows) than order by \"primary key of big table\" limit 20\n\nWithout this order by ... limit clause the query is 5 times faster after\nanalyze.\n\nLooking into explain analyze outputs:\n1. Before vacuum analyze a planer chooses nested loop, the deepest is:\n -> Nested Loop (cost=0.00..116866.54 rows=19286 width=96) (actual time=0.14..1.39 rows=21 loops=1)\n -> Index Scan Backward using big_table_pkey on big_table k (cost=0.00..1461.15 rows=19286 width=52) (actual time=0.07..0.47 rows=21 loops=1)\n -> Index Scan using 4000rows_table_pkey on 4000rows_table zs (cost=0.00..5.97 rows=1 width=44) (actual time=0.02..0.02 rows=0 loops=21)\n\n2. After analyze uses hashjoins\n\nWhen I remove this order by limit clause the query after analyze takes \nthe same time and the query before analyze is much more slower.\n\nI won't blame the planer. How he could learn that he should first \ntake those 20 rows and than perform joins? There is a where clause\nwith complex exists(subquery) condition regarding one of big_table fields,\nbut removing this condition does not change the query plan.\n\nPure joining without any additional conditions and only primary key of big \ntable in select clause runs 4 times slower then whole query before \nvacuuum analyze :)\n\nDoes in all the planer take in the consideration the limit clause?\n\nProbably I'm missing something. I don't know much about the planer.\n\nFinaly I have redesigned the query.\n\nRegards,\nJacek\n\n", "msg_date": "Tue, 12 Aug 2003 11:57:07 +0200", "msg_from": "Jacek Rembisz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Analyze makes queries slow..." }, { "msg_contents": "On Mon, 11 Aug 2003 14:25:03 -0400\nTom Lane <[email protected]> wrote:\n\n=> \tset enable_mergejoin to off;\n=> \texplain analyze ... 
query ...\n=> \n=> If it finishes in a reasonable amount of time, send the explain output.\n\nHi again,\n\nI did this on the 7.3.1 database, and attached the output.\nIt actually ran faster after ANALYZE and 'set enable_mergejoin to off'\nThanks!\n\nI also reloaded this database onto 7.3.4, tried the same query after\nthe ANALYZE, and the query executed a lot faster.\nThanks again!\n\nI also attached the output of the EXPLAIN ANALYZE on 7.3.4\n\nFor now I'll maybe just disable mergejoin. But definitely a postgres\nupgrade is what I will do.\n\nI went through the different outputs of EXPLAIN ANALYZE a bit, and\nI think I can now see where the difference is.\n\nThanks a lot for the help.\n\nRegards\nStefan.", "msg_date": "Tue, 12 Aug 2003 19:30:10 +0200", "msg_from": "Stef <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Analyze makes queries slow..." } ]
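
The diagnostic recipe from this thread, collected into one psql session for anyone reproducing it. The two tables are hypothetical stand-ins for the real query, and note that the statistics catalog is spelled pg_statistic (no trailing s):

    -- Stand-in tables; substitute the real query under investigation.
    CREATE TABLE t_big   (id integer, grp integer);
    CREATE TABLE t_small (grp integer, name text);

    -- Gather (or refresh) the planner statistics.
    ANALYZE t_big;
    ANALYZE t_small;

    -- Compare estimated vs. actual rows with the default settings and
    -- with merge joins disabled, as suggested above.
    EXPLAIN ANALYZE SELECT * FROM t_big b, t_small s WHERE b.grp = s.grp;
    SET enable_mergejoin TO off;
    EXPLAIN ANALYZE SELECT * FROM t_big b, t_small s WHERE b.grp = s.grp;
    RESET enable_mergejoin;

    -- To fall back to the un-analyzed state for comparison, clear the
    -- statistics catalog (affects the whole database; superuser only).
    DELETE FROM pg_statistic;

Disabling the merge join is only a stopgap; as the thread concludes, the real fix was the estimation bug repaired in 7.3.2, so upgrading matters more than the session setting.
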
[ { "msg_contents": "Folks:\n\nI have a live and a test database for an in-production system running on \n7.2.4.\n\nThe test database is a copy of the live one. They are running on the same \ncopy of Postgres, on the same server. I use the test database to test \nchanges before I apply them to the production system. Periodically (like \ntoday) I reload the test system from the backup of the live one.\n\nI'm now having a problem with a huge complex query which runs very slowly on \nthe test database, but very quickly on the live database! The problems \nseems to be that the test database seems to think that a nested loop is an \nappropriate strategy for a table of 140,000 records, while the live one \nrealizes that it's not.\n\nWhat really has me scratching my head is that the test database is an exact \ncopy of the live database, just a few hours older. And the live database has \nnever displayed this performance problem, whereas the test database has had \nit for 2-3 weeks.\n\nBoth EXPLAIN ANALYZEs are attached.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco", "msg_date": "Mon, 11 Aug 2003 14:15:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Odd problem with performance in duplicate database" }, { "msg_contents": "Folks,\n\nMore followup on this:\n\nThe crucial difference between the two execution plans is this clause:\n\ntest db has:\n-> Seq Scan on case_clients (cost=0.00..3673.48 rows=11274 width=11) (actual \ntime=0.02..302.20 rows=8822 loops=855)\n\nwhereas live db has:\n-> Index Scan using idx_caseclients_case on case_clients (cost=0.00..5.10 \nrows=1 width=11) (actual time=0.03..0.04 rows=1 loops=471)\n\nusing an enable_seqscan = false fixes this, but is obviously not a long-term \nsolution. \n\nI've re-created the test system from an immediate copy of the live database, \nand checked that the the main tables and indexes were reproduced faithfully.\n\nLowering random_page_cost seems to do the trick. But I'm still mystified; why \nwould one identical database pick a different plan than its copy?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 11 Aug 2003 15:03:46 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd problem with performance in duplicate database" }, { "msg_contents": "On Mon, 2003-08-11 at 17:03, Josh Berkus wrote:\n> Folks,\n> \n> More followup on this:\n> \n> The crucial difference between the two execution plans is this clause:\n> \n> test db has:\n> -> Seq Scan on case_clients (cost=0.00..3673.48 rows=11274 width=11) (actual \n> time=0.02..302.20 rows=8822 loops=855)\n> \n> whereas live db has:\n> -> Index Scan using idx_caseclients_case on case_clients (cost=0.00..5.10 \n> rows=1 width=11) (actual time=0.03..0.04 rows=1 loops=471)\n> \n> using an enable_seqscan = false fixes this, but is obviously not a long-term \n> solution. \n> \n> I've re-created the test system from an immediate copy of the live database, \n> and checked that the the main tables and indexes were reproduced faithfully.\n> \n> Lowering random_page_cost seems to do the trick. But I'm still mystified; why \n> would one identical database pick a different plan than its copy?\n\nIf the databases are on different machines, maybe the postgres.conf\nor pg_hba.conf files are different, and the buffer counts is affect-\ning the optimizer?\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. 
Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"Man, I'm pretty. Hoo Hah!\" |\n| Johnny Bravo |\n+---------------------------------------------------------------+\n\n\n", "msg_date": "11 Aug 2003 17:42:28 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd problem with performance in duplicate database" }, { "msg_contents": "Ron,\n\n> If the databases are on different machines, maybe the postgres.conf\n> or pg_hba.conf files are different, and the buffer counts is affect-\n> ing the optimizer?\n\nThe databases are on the same machine, using the same postmaster.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 11 Aug 2003 15:48:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd problem with performance in duplicate database" }, { "msg_contents": "Josh,\n\tI'm sure that you've thought of this, but it sounds like you may not have\ndone an analyze in your new DB.\nThanks,\nPeter Darley\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Josh Berkus\nSent: Monday, August 11, 2003 3:48 PM\nTo: Ron Johnson; PgSQL Performance ML\nSubject: Re: [PERFORM] Odd problem with performance in duplicate\ndatabase\n\n\nRon,\n\n> If the databases are on different machines, maybe the postgres.conf\n> or pg_hba.conf files are different, and the buffer counts is affect-\n> ing the optimizer?\n\nThe databases are on the same machine, using the same postmaster.\n\n--\n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n", "msg_date": "Mon, 11 Aug 2003 15:51:42 -0700", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd problem with performance in duplicate database" }, { "msg_contents": "Peter,\n\n> \tI'm sure that you've thought of this, but it sounds like you may not have\n> done an analyze in your new DB.\n\nYes. Also a VACUUM. Also forcing a REINDEX on the major involved tables.\n\nAlso running counts on the pg_* system tables to see if any objects did not \nget restored from the backup as compared with the live database.\n\nBy everything I can measure, the live database and the test are identical; yet \nthe test does not think that idx_caseclients_case is very accessable, and the \nlive database knows it is.\n\nIs this perhaps a bug with ANALYZE statistics in 7.2.4? I know that in that \nversion I don't have the option of increasing the statistics sampling ...\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 11 Aug 2003 15:59:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd problem with performance in duplicate database" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> By everything I can measure, the live database and the test are\n> identical; yet the test does not think that idx_caseclients_case is\n> very accessable, and the live database knows it is.\n\nLet's see the pg_stats rows for case_clients in both databases. 
The\nentries for trial_groups might be relevant too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 19:22:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd problem with performance in duplicate database " }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Still, they are differences. Attached.\n\nActually, it was mainly \"cases\" that I wanted to know about ---\nspecifically, whichever columns are in \"idx_cases_tgroup\".\nAlso, which of the trial_groups columns is the pkey?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 19:46:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd problem with performance in duplicate database " }, { "msg_contents": "Tom,\n\n> Let's see the pg_stats rows for case_clients in both databases. The\n> entries for trial_groups might be relevant too.\n\nMy reading is that the case is \"borderline\"; that is, becuase the correlation \nis about 10-20% higher on the test database (since it was restored \"clean\" \nfrom backup) the planner is resorting to a seq scan.\n\nAt which point the spectre of random_page_cost less than 1.0 rears its ugly \nhead again. Because the planner seems to regard this as a borderline case, \nbut it's far from borderline ... index scan takes 260ms, seq scan takes \n244,000ms. Yet my random_page_cost is set pretty low already, at 1.5.\n\nIt seems like I'd have to set random_page_cost to less than 1.0 to make sure \nthat the planner never used a seq scan. Which kinda defies the meaning of \nthe setting.\n\n*sigh* wish the client would pay for an upgrade ....\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 11 Aug 2003 16:51:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd problem with performance in duplicate database" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> My reading is that the case is \"borderline\";\n\nWell, clearly the planner is flipping to a much less desirable plan, but\nthe core estimation error is not borderline by my standards. 
In the\nlive DB we have this subplan:\n\n-> Nested Loop (cost=0.00..7.41 rows=1 width=12) (actual time=0.01..0.02 rows=1 loops=856)\n -> Index Scan using trial_groups_pkey on trial_groups (cost=0.00..3.49 rows=1 width=4) (actual time=0.01..0.01 rows=0 loops=856)\n -> Index Scan using idx_cases_tgroup on cases (cost=0.00..3.92 rows=1 width=8) (actual time=0.02..0.04 rows=4 loops=133)\n\nIn the test DB, the identical subplan is estimated at:\n\n-> Nested Loop (cost=0.00..81.53 rows=887 width=12) (actual time=0.03..0.04 rows=1 loops=855)\n -> Index Scan using trial_groups_pkey on trial_groups (cost=0.00..3.49 rows=1 width=4) (actual time=0.02..0.02 rows=0 loops=855)\n -> Index Scan using idx_cases_tgroup on cases (cost=0.00..77.77 rows=43 width=8) (actual time=0.03..0.07 rows=6 loops=133)\n\nand that factor of 887 error in the output rows estimate is what's\ndriving all the outer plan steps to make bad choices.\n\nThe \"trial_groups_pkey\" estimate is the same in both databases,\nso it's presumably a problem with estimating the number of\nmatches to a \"trial_groups\" row that will be found in \"cases\".\nThis is dependent on the pg_stats entries for the relevant\ncolumns, which I'm still hoping to see ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 19:59:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd problem with performance in duplicate database " }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Tom,\n>> Okay, here's our problem:\n>> \n>> live DB: tgroup_id n_distinct = -1\n>> \n>> test DN: tgroup_id n_distinct = 11\n>> \n>> The former estimate actually means that it thinks tgroup_id is a unique\n>> column, whereas the latter says there are only 11 distinct values in the\n>> column. I assume the former is much nearer to the truth (how many rows\n>> in cases, and how many distinct tgroup_id values)?\n\n> The real case is that there are 113 distinct tgroup_ids, which cover\n> about 10% of the population of cases. The other 90% is NULL. The\n> average tgroup_id is shared between 4.7 cases.\n\n> So this seems like sampling error.\n\nPartly. The numbers suggest that in ANALYZE's default sample of 3000\nrows, it's only finding about a dozen non-null tgroup_ids (yielding the\n0.996 null_frac value); and that in one case all dozen are different and\nin the other case there are two duplicates. It would help if you\nboosted the stats target for this column by a factor of 10. (You can\ndo that in 7.2, btw --- IIRC the only problem is that a pg_dump won't\nshow that you did so.)\n\nBut the other part of the problem is that in 7.2, the join selectivity\nestimator is way off when you are joining a unique column (like the pkey\non the other side) to a column with a very large fraction of nulls.\nWe only discovered this recently; it's fixed as of 7.3.3:\n\n2003-04-15 01:18 tgl\n\n\t* src/backend/utils/adt/selfuncs.c (REL7_3_STABLE): eqjoinsel's\n\tlogic for case where MCV lists are not present should account for\n\tNULLs; in hindsight this is obvious since the code for the\n\tMCV-lists case would reduce to this when there are zero entries in\n\tboth lists. 
Per example from Alec Mitchell.\n\nPossibly you could backpatch that into 7.2, although I'd think an update\nto 7.3.4 would be a more profitable use of time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Aug 2003 20:29:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd problem with performance in duplicate database " }, { "msg_contents": "Tom,\n\n> Partly. The numbers suggest that in ANALYZE's default sample of 3000\n> rows, it's only finding about a dozen non-null tgroup_ids (yielding the\n> 0.996 null_frac value); and that in one case all dozen are different and\n> in the other case there are two duplicates. It would help if you\n> boosted the stats target for this column by a factor of 10. (You can\n> do that in 7.2, btw --- IIRC the only problem is that a pg_dump won't\n> show that you did so.)\n\nAlso, there doesn't seem to be any way in 7.2 for me to find out what the \ncurrent statistics target for a column is. What am I missing?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 11 Aug 2003 17:43:48 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd problem with performance in duplicate database" }, { "msg_contents": "Tom,\n\n> Partly. The numbers suggest that in ANALYZE's default sample of 3000\n> rows, it's only finding about a dozen non-null tgroup_ids (yielding the\n> 0.996 null_frac value); and that in one case all dozen are different and\n> in the other case there are two duplicates. It would help if you\n> boosted the stats target for this column by a factor of 10. (You can\n> do that in 7.2, btw --- IIRC the only problem is that a pg_dump won't\n> show that you did so.)\n\nHmmm. No dice. I raised the selectivity to 1000, which increased n_distinct \nto 108, which is pretty close to accurate. However, the planner still \ninsists on using a seq scan on case_clients unless I drop random_page_cost to \n1.5 (which is up from 1.2 but still somewhat unreasonable).\n\n> But the other part of the problem is that in 7.2, the join selectivity\n> estimator is way off when you are joining a unique column (like the pkey\n> on the other side) to a column with a very large fraction of nulls.\n> We only discovered this recently; it's fixed as of 7.3.3:\n\nOK, I'll talk to the client about upgrading.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Mon, 11 Aug 2003 17:51:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd problem with performance in duplicate database" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Also, there doesn't seem to be any way in 7.2 for me to find out what the \n> current statistics target for a column is. What am I missing?\n\nThere still isn't a handy command for it --- you have to look at\npg_attribute.attstattarget for the column.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 12 Aug 2003 09:11:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd problem with performance in duplicate database " } ]
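
Pulled together from the exchange above, the statistics-target workaround looks like the following; the table and column names are the ones under discussion, and 1000 is simply the target Josh tried rather than a general recommendation:

    -- Sample the badly-estimated column much more heavily, then
    -- re-analyze so the larger sample takes effect.
    ALTER TABLE cases ALTER COLUMN tgroup_id SET STATISTICS 1000;
    ANALYZE cases;

    -- What the planner now believes about the column.
    SELECT null_frac, n_distinct, correlation
    FROM pg_stats
    WHERE tablename = 'cases' AND attname = 'tgroup_id';

    -- 7.2 has no command to show the current target; read it from the
    -- catalog instead (-1 means the column uses the default target).
    SELECT a.attname, a.attstattarget
    FROM pg_attribute a, pg_class c
    WHERE c.relname = 'cases' AND a.attrelid = c.oid
      AND a.attname = 'tgroup_id';
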
[ { "msg_contents": "Dear master:\n I have learned postgreSQL for serveral days, now i meet some problems. when I use a TPCC(Transaction Processing Performance Council) test program to test the performance of postgreSQL , postgreSQL works very slowly, it almost need 1 minute to finish a transaction, and the CPU percent is almost 100%,\nthe test environment is :\n OS: redhat 9.0(ext3, default configurations)\n Server: postgre7.3.4(default configurations) , PIII 800M, 1G Memory\n Client: tpcc test program,using ODBC API, PIII 800M, 1G Memory\n when using SQLServer, it can work on a workload of 40 Warehouse,\nbut postgreSQL can not work even on 1 warehouse. I think there must be\nsome problem with my postgreSQL, can you help me?\n I am in china, and my english is very poor, but i hope you can give \nme some advice, thanks.\n", "msg_date": "Tue, 12 Aug 2003 10:35:04 +0800 (CST)", "msg_from": "=?gb2312?q?xin=20fu?= <[email protected]>", "msg_from_op": true, "msg_subject": "about performance of postgreSQL" }, { "msg_contents": "Hi Xin;\n\nPostgreSQL is configured to run on virutally anything out of the box.\nThe reason for this is that, like Oracle, the database manager will not\nstart if it cannot allocate sufficient resources. We take the approach\nof ensuring that it will start so you can tune it.\n\nI would recomment trying to take a close look at many of the posts on\nthe Performance list (searching the archives) and paying attention to\nthings such as effective_cache_size and shared_buffers. If these don't\nanswer your questions, ask this list again.\n\nBest Wishes,\nChris Travers\n\n\nxin fu wrote:\n\n> Dear master:\n> I have learned postgreSQL for serveral days, now i meet some problems.\n> when I use a TPCC(Transaction Processing Performance Council) test\n> program to test the performance of postgreSQL , postgreSQL works very\n> slowly, it almost need 1 minute to finish a transaction, and the CPU\n> percent is almost 100%,\n> the test environment is :\n> OS: redhat 9.0(ext3, default configurations)\n> Server: postgre7.3.4(default configurations) , PIII 800M, 1G Memory\n> Client: tpcc test program,using ODBC API, PIII 800M, 1G Memory\n> when using SQLServer, it can work on a workload of 40 Warehouse,\n> but postgreSQL can not work even on 1 warehouse. I think there must be\n> some problem with my postgreSQL, can you help me?\n> I am in china, and my english is very poor, but i hope you can give\n> me some advice, thanks.\n", "msg_date": "Tue, 12 Aug 2003 10:26:17 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: about performance of postgreSQL" }, { "msg_contents": "Xin,\n\n> I would recomment trying to take a close look at many of the posts on\n> the Performance list (searching the archives) and paying attention to\n> things such as effective_cache_size and shared_buffers. If these don't\n> answer your questions, ask this list again.\n\nAlso see these articles:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 12 Aug 2003 11:02:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: about performance of postgreSQL" } ]
[ { "msg_contents": "\nHere is one talking about ext2 corruption from power failure from 2002:\n\n http://groups.google.com/groups?q=ext2+corrupt+%22power+failure%22&hl=en&lr=&ie=UTF-8&selm=alvrj5%249in%241%40usc.edu&rnum=9\n\n---------------------------------------------------------------------------\n\npgman wrote:\n> \n> As I remember, there were clear cases that ext2 would fail to recover,\n> and it was known to be a limitation of the file system implementation. \n> Some of the ext2 developers were in the room at Red Hat when I said\n> that, so if it was incorrect, they would hopefully have spoken up. I\n> addressed the comments directly to them.\n> \n> To be recoverasble, you have to be careful how you sync metadata to\n> disk. All the journalling file systems, and the BSD UFS do that. I am\n> told ext2 does not. I don't know much more than that.\n> \n> As I remember years ago, ext2 was faster than UFS, but it was true\n> because ext2 didn't guarantee failure recovery. Now, with UFS soft\n> updates, the have similar performance characteristics, but UFS is still\n> crash-safe.\n> \n> However, I just tried google and couldn't find any documented evidence\n> that ext2 isn't crash-safe, so maybe I am wrong.\n> \n> ---------------------------------------------------------------------------\n> \n> Christopher Browne wrote:\n> > Bruce Momjian commented:\n> > \n> > \"Uh, the ext2 developers say it isn't 100% reliable\" ... \"I mentioned\n> > it while I was visiting Red Hat, and they didn't refute it.\"\n> > \n> > 1. Nobody has gone through any formal proofs, and there are few\n> > systems _anywhere_ that are 100% reliable. NASA has occasionally lost\n> > spacecraft to software bugs, so nobody will be making such rash claims\n> > about ext2.\n> > \n> > 2. Several projects have taken on the task of introducing journalled\n> > filesystems, most notably ext3 (sponsored by RHAT via Stephen Tweedy)\n> > and ReiserFS (oft sponsored by SuSE). (I leave off JFS/XFS since they\n> > existed long before they had any relationship with Linux.)\n> > \n> > Participants in such projects certainly have interest in presenting\n> > the notion that they provide improved reliability over ext2.\n> > \n> > 3. There is no \"apologist\" for ext2 that will either (stupidly and\n> > futilely) claim it to be flawless. Nor is there substantial interest\n> > in improving it; the sort people that would be interested in that sort\n> > of thing are working on the other FSes.\n> > \n> > This also means that there's no one interested in going into the\n> > guaranteed-to-be-unsung effort involved in trying to prove ext2 to be\n> > \"formally reliable.\"\n> > \n> > 4. It would be silly to minimize the impact of commercial interest.\n> > RHAT has been paying for the development of a would-be ext2 successor.\n> > For them to refute your comments wouldn't be in their interests.\n> > \n> > Note that these are \"warm and fuzzy\" comments, the whole lot. The\n> > 80-some thousand lines of code involved in ext2, ext3, reiserfs, and\n> > jfs are no more amenable to absolute mathematical proof of reliability\n> > than the corresponding BSD FFS code.\n> > \n> > 6. Such efforts would be futile, anyways. Disks are mechanical\n> > devices, and, as such, suffer from substantial reliability issues\n> > irrespective of the reliability of the software. 
I have lost sleep on\n> > too many occasions due to failures of:\n> > a) Disk drives,\n> > b) Disk controllers [the worst Oracle failure I encountered resulted\n> > from this], and\n> > c) OS memory management.\n> > \n> > I used ReiserFS back in its \"bleeding edge\" days, and find myself a\n> > lot more worried about losing data to flakey disk controllers.\n> > \n> > It frankly seems insulting to focus on ext2 in this way when:\n> > \n> > a) There aren't _hard_ conclusions to point to, just soft ones;\n> > \n> > b) The reasons for you hearing vaguely negative things about ext2\n> > are much more likely political than they are technical.\n> > \n> > I wish there were more \"hard and fast\" conclusions to draw, to be able\n> > to conclusively say that one or another Linux filesystem was\n> > unambiguously preferable for use with PostgreSQL. There are not\n> > conclusive metrics, either in terms of speed or of some notion of\n> > \"reliability.\" I'd expect ReiserFS to be the poorest choice, and for\n> > XFS to be the best, but I only have fuzzy reasons, as opposed to\n> > metrics.\n> > \n> > The absence of measurable metrics of the sort is _NOT_ a proof that\n> > (say) FreeBSD is conclusively preferable, whatever your own\n> > preferences (I'll try to avoid characterizing it as \"prejudices,\" as\n> > that would be unkind) may be. That would represent a quite separate\n> > debate, and one that doesn't belong here, certainly not on a thread\n> > where the underlying question was \"Which Linux FS is preferred?\"\n> > \n> > If the OSDB TPC-like benchmarks can get \"packaged\" up well enough to\n> > easily run and rerun them, there's hope of getting better answers,\n> > perhaps even including performance metrics for *BSD. That, not\n> > Linux-baiting, is the answer...\n> > -- \n> > select 'cbbrowne' || '@' || 'acm.org';\n> > http://www.ntlug.org/~cbbrowne/sap.html\n> > (eq? 'truth 'beauty) ; to avoid unassigned-var error, since compiled code\n> > ; will pick up previous value to var set!-ed,\n> > ; the unassigned object.\n> > -- from BBN-CL's cl-parser.scm\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 12 Aug 2003 00:16:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: On Linux Filesystems" } ]
[ { "msg_contents": "FWIW, Informix can be run using a \"cooked\" (Unix) file for storing data or it uses \"raw\" disk space and bypasses the ordinary (high level) UNIX controllers and does its own reads/writes. About 10 times faster and safer. Of course, itmay have taken a lot of programmer time to make that solid. But the performance gains are significant.\n\nGreg W.\n\n\n-----Original Message-----\nFrom:\tBill Moran [mailto:[email protected]]\nSent:\tTue 8/12/2003 11:39 AM\nTo:\t\nCc:\tPgSQL Performance ML\nSubject:\tRe: [PERFORM] Perfomance Tuning\n\nShridhar Daithankar wrote:\n> On 11 Aug 2003 at 23:42, Ron Johnson wrote:\n> \n> \n>>On Mon, 2003-08-11 at 19:50, Christopher Kings-Lynne wrote:\n>>\n>>>>Well, yeah. But given the Linux propensity for introducing major\n>>>>features in \"minor\" releases (and thereby introducing all the\n>>>>attendant bugs), I'd think twice about using _any_ Linux feature\n>>>>until it's been through a major version (e.g. things introduced in\n>>>>2.4.x won't really be stable until 2.6.x) -- and even there one is\n>>>>taking a risk[1].\n>>>\n>>>Dudes, seriously - switch to FreeBSD :P\n>>\n>>But, like, we want a *good* OS... 8-0\n> \n> \n> Joke aside, I guess since postgresql is pretty much reliant on file system for \n> basic file functionality, I guess it's time to test Linux 2.6 and compare it.\n> \n> And don't forget, for large databases, there is still XFS out there which is \n> probably the ruler at upper end..\n\nThis is going to push the whole thing a little off-topic, but I'm curious to\nknow the answer.\n\nHas it ever been proposed or attemped to run PostgreSQL without any filesystem\n(or any other database for that matter ...).\n\nMeaning ... just tell it a raw partition to keep the data on and Postgre would\ncreate its own \"filesystem\" ... obviously, doing that would allow Postgre to\nbypass all the failings of all filesystems and rely entirely apon its own\nrules.\n\nOr are modern filesystems advanced enough that doing something like that would\nlose more than it would gain?\n\nJust thinking out loud.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n\n", "msg_date": "Tue, 12 Aug 2003 12:30:43 -0700", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Greg,\n\n> FWIW, Informix can be run using a \"cooked\" (Unix) file for storing data or\n> it uses \"raw\" disk space and bypasses the ordinary (high level) UNIX\n> controllers and does its own reads/writes. About 10 times faster and safer.\n> Of course, itmay have taken a lot of programmer time to make that solid.\n> But the performance gains are significant.\n\nYes, but it's still slower than PostgreSQL on medium-end hardware. ;-)\n\nThis idea has been discussed numerous times on the HACKERS list, and is a \n(pretty much) closed issue. While Oracle and SQL Server use their own \nfilesystems, PostgreSQL will not because:\n\n1) It would interfere with our cross-platform compatibility. PostgreSQL runs \non something like 20 OSes.\n\n2) The filesystem projects out there are (mostly) well-staffed and are \nconstantly advancing using specialized technology and theory. 
There's no way \nthat the PostgreSQL team can do a better job in our \"spare time\".\n\n3) Development of our \"own\" filesystem would then require PostgreSQL to create \nand maintain a whole hardware compatibility library, and troubleshoot \nproblems on exotic hardware and wierd RAID configurations.\n\n4) A database FS also often causes side-effect problems; for example, one \ncannot move or copy a SQL Server partition without destroying it.\n\nOf course, that could all change if some corp with deep pockets steps in an \ndecides to create a \"postgresFS\" and funds and staffs the effort 100%. But \nit's unlikely to be a priority for the existing development team any time in \nthe forseeable future.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 12 Aug 2003 13:09:42 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystems WAS: Perfomance Tuning" }, { "msg_contents": "On Tue, 12 Aug 2003 13:09:42 -0700 Josh Berkus <[email protected]> wrote:\n> This idea has been discussed numerous times on the HACKERS list, and is\n> a \n> (pretty much) closed issue. While Oracle and SQL Server use their own \n> filesystems, PostgreSQL will not because:\n...\n> 2) The filesystem projects out there are (mostly) well-staffed and are \n> constantly advancing using specialized technology and theory. There's\n> no way \n> that the PostgreSQL team can do a better job in our \"spare time\".\n\ni consider this a fair answer, but i have a slightly different question to\nask, inspired by my discussions with a good friend who is a top notch\nInformix DBA.\n\nthere are advantages to being able to split the database across a slew of\ndisk drives. if we accept the notion of using the native OS filesystem on\neach, it would seem that being able to direct various tables and indices to\nspecific drives might be a valuble capability. i know that i could go into\n/var/lib/pgsql/data/base and fan the contents out, but this is unweildy and\nimpractical. has any consideration been given to providing a way to manage\nsuch a deployment?\n\nor is it the judgement of the hackers community that a monsterous raid-10\narray offers comparable performance?\n\ni forget how large the data store on my friend's current project is, but\ni'll check. knowing the size and transaction rate he's dealing with might\nput a finer point on this discussion.\n\nrichard\n--\nRichard Welty [email protected]\nAverill Park Networking 518-573-7592\n Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security\n\n\n", "msg_date": "Tue, 12 Aug 2003 16:31:19 -0400 (EDT)", "msg_from": "Richard Welty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystems WAS: Perfomance Tuning" }, { "msg_contents": "On Tue, Aug 12, 2003 at 04:31:19PM -0400, Richard Welty wrote:\n> impractical. has any consideration been given to providing a way to manage\n> such a deployment?\n\nPlenty. No-one's completed an implementation yet. \n\n> or is it the judgement of the hackers community that a monsterous raid-10\n> array offers comparable performance?\n\nIt's tough to say, but I _can_ tell you that, so far in my tests,\nI've never been able to prove an advantage in separating even the WAL\non a Sun A5200. 
That's not a result yet, of course, just a bit of\ngossip.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 12 Aug 2003 16:40:22 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystems WAS: Perfomance Tuning" }, { "msg_contents": "> specific drives might be a valuble capability. i know that i could go into\n> /var/lib/pgsql/data/base and fan the contents out, but this is unweildy and\n> impractical. has any consideration been given to providing a way to manage\n> such a deployment?\n\nThe ability to take various database objects and store them in different\nlocations, sometimes referred to as table spaces, will probably be done\nin the future. There was a substantial group not all that long ago that\nwas organizing to complete the implementation.\n\n> or is it the judgement of the hackers community that a monsterous raid-10\n> array offers comparable performance?\n\nOften performs well enough... But a raid-10 for data, a stripe for\nindexes, and a mirror for WAL will offer better performance :)", "msg_date": "Tue, 12 Aug 2003 16:40:40 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystems WAS: Perfomance Tuning" }, { "msg_contents": "Martha Stewart called it a Good Thing [email protected] (\"Gregory S. Williamson\")wrote:\n> FWIW, Informix can be run using a \"cooked\" (Unix) file for storing\n> data or it uses \"raw\" disk space and bypasses the ordinary (high\n> level) UNIX controllers and does its own reads/writes. About 10\n> times faster and safer. Of course, itmay have taken a lot of\n> programmer time to make that solid. But the performance gains are\n> significant.\n\nAre you _certain_ that's still true? Have you a metric that shows\nInformix being 10x faster on a modern system? That would be quite\nsurprising...\n\nIt may have been true on '80s style UFS implementations, but a couple\nof decades have passed, and pretty much any Unix system has new\nselections of filesystems that probably aren't so much slower.\n\nIt could conceivably be an interesting idea to implement a\nblock-oriented filesystem where the granularity of files was 8K (or\nsome such number :-)).\n\nOracle seems to have done something vaguely like this...\nhttp://otn.oracle.com/tech/linux/open_source.html\n\nBut long and short is that the guys implementing OSes have been\nputting a LOT of effort into making the potential performance gains of\nusing \"raw\" partitions less and less.\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://www.ntlug.org/~cbbrowne/sap.html\n(eq? 'truth 'beauty) ; to avoid unassigned-var error, since compiled code\n ; will pick up previous value to var set!-ed,\n ; the unassigned object.\n-- from BBN-CL's cl-parser.scm\n", "msg_date": "Tue, 12 Aug 2003 17:43:29 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "> there are advantages to being able to split the database across a slew of\n> disk drives. if we accept the notion of using the native OS filesystem on\n> each, it would seem that being able to direct various tables and indices\nto\n> specific drives might be a valuble capability. i know that i could go into\n> /var/lib/pgsql/data/base and fan the contents out, but this is unweildy\nand\n> impractical. 
has any consideration been given to providing a way to manage\n> such a deployment?\n\nWe've got a little bunch of us tinkering with a tablespace implementation.\nHowever, it's been staller for a while now.\n\nChris\n\n", "msg_date": "Wed, 13 Aug 2003 09:48:18 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystems WAS: Perfomance Tuning" }, { "msg_contents": "On Wed, 13 Aug 2003 09:48:18 +0800 Christopher Kings-Lynne <[email protected]> wrote:\n> We've got a little bunch of us tinkering with a tablespace\n> implementation.\n> However, it's been staller for a while now.\n\ninteresting. i'm involved in the very early stages of a startup that is\nlikely to do a prototype using Java and PostgreSQL.\n\ntablespace and replication are issues that would weigh heavily in a\ndecision to stick with PostgreSQL after the prototype.\n\nrichard\n--\nRichard Welty [email protected]\nAverill Park Networking 518-573-7592\n Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security\n\n\n", "msg_date": "Tue, 12 Aug 2003 21:52:52 -0400 (EDT)", "msg_from": "Richard Welty <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystems WAS: Perfomance Tuning" }, { "msg_contents": "\nI think Gavin Sherry is working on this. I am CC'ing him.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > there are advantages to being able to split the database across a slew of\n> > disk drives. if we accept the notion of using the native OS filesystem on\n> > each, it would seem that being able to direct various tables and indices\n> to\n> > specific drives might be a valuble capability. i know that i could go into\n> > /var/lib/pgsql/data/base and fan the contents out, but this is unweildy\n> and\n> > impractical. has any consideration been given to providing a way to manage\n> > such a deployment?\n> \n> We've got a little bunch of us tinkering with a tablespace implementation.\n> However, it's been staller for a while now.\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 12 Aug 2003 21:53:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystems WAS: Perfomance Tuning" }, { "msg_contents": "On Tue, 12 Aug 2003, Bruce Momjian wrote:\n\n> \n> I think Gavin Sherry is working on this. I am CC'ing him.\n> \n> ---------------------------------------------------------------------------\n\nYes I am working on this. I am about 50% of the way through the patch but\nhave been held up with other work. For those who are interested, it\nbasically allow:\n\n1) creation of different 'storage' locations. Tables and indexes can be\ncreated in different storage locations. Storage locations can also be\nassigned to schemas and databases. Tables and indexes will default to the\nschema storage location if STORAGE 'store name' is not provided to CREATE\n.... 
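\nTo make that concrete, here is a rough sketch of the sort of thing the\npatch is aiming at -- 'fastdisk' and 'slowdisk' are just made-up store\nnames here, and the exact grammar may well change before the patch is\ndone:\n\n    CREATE SCHEMA archive STORAGE 'slowdisk';\n    CREATE TABLE foo (a int) STORAGE 'fastdisk';\n    CREATE INDEX foo_a_idx ON foo (a) STORAGE 'fastdisk';\n\n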
This will cascade to the default database storage location if\nthe schema was not created with STORAGE 'store name'.\n\n2) the patch will allow different storage locations to have different\nrand_cost parameters passed to the planner.\n\n3) the patch *will not* address issues concerning quotas, resource\nmanagement, WAL/clog, temp or sort spaces.\n\nWill keep everyone posted if/when I finish.\n\nThanks,\n\nGavin\n\n", "msg_date": "Wed, 13 Aug 2003 12:07:47 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystems WAS: Perfomance Tuning" }, { "msg_contents": "On Tue, 12 Aug 2003, Christopher Browne wrote:\n\n> Are you _certain_ that's still true? Have you a metric that shows\n> Informix being 10x faster on a modern system? That would be quite\n> surprising...\n>\n\nWe were forced (for budget reason) to switch from raw disk to cooked files\non our informix db. We took a huge hit - about 5-6x slower. Granted part\nof that was because informix takes number of spindles, etc into account\nwhen generating query plans and the fact running UPDATE STATISTICS (think\nVacuum analyze) on the version we run locks the table exclusively. And it\nis unacceptable to have our \"main table\" unavailable for hours and hours\nwhile the update runs. (For the record: its a 8cpu sun e4500 running\nsol2.6. The raw disks were on a hitachi fibre array and the cooked files\nwere on a raid5 (scsi). Forget how many spindles in the raid.\nThere were 20 raw disks)\n\nInformix, etc. have spent a lot of time and money working on it.\nThey also have the advantage of having many paid fulltime\ndevelopers who are doing this for a job, not as a weekend hobby\n(Compared to the what? 2-3 full time PG developers).\n\nThe other advantage (which I hinted to above) with raw disks is being able\nto optimize queries to take advantage of it. Informix is multithreaded\nand it will spawn off multiple \"readers\" to do say, a seq scan (and merge\nthe results at the end).\n\nSo if you have a table across say, 3 disks and you need to do a seq scan\nit will spawn three readers to do the read. Result: nice and fast (Yes, It\nmay not always spawn the three readers, only when it thinks it will be a\ngood thing to do)\n\nI think for PG the effort would be much better spent on other features...\nlike replication and whatnot.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Wed, 13 Aug 2003 09:55:53 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Jeff <[email protected]> writes:\n> On Tue, 12 Aug 2003, Christopher Browne wrote:\n>> Are you _certain_ that's still true? Have you a metric that shows\n>> Informix being 10x faster on a modern system? That would be quite\n>> surprising...\n\n> We were forced (for budget reason) to switch from raw disk to cooked files\n> on our informix db. We took a huge hit - about 5-6x slower.\n> [snip]\n> The raw disks were on a hitachi fibre array and the cooked files\n> were on a raid5 (scsi). Forget how many spindles in the raid.\n> There were 20 raw disks)\n\nSeems like you can't know how much of the performance hit was due to the\nfilesystem change and how much to the hardware change. 
But I'd bet 20\ndisks on fibre array have way more net throughput than a single RAID\narray on scsi.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Aug 2003 10:37:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning " }, { "msg_contents": "On Wed, 2003-08-13 at 09:37, Tom Lane wrote:\n> Jeff <[email protected]> writes:\n> > On Tue, 12 Aug 2003, Christopher Browne wrote:\n> >> Are you _certain_ that's still true? Have you a metric that shows\n> >> Informix being 10x faster on a modern system? That would be quite\n> >> surprising...\n> \n> > We were forced (for budget reason) to switch from raw disk to cooked files\n> > on our informix db. We took a huge hit - about 5-6x slower.\n> > [snip]\n> > The raw disks were on a hitachi fibre array and the cooked files\n> > were on a raid5 (scsi). Forget how many spindles in the raid.\n> > There were 20 raw disks)\n> \n> Seems like you can't know how much of the performance hit was due to the\n> filesystem change and how much to the hardware change. But I'd bet 20\n> disks on fibre array have way more net throughput than a single RAID\n> array on scsi.\n\nI wouldn't be surprised either if the fiber array had more cache\nthan the SCSI controller.\n\nWas/is the Hitachi device a SAN?\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"Man, I'm pretty. Hoo Hah!\" |\n| Johnny Bravo |\n+---------------------------------------------------------------+\n\n\n", "msg_date": "13 Aug 2003 10:16:03 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Jeff,\n\n> Informix, etc. have spent a lot of time and money working on it.\n> They also have the advantage of having many paid fulltime\n> developers who are doing this for a job, not as a weekend hobby\n> (Compared to the what? 2-3 full time PG developers).\n\nI think 4-6 full-time, actually, plus about 200 part-time contributors. Which \nadds up to a bloody *lot* of code if you monitor pgsql-patches between \nversions. The only development advantage the commercials have over us is the \nability to engage in large projects (e.g. replication, raw filesystems, etc.) \nthat are difficult for a distributed network of people.\n\n> The other advantage (which I hinted to above) with raw disks is being able\n> to optimize queries to take advantage of it. Informix is multithreaded\n> and it will spawn off multiple \"readers\" to do say, a seq scan (and merge\n> the results at the end).\n\nI like this idea. Has it ever been discussed for PostgreSQL? Hmmm .... we'd \nneed to see some tests demonstrating that this approach was still a technical \nadvantage given the improvements in RAID and FS technology since Informix \nwas designed.\n\nAs I have said elsewhere, Informix is probably a poor database to emulate \nsince they are effectively an old dead-end fork of the Ingres/Postgres code, \nand have already been \"mined\" for most of the improvements they made.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 13 Aug 2003 08:46:55 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Josh Berkus wrote:\n> Jeff,\n> \n> > Informix, etc. 
have spent a lot of time and money working on it.\n> > They also have the advantage of having many paid fulltime\n> > developers who are doing this for a job, not as a weekend hobby\n> > (Compared to the what? 2-3 full time PG developers).\n> \n> I think 4-6 full-time, actually, plus about 200 part-time contributors. Which \n> adds up to a bloody *lot* of code if you monitor pgsql-patches between \n> versions. The only development advantage the commercials have over us is the \n> ability to engage in large projects (e.g. replication, raw filesystems, etc.) \n> that are difficult for a distributed network of people.\n\nI think Informix's track record for post-Informix 5.0 releases is poor:\n\n\t6.0 aborted release, pretty much withdrawn\n\t7.0 took 1-2 years to stabalize\n\t8.0 where was that?\n\t9.0 confused customers\n\nHow much does Informix improve in 6 months? In 2 years? How long does\nit take to get a bug fixed?\n\nAt this point, only the largest corporations can keep up with our\nopen-source development model. The other database vendors have already\nclosed, as did Informix when purchased by IBM.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 13 Aug 2003 13:51:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "[email protected] (Jeff) writes:\n> On Tue, 12 Aug 2003, Christopher Browne wrote:\n>> Are you _certain_ that's still true? Have you a metric that shows\n>> Informix being 10x faster on a modern system? That would be quite\n>> surprising...\n\n> We were forced (for budget reason) to switch from raw disk to cooked\n> files on our informix db. We took a huge hit - about 5-6x slower.\n> Granted part of that was because informix takes number of spindles,\n> etc into account when generating query plans and the fact running\n> UPDATE STATISTICS (think Vacuum analyze) on the version we run locks\n> the table exclusively. And it is unacceptable to have our \"main\n> table\" unavailable for hours and hours while the update runs. (For\n> the record: its a 8cpu sun e4500 running sol2.6. The raw disks were\n> on a hitachi fibre array and the cooked files were on a raid5\n> (scsi). Forget how many spindles in the raid. There were 20 raw\n> disks)\n\nSounds like what you were forced to do was to do TWO things:\n\n 1. Switch from raw disk to cooked files, and\n 2. Switch from a fibre array to a RAID array\n\nYou're attributing the 5-6x slowdown to 1., when it seems likely that\n2. is a far more significant multiple.\n\nWhat with there being TWO big changes that took place that might be\nexpected to affect performance, it seems odd to attribute a\nfactor-of-many change to just one aspect of that.\n\n> Informix, etc. have spent a lot of time and money working on it.\n> They also have the advantage of having many paid fulltime developers\n> who are doing this for a job, not as a weekend hobby (Compared to\n> the what? 2-3 full time PG developers).\n\n<flame on>\nSure, and I'm sure the PG developers hardly know _anything_ about\nimplementing databases, either.\n<flame off>\n\nCertainly IBM (who bought Informix) has lots of time and money to\ndevote to enhancements. But I think you underestimate the time,\nskill, and effort involved with PG work. 
It's quite typical for\npeople to imagine free software projects to basically be free-wheeling\nefforts mostly involving guys that still have acne that haven't much\nknowledge of the area. Reality, for the projects that are of some\nimportance, is staggeringly different from that. The number of people\nwith graduate degrees tends to surprise everyone.\n\nThe developers may not have time to add frivolous things to the\nsystem, like building sophisticated Java-based GUI installers, XML\nprocessors, or such. That does, however, improve their focus, and so\nPostgreSQL does not suffer from the way Oracle has fifty different\nbundlings most of which nobody understands.\n\n> The other advantage (which I hinted to above) with raw disks is\n> being able to optimize queries to take advantage of it. Informix is\n> multithreaded and it will spawn off multiple \"readers\" to do say, a\n> seq scan (and merge the results at the end).\n>\n> So if you have a table across say, 3 disks and you need to do a seq\n> scan it will spawn three readers to do the read. Result: nice and\n> fast (Yes, It may not always spawn the three readers, only when it\n> thinks it will be a good thing to do)\n\nAndrew Sullivan's fairly regular response is that he tried (albeit not\nVASTLY extensively) to distinguish between disks when working with\nfibre arrays, and he couldn't measure an improvement in shifting WAL\n(the OBVIOUS thing to shift) to separate disks.\n\nThere's a lot of guesswork as to precisely why that result falls out.\n\nOne of the better guesses seems to be that if you've got enough\nbattery-backed memory cache on the array, that lets updates get pushed\nto cache so fast that it doesn't too much matter which disk they hit.\n\nIf you've got enough spindles, and build much of the array in a\nstriped manner, you'll get data splitting across disks without having\nto specify any \"table options\" to force it to happen.\n\nYou raise a good point vis-a-vis the thought of spawning multiple\nreaders; that could conceivably be a useful approach to improve\nperformance for very large queries. If you could \"stripe\" the tables\nin some manner so they could be doled out to multiple worker\nprocesses, that could indeed provide some benefits. If there are\nthree workers, they might round-robin to grab successive pages from\nthe table to do their work, and then end with a merge step.\n\nThat's probably a 7.7 change, mind you :-), but once other simpler\napproaches to making the engine faster have been exhausted, that's the\nsort of thing to look into next.\n\n> I think for PG the effort would be much better spent on other\n> features... like replication and whatnot.\n\nAt this point, sure.\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in String.concat \"@\" [name;tld];;\nhttp://www3.sympatico.ca/cbbrowne/lisp.html\n\"Using Java as a general purpose application development language is\nlike going big game hunting armed with Nerf weapons.\" \n-- Author Unknown\n", "msg_date": "Wed, 13 Aug 2003 14:54:38 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "[email protected] (Josh Berkus) writes:\n>> The other advantage (which I hinted to above) with raw disks is being able\n>> to optimize queries to take advantage of it. Informix is multithreaded\n>> and it will spawn off multiple \"readers\" to do say, a seq scan (and merge\n>> the results at the end).\n>\n> I like this idea. Has it ever been discussed for PostgreSQL? Hmmm\n> .... 
we'd need to see some tests demonstrating that this approach\n> was still a technical advantage given the improvements in RAID and\n> FS technology since Informix was designed.\n\nAh, but this approach isn't so much an I/O optimization as it is a CPU\noptimization.\n\nIf you have some sort of join against a big table, and do a lot of\nprocessing on each component, there might be CPU benefits from the\nsplit:\n\ncreate table customers(\n id customer_id, name character varying, other fields\n); --- And we're a phone company with 8 millions of them...\n\n\ncreate table customer_status (\n customer_id customer_id,\n status status_code\n);\n\ncreate table customer_address (\n customer_id customer_id,\n address_info...\n);\n\nAnd then are doing:\n\n select c.id, sum(status), address_label(c.id), balance(c.id) from\n customers c, customer_status cs;\n\nWe know there's going to be a SEQ SCAN against customers, because\nthat's the big table.\n\nIf I wanted to finish the query as fast as possible, as things stand\nnow, and had 4 CPUs, I would run 4 concurrent queries, for 4 ranges of\ncustomers.\n\nThe Really Cool approach would be for PostgreSQL to dole out customers\nacross four processors, perhaps throwing a page at a time at each CPU,\nwhere each process would quasi-independently build up their respective\nresult sets.\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Wed, 13 Aug 2003 15:07:04 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Wed, 2003-08-13 at 10:46, Josh Berkus wrote:\n> Jeff,\n> \n[snip]\n> > The other advantage (which I hinted to above) with raw disks is being able\n> > to optimize queries to take advantage of it. Informix is multithreaded\n> > and it will spawn off multiple \"readers\" to do say, a seq scan (and merge\n> > the results at the end).\n> \n> I like this idea. Has it ever been discussed for PostgreSQL? Hmmm .... we'd \n> need to see some tests demonstrating that this approach was still a technical \n> advantage given the improvements in RAID and FS technology since Informix \n> was designed.\n\nWouldn't PG 1st need horizontal partitioning, and as a precursor to\nthat, \"tablespaces\"?\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA |\n| |\n| \"Man, I'm pretty. Hoo Hah!\" |\n| Johnny Bravo |\n+---------------------------------------------------------------+\n\n\n", "msg_date": "13 Aug 2003 16:31:16 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "> Andrew Sullivan's fairly regular response is that he tried (albeit\n> not VASTLY extensively) to distinguish between disks when working\n> with fibre arrays, and he couldn't measure an improvement in\n> shifting WAL (the OBVIOUS thing to shift) to separate disks.\n\nReal quick... the faster the drives, the less important it is to move\nWAL onto a different drive. The slower the drives, the more important\nthis is... 
which is why this isn't as necessary (if at all) for large\nproduction environments.\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Wed, 13 Aug 2003 16:23:30 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Wed, 13 Aug 2003, Christopher Browne wrote:\n\n> Sounds like what you were forced to do was to do TWO things:\n>\n> 1. Switch from raw disk to cooked files, and\n> 2. Switch from a fibre array to a RAID array\n>\n> You're attributing the 5-6x slowdown to 1., when it seems likely that\n> 2. is a far more significant multiple.\n>\n\nTrue.\n\n> <flame on>\n> Sure, and I'm sure the PG developers hardly know _anything_ about\n> implementing databases, either.\n> <flame off>\n\nOh I know they are good at it. I deal a lot with informix and PG and if I\ncould I'd bring Tom, Bruce, Joe, etc. out for a beer as I'm *constantly*\nfighting informix and our PG box just sits there merrily churning away.\n(and god bless \"explain analyze\" - informix's version is basically boolean\n- \"I will use an index\" \"I will use a seq scan\". Doesn't even tell you\nwhat index!. )\n\n> You raise a good point vis-a-vis the thought of spawning multiple\n> readers; that could conceivably be a useful approach to improve\n> performance for very large queries. If you could \"stripe\" the tables\n> in some manner so they could be doled out to multiple worker\n> processes, that could indeed provide some benefits. If there are\n> three workers, they might round-robin to grab successive pages from\n> the table to do their work, and then end with a merge step.\n\nThe way informix does this is two fold:\n1. it handles the raw disks, it knows where table data is\n2. it can \"partition\" tables in a number of ways: round robin,\nconcatination or expression (Expression is nifty, allows you to use a\nbasic \"where\" clause to decide where to put data. ie\ncreate table foo (\na int,\nb int,\nc int ) fragment on c > 0 and c < 100 in dbspace1, c > 100 c < 200 in\ndbspace 2;\n\nthat kind of thing.\nand yeah, I would not expect to see it for a long time.. Without threading\nit would be rather difficult to implement.. but who knows what the future\nwill bring us.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Thu, 14 Aug 2003 08:30:02 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Wed, 13 Aug 2003, Bruce Momjian wrote:\n\n> I think Informix's track record for post-Informix 5.0 releases is poor:\n>\n> \t6.0 aborted release, pretty much withdrawn\n> \t7.0 took 1-2 years to stabalize\n> \t8.0 where was that?\n\n8.0 never occured. It went 7.3 -> 9.0\n\n> \t9.0 confused customers\n\n9.0 had some good stuff. 9.4 *FINALLY* removed a lot of limitations (2GB\nchunks, etc). (9.4 came out a few years after 7)\n\n> How much does Informix improve in 6 months? In 2 years? 
How long does\n> it take to get a bug fixed?\n\nYou make me laugh :)\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Thu, 14 Aug 2003 08:33:12 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "On Wed, 13 Aug 2003, Josh Berkus wrote:\n\n> As I have said elsewhere, Informix is probably a poor database to emulate\n> since they are effectively an old dead-end fork of the Ingres/Postgres code,\n> and have already been \"mined\" for most of the improvements they made.\n>\nWith informix 7.0 they rewrote the entire thing from the ground up to\nremove a bunch of limitations and build a multithreaded engine.\nso it isn't so much an old fork anymore.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Thu, 14 Aug 2003 08:35:17 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "[email protected] (Jeff) writes:\n> On Wed, 13 Aug 2003, Josh Berkus wrote:\n>> As I have said elsewhere, Informix is probably a poor database to emulate\n>> since they are effectively an old dead-end fork of the Ingres/Postgres code,\n>> and have already been \"mined\" for most of the improvements they made.\n>>\n> With informix 7.0 they rewrote the entire thing from the ground up\n> to remove a bunch of limitations and build a multithreaded engine.\n> so it isn't so much an old fork anymore.\n\nNo, I think you misunderstand the intent...\n\nThe pre-7.0 version was based on Informix's B-Tree libraries, and the\nfile structuring actually bears a marked resemblance to that of MySQL\n(that's an observation; neither forcibly a good or a bad thing), where\nthere's a data file for the table, and then a bunch of index files,\nnamed somewhat after the table.\n\nIn the 7.0-and-after era, they added in the \"old dead-end fork of the\nIngres/Postgres code\" to get the \"Universal Data Server.\"\n\n[This is diverging somewhat from \"performance;\" let's try to resist\nextending discussion...]\n-- \n(reverse (concatenate 'string \"ofni.smrytrebil\" \"@\" \"enworbbc\"))\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Thu, 14 Aug 2003 10:49:53 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "[email protected] (Jeff) writes:\n> On Wed, 13 Aug 2003, Christopher Browne wrote:\n>> You raise a good point vis-a-vis the thought of spawning multiple\n>> readers; that could conceivably be a useful approach to improve\n>> performance for very large queries. If you could \"stripe\" the tables\n>> in some manner so they could be doled out to multiple worker\n>> processes, that could indeed provide some benefits. If there are\n>> three workers, they might round-robin to grab successive pages from\n>> the table to do their work, and then end with a merge step.\n>\n> The way informix does this is two fold:\n> 1. it handles the raw disks, it knows where table data is\n\nThe thing is, this isn't something where there is guaranteed to be a\npermanent _massive_ difference in performance between \"raw\" and\n\"cooked.\"\n\nTraditionally, \"handling raw disks\" was a big deal because the DBMS\ncould then decide where to stick the data, possibly down to specifying\nwhat sector of what track of what spindle. 
There are four reasons for\nthis to not be such a big deal anymore:\n\n 1. Disk drives lie to you. They don't necessarily provide\n information that even _resembles_ their true geometry. So the\n best you can get is to be sure that \"this block was on drive 4,\n that block was on drive 7.\"\n\n 2. On a big system, you're more than likely using hardware RAID,\n where there's further cacheing, and where the disk array may\n not be honest to the DBMS about where the drives actually are.\n\n 3. The other traditional benefit to \"raw\" disks was that they\n allowed the DBMS to be _certain_ that data was committed in\n some particular order. But 1. and 2. provide regrettable\n opportunities for the DBMS' belief to be forlorn. (With the\n degree to which disk drives lie about things, I have to be a\n bit skeptical of some of the BSD FFS claims which at least\n appear to assume that they _do_ control the disk drive...\n This is NOT reason, by the way, to consider FFS to be, in\n any way, \"bad,\" but rather just that some of the guarantees\n may get stolen by your disk drive...)\n\n 4. Today's filesystems _aren't_ Grandpa's UFS. We've got better\n stuff than we had back in the Ultrix days.\n\n> 2. it can \"partition\" tables in a number of ways: round robin,\n> concatination or expression (Expression is nifty, allows you to use a\n> basic \"where\" clause to decide where to put data. ie\n> create table foo (\n> a int,\n> b int,\n> c int ) fragment on c > 0 and c < 100 in dbspace1, c > 100 c < 200 in\n> dbspace 2;\n>\n> that kind of thing.\n\nI remember thinking this was rather neat when I first saw it.\n\nThe \"fragment on\" part was most interesting at the time, when everyone\nelse (including filesystem makers) were decrying fragmentation as the\nultimate evil. In effect, Informix was saying that they would\n_improve_ performance through fragmentation... Sort of like the rash\nclaim that performance can be improved _without_ resorting to a\nthreading-based model...\n\n> and yeah, I would not expect to see it for a long time.. Without\n> threading it would be rather difficult to implement.. but who knows\n> what the future will bring us.\n\nThe typical assumption is that threading is a magical talisman that\nwill bring all sorts of benefits. There have been enough cases where\nPostgreSQL has demonstrated stunning improvements _without_ threading\nthat I am very skeptical that it is necessarily necessary.\n-- \noutput = reverse(\"gro.gultn\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/sap.html\nRules of the Evil Overlord #204. \"I will hire an entire squad of blind\nguards. Not only is this in keeping with my status as an equal\nopportunity employer, but it will come in handy when the hero becomes\ninvisible or douses my only light source.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Thu, 14 Aug 2003 10:57:33 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Jeff wrote:\n> > You raise a good point vis-a-vis the thought of spawning multiple\n> > readers; that could conceivably be a useful approach to improve\n> > performance for very large queries. If you could \"stripe\" the tables\n> > in some manner so they could be doled out to multiple worker\n> > processes, that could indeed provide some benefits. 
If there are\n> > three workers, they might round-robin to grab successive pages from\n> > the table to do their work, and then end with a merge step.\n> \n> The way informix does this is two fold:\n> 1. it handles the raw disks, it knows where table data is\n> 2. it can \"partition\" tables in a number of ways: round robin,\n> concatination or expression (Expression is nifty, allows you to use a\n> basic \"where\" clause to decide where to put data. ie\n> create table foo (\n> a int,\n> b int,\n> c int ) fragment on c > 0 and c < 100 in dbspace1, c > 100 c < 200 in\n> dbspace 2;\n> \n> that kind of thing.\n> and yeah, I would not expect to see it for a long time.. Without threading\n> it would be rather difficult to implement.. but who knows what the future\n> will bring us.\n\nThe big question is whether the added complexity is worth it. I know\nInformix 5 was faster than Informix 7 on single CPU machines for quite a\nwhile. It might still be true.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Aug 2003 17:47:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" }, { "msg_contents": "Christopher Browne wrote:\n> [email protected] (Jeff) writes:\n> > On Wed, 13 Aug 2003, Josh Berkus wrote:\n> >> As I have said elsewhere, Informix is probably a poor database to emulate\n> >> since they are effectively an old dead-end fork of the Ingres/Postgres code,\n> >> and have already been \"mined\" for most of the improvements they made.\n> >>\n> > With informix 7.0 they rewrote the entire thing from the ground up\n> > to remove a bunch of limitations and build a multithreaded engine.\n> > so it isn't so much an old fork anymore.\n> \n> No, I think you misunderstand the intent...\n> \n> The pre-7.0 version was based on Informix's B-Tree libraries, and the\n> file structuring actually bears a marked resemblance to that of MySQL\n> (that's an observation; neither forcibly a good or a bad thing), where\n> there's a data file for the table, and then a bunch of index files,\n> named somewhat after the table.\n> \n> In the 7.0-and-after era, they added in the \"old dead-end fork of the\n> Ingres/Postgres code\" to get the \"Universal Data Server.\"\n\nI think 9.0 was the the Ingres/Postgres code, not 7.X.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Aug 2003 17:51:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perfomance Tuning" } ]
[ { "msg_contents": "I want to know, how can I improve the performance of postgres, I have a java class thar inserts register every 30 min but is very slow. \nThe process of postgres consume the 78% of CPU.\n\nI have in /etc/system\n\nset shmsys:shminfo_shmmax=0x50000000\nset shmsys:shminfo_shmmni=0x100\nset shmsys:shminfo_shmseg=0x10\n\n\n\nThanks\n\nIngrid\n\n\n\n\n\n\n\nI want to know, how can I improve the performance of postgres, \nI have a java class thar inserts register every 30 min but is very slow. \n\nThe process of postgres consume the 78% of CPU.\n \nI have in /etc/system\n \n\nset \nshmsys:shminfo_shmmax=0x50000000set shmsys:shminfo_shmmni=0x100set \nshmsys:shminfo_shmseg=0x10\n \nThanks\nIngrid", "msg_date": "Wed, 13 Aug 2003 09:03:31 -0500", "msg_from": "ingrid martinez <[email protected]>", "msg_from_op": true, "msg_subject": "How can I Improve performance in Solaris?" }, { "msg_contents": "On Wed, Aug 13, 2003 at 09:03:31AM -0500, ingrid martinez wrote:\n> I want to know, how can I improve the performance of postgres, I\n> have a java class thar inserts register every 30 min but is very\n> slow.\n\nWhat does the query do? How is postgres configured?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 13 Aug 2003 10:32:58 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I Improve performance in Solaris?" }, { "msg_contents": "The query that execute is only inserts, I use a batch of 300 and then do\ncommit.\n\ninsert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)\n\nand\n\npostgresql.conf looks like this\n\n\n#\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form\n#\n# name = value\n#\n# (The `=' is optional.) White space is collapsed, comments are\n# introduced by `#' anywhere on a line. The complete list of option\n# names and allowed values can be found in the PostgreSQL\n# documentation. The commented-out settings shown in this file\n# represent the default values.\n\n# Any option can also be given as a command line switch to the\n# postmaster, e.g., 'postmaster -c log_connections=on'. 
Some options\n# can be changed at run-time with the 'SET' SQL command.\n\n\n#========================================================================\n\n\n#\n# Connection Parameters\n#\n#tcpip_socket = false\n#ssl = false\n\n#max_connections = 32\n\n#port = 5432\n#hostname_lookup = false\n#show_source_port = false\n\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\n\n#virtual_host = ''\n\n#krb_server_keyfile = ''\n\n\n#\n# Shared Memory Size\n#\n#shared_buffers = 64 # 2*max_connections, min 16\n#max_fsm_relations = 100 # min 10, fsm is free space map\n#max_fsm_pages = 10000 # min 1000, fsm is free space map\n#max_locks_per_transaction = 64 # min 10\n#wal_buffers = 8 # min 4\n\n#\n# Non-shared Memory Sizes\n#\n#sort_mem = 512 # min 32\n#vacuum_mem = 8192 # min 1024\n\n\n#\n# Write-ahead log (WAL)\n#\n#wal_files = 0 # range 0-64\n#wal_sync_method = fsync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or open_datasync\n#wal_debug = 0 # range 0-16\n#commit_delay = 0 # range 0-100000\n#commit_siblings = 5 # range 1-1000\n#checkpoint_segments = 3 # in logfile segments (16MB each), min 1\n#checkpoint_timeout = 300 # in seconds, range 30-3600\n#fsync = true\n\n\n#\n# Optimizer Parameters\n#\n#enable_seqscan = true\n#enable_indexscan = true\n#enable_tidscan = true\n#enable_sort = true\n#enable_nestloop = true\n#enable_mergejoin = true\n#enable_hashjoin = true\n\n#ksqo = false\n\n#effective_cache_size = 1000 # default in 8k pages\n#random_page_cost = 4\n#cpu_tuple_cost = 0.01\n#cpu_index_tuple_cost = 0.001\n#cpu_operator_cost = 0.0025\n\n\n#\n# GEQO Optimizer Parameters\n#\n#geqo = true\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n#geqo_threshold = 11\n#geqo_pool_size = 0 # default based on #tables in query, range\n128-1024\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_random_seed = -1 # auto-compute seed\n\n\n#\n# Debug display\n#\n#silent_mode = false\n\n#log_connections = false\n#log_timestamp = false\n#log_pid = false\n\n#debug_level = 0 # range 0-16\n\n#debug_print_query = false\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n\n# requires USE_ASSERT_CHECKING\n#debug_assertions = true\n\n\n#\n# Syslog\n#\n# requires ENABLE_SYSLOG\n#syslog = 0 # range 0-2\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n#\n# Statistics\n#\n#show_parser_stats = false\n#show_planner_stats = false\n#show_executor_stats = false\n#show_query_stats = false\n\n# requires BTREE_BUILD_STATS\n#show_btree_build_stats = false\n\n\n#\n# Access statistics collection\n#\n#stats_start_collector = true\n#stats_reset_on_server_start = true\n#stats_command_string = false\n#stats_row_level = false\n#stats_block_level = false\n\n\n#\n# Lock Tracing\n#\n#trace_notify = false\n\n# requires LOCK_DEBUG\n#trace_locks = false\n#trace_userlocks = false\n#trace_lwlocks = false\n#debug_deadlocks = false\n#trace_lock_oidmin = 16384\n#trace_lock_table = 0\n\n\n#\n# Misc\n#\n#dynamic_library_path = '$libdir'\n#australian_timezones = false\n#authentication_timeout = 60 # min 1, max 600\n#deadlock_timeout = 1000\n#default_transaction_isolation = 'read committed'\n#max_expr_depth = 10000 # min 10\n#max_files_per_process = 1000 # min 25\n#password_encryption = false\n#sql_inheritance = true\n#transform_null_equals = false\n\n\n\n\n\n----- Original Message ----- \nFrom: \"Andrew Sullivan\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, August 13, 2003 9:32 AM\nSubject: Re: [PERFORM] How 
can I Improve performance in Solaris?\n\n\n> On Wed, Aug 13, 2003 at 09:03:31AM -0500, ingrid martinez wrote:\n> > I want to know, how can I improve the performance of postgres, I\n> > have a java class thar inserts register every 30 min but is very\n> > slow.\n>\n> What does the query do? How is postgres configured?\n>\n> A\n>\n> -- \n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <[email protected]> M2P 2A8\n> +1 416 646 3304 x110\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Wed, 13 Aug 2003 10:17:45 -0500", "msg_from": "ingrid martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I Improve performance in Solaris?" }, { "msg_contents": "On Wed, 2003-08-13 at 11:17, ingrid martinez wrote: \n> The query that execute is only inserts, I use a batch of 300 and then do\n> commit.\n> \n> insert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)\n\nAny foreign keys on this table? Triggers or Rules?\n\nWhat kind of hardware do you have? Anything else running on it?\n\nCould you provide the header information from top?\n\n\nOff the cuff, modify your postgresql.conf for the below and restart\nPostgreSQL.\n\nshared_buffers = 1000 # 2*max_connections, min 16\neffective_cache_size = 4000 # default in 8k pages", "msg_date": "Wed, 13 Aug 2003 11:49:51 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I Improve performance in Solaris?" }, { "msg_contents": "On Wed, Aug 13, 2003 at 10:17:45AM -0500, ingrid martinez wrote:\n> The query that execute is only inserts, I use a batch of 300 and then do\n> commit.\n> \n> insert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)\n\nAre there any foreign keys, &c?\n\n> \n> and\n> \n> postgresql.conf looks like this\n\n[ . . .]\n\nThe configuration is the default. You'll certainly want to increase\nthe shared memory and fiddle with some of the other usual pieces. \nThere is some discussion of the config file at\n<http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html>. \nUnless the INSERTs are causing SELECTs, though, I can't see what\nexactly might be causing you so much difficulty.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 13 Aug 2003 12:17:01 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I Improve performance in Solaris?" 
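\nAs a very rough starting point (ballpark numbers only -- the right values\ndepend on how much RAM the box has, so treat this as a sketch rather than\na recommendation), something like this in postgresql.conf:\n\n    shared_buffers = 1000          # up from the default of 64\n    effective_cache_size = 4000    # in 8k pages, default 1000\n    sort_mem = 4096                # kilobytes per sort, default 512\n\n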
}, { "msg_contents": "Floes table looks like this\n\n Table \"flows\"\n Column | Type | Modifiers\n----------------------+--------------------------+-----------\n flidload | bigint | not null\n firsttime | bigint |\n fldestpeeraddress | character varying(30) |\n fldesttransaddress | bigint |\n fldesttranstype | smallint |\n fldfromoctets | bigint |\n fldscodepoint | smallint |\n fldtooctets | bigint |\n flfrompdus | bigint |\n flid | text |\n flidrule | bigint |\n flsourcepeeraddress | character varying(30) |\n flsourcetransaddress | bigint |\n flsourcetranstype | smallint |\n fltime | timestamp with time zone |\n fltopdus | bigint |\n lasttime | bigint |\n sourceinterface | smallint |\n destinterface | smallint |\n sourceasn | smallint |\n destasn | smallint |\nPrimary key: flows_pkey\n\n\ninsert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)\n\n\n\n\npostgresql.conf looks like this\n\n\n#\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form\n#\n# name = value\n#\n# (The `=' is optional.) White space is collapsed, comments are\n# introduced by `#' anywhere on a line. The complete list of option\n# names and allowed values can be found in the PostgreSQL\n# documentation. The commented-out settings shown in this file\n# represent the default values.\n\n# Any option can also be given as a command line switch to the\n# postmaster, e.g., 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n\n\n#========================================================================\n\n\n#\n# Connection Parameters\n#\n#tcpip_socket = false\n#ssl = false\n\n#max_connections = 32\n\n#port = 5432\n#hostname_lookup = false\n#show_source_port = false\n\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\n\n#virtual_host = ''\n\n#krb_server_keyfile = ''\n\n\n#\n# Shared Memory Size\n#\n#shared_buffers = 64 # 2*max_connections, min 16\n#max_fsm_relations = 100 # min 10, fsm is free space map\n#max_fsm_pages = 10000 # min 1000, fsm is free space map\n#max_locks_per_transaction = 64 # min 10\n#wal_buffers = 8 # min 4\n\n#\n# Non-shared Memory Sizes\n#\n#sort_mem = 512 # min 32\n#vacuum_mem = 8192 # min 1024\n\n\n#\n# Write-ahead log (WAL)\n#\n#wal_files = 0 # range 0-64\n#wal_sync_method = fsync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or open_datasync\n#wal_debug = 0 # range 0-16\n#commit_delay = 0 # range 0-100000\n#commit_siblings = 5 # range 1-1000\n#checkpoint_segments = 3 # in logfile segments (16MB each), min 1\n#checkpoint_timeout = 300 # in seconds, range 30-3600\n#fsync = true\n\n\n#\n# Optimizer Parameters\n#\n#enable_seqscan = true\n#enable_indexscan = true\n#enable_tidscan = true\n#enable_sort = true\n#enable_nestloop = true\n#enable_mergejoin = true\n#enable_hashjoin = true\n\n#ksqo = false\n\n#effective_cache_size = 1000 # default in 8k pages\n#random_page_cost = 4\n#cpu_tuple_cost = 0.01\n#cpu_index_tuple_cost = 0.001\n#cpu_operator_cost = 0.0025\n\n\n#\n# GEQO Optimizer Parameters\n#\n#geqo = true\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n#geqo_threshold = 11\n#geqo_pool_size = 0 # default based on #tables in query, range\n128-1024\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_random_seed = -1 # auto-compute seed\n\n\n#\n# Debug display\n#\n#silent_mode = false\n\n#log_connections = false\n#log_timestamp = false\n#log_pid = false\n\n#debug_level = 0 # range 0-16\n\n#debug_print_query = 
false\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n\n# requires USE_ASSERT_CHECKING\n#debug_assertions = true\n\n\n#\n# Syslog\n#\n# requires ENABLE_SYSLOG\n#syslog = 0 # range 0-2\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n#\n# Statistics\n#\n#show_parser_stats = false\n#show_planner_stats = false\n#show_executor_stats = false\n#show_query_stats = false\n\n# requires BTREE_BUILD_STATS\n#show_btree_build_stats = false\n\n\n#\n# Access statistics collection\n#\n#stats_start_collector = true\n#stats_reset_on_server_start = true\n#stats_command_string = false\n#stats_row_level = false\n#stats_block_level = false\n\n\n#\n# Lock Tracing\n#\n#trace_notify = false\n\n# requires LOCK_DEBUG\n#trace_locks = false\n#trace_userlocks = false\n#trace_lwlocks = false\n#debug_deadlocks = false\n#trace_lock_oidmin = 16384\n#trace_lock_table = 0\n\n\n#\n# Misc\n#\n#dynamic_library_path = '$libdir'\n#australian_timezones = false\n#authentication_timeout = 60 # min 1, max 600\n#deadlock_timeout = 1000\n#default_transaction_isolation = 'read committed'\n#max_expr_depth = 10000 # min 10\n#max_files_per_process = 1000 # min 25\n#password_encryption = false\n#sql_inheritance = true\n#transform_null_equals = false\n\n----- Original Message ----- \nFrom: \"Andrew Sullivan\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, August 13, 2003 11:17 AM\nSubject: Re: [PERFORM] How can I Improve performance in Solaris?\n\n\n> On Wed, Aug 13, 2003 at 10:17:45AM -0500, ingrid martinez wrote:\n> > The query that execute is only inserts, I use a batch of 300 and then do\n> > commit.\n> >\n> > insert into FLOWS values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)\n>\n> Are there any foreign keys, &c?\n>\n> >\n> > and\n> >\n> > postgresql.conf looks like this\n>\n> [ . . .]\n>\n> The configuration is the default. You'll certainly want to increase\n> the shared memory and fiddle with some of the other usual pieces.\n> There is some discussion of the config file at\n>\n<http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html>.\n> Unless the INSERTs are causing SELECTs, though, I can't see what\n> exactly might be causing you so much difficulty.\n>\n> A\n>\n> -- \n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <[email protected]> M2P 2A8\n> +1 416 646 3304 x110\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Wed, 13 Aug 2003 11:20:39 -0500", "msg_from": "ingrid martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I Improve performance in Solaris?" 
}, { "msg_contents": "On Wed, 13 Aug 2003, ingrid martinez wrote:\n\n> Floes table looks like this\n> \n> Table \"flows\"\n> Column | Type | Modifiers\n> ----------------------+--------------------------+-----------\n> flidload | bigint | not null\n> firsttime | bigint |\n> fldestpeeraddress | character varying(30) |\n> fldesttransaddress | bigint |\n> fldesttranstype | smallint |\n> fldfromoctets | bigint |\n> fldscodepoint | smallint |\n> fldtooctets | bigint |\n> flfrompdus | bigint |\n> flid | text |\n> flidrule | bigint |\n> flsourcepeeraddress | character varying(30) |\n> flsourcetransaddress | bigint |\n> flsourcetranstype | smallint |\n> fltime | timestamp with time zone |\n> fltopdus | bigint |\n> lasttime | bigint |\n> sourceinterface | smallint |\n> destinterface | smallint |\n> sourceasn | smallint |\n> destasn | smallint |\n> Primary key: flows_pkey\n\nWhich columns are in the pkey?\n\n", "msg_date": "Wed, 13 Aug 2003 10:47:36 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I Improve performance in Solaris?" }, { "msg_contents": "the primary key is flidload\n\n\n----- Original Message ----- \nFrom: \"scott.marlowe\" <[email protected]>\nTo: \"ingrid martinez\" <[email protected]>\nCc: \"Andrew Sullivan\" <[email protected]>;\n<[email protected]>\nSent: Wednesday, August 13, 2003 11:47 AM\nSubject: Re: [PERFORM] How can I Improve performance in Solaris?\n\n\n> On Wed, 13 Aug 2003, ingrid martinez wrote:\n>\n> > Floes table looks like this\n> >\n> > Table \"flows\"\n> > Column | Type | Modifiers\n> > ----------------------+--------------------------+-----------\n> > flidload | bigint | not null\n> > firsttime | bigint |\n> > fldestpeeraddress | character varying(30) |\n> > fldesttransaddress | bigint |\n> > fldesttranstype | smallint |\n> > fldfromoctets | bigint |\n> > fldscodepoint | smallint |\n> > fldtooctets | bigint |\n> > flfrompdus | bigint |\n> > flid | text |\n> > flidrule | bigint |\n> > flsourcepeeraddress | character varying(30) |\n> > flsourcetransaddress | bigint |\n> > flsourcetranstype | smallint |\n> > fltime | timestamp with time zone |\n> > fltopdus | bigint |\n> > lasttime | bigint |\n> > sourceinterface | smallint |\n> > destinterface | smallint |\n> > sourceasn | smallint |\n> > destasn | smallint |\n> > Primary key: flows_pkey\n>\n> Which columns are in the pkey?\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Wed, 13 Aug 2003 11:55:00 -0500", "msg_from": "ingrid martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I Improve performance in Solaris?" }, { "msg_contents": "More than likely you are suffering from an affliction known as type \nmismatch. This is listed as tip 9 here on the performance list (funny, it \nwas sent at the bottom of your reply :-)\n\nWhat happens is that when you do:\n\nselect * from some_table where id=123;\n\nwhere id is a bigint the query planner assumes you must want 123 \ncast to int4, which doesn't match int8 (aka bigint) and uses a sequential \nscan to access that row. I.e. 
it reads the whole table in.\n\nYou can force the planner to do the right thing here in a couple of ways:\n\nselect * from some_table where id=123::bigint;\n\n-- OR --\n\nselect * from some_table where id='123';\n\nOn Wed, 13 Aug 2003, ingrid martinez wrote:\n\n> the primary key is flidload\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n\n\n", "msg_date": "Wed, 13 Aug 2003 13:00:23 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I Improve performance in Solaris?" } ]
[ { "msg_contents": "\n\tHello,\n\n\tI recently switched to 7.4 beta 1, one query that used to be\ncorectly executed uder 7.3.3 albeit slowly now abnormaly ends when the\nbackend can't get more memory when it ate about 480 MB of swap space.\nI suspect that this behavior is the result of a 7.4 beta 1 bug but I\nwanted to be sure it is one before posting a report on pgsql-bugs.\n\n\tThat query operates on three tables:\n\n==============================================================================\ndb=> \\d movies\n Table \"public.movies\"\n Column | Type | Modifiers \n------------+-------------------+-------------------------------------------------\n id | bigint | not null default nextval('movies_id_seq'::text)\n title | character varying | not null\n orig_title | character varying | not null\n year | smallint | \n year_end | smallint | \nIndexes:\n \"movies_id_idx\" unique, btree (id)\n \"movies_title_idx\" unique, btree (title)\n \"movies_orig_title_idx\" btree (orig_title)\n \"movies_year_idx\" btree (\"year\")\nCheck constraints:\n \"movies_year\" CHECK ((\"year\" >= 1888) OR (\"year\" IS NULL))\n \"$1\" CHECK ((year_end >= 1888) OR (year_end IS NULL))\n\ndb=> \\d raw_atitles\n Table \"public.raw_atitles\"\n Column | Type | Modifiers \n------------+---------------------+--------------------------------------------------\n main_title | character varying | not null\n aka_title | character varying | \n charset | character varying | not null default 'ISO-8859-1'::character varying\n byte_title | character varying | not null\n attribs | character varying[] | \nIndexes:\n \"rimdb_atitles_aka_title_idx\" btree (aka_title)\n \"rimdb_atitles_attribs_idx\" btree (attribs array_ops)\n \"rimdb_atitles_main_title_idx\" btree (main_title)\n\ndb=> \\d atitles\n Table \"public.atitles\"\n Column | Type | Modifiers \n------------+---------------------+-----------\n title | character varying | not null\n movie_id | bigint | not null\n attribs | character varying[] | \n orig_title | character varying | not null\nIndexes:\n \"truc\" unique, btree (movie_id, orig_title, attribs array_ops)\n \"atitles_movie_id_idx\" btree (movie_id)\n \"atitles_title_idx\" btree (title)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (movie_id) REFERENCES movies(id)\n\n==============================================================================\n\n\tThe operation is to update the \"core\" atitles table with the\ncontents of the \"raw\" raw_atitles table. 
The query is as follows:\n\n==============================================================================\nINSERT INTO atitles (movie_id, title, attribs, orig_title)\n SELECT mo.id, trans_title(rak.aka_title), rak.attribs, rak.aka_title\n FROM movies AS mo, raw_atitles AS rak \n WHERE mo.orig_title=rak.main_title AND \n NOT EXISTS\n (SELECT at2.movie_id from atitles AS at2\n WHERE at2.movie_id=mo.id AND\n at2.orig_title=rak.aka_title AND \n at2.attribs=rak.attribs);\n==============================================================================\n\n\tTable sizes are 362,921 rows for movies, 152,549 for atitles,\nand 160,114 for raw_atitles.\n\n\tThe query plan is:\n\n==============================================================================\n QUERY PLAN \n--------------------------------------------------------------------------------------------------\n Merge Join (cost=106998.67..1039376.63 rows=80057 width=86)\n Merge Cond: (\"outer\".\"?column3?\" = \"inner\".\"?column4?\")\n Join Filter: (NOT (subplan))\n -> Sort (cost=66212.69..67119.99 rows=362921 width=38)\n Sort Key: (mo.orig_title)::text\n -> Seq Scan on movies mo (cost=0.00..8338.21 rows=362921 width=38)\n -> Sort (cost=40785.99..41186.27 rows=160114 width=107)\n Sort Key: (rak.main_title)::text\n -> Seq Scan on raw_atitles rak (cost=0.00..5145.14 rows=160114 width=107)\n SubPlan\n -> Index Scan using truc on atitles at2 (cost=0.00..5.80 rows=1 width=8)\n Index Cond: ((movie_id = $0) AND ((orig_title)::text = ($1)::text) AND (attribs = $2))\n(12 rows)\n\n==============================================================================\n\n\tI suspect that the backend does not comply to the sort_mem\nparameter (set to the default 1024).\n\n\tSo my question is: does this really looks like a bug?\n\n\tRegards.\n\n-- \n%!PS\n297.6 420.9 translate 90 rotate 0 setgray gsave 0 1 1{pop 0 180 moveto 100\n180 170 100 170 -10 curveto 180 -9 180 -9 190 -10 curveto 190 100 100 180\n0 180 curveto fill 180 rotate}for grestore/Bookman-LightItalic findfont\n240 scalefont setfont -151.536392 -63.7998886 moveto (bp)show showpage\n", "msg_date": "Thu, 14 Aug 2003 12:21:30 +0200", "msg_from": "Bertrand Petit <[email protected]>", "msg_from_op": true, "msg_subject": "7.4 beta 1 getting out of swap" }, { "msg_contents": "Bertrand Petit <[email protected]> writes:\n> \tI recently switched to 7.4 beta 1, one query that used to be\n> corectly executed uder 7.3.3 albeit slowly now abnormaly ends when the\n> backend can't get more memory when it ate about 480 MB of swap space.\n\nPlease show us the memory context size info that the backend dumps to\nstderr when it reports \"out of memory\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Aug 2003 07:45:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 beta 1 getting out of swap " }, { "msg_contents": "On Thu, Aug 14, 2003 at 07:45:46AM -0400, Tom Lane wrote:\n>\n> Please show us the memory context size info that the backend dumps to\n> stderr when it reports \"out of memory\".\n\n\tHere it is:\n\n==============================================================================\nTopMemoryContext: 32792 total in 4 blocks; 14600 free (9 chunks); 18192 used\nTopTransactionContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used\nDeferredTriggerXact: 2088960 total in 8 blocks; 97384 free (6 chunks); 1991576 used\nMessageContext: 57344 total in 3 blocks; 7552 free (2 chunks); 49792 used\nPortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 
used\nPortalHeapMemory: 1024 total in 1 blocks; 936 free (0 chunks); 88 used\nExecutorState: 4556232 total in 46 blocks; 3183264 free (8303 chunks); 1372968 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExecutorState: 528474112 total in 72 blocks; 760 free (1 chunks); 528473352 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\nCacheMemoryContext: 516096 total in 6 blocks; 44208 free (2 chunks); 471888 used\nraw_atitles_main_title_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\nraw_atitles_attribs_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\nraw_atitles_aka_title_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\nmovies_orig_title_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\nmovies_year_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\nmovies_title_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\nmovies_id_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_constraint_conrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\ntruc: 3144 total in 2 blocks; 1848 free (0 chunks); 1296 used\natitles_title_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\natitles_movie_id_idx: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_index_indrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_type_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_type_typname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_statistic_relid_att_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_shadow_usesysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_proc_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_proc_proname_args_nsp_index: 3144 total in 2 blocks; 1784 free (0 chunks); 1360 used\npg_operator_oprname_l_r_n_index: 3144 total in 2 blocks; 1784 free (0 chunks); 1360 used\npg_namespace_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_namespace_nspname_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_language_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_language_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_inherits_relid_seqno_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_group_sysid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_group_name_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_conversion_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_conversion_name_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_conversion_default_index: 3144 total in 2 blocks; 1784 free (0 chunks); 1360 used\npg_opclass_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_opclass_am_name_nsp_index: 3144 total in 2 blocks; 1848 free (0 chunks); 1296 used\npg_cast_source_target_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_attribute_relid_attnam_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_amop_opr_opc_index: 1024 total in 1 
blocks; 320 free (0 chunks); 704 used\npg_aggregate_fnoid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_shadow_usename_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_index_indexrelid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_attribute_relid_attnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_class_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_amproc_opc_procnum_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_operator_oid_index: 1024 total in 1 blocks; 640 free (0 chunks); 384 used\npg_amop_opc_strategy_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\npg_class_relname_nsp_index: 1024 total in 1 blocks; 320 free (0 chunks); 704 used\nMdSmgr: 8192 total in 1 blocks; 6120 free (0 chunks); 2072 used\nDynaHash: 8192 total in 1 blocks; 7064 free (0 chunks); 1128 used\nDynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used\nDynaHashTable: 8192 total in 1 blocks; 2008 free (0 chunks); 6184 used\nDynaHashTable: 8192 total in 1 blocks; 3520 free (0 chunks); 4672 used\nDynaHashTable: 8192 total in 1 blocks; 3520 free (0 chunks); 4672 used\nDynaHashTable: 24576 total in 2 blocks; 13240 free (4 chunks); 11336 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nDynaHashTable: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nErrorContext: 8192 total in 1 blocks; 8176 free (8 chunks); 16 used\n==============================================================================\n\n-- \n%!PS\n297.6 420.9 translate 90 rotate 0 setgray gsave 0 1 1{pop 0 180 moveto 100\n180 170 100 170 -10 curveto 180 -9 180 -9 190 -10 curveto 190 100 100 180\n0 180 curveto fill 180 rotate}for grestore/Bookman-LightItalic findfont\n240 scalefont setfont -151.536392 -63.7998886 moveto (bp)show showpage\n", "msg_date": "Thu, 14 Aug 2003 16:53:11 +0200", "msg_from": "Bertrand Petit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.4 beta 1 getting out of swap" }, { "msg_contents": "Bertrand Petit <[email protected]> writes:\n> On Thu, Aug 14, 2003 at 07:45:46AM -0400, Tom Lane wrote:\n>> Please show us the memory context size info that the backend dumps to\n>> stderr when it reports \"out of memory\".\n\n> \tHere it is:\n\n> ExecutorState: 4556232 total in 46 blocks; 3183264 free (8303 chunks); 1372968 used\n\nThat seems a bit large, but I'm not sure if it's really to worry about.\n\n> ExecutorState: 528474112 total in 72 blocks; 760 free (1 chunks); 528473352 used\n\nOkay, you definitely must have a memory leak in query execution.\n\nCould you provide enough info to let someone else reproduce it? 
We\ndon't need your data, but a schema dump (pg_dump -s) would be nice\nto avoid trying to guess what causes it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Aug 2003 11:18:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 beta 1 getting out of swap " }, { "msg_contents": "Bertrand Petit <[email protected]> writes:\n> \tAnd I just got another one, much simpler, that failed the same\n> way with the same data set:\n> UPDATE rimdb_atitles SET aka_title=convert(byte_title,charset,'UTF8');\n\n[ where rimdb_atitles has an index on column \"attribs varchar[]\" ]\n\nUh-huh. Actually, any large insert or update on that table will run out\nof memory, I bet. The problem appears to be due to the newly-added\nsupport for indexing array columns --- array_cmp() leaks memory, which\nis verboten for index support operators.\n\nAt first I thought this would be an easy fix --- just rewrite array_cmp\nto not depend on deconstruct_array, as array_eq already does not.\nI soon found that that only reduced the speed of leakage, however.\nThe real problem comes from the fact that array_eq and array_cmp expect\nto be able to save information across calls using flinfo->fn_extra.\nWhile this works to some extent, the btree routines generate a new\nscankey --- with a nulled fn_extra --- for every index AM call. btree\nknows to delete the scankey when it's done, but it doesn't know anything\nabout deleting what fn_extra points to. (Even if it did, there is\nadditional leakage inside equality_oper(), which would be very difficult\nto clean up directly.)\n\nQuite aside from the memory leak problem, it's annoying to think that\nthe array element information will be looked up again on every btree\noperation. That seems expensive.\n\nI can think of a number of ways we might attack this, but none seem\nespecially attractive ---\n\n1. Have the index AMs create and switch into a special memory context\nfor each call, rather than running in the main execution context.\nI am not sure this is workable at all, since the AMs tend to think they\ncan create data structures that will live across calls (for example a\nbtree lookup stack). It'd be the most general solution, if we could\nmake it work.\n\n2. Modify the index AMs so that the comparison function FmgrInfo is\npreserved across a whole query. I think this requires changes to the\nindex AM API (index_insert for instance has no provision for sharing\ndata across multiple calls). Messy, and would likely mean an initdb.\nIt would probably be the fastest answer though, since lookups wouldn't\nneed to be done more than once per query.\n\n3. Set up a long-lived cache internal to the array functions that can\ntranslate element type OID to the needed lookup data, and won't leak\nmemory across repeated calls. This is not the fastest or most general\nsolution, but it seems the most localized and safest fix.\n\nHas anyone got some other ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Aug 2003 20:17:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 beta 1 getting out of swap " }, { "msg_contents": "> I can think of a number of ways we might attack this, but none seem\n> especially attractive ---\n> \n> 1. 
Have the index AMs create and switch into a special memory context\n> for each call, rather than running in the main execution context.\n> I am not sure this is workable at all, since the AMs tend to think they\n> can create data structures that will live across calls (for example a\n> btree lookup stack). It'd be the most general solution, if we could\n> make it work.\n> \n> 2. Modify the index AMs so that the comparison function FmgrInfo is\n> preserved across a whole query. I think this requires changes to the\n> index AM API (index_insert for instance has no provision for sharing\n> data across multiple calls). Messy, and would likely mean an initdb.\n> It would probably be the fastest answer though, since lookups wouldn't\n> need to be done more than once per query.\n\n#2 seems most natural in that it formalizes something that is common for\nlots of index methods.\n\nWe are only in beta1, so I think we can initdb.\n\n> 3. Set up a long-lived cache internal to the array functions that can\n> translate element type OID to the needed lookup data, and won't leak\n> memory across repeated calls. This is not the fastest or most general\n> solution, but it seems the most localized and safest fix.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Aug 2003 20:41:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 beta 1 getting out of swap" }, { "msg_contents": "Tom Lane wrote:\n> Bertrand Petit <[email protected]> writes:\n>>\tAnd I just got another one, much simpler, that failed the same\n>>way with the same data set:\n>>UPDATE rimdb_atitles SET aka_title=convert(byte_title,charset,'UTF8');\n> \n> [ where rimdb_atitles has an index on column \"attribs varchar[]\" ]\n> \n> Uh-huh. Actually, any large insert or update on that table will run out\n> of memory, I bet. The problem appears to be due to the newly-added\n> support for indexing array columns --- array_cmp() leaks memory, which\n> is verboten for index support operators.\n\nUgh.\n\n> I can think of a number of ways we might attack this, but none seem\n> especially attractive ---\n> \n> 1. Have the index AMs create and switch into a special memory context\n> for each call, rather than running in the main execution context.\n> I am not sure this is workable at all, since the AMs tend to think they\n> can create data structures that will live across calls (for example a\n> btree lookup stack). It'd be the most general solution, if we could\n> make it work.\n\nThis seems like a risky change at this point.\n\n> 2. Modify the index AMs so that the comparison function FmgrInfo is\n> preserved across a whole query. I think this requires changes to the\n> index AM API (index_insert for instance has no provision for sharing\n> data across multiple calls). Messy, and would likely mean an initdb.\n> It would probably be the fastest answer though, since lookups wouldn't\n> need to be done more than once per query.\n\nThis seems like a fairly big change this late in the game too.\n\n> 3. Set up a long-lived cache internal to the array functions that can\n> translate element type OID to the needed lookup data, and won't leak\n> memory across repeated calls. 
This is not the fastest or most general\n> solution, but it seems the most localized and safest fix.\n> \n\nI think I like #3 the best, but maybe that's because it's the one I \nthink I understand the best ;-)\n\nIt seems to me that #3 is the least risky, and even if it isn't the best \npossible performance, this is the initial implementation of indexes on \narrays, so it isn't like we're taking away something. Maybe solution #2 \nis better held as a performance enhancement for 7.5.\n\nDo you want me to take a shot at this since I created the mess?\n\nJoe\n\n", "msg_date": "Thu, 14 Aug 2003 19:46:20 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 beta 1 getting out of swap" }, { "msg_contents": "Joe Conway <[email protected]> writes:\n> Tom Lane wrote:\n>> 3. Set up a long-lived cache internal to the array functions that can\n>> translate element type OID to the needed lookup data, and won't leak\n>> memory across repeated calls. This is not the fastest or most general\n>> solution, but it seems the most localized and safest fix.\n\n> It seems to me that #3 is the least risky, and even if it isn't the best \n> possible performance, this is the initial implementation of indexes on \n> arrays, so it isn't like we're taking away something. Maybe solution #2 \n> is better held as a performance enhancement for 7.5.\n\nI'm leaning that way too. It occurs to me also that the same cache\ncould be used to eliminate repeated lookups in sorting setup --- which\nwould not be much of a win percentagewise, compared to the sort itself,\nbut still it seems worth doing.\n\n> Do you want me to take a shot at this since I created the mess?\n\nActually I led you down the garden path on that, IIRC --- I was the one\nwho insisted these lookups needed to be cached. I'll work on fixing it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Aug 2003 07:09:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 beta 1 getting out of swap " }, { "msg_contents": "> Joe Conway <[email protected]> writes:\n>> Tom Lane wrote:\n> 3. Set up a long-lived cache internal to the array functions that can\n> translate element type OID to the needed lookup data, and won't leak\n> memory across repeated calls. This is not the fastest or most general\n> solution, but it seems the most localized and safest fix.\n\n>> It seems to me that #3 is the least risky, and even if it isn't the best \n>> possible performance, this is the initial implementation of indexes on \n>> arrays, so it isn't like we're taking away something. Maybe solution #2 \n>> is better held as a performance enhancement for 7.5.\n\n> I'm leaning that way too. It occurs to me also that the same cache\n> could be used to eliminate repeated lookups in sorting setup --- which\n> would not be much of a win percentagewise, compared to the sort itself,\n> but still it seems worth doing.\n\nI've committed fixes for this, and verified that inserts/updates on an\nindexed array column don't leak memory anymore.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Aug 2003 18:25:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 beta 1 getting out of swap " } ]
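For anyone who wants to reproduce the leak (or confirm the fix) without the original data set, the failing case boils down to a bulk insert or update on a table whose array column carries a btree index. A stripped-down sketch follows; the table and index names are invented, array_ops is the opclass shown in the \d output earlier in the thread, and the row count only needs to be large enough to make the leak visible:

CREATE TABLE leak_test (
    id      integer,
    attribs varchar[]
);
CREATE INDEX leak_test_attribs_idx ON leak_test (attribs array_ops);

-- load a few hundred thousand rows, e.g. with INSERT ... SELECT from an
-- existing table, then:
UPDATE leak_test SET id = id + 1;
-- before the fix the ExecutorState context grows steadily while this
-- statement runs; after Tom's commit it should stay flat.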
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMy database is approximately 8gb in size. My application is 24/7. I'm \nconstantly cycling through data. I analyze every 15 minutes but I'm noticing \nthat during a vacuum the database becomes extremely sluggish.\n\nIn some cases the operation of my software goes from 1-3 second runtime to \n300+ seconds during a vacuum.\n\nShould I expect this with a vacuum? I've done reading online where people say \nthey see only a 10% decrease in speed. Is this supposed to be the norm?\n\nI've allocated 196MB of RAM to vacuums.\n\nThe system is a dual P4-2.4ghz w/ 1.5 gig of RAM w/ 36 gig of RAID mirrored \ndisk.\n\nI'm runing 7.3.2 and I am upgrading to 7.3.3 next week.\n\nPlease advise.\n\n- -- \nJeremy M. Guthrie\nSystems Engineer\nBerbee\n5520 Research Park Dr.\nMadison, WI 53711\nPhone: 608-298-1061\n\nBerbee...Decade 1. 1993-2003\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.1 (GNU/Linux)\n\niD8DBQE/O80UqtjaBHGZBeURAqmLAJ9kxh0DyaZm3pAP77XGlDTq5JcsowCfeqpC\n36SjIo5XW44bEkmHnbwXXBQ=\n=oozQ\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 14 Aug 2003 12:55:32 -0500", "msg_from": "\"Jeremy M. Guthrie\" <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum performance question" }, { "msg_contents": "> I've allocated 196MB of RAM to vacuums.\n\nI would be willing to bet that you have kicked the system into swap\nbecause of this. Hence the large decrease in speed.\n\nTry sliding back to 32MB for vacuum. A ton more ram doesn't really help\nit all that much.", "msg_date": "Thu, 14 Aug 2003 14:04:58 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum performance question" }, { "msg_contents": "\"Jeremy M. Guthrie\" <[email protected]> writes:\n> In some cases the operation of my software goes from 1-3 second runtime to \n> 300+ seconds during a vacuum.\n\nThat seems like a lot. I concur with the nearby recommendation to\nreduce vacuum_mem, but I think there may be another problem. You should\nwatch top, iostat, vmstat during a vacuum to try to see what resource is\ngetting saturated.\n\n> I'm runing 7.3.2 and I am upgrading to 7.3.3 next week.\n\n7.3.4, please. 7.3.3 has at least one nasty bug.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Aug 2003 15:28:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum performance question " } ]
[ { "msg_contents": "I have the following setup :\n Apache 2 + mod_perl 2\n Postgres 7.3.2\n\nI need to is measure the perfomance of a ticketing system (written in perl)\nwhich has web interface (html::mason, apache2) with Pg as a backend. Users\nof the ticketing system can only connect to the backend via the web \ninterface\nand they usually login to the system at the begining of the the day and \nremain\nconnected untill they knock of. I have setup two test machines, one with Pg\nand the other with Mysql. Both machine have the same data (sample).\n\nI am looking for a benchmark utilty that the simulate a user session.\nFor example, a user login in, displaying a ticket and searching for tickets;\nall these invlove a user connecting to the a url, like for search, a \nuser needs\nto open \"somehost.domain/path/to/search.html?with=arguments\". The \nutiltly needs\nto simulate these actions.\n\nThe following tools currently have so far caught my attention:\n Apache Jmeter\n ab\n\nI need suggestions for other utilities.\n\n\n", "msg_date": "Fri, 15 Aug 2003 08:33:57 +0200", "msg_from": "mixo <[email protected]>", "msg_from_op": true, "msg_subject": "Benchmark" }, { "msg_contents": "2003-08-15 ragyogó napján mixo ezt üzente:\n\n> For example, a user login in, displaying a ticket and searching for tickets;\n> all these invlove a user connecting to the a url, like for search, a\n> user needs\n> to open \"somehost.domain/path/to/search.html?with=arguments\". The\n> utiltly needs\n> to simulate these actions.\n\nMaybe a simple script, with lines like this:\n#!/bin/bash\nwget http://somehost.domain/path/to/search.html?with=arguments\nwget http://somehost.domain/another/script.html\n...\n\n?\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Fri, 15 Aug 2003 11:32:11 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "On Friday 15 August 2003 07:33, mixo wrote:\n> I have the following setup :\n> Apache 2 + mod_perl 2\n> Postgres 7.3.2\n>\n\n> I am looking for a benchmark utilty that the simulate a user session.\n> For example, a user login in, displaying a ticket and searching for\n> tickets; all these invlove a user connecting to the a url, like for search,\n> a user needs\n> to open \"somehost.domain/path/to/search.html?with=arguments\". 
The\n> utiltly needs\n> to simulate these actions.\n>\n> The following tools currently have so far caught my attention:\n> Apache Jmeter\n> ab\n\nDepending on how complex things are, you may be able to use Tomka's suggestion \nof a batch of wget commands.\n\nHowever, if you want to be able to fill in forms, handle cookies and react to \nresults you might want to look at something based on Perl's LWP bundle \n(LWP::UserAgent is a good start point).\n\nIf you want to see how many users can be supported, don't forget to include \nplausible delays in the test sequence where the users would be \nreading/entering data.\n\nIf you do find some flexible, scriptable web testing system that can read/fill \nout forms etc please post to the list - I've had no luck finding anything I \nlike.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 15 Aug 2003 14:12:08 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "Mixo,\n\n> I need to is measure the perfomance of a ticketing system (written in perl)\n> which has web interface (html::mason, apache2) with Pg as a backend. Users\n> of the ticketing system can only connect to the backend via the web\n> interface\n\nI'd suggest Perl LWP. There's even a good article on how to use it in last \nmonth's Linux Magazine (or the previous month, not sure). \n\nDon't forget to come back here for help in tweaking your PostgreSQL settings!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 15 Aug 2003 09:21:06 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "On Fri, 15 Aug 2003, Richard Huxton wrote:\n\n> If you do find some flexible, scriptable web testing system that can read/fill\n> out forms etc please post to the list - I've had no luck finding anything I\n> like.\n>\nThere was a tool created by altavista R&D called \"WebL\" that I used on a\nproject. I think if you search around you'll be able to find it.\n\nHowever, I think it has developed \"bit rot\" from not being touched in so\nlong. My old webl stuff will no longer work.\n\nBut what is it? it is a web scraping language written in java. Fast it is\nnot. Easy to scrape and interact with pages: YES. It has all sorts of\nthings for locating fields, locating table cells, etc. (I used it for\nwriting a prototype that would scrape a competitors site and import the\ndata into our application :)\n\nbut if it doesn't work LWP isn't _that_ bad. 
You will end up in regex hell\nthough.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Fri, 15 Aug 2003 14:24:01 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "You might want to check out these links:\n\nhttp://www.loadtestingtool.com/download.html\nhttp://www.opensta.org/\nhttp://www.microsoft.com/technet/treeview/default.asp?url=/TechNet/itsol\nutions/intranet/downloads/webstres.asp?frame=true\nhttp://www.softwareqatest.com/qatweb1.html\n\n\nBalazs\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of mixo\nSent: Thursday, August 14, 2003 11:34 PM\nTo: [email protected]\nSubject: [PERFORM] Benchmark\n\nI have the following setup :\n Apache 2 + mod_perl 2\n Postgres 7.3.2\n\nI need to is measure the perfomance of a ticketing system (written in\nperl)\nwhich has web interface (html::mason, apache2) with Pg as a backend.\nUsers\nof the ticketing system can only connect to the backend via the web \ninterface\nand they usually login to the system at the begining of the the day and \nremain\nconnected untill they knock of. I have setup two test machines, one with\nPg\nand the other with Mysql. Both machine have the same data (sample).\n\nI am looking for a benchmark utilty that the simulate a user session.\nFor example, a user login in, displaying a ticket and searching for\ntickets;\nall these invlove a user connecting to the a url, like for search, a \nuser needs\nto open \"somehost.domain/path/to/search.html?with=arguments\". The \nutiltly needs\nto simulate these actions.\n\nThe following tools currently have so far caught my attention:\n Apache Jmeter\n ab\n\nI need suggestions for other utilities.\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n", "msg_date": "Fri, 15 Aug 2003 11:47:21 -0700", "msg_from": "\"Balazs Wellisch\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "\nOn Saturday, August 16, 2003, at 01:21 AM, Josh Berkus wrote:\n\n> Mixo,\n>\n>> I need to is measure the perfomance of a ticketing system (written in \n>> perl)\n>> which has web interface (html::mason, apache2) with Pg as a backend. \n>> Users\n>> of the ticketing system can only connect to the backend via the web\n>> interface\n>\n> I'd suggest Perl LWP. There's even a good article on how to use it in \n> last\n> month's Linux Magazine (or the previous month, not sure).\n>\n\nThe Perl module HTTP::WebTest should help you do what you want.\n\nhttp://search.cpan.org/author/ILYAM/HTTP-WebTest-2.03/lib/HTTP/ \nWebTest.pm\n\nIt should save a ton of time over building one from scratch. Lots of \ndocs too. There are other Perl web testing tools too if you do a search \nat:\n\nhttp://search.cpan.org\n\nHTH,\nDave\n\n", "msg_date": "Sun, 17 Aug 2003 12:48:24 +0900", "msg_from": "David Emery <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" } ]
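Of the tools listed in this thread, ab is the fastest way to get a rough concurrent-load number, with the caveat that it replays a single URL rather than a whole session. The URL and counts below are placeholders taken from the original question:

ab -n 1000 -c 20 'http://somehost.domain/path/to/search.html?with=arguments'

-n is the total number of requests and -c the number of concurrent clients; anything that needs to follow a full session (logging in, reacting to results) is better handled by the LWP or HTTP::WebTest suggestions above.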
[ { "msg_contents": "Hi all,\n\nCouple of days ago, one of my colleague, Rahul Iyer posted a query regarding \ninsert performance of 5M rows. A common suggestion was to use copy.\n\nUnfortunately he can not use copy due to some constraints. I was helping him to \nget maximum out of it. We were playing with a data set of 500K rows on SunOS5.6 \nand postgresql 7.3.3\n\nBest we could get was 500K records in 355 sec. That's roughly 1400 inserts per \nsec. This was with default settings and around 10K inserts per transaction. \nPostgresql was hogging all the CPU during load.\n\nTroubled by this, I set up a similar database at home. That is a Athlon \nXP2000+/512MB machine with a 40GB seagate disk. It is running slackware 9.0 \nwith 2.4.20(IIRC).\n\nI have attached the results of my experiements and the code I used to \nbenchmark. It was a simple table with an integer and a varchar(30) field.\n\nI was really amazed to see the numbers. First of all, it beat the sunOS machine \n left and right. Bruce posted some numbers of 9K inserts/sec. Here we see the \nsame.\n\nSecondly I also played with filesystems. Ext3 does not seem to be performing as \ngood. Reiser and ext2 did fine. Unfortunately the kernel didn't support XFS/JFS \nso could not test them.\n\nI have also attached the raw benchmark data in kspread format, for the curious. \nDidn't exported to anything else because kspread had troubles with exporting \nformula values.\n\nI also noticed that reiser takes hell lot more CPU than ext2 and ext3. It \nnearly peaks out all CPU capacity. Not so with ext2.\n\nComments? One thing I can't help to notice is sunOs is not on same scale. The \nsunOS machine is a 1GB RAM machine. It has oracle and mysql running on it and \nhave 300MB swap in use but I am sure it has SCSI disk and in all respect I \nwould rather expect a RISC machine to perform better than an athlon XP machine, \nat least for an IO.\n\nIf you want me to post details of sparc machine, please let me know how do I \nfind it. I have never worked with sparcs earlier and have no intention of doing \nthis again..:-)\n\nBye\n Shridhar\n\n--\nFourth Law of Applied Terror:\tThe night before the English History mid-term, \nyour Biology\tinstructor will assign 200 pages on planaria.Corollary:\tEvery \ninstructor assumes that you have nothing else to do except\tstudy for that \ninstructor's course.", "msg_date": "Sat, 16 Aug 2003 14:10:15 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Insert performance" }, { "msg_contents": "Shridhar,\n\n> Unfortunately he can not use copy due to some constraints. \n\nWhy not use COPY to load the table, and then apply the constraints by query \nafterwords? It might not be faster, but then again it might.\n\n> I was really amazed to see the numbers. First of all, it beat the sunOS\n> machine left and right. Bruce posted some numbers of 9K inserts/sec. Here\n> we see the same.\n<snip>\n> Comments? One thing I can't help to notice is sunOs is not on same scale.\n> The sunOS machine is a 1GB RAM machine. It has oracle and mysql running on\n> it and have 300MB swap in use but I am sure it has SCSI disk and in all\n> respect I would rather expect a RISC machine to perform better than an\n> athlon XP machine, at least for an IO.\n\nIt's been reported on this list several times that Solaris is the *worst* of \nthe *nixes for PostgreSQL performance. 
No analysis has been posted as to \nwhy; my own thoughts are:\n\t- Solaris' multi-threaded architecture which imposes a hefty per-process \noverhead, about triple that of Linux, slowing new connections and large \nmulti-user activity;\n\t- Poor filesystem management; Sun simply hasn't kept up with IBM, Reiser, Red \nHat and BSD in developing filesystems.\n\t... but that's based on inadequate experimentation, just a few tests on \nBonnie++ on a Netra running Solaris 8.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 16 Aug 2003 11:40:59 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance" }, { "msg_contents": "Martha Stewart called it a Good Thing [email protected] (Josh Berkus)wrote:\n> Shridhar,\n>> Unfortunately he can not use copy due to some constraints. \n\n> Why not use COPY to load the table, and then apply the constraints\n> by query afterwords? It might not be faster, but then again it\n> might.\n\nIf you can transform the information into COPYable form, that's\ncertainly a good thing. Whether it will be workable or not is another\nquestion.\n\n> ... but that's based on inadequate experimentation, just a few tests\n> on Bonnie++ on a Netra running Solaris 8.\n\nAs far as the filesystem issues are concerned, you're probably using\nan \"old, standard\" version of UFS. The \"high performance\" option on\nSolaris involves using third-party Veritas software.\n\nThe persistence of this is somewhat surprising; I'd somewhat have\nexpected Sun to have bought out Veritas or some such thing, as it's a\npretty vital technology that _isn't_ totally under their control.\nActually, an entertaining option would be for them to buy out SGI, as\nthat would get them control of XFS and a number of other interesting\ntechnologies.\n\nMy expectations of a Netra wouldn't be terribly high, either; they\nseem to exist as a product so that people that need a cheap Sun box\nhave an option. They are mostly running IDE disk, and the latest\nIA-32 hardware is likely to have newer faster interface options.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"ntlug.org\")\nhttp://cbbrowne.com/info/advocacy.html\n\"Windows NT was designed to be administered by an idiot and usually\nis...\" -- Chris Adams\" <[email protected]>\n", "msg_date": "Sat, 16 Aug 2003 18:22:48 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance" }, { "msg_contents": "On 16 Aug 2003 at 11:40, Josh Berkus wrote:\n\n> Shridhar,\n> \n> > Unfortunately he can not use copy due to some constraints. \n> \n> Why not use COPY to load the table, and then apply the constraints by query \n> afterwords? It might not be faster, but then again it might.\n\nLol.. The constraints I mentioned weren't database constraints.. It need to \ninterface with another database module.. The data import can not be text.. \nthat's a project requirement..\n\nI should have been clearer in first place..\n\nBye\n Shridhar\n\n--\nO'Toole's commentary on Murphy's Law:\tMurphy was an optimist.\n\n", "msg_date": "Mon, 18 Aug 2003 11:51:32 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert performance" }, { "msg_contents": "Shridhar Daithankar kirjutas E, 18.08.2003 kell 09:21:\n> On 16 Aug 2003 at 11:40, Josh Berkus wrote:\n> \n> > Shridhar,\n> > \n> > > Unfortunately he can not use copy due to some constraints. 
\n> > \n> > Why not use COPY to load the table, and then apply the constraints by query \n> > afterwords? It might not be faster, but then again it might.\n> \n> Lol.. The constraints I mentioned weren't database constraints.. \n\nCan't you still apply them later ;)\n\nMy own experimentation also got numbers in 9k/sec range (on a quad\n1.3GHz Xeons, 2GM mem, 50MB/sec raid) when doing 10-20 parallel runs of\n~1000 inserts/transaction.\n\nPerformance dropped to ~300/sec (at about 60M rows) when I added an\nindex (primary key) - as I did random inserts, the hit rates for index\npages were probably low.\n\n--------------\nHannu\n\n", "msg_date": "Mon, 18 Aug 2003 18:52:43 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance" }, { "msg_contents": "On 18 Aug 2003 at 18:52, Hannu Krosing wrote:\n> My own experimentation also got numbers in 9k/sec range (on a quad\n> 1.3GHz Xeons, 2GM mem, 50MB/sec raid) when doing 10-20 parallel runs of\n> ~1000 inserts/transaction.\n> \n> Performance dropped to ~300/sec (at about 60M rows) when I added an\n> index (primary key) - as I did random inserts, the hit rates for index\n> pages were probably low.\n\nI was loading a geographic data couple of months back.. It was 3GB data when \nloaded in postgresql.\n\nI tried loading data first and creating index later. It ran out of available \n9GB space. So I created index on an empty table and started loading it. It was \nslow but at least finished after 3 hours... Co-incidentally oracle had same \nproblems as well. So creating index beforehand remains only option at times, it \nseems. Tom remarked that it shouldn't have made difference but apparently it \ndoes..\n\nYou mentioned parallel runs and still getting 9K/sec. Was that overall 9K or \nper connection? If it is former, probably WAL is hit too hard. You could do \nsome additional testing by having WALit's own disk.\n\nI was plannning to do the multiple writers test. But since objective was to \nfind out why postgresql was so slow and it tunred out to be slowaris in first \nplace, didn't have any enthu. left.\n\nI recommended my colleague to move to linux. But apparently this product is a \npart of suit which runs on some HUGE solaris machines. So if it has to run \npostgresql, it has to run sunos. I hope they are faster with SCSI..\n\nBye\n Shridhar\n\n--\nOn my planet, to rest is to rest -- to cease using energy. To me, itis quite \nillogical to run up and down on green grass, using energy,instead of saving it.\t\n\t-- Spock, \"Shore Leave\", stardate 3025.2\n\n", "msg_date": "Mon, 18 Aug 2003 21:32:27 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert performance" }, { "msg_contents": "Chris,\n\n> My expectations of a Netra wouldn't be terribly high, either; they\n> seem to exist as a product so that people that need a cheap Sun box\n> have an option. They are mostly running IDE disk, and the latest\n> IA-32 hardware is likely to have newer faster interface options.\n\nNeither are ours ... we bought an array of 1U Netras as a dot-com closeout. 
\nAt $330 per machine including OS, they were a darned good deal, and with 4 of \nthem the redundancy makes up for the lack of individual server performance.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 18 Aug 2003 09:57:59 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance" }, { "msg_contents": "Shridhar Daithankar kirjutas E, 18.08.2003 kell 19:02:\n> On 18 Aug 2003 at 18:52, Hannu Krosing wrote:\n> > My own experimentation also got numbers in 9k/sec range (on a quad\n> > 1.3GHz Xeons, 2GM mem, 50MB/sec raid) when doing 10-20 parallel runs of\n> > ~1000 inserts/transaction.\n> > \n> > Performance dropped to ~300/sec (at about 60M rows) when I added an\n> > index (primary key) - as I did random inserts, the hit rates for index\n> > pages were probably low.\n> \n> I was loading a geographic data couple of months back.. It was 3GB data when \n> loaded in postgresql.\n\nWith or without indexes ?\n\n> I tried loading data first and creating index later. It ran out of available \n> 9GB space. So I created index on an empty table and started loading it. It was \n> slow but at least finished after 3 hours... Co-incidentally oracle had same \n> problems as well. So creating index beforehand remains only option at times, it \n> seems. Tom remarked that it shouldn't have made difference but apparently it \n> does..\n\nTom just fixed some memory leaks on array indexing the other day. Could\nthere be something like that on geographic types ?\n\n> You mentioned parallel runs and still getting 9K/sec. Was that overall 9K or \n> per connection?\n\nOverall. But notice that my setup was (a little) slower per processor.\n\n> If it is former, probably WAL is hit too hard. You could do \n> some additional testing by having WALit's own disk.\n\nI guess that todays IDE disks are about the same speed (~50MB/sec) as my\ntest RAID was.\n\nI run multiple parallel runs to have a chance to use all 4 processors\n(but IIRC it was heavyly IO-bound) as well as to better use writing time\non WAL platters (not to wait for full rotation on each platter)\n\n--------------\nHannu\n\n\n\n\n", "msg_date": "Tue, 19 Aug 2003 01:16:58 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance" }, { "msg_contents": "On 18 Aug 2003 at 9:57, Josh Berkus wrote:\n\n> Chris,\n> \n> > My expectations of a Netra wouldn't be terribly high, either; they\n> > seem to exist as a product so that people that need a cheap Sun box\n> > have an option. They are mostly running IDE disk, and the latest\n> > IA-32 hardware is likely to have newer faster interface options.\n> \n> Neither are ours ... we bought an array of 1U Netras as a dot-com closeout. \n> At $330 per machine including OS, they were a darned good deal, and with 4 of \n> them the redundancy makes up for the lack of individual server performance.\n\nI am sure they would run much better with linux on them rather than solaris..\n\nBye\n Shridhar\n\n--\nShannon's Observation:\tNothing is so frustrating as a bad situation that is \nbeginning to\timprove.\n\n", "msg_date": "Tue, 19 Aug 2003 11:58:45 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert performance" }, { "msg_contents": "On 19 Aug 2003 at 1:16, Hannu Krosing wrote:\n\n> Shridhar Daithankar kirjutas E, 18.08.2003 kell 19:02:\n> > I was loading a geographic data couple of months back.. 
It was 3GB data when \n> > loaded in postgresql.\n> \n> With or without indexes ?\n\nWithout index. Index was another 3 GB with 50% utilisation. It was 81M rows \nwith 3 floats each..\n\n> \n> > I tried loading data first and creating index later. It ran out of available \n> > 9GB space. So I created index on an empty table and started loading it. It was \n> > slow but at least finished after 3 hours... Co-incidentally oracle had same \n> > problems as well. So creating index beforehand remains only option at times, it \n> > seems. Tom remarked that it shouldn't have made difference but apparently it \n> > does..\n> \n> Tom just fixed some memory leaks on array indexing the other day. Could\n> there be something like that on geographic types ?\n\nDunno.. This was 7.3.2 or earlier.. Later the project abandoned all types of \ndatabases and went to in memory structures since flat data was of the order of \n200MB. Now they aer returning to databases as flat data is approaching excess \nof 3 GB..\n\nGod knows what will they do next. It's an ideal example what schedule pressure \ncan do to architecture design of a software..\n\n\nBye\n Shridhar\n\n--\nAir Force Inertia Axiom:\tConsistency is always easier to defend than correctness.\n\n", "msg_date": "Tue, 19 Aug 2003 12:03:26 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert performance" } ]
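To make the two main suggestions in this thread concrete: group many INSERTs into one transaction (the tests above used roughly 10000 per transaction) and, wherever the interface allows the data to be streamed, prefer COPY. The table below is a stand-in for the simple integer-plus-varchar(30) test table; names and values are invented:

BEGIN;
INSERT INTO load_test (id, name) VALUES (1, 'row 1');
INSERT INTO load_test (id, name) VALUES (2, 'row 2');
-- ... keep appending until ~10000 rows ...
COMMIT;

COPY load_test (id, name) FROM stdin WITH DELIMITER '|';
3|row 3
4|row 4
\.

If the table can be loaded before its indexes are needed, building the index afterwards is normally faster, though as Shridhar notes that is not always possible when disk space is tight.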
[ { "msg_contents": "Hi ,\nI am using pg 7.3.3 on RH 7.3,\ndual Athlon\n1 GB RAM.\n\nI have 2 tables a_acc and a_vid_doc (all PK are int).\n\nsizes:\n\n select count(IDS) from a_acc;\n count\n---------\n 1006772\n\nselect count(IDS) from a_vid_doc;\n count\n-------\n 25\n\nI have problem with the join ot this tables.\nI tryed this examples:\n\n explain analyze select G.IDS from A_ACC G join A_VID_DOC VD\nON(G.IDS_VID_DOC=VD.IDS) WHERE G.IDS = 1338673 ;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n\n Merge Join (cost=1.83..1.97 rows=1 width=12) (actual\ntime=40.78..2085.82 rows=1 loops=1)\n Merge Cond: (\"outer\".ids_vid_doc = \"inner\".ids)\n -> Index Scan using i_a_acc_ids_vid_doc on a_acc g\n(cost=0.00..43706.42 rows=1 width=8) (actual time=40.52..2085.55 rows=1\nloops=1)\n Filter: (ids = 1338673)\n -> Sort (cost=1.83..1.89 rows=25 width=4) (actual time=0.22..0.22\nrows=25 loops=1)\n Sort Key: vd.ids\n -> Seq Scan on a_vid_doc vd (cost=0.00..1.25 rows=25 width=4)\n(actual time=0.05..0.07 rows=25 loops=1)\n Total runtime: 2085.93 msec\n(8 rows)\n\n\nand\n\n explain analyze select G.IDS from A_ACC G , A_VID_DOC VD where\nG.IDS_VID_DOC=VD.IDS and G.IDS = 1338673 ;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n\n Merge Join (cost=1.83..1.97 rows=1 width=12) (actual\ntime=40.91..2099.13 rows=1 loops=1)\n Merge Cond: (\"outer\".ids_vid_doc = \"inner\".ids)\n -> Index Scan using i_a_acc_ids_vid_doc on a_acc g\n(cost=0.00..43706.42 rows=1 width=8) (actual time=40.65..2098.86 rows=1\nloops=1)\n Filter: (ids = 1338673)\n -> Sort (cost=1.83..1.89 rows=25 width=4) (actual time=0.22..0.22\nrows=25 loops=1)\n Sort Key: vd.ids\n -> Seq Scan on a_vid_doc vd (cost=0.00..1.25 rows=25 width=4)\n(actual time=0.05..0.07 rows=25 loops=1)\n Total runtime: 2099.24 msec\n(8 rows)\n\n From time to time the second one is very slow (15-17 sek).\n\nIf I execute:\n\n explain analyze select G.IDS from A_ACC G where G.IDS = 1338673 ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n\n Index Scan using a_acc_pkey on a_acc g (cost=0.00..3.13 rows=1\nwidth=4) (actual time=0.06..0.06 rows=1 loops=1)\n Index Cond: (ids = 1338673)\n Total runtime: 0.11 msec\n(3 rows)\n\n, all is working well.\n\nHow can I find the problem?\nI have index on A_ACC.IDS_VID_DOC and have vacuum full analyze;\nWill it help if I make A_ACC.IDS_VID_DOC not null ?\n\nMy problem is that I will execute this query many times and ~ 2 sek is\nvery slow for me.\n\nMany thanks and best regards,\nivan.\n\n\n\n", "msg_date": "Mon, 18 Aug 2003 12:10:43 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "bad join preformance" } ]
[ { "msg_contents": "Hi ,\nI am using pg 7.3.3 on RH 7.3,\ndual Athlon\n1 GB RAM.\n\nI have 2 tables a_acc and a_vid_doc (all PK are int).\n\nsizes:\n\n select count(IDS) from a_acc;\n count\n---------\n 1006772\n\nselect count(IDS) from a_vid_doc;\n count\n-------\n 25\n\nI have problem with the join ot this tables.\nI tryed this examples:\n\n explain analyze select G.IDS from A_ACC G join A_VID_DOC VD\nON(G.IDS_VID_DOC=VD.IDS) WHERE G.IDS = 1338673 ;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n\n Merge Join (cost=1.83..1.97 rows=1 width=12) (actual\ntime=40.78..2085.82 rows=1 loops=1)\n Merge Cond: (\"outer\".ids_vid_doc = \"inner\".ids)\n -> Index Scan using i_a_acc_ids_vid_doc on a_acc g\n(cost=0.00..43706.42 rows=1 width=8) (actual time=40.52..2085.55 rows=1\nloops=1)\n Filter: (ids = 1338673)\n -> Sort (cost=1.83..1.89 rows=25 width=4) (actual time=0.22..0.22\nrows=25 loops=1)\n Sort Key: vd.ids\n -> Seq Scan on a_vid_doc vd (cost=0.00..1.25 rows=25 width=4)\n\n(actual time=0.05..0.07 rows=25 loops=1)\n Total runtime: 2085.93 msec\n(8 rows)\n\n\nand\n\n explain analyze select G.IDS from A_ACC G , A_VID_DOC VD where\nG.IDS_VID_DOC=VD.IDS and G.IDS = 1338673 ;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n\n Merge Join (cost=1.83..1.97 rows=1 width=12) (actual\ntime=40.91..2099.13 rows=1 loops=1)\n Merge Cond: (\"outer\".ids_vid_doc = \"inner\".ids)\n -> Index Scan using i_a_acc_ids_vid_doc on a_acc g\n(cost=0.00..43706.42 rows=1 width=8) (actual time=40.65..2098.86 rows=1\nloops=1)\n Filter: (ids = 1338673)\n -> Sort (cost=1.83..1.89 rows=25 width=4) (actual time=0.22..0.22\nrows=25 loops=1)\n Sort Key: vd.ids\n -> Seq Scan on a_vid_doc vd (cost=0.00..1.25 rows=25 width=4)\n\n(actual time=0.05..0.07 rows=25 loops=1)\n Total runtime: 2099.24 msec\n(8 rows)\n\n>From time to time the second one is very slow (15-17 sek).\n\nIf I execute:\n\n explain analyze select G.IDS from A_ACC G where G.IDS = 1338673 ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n\n Index Scan using a_acc_pkey on a_acc g (cost=0.00..3.13 rows=1\nwidth=4) (actual time=0.06..0.06 rows=1 loops=1)\n Index Cond: (ids = 1338673)\n Total runtime: 0.11 msec\n(3 rows)\n\n, all is working well.\n\nHow can I find the problem?\nI have index on A_ACC.IDS_VID_DOC and have vacuum full analyze;\nWill it help if I make A_ACC.IDS_VID_DOC not null ?\n\nMy problem is that I will execute this query many times and ~ 2 sek is\nvery slow for me.\n\nMany thanks and best regards,\nivan.\n\n\n", "msg_date": "Mon, 18 Aug 2003 16:58:49 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "bad join performance" }, { "msg_contents": "\nOn Mon, 18 Aug 2003, pginfo wrote:\n\n> Hi ,\n> I am using pg 7.3.3 on RH 7.3,\n> dual Athlon\n> 1 GB RAM.\n>\n> I have 2 tables a_acc and a_vid_doc (all PK are int).\n>\n> sizes:\n>\n> select count(IDS) from a_acc;\n> count\n> ---------\n> 1006772\n>\n> select count(IDS) from a_vid_doc;\n> count\n> -------\n> 25\n>\n> I have problem with the join ot this tables.\n> I tryed this examples:\n>\n> explain analyze select G.IDS from A_ACC G join A_VID_DOC VD\n> ON(G.IDS_VID_DOC=VD.IDS) WHERE G.IDS = 1338673 ;\n\nIn general the best index on A_ACC for this kind of query might\nbe on on 
A_ACC(IDS, IDS_VID_DOC). That should allow you to search\nby IDS value but still get a sorted order of IDS_VID_DOC to help\nthe join.\n\n", "msg_date": "Mon, 18 Aug 2003 10:45:25 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad join performance" }, { "msg_contents": "pginfo <[email protected]> writes:\n> I am using pg 7.3.3 on RH 7.3,\n\nAre you certain the server is 7.3.3? This looks like a mergejoin\nestimation bug that was present in 7.3 and 7.3.1, but should be fixed\nin 7.3.2 and later.\n\nIf it is 7.3.3, I'd like to see the pg_stats rows for \na_acc.ids_vid_doc and a_vid_doc.ids.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Aug 2003 14:40:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad join performance " }, { "msg_contents": "Hi tom,\nsorry for my bad.\nMy production server is 7.3.7, but the development is 7.3.1 and I ran the\ntests on 7.3.1.\n\nIt is courios that on 7.3.1 the query is not constantly bad.\n>From time to time it is running well.\n\nregards,\nivan.\nTom Lane wrote:\n\n> pginfo <[email protected]> writes:\n> > I am using pg 7.3.3 on RH 7.3,\n>\n> Are you certain the server is 7.3.3? This looks like a mergejoin\n> estimation bug that was present in 7.3 and 7.3.1, but should be fixed\n> in 7.3.2 and later.\n>\n> If it is 7.3.3, I'd like to see the pg_stats rows for\n> a_acc.ids_vid_doc and a_vid_doc.ids.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n\n", "msg_date": "Tue, 19 Aug 2003 05:16:16 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad join performance" }, { "msg_contents": "Ok,\nthanks for the help\nand best regards.\nivan.\n\nTom Lane wrote:\n\n> pginfo <[email protected]> writes:\n> > sorry for my bad.\n> > My production server is 7.3.7, but the development is 7.3.1 and I ran the\n> > tests on 7.3.1.\n>\n> > It is courios that on 7.3.1 the query is not constantly bad.\n> > From time to time it is running well.\n>\n> Yeah, the mergejoin estimation bug doesn't bite in every case (if it\n> did, we'd have found it before release ;-)). Please update to 7.3.4.\n>\n> regards, tom lane\n\n\n\n", "msg_date": "Tue, 19 Aug 2003 05:25:42 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad join performance" }, { "msg_contents": "pginfo <[email protected]> writes:\n> sorry for my bad.\n> My production server is 7.3.7, but the development is 7.3.1 and I ran the\n> tests on 7.3.1.\n\n> It is courios that on 7.3.1 the query is not constantly bad.\n> From time to time it is running well.\n\nYeah, the mergejoin estimation bug doesn't bite in every case (if it\ndid, we'd have found it before release ;-)). Please update to 7.3.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Aug 2003 00:24:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad join performance " } ]
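For completeness, Stephan's composite-index suggestion spelled out (the index name is arbitrary). As the rest of the thread shows, the underlying problem was a 7.3.1 mergejoin estimation bug and the real fix was upgrading to 7.3.4, but the index gives the planner the option of filtering on IDS while still delivering IDS_VID_DOC in join order:

CREATE INDEX i_a_acc_ids_ids_vid_doc ON a_acc (ids, ids_vid_doc);
ANALYZE a_acc;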
[ { "msg_contents": "I have found many reasons not to index small tables (see .\nBut I still have questions.\n\n1. How small is small enough?\n2. am I supposed to drop primary key index on so called 'label tables'\nknowing I am using this pk in join clause?\n3. Is it good to drop index on small table If I know I have created VIEW and\njoined large table and the small one\nand I have where condition on that particular colum?\nIlustrative example:\ntable small\n{\nid -- primary key\ncolum_for_where_condition -- index or not to index?\n}\ntable large\n{\n...\nfk_for_small_table -- indexed\n...\n}\ncreate view as select .....\ninner join on common_column ..\nI have migrated from mysql .\nI have found that complex view expression takes about 10 sec using ordinary\nplanner,\nbut genetic alg takes about 5 sec. Strange?\nI can give details if someone is interested.\n\nRegards,\nAlvis\n\n\n", "msg_date": "Wed, 20 Aug 2003 13:01:16 +0300", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "When NOT to index small tables?" }, { "msg_contents": "On Wed, Aug 20, 2003 at 13:01:16 +0300,\n [email protected] wrote:\n> I have found many reasons not to index small tables (see .\n> But I still have questions.\n> \n> 1. How small is small enough?\n\nUnless you think maintaining the indexes is a significant overhead, you\nshouldn't worry about it as the planner will choose whether or not to\nuse them according to what it thinks is faster.\n\n> 2. am I supposed to drop primary key index on so called 'label tables'\n> knowing I am using this pk in join clause?\n\nI don't think you want to drop any indexes that are used to enforce\nconstraints.\n\n> 3. Is it good to drop index on small table If I know I have created VIEW and\n> joined large table and the small one\n> and I have where condition on that particular colum?\n\nIf you have some other reason for creating the index, you probably don't\nwant to drop it to try to speed up queries. If there isn't some reason you\nhave for keeping the index, then you might want to drop it if it isn't\nhelping your queries as maintaining the index will slow down queries\nthat modify the table.\n", "msg_date": "Wed, 20 Aug 2003 08:36:24 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: When NOT to index small tables?" } ]
[ { "msg_contents": "I try to increase my no. of connections of the Postmaster in my client's\nSolaris9 box but fails:\n\nWhen startup using bin/postmaster -N 300 -B 2000 -D /export/data, I got the\nmessage:\n\nIpcSemaphoreCreate: semget(key=5432004, num=17, 03600) failed: No space left\non device\n\nThis error does *not* mean that you have run out of disk space.\n\nIt occurs either because system limit for the maximum number of\nsemaphore sets (SEMMNI), or the system wide maximum number of\nsemaphores (SEMMNS), would be exceeded. You need to raise the\nrespective kernel parameter. Look into the PostgreSQL documentation\nfor details.\n\nSo what should I set for SEMMNI and SEMMNS? Any suggestion?\n\nThe /etc/system file (partial) is set with the following:\n\nset shmsys:shminfo_shmmax=0x7fffffff\n\nset rlim_fd_max=8192\nset rlim_fd_cur=4096\n\nRegards,\nTam MK\n\n\n", "msg_date": "Thu, 21 Aug 2003 19:56:57 +0800", "msg_from": "\"Tam MK\" <[email protected]>", "msg_from_op": true, "msg_subject": "More connections in Solaris9" } ]
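For reference, the usual fix on Solaris 9 is to raise the semaphore limits in /etc/system and reboot. PostgreSQL allocates one semaphore set of 17 semaphores for roughly every 16 allowed connections, so -N 300 needs about 20 sets and a little over 300 semaphores in addition to whatever other software already uses. The values below are only an illustrative sketch, not tuned recommendations:

    set semsys:seminfo_semmni=64
    set semsys:seminfo_semmns=1024
    set semsys:seminfo_semmsl=32

A reboot is required before the new limits take effect.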
[ { "msg_contents": "Hi,\n\nI'm having a performance problem in postgresql.\n\nI have a rather complex view (attached) which, on itself, executes very\nfast, as it should. Normally this view is unordered. When I order the\nview itself (see comments in attachment), the view executes with about\nthe same speed since the field i'm sorting on has an index.\nHowever, i dont want the view to be presorted, but sort it in the\nqueries that use the view. When I do that, the index I have on that\nfield seems to be ignored. It stretches so far that, when I sort the\nview on A and sort the query on A too, the query will try to sort\n_again_ _without_ index and thus lose all performance.\n\nI've attached the query-plans for the different cases.\npreordered: means the view itself is sorted\nwith/without sorting: tells whether the query is sorted\n\nNote: the \"NOT NULL\" in the queries doesn't affect performance\n\nWith kind regards,\n\n\tMathieu \n", "msg_date": "Thu, 21 Aug 2003 15:35:38 +0200", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "Sorting a query on a view ignores an index" }, { "msg_contents": "Mathieu De Zutter <[email protected]> writes:\n> However, i dont want the view to be presorted, but sort it in the\n> queries that use the view. When I do that, the index I have on that\n> field seems to be ignored. It stretches so far that, when I sort the\n> view on A and sort the query on A too, the query will try to sort\n> _again_ _without_ index and thus lose all performance.\n\nThis is a limitation of the 7.3 query planner. 7.4 should do better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Aug 2003 10:59:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting a query on a view ignores an index " }, { "msg_contents": "On Thu, Aug 21, 2003 at 10:59:11AM -0400, Tom Lane wrote:\n> Mathieu De Zutter <[email protected]> writes:\n> > However, i dont want the view to be presorted, but sort it in the\n> > queries that use the view. When I do that, the index I have on that\n> > field seems to be ignored. It stretches so far that, when I sort the\n> > view on A and sort the query on A too, the query will try to sort\n> > _again_ _without_ index and thus lose all performance.\n> \n> This is a limitation of the 7.3 query planner. 7.4 should do better.\n\nOk I'll have to live with that I guess.\nApart from avoiding views or subselects when sorting afterwards and\nputting the whole bunch in a huge SQL statement (which i'll have to\nproduce on-the-fly), do you have an other alternative? \nThe 2 seconds is way to much, as the database will eventually run on a\nmachine that is 10 times slower.\n\nWith kind regards,\n\n\tMathieu\n", "msg_date": "Thu, 21 Aug 2003 18:24:48 +0200", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorting a query on a view ignores an index" }, { "msg_contents": "Mathieu De Zutter <[email protected]> writes:\n> Apart from avoiding views or subselects when sorting afterwards and\n> putting the whole bunch in a huge SQL statement (which i'll have to\n> produce on-the-fly), do you have an other alternative? \n\nSee if you can avoid the subselects in the view's SELECT list. 
That's\nwhat's preventing 7.3 from doing a good job.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Aug 2003 13:02:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting a query on a view ignores an index " }, { "msg_contents": "On Thu, Aug 21, 2003 at 01:02:08PM -0400, Tom Lane wrote:\n> Mathieu De Zutter <[email protected]> writes:\n> > Apart from avoiding views or subselects when sorting afterwards and\n> > putting the whole bunch in a huge SQL statement (which i'll have to\n> > produce on-the-fly), do you have an other alternative? \n> \n> See if you can avoid the subselects in the view's SELECT list. That's\n> what's preventing 7.3 from doing a good job.\n\nFirst of all, I've installed 7.4beta1 and the performance is as should be\nexpected. Good, I'll try to convince the admin to upgrade to 7.4 when it\ngets released (though that machine I will eventually have to use, runs \ndebian/stable atm, with pgsql 7.2).\n\nSecond, I have tried to eliminate the subselects by (left) joining some\ntables (I only kept a part of the original view to do this(1)). That made \nme also add a GROUP BY clause to remove duplicates. Comparing to the \nresult-equivalent subselect-version, it has an enormous performance hit\n\n(1) I only kept one subselect and rearranged it to left join's\n\nLimit (cost=2169.31..2169.36 rows=20 width=140) (actual\n\t time=526.12..526.14 rows=20 loops=1)\n-> Sort (cost=2169.31..2185.79 rows=6591 width=140) (actual\n\t\ttime=526.11..526.12 rows=20 loops=1)\n Sort Key: search_title\n -> Subquery Scan search_song\n\t\t(cost=1254.25..1484.93 rows=6591 width=140)\n \t(actual time=149.29..423.16 rows=6380 loops=1)\n -> GroupAggregate (cost=1254.25..1419.02 rows=6591\n width=49) (actual time=149.28..407.04 rows=6380 loops=1)\n -> Sort (cost=1254.25..1270.73 rows=6591 width=49) (actual\n\t\t\t\t time=147.92..150.75 rows=6380 loops=1)\n Sort Key: s.id, s.title, s.\"year\"\netc...\n\nRunning half a second. It looks like the grouping is messing it up.\nNo index seems to be used either?\n(s.id is primary key, s.title = search_title, with index\n search_title_idx)\nWhile the subselect-version shows no sort in its plan and runs in 2msec\nand shows \"Index Scan using song_title_idx on song s\" \n(both queries are sorted on search_title)\n\nI'm fairly sure the two queries were written as they should.\nIf you want more info on how I got this result, just tell me. I think my\nmail is yet long enough.\n\nOh, important detail, these results are obtained with 7.4beta1, I\nhaven't checked it in 7.3\n\nWith kind regards,\n\n\tMathieu\n", "msg_date": "Thu, 21 Aug 2003 22:56:59 +0200", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorting a query on a view ignores an index" } ]
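To make the two shapes being discussed concrete, here is a sketch with hypothetical names, loosely modelled on the song/search_title columns mentioned in the follow-up rather than the actual view definition from the attachment:

    -- a subselect in the view's SELECT list, the form the 7.3 planner has trouble with:
    create view search_song_subselect as
      select s.id,
             s.title as search_title,
             (select count(*) from performance p where p.id_song = s.id) as nr_performances
        from song s;

    -- the join/GROUP BY rewrite of the same view:
    create view search_song_joined as
      select s.id,
             s.title as search_title,
             count(p.id_song) as nr_performances
        from song s
        left join performance p on p.id_song = s.id
       group by s.id, s.title;

As the follow-up shows, the rewrite is not automatically a win; the GroupAggregate can cost more than the subselects ever did, so EXPLAIN ANALYZE both forms on realistic data before settling on either.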
[ { "msg_contents": "Hi,\n\nAfter my first mail, I found a better testcase (well it's a about the\nsame, but you have a better look to compare).\nSee attachment prob-query.sql\n\nThe ORDER BY in the FROM clause uses the index.\nThe last ORDER BY does not use the index. \nThey should be the same...\n\nThe query plans are identical to the plans in my previous mail.\n\nWith kind regards,\n\n\tMathieu\n\nP.S.: i just noted that i forgot to attach to previous mail, i'll\nattach everything now.", "msg_date": "Thu, 21 Aug 2003 16:06:40 +0200", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "A better look on the same problem (ignored index)" } ]
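The attachment itself is not in the archive, but the behaviour being described boils down to a pair of queries along these lines, with complex_view standing in for the poster's multi-table view and title for the indexed sort column; only the position of the ORDER BY differs:

    -- sort written inside the FROM-clause subselect: the index on the sort column is used
    select * from (select * from complex_view order by title) as v limit 20;

    -- the same sort applied from outside: 7.3 plans an explicit sort step instead
    select * from (select * from complex_view) as v order by title limit 20;

As the earlier follow-up notes, 7.4beta1 already handles both forms as expected.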
[ { "msg_contents": "http://195.199.65.92/~horvaths/pgperformance.html\n\nOne of my friend do these tests. We think about the best filesystem for\nthe Linux/Postgres systems.\n\nThe four test was:\n\nteszt1 - 10.000 inserts\nteszt2 - 10.inserts, 10 in one trans.\nteszt3 - 14.000.000 inserts, 1.000 in one trans.\nteszt4 - \"Select count(*)\" on the big table.\n\nWhat test are interesting? Plese give us tips and ideas. The guy has time\nfor other test.\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Thu, 21 Aug 2003 16:51:05 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": true, "msg_subject": "Tests" }, { "msg_contents": "On 21 Aug 2003 at 16:51, Tomka Gergely wrote:\n\n> http://195.199.65.92/~horvaths/pgperformance.html\n> \n> One of my friend do these tests. We think about the best filesystem for\n> the Linux/Postgres systems.\n> \n> The four test was:\n> \n> teszt1 - 10.000 inserts\n> teszt2 - 10.inserts, 10 in one trans.\n> teszt3 - 14.000.000 inserts, 1.000 in one trans.\n> teszt4 - \"Select count(*)\" on the big table.\n> \n> What test are interesting? Plese give us tips and ideas. The guy has time\n> for other test.\n\n- Try creating table without OIDs and repeat runs\n- Try changing number of inserts per transactions.\n\nI tried similar test on my home machine last week. It gave me 500K records in \nabout 54 seconds on 7.3.3. That translates to roughly 3 minutes for 1.4M \nrecords. You sure something isn't right here?\n\nWhat is the hardware? Can you try it with 7.4CVS?\n\nBye\n Shridhar\n\n--\nXIIdigitation, n.:\tThe practice of trying to determine the year a movie was \nmade\tby deciphering the Roman numerals at the end of the credits.\t\t-- Rich \nHall, \"Sniglets\"\n\n", "msg_date": "Thu, 21 Aug 2003 20:29:20 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "Tomka Gergely wrote:\n> http://195.199.65.92/~horvaths/pgperformance.html\n> \n> One of my friend do these tests. We think about the best filesystem for\n> the Linux/Postgres systems.\n> \n> The four test was:\n> \n> teszt1 - 10.000 inserts\n> teszt2 - 10.inserts, 10 in one trans.\n> teszt3 - 14.000.000 inserts, 1.000 in one trans.\n> teszt4 - \"Select count(*)\" on the big table.\n> \n> What test are interesting? Plese give us tips and ideas. The guy has time\n> for other test.\n\nIt's a shame you didn't test ufs+softupdates. I'd be curious to see how\nit stacked up against the others.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Thu, 21 Aug 2003 12:27:41 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "2003-08-21 ragyogó napján Bill Moran ezt üzente:\n\n> Tomka Gergely wrote:\n> > http://195.199.65.92/~horvaths/pgperformance.html\n> >\n> > One of my friend do these tests. We think about the best filesystem for\n> > the Linux/Postgres systems.\n> >\n> > The four test was:\n> >\n> > teszt1 - 10.000 inserts\n> > teszt2 - 10.inserts, 10 in one trans.\n> > teszt3 - 14.000.000 inserts, 1.000 in one trans.\n> > teszt4 - \"Select count(*)\" on the big table.\n> >\n> > What test are interesting? Plese give us tips and ideas. The guy has time\n> > for other test.\n>\n> It's a shame you didn't test ufs+softupdates. I'd be curious to see how\n> it stacked up against the others.\n\nShame? 
I smell here a harcore BSD fighter :) Maybe sometime he run\ntests on *SD, but now we need linux data.\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Thu, 21 Aug 2003 19:55:18 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tests" }, { "msg_contents": "Tomka Gergely wrote:\n> 2003-08-21 ragyogďż˝ napjďż˝n Bill Moran ezt ďż˝zente:\n> \n> \n>>Tomka Gergely wrote:\n>>\n>>>http://195.199.65.92/~horvaths/pgperformance.html\n>>>\n>>>One of my friend do these tests. We think about the best filesystem for\n>>>the Linux/Postgres systems.\n>>>\n>>>The four test was:\n>>>\n>>>teszt1 - 10.000 inserts\n>>>teszt2 - 10.inserts, 10 in one trans.\n>>>teszt3 - 14.000.000 inserts, 1.000 in one trans.\n>>>teszt4 - \"Select count(*)\" on the big table.\n>>>\n>>>What test are interesting? Plese give us tips and ideas. The guy has time\n>>>for other test.\n>>\n>>It's a shame you didn't test ufs+softupdates. I'd be curious to see how\n>>it stacked up against the others.\n> \n> Shame? I smell here a harcore BSD fighter :)\n\nWell, I suppose ...\n\n> Maybe sometime he run\n> tests on *SD, but now we need linux data.\n\n\"He\" has had it on his list of things to do for months.\n\nAnd if I ever get enough free moments, I will test *both* BSD and Linux,\nbecause I'm terribly biased towards BSD because that's all I use.\nIn addition to being interested in how FreeBSD's ufs+softupdates compares\nto the variety of Linux journalling filesystems, I'm also interested to\ncompare FreeBSD's performance on ext2fs and FAT to Linux's peformance on\nthose same filesystems.\n\nIt's a \"shame\" because I simply haven't found the time yet.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Thu, 21 Aug 2003 14:16:59 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "On Thu, 2003-08-21 at 14:16, Bill Moran wrote:\n> >>>What test are interesting? Plese give us tips and ideas. The guy has time\n> >>>for other test.\n> >>\n> >>It's a shame you didn't test ufs+softupdates. I'd be curious to see how\n> >>it stacked up against the others.\n> > \n> > Shame? I smell here a harcore BSD fighter :)\n> \n> Well, I suppose ...\n> \n\nWell I'm not a hardcore bsd fighter and I'd like to see how it stacks up\nas well. UFS+softupdates is supposed to be a very safe combination, if\nit performs well enough I could see a recommendation for it for those\nwho are willing to look beyond linux.\n\nRobert Treat \n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "22 Aug 2003 16:47:45 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "2003-08-22 ragyogó napján Robert Treat ezt üzente:\n\n> On Thu, 2003-08-21 at 14:16, Bill Moran wrote:\n> > >>>What test are interesting? Plese give us tips and ideas. The guy has time\n> > >>>for other test.\n> > >>\n> > >>It's a shame you didn't test ufs+softupdates. I'd be curious to see how\n> > >>it stacked up against the others.\n> > >\n> > > Shame? I smell here a harcore BSD fighter :)\n> >\n> > Well, I suppose ...\n> >\n>\n> Well I'm not a hardcore bsd fighter and I'd like to see how it stacks up\n> as well. 
UFS+softupdates is supposed to be a very safe combination, if\n> it performs well enough I could see a recommendation for it for those\n> who are willing to look beyond linux.\n\nThe guy who do the test have only a few weeks *bsd-experience, and i dont\nhave bsd experience at all. The guy now planning tests on BSD, but he\nneed some time to build up a good relationship with the *BSD.\n\nPs.: i am a harcore linux-fgihter, so sorry :)\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Fri, 22 Aug 2003 22:54:05 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tests" }, { "msg_contents": "Tomka Gergely wrote:\n> 2003-08-22 ragyogďż˝ napjďż˝n Robert Treat ezt ďż˝zente:\n> \n>>On Thu, 2003-08-21 at 14:16, Bill Moran wrote:\n>>\n>>>>>>What test are interesting? Plese give us tips and ideas. The guy has time\n>>>>>>for other test.\n>>>>>\n>>>>>It's a shame you didn't test ufs+softupdates. I'd be curious to see how\n>>>>>it stacked up against the others.\n>>>>\n>>>>Shame? I smell here a harcore BSD fighter :)\n>>>\n>>>Well, I suppose ...\n>>\n>>Well I'm not a hardcore bsd fighter and I'd like to see how it stacks up\n>>as well. UFS+softupdates is supposed to be a very safe combination, if\n>>it performs well enough I could see a recommendation for it for those\n>>who are willing to look beyond linux.\n> \n> \n> The guy who do the test have only a few weeks *bsd-experience, and i dont\n> have bsd experience at all. The guy now planning tests on BSD, but he\n> need some time to build up a good relationship with the *BSD.\n> \n> Ps.: i am a harcore linux-fgihter, so sorry :)\n\nWell, as much as I _am_ a die-hard BSD geek, I'm far more interested in\nknowing what platform is really best when I need a top-notch PostgreSQL\nserver.\n\nI'm going to try to force some time this weekend to do some tests ...\nwe'll see if I succeed ...\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Fri, 22 Aug 2003 17:00:44 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "Tomka,\n\nDid you get my test suggestion? It never hit the lists, so I wonder if you \ngot it ....\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 22 Aug 2003 15:17:33 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "On Fri, 2003-08-22 at 16:54, Tomka Gergely wrote:\n> 2003-08-22 ragyogó napján Robert Treat ezt üzente:\n> \n> > On Thu, 2003-08-21 at 14:16, Bill Moran wrote:\n> > > >>>What test are interesting? Plese give us tips and ideas. The guy has time\n> > > >>>for other test.\n> > > >>\n> > > >>It's a shame you didn't test ufs+softupdates. I'd be curious to see how\n> > > >>it stacked up against the others.\n> > > >\n> > > > Shame? I smell here a harcore BSD fighter :)\n> > >\n> > > Well, I suppose ...\n> > >\n> >\n> > Well I'm not a hardcore bsd fighter and I'd like to see how it stacks up\n> > as well. UFS+softupdates is supposed to be a very safe combination, if\n> > it performs well enough I could see a recommendation for it for those\n> > who are willing to look beyond linux.\n> \n> The guy who do the test have only a few weeks *bsd-experience, and i dont\n> have bsd experience at all. 
The guy now planning tests on BSD, but he\n> need some time to build up a good relationship with the *BSD.\n> \n\nAnother thought would be linux w/ xfs\n\nAlso, can you post more complete hardware/ os info? \n\nOh, and vs 7.4beta1 would be great too. \n\n:-)\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "22 Aug 2003 18:25:03 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "\nOn 22/08/2003 22:00 Bill Moran wrote:\n\n> [snip] Well, as much as I _am_ a die-hard BSD geek, I'm far more \n> interested in\n> knowing what platform is really best when I need a top-notch PostgreSQL\n> server.\n> \n> I'm going to try to force some time this weekend to do some tests ...\n> we'll see if I succeed ...\n\nI, for one, would be very interested to see your results (can't you find \nsomething better to at the weekend than **** about with computers ?:)\n\n<selfish-mode>\nWhat I'd really be interested in is a comparison of Linux vs BSD using \neach OS's variations of file system on the same single-processor Intel/AMD \nbased hardware.\n</selfish-mode>\n\nSelfishness and sillyness aside, I'm sure your tests will of interest to \nus all. Thanks in advance\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Sat, 23 Aug 2003 00:32:00 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "Paul Thomas wrote:\n> \n> On 22/08/2003 22:00 Bill Moran wrote:\n> \n>> [snip] Well, as much as I _am_ a die-hard BSD geek, I'm far more \n>> interested in\n>> knowing what platform is really best when I need a top-notch PostgreSQL\n>> server.\n>>\n>> I'm going to try to force some time this weekend to do some tests ...\n>> we'll see if I succeed ...\n> \n> I, for one, would be very interested to see your results (can't you find \n> something better to at the weekend than **** about with computers ?:)\n\nUnfortunately, no. I've been trying to get a date for a month and I'm\nabout ready to give up ;)\n\n> <selfish-mode>\n> What I'd really be interested in is a comparison of Linux vs BSD using \n> each OS's variations of file system on the same single-processor \n> Intel/AMD based hardware.\n> </selfish-mode>\n\nThat was what I was hoping to do.\n\n> Selfishness and sillyness aside, I'm sure your tests will of interest to \n> us all. Thanks in advance\n\nOh great, now I feel obligated ... well, I guess I'd better get started.\n\nI'll post the results for as far as I get late sometime Sunday. I can't\npromise they'll be complete at that point, but we'll see what happens.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Fri, 22 Aug 2003 19:55:26 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "Tomka,\n\nSince this didn't post last time:\n\nAnother good test to do would be one that measures simultaenous steaming \nread-write. For example:\n\ncreate table_a, with at least 10,000 records, where field1 is the PK and \nfield2 is a text field. 
\ncreate table_b, with 500,000 or more records, where field2 is an FK to \ntable_a, and field_3 is a text field.\n\ndo the following:\n\nUPDATE table_b SET field3 = field3 || 'something'\nWHERE EXISTS (select field1 FROM table_a\n\twhere table_a.field1 = table_b.field2\n\tAND table_a.field_2 = 'x')\n\nFor the test to be most effective, the condition above should affect about \n100,000 records of table_a, and the records should not be contiguos on disk. \nFurther, tablea.field2 should not be indexed in order to force a disk scan.\n\nAlso another test I'd really like to see is hardware raid (Adaptec, LSI) \nagainst Linux software raid, and 5-disk RAID 5 against 4-disk RAID 1+0.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 22 Aug 2003 17:00:17 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "2003-08-22 ragyogó napján Josh Berkus ezt üzente:\n\n> Tomka,\n>\n> Did you get my test suggestion? It never hit the lists, so I wonder if you\n> got it ....\n\nYes, we got it, and we use your ideas. Thanks.\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Sat, 23 Aug 2003 07:20:01 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tests" }, { "msg_contents": "On Fri, 22 Aug 2003, Josh Berkus wrote:\n\n> Also another test I'd really like to see is hardware raid (Adaptec, LSI) \n> against Linux software raid, and 5-disk RAID 5 against 4-disk RAID 1+0.\n\nIt would be nice to cross those, so you have RAID5 sw vs RAID5 hw, vs \nRAID1+0 sw vs RAID1+0 hw.\n\nNow if only I had the hardware. Time to scrounge.\n\n", "msg_date": "Mon, 25 Aug 2003 09:15:07 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "scott.marlowe wrote:\n> On Fri, 22 Aug 2003, Josh Berkus wrote:\n> \n>>Also another test I'd really like to see is hardware raid (Adaptec, LSI) \n>>against Linux software raid, and 5-disk RAID 5 against 4-disk RAID 1+0.\n> \n> It would be nice to cross those, so you have RAID5 sw vs RAID5 hw, vs \n> RAID1+0 sw vs RAID1+0 hw.\n> \n> Now if only I had the hardware. Time to scrounge.\n\nI just wanted to comment, that I don't know where you guys find the time\nand money to do all this.\n\nI pretty much blew my entire weekend running tests, and I'm still not done\nenough to post any results.\n\nThe upshot being, thanks to everyone who donates time, money, spare parts\nor whatever. Sometimes I forget how valuable all that is. 
And I don't\nthink it gets said often enough how much people like me appreciate all the\nhard work the developers and testers and everyone else involved does.\nI mean, PostgreSQL (like Linux or BSD or any other open-source project) is\na fantastic piece of software!\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Mon, 25 Aug 2003 11:23:08 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "2003-08-25 ragyogó napján scott.marlowe ezt üzente:\n\n> On Fri, 22 Aug 2003, Josh Berkus wrote:\n>\n> > Also another test I'd really like to see is hardware raid (Adaptec, LSI)\n> > against Linux software raid, and 5-disk RAID 5 against 4-disk RAID 1+0.\n>\n> It would be nice to cross those, so you have RAID5 sw vs RAID5 hw, vs\n> RAID1+0 sw vs RAID1+0 hw.\n>\n> Now if only I had the hardware. Time to scrounge.\n\nHm.\nAt first, now we dont have raid hardware, sorry.\nSecond, in my practice the harware raid slower than sw raid, because now\nthe cheapest computing power is the cpu. So i dont like hardware raid\nsolutions, and even on ibm pc-servers i try not using them.\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Mon, 25 Aug 2003 17:27:32 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tests" }, { "msg_contents": "http://mail.sth.sze.hu/~hsz/sql/\n\nNew, upgraded test results. As we see, the developers works hard, and with\ngood results. Million thanks and congratulations.\n\nSorry *BSD-lovers, if you send a new hard drive, our tester can do bsd\ntests also. Life is hard.\n\nAnd last, but not least, thanks for the tests, Horváth Szabolcs\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Wed, 27 Aug 2003 14:40:02 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tests" }, { "msg_contents": "\nNicely done!\n\nThanks,\n\nBalazs\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Tomka\nGergely\nSent: Wednesday, August 27, 2003 5:40 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Tests\n\nhttp://mail.sth.sze.hu/~hsz/sql/\n\nNew, upgraded test results. As we see, the developers works hard, and\nwith\ngood results. Million thanks and congratulations.\n\nSorry *BSD-lovers, if you send a new hard drive, our tester can do bsd\ntests also. Life is hard.\n\nAnd last, but not least, thanks for the tests, Horváth Szabolcs\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n", "msg_date": "Wed, 27 Aug 2003 22:24:48 -0700", "msg_from": "\"Balazs Wellisch\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" }, { "msg_contents": "\nWhat I thought was really interesting in this test was a dramatic\ndifference for ext3 mounted \"writeback\" in the \"h1\" test, 1 minute vs. 9\nminutes compared to the default \"ordered\" mount option. 
This was the\n\"add new constraint\" test.\n\n---------------------------------------------------------------------------\n\nBalazs Wellisch wrote:\n> \n> Nicely done!\n> \n> Thanks,\n> \n> Balazs\n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Tomka\n> Gergely\n> Sent: Wednesday, August 27, 2003 5:40 AM\n> To: [email protected]\n> Subject: Re: [PERFORM] Tests\n> \n> http://mail.sth.sze.hu/~hsz/sql/\n> \n> New, upgraded test results. As we see, the developers works hard, and\n> with\n> good results. Million thanks and congratulations.\n> \n> Sorry *BSD-lovers, if you send a new hard drive, our tester can do bsd\n> tests also. Life is hard.\n> \n> And last, but not least, thanks for the tests, Horv?th Szabolcs\n> \n> -- \n> Tomka Gergely\n> \"S most - vajon barb?rok n?lk?l mi lesz vel?nk?\n> ?k m?giscsak megold?s voltak valahogy...\"\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 30 Aug 2003 00:25:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tests" } ]
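For anyone who wants to reproduce the streaming read-write test Josh sketches earlier in this thread, here is a rough rendering of the setup; the table sizes and field names are his, while the exact DDL and the data loading are assumptions:

    create table table_a (field1 integer primary key, field2 text);
    create table table_b (field2 integer references table_a (field1), field3 text);

    -- load roughly 10,000 rows into table_a and 500,000 into table_b,
    -- deliberately leaving table_a.field2 unindexed, then time:
    update table_b
       set field3 = field3 || 'something'
     where exists (select field1
                     from table_a
                    where table_a.field1 = table_b.field2
                      and table_a.field2 = 'x');

The point of the exercise is that the update reads table_a while rewriting large, non-contiguous parts of table_b, which is exactly the simultaneous read/write pattern the filesystem comparison is meant to stress.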
[ { "msg_contents": "> Apart from avoiding views or subselects when sorting afterwards and\n> putting the whole bunch in a huge SQL statement (which i'll have to\n> produce on-the-fly), do you have an other alternative? \n> The 2 seconds is way to much, as the database will eventually run on a\n> machine that is 10 times slower.\n\nSomething that isn't _totally_ clear is whether or not it is going to \nbe possible to make use of indices in the selection. If the postmaster \nmust assemble, out of disparate sources, a large collection of data, \nthe best trade-off may very well be to build the collection the best \nway the system knows how (perhaps NOT ordering this using the index you \nexpect), and sort it afterwards.\n\nSorting doesn't tend to be grieviously expensive except when finding \nthe query results is also grieviously expensive.\n\nI think you are assuming that the query would be quicker if it used the \nsorted index; that is an assumption that should be checked at the door, \nor at least checked somewhere.\n-- \n\"The main difference between an amateur crypto designer and a used car\nsalesman is that the used car salesman can probably drive and knows\nwhen he's lying.\" -- An Metet <[email protected]>\n", "msg_date": "Thu, 21 Aug 2003 16:11:37 -0400", "msg_from": "\"Your Name\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorting a query on a view ignores an index" } ]
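A quick way to check that assumption rather than leaving it at the door is to time both plans. The view and column names here are placeholders, and enable_sort is only toggled for the comparison, not as a setting to keep:

    explain analyze select * from some_view order by some_column;

    set enable_sort = off;   -- discourage the explicit sort so an index-ordered plan is tried if one is possible
    explain analyze select * from some_view order by some_column;
    reset enable_sort;

If the sort-based plan still wins, the planner's choice was right and the unused index is a non-issue.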
[ { "msg_contents": "\nHello,\n\nHere is my exp. regarding VIEW (create VIEW view_name AS SELECT ....)\noptimization.\nI have found strange (or at least not very intelligent) behavior of query\nplanner (v.7.3.4).\nGiven task is to take data from 14 tables, join, group, sort, etc. I\ncurrently use about ten views referencing each other.\n\nI have done a lot of job to adjust 'postgresql.conf' configuration.\nExcept common increase of shared_mem&buffer stuff I felt I was unable to do\nanything else.\nSo, initialy, time to generate the Monthly report was about 10 sec.\nI started to play with GEQO....\nScreaming success -- I downplayed query execution time about 3x JUST by\nsetting geqo_threshold=3.\nNow my Monthly report takes about 3 sec.\n(Users are running that sequence of select queries (I call it invocation of\nthe final view ;-) quite often for any given period of time --\n usually from one week to half a year (rarely) so it is essential part of my\ndbvs.\n\nMaybe my exp. will encourage somebody to play with scary genetic stuff. ;-)\n\nGood luck & Best Regards,\nAlvis\n\n", "msg_date": "Fri, 22 Aug 2003 13:11:19 +0300", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sorting a query on a view ignores an index" } ]
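For anyone who wants to repeat the experiment, the knobs involved are just the two GEQO settings; the threshold of 3 is the poster's value for his fourteen-table report, not a general recommendation, and the view name is a placeholder:

    set geqo = on;
    set geqo_threshold = 3;   -- use the genetic optimizer once a query joins at least this many relations
    explain analyze select * from monthly_report_view;

Both settings can be tried per session with SET before being made permanent in postgresql.conf, so the genetic planner can be limited to the reporting queries that actually benefit.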
[ { "msg_contents": "Hi--\n\nI had been thinking of the issues of multimaster replication and how to \ndo highly available, loadballanced clustering with PostgreSQL. Here is \nmy outline, and I am looking for comments on the limitations of how this \nwould work.\n\nSeveral PostgreSQL servers would share a virtual IP address, and would \ncoordinate among themselves which will act as \"Master\" for the purposes \nof a single transaction (but connection could be easier). SELECT \nstatements are handled exclusively by the transaction master while \nanything that writes to a database would be sent to all the the \n\"Masters.\" At the end of each transaction the systems would poll \neachother regarding whether they were all successful:\n\n1: Any system which is successful in COMMITting the transaction must \nignore any system which fails the transaction untill a recovery can be made.\n\n2: Any system which fails in COMMITting the transaction must cease to \nbe a master, provided that it recieves a signat from any other member of \nthe cluster that indicates that that member succeeded in committing the \ntransaction.\n\n3: If all nodes fail to commit, then they all remain masters.\n\nRecovery would be done in several steps:\n\n1: The database would be copied to the failed system using pg_dump.\n2: A current recovery would be done from the transaction log.\n3: This would be repeated in order to ensure that the database is up to \ndate.\n4: When two successive restores have been achieved with no new \nadditions to the database, the \"All Recovered\" signal is sent to the \ncluster and the node is ready to start processing again. (need a better \nway of doing this).\n\nNote: Recovery is the problem, I know. my model is only a starting \npoint for the purposes of discussion and trying to bring something to \nthe conversation.\n\nAny thoughts or suggestions?\n\nBest Wishes,\nChris Travers\n\n", "msg_date": "Sat, 23 Aug 2003 21:27:41 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": true, "msg_subject": "Replication Ideas" }, { "msg_contents": "On Sat, 2003-08-23 at 23:27, Chris Travers wrote:\n> Hi--\n> \n> I had been thinking of the issues of multimaster replication and how to \n> do highly available, loadballanced clustering with PostgreSQL. Here is \n> my outline, and I am looking for comments on the limitations of how this \n> would work.\n> \n> Several PostgreSQL servers would share a virtual IP address, and would \n> coordinate among themselves which will act as \"Master\" for the purposes \n> of a single transaction (but connection could be easier). 
SELECT \n> statements are handled exclusively by the transaction master while \n> anything that writes to a database would be sent to all the the \n> \"Masters.\" At the end of each transaction the systems would poll \n> eachother regarding whether they were all successful:\n> \n> 1: Any system which is successful in COMMITting the transaction must \n> ignore any system which fails the transaction untill a recovery can be made.\n> \n> 2: Any system which fails in COMMITting the transaction must cease to \n> be a master, provided that it recieves a signat from any other member of \n> the cluster that indicates that that member succeeded in committing the \n> transaction.\n> \n> 3: If all nodes fail to commit, then they all remain masters.\n> \n> Recovery would be done in several steps:\n> \n> 1: The database would be copied to the failed system using pg_dump.\n> 2: A current recovery would be done from the transaction log.\n> 3: This would be repeated in order to ensure that the database is up to \n> date.\n> 4: When two successive restores have been achieved with no new \n> additions to the database, the \"All Recovered\" signal is sent to the \n> cluster and the node is ready to start processing again. (need a better \n> way of doing this).\n> \n> Note: Recovery is the problem, I know. my model is only a starting \n> point for the purposes of discussion and trying to bring something to \n> the conversation.\n\nThis is vaguely similar to Two Phase Commit, which is a sine qua\nnon of distributed transactions, which is the s.q.n. of multi-master\nreplication.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Eternal vigilance is the price of liberty: power is ever \nstealing from the many to the few. The manna of popular liberty \nmust be gathered each day, or it is rotten... The hand entrusted \nwith power becomes, either from human depravity or esprit de \ncorps, the necessary enemy of the people. Only by continual \noversight can the democrat in office be prevented from hardening \ninto a despot: only by unintermitted agitation can a people be \nkept sufficiently awake to principle not to let liberty be \nsmothered in material prosperity... Never look, for an age when \nthe people can be quiet and safe. At such times despotism, like \na shrouding mist, steals over the mirror of Freedom\"\nWendell Phillips\n\n", "msg_date": "Sun, 24 Aug 2003 01:13:08 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "Ron Johnson wrote:\n\n>This is vaguely similar to Two Phase Commit, which is a sine qua\n>non of distributed transactions, which is the s.q.n. of multi-master\n>replication.\n>\n> \n>\n\nI may be wrong, but if I recall correctly, one of the problems with a \nstandard 2-phase commit is that if one server goes down, the other \nmasters cannot commit their transactions. This would make a clustered \ndatabase server have a downtime equivalent to the total downtime of all \nof its nodes. This is a real problem. Of course my understanding of \nTwo Phase Commit may be incorrect, in which case, I would appreciate it \nif someone could point out where I am wrong.\n\nIt had occurred to me that the issue was one of failure handling more \nthan one of concept. I.e. the problem is how one node's failure is \nhandled rather than the fundamental structure of Two Phase Commit. 
If a \nsingle node fails, we don't want that to take down the whole cluster, \nand I have actually revised my logic a bit more (to make it even \nsafer). In this I assume that:\n\n1: General failures on any one node are rare\n2: A failure is more likely to prevent a transaction from being \ncommitted than allow one to be committed.\n\nThis hot-failover solution requires a transparency from a client \nperspective-- i.e. the client should not have to choose a different \nserver should one go and should not need to know when a server comes \nback up. This also means that we need to assume that a load balancing \nsolution can be a part of the clustering solution. I would assume that \nthis would require a shared IP address for the public interface of the \nserver and a private communicatiions channel where each node has a \nseparate IP address (similar to Microsoft's implimentation of Network \nLoad Balancing). Also, different transactions within a single \nconnection should be able to be handled by different nodes, so if one \nnode goes down, users don't have to reconnect.\n\nSo here is my suggested logic for high availablility/load balanced \nclustering:\n\n1: All nodes recognize each user connection and delegage transactions \nrather than connections.\n\n2: At the beginning of a transaction, nodes decide who will take it. \nAny operation which does not change the information or schema of the \ndatabase is handled exclusively on that node. Other operations are \ndistributed across nodes.\n\n3: When the transaction is committed, the nodes \"vote\" on whether the \ncommitment of the transaction is valid. Majority rules, and the minority \nmust remove themselves from the cluster until they can synchronize their \ndatabases with the existing masters. If the vote is split 50/50 (i.e. \none node fails in a 2 node cluster), success is considered more likely \nto be valid than failure, and the node(s) which failed to commit the \ntransaction must remove themselves from the cluster until they can recover.\n\nBest Wishes,\nChris Travers\n\n\n\n", "msg_date": "Mon, 25 Aug 2003 10:06:22 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "On Mon, 2003-08-25 at 12:06, Chris Travers wrote:\n> Ron Johnson wrote:\n> \n> >This is vaguely similar to Two Phase Commit, which is a sine qua\n> >non of distributed transactions, which is the s.q.n. of multi-master\n> >replication.\n> >\n> > \n> >\n> \n> I may be wrong, but if I recall correctly, one of the problems with a \n> standard 2-phase commit is that if one server goes down, the other \n> masters cannot commit their transactions. This would make a clustered \n> database server have a downtime equivalent to the total downtime of all \n> of its nodes. This is a real problem. Of course my understanding of \n> Two Phase Commit may be incorrect, in which case, I would appreciate it \n> if someone could point out where I am wrong.\n\nNote that I didn't mean to imply that 2PC is sufficient to implement\nM-M. The DBMS designer(s) must decide what to do (like queue up\nchanges) if 2PC fails.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Our computers and their computers are the same color. 
The \nconversion should be no problem!\"\nUnknown\n\n", "msg_date": "Mon, 25 Aug 2003 12:38:16 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "On Mon, Aug 25, 2003 at 10:06:22AM -0700, Chris Travers wrote:\n> Ron Johnson wrote:\n> \n> >This is vaguely similar to Two Phase Commit, which is a sine qua\n> >non of distributed transactions, which is the s.q.n. of multi-master\n> >replication.\n> \n> I may be wrong, but if I recall correctly, one of the problems with a \n> standard 2-phase commit is that if one server goes down, the other \n> masters cannot commit their transactions.\n\nBefore the discussion goes any further, have you read the work related\nto Postgres-r? It's a substantially different animal from 2PC AFAIK.\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"Right now the sectors on the hard disk run clockwise, but I heard a rumor that\nyou can squeeze 0.2% more throughput by running them counterclockwise.\nIt's worth the effort. Recommended.\" (Gerry Pourwelle)\n", "msg_date": "Mon, 25 Aug 2003 14:24:41 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "Alvaro Herrera wrote:\n\n>Before the discussion goes any further, have you read the work related\n>to Postgres-r? It's a substantially different animal from 2PC AFAIK.\n>\n> \n>\nYes I have. Postgres-r is not a high-availability solution which is \ncapable of transparent failover, although it is a very useful project on \nits own.\n\nBest Wishes,\nChris Travers.\n\n", "msg_date": "Mon, 25 Aug 2003 11:36:20 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "Chris Travers <[email protected]> writes:\n> Yes I have. Postgres-r is not a high-availability solution which is \n> capable of transparent failover,\n\nWhat makes you say that? My understanding is it's supposed to survive\nloss of individual servers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Aug 2003 17:09:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas " }, { "msg_contents": "Tom Lane wrote:\n\n>Chris Travers <[email protected]> writes:\n> \n>\n>>Yes I have. Postgres-r is not a high-availability solution which is \n>>capable of transparent failover,\n>> \n>>\n>\n>What makes you say that? My understanding is it's supposed to survive\n>loss of individual servers.\n>\n>\t\t\tregards, tom lane\n>\n>\n> \n>\nMy mistake. I must have gotten them confused with another \n(asynchronous) replication project.\n\nBest Wishes,\nChris Travers\n\n", "msg_date": "Mon, 25 Aug 2003 15:15:25 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "A long time ago, in a galaxy far, far away, \"Bupp Phillips\" <[email protected]> wrote:\n>I have a table that has 103,000 records in it (record size is about\n>953 bytes) and when I do a select all (select * from <table>) it takes\n>a whopping 30 secs for the data to return!!\n \n>SQLServer on the other hand takes 6 secs, but you can also use what is\n>called a firehose cursor, which will return the data in < 1 sec. \n \n>I have done everything that I know how to speed this up, does anyone\n>have any advise?\n\nHave you VACUUMed the table? 
30 seconds to start getting data back\nfrom such a query _seems_ a liittle high.\n\nIt would be quite a reasonable idea to open up a CURSOR and request\nthe data in more byte-sized pieces so that the result set wouldn't\nforcibly bloat in any one spot.\n\nYou start by submitting the cursor definition, inside a transaction:\n begin transaction;\n declare cursor my_fire_hose for select * from <table>;\n\nYou then iterate over the following, which fetches 1000 rows at a time:\n fetch forward 1000 in my_fire_hose;\n\nThat should cut down the time it takes to start getting records to\nnear enough to zero...\n-- \noutput = reverse(\"gro.gultn\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/lisp.html\n\"Microsoft is sort of a mixture between the Borg and the\nFerengi. Combine the Borg marketing with Ferengi networking...\"\n-- Andre Beck in dcouln\n", "msg_date": "Mon, 25 Aug 2003 18:28:09 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "\n\nOn Mon, 25 Aug 2003, Tom Lane wrote:\n\n> Chris Travers <[email protected]> writes:\n> > Yes I have. Postgres-r is not a high-availability solution which is\n> > capable of transparent failover,\n>\n> What makes you say that? My understanding is it's supposed to survive\n> loss of individual servers.\n\nHow does it play 'catch up' went a server comes back online?\n\nnote that I did go through the 'docs' on how it works, and am/was quite\nimpressed at what they were doing ... but, if I have a large network, say,\nand one group is connecting to ServerA, and another group with ServerB,\nwhat happens when ServerA and ServerB loose network connectivity for any\nperiod of time? How do they re-sync when the network comes back up again?\n", "msg_date": "Tue, 26 Aug 2003 03:01:26 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas " }, { "msg_contents": "WARNING: This is getting long ...\n\nPostgres-R is a very interesting and inspiring idea. And I've been \nkicking that concept around for a while now. What I don't like about it \nis that it requires fundamental changes in the lock mechanism and that \nit is based on the assumption of very low lock conflict.\n\n<explain-PG-R>\nIn Postgres-R a committing transaction sends it's workset (WS - a list \nof all updates done in this transaction) to the group communication \nsystem (GC). The GC guarantees total order, meaning that all nodes will \nreceive all WSs in the same order, no matter how they have been sent.\n\nIf a node receives back it's own WS before any error occured, it goes \nahead and finalizes the commit. If it receives a foreign WS, it has to \napply the whole WS and commit it before it can process anything else. If \nnow a local transaction, in progress or while waiting for it's WS to \ncome back, holds a lock that is required to process such remote WS, the \nlocal transaction needs to be aborted to unlock it's resources ... it \nlost the total order race.\n</explain-PG-R>\n\nPostgres-R requires that all remote WSs are applied and committed before \na local transaction can commit. Otherwise it couldn't correctly detect a \nlock conflict. So there will not be any read ahead. 
And since the total \norder really counts here, it cannot apply any two remote WSs in \nparallel, a race condition could possibly exist and a later WS in the \ntotal order runs faster and locks up a previous one, so we have to \nsqueeze all remote WSs through one single replication work process. And \nall the locally parallel executed transactions that wait for their WSs \nto come back have to wait until that poor little worker is done with the \nwhole pile. Bye bye concurrency. And I don't know how the GC will deal \nwith the backlog either. Could well choke on it.\n\nI do not see how this will scale well in a multi-SMP-system cluster. At \nleast the serialization of WSs will become a horror if there is \nsignificant lock contention like in a standard TPC-C on the district row \ncontaining the order number counter. I don't know for sure, but I \nsuspect that with this kind of bottleneck, Postgres-R will have to \nrollback more than 50% of it's transactions when there are more than 4 \nnodes under heavy load (like in a benchmark run). That will suck ...\n\n\nBut ... initially I said that it is an inspiring concept ... soooo ...\n\nI am currently hacking around with some C+PL/TclU+Spread constructs that \nmight form a rude kind of prototype creature.\n\nMy changes to the Postgres-R concept are that there will be as many \nreplicating slave processes as there are in summary masters out in the \ncluster ... yes, it will try to utilize all the CPU's in the cluster! \nFor failover reliability, A committing transaction will hold before \nfinalizing the commit and send it's \"I'm ready\" to the GC. Every \nreplicator that reaches the same state send's \"I'm ready\" too. Spread \nguarantees in SAFE_MESS mode that messages are delivered to all nodes in \na group or that at least LEAVE/DISCONNECT messages are deliverd before. \nSo if a node receives more than 50% of \"I'm ready\", there would be a \nvery small gap where multiple nodes have to fail in the same split \nsecond so that the majority of nodes does NOT commit. A node that \nreported \"I'm ready\" but lost more than 50% of the cluster before \ncommitting has to rollback and rejoin or wait for operator intervention.\n\nNow the idea is to split up the communication into GC distribution \ngroups per transaction. So working master backends and associated \nreplication backends will join/leave a unique group for every \ntransaction in the cluster. This way, the per process communication is \nreduced to the required minimum.\n\n\nAs said, I am hacking on some code ...\n\n\nJan\n\nChris Travers wrote:\n> Tom Lane wrote:\n> \n>>Chris Travers <[email protected]> writes:\n>> \n>>\n>>>Yes I have. Postgres-r is not a high-availability solution which is \n>>>capable of transparent failover,\n>>> \n>>>\n>>\n>>What makes you say that? My understanding is it's supposed to survive\n>>loss of individual servers.\n>>\n>>\t\t\tregards, tom lane\n>>\n>>\n>> \n>>\n> My mistake. I must have gotten them confused with another \n> (asynchronous) replication project.\n> \n> Best Wishes,\n> Chris Travers\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== [email protected] #\n\n", "msg_date": "Tue, 26 Aug 2003 21:43:10 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "\n[ Moved to hackers.]\n\nJan Wieck wrote:\n> I am currently hacking around with some C+PL/TclU+Spread constructs that \n> might form a rude kind of prototype creature.\n> \n> My changes to the Postgres-R concept are that there will be as many \n> replicating slave processes as there are in summary masters out in the \n> cluster ... yes, it will try to utilize all the CPU's in the cluster! \n\nInteresting --- so your idea is to have the group sets run in parallel,\nand detecting if the groups themselves are in conflict --- makes sense. \nIf you can detect if outside transactions conflict with your\ntransaction, you should be able to determine if the outside transactions\nconflict with each other.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 26 Aug 2003 23:25:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Replication Ideas" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> If you can detect if outside transactions conflict with your\n> transaction, you should be able to determine if the outside transactions\n> conflict with each other.\n\nUh ... not necessarily. That amounts to assuming that every xact has\ncomplete knowledge of the actions of every other, which is an assumption\nI'd rather not make. Detecting that what you've done conflicts with\nsomeone else is one thing, detecting that party B has conflicted with\nparty C is another league entirely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Aug 2003 23:37:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Replication Ideas " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > If you can detect if outside transactions conflict with your\n> > transaction, you should be able to determine if the outside transactions\n> > conflict with each other.\n> \n> Uh ... not necessarily. That amounts to assuming that every xact has\n> complete knowledge of the actions of every other, which is an assumption\n> I'd rather not make. Detecting that what you've done conflicts with\n> someone else is one thing, detecting that party B has conflicted with\n> party C is another league entirely.\n\nYep.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 26 Aug 2003 23:57:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Replication Ideas" }, { "msg_contents": "Jan Wieck wrote:\n\n> WARNING: This is getting long ...\n>\n> Postgres-R is a very interesting and inspiring idea. And I've been \n> kicking that concept around for a while now. 
What I don't like about \n> it is that it requires fundamental changes in the lock mechanism and \n> that it is based on the assumption of very low lock conflict.\n>\n> <explain-PG-R>\n> In Postgres-R a committing transaction sends it's workset (WS - a list \n> of all updates done in this transaction) to the group communication \n> system (GC). The GC guarantees total order, meaning that all nodes \n> will receive all WSs in the same order, no matter how they have been \n> sent.\n>\n> If a node receives back it's own WS before any error occured, it goes \n> ahead and finalizes the commit. If it receives a foreign WS, it has to \n> apply the whole WS and commit it before it can process anything else. \n> If now a local transaction, in progress or while waiting for it's WS \n> to come back, holds a lock that is required to process such remote WS, \n> the local transaction needs to be aborted to unlock it's resources ... \n> it lost the total order race.\n> </explain-PG-R>\n>\n> Postgres-R requires that all remote WSs are applied and committed \n> before a local transaction can commit. Otherwise it couldn't correctly \n> detect a lock conflict. So there will not be any read ahead. And since \n> the total order really counts here, it cannot apply any two remote WSs \n> in parallel, a race condition could possibly exist and a later WS in \n> the total order runs faster and locks up a previous one, so we have to \n> squeeze all remote WSs through one single replication work process. \n> And all the locally parallel executed transactions that wait for their \n> WSs to come back have to wait until that poor little worker is done \n> with the whole pile. Bye bye concurrency. And I don't know how the GC \n> will deal with the backlog either. Could well choke on it.\n>\n> I do not see how this will scale well in a multi-SMP-system cluster. \n> At least the serialization of WSs will become a horror if there is \n> significant lock contention like in a standard TPC-C on the district \n> row containing the order number counter. I don't know for sure, but I \n> suspect that with this kind of bottleneck, Postgres-R will have to \n> rollback more than 50% of it's transactions when there are more than 4 \n> nodes under heavy load (like in a benchmark run). That will suck ...\n>\n>\n> But ... initially I said that it is an inspiring concept ... soooo ...\n>\n> I am currently hacking around with some C+PL/TclU+Spread constructs \n> that might form a rude kind of prototype creature.\n>\n> My changes to the Postgres-R concept are that there will be as many \n> replicating slave processes as there are in summary masters out in the \n> cluster ... yes, it will try to utilize all the CPU's in the cluster! \n> For failover reliability, A committing transaction will hold before \n> finalizing the commit and send it's \"I'm ready\" to the GC. Every \n> replicator that reaches the same state send's \"I'm ready\" too. Spread \n> guarantees in SAFE_MESS mode that messages are delivered to all nodes \n> in a group or that at least LEAVE/DISCONNECT messages are deliverd \n> before. So if a node receives more than 50% of \"I'm ready\", there \n> would be a very small gap where multiple nodes have to fail in the \n> same split second so that the majority of nodes does NOT commit. 
A \n> node that reported \"I'm ready\" but lost more than 50% of the cluster \n> before committing has to rollback and rejoin or wait for operator \n> intervention.\n>\n> Now the idea is to split up the communication into GC distribution \n> groups per transaction. So working master backends and associated \n> replication backends will join/leave a unique group for every \n> transaction in the cluster. This way, the per process communication is \n> reduced to the required minimum.\n>\n>\n> As said, I am hacking on some code ...\n>\n>\n> Jan\n>\n> Chris Travers wrote:\n>\n>> Tom Lane wrote:\n>>\n>>> Chris Travers <[email protected]> writes:\n>>> \n>>>\n>>>> Yes I have. Postgres-r is not a high-availability solution which is \n>>>> capable of transparent failover,\n>>>> \n>>>\n>>>\n>>> What makes you say that? My understanding is it's supposed to survive\n>>> loss of individual servers.\n>>>\n>>> regards, tom lane\n>>>\n>>>\n>>> \n>>>\n>> My mistake. I must have gotten them confused with another \n>> (asynchronous) replication project.\n>>\n>> Best Wishes,\n>> Chris Travers\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: the planner will ignore your desire to choose an index scan if \n>> your\n>> joining column's datatypes do not match\n>\n>\n>\nAs my british friends would say, \"Bully for you\",and I applaud you \nplaying, struggling, learning from this for our sakes. Jeez, all I think \nabout is me,huh?\n\n", "msg_date": "Tue, 26 Aug 2003 21:33:59 -0700", "msg_from": "Dennis Gearon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "On Tue, 2003-08-26 at 22:37, Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > If you can detect if outside transactions conflict with your\n> > transaction, you should be able to determine if the outside transactions\n> > conflict with each other.\n> \n> Uh ... not necessarily. That amounts to assuming that every xact has\n> complete knowledge of the actions of every other, which is an assumption\n> I'd rather not make. Detecting that what you've done conflicts with\n> someone else is one thing, detecting that party B has conflicted with\n> party C is another league entirely.\n\nMaybe some sort of Lock Manager? A process running on each node\nkeeps a tree structure of all locks, requested locks, what is \n(requested to be) locked, and the type of lock. If you are running\nmulti-master replication, each LM keeps in sync with each other, \nthus creating a Distributed Lock Manager. (This would also be the\nkey to implementing database clusters. Of course, the interface \nto the DLM would have to be pretty deep within Postgres itself...)\n\nUsing a DLM, the postmaster on node_a would know that the postmaster\non node_b has just locked a certain set of tuples and index keys,\nand \n(1) will queue up it's request to lock that data into that node's\n LM,\n(2) which will propagate it to the other nodes,\n(3) then when the node_a postmaster executes the COMMIT WORK, the\n node_b postmaster can obtain it's desired locks.\n(4) If the postmaster on node_[ac-z] needs to lock the that same \n data, it will then similarly queue up to wait until the node_b\n postmaster executes it's COMMIT WORK.\n\nNotes:\na) this is, of course, not *sufficient* for multi-master\nb) yes, you need a fast, low latency network for the DLM chatter.\n\nThis is a tried and true method of synchronization. 
DEC Rdb/VMS\nhas been using it for 19 years as the underpinnings of it's cluster\ntechnology, and Oracle licensed it from them (well, really Compaq)\nfor it's 9i RAC.\n\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"The UN couldn't break up a cookie fight in a Brownie meeting.\"\nLarry Miller\n\n", "msg_date": "Wed, 27 Aug 2003 03:12:31 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "\"Marc G. Fournier\" <[email protected]> writes:\n> On Mon, 25 Aug 2003, Tom Lane wrote:\n>> What makes you say that? My understanding is it's supposed to survive\n>> loss of individual servers.\n\n> How does it play 'catch up' went a server comes back online?\n\nThe recovered server has to run through the part of the GCS data stream\nthat it missed the first time. This is not conceptually different from\nrecovering using archived WAL logs (or archived trigger-driven\nreplication data streams). As with using WAL for recovery, you have to\nbe able to archive the message stream until you don't need it any more.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Aug 2003 15:38:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas " }, { "msg_contents": "On 26 Aug 2003 at 3:01, Marc G. Fournier wrote:\n\n> \n> \n> On Mon, 25 Aug 2003, Tom Lane wrote:\n> \n> > Chris Travers <[email protected]> writes:\n> > > Yes I have. Postgres-r is not a high-availability solution which is\n> > > capable of transparent failover,\n> >\n> > What makes you say that? My understanding is it's supposed to survive\n> > loss of individual servers.\n> \n> How does it play 'catch up' went a server comes back online?\n\n<dumb idea>\nPITR + archive logs daemon? Chances of a node and an archive log daemon going \ndown simalrenously are pretty low. If archive log daemon works on another \nmachin, the MTBF should be pretty acceptable..\n</dumb idea>\n\n\nBye\n Shridhar\n\n--\nThe Briggs-Chase Law of Program Development:\tTo determine how long it will take \nto write and debug a\tprogram, take your best estimate, multiply that by two, \nadd\tone, and convert to the next higher units.\n\n", "msg_date": "Thu, 28 Aug 2003 12:37:53 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas " }, { "msg_contents": "\n\nRon Johnson wrote:\n\n> Notes:\n> a) this is, of course, not *sufficient* for multi-master\n> b) yes, you need a fast, low latency network for the DLM chatter.\n\n\"Fast\" is an understatement. The DLM you're talking about would (in our \ncase) need to use Spread's AGREED_MESS or SAFE_MESS service type, \nmeaning guarantee of total order. A transaction that needs any type of \nlock sends that request into the DLM group and then waits. The incoming \nstream of lock messages determines success or failure. With the overhead \nof these service types I don't think one single communication group for \nall database backends in the whole cluster guaranteeing total order will \nbe that efficient.\n\n> \n> This is a tried and true method of synchronization. 
DEC Rdb/VMS\n> has been using it for 19 years as the underpinnings of it's cluster\n> technology, and Oracle licensed it from them (well, really Compaq)\n> for it's 9i RAC.\n\nAre you sure they're using it that way?\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n", "msg_date": "Thu, 28 Aug 2003 17:00:02 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "On Thu, 2003-08-28 at 16:00, Jan Wieck wrote:\n> Ron Johnson wrote:\n> \n> > Notes:\n> > a) this is, of course, not *sufficient* for multi-master\n> > b) yes, you need a fast, low latency network for the DLM chatter.\n> \n> \"Fast\" is an understatement. The DLM you're talking about would (in our \n> case) need to use Spread's AGREED_MESS or SAFE_MESS service type, \n> meaning guarantee of total order. A transaction that needs any type of \n> lock sends that request into the DLM group and then waits. The incoming \n> stream of lock messages determines success or failure. With the overhead \n> of these service types I don't think one single communication group for \n> all database backends in the whole cluster guaranteeing total order will \n> be that efficient.\n\nI guess it's the differing protocols involved. DEC made clustering\n(including Rdb/VMS) work over an 80Mbps protocol, back in The Day,\nand HPaq says that it works fine now over fast ethernet.\n \n> > This is a tried and true method of synchronization. DEC Rdb/VMS\n> > has been using it for 19 years as the underpinnings of it's cluster\n> > technology, and Oracle licensed it from them (well, really Compaq)\n> > for it's 9i RAC.\n> \n> Are you sure they're using it that way?\n\nNot as sure as I am that the sun will rise in the east tomorrow,\nbut, yes, I am highly confident that O modified DLM for use in\n9i RAC. Note that O purchased Rdb/VMS from DEC back in 1994, along\nwith the Engineers, so they have long knowledge of how it works\nin VMS. One of the reasons they bought Rdb was to merge the tech-\nnology into RDBMS.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"they love our milk and honey, but preach about another way of living\"\nMerle Haggard, \"The Fighting Side Of Me\"\n\n", "msg_date": "Thu, 28 Aug 2003 17:04:19 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "Are these clusters physically together using dedicate LAN lines .... or \nare they synchronizing over the Interwait?\n\nRon Johnson wrote:\n\n>On Thu, 2003-08-28 at 16:00, Jan Wieck wrote:\n> \n>\n>>Ron Johnson wrote:\n>>\n>> \n>>\n>>>Notes:\n>>>a) this is, of course, not *sufficient* for multi-master\n>>>b) yes, you need a fast, low latency network for the DLM chatter.\n>>> \n>>>\n>>\"Fast\" is an understatement. The DLM you're talking about would (in our \n>>case) need to use Spread's AGREED_MESS or SAFE_MESS service type, \n>>meaning guarantee of total order. A transaction that needs any type of \n>>lock sends that request into the DLM group and then waits. The incoming \n>>stream of lock messages determines success or failure. 
With the overhead \n>>of these service types I don't think one single communication group for \n>>all database backends in the whole cluster guaranteeing total order will \n>>be that efficient.\n>> \n>>\n>\n>I guess it's the differing protocols involved. DEC made clustering\n>(including Rdb/VMS) work over an 80Mbps protocol, back in The Day,\n>and HPaq says that it works fine now over fast ethernet.\n> \n> \n>\n>>>This is a tried and true method of synchronization. DEC Rdb/VMS\n>>>has been using it for 19 years as the underpinnings of it's cluster\n>>>technology, and Oracle licensed it from them (well, really Compaq)\n>>>for it's 9i RAC.\n>>> \n>>>\n>>Are you sure they're using it that way?\n>> \n>>\n>\n>Not as sure as I am that the sun will rise in the east tomorrow,\n>but, yes, I am highly confident that O modified DLM for use in\n>9i RAC. Note that O purchased Rdb/VMS from DEC back in 1994, along\n>with the Engineers, so they have long knowledge of how it works\n>in VMS. One of the reasons they bought Rdb was to merge the tech-\n>nology into RDBMS.\n>\n> \n>\n\n", "msg_date": "Thu, 28 Aug 2003 15:52:24 -0700", "msg_from": "Dennis Gearon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" }, { "msg_contents": "On Thu, 2003-08-28 at 17:52, Dennis Gearon wrote:\n> Are these clusters physically together using dedicate LAN lines .... or \n> are they synchronizing over the Interwait?\n\nThere have been multiple methods over the years. In order:\n\n1. Cluster Interconnect (CI) : There's a big box, called the CI,\n that in the early days was really a stripped PDP-11 running\n an RTOS. Each VAX (and, later, Alpha) is connected to the CI\n via a special adapters and cables. Disks are connected to an\n \"HSC\" Storage Controllers which also plug into the CI. Basic-\n ally, it's a big, intelligent switch. Disk sectors pass\n along the wires from VAX and Alpha to disks and back. DLM \n messages pass along the wires from node to node. With mul-\n tiple CI adapters, and HSCs (they were dual-ported) you could \n set up otal dual-redundancy. Up to 96 nodes can be cluster-\n ed. It still works, but Memory Channel is preferred now.\n\n2. LAVC - Local Area VAX Cluster : In this scheme, disks were\n directly attached to nodes, and data (disk and DLM) is trans-\n ferred back and forth across the 10Mbps Ethernet. It could\n travel over TCP/IP or DECnet. For obvious reasons, LAVC was\n a lot cheaper and slower than CI.\n\n3. SCSI clusters : SCSI disks are wired to a dual-ported \"HSZ\"\n Storage Controller. Then, SCSI cards on each of 2 nodes \n could be wired into a port. The SCSI disks could also be\n wired to a 2nd HSZ, and a 2nd SCSI card in each node plugged\n into that HSZ, dual-redundancy is achieved. With modern\n versions of VMS, the SCSI drivers can choose which SCSI \n card it wanted to send data through, to increase performance.\n DLM messages are passed via TCP/IP. Only 2 nodes can be\n clustered. A related method uses fiber channel disks on\n \"HSG\" Storage Controllers.\n\n4. Memory Channel : A higher speed interconnect. Don't know\n much about it. 
128 nodes can be clustered.\n\nNote that since DLM awareness is built deep into VMS and all the\nRTLs, every program is cluster-aware, no matter what type of \ncluster method is used.\n\n\n> Ron Johnson wrote:\n> \n> >On Thu, 2003-08-28 at 16:00, Jan Wieck wrote:\n> > \n> >\n> >>Ron Johnson wrote:\n> >>\n> >> \n> >>\n> >>>Notes:\n> >>>a) this is, of course, not *sufficient* for multi-master\n> >>>b) yes, you need a fast, low latency network for the DLM chatter.\n> >>> \n> >>>\n> >>\"Fast\" is an understatement. The DLM you're talking about would (in our \n> >>case) need to use Spread's AGREED_MESS or SAFE_MESS service type, \n> >>meaning guarantee of total order. A transaction that needs any type of \n> >>lock sends that request into the DLM group and then waits. The incoming \n> >>stream of lock messages determines success or failure. With the overhead \n> >>of these service types I don't think one single communication group for \n> >>all database backends in the whole cluster guaranteeing total order will \n> >>be that efficient.\n> >> \n> >>\n> >\n> >I guess it's the differing protocols involved. DEC made clustering\n> >(including Rdb/VMS) work over an 80Mbps protocol, back in The Day,\n> >and HPaq says that it works fine now over fast ethernet.\n> > \n> > \n> >\n> >>>This is a tried and true method of synchronization. DEC Rdb/VMS\n> >>>has been using it for 19 years as the underpinnings of it's cluster\n> >>>technology, and Oracle licensed it from them (well, really Compaq)\n> >>>for it's 9i RAC.\n> >>> \n> >>>\n> >>Are you sure they're using it that way?\n> >> \n> >>\n> >\n> >Not as sure as I am that the sun will rise in the east tomorrow,\n> >but, yes, I am highly confident that O modified DLM for use in\n> >9i RAC. Note that O purchased Rdb/VMS from DEC back in 1994, along\n> >with the Engineers, so they have long knowledge of how it works\n> >in VMS. One of the reasons they bought Rdb was to merge the tech-\n> >nology into RDBMS.\n> >\n> > \n> >\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Oh, great altar of passive entertainment, bestow upon me thy \ndiscordant images at such speed as to render linear thought impossible\"\nCalvin, regarding TV\n\n", "msg_date": "Thu, 28 Aug 2003 22:20:10 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Replication Ideas" } ]
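To make the "write set" idea in the thread above a bit more concrete, here is a minimal trigger-based sketch of capturing per-row changes into a log table for later distribution. It is only an illustration under invented names (an accounts(id, balance) table and a replication_log table); it is not Postgres-R code, and it says nothing about how Spread, a group communication layer, or a DLM would actually be wired in.

-- Hypothetical log table: one row per captured change (the "write set").
CREATE TABLE replication_log (
    logid      serial PRIMARY KEY,
    tablename  name        NOT NULL,
    operation  text        NOT NULL,            -- 'INSERT', 'UPDATE' or 'DELETE'
    pk_value   integer     NOT NULL,            -- primary key of the changed row
    logged_at  timestamptz NOT NULL DEFAULT now()
);

-- Trigger function for the invented accounts(id, balance) table.
CREATE OR REPLACE FUNCTION accounts_capture() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''DELETE'' THEN
        INSERT INTO replication_log (tablename, operation, pk_value)
        VALUES (TG_RELNAME, TG_OP, OLD.id);
        RETURN OLD;
    END IF;
    INSERT INTO replication_log (tablename, operation, pk_value)
    VALUES (TG_RELNAME, TG_OP, NEW.id);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER accounts_capture_trig
    AFTER INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE PROCEDURE accounts_capture();

A real implementation would additionally have to tag each logged row with its originating transaction and ship the accumulated set through the group communication layer in commit order, which is exactly the part the thread is debating.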
[ { "msg_contents": "Hi List,\n\n I have posted a subjetc on the admin list but I thought that it might fit \nbetter on this list as follow:\n\nHi List,\n\n As I said before, I'm not a DBA \" yet\" , but I'm learning ... and I \nalready have a PostgreSQL running, so I have to ask some help...\n I got a SQL as folows :\n \n SELECT /*+ */ \nftnfco00.estado_cliente , \nftcofi00.grupo_faturamento , \nSUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL\n(ftnfpr00.qtde_duzias,0)), '+', NVL(ftnfpr00.qtde_duzias,0), 0) ) , \nSUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL(ftnfpr00.vlr_liquido,0)), '+', \nNVL(ftnfpr00.vlr_liquido,0), 0) ) , \nftprod00.tipo_cadastro||ftprod00.codigo_produto||'||'||gsames00.ano_mes , \nftprod00.descricao_produto||'||'||gsames00.descricao , \nDIVIDE( SUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL\n(ftnfpr00.vlr_liquido,0)), '+', NVL(ftnfpr00.vlr_liquido,0), 0)\n*ftnfpr00.margem_comercial ),\n SUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL\n(ftnfpr00.vlr_liquido,0)), '+', NVL(ftnfpr00.vlr_liquido,0), 0)) ) , \nSUM( DECODE( ftcofi00.nf_prodgratis, 'S', NVL(ftnfpr00.qtde_duzias,0), 0 ) ) , \nSUM( DECODE( ftcofi00.nf_prodgratis, 'S', NVL(ftnfpr00.vlr_liquido,0), 0 ) ) \nFROM \nftprod00 , \nftnfco00 , \nftcgma00 , \nftcgca00 , \nftspro00 , \nftclcr00 , \ngsames00 , \nftcofi00 , \nftrepr00 , \ngsesta00 , \nftsupv00 , \nftgrep00 , \nftclgr00 , \nftband00 , \nfttcli00 , \nftredc00 , \nftnfpr00 \nWHERE \nftnfco00.emp = 909 AND \nftnfpr00.fil IN ('101') AND \nftnfco00.situacao_nf = 'N' AND \nTO_CHAR(ftnfco00.data_emissao,'YYYYMM') >= '200208' AND \nTO_CHAR(ftnfco00.data_emissao,'YYYYMM') <= '200304' AND \nftcofi00.grupo_faturamento >= '01' AND \n(ftcofi00.atual_fatura IN ('+','-') OR ftcofi00.nf_prodgratis = 'S') AND \nftcgma00.emp = ftprod00.emp AND \nftcgma00.fil = ftprod00.fil AND \nftcgma00.codigo = ftprod00.cla_marca AND \nftcgca00.emp = ftprod00.emp AND \nftcgca00.fil = ftprod00.fil AND \nftcgca00.codigo = ftprod00.cla_categoria AND \nftspro00.emp = ftprod00.emp AND \nftspro00.fil = ftprod00.fil AND \nftspro00.codigo = ftprod00.situacao AND \nftclcr00.emp = ftnfco00.emp AND \nftclcr00.fil = ftnfco00.empfil AND \nftclcr00.tipo_cadastro = ftnfco00.tipo_cad_clicre AND \nftclcr00.codigo = ftnfco00.cod_cliente AND \ngsames00.ano_mes = TO_CHAR(ftnfco00.data_emissao,'YYYYMM') AND \nftcofi00.emp = ftnfco00.emp AND \nftcofi00.fil = ftnfco00.empfil AND \nftcofi00.codigo_fiscal = ftnfco00.cod_fiscal AND \nftrepr00.emp = ftnfco00.emp AND \nftrepr00.fil = ftnfco00.empfil AND \nftrepr00.codigo_repr = ftnfco00.cod_repres AND \ngsesta00.estado_sigla = ftnfco00.estado_cliente AND \nftsupv00.emp = ftrepr00.emp AND \nftsupv00.fil = ftrepr00.fil AND \nftsupv00.codigo_supervisor = ftrepr00.codigo_supervisor AND \nftgrep00.emp = ftrepr00.emp AND \nftgrep00.fil = ftrepr00.fil AND \nftgrep00.codigo_grupo_rep = ftrepr00.codigo_grupo_rep AND \nftclgr00.emp = ftclcr00.emp AND \nftclgr00.fil = ftclcr00.fil AND \nftclgr00.codigo = ftclcr00.codigo_grupo_cliente AND \nftband00.emp = ftclcr00.emp AND \nftband00.fil = ftclcr00.fil AND \nftband00.codigo = ftclcr00.bandeira_cliente AND \nfttcli00.emp = ftclcr00.emp AND \nfttcli00.fil = ftclcr00.fil AND \nfttcli00.cod_tipocliente = ftclcr00.codigo_tipo_cliente AND \nftredc00.emp = ftclcr00.emp AND \nftredc00.fil = ftclcr00.fil AND \nftredc00.tipo_contribuinte = ftclcr00.tipo_contribuinte AND \nftredc00.codigo_rede = ftclcr00.codigo_rede AND \ngsesta00.estado_sigla = ftclcr00.emp_estado AND \nftnfco00.emp = ftnfpr00.emp AND \nftnfco00.fil = ftnfpr00.fil AND 
\nftnfco00.nota_fiscal = ftnfpr00.nota_fiscal AND \nftnfco00.serie = ftnfpr00.serie AND \nftnfco00.data_emissao = ftnfpr00.data_emissao AND \nftprod00.emp = ftnfpr00.emp AND \nftprod00.fil = ftnfpr00.empfil AND \nftprod00.tipo_cadastro = ftnfpr00.tipo_cad_promat AND \nftprod00.codigo_produto= ftnfpr00.cod_produto \nGROUP BY \nftnfco00.estado_cliente , \nftcofi00.grupo_faturamento , \nftprod00.tipo_cadastro||ftprod00.codigo_produto||'||'||gsames00.ano_mes , \nftprod00.descricao_produto||'||'||gsames00.descricao\n \n\nI have created the decode, NVL and DIVIDE functions.... the problem is that the \nwhere condition makes this query to slow ( about 4 min ) and the same query in \nmy Oracle database takes less than 40 seconds. I have tried to isolate the \nproblem taking off some fields and I left justa the two first fields in the \nquery ( ftnfco00.estado_cliente , ftcofi00.grupo_faturamento ) and it still \ntaking almost 4 min to return. Does anyone have a hint to give me to make it \nfaster ?\n\n Atached goes a explain analyze return os this query.\n\n\n\n\nAtenciosamente,\n\nRhaoni Chiu Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122", "msg_date": "Mon, 25 Aug 2003 12:03:12 -0300", "msg_from": "Rhaoni Chiu Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "Query too slow" }, { "msg_contents": "On Mon, 25 Aug 2003, Rhaoni Chiu Pereira wrote:\n\n> Hi List,\n>\n> As I said before, I'm not a DBA \" yet\" , but I'm learning ... and I\n> already have a PostgreSQL running, so I have to ask some help...\n> I got a SQL as folows :\n\n...\n\nLooking at the explain:\n\nIt's choosing lots of nested loops because it's expecting a small number\nof rows to be returned at each step but in reality there are alot of rows\nso that's may not really be a good choice.\n\nFor example the scan of ftnfco00 is expected to return 295 rows but\nactually returns 9339, and it looks like it's not estimating the number of\nmatches between the tables very well either since the real count gets up\nto 240000 in a step where the estimated rows goes to 1.\n\nWhat does explain analyze give after set enable_nestloop=off;?\n\n", "msg_date": "Mon, 25 Aug 2003 08:44:43 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query too slow" }, { "msg_contents": "On 25 Aug 2003 at 8:44, Stephan Szabo wrote:\n\n> On Mon, 25 Aug 2003, Rhaoni Chiu Pereira wrote:\n> \n> > Hi List,\n> >\n> > As I said before, I'm not a DBA \" yet\" , but I'm learning ... and I\n> > already have a PostgreSQL running, so I have to ask some help...\n> > I got a SQL as folows :\n> \n> ...\n> \n> Looking at the explain:\n> \n> It's choosing lots of nested loops because it's expecting a small number\n> of rows to be returned at each step but in reality there are alot of rows\n> so that's may not really be a good choice.\n> \n> For example the scan of ftnfco00 is expected to return 295 rows but\n> actually returns 9339, and it looks like it's not estimating the number of\n> matches between the tables very well either since the real count gets up\n> to 240000 in a step where the estimated rows goes to 1.\n> \n> What does explain analyze give after set enable_nestloop=off;?\n\nIn addition to that if it is getting the stats wrong, does running vacuum \nanalyze help? 
If stats are updated, it should pick up proper plans, right?\n\nBye\n Shridhar\n\n--\nFlon's Law:\tThere is not now, and never will be, a language in\twhich it is the \nleast bit difficult to write bad programs.\n\n", "msg_date": "Mon, 25 Aug 2003 22:05:05 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query too slow" }, { "msg_contents": "Stephan Szabo wrote:\n\n> Looking at the explain:\n\nVeering aside a bit, since we usually pinpoint performance problems by \nlooking at EXPLAIN ANALYZE's differences between the planner's \nestimation and actual execution's stats, what's involved in parsing the \nEXPLAIN ANALYZE results, and highlighting the places where they are way \ndifferent? Bold, underline, or put some asterisks in front of those steps.\n\nMakes looking at big EXPLAIN ANALYZE trees much easier.\n\n-- \nLinux homer 2.4.18-14 #1 Wed Sep 4 13:35:50 EDT 2002 i686 i686 i386 \nGNU/Linux\n 2:30pm up 243 days, 5:48, 8 users, load average: 5.52, 5.29, 5.10", "msg_date": "Tue, 26 Aug 2003 14:49:17 +0800", "msg_from": "Ang Chin Han <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query too slow" }, { "msg_contents": "On Tue, 26 Aug 2003, Ang Chin Han wrote:\n\n> Stephan Szabo wrote:\n>\n> > Looking at the explain:\n>\n> Veering aside a bit, since we usually pinpoint performance problems by\n> looking at EXPLAIN ANALYZE's differences between the planner's\n> estimation and actual execution's stats, what's involved in parsing the\n> EXPLAIN ANALYZE results, and highlighting the places where they are way\n> different? Bold, underline, or put some asterisks in front of those steps.\n\nThe hardest part is determining where it matters I think. You can use the\nrow counts as the base for that, but going from 1 row to 50 is not\nnecessarily going to be an issue, but it might be if a nested loop is\nchosen.\n\n", "msg_date": "Tue, 26 Aug 2003 08:47:52 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query too slow" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> On Tue, 26 Aug 2003, Ang Chin Han wrote:\n>> Veering aside a bit, since we usually pinpoint performance problems by\n>> looking at EXPLAIN ANALYZE's differences between the planner's\n>> estimation and actual execution's stats, what's involved in parsing the\n>> EXPLAIN ANALYZE results, and highlighting the places where they are way\n>> different? Bold, underline, or put some asterisks in front of those steps.\n\n> The hardest part is determining where it matters I think. You can use the\n> row counts as the base for that, but going from 1 row to 50 is not\n> necessarily going to be an issue, but it might be if a nested loop is\n> chosen.\n\nWe've been chatting about this idea among the Red Hat group. 
The RHDB\nVisual Explain tool (get it at http://sources.redhat.com/rhdb/) already\ncomputes the percent of total runtime represented by each plan node.\nIt seems like we could highlight nodes based on a large difference\nbetween estimated and actual percentage, or just highlight the nodes\nthat are more than X percent of the runtime.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Aug 2003 12:01:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query too slow " }, { "msg_contents": "Tom Lane wrote:\n> Stephan Szabo <[email protected]> writes:\n> > On Tue, 26 Aug 2003, Ang Chin Han wrote:\n> >> Veering aside a bit, since we usually pinpoint performance problems by\n> >> looking at EXPLAIN ANALYZE's differences between the planner's\n> >> estimation and actual execution's stats, what's involved in parsing the\n> >> EXPLAIN ANALYZE results, and highlighting the places where they are way\n> >> different? Bold, underline, or put some asterisks in front of those steps.\n> \n> > The hardest part is determining where it matters I think. You can use the\n> > row counts as the base for that, but going from 1 row to 50 is not\n> > necessarily going to be an issue, but it might be if a nested loop is\n> > chosen.\n> \n> We've been chatting about this idea among the Red Hat group. The RHDB\n> Visual Explain tool (get it at http://sources.redhat.com/rhdb/) already\n> computes the percent of total runtime represented by each plan node.\n> It seems like we could highlight nodes based on a large difference\n> between estimated and actual percentage, or just highlight the nodes\n> that are more than X percent of the runtime.\n\nIs there a TODO here? Perhaps:\n\n\to Have EXPLAIN ANALYZE highlight poor optimizer estimates\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 5 Sep 2003 00:34:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query too slow" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Stephan Szabo <[email protected]> writes:\n> > > On Tue, 26 Aug 2003, Ang Chin Han wrote:\n> > >> Veering aside a bit, since we usually pinpoint performance problems by\n> > >> looking at EXPLAIN ANALYZE's differences between the planner's\n> > >> estimation and actual execution's stats, what's involved in parsing the\n> > >> EXPLAIN ANALYZE results, and highlighting the places where they are way\n> > >> different? Bold, underline, or put some asterisks in front of those steps.\n> > \n> > > The hardest part is determining where it matters I think. You can use the\n> > > row counts as the base for that, but going from 1 row to 50 is not\n> > > necessarily going to be an issue, but it might be if a nested loop is\n> > > chosen.\n> > \n> > We've been chatting about this idea among the Red Hat group. The RHDB\n> > Visual Explain tool (get it at http://sources.redhat.com/rhdb/) already\n> > computes the percent of total runtime represented by each plan node.\n> > It seems like we could highlight nodes based on a large difference\n> > between estimated and actual percentage, or just highlight the nodes\n> > that are more than X percent of the runtime.\n> \n> Is there a TODO here? 
Perhaps:\n> \n> \to Have EXPLAIN ANALYZE highlight poor optimizer estimates\n\nNo one commented, so I had to guess --- I added it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 10 Sep 2003 16:15:01 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query too slow" } ]
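As a concrete follow-up to the two suggestions made in this thread -- refresh the statistics, then re-check the plan with nested loops disabled -- the diagnostic session might look like the sketch below. The query is cut down to just the ftnfco00/ftnfpr00 join plus a few of the original filters so the plan output stays readable; it is not the full statement from the first message.

-- Refresh the statistics that the planner's row-count estimates come from.
VACUUM ANALYZE ftnfco00;
VACUUM ANALYZE ftnfpr00;

-- Repeat the experiment suggested above: disable nested loops for this
-- session only and compare the new EXPLAIN ANALYZE output.
SET enable_nestloop = off;

EXPLAIN ANALYZE
SELECT ftnfco00.estado_cliente, ftnfpr00.qtde_duzias
  FROM ftnfco00, ftnfpr00
 WHERE ftnfco00.emp          = ftnfpr00.emp
   AND ftnfco00.fil          = ftnfpr00.fil
   AND ftnfco00.nota_fiscal  = ftnfpr00.nota_fiscal
   AND ftnfco00.serie        = ftnfpr00.serie
   AND ftnfco00.data_emissao = ftnfpr00.data_emissao
   AND ftnfco00.emp = 909
   AND ftnfco00.situacao_nf = 'N'
   AND TO_CHAR(ftnfco00.data_emissao,'YYYYMM') BETWEEN '200208' AND '200304';

-- Restore the default before doing anything else in the session.
SET enable_nestloop = on;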
[ { "msg_contents": "I'm very new to Postgresql, so don't beat me up to bad if you see a problem,\njust inform me what I've done wrong.\n\nI'm use Postgresql 7.2 (PeerDirect's Windows port) on Win2000 384MB RAM 10GB\nof Free space 800 Mhz, using the ODBC driver 7.03.01.00.\n\nI have a table that has 103,000 records in it (record size is about 953\nbytes) and when I do a select all (select * from <table>) it takes a\nwhopping 30 secs for the data to return!!\n\nSQLServer on the other hand takes 6 secs, but you can also use what is\ncalled a firehose cursor, which will return the data in < 1 sec.\n\nI have done everything that I know how to speed this up, does anyone have\nany advise?\n\nThanks\n\n\n\n", "msg_date": "Mon, 25 Aug 2003 12:56:59 -0700", "msg_from": "\"Bupp Phillips\" <[email protected]>", "msg_from_op": true, "msg_subject": "What is the fastest way to get a resultset" }, { "msg_contents": "Bupp Phillips <[email protected]> wrote:\n> I'm very new to Postgresql, so don't beat me up to bad if you see a\n> problem, just inform me what I've done wrong.\n>\n> I'm use Postgresql 7.2 (PeerDirect's Windows port) on Win2000 384MB\n> RAM 10GB of Free space 800 Mhz, using the ODBC driver 7.03.01.00.\n>\n> I have a table that has 103,000 records in it (record size is about\n> 953 bytes) and when I do a select all (select * from <table>) it\n> takes a whopping 30 secs for the data to return!!\n>\n> SQLServer on the other hand takes 6 secs, but you can also use what is\n> called a firehose cursor, which will return the data in < 1 sec.\n>\n> I have done everything that I know how to speed this up, does anyone\n> have any advise?\n>\n\nProbably you need to fetch more than one row at a time.\nI made that misstake once myself :)\n\nMagnus\n\n", "msg_date": "Mon, 25 Aug 2003 22:37:24 +0200", "msg_from": "\"Magnus Naeslund(f)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the fastest way to get a resultset" }, { "msg_contents": "On Mon, 25 Aug 2003, Bupp Phillips wrote:\n\n>\n> I have a table that has 103,000 records in it (record size is about 953\n> bytes) and when I do a select all (select * from <table>) it takes a\n> whopping 30 secs for the data to return!!\n>\n> SQLServer on the other hand takes 6 secs, but you can also use what is\n> called a firehose cursor, which will return the data in < 1 sec.\n>\nYou probably want a cursor.\nTypically what happens is postgres sends _all_ the data to the client -\nwhich can be rather substantial. A cursor allows you to say \"get me the\nfirst 1000 records. 
now the next 1000\" - it should get you the speed you\nwant.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Mon, 25 Aug 2003 16:46:34 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the fastest way to get a resultset" }, { "msg_contents": "Is this something that can be done thru a SQL statement, or are you saying\nthat I need to develop logic to handle this because the database won't hold\nthe resultset on the server, but instead sends it all to the client?\n\nIt there a way to get server side cursors with Postgresql like SQLServer has\nor is this a limitation that it has?\n\nThanks\n\n\"Jeff\" <[email protected]> wrote in message\nnews:[email protected]...\n> On Mon, 25 Aug 2003, Bupp Phillips wrote:\n>\n> >\n> > I have a table that has 103,000 records in it (record size is about 953\n> > bytes) and when I do a select all (select * from <table>) it takes a\n> > whopping 30 secs for the data to return!!\n> >\n> > SQLServer on the other hand takes 6 secs, but you can also use what is\n> > called a firehose cursor, which will return the data in < 1 sec.\n> >\n> You probably want a cursor.\n> Typically what happens is postgres sends _all_ the data to the client -\n> which can be rather substantial. A cursor allows you to say \"get me the\n> first 1000 records. now the next 1000\" - it should get you the speed you\n> want.\n>\n>\n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n\n", "msg_date": "Tue, 26 Aug 2003 02:18:23 -0700", "msg_from": "\"Bupp Phillips\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is the fastest way to get a resultset" }, { "msg_contents": "On Tue, Aug 26, 2003 at 02:18:23AM -0700, Bupp Phillips wrote:\n> It there a way to get server side cursors with Postgresql like SQLServer has\n> or is this a limitation that it has?\n\nhttp://www.postgresql.org/docs/7.3/static/sql-declare.html\nhttp://www.postgresql.org/docs/7.3/static/sql-fetch.html\n\n-Neil\n\n", "msg_date": "Tue, 26 Aug 2003 14:26:24 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is the fastest way to get a resultset" }, { "msg_contents": "So there are no settings for PG to can give me this same (fast) capability\njust by issuing a SQL statement thru the ODBC driver?\n\nThe reason I can't go the route of a DECLARE CURSOR is because my\napplication runs on multiple databases, so I have stay clear of certain\nroutines that may are may not be supported on another database.\n\n\n\"Neil Conway\" <[email protected]> wrote in message\nnews:[email protected]...\n> On Tue, Aug 26, 2003 at 02:18:23AM -0700, Bupp Phillips wrote:\n> > It there a way to get server side cursors with Postgresql like SQLServer\nhas\n> > or is this a limitation that it has?\n>\n> http://www.postgresql.org/docs/7.3/static/sql-declare.html\n> http://www.postgresql.org/docs/7.3/static/sql-fetch.html\n>\n> -Neil\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n", "msg_date": "Tue, 26 Aug 2003 22:56:10 -0700", "msg_from": "\"Bupp Phillips\" <[email protected]>", 
"msg_from_op": true, "msg_subject": "Re: What is the fastest way to get a resultset" } ]
[ { "msg_contents": "Here's an interesting situation, and I think it may be just that Sun\nstinks.\n\nI was recently given the go ahead to switch from Informix to Postgres on\none of our properties. (I had dozens of performance comparisons showing\nhow slow Informix was compared to it and my boss seeing me struggle trying\nto make it run fast while Postgres, nearly out of the box, was simply\nspanking it.).\n\n\nWell, in order to facilitate things we were going to run pg on a 4 cpu\n(ultrasparc ii 400Mhz) sun with 4gb of memory (also the current informix\nbox. It isn't very loaded). Now I know FreeBSD/Linux is preferred (and\nwhere I do a lot of development and testing). But check this out for\ninteresting results.\n\nThe Hardware:\nMachine A: 4 CPU Sun Ultrasparc II 400Mhz, 4GB mem, 20GB RAID5, Solaris 8\n(32 bit mode)\n\nMachine B: 2 CPU Pentium II, 450Mhz, 512MB mem, 18GB RAID0 (2 old scsi\ndisks) Linux 2.4.18 (Stock redhat 8.0)\n\nThe software: PG 7.3.4 compiled myself. (Reading specs from\n/opt/sfw/lib/gcc-lib/sparc-sun-solaris2.9/2.95.3/specs gcc version 2.95.3\n20010315 (release) (The solaris 8 box has no compilers, could this be the\nissue?) and (Reading specs from\n/usr/lib/gcc-lib/i386-redhat-linux/3.2/specs\nConfigured with: ../configure --prefix=/usr --mandir=/usr/share/man\n--infodir=/u\nsr/share/info --enable-shared --enable-threads=posix --disable-checking\n--host=i\n386-redhat-linux --with-system-zlib --enable-__cxa_atexit\nThread model: posix\ngcc version 3.2 20020903 (Red Hat Linux 8.0 3.2-7))\n\nOk. Maybe the compiler (I'll try installing a newer gcc for sun later\ntoday).\n\nThe PG.conf:\nshared_buffers = 2000\nsort_mem = 8192\neffective_cache_size = 32000\ntcpip_sockets = true\n\nThe Schema:\n\nuserprofile:\n\nuserkey | character varying(128) |\n displayname | character varying(128) |\n displayname_v | boolean | default 'f'\n name | character varying(128) |\n name_v | boolean | default 'f'\n email | character varying(128) |\n email_v | boolean | default 'f'\n gender | character varying(1) |\n gender_v | boolean | default 'f'\n country | character varying(64) |\n country_v | boolean | default 'f'\n zip | character varying(10) |\n zip_v | boolean | default 'f'\n city | character varying(128) |\n city_v | boolean | default 'f'\n state | character varying(10) |\n state_v | boolean | default 'f'\n lang | character varying(2) |\n lang_v | boolean | default 'f'\n url | character varying(255) |\nurl_v | boolean | default 'f'\n phone | character varying(64) |\n phone_v | boolean | default 'f'\n phonemobile | character varying(64) |\n phonemobile_v | boolean | default 'f'\n phonefax | character varying(64) |\n phonefax_v | boolean | default 'f'\n dob | timestamp with time zone |\n dob_v | boolean | default 'f'\n interests_v | boolean | default 'f'\n description | character varying(255) |\n description2 | character varying(255) |\n description_v | boolean | default 'f'\n\n(Yes, I kknow it isn't good - a lot of it is because it is the same schema\nI had to use on informix. 
Convienantly you cannot do much with a textblob\non infomrix, so I have to use big varchar's, but that is a fiffernt\nstory).\n\nThe magic query:\n\nselect userkey, dob, email, gender, country from imuserprofile\nwhere gender_v and gender='m'\nand country_v and country = 'br'\nand dob_v = 't'\nand dob >= 'now'::timestamptz - '29 years'::interval\nand dob <= 'now'::timestamptz - '18 years'::interval\norder by dob asc\nlimit 20\noffset 100\n\n(Page 5 of male brazillians, 18-29)\n\nNow the P2 runs this in about 0.3 seconds, and hte sun box runs it in 1\nsecond.\nHere's the explain analyze's on each:\n\nP2:\nLimit (cost=2484.52..2484.57 rows=20 width=67) (actual\ntime=377.32..377.41 row\ns=20 loops=1)\n -> Sort (cost=2484.27..2484.74 rows=186 width=67) (actual\ntime=377.02..377.\n21 rows=121 loops=1)\n Sort Key: dob\n -> Seq Scan on userprofile (cost=0.00..2477.28 rows=186\nwidth=67) (\nactual time=0.15..350.93 rows=1783 loops=1)\n Filter: (gender_v AND (gender = 'm'::character varying) AND\ncount\nry_v AND (country = 'br'::character varying) AND (dob_v = true) AND (dob\n>= '197\n4-08-26 07:13:15.903437-04'::timestamp with time zone) AND (dob <=\n'1985-08-26 0\n7:13:15.903437-04'::timestamp with time zone))\n Total runtime: 378.21 msec\n(6 rows)\n\nSun:\nLimit (cost=2521.19..2521.24 rows=20 width=67) (actual\ntime=1041.14..1041.20 r\nows=20 loops=1)\n -> Sort (cost=2520.94..2521.39 rows=178 width=67) (actual\ntime=1040.96..104\n1.08 rows=121 loops=1)\n Sort Key: dob\n -> Seq Scan on userprofile (cost=0.00..2514.28 rows=178\nwidth=67) (\nactual time=0.37..1014.50 rows=1783 loops=1)\n Filter: (gender_v AND (gender = 'm'::character varying) AND\ncount\nry_v AND (country = 'br'::character varying) AND (dob_v = true) AND (dob\n>= '197\n4-08-26 08:21:52.158181-04'::timestamp with time zone) AND (dob <=\n'1985-08-26 0\n8:21:52.158181-04'::timestamp with time zone))\n Total runtime: 1042.54 msec\n(6 rows)\n\nThey are loaded with the exact same dataset - 53k rows, ~10MB\nNotice the estimates are roughly the same, but the execution time is\ndifferent.\n\nI don't think it is the IO system, since 10MB will be cached by the OS and\niostat reports no activity on the disks (when running the query many\ntimes over and over and in parellel). it is a simple query..\n\nCould it just be that the sun sucks? (And for the record - same schema,\nnearly same query (modified for datetime syntax) on informix runs in 3\nseconds).\n\n\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Tue, 26 Aug 2003 08:34:01 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Sun vs a P2. Interesting results." }, { "msg_contents": "I spoke with my SUN admin, and this is what he had to say about what you are \nseeing.\n\nSun gear is known to show a lower than Intel performance on light loads, rerun \nyour test with 100 concurrent users (queries) and see what happens. Also he \nrecommends installing a 64bit version of Solaris, the 32bit robs a lot of \nperformance as well.\n\n\nOn Tuesday 26 August 2003 05:34, Jeff wrote:\n> Here's an interesting situation, and I think it may be just that Sun\n> stinks.\n>\n> I was recently given the go ahead to switch from Informix to Postgres on\n> one of our properties. 
(I had dozens of performance comparisons showing\n> how slow Informix was compared to it and my boss seeing me struggle trying\n> to make it run fast while Postgres, nearly out of the box, was simply\n> spanking it.).\n>\n>\n> Well, in order to facilitate things we were going to run pg on a 4 cpu\n> (ultrasparc ii 400Mhz) sun with 4gb of memory (also the current informix\n> box. It isn't very loaded). Now I know FreeBSD/Linux is preferred (and\n> where I do a lot of development and testing). But check this out for\n> interesting results.\n>\n> The Hardware:\n> Machine A: 4 CPU Sun Ultrasparc II 400Mhz, 4GB mem, 20GB RAID5, Solaris 8\n> (32 bit mode)\n>\n> Machine B: 2 CPU Pentium II, 450Mhz, 512MB mem, 18GB RAID0 (2 old scsi\n> disks) Linux 2.4.18 (Stock redhat 8.0)\n>\n> The software: PG 7.3.4 compiled myself. (Reading specs from\n> /opt/sfw/lib/gcc-lib/sparc-sun-solaris2.9/2.95.3/specs gcc version 2.95.3\n> 20010315 (release) (The solaris 8 box has no compilers, could this be the\n> issue?) and (Reading specs from\n> /usr/lib/gcc-lib/i386-redhat-linux/3.2/specs\n> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man\n> --infodir=/u\n> sr/share/info --enable-shared --enable-threads=posix --disable-checking\n> --host=i\n> 386-redhat-linux --with-system-zlib --enable-__cxa_atexit\n> Thread model: posix\n> gcc version 3.2 20020903 (Red Hat Linux 8.0 3.2-7))\n>\n> Ok. Maybe the compiler (I'll try installing a newer gcc for sun later\n> today).\n>\n> The PG.conf:\n> shared_buffers = 2000\n> sort_mem = 8192\n> effective_cache_size = 32000\n> tcpip_sockets = true\n>\n> The Schema:\n>\n> userprofile:\n>\n> userkey | character varying(128) |\n> displayname | character varying(128) |\n> displayname_v | boolean | default 'f'\n> name | character varying(128) |\n> name_v | boolean | default 'f'\n> email | character varying(128) |\n> email_v | boolean | default 'f'\n> gender | character varying(1) |\n> gender_v | boolean | default 'f'\n> country | character varying(64) |\n> country_v | boolean | default 'f'\n> zip | character varying(10) |\n> zip_v | boolean | default 'f'\n> city | character varying(128) |\n> city_v | boolean | default 'f'\n> state | character varying(10) |\n> state_v | boolean | default 'f'\n> lang | character varying(2) |\n> lang_v | boolean | default 'f'\n> url | character varying(255) |\n> url_v | boolean | default 'f'\n> phone | character varying(64) |\n> phone_v | boolean | default 'f'\n> phonemobile | character varying(64) |\n> phonemobile_v | boolean | default 'f'\n> phonefax | character varying(64) |\n> phonefax_v | boolean | default 'f'\n> dob | timestamp with time zone |\n> dob_v | boolean | default 'f'\n> interests_v | boolean | default 'f'\n> description | character varying(255) |\n> description2 | character varying(255) |\n> description_v | boolean | default 'f'\n>\n> (Yes, I kknow it isn't good - a lot of it is because it is the same schema\n> I had to use on informix. 
Convienantly you cannot do much with a textblob\n> on infomrix, so I have to use big varchar's, but that is a fiffernt\n> story).\n>\n> The magic query:\n>\n> select userkey, dob, email, gender, country from imuserprofile\n> where gender_v and gender='m'\n> and country_v and country = 'br'\n> and dob_v = 't'\n> and dob >= 'now'::timestamptz - '29 years'::interval\n> and dob <= 'now'::timestamptz - '18 years'::interval\n> order by dob asc\n> limit 20\n> offset 100\n>\n> (Page 5 of male brazillians, 18-29)\n>\n> Now the P2 runs this in about 0.3 seconds, and hte sun box runs it in 1\n> second.\n> Here's the explain analyze's on each:\n>\n> P2:\n> Limit (cost=2484.52..2484.57 rows=20 width=67) (actual\n> time=377.32..377.41 row\n> s=20 loops=1)\n> -> Sort (cost=2484.27..2484.74 rows=186 width=67) (actual\n> time=377.02..377.\n> 21 rows=121 loops=1)\n> Sort Key: dob\n> -> Seq Scan on userprofile (cost=0.00..2477.28 rows=186\n> width=67) (\n> actual time=0.15..350.93 rows=1783 loops=1)\n> Filter: (gender_v AND (gender = 'm'::character varying) AND\n> count\n> ry_v AND (country = 'br'::character varying) AND (dob_v = true) AND (dob\n>\n> >= '197\n>\n> 4-08-26 07:13:15.903437-04'::timestamp with time zone) AND (dob <=\n> '1985-08-26 0\n> 7:13:15.903437-04'::timestamp with time zone))\n> Total runtime: 378.21 msec\n> (6 rows)\n>\n> Sun:\n> Limit (cost=2521.19..2521.24 rows=20 width=67) (actual\n> time=1041.14..1041.20 r\n> ows=20 loops=1)\n> -> Sort (cost=2520.94..2521.39 rows=178 width=67) (actual\n> time=1040.96..104\n> 1.08 rows=121 loops=1)\n> Sort Key: dob\n> -> Seq Scan on userprofile (cost=0.00..2514.28 rows=178\n> width=67) (\n> actual time=0.37..1014.50 rows=1783 loops=1)\n> Filter: (gender_v AND (gender = 'm'::character varying) AND\n> count\n> ry_v AND (country = 'br'::character varying) AND (dob_v = true) AND (dob\n>\n> >= '197\n>\n> 4-08-26 08:21:52.158181-04'::timestamp with time zone) AND (dob <=\n> '1985-08-26 0\n> 8:21:52.158181-04'::timestamp with time zone))\n> Total runtime: 1042.54 msec\n> (6 rows)\n>\n> They are loaded with the exact same dataset - 53k rows, ~10MB\n> Notice the estimates are roughly the same, but the execution time is\n> different.\n>\n> I don't think it is the IO system, since 10MB will be cached by the OS and\n> iostat reports no activity on the disks (when running the query many\n> times over and over and in parellel). it is a simple query..\n>\n> Could it just be that the sun sucks? (And for the record - same schema,\n> nearly same query (modified for datetime syntax) on informix runs in 3\n> seconds).\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n", "msg_date": "Tue, 26 Aug 2003 10:48:10 -0700", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun vs a P2. Interesting results." }, { "msg_contents": "Also, after having taken another look at this, you aren't preforming the same \nquery on both datasets, so you can't expect them to generate the same \nresults, or the same query plans, or even comparable times. Please retry your \ntests with identical queries , specify the dates, don;t use a function like \nnow() to retrieve them.\n\n\nOn Tuesday 26 August 2003 05:34, Jeff wrote:\n> Here's an interesting situation, and I think it may be just that Sun\n> stinks.\n>\n> I was recently given the go ahead to switch from Informix to Postgres on\n> one of our properties. 
(I had dozens of performance comparisons showing\n> how slow Informix was compared to it and my boss seeing me struggle trying\n> to make it run fast while Postgres, nearly out of the box, was simply\n> spanking it.).\n>\n>\n> Well, in order to facilitate things we were going to run pg on a 4 cpu\n> (ultrasparc ii 400Mhz) sun with 4gb of memory (also the current informix\n> box. It isn't very loaded). Now I know FreeBSD/Linux is preferred (and\n> where I do a lot of development and testing). But check this out for\n> interesting results.\n>\n> The Hardware:\n> Machine A: 4 CPU Sun Ultrasparc II 400Mhz, 4GB mem, 20GB RAID5, Solaris 8\n> (32 bit mode)\n>\n> Machine B: 2 CPU Pentium II, 450Mhz, 512MB mem, 18GB RAID0 (2 old scsi\n> disks) Linux 2.4.18 (Stock redhat 8.0)\n>\n> The software: PG 7.3.4 compiled myself. (Reading specs from\n> /opt/sfw/lib/gcc-lib/sparc-sun-solaris2.9/2.95.3/specs gcc version 2.95.3\n> 20010315 (release) (The solaris 8 box has no compilers, could this be the\n> issue?) and (Reading specs from\n> /usr/lib/gcc-lib/i386-redhat-linux/3.2/specs\n> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man\n> --infodir=/u\n> sr/share/info --enable-shared --enable-threads=posix --disable-checking\n> --host=i\n> 386-redhat-linux --with-system-zlib --enable-__cxa_atexit\n> Thread model: posix\n> gcc version 3.2 20020903 (Red Hat Linux 8.0 3.2-7))\n>\n> Ok. Maybe the compiler (I'll try installing a newer gcc for sun later\n> today).\n>\n> The PG.conf:\n> shared_buffers = 2000\n> sort_mem = 8192\n> effective_cache_size = 32000\n> tcpip_sockets = true\n>\n> The Schema:\n>\n> userprofile:\n>\n> userkey | character varying(128) |\n> displayname | character varying(128) |\n> displayname_v | boolean | default 'f'\n> name | character varying(128) |\n> name_v | boolean | default 'f'\n> email | character varying(128) |\n> email_v | boolean | default 'f'\n> gender | character varying(1) |\n> gender_v | boolean | default 'f'\n> country | character varying(64) |\n> country_v | boolean | default 'f'\n> zip | character varying(10) |\n> zip_v | boolean | default 'f'\n> city | character varying(128) |\n> city_v | boolean | default 'f'\n> state | character varying(10) |\n> state_v | boolean | default 'f'\n> lang | character varying(2) |\n> lang_v | boolean | default 'f'\n> url | character varying(255) |\n> url_v | boolean | default 'f'\n> phone | character varying(64) |\n> phone_v | boolean | default 'f'\n> phonemobile | character varying(64) |\n> phonemobile_v | boolean | default 'f'\n> phonefax | character varying(64) |\n> phonefax_v | boolean | default 'f'\n> dob | timestamp with time zone |\n> dob_v | boolean | default 'f'\n> interests_v | boolean | default 'f'\n> description | character varying(255) |\n> description2 | character varying(255) |\n> description_v | boolean | default 'f'\n>\n> (Yes, I kknow it isn't good - a lot of it is because it is the same schema\n> I had to use on informix. 
Convienantly you cannot do much with a textblob\n> on infomrix, so I have to use big varchar's, but that is a fiffernt\n> story).\n>\n> The magic query:\n>\n> select userkey, dob, email, gender, country from imuserprofile\n> where gender_v and gender='m'\n> and country_v and country = 'br'\n> and dob_v = 't'\n> and dob >= 'now'::timestamptz - '29 years'::interval\n> and dob <= 'now'::timestamptz - '18 years'::interval\n> order by dob asc\n> limit 20\n> offset 100\n>\n> (Page 5 of male brazillians, 18-29)\n>\n> Now the P2 runs this in about 0.3 seconds, and hte sun box runs it in 1\n> second.\n> Here's the explain analyze's on each:\n>\n> P2:\n> Limit (cost=2484.52..2484.57 rows=20 width=67) (actual\n> time=377.32..377.41 row\n> s=20 loops=1)\n> -> Sort (cost=2484.27..2484.74 rows=186 width=67) (actual\n> time=377.02..377.\n> 21 rows=121 loops=1)\n> Sort Key: dob\n> -> Seq Scan on userprofile (cost=0.00..2477.28 rows=186\n> width=67) (\n> actual time=0.15..350.93 rows=1783 loops=1)\n> Filter: (gender_v AND (gender = 'm'::character varying) AND\n> count\n> ry_v AND (country = 'br'::character varying) AND (dob_v = true) AND (dob\n>\n> >= '197\n>\n> 4-08-26 07:13:15.903437-04'::timestamp with time zone) AND (dob <=\n> '1985-08-26 0\n> 7:13:15.903437-04'::timestamp with time zone))\n> Total runtime: 378.21 msec\n> (6 rows)\n>\n> Sun:\n> Limit (cost=2521.19..2521.24 rows=20 width=67) (actual\n> time=1041.14..1041.20 r\n> ows=20 loops=1)\n> -> Sort (cost=2520.94..2521.39 rows=178 width=67) (actual\n> time=1040.96..104\n> 1.08 rows=121 loops=1)\n> Sort Key: dob\n> -> Seq Scan on userprofile (cost=0.00..2514.28 rows=178\n> width=67) (\n> actual time=0.37..1014.50 rows=1783 loops=1)\n> Filter: (gender_v AND (gender = 'm'::character varying) AND\n> count\n> ry_v AND (country = 'br'::character varying) AND (dob_v = true) AND (dob\n>\n> >= '197\n>\n> 4-08-26 08:21:52.158181-04'::timestamp with time zone) AND (dob <=\n> '1985-08-26 0\n> 8:21:52.158181-04'::timestamp with time zone))\n> Total runtime: 1042.54 msec\n> (6 rows)\n>\n> They are loaded with the exact same dataset - 53k rows, ~10MB\n> Notice the estimates are roughly the same, but the execution time is\n> different.\n>\n> I don't think it is the IO system, since 10MB will be cached by the OS and\n> iostat reports no activity on the disks (when running the query many\n> times over and over and in parellel). it is a simple query..\n>\n> Could it just be that the sun sucks? (And for the record - same schema,\n> nearly same query (modified for datetime syntax) on informix runs in 3\n> seconds).\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n", "msg_date": "Tue, 26 Aug 2003 11:03:48 -0700", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun vs a P2. Interesting results." }, { "msg_contents": "On Tue, 26 Aug 2003, Darcy Buskermolen wrote:\n\n> Also, after having taken another look at this, you aren't preforming the same\n> query on both datasets, so you can't expect them to generate the same\n> results, or the same query plans, or even comparable times. Please retry your\n> tests with identical queries , specify the dates, don;t use a function like\n> now() to retrieve them.\n>\n\nGiven what you said in the previous email and this one here's some new\ninformation. I redid the query to use a static starting time and I ran\n19 beaters in parallel. 
After I send this mail out I'll try it with 40.\n\nNew Query:\n\nselect userkey, dob, email, gender, country from userprofile\nwhere gender_v and gender='m'\n\t and country_v and country = 'br'\n\t and dob_v\n\t and dob\t>= '2003-08-26'::timestamptz - '29\nyears'::interval\n\t and dob <= '2003-08-26'::timestamptz - '18 years'::interval\norder by dob asc\nlimit 20\noffset 100\n\nExplain Analyze's: (basically the same)\nSun:\n Limit (cost=2390.05..2390.10 rows=20 width=67) (actual\ntime=1098.34..1098.39 rows=20 loops=1)\n -> Sort (cost=2389.80..2390.24 rows=178 width=67) (actual\ntime=1098.16..1098.28 rows=121 loops=1)\n Sort Key: dob\n -> Seq Scan on imuserprofile (cost=0.00..2383.14 rows=178\nwidth=67) (actual time=0.38..1068.94 rows=1783 loops=1)\n Filter: (gender_v AND (gender = 'm'::character varying) AND\ncountry_v AND (country = 'br'::character varying) AND dob_v AND (dob >=\n'1974-08-26 00:00:00-04'::timestamp with time zone) AND (dob <=\n'1985-08-26 00:00:00-04'::timestamp with time zone))\n Total runtime: 1099.93 msec\n(6 rows)\n\n\np2\n\n Limit (cost=2353.38..2353.43 rows=20 width=67) (actual\ntime=371.75..371.83 rows=20 loops=1)\n -> Sort (cost=2353.13..2353.60 rows=186 width=67) (actual\ntime=371.46..371.63 rows=121 loops=1)\n Sort Key: dob\n -> Seq Scan on imuserprofile (cost=0.00..2346.14 rows=186\nwidth=67) (actual time=0.17..345.53 rows=1783 loops=1)\n Filter: (gender_v AND (gender = 'm'::character varying) AND\ncountry_v AND (country = 'br'::character varying) AND dob_v AND (dob >=\n'1974-08-26 00:00:00-04'::timestamp with time zone) AND (dob <=\n'1985-08-26 00:00:00-04'::timestamp with time zone))\n Total runtime: 372.63 msec\n(6 rows)\n\n\nI ran this query 100 times per beater (no prepared queries) and ran\n19 beaters in parellel.\n\nP2 Machine: 345sec avg\nSun: 565sec avg\n\n\n\nI know solaris/sun isn't the preferred pg platform, and we have plenty of\ncapicty even with these numbers, I just find it a little suprising the\nspeed difference.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Tue, 26 Aug 2003 14:41:36 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun vs a P2. Interesting results." }, { "msg_contents": "I'm still seeing differences in the planner estimates, have you run a VACUUM \nANALYZE prior to running these tests?\n\nAlso, are the disk subsystems in these 2 systems the same? You may be seeing \nsome discrepancies in things spindle speed, U160 vs U320, throughput on \nspecific RAID controlers, different blocksize, ect.\n\n\nOn Tuesday 26 August 2003 11:41, Jeff wrote:\n> On Tue, 26 Aug 2003, Darcy Buskermolen wrote:\n> > Also, after having taken another look at this, you aren't preforming the\n> > same query on both datasets, so you can't expect them to generate the\n> > same results, or the same query plans, or even comparable times. Please\n> > retry your tests with identical queries , specify the dates, don;t use a\n> > function like now() to retrieve them.\n>\n> Given what you said in the previous email and this one here's some new\n> information. I redid the query to use a static starting time and I ran\n> 19 beaters in parallel. 
After I send this mail out I'll try it with 40.\n>\n> New Query:\n>\n> select userkey, dob, email, gender, country from userprofile\n> where gender_v and gender='m'\n> \t and country_v and country = 'br'\n> \t and dob_v\n> \t and dob\t>= '2003-08-26'::timestamptz - '29\n> years'::interval\n> \t and dob <= '2003-08-26'::timestamptz - '18 years'::interval\n> order by dob asc\n> limit 20\n> offset 100\n>\n> Explain Analyze's: (basically the same)\n> Sun:\n> Limit (cost=2390.05..2390.10 rows=20 width=67) (actual\n> time=1098.34..1098.39 rows=20 loops=1)\n> -> Sort (cost=2389.80..2390.24 rows=178 width=67) (actual\n> time=1098.16..1098.28 rows=121 loops=1)\n> Sort Key: dob\n> -> Seq Scan on imuserprofile (cost=0.00..2383.14 rows=178\n> width=67) (actual time=0.38..1068.94 rows=1783 loops=1)\n> Filter: (gender_v AND (gender = 'm'::character varying) AND\n> country_v AND (country = 'br'::character varying) AND dob_v AND (dob >=\n> '1974-08-26 00:00:00-04'::timestamp with time zone) AND (dob <=\n> '1985-08-26 00:00:00-04'::timestamp with time zone))\n> Total runtime: 1099.93 msec\n> (6 rows)\n>\n>\n> p2\n>\n> Limit (cost=2353.38..2353.43 rows=20 width=67) (actual\n> time=371.75..371.83 rows=20 loops=1)\n> -> Sort (cost=2353.13..2353.60 rows=186 width=67) (actual\n> time=371.46..371.63 rows=121 loops=1)\n> Sort Key: dob\n> -> Seq Scan on imuserprofile (cost=0.00..2346.14 rows=186\n> width=67) (actual time=0.17..345.53 rows=1783 loops=1)\n> Filter: (gender_v AND (gender = 'm'::character varying) AND\n> country_v AND (country = 'br'::character varying) AND dob_v AND (dob >=\n> '1974-08-26 00:00:00-04'::timestamp with time zone) AND (dob <=\n> '1985-08-26 00:00:00-04'::timestamp with time zone))\n> Total runtime: 372.63 msec\n> (6 rows)\n>\n>\n> I ran this query 100 times per beater (no prepared queries) and ran\n> 19 beaters in parellel.\n>\n> P2 Machine: 345sec avg\n> Sun: 565sec avg\n>\n>\n>\n> I know solaris/sun isn't the preferred pg platform, and we have plenty of\n> capicty even with these numbers, I just find it a little suprising the\n> speed difference.\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n\n\n", "msg_date": "Tue, 26 Aug 2003 12:01:55 -0700", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun vs a P2. Interesting results." }, { "msg_contents": "On Tue, 26 Aug 2003, Darcy Buskermolen wrote:\n\n> I'm still seeing differences in the planner estimates, have you run a VACUUM\n> ANALYZE prior to running these tests?\n>\nI did. I shall retry that.. but the numbers (the cost estimates) are\npretty close on both. the actual times are very different.\n\n> Also, are the disk subsystems in these 2 systems the same? You may be seeing\n> some discrepancies in things spindle speed, U160 vs U320, throughput on\n> specific RAID controlers, different blocksize, ect.\n>\n\nAs I said in my first email IO isn't the problem here - the data set is\nsmall enough that it is all cached (~10MB). iostat reports 0 activity on\nthe disks on both the sun and p2.\n\nand I just ran teh test again with 40 clients: 730s for hte p2, 1100 for\nthe sun. (0% idle on both of them, no IO). I think the next I may try is\nrecompiling with a newer gcc.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Tue, 26 Aug 2003 15:05:12 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun vs a P2. 
Interesting results." }, { "msg_contents": "On Tue, Aug 26, 2003 at 03:05:12PM -0400, Jeff wrote:\n> On Tue, 26 Aug 2003, Darcy Buskermolen wrote:\n> > I'm still seeing differences in the planner estimates, have you run a VACUUM\n> > ANALYZE prior to running these tests?\n> >\n> I did. I shall retry that.. but the numbers (the cost estimates) are\n> pretty close on both. the actual times are very different.\n\nI don't see why you need to bother, the query plans & cost estimates\nare similar enough I doubt that's the problem.\n \n> As I said in my first email IO isn't the problem here - the data set is\n> small enough that it is all cached (~10MB). iostat reports 0 activity on\n> the disks on both the sun and p2.\n\nWould it be possible to get a profile (e.g. gprof output) for a postgres\nbackend executing the query on the Sun machine?\n\n-Neil\n\n", "msg_date": "Tue, 26 Aug 2003 15:37:29 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun vs a P2. Interesting results." }, { "msg_contents": "On 26 Aug 2003 at 8:34, Jeff wrote:\n\n> Could it just be that the sun sucks? (And for the record - same schema,\n> nearly same query (modified for datetime syntax) on informix runs in 3\n> seconds).\n\nMy impression is IPC on sun has higher initial latency than linux. But given \nthat you also ran the tests with multiple connections and results were of \nsimilar pattern, I can't help to conclude that postgresql has to find a way to \nhave faste IPC on solaris.\n\nI know I will be flamed for repeatedly making this suggestion. But what does \ntests with sparc linux on same machine yield?\n\nBye\n Shridhar\n\n--\nuntold wealth, n.:\tWhat you left out on April 15th.\n\n", "msg_date": "Wed, 27 Aug 2003 12:24:40 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun vs a P2. Interesting results." }, { "msg_contents": "On Tue, 26 Aug 2003, Neil Conway wrote:\n>\n> Would it be possible to get a profile (e.g. gprof output) for a postgres\n> backend executing the query on the Sun machine?\n>\nHeh. Never thought of doing a profile!\n\nI attached the entire gprof output, but here's the top few functions.\n\nI did the test, 1 beater, 100 searches: 148 seconds total.\n\n 30.9 45.55 45.55 nocachegetattr [16]\n 16.0 69.20 23.65 internal_mcount [22]\n 6.9 79.37 10.17 5245902 0.00 0.00 heapgettup [21]\n 6.0 88.28 8.91 3663201 0.00 0.00\nExecMakeFunctionResult\n<cycle 5> [23]\n 5.4 96.27 7.99 11431400 0.00 0.00 ExecEvalVar [25]\n 3.0 100.73 4.46 18758201 0.00 0.00 ExecEvalExpr\n<cycle 5\n> [24]\n 3.0 105.17 4.44 5246005 0.00 0.00 AllocSetReset [29]\n 2.5 108.89 3.72 5245700 0.00 0.00\nHeapTupleSatisfiesSnapshot\n [30]\n 2.0 111.78 2.89 5650632 0.00 0.00 LWLockRelease [32]\n 1.6 114.10 2.32 5650632 0.00 0.00 LWLockAcquire [34]\n 1.6 116.40 2.30 5245800 0.00 0.01 SeqNext [17]\n 1.4 118.54 2.14 5438301 0.00 0.00 ExecStoreTuple [27]\n 1.4 120.62 2.08 5245700 0.00 0.01 ExecQual [18]\n 1.3 122.50 1.88 5379202 0.00 0.00 ReleaseAndReadBuffer\n[35]\n 1.1 124.16 1.66 178400 0.01 0.40 ExecScan [15]\n 1.1 125.80 1.64 _mcount (6247)\n 1.1 127.41 1.61 5245902 0.00 0.01 heap_getnext [20]\n\n\n.. 
as it turns out the profile gzipped is still huge (100kb) so I put it\non my web server - snag it at\n\nhttp://www.jefftrout.com/~threshar/postgres/postgres-7.3.4-sol8-gprof.txt.gz\n\nI'll do a profile for hte p2 and send post that in an hour or two\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Wed, 27 Aug 2003 08:33:34 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun vs a P2. Interesting results." }, { "msg_contents": "Well, installing gcc 3.3.1 and using -mcpu=v9 didn't help. in fact it made\nthings worse. Unless someone has something clever I'm just gonna stop\ntinkering with it - my goal was met (it is several orders of magnitude\nfaster than informix ) and the hardware is being replaced in a month or\ntwo.\n\nthanks for the ideas / comments.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Wed, 27 Aug 2003 11:51:29 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sun vs a P2. Interesting results." }, { "msg_contents": "Jeff <[email protected]> writes:\n> I'll do a profile for hte p2 and send post that in an hour or two\n\nPlease redo the linux profile after recompiling postmaster.c with\n-DLINUX_PROFILE added (I use \"make PROFILE='-pg -DLINUX_PROFILE'\"\nwhen building for profile on Linux).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Aug 2003 23:15:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sun vs a P2. Interesting results. " } ]
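Whatever explains the Sun-vs-P2 gap, the gprof output above says both boxes spend most of their time fetching and de-forming every one of the ~53k heap tuples (nocachegetattr, heapgettup, ExecEvalVar), because the plan is a filtered Seq Scan plus sort. For reference, a minimal sketch of an index that would sidestep that per-tuple work for this particular query: the index name is invented, the columns are the ones in the query above, and whether the 7.3 planner actually chooses it (and accepts the bare boolean clauses when matching the partial-index predicate) needs to be confirmed with EXPLAIN ANALYZE.

-- Hypothetical partial index matching the male/Brazil/dob-valid filter,
-- keyed on the sort column so the first 120 rows (offset 100 + limit 20)
-- can be read in dob order instead of scanning and sorting all 53k rows.
CREATE INDEX userprofile_m_br_dob ON userprofile (dob)
    WHERE gender_v AND gender = 'm'
      AND country_v AND country = 'br'
      AND dob_v;

ANALYZE userprofile;

-- Re-run the test query and check that the Seq Scan is gone:
EXPLAIN ANALYZE
SELECT userkey, dob, email, gender, country
  FROM userprofile
 WHERE gender_v AND gender = 'm'
   AND country_v AND country = 'br'
   AND dob_v
   AND dob >= '2003-08-26'::timestamptz - '29 years'::interval
   AND dob <= '2003-08-26'::timestamptz - '18 years'::interval
 ORDER BY dob ASC
 LIMIT 20 OFFSET 100;

Even if the index is used, it only changes the absolute numbers on both machines; the relative per-tuple CPU cost difference between the Sun and the P2 is still worth chasing with the profiles.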
[ { "msg_contents": "need input on parameter values on confs...\n\nour database is getting 1000 transactions/sec on peak periods..\n\nsitting on RH 7.3 \n2.4.7-10smp\nRAM: 1028400\nSWAP: 2040244\n\nqueries are just simple select statements based on timestamps, varchars...\nless on joins... on a 300K rows..\n\n\nTIA\n", "msg_date": "Tue, 26 Aug 2003 21:42:52 +0800", "msg_from": "JM <[email protected]>", "msg_from_op": true, "msg_subject": "Best tweak for fast results.. ?" }, { "msg_contents": "On Tue, 2003-08-26 at 08:42, JM wrote:\n> need input on parameter values on confs...\n> \n> our database is getting 1000 transactions/sec on peak periods..\n> \n> sitting on RH 7.3 \n> 2.4.7-10smp\n> RAM: 1028400\n> SWAP: 2040244\n> \n> queries are just simple select statements based on timestamps, varchars...\n> less on joins... on a 300K rows..\n\nCould it be that 1000tps is as good as your h/w can do? You didn't\nmention what kind and speed of CPU(s), SCSI-or-IDE controller(s) and\ntype/speed of disk(s) you have.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Oh, great altar of passive entertainment, bestow upon me thy \ndiscordant images at such speed as to render linear thought impossible\"\nCalvin, regarding TV\n\n", "msg_date": "Tue, 26 Aug 2003 12:11:41 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Best tweak for fast results.. ?" }, { "msg_contents": "actually, isin't 1000 tps pretty good?\n\nRon Johnson wrote:\n\n>On Tue, 2003-08-26 at 08:42, JM wrote:\n> \n>\n>>need input on parameter values on confs...\n>>\n>>our database is getting 1000 transactions/sec on peak periods..\n>>\n>>sitting on RH 7.3 \n>>2.4.7-10smp\n>>RAM: 1028400\n>>SWAP: 2040244\n>>\n>>queries are just simple select statements based on timestamps, varchars...\n>>less on joins... on a 300K rows..\n>> \n>>\n>\n>Could it be that 1000tps is as good as your h/w can do? You didn't\n>mention what kind and speed of CPU(s), SCSI-or-IDE controller(s) and\n>type/speed of disk(s) you have.\n>\n> \n>\n\n", "msg_date": "Tue, 26 Aug 2003 10:25:28 -0700", "msg_from": "Dennis Gearon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Best tweak for fast results.. ?" }, { "msg_contents": "On Tuesday 26 August 2003 14:42, JM wrote:\n> need input on parameter values on confs...\n>\n> our database is getting 1000 transactions/sec on peak periods..\n>\n> sitting on RH 7.3\n> 2.4.7-10smp\n> RAM: 1028400\n> SWAP: 2040244\n>\n> queries are just simple select statements based on timestamps, varchars...\n> less on joins... on a 300K rows..\n\nAssuming you're getting good query plans (check the output of EXPLAIN \nANALYSE)...\n\nStart by checking the output of vmstat/iostat during busy periods - this will \ntell you whether CPU/IO/RAM is the bottleneck.\n\nThere is a good starter for tuning PG at:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\nAssuming your rows aren't too wide, they're probably mostly cached by Linux, \nso you probably don't want to overdo the shared buffers/sort memory and make \nsure the effective cache size is accurate.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 26 Aug 2003 18:44:19 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best tweak for fast results.. ?" 
}, { "msg_contents": "On Tue, 26 Aug 2003, JM wrote:\n\n> need input on parameter values on confs...\n> \n> our database is getting 1000 transactions/sec on peak periods..\n> \n> sitting on RH 7.3 \n> 2.4.7-10smp\n> RAM: 1028400\n> SWAP: 2040244\n\n1: Upgrade your kernel. 2.4.7 on RH3 was updated to 2.4.18-24 in March, \nand the 2.4.18 kernel is MUCH faster and has many bugs squashed.\n\n2: Upgrade to the latest stable version of postgresql, 7.3.4\n\n3: Make sure your kernels file-nr settings, and shm settings are big \nenough to handle load. \n\n4: Edit the $PGDATA/postgresql.conf file to reflect all that extra cache \nyou've got etc.... \n\nshared_buffers = 5000\nsort_mem = 16384\neffective_cache_size = (size of cache/buffer mem divided by 8192)\n\n5: Look at moving WAL to it's own spindle(s), as it is often the choke \npoint when doing lots of transactions.\n\n6: Look at using more drives in a RAID 1+0 array for the data (as well as \na seperate one for WAL if you can afford it.)\n\n7: Make sure your drives are mounted noatime.\n\n8: If you don't mind living dangerously, or the data can be reproduced \nfrom source files (i.e. catastrophic failure of your data set won't set \nyou back) look at both mounting the drives async (the default for linux, \nslightly dangerous) and turning fsync off (quite dangerous, in case of \ncrashed hardware / OS, you very well might lose data.\n\n", "msg_date": "Tue, 26 Aug 2003 16:10:35 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best tweak for fast results.. ?" } ]
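Richard's advice to start with EXPLAIN ANALYZE is worth making concrete, since a plan problem would swamp any postgresql.conf change. Below is a minimal sketch of the check for the timestamp-based selects described above; the table and column names (hits, stamp) are invented for illustration because the real schema was not posted.

-- Stand-in for the 300K-row table of timestamped rows described above.
CREATE INDEX hits_stamp_idx ON hits (stamp);
ANALYZE hits;

-- Want an Index Scan using hits_stamp_idx here; a Seq Scan over 300K rows
-- repeated ~1000 times a second would pin the CPUs on its own.
EXPLAIN ANALYZE
SELECT *
  FROM hits
 WHERE stamp >= '2003-08-26 00:00'::timestamptz
   AND stamp <  '2003-08-26 01:00'::timestamptz;

As a worked example of scott's effective_cache_size formula: if free reports roughly 800MB of the 1GB box sitting in cache/buffers, that is 800 * 1024 / 8 = 102400 8KB pages, so effective_cache_size = 102400.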
[ { "msg_contents": "I'm wondering if the good people out there could perhaps give me some\npointers on suitable hardware to solve an upcoming performance issue. \nI've never really dealt with these kinds of loads before, so any\nexperience you guys have would be invaluable. Apologies in advance for\nthe amount of info below...\n\nMy app is likely to come under some serious load in the next 6 months,\nbut the increase will be broadly predictable, so there is time to throw\nhardware at the problem.\n\nCurrently I have a ~1GB DB, with the largest (and most commonly accessed\nand updated) two tables having 150,000 and 50,000 rows.\n\nA typical user interaction with the system involves about 15\nsingle-table selects, 5 selects with joins or subqueries, 3 inserts, and\n3 updates. The current hardware probably (based on benchmarking and\nprofiling) tops out at about 300 inserts/updates *or* 2500 selects per\nsecond.\n\nThere are multiple indexes on each table that updates & inserts happen\non. These indexes are necessary to provide adequate select performance.\n\nCurrent hardware/software:\nQuad 700MHz PIII Xeon/1MB cache\n3GB RAM\nRAID 10 over 4 18GB/10,000rpm drives\n128MB battery backed controller cache with write-back enabled\nRedhat 7.3, kernel 2.4.20\nPostgres 7.2.3 (stock redhat issue)\n\nI need to increase the overall performance by a factor of 10, while at\nthe same time the DB size increases by a factor of 50. e.g. 3000\ninserts/updates or 25,000 selects per second, over a 25GB database with\nmost used tables of 5,000,000 and 1,000,000 rows.\n\nNotably, the data is very time-sensitive, so the active dataset at any\nhour is almost certainly going to be more on the order of 5GB than 25GB\n(plus I'll want all the indexes in RAM of course).\n\nAlso, and importantly, the load comes but one hour per week, so buying a\nStarfire isn't a real option, as it'd just sit idle the rest of the\ntime. I'm particularly interested in keeping the cost down, as I'm a\nshareholder in the company!\n\nSo what do I need? Can anyone who has (or has ever had) that kind of\nload in production offer any pointers, anecdotes, etc? Any theoretical\nmusings also more than welcome. Comments upon my sanity will be\nreferred to my doctor.\n\nIf the best price/performance option is a second hand 32-cpu Alpha\nrunning VMS I'd be happy to go that way ;-)\n\nMany thanks for reading this far.\n\nMatt\n\n\n\n", "msg_date": "27 Aug 2003 02:35:13 +0100", "msg_from": "matt <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware recommendations to scale to silly load" }, { "msg_contents": "matt wrote:\n> I'm wondering if the good people out there could perhaps give me some\n> pointers on suitable hardware to solve an upcoming performance issue. \n> I've never really dealt with these kinds of loads before, so any\n> experience you guys have would be invaluable. Apologies in advance for\n> the amount of info below...\n> \n> My app is likely to come under some serious load in the next 6 months,\n> but the increase will be broadly predictable, so there is time to throw\n> hardware at the problem.\n> \n> Currently I have a ~1GB DB, with the largest (and most commonly accessed\n> and updated) two tables having 150,000 and 50,000 rows.\n> \n> A typical user interaction with the system involves about 15\n> single-table selects, 5 selects with joins or subqueries, 3 inserts, and\n> 3 updates. 
The current hardware probably (based on benchmarking and\n> profiling) tops out at about 300 inserts/updates *or* 2500 selects per\n> second.\n> \n> There are multiple indexes on each table that updates & inserts happen\n> on. These indexes are necessary to provide adequate select performance.\n\nAre you sure? Have you tested the overall application to see if possibly\nyou gain more on insert performance than you lose on select performanc?\n\n(Hey, you asked for musings ...)\n\n> Current hardware/software:\n> Quad 700MHz PIII Xeon/1MB cache\n> 3GB RAM\n> RAID 10 over 4 18GB/10,000rpm drives\n> 128MB battery backed controller cache with write-back enabled\n> Redhat 7.3, kernel 2.4.20\n> Postgres 7.2.3 (stock redhat issue)\n\nIt's possible that compiling Postgres manually with proper optimizations\ncould yield some improvements, as well as building a custom kernel in\nRedhat.\n\nAlso, you don't mention which filesystem you're using:\nhttp://www.potentialtech.com/wmoran/postgresql.php\n\n> I need to increase the overall performance by a factor of 10, while at\n> the same time the DB size increases by a factor of 50. e.g. 3000\n> inserts/updates or 25,000 selects per second, over a 25GB database with\n> most used tables of 5,000,000 and 1,000,000 rows.\n> \n> Notably, the data is very time-sensitive, so the active dataset at any\n> hour is almost certainly going to be more on the order of 5GB than 25GB\n> (plus I'll want all the indexes in RAM of course).\n> \n> Also, and importantly, the load comes but one hour per week, so buying a\n> Starfire isn't a real option, as it'd just sit idle the rest of the\n> time. I'm particularly interested in keeping the cost down, as I'm a\n> shareholder in the company!\n\nI can't say for sure without looking at your application overall, but\nmany applications I've seen could be optimized. It's usually a few\nseconds here and there that take hours to find and tweak.\n\nBut if you're in the situation where you have more time than money,\nyou may find that an overall audit of your app is worthwhile. Consider\ntaking parts that are in perl (for example) and recoding them into C\n(that is, unless you've already identified that all the bottlenecks are\nat the PostgreSQL server)\n\nI doubt if the suggestions I've made are going to get you 10x, but they\nmay get you 2x, and then you only need the hardware to do 5x.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Tue, 26 Aug 2003 22:11:48 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Tue, 2003-08-26 at 20:35, matt wrote:\n> I'm wondering if the good people out there could perhaps give me some\n> pointers on suitable hardware to solve an upcoming performance issue. \n> I've never really dealt with these kinds of loads before, so any\n> experience you guys have would be invaluable. Apologies in advance for\n> the amount of info below...\n> \n> My app is likely to come under some serious load in the next 6 months,\n> but the increase will be broadly predictable, so there is time to throw\n> hardware at the problem.\n> \n> Currently I have a ~1GB DB, with the largest (and most commonly accessed\n> and updated) two tables having 150,000 and 50,000 rows.\n> \n> A typical user interaction with the system involves about 15\n> single-table selects, 5 selects with joins or subqueries, 3 inserts, and\n> 3 updates. 
The current hardware probably (based on benchmarking and\n> profiling) tops out at about 300 inserts/updates *or* 2500 selects per\n> second.\n> \n> There are multiple indexes on each table that updates & inserts happen\n> on. These indexes are necessary to provide adequate select performance.\n> \n> Current hardware/software:\n> Quad 700MHz PIII Xeon/1MB cache\n> 3GB RAM\n> RAID 10 over 4 18GB/10,000rpm drives\n> 128MB battery backed controller cache with write-back enabled\n\nMuch more cache needed. Say 512MB per controller?\n\n> Redhat 7.3, kernel 2.4.20\n> Postgres 7.2.3 (stock redhat issue)\n\nUpgrade to Pg 7.3.4!\n\n> I need to increase the overall performance by a factor of 10, while at\n> the same time the DB size increases by a factor of 50. e.g. 3000\n\nAre you *sure* about that???? 3K updates/inserts per second xlates\nto 10,800,000 per hour. That, my friend, is a WHOLE HECK OF A LOT!\n\n> inserts/updates or 25,000 selects per second, over a 25GB database with\n\nLikewise: 90,000,000 selects per hour.\n\n> most used tables of 5,000,000 and 1,000,000 rows.\n> \n> Notably, the data is very time-sensitive, so the active dataset at any\n\nDuring the 1 hour surge, will SELECTs at 10 minutes after the \nhour depend on INSERTs at 5 minutes after the hour?\n\nIf not, maybe you could pump the INSERT/UPDATE records into\nflat files, to be processed after the 1-hour surge is complete.\nThat may reduce the h/w requirements.\n\n> hour is almost certainly going to be more on the order of 5GB than 25GB\n> (plus I'll want all the indexes in RAM of course).\n> \n> Also, and importantly, the load comes but one hour per week, so buying a\n\nOnly one hour out of 168????? May I ask what kind of app it is?\n\n> Starfire isn't a real option, as it'd just sit idle the rest of the\n> time. I'm particularly interested in keeping the cost down, as I'm a\n> shareholder in the company!\n\nWhat a fun exercises. Ok, lets see:\nPostgres 7.3.4\nRH AS 2.1\n12GB RAM\nmotherboard with 64 bit 66MHz PCI slots\n4 - Xenon 3.0GHz (1MB cache) CPUs\n8 - 36GB 15K RPM as RAID10 on a 64 bit 66MHz U320 controller\n having 512MB cache (for database)\n2 - 36GB 15K RPM as RAID1 on a 64 bit 66MHz U320 controller\n having 512MB cache (for OS, swap, WAL files)\n1 - library tape drive plugged into the OS' SCSI controller. I\n prefer DLT, but that's my DEC bias.\n1 - 1000 volt UPS.\n\nIf you know when the flood will be coming, you could perform\nSELECT * FROM ... WHERE statements on an indexed field, to\npull the relevant data into Linux's buffers.\n\nYes, the 8 disks is capacity-overkill, but the 8 high-speed\nspindles is what you're looking for.\n\n> So what do I need? Can anyone who has (or has ever had) that kind of\n> load in production offer any pointers, anecdotes, etc? Any theoretical\n> musings also more than welcome. Comments upon my sanity will be\n> referred to my doctor.\n> \n> If the best price/performance option is a second hand 32-cpu Alpha\n> running VMS I'd be happy to go that way ;-)\n\nI'd love to work on a GS320! You may even pick one up for a million\nor 2. The license costs for VMS & Rdb would eat you, though.\n\nRdb *does* have ways, though, using large buffers and hashed indexes,\nwith the table tuples stored on the same page as the hashed index\nkeys, to make such accesses *blazingly* fast.\n\n> Many thanks for reading this far.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. 
[email protected]\nJefferson, LA USA\n\n\"A C program is like a fast dance on a newly waxed dance floor \nby people carrying razors.\"\nWaldi Ravens\n\n", "msg_date": "Tue, 26 Aug 2003 21:59:06 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Wed, Aug 27, 2003 at 02:35:13AM +0100, matt wrote:\n\n> I need to increase the overall performance by a factor of 10, while at\n> the same time the DB size increases by a factor of 50. e.g. 3000\n> inserts/updates or 25,000 selects per second, over a 25GB database with\n> most used tables of 5,000,000 and 1,000,000 rows.\n\nYour problem is mostly going to be disk related. You can only get in\nthere as many tuples in a second as your disk rotates per second. I\nsuspect what you need is really expensive disk hardware (sorry to\ntell you that) set up as RAID 1+0 on fibre channel or something. \n3000 write transactions per second is probably too much to ask for\nany standard hardware.\n\nBut given that you are batching this once a week, and trying to avoid\nbig expenses, are you use this is the right approach? Perhaps you\nshould consider a redesign using COPY and such?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 27 Aug 2003 07:40:07 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "> Are you sure? Have you tested the overall application to see if possibly\n> you gain more on insert performance than you lose on select performanc?\n\nUnfortunately dropping any of the indexes results in much worse select\nperformance that is not remotely clawed back by the improvement in\ninsert performance.\n\nActually there doesn't really seem to *be* that much improvement in\ninsert performance when going from 3 indexes to 2. I guess indexes must\nbe fairly cheap for PG to maintain?\n\n> It's possible that compiling Postgres manually with proper optimizations\n> could yield some improvements, as well as building a custom kernel in\n> Redhat.\n> \n> Also, you don't mention which filesystem you're using:\n> http://www.potentialtech.com/wmoran/postgresql.php\n\nYeah, I can imagine getting 5% extra from a slim kernel and\nsuper-optimised PG.\n\nThe FS is ext3, metadata journaling (the default), mounted noatime.\n\n> But if you're in the situation where you have more time than money,\n> you may find that an overall audit of your app is worthwhile. Consider\n> taking parts that are in perl (for example) and recoding them into C\n> (that is, unless you've already identified that all the bottlenecks are\n> at the PostgreSQL server)\n\nI can pretty cheaply add more CPU horsepower for the app servers, as\nthey scale horizontally, so I can chuck in a couple (or 3, or 4, or ...)\nmore dual-cpu boxen with a gig of ram and tell the load balancer about\nthem. 
The problem with the DB is that that approach simply won't work -\nthe box just has to get bigger!\n\n> I doubt if the suggestions I've made are going to get you 10x, but they\n> may get you 2x, and then you only need the hardware to do 5x.\n\nIt all helps :-) A few percent here, a few percent there, pretty soon\nyou're talking serious improvements...\n\nThanks\n\nMatt\n\n", "msg_date": "27 Aug 2003 14:17:44 +0100", "msg_from": "matt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "matt wrote:\n>>Are you sure? Have you tested the overall application to see if possibly\n>>you gain more on insert performance than you lose on select performanc?\n> \n> Unfortunately dropping any of the indexes results in much worse select\n> performance that is not remotely clawed back by the improvement in\n> insert performance.\n\nBummer. It was just a thought: never assume dropping indexes will hurt\nperformance. But, since you've obviously tested ...\n\n> Actually there doesn't really seem to *be* that much improvement in\n> insert performance when going from 3 indexes to 2. I guess indexes must\n> be fairly cheap for PG to maintain?\n\nDon't know how \"cheap\" they are.\n\nI have an app that does large batch updates. I found that if I dropped\nthe indexes, did the updates and recreated the indexes, it was faster\nthan doing the updates while the indexes were intact.\n\nIt doesn't sound like your app can use that approach, but I thought I'd\nthrow it out there.\n\n>>It's possible that compiling Postgres manually with proper optimizations\n>>could yield some improvements, as well as building a custom kernel in\n>>Redhat.\n>>\n>>Also, you don't mention which filesystem you're using:\n>>http://www.potentialtech.com/wmoran/postgresql.php\n> \n> Yeah, I can imagine getting 5% extra from a slim kernel and\n> super-optimised PG.\n> \n> The FS is ext3, metadata journaling (the default), mounted noatime.\n\next3 is more reliable than ext2, but it's 1.1x slower. You can squeeze\na little performance by using Reiser or JFS, if you're not willing to\ntake the risk of ext2, either way, it's a pretty minor improvement.\n\nDoes noatime make much difference on a PostgreSQL database? I haven't\ntested that yet.\n\n>>But if you're in the situation where you have more time than money,\n>>you may find that an overall audit of your app is worthwhile. Consider\n>>taking parts that are in perl (for example) and recoding them into C\n>>(that is, unless you've already identified that all the bottlenecks are\n>>at the PostgreSQL server)\n> \n> I can pretty cheaply add more CPU horsepower for the app servers, as\n> they scale horizontally, so I can chuck in a couple (or 3, or 4, or ...)\n> more dual-cpu boxen with a gig of ram and tell the load balancer about\n> them. The problem with the DB is that that approach simply won't work -\n> the box just has to get bigger!\n\nCan you split it onto multiple boxes? Some database layouts lend themselves\nto this, others don't. Obviously you can't do joins from one server to\nanother, so you may lose more in multiple queries than you gain by having\nmultiple servers. It's worth looking into though.\n\nI know my answers aren't quite the ones you were looking for, but my\nexperience is that many people try to solve poor application design\nby simply throwing bigger hardware at the problem. 
It appears as though\nyou've already done your homework, though.\n\nHope this has been _some_ help.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Wed, 27 Aug 2003 10:17:46 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "\nHi,\n\ncan anyone point me to information regarding this please?\n\nObjective is to find entries that match one (or more) supplied strings in \ntwo tables. The first has about 20.000 entries with 1 varchar field to \ncheck, the other about 40.000 with 5 varchar fields to check. The currently \nused sequential scan is getting too expensive.\n\nThanks,\n Fabian\n\n", "msg_date": "Wed, 27 Aug 2003 16:36:14 +0200", "msg_from": "Fabian Kreitner <[email protected]>", "msg_from_op": false, "msg_subject": "Improving simple textsearch?" }, { "msg_contents": "> Don't know how \"cheap\" they are.\n> \n> I have an app that does large batch updates. I found that if I dropped\n> the indexes, did the updates and recreated the indexes, it was faster\n> than doing the updates while the indexes were intact.\n\nYeah, unfortunately it's not batch work, but real time financial work. \nIf I drop all the indexes my select performance goes through the floor,\nas you'd expect.\n\n> Does noatime make much difference on a PostgreSQL database? I haven't\n> tested that yet.\n\nYup, it does. In fact it should probably be in the standard install\ndocumentation (unless someone has a reason why it shouldn't). Who\n*cares* when PG last looked at the tables? If 'nomtime' was available\nthat would probably be a good thing too.\n\n> Can you split it onto multiple boxes? Some database layouts lend themselves\n> to this, others don't. Obviously you can't do joins from one server to\n> another, so you may lose more in multiple queries than you gain by having\n> multiple servers. It's worth looking into though.\n\nI'm considering that. There are some tables which I might be able to\nsplit out. There amy even be some things I can pull from the DB\naltogether (session info in particular, so long as I can reliably send a\ngiven user's requests to the same app server each time, bearing in mind\nI can't see the cookies too easily because 50% of the requests are over\nSSL)\n\n> I know my answers aren't quite the ones you were looking for, but my\n> experience is that many people try to solve poor application design\n> by simply throwing bigger hardware at the problem. It appears as though\n> you've already done your homework, though.\n\nWell, I *hope* that's the case! The core issue is simply that we have\nto deal with an insane load for 1 hour a week, and there's just no\navoiding it.\n\nMaybe I can get Sun/HP/IBM to lend some gear (it's a pretty high-profile\nsite).\n\n", "msg_date": "27 Aug 2003 15:37:14 +0100", "msg_from": "matt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On 27 Aug 2003, matt wrote:\n\n> I'm wondering if the good people out there could perhaps give me some\n> pointers on suitable hardware to solve an upcoming performance issue. \n> I've never really dealt with these kinds of loads before, so any\n> experience you guys have would be invaluable. 
Apologies in advance for\n> the amount of info below...\n> \n> My app is likely to come under some serious load in the next 6 months,\n> but the increase will be broadly predictable, so there is time to throw\n> hardware at the problem.\n> \n> Currently I have a ~1GB DB, with the largest (and most commonly accessed\n> and updated) two tables having 150,000 and 50,000 rows.\n> \n> A typical user interaction with the system involves about 15\n> single-table selects, 5 selects with joins or subqueries, 3 inserts, and\n> 3 updates. The current hardware probably (based on benchmarking and\n> profiling) tops out at about 300 inserts/updates *or* 2500 selects per\n> second.\n> \n> There are multiple indexes on each table that updates & inserts happen\n> on. These indexes are necessary to provide adequate select performance.\n> \n> Current hardware/software:\n> Quad 700MHz PIII Xeon/1MB cache\n> 3GB RAM\n> RAID 10 over 4 18GB/10,000rpm drives\n> 128MB battery backed controller cache with write-back enabled\n> Redhat 7.3, kernel 2.4.20\n> Postgres 7.2.3 (stock redhat issue)\n> \n> I need to increase the overall performance by a factor of 10, while at\n> the same time the DB size increases by a factor of 50. e.g. 3000\n> inserts/updates or 25,000 selects per second, over a 25GB database with\n> most used tables of 5,000,000 and 1,000,000 rows.\n\nIt will likely take a combination of optimizing your database structure / \nmethods and increasing your hardware / OS performance.\n\nYou probably, more than anything, should look at some kind of \nsuperfast, external storage array that has dozens of drives, and a large \nbattery backed cache. You may be able to approximate this yourself with \njust a few dual channel Ultra 320 SCSI cards and a couple dozen hard \ndrives. The more spindles you throw at a database, generally speaking, \nthe more parallel load it can handle. \n\nYou may find that once you get to 10 or 20 drives, RAID 5 or 5+0 or 0+5 \nwill be outrunning 1+0/0+1 due to fewer writes.\n\nYou likely want to look at the fastest CPUs with the fastest memory you \ncan afford. those 700MHz xeons are likely using PC133 memory, which is \npainfully slow compared to the stuff pumping data out at 4 to 8 times the \nrate of the older stuff.\n\nMaybe an SGI Altix could do this? Have you looked at them? They're not \ncheap, but they do look to be quite fast, and can scale to 64 CPUs if need \nbe. They're interbox communication fabric is faster than most CPU's front \nside busses.\n\n> Notably, the data is very time-sensitive, so the active dataset at any\n> hour is almost certainly going to be more on the order of 5GB than 25GB\n> (plus I'll want all the indexes in RAM of course).\n> \n> Also, and importantly, the load comes but one hour per week, so buying a\n> Starfire isn't a real option, as it'd just sit idle the rest of the\n> time. I'm particularly interested in keeping the cost down, as I'm a\n> shareholder in the company!\n\nInteresting. If you can't spread the load out, can you batch some parts \nof it? Or is the whole thing interactive therefore needing to all be \ndone in real time at once?\n\n> So what do I need?\n\nwhether you like it or not, you're gonna need heavy iron if you need to do \nthis all in one hour once a week.\n\n> Can anyone who has (or has ever had) that kind of\n> load in production offer any pointers, anecdotes, etc? Any theoretical\n> musings also more than welcome. 
Comments upon my sanity will be\n> referred to my doctor.\n> \n> If the best price/performance option is a second hand 32-cpu Alpha\n> running VMS I'd be happy to go that way ;-)\n\nActually, I've seen stuff like that going on Ebay pretty cheap lately. I \nsaw a 64 CPU E10k (366 MHz CPUs) with 64 gigs ram and 20 hard drives going \nfor $24,000 a month ago. Put Linux or BSD on it and Postgresql should \nfly.\n\n", "msg_date": "Wed, 27 Aug 2003 09:17:42 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "> You probably, more than anything, should look at some kind of \n> superfast, external storage array\n\nYeah, I think that's going to be a given. Low end EMC FibreChannel\nboxes can do around 20,000 IOs/sec, which is probably close to good\nenough.\n\nYou mentioned using multiple RAID controllers as a boost - presumably\nthe trick here is to split the various elements (WAL, tables, indexes)\nacross different controllers using symlinks or suchlike? Can I feasibly\nsplit the DB tables across 5 or more controllers?\n\n> > Also, and importantly, the load comes but one hour per week, so buying a\n> > Starfire isn't a real option, as it'd just sit idle the rest of the\n> > time. I'm particularly interested in keeping the cost down, as I'm a\n> > shareholder in the company!\n> \n> Interesting. If you can't spread the load out, can you batch some parts \n> of it? Or is the whole thing interactive therefore needing to all be \n> done in real time at once?\n\nAll interactive I'm afraid. It's a micropayment system that's going to\nbe used here in the UK to do online voting for a popular TV programme. \nThe phone voting system has a hard limit of [redacted] million votes per\nhour, and the producers would like to be able to tell people to vote\nonline if the phone lines are busy. They can vote online anyway, but we\nexpect the average viewer to have to make 10 calls just to get through\nduring peak times, so the attraction is obvious.\n\n> whether you like it or not, you're gonna need heavy iron if you need to do \n> this all in one hour once a week.\n\nYeah, I need to rent a Starfire for a month later this year, anybody got\none lying around? Near London?\n\n> Actually, I've seen stuff like that going on Ebay pretty cheap lately. I \n> saw a 64 CPU E10k (366 MHz CPUs) with 64 gigs ram and 20 hard drives going \n> for $24,000 a month ago. Put Linux or BSD on it and Postgresql should \n> fly.\n\nJeez, and I thought I was joking about the Starfire. Even Slowaris\nwould be OK on one of them.\n\nThe financial issue is that there's just not that much money in the\nmicropayments game for bursty sales. 
If I was doing these loads\n*continuously* then I wouldn't be working, I'd be in the Maldives :-)\n\nI'm also looking at renting equipment, or even trying out IBM/HP's\n'on-demand' offerings.\n\n\n\n", "msg_date": "27 Aug 2003 17:49:25 +0100", "msg_from": "matt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "Martha Stewart called it a Good Thing [email protected] (matt)wrote:\n> I'm also looking at renting equipment, or even trying out IBM/HP's\n> 'on-demand' offerings.\n\nYou're assuming that this is likely to lead to REAL savings, and that\nseems unlikely.\n\nDuring the recent power outage in the NorthEast, people looking for\ngenerators and fuel were paying _premium_ prices, not discounted\nprices.\n\nIf your hardware requirement leads to someone having to buy hardware\nto support your peak load, then _someone_ has to pay the capital cost,\nand that someone is unlikely to be IBM or HP. \"Peak demand\" equipment\nis likely to attract pretty \"peaked\" prices.\n\nIf you can find someone who needs the hardware during the day, but who\n_never_ needs it during your needful hours, then there might be an\narrangement to be had, assuming the \"someone else\" trusts you to use\nwhat's, at other times, their hardware, and assuming you trust them\nwith the financial information you're managing.\n-- \nselect 'cbbrowne' || '@' || 'ntlug.org';\nhttp://www3.sympatico.ca/cbbrowne/linux.html\nRules of the Evil Overlord #170. \"I will be an equal-opportunity\ndespot and make sure that terror and oppression is distributed fairly,\nnot just against one particular group that will form the core of a\nrebellion.\" <http://www.eviloverlord.com/>\n", "msg_date": "Wed, 27 Aug 2003 21:32:16 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "After takin a swig o' Arrakan spice grog, [email protected]\n(\"scott.marlowe\") belched out... :-):\n> whether you like it or not, you're gonna need heavy iron if you need\n> to do this all in one hour once a week.\n\nThe other thing worth considering is trying to see if there is a way\nof partitioning the workload across multiple hosts.\n\nAt the point that you start going past hardware that is\n\"over-the-counter commodity\" stuff, the premiums start getting pretty\nhigh. Dual-CPU Intel boxes are pretty cheap compared to buncha-CPU\nSparc boxes.\n\nIf some sort of segmentation of the workload can be done, whether by\narea code, postal code, or perhaps the last couple digits of the\ncaller's phone number, or even a \"round robin,\" it's likely to be a\nlot cheaper to get an array of 4 Dual-Xeon boxes with 8 disk drives\napiece than a Sun/HP/IBM box with 16 CPUs.\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/linuxxian.html\n\"Show me... show me... show me... 
COMPUTERS!\"\n", "msg_date": "Wed, 27 Aug 2003 22:07:20 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "Christopher Browne wrote:\n> Martha Stewart called it a Good Thing [email protected] (matt)wrote:\n> \n>>I'm also looking at renting equipment, or even trying out IBM/HP's\n>>'on-demand' offerings.\n> \n> You're assuming that this is likely to lead to REAL savings, and that\n> seems unlikely.\n> \n> During the recent power outage in the NorthEast, people looking for\n> generators and fuel were paying _premium_ prices, not discounted\n> prices.\n> \n> If your hardware requirement leads to someone having to buy hardware\n> to support your peak load, then _someone_ has to pay the capital cost,\n> and that someone is unlikely to be IBM or HP. \"Peak demand\" equipment\n> is likely to attract pretty \"peaked\" prices.\n> \n> If you can find someone who needs the hardware during the day, but who\n> _never_ needs it during your needful hours, then there might be an\n> arrangement to be had, assuming the \"someone else\" trusts you to use\n> what's, at other times, their hardware, and assuming you trust them\n> with the financial information you're managing.\n\nI hadn't considered this, but that's not a bad idea.\n\nWith FreeBSD, you have jails, which allow multiple users to share\nhardware without having to worry about user A looking at user B's\nstuff. Does such a paradigm exist on any heavy iron? I have no\nidea where you'd go to find this kind of \"co-op\" server leasing,\nbut it sure sounds like it could work.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Wed, 27 Aug 2003 22:26:01 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Wed, 2003-08-27 at 21:26, Bill Moran wrote:\n> Christopher Browne wrote:\n> > Martha Stewart called it a Good Thing [email protected] (matt)wrote:\n[snip]\n> With FreeBSD, you have jails, which allow multiple users to share\n> hardware without having to worry about user A looking at user B's\n> stuff. Does such a paradigm exist on any heavy iron? I have no\n\nIBM invented the idea (or maybe stole it) back in the '70s. The\nVM hypervisor was designed as a conversion tool, to let customers\nrun both OS/MVS and DOS/VSE, to aid in converting from VSE to MVS.\n\nCustomers, the cheap, uncooperative beasts, liked VSE, but also liked\nVM, since it let them have, for example, a dev, test, and production\n\"systems\" all on the same piece of h/w, thus saving them oodles of\nmoney in h/w costs and maintenance fees.\n\nYes, yes, the modern term for this is \"server consolidation\", and\nVMware does the same thing, 30 years after dinosaur customers had\nit on boxen that academics, analysts and \"young whippersnappers\" \nsaid were supposed to be extinct 20 years ago.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. 
[email protected]\nJefferson, LA USA\n\n\"Knowledge should be free for all.\"\nHarcourt Fenton Mudd, Star Trek:TOS, \"I, Mudd\"\n\n", "msg_date": "Wed, 27 Aug 2003 23:07:35 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "2003-08-27 ragyogó napján matt ezt üzente:\n\n> Yeah, I can imagine getting 5% extra from a slim kernel and\n> super-optimised PG.\n\nHm, about 20%, but only for the correctness - 20% not help you also :(\n\n> The FS is ext3, metadata journaling (the default), mounted noatime.\n\nWorst fs under linux :) Try xfs.\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Thu, 28 Aug 2003 08:38:47 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "2003-08-27 ragyogó napján Bill Moran ezt üzente:\n\n> With FreeBSD, you have jails, which allow multiple users to share\n> hardware without having to worry about user A looking at user B's\n> stuff. Does such a paradigm exist on any heavy iron? I have no\n\nOf course. All IBM hw can do this, because on all ibm hw runs linux, and\nlinux have more ways to do this :)\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Thu, 28 Aug 2003 08:51:33 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "> Are you *sure* about that???? 3K updates/inserts per second xlates\n> to 10,800,000 per hour. That, my friend, is a WHOLE HECK OF A LOT!\n\nYup, I know! \n\n> During the 1 hour surge, will SELECTs at 10 minutes after the \n> hour depend on INSERTs at 5 minutes after the hour?\n\nYes, they do. It's a payments system, so things like account balances\nand purchase histories have to be updated in real time.\n\n> Only one hour out of 168????? May I ask what kind of app it is?\n\nOnline voting for an unnamed TV show...\n\n> > If the best price/performance option is a second hand 32-cpu Alpha\n> > running VMS I'd be happy to go that way ;-)\n> \n> I'd love to work on a GS320! You may even pick one up for a million\n> or 2. The license costs for VMS & Rdb would eat you, though.\n\nYou'd be amazed how little they do go for actually :-)\n\n\n\n", "msg_date": "28 Aug 2003 09:17:20 +0100", "msg_from": "matt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Thu, 2003-08-28 at 03:17, matt wrote:\n> > Are you *sure* about that???? 3K updates/inserts per second xlates\n> > to 10,800,000 per hour. That, my friend, is a WHOLE HECK OF A LOT!\n> \n> Yup, I know! \n> \n> > During the 1 hour surge, will SELECTs at 10 minutes after the \n> > hour depend on INSERTs at 5 minutes after the hour?\n> \n> Yes, they do. It's a payments system, so things like account balances\n> and purchase histories have to be updated in real time.\n> \n> > Only one hour out of 168????? May I ask what kind of app it is?\n> \n> Online voting for an unnamed TV show...\n> \n> > > If the best price/performance option is a second hand 32-cpu Alpha\n> > > running VMS I'd be happy to go that way ;-)\n> > \n> > I'd love to work on a GS320! You may even pick one up for a million\n> > or 2. 
The license costs for VMS & Rdb would eat you, though.\n> \n> You'd be amazed how little they do go for actually :-)\n\nThen that's what I'd do. VMS, Rdb, (your favorite 3GL language).\nPresumably the SELECT statements will be direct lookup instead\nof range retrieval? If so, then I'd create a *large* amount of\nGLOBAL BUFFERS, many MIXED AREAs, tables PLACED VIA HASHED INDEXES\nso that the index nodes are on the same page as the corresponding\ntuples. Thus, 1 disk I/O gets both the relevant index key, plus \nthe tuple. (Each I/O reads 3 pages into GBs [Global Buffers], so\nthat if a later statement needs a records nearby, it's already in\nRAM.)\n\nWith fast storage controllers (dual-redundant, with 512MB each)\nyou could even use RAID5, and your app may not even know the diffie.\nOf course, since the requirements are *so* extreme, better still\nstick to RAID10.\n\nI know that a certain pharmaceutical company had a similar situation,\nwhere test results would flood in every morning. A certain North-\neastern US wireless phone company needed to record every time every\nphone call was started and stopped.\n\nThe technique I described is how both of these high-volume apps\nsolved The Need For Speed.\n\nWith VMS 7.3 and Rdb 7.1.04 and, oh, 16GB RAM, a carefully crafted\nstored procedure run an hour or 2 before the show could pull the\nnecessary 5GB slice of the DB into GBs, and you'd reduce the I/O\nload during the show itself.\n\nSorry it's not PostgreSQL, but I *know* that Rdb+VMS could handle\nthe task...\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"You ask us the same question every day, and we give you the \nsame answer every day. Someday, we hope that you will believe us...\"\nU.S. Secretary of Defense Donald Rumsfeld, to a reporter\n\n", "msg_date": "Thu, 28 Aug 2003 04:02:36 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Tue, 2003-08-26 at 23:59, Ron Johnson wrote:\n\n> What a fun exercises. Ok, lets see:\n> Postgres 7.3.4\n> RH AS 2.1\n> 12GB RAM\n> motherboard with 64 bit 66MHz PCI slots\n> 4 - Xenon 3.0GHz (1MB cache) CPUs\n> 8 - 36GB 15K RPM as RAID10 on a 64 bit 66MHz U320 controller\n> having 512MB cache (for database)\n> 2 - 36GB 15K RPM as RAID1 on a 64 bit 66MHz U320 controller\n> having 512MB cache (for OS, swap, WAL files)\n> 1 - library tape drive plugged into the OS' SCSI controller. I\n> prefer DLT, but that's my DEC bias.\n> 1 - 1000 volt UPS.\n\n Be careful here, we've seen that with the P4 Xeon's that are\nhyper-threaded and a system that has very high disk I/O causes the\nsystem to be sluggish and slow. But after disabling the hyper-threading\nitself, our system flew..\n\n-- \nChris Bowlby <[email protected]>\nHub.Org Networking Services\n\n", "msg_date": "Thu, 28 Aug 2003 11:05:17 -0300", "msg_from": "Chris Bowlby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On 28 Aug 2003 at 11:05, Chris Bowlby wrote:\n\n> On Tue, 2003-08-26 at 23:59, Ron Johnson wrote:\n> \n> > What a fun exercises. 
Ok, lets see:\n> > Postgres 7.3.4\n> > RH AS 2.1\n> > 12GB RAM\n> > motherboard with 64 bit 66MHz PCI slots\n> > 4 - Xenon 3.0GHz (1MB cache) CPUs\n> > 8 - 36GB 15K RPM as RAID10 on a 64 bit 66MHz U320 controller\n> > having 512MB cache (for database)\n> > 2 - 36GB 15K RPM as RAID1 on a 64 bit 66MHz U320 controller\n> > having 512MB cache (for OS, swap, WAL files)\n> > 1 - library tape drive plugged into the OS' SCSI controller. I\n> > prefer DLT, but that's my DEC bias.\n> > 1 - 1000 volt UPS.\n> \n> Be careful here, we've seen that with the P4 Xeon's that are\n> hyper-threaded and a system that has very high disk I/O causes the\n> system to be sluggish and slow. But after disabling the hyper-threading\n> itself, our system flew..\n\nAnybody has opteron working? Hows' the performance?\n\nBye\n Shridhar\n\n--\nA father doesn't destroy his children.\t\t-- Lt. Carolyn Palamas, \"Who Mourns for \nAdonais?\",\t\t stardate 3468.1.\n\n", "msg_date": "Thu, 28 Aug 2003 19:53:13 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "\nsm> On 27 Aug 2003, matt wrote:\n\n>> My app is likely to come under some serious load in the next 6 months,\n>> but the increase will be broadly predictable, so there is time to throw\n>> hardware at the problem.\n>> \n>> Currently I have a ~1GB DB, with the largest (and most commonly accessed\n>> and updated) two tables having 150,000 and 50,000 rows.\n\nJust how big do you expect your DB to grow? For a 1GB disk-space\ndatabase, I'd probably just splurge for an SSD hooked up either via\nSCSI or FibreChannel. Heck, up to about 5Gb or so it is not that\nexpensive (about $25k) and adding another 5Gb should set you back\nprobably another $20k. I use an SSD from Imperial Technology\n( http://www.imperialtech.com/ ) for mail spools. My database is way\nto big for my budget to put in SSD.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 28 Aug 2003 10:38:54 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "> I need to increase the overall performance by a factor of 10, while at\n> the same time the DB size increases by a factor of 50. e.g. 3000\n> inserts/updates or 25,000 selects per second, over a 25GB database with\n> most used tables of 5,000,000 and 1,000,000 rows.\n\nOk.. I would be surprised if you needed much more actual CPU power. I\nsuspect they're mostly idle waiting on data -- especially with a Quad\nXeon (shared memory bus is it not?).\n\nI'd be looking to get your hands on a large pSeries machine from IBM or\nperhaps an 8-way Opteron (not that hard to come by today, should be easy\nin the near future). The key is low latency ram tied to a chip rather\nthan a centralized bus -- a 3800 SunFire would do too ;).\n\nWrite performance won't matter very much. 3000 inserts/second isn't high\n-- some additional battery backed write cache may be useful but not\noverly important with enough ram to hold the complete dataset. 
I suspect\nthose are slow due to things like foreign keys -- which of course are \nselects.\n\n> Notably, the data is very time-sensitive, so the active dataset at any\n> hour is almost certainly going to be more on the order of 5GB than 25GB\n> (plus I'll want all the indexes in RAM of course).\n\nVery good. Find yourself 8GB to 12GB ram and you should be fine. In this\ncase, additional ram will keep the system from hitting the disk for\nwrites as well.\n\nYou may want to play around with checkpoints. Prevention of a checkpoint\nduring this hour will help prevent peaks. Be warned though, WAL will\ngrow very large, and recovery time should a crash occur could be\npainful.\n\nYou say the data is very time sensitive -- how time sensitive? Are the\nselects all based on this weeks data? A copy of the database on a second\nmachine (say your Quad Xeon) for static per client data would be very\nuseful to reduce needless load. I assume the application servers have\nalready cached any static global data by this point.\n\nFinally, upgrade to 7.4. Do use prepared statements. Do limit the number\nof connections any given application server is allowed (especially for\nshort transactions). 3 PostgreSQL processes per CPU (where the box limit\nis not Disk) seems to be about right -- your OS may vary.\n\nPre-calculate anything you can. Are the $ amounts for a transaction\ngenerally the the same? Do you tend to have repeat clients? Great --\nmake your current clients transactions a day in advance. Now you have a\npair of selects and 1 update (mark it with the time the client actually\napproved it). If the client doesn't approve of the pre-calculated\ntransaction, throw it away at some later time.", "msg_date": "Thu, 28 Aug 2003 10:41:52 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On 27 Aug 2003, matt wrote:\n\n> > You probably, more than anything, should look at some kind of \n> > superfast, external storage array\n> \n> Yeah, I think that's going to be a given. Low end EMC FibreChannel\n> boxes can do around 20,000 IOs/sec, which is probably close to good\n> enough.\n> \n> You mentioned using multiple RAID controllers as a boost - presumably\n> the trick here is to split the various elements (WAL, tables, indexes)\n> across different controllers using symlinks or suchlike? Can I feasibly\n> split the DB tables across 5 or more controllers?\n\nI'm not sure I'd split the tables by hand right up front. Try getting as \nmany hard drives as you can afford hooked up at once, and then try \ndifferent ways of partitioning them. I'm guessing that making two fairly \ngood sized 1+0 sets, one for data and one for WAL might be the best \nanswer.\n\n> > Actually, I've seen stuff like that going on Ebay pretty cheap lately. I \n> > saw a 64 CPU E10k (366 MHz CPUs) with 64 gigs ram and 20 hard drives going \n> > for $24,000 a month ago. Put Linux or BSD on it and Postgresql should \n> > fly.\n> \n> Jeez, and I thought I was joking about the Starfire. Even Slowaris\n> would be OK on one of them.\n> \n> The financial issue is that there's just not that much money in the\n> micropayments game for bursty sales. 
If I was doing these loads\n> *continuously* then I wouldn't be working, I'd be in the Maldives :-)\n\n$24,000 isn't that much for a server really, and if you can leverage this \none \"sale\" to get more, then it would likely pay for itself over time.\n\nIf you have problems keeping up with load, it will be harder to get more \ncustomers, so you kinda wanna do this as well as possible the first time.\n\n\n\n", "msg_date": "Thu, 28 Aug 2003 09:33:15 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Wed, Aug 27, 2003 at 05:49:25PM +0100, matt wrote:\n> \n> I'm also looking at renting equipment, or even trying out IBM/HP's\n> 'on-demand' offerings.\n\nTo handle that kind of load, you're not going to be able to do it\nwith cheap hardware. Renting may be your answer.\n\na\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 28 Aug 2003 12:19:53 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "> Just how big do you expect your DB to grow? For a 1GB disk-space\n> database, I'd probably just splurge for an SSD hooked up either via\n> SCSI or FibreChannel. Heck, up to about 5Gb or so it is not that\n> expensive (about $25k) and adding another 5Gb should set you back\n> probably another $20k. I use an SSD from Imperial Technology\n> ( http://www.imperialtech.com/ ) for mail spools. My database is way\n> to big for my budget to put in SSD.\n\nI may well be able to split some tables that aren't used in joins into a separate DB, and could well use an SSD for those.\n\nIn fact two of the inserts per user interaction could be split off, and they're not financially important tables, so fsync=false\ncould be enabled for those, in which case an SSD might be overkill...\n\nThe whole thing will definitely *not* fit in an SSD for a sensible price, but the WAL might well!\n\n\n\n\n\n", "msg_date": "Thu, 28 Aug 2003 17:29:39 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "> Ok.. I would be surprised if you needed much more actual CPU power. I\n> suspect they're mostly idle waiting on data -- especially with a Quad\n> Xeon (shared memory bus is it not?).\n\nIn reality the CPUs get pegged: about 65% PG and 35% system. But I agree that memory throughput and latency is an issue.\n\n> Write performance won't matter very much. 3000 inserts/second isn't high\n> -- some additional battery backed write cache may be useful but not\n> overly important with enough ram to hold the complete dataset. I suspect\n> those are slow due to things like foreign keys -- which of course are\n> selects.\n\n3000 inserts/sec isn't high when they're inside one transaction, but if each is inside its own transaction then that's 3000\ncommits/second.\n\n> case, additional ram will keep the system from hitting the disk for\n> writes as well.\n\nHow does that work?\n\n> You may want to play around with checkpoints. Prevention of a checkpoint\n> during this hour will help prevent peaks. Be warned though, WAL will\n> grow very large, and recovery time should a crash occur could be\n> painful.\n\nGood point. 
I'll have a think about that.\n\n\n\n", "msg_date": "Thu, 28 Aug 2003 17:37:24 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Thu, 2003-08-28 at 12:37, Matt Clark wrote:\n> > Ok.. I would be surprised if you needed much more actual CPU power. I\n> > suspect they're mostly idle waiting on data -- especially with a Quad\n> > Xeon (shared memory bus is it not?).\n> \n> In reality the CPUs get pegged: about 65% PG and 35% system. But I agree that memory throughput and latency is an issue.\n\nsystem in this case is dealing with disk activity or process switches?\n\nUsually the 65% includes the CPU waiting on a request for data from main\nmemory. Since you will be moving a lot of data through the CPU, the L1 /\nL2 cache doesn't help too much (even large cache), but low latency high \nbandwidth memory will make a significant difference. CPUs not having to\nwait on other CPUs doing a memory fetch will make an even larger\ndifference (dedicated memory bus per CPU).\n\nGood memory is the big ticket item. Sun CPUs are not better than Intel\nCPUs, for simple DB interaction. It's the additional memory bandwidth\nthat makes them shine. Incidentally, Suns are quite slow with PG for\ncalculation intensive work on a small dataset.\n\n> > Write performance won't matter very much. 3000 inserts/second isn't high\n> > -- some additional battery backed write cache may be useful but not\n> > overly important with enough ram to hold the complete dataset. I suspect\n> > those are slow due to things like foreign keys -- which of course are\n> > selects.\n> \n> 3000 inserts/sec isn't high when they're inside one transaction, but if each is inside its own transaction then that's 3000\n> commits/second.\n\nStill not anything to concern yourself with. WAL on battery backed\nwrite cache (with a good controller) will more than suffice -- boils\ndown to the same as if fsync was disabled. You might want to try putting\nit onto it's own controller, but I don't think you will see much of a\nchange. 20k WAL operations / sec would be something to worry about.\n\n> > case, additional ram will keep the system from hitting the disk for\n> > writes as well.\n> \n> How does that work?\n\nSimple. Your OS will buffer writes in memory until they are required to\nhit disk (fsync or similar). Modify the appropriate sysctl to inform\nthe OS it can use more than 10% (10% is the FreeBSD default I believe)\nof the memory for writes. Buffering 4GB of work in memory (WAL logs\nwill ensure this is crash safe) will nearly eliminate I/O.\n\nWhen the OS is no longer busy, it will filter the writes from ram back\nto disk. Visibly, there is no change to the user aside from a speed\nincrease.\n\n> > You may want to play around with checkpoints. Prevention of a checkpoint\n> > during this hour will help prevent peaks. Be warned though, WAL will\n> > grow very large, and recovery time should a crash occur could be\n> > painful.\n> \n> Good point. I'll have a think about that.\n\nThis is more important with a larger buffer. A checkpoint informs the OS\nto dump the buffer to disk so it can guarantee it hit hardware (thus\nallowing PG to remove / recycle WAL files).\n\n\nI do think your best bet is to segregate the DB. 
Read / write, by user\nlocation, first 4 digits of the credit card, anything will make a much\nbetter system.\n\nKeep a master with all of the data that can take the full week to\nprocess it.", "msg_date": "Thu, 28 Aug 2003 14:29:02 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "Shridhar Daithankar wrote:\n>> Be careful here, we've seen that with the P4 Xeon's that are\n>>hyper-threaded and a system that has very high disk I/O causes the\n>>system to be sluggish and slow. But after disabling the hyper-threading\n>>itself, our system flew..\n> \n> Anybody has opteron working? Hows' the performance?\n\nYes. I'm using an 2x 1.8GHz Opteron system w/ 8GB of RAM. Right now, I'm \nstill using 32-bit Linux -- I'm letting others be the 64-bit guinea \npigs. :) I probably will get a cheapie 1x Opteron machine first and test \nthe 64-bit kernel/libraries thoroughly before rolling it out to production.\n\nAs for performance, the scaling is magnificient -- even when just using \nPAE instead of 64-bit addressing. At low transaction counts, it's only \n~75% faster than the 2x Athlon 1800+ MP it replaced. But once the \ntransactions start coming in, the gap is as high as 5x. My w-a-g: since \neach CPU has an integrated memory controller, you avoid memory bus \ncontention which is probably the major bottleneck as transaction load \nincreases. (I've seen Opteron several vs Xeon comparisons where \nsingle-connection tests are par for both CPUs but heavy-load tests favor \nthe Opteron by a wide margin.) I suspect the 4X comparisons would tilt \neven more towards AMD's favor.\n\nWe should see a boost when we move to 64-bit Linux and hopefully another \none when NUMA for Linux is production-stable.\n\n", "msg_date": "Fri, 29 Aug 2003 00:05:03 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "> We should see a boost when we move to 64-bit Linux and hopefully another \n> one when NUMA for Linux is production-stable.\n\nAssuming SCO doesn't make them remove it :P\n\nChris\n\n", "msg_date": "Fri, 29 Aug 2003 15:30:06 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On 29 Aug 2003 at 0:05, William Yu wrote:\n\n> Shridhar Daithankar wrote:\n> >> Be careful here, we've seen that with the P4 Xeon's that are\n> >>hyper-threaded and a system that has very high disk I/O causes the\n> >>system to be sluggish and slow. But after disabling the hyper-threading\n> >>itself, our system flew..\n> > \n> > Anybody has opteron working? Hows' the performance?\n> \n> Yes. I'm using an 2x 1.8GHz Opteron system w/ 8GB of RAM. Right now, I'm \n> still using 32-bit Linux -- I'm letting others be the 64-bit guinea \n> pigs. :) I probably will get a cheapie 1x Opteron machine first and test \n> the 64-bit kernel/libraries thoroughly before rolling it out to production.\n\nJust a guess here but does a precompiled postgresql for x86 and a x86-64 \noptimized one makes difference?\n\nOpteron is one place on earth you can watch difference between 32/64 bit on \nsame machine. Can be handy at times..\n\n> \n> As for performance, the scaling is magnificient -- even when just using \n> PAE instead of 64-bit addressing. 
At low transaction counts, it's only \n> ~75% faster than the 2x Athlon 1800+ MP it replaced. But once the \n> transactions start coming in, the gap is as high as 5x. My w-a-g: since \n> each CPU has an integrated memory controller, you avoid memory bus \n> contention which is probably the major bottleneck as transaction load \n> increases. (I've seen Opteron several vs Xeon comparisons where \n> single-connection tests are par for both CPUs but heavy-load tests favor \n> the Opteron by a wide margin.) I suspect the 4X comparisons would tilt \n> even more towards AMD's favor.\n\nI am sure. But is 64 bit environment, Xeon is not the compitition. It's PA-RSC-\n8700, ultraSparcs, Power series and if possible itanium.\n\nI would still expect AMD to compete comfortably given high clock speed. But \nchipset need to be competent as well..\n\nI still remember the product I work on, a single CPU PA-RISC 8700 with single \nSCSI disc, edged out a quad CPU Xeon with SCSI RAID controller running windows \nin terms of scalability while running oracle.\n\nI am not sure if it was windows v/s HP-UX issue but at the end HP machine was \nlot better than windows machine. Windows machine shooted ahead for light load \nand drooeed dead equally fast with rise in load..\n \n> We should see a boost when we move to 64-bit Linux and hopefully another \n> one when NUMA for Linux is production-stable.\n\nGetting a 2.6 running now is the answer to make it stable fast..:-) Of course \nif you have spare hardware..\n\nBye\n Shridhar\n\n--\nbriefcase, n:\tA trial where the jury gets together and forms a lynching party.\n\n", "msg_date": "Fri, 29 Aug 2003 13:48:24 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Fri, 2003-08-29 at 03:18, Shridhar Daithankar wrote:\n> On 29 Aug 2003 at 0:05, William Yu wrote:\n> \n> > Shridhar Daithankar wrote:\n[snip]\n> > As for performance, the scaling is magnificient -- even when just using \n> > PAE instead of 64-bit addressing. At low transaction counts, it's only \n> > ~75% faster than the 2x Athlon 1800+ MP it replaced. But once the \n> > transactions start coming in, the gap is as high as 5x. My w-a-g: since \n> > each CPU has an integrated memory controller, you avoid memory bus \n> > contention which is probably the major bottleneck as transaction load \n> > increases. (I've seen Opteron several vs Xeon comparisons where \n> > single-connection tests are par for both CPUs but heavy-load tests favor \n> > the Opteron by a wide margin.) I suspect the 4X comparisons would tilt \n> > even more towards AMD's favor.\n> \n> I am sure. But is 64 bit environment, Xeon is not the compitition. It's PA-RSC-\n> 8700, ultraSparcs, Power series and if possible itanium.\n\nIMO, Opti will compete in *both* markets.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Adventure is a sign of incompetence\"\nStephanson, great polar explorer\n\n", "msg_date": "Fri, 29 Aug 2003 07:05:36 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "Hi everyone,\n\nI have a sql request which on first invocation completes in ~12sec but then \ndrops to ~3sec on the following runs. The 3 seconds would be acceptable but \nhow can I make sure that the data is cached and all times? 
Is it simply \nenough to set shared_buffers high enough to hold the entire database (and \nhave enough ram installed of course)? The OS is linux in this case.\n\n\nNested Loop (cost=0.00..11.44 rows=1 width=362) (actual \ntime=247.83..12643.96 rows=14700 loops=1)\n -> Index Scan using suchec_testa on suchec (cost=0.00..6.02 rows=1 \nwidth=23) (actual time=69.91..902.68 rows=42223 loops=1)\n -> Index Scan using idx_dokument on dokument d (cost=0.00..5.41 rows=1 \nwidth=339) (actual time=0.26..0.26 rows=0 loops=42223)\nTotal runtime: 12662.64 msec\n\n\nNested Loop (cost=0.00..11.44 rows=1 width=362) (actual time=1.18..2829.79 \nrows=14700 loops=1)\n -> Index Scan using suchec_testa on suchec (cost=0.00..6.02 rows=1 \nwidth=23) (actual time=0.51..661.75 rows=42223 loops=1)\n -> Index Scan using idx_dokument on dokument d (cost=0.00..5.41 rows=1 \nwidth=339) (actual time=0.04..0.04 rows=0 loops=42223)\nTotal runtime: 2846.63 msec\n\n", "msg_date": "Fri, 29 Aug 2003 14:52:10 +0200", "msg_from": "Fabian Kreitner <[email protected]>", "msg_from_op": false, "msg_subject": "Force table to be permanently in cache?" }, { "msg_contents": "On Fri, Aug 29, 2003 at 12:05:03AM -0700, William Yu wrote:\n> We should see a boost when we move to 64-bit Linux and hopefully another \n> one when NUMA for Linux is production-stable.\n\nAccording to the people who've worked with SGIs, NUMA actually seems\nto make things worse. It has something to do with how the shared\nmemory is handled. You'll want to dig through the -general or\n-hackers archives from somewhere between 9 and 14 months ago, IIRC.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 29 Aug 2003 10:05:08 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Fri, Aug 29, 2003 at 02:52:10PM +0200, Fabian Kreitner wrote:\n> Hi everyone,\n> \n> I have a sql request which on first invocation completes in ~12sec but then \n> drops to ~3sec on the following runs. The 3 seconds would be acceptable but \n> how can I make sure that the data is cached and all times? Is it simply \n> enough to set shared_buffers high enough to hold the entire database (and \n> have enough ram installed of course)? The OS is linux in this case.\n\nIf the table gets hit often enough, then it'll be in your filesystem\ncache anyway. See the many discussions of sizing shared_buffers in\nthe archives of this list for thoughts on how big that should be.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 29 Aug 2003 10:26:30 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Force table to be permanently in cache?" }, { "msg_contents": "Shridhar Daithankar wrote:\n> Just a guess here but does a precompiled postgresql for x86 and a x86-64 \n> optimized one makes difference?\n >\n > Opteron is one place on earth you can watch difference between 32/64\n > bit on same machine. Can be handy at times..\n\nI don't know yet. I tried building a 64-bit kernel and my eyes glazed \nover trying to figure out how to create the cross-platform GCC compiler \nthat's first needed to build the kernel. 
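On the "keep it cached" question a couple of messages up: there is no supported way to pin a table in memory, but a crude sketch of what people usually do is to sweep the relevant tables once after a restart so the kernel cache is already warm (table names taken from the EXPLAIN output above; running it from cron is only an assumption):

	-- run once after startup, or periodically via psql from cron
	SELECT count(*) FROM suchec;
	SELECT count(*) FROM dokument;

This only drags the heap pages through the OS cache -- nothing is pinned, and the indexes still fault in on first use -- so sizing shared_buffers sensibly and simply letting the tables be hit often remains the main lever.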
Then I read all the libraries & \ndrivers also needed to be 64-bit compiled and at that point gave up the \nghost. I'll wait until a 64-bit Redhat distro is available before I test \nthe 64-bit capabilities.\n\nThe preview SuSE 64-bit Linux used in most of the Opteron rollout tests \nhas MySql precompiled as 64-bit and under that DB, 64-bit added an extra \n ~25% performance (compared to a 32-bit SuSE install). My guess is half \nof the performance comes from eliminating the PAE swapping.\n\n> I am sure. But is 64 bit environment, Xeon is not the compitition. It's PA-RSC-\n> 8700, ultraSparcs, Power series and if possible itanium.\n\nWell, just because the Opteron is 64-bit doesn't mean it's direct \ncompetition for the high-end RISC chips. Yes, if you're looking at the \ndiscrete CPU itself, it appears they could compete -- the SpecINT scores \nplaces the Opteron near the top of the list. But big companies also need \nthe infrastructure, management tools and top-end scalability. If you \njust have to have the million dollar machines (128x Itanium2 servers or \nwhatever), AMD is nowhere close to competing unless Beowulf clusters fit \nyour needs.\n\nIn terms of infrastructure, scalability, mindshare and pricing, Xeon is \nmost certainly Opteron's main competition. We're talking <$10K servers \nversus $50K+ servers (assuming you actually want performance instead of \nhaving a single pokey UltraSparc CPU in a box). And yes, just because \nOpteron is a better performing server platform than Xeon doesn't mean a \ncorporate fuddy-duddy still won't buy Xeon due to the $1B spent by Intel \non marketting.\n\n>>We should see a boost when we move to 64-bit Linux and hopefully another \n>>one when NUMA for Linux is production-stable.\n> \n> Getting a 2.6 running now is the answer to make it stable fast..:-) Of course \n> if you have spare hardware..\n\nMy office is a pigsty of spare hardware lying around. :) We're like pigs \nrolling around in the mud.\n\n", "msg_date": "Fri, 29 Aug 2003 09:33:51 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Fri, 2003-08-29 at 11:33, William Yu wrote:\n> Shridhar Daithankar wrote:\n[snip]\n> > I am sure. But is 64 bit environment, Xeon is not the compitition. It's PA-RSC-\n> > 8700, ultraSparcs, Power series and if possible itanium.\n> \n> Well, just because the Opteron is 64-bit doesn't mean it's direct \n> competition for the high-end RISC chips. Yes, if you're looking at the \n> discrete CPU itself, it appears they could compete -- the SpecINT scores \n> places the Opteron near the top of the list. But big companies also need \n> the infrastructure, management tools and top-end scalability. If you \n> just have to have the million dollar machines (128x Itanium2 servers or \n> whatever), AMD is nowhere close to competing unless Beowulf clusters fit \n> your needs.\n\nWith the proper motherboards and chipsets, it can definitely compete.\n\nWhat's so special about Itanic-2 that it can be engineered to be\nput in 128x boxes and run VMS and high-end Unix , but Opti can't?\nNothing. If a company with enough engineering talent wants to do\nit, it can happen.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. 
[email protected]\nJefferson, LA USA\n\n\"For me and windows it became a matter of easy to start with, \nand becoming increasingly difficult to be productive as time \nwent on, and if something went wrong very difficult to fix, \ncompared to linux's large over head setting up and learning the \nsystem with ease of use and the increase in productivity \nbecoming larger the longer I use the system.\"\nRohan Nicholls , The Netherlands\n\n", "msg_date": "Fri, 29 Aug 2003 12:31:27 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "matt wrote:\n> > Are you *sure* about that???? 3K updates/inserts per second xlates\n> > to 10,800,000 per hour. That, my friend, is a WHOLE HECK OF A LOT!\n> \n> Yup, I know! \n\nJust a data point, but on my Dual Xeon 2.4Gig machine with a 10k SCSI\ndrive I can do 4k inserts/second if I turn fsync off. If you have a\nbattery-backed controller, you should be able to do the same. (You will\nnot need to turn fsync off --- fsync will just be fast because of the\ndisk drive RAM).\n\nAm I missing something?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 29 Aug 2003 22:44:04 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Fri, 2003-08-29 at 21:44, Bruce Momjian wrote:\n> matt wrote:\n> > > Are you *sure* about that???? 3K updates/inserts per second xlates\n> > > to 10,800,000 per hour. That, my friend, is a WHOLE HECK OF A LOT!\n> > \n> > Yup, I know! \n> \n> Just a data point, but on my Dual Xeon 2.4Gig machine with a 10k SCSI\n> drive I can do 4k inserts/second if I turn fsync off. If you have a\n> battery-backed controller, you should be able to do the same. (You will\n> not need to turn fsync off --- fsync will just be fast because of the\n> disk drive RAM).\n> \n> Am I missing something?\n\nIs that\n FOR I BETWEEN 1 AND 4000\n BEGIN\n INSERT\n COMMIT\nor\n BEGIN\n INSERT \n <snip 3998 inserts>\n INSERT\n COMMIT;\nor\n COPY \n\nI get the impression that Matt will need to do 25,000 of these per\nhour:\n SELECT <blah>\n IF <some circumstance that happens about 1/8th of the time>\n BEGIN\n INSERT\n or\n UPDATE\n COMMIT;\n\nHe says his current h/w peaks at 1/10th that rate.\n\nMy question is: is that current peak rate (\"300 inserts/updates \n*or* 2500 selects\") based upon 1 connection, or many connections?\nWith 4 CPUs, and a 4 disk RAID10, I wouldn't be surprised if 4 con-\ncurrent connections gives the optimum speed.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. 
[email protected]\nJefferson, LA USA\n\nGreat Inventors of our time: \nAl Gore -> Internet \nSun Microsystems -> Clusters\n\n", "msg_date": "Sat, 30 Aug 2003 00:21:17 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Sat, 30 Aug 2003, Richard Jones wrote:\n\n> Hi,\n> i have a table of around 3 million rows from which i regularly (twice a second\n> at the moment) need to select a random row from\n>\n> currently i'm doing \"order by rand() limit 1\" - but i suspect this is\n> responsible for the large load on my db server - i guess that PG is doing far\n> too much work just to pick one row.\n>\n\nIf you have an int id (aka serial) column then it is simple - just pick a\nrandom number between 1 and currval('id_seq')...\n\nor offset rand() limit 1 perhaps?\n\nsince you want random ther eis no need to bother with an order and that'll\nsave a sort.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Sat, 30 Aug 2003 09:08:51 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selecting random rows efficiently" }, { "msg_contents": "Hi,\ni have a table of around 3 million rows from which i regularly (twice a second \nat the moment) need to select a random row from\n\ncurrently i'm doing \"order by rand() limit 1\" - but i suspect this is \nresponsible for the large load on my db server - i guess that PG is doing far \ntoo much work just to pick one row.\n\none way i can think of is to read in all the primary keys from my table, and \nselect one of the keys at random then directly fetch that row. \n\nare there any other ways to do this? i need to keep the load down :)\n\nThanks,\nRichard \n", "msg_date": "Sat, 30 Aug 2003 13:09:03 +0000", "msg_from": "Richard Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Selecting random rows efficiently" }, { "msg_contents": "> My question is: is that current peak rate (\"300 inserts/updates \n> *or* 2500 selects\") based upon 1 connection, or many connections?\n> With 4 CPUs, and a 4 disk RAID10, I wouldn't be surprised if 4 con-\n> current connections gives the optimum speed.\n\nOptimum number of active workers is probably between 10 and 16. 4 doing\nmath, 4 doing a dma transfer of data, and 4 to be available the instant\none of the other 8 completes.\n\nOn FreeBSD it seems to work that way when there is a mix of activity\nwith the database.", "msg_date": "Sat, 30 Aug 2003 13:58:12 +0000", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "> i was hoping there was some trickery with sequences that would allow me to \n> easily pick a random valid sequence number..?\n\nI would suggest renumbering the data.\n\nALTER SEQUENCE ... 
RESTART WITH 1;\nUPDATE table SET pkey = DEFAULT;\n\nOf course, PostgreSQL may have trouble with that update due to\nevaluation of the unique constraint immediately -- so drop the primary\nkey first, and add it back after.", "msg_date": "Sat, 30 Aug 2003 14:01:58 +0000", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selecting random rows efficiently" }, { "msg_contents": "On Sat, 2003-08-30 at 08:09, Richard Jones wrote:\n> Hi,\n> i have a table of around 3 million rows from which i regularly (twice a second \n> at the moment) need to select a random row from\n> \n> currently i'm doing \"order by rand() limit 1\" - but i suspect this is \n> responsible for the large load on my db server - i guess that PG is doing far \n> too much work just to pick one row.\n\nWhat datatype is the selected by key?\n\nAlso, where is rand() defined? Is that a UDF?\n\nCould it be that there is a type mismatch?\n\n> one way i can think of is to read in all the primary keys from my table, and \n> select one of the keys at random then directly fetch that row. \n> \n> are there any other ways to do this? i need to keep the load down :)\n> \n> Thanks,\n> Richard \n\nAre you really in Micronesia?\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"The greatest dangers to liberty lurk in insidious encroachment \nby men of zeal, well-meaning, but without understanding.\"\nJustice Louis Brandeis, dissenting, Olmstead v US (1928)\n\n", "msg_date": "Sat, 30 Aug 2003 09:04:01 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selecting random rows efficiently" }, { "msg_contents": "On Saturday 30 August 2003 1:08 pm, you wrote:\n> On Sat, 30 Aug 2003, Richard Jones wrote:\n> > Hi,\n> > i have a table of around 3 million rows from which i regularly (twice a\n> > second at the moment) need to select a random row from\n> >\n> > currently i'm doing \"order by rand() limit 1\" - but i suspect this is\n> > responsible for the large load on my db server - i guess that PG is doing\n> > far too much work just to pick one row.\n>\n> If you have an int id (aka serial) column then it is simple - just pick a\n> random number between 1 and currval('id_seq')...\n>\n> or offset rand() limit 1 perhaps?\n>\n> since you want random ther eis no need to bother with an order and that'll\n> save a sort.\n\n\nYes, the pkey is a SERIAL but the problem is that the sequence is rather \nsparse\n\nfor example, it goes something like 1 -> 5000 then 100000->100000 and then \n2000000->upwards\n\nthis is due to chunks being deleted etc.. \n\nif i pick a random number for the key it will not be a random enough \ndistribution, because the sequence is sparse.. 
sometimes it will pick a key \nthat doesnt exist.\n\ni'm currently reading all the keys into an array and selecting randoms from \nthere - but this is no good long-term as i need to refresh the array of keys \nto take into account newly added rows to the table (daily)\n\ni was hoping there was some trickery with sequences that would allow me to \neasily pick a random valid sequence number..?\n\nThanks,\nRich.\n\n\n\n\n\n\n\n\n>\n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Sat, 30 Aug 2003 14:25:51 +0000", "msg_from": "Richard Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selecting random rows efficiently" }, { "msg_contents": "Richard Jones <[email protected]> writes:\n>>> i have a table of around 3 million rows from which i regularly (twice a\n>>> second at the moment) need to select a random row from\n\n> i was hoping there was some trickery with sequences that would allow me to \n> easily pick a random valid sequence number..?\n\nThere is no magic bullet here, but if you expect that requirement to\npersist then it is worth your trouble to expend effort on a real\nsolution. A real solution in my mind would look like\n\n1. Add a column \"random_id float8 default random()\". The idea here\n is that you assign a random ID to each row as it is created.\n\n2. Add an index on the above column.\n\n3. Your query now looks like\n\n\tSELECT * FROM table WHERE random_id >= random()\n\tORDER BY random_id LIMIT 1;\n\nThis gives you a plan on the order of\n\n Limit (cost=0.00..0.17 rows=1 width=8)\n -> Index Scan using fooi on foo (cost=0.00..57.00 rows=334 width=8)\n Filter: (random_id >= random())\n\nwhich is fast and gives a genuinely random result row. At least up\nuntil you have enough rows that there start being duplicate random_ids,\nwhich AFAIK would be 2 billion rows with a decent random()\nimplementation. If you're concerned about that, you could periodically\nre-randomize with\n\tUPDATE table SET random_id = random();\nso that any rows that were \"hidden\" because they had a duplicate\nrandom_id have another shot at being choosable. But with only a few mil\nrows I don't think you need to worry.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Aug 2003 10:47:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selecting random rows efficiently " }, { "msg_contents": "I said:\n> 3. Your query now looks like\n> \tSELECT * FROM table WHERE random_id >= random()\n> \tORDER BY random_id LIMIT 1;\n\nCorrection: the above won't give quite the right query because random()\nis marked as a volatile function. 
You can hide the random() call inside\na user-defined function that you (misleadingly) mark stable, or you can\njust stick it into a sub-select:\n\nregression=# explain select * from foo WHERE random_id >= (select random())\nregression-# ORDER BY random_id LIMIT 1;\n QUERY PLAN\n-------------------------------------------------------------------------\n Limit (cost=0.01..0.15 rows=1 width=8)\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0)\n -> Index Scan using fooi on foo (cost=0.00..45.50 rows=334 width=8)\n Index Cond: (random_id >= $0)\n(5 rows)\n\nThis technique is probably safer against future planner changes,\nhowever:\n\nregression=# create function oneshot_random() returns float8 as\nregression-# 'select random()' language sql stable;\nCREATE FUNCTION\nregression=# explain select * from foo WHERE random_id >= oneshot_random()\nregression-# ORDER BY random_id LIMIT 1;\n QUERY PLAN\n-------------------------------------------------------------------------\n Limit (cost=0.00..0.14 rows=1 width=8)\n -> Index Scan using fooi on foo (cost=0.00..46.33 rows=334 width=8)\n Index Cond: (random_id >= oneshot_random())\n(3 rows)\n\nThe point here is that an indexscan boundary condition has to use stable\nor immutable functions. By marking oneshot_random() stable, you\nessentially say that it's okay to evaluate it only once per query,\nrather than once at each row.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Aug 2003 11:14:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selecting random rows efficiently " }, { "msg_contents": "> Just a data point, but on my Dual Xeon 2.4Gig machine with a 10k SCSI\n> drive I can do 4k inserts/second if I turn fsync off. If you have a\n> battery-backed controller, you should be able to do the same. (You will\n> not need to turn fsync off --- fsync will just be fast because of the\n> disk drive RAM).\n>\n> Am I missing something?\n\nI think Ron asked this, but I will too, is that 4k inserts in one transaction or 4k transactions each with one insert?\n\nfsync is very much faster (as are all random writes) with the write-back cache, but I'd hazard a guess that it's still not nearly as\nfast as turning fsync off altogether. I'll do a test perhaps...\n\n\n\n", "msg_date": "Sat, 30 Aug 2003 16:36:20 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "> SELECT <blah>\n> IF <some circumstance that happens about 1/8th of the time>\n> BEGIN\n> INSERT\n> or\n> UPDATE\n> COMMIT;\n>\n> He says his current h/w peaks at 1/10th that rate.\n>\n> My question is: is that current peak rate (\"300 inserts/updates\n> *or* 2500 selects\") based upon 1 connection, or many connections?\n> With 4 CPUs, and a 4 disk RAID10, I wouldn't be surprised if 4 con-\n> current connections gives the optimum speed.\n\nWell it's more like each user interaction looks like:\n\n\tSELECT\n\tSELECT\n\tSELECT\n\tSELECT\n\tSELECT\n\tSELECT\n\tINSERT\n\tSELECT\n\tSELECT\n\tSELECT\n\tSELECT\n\tINSERT\n\tSELECT\n\tSELECT\n\tSELECT\n\tUPDATE\n\tSELECT\n\tSELECT\n\tUPDATE\n\tSELECT\n\nAnd concurrency is very high, because it's a web app, and each httpd has one connection to PG, and there can be hundreds of active\nhttpd processes. Some kind of connection pooling scheme might be in order when there are that many active clients. 
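Stepping back to the random_id recipe above, steps 1 and 2 spelled out as DDL might look like this (foo and fooi are the same placeholder names used in the EXPLAIN output; note that the 7.x ALTER TABLE may not accept a DEFAULT directly in ADD COLUMN, so it is split out here, and the explicit UPDATE backfills rows that already exist):

	ALTER TABLE foo ADD COLUMN random_id float8;
	ALTER TABLE foo ALTER COLUMN random_id SET DEFAULT random();
	UPDATE foo SET random_id = random();	-- backfill existing rows
	CREATE INDEX fooi ON foo (random_id);

After that, the indexed LIMIT 1 query shown above applies as-is.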
Any views?\n\n\n", "msg_date": "Sat, 30 Aug 2003 16:36:20 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Sat, 2003-08-30 at 09:01, Rod Taylor wrote:\n> > i was hoping there was some trickery with sequences that would allow me to \n> > easily pick a random valid sequence number..?\n> \n> I would suggest renumbering the data.\n> \n> ALTER SEQUENCE ... RESTART WITH 1;\n> UPDATE table SET pkey = DEFAULT;\n> \n> Of course, PostgreSQL may have trouble with that update due to\n> evaluation of the unique constraint immediately -- so drop the primary\n> key first, and add it back after.\n\nAnd if there are child tables, they'd all have to be updated, too.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Whatever may be the moral ambiguities of the so-called \ndemoratic nations and however serious may be their failure to \nconform perfectly to their democratic ideals, it is sheer moral \nperversity to equate the inconsistencies of a democratic \ncivilization with the brutalities which modern tyrannical states\npractice.\"\nReinhold Nieburhr, ca. 1940\n\n", "msg_date": "Sat, 30 Aug 2003 11:18:25 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selecting random rows efficiently" }, { "msg_contents": "Considering that we'd have to index the random field too, it'd be neater in\nthe long term to re-number the primary key. Although, being a primary key,\nthat's foreign-keyed from absolutely everywhere, so that'd probably take an\namusingly long time.\n\n...and no we're not from Micronesia, we're from ever so slightly less exotic\nLondon. Though Micronesia might be nice...\n\nRuss (also from last.fm but without the fancy address)\n\[email protected] wrote:\n> On Sat, 2003-08-30 at 09:01, Rod Taylor wrote:\n>>> i was hoping there was some trickery with sequences that would\n>>> allow me to easily pick a random valid sequence number..?\n>>\n>> I would suggest renumbering the data.\n>>\n>> ALTER SEQUENCE ... RESTART WITH 1;\n>> UPDATE table SET pkey = DEFAULT;\n>>\n>> Of course, PostgreSQL may have trouble with that update due to\n>> evaluation of the unique constraint immediately -- so drop the\n>> primary key first, and add it back after.\n>\n> And if there are child tables, they'd all have to be updated, too.\n\n\n", "msg_date": "Sat, 30 Aug 2003 18:43:22 +0100", "msg_from": "\"Russell Garrett\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selecting random rows efficiently" }, { "msg_contents": ">>>>> \"AS\" == Andrew Sullivan <[email protected]> writes:\n\nAS> On Fri, Aug 29, 2003 at 12:05:03AM -0700, William Yu wrote:\n>> We should see a boost when we move to 64-bit Linux and hopefully another \n>> one when NUMA for Linux is production-stable.\n\nAS> According to the people who've worked with SGIs, NUMA actually seems\nAS> to make things worse. It has something to do with how the shared\nAS> memory is handled. You'll want to dig through the -general or\nAS> -hackers archives from somewhere between 9 and 14 months ago, IIRC.\n\nI knew my PhD research would one day be good for *something* ...\n\nThe basic premise of NUMA is that you can isolate which data belongs\nto which processor and put that on memory pages that are local/closer\nto it. 
In practice, this is harder than it sounds as it requires very\ndetailed knowledge of the application's data access patterns, and how\nmemory is allocated by the OS and standard libraries. Often you end\nup with pages that have data that should be local to two different\nprocessors, and that data keeps being migrated (if your NUMA OS\nsupports page migration) between the two processors or one of them\njust gets slow access.\n\nI can't imagine it benefiting postgres given its globally shared\nbuffers.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Tue, 02 Sep 2003 12:01:16 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": ">>>>> \"MC\" == Matt Clark <[email protected]> writes:\n\nMC> And concurrency is very high, because it's a web app, and each\nMC> httpd has one connection to PG, and there can be hundreds of\nMC> active httpd processes. Some kind of connection pooling scheme\nMC> might be in order when there are that many active clients. Any\n\nOne thing you really should do (don't know if you already do it...) is\nhave your web split into a front-end proxy and a back-end application\nserver. There are lots of docs on how to do this for mod_perl, but it\ncan apply to just about any backend technology that is pooling the\nconnections.\n\nWith a setup like this, my front-end web server typically has about\n100 to 150 connections, and the backend doing the dynamic work (and\naccessing the database) has peaked at 60 or so. Usually the backend\nnumbers at about 25.\n\nThe front-end small processes get to deal with your dialup customers\ntrickling down the data since it buffers your backend for you.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Tue, 02 Sep 2003 12:08:15 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "\nVivek Khera <[email protected]> writes:\n\n> The front-end small processes get to deal with your dialup customers\n> trickling down the data since it buffers your backend for you.\n\nHuh. Well, I used to think this. But I think I was wrong. I used to have\napache proxy servers running in front of the mod_perl apache servers. The\nproxy servers handled image and static html requests, and proxied any dynamic\ncontent to the mod_perl servers.\n\nIn fact most web pages are only a few kilobytes, and you can easily configure\nthe kernel buffers on the sockets to be 32kb or more. So the proxies would\nonly come into play when there was a really large dynamic document, something\nthat should probably never happen on a high volume web site anyways.\n\nI think the main source of the benefit people see from this setup is the\nstatic content. 
For that you get a bigger kick out of separating the static\ncontent onto entirely separate servers, preferably something slim like thttpd\nand just exposing the mod_perl/php/whatever servers directly.\n\nThe one thing I worry about exposing the dynamic servers directly is\nsusceptibility to dos or ddos attacks. Since all someone has to do to tie up\nyour precious heavyweight apache slot is make a connection, one machine could\neasily tie up your whole web site. That would be a bit harder if you had\nhundreds of slots available. Of course even so it's not hard.\n\n-- \ngreg\n\n", "msg_date": "02 Sep 2003 13:32:41 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "On Tue, 2003-09-02 at 11:01, Vivek Khera wrote:\n> >>>>> \"AS\" == Andrew Sullivan <[email protected]> writes:\n> \n> AS> On Fri, Aug 29, 2003 at 12:05:03AM -0700, William Yu wrote:\n> >> We should see a boost when we move to 64-bit Linux and hopefully another \n> >> one when NUMA for Linux is production-stable.\n> \n> AS> According to the people who've worked with SGIs, NUMA actually seems\n> AS> to make things worse. It has something to do with how the shared\n> AS> memory is handled. You'll want to dig through the -general or\n> AS> -hackers archives from somewhere between 9 and 14 months ago, IIRC.\n> \n> I knew my PhD research would one day be good for *something* ...\n> \n> The basic premise of NUMA is that you can isolate which data belongs\n> to which processor and put that on memory pages that are local/closer\n> to it. In practice, this is harder than it sounds as it requires very\n> detailed knowledge of the application's data access patterns, and how\n> memory is allocated by the OS and standard libraries. Often you end\n> up with pages that have data that should be local to two different\n> processors, and that data keeps being migrated (if your NUMA OS\n> supports page migration) between the two processors or one of them\n> just gets slow access.\n> \n> I can't imagine it benefiting postgres given its globally shared\n> buffers.\n\nOpteron is supposed to have screaming fast inter-CPU memory xfer\n(HyperTransport does inter-CPU as well as well as CPU-RAM transport).\n\nThat's supposed to help with scaling, and PostgreSQL really may take\nadvantage of that, with, say 16-32 processors?\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Knowledge should be free for all.\"\nHarcourt Fenton Mudd, Star Trek:TOS, \"I, Mudd\"\n\n", "msg_date": "Tue, 02 Sep 2003 12:45:19 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" }, { "msg_contents": "Can you just create an extra serial column and make sure that one is \nalways in order and no holes in it? (i.e. a nightly process, etc...)???\n\nIf so, then something like this truly flies:\n\nselect * from accounts where aid = (select cast(floor(random()*100000)+1 as int));\n\nMy times on it on a 100,000 row table are < 1 millisecond.\n\nNote that you have to have a hole free sequence AND know how many rows \nthere are, but if you can meet those needs, this is screamingly fast.\n\nOn Sat, 30 Aug 2003, Russell Garrett wrote:\n\n> Considering that we'd have to index the random field too, it'd be neater in\n> the long term to re-number the primary key. 
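A sketch of the "extra serial column with no holes" idea above -- pick_id, pick_seq and the nightly job are hypothetical names, not anything that exists in the schema, and it assumes nothing else is writing while the refresh runs:

	-- one-time setup
	ALTER TABLE accounts ADD COLUMN pick_id int;
	CREATE INDEX accounts_pick_id ON accounts (pick_id);

	-- nightly refresh so pick_id runs 1..N with no gaps
	CREATE SEQUENCE pick_seq;
	BEGIN;
	UPDATE accounts SET pick_id = nextval('pick_seq');
	COMMIT;
	DROP SEQUENCE pick_seq;

	-- then the sub-select trick above works against pick_id;
	-- the 100000 literal must match the current row count
	SELECT * FROM accounts
	 WHERE pick_id = (SELECT cast(floor(random()*100000)+1 AS int));

The refresh rewrites every row, so it only pays off if the random selects are far more frequent than the renumbering.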
Although, being a primary key,\n> that's foreign-keyed from absolutely everywhere, so that'd probably take an\n> amusingly long time.\n> \n> ...and no we're not from Micronesia, we're from ever so slightly less exotic\n> London. Though Micronesia might be nice...\n> \n> Russ (also from last.fm but without the fancy address)\n> \n> [email protected] wrote:\n> > On Sat, 2003-08-30 at 09:01, Rod Taylor wrote:\n> >>> i was hoping there was some trickery with sequences that would\n> >>> allow me to easily pick a random valid sequence number..?\n> >>\n> >> I would suggest renumbering the data.\n> >>\n> >> ALTER SEQUENCE ... RESTART WITH 1;\n> >> UPDATE table SET pkey = DEFAULT;\n> >>\n> >> Of course, PostgreSQL may have trouble with that update due to\n> >> evaluation of the unique constraint immediately -- so drop the\n> >> primary key first, and add it back after.\n> >\n> > And if there are child tables, they'd all have to be updated, too.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n", "msg_date": "Wed, 3 Sep 2003 18:09:14 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Selecting random rows efficiently" }, { "msg_contents": "Vivek, you reported recently that increasing sort_mem and\ncheckpoint_segments increased performance. Can you run a test to see\nhow much of that improvement was just because of increasing\ncheckpoint_segments?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 5 Sep 2003 11:17:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "checkpoints too frequent" }, { "msg_contents": ">>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n\nBM> Vivek, you reported recently that increasing sort_mem and\nBM> checkpoint_segments increased performance. Can you run a test to see\nBM> how much of that improvement was just because of increasing\nBM> checkpoint_segments?\n\ni was thinking just the same thing myself.\n\ni'll start that run now.\n", "msg_date": "Fri, 5 Sep 2003 11:41:53 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoints too frequent" }, { "msg_contents": "Matt Clark wrote:\n> > Just a data point, but on my Dual Xeon 2.4Gig machine with a 10k SCSI\n> > drive I can do 4k inserts/second if I turn fsync off. If you have a\n> > battery-backed controller, you should be able to do the same. (You will\n> > not need to turn fsync off --- fsync will just be fast because of the\n> > disk drive RAM).\n> >\n> > Am I missing something?\n> \n> I think Ron asked this, but I will too, is that 4k inserts in\n> one transaction or 4k transactions each with one insert?\n> \n> fsync is very much faster (as are all random writes) with the\n> write-back cache, but I'd hazard a guess that it's still not\n> nearly as fast as turning fsync off altogether. I'll do a test\n> perhaps...\n\nSorry to be replying late. 
Here is what I found.\n\nfsync on\n Inserts all in one transaction 3700 inserts/second\n Inserts in separate transactions 870 inserts/second\n\nfsync off\n Inserts all in one transaction 3700 inserts/second\n Inserts all in one transaction 2500 inserts/second\n\nECPG test program attached.\n\n--\n\n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n/*\n *\tThread test program\n *\tby Philip Yarra\n */\n\n\n#include <stdlib.h>\n\nvoid\t\tins1(void);\n\nEXEC SQL BEGIN DECLARE SECTION;\nchar\t *dbname;\nint\t iterations = 10;\nEXEC SQL END DECLARE SECTION;\n\nint\nmain(int argc, char *argv[])\n{\n\n\tif (argc < 2 || argc > 3)\n\t{\n\t\tfprintf(stderr, \"Usage: %s dbname [iterations]\\n\", argv[0]);\n\t\treturn 1;\n\t}\n\tdbname = argv[1];\n\n\tif (argc == 3)\n\t\titerations = atoi(argv[2]);\n\tif (iterations % 2 != 0)\n\t{\n\t\tfprintf(stderr, \"iterations must be an even number\\n\");\n\t\treturn 1;\n\t}\n\n\tEXEC SQL CONNECT TO:dbname AS test0;\n\n\t/* DROP might fail */\n\tEXEC SQL AT test0 DROP TABLE test_thread;\n\tEXEC SQL AT test0 COMMIT WORK;\n\tEXEC SQL AT test0 CREATE TABLE test_thread(message TEXT);\n\tEXEC SQL AT test0 COMMIT WORK;\n\tEXEC SQL DISCONNECT test0;\n\t\n\tins1();\n\n\treturn 0;\n}\n\nvoid\nins1(void)\n{\n\tint\t\t\ti;\n\tEXEC SQL WHENEVER sqlerror sqlprint;\n\tEXEC SQL CONNECT TO:dbname AS test1;\n\tEXEC SQL AT test1 SET AUTOCOMMIT TO ON;\n\n\tfor (i = 0; i < iterations; i++)\n\t\tEXEC SQL AT test1 INSERT INTO test_thread VALUES('thread1');\n//\tEXEC SQL AT test1 COMMIT WORK;\n\n\tEXEC SQL DISCONNECT test1;\n\n\tprintf(\"thread 1 : done!\\n\");\n}", "msg_date": "Tue, 9 Sep 2003 23:25:23 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware recommendations to scale to silly load" } ]
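Tying together the checkpoint remarks in this thread (Rod's "play around with checkpoints" and the checkpoint_segments question just above): the knobs involved live in postgresql.conf. The values below are only illustrative starting points for 7.3/7.4, not recommendations for any particular box:

	# postgresql.conf -- larger values space checkpoints out,
	# at the cost of more WAL on disk and longer crash recovery
	checkpoint_segments = 32	# default 3; each WAL segment is 16MB
	checkpoint_timeout = 600	# seconds between automatic checkpoints
	wal_buffers = 16		# shared-memory WAL buffers

Depending on the version, some of these only take effect on a postmaster restart, and it is worth watching how large pg_xlog grows before settling on numbers.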
[ { "msg_contents": "Hey all.\n\nI said I was going to do it, and I finally did it.\n\nAs with all performance tests/benchmarks, there are probably dozens or\nmore reasons why these results aren't as accurate or wonderful as they\nshould be. Take them for what they are and hopefully everyone can\nlearn a few things from them.\n\nIntelligent feedback is welcome.\n\nhttp://www.potentialtech.com/wmoran/postgresql.php\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Tue, 26 Aug 2003 21:47:48 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": true, "msg_subject": "The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "\nBill,\n\nVery interesting results. I'd like to command you on your honesty.\nHaving started out with the intentions of proving that FreeBSD is faster\nthan Linux only to find that the opposite is true must not have been\nrewarding for you. However, these unexpected results serve only to\nreinforce the integrity of your tests.\n\nThanks for all the work.\n\nBalazs\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Bill Moran\nSent: Tuesday, August 26, 2003 6:48 PM\nTo: [email protected]\nSubject: [PERFORM] The results of my PostgreSQL/filesystem performance\ntests\n\nHey all.\n\nI said I was going to do it, and I finally did it.\n\nAs with all performance tests/benchmarks, there are probably dozens or\nmore reasons why these results aren't as accurate or wonderful as they\nshould be. Take them for what they are and hopefully everyone can\nlearn a few things from them.\n\nIntelligent feedback is welcome.\n\nhttp://www.potentialtech.com/wmoran/postgresql.php\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n", "msg_date": "Wed, 27 Aug 2003 22:21:23 -0700", "msg_from": "\"Balazs Wellisch\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "On 26 Aug 2003 at 21:47, Bill Moran wrote:\n\n> Hey all.\n> \n> I said I was going to do it, and I finally did it.\n> \n> As with all performance tests/benchmarks, there are probably dozens or\n> more reasons why these results aren't as accurate or wonderful as they\n> should be. Take them for what they are and hopefully everyone can\n> learn a few things from them.\n> \n> Intelligent feedback is welcome.\n> \n> http://www.potentialtech.com/wmoran/postgresql.php\n\nCan we have these benchmarks and relevant information stored in a central \narchive at techdocs(Say)?\n\nThat would be better than searching mail archives..\n\n\nBye\n Shridhar\n\n--\n\t\"... freedom ... is a worship word...\"\t\"It is our worship word too.\"\t\t-- Cloud \nWilliam and Kirk, \"The Omega Glory\", stardate unknown\n\n", "msg_date": "Thu, 28 Aug 2003 12:15:14 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "On Tue, 26 Aug 2003, Bill Moran wrote:\n\n> As with all performance tests/benchmarks, there are probably dozens or\n> more reasons why these results aren't as accurate or wonderful as they\n> should be. Take them for what they are and hopefully everyone can\n> learn a few things from them.\n\nWhat version of pg was used in debian and redhat? 
For freebsd it's 7.2.4\nit says on the page, but I see nothing about the other two. The version\nthat comes with Redhat 9 (Shrike) is 7.3.2.\n\n-- \n/Dennis\n\n", "msg_date": "Thu, 28 Aug 2003 10:11:02 +0200 (CEST)", "msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "On Tue, 2003-08-26 at 20:47, Bill Moran wrote:\n> Hey all.\n> \n> I said I was going to do it, and I finally did it.\n> \n> As with all performance tests/benchmarks, there are probably dozens or\n> more reasons why these results aren't as accurate or wonderful as they\n> should be. Take them for what they are and hopefully everyone can\n> learn a few things from them.\n> \n> Intelligent feedback is welcome.\n> \n> http://www.potentialtech.com/wmoran/postgresql.php\n\nHi,\n\nWoody has pg 7.2.1. Note also that Woody comes with kernel 2.4.18.\n\nIt would be interesting to see how Debian Sid (kernel 2.4.21 and\npg 7.3.3) would perform.\n\nThanks for the results!\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Oh, great altar of passive entertainment, bestow upon me thy \ndiscordant images at such speed as to render linear thought impossible\"\nCalvin, regarding TV\n\n", "msg_date": "Thu, 28 Aug 2003 04:19:21 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "On Tue, 2003-08-26 at 20:47, Bill Moran wrote:\n> Hey all.\n> \n> I said I was going to do it, and I finally did it.\n> \n> As with all performance tests/benchmarks, there are probably dozens or\n> more reasons why these results aren't as accurate or wonderful as they\n> should be. Take them for what they are and hopefully everyone can\n> learn a few things from them.\n> \n> Intelligent feedback is welcome.\n> \n> http://www.potentialtech.com/wmoran/postgresql.php\n\nI notice that the Linux FSs weren't tested with noatime. Any \nreason?\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"As I like to joke, I may have invented it, but Microsoft made \nit popular\"\nDavid Bradley, regarding Ctrl-Alt-Del \n\n", "msg_date": "Thu, 28 Aug 2003 04:23:29 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "A long time ago, in a galaxy far, far away, [email protected] (\"Balazs Wellisch\") wrote:\n> Very interesting results. I'd like to command you on your honesty.\n> Having started out with the intentions of proving that FreeBSD is faster\n> than Linux only to find that the opposite is true must not have been\n> rewarding for you. However, these unexpected results serve only to\n> reinforce the integrity of your tests.\n\nWell put.\n\nTo see a result that the tester didn't really want to see/present does\nsuggest good things about the tester's honesty. There was incentive\nto hide unfavorable results.\n\nWhat it still leaves quite open is just what happens when the OS has\nmore than one disk drive or CPU to play with. 
It's not clear what\nhappens in such cases, whether FreeBSD would catch up, or be \"left\nfurther in the dust.\" The traditional \"propaganda\" has been that\nthere are all sorts of reasons to expect PostgreSQL on FreeBSD to run\na bit faster than on Linux; it is a bit unexpected for the opposite to\nseem true.\n-- \noutput = reverse(\"gro.mca\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/sap.html\n\"I am aware of the benefits of a micro kernel approach. However, the\nfact remains that Linux is here, and GNU isn't --- and people have\nbeen working on Hurd for a lot longer than Linus has been working on\nLinux.\" -- Ted T'so, 1992.\n", "msg_date": "Thu, 28 Aug 2003 07:48:07 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "> Intelligent feedback is welcome.\n> \n> http://www.potentialtech.com/wmoran/postgresql.php\n\nGood work. But I can't find information about xfs. Do you plan to add\nthis one FS in test?\n\nLuf\n", "msg_date": "Thu, 28 Aug 2003 14:05:05 +0200", "msg_from": "Ludek Finstrle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "On Tue, 26 Aug 2003, Bill Moran wrote:\n\n>\n> Intelligent feedback is welcome.\n>\nThat's some good work there, Lou. You'll make sgt for that someday.\n\nBut I think the next step, before trying out other filesystems and options\nwould be concurrency. Run a bunch of these beasts together and see what\nhappens (I don't think too many of us have a single session running).\nPerhaps even make them \"interfere\" with each other to create as much\n\"pain\" as possible?\n\non a side note - I might be blind here - I didn't see what version of pg\nyou were using or any postgresql.conf tweaks - or did you just use\nwhatever came with each distro?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Thu, 28 Aug 2003 08:25:18 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "Couple of questions:\n\nWhat was the postgresql.conf configuration used? Default?\n\nHow many threads of the script ran? Looks like a single user only.\n\nI assume there was nothing else running at the time (cron, sendmail,\netc. were all off?)\n\nDo you know whether the machines were disk or I/O bound?\n\nWas PostgreSQL compiled the same for each OS or did you use the rpm,\ndeb, tgz that were available?\n\nOn Tue, 2003-08-26 at 21:47, Bill Moran wrote:\n> Hey all.\n> \n> I said I was going to do it, and I finally did it.\n> \n> As with all performance tests/benchmarks, there are probably dozens or\n> more reasons why these results aren't as accurate or wonderful as they\n> should be. Take them for what they are and hopefully everyone can\n> learn a few things from them.\n> \n> Intelligent feedback is welcome.\n> \n> http://www.potentialtech.com/wmoran/postgresql.php", "msg_date": "Thu, 28 Aug 2003 09:44:18 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "2003-08-28 ragyogó napján Christopher Browne ezt üzente:\n\n> A long time ago, in a galaxy far, far away, [email protected] (\"Balazs Wellisch\") wrote:\n> > Very interesting results. 
I'd like to command you on your honesty.\n> > Having started out with the intentions of proving that FreeBSD is faster\n> > than Linux only to find that the opposite is true must not have been\n> > rewarding for you. However, these unexpected results serve only to\n> > reinforce the integrity of your tests.\n>\n> Well put.\n>\n> To see a result that the tester didn't really want to see/present does\n> suggest good things about the tester's honesty. There was incentive\n> to hide unfavorable results.\n>\n> What it still leaves quite open is just what happens when the OS has\n> more than one disk drive or CPU to play with. It's not clear what\n> happens in such cases, whether FreeBSD would catch up, or be \"left\n> further in the dust.\" The traditional \"propaganda\" has been that\n> there are all sorts of reasons to expect PostgreSQL on FreeBSD to run\n> a bit faster than on Linux; it is a bit unexpected for the opposite to\n> seem true.\n\nAFAIK *BSD better in the handling of big loads - maybe when multiple\nconcurrent tests run against a linux and a bsd box, we see better result.\nOr not.\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Thu, 28 Aug 2003 15:44:58 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "2003-08-28 ragyogó napján Ludek Finstrle ezt üzente:\n\n> > Intelligent feedback is welcome.\n> >\n> > http://www.potentialtech.com/wmoran/postgresql.php\n>\n> Good work. But I can't find information about xfs. Do you plan to add\n> this one FS in test?\n\nhttp://mail.sth.sze.hu/~hsz/sql/\n\n\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Thu, 28 Aug 2003 15:45:42 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "> What it still leaves quite open is just what happens when the OS has\n> more than one disk drive or CPU to play with. It's not clear what\n> happens in such cases, whether FreeBSD would catch up, or be \"left\n> further in the dust.\" The traditional \"propaganda\" has been that\n> there are all sorts of reasons to expect PostgreSQL on FreeBSD to\n> run a bit faster than on Linux; it is a bit unexpected for the\n> opposite to seem true.\n\nLet me nip this in the butt before people run away with ideas that\naren't correct. When the tests were performed in FreeBSD 5.1 and\nLinux, the hard drives were running UDMA. When running 4.8, for some\nreason his drives settled in on PIO mode:\n\nad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) retrying\nad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) retrying\nad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) retrying\nad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) falling back to PIO mode\n\nThe benchmarks were hardly conclusive as UDMA runs vastly faster than\nPIO. 
Until we hear back as to whether cables were jarred loose\nbetween the tests or hearing if something else changed, I'd hardly\nconsider these conclusive tests given PIO/UDMA is apples to oranges in\nterms of speed and I fully expect that FreeBSD 4.8 will perform at\nleast faster than 5.1 (5.x is still being unwound from Giant), but\nshould out perform Linux as well if industry experience iss any\nindicator.\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Thu, 28 Aug 2003 09:12:33 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "> http://www.potentialtech.com/wmoran/postgresql.php\n> -- \n> Bill Moran\n> Potential Technologies\n> http://www.potentialtech.com\n\nAdding my voice to the many, thanks for sharing your results Bill. Very \ninstructive.\n\n-- \nBest,\nAl Hulaton | Sr. Account Engineer | Command Prompt, Inc.\n503.222.2783 | [email protected]\nHome of Mammoth PostgreSQL and 'Practical PostgreSQL'\nManaged PostgreSQL, Linux services and consulting\nRead and Search O'Reilly's 'Practical PostgreSQL' at\nhttp://www.commandprompt.com\n\n", "msg_date": "Thu, 28 Aug 2003 11:17:16 -0700", "msg_from": "Al Hulaton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "I need to step in and do 2 things:\n\nFirst, apologize for posting inaccurate test results.\n\nSecond, verify that Sean is absolutely correct. FreeBSD 4.8 was accessing\nthe drives in PIO mode, which is significantly lousier than DMA, which\nRedHat was able to use. As a result, the tests are unreasonably skewed\nin favor of Linux.\n\nThe only thing that the currently posted results prove is that Linux is\nbetter at dealing with crappy hardware than BSD (which I feel we already\nknew).\n\nI did some rescrounging, and found some newer hardware stuffed in a\ncorner that I had forgotten was even around. I am currently re-running\nthe tests and will post new results as soon as there are enough to\nbe interesting to talk about.\n\nIn an attempt to avoid the same mistake, I did a timed test with dd(1)\nto a raw partition on both Linux and FreeBSD to ensure that both systems\nare able to access the hardware at more or less the same speed. The\nresults of this will be included.\n\nI'm also gathering considerably more information about the state of\nthe system during the tests, which should answer a number of questions\nI've been getting.\n\nTo the many people who asked questions like \"why not try filesystem x\non distribution y\" and similar questions, the answer in most cases is\ntime. I've pared the tests down some so they run faster, and I'm hoping\nto be able to run more tests on more combinations of configurations as\na result. Also, I never intended for anyone to assume that I was _done_\ntesting, just that I had enough results for folks to talk about.\n\nI'll post again when I have enough results to be interesting, until then,\nI apologize again for the inaccurate results.\n\nSean Chittenden wrote:\n>>What it still leaves quite open is just what happens when the OS has\n>>more than one disk drive or CPU to play with. 
It's not clear what\n>>happens in such cases, whether FreeBSD would catch up, or be \"left\n>>further in the dust.\" The traditional \"propaganda\" has been that\n>>there are all sorts of reasons to expect PostgreSQL on FreeBSD to\n>>run a bit faster than on Linux; it is a bit unexpected for the\n>>opposite to seem true.\n> \n> Let me nip this in the butt before people run away with ideas that\n> aren't correct. When the tests were performed in FreeBSD 5.1 and\n> Linux, the hard drives were running UDMA. When running 4.8, for some\n> reason his drives settled in on PIO mode:\n> \n> ad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) retrying\n> ad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) retrying\n> ad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) retrying\n> ad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) falling back to PIO mode\n> \n> The benchmarks were hardly conclusive as UDMA runs vastly faster than\n> PIO. Until we hear back as to whether cables were jarred loose\n> between the tests or hearing if something else changed, I'd hardly\n> consider these conclusive tests given PIO/UDMA is apples to oranges in\n> terms of speed and I fully expect that FreeBSD 4.8 will perform at\n> least faster than 5.1 (5.x is still being unwound from Giant), but\n> should out perform Linux as well if industry experience iss any\n> indicator.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Thu, 28 Aug 2003 15:16:11 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "> I need to step in and do 2 things:\n\nThanks for posting that. Let me know if you have any questions while\ndoing your testing. I've found that using 16K blocks on FreeBSD\nresults in about an 8% speedup in writes to the database, fwiw.\n\nI'm likely going to make this the default for PostgreSQL on FreeBSD\nstarting with 7.4 (just posted something to -hackers about this)f. If\nyou'd like to do this in your testing, just apply the following patch.\n\nRight now PostgreSQL defaults to 8K blocks, but FreeBSD uses 16K\nblocks which means that currently, reading two blocks of data in PG is\ntwo read calls to the OS, one reads 16K of data off disk and returns\nthe 1st page, the 2nd call pulls the 2nd block from the FS cache. In\nmaking things 16K, it avoids the need for the 2nd system call which is\nwhere the performance difference is coming from, afaikt. -sc\n\n-- \nSean Chittenden", "msg_date": "Thu, 28 Aug 2003 12:31:23 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": ">>>>> \"SC\" == Sean Chittenden <[email protected]> writes:\n\n>> I need to step in and do 2 things:\nSC> Thanks for posting that. Let me know if you have any questions while\nSC> doing your testing. I've found that using 16K blocks on FreeBSD\nSC> results in about an 8% speedup in writes to the database, fwiw.\n\nWhere/how does one set this? In postgresql.conf or on the file system\nor during compilation of postgres? 
I'm on FreeBSD 4.8 still.\n\nI've got a box right now on which I'm comparing the speed merits of\nhardware RAID10, RAID5, and RAID50 using a filesystem benchmark\nutility (bonnie++). If I have time I'm gonna try different striping\nblock sizes. Right now I'm using 32k byte stripe size.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 28 Aug 2003 16:20:24 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": ">>>>> \"SC\" == Sean Chittenden <[email protected]> writes:\n\n>> I need to step in and do 2 things:\nSC> Thanks for posting that. Let me know if you have any questions while\nSC> doing your testing. I've found that using 16K blocks on FreeBSD\nSC> results in about an 8% speedup in writes to the database, fwiw.\n\nok.. ignore my prior request about how to set that... i missed you had\nincluded a patch.\n\nAny recommendations on newfs parameters for an overly large file\nsystem used solely for Postgres? Over 100Gb (with raid 10) or over\n200Gb (with raid 5)?\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 28 Aug 2003 16:27:55 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "On Thu, 28 Aug 2003, Sean Chittenden wrote:\n\n> > What it still leaves quite open is just what happens when the OS has\n> > more than one disk drive or CPU to play with. It's not clear what\n> > happens in such cases, whether FreeBSD would catch up, or be \"left\n> > further in the dust.\" The traditional \"propaganda\" has been that\n> > there are all sorts of reasons to expect PostgreSQL on FreeBSD to\n> > run a bit faster than on Linux; it is a bit unexpected for the\n> > opposite to seem true.\n> \n> Let me nip this in the butt before people run away with ideas that\n> aren't correct. When the tests were performed in FreeBSD 5.1 and\n> Linux, the hard drives were running UDMA. When running 4.8, for some\n> reason his drives settled in on PIO mode:\n> \n> ad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) retrying\n> ad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) retrying\n> ad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) retrying\n> ad0s1a: UDMA ICRC error writing fsbn 1458368 of 729184-729215 (ad0s1 bn 1458368; cn 241 tn 12 sn 44) falling back to PIO mode\n> \n> The benchmarks were hardly conclusive as UDMA runs vastly faster than\n> PIO. 
Until we hear back as to whether cables were jarred loose\n> between the tests or hearing if something else changed, I'd hardly\n> consider these conclusive tests given PIO/UDMA is apples to oranges in\n> terms of speed and I fully expect that FreeBSD 4.8 will perform at\n> least faster than 5.1 (5.x is still being unwound from Giant), but\n> should out perform Linux as well if industry experience iss any\n> indicator.\n\nPlus, in most \"real\" servers you're gonna be running SCSI, so it might be \nnice to see a test with a good SCSI controller (Symbios 875 is a nice \nchoice) and a couple hard drives, one each for WAL and data. This would \nmore closely resemble actual usage and there are likely to be fewer issues \nwith things like UDMA versus PIO on SCSI.\n\n", "msg_date": "Thu, 28 Aug 2003 14:33:41 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "> >> I need to step in and do 2 things:\n> SC> Thanks for posting that. Let me know if you have any questions while\n> SC> doing your testing. I've found that using 16K blocks on FreeBSD\n> SC> results in about an 8% speedup in writes to the database, fwiw.\n> \n> ok.. ignore my prior request about how to set that... i missed you\n> had included a patch.\n> \n> Any recommendations on newfs parameters for an overly large file\n> system used solely for Postgres? Over 100Gb (with raid 10) or over\n> 200Gb (with raid 5)?\n\nNope, you'll have to test and see. If you find something that works,\nhowever, let me know. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Thu, 28 Aug 2003 14:49:40 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "Shridhar Daithankar wrote:\n> On 26 Aug 2003 at 21:47, Bill Moran wrote:\n> \n> \n>>Hey all.\n>>\n>>I said I was going to do it, and I finally did it.\n>>\n>>As with all performance tests/benchmarks, there are probably dozens or\n>>more reasons why these results aren't as accurate or wonderful as they\n>>should be. Take them for what they are and hopefully everyone can\n>>learn a few things from them.\n>>\n>>Intelligent feedback is welcome.\n>>\n>>http://www.potentialtech.com/wmoran/postgresql.php\n> \n> \n> Can we have these benchmarks and relevant information stored in a central \n> archive at techdocs(Say)?\n> \n> That would be better than searching mail archives..\n\nI agree.\n\nThere doesn't seem to be a place on techdocs for benchmarks at the time.\nIs there another part of the site that would be good for these to go?\nI'll keep them posted on my site until a better location is found.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Thu, 28 Aug 2003 21:19:05 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "> > As with all performance tests/benchmarks, there are probably dozens or\n> > more reasons why these results aren't as accurate or wonderful as they\n> > should be. Take them for what they are and hopefully everyone can\n> > learn a few things from them.\n> >\n> > Intelligent feedback is welcome.\n> >\n> > http://www.potentialtech.com/wmoran/postgresql.php\n>\n> I notice that the Linux FSs weren't tested with noatime. 
Any\n> reason?\n\nMy friend, (a FreeBSD committer), was wondering what the results are if you\nturn off softupdates (to match Linux default installation) and use noatime.\nHe also wonders how bug the default IO is?\n\nChris\n\n", "msg_date": "Fri, 29 Aug 2003 09:40:18 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "> I'm likely going to make this the default for PostgreSQL on FreeBSD\n> starting with 7.4 (just posted something to -hackers about this)f. If\n> you'd like to do this in your testing, just apply the following patch.\n>\n> Right now PostgreSQL defaults to 8K blocks, but FreeBSD uses 16K\n> blocks which means that currently, reading two blocks of data in PG is\n> two read calls to the OS, one reads 16K of data off disk and returns\n> the 1st page, the 2nd call pulls the 2nd block from the FS cache. In\n> making things 16K, it avoids the need for the 2nd system call which is\n> where the performance difference is coming from, afaikt. -sc\n\nAre you _sure_ this won't cause any atomicity problems? Can FreeBSD write\n16k as an atomic unit?\n\nChris\n\n", "msg_date": "Fri, 29 Aug 2003 09:59:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n>>>As with all performance tests/benchmarks, there are probably dozens or\n>>>more reasons why these results aren't as accurate or wonderful as they\n>>>should be. Take them for what they are and hopefully everyone can\n>>>learn a few things from them.\n>>>\n>>>Intelligent feedback is welcome.\n>>>\n>>>http://www.potentialtech.com/wmoran/postgresql.php\n>>\n>>I notice that the Linux FSs weren't tested with noatime. Any\n>>reason?\n> \n> My friend, (a FreeBSD committer), was wondering what the results are if you\n> turn off softupdates (to match Linux default installation) and use noatime.\n\nKeep an eye on the page. The test results will be posted shortly after I\nfinish them.\n\nKeep in mind, I'm more interested in figuring out what can be done to make\nPostgres _faster_, so tests along that line are going to have a higher\npriority than ones that specifically compare \"apples to apples\" or anything\nlike that.\n\n> He also wonders how bug the default IO is?\n\nHuh?\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Thu, 28 Aug 2003 22:15:20 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > I'm likely going to make this the default for PostgreSQL on FreeBSD\n> > starting with 7.4 (just posted something to -hackers about this)f. If\n> > you'd like to do this in your testing, just apply the following patch.\n> >\n> > Right now PostgreSQL defaults to 8K blocks, but FreeBSD uses 16K\n> > blocks which means that currently, reading two blocks of data in PG is\n> > two read calls to the OS, one reads 16K of data off disk and returns\n> > the 1st page, the 2nd call pulls the 2nd block from the FS cache. In\n> > making things 16K, it avoids the need for the 2nd system call which is\n> > where the performance difference is coming from, afaikt. -sc\n> \n> Are you _sure_ this won't cause any atomicity problems? 
Can FreeBSD write\n> 16k as an atomic unit?\n\nWe pre-modified page images to WAL before modifying the page. The disks\nare only 512-byte blocks, so we don't rely on file system atomicity\nanymore anyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 29 Aug 2003 22:32:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": "Balazs Wellisch wrote:\n> \n> Bill,\n> \n> Very interesting results. I'd like to command you on your honesty.\n> Having started out with the intentions of proving that FreeBSD is faster\n> than Linux only to find that the opposite is true must not have been\n> rewarding for you. However, these unexpected results serve only to\n> reinforce the integrity of your tests.\n\nLooking at the results, ext2 is out because it isn't crash safe, and I\nhave heard Reiser uses a lot more CPU to do its work. It does show\next3 as slow, which was expected. Interesting how JFS came out, and XFS\nwould be interesting. And, of course, it is multiple backends that\nreally shows PostgreSQL off, so it could radically affect the results.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 29 Aug 2003 22:34:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The results of my PostgreSQL/filesystem performance tests" }, { "msg_contents": ">>>>> \"SC\" == Sean Chittenden <[email protected]> writes:\n\n>> I need to step in and do 2 things:\nSC> Thanks for posting that. Let me know if you have any questions while\nSC> doing your testing. I've found that using 16K blocks on FreeBSD\nSC> results in about an 8% speedup in writes to the database, fwiw.\n\nJust double checking: if I do this, then I need to halve the\nparameters in postgresql.conf that involve buffers, specifically,\nmax_fsm_pages and shared_buffers. I think max_fsm_pages should be\nadjusted since the number of pages in the system overall has been\nhalved.\n\nAnything else that should be re-tuned for this?\n\nMy tests are still running so I don't have numbers yet.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 03 Sep 2003 11:48:22 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "FreeBSD page size (was Re: The results of my PostgreSQL/filesystem\n\tperformance tests)" }, { "msg_contents": "Ok... simple tests have completed. 
Here are some numbers.\n\nFreeBSD 4.8\nPG 7.4b2\n4GB Ram\nDual Xeon 2.4GHz processors\n14 U320 SCSI disks attached to Dell PERC3/DC RAID controller in RAID 5\n config with 32k stripe size\n\nDump file:\n-rw-r--r-- 1 vivek wheel 1646633745 Aug 28 11:01 19-Aug-2003.dump\n\nWhen restored (after deleting one index that took up ~1Gb -- turned\nout it was redundant to another multi-column index):\n\n% df -k /u/d02\nFilesystem 1K-blocks Used Avail Capacity Mounted on\n/dev/amrd1s1e 226408360 18067260 190228432 9% /u/d02\n\n\n\npostgresql.conf alterations from standard:\nshared_buffers = 60000\nsort_mem = 8192\nvacuum_mem=131702\nmax_fsm_pages=1000000\neffective_cache_size=25600\nrandom_page-cost = 2\n\n\nrestore time: 14777 seconds\nvacuum analyze time: 30 minutes\nselect count(*) from user_list where owner_id=315; 50388.64 ms\n\n\nthe restore complained often about checkpoints occurring every few\nseconds:\n\nSep 2 11:57:14 d02 postgres[49721]: [5-1] LOG: checkpoints are occurring too frequently (15 seconds apart)\nSep 2 11:57:14 d02 postgres[49721]: [5-2] HINT: Consider increasing CHECKPOINT_SEGMENTS.\n\nThe HINT threw me off since I had to set checkpoint_segments in\npostgresql.conf, where as CHECKPOINT_SEGMENTS implied to me a\ncompile-time constant.\n\nAnyhow, so I deleted the PG data directory, and made these two\nchanges:\n\ncheckpoint_segments=50\nsort_mem = 131702\n\nThis *really* improved the time for the restore:\n\nrestore time: 11594 seconds\n\nthen I reset the checkpoint_segments and sort_mem back to old\nvalues...\n\nvacuum analyze time is still 30 minutes\nselect count(*) from user_list where owner_id=315; 51363.98 ms\n\nso the select appears a bit slower but it is hard to say why. the\nsystem is otherwise idle as it is not in production yet.\n\n\nThen I took the suggestion to update PG's page size to 16k and did the\nsame increase on sort_mem and checkpoint_segments as above. I also\nhalved the shared_buffers and max_fsm_pages (probably should have\nhalved the effective_cache_size too...)\n\nrestore time: 11322 seconds\nvacuum analyze time: 27 minutes\nselect count(*) from user_list where owner_id=315; 48267.66 ms\n\n\nGranted, given this simple test it is hard to say whether the 16k\nblocks will make an improvement under live load, but I'm gonna give it\na shot. The 16k block size shows me roughly 2-6% improvement on these\ntests.\n\nSo throw in my vote for 16k blocks on FreeBSD (and annotate the docs\nto tell which parameters need to be halved to account for it).\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 03 Sep 2003 15:16:30 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": "> Ok... simple tests have completed. Here are some numbers.\n> \n> FreeBSD 4.8\n> PG 7.4b2\n> 4GB Ram\n> Dual Xeon 2.4GHz processors\n> 14 U320 SCSI disks attached to Dell PERC3/DC RAID controller in RAID 5\n> config with 32k stripe size\n[snip]\n> Then I took the suggestion to update PG's page size to 16k and did the\n> same increase on sort_mem and checkpoint_segments as above. 
I also\n> halved the shared_buffers and max_fsm_pages (probably should have\n> halved the effective_cache_size too...)\n> \n> restore time: 11322 seconds\n> vacuum analyze time: 27 minutes\n> select count(*) from user_list where owner_id=315; 48267.66 ms\n> \n> \n> Granted, given this simple test it is hard to say whether the 16k\n> blocks will make an improvement under live load, but I'm gonna give it\n> a shot. The 16k block size shows me roughly 2-6% improvement on these\n> tests.\n> \n> So throw in my vote for 16k blocks on FreeBSD (and annotate the docs\n> to tell which parameters need to be halved to account for it).\n\nI haven't had a chance to run any tests yet (ELIFE), but there was a\nsuggestion that 32K blocks was a better performer than 16K blocks\n(!!??!!??). I'm not sure why this is and my only guess is that it\nrelies more heavily on the disk cache to ease IO. Since you have the\nhardware setup, Vivek, would it be possible for you to run a test with\n32K blocks?\n\nI've started writing a threaded benchmarking program called pg_crush\nthat I hope to post here in a few days that'll time connection startup\ntimes, INSERTs, DELETEs, UPDATEs, and both sequential scans as well as\nindex scans for random and sequentially ordered tuples. It's similar\nto pgbench, except it generates its own data, uses pthreads (chears on\nKSE!), and returns more fine grained timing information for the\nvarious activities.\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Wed, 3 Sep 2003 12:35:51 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": ">>>>> \"SC\" == Sean Chittenden <[email protected]> writes:\n\nSC> hardware setup, Vivek, would it be possible for you to run a test with\nSC> 32K blocks?\n\nWill do. What's another 4 hours... ;-)\n\nI guess I'll halve the buffer size parameters again...\n\nSC> I've started writing a threaded benchmarking program called pg_crush\nSC> that I hope to post here in a few days that'll time connection startup\n\nOk. Please post it when it is ready. I've decided to wait until 7.4\nis final before going to production so I've got this very expensive\nvery fast box doing not much of anything for a little while...\n", "msg_date": "Wed, 3 Sep 2003 15:41:25 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": "\n\nJust curious, but Bruce(?) mentioned that apparently a 32k block size was\nfound to show a 15% improvement ... care to run one more test? :)\n\nOn Wed, 3 Sep 2003, Vivek Khera wrote:\n\n> Ok... simple tests have completed. 
Here are some numbers.\n>\n> FreeBSD 4.8\n> PG 7.4b2\n> 4GB Ram\n> Dual Xeon 2.4GHz processors\n> 14 U320 SCSI disks attached to Dell PERC3/DC RAID controller in RAID 5\n> config with 32k stripe size\n>\n> Dump file:\n> -rw-r--r-- 1 vivek wheel 1646633745 Aug 28 11:01 19-Aug-2003.dump\n>\n> When restored (after deleting one index that took up ~1Gb -- turned\n> out it was redundant to another multi-column index):\n>\n> % df -k /u/d02\n> Filesystem 1K-blocks Used Avail Capacity Mounted on\n> /dev/amrd1s1e 226408360 18067260 190228432 9% /u/d02\n>\n>\n>\n> postgresql.conf alterations from standard:\n> shared_buffers = 60000\n> sort_mem = 8192\n> vacuum_mem=131702\n> max_fsm_pages=1000000\n> effective_cache_size=25600\n> random_page-cost = 2\n>\n>\n> restore time: 14777 seconds\n> vacuum analyze time: 30 minutes\n> select count(*) from user_list where owner_id=315; 50388.64 ms\n>\n>\n> the restore complained often about checkpoints occurring every few\n> seconds:\n>\n> Sep 2 11:57:14 d02 postgres[49721]: [5-1] LOG: checkpoints are occurring too frequently (15 seconds apart)\n> Sep 2 11:57:14 d02 postgres[49721]: [5-2] HINT: Consider increasing CHECKPOINT_SEGMENTS.\n>\n> The HINT threw me off since I had to set checkpoint_segments in\n> postgresql.conf, where as CHECKPOINT_SEGMENTS implied to me a\n> compile-time constant.\n>\n> Anyhow, so I deleted the PG data directory, and made these two\n> changes:\n>\n> checkpoint_segments=50\n> sort_mem = 131702\n>\n> This *really* improved the time for the restore:\n>\n> restore time: 11594 seconds\n>\n> then I reset the checkpoint_segments and sort_mem back to old\n> values...\n>\n> vacuum analyze time is still 30 minutes\n> select count(*) from user_list where owner_id=315; 51363.98 ms\n>\n> so the select appears a bit slower but it is hard to say why. the\n> system is otherwise idle as it is not in production yet.\n>\n>\n> Then I took the suggestion to update PG's page size to 16k and did the\n> same increase on sort_mem and checkpoint_segments as above. I also\n> halved the shared_buffers and max_fsm_pages (probably should have\n> halved the effective_cache_size too...)\n>\n> restore time: 11322 seconds\n> vacuum analyze time: 27 minutes\n> select count(*) from user_list where owner_id=315; 48267.66 ms\n>\n>\n> Granted, given this simple test it is hard to say whether the 16k\n> blocks will make an improvement under live load, but I'm gonna give it\n> a shot. The 16k block size shows me roughly 2-6% improvement on these\n> tests.\n>\n> So throw in my vote for 16k blocks on FreeBSD (and annotate the docs\n> to tell which parameters need to be halved to account for it).\n>\n>\n> --\n> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\n> Vivek Khera, Ph.D. Khera Communications, Inc.\n> Internet: [email protected] Rockville, MD +1-240-453-8497\n> AIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n", "msg_date": "Wed, 3 Sep 2003 17:32:37 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": "Vivek Khera wrote:\n> the restore complained often about checkpoints occurring every few\n> seconds:\n> \n> Sep 2 11:57:14 d02 postgres[49721]: [5-1] LOG: checkpoints are occurring too frequently (15 seconds apart)\n> Sep 2 11:57:14 d02 postgres[49721]: [5-2] HINT: Consider increasing CHECKPOINT_SEGMENTS.\n> \n> The HINT threw me off since I had to set checkpoint_segments in\n> postgresql.conf, where as CHECKPOINT_SEGMENTS implied to me a\n> compile-time constant.\n\nWoo hoo, my warning worked. Great.\n\nI uppercased it because config parameters are uppercased in the\ndocumentation. Do we mention config parameters in any other error\nmessages? Should it be lowercased?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 3 Sep 2003 19:29:24 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": "> I uppercased it because config parameters are uppercased in the\n> documentation. Do we mention config parameters in any other error\n> messages? Should it be lowercased?\n\nHow about changing the hint?\n\nConsider increasing CHECKPOINT_SEGMENTS in your postgresql.conf", "msg_date": "Wed, 03 Sep 2003 19:49:55 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": "\n\nOn Wed, 3 Sep 2003, Bruce Momjian wrote:\n\n> Vivek Khera wrote:\n> > the restore complained often about checkpoints occurring every few\n> > seconds:\n> >\n> > Sep 2 11:57:14 d02 postgres[49721]: [5-1] LOG: checkpoints are occurring too frequently (15 seconds apart)\n> > Sep 2 11:57:14 d02 postgres[49721]: [5-2] HINT: Consider increasing CHECKPOINT_SEGMENTS.\n> >\n> > The HINT threw me off since I had to set checkpoint_segments in\n> > postgresql.conf, where as CHECKPOINT_SEGMENTS implied to me a\n> > compile-time constant.\n>\n> Woo hoo, my warning worked. Great.\n>\n> I uppercased it because config parameters are uppercased in the\n> documentation. Do we mention config parameters in any other error\n> messages? Should it be lowercased?\n\nk, to me upper case denotes a compiler #define, so I would have been\nconfused ... I'd go with lower case and single quotes around it to denote\nits a variable to be changed ...\n", "msg_date": "Wed, 3 Sep 2003 21:04:58 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Wed, 3 Sep 2003, Bruce Momjian wrote:\n> \n> > Vivek Khera wrote:\n> > > the restore complained often about checkpoints occurring every few\n> > > seconds:\n> > >\n> > > Sep 2 11:57:14 d02 postgres[49721]: [5-1] LOG: checkpoints are occurring too frequently (15 seconds apart)\n> > > Sep 2 11:57:14 d02 postgres[49721]: [5-2] HINT: Consider increasing CHECKPOINT_SEGMENTS.\n> > >\n> > > The HINT threw me off since I had to set checkpoint_segments in\n> > > postgresql.conf, where as CHECKPOINT_SEGMENTS implied to me a\n> > > compile-time constant.\n> >\n> > Woo hoo, my warning worked. Great.\n> >\n> > I uppercased it because config parameters are uppercased in the\n> > documentation. Do we mention config parameters in any other error\n> > messages? 
Should it be lowercased?\n> \n> k, to me upper case denotes a compiler #define, so I would have been\n> confused ... I'd go with lower case and single quotes around it to denote\n> its a variable to be changed ...\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 3 Sep 2003 20:36:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": ">>>>> \"MGF\" == Marc G Fournier <[email protected]> writes:\n\nMGF> Just curious, but Bruce(?) mentioned that apparently a 32k block size was\nMGF> found to show a 15% improvement ... care to run one more test? :)\n\n\nWell, it is hard to tell based on my quick and dirty test:\n\n16k page size:\nrestore time: 11322 seconds\nvacuum analyze time: 1663 seconds (27 minutes)\nselect count(*) from user_list where owner_id=315; 56666.64 ms\n\n\n\n32k page size:\nrestore time: 11430 seconds\nvacuum analyze time: 1346 seconds\nselect count(*) from user_list where owner_id=315; 63275.73 ms\n\n\none anomaly I note is that if I re-run the select count(*) query\nabove, the large the page size, the longer the query takes. In the\nstandard 8k page size, it was on the order of 306ms, with 16k page\nsize it was over 1400, and with 32k page size nearly 3000ms.\n\nAnother anomaly I note is that for the larger indexes, the relpages\ndoesn't scale as expected. ie, I'd expect roughly half the relpages\nper index for 32k page size as for 16k page size, but this is not\nalways the case... some are about the same size and some are about 2/3\nand some are about 1/2. The smaller indexes are often the same number\nof pages (when under 20 pages).\n\n\nI think I'm going to write a synthetic load generator that does a\nbunch of inserts to some linked tables with several indexes, then goes\nthru and pounds on it (update/select) from multiple children with\noccasional vacuum's thrown in. That's the only way to get 'real'\nnumbers, it seems.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 04 Sep 2003 12:09:40 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": "Vivek Khera wrote:\n> >>>>> \"MGF\" == Marc G Fournier <[email protected]> writes:\n> \n> MGF> Just curious, but Bruce(?) mentioned that apparently a 32k block size was\n> MGF> found to show a 15% improvement ... care to run one more test? :)\n> \n> \n> Well, it is hard to tell based on my quick and dirty test:\n\nThe 32k number is from Tatsuo testing a few years ago.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 4 Sep 2003 14:17:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": ">>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n\n\nBM> The 32k number is from Tatsuo testing a few years ago.\n\nCan you verify for me that these parameters in postgresql.conf are\nbased on the BLCKSZ (ie one buffer is the size of the BLCKSIZ macro):\n\nshared_buffers\neffective_cache_size\n\nLogically it makes sense, but I want to be sure I'm adjusting my\npostgresql.conf parameters accordingly when I try different block\nsizes.\n\nThanks.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Thu, 04 Sep 2003 16:56:09 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": "On Wed, 3 Sep 2003, Vivek Khera wrote:\n\n> >>>>> \"SC\" == Sean Chittenden <[email protected]> writes:\n> \n> >> I need to step in and do 2 things:\n> SC> Thanks for posting that. Let me know if you have any questions while\n> SC> doing your testing. I've found that using 16K blocks on FreeBSD\n> SC> results in about an 8% speedup in writes to the database, fwiw.\n> \n> Just double checking: if I do this, then I need to halve the\n> parameters in postgresql.conf that involve buffers, specifically,\n> max_fsm_pages and shared_buffers. I think max_fsm_pages should be\n> adjusted since the number of pages in the system overall has been\n> halved.\n> \n> Anything else that should be re-tuned for this?\n\nYes, effective_cache_size as well is measured in pgsql blocks.\n\n", "msg_date": "Thu, 4 Sep 2003 15:56:22 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size (was Re: The results of my" }, { "msg_contents": "Vivek Khera wrote:\n> >>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n> \n> \n> BM> The 32k number is from Tatsuo testing a few years ago.\n> \n> Can you verify for me that these parameters in postgresql.conf are\n> based on the BLCKSZ (ie one buffer is the size of the BLCKSIZ macro):\n> \n> shared_buffers\n> effective_cache_size\n> \n> Logically it makes sense, but I want to be sure I'm adjusting my\n> postgresql.conf parameters accordingly when I try different block\n> sizes.\n\nUh, yes, I think they have to be the same because they are pages in the\nshared buffer cache, not disk blocks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 4 Sep 2003 18:12:46 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" }, { "msg_contents": "Vivek Khera wrote:\n> >>>>> \"BM\" == Bruce Momjian <[email protected]> writes:\n> \n> \n> BM> The 32k number is from Tatsuo testing a few years ago.\n> \n> Can you verify for me that these parameters in postgresql.conf are\n> based on the BLCKSZ (ie one buffer is the size of the BLCKSIZ macro):\n> \n> shared_buffers\n> effective_cache_size\n> \n> Logically it makes sense, but I want to be sure I'm adjusting my\n> postgresql.conf parameters accordingly when I try different block\n> sizes.\n\nAlso, to check, you can use ipcs to see the shared memory sizes\nallocated.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 4 Sep 2003 18:13:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FreeBSD page size" } ]
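For anyone who wants to try the 16K-block builds that Sean and Vivek discuss above, the following is a minimal sketch rather than Sean's actual patch. It assumes a 7.4-era source tree where the page size is the BLCKSZ define in src/include/pg_config_manual.h; the source path, install prefix, and the postgresql.conf figures (taken from Vivek's settings earlier in the thread) are placeholders to adapt.

#!/bin/sh
# Rebuild PostgreSQL with 16K pages instead of the default 8K.
# /usr/local/src/postgresql-7.4 and /usr/local/pgsql are placeholder paths.
cd /usr/local/src/postgresql-7.4

# BLCKSZ is a compile-time constant; bump it from 8192 to 16384.
sed 's/#define BLCKSZ[[:space:]]*8192/#define BLCKSZ 16384/' \
    src/include/pg_config_manual.h > /tmp/pg_config_manual.h
mv /tmp/pg_config_manual.h src/include/pg_config_manual.h

./configure --prefix=/usr/local/pgsql && make && make install

# The on-disk page size changed, so a fresh initdb and reload are required.
# Halve the settings that are counted in pages to keep memory use the same:
#   shared_buffers       = 30000     (was 60000)
#   max_fsm_pages        = 500000    (was 1000000)
#   effective_cache_size = 12800     (was 25600)

Given that the measured gain was only about 2-6% for 16K and the 32K runs were mixed, it seems worth repeating the comparison under a concurrent load before standardizing on a non-default block size.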
[ { "msg_contents": "Hi all,\n\nHopefully a quick query. \n\nIs there anything special I should look at when configuring either\nPostgreSQL and Linux to work together on an 8-way Intel P-III based system?\n\nI currently have the database system running fairly nicely on a 2 CPU\nPentium-II 400 server with 512MB memory, hardware RAID with 32MB cache on\nthe controllers, unfortunately the db file systems are RAID-5 but I am not\nin a position to change that.\n\nI'm mainly interested in what I should concentrate on from a\nLinux/PostgreSQL config point of view to see if we can take advantage of the\nextra CPUs. - Yes it's overkill but this is a piece of surplus hardware we\ndon't have to buy.\n\nPerhaps some may say Linux isn't the best option for an 8 CPU server but\nthis is what I have to work with for reasons we won't get into :-)\n\nThe current usage is along the lines of a few thousands updates spread over\nthe space of a few hours in the morning then followed by a thousand select\nqueries to do some reporting.\n\nThe server also runs an apache web server for people to access but is only\nused to view html files on a fairly ad-hoc basis, nothing serious on the web\nside of things.\n\nCurrently RedHat 9 with PostgreSQL 7.3.2 installed.\n\nThanks for your time\n\nLinz\n", "msg_date": "Wed, 27 Aug 2003 14:07:56 +1000", "msg_from": "\"Castle, Lindsay\" <[email protected]>", "msg_from_op": true, "msg_subject": "8 way Intel Xeon system" }, { "msg_contents": "On 27 Aug 2003 at 14:07, Castle, Lindsay wrote:\n> I'm mainly interested in what I should concentrate on from a\n> Linux/PostgreSQL config point of view to see if we can take advantage of the\n> extra CPUs. - Yes it's overkill but this is a piece of surplus hardware we\n> don't have to buy.\n\nFirst of all whatever you do, add multiple connections using a middle layer. \nThat will keep your CPU busy. \n\nOf course there are few things need to be done over single connection but this \nshould help at least in some scenarios.\n \n> Perhaps some may say Linux isn't the best option for an 8 CPU server but\n> this is what I have to work with for reasons we won't get into :-)\n\nI think if you can afford a performance benchmark trial, give 2.6.0-testx a \ntry. They should be much better than 2.4.x.\n \n> The current usage is along the lines of a few thousands updates spread over\n> the space of a few hours in the morning then followed by a thousand select\n> queries to do some reporting.\n\nIn case of such IO intensive and update/delete heavy load, might be a good idea \nto move WAL to a separate SCSI channel. I believe merely moving it to another \ndrive would not yield as much boost.\n\n> Currently RedHat 9 with PostgreSQL 7.3.2 installed.\n\nGet 7.4CVS head. and don't forget to use a autovacuum daemon. It's in contrib \ndir.\n\nHTH\n\nBye\n Shridhar\n\n--\nOgden's Law:\tThe sooner you fall behind, the more time you have to catch up.\n\n", "msg_date": "Wed, 27 Aug 2003 12:30:33 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8 way Intel Xeon system" }, { "msg_contents": "2003-08-27 ragyogó napján Castle, Lindsay ezt üzente:\n\n> Perhaps some may say Linux isn't the best option for an 8 CPU server but\n> this is what I have to work with for reasons we won't get into :-)\n\nThis is not true, 2.4 series AFAIK run nicely on these monstrums. If you\nwant some thrill, try 2.6-test series. 
Linux Is Good For You (tm) :)\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n\n", "msg_date": "Wed, 27 Aug 2003 10:56:41 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8 way Intel Xeon system" }, { "msg_contents": "After a long battle with technology,[email protected] (Tomka Gergely), an earthling, wrote:\n> 2003-08-27 ragyogó napján Castle, Lindsay ezt üzente:\n>> Perhaps some may say Linux isn't the best option for an 8 CPU\n>> server but this is what I have to work with for reasons we won't\n>> get into :-)\n>\n> This is not true, 2.4 series AFAIK run nicely on these monstrums. If\n> you want some thrill, try 2.6-test series. Linux Is Good For You\n> (tm) :)\n\nThe other \"bleeding edge\" that it'll be interesting to see, um,\n\"coagulate,\" is Dragonfly BSD, which plans to do some really\ninteresting SMP stuff as a fork of FreeBSD...\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in String.concat \"@\" [name;tld];;\nhttp://www.ntlug.org/~cbbrowne/nonrdbms.html\n\"People are more vocally opposed to fur than leather because it's\neasier to harass rich women than motorcycle gangs.\" [bumper sticker]\n", "msg_date": "Wed, 27 Aug 2003 22:49:23 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8 way Intel Xeon system" }, { "msg_contents": "2003-08-27 ragyogó napján Christopher Browne ezt üzente:\n\n> After a long battle with technology,[email protected] (Tomka Gergely), an earthling, wrote:\n> > 2003-08-27 ragyogó napján Castle, Lindsay ezt üzente:\n> >> Perhaps some may say Linux isn't the best option for an 8 CPU\n> >> server but this is what I have to work with for reasons we won't\n> >> get into :-)\n> >\n> > This is not true, 2.4 series AFAIK run nicely on these monstrums. If\n> > you want some thrill, try 2.6-test series. Linux Is Good For You\n> > (tm) :)\n>\n> The other \"bleeding edge\" that it'll be interesting to see, um,\n> \"coagulate,\" is Dragonfly BSD, which plans to do some really\n> interesting SMP stuff as a fork of FreeBSD...\n\nAs I see the pages (what a beautiful insect :) they have not reached the limit of\nusability - or am I wrong?\n\n-- \nTomka Gergely\n\"S most - vajon barbárok nélkül mi lesz velünk?\nŐk mégiscsak megoldás voltak valahogy...\"\n\n", "msg_date": "Thu, 28 Aug 2003 08:55:55 +0200 (CEST)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8 way Intel Xeon system" } ]
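Shridhar's suggestion of moving the WAL to its own SCSI channel has no configuration option in 7.3; the usual approach is to relocate the pg_xlog directory and leave a symlink behind. A rough sketch, assuming the commands run as the postgres user, the postmaster is stopped first, and both paths are placeholders:

#!/bin/sh
# Put pg_xlog on a dedicated spindle/channel and symlink it back into PGDATA.
PGDATA=/var/lib/pgsql/data       # placeholder data directory
WALFS=/mnt/wal                   # filesystem on its own SCSI channel

pg_ctl -D "$PGDATA" stop         # the postmaster must not be running

mv "$PGDATA/pg_xlog" "$WALFS/pg_xlog"     # cp -a plus rm also works if mv refuses
ln -s "$WALFS/pg_xlog" "$PGDATA/pg_xlog"

pg_ctl -D "$PGDATA" start

On the RAID-5 array described above the gain may be modest, but it keeps the WAL's sequential writes away from the random data-file I/O, which is where Shridhar expects the benefit to come from.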
[ { "msg_contents": "\n\tHi,\n\n\tI have a (big) problem with postgresql when making lots of \ninserts per second. I have a tool that is generating an output of ~2500 \nlines per seconds. I write a script in PERL that opens a pipe to that \ntool, reads every line and inserts data.\n\tI tryed both commited and not commited variants (the inserts \nwere commited at every 60 seconds), and the problem persists.\n\n\tThe problems is that only ~15% of the lines are inserted into \nthe database. The same script modified to insert the same data in a \nsimilar table created in a MySQL database inserts 100%.\n\n\tI also dropped the indexes on various columns, just to make sure \nthat the overhead is not to big (but I also need that indexes because \nI'll make lots of SELECTs from that table).\n\tI tried both variants: connecting to a host and localy (through \npostgresql server's socket (/tmp/s.PGSQL.5432).\n\n\tWhere can be the problem?\n\n\tI'm using postgresql 7.4 devel snapshot 20030628 and 20030531. \nSome of the settings are:\n\nshared_buffers = 520\nmax_locks_per_transaction = 128\nwal_buffers = 8 \nmax_fsm_relations = 30000\nmax_fsm_pages = 482000 \nsort_mem = 131072\nvacuum_mem = 131072\neffective_cache_size = 10000\nrandom_page_cost = 2\n\n-- \nAny views or opinions presented within this e-mail are solely those of\nthe author and do not necessarily represent those of any company, unless\notherwise expressly stated.\n", "msg_date": "Wed, 27 Aug 2003 15:50:32 +0300 (EEST)", "msg_from": "Tarhon-Onu Victor <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql inserts problem" }, { "msg_contents": "On Wed, Aug 27, 2003 at 15:50:32 +0300,\n Tarhon-Onu Victor <[email protected]> wrote:\n> \n> \tThe problems is that only ~15% of the lines are inserted into \n> the database. The same script modified to insert the same data in a \n> similar table created in a MySQL database inserts 100%.\n\nDid you check the error status for the records that weren't entered?\n\nMy first guess is that you have some bad data you are trying to insert.\n", "msg_date": "Wed, 27 Aug 2003 22:23:56 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgsql inserts problem" }, { "msg_contents": "> > The problems is that only ~15% of the lines are inserted into\n> > the database. The same script modified to insert the same data in a\n> > similar table created in a MySQL database inserts 100%.\n>\n> Did you check the error status for the records that weren't entered?\n>\n> My first guess is that you have some bad data you are trying to insert.\n\nI wouldn't be surprised, MySQL will just insert a zero instead of failing in\nmost cases :P\n\nChris\n\n", "msg_date": "Thu, 28 Aug 2003 12:01:37 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgsql inserts problem" }, { "msg_contents": "On Wed, 27 Aug 2003, Bruno Wolff III wrote:\n\n> Did you check the error status for the records that weren't entered?\n> \n> My first guess is that you have some bad data you are trying to insert.\n\n\tOf course, I checked the error status for every insert, there is \nno error. 
It seems like in my case the postgres server cannot handle so \nmuch inserts per second some of the lines are not being parsed and data \ninserted into the database.\n\tI don't know where can be the problem: in the DBD::Pg Perl DBI\ndriver or my postgresql server settings are not optimal.\n\n-- \nAny views or opinions presented within this e-mail are solely those of\nthe author and do not necessarily represent those of any company, unless\notherwise expressly stated.\n", "msg_date": "Thu, 28 Aug 2003 09:29:03 +0300 (EEST)", "msg_from": "Tarhon-Onu Victor <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql inserts problem" }, { "msg_contents": "On 27 Aug 2003 at 15:50, Tarhon-Onu Victor wrote:\n\n> \n> \tHi,\n> \n> \tI have a (big) problem with postgresql when making lots of \n> inserts per second. I have a tool that is generating an output of ~2500 \n> lines per seconds. I write a script in PERL that opens a pipe to that \n> tool, reads every line and inserts data.\n> \tI tryed both commited and not commited variants (the inserts \n> were commited at every 60 seconds), and the problem persists.\n\nAssuming one record per line, you are committing after 150K records, that's not \ngood.\n\nTry committing every 5 seconds. And open more than one conenction. That will \ncertainly improve performance. Afterall concurrency is biggest assset of \npostgresql.\n\nFiddle around with combination and see which works best for you.\n\nBye\n Shridhar\n\n--\nMencken and Nathan's Ninth Law of The Average American:\tThe quality of a \nchampagne is judged by the amount of noise the\tcork makes when it is popped.\n\n", "msg_date": "Thu, 28 Aug 2003 12:22:02 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql inserts problem" }, { "msg_contents": "> Of course, I checked the error status for every insert, there is\n> no error. It seems like in my case the postgres server cannot handle so\n> much inserts per second some of the lines are not being parsed and data\n> inserted into the database.\n\nThat sounds extremely unlikely. Postgres is not one to fail without any\nsort of error. There's something else that is the problem. More than\nlikely, it's a problem in your code.\n\nChris\n\n", "msg_date": "Thu, 28 Aug 2003 15:34:12 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql inserts problem" }, { "msg_contents": "On Wednesday 27 August 2003 13:50, Tarhon-Onu Victor wrote:\n>\n> shared_buffers = 520\n> max_locks_per_transaction = 128\n> wal_buffers = 8\n> max_fsm_relations = 30000\n> max_fsm_pages = 482000\n> sort_mem = 131072\n> vacuum_mem = 131072\n> effective_cache_size = 10000\n> random_page_cost = 2\n\nSlightly off-topic, but I think your tuning settings are a bit out. You've got \n4MB allocated to shared_buffers but 128MB allocated to sort_mem? And only \n80MB to effective_cache_size? Your settings might be right, but you'd need a \nvery strange set of circumstances.\n\nAs for PG silently discarding inserts, your best bet might be to write a short \nPerl script to reproduce the problem. Without that, people are likely to be \nsceptical - if PG tended to do this sort of thing, none of us would use it.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 28 Aug 2003 13:56:49 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgsql inserts problem" } ]
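The two suggestions above, checking every statement's status and committing in much smaller batches, are easy to sanity-check outside the Perl script. The harness below is hypothetical (scratch database name, throwaway table) and only measures the commit-frequency side; in the script itself the equivalent changes are enabling RaiseError (or testing every execute) and committing every few seconds rather than every 60.

#!/bin/sh
# Compare one commit per INSERT with one commit per 10000-row batch.
# 'test' and the insert_test table are placeholders.
psql -q test -c "CREATE TABLE insert_test (id integer);"

awk 'BEGIN { for (i = 0; i < 10000; i++) printf "INSERT INTO insert_test VALUES (%d);\n", i }' > /tmp/inserts.sql

echo "=== one commit per statement ==="
time psql -q test -f /tmp/inserts.sql

echo "=== one commit for the whole batch ==="
( echo "BEGIN;"; cat /tmp/inserts.sql; echo "COMMIT;" ) > /tmp/inserts_batch.sql
time psql -q test -f /tmp/inserts_batch.sql

If rows really are disappearing without any reported error, that is far more likely to be an unchecked failure on the client side than the server silently dropping data, as Bruno and Christopher point out.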
[ { "msg_contents": "I'm running into some performance problems trying to execute simple queries.\n\npostgresql version 7.3.3\n.conf params changed from defaults.\nshared_buffers = 64000\nsort_mem = 64000\nfsync = false\neffective_cache_size = 400000\n\nex. query: select * from x where id in (select id from y);\n \nThere's an index on each table for id. SQL Server takes <1s to return, \npostgresql doesn't return at all, neither does explain analyze.\nx has 1200673 rows\ny has 1282 rows\n\nIt seems like its ignoring the index and not using enough memory.. any \nideas?\n\n\n", "msg_date": "Wed, 27 Aug 2003 09:55:59 -0400", "msg_from": "Michael Guerin <[email protected]>", "msg_from_op": true, "msg_subject": "Simple queries take forever to run" } ]
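The query above hits the known 7.3 IN(subquery) weakness that comes up again later in this archive. The usual rewrites look like this; only the x, y and id names are taken from the post, the rest of the schema is assumed.

-- EXISTS avoids the slow 7.3 IN-subquery plan, though it still visits every
-- row of x and probes y's index once per row:
select *
from x
where exists (select 1 from y where y.id = x.id);

-- When x is far larger than y (1.2 million rows versus 1282 here), driving
-- from the small table usually wins: scan y once and probe x's index for each
-- id. The DISTINCT keeps the semantics of IN in case y.id can repeat.
select x.*
from x
join (select distinct id from y) as matching on matching.id = x.id;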
[ { "msg_contents": "Hi all,\n\nI did some benchmarking using pgbench and postgresql CVS head, compiled \nyesterday. \n\nThe results are attached. It looks like 2.6.0-test4 does better under load but \nunder light load the performance isn't that great. OTOH 2.4.20 suffer major \ndegradation compare to 2.6. Looks like linux is also getting heavy at lower \nend. Of course it isn't as bad as solaris as yet..:-)\n\nIIRC in a kernel release note recently, it was commented that IO scheduler is \nstill being worked on and does not perform as much for random seeks, which \nexaclty what database needs.\n\nHow does these number stack up with other platforms? Anybody with SCSI disks \nout there? I doubt IDE has some role to play with this.\n\nComments?\n\n Shridhar", "msg_date": "Wed, 27 Aug 2003 21:02:25 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": true, "msg_subject": "Comparing postgresql7.4 CVS head on linux 2.4.20 and 2.6.0-test4" }, { "msg_contents": "On Wed, Aug 27, 2003 at 09:02:25PM +0530, Shridhar Daithankar wrote:\n> IIRC in a kernel release note recently, it was commented that IO scheduler is \n> still being worked on and does not perform as much for random seeks, which \n> exaclty what database needs.\n\nYeah, I've read that as well. It would be interesting to see how 2.6\nperforms with the traditional (non-anticipatory) scheduler -- I believe\nyou can switch from one I/O scheduler to another via a sysctl.\n\n> pgbench -c10 -t100 test1\n> tps = 64.917044 (including connections establishing)\n> tps = 65.438067 (excluding connections establishing)\n\nInteresting that the performance of 2.4.20 for this particular\nbenchmark is a little less than 3 times faster than 2.6\n\n> 3) Shared buffers 3000\n> \n> pgbench -c5 -t100 test\n> tps = 132.489569 (including connections establishing)\n> tps = 135.177003 (excluding connections establishing)\n> \n> pgbench -c5 -t1000 test\n> tps = 70.272855 (including connections establishing)\n> tps = 70.343452 (excluding connections establishing)\n> \n> pgbench -c10 -t100 test\n> tps = 121.624524 (including connections establishing)\n> tps = 123.549086 (excluding connections establishing)\n\n[...] 
\n\n> 4) noatime enabled Shared buffers 3000\n> \n> pgbench -c5 -t100 test\n> tps = 90.850600 (including connections establishing)\n> tps = 92.053686 (excluding connections establishing)\n> \n> pgbench -c5 -t1000 test\n> tps = 92.209724 (including connections establishing)\n> tps = 92.329682 (excluding connections establishing)\n> \n> pgbench -c10 -t100 test\n> tps = 79.264231 (including connections establishing)\n> tps = 80.145448 (excluding connections establishing)\n\nI'm a little skeptical of the consistency of these numbers\n(several people have observed in the past that it's difficult\nto get pgbench to produce reliable results) -- how is it\npossible that using noatime can possibly *reduce* performance\nby 50%, in the case of the first and third benchmarks?\n\n-Neil\n\n", "msg_date": "Wed, 27 Aug 2003 19:00:14 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparing postgresql7.4 CVS head on linux 2.4.20 and 2.6.0-test4" }, { "msg_contents": "On 27 Aug 2003 at 19:00, Neil Conway wrote:\n\n> On Wed, Aug 27, 2003 at 09:02:25PM +0530, Shridhar Daithankar wrote:\n> > IIRC in a kernel release note recently, it was commented that IO scheduler is \n> > still being worked on and does not perform as much for random seeks, which \n> > exaclty what database needs.\n> \n> Yeah, I've read that as well. It would be interesting to see how 2.6\n> performs with the traditional (non-anticipatory) scheduler -- I believe\n> you can switch from one I/O scheduler to another via a sysctl.\n\nI will repeat the tests after get that setting. Will google for it..\n\n> \n> > pgbench -c10 -t100 test1\n> > tps = 64.917044 (including connections establishing)\n> > tps = 65.438067 (excluding connections establishing)\n> \n> Interesting that the performance of 2.4.20 for this particular\n> benchmark is a little less than 3 times faster than 2.6\n\nYeah but 2.4 drops like anything..\n> > 4) noatime enabled Shared buffers 3000\n> > \n> > pgbench -c5 -t100 test\n> > tps = 90.850600 (including connections establishing)\n> > tps = 92.053686 (excluding connections establishing)\n> > \n> > pgbench -c5 -t1000 test\n> > tps = 92.209724 (including connections establishing)\n> > tps = 92.329682 (excluding connections establishing)\n> > \n> > pgbench -c10 -t100 test\n> > tps = 79.264231 (including connections establishing)\n> > tps = 80.145448 (excluding connections establishing)\n> \n> I'm a little skeptical of the consistency of these numbers\n> (several people have observed in the past that it's difficult\n> to get pgbench to produce reliable results) -- how is it\n> possible that using noatime can possibly *reduce* performance\n> by 50%, in the case of the first and third benchmarks?\n\nI know. I am puzzled too. Probably I didn't put noatime properly in /etc/fstab. \nUnfortunately I have only one linux partition. So I prefer to boot rather than \nremounting root.\n\nI will redo the bechmarks and post the results..\n\nBye\n Shridhar\n\n--\nLaw of Communications:\tThe inevitable result of improved and enlarged \ncommunications\tbetween different levels in a hierarchy is a vastly increased\t\narea of misunderstanding.\n\n", "msg_date": "Thu, 28 Aug 2003 12:23:54 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Comparing postgresql7.4 CVS head on linux 2.4.20 and 2.6.0-test4" } ]
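Given the doubts above about how repeatable these pgbench numbers are, it helps to confirm from inside each server that a run really used the intended settings, so a kernel comparison does not quietly become a configuration comparison. A minimal checklist, assuming the standard pgbench tables of that era (accounts, branches, tellers, history):

-- Record what the run actually used, not what was intended:
show shared_buffers;
show fsync;
select version();

-- Between runs, pgbench leaves behind the rows it wrote; emptying history is a
-- light reset (a full reset still needs re-initialising with pgbench -i).
truncate table history;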
[ { "msg_contents": "Someone who has my:\n\n [email protected]\n\nemail address has an infected computer, infected with the SoBig.F virus. \nI'm getting 200+ infected emails a day from that person(s).\n\nGo to this site and do a free online virus scan. It's safe, and done by \none of the two top virus scanning companies in world. I've done it \nseveral times.\n\nhttp://housecall.antivirus.com/\n\n-- \nDennis Gearon\n\n", "msg_date": "Wed, 27 Aug 2003 09:57:36 -0700", "msg_from": "Dennis Gearon <[email protected]>", "msg_from_op": true, "msg_subject": "Please scan your computer" } ]
[ { "msg_contents": "I'm running into some performance problems trying to execute simple \nqueries.\n\npostgresql version 7.3.3\n.conf params changed from defaults.\nshared_buffers = 64000\nsort_mem = 64000\nfsync = false\neffective_cache_size = 400000\n\nex. query: select * from x where id in (select id from y);\n\nThere's an index on each table for id. SQL Server takes <1s to return, \npostgresql doesn't return at all, neither does explain analyze.\nx has 1200673 rows\ny has 1282 rows\n\nIt seems like its ignoring the index and not using enough memory.. any \nideas?\n\n", "msg_date": "Wed, 27 Aug 2003 17:40:05 -0400", "msg_from": "Michael Guerin <[email protected]>", "msg_from_op": true, "msg_subject": "Simple queries take forever to run" }, { "msg_contents": "On Wed, Aug 27, 2003 at 05:40:05PM -0400, Michael Guerin wrote:\n> ex. query: select * from x where id in (select id from y);\n> \n> There's an index on each table for id. SQL Server takes <1s to return, \n> postgresql doesn't return at all, neither does explain analyze.\n\nThis particular form of query is a known performance problem for PostgreSQL\n7.3 and earlier -- the problem should hopefully be fixed in 7.4 (currently\nin beta). Check the archives for more discussion on this topic.\n\n-Neil\n\n", "msg_date": "Wed, 27 Aug 2003 20:11:19 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple queries take forever to run" }, { "msg_contents": "On Wed, 27 Aug 2003, Michael Guerin wrote:\n\n> I'm running into some performance problems trying to execute simple\n> queries.\n>\n> postgresql version 7.3.3\n> .conf params changed from defaults.\n> shared_buffers = 64000\n> sort_mem = 64000\n> fsync = false\n> effective_cache_size = 400000\n>\n> ex. query: select * from x where id in (select id from y);\n>\n> There's an index on each table for id. SQL Server takes <1s to return,\n> postgresql doesn't return at all, neither does explain analyze.\n\nIN(subquery) is known to run poorly in 7.3.x and earlier. 7.4 is\ngenerally much better (for reasonably sized subqueries) but in earlier\nversions you'll probably want to convert into an EXISTS or join form.\n\n", "msg_date": "Wed, 27 Aug 2003 18:22:23 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple queries take forever to run" }, { "msg_contents": "> postgresql version 7.3.3\n> .conf params changed from defaults.\n> shared_buffers = 64000\n> sort_mem = 64000\n> fsync = false\n> effective_cache_size = 400000\n>\n> ex. query: select * from x where id in (select id from y);\n>\n> There's an index on each table for id. SQL Server takes <1s to return,\n> postgresql doesn't return at all, neither does explain analyze.\n> x has 1200673 rows\n> y has 1282 rows\n>\n> It seems like its ignoring the index and not using enough memory.. any\n> ideas?\n\nThis is a known problem in 7.3, it is much faster in 7.4b1. 
This should be\nvery, very fast though, and do exactly the same thing:\n\nselect * from x where exists (select id from y where y.id=x.id);\n\nChris\n\n", "msg_date": "Thu, 28 Aug 2003 09:29:01 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple queries take forever to run" }, { "msg_contents": "Stephan Szabo wrote:\n\n>On Wed, 27 Aug 2003, Michael Guerin wrote:\n>\n> \n>\n>>I'm running into some performance problems trying to execute simple\n>>queries.\n>>\n>>postgresql version 7.3.3\n>>.conf params changed from defaults.\n>>shared_buffers = 64000\n>>sort_mem = 64000\n>>fsync = false\n>>effective_cache_size = 400000\n>>\n>>ex. query: select * from x where id in (select id from y);\n>>\n>>There's an index on each table for id. SQL Server takes <1s to return,\n>>postgresql doesn't return at all, neither does explain analyze.\n>> \n>>\n>\n>IN(subquery) is known to run poorly in 7.3.x and earlier. 7.4 is\n>generally much better (for reasonably sized subqueries) but in earlier\n>versions you'll probably want to convert into an EXISTS or join form.\n>\n>\n> \n>\nSomething else seems to be going on, even switching to an exists clause \ngives much better but poor performance.\ncount(*) where exists clause: Postgresql 19s, SQL Server <1s\ncount(*) where not exists: 23.3s SQL Server 1.5s\n\nSQL Server runs on a dual 1.4 with 4gigs, win2k\nPostgresql runs on a quad 900 with 8 gigs, sunos 5.8\n\n\n\n", "msg_date": "Thu, 28 Aug 2003 10:38:07 -0400", "msg_from": "Michael Guerin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple queries take forever to run" }, { "msg_contents": "On 28 Aug 2003 at 10:38, Michael Guerin wrote:\n> >IN(subquery) is known to run poorly in 7.3.x and earlier. 7.4 is\n> >generally much better (for reasonably sized subqueries) but in earlier\n> >versions you'll probably want to convert into an EXISTS or join form.\n> Something else seems to be going on, even switching to an exists clause \n> gives much better but poor performance.\n> count(*) where exists clause: Postgresql 19s, SQL Server <1s\n> count(*) where not exists: 23.3s SQL Server 1.5s\n\nThis was with 7.4? Can you try downloading 7.4CVS and try?\n\n> \n> SQL Server runs on a dual 1.4 with 4gigs, win2k\n> Postgresql runs on a quad 900 with 8 gigs, sunos 5.8\n\nSunOS...Not the impala out there but anyways I would refrain from slipping in \nthat..\n\nParden me if this is a repeatation, have you set your effective cache size?\n\nBye\n Shridhar\n\n--\nNouvelle cuisine, n.:\tFrench for \"not enough food\".Continental breakfast, n.:\t\nEnglish for \"not enough food\".Tapas, n.:\tSpanish for \"not enough food\".Dim Sum, \nn.:\tChinese for more food than you've ever seen in your entire life.\n\n", "msg_date": "Thu, 28 Aug 2003 20:15:40 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple queries take forever to run" }, { "msg_contents": "On Thu, 28 Aug 2003, Michael Guerin wrote:\n\n> Stephan Szabo wrote:\n>\n> >On Wed, 27 Aug 2003, Michael Guerin wrote:\n> >\n> >\n> >\n> >>I'm running into some performance problems trying to execute simple\n> >>queries.\n> >>\n> >>postgresql version 7.3.3\n> >>.conf params changed from defaults.\n> >>shared_buffers = 64000\n> >>sort_mem = 64000\n> >>fsync = false\n> >>effective_cache_size = 400000\n> >>\n> >>ex. query: select * from x where id in (select id from y);\n> >>\n> >>There's an index on each table for id. 
SQL Server takes <1s to return,\n> >>postgresql doesn't return at all, neither does explain analyze.\n> >>\n> >>\n> >\n> >IN(subquery) is known to run poorly in 7.3.x and earlier. 7.4 is\n> >generally much better (for reasonably sized subqueries) but in earlier\n> >versions you'll probably want to convert into an EXISTS or join form.\n> >\n> >\n> >\n> >\n> Something else seems to be going on, even switching to an exists clause\n> gives much better but poor performance.\n> count(*) where exists clause: Postgresql 19s, SQL Server <1s\n> count(*) where not exists: 23.3s SQL Server 1.5s\n\nWhat does explain analyze show for the two queries?\n\n", "msg_date": "Thu, 28 Aug 2003 08:19:53 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple queries take forever to run" }, { "msg_contents": "Stephan Szabo wrote:\n\n>On Thu, 28 Aug 2003, Michael Guerin wrote:\n>\n> \n>\n>>Stephan Szabo wrote:\n>>\n>> \n>>\n>>>On Wed, 27 Aug 2003, Michael Guerin wrote:\n>>>\n>>>\n>>>\n>>> \n>>>\n>>>>I'm running into some performance problems trying to execute simple\n>>>>queries.\n>>>>\n>>>>postgresql version 7.3.3\n>>>>.conf params changed from defaults.\n>>>>shared_buffers = 64000\n>>>>sort_mem = 64000\n>>>>fsync = false\n>>>>effective_cache_size = 400000\n>>>>\n>>>>ex. query: select * from x where id in (select id from y);\n>>>>\n>>>>There's an index on each table for id. SQL Server takes <1s to return,\n>>>>postgresql doesn't return at all, neither does explain analyze.\n>>>>\n>>>>\n>>>> \n>>>>\n>>>IN(subquery) is known to run poorly in 7.3.x and earlier. 7.4 is\n>>>generally much better (for reasonably sized subqueries) but in earlier\n>>>versions you'll probably want to convert into an EXISTS or join form.\n>>>\n>>>\n>>>\n>>>\n>>> \n>>>\n>>Something else seems to be going on, even switching to an exists clause\n>>gives much better but poor performance.\n>>count(*) where exists clause: Postgresql 19s, SQL Server <1s\n>>count(*) where not exists: 23.3s SQL Server 1.5s\n>> \n>>\n>\n>What does explain analyze show for the two queries?\n>\n>\n> \n>\nexplain analyze select count(*) from tbltimeseries where exists(select \nuniqid from tblobjectname where timeseriesid = uniqid);\nAggregate (cost=5681552.18..5681552.18 rows=1 width=0) (actual \ntime=22756.64..22756.64 rows=1 loops=1)\n -> Seq Scan on tbltimeseries (cost=0.00..5680051.34 rows=600336 \nwidth=0) (actual time=22.06..21686.78 rows=1200113 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using idx_objectname on tblobjectname \n(cost=0.00..4.70 rows=1 width=4) (actual time=0.01..0.01 rows=0 \nloops=1200673)\n Index Cond: ($0 = uniqid)\n Total runtime: 22756.83 msec\n(7 rows)\n\nfiasco=# explain analyze select count(*) from tbltimeseries where \nexists(select uniqid from tblobjectname where timeseriesid = uniqid);\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\nexplain analyze select count(*) from tbltimeseries where exists(select \nuniqid from tblobjectname where timeseriesid = uniqid);\n Aggregate (cost=5681552.18..5681552.18 rows=1 width=0) (actual \ntime=19558.77..19558.77 rows=1 loops=1)\n -> Seq Scan on tbltimeseries (cost=0.00..5680051.34 rows=600336 \nwidth=0) (actual time=0.21..19557.73 rows=560 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using idx_objectname on tblobjectname \n(cost=0.00..4.70 rows=1 width=4) (actual time=0.01..0.01 rows=0 \nloops=1200673)\n 
Index Cond: ($0 = uniqid)\n Total runtime: 19559.04 msec\n(7 rows)\n\n\n\n\n", "msg_date": "Thu, 28 Aug 2003 13:42:35 -0400", "msg_from": "Michael Guerin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple queries take forever to run" }, { "msg_contents": "On Thu, 28 Aug 2003, Michael Guerin wrote:\n\n> Stephan Szabo wrote:\n>\n> >On Thu, 28 Aug 2003, Michael Guerin wrote:\n> >\n> >\n> >\n> >>Stephan Szabo wrote:\n> >>\n> >>\n> >>\n> >>>On Wed, 27 Aug 2003, Michael Guerin wrote:\n> >>>\n> >>>\n> >>>\n> >>>\n> >>>\n> >>>>I'm running into some performance problems trying to execute simple\n> >>>>queries.\n> >>>>\n> >>>>postgresql version 7.3.3\n> >>>>.conf params changed from defaults.\n> >>>>shared_buffers = 64000\n> >>>>sort_mem = 64000\n> >>>>fsync = false\n> >>>>effective_cache_size = 400000\n> >>>>\n> >>>>ex. query: select * from x where id in (select id from y);\n> >>>>\n> >>>>There's an index on each table for id. SQL Server takes <1s to return,\n> >>>>postgresql doesn't return at all, neither does explain analyze.\n> >>>>\n> >>>>\n> >>>>\n> >>>>\n> >>>IN(subquery) is known to run poorly in 7.3.x and earlier. 7.4 is\n> >>>generally much better (for reasonably sized subqueries) but in earlier\n> >>>versions you'll probably want to convert into an EXISTS or join form.\n> >>>\n> >>>\n> >>>\n> >>>\n> >>>\n> >>>\n> >>Something else seems to be going on, even switching to an exists clause\n> >>gives much better but poor performance.\n> >>count(*) where exists clause: Postgresql 19s, SQL Server <1s\n> >>count(*) where not exists: 23.3s SQL Server 1.5s\n> >>\n> >>\n> >\n> >What does explain analyze show for the two queries?\n> >\n> >\n> >\n> >\n> explain analyze select count(*) from tbltimeseries where exists(select\n> uniqid from tblobjectname where timeseriesid = uniqid);\n> Aggregate (cost=5681552.18..5681552.18 rows=1 width=0) (actual\n> time=22756.64..22756.64 rows=1 loops=1)\n> -> Seq Scan on tbltimeseries (cost=0.00..5680051.34 rows=600336\n> width=0) (actual time=22.06..21686.78 rows=1200113 loops=1)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Index Scan using idx_objectname on tblobjectname\n> (cost=0.00..4.70 rows=1 width=4) (actual time=0.01..0.01 rows=0\n> loops=1200673)\n> Index Cond: ($0 = uniqid)\n> Total runtime: 22756.83 msec\n> (7 rows)\n\nHmm... 
I'd thought that it had options for a better plan than that.\n\nWhat do things like:\n\nexplain analyze select count(distinct timeseriesid) from tbltimeseries,\n tblobjectname where timeseriesid=uniquid;\n\nand\n\nexplain analyze select count(distinct timeseriesid) from\n tbltimeseries left outer join tblobjectname on (timeseriesid=uniqid)\n where uniqid is null;\n\ngive you?\n\n", "msg_date": "Thu, 28 Aug 2003 11:49:00 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple queries take forever to run" }, { "msg_contents": "Stephan Szabo wrote:\n\n>On Thu, 28 Aug 2003, Michael Guerin wrote:\n>\n> \n>\n>>Stephan Szabo wrote:\n>>\n>> \n>>\n>>>On Thu, 28 Aug 2003, Michael Guerin wrote:\n>>>\n>>>\n>>>\n>>> \n>>>\n>>>>Stephan Szabo wrote:\n>>>>\n>>>>\n>>>>\n>>>> \n>>>>\n>>>>>On Wed, 27 Aug 2003, Michael Guerin wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>> \n>>>>>\n>>>>>>I'm running into some performance problems trying to execute simple\n>>>>>>queries.\n>>>>>>\n>>>>>>postgresql version 7.3.3\n>>>>>>.conf params changed from defaults.\n>>>>>>shared_buffers = 64000\n>>>>>>sort_mem = 64000\n>>>>>>fsync = false\n>>>>>>effective_cache_size = 400000\n>>>>>>\n>>>>>>ex. query: select * from x where id in (select id from y);\n>>>>>>\n>>>>>>There's an index on each table for id. SQL Server takes <1s to return,\n>>>>>>postgresql doesn't return at all, neither does explain analyze.\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> \n>>>>>>\n>>>>>IN(subquery) is known to run poorly in 7.3.x and earlier. 7.4 is\n>>>>>generally much better (for reasonably sized subqueries) but in earlier\n>>>>>versions you'll probably want to convert into an EXISTS or join form.\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>> \n>>>>>\n>>>>Something else seems to be going on, even switching to an exists clause\n>>>>gives much better but poor performance.\n>>>>count(*) where exists clause: Postgresql 19s, SQL Server <1s\n>>>>count(*) where not exists: 23.3s SQL Server 1.5s\n>>>>\n>>>>\n>>>> \n>>>>\n>>>What does explain analyze show for the two queries?\n>>>\n>>>\n>>>\n>>>\n>>> \n>>>\n>>explain analyze select count(*) from tbltimeseries where exists(select\n>>uniqid from tblobjectname where timeseriesid = uniqid);\n>>Aggregate (cost=5681552.18..5681552.18 rows=1 width=0) (actual\n>>time=22756.64..22756.64 rows=1 loops=1)\n>> -> Seq Scan on tbltimeseries (cost=0.00..5680051.34 rows=600336\n>>width=0) (actual time=22.06..21686.78 rows=1200113 loops=1)\n>> Filter: (NOT (subplan))\n>> SubPlan\n>> -> Index Scan using idx_objectname on tblobjectname\n>>(cost=0.00..4.70 rows=1 width=4) (actual time=0.01..0.01 rows=0\n>>loops=1200673)\n>> Index Cond: ($0 = uniqid)\n>> Total runtime: 22756.83 msec\n>>(7 rows)\n>> \n>>\n>\n>Hmm... 
I'd thought that it had options for a better plan than that.\n>\n>What do things like:\n>\n>explain analyze select count(distinct timeseriesid) from tbltimeseries,\n> tblobjectname where timeseriesid=uniquid;\n>\n>and\n>\n>explain analyze select count(distinct timeseriesid) from\n> tbltimeseries left outer join tblobjectname on (timeseriesid=uniqid)\n> where uniqid is null;\n>\n>give you?\n>\n>\n> \n>\nmuch better performance:\n\nexplain analyze select count(distinct timeseriesid) from tbltimeseries,\n tblobjectname where timeseriesid=uniquid;\n\n Aggregate (cost=7384.03..7384.03 rows=1 width=8) (actual time=668.15..668.15 rows=1 loops=1)\n -> Nested Loop (cost=0.00..7380.83 rows=1282 width=8) (actual time=333.31..666.13 rows=561 loops=1)\n -> Seq Scan on tblobjectname (cost=0.00..33.82 rows=1282 width=4) (actual time=0.05..4.98 rows=1282 loops=1)\n -> Index Scan using xx on tbltimeseries (cost=0.00..5.72 rows=1 width=4) (actual time=0.51..0.51 rows=0 loops=1282)\n Index Cond: (tbltimeseries.timeseriesid = \"outer\".uniqid)\n Total runtime: 669.61 msec\n(6 rows)\n\nexplain analyze select count(distinct timeseriesid) from\n tbltimeseries left outer join tblobjectname on (timeseriesid=uniqid)\n where uniqid is null;\n\n Aggregate (cost=59144.19..59144.19 rows=1 width=8) (actual time=12699.47..12699.47 rows=1 loops=1)\n -> Hash Join (cost=37.02..56142.51 rows=1200673 width=8) (actual time=7.41..6376.12 rows=1200113 loops=1)\n Hash Cond: (\"outer\".timeseriesid = \"inner\".uniqid)\n Filter: (\"inner\".uniqid IS NULL)\n -> Seq Scan on tbltimeseries (cost=0.00..44082.73 rows=1200673 width=4) (actual time=0.01..3561.61 rows=1200673 loops=1)\n -> Hash (cost=33.82..33.82 rows=1282 width=4) (actual time=4.84..4.84 rows=0 loops=1)\n -> Seq Scan on tblobjectname (cost=0.00..33.82 rows=1282 width=4) (actual time=0.04..2.84 rows=1282 loops=1)\n Total runtime: 12699.76 msec\n(8 rows)\n\n\n\n\n\n\n", "msg_date": "Thu, 28 Aug 2003 16:07:46 -0400", "msg_from": "Michael Guerin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Simple queries take forever to run" } ]
[ { "msg_contents": "Hello,\n\nWe're running a set of Half-Life based game servers that lookup user \nprivileges from a central PostgreSQL 7.3.4 database server (I recently \nported the MySQL code in Adminmod to PostgreSQL to be able to do this).\n\nThe data needed by the game servers are combined from several different \ntables, so we have some views set up to provide the data in the format \nneeded.\n\nCurrently there's only a few users in the database for testing purposes, \nand most of the time the user lookup's take 2-3 ms (I have syslog'ing of \nqueries and duration turned on), but several times per hour the duration \nfor one of the queries is 2-3 seconds (1000 times larger), while the \nsurrounding lookups take the usual 2-3 ms.\n\nThis is rather critical, as the game server software isn't asynchonous \nand thus waits for a reply before continuing, so when someone connects, \nand the user lookup happens to have one of these very long durations, \nthe players on this server experience a major lag spike, which isn't \nvery popular :-(\n\nAll the game servers and the database server are connected to the same \nswitch, so I don't think, that it is a network problem.\n\nSo far I've been unable to locate the problem, so any suggestions are \nvery welcome.\n\nRegards,\nAnders K. Pedersen\n\n", "msg_date": "Thu, 28 Aug 2003 01:07:05 +0200", "msg_from": "\"Anders K. Pedersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Queries sometimes take 1000 times the normal time" }, { "msg_contents": "\"Anders K. Pedersen\" <[email protected]> writes:\n> Currently there's only a few users in the database for testing purposes, \n> and most of the time the user lookup's take 2-3 ms (I have syslog'ing of \n> queries and duration turned on), but several times per hour the duration \n> for one of the queries is 2-3 seconds (1000 times larger), while the \n> surrounding lookups take the usual 2-3 ms.\n\nOne thing that comes to mind is that the slow query could be occurring\nat the same time as a checkpoint, or some other cycle-chewing background\noperation. It's not clear why a checkpoint would slow things down that\nmuch, though. Anyway I'd suggest looking for such activities; once we\nknow if that's the issue or not, we can make some progress.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Aug 2003 19:39:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries sometimes take 1000 times the normal time " }, { "msg_contents": "> Currently there's only a few users in the database for testing purposes, \n> and most of the time the user lookup's take 2-3 ms (I have syslog'ing of \n> queries and duration turned on), but several times per hour the duration \n> for one of the queries is 2-3 seconds (1000 times larger), while the \n> surrounding lookups take the usual 2-3 ms.\n\nAre there any other jobs running at the time of these excessive queries?", "msg_date": "Wed, 27 Aug 2003 21:14:19 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "On 28 Aug 2003 at 1:07, Anders K. 
Pedersen wrote:\n\n> Hello,\n> \n> We're running a set of Half-Life based game servers that lookup user \n> privileges from a central PostgreSQL 7.3.4 database server (I recently \n> ported the MySQL code in Adminmod to PostgreSQL to be able to do this).\n> \n> The data needed by the game servers are combined from several different \n> tables, so we have some views set up to provide the data in the format \n> needed.\n> \n> Currently there's only a few users in the database for testing purposes, \n> and most of the time the user lookup's take 2-3 ms (I have syslog'ing of \n> queries and duration turned on), but several times per hour the duration \n> for one of the queries is 2-3 seconds (1000 times larger), while the \n> surrounding lookups take the usual 2-3 ms.\n\nCheck vmstat during the same period if it is syncing at that point as Tom \nsuggested.\n\nAre you using pooled connections? If yes you could shorten life of a connection \nand force making a new connection every 10-15 minutes say. That would avoid IO \navelanche at the end of the hour types.\n\nHTH.\n\nBye\n Shridhar\n\n--\nignorance, n.:\tWhen you don't know anything, and someone else finds out.\n\n", "msg_date": "Thu, 28 Aug 2003 12:19:07 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "We have a somewhat similar situation - we're running a fairly constant, but\nlow priority, background load of about 70 selects and 40 inserts per second\n(batched into fairly large transactions), and on top of that we're trying to\nrun time-sensitive queries for a web site (well two). I should emphasize\nthat this is low low priority - if a query is delayed by an hour here, it\ndoesn't matter.\n\nThe web site queries will jump up one or two orders of magnitude (I have\nseen a normally 100ms query take in excess of 30 seconds) in duration at\nseemingly random points. It's not always when the transactions are\ncommitting, and it doesn't seem to be during checkpointing either. The same\nthing happens with WAL switched off. It appears to happen the first time the\nquery runs after a while. 
If I run the same query immediately afterwards, it\nwill take the normal amount of time.\n\nAny ideas?\n\nCheers,\n\nRuss Garrett\n\[email protected] wrote:\n> Subject: [PERFORM] Queries sometimes take 1000 times the normal time\n>\n>\n> Hello,\n>\n> We're running a set of Half-Life based game servers that lookup user\n> privileges from a central PostgreSQL 7.3.4 database server (I recently\n> ported the MySQL code in Adminmod to PostgreSQL to be able to do\n> this).\n>\n> The data needed by the game servers are combined from several\n> different tables, so we have some views set up to provide the data in\n> the format needed.\n>\n> Currently there's only a few users in the database for testing\n> purposes, and most of the time the user lookup's take 2-3 ms (I have\n> syslog'ing of queries and duration turned on), but several times per\n> hour the duration for one of the queries is 2-3 seconds (1000 times\n> larger), while the surrounding lookups take the usual 2-3 ms.\n>\n> This is rather critical, as the game server software isn't asynchonous\n> and thus waits for a reply before continuing, so when someone\n> connects, and the user lookup happens to have one of these very long\n> durations, the players on this server experience a major lag spike,\n> which isn't very popular :-(\n>\n> All the game servers and the database server are connected to the same\n> switch, so I don't think, that it is a network problem.\n>\n> So far I've been unable to locate the problem, so any suggestions are\n> very welcome.\n>\n> Regards,\n> Anders K. Pedersen\n\n", "msg_date": "Thu, 28 Aug 2003 10:02:46 +0100", "msg_from": "\"Russell Garrett\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "On 28 Aug 2003 at 10:02, Russell Garrett wrote:\n\n> The web site queries will jump up one or two orders of magnitude (I have\n> seen a normally 100ms query take in excess of 30 seconds) in duration at\n> seemingly random points. It's not always when the transactions are\n> committing, and it doesn't seem to be during checkpointing either. The same\n> thing happens with WAL switched off. It appears to happen the first time the\n> query runs after a while. If I run the same query immediately afterwards, it\n> will take the normal amount of time.\n\nLooks like it got flushed out of every type of cache and IO scheduler could not \ndeliver immediately because of other loads...\n\n\nBye\n Shridhar\n\n--\nAbstainer, n.:\tA weak person who yields to the temptation of denying himself a\t\npleasure.\t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n\n", "msg_date": "Thu, 28 Aug 2003 14:40:03 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": ">> The web site queries will jump up one or two orders of magnitude (I\n>> have seen a normally 100ms query take in excess of 30 seconds) in\n>> duration at seemingly random points. It's not always when the\n>> transactions are committing, and it doesn't seem to be during\n>> checkpointing either. The same thing happens with WAL switched off.\n>> It appears to happen the first time the query runs after a while. 
If\n>> I run the same query immediately afterwards, it will take the normal\n>> amount of time.\n>\n> Looks like it got flushed out of every type of cache and IO scheduler\n> could not deliver immediately because of other loads...\n\nYeah, I wasn't sure what (or how) Postgres caches. The db server does have\n2Gb of memory, but then again the database amounts to more than 2Gb, so it's\nfairly possible it's getting pushed out of cache. It's also fairly possible\nthat it's not tuned completely optimally. I wonder if FreeBSD/kernel 2.6\nwould perform better in such a situation...\n\nRuss\n\n", "msg_date": "Thu, 28 Aug 2003 10:28:29 +0100", "msg_from": "\"Russell Garrett\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "Rod Taylor wrote:\n>>Currently there's only a few users in the database for testing purposes, \n>>and most of the time the user lookup's take 2-3 ms (I have syslog'ing of \n>>queries and duration turned on), but several times per hour the duration \n>>for one of the queries is 2-3 seconds (1000 times larger), while the \n>>surrounding lookups take the usual 2-3 ms.\n> \n> Are there any other jobs running at the time of these excessive queries?\n\nI don't know if you're referring to jobs inside the PostgreSQL database \nor just jobs on the server, but I'm pretty sure that nothing major is \ngoing on inside the database - the only other job using it is doing an \ninsert whenever one of our game admins executes an administrative \ncommand (like ban or kick), but this doesn't happen all that often, and \naccording the PostgreSQL log isn't happening at the same times as the \nlong queries.\n\nWith regards to other jobs on the server, there is a MySQL server on it \nas well, which from time to time has some multi-second queries generated \nfrom a webserver also on this host, but the MySQL is running with nice \n10 (PostgreSQL isn't nice'd).\n\nSomeone else asked about vmstat results, and I've been running this for \na while now, and I will report the results shortly.\n\nRegards,\nAnders K. Pedersen\n\n", "msg_date": "Thu, 28 Aug 2003 19:59:09 +0200", "msg_from": "\"Anders K. Pedersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "Shridhar Daithankar wrote:\n> On 28 Aug 2003 at 1:07, Anders K. 
Pedersen wrote:\n>>We're running a set of Half-Life based game servers that lookup user \n>>privileges from a central PostgreSQL 7.3.4 database server (I recently \n>>ported the MySQL code in Adminmod to PostgreSQL to be able to do this).\n>>\n>>The data needed by the game servers are combined from several different \n>>tables, so we have some views set up to provide the data in the format \n>>needed.\n>>\n>>Currently there's only a few users in the database for testing purposes, \n>>and most of the time the user lookup's take 2-3 ms (I have syslog'ing of \n>>queries and duration turned on), but several times per hour the duration \n>>for one of the queries is 2-3 seconds (1000 times larger), while the \n>>surrounding lookups take the usual 2-3 ms.\n> \n> \n> Check vmstat during the same period if it is syncing at that point as Tom \n> suggested.\n\nI've been running a vmstat 1 logging process for a while now, and the \nsample below shows what happende around one of these spikes - at \n18:18:03 specifically (actually there were two 1 second long queries, \nthat finished at 18:18:03).\n\nThu Aug 28 18:17:53 2003 0 0 0 40904 4568 22288 404352 0 0 \n12 0 181 362 2 1 97\nThu Aug 28 18:17:54 2003 0 0 0 40904 4580 22260 404380 0 0 \n128 0 205 330 2 3 95\nThu Aug 28 18:17:55 2003 0 0 2 40904 4576 22264 404380 0 0 \n 0 284 224 127 0 1 99\nThu Aug 28 18:17:56 2003 0 0 2 40904 5008 22268 404512 0 0 \n128 728 571 492 2 3 95\nThu Aug 28 18:17:57 2003 0 0 1 40904 5000 22276 404512 0 0 \n 0 120 201 181 1 0 99\nThu Aug 28 18:17:58 2003 0 0 1 40904 4936 22284 404528 0 0 \n 8 0 1147 2204 12 3 85\nThu Aug 28 18:17:59 2003 0 0 0 40904 4784 22304 404660 0 0 \n148 0 2112 3420 2 3 95\nThu Aug 28 18:18:00 2003 1 1 3 40904 4760 22324 404664 0 0 \n20 456 2374 3277 2 1 97\nThu Aug 28 18:18:01 2003 0 2 10 40904 4436 22000 401456 0 0 \n144 540 510 457 11 6 83\nThu Aug 28 18:18:02 2003 1 1 2 40904 8336 22032 401512 0 0 \n68 676 1830 2540 4 3 93\nThu Aug 28 18:18:04 2003 1 0 1 40904 8160 22052 401664 0 0 \n140 220 2308 3253 2 3 95\nThu Aug 28 18:18:05 2003 0 0 1 40904 7748 22064 402064 0 0 \n288 0 1941 2856 1 3 96\nThu Aug 28 18:18:06 2003 0 0 3 40904 6704 22064 403100 0 0 \n496 992 2326 3510 0 5 95\nThu Aug 28 18:18:07 2003 1 0 0 40904 6324 22088 402716 0 0 \n260 188 1984 2927 11 4 85\nThu Aug 28 18:18:08 2003 0 0 0 40904 6920 22088 402828 0 0 \n72 0 419 1473 17 5 78\nThu Aug 28 18:18:09 2003 0 0 0 40904 6784 22088 402964 0 0 \n128 0 235 476 2 1 97\nThu Aug 28 18:18:10 2003 0 0 1 40904 6404 22088 402980 0 0 \n 0 0 343 855 14 2 84\n\nAs this shows, some disk I/O and an increased amount of interrupts and \ncontext switches is taking place at this time, and this also happens at \nthe same time as all the other long queries I examined. However, vmstat \nalso shows this pattern at a lot of other times, where the queries \naren't affected by it.\n\n> Are you using pooled connections? If yes you could shorten life of a connection \n> and force making a new connection every 10-15 minutes say. That would avoid IO \n> avelanche at the end of the hour types.\n\nI'm not quite sure, what you mean by \"pooled connections\". Each game \nserver has one connection to the PostgreSQL server, which is opened, \nwhen the server is first started, and then never closed (until the game \nserver terminates, but there's days between this happens).\n\nRegards,\nAnders K. Pedersen\n\n", "msg_date": "Thu, 28 Aug 2003 20:16:34 +0200", "msg_from": "\"Anders K. 
Pedersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "Tom Lane wrote:\n> \"Anders K. Pedersen\" <[email protected]> writes:\n> \n>>Currently there's only a few users in the database for testing purposes, \n>>and most of the time the user lookup's take 2-3 ms (I have syslog'ing of \n>>queries and duration turned on), but several times per hour the duration \n>>for one of the queries is 2-3 seconds (1000 times larger), while the \n>>surrounding lookups take the usual 2-3 ms.\n> \n> \n> One thing that comes to mind is that the slow query could be occurring\n> at the same time as a checkpoint, or some other cycle-chewing background\n> operation. It's not clear why a checkpoint would slow things down that\n> much, though. Anyway I'd suggest looking for such activities; once we\n> know if that's the issue or not, we can make some progress.\n\nOne of my colleagues suggested looking for checkpoints as well; I \nsearched the log, but only the following messages turned up:\n\nAug 11 15:21:04 gs1 postgres[5447]: [2] LOG: checkpoint record is at \n0/80193C\nAug 23 13:59:51 gs1 postgres[16451]: [2] LOG: checkpoint record is at \n0/201EB74\nAug 25 02:48:17 gs1 postgres[1059]: [2] LOG: checkpoint record is at \n0/2B787D0\n\nCurrently there are only relatively few changes to the database - one \nINSERT everytime one of our game admins executes an administrative \ncommand (like ban or kick), and this happens at most 10 times per hour. \nAs I understand checkpoints, this should mean, that they aren't \nhappening very often, and when they do, should be able to finish almost \nimmediately.\n\nRegards,\nAnders K. Pedersen\n\n", "msg_date": "Thu, 28 Aug 2003 20:23:46 +0200", "msg_from": "\"Anders K. Pedersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "> With regards to other jobs on the server, there is a MySQL server on it \n> as well, which from time to time has some multi-second queries generated \n> from a webserver also on this host, but the MySQL is running with nice \n> 10 (PostgreSQL isn't nice'd).\n\nDo those MySQL queries hit disk hard?\n\nI've never seen PostgreSQL have hicups like you describe when running on\na machine by itself. 
I have experienced similar issues when another\nprocess (cron job in my case) caused brief swapping to occur.", "msg_date": "Thu, 28 Aug 2003 14:33:24 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "Rod Taylor wrote:\n>>With regards to other jobs on the server, there is a MySQL server on it \n>>as well, which from time to time has some multi-second queries generated \n>>from a webserver also on this host, but the MySQL is running with nice \n>>10 (PostgreSQL isn't nice'd).\n> \n> Do those MySQL queries hit disk hard?\n\nI guess they may be able to do so - the MySQL database is 450 MB, and \nthe server has 512 MB RAM, and some of the queries pretty summarizes \neverything in the database.\n\nHowever, I just cross-referenced the access logs from the webserver with \nthe duration logs, and although some of the spikes did happen, while \nthere would have been some MySQL activity (I can't tell for sure, if it \nwas simple queries or the long ones), other spikes happened without any \nwebsite activity in the surrounding minutes.\n\n> I've never seen PostgreSQL have hicups like you describe when running on\n> a machine by itself. I have experienced similar issues when another\n> process (cron job in my case) caused brief swapping to occur.\n\nOK. I may have to try to put the database on a separate server.\n\nRegards,\nAnders K. Pedersen\n\n", "msg_date": "Thu, 28 Aug 2003 20:57:02 +0200", "msg_from": "\"Anders K. Pedersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "\nJust to add to the clutch here, also check your bdflush settings (if \nyou're on linux) or equivalent (if you're not.)\n\nMany times the swapping algo in linux can be quite bursty if you have it \nset to move too many pages at a time during cleanup / flush.\n\n", "msg_date": "Thu, 28 Aug 2003 14:39:11 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "scott.marlowe wrote:\n> Just to add to the clutch here, also check your bdflush settings (if \n> you're on linux) or equivalent (if you're not.)\n> \n> Many times the swapping algo in linux can be quite bursty if you have it \n> set to move too many pages at a time during cleanup / flush.\n\nAccording to vmstat it doesn't swap near the spikes, so I don't think \nthis is the problem. I posted a vmstat sample in another reply, where \nyou can see an example of what happens.\n\nRegards,\nAnders K. Pedersen\n\n", "msg_date": "Fri, 29 Aug 2003 02:49:44 +0200", "msg_from": "\"Anders K. Pedersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" }, { "msg_contents": "On 28 Aug 2003 at 20:16, Anders K. Pedersen wrote:\n\n> Shridhar Daithankar wrote:\n> > On 28 Aug 2003 at 1:07, Anders K. 
Pedersen wrote:\n> >>We're running a set of Half-Life based game servers that lookup user \n> >>privileges from a central PostgreSQL 7.3.4 database server (I recently \n> >>ported the MySQL code in Adminmod to PostgreSQL to be able to do this).\n> >>\n> >>The data needed by the game servers are combined from several different \n> >>tables, so we have some views set up to provide the data in the format \n> >>needed.\n> >>\n> >>Currently there's only a few users in the database for testing purposes, \n> >>and most of the time the user lookup's take 2-3 ms (I have syslog'ing of \n> >>queries and duration turned on), but several times per hour the duration \n> >>for one of the queries is 2-3 seconds (1000 times larger), while the \n> >>surrounding lookups take the usual 2-3 ms.\n> > \n> > \n> > Check vmstat during the same period if it is syncing at that point as Tom \n> > suggested.\n> \n> I've been running a vmstat 1 logging process for a while now, and the \n> sample below shows what happende around one of these spikes - at \n> 18:18:03 specifically (actually there were two 1 second long queries, \n> that finished at 18:18:03).\n> \n> Thu Aug 28 18:17:53 2003 0 0 0 40904 4568 22288 404352 0 0 \n> 12 0 181 362 2 1 97\n> Thu Aug 28 18:17:54 2003 0 0 0 40904 4580 22260 404380 0 0 \n> 128 0 205 330 2 3 95\n> Thu Aug 28 18:17:55 2003 0 0 2 40904 4576 22264 404380 0 0 \n> 0 284 224 127 0 1 99\n> Thu Aug 28 18:17:56 2003 0 0 2 40904 5008 22268 404512 0 0 \n> 128 728 571 492 2 3 95\n> Thu Aug 28 18:17:57 2003 0 0 1 40904 5000 22276 404512 0 0 \n> 0 120 201 181 1 0 99\n> Thu Aug 28 18:17:58 2003 0 0 1 40904 4936 22284 404528 0 0 \n> 8 0 1147 2204 12 3 85\n> Thu Aug 28 18:17:59 2003 0 0 0 40904 4784 22304 404660 0 0 \n> 148 0 2112 3420 2 3 95\n> Thu Aug 28 18:18:00 2003 1 1 3 40904 4760 22324 404664 0 0 \n> 20 456 2374 3277 2 1 97\n> Thu Aug 28 18:18:01 2003 0 2 10 40904 4436 22000 401456 0 0 \n> 144 540 510 457 11 6 83\n> Thu Aug 28 18:18:02 2003 1 1 2 40904 8336 22032 401512 0 0 \n> 68 676 1830 2540 4 3 93\n> Thu Aug 28 18:18:04 2003 1 0 1 40904 8160 22052 401664 0 0 \n> 140 220 2308 3253 2 3 95\n> Thu Aug 28 18:18:05 2003 0 0 1 40904 7748 22064 402064 0 0 \n> 288 0 1941 2856 1 3 96\n> Thu Aug 28 18:18:06 2003 0 0 3 40904 6704 22064 403100 0 0 \n> 496 992 2326 3510 0 5 95\n> Thu Aug 28 18:18:07 2003 1 0 0 40904 6324 22088 402716 0 0 \n> 260 188 1984 2927 11 4 85\n> Thu Aug 28 18:18:08 2003 0 0 0 40904 6920 22088 402828 0 0 \n> 72 0 419 1473 17 5 78\n> Thu Aug 28 18:18:09 2003 0 0 0 40904 6784 22088 402964 0 0 \n> 128 0 235 476 2 1 97\n> Thu Aug 28 18:18:10 2003 0 0 1 40904 6404 22088 402980 0 0 \n> 0 0 343 855 14 2 84\n> \n\nNotice a pattern. In first few entries free memory is increasing coupled with \nIO. Few entries down the line it's decreasing again with IO. I would guess \nthat something terminated and started..\n\nHowever given how idle CPU is, I wonder would it matter. Besides changes in \nstats are pretty small to be any significant. I wouldn't worry about context \nswitches or interrupts. They don't seem to be any dramatic..\n\nI wonder what kernel you are using. While running pgbench on 2.4 and 2.6 couple \nof days back, I noticed several stalls with 2.4 where neither CPU or disk does \nanything but nothing moves forward, for 30 sec. or so.\n\nIf possible try with 2.6 Check which scheduler you are using and which works \nbest for you.\n\n http://marc.theaimsgroup.com/?l=linux-kernel&m=105743728122143&w=2\n\nIf you want a step by step how to install 2.6, I can give that too.. 
It's \npretty simple..\n\n> > Are you using pooled connections? If yes you could shorten life of a connection \n> > and force making a new connection every 10-15 minutes say. That would avoid IO \n> > avelanche at the end of the hour types.\n> \n> I'm not quite sure, what you mean by \"pooled connections\". Each game \n> server has one connection to the PostgreSQL server, which is opened, \n> when the server is first started, and then never closed (until the game \n> server terminates, but there's days between this happens).\n\nI would say let each server start a new connection every 15 minutes. As soon as \nnew connection is established, close old one. \n\nSee if that cures the problem. Apache uses similar methods albeit it measure by \nnubmer of requests served by each child.\n\nHTH\n\nBye\n Shridhar\n\n--\nIt is necessary to have purpose.\t\t-- Alice #1, \"I, Mudd\", stardate 4513.3\n\n", "msg_date": "Fri, 29 Aug 2003 14:00:34 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries sometimes take 1000 times the normal time" } ]
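One way to test the checkpoint theory from the thread above is to provoke the background work on demand and see whether the lookup stalls right after it. The query below is only a placeholder, since the real views and columns are not shown in the thread; CHECKPOINT needs superuser rights, and \timing needs a reasonably recent psql (otherwise the duration logging already enabled shows the elapsed time).

\timing
-- Placeholder for the privilege lookup the game servers issue through the views:
select access_level
from user_privileges
where steam_id = 'STEAM_0:0:12345';

-- Force the suspected background work, then repeat the lookup immediately.
-- A spike that only follows the forced checkpoint points at checkpoint I/O;
-- no spike points back at the MySQL and web load sharing the same disks.
checkpoint;

select access_level
from user_privileges
where steam_id = 'STEAM_0:0:12345';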
[ { "msg_contents": "I have a table that has about 20 foreign key constraints on it. I think \nthis is a bit excessive and am considering removing them ( they are all \nrelated to the same table and I don't think there is much chance of any \nintegrity violations ). Would this improve performance or not?\n\nthanks\n\n", "msg_date": "Thu, 28 Aug 2003 18:15:57 +0100", "msg_from": "teknokrat <[email protected]>", "msg_from_op": true, "msg_subject": "performance of foreign key constraints" }, { "msg_contents": "On Thu, Aug 28, 2003 at 06:15:57PM +0100, teknokrat wrote:\n> I have a table that has about 20 foreign key constraints on it. I think \n> this is a bit excessive and am considering removing them ( they are all \n> related to the same table and I don't think there is much chance of any \n> integrity violations ). Would this improve performance or not?\n\nAlmost certainly. But there's probably room for some middle ground\nbetween 20 and none.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 28 Aug 2003 13:25:39 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of foreign key constraints" }, { "msg_contents": "On Thu, 28 Aug 2003, teknokrat wrote:\n\n> I have a table that has about 20 foreign key constraints on it. I think\n> this is a bit excessive and am considering removing them ( they are all\n> related to the same table and I don't think there is much chance of any\n> integrity violations ). Would this improve performance or not?\n\nIt depends on your frequency of inserts/updates to the table with the\nconstraint and the frequency of update/delete to the table(s) being\nrefered to. My guess is probably. You may wish to leave some of the\nconstraints (decide which are the most important), but 20 does seem a bit\nexcessive in general.\n\n", "msg_date": "Thu, 28 Aug 2003 10:28:19 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of foreign key constraints" }, { "msg_contents": "Stephan Szabo wrote:\n\n> On Thu, 28 Aug 2003, teknokrat wrote:\n> \n> \n>>I have a table that has about 20 foreign key constraints on it. I think\n>>this is a bit excessive and am considering removing them ( they are all\n>>related to the same table and I don't think there is much chance of any\n>>integrity violations ). Would this improve performance or not?\n> \n> \n> It depends on your frequency of inserts/updates to the table with the\n> constraint and the frequency of update/delete to the table(s) being\n> refered to. My guess is probably. You may wish to leave some of the\n> constraints (decide which are the most important), but 20 does seem a bit\n> excessive in general.\n> \n\nThe references are all to the same table i.e. they are employee ids, so \nleaving some and not others would make no sense. The table has no \ndeletes, small amount of inserts and moderate amount of updates. However \nthere are many selects and its their performance I am most concerned with.\n\nthanks\n\n", "msg_date": "Thu, 28 Aug 2003 18:54:40 +0100", "msg_from": "teknokrat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of foreign key constraints" }, { "msg_contents": "\nOn Thu, 28 Aug 2003, teknokrat wrote:\n\n> Stephan Szabo wrote:\n>\n> > On Thu, 28 Aug 2003, teknokrat wrote:\n> >\n> >\n> >>I have a table that has about 20 foreign key constraints on it. 
I think\n> >>this is a bit excessive and am considering removing them ( they are all\n> >>related to the same table and I don't think there is much chance of any\n> >>integrity violations ). Would this improve performance or not?\n> >\n> >\n> > It depends on your frequency of inserts/updates to the table with the\n> > constraint and the frequency of update/delete to the table(s) being\n> > refered to. My guess is probably. You may wish to leave some of the\n> > constraints (decide which are the most important), but 20 does seem a bit\n> > excessive in general.\n> >\n>\n> The references are all to the same table i.e. they are employee ids, so\n> leaving some and not others would make no sense. The table has no\n> deletes, small amount of inserts and moderate amount of updates. However\n> there are many selects and its their performance I am most concerned with.\n\nThe foreign keys should only really affect insert/update/delete\nperformance. If you're using 7.3.4 (I think) then updates to the fk table\nthat don't change any of the keys should be relatively cheap. I'd be much\nmore worried if you had any changes the the referenced employee table that\nmight change the key because that could get relatively expensive.\n\n\n", "msg_date": "Thu, 28 Aug 2003 11:51:08 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of foreign key constraints" }, { "msg_contents": "> The references are all to the same table i.e. they are employee ids, so \n> leaving some and not others would make no sense. The table has no \n> deletes, small amount of inserts and moderate amount of updates. However \n> there are many selects and its their performance I am most concerned with.\n\nForeign keys have no impact on selects.\n\nAlthough this does sound like a rather strange design to me (20+ columns\nwide and they're all employee ids?)", "msg_date": "Thu, 28 Aug 2003 14:58:18 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of foreign key constraints" }, { "msg_contents": "Rod Taylor wrote:\n>>The references are all to the same table i.e. they are employee ids, so \n>>leaving some and not others would make no sense. The table has no \n>>deletes, small amount of inserts and moderate amount of updates. However \n>>there are many selects and its their performance I am most concerned with.\n> \n> \n> Foreign keys have no impact on selects.\n> \n> Although this does sound like a rather strange design to me (20+ columns\n> wide and they're all employee ids?)\n\nThere are more than 20 fieldss. Its a report that can get updated by \ndifferent employees and we wish to keep a record of which employee \ncompleted which section. Couldn't think of any other way to do it.\n\n", "msg_date": "Thu, 28 Aug 2003 20:16:59 +0100", "msg_from": "teknokrat <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance of foreign key constraints" } ]
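A cut-down sketch of the design being described makes the points above concrete. All names here are invented; the original table has around twenty such columns, each referencing the same employee table.

create table employee (
    employee_id  integer primary key,
    name         text
);

-- Two of the ~20 per-section "completed by" columns:
create table report (
    report_id     integer primary key,
    section_a_by  integer constraint report_section_a_fk
                      references employee (employee_id),
    section_b_by  integer constraint report_section_b_fk
                      references employee (employee_id)
);

-- Plain selects never touch these constraints. Inserts check each non-null
-- reference once, and 7.3.4 skips the check on updates that leave the column
-- unchanged. If one constraint really has to go, it can be dropped by name:
alter table report drop constraint report_section_a_fk;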
[ { "msg_contents": "I just ran a handful of tests on a 14-disk array on a SCSI hardware\nRAID card.\n\n From some quickie benchmarks using the bonnie++ benchmark, it appears\nthat the RAID5 across all 14 disks is a bit faster than RAID50 and\nnoticeably faster than RAID10...\n\nSample numbers for a 10Gb file (speed in Kbytes/second)\n\t\t\t\t\t\t \n RAID5 RAID50 RAID10 \nsequential write: 39728 37568 23533\t \nread/write file: 13831 13289 11400 \nsequential read: 52184 51529 \t 54222 \n\n\nHardware is a Dell 2650 dual Xeon, 4GB Ram, PERC3/DC RAID card with\n14 external U320 SCSI 15kRPM drives. Software is FreeBSD 4.8 with the\ndefault newfs settings.\n\nThe RAID drives were configured with 32k stripe size. From informal\ntests it doesn't seem to make much difference in the bonnie++\nbenchmark to go with 64k stripe on the RAID10 (didn't test it with\nRAID5 or RAID50). They say use larger stripe size for sequential\naccess, and lower for random access.\n\nMy concern is speed. Any RAID config on this system has more disk\nspace than I will need for a LOOONG time.\n\nMy Postgres load is a heavy mix of select/update/insert. ie, it is a\nvery actively updated and read database.\n\nThe conventional wisdom has been to use RAID10, but with 14 disks, I'm\nkinda leaning toward RAID50 or perhaps just RAID5.\n\nHas anyone else done similar tests of different RAID levels? What\nwere your conclusions?\n\nRaw output from bonnie++ available upon request.\n", "msg_date": "Thu, 28 Aug 2003 17:16:41 -0400 (EDT)", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "opinion on RAID choice" }, { "msg_contents": "On Thu, 28 Aug 2003, Vivek Khera wrote:\n\n> I just ran a handful of tests on a 14-disk array on a SCSI hardware\n> RAID card.\n\nSNIP\n\n> Has anyone else done similar tests of different RAID levels? What\n> were your conclusions?\n\nYes I have. I had a 6 disk array plus 2 disks inside my machine (this was \non a Sparc 20 with 4 narrow SCSI channels and the disks spread across them \nevenly, using RH6.2 and linux sw raid.\n\nMy results were about the same as yours, RAID1+0 tended to beat RAID5 at \nreads, while RAID5 tended to win at writes.\n\nThere's an old wive's tale that RAID5 has to touch every single disk in a \nstripe when writing, which simply isn't true. I believe that many old \ncontrollers (decades back, 286 land kinda stuff) might have done it this \nway, and so people kept thinking this was how RAID5 worked, and avoided \nit.\n\nMy experience has been that once you get past 6 disks, RAID5 is faster \nthan RAID1+0.\n\n", "msg_date": "Thu, 28 Aug 2003 15:26:14 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on RAID choice" }, { "msg_contents": ">>>>> \"sm\" == scott marlowe <scott.marlowe> writes:\n\nsm> My experience has been that once you get past 6 disks, RAID5 is faster \nsm> than RAID1+0.\n\nAny opinion on stripe size for the RAID?\n", "msg_date": "Thu, 28 Aug 2003 22:00:21 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: opinion on RAID choice" }, { "msg_contents": "On Thu, 28 Aug 2003, Vivek Khera wrote:\n\n> >>>>> \"sm\" == scott marlowe <scott.marlowe> writes:\n> \n> sm> My experience has been that once you get past 6 disks, RAID5 is faster \n> sm> than RAID1+0.\n> \n> Any opinion on stripe size for the RAID?\n\nThat's more determined by what kind of data you're gonna be handling. 
If \nyou want to do lots of little financial transactions, then 32k or less is \ngood. If you're gonna store moderately large text fields and such, then \ngoing above 32k or 64k is usually a good idea.\n\n", "msg_date": "Thu, 28 Aug 2003 22:22:35 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on RAID choice" }, { "msg_contents": "On Thu, Aug 28, 2003 at 03:26:14PM -0600, scott.marlowe wrote:\n> \n> My experience has been that once you get past 6 disks, RAID5 is faster \n> than RAID1+0.\n\nAlso depends on your filesystem and volume manager. As near as I can\ntell, you do _not_ want to use RAID 5 with Veritas.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 2 Sep 2003 12:14:34 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on RAID choice" }, { "msg_contents": "\n\n--On Tuesday, September 02, 2003 12:14:34 -0400 Andrew Sullivan \n<[email protected]> wrote:\n\n> On Thu, Aug 28, 2003 at 03:26:14PM -0600, scott.marlowe wrote:\n>>\n>> My experience has been that once you get past 6 disks, RAID5 is faster\n>> than RAID1+0.\n>\n> Also depends on your filesystem and volume manager. As near as I can\n> tell, you do _not_ want to use RAID 5 with Veritas.\nOut of curiosity, why?\n\nI have Veritas Doc up (since UnixWare has it) at:\n\nhttp://www.lerctr.org:8458/en/Navpages/FShome.html\n\nif anyone wants to read.\n\nLER\n\n>\n> A\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "Tue, 02 Sep 2003 11:24:16 -0500", "msg_from": "Larry Rosenman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on RAID choice" }, { "msg_contents": "On Tue, 2003-09-02 at 11:14, Andrew Sullivan wrote:\n> On Thu, Aug 28, 2003 at 03:26:14PM -0600, scott.marlowe wrote:\n> > \n> > My experience has been that once you get past 6 disks, RAID5 is faster \n> > than RAID1+0.\n> \n> Also depends on your filesystem and volume manager. As near as I can\n> tell, you do _not_ want to use RAID 5 with Veritas.\n\nWhy should Veritas care? Or is it that Veritas has a high overhead\nof small block writes?\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Millions of Chinese speak Chinese, and it's not hereditary...\"\nDr. Dean Edell\n\n", "msg_date": "Tue, 02 Sep 2003 11:33:44 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on RAID choice" }, { "msg_contents": "On Tue, Sep 02, 2003 at 11:24:16AM -0500, Larry Rosenman wrote:\n> >tell, you do _not_ want to use RAID 5 with Veritas.\n> Out of curiosity, why?\n\nWhat I keep hearing through various back channels is that, if you pay\nfolks from Veritas to look at your installation, and they see RAID 5,\nthey suggest you move it to 1+0. I haven't any idea why. It could\nbe just a matter of preference; it could be prejudice; it could be\nbaseless faith in the inefficiency of RAID5; it could be they have\nstock in a drive company; or it could be they know about some strange\nbug that they haven't an idea how to fix. 
Nobody's ever been\nable/willing to tell me.\n\nA\n\n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 2 Sep 2003 12:36:29 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on RAID choice" }, { "msg_contents": "Ron Johnson wrote:\n> On Tue, 2003-09-02 at 11:14, Andrew Sullivan wrote:\n> \n>>On Thu, Aug 28, 2003 at 03:26:14PM -0600, scott.marlowe wrote:\n>>\n>>>My experience has been that once you get past 6 disks, RAID5 is faster \n>>>than RAID1+0.\n>>\n>>Also depends on your filesystem and volume manager. As near as I can\n>>tell, you do _not_ want to use RAID 5 with Veritas.\n> \n> \n> Why should Veritas care? Or is it that Veritas has a high overhead\n> of small block writes?\n> \n\n\nI agree with Scott however only when it's hardware RAID 5 and only\ncertain hardware implementations of it. A Sun A1000 RAID 5 is not\nequal to a Sun T3. Putting disk technologies aside, the A1000 array\nXOR function is in software whereas the T3 is implemented in hardware.\nAdditionally, most external hardware based RAID systems have some\nform of battery backup to ensure all data is written.\n\nVeritas Volume Manager and even Linux, HP-UX and AIX LVM works just\nfine when slicing & dicing but not for stitching LUN's together. IMHO,\nif you have the $$ for VxVM buy a hardware based RAID solution as well\nand let it do the work.\n\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n", "msg_date": "Tue, 02 Sep 2003 12:47:26 -0400", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on RAID choice" }, { "msg_contents": "On Tue, 2003-09-02 at 11:47, Greg Spiegelberg wrote:\n> Ron Johnson wrote:\n> > On Tue, 2003-09-02 at 11:14, Andrew Sullivan wrote:\n> > \n> >>On Thu, Aug 28, 2003 at 03:26:14PM -0600, scott.marlowe wrote:\n> >>\n> >>>My experience has been that once you get past 6 disks, RAID5 is faster \n> >>>than RAID1+0.\n> >>\n> >>Also depends on your filesystem and volume manager. As near as I can\n> >>tell, you do _not_ want to use RAID 5 with Veritas.\n> > \n> > \n> > Why should Veritas care? Or is it that Veritas has a high overhead\n> > of small block writes?\n> > \n> \n> \n> I agree with Scott however only when it's hardware RAID 5 and only\n> certain hardware implementations of it. A Sun A1000 RAID 5 is not\n> equal to a Sun T3. Putting disk technologies aside, the A1000 array\n> XOR function is in software whereas the T3 is implemented in hardware.\n> Additionally, most external hardware based RAID systems have some\n> form of battery backup to ensure all data is written.\n> \n> Veritas Volume Manager and even Linux, HP-UX and AIX LVM works just\n> fine when slicing & dicing but not for stitching LUN's together. IMHO,\n> if you have the $$ for VxVM buy a hardware based RAID solution as well\n> and let it do the work.\n\nAh, shows how isolated, or behind the times, I am. I thought that\nVeritas just handled backups. Never cared about looking at it to\ndo anything, else we always use h/w RAID storage controllers.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. 
[email protected]\nJefferson, LA USA\n\nAfter listening to many White House, Pentagon & CENTCOM \nbriefings in both Gulf Wars, it is my firm belief that most \n\"senior correspondents\" either have serious agendas that don't \nget shaken by facts, or are dumb as dog feces.\n\n", "msg_date": "Tue, 02 Sep 2003 12:33:06 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: opinion on RAID choice" } ]
[ { "msg_contents": "I'm surprised at the effort pgsql requires to run one of my queries. I \ndon't know how to tune this query.\n\n Column | Type | Modifiers\n------------+--------------+-----------\n the_id | integer | not null\n the_date | date | not null\n num1 | numeric(9,4) |\n num2 | numeric(9,4) |\n num3 | numeric(9,4) |\n num4 | numeric(9,4) |\n int1 | integer |\nIndexes:\n \"the_table_pkey\" primary key, btree (the_id, the_date)\n\n---------------------------------------\n\nThe query I want to run is\n\nselect stock_id, min(price_date) from day_ends group by stock_id;\n\n---------------------------------------\n\nHere's the plan that I get.\n\nGroupAggregate (cost=3711244.30..3838308.31 rows=6732 width=8)\n -> Sort (cost=3711244.30..3753593.36 rows=16939624 width=8)\n Sort Key: stock_id\n -> Seq Scan on day_ends (cost=0.00..361892.24 rows=16939624\n width=8)\n\nIf I set enable_seqscan = false, the plan changes to\n\n GroupAggregate (cost=0.00..67716299.91 rows=6732 width=8)\n -> Index Scan using day_ends_pkey on day_ends\n (cost=0.00..67631584.96 rows=16939624 width=8)\n\n---------------------------------------\n\nNow... the first plan uses up tons of temporary space for sorting. The \nsecond one just runs and runs and runs. I've tried setting the \nstatistics to 1000 with little effect.\n\nSo the query can get everything it needs from the index, and a full scan \nof the index should be faster (the index file is less than half the size \nof the data file.) So why does the optimizer estimate so high?\n\nAlso, to get the MIN for a given group, not all values of the index need \nto be seen. Must pgsql do a full scan because it treats all aggregates \nin the same way? Are MIN and MAX used often enough to justify special \ntreatment, and could that be cleanly implemented? Perhaps the aggregate \nfunction can request the data in a certain order, be told that it is \nbeing passed data in a certain order, and return before seeing the \nentire set of data.\n\nFood for thought...\n\n\nThanks,\n\nKen Geis\n\n\n", "msg_date": "Thu, 28 Aug 2003 17:10:31 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "bad estimates / non-scanning aggregates" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Thu, Aug 28, 2003 at 17:10:31 -0700,\n> Ken Geis <[email protected]> wrote:\n> \n>>The query I want to run is\n>>\n>>select stock_id, min(price_date) from day_ends group by stock_id;\n> \n> The fast way to do this is:\n> \n> select distinct on (stock_id) stock_id, price_date\n> order by stock_id, price_date;\n\nNot according to the optimizer! Plus, this is not guaranteed to return \nthe correct results.\n\n Unique (cost=3711244.30..3795942.42 rows=6366 width=8)\n -> Sort (cost=3711244.30..3753593.36 rows=16939624 width=8)\n Sort Key: stock_id, price_date\n -> Seq Scan on day_ends (cost=0.00..361892.24 rows=16939624 \nwidth=8)\n\n\n", "msg_date": "Thu, 28 Aug 2003 19:50:38 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates / non-scanning aggregates" }, { "msg_contents": "On Thu, Aug 28, 2003 at 17:10:31 -0700,\n Ken Geis <[email protected]> wrote:\n> The query I want to run is\n> \n> select stock_id, min(price_date) from day_ends group by stock_id;\n\nThe fast way to do this is:\n\nselect distinct on (stock_id) stock_id, price_date\n order by stock_id, price_date;\n\n> Also, to get the MIN for a given group, not all values of the index need \n> to be seen. 
Must pgsql do a full scan because it treats all aggregates \n> in the same way? Are MIN and MAX used often enough to justify special \n> treatment, and could that be cleanly implemented? Perhaps the aggregate \n> function can request the data in a certain order, be told that it is \n> being passed data in a certain order, and return before seeing the \n> entire set of data.\n\nYes, max and min are not treated special so they don't benefit from\nindexes. This has been discussed repeatedly in the archives.\n", "msg_date": "Thu, 28 Aug 2003 21:51:09 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates / non-scanning aggregates" }, { "msg_contents": "Bruno Wolff III wrote:\n>>Not according to the optimizer! Plus, this is not guaranteed to return \n>>the correct results.\n> \n> For it to be fast you need an index on (stock_id, price_date) so that\n> you can use an index scan.\n\nI already said that such an index existed. In fact, it is the primary \nkey of the table. And yes, I *am* analyzed!\n\n> The answers are guarenteed to be correct. See:\n> http://developer.postgresql.org/docs/postgres/sql-select.html#SQL-DISTINCT\n\nThat's good to know. Thanks!\n\n\nKen\n\n\n", "msg_date": "Thu, 28 Aug 2003 20:00:32 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates / non-scanning aggregates" }, { "msg_contents": "On Thu, Aug 28, 2003 at 19:50:38 -0700,\n Ken Geis <[email protected]> wrote:\n> Bruno Wolff III wrote:\n> >On Thu, Aug 28, 2003 at 17:10:31 -0700,\n> > Ken Geis <[email protected]> wrote:\n> >\n> >>The query I want to run is\n> >>\n> >>select stock_id, min(price_date) from day_ends group by stock_id;\n> >\n> >The fast way to do this is:\n> >\n> >select distinct on (stock_id) stock_id, price_date\n> > order by stock_id, price_date;\n> \n> Not according to the optimizer! Plus, this is not guaranteed to return \n> the correct results.\n\nFor it to be fast you need an index on (stock_id, price_date) so that\nyou can use an index scan.\n\nThe answers are guarenteed to be correct. See:\nhttp://developer.postgresql.org/docs/postgres/sql-select.html#SQL-DISTINCT\n", "msg_date": "Thu, 28 Aug 2003 22:01:56 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates / non-scanning aggregates" }, { "msg_contents": "On Thu, Aug 28, 2003 at 20:00:32 -0700,\n Ken Geis <[email protected]> wrote:\n> Bruno Wolff III wrote:\n> >>Not according to the optimizer! Plus, this is not guaranteed to return \n> >>the correct results.\n> >\n> >For it to be fast you need an index on (stock_id, price_date) so that\n> >you can use an index scan.\n> \n> I already said that such an index existed. In fact, it is the primary \n> key of the table. And yes, I *am* analyzed!\n\nYour original example didn't actually match that of the table you are showing\nexamples from. In that example the second half of the primary key was the\ndate not the end of the day price. If this is the case for the real table,\nthen that is the reason the distinct on doesn't help.\n", "msg_date": "Thu, 28 Aug 2003 22:38:18 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates / non-scanning aggregates" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Thu, Aug 28, 2003 at 20:00:32 -0700,\n> Ken Geis <[email protected]> wrote:\n> \n>>Bruno Wolff III wrote:\n>>\n>>>>Not according to the optimizer! 
Plus, this is not guaranteed to return \n>>>>the correct results.\n>>>\n>>>For it to be fast you need an index on (stock_id, price_date) so that\n>>>you can use an index scan.\n>>\n>>I already said that such an index existed. In fact, it is the primary \n>>key of the table. And yes, I *am* analyzed!\n> \n> \n> Your original example didn't actually match that of the table you are showing\n> examples from. In that example the second half of the primary key was the\n> date not the end of the day price. If this is the case for the real table,\n> then that is the reason the distinct on doesn't help.\n\nI had obfuscated the table in the example and forgot to do the same with \nthe query. Serves me right for thinking I care about that.\n\nA big problem is that the values I am working with are *only* the \nprimary key and the optimizer is choosing a table scan over an index \nscan. That is why I titled the email \"bad estimates.\" The table has \n(stock_id, price_date) as the primary key, and a bunch of other columns. \n What I *really* want to do efficiently is\n\nselect stock_id, min(price_date), max(price_date)\n from day_ends\n group by stock_id;\n\nIt is not the table or the query that is wrong. It is either the db \nparameters or the optimizer itself.\n\n\nKen\n\n", "msg_date": "Thu, 28 Aug 2003 20:46:00 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "On Thu, Aug 28, 2003 at 20:46:00 -0700,\n Ken Geis <[email protected]> wrote:\n> \n> A big problem is that the values I am working with are *only* the \n> primary key and the optimizer is choosing a table scan over an index \n> scan. That is why I titled the email \"bad estimates.\" The table has \n> (stock_id, price_date) as the primary key, and a bunch of other columns. \n> What I *really* want to do efficiently is\n> \n> select stock_id, min(price_date), max(price_date)\n> from day_ends\n> group by stock_id;\n> \n> It is not the table or the query that is wrong. It is either the db \n> parameters or the optimizer itself.\n\nIf you want both the max and the min, then things are going to be a bit\nmore work. You are either going to want to do two separate selects\nor join two selects or use subselects. If there aren't enough prices\nper stock, the sequential scan might be fastest since you only need to\ngo through the table once and don't have to hit the index blocks.\n\nIt is still odd that you didn't get a big speed up for just the min though.\nYou example did have the stock id and the date as the primary key which\nwould make sense since the stock id and stock price on a day wouldn't\nbe guarenteed to be unique. Are you absolutely sure you have a combined\nkey on the stock id and the stock price?\n", "msg_date": "Thu, 28 Aug 2003 23:05:19 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Thu, Aug 28, 2003 at 20:46:00 -0700,\n> Ken Geis <[email protected]> wrote:\n>>It is not the table or the query that is wrong. It is either the db \n>>parameters or the optimizer itself.\n...\n> \n> It is still odd that you didn't get a big speed up for just the min though.\n> You example did have the stock id and the date as the primary key which\n> would make sense since the stock id and stock price on a day wouldn't\n> be guarenteed to be unique. Are you absolutely sure you have a combined\n> key on the stock id and the stock price?\n\nI am positive! 
I can send a log if you want, but I won't post it to the \nlist.\n\nThe arity on the data is roughly 1500 price_dates per stock_id.\n\nI was able to get the query to return in a reasonable amount of time \n(still ~3 minutes) by forcing a nested loop path using SQL functions \ninstead of min and max.\n\nI'm going to run comparisons on 7.3.3 and 7.4-beta2. I'll also look \ninto the optimizer source to try to figure out why it thinks scanning \nthis index is so expensive.\n\n\nKen\n\n\n", "msg_date": "Thu, 28 Aug 2003 21:09:00 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "On Thu, Aug 28, 2003 at 21:09:00 -0700,\n Ken Geis <[email protected]> wrote:\n> Bruno Wolff III wrote:\n> \n> I am positive! I can send a log if you want, but I won't post it to the \n> list.\n\nCan you do a \\d on the real table or is that too sensitive?\n\nIt still doesn't make sense that you have a primary key that\nis a stock and its price. What happens when the stock has the\nsame price on two different dates? And I doubt that you are looking\nfor the minimum and maximum dates for which you have price data.\nSo it is hard to believe that the index for your primary key is the\none you need for your query.\n\n> The arity on the data is roughly 1500 price_dates per stock_id.\n\nTwo index scans (one for min values and another for max values)\nshould be better than one sequential scan under those conditions.\n\nI am calling it quits for tonight, but will check back tomorrow\nto see how things turned out.\n", "msg_date": "Thu, 28 Aug 2003 23:24:53 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Bruno Wolff III wrote:\n> Can you do a \\d on the real table or is that too sensitive?\n\nIt was silly of me to think of this as particularly sensitive.\n\nstocks=> \\d day_ends\n Table \"public.day_ends\"\n Column | Type | Modifiers\n------------+--------------+-----------\n stock_id | integer | not null\n price_date | date | not null\n open | numeric(9,4) |\n high | numeric(9,4) |\n low | numeric(9,4) |\n close | numeric(9,4) |\n volume | integer |\nIndexes: day_ends_pkey primary key btree (stock_id, price_date)\nTriggers: RI_ConstraintTrigger_16558399\n\n> It still doesn't make sense that you have a primary key that\n> is a stock and its price. What happens when the stock has the\n> same price on two different dates? And I doubt that you are looking\n> for the minimum and maximum dates for which you have price data.\n> So it is hard to believe that the index for your primary key is the\n> one you need for your query.\n\nI can see the naming being confusing. I used \"price_date\" because, of \ncourse, \"date\" is not a legal name. \"day_ends\" is a horrible name for \nthe table; \"daily_bars\" would probably be better. I *am* looking for \nthe mininum and maximum dates for which I have price data. I'm running \nthis query to build a chart so I can see visually where the majority of \nmy data begins to use as the start of a window for analysis.\n\nWhen run on 7.3.3, forcing an index scan by setting \nenable_seqscan=false, the query took 55 minutes to run. The index is \nabout 660M in size, and the table is 1G. As I mentioned before, with \ntable scans enabled, it bombs, running out of temporary space.\n\nHey Bruno, thanks for your attention here. I'm not a newbie, but I've \nnever really had performance issues with pgsql before. 
And I've been \nrunning this database for a couple of years now, but I haven't run these \nqueries against it.\n\n\nKen\n\n\n", "msg_date": "Fri, 29 Aug 2003 00:01:09 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Sorry, all, to wipe out the context, but it was getting a little long.\n\nBruno Wolff III wrote:\n> I am calling it quits for tonight, but will check back tomorrow\n> to see how things turned out.\n\nI went through the code (7.4 beta2) that estimates the cost of an index \nscan path. What I need to be sure of is that when running a query in \npgsql that uses only the columns that are in an index, the underlying \ntable need not be accessed. I know that Oracle does this.\n\nThe cost_index function is assuming that after finding an entry in the \nindex it will be looking it up in the underlying table. That table is \nnot well correlated to the index, so it is assuming (in the worst case) \na random page lookup for each of 17 million records! In my case, if the \nunderlying table is indeed not touched, the estimated cost is 1000 times \nthe real cost.\n\n63388.624000 to scan the index\n67406506.915595 to scan the index and load a random page for each entry\n\n\nKen\n\n\n", "msg_date": "Fri, 29 Aug 2003 00:57:43 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Ken Geis wrote:\n> I went through the code (7.4 beta2) that estimates the cost of an index \n> scan path. What I need to be sure of is that when running a query in \n> pgsql that uses only the columns that are in an index, the underlying \n> table need not be accessed. I know that Oracle does this.\n\nThinking about it some more, it's obvious to me that a pgsql index scan \nmust be accessing the underlying table even though all of the \ninformation needed is in the index itself. A linear scan of a 660M file \nshould not take 55 minutes. I could confirm this with stats, but \nsomeone out there probably already knows the answer here.\n\n\nKen\n\n\n", "msg_date": "Fri, 29 Aug 2003 01:10:06 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "> > I went through the code (7.4 beta2) that estimates the cost of an index\n> > scan path. What I need to be sure of is that when running a query in\n> > pgsql that uses only the columns that are in an index, the underlying\n> > table need not be accessed. I know that Oracle does this.\n\nPostgreSQL absolutely does not do this. It is also not possible to do this\ndue to MVCC.\n\nChris\n\n", "msg_date": "Fri, 29 Aug 2003 16:15:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Ken Geis wrote:\n> When run on 7.3.3, forcing an index scan by setting \n> enable_seqscan=false, the query took 55 minutes to run. The index is \n> about 660M in size, and the table is 1G. As I mentioned before, with \n> table scans enabled, it bombs, running out of temporary space.\n\nMan, I should wait a while before I send mails, because I keep having \nmore to say!\n\nSome good news here. Doing the same as above on 7.4beta2 took 29 \nminutes. Now, the 7.3.3 was on reiser and 7.4 on ext2, so take that as \nyou will. 
7.4's index selectivity estimate seems much better; 7.3.3's \nanticipated rows was ten times the actual; 7.4's is one half of the actual.\n\n\nKen\n\n\n", "msg_date": "Fri, 29 Aug 2003 01:17:25 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "On Fri, 29 Aug 2003, Ken Geis wrote:\n\n> Some good news here. Doing the same as above on 7.4beta2 took 29\n> minutes. Now, the 7.3.3 was on reiser and 7.4 on ext2, so take that as\n> you will. 7.4's index selectivity estimate seems much better; 7.3.3's\n> anticipated rows was ten times the actual; 7.4's is one half of the actual.\n>\nMin() & Max() unfortunatly suck on PG. It will be that way for a while\nperhaps at some point someone will make a \"special\" case and convince\n-HACKERS it is a Good Thing(tm) (Like select count(*) from table being\n'cached' - a lot of people probably get bad first impressions because of\nthat)\n\nWould it be possible ot rewrite your queries replacing min/max with a\nselect stock_id from bigtable where blah = blorch order by stock_id\n(desc|asc) limit 1? because that would enable PG to use an index and\nmagically \"go fast\". You may need a subselect..\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Fri, 29 Aug 2003 08:52:34 -0400 (EDT)", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates" }, { "msg_contents": "On Fri, 29 Aug 2003, Ken Geis wrote:\n\n> Ken Geis wrote:\n> > I went through the code (7.4 beta2) that estimates the cost of an index\n> > scan path. What I need to be sure of is that when running a query in\n> > pgsql that uses only the columns that are in an index, the underlying\n> > table need not be accessed. I know that Oracle does this.\n>\n> Thinking about it some more, it's obvious to me that a pgsql index scan\n> must be accessing the underlying table even though all of the\n> information needed is in the index itself. A linear scan of a 660M file\n> should not take 55 minutes. I could confirm this with stats, but\n> someone out there probably already knows the answer here.\n\nUnfortunately not all the information needed is in the index. You can't\ntell from the index alone currently whether or not the row is visible to\nyou. Adding said information would be possible but there are downsides to\nthat as well (there are some past discussions on the topic, but I'm too\nlazy to look them up to give a link, check the archives ;) ).\n\n", "msg_date": "Fri, 29 Aug 2003 08:02:26 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Bruno Wolff III wrote:\n> If you want both the max and the min, then things are going to be a bit\n> more work. You are either going to want to do two separate selects\n> or join two selects or use subselects. 
If there aren't enough prices\n> per stock, the sequential scan might be fastest since you only need to\n> go through the table once and don't have to hit the index blocks.\n> \n> It is still odd that you didn't get a big speed up for just the min though.\n\nI found I'm suffering from an effect detailed in a previous thread titled\n\n\tDoes \"correlation\" mislead the optimizer on large tables?\n\n\nKen\n\n\n", "msg_date": "Fri, 29 Aug 2003 09:06:31 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "> >If you want both the max and the min, then things are going to be a\n> >bit more work. You are either going to want to do two separate\n> >selects or join two selects or use subselects. If there aren't\n> >enough prices per stock, the sequential scan might be fastest since\n> >you only need to go through the table once and don't have to hit\n> >the index blocks.\n> >\n> >It is still odd that you didn't get a big speed up for just the min though.\n> \n> I found I'm suffering from an effect detailed in a previous thread titled\n> \n> \tDoes \"correlation\" mislead the optimizer on large tables?\n\nI don't know about large tables, but this is a big problem and\nsomething I'm going to spend some time validating later today. I\nthink Manfred's patch is pretty good and certainly better than where\nwe are but I haven't used it yet to see if it's the magic ticket for\nmany of these index problems.\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 29 Aug 2003 09:36:13 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Sean Chittenden wrote:\n>>I found I'm suffering from an effect detailed in a previous thread titled\n>>\n>>\tDoes \"correlation\" mislead the optimizer on large tables?\n> \n> \n> I don't know about large tables, but this is a big problem and\n> something I'm going to spend some time validating later today. I\n> think Manfred's patch is pretty good and certainly better than where\n> we are but I haven't used it yet to see if it's the magic ticket for\n> many of these index problems.\n\nI had to dig through a lot of archives to find this. Is this the patch, \nfrom last October?\n\nhttp://members.aon.at/pivot/pg/16-correlation.diff\n\nIf so, I'll try it out and report my results.\n\n\nKen\n\n\n", "msg_date": "Fri, 29 Aug 2003 09:56:59 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "> >>I found I'm suffering from an effect detailed in a previous thread titled\n> >>\n> >>\tDoes \"correlation\" mislead the optimizer on large tables?\n> >\n> >\n> >I don't know about large tables, but this is a big problem and\n> >something I'm going to spend some time validating later today. I\n> >think Manfred's patch is pretty good and certainly better than where\n> >we are but I haven't used it yet to see if it's the magic ticket for\n> >many of these index problems.\n> \n> I had to dig through a lot of archives to find this. 
Is this the patch, \n> from last October?\n> \n> http://members.aon.at/pivot/pg/16-correlation.diff\n> \n> If so, I'll try it out and report my results.\n\nSame guy, but that patch is pretty out of date and has been replaced\nby some newer work that's much better.\n\nFrom: Manfred Koizar <[email protected]>\nCc: [email protected]\nSubject: Re: [HACKERS] Correlation in cost_index()\nDate: Wed, 20 Aug 2003 19:57:12 +0200\nMessage-ID: <[email protected]>\n\n\nand\n\n\nFrom: Manfred Koizar <[email protected]>\nTo: [email protected]\nSubject: [HACKERS] Again on index correlation\nDate: Wed, 20 Aug 2003 21:21:14 +0200\nMessage-ID: <[email protected]>\n\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 29 Aug 2003 10:03:31 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates" }, { "msg_contents": "I haven't come up with any great ideas for this one. It might be interesting\nto compare the explain analyze output from the distinct on query with\nand without seqscans enabled.\n", "msg_date": "Fri, 29 Aug 2003 21:55:03 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Bruno Wolff III wrote:\n> I haven't come up with any great ideas for this one. It might be interesting\n> to compare the explain analyze output from the distinct on query with\n> and without seqscans enabled.\n\nCan't do that comparison. Remember, with seqscan it fails. (Oh, and \nthat nested loops solution I thought was fast actually took 31 minutes \nversus 29 for index scan in 7.4b2.)\n\nI ran another query across the same data:\n\nselect price_date, count(*) from day_ends group by price_date;\n\nIt used a table scan and hashed aggregates, and it ran in 5.5 minutes. \nConsidering that, pgsql should be able to do the query that I had been \nrunning in a little more time than that. So...\n\n From what I've learned, we want to convince the optimizer to use a \ntable scan; that's a good thing. I want it to use hashed aggregates, \nbut I can't convince it to (unless maybe I removed all of the \nstatistics.) To use group aggregates, it first sorts the results of the \ntable scan (all 17 million rows!) There ought to be some way to tell \npgsql not to do sorts above a certain size. In this case, if I set \nenable_sort=false, it goes back to the index scan. If I then set \nenable_indexscan=false, it goes back to sorting.\n\n", "msg_date": "Fri, 29 Aug 2003 22:05:18 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Bruno Wolff III wrote:\n> I haven't come up with any great ideas for this one. It might be interesting\n> to compare the explain analyze output from the distinct on query with\n> and without seqscans enabled.\n\nAfter digging through planner code, I found that bumping up the sort_mem \nwill make the planner prefer a full table scan and hashed aggregation. \nThe sort memory is where the hash table is stored. In the end, the \nquery runs in 4.5 minutes, which is reasonable.\n\nI had planned to try Manfred's index correlation patch to see if it \nwould give better estimates for an index scan. The index scan method \ntook maybe 6.5x as long, but the estimate was that it would take 1400x \nas long. 
I think instead of trying out his patch I might actually work \non my application!\n\n\nKen\n\n", "msg_date": "Sat, 30 Aug 2003 00:38:13 -0700", "msg_from": "Ken Geis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad estimates" }, { "msg_contents": "Ken Geis <[email protected]> writes:\n> From what I've learned, we want to convince the optimizer to use a \n> table scan; that's a good thing. I want it to use hashed aggregates, \n> but I can't convince it to (unless maybe I removed all of the \n> statistics.)\n\nYou probably just need to increase sort_mem. Multiple aggregates take\nmore RAM to process in a hashtable style ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Aug 2003 10:35:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad estimates " } ]
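Pulling the workarounds from this thread together as a sketch against the day_ends table described above; the stock id and sort_mem figure below are only illustrative:

-- the DISTINCT ON form suggested earlier, with the FROM clause spelled out,
-- can use the (stock_id, price_date) index instead of sorting all 17M rows:
select distinct on (stock_id) stock_id, price_date
  from day_ends
 order by stock_id, price_date;

-- for a single stock, min()/max() can be rewritten as ORDER BY ... LIMIT 1
-- so the composite primary key index is usable:
select price_date
  from day_ends
 where stock_id = 100
 order by stock_id, price_date
 limit 1;

-- and, as found at the end of the thread, raising sort_mem (in KB, per backend)
-- lets the 7.4 planner pick a hashed aggregate instead of a huge sort:
set sort_mem = 65536;
select stock_id, min(price_date), max(price_date)
  from day_ends
 group by stock_id;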
[ { "msg_contents": "Alright.\n\nTo anyone who didn't get the news the first time:\nThe first set of benchmarks were terribly skewed because FreeBSD\ndidn't properly work with the hardware I was using. Thanks to\nthose who pointed the problem out to me.\n\nI have scrounged new hardware, and insured that FreeBSD is working\nproperly with it. And I have rerun many of the tests (not all).\n\nThanks again to all who responded with helpful tips and pointers.\n\nI'm hoping to work on these tests a little bit at a time, adding\nadditional tests and their results, so feel free to stop back and\ncheck the page every so often. I don't intend to announce future\nupdates on these lists unless there is something that seems\nconsiderably important found.\n\nIf you've made a suggestion on additional tests to run, please\nrest assured that I got your email and have added the test to my\nlist of things to try. I simply got more emails than I could\nrespond to. I apologize to everyone who didn't get a personal\nresponse.\n\nhttp://www.potentialtech.com/wmoran/postgresql.php\n\nAgain, feedback is welcome, but if it's of any magnitude similar\nto what I just got, I doubt I'll be able to respond to everyone.\nPlease don't be offended. I'm rather surprised at how popular\nthis information was.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Thu, 28 Aug 2003 21:08:05 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": true, "msg_subject": "Taking another shot at the PostgreSQL/filesystem benchmarks" } ]
[ { "msg_contents": "How To calcute PostgreSQL HDD grow capacity for every byte data, start from \ninstallation initialize.\n\nRegards,\n\nEko Pranoto\n", "msg_date": "Fri, 29 Aug 2003 10:44:40 +0700", "msg_from": "Eko Pranoto <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL HDD Grow capacity" }, { "msg_contents": "Eko Pranoto wrote:\n> How To calcute PostgreSQL HDD grow capacity for every byte data, start from \n> installation initialize.\n\nFirst, see the FAQ item about calculating row size. Second, see the\nchapter in the administration manual talking about computing disk space.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 30 Aug 2003 12:47:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL HDD Grow capacity" } ]
[ { "msg_contents": "Hi all,\n\nI have some tables (which can get pretty large) in which I want to record 'current' data as well as 'historical' data. This table has fields 'deleted' and 'deleteddate' (among other fields, of course). The field 'deleted' is false be default. Every record that I want to delete gets the value true for 'deleted' and 'deleteddate' is set to the date of deletion.\n\nSince these tables are used a lot by queries that only use 'current' data, I have created a view with a where clause 'Where not deleted'. Also, I have indexed field 'deleted'.\n\nI did this this because I read somewhere that fields that can contain NULL values will NOT be indexed.\n\nIs this true?\n\nOr could I ditch the 'deleted' field and just set 'deleteddate' to NULL by default and to a DATE in the case of a deleted record? I could then index the field 'deleteddate' and create a view with where clause 'Where deleteddate is null'.\n\nWould this give the same performance as my current solution (with an indexed boolean field 'deleted') ?\n\nI cannot test this myself at the moment as I am still in a design phase and do not have a real server available yet...\n\nThanks in advance,\n\nAlexander Priem\n\n\n\n\n\n\nHi all,\n \nI have some tables (which can get pretty large) in \nwhich I want to record 'current' data as well as 'historical' data. This \ntable has fields 'deleted' and 'deleteddate' (among other fields, of course). \nThe field 'deleted' is false be default. Every record that I want to delete gets \nthe value true for 'deleted' and 'deleteddate' is set to the date of \ndeletion.\n \nSince these tables are used a lot by queries that \nonly use 'current' data, I have created a view with a where clause 'Where not \ndeleted'. Also, I have indexed field 'deleted'.\n \nI did this this because I read somewhere that \nfields that can contain NULL values will NOT be indexed.\n \nIs this true?\n \nOr could I ditch the 'deleted' field and just set \n'deleteddate' to NULL by default and to a DATE in the case of a deleted record? \nI could then index the field 'deleteddate' and create a view with where clause \n'Where deleteddate is null'.\n \nWould this give the same performance as my current \nsolution (with an indexed boolean field 'deleted') ?\n \nI cannot test this myself at the moment as I am \nstill in a design phase and do not have a real server available \nyet...\n \nThanks in advance,\nAlexander Priem", "msg_date": "Fri, 29 Aug 2003 08:52:06 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": true, "msg_subject": "Indexing question" }, { "msg_contents": "> Hi all,\n> \n> I have some tables (which can get pretty large) in which I want to \n> record 'current' data as well as 'historical' data. This table has \n> fields 'deleted' and 'deleteddate' (among other fields, of course). The \n> field 'deleted' is false be default. Every record that I want to delete \n> gets the value true for 'deleted' and 'deleteddate' is set to the date \n> of deletion.\n> \n> Since these tables are used a lot by queries that only use 'current' \n> data, I have created a view with a where clause 'Where not deleted'. \n> Also, I have indexed field 'deleted'.\n\n<cut>\nI think the best choice for your case is using conditional indexes. It \nshould be much better than indexing 'deleted' field. 
I don't know on \nwhich exactly fields you have to create this index - you have to check \nit by yourself - what do you have in \"where\" clause?\n\nExample:\ncreate index some_index on your_table(id_field) where not deleted;\n\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Fri, 29 Aug 2003 09:03:18 +0200", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing question" }, { "msg_contents": "Remember to consider partial indexes:\n\neg. CREATE INDEX ON table (col) WHERE deletedate IS NOT NULL\n\nChris\n ----- Original Message ----- \n From: Alexander Priem \n To: [email protected] \n Sent: Friday, August 29, 2003 2:52 PM\n Subject: [PERFORM] Indexing question\n\n\n Hi all,\n\n I have some tables (which can get pretty large) in which I want to record 'current' data as well as 'historical' data. This table has fields 'deleted' and 'deleteddate' (among other fields, of course). The field 'deleted' is false be default. Every record that I want to delete gets the value true for 'deleted' and 'deleteddate' is set to the date of deletion.\n\n Since these tables are used a lot by queries that only use 'current' data, I have created a view with a where clause 'Where not deleted'. Also, I have indexed field 'deleted'.\n\n I did this this because I read somewhere that fields that can contain NULL values will NOT be indexed.\n\n Is this true?\n\n Or could I ditch the 'deleted' field and just set 'deleteddate' to NULL by default and to a DATE in the case of a deleted record? I could then index the field 'deleteddate' and create a view with where clause 'Where deleteddate is null'.\n\n Would this give the same performance as my current solution (with an indexed boolean field 'deleted') ?\n\n I cannot test this myself at the moment as I am still in a design phase and do not have a real server available yet...\n\n Thanks in advance,\n\n Alexander Priem\n\n\n\n\n\n\nRemember to consider partial indexes:\n \neg. CREATE INDEX ON table (col) WHERE deletedate IS NOT \nNULL\n \nChris\n\n----- Original Message ----- \nFrom:\nAlexander Priem \nTo: [email protected]\n\nSent: Friday, August 29, 2003 2:52 \n PM\nSubject: [PERFORM] Indexing \nquestion\n\nHi all,\n \nI have some tables (which can get pretty large) \n in which I want to record 'current' data as well as 'historical' data. \n This table has fields 'deleted' and 'deleteddate' (among other fields, of \n course). The field 'deleted' is false be default. Every record that I want to \n delete gets the value true for 'deleted' and 'deleteddate' is set to the date \n of deletion.\n \nSince these tables are used a lot by queries that \n only use 'current' data, I have created a view with a where clause 'Where not \n deleted'. Also, I have indexed field 'deleted'.\n \nI did this this because I read somewhere that \n fields that can contain NULL values will NOT be indexed.\n \nIs this true?\n \nOr could I ditch the 'deleted' field and just set \n 'deleteddate' to NULL by default and to a DATE in the case of a deleted \n record? 
I could then index the field 'deleteddate' and create a view with \n where clause 'Where deleteddate is null'.\n \nWould this give the same performance as my \n current solution (with an indexed boolean field 'deleted') ?\n \nI cannot test this myself at the moment as I am \n still in a design phase and do not have a real server available \n yet...\n \nThanks in advance,\nAlexander Priem", "msg_date": "Fri, 29 Aug 2003 15:05:52 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing question" }, { "msg_contents": "So if I understand correctly I could ditch the 'deleted' field entirely and\nuse just the 'deleteddate' field. This 'deleteddate' field would be NULL by\ndefault. It would contain a date value if the record is considered\n'deleted'.\n\nThe index would be 'create index a on tablename(deleteddate) where\ndeleteddate is null'.\n\nI could then access 'current' records with a view like 'create view x_view\nas select * from tablename where deleteddate is null'.\n\nIs that correct? This would be the best performing solution for this kind of\nthing, I think (theoretically at least)?\n\nKind regards,\nAlexander Priem.\n\n\n\n----- Original Message -----\nFrom: \"Tomasz Myrta\" <[email protected]>\nTo: \"Alexander Priem\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, August 29, 2003 9:03 AM\nSubject: Re: [PERFORM] Indexing question\n\n\n> > Hi all,\n> >\n> > I have some tables (which can get pretty large) in which I want to\n> > record 'current' data as well as 'historical' data. This table has\n> > fields 'deleted' and 'deleteddate' (among other fields, of course). The\n> > field 'deleted' is false be default. Every record that I want to delete\n> > gets the value true for 'deleted' and 'deleteddate' is set to the date\n> > of deletion.\n> >\n> > Since these tables are used a lot by queries that only use 'current'\n> > data, I have created a view with a where clause 'Where not deleted'.\n> > Also, I have indexed field 'deleted'.\n>\n> <cut>\n> I think the best choice for your case is using conditional indexes. It\n> should be much better than indexing 'deleted' field. I don't know on\n> which exactly fields you have to create this index - you have to check\n> it by yourself - what do you have in \"where\" clause?\n>\n> Example:\n> create index some_index on your_table(id_field) where not deleted;\n>\n>\n> Regards,\n> Tomasz Myrta\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n", "msg_date": "Fri, 29 Aug 2003 09:49:28 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexing question" }, { "msg_contents": "> So if I understand correctly I could ditch the 'deleted' field entirely and\n> use just the 'deleteddate' field. This 'deleteddate' field would be NULL by\n> default. It would contain a date value if the record is considered\n> 'deleted'.\n> \n> The index would be 'create index a on tablename(deleteddate) where\n> deleteddate is null'.\n> \n> I could then access 'current' records with a view like 'create view x_view\n> as select * from tablename where deleteddate is null'.\n> \n> Is that correct? This would be the best performing solution for this kind of\n> thing, I think (theoretically at least)?\n> \n> Kind regards,\n> Alexander Priem.\n\nNear, but not exactly. 
You don't need field deleted - it's true.\n\nYour example:\ncreate index a on tablename(deleteddate) where deleteddate is null\nwe can translate to:\ncreate index a on tablename(NULL) where deleteddate is null\nwhich doesn't make too much sense.\n\nCheck your queries. You probably have something like this:\nselect * from tablename where not deleted and xxx\n\nCreate your index to match xxx clause - if xxx is \"some_id=13\", then \ncreate your index as:\ncreate index on tablename(some_id) where deleteddate is null;\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Fri, 29 Aug 2003 09:59:10 +0200", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing question" }, { "msg_contents": "\n> So if I understand correctly I could ditch the 'deleted' field entirely\nand\n> use just the 'deleteddate' field. This 'deleteddate' field would be NULL\nby\n> default. It would contain a date value if the record is considered\n> 'deleted'.\n>\n> The index would be 'create index a on tablename(deleteddate) where\n> deleteddate is null'.\n>\n> I could then access 'current' records with a view like 'create view x_view\n> as select * from tablename where deleteddate is null'.\n>\n> Is that correct? This would be the best performing solution for this kind\nof\n> thing, I think (theoretically at least)?\n\nYes, I think it would be best. Definitely better than your current\nsolution.\n\nCheers,\n\nChris\n\n", "msg_date": "Fri, 29 Aug 2003 16:14:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing question" }, { "msg_contents": "I think I understand what you mean :)\n\nLet's see if that's true :\n\nThe entire table WAS like this: (just one example table, I have many more)\n\ncreate table orderadvice (\norad_id serial primary key,\norad_name varchar(25) unique not null,\norad_description varchar(50) default null,\norad_value integer not null default 0,\norad_value_quan integer references quantity (quan_id) not null default 0,\norad_deleted boolean not null default false,\norad_deleteddate date default null,\norad_deletedby integer references systemuser (user_id) default null )\nwithout oids;\n\nIndexes were like this:\n\ncreate index orad_deleted_index on orderadvice (orad_deleted);\n(orad_id and orad_name indexed implicitly in the create table statement)\n\nA view on this table:\n\ncreate view orderadvice_edit as select\norad_id,orad_name,orad_description,orad_value,orad_value_quan from\norderadvice where not orad_deleted;\n\nMost queries on this view would be like 'select * from orderadvice_edit\nwhere orad_id=100' or 'select * from orderadvice_edit order by orad_name'.\n\nHow about the following script. 
Would it be better, given the type of\nqueries that would take place on this table?\n\ncreate table orderadvice (\norad_id serial primary key,\norad_name varchar(25) not null,\norad_description varchar(50) default null,\norad_value integer not null default 0,\norad_value_quan integer references quantity (quan_id) not null default 0,\norad_deleteddate date default null,\norad_deletedby integer references systemuser (user_id) default null )\nwithout oids;\n\ncreate index orad_id_index on orderadvice (orad_id) where orad_deleteddate\nis null;\ncreate index orad_name_index on orderadvice (orad_name) where\norad_deleteddate is null;\n\ncreate view orderadvice_edit as select\norad_id,orad_name,orad_description,orad_value,orad_value_quan from\norderadvice where orad_deleteddate is null;\n\nWould queries like 'select * from orderadvice_edit where orad_id=100' or\n'select * from orderadvice_edit order by orad_name' both use one of these\ntwo partial indexes, given enough records are present in the table?\n\nThere would be a double index on the primary key this way, right?\n\nThanks for your advice so far,\nAlexander Priem.\n\n\n\n\n\n\n----- Original Message -----\nFrom: \"Tomasz Myrta\" <[email protected]>\nTo: \"Alexander Priem\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, August 29, 2003 9:59 AM\nSubject: Re: [PERFORM] Indexing question\n\n\n> > So if I understand correctly I could ditch the 'deleted' field entirely\nand\n> > use just the 'deleteddate' field. This 'deleteddate' field would be NULL\nby\n> > default. It would contain a date value if the record is considered\n> > 'deleted'.\n> >\n> > The index would be 'create index a on tablename(deleteddate) where\n> > deleteddate is null'.\n> >\n> > I could then access 'current' records with a view like 'create view\nx_view\n> > as select * from tablename where deleteddate is null'.\n> >\n> > Is that correct? This would be the best performing solution for this\nkind of\n> > thing, I think (theoretically at least)?\n> >\n> > Kind regards,\n> > Alexander Priem.\n>\n> Near, but not exactly. You don't need field deleted - it's true.\n>\n> Your example:\n> create index a on tablename(deleteddate) where deleteddate is null\n> we can translate to:\n> create index a on tablename(NULL) where deleteddate is null\n> which doesn't make too much sense.\n>\n> Check your queries. You probably have something like this:\n> select * from tablename where not deleted and xxx\n>\n> Create your index to match xxx clause - if xxx is \"some_id=13\", then\n> create your index as:\n> create index on tablename(some_id) where deleteddate is null;\n>\n> Regards,\n> Tomasz Myrta\n>\n\n", "msg_date": "Fri, 29 Aug 2003 10:41:04 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexing question" }, { "msg_contents": "> create index orad_id_index on orderadvice (orad_id) where orad_deleteddate\n> is null;\n> create index orad_name_index on orderadvice (orad_name) where\n> orad_deleteddate is null;\n> \n> create view orderadvice_edit as select\n> orad_id,orad_name,orad_description,orad_value,orad_value_quan from\n> orderadvice where orad_deleteddate is null;\n> \n> Would queries like 'select * from orderadvice_edit where orad_id=100' or\n> 'select * from orderadvice_edit order by orad_name' both use one of these\n> two partial indexes, given enough records are present in the table?\n> \n> There would be a double index on the primary key this way, right?\n\nIt looks much better now. 
I'm not sure about the second index. Probably \nit will be useless, because you sort ALL records with deleteddtata is \nnull. Maybe the first index will be enough.\n\nI'm not sure what to do with doubled index on a primary key field.\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Fri, 29 Aug 2003 10:57:25 +0200", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing question" }, { "msg_contents": "The first index is for sorting on orad_id, the second one for sorting on\norad_name. The first one would be useful for queries like 'select * from\norderadvice_edit where orad_id=100', the second one for queries like 'select\n* from orderadvice_edit order by orad_name'. Right?\n\nDoes anyone know whether it is bad practise to have two indexes on the\nprimary key of a table? (one 'primary key' index and one partial index)\n\n\n\n\n----- Original Message -----\nFrom: \"Tomasz Myrta\" <[email protected]>\nTo: \"Alexander Priem\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, August 29, 2003 10:57 AM\nSubject: Re: [PERFORM] Indexing question\n\n\n> > create index orad_id_index on orderadvice (orad_id) where\norad_deleteddate\n> > is null;\n> > create index orad_name_index on orderadvice (orad_name) where\n> > orad_deleteddate is null;\n> >\n> > create view orderadvice_edit as select\n> > orad_id,orad_name,orad_description,orad_value,orad_value_quan from\n> > orderadvice where orad_deleteddate is null;\n> >\n> > Would queries like 'select * from orderadvice_edit where orad_id=100' or\n> > 'select * from orderadvice_edit order by orad_name' both use one of\nthese\n> > two partial indexes, given enough records are present in the table?\n> >\n> > There would be a double index on the primary key this way, right?\n>\n> It looks much better now. I'm not sure about the second index. Probably\n> it will be useless, because you sort ALL records with deleteddtata is\n> null. Maybe the first index will be enough.\n>\n> I'm not sure what to do with doubled index on a primary key field.\n>\n> Regards,\n> Tomasz Myrta\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n", "msg_date": "Fri, 29 Aug 2003 11:08:21 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexing question" }, { "msg_contents": "\"Alexander Priem\" <[email protected]> writes:\n> Does anyone know whether it is bad practise to have two indexes on the\n> primary key of a table? (one 'primary key' index and one partial index)\n\nIt's a little unusual, but if you get enough performance boost from it\nto justify the maintenance cost of the extra index, then I can't see\nanything wrong with it.\n\nThe \"if\" is worth checking though. I missed the start of this thread,\nbut what percentage of your rows do you expect to have null deleteddate?\nUnless it's a pretty small percentage, I'm unconvinced that the extra\nindexes will be worth their cost.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Aug 2003 10:00:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing question " }, { "msg_contents": "Well, the intention is to hold every record that ever existed in the table.\nTherefore, records do not get deleted, but they get a date in the\ndeleteddate field. 
This way, we can track what changes were made to the\ntable(s).\n\nSo if a record gets 'deleted', the field 'deleted' is set to today's date.\nIf a record gets 'updated', a new record is made containing the new data,\nand the old record is marked as 'deleted'.\n\nSo the percentage of 'deleted' records will grow with time, if you\nunderstand what I mean.\n\n\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Alexander Priem\" <[email protected]>\nCc: \"Tomasz Myrta\" <[email protected]>; <[email protected]>\nSent: Friday, August 29, 2003 4:00 PM\nSubject: Re: [PERFORM] Indexing question\n\n\n> \"Alexander Priem\" <[email protected]> writes:\n> > Does anyone know whether it is bad practise to have two indexes on the\n> > primary key of a table? (one 'primary key' index and one partial index)\n>\n> It's a little unusual, but if you get enough performance boost from it\n> to justify the maintenance cost of the extra index, then I can't see\n> anything wrong with it.\n>\n> The \"if\" is worth checking though. I missed the start of this thread,\n> but what percentage of your rows do you expect to have null deleteddate?\n> Unless it's a pretty small percentage, I'm unconvinced that the extra\n> indexes will be worth their cost.\n>\n> regards, tom lane\n\n", "msg_date": "Fri, 29 Aug 2003 17:13:52 +0200", "msg_from": "\"Alexander Priem\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexing question " }, { "msg_contents": "On Fri, Aug 29, 2003 at 05:13:52PM +0200, Alexander Priem wrote:\n> Well, the intention is to hold every record that ever existed in the table.\n> Therefore, records do not get deleted, but they get a date in the\n> deleteddate field. This way, we can track what changes were made to the\n> table(s).\n> \n> So if a record gets 'deleted', the field 'deleted' is set to today's date.\n> If a record gets 'updated', a new record is made containing the new data,\n> and the old record is marked as 'deleted'.\n> \n> So the percentage of 'deleted' records will grow with time, if you\n> understand what I mean.\n\nDid you consider a two table implimentation. 1 table \"live_table\"\ncontaining the non-deleted records, a second table \"deleted_table\"\ncontaining the deleted records, along with the \"deleted_date\" field. Keep\nthe two in sync column type/number wise, and use a before delete trigger\nfunction on \"live_table\" to actually insert a copy of the deleted row plus\n\"deleted_date\" into \"deleted_table\" before performing the delete on\n\"live_table\".\n\nYou could also use a before update trigger to keep old copies of updated\nrecords in the same way.\n\nThen you would only incur the performance loss of scanning/etc. 
the deleted\nrecords when you actually need to pull up deleted plus live records.\n\n", "msg_date": "Fri, 29 Aug 2003 12:58:51 -0400", "msg_from": "Richard Ellis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing question" }, { "msg_contents": "Hi,\n\n> I have some tables (which can get pretty large) in which I want to \n> record 'current' data as well as 'historical' data.\n\nAnother solution can be using a trigger and function to record every\ntransaction to a 'logging' table.\nThis way you'll have one 'current' table and one 'historical' table\nThe 'historical' table will contain every transaction recorded from\nthe current table.\n\nRegards\nRudi.\n\n\n\n", "msg_date": "Sat, 30 Aug 2003 11:07:43 +1000", "msg_from": "\"Rudi Starcevic\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing question" } ]
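For anyone wanting to try the two-table variant Richard describes above, a minimal sketch might look like the following. The table, column, and function names are illustrative only (loosely modelled on the orderadvice table from earlier in this thread), and it assumes the plpgsql language is installed in the database.

create table orderadvice_live (
    orad_id     integer primary key,
    orad_name   varchar(25) not null,
    orad_value  integer not null default 0
);

create table orderadvice_deleted (
    orad_id       integer,
    orad_name     varchar(25),
    orad_value    integer,
    deleted_date  date
);

create or replace function archive_orderadvice() returns trigger as '
begin
    -- copy the row being removed into the history table, stamped with today
    insert into orderadvice_deleted (orad_id, orad_name, orad_value, deleted_date)
    values (old.orad_id, old.orad_name, old.orad_value, current_date);
    return old;   -- let the delete on the live table go ahead
end;
' language 'plpgsql';

create trigger orderadvice_archive
    before delete on orderadvice_live
    for each row execute procedure archive_orderadvice();

A similar before-update trigger can keep old versions of updated rows, as Richard notes. The live table then stays small, at the cost of a second query (or a UNION) whenever deleted and current rows have to be reported together.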
[ { "msg_contents": "Hi,\n\nI'm using PostgreSQL 7.3.4 and noticed a havy performance issue when\nusing the datatype text for PL/pgSQL functions instead of varchar.\n\nThis is the table:\n\nCREATE TABLE user_login_table (\n id serial,\n username varchar(100),\n PRIMARY ID (id),\n UNIQUE (username)\n);\n\nThis table contains ~ 500.000 records. The database runs on a P4 with\n512 MB RAM. When using the following functions, I notice a havy\nspeed difference:\n\n\nCREATE OR REPLACE FUNCTION get_foo_exists (varchar(100))\nRETURNS bool\nAS '\n BEGIN\n PERFORM username\n FROM user_login_table\n WHERE username = $1;\n\n RETURN FOUND;\n END;\n'\nLANGUAGE 'plpgsql';\n\nCREATE OR REPLACE FUNCTION get_foo_exists2 (text)\nRETURNS bool\nAS '\n BEGIN\n PERFORM username\n FROM user_login_table\n WHERE username = $1;\n\n RETURN FOUND;\n END;\n'\nLANGUAGE 'plpgsql';\n\n\n\nThe function 'get_foo_exists (varchar(100))' is extremly fast\n(can't estimate - < 0.5 seconds). The function 'get_foo_exists2 (text)'\ntakes about 3 seconds for the same operation.\nIs that normal?\n\n\nBye,\nOliver\n\n", "msg_date": "Fri, 29 Aug 2003 15:54:46 +0200", "msg_from": "Oliver Siegmar <[email protected]>", "msg_from_op": true, "msg_subject": "PL/pgSQL functions - text / varchar - havy performance issue?!" }, { "msg_contents": "Oliver Siegmar wrote:\n> Hi,\n> \n> I'm using PostgreSQL 7.3.4 and noticed a havy performance issue when\n> using the datatype text for PL/pgSQL functions instead of varchar.\n> \n> This is the table:\n> \n> CREATE TABLE user_login_table (\n> id serial,\n> username varchar(100),\n> PRIMARY ID (id),\n> UNIQUE (username)\n> );\n> \n> This table contains ~ 500.000 records. The database runs on a P4 with\n> 512 MB RAM. When using the following functions, I notice a havy\n> speed difference:\n> \n> \n> CREATE OR REPLACE FUNCTION get_foo_exists (varchar(100))\n> RETURNS bool\n> AS '\n> BEGIN\n> PERFORM username\n> FROM user_login_table\n> WHERE username = $1;\n> \n> RETURN FOUND;\n> END;\n> '\n> LANGUAGE 'plpgsql';\n> \n> CREATE OR REPLACE FUNCTION get_foo_exists2 (text)\n> RETURNS bool\n> AS '\n> BEGIN\n> PERFORM username\n> FROM user_login_table\n> WHERE username = $1;\n> \n> RETURN FOUND;\n> END;\n> '\n> LANGUAGE 'plpgsql';\n> \n> \n> \n> The function 'get_foo_exists (varchar(100))' is extremly fast\n> (can't estimate - < 0.5 seconds). The function 'get_foo_exists2 (text)'\n> takes about 3 seconds for the same operation.\n> Is that normal?\n\nI don't know if it's normal for it to be that slow, but I would\nexpect it to be slower.\n\nPostgres has to convert the text to a varchar before it can actually\ndo anything. It's possible (though I'm not sure) that it has to\ndo the conversion with each record it looks at.\n\nEvery language I know of hits performance issues when you have to\nconvert between types. 
I wouldn't _think_ that it would be that\nmuch work converting between text and varchar, but I'm not familiar\nenough with the server code to know what's actually involved.\n\nWhat kind of performance do you get if you accept a text value\nand then manually convert it to a varchar?\n\ni.e.\n\nCREATE OR REPLACE FUNCTION get_foo_exists2 (text)\nRETURNS bool\nAS '\n DECLARE\n tempvar VARCHAR(100);\n BEGIN\n tempvar := $1;\n PERFORM username\n FROM user_login_table\n WHERE username = tempvar;\n\n RETURN FOUND;\n END;\n'\nLANGUAGE 'plpgsql';\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Fri, 29 Aug 2003 10:46:44 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL functions - text / varchar - havy performance" }, { "msg_contents": "Hi Bill,\n\nOn Friday 29 August 2003 16:46, you wrote:\n> Postgres has to convert the text to a varchar before it can actually\n> do anything. It's possible (though I'm not sure) that it has to\n> do the conversion with each record it looks at.\n\nNope. I tested you function with the temporary varchar variable...it\nis as slow as the 'text-only' varayity.\n\n> Every language I know of hits performance issues when you have to\n> convert between types. I wouldn't _think_ that it would be that\n> much work converting between text and varchar, but I'm not familiar\n> enough with the server code to know what's actually involved.\n\nI have absolutely no idea how pgsql handles text/varchar stuff\nin its server code. But ~ 3 seconds for that small function is ways\nto slow in any case.\n\n\nBye,\nOliver\n\n", "msg_date": "Fri, 29 Aug 2003 17:01:51 +0200", "msg_from": "Oliver Siegmar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PL/pgSQL functions - text / varchar - havy performance issue?!" }, { "msg_contents": "On Fri, Aug 29, 2003 at 10:46:44AM -0400, Bill Moran wrote:\n> \n> Postgres has to convert the text to a varchar before it can actually\n> do anything. It's possible (though I'm not sure) that it has to\n> do the conversion with each record it looks at.\n\nIt does? According to the docs, varchar is just syntactic sugar for\ntext. In fact, text and varchar() are supposed to be exactly the\nsame.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 29 Aug 2003 11:01:51 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL functions - text / varchar - havy performance" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Fri, Aug 29, 2003 at 10:46:44AM -0400, Bill Moran wrote:\n> \n>>Postgres has to convert the text to a varchar before it can actually\n>>do anything. It's possible (though I'm not sure) that it has to\n>>do the conversion with each record it looks at.\n> \n> It does? According to the docs, varchar is just syntactic sugar for\n> text. In fact, text and varchar() are supposed to be exactly the\n> same.\n\nReally? Well, if I'm wrong, I'm wrong. 
Wouldn't be the first time.\n\nHave any explanation as to why that function is so slow?\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n", "msg_date": "Fri, 29 Aug 2003 11:34:13 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL functions - text / varchar - havy performance" }, { "msg_contents": "On Fri, Aug 29, 2003 at 11:34:13AM -0400, Bill Moran wrote:\n> Have any explanation as to why that function is so slow?\n\nSorry, no. It might have to do with the planning, though. I believe\nthe funciton is planned the first time it is run. It may need to be\nmarked as \"STABLE\" in order to use any indexes, and that could be\npart of the problem.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 29 Aug 2003 11:54:01 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL functions - text / varchar - havy performance" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Fri, Aug 29, 2003 at 11:34:13AM -0400, Bill Moran wrote:\n>> Have any explanation as to why that function is so slow?\n\n> Sorry, no. It might have to do with the planning, though.\n\nSpecifically, I'll bet he's getting an indexscan plan with one and not\nwith the other. It's just ye olde cross-datatype-comparisons-aren't-\nindexable problem. \"varchar = varchar\" matches the index on the varchar\ncolumn, but \"text = text\" is a different operator that doesn't match.\nGuess which one gets selected when the initial input is \"varchar = text\".\n\n7.4 has fixed this particular problem by essentially eliminating the\nseparate operators for varchar, but in prior releases the behavior\nOliver describes is entirely to be expected. A workaround is to\ncast inside the function:\n\n\t... where varcharcolumn = textarg::varchar;\n\nso that \"=\" gets interpreted as \"varchar = varchar\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Aug 2003 12:18:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PL/pgSQL functions - text / varchar - havy performance " } ]
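Applied to the slow function from the start of this thread, the cast workaround Tom Lane describes (for releases before 7.4) would look roughly like this:

CREATE OR REPLACE FUNCTION get_foo_exists2 (text)
RETURNS bool
AS '
    BEGIN
        PERFORM username
        FROM user_login_table
        -- cast the text argument so the comparison is varchar = varchar
        -- and the unique index on username can be used
        WHERE username = $1::varchar;

        RETURN FOUND;
    END;
'
LANGUAGE 'plpgsql';

On 7.4 and later the separate varchar operators are gone, so the original text version should behave the same as the varchar one.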
[ { "msg_contents": "Hi all,\n\nI compared 2.6 with elevator=deadline. It did bring some improvement in \nperformance. But still it does not beat 2.4.\n\nAttached are three files for details.\n\nI also ran a simple insert benchmark to insert a million record in a simple \ntable with a small int and a varchar(30). \n\nHere are the results\n\n2.6 deadline\n1K/xact\t\t299sec\n10K/xact\t\t277 sec\n100K/xact\t\t271 sec\n\n2.6 AS\n1K/xact\t\t262sec\n10K/xact\t\tNot done\n100K/xact\t\t257 sec\n\n2.6 AS\n1K/xact\t\t252sec\n10K/xact\t\t243 sec\n100K/xact\t\t246 sec\n\nIt seems that I noted a test result wrongly. I need to do it again.\n\nOverall 2.6 needs some real IO improvements. Of course it could do better on \nmultiway machine.\n\nI guess there is no point bothering this with kernel hackers. They know this \nstuff already, right.\n\nLooking forward to next release of kernel and hope it improves things...\n\n Shridhar", "msg_date": "Fri, 29 Aug 2003 21:59:14 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": true, "msg_subject": "2.4 v/s 2.6 again." } ]
[ { "msg_contents": "I was ecstatic to hear that postgresql.com is releasing the eRServer\nreplication project to postgresql.org as open source! I'm anxious\nto get my hands on it -- actually I'm desperate: I'm under pressure to\nproduce a warm-failover server for our lab. I REALLY would like to\nget hands on this code soon!\n\nDoes anyone know how and when the actual release will happen?\nI would be glad to be an alpha tester and promise to contribute\nback bug-reports/patches. I'll take cvs or tar.gz or paper tape or\nstone tablets engraved in high Elvish...\n\n-- George\n\n-- \n I cannot think why the whole bed of the ocean is\n not one solid mass of oysters, so prolific they seem. Ah,\n I am wandering! Strange how the brain controls the brain!\n\t-- Sherlock Holmes in \"The Dying Detective\"\n", "msg_date": "Fri, 29 Aug 2003 13:19:35 -0400", "msg_from": "george young <[email protected]>", "msg_from_op": true, "msg_subject": "sourcecode for newly release eRServer?" }, { "msg_contents": "On Fri, Aug 29, 2003 at 01:19:35PM -0400, george young wrote:\n> Does anyone know how and when the actual release will happen?\n\nSee the erserver project on gborg. It's out. There's a list, too;\nany problems, send 'em there.\n\nA\n\n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 29 Aug 2003 14:51:09 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sourcecode for newly release eRServer?" }, { "msg_contents": "> Does anyone know how and when the actual release will happen?\n> I would be glad to be an alpha tester and promise to contribute\n> back bug-reports/patches. I'll take cvs or tar.gz or paper tape or\n> stone tablets engraved in high Elvish...\n\nI think someone should call him on that :P\n\nChris\n\n\n", "msg_date": "Sat, 30 Aug 2003 12:50:31 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sourcecode for newly release eRServer?" } ]
[ { "msg_contents": "Hi List,\n\n\n I still have performance problems with this sql:\n\nSELECT /*+ */\nftnfco00.estado_cliente ,\nftcofi00.grupo_faturamento ,\nSUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL(ftnfpr00.qtde_duzias,0)), \n'+', NVL(ftnfpr00.qtde_duzias,0), 0) ) ,\nSUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL(ftnfpr00.vlr_liquido,0)), '+',\nNVL(ftnfpr00.vlr_liquido,0), 0) ) ,\nftprod00.tipo_cadastro||ftprod00.codigo_produto ,\nftprod00.descricao_produto ,\nDIVIDE( SUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL(ftnfpr00.vlr_liquido,0)),\n'+', NVL(ftnfpr00.vlr_liquido,0), 0)*ftnfpr00.margem_comercial ),\n SUM( DECODE( ftcofi00.atual_fatura, '-',\n-(NVL(ftnfpr00.vlr_liquido,0)), '+', NVL(ftnfpr00.vlr_liquido,0), 0)) ) ,\nSUM( DECODE( ftcofi00.nf_prodgratis, 'S', NVL(ftnfpr00.qtde_duzias,0), 0 ) ) ,\nSUM( DECODE( ftcofi00.nf_prodgratis, 'S', NVL(ftnfpr00.vlr_liquido,0), 0 ) )\nFROM\nftprod00 ,\nftnfco00 ,\nftcgma00 ,\nftcgca00 ,\nftspro00 , \nftclcr00 ,\ngsames00 ,\nftcofi00 ,\nftrepr00 ,\ngsesta00 ,\nftsupv00 ,\nftgrep00 ,\nftclgr00 ,\nftband00 ,\nfttcli00 ,\nftredc00 ,\nftnfpr00\nWHERE\nftnfco00.emp = 909 AND\nftnfpr00.fil IN ('101') AND\nftnfco00.situacao_nf = 'N' AND\nTO_CHAR(ftnfco00.data_emissao,'YYYYMM') >= '200208' AND\nTO_CHAR(ftnfco00.data_emissao,'YYYYMM') <= '200303' AND\nftcofi00.grupo_faturamento >= '01' AND\n(ftcofi00.atual_fatura IN ('+','-') OR ftcofi00.nf_prodgratis = 'S') AND\nftcgma00.emp = ftprod00.emp AND\nftcgma00.fil = ftprod00.fil AND\nftcgma00.codigo = ftprod00.cla_marca AND \nftcgca00.emp = ftprod00.emp AND\nftcgca00.fil = ftprod00.fil AND \nftcgca00.codigo = ftprod00.cla_categoria AND\nftspro00.emp = ftprod00.emp AND\nftspro00.fil = ftprod00.fil AND\nftspro00.codigo = ftprod00.situacao AND\nftclcr00.emp = ftnfco00.emp AND\nftclcr00.fil = ftnfco00.empfil AND\nftclcr00.tipo_cadastro = ftnfco00.tipo_cad_clicre AND\nftclcr00.codigo = ftnfco00.cod_cliente AND\ngsames00.ano_mes = TO_CHAR(ftnfco00.data_emissao,'YYYYMM') AND\nftcofi00.emp = ftnfco00.emp AND\nftcofi00.fil = ftnfco00.empfil AND\nftcofi00.codigo_fiscal = ftnfco00.cod_fiscal AND\nftrepr00.emp = ftnfco00.emp AND\nftrepr00.fil = ftnfco00.empfil AND\nftrepr00.codigo_repr = ftnfco00.cod_repres AND\ngsesta00.estado_sigla = ftnfco00.estado_cliente AND\nftsupv00.emp = ftrepr00.emp AND\nftsupv00.fil = ftrepr00.fil AND\nftsupv00.codigo_supervisor = ftrepr00.codigo_supervisor AND\nftgrep00.emp = ftrepr00.emp AND\nftgrep00.fil = ftrepr00.fil AND\nftgrep00.codigo_grupo_rep = ftrepr00.codigo_grupo_rep AND\nftclgr00.emp = ftclcr00.emp AND\nftclgr00.fil = ftclcr00.fil AND\nftclgr00.codigo = ftclcr00.codigo_grupo_cliente AND\nftband00.emp = ftclcr00.emp AND\nftband00.fil = ftclcr00.fil AND\nftband00.codigo = ftclcr00.bandeira_cliente AND\nfttcli00.emp = ftclcr00.emp AND\nfttcli00.fil = ftclcr00.fil AND\nfttcli00.cod_tipocliente = ftclcr00.codigo_tipo_cliente AND\nftredc00.emp = ftclcr00.emp AND\nftredc00.fil = ftclcr00.fil AND\nftredc00.tipo_contribuinte = ftclcr00.tipo_contribuinte AND\nftredc00.codigo_rede = ftclcr00.codigo_rede AND\ngsesta00.estado_sigla = ftclcr00.emp_estado AND\nftnfco00.emp = ftnfpr00.emp AND\nftnfco00.fil = ftnfpr00.fil AND\nftnfco00.nota_fiscal = ftnfpr00.nota_fiscal AND\nftnfco00.serie = ftnfpr00.serie AND\nftnfco00.data_emissao = ftnfpr00.data_emissao AND\nftprod00.emp = ftnfpr00.emp AND\nftprod00.fil = ftnfpr00.empfil AND\nftprod00.tipo_cadastro = ftnfpr00.tipo_cad_promat AND\nftprod00.codigo_produto= ftnfpr00.cod_produto\nGROUP BY\nftnfco00.estado_cliente ,\nftcofi00.grupo_faturamento 
,\nftprod00.tipo_cadastro||ftprod00.codigo_produto ,\nftprod00.descricao_produto\n\nI have created some oracle function in the database 'cuz I want the same\nalication to use both Oracle or PostgreSQL without changing any source.\n\natached follow tha explain analyze for this query and my postgresql.conf.\n\nI still searching a way to make it faster. I've tried to change a lot of\nvariables values like sort_mem, effective_cache_size, fsync, ...\nI change the machine box from a Pentium III 1Ghz with 256 RAM to a P4 1.7 with\n512 RAM DDR.\n I don't know what else to do !\n\nAtenciosamente,\n\nRhaoni Chiu Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122", "msg_date": "Fri, 29 Aug 2003 17:59:19 -0300", "msg_from": "Rhaoni Chiu Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "SQL performance problems" }, { "msg_contents": "Rhaoni Chiu Pereira <[email protected]> writes:\n> I still have performance problems with this sql:\n\nIt seems odd that all the joins are being done as nestloops. Perhaps\nyou shouldn't be forcing enable_seqscan off?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Aug 2003 18:03:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL performance problems " }, { "msg_contents": "Hi,\n\n\tEstimated and actual rows differ a lot. Did you a VACUUM ANALYZE so\nthat the optimizer could update its statistics?\n\n\tAlso it would be great if you could provide more information, as your\nPostgreSQL version, your table and indexes descriptions, etc. Have a\nlook at: \n\nhttp://techdocs.postgresql.org/guides/SlowQueryPostingGuidelines\n\n\tPlease excuse me if you have already done it. I had a quick look at the\nlist's archives and didn't found it in your prior posts.\n\n\tRegards,\n-- \nAlberto Caso Palomino\nAdaptia Soluciones Integrales\nhttp://www.adaptia.net\[email protected]", "msg_date": "Mon, 01 Sep 2003 00:16:38 +0200", "msg_from": "Alberto Caso <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL performance problems" }, { "msg_contents": "On Mon, 01-09-2003 at 13:42, Rhaoni Chiu Pereira wrote: \n> I've done that but it didn't make much difference.\n> Do you know some documentation on explain ? I don't understand the results..\n> \n\nhttp://developer.postgresql.org/docs/postgres/sql-explain.html\nhttp://developer.postgresql.org/docs/postgres/performance-tips.html\n\nAlso this list's archives (which can be found at\nhttp://archives.postgresql.org/pgsql-performance/ ) are a good source\nof info on the subject.\n\nBest Regards,\n\n-- \nAlberto Caso Palomino\nAdaptia Soluciones Integrales\nhttp://www.adaptia.net\[email protected]", "msg_date": "Mon, 01 Sep 2003 14:11:40 +0200", "msg_from": "Alberto Caso <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL performance problems" } ]
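Two cheap things to try against a query like the one above: refresh the planner statistics as Alberto suggests, and leave enable_seqscan on, per Tom's comment. A further general technique (not raised in this thread itself) is to compare the date column directly instead of wrapping it in TO_CHAR, so that an ordinary index on data_emissao can at least be considered; this assumes data_emissao is a date or timestamp column, and the index name below is made up.

vacuum analyze;              -- update the planner's statistics
set enable_seqscan to on;    -- don't leave sequential scans forcibly disabled

create index ftnfco00_data_emissao on ftnfco00 (data_emissao);

-- The filter TO_CHAR(data_emissao,'YYYYMM') >= '200208' AND <= '200303'
-- is equivalent to a plain range on the column itself:
--   ... AND ftnfco00.data_emissao >= '2002-08-01'
--       AND ftnfco00.data_emissao <  '2003-04-01'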
[ { "msg_contents": "I'm trying to understand how I can get the planner to always do the\nright thing with this query:\n\n EXPLAIN ANALYZE\n SELECT\n\taa_t.min_date_time\n FROM\n\taa_t\n\t, bb_t\n\t, cc_t\n WHERE bb_t.bb_id = aa_t.bb_id\n\tAND aa_t.realm_id = cc_t.realm_id\n\tAND aa_t.server_id = 21\n ORDER BY aa_t.min_date_time desc\n LIMIT 1\n OFFSET 674\n ;\n\nThere's a extreme elbow in the performance curve around the sum of\nLIMIT and OFFSET. The two plans follow. First for the query above:\n\n Limit (cost=21569.56..21601.56 rows=1 width=84) (actual time=59.60..59.69 rows=1 loops=1)\n -> Nested Loop (cost=0.00..110535.66 rows=3454 width=84) (actual time=0.19..59.20 rows=676 loops=1)\n -> Nested Loop (cost=0.00..93177.46 rows=3454 width=65) (actual time=0.14..44.41 rows=676 loops=1)\n -> Index Scan Backward using aa_t20 on aa_t (cost=0.00..76738.77 rows=3454 width=46) (actual time=0.10..31.30 rows=676 loops=1)\n Filter: (server_id = 21::numeric)\n -> Index Scan using cc_t1 on cc_t (cost=0.00..4.75 rows=1 width=19) (actual time=0.01..0.01 rows=1 loops=676)\n Index Cond: (\"outer\".realm_id = cc_t.realm_id)\n -> Index Scan using bb_t1 on bb_t (cost=0.00..5.01 rows=1 width=19) (actual time=0.02..0.02 rows=1 loops=676)\n Index Cond: (bb_t.bb_id = \"outer\".bb_id)\n Total runtime: 59.89 msec\n(10 rows)\n\nSetting OFFSET to 675 in the above query, results in this 100 times\nslower plan:\n\n Limit (cost=21614.48..21614.48 rows=1 width=84) (actual time=4762.39..4762.39 rows=1 loops=1)\n -> Sort (cost=21612.79..21621.42 rows=3454 width=84) (actual time=4761.45..4761.92 rows=677 loops=1)\n Sort Key: aa_t.min_date_time\n -> Merge Join (cost=21139.96..21409.80 rows=3454 width=84) (actual time=4399.80..4685.24 rows=41879 loops=1)\n Merge Cond: (\"outer\".bb_id = \"inner\".bb_id)\n -> Sort (cost=8079.83..8184.53 rows=41879 width=19) (actual time=936.99..967.37 rows=41879 loops=1)\n Sort Key: bb_t.bb_id\n -> Seq Scan on bb_t (cost=0.00..4864.79 rows=41879 width=19) (actual time=0.06..729.60 rows=41879 loops=1)\n -> Sort (cost=13060.13..13068.76 rows=3454 width=65) (actual time=3462.76..3493.97 rows=41879 loops=1)\n Sort Key: aa_t.bb_id\n -> Merge Join (cost=12794.42..12857.14 rows=3454 width=65) (actual time=2923.62..3202.78 rows=41879 loops=1)\n Merge Cond: (\"outer\".realm_id = \"inner\".realm_id)\n -> Sort (cost=12762.78..12771.41 rows=3454 width=46) (actual time=2920.78..2950.87 rows=41879 loops=1)\n Sort Key: aa_t.realm_id\n -> Index Scan using aa_t5 on aa_t (cost=0.00..12559.79 rows=3454 width=46) (actual time=0.18..2589.22 rows=41879 loops=1)\n Index Cond: (server_id = 21::numeric)\n -> Sort (cost=31.64..32.78 rows=455 width=19) (actual time=2.54..33.12 rows=42163 loops=1)\n Sort Key: cc_t.realm_id\n -> Seq Scan on cc_t (cost=0.00..11.55 rows=455 width=19) (actual time=0.04..0.86 rows=455 loops=1)\n Total runtime: 4792.84 msec\n(20 rows)\n\nTwiddling effective_cache_size and random_page_cost allows for a large\nLIMIT+OFFSET number but not enough. These tests are made with 400000\neffective_cache_size and random_page_cost of 4.\n\nI can increase the LIMIT+OFFSET elbow to 1654 by changing the\nquery thusly:\n\n< AND aa_t.server_id = 21\n---\n> AND aa_t.server_id IN (21, 0)\n\nThe value 0 is an invalid server_id, so I know it won't be returned.\nHowever, I've got 41K rows that could be returned by this query and\ngrowing, and 1654 is obviously not enough. 
(aa is 690K rows, bb is\n41K rows, and cc is 500 rows.)\n\nIf I drop the ORDER BY, the query goes much faster, but the query is\nuseless without the ORDER BY.\n\nI've figured out that the second plan is slow, because it is writing a\nhuge result set to disk (+200MB). This doesn't make sense to me,\nsince sort_mem is 32000.\n\nIs there a way to tell the optimizer to use Nested Loop plan always\ninstead of the Merge/Join plan? Turning off enable_mergejoin is\nobviously not an option.\n\nThanks,\nRob\n\n\n", "msg_date": "Sat, 30 Aug 2003 07:16:13 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "How to force Nested Loop plan?" }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n> I'm trying to understand how I can get the planner to always do the\n> right thing with this query:\n\n> SELECT\n> \taa_t.min_date_time\n> FROM\n> \taa_t\n> \t, bb_t\n> \t, cc_t\n> WHERE bb_t.bb_id = aa_t.bb_id\n> \tAND aa_t.realm_id = cc_t.realm_id\n> \tAND aa_t.server_id = 21\n> ORDER BY aa_t.min_date_time desc\n> LIMIT 1\n> OFFSET 674\n\n\n> -> Index Scan Backward using aa_t20 on aa_t (cost=0.00..76738.77 rows=3454 width=46) (actual time=0.10..31.30 rows=676 loops=1)\n> Filter: (server_id = 21::numeric)\n\nThe reason the planner does not much like this plan is that it's\nestimating that quite a lot of rows will have to be hit in min_date_time\norder before it finds enough rows with server_id = 21. Thus the high\ncost estimate for the above step.\n\nI suspect that the reason you like this plan is that there's actually\nsubstantial correlation between server_id and min_date_time, such that\nthe required rows are found quickly. Before trying to force the planner\ninto what you consider an optimal plan, you had better ask yourself\nwhether you can expect that correlation to hold up in the future.\nIf not, your plan could become pessimal pretty quickly.\n\nI'd suggest creating a double-column index:\n\n\tcreate index aa_ti on aa_t(server_id, min_date_time);\n\nand altering the query to read\n\n\tORDER BY aa_t.server_id DESC, aa_t.min_date_time DESC\n\n(you need this kluge to make sure the planner recognizes that the new\nindex matches the ORDER BY request). Then you should get a plan with\na much smaller cost coefficient for this step.\n\n\t\t\tregards, tom lane\n\nPS: does server_id really need to be NUMERIC? Why not integer, or at\nworst bigint?\n", "msg_date": "Sat, 30 Aug 2003 11:02:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Nested Loop plan? " }, { "msg_contents": "Tom Lane writes:\n> The reason the planner does not much like this plan is that it's\n> estimating that quite a lot of rows will have to be hit in min_date_time\n> order before it finds enough rows with server_id = 21. Thus the high\n> cost estimate for the above step.\n\nThanks for the speedy and useful reply! More questions follow. :)\n\nVery interesting. How does it know \"quite a lot\"? Is there something\nI can do to get the planner to analyze the data better?\n\n> I suspect that the reason you like this plan is that there's actually\n> substantial correlation between server_id and min_date_time, such that\n> the required rows are found quickly. Before trying to force the planner\n> into what you consider an optimal plan, you had better ask yourself\n> whether you can expect that correlation to hold up in the future.\n> If not, your plan could become pessimal pretty quickly.\n\nThe correlation holds. 
min_date_time increases over time as records\nare inserted. server_id is uniformly distributed over time. There's\nno randomness. There is at least one 21 record for every value of\nmin_date_time. 21 is a special server_id containing aggregate\n(denormalized) data for the other servers. I thought about putting it\nin a separate table, but this would complicate the code as the data is\nidentical to the non-aggregated case.\n\nDo you have any suggestions for organizing the data/query now that you\nknow this?\n\n> I'd suggest creating a double-column index:\n\nThanks. I'll try this.\n\nI'm a very big fan of declarative programming. However, there's a\ndanger in declarative programming when the interperter isn't smart\nenough. When I add this index, I will slow down inserts (about\n20K/day) and increase data size (this is the second largest table in\nthe database). Moreover, if the planner is improved, I've should fix\nmy code, delete the index, etc.\n\nIs there a way of giving the planner direct hints as in Oracle? They\ncan be ignored when the optimizer is improved, just as \"register\" is\nignored by C compilers nowadays.\n\nAdding the extra index and ORDER BY is also not easy in our case. The\nquery is dynamically generated. I can force the query ORDER BY to be\nwhatever I want, but I would lose the ability to do interesting\nthings, like the automatic generation of ORDER BY when someone clicks\non a column header in the application. Indeed there are times when\npeople want to sort on other columns in the query. I reduced the\nproblem to the salient details for my post to this board. What if the\nORDER BY was:\n\n ORDER BY aa_t.server_id DESC, cc_t.name ASC\n\nWould the planner do the right thing?\n\n> PS: does server_id really need to be NUMERIC? Why not integer, or at\n> worst bigint?\n\nIt is a NUMERIC(18). It could be a bigint. What would be the change\nin performance of this query if we changed it to bigint?\n\nBTW, my customer is probably going to be switching to Oracle. This\nparticular query has been one of the reasons. Maybe this change will\nhelp us stay with Postgres.\n\nThanks,\nRob\n\n\n", "msg_date": "Sat, 30 Aug 2003 09:47:02 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to force Nested Loop plan? " }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n> Tom Lane writes:\n>> The reason the planner does not much like this plan is that it's\n>> estimating that quite a lot of rows will have to be hit in min_date_time\n>> order before it finds enough rows with server_id = 21.\n\n> Very interesting. How does it know \"quite a lot\"?\n\nIt doesn't, because it has no cross-column-correlation stats. The\ndefault assumption is that there's no correlation.\n\n> server_id is uniformly distributed over time. There's\n> no randomness. There is at least one 21 record for every value of\n> min_date_time.\n\nThat doesn't really tell me anything. What's the proportion of 21\nrecords out of the total table?\n\n> 21 is a special server_id containing aggregate\n> (denormalized) data for the other servers. I thought about putting it\n> in a separate table, but this would complicate the code as the data is\n> identical to the non-aggregated case.\n\nHm. If most of your queries are for id 21, an alternative approach is\nto create single-column partial indexes:\n\n\tcreate index fooi on foo (min_date_time) where server_id = 21;\n\nThis reduces the cost of maintaining the index but also makes it useful\n*only* for id = 21 queries. 
On the plus side, you don't need to hack\nthe ORDER BY clause to get your queries to use it. Your choice...\n\n> What if the ORDER BY was:\n> ORDER BY aa_t.server_id DESC, cc_t.name ASC\n> Would the planner do the right thing?\n\nWhat do you consider the right thing? cc_t.name doesn't seem connected\nto this table at all --- or did I miss something?\n\n> It is a NUMERIC(18). It could be a bigint. What would be the change\n> in performance of this query if we changed it to bigint?\n\nHard to say. I can tell you that the raw comparison operator is a lot\nquicker for bigint than for numeric, but I don't have any hard numbers\nabout what percentage of total CPU time is involved. You'd pretty much\nhave to try it for yourself to see what the effect is in your queries.\n\nIf you've been generically using NUMERIC(n) where you could be using\ninteger or bigint, then I think you've probably paid a high price\nwithout knowing it. I don't know what Oracle's cost tradeoffs are for\nthese datatypes, but I can tell you that Postgres's integer types are\nway faster (and more compact) than our NUMERIC.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Aug 2003 12:19:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Nested Loop plan? " }, { "msg_contents": "On Sat, 2003-08-30 at 10:47, Rob Nagler wrote:\n> Tom Lane writes:\n[snip]\n> enough. When I add this index, I will slow down inserts (about\n> 20K/day) and increase data size (this is the second largest table in\n[snip]\n\nSince I gather that this is a web-site, can we presume that they\nare clumped into an 8 hour range? 20,000/8 = 2,500/hour, which\nis 41.67/minute. If you can't do .69 inserts/second, something\nis wrong, and it ain't hardware, and it ain't Postgresql...\n\n> > PS: does server_id really need to be NUMERIC? Why not integer, or at\n> > worst bigint?\n> \n> It is a NUMERIC(18). It could be a bigint. What would be the change\n> in performance of this query if we changed it to bigint?\n\nhttp://www.postgresql.org/docs/7.3/static/datatype.html#DATATYPE-INT\nhttp://www.postgresql.org/docs/7.3/static/datatype.html#DATATYPE-NUMERIC-DECIMAL\n\nScalars are faster than arbitrary precision types. Small (32 bit)\nscalars are faster than bit (64 bit) scalars on x86 h/w.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Adventure is a sign of incompetence\"\nStephanson, great polar explorer\n\n", "msg_date": "Sat, 30 Aug 2003 11:33:40 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Nested Loop plan?" }, { "msg_contents": "Tom Lane writes:\n> That doesn't really tell me anything. What's the proportion of 21\n> records out of the total table?\n\nCurrently we have about 15 servers so 6% of the data is uniformly\ndistributed with the value 21.\n\n> \tcreate index fooi on foo (min_date_time) where server_id = 21;\n> \n> This reduces the cost of maintaining the index but also makes it useful\n> *only* for id = 21 queries. On the plus side, you don't need to hack\n> the ORDER BY clause to get your queries to use it. Your choice...\n\nI like that better, thanks. 
After testing I found the elbow was at\n1610 records with this index, but this clause still yields better\nperformance at 1654 records:\n\n\tAND aa_t.server_id IN (21, 0)\n\nThis is independent of the existence of the new index.\n\nInterestingly, when I drop the aa_t5 index, the elbow goes up to\n1729 with the IN (21, 0) query.\n\nYou might ask: why do I have an index at all? That's from my Oracle\nexperience. server_id is a foreign key into the server table. If you\ndon't create an index on a foreign key, Oracle locks the entire\nforeign table when you modify the local table. With an index, it only\nlocks a row in the foreign table. This causes a major bottleneck, but\nin this case the server table is static. Therefore, the index is\nsuperfluous, and since there are only 16 values, the index should be\nbitmap index (Oracle speak, sorry, don't know the PG term). Dropping\nthe index probably won't change any of the other queries, I think.\n\nWithout the aa_t5 index and after the elbow, the Index Scan is\nreplaced with a Seq Scan, which is just about as fast, but still 50\ntimes slower than before the elbow:\n\n Limit (cost=34071.30..34071.31 rows=1 width=84) (actual time=5111.14..5111.15 rows=1 loops=1)\n -> Sort (cost=34066.98..34075.61 rows=3454 width=84) (actual time=5108.74..5109.96 rows=1733 loops=1)\n Sort Key: aa_t.min_date_time\n -> Merge Join (cost=33801.26..33863.98 rows=3454 width=84) (actual time=4868.62..5020.58 rows=41879 loops=1)\n Merge Cond: (\"outer\".realm_id = \"inner\".realm_id)\n -> Sort (cost=31.64..32.78 rows=455 width=19) (actual time=3.06..3.38 rows=415 loops=1)\n Sort Key: cc_t.realm_id\n -> Seq Scan on cc_t (cost=0.00..11.55 rows=455 width=19) (actual time=0.05..0.99 rows=455 loops=1)\n -> Sort (cost=33769.63..33778.26 rows=3454 width=65) (actual time=4865.20..4895.28 rows=41879 loops=1)\n Sort Key: aa_t.realm_id\n -> Merge Join (cost=33296.79..33566.63 rows=3454 width=65) (actual time=4232.52..4541.24 rows=41879 loops=1)\n Merge Cond: (\"outer\".bb_id = \"inner\".bb_id)\n -> Sort (cost=25216.97..25225.60 rows=3454 width=46) (actual time=3213.53..3243.65 rows=41879 loops=1)\n Sort Key: aa_t.bb_id\n -> Seq Scan on aa_t (cost=0.00..25013.97 rows=3454 width=46) (actual time=20.07..2986.11 rows=41879 loops=1)\n Filter: (server_id = 21::numeric)\n -> Sort (cost=8079.83..8184.53 rows=41879 width=19) (actual time=1018.95..1049.37 rows=41879 loops=1)\n Sort Key: bb_t.bb_id\n -> Seq Scan on bb_t (cost=0.00..4864.79 rows=41879 width=19) (actual time=0.04..810.88 rows=41879 loops=1)\n Total runtime: 5141.22 msec\n\nWhat I'm not sure is why does it decide to switch modes so \"early\",\ni.e., at about 5% of the table size or less? It seems that an Index\nScan would give better mileage than a Seq Scan for possibly up to 50%\nof the table in this case. I clearly don't understand the internals,\nbut the elbow seems rather sharp to me.\n\n> > What if the ORDER BY was:\n> > ORDER BY aa_t.server_id DESC, cc_t.name ASC\n> > Would the planner do the right thing?\n> \n> What do you consider the right thing? \n> cc_t.name doesn't seem connected\n> to this table at all --- or did I miss something?\n\nSorry, this is a red herring. Please ignore.\n\n> If you've been generically using NUMERIC(n) where you could be using\n> integer or bigint, then I think you've probably paid a high price\n> without knowing it. 
I don't know what Oracle's cost tradeoffs are for\n> these datatypes, but I can tell you that Postgres's integer types are\n> way faster (and more compact) than our NUMERIC.\n\nI'll try to figure out what the price is in our case. I think Oracle\ndoes a pretty good job on data compression for NUMERIC. I haven't\ndealt with a large Postgres database until this one, so I guess it's\ntime to learn. :)\n\nWe actually have been quite pleased with Postgres's performance\nwithout paying much attention to it before this. When we first set up\nOracle, we got into all of its parameters pretty heavily. With\nPostgres, we just tried it and it worked. This is the first query\nwhere we ran out of ideas to try.\n\nBTW, everybody's help on this list is fantastic. Usually, I can find\nthe answer to my question (and have been doing so for 3 years) on this\nlist without asking.\n\nThanks,\nRob\n\n\n", "msg_date": "Sat, 30 Aug 2003 14:42:09 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to force Nested Loop plan? " }, { "msg_contents": "On Sat, 2003-08-30 at 15:42, Rob Nagler wrote:\n[snip]\n> We actually have been quite pleased with Postgres's performance\n> without paying much attention to it before this. When we first set up\n> Oracle, we got into all of its parameters pretty heavily. With\n> Postgres, we just tried it and it worked. This is the first query\n> where we ran out of ideas to try.\n\nDumb question: given your out-of-the-box satisfaction, could it be\nthat postgresql.conf hasn't been tweaked?\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n484,246 sq mi are needed for 6 billion people to live, 4 persons \nper lot, in lots that are 60'x150'.\nThat is ~ California, Texas and Missouri.\nAlternatively, France, Spain and The United Kingdom.\n\n", "msg_date": "Sat, 30 Aug 2003 16:28:22 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Nested Loop plan?" }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n> What I'm not sure is why does it decide to switch modes so \"early\",\n> i.e., at about 5% of the table size or less?\n\nGiven the default cost parameters and cost models, that's the correct\nplace to switch. Since the estimate evidently doesn't match reality\nfor your case, you might want to play with the parameters. Reducing\nrandom_page_cost would be the first thing I'd try. Some people think\nthat increasing effective_cache_size is a good idea too, though I feel\nthat that has only marginal impact on the planner's choices.\n\nKeep in mind though that you seem to be experimenting with a\nfully-cached database; you may find that the planner's beliefs more\nnearly approach reality when actual I/O has to occur.\n\nAnother thing I'd be interested to know about is how closely the\nphysical order of the table entries correlates with min_date_time.\nA high correlation reduces the actual cost of the indexscan (since\nvisiting the rows in index order becomes less of a random-access\nproposition). We are aware that the planner doesn't model this effect\nvery well at present ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 30 Aug 2003 18:10:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Nested Loop plan? 
" }, { "msg_contents": "Ron Johnson writes:\n> Dumb question: given your out-of-the-box satisfaction, could it be\n> that postgresql.conf hasn't been tweaked?\n\nHere are the modified values:\n\nshared_buffers = 8000\nwal_buffers = 80\nsort_mem = 32000\neffective_cache_size = 400000\nrandom_page_cost = 4\nautocommit = false\ntimezone = UTC\n\nI had run a test with effective_cache_size to high value to see what\nwould happen. Also adjusted random_page_cost:\n\nrandom_page_cost effective_cache_size\telbow\n 4\t \t \t40000\t \t675\n .5\t \t \t40000\t \t592\n .1\t \t \t40000\t \t392\n 4\t \t \t1000\t \t30\n\nMy conclusion is that random_page_cost should be left alone and\neffective_cache_size higher is better.\n\nBTW, the hardware is 2 x 2.4ghz Xeon, 1.2GB, SCSI (linux software\nraid) with 10K disks. This is close to the production box. Although\nwe are planning on adding more memory to production.\n\nRob\n\n\n", "msg_date": "Sat, 30 Aug 2003 17:59:33 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to force Nested Loop plan?" }, { "msg_contents": "Tom Lane writes:\n> Keep in mind though that you seem to be experimenting with a\n> fully-cached database; you may find that the planner's beliefs more\n> nearly approach reality when actual I/O has to occur.\n\nMy hope is that the entire database should fit in memory. This may\nnot be in the case right now with only 1GB, but it should be close.\nThe pgsql/data/base/NNN directory is about 1.5GB on production. I'm\npretty sure with constant vacuuming, we could keep that size down.\nA pgdump is about 60MB now, growing at about .5MB a day.\n\n> Another thing I'd be interested to know about is how closely the\n> physical order of the table entries correlates with min_date_time.\n\nProbably \"pretty close\". The primary key of aa_t is (bb_id,\nserver_id), and bb_id is a sequence. aa_t is updated heavily on\nproduction, but these tests are on a fresh import so vacuuming and\nindex order is not a factor. We do a reload every now and then to\nimprove performance on production. min_date_time is highly correlated\nwith bb_id, because both are increasing constantly. server_id is one\nof 16 values.\n\n> A high correlation reduces the actual cost of the indexscan (since\n> visiting the rows in index order becomes less of a random-access\n> proposition). We are aware that the planner doesn't model this effect\n> very well at present ...\n\nOracle's optimizer is lacking here, too. The best optimizer I've seen\nwas at Tandem, and even then hints were required.\n\nAre there plans for explicit hints to the planner?\n\nThanks,\nRob\n\n\n", "msg_date": "Sat, 30 Aug 2003 18:08:32 -0600", "msg_from": "Rob Nagler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to force Nested Loop plan? " }, { "msg_contents": "Rob Nagler <[email protected]> writes:\n> Are there plans for explicit hints to the planner?\n\nPersonally, I'm philosophically opposed to planner hints; see previous\ndiscussions in the archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 31 Aug 2003 19:12:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Nested Loop plan? 
" }, { "msg_contents": "On Sun, 2003-08-31 at 18:12, Tom Lane wrote:\n> Rob Nagler <[email protected]> writes:\n> > Are there plans for explicit hints to the planner?\n> \n> Personally, I'm philosophically opposed to planner hints; see previous\n> discussions in the archives.\n\nHow about (if you don't already do it) ranked (or approximately \nranked) b-tree indexes, where each node also stores the (approximate)\ncount of tuple pointers under it?\n\nThis way, the planner would know whether or how skewed a tree is,\nand (approximately) how many tuples a given WHERE predicate resolves\nto.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Fair is where you take your cows to be judged.\"\nUnknown\n\n", "msg_date": "Mon, 01 Sep 2003 10:22:47 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Nested Loop plan?" }, { "msg_contents": "Ron Johnson <[email protected]> writes:\n> How about (if you don't already do it) ranked (or approximately \n> ranked) b-tree indexes, where each node also stores the (approximate)\n> count of tuple pointers under it?\n> This way, the planner would know whether or how skewed a tree is,\n> and (approximately) how many tuples a given WHERE predicate resolves\n> to.\n\nWhy is that better than our existing implementation of column statistics?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Sep 2003 17:03:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to force Nested Loop plan? " } ]
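For reference, the two concrete suggestions Tom Lane makes in the "How to force Nested Loop plan" thread above, written out against the aa_t table (the index names here are invented):

-- (a) a two-column index, plus the ORDER BY kluge so the planner matches it
create index aa_t_server_time on aa_t (server_id, min_date_time);

select aa_t.min_date_time
  from aa_t, bb_t, cc_t
 where bb_t.bb_id = aa_t.bb_id
   and aa_t.realm_id = cc_t.realm_id
   and aa_t.server_id = 21
 order by aa_t.server_id desc, aa_t.min_date_time desc
 limit 1 offset 674;

-- (b) a partial index: cheaper to maintain, usable only for server_id = 21
--     queries, and the original ORDER BY can stay unchanged
create index aa_t_min_date_21 on aa_t (min_date_time) where server_id = 21;

The elbow measurements Rob reports later in that thread were taken with variant (b).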
[ { "msg_contents": "\nHas anyone seen any performace problems with the use to to_timestamp?\n\nWhen I use it in a where clause I get a full file scan, when I don't it\nuses the index\nfor the query. The begin_time column is of type timestamp.\n\nThis does a full sequential scan\n\tselect id from details where begin_time > to_timestamp('03/08/25\n18:30');\n\n\nThis uses the index\n\tselect id from details where begin_time > '03/08/25 18:30';\n\nDon\n\n", "msg_date": "Tue, 2 Sep 2003 13:50:14 -0400 ", "msg_from": "\"Zaremba, Don\" <[email protected]>", "msg_from_op": true, "msg_subject": "Use of to_timestamp causes full scan" }, { "msg_contents": "\"Zaremba, Don\" <[email protected]> writes:\n> This does a full sequential scan\n> \tselect id from details where begin_time > to_timestamp('03/08/25\n> 18:30');\n\nto_timestamp('foo') is not a constant, so the planner doesn't know how\nmuch of the table this is going to select. In the absence of that\nknowledge, its default guess favors a seqscan.\n\n> This uses the index\n> \tselect id from details where begin_time > '03/08/25 18:30';\n\nHere the planner can consult pg_stats to get a pretty good idea how\nmuch of the table will be scanned; if the percentage is small enough\nit will pick an indexscan.\n\nThere are various ways to deal with this --- one thing you might\nconsider is making a wrapper function for to_timestamp that is\nmarked \"immutable\", so that it will be constant-folded on sight.\nThat has potential gotchas if you want to put the query in a function\nthough. Another tack is to make the query into a range query:\n\twhere begin_time > ... AND begin_time < 'infinity';\nSee the archives for more discussion.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Sep 2003 14:06:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of to_timestamp causes full scan " } ]
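A sketch of the two workarounds Tom Lane mentions, using the details table from the question. The wrapper function name and the format string are assumptions (the original query did not show a format), and the caveat Tom gives about calling such a wrapper from inside other functions still applies.

-- 1. an IMMUTABLE wrapper, so the value is folded to a constant at plan time
create or replace function const_timestamp(text) returns timestamp as '
    select to_timestamp($1, ''YY/MM/DD HH24:MI'')::timestamp;
' language 'sql' immutable;

select id from details
 where begin_time > const_timestamp('03/08/25 18:30');

-- 2. or turn the condition into a range query
select id from details
 where begin_time > to_timestamp('03/08/25 18:30', 'YY/MM/DD HH24:MI')
   and begin_time < 'infinity';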
[ { "msg_contents": "Hi,\n\nI'm working on a project to make an application run on MySQL and PostgreSQL.\nI find that PostgreSQL runs up to 10 times slower than MySQL. For small records\nit is not much problems. But as the records grew (up to 12,000 records) the\ndifference is quite significant. We are talking about 15s (MySQL) vs 111s \n(PostgreSQL). Someone suggest that my way of implementing PostgreSQL is not\nefficient and someone out there might be able to help.\n\nFYI, I running the application on ASP, XP Professional and Pentium 4 machine.\n\nBelow is the exact statement I used:\n \n strSQL = \"CREATE TABLE temp1 SELECT accposd.item,items.name,Sum(accposd.qty)\nas Quantity \" & _\n \",accposd.loose,AVG(accposd.price) as price, Sum(accposd.amount) as\nsum_amount \" & _\n \",Sum(accposd.cost) as sum_cost FROM \" & _\n \"accposd left join items on accposd.item = items.fcc \" & _\n \"where accposd.date between '\" & varStartDate & \"' AND '\" &\nvarStopDate & \"'\" & _\n \" GROUP by accposd.item,items.name,accposd.loose ORDER by items.name\"\n\nBelow is the information about the fields:\n\nCREATE TABLE accposd (\n fcc double precision default NULL,\n date date default NULL,\n recvbch double precision default NULL,\n type int default NULL,\n item double precision default NULL,\n qty double precision default NULL,\n price double precision default NULL,\n amount double precision default NULL,\n discamt double precision default NULL,\n cost double precision default NULL,\n loose varchar(10) default NULL,\n discflg varchar(10) default NULL,\n hour smallint default NULL,\n min smallint default NULL,\n sec smallint default NULL,\n who varchar(50) default NULL,\n promoter varchar(50) default NULL,\n userID double precision default '0',\n batchno double precision default '0'\n);\n\n\nCREATE TABLE items (\n fcc serial,\n code varchar(20) default NULL,\n name varchar(40) default NULL,\n description varchar(255) default NULL,\n barcode varchar(15) default NULL,\n brand varchar(30) default NULL,\n sub_category double precision default NULL,\n schedule char(1) default NULL,\n price double precision default NULL,\n lprice double precision default NULL,\n avgcost double precision default NULL,\n gname varchar(40) default NULL,\n strength varchar(10) default NULL,\n packsize double precision default NULL,\n whspack varchar(15) default NULL,\n packing varchar(10) default NULL,\n lowstock double precision default NULL,\n lstockls double precision default NULL,\n orderqty double precision default NULL,\n creation date default NULL,\n shelfno varchar(8) default NULL,\n status char(1) default NULL,\n q_cust double precision default NULL,\n ql_cust double precision default NULL,\n qoh double precision default NULL,\n qohl double precision default NULL,\n poison double precision default NULL,\n candisc double precision default NULL,\n maxdisc double precision default NULL,\n chkdate date default NULL,\n chkby varchar(5) default NULL,\n isstock double precision default NULL,\n wprice double precision default '0',\n wlprice double precision default '0',\n PRIMARY KEY (fcc)\n);\n\n\nI appreciate your advice. Thank you.\n\nRegards,\nAZLIN.\n\n__________________________________\nDo you Yahoo!?\nYahoo! 
SiteBuilder - Free, easy-to-use web site design software\nhttp://sitebuilder.yahoo.com\n", "msg_date": "Wed, 3 Sep 2003 06:08:57 -0700 (PDT)", "msg_from": "Azlin Ghazali <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL is slow...HELP" }, { "msg_contents": "On Wed, Sep 03, 2003 at 06:08:57AM -0700, Azlin Ghazali wrote:\n> I find that PostgreSQL runs up to 10 times slower than MySQL. For small records\n\nHave you done any tuning on PostgreSQL? Have you vacuumed, &c.? All\nthe usual questions. \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 3 Sep 2003 09:14:55 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is slow...HELP" }, { "msg_contents": "\n> For small records\n> it is not much problems. But as the records grew (up to 12,000\n> records) the\n> difference is quite significant.\n\nAlthough there are many tuning options, I'd suggest starting by making sure\nyou have an index (unique in cases where appropriate) on accposd.date\naccposd.item, items.name, accposd.loose and items.name. Then do an\n\"analyze;\" on the DB to make sure the database takes advantage of the\nindexes where appropriate.\n\nIf this doesn't help, there are other options to pursue, but this is where I\nwould start.\n\n-Nick\n\n\n", "msg_date": "Wed, 3 Sep 2003 09:10:59 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is slow...HELP" }, { "msg_contents": "On 3 Sep 2003 at 6:08, Azlin Ghazali wrote:\n\n> Hi,\n> \n> I'm working on a project to make an application run on MySQL and PostgreSQL.\n> I find that PostgreSQL runs up to 10 times slower than MySQL. For small records\n> it is not much problems. But as the records grew (up to 12,000 records) the\n> difference is quite significant. We are talking about 15s (MySQL) vs 111s \n> (PostgreSQL). Someone suggest that my way of implementing PostgreSQL is not\n> efficient and someone out there might be able to help.\n> \n> FYI, I running the application on ASP, XP Professional and Pentium 4 machine.\n\nAre you running postgresql on windows? That's not an performance monster \nexactly? Is it under cygwin?\n\nBTW, did you do any performance tuning to postgresql?\n\nHTH\n\nBye\n Shridhar\n\n--\nVulcans do not approve of violence.\t\t-- Spock, \"Journey to Babel\", stardate \n3842.4\n\n", "msg_date": "Wed, 03 Sep 2003 20:08:32 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is slow...HELP" }, { "msg_contents": "Azlin Ghazali <[email protected]> writes:\n> Below is the exact statement I used:\n\nThat's not very informative. Could we see the results of EXPLAIN ANALYZE\non that SELECT? Also, what PG version are you running?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Sep 2003 10:39:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is slow...HELP " } ]
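As a concrete starting point for Nick's suggestion, using the column names from the schema posted above (the index names are invented):

create index accposd_date_idx on accposd (date);
create index accposd_item_idx on accposd (item);
create index items_name_idx   on items (name);

analyze;   -- update planner statistics after loading data and creating indexes

It may also be worth checking the join column types: the schema above declares accposd.item as double precision while items.fcc is a serial (an integer), and cross-type joins like that are harder for the planner of this era to handle with an index.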
[ { "msg_contents": "Hi everyone,\n\nSaw this earlier on today, on the mailing list of the Open Source \nDevelopment Labs people who are porting their database testing suite \nfrom SAP to PostgreSQL.\n\nThe comment near the end by Jenny Zhang (one of the porters), saying \nthat \"I will put a tar ball on SourceForge today, though the pgsql \nversion performance is not very great.\" doesn't sound very nifty.\n\nPersonally, I don't have the time to analyse why the performance isn't \nvery good and take it forward, so I'm mentioning it here as a heads up \nin case someone want's something useful to sink their teeth into.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n-------- Original Message --------\nSubject: Re: [osdldbt-general] DBT1 and dynamic cache\nDate: 02 Sep 2003 09:46:29 -0700\nFrom: Jenny Zhang <[email protected]>\nTo: [email protected]\nCC: [email protected]\nReferences: <[email protected]>\n\nMark is right. DBT1 is designed to be run in several modes:\ndbdriver + database: each user opens a database connection, and all the\ntransaction go to the database\n\ndbdriver + cache + database: each user opens a database connection,\nthree transactions (bestsellers, newprodicts, and search results by\nsubject) go to the cache, the others go to the database\n\ndbdriver + transaction manager + database: each user opens a connection\nto the transaction manager, which manages the transaction queue and\ndatabase connection. All the transaction go to the database.\n\ndbdriver + transaction manager + database + cache: each user opens a\nconnection to the transaction manager, which manages the transaction\nqueue and database connection. Three transactions (bestsellers,\nnewprodicts, and search results by subject) go to the cache, the others\ngo to the database\n\nThe pgsql version is available at bk://developer.osdl.org/dbt1.\n\nI will put a tar ball on SourceForge today, though the pgsql version\nperformance is not very great.\n\nJenny\nOn Tue, 2003-09-02 at 08:42, [email protected] wrote:\n> We do have something that simulates some of the caching effects of web\n> servers. It's under the 'cache' directory. If I remember correctly, it\n> doesn't have to be used, but we do have it working.\n> \n> Mark\n> \n> On 31 Aug, Wenguang Wang wrote:\n> > Hi, dbt1 designers and developers,\n> > \n> > I like the idea of eliminating the web servers in dbt1 to focus on the \n> > performance of DBMS. However, I have a question about whether the \n> > current dbt1 can really represent e-commerce workloads.\n> > \n> > TPC-W encourages the use of web caches to reduce the load on DBMS. Since \n> > web caches are not used in dbt1, all cachable queries have to be \n> > procesed by the DBMS in dbt1. This could increase the load to the \n> > backend DBMS by several times at least. These queries make dbt1 more \n> > like a TPC-H throughput test instead of an e-commerce test. 
Is this \n> > design of dbt1 intentional or is it planned to be fixed later?\n> > \n> > Thanks.\n> > \n> \n> \n> \n> -------------------------------------------------------\n> This sf.net email is sponsored by:ThinkGeek\n> Welcome to geek heaven.\n> http://thinkgeek.com/sf\n> _______________________________________________\n> osdldbt-general mailing list\n> [email protected]\n> https://lists.sourceforge.net/lists/listinfo/osdldbt-general\n\n\n\n\n-------------------------------------------------------\nThis sf.net email is sponsored by:ThinkGeek\nWelcome to geek heaven.\nhttp://thinkgeek.com/sf\n_______________________________________________\nosdldbt-general mailing list\[email protected]\nhttps://lists.sourceforge.net/lists/listinfo/osdldbt-general\n\n\n", "msg_date": "Wed, 03 Sep 2003 21:59:04 +0800", "msg_from": "Justin Clift <[email protected]>", "msg_from_op": true, "msg_subject": "\" ... though the pgsql version performance is not very great.\"" } ]
[ { "msg_contents": "Hi List,\n\n I trying to increase performance in my PostgreSQL but there is something\nwrong. when I run this SQL for the first time it takes 1 min. 40 seconds to\nreturn, but when I run it for the second time it takes more than 2 minutes, and\nI should retunr faster than the first time.\n\nDoes anyone have a advice ?\n\nAtenciosamente,\n\nRhaoni Chiu Pereira\nSist�mica Computadores\n\nVisite-nos na Web: http://sistemica.info\nFone/Fax : +55 51 3328 1122\n\n\n\n\n\n", "msg_date": "Wed, 3 Sep 2003 16:15:58 -0300", "msg_from": "Rhaoni Chiu Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "SQL slower when running for the second time" }, { "msg_contents": "Rhaoni Chiu Pereira writes:\n\n> I trying to increase performance in my PostgreSQL but there is something\n> wrong. when I run this SQL for the first time\n\nWhich SQL?\n\n> it takes 1 min. 40 seconds to\n> return, but when I run it for the second time it takes more than 2 minutes, and\n> I should retunr faster than the first time.\n\nWhat happens the third time?\n\n-- \nPeter Eisentraut [email protected]\n\n", "msg_date": "Wed, 3 Sep 2003 22:03:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL slower when running for the second time" }, { "msg_contents": "On Wed, 2003-09-03 at 14:15, Rhaoni Chiu Pereira wrote:\n> Hi List,\n> \n> I trying to increase performance in my PostgreSQL but there is something\n> wrong. when I run this SQL for the first time it takes 1 min. 40 seconds to\n> return, but when I run it for the second time it takes more than 2 minutes, and\n> I should retunr faster than the first time.\n> \n> Does anyone have a advice ?\n\nIs it a query or insert/update?\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Vanity, my favorite sin.\"\n Larry/John/Satan, \"The Devil's Advocate\"\n\n", "msg_date": "Wed, 03 Sep 2003 15:49:03 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL slower when running for the second time" } ]
[ { "msg_contents": "Hi ,\n\nI am currently using Postgresql for a Research project . I observed some performance results of Postgresql which I would like to discuss .\n\nI have a server which accepts requests from clients. It spawns a new thread for each client. The clients are trying to add entries to a relation in Postgresql database . The server ( a C/C++ program running on Linux ) accesses Postgresql using psqlodbc .\nMy server spawns a new connection to Postgresql foreach client. My postgresql.conf has the following additional settings . I am also running Postmaster with \"-o -F\" option .\n\ntcpip_socket = true\nmax_connections = 100\nshared_buffers = 200\nvacuum_mem = 16384\n\nMy clients are adding strings of length approximately 20 bytes . The database size is 1 Million entries .\n\nI observed the following results :-\n\n1) Effects related to Vaccum :- I performed 10 trials of adding and deleting entries . In each trial , 1 client adds 10,000 entries and then deletes them . During the course of these 10 trials , the Add Rates (rate at which my server can add entries to the Postgresql database ) drops from around 200 Adds/second in the 1st trial to around 100 Adds/second in the 10th trial . But when I do a Vaccuum , Immediately I get back the Add Rates to around 200 Adds/Second . \n This effect is more pronounced if there are more than 1 client. As the number of clients increases , the Add Rate drops more steeply requiring me to perform Vaccum more frequently between the trials . So if I draw a graph of the Add Rates in the Y- Axis and the number of Add Operations in the X-axis , I get a saw toothed graph .\n\n2) In the second Experiment , I had a multi threaded client . In the sense , it spawns threads as specified by a command line argument . The server in turn spawns new connections for each Thread of each client ( even the number of client increases) . \nI fixed the number of threads per client at 4 . and I increased the number of clients from 1 to 10 . I observed :-\n\n a) As the number of clients are increased , the Add Rate decreases from around 200 Adds/ Second for 1 client to around 130 Adds/Second for 10 clients .\n b) suppose I run a trial with 3 clients and 4 threads per client . and I get a Add Rate of 180 Adds/Second the first time .This Add Rate decreases the scond time I repeat the same trial with everything being the same . \n During each trial , each thread of each client adds 3000 entries and deletes them and I perform vaccuum after each trial .\n\n\nPostgresql version :- 7.2.4\nPsqlodbc version :- 7.03.0100\n\nI was using Postgresql 7.3.3 earlier but it kept crashing the database after a Vaccum . So I switched to a older and stabler version 7.2.4\n\nAny comments on these observations will be very welcome . Additional details will be provided if needed .\n\nThanking you in Advance,\nNaveen.\n \n\n\n\n\n\n\n\nHi ,\n \nI am currently using Postgresql for a Research \nproject . I observed some performance results of Postgresql which I would like \nto discuss .\n \nI have a server which accepts requests from \nclients. It spawns a new thread for each client. The clients are trying to add \nentries  to a relation in Postgresql database . The server ( a C/C++ \nprogram running on Linux ) accesses Postgresql using psqlodbc \n.\nMy server spawns a new connection to Postgresql \nforeach client. My postgresql.conf has the following additional settings . 
I am \nalso running Postmaster with \"-o -F\" option .\n \ntcpip_socket = truemax_connections = \n100shared_buffers = 200vacuum_mem = 16384\nMy clients are adding strings of length \napproximately 20 bytes . The database size is 1 Million entries .\n \nI observed the following results :-\n \n1) Effects related to Vaccum :- I performed 10 \ntrials of adding and deleting entries . In each trial , 1 client  adds \n10,000 entries and then deletes them . During the course of these 10 trials \n,  the Add Rates (rate at which my server can add entries to the Postgresql \ndatabase ) drops from  around 200 Adds/second in the 1st trial  to \naround 100 Adds/second in the 10th trial . But when I do a Vaccuum , Immediately \nI get back the Add Rates to  around 200 Adds/Second . \n    This effect is more pronounced \nif there are more than 1 client. As the number of clients increases , the Add \nRate drops more steeply requiring me to perform Vaccum more frequently between \nthe trials . So if I draw a graph of the Add Rates in the Y- Axis and the number \nof Add Operations in the X-axis , I get a saw toothed graph .\n \n2) In the second Experiment , I had a multi \nthreaded client . In the sense , it spawns threads as specified by a command \nline argument . The server in turn spawns new connections for each Thread of \neach client ( even the number of client increases) . \nI fixed the number of threads per client at 4 . and \nI increased the number of clients from 1 to 10 .  I observed \n:-\n \n    a) As the number of clients are \nincreased , the Add Rate decreases from around 200 Adds/ Second for 1 client to \naround 130 Adds/Second for 10 clients .\n    b) suppose I run a trial with 3 \nclients and 4 threads per client . and I get a Add Rate of 180 Adds/Second the \nfirst  time .This Add Rate decreases the scond time I repeat the same trial \nwith everything being the same . \n       During \neach trial , each thread of each client  adds 3000 entries and deletes \n them and I perform vaccuum after each trial .\n \n \nPostgresql version     \n:-        7.2.4\nPsqlodbc \nversion       :-        \n7.03.0100\n \nI was using Postgresql 7.3.3 earlier but it kept \ncrashing the database after a Vaccum . So I switched to a older and stabler \nversion  7.2.4\n \nAny comments on these observations will be very \nwelcome . Additional details will be provided if needed .\n \nThanking you in Advance,\nNaveen.", "msg_date": "Wed, 3 Sep 2003 12:32:42 -0700", "msg_from": "\"Naveen Palavalli\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query on Postgresql performance" }, { "msg_contents": "On Wed, 2003-09-03 at 15:32, Naveen Palavalli wrote:\n> shared_buffers = 200\n\nIf you're using a relatively modern machine, this is probably on the low\nside.\n \n> 1) Effects related to Vaccum :- I performed 10 trials of adding and\n> deleting entries . In each trial , 1 client adds 10,000 entries and\n> then deletes them . During the course of these 10 trials , the Add\n> Rates (rate at which my server can add entries to the Postgresql\n> database ) drops from around 200 Adds/second in the 1st trial to\n> around 100 Adds/second in the 10th trial . But when I do a Vaccuum ,\n> Immediately I get back the Add Rates to around 200 Adds/Second .\n\nWell, there's nothing wrong with vacuuming frequently (since it won't\nblock concurrent database operations, and the more often you vacuum, the\nless time each vacuum takes).\n \n> I was using Postgresql 7.3.3 earlier but it kept crashing the database\n> after a Vaccum . 
So I switched to a older and stabler version 7.2.4\n\nCan you reproduce the 7.3.3 crash? (BTW, in the future, it would be\nappreciated if you could report these kinds of bugs to the dev team).\n\n-Neil\n\n\n", "msg_date": "Wed, 03 Sep 2003 22:01:10 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query on Postgresql performance" }, { "msg_contents": "On Wed, Sep 03, 2003 at 12:32:42PM -0700, Naveen Palavalli wrote:\n> My server spawns a new connection to Postgresql foreach client. My\n\nI don't think you want to do that. You should use a pool. Back end\nstartup is mighty expensive.\n\n> 1) Effects related to Vaccum :- I performed 10 trials of adding and\n> deleting entries . In each trial , 1 client adds 10,000 entries\n> and then deletes them . During the course of these 10 trials ,\n\nYou'll want to vacuum after every set of deletes, I should think. If\nyou're woking in more than one transaction for the deletes, then\nfairly frequent vacuums of that table will be effective.\n\n> I was using Postgresql 7.3.3 earlier but it kept crashing the\n> database after a Vaccum . So I switched to a older and stabler\n> version 7.2.4\n\nYou don't want to use 7.3.3. It has a rare but serious bug and was\nreplaced in something like 24 hours with 7.3.4. The 7.2 branch is no\nlonger being maintained, so you really probably should use the 7.3\nbranch. I'm unaware of others having stability problems with 7.3.4,\nso if you see them, you should find your core dump and talk to the\npeople on -hackers.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 4 Sep 2003 06:40:54 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query on Postgresql performance" } ]
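A minimal sketch of the vacuuming pattern suggested in the thread above, assuming the churned table is called entries (a stand-in name, since the real table is not shown): vacuum it after each add/delete cycle rather than only between whole test runs, so the dead rows left by the deletes do not accumulate and drag the insert rate down.

```sql
-- Hypothetical table name "entries"; run after each trial's deletes.
-- VACUUM reclaims the dead row versions left behind by DELETE, and
-- ANALYZE refreshes the statistics the planner uses.
VACUUM ANALYZE entries;
```

As noted in the reply above, a plain VACUUM of this kind does not block concurrent reads and writes, so it can be run between trials without stopping the clients.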
[ { "msg_contents": "<> Version of PostgreSQL?\n \n 7.3.2-3 on RedHat 9\n<> \n<> Standard server configuration?\n \n Follow atached\n\n<> Hardware configuration?\n\n P4 1.7 Ghz\n 512 MB RAM DDR\n HD 20 GB 7200 RPM\n\n<> -----Original Message-----\n<> From: [email protected]\n<> [mailto:[email protected]]On Behalf Of Rhaoni Chiu\n<> Pereira\n<> Sent: Wednesday, September 03, 2003 3:16 PM\n<> To: PostgreSQL Performance; Lista PostgreSQL\n<> Subject: [ADMIN] SQL slower when running for the second time\n<> \n<> \n<> Hi List,\n<> \n<> I trying to increase performance in my PostgreSQL but there is something\n<> wrong. when I run this SQL for the first time it takes 1 min. 40 seconds\n<> to\n<> return, but when I run it for the second time it takes more than 2 minutes,\n<> and\n<> I should retunr faster than the first time.\n<> \n<> Does anyone have a advice ?\n<> \n<> Atenciosamente,\n<> \n<> Rhaoni Chiu Pereira\n<> Sist�mica Computadores\n<> \n<> Visite-nos na Web: http://sistemica.info\n<> Fone/Fax : +55 51 3328 1122\n<> \n<> \n<> \n<> \n<> \n<> \n<> ---------------------------(end of broadcast)---------------------------\n<> TIP 6: Have you searched our list archives?\n<> \n<> http://archives.postgresql.org\n<> \n<>", "msg_date": "Wed, 3 Sep 2003 17:44:12 -0300", "msg_from": "Rhaoni Chiu Pereira <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] SQL slower when running for the second time" } ]
[ { "msg_contents": "I have a table with 102,384 records in it, each record is 934 bytes.\n\nUsing the follow select statement:\n SELECT * from <table>\n\nPG Info: version 7.3.4 under cygwin on Windows 2000\nODBC: version 7.3.100\n\nMachine: 500 Mhz/ 512MB RAM / IDE HDD\n\n\nUnder PG: Data is returned in 26 secs!!\nUnder SQL Server: Data is returned in 5 secs.\nUnder SQLBase: Data is returned in 6 secs.\nUnder SAPDB: Data is returned in 7 secs.\n\nThis is the ONLY table in the database and only 1 user.\n\nAnd yes I did a vacuum.\n\nIs this normal behavior for PG?\n\nThanks\n\n\n", "msg_date": "Wed, 3 Sep 2003 17:28:26 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": true, "msg_subject": "SELECT's take a long time compared to other DBMS" }, { "msg_contents": "> Under PG: Data is returned in 26 secs!!\n> Under SQL Server: Data is returned in 5 secs.\n> Under SQLBase: Data is returned in 6 secs.\n> Under SAPDB: Data is returned in 7 secs.\n\nWhat did you use as the client? Do those times include ALL resulting\ndata or simply the first few lines?\n\nPostgreSQL performance on windows (via Cygwin) is known to be poor.\nDo you receive similar results with 7.4 beta 2?", "msg_date": "Wed, 03 Sep 2003 21:05:06 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "Hi,\n\n\n>And yes I did a vacuum.\n>\n\nDid you 'Analyze' too ?\n\nCheers\nRudi.\n\n", "msg_date": "Thu, 04 Sep 2003 11:06:26 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "Yes I Analyze also, but there was no need to because it was a fresh brand\nnew database.\n\n\"Rudi Starcevic\" <[email protected]> wrote in message\nnews:[email protected]...\n> Hi,\n>\n>\n> >And yes I did a vacuum.\n> >\n>\n> Did you 'Analyze' too ?\n>\n> Cheers\n> Rudi.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n\n", "msg_date": "Wed, 3 Sep 2003 18:22:33 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "All queries were ran on the SERVER for all of the databases I tested.\n\nThis is all resulting data for all of the databases that I tested.\n\n\n\"Rod Taylor\" <[email protected]> wrote in message\nnews:1062637505.84923.7.camel@jester...\n\n\n", "msg_date": "Wed, 3 Sep 2003 18:24:31 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "Hi,\n\n>Yes I Analyze also, but there was no need to because it was a fresh brand\n>new database.\n>\n\nHmm ... Sorry I'm not sure then. I only use Linux with PG.\nEven though it's 'brand new' you still need to Analyze so that any \nIndexes etc. are built.\n\nI'll keep an eye on this thread - Good luck.\n\nRegards\nRudi.\n\n\n\n", "msg_date": "Thu, 04 Sep 2003 11:32:38 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "On Wed, 2003-09-03 at 21:32, Rudi Starcevic wrote:\n> Hmm ... Sorry I'm not sure then. 
I only use Linux with PG.\n> Even though it's 'brand new' you still need to Analyze so that any \n> Indexes etc. are built.\n\nANALYZE doesn't build indexes, it only updates the statistics used by\nthe query optimizer (and in any case, \"select * from <foo>\" has only one\nreasonable query plan anyway).\n\n-Neil\n\n", "msg_date": "Wed, 03 Sep 2003 21:54:37 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "Quoth \"Relaxin\" <[email protected]>:\n> Yes I Analyze also, but there was no need to because it was a fresh\n> brand new database.\n\nThat is _absolutely not true_.\n\nIt is not true with any DBMS that uses a cost-based optimizer.\nCost-based optimizers need some equivalent to ANALYZE in order to\ncollect statistics to allow them to pick any path other than a\nsequential scan.\n\nIn this particular case, a seq scan is pretty likely to be the best\nanswer when there is no WHERE clause on the query.\n\nActually, it doesn't make all that much sense that the other systems\nwould be terribly much faster, because they obviously need to do some\nprocessing on 102,384 records.\n\nCan you tell us what you were *actually* doing? Somehow it sounds as\nthough the other databases were throwing away the data whereas\nPostgreSQL was returning it all \"kawhump!\" in one batch.\n\nWhat programs were you using to submit the queries?\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/oses.html\n\"Computers let you make more mistakes faster than any other invention\nin human history, with the possible exception of handguns and\ntequila.\" -- Mitch Radcliffe\n", "msg_date": "Wed, 03 Sep 2003 23:19:22 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "In the last exciting episode, \"Relaxin\" <[email protected]> wrote:\n> All queries were ran on the SERVER for all of the databases I tested.\n\nQueries obviously run \"on the server.\" That's kind of the point of\nthe database system being a \"client/server\" system.\n\nThe question is what client program(s) you used to process the result\nsets. I'd be surprised to see any client process 100K records in any\nmeaningful way in much less than 30 seconds. Rendering that much data\ninto a console will take some time. Drawing it into cells on a GUI\nwindow will take a lot more time.\n\nSupposing you were using a graphical client, it would be unsurprising\nfor it to have submitted something equivalent to \"limit 30 rows\" (or\nwhatever you can display on screen), and defer further processing 'til\nlater. If that were the case, then 26s to process the whole thing\nwould be a lot more efficient than 5-6s to process a mere 30 rows...\n\n> This is all resulting data for all of the databases that I tested.\n\nYou seem to have omitted \"all resulting data.\"\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www.ntlug.org/~cbbrowne/sap.html\n\"Women who seek to be equal to men lack ambition. 
\"\n-- Timothy Leary\n", "msg_date": "Wed, 03 Sep 2003 23:24:19 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "\n> Yes I Analyze also, but there was no need to because it was a fresh brand\n> new database.\n\nThis apparently wasn't the source of problem since he did an analyze anyway,\nbut my impression was that a fresh brand new database is exactly the\nsituation where an analyze is needed- ie: a batch of data has just been\nloaded and stats haven't been collected yet.\n\nAm I mistaken?\n\n-Nick\n\n\n", "msg_date": "Wed, 3 Sep 2003 23:43:45 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "\"Nick Fankhauser\" <[email protected]> writes:\n> This apparently wasn't the source of problem since he did an analyze anyway,\n> but my impression was that a fresh brand new database is exactly the\n> situation where an analyze is needed- ie: a batch of data has just been\n> loaded and stats haven't been collected yet.\n\nIndeed. But as someone else already pointed out, a seqscan is the only\nreasonable plan for an unqualified \"SELECT whatever FROM table\" query;\nlack of stats wouldn't deter the planner from arriving at that\nconclusion.\n\nMy guess is that the OP is failing to account for some client-side\ninefficiency in absorbing a large query result.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Sep 2003 01:03:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS " }, { "msg_contents": "> Can you tell us what you were *actually* doing? Somehow it sounds as\n> though the other databases were throwing away the data whereas\n> PostgreSQL was returning it all \"kawhump!\" in one batch.\n\nAll of the databases that I tested the query against gave me immediate\naccess to ANY row of the resultset once the data had been returned.\nEx. If I'm currently at the first row and then wanted to goto the 100,000\nrow, I would be there immediately, and if I wanted to then goto the 5\nrow...same thing, I have the record immediately!\n\nThe other databases I tested against stored the entire resultset on the\nServer, I'm not sure what PG does...It seems that brings the entire\nresultset client side.\nIf that is the case, how can I have PG store the resultset on the Server AND\nstill allow me immediate access to ANY row in the resultset?\n\n\n> What programs were you using to submit the queries?\nI used the same program for all of the database. I was using ODBC as\nconnectivity.\n\n\n\n", "msg_date": "Thu, 4 Sep 2003 00:48:42 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "On 4 Sep 2003 at 0:48, Relaxin wrote:\n> All of the databases that I tested the query against gave me immediate\n> access to ANY row of the resultset once the data had been returned.\n> Ex. 
If I'm currently at the first row and then wanted to goto the 100,000\n> row, I would be there immediately, and if I wanted to then goto the 5\n> row...same thing, I have the record immediately!\n> \n> The other databases I tested against stored the entire resultset on the\n> Server, I'm not sure what PG does...It seems that brings the entire\n> resultset client side.\n> If that is the case, how can I have PG store the resultset on the Server AND\n> still allow me immediate access to ANY row in the resultset?\n\nYou can use a cursor and get only required rows.\n\n\nBye\n Shridhar\n\n--\nNick the Greek's Law of Life:\tAll things considered, life is 9 to 5 against.\n\n", "msg_date": "Thu, 04 Sep 2003 13:30:51 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "All rows are required.\n\n\"\"Shridhar Daithankar\"\" <[email protected]> wrote in\nmessage news:3F573E8B.31916.A1063F8@localhost...\n> On 4 Sep 2003 at 0:48, Relaxin wrote:\n> > All of the databases that I tested the query against gave me immediate\n> > access to ANY row of the resultset once the data had been returned.\n> > Ex. If I'm currently at the first row and then wanted to goto the\n100,000\n> > row, I would be there immediately, and if I wanted to then goto the 5\n> > row...same thing, I have the record immediately!\n> >\n> > The other databases I tested against stored the entire resultset on the\n> > Server, I'm not sure what PG does...It seems that brings the entire\n> > resultset client side.\n> > If that is the case, how can I have PG store the resultset on the Server\nAND\n> > still allow me immediate access to ANY row in the resultset?\n>\n> You can use a cursor and get only required rows.\n>\n>\n> Bye\n> Shridhar\n>\n> --\n> Nick the Greek's Law of Life: All things considered, life is 9 to 5\nagainst.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n\n", "msg_date": "Thu, 4 Sep 2003 01:16:47 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "Relaxin kirjutas N, 04.09.2003 kell 03:28:\n> I have a table with 102,384 records in it, each record is 934 bytes.\n\nI created a test database on my Linux (RH9) laptop with 30GB/4200RPM ide\ndrive and P3-1133Mhz, 768MB, populated it with 128000 rows of 930 bytes\neach and did \n\n[hannu@fuji hannu]$ time psql test100k -c 'select * from test' >\n/dev/null\n \nreal 0m3.970s\nuser 0m0.980s\nsys 0m0.570s\n\nso it seems definitely not a problem with postgres as such, but perhaps\nwith Cygwin and/or ODBC driver\n\nI also ran the same query using the \"standard\" pg adapter:\n\n>>> import pg, time\n>>>\n>>> con = pg.connect('test100k')\n>>>\n>>> def getall():\n... t1 = time.time()\n... res = con.query('select * from test')\n... t2 = time.time()\n... list = res.getresult()\n... t3 = time.time()\n... 
print t2 - t1, t3-t2\n...\n>>> getall()\n3.27637195587 1.10105705261\n>>> getall()\n3.07413101196 0.996125936508\n>>> getall()\n3.03377199173 1.07322502136\n\nwhich gave similar results\n\n------------------------------\nHannu \n\n\n\n", "msg_date": "Thu, 04 Sep 2003 14:01:53 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "So after you did that, where able to position to ANY record within the\nresultset?\n\nEx. Position 100,000; then to Position 5; then to position 50,000, etc...\n\nIf you are able to do that and have your positioned row available to you\nimmediately, then I'll believe that it's the ODBC driver.\n\n\"Hannu Krosing\" <[email protected]> wrote in message\nnews:[email protected]...\n> Relaxin kirjutas N, 04.09.2003 kell 03:28:\n> > I have a table with 102,384 records in it, each record is 934 bytes.\n>\n> I created a test database on my Linux (RH9) laptop with 30GB/4200RPM ide\n> drive and P3-1133Mhz, 768MB, populated it with 128000 rows of 930 bytes\n> each and did\n>\n> [hannu@fuji hannu]$ time psql test100k -c 'select * from test' >\n> /dev/null\n>\n> real 0m3.970s\n> user 0m0.980s\n> sys 0m0.570s\n>\n> so it seems definitely not a problem with postgres as such, but perhaps\n> with Cygwin and/or ODBC driver\n>\n> I also ran the same query using the \"standard\" pg adapter:\n>\n> >>> import pg, time\n> >>>\n> >>> con = pg.connect('test100k')\n> >>>\n> >>> def getall():\n> ... t1 = time.time()\n> ... res = con.query('select * from test')\n> ... t2 = time.time()\n> ... list = res.getresult()\n> ... t3 = time.time()\n> ... print t2 - t1, t3-t2\n> ...\n> >>> getall()\n> 3.27637195587 1.10105705261\n> >>> getall()\n> 3.07413101196 0.996125936508\n> >>> getall()\n> 3.03377199173 1.07322502136\n>\n> which gave similar results\n>\n> ------------------------------\n> Hannu\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n\n", "msg_date": "Thu, 4 Sep 2003 07:35:24 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "You forgot that the original poster's query was:\n SELECT * from <table>\n\nThis should require a simple table scan. 
NO need for stats.\nEither the table has not been properly vacuumed or he's got seq_scan\noff...\n\nJLL\n\n\nNick Fankhauser wrote:\n> \n> > Yes I Analyze also, but there was no need to because it was a fresh brand\n> > new database.\n> \n> This apparently wasn't the source of problem since he did an analyze anyway,\n> but my impression was that a fresh brand new database is exactly the\n> situation where an analyze is needed- ie: a batch of data has just been\n> loaded and stats haven't been collected yet.\n> \n> Am I mistaken?\n> \n> -Nick\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Thu, 04 Sep 2003 11:01:13 -0400", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "The table has been Vacuumed and seq_scan is turned on.\n\n\"Jean-Luc Lachance\" <[email protected]> wrote in message\nnews:[email protected]...\n> You forgot that the original poster's query was:\n> SELECT * from <table>\n>\n> This should require a simple table scan. NO need for stats.\n> Either the table has not been properly vacuumed or he's got seq_scan\n> off...\n>\n> JLL\n>\n>\n> Nick Fankhauser wrote:\n> >\n> > > Yes I Analyze also, but there was no need to because it was a fresh\nbrand\n> > > new database.\n> >\n> > This apparently wasn't the source of problem since he did an analyze\nanyway,\n> > but my impression was that a fresh brand new database is exactly the\n> > situation where an analyze is needed- ie: a batch of data has just been\n> > loaded and stats haven't been collected yet.\n> >\n> > Am I mistaken?\n> >\n> > -Nick\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n\n", "msg_date": "Thu, 4 Sep 2003 08:05:15 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "Relaxin kirjutas N, 04.09.2003 kell 17:35:\n> So after you did that, where able to position to ANY record within the\n> resultset?\n> \n> Ex. 
Position 100,000; then to Position 5; then to position 50,000, etc...\n\nnot in the case of :\n time psql test100k -c 'select * from test' > /dev/null\nas the whole result would be written to dev null (i.e discarded)\n\nYes in case of python: after doing\n\nres = con.query('select * from test') # 3 sec - perform query\nlist = res.getresult() # 1 sec - construct list of tuples\n\nthe whole 128k records are in a python list , \nso that i can immediately access any record by python list syntax,\nie list[5], list[50000] etc.\n\n> If you are able to do that and have your positioned row available to you\n> immediately, then I'll believe that it's the ODBC driver.\n\nIt can also be the Cygwin port, which is known to have several problems,\nand if you run both your client and server on the same machine, then it\ncan also be an interaction of the two processes (cygwin/pgsql server and\nnative win32 ODBC client) not playing together very well.\n\n> \"Hannu Krosing\" <[email protected]> wrote in message\n> news:[email protected]...\n> > Relaxin kirjutas N, 04.09.2003 kell 03:28:\n> > > I have a table with 102,384 records in it, each record is 934 bytes.\n> >\n> > I created a test database on my Linux (RH9) laptop with 30GB/4200RPM ide\n> > drive and P3-1133Mhz, 768MB, populated it with 128000 rows of 930 bytes\n> > each and did\n> >\n> > [hannu@fuji hannu]$ time psql test100k -c 'select * from test' >\n> > /dev/null\n> >\n> > real 0m3.970s\n> > user 0m0.980s\n> > sys 0m0.570s\n> >\n> > so it seems definitely not a problem with postgres as such, but perhaps\n> > with Cygwin and/or ODBC driver\n> >\n> > I also ran the same query using the \"standard\" pg adapter:\n> >\n> > >>> import pg, time\n> > >>>\n> > >>> con = pg.connect('test100k')\n> > >>>\n> > >>> def getall():\n> > ... t1 = time.time()\n> > ... res = con.query('select * from test')\n> > ... t2 = time.time()\n> > ... list = res.getresult()\n> > ... t3 = time.time()\n> > ... print t2 - t1, t3-t2\n> > ...\n> > >>> getall()\n> > 3.27637195587 1.10105705261\n> > >>> getall()\n> > 3.07413101196 0.996125936508\n> > >>> getall()\n> > 3.03377199173 1.07322502136\n> >\n> > which gave similar results\n-------------------\nHannu\n\n", "msg_date": "Thu, 04 Sep 2003 19:30:09 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "I had these same issues with the PeerDirect version also.\n\"Hannu Krosing\" <[email protected]> wrote in message\nnews:[email protected]...\n> Relaxin kirjutas N, 04.09.2003 kell 17:35:\n> > So after you did that, where able to position to ANY record within the\n> > resultset?\n> >\n> > Ex. 
Position 100,000; then to Position 5; then to position 50,000,\netc...\n>\n> not in the case of :\n> time psql test100k -c 'select * from test' > /dev/null\n> as the whole result would be written to dev null (i.e discarded)\n>\n> Yes in case of python: after doing\n>\n> res = con.query('select * from test') # 3 sec - perform query\n> list = res.getresult() # 1 sec - construct list of tuples\n>\n> the whole 128k records are in a python list ,\n> so that i can immediately access any record by python list syntax,\n> ie list[5], list[50000] etc.\n>\n> > If you are able to do that and have your positioned row available to you\n> > immediately, then I'll believe that it's the ODBC driver.\n>\n> It can also be the Cygwin port, which is known to have several problems,\n> and if you run both your client and server on the same machine, then it\n> can also be an interaction of the two processes (cygwin/pgsql server and\n> native win32 ODBC client) not playing together very well.\n>\n> > \"Hannu Krosing\" <[email protected]> wrote in message\n> > news:[email protected]...\n> > > Relaxin kirjutas N, 04.09.2003 kell 03:28:\n> > > > I have a table with 102,384 records in it, each record is 934 bytes.\n> > >\n> > > I created a test database on my Linux (RH9) laptop with 30GB/4200RPM\nide\n> > > drive and P3-1133Mhz, 768MB, populated it with 128000 rows of 930\nbytes\n> > > each and did\n> > >\n> > > [hannu@fuji hannu]$ time psql test100k -c 'select * from test' >\n> > > /dev/null\n> > >\n> > > real 0m3.970s\n> > > user 0m0.980s\n> > > sys 0m0.570s\n> > >\n> > > so it seems definitely not a problem with postgres as such, but\nperhaps\n> > > with Cygwin and/or ODBC driver\n> > >\n> > > I also ran the same query using the \"standard\" pg adapter:\n> > >\n> > > >>> import pg, time\n> > > >>>\n> > > >>> con = pg.connect('test100k')\n> > > >>>\n> > > >>> def getall():\n> > > ... t1 = time.time()\n> > > ... res = con.query('select * from test')\n> > > ... t2 = time.time()\n> > > ... list = res.getresult()\n> > > ... t3 = time.time()\n> > > ... print t2 - t1, t3-t2\n> > > ...\n> > > >>> getall()\n> > > 3.27637195587 1.10105705261\n> > > >>> getall()\n> > > 3.07413101196 0.996125936508\n> > > >>> getall()\n> > > 3.03377199173 1.07322502136\n> > >\n> > > which gave similar results\n> -------------------\n> Hannu\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\n\n", "msg_date": "Thu, 4 Sep 2003 09:52:29 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "Relaxin wrote:\n> I have a table with 102,384 records in it, each record is 934 bytes.\n> \n> Using the follow select statement:\n> SELECT * from <table>\n> \n> PG Info: version 7.3.4 under cygwin on Windows 2000\n> ODBC: version 7.3.100\n> \n> Machine: 500 Mhz/ 512MB RAM / IDE HDD\n> \n> Under PG: Data is returned in 26 secs!!\n> Under SQL Server: Data is returned in 5 secs.\n> Under SQLBase: Data is returned in 6 secs.\n> Under SAPDB: Data is returned in 7 secs.\n\nI created a similar table (934 bytes, 102K records) on a slightly faster \nmachine: P3/800 + 512MB RAM + IDE HD. The server OS is Solaris 8 x86 and \nthe version is 7.3.3.\n\nOn the server (via PSQL client) : 7.5 seconds\nUsing ODBC under VFPW: 10.5 seconds\n\nHow that translates to what you should see, I'm not sure. Assuming it \nwas just the CPU difference, you should see numbers of roughly 13 \nseconds. 
But the documentation says PG under CYGWIN is significantly \nslower than PG under UNIX so your mileage may vary...\n\nHave you changed any of the settings yet in postgresql.conf, \nspecifically the shared_buffers setting?\n\n", "msg_date": "Thu, 04 Sep 2003 10:44:28 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "On Wed, 3 Sep 2003, Relaxin wrote:\n\n> I have a table with 102,384 records in it, each record is 934 bytes.\n> \n> Using the follow select statement:\n> SELECT * from <table>\n> \n> PG Info: version 7.3.4 under cygwin on Windows 2000\n> ODBC: version 7.3.100\n> \n> Machine: 500 Mhz/ 512MB RAM / IDE HDD\n> \n> \n> Under PG: Data is returned in 26 secs!!\n> Under SQL Server: Data is returned in 5 secs.\n> Under SQLBase: Data is returned in 6 secs.\n> Under SAPDB: Data is returned in 7 secs.\n\nThis is typical of postgresql under cygwin, it's much faster under a Unix \nOS like Linux or BSD. That said, you CAN do some things to help speed it \nup, the biggest being tuning the shared_buffers to be something large \nenough to hold a fair bit of data. Set the shared_buffers to 1000, \nrestart, and see if things get better.\n\nRunning Postgresql in a unix emulation layer is guaranteed to make it \nslow. If you've got a spare P100 with 128 Meg of RAM you can throw redhat \n9 or FreeBSD 4.7 on and run Postgresql on, it will likely outrun your \n500MHZ cygwin box, and might even keep up with the other databases on that \nmachine as well.\n\n", "msg_date": "Thu, 4 Sep 2003 16:28:36 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "You would \"get\" all rows, but they'd be stored server side until your \nclient asked for them.\n\nI.e. a cursor would level the field here, since you say that the other \ntest cases stored the entire result set on the server. Or did I \nmisunderstand what you meant there?\n\nOn Thu, 4 Sep 2003, Relaxin wrote:\n\n> All rows are required.\n> \n> \"\"Shridhar Daithankar\"\" <[email protected]> wrote in\n> message news:3F573E8B.31916.A1063F8@localhost...\n> > On 4 Sep 2003 at 0:48, Relaxin wrote:\n> > > All of the databases that I tested the query against gave me immediate\n> > > access to ANY row of the resultset once the data had been returned.\n> > > Ex. 
If I'm currently at the first row and then wanted to goto the\n> 100,000\n> > > row, I would be there immediately, and if I wanted to then goto the 5\n> > > row...same thing, I have the record immediately!\n> > >\n> > > The other databases I tested against stored the entire resultset on the\n> > > Server, I'm not sure what PG does...It seems that brings the entire\n> > > resultset client side.\n> > > If that is the case, how can I have PG store the resultset on the Server\n> AND\n> > > still allow me immediate access to ANY row in the resultset?\n> >\n> > You can use a cursor and get only required rows.\n> >\n> >\n> > Bye\n> > Shridhar\n> >\n> > --\n> > Nick the Greek's Law of Life: All things considered, life is 9 to 5\n> against.\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to [email protected]\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n", "msg_date": "Thu, 4 Sep 2003 16:58:19 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": ">\n> Have you changed any of the settings yet in postgresql.conf,\n> specifically the shared_buffers setting?\n>\n\nfsync = false\ntcpip_socket = true\nshared_buffers = 128\n\n\n", "msg_date": "Thu, 4 Sep 2003 16:14:50 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "I reset the shared_buffers to 1000 from 128, but it made no difference.\n\n\"\"scott.marlowe\"\" <[email protected]> wrote in message\nnews:[email protected]...\n> On Wed, 3 Sep 2003, Relaxin wrote:\n>\n> > I have a table with 102,384 records in it, each record is 934 bytes.\n> >\n> > Using the follow select statement:\n> > SELECT * from <table>\n> >\n> > PG Info: version 7.3.4 under cygwin on Windows 2000\n> > ODBC: version 7.3.100\n> >\n> > Machine: 500 Mhz/ 512MB RAM / IDE HDD\n> >\n> >\n> > Under PG: Data is returned in 26 secs!!\n> > Under SQL Server: Data is returned in 5 secs.\n> > Under SQLBase: Data is returned in 6 secs.\n> > Under SAPDB: Data is returned in 7 secs.\n>\n> This is typical of postgresql under cygwin, it's much faster under a Unix\n> OS like Linux or BSD. That said, you CAN do some things to help speed it\n> up, the biggest being tuning the shared_buffers to be something large\n> enough to hold a fair bit of data. Set the shared_buffers to 1000,\n> restart, and see if things get better.\n>\n> Running Postgresql in a unix emulation layer is guaranteed to make it\n> slow. 
If you've got a spare P100 with 128 Meg of RAM you can throw redhat\n> 9 or FreeBSD 4.7 on and run Postgresql on, it will likely outrun your\n> 500MHZ cygwin box, and might even keep up with the other databases on that\n> machine as well.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n\n", "msg_date": "Thu, 4 Sep 2003 17:45:20 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "A long time ago, in a galaxy far, far away, \"Relaxin\" <[email protected]> wrote:\n>> Have you changed any of the settings yet in postgresql.conf,\n>> specifically the shared_buffers setting?\n>\n> fsync = false\n> tcpip_socket = true\n> shared_buffers = 128\n\nChange fsync to true (you want your data to survive, right?) and\nincrease shared buffers to something that represents ~10% of your\nsystem memory, in blocks of 8K.\n\nSo, if you have 512MB of RAM, then the total blocks is 65536, and it\nwould likely be reasonable to increase shared_buffers to 1/10 of that,\nor about 6500.\n\nWhat is the value of effective_cache_size? That should probably be\nincreased a whole lot, too. If you are mainly just running the\ndatabase on your system, then it would be reasonable to set it to most\nof memory, or\n (* 1/2 (/ (* 512 1024 1024) 8192))\n32768.\n\nNone of this is likely to substantially change the result of that one\nquery, however, and it seems quite likely that it is because\nPostgreSQL is honestly returning the whole result set of ~100K rows at\nonce, whereas the other DBMSes are probably using cursors to return\nonly the few rows of the result that you actually looked at.\n-- \n\"cbbrowne\",\"@\",\"cbbrowne.com\"\nhttp://www3.sympatico.ca/cbbrowne/linuxdistributions.html\nRules of the Evil Overlord #14. \"The hero is not entitled to a last\nkiss, a last cigarette, or any other form of last request.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Thu, 04 Sep 2003 21:26:14 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "Thank you Christopher.\n\n> Change fsync to true (you want your data to survive, right?) and\n> increase shared buffers to something that represents ~10% of your\n> system memory, in blocks of 8K.\n\nI turned it off just in the hope that things would run faster.\n\n> None of this is likely to substantially change the result of that one\n> query, however, and it seems quite likely that it is because\n> PostgreSQL is honestly returning the whole result set of ~100K rows at\n> once, whereas the other DBMSes are probably using cursors to return\n> only the few rows of the result that you actually looked at.\n\nFinally, someone who will actually assume/admit that it is returning the\nentire result set to the client.\nWhere as other DBMS manage the records at the server.\n\nI hope PG could fix/enhance this issue.\n\nThere are several issues that's stopping our company from going with PG\n(with paid support, if available), but this seems to big the one at the top\nof the list.\n\nThe next one is the handling of BLOBS. 
PG handles them like no other system\nI have ever come across.\n\nAfter that is a native Windows port, but we would deal cygwin (for a very\nlittle while) if these other issues were handled.\n\nThanks\n\n\n\n\n\n\"Christopher Browne\" <[email protected]> wrote in message\nnews:[email protected]...\n> A long time ago, in a galaxy far, far away, \"Relaxin\" <[email protected]>\nwrote:\n> >> Have you changed any of the settings yet in postgresql.conf,\n> >> specifically the shared_buffers setting?\n> >\n> > fsync = false\n> > tcpip_socket = true\n> > shared_buffers = 128\n>\n> Change fsync to true (you want your data to survive, right?) and\n> increase shared buffers to something that represents ~10% of your\n> system memory, in blocks of 8K.\n>\n> So, if you have 512MB of RAM, then the total blocks is 65536, and it\n> would likely be reasonable to increase shared_buffers to 1/10 of that,\n> or about 6500.\n>\n> What is the value of effective_cache_size? That should probably be\n> increased a whole lot, too. If you are mainly just running the\n> database on your system, then it would be reasonable to set it to most\n> of memory, or\n> (* 1/2 (/ (* 512 1024 1024) 8192))\n> 32768.\n>\n> None of this is likely to substantially change the result of that one\n> query, however, and it seems quite likely that it is because\n> PostgreSQL is honestly returning the whole result set of ~100K rows at\n> once, whereas the other DBMSes are probably using cursors to return\n> only the few rows of the result that you actually looked at.\n> -- \n> \"cbbrowne\",\"@\",\"cbbrowne.com\"\n> http://www3.sympatico.ca/cbbrowne/linuxdistributions.html\n> Rules of the Evil Overlord #14. \"The hero is not entitled to a last\n> kiss, a last cigarette, or any other form of last request.\"\n> <http://www.eviloverlord.com/>\n\n\n", "msg_date": "Thu, 4 Sep 2003 19:13:30 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "On Thu, 2003-09-04 at 22:13, Relaxin wrote:\n> Finally, someone who will actually assume/admit that it is returning the\n> entire result set to the client.\n> Where as other DBMS manage the records at the server.\n\nIs there a reason you can't use cursors (explicitely, or via ODBC if it\nprovides some glue on top of them) to keep the result set on the server?\n\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/sql-declare.html\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/sql-fetch.html\n\n> The next one is the handling of BLOBS. PG handles them like no other system\n> I have ever come across.\n\nJust FYI, you can use both the lo_*() functions, as well as simple\nbytea/text columns (which can be very large in PostgreSQL).\n\n-Neil\n\n\n", "msg_date": "Thu, 04 Sep 2003 22:32:23 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n\nI think the problem is the ODBC driver NOT using cursors properly even\nif it should. The database itself is not doing anything it shouldn't do,\nin fact it has all the needed functionality to handle this request in a\nfast and effective way - just like any other respectable RDBMS.\n\nI don't know what ODBC driver you are using, and how it is configrued -\nand I never actually used PostgreSQL with ODBC myself. 
However in the\napplications I have developed we DO use 'standardized' DB access\nlibraries, which work on just about any DBMS you throw them at. \nIn our development system, which is running on a low-end dual P2-433mhz\nbox with IDE drives, we routinely test both simple queries as yours and\nmore complex ones, which at times returns several hundred\nthousand (or sometimes even millions) of rows. And processing time is,\ngenerally speaking, in range with what you are seeing on the other\nDBMSes you have.\n\nSo if PG is indeed returning ALL the rows, it is because it is\nexplicitly told to by the ODBC driver, so you need to look there to find\nthe problem. Could there be some kind of connection parameters you are\noverlooking, or is the driver too old? Just throwing out ideas here,\nmost likely you have already thought about it :)\n\nJust thought I'd point out that this is NOT expected behaviour from PG\nitself.\n\n/Eirik\n\nOn Thu, 4 Sep 2003 21:59:01 -0700\n\"Relaxin\" <[email protected]> wrote:\n\n> > Is there a reason you can't use cursors (explicitely, or via ODBC if\n> > it provides some glue on top of them) to keep the result set on the\n> > server?\n> >\n> > http://candle.pha.pa.us/main/writings/pgsql/sgml/sql-declare.html\n> > http://candle.pha.pa.us/main/writings/pgsql/sgml/sql-fetch.html\n> \n> I can only use generally accepted forms of connectivity (ie. ODBC, ADO\n> or OLEDB).\n> This is what many of the people on the Windows side are going to need,\n> because most of us are going to be converting from an existing already\n> established system, such as Oracle, SQL Server or DB2, all of which\n> have 1 or more of the 3 mentioned above.\n> \n> \n> > > The next one is the handling of BLOBS. PG handles them like no\n> > > other\n> system\n> > > I have ever come across.\n> >\n> > Just FYI, you can use both the lo_*() functions, as well as simple\n> > bytea/text columns (which can be very large in PostgreSQL).\n> >\n> > -Neil\n> \n> I know PG has a ODBC driver (that's all I've been using), but it or PG\n> just doesn't handle BLOBS the way people on the Windows side (don't\n> know about Unix) are use too.\n> \n> There is this conversion to octet that must be performed on the data ,\n> I don't understand why, but I guess there was a reason for it long\n> ago, but it seems that it can now be modified to just accept ANY byte\n> you give it and then store it without any manipulation of the data.\n> This will make Postgresql much more portable for the Windows\n> developers...no need for any special handling for a data type that all\n> large RDBMS support.\n> \n> \n> Thanks\n> \n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 3: if posting/reading\n> through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that\n> your message can get through to the mailing list cleanly\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.3 (FreeBSD)\n\niD8DBQE/WBcAdAvR8ct7fEcRAvZPAJ9FgkYxck6Yh5gPeomk8QgWraeV0gCfQF/v\nCjyihMwTdrEZo2Y5YBwLVrI=\n=Ng2I\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 5 Sep 2003 06:54:24 +0200", "msg_from": "Eirik Oeverby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "> Is there a reason you can't use cursors (explicitely, or via ODBC if it\n> provides some glue on top of them) to keep the result set on the server?\n>\n> http://candle.pha.pa.us/main/writings/pgsql/sgml/sql-declare.html\n> 
http://candle.pha.pa.us/main/writings/pgsql/sgml/sql-fetch.html\n\nI can only use generally accepted forms of connectivity (ie. ODBC, ADO or\nOLEDB).\nThis is what many of the people on the Windows side are going to need,\nbecause most of us are going to be converting from an existing already\nestablished system, such as Oracle, SQL Server or DB2, all of which have 1\nor more of the 3 mentioned above.\n\n\n> > The next one is the handling of BLOBS. PG handles them like no other\nsystem\n> > I have ever come across.\n>\n> Just FYI, you can use both the lo_*() functions, as well as simple\n> bytea/text columns (which can be very large in PostgreSQL).\n>\n> -Neil\n\nI know PG has a ODBC driver (that's all I've been using), but it or PG just\ndoesn't handle BLOBS the way people on the Windows side (don't know about\nUnix) are use too.\n\nThere is this conversion to octet that must be performed on the data , I\ndon't understand why, but I guess there was a reason for it long ago, but it\nseems that it can now be modified to just accept ANY byte you give it and\nthen store it without any manipulation of the data.\nThis will make Postgresql much more portable for the Windows developers...no\nneed for any special handling for a data type that all large RDBMS support.\n\n\nThanks\n\n\n", "msg_date": "Thu, 4 Sep 2003 21:59:01 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "\nRelaxin,\nI can't remember during this thread if you said you were using ODBC or not.\nIf you are, then your problem is with the ODBC driver. You will need to\ncheck the Declare/Fetch box or you will definitely bring back the entire\nrecordset. For small a small recordset this is not a problem, but the\nlarger the recordset the slower the data is return to the client. I played\naround with the cache size on the driver and found a value between 100 to\n200 provided good results.\n\nHTH\nPatrick Hatcher\n\n\n\n\n \n \"Relaxin\" <[email protected]> \n Sent by: To: [email protected] \n pgsql-performance-owner@post cc: \n gresql.org Subject: Re: [PERFORM] SELECT's take a long time compared to other DBMS \n \n \n 09/04/2003 07:13 PM \n \n\n\n\n\nThank you Christopher.\n\n> Change fsync to true (you want your data to survive, right?) and\n> increase shared buffers to something that represents ~10% of your\n> system memory, in blocks of 8K.\n\nI turned it off just in the hope that things would run faster.\n\n> None of this is likely to substantially change the result of that one\n> query, however, and it seems quite likely that it is because\n> PostgreSQL is honestly returning the whole result set of ~100K rows at\n> once, whereas the other DBMSes are probably using cursors to return\n> only the few rows of the result that you actually looked at.\n\nFinally, someone who will actually assume/admit that it is returning the\nentire result set to the client.\nWhere as other DBMS manage the records at the server.\n\nI hope PG could fix/enhance this issue.\n\nThere are several issues that's stopping our company from going with PG\n(with paid support, if available), but this seems to big the one at the top\nof the list.\n\nThe next one is the handling of BLOBS. 
PG handles them like no other\nsystem\nI have ever come across.\n\nAfter that is a native Windows port, but we would deal cygwin (for a very\nlittle while) if these other issues were handled.\n\nThanks\n\n\n\n\n\n\"Christopher Browne\" <[email protected]> wrote in message\nnews:[email protected]...\n> A long time ago, in a galaxy far, far away, \"Relaxin\" <[email protected]>\nwrote:\n> >> Have you changed any of the settings yet in postgresql.conf,\n> >> specifically the shared_buffers setting?\n> >\n> > fsync = false\n> > tcpip_socket = true\n> > shared_buffers = 128\n>\n> Change fsync to true (you want your data to survive, right?) and\n> increase shared buffers to something that represents ~10% of your\n> system memory, in blocks of 8K.\n>\n> So, if you have 512MB of RAM, then the total blocks is 65536, and it\n> would likely be reasonable to increase shared_buffers to 1/10 of that,\n> or about 6500.\n>\n> What is the value of effective_cache_size? That should probably be\n> increased a whole lot, too. If you are mainly just running the\n> database on your system, then it would be reasonable to set it to most\n> of memory, or\n> (* 1/2 (/ (* 512 1024 1024) 8192))\n> 32768.\n>\n> None of this is likely to substantially change the result of that one\n> query, however, and it seems quite likely that it is because\n> PostgreSQL is honestly returning the whole result set of ~100K rows at\n> once, whereas the other DBMSes are probably using cursors to return\n> only the few rows of the result that you actually looked at.\n> --\n> \"cbbrowne\",\"@\",\"cbbrowne.com\"\n> http://www3.sympatico.ca/cbbrowne/linuxdistributions.html\n> Rules of the Evil Overlord #14. \"The hero is not entitled to a last\n> kiss, a last cigarette, or any other form of last request.\"\n> <http://www.eviloverlord.com/>\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n\n\n", "msg_date": "Fri, 5 Sep 2003 08:05:00 -0700", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "Expect that the Declare/Fetch only creates a forwardonly cursor, you can go\nbackwards thru the result set.\n\n\"\"Patrick Hatcher\"\" <[email protected]> wrote in message\nnews:OFAD2A2CF4.499F8F67-ON88256D98.00527BCB-88256D98.00538130@fds.com...\n>\n> Relaxin,\n> I can't remember during this thread if you said you were using ODBC or\nnot.\n> If you are, then your problem is with the ODBC driver. You will need to\n> check the Declare/Fetch box or you will definitely bring back the entire\n> recordset. For small a small recordset this is not a problem, but the\n> larger the recordset the slower the data is return to the client. I\nplayed\n> around with the cache size on the driver and found a value between 100 to\n> 200 provided good results.\n>\n> HTH\n> Patrick Hatcher\n>\n>\n>\n>\n>\n> \"Relaxin\" <[email protected]>\n> Sent by: To:\[email protected]\n> pgsql-performance-owner@post cc:\n> gresql.org Subject: Re:\n[PERFORM] SELECT's take a long time compared to other DBMS\n>\n>\n> 09/04/2003 07:13 PM\n>\n>\n>\n>\n>\n> Thank you Christopher.\n>\n> > Change fsync to true (you want your data to survive, right?) 
and\n> > increase shared buffers to something that represents ~10% of your\n> > system memory, in blocks of 8K.\n>\n> I turned it off just in the hope that things would run faster.\n>\n> > None of this is likely to substantially change the result of that one\n> > query, however, and it seems quite likely that it is because\n> > PostgreSQL is honestly returning the whole result set of ~100K rows at\n> > once, whereas the other DBMSes are probably using cursors to return\n> > only the few rows of the result that you actually looked at.\n>\n> Finally, someone who will actually assume/admit that it is returning the\n> entire result set to the client.\n> Where as other DBMS manage the records at the server.\n>\n> I hope PG could fix/enhance this issue.\n>\n> There are several issues that's stopping our company from going with PG\n> (with paid support, if available), but this seems to big the one at the\ntop\n> of the list.\n>\n> The next one is the handling of BLOBS. PG handles them like no other\n> system\n> I have ever come across.\n>\n> After that is a native Windows port, but we would deal cygwin (for a very\n> little while) if these other issues were handled.\n>\n> Thanks\n>\n>\n>\n>\n>\n> \"Christopher Browne\" <[email protected]> wrote in message\n> news:[email protected]...\n> > A long time ago, in a galaxy far, far away, \"Relaxin\" <[email protected]>\n> wrote:\n> > >> Have you changed any of the settings yet in postgresql.conf,\n> > >> specifically the shared_buffers setting?\n> > >\n> > > fsync = false\n> > > tcpip_socket = true\n> > > shared_buffers = 128\n> >\n> > Change fsync to true (you want your data to survive, right?) and\n> > increase shared buffers to something that represents ~10% of your\n> > system memory, in blocks of 8K.\n> >\n> > So, if you have 512MB of RAM, then the total blocks is 65536, and it\n> > would likely be reasonable to increase shared_buffers to 1/10 of that,\n> > or about 6500.\n> >\n> > What is the value of effective_cache_size? That should probably be\n> > increased a whole lot, too. If you are mainly just running the\n> > database on your system, then it would be reasonable to set it to most\n> > of memory, or\n> > (* 1/2 (/ (* 512 1024 1024) 8192))\n> > 32768.\n> >\n> > None of this is likely to substantially change the result of that one\n> > query, however, and it seems quite likely that it is because\n> > PostgreSQL is honestly returning the whole result set of ~100K rows at\n> > once, whereas the other DBMSes are probably using cursors to return\n> > only the few rows of the result that you actually looked at.\n> > --\n> > \"cbbrowne\",\"@\",\"cbbrowne.com\"\n> > http://www3.sympatico.ca/cbbrowne/linuxdistributions.html\n> > Rules of the Evil Overlord #14. 
\"The hero is not entitled to a last\n> > kiss, a last cigarette, or any other form of last request.\"\n> > <http://www.eviloverlord.com/>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n\n", "msg_date": "Fri, 5 Sep 2003 11:18:35 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "On Fri, 2003-09-05 at 14:18, Relaxin wrote:\n> Expect that the Declare/Fetch only creates a forwardonly cursor, you can go\n> backwards thru the result set.\n\nNo, DECLARE can create scrollable cursors, read the ref page again. This\nfunctionality is much improved in PostgreSQL 7.4, though.\n\n-Neil\n\n\n", "msg_date": "Fri, 05 Sep 2003 17:09:49 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" }, { "msg_contents": "It is forward only in the ODBC driver.\n\n\"Neil Conway\" <[email protected]> wrote in message\nnews:1062796189.447.9.camel@tokyo...\n> On Fri, 2003-09-05 at 14:18, Relaxin wrote:\n> > Expect that the Declare/Fetch only creates a forwardonly cursor, you can\ngo\n> > backwards thru the result set.\n>\n> No, DECLARE can create scrollable cursors, read the ref page again. This\n> functionality is much improved in PostgreSQL 7.4, though.\n>\n> -Neil\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n\n\n", "msg_date": "Fri, 5 Sep 2003 17:55:46 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT's take a long time compared to other DBMS" } ]
[ { "msg_contents": "(Please follow Mail-Followup-To, I'm not on the pgsql-performance\nmailing list but am on the Linux-XFS mailing list. My apologies too for\nthe cross-post. I'm cc'ing the Linux-XFS mailing list in case people\nthere will be interested in this, too.)\n\n\nHi,\n\nWe have a server running PostgreSQL v7.3.3 on Debian GNU/Linux with\nLinux kernel 2.4.21-xfs. The PostgreSQL data is stored on an XFS[1]\npartition mounted with the options \"rw,noatime,logbufs=8\". The machine\nis an Intel Pentium III 733MHz with 512MB RAM and a four-disk hardware\nIDE RAID-5 array with a 3ware controller.\n\nAmong other databases, we have a centralized Snort[2] database that is\nanalyzed by ACIDLab[3]. I noticed performance problems during SELECT and\nINSERT operations when the tables reach around 200,000 records. Because\nof timeout issues, the PHP-based ACIDLab can't be used properly.\n\nI read the performance section of the ACID FAQ[4] as well as the\nPostgreSQL \"Managing Kernel Resources\" document , and so far have tuned\nmy system by setting /proc/sys/kernel/{shmall,shmmax} to 134217728.\n\nI also turned off fsync in /etc/postgresql/postgresql.conf.\n\nThe latter did a LOT to improve INSERT performance, which is now\nCPU-bound instead of I/O-bound. However, as expected, I am concerned\nabout the reliability penalty this will cause. Our server has been up\nand running without problems for 67 days since the last reboot, but this\ndoesn't mean it will never hiccup either because of some random problem\nor because of an extended power outage.\n\nWould anyone have \"authoritative\" information with respect to:\n\n - the way PostgreSQL expects data to be written to disk without the\n fsync calls for things not to get corrupted in the event of a crash,\n and\n \n - the way XFS writes data to disk without the fsync calls that\n PostgreSQL normally does and how this will affect PostgreSQL data\n integrity in the event of a system crash?\n\nI know that at the end of the day, if I value my data, I must (1) back\nit up regularly, and (2) keep fsync enabled in PostgreSQL. However given\nthe significance performance hit (at least as far as massive INSERT or\nUPDATE operations are concerned) and the journalling component of XFS,\nit would be great to find out just how bad the odds are if the system\ngoes down unexpectedly.\n\nThank you very much for your time. :)\n\n --> Jijo\n\nNote- I should also have selected RAID10 instead of RAID5, but that's a\nchange I can't afford to do at this point so I have to explore other\noptions.\n\n[1] http://oss.sgi.com/projects/xfs/\n[2] http://www.snort.org\n[3] http://acidlab.sourceforge.net\n[4] http://www.andrew.cmu.edu/~rdanyliw/snort/acid_faq.html#faq_c9\n[5] http://developer.postgresql.org/docs/postgres/kernel-resources.html\n\n-- \nFederico Sevilla III : http://jijo.free.net.ph : When we speak of free\nNetwork Administrator : The Leather Collection, Inc. 
: software we refer to\nGnuPG Key ID : 0x93B746BE : freedom, not price.\n", "msg_date": "Thu, 4 Sep 2003 11:04:04 +0800", "msg_from": "Federico Sevilla III <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL Reliability when fsync = false on Linux-XFS" }, { "msg_contents": "> - the way PostgreSQL expects data to be written to disk without the\n> fsync calls for things not to get corrupted in the event of a crash,\n> and\n\nIf you want the filesystem to deal with this, I believe it is necessary\nfor it to write the data out in the same order the write requests are\nsupplied in between ALL PostgreSQL processes. If you can accomplish\nthis, you do not need WAL.\n\nThere are shortcuts which can be taken in the above, which is where WAL\ncomes in. WAL writes are ordered between processes and WAL of a single\nprocess always hits disk prior to commit -- fsync forces both of these.\nDue to WAL being in place, data can be written at almost any time. The\nbenefit to WAL is a single file fsync rather than the entire database\nrequiring one (PostgreSQL pre-7.1 method).\n\n> I know that at the end of the day, if I value my data, I must (1) back\n> it up regularly, and (2) keep fsync enabled in PostgreSQL. However given\n> the significance performance hit (at least as far as massive INSERT or\n\nIf you want good performance, invest in a SCSI controller that has\nbattery backed write cache. A few megs will do it. You will find\nperformance similar to fsync being off (you don't wait for disk\nrotation) but without the whole dataloss issue. Another alternative is\nto buy a small 15krpm disk dedicated for WAL. In theory you can achieve\none commit per rotation.\n\nI assume your inserts are not supplied in Bulk. The fsync overhead is\nper transaction, not per insert.", "msg_date": "Wed, 03 Sep 2003 23:36:36 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Reliability when fsync = false on Linux-XFS" }, { "msg_contents": "Rod Taylor kirjutas N, 04.09.2003 kell 06:36:\n> Another alternative is\n> to buy a small 15krpm disk dedicated for WAL. In theory you can achieve\n> one commit per rotation.\n\nOne commit per rotation would still be only 15000/60. = 250 tps, but\nfortunately you can get better results if you use multiple concurrent\nbackends, then in the best case you can get one commit per backend per\nrotation.\n\n-----------------\nHannu\n\n", "msg_date": "Thu, 04 Sep 2003 09:55:27 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Reliability when fsync = false on" }, { "msg_contents": "On 3 Sep 2003 at 23:36, Rod Taylor wrote:\n\n> > - the way PostgreSQL expects data to be written to disk without the\n> > fsync calls for things not to get corrupted in the event of a crash,\n> > and\n> \n> If you want the filesystem to deal with this, I believe it is necessary\n> for it to write the data out in the same order the write requests are\n> supplied in between ALL PostgreSQL processes. If you can accomplish\n> this, you do not need WAL.\n> \n> There are shortcuts which can be taken in the above, which is where WAL\n> comes in. WAL writes are ordered between processes and WAL of a single\n> process always hits disk prior to commit -- fsync forces both of these.\n> Due to WAL being in place, data can be written at almost any time. 
The\n> benefit to WAL is a single file fsync rather than the entire database\n> requiring one (PostgreSQL pre-7.1 method).\n> \n> > I know that at the end of the day, if I value my data, I must (1) back\n> > it up regularly, and (2) keep fsync enabled in PostgreSQL. However given\n> > the significance performance hit (at least as far as massive INSERT or\n> \n> If you want good performance, invest in a SCSI controller that has\n> battery backed write cache. A few megs will do it. You will find\n> performance similar to fsync being off (you don't wait for disk\n> rotation) but without the whole dataloss issue. Another alternative is\n> to buy a small 15krpm disk dedicated for WAL. In theory you can achieve\n> one commit per rotation.\n\nJust wonderin. What if you symlink WAL to a directory which is on mounted USB \nRAM drive?\n\nWill that increase any throughput? I am sure a 256/512MB flash drive will cost \nlot less than a SCSI disk. May be even a GB on flash drive would do..\n\nJust a thought..\n\nBye\n Shridhar\n\n--\nAmbition, n:\tAn overmastering desire to be vilified by enemies while\tliving and \nmade ridiculous by friends when dead.\t\t-- Ambrose Bierce\n\n", "msg_date": "Thu, 04 Sep 2003 12:34:51 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Reliability when fsync = false on Linux-XFS" }, { "msg_contents": "> Just wonderin. What if you symlink WAL to a directory which is on\n> mounted USB RAM drive?\n\nUSB 2.0 you mean? It supposedly runs at 1394 speeds, but USB 1.0/1.1\nruns at 1MB/s under ideal circumstances... that's slower than even old\nIDE drives.\n\n> Will that increase any throughput?\n\nProbably not...\n\n> I am sure a 256/512MB flash drive will cost lot less than a SCSI\n> disk. May be even a GB on flash drive would do..\n\nThat's true... but on a per $$/MB, you're better off investing in RAM\nand increasing your effective_cache_size. If dd to a flash card is\nfaster than to an IDE drive, please let me know. :) -sc\n\n-- \nSean Chittenden\nUNIX(TM), a BSD like Operating System\n", "msg_date": "Thu, 4 Sep 2003 00:27:35 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Reliability when fsync = false on Linux-XFS" }, { "msg_contents": "Sean Chittenden <[email protected]> writes:\n>> Just wonderin. What if you symlink WAL to a directory which is on\n>> mounted USB RAM drive?\n\n> USB 2.0 you mean? It supposedly runs at 1394 speeds, but USB 1.0/1.1\n> runs at 1MB/s under ideal circumstances... that's slower than even old\n> IDE drives.\n\n>> Will that increase any throughput?\n\n> Probably not...\n\nAlso, doesn't flash memory have a very limited lifetime in write cycles?\nUsing it as WAL, you'd wear it out PDQ.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Sep 2003 09:48:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Reliability when fsync = false on Linux-XFS " }, { "msg_contents": "On Thu, 4 Sep 2003, Federico Sevilla III wrote:\n\n> (Please follow Mail-Followup-To, I'm not on the pgsql-performance\n> mailing list but am on the Linux-XFS mailing list. My apologies too for\n> the cross-post. I'm cc'ing the Linux-XFS mailing list in case people\n> there will be interested in this, too.)\n> \n> \n> Hi,\n> \n> We have a server running PostgreSQL v7.3.3 on Debian GNU/Linux with\n> Linux kernel 2.4.21-xfs. 
The PostgreSQL data is stored on an XFS[1]\n\nTwo points.\n\n1: 7.3.3 has a data loss issue fixed in 7.3.4. You should upgrade to \navoid the pain associated with this problem.\n\n2: When you turn off fsync, all bets are off. If the data doesn't get \nwritten in the right order, your database may be corrupted if power is \nshut off. \n\n", "msg_date": "Thu, 4 Sep 2003 16:35:08 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Reliability when fsync = false on Linux-XFS" } ]
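Rod's point that the fsync cost is paid per transaction rather than per insert suggests the cheapest safe mitigation for a logging workload like this: leave fsync = true but group the inserts so each batch commits, and therefore waits on the WAL flush, only once. A rough sketch, with an invented event_log table standing in for the Snort tables:

    BEGIN;
    INSERT INTO event_log (logged_at, detail) VALUES (now(), 'alert 1');
    INSERT INTO event_log (logged_at, detail) VALUES (now(), 'alert 2');
    -- ... the rest of the batch ...
    COMMIT;   -- a single fsync wait covers the whole batch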
[ { "msg_contents": "Our port of OSDL DBT3 test suite to PostgreSQL (see Background\ninformation below) is nearing completion. We would also like to confirm\nour understanding of an outstanding consistency issue.\n\nWe have not been able to do meaningful kernel testing since the runs\n(all parameters/kernels being equal) arewildly varying - sometimes\n20-25% differences in the metrics run to run. \n\nWe found plans were changing from test run to test run. In one case a\nplan ran 20 minutes in the throughput test of one run, and 2 seconds in\nanother run! By forcing the contents of pg_statistics to be the same\nbefore the queries run, we have consistent results now. So we know for\nsure the problem is due to the random nature of the stats sampling: the\noptimizer always saw different stats data resulting in different plans. \n\nStephan Szabo kindly responded to our earlier queries suggesting we look\nat default_statistics_target and ALTER TABLE ALTER COLUMN SET\nSTATISTICS. \n\nThese determine the number of bins in the histogram for a given column. \nBut for a large number of rows (for example 6 million) the maximum value\n(1000) does not guarantee that ANALYZE will do a full scan of the table.\nWe do not see a way to guarantee the same statistics run to run without\nforcing ANALYZE to examine every row of every table. \n\nAre we wrong in our analysis?\n\nAre there main-stream alternatives we have missed? \n \nHow do you do testing on large tables and make the execution plans\nconsistent?\n\nIs there a change to ANALYZE in 7.4 that solves our problem?\n\nTIA.\n\n\n********************************************************************\nBackground information:\n\nDatabase Test 3 (DBT-3) is a decision support workload.\n\nThe test kit itself has been executing on PostgreSQL for some time, is\navailable on sourceforge, and is implemented on our Scalable Test\nPlatform (STP). \n\n\nA bit of background: The test \n(1) builds a database from load files, gathers statistics, \n(2) runs a single stream of 22 queries plus a set of inserts and deletes\n(the power test), then \n(3) runs a multiple stream of the queries with one added stream of\ninserts/deletes (the throughput test). \n\n\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n", "msg_date": "04 Sep 2003 10:41:10 -0700", "msg_from": "Mary Edie Meredith <[email protected]>", "msg_from_op": true, "msg_subject": "[GENERAL] how to get accurate values in pg_statistic (continued)" }, { "msg_contents": "On Thu, 2003-09-04 at 13:41, Mary Edie Meredith wrote:\n> Our port of OSDL DBT3 test suite to PostgreSQL (see Background\n> information below) is nearing completion. We would also like to confirm\n> our understanding of an outstanding consistency issue.\n> \n> We have not been able to do meaningful kernel testing since the runs\n> (all parameters/kernels being equal) arewildly varying - sometimes\n> 20-25% differences in the metrics run to run. \n\nRun a VACUUM FULL ANALYZE between runs. This will force a full scan of\nall data for stats, as well as ensure the table is consistently\ncompacted.", "msg_date": "Thu, 04 Sep 2003 13:46:47 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "Mary Edie Meredith <[email protected]> writes:\n> Stephan Szabo kindly responded to our earlier queries suggesting we look\n> at default_statistics_target and ALTER TABLE ALTER COLUMN SET\n> STATISTICS. 
\n\n> These determine the number of bins in the histogram for a given column. \n> But for a large number of rows (for example 6 million) the maximum value\n> (1000) does not guarantee that ANALYZE will do a full scan of the table.\n> We do not see a way to guarantee the same statistics run to run without\n> forcing ANALYZE to examine every row of every table. \n\nDo you actually still have a problem with the plans changing when the\nstats target is above 100 or so? I think the notion of \"force ANALYZE\nto do a full scan\" is inherently wrongheaded ... it certainly would not\nproduce numbers that have anything to do with ordinary practice.\n\nIf you have data statistics that are so bizarre that the planner still\ngets things wrong with a target of 1000, then I'd like to know more\nabout why.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Sep 2003 19:16:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic (continued) " }, { "msg_contents": "On Thu, 2003-09-04 at 13:46, Rod Taylor wrote:\n> Run a VACUUM FULL ANALYZE between runs. This will force a full scan of\n> all data for stats\n\nIt will? Are you sure about that?\n\n-Neil\n\n\n", "msg_date": "Thu, 04 Sep 2003 19:50:24 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "The documentation lead us to believe that it would not, but we are\ntesting just the same (at least checking that the pg_statistics are the\nsame after each load and VACUUM FULL ANALYZE). Will report back.\n\n\n\nOn Thu, 2003-09-04 at 16:50, Neil Conway wrote:\n> On Thu, 2003-09-04 at 13:46, Rod Taylor wrote:\n> > Run a VACUUM FULL ANALYZE between runs. This will force a full scan of\n> > all data for stats\n> \n> It will? Are you sure about that?\n> \n> -Neil\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n", "msg_date": "04 Sep 2003 17:01:56 -0700", "msg_from": "Mary Edie Meredith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "On Thu, 2003-09-04 at 19:50, Neil Conway wrote:\n> On Thu, 2003-09-04 at 13:46, Rod Taylor wrote:\n> > Run a VACUUM FULL ANALYZE between runs. This will force a full scan of\n> > all data for stats\n> \n> It will? Are you sure about that?\n\nYou're right. According to the docs it won't.\n\nI had a poor stats issue on one table that was solved using that\ncommand, coincidentally apparently.", "msg_date": "Thu, 04 Sep 2003 20:29:11 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "I certainly don't claim that it is appropriate to force customers into a\nfull analysis, particularly if random sampling versus a full scan of the\ndata reveals little to no performance differences in the plans. Being\nable to sample accurately is _very nice for large tables.\n\nFor our testing purposes, however, consistent results are extremely\nimportant. We have observed that small difference in one plan for one of\n22 queries can cause a difference in the DBT-3 results. 
If this\nhappens, a small change in performance runs between two Linux kernels\nmay appear to be due to the kernels, when in fact it is due to the plan\nchange. \n\nWe know that the plans are _exactly the same if the data in the\npg_statistics table is the same from run to run (all other things being\nequal). So what we need to have is identical optimizer costs\n(pg_statistics) for the same table data for each.\n\nI feel certain that the pg_statistics table will be identical from run\nto run if analyze looks at every row. Thus our hope to find a way to\nget that.\n\nWe did runs over night. We can confirm that VACUUM FULL ANALYZE does\nnot produce the same pg_statistics run to run. With the default (10)\ndefault_statistics_target the plans are also different.\n\nWe ran additional tests with default_statistics_target set to 1000 (the\nmax I believe). The plans are the same over the different runs, but the\npg_statistics table has different cost values. The performance results\nof the runs are consistent (we would expect this with the same plans). \nThe resulting performance metrics are similar to the best plans we see\nusing the default histogram size (good news).\n\nHowever, we worry that one day the cost will change enough for whatever\nreason to cause a plan change, especially for a larger database scale\nfactor (database size/row size). \n\nI know we appear to be an isolated case, but customers also do testing\nand may have the same consistency issues we have. I can also imagine\ncases where customers want to guarantee that plans stay the same\n(between replicated sites, for example). If two developers are\nanalyzing changes to the optimizer, don't you want the costs used for\ntesting on their two systems to be identical for comparison purposes?\n\nAnyway, IMHO I believe that an option for an ANALYZE FULL (\"sampling\"\nall rows) would be valuable. Any other ideas for how to force this\nwithout code change are very welcome. \n\nThanks for your info!\n\n\n\nOn Thu, 2003-09-04 at 16:16, Tom Lane wrote:\n> Mary Edie Meredith <[email protected]> writes:\n> > Stephan Szabo kindly responded to our earlier queries suggesting we look\n> > at default_statistics_target and ALTER TABLE ALTER COLUMN SET\n> > STATISTICS. \n> \n> > These determine the number of bins in the histogram for a given column. \n> > But for a large number of rows (for example 6 million) the maximum value\n> > (1000) does not guarantee that ANALYZE will do a full scan of the table.\n> > We do not see a way to guarantee the same statistics run to run without\n> > forcing ANALYZE to examine every row of every table. \n> \n> Do you actually still have a problem with the plans changing when the\n> stats target is above 100 or so? I think the notion of \"force ANALYZE\n> to do a full scan\" is inherently wrongheaded ... 
it certainly would not\n> produce numbers that have anything to do with ordinary practice.\n> \n> If you have data statistics that are so bizarre that the planner still\n> gets things wrong with a target of 1000, then I'd like to know more\n> about why.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n", "msg_date": "05 Sep 2003 10:44:25 -0700", "msg_from": "Mary Edie Meredith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "Mary Edie Meredith <[email protected]> writes:\n> For our testing purposes, however, consistent results are extremely\n> important. We have observed that small difference in one plan for one of\n> 22 queries can cause a difference in the DBT-3 results. If this\n> happens, a small change in performance runs between two Linux kernels\n> may appear to be due to the kernels, when in fact it is due to the plan\n> change. \n\nFair enough. If you are trying to force exactly repeatable results,\nwhy don't you just \"set seed = 0\" before you ANALYZE? There's only\none random-number generator, so that should force ANALYZE to make the\nsame random sampling every time.\n\nAlso, it'd be a good idea to ANALYZE the needed tables by name,\nexplicitly, to ensure that they are analyzed in a known order\nrather than whatever order ANALYZE happens to find them in pg_class.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Sep 2003 14:38:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic (continued) " }, { "msg_contents": "Mary Edie Meredith wrote:\n> I certainly don't claim that it is appropriate to force customers into a\n> full analysis, particularly if random sampling versus a full scan of the\n> data reveals little to no performance differences in the plans. Being\n> able to sample accurately is _very nice for large tables.\n> \n> For our testing purposes, however, consistent results are extremely\n> important. We have observed that small difference in one plan for one of\n> 22 queries can cause a difference in the DBT-3 results. If this\n> happens, a small change in performance runs between two Linux kernels\n> may appear to be due to the kernels, when in fact it is due to the plan\n> change. \n> \n> We know that the plans are _exactly the same if the data in the\n> pg_statistics table is the same from run to run (all other things being\n> equal). So what we need to have is identical optimizer costs\n> (pg_statistics) for the same table data for each.\n> \n> I feel certain that the pg_statistics table will be identical from run\n> to run if analyze looks at every row. Thus our hope to find a way to\n> get that.\n\n\nActually, if you are usig GEQO (many tables in a join) the optimizer\nitself will randomly try plans --- even worse than random statistics.\n\nWe do have:\n\n\t#geqo_random_seed = -1 # -1 = use variable seed\n\nthat lets you force a specific random seed for testing purposes. I\nwonder if that could be extended to control VACUUM radomization too. 
\nRight now, it just controls GEQO and in fact gets reset on every\noptimizer run.\n\nI wonder if you could just poke a srandom(10) in\nsrc/backend/command/analyze.c.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 5 Sep 2003 16:49:11 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> We do have:\n> \t#geqo_random_seed = -1 # -1 = use variable seed\n\n> that lets you force a specific random seed for testing purposes. I\n> wonder if that could be extended to control VACUUM radomization too. \n> Right now, it just controls GEQO and in fact gets reset on every\n> optimizer run.\n\nActually, just the other day I was thinking we should take that out.\nSince there is only one random number generator in the C library,\nGEQO is messing with everyone else's state every time it decides to do\nan srandom(). And there is certainly no need to do an explicit srandom\nwith a \"random\" seed every time through the optimizer, which is the\ncode's default behavior at the moment. That just decreases the\nrandomness AFAICS, compared to letting the established sequence run.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Sep 2003 16:55:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > We do have:\n> > \t#geqo_random_seed = -1 # -1 = use variable seed\n> \n> > that lets you force a specific random seed for testing purposes. I\n> > wonder if that could be extended to control VACUUM radomization too. \n> > Right now, it just controls GEQO and in fact gets reset on every\n> > optimizer run.\n> \n> Actually, just the other day I was thinking we should take that out.\n> Since there is only one random number generator in the C library,\n> GEQO is messing with everyone else's state every time it decides to do\n> an srandom(). And there is certainly no need to do an explicit srandom\n> with a \"random\" seed every time through the optimizer, which is the\n> code's default behavior at the moment. That just decreases the\n> randomness AFAICS, compared to letting the established sequence run.\n\nAgreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 5 Sep 2003 17:02:02 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "\nI have learned you can use:\n\n\tSET random = 0;\n\nto force identical statistics every time you run ANALYZE.\n\n---------------------------------------------------------------------------\n\nMary Edie Meredith wrote:\n> I certainly don't claim that it is appropriate to force customers into a\n> full analysis, particularly if random sampling versus a full scan of the\n> data reveals little to no performance differences in the plans. 
Being\n> able to sample accurately is _very nice for large tables.\n> \n> For our testing purposes, however, consistent results are extremely\n> important. We have observed that small difference in one plan for one of\n> 22 queries can cause a difference in the DBT-3 results. If this\n> happens, a small change in performance runs between two Linux kernels\n> may appear to be due to the kernels, when in fact it is due to the plan\n> change. \n> \n> We know that the plans are _exactly the same if the data in the\n> pg_statistics table is the same from run to run (all other things being\n> equal). So what we need to have is identical optimizer costs\n> (pg_statistics) for the same table data for each.\n> \n> I feel certain that the pg_statistics table will be identical from run\n> to run if analyze looks at every row. Thus our hope to find a way to\n> get that.\n> \n> We did runs over night. We can confirm that VACUUM FULL ANALYZE does\n> not produce the same pg_statistics run to run. With the default (10)\n> default_statistics_target the plans are also different.\n> \n> We ran additional tests with default_statistics_target set to 1000 (the\n> max I believe). The plans are the same over the different runs, but the\n> pg_statistics table has different cost values. The performance results\n> of the runs are consistent (we would expect this with the same plans). \n> The resulting performance metrics are similar to the best plans we see\n> using the default histogram size (good news).\n> \n> However, we worry that one day the cost will change enough for whatever\n> reason to cause a plan change, especially for a larger database scale\n> factor (database size/row size). \n> \n> I know we appear to be an isolated case, but customers also do testing\n> and may have the same consistency issues we have. I can also imagine\n> cases where customers want to guarantee that plans stay the same\n> (between replicated sites, for example). If two developers are\n> analyzing changes to the optimizer, don't you want the costs used for\n> testing on their two systems to be identical for comparison purposes?\n> \n> Anyway, IMHO I believe that an option for an ANALYZE FULL (\"sampling\"\n> all rows) would be valuable. Any other ideas for how to force this\n> without code change are very welcome. \n> \n> Thanks for your info!\n> \n> \n> \n> On Thu, 2003-09-04 at 16:16, Tom Lane wrote:\n> > Mary Edie Meredith <[email protected]> writes:\n> > > Stephan Szabo kindly responded to our earlier queries suggesting we look\n> > > at default_statistics_target and ALTER TABLE ALTER COLUMN SET\n> > > STATISTICS. \n> > \n> > > These determine the number of bins in the histogram for a given column. \n> > > But for a large number of rows (for example 6 million) the maximum value\n> > > (1000) does not guarantee that ANALYZE will do a full scan of the table.\n> > > We do not see a way to guarantee the same statistics run to run without\n> > > forcing ANALYZE to examine every row of every table. \n> > \n> > Do you actually still have a problem with the plans changing when the\n> > stats target is above 100 or so? I think the notion of \"force ANALYZE\n> > to do a full scan\" is inherently wrongheaded ... 
it certainly would not\n> > produce numbers that have anything to do with ordinary practice.\n> > \n> > If you have data statistics that are so bizarre that the planner still\n> > gets things wrong with a target of 1000, then I'd like to know more\n> > about why.\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> -- \n> Mary Edie Meredith <[email protected]>\n> Open Source Development Lab\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 7 Sep 2003 12:22:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "\nMary Edie Meredith <[email protected]> writes:\n\n> We ran additional tests with default_statistics_target set to 1000 (the\n> max I believe). The plans are the same over the different runs, but the\n> pg_statistics table has different cost values. The performance results\n> of the runs are consistent (we would expect this with the same plans). \n> The resulting performance metrics are similar to the best plans we see\n> using the default histogram size (good news).\n\nHm, would it be possible to do a binary search and find the target at which\nyou start getting consistent plans? Perhaps the default of 10 is simply way\ntoo small and should be raised? \n\nObviously this would depend on the data model, but I suspect if your aim is\nfor the benchmark data to be representative of typical data models, which\nscares me into thinking perhaps users are seeing similarly unpredictably\nvariable performance.\n\n-- \ngreg\n\n", "msg_date": "07 Sep 2003 19:18:01 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Perhaps the default of 10 is simply way\n> too small and should be raised? \n\nI've suspected since the default existed that it might be too small ;-).\nNo one's yet done any experiments to try to establish a better default,\nthough. I suppose the first hurdle is to find a representative dataset.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 07 Sep 2003 20:32:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic " }, { "msg_contents": "Tom Lane wrote:\n> Mary Edie Meredith <[email protected]> writes:\n> > Stephan Szabo kindly responded to our earlier queries suggesting we look\n> > at default_statistics_target and ALTER TABLE ALTER COLUMN SET\n> > STATISTICS. \n> \n> > These determine the number of bins in the histogram for a given column. 
\n> > But for a large number of rows (for example 6 million) the maximum value\n> > (1000) does not guarantee that ANALYZE will do a full scan of the table.\n> > We do not see a way to guarantee the same statistics run to run without\n> > forcing ANALYZE to examine every row of every table. \n> \n> Do you actually still have a problem with the plans changing when the\n> stats target is above 100 or so? I think the notion of \"force ANALYZE\n> to do a full scan\" is inherently wrongheaded ... it certainly would not\n> produce numbers that have anything to do with ordinary practice.\n> \n> If you have data statistics that are so bizarre that the planner still\n> gets things wrong with a target of 1000, then I'd like to know more\n> about why.\n\nHas there been any progress in determining if the number of default\nbuckets (10) is the best value?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 10 Sep 2003 15:44:50 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic (continued)" }, { "msg_contents": "[email protected] (Bruce Momjian) writes:\n> Tom Lane wrote:\n>> Mary Edie Meredith <[email protected]> writes:\n>> > Stephan Szabo kindly responded to our earlier queries suggesting\n>> > we look at default_statistics_target and ALTER TABLE ALTER COLUMN\n>> > SET STATISTICS.\n>> \n>> > These determine the number of bins in the histogram for a given\n>> > column. But for a large number of rows (for example 6 million)\n>> > the maximum value (1000) does not guarantee that ANALYZE will do\n>> > a full scan of the table. We do not see a way to guarantee the\n>> > same statistics run to run without forcing ANALYZE to examine\n>> > every row of every table.\n>> \n>> Do you actually still have a problem with the plans changing when\n>> the stats target is above 100 or so? I think the notion of \"force\n>> ANALYZE to do a full scan\" is inherently wrongheaded ... it\n>> certainly would not produce numbers that have anything to do with\n>> ordinary practice.\n>> \n>> If you have data statistics that are so bizarre that the planner\n>> still gets things wrong with a target of 1000, then I'd like to\n>> know more about why.\n>\n> Has there been any progress in determining if the number of default\n> buckets (10) is the best value?\n\nI would think this is much more the key to the issue for their\nbenchmark than issues of correctly replicating the random number\ngenerator.\n\nI'm not clear on how data is collected into the histogram bins;\nobviously it's not selecting all 6 million rows, but how many rows is\nit?\n\nThe \"right answer\" for most use seems likely to involve:\n\n a) Getting an appropriate number of bins (I suspect 10 is a bit\n small, but I can't justify that mathematically), and\n b) Attaching an appropriate sample size to those bins.\n\nWhat is apparently going wrong with the benchmark (and this can\ndoubtless arise in \"real life,\" too) is that the random selection is\npulling too few records with the result that some of the bins are\nbeing filled in a \"skewed\" manner that causes the optimizer to draw\nthe wrong conclusions. 
(I may merely be restating the obvious here,\nbut if I say it a little differently than it has been said before,\nsomeone may notice the vital \"wrong assumption.\")\n\nIf the samples are crummy, then perhaps:\n - There need to be more bins\n - There need to be more samples\n\nDoes the sample size change if you increase the number of bins? If\nnot, then having more, smaller bins will lead to them getting\nincreasingly skewed if there is any accidental skew in the selection.\n\nDo we also need a parameter to control sample size?\n-- \noutput = reverse(\"ofni.smrytrebil\" \"@\" \"enworbbc\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Wed, 10 Sep 2003 18:22:04 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic (continued)" }, { "msg_contents": "We tried 1000 as the default and found that the plans were good plans\nand were consistent, but the pg_statistics was not exactly the same.\n\nWe took Tom's' advice and tried SET SEED=0 (actually select setseed (0)\n).\n\nWe did runs last night on our project machine which produced consistent\npg_statistics data and (of course) the same plans.\n\nWe will next try runs where we vary the default buckets. Other than 10\nand 1000, what numbers would you like us to try besides. Previously the\nnumber 100 was mentioned. Are there others?\n\nOn Wed, 2003-09-10 at 12:44, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Mary Edie Meredith <[email protected]> writes:\n> > > Stephan Szabo kindly responded to our earlier queries suggesting we look\n> > > at default_statistics_target and ALTER TABLE ALTER COLUMN SET\n> > > STATISTICS. \n> > \n> > > These determine the number of bins in the histogram for a given column. \n> > > But for a large number of rows (for example 6 million) the maximum value\n> > > (1000) does not guarantee that ANALYZE will do a full scan of the table.\n> > > We do not see a way to guarantee the same statistics run to run without\n> > > forcing ANALYZE to examine every row of every table. \n> > \n> > Do you actually still have a problem with the plans changing when the\n> > stats target is above 100 or so? I think the notion of \"force ANALYZE\n> > to do a full scan\" is inherently wrongheaded ... 
it certainly would not\n> > produce numbers that have anything to do with ordinary practice.\n> > \n> > If you have data statistics that are so bizarre that the planner still\n> > gets things wrong with a target of 1000, then I'd like to know more\n> > about why.\n> \n> Has there been any progress in determining if the number of default\n> buckets (10) is the best value?\n-- \nMary Edie Meredith <[email protected]>\nOpen Source Development Lab\n\n", "msg_date": "10 Sep 2003 17:17:15 -0700", "msg_from": "Mary Edie Meredith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [osdldbt-general] Re: [GENERAL] how to get accurate" }, { "msg_contents": "The world rejoiced as [email protected] (Mary Edie Meredith) wrote:\n> We tried 1000 as the default and found that the plans were good\n> plans and were consistent, but the pg_statistics was not exactly the\n> same.\n>\n> We took Tom's' advice and tried SET SEED=0 (actually select setseed\n> (0) ).\n\nWhen you're trying to get strict replicability of results, setting the\nseed to some specific value is necessary.\n\nSome useful results could be attained by varying the seed, and seeing\nhow the plans change.\n\n> We did runs last night on our project machine which produced\n> consistent pg_statistics data and (of course) the same plans.\n\n> We will next try runs where we vary the default buckets. Other than\n> 10 and 1000, what numbers would you like us to try besides.\n> Previously the number 100 was mentioned. Are there others?\n\nThat presumably depends on what your goal is.\n\nA useful experiment would be to see at what point (e.g. - at what\nbucket size) plans tend to \"settle down\" to the right values.\n\nIt might well be that defaulting to 23 buckets (I'm picking that out\nof thin air) would cause the plans to typically be stable whatever\nseed got used.\n\nA test for this would be to, for each bucket size value, repeatedly\nANALYZE and check query plans.\n\nAt bucket size 10, you have seen the query plans vary quite a bit.\n\nAt 1000, they seem to stabilize very well.\n\nThe geometric centre, between 10 and 1000, is 100, so it would surely\nbe useful to see if query plans are stable at that bucket size.\n\nThe most interesting number to know would be the lowest number of\nbuckets at which query plans are nearly always stable. Supposing that\nnumber was 23 (the number I earlier pulled out of the air), then that\ncan be used as evidence that the default value for SET STATISTICS\nshould be changed from 10 to 23.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('aa454','freenet.carleton.ca').\nhttp://www3.sympatico.ca/cbbrowne/nonrdbms.html\nSturgeon's Law: 90% of *EVERYTHING* is crud.\n", "msg_date": "Wed, 10 Sep 2003 23:07:12 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [osdldbt-general] Re: [GENERAL] how to get accurate" }, { "msg_contents": "Christopher Browne <[email protected]> writes:\n> The \"right answer\" for most use seems likely to involve:\n> a) Getting an appropriate number of bins (I suspect 10 is a bit\n> small, but I can't justify that mathematically), and\n\nI suspect that also, but I don't have real evidence for it either.\nWe've heard complaints from a number of people for whom it was indeed\ntoo small ... 
but that doesn't prove it's not appropriate in the\nmajority of cases ...\n\n> Does the sample size change if you increase the number of bins?\n\nYes, read the comments in backend/commands/analyze.c.\n\n> Do we also need a parameter to control sample size?\n\nNot if the paper I read before writing that code is correct.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Sep 2003 00:30:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic (continued) " }, { "msg_contents": "On Thu, 11 Sep 2003, Tom Lane wrote:\n\n> Christopher Browne <[email protected]> writes:\n> > The \"right answer\" for most use seems likely to involve:\n> > a) Getting an appropriate number of bins (I suspect 10 is a bit\n> > small, but I can't justify that mathematically), and\n> \n> I suspect that also, but I don't have real evidence for it either.\n> We've heard complaints from a number of people for whom it was indeed\n> too small ... but that doesn't prove it's not appropriate in the\n> majority of cases ...\n> \n> > Does the sample size change if you increase the number of bins?\n> \n> Yes, read the comments in backend/commands/analyze.c.\n> \n> > Do we also need a parameter to control sample size?\n> \n> Not if the paper I read before writing that code is correct.\n\nI was just talking to a friend of mine who does statistical analysis, and \nhe suggested a different way of looking at this. I know little of the \nanalyze.c, but I'll be reading it some today.\n\nHis theory was that we can figure out the number of target bins by \nbasically running analyze twice with two different random seeds, and \ninitially setting the bins to 10.\n\nThe, compare the variance of the two runs. If the variance is great, \nincrease the target by X, and run two again. repeat, wash, rinse, until \nthe variance drops below some threshold.\n\nI like the idea, I'm not at all sure if it's practical for Postgresql to \nimplement it.\n\n", "msg_date": "Thu, 11 Sep 2003 09:55:47 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "[email protected] (\"scott.marlowe\") writes:\n> On Thu, 11 Sep 2003, Tom Lane wrote:\n>\n>> Christopher Browne <[email protected]> writes:\n>> > The \"right answer\" for most use seems likely to involve:\n>> > a) Getting an appropriate number of bins (I suspect 10 is a bit\n>> > small, but I can't justify that mathematically), and\n>> \n>> I suspect that also, but I don't have real evidence for it either.\n>> We've heard complaints from a number of people for whom it was indeed\n>> too small ... but that doesn't prove it's not appropriate in the\n>> majority of cases ...\n>> \n>> > Does the sample size change if you increase the number of bins?\n>> \n>> Yes, read the comments in backend/commands/analyze.c.\n>> \n>> > Do we also need a parameter to control sample size?\n>> \n>> Not if the paper I read before writing that code is correct.\n>\n> I was just talking to a friend of mine who does statistical analysis, and \n> he suggested a different way of looking at this. I know little of the \n> analyze.c, but I'll be reading it some today.\n>\n> His theory was that we can figure out the number of target bins by \n> basically running analyze twice with two different random seeds, and \n> initially setting the bins to 10.\n>\n> The, compare the variance of the two runs. 
If the variance is great, \n> increase the target by X, and run two again. repeat, wash, rinse, until \n> the variance drops below some threshold.\n>\n> I like the idea, I'm not at all sure if it's practical for Postgresql to \n> implement it.\n\nIt may suffice to do some analytic runs on some \"reasonable datasets\"\nin order to come up with a better default than 10.\n\nIf you run this process a few times on some different databases and\nfind that the variance keeps dropping pretty quickly, then that would\nbe good material for arguing that 10 should change to 17 or 23 or 31\nor some such value. (The only interesting pttern in that is that\nthose are all primes :-).)\n-- \noutput = (\"cbbrowne\" \"@\" \"libertyrms.info\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n", "msg_date": "Thu, 11 Sep 2003 12:32:01 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" }, { "msg_contents": "On Thu, 11 Sep 2003, Christopher Browne wrote:\n\n> [email protected] (\"scott.marlowe\") writes:\n> > On Thu, 11 Sep 2003, Tom Lane wrote:\n> >\n> >> Christopher Browne <[email protected]> writes:\n> >> > The \"right answer\" for most use seems likely to involve:\n> >> > a) Getting an appropriate number of bins (I suspect 10 is a bit\n> >> > small, but I can't justify that mathematically), and\n> >> \n> >> I suspect that also, but I don't have real evidence for it either.\n> >> We've heard complaints from a number of people for whom it was indeed\n> >> too small ... but that doesn't prove it's not appropriate in the\n> >> majority of cases ...\n> >> \n> >> > Does the sample size change if you increase the number of bins?\n> >> \n> >> Yes, read the comments in backend/commands/analyze.c.\n> >> \n> >> > Do we also need a parameter to control sample size?\n> >> \n> >> Not if the paper I read before writing that code is correct.\n> >\n> > I was just talking to a friend of mine who does statistical analysis, and \n> > he suggested a different way of looking at this. I know little of the \n> > analyze.c, but I'll be reading it some today.\n> >\n> > His theory was that we can figure out the number of target bins by \n> > basically running analyze twice with two different random seeds, and \n> > initially setting the bins to 10.\n> >\n> > The, compare the variance of the two runs. If the variance is great, \n> > increase the target by X, and run two again. repeat, wash, rinse, until \n> > the variance drops below some threshold.\n> >\n> > I like the idea, I'm not at all sure if it's practical for Postgresql to \n> > implement it.\n> \n> It may suffice to do some analytic runs on some \"reasonable datasets\"\n> in order to come up with a better default than 10.\n> \n> If you run this process a few times on some different databases and\n> find that the variance keeps dropping pretty quickly, then that would\n> be good material for arguing that 10 should change to 17 or 23 or 31\n> or some such value. (The only interesting pttern in that is that\n> those are all primes :-).)\n\nThat's a good intermediate solution, but it really doesn't solve \neveryone's issue. If one table/field has a nice even distribution (i.e. \n10 rows with id 1, 10 rows with id2, so on and so on) then it won't need \nnearly as high of a default target as a row with lots of weird spikes and \nsuch in it. 
\n\nThat's why Joe (my statistics friend) made the point about iterating over \neach table with higher targets until the variance drops to something \nreasonable.\n\nI would imagine a simple script would be a good proof of concept of this, \nbut in the long run, it would be a huge win if the analyze.c code did this \nautomagically eventually, so that you don't have a target that's still too \nlow for some complex data sets and too high for simple ones.\n\nWell, time for me to get to work on a proof of concept...\n\n", "msg_date": "Fri, 12 Sep 2003 09:19:55 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] how to get accurate values in pg_statistic" } ]
[ { "msg_contents": "\nHi,\n\nI have a table that looks like this:\n\n DATA ID TIME\n|------|----|------|\n\nThe table holds app. 14M rows now and grows by app. 350k rows a day.\n\nThe ID-column holds about 1500 unique values (integer).\nThe TIME-columns is of type timestamp without timezone.\n\nI have one index (b-tree) on the ID-column and one index (b-tree) on the\ntime-column.\n\nMy queries most often look like this:\n\nSELECT DATA FROM <tbl> WHERE ID = 1 AND TIME > now() - '1 day'::interval;\n\nor\n\nSELECT DATA FROM <tbl> WHERE ID = 2 AND TIME > now() - '1 week'::interval;\n\n\nSince I have about 350000 rows the last 24 hours the query planner chooses\nto use my ID-index to get hold of the rows - then using only a filter on\nthe time column.\n\nThis takes a lot of time (over a minute) on a P4 1900MHz which\nunfortenately isn't good enough for my purpose (webpages times out and so\non..).\n\n\nIf I SELECT only the rows with a certain ID (regardless of time):\n\nSELECT DATA FROM <tbl> WHERE ID = 3;\n\n..it still takes almost a minute so I guess this is the problem (not the\nfiltering on the TIME-column), especially since it recieves a lot of rows\nwhich will be descarded using my filter anyway.\n(I recieve ~6000 rows and want about 250).\n\nBut using the TIME-column as a first subset of rows and discarding using\nthe ID-column as a filter is even worse since I then get 350k rows and\ndiscards about 349750 of them using the filter.\n\nI tried applying a multicolumn index on ID and TIME, but that one won't\neven be used (after ANALYZE).\n\nMy only option here seems to have like a \"daily\" table which will only\ncarry the rows for the past 24 hours which will give my SELECT a result of\n6000 initial rows out of ~350k (instead of 14M like now) and then 250 when\nfiltered.\nBut I really hope there is a cleaner solution to the problem - actually I\nthough a multicolumn index would do it.\n\n-ra\n\n", "msg_date": "Fri, 5 Sep 2003 00:53:46 +0200 (CEST)", "msg_from": "\"Rasmus Aveskogh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems on a fairly big table with two key columns." }, { "msg_contents": "On Thursday 04 September 2003 23:53, Rasmus Aveskogh wrote:\n> Hi,\n>\n> I have a table that looks like this:\n>\n> DATA ID TIME\n>\n> |------|----|------|\n>\n> The table holds app. 14M rows now and grows by app. 350k rows a day.\n>\n> The ID-column holds about 1500 unique values (integer).\n> The TIME-columns is of type timestamp without timezone.\n>\n> I have one index (b-tree) on the ID-column and one index (b-tree) on the\n> time-column.\n>\n> My queries most often look like this:\n>\n> SELECT DATA FROM <tbl> WHERE ID = 1 AND TIME > now() - '1 day'::interval;\n[snip]\n> I tried applying a multicolumn index on ID and TIME, but that one won't\n> even be used (after ANALYZE).\n\nThe problem is likely to be that the parser isn't spotting that now()-'1 day' \nis constant. Try an explicit time and see if the index is used. If so, you \ncan write a wrapper function for your expression (mark it STABLE so the \nplanner knows it won't change during the statement).\n\nAlternatively, you can do the calculation in the application and use an \nexplicit time.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 5 Sep 2003 09:34:52 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on a fairly big table with two key columns." }, { "msg_contents": "\nRichard,\n\nThanks a lot! 
You were right - the query parser \"misunderstood\"\nnow() - '1 day'::interval and only used one of the indexes (as I already\nnoticed).\n\nActually all I had to do was to cast the result like this:\n\n(now() - '1 day'::interval)::date\n\n75s is now between 10ms and 200ms.\n\nThanks again!\n\n-ra\n\n\n> On Thursday 04 September 2003 23:53, Rasmus Aveskogh wrote:\n>> Hi,\n>>\n>> I have a table that looks like this:\n>>\n>> DATA ID TIME\n>>\n>> |------|----|------|\n>>\n>> The table holds app. 14M rows now and grows by app. 350k rows a day.\n>>\n>> The ID-column holds about 1500 unique values (integer).\n>> The TIME-columns is of type timestamp without timezone.\n>>\n>> I have one index (b-tree) on the ID-column and one index (b-tree) on the\n>> time-column.\n>>\n>> My queries most often look like this:\n>>\n>> SELECT DATA FROM <tbl> WHERE ID = 1 AND TIME > now() - '1\n>> day'::interval;\n> [snip]\n>> I tried applying a multicolumn index on ID and TIME, but that one won't\n>> even be used (after ANALYZE).\n>\n> The problem is likely to be that the parser isn't spotting that now()-'1\n> day'\n> is constant. Try an explicit time and see if the index is used. If so, you\n> can write a wrapper function for your expression (mark it STABLE so the\n> planner knows it won't change during the statement).\n>\n> Alternatively, you can do the calculation in the application and use an\n> explicit time.\n>\n> HTH\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n", "msg_date": "Fri, 5 Sep 2003 17:36:50 +0200 (CEST)", "msg_from": "\"Rasmus Aveskogh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems on a fairly big table with two " }, { "msg_contents": "On Friday 05 September 2003 16:36, Rasmus Aveskogh wrote:\n> Richard,\n>\n> Thanks a lot! You were right - the query parser \"misunderstood\"\n> now() - '1 day'::interval and only used one of the indexes (as I already\n> noticed).\n>\n> Actually all I had to do was to cast the result like this:\n>\n> (now() - '1 day'::interval)::date\n>\n> 75s is now between 10ms and 200ms.\n>\n> Thanks again!\n\nAh - good. You also want to be careful with differences between timestamp \nwith/without time zone etc.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 5 Sep 2003 17:39:32 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on a fairly big table with two" } ]
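Both of Richard's suggestions, and the cast Rasmus settled on, are one-line changes; a sketch using the column names from the original post (<tbl> stays a placeholder):

-- cutoff computed in the application and passed as an explicit constant:
SELECT DATA FROM <tbl> WHERE ID = 1 AND TIME > '2003-09-04 17:30:00';
-- or the cast that worked above, at the cost of day-level granularity:
SELECT DATA FROM <tbl> WHERE ID = 1 AND TIME > (now() - '1 day'::interval)::date;

With the multicolumn index on (ID, TIME) back in place, either form should let the planner use it.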
[ { "msg_contents": "I am trying to tune my database and I discovered one select that does a\nseq scan on a table but I can't see why... All the join fields are indexed\nand I am returning just one record, so no sort is done.\nDoes it just pick seq scan for the heck of it or is it a reason?\n\nRegards,\n\nBTJ\n\n-----------------------------------------------------------------------------------------------\nBj�rn T Johansen (BSc,MNIF)\nExecutive Manager\[email protected] Havleik Consulting\nPhone : +47 67 54 15 17 Conradisvei 4\nFax : +47 67 54 13 91 N-1338 Sandvika\nCellular : +47 926 93 298 http://www.havleik.no\n-----------------------------------------------------------------------------------------------\n\"The stickers on the side of the box said \"Supported Platforms: Windows\n98, Windows NT 4.0,\nWindows 2000 or better\", so clearly Linux was a supported platform.\"\n-----------------------------------------------------------------------------------------------\n\n", "msg_date": "Fri, 5 Sep 2003 08:08:07 +0200 (CEST)", "msg_from": "\"Bjorn T Johansen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Seq scan of table?" }, { "msg_contents": "Hello,\n\n> I am trying to tune my database and I discovered one select \n> that does a seq scan on a table but I can't see why... All \n> the join fields are indexed and I am returning just one \n> record, so no sort is done. Does it just pick seq scan for \n> the heck of it or is it a reason?\n\nAre the join fields both of the exactly same type ? If no (eg : INT2 and\nINT4)\nyou must cast in order to have the same type.\n\nIf the join fields are not of the same type, PostgreSQL will do a seq\nscan.\n\nI had exactly the same problem and learned here that tip :-)\n\nHope this help,\n\n---------------------------------------\nBruno BAGUETTE - [email protected] \n\n\n", "msg_date": "Fri, 5 Sep 2003 08:34:47 +0200", "msg_from": "\"Bruno BAGUETTE\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE : Seq scan of table?" }, { "msg_contents": "\"Bjorn T Johansen\" <[email protected]> writes:\n> I am trying to tune my database and I discovered one select that does a\n> seq scan on a table but I can't see why... All the join fields are indexed\n> and I am returning just one record, so no sort is done.\n> Does it just pick seq scan for the heck of it or is it a reason?\n\nWho's to say, when you gave us no details? Show us the table schemas,\nthe exact query, and EXPLAIN ANALYZE output, and you might get useful\nresponses.\n\n(btw, pgsql-performance would be a more appropriate list for this issue\nthan pgsql-general.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Sep 2003 02:44:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seq scan of table? " }, { "msg_contents": "Well, I just checked and all the join fields are of the same type...\n\nBTJ\n\n> Hello,\n>\n>> I am trying to tune my database and I discovered one select\n>> that does a seq scan on a table but I can't see why... All\n>> the join fields are indexed and I am returning just one\n>> record, so no sort is done. Does it just pick seq scan for\n>> the heck of it or is it a reason?\n>\n> Are the join fields both of the exactly same type ? 
If no (eg : INT2 and\n> INT4)\n> you must cast in order to have the same type.\n>\n> If the join fields are not of the same type, PostgreSQL will do a seq\n> scan.\n>\n> I had exactly the same problem and learned here that tip :-)\n>\n> Hope this help,\n>\n> ---------------------------------------\n> Bruno BAGUETTE - [email protected]\n>\n>\n>\n\n\n", "msg_date": "Fri, 5 Sep 2003 08:55:24 +0200 (CEST)", "msg_from": "\"Bjorn T Johansen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: RE : Seq scan of table?" }, { "msg_contents": "\n\n> \"Bjorn T Johansen\" <[email protected]> writes:\n>> I am trying to tune my database and I discovered one select that does a\n>> seq scan on a table but I can't see why... All the join fields are\n>> indexed\n>> and I am returning just one record, so no sort is done.\n>> Does it just pick seq scan for the heck of it or is it a reason?\n>\n> Who's to say, when you gave us no details? Show us the table schemas,\n> the exact query, and EXPLAIN ANALYZE output, and you might get useful\n> responses.\n>\n> (btw, pgsql-performance would be a more appropriate list for this issue\n> than pgsql-general.)\n>\n> \t\t\tregards, tom lane\n>\n\nWell, since the select involves 10-12 tables and a large sql, I just\nthought I would try without all that information first... :)\n\nAnd yes, pgsql-performance sounds like the right list....\n\n\nBTJ\n\n\n", "msg_date": "Fri, 5 Sep 2003 08:57:55 +0200 (CEST)", "msg_from": "\"Bjorn T Johansen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seq scan of table?" }, { "msg_contents": "I think I have found out why.. I have a where clause on a ID field but it\nseems like I need to cast this integer to the same integer as the field is\ndefined in the table, else it will do a tablescan.\n\nIs this assumtion correct? And if it is, do I then need to change all my\nsql's to cast the where clause where I just have a number (eg where field\n= 1) to force the planner to use index scan instead of seq scan?\n\n\nBTJ\n\n> I am trying to tune my database and I discovered one select that does a\n> seq scan on a table but I can't see why... All the join fields are indexed\n> and I am returning just one record, so no sort is done.\n> Does it just pick seq scan for the heck of it or is it a reason?\n>\n> Regards,\n>\n> BTJ\n>\n> -----------------------------------------------------------------------------------------------\n> Bj�rn T Johansen (BSc,MNIF)\n> Executive Manager\n> [email protected] Havleik Consulting\n> Phone : +47 67 54 15 17 Conradisvei 4\n> Fax : +47 67 54 13 91 N-1338 Sandvika\n> Cellular : +47 926 93 298 http://www.havleik.no\n> -----------------------------------------------------------------------------------------------\n> \"The stickers on the side of the box said \"Supported Platforms: Windows\n> 98, Windows NT 4.0,\n> Windows 2000 or better\", so clearly Linux was a supported platform.\"\n> -----------------------------------------------------------------------------------------------\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n", "msg_date": "Fri, 5 Sep 2003 10:47:54 +0200 (CEST)", "msg_from": "\"Bjorn T Johansen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seq scan of table?" }, { "msg_contents": "On Friday 05 September 2003 09:47, Bjorn T Johansen wrote:\n> I think I have found out why.. 
I have a where clause on a ID field but it\n> seems like I need to cast this integer to the same integer as the field is\n> defined in the table, else it will do a tablescan.\n>\n> Is this assumtion correct? And if it is, do I then need to change all my\n> sql's to cast the where clause where I just have a number (eg where field\n> = 1) to force the planner to use index scan instead of seq scan?\n\nPG's parser will assume an explicit number is an int4 - if you need an int8 \netc you'll need to cast it, yes.\nYou should find plenty of discussion of why in the archives, but the short \nreason is that PG's type structure is quite flexible which means it can't \nafford to make too many assumptions.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 5 Sep 2003 11:07:12 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Seq scan of table?" }, { "msg_contents": "On Fri, 2003-09-05 at 12:07, Richard Huxton wrote:\n> On Friday 05 September 2003 09:47, Bjorn T Johansen wrote:\n> > I think I have found out why.. I have a where clause on a ID field but it\n> > seems like I need to cast this integer to the same integer as the field is\n> > defined in the table, else it will do a tablescan.\n> >\n> > Is this assumtion correct? And if it is, do I then need to change all my\n> > sql's to cast the where clause where I just have a number (eg where field\n> > = 1) to force the planner to use index scan instead of seq scan?\n> \n> PG's parser will assume an explicit number is an int4 - if you need an int8 \n> etc you'll need to cast it, yes.\n> You should find plenty of discussion of why in the archives, but the short \n> reason is that PG's type structure is quite flexible which means it can't \n> afford to make too many assumptions.\n\nOki, I am using both int2 and int8 as well, so that explains it...\nThanks!\n\n\nBTJ\n\n\n\n", "msg_date": "Fri, 05 Sep 2003 15:23:24 +0200", "msg_from": "=?ISO-8859-1?Q?Bj=F8rn?= T Johansen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Seq scan of table?" }, { "msg_contents": "> I think I have found out why.. I have a where clause on a ID field but it\n> seems like I need to cast this integer to the same integer as the field is\n> defined in the table, else it will do a tablescan.\n\nYes, this is correct\n\n> Is this assumtion correct? And if it is, do I then need to change all my\n> sql's to cast the where clause where I just have a number (eg where field\n> = 1) to force the planner to use index scan instead of seq scan?\n\nSomeone correct me if I'm wrong, but I believe numbers are int4's, so they\nneed to be cast if your column is not an int4.\n\nJon\n\n\n\n>\n>\n> BTJ\n>\n> > I am trying to tune my database and I discovered one select that does a\n> > seq scan on a table but I can't see why... 
All the join fields are indexed\n> > and I am returning just one record, so no sort is done.\n> > Does it just pick seq scan for the heck of it or is it a reason?\n> >\n> > Regards,\n> >\n> > BTJ\n> >\n> > -----------------------------------------------------------------------------------------------\n> > Bj�rn T Johansen (BSc,MNIF)\n> > Executive Manager\n> > [email protected] Havleik Consulting\n> > Phone : +47 67 54 15 17 Conradisvei 4\n> > Fax : +47 67 54 13 91 N-1338 Sandvika\n> > Cellular : +47 926 93 298 http://www.havleik.no\n> > -----------------------------------------------------------------------------------------------\n> > \"The stickers on the side of the box said \"Supported Platforms: Windows\n> > 98, Windows NT 4.0,\n> > Windows 2000 or better\", so clearly Linux was a supported platform.\"\n> > -----------------------------------------------------------------------------------------------\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n", "msg_date": "Fri, 5 Sep 2003 07:39:49 -0700 (PDT)", "msg_from": "Jonathan Bartlett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seq scan of table?" }, { "msg_contents": "On Fri, 2003-09-05 at 09:39, Jonathan Bartlett wrote:\n> > I think I have found out why.. I have a where clause on a ID field but it\n> > seems like I need to cast this integer to the same integer as the field is\n> > defined in the table, else it will do a tablescan.\n> \n> Yes, this is correct\n> \n> > Is this assumtion correct? And if it is, do I then need to change all my\n> > sql's to cast the where clause where I just have a number (eg where field\n> > = 1) to force the planner to use index scan instead of seq scan?\n> \n> Someone correct me if I'm wrong, but I believe numbers are int4's, so they\n> need to be cast if your column is not an int4.\n\nYou mean \"constant\" scalars? Yes, constants scalars are interpreted\nas int4.\n\n-- \n-----------------------------------------------------------------\nRon Johnson, Jr. [email protected]\nJefferson, LA USA\n\n\"Millions of Chinese speak Chinese, and it's not hereditary...\"\nDr. Dean Edell\n\n", "msg_date": "Fri, 05 Sep 2003 11:09:18 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Seq scan of table?" }, { "msg_contents": "On Fri, 2003-09-05 at 06:07, Richard Huxton wrote:\n> PG's parser will assume an explicit number is an int4 - if you need an int8 \n> etc you'll need to cast it, yes.\n\nOr enclose the integer literal in single quotes.\n\n> You should find plenty of discussion of why in the archives, but the short \n> reason is that PG's type structure is quite flexible which means it can't \n> afford to make too many assumptions.\n\nWell, it's definitely a bug in PG, it's \"quite flexible\" type structure\nnotwithstanding.\n\n-Neil\n\n\n", "msg_date": "Fri, 05 Sep 2003 14:20:18 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Seq scan of table?" 
}, { "msg_contents": "On Friday 05 September 2003 19:20, Neil Conway wrote:\n> On Fri, 2003-09-05 at 06:07, Richard Huxton wrote:\n> > PG's parser will assume an explicit number is an int4 - if you need an\n> > int8 etc you'll need to cast it, yes.\n>\n> Or enclose the integer literal in single quotes.\n>\n> > You should find plenty of discussion of why in the archives, but the\n> > short reason is that PG's type structure is quite flexible which means it\n> > can't afford to make too many assumptions.\n>\n> Well, it's definitely a bug in PG, it's \"quite flexible\" type structure\n> notwithstanding.\n\nIt certainly catches out a lot of people. I'd guess it's in the top three \nissues in the general/sql lists. I'd guess part of the problem is it's so \nsilent. In some ways it would be better to issue a NOTICE every time a \ntypecast is forced in a comparison - irritating as that would be.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 5 Sep 2003 20:37:12 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Seq scan of table?" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> On Fri, 2003-09-05 at 06:07, Richard Huxton wrote:\n>> You should find plenty of discussion of why in the archives, but the short \n>> reason is that PG's type structure is quite flexible which means it can't \n>> afford to make too many assumptions.\n\n> Well, it's definitely a bug in PG, it's \"quite flexible\" type structure\n> notwithstanding.\n\nLet's say it's something we'd really like to fix ;-) ... and will, as\nsoon as we can figure out a cure that's not worse than the disease.\nDorking around with the semantics of numeric expressions has proven\nto be a risky business. See, eg, the thread starting here:\nhttp://archives.postgresql.org/pgsql-hackers/2002-11/msg00468.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Sep 2003 17:52:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Seq scan of table? " } ]
[ { "msg_contents": "Hi,\n\ni'm having _serious_ issues of postgres hogging up the CPU over time. A graph\nshowing this can be seen at http://andri.estpak.ee/cpu0.png .\n\nThe database is running on Redhat 9 (stock 2.4.20-8 kernel), on a reiserfs\npartition (~8% usage - no problem there), and this problem has been with\nPostgreSQL 7.3.2 (both package provided by Redhat and self-rebuilt package)\nand 7.3.4 (i used the 7.3.4 SRPM available at postgres ftp site).\n\nA VACUUM FULL is a remedy to this problem, but a simple VACUUM isn't. \n\nThis can be reproduced, I think, by a simple UPDATE command:\n\ndatabase=# EXPLAIN ANALYZE UPDATE table SET random_int_field = 1, last_updated\n= NOW() WHERE primary_key = 3772;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using table_pkey on table (cost=0.00..6.81 rows=1 width=83)\n(actual time=0.09..0.10 rows=1 loops=1)\n Index Cond: (primary_key = 3772)\n Total runtime: 0.37 msec\n\nWhen I repeat this command using simple <up><enter>, I can see the \"Total\nruntime\" time grow ever so slightly - creeping from 0.37 to 0.38, then 0.39\netc. Would probably get higher if I had the patience. :)\n\nThe table \"table\" used in this example has 2721 rows, so size isn't an issue here.\n\nAny comments or suggestions are welcome. If more information is needed, let me\nknow and I'll post the needed details.\n", "msg_date": "Fri, 5 Sep 2003 21:58:29 -0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Serious issues with CPU usage" }, { "msg_contents": "<[email protected]> writes:\n> i'm having _serious_ issues of postgres hogging up the CPU over time. A graph\n> showing this can be seen at http://andri.estpak.ee/cpu0.png .\n\nYou really haven't shown us anything that would explain that graph ...\nrepeated UPDATEs will slow down a little until you vacuum, but not\nby the ratio you seem to be indicating. At least not if they're\nindexscans. If you've also got sequential-scan queries, and you're\ndoing many zillion updates between vacuums, the answer is to vacuum\nmore often. A decent rule of thumb is to vacuum whenever you've updated\nmore than about 10% of the rows in a table since your last vacuum.\n\n> A VACUUM FULL is a remedy to this problem, but a simple VACUUM isn't. \n\nI find that odd; maybe there's something else going on here. But you've\nnot given enough details to speculate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Sep 2003 20:05:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serious issues with CPU usage " }, { "msg_contents": "[email protected] kirjutas L, 06.09.2003 kell 00:58:\n> Hi,\n> \n> i'm having _serious_ issues of postgres hogging up the CPU over time. A graph\n> showing this can be seen at http://andri.estpak.ee/cpu0.png .\n> \n> The database is running on Redhat 9 (stock 2.4.20-8 kernel), on a reiserfs\n> partition (~8% usage - no problem there), and this problem has been with\n> PostgreSQL 7.3.2 (both package provided by Redhat and self-rebuilt package)\n> and 7.3.4 (i used the 7.3.4 SRPM available at postgres ftp site).\n> \n> A VACUUM FULL is a remedy to this problem, but a simple VACUUM isn't. 
\n\nCould it be that FSM is too small for your vacuum interval ?\n\nAlso, you could try running REINDEX (instead of or in addition to plain\nVACUUM) and see if this is is an index issue.\n\n> This can be reproduced, I think, by a simple UPDATE command:\n> \n> database=# EXPLAIN ANALYZE UPDATE table SET random_int_field = 1, last_updated\n> = NOW() WHERE primary_key = 3772;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using table_pkey on table (cost=0.00..6.81 rows=1 width=83)\n> (actual time=0.09..0.10 rows=1 loops=1)\n> Index Cond: (primary_key = 3772)\n> Total runtime: 0.37 msec\n> \n> When I repeat this command using simple <up><enter>, I can see the \"Total\n> runtime\" time grow ever so slightly - creeping from 0.37 to 0.38, then 0.39\n> etc. Would probably get higher if I had the patience. :)\n> \n> The table \"table\" used in this example has 2721 rows, so size isn't an issue here.\n\nDue to the MVCC the raw table size (file size) can be much bigger if you\ndont VACUUM often enough.\n\n> Any comments or suggestions are welcome. If more information is needed, let me\n> know and I'll post the needed details.\n\n1. What types of queries do you run, and how often ?\n\n2. How is your database tuned (postgresql.conf settings) ?\n\n3. How much memory does your machine have ?\n\nBTW, are you sure that this is postgres that is using up the memory ?\nI've read that reiserfs is a CPU hog, so this may be something that does\nintensive disk access, so some IO stats would be useful as well as real\ndata and index file sizes.\n\nYou could also set up logging and then check if there are some\npathological queries that run for several hour doing nested seqscans ;)\n\n-----------------------\nHannu\n\n\n\n\n", "msg_date": "Sat, 06 Sep 2003 12:10:13 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serious issues with CPU usage" }, { "msg_contents": "\nHope that you don't find it too distracting, I decided to answer to emails in \none go.\n\n----\n\nOn Saturday 06 September 2003 03:05, Tom Lane wrote:\n> indexscans. If you've also got sequential-scan queries, and you're\n> doing many zillion updates between vacuums, the answer is to vacuum\n> more often. A decent rule of thumb is to vacuum whenever you've updated\n> more than about 10% of the rows in a table since your last vacuum.\n\nBasically I do this:\n1) select about ~700 ID's I have to poll\n2) poll them\n3) update those 700 rows in that \"table\" I used (~2700 rows total).\n\nAnd I do this cycle once per minute, so yes, I've got a zillion updates. 700 \nof 2700 is roughly 25%, so I'd have to vacuum once per minute?\nThe manual actually had a suggestion of vacuuming after big changes, but I \ndidn't think it was that bad.\n\n-----\n\nOn Saturday 06 September 2003 12:10, Hannu Krosing wrote:\n> Could it be that FSM is too small for your vacuum interval ?\n>\n> Also, you could try running REINDEX (instead of or in addition to plain\n> VACUUM) and see if this is is an index issue.\n\nVACUUM ANALYZE helped to lessen the load. Not as much as VACUUM FULL, but \nstill bring it down to reasonable level.\n\n> 1. What types of queries do you run, and how often ?\n\nFirst, cycle posted above; second, every 5 minutes ~40 SELECTs that include \nthat table. I left the once-per-minute poller offline this weekend, and the \nCPU usage didn't creep up.\n\n> 2. 
How is your database tuned (postgresql.conf settings) ?\n\nshared_buffers = 13000 \nmax_fsm_relations = 100000 \nmax_fsm_pages = 1000000 \nmax_locks_per_transaction = 256 \nwal_buffers = 64 \nsort_mem = 32768 \nvacuum_mem = 16384\nfsync = false\neffective_cache_size = 60000\n\nUsing these settings I was able to bring CPU usage down to a more reasonable \nlevel: http://andri.estpak.ee/cpu1.png\nThis is much better than the first graph (see http://andri.estpak.ee/cpu0.png \n), but you can still see CPU usage creeping up.\nVACUUM FULL was done at 03:00 and 09:00. The small drop at ~12:45 is thanks to \nVACUUM ANALYZE.\n\nIf this is the best you can get with postgres right now, then I'll just have \nto increase the frequency of VACUUMing, but that feels like a hackish \nsolution :(\n\n> 3. How much memory does your machine have ?\n\n1 gigabyte.\n\n\n--\nandri\n\n", "msg_date": "Mon, 8 Sep 2003 13:50:23 +0300", "msg_from": "Andri Saar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serious issues with CPU usage" }, { "msg_contents": "On 8 Sep 2003 at 13:50, Andri Saar wrote:\n> If this is the best you can get with postgres right now, then I'll just have \n> to increase the frequency of VACUUMing, but that feels like a hackish \n> solution :(\n\nUse a autovacuum daemon. There is one in postgresql contrib module. It was \nintroduced during 7.4 development and it works with 7.3.x. as well.\n\nCurrent 7.4CVS head has some problems with stats collector but soon it should \nbe fine.\n\nCheck it out..\n\nBye\n Shridhar\n\n--\nPunishment becomes ineffective after a certain point. Men become insensitive.\t\t\n-- Eneg, \"Patterns of Force\", stardate 2534.7\n\n", "msg_date": "Mon, 08 Sep 2003 16:44:56 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serious issues with CPU usage" }, { "msg_contents": "On Mon, 8 Sep 2003 13:50:23 +0300, Andri Saar <[email protected]>\nwrote:\n>Basically I do this:\n>1) select about ~700 ID's I have to poll\n>2) poll them\n>3) update those 700 rows in that \"table\" I used (~2700 rows total).\n>\n>And I do this cycle once per minute, so yes, I've got a zillion updates. 700 \n>of 2700 is roughly 25%, so I'd have to vacuum once per minute?\n\nWith such a small table VACUUM should be a matter of less than one\nsecond:\n\nfred=# vacuum verbose t;\nINFO: --Relation public.t--\nINFO: Index t_pkey: Pages 65; Tuples 16384: Deleted 4096.\n CPU 0.01s/0.10u sec elapsed 0.21 sec.\nINFO: Removed 4096 tuples in 154 pages.\n CPU 0.04s/0.02u sec elapsed 0.07 sec.\nINFO: Pages 192: Changed 192, Empty 0; Tup 16384: Vac 4096, Keep 0,\nUnUsed 0.\n Total CPU 0.08s/0.16u sec elapsed 0.36 sec.\nVACUUM\nTime: 415.00 ms\n\nAnd this is on a 400 MHz machine under cygwin, so don't worry if you\nhave a real computer.\n\nServus\n Manfred\n", "msg_date": "Mon, 08 Sep 2003 13:53:02 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serious issues with CPU usage" }, { "msg_contents": "Andri Saar <[email protected]> writes:\n> If this is the best you can get with postgres right now, then I'll just have \n> to increase the frequency of VACUUMing, but that feels like a hackish \n> solution :(\n\nNot at all. The overhead represented by VACUUM would have to be paid\nsomewhere, somehow, in any database. 
Postgres allows you to control\nexactly when it gets paid.\n\nIt looks to me like throwing a plain VACUUM into your poller cycle\n(or possibly VACUUM ANALYZE depending on how fast the table's stats\nchange) would solve your problems nicely.\n\nNote that once you have that policy in place, you will want to do one\nVACUUM FULL, and possibly a REINDEX, to get the table's physical size\nback down to something commensurate with 2700 useful rows. I shudder\nto think of where it had gotten to before. Routine VACUUMing should\nhold it to a reasonable size after that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 10:04:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serious issues with CPU usage " }, { "msg_contents": "On Monday 08 September 2003 17:04, Tom Lane wrote:\n>\n> It looks to me like throwing a plain VACUUM into your poller cycle\n> (or possibly VACUUM ANALYZE depending on how fast the table's stats\n> change) would solve your problems nicely.\n>\n\nI compiled the pg_autovacuum daemon from 7.4beta sources as Shridhar Daithankar \nrecommended, and it seems to work fine. At first glance I thought VACUUM is a \nthing you do maybe once per week during routine administration tasks like \nmaking a full backup, but I was wrong.\n\nThanks to all for your help, we can consider this problem solved.\n\nNote to future generations: default postgres configuration settings are very \nconservative and don't be afraid to VACUUM very often.\n\n\nandri\n\n", "msg_date": "Mon, 8 Sep 2003 17:31:13 +0300", "msg_from": "Andri Saar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Serious issues with CPU usage" }, { "msg_contents": "On 8 Sep 2003 at 17:31, Andri Saar wrote:\n> Note to future generations: default postgres configuration settings are very \n> conservative and don't be afraid to VACUUM very often.\n\nYou should have looked at the earlier default postgresql settings -- they were arcane by that \nstandard.\n
[ { "msg_contents": "First, sorry if this has been answered before; the list search seems to\nbe down...\n\nThis is on a quad Xeon-PII 450 machine running FBSD 4.8.\n\n84386 pgsql 64 0 104M 99M RUN 1 78:20 61.87% 61.87% postgres\n84385 decibel 64 0 3748K 2268K CPU1 3 49:49 37.79% 37.79% pg_dump\n\n(note that the CPU percents are per-cpu, so 100% would be 100% of one\nCPU)\n\nAccording to vmstat, there's very little disk I/O, so that's not a\nbottleneck. The command I used was: \n\npg_dump -vFc -f pgsql-20030906.cdb stats\n\nIt should be compressing, but if that was the bottleneck, shouldn't the\npg_dump process be at 100% CPU? It does seem a bit coincidental that the\ntwo procs seem to be taking 100% of one CPU (top shows them running on\ndifferent CPUs though).\n\nThis is version 7.3.4.\n-- \nJim C. Nasby, Database Consultant [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Sun, 7 Sep 2003 00:23:24 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Poor pg_dump performance" } ]
[ { "msg_contents": "I have:\n> psql (PostgreSQL) 7.3.2\nI do a modification of 'access/index/indexam.c' where I comment:\n> #ifdef NOT_USED\n> if (scan->keys_are_unique && scan->got_tuple)\n> {\n> if (ScanDirectionIsForward(direction))\n> {\n> if (scan->unique_tuple_pos <= 0)\n> scan->unique_tuple_pos++;\n> }\n> else if (ScanDirectionIsBackward(direction))\n> {\n> if (scan->unique_tuple_pos >= 0)\n> scan->unique_tuple_pos--;\n> }\n> if (scan->unique_tuple_pos == 0)\n> return heapTuple;\n> else\n> return NULL;\n> }\n> #endif\nI do not remember the references of the bug.\nBut the solution was planned for 7.4.\n\nI do:\n> psql=# \\di\n> [skip]\n> public | url_next_index_time | index | postgresql | url\n> [skip]\n> (11 rows)\nI have an index on next_index_time field on table url.\n\n> psql=# explain select min(next_index_time) from url \\g\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Aggregate (cost=85157.70..85157.70 rows=1 width=4)\n> -> Seq Scan on url (cost=0.00..80975.56 rows=1672856 width=4)\n> (2 rows)\nSilly SeqScan of all the table.\n\n> psql=# explain SELECT next_index_time FROM url ORDER BY \n> next_index_time LIMIT 1 \\g\n> QUERY PLAN\n> ----------------------------------------------------------------------- \n> -------------------------\n> Limit (cost=0.00..0.20 rows=1 width=4)\n> -> Index Scan using url_next_index_time on url \n> (cost=0.00..340431.47 rows=1672856 width=4)\n> (2 rows)\nI ask for the same thing.\nThat's better !\n\nWhy the planner does that ?\n\nJean-Gérard Pailloncy\nParis, France\n\n", "msg_date": "Sun, 7 Sep 2003 20:04:20 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "slow plan for min/max" }, { "msg_contents": "On Sun, 7 Sep 2003, Pailloncy Jean-G�rard wrote:\n\nAsking a question about why max(id) is so much slower than select id order \nby id desc limit 1, Pailloncy said:\n\n> I ask for the same thing.\n> That's better !\n\nThis is a Frequently asked question about something that isn't likely to \nchange any time soon.\n\nBasically, Postgresql uses an MVCC locking system that makes massively \nparallel operation possible, but costs in certain areas, and one of those \nareas is aggregate performance over large sets. MVCC makes it very hard \nto optimize all but the simplest of aggregates, and even those \noptimzations which are possible would wind up being quite ugly at the \nparser level.\n\nYou might want to search the archives in the last couple years for this \nsubject, as it's come up quite often.\n\n", "msg_date": "Mon, 8 Sep 2003 09:56:28 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:\n> Basically, Postgresql uses an MVCC locking system that makes massively \n> parallel operation possible, but costs in certain areas, and one of those \n> areas is aggregate performance over large sets. MVCC makes it very hard \n> to optimize all but the simplest of aggregates, and even those \n> optimzations which are possible would wind up being quite ugly at the \n> parser level.\n\nAs was pointed out in a thread a couple days ago, MIN/MAX() optimization\nhas absolutely nothing to do with MVCC. 
It does, however, make\noptimizing COUNT() more difficult.\n\n-Neil\n\n\n", "msg_date": "Mon, 08 Sep 2003 14:49:12 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "[email protected] (\"scott.marlowe\") writes:\n> On Sun, 7 Sep 2003, Pailloncy Jean-G�rard wrote:\n>\n> Asking a question about why max(id) is so much slower than select id order \n> by id desc limit 1, Pailloncy said:\n>\n>> I ask for the same thing. That's better !\n>\n> This is a Frequently asked question about something that isn't\n> likely to change any time soon.\n>\n> Basically, Postgresql uses an MVCC locking system that makes\n> massively parallel operation possible, but costs in certain areas,\n> and one of those areas is aggregate performance over large sets.\n> MVCC makes it very hard to optimize all but the simplest of\n> aggregates, and even those optimzations which are possible would\n> wind up being quite ugly at the parser level.\n\nMVCC makes it difficult to optimize aggregates resembling COUNT(*) or\nSUM(*), at least vis-a-vis having this available for a whole table\n(e.g. - you have to be doing 'SELECT COUNT(*), SUM(SOMEFIELD) FROM\nTHIS_TABLE' with NO \"WHERE\" clause).\n\nBut there is nothing about MVCC that makes it particularly difficult\nto handle the transformation:\n\n select max(field) from some_table where another_field <\n still_another_field;\n\n (which isn't particularly efficient) into\n\n select field from some_table where another_field <\n still_another_field order by field desc limit 1;\n\nThe problems observed are thus:\n\n 1. If the query asks for other data, it might be necessary to scan\n the table to get the other data, making the optimization\n irrelevant;\n\n 2. If there's a good index to key on, the transformed version might\n be a bunch quicker, but it is nontrivial to determine that, a\n priori;\n\n 3. It would be a fairly hairy optimization to throw into the query\n optimizer, so people are reluctant to try to do so.\n\nNote that MVCC has _nothing_ to do with any of those three problems.\n\nThe MVCC-related point is that there is reluctance to create some\nspecial case that will be troublesome to maintain instead of having\nsome comprehensive handling of _all_ aggregates. It seems a better\nidea to \"fix them all\" rather than to kludge things up by fixing one\nafter another.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/lisp.html\nSigns of a Klingon Programmer - 10. \"A TRUE Klingon Warrior does not\ncomment his code!\"\n", "msg_date": "Mon, 08 Sep 2003 15:32:16 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "On Mon, 8 Sep 2003, Neil Conway wrote:\n\n> On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:\n> > Basically, Postgresql uses an MVCC locking system that makes massively \n> > parallel operation possible, but costs in certain areas, and one of those \n> > areas is aggregate performance over large sets. MVCC makes it very hard \n> > to optimize all but the simplest of aggregates, and even those \n> > optimzations which are possible would wind up being quite ugly at the \n> > parser level.\n> \n> As was pointed out in a thread a couple days ago, MIN/MAX() optimization\n> has absolutely nothing to do with MVCC. It does, however, make\n> optimizing COUNT() more difficult.\n\nNot exactly. 
While max(id) is easily optimized by query replacement, \nmore complex aggregates will still have perfomance issues that would not \nbe present in a row locking database. i.e. max((field1/field2)*field3) is \nstill going to cost more to process, isn't it?\n\n\n", "msg_date": "Mon, 8 Sep 2003 14:37:10 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> On Mon, 8 Sep 2003, Neil Conway wrote:\n>> As was pointed out in a thread a couple days ago, MIN/MAX() optimization\n>> has absolutely nothing to do with MVCC. It does, however, make\n>> optimizing COUNT() more difficult.\n\n> Not exactly. While max(id) is easily optimized by query replacement, \n> more complex aggregates will still have perfomance issues that would not \n> be present in a row locking database. i.e. max((field1/field2)*field3) is \n> still going to cost more to process, isn't it?\n\nEr, what makes you think that would be cheap in any database?\n\nPostgres would actually have an advantage given its support for\nexpressional indexes (nee functional indexes). If we had an optimizer\ntransform to convert MAX() into an index scan, I would expect it to be\nable to match up max((field1/field2)*field3) with an index on\n((field1/field2)*field3).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 17:26:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max " }, { "msg_contents": "After takin a swig o' Arrakan spice grog, [email protected] (\"scott.marlowe\") belched out...:\n> On Mon, 8 Sep 2003, Neil Conway wrote:\n>> On Mon, 2003-09-08 at 11:56, scott.marlowe wrote:\n>> > Basically, Postgresql uses an MVCC locking system that makes massively \n>> > parallel operation possible, but costs in certain areas, and one of those \n>> > areas is aggregate performance over large sets. MVCC makes it very hard \n>> > to optimize all but the simplest of aggregates, and even those \n>> > optimzations which are possible would wind up being quite ugly at the \n>> > parser level.\n>> \n>> As was pointed out in a thread a couple days ago, MIN/MAX() optimization\n>> has absolutely nothing to do with MVCC. It does, however, make\n>> optimizing COUNT() more difficult.\n>\n> Not exactly. While max(id) is easily optimized by query replacement, \n> more complex aggregates will still have perfomance issues that would not \n> be present in a row locking database. i.e. max((field1/field2)*field3) is \n> still going to cost more to process, isn't it?\n\nThat sort of MAX() would be difficult to optimize in almost any case,\nand would mandate doing a scan across the relevant portion of the\ntable...\n\n... Unless you had a functional index on (field1/field2)*field3, in\nwhich case it might well be that this would cost Still Less.\n\nI still can't fathom what this has to do with MVCC; you have yet to\nactually connect it with that...\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in String.concat \"@\" [name;tld];;\nhttp://www3.sympatico.ca/cbbrowne/lsf.html\n\"Cars move huge weights at high speeds by controlling violent\nexplosions many times a second. 
...car analogies are always fatal...\"\n-- <[email protected]>\n", "msg_date": "Mon, 08 Sep 2003 17:41:43 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "\n\"scott.marlowe\" <[email protected]> writes:\n\n> Basically, Postgresql uses an MVCC locking system that makes massively \n\nAs discussed, uh, a few days ago, this particular problem is not caused by\nMVCC but by postgres having a general purpose aggregate system and not having\nspecial code for handling min/max. Aggregates normally require access to every\nrecord they're operating on, not just the first or last in some particular\norder. You'll note the LIMIT 1/DISTINCT ON work-around works fine with MVCC...\n\n-- \ngreg\n\n", "msg_date": "08 Sep 2003 18:07:48 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "Scott,\n \n> Not exactly. While max(id) is easily optimized by query replacement, \n> more complex aggregates will still have perfomance issues that would not \n> be present in a row locking database. i.e. max((field1/field2)*field3) is \n> still going to cost more to process, isn't it?\n\nSorry, no. \n\nThe issue has nothing to do with MVCC. It has everything to do with the fact \nthat PostgreSQL allows you to create your own aggregates using functions in \nany of 11 languages. This forces the planner to treat aggregates as a \n\"black box\" which does not allow index utilization, because the planner \nsimply doesn't know what the aggregate is doing internally.\n\nTo put it another way, the planner sees SUM() or CONCAT() -- which require \ntable scans as they must include all values -- as identical to MAX() and \nMIN(). \n\nEscaping this would require programming a special exception for MAX() and \nMIN() into the planner and parser. This has been discussed numerous times \non HACKERS; the problem is, making special exceptions for MAX() and MIN() \nwould then make it very difficult to implement MAX() or MIN() for new data \ntypes, as well as requiring a lot of debugging in numerous places. So far, \nnobody has been frustrated enough to spend 3 months tackling the problem.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 8 Sep 2003 15:40:48 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "\n>This is a Frequently asked question about something that isn't likely to\n>change any time soon.\n\nYou're right, it is in the FAQ, but pretty well buried. It is entirely\nnon-obvious to most people that min() and max() don't/can't use indices.\nSomething so counterintuitive should be explicitly and prominently\nadvertised, especially since the \"order by X limit 1\" workaround is so\nsimple.\n\nActually, referring down to later parts of this thread, why can't this\noptimisation be performed internally for built-in types? 
I understand the\nissue with aggregates over user-defined types, but surely optimising max()\nfor int4, text, etc is safe and easy?\n\nOf course I may be so far out of my depth as to be drowning, in which case\nplease put me out of my misery.\n\nM\n\n", "msg_date": "Tue, 9 Sep 2003 00:17:09 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "> Actually, referring down to later parts of this thread, why can't this\n> optimisation be performed internally for built-in types? I understand the\n> issue with aggregates over user-defined types, but surely optimising max()\n> for int4, text, etc is safe and easy?\n\nSorry, missed the bit about user-defined functions. So I should have said\nbuilt-in functions operating over built-in types. Which does sound more\ncomplicated, but anyone redefining max() is surely not in a position to seek\nsympathy if they lose performance?\n\n", "msg_date": "Tue, 9 Sep 2003 00:38:36 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "\"Matt Clark\" <[email protected]> writes:\n> Actually, referring down to later parts of this thread, why can't this\n> optimisation be performed internally for built-in types? I understand the\n> issue with aggregates over user-defined types, but surely optimising max()\n> for int4, text, etc is safe and easy?\n\nI can't see that the datatype involved has anything to do with it.\nNone of the issues that come up in making the planner do this are\ndatatype-specific. You could possibly avoid adding some columns\nto pg_aggregate if you instead hard-wired the equivalent knowledge\n(for builtin types only) into some code somewhere, but a patch that\napproached it that way would be rejected as unmaintainable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 19:42:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max " }, { "msg_contents": "> \"Matt Clark\" <[email protected]> writes:\n> > Actually, referring down to later parts of this thread, why can't this\n> > optimisation be performed internally for built-in types? I\n> understand the\n> > issue with aggregates over user-defined types, but surely\n> optimising max()\n> > for int4, text, etc is safe and easy?\n>\n> I can't see that the datatype involved has anything to do with it.\n> None of the issues that come up in making the planner do this are\n> datatype-specific. You could possibly avoid adding some columns\n> to pg_aggregate if you instead hard-wired the equivalent knowledge\n> (for builtin types only) into some code somewhere, but a patch that\n> approached it that way would be rejected as unmaintainable.\n\nI don't pretend to have any useful knowledge of the internals of this, so\nmuch of what I write may seem like noise to you guys. The naive question is\n'I have an index on X, so finding max(X) should be trivial, so why can't the\nplanner exploit that triviality?'. AFAICS the short sophisticated answer is\nthat it just isn't trivial in the general case.\n\nUpon rereading the docs on aggregates I see that it really isn't trivial at\nall. 
Not even knowing things like 'this index uses the same function as\nthis aggregate' gets you very far, because of the very general nature of the\nimplementation of aggs.\n\nSo it should be flagged very prominently in the docs that max() and min()\nare almost always not what 90% of people want to use 90% of the time,\nbecause indexes do the same job much better for anything other than tiny\ntables.\n\nKnow what we (OK, I) need? An explicitly non-aggregate max() and min(),\nimplemented differently, so they can be optimised. let's call them\nidx_max() and idx_min(), which completely bypass the standard aggregate\ncode. Because let's face it, in most cases where you regularly want a max\nor a min you have an index defined, and you want the DB to use it.\n\nAnd I would volunteer to do it, I would, but you really don't want my C in\nyour project ;-) I do volunteer to do some doc tweaking though - who do I\ntalk to?\n\nM\n\n", "msg_date": "Tue, 9 Sep 2003 01:42:03 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max " }, { "msg_contents": "\"Matt Clark\" <[email protected]> writes:\n> Know what we (OK, I) need? An explicitly non-aggregate max() and min(),\n> implemented differently, so they can be optimised.\n\nNot per se. The way I've been visualizing this is that we add to\npg_aggregate a column named, say, aggsortop, with the definition:\n\n\tzero: no index optimization possible\n\tnot zero: OID of a comparison operator ('<' or '>')\n\nA nonzero entry means that the aggregate's value is the same as the\nfirst item of the aggregate's input when sorted by the given operator.\n(So MIN uses the '<' operator for its datatype and MAX uses '>'.)\nOf course we have to add a clause to CREATE AGGREGATE to allow this to\nbe set for max/min aggregates of user-defined types. But that's just a\nsmall matter of programming. This gives us all the type-specific info\nwe need; the aggsortop can be matched against the opclasses of indexes\nto figure out whether a particular index is relevant to a particular max\nor min call.\n\nThe hard part comes in teaching the planner to use this information\nintelligently. Exactly which queries is it even *possible* to use the\ntransformation for? Which queries is it really a win for? (Obviously\nit's not if there's no matching index, but even if there is, the\npresence of WHERE or GROUP BY clauses has got to affect the answer.)\nHow do you structure the resulting query plan, if it's at all complex\n(think multiple aggregate calls...)? I'm not clear on the answers to\nany of those questions, so I'm not volunteering to try to code it up ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Sep 2003 22:21:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max " }, { "msg_contents": "I did not expect so many answers about this question.\nThanks.\n\nI find by myself the \"order by trick\" to speed min/max function.\n\nJean-Gérard Pailloncy\n\n", "msg_date": "Tue, 9 Sep 2003 15:36:24 +0200", "msg_from": "=?ISO-8859-1?Q?Pailloncy_Jean-G=E9rard?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "\nThe only connection to MVCC is that the \"obvious\" solution doesn't work,\nnamely storing a cache of the aggregate in the table information. \n\nSo what would it take to implement this for \"all\" aggregates? 
Where I think\n\"all\" really just means min(), max(), first(), last().\n\nI think it would mean having a way to declare when defining an aggregate that\nonly specific records are necessary. For first() and last() it would only have\nto indicate in some way that only the first or last record of the grouping was\nnecessary in the pre-existing order.\n\nFor min() and max() it would have to indicate not only that only the first or\nlast record is necessary but also the sort order to impose. \n\nThen if the optimizer determines that all the aggregates used either impose no\nsort order or impose compatible sort orders, then it should insert an extra\nsort step before the grouping, and flag the executor to indicate it should do\nDISTINCT ON type behaviour to skip unneeded records.\n\nNow the problem I see is if there's no index on the sort order imposed, and\nthe previous step wasn't a merge join or something else that would return the\nrecords in order then it's not necessarily any faster to sort the records and\nreturn only some. It might be for small numbers of records, but it might be\nfaster to just read them all in and check each one for min/max the linear way.\n\n-- \ngreg\n\n", "msg_date": "09 Sep 2003 12:54:04 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "Greg,\n\n> The only connection to MVCC is that the \"obvious\" solution doesn't work,\n> namely storing a cache of the aggregate in the table information.\n\nWell, that solution also doesn't work if you use a WHERE condition or JOIN, \nnow does it?\n\n> So what would it take to implement this for \"all\" aggregates? Where I think\n> \"all\" really just means min(), max(), first(), last().\n\nUm, what the heck are first() and last()? These are not supported aggregates \n... table rows are *not* ordered.\n\n> For min() and max() it would have to indicate not only that only the first\n> or last record is necessary but also the sort order to impose.\n\nI think Tom already suggested this based on adding a field to CREATE \nAGGREGATE. But I think implementation isn't as simple as you think it is.\n\n> Now the problem I see is if there's no index on the sort order imposed, and\n> the previous step wasn't a merge join or something else that would return\n> the records in order then it's not necessarily any faster to sort the\n> records and return only some. It might be for small numbers of records, but\n> it might be faster to just read them all in and check each one for min/max\n> the linear way.\n\nYes, Tom mentioned this also. Working out the rules whereby the planner could \ndecide the viability of index use is a non-trivial task.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 9 Sep 2003 10:14:03 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "On Tue, Sep 09, 2003 at 12:54:04 -0400,\n Greg Stark <[email protected]> wrote:\n> \n> So what would it take to implement this for \"all\" aggregates? Where I think\n> \"all\" really just means min(), max(), first(), last().\n\nThere can be other aggregates where indexes are helpful. The case of interest\nis when functions such that if the new item is contains the current value\nof the aggregate then the new value of the aggregate with be that of the\ncurrent item. This allows you to skip looking at all of the other items\ncontained in the current item. 
Dual problems can also benefit in a similar\nmanner. In a case where the set is totally ordered by the contains index\n(as is the case or max and min) then the problem is even simpler and you\ncan use the greatest or least element as appropiate.\n", "msg_date": "Tue, 9 Sep 2003 13:49:54 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "Tom Lane wrote:\n\n>\"scott.marlowe\" <[email protected]> writes:\n> \n>\n>>On Mon, 8 Sep 2003, Neil Conway wrote:\n>> \n>>\n>>>As was pointed out in a thread a couple days ago, MIN/MAX() optimization\n>>>has absolutely nothing to do with MVCC. It does, however, make\n>>>optimizing COUNT() more difficult.\n>>> \n>>>\n>\n> \n>\n>>Not exactly. While max(id) is easily optimized by query replacement, \n>>more complex aggregates will still have perfomance issues that would not \n>>be present in a row locking database. i.e. max((field1/field2)*field3) is \n>>still going to cost more to process, isn't it?\n>> \n>>\n>\n>Er, what makes you think that would be cheap in any database?\n>\n>Postgres would actually have an advantage given its support for\n>expressional indexes (nee functional indexes). If we had an optimizer\n>transform to convert MAX() into an index scan, I would expect it to be\n>able to match up max((field1/field2)*field3) with an index on\n>((field1/field2)*field3).\n> \n>\n\nWould it be possible to rewrite min and max at the parser level into a\nselect/subselect (clause) condition ( repeat condition ) order by\n(clause ) descending/ascending limit 1 and thereby avoiding the\npenalties of altering the default aggregate behavior? Would it yield\nanything beneficial?\n\n\n", "msg_date": "Tue, 09 Sep 2003 14:06:56 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "On Tue, Sep 09, 2003 at 14:06:56 -0500,\n Thomas Swan <[email protected]> wrote:\n> \n> Would it be possible to rewrite min and max at the parser level into a\n> select/subselect (clause) condition ( repeat condition ) order by\n> (clause ) descending/ascending limit 1 and thereby avoiding the\n> penalties of altering the default aggregate behavior? Would it yield\n> anything beneficial?\n\nThat isn't always going to be the best way to do the calculation. If there\nare other aggregates or if the groups are small, doing things the normal\nway (and hash aggregates in 7.4 will help) can be faster.\n", "msg_date": "Tue, 9 Sep 2003 14:34:28 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max" }, { "msg_contents": "> > Know what we (OK, I) need? An explicitly non-aggregate max() and min(),\n> > implemented differently, so they can be optimised.\n>\n> Not per se. The way I've been visualizing this is that we add to\n> pg_aggregate a column named, say, aggsortop, with the definition:\n...snip of cunning potentially geralisable plan...\n> How do you structure the resulting query plan, if it's at all complex\n> (think multiple aggregate calls...)? I'm not clear on the answers to\n> any of those questions, so I'm not volunteering to try to code it up ...\n\nSo, you're not going to code it, I'm not going to code it, I doubt anyone\nelse is soon.\n\nThe issue is going to remain then, that max() and min() are implemented in a\nway that is grossly counterintuitively slow for 99% of uses. 
It's not bad,\nor wrong, just a consequence of many higher level factors. This should\ntherefore be very prominently flagged in the docs until there is either a\ngeneral or specific solution.\n\nFYI I have rewritten 4 queries today to work around this (with nice\nperformance benefits) as a result of this thread. Yeah, I should have\nspotted the _silly_ seq scans beforehand, but if you're not looking, you\ndon't tend to see. Best improvement is 325msec to 0.60msec!\n\nI'm happy to do the doc work.\n\nM\n\n", "msg_date": "Wed, 10 Sep 2003 01:22:09 +0100", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow plan for min/max " } ]