[
{
"msg_contents": "I understand that COUNT queries are expensive. So I'm looking for advice on \ndisplaying paginated query results.\n\nI display my query results like this:\n\n Displaying 1 to 50 of 2905.\n 1-50 | 51-100 | 101-150 | etc.\n\nI do this by executing two queries. One is of the form:\n\n SELECT <select list> FROM <view/table list> WHERE <filter> LIMIT m OFFSET n\n\nThe other is identical except that I replace the select list with COUNT(*).\n\nI'm looking for suggestions to replace that COUNT query. I cannot use the \nmethod of storing the number of records in a separate table because my queries \n(a) involve joins, and (b) have a WHERE clause.\n\nAnd an unrelated question:\nI'm running PG 7.2.2 and want to upgrade to 7.4.1. I've never upgraded PG \nbefore and I'm nervous. Can I simply run pg_dumpall, install 7.4.1, and then \nfeed the dump into psql? I'm planning to use pg_dumpall rather than pg_dump \nbecause I want to preserve the users I've defined. My database is the only one \non the system.\n\nThanks.\n-David (who would love to go to Bruce Momjian's boot camp)\n",
"msg_date": "Sun, 11 Jan 2004 10:10:52 -0800",
"msg_from": "David Shadovitz <[email protected]>",
"msg_from_op": true,
"msg_subject": "COUNT & Pagination"
},
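As a concrete illustration of the two-query pattern described in the message above, here is a minimal sketch; the table and column names (orders, status, order_id, customer, total) are hypothetical, not taken from the thread, and the LIMIT/OFFSET values correspond to page 2 at 50 rows per page:

    -- page query: rows 51-100 of the filtered result
    SELECT order_id, customer, total
      FROM orders
     WHERE status = 'open'
     ORDER BY order_id
     LIMIT 50 OFFSET 50;

    -- count query: same filter, used only for "Displaying x to y of N" and the page links
    SELECT count(*)
      FROM orders
     WHERE status = 'open';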
{
"msg_contents": "Hi,\n\nDavid Shadovitz wrote, On 1/11/2004 7:10 PM:\n\n> I understand that COUNT queries are expensive. So I'm looking for advice on \n> displaying paginated query results.\n> \n> I display my query results like this:\n> \n> Displaying 1 to 50 of 2905.\n> 1-50 | 51-100 | 101-150 | etc.\n> \n> I do this by executing two queries. One is of the form:\n> \n> SELECT <select list> FROM <view/table list> WHERE <filter> LIMIT m OFFSET n\n> \n> The other is identical except that I replace the select list with COUNT(*).\nyes, you need 2 query. Or select it from one:\nselect *, (select count(*) from table) as count from table...\n\npg will optimize this query, and do the count only once\n\n> \n> And an unrelated question:\n> I'm running PG 7.2.2 and want to upgrade to 7.4.1. I've never upgraded PG \n> before and I'm nervous. Can I simply run pg_dumpall, install 7.4.1, and then \n> feed the dump into psql? I'm planning to use pg_dumpall rather than pg_dump \n> because I want to preserve the users I've defined. My database is the only one \n> on the system.\n\nyes. But check tha faq and the manual for a better explain.\n\nC.\n",
"msg_date": "Sun, 11 Jan 2004 21:01:18 +0100",
"msg_from": "CoL <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
},
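Written out against the same hypothetical table, CoL's single-query form would look something like the sketch below. Note that with a WHERE clause the filter has to be repeated inside the subselect, and the count is still computed over the full matching set, so it mainly saves a round trip rather than the counting work itself:

    SELECT o.order_id, o.customer, o.total,
           (SELECT count(*) FROM orders WHERE status = 'open') AS total_rows
      FROM orders o
     WHERE o.status = 'open'
     ORDER BY o.order_id
     LIMIT 50 OFFSET 50;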
{
"msg_contents": "> So I'm looking for advice on displaying paginated query results.\n> Displaying 1 to 50 of 2905.\n> 1-50 | 51-100 | 101-150 | etc.\n>\n> I do this by executing two queries. One is of the form:\n> SELECT <select list> FROM <view/table list> WHERE <filter> LIMIT m \n> OFFSET n\n> The other is identical except that I replace the select list with \n> COUNT(*).\n\nThis is the only way I know of how to do it.\n\n> I'm running PG 7.2.2 and want to upgrade to 7.4.1. I've never \n> upgraded PG\n> before and I'm nervous. Can I simply run pg_dumpall, install 7.4.1, \n> and then\n> feed the dump into psql?\n\nI would practice and play with it on another machine until you can do \nit easily. You will learn a lot and the experience might prove \ninvaluable in may ways :-)\n\nJeff\n\n\n",
"msg_date": "Sun, 11 Jan 2004 12:59:36 -0800",
"msg_from": "Jeff Fitzmyers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
},
{
"msg_contents": "> I understand that COUNT queries are expensive. So I'm looking for advice on \n> displaying paginated query results.\n> \n> I display my query results like this:\n> \n> Displaying 1 to 50 of 2905.\n> 1-50 | 51-100 | 101-150 | etc.\n> \n> I do this by executing two queries. One is of the form:\n> \n> SELECT <select list> FROM <view/table list> WHERE <filter> LIMIT m OFFSET n\n> \n> The other is identical except that I replace the select list with COUNT(*).\n> \n> I'm looking for suggestions to replace that COUNT query. I cannot use the \n> method of storing the number of records in a separate table because my queries \n> (a) involve joins, and (b) have a WHERE clause.\n\nWell, on all my sites, I do what you do and just live with it :P You \ncan investigate using cursors however (DECLARE, MOVE & FETCH)\n\n> And an unrelated question:\n> I'm running PG 7.2.2 and want to upgrade to 7.4.1. I've never upgraded PG \n> before and I'm nervous. Can I simply run pg_dumpall, install 7.4.1, and then \n> feed the dump into psql? I'm planning to use pg_dumpall rather than pg_dump \n> because I want to preserve the users I've defined. My database is the only one \n> on the system.\n\nI recommend something like this:\n\n-- disable access to your database to make sure you have a complete dump\n\n-- run dump as database owner account\nsu pgsql (or whatever your postgres user is)\n\n-- do compressed dump\npg_dumpall > backup.sql\n\n-- backup old data dir\nmv /usr/local/pgsql/data /usr/local/pgsql/data.7.2\n\n-- remove old postgres, install new\n-- run NEW initdb. replace latin1 with your encoding\n-- -W specifies a superuser password\ninitdb -D /usr/local/pgsql/data -E LATIN1 -W\n\n-- restore dump, watching output VERY CAREFULLY:\n-- (run as pgsql user again)\npsql template1 < backup.sql > log.txt\n-- Watch stderr very carefully to check any errors that might occur.\n\n-- If restore fails, re-initdb and re-restore\n\nChris\n\n",
"msg_date": "Mon, 12 Jan 2004 08:38:23 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
},
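A rough sketch of the cursor approach mentioned above, again with hypothetical names; a plain cursor only exists inside its transaction (DECLARE ... WITH HOLD, available from 7.4, relaxes that at some cost):

    BEGIN;
    DECLARE page_cur CURSOR FOR
        SELECT order_id, customer, total
          FROM orders
         WHERE status = 'open'
         ORDER BY order_id;
    MOVE 50 IN page_cur;      -- skip the first page
    FETCH 50 FROM page_cur;   -- rows 51-100
    CLOSE page_cur;
    COMMIT;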
{
"msg_contents": "On Sunday 11 January 2004 18:10, David Shadovitz wrote:\n> I understand that COUNT queries are expensive. So I'm looking for advice\n> on displaying paginated query results.\n>\n> I display my query results like this:\n>\n> Displaying 1 to 50 of 2905.\n> 1-50 | 51-100 | 101-150 | etc.\n>\n> I do this by executing two queries.\n\nIf you only need the count when you've got the results, most PG client \ninterfaces will tell you how many rows you've got. What language is your app \nin?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 12 Jan 2004 10:16:37 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
},
{
"msg_contents": "> I understand that COUNT queries are expensive. So I'm looking for advice\n> on\n> displaying paginated query results.\n>\n> I display my query results like this:\n>\n> Displaying 1 to 50 of 2905.\n> 1-50 | 51-100 | 101-150 | etc.\n>\n> I do this by executing two queries. One is of the form:\n>\n> SELECT <select list> FROM <view/table list> WHERE <filter> LIMIT m\n> OFFSET n\n>\n> The other is identical except that I replace the select list with\n> COUNT(*).\n>\n> I'm looking for suggestions to replace that COUNT query.\n\n\n\nWe avert the subsequent execution of count(*) by passing the\nvalue of cout(*) as a query parameter through the link in page\nnumbers. This works for us.\n\nThis ofcourse assumes that that the number of rows matching the\nWhere clause does not changes while the user is viewing the search\nresults.\n\nHope it helps.\n\nRegds\nMallah.\n\n\n\n\nI cannot use the\n> method of storing the number of records in a separate table because my\n> queries\n> (a) involve joins, and (b) have a WHERE clause.\n>\n> And an unrelated question:\n> I'm running PG 7.2.2 and want to upgrade to 7.4.1. I've never upgraded PG\n> before and I'm nervous. Can I simply run pg_dumpall, install 7.4.1, and\n> then\n> feed the dump into psql? I'm planning to use pg_dumpall rather than\n> pg_dump\n> because I want to preserve the users I've defined. My database is the\n> only one\n> on the system.\n>\n> Thanks.\n> -David (who would love to go to Bruce Momjian's boot camp)\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n>\n\n",
"msg_date": "Tue, 13 Jan 2004 23:01:37 +0530 (IST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
},
{
"msg_contents": "> We avert the subsequent execution of count(*) by passing the\n> value of count(*) as a query parameter through the link in page\n> numbers.\n\nMallah, and others who mentioned caching the record count:\n\nYes, I will certainly do this. I can detect whether the query's filter has \nbeen changed, or whether the user is merely paging through the results or \nsorting* the results.\n\nI'd love to completely eliminate the cost of the COUNT(*) query, but I guess \nthat I cannot have everything.\n\n* My HTML table column headers are hyperlinks which re-execute the query, \nsorting the results by the selected column. The first click does an ASC \nsort; a second click does a DESC sort.\n\nThanks.\n-David\n",
"msg_date": "Tue, 13 Jan 2004 10:45:33 -0700",
"msg_from": "\"David Shadovitz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
},
{
"msg_contents": "On Tue, 13 Jan 2004, David Shadovitz wrote:\n\n> > We avert the subsequent execution of count(*) by passing the\n> > value of count(*) as a query parameter through the link in page\n> > numbers.\n> \n> Mallah, and others who mentioned caching the record count:\n> \n> Yes, I will certainly do this. I can detect whether the query's filter has \n> been changed, or whether the user is merely paging through the results or \n> sorting* the results.\n> \n> I'd love to completely eliminate the cost of the COUNT(*) query, but I guess \n> that I cannot have everything.\n> \n> * My HTML table column headers are hyperlinks which re-execute the query, \n> sorting the results by the selected column. The first click does an ASC \n> sort; a second click does a DESC sort.\n\nanother useful trick is to have your script save out the count(*) result \nin a single row table with a timestamp, and every time you grab if, check \nto see if x number of minutes have passed, and if so, update that row with \na count(*). You can even have a cron job do it so your own scripts don't \nincur the cost of the count(*) and delay output to the user.\n\n\n",
"msg_date": "Tue, 13 Jan 2004 10:55:59 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
},
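One possible shape for the cached-count trick described above; the table, filter and threshold logic are hypothetical, and the application (or a cron job) re-runs the UPDATE only when computed_at is older than its chosen limit:

    CREATE TABLE result_count_cache (
        row_count   bigint,
        computed_at timestamp with time zone
    );
    INSERT INTO result_count_cache VALUES (0, now());   -- seed the single row once

    -- periodic refresh
    UPDATE result_count_cache
       SET row_count   = (SELECT count(*) FROM orders WHERE status = 'open'),
           computed_at = now();

    -- cheap read on every page view
    SELECT row_count, computed_at FROM result_count_cache;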
{
"msg_contents": "scott.marlowe wrote:\n\n>On Tue, 13 Jan 2004, David Shadovitz wrote:\n>\n> \n>\n>>>We avert the subsequent execution of count(*) by passing the\n>>>value of count(*) as a query parameter through the link in page\n>>>numbers.\n>>> \n>>>\n>>Mallah, and others who mentioned caching the record count:\n>>\n>>Yes, I will certainly do this. I can detect whether the query's filter has \n>>been changed, or whether the user is merely paging through the results or \n>>sorting* the results.\n>>\n>>I'd love to completely eliminate the cost of the COUNT(*) query, but I guess \n>>that I cannot have everything.\n>>\n>>* My HTML table column headers are hyperlinks which re-execute the query, \n>>sorting the results by the selected column. The first click does an ASC \n>>sort; a second click does a DESC sort.\n>> \n>>\n>\n>another useful trick is to have your script save out the count(*) result \n>in a single row table with a timestamp, and every time you grab if, check \n>to see if x number of minutes have passed, and if so, update that row with \n>a count(*). \n>\n\nGreetings!\n\nThe count(*) can get evaluated with any arbitrary combination\nin whre clause how do you plan to store that information ?\n\nIn a typical application pagination could be required in n number\nof contexts . I would be interested to know more about this trick\nand its applicability in such situations.\n\nOfftopic:\n\nDoes PostgreSQL optimise repeated execution of similar queries ie\nqueries on same table or set of tables (in a join) with same where clause\n and only differing in LIMIT and OFFSET.\n\nI dont know much about MySQL, Is their \"Query Cache\" achieving\nbetter results in such cases? and do we have anything similar in\nPostgreSQL ? I think the most recently accessed tables anyways\nget loaded in shared buffers in PostgreSQL so that its not accessed\nfrom the disk. But is the \"Query Cache\" really different from this.\nCan anyone knowing a little better about the working of MySQLs'\nquery cache throw some light?\n\nRegds\nMallah.\n\n> You can even have a cron job do it so your own scripts don't \n>incur the cost of the count(*) and delay output to the user.\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> \n>\n\n\n\n\n\n\n\n\nscott.marlowe wrote:\n\nOn Tue, 13 Jan 2004, David Shadovitz wrote:\n\n \n\n\nWe avert the subsequent execution of count(*) by passing the\nvalue of count(*) as a query parameter through the link in page\nnumbers.\n \n\nMallah, and others who mentioned caching the record count:\n\nYes, I will certainly do this. I can detect whether the query's filter has \nbeen changed, or whether the user is merely paging through the results or \nsorting* the results.\n\nI'd love to completely eliminate the cost of the COUNT(*) query, but I guess \nthat I cannot have everything.\n\n* My HTML table column headers are hyperlinks which re-execute the query, \nsorting the results by the selected column. The first click does an ASC \nsort; a second click does a DESC sort.\n \n\n\nanother useful trick is to have your script save out the count(*) result \nin a single row table with a timestamp, and every time you grab if, check \nto see if x number of minutes have passed, and if so, update that row with \na count(*). 
\n\n\nGreetings!\n\nThe count(*) can get evaluated with any arbitrary combination \nin whre clause how do you plan to store that information ?\n\nIn a typical application pagination could be required in n number\nof contexts . I would be interested to know more about this trick \nand its applicability in such situations.\n\nOfftopic:\n\nDoes PostgreSQL optimise repeated execution of similar queries ie\nqueries on same table or set of tables (in a join) with same where\nclause\n and only differing in LIMIT and OFFSET.\n\nI dont know much about MySQL, Is their \"Query Cache\" achieving\nbetter results in such cases? and do we have anything similar in \nPostgreSQL ? I think the most recently accessed tables anyways \nget loaded in shared buffers in PostgreSQL so that its not accessed\nfrom the disk. But is the \"Query Cache\" really different from this.\nCan anyone knowing a little better about the working of MySQLs' \nquery cache throw some light?\n\nRegds\nMallah.\n\n\n You can even have a cron job do it so your own scripts don't \nincur the cost of the count(*) and delay output to the user.\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly",
"msg_date": "Wed, 14 Jan 2004 23:43:30 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
},
{
"msg_contents": "On Wed, 14 Jan 2004, Rajesh Kumar Mallah wrote:\n\n> scott.marlowe wrote:\n> \n> >On Tue, 13 Jan 2004, David Shadovitz wrote:\n> >\n> > \n> >\n> >>>We avert the subsequent execution of count(*) by passing the\n> >>>value of count(*) as a query parameter through the link in page\n> >>>numbers.\n> >>> \n> >>>\n> >>Mallah, and others who mentioned caching the record count:\n> >>\n> >>Yes, I will certainly do this. I can detect whether the query's filter has \n> >>been changed, or whether the user is merely paging through the results or \n> >>sorting* the results.\n> >>\n> >>I'd love to completely eliminate the cost of the COUNT(*) query, but I guess \n> >>that I cannot have everything.\n> >>\n> >>* My HTML table column headers are hyperlinks which re-execute the query, \n> >>sorting the results by the selected column. The first click does an ASC \n> >>sort; a second click does a DESC sort.\n> >> \n> >>\n> >\n> >another useful trick is to have your script save out the count(*) result \n> >in a single row table with a timestamp, and every time you grab if, check \n> >to see if x number of minutes have passed, and if so, update that row with \n> >a count(*). \n> >\n> \n> Greetings!\n> \n> The count(*) can get evaluated with any arbitrary combination\n> in whre clause how do you plan to store that information ?\n> \n> In a typical application pagination could be required in n number\n> of contexts . I would be interested to know more about this trick\n> and its applicability in such situations.\n> \n> Offtopic:\n> \n> Does PostgreSQL optimise repeated execution of similar queries ie\n> queries on same table or set of tables (in a join) with same where clause\n> and only differing in LIMIT and OFFSET.\n\nYes, and no.\n\nYes, previously run query should be faster, if it fits in kernel cache. \n\nNo, Postgresql doesn't cache any previous results or plans (unless you use \nprepare / execute, then it only caches the plan, not the query results).\n\nPlus, the design of Postgresql is such that it would have to do a LOT of \ncache checking to see if there were any updates to the underlying data \nbetween selects. Since such queries are unlikely to be repeated inside a \ntransaction, the only place where you wouldn't have to check for new \ntuples, it's not really worth trying to implement.\n\nKeep in mind most databases can use an index on max(*) because each \naggregate is programmed by hand to do one thing. In Postgresql, you can \ncreate your own aggregate, and since there's no simple way to make \naggregates use indexes in the general sense, it's not likely to get \noptimized. I.e. any optimization for JUST max(*)/min(*) is unlikely \nunless it can be used for the other aggregates.\n\n\n",
"msg_date": "Wed, 14 Jan 2004 14:40:01 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
},
{
"msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> Yes, previously run query should be faster, if it fits in kernel\n> cache.\n\nOr the PostgreSQL buffer cache.\n\n> Plus, the design of Postgresql is such that it would have to do a\n> LOT of cache checking to see if there were any updates to the\n> underlying data between selects.\n\nLast I checked (which was a while ago, admittedly), the MySQL design\ncompletely purges the query cache for a relation whenever that\nrelation is mentioned in an INSERT, UPDATE, or DELETE. When this was\ndiscussed (check the -hackers archives for more), IIRC the consensus\nwas that it's not worth implementing it if we can't do better than\nthat.\n\n-Neil\n\n",
"msg_date": "Mon, 19 Jan 2004 18:58:44 -0500",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
}
]
[
{
"msg_contents": "\nI have a situation that is giving me small fits, and would like to see \nif anyone can shed any light on it.\n\nI have a modest table (@1.4 million rows, and growing), that has a \nvariety of queries run against it. One is\na very straightforward one - pull a set of distinct rows out based on \ntwo columns, with a simple where clause\nbased on one of the indexed columns. For illustration here, I've \nremoved the distinct and order-by clauses, as\nthey are not the culprits.\n\nBefore I go on - v7.4.1, currently on a test box, dual P3, 1G ram, 10K \nscsi, Slackware 9 or so. The table has been\nvacuumed and analyzed. Even offered pizza and beer. Production box will \nbe a dual Xeon with 2G ram and RAID 5.\n\nWhen the query is run with a where clause that returns small number of \nrows, the query uses the index and is quite speedy:\n\nrav=# explain analyze select casno, parameter from hai.results where \nsite_id = 9982;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------------------------------------\n Index Scan using hai_res_siteid_ndx on results (cost=0.00..7720.87 \nrows=2394 width=30) (actual time=12.118..12.933 rows=50 loops=1)\n Index Cond: (site_id = 9982)\n Total runtime: 13.145 ms\n\nWhen a query is run that returns a much larger set, the index is not \nused, I assume because the planner thinks that a sequential scan\nwould work just as well with a large result set:\n\nrav=# explain analyze select casno, parameter from hai.results where \nsite_id = 18;\n QUERY PLAN\n------------------------------------------------------------------------ \n----------------------------------------------\n Seq Scan on results (cost=0.00..73396.39 rows=211205 width=30) \n(actual time=619.020..15012.807 rows=186564 loops=1)\n Filter: (site_id = 18)\n Total runtime: 15279.789 ms\n(3 rows)\n\n\nUnfortunately, its way off:\n\nrav=# set enable_seqscan=off;\nSET\nrav=# explain analyze select casno, parameter from hai.results where \nsite_id = 18;\n QUERY \nPLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------------\n Index Scan using hai_res_siteid_ndx on results (cost=0.00..678587.01 \nrows=211205 width=30) (actual time=9.575..3569.387 rows=186564 loops=1)\n Index Cond: (site_id = 18)\n Total runtime: 3872.292 ms\n(3 rows)\n\n\nI would like, of course, for it to use the index, given that it takes \n20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do anything \nuntil I exceed\n0.5, which strikes me as a bit high (though please correct me if I am \nassuming too much...). RANDOM_PAGE_COST seems to have no effect. I \nsuppose I could\ncluster it, but it is constantly being added to, and would have to be \nre-done on a daily basis (if not more).\n\nAny suggestions?\n\n\n\n\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Sun, 11 Jan 2004 22:05:11 -0500",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": true,
"msg_subject": "annoying query/planner choice"
},
{
"msg_contents": "On Sun, 11 Jan 2004, Andrew Rawnsley wrote:\n\n> 20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do anything\n> until I exceed 0.5, which strikes me as a bit high (though please\n> correct me if I am assuming too much...). RANDOM_PAGE_COST seems to have\n> no effect.\n\nWhat about the effective cache size, is that set properly?\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Mon, 12 Jan 2004 04:50:25 +0100 (CET)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: annoying query/planner choice"
},
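For reference, effective_cache_size is only a planner hint (no memory is allocated), and in the 7.x releases it is expressed in 8 kB disk pages. One possible way to experiment with it per session, using an illustrative value of roughly 230 MB that would need to be sized to the machine's actual free RAM:

    SHOW effective_cache_size;
    SET effective_cache_size = 30000;   -- 30000 pages * 8 kB ~= 234 MB
    EXPLAIN ANALYZE
    SELECT casno, parameter FROM hai.results WHERE site_id = 18;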
{
"msg_contents": "Centuries ago, Nostradamus foresaw when [email protected] (Andrew Rawnsley) would write:\n> I would like, of course, for it to use the index, given that it\n> takes 20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do\n> anything until I exceed 0.5, which strikes me as a bit high (though\n> please correct me if I am assuming too much...). RANDOM_PAGE_COST\n> seems to have no effect. I suppose I could cluster it, but it is\n> constantly being added to, and would have to be re-done on a daily\n> basis (if not more).\n>\n> Any suggestions?\n\nThe apparent problem is a bad query plan, and for clustering to \"fix\"\nit seems a disturbing answer.\n\nA problem I saw last week with some query plans pointed to the issue\nthat the statistics were inadequate.\n\nWe had some queries where indexing on \"customer\" is extremely\nworthwhile in nearly all cases, but it often wasn't happening. The\nproblem was that the 10 \"bins\" in the default stats table would\ncollect up stats about a few _highly_ active customers, and pretty\nmuch ignore the less active ones. Because the \"bins\" were highly\ndominated by the few common values, stats for the others were missing\nand pretty useless.\n\nI upped the size of the histogram from 10 to 100, and that allowed\nstats to be kept for less active customers, GREATLY improving the\nquality of the queries.\n\nThe point that falls out is that if you have a column which has a\nbunch of discrete values (rather more than 10) that aren't near-unique\n(e.g. - on a table with a million transactions, you have a only few\nhundred customers), that's a good candidate for upping column stats.\n\nThus, you might try:\n ALTER TABLE MY_TABLE ALTER COLUMN SOME_COLUMN SET STATISTICS 50;\n ANALYZE MY_TABLE;\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in name ^ \"@\" ^ tld;;\nhttp://www.ntlug.org/~cbbrowne/postgresql.html\n\"There's no longer a boycott of Apple. But MacOS is still a\nproprietary OS.\" -- RMS - June 13, 1998\n",
"msg_date": "Sun, 11 Jan 2004 22:56:59 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: annoying query/planner choice"
},
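To check whether a larger statistics target actually starts capturing the less common site_ids, something along these lines could be run against the table from this thread (the target value 100 is illustrative):

    ALTER TABLE hai.results ALTER COLUMN site_id SET STATISTICS 100;
    ANALYZE hai.results;

    SELECT attname, n_distinct, most_common_vals
      FROM pg_stats
     WHERE schemaname = 'hai' AND tablename = 'results' AND attname = 'site_id';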
{
"msg_contents": "\nLow (1000). I'll fiddle with that. I just noticed that the machine only \nhas 512MB of ram in it, and not 1GB. I must\nhave raided it for some other machine...\n\nOn Jan 11, 2004, at 10:50 PM, Dennis Bjorklund wrote:\n\n> On Sun, 11 Jan 2004, Andrew Rawnsley wrote:\n>\n>> 20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do anything\n>> until I exceed 0.5, which strikes me as a bit high (though please\n>> correct me if I am assuming too much...). RANDOM_PAGE_COST seems to \n>> have\n>> no effect.\n>\n> What about the effective cache size, is that set properly?\n>\n> -- \n> /Dennis Björklund\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Sun, 11 Jan 2004 23:05:10 -0500",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: annoying query/planner choice"
},
{
"msg_contents": "Andrew Rawnsley <[email protected]> writes:\n> I have a situation that is giving me small fits, and would like to see \n> if anyone can shed any light on it.\n\nIn general, pulling 10% of a table *should* be faster as a seqscan than\nan indexscan, except under the most extreme assumptions about clustering\n(is the table clustered on site_id, by any chance?). What I suspect is\nthat the table is a bit larger than your available RAM, so that a\nseqscan ends up flushing all of the kernel's cache and forcing a lot of\nI/O, whereas an indexscan avoids the cache flush by not touching (quite)\nall of the table. The trouble with this is that the index only looks\nthat good under test conditions, ie, when you repeat it just after an\nidentical query that pulled all of the needed pages into RAM. Under\nrealistic load conditions where different site_ids are being hit, the\nindexscan is not going to be as good as you think, because it will incur\nsubstantial I/O.\n\nYou should try setting up a realistic test load hitting different random\nsite_ids, and see whether it's really a win to force seqscan off for\nthis query or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Jan 2004 00:40:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: annoying query/planner choice "
},
{
"msg_contents": "\nProbably my best solution is to find a better way to produce the \ninformation, or cache it on the\napplication side, as it doesn't actually change that much across client \nsessions.\n\nClustering it occurred to me - it would have to be done on a frequent \nbasis, as the contents\nof the table change constantly. What I am getting out of it with this \noperation doesn't change\nmuch, so caching in a separate table, in the application layer, or both \nwould probably shortcut\nthe whole problem.\n\nAlways amazing what occurs to you when you sleep on it...if only I \ncould take a good nap in the\nmiddle of the afternoon I would have no problems at all.\n\n\nOn Jan 12, 2004, at 12:40 AM, Tom Lane wrote:\n\n> Andrew Rawnsley <[email protected]> writes:\n>> I have a situation that is giving me small fits, and would like to see\n>> if anyone can shed any light on it.\n>\n> In general, pulling 10% of a table *should* be faster as a seqscan than\n> an indexscan, except under the most extreme assumptions about \n> clustering\n> (is the table clustered on site_id, by any chance?). What I suspect is\n> that the table is a bit larger than your available RAM, so that a\n> seqscan ends up flushing all of the kernel's cache and forcing a lot of\n> I/O, whereas an indexscan avoids the cache flush by not touching \n> (quite)\n> all of the table. The trouble with this is that the index only looks\n> that good under test conditions, ie, when you repeat it just after an\n> identical query that pulled all of the needed pages into RAM. Under\n> realistic load conditions where different site_ids are being hit, the\n> indexscan is not going to be as good as you think, because it will \n> incur\n> substantial I/O.\n>\n> You should try setting up a realistic test load hitting different \n> random\n> site_ids, and see whether it's really a win to force seqscan off for\n> this query or not.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Mon, 12 Jan 2004 10:02:09 -0500",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: annoying query/planner choice "
}
]
[
{
"msg_contents": "> If you only need the count when you've got the results, most PG client\n> interfaces will tell you how many rows you've got. What language is your app\n> in?\n\nPHP.\nBut I have only a subset of the results, retrieved via a query with a \"LIMIT \n<m>\" clause, so $pg_numrows is m.\nAnd retrieving all results (i.e. no LIMIT) is at least as expensive as \nCOUNT(*).\n\n-David\n",
"msg_date": "Mon, 12 Jan 2004 07:37:36 -0800",
"msg_from": "David Shadovitz <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: COUNT & Pagination"
},
{
"msg_contents": "On Mon, 2004-01-12 at 10:37, David Shadovitz wrote:\n> > If you only need the count when you've got the results, most PG client\n> > interfaces will tell you how many rows you've got. What language is your app\n> > in?\n> \n> PHP.\n> But I have only a subset of the results, retrieved via a query with a \"LIMIT \n> <m>\" clause, so $pg_numrows is m.\n> And retrieving all results (i.e. no LIMIT) is at least as expensive as \n> COUNT(*).\n> \n\nDepending on frequency of updates and need for real time info, you could\ncache the count in session as long as the user stays within the given\npiece of your app.\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "12 Jan 2004 16:42:12 -0500",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COUNT & Pagination"
}
]
[
{
"msg_contents": "Hi There,\n\nWe are considering to use NetApp filer for a highly\nbusy 24*7 postgres database and the reason we chose\nnetapp, mostly being the \"snapshot\" functionality for\nbacking up database online. The filer would be mounted\non a rh linux server (7.3), 4g RAM, dual cpu with a\ndedicated card for filer.\n\nI'd appreciate if anyone could share your experience\nin configuring things on the filer for optimal\nperformance or any recomendataion that i should be\naware of.\n\nThanks,\nShankar\n\n__________________________________\nDo you Yahoo!?\nYahoo! Hotjobs: Enter the \"Signing Bonus\" Sweepstakes\nhttp://hotjobs.sweepstakes.yahoo.com/signingbonus\n",
"msg_date": "Mon, 12 Jan 2004 13:45:45 -0800 (PST)",
"msg_from": "Shankar K <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres on Netapp"
},
{
"msg_contents": "--On Monday, January 12, 2004 13:45:45 -0800 Shankar K <[email protected]> \nwrote:\n\n> Hi There,\n>\n> We are considering to use NetApp filer for a highly\n> busy 24*7 postgres database and the reason we chose\n> netapp, mostly being the \"snapshot\" functionality for\n> backing up database online. The filer would be mounted\n> on a rh linux server (7.3), 4g RAM, dual cpu with a\n> dedicated card for filer.\n>\n> I'd appreciate if anyone could share your experience\n> in configuring things on the filer for optimal\n> performance or any recomendataion that i should be\n> aware of.\nI run a (not very busy) PG cluster on a NetAPP.\n\nIt seems to do just fine.\n\nThe issue is the speed of the network connection. In my case it's only\nFastEthernet (100BaseTX). If it's very busy, you may need to look\nat GigE.\n\nLER\n\n>\n> Thanks,\n> Shankar\n>\n> __________________________________\n> Do you Yahoo!?\n> Yahoo! Hotjobs: Enter the \"Signing Bonus\" Sweepstakes\n> http://hotjobs.sweepstakes.yahoo.com/signingbonus\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: [email protected]\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749",
"msg_date": "Fri, 16 Jan 2004 10:53:18 -0600",
"msg_from": "Larry Rosenman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres on Netapp"
},
{
"msg_contents": "\n> I'd appreciate if anyone could share your experience\n> in configuring things on the filer for optimal\n> performance or any recomendataion that i should be\n> aware of.\n\nNetapps are great things. Just beware that you'll be using NFS, and NFS\ndrivers on many operating systems have been known to be buggy in some\nway or another which may cause dataloss or corruption during extreme\nconditions (power failure, network interruption, etc.)\n\nIf you can configure it as a SAN, you may find it works out better.\n\n",
"msg_date": "Fri, 16 Jan 2004 12:01:39 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres on Netapp"
},
{
"msg_contents": "On January 16, 2004 11:53 am, Larry Rosenman wrote:\n> --On Monday, January 12, 2004 13:45:45 -0800 Shankar K <[email protected]>\n> wrote:\n> > We are considering to use NetApp filer for a highly\n> > busy 24*7 postgres database and the reason we chose\n> I run a (not very busy) PG cluster on a NetAPP.\n\nI run a very busy PG installation on one.\n\n> It seems to do just fine.\n\nDitto.\n\n> The issue is the speed of the network connection. In my case it's only\n> FastEthernet (100BaseTX). If it's very busy, you may need to look\n> at GigE.\n\nWith the price of GigE adapters I wouldn't consider anything else.\n\nI have a huge database that takes about an hour to copy. The netApp snapshot \nfeature is very nice because I can get a \"moment in time\" image of the \ndatabase. Even though I can't run from the snapshot because it is read only \n(*) and PG needs to write to files just to open the database, I can copy it \nand get a runnable version of the DB. If I copy directly from the original I \ncan get many changes while copying and wind up with a copy that will not run.\n\n(*): It would be nice if PG had a flag that allowed a database to be opened in \nread only mode without touching anything in the directory.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Sat, 17 Jan 2004 11:55:13 -0500",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres on Netapp"
},
{
"msg_contents": "D'Arcy J.M. Cain wrote:\n> With the price of GigE adapters I wouldn't consider anything else.\n> \n> I have a huge database that takes about an hour to copy. The netApp snapshot \n> feature is very nice because I can get a \"moment in time\" image of the \n> database. Even though I can't run from the snapshot because it is read only \n> (*) and PG needs to write to files just to open the database, I can copy it \n> and get a runnable version of the DB. If I copy directly from the original I \n> can get many changes while copying and wind up with a copy that will not run.\n> \n> (*): It would be nice if PG had a flag that allowed a database to be opened in \n> read only mode without touching anything in the directory.\n\nPostgreSQL has to read the WAL to adjust the contents of the flat file\non startup in such a setup, so I don't see how we could do it read-only.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 23 Jan 2004 10:01:09 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Postgres on Netapp"
}
]
[
{
"msg_contents": "Dear developers,\n\nI wonder it happens to systems where inefficient update \nSQL's are used like this:\n\nUPDATE MyTable SET MyColumn=1234\n\nQuestion arises when the value of MyColumn is already 1234 \nbefore the update.\n\nIf I am right, even when the to-be-updated column values \nequal to the new values, the core never hates to update that \nrow anyway. If so, is it wise or not to adjust the core for \nlazy SQL users to ignore such \"meaningless\" updates in order \nto reduce some disk load and prevent some \"holes\" resulted \nfrom the delete (a consequence of update) in that table?\n\nRegards,\nCN\n",
"msg_date": "Tue, 13 Jan 2004 11:07:24 +0800 (CST)",
"msg_from": "\"cnliou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ignore Some Updates"
},
{
"msg_contents": "\"cnliou\" <[email protected]> writes:\n> I wonder it happens to systems where inefficient update \n> SQL's are used like this:\n> UPDATE MyTable SET MyColumn=1234\n> Question arises when the value of MyColumn is already 1234 \n> before the update.\n\nWe have to fire UPDATE triggers in any case.\n\n> If I am right, even when the to-be-updated column values \n> equal to the new values, the core never hates to update that \n> row anyway. If so, is it wise or not to adjust the core for \n> lazy SQL users to ignore such \"meaningless\" updates in order \n\nSeems like penalizing the intelligent people (by adding useless\ncomparisons) in order to reward the \"lazy\" ones.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 12 Jan 2004 23:59:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Ignore Some Updates "
}
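If the goal is simply to skip such no-op writes, the guard can live in the statement itself rather than in the server. A minimal sketch of that idea (the explicit NULL test also covers rows where MyColumn is NULL):

    -- only touch rows whose value would actually change
    UPDATE MyTable
       SET MyColumn = 1234
     WHERE MyColumn <> 1234 OR MyColumn IS NULL;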
]
[
{
"msg_contents": "Hello -\nI am using postgresql to hold aliases, users, and relay_domains for postfix and courier to do lookups from. I am not storing mail in sql.\n\nI need postgresql to have fast read performance, so i setup index's on the tables. Also, the queries are basically \"select blah from table where domain='domain.com'\";, so i dont need to be able to support large results.\n\nI will have a lot of mail servers connecting to this postgresql db, so i need to support a lot of connections... but dont need to support large results.\n\nI am using FreeBSD 5.2.\n\nWhat are some tuning options and formulas I can use to get good values?\n\nThanks!\nDavid\n\n",
"msg_date": "Tue, 13 Jan 2004 11:04:13 -0500",
"msg_from": "David Hill <[email protected]>",
"msg_from_op": true,
"msg_subject": "freebsd 5.2 and max_connections"
},
{
"msg_contents": "On Tuesday 13 January 2004 16:04, David Hill wrote:\n> Hello -\n> I am using postgresql to hold aliases, users, and relay_domains for postfix\n> and courier to do lookups from. I am not storing mail in sql.\n>\n> I need postgresql to have fast read performance, so i setup index's on the\n> tables. Also, the queries are basically \"select blah from table where\n> domain='domain.com'\";, so i dont need to be able to support large results.\n>\n> I will have a lot of mail servers connecting to this postgresql db, so i\n> need to support a lot of connections... but dont need to support large\n> results.\n\nFirstly - if you don't know about the tuning guidelines/annotated config file, \nyou should go here:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\nHmm - small result sets accessed directly via indexed fields, so sort_mem \nprobably isn't important to you.\n\nMake sure your effective cache setting is accurate though, so PG can estimate \nwhether it'll need to access the disks.\n\nNot sure if clustering one or more tables will help - I'm guessing not. What \nmight help is to increase the statistics gathered on important columns. That \nshould give the planner a more accurate estimate of value distribution and \nshouldn't cost you too much to keep accurate, since I'm guessing a low rate \nof updating.\n\nYou might want to play with the random page cost (?or is it random access \ncost?) but more RAM for a bigger disk cache is probably the simplest tweak.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 13 Jan 2004 17:13:42 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: freebsd 5.2 and max_connections"
}
]
[
{
"msg_contents": "I am writing a website that will probably have some traffic.\nRight now I wrap every .php page in pg_connect() and pg_close().\nThen I read somewhere that Postgres only supports 100 simultaneous \nconnections (default). Is that a limitation? Should I use some other \nmethod when writing code for high-traffic website?\nJ.\n\n-- \[email protected]\nhttp://jonr.beecee.org\n+354 699 8086\n\n",
"msg_date": "Wed, 14 Jan 2004 12:42:21 +0000",
"msg_from": "=?ISO-8859-1?Q?J=F3n_Ragnarsson?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "100 simultaneous connections, critical limit?"
}
]
[
{
"msg_contents": "I am writing a website that will probably have some traffic.\nRight now I wrap every .php page in pg_connect() and pg_close().\nThen I read somewhere that Postgres only supports 100 simultaneous\nconnections (default). Is that a limitation? Should I use some other\nmethod when writing code for high-traffic website?\nJ.\n\n\n",
"msg_date": "Wed, 14 Jan 2004 12:48:17 +0000",
"msg_from": "=?ISO-8859-1?Q?J=F3n_Ragnarsson?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "100 simultaneous connections, critical limit?"
},
{
"msg_contents": "On Wednesday 14 January 2004 18:18, J�n Ragnarsson wrote:\n> I am writing a website that will probably have some traffic.\n> Right now I wrap every .php page in pg_connect() and pg_close().\n> Then I read somewhere that Postgres only supports 100 simultaneous\n> connections (default). Is that a limitation? Should I use some other\n> method when writing code for high-traffic website?\n\nYes. You should rather investigate connection pooling.\n\nI am no php expert but probably this could help you..\n\nhttp://www.php.net/manual/en/function.pg-pconnect.php\n\n Shridhar\n\n\n",
"msg_date": "Wed, 14 Jan 2004 18:31:00 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
},
{
"msg_contents": "Clinging to sanity, [email protected] (J�n Ragnarsson) mumbled into her beard:\n> I am writing a website that will probably have some traffic.\n> Right now I wrap every .php page in pg_connect() and pg_close().\n> Then I read somewhere that Postgres only supports 100 simultaneous\n> connections (default). Is that a limitation? Should I use some other\n> method when writing code for high-traffic website?\n\nI thought the out-of-the-box default was 32.\n\nIf you honestly need a LOT of connections, you can configure the\ndatabase to support more. I \"upped the limit\" on one system to have\n512 the other week; certainly supportable, if you have the RAM for it.\n\nIt is, however, quite likely that the connect()/close() cuts down on\nthe efficiency of your application. If PHP supports some form of\n\"connection pooling,\" you should consider using that, as it will cut\ndown _dramatically_ on the amount of work done establishing/closing\nconnections, and should let your apps use somewhat fewer connections\nmore effectively.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://cbbrowne.com/info/linux.html\n\"It has been said that man is a rational animal. All my life I have\nbeen searching for evidence which could support this.\"\n-- Bertrand Russell\n",
"msg_date": "Wed, 14 Jan 2004 08:27:03 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
},
{
"msg_contents": "Ok, connection pooling was the thing that I thought of first, but I \nhaven't found any docs regarding pooling with PHP+Postgres.\nOTOH, I designed the application to be as independent from the DB as \npossible. (No stored procedures or other Postgres specific stuff)\nThanks,\nJ.\n\nChristopher Browne wrote:\n\n> Clinging to sanity, [email protected] (J�n Ragnarsson) mumbled into her beard:\n> \n>>I am writing a website that will probably have some traffic.\n>>Right now I wrap every .php page in pg_connect() and pg_close().\n>>Then I read somewhere that Postgres only supports 100 simultaneous\n>>connections (default). Is that a limitation? Should I use some other\n>>method when writing code for high-traffic website?\n> \n> \n> I thought the out-of-the-box default was 32.\n> \n> If you honestly need a LOT of connections, you can configure the\n> database to support more. I \"upped the limit\" on one system to have\n> 512 the other week; certainly supportable, if you have the RAM for it.\n> \n> It is, however, quite likely that the connect()/close() cuts down on\n> the efficiency of your application. If PHP supports some form of\n> \"connection pooling,\" you should consider using that, as it will cut\n> down _dramatically_ on the amount of work done establishing/closing\n> connections, and should let your apps use somewhat fewer connections\n> more effectively.\n\n",
"msg_date": "Wed, 14 Jan 2004 13:44:17 +0000",
"msg_from": "=?ISO-8859-1?Q?J=F3n_Ragnarsson?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
},
{
"msg_contents": "Christopher Browne <[email protected]> writes:\n> Clinging to sanity, [email protected] (J�n Ragnarsson) mumbled into her beard:\n>> Then I read somewhere that Postgres only supports 100 simultaneous\n>> connections (default).\n\n> I thought the out-of-the-box default was 32.\n\nPre-7.4 it was 32; in 7.4 it's 100 (if your kernel settings will allow it).\nIt's important to point out that both of these are trivial-to-alter\nconfiguration settings, not some kind of hardwired limit. However, the\nmore backend processes you have, the more RAM you need on the database\nserver. It's good advice to look into connection pooling instead of\ntrying to push max_connections up to the moon.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 14 Jan 2004 10:04:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit? "
},
{
"msg_contents": "On Wed, 14 Jan 2004, J�n Ragnarsson wrote:\n\n> I am writing a website that will probably have some traffic.\n> Right now I wrap every .php page in pg_connect() and pg_close().\n> Then I read somewhere that Postgres only supports 100 simultaneous\n> connections (default). Is that a limitation? Should I use some other\n> method when writing code for high-traffic website?\n\nA few tips from an old PHP/Apache/Postgresql developer.\n\n1: Avoid pg_pconnect unless you are certain you have load tested the \nsystem and it will behave properly. pg_pconnect often creates as many \nissues as it solves.\n\n2: While php has pretty mediocre run time performance, it's startup / \nshutdown / cleanup are quite fast, and it caches previously executed \npages. Thus, if your pages are relatively small, code-wise, then the \namount of time it will take to execute them, versus the amount of time the \nuser will spend reading the output will be quite small. So, you can \nlikely handle many hundreds of users before hitting any limit on the \ndatabase end.\n\n3: Apache can only run so many children too. The default for the 1.3 \nbranch is 150. If you decrease that to 50 or so, you are quite unlikely \nto ever run out of connections to the database.\n\n4: Postgresql can handle thousands of connections if the server and \npostgresql itself are properly configured, so don't worry so much about \nthat. You can always increase the max should you need to later.\n\n5: Database connection time in a php script is generally a non-issue. \npg_connect on a fast machine, hitting a local pgsql database generally \nruns in about 1/10,000th of a second. Persistant connects get this down \nto about 1/1,000,000th of a second. Either way, a typical script takes \nmilliseconds to run, i.e. 1/100th of a second or longer, so the actual \ndifference between a pg_pconnect and a pg_connect just isn't worth \nworrying about in 99% of all circumstances.\n\n6: Profile your user's actions and the time it takes the server versus how \nlong it takes them to make the next click. Even the fastest user is \nusually much slower than your server, so it takes a whole bunch of them to \nstart bogging the system down. \n\n7: Profile your machine under parallel load. Note that machine simos \n(i.e. the kind you get from the ab utility) generally represent about 10 \nto 20 real people. I.e. if your machine runs well with 20 machine simos, \nyou can bet on it handling 100 or more real people with ease.\n\n",
"msg_date": "Wed, 14 Jan 2004 10:09:23 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
},
{
"msg_contents": "scott.marlowe wrote:\n\n>A few tips from an old PHP/Apache/Postgresql developer.\n>\n>1: Avoid pg_pconnect unless you are certain you have load tested the \n>system and it will behave properly. pg_pconnect often creates as many \n>issues as it solves.\n> \n>\n\nI share the above view. I've had little success with persistent \nconnections. The cost of pg_connect is minimal, pg_pconnect is not a \nviable solution IMHO. Connections are rarely actually reused.\n\n--\nAdam Alkins\nhttp://www.rasadam.com\n",
"msg_date": "Wed, 14 Jan 2004 14:10:14 -0400",
"msg_from": "Adam Alkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
},
{
"msg_contents": "Hi!\n\n\n\n\nAA> scott.marlowe wrote:\n\n>>A few tips from an old PHP/Apache/Postgresql developer.\n>>\n>>1: Avoid pg_pconnect unless you are certain you have load tested the\n>>system and it will behave properly. pg_pconnect often creates as many\n>>issues as it solves.\n>> \n>>\n\n\nMy experience with persistant connections in PHP is quite similar to\nthe one of Scott Marlowe. There are some nasty effects if something\nis not working. The most harmless results come probably from not closed\ntransactions which will result in a warning as PHP seems to send\nalways a BEGIN; ROLLBACK; for reusing a connection.\n\n\n\nAA> I share the above view. I've had little success with persistent \nAA> connections. The cost of pg_connect is minimal, pg_pconnect is not a\nAA> viable solution IMHO. Connections are rarely actually reused.\n\n\nStill I think itᅵs a good way to speed things up. Probably the\nconnection time it takes in PHP is not so the gain, but the general\nsaving of processor time. Spawning a new process on the backend can be\na very expensive operation. And if it happens often, it sums up.\nPerhaps itᅵs only a memory for CPU time deal.\n\nMy persistant connections get very evenly used, no matter if there are\n2 or 10. The CPU usage for them is very equally distributed.\n\n\n\nChristoph Nelles\n\n\n-- \nMit freundlichen Grᅵssen\nEvil Azrael mailto:[email protected]\n\n",
"msg_date": "Wed, 14 Jan 2004 19:36:25 +0100",
"msg_from": "Evil Azrael <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
},
{
"msg_contents": "On Wed, 14 Jan 2004, Adam Alkins wrote:\n\n> scott.marlowe wrote:\n> \n> >A few tips from an old PHP/Apache/Postgresql developer.\n> >\n> >1: Avoid pg_pconnect unless you are certain you have load tested the \n> >system and it will behave properly. pg_pconnect often creates as many \n> >issues as it solves.\n> > \n> >\n> \n> I share the above view. I've had little success with persistent \n> connections. The cost of pg_connect is minimal, pg_pconnect is not a \n> viable solution IMHO. Connections are rarely actually reused.\n\nI've found that for best performance with pg_pconnect, you need to \nrestrict the apache server to a small number of backends, say 40 or 50, \nextend keep alive to 60 or so seconds, and use the same exact connection \nstring all over the place. Also, set max.persistant.connections or \nwhatever it is in php.ini to 1 or 2. Note that max.persistant.connections \nis PER BACKEND, not total, in php.ini, so 1 or 2 should be enough for most \ntypes of apps. 3 tops. Then, setup postgresql for 200 connections, so \nyou'll never run out. Tis better to waste a little shared memory and be \nsafe than it is to get the dreaded out of connections error from \npostgresql.\n\nIf you do all of the above, pg_pconnect can work pretty well, on things \nlike dedicated app servers where only one thing is being done and it's \nbeing done a lot. On general purpose servers with 60 databases and 120 \napplications, it adds little, although extending the keep alive timeout \nhelps. \n\nbut if you just start using pg_pconnect without reconfiguring and then \ntesting, it's quite likely your site will topple over under load with out \nof connection errors.\n\n",
"msg_date": "Wed, 14 Jan 2004 14:35:52 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
},
{
"msg_contents": "> 7: Profile your machine under parallel load. Note that machine simos \n> (i.e. the kind you get from the ab utility) generally represent about 10 \n> to 20 real people. I.e. if your machine runs well with 20 machine simos, \n> you can bet on it handling 100 or more real people with ease.\n\n8. Use the Turck MMCache - it rocks. Works absolutely perfectly and \ncaches compiled versions of all your PHP scripts - cut the load on our \nserver by a factor of 5.\n",
"msg_date": "Thu, 15 Jan 2004 09:28:07 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
},
{
"msg_contents": "On Thu, 2004-01-15 at 01:48, Jón Ragnarsson wrote:\n> I am writing a website that will probably have some traffic.\n> Right now I wrap every .php page in pg_connect() and pg_close().\n> Then I read somewhere that Postgres only supports 100 simultaneous\n> connections (default). Is that a limitation? Should I use some other\n> method when writing code for high-traffic website?\n\nWhether the overhead of pg_connect() pg_close() has a noticeable effect\non your application depends on what you do in between them. TBH I never\ndo that second one myself - PHP will close the connection when the page\nis finished.\n\nI have developed some applications which are trying to be\nas-fast-as-possible and for which I either use pg_pconnect so you have\none DB connection per Apache process, or I use DBBalancer where you have\na pool of connections, and pg_connect is _actually_ connecting to\nDBBalancer in a very low-overhead manner and you have a pool of\nconnections out the back. I am the Debian package maintainer for\nDBBalancer.\n\nYou may also want to consider differentiating based on whether the\napplication is writing to the database or not. Pooling and persistent\nconnections can give weird side-effects if transaction scoping is\nbollixed in the application - a second page view re-using an earlier\nconnection which was serving a different page could find itself in the\nmiddle of an unexpected transaction. Temp tables are one thing that can\nbite you here.\n\nThere are a few database pooling solutions out there. Using pg_pconnect\nis the simplest of these, DBBalancer fixes some of it's issues, and\nothers go further still.\n\nAnother point to consider is that database pooling will give you the\nbiggest performance increase if your queries are all returning small\ndatasets. If you return large datasets it can potentially make things\nworse (depending on implementation) through double-handling of the data.\n\nAs others have said too: 100 is just a configuration setting in\npostgresql.conf - not an implemented limit.\n\nCheers,\n\t\t\t\t\tAndrew McMillan.\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n How many things I can do without! - Socrates\n-------------------------------------------------------------------------\n",
"msg_date": "Thu, 15 Jan 2004 15:57:08 +1300",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
},
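A hedged illustration of the transaction-scoping and temp-table pitfalls mentioned above, with a hypothetical table name: session state survives across page views when a backend is reused, so a later page can trip over what an earlier one left behind.

-- page A, served by a pooled/persistent backend
CREATE TEMPORARY TABLE scratch (id integer);

-- page B later reuses the same backend and tries the same thing
CREATE TEMPORARY TABLE scratch (id integer);
-- ERROR:  relation "scratch" already exists

Similarly, an explicit BEGIN with no matching COMMIT on page A leaves page B running inside a transaction it never opened.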
{
"msg_contents": "scott.marlowe wrote:\n\n>On Wed, 14 Jan 2004, Adam Alkins wrote:\n>\n> \n>\n>>scott.marlowe wrote:\n>>\n>> \n>>\n>>>A few tips from an old PHP/Apache/Postgresql developer.\n>>>\n>>>1: Avoid pg_pconnect unless you are certain you have load tested the \n>>>system and it will behave properly. pg_pconnect often creates as many \n>>>issues as it solves.\n>>> \n>>>\n>>> \n>>>\n>>I share the above view. I've had little success with persistent \n>>connections. The cost of pg_connect is minimal, pg_pconnect is not a \n>>viable solution IMHO. Connections are rarely actually reused.\n>> \n>>\n>\n>I've found that for best performance with pg_pconnect, you need to \n>restrict the apache server to a small number of backends, say 40 or 50, \n>extend keep alive to 60 or so seconds, and use the same exact connection \n>string all over the place. Also, set max.persistant.connections or \n>whatever it is in php.ini to 1 or 2. Note that max.persistant.connections \n>is PER BACKEND, not total, in php.ini, so 1 or 2 should be enough for most \n>types of apps. 3 tops. Then, setup postgresql for 200 connections, so \n>you'll never run out. Tis better to waste a little shared memory and be \n>safe than it is to get the dreaded out of connections error from \n>postgresql.\n>\n> \n>\nI disagree. With the server I have been running for the last two years\nwe found the the pconnect settings with long keep-alives in apache\nconsumed far more resources than you would imagine. We found the\nbecause some clients would not support keep-alive (older IE clients)\ncorrectly. They would hammer the server with 20-30 individual requests;\napache would keep those processes in keep-alive mode. When the number\nof apache processes were restricted there were DoS problems. The\nshort-keep alive pattern works best to keep a single pages related\nrequests to be served effeciently. In fact the best performance and\nthe greatest capacity in real life was with a 3 second timeout for\nkeep-alive requests. A modem connection normally won't have sufficient\nlag as to time-out on related loads and definitely not a broadband\nconnection. \n\nAlso, depending on your machine you should time the amount of time it\ntakes to connect to the db. This server ran about 3-4 milliseconds on\naverage to connect without pconnect, and it was better to conserve\nmemory so that none postgresql scripts and applications didn't have the\nextra memory footprint of a postgresql connection preventing memory\nexhaustion and excessive swapping.\n\nPlease keep in mind that this was on a dedicated server with apache and\npostgresql and a slew of other processes running on the same machine. \nThe results may be different for separate process oriented setups.\n\n>If you do all of the above, pg_pconnect can work pretty well, on things \n>like dedicated app servers where only one thing is being done and it's \n>being done a lot. On general purpose servers with 60 databases and 120 \n>applications, it adds little, although extending the keep alive timeout \n>helps. \n>\n>but if you just start using pg_pconnect without reconfiguring and then \n>testing, it's quite likely your site will topple over under load with out \n>of connection errors.\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n> \n>\n\n",
"msg_date": "Fri, 16 Jan 2004 00:24:45 -0600",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
}
] |
[
{
"msg_contents": "Hi, I have to following select:\n\nset enable_seqscan = on;\nset enable_indexscan =on;\n\nselect a.levelno,a.id from (select 1 as levelno,42 as id) a, menutable b \nwhere b.site_id='21' and a.id=b.id;\n\nmenutable:\nid bigint,\nsite_id bigint\n\nIndexes: menutable_pkey primary key btree (site_id, id),\n\nThe explain analyze shows:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..13.50 rows=1 width=34) (actual \ntime=0.04..0.43 rows=1 loops=1)\n Join Filter: (\"outer\".id = \"inner\".id)\n -> Subquery Scan a (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.01..0.01 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.00..0.00 rows=1 loops=1)\n -> Seq Scan on menutable b (cost=0.00..13.01 rows=38 width=22) \n(actual time=0.02..0.38 rows=38 loops=1)\n Filter: (site_id = 21::bigint)\n Total runtime: 0.47 msec\n\nsetting set enable_seqscan = off;\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..29.85 rows=1 width=34) (actual \ntime=0.07..0.18 rows=1 loops=1)\n Join Filter: (\"outer\".id = \"inner\".id)\n -> Subquery Scan a (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.01..0.01 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.00..0.00 rows=1 loops=1)\n -> Index Scan using menutable_pkey on menutable b \n(cost=0.00..29.36 rows=38 width=22) (actual time=0.02..0.12 rows=38 loops=1)\n Index Cond: (site_id = 21::bigint)\n Total runtime: 0.22 msec\n\nI do analyze, vacumm full analyze on table but nothing changed. The same \nplan in case of join syntax.\n\nversion: PostgreSQL 7.3.3 and PostgreSQL 7.3.4\n\nAny idea?\nthx\n\nC.\n",
"msg_date": "Wed, 14 Jan 2004 14:35:25 +0100",
"msg_from": "CoL <[email protected]>",
"msg_from_op": true,
"msg_subject": "subquery and table join, index not use for table"
},
{
"msg_contents": "\nOn Wed, 14 Jan 2004, CoL wrote:\n[plan1]\n> -> Seq Scan on menutable b (cost=0.00..13.01 rows=38 width=22)\n> (actual time=0.02..0.38 rows=38 loops=1)\n\n[plan2]\n> -> Index Scan using menutable_pkey on menutable b\n> (cost=0.00..29.36 rows=38 width=22) (actual time=0.02..0.12 rows=38 loops=1)\n\nIt's estimating a cost of 13 for the sequence scan and 29 for the index\nscan so it's choosing the sequence scan.\n\n The value of random_page_cost may be too high for your system and\nespecially so if the database is small and likely to fit in ram. In\naddition, if the table in question is small, you'll likely find that at\nsome point for a larger data set that the system switches over to an index\nscan.\n",
"msg_date": "Fri, 16 Jan 2004 09:48:18 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: subquery and table join, index not use for table"
}
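A sketch of the experiment Stephan describes, using the query from the original post; the value 2 is only an example (the default at the time is 4), and whether it is appropriate depends on how much of the database is cached in RAM:

SET random_page_cost = 2;
EXPLAIN ANALYZE
SELECT a.levelno, a.id
FROM (SELECT 1 AS levelno, 42 AS id) a, menutable b
WHERE b.site_id = '21' AND a.id = b.id;

If the planner now picks the index scan and it stays faster on realistic data sizes, the setting can be made permanent in postgresql.conf.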
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Jón Ragnarsson\n> Sent: 14 January 2004 13:44\n> Cc: [email protected]\n> Subject: Re: [PERFORM] 100 simultaneous connections, critical limit?\n> \n> Ok, connection pooling was the thing that I thought of first, but I\n> haven't found any docs regarding pooling with PHP+Postgres.\n> OTOH, I designed the application to be as independent from the DB as\n> possible. (No stored procedures or other Postgres specific stuff)\n> Thanks,\n> J.\n\nAs far as I know PHP supports persistent connections to a PG database.\nSee pg_pconnect instead of pg_connect. Each of the db connections are\ntied to a particular Apache process and will stay open for the life of\nthat process. So basically make sure your Apache config file\n(httpd.conf) and PG config file (postgresql.conf) agree on the maximum\nnumber of connections otherwise some pages will not be able to connect\nto your database.\n\nThis may not be a problem for small sites but on large sites it is, with\nheavy loads and large number of concurrent users. For example, consider\na site that must support 500 concurrent connections. If persistent\nconnections are used at least 500 concurrent connections to PG would be\nrequired, which I guess is probably not recommended.\n\nThe way I would like Apache/PHP to work is to have a global pool of\nconnections to a postgres server, which can be shared around all Apache\nprocesses. This pool can be limited to say 50 or 100 connections.\nProblems occur under peak load where all 500 concurrent connections are\nin use, but all that should happen is there is a bit of a delay.\n\nHope that (almost) makes sense,\n\n\nKind Regards,\n\nNick Barr\nWebBased Ltd.\n\n\n> Christopher Browne wrote:\n> \n> > Clinging to sanity, [email protected] (Jón Ragnarsson) mumbled\ninto\n> her beard:\n> >\n> >>I am writing a website that will probably have some traffic.\n> >>Right now I wrap every .php page in pg_connect() and pg_close().\n> >>Then I read somewhere that Postgres only supports 100 simultaneous\n> >>connections (default). Is that a limitation? Should I use some other\n> >>method when writing code for high-traffic website?\n> >\n> >\n> > I thought the out-of-the-box default was 32.\n> >\n> > If you honestly need a LOT of connections, you can configure the\n> > database to support more. I \"upped the limit\" on one system to have\n> > 512 the other week; certainly supportable, if you have the RAM for\nit.\n> >\n> > It is, however, quite likely that the connect()/close() cuts down on\n> > the efficiency of your application. If PHP supports some form of\n> > \"connection pooling,\" you should consider using that, as it will cut\n> > down _dramatically_ on the amount of work done establishing/closing\n> > connections, and should let your apps use somewhat fewer connections\n> > more effectively.\n> \n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n\n",
"msg_date": "Wed, 14 Jan 2004 14:00:52 -0000",
"msg_from": "\"Nick Barr\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 100 simultaneous connections, critical limit?"
}
] |
[
{
"msg_contents": "Hi there.\n\nI've got a database (the schema is:\nhttp://compsoc.man.ac.uk/~cez/2004/01/14/tv-schema.sql) for television\ndata. Now, one of the things I want to use this for is a now and next\ndisplay. (much like http://teletext.com/tvplus/nownext.asp ). \n\nI've got a view defined like this:\nCREATE VIEW progtit AS SELECT programme.*, title_seen, title_wanted, title_text FROM (programme NATURAL JOIN title);\n\nAnd to select the programmes that are on currently and next, I'm doing\nsomething like this:\n\nSELECT * \nFROM progtit AS p1 LEFT JOIN progtit AS p2 ON p1.prog_next = p2.prog_id\nWHERE prog_start <= '2004-01-14 23:09:11' \n AND prog_stop > '2004-01-14 23:09:11';\n\nNow, unfourtunately this runs rather slowly (takes around 1sec to\ncomplete on my machine), as it (AFAIK) ends up building a complete\ninstance of the progtit view and then joining the current programmes\nwith that, instead of just finding the current set of programs and then\nselecting the relevant rows from the view.\n\nNow, I know I could just two it in two seperate passes for the current\nprogrammes and those after them, but I'd be neater to do it in one.\n\nSo, is there any way to optimize the above query? Any clues, or\nreferences would be wonderful.\n\nThanks.\n-- \nCeri Storey <[email protected]>\n",
"msg_date": "Wed, 14 Jan 2004 23:10:15 +0000",
"msg_from": "Ceri Storey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Join optimisation Quandry"
},
{
"msg_contents": "\nOn Wed, 14 Jan 2004, Ceri Storey wrote:\n\n> Hi there.\n>\n> I've got a database (the schema is:\n> http://compsoc.man.ac.uk/~cez/2004/01/14/tv-schema.sql) for television\n> data. Now, one of the things I want to use this for is a now and next\n> display. (much like http://teletext.com/tvplus/nownext.asp ).\n>\n> I've got a view defined like this:\n> CREATE VIEW progtit AS SELECT programme.*, title_seen, title_wanted,\n> title_text FROM (programme NATURAL JOIN title);\n>\n> And to select the programmes that are on currently and next, I'm doing\n> something like this:\n>\n> SELECT *\n> FROM progtit AS p1 LEFT JOIN progtit AS p2 ON p1.prog_next = p2.prog_id\n> WHERE prog_start <= '2004-01-14 23:09:11'\n> AND prog_stop > '2004-01-14 23:09:11';\n>\n> Now, unfourtunately this runs rather slowly (takes around 1sec to\n> complete on my machine), as it (AFAIK) ends up building a complete\n> instance of the progtit view and then joining the current programmes\n> with that, instead of just finding the current set of programs and then\n> selecting the relevant rows from the view.\n>\n> Now, I know I could just two it in two seperate passes for the current\n> programmes and those after them, but I'd be neater to do it in one.\n>\n> So, is there any way to optimize the above query? Any clues, or\n> references would be wonderful.\n\nAs a starting point, we're likely to need the exact query, explain analyze\noutput for the query and version information.\n",
"msg_date": "Fri, 16 Jan 2004 10:17:50 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join optimisation Quandry"
},
{
"msg_contents": "On Fri, Jan 16, 2004 at 10:17:50AM -0800, Stephan Szabo wrote:\n> As a starting point, we're likely to need the exact query, explain analyze\n> output for the query and version information.\n\nOkay, from top to bottom:\n\n SELECT p1.chan_name, p1.prog_start AS now_start, p1.prog_id, p1.title_text, \n p2.prog_start AS next_start, p2.prog_id, p2.title_text, \n p1.title_wanted, p2.title_wanted, p1.chan_id\n FROM (programme natural join channel NATURAL JOIN title) AS p1 \n LEFT OUTER JOIN (programme NATURAL JOIN title) AS p2 \n ON p1.prog_next = p2.prog_id \n WHERE p1.prog_start <= timestamp 'now' AND p1.prog_stop > timestamp 'now'\n ORDER BY p1.chan_id ASC\n\nQUERY PLAN\n----\n Sort (cost=983.38..983.38 rows=1 width=85) (actual time=10988.525..10988.557 rows=17 loops=1)\n Sort Key: public.programme.chan_id\n -> Nested Loop Left Join (cost=289.86..983.37 rows=1 width=85) (actual time=631.918..10988.127 rows=17 loops=1)\n Join Filter: (\"outer\".prog_next = \"inner\".prog_id)\n -> Nested Loop (cost=4.33..9.12 rows=1 width=55) (actual time=4.111..7.960 rows=17 loops=1)\n -> Hash Join (cost=4.33..5.64 rows=1 width=37) (actual time=4.017..5.182 rows=17 loops=1)\n Hash Cond: (\"outer\".chan_id = \"inner\".chan_id)\n -> Seq Scan on channel (cost=0.00..1.20 rows=20 width=17) (actual time=0.017..0.403 rows=20 loops=1)\n -> Hash (cost=4.32..4.32 rows=1 width=24) (actual time=3.910..3.910 rows=0 loops=1)\n -> Index Scan using prog_stop_idx on programme (cost=0.00..4.32 rows=1 width=24) (actual time=0.140..3.809 rows=17 loops=1)\n Index Cond: (prog_stop > '2004-01-17 01:01:51.786145'::timestamp without time zone)\n Filter: (prog_start <= '2004-01-17 01:01:51.786145'::timestamp without time zone)\n -> Index Scan using \"$3\" on title (cost=0.00..3.47 rows=1 width=26) (actual time=0.078..0.114 rows=1 loops=17)\n Index Cond: (\"outer\".title_id = title.title_id)\n -> Hash Join (cost=285.54..892.91 rows=6507 width=34) (actual time=191.612..586.407 rows=7145 loops=17)\n Hash Cond: (\"outer\".title_id = \"inner\".title_id)\n -> Seq Scan on programme (cost=0.00..121.07 rows=6507 width=16) (actual time=0.036..42.337 rows=7145 loops=17)\n -> Hash (cost=190.83..190.83 rows=10683 width=26) (actual time=190.795..190.795 rows=0 loops=17)\n -> Seq Scan on title (cost=0.00..190.83 rows=10683 width=26) (actual time=0.143..113.223 rows=10715 loops=17)\n Total runtime: 10989.661 ms\n\nAnd both client and server are:\npostgres (PostgreSQL) 7.4.1\n\nThanks for looking into it.\n-- \nCeri Storey <[email protected]>\n",
"msg_date": "Sat, 17 Jan 2004 01:03:34 +0000",
"msg_from": "Ceri Storey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join optimisation Quandry"
},
{
"msg_contents": "On Sat, 17 Jan 2004, Ceri Storey wrote:\n\n> On Fri, Jan 16, 2004 at 10:17:50AM -0800, Stephan Szabo wrote:\n> > As a starting point, we're likely to need the exact query, explain analyze\n> > output for the query and version information.\n>\n> Okay, from top to bottom:\n>\n> SELECT p1.chan_name, p1.prog_start AS now_start, p1.prog_id, p1.title_text,\n> p2.prog_start AS next_start, p2.prog_id, p2.title_text,\n> p1.title_wanted, p2.title_wanted, p1.chan_id\n> FROM (programme natural join channel NATURAL JOIN title) AS p1\n> LEFT OUTER JOIN (programme NATURAL JOIN title) AS p2\n> ON p1.prog_next = p2.prog_id\n> WHERE p1.prog_start <= timestamp 'now' AND p1.prog_stop > timestamp 'now'\n> ORDER BY p1.chan_id ASC\n>\n\nWell the plan would seems reasonable to me if there really was only 1 row\ncoming from the where conditions on p1. As a first step, if you raise the\nstatistics target (see ALTER TABLE) for prog_start and prog_stop and\nre-analyze the table, do you get a better estimate of the rows from that\ncondition? Secondly, what do you get if you enable_nestloop=off before\nexplain analyzing the query?\n",
"msg_date": "Fri, 16 Jan 2004 22:05:54 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join optimisation Quandry"
},
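A sketch of the two experiments suggested above, using the table and column names from the posted schema; the statistics target of 100 is only an illustrative value, and the table must be re-analyzed for it to take effect:

ALTER TABLE programme ALTER COLUMN prog_start SET STATISTICS 100;
ALTER TABLE programme ALTER COLUMN prog_stop SET STATISTICS 100;
ANALYZE programme;

SET enable_nestloop = off;
-- then re-run EXPLAIN ANALYZE on the now-and-next query to compare plans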
{
"msg_contents": "On Fri, Jan 16, 2004 at 10:05:54PM -0800, Stephan Szabo wrote:\n> Well the plan would seems reasonable to me if there really was only 1 row\n> coming from the where conditions on p1. As a first step, if you raise the\n> statistics target (see ALTER TABLE) for prog_start and prog_stop and\n> re-analyze the table, do you get a better estimate of the rows from that\n> condition? \n\nIndeed, that helps a fair bit with the estimate.\n\n> Secondly, what do you get if you enable_nestloop=off before\n> explain analyzing the query?\nSee attachment.\n\nSaying that, I've managed to halve the query time by lifting the join of\nthe title out of the RHS of the left outer join into the top-level of\nthe FROM clause; which was really the kind of advice I was after. If\nthis is the wrong list for that kind of thing, please say so.\n\nAgain, thanks.\n-- \nCeri Storey <[email protected]>",
"msg_date": "Sat, 17 Jan 2004 10:40:49 +0000",
"msg_from": "Ceri Storey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join optimisation Quandry"
},
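For reference, a hedged sketch of the restructuring described above — joining title for the "next" programme at the top level rather than inside the right-hand side of the outer join. The column list is abridged and the t2 alias is invented, so the real schema may need adjustments:

SELECT p1.chan_name, p1.prog_start AS now_start, p1.title_text,
       p2.prog_start AS next_start, t2.title_text AS next_title
FROM (programme NATURAL JOIN channel NATURAL JOIN title) AS p1
     LEFT OUTER JOIN programme AS p2 ON p1.prog_next = p2.prog_id
     LEFT OUTER JOIN title AS t2 ON p2.title_id = t2.title_id
WHERE p1.prog_start <= timestamp 'now' AND p1.prog_stop > timestamp 'now'
ORDER BY p1.chan_id ASC;

The idea is to avoid materializing a full programme-with-title join on the outer join's inner side before the filter on p1 has been applied.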
{
"msg_contents": "On Sat, Jan 17, 2004 at 01:03:34AM +0000, Ceri Storey wrote:\n> Okay, from top to bottom:\n> \n> SELECT p1.chan_name, p1.prog_start AS now_start, p1.prog_id, p1.title_text, \n> p2.prog_start AS next_start, p2.prog_id, p2.title_text, \n> p1.title_wanted, p2.title_wanted, p1.chan_id\n> FROM (programme natural join channel NATURAL JOIN title) AS p1 \n> LEFT OUTER JOIN (programme NATURAL JOIN title) AS p2 \n> ON p1.prog_next = p2.prog_id \n> WHERE p1.prog_start <= timestamp 'now' AND p1.prog_stop > timestamp 'now'\n> ORDER BY p1.chan_id ASC\n\n\nAlthough, as I've just found, another bottleneck is the title table.\nPostgreSQL seems to inst on doing a Seq Scan on the entire table. \n\nCompare:\ntv=> explain analyse SELECT * FROM tid LEFT OUTER JOIN title ON t1 = title_id OR t2 = title_id;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=190.83..267285.83 rows=2000 width=35) (actual time=222.776..2430.073 rows=33 loops=1)\n Join Filter: ((\"outer\".t1 = \"inner\".title_id) OR (\"outer\".t2 = \"inner\".title_id))\n -> Seq Scan on tid (cost=0.00..20.00 rows=1000 width=8) (actual time=0.028..10.457 rows=17 loops=1)\n -> Materialize (cost=190.83..297.66 rows=10683 width=27) (actual time=0.197..57.918 rows=10767 loops=17)\n -> Seq Scan on title (cost=0.00..190.83 rows=10683 width=27) (actual time=0.045..64.988 rows=10767 loops=1)\n Total runtime: 2435.059 ms\n(6 rows)\n\nWith:\ntv=> explain analyse select * from title where title_id IN (SELECT t1 FROM tid UNION SELECT t2 FROM tid);\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=205.16..451.40 rows=200 width=27) (actual time=3.065..82.689 rows=33 loops=1)\n Hash Cond: (\"outer\".title_id = \"inner\".t1)\n -> Seq Scan on title (cost=0.00..190.83 rows=10683 width=27) (actual time=0.010..36.325 rows=10767 loops=1)\n -> Hash (cost=204.66..204.66 rows=200 width=4) (actual time=1.464..1.464 rows=0 loops=1)\n -> HashAggregate (cost=204.66..204.66 rows=200 width=4) (actual time=1.234..1.355 rows=33 loops=1)\n -> Subquery Scan \"IN_subquery\" (cost=169.66..199.66 rows=2000 width=4) (actual time=0.735..1.104 rows=33 loops=1)\n -> Unique (cost=169.66..179.66 rows=2000 width=4) (actual time=0.728..0.934 rows=33 loops=1)\n -> Sort (cost=169.66..174.66 rows=2000 width=4) (actual time=0.722..0.779 rows=34 loops=1)\n Sort Key: t1\n -> Append (cost=0.00..60.00 rows=2000 width=4) (actual time=0.054..0.534 rows=34 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..30.00 rows=1000 width=4) (actual time=0.050..0.228 rows=17 loops=1)\n -> Seq Scan on tid (cost=0.00..20.00 rows=1000 width=4) (actual time=0.041..0.126 rows=17 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..30.00 rows=1000 width=4) (actual time=0.014..0.183 rows=17 loops=1)\n -> Seq Scan on tid (cost=0.00..20.00 rows=1000 width=4) (actual time=0.008..0.087 rows=17 loops=1)\n Total runtime: 83.214 ms\n(15 rows)\n\n-- \nCeri Storey <[email protected]>\n",
"msg_date": "Sat, 17 Jan 2004 11:58:26 +0000",
"msg_from": "Ceri Storey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join optimisation Quandry"
},
{
"msg_contents": "Ceri Storey <[email protected]> writes:\n> Although, as I've just found, another bottleneck is the title table.\n> PostgreSQL seems to inst on doing a Seq Scan on the entire table. \n\n> -> Seq Scan on tid (cost=0.00..20.00 rows=1000 width=8) (actual time=0.028..10.457 rows=17 loops=1)\n\nIt doesn't look like you've ever vacuumed or analyzed \"tid\" --- those\nare the default cost and rows estimates. Although I'm unsure whether\nthe plan would change much if you had.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 18 Jan 2004 21:21:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join optimisation Quandry "
}
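As a footnote to the above: with 7.4-era defaults the small table keeps its 1000-row placeholder estimate until it is analyzed, so the minimal fix is simply:

VACUUM ANALYZE tid;

After that the planner has real row counts to work with, whether or not it ultimately changes the plan.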
] |
[
{
"msg_contents": "Hi,\n\nI am using pg 7.4.1 and have created a trigger over table with 3 M rows.\n\nIf I start masive update on this table, pg executes this trigger on\nevery row and dramaticaly slows the system.\n\nExists in pg any way to define the trigger execution only if I have\nchanges on some fields?\n\nFor example I am able to declare this in oracle.\n\nMy trigger is writen in pgSQL.\n\nregards,\nivan.\n\n",
"msg_date": "Thu, 15 Jan 2004 14:13:15 +0100",
"msg_from": "pginfo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger question"
},
{
"msg_contents": "On Thursday 15 January 2004 13:13, pginfo wrote:\n> Hi,\n>\n> I am using pg 7.4.1 and have created a trigger over table with 3 M rows.\n> If I start masive update on this table, pg executes this trigger on\n> every row and dramaticaly slows the system.\n> Exists in pg any way to define the trigger execution only if I have\n> changes on some fields?\n\nNot at the moment (and I don't know of any plans for it).\n\n> For example I am able to declare this in oracle.\n> My trigger is writen in pgSQL.\n\nHmm - I can only think of two things you can try:\n1. check for the change first thing you do and exit if not there\n2. do the same, but write the trigger function in 'C'\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 15 Jan 2004 16:10:38 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
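A minimal PL/pgSQL sketch of option 1, with hypothetical table and column names; the quoted function body is the pre-8.0 syntax matching the 7.4.1 the poster is running, and the equality tests would need extra care if the columns can be NULL:

CREATE OR REPLACE FUNCTION my_table_trig() RETURNS trigger AS '
BEGIN
    -- bail out immediately when the interesting fields are unchanged
    IF NEW.price = OLD.price AND NEW.qty = OLD.qty THEN
        RETURN NEW;
    END IF;
    -- ... the expensive work happens only when something relevant changed ...
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER my_table_check BEFORE UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE my_table_trig();

The trigger still fires for every row, but the early RETURN keeps the per-row cost close to the comparison itself.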
{
"msg_contents": "Guten Tag Richard Huxton,\n\nAm Donnerstag, 15. Januar 2004 um 17:10 schrieben Sie:\n\nRH> On Thursday 15 January 2004 13:13, pginfo wrote:\n>> Hi,\n>>\n>> I am using pg 7.4.1 and have created a trigger over table with 3 M rows.\n>> If I start masive update on this table, pg executes this trigger on\n>> every row and dramaticaly slows the system.\n>> Exists in pg any way to define the trigger execution only if I have\n>> changes on some fields?\n\nThat not, but perhaps a STATEMENT Level Trigger could help. It�s a new\nfeature in 7.4.x and rather undocumented. http://www.postgresql.org/docs/current/interactive/sql-createtrigger.html\n\"FOR EACH STATEMENT\" probably this can help you a bit.\n\nChristoph Nelles\n\n\n\n-- \nMit freundlichen Gr�ssen\nEvil Azrael mailto:[email protected]\n\n",
"msg_date": "Thu, 15 Jan 2004 18:08:11 +0100",
"msg_from": "Evil Azrael <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
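A bare sketch of the FOR EACH STATEMENT syntax being referred to, with hypothetical names; note that in 7.4 a statement-level trigger fires once per statement but gives no access to the affected rows, so it only helps if the per-row work can be replaced by something done once:

CREATE OR REPLACE FUNCTION after_bulk_update() RETURNS trigger AS '
BEGIN
    -- runs once per UPDATE statement, not once per modified row
    RAISE NOTICE ''bulk update on my_table finished'';
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER my_table_stmt AFTER UPDATE ON my_table
    FOR EACH STATEMENT EXECUTE PROCEDURE after_bulk_update();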
{
"msg_contents": "> Exists in pg any way to define the trigger execution only if I have\n> changes on some fields?\n\nNo, but you chould check for those fields and return if no changes have been\nmade. Depending on how intensive the trigger is, this might help. You may\nalso want to look at statement-level triggers or conditional rules.\n\nBest Wishes,\nChris Travers\n\n\n",
"msg_date": "Fri, 16 Jan 2004 17:19:24 +0700",
"msg_from": "\"Chris Travers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
{
"msg_contents": "In article <[email protected]>,\nEvil Azrael <[email protected]> writes:\n\n> That not, but perhaps a STATEMENT Level Trigger could help. It�s a new\n> feature in 7.4.x and rather undocumented. http://www.postgresql.org/docs/current/interactive/sql-createtrigger.html\n> \"FOR EACH STATEMENT\" probably this can help you a bit.\n\nDoes anyone know how to access the affected values for statement-level\ntriggers? I mean what the \"old\" and \"new\" pseudo-records are for\nrow-level triggers.\n\n",
"msg_date": "16 Jan 2004 14:05:20 +0100",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
{
"msg_contents": "Harald Fuchs <[email protected]> writes:\n> Does anyone know how to access the affected values for\n> statement-level triggers? I mean what the \"old\" and \"new\"\n> pseudo-records are for row-level triggers.\n\nYeah, I didn't get around to implementing that. If anyone wants this\nfeature, I'd encourage them to step up to the plate -- I'm not sure\nwhen I'll get the opportunity/motivation to implement this myself.\n\n-Neil\n\n",
"msg_date": "Mon, 19 Jan 2004 19:01:01 -0500",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
{
"msg_contents": "On Tuesday 20 January 2004 00:01, Neil Conway wrote:\n> Harald Fuchs <[email protected]> writes:\n> > Does anyone know how to access the affected values for\n> > statement-level triggers? I mean what the \"old\" and \"new\"\n> > pseudo-records are for row-level triggers.\n>\n> Yeah, I didn't get around to implementing that. If anyone wants this\n> feature, I'd encourage them to step up to the plate -- I'm not sure\n> when I'll get the opportunity/motivation to implement this myself.\n\nI didn't think they'd be meaningful for a statement-level trigger. Surely \nOLD/NEW are by definition row-level details.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 20 Jan 2004 08:13:56 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> I didn't think they'd be meaningful for a statement-level\n> trigger. Surely OLD/NEW are by definition row-level details.\n\nGranted; the feature in question is *some* means of accessing the\nresult set of a statement-level trigger -- it probably would not use\nthe same UI (i.e. OLD/NEW) that row-level triggers use.\n\n-Neil\n\n",
"msg_date": "Tue, 20 Jan 2004 10:17:24 -0500",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> On Tuesday 20 January 2004 00:01, Neil Conway wrote:\n>> Yeah, I didn't get around to implementing that. If anyone wants this\n>> feature, I'd encourage them to step up to the plate -- I'm not sure\n>> when I'll get the opportunity/motivation to implement this myself.\n\n> I didn't think they'd be meaningful for a statement-level trigger. Surely \n> OLD/NEW are by definition row-level details.\n\nAccording to the complainants, OLD/NEW are commonly available as\nrecordsets (tables) inside a statement trigger. I'm not very clear on\nhow that works myself --- in particular, one would think it important to\nbe able to work with corresponding pairs of OLD and NEW rows, which\nwould be painful with a table-like abstraction. Can anyone explain\nexactly how it's done in, say, Oracle?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jan 2004 11:02:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question "
},
{
"msg_contents": "In article <[email protected]>,\nTom Lane <[email protected]> writes:\n\n> Richard Huxton <[email protected]> writes:\n>> On Tuesday 20 January 2004 00:01, Neil Conway wrote:\n>>> Yeah, I didn't get around to implementing that. If anyone wants this\n>>> feature, I'd encourage them to step up to the plate -- I'm not sure\n>>> when I'll get the opportunity/motivation to implement this myself.\n\n>> I didn't think they'd be meaningful for a statement-level trigger. Surely \n>> OLD/NEW are by definition row-level details.\n\n> According to the complainants, OLD/NEW are commonly available as\n> recordsets (tables) inside a statement trigger.\n\nYes.\n\n> I'm not very clear on\n> how that works myself --- in particular, one would think it important to\n> be able to work with corresponding pairs of OLD and NEW rows, which\n> would be painful with a table-like abstraction.\n\nWhy? If the underlying table has a primary key, finding corresponding\npairs is trivial; if there isn't, it's impossible.\n\n",
"msg_date": "20 Jan 2004 17:15:31 +0100",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
{
"msg_contents": "On Tue, 20 Jan 2004, Harald Fuchs wrote:\n\n> In article <[email protected]>,\n> Tom Lane <[email protected]> writes:\n>\n> > Richard Huxton <[email protected]> writes:\n> >> On Tuesday 20 January 2004 00:01, Neil Conway wrote:\n> >>> Yeah, I didn't get around to implementing that. If anyone wants this\n> >>> feature, I'd encourage them to step up to the plate -- I'm not sure\n> >>> when I'll get the opportunity/motivation to implement this myself.\n>\n> >> I didn't think they'd be meaningful for a statement-level trigger. Surely\n> >> OLD/NEW are by definition row-level details.\n>\n> > According to the complainants, OLD/NEW are commonly available as\n> > recordsets (tables) inside a statement trigger.\n>\n> Yes.\n>\n> > I'm not very clear on\n> > how that works myself --- in particular, one would think it important to\n> > be able to work with corresponding pairs of OLD and NEW rows, which\n> > would be painful with a table-like abstraction.\n>\n> Why? If the underlying table has a primary key, finding corresponding\n> pairs is trivial; if there isn't, it's impossible.\n\nI don't think that's sufficient unless you can guarantee that the primary\nkey values never change for any reason that causes the trigger to try to\ncorrespond them.\n",
"msg_date": "Tue, 20 Jan 2004 08:30:29 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
{
"msg_contents": "Harald Fuchs <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> I'm not very clear on\n>> how that works myself --- in particular, one would think it important to\n>> be able to work with corresponding pairs of OLD and NEW rows, which\n>> would be painful with a table-like abstraction.\n\n> Why? If the underlying table has a primary key, finding corresponding\n> pairs is trivial; if there isn't, it's impossible.\n\nExactly. Nonetheless, the correspondence exists --- the UPDATE\ndefinitely updated some particular row of the OLD set into some\nparticular one of the NEW set. If the trigger API makes it impossible\nto reconstruct the matchup, the API is broken.\n\nEven if there is a primary key, the API should not force you to rely\non that; what of an UPDATE that changes the primary key?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 20 Jan 2004 11:42:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question "
},
{
"msg_contents": "On Tuesday 20 January 2004 16:42, Tom Lane wrote:\n> Harald Fuchs <[email protected]> writes:\n> > Why? If the underlying table has a primary key, finding corresponding\n> > pairs is trivial; if there isn't, it's impossible.\n>\n> Exactly. Nonetheless, the correspondence exists --- the UPDATE\n> definitely updated some particular row of the OLD set into some\n> particular one of the NEW set. If the trigger API makes it impossible\n> to reconstruct the matchup, the API is broken.\n\nPerhaps they should be cursors? The only sensible way I can think of working \nwith them would be:\n1. count how many rows affected\n2. step through one row at a time, doing something.\n\nI suppose there might be cases where you'd want to GROUP BY... which would \nmean you'd need some oid/row-id added to a \"real\" recordset.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 20 Jan 2004 19:05:46 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
{
"msg_contents": "In article <[email protected]>,\nRichard Huxton <[email protected]> writes:\n\n> On Tuesday 20 January 2004 16:42, Tom Lane wrote:\n>> Harald Fuchs <[email protected]> writes:\n>> > Why? If the underlying table has a primary key, finding corresponding\n>> > pairs is trivial; if there isn't, it's impossible.\n>> \n>> Exactly. Nonetheless, the correspondence exists --- the UPDATE\n>> definitely updated some particular row of the OLD set into some\n>> particular one of the NEW set. If the trigger API makes it impossible\n>> to reconstruct the matchup, the API is broken.\n\nI would not say so. You could use tables without primary keys, and\nyou could define statement-level triggers on them, but you could not\nidentify a particular row in this very special and probably rare case.\n\n> Perhaps they should be cursors? The only sensible way I can think of working \n> with them would be:\n> 1. count how many rows affected\n> 2. step through one row at a time, doing something.\n\nWhen I read about the \"insert\" and \"delete\" pseudotables in a book\nabout Transact-SQL, i was enthusiastic about the elegance of this\nidea: you're operating on multiple (perhaps lots of) rows, and the SQL\nway of doing that is by set-operations, i.e. single operations\naffecting a set of rows. Pseudotables extend this idea nicely into\nthe area of statement-level triggers. Your cursor idea doesn't look\nvery SQL-like to me.\n\nWe really should find an Oracle/DB2/Informix guy who can tell us how\nto get that right.\n\n",
"msg_date": "21 Jan 2004 14:50:26 +0100",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
},
{
"msg_contents": "On Wed, 21 Jan 2004, Harald Fuchs wrote:\n\n> In article <[email protected]>,\n> Richard Huxton <[email protected]> writes:\n>\n> > On Tuesday 20 January 2004 16:42, Tom Lane wrote:\n> >> Harald Fuchs <[email protected]> writes:\n> >> > Why? If the underlying table has a primary key, finding corresponding\n> >> > pairs is trivial; if there isn't, it's impossible.\n> >>\n> >> Exactly. Nonetheless, the correspondence exists --- the UPDATE\n> >> definitely updated some particular row of the OLD set into some\n> >> particular one of the NEW set. If the trigger API makes it impossible\n> >> to reconstruct the matchup, the API is broken.\n>\n> I would not say so. You could use tables without primary keys, and\n> you could define statement-level triggers on them, but you could not\n> identify a particular row in this very special and probably rare case.\n\nA technique that requires matching of primary key values also undercuts\nits usefulness for at least some types of triggers. For example, ON UPDATE\nreferential actions need three pieces of information, the starting state,\nthe end state and the mapping between those states. AFAICS you cannot\nfake the last by trying to map primary key values and still implement the\nconstraint correctly. Even if 99.9% of the time the primary key value\ndoesn't change, you can't safly implement the triggers this way. Other\ntriggers of the same sort may run into the same problems.\n\n> > Perhaps they should be cursors? The only sensible way I can think of working\n> > with them would be:\n> > 1. count how many rows affected\n> > 2. step through one row at a time, doing something.\n>\n> When I read about the \"insert\" and \"delete\" pseudotables in a book\n> about Transact-SQL, i was enthusiastic about the elegance of this\n> idea: you're operating on multiple (perhaps lots of) rows, and the SQL\n> way of doing that is by set-operations, i.e. single operations\n> affecting a set of rows. Pseudotables extend this idea nicely into\n> the area of statement-level triggers. Your cursor idea doesn't look\n> very SQL-like to me.\n>\n> We really should find an Oracle/DB2/Informix guy who can tell us how\n> to get that right.\n\nIt wouldn't surprise me if there was an internal key (or row number)\nthat could be used to match the rows between the old and new.\n",
"msg_date": "Wed, 21 Jan 2004 08:45:00 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger question"
}
] |
[
{
"msg_contents": "I've read most of the threads on insert speed in this list and wanted \nto share some interesting observations and a question.\n\nWe've been benchmarking some dbs to implement Bayesian processing on an \nemail server. This involves frequent insert and updates to the \nfollowing table:\n\ncreate table bayes_token (\nusername varchar(200) not null default '',\ntoken varchar(200) not null default '',\nspam_count integer not null default 0,\nham_count integer not null default 0,\natime integer not null default 0,\nprimary key (username, token));\n\nOn a variety of hardware with Redhat, and versions of postgres, we're \nnot getting much better than 50 inserts per second. This is prior to \nmoving WAL to another disk, and fsync is on.\n\nHowever, with postgres 7.4 on Mac OSX 10.2.3, we're getting an amazing \n500 inserts per second.\n\nWe can only put this down to the OS.\n\nCan anyone shed light on why Redhat appears to be so much poorer than \nMac OS X in supporting postgres insert transactions? Or why MacOS \nappears to be so much better?\n\nBTW, on the same hardware that postgres is running on to get 50 inserts \nper sec, MySQL (4.0.17) is getting an almost unbelievable 5,500 inserts \nper second.\n\n-SL\n\n",
"msg_date": "Fri, 16 Jan 2004 00:56:36 +1100",
"msg_from": "Syd <[email protected]>",
"msg_from_op": true,
"msg_subject": "insert speed - Mac OSX vs Redhat"
},
{
"msg_contents": "> On a variety of hardware with Redhat, and versions of postgres, we're\n> not getting much better than 50 inserts per second. This is prior to\n> moving WAL to another disk, and fsync is on.\n>\n> However, with postgres 7.4 on Mac OSX 10.2.3, we're getting an amazing\n> 500 inserts per second.\n>\n> We can only put this down to the OS.\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nYou haven't really produced much evidence to support that statement. Given that the differences in performance between Postgres\nrunning on *BSD and Linux on Intel hardware are not large at all, it seems to be almost certainly false in fact.\n\nIt may of course be due to some settings of the different OSes, but not the OSes themselves.\n\nIt would help if you gave a straight PG7.4 comparison with hardware specs as well, and config file differences if any.\n\nOne thought: assuming the Apple has IDE disks, then the disks probably have write caching turned on, which is good for speed, but\nnot crash-safe.\n\nmatt\n\n\n\n\n",
"msg_date": "Thu, 15 Jan 2004 14:17:24 -0000",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert speed - Mac OSX vs Redhat"
},
{
"msg_contents": "Syd <[email protected]> writes:\n> However, with postgres 7.4 on Mac OSX 10.2.3, we're getting an amazing \n> 500 inserts per second.\n\n> We can only put this down to the OS.\n\nAs noted elsewhere, it's highly likely that this has nothing to do with\nthe OS, and everything to do with write caching in the disks being used.\n\nI assume you are benchmarking small individual transactions (one insert\nper xact). In such scenarios it's essentially impossible to commit more\nthan one transaction per revolution of the WAL disk, because you have to\nwrite the same WAL disk page repeatedly and wait for it to get down to\nthe platter. When you get results that are markedly in excess of the\ndisk RPM figure, it's proof positive that the disk is lying about write\ncomplete (or that you don't have fsync on).\n\nThe only way to get better performance and still have genuine ACID\nbehavior is to gang multiple insertions per WAL write. You can do\nmultiple insertions per transaction, or if you are doing several\ninsertion transactions in parallel, you can try to commit them all in\none write (experiment with the commit_delay and commit_siblings\nparameters).\n\n> BTW, on the same hardware that postgres is running on to get 50 inserts \n> per sec, MySQL (4.0.17) is getting an almost unbelievable 5,500 inserts \n> per second.\n\nI'll bet a good lunch that MySQL is not being ACID compliant in this\ntest. Are you using a transaction-safe table type (InnoDB) and\ncommitting after every insert?\n\nIf you don't in fact care about ACID safety, turn off fsync in Postgres\nso that you have an apples-to-apples comparison (or at least\napples-to-oranges rather than apples-to-cannonballs).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jan 2004 10:44:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: insert speed - Mac OSX vs Redhat "
},
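Two hedged ways of acting on that advice, using the bayes_token table from the original post; the values shown are only starting points for experimentation:

-- gang many insertions into one transaction instead of one insert per commit
BEGIN;
INSERT INTO bayes_token (username, token, spam_count) VALUES ('user1', 'token1', 1);
INSERT INTO bayes_token (username, token, spam_count) VALUES ('user1', 'token2', 1);
-- ... several hundred more ...
COMMIT;

-- or, with many clients committing in parallel, let commits share a WAL flush
SET commit_delay = 1000;        -- microseconds
SET commit_siblings = 5;

Either approach reduces the number of WAL flushes per row while keeping fsync on, so the comparison with write-caching setups stays apples-to-apples.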
{
"msg_contents": "\nOn 16/01/2004, at 2:44 AM, Tom Lane wrote:\n...\n> As noted elsewhere, it's highly likely that this has nothing to do with\n> the OS, and everything to do with write caching in the disks being \n> used.\n>\n> I assume you are benchmarking small individual transactions (one insert\n> per xact). In such scenarios it's essentially impossible to commit \n> more\n> than one transaction per revolution of the WAL disk, because you have \n> to\n> write the same WAL disk page repeatedly and wait for it to get down to\n> the platter. When you get results that are markedly in excess of the\n> disk RPM figure, it's proof positive that the disk is lying about write\n> complete (or that you don't have fsync on).\n>\n\nTom, thanks for this explanation - we'll check this out straight away, \nbut it would explain a lot.\n\n",
"msg_date": "Fri, 16 Jan 2004 08:17:32 +1100",
"msg_from": "Syd <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: insert speed - Mac OSX vs Redhat "
},
{
"msg_contents": "We've found these tools\nhttp://scsirastools.sourceforge.net/ and\nhttp://www.seagate.com/support/seatools/ (for seagate drives)\nto check the settings of scsi disks and to change settings for seagate \ndrives.\n\nWhat are people using for IDE disks?\nAre you all using hdparm on linux\nhttp://freshmeat.net/projects/hdparm/?topic_id=146%2C861\n\nor are there other tools?\n\n",
"msg_date": "Fri, 16 Jan 2004 13:29:24 +1100",
"msg_from": "Syd <[email protected]>",
"msg_from_op": true,
"msg_subject": "IDE/SCSI disk tools to turn off write caching"
}
] |
[
{
"msg_contents": "\nAlmoust identical querys are having very different exec speed (Postgresql\n7.2.4).\n\nquery: select \"NP_ID\" from a WHERE \"NP_ID\" > '0'\nIndex Scan using NP_ID_a on a (cost=0.00..13.01 rows=112 width=4) (actual\ntime=16.89..18.11 rows=93 loops=1)\nTotal runtime: 18.32 msec\n-------------------------------------------------\nquery: select \"NP_ID\" from a WHERE \"NP_ID\" > '1'\nIndex Scan using NP_ID_a on a (cost=0.00..13.01 rows=112 width=4) (actual\ntime=0.08..1.36 rows=93 loops=1)\nTotal runtime: 1.56 msec\n\n From where such difference comes?\n\nThere are about 37K rows and only about 100 of then are not \"NP_ID\" = 0\n\nFor a workaround i use WHERE \"NP_ID\" >= '1' and if works as speedy as '> 1'\n\nRigmor Ukuhe\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.560 / Virus Database: 352 - Release Date: 08.01.2004\n\n",
"msg_date": "Thu, 15 Jan 2004 18:02:52 +0200",
"msg_from": "\"Rigmor Ukuhe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Weird query speed "
},
{
"msg_contents": "\"Rigmor Ukuhe\" <[email protected]> writes:\n> query: select \"NP_ID\" from a WHERE \"NP_ID\" > '0' [is slow]\n>\n> query: select \"NP_ID\" from a WHERE \"NP_ID\" > '1' [is fast]\n>\n> There are about 37K rows and only about 100 of then are not \"NP_ID\" = 0\n\nYeah, it's scanning over all the zero values when you say \"> 0\" :-(\n\nThis is fixed for 7.5:\n\n2003-12-20 20:23 tgl\n\n\t* src/: backend/access/nbtree/nbtinsert.c,\n\tbackend/access/nbtree/nbtpage.c, backend/access/nbtree/nbtsearch.c,\n\tinclude/access/nbtree.h: Improve btree's\n\tinitial-positioning-strategy code so that we never need to step\n\tmore than one entry after descending the search tree to arrive at\n\tthe correct place to start the scan. This can improve the behavior\n\tsubstantially when there are many entries equal to the chosen\n\tboundary value. Per suggestion from Dmitry Tkach, 14-Jul-03.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jan 2004 11:40:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Weird query speed "
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI recently upgraded from 7.3.4 to 7.4. Besides the PostGreSQL change I \nupdated my schema to pull out all OIDs and 'set storage external'. \n\nWith PostGreSQL V7.3.4, a completely reindexed and vacuumed(not-full) foot \nprint was 5.2 gig. Now some of the disk space(probably 10-15 % was waiting \nin the FSM to be used). However, 7.4 comes up at 2.3 gig. \n\nDon't get me wrong, I'm not complaining. I was under the impression that text \nwas compressed by default and that changing from compressed to \n'non-compressed' would result in a larger foot print. Is there something I \nam missing?\n\n- -- \nJeremy M. Guthrie\nSystems Engineer\nBerbee\n5520 Research Park Dr.\nMadison, WI 53711\nPhone: 608-298-1061\n\nBerbee...Decade 1. 1993-2003\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.3 (GNU/Linux)\n\niD8DBQFABsHVqtjaBHGZBeURAp3tAKCTe9SjnPEU5V25KnxWZHD4ulXIXwCfd45O\nCf9lRBReCjv9fKyglR+dSM8=\n=GpkI\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Thu, 15 Jan 2004 10:37:41 -0600",
"msg_from": "\"Jeremy M. Guthrie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about space usage:"
},
{
"msg_contents": "\"Jeremy M. Guthrie\" <[email protected]> writes:\n> With PostGreSQL V7.3.4, a completely reindexed and vacuumed(not-full) foot \n> print was 5.2 gig. Now some of the disk space(probably 10-15 % was waiting \n> in the FSM to be used). However, 7.4 comes up at 2.3 gig. \n\nFirst thought is that your 7.3 DB was suffering from index bloat (I know\nyou said you reindexed, but did you get everything including system\nindexes?). The only material space savings in 7.4 over 7.3 as far as\nraw data size goes is Manfred's row header size reduction (8 bytes per\nrow), which wouldn't account for that kind of difference unless you had\na huge number of very narrow rows ...\n\n> Besides the PostGreSQL change I updated my schema to pull out all OIDs\n> and 'set storage external'.\n\nRemoving OIDs would save another 4 or 8 bytes per row (depending on what\nhardware you're on).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jan 2004 14:26:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about space usage: "
}
] |
[
{
"msg_contents": "Gurus,\n \nI have defined the following values on a db:\n \nshared_buffers = 10240 # 10240 = 80MB \nmax_connections = 100\nsort_mem = 1024 # 1024KB is 1MB per operation\neffective_cache_size = 262144 # equals to 2GB for 8k pages\n \nRest of the values are unchanged from default.\n \n \nThe poweredge 2650 machine has 4GB RAM, and the size of the database\n(size of 'data' folder) is about 5GB. PG is 7.4, RH9.\n \nThe machine has been getting quite busy (when, say, 50 students login at\nthe same time, when others have logged in already) and is maxing out at\n100 connections (will increase this tonight probably to 200). We have\nbeen getting \"too many clients\" message upon trying to connect. Once\nconnected, the pgmonitor, and the 'pg_stat_activity' show connections\nreaching about 100.\n \nThere's a series of SELECT and UPDATE statements that get called for\nwhen a group of users log in simultaneously...and for some reason, many\nof them stay there for a while...\n \nDuring that time, if i do a 'top', i can see multiple postmaster\nprocesses, each about 87MB in size. The Memory utilization drops down to\nabout 30MB free, and i can see a little bit of swap utilization in\nvmstat then.\n \nQuestion is, does the 80MB buffer allocation correspond to ~87MB per\npostmaster instance? (with about 100 instances of postmaster, that will\nbe about 100 x 80MB = 8GB??)\n \nShould i decrease the buffer value to about 50MB and monitor?\n \nInterestingly, at one point, we vacuumed the database, and the size\nreported by 'df -k' on the pgsql slice dropped very\nsignificantly...guess, it had been using a lot of temp files?\n \nFurther steps will be to add more memory, and possibly drop/recreate a\ncouple of indexes that are used in the UPDATE statements.\n \n \nThanks in advance for any inputs.\n-Anjan\n \n************************************************************************\n** \n\nThis e-mail and any files transmitted with it are intended for the use\nof the addressee(s) only and may be confidential and covered by the\nattorney/client and other privileges. If you received this e-mail in\nerror, please notify the sender; do not disclose, copy, distribute, or\ntake any action in reliance on the contents of this information; and\ndelete it from your system. Any other use of this e-mail is prohibited.\n\n \n\nMessage\n\n\n\nGurus,\n \nI have defined the \nfollowing values on a db:\n \nshared_buffers = \n10240 # 10240 = 80MB \n\nmax_connections = 100\nsort_mem = \n1024 \n# 1024KB is 1MB per operation\neffective_cache_size = 262144 # equals \nto 2GB for 8k pages\n \nRest of the values \nare unchanged from default.\n \n \nThe poweredge 2650 \nmachine has 4GB RAM, and the size of the database (size of 'data' folder) is \nabout 5GB. PG is 7.4, RH9.\n \nThe machine has been \ngetting quite busy (when, say, 50 students login at the same time, when others \nhave logged in already) and is maxing out at 100 connections (will increase this \ntonight probably to 200). We have been getting \"too many clients\" message upon \ntrying to connect. Once connected, the pgmonitor, and the 'pg_stat_activity' \nshow connections reaching about 100.\n \nThere's a series of \nSELECT and UPDATE statements that get called for when a group of users log in \nsimultaneously...and for some reason, many of them stay there for a \nwhile...\n \nDuring that time, if \ni do a 'top', i can see multiple postmaster processes, each about 87MB in size. 
\nThe Memory utilization drops down to about 30MB free, and i can see a little bit \nof swap utilization in vmstat then.\n \nQuestion is, does \nthe 80MB buffer allocation correspond to ~87MB per postmaster instance? (with \nabout 100 instances of postmaster, that will be about 100 x 80MB = \n8GB??)\n \nShould i decrease \nthe buffer value to about 50MB and monitor?\n \nInterestingly, at \none point, we vacuumed the database, and the size reported by 'df -k' on the \npgsql slice dropped very significantly...guess, it had been using a lot of temp \nfiles?\n \nFurther steps will \nbe to add more memory, and possibly drop/recreate a couple of indexes that are \nused in the UPDATE statements.\n \n \nThanks in advance \nfor any inputs.\n-Anjan\n \n************************************************************************** \n\n\nThis e-mail and any files transmitted with it are intended for the use of the \naddressee(s) only and may be confidential and covered by the attorney/client and \nother privileges. If you received this e-mail in error, please notify the \nsender; do not disclose, copy, distribute, or take any action in reliance on the \ncontents of this information; and delete it from your system. Any other use of \nthis e-mail is prohibited.",
"msg_date": "Thu, 15 Jan 2004 17:49:28 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared_buffer value"
},
{
"msg_contents": "On Thursday 15 January 2004 22:49, Anjan Dave wrote:\n> Gurus,\n>\n> I have defined the following values on a db:\n>\n> shared_buffers = 10240 # 10240 = 80MB\n> max_connections = 100\n> sort_mem = 1024 # 1024KB is 1MB per operation\n> effective_cache_size = 262144 # equals to 2GB for 8k pages\n>\n> Rest of the values are unchanged from default.\n>\n>\n> The poweredge 2650 machine has 4GB RAM, and the size of the database\n> (size of 'data' folder) is about 5GB. PG is 7.4, RH9.\n\nOK - settings don't look unreasonable so far.\n\n> The machine has been getting quite busy (when, say, 50 students login at\n> the same time, when others have logged in already) and is maxing out at\n> 100 connections (will increase this tonight probably to 200). We have\n> been getting \"too many clients\" message upon trying to connect. Once\n> connected, the pgmonitor, and the 'pg_stat_activity' show connections\n> reaching about 100.\n>\n> There's a series of SELECT and UPDATE statements that get called for\n> when a group of users log in simultaneously...and for some reason, many\n> of them stay there for a while...\n>\n> During that time, if i do a 'top', i can see multiple postmaster\n> processes, each about 87MB in size. The Memory utilization drops down to\n> about 30MB free, and i can see a little bit of swap utilization in\n> vmstat then.\n\nOn linux you'll see three values: SIZE, RSS and SHARE. SIZE is what you're \nlooking at, RSS is resident set size (it's in main memory) and SHARE is how \nmuch is shared with other processes. So - 3 processes each with RSS=15MB, \nSIZE=10MB take up 10+5+5+5 = 25MB.\nDon't worry about a tiny bit of swap - how is your buff/cache doing then?\n\n> Should i decrease the buffer value to about 50MB and monitor?\n\nThat shared_buffer is between all backends. The sort_mem however, is *per \nsort*, not even per backend. So - if a complicated query uses four sorts you \ncould use 4MB in one backend.\n\n> Interestingly, at one point, we vacuumed the database, and the size\n> reported by 'df -k' on the pgsql slice dropped very\n> significantly...guess, it had been using a lot of temp files?\n\nYou need to run VACUUM regularly to reclaim unused space. Since you're on 7.4, \ntake a look at the pg_autovacuum utility, or start by running VACUUM ANALYZE \nfrom a cron job every evening. Perhaps a VACUUM FULL at weekends?\n\n> Further steps will be to add more memory, and possibly drop/recreate a\n> couple of indexes that are used in the UPDATE statements.\n\nA REINDEX might be worthwhile. Details on this and VACUUM in the manuals.\n\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 16 Jan 2004 00:17:42 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffer value"
},
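A sketch of the routine maintenance being suggested, e.g. run from a nightly cron job via psql; the REINDEX target is a placeholder for whichever index the UPDATE statements rely on:

-- nightly, while the system is quiet
VACUUM ANALYZE;

-- occasionally, if the database has been allowed to bloat
VACUUM FULL ANALYZE;
REINDEX TABLE mytable;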
{
"msg_contents": "\"Anjan Dave\" <[email protected]> writes:\n> Question is, does the 80MB buffer allocation correspond to ~87MB per\n> postmaster instance? (with about 100 instances of postmaster, that will\n> be about 100 x 80MB =3D 8GB??)\n\nMost likely, top is counting some portion of the shared memory block\nagainst each backend process. This behavior is platform-specific,\nhowever, and you did not tell us what platform you're on.\n\n> Interestingly, at one point, we vacuumed the database, and the size\n> reported by 'df -k' on the pgsql slice dropped very\n> significantly...guess, it had been using a lot of temp files?\n\n\"At one point\"? If your setup doesn't include *routine* vacuuming,\nyou are going to have problems with file bloat. This isn't something\nyou can do just when you happen to remember it --- it needs to be driven\noff a cron job or some such. Or use the contrib autovacuum daemon.\nYou want to vacuum often enough to keep the database size more or less\nconstant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jan 2004 19:51:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffer value "
}
] |
[
{
"msg_contents": "Sorry I wasn't clear. We do have nightly vacuum crons defined on all pg\nservers. Apparently, this one had been taking many hours to finish\nrecently, and we did an additional vacuum during day time when there was\nlow volume, which finished quickly.\n\nThe platform I mentioned is RedHat 9, PG7.4, on Dell PowerEdge2650.\n\nHere's the output of 'top' below, taken just now. I can capture the\nstats during peak time in the afternoon also:\n\n68 processes: 67 sleeping, 1 running, 0 zombie, 0 stopped\nCPU0 states: 3.1% user 4.4% system 0.0% nice 0.0% iowait 92.0%\nidle\nCPU1 states: 0.0% user 3.2% system 0.0% nice 0.0% iowait 96.3%\nidle\nCPU2 states: 0.4% user 0.3% system 0.0% nice 0.0% iowait 98.3%\nidle\nCPU3 states: 0.3% user 1.0% system 0.0% nice 0.0% iowait 98.2%\nidle\nMem: 3874188k av, 3622296k used, 251892k free, 0k shrd, 322372k\nbuff\n 2369836k actv, 454984k in_d, 44568k in_c\nSwap: 4096532k av, 24552k used, 4071980k free 2993384k\ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU\nCOMMAND\n 4258 postgres 16 0 88180 86M 85796 S 2.1 2.2 14:55 0\npostmaster\n 5260 postgres 15 0 85844 83M 84704 S 0.0 2.2 2:51 1\npostmaster\n14068 root 23 0 69240 67M 2164 S 3.9 1.7 59:44 2 wish\n 3157 postgres 15 0 50364 49M 48484 S 0.0 1.2 0:02 3\npostmaster\n 2174 postgres 15 0 50196 48M 48380 S 0.1 1.2 0:00 0\npostmaster\n 3228 postgres 15 0 49292 48M 47536 S 0.0 1.2 0:00 3\npostmaster\n 3050 postgres 15 0 49184 47M 47364 S 0.5 1.2 0:00 2\npostmaster\n 2725 postgres 15 0 7788 7688 6248 S 0.0 0.1 0:00 3\npostmaster\n 3600 postgres 16 0 5812 5700 4784 S 0.0 0.1 0:00 3\npostmaster\n 1342 gdm 15 0 12988 5560 2056 S 0.0 0.1 19:36 3\ngdmgreeter\n\nAccording to top's man, \nRSS is: The total amount of physical memory used by the task, in\nkilo-bytes...\nSHARE is: The amount of shared memory used by the task is shown...\nSIZE is: The size of the task's code plus data plus stack space, in\nkilo-bytes...\n\nI am not sure how do I calculate whether 80MB shared_buffer (in\npostgresql.conf)should be increased or decreased from the above values,\nbecause during higher loads, the number of postmaster instances go up to\n100 (limited by max connections), each at an RSS of about 87MB...\n\nThanks,\nanjan\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Thursday, January 15, 2004 7:52 PM\nTo: Anjan Dave\nCc: [email protected]\nSubject: Re: [PERFORM] shared_buffer value \n\n\n\"Anjan Dave\" <[email protected]> writes:\n> Question is, does the 80MB buffer allocation correspond to ~87MB per \n> postmaster instance? (with about 100 instances of postmaster, that \n> will be about 100 x 80MB =3D 8GB??)\n\nMost likely, top is counting some portion of the shared memory block\nagainst each backend process. This behavior is platform-specific,\nhowever, and you did not tell us what platform you're on.\n\n> Interestingly, at one point, we vacuumed the database, and the size \n> reported by 'df -k' on the pgsql slice dropped very \n> significantly...guess, it had been using a lot of temp files?\n\n\"At one point\"? If your setup doesn't include *routine* vacuuming, you\nare going to have problems with file bloat. This isn't something you\ncan do just when you happen to remember it --- it needs to be driven off\na cron job or some such. Or use the contrib autovacuum daemon. You want\nto vacuum often enough to keep the database size more or less constant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 16 Jan 2004 10:29:38 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffer value "
},
{
"msg_contents": "On Fri, 16 Jan 2004, Anjan Dave wrote:\n\n> 68 processes: 67 sleeping, 1 running, 0 zombie, 0 stopped\n> CPU0 states: 3.1% user 4.4% system 0.0% nice 0.0% iowait 92.0%\n> idle\n> CPU1 states: 0.0% user 3.2% system 0.0% nice 0.0% iowait 96.3%\n> idle\n> CPU2 states: 0.4% user 0.3% system 0.0% nice 0.0% iowait 98.3%\n> idle\n> CPU3 states: 0.3% user 1.0% system 0.0% nice 0.0% iowait 98.2%\n> idle\n> Mem: 3874188k av, 3622296k used, 251892k free, 0k shrd, 322372k\n> buff\n> 2369836k actv, 454984k in_d, 44568k in_c\n> Swap: 4096532k av, 24552k used, 4071980k free 2993384k\n> cached\n\nNote that that machine has 2993384k of kernel cache. This means that \nafter all that it's doing, there's about 3 gigs of free memory, and the \nkernel is just using it to cache files. Should a process need that \nmemory, the kernel would free it right up.\n\nSo, you don't have to worry about setting the buffers too high in \npostgresql and running out of memory, you're not even close.\n\nI'd crank up sort mem to 4 or 8 meg or so, and the shared buffers to \nsomething a little higher, say 5000 to 10000 or so. Note that there is a \npoint of diminishing returns in postgresql where if you allocate too much \nbuffer memory, it gets slower than just letting the kernel do it.\n\n> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU\n> COMMAND\n> 4258 postgres 16 0 88180 86M 85796 S 2.1 2.2 14:55 0\n\nthis says that this process is using 88 meg or so of ram, and of that 88 \nmef or so, 84 meg is shared between it and the other postgres processes.\n\n> 5260 postgres 15 0 85844 83M 84704 S 0.0 2.2 2:51 1\n\nSame here. That means that this single process represents a delta of 1 \nmeg or so.\n\n> 3157 postgres 15 0 50364 49M 48484 S 0.0 1.2 0:02 3\n\nDelta is about 2 meg.\nand so forth. I.e. you're not using 50 to 80 megs per process, only 2 \nmegs or so, plus the 80 meg of shared memory.\n\n> I am not sure how do I calculate whether 80MB shared_buffer (in\n> postgresql.conf)should be increased or decreased from the above values,\n> because during higher loads, the number of postmaster instances go up to\n> 100 (limited by max connections), each at an RSS of about 87MB...\n\nGenerally, increase it until it doesn't make things go faster any more. \n80 meg is pretty small, especially for a machine with 4 gigs of ram. The \nupper limit is generally found to be around 256 Meg or so, and that's what \nwe use on our machine at work. Note this may make smaller queries slower, \nsince the overhead of maintaining a large buffer costs a bit, but it makes \nlarger queries faster, so it's a trade off.\n\n",
"msg_date": "Fri, 16 Jan 2004 10:20:24 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffer value "
}
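A rough sketch of trying the settings suggested above; the numbers are illustrative, sort_mem can be tested per session, and shared_buffers only changes after editing postgresql.conf and restarting the postmaster:

  SET sort_mem = 8192;      -- 8 MB per sort for this session (default is 1024 = 1 MB)
  SHOW shared_buffers;      -- confirm what the server is currently using

  -- In postgresql.conf (a restart is needed for shared_buffers):
  --   shared_buffers = 10000   -- roughly 78 MB at 8 KB pages
  --   sort_mem = 8192          -- diminishing returns are usually hit well before
  --                            -- 32768 buffers (256 MB)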
] |
[
{
"msg_contents": "Folks,\n\nWhile debugging a wireless card, I came across this interesting bit:\nhttp://portal.suse.com/sdb/en/2003/10/pohletz_desktop_90.html\n\nWhat it indicates is that by default SuSE 9.0 plays with the timeslice values \nfor the Linux kernel in order to provide a \"smoother\" user experience. In \nmy experience, this can be very bad news for databases under heavy multi-user \nload. \n\nI would suggest that anyone installing a SuSE 9.0 PostgreSQL server remove the \nDesktop pararmeter in the bootloader configuration.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 16 Jan 2004 12:37:49 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Potential Problem with PostgeSQL performance on SuSE Linux 9.0"
},
{
"msg_contents": "Along similar lines - have generally obtained better server performance \n(and stability) from most Linux distros after replacing their supplied \nkernel with one from kernel.org .\n\nregards\n\nMark\n\nJosh Berkus wrote:\n\n>Folks,\n>\n>While debugging a wireless card, I came across this interesting bit:\n>http://portal.suse.com/sdb/en/2003/10/pohletz_desktop_90.html\n>\n>What it indicates is that by default SuSE 9.0 plays with the timeslice values \n>for the Linux kernel in order to provide a \"smoother\" user experience. In \n>my experience, this can be very bad news for databases under heavy multi-user \n>load. \n>\n>I would suggest that anyone installing a SuSE 9.0 PostgreSQL server remove the \n>Desktop pararmeter in the bootloader configuration.\n>\n> \n>\n\n",
"msg_date": "Sat, 17 Jan 2004 11:11:33 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Potential Problem with PostgeSQL performance on SuSE"
},
{
"msg_contents": "Mark,\n\n> Along similar lines - have generally obtained better server performance \n> (and stability) from most Linux distros after replacing their supplied \n> kernel with one from kernel.org .\n\nHmmm.... any anecdotes about replacing Red Hat 2.4.18 to .24? I've been \nhaving problems I can't track down on that platform ...\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 16 Jan 2004 17:13:55 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Potential Problem with PostgeSQL performance on SuSE"
},
{
"msg_contents": "We use a server at work that is patched RH 7.3 / 2.4.18 with 2.4.21 (or \nthereabouts its < .24).\n\nWe have stability issues with Java 1.4 / Tomcat 4.1 but not Pg.\n\nIt might even be worth building yourself a vanilla 2.4.18 kernel and \nseeing if that makes any difference...\n\nregards\n\nMark\n\n\nJosh Berkus wrote:\n\n>Mark,\n>\n> \n>\n>>Along similar lines - have generally obtained better server performance \n>>(and stability) from most Linux distros after replacing their supplied \n>>kernel with one from kernel.org .\n>> \n>>\n>\n>Hmmm.... any anecdotes about replacing Red Hat 2.4.18 to .24? I've been \n>having problems I can't track down on that platform ...\n>\n> \n>\n\n",
"msg_date": "Sat, 17 Jan 2004 14:51:26 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Potential Problem with PostgeSQL performance on SuSE"
}
] |
[
{
"msg_contents": "Here's the top item from the \"top\" listing on my Postgres machine:\n\n 10:20am up 98 days, 18:28, 3 users, load average: 1.14, 1.09, 0.85\n103 processes: 101 sleeping, 1 running, 0 zombie, 1 stopped\nCPU0 states: 38.0% user, 13.1% system, 0.0% nice, 48.3% idle\nCPU1 states: 39.0% user, 11.3% system, 0.0% nice, 49.1% idle\nMem: 1033384K av, 1021972K used, 11412K free, 0K shrd, 42772K buff\nSwap: 1185080K av, 17012K used, 1168068K free 748736K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n20753 postgres 17 0 40292 39M 35960 S 60.9 3.8 26:54 postmaster\n\nProcess 20753 was hovering around 60-70% CPU usage for a long time.\n\nI tried to see what that postmaster was doing, but it appeared to be \"idle\"\n\n% ps -wwwx -p 20753\n PID TTY STAT TIME COMMAND\n20753 pts/3 R 26:59 postgres: postgres pdm 192.168.0.35 idle\n\n192.168.0.35 is one of our web servers. I restarted the web server (thus\nending all db connections from that machine) and the \"idle-but-CPU-hogging\"\npostmaster went away. I'm still concerned about exactly what it was doing,\nhowever. Any ideas? Any suggestions for further debugging the situation if\nit happens again?\n\nAlso, a second question while I'm here :) Ignoring that one CPU-hogging\npostmaster, does the header information in that \"top\" listing look\nreasonable in terms of memory usage? The \"free\" memory always hovers around\n11412K, but \"buff\" and \"cache\" are usually huge. I'm assuming \"buff\" and\n\"cache\" memory is available if any of the processes needs it, but there is\nalways some small amount of swap in use anyway. Should I be concerned, or\ndoes this look okay?\n\n-John\n\n",
"msg_date": "Sat, 17 Jan 2004 10:34:22 -0500",
"msg_from": "John Siracusa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Idle postmaster taking up a lot of CPU"
},
{
"msg_contents": "John Siracusa <[email protected]> writes:\n> Process 20753 was hovering around 60-70% CPU usage for a long time.\n\n> I tried to see what that postmaster was doing, but it appeared to be \"idle\"\n\n> % ps -wwwx -p 20753\n> PID TTY STAT TIME COMMAND\n> 20753 pts/3 R 26:59 postgres: postgres pdm 192.168.0.35 idle\n\nThat seems odd to me too. What PG version is this exactly?\n\nIf it happens again, please see if you can get a stack trace from the\nnot-so-idle process:\n\n\t$ gdb /path/to/postgres-executable\n\tgdb> attach PID-of-process\n\tgdb> bt\n\tgdb> quit\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jan 2004 12:32:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Idle postmaster taking up a lot of CPU "
},
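Alongside the gdb trace, the statistics views can show what the server thinks that backend is doing. A small sketch, assuming 7.4 column names and that stats_command_string is enabled in postgresql.conf:

  SELECT procpid, usename, current_query
    FROM pg_stat_activity
   WHERE procpid = 20753;   -- the PID of the busy "idle" backend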
{
"msg_contents": "On 1/17/04 12:32 PM, Tom Lane wrote:\n> John Siracusa <[email protected]> writes:\n>> Process 20753 was hovering around 60-70% CPU usage for a long time.\n> \n>> I tried to see what that postmaster was doing, but it appeared to be \"idle\"\n> \n>> % ps -wwwx -p 20753\n>> PID TTY STAT TIME COMMAND\n>> 20753 pts/3 R 26:59 postgres: postgres pdm 192.168.0.35 idle\n> \n> That seems odd to me too. What PG version is this exactly?\n\nThis is Postgres 7.4.0 on:\n\n% uname -a\nLinux foo.com 2.4.22 #5 SMP Fri Oct 10 16:32:42 EDT 2003 i686 i686 i386\nGNU/Linux\n\n> If it happens again, please see if you can get a stack trace from the\n> not-so-idle process:\n> \n> $ gdb /path/to/postgres-executable\n> gdb> attach PID-of-process\n> gdb> bt\n> gdb> quit\n\nWill do.\n\n-John\n\n",
"msg_date": "Sat, 17 Jan 2004 16:33:56 -0500",
"msg_from": "John Siracusa <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Idle postmaster taking up a lot of CPU "
}
] |
[
{
"msg_contents": "I'm running several postgreSQL databases containing tens of millions of\nrows. While postgreSQL has proven itself amazingly scalable, I am\nfinding it difficult to properly tune/profile these databases as they\ngrow. I am not aware of any tools to profile the database outside of\nheavy use of the \"EXPLAIN\" command and application space logging of\nquery times. I wanted to lay down some ideas I have for some powerful\ntools for dealing with this. Please let me know if these are redundant,\nnot needed or too complex to implement.\n\n1) Statistics kept on all the tables and indexes showing the number of\ntimes and the number of milliseconds spent sequential scanning / index\nscanning them. You could easily identify an index that's not being used\nor a table that's being sequential scanned too often. Also supply a\ndatabase command to reset the statistic gather to zero.\n\n2) Statics that show count/total times to insert into/updating/deleting\nfrom a table with time times spent in table and index separated so you\ncould see how much the indexes are effecting the write performance on\nthe table.\n\n3) The ability to view the current running time, plan and SQL text of\nall currently running queries.\n\nThank you for your time,\n\n___________________________________________________\nO R I O N H E N R Y\n\nChief Technical Officer, TrustCommerce\n959 East Colorado Blvd #1, Pasadena, CA 91106\n(626) 744-7700 x815, fax (626) 628-3431 [email protected]\nwww.trustcommerce.com",
"msg_date": "20 Jan 2004 11:47:09 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "database profiling"
},
{
"msg_contents": "> 1) Statistics kept on all the tables and indexes showing the number of\n> times and the number of milliseconds spent sequential scanning / index\n> scanning them. You could easily identify an index that's not being used\n> or a table that's being sequential scanned too often. Also supply a\n> database command to reset the statistic gather to zero.\n\nEnable the stats collector in your postgresql.conf, and then look at the \npg_stat_* views. They are your friend.\n\n> 2) Statics that show count/total times to insert into/updating/deleting\n> from a table with time times spent in table and index separated so you\n> could see how much the indexes are effecting the write performance on\n> the table.\n\nLots of that is in the pg_statio_* tables (if you have the collector \nrunning)\n\n> 3) The ability to view the current running time, plan and SQL text of\n> all currently running queries.\n\nThat's in the pg_stat_activity table.\n\nThere's also a new postgresql.conf setting in 7.4 that lets you log any \nquery that runs more than a certain number of milliseconds - which is \nvery useful for tracking down your slow queries.\n\nChris\n\n",
"msg_date": "Wed, 21 Jan 2004 08:24:55 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database profiling"
}
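For example, with the stats collector enabled in postgresql.conf (stats_start_collector plus the block- and row-level options), queries along these lines cover points 1 and 2 above; the views are the standard pg_stat ones:

  -- Tables that are sequentially scanned a lot, and how much data that reads:
  SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
    FROM pg_stat_user_tables
   ORDER BY seq_tup_read DESC;

  -- Indexes that have never been used:
  SELECT relname, indexrelname, idx_scan
    FROM pg_stat_user_indexes
   WHERE idx_scan = 0;

  -- Start a fresh measurement window:
  SELECT pg_stat_reset();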
] |
[
{
"msg_contents": "Hi,\n\nI must have missed something, because I get really slow performance from\nPG 7.3.4-RH. This is my situation: I have a table (jms_messages), which\ncontains 500.000 records. select count(*) from jms_messages is\nunderstandable quite slow. Then I delete all of the records and add 1000\nnew ones. I run vacuumdb on this database and run select count(*) from\njms_messages again. It takes 80 seconds!?! While before, with 1000\nrecords, it took only a fraction of a second.\n\nThanks for any tips,\n\nJeroen Baekelandt\n\n",
"msg_date": "Wed, 21 Jan 2004 10:34:10 +0100",
"msg_from": "Jeroen Baekelandt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Really slow even after vacuum"
},
{
"msg_contents": "On Wed, 21 Jan 2004, Jeroen Baekelandt wrote:\n\n> jms_messages again. It takes 80 seconds!?! While before, with 1000\n> records, it took only a fraction of a second.\n\nrun: VACUUM FULL ANALYZE;\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Wed, 21 Jan 2004 11:42:01 +0100 (CET)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Really slow even after vacuum"
}
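For a table that has had most of its rows deleted, plain VACUUM only marks the space reusable; VACUUM FULL actually shrinks the files. A short sketch against the table in question:

  VACUUM FULL ANALYZE jms_messages;   -- compact the table and refresh its statistics
  VACUUM VERBOSE jms_messages;        -- afterwards, shows how many dead row versions remain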
] |
[
{
"msg_contents": "I need to get 200 sets of the most recent data from a table for further \nprocessing, ordered by payDate. My\ncurrent solution is to use subselects to:\n1 - get a list of unique data\n2 - get the 200 most recent records (first 200 rows, sorted descending)\n3 - sort them in ascending order\n\nSELECT SSS.* FROM\n (SELECT SS.* FROM\n (SELECT DISTINCT ON (nonUniqField)\n first, second, third, cost, payDate, nonUniqField\n FROM histdata\n WHERE userID = 19048 AND cost IS NOT NULL\n )\n SS \n ORDER BY SS.payDate DESC LIMIT 200\n) SSS\nORDER BY payDate;\n\nMy question is in regards to steps 2 and 3 above. Is there some way that \nI can combine both steps into one to save some time?\n\nPostgreSQL 7.4beta2 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n\nThanks\nRon\n\n\n",
"msg_date": "Wed, 21 Jan 2004 09:18:18 -0800",
"msg_from": "Ron St-Pierre <[email protected]>",
"msg_from_op": true,
"msg_subject": "ORDER BY and LIMIT with SubSelects"
},
{
"msg_contents": "On Wed, Jan 21, 2004 at 09:18:18 -0800,\n Ron St-Pierre <[email protected]> wrote:\n> \n> My question is in regards to steps 2 and 3 above. Is there some way that \n> I can combine both steps into one to save some time?\n \nTIP 4: Don't 'kill -9' the postmaster\nSELECT SS.* FROM\n(SELECT DISTINCT ON (nonUniqField)\n first, second, third, cost, payDate, nonUniqField\n FROM histdata\n WHERE userID = 19048 AND cost IS NOT NULL \n ORDER BY nonUniqField, payDate DESC LIMIT 200\n)\nSS \nORDER BY payDate;\n",
"msg_date": "Wed, 21 Jan 2004 12:36:56 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY and LIMIT with SubSelects"
}
] |
[
{
"msg_contents": "Hi all,\n\n I'm quite newbie in SQL and I have a performance problem. I have the\nfollowing table (with some extra fields) and without any index:\n\n CREATE TABLE STATISTICS\n (\n STATISTIC_ID NUMERIC(10) NOT NULL DEFAULT\nnextval('STATISTIC_ID_SEQ')\n CONSTRAINT pk_st_statistic_id\nPRIMARY KEY,\n TIMESTAMP_IN TIMESTAMP,\n VALUE NUMERIC(10)\n );\n\n The queries on this table are mainly related with the timestamp field,\ne.g.:\n\n select * from statistics where time::date < current_date - interval\n'1 month';\n\n As the number of rows grows the time needed to execute this query takes\nlonger. What'd I should do improve the performance of this query?\n\nThank you very much\n\n--\nArnau\n\n\n",
"msg_date": "Wed, 21 Jan 2004 19:25:15 +0100",
"msg_from": "\"Arnau\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queries with timestamps"
},
{
"msg_contents": "Arnau,\n\n> As the number of rows grows the time needed to execute this query takes\n> longer. What'd I should do improve the performance of this query?\n\nTip #1) add an index to the timestamp column\nTip #2) make sure that you VACUUM and ANALYZE regularly\nTip #3) You will get better performance if you pass the \"current_date - 1 \nmonth\" as a constant from the client instead of in the query. This is a \nknown issue, expected to be fixed in 7.5.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 21 Jan 2004 11:06:06 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries with timestamps"
},
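A sketch of tips #1 and #2 against the table from the original post (the index name is illustrative); note the index is only usable when the WHERE clause compares the column directly rather than through a cast or function:

  CREATE INDEX i_statistics_tin ON statistics (timestamp_in);
  VACUUM ANALYZE statistics;

  -- Tip #3: have the client compute the cutoff and send it as a literal, e.g.:
  SELECT * FROM statistics WHERE timestamp_in < '2003-12-21 00:00:00';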
{
"msg_contents": "On Wednesday 21 January 2004 19:06, Josh Berkus wrote:\n> Arnau,\n>\n> > As the number of rows grows the time needed to execute this query\n> > takes longer. What'd I should do improve the performance of this query?\n>\n> Tip #1) add an index to the timestamp column\n> Tip #2) make sure that you VACUUM and ANALYZE regularly\n> Tip #3) You will get better performance if you pass the \"current_date - 1\n> month\" as a constant from the client instead of in the query. This is a\n> known issue, expected to be fixed in 7.5.\n\n(I think Tip 3 is already fixed in 7.3, or I misunderstand what Josh is \nsaying)\n\nNote that this is timestamp-related and not \"timestamp with time zone\" \nrelated. Most of the time you want the latter anyway. If you can use with \ntime zones and drop the cast you might well find the index is being used...\n\nEXPLAIN ANALYSE SELECT * FROM log_core WHERE log_ts > CURRENT_DATE - '1 \nweek'::interval;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Index Scan using log_core_ts_idx on log_core (cost=0.00..18.73 rows=239 \nwidth=117) (actual time=0.79..0.79 rows=0 loops=1)\n Index Cond: (log_ts > ((('now'::text)::date - '7 \ndays'::interval))::timestamp with time zone)\n Total runtime: 1.03 msec\n(3 rows)\n\nIt seems to help an accurate estimate of number-of-rows if you put an upper \nand lower limit in.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 21 Jan 2004 20:38:22 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries with timestamps"
},
{
"msg_contents": "Richard,\n\n> (I think Tip 3 is already fixed in 7.3, or I misunderstand what Josh is \n> saying)\n\nYeah? Certainly looks like it. Apparently I can't keep track.\n\nI'd swear that this issue reared its ugly head again shortly before the 7.4 \nrelease.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 21 Jan 2004 12:58:42 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries with timestamps"
}
] |
[
{
"msg_contents": "Hi,\n\nI need to speed up the triggers that we are using and I began to make\nsome tests to compare the \"C\" and pgSQL trigger performance.\n\nI try to write two identical test triggers (sorry I do not know very\ngood the pgsql C interface and I got one of examples and werite it) and\nattached it on insert of my test table.\n\nAfter it I try to insert in thi stable ~ 160 K rows and compared the\nspeeds.\n\nI was supprised that the pgsql trigger take ~8 sec. to insert this rows\nand the \"C\" trigger take ~ 17 sec.\n\nThis are my triggers:\nCREATE OR REPLACE FUNCTION trig1_t()\n RETURNS trigger AS\n'\nDECLARE\n\nmy_rec RECORD;\n\nBEGIN\n\nselect into my_rec count(*) from ttest;\n\nRETURN NEW;\nEND;\n'\n LANGUAGE 'plpgsql' VOLATILE;\n\nand this writen in \"C\":\n\n#include \"postgres.h\"\n #include \"executor/spi.h\" /* this is what you need to work with\nSPI */\n #include \"commands/trigger.h\" /* ... and triggers */\n\n extern Datum trigf(PG_FUNCTION_ARGS);\n\n PG_FUNCTION_INFO_V1(trigf);\n\n Datum\n trigf(PG_FUNCTION_ARGS)\n {\n TriggerData *trigdata = (TriggerData *) fcinfo->context;\n TupleDesc tupdesc;\n HeapTuple rettuple;\n char *when;\n bool checknull = false;\n bool isnull;\n int ret, i;\n\n /* make sure it's called as a trigger at all */\n if (!CALLED_AS_TRIGGER(fcinfo))\n elog(ERROR, \"trigf: not called by trigger manager\");\n\n /* tuple to return to executor */\n if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))\n rettuple = trigdata->tg_newtuple;\n else\n rettuple = trigdata->tg_trigtuple;\n\n /* check for null values */\n if (!TRIGGER_FIRED_BY_DELETE(trigdata->tg_event)\n && TRIGGER_FIRED_BEFORE(trigdata->tg_event))\n checknull = true;\n\n if (TRIGGER_FIRED_BEFORE(trigdata->tg_event))\n when = \"before\";\n else\n when = \"after \";\n\n tupdesc = trigdata->tg_relation->rd_att;\n\n /* connect to SPI manager */\n if ((ret = SPI_connect()) < 0)\n elog(INFO, \"trigf (fired %s): SPI_connect returned %d\", when,\nret);\n\n /* get number of rows in table */\n ret = SPI_exec(\"SELECT count(*) FROM ttest\", 0);\n\n if (ret < 0)\n elog(NOTICE, \"trigf (fired %s): SPI_exec returned %d\", when,\nret);\n\n\n SPI_finish();\n\n if (checknull)\n {\n SPI_getbinval(rettuple, tupdesc, 1, &isnull);\n if (isnull)\n rettuple = NULL;\n }\n\n return PointerGetDatum(rettuple);\n }\n\n\n\nMy question:\nCan I do the \"C\" trigger to be faster that the pgSQL?\n\nregards,\nivan.\n\n\n",
"msg_date": "Thu, 22 Jan 2004 10:35:17 +0100",
"msg_from": "pginfo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger performance"
},
{
"msg_contents": "Hi,\n\nthanks for the answer.\nIt is very interest, because I readet many times that if I write the trigger\nin \"C\" it will work faster.\nIn wich case will this trigger work faster if write it in \"C\"?\nIn all my triggres I have \"select ....\" or \"insert into mytable select ...\"\nor \"update mytable set ...where...\".\nI need this info because I have a table with ~1.5 M rows and if I start to\nupdate 300 K from this rows it takes ~ 2h.\nIf I remove the trigger for this table all the time is ~ 1 min.\n\nregards,\nivan.\n\nTom Lane wrote:\n\n> pginfo <[email protected]> writes:\n> > I was supprised that the pgsql trigger take ~8 sec. to insert this rows\n> > and the \"C\" trigger take ~ 17 sec.\n>\n> The reason is that plpgsql caches the plan for the invoked SELECT,\n> whereas the way you coded the C function, it's re-planning that SELECT\n> on every call.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n\n",
"msg_date": "Thu, 22 Jan 2004 16:31:34 +0100",
"msg_from": "pginfo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trigger performance"
},
{
"msg_contents": "pginfo <[email protected]> writes:\n> I was supprised that the pgsql trigger take ~8 sec. to insert this rows\n> and the \"C\" trigger take ~ 17 sec.\n\nThe reason is that plpgsql caches the plan for the invoked SELECT,\nwhereas the way you coded the C function, it's re-planning that SELECT\non every call.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jan 2004 10:37:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger performance "
},
{
"msg_contents": "Ok, thanks.\nI will do it.\n\nregards,\nivan.\n\nTom Lane wrote:\n\n> pginfo <[email protected]> writes:\n> > In wich case will this trigger work faster if write it in \"C\"?\n>\n> Given that the dominant part of the time will be spent down inside SPI\n> in either case, I doubt you will be able to see much difference. You\n> need to think about how to optimize the invoked query, not waste your\n> time recoding the wrapper around it.\n>\n> regards, tom lane\n\n\n\n",
"msg_date": "Thu, 22 Jan 2004 17:05:35 +0100",
"msg_from": "pginfo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trigger performance"
},
{
"msg_contents": "pginfo <[email protected]> writes:\n> In wich case will this trigger work faster if write it in \"C\"?\n\nGiven that the dominant part of the time will be spent down inside SPI\nin either case, I doubt you will be able to see much difference. You\nneed to think about how to optimize the invoked query, not waste your\ntime recoding the wrapper around it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Jan 2004 12:10:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger performance "
},
{
"msg_contents": "Hello\n\ntry prepared statements, PQexecPrepared\nhttp://developer.postgresql.org/docs/postgres/libpq-exec.html\n\nRegards\nPavel Stehule\n\nOn Thu, 22 Jan 2004, pginfo wrote:\n\n> Hi,\n> \n> thanks for the answer.\n> It is very interest, because I readet many times that if I write the trigger\n> in \"C\" it will work faster.\n> In wich case will this trigger work faster if write it in \"C\"?\n> In all my triggres I have \"select ....\" or \"insert into mytable select ...\"\n> or \"update mytable set ...where...\".\n> I need this info because I have a table with ~1.5 M rows and if I start to\n> update 300 K from this rows it takes ~ 2h.\n> If I remove the trigger for this table all the time is ~ 1 min.\n> \n> regards,\n> ivan.\n> \n> Tom Lane wrote:\n> \n> > pginfo <[email protected]> writes:\n> > > I was supprised that the pgsql trigger take ~8 sec. to insert this rows\n> > > and the \"C\" trigger take ~ 17 sec.\n> >\n> > The reason is that plpgsql caches the plan for the invoked SELECT,\n> > whereas the way you coded the C function, it's re-planning that SELECT\n> > on every call.\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n\n",
"msg_date": "Fri, 23 Jan 2004 08:35:27 +0100 (CET)",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger performance"
}
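PQexecPrepared is the libpq (C) call; the same plan-once/execute-many idea can be tried from SQL with PREPARE/EXECUTE, which is roughly what plpgsql's plan caching gives you automatically. A minimal sketch using the test table from the original post:

  PREPARE count_ttest AS SELECT count(*) FROM ttest;   -- planned once
  EXECUTE count_ttest;                                  -- reuses the stored plan
  DEALLOCATE count_ttest;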
] |
[
{
"msg_contents": "Hi there,\n\ni m new in this board. I read a little here to find a solution for my problem. But i couldn�t find something.\n\nMy Problem ist:\nI m programming a counter in PHP with Postgres as DB. I had 6,000 visitors across all day, so everything worked fine first.\nYesterday i got 80K Users at my sites, that was the point were all crashed.\n\nI have an Intel Celeron 2,2 Server with 1 GB Ram\nI have PHP in Version 4.2.0 and Postgres 7.3.4\nThe Connection to DB from php ist over PEAR-DB-Class\n\nI wrote a postgres function which gets up to 17 parameters such as os, browser, referer and so on\nThis function tries to update a row in the db/table which matches with the hour of the current datetime\nIf this returns not found then i do an insert\nSo i have a few tables and th\n e update-insert procedure is done a few times (about 15-17 times). At the end i collect some data and return them to show as the counter-banner\n\nI called the function with expain analyze, so it showed something around 222 ms duration\n\nMy first problem ist, what is about transactions? Do i have to care about? I read that a function is always just one transaction\nSo if something fails, the whole function will be undone with a rollback\n\nSecond problem is my update works just if they are not too much visitors/postmaster processes...\nIf there are too much postmasters i get too many errors in my data (the update seems not to work, it doesnt find the exisiting row in the current hour, so it does inserts i think)\nDo i have to care about permissions? I have set for example �lock table os in exclusive mode� for all tables i work with\n\nThe next problem is, i m looking to get 2,000,000 visitors day\nSo i will have to change something in pos\n tgres config right? But what exactly? Max_connectionsnumber, what is else important? Something in apache maybe too?\n\nI hope i can get some ideas, because everything works, except the perfomance and the number of the visitors manipulate data in wrong way and are making me seek!\nI know there are too many questions, but every idea of you guys will help me, thanks\n\nThank you so far\n\nBoris\n\n",
"msg_date": "Thu, 22 Jan 2004 16:13:23 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Function & Update/Insert Problem and Performance "
}
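A hedged sketch of the update-then-insert step described above (table and column names are invented placeholders). The whole plpgsql function does run as one transaction; note, however, that without a lock or a unique constraint two concurrent calls can both miss the UPDATE and both take the INSERT branch, which would produce exactly the duplicate rows described under load.

  CREATE OR REPLACE FUNCTION count_hit(varchar) RETURNS integer AS '
  BEGIN
      -- try to bump the row for the current hour
      UPDATE os_stats
         SET hits = hits + 1
       WHERE os_name = $1
         AND hour_start = date_trunc(''hour'', now());
      IF NOT FOUND THEN
          -- no row yet for this hour: create it
          INSERT INTO os_stats (os_name, hour_start, hits)
          VALUES ($1, date_trunc(''hour'', now()), 1);
      END IF;
      RETURN 1;
  END;
  ' LANGUAGE 'plpgsql';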
] |
[
{
"msg_contents": "Our database has slowed right down. We are not getting any performance from\nour biggest table \"forecastelement\".\nThe table has 93,218,671 records in it and climbing.\nThe index is on 4 columns, origianlly it was on 3. I added another to see\nif it improve performance. It did not.\nShould there be less columns in the index?\nHow can we improve database performance?\nHow should I improve my query?\n\nPWFPM_DEV=# \\d forecastelement\n Table \"public.forecastelement\"\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n version | character varying(99) | not null\n origin | character varying(10) | not null\n timezone | character varying(99) | not null\n region_id | character varying(20) | not null\n wx_element | character varying(99) | not null\n value | character varying(99) | not null\n flag | character(3) | not null\n units | character varying(99) | not null\n valid_time | timestamp without time zone | not null\n issue_time | timestamp without time zone | not null\n next_forecast | timestamp without time zone | not null\n reception_time | timestamp without time zone | not null\nIndexes:\n \"forecastelement_vrwi_idx\" btree\n(valid_time,region_id.wx_element.issue_time)\n\nexplain analyze select DISTINCT ON (valid_time)\nto_char(valid_time,'YYYYMMDDHH24MISS') as valid_time,value from\n (select valid_time,value,\"time\"(valid_time) as\nhour,reception_time,\n issue_time from forecastelement where\n valid_time between '2002-09-02 04:00:00' and\n '2002-09-07 03:59:59' and region_id = 'PU-REG-WTO-00200'\n and wx_element = 'TEMP_VALEUR1' and issue_time between\n '2002-09-02 05:00:00' and '2002-09-06 05:00:00'\n and origin = 'REGIONAL' and \"time\"(issue_time) =\n'05:00:00'\n order by issue_time,reception_time DESC,valid_time) as\nfoo where\n (date(valid_time) = date(issue_time)+1 -1 or\ndate(valid_time) = date(issue_time)+1 or\n (valid_time between '2002-09-07 00:00:00' and '2002-09-07\n03:59:59'\n and issue_time = '2002-09-06 05:00:00')) order by valid_time\n,issue_time DESC;\n\nUSING INDEX\n\"forecastelement_vrwi_idx\" btree (valid_time, region_id, wx_element,\nissue_time)\n Unique (cost=116.75..116.76 rows=1 width=83) (actual\ntime=9469.088..9470.002 rows=115 loops=1)\n -> Sort (cost=116.75..116.75 rows=1 width=83) (actual\ntime=9469.085..9469.308 rows=194 loops=1)\n Sort Key: to_char(valid_time, 'YYYYMMDDHH24MISS'::text), issue_time\n -> Subquery Scan foo (cost=116.72..116.74 rows=1 width=83)\n(actual time=9465.979..9467.735 rows=194 loops=1)\n -> Sort (cost=116.72..116.73 rows=1 width=30) (actual\ntime=9440.756..9440.981 rows=194 loops=1)\n Sort Key: issue_time, reception_time, valid_time\n -> Index Scan using forecastelement_vrwi_idx on\nforecastelement (cost=0.00..116.71 rows=1 width=30) (actual\ntime=176.510..9439.470 rows=194 loops=1)\n Index Cond: ((valid_time >= '2002-09-02\n04:00:00'::timestamp without time zone) AND (valid_time <= '2002-09-07\n03:59:59'::timestamp without time zone) AND ((region_id)::text =\n'PU-REG-WTO-00200'::text) AND ((wx_element)::text = 'TEMP_VALEUR1'::text)\nAND (issue_time >= '2002-09-02 05:00:00'::timestamp without time zone) AND\n(issue_time <= '2002-09-06 05:00:00'::timestamp without time zone))\n Filter: (((origin)::text = 'REGIONAL'::text) AND\n(\"time\"(issue_time) = '05:00:00'::time without time zone) AND\n((date(valid_time) = ((date(issue_time) + 1) - 1)) OR (date(valid_time) =\n(date(issue_time) + 1)) OR ((valid_time >= '2002-09-07 00:00:00'::timestamp\nwithout time zone) AND (valid_time <= 
'2002-09-07 03:59:59'::timestamp\nwithout time zone) AND (issue_time = '2002-09-06 05:00:00'::timestamp\nwithout time zone))))\n Total runtime: 9470.404 ms\n\nWe are running postgresql-7.4-0.5PGDG.i386.rpm .\non a Dell Poweredge 6650.\nsystem\nOS RHAS 3.0\ncpu 4\nmemory 3.6 GB\ndisk 270 GB raid 5\n\npostgresql.conf\nmax_connections = 64 \nshared_buffers = 4000 \nvacuum_mem = 32768 \neffective_cache_size = 312500 \nrandom_page_cost = 2 \n",
"msg_date": "Thu, 22 Jan 2004 14:47:04 -0500",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "database performance and query performance question"
},
{
"msg_contents": "Dan,\n\n> Should there be less columns in the index?\n> How can we improve database performance?\n> How should I improve my query?\n\nYour query plan isn't the problem. It's a good plan, and a reasonably \nefficient query. Under other circumstances, the SELECT DISTINCT with the \nto_char could be a performance-killer, but it's not in that result set.\n\nOverall, you're taking 9 seconds to scan 93 million records. Is this the time \nthe first time you run the query, or the 2nd and successive times?\n\nWhen did you last run VACUUM ANALYZE on the table? Have you tried increasing \nthe ANALYZE statistics on the index columns to, say, 500?\n\nYour disks are RAID 5. How many drives? In RAID5, more drives improves the \nspeed of large scans.\n\nAnd what's your sort_mem setting? You didn't mention it.\n\nWhy is your effective cache size only 300mb when you have 3 GB of RAM? It's \nnot affecting this query, but it could affect others.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 22 Jan 2004 12:00:45 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database performance and query performance question"
},
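A sketch of raising the statistics target as suggested; the columns are the leading index columns from the original post, and 500 is the value mentioned above:

  ALTER TABLE forecastelement ALTER COLUMN valid_time SET STATISTICS 500;
  ALTER TABLE forecastelement ALTER COLUMN issue_time SET STATISTICS 500;
  ANALYZE forecastelement;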
{
"msg_contents": "Dan,\n\n> Why is your effective cache size only 300mb when you have 3 GB of RAM? It's \n> not affecting this query, but it could affect others.\n\nIgnore this last question, I dropped a zero from my math. Sorry!\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 22 Jan 2004 12:21:19 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database performance and query performance question"
},
{
"msg_contents": "\"Shea,Dan [CIS]\" <[email protected]> writes:\n\n> Indexes:\n> \"forecastelement_vrwi_idx\" btree (valid_time,region_id.wx_element.issue_time)\n> \n> explain analyze \n> SELECT DISTINCT ON (valid_time) \n> to_char(valid_time,'YYYYMMDDHH24MISS') AS valid_time,\n> value\n> from (\n> \t SELECT valid_time,value, \"time\"(valid_time) AS hour, reception_time, issue_time \n> \t FROM forecastelement \n> \t WHERE valid_time BETWEEN '2002-09-02 04:00:00' AND '2002-09-07 03:59:59' \n> \t AND region_id = 'PU-REG-WTO-00200'\n> \t AND wx_element = 'TEMP_VALEUR1' \n> \t AND issue_time BETWEEN '2002-09-02 05:00:00' AND '2002-09-06 05:00:00'\n> \t AND origin = 'REGIONAL' \n> \t AND \"time\"(issue_time) = '05:00:00'\n> \t ORDER BY issue_time,reception_time DESC,valid_time\n> \t ) AS foo \n> WHERE\n> ( date(valid_time) = date(issue_time)+1 -1 \n> OR date(valid_time) = date(issue_time)+1 \n> OR ( valid_time BETWEEN '2002-09-07 00:00:00' AND '2002-09-07 03:59:59'\n> AND issue_time = '2002-09-06 05:00:00'\n> )\n> )\n> ORDER BY valid_time ,issue_time DESC;\n\nIncidentally, I find it easier to analyze queries when they've been formatted\nwell. This makes what's going on much clearer.\n\n From this it's clear your index doesn't match the query. Adding more columns\nwill be useless because only the leading column \"valid_time\" will be used at\nall. Since you're fetching a whole range of valid_times the remaining columns\nare all irrelevant. They only serve to bloat the index and require reading a\nlot more data.\n\nYou could either try creating an index just on valid_time, or create an index\non (region_id,wx_element,valid_time) or (region_id,wx_element,issue_time)\nwhichever is more selective. You could put wx_element first if it's more\nselective than region_id.\n\nMoreover, what purpose does the inner ORDER BY clause serve? It's only going\nto be re-sorted again by the outer ORDER BY.\n\n-- \ngreg\n\n",
"msg_date": "22 Jan 2004 19:20:25 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database performance and query performance question"
}
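One of the alternatives described above might look like this (index names are illustrative; put the more selective equality columns first):

  CREATE INDEX forecastelement_rwv_idx
      ON forecastelement (region_id, wx_element, valid_time);

  -- or, if issue_time is the tighter range in typical queries:
  -- CREATE INDEX forecastelement_rwi_idx
  --     ON forecastelement (region_id, wx_element, issue_time);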
] |
[
{
"msg_contents": "\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]]\nSent: Thursday, January 22, 2004 3:01 PM\nTo: Shea,Dan [CIS]; [email protected]\nSubject: Re: [PERFORM] database performance and query performance\nquestion\n\n\nDan,\n\n> Should there be less columns in the index?\n> How can we improve database performance?\n> How should I improve my query?\n\n>>Your query plan isn't the problem. It's a good plan, and a reasonably \n>>efficient query. Under other circumstances, the SELECT DISTINCT with the\n\n>>to_char could be a performance-killer, but it's not in that result set.\n\n>>Overall, you're taking 9 seconds to scan 93 million records. Is this the\ntime \n>>the first time you run the query, or the 2nd and successive times?\n\nThis is actually the second time. The first query took more time.\nConcerning the number of columns for an index, I switched the index to have\nonly one column and tried the same query. It is below.\n\n\n>>When did you last run VACUUM ANALYZE on the table? Have you tried\nincreasing \n>>the ANALYZE statistics on the index columns to, say, 500?\n It is run nightly. But last night's did not complete. It was taking quite\nsome time and I cancelled it, over 4 hours. I will try increasing the\nANALYZE statistics to 500.\n\n>>Your disks are RAID 5. How many drives? In RAID5, more drives improves\nthe \n>>speed of large scans.\n There are 4 drives in this raid 5. We are using lvm with ext3 filesystem.\nWill be moving the database to a SAN within the next month.\n\nAnd what's your sort_mem setting? You didn't mention it.\n>>The sort_mem is the default \nPWFPM_DEV=# show sort_mem;\n sort_mem\n----------\n 1024\n\nWhy is your effective cache size only 300mb when you have 3 GB of RAM? It's\n\nnot affecting this query, but it could affect others.\n>> Oh, I thought I had it set for 2.5 GB of RAM. 
312500 * 8k = 2.5 GB\n\n\nQUERY WITH 1 column in index.\n\n Unique (cost=717633.28..717633.29 rows=1 width=83) (actual\ntime=62922.399..62923.334 rows=115 loops=1)\n -> Sort (cost=717633.28..717633.29 rows=1 width=83) (actual\ntime=62922.395..62922.615 rows=194 loops=1)\n Sort Key: to_char(valid_time, 'YYYYMMDDHH24MISS'::text), issue_time\n -> Subquery Scan foo (cost=717633.26..717633.27 rows=1 width=83)\n(actual time=62918.232..62919.989 rows=194 loops=1)\n -> Sort (cost=717633.26..717633.26 rows=1 width=30) (actual\ntime=62902.378..62902.601 rows=194 loops=1)\n Sort Key: issue_time, reception_time, valid_time\n -> Index Scan using forecastelement_v_idx on\nforecastelement (cost=0.00..717633.25 rows=1 width=30) (actual\ntime=1454.974..62900.752 rows=194 loops=1)\n Index Cond: ((valid_time >= '2002-09-02\n04:00:00'::timestamp without time zone) AND (valid_time <= '2002-09-07\n03:59:59'::timestamp without time zone))\n Filter: (((region_id)::text =\n'PU-REG-WTO-00200'::text) AND ((wx_element)::text = 'TEMP_VALEUR1'::text)\nAND (issue_time >= '2002-09-02 05:00:00'::timestamp without time zone) AND\n(issue_time <= '2002-09-06 05:00:00'::timestamp without time zone) AND\n((origin)::text = 'REGIONAL'::text) AND (\"time\"(issue_time) =\n'05:00:00'::time without time zone) AND ((date(valid_time) =\n((date(issue_time) + 1) - 1)) OR (date(valid_time) = (date(issue_time) + 1))\nOR ((valid_time >= '2002-09-07 00:00:00'::timestamp without time zone) AND\n(valid_time <= '2002-09-07 03:59:59'::timestamp without time zone) AND\n(issue_time = '2002-09-06 05:00:00'::timestamp without time zone))))\n Total runtime: 62923.723 ms\n(10 rows)\n\nPWFPM_DEV=# expalin analyze 312500\nPWFPM_DEV=# explain analyze select DISTINCT ON (valid_time)\nto_char(valid_time,'YYYYMMDDHH24MISS') as valid_time,value from\nPWFPM_DEV-# (select valid_time,value,\"time\"(valid_time)\nas hour,reception_time,\nPWFPM_DEV(# issue_time from forecastelement where\nPWFPM_DEV(# valid_time between '2002-09-02 04:00:00' and\nPWFPM_DEV(# '2002-09-07 03:59:59' and region_id =\n'PU-REG-WTO-00200'\nPWFPM_DEV(# and wx_element = 'TEMP_VALEUR1' and\nissue_time between\nPWFPM_DEV(# '2002-09-02 05:00:00' and '2002-09-06\n05:00:00'\nPWFPM_DEV(# and origin = 'REGIONAL' and\n\"time\"(issue_time) = '05:00:00'\nPWFPM_DEV(# order by issue_time,reception_time\nDESC,valid_time) as foo where\nPWFPM_DEV-# (date(valid_time) = date(issue_time)+1 -1 or\ndate(valid_time) = date(issue_time)+1 or\nPWFPM_DEV(# (valid_time between '2002-09-07 00:00:00' and\n'2002-09-07 03:59:59'\nPWFPM_DEV(# and issue_time = '2002-09-06 05:00:00'))\norder by valid_time ,issue_time DESC;\n \nQUERY PLAN\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------\n Unique (cost=717633.28..717633.29 rows=1 width=83) (actual\ntime=21468.227..21469.164 rows=115 loops=1)\n -> Sort (cost=717633.28..717633.29 rows=1 width=83) 
(actual\ntime=21468.223..21468.452 rows=194 loops=1)\n Sort Key: to_char(valid_time, 'YYYYMMDDHH24MISS'::text), issue_time\n -> Subquery Scan foo (cost=717633.26..717633.27 rows=1 width=83)\n(actual time=21465.274..21467.006 rows=194 loops=1)\n -> Sort (cost=717633.26..717633.26 rows=1 width=30) (actual\ntime=21465.228..21465.452 rows=194 loops=1)\n Sort Key: issue_time, reception_time, valid_time\n -> Index Scan using forecastelement_v_idx on\nforecastelement (cost=0.00..717633.25 rows=1 width=30) (actual\ntime=1479.649..21463.779 rows=194 loops=1)\n Index Cond: ((valid_time >= '2002-09-02\n04:00:00'::timestamp without time zone) AND (valid_time <= '2002-09-07\n03:59:59'::timestamp without time zone))\n Filter: (((region_id)::text =\n'PU-REG-WTO-00200'::text) AND ((wx_element)::text = 'TEMP_VALEUR1'::text)\nAND (issue_time >= '2002-09-02 05:00:00'::timestamp without time zone) AND\n(issue_time <= '2002-09-06 05:00:00'::timestamp without time zone) AND\n((origin)::text = 'REGIONAL'::text) AND (\"time\"(issue_time) =\n'05:00:00'::time without time zone) AND ((date(valid_time) =\n((date(issue_time) + 1) - 1)) OR (date(valid_time) = (date(issue_time) + 1))\nOR ((valid_time >= '2002-09-07 00:00:00'::timestamp without time zone) AND\n(valid_time <= '2002-09-07 03:59:59'::timestamp without time zone) AND\n(issue_time = '2002-09-06 05:00:00'::timestamp without time zone))))\n Total runtime: 21469.485 ms\n(10 rows)\n\nPWFPM_DEV=#\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n",
"msg_date": "Thu, 22 Jan 2004 15:35:36 -0500",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database performance and query performance question"
}
] |
[
{
"msg_contents": "\nSomething that I do not understand is why if you use a valid_time =\n'2004-01-22 00:00:00' the query will use the index but if you do a\nvalid_time > '2004-01-22 00:00:00' it does not use the index?\nPWFPM_DEV=# explain analyze select * from forecastelement where valid_time >\ndate '2004-01-23'::date limit 10;\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------------------------\n Limit (cost=0.00..3.82 rows=10 width=129) (actual\ntime=199550.388..199550.783 rows=10 loops=1)\n -> Seq Scan on forecastelement (cost=0.00..2722898.40 rows=7131102\nwidth=129) (actual time=199550.382..199550.757 rows=10 loops=1)\n Filter: (valid_time > '2004-01-23 00:00:00'::timestamp without time\nzone)\n Total runtime: 199550.871 ms\n(4 rows)\n\nPWFPM_DEV=# explain analyze select * from forecastelement where valid_time =\ndate '2004-01-23'::date limit 10;\n \nQUERY PLAN \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------\n Limit (cost=0.00..18.76 rows=10 width=129) (actual time=176.141..276.577\nrows=10 loops=1)\n -> Index Scan using forecastelement_vrwi_idx on forecastelement\n(cost=0.00..160770.98 rows=85707 width=129) (actual time=176.133..276.494\nrows=10 loops=1)\n Index Cond: (valid_time = '2004-01-23 00:00:00'::timestamp without\ntime zone)\n Total runtime: 276.721 ms\n(4 rows)\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]]\nSent: Thursday, January 22, 2004 3:01 PM\nTo: Shea,Dan [CIS]; [email protected]\nSubject: Re: [PERFORM] database performance and query performance\nquestion\n\n\nDan,\n\n> Should there be less columns in the index?\n> How can we improve database performance?\n> How should I improve my query?\n\nYour query plan isn't the problem. It's a good plan, and a reasonably \nefficient query. Under other circumstances, the SELECT DISTINCT with the \nto_char could be a performance-killer, but it's not in that result set.\n\nOverall, you're taking 9 seconds to scan 93 million records. Is this the\ntime \nthe first time you run the query, or the 2nd and successive times?\n\nWhen did you last run VACUUM ANALYZE on the table? Have you tried\nincreasing \nthe ANALYZE statistics on the index columns to, say, 500?\n\nYour disks are RAID 5. How many drives? In RAID5, more drives improves the\n\nspeed of large scans.\n\nAnd what's your sort_mem setting? You didn't mention it.\n\nWhy is your effective cache size only 300mb when you have 3 GB of RAM? It's\n\nnot affecting this query, but it could affect others.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n",
"msg_date": "Thu, 22 Jan 2004 15:35:59 -0500",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database performance and query performance question"
},
{
"msg_contents": "Dan,\n\n> Something that I do not understand is why if you use a valid_time =\n> '2004-01-22 00:00:00' the query will use the index but if you do a\n> valid_time > '2004-01-22 00:00:00' it does not use the index?\n\nBecause of the expected number of rows to be returned. Take a look at the row \nestimates on the forecastleelement scans. \n\nYou can improve these estimates by increasing the ANALYZE stats and/or running \nANALYZE more often. Of course, increasing the stats makes analyze run \nslower ...\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 22 Jan 2004 12:43:15 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database performance and query performance question"
},
{
"msg_contents": "Shea,Dan [CIS] kirjutas N, 22.01.2004 kell 22:35:\n> Something that I do not understand is why if you use a valid_time =\n> '2004-01-22 00:00:00' the query will use the index but if you do a\n> valid_time > '2004-01-22 00:00:00' it does not use the index?\n\nIt probably can't tell if > is selective enough to justify using index.\n\nTogether with \"limit 10\" it may be.\n\nYou could try \n\nexplain analyze select * from forecastelement where valid_time between \n'2004-01-22'::date and '2004-01-22'::date limit 10;\n\nto see if this is considered good enough.\n\n--------------\nHannu\n\n",
"msg_date": "Thu, 22 Jan 2004 22:46:46 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database performance and query performance question"
},
{
"msg_contents": "Hannu Krosing kirjutas N, 22.01.2004 kell 22:46:\n> Shea,Dan [CIS] kirjutas N, 22.01.2004 kell 22:35:\n> > Something that I do not understand is why if you use a valid_time =\n> > '2004-01-22 00:00:00' the query will use the index but if you do a\n> > valid_time > '2004-01-22 00:00:00' it does not use the index?\n> \n> It probably can't tell if > is selective enough to justify using index.\n> \n> Together with \"limit 10\" it may be.\n> \n> You could try \n> \n> explain analyze select * from forecastelement where valid_time between \n> '2004-01-22'::date and '2004-01-22'::date limit 10;\n\nSorry, that should have been:\n\nbetween '2004-01-22'::date and '2004-01-23'::date\n\n\n> to see if this is considered good enough.\n> \n> --------------\n> Hannu\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n",
"msg_date": "Thu, 22 Jan 2004 22:53:37 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database performance and query performance question"
}
] |
[
{
"msg_contents": "This sure speed up the query, it is fast.\nPWFPM_DEV=# explain analyze select * from forecastelement where valid_time\nbetween '2004-01-12'::date and '2003-01-12'::date;\n \nQUERY PLAN \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n---\n Index Scan using forecastelement_v_idx on forecastelement\n(cost=0.00..159607.11 rows=466094 width=129) (actual time=49.504..49.504\nrows=0 loops=1)\n Index Cond: ((valid_time >= '2004-01-12 00:00:00'::timestamp without time\nzone) AND (valid_time <= '2003-01-12 00:00:00'::timestamp without time\nzone))\n Total runtime: 49.589 ms\n(3 rows)\n\n-----Original Message-----\nFrom: Hannu Krosing [mailto:[email protected]]\nSent: Thursday, January 22, 2004 3:54 PM\nTo: Shea,Dan [CIS]\nCc: '[email protected]'; [email protected]\nSubject: Re: [PERFORM] database performance and query performance\nquestion\n\n\nHannu Krosing kirjutas N, 22.01.2004 kell 22:46:\n> Shea,Dan [CIS] kirjutas N, 22.01.2004 kell 22:35:\n> > Something that I do not understand is why if you use a valid_time =\n> > '2004-01-22 00:00:00' the query will use the index but if you do a\n> > valid_time > '2004-01-22 00:00:00' it does not use the index?\n> \n> It probably can't tell if > is selective enough to justify using index.\n> \n> Together with \"limit 10\" it may be.\n> \n> You could try \n> \n> explain analyze select * from forecastelement where valid_time between \n> '2004-01-22'::date and '2004-01-22'::date limit 10;\n\nSorry, that should have been:\n\nbetween '2004-01-22'::date and '2004-01-23'::date\n\n\n> to see if this is considered good enough.\n> \n> --------------\n> Hannu\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n",
"msg_date": "Thu, 22 Jan 2004 16:09:32 -0500",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database performance and query performance question"
}
] |
[
{
"msg_contents": "The end date in the previous example was actually invalid between\n'2004-01-12'::date and '2003-01-12'::date;\nThere have been multiple inserts since I recreated the index but it took\nquite some time to complete the following\nPWFPM_DEV=# explain analyze select * from forecastelement where valid_time\nbetween '2004-01-12'::date and '2004-01-13'::date;\n \nQUERY PLAN \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------\n Index Scan using forecastelement_v_idx on forecastelement\n(cost=0.00..832139.81 rows=2523119 width=129) (actual time=0.519..467159.658\nrows=2940600 loops=1)\n Index Cond: ((valid_time >= '2004-01-12 00:00:00'::timestamp without time\nzone) AND (valid_time <= '2004-01-13 00:00:00'::timestamp without time\nzone))\n Total runtime: 472627.148 ms\n(3 rows)\n\n-----Original Message-----\nFrom: Shea,Dan [CIS] \nSent: Thursday, January 22, 2004 4:10 PM\nTo: 'Hannu Krosing'; Shea,Dan [CIS]\nCc: '[email protected]'; [email protected]\nSubject: RE: [PERFORM] database performance and query performance\nquestion\n\n\nThis sure speed up the query, it is fast.\nPWFPM_DEV=# explain analyze select * from forecastelement where valid_time\nbetween '2004-01-12'::date and '2003-01-12'::date;\n \nQUERY PLAN \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n---\n Index Scan using forecastelement_v_idx on forecastelement\n(cost=0.00..159607.11 rows=466094 width=129) (actual time=49.504..49.504\nrows=0 loops=1)\n Index Cond: ((valid_time >= '2004-01-12 00:00:00'::timestamp without time\nzone) AND (valid_time <= '2003-01-12 00:00:00'::timestamp without time\nzone))\n Total runtime: 49.589 ms\n(3 rows)\n\n-----Original Message-----\nFrom: Hannu Krosing [mailto:[email protected]]\nSent: Thursday, January 22, 2004 3:54 PM\nTo: Shea,Dan [CIS]\nCc: '[email protected]'; [email protected]\nSubject: Re: [PERFORM] database performance and query performance\nquestion\n\n\nHannu Krosing kirjutas N, 22.01.2004 kell 22:46:\n> Shea,Dan [CIS] kirjutas N, 22.01.2004 kell 22:35:\n> > Something that I do not understand is why if you use a valid_time =\n> > '2004-01-22 00:00:00' the query will use the index but if you do a\n> > valid_time > '2004-01-22 00:00:00' it does not use the index?\n> \n> It probably can't tell if > is selective enough to justify using index.\n> \n> Together with \"limit 10\" it may be.\n> \n> You could try \n> \n> explain analyze select * from forecastelement where valid_time between \n> '2004-01-22'::date and '2004-01-22'::date limit 10;\n\nSorry, that should have been:\n\nbetween '2004-01-22'::date and '2004-01-23'::date\n\n\n> to see if this is considered good enough.\n> \n> --------------\n> Hannu\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n",
"msg_date": "Thu, 22 Jan 2004 16:32:28 -0500",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database performance and query performance question"
},
{
"msg_contents": "Dan,\n\nOf course it took forever. You're retrieving 2.9 million rows! \n\n> Index Scan using forecastelement_v_idx on forecastelement\n> (cost=0.00..832139.81 rows=2523119 width=129) (actual time=0.519..467159.658\n> rows=2940600 loops=1)\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 22 Jan 2004 13:52:38 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database performance and query performance question"
},
{
"msg_contents": "Shea,Dan [CIS] kirjutas N, 22.01.2004 kell 23:32:\n> The end date in the previous example was actually invalid between\n> '2004-01-12'::date and '2003-01-12'::date;\n> There have been multiple inserts since I recreated the index but it took\n> quite some time to complete the following\n> PWFPM_DEV=# explain analyze select * from forecastelement where valid_time\n> between '2004-01-12'::date and '2004-01-13'::date;\n\nYou could try ORDER BY to bias the optimiser towards using an index:\n\nexplain analyze\n select *\n from forecastelement\n where valid_time > '2004-01-12'::date\n order by valid_time\n limit 10;\n\nThis also may be more close to what you are expecting :)\n\n------------------\nHannu\n\n",
"msg_date": "Fri, 23 Jan 2004 01:38:37 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database performance and query performance question"
}
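A minimal sketch of the two suggestions above, using the table and column names from this thread; the LIMIT value and cursor name are illustrative. With an ORDER BY on the indexed column plus a small LIMIT, the planner can walk forecastelement_v_idx and stop early instead of materializing every qualifying row:

    EXPLAIN ANALYZE
    SELECT *
      FROM forecastelement
     WHERE valid_time >= '2004-01-12'::date
     ORDER BY valid_time
     LIMIT 10;

If the full day of data really is needed, a cursor lets the application pull it in batches rather than fetching 2.9 million rows in one go:

    BEGIN;
    DECLARE fe_cur CURSOR FOR
        SELECT * FROM forecastelement
         WHERE valid_time BETWEEN '2004-01-12'::date AND '2004-01-13'::date;
    FETCH 1000 FROM fe_cur;
    -- repeat FETCH until no rows come back
    CLOSE fe_cur;
    COMMIT;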
] |
[
{
"msg_contents": "Hi all,\n\n First of all thanks to Josh and Richard for their replies. What I have\ndone to test\ntheir indications is the following. I have created a new table identical to\nSTATISTICS,\nand an index over the TIMESTAMP_IN field.\n\nCREATE TABLE STATISTICS2\n(\n STATISTIC_ID NUMERIC(10) NOT NULL DEFAULT\n NEXTVAL('STATISTIC_ID_SEQ')\n CONSTRAINT pk_st_statistic2_id PRIMARY KEY,\n TIMESTAMP_IN TIMESTAMP,\n VALUE NUMERIC(10)\n);\n\nCREATE INDEX i_stats2_tin ON STATISTICS2(TIMESTAMP_IN);\n\nAfter that I inserted the data from STATISTICS and vacuumed the DB:\n\n INSERT INTO STATISTICS2 ( SELECT * FROM STATISTICS );\n vacuumdb -f -z -d test\n\nonce the vacuum has finished I do the following query\n\nexplain analyze select * from statistics2 where timestamp_in <\nto_timestamp( '20031201', 'YYYYMMDD' );\nNOTICE: QUERY PLAN:\n\nSeq Scan on statistics2 (cost=0.00..638.00 rows=9289 width=35) (actual\ntime=0.41..688.34 rows=27867 loops=1)\nTotal runtime: 730.82 msec\n\nThat query is not using the index. Anybody knows what I'm doing wrong?\n\nThank you very much\n\n--\nArnau\n\n\n",
"msg_date": "Fri, 23 Jan 2004 08:50:02 +0100",
"msg_from": "\"Arnau Rebassa i Villalonga\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queries with timestamp, 2"
}
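Two quick checks that are often suggested for this situation, sketched with the table and column names from the message above; whether the index actually wins depends on what fraction of statistics2 the predicate matches, and for roughly 28000 matching rows in a small table the sequential scan may simply be the cheaper plan. First, turning off sequential scans purely as a diagnostic shows whether the index scan would really be faster:

    SET enable_seqscan TO off;
    EXPLAIN ANALYZE
    SELECT * FROM statistics2
     WHERE timestamp_in < to_timestamp('20031201', 'YYYYMMDD');
    SET enable_seqscan TO on;

Second, if the estimated row count (9289) stays far from the actual one (27867), raising the column's statistics target and re-analyzing gives the planner a better histogram to work with:

    ALTER TABLE statistics2 ALTER COLUMN timestamp_in SET STATISTICS 100;
    ANALYZE statistics2;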
] |
[
{
"msg_contents": "Hello All\r\n\r\nJust wanted to gather opinions on what file system has the best balance between performance and reliability when used on a quad processor machine running SuSE64. Thanks\r\n\r\nDAve\n\n\n\n\n\n\nHello All\n \nJust wanted to gather opinions on what \r\nfile system has the best balance between performance and reliability when used \r\non a quad processor machine running SuSE64. Thanks\n \nDAve",
"msg_date": "Fri, 23 Jan 2004 12:43:28 -0400",
"msg_from": "\"Dave Thompson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "Dave Thompson wrote:\n\n> Hello All\n> \n> Just wanted to gather opinions on what file system has the best \n> balance between performance and reliability when used on a quad \n> processor machine running SuSE64. Thanks\n\n\nXFS.. hands down.\n\n> \n> DAve\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n\n\n\n\n\n\n\nDave Thompson wrote:\n\n\n\n\nHello All\n \nJust wanted to gather\nopinions on what file system has the best balance between performance\nand reliability when used on a quad processor machine running SuSE64. \nThanks\n\n\nXFS.. hands down.\n\n\n \nDAve\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL",
"msg_date": "Fri, 23 Jan 2004 08:51:03 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "On Fri, Jan 23, 2004 at 08:51:03AM -0800, Joshua D. Drake wrote:\n> \n> \n> XFS.. hands down.\n\nI thought it was you who recently said you thought there was some\nsort of possible caching problem there?\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n",
"msg_date": "Fri, 23 Jan 2004 12:55:44 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "\n>>XFS.. hands down.\n>> \n>>\n>\n>I thought it was you who recently said you thought there was some\n>sort of possible caching problem there?\n>\n> \n>\nNot I. We have had issues with JFS and data corruption on a powerout but\nXFS has been rock solid in all of our tests.\n\nXFS also has the interesting ability (although I have yet to test it) \nthat will allow you\nto take a snapshot of the filesystem. Thus you can have filesystem level \nbackups\nof the PGDATA directory that are consistent even though the database is \nrunning.\nThere is nothing else on Linux that comes close to that. Plus XFS has been\nproven in a 64 bit environment (Irix).\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n>A\n>\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL\n\n",
"msg_date": "Fri, 23 Jan 2004 10:18:35 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "Anyone have good/bad experience using OpenGFS or the Sistina GFS?\nAny other cluster file systems out there?\n\nGreg\n\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you have received this email in error please notify\nthe system manager.\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses.\n\nwww.mimesweeper.com\n**********************************************************************\n\n",
"msg_date": "Fri, 23 Jan 2004 13:53:21 -0500",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Linux Cluster File Systems"
},
{
"msg_contents": "On Fri, Jan 23, 2004 at 10:18:35AM -0800, Joshua D. Drake wrote:\n> >\n> Not I. We have had issues with JFS and data corruption on a powerout but\n> XFS has been rock solid in all of our tests.\n\nSorry, it was Josh Berkus:\n\nhttp://archives.postgresql.org/pgsql-performance/2004-01/msg00086.php\n\n> There is nothing else on Linux that comes close to that. Plus XFS has been\n> proven in a 64 bit environment (Irix).\n\nI had lots of happy experiences with XFS when administering IRIX\nboxes[1], but I don't know what differences the Linux port entailed. \nDo you have details on that? We're certainly looking for an option\nover JFS at the moment.\n\nA\n\n[1] I will note, however, that it was practically the only happy\nexperience I had with them. IRIX made the early Debian installer\nlook positively user-friendly, and SGI's desire to make everything\nwhiz-bang nifty by running practically every binary setuid root gave\nme fits. But XFS was nice.\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n",
"msg_date": "Fri, 23 Jan 2004 14:01:48 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "\n>>There is nothing else on Linux that comes close to that. Plus XFS has been\n>>proven in a 64 bit environment (Irix).\n>> \n>>\n>\n>I had lots of happy experiences with XFS when administering IRIX\n>boxes[1], but I don't know what differences the Linux port entailed. \n>Do you have details on that? \n>\n\nhttp://oss.sgi.com/projects/xfs/\n\n\n>We're certainly looking for an option\n>over JFS at the moment.\n>\n>A\n>\n>[1] I will note, however, that it was practically the only happy\n>experience I had with them. IRIX made the early Debian installer\n>look positively user-friendly, and SGI's desire to make everything\n>whiz-bang nifty by running practically every binary setuid root gave\n>me fits. But XFS was nice.\n>\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL\n\n",
"msg_date": "Fri, 23 Jan 2004 11:05:41 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "In article <[email protected]>, Greg Spiegelberg wrote:\n> Anyone have good/bad experience using OpenGFS or the Sistina GFS?\n> Any other cluster file systems out there?\n\nLustre and CFS spring to mind. CFS is the cluster filesystem layer\nof openssi (http://www.openssi.org), which is layered on top of\next3 and provides the necessary locking. It can work with shared\ndisk hardware, but I believe 1 node is always responsible for all\ntransactions. It can fail-over to another node in the cluster, if\nthat node has direct access to the disk.\n\nYou can also use it without shared disk hardware (you loose the HA\nofcourse). That way you can easily test openssi.\n\nPostgresql should work without problems on an OpenSSI cluster.\n\n\nI don't know much about Lustre (http://www.clusterfs.com)\n\n\n",
"msg_date": "Fri, 23 Jan 2004 20:16:35 +0000 (UTC)",
"msg_from": "\"Karl Vogel\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux Cluster File Systems"
},
{
"msg_contents": "On Fri, Jan 23, 2004 at 11:05:41AM -0800, Joshua D. Drake wrote:\n> http://oss.sgi.com/projects/xfs/\n\nYes, I guess I shoulda thought of that, eh? Thanks. The docs do\nsuggest that there are some significant differences between the two\nversions of the filesystem, so I'm not sure how sanguine I'd be about\nthe degree of \"testing\" the filesystem has received on Linux. On the\nother hand, I wouldn't be surprised if it were no worse than the\nother options.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n",
"msg_date": "Fri, 23 Jan 2004 15:50:56 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "\n>Yes, I guess I shoulda thought of that, eh? Thanks. The docs do\n>suggest that there are some significant differences between the two\n>versions of the filesystem, so I'm not sure how sanguine I'd be about\n>the degree of \"testing\" the filesystem has received on Linux. On the\n> \n>\nWell SuSE ships with XFS and SuSE tends to be really good about testing.\nBetter than RedHat IMHO. Just the fact that RedHat uses ext3 as the default\nis a black eye.\n\nXFS has been around a LONG time, and on Linux for a couple of years now.\nPlus I believe it is the default FS for all of the really high end stuff \nSGI is doing\nwith Linux.\n\nI would (and do) trust XFS currently over ANY other journalled option on \nLinux.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>other hand, I wouldn't be surprised if it were no worse than the\n>other options.\n>\n>A\n>\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL\n\n",
"msg_date": "Fri, 23 Jan 2004 12:56:18 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "[email protected] (\"Joshua D. Drake\") writes:\n>>Yes, I guess I shoulda thought of that, eh? Thanks. The docs do\n>>suggest that there are some significant differences between the two\n>>versions of the filesystem, so I'm not sure how sanguine I'd be about\n>>the degree of \"testing\" the filesystem has received on Linux. On the\n>>\n> Well SuSE ships with XFS and SuSE tends to be really good about\n> testing. Better than RedHat IMHO. Just the fact that RedHat uses\n> ext3 as the default is a black eye.\n\nWell, I'd point to one major factor with RHAT; they employ Stephen\nTweedie, creator of ext3, and have been paying him to work on it for\nsome time now. If they _didn't_ promote use of ext3, they would be\nvery much vulnerable to the \"won't eat their own dogfood\" criticism.\n\n> XFS has been around a LONG time, and on Linux for a couple of years\n> now. Plus I believe it is the default FS for all of the really high\n> end stuff SGI is doing with Linux.\n\nAh, but there is a bit of a 'problem' nonetheless; XFS is not\n'officially supported' as part of the Linux kernel until version 2.6,\nwhich is still pretty \"bleeding edge.\" Until 2.6 solidifies a bit\nmore (aside: based on experiences with 2.6.0, \"quite a lot more\"), it\nis a \"patchy\" add-on to the 'stable' 2.4 kernel series.\n\nDo the patches work? As far as I have heard, quite well indeed. But\nthe fact of it not having been 'official' is a fair little bit of a\ndownside.\n\n> I would (and do) trust XFS currently over ANY other journalled\n> option on Linux.\n\nI'm getting less and less inclined to trust ext3 or JFS, which \"floats\nupwards\" any other boats that are lingering around...\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in String.concat \"@\" [name;tld];;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Fri, 23 Jan 2004 16:39:16 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "\n>Well, I'd point to one major factor with RHAT; they employ Stephen\n>Tweedie, creator of ext3, and have been paying him to work on it for\n>some time now. If they _didn't_ promote use of ext3, they would be\n>very much vulnerable to the \"won't eat their own dogfood\" criticism.\n>\n> \n>\nTrue but frankly, they shouldn't. EXT3 has some serious issues. In fact\nif you are running a stock RH kernel before 2.4.20 you can destroy your\nPostgreSQL database with it.\n\nNot to mention how slow it is ;)\n\n>>XFS has been around a LONG time, and on Linux for a couple of years\n>>now. Plus I believe it is the default FS for all of the really high\n>>end stuff SGI is doing with Linux.\n>> \n>>\n>\n>Ah, but there is a bit of a 'problem' nonetheless; XFS is not\n>'officially supported' as part of the Linux kernel until version 2.6,\n>which is still pretty \"bleeding edge.\" \n>\nThat is not true see:\n\nhttp://kerneltrap.org/node/view/1751\n\n\n\n>Until 2.6 solidifies a bit\n>more (aside: based on experiences with 2.6.0, \"quite a lot more\"), it\n>is a \"patchy\" add-on to the 'stable' 2.4 kernel series.\n>\n> \n>\nAgain see above :)\n\n>Do the patches work? As far as I have heard, quite well indeed. But\n>the fact of it not having been 'official' is a fair little bit of a\n>downside.\n> \n>\n\nWhat is official?\n\n\nSincerely,\n\nJoshua D. Drake\n\n>>I would (and do) trust XFS currently over ANY other journalled\n>>option on Linux.\n>> \n>>\n>\n>I'm getting less and less inclined to trust ext3 or JFS, which \"floats\n>upwards\" any other boats that are lingering around...\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL\n\n",
"msg_date": "Fri, 23 Jan 2004 14:57:46 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "[email protected] (\"Joshua D. Drake\") writes:\n>>Well, I'd point to one major factor with RHAT; they employ Stephen\n>>Tweedie, creator of ext3, and have been paying him to work on it for\n>>some time now. If they _didn't_ promote use of ext3, they would be\n>>very much vulnerable to the \"won't eat their own dogfood\" criticism.\n>\n> True but frankly, they shouldn't. EXT3 has some serious issues. In fact\n> if you are running a stock RH kernel before 2.4.20 you can destroy your\n> PostgreSQL database with it.\n>\n> Not to mention how slow it is ;)\n\nI'm not defending ext3's merits; just the clear reason why RHAT uses\nit :-).\n\n>>>XFS has been around a LONG time, and on Linux for a couple of years\n>>>now. Plus I believe it is the default FS for all of the really high\n>>>end stuff SGI is doing with Linux.\n>>>\n>>>\n>>\n>>Ah, but there is a bit of a 'problem' nonetheless; XFS is not\n>>'officially supported' as part of the Linux kernel until version 2.6,\n>> which is still pretty \"bleeding edge.\"\n>>\n> That is not true see:\n>\n> http://kerneltrap.org/node/view/1751\n\nWell, I just downloaded 2.4.24 this week, and I don't see XFS included\nin it. I see ReiserFS, ext3, and JFS, but not XFS.\n\n>>Until 2.6 solidifies a bit more (aside: based on experiences with\n>>2.6.0, \"quite a lot more\"), it is a \"patchy\" add-on to the 'stable'\n>>2.4 kernel series.\n>>\n> Again see above :)\n\n>>Do the patches work? As far as I have heard, quite well indeed. But\n>>the fact of it not having been 'official' is a fair little bit of a\n>>downside.\n>\n> What is official?\n\n\"Is it included in the kernel sources hosted at ftp.kernel.org?\"\n-- \nlet name=\"cbbrowne\" and tld=\"libertyrms.info\" in name ^ \"@\" ^ tld;;\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Fri, 23 Jan 2004 18:30:38 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "On 01/23/2004-10:18AM, Joshua D. Drake wrote:\n> \n> XFS also has the interesting ability (although I have yet to test it) \n> that will allow you\n> to take a snapshot of the filesystem. Thus you can have filesystem level \n> backups\n> of the PGDATA directory that are consistent even though the database is \n> running.\n\nYou can do snapshots in FreeBSD 5.x with UFS2 as well but that (\nnor XFS snapshots ) will let you backup with the database server\nrunning. Just because you will get the file exactly as it was at\na particular instant does not mean that the postmaster did not\nstill have some some data that was not flushed to disk yet.\n\n",
"msg_date": "Fri, 23 Jan 2004 20:22:47 -0500",
"msg_from": "Christopher Weimann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "\n>You can do snapshots in FreeBSD 5.x with UFS2 as well but that (\n>nor XFS snapshots ) will let you backup with the database server\n>running. Just because you will get the file exactly as it was at\n>a particular instant does not mean that the postmaster did not\n>still have some some data that was not flushed to disk yet.\n> \n>\nAhh... isn't that what fsync is for?\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL\n\n",
"msg_date": "Fri, 23 Jan 2004 17:26:45 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "They seem pretty clean (have patched vanilla kernels + xfs for Mandrake \n9.2/9.0).\n\nAnd yes, I would recommend xfs - noticeably faster than ext3, and no \nsign of any mysterious hangs under load.\n\nbest wishes\n\nMark\n\nChristopher Browne wrote:\n\n>Do the patches work? As far as I have heard, quite well indeed. But\n>the fact of it not having been 'official' is a fair little bit of a\n>downside.\n>\n> \n>\n\n",
"msg_date": "Sat, 24 Jan 2004 16:27:04 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "Mark Kirkwood wrote:\n\n> They seem pretty clean (have patched vanilla kernels + xfs for \n> Mandrake 9.2/9.0).\n>\n> And yes, I would recommend xfs - noticeably faster than ext3, and no \n> sign of any mysterious hangs under load.\n\n\nThe hangs you are having are due to several issues... one of them is the \nway ext3 syncs. What kernel version are\nyou running?\n\nSincerely,\n\nJoshua Drake\n\n\n\n>\n> best wishes\n>\n> Mark\n>\n> Christopher Browne wrote:\n>\n>> Do the patches work? As far as I have heard, quite well indeed. But\n>> the fact of it not having been 'official' is a fair little bit of a\n>> downside.\n>>\n>> \n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n",
"msg_date": "Sat, 24 Jan 2004 09:11:50 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "\[email protected] says...\n\n\n> XFS.. hands down.\n\n\nOff topic question here, but I'm a bit at a loss to understand exactly \nwhat sgi are doing.\n\n\nI thought that they were removing IRIX and going with Linux as the OS to \ntheir high end graphical workstations, yet I see they still have IRIX on \ntheir site.\n\n\nWhat, exactly, is the story with this?\n\n\nPaul...\n \n\n-- \nplinehan y_a_h_o_o and d_o_t com\nC++ Builder 5 SP1, Interbase 6.0.1.6 IBX 5.04 W2K Pro\nPlease do not top-post.\n\n\"XML avoids the fundamental question of what we should do, \nby focusing entirely on how we should do it.\" \n\nquote from http://www.metatorial.com \n\n",
"msg_date": "Sat, 24 Jan 2004 17:31:56 -0000",
"msg_from": "Paul Ganainm <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "Paul Ganainm <[email protected]> writes:\n\n> [email protected] says...\n>\n>\n>> XFS.. hands down.\n>\n>\n> Off topic question here, but I'm a bit at a loss to understand exactly \n> what sgi are doing.\n>\n>\n> I thought that they were removing IRIX and going with Linux as the OS to \n> their high end graphical workstations, yet I see they still have IRIX on \n> their site.\n>\n>\n> What, exactly, is the story with this?\n\nWell, they do have a very large installed base of Irix systems, and\na certain obligation to keep supporting them... As long as there are\npeople whi still want MIPS/Irix they will probably keep selling at\nleast a few models.\n\n-Doug\n\n",
"msg_date": "Sat, 24 Jan 2004 15:15:26 -0500",
"msg_from": "Doug McNaught <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "\n\n> Mark Kirkwood wrote a little unclearly:\n>\n>>\n>> And yes, I would recommend xfs - noticeably faster than ext3, and no \n>> sign of any mysterious hangs under load.\n>\n>\nI was thinking about the reported mini-hangs that folks are seeing with \njfs, except the all important keyword \"jfs\" didnt make it out of my head \nand into the 2nd bit of the message :-( Sorry for the confusion.\n\nSo I meant \"faster that ext3, and without strange mini hangs like jfs...\".\n\n\nJoshua wrote:\n\n>\n> The hangs you are having are due to several issues... one of them is \n> the way ext3 syncs.\n\nNever suffered this personally, it was an unexpected filesystem \ncorruption with ext3 that \"encouraged\" me to try out xfs instead.\n\nbest wishes\n\nMark\n\n",
"msg_date": "Sun, 25 Jan 2004 10:56:07 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "\nChristopher Browne <[email protected]> writes:\n\n> Ah, but there is a bit of a 'problem' nonetheless; XFS is not\n> 'officially supported' as part of the Linux kernel until version 2.6,\n> which is still pretty \"bleeding edge.\" Until 2.6 solidifies a bit\n> more (aside: based on experiences with 2.6.0, \"quite a lot more\"), it\n> is a \"patchy\" add-on to the 'stable' 2.4 kernel series.\n\nActually I think XFS is in 2.4.25.\n\n-- \ngreg\n\n",
"msg_date": "26 Jan 2004 11:16:28 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
},
{
"msg_contents": "On 23 Jan, Dave Thompson wrote:\n> Hello All\n> \n> Just wanted to gather opinions on what file system has the best balance between performance and reliability when used on a quad processor machine running SuSE64. Thanks\n> \n> DAve\n\nHi Dave,\n\nI have some data for performance using our DBT-2 workload (OLTP type\ntransactions) with Linux-2.6 with various filesystems and i/o\nschedulers. I know it doesn't address the reliability part of your\nquestion, and it's in a completely different environment (32-bit as well\nas a different distro), but if you think you'll find the results\ninteresting:\n\n\thttp://developer.osdl.org/markw/fs/project_results.html\n\nMark\n",
"msg_date": "Wed, 28 Jan 2004 10:36:39 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on"
},
{
"msg_contents": "Christopher Weimann <[email protected]> writes:\n> You can do snapshots in FreeBSD 5.x with UFS2 as well but that (\n> nor XFS snapshots ) will let you backup with the database server\n> running. Just because you will get the file exactly as it was at\n> a particular instant does not mean that the postmaster did not\n> still have some some data that was not flushed to disk yet.\n\nIt *will* work, if you have an instantaneous filesystem snapshot\ncovering the entire $PGDATA directory tree (both data files and WAL).\nRestarting the postmaster on the backup will result in a WAL replay\nsequence, and at the end the data files will be consistent. If this\nwere not so, we'd not be crash-proof. The instantaneous snapshot\nis exactly equivalent to the on-disk state at the moment of a kernel\ncrash or power failure, no?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 31 Jan 2004 00:07:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64 "
},
{
"msg_contents": "Christopher Weimann wrote:\n> On 01/23/2004-10:18AM, Joshua D. Drake wrote:\n> > \n> > XFS also has the interesting ability (although I have yet to test it) \n> > that will allow you\n> > to take a snapshot of the filesystem. Thus you can have filesystem level \n> > backups\n> > of the PGDATA directory that are consistent even though the database is \n> > running.\n> \n> You can do snapshots in FreeBSD 5.x with UFS2 as well but that (\n> nor XFS snapshots ) will let you backup with the database server\n> running. Just because you will get the file exactly as it was at\n> a particular instant does not mean that the postmaster did not\n> still have some some data that was not flushed to disk yet.\n\nUh, yea, it does. If the snapshot includes all of /data, including\nWAL/xlog, you can then back up the snapshot and restore it on another\nmachine. It will restart just like a crash.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Sat, 31 Jan 2004 13:40:24 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
}
] |
[
{
"msg_contents": "\n\nA few question regarding PostgreSQL handling of queries:\n\n- Is each query submitted parsed and planned even if it is identical to a\nquery submitted before? \nFor example, 10 queries \"select * from animals where id=:b1\" with possibly\ndifferent bind variable :b1 values will be fully processed (parsed and\nplanned) 10 times? \n\n- does it make difference for postgreSQL performance if bind variables are\nused or not? \nDoes it make difference in performance if the same prepared statement is used\njust with different values of bind variables? \n\n- Does postgreSQL optimizer account for statistics like histograms when bind\nvariables are used (i.e. try to built a new plan given a concrete value of\nbind variable)? \n\nThank you in advance, \nLaimis\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n",
"msg_date": "Fri, 23 Jan 2004 17:06:30 -0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing SQL: bind variables, prepared stms, histograms"
}
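As a sketch of what is available at the SQL level (assuming PostgreSQL 7.3 or later; the statement name is made up, while the animals table comes from the question): PREPARE parses and plans a statement once, and each EXECUTE supplies new parameter values. Because the plan is built before any concrete value is known, it is a generic plan, so the planner cannot consult a value-specific histogram the way it can when a literal appears in the query text:

    PREPARE animal_by_id (integer) AS
        SELECT * FROM animals WHERE id = $1;

    EXECUTE animal_by_id(42);
    EXECUTE animal_by_id(7);

    DEALLOCATE animal_by_id;

Without PREPARE, every submitted query string is parsed and planned again, even if it is textually identical to an earlier one.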
] |
[
{
"msg_contents": "I've got a table with about 10 million events in it.\n\nEach has a user_id (about 1000 users) and a event_time timestamp\ncovering a 4 year period with about 50% of the events being in the last\nyear. Some users have only dozens of events. A few have hundreds of\nthousands.\n\nThe queries usually are in the form of, where \"user_id = something and\nevent_time between something and something\".\n\nHalf of my queries index off of the user_id and half index off the\nevent_time. I was thinking this would be a perfect opportunity to use a\ndual index of (user_id,event_time) but I'm confused as to weather this\nwill help considering the size of this index given that there very few\ntuples that have the exact same timestamp as another and I'm not sure\nwhich order to put the user_id/event_time as I don't know what is meant\nwhen people on this list ask which is more selective.\n\nAlso, would it make sense for me to raise my ANALYZE value and how would\nI go about doing this?\n\nThanks for the help.\n\n\n-- \nOrion Henry <[email protected]>",
"msg_date": "23 Jan 2004 10:58:31 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "help with dual indexing"
},
{
"msg_contents": "Orion Henry <[email protected]> writes:\n> The queries usually are in the form of, where \"user_id = something and\n> event_time between something and something\".\n\n> Half of my queries index off of the user_id and half index off the\n> event_time. I was thinking this would be a perfect opportunity to use a\n> dual index of (user_id,event_time) but I'm confused as to weather this\n> will help\n\nProbably. Put the user_id as the first column of the index --- if you\nthink about the sort ordering of a multicolumn index, you will see why.\nWith user_id first, a constraint as above describes a contiguous\nsubrange of the index; with event_time first it does not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jan 2004 20:08:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help with dual indexing "
},
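A short sketch of the index Tom describes, plus the statistics-target question from the original post; the table name events is made up here, while user_id and event_time come from the post. With user_id leading, the usual query maps onto one contiguous range of the index:

    CREATE INDEX events_user_time_idx ON events (user_id, event_time);

    EXPLAIN ANALYZE
    SELECT *
      FROM events
     WHERE user_id = 42
       AND event_time BETWEEN '2003-01-01' AND '2003-06-30';

Raising the per-column statistics target (the "ANALYZE value") is done with ALTER TABLE followed by a fresh ANALYZE, for example:

    ALTER TABLE events ALTER COLUMN user_id SET STATISTICS 100;
    ANALYZE events;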
{
"msg_contents": "Thanks Tom! You're a life-saver.\n\nOn Fri, 2004-01-23 at 17:08, Tom Lane wrote:\n> Orion Henry <[email protected]> writes:\n> > The queries usually are in the form of, where \"user_id = something and\n> > event_time between something and something\".\n> \n> > Half of my queries index off of the user_id and half index off the\n> > event_time. I was thinking this would be a perfect opportunity to use a\n> > dual index of (user_id,event_time) but I'm confused as to weather this\n> > will help\n> \n> Probably. Put the user_id as the first column of the index --- if you\n> think about the sort ordering of a multicolumn index, you will see why.\n> With user_id first, a constraint as above describes a contiguous\n> subrange of the index; with event_time first it does not.\n> \n> \t\t\tregards, tom lane\n-- \nOrion Henry <[email protected]>",
"msg_date": "26 Jan 2004 12:53:16 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: help with dual indexing"
}
] |
[
{
"msg_contents": "Please tell me if this timing makes sense to you for a Celeron 433 w/\nRAM=256MB dedicated testing server. I expected some slowness, but not this\nhigh.\n\ndb_epsilon=# \\d t_active_subjects\n Table \"public.t_active_subjects\"\n Column | Type | Modifiers\n------------------------+--------------+--------------------------------------------------------------------\n id | integer | not null default\nnextval('public.t_active_subjects_id_seq'::text)\n old_id | integer |\n ext_subject | integer | not null\n ext_group | integer |\n final_grade | integer |\n type | character(1) |\n ree | date |\n borrado | boolean |\n ext_active_student | integer |\n sum_presences | integer |\n sum_hours | integer |\nIndexes: t_active_subjects_pkey primary key btree (id),\n i_t_active_subjects__ext_active_student btree (ext_active_student),\n i_t_active_subjects__ext_group btree (ext_group),\n i_t_active_subjects__ext_subject btree (ext_subject),\n i_t_active_subjects__old_id btree (old_id)\nForeign Key constraints: $4 FOREIGN KEY (ext_group) REFERENCES\nt_groups(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n $3 FOREIGN KEY (ext_subject) REFERENCES\nt_subjects(id) ON UPDATE NO ACTION ON DELETE NO\nACTION\n\ndb_epsilon=# EXPLAIN DELETE FROM t_active_subjects;\n QUERY PLAN\n-------------------------------------------------------------------------\n Seq Scan on t_active_subjects (cost=0.00..3391.73 rows=52373 width=6)\n(1 row)\n\ndb_epsilon=# EXPLAIN ANALYZE DELETE FROM t_active_subjects;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on t_active_subjects (cost=0.00..3391.73 rows=52373 width=6)\n(actual time=0.11..4651.82 rows=73700 loops=1)\n Total runtime: 3504528.15 msec\n(2 rows)\n\ndb_epsilon=# SELECT version();\n version\n---------------------------------------------------------------------------------------------------------\n PostgreSQL 7.3.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2\n20020903 (Red Hat Linux 8.0 3.2-7)\n(1 row)\n\n[root@pgsql data]# cat postgresql.conf | grep -v \\# | grep \\=\ntcpip_socket = true\nfsync = false\nLC_MESSAGES = 'en_US.UTF-8'\nLC_MONETARY = 'en_US.UTF-8'\nLC_NUMERIC = 'en_US.UTF-8'\nLC_TIME = 'en_US.UTF-8'\n\nOkay, some details:\n * The query takes to run about 3,504.52815 sec for 52,373 rows, which\naverages about 15 deletes per second.\n * Each ext_* field is a foreign key to another table's pk.\n * This is a dedicated testing server with 256 MB RAM, and is a Celeron\n433 MHz. It still has enough disk space, I think: about 200 MB.\n * Disk is 4 MB. I guess it must be about what, 4500 RPM?\n * fsync is disabled.\n\nI don't know what other info to provide...\n\nThanks in advance.\n\n--\nOctavio Alvarez Piza.\nE-mail: [email protected]\n",
"msg_date": "Fri, 23 Jan 2004 16:38:30 -0800 (PST)",
"msg_from": "\"Octavio Alvarez\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow delete times??"
},
{
"msg_contents": "Octavio Alvarez wrote:\n\n>Please tell me if this timing makes sense to you for a Celeron 433 w/\n>RAM=256MB dedicated testing server. I expected some slowness, but not this\n>high.\n> \n>\n\nWell delete is generally slow. If you want to delete the entire table \n(and your really sure)\nuse truncate.\n\nJ\n\n\n\n\n>db_epsilon=# \\d t_active_subjects\n> Table \"public.t_active_subjects\"\n> Column | Type | Modifiers\n>------------------------+--------------+--------------------------------------------------------------------\n> id | integer | not null default\n>nextval('public.t_active_subjects_id_seq'::text)\n> old_id | integer |\n> ext_subject | integer | not null\n> ext_group | integer |\n> final_grade | integer |\n> type | character(1) |\n> ree | date |\n> borrado | boolean |\n> ext_active_student | integer |\n> sum_presences | integer |\n> sum_hours | integer |\n>Indexes: t_active_subjects_pkey primary key btree (id),\n> i_t_active_subjects__ext_active_student btree (ext_active_student),\n> i_t_active_subjects__ext_group btree (ext_group),\n> i_t_active_subjects__ext_subject btree (ext_subject),\n> i_t_active_subjects__old_id btree (old_id)\n>Foreign Key constraints: $4 FOREIGN KEY (ext_group) REFERENCES\n>t_groups(id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> $3 FOREIGN KEY (ext_subject) REFERENCES\n>t_subjects(id) ON UPDATE NO ACTION ON DELETE NO\n>ACTION\n>\n>db_epsilon=# EXPLAIN DELETE FROM t_active_subjects;\n> QUERY PLAN\n>-------------------------------------------------------------------------\n> Seq Scan on t_active_subjects (cost=0.00..3391.73 rows=52373 width=6)\n>(1 row)\n>\n>db_epsilon=# EXPLAIN ANALYZE DELETE FROM t_active_subjects;\n> QUERY PLAN\n>------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on t_active_subjects (cost=0.00..3391.73 rows=52373 width=6)\n>(actual time=0.11..4651.82 rows=73700 loops=1)\n> Total runtime: 3504528.15 msec\n>(2 rows)\n>\n>db_epsilon=# SELECT version();\n> version\n>---------------------------------------------------------------------------------------------------------\n> PostgreSQL 7.3.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2\n>20020903 (Red Hat Linux 8.0 3.2-7)\n>(1 row)\n>\n>[root@pgsql data]# cat postgresql.conf | grep -v \\# | grep \\=\n>tcpip_socket = true\n>fsync = false\n>LC_MESSAGES = 'en_US.UTF-8'\n>LC_MONETARY = 'en_US.UTF-8'\n>LC_NUMERIC = 'en_US.UTF-8'\n>LC_TIME = 'en_US.UTF-8'\n>\n>Okay, some details:\n> * The query takes to run about 3,504.52815 sec for 52,373 rows, which\n>averages about 15 deletes per second.\n> * Each ext_* field is a foreign key to another table's pk.\n> * This is a dedicated testing server with 256 MB RAM, and is a Celeron\n>433 MHz. It still has enough disk space, I think: about 200 MB.\n> * Disk is 4 MB. I guess it must be about what, 4500 RPM?\n> * fsync is disabled.\n>\n>I don't know what other info to provide...\n>\n>Thanks in advance.\n>\n>--\n>Octavio Alvarez Piza.\n>E-mail: [email protected]\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL\n\n",
"msg_date": "Fri, 23 Jan 2004 16:55:28 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow delete times??"
},
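A minimal sketch of the TRUNCATE route, with the table name taken from the post. TRUNCATE reclaims the table in one step instead of visiting every row, but it may be refused while other tables hold foreign keys pointing at this one, and unlike DELETE it does not fire per-row triggers, so it only fits the "empty the whole table" case:

    TRUNCATE TABLE t_active_subjects;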
{
"msg_contents": "\"Octavio Alvarez\" <[email protected]> writes:\n> Please tell me if this timing makes sense to you for a Celeron 433 w/\n> RAM=256MB dedicated testing server. I expected some slowness, but not this\n> high.\n\nI'll bet you have foreign keys referencing this table, and the\nreferencing columns do not have indexes. PG will let you do that\n... but it makes updates and deletes horribly slow. You generally\nwant to add those indexes.\n\nIf they *are* indexed, check for datatype mismatches. That's\nanother thing that kills performance ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jan 2004 20:44:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow delete times?? "
}
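A hedged sketch of the check Tom suggests. The referencing table and column below are hypothetical stand-ins for whatever rows point at the table being emptied; every row deleted from the referenced table makes PostgreSQL look for matching rows in the referencing table, and without an index (or with mismatched datatypes, say integer versus numeric) that lookup degenerates into repeated sequential scans:

    -- hypothetical referencing table/column names
    CREATE INDEX i_t_grades__ext_active_subject
        ON t_grades (ext_active_subject);
    ANALYZE t_grades;

With the index in place, and the column the same type as the referenced primary key, the per-row foreign key checks become cheap index probes.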
] |
[
{
"msg_contents": "Hi,\n\nSorry for the long e-mail. Here is a summary of my questions:\n\nI am running osdl-dbt1 against pgsql-7.3.3. The result is at:\nhttp://khack.osdl.org/stp/286627/\n\n1. Based on the hardware and software configuration, does my database\nconfiguration make sense?\n2. Is 'defining a cursor and fetch multiple times' an efficient way to\nimplement a stored procedure?\n3. index with desc/asc is not supported in PG, why it is not needed? Is\nthere any work-around?\n4. I created a function to order the items, and created an index on that\nfunction. But the query did not pick up that index. What did I miss?\n\nThanks,\n=============\n\nThe I/O is light <10% disk utility, memory is 100% used, and CPU is\nabout 75%. My goal is to increase CPU utilization to about 85% (without\nswapping). I've tried several database parameters and it did not make\nmuch difference, I can get about 86 transactions/second. Since the same\nworkload on SAPDB gives about 200 transactions/second, I must have\nmissed some important parameters.\n\nSo, the first question is:\nBased on the hardware and software configuration, does my database\nconfiguration make sense?\n\nMy statistics showed that one transaction is responsible for the bad\nperformance. It takes 3-5 seconds to finish this transaction. The\nstoredprocedure for this transaction executes select and fetches 20\ntimes if there is record:\n\n OPEN refcur FOR SELECT i_id, i_title, a_fname, a_lname\n FROM item, author\n WHERE i_subject = _i_subject\n AND i_a_id = a_id\n ORDER BY i_pub_date DESC, i_title ASC;\n \n FETCH refcur INTO _i_id1, i_title1, a_fname1, a_lname1;\n-- RAISE NOTICE ''%,%,%,%'', _i_id1, i_title1, a_fname1, a_lname1;\n \n IF FOUND THEN\n items := items + 1;\n FETCH refcur INTO _i_id2, i_title2, a_fname2, a_lname2;\n END IF;\n IF FOUND THEN\n items := items + 1;\n FETCH refcur INTO _i_id3, i_title3, a_fname3, a_lname3;\n END IF;\n...\nThe second question is:\nIs this the efficient way to implement?\n\nThe execution plan for the query is:\n> explain analyze select i_id, i_title, a_fname, a_lname from item,\nauthor where i_subject = 'ART' AND i_a_id = 1 ORDER BY i_pub_date DESC,\ni_title ASC;\n QUERY\nPLAN \n-----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=33.95..34.57 rows=250 width=103) (actual time=0.44..0.44\nrows=0 loops=1)\n Sort Key: item.i_pub_date, item.i_title\n -> Nested Loop (cost=0.00..23.99 rows=250 width=103) (actual\ntime=0.29..0.29 rows=0 loops=1)\n -> Index Scan using i_i_subject on item (cost=0.00..5.99\nrows=1 width=64) (actual time=0.29..0.29 rows=0 loops=1)\n Index Cond: (i_subject = 'ART'::character varying)\n Filter: (i_a_id = 1::numeric)\n -> Seq Scan on author (cost=0.00..15.50 rows=250 width=39)\n(never executed)\n Total runtime: 0.57 msec\n(8 rows)\n\nI think an index on item (i_pub_date desc, i_title asc) would help. But\nfrom reading the mailing list, PG does not have this kind of index, and\nit is not needed (I could not find an answer for this). Is there any\nwork-around?\n\nI created an function to cache the order and created an index on it, but\nthe query did not pick it up. 
Do I need to rewrite the query?\n\ncreate or replace function item_order (varchar(60)) returns numeric(10)\nas '\n DECLARE\n _i_subject alias for $1;\n _i_id numeric(10);\n rec record;\n BEGIN\n select i_id\n into _i_id\n from item\n where i_subject=_i_subject\n order by i_pub_date DESC, i_title ASC;\n \n return _i_id;\n END;\n'IMMUTABLE LANGUAGE 'plpgsql';\n\ncreate index i_item_order on item (item_order(i_subject));\n\nTIA,\n-- \nJenny Zhang\nOpen Source Development Lab\n12725 SW Millikan Way, Suite 400\nBeaverton, OR 97005\n(503)626-2455 ext 31\n\n\n",
"msg_date": "Fri, 23 Jan 2004 17:34:24 -0800",
"msg_from": "Jenny Zhang <[email protected]>",
"msg_from_op": true,
"msg_subject": "query slows under load"
},
{
"msg_contents": "\nOn Fri, 23 Jan 2004, Jenny Zhang wrote:\n\n> 3. index with desc/asc is not supported in PG, why it is not needed? Is\n> there any work-around?\n\nYou can do this with index operator classes. There aren't any\nautomatically provided ones that do the reversed sort iirc, but I think\nthat's come up before with examples. I've toyed with the idea of writing\nthe reverse opclasses for at least some of the types, but haven't been\nseriously motivated to actually get it done.\n",
"msg_date": "Sun, 25 Jan 2004 13:14:05 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query slows under load"
}
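Until a reverse-sort operator class is written, one workaround that is sometimes suggested is to store a derived sort key whose ascending order matches the descending order of i_pub_date, so that a plain multicolumn index lines up with ORDER BY i_pub_date DESC, i_title ASC. This is only a sketch under that assumption; the extra column and index names are made up, and the column would have to be kept in sync (for example with a trigger):

    ALTER TABLE item ADD COLUMN i_pub_date_rev double precision;
    UPDATE item SET i_pub_date_rev = -extract(epoch FROM i_pub_date);

    CREATE INDEX i_item_subject_order
        ON item (i_subject, i_pub_date_rev, i_title);

    SELECT i_id, i_title, a_fname, a_lname
      FROM item, author
     WHERE i_subject = 'ART'
       AND i_a_id = a_id
     ORDER BY i_subject, i_pub_date_rev, i_title
     LIMIT 20;

Listing i_subject first in the ORDER BY does not change the result (it is constant in the WHERE clause) but makes it easier for older planners to recognize that the index already delivers the requested order; whether the index is actually chosen still depends on the LIMIT and on the statistics.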
] |
[
{
"msg_contents": "Hi\n\nI have a php script and i make a pg_pconnect\nIf i want to make 4-10 pg_query in that script\nHave i to close the connection at end of the script?\n(i would say yes, is it right?)\n\nSorry I m a little bit confused about the persistent thing!!\nIs it smart to use persistent connections at all if i expect 100K Users to hit the script in an hour and the script calls up to 10-15 pg functions?\nI have at the mom one function but the server needs 500 ms, its a little bit too much i think, and it crashed when i had 20K users\n\nThanks\nBye\n\n",
"msg_date": "Sat, 24 Jan 2004 18:32:09 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Persistent Connections "
},
{
"msg_contents": "Hi,\n\[email protected] wrote:\n\n> Hi\n>\n> I have a php script and i make a pg_pconnect\n> If i want to make 4-10 pg_query in that script\n> Have i to close the connection at end of the script?\n> (i would say yes, is it right?)\n\n\nIf you want to make multiple pg_query's in a page you can, and you can \nuse the same connection. You dont have to use persistent connections for \nthis. Just open the connection and fire off the different queries. The \npersistent connection remains open between different pages loading, \nwhich is supposedly faster because you dont have the overhead of opening \nthe connection.\n\nIf you want to use a persistent connection then definitely dont close it \nat the bottom of the page. If you want to use the other connection \n(pg_connect, non-persistent) then you dont have to close this connection \nat the bottom of the page because PHP does it for you, although you can \nif you are feeling nice ;-).\n\n>\n> Sorry I m a little bit confused about the persistent thing!!\n> Is it smart to use persistent connections at all if i expect 100K \n> Users to hit the script in an hour and the script calls up to 10-15 pg \n> functions?\n> I have at the mom one function but the server needs 500 ms, its a \n> little bit too much i think, and it crashed when i had 20K users\n>\n\nUse the persistent connection but make sure the parameters in \npostgresql.conf match up with the Apache config. The specific settings \nare MaxClients in httpd.conf and max_connections in postgresql.conf. \nMake sure that max_connections is at least as big as MaxClients for \nevery database that your PHP scripts connect to.\n\n> Thanks\n> Bye\n\n\n\n",
"msg_date": "Sun, 25 Jan 2004 11:28:28 +0000",
"msg_from": "Nick Barr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Persistent Connections"
},
{
"msg_contents": "\"[email protected] (Nick Barr)\" stated in \ncomp.databases.postgresql.performance:\n\n> [email protected] wrote:\n[sNip]\n>> Sorry I m a little bit confused about the persistent thing!!\n>> Is it smart to use persistent connections at all if i expect 100K \n>> Users to hit the script in an hour and the script calls up to 10-15 pg \n>> functions?\n>> I have at the mom one function but the server needs 500 ms, its a \n>> little bit too much i think, and it crashed when i had 20K users\n> \n> Use the persistent connection but make sure the parameters in \n> postgresql.conf match up with the Apache config. The specific settings \n> are MaxClients in httpd.conf and max_connections in postgresql.conf. \n> Make sure that max_connections is at least as big as MaxClients for \n> every database that your PHP scripts connect to.\n\n \tDo you happen to have (or know where to get) some sample configuration \nfiles for Apache 2 and PostgreSQL for this? The documentation I've found \nso far is pretty sparse, and sample files would be very helpful.\n\n \tTHanks in advance.\n\n-- \nRandolf Richardson - [email protected]\nVancouver, British Columbia, Canada\n\n\"We are anti-spammers. You will confirm\nsubscriptions. Resistance is futile.\"\n\nPlease do not eMail me directly when responding\nto my postings in the newsgroups.\n",
"msg_date": "Mon, 26 Jan 2004 06:35:34 +0000 (UTC)",
"msg_from": "Randolf Richardson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Persistent Connections"
},
{
"msg_contents": "Randolf Richardson wrote:\n\n>\"[email protected] (Nick Barr)\" stated in \n>comp.databases.postgresql.performance:\n>\n> \n>\n>>[email protected] wrote:\n>> \n>>\n>[sNip]\n> \n>\n>>>Sorry I m a little bit confused about the persistent thing!!\n>>>Is it smart to use persistent connections at all if i expect 100K \n>>>Users to hit the script in an hour and the script calls up to 10-15 pg \n>>>functions?\n>>>I have at the mom one function but the server needs 500 ms, its a \n>>>little bit too much i think, and it crashed when i had 20K users\n>>> \n>>>\n>>Use the persistent connection but make sure the parameters in \n>>postgresql.conf match up with the Apache config. The specific settings \n>>are MaxClients in httpd.conf and max_connections in postgresql.conf. \n>>Make sure that max_connections is at least as big as MaxClients for \n>>every database that your PHP scripts connect to.\n>> \n>>\n>\n> \tDo you happen to have (or know where to get) some sample configuration \n>files for Apache 2 and PostgreSQL for this? The documentation I've found \n>so far is pretty sparse, and sample files would be very helpful.\n>\n> \t\n> \n>\nBeware that persistent connections in PHP behave a little differently \nthan you would think. The connections stays open between an apache \nprocess and postgres. So each process has its own connection and you \nmay not hit the same process on each request to the apache server. \nTemporary tables are not dropped automatically between refreshes on \npersistent connections. An example of this is to enable persistent \nconnections and execute \"CREATE TEMPORARY TABLE foo ( id INTEGER );\" \n\n$conn = pg_pconnect( ... );\nif (!$result = pg_query($conn, \"CREATE TEMPORARY TABLE tmp_foo ( id \nINTEGER );\")) {\n echo pg_result_error($result) ;\n} else {\n echo \"created ok!\";\n}\n\nAfter a couple of refreshes you will get an error that states the table \nalready exists. This was a pain to learn, especially while I was doing \nthese operations inside of transactions.\n\nOn most of my servers the connect time for postgresql was 6ms or less, \nso I disabled persistent connections altogether so that I could be \nassured that temporary tables and all php launched postgresql sessions \nwere properly reset.\n\nAs far as I know, there is no way to reset the sesssion ( cleaning up \ntemporary tables, etc ) automatically with an SQL statement without \nclosing the connection\n\n",
"msg_date": "Tue, 20 Apr 2004 10:49:44 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Persistent Connections"
}
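When the temporary table is only needed for the duration of a single transaction, one way to keep a pooled connection clean (assuming a PostgreSQL release recent enough to have the ON COMMIT clause on CREATE TEMPORARY TABLE; the table name comes from the example above) is to create it with ON COMMIT DROP inside the transaction, so it disappears at COMMIT and the next request that reuses the persistent connection starts without it:

    BEGIN;
    CREATE TEMPORARY TABLE tmp_foo (id INTEGER) ON COMMIT DROP;
    -- ... use tmp_foo here ...
    COMMIT;  -- tmp_foo is gone now, even on a persistent connection

This does not help if the table has to survive across several requests; in that case, dropping it explicitly before re-creating it is about the only option on a persistent connection.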
] |
[
{
"msg_contents": "I'm conducting some benchmarking (mostly for fun and learning), and one \npart of it is benchmarking PostgreSQL (7.4.1, on FreeBSD 4.9 and 5.2). \nI'm using pgbench from the contrib directory, but I'm puzzled by the \nresults. I have little experience in benchmarking, but I don't think I \nshould be getting the scattered results I have.\n\n- the computer is P3 @ 933MHz, 1Gb RAM\n- I'm running pgbench with 35 clients and 50 transactions/client\n- benchmark results are differentiating by about +/- 6 TPS. The results \nare between 32 TPS and 44 TPS\n- the results seem to be getting worse over time then suddenly jumping \nto the maximum (saw-tooth like). Sometime there is even (very noticable \nas a pattern!) indication of more-or-less regular alteration between the \nminimum and maximum values (e.g. first measurement yields 32, second \nyields 44, third again 32 or 31, etc...)\n- running vacuumdb -z -f on the database does not influence the results \nin predictable ways\n- waiting (sleeping) between pgbench runs (either regular or random) \ndoes not influence it in predictable ways\n- the same patterns appear between operating systems (FreeBSD 4.9 and \n5.2) and reinstalls of it (the numbers are ofcourse somewhat different)\n\npostgresql.conf contains only these active lines:\n max_connections = 40\n shared_buffers = 10000\n sort_mem = 8192\n vacuum_mem = 32768\n\nI've used these settings as they are expected to be used on a regular \nwork load when the server goes into production. I've upped vacuum_mem as \nit seems to shorten vacuum times dramaticaly (I assume the memory is \nallocated when needed as the postgresql process is only 88MB in size \n(80Mb shared buffers)).\n\nWhat I'm really asking here is: are these results normal? If not, can \nthey be improved, or is there a better off-the-shelf PostgreSQL \nbenchmark tool?\n\n\n",
"msg_date": "Sun, 25 Jan 2004 02:01:53 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Benchmarking PostgreSQL?"
},
{
"msg_contents": "Ivan Voras <[email protected]> writes:\n> I'm conducting some benchmarking (mostly for fun and learning), and one \n> part of it is benchmarking PostgreSQL (7.4.1, on FreeBSD 4.9 and 5.2). \n> I'm using pgbench from the contrib directory, but I'm puzzled by the \n> results.\n\nIt is notoriously hard to get reproducible results from pgbench.\nHowever...\n\n> - I'm running pgbench with 35 clients and 50 transactions/client\n\n(1) what scale factor did you use to size the database? One of the\ngotchas is that you need to use a scale factor at least as large as the\nnumber of clients you are testing. The scale factor is equal to the\nnumber of rows in the \"branches\" table, and since every transaction\nwants to update some row of branches, you end up mostly measuring the\neffects of update contention if the scale factor is less than about\nthe number of clients. scale 1 is particularly deadly, it means all\nthe transactions get serialized :-(\n\n(2) 50 xacts/client is too small to get anything reproducible; you'll\nmostly be measuring startup transients. I usually use 1000 xacts/client.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 24 Jan 2004 21:55:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarking PostgreSQL? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> It is notoriously hard to get reproducible results from pgbench.\n> However...\n> \n> \n>>- I'm running pgbench with 35 clients and 50 transactions/client\n> \n> \n> (1) what scale factor did you use to size the database? One of the\n> gotchas is that you need to use a scale factor at least as large as the\n\nI forgot to mention that - I read the pgbench README, and the scale \nfactor was set to 40.\n\n> (2) 50 xacts/client is too small to get anything reproducible; you'll\n> mostly be measuring startup transients. I usually use 1000 xacts/client.\n\nI was using 100 and 50, hoping that the larger value will help \nreproducability and the smaller just what you said - to measure startup \ntime. What I also forgot to mention was that the numbers I was talking \nabout were got by using '-C' pgbench switch. Without it the results wary \nfrom about 60 and 145 (same 'alternating' effects, etc).\n\nThanks, I will try 1000 transactions!\n\nThere's another thing I'm puzzled about: I deliberately used -C switch \nin intention to measure connection time, but with it, the numbers \ndisplayed by pgbench for 'tps with' and 'tps without connection time' \nare same to the 6th decimal place. Without -C, both numbers are more \nthen doubled and are different by about 2-3 tps. (I was expecting that \nwith -C the 'tps with c.t.' would be much lower than 'tps without c.t.').\n\n(the README is here: \nhttp://developer.postgresql.org/cvsweb.cgi/pgsql-server/contrib/pgbench/README.pgbench)\n\n\n",
"msg_date": "Sun, 25 Jan 2004 13:23:23 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Benchmarking PostgreSQL?"
}
] |
[
{
"msg_contents": "\n\nFirst of, thanks, Tom.\n\nAlthough I've been very careful on this kind of things, looks like I\nmissed one index on a referencing column. Still, I don't allow an entire\ndelete of a table if it has referencing columns with values, so at the\nmoment of the deletion, it has no rows at all.\n\nI checked datatype mismatches, and there are none. All my FKs are\nintegers, like the referenced column of the referenced table.\n\nI was thinking on dropping the indexes before doing the deletes, but\nJoshua suggested using TRUNCATE instead.\n\nThanks.\nOctavio.\n\nTom Lane said:\n> \"Octavio Alvarez\" <[email protected]> writes:\n>> Please tell me if this timing makes sense to you for a Celeron 433 w/\nRAM=256MB dedicated testing server. I expected some slowness, but not\nthis\n>> high.\n>\n> I'll bet you have foreign keys referencing this table, and the\n> referencing columns do not have indexes. PG will let you do that ...\nbut it makes updates and deletes horribly slow. You generally want to\nadd those indexes.\n>\n> If they *are* indexed, check for datatype mismatches. That's\n> another thing that kills performance ...\n>\n> \t\t\tregards, tom lane\n>\n\n\n-- \nOctavio Alvarez Piza.\nE-mail: [email protected]\n\n\n\n-- \nOctavio Alvarez Piza.\nE-mail: [email protected]\n",
"msg_date": "Sat, 24 Jan 2004 18:12:42 -0800 (PST)",
"msg_from": "\"Octavio Alvarez\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow delete times??"
}
] |
[
{
"msg_contents": "Hi all,\n\n First of all thanks to Josh and Richard for their replies. What I have\ndone to test\ntheir indications is the following. I have created a new table identical to\nSTATISTICS,\nand an index over the TIMESTAMP_IN field.\n\nCREATE TABLE STATISTICS2\n(\n STATISTIC_ID NUMERIC(10) NOT NULL DEFAULT\n NEXTVAL('STATISTIC_ID_SEQ')\n CONSTRAINT pk_st_statistic2_id PRIMARY KEY,\n TIMESTAMP_IN TIMESTAMP,\n VALUE NUMERIC(10)\n);\n\nCREATE INDEX i_stats2_tin ON STATISTICS2(TIMESTAMP_IN);\n\nAfter that I inserted the data from STATISTICS and vacuumed the DB:\n\n INSERT INTO STATISTICS2 ( SELECT * FROM STATISTICS );\n vacuumdb -f -z -d test\n\nonce the vacuum has finished I do the following query\n\nexplain analyze select * from statistics2 where timestamp_in <\nto_timestamp( '20031201', 'YYYYMMDD' );\nNOTICE: QUERY PLAN:\n\nSeq Scan on statistics2 (cost=0.00..638.00 rows=9289 width=35) (actual\ntime=0.41..688.34 rows=27867 loops=1)\nTotal runtime: 730.82 msec\n\nThat query is not using the index. Anybody knows what I'm doing wrong?\n\nThank you very much\n\n--\nArnau\n\n\n",
"msg_date": "Mon, 26 Jan 2004 15:12:38 +0100",
"msg_from": "\"Arnau\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queries with timestamp II"
},
{
"msg_contents": "Arnau wrote:\n\n> explain analyze select * from statistics2 where timestamp_in <\n> to_timestamp( '20031201', 'YYYYMMDD' );\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on statistics2 (cost=0.00..638.00 rows=9289 width=35) (actual\n> time=0.41..688.34 rows=27867 loops=1)\n> Total runtime: 730.82 msec\n> \n> That query is not using the index. Anybody knows what I'm doing wrong?\n\nSince it expects large number of rows will be returned, it is favouring \nsequential scan.\n\nGiven how the estimates have differed, i.e. estimated 9289 v/s actual 27867, I \nsugest you up the statistics for the table using alter table. Check\nhttp://www.postgresql.org/docs/current/static/sql-altertable.html\n\n\nHTH\n\n Shridhar\n",
"msg_date": "Mon, 26 Jan 2004 20:08:13 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries with timestamp II"
},
{
"msg_contents": "Dnia 2004-01-26 15:12, U�ytkownik Arnau napisa�:\n> Hi all,\n> \n> First of all thanks to Josh and Richard for their replies. What I have\n> done to test\n> their indications is the following. I have created a new table identical to\n> STATISTICS,\n> and an index over the TIMESTAMP_IN field.\n> \n> CREATE TABLE STATISTICS2\n> (\n> STATISTIC_ID NUMERIC(10) NOT NULL DEFAULT\n> NEXTVAL('STATISTIC_ID_SEQ')\n> CONSTRAINT pk_st_statistic2_id PRIMARY KEY,\n> TIMESTAMP_IN TIMESTAMP,\n> VALUE NUMERIC(10)\n> );\n\nDo you really have to use numeric as primary key? Integer datatypes \n(int4/int8) are much faster than numeric.\n\n> \n> CREATE INDEX i_stats2_tin ON STATISTICS2(TIMESTAMP_IN);\n> \n> After that I inserted the data from STATISTICS and vacuumed the DB:\n> \n> INSERT INTO STATISTICS2 ( SELECT * FROM STATISTICS );\n> vacuumdb -f -z -d test\n> \n> once the vacuum has finished I do the following query\n> \n> explain analyze select * from statistics2 where timestamp_in <\n> to_timestamp( '20031201', 'YYYYMMDD' );\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on statistics2 (cost=0.00..638.00 rows=9289 width=35) (actual\n> time=0.41..688.34 rows=27867 loops=1)\n> Total runtime: 730.82 msec\n> \n> That query is not using the index. Anybody knows what I'm doing wrong?\n\nOver 25000 rows match your condition:\ntimestamp_in < to_timestamp( '20031201', 'YYYYMMDD' );\n\nHow many rows do you have in your table? It's possible, that seq scan is \njust faster than using index when getting so many rows output.\n\nRegards,\nTomasz Myrta\n",
"msg_date": "Mon, 26 Jan 2004 16:01:45 +0100",
"msg_from": "Tomasz Myrta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries with timestamp II"
}
] |
[
{
"msg_contents": "I have an application that I'm porting from MSSQL to PostgreSQL. Part of this\napplication consists of hundreds of stored procedures that I need to convert\nto Postgres functions ... or views?\n\nAt first I was going to just convert all MSSQL procedures to Postgres functions.\nBut now that I'm looking at it, a lot of them may be candidates for views. A\nlot of them take on the format of:\n\nSELECT a.cola, b.colb, c.colc\nFROM a JOIN b JOIN c\nWHERE a.prikey=$1\n\n(this is slightly oversimplified, but as a generalization of hundreds of\nfunctions, it's pretty accurate)\n\nNow, I know this questions is pretty generalized, and I intend to test before\nactually commiting to a particular course of action, but I'm very early in the\nconversion and I'm curious as to whether people with more experience than I\nthink that views will provide better performance than functions containing\nSQL statements like the above. The client is _very_ interested in performance\n(just like anyone who needs to market things these days ... apparently, if you\nadmit that something might take a few seconds in your advertising, you're sunk)\n\nAny opinions are welcome. Also, if this is a relatively unknown thing, I'd\nbe curious to know that as well, because then it would be worthwhile for me\nto record and publish my experience.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Mon, 26 Jan 2004 10:19:14 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "On the performance of views"
},
{
"msg_contents": "Bill Moran wrote:\n\n> I have an application that I'm porting from MSSQL to PostgreSQL. Part \n> of this\n> application consists of hundreds of stored procedures that I need to \n> convert\n> to Postgres functions ... or views?\n> \n> At first I was going to just convert all MSSQL procedures to Postgres \n> functions.\n> But now that I'm looking at it, a lot of them may be candidates for \n> views. A\n> lot of them take on the format of:\n> \n> SELECT a.cola, b.colb, c.colc\n> FROM a JOIN b JOIN c\n> WHERE a.prikey=$1\n\nMake sure that you typecase correctly. It makes a differnce of order of \nmagnitude when you say 'where intpkey=<somevalue>::int' rather than 'where \nintpkey=<somevalue>'.\n\nIt is called typecasting and highly recommened in postgresql for correctly \nchoosing indexes.\n\nI remember another post on some list, which said pl/pgsql seems to be very \nstrongly typed language compared to MSSQL counterpart. So watch for that as well.\n\n> \n> (this is slightly oversimplified, but as a generalization of hundreds of\n> functions, it's pretty accurate)\n> \n> Now, I know this questions is pretty generalized, and I intend to test \n> before\n> actually commiting to a particular course of action, but I'm very early \n> in the\n> conversion and I'm curious as to whether people with more experience than I\n> think that views will provide better performance than functions containing\n> SQL statements like the above. The client is _very_ interested in \n\nTo my understanding, views are expanded at runtime and considered while \npreparing plan for the complete (and possibly bigger) query(Consider a view \njoined with something else). That is not as easy/possible if at all, when it is \nfunction. For postgresql query planner, the function is a black box(rightly so, \nI would say).\n\nSo using views opens possibility of changing query plans if required. Most of \nthe times that should be faster than using them as functions.\n\nOf course, the standard disclaimer, YMMV. Try yourself.\n\nCorrect me if I am wrong.\n\nHTH\n\n Shridhar\n",
"msg_date": "Mon, 26 Jan 2004 20:59:46 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On the performance of views"
},
{
"msg_contents": "Bill Moran <[email protected]> writes:\n> At first I was going to just convert all MSSQL procedures to Postgres functions.\n> But now that I'm looking at it, a lot of them may be candidates for views. A\n> lot of them take on the format of:\n\n> SELECT a.cola, b.colb, c.colc\n> FROM a JOIN b JOIN c\n> WHERE a.prikey=$1\n\nYou'd probably be better off using views, if making that significant a\nnotational change is feasible for you. Functions that return multiple\ncolumns are notationally messy in Postgres. A view-based solution would\nbe more flexible and likely have better performance.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Jan 2004 11:43:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On the performance of views "
},
{
"msg_contents": "Bill,\n\n> > SELECT a.cola, b.colb, c.colc\n> > FROM a JOIN b JOIN c\n> > WHERE a.prikey=$1\n\nIf your views are simple, PostgreSQL will be able to \"push down\" any filter \ncriteria into the view itself. For example,\n\nCREATE view_a AS\nSELECT a.cola, b.colb, c.colc\nFROM a JOIN b JOIN c;\n\nSELECT * FROM view_a\nWHERE a.prikey = 2334432;\n\nwill execute just like:\n\nSELECT a.cola, b.colb, c.colc\nFROM a JOIN b JOIN c\nWHERE a.prikey = 2334432;\n\nHowever, this does not work for really complex views, which have to be \nmaterialized or executed as a sub-loop.\n\nThe \"Procedures faster than views\" thing is a SQL Server peculiarity which is \na result of MS's buggering up views since they bought the code from Sybase.\n\n> To my understanding, views are expanded at runtime and considered while \n> preparing plan for the complete (and possibly bigger) query(Consider a view \n> joined with something else). That is not as easy/possible if at all, when it \nis \n> function. For postgresql query planner, the function is a black box(rightly \nso, \n> I would say).\n\nWell, as of 7.4 SQL functions are inlined. And simple PL/pgSQL functions \nwill be \"prepared\". So it's possible that either could execute as fast as a \nview.\n\nAlso, if your client is really concerned about no-holds-barred speed, you \nshould investigate prepared queries.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 26 Jan 2004 09:09:41 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On the performance of views"
},
{
"msg_contents": "Tom Lane wrote:\n> Bill Moran <[email protected]> writes:\n> \n>>At first I was going to just convert all MSSQL procedures to Postgres functions.\n>>But now that I'm looking at it, a lot of them may be candidates for views. A\n>>lot of them take on the format of:\n> \n>>SELECT a.cola, b.colb, c.colc\n>>FROM a JOIN b JOIN c\n>>WHERE a.prikey=$1\n> \n> You'd probably be better off using views, if making that significant a\n> notational change is feasible for you. Functions that return multiple\n> columns are notationally messy in Postgres. A view-based solution would\n> be more flexible and likely have better performance.\n\nWell, I don't see a huge difference in how the application will be built.\nBasically, PQexec calls will have a string like\n\"SELECT * FROM view_name WHERE prikey=%i\" instead of\n\"SELECT * FROM function_name(%i)\" ... which really doesn't make life much\nmore difficult (unless there's something I'm missing?)\n\nThanks for the input, Tom. I'll definately try out views where possible to\nsee if it improves things.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Mon, 26 Jan 2004 12:17:41 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: On the performance of views"
},
{
"msg_contents": "Shridhar Daithankar wrote:\n> Bill Moran wrote:\n> \n>> I have an application that I'm porting from MSSQL to PostgreSQL. Part \n>> of this\n>> application consists of hundreds of stored procedures that I need to \n>> convert\n>> to Postgres functions ... or views?\n>>\n>> At first I was going to just convert all MSSQL procedures to Postgres \n>> functions.\n>> But now that I'm looking at it, a lot of them may be candidates for \n>> views. A\n>> lot of them take on the format of:\n>>\n>> SELECT a.cola, b.colb, c.colc\n>> FROM a JOIN b JOIN c\n>> WHERE a.prikey=$1\n> \n> Make sure that you typecase correctly. It makes a differnce of order of \n> magnitude when you say 'where intpkey=<somevalue>::int' rather than \n> 'where intpkey=<somevalue>'.\n> \n> It is called typecasting and highly recommened in postgresql for \n> correctly choosing indexes.\n> \n> I remember another post on some list, which said pl/pgsql seems to be \n> very strongly typed language compared to MSSQL counterpart. So watch for \n> that as well.\n\nOh yeah. This particular difference is causing a lot of headaches, as I have\nto track down the type of each field to create a new type for each function.\nMSSQL seems to be _very_ loosely typed, in that it will return anything you\nwant and determine the return type at run time.\n\nSome functions they prototyped in MSSQL even return different types, based\non certian parameters, I'm not sure how I'll do this in Postgres, but I'll\nhave to figure something out.\n\n>> (this is slightly oversimplified, but as a generalization of hundreds of\n>> functions, it's pretty accurate)\n>>\n>> Now, I know this questions is pretty generalized, and I intend to test \n>> before\n>> actually commiting to a particular course of action, but I'm very \n>> early in the\n>> conversion and I'm curious as to whether people with more experience \n>> than I\n>> think that views will provide better performance than functions \n>> containing\n>> SQL statements like the above. The client is _very_ interested in \n> \n> To my understanding, views are expanded at runtime and considered while \n> preparing plan for the complete (and possibly bigger) query(Consider a \n> view joined with something else). That is not as easy/possible if at \n> all, when it is function. For postgresql query planner, the function is \n> a black box(rightly so, I would say).\n> \n> So using views opens possibility of changing query plans if required. \n> Most of the times that should be faster than using them as functions.\n> \n> Of course, the standard disclaimer, YMMV. Try yourself.\n\nThanks. I think I'm going to track the performance of many of these\nfunctions and write up what I discover.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Tue, 27 Jan 2004 14:39:21 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: On the performance of views"
},
{
"msg_contents": "Bill,\n\n> Some functions they prototyped in MSSQL even return different types, based\n> on certian parameters, I'm not sure how I'll do this in Postgres, but I'll\n> have to figure something out.\n\nWe support that as of 7.4.1 to an extent; check out \"Polymorphic Functions\".\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Tue, 27 Jan 2004 12:07:33 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On the performance of views"
},
{
"msg_contents": "Josh Berkus wrote:\n\n> Bill,\n> \n> \n>>Some functions they prototyped in MSSQL even return different types, based\n>>on certian parameters, I'm not sure how I'll do this in Postgres, but I'll\n>>have to figure something out.\n> \n> \n> We support that as of 7.4.1 to an extent; check out \"Polymorphic Functions\".\n\nTo my understanding, polymorphism means more than one function with same name \nbut different signature(Sorry C++ days!!).\n\nThat still can not return rwos of two types in one call. At any moment, rowset \nreturned by a function call would be homogenous.\n\nIs MSSQL allows to mix rows of two types in single function invocation, I am \nsure that would be a hell lot of porting trouble..\n\nJust a thought..\n\n Shridhar\n",
"msg_date": "Wed, 28 Jan 2004 14:13:55 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On the performance of views"
},
{
"msg_contents": "Shridhar Daithankar wrote:\n> Josh Berkus wrote:\n> \n>> Bill,\n>>\n>>> Some functions they prototyped in MSSQL even return different types, \n>>> based\n>>> on certian parameters, I'm not sure how I'll do this in Postgres, but \n>>> I'll\n>>> have to figure something out.\n>>\n>> We support that as of 7.4.1 to an extent; check out \"Polymorphic \n>> Functions\".\n> \n> To my understanding, polymorphism means more than one function with same \n> name but different signature(Sorry C++ days!!).\n> \n> That still can not return rwos of two types in one call. At any moment, \n> rowset returned by a function call would be homogenous.\n> \n> Is MSSQL allows to mix rows of two types in single function invocation, \n> I am sure that would be a hell lot of porting trouble..\n\nThese are two seperate problems.\n\n1) Returning a homogenious set of rows, but the composition of those rows\n will not be known until run time, as a different set of logic will be\n done depending on the values of some parameters.\n2) Returning what MSSQL calls \"combined recordsets\", which are many rows,\n but the rows are not homogenious.\n\nAs I see it, #1 can be solved by polymorphism in Postgres functions.\n\n#2 has to be solved at the application level. My solution so far has\nbeen to create multiple Postgres functions, call each one in turn, join\nthe results in C, and return them as a structure via SOAP to the client.\nMay not be the easiest way to get it working, but it's working so far.\n(although I'm always open to suggestions if someone knows of a better\nway)\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Wed, 28 Jan 2004 11:02:21 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: On the performance of views"
},
{
"msg_contents": "Shridhar, Bill,\n\n> > Is MSSQL allows to mix rows of two types in single function invocation,\n> > I am sure that would be a hell lot of porting trouble..\n\nThere's also the question of whether or not PG would every want to do this. \nFrankly, as a once-upon-a-time SQL Server application developer, I found the \nability to return multiple rowsets from a single SQL Server procedure pretty \nuseless, and a source of endless debugging if we tried to implement it.\n\n> 1) Returning a homogenious set of rows, but the composition of those rows\n> will not be known until run time, as a different set of logic will be\n> done depending on the values of some parameters.\n\nThis can be done with Set Returning Functions. The issue is that the call to \nthe function requires special syntax, and the program calling the function \nmust know what columns are going to be returned at the time of the call. \nHmmm, is that clear or confusing?\n\n> #2 has to be solved at the application level. My solution so far has\n> been to create multiple Postgres functions, call each one in turn, join\n> the results in C, and return them as a structure via SOAP to the client.\n> May not be the easiest way to get it working, but it's working so far.\n> (although I'm always open to suggestions if someone knows of a better\n> way)\n\nSee my comment above. I frankly don't understand what the use of a \nnon-homogenous recordset is. Can you explain?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 29 Jan 2004 10:41:19 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On the performance of views"
},
{
"msg_contents": "Josh Berkus wrote:\n> Shridhar, Bill,\n> \n>>>Is MSSQL allows to mix rows of two types in single function invocation,\n>>>I am sure that would be a hell lot of porting trouble..\n> \n> There's also the question of whether or not PG would every want to do this. \n> Frankly, as a once-upon-a-time SQL Server application developer, I found the \n> ability to return multiple rowsets from a single SQL Server procedure pretty \n> useless, and a source of endless debugging if we tried to implement it.\n\nWell, I would have agreed with the uselessness, until this project. The\n\"source of endless debugging\" frightens me!\n\n>>1) Returning a homogenious set of rows, but the composition of those rows\n>> will not be known until run time, as a different set of logic will be\n>> done depending on the values of some parameters.\n> \n> This can be done with Set Returning Functions. The issue is that the call to \n> the function requires special syntax, and the program calling the function \n> must know what columns are going to be returned at the time of the call. \n> Hmmm, is that clear or confusing?\n\nClear as mud. In my case, my application simply doesn't care what row of\nwhat kind are returned. See, I'm writing the server end, and all said and\ndone, it's really just glue (frighteningly thick glue, but glue nonetheless)\n\nBasically, all I do is call each query in turn until I've collected all the\nresults, then marshall the results in to a SOAP XML response (using gsoap,\nif anyone's curious) and give them back to the client application. It's\nthe client app's job to figure out what to do with them, not mine. I\nnever would have written it this way on my own, but the client app is\nalready written, so as I migrate it to the client-server model, the\nprogrammers who wrote the client app are specifying what they expect me\nto provide them.\n\nThe only advantage I see is that combining a number of result sets into a\nsingle response reduces the number of round trips between the client and\nserver.\n\nIf Postgres supported combined recordsets, it would simplify my C code\nsomewhat, and possibly speed up things a bit by making less calls between\nthe soap server and Postgrees ... overall, I don't see a huge advantage\nto it.\n\n>>#2 has to be solved at the application level. My solution so far has\n>>been to create multiple Postgres functions, call each one in turn, join\n>>the results in C, and return them as a structure via SOAP to the client.\n>>May not be the easiest way to get it working, but it's working so far.\n>>(although I'm always open to suggestions if someone knows of a better\n>>way)\n> \n> See my comment above. I frankly don't understand what the use of a \n> non-homogenous recordset is. Can you explain?\n\nI hope what I already mentioned explains enough. If I understand the\napplication enough (and it's amazing how little I understand about it,\nconsidering I'm writing the server end!) what they're doing with these\ncombined recordsets is driving their forms. When a form instantiates,\nit makes a single soap call that causes me to return one of these\nnon-homogenious recordsets. 
One row may have data on how to display\nthe form, while another has data on what buttons are available, and\nanother has the actual data for the header of the form, while the\nremaing rows might have data to fill in the lower (grid) portion of\nthe form.\n\nIf I had designed this, I would just have done the same thing with\na homogenious recordset that had values set to null where they weren't\napropriate. This would have bloated the data being transfered, but\notherwise would have worked in the same way.\n\nNow that I'm aware of MSSQL's combined recordset capability, I'm not\nsure if I would do it differently or not (were I developing on a system\nthat had that capability) I probably won't have a concept of whether\nor not I think this is a good idea until this project is further along.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Thu, 29 Jan 2004 14:12:14 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: On the performance of views"
},
{
"msg_contents": "Bill,\n\nFirst off: discussion moved to the SQL list, where it really belongs.\n\n> Well, I would have agreed with the uselessness, until this project. The\n> \"source of endless debugging\" frightens me!\n\nWell, the last time I tried to use this capability was SQL Server 7. On that \nmodel, the problems I found were:\n1) There was no good way to differentiate the recordsets returned; you had to \nkeep careful track of what order they were in and put in \"fillers\" for \nrecordsets that didn't get returned. \n2) Most of the MS client technology (ODBC, ADO) was not prepared to handle \nmultiple recordsets. I ended up hiring a C-based COM hacker to write me a \ncustom replacement for ADO so that we could handle procedure results \nreliably.\n\nAll in all, it wasn't worth it and if I had the project to do over again, I \nwould have chosen a different approach.\n\n> > This can be done with Set Returning Functions. The issue is that the\n> > call to the function requires special syntax, and the program calling the\n> > function must know what columns are going to be returned at the time of\n> > the call. Hmmm, is that clear or confusing?\n>\n> Clear as mud. In my case, my application simply doesn't care what row of\n> what kind are returned. See, I'm writing the server end, and all said and\n> done, it's really just glue (frighteningly thick glue, but glue\n> nonetheless)\n\nTo be clearer: You can create a Set Returning Function (SRF) without a \nclearly defined set of return columns, and just have it return \"SETOF \nRECORD\". However, when you *use* that function, the query you use to call \nit needs to have a clear idea of what columns will be returned, or you get no \ndata.\n\nAll of this is very hackneyed, as I'm sure you realize. Overall, I'd say \nthat the programming team you've been inflicted with don't like relational \ndatabases, or at least have no understanding of them.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 29 Jan 2004 11:21:54 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Set-Returning Functions WAS: On the performance of\n views"
},
{
"msg_contents": "Josh Berkus wrote:\n> Bill,\n> \n> First off: discussion moved to the SQL list, where it really belongs.\n\nTrue, it started out as [PERFORM], but is no longer.\n\n>>Well, I would have agreed with the uselessness, until this project. The\n>>\"source of endless debugging\" frightens me!\n> \n> Well, the last time I tried to use this capability was SQL Server 7. On that \n> model, the problems I found were:\n> 1) There was no good way to differentiate the recordsets returned; you had to \n> keep careful track of what order they were in and put in \"fillers\" for \n> recordsets that didn't get returned. \n> 2) Most of the MS client technology (ODBC, ADO) was not prepared to handle \n> multiple recordsets. I ended up hiring a C-based COM hacker to write me a \n> custom replacement for ADO so that we could handle procedure results \n> reliably.\n\nWell, they're already handling what MSSQL gives them in their prototype, so\nthat's not my problem.\n\n>>>This can be done with Set Returning Functions. The issue is that the\n>>>call to the function requires special syntax, and the program calling the\n>>>function must know what columns are going to be returned at the time of\n>>>the call. Hmmm, is that clear or confusing?\n>>\n>>Clear as mud. In my case, my application simply doesn't care what row of\n>>what kind are returned. See, I'm writing the server end, and all said and\n>>done, it's really just glue (frighteningly thick glue, but glue\n>>nonetheless)\n> \n> To be clearer: You can create a Set Returning Function (SRF) without a \n> clearly defined set of return columns, and just have it return \"SETOF \n> RECORD\". However, when you *use* that function, the query you use to call \n> it needs to have a clear idea of what columns will be returned, or you get no \n> data.\n\nI don't understand at all. If I do \"SELECT * FROM set_returning_function()\"\nand all I'm going to do is iterate through the columns and rows, adding them\nto a two dimensional array that will be marshalled as a SOAP message, what\nabout not knowing the nature of the return set can cause me to get no data?\n\n> All of this is very hackneyed, as I'm sure you realize.\n\nWell, the way this project is being done tends to cause that. It was written\nin VB, it's being converted to VB.NET ... the original backend was MSSQL, now\nit's being converted to PostgreSQL with C glue to make PostgreSQL talk SOAP ...\nand all on the lowest budget possible.\n\n> Overall, I'd say \n> that the programming team you've been inflicted with don't like relational \n> databases, or at least have no understanding of them.\n\nQuite possibly. It's amazing to me how well I've apparently self-taught\nmyself relational databases. I've spoken with a lot of people who have had\nformal schooling in RDBMS who don't really understand it. And I've seen\nLOTs of applications that are written so badly that it's scarey. I mean,\ncheck out http://www.editavenue.com ... they wanted me to optimize their\ndatabase to get rid of the deadlocks. I've been totally unable to make\nthem understand that deadlocks are not caused by poor optimization, but\nby poor database programmers who don't really know how to code for\nmulti-user. As a result, I've probably lost the work, but I'm probably\nbetter off without it.\n\nOne of the things I love about working with open source databases is I\ndon't see a lot of that. 
The people on these lists are almost always\nsmarter than me, and I find that comforting ;)\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Thu, 29 Jan 2004 14:44:28 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Set-Returning Functions WAS: On the performance of"
},
{
"msg_contents": "Bill,\n\n> I don't understand at all. If I do \"SELECT * FROM\n> set_returning_function()\" and all I'm going to do is iterate through the\n> columns and rows, adding them to a two dimensional array that will be\n> marshalled as a SOAP message, what about not knowing the nature of the\n> return set can cause me to get no data?\n\nBecause that's not the syntax for a function that returns SETOF RECORD.\n\nThe syntax is:\n\nSELECT * \nFROM set_returning_function(var1, var2) AS alias (col1 TYPE, col2 TYPE);\n\nThat is, if the function definition does not contain a clear row structure, \nthe query has to contain one.\n\nThis does not apply to functions that are based on a table or composite type:\n\nCREATE FUNCTION .... RETURNS SETOF table1 ...\nCREATE FUNCTION .... RETURNS SETOF comp_type\n\nCan be called with: \n\nSELECT * FROM some_function(var1, var2) as alias;\n\nWhat this means is that you have to know the structure of the result set, \neither at function creation time or at function execution time.\n\n>\n> One of the things I love about working with open source databases is I\n> don't see a lot of that. The people on these lists are almost always\n> smarter than me, and I find that comforting ;)\n\nFlattery will get you everywhere.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 29 Jan 2004 15:17:19 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Set-Returning Functions WAS: On the performance of\n views"
},
{
"msg_contents": "Josh Berkus wrote:\n> Bill,\n> \n>>I don't understand at all. If I do \"SELECT * FROM\n>>set_returning_function()\" and all I'm going to do is iterate through the\n>>columns and rows, adding them to a two dimensional array that will be\n>>marshalled as a SOAP message, what about not knowing the nature of the\n>>return set can cause me to get no data?\n> \n> Because that's not the syntax for a function that returns SETOF RECORD.\n> \n> The syntax is:\n> \n> SELECT * \n> FROM set_returning_function(var1, var2) AS alias (col1 TYPE, col2 TYPE);\n> \n> That is, if the function definition does not contain a clear row structure, \n> the query has to contain one.\n> \n> This does not apply to functions that are based on a table or composite type:\n> \n> CREATE FUNCTION .... RETURNS SETOF table1 ...\n> CREATE FUNCTION .... RETURNS SETOF comp_type\n> \n> Can be called with: \n> \n> SELECT * FROM some_function(var1, var2) as alias;\n> \n> What this means is that you have to know the structure of the result set, \n> either at function creation time or at function execution time.\n\nYep. You're right, I hadn't looked at that, but I'm probably better off\ncreating types and returning setof those types as much as possible.\n\n>>One of the things I love about working with open source databases is I\n>>don't see a lot of that. The people on these lists are almost always\n>>smarter than me, and I find that comforting ;)\n> \n> Flattery will get you everywhere.\n\nReally? I'll have to use it more often.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Thu, 29 Jan 2004 20:07:40 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Set-Returning Functions WAS: On the performance of"
},
{
"msg_contents": "Bill Moran wrote:\n> Basically, all I do is call each query in turn until I've collected all the\n> results, then marshall the results in to a SOAP XML response (using gsoap,\n> if anyone's curious) and give them back to the client application. It's\n> the client app's job to figure out what to do with them, not mine. I\n> never would have written it this way on my own, but the client app is\n> already written, so as I migrate it to the client-server model, the\n> programmers who wrote the client app are specifying what they expect me\n> to provide them.\n\nIn C++/OO way, that would have been a array/list of base object pointers with \nvirtual methods for\n- Obtaining result set\n- Making a soap/XML out of it\n- Deliver it\n- And pre/post processing\n\nAnd delegate the unique query/data handling to specific subclasses.\n\nNow that you are doing that in C, you can as well use function pointers to mimic \ninheritance and virtual functions. The only difference being\n\na. No typechecking. A function pointer in C, won't crib if you pass it wrong \nnumber/wrong types of argument. C++ compiler will catch that.\nb. Maintainance is nightmare unless you thoroughly document it. In C++, glancing \nover source code might suffice. And no, in-source comment rarely suffice when \nyou use C for such applications.\n\nI maintain such an application for day-job and I know what help/menace it can be \nat times..:-(\n\nBut either way, asking for/using/maintaining non-homogeneous recordset is a \nclassic three finger salute. Load the gun, point to head and shoot.\n\nJust a thought..\n\n Shridhar\n",
"msg_date": "Fri, 30 Jan 2004 12:33:45 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: On the performance of views"
}
] |
[
{
"msg_contents": "I'm trying to track down some performance issues where time (in \nmilliseconds) is critical. One thing that I've noticed is that it \nseems like the first insert takes an inordinate amount of time. \nInserts after the first one are acceptable. My production environment \nis like this:\n\nSolaris 9\nJBoss\nJava 1.4.2\nPostgreSQL 7.3 JDBC drivers\nPostgreSQL 7.3.2 database\n\nI've isolated the problem in order to more accurately recreate it by \nusing the following table definition and insert statement:\n\ngregqa=# \\d one\n Table \"public.one\"\n Column | Type | Modifiers\n--------+------------------------+-----------\n id | integer |\n msg | character varying(255) |\n\n\nexplain analyze insert into one (id, msg) values (1, 'blah');\n\n\nI'm currently using Jython (Python written in Java) and the JDBC \ndrivers to recreate the problem with this code:\n\nfrom java.sql import *\nfrom java.lang import Class\n\nClass.forName(\"org.postgresql.Driver\")\ndb = \nDriverManager.getConnection(\"jdbc:postgresql://localhost:5432/blah\", \n\"blah\", \"\")\n\nfor i in range(5):\n query = \"EXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, \n'blah');\"\n print query\n\n st = db.createStatement()\n rs = st.executeQuery(query)\n rsmd = rs.getMetaData()\n cols = rsmd.getColumnCount()\n cols_range = range(1, cols + 1)\n\n while rs.next():\n for col in cols_range:\n print rs.getString(col)\n\n rs.close()\n st.close()\n\ndb.close()\n\n\nWhen I run this code (which will execute the query 5 times before \nfinishing), here's the output I get:\n\n[bert:~/tmp] > env CLASSPATH=pg73jdbc3.jar jython dbquery.py\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 \nloops=1)\nTotal runtime: 0.59 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 \nloops=1)\nTotal runtime: 0.17 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1 \nloops=1)\nTotal runtime: 0.12 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1 \nloops=1)\nTotal runtime: 0.12 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1 \nloops=1)\nTotal runtime: 0.12 msec\n\n[bert:~/tmp] > env CLASSPATH=pg73jdbc3.jar jython dbquery.py\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 \nloops=1)\nTotal runtime: 0.55 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 \nloops=1)\nTotal runtime: 0.15 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1 \nloops=1)\nTotal runtime: 0.13 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1 \nloops=1)\nTotal runtime: 0.13 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1 \nloops=1)\nTotal runtime: 0.17 msec\n\n(I ran it twice to show that it is consistently repeatable)\n\nNow, of course this query isn't very interesting and shouldn't take \nvery 
long, but it does illustrate the time difference between the first \nquery and the last four. On my bigger insert query, it's taking 79msec \nfor the first query and ~0.9msec for the last four. Any ideas as to \nwhy the first query would take so long? I can provide more information \nif necessary, but I think the example is pretty simple as is.\n\n--\nPC Drew\n\n",
"msg_date": "Mon, 26 Jan 2004 14:00:16 -0700",
"msg_from": "PC Drew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert Times"
},
{
"msg_contents": "Drew,\n\nJust a guess because I don't know PostGres that well, but it could be the\nSQL parse time and once the statement is parsed, it may be pinned in parse\ncache for subsequent calls.\n\nRegards,\nBrad Gawne\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of PC Drew\nSent: January 26, 2004 16:00\nTo: [email protected]\nSubject: [PERFORM] Insert Times\n\nI'm trying to track down some performance issues where time (in\nmilliseconds) is critical. One thing that I've noticed is that it seems\nlike the first insert takes an inordinate amount of time. \nInserts after the first one are acceptable. My production environment is\nlike this:\n\nSolaris 9\nJBoss\nJava 1.4.2\nPostgreSQL 7.3 JDBC drivers\nPostgreSQL 7.3.2 database\n\nI've isolated the problem in order to more accurately recreate it by using\nthe following table definition and insert statement:\n\ngregqa=# \\d one\n Table \"public.one\"\n Column | Type | Modifiers\n--------+------------------------+-----------\n id | integer |\n msg | character varying(255) |\n\n\nexplain analyze insert into one (id, msg) values (1, 'blah');\n\n\nI'm currently using Jython (Python written in Java) and the JDBC drivers to\nrecreate the problem with this code:\n\nfrom java.sql import *\nfrom java.lang import Class\n\nClass.forName(\"org.postgresql.Driver\")\ndb =\nDriverManager.getConnection(\"jdbc:postgresql://localhost:5432/blah\",\n\"blah\", \"\")\n\nfor i in range(5):\n query = \"EXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\"\n print query\n\n st = db.createStatement()\n rs = st.executeQuery(query)\n rsmd = rs.getMetaData()\n cols = rsmd.getColumnCount()\n cols_range = range(1, cols + 1)\n\n while rs.next():\n for col in cols_range:\n print rs.getString(col)\n\n rs.close()\n st.close()\n\ndb.close()\n\n\nWhen I run this code (which will execute the query 5 times before\nfinishing), here's the output I get:\n\n[bert:~/tmp] > env CLASSPATH=pg73jdbc3.jar jython dbquery.py EXPLAIN ANALYZE\nINSERT INTO one (id, msg) VALUES (1, 'blah'); Result (cost=0.00..0.01\nrows=1 width=0) (actual time=0.02..0.02 rows=1\nloops=1)\nTotal runtime: 0.59 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah'); Result\n(cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1\nloops=1)\nTotal runtime: 0.17 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah'); Result\n(cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1\nloops=1)\nTotal runtime: 0.12 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah'); Result\n(cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1\nloops=1)\nTotal runtime: 0.12 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah'); Result\n(cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1\nloops=1)\nTotal runtime: 0.12 msec\n\n[bert:~/tmp] > env CLASSPATH=pg73jdbc3.jar jython dbquery.py EXPLAIN ANALYZE\nINSERT INTO one (id, msg) VALUES (1, 'blah'); Result (cost=0.00..0.01\nrows=1 width=0) (actual time=0.02..0.02 rows=1\nloops=1)\nTotal runtime: 0.55 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah'); Result\n(cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1\nloops=1)\nTotal runtime: 0.15 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah'); Result\n(cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1\nloops=1)\nTotal runtime: 0.13 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah'); Result\n(cost=0.00..0.01 rows=1 
width=0) (actual time=0.01..0.02 rows=1\nloops=1)\nTotal runtime: 0.13 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah'); Result\n(cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1\nloops=1)\nTotal runtime: 0.17 msec\n\n(I ran it twice to show that it is consistently repeatable)\n\nNow, of course this query isn't very interesting and shouldn't take very\nlong, but it does illustrate the time difference between the first query and\nthe last four. On my bigger insert query, it's taking 79msec for the first\nquery and ~0.9msec for the last four. Any ideas as to why the first query\nwould take so long? I can provide more information if necessary, but I\nthink the example is pretty simple as is.\n\n--\nPC Drew\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n\n",
"msg_date": "Mon, 26 Jan 2004 22:15:09 -0500",
"msg_from": "\"Brad Gawne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert Times"
}
] |
[
{
"msg_contents": "Hi,\n\nMy personal feeling on this is, that the long time taken for the first query\nis for loading all sorts of libraries, JVM startup overhead etc.\n\nWhat if you first do some SELECT (whatever), on a different table, to warm\nup the JVM and the database?\n\nregards,\n\n--Tim\n\nTHIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY\nMATERIAL and is thus for use only by the intended recipient. If you received\nthis in error, please contact the sender and delete the e-mail and its\nattachments from all computers. \n\n",
"msg_date": "Tue, 27 Jan 2004 01:38:28 -0600",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert Times"
}
] |
[
{
"msg_contents": "I tested this out and saw no improvement:\n\nEXPLAIN ANALYZE SELECT * FROM one;\nSeq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.04..0.50 rows=51 loops=1)\nTotal runtime: 0.75 msec\nEXPLAIN ANALYZE SELECT * FROM one;\nSeq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.06..0.50 rows=51 loops=1)\nTotal runtime: 0.64 msec\nEXPLAIN ANALYZE SELECT * FROM one;\nSeq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.04..0.40 rows=51 loops=1)\nTotal runtime: 0.54 msec\nEXPLAIN ANALYZE SELECT * FROM one;\nSeq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.04..0.41 rows=51 loops=1)\nTotal runtime: 0.54 msec\nEXPLAIN ANALYZE SELECT * FROM one;\nSeq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.04..0.41 rows=51 loops=1)\nTotal runtime: 0.53 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=1)\nTotal runtime: 0.85 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 loops=1)\nTotal runtime: 0.15 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 loops=1)\nTotal runtime: 0.14 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 loops=1)\nTotal runtime: 0.12 msec\nEXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\nResult (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=1)\nTotal runtime: 0.12 msec\n\n\n\n-----Original Message-----\nFrom:\tLeeuw van der, Tim [mailto:[email protected]]\nSent:\tTue 1/27/2004 12:38 AM\nTo:\tPC Drew; [email protected]\nCc:\t\nSubject:\tRE: [PERFORM] Insert Times\n\nHi,\n\nMy personal feeling on this is, that the long time taken for the first query\nis for loading all sorts of libraries, JVM startup overhead etc.\n\nWhat if you first do some SELECT (whatever), on a different table, to warm\nup the JVM and the database?\n\nregards,\n\n--Tim\n\nTHIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY\nMATERIAL and is thus for use only by the intended recipient. If you received\nthis in error, please contact the sender and delete the e-mail and its\nattachments from all computers. \n\n\n\n",
"msg_date": "Tue, 27 Jan 2004 07:06:15 -0700",
"msg_from": "\"PC Drew\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert Times"
},
{
"msg_contents": "PC Drew wrote:\n\n>I tested this out and saw no improvement:\n> \n>\nI'd still suspect some class loading issues and HotSpot compilation \nissues are polluting your numbers. Try using a PreparedStatement to \nanother table first in order to make sure that classes bytecode has been \nloaded. There are some command line options to the JVM to have it \nprint out some status info when it is loading classes and compiling \nmethods; you might want to turn on those options as well.\n\n-- Alan\n\n>EXPLAIN ANALYZE SELECT * FROM one;\n>Seq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.04..0.50 rows=51 loops=1)\n>Total runtime: 0.75 msec\n>EXPLAIN ANALYZE SELECT * FROM one;\n>Seq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.06..0.50 rows=51 loops=1)\n>Total runtime: 0.64 msec\n>EXPLAIN ANALYZE SELECT * FROM one;\n>Seq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.04..0.40 rows=51 loops=1)\n>Total runtime: 0.54 msec\n>EXPLAIN ANALYZE SELECT * FROM one;\n>Seq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.04..0.41 rows=51 loops=1)\n>Total runtime: 0.54 msec\n>EXPLAIN ANALYZE SELECT * FROM one;\n>Seq Scan on one (cost=0.00..20.00 rows=1000 width=404) (actual time=0.04..0.41 rows=51 loops=1)\n>Total runtime: 0.53 msec\n>EXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\n>Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=1)\n>Total runtime: 0.85 msec\n>EXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\n>Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 loops=1)\n>Total runtime: 0.15 msec\n>EXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\n>Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 loops=1)\n>Total runtime: 0.14 msec\n>EXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\n>Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.02..0.02 rows=1 loops=1)\n>Total runtime: 0.12 msec\n>EXPLAIN ANALYZE INSERT INTO one (id, msg) VALUES (1, 'blah');\n>Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.02 rows=1 loops=1)\n>Total runtime: 0.12 msec\n>\n>\n>\n>-----Original Message-----\n>From:\tLeeuw van der, Tim [mailto:[email protected]]\n>Sent:\tTue 1/27/2004 12:38 AM\n>To:\tPC Drew; [email protected]\n>Cc:\t\n>Subject:\tRE: [PERFORM] Insert Times\n>\n>Hi,\n>\n>My personal feeling on this is, that the long time taken for the first query\n>is for loading all sorts of libraries, JVM startup overhead etc.\n>\n>What if you first do some SELECT (whatever), on a different table, to warm\n>up the JVM and the database?\n>\n>regards,\n>\n>--Tim\n>\n>THIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY\n>MATERIAL and is thus for use only by the intended recipient. If you received\n>this in error, please contact the sender and delete the e-mail and its\n>attachments from all computers. \n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n> \n>\n\n",
"msg_date": "Tue, 27 Jan 2004 10:58:29 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert Times"
}
] |
[
{
"msg_contents": "> Hello All\n> \n> Just wanted to gather opinions on what file system has the best balance between performance and \n> reliability when used on a quad processor machine running SuSE64. Thanks\n> \n> DAve\n\nI was reading the article 'Behind the ALTIX 3000' in the Feb. 2003 Linux Journal, and it mentioned\nthe following paper which compares Ext2, Ext3, ReiserFS, XFS, and JFS:\n\nFilesystem Performance and Scalability in Linux 2.4.17\nOriginally published in Proceedings of the FREENIX Track: \n2002 USENIX Annual Technical Conference\nhttp://oss.sgi.com/projects/xfs/papers/filesystem-perf-tm.pdf\n\nQuoting the abtract:\n\nAlthough the best-performing filesystem varies depending on the benchmark and system used, some\nlarger trends are evident in the data. On the smaller systems, the best-performing file system is\noften Ext2, Ext3 or ReiserFS. For the larger systems and higher loads, XFS can provide the best\noverall performance.\n\nGeorge Essig\n",
"msg_date": "Tue, 27 Jan 2004 09:52:36 -0800 (PST)",
"msg_from": "George Essig <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High Performance/High Reliability File system on SuSE64"
}
] |
[
{
"msg_contents": "Hello,\n\nWith the new preload option is there any benefit/drawback to using \npl/Python versus\npl/pgSQL? And no... I don't care that pl/Python is now considered untrusted.\n\nSincerely,\n\nJoshua D. Drake\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n",
"msg_date": "Tue, 27 Jan 2004 10:52:21 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pl/pgSQL versus pl/Python"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> With the new preload option is there any benefit/drawback to using \n> pl/Python versus pl/pgSQL?\n\nIf you're asking about relative speed, I did some *very* simple tests \nand posted them here:\n\nhttp://archives.postgresql.org/pgsql-patches/2003-07/msg00239.php\n\nwithout preload:\n=====================================================================\nregression=# explain analyze select echo_plperl('hello');\n Total runtime: 55.29 msec\nregression=# explain analyze select echo_pltcl('hello');\n Total runtime: 23.34 msec\nregression=# explain analyze select echo_plpythonu('hello');\n Total runtime: 32.40 msec\nregression=# explain analyze select echo_plpgsql('hello');\n Total runtime: 3.09 msec\n\n\nwith preload:\n=====================================================================\nregression=# explain analyze select echo_plperl('hello');\n Total runtime: 5.14 msec\nregression=# explain analyze select echo_pltcl('hello');\n Total runtime: 7.64 msec\nregression=# explain analyze select echo_plpythonu('hello');\n Total runtime: 1.91 msec\nregression=# explain analyze select echo_plpgsql('hello');\n Total runtime: 1.35 msec\n\nThis was intended to just measure the time to execute a simple \"hello \nworld\" type of function, for the first time in a given session. I did \nnot repeat/average the results though, so you might want to do some of \nyour own testing.\n\nJoe\n\n",
"msg_date": "Tue, 27 Jan 2004 16:56:18 -0800",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgSQL versus pl/Python"
}
] |
[
{
"msg_contents": "Hi,\n\nwith the following table:\n\n Table \"public.foo\"\n Column | Type | Modifiers\n--------+------+-----------\n t | text |\nIndexes:\n \"a\" btree (t)\n\nShouldn't queries that use\n ... where t like '%something%'\n\nbenefit from \"a\" when t is NULL in almost all cases, since the query \nplanner could use \"a\" to access the few non-NULL rows quickly? It \ndoesn't seem to work right now.\n\n(I assume that it would make no difference if the index \"a\" was partial, \nexcluding NULLs)\n\nRegards,\n-mjy\n\n",
"msg_date": "Tue, 27 Jan 2004 20:26:06 +0100",
"msg_from": "\"Marinos J. Yannikos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "(partial?) indexes, LIKE and NULL"
},
{
"msg_contents": "\"Marinos J. Yannikos\" <[email protected]> writes:\n> Shouldn't queries that use\n> ... where t like '%something%'\n> benefit from [an index on t] when t is NULL in almost all cases, since\n> the query planner could use [it] to access the few non-NULL rows\n> quickly?\n\nNo, and the reason is that the planner *can't* use the index that way.\nTo do that we'd have to support \"x IS NOT NULL\" as an indexable\noperator, which we don't. This is exactly the same stumbling block as\nfor more-direct uses of indexes to search for NULL or NOT NULL rows.\nSee the pghackers archives for more details.\n\n> (I assume that it would make no difference if the index \"a\" was partial, \n> excluding NULLs)\n\nYou could do\n\n\tcreate index a on foo(t) where t is not null;\n\nand then this index would likely get used for any query explicitly\nmentioning \"AND t is not null\". The planner will not induce such a\nwhere clause entry from the presence of other tests on t, however.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Jan 2004 17:46:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: (partial?) indexes, LIKE and NULL "
}
] |
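A compact sketch of Tom's suggestion, using the table from the question; the explicit IS NOT NULL clause is what lets the planner consider the partial index:

    CREATE INDEX a ON foo (t) WHERE t IS NOT NULL;

    SELECT *
      FROM foo
     WHERE t LIKE '%something%'
       AND t IS NOT NULL;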
[
{
"msg_contents": "My understanding is that having NULL values in an index breaks it completely. Meaning it won't be used in any query planning. Maybe I'm wrong though...\n\n\n-----Original Message-----\nFrom:\tMarinos J. Yannikos [mailto:[email protected]]\nSent:\tTue 1/27/2004 12:26 PM\nTo:\[email protected]\nCc:\t\nSubject:\t[PERFORM] (partial?) indexes, LIKE and NULL\n\nHi,\n\nwith the following table:\n\n Table \"public.foo\"\n Column | Type | Modifiers\n--------+------+-----------\n t | text |\nIndexes:\n \"a\" btree (t)\n\nShouldn't queries that use\n ... where t like '%something%'\n\nbenefit from \"a\" when t is NULL in almost all cases, since the query \nplanner could use \"a\" to access the few non-NULL rows quickly? It \ndoesn't seem to work right now.\n\n(I assume that it would make no difference if the index \"a\" was partial, \nexcluding NULLs)\n\nRegards,\n-mjy\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n\n",
"msg_date": "Tue, 27 Jan 2004 12:51:56 -0700",
"msg_from": "\"PC Drew\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: (partial?) indexes, LIKE and NULL"
}
] |
[
{
"msg_contents": "Hi all ,\n\nI'm trying to find out if there is a specific setting \nto make transactions time out faster in a scenario\nwhere there's an update on a table in a transaction \nblock, and another update process tries to update\nthe same column.\n\nIt looks like the second process will wait until you\nend the transaction block in the first transaction.\n\nI've looked at the deadlock timeout parameter and\nother parameters, but I don't think I found what \nI'm looking for.\n\nI basically need to be able to let the second process\nexit with an error after waiting 5 - 10 seconds.\n\nPlease can someone help?\n\nKind Regards\nStefan",
"msg_date": "Wed, 28 Jan 2004 12:11:09 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres timeout."
},
{
"msg_contents": "Forgot to mention that I use postgres 7.3.4\n\nStef mentioned :\n=> Hi all ,\n=> \n=> I'm trying to find out if there is a specific setting \n=> to make transactions time out faster in a scenario\n=> where there's an update on a table in a transaction \n=> block, and another update process tries to update\n=> the same column.\n=> \n=> It looks like the second process will wait until you\n=> end the transaction block in the first transaction.\n=> \n=> I've looked at the deadlock timeout parameter and\n=> other parameters, but I don't think I found what \n=> I'm looking for.\n=> \n=> I basically need to be able to let the second process\n=> exit with an error after waiting 5 - 10 seconds.\n=> \n=> Please can someone help?\n=> \n=> Kind Regards\n=> Stefan\n=>",
"msg_date": "Wed, 28 Jan 2004 12:12:17 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres timeout."
},
{
"msg_contents": "Hi all, \n\nIt seems I always find a solution just after\npanicking a little bit.\nAnyway, I found that statement_timeout solved\nmy problem. When I tested it earlier, I actually \nmade an error, and skipped it as a possible \nsolution.\n\nCheers\nStef\n\nStef mentioned :\n=> Forgot to mention that I use postgres 7.3.4\n=> \n=> Stef mentioned :\n=> => Hi all ,\n=> => \n=> => I'm trying to find out if there is a specific setting \n=> => to make transactions time out faster in a scenario\n=> => where there's an update on a table in a transaction \n=> => block, and another update process tries to update\n=> => the same column.\n=> => \n=> => It looks like the second process will wait until you\n=> => end the transaction block in the first transaction.\n=> => \n=> => I've looked at the deadlock timeout parameter and\n=> => other parameters, but I don't think I found what \n=> => I'm looking for.\n=> => \n=> => I basically need to be able to let the second process\n=> => exit with an error after waiting 5 - 10 seconds.\n=> => \n=> => Please can someone help?\n=> => \n=> => Kind Regards\n=> => Stefan\n=> => \n=>",
"msg_date": "Wed, 28 Jan 2004 12:47:22 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres timeout. [SOLVED]"
},
{
"msg_contents": "Stef <[email protected]> writes:\n> I'm trying to find out if there is a specific setting \n> to make transactions time out faster in a scenario\n> where there's an update on a table in a transaction \n> block, and another update process tries to update\n> the same column.\n\n> It looks like the second process will wait until you\n> end the transaction block in the first transaction.\n\nYou can use statement_timeout to limit the wait.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 28 Jan 2004 11:49:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] postgres timeout. "
}
] |
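For reference, a minimal sketch of the statement_timeout approach Stef ended up using (the table and values are made up; the setting is in milliseconds and is available from 7.3 on):

    SET statement_timeout = 10000;  -- cancel any statement running longer than 10 seconds

    BEGIN;
    -- if another open transaction already holds the row lock, this UPDATE
    -- waits and is cancelled with an error once the timeout expires
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    COMMIT;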
[
{
"msg_contents": "Hi, \n\nPostgres choses the wrong index when I add limit 1 to the query.\nThis should not affect the index chosen.\nI read that functional indexes are sometimes not chosen correctly by \noptimizer. \nIs there anything I can do to always use the functional index in the\nfollowing queries? \n\nQuery with limit 1 choses wrong index:\n---------------------------------------------------------------------------------------\nexplain\nselect code \nfrom transactions \nwhere UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\norder by order_date DESC LIMIT 1\n\nIndex Scan Backward using transactions_date_aff on transactions (cost=0.00..930780.96 rows=2879 width=33)\n---------------------------------------------------------------------------------------\n\nWithout limit 1 choses correct index:\n---------------------------------------------------------------------------------------\nexplain\nselect code \nfrom transactions \nwhere UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\norder by order_date DESC\n\nIndex Scan using transactions_pop_i on transactions (cost=0.00..11351.72 rows=2879 width=33)\n---------------------------------------------------------------------------------------\n\nWe have postgresql-7.3.2-3.\nThank you,\n\nAlexandra\n",
"msg_date": "Wed, 28 Jan 2004 12:23:38 +0100",
"msg_from": "\"Alexandra Birch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "limit 1 and functional indexes"
},
{
"msg_contents": "On Wed, Jan 28, 2004 at 12:23:38 +0100,\n Alexandra Birch <[email protected]> wrote:\n> Hi, \n> \n> Postgres choses the wrong index when I add limit 1 to the query.\n> This should not affect the index chosen.\n\nI don't know the complete answer to your question, but since no one else\nhas commented I will answer what I can.\n\nIt IS reasobable for the planner to choose a different plan when you\nadd a LIMIT clause to a query.\n\n> I read that functional indexes are sometimes not chosen correctly by \n> optimizer. \n\nI don't believe there are any particular problems with functional indexes.\nThe opitmizer isn't perfect and will sometimes choose poor plans.\n\n> Is there anything I can do to always use the functional index in the\n> following queries? \n\nHave you done an ANALYZE of the table recently?\n\nIt might be useful to see the EXPLAIN ANALYZE output, rather than just\nthe EXPLAIN output, as that will give the actual times needed to do\nthe various steps.\n\n> \n> Query with limit 1 choses wrong index:\n> ---------------------------------------------------------------------------------------\n> explain\n> select code \n> from transactions \n> where UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\n> order by order_date DESC LIMIT 1\n> \n> Index Scan Backward using transactions_date_aff on transactions (cost=0.00..930780.96 rows=2879 width=33)\n> ---------------------------------------------------------------------------------------\n> \n> Without limit 1 choses correct index:\n> ---------------------------------------------------------------------------------------\n> explain\n> select code \n> from transactions \n> where UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\n> order by order_date DESC\n> \n> Index Scan using transactions_pop_i on transactions (cost=0.00..11351.72 rows=2879 width=33)\n> ---------------------------------------------------------------------------------------\n> \n> We have postgresql-7.3.2-3.\n> Thank you,\n> \n> Alexandra\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n",
"msg_date": "Thu, 29 Jan 2004 06:52:40 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit 1 and functional indexes"
},
{
"msg_contents": "\n> >\n> > Postgres choses the wrong index when I add limit 1 to the query.\n> > This should not affect the index chosen.\n>\n> I don't know the complete answer to your question, but since no one else\n> has commented I will answer what I can.\n\nThanks - your reply is apreciated :)\n\n> It IS reasobable for the planner to choose a different plan when you\n> add a LIMIT clause to a query.\n\nOK - I'll investigate this further.\n\n> > I read that functional indexes are sometimes not chosen correctly by\n> > optimizer.\n>\n> I don't believe there are any particular problems with functional indexes.\n> The opitmizer isn't perfect and will sometimes choose poor plans.\n\nOK - but there was some discussion about statistics for functional indexes, for eg:\nhttp://archives.postgresql.org/pgsql-general/2004-01/msg00978.php\nThis does not help me solve my problem though :)\n\n> > Is there anything I can do to always use the functional index in the\n> > following queries?\n>\n> Have you done an ANALYZE of the table recently?\n\nYip - I should have said we do a daily VACUUM ANALYZE.\n\n> It might be useful to see the EXPLAIN ANALYZE output, rather than just\n> the EXPLAIN output, as that will give the actual times needed to do\n> the various steps.\n\nI thought the cost values would be enough from the EXPLAIN alone.\nAnd the query takes so long to run :(\n\nHere is the output of EXPLAIN ANALYZE first with limit 1 then without:\n\nexplain analyze\nselect code\nfrom transactions\nwhere UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\norder by order_date DESC LIMIT 1;\n--------------------------------------------------------------------------------------------------\n Limit (cost=0.00..332.44 rows=1 width=33) (actual time=377745.75..377745.75 rows=0 loops=1)\n -> Index Scan Backward using transactions_date_aff on transactions (cost=0.00..982549.96 rows=2956 width=33) (actual\ntime=377718.61..377718.61 rows=0 loops=1)\n Filter: (upper((pop)::text) = '79BCDC8A4A4F99E7C111111111111111'::text)\n Total runtime: 378439.32 msec\n\nexplain analyze\nselect code\nfrom transactions\nwhere UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\norder by order_date DESC;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n-------------\n Sort (cost=11824.16..11831.55 rows=2956 width=33) (actual time=248.17..248.17 rows=0 loops=1)\n Sort Key: order_date\n -> Index Scan using transactions_pop_i on transactions (cost=0.00..11653.79 rows=2956 width=33) (actual time=126.13..126.13\nrows=0 loops=1)\n Index Cond: (upper((pop)::text) = '79BCDC8A4A4F99E7C111111111111111'::text)\n Total runtime: 248.25 msec\n\nThank you,\n\nAlexandra\n\n",
"msg_date": "Thu, 29 Jan 2004 16:02:06 +0100",
"msg_from": "\"Alexandra Birch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: limit 1 and functional indexes"
},
{
"msg_contents": "On Thu, Jan 29, 2004 at 16:02:06 +0100,\n Alexandra Birch <[email protected]> wrote:\n> \n> Here is the output of EXPLAIN ANALYZE first with limit 1 then without:\n\nThe time estimate for the limit 1 case is way off. I can't tell if that\nis a bug or not having detailed enough statistics.\n\nHopefully someone more knowlegable will take a look at this question.\n\n> \n> explain analyze\n> select code\n> from transactions\n> where UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\n> order by order_date DESC LIMIT 1;\n> --------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..332.44 rows=1 width=33) (actual time=377745.75..377745.75 rows=0 loops=1)\n> -> Index Scan Backward using transactions_date_aff on transactions (cost=0.00..982549.96 rows=2956 width=33) (actual\n> time=377718.61..377718.61 rows=0 loops=1)\n> Filter: (upper((pop)::text) = '79BCDC8A4A4F99E7C111111111111111'::text)\n> Total runtime: 378439.32 msec\n> \n> explain analyze\n> select code\n> from transactions\n> where UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\n> order by order_date DESC;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------\n> -------------\n> Sort (cost=11824.16..11831.55 rows=2956 width=33) (actual time=248.17..248.17 rows=0 loops=1)\n> Sort Key: order_date\n> -> Index Scan using transactions_pop_i on transactions (cost=0.00..11653.79 rows=2956 width=33) (actual time=126.13..126.13\n> rows=0 loops=1)\n> Index Cond: (upper((pop)::text) = '79BCDC8A4A4F99E7C111111111111111'::text)\n> Total runtime: 248.25 msec\n> \n> Thank you,\n> \n> Alexandra\n> \n",
"msg_date": "Thu, 29 Jan 2004 09:11:46 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit 1 and functional indexes"
},
{
"msg_contents": "One other suggestion I forgot is that this should move over to the\nperformance list rather than being on the sql list. The right people\nare more likely to see your question there.\n\nOn Thu, Jan 29, 2004 at 16:02:06 +0100,\n Alexandra Birch <[email protected]> wrote:\n> \n> > >\n> > > Postgres choses the wrong index when I add limit 1 to the query.\n> > > This should not affect the index chosen.\n> >\n> > I don't know the complete answer to your question, but since no one else\n> > has commented I will answer what I can.\n> \n> Thanks - your reply is apreciated :)\n> \n> > It IS reasobable for the planner to choose a different plan when you\n> > add a LIMIT clause to a query.\n> \n> OK - I'll investigate this further.\n> \n> > > I read that functional indexes are sometimes not chosen correctly by\n> > > optimizer.\n> >\n> > I don't believe there are any particular problems with functional indexes.\n> > The opitmizer isn't perfect and will sometimes choose poor plans.\n> \n> OK - but there was some discussion about statistics for functional indexes, for eg:\n> http://archives.postgresql.org/pgsql-general/2004-01/msg00978.php\n> This does not help me solve my problem though :)\n> \n> > > Is there anything I can do to always use the functional index in the\n> > > following queries?\n> >\n> > Have you done an ANALYZE of the table recently?\n> \n> Yip - I should have said we do a daily VACUUM ANALYZE.\n> \n> > It might be useful to see the EXPLAIN ANALYZE output, rather than just\n> > the EXPLAIN output, as that will give the actual times needed to do\n> > the various steps.\n> \n> I thought the cost values would be enough from the EXPLAIN alone.\n> And the query takes so long to run :(\n> \n> Here is the output of EXPLAIN ANALYZE first with limit 1 then without:\n> \n> explain analyze\n> select code\n> from transactions\n> where UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\n> order by order_date DESC LIMIT 1;\n> --------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..332.44 rows=1 width=33) (actual time=377745.75..377745.75 rows=0 loops=1)\n> -> Index Scan Backward using transactions_date_aff on transactions (cost=0.00..982549.96 rows=2956 width=33) (actual\n> time=377718.61..377718.61 rows=0 loops=1)\n> Filter: (upper((pop)::text) = '79BCDC8A4A4F99E7C111111111111111'::text)\n> Total runtime: 378439.32 msec\n> \n> explain analyze\n> select code\n> from transactions\n> where UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111')\n> order by order_date DESC;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------\n> -------------\n> Sort (cost=11824.16..11831.55 rows=2956 width=33) (actual time=248.17..248.17 rows=0 loops=1)\n> Sort Key: order_date\n> -> Index Scan using transactions_pop_i on transactions (cost=0.00..11653.79 rows=2956 width=33) (actual time=126.13..126.13\n> rows=0 loops=1)\n> Index Cond: (upper((pop)::text) = '79BCDC8A4A4F99E7C111111111111111'::text)\n> Total runtime: 248.25 msec\n> \n> Thank you,\n> \n> Alexandra\n> \n",
"msg_date": "Thu, 29 Jan 2004 09:43:19 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] limit 1 and functional indexes"
},
{
"msg_contents": "\nBruno Wolff III <[email protected]> writes:\n\n> > QUERY PLAN\n> > ------------------------------------------------------------------------------------------------------------------------------------\n> > Sort (cost=11824.16..11831.55 rows=2956 width=33) (actual time=248.17..248.17 rows=0 loops=1)\n> > Sort Key: order_date\n> > -> Index Scan using transactions_pop_i on transactions\n> > (cost=0.00..11653.79 rows=2956 width=33) \n> > (actual time=126.13..126.13 rows=0 loops=1)\n> > Index Cond: (upper((pop)::text) = '79BCDC8A4A4F99E7C111111111111111'::text)\n> > Total runtime: 248.25 msec\n\n\nYeah, the problem with functional indexes is that the optimizer doesn't have\nany clue how the records are distributed since it only has statistics for\ncolumns, not your expression. Notice it's estimating 2956 rows where in fact\nthere are 0.\n\nI think someone was actually working on this so it may be improved in 7.5 but\nI'm not sure.\n\nGiven the type of data you're storing, which looks like hex strings, are you\nsure you need to do a case-insensitive search here? Can't you just uppercase\nit when you store it?\n\nThe other option would be to use a subquery and force the planner not to pull\nit up, something like:\n\n\n select code\n from (\n select code \n from transactions \n where UPPER(pop) = UPPER('79bcdc8a4a4f99e7c111111111111111') \n offset 0\n )\n order by order_date DESC;\n\n\nThe offset 0 prevents the optimizer from pulling the subquery into the outer\nquery. I think this will prevent it from even considering the order_date index\nscan, but you'll have to try to be sure.\n\n-- \ngreg\n\n",
"msg_date": "30 Jan 2004 01:07:39 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit 1 and functional indexes"
},
{
"msg_contents": "\n> From: [email protected] [mailto:[email protected]]\n> Sent: viernes, 30 de enero de 2004 7:08\n>\n> Yeah, the problem with functional indexes is that the optimizer doesn't have\n> any clue how the records are distributed since it only has statistics for\n> columns, not your expression. Notice it's estimating 2956 rows where in fact\n> there are 0.\n\nThanks for the explication.\n\n> Given the type of data you're storing, which looks like hex strings, are you\n> sure you need to do a case-insensitive search here? Can't you just uppercase\n> it when you store it?\n\nThat would be great but we store a variety of case insensitive proof of purchase\ncodes here. Some we give to customers in upper case and some in lower case.\nHopefully someday we can redesign it all to just be in uppercase...\n\n> The offset 0 prevents the optimizer from pulling the subquery into the outer\n> query. I think this will prevent it from even considering the order_date index\n> scan, but you'll have to try to be sure.\n\nIt works perfectly - thanks a million!\nStrangely the offset 0 does not seem to make any difference.\nGotta read up more about subqueries :)\n\n explain analyze\n select code,order_date\n from (\n select code, order_date\n from transactions\n where UPPER(pop) = UPPER('c892eb2f877e3a28ddc8e196cd5a8aad')\n limit 1\n ) as foo\n order by order_date DESC;\n--------------------------------------------------\n Sort (cost=3.95..3.96 rows=1 width=33) (actual time=0.14..0.14 rows=1 loops=1)\n Sort Key: order_date\n -> Subquery Scan foo (cost=0.00..3.94 rows=1 width=33) (actual time=0.06..0.07 rows=1 loops=1)\n -> Limit (cost=0.00..3.94 rows=1 width=33) (actual time=0.06..0.06 rows=1 loops=1)\n -> Index Scan using transactions_pop_i on transactions (cost=0.00..11653.84 rows=2956 width=33) (actual\ntime=0.05..0.06 rows=2 loops=1)\n Index Cond: (upper((pop)::text) = 'C892EB2F877E3A28DDC8E196CD5A8AAD'::text)\n Total runtime: 0.20 msec\n(7 rows)\n\n\nexplain analyze\n select code,order_date\n from (\n select code, order_date\n from transactions\n where UPPER(pop) = UPPER('c892eb2f877e3a28ddc8e196cd5a8aad')\n limit 1\n offset 0\n ) as foo\n order by order_date DESC;\n--------------------------------------------------\n Sort (cost=3.95..3.96 rows=1 width=33) (actual time=0.14..0.14 rows=1 loops=1)\n Sort Key: order_date\n -> Subquery Scan foo (cost=0.00..3.94 rows=1 width=33) (actual time=0.06..0.07 rows=1 loops=1)\n -> Limit (cost=0.00..3.94 rows=1 width=33) (actual time=0.06..0.06 rows=1 loops=1)\n -> Index Scan using transactions_pop_i on transactions (cost=0.00..11653.84 rows=2956 width=33) (actual\ntime=0.06..0.06 rows=2 loops=1)\n Index Cond: (upper((pop)::text) = 'C892EB2F877E3A28DDC8E196CD5A8AAD'::text)\n Total runtime: 0.20 msec\n\n\n\n",
"msg_date": "Fri, 30 Jan 2004 10:06:13 +0100",
"msg_from": "\"Alexandra Birch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: limit 1 and functional indexes: SOLVED"
},
{
"msg_contents": "\n\"Alexandra Birch\" <[email protected]> writes:\n\n> It works perfectly - thanks a million!\n> Strangely the offset 0 does not seem to make any difference.\n> Gotta read up more about subqueries :)\n> \n> explain analyze\n> select code,order_date\n> from (\n> select code, order_date\n> from transactions\n> where UPPER(pop) = UPPER('c892eb2f877e3a28ddc8e196cd5a8aad')\n> limit 1\n> ) as foo\n> order by order_date DESC;\n\nI think what you're trying to do here is get the last order? Then you'll want\nthe limit to be on the outer query where it's ordered by order_date:\n\n select code,order_date\n from (\n select code, order_date\n from transactions\n where UPPER(pop) = UPPER('c892eb2f877e3a28ddc8e196cd5a8aad')\n offset 0\n ) as foo\n order by order_date DESC;\n limit 1\n\nNote that in theory postgres should be able to find the same plan for this\nquery as yours since it's equivalent. It really ought to use the order_date\nindex since it thinks it would be more efficient. \n\nHowever it's unable to because postgres doesn't try every possible index, only\nthe ones that look like they'll be useful for a where clause or an order by.\nAnd the order by on the outer query isn't considered when it's looking at the\nsubquery. \n\nIt normally handles this case by merging the subquery into the outer query,\nbut it can't do that if there's a limit or offset. So an \"offset 0\" is\nconvenient for fooling it into thinking the subquery can't be pulled up\nwithout actually changing the output.\n\nYou could do \"order by upper(pop)\" instead which might be clearer for someone\nreading the query in that it makes it look like you're trying to encourage it\nto use the index on upper(pop). In theory \"order by\"s on subqueries are\nuseless and postgres could ignore them, but it doesn't.\n\n-- \ngreg\n\n",
"msg_date": "30 Jan 2004 04:56:22 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limit 1 and functional indexes: SOLVED"
}
] |
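Spelling out Greg's closing suggestion as SQL (a sketch only, not taken verbatim from the thread): an ORDER BY upper(pop) on the subquery also keeps it from being pulled up, just like OFFSET 0, while making the intended index more obvious to a reader:

    SELECT code, order_date
      FROM (SELECT code, order_date
              FROM transactions
             WHERE UPPER(pop) = UPPER('c892eb2f877e3a28ddc8e196cd5a8aad')
             ORDER BY UPPER(pop)) AS foo
     ORDER BY order_date DESC
     LIMIT 1;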
[
{
"msg_contents": "do views exist fysically a separate \"table\",\nor are they generated on the fly whenever they are queried?\n\nif they exist fysically they could improve performance (..php, web),\nfor example a view being a join between two or more tables..\n\ntnx?\n\n\n",
"msg_date": "Wed, 28 Jan 2004 19:56:46 +0100",
"msg_from": "\"Loeke\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "views?"
},
{
"msg_contents": "\"Loeke\" <[email protected]> writes:\n> do views exist fysically a separate \"table\", or are they generated\n> on the fly whenever they are queried?\n\nViews are implementing by rewriting queries into the appropriate query\non the view's base tables.\n\nhttp://www.postgresql.org/docs/current/static/rules-views.html\n\n> if they exist fysically they could improve performance (..php, web),\n\nThis is called a \"materialized view\". PostgreSQL doesn't support them\nyet, but most people think it would be a Good Thing to have.\n\n-Neil\n\n",
"msg_date": "Fri, 30 Jan 2004 23:35:31 -0500",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: views?"
},
{
"msg_contents": "On Saturday 31 January 2004 04:35, Neil Conway wrote:\n>\n> This is called a \"materialized view\". PostgreSQL doesn't support them\n> yet, but most people think it would be a Good Thing to have.\n\nThere is a project on gborg (called \"mview\" iirc) though I don't know how far \nit's got - I think it's still pretty new.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 2 Feb 2004 09:48:21 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: views?"
},
{
"msg_contents": "> > This is called a \"materialized view\". PostgreSQL doesn't support them\n> > yet, but most people think it would be a Good Thing to have.\n>\n> There is a project on gborg (called \"mview\" iirc) though I don't know how\nfar\n> it's got - I think it's still pretty new.\n\ntnx\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n\n",
"msg_date": "Mon, 2 Feb 2004 22:26:43 +0100",
"msg_from": "\"Loeke\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: views?"
}
] |
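Until materialized views arrive, a common workaround is a plain table refreshed from the view's defining query; a rough sketch follows (all table and column names here are invented for illustration):

    -- build the "materialized" copy once
    CREATE TABLE order_totals_mv AS
      SELECT customer_id, count(*) AS orders, sum(amount) AS total
        FROM orders
       GROUP BY customer_id;

    -- refresh it periodically, e.g. from cron
    BEGIN;
    DELETE FROM order_totals_mv;
    INSERT INTO order_totals_mv
      SELECT customer_id, count(*), sum(amount)
        FROM orders
       GROUP BY customer_id;
    COMMIT;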
[
{
"msg_contents": "Couple quick questions.\n\n1. Turning autocommit=FALSE is most useful when using COPY to load\n data to a table and could impact the performance of INSERT of\n similiar data on the same table.\n\n2. What comparison operator is the most efficient when comparing\n text or varchar? Today the SELECT I use to compare rows in this\n table do a simple row comparison\n\n (A.c1,A.c2,A.c3,...)<>(B.c1,B.c2,B.c3,...)\n\n but there is a mix of int8, varchar, text in there. Just fishing\n to see if I should instead use\n\n ((A.i1,A.i2,A.i3)<>(B.i1,B.i2,B.i3)AND A.t1!=B.t1)\n\n or\n\n ((A.i1,A.i2,A.i3)<>(B.i1,B.i2,B.i3)AND compare_func(A.t1,B.t1))\n\n where columns iX are integer and tX are text or varchar.\n\nWhile I'm asking, how is TEXT and VARCHAR compared? Byte by Byte until\nthere's an inequality?\n\nJust checking.\n\nTy,\nGreg\n\n-- \nGreg Spiegelberg\n Sr. Product Development Engineer\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nCranel. Technology. Integrity. Focus.\n\n\n",
"msg_date": "Wed, 28 Jan 2004 14:49:41 -0500",
"msg_from": "Greg Spiegelberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimization questions"
}
] |
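No answer to this message made it into the archive. For reference, the two shapes being weighed look roughly like the sketch below when written out in full queries (table and column names are placeholders); note that a row comparison is true when at least one column pair differs, so the split form needs OR rather than AND to stay equivalent (NULL handling aside):

    -- full row comparison: true when at least one column differs
    SELECT a.key
      FROM t_a a JOIN t_b b USING (key)
     WHERE (a.i1, a.i2, a.i3, a.t1) <> (b.i1, b.i2, b.i3, b.t1);

    -- split form, comparing the integer columns separately from the text column
    SELECT a.key
      FROM t_a a JOIN t_b b USING (key)
     WHERE (a.i1, a.i2, a.i3) <> (b.i1, b.i2, b.i3) OR a.t1 <> b.t1;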
[
{
"msg_contents": "Anybody used Linux with EMC Clariions for PG databases?\n\nAny good war stories, pros, cons, performance results ?\n\nI'm wearing thin on my 6 disk 0+1 configuration and looking for\nsomething beefy, possibly for clustering, and I'm wondering what the net\nwisdom is. :)\n\nthanks!\n\n\n",
"msg_date": "Wed, 28 Jan 2004 14:32:53 -0700",
"msg_from": "Cott Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux / Clariion"
},
{
"msg_contents": "Cott Lang wrote:\n\n>Anybody used Linux with EMC Clariions for PG databases?\n>\n>Any good war stories, pros, cons, performance results ?\n>\n>I'm wearing thin on my 6 disk 0+1 configuration and looking for\n>something beefy, possibly for clustering, and I'm wondering what the net\n>wisdom is. :)\n>\n>thanks!\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n> \n>\nHeya,\n\nWe are currently using a Dell badged Clariion FC4500 running at 1Gb \nfibre channel. It has 20 HDD in there at the moment, and another 5 disks \nare coming in the next couple of weeks. These disks are split into \nseveral RAID 5 arrays, each about 200Gb. We are using QLogic HBA's under \nRH Linux. We have had a couple of problems, one driver based which was \nresolved by a kernel upgrade. They also dont seem to like changing IP \naddresses of the servers, Navisphere wanted the servers to be \nreregistered before it started working properly.\n\nIn terms of performance, its 1Gb fibre to the disk, U320 SCSI hot swap, \n10k disks. They run very fast and apart from the configuration issues \nabove have never given us any grief. The LUN's have been running \nflawlessly for over a year (touch wood). We just need some beefier boxes \nto take advantage of their speed. I am thinking of proposing one or more \nQuad Opterons with 32Gb RAM ;-) That should do the trick i reckon.\n\nWe can try and run some benchmarks on one of the spare machines if you \nwant, if so send through some examples of pg_bench parameters that are \nappropriate. Anyone got any useful sets? We are currently running PG 7.3 \n(7.3.1 or 7.3.2 I think) at the moment on that box.\n\nTrouble is with this model of Clariion, EMC is apparently setting EOL \nfor 2005 sometime, which we aint to pleased about. Never mind, hopefully \nwe will have some more money by that time......\n\nAny other info just say and I will see what I can dig up.\n\nNick Barr\n\n\n\n",
"msg_date": "Wed, 28 Jan 2004 22:08:40 +0000",
"msg_from": "Nick Barr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux / Clariion"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've got a query that needs some help, please. Is there a way to avoid\nall the looping? I've got freedom to work with the double-indented\nsections below ) AND (, but the initial select distinct wrapper is much\nmore difficult to change. This is auto-generated code.\n\nexplain analyze SELECT DISTINCT members_.emailaddr_, members_.memberid_ \nFROM members_ WHERE ( \n\tmembers_.List_='list1' \n\tAND members_.MemberType_='normal' \n\tAND members_.SubType_='mail' \n\tAND members_.emailaddr_ IS NOT NULL \n\t) AND (\n\t\t( select count(*) from lyrActiveRecips, members_ a, outmail_ \n\t\twhere lyrActiveRecips.UserName = a.UserNameLC_ \n\t\tand lyrActiveRecips.Domain = a.Domain_ \n\t\tand a.MemberID_ = members_.MemberID_ \n\t\tand outmail_.MessageID_ = lyrActiveRecips.MailingID \n\t\tand outmail_.Type_ = 'list' \n\t\tand lyrActiveRecips.NextAttempt > '2004-01-20 00:00:00' \n\t\t)\n\t\t + \n\t\t( select count(*) from lyrCompletedRecips, members_ a, outmail_ \n\t\twhere a.MemberID_ = lyrCompletedRecips.MemberID \n\t\tand a.UserNameLC_ = members_.UserNameLC_ \n\t\tand a.Domain_ = members_.Domain_ \n\t\tand outmail_.MessageID_ = lyrCompletedRecips.MailingID \n\t\tand outmail_.Type_ = 'list' \n\t\tand lyrCompletedRecips.FinalAttempt > '2004-01-20 00:00:00' \n\t\tand lyrCompletedRecips.CompletionStatusID = 300 ) \n\t\t = 3 \n\t) \n;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=537.06..537.07 rows=1 width=72) (actual\ntime=114460.908..114460.908 rows=0 loops=1)\n -> Sort (cost=537.06..537.06 rows=1 width=72) (actual\ntime=114460.905..114460.905 rows=0 loops=1)\n Sort Key: emailaddr_, memberid_\n -> Index Scan using ix_members_list_notifyerr on members_ \n(cost=0.00..537.05 rows=1 width=72) (actual time=114460.893..114460.893\nrows=0 loops=1)\n Index Cond: ((list_)::text = 'list1'::text)\n Filter: (((membertype_)::text = 'normal'::text) AND\n((subtype_)::text = 'mail'::text) AND (emailaddr_ IS NOT NULL) AND\n(((subplan) + (subplan)) = 3))\n SubPlan\n -> Aggregate (cost=52.39..52.39 rows=1 width=0)\n(actual time=0.089..0.090 rows=1 loops=818122)\n -> Hash Join (cost=47.55..52.39 rows=1 width=0)\n(actual time=0.086..0.086 rows=0 loops=818122)\n Hash Cond: (\"outer\".memberid_ =\n\"inner\".memberid)\n -> Index Scan using ix_members_emaillc on\nmembers_ a (cost=0.00..4.83 rows=1 width=4) (actual time=0.077..0.081\nrows=1 loops=818122)\n Index Cond: (((domain_)::text =\n($2)::text) AND ((usernamelc_)::text = ($1)::text))\n -> Hash (cost=47.55..47.55 rows=1\nwidth=4) (actual time=0.025..0.025 rows=0 loops=1)\n -> Hash Join (cost=25.00..47.55\nrows=1 width=4) (actual time=0.023..0.023 rows=0 loops=1)\n Hash Cond: (\"outer\".messageid_\n= \"inner\".mailingid)\n -> Seq Scan on outmail_ \n(cost=0.00..22.50 rows=6 width=4) (actual time=0.001..0.001 rows=0\nloops=1)\n Filter: ((type_)::text =\n'list'::text)\n -> Hash (cost=25.00..25.00\nrows=2 width=8) (actual time=0.003..0.003 rows=0 loops=1)\n -> Seq Scan on\nlyrcompletedrecips (cost=0.00..25.00 rows=2 width=8) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Filter:\n((finalattempt > '2004-01-20 00:00:00'::timestamp without time zone) AND\n(completionstatusid = 300))\n -> Aggregate (cost=51.59..51.59 rows=1 width=0)\n(actual time=0.033..0.034 rows=1 loops=818122)\n -> Hash Join (cost=27.35..51.59 rows=1 width=0)\n(actual time=0.028..0.028 rows=0 loops=818122)\n Hash Cond: 
(((\"outer\".username)::text =\n(\"inner\".usernamelc_)::text) AND ((\"outer\".\"domain\")::text =\n(\"inner\".domain_)::text))\n -> Hash Join (cost=22.52..46.72 rows=3\nwidth=211) (actual time=0.003..0.003 rows=0 loops=818122)\n Hash Cond: (\"outer\".mailingid =\n\"inner\".messageid_)\n -> Seq Scan on lyractiverecips \n(cost=0.00..22.50 rows=334 width=215) (actual time=0.001..0.001 rows=0\nloops=818122)\n Filter: (nextattempt >\n'2004-01-20 00:00:00'::timestamp without time zone)\n -> Hash (cost=22.50..22.50 rows=6\nwidth=4) (actual time=0.003..0.003 rows=0 loops=1)\n -> Seq Scan on outmail_ \n(cost=0.00..22.50 rows=6 width=4) (actual time=0.002..0.002 rows=0\nloops=1)\n Filter: ((type_)::text =\n'list'::text)\n -> Hash (cost=4.82..4.82 rows=2\nwidth=211) (actual time=0.017..0.017 rows=0 loops=818122)\n -> Index Scan using pk_members_ on\nmembers_ a (cost=0.00..4.82 rows=2 width=211) (actual time=0.011..0.013\nrows=1 loops=818122)\n Index Cond: (memberid_ = $0)\n Total runtime: 114474.407 ms\n(34 rows)\n\nthat's with no data in lyractiverecips or lyrcompletedrecips. With data\nin those tables, the query still hasn't completed after several hours on\ntwo different machines.\n\nthanks,\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n",
"msg_date": "Wed, 28 Jan 2004 17:05:54 -0800",
"msg_from": "Jack Coates <[email protected]>",
"msg_from_op": true,
"msg_subject": "query optimization question"
},
{
"msg_contents": "Jack Coates <[email protected]> writes:\n> I've got a query that needs some help, please. Is there a way to avoid\n> all the looping? I've got freedom to work with the double-indented\n> sections below ) AND (, but the initial select distinct wrapper is much\n> more difficult to change. This is auto-generated code.\n\nWell, you're not going to get any serious improvement without a\nwholesale rewrite of the query --- I'd think that something driven by\na GROUP BY memberid_ HAVING count(*) = whatever at the outer level would\nbe a better way to approach it. As you have it, the system has no\nchoice but to fully evaluate two very expensive subselects, from scratch,\nfor each outer row.\n\nHowever...\n\n> \t\t( select count(*) from lyrActiveRecips, members_ a, outmail_ \n> \t\twhere lyrActiveRecips.UserName = a.UserNameLC_ \n> \t\tand lyrActiveRecips.Domain = a.Domain_ \n> \t\tand a.MemberID_ = members_.MemberID_ \n> \t\tand outmail_.MessageID_ = lyrActiveRecips.MailingID \n\nIs memberid_ a unique identifier for members_, as one would think from\nthe name? If so, can't you drop the join of members_ a in this\nsubselect, and just use the corresponding fields from the outer table?\n\n> \t\t( select count(*) from lyrCompletedRecips, members_ a, outmail_\n> \t\twhere a.MemberID_ = lyrCompletedRecips.MemberID \n> \t\tand a.UserNameLC_ = members_.UserNameLC_ \n> \t\tand a.Domain_ = members_.Domain_ \n> \t\tand outmail_.MessageID_ = lyrCompletedRecips.MailingID \n\nWhy are the join conditions different here from the other subselect?\nCan't you rephrase them the same as above, and then again remove the\ninner appearance of members_ ?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 28 Jan 2004 21:04:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization question "
},
{
"msg_contents": "On Wed, 2004-01-28 at 18:04, Tom Lane wrote:\n> Jack Coates <[email protected]> writes:\n> > I've got a query that needs some help, please. Is there a way to avoid\n> > all the looping? I've got freedom to work with the double-indented\n> > sections below ) AND (, but the initial select distinct wrapper is much\n> > more difficult to change. This is auto-generated code.\n> \n> Well, you're not going to get any serious improvement without a\n> wholesale rewrite of the query --- I'd think that something driven by\n> a GROUP BY memberid_ HAVING count(*) = whatever at the outer level would\n> be a better way to approach it. As you have it, the system has no\n> choice but to fully evaluate two very expensive subselects, from scratch,\n> for each outer row.\n> \n\nI hear you. There's definitely an understanding that this tool can\ngenerate some gnarly queries, and we want to redesign in a way that will\nallow some more intelligence to be applied to the problem. In the\nmeantime, I'll be happy if PG grinds at the same level as other\ndatabases. MS-SQL completed that query in 25 minutes on a database with\n31 times the data in it. Since I'm one of the bigger *nix fans around\nhere, that doesn't make me happy.\n\n> However...\n> \n> > \t\t( select count(*) from lyrActiveRecips, members_ a, outmail_ \n> > \t\twhere lyrActiveRecips.UserName = a.UserNameLC_ \n> > \t\tand lyrActiveRecips.Domain = a.Domain_ \n> > \t\tand a.MemberID_ = members_.MemberID_ \n> > \t\tand outmail_.MessageID_ = lyrActiveRecips.MailingID \n> \n> Is memberid_ a unique identifier for members_, as one would think from\n> the name? If so, can't you drop the join of members_ a in this\n> subselect, and just use the corresponding fields from the outer table?\n> \n> > \t\t( select count(*) from lyrCompletedRecips, members_ a, outmail_\n> > \t\twhere a.MemberID_ = lyrCompletedRecips.MemberID \n> > \t\tand a.UserNameLC_ = members_.UserNameLC_ \n> > \t\tand a.Domain_ = members_.Domain_ \n> > \t\tand outmail_.MessageID_ = lyrCompletedRecips.MailingID \n> \n> Why are the join conditions different here from the other subselect?\n> Can't you rephrase them the same as above, and then again remove the\n> inner appearance of members_ ?\n> \n> \t\t\tregards, tom lane\n\nunfortunately, the column names are different between lyrcompletedrecips\nand lyractiverecips. However, one thing we were able to do is to reduce\nthe number of queries by not trying to match across multiple lists.\n\nSELECT DISTINCT members_.emailaddr_, members_.memberid_ FROM members_ \nWHERE ( members_.List_='list1' \n\tAND members_.MemberType_='normal' \n\tAND members_.SubType_='mail' \n\tAND members_.emailaddr_ IS NOT NULL ) \nAND ( \n\t( select count(*) from lyrActiveRecips, outmail_ \n\twhere outmail_.MessageID_ = lyrActiveRecips.MailingID \n\tand outmail_.Type_ = 'list' \n\tand members_.MemberID_ = lyrActiveRecips.MemberID \n\tand lyrActiveRecips.NextAttempt > '2004-01-20 00:00:00' )\n\t + \n\t( select count(*) from lyrCompletedRecips, outmail_ \n\twhere members_.MemberID_ = lyrCompletedRecips.MemberID \n\tand outmail_.MessageID_ = lyrCompletedRecips.MailingID \n\tand outmail_.Type_ = 'list' \n\tand lyrCompletedRecips.FinalAttempt > '2004-01-20 00:00:00' \n\tand lyrCompletedRecips.CompletionStatusID = 300 )\n\t = 3\n);\n\nThat completed in 3.5 minutes on MS-SQL. I killed the query this morning\nafter 15 hours on PostgreSQL 7.4. I tried a GROUP BY memberid_ HAVING\nvariation, which completed in 59 seconds on MS-SQL. 
I killed it after 35\nminutes on PostgreSQL.\n\nOn a more positive note, if you remember the benchmarking I was doing\nlast month, PostgreSQL got some pretty good relative numbers. It\nrequires a lot of hand-holding and tuning relative to MS-SQL, but it\ncertainly beat the pants off of Oracle 8 and 9 for speed and ease of\nmanagement. Oracle 8 was in fact unable to complete the uglier stress\ntests. I'll be working on a tuning recommendations white paper today.\n\nthanks for all the help,\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n",
"msg_date": "Thu, 29 Jan 2004 09:23:00 -0800",
"msg_from": "Jack Coates <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query optimization question"
},
{
"msg_contents": "Jack Coates <[email protected]> writes:\n> That completed in 3.5 minutes on MS-SQL. I killed the query this morning\n> after 15 hours on PostgreSQL 7.4. I tried a GROUP BY memberid_ HAVING\n> variation, which completed in 59 seconds on MS-SQL. I killed it after 35\n> minutes on PostgreSQL.\n\nHm. I'd like to think that 7.4 would be competitive on grouping\nqueries. What sort of plan did you get from it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 29 Jan 2004 13:05:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization question "
},
{
"msg_contents": "On Thu, 2004-01-29 at 10:05, Tom Lane wrote:\n> Jack Coates <[email protected]> writes:\n> > That completed in 3.5 minutes on MS-SQL. I killed the query this morning\n> > after 15 hours on PostgreSQL 7.4. I tried a GROUP BY memberid_ HAVING\n> > variation, which completed in 59 seconds on MS-SQL. I killed it after 35\n> > minutes on PostgreSQL.\n> \n> Hm. I'd like to think that 7.4 would be competitive on grouping\n> queries. What sort of plan did you get from it?\n\nComparable to the first plan.\n\njackdb=# explain SELECT DISTINCT members_.memberid_ \njackdb-# FROM members_ \njackdb-# WHERE ( members_.List_='list1' \njackdb(# AND members_.MemberType_='normal' \njackdb(# AND members_.SubType_='mail' \njackdb(# AND members_.emailaddr_ IS NOT NULL ) \njackdb-# GROUP BY memberid_ HAVING ( \njackdb(# ( select count(*) from lyrActiveRecips, outmail_ \njackdb(# where outmail_.MessageID_ = lyrActiveRecips.MailingID \njackdb(# and outmail_.Type_ = 'list' \njackdb(# and members_.MemberID_ = lyrActiveRecips.MemberID \njackdb(# and lyrActiveRecips.NextAttempt > '2004-01-20 00:00:00' ) \njackdb(# + \njackdb(# ( select count(*) from lyrCompletedRecips, outmail_ \njackdb(# where members_.MemberID_ = lyrCompletedRecips.MemberID \njackdb(# and outmail_.MessageID_ = lyrCompletedRecips.MailingID \njackdb(# and outmail_.Type_ = 'list' \njackdb(# and lyrCompletedRecips.FinalAttempt > '2004-01-20 00:00:00' \njackdb(# and lyrCompletedRecips.CompletionStatusID = 300 ) \njackdb(# = 3 );\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=453.08..453.09 rows=1 width=4)\n -> Group (cost=453.08..453.09 rows=1 width=4)\n -> Sort (cost=453.08..453.08 rows=1 width=4)\n Sort Key: memberid_\n -> Index Scan using ix_members_list_notifyerr on\nmembers_ (cost=0.00..453.07 rows=1 width=4)\n Index Cond: ((list_)::text = 'list1'::text)\n Filter: (((membertype_)::text = 'normal'::text) AND\n((subtype_)::text = 'mail'::text) AND (emailaddr_ IS NOT NULL) AND\n(((subplan) + (subplan)) = 3))\n SubPlan\n -> Aggregate (cost=39.64..39.64 rows=1 width=0)\n -> Hash Join (cost=17.10..39.64 rows=1\nwidth=0)\n Hash Cond: (\"outer\".messageid_ =\n\"inner\".mailingid)\n -> Seq Scan on outmail_ \n(cost=0.00..22.50 rows=6 width=4)\n Filter: ((type_)::text =\n'list'::text)\n -> Hash (cost=17.09..17.09 rows=1\nwidth=4)\n -> Index Scan using\nix_completedrecipsmemberid on lyrcompletedrecips (cost=0.00..17.09\nrows=1 width=4)\n Index Cond: ($0 =\nmemberid)\n Filter: ((finalattempt >\n'2004-01-20 00:00:00'::timestamp without time zone) AND\n(completionstatusid = 300))\n -> Aggregate (cost=47.55..47.55 rows=1 width=0)\n -> Hash Join (cost=25.00..47.55 rows=1\nwidth=0)\n Hash Cond: (\"outer\".messageid_ =\n\"inner\".mailingid)\n -> Seq Scan on outmail_ \n(cost=0.00..22.50 rows=6 width=4)\n Filter: ((type_)::text =\n'list'::text)\n -> Hash (cost=25.00..25.00 rows=2\nwidth=4)\n -> Seq Scan on\nlyractiverecips (cost=0.00..25.00 rows=2 width=4)\n Filter: (($0 = memberid)\nAND (nextattempt > '2004-01-20 00:00:00'::timestamp without time zone))\n(25 rows)\n\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n",
"msg_date": "Thu, 29 Jan 2004 11:04:55 -0800",
"msg_from": "Jack Coates <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query optimization question"
},
{
"msg_contents": "Jack Coates <[email protected]> writes:\n> jackdb=# explain SELECT DISTINCT members_.memberid_ \n> jackdb-# FROM members_ \n> jackdb-# WHERE ( members_.List_='list1' \n> jackdb(# AND members_.MemberType_='normal' \n> jackdb(# AND members_.SubType_='mail' \n> jackdb(# AND members_.emailaddr_ IS NOT NULL ) \n> jackdb-# GROUP BY memberid_ HAVING ( \n\nUm, that's not what I had in mind at all. Does GROUP BY actually do\nanything at all here? (You didn't answer me as to whether memberid_\nis a unique identifier or not, but if it is, this GROUP BY is just an\nexpensive no-op.)\n\nWhat I was envisioning was pulling the sub-selects up to the top level\nand using grouping to calculate the count(*) values for all memberids\nin parallel. Roughly speaking it would look like (again assuming\nmemberid_ is unique)\n\nSELECT memberid_ FROM\n(\n SELECT memberid_ FROM lyrActiveRecips, members_, outmail\n WHERE (all the conditions for this case)\n UNION ALL\n SELECT memberid_ FROM lyrCompletedRecips, members_, outmail\n WHERE (all the conditions for this case)\n)\nGROUP BY memberid_ HAVING count(*) = 3;\n\nHowever, if you can't change the boilerplate part of your query then\nthis is all blue-sky speculation anyway. What I'm actually more\ninterested in is your statement that MSSQL can do the original query\nquickly. I find that a bit hard to believe because I don't see any\nrelevant optimization techniques. Do they have any equivalent to\nEXPLAIN that would give some hint how they're doing it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 29 Jan 2004 14:31:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization question "
},
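A slightly more concrete version of Tom's sketch, with the subquery alias PostgreSQL requires and the conditions filled in from Jack's simplified query (still only a sketch; the join columns and the placement of the members_ filters are assumptions):

    SELECT m.memberid_
      FROM members_ m
      JOIN (SELECT r.MemberID AS memberid
              FROM lyrActiveRecips r, outmail_ o
             WHERE o.MessageID_ = r.MailingID
               AND o.Type_ = 'list'
               AND r.NextAttempt > '2004-01-20 00:00:00'
            UNION ALL
            SELECT c.MemberID
              FROM lyrCompletedRecips c, outmail_ o
             WHERE o.MessageID_ = c.MailingID
               AND o.Type_ = 'list'
               AND c.FinalAttempt > '2004-01-20 00:00:00'
               AND c.CompletionStatusID = 300
           ) AS recips ON recips.memberid = m.MemberID_
     WHERE m.List_ = 'list1'
       AND m.MemberType_ = 'normal'
       AND m.SubType_ = 'mail'
       AND m.emailaddr_ IS NOT NULL
     GROUP BY m.memberid_
    HAVING count(*) = 3;

This evaluates each recipient subquery once for the whole table instead of once per outer row, which is the point of Tom's rewrite.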
{
"msg_contents": "On Thu, 2004-01-29 at 11:31, Tom Lane wrote:\n> Jack Coates <[email protected]> writes:\n> > jackdb=# explain SELECT DISTINCT members_.memberid_ \n> > jackdb-# FROM members_ \n> > jackdb-# WHERE ( members_.List_='list1' \n> > jackdb(# AND members_.MemberType_='normal' \n> > jackdb(# AND members_.SubType_='mail' \n> > jackdb(# AND members_.emailaddr_ IS NOT NULL ) \n> > jackdb-# GROUP BY memberid_ HAVING ( \n> \n> Um, that's not what I had in mind at all. Does GROUP BY actually do\n> anything at all here? (You didn't answer me as to whether memberid_\n> is a unique identifier or not, but if it is, this GROUP BY is just an\n> expensive no-op.)\n> \n\nSorry for the misunderstanding. It should be unique, yes.\n\n> What I was envisioning was pulling the sub-selects up to the top level\n> and using grouping to calculate the count(*) values for all memberids\n> in parallel. Roughly speaking it would look like (again assuming\n> memberid_ is unique)\n> \n> SELECT memberid_ FROM\n> (\n> SELECT memberid_ FROM lyrActiveRecips, members_, outmail\n> WHERE (all the conditions for this case)\n> UNION ALL\n> SELECT memberid_ FROM lyrCompletedRecips, members_, outmail\n> WHERE (all the conditions for this case)\n> )\n> GROUP BY memberid_ HAVING count(*) = 3;\n> \n> However, if you can't change the boilerplate part of your query then\n> this is all blue-sky speculation anyway. \n\nGot it now -- I'm running into some subquery errors trying to implement\nthis, anyway.\n\n> What I'm actually more\n> interested in is your statement that MSSQL can do the original query\n> quickly. I find that a bit hard to believe because I don't see any\n> relevant optimization techniques. Do they have any equivalent to\n> EXPLAIN that would give some hint how they're doing it?\n\nyup -- here it is. 
It will probably be a nasty mess after linewrap gets\ndone with it, so let me know if you'd like me to post a copy on ftp.\n\nSELECT DISTINCT members_.memberid_ FROM members_ WHERE (\nmembers_.List_='list1' AND members_.MemberType_='normal' AND\nmembers_.SubType_='mail' ) GROUP BY memberid_ HAVING ( ( select\ncount(*) from lyrActiveRecips, outmail_ where\noutmail\t11\t1\t0\tNULL\tNULL\t1\tNULL\t102274.5\tNULL\tNULL\tNULL\t104.10356\tNULL\tNULL\tSELECT\t0\tNULL\n |--Parallelism(Gather Streams)\t11\t2\t1\tParallelism\tGather\nStreams\tNULL\tNULL\t102274.5\t0.0\t0.22011127\t23\t104.10356\t[members_].[MemberID_]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Filter(WHERE:(If ([Expr1006] IS NULL) then 0 else\n[Expr1006]+If ([Expr1012] IS NULL) then 0 else\n[Expr1012]=3))\t11\t3\t2\tFilter\tFilter\tWHERE:(If ([Expr1006] IS NULL) then\n0 else [Expr1006]+If ([Expr1012] IS NULL) then 0 else\n[Expr1012]=3)\tNULL\t102274.5\t0.0\t3.5393338\t23\t103.88345\t[members_].[MemberID_]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Hash Match(Right Outer Join,\nHASH:([lyrCompletedRecips].[MemberID])=([members_].[MemberID_]),\nRESIDUAL:([members_].[MemberID_]=[lyrCompletedRecips].[MemberID]))\t11\t4\t3\tHash Match\tRight Outer Join\tHASH:([lyrCompletedRecips].[MemberID])=([members_].[MemberID_]), RESIDUAL:([members_].[MemberID_]=[lyrCompletedRecips].[MemberID])\tNULL\t4782883.5\t0.0\t21.874712\t23\t100.34412\t[members_].[MemberID_], [Expr1006], [Expr1012]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Compute\nScalar(DEFINE:([Expr1012]=Convert([Expr1020])))\t11\t5\t4\tCompute\nScalar\tCompute\nScalar\tDEFINE:([Expr1012]=Convert([Expr1020]))\t[Expr1012]=Convert([Expr1020])\t119575.35\t0.0\t1.3723248\t15\t4.3749919\t[lyrCompletedRecips].[MemberID], [Expr1012]\tNULL\tPLAN_ROW\t-1\t1.0\n | |--Hash Match(Aggregate,\nHASH:([lyrCompletedRecips].[MemberID]),\nRESIDUAL:([lyrCompletedRecips].[MemberID]=[lyrCompletedRecips].[MemberID]) DEFINE:([Expr1020]=COUNT(*)))\t11\t6\t5\tHash Match\tAggregate\tHASH:([lyrCompletedRecips].[MemberID]), RESIDUAL:([lyrCompletedRecips].[MemberID]=[lyrCompletedRecips].[MemberID])\t[Expr1020]=COUNT(*)\t119575.35\t0.0\t1.3723248\t15\t4.3749919\t[lyrCompletedRecips].[MemberID], [Expr1020]\tNULL\tPLAN_ROW\t-1\t1.0\n | |--Parallelism(Repartition Streams, PARTITION\nCOLUMNS:([lyrCompletedRecips].[MemberID]))\t11\t7\t6\tParallelism\tRepartition Streams\tPARTITION COLUMNS:([lyrCompletedRecips].[MemberID])\tNULL\t119640.6\t0.0\t0.32407209\t173\t3.002667\t[lyrCompletedRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t1.0\n | |--Nested Loops(Inner Join, OUTER\nREFERENCES:([outmail_].[MessageID_]))\t11\t8\t7\tNested Loops\tInner\nJoin\tOUTER\nREFERENCES:([outmail_].[MessageID_])\tNULL\t119640.6\t0.0\t0.75014657\t173\t2.6785948\t[lyrCompletedRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t1.0\n | |--Parallelism(Distribute\nStreams)\t11\t9\t8\tParallelism\tDistribute\nStreams\tNULL\tNULL\t1.0\t0.0\t2.8501874E-2\t128\t9.4664574E-2\t[outmail_].[MessageID_]\tNULL\tPLAN_ROW\t-1\t1.0\n | | |--Clustered Index\nScan(OBJECT:([lmdb].[dbo].[outmail_].[IX_outmail_list]),\nWHERE:([outmail_].[Type_]='list'))\t11\t10\t9\tClustered Index\nScan\tClustered Index\nScan\tOBJECT:([lmdb].[dbo].[outmail_].[IX_outmail_list]),\nWHERE:([outmail_].[Type_]='list')\t[outmail_].[Type_],\n[outmail_].[MessageID_]\t1.0\t0.01878925\t3.9800001E-5\t128\t3.7658099E-2\t[outmail_].[Type_], [outmail_].[MessageID_]\tNULL\tPLAN_ROW\t0\t1.0\n | |--Clustered Index\nSeek(OBJECT:([lmdb].[dbo].[lyrCompletedRecips].[IX_CompletedRecipsMailingID]), SEEK:([lyrCompletedRecips].[MailingID]=[outmail_].[MessageID_]), 
WHERE:([lyrCompletedRecips].[CompletionStatusID]=300 AN\t11\t11\t8\tClustered Index Seek\tClustered Index Seek\tOBJECT:([lmdb].[dbo].[lyrCompletedRecips].[IX_CompletedRecipsMailingID]), SEEK:([lyrCompletedRecips].[MailingID]=[outmail_].[MessageID_]), WHERE:([lyrCompletedRecips].[CompletionStatusID]=300 AND [lyrCompletedRecips].[FinalAttempt]>'Jan 20 2004 12:00AM') \t[lyrCompletedRecips].[CompletionStatusID], [lyrCompletedRecips].[FinalAttempt], [lyrCompletedRecips].[MemberID]\t119640.6\t0.5750553\t0.13207871\t53\t1.5463468\t[lyrCompletedRecips].[CompletionStatusID], [lyrCompletedRecips].[FinalAttempt], [lyrCompletedRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t3.0\n |--Parallelism(Repartition Streams, PARTITION\nCOLUMNS:([members_].[MemberID_]))\t11\t19\t4\tParallelism\tRepartition\nStreams\tPARTITION\nCOLUMNS:([members_].[MemberID_])\tNULL\t4782883.5\t0.0\t15.474822\t19\t74.094414\t[members_].[MemberID_], [Expr1006]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Nested Loops(Left Outer Join,\nWHERE:([members_].[MemberID_]=[lyrActiveRecips].[MemberID]))\t11\t20\t19\tNested Loops\tLeft Outer Join\tWHERE:([members_].[MemberID_]=[lyrActiveRecips].[MemberID])\tNULL\t4782883.5\t0.0\t9.9962263\t19\t58.619591\t[members_].[MemberID_], [Expr1006]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Clustered Index\nSeek(OBJECT:([lmdb].[dbo].[members_].[IX_members_List_EmailLC]),\nSEEK:([members_].[List_]='list1'), \nWHERE:([members_].[MemberType_]='normal' AND\n[members_].[SubType_]='mail') ORDERED FORWARD)\t11\t22\t20\tClustered Index\nSeek\tClustered Index\nSeek\tOBJECT:([lmdb].[dbo].[members_].[IX_members_List_EmailLC]),\nSEEK:([members_].[List_]='list1'), \nWHERE:([members_].[MemberType_]='normal' AND\n[members_].[SubType_]='mail') ORDERED FORWARD\t[members_].[SubType_],\n[members_].[MemberType_],\n[members_].[MemberID_]\t4782883.5\t40.160122\t3.2745986\t410\t43.434719\t[members_].[SubType_], [members_].[MemberType_], [members_].[MemberID_]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Table Spool\t11\t24\t20\tTable Spool\tLazy\nSpool\tNULL\tNULL\t1.0\t1.6756756E-2\t3.7999999E-7\t15\t0.90211391\t[lyrActiveRecips].[MemberID], [Expr1006]\tNULL\tPLAN_ROW\t-1\t4782883.5\n |--Compute\nScalar(DEFINE:([Expr1006]=Convert([Expr1021])))\t11\t25\t24\tCompute\nScalar\tCompute\nScalar\tDEFINE:([Expr1006]=Convert([Expr1021]))\t[Expr1006]=Convert([Expr1021])\t1.0\t0.0\t7.6000001E-6\t15\t2.4437904E-2\t[lyrActiveRecips].[MemberID], [Expr1006]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Stream Aggregate(GROUP\nBY:([lyrActiveRecips].[MemberID])\nDEFINE:([Expr1021]=Count(*)))\t11\t26\t25\tStream Aggregate\tAggregate\tGROUP\nBY:([lyrActiveRecips].[MemberID])\t[Expr1021]=Count(*)\t1.0\t0.0\t7.6000001E-6\t15\t2.4437904E-2\t[lyrActiveRecips].[MemberID], [Expr1021]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Sort(ORDER\nBY:([lyrActiveRecips].[MemberID] ASC))\t11\t27\t26\tSort\tSort\tORDER\nBY:([lyrActiveRecips].[MemberID]\nASC)\tNULL\t1.0\t1.1261261E-2\t1.00011E-4\t11\t2.4430305E-2\t[lyrActiveRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t1.0\n \n|--Filter(WHERE:([outmail_].[Type_]='list'))\t11\t28\t27\tFilter\tFilter\tWHERE:([outmail_].[Type_]='list')\tNULL\t1.0\t0.0\t4.7999998E-7\t156\t1.3069032E-2\t[lyrActiveRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Bookmark\nLookup(BOOKMARK:([Bmk1004]),\nOBJECT:([lmdb].[dbo].[outmail_]))\t11\t29\t28\tBookmark Lookup\tBookmark\nLookup\tBOOKMARK:([Bmk1004]),\nOBJECT:([lmdb].[dbo].[outmail_])\t[outmail_].[Type_]\t1.0\t3.1249749E-3\t0.0000011\t156\t1.3068552E-2\t[lyrActiveRecips].[MemberID], [outmail_].[Type_]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Nested\nLoops(Inner Join, 
OUTER\nREFERENCES:([lyrActiveRecips].[MailingID]))\t11\t30\t29\tNested Loops\tInner\nJoin\tOUTER\nREFERENCES:([lyrActiveRecips].[MailingID])\tNULL\t1.0\t0.0\t0.00001254\t138\t9.9424766E-3\t[lyrActiveRecips].[MemberID], [Bmk1004]\tNULL\tPLAN_ROW\t-1\t1.0\n \n|--Bookmark Lookup(BOOKMARK:([Bmk1002]),\nOBJECT:([lmdb].[dbo].[lyrActiveRecips]))\t11\t31\t30\tBookmark\nLookup\tBookmark Lookup\tBOOKMARK:([Bmk1002]),\nOBJECT:([lmdb].[dbo].[lyrActiveRecips])\t[lyrActiveRecips].[MemberID],\n[lyrActiveRecips].[MailingID]\t1.0\t3.1249749E-3\t0.0000011\t53\t6.4091529E-3\t[lyrActiveRecips].[MemberID], [lyrActiveRecips].[MailingID]\tNULL\tPLAN_ROW\t-1\t1.0\n | \n|--Index\nSeek(OBJECT:([lmdb].[dbo].[lyrActiveRecips].[jacktest_lar_date_ix]),\nSEEK:([lyrActiveRecips].[NextAttempt] > 'Jan 20 2004 12:00AM') ORDERED\nFORWARD)\t11\t32\t31\tIndex Seek\tIndex\nSeek\tOBJECT:([lmdb].[dbo].[lyrActiveRecips].[jacktest_lar_date_ix]),\nSEEK:([lyrActiveRecips].[NextAttempt] > 'Jan 20 2004 12:00AM') ORDERED\nFORWARD\t[Bmk1002]\t1.0\t3.2034749E-3\t7.9603E-5\t40\t3.2830781E-3\t[Bmk1002]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Index\nSeek(OBJECT:([lmdb].[dbo].[outmail_].[PK_outmail_]),\nSEEK:([outmail_].[MessageID_]=[lyrActiveRecips].[MailingID]) ORDERED\nFORWARD)\t11\t33\t30\tIndex Seek\tIndex\nSeek\tOBJECT:([lmdb].[dbo].[outmail_].[PK_outmail_]),\nSEEK:([outmail_].[MessageID_]=[lyrActiveRecips].[MailingID]) ORDERED\nFORWARD\t[Bmk1004]\t1.0\t3.2034749E-3\t7.9603E-5\t93\t3.520784E-3\t[Bmk1004]\tNULL\tPLAN_ROW\t-1\t3.0\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n",
"msg_date": "Thu, 29 Jan 2004 13:23:28 -0800",
"msg_from": "Jack Coates <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query optimization question"
},
{
"msg_contents": "Jack Coates <[email protected]> writes:\n> yup -- here it is. It will probably be a nasty mess after linewrap gets\n> done with it,\n\nyup, sure is :-( If I was familiar with the layout I could probably\ndecipher where the line breaks are supposed to be, but right now I'm\njust confused.\n\n> so let me know if you'd like me to post a copy on ftp.\n\nProbably better to repost it as a gzip'd attachment. That should\nprotect the formatting and get it into the list archives.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 29 Jan 2004 17:01:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization question "
},
{
"msg_contents": "On Thu, 2004-01-29 at 14:01, Tom Lane wrote:\n\n> Probably better to repost it as a gzip'd attachment. That should\n> protect the formatting and get it into the list archives.\n> \n> \t\t\tregards, tom lane\n\ncomplete with a picture of the GUI version. 26k zipped, let's see if\nthis makes it through.\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan\n\n\n",
"msg_date": "Thu, 29 Jan 2004 14:26:56 -0800",
"msg_from": "Jack Coates <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query optimization question"
},
{
"msg_contents": "On Thu, 29 Jan 2004, Tom Lane wrote:\n\n> > jackdb-# GROUP BY memberid_ HAVING ( \n> \n> Um, that's not what I had in mind at all. Does GROUP BY actually do\n> anything at all here? (You didn't answer me as to whether memberid_\n> is a unique identifier or not, but if it is, this GROUP BY is just an\n> expensive no-op.)\n\n From your comment I assume that there is no transformation in pg that \ndetects that the group by columns are unique?\n\n> this is all blue-sky speculation anyway. What I'm actually more\n> interested in is your statement that MSSQL can do the original query\n> quickly. I find that a bit hard to believe because I don't see any\n> relevant optimization techniques.\n\nGetting rid of the group by would not give that kind of speedup? Maybe\nmssql manage to rewrite the query like that before executing.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Fri, 30 Jan 2004 08:19:43 +0100 (CET)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization question "
},
{
"msg_contents": "On Thu, 29 Jan 2004, Jack Coates wrote:\n\n> > Probably better to repost it as a gzip'd attachment. That should\n> \n> complete with a picture of the GUI version. 26k zipped, let's see if\n> this makes it through.\n\nAre you sure you attached it?\n\nAt least when it got here there was no attachment.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Fri, 30 Jan 2004 08:23:14 +0100 (CET)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization question"
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Jack Coates <[email protected]> writes:\n> > yup -- here it is. It will probably be a nasty mess after linewrap gets\n> > done with it,\n> \n> yup, sure is :-( If I was familiar with the layout I could probably\n> decipher where the line breaks are supposed to be, but right now I'm\n> just confused.\n\nI just replaced all newlines that are followed by lines starting in column 1\nwith spaces and got something reasonable:\n\nSELECT DISTINCT members_.memberid_ FROM members_ WHERE ( members_.List_='list1' AND members_.MemberType_='normal' AND members_.SubType_='mail' ) GROUP BY memberid_ HAVING ( ( select count(*) from lyrActiveRecips, outmail_ where outmail\t11\t1\t0\tNULL\tNULL\t1\tNULL\t102274.5\tNULL\tNULL\tNULL\t104.10356\tNULL\tNULL\tSELECT\t0\tNULL\n |--Parallelism(Gather Streams)\t11\t2\t1\tParallelism\tGather Streams\tNULL\tNULL\t102274.5\t0.0\t0.22011127\t23\t104.10356\t[members_].[MemberID_]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Filter(WHERE:(If ([Expr1006] IS NULL) then 0 else [Expr1006]+If ([Expr1012] IS NULL) then 0 else [Expr1012]=3))\t11\t3\t2\tFilter\tFilter\tWHERE:(If ([Expr1006] IS NULL) then 0 else [Expr1006]+If ([Expr1012] IS NULL) then 0 else [Expr1012]=3)\tNULL\t102274.5\t0.0\t3.5393338\t23\t103.88345\t[members_].[MemberID_]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Hash Match(Right Outer Join, HASH:([lyrCompletedRecips].[MemberID])=([members_].[MemberID_]), RESIDUAL:([members_].[MemberID_]=[lyrCompletedRecips].[MemberID]))\t11\t4\t3\tHash Match\tRight Outer Join\tHASH:([lyrCompletedRecips].[MemberID])=([members_].[MemberID_]), RESIDUAL:([members_].[MemberID_]=[lyrCompletedRecips].[MemberID])\tNULL\t4782883.5\t0.0\t21.874712\t23\t100.34412\t[members_].[MemberID_], [Expr1006], [Expr1012]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Compute Scalar(DEFINE:([Expr1012]=Convert([Expr1020])))\t11\t5\t4\tCompute Scalar\tCompute Scalar\tDEFINE:([Expr1012]=Convert([Expr1020]))\t[Expr1012]=Convert([Expr1020])\t119575.35\t0.0\t1.3723248\t15\t4.3749919\t[lyrCompletedRecips].[MemberID], [Expr1012]\tNULL\tPLAN_ROW\t-1\t1.0\n | |--Hash Match(Aggregate, HASH:([lyrCompletedRecips].[MemberID]), RESIDUAL:([lyrCompletedRecips].[MemberID]=[lyrCompletedRecips].[MemberID]) DEFINE:([Expr1020]=COUNT(*)))\t11\t6\t5\tHash Match\tAggregate\tHASH:([lyrCompletedRecips].[MemberID]), RESIDUAL:([lyrCompletedRecips].[MemberID]=[lyrCompletedRecips].[MemberID])\t[Expr1020]=COUNT(*)\t119575.35\t0.0\t1.3723248\t15\t4.3749919\t[lyrCompletedRecips].[MemberID], [Expr1020]\tNULL\tPLAN_ROW\t-1\t1.0\n | |--Parallelism(Repartition Streams, PARTITION COLUMNS:([lyrCompletedRecips].[MemberID]))\t11\t7\t6\tParallelism\tRepartition Streams\tPARTITION COLUMNS:([lyrCompletedRecips].[MemberID])\tNULL\t119640.6\t0.0\t0.32407209\t173\t3.002667\t[lyrCompletedRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t1.0\n | |--Nested Loops(Inner Join, OUTER REFERENCES:([outmail_].[MessageID_]))\t11\t8\t7\tNested Loops\tInner Join\tOUTER REFERENCES:([outmail_].[MessageID_])\tNULL\t119640.6\t0.0\t0.75014657\t173\t2.6785948\t[lyrCompletedRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t1.0\n | |--Parallelism(Distribute Streams)\t11\t9\t8\tParallelism\tDistribute Streams\tNULL\tNULL\t1.0\t0.0\t2.8501874E-2\t128\t9.4664574E-2\t[outmail_].[MessageID_]\tNULL\tPLAN_ROW\t-1\t1.0\n | | |--Clustered Index Scan(OBJECT:([lmdb].[dbo].[outmail_].[IX_outmail_list]), WHERE:([outmail_].[Type_]='list'))\t11\t10\t9\tClustered Index Scan\tClustered Index Scan\tOBJECT:([lmdb].[dbo].[outmail_].[IX_outmail_list]), 
WHERE:([outmail_].[Type_]='list')\t[outmail_].[Type_], [outmail_].[MessageID_]\t1.0\t0.01878925\t3.9800001E-5\t128\t3.7658099E-2\t[outmail_].[Type_], [outmail_].[MessageID_]\tNULL\tPLAN_ROW\t0\t1.0\n | |--Clustered Index Seek(OBJECT:([lmdb].[dbo].[lyrCompletedRecips].[IX_CompletedRecipsMailingID]), SEEK:([lyrCompletedRecips].[MailingID]=[outmail_].[MessageID_]), WHERE:([lyrCompletedRecips].[CompletionStatusID]=300 AN\t11\t11\t8\tClustered Index Seek\tClustered Index Seek\tOBJECT:([lmdb].[dbo].[lyrCompletedRecips].[IX_CompletedRecipsMailingID]), SEEK:([lyrCompletedRecips].[MailingID]=[outmail_].[MessageID_]), WHERE:([lyrCompletedRecips].[CompletionStatusID]=300 AND [lyrCompletedRecips].[FinalAttempt]>'Jan 20 2004 12:00AM') \t[lyrCompletedRecips].[CompletionStatusID], [lyrCompletedRecips].[FinalAttempt], [lyrCompletedRecips].[MemberID]\t119640.6\t0.5750553\t0.13207871\t53\t1.5463468\t[lyrCompletedRecips].[CompletionStatusID], [lyrCompletedRecips].[FinalAttempt], [lyrCompletedRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t3.0\n |--Parallelism(Repartition Streams, PARTITION COLUMNS:([members_].[MemberID_]))\t11\t19\t4\tParallelism\tRepartition Streams\tPARTITION COLUMNS:([members_].[MemberID_])\tNULL\t4782883.5\t0.0\t15.474822\t19\t74.094414\t[members_].[MemberID_], [Expr1006]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Nested Loops(Left Outer Join, WHERE:([members_].[MemberID_]=[lyrActiveRecips].[MemberID]))\t11\t20\t19\tNested Loops\tLeft Outer Join\tWHERE:([members_].[MemberID_]=[lyrActiveRecips].[MemberID])\tNULL\t4782883.5\t0.0\t9.9962263\t19\t58.619591\t[members_].[MemberID_], [Expr1006]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Clustered Index Seek(OBJECT:([lmdb].[dbo].[members_].[IX_members_List_EmailLC]), SEEK:([members_].[List_]='list1'), WHERE:([members_].[MemberType_]='normal' AND [members_].[SubType_]='mail') ORDERED FORWARD)\t11\t22\t20\tClustered Index Seek\tClustered Index Seek\tOBJECT:([lmdb].[dbo].[members_].[IX_members_List_EmailLC]), SEEK:([members_].[List_]='list1'), WHERE:([members_].[MemberType_]='normal' AND [members_].[SubType_]='mail') ORDERED FORWARD\t[members_].[SubType_], [members_].[MemberType_], [members_].[MemberID_]\t4782883.5\t40.160122\t3.2745986\t410\t43.434719\t[members_].[SubType_], [members_].[MemberType_], [members_].[MemberID_]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Table Spool\t11\t24\t20\tTable Spool\tLazy Spool\tNULL\tNULL\t1.0\t1.6756756E-2\t3.7999999E-7\t15\t0.90211391\t[lyrActiveRecips].[MemberID], [Expr1006]\tNULL\tPLAN_ROW\t-1\t4782883.5\n |--Compute Scalar(DEFINE:([Expr1006]=Convert([Expr1021])))\t11\t25\t24\tCompute Scalar\tCompute Scalar\tDEFINE:([Expr1006]=Convert([Expr1021]))\t[Expr1006]=Convert([Expr1021])\t1.0\t0.0\t7.6000001E-6\t15\t2.4437904E-2\t[lyrActiveRecips].[MemberID], [Expr1006]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Stream Aggregate(GROUP BY:([lyrActiveRecips].[MemberID]) DEFINE:([Expr1021]=Count(*)))\t11\t26\t25\tStream Aggregate\tAggregate\tGROUP BY:([lyrActiveRecips].[MemberID])\t[Expr1021]=Count(*)\t1.0\t0.0\t7.6000001E-6\t15\t2.4437904E-2\t[lyrActiveRecips].[MemberID], [Expr1021]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Sort(ORDER BY:([lyrActiveRecips].[MemberID] ASC))\t11\t27\t26\tSort\tSort\tORDER BY:([lyrActiveRecips].[MemberID] ASC)\tNULL\t1.0\t1.1261261E-2\t1.00011E-4\t11\t2.4430305E-2\t[lyrActiveRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Filter(WHERE:([outmail_].[Type_]='list'))\t11\t28\t27\tFilter\tFilter\tWHERE:([outmail_].[Type_]='list')\tNULL\t1.0\t0.0\t4.7999998E-7\t156\t1.3069032E-2\t[lyrActiveRecips].[MemberID]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Bookmark 
Lookup(BOOKMARK:([Bmk1004]), OBJECT:([lmdb].[dbo].[outmail_]))\t11\t29\t28\tBookmark Lookup\tBookmark Lookup\tBOOKMARK:([Bmk1004]), OBJECT:([lmdb].[dbo].[outmail_])\t[outmail_].[Type_]\t1.0\t3.1249749E-3\t0.0000011\t156\t1.3068552E-2\t[lyrActiveRecips].[MemberID], [outmail_].[Type_]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Nested Loops(Inner Join, OUTER REFERENCES:([lyrActiveRecips].[MailingID]))\t11\t30\t29\tNested Loops\tInner Join\tOUTER REFERENCES:([lyrActiveRecips].[MailingID])\tNULL\t1.0\t0.0\t0.00001254\t138\t9.9424766E-3\t[lyrActiveRecips].[MemberID], [Bmk1004]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Bookmark Lookup(BOOKMARK:([Bmk1002]), OBJECT:([lmdb].[dbo].[lyrActiveRecips]))\t11\t31\t30\tBookmark Lookup\tBookmark Lookup\tBOOKMARK:([Bmk1002]), OBJECT:([lmdb].[dbo].[lyrActiveRecips])\t[lyrActiveRecips].[MemberID], [lyrActiveRecips].[MailingID]\t1.0\t3.1249749E-3\t0.0000011\t53\t6.4091529E-3\t[lyrActiveRecips].[MemberID], [lyrActiveRecips].[MailingID]\tNULL\tPLAN_ROW\t-1\t1.0\n | |--Index Seek(OBJECT:([lmdb].[dbo].[lyrActiveRecips].[jacktest_lar_date_ix]), SEEK:([lyrActiveRecips].[NextAttempt] > 'Jan 20 2004 12:00AM') ORDERED FORWARD)\t11\t32\t31\tIndex Seek\tIndex Seek\tOBJECT:([lmdb].[dbo].[lyrActiveRecips].[jacktest_lar_date_ix]), SEEK:([lyrActiveRecips].[NextAttempt] > 'Jan 20 2004 12:00AM') ORDERED FORWARD\t[Bmk1002]\t1.0\t3.2034749E-3\t7.9603E-5\t40\t3.2830781E-3\t[Bmk1002]\tNULL\tPLAN_ROW\t-1\t1.0\n |--Index Seek(OBJECT:([lmdb].[dbo].[outmail_].[PK_outmail_]), SEEK:([outmail_].[MessageID_]=[lyrActiveRecips].[MailingID]) ORDERED FORWARD)\t11\t33\t30\tIndex Seek\tIndex Seek\tOBJECT:([lmdb].[dbo].[outmail_].[PK_outmail_]), SEEK:([outmail_].[MessageID_]=[lyrActiveRecips].[MailingID]) ORDERED FORWARD\t[Bmk1004]\t1.0\t3.2034749E-3\t7.9603E-5\t93\t3.520784E-3\t[Bmk1004]\tNULL\tPLAN_ROW\t-1\t3.0\n\nI still can't make heads or tails of it though.\n\n-- \ngreg\n\n",
"msg_date": "30 Jan 2004 05:00:58 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization question"
},
{
"msg_contents": "Dennis Bjorklund <[email protected]> writes:\n> Getting rid of the group by would not give that kind of speedup?\n\nNo. Getting rid of the per-row subqueries (or at least finding a way to\nmake 'em a lot cheaper) is the only way to make any meaningful change.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Jan 2004 09:51:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization question "
},
{
"msg_contents": "On Thu, 2004-01-29 at 23:23, Dennis Bjorklund wrote:\n> On Thu, 29 Jan 2004, Jack Coates wrote:\n> \n> > > Probably better to repost it as a gzip'd attachment. That should\n> > \n> > complete with a picture of the GUI version. 26k zipped, let's see if\n> > this makes it through.\n> \n> Are you sure you attached it?\n> \n> At least when it got here there was no attachment.\n\nargh; attached the 40K version which was in color, removed it to make\nthe new one with greyscale and forgot to attach that. Here it is again:\n-- \nJack Coates, Lyris Technologies Applications Engineer\n510-549-4350 x148, [email protected]\n\"Interoperability is the keyword, uniformity is a dead end.\"\n\t\t\t\t--Olivier Fourdan",
"msg_date": "Fri, 30 Jan 2004 09:13:19 -0800",
"msg_from": "Jack Coates <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query optimization question"
}
] |
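Tom's point above is that the per-row HAVING subqueries, not the GROUP BY, are what dominate the runtime. A common rewrite for that shape of query is to compute the per-member counts once in grouped derived tables and join against them. The sketch below is only an illustration of that pattern: the table and column names and filter values are taken from the posted MSSQL plan, and the rest (including the "= 3" threshold) is assumed, since the full original query is not visible here.

-- Hypothetical rewrite: aggregate each recipient table once, then filter.
SELECT m.memberid_
  FROM members_ m
  LEFT JOIN (SELECT r.memberid, count(*) AS n_active
               FROM lyractiverecips r
               JOIN outmail_ o ON o.messageid_ = r.mailingid
              WHERE o.type_ = 'list'
                AND r.nextattempt > '2004-01-20'
              GROUP BY r.memberid) act ON act.memberid = m.memberid_
  LEFT JOIN (SELECT c.memberid, count(*) AS n_done
               FROM lyrcompletedrecips c
               JOIN outmail_ o ON o.messageid_ = c.mailingid
              WHERE o.type_ = 'list'
                AND c.completionstatusid = 300
                AND c.finalattempt > '2004-01-20'
              GROUP BY c.memberid) done ON done.memberid = m.memberid_
 WHERE m.list_ = 'list1'
   AND m.membertype_ = 'normal'
   AND m.subtype_ = 'mail'
   AND coalesce(act.n_active, 0) + coalesce(done.n_done, 0) = 3;
-- Add DISTINCT on memberid_ only if memberid_ is not unique in members_.

Whether this actually helps would have to be verified with EXPLAIN ANALYZE on the real data; the point is simply that each recipient table is then aggregated once instead of being probed with a correlated subquery per member row.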
[
{
"msg_contents": "I have 2 columns index.\nThe question is if optimizer can use both columns of an index or not, \ni.e. the plan should read like this: \n\n\tIndex Cond: \n\t((name)::text = 'name1'::text) \n\tAND ((date_from)::timestamp with time zone=\n('now'::text)::timestamp(6) with time zone) \n\nWhilst I am getting index scan on first column and filter on the other:\n\n Index Scan using testtab_name_date_from on testtab (cost=0.00..2.01 rows=1\nwidth=18)\n Index Cond: ((name)::text = 'name1'::text)\n Filter: ((date_from)::timestamp with time zone =\n('now'::text)::timestamp(6)with time zone)\n\nCould the problem be timestamp column or timestamp with time zones?\n\nThank you, \nLaimis\n-------------------------------------------\nBellow are details of the test: \n\n\nCreate table testtab (name varchar(10), date_from timestamp);\n\n\ncreate index testtab_name_date_from on testtab(name, date_from) ;\n\n\npopulated table with pseudo random data (10000), analyzed and tuned optimizer\nto favour indexes instead of sequential scans. \n\n\nPg config:\n\nrandom_page_cost = 0\ncpu_index_tuple_cost = 0.0\t\nenable_seqscan = false\ncpu_tuple_cost = 1 \n",
"msg_date": "Thu, 29 Jan 2004 19:29:59 -0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Explain plan for 2 column index"
},
{
"msg_contents": "On Thursday 29 January 2004 19:29, [email protected] wrote:\n> I have 2 columns index.\n> The question is if optimizer can use both columns of an index or not,\n\nShould do.\n\n> i.e. the plan should read like this:\n>\n> \tIndex Cond:\n> \t((name)::text = 'name1'::text)\n> \tAND ((date_from)::timestamp with time zone=\n> ('now'::text)::timestamp(6) with time zone)\n>\n> Whilst I am getting index scan on first column and filter on the other:\n>\n> Index Scan using testtab_name_date_from on testtab (cost=0.00..2.01\n> rows=1 width=18)\n> Index Cond: ((name)::text = 'name1'::text)\n> Filter: ((date_from)::timestamp with time zone =\n> ('now'::text)::timestamp(6)with time zone)\n>\n> Could the problem be timestamp column or timestamp with time zones?\n\nWhat types are the columns here? If date_from isn't timestamp with time zone, \nthat might be the issue. Also, I'm not convinced timestamp is the same thing \nas timestamp(6) - why the different accuracies.\n\nAlso, note that 'now' is deprecated - now() or CURRENT_TIMESTAMP/DATE/etc are \npreferred.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 29 Jan 2004 21:37:16 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Explain plan for 2 column index"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n>> Index Scan using testtab_name_date_from on testtab (cost=0.00..2.01\n>> rows=1 width=18)\n>> Index Cond: ((name)::text = 'name1'::text)\n>> Filter: ((date_from)::timestamp with time zone =\n>> ('now'::text)::timestamp(6)with time zone)\n\n> What types are the columns here? If date_from isn't timestamp with time zone,\n> that might be the issue.\n\nIt clearly isn't, since we can see a coercion to timestamp with time\nzone in the query. My guess is that the original SQL was\n\tWHERE ... date_from = current_timestamp\nThis should be\n\tWHERE ... date_from = localtimestamp\nif timestamp without tz is the intended column datatype. Of course,\nit might just be that date_from was declared as the wrong type (it\nreally sucks that SQL specifies \"timestamp\" to default to \"without\ntime zone\" ...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 29 Jan 2004 17:07:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Explain plan for 2 column index "
}
] |
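Putting Tom's diagnosis into a runnable form: if the comparison value stays in the column's own type (timestamp without time zone), both columns of the two-column index can appear in the Index Cond instead of one of them being applied as a Filter. A minimal sketch against the testtab example above; the exact plan text is only indicative:

-- date_from is timestamp without time zone, so compare against localtimestamp
-- rather than current_timestamp (which is timestamp WITH time zone):
EXPLAIN SELECT *
  FROM testtab
 WHERE name = 'name1'
   AND date_from = localtimestamp;
-- Expected shape:
--   Index Scan using testtab_name_date_from on testtab
--     Index Cond: (((name)::text = 'name1'::text) AND (date_from = <timestamp>))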
[
{
"msg_contents": "I have a large query which I would like to place in a view. The explicit\nquery is sufficiently fast, but the same query as a view is much slower\nand uses a different plan. I would appreciate an explanation of why this\nis, and, more importantly whether/how I might coax the view to use a\ndifferent plan.\n\n\nThe original query:\n\nrkh@csb-dev=> select distinct on (AH.p2gblataln_id) AH.p2gblataln_id,H.pseq_id,min(H.pstart) as \"pstart\",\nmax(H.pstop) as \"pstop\",A.ident,(A.ident/Q.len::float*100)::int as \"pct_ident\",\nsum(H.pstop-H.pstart+1) as \"aln_length\",H.genasm_id,H.chr,H.plus_strand,min(H.gstart) as \"gstart\",\nmax(H.gstop) as \"gstop\"\nfrom p2gblathsp H\njoin p2gblatalnhsp AH on H.p2gblathsp_id=AH.p2gblathsp_id\njoin p2gblataln A on AH.p2gblataln_id=A.p2gblataln_id\njoin pseq Q on H.pseq_id=Q.pseq_id\nwhere H.pseq_id=76\ngroup by AH.p2gblataln_id,H.pseq_id,H.genasm_id,H.chr,H.plus_strand,A.ident,Q.len\n\\g /dev/null\nTime: 277.804 ms\n\n\nNow as a view:\n\nrkh@csb-dev=> create view v1 as\nselect distinct on (AH.p2gblataln_id) AH.p2gblataln_id,H.pseq_id,min(H.pstart) as \"pstart\",\nmax(H.pstop) as \"pstop\",A.ident,(A.ident/Q.len::float*100)::int as \"pct_ident\",\nsum(H.pstop-H.pstart+1) as \"aln_length\",H.genasm_id,H.chr,H.plus_strand,min(H.gstart) as \"gstart\",\nmax(H.gstop) as \"gstop\"\nfrom p2gblathsp H\njoin p2gblatalnhsp AH on H.p2gblathsp_id=AH.p2gblathsp_id\njoin p2gblataln A on AH.p2gblataln_id=A.p2gblataln_id\njoin pseq Q on H.pseq_id=Q.pseq_id\ngroup by AH.p2gblataln_id,H.pseq_id,H.genasm_id,H.chr,H.plus_strand,A.ident,Q.len;\nCREATE VIEW\nTime: 103.041 ms\n\nrkh@csb-dev=> select * from v1 where pseq_id=76 \\g /dev/null\nTime: 31973.979 ms\n\nOkay, that's ~100x slower. The plans:\n\nrkh@csb-dev=> explain select distinct on <snip... 
same as the first query above>\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------- Unique (cost=11157.75..11187.26 rows=454 width=40)\n -> GroupAggregate (cost=11157.75..11186.13 rows=454 width=40)\n -> Sort (cost=11157.75..11158.89 rows=454 width=40)\n Sort Key: ah.p2gblataln_id, h.pseq_id, h.genasm_id, h.chr, h.plus_strand, a.ident, q.len\n -> Nested Loop (cost=11125.62..11137.71 rows=454 width=40)\n -> Index Scan using pseq_pkey on pseq q (cost=0.00..3.01 rows=2 width=6)\n Index Cond: (76 = pseq_id)\n -> Materialize (cost=11125.62..11127.89 rows=227 width=38)\n -> Nested Loop (cost=546.15..11125.62 rows=227 width=38)\n -> Hash Join (cost=546.15..10438.72 rows=227 width=34)\n Hash Cond: (\"outer\".p2gblathsp_id = \"inner\".p2gblathsp_id)\n -> Seq Scan on p2gblatalnhsp ah (cost=0.00..6504.03 rows=451503 width=8)\n -> Hash (cost=545.58..545.58 rows=227 width=34)\n -> Index Scan using p2gblathsp_p_lookup on p2gblathsp h (cost=0.00..545.58 rows=227 wid Index Cond: (pseq_id = 76)\n -> Index Scan using p2gblataln_pkey on p2gblataln a (cost=0.00..3.01 rows=1 width=8)\n Index Cond: (\"outer\".p2gblataln_id = a.p2gblataln_id)\n(17 rows)\n \n\n\nrkh@csb-dev=> explain select * from v1 where pseq_id=76;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------- Subquery Scan v1 (cost=246907.54..281897.70 rows=2258 width=77)\n Filter: (pseq_id = 76)\n -> Unique (cost=246907.54..276254.13 rows=451486 width=40)\n -> GroupAggregate (cost=246907.54..275125.41 rows=451486 width=40)\n -> Sort (cost=246907.54..248036.25 rows=451486 width=40)\n Sort Key: ah.p2gblataln_id, h.pseq_id, h.genasm_id, h.chr, h.plus_strand, a.ident, q.len\n -> Hash Join (cost=14019.29..204503.24 rows=451486 width=40)\n Hash Cond: (\"outer\".p2gblataln_id = \"inner\".p2gblataln_id)\n -> Hash Join (cost=7632.79..191344.45 rows=451486 width=36)\n Hash Cond: (\"outer\".p2gblathsp_id = \"inner\".p2gblathsp_id)\n -> Merge Join (cost=0.00..176939.38 rows=451486 width=36)\n Merge Cond: (\"outer\".pseq_id = \"inner\".pseq_id)\n -> Index Scan using p2gblathsp_p_lookup on p2gblathsp h (cost=0.00..16102.40 rows=451485 widt -> Index Scan using pseq_pkey on pseq q (cost=0.00..159942.83 rows=1960257 width=6)\n -> Hash (cost=6504.03..6504.03 rows=451503 width=8)\n -> Seq Scan on p2gblatalnhsp ah (cost=0.00..6504.03 rows=451503 width=8)\n -> Hash (cost=5587.00..5587.00 rows=319800 width=8)\n -> Seq Scan on p2gblataln a (cost=0.00..5587.00 rows=319800 width=8)\n(18 rows)\n \n\nObviously, the problem is that the pseq_id criterion needs to be pushed\nfarther down into the view plan (as it was for the explicit query).\nSince I've typically had comparable performance between views and\nexplicit queries, I'm curious to know what aspect of this query prevents\nbetter optimization. My understanding from the 7.4.1 HISTORY is that\nANSI-style joins (with JOIN) are now optimized as efficiently as\nWHERE-style joins... am I wrong?\n\nThanks for your time.\n\nReece\n\n-- \nReece Hart, http://www.in-machina.com/~reece/, GPG:0x25EC91A0",
"msg_date": "Thu, 29 Jan 2004 16:31:39 -0800",
"msg_from": "Reece Hart <[email protected]>",
"msg_from_op": true,
"msg_subject": "query optimization differs between view and explicit query"
},
{
"msg_contents": "\n> rkh@csb-dev=> create view v1 as\n> select distinct on (AH.p2gblataln_id) AH.p2gblataln_id,H.pseq_id,min(H.pstart) as \"pstart\",\n> max(H.pstop) as \"pstop\",A.ident,(A.ident/Q.len::float*100)::int as \"pct_ident\",\n> sum(H.pstop-H.pstart+1) as \"aln_length\",H.genasm_id,H.chr,H.plus_strand,min(H.gstart) as \"gstart\",\n> max(H.gstop) as \"gstop\"\n> from p2gblathsp H\n> join p2gblatalnhsp AH on H.p2gblathsp_id=AH.p2gblathsp_id\n> join p2gblataln A on AH.p2gblataln_id=A.p2gblataln_id\n> join pseq Q on H.pseq_id=Q.pseq_id\n> group by AH.p2gblataln_id,H.pseq_id,H.genasm_id,H.chr,H.plus_strand,A.ident,Q.len;\n> CREATE VIEW\n> Time: 103.041 ms\n\nWhat happens if you make it a function:\n\nCREATE FUNCTION f1() RETURNS ... AS '\nselect distinct on (AH.p2gblataln_id) \nAH.p2gblataln_id,H.pseq_id,min(H.pstart) as \"pstart\",\nmax(H.pstop) as \"pstop\",A.ident,(A.ident/Q.len::float*100)::int as \n\"pct_ident\",\nsum(H.pstop-H.pstart+1) as \n\"aln_length\",H.genasm_id,H.chr,H.plus_strand,min(H.gstart) as \"gstart\",\nmax(H.gstop) as \"gstop\"\nfrom p2gblathsp H\njoin p2gblatalnhsp AH on H.p2gblathsp_id=AH.p2gblathsp_id\njoin p2gblataln A on AH.p2gblataln_id=A.p2gblataln_id\njoin pseq Q on H.pseq_id=Q.pseq_id\nwhere H.pseq_id=76\ngroup by \nAH.p2gblataln_id,H.pseq_id,H.genasm_id,H.chr,H.plus_strand,A.ident,Q.len\n' LANGUAGE SQL;\n\nI suspect that will be even faster than the normal (non-view) query.\n\nChris\n",
"msg_date": "Fri, 30 Jan 2004 08:52:15 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization differs between view and explicit"
},
{
"msg_contents": "On Thu, 29 Jan 2004, Reece Hart wrote:\n\n> I have a large query which I would like to place in a view. The explicit\n> query is sufficiently fast, but the same query as a view is much slower\n> and uses a different plan. I would appreciate an explanation of why this\n> is, and, more importantly whether/how I might coax the view to use a\n> different plan.\n\nWell, in general\n\nselect distinct on (A) A, B\n from table\n where B=10\n order by A,B;\n\nis not always the same as\n\nselect * from\n (select distinct on (A) A, B\n from table order by A,B) foo\n where B=10;\n\nIf A is not unique, then given two rows of the\nsame A value one with B=10 and one with another B\nvalue less than 10, the former is guaranteed to give\nyou an A,10 row, the latter will give no such row AFAICS.\n\nIf A is unique then the two queries are equivalent,\nbut then distinct on (A) isn't terribly meaningful.\n",
"msg_date": "Thu, 29 Jan 2004 17:20:57 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization differs between view and explicit"
},
{
"msg_contents": "Stephan Szabo <[email protected]> writes:\n> On Thu, 29 Jan 2004, Reece Hart wrote:\n>> I have a large query which I would like to place in a view. The explicit\n>> query is sufficiently fast, but the same query as a view is much slower\n>> and uses a different plan. I would appreciate an explanation of why this\n>> is, and, more importantly whether/how I might coax the view to use a\n>> different plan.\n\n> Well, in general [ they're not the same query ]\n\nRight. The reason the performance is so much worse is that the\nrestriction pseq_id=76 cannot be \"pushed down\" into the view subquery;\nwe have to form the entire logical output of the view and then filter\non pseq_id=76. In your inline query you have done the pushing down\nanyway and so the restriction is applied much lower in the plan,\nresulting in lots less work. But the results might be different.\n\nThe point that Stephan makes is explicitly understood by the planner as\nof PG 7.4:\n\n * 3. If the subquery uses DISTINCT ON, we must not push down any quals that\n * refer to non-DISTINCT output columns, because that could change the set\n * of rows returned.\n\nIt's hard to give any advice on how to make a faster view without more\ncontext. What's the actual intention in all this? What's the semantics\nof pseq_id --- is it unique? It might be you could fix the problem by\nadding pseq_id to the DISTINCT ON list, but we don't have enough info\nto understand whether that would break the desired behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Jan 2004 00:26:47 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query optimization differs between view and explicit "
},
{
"msg_contents": "Stephan, Tom-\n\nThanks. I now see that DISTINCT can't be moved within the plan as I\nthought. This is exactly the thinko that I was hoping someone would\nexpose.\n\nI've decided to abandon the DISTINCT clause. The view is more general\nand sufficiently fast without it, and callers can always add it\nthemselves to achieve what I was doing in the explicit query.\n\nI appreciate your time.\n\n-Reece\n\n\n-- \nReece Hart, http://www.in-machina.com/~reece/, GPG:0x25EC91A0\n\n\n\n\n\n\n\nStephan, Tom-\n\nThanks. I now see that DISTINCT can't be moved within the plan as I thought. This is exactly the thinko that I was hoping someone would expose.\n\nI've decided to abandon the DISTINCT clause. The view is more general and sufficiently fast without it, and callers can always add it themselves to achieve what I was doing in the explicit query.\n\nI appreciate your time.\n\n-Reece\n\n\n\n\n-- \nReece Hart, http://www.in-machina.com/~reece/, GPG:0x25EC91A0",
"msg_date": "Fri, 30 Jan 2004 11:27:33 -0800",
"msg_from": "Reece Hart <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query optimization differs between view and explicit"
}
] |
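For completeness, the shape of the solution Reece settled on looks roughly like this: define the view without DISTINCT ON, so that a restriction on a grouping column such as pseq_id should be pushable below the aggregation, and let the caller add DISTINCT ON itself. This is only a sketch, assuming the same tables and column list as v1 above:

CREATE VIEW v2 AS
SELECT AH.p2gblataln_id, H.pseq_id,
       min(H.pstart) AS pstart, max(H.pstop) AS pstop,
       A.ident, (A.ident/Q.len::float*100)::int AS pct_ident,
       sum(H.pstop - H.pstart + 1) AS aln_length,
       H.genasm_id, H.chr, H.plus_strand,
       min(H.gstart) AS gstart, max(H.gstop) AS gstop
  FROM p2gblathsp H
  JOIN p2gblatalnhsp AH ON H.p2gblathsp_id = AH.p2gblathsp_id
  JOIN p2gblataln A ON AH.p2gblataln_id = A.p2gblataln_id
  JOIN pseq Q ON H.pseq_id = Q.pseq_id
 GROUP BY AH.p2gblataln_id, H.pseq_id, H.genasm_id, H.chr, H.plus_strand, A.ident, Q.len;

-- The caller applies DISTINCT ON itself; pseq_id = 76 can then be applied
-- below the aggregation, as in the original inline query:
SELECT DISTINCT ON (p2gblataln_id) *
  FROM v2
 WHERE pseq_id = 76
 ORDER BY p2gblataln_id;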
[
{
"msg_contents": "\nHi,\n\nPostgres seems to estimate the cost of indexscan to be a bit too high.\nThe table has something like 500000 rows and I have run reindex and vacuum\nanalyze recently. Is there something to tune?\n\nIndex is a multicolumn index:\n \"admin_event_stamp_event_type_name_status\" btree (stamp, event_type_name, status)\n\nSinglecolumn index for stamp doesn't make a significant difference in cost\nestimation.\n\n -- -- -- -- -- -- -- -- -- --\n\ngalleria=> set enable_seqscan = true;\nSET\ngalleria=> explain analyze SELECT * FROM admin_event WHERE stamp > (current_timestamp - '1 days'::INTERVAL)::TIMESTAMP WITHOUT TIME ZONE;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on admin_event (cost=0.00..19844.37 rows=154361 width=109) (actual time=479.173..2760.186 rows=4705 loops=1)\n Filter: (stamp > ((('now'::text)::timestamp(6) with time zone - '1 day'::interval))::timestamp without time zone)\n Total runtime: 2765.428 ms\n(3 rows)\n\ngalleria=> set enable_seqscan = false;\nSET\ngalleria=> explain analyze SELECT * FROM admin_event WHERE stamp > (current_timestamp - '1 days'::INTERVAL)::TIMESTAMP WITHOUT TIME ZONE;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using admin_event_stamp_event_type_name_status on admin_event (cost=0.00..540690.18 rows=154361 width=109) (actual time=7.771..124.886 rows=4706 loops=1)\n Index Cond: (stamp > ((('now'::text)::timestamp(6) with time zone - '1 day'::interval))::timestamp without time zone)\n Total runtime: 82.530 ms\n(3 rows)\n\n -- -- -- -- -- -- -- -- -- --\n\nDistribution of stamp looks like the following:\n\ngalleria=> SELECT date_trunc('month', stamp)::DATE, count(*), repeat('*', (count(*) / 3000)::INTEGER) FROM admin_event GROUP BY date_trunc('month', stamp)::DATE ORDER BY 1;\n date_trunc | count | repeat\n------------+--------+-------------------------------------------\n 2002-01-01 | 2013 |\n 2002-02-01 | 2225 |\n 2002-03-01 | 2165 |\n 2002-04-01 | 2692 |\n 2002-05-01 | 3031 | *\n 2002-06-01 | 2376 |\n 2002-07-01 | 2694 |\n 2002-08-01 | 4241 | *\n 2002-09-01 | 4140 | *\n 2002-10-01 | 4964 | *\n 2002-11-01 | 8769 | **\n 2002-12-01 | 13249 | ****\n 2003-01-01 | 5776 | *\n 2003-02-01 | 6301 | **\n 2003-03-01 | 6404 | **\n 2003-04-01 | 6905 | **\n 2003-05-01 | 7119 | **\n 2003-06-01 | 8978 | **\n 2003-07-01 | 7723 | **\n 2003-08-01 | 36566 | ************\n 2003-09-01 | 15759 | *****\n 2003-10-01 | 10610 | ***\n 2003-11-01 | 83113 | ***************************\n 2003-12-01 | 90927 | ******************************\n 2004-01-01 | 124479 | *****************************************\n\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n",
"msg_date": "Fri, 30 Jan 2004 10:16:23 +0200 (EET)",
"msg_from": "Kari Lavikka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cost of indexscan"
},
{
"msg_contents": "Kari Lavikka <[email protected]> writes:\n> Postgres seems to estimate the cost of indexscan to be a bit too high.\n> The table has something like 500000 rows and I have run reindex and vacuum\n> analyze recently. Is there something to tune?\n\nI think the real problem here is that the row estimate is off by a\nfactor of thirty:\n\n> Seq Scan on admin_event (cost=0.00..19844.37 rows=154361 width=109) (actual time=479.173..2760.186 rows=4705 loops=1)\n\nWith a correct estimate the indexscan would have been chosen.\n\n> galleria=> explain analyze SELECT * FROM admin_event WHERE stamp > (current_timestamp - '1 days'::INTERVAL)::TIMESTAMP WITHOUT TIME ZONE;\n\nIt's not possible for the planner to make a good guess here since it\ndoesn't know what the comparison value for the stamp column is.\n(current_timestamp isn't a constant and so the comparison expression\ncan't be reduced to a constant at plan time.)\n\nThe traditional solution for this is to cheat:\n\ncreate function ago(interval) returns timestamp without time zone as\n'select localtimestamp - $1' language sql strict immutable;\n\nselect * from admin_event where stamp > ago('1 days');\n\nThis works because the function is mislabeled as immutable, encouraging\nthe planner to fold the result to a constant on sight. It also has the\npleasant property of making your query more readable. The downside is\nthat you are in fact lying to the system about the behavior of the ago()\nfunction, and so you can get in trouble. This only really works for\nqueries executed interactively --- you can't use this method inside\nplpgsql functions, for instance.\n\n> Distribution of stamp looks like the following:\n\nHm, you might also find that increasing the statistics target for stamp\nwould be a good idea, since its distribution is so skewed. But unless\nyou do something like the above, the statistics won't get used anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Jan 2004 10:07:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cost of indexscan "
}
] |
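Tom's other suggestion, raising the statistics target for the skewed stamp column, isn't shown in SQL in the thread; a minimal sketch of it might look like the following, where the target of 100 is just an example value:

-- Gather a finer-grained histogram for the skewed column, then re-analyze:
ALTER TABLE admin_event ALTER COLUMN stamp SET STATISTICS 100;
ANALYZE admin_event;

-- Combined with the (deliberately mislabeled immutable) ago() helper from
-- Tom's mail, the comparison value is folded to a constant at plan time,
-- so the improved statistics can actually be used:
SELECT * FROM admin_event WHERE stamp > ago('1 days');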
[
{
"msg_contents": "\n> From: Tom Lane [mailto:[email protected]] \n\n\n> My guess is that the original SQL was\n> \tWHERE ... date_from = current_timestamp\n> This should be\n> \tWHERE ... date_from = localtimestamp\n> if timestamp without tz is the intended column datatype. \n\nThank you. The problem was exactly this: \n\t\n\tcurrent_timestamp: TIMESTAMP with TZ\n\t\n\tmy attribute \"date_from TIMESTAMP \" - without TZ\n\t\nAfter change to \n\n\tWHERE ... date_from = localtimestamp \n\nthe plan worked just well.\n\t\n> (it really sucks that SQL specifies \"timestamp\" to default to \"without time\nzone\" ...)\n\nTzones is one area I never delt with and IMMEDIATELY ran into problem,\nimplicit type conversion is an evil.\n",
"msg_date": "Fri, 30 Jan 2004 08:47:51 -0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Explain plan for 2 column index : timestamps and time zones"
}
] |
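The other way out, instead of switching the query to localtimestamp, is to declare the column with a time zone in the first place, since plain "timestamp" defaults to "without time zone". A hedged sketch reusing the earlier testtab example:

-- Declare the column as timestamp with time zone so comparisons against
-- current_timestamp need no cast on date_from and stay indexable:
CREATE TABLE testtab (
    name      varchar(10),
    date_from timestamp with time zone
);
CREATE INDEX testtab_name_date_from ON testtab (name, date_from);

SELECT * FROM testtab
 WHERE name = 'name1'
   AND date_from = current_timestamp;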
[
{
"msg_contents": "\nHi, more strange plans ...\n\nPlanner estimates an indexscan to return 240 rows although it is using a\nunique index and chooses to use hash join and seqscan instead of nested\nloop and indexscan. It's ... very slow.\n\nIdexes used:\n users: \"users_upper_nick\" unique, btree (upper((nick)::text))\n image: \"image_uid_status\" btree (uid, status)\n\ngalleria=> set enable_hashjoin = true;\nSET\ngalleria=> explain analyze SELECT i.image_id, i.info, i.stamp, i.status, i.t_width, i.t_height, u.nick, u.uid FROM image i, users u WHERE i.uid = u.uid AND upper(u.nick) = upper('FireGirl-') AND i.status IN ('d', 'v') AND u.status = 'a' ORDER BY status, stamp DESC;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=24731.07..24734.95 rows=1550 width=63) (actual\ntime=1392.675..1392.686 rows=19 loops=1)\n Sort Key: i.status, i.stamp\n -> Hash Join (cost=961.31..24648.94 rows=1550 width=63) (actual time=552.184..1392.617 rows=19 loops=1)\n Hash Cond: (\"outer\".uid = \"inner\".uid)\n -> Seq Scan on image i (cost=0.00..22025.22 rows=329382 width=53) (actual time=0.009..1088.856 rows=346313 loops=1)\n Filter: ((status = 'd'::bpchar) OR (status = 'v'::bpchar))\n -> Hash (cost=960.71..960.71 rows=240 width=14) (actual time=0.043..0.043 rows=0 loops=1)\n -> Index Scan using users_upper_nick on users u (cost=0.00..960.71 rows=240 width=14) (actual time=0.037..0.039 rows=1 loops=1)\n Index Cond: (upper((nick)::text) = 'FIREGIRL-'::text)\n Filter: (status = 'a'::bpchar)\n Total runtime: 1392.769 ms\n(11 rows)\n\ngalleria=> set enable_hashjoin = false;\nSET\ngalleria=> explain analyze SELECT i.image_id, i.info, i.stamp, i.status, i.t_width, i.t_height, u.nick, u.uid FROM image i, users u WHERE i.uid = u.uid AND upper(u.nick) = upper('FireGirl-') AND i.status IN ('d', 'v') AND u.status = 'a' ORDER BY status, stamp DESC;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=35861.87..35865.74 rows=1550 width=63) (actual\ntime=0.230..0.244 rows=19 loops=1)\n Sort Key: i.status, i.stamp\n -> Nested Loop (cost=0.00..35779.73 rows=1550 width=63) (actual time=0.070..0.173 rows=19 loops=1)\n -> Index Scan using users_upper_nick on users u (cost=0.00..960.71 rows=240 width=14) (actual time=0.036..0.038 rows=1 loops=1)\n Index Cond: (upper((nick)::text) = 'FIREGIRL-'::text)\n Filter: (status = 'a'::bpchar)\n -> Index Scan using image_uid_status on image i (cost=0.00..144.83 rows=20 width=53) (actual time=0.026..0.080 rows=19 loops=1)\n Index Cond: (i.uid = \"outer\".uid)\n Filter: ((status = 'd'::bpchar) OR (status = 'v'::bpchar))\n Total runtime: 0.315 ms\n(10 rows)\n\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n",
"msg_date": "Fri, 30 Jan 2004 11:15:51 +0200 (EET)",
"msg_from": "Kari Lavikka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unique index and estimated rows."
},
{
"msg_contents": "Kari,\n\n> users: \"users_upper_nick\" unique, btree (upper((nick)::text))\n> image: \"image_uid_status\" btree (uid, status)\n\nOdd ... when did you last run ANALYZE on \"users\"? \n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 30 Jan 2004 17:45:45 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unique index and estimated rows."
},
{
"msg_contents": "Uh oh,\n\nfunction indexes seem to be a bit crippled. I created a unique index\nwithout the upper() function and number of estimated rows is now just\nright.\n\n \"users_nick\" unique, btree (nick)\n\nAnd the plan:\n\ngalleria=> explain analyze SELECT i.image_id, i.info, i.stamp, i.status, i.t_width, i.t_height, u.nick, u.uid FROM image i, users u WHERE i.uid = u.uid AND nick = 'FireGirl-' AND i.status IN ('d', 'v') AND u.status = 'a' ORDER BY status, stamp DESC;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=154.10..154.12 rows=7 width=63) (actual time=0.227..0.237 rows=19 loops=1)\n Sort Key: i.status, i.stamp\n -> Nested Loop (cost=0.00..154.00 rows=7 width=63) (actual time=0.075..0.176 rows=19 loops=1)\n -> Index Scan using users_nick on users u (cost=0.00..6.01 rows=1 width=14) (actual time=0.041..0.043 rows=1 loops=1)\n Index Cond: ((nick)::text = 'FireGirl-'::text)\n Filter: (status = 'a'::bpchar)\n -> Index Scan using image_uid_status on image i (cost=0.00..147.73 rows=21 width=53) (actual time=0.026..0.079 rows=19 loops=1)\n Index Cond: (i.uid = \"outer\".uid)\n Filter: ((status = 'd'::bpchar) OR (status = 'v'::bpchar))\n Total runtime: 0.303 ms\n(10 rows)\n\n\nI think that creating an uppercase column for name and unique index for\ncould be a workaround for this.\n\nAnother problem is that function indexes don't seem to care about\nstatistics target settings.\n\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n\nOn Fri, 30 Jan 2004, Kari Lavikka wrote:\n\n>\n> Hi, more strange plans ...\n>\n> Planner estimates an indexscan to return 240 rows although it is using a\n> unique index and chooses to use hash join and seqscan instead of nested\n> loop and indexscan. It's ... 
very slow.\n>\n> Idexes used:\n> users: \"users_upper_nick\" unique, btree (upper((nick)::text))\n> image: \"image_uid_status\" btree (uid, status)\n>\n> galleria=> set enable_hashjoin = true;\n> SET\n> galleria=> explain analyze SELECT i.image_id, i.info, i.stamp, i.status, i.t_width, i.t_height, u.nick, u.uid FROM image i, users u WHERE i.uid = u.uid AND upper(u.nick) = upper('FireGirl-') AND i.status IN ('d', 'v') AND u.status = 'a' ORDER BY status, stamp DESC;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=24731.07..24734.95 rows=1550 width=63) (actual\n> time=1392.675..1392.686 rows=19 loops=1)\n> Sort Key: i.status, i.stamp\n> -> Hash Join (cost=961.31..24648.94 rows=1550 width=63) (actual time=552.184..1392.617 rows=19 loops=1)\n> Hash Cond: (\"outer\".uid = \"inner\".uid)\n> -> Seq Scan on image i (cost=0.00..22025.22 rows=329382 width=53) (actual time=0.009..1088.856 rows=346313 loops=1)\n> Filter: ((status = 'd'::bpchar) OR (status = 'v'::bpchar))\n> -> Hash (cost=960.71..960.71 rows=240 width=14) (actual time=0.043..0.043 rows=0 loops=1)\n> -> Index Scan using users_upper_nick on users u (cost=0.00..960.71 rows=240 width=14) (actual time=0.037..0.039 rows=1 loops=1)\n> Index Cond: (upper((nick)::text) = 'FIREGIRL-'::text)\n> Filter: (status = 'a'::bpchar)\n> Total runtime: 1392.769 ms\n> (11 rows)\n>\n> galleria=> set enable_hashjoin = false;\n> SET\n> galleria=> explain analyze SELECT i.image_id, i.info, i.stamp, i.status, i.t_width, i.t_height, u.nick, u.uid FROM image i, users u WHERE i.uid = u.uid AND upper(u.nick) = upper('FireGirl-') AND i.status IN ('d', 'v') AND u.status = 'a' ORDER BY status, stamp DESC;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=35861.87..35865.74 rows=1550 width=63) (actual\n> time=0.230..0.244 rows=19 loops=1)\n> Sort Key: i.status, i.stamp\n> -> Nested Loop (cost=0.00..35779.73 rows=1550 width=63) (actual time=0.070..0.173 rows=19 loops=1)\n> -> Index Scan using users_upper_nick on users u (cost=0.00..960.71 rows=240 width=14) (actual time=0.036..0.038 rows=1 loops=1)\n> Index Cond: (upper((nick)::text) = 'FIREGIRL-'::text)\n> Filter: (status = 'a'::bpchar)\n> -> Index Scan using image_uid_status on image i (cost=0.00..144.83 rows=20 width=53) (actual time=0.026..0.080 rows=19 loops=1)\n> Index Cond: (i.uid = \"outer\".uid)\n> Filter: ((status = 'd'::bpchar) OR (status = 'v'::bpchar))\n> Total runtime: 0.315 ms\n> (10 rows)\n>\n>\n> |\\__/|\n> ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n> __ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n> \"\"\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n",
"msg_date": "Sat, 31 Jan 2004 10:52:40 +0200 (EET)",
"msg_from": "Kari Lavikka <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unique index and estimated rows."
}
] |
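Kari's proposed workaround (an explicit uppercase column with its own unique index, instead of the upper() expression index whose selectivity 7.4 apparently estimates from a default guess rather than real statistics) could look roughly like the sketch below. The column name and the trigger are hypothetical; they are only a guess at how the copy would be kept in sync:

-- Hypothetical workaround: a maintained uppercase copy of nick.
ALTER TABLE users ADD COLUMN nick_upper varchar;
UPDATE users SET nick_upper = upper(nick);
CREATE UNIQUE INDEX users_nick_upper ON users (nick_upper);

CREATE OR REPLACE FUNCTION users_nick_upper_sync() RETURNS trigger AS '
BEGIN
    NEW.nick_upper := upper(NEW.nick);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER users_nick_upper_trg BEFORE INSERT OR UPDATE ON users
    FOR EACH ROW EXECUTE PROCEDURE users_nick_upper_sync();

-- Queries then filter on a real column, which has ordinary statistics:
SELECT u.uid, u.nick FROM users u WHERE u.nick_upper = upper('FireGirl-');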
[
{
"msg_contents": "Hi,\n\nits me again. As far as we tested postgresql ist fast, very fast \ncompared to the other db system we test and are using currently.\n\n We are now testing some test databases on Postgres. We use one \nfunction which simply calculates a difference between two values and \nchecks if on value is 0, so something like this:\n\n\ndeclare\ndiff integer;\nbegin\nif $1 > $2\nthen\ndiff := $1 -$2;\nreturn diff * diff;\nelse\nreturn 0;\nend if;\nend;\n\nLanguage for this function is plpgsql\n\nexecuting a select like this:\n\nselect\nsum(job_property_difference(t0.int_value, t1.int_value)) as rank\n from\n job_property t0,\n job_property t1\n where\n t0.id_job_profile = 911\n and t0.id_job_attribute = t1.id_job_attribute\n and t1.id_job_profile in (select id_job_profile from unemployed)\n and t1.id_job_profile <> 911;\n\nresults in a query plan result:\n\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------\n Aggregate (cost=70521.28..70521.28 rows=1 width=8)\n -> Merge Join (cost=66272.11..70158.29 rows=145194 width=8)\n Merge Cond: (\"outer\".id_job_attribute = \n\"inner\".id_job_attribute)\n -> Sort (cost=31.53..32.44 rows=366 width=8)\n Sort Key: t0.id_job_attribute\n -> Index Scan using \njob_property__id_job_profile__fk_index on job_property t0 \n(cost=0.00..15.95 rows=366 width=8)\n Index Cond: (id_job_profile = 911)\n -> Sort (cost=66240.58..67456.79 rows=486483 width=8)\n Sort Key: t1.id_job_attribute\n -> Hash IN Join (cost=34.08..20287.32 rows=486483 \nwidth=8)\n Hash Cond: (\"outer\".id_job_profile = \n\"inner\".id_job_profile)\n -> Seq Scan on job_property t1 \n(cost=0.00..12597.89 rows=558106 width=12)\n Filter: (id_job_profile <> 911)\n -> Hash (cost=31.46..31.46 rows=1046 width=4)\n -> Seq Scan on unemployed \n(cost=0.00..31.46 rows=1046 width=4)\n(21 rows)\n\n\n\n\nThis takes about 1minute, 45 seconds on a test database with about \n31.882 job_profile and 8.483.005 job_property records. The final \nsolution will have about 1.000.000 job_profile records and, well ... \nabout 266.074.901 so we wonder what options we have in order to \nimprove this select. Should we rewrite the function (and others) in C? \nTurning off seqscans makes it slower which might be because psql is \nhopping between the index and the row values back and forth as a lot of \nrows are involved.\n\nAny hint how to speed up this would be great.\n\n\nregards David\n\n",
"msg_date": "Fri, 30 Jan 2004 19:00:33 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "another query optimization question"
},
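One comparatively cheap experiment before rewriting helpers in C: express the helper as a single SQL-language function, since recent PostgreSQL planners can often inline simple SQL functions into the calling query and so avoid the per-row plpgsql call overhead. This is only a sketch assuming the same semantics as the plpgsql body above; the function name is the one used in the query in that message:

CREATE OR REPLACE FUNCTION job_property_difference(integer, integer)
RETURNS integer AS '
    SELECT CASE WHEN $1 > $2 THEN ($1 - $2) * ($1 - $2) ELSE 0 END;
' LANGUAGE sql IMMUTABLE;

Whether 7.4 actually inlines it, and whether that matters next to the join and sort costs, would have to be checked with EXPLAIN ANALYZE on the real data.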
{
"msg_contents": "\nOn Jan 30, 2004, at 11:00 AM, David Teran wrote:\n>\n> executing a select like this:\n>\n> select\n> sum(job_property_difference(t0.int_value, t1.int_value)) as rank\n> from\n> job_property t0,\n> job_property t1\n> where\n> t0.id_job_profile = 911\n> and t0.id_job_attribute = t1.id_job_attribute\n> and t1.id_job_profile in (select id_job_profile from unemployed)\n> and t1.id_job_profile <> 911;\n>\n> results in a query plan result:\n>\n\nWhat's the goal here? Are you trying to rank by how the int_value \nrelates to each other? I'd like to know more about what kind of result \nyou're trying to achieve.\n\n--\nPC Drew\n\n",
"msg_date": "Fri, 30 Jan 2004 11:09:40 -0700",
"msg_from": "PC Drew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: another query optimization question"
},
{
"msg_contents": "\nOn Fri, 30 Jan 2004, David Teran wrote:\n\n> select\n> sum(job_property_difference(t0.int_value, t1.int_value)) as rank\n> from\n> job_property t0,\n> job_property t1\n> where\n> t0.id_job_profile = 911\n> and t0.id_job_attribute = t1.id_job_attribute\n> and t1.id_job_profile in (select id_job_profile from unemployed)\n> and t1.id_job_profile <> 911;\n>\n> results in a query plan result:\n\nCan we see explain analyze output for the query, it'll give more\ninformation about actual time and row counts than plain explain.\n",
"msg_date": "Fri, 30 Jan 2004 10:10:16 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: another query optimization question"
},
{
"msg_contents": "Hi,\n\n\nOn 30.01.2004, at 19:10, Stephan Szabo wrote:\n\n>\n> On Fri, 30 Jan 2004, David Teran wrote:\n>\n>> select\n>> sum(job_property_difference(t0.int_value, t1.int_value)) as rank\n>> from\n>> job_property t0,\n>> job_property t1\n>> where\n>> t0.id_job_profile = 911\n>> and t0.id_job_attribute = t1.id_job_attribute\n>> and t1.id_job_profile in (select id_job_profile from unemployed)\n>> and t1.id_job_profile <> 911;\n>>\n>> results in a query plan result:\n>\n> Can we see explain analyze output for the query, it'll give more\n> information about actual time and row counts than plain explain.\n>\n\n\nsure, here it is comes. What we need to achieve is: we have different \njob_profiles, each profile has multiple values. For one given profile \nwe need the ' sum of the distance of every value in the given profile \nand every other profile'. The result is usually grouped by the profile \nid but to make the query easier i removed this, it does not cost too \nmuch time and it turned out that this query here uses most of the time.\n\nthanks, David\n\n\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n--------------------------------\n Aggregate (cost=2689349.81..2689349.81 rows=1 width=8) (actual \ntime=100487.423..100487.423 rows=1 loops=1)\n -> Merge Join (cost=2451266.53..2655338.83 rows=13604393 width=8) \n(actual time=82899.466..-2371037.726 rows=2091599 loops=1)\n Merge Cond: (\"outer\".id_job_attribute = \n\"inner\".id_job_attribute)\n -> Sort (cost=97.43..100.63 rows=1281 width=8) (actual \ntime=3.937..4.031 rows=163 loops=1)\n Sort Key: t0.id_job_attribute\n -> Index Scan using \njob_property__id_job_profile__fk_index on job_property t0 \n(cost=0.00..31.31 rows=1281 width=8) (actual time=1.343..3.766 rows=163 \nloops=1)\n Index Cond: (id_job_profile = 911)\n -> Sort (cost=2451169.10..2483246.47 rows=12830947 width=8) \n(actual time=82891.076..-529619.213 rows=4187378 loops=1)\n Sort Key: t1.id_job_attribute\n -> Hash IN Join (cost=507.32..439065.37 rows=12830947 \nwidth=8) (actual time=61.943..1874640.807 rows=4187378 loops=1)\n Hash Cond: (\"outer\".id_job_profile = \n\"inner\".id_job_profile)\n -> Seq Scan on job_property t1 \n(cost=0.00..246093.84 rows=12830947 width=12) (actual \ntime=0.136..19101.796 rows=8482533 loops=1)\n Filter: (id_job_profile <> 911)\n -> Hash (cost=467.46..467.46 rows=15946 width=4) \n(actual time=61.313..61.313 rows=0 loops=1)\n -> Seq Scan on unemployed \n(cost=0.00..467.46 rows=15946 width=4) (actual time=0.157..50.842 \nrows=15960 loops=1)\n Total runtime: 103769.592 ms\n\n\n",
"msg_date": "Fri, 30 Jan 2004 19:20:24 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: another query optimization question"
},
{
"msg_contents": "David Teran <[email protected]> writes:\n> [ query plan ]\n\nI got a little distracted by the bizarre actual-time values shown for\nsome of the query steps:\n\n> -> Merge Join (cost=2451266.53..2655338.83 rows=13604393 width=8) \n> (actual time=82899.466..-2371037.726 rows=2091599 loops=1)\n\n> -> Sort (cost=2451169.10..2483246.47 rows=12830947 width=8) \n> (actual time=82891.076..-529619.213 rows=4187378 loops=1)\n\n> -> Hash IN Join (cost=507.32..439065.37 rows=12830947 \n> width=8) (actual time=61.943..1874640.807 rows=4187378 loops=1)\n\nThe hash-join total time is obviously wrong seeing that the total\nruntime is only about 100000 msec, and the negative values for the other\ntwo are even more obviously wrong.\n\nI recall that we saw similar symptoms once before, and I thought we'd\nfixed it, but I didn't find any relevant mentions in the CVS logs.\n\nWhat version are you running, exactly? Could you repeat the EXPLAIN\nwith client_min_messages set to 'debug2', and see if you see any\nmessages about InstrStartTimer or InstrStopNode?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Jan 2004 14:19:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: another query optimization question "
},
{
"msg_contents": "On Friday 30 January 2004 19:19, Tom Lane wrote:\n>\n> The hash-join total time is obviously wrong seeing that the total\n> runtime is only about 100000 msec, and the negative values for the other\n> two are even more obviously wrong.\n>\n> I recall that we saw similar symptoms once before, and I thought we'd\n> fixed it, but I didn't find any relevant mentions in the CVS logs.\n\nYou're not thinking of the -ve durations on the auto-vacuum gizmo? Something \nto do with system clock accuracy IIRC.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 30 Jan 2004 20:25:24 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: another query optimization question"
},
{
"msg_contents": "HI Tom.\n\n> I got a little distracted by the bizarre actual-time values shown for\n> some of the query steps:\n>\n>> -> Merge Join (cost=2451266.53..2655338.83 rows=13604393 \n>> width=8)\n>> (actual time=82899.466..-2371037.726 rows=2091599 loops=1)\n>\n>> -> Sort (cost=2451169.10..2483246.47 rows=12830947 \n>> width=8)\n>> (actual time=82891.076..-529619.213 rows=4187378 loops=1)\n>\n>> -> Hash IN Join (cost=507.32..439065.37 \n>> rows=12830947\n>> width=8) (actual time=61.943..1874640.807 rows=4187378 loops=1)\n>\n> The hash-join total time is obviously wrong seeing that the total\n> runtime is only about 100000 msec, and the negative values for the \n> other\n> two are even more obviously wrong.\n>\n> I recall that we saw similar symptoms once before, and I thought we'd\n> fixed it, but I didn't find any relevant mentions in the CVS logs.\n>\n> What version are you running, exactly? Could you repeat the EXPLAIN\n> with client_min_messages set to 'debug2', and see if you see any\n> messages about InstrStartTimer or InstrStopNode?\n>\n\n7.4.1, build from sourcecode. Running on MacOS X Server 10.3.2, dual G5 \nwith 3.5 GB RAM\n\nI have set client_min_messages in postgresql.conf to debug2 but i see \nnothing. Is this going to the normal logfile? Must i activate anything \nelse?\n\nregards David\n\n",
"msg_date": "Fri, 30 Jan 2004 23:02:43 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: another query optimization question "
},
{
"msg_contents": "David Teran <[email protected]> writes:\n>> I recall that we saw similar symptoms once before, and I thought we'd\n>> fixed it, but I didn't find any relevant mentions in the CVS logs.\n>> \n>> What version are you running, exactly?\n\n> 7.4.1, build from sourcecode. Running on MacOS X Server 10.3.2, dual G5 \n> with 3.5 GB RAM\n\nInteresting. I have recollected where we saw this before: \nhttp://archives.postgresql.org/pgsql-hackers/2003-11/msg01528.php\nApparently gettimeofday() has a glitch on some BSD releases. OS X is\na BSD derivative and it's not so surprising if it has it too.\n\nMay I suggest that you try the test program given here:\nhttp://archives.postgresql.org/pgsql-hackers/2003-11/msg01546.php\nand file a bug report with Apple if it shows any out-of-order results?\n\nI am fairly certain that I tried that test program when I devised it\non my own OS X machine, and couldn't get a failure. Maybe it depends\non your hardware (hm, could dual CPUs be the key factor)?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Jan 2004 17:40:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: another query optimization question "
},
{
"msg_contents": "Hi Tim,\n\nyou are right:\n\n\n> Interesting. I have recollected where we saw this before:\n> http://archives.postgresql.org/pgsql-hackers/2003-11/msg01528.php\n> Apparently gettimeofday() has a glitch on some BSD releases. OS X is\n> a BSD derivative and it's not so surprising if it has it too.\n>\n> May I suggest that you try the test program given here:\n> http://archives.postgresql.org/pgsql-hackers/2003-11/msg01546.php\n> and file a bug report with Apple if it shows any out-of-order results?\n>\n> I am fairly certain that I tried that test program when I devised it\n> on my own OS X machine, and couldn't get a failure. Maybe it depends\n> on your hardware (hm, could dual CPUs be the key factor)?\n\n\np:~ david$ ./a.out\nbogus tv_usec: 1075544305 -615731632, prev 1075544304 349672\nout of order tv_sec: 1075544304 349759, prev 1075544305 -615731632\nout of order tv_usec: 1075544305 339904, prev 1075544305 339905\nbogus tv_usec: 1075544307 -615731811, prev 1075544306 349493\nout of order tv_sec: 1075544306 349498, prev 1075544307 -615731811\nout of order tv_usec: 1075544307 339442, prev 1075544307 339443\nout of order tv_usec: 1075544308 339351, prev 1075544308 339352\n\nThis is a part of the output. Whats -very- interesting:\n\nApple provides a little tool that can enable / disable the l2 cache ... \none CPU of a dual CPU system on the fly. When i start the testapp with \ntwo CPU's enabled i get this output here, when i turn off one CPU while \nthe app is still running the messages disappear as long as one CPU is \nturned off. Reactivating the CPU again produces new error messages. I \nchecked the app on a single G4, no errors and i checked the app on a \ndual G4, -not- G5 and also no error messages.\n\nDo you remember where one can find a patch? Maybe its something one can \nfix because parts of the OS from Apple are 'open source'.\n\nDo you know if this bug makes a system unusable with PostgresSQL?\n\nRegards David\n\n",
"msg_date": "Sat, 31 Jan 2004 11:28:35 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: another query optimization question "
},
{
"msg_contents": "David Teran <[email protected]> writes:\n> Apple provides a little tool that can enable / disable the l2 cache ... \n> one CPU of a dual CPU system on the fly. When i start the testapp with \n> two CPU's enabled i get this output here, when i turn off one CPU while \n> the app is still running the messages disappear as long as one CPU is \n> turned off. Reactivating the CPU again produces new error messages.\n\nAh-hah, so the gettimeofday bug *is* linked to multiple CPUs. Marc,\nwere the machines you saw it on all multi-CPU?\n\n> Do you remember where one can find a patch?\n\nFile a bug report with Apple.\n\n> Do you know if this bug makes a system unusable with PostgresSQL?\n\nNot at all. The only bad effect we've noted is that EXPLAIN results\nare sometimes wacky. In theory you could sometimes see the result of\nnow() being off by a second or two, but no one's reported seeing that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 31 Jan 2004 10:45:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: another query optimization question "
},
{
"msg_contents": "On Sat, 31 Jan 2004, Tom Lane wrote:\n\n> David Teran <[email protected]> writes:\n> > Apple provides a little tool that can enable / disable the l2 cache ...\n> > one CPU of a dual CPU system on the fly. When i start the testapp with\n> > two CPU's enabled i get this output here, when i turn off one CPU while\n> > the app is still running the messages disappear as long as one CPU is\n> > turned off. Reactivating the CPU again produces new error messages.\n>\n> Ah-hah, so the gettimeofday bug *is* linked to multiple CPUs. Marc,\n> were the machines you saw it on all multi-CPU?\n\nI'm not sure ... I thought I ran it on my P4 here in the office and saw it\ntoo, albeit not near as frequently ... but, in FreeBSD's case, it is a\n\"design issue\" ... there are two different functions, once that is kinda\nfuzzy (but fast), and the other that is designed to be exact, but at a\nperformance loss ... or was it the same function, but a 'sysctl' variable\nthat changes the state?\n\nCan't remember which, but it is by design on FreeBSD ... and, if we're\ntalking about Apple, the same most likely applies, as its based on the\nsame kernel ...\n\nBack of my mind, I *think* it was these sysctl variables:\n\nkern.timecounter.method: 0\nkern.timecounter.hardware: i8254\n\n\n\n ----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail: [email protected] Yahoo!: yscrappy ICQ: 7615664\n",
"msg_date": "Sat, 31 Jan 2004 16:15:35 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: another query optimization question "
},
{
"msg_contents": "Hi,\n\n> I'm not sure ... I thought I ran it on my P4 here in the office and \n> saw it\n> too, albeit not near as frequently ... but, in FreeBSD's case, it is a\n> \"design issue\" ... there are two different functions, once that is \n> kinda\n> fuzzy (but fast), and the other that is designed to be exact, but at a\n> performance loss ... or was it the same function, but a 'sysctl' \n> variable\n> that changes the state?\n>\n> Can't remember which, but it is by design on FreeBSD ... and, if we're\n> talking about Apple, the same most likely applies, as its based on the\n> same kernel ...\n>\n> Back of my mind, I *think* it was these sysctl variables:\n>\n> kern.timecounter.method: 0\n> kern.timecounter.hardware: i8254\n>\nI will try to check this on my system.\n\nBut here another hint, maybe more interesting for Apple though: The bug \ndoes -not- occur if another process uses a lot of CPU time. We encoded \na quicktime movie into mpeg2 and while this was using about 90% and \nwhile encoding the vcd i wanted to show the bug to a friend and it did \nnot work.\n\nBut besides this, is there any chance that we can optimize our initial \nperformance problem ;-)\n\nregards David\n\n",
"msg_date": "Sat, 31 Jan 2004 22:12:03 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: another query optimization question "
}
] |
[
{
"msg_contents": "Hi,\n\nI have many indexes somehow overlaping like:\n... btree (\"STATUS\", \"VISIBLE\", \"NP_ID\");\n... btree (\"STATUS\", \"VISIBLE\");\n\nis perfomance gained by \"more exact\" index worth overhead with managing\nindexes.\n\nRigmor Ukuhe\nFinestmedia Ltd\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.564 / Virus Database: 356 - Release Date: 19.01.2004\n\n",
"msg_date": "Mon, 2 Feb 2004 16:46:22 +0200",
"msg_from": "\"Rigmor Ukuhe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"Overlaping\" indexes"
},
{
"msg_contents": "Dnia 2004-02-02 15:46, U�ytkownik Rigmor Ukuhe napisa�:\n> Hi,\n> \n> I have many indexes somehow overlaping like:\n> ... btree (\"STATUS\", \"VISIBLE\", \"NP_ID\");\n> ... btree (\"STATUS\", \"VISIBLE\");\n> \n> is perfomance gained by \"more exact\" index worth overhead with managing\n> indexes.\n\nThe second (2 columns) index is useless - it's function is well done by \nthe first one (3 columns).\n\nRegards,\nTomasz Myrta\n",
"msg_date": "Mon, 02 Feb 2004 16:05:57 +0100",
"msg_from": "Tomasz Myrta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Overlaping\" indexes"
},
{
"msg_contents": "On Mon, 2 Feb 2004, Tomasz Myrta wrote:\n\n> Dnia 2004-02-02 15:46, U?ytkownik Rigmor Ukuhe napisa3:\n> > Hi,\n> > \n> > I have many indexes somehow overlaping like:\n> > ... btree (\"STATUS\", \"VISIBLE\", \"NP_ID\");\n> > ... btree (\"STATUS\", \"VISIBLE\");\n> > \n> > is perfomance gained by \"more exact\" index worth overhead with managing\n> > indexes.\n> \n> The second (2 columns) index is useless - it's function is well done by \n> the first one (3 columns).\n\nNot entirely, since it only has to sort two columns, it will be smaller, \nand will therefore be somewhat faster.\n\nOn the other hand, I've seen a lot of folks create multi column indexes \nwho didn't really understand how they work in Postgresql.\n\n",
"msg_date": "Mon, 2 Feb 2004 11:30:36 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Overlaping\" indexes"
},
{
"msg_contents": "Dnia 2004-02-02 19:30, U�ytkownik scott.marlowe napisa�:\n> Not entirely, since it only has to sort two columns, it will be smaller, \n> and will therefore be somewhat faster.\n\nCan you say something more about it? Will it be enough faster to keep \nthem both? Did anyone make such tests?\n\nRegards,\nTomasz Myrta\n",
"msg_date": "Mon, 02 Feb 2004 19:43:03 +0100",
"msg_from": "Tomasz Myrta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Overlaping\" indexes"
},
{
"msg_contents": "On Mon, 2 Feb 2004, Tomasz Myrta wrote:\n\n> Dnia 2004-02-02 19:30, U�ytkownik scott.marlowe napisa�:\n> > Not entirely, since it only has to sort two columns, it will be smaller, \n> > and will therefore be somewhat faster.\n> \n> Can you say something more about it? Will it be enough faster to keep \n> them both? Did anyone make such tests?\n\nthat really depends on the distribution of the third column. If there's \nonly a couple of values in the third column, no big deal. If each entry \nis unique, and it's a large table, very big deal.\n\nIt is only useful to have a three column index if you actually use it. If \nyou have an index on (a,b,c) and select order by b, the index won't get \nused unless the a part is in the where clause.\n\nthe other issue is updates. IT WILL cost more to update two indexes \nrather than one. Generally, you can drop / readd the index and use \nexplain analyze on one of your own queries to see if that helps.\n\n",
"msg_date": "Mon, 2 Feb 2004 11:58:49 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Overlaping\" indexes"
},
{
"msg_contents": "On Mon, 2004-02-02 at 13:43, Tomasz Myrta wrote:\n> Dnia 2004-02-02 19:30, U�ytkownik scott.marlowe napisa�:\n> > Not entirely, since it only has to sort two columns, it will be smaller, \n> > and will therefore be somewhat faster.\n> \n> Can you say something more about it? Will it be enough faster to keep \n> them both? Did anyone make such tests?\n\nYou can actually come up with test cases where both indexes are useful.\nThe three column index will have more data to sift through. That said,\nhaving both indexes used means there is less ram available for cache.\n\nThe biggest mistake I see is people doing everything they can to\noptimize a single query, then they optimize the next query, etc.\n\nWhen you consider the entire set of queries, those two indexes are very\nlikely to slow select throughput down due to increased memory\nrequirements and the system hitting disk a little more often.\n\nIt's similar to the mistake of benchmarking a set of 1000 row tables and\noptimizing memory settings for that, then using that configuration on\nthe 10M row tables in production.\n\n\n",
"msg_date": "Mon, 02 Feb 2004 14:17:52 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Overlaping\" indexes"
}
] |
[
{
"msg_contents": "Folks,\n\nI've had requests from a couple of businesses to see results of infomal MySQL\n+InnoDB vs. PostgreSQL tests. I know that we don't have the setup to do \nfull formal benchmarking, but surely someone in our community has gone \nhead-to-head on your own application?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 2 Feb 2004 09:21:12 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Josh,\n\nI evaluated MySQL + InnoDB briefly for a project, once. I didn't get \nvery far because of some severe limitations in MySQL.\n\nI had to import all of the data from an existing database (MS SQL). \nOne of the tables was about 8 million rows, 10 fields, and had 5 \nindexes. I found it quite impossible to import into MySQL. I would \nimport the data into a table with no indexes, then perform a bunch of \nmanipulation on it (I wasn't just converting from MS SQL, but also \nneeded to alter quite a bit of the structure). After the manipulation, \nI would drop some columns and build the indexes. It took MySQL over 4 \ndays to do this!\n\nWhat I found out was that any DDL changes to a table in MySQL actually \ndoes this: create a new table, copy all of the data over, then drop \nthe old table and rename the new one. Whenever I added a new index, \nMySQL would go through the process of rebuilding each previous index. \nSame thing when adding or dropping columns.\n\nI could not find a way to import all of the data in a reasonable amount \nof time. For comparison, it took less that 45 minutes to import all of \nthe data in to PostgreSQL (that's ALL of the data, not just that one \ntable).\n\nNeedless to say (but I'll say it anyway :-), I didn't get any farther \nin my evaluation, there was no point.\n\nOne more thing that annoyed me. If you started a process, such as a \nlarge DDL operation, or heaven forbid, a cartesian join (what? I never \ndo that!). There's no way to cancel it with InnoDB. You have to wait \nfor it to finish. Hitting ctrl+c in their command line tool only kills \nthe command line tool, the process continues. Even if you stop the \ndatabase and restart it (including with a hard boot), it will pick \nright up where it left off and continue. That proved to be way too \nmuch of a pain for me.\n\nDisclaimer: I'm not a real MySQL expert, or anything. There could be \nways of getting around this, but after two weeks of trying, I decided \nto give up. It only took me a few hours to build the requisite \nPostgreSQL scripts and I never looked back.\n\nAdam Ruth\n\nOn Feb 2, 2004, at 10:21 AM, Josh Berkus wrote:\n\n> Folks,\n>\n> I've had requests from a couple of businesses to see results of \n> infomal MySQL\n> +InnoDB vs. PostgreSQL tests. I know that we don't have the setup \n> to do\n> full formal benchmarking, but surely someone in our community has gone\n> head-to-head on your own application?\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\n",
"msg_date": "Mon, 2 Feb 2004 11:06:24 -0700",
"msg_from": "Adam Ruth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "On Mon, 2004-02-02 at 12:21, Josh Berkus wrote:\n> Folks,\n> \n> I've had requests from a couple of businesses to see results of infomal MySQL\n> +InnoDB vs. PostgreSQL tests. I know that we don't have the setup to do \n> full formal benchmarking, but surely someone in our community has gone \n> head-to-head on your own application?\n> \n\nWe have the setup to do informal benchmarking via OSDL, but afaik mysql\ndoesn't conform to any of the dbt benchmarks...\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "02 Feb 2004 15:14:08 -0500",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "> One more thing that annoyed me. If you started a process, such as a \n> large DDL operation, or heaven forbid, a cartesian join (what? I never \n> do that!).\n\nI believe InnoDB also has O(n) rollback time. eg. if you are rolling \nback 100 million row changes, it takes a long, long time. In PostgreSQL \nrolling back is O(1)...\n\nChris\n\n",
"msg_date": "Tue, 03 Feb 2004 08:44:32 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "On Tue, 3 Feb 2004, Christopher Kings-Lynne wrote:\n\n> > One more thing that annoyed me. If you started a process, such as a \n> > large DDL operation, or heaven forbid, a cartesian join (what? I never \n> > do that!).\n> \n> I believe InnoDB also has O(n) rollback time. eg. if you are rolling \n> back 100 million row changes, it takes a long, long time. In PostgreSQL \n> rolling back is O(1)...\n\nActually, it takes signifigantly longer to rollback than to roll forward, \nso to speak, so that if you inserted for 10,000 rows and it took 5 \nminutes, it would take upwards of 30 times as long to roll back.\n\nThis is from the docs:\n\nhttp://www.mysql.com/documentation/mysql/bychapter/manual_Table_types.html#InnoDB_tuning\n\nPoint 8:\n\n# Beware of big rollbacks of mass inserts: InnoDB uses the insert buffer \nto save disk I/O in inserts, but in a corresponding rollback no such \nmechanism is used. A disk-bound rollback can take 30 times the time of the \ncorresponding insert. Killing the database process will not help because \nthe rollback will start again at the database startup. The only way to get \nrid of a runaway rollback is to increase the buffer pool so that the \nrollback becomes CPU-bound and runs fast, or delete the whole InnoDB \ndatabase. \n\n",
"msg_date": "Mon, 2 Feb 2004 18:18:32 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Josh Berkus wrote:\n\n> Folks,\n> \n> I've had requests from a couple of businesses to see results of infomal MySQL\n> +InnoDB vs. PostgreSQL tests. I know that we don't have the setup to do \n> full formal benchmarking, but surely someone in our community has gone \n> head-to-head on your own application?\n> \n\nJosh,\n\nhow does someone compare an Apache+PHP+MySQL \"thing\" against something \nimplemented with half the stuff done in stored procedures and the entire \nbusiness model guarded by referential integrity, custom triggers and \nwhatnot?\n\nSeriously, I am tired of this kind of question. You gotta get bold \nenough to stand up in a \"meeting\" like that, say \"guy's, you can ask me \nhow this compares to Oracle ... but if you're seriously asking me how \nthis compares to MySQL, call me again when you've done your homework\".\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n",
"msg_date": "Mon, 02 Feb 2004 20:47:04 -0500",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Wow, I didn't know that (didn't get far enough to test any rollback). \nThat's not a good thing. <facetious>But then again, it's MySQL who \nneeds rollback anyway?</facetious>\n\nOn Feb 2, 2004, at 5:44 PM, Christopher Kings-Lynne wrote:\n\n>> One more thing that annoyed me. If you started a process, such as a \n>> large DDL operation, or heaven forbid, a cartesian join (what? I \n>> never do that!).\n>\n> I believe InnoDB also has O(n) rollback time. eg. if you are rolling \n> back 100 million row changes, it takes a long, long time. In \n> PostgreSQL rolling back is O(1)...\n>\n> Chris\n>\n\n",
"msg_date": "Mon, 2 Feb 2004 19:20:33 -0700",
"msg_from": "Adam Ruth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "> Seriously, I am tired of this kind of question. You gotta get bold \n> enough to stand up in a \"meeting\" like that, say \"guy's, you can ask me \n> how this compares to Oracle ... but if you're seriously asking me how \n> this compares to MySQL, call me again when you've done your homework\".\n\nHey at least I noticed that InnoDB has one essential feature we don't:\n\nSELECT ... IN SHARE MODE;\n\nWhich does a shared lock on a row as opposed to a write lock, hence \navoiding nasty foreign key deadlocks...\n\nChris\n\n",
"msg_date": "Tue, 03 Feb 2004 11:04:45 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Chris,\n\n> Hey at least I noticed that InnoDB has one essential feature we don't:\n> \n> SELECT ... IN SHARE MODE;\n> \n> Which does a shared lock on a row as opposed to a write lock, hence \n> avoiding nasty foreign key deadlocks...\n\nUm, wrong. We don't lock rows for SELECT.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n",
"msg_date": "Mon, 2 Feb 2004 19:41:57 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> Hey at least I noticed that InnoDB has one essential feature we don't:\n>> SELECT ... IN SHARE MODE;\n>> \n>> Which does a shared lock on a row as opposed to a write lock, hence \n>> avoiding nasty foreign key deadlocks...\n\n> Um, wrong. We don't lock rows for SELECT.\n\nNo, but Chris is correct that we could do with having some kind of\nshared lock facility at the row level.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 02 Feb 2004 23:08:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test? "
},
{
"msg_contents": ">>Um, wrong. We don't lock rows for SELECT.\n> \n> No, but Chris is correct that we could do with having some kind of\n> shared lock facility at the row level.\n\nOut of interest, what is it about this particular task that's so hard? \n(Not that I could code it myself). But surely you can use the same sort \nof thing as the FOR UPDATE code path?\n\nChris\n\n",
"msg_date": "Tue, 03 Feb 2004 12:30:43 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n>> No, but Chris is correct that we could do with having some kind of\n>> shared lock facility at the row level.\n\n> Out of interest, what is it about this particular task that's so hard? \n\nKeeping track of multiple lockers in a fixed amount of disk space.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 02 Feb 2004 23:43:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test? "
},
{
"msg_contents": "In an attempt to throw the authorities off his trail, [email protected] (Jan Wieck) transmitted:\n> Josh Berkus wrote:\n>> I've had requests from a couple of businesses to see results of\n>> infomal MySQL\n>> +InnoDB vs. PostgreSQL tests. I know that we don't have the setup\n>> to do full formal benchmarking, but surely someone in our community\n>> has gone head-to-head on your own application?\n>\n> how does someone compare an Apache+PHP+MySQL \"thing\" against something\n> implemented with half the stuff done in stored procedures and the\n> entire business model guarded by referential integrity, custom\n> triggers and whatnot?\n>\n> Seriously, I am tired of this kind of question. You gotta get bold\n> enough to stand up in a \"meeting\" like that, say \"guy's, you can ask\n> me how this compares to Oracle ... but if you're seriously asking me\n> how this compares to MySQL, call me again when you've done your\n> homework\".\n\nActually, before saying anything in public about their products, check\nout what they require for use of their protected trademarks.\n<http://www.mysql.com/company/trademark.html>\n\nTo wit, they indicate that:\n\n \"The MySQL AB Marks may not be used in a manner or with respect to\n products that will decrease the value of the MySQL AB Marks or\n otherwise impair or damage MySQL AB's brand integrity, reputation or\n goodwill\"\n\nIt seems to me that presenting a benchmark that did not favor their\nproduct could be quite reasonably considered to be an \"impairment\" of\ntheir integrity, reputation, or goodwill, and therefore be something\nworthy of legal attack.\n\nThat certainly wouldn't be new to the database industry; numerous\n(most?) database vendors forbid third parties from presenting\nbenchmarks without their express consent.\n\nIt is actually rather surprising that despite having the budget to put\ntogether a \"benchmarketing\" group, the only results that they are\npublishing are multiple years old, with a paucity of InnoDB(tm)\nresults, and where they only seem to compare it with ancient versions\nof \"competitors.\"\n\nAnd, of course, if \"MaxDB(tm)\" is their future, replacing the older\nstorage schemes, the benchmarks should be based on that. And the\nbenchmarks that exist there are all based on the R/3 (tm) SD module,\nwhich is spectacularly different from the usual web server work. (It\nlooks like that involves throwing a load of BDC sessions at the\nserver, but I'm guessing, and in any case, it's work SAP AG did...)\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://www.ntlug.org/~cbbrowne/nonrdbms.html\n\"Has anyone ever thought about the fact that in general, the only web\nsites that are consistently making money are the ones dealing in\npornography? This brings new meaning to the term, \"obscene\nprofits\". :)\" -- Paul Robinson <[email protected]>\n",
"msg_date": "Mon, 02 Feb 2004 23:54:15 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": ">>Out of interest, what is it about this particular task that's so hard? \n> \n> \n> Keeping track of multiple lockers in a fixed amount of disk space.\n\nWhy not look at how InnoDB does it? Or is that not applicable?\n\n",
"msg_date": "Tue, 03 Feb 2004 12:55:27 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Chris,\n\n> > Which does a shared lock on a row as opposed to a write lock, hence \n> > avoiding nasty foreign key deadlocks...\n> \n> Um, wrong. We don't lock rows for SELECT.\n\nUnless you meant something else? Am I not following you?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 2 Feb 2004 21:17:15 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "\n>>Um, wrong. We don't lock rows for SELECT.\n> \n> Unless you meant something else? Am I not following you?\n\nI mean row level shared read lock. eg. a lock that says, you can read \nbut you cannot delete.\n\nIt's what postgres needs to alleviate its foreign key trigger deadlock \nproblems.\n\nChris\n\n",
"msg_date": "Tue, 03 Feb 2004 13:52:41 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "\nWell, when I prepared my PG presentation I did some testing of MySQL (So\nI could be justified in calling it lousy :). I used the latest release\n(4.0.something I think)\n\nI was first bitten by my table type being MyISAM when I thought I set\nthe default ot InnoDB. But I decided since my test was going to be\nread-only MyISAM should be the best possible choice. I loaded up a\ncouple million records and changed my stored procedure into a perl\nscript [I also decided to use this perl script for testing PG to be\nfair].\n\nFor one client mysql simply screamed. \n\nThen I decided to see what happens with 20 clients.\n\nMySQL clocked in at 650 seconds. During this time the machine was VERY\nunresponsive. To be fair, that could be Linux, not MySQL.\n\nPG (7.3.4) clocked in at 220 seconds. The machine was perfectly fine\nduring the test - nice and responsive. \n\nThe hardware wasn't much - dual p2-450 running stock RH8. (2x15k 18g\nscsi drives for the data volume)\n\nThen I decided to try the \"beloved\" InnoDB. \n\nWell.. after it sat for a few hours at 100% cpu loading the data I\nkilled it off and gave up on InnoDB.. I am interested in the numbers.\nPerhaps I'll fire it up again someday and let it finish loading.\n\nRemember - you cannot judge mysql by since connection performance - you\ncan't beat it. But just add up the concurrency and watch the cookies\ntumble\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n",
"msg_date": "Tue, 3 Feb 2004 08:52:03 -0500",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "> script [I also decided to use this perl script for testing PG to be\n> fair].\n>\n> For one client mysql simply screamed.\n>\n\nIf already have test case set up, you could inform us, from where Postgres\nstarts to beat MySql. Because if with 5 clients it still \"screams\" then i\nwould give it a try in case of that kind of requirements.\n\nRigmor Ukuhe\n\n> Then I decided to see what happens with 20 clients.\n>\n> MySQL clocked in at 650 seconds. During this time the machine was VERY\n> unresponsive. To be fair, that could be Linux, not MySQL.\n>\n> PG (7.3.4) clocked in at 220 seconds. The machine was perfectly fine\n> during the test - nice and responsive.\n>\n> The hardware wasn't much - dual p2-450 running stock RH8. (2x15k 18g\n> scsi drives for the data volume)\n>\n> Then I decided to try the \"beloved\" InnoDB.\n>\n> Well.. after it sat for a few hours at 100% cpu loading the data I\n> killed it off and gave up on InnoDB.. I am interested in the numbers.\n> Perhaps I'll fire it up again someday and let it finish loading.\n>\n> Remember - you cannot judge mysql by since connection performance - you\n> can't beat it. But just add up the concurrency and watch the cookies\n> tumble\n>\n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> ---\n> Incoming mail is certified Virus Free.\n> Checked by AVG anti-virus system (http://www.grisoft.com).\n> Version: 6.0.564 / Virus Database: 356 - Release Date: 19.01.2004\n>\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.564 / Virus Database: 356 - Release Date: 19.01.2004\n\n",
"msg_date": "Tue, 3 Feb 2004 16:02:00 +0200",
"msg_from": "\"Rigmor Ukuhe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "On Tue, 3 Feb 2004 16:02:00 +0200\n\"Rigmor Ukuhe\" <[email protected]> wrote:\n\n> > script [I also decided to use this perl script for testing PG to be\n> > fair].\n> >\n> > For one client mysql simply screamed.\n> >\n> \n> If already have test case set up, you could inform us, from where\n> Postgres starts to beat MySql. Because if with 5 clients it still\n> \"screams\" then i would give it a try in case of that kind of\n> requirements.\n> \n\nI just checked (to see about restarting the innodb test) and it appears\nthat it'll take a bit of work to get the machine up and running. \nI don't have time right now to do further testing.\n\nHowever, you could try it out.\n\nNot sure at what point it will topple, in my case it didn't matter if it\nran good with 5 clients as I'll always have many more clients than 5. \n\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n",
"msg_date": "Tue, 3 Feb 2004 10:53:49 -0500",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Christopher Browne wrote:\n\n>In an attempt to throw the authorities off his trail, [email protected] (Jan Wieck) transmitted:\n> \n>\n>>Josh Berkus wrote:\n>> \n>>\n>>>I've had requests from a couple of businesses to see results of\n>>>infomal MySQL\n>>>+InnoDB vs. PostgreSQL tests. I know that we don't have the setup\n>>>to do full formal benchmarking, but surely someone in our community\n>>>has gone head-to-head on your own application?\n>>> \n>>>\n>>how does someone compare an Apache+PHP+MySQL \"thing\" against something\n>>implemented with half the stuff done in stored procedures and the\n>>entire business model guarded by referential integrity, custom\n>>triggers and whatnot?\n>>\n>>Seriously, I am tired of this kind of question. You gotta get bold\n>>enough to stand up in a \"meeting\" like that, say \"guy's, you can ask\n>>me how this compares to Oracle ... but if you're seriously asking me\n>>how this compares to MySQL, call me again when you've done your\n>>homework\".\n>> \n>>\n>\n>Actually, before saying anything in public about their products, check\n>out what they require for use of their protected trademarks.\n><http://www.mysql.com/company/trademark.html>\n>\n>To wit, they indicate that:\n>\n> \"The MySQL AB Marks may not be used in a manner or with respect to\n> products that will decrease the value of the MySQL AB Marks or\n> otherwise impair or damage MySQL AB's brand integrity, reputation or\n> goodwill\"\n>\n>It seems to me that presenting a benchmark that did not favor their\n>product could be quite reasonably considered to be an \"impairment\" of\n>their integrity, reputation, or goodwill, and therefore be something\n>worthy of legal attack.\n> \n>\nIt depends on how it is presented. Basically you just don't offer an \nopinion on the matter.\nFor example...\n\nMySQL was 10x slower than PostgreSQL in this test....\n\nInstead you could use something like.\n\nWe performed the following test.\n\nMySQL scored this much\nPostgreSQL scored this much\n\nNotice no use of explaination.....\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n\n\n\n\n\n\n\nChristopher Browne wrote:\n\nIn an attempt to throw the authorities off his trail, [email protected] (Jan Wieck) transmitted:\n \n\nJosh Berkus wrote:\n \n\nI've had requests from a couple of businesses to see results of\ninfomal MySQL\n+InnoDB vs. PostgreSQL tests. I know that we don't have the setup\nto do full formal benchmarking, but surely someone in our community\nhas gone head-to-head on your own application?\n \n\nhow does someone compare an Apache+PHP+MySQL \"thing\" against something\nimplemented with half the stuff done in stored procedures and the\nentire business model guarded by referential integrity, custom\ntriggers and whatnot?\n\nSeriously, I am tired of this kind of question. You gotta get bold\nenough to stand up in a \"meeting\" like that, say \"guy's, you can ask\nme how this compares to Oracle ... 
but if you're seriously asking me\nhow this compares to MySQL, call me again when you've done your\nhomework\".\n \n\n\nActually, before saying anything in public about their products, check\nout what they require for use of their protected trademarks.\n<http://www.mysql.com/company/trademark.html>\n\nTo wit, they indicate that:\n\n \"The MySQL AB Marks may not be used in a manner or with respect to\n products that will decrease the value of the MySQL AB Marks or\n otherwise impair or damage MySQL AB's brand integrity, reputation or\n goodwill\"\n\nIt seems to me that presenting a benchmark that did not favor their\nproduct could be quite reasonably considered to be an \"impairment\" of\ntheir integrity, reputation, or goodwill, and therefore be something\nworthy of legal attack.\n \n\nIt depends on how it is presented. Basically you just don't offer an\nopinion on the matter.\nFor example...\n\nMySQL was 10x slower than PostgreSQL in this test....\n\nInstead you could use something like.\n\nWe performed the following test.\n\nMySQL scored this much\nPostgreSQL scored this much\n\nNotice no use of explaination.....\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL",
"msg_date": "Tue, 03 Feb 2004 08:19:17 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Jeff <[email protected]> writes:\n> Not sure at what point it will topple, in my case it didn't matter if it\n> ran good with 5 clients as I'll always have many more clients than 5. \n\nI did some idle, very unscientific tests the other day that indicated\nthat MySQL insert performance starts to suck with just 2 concurrent\ninserters. Given a file containing 10000 INSERT commands, a single\nmysql client ran the file in about a second. So if I feed the file\nsimultaneously to two mysqls in two shell windows, it should take about\ntwo seconds total to do the 20000 inserts, right? The observed times\nwere 13 to 15 seconds. (I believe this is with a MyISAM table, since\nI just said CREATE TABLE without any options.)\n\nIt does scream with only one client though ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Feb 2004 11:46:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test? "
},
{
"msg_contents": "On Tue, 03 Feb 2004 11:46:05 -0500\nTom Lane <[email protected]> wrote:\n\n> Jeff <[email protected]> writes:\n> > Not sure at what point it will topple, in my case it didn't matter\n> > if it ran good with 5 clients as I'll always have many more clients\n> > than 5. \n> \n> I did some idle, very unscientific tests the other day that indicated\n> that MySQL insert performance starts to suck with just 2 concurrent\n> inserters. Given a file containing 10000 INSERT commands, a single\n> mysql client ran the file in about a second. So if I feed the file\n> simultaneously to two mysqls in two shell windows, it should take\n> about two seconds total to do the 20000 inserts, right? The observed\n> times were 13 to 15 seconds. (I believe this is with a MyISAM table,\n> since I just said CREATE TABLE without any options.)\n> \n\nMyISAM is well known to suck if you update/insert/delete because it\nsimply aquires a full table lock when you perform those operations!\n\nInnoDB is supposed to be better at that.\n\nSo your results are fairly in line with what you should see.\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n",
"msg_date": "Tue, 3 Feb 2004 11:55:02 -0500",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Jeff <[email protected]> writes:\n> On Tue, 03 Feb 2004 11:46:05 -0500\n> Tom Lane <[email protected]> wrote:\n>> I did some idle, very unscientific tests the other day that indicated\n>> that MySQL insert performance starts to suck with just 2 concurrent\n>> inserters. Given a file containing 10000 INSERT commands, a single\n>> mysql client ran the file in about a second. So if I feed the file\n>> simultaneously to two mysqls in two shell windows, it should take\n>> about two seconds total to do the 20000 inserts, right? The observed\n>> times were 13 to 15 seconds. (I believe this is with a MyISAM table,\n>> since I just said CREATE TABLE without any options.)\n\n> MyISAM is well known to suck if you update/insert/delete because it\n> simply aquires a full table lock when you perform those operations!\n\nSure, I wasn't expecting it to actually overlap any operations. (If you\ntry the same test with Postgres, the scaling factor is a little better\nthan linear because we do get some overlap.) But that shouldn't result\nin a factor-of-seven slowdown. There's something badly wrong with their\nlow-level locking algorithms I think.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Feb 2004 12:03:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test? "
},
{
"msg_contents": "On Tue, 3 Feb 2004, Joshua D. Drake wrote:\n\n> Christopher Browne wrote:\n> \n> >In an attempt to throw the authorities off his trail, [email protected] (Jan Wieck) transmitted:\n> > \n> >\n> >>Josh Berkus wrote:\n> >> \n> >>\n> >>>I've had requests from a couple of businesses to see results of\n> >>>infomal MySQL\n> >>>+InnoDB vs. PostgreSQL tests. I know that we don't have the setup\n> >>>to do full formal benchmarking, but surely someone in our community\n> >>>has gone head-to-head on your own application?\n> >>> \n> >>>\n> >>how does someone compare an Apache+PHP+MySQL \"thing\" against something\n> >>implemented with half the stuff done in stored procedures and the\n> >>entire business model guarded by referential integrity, custom\n> >>triggers and whatnot?\n> >>\n> >>Seriously, I am tired of this kind of question. You gotta get bold\n> >>enough to stand up in a \"meeting\" like that, say \"guy's, you can ask\n> >>me how this compares to Oracle ... but if you're seriously asking me\n> >>how this compares to MySQL, call me again when you've done your\n> >>homework\".\n> >> \n> >>\n> >\n> >Actually, before saying anything in public about their products, check\n> >out what they require for use of their protected trademarks.\n> ><http://www.mysql.com/company/trademark.html>\n> >\n> >To wit, they indicate that:\n> >\n> > \"The MySQL AB Marks may not be used in a manner or with respect to\n> > products that will decrease the value of the MySQL AB Marks or\n> > otherwise impair or damage MySQL AB's brand integrity, reputation or\n> > goodwill\"\n> >\n> >It seems to me that presenting a benchmark that did not favor their\n> >product could be quite reasonably considered to be an \"impairment\" of\n> >their integrity, reputation, or goodwill, and therefore be something\n> >worthy of legal attack.\n> > \n> >\n> It depends on how it is presented. Basically you just don't offer an \n> opinion on the matter.\n> For example...\n> \n> MySQL was 10x slower than PostgreSQL in this test....\n> \n> Instead you could use something like.\n> \n> We performed the following test.\n> \n> MySQL scored this much\n> PostgreSQL scored this much\n\nMy guess is that what they are saying is that you can't make a program \nlike:\n\nmysqlhelper\n\nwithout their permission.\n\nUsing their mark in a review is fair use, and the only way they could get \nyou is if you either failed to attribute it, or had signed a license with \nthem saying you wouldn't do benchmarks, like how Oracle licenses their \nsoftware.\n\n\n",
"msg_date": "Tue, 3 Feb 2004 10:16:46 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "> Seriously, I am tired of this kind of question. You gotta get bold \n> enough to stand up in a \"meeting\" like that, say \"guy's, you can ask me \n> how this compares to Oracle ... but if you're seriously asking me how \n> this compares to MySQL, call me again when you've done your homework\".\n\nCan they call you at the unemployment office?\n--\nMike Nolan\n",
"msg_date": "Wed, 4 Feb 2004 07:36:43 -0600 (CST)",
"msg_from": "Mike Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Mike Nolan wrote:\n>> Seriously, I am tired of this kind of question. You gotta get bold \n>> enough to stand up in a \"meeting\" like that, say \"guy's, you can ask me \n>> how this compares to Oracle ... but if you're seriously asking me how \n>> this compares to MySQL, call me again when you've done your homework\".\n> \n> Can they call you at the unemployment office?\n\nIt might not work with the words I used above, but the point I tried to \nmake is that the hardest thing you can \"sell\" is a \"no\". I mean, not \njust saying \"no\", but selling it in a way that the customer will not go \nwith the next idiot who claims \"we can do that\".\n\nIf the customer has a stupid idea, like envisioning an enterprise \nsolution based on ImSOL, there is no way you will be able to deliver it. \nPaying customer or not, you will fail if you bow to their \"strategic\" \ndecisions and ignore knowing that the stuff they want to use just \ndoesn't fit.\n\nThat is absolutely not ImSOL specific. If someone comes to me and asks \nfor a HA scenario with zero transaction loss during failover, we can \ndiscuss a little if this is really what he needs or not, but if he needs \nthat, the solution will be Oracle or DB2, for sure I will not claim that \nPostgreSQL can do that, because it cannot.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n",
"msg_date": "Wed, 04 Feb 2004 16:54:06 -0500",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
},
{
"msg_contents": "Jan Wieck wrote:\n> It might not work with the words I used above, but the point I tried to \n> make is that the hardest thing you can \"sell\" is a \"no\". I mean, not \n> just saying \"no\", but selling it in a way that the customer will not go \n> with the next idiot who claims \"we can do that\".\n\nBut you will need some kind of data or reasoning to back up your response,\nespecially if it is deviating from the conventional wisdom, or from\nsome familiar system.\n\nEspecially in this case, it's not a \"no\" answer that's being sold...\nit's \"solution a is better than solution b, even though you might\nbe more familiar with solution b.\"\n\nCheers,\nMark\n\n",
"msg_date": "Thu, 05 Feb 2004 15:09:36 -0800",
"msg_from": "Mark Harrison <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] MySQL+InnoDB vs. PostgreSQL test?"
}
] |
[
{
"msg_contents": "Folks,\n\nIs anyone on this list using PostgreSQL on a mini or mainframe platform? If \nso, drop me a line. Thanks!\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 2 Feb 2004 10:06:23 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Mainframe Linux + PostgreSQL"
}
] |
[
{
"msg_contents": "I am running a Dual Xeon hyperthreaded server with 4GB RAM RAID-5. The only \nthing running on the server is Postgres running under Fedora. I have a 700 \nconnection limit.\n\nThe DB is setup as a backend for a very high volume website. Most of the queries \nare simple, such as logging accesses, user login verification etc. There are a few \nbigger things suchas reporting etc but for the most part each transaction lasts less \nthen a second. The connections are not persistant (I'm using pg_connect in PHP)\n\nThe system was at 2 GB with a 400 connection limit. We ran into problems because \nwe hit the limit of connections during high volume.\n\n1. Does 400 connections sound consistant with the 2GB of RAM? Does 700 sound \ngood with 4 GB. I've read a little on optimizing postgres. Is there anything else I can \ndo maybe OS wise to increase how many connections I get before I start swapping?\n\n2. Are there any clustering technologies that will work with postgres? Specifically I'm \nlooking at increasing the number of connections.\n\nThe bottom line is since the website launched (middle of January) we have increased \nthe number of http connections, and increased bandwidth allowances by over 10 \ntimes. The site continues to grow and we are looking at our options. Some of the \nideas have been possible DB replication. Write to master and read from multiple \nslaves. Other ideas including increasing hardware.\n\nThis is the biggest site I have ever worked with. Almost everything else fits in a T1 \nwith a single DB server handling multiple sites. Does anybody with experence in this \nrealm have any suggestions?\n\nThank you in advance for whatever help you can provide.\n--\nKevin Barnard\n\n\n",
"msg_date": "Mon, 02 Feb 2004 13:14:15 -0600",
"msg_from": "\"Kevin Barnard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Increasing number of PG connections."
},
{
"msg_contents": "I am new here. I have a question related to this in some way.\n\nOur web site needs to upload a large volume of data into Postgres at a \ntime. The performance deterioates as number of rows becomes larger. \nWhen it reaches 2500 rows, it never come back to GUI. Since the tests \nwere run through GUI, my suspision is\nthat it might be caused by the way the application server talking to \nPostgres server, the connections, etc.. What might be the factors \ninvolved here? Does anyone know?\n\nThanks a lot!\n\nQing\nOn Feb 2, 2004, at 11:14 AM, Kevin Barnard wrote:\n\n> I am running a Dual Xeon hyperthreaded server with 4GB RAM RAID-5. \n> The only\n> thing running on the server is Postgres running under Fedora. I have \n> a 700\n> connection limit.\n>\n> The DB is setup as a backend for a very high volume website. Most of \n> the queries\n> are simple, such as logging accesses, user login verification etc. \n> There are a few\n> bigger things suchas reporting etc but for the most part each \n> transaction lasts less\n> then a second. The connections are not persistant (I'm using \n> pg_connect in PHP)\n>\n> The system was at 2 GB with a 400 connection limit. We ran into \n> problems because\n> we hit the limit of connections during high volume.\n>\n> 1. Does 400 connections sound consistant with the 2GB of RAM? Does \n> 700 sound\n> good with 4 GB. I've read a little on optimizing postgres. Is there \n> anything else I can\n> do maybe OS wise to increase how many connections I get before I start \n> swapping?\n>\n> 2. Are there any clustering technologies that will work with \n> postgres? Specifically I'm\n> looking at increasing the number of connections.\n>\n> The bottom line is since the website launched (middle of January) we \n> have increased\n> the number of http connections, and increased bandwidth allowances by \n> over 10\n> times. The site continues to grow and we are looking at our options. \n> Some of the\n> ideas have been possible DB replication. Write to master and read \n> from multiple\n> slaves. Other ideas including increasing hardware.\n>\n> This is the biggest site I have ever worked with. Almost everything \n> else fits in a T1\n> with a single DB server handling multiple sites. Does anybody with \n> experence in this\n> realm have any suggestions?\n>\n> Thank you in advance for whatever help you can provide.\n> --\n> Kevin Barnard\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Mon, 2 Feb 2004 11:39:24 -0800",
"msg_from": "Qing Zhao <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing number of PG connections."
},
{
"msg_contents": "On Mon, 2 Feb 2004, Qing Zhao wrote:\n\n> I am new here. I have a question related to this in some way.\n> \n> Our web site needs to upload a large volume of data into Postgres at a \n> time. The performance deterioates as number of rows becomes larger. \n> When it reaches 2500 rows, it never come back to GUI. Since the tests \n> were run through GUI, my suspision is\n> that it might be caused by the way the application server talking to \n> Postgres server, the connections, etc.. What might be the factors \n> involved here? Does anyone know?\n\nActually, I'm gonna go out on a limb here and assume two things:\n\n1. you've got lotsa fk/pk relationships setup.\n2. you're analyzing the table empty before loading it up.\n\nWhat happens in this instance is that the analyze on an empty, or nearly \nso, table, means that during the inserts, postgresql thinks you have only \na few rows. At first, this is fine, as pgsql will seq scan the \ntables to make sure there is a proper key in both. As the number of \nrows increases, the planner needs to switch to index scans but doesn't, \nbecause it doesn't know that the number of rows is increasing.\n\nFix: insert a few hundred rows, run analyze, check to see if the explain \nfor inserts is showing index scans or not. If not, load a few more \nhundred rows, analyze, rinse, repeat.\n\nAlso, look for fk/pk mismatches. I.e. an int4 field pointing to an int8 \nfield. That's a performance killer, so if the pk/fk types don't match, \nsee if you can change your field types to match and try again.\n\n",
"msg_date": "Mon, 2 Feb 2004 13:49:53 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "inserting large number of rows was: Re: Increasing number\n\tof PG connections."
},
{
"msg_contents": "On Monday 02 February 2004 19:39, Qing Zhao wrote:\n> I am new here. I have a question related to this in some way.\n\nHmm - no real connection I can see - might have been better to start a new \nthread rather than replying to this one. Also, it is usually considered best \npractice not to quote large amounts of the previous message if you're not \nreplying to it,\n\n> Our web site needs to upload a large volume of data into Postgres at a\n> time. The performance deterioates as number of rows becomes larger.\n> When it reaches 2500 rows, it never come back to GUI. Since the tests\n> were run through GUI, my suspision is\n> that it might be caused by the way the application server talking to\n> Postgres server, the connections, etc.. What might be the factors\n> involved here? Does anyone know?\n\nYou don't really give us enough information. What GUI are you talking about? \nHow are you loading this data - as a series of INSERT statements, text-file \nwith separators, from Access/MySQL etc?\n\nIn general, the fastest way to add a large number of rows is via the COPY sql \ncommand. Next best is to batch your inserts together into larger transactions \nof say 100-1000 inserts.\n\nTwo other things to be aware of are: use of VACUUM/ANALYZE and configuration \ntuning (see http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php).\n\nPG shouldn't have a problem with inserting a few thousand rows, so I suspect \nit's something to do with your application/GUI setup.\n\nHope that helps, if not try turning on statement logging for PG and then we \ncan see what commands your GUI is sending.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 2 Feb 2004 20:52:19 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Bulk Record upload (was Re: Increasing number of PG connections)"
},
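A hedged illustration of the insert batching Richard suggests, assuming a made-up points table; the COPY alternative is shown further down the thread:

    BEGIN;
    INSERT INTO points (id, typeid, reposid) VALUES (1, 1, 1);
    INSERT INTO points (id, typeid, reposid) VALUES (2, 1, 1);
    -- ... a few hundred to a thousand inserts per transaction ...
    COMMIT;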
{
"msg_contents": "On Mon, 2 Feb 2004, Kevin Barnard wrote:\n\n> I am running a Dual Xeon hyperthreaded server with 4GB RAM RAID-5. The only \n> thing running on the server is Postgres running under Fedora. I have a 700 \n> connection limit.\n> \n> The DB is setup as a backend for a very high volume website. Most of the queries \n> are simple, such as logging accesses, user login verification etc. There are a few \n> bigger things suchas reporting etc but for the most part each transaction lasts less \n> then a second. The connections are not persistant (I'm using pg_connect in PHP)\n> \n> The system was at 2 GB with a 400 connection limit. We ran into problems because \n> we hit the limit of connections during high volume.\n\nwhat do you mean at 2 GB? Is that how much is in kernel cache plus \nbuffer, plus used, plus etc??? Could you give us the top of top output to \nmake sure? If most of that is kernel cache, then that's fine. My \nexperience has been that individual postgresql backends only weigh in at a \nmega byte at most, and they share buffer, so 700 connections can be \nanywhere from 300meg to 1 gig. the rest would be buffer memory. It's not \na good idea to give up too much to shared buffers, as the database isn't \nas good at caching as the kernel.\n\nWhat do you have in postgresql.conf? sort_mem, shared_buffers, etc???\n\nsort_mem can be a real killer if it lets the processes chew up too much \nmemory. Once sort_mem gets high enough to make the machine start swapping \nit is doing more harm than good being that high, and should usually be \nlowered a fair bit.\n\nHow many disks in your RAID5? The more the better. Is it hardware with \nbattery backed cache? If you write much to it it will help to have \nbattery backed cache on board. If it's a megaraid / LSI board, get the \nmegaraid2 driver, it's supposedly much faster.\n\nYou may find it hard to get postgresql to use any more memory than you \nhave, as 32 bit apps can only address 2 gigs anyway, but the extra can \ncertainly be used by the kernel as cache, which will help.\n\n",
"msg_date": "Mon, 2 Feb 2004 13:58:35 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing number of PG connections."
},
{
"msg_contents": "I must have missed this post when it was made earlier. Pardon the noise if\nmy suggestion has already been made.\n\nUnlike MySQL (and possibly other database servers) PostgreSQL is faster when\ninserting inside a transaction. Depending on the method in which you are\nactually adding the records.\n\nIn my own experience (generating a list of INSERT statements from a perl\nscript and using psql to execute them) the difference in performance was\nincredibly dramatic when I added a \"BEGIN WORK\" at the beginning and\n\"COMMIT WORK\" at the end.\n\nscott.marlowe wrote:\n> On Mon, 2 Feb 2004, Qing Zhao wrote:\n> \n> \n>>I am new here. I have a question related to this in some way.\n>>\n>>Our web site needs to upload a large volume of data into Postgres at a \n>>time. The performance deterioates as number of rows becomes larger. \n>>When it reaches 2500 rows, it never come back to GUI. Since the tests \n>>were run through GUI, my suspision is\n>>that it might be caused by the way the application server talking to \n>>Postgres server, the connections, etc.. What might be the factors \n>>involved here? Does anyone know?\n> \n> \n> Actually, I'm gonna go out on a limb here and assume two things:\n> \n> 1. you've got lotsa fk/pk relationships setup.\n> 2. you're analyzing the table empty before loading it up.\n> \n> What happens in this instance is that the analyze on an empty, or nearly \n> so, table, means that during the inserts, postgresql thinks you have only \n> a few rows. At first, this is fine, as pgsql will seq scan the \n> tables to make sure there is a proper key in both. As the number of \n> rows increases, the planner needs to switch to index scans but doesn't, \n> because it doesn't know that the number of rows is increasing.\n> \n> Fix: insert a few hundred rows, run analyze, check to see if the explain \n> for inserts is showing index scans or not. If not, load a few more \n> hundred rows, analyze, rinse, repeat.\n> \n> Also, look for fk/pk mismatches. I.e. an int4 field pointing to an int8 \n> field. That's a performance killer, so if the pk/fk types don't match, \n> see if you can change your field types to match and try again.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Mon, 02 Feb 2004 16:37:02 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inserting large number of rows was: Re: Increasing"
},
{
"msg_contents": "On 2 Feb 2004 at 13:58, scott.marlowe wrote:\n\n> what do you mean at 2 GB? Is that how much is in kernel cache plus \n> buffer, plus used, plus etc??? Could you give us the top of top output to \n> make sure? If most of that is kernel cache, then that's fine. \n\n2GB was total system memory. We upgraded to 4GB to prior to increasing the \nnumber of connections.\n\nHere's the top of top\n\n 16:14:17 up 2 days, 16:15, 1 user, load average: 7.60, 6.56, 4.61\n730 processes: 721 sleeping, 9 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n cpu00 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n cpu01 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n cpu02 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n cpu03 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\nMem: 3747644k av, 3298344k used, 449300k free, 0k shrd, 147880k buff\n 2158532k active, 760040k inactive\nSwap: 1048088k av, 0k used, 1048088k free 2262156k cached\n\n\nThe DB is pretty close to max connections at this point in time. I don't know why \nCPU shows 0% in every bucket. It looks like I can increase the number of \nconnections a little from here. This is a fairly standard Fedora install. It's using \nversion 2.4.22 of the Kernel. Postgres is a complied version using 7.4.1\n\n> experience has been that individual postgresql backends only weigh in at a \n> mega byte at most, and they share buffer, so 700 connections can be \n> anywhere from 300meg to 1 gig. the rest would be buffer memory. It's not \n> a good idea to give up too much to shared buffers, as the database isn't \n> as good at caching as the kernel.\n\nOK I take this as I should keep shared buffers around 2x connections then correct?\n\n> \n> What do you have in postgresql.conf? sort_mem, shared_buffers, etc???\n\nHere is what I have that is not set from the defaults.\n\nmax_connections = 700\nshared_buffers = 1500\nsort_mem = 512\nrandom_page_cost = 2\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\n\n\n> sort_mem can be a real killer if it lets the processes chew up too much \n> memory. Once sort_mem gets high enough to make the machine start swapping \n> it is doing more harm than good being that high, and should usually be \n> lowered a fair bit.\n\nI dropped it down to 512 as you can see. Should I be running with all of the stats on? \nI am no longer using pg_autovacuum. I seem to be getting better results with an \nhourly Vacuum anaylse.\n\n> How many disks in your RAID5? The more the better. Is it hardware with \n> battery backed cache? If you write much to it it will help to have \n> battery backed cache on board. If it's a megaraid / LSI board, get the \n> megaraid2 driver, it's supposedly much faster.\n\n4 disk IBM ServeRAID 5i with battery backed cache.\n\n> You may find it hard to get postgresql to use any more memory than you \n> have, as 32 bit apps can only address 2 gigs anyway, but the extra can \n> certainly be used by the kernel as cache, which will help.\n\nIsn't that only true for each indivdual process space. Shouldn't each process have \naccess at most 2GB. If each backend is in it's own process space is this really a limit \nsince all of my queries are pretty small.\n\nI have been monitoring the system has it gets up to load. For most of the time the \nsytem sits around 100-300 connections. Once it ramps up it ramps up hard. Top \nstarts cycling at 0 and 133% CPU for irq, softirq and iowait. 
The system stays at 700 \nconnections until users give up. I can watch bandwidth utilization drop to almost \nnothing right before the DB catches up.\n\n--\nKevin Barnard\nSpeed Fulfillment and Call Center\[email protected]\n214-258-0120\n",
"msg_date": "Mon, 02 Feb 2004 17:10:43 -0600",
"msg_from": "\"Kevin Barnard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Increasing number of PG connections."
},
{
"msg_contents": "On Mon, 2 Feb 2004, Kevin Barnard wrote:\n\n> On 2 Feb 2004 at 13:58, scott.marlowe wrote:\n> \n> > what do you mean at 2 GB? Is that how much is in kernel cache plus \n> > buffer, plus used, plus etc??? Could you give us the top of top output to \n> > make sure? If most of that is kernel cache, then that's fine. \n> \n> 2GB was total system memory. We upgraded to 4GB to prior to increasing the \n> number of connections.\n\nOh, ok. I thought you meant the system was using 2 gigs of RAM for \npostgresql\n\n> Here's the top of top\n> \n> 16:14:17 up 2 days, 16:15, 1 user, load average: 7.60, 6.56, 4.61\n> 730 processes: 721 sleeping, 9 running, 0 zombie, 0 stopped\n> CPU states: cpu user nice system irq softirq iowait idle\n> total 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n> cpu00 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n> cpu01 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n> cpu02 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n> cpu03 0.0% 0.0% 0.0% 0.0% 0.0% 0.0% 0.0%\n> Mem: 3747644k av, 3298344k used, 449300k free, 0k shrd, 147880k buff\n> 2158532k active, 760040k inactive\n> Swap: 1048088k av, 0k used, 1048088k free 2262156k cached\n\nwhen you have a high load but load CPU usage, you are usually I/O bound.\n\n> The DB is pretty close to max connections at this point in time. I don't know why \n> CPU shows 0% in every bucket. It looks like I can increase the number of \n> connections a little from here. This is a fairly standard Fedora install. It's using \n> version 2.4.22 of the Kernel. Postgres is a complied version using 7.4.1\n\nOn this machine you could probably handle even more. What I want is to \nget your page return times down enough so you don't need to increase the \nnumber of connections. I.e. if you've got 2 second response times and you \ndrop those to 0.2 seconds, then you won't need as many processes to handle \nthe load (theoretically... :-)\n\n> > experience has been that individual postgresql backends only weigh in at a \n> > mega byte at most, and they share buffer, so 700 connections can be \n> > anywhere from 300meg to 1 gig. the rest would be buffer memory. It's not \n> > a good idea to give up too much to shared buffers, as the database isn't \n> > as good at caching as the kernel.\n> \n> OK I take this as I should keep shared buffers around 2x connections then correct?\n\nNot really. What happens is that if the shared buffers are so large that \nthey are as large as or god forbid, larger than the kernel cache, then the \nkernel cache becomes less effective. The general rule of thumb is 25% of \nmemory, or 256 Megs, whichever is less. The real test is that you want \nenough shared_buffers so that all the result sets currently being smooshed \nup against each other in joins, sorts, etc... can fit in postgresql's \nshared buffers, or at least the buffers can hold a fair chunk of it. So, \nthe number of buffers can be anywhere from a few thousand, up to 40000 or \n50000, sometimes even higher. But for most tuning you won't be needing to \nbe above 32768, which is 256 Megs of ram.\n\n> > What do you have in postgresql.conf? sort_mem, shared_buffers, etc???\n> \n> Here is what I have that is not set from the defaults.\n> \n> max_connections = 700\n> shared_buffers = 1500\n> sort_mem = 512\n> random_page_cost = 2\n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> \n> \n> > sort_mem can be a real killer if it lets the processes chew up too much \n> > memory. 
Once sort_mem gets high enough to make the machine start swapping \n> > it is doing more harm than good being that high, and should usually be \n> > lowered a fair bit.\n> \n> I dropped it down to 512 as you can see. Should I be running with all of the stats on? \n> I am no longer using pg_autovacuum. I seem to be getting better results with an \n> hourly Vacuum anaylse.\n\nSeeing as how top shows 2262156k kernel cache, you can afford to give up a \nfair bit more than 512k per sort. I generally run 8192 (8 meg) but I \ndon't handle 700 simos. Try running it a little higher, 2048, 4096, \netc... and see if that helps. Note you can change sort_mem and just do a \npg_ctl reload to make the change, without interrupting service, unlike \nshared_buffers, which requires a restart.\n\n> > How many disks in your RAID5? The more the better. Is it hardware with \n> > battery backed cache? If you write much to it it will help to have \n> > battery backed cache on board. If it's a megaraid / LSI board, get the \n> > megaraid2 driver, it's supposedly much faster.\n> \n> 4 disk IBM ServeRAID 5i with battery backed cache.\n\nDo you have the cache set to write back or write through? Write through \ncan be a performance killer. But I don't think your RAID is the problem, \nit looks to me like postgresql is doing a lot of I/O. When you run top, \ndo the postgresql processes show a lot of D status? That's usually waiting \non I/O\n\n> > You may find it hard to get postgresql to use any more memory than you \n> > have, as 32 bit apps can only address 2 gigs anyway, but the extra can \n> > certainly be used by the kernel as cache, which will help.\n> \n> Isn't that only true for each indivdual process space. Shouldn't each process have \n> access at most 2GB. If each backend is in it's own process space is this really a limit \n> since all of my queries are pretty small.\n\nRight, each process can use a big chunk, but shared_buffers will top out \nat ~2 gig. Most tests have shown a negative return on a shared_buffers \nsetting that big though, so the nicest thing about the extra memory is \nthat the kernel can use it to cache, AND you can increase your sort_mem to \nsomething larger. \n\n> I have been monitoring the system has it gets up to load. For most of the time the \n> sytem sits around 100-300 connections. Once it ramps up it ramps up hard. Top \n> starts cycling at 0 and 133% CPU for irq, softirq and iowait. The system stays at 700 \n> connections until users give up. I can watch bandwidth utilization drop to almost \n> nothing right before the DB catches up.\n\nYeah, it sounds to me like it's grinding down to a halt because postgresql \nisn't being able to hold enough data in memory for what it's doing.\n\nTry increasing shared_buffers to 5000 to as high as 30000 or 50000, a but \nat a time, as well as increasing sort_mem to 4096 or 8192. Note that \nincreasing shared_buffers will have a very positive effect on performance \nat first, then less effect, then slowly bring it back down as it goes too \nhigh, but isn't likely to starve the machine (64k buffers = 512 meg, \nyou've got the memory to spare, so no great loss).\n\nhowever, sort_mem will have a huge effect right up until it's big enough \nfor all your sorts to fit into memory. once that happens, increasing it \nwon't help or hurt UNTIL the machine gets enough load to make the sorts \nuse up all memory and send it into a swap storm, so be careful about \noverdoing sorts.\n\nI.e. 
shared_buffers = too big, no big deal, sort_mem too big = time bomb.\n\nwhat you want to do is get the machine to a point where the kernel cache \nis about twice the size or larger, than the shared_buffers. I'd start at \n10000 shared buffers and 4096 sort mem and see what happens. If you've \nstill got >2 gig kernel cache at that point, then increase both a bit (2x \nor so) and see how much kernel cache you've got. If your kernel cache \nstays above 1Gig, and the machine is running faster, you're doing pretty \ngood.\n\nyou may need to increase shmmax and friends to increase the shared_buffers \nthat high, but sort_mem requires no new kernel configuration.\n\n",
"msg_date": "Mon, 2 Feb 2004 16:45:42 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increasing number of PG connections."
},
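Scott's suggested starting point, written out as postgresql.conf settings; the figures are only the ones floated in the message and would need re-checking against kernel cache on the box itself:

    # first iteration for the 4 GB, 700-connection server
    shared_buffers = 10000   # ~80 MB; raise stepwise while kernel cache stays above 1-2 GB
    sort_mem = 4096          # KB per sort; picked up by a pg_ctl reload, no restart needed
    # raising shared_buffers much further may also require a larger kernel shmmax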
{
"msg_contents": "You could do high speed inserts with COPY command:\nhttp://developer.postgresql.org/docs/postgres/sql-copy.html\n\nCheck whenether your database adapter/client lib supports it (i guess it \ndoes).\n\nNote that it doesnt help very much if there are fk's/triggers's on the \ntarget table.\n\nBill Moran wrote:\n\n> I must have missed this post when it was made earlier. Pardon the \n> noise if\n> my suggestion has already been made.\n>\n> Unlike MySQL (and possibly other database servers) PostgreSQL is \n> faster when\n> inserting inside a transaction. Depending on the method in which you are\n> actually adding the records.\n>\n> In my own experience (generating a list of INSERT statements from a perl\n> script and using psql to execute them) the difference in performance was\n> incredibly dramatic when I added a \"BEGIN WORK\" at the beginning and\n> \"COMMIT WORK\" at the end.\n>\n> scott.marlowe wrote:\n>\n>> On Mon, 2 Feb 2004, Qing Zhao wrote:\n>>\n>>\n>>> I am new here. I have a question related to this in some way.\n>>>\n>>> Our web site needs to upload a large volume of data into Postgres at \n>>> a time. The performance deterioates as number of rows becomes \n>>> larger. When it reaches 2500 rows, it never come back to GUI. Since \n>>> the tests were run through GUI, my suspision is\n>>> that it might be caused by the way the application server talking to \n>>> Postgres server, the connections, etc.. What might be the factors \n>>> involved here? Does anyone know?\n>>\n>>\n>>\n>> Actually, I'm gonna go out on a limb here and assume two things:\n>>\n>> 1. you've got lotsa fk/pk relationships setup.\n>> 2. you're analyzing the table empty before loading it up.\n>>\n>> What happens in this instance is that the analyze on an empty, or \n>> nearly so, table, means that during the inserts, postgresql thinks \n>> you have only a few rows. At first, this is fine, as pgsql will seq \n>> scan the tables to make sure there is a proper key in both. As the \n>> number of rows increases, the planner needs to switch to index scans \n>> but doesn't, because it doesn't know that the number of rows is \n>> increasing.\n>>\n>> Fix: insert a few hundred rows, run analyze, check to see if the \n>> explain for inserts is showing index scans or not. If not, load a \n>> few more hundred rows, analyze, rinse, repeat.\n>>\n>> Also, look for fk/pk mismatches. I.e. an int4 field pointing to an \n>> int8 field. That's a performance killer, so if the pk/fk types don't \n>> match, see if you can change your field types to match and try again.\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>>\n>\n>\n\n\n",
"msg_date": "Tue, 03 Feb 2004 09:03:05 +0200",
"msg_from": "=?ISO-8859-1?Q?Erki_Kaldj=E4rv?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inserting large number of rows was: Re: Increasing"
},
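A short sketch of the COPY approach Erki points to, with an invented table and file path; \copy is the psql client-side variant for when the data file is not on the database host:

    COPY uploads (id, typeid, reposid) FROM '/tmp/uploads.dat';
    -- or, from psql on the client machine:
    \copy uploads from 'uploads.dat'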
{
"msg_contents": "On 2 Feb 2004 at 16:45, scott.marlowe wrote:\n\n> Do you have the cache set to write back or write through? Write through \n> can be a performance killer. But I don't think your RAID is the problem, \n> it looks to me like postgresql is doing a lot of I/O. When you run top, \n> do the postgresql processes show a lot of D status? That's usually waiting \n> on I/O\n> \n\nActually I'm not sure. It's setup with the factory defaults from IBM. Actually when I \nstart hitting the limit I was surprised to find only a few D status indicators. Most of the \nprocesses where sleeping.\n\n> what you want to do is get the machine to a point where the kernel cache \n> is about twice the size or larger, than the shared_buffers. I'd start at \n> 10000 shared buffers and 4096 sort mem and see what happens. If you've \n> still got >2 gig kernel cache at that point, then increase both a bit (2x \n> or so) and see how much kernel cache you've got. If your kernel cache \n> stays above 1Gig, and the machine is running faster, you're doing pretty \n> good.\n> \n\nI've set shared to 10000 and sort to 4096. I just have to wait until the afternoon \nbefore I see system load start to max out. Thanks for the tips I'm crossing my \nfingers.\n\n--\nKevin Barnard\n\n",
"msg_date": "Tue, 03 Feb 2004 09:34:01 -0600",
"msg_from": "\"Kevin Barnard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Increasing number of PG connections."
}
] |
[
{
"msg_contents": "Quick Question,\n\nThe full query listed in pg_stat_activity is getting truncated. Does\nanyone know how I can see the full query in progress?\n\n-- \nOrion Henry <[email protected]>",
"msg_date": "02 Feb 2004 11:27:42 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_activity"
}
] |
[
{
"msg_contents": "Hello\n\ni've read in the docs to use the proper indexes both types must match in\nthe where clause, to achive this the user can simply put a string into the\nside of the equation mark and pgsql will convert it automaticly. my\nquestion is, when I'm using PQexecParams, should I give all the values as\na string, and it will be converted, or I have to figure out the type of\nthe given field somehow? i'm writing an interface for my php extension(i'm\nnot statisfied by the boundled), so i cannot figure out the type of the\nfields in most cases. what should be done for the best performance in this\nsituation?\n\n\nBye,\n\nGergely Czuczy\nmailto: [email protected]\nPGP: http://phoemix.harmless.hu/phoemix.pgp\n\nThe point is, that geeks are not necessarily the outcasts\nsociety often believes they are. The fact is that society\nisn't cool enough to be included in our activities.\n\n",
"msg_date": "Tue, 3 Feb 2004 08:25:38 +0100 (CET)",
"msg_from": "Czuczy Gergely <[email protected]>",
"msg_from_op": true,
"msg_subject": "PQexecParams and types"
},
{
"msg_contents": "Czuczy Gergely <[email protected]> writes:\n> i've read in the docs to use the proper indexes both types must match in\n> the where clause, to achive this the user can simply put a string into the\n> side of the equation mark and pgsql will convert it automaticly. my\n> question is, when I'm using PQexecParams, should I give all the values as\n> a string, and it will be converted, or I have to figure out the type of\n> the given field somehow?\n\nYou should leave the parameter types unspecified. Their types will be\nresolved in much the same way that a quoted literal is handled.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Feb 2004 09:53:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PQexecParams and types "
},
{
"msg_contents": "hello\n\nto leave it unspecified what value should I set to the paramTypes array?\nand could you insert this answer to to docs, it could be useful\n\n\nBye,\n\nGergely Czuczy\nmailto: [email protected]\nPGP: http://phoemix.harmless.hu/phoemix.pgp\n\nThe point is, that geeks are not necessarily the outcasts\nsociety often believes they are. The fact is that society\nisn't cool enough to be included in our activities.\n\nOn Tue, 3 Feb 2004, Tom Lane wrote:\n\n> Czuczy Gergely <[email protected]> writes:\n> > i've read in the docs to use the proper indexes both types must match in\n> > the where clause, to achive this the user can simply put a string into the\n> > side of the equation mark and pgsql will convert it automaticly. my\n> > question is, when I'm using PQexecParams, should I give all the values as\n> > a string, and it will be converted, or I have to figure out the type of\n> > the given field somehow?\n>\n> You should leave the parameter types unspecified. Their types will be\n> resolved in much the same way that a quoted literal is handled.\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Tue, 3 Feb 2004 15:56:06 +0100 (CET)",
"msg_from": "Czuczy Gergely <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PQexecParams and types "
},
{
"msg_contents": "Czuczy Gergely <[email protected]> writes:\n> to leave it unspecified what value should I set to the paramTypes array?\n> and could you insert this answer to to docs, it could be useful\n\nIt is in the docs:\n\n paramTypes[] specifies, by OID, the data types to be assigned to the\n parameter symbols. If paramTypes is NULL, or any particular element\n in the array is zero, the server assigns a data type to the\n parameter symbol in the same way it would do for an untyped literal\n string.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Feb 2004 10:27:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PQexecParams and types "
}
] |
[
{
"msg_contents": "Hi,\n\nwe are trying to speed up a database which has about 3 GB of data. The \nserver has 8 GB RAM and we wonder how we can ensure that the whole DB \nis read into RAM. We hope that this will speed up some queries.\n\nregards David\n\n",
"msg_date": "Tue, 3 Feb 2004 13:54:17 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "cache whole data in RAM"
},
{
"msg_contents": "David Teran wrote:\n> we are trying to speed up a database which has about 3 GB of data. The \n> server has 8 GB RAM and we wonder how we can ensure that the whole DB is \n> read into RAM. We hope that this will speed up some queries.\n\nNeither the DBa or postgresql has to do anything about it. Usually OS caches the \ndata in it's buffer cache. That is certainly true for linux and freeBSD does \nthat. Most of the unices certainly do. To my knowledge linux is most aggresive \none at that..(Rather over aggressive..)\n\nMake sure that you size effective cache size correctly. It helps postgresql \nplanner at times..\n\n HTH\n\n Shridhar\n",
"msg_date": "Tue, 03 Feb 2004 18:56:06 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cache whole data in RAM"
},
{
"msg_contents": "On Tue, Feb 03, 2004 at 13:54:17 +0100,\n David Teran <[email protected]> wrote:\n> Hi,\n> \n> we are trying to speed up a database which has about 3 GB of data. The \n> server has 8 GB RAM and we wonder how we can ensure that the whole DB \n> is read into RAM. We hope that this will speed up some queries.\n\nThe OS should do this on its own. What you don't want to do is set\nshared_buffers (in postgresql.conf) too high. From what I remember\nfrom past discussions it should be something between about 1000 and 10000.\n\nsort_mem is trickier since that memory is per sort and a single query\ncan potentially generate multiple parallel sorts. You will have to\nmake some guesses for this based on what you think the number of concurrent\nsorts will be when the system is stressed and not use too much memory.\nYou might also find that after a point you don't get a lot of benefit\nfrom increasing sort_mem.\n\nYou should set effective_cache_size pretty large. Probably you want to\nsubtract the space used by shared_buffers and sort_mem (* times estimated\nparallel sorts) and what you think is reasonable overhead for other\nprocesses from the 8GB of memory.\n\nSince your DB's disk blocks will almost certainly all be in buffer cache,\nyou are going to want to set random_page_cost to be pretty close to 1.\n",
"msg_date": "Tue, 3 Feb 2004 07:27:39 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cache whole data in RAM"
},
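A back-of-envelope postgresql.conf sketch following Bruno's sizing logic for an 8 GB box with a 3 GB database (the values are illustrative, not from the thread):

    shared_buffers = 5000            # ~40 MB; keep modest and let the OS cache do the caching
    sort_mem = 8192                  # KB per sort; budget for concurrent sorts before raising
    effective_cache_size = 786432    # ~6 GB expressed in 8 kB pages, after other overhead
    random_page_cost = 1.5           # most blocks will already be in RAM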
{
"msg_contents": "Put it on a RAM disk.\n\nchris\n\n\nOn Tue, 2004-02-03 at 07:54, David Teran wrote:\n> Hi,\n> \n> we are trying to speed up a database which has about 3 GB of data. The \n> server has 8 GB RAM and we wonder how we can ensure that the whole DB \n> is read into RAM. We hope that this will speed up some queries.\n> \n> regards David\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n",
"msg_date": "Tue, 03 Feb 2004 08:32:07 -0500",
"msg_from": "Chris Trawick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cache whole data in RAM"
},
{
"msg_contents": "David Teran wrote:\n> Hi,\n> \n> we are trying to speed up a database which has about 3 GB of data. The \n> server has 8 GB RAM and we wonder how we can ensure that the whole DB is \n> read into RAM. We hope that this will speed up some queries.\n> \n> regards David\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\nUpon bootup, automatically run \"SELECT * FROM xyz\" on every table in \nyour database.\n\n",
"msg_date": "Wed, 04 Feb 2004 14:44:42 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cache whole data in RAM"
}
] |
[
{
"msg_contents": "Hello everyone,\n\nI am doing a massive database conversion from MySQL to Postgresql for a\ncompany I am working for. This has a few quirks to it that I haven't\nbeen able to nail down the answers I need from reading and searching\nthrough previous list info.\n\nFor starters, I am moving roughly 50 seperate databases which each one\nrepresents one of our clients and is roughly 500 megs to 3 gigs in size.\n Currently we are using the MySQL replication, and so I am looking at\nMammoths replicator for this one. However I have seen it only allows on\nDB to be replicated at a time. With the size of each single db, I don't\nknow how I could put them all together under one roof, and if I was\ngoing to, what are the maximums that Postgres can handle for tables in\none db? We track over 2 million new points of data (records) a day, and\nare moving to 5 million in the next year.\n\nSecond what about the physical database size, what are the limits there?\n I have seen that it was 4 gig on Linux from a 2000 message, but what\nabout now? Have we found way's past that? \n\nThanks in advance, will give more detail - just looking for some open\ndirections and maybe some kicks to fuel my thought in other areas.\n\nThanks,\n\n-- \[email protected]\n",
"msg_date": "Tue, 03 Feb 2004 11:42:59 -0500",
"msg_from": "\"Kevin Carpenter\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Database conversion woes..."
},
{
"msg_contents": "On Tue, 03 Feb 2004 11:42:59 -0500\n\"Kevin Carpenter\" <[email protected]> wrote:\n\n> For starters, I am moving roughly 50 seperate databases which each one\n> represents one of our clients and is roughly 500 megs to 3 gigs in\n> size.\n> Currently we are using the MySQL replication, and so I am looking at\n> Mammoths replicator for this one. However I have seen it only allows\n> on DB to be replicated at a time. With the size of each single db, I\n\nNot knowing too much about mammoths, but how the others work, you should\nbe able to run a replicator for each db. (Or hack a shell script up to\nmake it run the replicator for each db.. either way each db will be\nreplicated independant of the others)\n\n> don't know how I could put them all together under one roof, and if I\n> was going to, what are the maximums that Postgres can handle for\n> tables in one db? We track over 2 million new points of data\n> (records) a day, and are moving to 5 million in the next year.\n> \n\n From the docs:\n\nMaximum size for a database \tunlimited (4 TB databases exist)\nMaximum size for a table \t16 TB on all operating systems\nMaximum size for a row \t1.6 TB\nMaximum size for a field \t1 GB\nMaximum number of rows in a table \tunlimited\nMaximum number of columns in a table \t250 - 1600 depending on column\ntypes Maximum number of indexes on a table \tunlimited\n \n...\n\nMy largest PG db is 50GB. \n\nMy busiest PG db runs about 50 update|delete|insert's / second\n(sustained throughout the day. It bursts up to 150 now and then). And\nwe're doing about 40 selects / second. And the machine it is running on\nis typically 95% idle. (Quad 2ghz xeon)\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n",
"msg_date": "Tue, 3 Feb 2004 12:00:34 -0500",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database conversion woes..."
},
{
"msg_contents": "On Tue, 3 Feb 2004, Kevin Carpenter wrote:\n\n> Hello everyone,\n> \n> I am doing a massive database conversion from MySQL to Postgresql for a\n> company I am working for. This has a few quirks to it that I haven't\n> been able to nail down the answers I need from reading and searching\n> through previous list info.\n> \n> For starters, I am moving roughly 50 seperate databases which each one\n> represents one of our clients and is roughly 500 megs to 3 gigs in size.\n> Currently we are using the MySQL replication, and so I am looking at\n> Mammoths replicator for this one. However I have seen it only allows on\n> DB to be replicated at a time.\n\nLook into importing all those seperate databases into seperate schemas in \none postgresql database.\n\n> With the size of each single db, I don't\n> know how I could put them all together under one roof,\n\nThere's no functional difference to postgresql if you have 1 huge database \nor 50 smaller ones that add up to the same size.\n\n> and if I was\n> going to, what are the maximums that Postgres can handle for tables in\n> one db?\n\nNone. also see:\n\nhttp://www.postgresql.org/docs/faqs/FAQ.html#4.5\n\n> We track over 2 million new points of data (records) a day, and\n> are moving to 5 million in the next year.\n\nThat's quite a bit. Postgresql can handle it.\n\n> Second what about the physical database size, what are the limits there?\n\nnone.\n\n> I have seen that it was 4 gig on Linux from a 2000 message, but what\n> about now? Have we found way's past that? \n\nIt has never been 4 gig. It was once, a long time ago, 2 gig for a table \nI believe. That was fixed years ago.\n\n> Thanks in advance, will give more detail - just looking for some open\n> directions and maybe some kicks to fuel my thought in other areas.\n\nImport in bulk, either using copy or wrap a few thousand inserts inside \nbegin;end; pairs.\n\n",
"msg_date": "Tue, 3 Feb 2004 10:30:38 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database conversion woes..."
},
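The schema-per-client layout suggested above might look like this (client names invented for illustration):

    CREATE SCHEMA client_acme;
    CREATE SCHEMA client_widgetco;
    -- point a session at one client's data
    SET search_path TO client_acme, public;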
{
"msg_contents": "Kevin,\n\n> With the size of each single db, I don't\n> know how I could put them all together under one roof, and if I was\n> going to, what are the maximums that Postgres can handle for tables in\n> one db? We track over 2 million new points of data (records) a day, and\n> are moving to 5 million in the next year.\n\nUse schemas per Scott's suggestion. This will also ease the sharing of data \nbetween \"databases\".\n\n> Second what about the physical database size, what are the limits there?\n> I have seen that it was 4 gig on Linux from a 2000 message, but what\n> about now? Have we found way's past that?\n\nThe biggest database I've ever worked with was 175G, but I've seen reports of \n2TB databases out there. We don't know what the limit is; so far it's always \nbeen hardware.\n\n> Thanks in advance, will give more detail - just looking for some open\n> directions and maybe some kicks to fuel my thought in other areas.\n\nCome back to this list for help tuning your system! You'll need it, you've \ngot an unusual set-up.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 3 Feb 2004 11:01:57 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database conversion woes..."
},
{
"msg_contents": "On Tuesday 03 February 2004 16:42, Kevin Carpenter wrote:\n>\n> Thanks in advance, will give more detail - just looking for some open\n> directions and maybe some kicks to fuel my thought in other areas.\n\nI've taken to doing a lot of my data manipulation (version conversions etc) in \nPG even if the final resting place is MySQL.\n\nIt's generally not too difficult to transfer data but you will have problems \nwith MySQL's more \"relaxed attitude\" to data types (things like all-zero \ntimestamps). I tend to write a script to tidy the data before export, and \nrepeatedly restore from backup until the script corrects all problems.Not \nsure how convenient that'll be with dozens of gigs of data. Might be \npractical to start with the smaller databases, let your script grow in \ncapabilities before importing the larger ones.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 3 Feb 2004 19:29:52 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database conversion woes..."
},
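A small example of the kind of tidy-up rule Richard describes, mapping MySQL's all-zero dates to NULL in a staging table before the final load (table and column names are made up):

    -- staging table imported with the date column as plain text
    UPDATE staging_orders SET order_date = NULL WHERE order_date = '0000-00-00';
    INSERT INTO orders (id, order_date)
      SELECT id, order_date::date FROM staging_orders;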
{
"msg_contents": "[email protected] (\"Kevin Carpenter\") writes:\n> I am doing a massive database conversion from MySQL to Postgresql for a\n> company I am working for. This has a few quirks to it that I haven't\n> been able to nail down the answers I need from reading and searching\n> through previous list info.\n>\n> For starters, I am moving roughly 50 seperate databases which each\n> one represents one of our clients and is roughly 500 megs to 3 gigs\n> in size. Currently we are using the MySQL replication, and so I am\n> looking at Mammoths replicator for this one. However I have seen it\n> only allows on DB to be replicated at a time. With the size of each\n> single db, I don't know how I could put them all together under one\n> roof, and if I was going to, what are the maximums that Postgres can\n> handle for tables in one db? We track over 2 million new points of\n> data (records) a day, and are moving to 5 million in the next year.\n\nI'll be evasive about replication, because the answers are pretty\npainful :-(, but as for the rest of it, nothing about this sounds\nchallenging.\n\nThere is room for debate here as to whether you should have:\n a) One postmaster and many database instances, \n 2) One postmaster, one (or a few) database instances, and do the\n client 'split' via schemas, or\n iii) Several postmasters, many database instances.\n\nReplication will tend to work best with scenario 2), which minimizes\nthe number of connections that are needed to manage replication;\nthat's definitely a factor worth considering.\n\nIt is also possible for it to be worthwhile to spread vastly differing\nkinds of activity across different backends so that they can have\nseparate buffer caches. If all the activity is shared across one\npostmaster, that means it is all shared across one buffer cache, and\nthere are pathological situations that are occasionally observed in\npractice where one process will be \"trashing\" the shared cache,\nthereby injuring performance for all other processes using that back\nend. In such a case, it may be best to give the \"ill-behaved\" process\nits own database instance with a small cache that it can thrash on\nwithout inconveniencing others.\n\nJan Wieck is working on some improvements for buffer management in 7.5\nthat may improve the situation vis-a-vis buffering, but that is\ncertainly not something ready to deploy in production just yet.\n\n> Second what about the physical database size, what are the limits\n> there? I have seen that it was 4 gig on Linux from a 2000 message,\n> but what about now? Have we found way's past that?\n\nThere's NO problem with having enormous databases now; each table is\nrepresented as one or more files (if you break a size barrier, oft\nconfigured as 1GB, it creates an \"extent\" and extends into another\nfile), and for there to be problems with this, the problems would be\n_really crippling_ OS problems.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"cbbrowne.com\")\nhttp://www3.sympatico.ca/cbbrowne/linuxxian.html\n\"We come to bury DOS, not to praise it.\"\n-- Paul Vojta <[email protected]>, paraphrasing a quote of\nShakespeare\n",
"msg_date": "Tue, 03 Feb 2004 14:59:04 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database conversion woes..."
},
{
"msg_contents": "First just wanted to say thank you all for the quick and helpful \nanswers. With all the input I know I am on the right track. With that \nin mind I created a perl script to do my migrations and to do it based \non moving from a db name to a schema name. I had done alot of the \nreading on converting based on the miss match of data types that MySQL \nlikes to use. I must say it is VERY nice having a intelligent system \nthat say won't let a date of '0000-00-00' be entered. Luckily I didn't \nhave to deal with any enumerations.\n\nSo the conversion goes on. I will definitely be back and forth in here \nas I get the new queries written and start migrating all I can back into \nthe pg backend using plpgsql or c for the stored procedures where \nrequired. The mammoth replicator has been working well. I had tried \nthe pgsql-r and had limited success with it, and dbmirror was just \ntaking to long having to do 4 db transactions just to mirror one \ncommand. I have eserv but was never really a java kind of guy.\n\nAlright then - back to my code. Again thanks for the help and info.\n\nKevin\n\n",
"msg_date": "Tue, 03 Feb 2004 15:27:00 -0700",
"msg_from": "Kevin Carpenter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database conversion woes..."
}
] |
[
{
"msg_contents": "Hello,\n \nI would like to know whether there are any significant performance\nadvantages of compiling (say, 7.4) on your platform (being RH7.3, 8, and\n9.0, and Fedora especially) versus getting the relevant binaries (rpm)\nfrom the postgresql site? Hardware is Intel XEON (various speeds, upto\n2.8GHz, single/dual/quad configuration).\n \nThankyou,\nAnjan\n \n \n \n \n************************************************************************\n** \n\nThis e-mail and any files transmitted with it are intended for the use\nof the addressee(s) only and may be confidential and covered by the\nattorney/client and other privileges. If you received this e-mail in\nerror, please notify the sender; do not disclose, copy, distribute, or\ntake any action in reliance on the contents of this information; and\ndelete it from your system. Any other use of this e-mail is prohibited.\n\n \n\nMessage\n\n\n\nHello,\n \nI would like to know \nwhether there are any significant performance advantages of compiling (say, 7.4) \non your platform (being RH7.3, 8, and 9.0, and Fedora especially) versus getting \nthe relevant binaries (rpm) from the postgresql site? Hardware is Intel XEON \n(various speeds, upto 2.8GHz, single/dual/quad \nconfiguration).\n \nThankyou,\nAnjan\n \n \n \n \n\n************************************************************************** \n\nThis e-mail and any files transmitted with it are intended for the use of the \naddressee(s) only and may be confidential and covered by the attorney/client and \nother privileges. If you received this e-mail in error, please notify the \nsender; do not disclose, copy, distribute, or take any action in reliance on the \ncontents of this information; and delete it from your system. Any other use of \nthis e-mail is prohibited.",
"msg_date": "Tue, 3 Feb 2004 15:58:22 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Compile Vs RPMs"
},
{
"msg_contents": "Anjan Dave wrote:\n> Hello,\n> \n> I would like to know whether there are any significant performance \n> advantages of compiling (say, 7.4) on your platform (being RH7.3, 8, and \n> 9.0, and Fedora especially) versus getting the relevant binaries (rpm) \n> from the postgresql site? Hardware is Intel XEON (various speeds, upto \n> 2.8GHz, single/dual/quad configuration).\n\n\"significant\" is a relative term. 1% can be significant under the proper\ncircumstances ...\n\nhttp://www.potentialtech.com/wmoran/source.php\n\nThe information isn't specific to Postgres, and the results aren't really\nconclusive, but hopefully it helps.\n\nI really think that if someone would actually test this with Postgres and\npost the results, it would be very beneficial to the community. I have\nit on my list of things to do, but it's unlikely to get done in the first\nquarter the way things are going.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Tue, 03 Feb 2004 17:43:08 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compile Vs RPMs"
},
{
"msg_contents": "\nOn 03/02/2004 20:58 Anjan Dave wrote:\n> Hello,\n> \n> I would like to know whether there are any significant performance\n> advantages of compiling (say, 7.4) on your platform (being RH7.3, 8, and\n> 9.0, and Fedora especially) versus getting the relevant binaries (rpm)\n> from the postgresql site? Hardware is Intel XEON (various speeds, upto\n> 2.8GHz, single/dual/quad configuration).\n\nVery unlikely I would have thought. Databases tend to speed-limited by I-O \nperformance and the amount of RAM available for caching etc. Having said \nthat, I've only got one machine (the laptop on which I'm writing this \nemail) which has still got its rpm binaries. My other machines have all \nbeen upgraded from source.\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Tue, 3 Feb 2004 23:01:47 +0000",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compile Vs RPMs"
},
{
"msg_contents": "[email protected] (\"Anjan Dave\") writes:\n> I would like to know whether there are any significant performance\n> advantages of compiling (say, 7.4) on your platform (being RH7.3, 8,\n> and 9.0, and Fedora especially) versus getting the relevant binaries\n> (rpm) from the postgresql site? Hardware is Intel XEON (various\n> speeds, upto 2.8GHz, single/dual/quad configuration).\n\nSome Linux distribution makers make grand claims of such advantages,\nbut it is not evident that this is much better than superstition.\n\nYou are certainly NOT going to see GCC generating MMX code\nautomagically that would lead to PostgreSQL becoming 8 times faster.\n\nIndeed, in database work, it is quite likely that you will find things\nto be largely I/O bound, with CPU usage being a very much secondary\nfactor.\n\nI did some relative benchmarking between compiling PostgreSQL on GCC\nversus IBM's PPC compilers a while back; did not see differences that\ncould be _clearly_ discerned as separate from \"observational noise.\"\n\nYou should expect find that adding RAM, or adding a better disk\ncontroller would provide discernable differences in performance. It\nis much less clear that custom compiling will have any substantial\neffect on I/O-bound processing.\n-- \noutput = reverse(\"ofni.smrytrebil\" \"@\" \"enworbbc\")\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 646 3304 x124 (land)\n",
"msg_date": "Tue, 03 Feb 2004 18:43:08 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compile Vs RPMs"
},
{
"msg_contents": "On Tue, 3 Feb 2004, Christopher Browne wrote:\n\n> [email protected] (\"Anjan Dave\") writes:\n> > I would like to know whether there are any significant performance\n> > advantages of compiling (say, 7.4) on your platform (being RH7.3, 8,\n> > and 9.0, and Fedora especially) versus getting the relevant binaries\n> > (rpm) from the postgresql site? Hardware is Intel XEON (various\n> > speeds, upto 2.8GHz, single/dual/quad configuration).\n> \n> Some Linux distribution makers make grand claims of such advantages,\n> but it is not evident that this is much better than superstition.\n> \n> You are certainly NOT going to see GCC generating MMX code\n> automagically that would lead to PostgreSQL becoming 8 times faster.\n> \n> Indeed, in database work, it is quite likely that you will find things\n> to be largely I/O bound, with CPU usage being a very much secondary\n> factor.\n> \n> I did some relative benchmarking between compiling PostgreSQL on GCC\n> versus IBM's PPC compilers a while back; did not see differences that\n> could be _clearly_ discerned as separate from \"observational noise.\"\n> \n> You should expect find that adding RAM, or adding a better disk\n> controller would provide discernable differences in performance. It\n> is much less clear that custom compiling will have any substantial\n> effect on I/O-bound processing.\n\nI would add that the primary reason for compiling versus using RPMs is to \ntake advantage of some compile time option having to do with block size, \nor using a patch to try and test a system that has found a new corner case \nwhere postgresql is having issues performing well, like the vacuum page \ndelay patch for fixing the issue with disk bandwidth saturation. If \nyou've got a machine grinding to its knees under certain loads, and have a \ntest box to test it on, and the test box shows better performance, it \nmight be better to patch the live server on the off hours if it will keep \nthe thing up and running during the day. \n\nIn that way, performance differences are very real, but because you are \ndoing something you can't do with factory rpms. Of course, building \ncustom rpms isn't that hard to do, so if you had a lot of boxes that \nneeded a patched flavor of postgresql, you could still run from rpms and \nhave the custom patch. \n\n\n\n",
"msg_date": "Wed, 4 Feb 2004 09:24:57 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Compile Vs RPMs"
}
] |
[
{
"msg_contents": "First just wanted to say thank you all for the quick and helpful \nanswers. With all the input I know I am on the right track. With that \nin mind I created a perl script to do my migrations and to do it based \non moving from a db name to a schema name. I had done alot of the \nreading on converting based on the miss match of data types that MySQL \nlikes to use. I must say it is VERY nice having a intelligent system \nthat say won't let a date of '0000-00-00' be entered. Luckily I didn't \nhave to deal with any enumerations.\n\nSo the conversion goes on. I will definitely be back and forth in here \nas I get the new queries written and start migrating all I can back into \nthe pg backend using plpgsql or c for the stored procedures where \nrequired. The mammoth replicator has been working well. I had tried \nthe pgsql-r and had limited success with it, and dbmirror was just \ntaking to long having to do 4 db transactions just to mirror one \ncommand. I have eserv but was never really a java kind of guy.\n\nAlright then - back to my code. Again thanks for the help and info.\n\nKevin\n",
"msg_date": "Tue, 03 Feb 2004 15:29:41 -0700",
"msg_from": "Kevin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Database conversion woes..."
},
{
"msg_contents": "On Tuesday 03 February 2004 22:29, Kevin wrote:\n> The mammoth replicator has been working well. I had tried\n> the pgsql-r and had limited success with it, and dbmirror was just\n> taking to long having to do 4 db transactions just to mirror one\n> command. I have eserv but was never really a java kind of guy.\n\nWhen this is over and you've got the time, I don't suppose you could put \ntogether a few hundred words describing your experiences with the Mammoth \nreplicator - there are a couple of places they could be posted.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 3 Feb 2004 23:34:03 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database conversion woes..."
}
] |
[
{
"msg_contents": "We are suddenly getting slow queries on a particular table.\nExplain shows a sequential scan. We have \"vacuum analyze\" ed\nthe table.\n\nAny hints?\n\nMany TIA!\nMark\n\n\ntestdb=# \\d bigtable\n Table \"public.bigtable\"\n Column | Type | Modifiers\n---------+---------+-----------\n id | bigint | not null\n typeid | integer | not null\n reposid | integer | not null\nIndexes: bigtable_id_key unique btree (id)\nForeign Key constraints: type FOREIGN KEY (typeid) REFERENCES types(typeid) ON UPDATE NO ACTION ON DELETE NO ACTION,\n repository FOREIGN KEY (reposid) REFERENCES repositories(reposid) ON UPDATE NO ACTION ON DELETE NO ACTION\n\ntestdb=# select count(1) from bigtable;\n count\n---------\n 3056831\n(1 row)\n\ntestdb=# explain select * from bigtable where id = 123;\n QUERY PLAN\n-----------------------------------------------------------\n Seq Scan on bigtable (cost=0.00..60000.00 rows=1 width=16)\n Filter: (id = 123)\n(2 rows)\n\ntestdb=# vacuum verbose analyze bigtable;\nINFO: --Relation public.bigtable--\nINFO: Pages 19200: Changed 0, Empty 0; Tup 3056831: Vac 0, Keep 0, UnUsed 207009.\n Total CPU 1.03s/0.24u sec elapsed 9.32 sec.\nINFO: Analyzing public.bigtable\nVACUUM\ntestdb=# explain select * from bigtable where id = 123;\n QUERY PLAN\n-----------------------------------------------------------\n Seq Scan on bigtable (cost=0.00..57410.39 rows=1 width=16)\n Filter: (id = 123)\n(2 rows)\n\n-- \nMark Harrison\nPixar Animation Studios\n\n",
"msg_date": "Wed, 04 Feb 2004 14:55:15 -0800",
"msg_from": "Mark Harrison <[email protected]>",
"msg_from_op": true,
"msg_subject": "select is not using index?"
},
{
"msg_contents": "On Wed, 2004-02-04 at 14:55, Mark Harrison wrote:\n> testdb=# \\d bigtable\n> Table \"public.bigtable\"\n> Column | Type | Modifiers\n> ---------+---------+-----------\n> id | bigint | not null\n> typeid | integer | not null\n> reposid | integer | not null\n> Indexes: bigtable_id_key unique btree (id)\n\n> testdb=# explain select * from bigtable where id = 123;\n\nYour column is a bigint but 123 defaults to type int. Indexes aren't\nused when there's a type mismatch. Use an explicit cast or quote it:\n\n select * from bigtable where id = 123::bigint;\n\nOr\n\n select * from bigtable where id = '123';\n\nCorey\n\n\n",
"msg_date": "Wed, 04 Feb 2004 15:22:23 -0800",
"msg_from": "Corey Edwards <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select is not using index?"
},
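Once the literal matches the column type, the plan from the earlier EXPLAIN should switch from the seq scan to the unique index — roughly like this sketch (exact costs will differ):

    EXPLAIN SELECT * FROM bigtable WHERE id = 123::bigint;
    --  Index Scan using bigtable_id_key on bigtable  (cost=0.00..3.01 rows=1 width=16)
    --    Index Cond: (id = 123::bigint)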
{
"msg_contents": "Corey Edwards wrote:\n\n> Your column is a bigint but 123 defaults to type int. Indexes aren't\n> used when there's a type mismatch. Use an explicit cast or quote it:\n> \n> select * from bigtable where id = 123::bigint;\n> \n> Or\n> \n> select * from bigtable where id = '123';\n\nThanks Corey, both of these do exactly what I need...\n\nCheers,\nMark\n\n-- \nMark Harrison\nPixar Animation Studios\n\n",
"msg_date": "Thu, 05 Feb 2004 09:31:24 -0800",
"msg_from": "Mark Harrison <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select is not using index?"
}
] |
[
{
"msg_contents": "I've done some testing of 7.3.4 vs 7.4.1 and found 7.4.1 to be 20%-30%\nslower than 7.3.4. Is this common knowledge or am I just unlucky with\nmy query/data selection?\n\nThings of note that might matter: the machine is a dual Opteron 1.4GHz\nrunning Fedora Core 1 Test 1 for X86_64. The 7.3.4 was from the Fedora\ndistro and the 7.4.1 was the PGDG package. The database is 3.5 Gigs\nwith 10 millions rows and the machine had 1 Gig or ram.\n\nOh... as a side note I'm happy to announce that the 2.6 Linux kernel has\nmore than DOUBLED the speed of all my Postgres queries over the 2.4. =)\n\n\n\n-- \nOrion Henry <[email protected]>",
"msg_date": "04 Feb 2004 21:16:37 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.3 vs 7.4 performance"
},
{
"msg_contents": "Orion,\n\n> I've done some testing of 7.3.4 vs 7.4.1 and found 7.4.1 to be 20%-30%\n> slower than 7.3.4. Is this common knowledge or am I just unlucky with\n> my query/data selection?\n\nNo, it's not common knowledge. It should be the other way around. Perhaps \nit's the queries you picked? Even so ..... feel free to post individual \nEXPLAIN ANALYZEs to the list.\n\n> Things of note that might matter: the machine is a dual Opteron 1.4GHz\n> running Fedora Core 1 Test 1 for X86_64. The 7.3.4 was from the Fedora\n> distro and the 7.4.1 was the PGDG package. The database is 3.5 Gigs\n> with 10 millions rows and the machine had 1 Gig or ram.\n\nI'm wondering if we need specific compile-time switches for Opteron. I know \nwe got Opteron code tweaks in the last version, but am not sure if a --with \nis required to activate them.\n\n> Oh... as a side note I'm happy to announce that the 2.6 Linux kernel has\n> more than DOUBLED the speed of all my Postgres queries over the 2.4. =)\n\nKeen. Waiting for upgrades ....\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 4 Feb 2004 21:27:59 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "Oops! [email protected] (Orion Henry) was seen spray-painting on a wall:\n> I've done some testing of 7.3.4 vs 7.4.1 and found 7.4.1 to be 20%-30%\n> slower than 7.3.4. Is this common knowledge or am I just unlucky with\n> my query/data selection?\n\nThat seems unusual; the opposite seems more typical in view of there\nbeing some substantial improvements to the query optimizer.\n\nHave you tried doing EXPLAIN ANALYZE on the queries on both sides?\nThere would doubtless be interest in figuring out what is breaking\ndown...\n\n> Things of note that might matter: the machine is a dual Opteron\n> 1.4GHz running Fedora Core 1 Test 1 for X86_64. The 7.3.4 was from\n> the Fedora distro and the 7.4.1 was the PGDG package. The database\n> is 3.5 Gigs with 10 millions rows and the machine had 1 Gig or ram.\n>\n> Oh... as a side note I'm happy to announce that the 2.6 Linux kernel\n> has more than DOUBLED the speed of all my Postgres queries over the\n> 2.4. =)\n\nI did some heavy-transaction-oriented tests recently on somewhat\nheftier quad-Xeon hardware, and found little difference between 2.4\nand 2.6, and a small-but-quite-repeatable advantage with FreeBSD 4.9.\nNow, I'm quite sure my load was rather different from yours, but I\nfind the claim of doubling of speed rather surprising.\n-- \n(format nil \"~S@~S\" \"aa454\" \"freenet.carleton.ca\")\nhttp://www.ntlug.org/~cbbrowne/spiritual.html\nFailure is not an option. It comes bundled with your Microsoft product.\n",
"msg_date": "Thu, 05 Feb 2004 00:32:08 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "On Thu, 2004-02-05 at 00:32, Christopher Browne wrote:\n> > Things of note that might matter: the machine is a dual Opteron\n> > 1.4GHz running Fedora Core 1 Test 1 for X86_64. The 7.3.4 was from\n> > the Fedora distro and the 7.4.1 was the PGDG package. The database\n> > is 3.5 Gigs with 10 millions rows and the machine had 1 Gig or ram.\n> >\n> > Oh... as a side note I'm happy to announce that the 2.6 Linux kernel\n> > has more than DOUBLED the speed of all my Postgres queries over the\n> > 2.4. =)\n> \n> I did some heavy-transaction-oriented tests recently on somewhat\n> heftier quad-Xeon hardware, and found little difference between 2.4\n> and 2.6, and a small-but-quite-repeatable advantage with FreeBSD 4.9.\n> Now, I'm quite sure my load was rather different from yours, but I\n> find the claim of doubling of speed rather surprising.\n\nI don't. I got a similar boost out of 2.6 when dealing with extreme\nconcurrency. Then again, I also got a similar boost out of 7.4. The\ntwo together tickled my bank account. ;)\n\nOne question though... It sounds like your 7.3 binaries are 64-bit and\nyour 7.4 binaries are 32-bit. Have you tried grabbing the SRPM for 7.4\nand recompiling it for X86_64?\n\nchris\n",
"msg_date": "Thu, 05 Feb 2004 01:50:31 -0500",
"msg_from": "Chris Trawick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "> I did some heavy-transaction-oriented tests recently on somewhat\n> heftier quad-Xeon hardware, and found little difference between 2.4\n> and 2.6, and a small-but-quite-repeatable advantage with FreeBSD 4.9.\n> Now, I'm quite sure my load was rather different from yours, but I\n> find the claim of doubling of speed rather surprising.\n> --\n\nWhat's the type of File System you used in the Linux? I am wanting to know\nwhich is the operational system better for PostgreSQL: FreeBSD versus Linux\n2.6.\n\nThanks.\n\n[]'s\nCarlos Eduardo Smanioto (Brazil)\n\n----- Original Message -----\nFrom: \"Christopher Browne\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, February 05, 2004 3:32 AM\nSubject: Re: [PERFORM] 7.3 vs 7.4 performance\n\n\n> Oops! [email protected] (Orion Henry) was seen spray-painting on a\nwall:\n> > I've done some testing of 7.3.4 vs 7.4.1 and found 7.4.1 to be 20%-30%\n> > slower than 7.3.4. Is this common knowledge or am I just unlucky with\n> > my query/data selection?\n>\n> That seems unusual; the opposite seems more typical in view of there\n> being some substantial improvements to the query optimizer.\n>\n> Have you tried doing EXPLAIN ANALYZE on the queries on both sides?\n> There would doubtless be interest in figuring out what is breaking\n> down...\n>\n> > Things of note that might matter: the machine is a dual Opteron\n> > 1.4GHz running Fedora Core 1 Test 1 for X86_64. The 7.3.4 was from\n> > the Fedora distro and the 7.4.1 was the PGDG package. The database\n> > is 3.5 Gigs with 10 millions rows and the machine had 1 Gig or ram.\n> >\n> > Oh... as a side note I'm happy to announce that the 2.6 Linux kernel\n> > has more than DOUBLED the speed of all my Postgres queries over the\n> > 2.4. =)\n>\n> I did some heavy-transaction-oriented tests recently on somewhat\n> heftier quad-Xeon hardware, and found little difference between 2.4\n> and 2.6, and a small-but-quite-repeatable advantage with FreeBSD 4.9.\n> Now, I'm quite sure my load was rather different from yours, but I\n> find the claim of doubling of speed rather surprising.\n> --\n> (format nil \"~S@~S\" \"aa454\" \"freenet.carleton.ca\")\n> http://www.ntlug.org/~cbbrowne/spiritual.html\n> Failure is not an option. It comes bundled with your Microsoft product.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n",
"msg_date": "Thu, 5 Feb 2004 09:14:17 -0200",
"msg_from": "\"Carlos Eduardo Smanioto\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "In an attempt to throw the authorities off his trail, [email protected] (\"Carlos Eduardo Smanioto\") transmitted:\n>> I did some heavy-transaction-oriented tests recently on somewhat\n>> heftier quad-Xeon hardware, and found little difference between 2.4\n>> and 2.6, and a small-but-quite-repeatable advantage with FreeBSD\n>> 4.9. Now, I'm quite sure my load was rather different from yours,\n>> but I find the claim of doubling of speed rather surprising.\n>\n> What's the type of File System you used in the Linux? I am wanting\n> to know which is the operational system better for PostgreSQL:\n> FreeBSD versus Linux 2.6.\n\nOn the Linux box in question, I was using JFS, which has had the mixed\nreviews, lately, that on the one hand, it _appears_ to be a tad faster\nthan all the others, but that has been, on the other hand, associated\nwith systems hanging up and crashing, under load.\n\nThe latter bit is a _really_ big caveat. On that particular machine,\nI have a nicely repeatable \"test case\" where I can do a particular set\nof \"system load\" that consistently takes the system down, to the point\nof having to hit the \"big red button.\" If I could point to a clear\nreason why it happens, I'd be a much happier camper. As it stands, it\nis a bit nebulous whether the problem is:\n a) Hardware drivers,\n b) Flakey hardware (which Linux 2.6.1 copes with a lot better than\n 2.4!),\n c) Flakey 2.4 kernel,\n d) Problem with JFS,\n e) Something else not yet identified as a plausible cause.\n\nIf I could say, \"Oh, it's an identified bug in the Frobozz RAID\ncontroller drivers, and was fixed in 2.6.0-pre-17\", that would help\nallay the suspicion that the problem could be any of the above.\n-- \nlet name=\"aa454\" and tld=\"freenet.carleton.ca\" in name ^ \"@\" ^ tld;;\nhttp://www.ntlug.org/~cbbrowne/\n\"Another result of the tyranny of Pascal is that beginners don't use\nfunction pointers.\" --Rob Pike\n",
"msg_date": "Thu, 05 Feb 2004 08:30:55 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> I'm wondering if we need specific compile-time switches for Opteron. I know\n> we got Opteron code tweaks in the last version,\n\nNot in 7.4. There is some marginal hacking in the spinlock code in CVS\ntip for multi-CPU i386 and x86_64 (viz, add a PAUSE instruction inside\nthe wait loop) but I'm not sure that will have any significance in real\nlife.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 05 Feb 2004 10:07:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance "
},
{
"msg_contents": "> \n> One question though... It sounds like your 7.3 binaries are 64-bit and\n> your 7.4 binaries are 32-bit. Have you tried grabbing the SRPM for 7.4\n> and recompiling it for X86_64?\n\nNo, they were all 64 bit.\n\nI'm going to run explains on all my queries and see if I can find \nanything of interest...",
"msg_date": "05 Feb 2004 10:04:44 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "Orion Henry kirjutas N, 05.02.2004 kell 07:16:\n> I've done some testing of 7.3.4 vs 7.4.1 and found 7.4.1 to be 20%-30%\n> slower than 7.3.4. Is this common knowledge or am I just unlucky with\n> my query/data selection?\n> \n> Things of note that might matter: the machine is a dual Opteron 1.4GHz\n> running Fedora Core 1 Test 1 for X86_64. The 7.3.4 was from the Fedora\n> distro and the 7.4.1 was the PGDG package.\n\nAre you sure that it is not the case that it is not tha case that 7.3.4\nis 64 bit and the PGDG package is 32 ?\n\n> The database is 3.5 Gigs with 10 millions rows and the machine had 1 Gig or ram.\n> \n> Oh... as a side note I'm happy to announce that the 2.6 Linux kernel has\n> more than DOUBLED the speed of all my Postgres queries over the 2.4. =)\n\nIs this on this same hardware ?\n\n-------------\nHannu\n\n",
"msg_date": "Fri, 06 Feb 2004 12:43:14 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "Christopher Browne kirjutas N, 05.02.2004 kell 07:32:\n> Oops! [email protected] (Orion Henry) was seen spray-painting on a wall:\n> > Oh... as a side note I'm happy to announce that the 2.6 Linux kernel\n> > has more than DOUBLED the speed of all my Postgres queries over the\n> > 2.4. =)\n> \n> I did some heavy-transaction-oriented tests recently on somewhat\n> heftier quad-Xeon hardware, and found little difference between 2.4\n> and 2.6, and a small-but-quite-repeatable advantage with FreeBSD 4.9.\n> Now, I'm quite sure my load was rather different from yours, but I\n> find the claim of doubling of speed rather surprising.\n\nperhaps you were just IO-bound while he was not ?\n\nor starving on some locks ?\n\n-------------\nHannu\n\n",
"msg_date": "Fri, 06 Feb 2004 12:44:59 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "I have some results with our DBT-2 (OLTP) workload on various linux-2.6\nfilesystems, if you'll find that interesting:\n\thttp://developer.osdl.org/markw/fs/dbt2_project_results.html\n\nI've found JFS to perform similarly to ext2. Reiserfs isn't far behind.\nXFS and ext3 fall off a bit. These results are also on a 4-way Xeon,\nwith about 70 drives and a ~ 30GB database.\n\nMark\n\nOn 5 Feb, Carlos Eduardo Smanioto wrote:\n>> I did some heavy-transaction-oriented tests recently on somewhat\n>> heftier quad-Xeon hardware, and found little difference between 2.4\n>> and 2.6, and a small-but-quite-repeatable advantage with FreeBSD 4.9.\n>> Now, I'm quite sure my load was rather different from yours, but I\n>> find the claim of doubling of speed rather surprising.\n>> --\n> \n> What's the type of File System you used in the Linux? I am wanting to know\n> which is the operational system better for PostgreSQL: FreeBSD versus Linux\n> 2.6.\n> \n> Thanks.\n> \n> []'s\n> Carlos Eduardo Smanioto (Brazil)\n> \n> ----- Original Message -----\n> From: \"Christopher Browne\" <[email protected]>\n> To: <[email protected]>\n> Sent: Thursday, February 05, 2004 3:32 AM\n> Subject: Re: [PERFORM] 7.3 vs 7.4 performance\n> \n> \n>> Oops! [email protected] (Orion Henry) was seen spray-painting on a\n> wall:\n>> > I've done some testing of 7.3.4 vs 7.4.1 and found 7.4.1 to be 20%-30%\n>> > slower than 7.3.4. Is this common knowledge or am I just unlucky with\n>> > my query/data selection?\n>>\n>> That seems unusual; the opposite seems more typical in view of there\n>> being some substantial improvements to the query optimizer.\n>>\n>> Have you tried doing EXPLAIN ANALYZE on the queries on both sides?\n>> There would doubtless be interest in figuring out what is breaking\n>> down...\n>>\n>> > Things of note that might matter: the machine is a dual Opteron\n>> > 1.4GHz running Fedora Core 1 Test 1 for X86_64. The 7.3.4 was from\n>> > the Fedora distro and the 7.4.1 was the PGDG package. The database\n>> > is 3.5 Gigs with 10 millions rows and the machine had 1 Gig or ram.\n>> >\n>> > Oh... as a side note I'm happy to announce that the 2.6 Linux kernel\n>> > has more than DOUBLED the speed of all my Postgres queries over the\n>> > 2.4. =)\n>>\n>> I did some heavy-transaction-oriented tests recently on somewhat\n>> heftier quad-Xeon hardware, and found little difference between 2.4\n>> and 2.6, and a small-but-quite-repeatable advantage with FreeBSD 4.9.\n>> Now, I'm quite sure my load was rather different from yours, but I\n>> find the claim of doubling of speed rather surprising.\n>> --\n>> (format nil \"~S@~S\" \"aa454\" \"freenet.carleton.ca\")\n>> http://www.ntlug.org/~cbbrowne/spiritual.html\n>> Failure is not an option. It comes bundled with your Microsoft product.\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Fri, 6 Feb 2004 09:05:04 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "On Fri, 2004-02-06 at 02:43, Hannu Krosing wrote:\n> Orion Henry kirjutas N, 05.02.2004 kell 07:16:\n> > I've done some testing of 7.3.4 vs 7.4.1 and found 7.4.1 to be 20%-30%\n> > slower than 7.3.4. Is this common knowledge or am I just unlucky with\n> > my query/data selection?\n> > \n> > Things of note that might matter: the machine is a dual Opteron 1.4GHz\n> > running Fedora Core 1 Test 1 for X86_64. The 7.3.4 was from the Fedora\n> > distro and the 7.4.1 was the PGDG package.\n> \n> Are you sure that it is not the case that it is not tha case that 7.3.4\n> is 64 bit and the PGDG package is 32 ?\n\nYes sure... I don't know if they were compiled with differing\noptimizations or compilers though...\n\n> > The database is 3.5 Gigs with 10 millions rows and the machine had 1 Gig or ram.\n> > \n> > Oh... as a side note I'm happy to announce that the 2.6 Linux kernel has\n> > more than DOUBLED the speed of all my Postgres queries over the 2.4. =)\n> \n> Is this on this same hardware ?\n\nNo. I havent gotten the 2.6 kernel working on the Opteron yet. The 2x speedup \nwas on a dual Athlon 2GHz.",
"msg_date": "06 Feb 2004 17:02:10 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "On Fri, 2004-02-06 at 02:44, Hannu Krosing wrote:\n> Christopher Browne kirjutas N, 05.02.2004 kell 07:32:\n> > Oops! [email protected] (Orion Henry) was seen spray-painting on a wall:\n> > > Oh... as a side note I'm happy to announce that the 2.6 Linux kernel\n> > > has more than DOUBLED the speed of all my Postgres queries over the\n> > > 2.4. =)\n> > \n> > I did some heavy-transaction-oriented tests recently on somewhat\n> > heftier quad-Xeon hardware, and found little difference between 2.4\n> > and 2.6, and a small-but-quite-repeatable advantage with FreeBSD 4.9.\n> > Now, I'm quite sure my load was rather different from yours, but I\n> > find the claim of doubling of speed rather surprising.\n> \n> perhaps you were just IO-bound while he was not ?\n> \n> or starving on some locks ?\n\nThe queries were across almost 4 gigs of data on a machine with 512 MB of ram.\nI personally was assuming it was the anticipatory disk scheduler... but alas I \ndon't know why it affected me so much.",
"msg_date": "06 Feb 2004 17:03:26 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "On Wed, 2004-02-04 at 21:27, Josh Berkus wrote:\n\n> Orion,\n> \n> > I've done some testing of 7.3.4 vs 7.4.1 and found 7.4.1 to be 20%-30%\n> > slower than 7.3.4. Is this common knowledge or am I just unlucky with\n> > my query/data selection?\n> \n> No, it's not common knowledge. It should be the other way around. Perhaps \n> it's the queries you picked? Even so ..... feel free to post individual \n> EXPLAIN ANALYZEs to the list.\n\n\nThank you...\n\nHere's one good example of 7.3 beating 7.4 soundly:\nAgain this could me some compile option since I built the 7.4 RPM \nfrom source and I got the 7.3 from Fedora or something to\ndo with the Opteron architecture. (Yes the compiled postgres\nis 64 bit)\n\nSELECT cid,media_name,media_type,count(*) as count,sum(a_amount) \nas a,sum(case when b_amount > 0 then b_amount else 0 end) as b,\nsum(case when b_amount < 0 then b_amount else 0 end) as c \nFROM transdata JOIN media_info ON (media = media_type) \nWHERE cid = 140100 AND demo is not null \nAND trans_date between date '2004-01-01' \nAND date_trunc('month',date '2004-01-01' + interval '32 days') \nGROUP BY cid,media_name,media_type;\n\nHere's 7.3's time and explain\n\nreal 0m34.260s\nuser 0m0.010s\nsys 0m0.000s\n\n---------------------------------------------------------------\n Aggregate (cost=7411.88..7415.32 rows=17 width=25)\n -> Group (cost=7411.88..7413.60 rows=172 width=25)\n -> Sort (cost=7411.88..7412.31 rows=172 width=25)\n Sort Key: transdata.cid, media_info.media_name, transdata.media_type\n -> Hash Join (cost=1.22..7405.50 rows=172 width=25)\n Hash Cond: (\"outer\".media_type = \"inner\".media)\n -> Index Scan using transdata_date_index on transdata (cost=0.00..7401.27 rows=172 width=14)\n Index Cond: ((trans_date >= ('2004-01-01'::date)::timestamp with time zone) AND (trans_date <= ('2004-02-01 00:00:00'::timestamp without time zone)::timestamp with time zone))\n Filter: ((cid = 140100) AND (demo IS NOT NULL))\n -> Hash (cost=1.18..1.18 rows=18 width=11)\n -> Seq Scan on media_info (cost=0.00..1.18 rows=18 width=11)\n\n\nHere's 7.4's time and explain\n\nreal 0m43.052s\nuser 0m0.000s\nsys 0m0.020s\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=8098.26..8098.29 rows=2 width=23)\n -> Hash Join (cost=1.22..8095.48 rows=159 width=23)\n Hash Cond: (\"outer\".media_type = \"inner\".media)\n -> Index Scan using transdata_date_index on transdata (cost=0.00..8091.87 rows=159 width=14)\n Index Cond: ((trans_date >= ('2004-01-01'::date)::timestamp with time zone) AND (trans_date <= ('2004-02-01 00:00:00'::timestamp without time zone)::timestamp with time zone))\n Filter: ((cid = 140100) AND (demo IS NOT NULL))\n -> Hash (cost=1.18..1.18 rows=18 width=11)\n -> Seq Scan on media_info (cost=0.00..1.18 rows=18 width=11)",
"msg_date": "06 Feb 2004 17:49:05 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "Orion,\n\n> Here's one good example of 7.3 beating 7.4 soundly:\n> Again this could me some compile option since I built the 7.4 RPM \n> from source and I got the 7.3 from Fedora or something to\n> do with the Opteron architecture. (Yes the compiled postgres\n> is 64 bit)\n\nNeed an EXPLAIN ANALYZE, not just an EXPLAIN.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 6 Feb 2004 18:32:53 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Orion,\n>> Here's one good example of 7.3 beating 7.4 soundly:\n\n> Need an EXPLAIN ANALYZE, not just an EXPLAIN.\n\nIndeed. Also, please try 7.4 with enable_hashagg turned off to see\nwhat it does with a 7.3-style plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 07 Feb 2004 02:28:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 7.3 vs 7.4 performance "
}
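A sketch of the comparison Tom asks for, reusing the query Orion posted above; enable_hashagg can be toggled per session, so the 7.4 planner falls back to a sort-based grouping plan like 7.3's and the two strategies can be timed side by side:

SET enable_hashagg = off;   -- session-local; disables the HashAggregate strategy
EXPLAIN ANALYZE
SELECT cid, media_name, media_type, count(*) AS count,
       sum(a_amount) AS a,
       sum(CASE WHEN b_amount > 0 THEN b_amount ELSE 0 END) AS b,
       sum(CASE WHEN b_amount < 0 THEN b_amount ELSE 0 END) AS c
  FROM transdata JOIN media_info ON (media = media_type)
 WHERE cid = 140100 AND demo IS NOT NULL
   AND trans_date BETWEEN date '2004-01-01'
                      AND date_trunc('month', date '2004-01-01' + interval '32 days')
GROUP BY cid, media_name, media_type;
RESET enable_hashagg;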
] |
[
{
"msg_contents": "Hi All,\n\nI've been seeing very slow read performance on a database of 1 million\nindexed subscribers, which I believe is nothing to do with the data\nitself, but delays on processing the index. \n\nIf I make a random jump into the index (say X), it can take about 50ms\nto read the subscriber. If I then make a \"close by\" lookup (say X+10),\nit takes only about 0.5ms to read the subscriber. Making another lookup\nto a \"far away\" (say X+1000), it again takes about 50ms to read.\n\n From the analyze output, it looks like most of the work is being done in\nthe index scan of the subscriber table - reading the actual data from\nthe PublicView is quite fast.\n\nAm I correct in my analysis? Is there anything I can do to improve the\nperformance of the index lookups? \n\n(The indexes in question are all created as B-TREE.)\n\nI've tried increasing the index memory and making a number of queries\naround the index range, but a stray of several hundred indexes from a\ncached entry always results in a major lookup delay.\n\nI've also increased the shared memory available to Postgres to 80MB\nincase this is a paging of the index, but it hasn't seemed to have any\neffect.\n\n\n\n\nSample analyze output for an initial query:\n\nhydradb=# explain analyze select * from pvsubscriber where actorid =\n'b3432-asdas-232-Subscriber793500';\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------------------\n Nested Loop Left Join (cost=0.00..13.19 rows=1 width=100) (actual\ntime=49.688..49.699 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..10.16 rows=1 width=69) (actual\ntime=49.679..49.689 rows=1 loops=1)\n Join Filter: (\"inner\".mc_childactor_id = \"outer\".id)\n -> Nested Loop (cost=0.00..10.15 rows=1 width=69) (actual\ntime=49.669..49.677 rows=1 loops=1)\n -> Nested Loop (cost=0.00..7.12 rows=1 width=73)\n(actual time=43.969..43.974 rows=1 loops=1)\n -> Index Scan using mc_actor_key on mc_actor\n(cost=0.00..4.08 rows=1 width=69) (actual time=39.497..39.499 rows=1\nloops=1)\n Index Cond: ((actorid)::text =\n'b3432-asdas-232-Subscriber793500'::text)\n -> Index Scan using rel_actor_has_subscriber_idx1\non rel_actor_has_subscriber rel_sub (cost=0.00..3.02 rows=1 width=8)\n(actual time=4.458..4.460 rows=1 loops=1)\n Index Cond: (\"outer\".id =\nrel_sub.mc_actor_id)\n -> Index Scan using mc_subscriber_id_idx on\nmc_subscriber sub (cost=0.00..3.02 rows=1 width=4) (actual\ntime=5.689..5.691 rows=1 loops=1)\n Index Cond: (sub.id = \"outer\".mc_subscriber_id)\n -> Seq Scan on rel_actor_has_actor rel_parent\n(cost=0.00..0.00 rows=1 width=8) (actual time=0.002..0.002 rows=0\nloops=1)\n -> Index Scan using mc_actor_id_idx on mc_actor (cost=0.00..3.02\nrows=1 width=39) (actual time=0.002..0.002 rows=0 loops=1)\n Index Cond: (\"outer\".mc_parentactor_id = mc_actor.id)\n Total runtime: 49.845 ms\n(15 rows)\n\n\n\nAnd the analyze output for a \"nearby\" subscriber (10 indexes away):\n\nhydradb=# explain analyze select * from pvsubscriber where actorid =\n'b3432-asdas-232-Subscriber793510';\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------------------\n Nested Loop Left Join (cost=0.00..13.19 rows=1 width=100) (actual\ntime=0.278..0.288 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..10.16 rows=1 width=69) 
(actual\ntime=0.271..0.280 rows=1 loops=1)\n Join Filter: (\"inner\".mc_childactor_id = \"outer\".id)\n -> Nested Loop (cost=0.00..10.15 rows=1 width=69) (actual\ntime=0.264..0.272 rows=1 loops=1)\n -> Nested Loop (cost=0.00..7.12 rows=1 width=73)\n(actual time=0.246..0.251 rows=1 loops=1)\n -> Index Scan using mc_actor_key on mc_actor\n(cost=0.00..4.08 rows=1 width=69) (actual time=0.220..0.221 rows=1\nloops=1)\n Index Cond: ((actorid)::text =\n'b3432-asdas-232-Subscriber793510'::text)\n -> Index Scan using rel_actor_has_subscriber_idx1\non rel_actor_has_subscriber rel_sub (cost=0.00..3.02 rows=1 width=8)\n(actual time=0.017..0.018 rows=1 loops=1)\n Index Cond: (\"outer\".id =\nrel_sub.mc_actor_id)\n -> Index Scan using mc_subscriber_id_idx on\nmc_subscriber sub (cost=0.00..3.02 rows=1 width=4) (actual\ntime=0.012..0.013 rows=1 loops=1)\n Index Cond: (sub.id = \"outer\".mc_subscriber_id)\n -> Seq Scan on rel_actor_has_actor rel_parent\n(cost=0.00..0.00 rows=1 width=8) (actual time=0.002..0.002 rows=0\nloops=1)\n -> Index Scan using mc_actor_id_idx on mc_actor (cost=0.00..3.02\nrows=1 width=39) (actual time=0.001..0.001 rows=0 loops=1)\n Index Cond: (\"outer\".mc_parentactor_id = mc_actor.id)\n Total runtime: 0.428 ms\n(15 rows)\n\n\n\n\nMany thanks,\n\nDamien\n \n----------------------------------------------------------------------\nDamien Dougan, Software Architect\nMobile Cohesion - http://www.mobilecohesion.com\nEmail: [email protected]\nMobile: +44 7766477997\n----------------------------------------------------------------------\n\n\n",
"msg_date": "Thu, 5 Feb 2004 12:13:40 -0000",
"msg_from": "\"Damien Dougan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index Performance Help"
},
{
"msg_contents": "On Thursday 05 February 2004 12:13, Damien Dougan wrote:\n> Hi All,\n>\n> I've been seeing very slow read performance on a database of 1 million\n> indexed subscribers, which I believe is nothing to do with the data\n> itself, but delays on processing the index.\n>\n> If I make a random jump into the index (say X), it can take about 50ms\n> to read the subscriber. If I then make a \"close by\" lookup (say X+10),\n> it takes only about 0.5ms to read the subscriber. Making another lookup\n> to a \"far away\" (say X+1000), it again takes about 50ms to read.\n\nThe first time, it has to fetch a block from disk. The second time that disk \nblock is already in RAM so it's much faster. The third time it needs a \ndifferent disk block.\n\n> Am I correct in my analysis? Is there anything I can do to improve the\n> performance of the index lookups?\n\nMake sure you have enough RAM to buffer your disks. Buy faster disks.\n\n> I've tried increasing the index memory and making a number of queries\n> around the index range, but a stray of several hundred indexes from a\n> cached entry always results in a major lookup delay.\n\nYep, that'll be your disks.\n\n> I've also increased the shared memory available to Postgres to 80MB\n> incase this is a paging of the index, but it hasn't seemed to have any\n> effect.\n\nProbably the wrong thing to do (although you don't mention what hardware \nyou've got). Read the tuning document at:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\n> Sample analyze output for an initial query:\n>\n> hydradb=# explain analyze select * from pvsubscriber where actorid =\n> 'b3432-asdas-232-Subscriber793500';\n...\n> -> Index Scan using mc_actor_key on mc_actor\n> (cost=0.00..4.08 rows=1 width=69) (actual time=39.497..39.499 rows=1\n> loops=1)\n...\n> Total runtime: 49.845 ms\n\n> And the analyze output for a \"nearby\" subscriber (10 indexes away):\n>\n> hydradb=# explain analyze select * from pvsubscriber where actorid =\n> 'b3432-asdas-232-Subscriber793510';\n>\n...\n> -> Index Scan using mc_actor_key on mc_actor\n> (cost=0.00..4.08 rows=1 width=69) (actual time=0.220..0.221 rows=1\n> loops=1)\n> Total runtime: 0.428 ms\n> (15 rows)\n\nThat certainly seems to be the big change - the only way to consistently get \n1ms timings is going to be to make sure all your data is cached. Try the \ntuning guide above and see what difference that makes. If that's no good, \npost again with details of your config settings, hardware, number of clients \netc...\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 5 Feb 2004 15:54:27 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Performance Help"
},
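For reference, the settings that tuning document walks through can be tried per session before anything is written into postgresql.conf; a sketch only, with placeholder values, since the right numbers depend on how much RAM the box ends up with:

-- placeholder values, not recommendations for this machine
SET effective_cache_size = 65536;   -- in 8 kB pages, i.e. tell the planner to expect ~512 MB of OS cache
SET random_page_cost = 3;           -- below the default of 4 when most reads come from cache
SHOW shared_buffers;                -- shared_buffers itself can only be changed at server start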
{
"msg_contents": "Thanks Richard.\n\nIt certainly does appear to be memory related (on a smaller data set of\n250K subscribers, all accesses are < 1ms).\n\n\nWe're going to play with increasing RAM on the machine, and applying the\noptimisation levels on the page you recommended.\n\n(We're also running on a hardware RAID controlled SCSI set - mirrored\ndisks so reading should be very fast).\n\n\nCheers,\n\nDamien\n\n\n",
"msg_date": "Thu, 5 Feb 2004 16:40:10 -0000",
"msg_from": "\"Damien Dougan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index Performance Help"
},
{
"msg_contents": "Damian,\n\nAlso, if there have been a lot of updates to the table, you may need to run a \nREINDEX on it. An attenuated index would be slow to load because of the \nnummber of empty disk blocks. \n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 5 Feb 2004 10:28:35 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Performance Help"
},
{
"msg_contents": "\n\"Damien Dougan\" <[email protected]> writes:\n\n> Sample analyze output for an initial query:\n> \n> hydradb=# explain analyze select * from pvsubscriber where actorid =\n> 'b3432-asdas-232-Subscriber793500';\n\nI take it pvsubscriber is a view? What's the definition of your view?\n\n> -> Index Scan using mc_actor_key on mc_actor\n> (cost=0.00..4.08 rows=1 width=69)\n> (actual time=39.497..39.499 rows=1 loops=1)\n\nIs this table regularly vacuumed? Is it possible it has lots of dead records\nwith this value for actorid? Try running vacuum full, or better \"vacuum full\nverbose\" and keep the output, it might explain.\n\nWhat version of postgres is this? You might try reindexing all your indexes\n(but particularly this one). Older versions of postgres were prone to index\nbloat problems.\n\n\n-- \ngreg\n\n",
"msg_date": "01 Apr 2004 23:44:29 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Performance Help"
}
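A sketch of the housekeeping suggested in the last two replies, using the table and index names visible in the plans above; the verbose vacuum reports how many dead tuples the table carries, and REINDEX rebuilds a bloated btree compactly:

VACUUM FULL VERBOSE ANALYZE mc_actor;
REINDEX INDEX mc_actor_key;   -- the index behind the slow step in the plan
-- or rebuild every index on the table in one go:
REINDEX TABLE mc_actor;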
] |
[
{
"msg_contents": "Hi!\n\n I'd like to know if this is expected behavior. These are two couples of\nqueries. In each couple, the first one has a WHERE field = function()\ncondition, just like the second one, but in the form WHERE field =\n(SELECT function()). In my opinion, both should have the same execution\nplan, as the function has no parameters and, therefore, is constant.\n\n I'm concerned about this, because the second form looks like a workaround.\n\n*** TESTED IN: PostgreSQL 7.4.1 on i686-pc-cygwin ***\n\npgdb=# explain analyze select count(*) from t_students where period =\n(select current_period_id());\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=127.84..127.84 rows=1 width=0) (actual time=1.000..1.000\nrows=1 loops=1)\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=1.000..1.000 rows=1 loops=1)\n -> Index Scan using i_t_students__period on t_students \n(cost=0.00..127.71 rows=44 width=0) (actual time=1.000..1.000 rows=21\nloop=1)\n Index Cond: (period = $0)\n Total runtime: 1.000 ms\n(6 rows)\n\npgdb=# explain analyze select count(*) from t_students where period =\n(select current_period_id());\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=127.84..127.84 rows=1 width=0) (actual time=1.000..1.000\nrows=1 loops=1)\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=1.000..1.000 rows=1 loops=1)\n -> Index Scan using i_t_students__period on t_students \n(cost=0.00..127.71 rows=44 width=0) (actual time=1.000..1.000 rows=21\nloop=1)\n Index Cond: (period = $0)\n Total runtime: 1.000 ms\n(6 rows)\n\npgdb=# select version();\n version\n---------------------------------------------------------------------------------------\n PostgreSQL 7.4.1 on i686-pc-cygwin, compiled by GCC gcc (GCC) 3.3.1\n(cygming special)\n(1 row)\n\npgdb=#\n\n*** TESTED IN: PostgreSQL 7.3.4 on i386-redhat-linux-gnu ***\n\npgdb=# explain analyze select count(*) from t_students where period =\ncurrent_period_id();\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=182.32..182.32 rows=1 width=0) (actual\ntime=49077.38..49077.38 rows=1 loops=1)\n -> Seq Scan on t_students (cost=0.00..182.22 rows=43 width=0) (actual\ntime=17993.89..49077.13 rows=21 loops=1)\n Filter: (period = current_period_id())\n Total runtime: 49077.61 msec\n(4 rows)\n\npgdb=# explain analyze select count(*) from t_students where period =\n(select current_period_id());\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=125.19..125.19 rows=1 width=0) (actual\ntime=131.59..131.60 rows=1 loops=1)\n InitPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=41.05..41.06 rows=1 loops=1)\n -> Index Scan using i_t_students__period on t_students \n(cost=0.00..125.08 rows=43 width=0) (actual time=131.28..131.48 rows=21\nloops=1)\n Index Cond: (period = $0)\n Total runtime: 131.95 msec\n(6 rows)\n\npgdb=# select version();\n version\n-----------------------------------------------------------------\n PostgreSQL 7.3.4 on i386-redhat-linux-gnu, compiled by GCC 
2.96\n(1 row)\n\n\n\n\n-- \nOctavio Alvarez.\nE-mail: [email protected].\n\nAgradezco que sus correos sean enviados siempre a esta dirección.\n",
"msg_date": "Thu, 5 Feb 2004 23:19:04 -0800 (PST)",
"msg_from": "\"Octavio Alvarez\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Seq scan on zero-parameters function"
},
{
"msg_contents": "\nTomasz Myrta said:\n> Dnia 2004-02-06 08:19, U�ytkownik Octavio Alvarez napisa�:\n>> In each couple, the first one has a WHERE field = function()\n>> condition, just like the second one, but in the form WHERE field =\n>> (SELECT function()). In my opinion, both should have the same execution\n>> plan, as the function has no parameters and, therefore, is constant.\n>\n> Nope.\n>\n> What would you say about function without params returning timeofday()?\n> Is it constant?\n\nNo... :-P ;-)\n\n> If you are sure, that your function returns constant value - declare it\n> as IMMUTABLE. (look at CREATE FUNCTION documentation)\n\nThanks for the hint.\n\nIn fact, my current_period_id() is based on time, but it should be\nconstant along the query execution. I mean, I don't want some records\nfiltered with some values and other with other values... I'll have an\nuncongruent recordset.\n\nSay SELECT [field-list] FROM [complex-join] WHERE sec = datepart('second',\nnow()); Now suppose the query takes always more than 1 second because of\nthe complex-join or whatever reason: I will naver have a congruent\nrecordset.\n\nIMMUTABLE wouldn't help here, only wrapping the function in a subquery. Is\nthis expected behavior? Is this standards compliant (if it can be\nqualified as such)?\n\nOctavio.\n\n-- \nOctavio Alvarez.\nE-mail: [email protected].\n\nAgradezco que sus correos sean enviados siempre a esta direcci�n.\n",
"msg_date": "Fri, 6 Feb 2004 00:43:12 -0800 (PST)",
"msg_from": "\"Octavio Alvarez\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Seq scan on zero-parameters function"
},
{
"msg_contents": "On Friday 06 February 2004 07:19, Octavio Alvarez wrote:\n> Hi!\n>\n> I'd like to know if this is expected behavior. These are two couples of\n> queries. In each couple, the first one has a WHERE field = function()\n> condition, just like the second one, but in the form WHERE field =\n> (SELECT function()). In my opinion, both should have the same execution\n> plan, as the function has no parameters and, therefore, is constant.\n\nNot necessarily constant - think about random() or timeofday().\nHave you set the attributes on your function?\n\nhttp://www.postgresql.org/docs/7.4/static/sql-createfunction.html\n\n> pgdb=# explain analyze select count(*) from t_students where period =\n> (select current_period_id());\n\nIt's not entirely clear to me why this form is different from the other form \nthough.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 6 Feb 2004 08:50:50 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan on zero-parameters function"
},
{
"msg_contents": "Dnia 2004-02-06 09:43, U�ytkownik Octavio Alvarez napisa�:\n> Thanks for the hint.\n> \n> In fact, my current_period_id() is based on time, but it should be\n> constant along the query execution. I mean, I don't want some records\n> filtered with some values and other with other values... I'll have an\n> uncongruent recordset.\n\nWell - you didn't read the chapter I noticed you, did you?\n\nLook at function now(). It returns always the same value inside \ntransaction. If your current_period_id() works the same way as now() \nthen declare it as STABLE.\n\nRegards,\nTomasz Myrta\n",
"msg_date": "Fri, 06 Feb 2004 09:55:26 +0100",
"msg_from": "Tomasz Myrta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Seq scan on zero-parameters function"
},
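A sketch of the declaration Tomasz means. The body below is invented, since the real definition of current_period_id() never appears in the thread; the point is only the trailing STABLE, which tells the planner the function returns the same value for every row of a single statement, the way now() does:

-- hypothetical body and table: the actual function was not posted
CREATE OR REPLACE FUNCTION current_period_id() RETURNS integer AS '
    SELECT period_id FROM t_periods
     WHERE now() BETWEEN period_start AND period_end
' LANGUAGE sql STABLE;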
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> It's not entirely clear to me why this form is different from the other form \n> though.\n\nThe code that checks for expressions containing unstable functions\ndoesn't look inside sub-selects. Arguably this is a bug, but people\nwere relying on that behavior way back before we had these nice\nSTABLE/IMMUTABLE tags for functions. I'm hesitant to change it for\nfear of breaking people's apps.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Feb 2004 09:55:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan on zero-parameters function "
},
{
"msg_contents": "\nTomasz Myrta said:\n> Dnia 2004-02-06 09:43, U�ytkownik Octavio Alvarez napisa�:\n>> Thanks for the hint.\n>>\n>> In fact, my current_period_id() is based on time, but it should be\n>> constant along the query execution. I mean, I don't want some records\n>> filtered with some values and other with other values... I'll have an\n>> uncongruent recordset.\n>\n> Well - you didn't read the chapter I noticed you, did you?\n\nHuummm.. No... :-$\n\nBut now I did. Although the chapter makes it look as \"how will the\noptimizer think the function behaves\", not \"how the function actually\nbehaves\".\n\nBut thanks. It's a lot clearer now. I assume that if I want to make\n\"timeofday\" have a stable-behavior, I must enclose it in a sub-query. Am I\nright?\n\n-- \nOctavio Alvarez.\nE-mail: [email protected].\n\nAgradezco que sus correos sean enviados siempre a esta direcci�n.\n",
"msg_date": "Fri, 6 Feb 2004 09:24:52 -0800 (PST)",
"msg_from": "\"Octavio Alvarez\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Seq scan on zero-parameters function"
}
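To restate the two shapes from this thread in Tom's terms: the planner does not look inside a scalar sub-select for unstable functions, so the wrapped call is computed once as an InitPlan and can drive the index, while the bare call of a default (VOLATILE) function is re-evaluated per row unless the function is declared STABLE or IMMUTABLE. A sketch using the poster's own query:

-- bare call: filtered row by row, hence the seq scan in the 7.3 plan above
SELECT count(*) FROM t_students WHERE period = current_period_id();

-- wrapped call: evaluated once up front, then compared like a constant (index scan on i_t_students__period)
SELECT count(*) FROM t_students WHERE period = (SELECT current_period_id());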
] |
[
{
"msg_contents": "Hello,\n\nDo you see a way to get better performances with this query which takes\ncurrently 655.07 msec to be done.\n\nlevure=> explain analyze SELECT distinct lower(substr(l_name, 1, 1)) AS\ninitiale FROM people\nlevure-> UNION\nlevure-> SELECT distinct lower(substr(org_name, 1, 1)) AS initiale FROM\norganizations\nlevure-> ORDER BY initiale;\n\n \nQUERY PLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-----------\n Sort (cost=158.73..158.78 rows=20 width=43) (actual\ntime=650.82..650.89 rows=39 loops=1)\n Sort Key: initiale\n -> Unique (cost=157.30..158.30 rows=20 width=43) (actual\ntime=649.55..650.17 rows=39 loops=1)\n -> Sort (cost=157.30..157.80 rows=200 width=43) (actual\ntime=649.55..649.67 rows=69 loops=1)\n Sort Key: initiale\n -> Append (cost=69.83..149.66 rows=200 width=43)\n(actual time=198.48..648.51 rows=69 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=69.83..74.83\nrows=100 width=38) (actual time=198.48..230.62 rows=37 loops=1)\n -> Unique (cost=69.83..74.83 rows=100\nwidth=38) (actual time=198.46..230.31 rows=37 loops=1)\n -> Sort (cost=69.83..72.33 rows=1000\nwidth=38) (actual time=198.45..205.99 rows=4093 loops=1)\n Sort Key:\nlower(substr((l_name)::text, 1, 1))\n -> Seq Scan on people\n(cost=0.00..20.00 rows=1000 width=38) (actual time=0.19..52.33 rows=4093\nloops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=69.83..74.83\nrows=100 width=43) (actual time=361.82..417.62 rows=32 loops=1)\n -> Unique (cost=69.83..74.83 rows=100\nwidth=43) (actual time=361.79..417.33 rows=32 loops=1)\n -> Sort (cost=69.83..72.33 rows=1000\nwidth=43) (actual time=361.79..374.81 rows=7074 loops=1)\n Sort Key:\nlower(substr((org_name)::text, 1, 1))\n -> Seq Scan on organizations\n(cost=0.00..20.00 rows=1000 width=43) (actual time=0.23..95.47 rows=7074\nloops=1)\n Total runtime: 655.07 msec\n(17 rows)\n\n\nI was thinking that a index on lower(substr(l_name, 1, 1)) and another\nindex on lower(substr(org_name, 1, 1)) should gives better performances.\nWhen I've to create theses two indexes, it seems like this is not\nallowed :\n\nlevure=> CREATE INDEX firstchar_lastname_idx ON\npeople(lower(substr(l_name, 1, 1)));\nERROR: parser: parse error at or near \"(\" at character 59\n\nDo you have another idea to get better performances ?\n\nThanks in advance :-)\n\n\nPS : Note that this database is VACUUMed twice per day (and sometimes\nmore).\n\n-------------------------------------\nBruno BAGUETTE - [email protected] \n\n",
"msg_date": "Fri, 6 Feb 2004 09:58:44 +0100",
"msg_from": "\"Bruno BAGUETTE\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Increase performance of a UNION query that thakes 655.07 msec to be\n\trunned ?"
},
{
"msg_contents": "\"Bruno BAGUETTE\" <[email protected]> writes:\n> Do you see a way to get better performances with this query which takes\n> currently 655.07 msec to be done.\n\n> levure=> explain analyze SELECT distinct lower(substr(l_name, 1, 1)) AS\n> initiale FROM people\n> levure-> UNION\n> levure-> SELECT distinct lower(substr(org_name, 1, 1)) AS initiale FROM\n> organizations\n> levure-> ORDER BY initiale;\n\nThis is inherently a bit inefficient since the UNION implies a DISTINCT\nstep, thus partially repeating the DISTINCT work done inside each SELECT.\nIt would likely be a tad faster to drop the DISTINCTs from the\nsubselects and rely on UNION to do the filtering. However, you're still\ngonna have a big SORT/UNIQUE step.\n\nAs of PG 7.4 you could probably get a performance win by converting the\nthing to use GROUP BY instead of DISTINCT or UNION:\n\nselect initiale from (\n select lower(substr(l_name,1,1)) as initiale from people\n union all\n select lower(substr(org_name,1,1)) as initiale from organizations\n) ss\ngroup by initiale order by initiale;\n\nThis should use a HashAggregate to do the unique-ification. I think\nthat will be faster than Sort/Unique.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Feb 2004 10:28:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increase performance of a UNION query that thakes 655.07 msec to\n\tbe runned ?"
},
{
"msg_contents": "\nOn Fri, 6 Feb 2004, Bruno BAGUETTE wrote:\n\n> I was thinking that a index on lower(substr(l_name, 1, 1)) and another\n> index on lower(substr(org_name, 1, 1)) should gives better performances.\n> When I've to create theses two indexes, it seems like this is not\n> allowed :\n>\n> levure=> CREATE INDEX firstchar_lastname_idx ON\n> people(lower(substr(l_name, 1, 1)));\n> ERROR: parser: parse error at or near \"(\" at character 59\n\nIn 7.4, I believe you would say\n on people((lower(substr(l_name,1,1))))\nbut I'm not sure that index would really help in practice.\n\n> Do you have another idea to get better performances ?\n\nIn addition to what Tom said, the row estimates look suspiciously default.\nYou mention vacuuming, but do you ever analyze the tables?\n\nAlso, what do you have sort_mem set to?\n",
"msg_date": "Fri, 6 Feb 2004 07:52:24 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increase performance of a UNION query that thakes"
},
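A sketch of the 7.4 expression-index syntax Stephan refers to, spelled out for both tables; the doubled parentheses are what the 7.4 grammar requires, and 7.3 cannot index a nested expression like this at all. The first index name comes from the original post, the second is made up for the example:

-- requires PostgreSQL 7.4
CREATE INDEX firstchar_lastname_idx
    ON people ((lower(substr(l_name, 1, 1))));

CREATE INDEX firstchar_orgname_idx   -- hypothetical name
    ON organizations ((lower(substr(org_name, 1, 1))));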
{
"msg_contents": "re-Hello,\n\nAs suggested by Tom, I've removed the distinct and tried it's query :\n\nlevure=> explain analyze select initiale from (\nlevure(> select lower(substr(l_name,1,1)) as initiale from people\nlevure(> union all\nlevure(> select lower(substr(org_name,1,1)) as initiale from\norganizations\nlevure(> ) ss\nlevure-> group by initiale order by initiale;\n QUERY\nPLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-\n Group (cost=1018.48..1074.32 rows=1117 width=17) (actual\ntime=783.47..867.61 rows=39 loops=1)\n -> Sort (cost=1018.48..1046.40 rows=11167 width=17) (actual\ntime=782.18..801.68 rows=11167 loops=1)\n Sort Key: initiale\n -> Subquery Scan ss (cost=0.00..267.67 rows=11167 width=17)\n(actual time=0.23..330.31 rows=11167 loops=1)\n -> Append (cost=0.00..267.67 rows=11167 width=17)\n(actual time=0.22..263.69 rows=11167 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..87.93\nrows=4093 width=15) (actual time=0.22..79.51 rows=4093 loops=1)\n -> Seq Scan on people (cost=0.00..87.93\nrows=4093 width=15) (actual time=0.20..53.82 rows=4093 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..179.74\nrows=7074 width=17) (actual time=0.24..146.12 rows=7074 loops=1)\n -> Seq Scan on organizations\n(cost=0.00..179.74 rows=7074 width=17) (actual time=0.23..100.70\nrows=7074 loops=1)\n Total runtime: 874.79 msec\n(10 rows)\n\n\nThat seems to be 200 msec longer that my first query... Indeed, I've\nnoticed something strange : now, if I rerun my initial query, I get\nworse runtime than this morning :\n\n\nlevure=> EXPLAIN ANALYZE SELECT lower(substr(l_name, 1, 1)) AS initiale\nFROM people\nlevure-> UNION\nlevure-> SELECT lower(substr(org_name, 1, 1)) AS initiale FROM\norganizations\nlevure-> ORDER BY initiale;\n QUERY\nPLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n Sort (cost=1130.85..1133.64 rows=1117 width=17) (actual\ntime=802.52..802.58 rows=39 loops=1)\n Sort Key: initiale\n -> Unique (cost=1018.48..1074.32 rows=1117 width=17) (actual\ntime=712.04..801.83 rows=39 loops=1)\n -> Sort (cost=1018.48..1046.40 rows=11167 width=17) (actual\ntime=712.03..732.63 rows=11167 loops=1)\n Sort Key: initiale\n -> Append (cost=0.00..267.67 rows=11167 width=17)\n(actual time=0.21..263.54 rows=11167 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..87.93\nrows=4093 width=15) (actual time=0.20..80.47 rows=4093 loops=1)\n -> Seq Scan on people (cost=0.00..87.93\nrows=4093 width=15) (actual time=0.19..54.14 rows=4093 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..179.74\nrows=7074 width=17) (actual time=0.28..144.82 rows=7074 loops=1)\n -> Seq Scan on organizations\n(cost=0.00..179.74 rows=7074 width=17) (actual time=0.27..99.06\nrows=7074 loops=1)\n Total runtime: 806.47 msec\n(11 rows)\n\n\nI don't understand why this runtime has changed because no data has been\nadded/updated/deleted since several weeks (I'm working on a copy of the\nproduction database. And this copy is not accessible for the users).\n\nMy PostgreSQL version is PostgreSQL 7.3.2, I have to ask to the\nadministrator if it can be upgraded to 7.4 in the production server.\n\nThanks in advance for your help.\n\n---------------------------------------\nBruno BAGUETTE - [email protected] \n\n",
"msg_date": "Fri, 6 Feb 2004 17:34:48 +0100",
"msg_from": "\"Bruno BAGUETTE\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE : Increase performance of a UNION query that thakes 655.07 msec to\n\tbe runned ?"
},
{
"msg_contents": "> In addition to what Tom said, the row estimates look \n> suspiciously default. You mention vacuuming, but do you ever \n> analyze the tables?\n\nI run VACUUM FULL ANALYZE with the postgres user on all the PostgreSQL\ndatabases on the server, twice a day, sometimes more.\n\n> Also, what do you have sort_mem set to?\n\n[root@levure data]# cat postgresql.conf | grep sort_mem\nsort_mem = 6144 # min 64, size in KB \n\nDo you think I should increase that value ?\n\nIt's not so easy to do a good setup of that postgresql.conf file, is\nthere any tool that suggests some values for that ?\n\nThanks in advance for your tips :-)\n\n---------------------------------------\nBruno BAGUETTE - [email protected] \n\n",
"msg_date": "Fri, 6 Feb 2004 17:41:20 +0100",
"msg_from": "\"Bruno BAGUETTE\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE : Increase performance of a UNION query that thakes 655.07 msec to\n\tbe runned ?"
},
{
"msg_contents": "On Fri, 6 Feb 2004, Bruno BAGUETTE wrote:\n\n> > In addition to what Tom said, the row estimates look\n> > suspiciously default. You mention vacuuming, but do you ever\n> > analyze the tables?\n>\n> I run VACUUM FULL ANALYZE with the postgres user on all the PostgreSQL\n> databases on the server, twice a day, sometimes more.\n\nWierd, because you're getting 1000 estimated on both people and\norganizations. What does pg_class have to say about those two tables?\n\n> > Also, what do you have sort_mem set to?\n>\n> [root@levure data]# cat postgresql.conf | grep sort_mem\n> sort_mem = 6144 # min 64, size in KB\n>\n> Do you think I should increase that value ?\n\nHmm, I'd expect that the sort would fit in that space in general. If you\nwant to try different values, you can set sort_mem from psql rather than\nchanging the configuration file.\n\n----\n\nOn my machine the index does actually help, although I needed to lower\nrandom_page_cost a little from its default of 4 to get it to use it\npreferentially, but I'm also getting times about 1/3 of yours (and my\nmachine is pretty poor) so I think I may not have data that matches yours\nvery well.\n",
"msg_date": "Fri, 6 Feb 2004 09:13:42 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE : Increase performance of a UNION query that thakes"
},
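A sketch of the per-session experiment Stephan describes, so different sort_mem values can be compared without editing postgresql.conf or restarting; the value is in kilobytes, and the query is the one from the start of the thread:

SET sort_mem = 16384;   -- 16 MB for this session only (example value)
EXPLAIN ANALYZE
SELECT lower(substr(l_name, 1, 1)) AS initiale FROM people
UNION
SELECT lower(substr(org_name, 1, 1)) AS initiale FROM organizations
ORDER BY initiale;
RESET sort_mem;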
{
"msg_contents": "> On Fri, 6 Feb 2004, Bruno BAGUETTE wrote:\n> \n> > > In addition to what Tom said, the row estimates look suspiciously \n> > > default. You mention vacuuming, but do you ever analyze \n> > > the tables?\n> >\n> > I run VACUUM FULL ANALYZE with the postgres user on all the \n> > PostgreSQL \n> > databases on the server, twice a day, sometimes more.\n> \n> Wierd, because you're getting 1000 estimated on both people \n> and organizations. What does pg_class have to say about \n> those two tables?\n\nI'm sorry but I think that I misunderstand you. Are you telling me that\nrunning VACUUM FULL ANALYZE is weird ? Or do you mean another thing ?\n\n\nFinally, I've found another way : I've build a MATERIALIZED VIEW that\nstores the initial (CHAR(1) of both people and organizations, with an\nindex on that column. I get excellent results :\n\n Unique (cost=0.00..290.34 rows=1117 width=5) (actual time=0.52..267.38\nrows=39 loops=1)\n -> Index Scan using idx_mview_initials on mview_contacts\n(cost=0.00..262.42 rows=11167 width=5) (actual time=0.51..172.15\nrows=11167 loops=1)\n Total runtime: 267.81 msec\n(3 rows)\n\nSo, that's a better runtime :-)\n\nThanks for your help :-)\n\n-------------------------------------\nBruno BAGUETTE - [email protected] \n\n",
"msg_date": "Sun, 8 Feb 2004 01:26:40 +0100",
"msg_from": "\"Bruno BAGUETTE\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RE : RE : Increase performance of a UNION query that thakes 655.07\n\tmsec to be runned ?"
},
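PostgreSQL 7.3 has no MATERIALIZED VIEW statement (that arrived in 9.3), so the materialized view Bruno describes is presumably a summary table refreshed by hand or by trigger. A sketch of the idea, reusing the mview_contacts and idx_mview_initials names from his plan; the column name initiale is assumed:

-- summary table standing in for a materialized view
CREATE TABLE mview_contacts AS
    SELECT lower(substr(l_name, 1, 1)) AS initiale FROM people
    UNION ALL
    SELECT lower(substr(org_name, 1, 1)) FROM organizations;

CREATE INDEX idx_mview_initials ON mview_contacts (initiale);
ANALYZE mview_contacts;

-- the original question then reduces to
SELECT DISTINCT initiale FROM mview_contacts ORDER BY initiale;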
{
"msg_contents": "On Sun, 8 Feb 2004, Bruno BAGUETTE wrote:\n\n> > On Fri, 6 Feb 2004, Bruno BAGUETTE wrote:\n> >\n> > > > In addition to what Tom said, the row estimates look suspiciously\n> > > > default. You mention vacuuming, but do you ever analyze\n> > > > the tables?\n> > >\n> > > I run VACUUM FULL ANALYZE with the postgres user on all the\n> > > PostgreSQL\n> > > databases on the server, twice a day, sometimes more.\n> >\n> > Wierd, because you're getting 1000 estimated on both people\n> > and organizations. What does pg_class have to say about\n> > those two tables?\n>\n> I'm sorry but I think that I misunderstand you. Are you telling me that\n> running VACUUM FULL ANALYZE is weird ? Or do you mean another thing ?\n\nNo, I was saying it's wierd that it'd be misestimating to the default\nvalues after a vacuum full analyze.\n",
"msg_date": "Sat, 7 Feb 2004 21:36:20 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RE : RE : Increase performance of a UNION query that"
}
] |
[
{
"msg_contents": "I have two tables which have related selection data; they get updated\nseparately. One contains messages, the second an \"index key\" for each\nuser's viewing history.\n\nWhen I attempt to use a select that merges the two to produce a \"true or\nfalse\" output in one of the reply rows, I get a sequential scan of the\nsecond table - which is NOT what I want!\n\nHere are the table definitions and query explain results...\n\nakcs=> \\d post\n Table \"public.post\"\n Column | Type | Modifiers \n-----------+-----------------------------+-----------------------------------------------------------\n forum | text | \n number | integer | \n toppost | integer | \n views | integer | default 0\n login | text | \n subject | text | \n message | text | \n inserted | timestamp without time zone | \n modified | timestamp without time zone | \n who | text | \n reason | text | \n ordinal | integer | not null default nextval('public.post_ordinal_seq'::text)\n replies | integer | default 0\n ip | text | \n invisible | integer | \n sticky | integer | \n lock | integer | \n pinned | integer | default 0\n replied | timestamp without time zone | \nIndexes:\n \"post_forum\" btree (forum)\n \"post_lookup\" btree (forum, number)\n \"post_order\" btree (number, inserted)\n \"post_toppost\" btree (forum, toppost, inserted)\n\n\nakcs=> \\d forumlog;\n Table \"public.forumlog\"\n Column | Type | Modifiers \n----------+-----------------------------+-----------\n login | text | \n forum | text | \n lastview | timestamp without time zone | \n number | integer | \nIndexes:\n \"forumlog_composite\" btree (login, forum, number)\n \"forumlog_login\" btree (login)\n \"forumlog_number\" btree (number)\n\nakcs=> explain select forum, (replied > (select lastview from forumlog where forumlog.login='%s' and forumlog.forum='%s' and number=post.number)) as newflag, * from post where forum = '%s' and toppost = 1 order by pinned desc, replied desc;\n QUERY PLAN \n-------------------------------------------------------------------------------------------\n Sort (cost=3.20..3.21 rows=1 width=218)\n Sort Key: pinned, replied\n -> Index Scan using post_forum on post (cost=0.00..3.19 rows=1 width=218)\n Index Cond: (forum = '%s'::text)\n Filter: (toppost = 1)\n SubPlan\n -> Seq Scan on forumlog (cost=0.00..1.18 rows=1 width=8)\n Filter: ((login = '%s'::text) AND (forum = '%s'::text) AND (number = $0))\n(8 rows)\n\nWhy is the subplan using a sequential scan? At minimum the index on the \npost number (\"forumlog_number\") should be used, no? What would be even\nbetter would be a set of indices that allow at least two (or even all three)\nof the keys in the inside SELECT to be used.\n\nWhat am I missing here?\n\n--\n-- \nKarl Denninger ([email protected]) Internet Consultant & Kids Rights Activist\nhttp://www.denninger.net\tTired of spam at your company? LOOK HERE!\nhttp://childrens-justice.org\tWorking for family and children's rights\nhttp://diversunion.org\t\tLOG IN AND GET YOUR TANK STICKERS TODAY!\n",
"msg_date": "Fri, 6 Feb 2004 15:36:02 -0600",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is query selecting sequential?"
},
{
"msg_contents": "Karl,\n\n> SubPlan\n> -> Seq Scan on forumlog (cost=0.00..1.18 rows=1 width=8)\n> Filter: ((login = '%s'::text) AND (forum = '%s'::text) AND \n(number = $0))\n\n> Why is the subplan using a sequential scan? At minimum the index on the \n> post number (\"forumlog_number\") should be used, no? What would be even\n> better would be a set of indices that allow at least two (or even all three)\n> of the keys in the inside SELECT to be used.\n\nIt's using a seq scan because you have only 1 row in the table. Don't \nbother testing performance before your database is populated.\n\nPostgreSQL doesn't just use an index because it's there; it uses and index \nbecause it's faster than not using one.\n\nIf there is more than one row in the table, then:\n1) run ANALYZE forumlog;\n2) Send us the EXPLAIN ANALYZE, not just the explain for the query.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 6 Feb 2004 13:51:39 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is query selecting sequential?"
},
{
"msg_contents": "On Fri, Feb 06, 2004 at 01:51:39PM -0800, Josh Berkus wrote:\n> Karl,\n> \n> > SubPlan\n> > -> Seq Scan on forumlog (cost=0.00..1.18 rows=1 width=8)\n> > Filter: ((login = '%s'::text) AND (forum = '%s'::text) AND \n> (number = $0))\n> \n> > Why is the subplan using a sequential scan? At minimum the index on the \n> > post number (\"forumlog_number\") should be used, no? What would be even\n> > better would be a set of indices that allow at least two (or even all three)\n> > of the keys in the inside SELECT to be used.\n> \n> It's using a seq scan because you have only 1 row in the table. Don't \n> bother testing performance before your database is populated.\n> \n> PostgreSQL doesn't just use an index because it's there; it uses and index \n> because it's faster than not using one.\n> \n> If there is more than one row in the table, then:\n> 1) run ANALYZE forumlog;\n> 2) Send us the EXPLAIN ANALYZE, not just the explain for the query.\n\nHmmm... there is more than one row in the table. :-) There aren't a huge\nnumber, but there are a few. I know about the optimizer not using indices \nif there are no (or only one) row in the table - not making that\nmistake here.\n\nRan analyze forumlog;\n\nSame results.\n\nHere's an explain analyze with actual values (that DO match real values in\nthe table) filled in.\n\nakcs=> explain analyze select forum, (replied > (select lastview from forumlog where forumlog.login='genesis' and forumlog.forum='General' and number=post.number)) as newflag, * from post where forum = 'General' and toppost = 1 order by pinned desc, replied desc; \n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------\n Sort (cost=28.41..28.42 rows=6 width=218) (actual time=0.677..0.698 rows=5 loops=1)\n Sort Key: pinned, replied\n -> Index Scan using post_toppost on post (cost=0.00..28.33 rows=6 width=218) (actual time=0.403..0.606 rows=5 loops=1)\n Index Cond: ((forum = 'General'::text) AND (toppost = 1))\n SubPlan\n -> Seq Scan on forumlog (cost=0.00..1.18 rows=1 width=8) (actual time=0.015..0.027 rows=1 loops=5)\n Filter: ((login = 'genesis'::text) AND (forum = 'General'::text) AND (number = $0))\n Total runtime: 0.915 ms\n(8 rows)\n\n--\n-- \nKarl Denninger ([email protected]) Internet Consultant & Kids Rights Activist\nhttp://www.denninger.net\tTired of spam at your company? LOOK HERE!\nhttp://childrens-justice.org\tWorking for family and children's rights\nhttp://diversunion.org\t\tLOG IN AND GET YOUR TANK STICKERS TODAY!\n",
"msg_date": "Fri, 6 Feb 2004 16:22:32 -0600",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is query selecting sequential?"
},
{
"msg_contents": "Karl,\n\nWell, still with only 5 rows in the forumlog table you're not going get \nrealistic results compared to a loaded database. However, you are making \nthings difficult for the parser with awkward query syntax; what you currently \nhave encourages a sequential loop.\n\nIf there are potentially several rows in forumlog for each row in post, then \nyour query won't work either.\n\n> akcs=> explain analyze select forum, (replied > (select lastview from \nforumlog where forumlog.login='genesis' and forumlog.forum='General' and \nnumber=post.number)) as newflag, * from post where forum = 'General' and \ntoppost = 1 order by pinned desc, replied desc; \n\nInstead:\n\nif only one row in forumlog per row in post:\n\nSELECT (replied > lastview) AS newflag, post.* \nFROM post, forumlog\nWHERE post.forum = 'General' and toppost = 1 and forumlog.login = 'genesis'\nand forumlog.forum='General' and forumlog.number=post.number;\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 6 Feb 2004 14:36:57 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is query selecting sequential?"
},
{
"msg_contents": "On Fri, Feb 06, 2004 at 02:36:57PM -0800, Josh Berkus wrote:\n> Karl,\n> \n> Well, still with only 5 rows in the forumlog table you're not going get \n> realistic results compared to a loaded database. However, you are making \n> things difficult for the parser with awkward query syntax; what you currently \n> have encourages a sequential loop.\n> \n> If there are potentially several rows in forumlog for each row in post, then \n> your query won't work either.\n\nIt better not. Indeed, I WANT it to blow up if there is, as that's a\nserious error, and am counting on that to happen (and yes, I know it will -\nand it should!)\n\n> > akcs=> explain analyze select forum, (replied > (select lastview from \n> forumlog where forumlog.login='genesis' and forumlog.forum='General' and \n> number=post.number)) as newflag, * from post where forum = 'General' and \n> toppost = 1 order by pinned desc, replied desc; \n> \n> Instead:\n> \n> if only one row in forumlog per row in post:\n> \n> SELECT (replied > lastview) AS newflag, post.* \n> FROM post, forumlog\n> WHERE post.forum = 'General' and toppost = 1 and forumlog.login = 'genesis'\n> and forumlog.forum='General' and forumlog.number=post.number;\n\nIt still thinks its going to sequentially scan it...\n\nI'll see what happens when I get some more rows in the table and if it \ndecides to start using the indices then....\n\nakcs=> explain analyze select (replied > lastview) as newflag, post.* from post, forumlog where post.forum ='General' and toppost = 1 and forumlog.login='genesis' and forumlog.forum='General' order by post.pinned desc, post.replied desc;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=23.83..23.90 rows=30 width=226) (actual time=1.228..1.331 rows=25 loops=1)\n Sort Key: post.pinned, post.replied\n -> Nested Loop (cost=1.15..23.09 rows=30 width=226) (actual time=0.157..0.797 rows=25 loops=1)\n -> Index Scan using post_toppost on post (cost=0.00..21.27 rows=6 width=218) (actual time=0.059..0.089 rows=5 loops=1)\n Index Cond: ((forum = 'General'::text) AND (toppost = 1))\n -> Materialize (cost=1.15..1.20 rows=5 width=8) (actual time=0.013..0.046 rows=5 loops=5)\n -> Seq Scan on forumlog (cost=0.00..1.15 rows=5 width=8) (actual time=0.027..0.065 rows=5 loops=1)\n Filter: ((login = 'genesis'::text) AND (forum = 'General'::text))\n Total runtime: 1.754 ms\n(9 rows)\n\n--\n-- \nKarl Denninger ([email protected]) Internet Consultant & Kids Rights Activist\nhttp://www.denninger.net\tTired of spam at your company? LOOK HERE!\nhttp://childrens-justice.org\tWorking for family and children's rights\nhttp://diversunion.org\t\tLOG IN AND GET YOUR TANK STICKERS TODAY!\n",
"msg_date": "Fri, 6 Feb 2004 19:53:48 -0600",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is query selecting sequential?"
},
{
"msg_contents": "Karl Denninger <[email protected]> writes:\n> akcs=> explain analyze select forum, (replied > (select lastview from forumlog where forumlog.login='genesis' and forumlog.forum='General' and number=post.number)) as newflag, * from post where forum = 'General' and toppost = 1 order by pinned desc, replied desc; \n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=28.41..28.42 rows=6 width=218) (actual time=0.677..0.698 rows=5 loops=1)\n> Sort Key: pinned, replied\n> -> Index Scan using post_toppost on post (cost=0.00..28.33 rows=6 width=218) (actual time=0.403..0.606 rows=5 loops=1)\n> Index Cond: ((forum = 'General'::text) AND (toppost = 1))\n> SubPlan\n> -> Seq Scan on forumlog (cost=0.00..1.18 rows=1 width=8) (actual time=0.015..0.027 rows=1 loops=5)\n> Filter: ((login = 'genesis'::text) AND (forum = 'General'::text) AND (number = $0))\n> Total runtime: 0.915 ms\n> (8 rows)\n\nAs noted elsewhere, the inner subplan will not switch over to an\nindexscan until you get some more data in that table. Note however that\nthe subplan is only accounting for about 0.13 msec (0.027*5) so it's not\nthe major cost here anyway. The slow part seems to be the indexed fetch\nfrom \"post\", which is taking nearly 0.5 msec to fetch five rows.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 07 Feb 2004 01:51:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is query selecting sequential? "
},
{
"msg_contents": "On Sat, Feb 07, 2004 at 01:51:54AM -0500, Tom Lane wrote:\n> Karl Denninger <[email protected]> writes:\n> > akcs=> explain analyze select forum, (replied > (select lastview from forumlog where forumlog.login='genesis' and forumlog.forum='General' and number=post.number)) as newflag, * from post where forum = 'General' and toppost = 1 order by pinned desc, replied desc; \n> > QUERY PLAN \n> > ----------------------------------------------------------------------------------------------------------------------------\n> > Sort (cost=28.41..28.42 rows=6 width=218) (actual time=0.677..0.698 rows=5 loops=1)\n> > Sort Key: pinned, replied\n> > -> Index Scan using post_toppost on post (cost=0.00..28.33 rows=6 width=218) (actual time=0.403..0.606 rows=5 loops=1)\n> > Index Cond: ((forum = 'General'::text) AND (toppost = 1))\n> > SubPlan\n> > -> Seq Scan on forumlog (cost=0.00..1.18 rows=1 width=8) (actual time=0.015..0.027 rows=1 loops=5)\n> > Filter: ((login = 'genesis'::text) AND (forum = 'General'::text) AND (number = $0))\n> > Total runtime: 0.915 ms\n> > (8 rows)\n> \n> As noted elsewhere, the inner subplan will not switch over to an\n> indexscan until you get some more data in that table. Note however that\n> the subplan is only accounting for about 0.13 msec (0.027*5) so it's not\n> the major cost here anyway. The slow part seems to be the indexed fetch\n> from \"post\", which is taking nearly 0.5 msec to fetch five rows.\n> \n> \t\t\tregards, tom lane\n\nOk...\n\nBTW, the other posted \"cleaner\" model doesn't work for me. If there is NO\nrow in the subtable that matches, the other version returns nothing (which\nmakes sense since the initial select fails to match any rows as one of the\nthings its trying to match is missing.)\n\nI do need a return even if the log row is missing (and it WILL be, for a\nfirst visit to that particular item in the table by a particular user)\n\n--\n-- \nKarl Denninger ([email protected]) Internet Consultant & Kids Rights Activist\nhttp://www.denninger.net\tTired of spam at your company? LOOK HERE!\nhttp://childrens-justice.org\tWorking for family and children's rights\nhttp://diversunion.org\t\tLOG IN AND GET YOUR TANK STICKERS TODAY!\n",
"msg_date": "Sat, 7 Feb 2004 10:10:52 -0600",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is query selecting sequential?"
},
{
"msg_contents": "Karl,\n\n> BTW, the other posted \"cleaner\" model doesn't work for me. If there is NO\n> row in the subtable that matches, the other version returns nothing (which\n> makes sense since the initial select fails to match any rows as one of the\n> things its trying to match is missing.)\n\nAh, wasn't thinking of that case. Problem with not really knowing what the \ndatabase is about.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 8 Feb 2004 21:34:18 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is query selecting sequential?"
}
] |
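Karl's requirement of getting a row back even when no forumlog entry exists yet is what an outer join is for. The query below is a sketch against the schema posted in the thread, not necessarily faster than the correlated subquery, just another shape the planner can work with. When there is no matching log row, lastview is NULL, the comparison yields NULL, and the application can treat that as "new":

    SELECT post.*, (post.replied > fl.lastview) AS newflag
    FROM post
    LEFT JOIN forumlog fl
           ON fl.login  = 'genesis'
          AND fl.forum  = post.forum
          AND fl.number = post.number
    WHERE post.forum = 'General'
      AND post.toppost = 1
    ORDER BY post.pinned DESC, post.replied DESC;
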
[
{
"msg_contents": "Hi\n\nwe have a table with about 4 million rows. One column has an int value, \nthere is a btree index on it. We tried to execute the following \nstatement and it is very slow on a dual G5 2GHZ with 4 GB of RAM.\n\nexplain analyze select count(*) from job_property where int_value = 0;\n\nAggregate (cost=144348.80..144348.80 rows=1 width=0) (actual \ntime=13536.852..13536.852 rows=1 loops=1)\n -> Seq Scan on job_property (cost=0.00..144255.15 rows=37459 \nwidth=0) (actual time=19.422..13511.653 rows=42115 loops=1)\n Filter: (int_value = 0)\nTotal runtime: 13560.862 ms\n\n\n\nIs this more or less normal or can we optimize this a little bit? \nFrontBase (which we compare currently) takes 2 seconds first time and \nabout 0.2 seconds on second+ queries.\n\nregards David\n\n",
"msg_date": "Wed, 11 Feb 2004 14:03:15 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "select count(*) from anIntColumn where int_value = 0; is very slow"
},
{
"msg_contents": "Hello, \n\nIf you has index on id, then you can use\nSELECT id FROM tabulka ORDER BY id DESC LIMIT 1;\n\nSee 4.8. FAQ \n\nRegards\nPavel Stehule\n\nOn Wed, 11 Feb 2004, David Teran wrote:\n\n> Hi\n> \n> we have a table with about 4 million rows. One column has an int value, \n> there is a btree index on it. We tried to execute the following \n> statement and it is very slow on a dual G5 2GHZ with 4 GB of RAM.\n> \n> explain analyze select count(*) from job_property where int_value = 0;\n> \n> Aggregate (cost=144348.80..144348.80 rows=1 width=0) (actual \n> time=13536.852..13536.852 rows=1 loops=1)\n> -> Seq Scan on job_property (cost=0.00..144255.15 rows=37459 \n> width=0) (actual time=19.422..13511.653 rows=42115 loops=1)\n> Filter: (int_value = 0)\n> Total runtime: 13560.862 ms\n> \n> \n> \n> Is this more or less normal or can we optimize this a little bit? \n> FrontBase (which we compare currently) takes 2 seconds first time and \n> about 0.2 seconds on second+ queries.\n> \n> regards David\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n",
"msg_date": "Wed, 11 Feb 2004 14:11:13 +0100 (CET)",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select count(*) from anIntColumn where int_value = 0;"
},
{
"msg_contents": "\n>\n> Hi\n>\n> we have a table with about 4 million rows. One column has an int value,\n> there is a btree index on it. We tried to execute the following\n> statement and it is very slow on a dual G5 2GHZ with 4 GB of RAM.\n>\n> explain analyze select count(*) from job_property where int_value = 0;\n>\n> Aggregate (cost=144348.80..144348.80 rows=1 width=0) (actual\n> time=13536.852..13536.852 rows=1 loops=1)\n> -> Seq Scan on job_property (cost=0.00..144255.15 rows=37459\n> width=0) (actual time=19.422..13511.653 rows=42115 loops=1)\n> Filter: (int_value = 0)\n> Total runtime: 13560.862 ms\n\n\nIs your int_value data type int4? If not then use \"... from job_property\nwhere int_value = '0'\"\nIndexes are used only if datatypes matches.\n\nRigmor Ukuhe\n\n\n>\n>\n>\n> Is this more or less normal or can we optimize this a little bit?\n> FrontBase (which we compare currently) takes 2 seconds first time and\n> about 0.2 seconds on second+ queries.\n>\n> regards David\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n\n---\nOutgoing mail is certified Virus Free.\nChecked by AVG anti-virus system (http://www.grisoft.com).\nVersion: 6.0.564 / Virus Database: 356 - Release Date: 19.01.2004\n\n",
"msg_date": "Wed, 11 Feb 2004 15:12:00 +0200",
"msg_from": "\"Rigmor Ukuhe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select count(*) from anIntColumn where int_value = 0;\n is very slow"
},
{
"msg_contents": "Hi,\n\n> Is your int_value data type int4? If not then use \"... from \n> job_property\n> where int_value = '0'\"\n> Indexes are used only if datatypes matches.\n>\ntried those variations already. Strange enough, after dropping and \nrecreating the index everything worked fine.\n\nregards David\n\n",
"msg_date": "Wed, 11 Feb 2004 14:32:00 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select count(*) from anIntColumn where int_value = 0;\n is very slow"
},
{
"msg_contents": "Had you done a VACUUM ANALYZE at all? There has been much discussion \nlately about the planner needing to be updated to know that the index \nis a better choice.\n\nOn Feb 11, 2004, at 6:32 AM, David Teran wrote:\n\n> Hi,\n>\n>> Is your int_value data type int4? If not then use \"... from \n>> job_property\n>> where int_value = '0'\"\n>> Indexes are used only if datatypes matches.\n>>\n> tried those variations already. Strange enough, after dropping and \n> recreating the index everything worked fine.\n>\n> regards David\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n\n--\nPC Drew\nManager, Dominet\n\nIBSN\n1600 Broadway, Suite 400\nDenver, CO 80202\n\nPhone: 303-984-4727 x107\nCell: 720-841-4543\nFax: 303-984-4730\nEmail: [email protected]\n\n",
"msg_date": "Wed, 11 Feb 2004 06:42:19 -0700",
"msg_from": "PC Drew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select count(*) from anIntColumn where int_value = 0;\n is very slow"
},
{
"msg_contents": "Oops! [email protected] (Pavel Stehule) was seen spray-painting on a wall:\n>\n> Regards\n> Pavel Stehule\n>\n> On Wed, 11 Feb 2004, David Teran wrote:\n>\n>> Hi\n>> \n>> we have a table with about 4 million rows. One column has an int value, \n>> there is a btree index on it. We tried to execute the following \n>> statement and it is very slow on a dual G5 2GHZ with 4 GB of RAM.\n>> \n>> explain analyze select count(*) from job_property where int_value = 0;\n>> \n>> Aggregate (cost=144348.80..144348.80 rows=1 width=0) (actual \n>> time=13536.852..13536.852 rows=1 loops=1)\n>> -> Seq Scan on job_property (cost=0.00..144255.15 rows=37459 \n>> width=0) (actual time=19.422..13511.653 rows=42115 loops=1)\n>> Filter: (int_value = 0)\n>> Total runtime: 13560.862 ms\n>> \n> If you has index on id, then you can use\n> SELECT id FROM tabulka ORDER BY id DESC LIMIT 1;\n>\n> See 4.8. FAQ \n\nI'm afraid that's not the answer. That would be the faster\nalternative to \"select max(id) from tabulka;\"\n\nI guess the question is, is there a faster way of coping with the\n\"int_value = 0\" part?\n\nIt seems a little odd that the index was not selected; it appears that\nthe count was 42115, right?\n\nThe estimated number of rows was 37459, and if the table size is ~4M,\nthen I would have expected the query optimizer to use the index.\n\nCould you try doing \"ANALYZE JOB_PROPERTY;\" and then try again? \n\nOne thought that comes to mind is that perhaps the statistics are\noutdated.\n\nAnother thought is that perhaps there are several really common\nvalues, and the statistics are crummy. You might relieve that by:\n\n alter table job_property alter column int_value set statistics 20;\n analyze job_property;\n\n(Or perhaps some higher value...)\n\nIf there are a few very common discrete values in a particular field,\nthen the default statistics may get skewed because the histogram\nhasn't enough bins...\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/wp.html\nRules of the Evil Overlord #102. \"I will not waste time making my\nenemy's death look like an accident -- I'm not accountable to anyone\nand my other enemies wouldn't believe it.\n",
"msg_date": "Wed, 11 Feb 2004 09:14:39 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select count(*) from anIntColumn where int_value = 0;"
},
{
"msg_contents": "On Wed, 11 Feb 2004, David Teran wrote:\n\n> Hi,\n> \n> > Is your int_value data type int4? If not then use \"... from \n> > job_property\n> > where int_value = '0'\"\n> > Indexes are used only if datatypes matches.\n> >\n> tried those variations already. Strange enough, after dropping and \n> recreating the index everything worked fine.\n\nHas that table been updated a lot in its life? If so, it may have had a \nproblem with index bloat...\n\n",
"msg_date": "Wed, 11 Feb 2004 09:02:18 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select count(*) from anIntColumn where int_value = 0;"
},
{
"msg_contents": "\n>>>Is your int_value data type int4? If not then use \"... from \n>>>job_property\n>>>where int_value = '0'\"\n>>>Indexes are used only if datatypes matches.\n>>>\n>>\n>>tried those variations already. Strange enough, after dropping and \n>>recreating the index everything worked fine.\n> \n> \n> Has that table been updated a lot in its life? If so, it may have had a \n> problem with index bloat...\n\nTry creating a partial index: create index blah on tablw where int_value=0;\n\nChris\n\n",
"msg_date": "Thu, 12 Feb 2004 10:01:49 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select count(*) from anIntColumn where int_value ="
}
] |
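David's report that dropping and recreating the index cured the slow count fits scott.marlowe's index-bloat guess: on the 7.x releases a heavily updated table can leave a btree badly bloated, and rebuilding it is the usual cure. A sketch follows; the index name is hypothetical, so substitute the real one.

    REINDEX INDEX job_property_int_value_idx;
    -- or, equivalently, drop and recreate it:
    -- DROP INDEX job_property_int_value_idx;
    -- CREATE INDEX job_property_int_value_idx ON job_property (int_value);
    ANALYZE job_property;
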
[
{
"msg_contents": "hello\ni have postgres 7.3.2.,linux redhat 9.0\na database,and 20 tables\na lot of fields are char(x)\nwhen i have to make update for all the fields except index\npostgres works verry hard\nwhat should i've changed in configuration to make it work faster\n\nthanks\nbogdan\n\n",
"msg_date": "Wed, 11 Feb 2004 16:08:07 +0200",
"msg_from": "stefan bogdan <[email protected]>",
"msg_from_op": true,
"msg_subject": "update performance"
},
{
"msg_contents": "On Wednesday 11 February 2004 14:08, stefan bogdan wrote:\n> hello\n> i have postgres 7.3.2.,linux redhat 9.0\n> a database,and 20 tables\n> a lot of fields are char(x)\n> when i have to make update for all the fields except index\n> postgres works verry hard\n> what should i've changed in configuration to make it work faster\n\nStefan - we need more information to help you. We'll want to know:\n1. The query being run\n2. EXPLAIN ANALYSE ... results for that query\n3. The size of the tables involved.\n4. That the tables have been VACCUM ANALYSE'd\n\nPS - you should upgrade to 7.3.4 (or is it 7.3.5 now?)\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 11 Feb 2004 15:53:57 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update performance"
},
{
"msg_contents": "On Wed, 11 Feb 2004, stefan bogdan wrote:\n\n> hello\n> i have postgres 7.3.2.,linux redhat 9.0\n> a database,and 20 tables\n> a lot of fields are char(x)\n> when i have to make update for all the fields except index\n> postgres works verry hard\n> what should i've changed in configuration to make it work faster\n\n1: Upgrate to 7.3.5, (or 7.4.1 if you're feeling adventurous)\n2: Read this: \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n",
"msg_date": "Wed, 11 Feb 2004 09:10:53 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update performance"
},
{
"msg_contents": "On Wed, 11 Feb 2004, scott.marlowe wrote:\n\n> On Wed, 11 Feb 2004, stefan bogdan wrote:\n> \n> > hello\n> > i have postgres 7.3.2.,linux redhat 9.0\n> > a database,and 20 tables\n> > a lot of fields are char(x)\n> > when i have to make update for all the fields except index\n> > postgres works verry hard\n> > what should i've changed in configuration to make it work faster\n> \n> 1: Upgrate to 7.3.5, (or 7.4.1 if you're feeling adventurous)\n> 2: Read this: \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nAs a followup, do you have mismatched foreign keys or big ugly \nconstraints? Sometimes those can slow things down too.\n\n",
"msg_date": "Wed, 11 Feb 2004 09:36:28 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update performance"
}
] |
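scott.marlowe's follow-up about mismatched foreign keys is worth testing directly: the FK triggers fired by an UPDATE can degenerate into sequential scans if the referencing and referenced column types differ or the supporting indexes are missing. A minimal way to see where the time goes is sketched below; the table and column names are placeholders, since the schema was never posted, and note that EXPLAIN ANALYZE really executes the statement, hence the surrounding transaction.

    VACUUM ANALYZE mytable;
    BEGIN;
    EXPLAIN ANALYZE UPDATE mytable SET some_char_col = 'x' WHERE id = 42;
    ROLLBACK;
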
[
{
"msg_contents": "\n\nmy data base is very slow. The machine is a processor Xeon 2GB with\n256 MB of RAM DDR. My archive of configuration is this:\n\n================================================================\n\n#\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. If you edit the file on a running system, you have\n# to SIGHUP the postmaster for the changes to take effect, or use\n# \"pg_ctl reload\".\n#========================================================================\n\n\n#\n# Connection Parameters\n#\n#tcpip_socket = false\n#ssl = false\n\nmax_connections = 50\nsuperuser_reserved_connections = 2\n\n#port = 5432\n#hostname_lookup = false\n#show_source_port = false\n\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n\n#virtual_host = ''\n\n#krb_server_keyfile = ''\n\n#\n# Shared Memory Size\n#\nshared_buffers = 5000 # min max_connections*2 or 16, 8KB each\nmax_fsm_relations = 400 # min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 80000 # min 1000, fsm is free space map, ~6 bytes\nmax_locks_per_transaction = 128 # min 10\nwal_buffers = 4 # min 4, typically 8KB each\n\n#\n# Non-shared Memory Sizes\n#\nsort_mem = 131072 # min 64, size in KB\n#vacuum_mem = 8192 # min 1024, size in KB\n\n\n#\n# Write-ahead log (WAL)\n#\ncheckpoint_segments = 3 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 300 # range 30-3600, in seconds\n#\ncommit_delay = 0 # range 0-100000, in microseconds\ncommit_siblings = 5 # range 1-1000\n#\nfsync = false\nwal_sync_method = fdatasync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or open_datasync\nwal_debug = 0 # range 0-16\n\n\n#\n# Optimizer Parameters\n#\nenable_seqscan = false\nenable_indexscan = false\nenable_tidscan = false\nenable_sort = false\nenable_nestloop = false\nenable_mergejoin = false\nenable_hashjoin = false\n\neffective_cache_size = 170000 # typically 8KB each\nrandom_page_cost = 1000000000 # units are one sequential page fetch cost\ncpu_tuple_cost = 0.3 # (same)\ncpu_index_tuple_cost = 0.6 # (same)\ncpu_operator_cost = 0.7 # (same)\n\ndefault_statistics_target = 1 # range 1-1000\n\n#\n# GEQO Optimizer Parameters\n#\ngeqo = true\ngeqo_selection_bias = 2.0 # range 1.5-2.0\ngeqo_threshold = 2000\ngeqo_pool_size = 1024 # default based on tables in statement,\n # range 128-1024\ngeqo_effort = 1\ngeqo_generations = 0\ngeqo_random_seed = -1 # auto-compute seed\n\n\n#\n# Message display\n#\nserver_min_messages = fatal # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, fatal,\n # panic\nclient_min_messages = fatal # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\nsilent_mode = false\n\nlog_connections = false\nlog_pid = false\nlog_statement = false\nlog_duration = 
false\nlog_timestamp = false\n\n#log_min_error_statement = error # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, panic(off)\n\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n\n#explain_pretty_print = true\n\n# requires USE_ASSERT_CHECKING\n#debug_assertions = true\n\n\n#\n# Syslog\n#\n#syslog = 0 # range 0-2\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n#\n# Statistics\n#\n#show_parser_stats = false\n#show_planner_stats = false\n#show_executor_stats = false\n#show_statement_stats = false\n\n# requires BTREE_BUILD_STATS\n#show_btree_build_stats = false\n\n\n#\n# Access statistics collection\n#\n#stats_start_collector = true\n#stats_reset_on_server_start = true\n#stats_command_string = false\n#stats_row_level = false\n#stats_block_level = false\n#\n# Lock Tracing\n#\n#trace_notify = false\n\n# requires LOCK_DEBUG\n#trace_locks = false\n#trace_userlocks = false\n#trace_lwlocks = false\n#debug_deadlocks = false\n#trace_lock_oidmin = 16384\n#trace_lock_table = 0\n\n\n#\n# Misc\n#\n#autocommit = true\n#dynamic_library_path = '$libdir'\n#search_path = '$user,public'\ndatestyle = 'iso, us'\n#timezone = unknown # actually, defaults to TZ environment setting\n#australian_timezones = false\nclient_encoding = sql_ascii # actually, defaults to database encoding\nauthentication_timeout = 1 # 1-600, in seconds\ndeadlock_timeout = 100 # in milliseconds\n#default_transaction_isolation = 'read committed'\n#max_expr_depth = 1000 # min 10\n#max_files_per_process = 1000 # min 25\n#password_encryption = true\n#sql_inheritance = true\n#transform_null_equals = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\ndb_user_namespace = false\n\n\n\n#\n# Locale settings\n#\n# (initialized by initdb -- may be changed)\nLC_MESSAGES = 'en_US'\nLC_MONETARY = 'en_US'\nLC_NUMERIC = 'en_US'\nLC_TIME = 'en_US'\n\n\n================================================================\n\n\nsomebody please knows to give tips to me to increase the\nperformance\n",
"msg_date": "Wed, 11 Feb 2004 12:23:31 -0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "slow database"
},
{
"msg_contents": "On Wed, 11 Feb 2004 [email protected] wrote:\n\n> somebody please knows to give tips to me to increase the\n> performance\n\nRun VACUUM ANALYZE. Find one query that is slow. Run EXPLAIN ANALYZE on\nthat query. Read the plan and figure out why it is slow. Fix it.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Wed, 11 Feb 2004 15:27:09 +0100 (CET)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database"
},
{
"msg_contents": "\nOn Feb 11, 2004, at 7:23 AM, [email protected] wrote:\n\n> #\n> # Optimizer Parameters\n> #\n> enable_seqscan = false\n> enable_indexscan = false\n> enable_tidscan = false\n> enable_sort = false\n> enable_nestloop = false\n> enable_mergejoin = false\n> enable_hashjoin = false\n>\n\nWhy did you disable *every* type of query method? Try commenting all \nof these out or changing them to \"true\" instead of \"false\".\n\n--\nPC Drew\nManager, Dominet\n\nIBSN\n1600 Broadway, Suite 400\nDenver, CO 80202\n\nPhone: 303-984-4727 x107\nCell: 720-841-4543\nFax: 303-984-4730\nEmail: [email protected]\n\n",
"msg_date": "Wed, 11 Feb 2004 07:28:15 -0700",
"msg_from": "PC Drew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database"
},
{
"msg_contents": "\nOn Feb 11, 2004, at 7:23 AM, [email protected] wrote:\n\n>\n>\n> my data base is very slow. The machine is a processor Xeon 2GB with\n> 256 MB of RAM DDR. My archive of configuration is this:\n>\n\nAfter looking through the configuration some more, I would definitely \nrecommend getting rid of your current postgresql.conf file and \nreplacing it with the default. You have some very very odd settings, \nnamely:\n\nThis is dangerous, but maybe you need it:\nfsync = false\n\nYou've essentially disabled the optimizer:\nenable_seqscan = false\nenable_indexscan = false\nenable_tidscan = false\nenable_sort = false\nenable_nestloop = false\nenable_mergejoin = false\nenable_hashjoin = false\n\nWOAH, this is huge:\nrandom_page_cost = 1000000000\n\nTake a look at this page which goes through each option in the \nconfiguration file:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\n--\nPC Drew\nManager, Dominet\n\nIBSN\n1600 Broadway, Suite 400\nDenver, CO 80202\n\nPhone: 303-984-4727 x107\nCell: 720-841-4543\nFax: 303-984-4730\nEmail: [email protected]\n\n",
"msg_date": "Wed, 11 Feb 2004 07:44:42 -0700",
"msg_from": "PC Drew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database"
},
{
"msg_contents": "On Wed, 2004-02-11 at 09:23, [email protected] wrote:\n> my data base is very slow. The machine is a processor Xeon 2GB with\n> 256 MB of RAM DDR. My archive of configuration is this:\n\nI'm not surprised. New values below old.\n\n\n> sort_mem = 131072 # min 64, size in KB\n\nsort_mem = 8192.\n\n> fsync = false\n\nAre you aware of the potential for data corruption during a hardware,\npower or software failure?\n\n> enable_seqscan = false\n> enable_indexscan = false\n> enable_tidscan = false\n> enable_sort = false\n> enable_nestloop = false\n> enable_mergejoin = false\n> enable_hashjoin = false\n\nYou want all of these set to true, not false.\n\n> effective_cache_size = 170000 # typically 8KB each\n\neffective_cache_size = 16384.\n\n> random_page_cost = 1000000000 # units are one sequential page fetch cost\n\nrandom_page_cost = 3\n\n> cpu_tuple_cost = 0.3 # (same)\n\ncpu_tuple_cost = 0.01\n\n> cpu_index_tuple_cost = 0.6 # (same)\n\ncpu_index_tuple_cost = 0.001\n\n> cpu_operator_cost = 0.7 # (same)\n\ncpu_operator_cost = 0.0025\n\n> default_statistics_target = 1 # range 1-1000\n\ndefault_statistics_target = 10\n\n\n",
"msg_date": "Wed, 11 Feb 2004 09:49:26 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database"
},
{
"msg_contents": "[email protected] wrote:\n\n>my data base is very slow. The machine is a processor Xeon 2GB with\n>256 MB of RAM DDR. My archive of configuration is this:\n> \n>\n\nThis is a joke, right?\n\nchris\n",
"msg_date": "Wed, 11 Feb 2004 10:36:00 -0500",
"msg_from": "Chris Trawick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database"
},
{
"msg_contents": "[email protected] writes:\n> my data base is very slow. The machine is a processor Xeon 2GB with\n> 256 MB of RAM DDR. My archive of configuration is this:\n\n> sort_mem = 131072 # min 64, size in KB\n> #vacuum_mem = 8192 # min 1024, size in KB\n\nChange it back to 8192, or perhaps even less. This large value is\nprobably causing swapping, because it leads to every sort trying to\nuse 1073741824 bytes of memory, which is considerably more than you\nhave.\n\n> fsync = false\n> wal_sync_method = fdatasync # the default varies across platforms:\n\nI presume that you are aware that you have chosen the value that\nleaves your data vulnerable to corruption? I wouldn't set this to false...\n\n> enable_seqscan = false\n> enable_indexscan = false\n> enable_tidscan = false\n> enable_sort = false\n> enable_nestloop = false\n> enable_mergejoin = false\n> enable_hashjoin = false\n\nWas there some reason why you wanted to disable every query\noptimization strategy that can be disabled? If you're looking to get\nslow queries, this would accomplish that nicely.\n\n> effective_cache_size = 170000 # typically 8KB each\n> random_page_cost = 1000000000 # units are one sequential page fetch cost\n> cpu_tuple_cost = 0.3 # (same)\n> cpu_index_tuple_cost = 0.6 # (same)\n> cpu_operator_cost = 0.7 # (same)\n\nWhere did you get those numbers? The random_page_cost alone will\nprobably force every query to do seq scans, ignoring indexes, and is\n_really_ nonsensical. The other values seem way off.\n\n> default_statistics_target = 1 # range 1-1000\n\n... Apparently it didn't suffice to try to disable query optimization,\nand modify the cost parameters into nonsense; it was also \"needful\" to\ntell the statistics analyzer to virtually eliminate statistics\ncollection.\n\nIf you want a value other than 10, then pick a value slightly LARGER than 10.\n\n> somebody please knows to give tips to me to increase the performance\n\nDelete the postgresql.conf file, create a new database using initdb,\nand take the file produced by _that_, and replace with that one. The\ndefault values, while not necessarily perfect, are likely to be 100x\nbetter than what you have got.\n\nWas this the result of someone trying to tune the database for some\nsort of anti-benchmark?\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in name ^ \"@\" ^ tld;;\nhttp://www.ntlug.org/~cbbrowne/rdbms.html\nRules of the Evil Overlord #179. \"I will not outsource core\nfunctions.\" <http://www.eviloverlord.com/>\n",
"msg_date": "Wed, 11 Feb 2004 10:39:19 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database"
},
{
"msg_contents": "\nthe normal queries do not present problems, but all the ones\nthat join has are very slow.\n\nOBS: I am using way ODBC. He will be that they exist some\nconfiguration specifies inside of the same bank or in the ODBC?\n\n\nQuoting Rod Taylor <[email protected]>:\n\n> On Wed, 2004-02-11 at 09:23, [email protected] wrote:\n> > my data base is very slow. The machine is a processor Xeon 2GB with\n> > 256 MB of RAM DDR. My archive of configuration is this:\n> \n> I'm not surprised. New values below old.\n> \n> \n> > sort_mem = 131072 # min 64, size in KB\n> \n> sort_mem = 8192.\n> \n> > fsync = false\n> \n> Are you aware of the potential for data corruption during a hardware,\n> power or software failure?\n> \n> > enable_seqscan = false\n> > enable_indexscan = false\n> > enable_tidscan = false\n> > enable_sort = false\n> > enable_nestloop = false\n> > enable_mergejoin = false\n> > enable_hashjoin = false\n> \n> You want all of these set to true, not false.\n> \n> > effective_cache_size = 170000 # typically 8KB each\n> \n> effective_cache_size = 16384.\n> \n> > random_page_cost = 1000000000 # units are one sequential page fetch cost\n> \n> random_page_cost = 3\n> \n> > cpu_tuple_cost = 0.3 # (same)\n> \n> cpu_tuple_cost = 0.01\n> \n> > cpu_index_tuple_cost = 0.6 # (same)\n> \n> cpu_index_tuple_cost = 0.001\n> \n> > cpu_operator_cost = 0.7 # (same)\n> \n> cpu_operator_cost = 0.0025\n> \n> > default_statistics_target = 1 # range 1-1000\n> \n> default_statistics_target = 10\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n\n",
"msg_date": "Wed, 11 Feb 2004 14:16:42 -0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: slow database"
},
{
"msg_contents": "If my boss came to me and asked me to make my database server run as \nslowly as possible, I might come up with the exact same postgresql.conf \nfile as what you posted.\n\nJust installing the default postgresql.conf that came with postgresql \nshould make this machine run faster.\n\nRead this:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n",
"msg_date": "Wed, 11 Feb 2004 09:19:01 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database"
},
{
"msg_contents": "[email protected] writes:\n> the normal queries do not present problems, but all the ones\n> that join has are very slow.\n\nNo surprise, as you've disabled all but the stupidest join algorithm...\n\nAs others already pointed out, you'd be a lot better off with the\ndefault configuration settings than with this set.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Feb 2004 12:00:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database "
},
{
"msg_contents": "I already came back the old conditions and I continue slow in the same\nway!\n\nQuoting Tom Lane <[email protected]>:\n\n> [email protected] writes:\n> > the normal queries do not present problems, but all the ones\n> > that join has are very slow.\n> \n> No surprise, as you've disabled all but the stupidest join algorithm...\n> \n> As others already pointed out, you'd be a lot better off with the\n> default configuration settings than with this set.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n> \n\n\n\n",
"msg_date": "Wed, 11 Feb 2004 15:15:13 -0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: slow database "
},
{
"msg_contents": "On Wed, 11 Feb 2004 [email protected] wrote:\n\n> I already came back the old conditions and I continue slow in the same\n> way!\n\nOK, we need some things from you to help troubleshoot this problem.\n\nPostgresql version\nschema of your tables\noutput of \"explain analyze your query here\"\na chicken foot (haha, just kidding. :-)\n\n",
"msg_date": "Wed, 11 Feb 2004 10:48:20 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database "
},
{
"msg_contents": "On Wed, 2004-02-11 at 12:15, [email protected] wrote:\n> I already came back the old conditions and I continue slow in the same\n> way!\n\nDumb question, but did you restart the database after changing the\nconfig file?\n\n",
"msg_date": "Wed, 11 Feb 2004 12:55:21 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database"
},
{
"msg_contents": "the version is 7.3.2 in a connective 9.\nthe hen foot is without nails at the moment: =)\n |\n/|\\\n\nthis is a principal table of system:\n\nCREATE TABLE public.compra_prod_forn\n(\n nu_seq_prod_forn numeric(12) NOT NULL,\n cd_fabricante numeric(6),\n cd_moeda numeric(4) NOT NULL,\n cd_compra numeric(12) NOT NULL,\n cd_produto numeric(9),\n cd_fornecedor numeric(6),\n cd_incotermes numeric(3) NOT NULL,\n qtde_compra numeric(12,3),\n perc_comissao_holding numeric(5,2),\n vl_cotacao_unit_negociacao numeric(20,3),\n dt_retorno date,\n cd_status_cv numeric(3) NOT NULL,\n cd_usuario numeric(6) NOT NULL,\n tp_comissao varchar(25),\n vl_pif numeric(9,2),\n cd_fornecedor_contato numeric(6),\n cd_contato numeric(6),\n cd_un_peso varchar(20),\n vl_currier numeric(9,2),\n cd_iqf numeric(3),\n cd_un_peso_vl_unit varchar(10),\n dt_def_fornecedor date,\n vl_cotacao_unit_forn numeric(20,3),\n vl_cotacao_unit_local numeric(20,3),\n tp_vl_cotacao_unit numeric(1),\n cd_moeda_forn numeric(4),\n cd_moeda_local numeric(4),\n vl_cotacao_unit numeric(20,3),\n peso_bruto_emb varchar(20),\n id_fax numeric(1),\n id_email numeric(1),\n fob varchar(40),\n origem varchar(40),\n tipo_comissao varchar(40),\n descr_fabricante_select varchar(200),\n farmacopeia varchar(100),\n vl_frete numeric(10,3),\n descr_abandono_representada varchar(2000),\n descr_abandono_interno varchar(2000),\n vl_frete_unit numeric(10,3),\n CONSTRAINT compra_prod_forn_pkey PRIMARY KEY (cd_compra, nu_seq_prod_forn),\n CONSTRAINT \"$1\" FOREIGN KEY (cd_moeda_local) REFERENCES public.moeda \n(cd_moeda) ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$10\" FOREIGN KEY (cd_usuario) REFERENCES public.usuario_sistema \n(cd_usuario) ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$11\" FOREIGN KEY (cd_status_cv) REFERENCES \npublic.status_compra_venda (cd_status_cv) ON UPDATE NO ACTION ON DELETE NO \nACTION,\n CONSTRAINT \"$12\" FOREIGN KEY (cd_moeda) REFERENCES public.moeda (cd_moeda) ON \nUPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$2\" FOREIGN KEY (cd_moeda_forn) REFERENCES public.moeda \n(cd_moeda) ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$3\" FOREIGN KEY (cd_un_peso_vl_unit) REFERENCES \npublic.unidades_peso (cd_un_peso) ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$4\" FOREIGN KEY (cd_un_peso) REFERENCES public.unidades_peso \n(cd_un_peso) ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$5\" FOREIGN KEY (cd_fornecedor_contato, cd_contato) REFERENCES \npublic.fornecedor_contato (cd_fornecedor, cd_contato) ON UPDATE NO ACTION ON \nDELETE NO ACTION,\n CONSTRAINT \"$6\" FOREIGN KEY (cd_fabricante) REFERENCES public.fabricante \n(cd_fabricante) ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$7\" FOREIGN KEY (cd_produto, cd_fornecedor) REFERENCES \npublic.fornecedor_produto (cd_produto, cd_fornecedor) ON UPDATE NO ACTION ON \nDELETE NO ACTION,\n CONSTRAINT \"$8\" FOREIGN KEY (cd_incotermes) REFERENCES public.incotermes \n(cd_incotermes) ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$9\" FOREIGN KEY (cd_compra) REFERENCES public.compra (cd_compra) \nON UPDATE NO ACTION ON DELETE CASCADE\n) WITH OIDS;\n\n\n\nQuoting \"scott.marlowe\" <[email protected]>:\n\n> On Wed, 11 Feb 2004 [email protected] wrote:\n> \n> > I already came back the old conditions and I continue slow in the same\n> > way!\n> \n> OK, we need some things from you to help troubleshoot this problem.\n> \n> Postgresql version\n> schema of your tables\n> output of \"explain analyze your query 
here\"\n> a chicken foot (haha, just kidding. :-)\n> \n> \n\n\n\n",
"msg_date": "Wed, 11 Feb 2004 16:18:35 -0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: slow database "
},
{
"msg_contents": "First thing I would check is to make sure all those foreign keys are the \nsame type.\n\nSecond, make sure you've got indexes to go with them. I.e. on a multi-key \nfk, have a multi-key index.\n\n\nOn Wed, 11 Feb 2004 [email protected] wrote:\n\n> the version is 7.3.2 in a connective 9.\n> the hen foot is without nails at the moment: =)\n> |\n> /|\\\n> \n> this is a principal table of system:\n> \n> CREATE TABLE public.compra_prod_forn\n> (\n> nu_seq_prod_forn numeric(12) NOT NULL,\n> cd_fabricante numeric(6),\n> cd_moeda numeric(4) NOT NULL,\n> cd_compra numeric(12) NOT NULL,\n> cd_produto numeric(9),\n> cd_fornecedor numeric(6),\n> cd_incotermes numeric(3) NOT NULL,\n> qtde_compra numeric(12,3),\n> perc_comissao_holding numeric(5,2),\n> vl_cotacao_unit_negociacao numeric(20,3),\n> dt_retorno date,\n> cd_status_cv numeric(3) NOT NULL,\n> cd_usuario numeric(6) NOT NULL,\n> tp_comissao varchar(25),\n> vl_pif numeric(9,2),\n> cd_fornecedor_contato numeric(6),\n> cd_contato numeric(6),\n> cd_un_peso varchar(20),\n> vl_currier numeric(9,2),\n> cd_iqf numeric(3),\n> cd_un_peso_vl_unit varchar(10),\n> dt_def_fornecedor date,\n> vl_cotacao_unit_forn numeric(20,3),\n> vl_cotacao_unit_local numeric(20,3),\n> tp_vl_cotacao_unit numeric(1),\n> cd_moeda_forn numeric(4),\n> cd_moeda_local numeric(4),\n> vl_cotacao_unit numeric(20,3),\n> peso_bruto_emb varchar(20),\n> id_fax numeric(1),\n> id_email numeric(1),\n> fob varchar(40),\n> origem varchar(40),\n> tipo_comissao varchar(40),\n> descr_fabricante_select varchar(200),\n> farmacopeia varchar(100),\n> vl_frete numeric(10,3),\n> descr_abandono_representada varchar(2000),\n> descr_abandono_interno varchar(2000),\n> vl_frete_unit numeric(10,3),\n> CONSTRAINT compra_prod_forn_pkey PRIMARY KEY (cd_compra, nu_seq_prod_forn),\n> CONSTRAINT \"$1\" FOREIGN KEY (cd_moeda_local) REFERENCES public.moeda \n> (cd_moeda) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"$10\" FOREIGN KEY (cd_usuario) REFERENCES public.usuario_sistema \n> (cd_usuario) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"$11\" FOREIGN KEY (cd_status_cv) REFERENCES \n> public.status_compra_venda (cd_status_cv) ON UPDATE NO ACTION ON DELETE NO \n> ACTION,\n> CONSTRAINT \"$12\" FOREIGN KEY (cd_moeda) REFERENCES public.moeda (cd_moeda) ON \n> UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"$2\" FOREIGN KEY (cd_moeda_forn) REFERENCES public.moeda \n> (cd_moeda) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"$3\" FOREIGN KEY (cd_un_peso_vl_unit) REFERENCES \n> public.unidades_peso (cd_un_peso) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"$4\" FOREIGN KEY (cd_un_peso) REFERENCES public.unidades_peso \n> (cd_un_peso) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"$5\" FOREIGN KEY (cd_fornecedor_contato, cd_contato) REFERENCES \n> public.fornecedor_contato (cd_fornecedor, cd_contato) ON UPDATE NO ACTION ON \n> DELETE NO ACTION,\n> CONSTRAINT \"$6\" FOREIGN KEY (cd_fabricante) REFERENCES public.fabricante \n> (cd_fabricante) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"$7\" FOREIGN KEY (cd_produto, cd_fornecedor) REFERENCES \n> public.fornecedor_produto (cd_produto, cd_fornecedor) ON UPDATE NO ACTION ON \n> DELETE NO ACTION,\n> CONSTRAINT \"$8\" FOREIGN KEY (cd_incotermes) REFERENCES public.incotermes \n> (cd_incotermes) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"$9\" FOREIGN KEY (cd_compra) REFERENCES public.compra (cd_compra) \n> ON UPDATE NO ACTION ON DELETE CASCADE\n> ) WITH OIDS;\n> \n> \n> \n> 
Quoting \"scott.marlowe\" <[email protected]>:\n> \n> > On Wed, 11 Feb 2004 [email protected] wrote:\n> > \n> > > I already came back the old conditions and I continue slow in the same\n> > > way!\n> > \n> > OK, we need some things from you to help troubleshoot this problem.\n> > \n> > Postgresql version\n> > schema of your tables\n> > output of \"explain analyze your query here\"\n> > a chicken foot (haha, just kidding. :-)\n> > \n> > \n> \n> \n> \n> \n\n",
"msg_date": "Wed, 11 Feb 2004 11:41:18 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database "
},
{
"msg_contents": "If things are still slow after you have checked your keys as indicated, \nthen pick one query and post the output from EXPLAIN ANALYZE for the \nlist to examine.\n\noh - and ensure you are *not* still using your original postgresql.conf :-)\n\nbest wishes\n\nMark\n\nscott.marlowe wrote:\n\n>First thing I would check is to make sure all those foreign keys are the \n>same type.\n>\n>Second, make sure you've got indexes to go with them. I.e. on a multi-key \n>fk, have a multi-key index.\n>\n>\n>\n> \n>\n\n",
"msg_date": "Thu, 12 Feb 2004 14:27:44 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow database"
}
] |
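Pulling Rod Taylor's corrections into one place, a sane starting point for this 256 MB machine would look roughly like the sketch below. These are the values suggested in the thread plus re-enabling everything that was switched off, not a tuned final configuration.

    fsync = true                      # do not trade crash safety for speed
    enable_seqscan = true             # leave all the enable_* planner options on
    enable_indexscan = true
    enable_tidscan = true
    enable_sort = true
    enable_nestloop = true
    enable_mergejoin = true
    enable_hashjoin = true
    sort_mem = 8192                   # per sort, in KB
    effective_cache_size = 16384      # 8 KB pages, roughly 128 MB of expected OS cache
    random_page_cost = 3
    cpu_tuple_cost = 0.01
    cpu_index_tuple_cost = 0.001
    cpu_operator_cost = 0.0025
    default_statistics_target = 10
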
[
{
"msg_contents": "Is there a way to automatically coerce an int into a bigint for\nindexing purposes?\n\nWe have a table with a bigint column that is an index.\nFor mi, there's no problem, since I now know to say\n\tselect * from foo where id = 123::bigint\nbut our casual users still say\n\tselect * from foo where id = 123\ncausing a sequential scan because the type of 123 is not\na bigint.\n\nAs you can see, there's nearly 4 orders of magnitude difference\nin time, and we anticipate this will only get worse as our\ntables increase in size:\n\nLOG: duration: 0.861 ms statement: select * from big where id = 123123123123123;\nLOG: duration: 6376.917 ms statement: select * from big where id = 123;\n\nOne thing I have considered is starting our id sequence at 5000000000\nso that \"real\" queries will always be bigint-sized, but this seems\nto me a bit of a hack.\n\nMany TIA,\nMark\n\n-- \nMark Harrison\nPixar Animation Studios\n\n",
"msg_date": "Wed, 11 Feb 2004 14:38:41 -0800",
"msg_from": "Mark Harrison <[email protected]>",
"msg_from_op": true,
"msg_subject": "coercing int to bigint for indexing purposes"
},
{
"msg_contents": "Mark Harrison <[email protected]> writes:\n> Is there a way to automatically coerce an int into a bigint for\n> indexing purposes?\n\nThis problem is fixed in CVS tip.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 Feb 2004 18:24:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: coercing int to bigint for indexing purposes "
}
] |
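The fix Tom refers to lets the planner use the index for such cross-type comparisons automatically; on the 7.x releases Mark is running, the workarounds are an explicit cast (as he already does) or a quoted literal. A quoted constant stays of "unknown" type long enough to be resolved against the bigint column, so the index can still be used:

    SELECT * FROM foo WHERE id = 123::bigint;   -- explicit cast
    SELECT * FROM foo WHERE id = '123';         -- quoted literal, coerced to bigint
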
[
{
"msg_contents": "Hello all. I am in the midst of porting a large web application from a \nMS SQL Server backend to PostgreSQL. The migration work is basically \ncomplete, and we're at the testing and optimization phase of the \nproject. The results so far have been disappointing, with Postgres \nperforming queries in about the same time as SQL Server even though \nPostgres is running on a dedicated box with about 4 times the clock \nspeed of the SQL Server box. For a chart of my results, please see \nhttp://leonout.com/pggraph.pdf for a graph of some test results.\n\nHere are the specs of the systems:\n\nSQL Server\nDell PowerEdge 2400\nWindows 2000 Advanced Server\nDual Pentium III 667\n2 GB Registered PC133 SDRAM\nMS SQL Server 2000 SP2 - shared database (although to be fair, this app \nis by far the heaviest)\nRAID 1 for system / RAID 5 for data (10k RPM Ultra160 SCSI drives)\n\nPostgreSQL\nDell PowerEdge 2650\nRedHat Enterprise Linux 3.1\nDual Xeon 3.06 GHz (Hyperthreading currently disabled)\n4 GB DDR SDRAM\nPostgreSQL 7.4 - dedicated to this app, with no other apps running on \nsystem\nRAID 5 (15k RPM Ultra160 SCSI drives)\n\nThe database is about 4.3 GB in size.\n\nMy postgresql.conf is as follows:\n\nmax_connections = 50\nshared_buffers = 10000 # min 16, at least max_connections*2, \n8KB each - default is 1000\nsort_mem = 2000 # min 64, size in KB - default is 1024 \n(commented out)\neffective_cache_size = 250000 # typically 8KB each - default is 1000 \n(commented out)\ngeqo = true\n\nlc_messages = 'en_US.UTF-8' # locale for system error \nmessage strings\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n\n\nI hope that someone can help with this. Thanks in advance for your help!\n\nLeon\n\n",
"msg_date": "Thu, 12 Feb 2004 13:29:49 -0500",
"msg_from": "Leon Out <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disappointing performance in db migrated from MS SQL Server"
},
{
"msg_contents": "Please run your largest (worst) queries using EXPLAIN ANALYZE and send \nin the results so we can see how the queries are being executed & \noptimized.\n\nOn Feb 12, 2004, at 11:29 AM, Leon Out wrote:\n\n> Hello all. I am in the midst of porting a large web application from a \n> MS SQL Server backend to PostgreSQL. The migration work is basically \n> complete, and we're at the testing and optimization phase of the \n> project. The results so far have been disappointing, with Postgres \n> performing queries in about the same time as SQL Server even though \n> Postgres is running on a dedicated box with about 4 times the clock \n> speed of the SQL Server box. For a chart of my results, please see \n> http://leonout.com/pggraph.pdf for a graph of some test results.\n>\n\n--\nPC Drew\nManager, Dominet\n\nIBSN\n1600 Broadway, Suite 400\nDenver, CO 80202\n\nPhone: 303-984-4727 x107\nCell: 720-841-4543\nFax: 303-984-4730\nEmail: [email protected]\n\n",
"msg_date": "Thu, 12 Feb 2004 11:43:57 -0700",
"msg_from": "PC Drew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL Server"
},
{
"msg_contents": "It might be helpful to include a sample query that is running slower \nthan you expect, with the table structure, and the output of explain \n{query}.\n\nGavin\n\nLeon Out wrote:\n\n> Hello all. I am in the midst of porting a large web application from a \n> MS SQL Server backend to PostgreSQL. The migration work is basically \n> complete, and we're at the testing and optimization phase of the \n> project. The results so far have been disappointing, with Postgres \n> performing queries in about the same time as SQL Server even though \n> Postgres is running on a dedicated box with about 4 times the clock \n> speed of the SQL Server box. For a chart of my results, please see \n> http://leonout.com/pggraph.pdf for a graph of some test results.\n>\n> Here are the specs of the systems:\n>\n> SQL Server\n> Dell PowerEdge 2400\n> Windows 2000 Advanced Server\n> Dual Pentium III 667\n> 2 GB Registered PC133 SDRAM\n> MS SQL Server 2000 SP2 - shared database (although to be fair, this \n> app is by far the heaviest)\n> RAID 1 for system / RAID 5 for data (10k RPM Ultra160 SCSI drives)\n>\n> PostgreSQL\n> Dell PowerEdge 2650\n> RedHat Enterprise Linux 3.1\n> Dual Xeon 3.06 GHz (Hyperthreading currently disabled)\n> 4 GB DDR SDRAM\n> PostgreSQL 7.4 - dedicated to this app, with no other apps running on \n> system\n> RAID 5 (15k RPM Ultra160 SCSI drives)\n>\n> The database is about 4.3 GB in size.\n>\n> My postgresql.conf is as follows:\n>\n> max_connections = 50\n> shared_buffers = 10000 # min 16, at least max_connections*2, \n> 8KB each - default is 1000\n> sort_mem = 2000 # min 64, size in KB - default is 1024 \n> (commented out)\n> effective_cache_size = 250000 # typically 8KB each - default is 1000 \n> (commented out)\n> geqo = true\n>\n> lc_messages = 'en_US.UTF-8' # locale for system error \n> message strings\n> lc_monetary = 'en_US.UTF-8' # locale for monetary formatting\n> lc_numeric = 'en_US.UTF-8' # locale for number formatting\n> lc_time = 'en_US.UTF-8' # locale for time formatting\n>\n>\n> I hope that someone can help with this. Thanks in advance for your help!\n>\n> Leon\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n\n\n",
"msg_date": "Thu, 12 Feb 2004 11:03:06 -0800",
"msg_from": "\"Gavin M. Roy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL"
},
{
"msg_contents": "Leon Out wrote:\n> Hello all. I am in the midst of porting a large web application from a \n> MS SQL Server backend to PostgreSQL. The migration work is basically \n> complete, and we're at the testing and optimization phase of the \n> project. The results so far have been disappointing, with Postgres \n> performing queries in about the same time as SQL Server even though \n> Postgres is running on a dedicated box with about 4 times the clock \n> speed of the SQL Server box. For a chart of my results, please see \n> http://leonout.com/pggraph.pdf for a graph of some test results.\n\nMy only guess is that the tests are I/O bound and therefore the faster\nCPU's aren't helping PostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 12 Feb 2004 14:05:33 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL Server"
},
{
"msg_contents": "On Thu, 12 Feb 2004, Leon Out wrote:\n\n> Hello all. I am in the midst of porting a large web application from a \n> MS SQL Server backend to PostgreSQL. The migration work is basically \n> complete, and we're at the testing and optimization phase of the \n> project. The results so far have been disappointing, with Postgres \n> performing queries in about the same time as SQL Server even though \n> Postgres is running on a dedicated box with about 4 times the clock \n> speed of the SQL Server box. For a chart of my results, please see \n> http://leonout.com/pggraph.pdf for a graph of some test results.\n\nA couple of things. One, CPU speed is about number 5 in the list of \nthings that make a database fast. Drive subsystem (number of drivers, \ncontroller, RAID cache), memory speed, memory size, and proper database \ntuning are all significantly more important thatn the CPU speed.\n\nOur old server was a dual PIII-750 with 1.5 gig ram (PC133) and it ran \nabout 85% as fast as our brand spanking new Dell 2650 dual 2800MHz box \nwith 2 gig ram. They both had the same basic drive subsystem, by the way.\n\nUsing a battery backed RAID controller (the lsi megaraid one, not the \nadaptect, as it's not very fast) made the biggest difference. With that \nthrown in we got about double the speed on the new box as the old one.\n\nHave you read the tuning docs on varlena?\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nIt's a must read.\n\n> Here are the specs of the systems:\n> \n> SQL Server\n> Dell PowerEdge 2400\n> Windows 2000 Advanced Server\n> Dual Pentium III 667\n> 2 GB Registered PC133 SDRAM\n> MS SQL Server 2000 SP2 - shared database (although to be fair, this app \n> is by far the heaviest)\n> RAID 1 for system / RAID 5 for data (10k RPM Ultra160 SCSI drives)\n> \n> PostgreSQL\n> Dell PowerEdge 2650\n> RedHat Enterprise Linux 3.1\n> Dual Xeon 3.06 GHz (Hyperthreading currently disabled)\n> 4 GB DDR SDRAM\n> PostgreSQL 7.4 - dedicated to this app, with no other apps running on \n> system\n> RAID 5 (15k RPM Ultra160 SCSI drives)\n> \n> The database is about 4.3 GB in size.\n> \n> My postgresql.conf is as follows:\n> \n> max_connections = 50\n> shared_buffers = 10000 # min 16, at least max_connections*2, \n> 8KB each - default is 1000\n> sort_mem = 2000 # min 64, size in KB - default is 1024 \n> (commented out)\n> effective_cache_size = 250000 # typically 8KB each - default is 1000 \n> (commented out)\n\nI'm gonna guess that you could use a larger sort_mem (at least 8 meg, no \nmore than 32 meg is usually a good range. With 4 gigs of ram, you can \nprobably go to 64 or 128 meg if you only handle a hand full of clients at \nat time, but sort_mem is per sort, so be careful cranking it up too fast, \nas you'll throwh the server into a swap storm. I.e. setting sort_mem high \nis a foot gun.\n\nYour effective cache size should likely be at LEAST a setting that \nrepresents 2 gigs, maybe more. It's measured in blocks, so unless you've \nchanged your block size from 8k, that would be: 250000\n\nWhat are your query settings for random_page_cost, and cpu*cost settings?\n\nIt's likely a good idea to drop your random page cost to close to 1, as \nwith this much memory, most of your data will find itself in memory.\n\n10000 is probably plenty for shared_buffers. You might try setting it \nhigher to see if it helps, but I'm doubting it will.\n\nBut more important, WHAT are you doing that's slow? Matching text, \nforeign keys, triggers, stored procedures? 
Use explain analyze on the the \nslow / mediocre queries and we can help a bit.\n\n",
"msg_date": "Thu, 12 Feb 2004 12:15:16 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL"
},
{
"msg_contents": "Leon,\n\n> Hello all. I am in the midst of porting a large web application from a \n> MS SQL Server backend to PostgreSQL. The migration work is basically \n> complete, and we're at the testing and optimization phase of the \n> project. The results so far have been disappointing, with Postgres \n> performing queries in about the same time as SQL Server even though \n> Postgres is running on a dedicated box with about 4 times the clock \n> speed of the SQL Server box. For a chart of my results, please see \n> http://leonout.com/pggraph.pdf for a graph of some test results.\n\nYour settings look ok to start, but we'll probably want to tune them further. \nCan you post some details of the tests? Include:\n\n1) the query\n2) the EXPLAIN ANALYZE results of the query\n3) Whether you ran the test as the only connection, or whether you tested \nmulti-user load.\n\nThe last is fairly important for a SQL Server vs. PostgreSQL test; SQL Server \nis basically a single-user-database, so like MySQL it appears very fast until \nyou get a bunch o' users on it.\n\nFinally, for most queries the disk I/O and the RAM are more important than the \nCPU clock speed. From the looks of it, you upgraded the CPU + RAM, but did \ndowngraded the disk array as far as database writes are concered; not a \nterrible effective way to gain performance on your hardware.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 12 Feb 2004 12:26:33 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL Server"
},
{
"msg_contents": "Bruce,\n\nmy bet is on the limited amount of shared memory. The setup as posted by Leon \nonly shows 80 MB. On a 4 GB database, that's not all that much. Depending on \nwhat he's doing, this might be a bottleneck. I don't like the virtual memory \nstrategy of Linux too much and would rather increase this to 1 - 2 GB for the \nPostgres DB - Specially since he's not running anything else on the machine \nand he has 4 GB to play with.\n\nOn Thursday 12 February 2004 14:05, Bruce Momjian wrote:\n> Leon Out wrote:\n[snip]\n>\n> My only guess is that the tests are I/O bound and therefore the faster\n> CPU's aren't helping PostgreSQL.\n\n",
"msg_date": "Thu, 12 Feb 2004 17:19:27 -0500",
"msg_from": "Chris Ruprecht <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL Server"
},
{
"msg_contents": "On Thu, Feb 12, 2004 at 05:19:27PM -0500, Chris Ruprecht wrote:\n\n> what he's doing, this might be a bottleneck. I don't like the virtual memory \n> strategy of Linux too much and would rather increase this to 1 - 2 GB for the \n> Postgres DB - Specially since he's not running anything else on the machine \n> and he has 4 GB to play with.\n\nHave you ever had luck with 2G of shared memory?\n\nWhen I have tried that, the system is very fast initially, and\ngradually slows to molasses-like speed. My hypothesis is that the\ncache-lookup logic isn't that smart, and so is inefficient either\nwhen using the cache or when doing cache maintenance.\n\nA\n\n-- \nAndrew Sullivan \n",
"msg_date": "Fri, 13 Feb 2004 07:38:57 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL Server"
},
{
"msg_contents": "All, thanks for your suggestions. I've tweaked my configuration, and I \nthink I've squeezed a little more performance out of the setup. I also \ntried running several tests simultaneously against postgres and SQL \nServer, and postgres did much better with the heavy load.\n\nMy new settings are:\nmax_connections = 50\nshared_buffers = 120000 # min 16, at least max_connections*2, \n8KB each - default is 1000\nsort_mem = 8000 # min 64, size in KB - default is 1024 \n(commented out)\neffective_cache_size = 375000 # typically 8KB each - default is 1000 \n(commented out)\nrandom_page_cost = 1 # units are one sequential page fetch \ncost - default is 4 (commented out)\ngeqo = true\n\n\nJosh, the disks in the new system should be substantially faster than \nthe old. Both are Ultra160 SCSI RAID 5 arrays, but the new system has \n15k RPM disks, as opposed to the 10k RPM disks in the old system.\n\nOn Feb 12, 2004, at 3:26 PM, Josh Berkus wrote:\n\n> Leon,\n>\n>> Hello all. I am in the midst of porting a large web application from a\n>> MS SQL Server backend to PostgreSQL. The migration work is basically\n>> complete, and we're at the testing and optimization phase of the\n>> project. The results so far have been disappointing, with Postgres\n>> performing queries in about the same time as SQL Server even though\n>> Postgres is running on a dedicated box with about 4 times the clock\n>> speed of the SQL Server box. For a chart of my results, please see\n>> http://leonout.com/pggraph.pdf for a graph of some test results.\n>\n> Your settings look ok to start, but we'll probably want to tune them \n> further.\n> Can you post some details of the tests? Include:\n>\n> 1) the query\n> 2) the EXPLAIN ANALYZE results of the query\n> 3) Whether you ran the test as the only connection, or whether you \n> tested\n> multi-user load.\n>\n> The last is fairly important for a SQL Server vs. PostgreSQL test; SQL \n> Server\n> is basically a single-user-database, so like MySQL it appears very \n> fast until\n> you get a bunch o' users on it.\n>\n> Finally, for most queries the disk I/O and the RAM are more important \n> than the\n> CPU clock speed. From the looks of it, you upgraded the CPU + RAM, \n> but did\n> downgraded the disk array as far as database writes are concered; not a\n> terrible effective way to gain performance on your hardware.\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n\n",
"msg_date": "Fri, 13 Feb 2004 15:56:52 -0500",
"msg_from": "Leon Out <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL Server"
},
{
"msg_contents": "> Josh, the disks in the new system should be substantially faster than\n> the old. Both are Ultra160 SCSI RAID 5 arrays, but the new system has\n> 15k RPM disks, as opposed to the 10k RPM disks in the old system.\n\nSpindle speed does not correlate with 'throughput' in any easy way. What\ncontrollers are you using for these disks?\n",
"msg_date": "Fri, 13 Feb 2004 22:00:14 -0000 (GMT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL "
},
{
"msg_contents": ">>>>> \"LO\" == Leon Out <[email protected]> writes:\n\nLO> Josh, the disks in the new system should be substantially faster than\nLO> the old. Both are Ultra160 SCSI RAID 5 arrays, but the new system has\nLO> 15k RPM disks, as opposed to the 10k RPM disks in the old system.\n\nIf you've got the time, try making your 5 disk array into a RAID10\nplus one spare. I found that with that few disks, RAID10 was a better\nperformer for an even mix of read/write to the DB.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 13 Feb 2004 17:01:42 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL Server"
},
{
"msg_contents": ">>>>> \"LO\" == Leon Out <[email protected]> writes:\n\nLO> project. The results so far have been disappointing, with Postgres\nLO> performing queries in about the same time as SQL Server even though\nLO> Postgres is running on a dedicated box with about 4 times the clock\nLO> speed of the SQL Server box. For a chart of my results, please see\nLO> http://leonout.com/pggraph.pdf for a graph of some test results.\n\nAre you using transactions liberally? If you have large groups of\ninserts/updates, putting them inside transactions buys you a lot of\nimprovement by batching the writes to the WAL.\n\nAlso, increase your checkpoint_segments if you do a lot of writes.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 13 Feb 2004 17:03:30 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL Server"
},
{
"msg_contents": "On Fri, 13 Feb 2004 [email protected] wrote:\n\n> > Josh, the disks in the new system should be substantially faster than\n> > the old. Both are Ultra160 SCSI RAID 5 arrays, but the new system has\n> > 15k RPM disks, as opposed to the 10k RPM disks in the old system.\n> \n> Spindle speed does not correlate with 'throughput' in any easy way. What\n> controllers are you using for these disks?\n\nThis is doubly so with a good RAID card with battery backed cache. \n\nI'd bet that 10k rpm drives on a cached array card will beat an otherwise \nequal setup with 15k rpm disks and no cache. I know that losing the cache \nslows my system down to a crawl (i.e. set it to write thru instead of \nwrite back.) comparitively speaking.\n\n",
"msg_date": "Tue, 17 Feb 2004 08:17:44 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disappointing performance in db migrated from MS SQL"
}
] |
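A minimal sketch of two pieces of advice from the thread above: wrapping batches of writes in a single transaction so the WAL is flushed per batch rather than per statement, and capturing EXPLAIN ANALYZE output for a suspect query. The table and column names here are placeholders, not taken from Leon's schema.

BEGIN;
INSERT INTO orders (order_id, total) VALUES (1, 19.95);
INSERT INTO orders (order_id, total) VALUES (2, 42.50);
-- ... the rest of the batch ...
COMMIT;

-- run the slow query once under EXPLAIN ANALYZE and post the resulting plan
EXPLAIN ANALYZE
SELECT order_id, total
FROM orders
WHERE total > 20;

For write-heavy loads the thread also suggests raising checkpoint_segments in postgresql.conf so checkpoints occur less often during bulk writes.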
[
{
"msg_contents": "Hi,\n\none of our tables has to be updated frequently, but concurrently running \nSELECT-queries must also have low latency times (it's being accessed \nthrough a web interface).\n\nI'm looking for ideas that might improve the interactive performance of \nthe system, without slowing down the updates too much. Here are the \ncharacteristics of the table and its use:\n\n- approx. 2 million rows\n\n- approx. 4-5 million rows per day are replaced in short bursts of \n1-200k rows (average ~3000 rows per update)\n\n- the table needs 6 indexes (not all indexes are used all the time, but \nkeeping them all the time slows the system down less than re-creating \nsome of them just before they're needed and dropping them afterwards)\n\n- an \"update\" means that 1-200k rows with a common value in a particular \nfield are replaced with an arbitrary number of new rows (with the same \nvalue in that field), i.e.:\n\nbegin transaction;\n delete from t where id=5;\n insert into t (id,...) values (5,...);\n ... [1-200k rows]\nend;\n\nThe problem is, that a large update of this kind can delay SELECT \nqueries running in parallel for several seconds, so the web interface \nused by several people will be unusable for a short while.\n\nCurrently, I'm using temporary tables:\n\ncreate table xyz as select * from t limit 0;\ninsert into xyz ...\n...\nbegin transaction;\ndelete from t where id=5;\ninsert into t select * from xyz;\nend;\ndrop table xyz;\n\nThis is slightly faster than inserting directly into t (and probably \nfaster than using COPY, even though using that might reduce the overall \nload on the database).\n\nWhat other possibilities are there, other than splitting up the 15 \ncolumns of that table into several smaller tables, which is something \nI'd like to avoid? Would replication help? (I doubt it, but haven't \ntried it yet...) Writing to one table (without indexes) and creating \nindexes and renaming it to the \"read table\" periodically in a double \nbuffering-like fashion wouldn't work either(?), since the views and \ntriggers would have to be re-created every time as well and other \nproblems might arise.\n\nThe postgresql.conf options are already reasonably tweaked for \nperformance(IMHO), but perhaps some settings are particularly critical:\n\nshared_buffers=100000\n(I tried many values, this seems to work well for us - 12GB RAM)\nwal_buffers=500\nsort_mem=800000\ncheckpoint_segments=16\neffective_cache_size=1000000\netc.\n\nAny help/suggestions would be greatly appreciated... Even if it's \nsomething like \"you need a faster db box, there's no other way\" ;-)\n\nRegards,\n Marinos\n",
"msg_date": "Fri, 13 Feb 2004 01:58:34 +0100",
"msg_from": "\"Marinos J. Yannikos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimization ideas for frequent, large(ish) updates in frequently\n\taccessed DB?"
},
{
"msg_contents": "Marinos,\n\n> shared_buffers=100000\n> (I tried many values, this seems to work well for us - 12GB RAM)\n> wal_buffers=500\n> sort_mem=800000\n> checkpoint_segments=16\n> effective_cache_size=1000000\n> etc.\n\n800MB for sort mem? Are you sure you typed that correctly? You must be \ncounting on not having a lot of concurrent queries. It sure will speed up \nindex updating, though!\n\nI think you might do well to experiment with using the checkpoint_delay and \ncheckpoint_sibilings settings in order to get more efficient batch processing \nof updates while selects are going on. I would also suggest increasing \ncheckpoint segments as much as your disk space will allow; I know one \nreporting database I run that does batch loads is using 128 (which is about a \ngig of disk, I think).\n\nWhat have you set max_fsm_relations and max_fsm_pages to? The latter should \nbe very high for you, like 10,000,000\n\nFor that matter, what *version* of PostgreSQL are you running?\n\nAlso, make sure that your tables get vaccuumed regularly. \n\n> Any help/suggestions would be greatly appreciated... Even if it's \n> something like \"you need a faster db box, there's no other way\" ;-)\n\nWell, a battery-backed RAID controller with a fast cache would certainly help.\n\nYou'll also be glad to know that a *lot* of the improvements in the upcoming \nPostgreSQL 7.5 are aimed at giving better peformance on large, high-activity \ndatabases like yours.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 12 Feb 2004 22:28:41 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimization ideas for frequent,\n\tlarge(ish) updates in frequently accessed DB?"
},
{
"msg_contents": "On Fri, 13 Feb 2004 01:58:34 +0100, \"Marinos J. Yannikos\"\n<[email protected]> wrote:\n>I'm looking for ideas that might improve the interactive performance of \n>the system, without slowing down the updates too much.\n\nIOW, you could accept slower updates. Did you actually try and throttle\ndown the insert rate?\n\n> Here are the \n>characteristics of the table and its use:\n>\n>- approx. 2 million rows\n\nDoesn't sound worrying. What's the min/max/average size of these rows?\nHow large is this table?\n\tSELECT relpages FROM pg_class WHERE relname='...';\n\nWhat else is in this database, how many tables, how large is the\ndatabase (du $PGDATA)?\n\n>- approx. 4-5 million rows per day are replaced in short bursts of \n>1-200k rows (average ~3000 rows per update)\n\nHow often do you VACUUM [ANALYSE]?\n\n>- the table needs 6 indexes (not all indexes are used all the time, but \n>keeping them all the time slows the system down less than re-creating \n>some of them just before they're needed and dropping them afterwards)\n\nI agree.\n\n>- an \"update\" means that 1-200k rows with a common value in a particular \n>field are replaced with an arbitrary number of new rows (with the same \n>value in that field), i.e.:\n>\n>begin transaction;\n> delete from t where id=5;\n> insert into t (id,...) values (5,...);\n> ... [1-200k rows]\n>end;\n\nThis is a wide variation in the number of rows. You told us the average\nbatch size is 3000. Is this also a *typical* batch size? And what is\nthe number of rows where you start to get the feeling that it slows down\nother sessions?\n\nWhere do the new values come from? I don't think they are typed in :-)\nDo they come from external sources or from the same database? If the\nlatter, INSERT INTO ... SELECT ... might help.\n\n>The problem is, that a large update of this kind can delay SELECT \n>queries running in parallel for several seconds, so the web interface \n>used by several people will be unusable for a short while.\n\nSilly question: By SELECT you mean pure SELECT transactions and not\nsome transaction that *mostly* reads from the database? I mean, you are\nsure your SELECT queries are slowed down and not blocked by the\n\"updates\".\n\nShow us the EXPLAIN ANALYSE output for the same SELECT, once when it is\nfast and once when it is slow. BTW, what is fast and what is slow?\n\n>Currently, I'm using temporary tables:\n> [...]\n>This is slightly faster than inserting directly into t (and probably \n>faster than using COPY, even though using that might reduce the overall \n>load on the database).\n\nYou might try using a prepared INSERT statement or COPY.\n\n>shared_buffers=100000\n>(I tried many values, this seems to work well for us - 12GB RAM)\n>wal_buffers=500\n>sort_mem=800000\n>checkpoint_segments=16\n>effective_cache_size=1000000\n\nSee Josh's comments.\n\n>Any help/suggestions would be greatly appreciated... Even if it's \n>something like \"you need a faster db box, there's no other way\" ;-)\n\nWe have to find out, what is the bottleneck. Tell us about your\nenvironment (hardware, OS, ...). Run top and/or vmstat and look for\nsignificant differences between times of normal processing and slow\nphases. Post top/vmstat output here if you need help.\n\nServus\n Manfred\n",
"msg_date": "Fri, 13 Feb 2004 10:26:16 +0100",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimization ideas for frequent,\n\tlarge(ish) updates in frequently accessed DB?"
},
{
"msg_contents": "Marinos, while you are busy answering my first set of questions :-),\nhere is an idea that might help even out resource consumption.\n\nOn Fri, 13 Feb 2004 01:58:34 +0100, \"Marinos J. Yannikos\"\n<[email protected]> wrote:\n>begin transaction;\n> delete from t where id=5;\n> insert into t (id,...) values (5,...);\n> ... [1-200k rows]\n>end;\n>\n>The problem is, that a large update of this kind can delay SELECT \n>queries running in parallel for several seconds, so the web interface \n>used by several people will be unusable for a short while.\n\n\tCREATE TABLE idmap (\n\t\tinternalid int NOT NULL PRIMARY KEY,\n\t\tvisibleid int NOT NULL,\n\t\tactive bool NOT NULL\n\t);\n\tCREATE INDEX ipmap_visible ON idmap(visibleid);\n\nPopulate this table with\n\tINSERT INTO idmap\n\tSELECT id, id, true\n\t FROM t;\n\nChange\n\tSELECT ...\n\t FROM t\n\t WHERE t.id = 5;\n\nto\n\tSELECT ...\n\t FROM t INNER JOIN idmap ON (idmap.internalid = t.id AND\n\t idmap.active)\n\t WHERE idmap.visibleid = 5;\n\nWhen you have to replace the rows in t for id=5, start by\n\n\tINSERT INTO idmap VALUES (12345, 5, false);\n\nThen repeatedly\n\tINSERT INTO t (id, ...) VALUES (12345, ...);\nat a rate as slow as you can accept. You don't have to wrap all INSERTs\ninto a single transaction, but batching together a few hundred to a few\nthousand INSERTs will improve performance.\n\nWhen all the new values are in the database, you switch to the new id in\none short transaction:\n\tBEGIN;\n\tUPDATE idmap SET active = false WHERE visibleid = 5 AND active;\n\tUPDATE idmap SET active = true WHERE internalid = 12345;\n\tCOMMIT;\n\nDo the cleanup in off-peak hours (pseudocode):\n\n\tFOR delid IN (SELECT internalid FROM idmap WHERE NOT active)\n\tBEGIN\n\t\tDELETE FROM t WHERE id = delid;\n\t\tDELETE FROM idmap WHERE internalid = delid;\n\tEND;\n\tVACUUM ANALYSE t;\n\tVACUUM ANALYSE idmap;\n\nTo prevent this cleanup from interfering with INSERTs in progress, you\nmight want to add a \"beinginserted\" flag to idmap.\n\nHTH.\nServus\n Manfred\n",
"msg_date": "Fri, 13 Feb 2004 16:21:29 +0100",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimization ideas for frequent,\n\tlarge(ish) updates in frequently accessed DB?"
},
{
"msg_contents": "On Fri, 13 Feb 2004 16:21:29 +0100, I wrote:\n>Populate this table with\n>\tINSERT INTO idmap\n>\tSELECT id, id, true\n>\t FROM t;\n\nThis should be\n\tINSERT INTO idmap\n\tSELECT DISTINCT id, id, true\n\t FROM t;\n\nServus\n Manfred\n",
"msg_date": "Fri, 13 Feb 2004 17:47:07 +0100",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimization ideas for frequent,\n\tlarge(ish) updates in frequently accessed DB?"
},
{
"msg_contents": "Josh Berkus wrote:\n\n> 800MB for sort mem? Are you sure you typed that correctly? You must be \n> counting on not having a lot of concurrent queries. It sure will speed up \n> index updating, though!\n\n800MB is correct, yes... There are usually only 10-30 postgres processes \n active (imagine 5-10 people working on the web front-end while cron \njobs access the db occasionally). Very few queries can use such large \namounts of memory for sorting, but they do exist.\n\n> I think you might do well to experiment with using the checkpoint_delay and \n> checkpoint_sibilings settings in order to get more efficient batch processing \n> of updates while selects are going on.\n[commit_*?]\n\nI thought that could improve only concurrent transactions...\n\n> What have you set max_fsm_relations and max_fsm_pages to? The latter should \n> be very high for you, like 10,000,000\n\ngood guess ;-) the former is set to 10,000 (I'm not sure how useful this \nis for those temporary tables)\n\n> For that matter, what *version* of PostgreSQL are you running?\n\n7.4.1\n\n> Also, make sure that your tables get vaccuumed regularly. \n\nThere is a noticeable difference between a properly vacuumed db (nightly \n\"vacuum full\") and a non-vacuumed one and people will start complaining \nimmediately if something goes wrong there...\n\n> Well, a battery-backed RAID controller with a fast cache would certainly help.\n\nhttp://www.lsilogic.com/products/ultra320_scsi_megaraid_storage_adapters/320x4128t.html\n(RAID-5 with 9 15k rpm drives; at a hindsight, perhaps we should have \ntried a 0+1)\n\n> You'll also be glad to know that a *lot* of the improvements in the upcoming \n> PostgreSQL 7.5 are aimed at giving better peformance on large, high-activity \n> databases like yours.\n\nThat's good to hear...\n\nRegards,\n Marinos\n",
"msg_date": "Sun, 15 Feb 2004 03:02:48 +0100",
"msg_from": "\"Marinos J. Yannikos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimization ideas for frequent, large(ish) updates"
},
{
"msg_contents": "> 800MB is correct, yes... There are usually only 10-30 postgres processes \n> active (imagine 5-10 people working on the web front-end while cron \n> jobs access the db occasionally). Very few queries can use such large \n> amounts of memory for sorting, but they do exist.\n\nBut remember that means that if you have 4 people doign 2 sorts each at \nthe same time, postgres will use 6.4GB RAM maximum. The sort_mem \nparameter means that if a sort is larger than the max, it will be done \nin disk swap.\n\nChris\n\n",
"msg_date": "Sun, 15 Feb 2004 12:51:41 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimization ideas for frequent, large(ish) updates"
},
{
"msg_contents": "\nOn Feb 14, 2004, at 9:02 PM, Marinos J. Yannikos wrote:\n\n> Josh Berkus wrote:\n>\n>> 800MB for sort mem? Are you sure you typed that correctly? You \n>> must be counting on not having a lot of concurrent queries. It sure \n>> will speed up index updating, though!\n>\n> 800MB is correct, yes... There are usually only 10-30 postgres \n> processes active (imagine 5-10 people working on the web front-end \n> while cron jobs access the db occasionally). Very few queries can use \n> such large amounts of memory for sorting, but they do exist.\n>\n\nRemember that it is going to allocate 800MB per sort. It is not \"you \ncan allocate up to 800MB, so if you need 1 meg, use one meg\". Some \nqueries may end up having a few sort steps.\n\nIn terms of sort mem it is best to set a system default to a nice good \nvalue for most queries. and then in your reporting queries or other \nones set sort_mem for that session (set sort_mem = 800000) then only \nthat session will use the looney sort_mem\n\nIt would be interesting to know if your machine is swapping.\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Sun, 15 Feb 2004 12:20:38 -0500",
"msg_from": "Jeff Trout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimization ideas for frequent, large(ish) updates"
},
{
"msg_contents": "Jeff Trout wrote:\n\n> Remember that it is going to allocate 800MB per sort. It is not \"you \n> can allocate up to 800MB, so if you need 1 meg, use one meg\". Some \n> queries may end up having a few sort steps.\n\nI didn't know that it always allocates the full amount of memory \nspecificed in the configuration (e.g. the annotated configuration guide \nsays: \"Note that for a complex query, several sorts might be running in \nparallel, and each one _will be allowed to use_ as much memory as this \nvalue specifies before it starts to put data into temporary files.\"). \nThe individual postgres processes don't look like they're using the full \namount either (but that could be because the memory isn't written to).\n\n> In terms of sort mem it is best to set a system default to a nice good \n> value for most queries. and then in your reporting queries or other \n> ones set sort_mem for that session (set sort_mem = 800000) then only \n> that session will use the looney sort_mem\n\nQueries from the web front-end use up to ~130MB sort memory (according \nto pgsql_tmp), so I set this to 150MB - thanks.\n\n> It would be interesting to know if your machine is swapping.\n\nIt's not being monitored closely (other than with the occasional \"top\"), \n but it's highly unlikely:\n\nMem: 12441864k total, 10860648k used, 1581216k free, 84552k buffers\nSwap: 4008176k total, 2828k used, 4005348k free, 9762628k cached\n\n(that's a typical situation - the \"2828k used\" are probably some rarely \nused processes that have lower priority than the cache ...)\n\nRegards,\n Marinos\n\n",
"msg_date": "Mon, 16 Feb 2004 03:53:15 +0100",
"msg_from": "\"Marinos J. Yannikos\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimization ideas for frequent, large(ish) updates"
},
{
"msg_contents": "\"Marinos J. Yannikos\" <[email protected]> writes:\n> Jeff Trout wrote:\n>> Remember that it is going to allocate 800MB per sort.\n\n> I didn't know that it always allocates the full amount of memory \n> specificed in the configuration\n\nIt doesn't ... but it could use *up to* that much before starting to\nspill to disk. If you are certain your sorts won't use that much,\nthen you could set the limit lower, hm?\n\nAlso keep in mind that sort_mem controls hash table size as well as sort\nsize. The hashtable code is not nearly as accurate as the sort code\nabout honoring the specified limit exactly. So you really oughta figure\nthat you could need some multiple of sort_mem per active backend.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 15 Feb 2004 22:28:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimization ideas for frequent, large(ish) updates "
}
] |
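A sketch of the per-session sort_mem approach recommended in the thread above: keep a modest default in postgresql.conf and raise the setting only in the session that runs the large reporting queries. The 150000 figure follows Marinos's observation of roughly 130 MB in pgsql_tmp; the 8192 default and the sample query are illustrative assumptions (table t and column id are borrowed from the thread). sort_mem is measured in KB and applies per sort step.

-- postgresql.conf: a modest system-wide default
sort_mem = 8192

-- in the heavy reporting session only
SET sort_mem = 150000;
SELECT * FROM t ORDER BY id;   -- the expensive query
RESET sort_mem;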
[
{
"msg_contents": "Hello again. I'm setting up a backup routine for my new db server. As \npart of my testing, I'm attempting to pg_restore a pg_dump'ed backup of \nmy database. The database is about 4.3 GB, and the dump file is about \n100 MB.\n\nI first did a schema-only restore, then started a data-only restore \nwith --disable-triggers to get around the referential integrity issues \nof reloading the data. The data-only restore has been running for a \ncouple of hours now, and I'm seeing high iowait numbers in top.\n\n 15:57:58 up 23:55, 2 users, load average: 2.04, 2.07, 2.01\n60 processes: 57 sleeping, 3 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 4.0% 0.0% 0.7% 0.0% 0.0% 43.5% 51.6%\n cpu00 0.0% 0.0% 0.3% 0.0% 0.0% 84.8% 14.7%\n cpu01 15.7% 0.0% 1.7% 0.0% 0.0% 2.7% 79.6%\n cpu02 0.1% 0.0% 0.7% 0.0% 0.0% 84.2% 14.7%\n cpu03 0.2% 0.0% 0.0% 0.0% 0.0% 2.4% 97.4%\nMem: 3869544k av, 3849280k used, 20264k free, 0k shrd, \n110544k buff\n 1297452k actv, 2298928k in_d, 57732k in_c\nSwap: 2040244k av, 0k used, 2040244k free \n3576684k cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU \nCOMMAND\n 8366 postgres 16 0 954M 954M 952M R 4.3 25.2 21:58 1 \npostmaster\n 9282 postgres 15 0 0 0 0 RW 0.2 0.0 0:00 2 \npostmaster\n 1 root 15 0 496 496 444 S 0.0 0.0 0:05 3 init\n\nQuestions:\n* Do these iowait numbers indicate a problem with my setup?\n* Does anyone have a good method for disabling indexes before a restore \nand restoring them afterwards? I've spent some time writing scripts to \ndo this, but I have yet to come up with drop/recreate solution that \nreturns my database to the same pre-drop state.\n\nThanks in advance!\n\nLeon",
"msg_date": "Fri, 13 Feb 2004 16:03:24 -0500",
"msg_from": "Leon Out <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lengthy pg_restore and high iowait?"
}
] |
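One possible answer to the unanswered index question above, sketched with placeholder names: save each index definition from the pg_indexes view before dropping it, run the data-only restore, then replay the saved CREATE INDEX statements. Indexes that back PRIMARY KEY or UNIQUE constraints cannot be removed with DROP INDEX (they need ALTER TABLE ... DROP CONSTRAINT and a matching ADD CONSTRAINT afterwards), which is likely why a plain drop/recreate script does not return the schema to exactly its pre-drop state.

-- capture the definitions for later replay
SELECT indexdef FROM pg_indexes WHERE schemaname = 'public';

-- drop an ordinary index before loading
DROP INDEX big_table_value_idx;

-- ... run the data-only pg_restore ...

-- recreate it afterwards from the saved definition
CREATE INDEX big_table_value_idx ON big_table (value);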
[
{
"msg_contents": "\nGreetings!\n\nWhy does creation of gist indexes takes significantly more time\nthan normal btree index. Can any configuration changes lead to faster index\ncreation?\n\n\nquery:\nCREATE INDEX co_name_index_idx ON profiles USING gist (co_name_index \npublic.gist_txtidx_ops);\n\n\nregds\nmallah.\n",
"msg_date": "Sat, 14 Feb 2004 09:09:01 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow GIST index creation"
}
] |
[
{
"msg_contents": "Hi,\n\nwe have a table with about 6.000.000 rows. There is an index on a \ncolumn with the name id which is an integer and serves as primary key.\n\nWhen we execute select max(id) from theTable; it takes about 10 \nseconds. Explain analyze returns:\n\n------------------------------------------------------------------------ \n--------------------------------------------------------\n Aggregate (cost=153635.15..153635.15 rows=1 width=4) (actual \ntime=9738.263..9738.264 rows=1 loops=1)\n -> Seq Scan on job_property (cost=0.00..137667.32 rows=6387132 \nwidth=4) (actual time=0.102..7303.649 rows=6387132 loops=1)\n Total runtime: 9738.362 ms\n(3 rows)\n\n\n\nI recreated the index on column id and ran vacuum analyze job_property \nbut this did not help. I tried to force index usage with SET \nENABLE_SEQSCAN TO OFF; but the explain analyze still looks like the \nquery is done using a seqscan.\n\nIs the speed more or less normal for a 'dual G5 with 2 GHZ and 4 GB of \nRam and a SATA hd' or do i miss something?\n\nregards David\n\n",
"msg_date": "Mon, 16 Feb 2004 17:51:37 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "select max(id) from aTable is very slow"
},
{
"msg_contents": "David Teran wrote:\n\n> Hi,\n>\n> we have a table with about 6.000.000 rows. There is an index on a \n> column with the name id which is an integer and serves as primary key.\n>\n> When we execute select max(id) from theTable; it takes about 10 \n> seconds. Explain analyze returns:\n>\n> ------------------------------------------------------------------------ \n> --------------------------------------------------------\n> Aggregate (cost=153635.15..153635.15 rows=1 width=4) (actual \n> time=9738.263..9738.264 rows=1 loops=1)\n> -> Seq Scan on job_property (cost=0.00..137667.32 rows=6387132 \n> width=4) (actual time=0.102..7303.649 rows=6387132 loops=1)\n> Total runtime: 9738.362 ms\n> (3 rows)\n>\n>\n>\n> I recreated the index on column id and ran vacuum analyze \n> job_property but this did not help. I tried to force index usage \n> with SET ENABLE_SEQSCAN TO OFF; but the explain analyze still looks \n> like the query is done using a seqscan.\n>\n> Is the speed more or less normal for a 'dual G5 with 2 GHZ and 4 GB \n> of Ram and a SATA hd' or do i miss something?\n>\n> regards David\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\nTry using:\n\nSELECT id FROM theTable ORDER BY is DESC LIMIT 1;\n\nUsing COUNT, MAX, MIN and any aggregate function on the table of that \nsize will always result in a sequential scan. There is currently no way \naround it although there are a few work arounds. See the following for \nmore information.\n\nhttp://archives.postgresql.org/pgsql-performance/2004-01/msg00045.php\nhttp://archives.postgresql.org/pgsql-performance/2004-01/msg00054.php\nhttp://archives.postgresql.org/pgsql-performance/2004-01/msg00059.php\n\nHTH\n\nNick\n\n\n\n",
"msg_date": "Mon, 16 Feb 2004 17:56:44 +0000",
"msg_from": "Nick Barr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select max(id) from aTable is very slow"
},
{
"msg_contents": "Nick Barr wrote:\n\n> David Teran wrote:\n>\n>> Hi,\n>>\n>> we have a table with about 6.000.000 rows. There is an index on a \n>> column with the name id which is an integer and serves as primary key.\n>>\n>> When we execute select max(id) from theTable; it takes about 10 \n>> seconds. Explain analyze returns:\n>>\n>> ------------------------------------------------------------------------ \n>> --------------------------------------------------------\n>> Aggregate (cost=153635.15..153635.15 rows=1 width=4) (actual \n>> time=9738.263..9738.264 rows=1 loops=1)\n>> -> Seq Scan on job_property (cost=0.00..137667.32 rows=6387132 \n>> width=4) (actual time=0.102..7303.649 rows=6387132 loops=1)\n>> Total runtime: 9738.362 ms\n>> (3 rows)\n>>\n>>\n>>\n>> I recreated the index on column id and ran vacuum analyze \n>> job_property but this did not help. I tried to force index usage \n>> with SET ENABLE_SEQSCAN TO OFF; but the explain analyze still looks \n>> like the query is done using a seqscan.\n>>\n>> Is the speed more or less normal for a 'dual G5 with 2 GHZ and 4 GB \n>> of Ram and a SATA hd' or do i miss something?\n>>\n>> regards David\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>\n>\n> Try using:\n>\n> SELECT id FROM theTable ORDER BY is DESC LIMIT 1;\n>\n> Using COUNT, MAX, MIN and any aggregate function on the table of that \n> size will always result in a sequential scan. There is currently no \n> way around it although there are a few work arounds. See the following \n> for more information.\n>\n> http://archives.postgresql.org/pgsql-performance/2004-01/msg00045.php\n> http://archives.postgresql.org/pgsql-performance/2004-01/msg00054.php\n> http://archives.postgresql.org/pgsql-performance/2004-01/msg00059.php\n>\n> HTH\n>\n> Nick\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\nOops that should be\n\nSELECT id FROM theTable ORDER BY id DESC LIMIT 1;\n\nNick\n\n\n\n\n",
"msg_date": "Mon, 16 Feb 2004 18:02:10 +0000",
"msg_from": "Nick Barr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select max(id) from aTable is very slow"
},
{
"msg_contents": "Hi Nick,\n\n>> Try using:\n>>\n>> SELECT id FROM theTable ORDER BY is DESC LIMIT 1;\n>>\n>> Using COUNT, MAX, MIN and any aggregate function on the table of that \n>> size will always result in a sequential scan. There is currently no \n>> way around it although there are a few work arounds. See the \n>> following for more information.\n>>\n>> http://archives.postgresql.org/pgsql-performance/2004-01/msg00045.php\n>> http://archives.postgresql.org/pgsql-performance/2004-01/msg00054.php\n>> http://archives.postgresql.org/pgsql-performance/2004-01/msg00059.php\n>>\n\n\nthanks, that works fine! I will read the mail archive before asking \nsuch things again ;-)\n\ncheers David\n\n",
"msg_date": "Mon, 16 Feb 2004 19:15:16 +0100",
"msg_from": "David Teran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select max(id) from aTable is very slow"
},
{
"msg_contents": "David Teran wrote:\n> Hi,\n> \n> we have a table with about 6.000.000 rows. There is an index on a \n> column with the name id which is an integer and serves as primary key.\n> \n> When we execute select max(id) from theTable; it takes about 10 \n> seconds. Explain analyze returns:\n\nDue to the open-ended nature of PG's aggregate function system, it can't \nsee inside the max() function to realise it doesn't need all the values.\n\nFortune favours the flexible however - the simple workaround is to use \nthe equivalent:\n SELECT id FROM theTable ORDER BY id DESC LIMIT 1;\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 03 Jun 2004 14:07:49 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select max(id) from aTable is very slow"
}
] |
[
{
"msg_contents": "Hello,\n\nI m checking Postgresql and MS-SQl database server for our new development. On a very first query Postresql is out performed and I think it is very disappointing. My query consists on a single table only on both machines.\n\nTable Structure\n Table \"inv_detail\"\n Attribute | Type | Modifier\n--------------+-----------------------+--------------------\n inv_no | integer | not null\n unit_id | character(4) | not null\n item | character varying(90) | not null\n qty | double precision | not null default 0\n rate | double precision | not null default 0\n unit | character varying(20) | not null\n vl_ex_stax | double precision | not null default 0\n stax_prc | double precision | not null default 0\n adl_stax_prc | double precision | not null default 0\n package | character varying(12) |\n\nHaving 440,000 Records.\n\nMy Query\n--------\nselect count(*), sum(vl_ex_stax) , sum(qty) , unit from inv_detail group by unit;\non both databases.\n\nPostgreSQL return result in 50 sec every time.\nMS-SQL return result in 2 sec every time.\n\nMS-SQL Machine\n**************\nAthlon 600Mhz. (Unbranded)\n256 MB Ram. ( 133 Mhz)\n40 GB Baracude NTFS File System.\nWindows 2000 Server Enterprise.\nMS-SQL 2000 Enterprise. (Default Settings)\n\nPostgreSQL Machine\n******************\nP-III 600Mhz (Dell Precision 220)\n256 MB Ram (RD Ram)\n40 GB Baracuda Ext2 File System.\nRedHat 7.2\nPostgreSQL 7.1.3-2\n\nMy PostgreSQL Conf is\n*********************\nlog_connections = yes\nsyslog = 2\neffective_cache_size = 327680\nsort_mem = 10485760\nmax_connections = 64\nshared_buffers = 512\nwal_buffers = 1024\n\nNOTICE: QUERY PLAN:\n********************\nAggregate (cost=inf..inf rows=44000 width=28)\n -> Group (cost=inf..inf rows=440000 width=28)\n -> Sort (cost=inf..inf rows=440000 width=28)\n -> Seq Scan on inv_detail (cost=0.00..11747.00 rows=440000 width=28)\nEXPLAIN\n\nEven if I dont compare postgres with any other database server the time taken is alarmingly high. These settings are not good I know, but the Postgres result is very un-acceptable. I m looking forward for comments to change the conf setting for acceptable results.\n\nAnd I have two more questions :\n\n1- How can I lock a single record so that other users can only read it. ??\n2- one user executes a query it will be process and when another user executes the same query having the same result should not again go for processing. The result should be come from the cache. Is this possible in postgres ??\n\n\nSaleem\n\[email protected]\n\n\n",
"msg_date": "Tue, 17 Feb 2004 11:24:02 +0500 (PKT)",
"msg_from": "\"Saleem Burhani Baloch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow response of PostgreSQL"
},
{
"msg_contents": "> select count(*), sum(vl_ex_stax) , sum(qty) , unit from inv_detail group by unit;\n> on both databases.\n> \n> PostgreSQL return result in 50 sec every time.\n> MS-SQL return result in 2 sec every time.\n\n> My PostgreSQL Conf is\n> *********************\n> log_connections = yes\n> syslog = 2\n> effective_cache_size = 327680\n> sort_mem = 10485760\n> max_connections = 64\n> shared_buffers = 512\n> wal_buffers = 1024\n\nThis is a shockingly bad postgresql.conf. I'm not surprised you have \nperformance problems. Change this:\n\neffective_cache_size = 4000\nsort_mem = 4096\nshared_buffers = 1000\nwal_buffers = 8\n\nAlso, you need a LOT more RAM in your PostgreSQL machine, at least half \na gig for a basic database server.\n\n> 1- How can I lock a single record so that other users can only read it. ??\n\nYou cannot do that in PostgreSQL.\n\n> 2- one user executes a query it will be process and when another user executes the same query having the same result should not again go for processing. The result should be come from the cache. Is this possible in postgres ??\n\nNo, implement it in your application. Prepared queries and stored \nprocedures might help you here.\n\nChris\n\n",
"msg_date": "Tue, 17 Feb 2004 14:59:08 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "On Tue, 17 Feb 2004, Saleem Burhani Baloch wrote:\n\n> select count(*), sum(vl_ex_stax) , sum(qty) , unit from inv_detail group by unit;\n> on both databases.\n\n> PostgreSQL Machine\n> ******************\n> P-III 600Mhz (Dell Precision 220)\n> 256 MB Ram (RD Ram)\n> 40 GB Baracuda Ext2 File System.\n> RedHat 7.2\n> PostgreSQL 7.1.3-2\n\nBesides the comments on the conf file already sent, 7.1.3 is many versions\nbehind the current version and definately has some deficiencies either\nfully or partially corrected in later versions. All in all, I'd suggest\ngoing all the way to 7.4.1 since the hash aggregate stuff might help the\nqueries you're running.\n\n",
"msg_date": "Mon, 16 Feb 2004 23:42:48 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "\"Saleem Burhani Baloch\" <[email protected]> writes:\n> PostgreSQL 7.1.3-2\n\nAside from the config issues Chris mentioned, I'd recommend trying\na somewhat less obsolete version of Postgres. I believe the poor\nperformance with grouped aggregates should be fixed in 7.4 and later.\n\n(Red Hat 7.2 is a bit long in the tooth as well.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Feb 2004 02:45:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL "
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n>> 1- How can I lock a single record so that other users can only read \n>> it. ??\n> \n> You cannot do that in PostgreSQL.\n\nHow about SELECT ... FOR UPDATE?\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Tue, 17 Feb 2004 08:02:51 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "\nEasy two step procedure for speeding this up:\n\n1: Upgrade to 7.4.1\n2: Read this: \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n",
"msg_date": "Tue, 17 Feb 2004 10:15:46 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": ">>> 1- How can I lock a single record so that other users can only read \n>>> it. ??\n>>\n>>\n>> You cannot do that in PostgreSQL.\n> \n> \n> How about SELECT ... FOR UPDATE?\n\nNo, because users cannot read the locked row in that case.\n\nChris\n\n",
"msg_date": "Wed, 18 Feb 2004 09:50:00 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "On Wed, 18 Feb 2004, Christopher Kings-Lynne wrote:\n\n> >>> 1- How can I lock a single record so that other users can only read \n> >>> it. ??\n> >>\n> >>\n> >> You cannot do that in PostgreSQL.\n> > \n> > \n> > How about SELECT ... FOR UPDATE?\n> \n> No, because users cannot read the locked row in that case.\n\nI just tested it (within transactions) and it appeared that I could still \nview the rows selected for update.\n\n",
"msg_date": "Tue, 17 Feb 2004 18:50:08 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "scott.marlowe wrote:\n> On Wed, 18 Feb 2004, Christopher Kings-Lynne wrote:\n> \n>>>>>1- How can I lock a single record so that other users can only read \n>>>>>it. ??\n>>>>\n>>>>\n>>>>You cannot do that in PostgreSQL.\n>>>\n>>>\n>>>How about SELECT ... FOR UPDATE?\n>>\n>>No, because users cannot read the locked row in that case.\n> \n> I just tested it (within transactions) and it appeared that I could still \n> view the rows selected for update.\n\nThank you. I was just about to test it myself.\n\nThe user's guide, section 9.3.2 states that this is the case: i.e. select\nfor update will prevent concurrent updating of the row, while allowing\nqueries utilizing that row to succeed.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Tue, 17 Feb 2004 21:21:31 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": ">>>How about SELECT ... FOR UPDATE?\n>>\n>>No, because users cannot read the locked row in that case.\n> \n> \n> I just tested it (within transactions) and it appeared that I could still \n> view the rows selected for update.\n\nAh, true. My mistake. OK, well you can do it in postgres then...\n\nChris\n\n",
"msg_date": "Wed, 18 Feb 2004 12:52:55 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
}
] |
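A small illustration of the SELECT ... FOR UPDATE behaviour the thread above arrives at, using the inv_detail table from the original post: the locked row remains readable by other sessions, while concurrent UPDATEs (or other FOR UPDATE selects) on that row wait until the locking transaction finishes. The inv_no value is an arbitrary example.

-- session A
BEGIN;
SELECT * FROM inv_detail WHERE inv_no = 100 FOR UPDATE;

-- session B, at the same time
SELECT * FROM inv_detail WHERE inv_no = 100;        -- returns immediately
UPDATE inv_detail SET qty = 5 WHERE inv_no = 100;   -- blocks until session A ends

-- session A
COMMIT;   -- session B's UPDATE now proceeds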
[
{
"msg_contents": "I changed the conf as you wrote. But now the time is changed from 50 sec to 65 sec. :(\nI have not more 256 MB ram now.\nWhen I execute the query the \nPostmaster takes about 1.8 MB\nPostgres session takes 18 MB ram only.\n& psql takes 1.3 MB.\n\nAfter the query finishes the \nPostgres session reducess memeory and just uses 10 MB ram.\n\nI have a question why MS-SQL with 256 MB RAM gives result in 2 sec ?? If I have low memory Postgres should give result in 10 sec as compared to MS-SQL.\n\nLooking forward for comments & suggestions\n\nSaleem\n\n> This is a shockingly bad postgresql.conf. I'm not surprised you have \n> performance problems. Change this:\n> \n> effective_cache_size = 4000\n> sort_mem = 4096\n> shared_buffers = 1000\n> wal_buffers = 8\n> \n> Also, you need a LOT more RAM in your PostgreSQL machine, at least half \n> a gig for a basic database server.\n> \n> \n> Chris\n\n\n\n",
"msg_date": "Tue, 17 Feb 2004 15:39:09 +0500 (PKT)",
"msg_from": "\"Saleem Burhani Baloch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "Saleem Burhani Baloch wrote:\n> I changed the conf as you wrote. But now the time is changed from 50 sec to 65 sec. :(\n> I have not more 256 MB ram now.\n> When I execute the query the \n> Postmaster takes about 1.8 MB\n> Postgres session takes 18 MB ram only.\n> & psql takes 1.3 MB.\n> \n> After the query finishes the \n> Postgres session reducess memeory and just uses 10 MB ram.\n\nCan you post explain analyze result for the query?\n\n Shridhar\n",
"msg_date": "Tue, 17 Feb 2004 16:30:52 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "\"Saleem Burhani Baloch\" <[email protected]> writes:\n> I have a question why MS-SQL with 256 MB RAM gives result in 2 sec ?? If I have low memory Postgres should give result in 10 sec as compared to MS-SQL.\n\nAre you still running 7.1?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Feb 2004 10:02:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL "
}
] |
[
{
"msg_contents": "I can't get the following statement to complete with reasonable time.\nI've had it running for over ten hours without getting anywhere. I\nsuspect (hope) there may be a better way to accomplish what I'm trying\nto do (set fields containing unique values to null):\n\n UPDATE requests\n SET session = NULL\n WHERE session IN\n (\n SELECT session\n FROM requests\n GROUP BY session\n HAVING COUNT(*) = 1\n );\n\nOutput of EXPLAIN:\n\n Nested Loop\n (cost=170350.16..305352.37 rows=33533 width=98)\n -> HashAggregate\n (cost=170350.16..170350.16 rows=200 width=8)\n -> Subquery Scan \"IN_subquery\"\n (cost=169728.12..170261.30 rows=35545 width=8)\n -> HashAggregate\n (cost=169728.12..169905.85 rows=35545 width=8)\n Filter: (count(*) = 1)\n -> Seq Scan on requests\n (cost=0.00..139207.75 rows=6104075 width=8)\n -> Index Scan using requests_session_idx on requests\n (cost=0.00..672.92 rows=168 width=106)\n Index Cond: (requests.\"session\" = \"outer\".\"session\")\n\nIf I drop the index on requests(session):\n\n Hash Join\n (cost=170350.66..340414.12 rows=33533 width=98)\n Hash Cond: (\"outer\".\"session\" = \"inner\".\"session\")\n -> Seq Scan on requests\n (cost=0.00..139207.75 rows=6104075 width=106)\n -> Hash\n (cost=170350.16..170350.16 rows=200 width=8)\n -> HashAggregate\n (cost=170350.16..170350.16 rows=200 width=8)\n -> Subquery Scan \"IN_subquery\"\n (cost=169728.12..170261.30 rows=35545 width=8)\n -> HashAggregate\n (cost=169728.12..169905.85 rows=35545 width=8)\n Filter: (count(*) = 1)\n -> Seq Scan on requests\n (cost=0.00..139207.75 rows=6104075\nwidth=8)\n\nThe subquery itself requires 5-10 min to run on its own, and may return\nseveral million rows.\n\nUsing EXISTS rather than IN (I'm using 7.4-RC2, not sure if IN queries\nwere already improved in this release):\n\n UPDATE requests\n SET session = NULL\n WHERE NOT EXISTS\n (\n SELECT r.session\n FROM requests r\n WHERE\n r.session = session\n AND NOT r.id = id\n );\n\nWith and without index:\n\n Result\n (cost=227855.74..415334.22 rows=8075449 width=101)\n One-Time Filter: (NOT $0)\n InitPlan\n -> Seq Scan on requests r\n (cost=0.00..227855.74 rows=201 width=8)\n Filter: ((\"session\" = \"session\") AND (id <> id))\n -> Seq Scan on requests\n (cost=0.00..187478.49 rows=8075449 width=101)\n\nI've been running this for more than an hour so far, and no end in\nsight, either... Any ideas?\n\n",
"msg_date": "Tue, 17 Feb 2004 13:38:19 +0100",
"msg_from": "\"Eric Jain\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "UPDATE with subquery too slow"
},
{
"msg_contents": "Eric Jain wrote:\n> I can't get the following statement to complete with reasonable time.\n> I've had it running for over ten hours without getting anywhere. I\n> suspect (hope) there may be a better way to accomplish what I'm trying\n> to do (set fields containing unique values to null):\n\n[...]\n\n> Using EXISTS rather than IN (I'm using 7.4-RC2, not sure if IN queries\n> were already improved in this release):\n> \n> UPDATE requests\n> SET session = NULL\n> WHERE NOT EXISTS\n> (\n> SELECT r.session\n> FROM requests r\n> WHERE\n> r.session = session\n> AND NOT r.id = id\n> );\n\nI suppose you could try:\n\n UPDATE requests\n SET session = NULL\n WHERE EXISTS\n (\n SELECT r.session\n FROM requests r\n WHERE\n r.session = session\n GROUP BY r.session\n HAVING count(*) = 1\n );\n\nbut I don't know that you'll get much different results than your\nversion.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Tue, 17 Feb 2004 22:52:26 -0800",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: UPDATE with subquery too slow"
},
{
"msg_contents": "> I can't get the following statement to complete with reasonable time.\n\nUpgraded to 7.4.1, and realized that NOT IN is far more efficient than\nIN, EXISTS or NOT EXISTS, at least for the amount and distribution of\ndata that I have. Here are some numbers from before and after performing\nthe problematic clean up operation:\n\n | Before | After\n------------------------+-----------+-----------\nCOUNT(*) | 6'104'075 | 6'104'075\nCOUNT(session) | 5'945'272 | 3'640'659\nCOUNT(DISTINCT session) | 2'865'570 | 560'957\n\nThe following query completes within less than three hours on a machine\nwith a high load, versa many many hours for any of the alternatives:\n\nUPDATE requests\nSET session = NULL\nWHERE session NOT IN\n(\n SELECT r.session\n FROM requests r\n WHERE r.session IS NOT NULL\n GROUP BY r.session\n HAVING COUNT(*) > 1\n);\n\nNote that in order to correctly reverse an IN subquery, IS NOT NULL\nneeds to be added.\n\nInterestingly, the query planner believes that using EXISTS would be\nmore efficient than NOT IN, and IN only slightly less efficient; I\nassume the query planner is not able to accurately estimate the number\nof rows returned by the subquery.\n\nEXISTS 351'511\nNOT IN 376'577\nIN 386'780\nLEFT JOIN 18'263'826\nNOT EXISTS 7'241'815'330\n\n",
"msg_date": "Wed, 18 Feb 2004 20:11:58 +0100",
"msg_from": "\"Eric Jain\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATE with subquery too slow"
}
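A tiny illustration of the IS NOT NULL remark above: if the subquery can return a NULL, then x NOT IN (subquery) is never true, because x <> NULL evaluates to unknown. This is standard SQL NULL handling rather than anything specific to the poster's tables; the table below is invented purely for the demonstration:

  CREATE TEMPORARY TABLE nulltest (x integer);
  INSERT INTO nulltest VALUES (1);
  INSERT INTO nulltest VALUES (NULL);

  SELECT 1 WHERE 2 NOT IN (SELECT x FROM nulltest);                      -- no rows
  SELECT 1 WHERE 2 NOT IN (SELECT x FROM nulltest WHERE x IS NOT NULL);  -- one row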
] |
[
{
"msg_contents": "Hi,\n\nThis is not going to answer your question of course but did you already try to do this in 2 steps?\n\nYou said that the subquery itself doesn't take very long, so perhaps you can create a temporary table based on the subquery, then in the update do a join with the temporary table?\n\nThis might not be desirable in the end, but it might be useful just to check the performance of it.\n\nAnd - isn't it an option to upgrade to 7.4.1 instead?\n\n\nregards,\n\n--Tim\n\nTHIS COMMUNICATION MAY CONTAIN CONFIDENTIAL AND/OR OTHERWISE PROPRIETARY MATERIAL and is thus for use only by the intended recipient. If you received this in error, please contact the sender and delete the e-mail and its attachments from all computers. \n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Eric Jain\nSent: dinsdag 17 februari 2004 13:38\nTo: pgsql-performance\nSubject: [PERFORM] UPDATE with subquery too slow\n\n\nI can't get the following statement to complete with reasonable time.\nI've had it running for over ten hours without getting anywhere. I\nsuspect (hope) there may be a better way to accomplish what I'm trying\nto do (set fields containing unique values to null):\n\n UPDATE requests\n SET session = NULL\n WHERE session IN\n (\n SELECT session\n FROM requests\n GROUP BY session\n HAVING COUNT(*) = 1\n );\n\n[...]\n\n",
"msg_date": "Tue, 17 Feb 2004 12:49:29 -0000",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: UPDATE with subquery too slow"
}
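A minimal sketch of the two-step idea, using the table and column names from the original post; whether materializing the subquery actually beats the single statement here is an open question, not something reported in the thread:

  -- Step 1: materialize the sessions that occur exactly once.
  CREATE TEMPORARY TABLE single_use_sessions AS
    SELECT session
    FROM requests
    GROUP BY session
    HAVING count(*) = 1;

  ANALYZE single_use_sessions;   -- give the planner real row counts for the join

  -- Step 2: update against the (hopefully much smaller) materialized set.
  UPDATE requests
  SET session = NULL
  FROM single_use_sessions s
  WHERE requests.session = s.session;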
] |
[
{
"msg_contents": "Hi!\nDoes PostgreSQL allow to create tables and indices of a single\ndatabase on multiple disk drives with a purpose of increase\nperformance as Oracle database does? If a symbolic reference is the\nonly method then the next question is: how can it be determined what\nfile is referred to what table and index?\n\n\n\n",
"msg_date": "Tue, 17 Feb 2004 15:54:54 +0300",
"msg_from": "Konstantin Tokar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tables on multiple disk drives"
},
{
"msg_contents": "On Tuesday 17 February 2004 12:54, Konstantin Tokar wrote:\n> Hi!\n> Does PostgreSQL allow to create tables and indices of a single\n> database on multiple disk drives with a purpose of increase\n> performance as Oracle database does? If a symbolic reference is the\n> only method then the next question is: how can it be determined what\n> file is referred to what table and index?\n\nYep, symlinks are the way at present (though I think someone is working on \ntablespace support). The files are named after the OID of the object they \nrepresent - there is a useful oid2name utility in the contrib/ folder.\n\nYou might want to check the archives though, and see what RAID setups people \nprefer - less trouble to maintain than symlinking.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 17 Feb 2004 14:56:02 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tables on multiple disk drives"
},
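For reference, the table-to-file mapping can also be read straight out of the catalogs; 'mytable' below is a placeholder, and on newer releases the on-disk file name is pg_class.relfilenode (it starts out equal to the OID but can diverge after rewriting operations such as TRUNCATE or CLUSTER):

  -- Which file holds a given table or index?
  SELECT relname, relfilenode
  FROM pg_class
  WHERE relname = 'mytable';

  -- Which directory under $PGDATA/base/ belongs to the current database?
  SELECT oid, datname
  FROM pg_database
  WHERE datname = current_database();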
{
"msg_contents": "On Tue, 17 Feb 2004, Konstantin Tokar wrote:\n\n> Hi!\n> Does PostgreSQL allow to create tables and indices of a single\n> database on multiple disk drives with a purpose of increase\n> performance as Oracle database does? If a symbolic reference is the\n> only method then the next question is: how can it be determined what\n> file is referred to what table and index?\n\nYou're life will be simpler, and your setup will be faster without having \nto muck about with it, if you just buy a good RAID controller with battery \nbacked cache. LSI/Megaraid and Adaptec both make serviceable controllers \nfor reasonable prices, and as you add drives, the speed just goes up, no \nmuddling around with sym links.\n\n",
"msg_date": "Tue, 17 Feb 2004 10:18:25 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tables on multiple disk drives"
},
{
"msg_contents": "> On Tue, 17 Feb 2004, Konstantin Tokar wrote:\n>\n>> Hi!\n>> Does PostgreSQL allow to create tables and indices of a single\n>> database on multiple disk drives with a purpose of increase\n>> performance as Oracle database does? If a symbolic reference is the\n>> only method then the next question is: how can it be determined what\n>> file is referred to what table and index?\n>\n> You're life will be simpler, and your setup will be faster without\n> having to muck about with it, if you just buy a good RAID controller\n> with battery backed cache. LSI/Megaraid and Adaptec both make\n> serviceable controllers for reasonable prices, and as you add drives,\n> the speed just goes up, no muddling around with sym links.\n\nThis works to a limited extent. For very large databases, maximum\nthroughput of I/O is the paramount factor for database performance. With\nraid controllers, your LUN is still limited to a small number of disks.\nPostgreSQL can only write on a file system, but Oracle, SAP DB, DB2, etc\ncan write directly to disk (raw I/O). With large databases it is\nadvantageous to spread a table across 100's of disks, if the table is\nquite large. I don't know of any manufacturer that creates a 100 disk\nraid array yet.\n\nSome of the problem can be addressed by using a volume manager (such as\nLVM in Linux, or Veritas on Unix-like systems). This allows one to\ncreate a volume using partitions from many disks. One can then create\na file system and mount it on the volume.\n\nHowever, to get the best performance, Raw I/O capability is the best\nway to go.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n\n",
"msg_date": "Tue, 17 Feb 2004 09:35:38 -0800 (PST)",
"msg_from": "Craig Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tables on multiple disk drives"
},
{
"msg_contents": "On Tue, 17 Feb 2004, Craig Thomas wrote:\n\n> > On Tue, 17 Feb 2004, Konstantin Tokar wrote:\n> >\n> >> Hi!\n> >> Does PostgreSQL allow to create tables and indices of a single\n> >> database on multiple disk drives with a purpose of increase\n> >> performance as Oracle database does? If a symbolic reference is the\n> >> only method then the next question is: how can it be determined what\n> >> file is referred to what table and index?\n> >\n> > You're life will be simpler, and your setup will be faster without\n> > having to muck about with it, if you just buy a good RAID controller\n> > with battery backed cache. LSI/Megaraid and Adaptec both make\n> > serviceable controllers for reasonable prices, and as you add drives,\n> > the speed just goes up, no muddling around with sym links.\n> \n> This works to a limited extent. For very large databases, maximum\n> throughput of I/O is the paramount factor for database performance. With\n> raid controllers, your LUN is still limited to a small number of disks.\n> PostgreSQL can only write on a file system, but Oracle, SAP DB, DB2, etc\n> can write directly to disk (raw I/O). With large databases it is\n> advantageous to spread a table across 100's of disks, if the table is\n> quite large. I don't know of any manufacturer that creates a 100 disk\n> raid array yet.\n\nYou can run up to four LSI / Megaraids in one box, each with 3 UW SCSI \ninterfaces, and they act as one unit. That's 3*4*15 = 180 disks max.\n\nWith FC AL connections and four cards, it would be possible to approach \n1000 drives. \n\nOf course, I'm not sure how fast any RAID card setup is gonna be with that \nmany drives, but ya never know. My guess is that before you go there you \nbuy a big external RAID box built for speed. We have a couple of 200+ \ndrive external RAID5 storage boxes at work that are quite impressive.\n\n> Some of the problem can be addressed by using a volume manager (such as\n> LVM in Linux, or Veritas on Unix-like systems). This allows one to\n> create a volume using partitions from many disks. One can then create\n> a file system and mount it on the volume.\n\nPretty much RAID arrays in software, which means no battery backed cache, \nwhich means it'll be fast at reading, but probably pretty slow at writes, \nepsecially if there's a lot of parallel access waiting to write to the \ndatabase.\n\n> However, to get the best performance, Raw I/O capability is the best\n> way to go.\n\nUnsupported statement made as fact. I'm not saying it can't or isn't\ntrue, but my experience has been that large RAID5 arrays are a great \ncompromise between maximum performance and reliability, giving a good \nmeasure of each. It doesn't take 100 drives to do well, even a dozen to \ntwo dozen will get you in the same basic range as splitting out files by \nhand with sym links without all the headache of chasing down the files, \nshutting down the database, linking it over etc...\n\n\n",
"msg_date": "Tue, 17 Feb 2004 10:53:04 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tables on multiple disk drives"
},
{
"msg_contents": "> On Tue, 17 Feb 2004, Craig Thomas wrote:\n>\n>> > On Tue, 17 Feb 2004, Konstantin Tokar wrote:\n>> >\n>> >> Hi!\n>> >> Does PostgreSQL allow to create tables and indices of a single\n>> database on multiple disk drives with a purpose of increase\n>> >> performance as Oracle database does? If a symbolic reference is the\n>> only method then the next question is: how can it be determined\n>> what file is referred to what table and index?\n>> >\n>> > You're life will be simpler, and your setup will be faster without\n>> having to muck about with it, if you just buy a good RAID\n>> controller with battery backed cache. LSI/Megaraid and Adaptec\n>> both make serviceable controllers for reasonable prices, and as you\n>> add drives, the speed just goes up, no muddling around with sym\n>> links.\n>>\n>> This works to a limited extent. For very large databases, maximum\n>> throughput of I/O is the paramount factor for database performance.\n>> With raid controllers, your LUN is still limited to a small number of\n>> disks. PostgreSQL can only write on a file system, but Oracle, SAP DB,\n>> DB2, etc can write directly to disk (raw I/O). With large databases\n>> it is advantageous to spread a table across 100's of disks, if the\n>> table is quite large. I don't know of any manufacturer that creates a\n>> 100 disk raid array yet.\n>\n> You can run up to four LSI / Megaraids in one box, each with 3 UW SCSI\n> interfaces, and they act as one unit. That's 3*4*15 = 180 disks max.\n>\n> With FC AL connections and four cards, it would be possible to approach\n> 1000 drives.\n>\n> Of course, I'm not sure how fast any RAID card setup is gonna be with\n> that many drives, but ya never know. My guess is that before you go\n> there you buy a big external RAID box built for speed. We have a\n> couple of 200+ drive external RAID5 storage boxes at work that are\n> quite impressive.\n\nThat's a good point. But it seems that the databases that are the\nleaders of the TPC numbers seem to be the Oracles of the world. I\nknow that a former company I worked for publised TPC numbers using\nOracle with Raw I/O to get the performance up.\n\nHowever, it would be interesting for us to conduct a small scale\ntest using a couple of HW Raid systems configured so that a single\nfile system can be mounted, then run the OSDL dbt workloads. The\nresluts could then be compared with current results that have been\ncaptured.\n>\n>> Some of the problem can be addressed by using a volume manager (such\n>> as LVM in Linux, or Veritas on Unix-like systems). This allows one to\n>> create a volume using partitions from many disks. One can then create\n>> a file system and mount it on the volume.\n>\n> Pretty much RAID arrays in software, which means no battery backed\n> cache, which means it'll be fast at reading, but probably pretty slow\n> at writes, epsecially if there's a lot of parallel access waiting to\n> write to the database.\n>\n>> However, to get the best performance, Raw I/O capability is the best\n>> way to go.\n>\n> Unsupported statement made as fact. I'm not saying it can't or isn't\n> true, but my experience has been that large RAID5 arrays are a great\n> compromise between maximum performance and reliability, giving a good\n> measure of each. 
It doesn't take 100 drives to do well, even a dozen to\n> two dozen will get you in the same basic range as splitting out files\n> by hand with sym links without all the headache of chasing down the\n> files, shutting down the database, linking it over etc...\n\nWhoops, you're right. I was typing faster than I was thinking. I was\nassuming a JBOD set up rather than a RAID storage subsystem. SAN units\nsuch as an EMC or Shark usually have 4-16 GB cache and thus the I/O's\ngo pretty quick for really large databases.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n\n",
"msg_date": "Tue, 17 Feb 2004 10:16:22 -0800 (PST)",
"msg_from": "Craig Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tables on multiple disk drives"
},
{
"msg_contents": "On Tue, 17 Feb 2004, Craig Thomas wrote:\n\n> > On Tue, 17 Feb 2004, Craig Thomas wrote:\n> >\n> >> > On Tue, 17 Feb 2004, Konstantin Tokar wrote:\n> >> >\n> >> >> Hi!\n> >> >> Does PostgreSQL allow to create tables and indices of a single\n> >> database on multiple disk drives with a purpose of increase\n> >> >> performance as Oracle database does? If a symbolic reference is the\n> >> only method then the next question is: how can it be determined\n> >> what file is referred to what table and index?\n> >> >\n> >> > You're life will be simpler, and your setup will be faster without\n> >> having to muck about with it, if you just buy a good RAID\n> >> controller with battery backed cache. LSI/Megaraid and Adaptec\n> >> both make serviceable controllers for reasonable prices, and as you\n> >> add drives, the speed just goes up, no muddling around with sym\n> >> links.\n> >>\n> >> This works to a limited extent. For very large databases, maximum\n> >> throughput of I/O is the paramount factor for database performance.\n> >> With raid controllers, your LUN is still limited to a small number of\n> >> disks. PostgreSQL can only write on a file system, but Oracle, SAP DB,\n> >> DB2, etc can write directly to disk (raw I/O). With large databases\n> >> it is advantageous to spread a table across 100's of disks, if the\n> >> table is quite large. I don't know of any manufacturer that creates a\n> >> 100 disk raid array yet.\n> >\n> > You can run up to four LSI / Megaraids in one box, each with 3 UW SCSI\n> > interfaces, and they act as one unit. That's 3*4*15 = 180 disks max.\n> >\n> > With FC AL connections and four cards, it would be possible to approach\n> > 1000 drives.\n> >\n> > Of course, I'm not sure how fast any RAID card setup is gonna be with\n> > that many drives, but ya never know. My guess is that before you go\n> > there you buy a big external RAID box built for speed. We have a\n> > couple of 200+ drive external RAID5 storage boxes at work that are\n> > quite impressive.\n> \n> That's a good point. But it seems that the databases that are the\n> leaders of the TPC numbers seem to be the Oracles of the world. I\n> know that a former company I worked for publised TPC numbers using\n> Oracle with Raw I/O to get the performance up.\n\nBut keep in mind, that in the TPC benchmarks, doing things that require \nlots of dba work don't tend to make the cost in the test go up (you can \nhide a lot of admin work in those things) while in real life, they do \ndrive up the real cost of maintenance.\n\nI'd imagine that with Postgresql coming along nicely, it may well be that \nin a year or two, in the real world, you can just take the money you'd \nhave spend on Oracle licenses and Oracle DBAs and just throw more drives \nat a problem to solve it.\n\nAnd still spend less money than you would on Oracle. :-)\n\n",
"msg_date": "Tue, 17 Feb 2004 11:23:09 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tables on multiple disk drives"
},
{
"msg_contents": "[email protected] (Konstantin Tokar) wrote:\n> Hi!\n> Does PostgreSQL allow to create tables and indices of a single\n> database on multiple disk drives with a purpose of increase\n> performance as Oracle database does? If a symbolic reference is the\n> only method then the next question is: how can it be determined what\n> file is referred to what table and index?\n\nIt is possible to do this, albeit not trivially easily, by shutting\ndown the database, moving the index to another filesystem, and using a\nsymbolic link to connect it back in. The system table pg_class\ncontains the relevant linkages.\n\nBut it seems likely to me that using a smart RAID controller (e.g. -\nLSILogic MegaRAID) to link a whole lot of disks together to generate\none enormous striped filesystem would be a more effective strategy, in\nthe long run. \n\nDoing that, with a substantial array of disk drives, allows your disk\nsubsystem to provide an analagous sort of performance increase without\nthere being any need for a DBA to fiddle around with anything.\n\nIf you have the DBA do the work, this means consuming some\nnot-insubstantial amount of time for analysis as well as down-time for\nmaintenance. And it will be necessary to have a somewhat-fragile\n\"registry\" of configuration information indicating what customizations\nwere done.\n\nIn contrast, throwing a smarter RAID controller at the problem costs\nonly a few hundred dollars, and requires little or no analysis effort.\nAnd the RAID controller will affect _all_ cases where there could be\nI/O benefits from striping tables across multiple drives.\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://cbbrowne.com/info/x.html\nThe way to a man's heart is through the left ventricle.\n",
"msg_date": "Tue, 17 Feb 2004 17:31:11 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tables on multiple disk drives"
},
{
"msg_contents": "Konstantin,\n\n> > >> Does PostgreSQL allow to create tables and indices of a single\n> > >> database on multiple disk drives with a purpose of increase\n> > >> performance as Oracle database does? If a symbolic reference is the\n> > >> only method then the next question is: how can it be determined what\n> > >> file is referred to what table and index?\n\nHowdy! I bet you're a bit taken aback by the discussion that ensued, and \neven more confused than before.\n\nYou are actually asking about two related features:\n\nTablespaces, which allows designating different directories/volumes for \nspecific tables and indexes at creation time, and:\n\nPartitioned Tables, which allows the division of large tables and/or indexes \nhorizontally along pre-defined criteria.\n\nThe first, tablespaces, are under development and may make it for 7.5, or \nmaybe not, but certainly in the version after that.\n\nThe second, partitioned tables, is NOT under development because this feature \nlacks both a programmer and a clear specification. \n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 18 Feb 2004 20:56:47 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tables on multiple disk drives"
}
] |
[
{
"msg_contents": "Hi All,\n \nI'm really like this list. Thank you for all the invaluable\ninformation! May I ask a question?\n \nI've got a table with about 8 million rows and growing. I must run\nreports daily off this table, and another smaller one. Typical query -\njoins, groupings and aggregates included. This certain report takes\nabout 10 minutes on average and is getting longer. I've created all the\nindices I think are necessary.\n \nAny advice on how I can get this puppy to go faster? Hardware changes\nare not an option at this point, so I'm hoping there is something else I\ncan poke at. Anyone? \n \n \nTodd\n \n \n \nPOSTGRESQL CONF:\n \n#log_connections = on\n#fsync = off\n#max_connections = 64\n \n# Any option can also be given as a command line switch to the\n# postmaster, e.g., 'postmaster -c log_connections=on'. Some options\n# can be set at run-time with the 'SET' SQL command.\n \n# See /usr/share/doc/postgresql/README.postgresql.conf.gz for a full\nlist\n# of the allowable options\n \ndebug_level = 0\nlog_connections = on\nlog_pid = on\nlog_timestamp = on\nsyslog = 0\n# if syslog is 0, turn silent_mode off!\nsilent_mode = off\nsyslog_facility = LOCAL0\ntrace_notify = off\nmax_connections = 128\n# shared_buffers must be at least twice max_connections, and not less\nthan 16\nshared_buffers = 256\n# TCP/IP access is allowed by default, but the default access given in\n# pg_hba.conf will permit it only from localhost, not other machines.\ntcpip_socket = 1\n \n \nEXPLAIN ANALYZE for the query:\n \nprod=# explain analyze SELECT t.tgpid, t.directoryname, t.templateid,\ncount(*) AS requested FROM (spk_tgp t JOIN spk_tgplog l ON ((t.tgpid =\nl.tgpid))) GROUP BY t.tgpid, t.directoryname, t.templateid;\nNOTICE: QUERY PLAN:\n \nAggregate (cost=2740451.66..2820969.41 rows=805178 width=48) (actual\ntime=460577.85..528968.17 rows=1875 loops=1)\n -> Group (cost=2740451.66..2800839.97 rows=8051775 width=48) (actual\ntime=460577.57..516992.19 rows=8117748 loops=1)\n -> Sort (cost=2740451.66..2740451.66 rows=8051775 width=48)\n(actual time=460577.55..474657.59 rows=8117748 loops=1)\n -> Hash Join (cost=128.26..409517.83 rows=8051775\nwidth=48) (actual time=11.45..85332.88 rows=8117748 loops=1)\n -> Seq Scan on spk_tgplog l (cost=0.00..187965.75\nrows=8051775 width=8) (actual time=0.03..28926.67 rows=8125690 loops=1)\n -> Hash (cost=123.41..123.41 rows=1941 width=40)\n(actual time=11.28..11.28 rows=0 loops=1)\n -> Seq Scan on spk_tgp t (cost=0.00..123.41\nrows=1941 width=40) (actual time=0.06..7.60 rows=1880 loops=1)\nTotal runtime: 529542.66 msec\n \n \n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi All,\n \nI’m really like this list. Thank you for all the invaluable\ninformation! May I ask a question?\n \nI’ve got a table with about 8 million rows and\ngrowing. I must run reports daily off\nthis table, and another smaller one. \nTypical query – joins, groupings and\naggregates included. This certain report\ntakes about 10 minutes on average and is getting longer. I’ve created all the indices I think\nare necessary.\n \nAny advice on how I can get this puppy to go faster? Hardware changes are not an option at this point,\nso I’m hoping there is something else I can poke at. Anyone? \n \n \nTodd\n \n \n \n\nPOSTGRESQL CONF:\n\n \n#log_connections = on\n#fsync = off\n#max_connections = 64\n \n# Any option can also be given as a\ncommand line switch to the\n# postmaster, e.g., 'postmaster -c log_connections=on'. 
Some options\n# can be set at run-time with the 'SET' SQL command.\n \n# See /usr/share/doc/postgresql/README.postgresql.conf.gz\nfor a full list\n# of the allowable options\n \ndebug_level = 0\nlog_connections = on\nlog_pid = on\nlog_timestamp = on\nsyslog = 0\n# if syslog\nis 0, turn silent_mode off!\nsilent_mode = off\nsyslog_facility = LOCAL0\ntrace_notify = off\nmax_connections = 128\n# shared_buffers\nmust be at least twice max_connections, and not less\nthan 16\nshared_buffers = 256\n# TCP/IP access is allowed by default, but the default\naccess given in\n# pg_hba.conf\nwill permit it only from localhost, not other\nmachines.\ntcpip_socket = 1\n \n \n\nEXPLAIN ANALYZE for the query:\n\n \nprod=# explain\nanalyze SELECT t.tgpid, t.directoryname,\nt.templateid, count(*) AS requested FROM (spk_tgp t JOIN spk_tgplog l ON ((t.tgpid = l.tgpid))) GROUP BY t.tgpid, t.directoryname, t.templateid;\nNOTICE: QUERY PLAN:\n \nAggregate (cost=2740451.66..2820969.41\nrows=805178 width=48) (actual time=460577.85..528968.17 rows=1875 loops=1)\n -> Group \n(cost=2740451.66..2800839.97 rows=8051775 width=48) (actual\ntime=460577.57..516992.19 rows=8117748 loops=1)\n -> Sort (cost=2740451.66..2740451.66 rows=8051775\nwidth=48) (actual time=460577.55..474657.59 rows=8117748 loops=1)\n -> Hash Join (cost=128.26..409517.83 rows=8051775\nwidth=48) (actual time=11.45..85332.88 rows=8117748 loops=1)\n -> Seq\nScan on spk_tgplog l \n(cost=0.00..187965.75 rows=8051775 width=8) (actual time=0.03..28926.67\nrows=8125690 loops=1)\n -> Hash (cost=123.41..123.41 rows=1941 width=40)\n(actual time=11.28..11.28 rows=0 loops=1)\n \n-> Seq Scan on spk_tgp t (cost=0.00..123.41 rows=1941 width=40)\n(actual time=0.06..7.60 rows=1880 loops=1)\nTotal runtime: 529542.66 msec",
"msg_date": "Tue, 17 Feb 2004 09:06:48 -0800",
"msg_from": "\"Todd Fulton\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "long running query running too long"
},
{
"msg_contents": "\nOn Feb 17, 2004, at 10:06 AM, Todd Fulton wrote:\n>\n>\n> I’ve got a table with about 8 million rows and growing. I must run \n> reports daily off this table, and another smaller one. Typical query \n> – joins, groupings and aggregates included. This certain report takes \n> about 10 minutes on average and is getting longer. I’ve created all \n> the indices I think are necessary.\n>\n>\n\nWhat indexes have you created? The query is not using any indexes, so \nthere might be a problem there. Can you disable seqscans temporarily \nto test this?\n\n>\n> prod=# explain analyze SELECT t.tgpid, t.directoryname, t.templateid, \n> count(*) AS requested FROM (spk_tgp t JOIN spk_tgplog l ON ((t.tgpid = \n> l.tgpid))) GROUP BY t.tgpid, t.directoryname, t.templateid;\n\n\nCan you please send the results of the following commands:\n\npsql=# \\d spk_tgp\n\nand\n\npsql=# \\d spk_tgplog\n\n\nYou might also want to try using a sub-query instead of a join. I'm \nassuming that the spk_tgplog table has a lot of rows and spk_tgp has \nvery few rows. It might make sense to try something like this:\n\nEXPLAIN ANALYZE\nSELECT t.tgpid, t.directoryname, t.templateid, r.requested\nFROM (SELECT tgpid, count(*) AS requested FROM spk_tgplog GROUP BY \ntgpid) r, spk_tgp t\nWHERE r.tgpid = t.tgpid;\n\n--\nPC Drew\n\n",
"msg_date": "Tue, 17 Feb 2004 13:05:15 -0700",
"msg_from": "PC Drew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long running query running too long"
},
{
"msg_contents": "Hey! I think I have appropriate indexes, but might now. You're\nabsolutely right on my join -- spk_tgplog has the 8.5 million rows,\nspk_tgp around 2400. I'll try the sub-select. Here is the output you\nasked for:\n\nspank_prod=# \\d spk_tgp;\n Table \"spk_tgp\"\n Column | Type |\nModifiers\n----------------+--------------------------+----------------------------\n---------------------------------\n tgpid | bigint | not null\n directoryname | character varying(64) | not null\n directoryurl | character varying(1028) | not null\n submiturl | character varying(1028) |\n submitdate | date |\n acceptdate | date |\n templateid | character varying(64) | not null\n reciprocalcode | character varying(2056) |\n notes | character varying(2056) |\n createdate | timestamp with time zone | not null default\n('now'::text)::timestamp(6) with time zone\n modifydate | timestamp with time zone | not null default\n('now'::text)::timestamp(6) with time zone\n requested | integer |\n hostid | integer | default 1\nIndexes: idx_spk_tgp_tgpid\nPrimary key: pk_spk_tgp\n\nspank_prod=# \\d idx_spk_tgp_tgpid\n Index \"idx_spk_tgp_tgpid\"\n Column | Type\n---------------+-----------------------\n tgpid | bigint\n directoryname | character varying(64)\nbtree\n\nspank_prod=# \\d spk_tgplog;\n Table \"spk_tgplog\"\n Column | Type |\nModifiers\n---------------+--------------------------+-----------------------------\n--------------------------------\n remoteaddress | character varying(32) | not null\n tgpid | bigint | not null\n referer | character varying(256) |\n createdate | timestamp with time zone | not null default\n('now'::text)::timestamp(6) with time zone\nIndexes: idx_spk_tgplog_createdate,\n idx_spk_tgplog_tgpid\n\nspank_prod=# \\d idx_spk_tgplog_createdate\n Index \"idx_spk_tgplog_createdate\"\n Column | Type\n------------+--------------------------\n createdate | timestamp with time zone\nbtree\n\nspank_prod=# \\d idx_spk_tgplog_tgpid\nIndex \"idx_spk_tgplog_tgpid\"\n Column | Type\n--------+--------\n tgpid | bigint\nbtree\n\n\n\nTodd\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of PC Drew\nSent: Tuesday, February 17, 2004 12:05 PM\nTo: Todd Fulton\nCc: [email protected]\nSubject: Re: [PERFORM] long running query running too long\n\n\nOn Feb 17, 2004, at 10:06 AM, Todd Fulton wrote:\n>\n>\n> Ive got a table with about 8 million rows and growing. I must run \n> reports daily off this table, and another smaller one. Typical query \n> joins, groupings and aggregates included. This certain report takes\n\n> about 10 minutes on average and is getting longer. Ive created all \n> the indices I think are necessary.\n>\n>\n\nWhat indexes have you created? The query is not using any indexes, so \nthere might be a problem there. Can you disable seqscans temporarily \nto test this?\n\n>\n> prod=# explain analyze SELECT t.tgpid, t.directoryname, t.templateid, \n> count(*) AS requested FROM (spk_tgp t JOIN spk_tgplog l ON ((t.tgpid =\n\n> l.tgpid))) GROUP BY t.tgpid, t.directoryname, t.templateid;\n\n\nCan you please send the results of the following commands:\n\npsql=# \\d spk_tgp\n\nand\n\npsql=# \\d spk_tgplog\n\n\nYou might also want to try using a sub-query instead of a join. I'm \nassuming that the spk_tgplog table has a lot of rows and spk_tgp has \nvery few rows. 
It might make sense to try something like this:\n\nEXPLAIN ANALYZE\nSELECT t.tgpid, t.directoryname, t.templateid, r.requested\nFROM (SELECT tgpid, count(*) AS requested FROM spk_tgplog GROUP BY \ntgpid) r, spk_tgp t\nWHERE r.tgpid = t.tgpid;\n\n--\nPC Drew\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n",
"msg_date": "Tue, 17 Feb 2004 12:41:51 -0800",
"msg_from": "\"Todd Fulton\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: long running query running too long"
},
{
"msg_contents": "\nOn Feb 17, 2004, at 1:41 PM, Todd Fulton wrote:\n>\n> spank_prod=# \\d idx_spk_tgp_tgpid\n> Index \"idx_spk_tgp_tgpid\"\n> Column | Type\n> ---------------+-----------------------\n> tgpid | bigint\n> directoryname | character varying(64)\n> btree\n>\n\nA couple of things to note:\n\n1. What version of PostgreSQL are you running? I'm currently running \n7.3.4 and my output of \\d on a table shows more index information than \nyours does. If you're running anything earlier than 7.3, I'd \ndefinitely recommend that you upgrade.\n\n2. Why are you using a multicolumn index in this case? You might want \nto read the page in the documentation that discusses multi-column \nindexes specifically.\n\nhttp://www.postgresql.org/docs/7.4/interactive/indexes-multicolumn.html\n\nIn any case, it might even be the case that the index isn't being used \nat all. Does anyone know if indexes are used in a case like this:\n\n> spk_tgp t JOIN spk_tgplog l ON (t.tgpid = l.tgpid)\n\nMy hunch is that it's not used. My understanding is that an index acts \nmore as a shortcut so the database doesn't have to go through the \nentire table to look for specific values. When joining two tables, \nhowever, you inherently have to go through the entire table. If anyone \ncan clarify this, that'd be great.\n\n--\nPC Drew\n\n",
"msg_date": "Tue, 17 Feb 2004 16:17:05 -0700",
"msg_from": "PC Drew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long running query running too long"
},
{
"msg_contents": "\"Todd Fulton\" <[email protected]> writes:\n> prod=# explain analyze SELECT t.tgpid, t.directoryname, t.templateid,\n> count(*) AS requested FROM (spk_tgp t JOIN spk_tgplog l ON ((t.tgpid =\n> l.tgpid))) GROUP BY t.tgpid, t.directoryname, t.templateid;\n> NOTICE: QUERY PLAN:\n \n> Aggregate (cost=2740451.66..2820969.41 rows=805178 width=48) (actual\n> time=460577.85..528968.17 rows=1875 loops=1)\n> -> Group (cost=2740451.66..2800839.97 rows=8051775 width=48) (actual\n> time=460577.57..516992.19 rows=8117748 loops=1)\n> -> Sort (cost=2740451.66..2740451.66 rows=8051775 width=48)\n> (actual time=460577.55..474657.59 rows=8117748 loops=1)\n> -> Hash Join (cost=128.26..409517.83 rows=8051775\n> width=48) (actual time=11.45..85332.88 rows=8117748 loops=1)\n> -> Seq Scan on spk_tgplog l (cost=0.00..187965.75\n> rows=8051775 width=8) (actual time=0.03..28926.67 rows=8125690 loops=1)\n> -> Hash (cost=123.41..123.41 rows=1941 width=40)\n> (actual time=11.28..11.28 rows=0 loops=1)\n> -> Seq Scan on spk_tgp t (cost=0.00..123.41\n> rows=1941 width=40) (actual time=0.06..7.60 rows=1880 loops=1)\n> Total runtime: 529542.66 msec\n\nThe join itself is being done fine --- I doubt there is another option\nthat will go faster, given the difference in the table sizes. Note the\njoin step completes in only 85 seconds. What is killing you is the\nsorting/grouping operation. You could try increasing sort_mem to see\nif that makes it go any faster, but I suspect the best answer would be to\nupdate to PG 7.4. 7.4 will probably use hash aggregation for this and\navoid the sort altogether.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 17 Feb 2004 18:55:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: long running query running too long "
}
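A concrete version of the sort_mem suggestion, with an illustrative value only; the setting is measured in KB, applies per sort and per backend (so it should be sized with the number of concurrent sessions in mind), and was renamed work_mem in later releases:

  SET sort_mem = 65536;   -- 64 MB for this session only
  EXPLAIN ANALYZE
  SELECT t.tgpid, t.directoryname, t.templateid, count(*) AS requested
  FROM spk_tgp t JOIN spk_tgplog l ON (t.tgpid = l.tgpid)
  GROUP BY t.tgpid, t.directoryname, t.templateid;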
] |
[
{
"msg_contents": "It seems, that if I know the type and frequency of the queries a\ndatabase will be seeing, I could split the database by hand over\nmultiple disks and get better performance that I would with a RAID array\nwith similar hardware. Most of the data is volatile\nand easily replaceable (and the rest is backed up independently), so\nredundancy isn't importand, and I'm willing to do some ongoing\nmaintenance if I can get a decent speed boost. Am I misguided, or might\nthis work? details of my setup are below:\n\nSix large (3-7 Mrow) 'summary' tables, each being updated continuously\nby 5-20 processes with about 0.5 transactions/second/process. \n\nPeriodically (currently every two weeks), join queries are\nperformed between one of the 'summary' tables(same one each time) and\neach of the other five. Each join touches most rows of both tables,\nindexes aren't used. Results are written into a separate group of\n'inventory' tables (about 500 Krow each), one for each join.\n\nThere are frequent (100-1000/day) queries of both the\ninventory and summary tables using the primary key -- always using the\nindex and returning < 10 rows.\n\nWe're currently getting (barely) acceptable performance from a single\n15k U160 SCSI disk, but db size and activity are growing quickly. \nI've got more disks and a battery-backed LSI card on order.\n\n-mike\n\n\n-- \nMike Glover\nGPG Key ID BFD19F2C <[email protected]>",
"msg_date": "Tue, 17 Feb 2004 13:53:42 -0800",
"msg_from": "Mike Glover <[email protected]>",
"msg_from_op": true,
"msg_subject": "RAID or manual split?"
},
{
"msg_contents": "> There are frequent (100-1000/day) queries of both the\n> inventory and summary tables using the primary key -- always using the\n> index and returning < 10 rows.\n\nFor this query frequency I don't think splitting the drives will do much\n-- you just need more IO. Look at optimizing the query themselves,\nspecifically ensuring the useful information is already in memory.\n\nIf the 10 rows are recent, you might try using partial indexes with the\nlast days worth of information instead of an index across the entire\ntable.\n\n> We're currently getting (barely) acceptable performance from a single\n> 15k U160 SCSI disk, but db size and activity are growing quickly. \n> I've got more disks and a battery-backed LSI card on order.\n\nConfigure for Raid 10 and you're off.\n\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\nPGP Key: http://www.rbt.ca/signature.asc",
"msg_date": "Tue, 17 Feb 2004 17:17:15 -0500",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID or manual split?"
},
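A rough sketch of the partial-index idea; the table and column names are invented since the poster's schema wasn't shown, and because the predicate cannot use a moving value such as now(), the cutoff has to be a literal and the index re-created (or replaced) periodically as the window advances:

  CREATE INDEX summary_recent_idx ON summary_table (item_id)
    WHERE created_at >= '2004-02-17';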
{
"msg_contents": "> It seems, that if I know the type and frequency of the queries a\n> database will be seeing, I could split the database by hand over\n> multiple disks and get better performance that I would with a RAID array\n> with similar hardware.\n\nUnlikely, but possible if you had radically different hardware for\ndifferent tables.\n\n> Six large (3-7 Mrow) 'summary' tables, each being updated continuously\n> by 5-20 processes with about 0.5 transactions/second/process.\n\nWell you should get close to an order of magnitude better performance from\na RAID controller with write-back cache on those queries.\n\n> Periodically (currently every two weeks), join queries are\n> performed between one of the 'summary' tables(same one each time) and\n> each of the other five. Each join touches most rows of both tables,\n> indexes aren't used. Results are written into a separate group of\n> 'inventory' tables (about 500 Krow each), one for each join.\n\nThe more disks the data is spread over the better (the RAID controller\nwill help here with striping).\n\n> There are frequent (100-1000/day) queries of both the\n> inventory and summary tables using the primary key -- always using the\n> index and returning < 10 rows.\n\nRAM is what you need, to cache the data and indexes, and then as much CPU\npower as you can get.\n\n> We're currently getting (barely) acceptable performance from a single\n> 15k U160 SCSI disk, but db size and activity are growing quickly.\n> I've got more disks and a battery-backed LSI card on order.\n\n3 or more disks in a stripe set, with write back caching, will almost\ncertainly give a huge performance boost. Try that first, and only if you\nhave issues should you think about futzing with symlinks etc.\n\nM\n",
"msg_date": "Tue, 17 Feb 2004 22:34:52 -0000 (GMT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: RAID or manual split?"
}
] |
[
{
"msg_contents": "We have a large (several million row) table with a field containing \nURLs. Now, funny thing about URLs: they mostly start with a common \nsubstring (\"http://www.\"). But not all the rows start with this, so we \ncan't just lop off the first N characters. However, we noticed some time \nago that an index on this field wasn't as effective as an index on the \nREVERSE of the field. So ...\n\nCREATE OR REPLACE FUNCTION fn_urlrev(text) returns text as '\nreturn reverse(lc($_[0]))\n' language 'plperl' with (iscachable,isstrict);\n\nand then\n\nCREATE UNIQUE INDEX ix_links_3 ON links\n(fn_urlrev(path_base));\n\nseemed to be much faster. When we have to look up a single entry in \n\"links\", we do so by something like --\n\nSELECT * FROM links WHERE fn_urlrev(path_base) = ?;\n\nand it's rather fast. When we have a bunch of them to do, under 7.3 we \nfound it useful to create a temporary table, fill it with reversed URLs, \nand join:\n\nINSERT INTO temp_link_urls VALUES (fn_urlrev(?));\n\nSELECT l.path_base,l.link_id\n FROM links l\n JOIN temp_link_urls t\n ON (fn_urlrev(l.path_base) = t.rev_path_base);\n\nHere are query plans from the two versions (using a temp table with 200 \nrows, after ANALYZE on the temp table):\n\n7.3:\n\n# explain select link_id from links l join clm_tmp_links t on \n(fn_urlrev(l.path_base) = t.rev_path_base);\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..3936411.13 rows=2000937 width=152)\n -> Seq Scan on clm_tmp_links t (cost=0.00..5.00 rows=200 width=74)\n -> Index Scan using ix_links_3 on links l (cost=0.00..19531.96 \nrows=10005 width=78)\n Index Cond: (fn_urlrev(l.path_base) = \"outer\".rev_path_base)\n(4 rows)\n\n\n7.4:\n\n # explain select link_id from links l join clm_tmp_links t on \n(fn_urlrev(l.path_base) = t.rev_path_base);\n QUERY PLAN\n------------------------------------------------------------------------------\n Hash Join (cost=5.50..88832.88 rows=1705551 width=4)\n Hash Cond: (fn_urlrev(\"outer\".path_base) = \"inner\".rev_path_base)\n -> Seq Scan on links l (cost=0.00..50452.50 rows=1705550 width=78)\n -> Hash (cost=5.00..5.00 rows=200 width=74)\n -> Seq Scan on clm_tmp_links t (cost=0.00..5.00 rows=200 \nwidth=74)\n(5 rows)\n\nAlthough the cost for the 7.4 query is lower, the 7.3 plan executes in \nabout 3 seconds, while the 7.4 plan executes in 59.8 seconds!\n\nNow the odd part: if I change the query to this:\n\n# explain analyze select link_id from links l join clm_tmp_links t on \n(fn_urlrev(l.path_base) = fn_urlrev(t.rev_path_base));\n QUERY \nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=12.64..219974.16 rows=1705551 width=4) (actual \ntime=17.928..17.928 rows=0 loops=1)\n Merge Cond: (fn_urlrev(\"outer\".path_base) = \"inner\".\"?column2?\")\n -> Index Scan using ix_links_3 on links l (cost=0.00..173058.87 \nrows=1705550 width=78) (actual time=0.229..0.285 rows=7 loops=1)\n -> Sort (cost=12.64..13.14 rows=200 width=74) (actual \ntime=9.652..9.871 rows=200 loops=1)\n Sort Key: fn_urlrev(t.rev_path_base)\n -> Seq Scan on clm_tmp_links t (cost=0.00..5.00 rows=200 \nwidth=74) (actual time=0.166..5.753 rows=200 loops=1)\n Total runtime: 18.125 ms\n\n(i.e., apply the function to the data in the temp table), it runs a \nwhole lot faster! Is this a bug in the optimizer? 
Or did something \nchange about the way functional indexes are used?\n\n-- \nJeff Boes vox 269.226.9550 ext 24\nDatabase Engineer fax 269.349.9076\nNexcerpt, Inc. http://www.nexcerpt.com\n ...Nexcerpt... Extend your Expertise\n\n",
"msg_date": "Wed, 18 Feb 2004 10:24:22 -0500",
"msg_from": "Jeff Boes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer difference using function index between 7.3 and 7.4"
},
{
"msg_contents": "Jeff Boes <[email protected]> writes:\n> Is this a bug in the optimizer? Or did something \n> change about the way functional indexes are used?\n\nIn 7.3, the only possible plan for these queries was a nestloop or\nnestloop with inner indexscan, because the planner could not generate\nmerge or hash joins on join conditions more complex than \"var1 = var2\".\nYou were fortunate that a nestloop was fast enough for your situation.\n\nIn 7.4 the planner can (as you see) generate both merge and hash options\nfor this query. What it's not very good at yet is picking the best\noption to use, because it doesn't have any statistics about the\ndistribution of functional indexes, and so it's pretty much guessing\nabout selectivity.\n\nAs of just a couple days ago, there is code in CVS tip that keeps and\nuses stats about the values of functional index columns. It seems\nlikely that this would help out tremendously in terms of estimating\nthe costs well for your problem. Don't suppose you'd like to try\nsetting up a test system with your data and trying it ...\n\nBTW, as best I can tell, the amazing speed for the mergejoin is a bit of\na fluke.\n\n> Merge Join (cost=12.64..219974.16 rows=1705551 width=4) (actual \n> time=17.928..17.928 rows=0 loops=1)\n> Merge Cond: (fn_urlrev(\"outer\".path_base) = \"inner\".\"?column2?\")\n> -> Index Scan using ix_links_3 on links l (cost=0.00..173058.87 \n> rows=1705550 width=78) (actual time=0.229..0.285 rows=7 loops=1)\n> -> Sort (cost=12.64..13.14 rows=200 width=74) (actual \n> time=9.652..9.871 rows=200 loops=1)\n> Sort Key: fn_urlrev(t.rev_path_base)\n> -> Seq Scan on clm_tmp_links t (cost=0.00..5.00 rows=200 \n> width=74) (actual time=0.166..5.753 rows=200 loops=1)\n> Total runtime: 18.125 ms\n\nNotice how the indexscan on links is reporting that it only returned\n7 rows. Ordinarily you'd expect that it'd scan the whole table (and\nthat's what the cost estimate is expecting). I think what must have\nhappened is that the scan stopped only a little way into the table,\nbecause the sequence of values from the temp table ended with a value\nthat was close to the start of the range of values in the main table.\nMergejoin stops fetching as soon as it exhausts either input table.\nThis was good luck for you in this case but would likely not hold up\nwith another set of temp values.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Feb 2004 11:55:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer difference using function index between 7.3 and 7.4 "
},
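One low-effort way to probe Tom's point about plan choice versus actual cost is to disable the hash join for a single session and compare EXPLAIN ANALYZE timings against the default plan; this is a diagnostic knob only, not something to leave set in production:

  SET enable_hashjoin = off;
  EXPLAIN ANALYZE
  SELECT link_id
  FROM links l JOIN clm_tmp_links t ON (fn_urlrev(l.path_base) = t.rev_path_base);
  RESET enable_hashjoin;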
{
"msg_contents": ">Jeff Boes writes\n> # explain select link_id from links l join clm_tmp_links t on\n> (fn_urlrev(l.path_base) = t.rev_path_base);\n\n> executes in 59.8 seconds!\n\n> Now the odd part: if I change the query to this:\n> \n> # explain analyze select link_id from links l join clm_tmp_links t on\n> (fn_urlrev(l.path_base) = fn_urlrev(t.rev_path_base));\n\n> Total runtime: 18.125 ms\n> \n> (i.e., apply the function to the data in the temp table), it runs a\n> whole lot faster! Is this a bug in the optimizer? Or did something\n> change about the way functional indexes are used?\n\nErm..I may have misunderstood your example, but surely the second\nformulation of your query returns the wrong answer? It looks to me as if\nyou are comparing a reversed URL with a twice-reversed URL; if that's\ntrue that would explain why it runs faster: They don't ever match. Is\nthat right?\n \nThanks for the idea of reversing the URLs, nice touch. I'd been thinking\nabout reverse key indexes as a way of relieving the hotspot down the\nrightmost edge of an index during heavy insert traffic. I hadn't thought\nthis would also speed up the access also. \n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Thu, 19 Feb 2004 20:58:03 -0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer difference using function index between 7.3 and 7.4"
}
] |
[
{
"msg_contents": "All three tables have the same integer key, and it's indexed.\n\nI parenthesized the joins to do the two small tables first.\n\nI'm running and INSERT INTO ... SELECT query with this join (one record added per record in join), 4 hours down and all I have to show for it is 100 recycled transaction logs. ?\n\nIf it ever returns, I can post the Explain output.\n\n\n\n\n\n\nAll three tables have the same integer key, \nand it's indexed.\n \nI parenthesized the joins to do the two \nsmall tables first.\n \nI'm running and INSERT INTO ... SELECT \nquery with this join (one record added per record in join), 4 hours down and all \nI have to show for it is 100 recycled transaction logs. ?\n \nIf it ever returns, I can post the Explain \noutput.",
"msg_date": "Wed, 18 Feb 2004 14:51:52 -0800",
"msg_from": "\"Andrew Lazarus\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "JOIN order, 15K, 15K, 7MM rows"
},
{
"msg_contents": "Andrew,\n\n> I'm running and INSERT INTO ... SELECT query with this join (one record\n> added per record in join), 4 hours down and all I have to show for it is\n> 100 recycled transaction logs. ?\n>\n> If it ever returns, I can post the Explain output.\n\nHow about giving us the query and the regular EXPLAIN output right now? I've \na feeling that you have an unconstrained join in the query somewhere.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 22 Feb 2004 11:00:40 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: JOIN order, 15K, 15K, 7MM rows"
}
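Since the query itself wasn't posted, the only generic sanity check worth sketching is to run EXPLAIN (without ANALYZE) on the SELECT part before starting the INSERT: an estimated row count near the product of the input sizes (15K x 15K x 7M) rather than in the millions is the signature of the unconstrained join Josh suspects. Table and column names below are placeholders:

  EXPLAIN
  SELECT b.id, s1.col_a, s2.col_b
  FROM small_a s1
  JOIN small_b s2 ON (s1.id = s2.id)
  JOIN big_table b ON (b.id = s1.id);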
] |
[
{
"msg_contents": "Folks,\n\nHave an interesting issue with a complex query, where apparently I need to \ntwist the query planner's arm, and am looking for advice on how to do so.\n\nThe situation: I have a table, events, with about 300,000 records. \nIt does an outer join to a second table, cases, with about 150,000 records.\n\nA very simplified version query would be like\n\nSELECT *\nFROM events LEFT OUTER JOIN cases ON events.case_id = cases.case_id\nWHERE events.event_date BETWEEN 'date1' AND 'date2'\n\nThis join is very expensive, as you can imagine. Yet I can't seem to force \nthe query planner to apply the filter conditions to the events table *before* \nattempting to join it to cases. Here's the crucial explain lines:\n\n -> Merge Left Join (cost=0.00..11880.82 \nrows=15879 width=213) (actual time=5.777..901.899 rows=648 loops=1)\n Merge Cond: (\"outer\".case_id = \n\"inner\".case_id)\n Join Filter: ((\"outer\".link_type)::text \n= 'case'::text)\n -> Index Scan using idx_event_ends on \nevents (cost=0.00..4546.15 rows=15879 width=80\n) (actual time=4.144..333.769 rows=648 loops=1)\n Filter: ((status <> 0) AND \n((event_date + duration) >= '2004-02-18 00:00:00'::timestamp without time \nzone) AND (event_date <= '2004-03-05 23:59:00'::timestamp without time zone))\n -> Index Scan using cases_pkey on \ncases (cost=0.00..6802.78 rows=117478 width=137) (\nactual time=0.139..402.363 rows=116835 loops=1)\n\nAs you can see, part of the problem is a pretty drastic (20x) mis-estimation \nof the selectivity of the date limits on events -- and results in 90% of the \nexecution time of my query on this one join. I've tried raising the \nstatistics on event_date, duration, and case_id (to 1000), but this doesn't \nseem to affect the estimate or the query plan.\n\nIn the above test, idx_event_ends indexes (case_id, status, event_date, \n(event_date + duration)), but as you can see the planner uses only the first \ncolumn. This was an attempt to circumvent the planner's tendency to \ncompletely ignoring any index on (event_date, (event_date + duration)) -- \neven though that index is the most selective combination on the events table.\n\nIs there anything I can do to force the query planner to filter on events \nbefore joining cases, other than waiting for version 7.5?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 18 Feb 2004 16:10:31 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Forcing filter/join order?"
},
{
"msg_contents": "Folks,\n\nHmmm posted too soon. Figured out the problem:\n\nThe planner can't, or doesn't want to, use an index on (event_date, \n(event_date + duration)) where the first column is an ascending sort and the \nsecond a descending sort. So I've coded a workaround that's quite \ninelegant but does get the correct results in 0.3 seconds (as opposed to the \n2.2 seconds taken by the example plan).\n\nIs this the sort of thing which is ever likely to get fixed, or just a \nfundamental limitation of index algorithms? Would using a non B-Tree index \nallow me to work around this?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 18 Feb 2004 16:30:43 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing filter/join order?"
},
{
"msg_contents": "Peter,\n\n> \tI'm sure the big brains have a better suggestion, but in the mean time\n> could you do something as simple as:\n> \n> SELECT *\n> FROM (select * from events where event_date BETWEEN 'date1' AND 'date2') e\n> LEFT OUTER JOIN cases ON e.case_id = cases.case_id;\n\nThanks, but that doens't work; the planner will collapse the subquery into the \nmain query, which most of the time is the right thing to do.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 18 Feb 2004 16:31:54 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing filter/join order?"
},
{
"msg_contents": "Josh,\n\tI'm sure the big brains have a better suggestion, but in the mean time\ncould you do something as simple as:\n\nSELECT *\nFROM (select * from events where event_date BETWEEN 'date1' AND 'date2') e\nLEFT OUTER JOIN cases ON e.case_id = cases.case_id;\n\nThanks,\nPeter Darley\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Josh Berkus\nSent: Wednesday, February 18, 2004 4:11 PM\nTo: pgsql-performance\nSubject: [PERFORM] Forcing filter/join order?\n\n\nFolks,\n\nHave an interesting issue with a complex query, where apparently I need to\ntwist the query planner's arm, and am looking for advice on how to do so.\n\nThe situation: I have a table, events, with about 300,000 records.\nIt does an outer join to a second table, cases, with about 150,000 records.\n\nA very simplified version query would be like\n\nSELECT *\nFROM events LEFT OUTER JOIN cases ON events.case_id = cases.case_id\nWHERE events.event_date BETWEEN 'date1' AND 'date2'\n\nThis join is very expensive, as you can imagine. Yet I can't seem to force\nthe query planner to apply the filter conditions to the events table\n*before*\nattempting to join it to cases. Here's the crucial explain lines:\n\n -> Merge Left Join (cost=0.00..11880.82\nrows=15879 width=213) (actual time=5.777..901.899 rows=648 loops=1)\n Merge Cond: (\"outer\".case_id =\n\"inner\".case_id)\n Join Filter:\n((\"outer\".link_type)::text\n= 'case'::text)\n -> Index Scan using idx_event_ends\non\nevents (cost=0.00..4546.15 rows=15879 width=80\n) (actual time=4.144..333.769 rows=648 loops=1)\n Filter: ((status <> 0) AND\n((event_date + duration) >= '2004-02-18 00:00:00'::timestamp without time\nzone) AND (event_date <= '2004-03-05 23:59:00'::timestamp without time\nzone))\n -> Index Scan using cases_pkey on\ncases (cost=0.00..6802.78 rows=117478 width=137) (\nactual time=0.139..402.363 rows=116835 loops=1)\n\nAs you can see, part of the problem is a pretty drastic (20x) mis-estimation\nof the selectivity of the date limits on events -- and results in 90% of the\nexecution time of my query on this one join. I've tried raising the\nstatistics on event_date, duration, and case_id (to 1000), but this doesn't\nseem to affect the estimate or the query plan.\n\nIn the above test, idx_event_ends indexes (case_id, status, event_date,\n(event_date + duration)), but as you can see the planner uses only the first\ncolumn. This was an attempt to circumvent the planner's tendency to\ncompletely ignoring any index on (event_date, (event_date + duration)) --\neven though that index is the most selective combination on the events\ntable.\n\nIs there anything I can do to force the query planner to filter on events\nbefore joining cases, other than waiting for version 7.5?\n\n--\n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n",
"msg_date": "Wed, 18 Feb 2004 16:40:33 -0800",
"msg_from": "\"Peter Darley\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing filter/join order?"
},
{
"msg_contents": "On Wed, 18 Feb 2004, Josh Berkus wrote:\n\n> The planner can't, or doesn't want to, use an index on (event_date,\n> (event_date + duration)) where the first column is an ascending sort and the\n> second a descending sort. So I've coded a workaround that's quite\n> inelegant but does get the correct results in 0.3 seconds (as opposed to the\n> 2.2 seconds taken by the example plan).\n\nCan you give more information? I know that I'm not exactly certain what\nthe situation is from the above and the original query/explain piece.\n",
"msg_date": "Wed, 18 Feb 2004 16:56:22 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing filter/join order?"
},
{
"msg_contents": "Stephan,\n\n> Can you give more information? I know that I'm not exactly certain what\n> the situation is from the above and the original query/explain piece.\n> \n\nBelieve me, if I posted the query it wouldn't help. Heck, I'd have trouble \nfollowing it without my notes.\n\na simplifed version:\n\nSELECT events.*, cases.case_name\nFROM events LEFT OUTER JOIN cases ON events.case_id = cases.case_id\nWHERE (event_date >= '2004-03-05' OR (event_date + duration) <= '2004-02-18')\n\tAND events.status <> 0;\n\n... this is to get me all vaild events which overlap with the range \n'2004-02-18' to '2004-03-05'.\n\nI had thought, in 7.4, that adding an index on (event_date, (event_date + \nduration)) would improve the execution of this query. It doesn't, \npresumably because the multi-column index can't be used for both ascending \nand descending sorts at the same time, and event_date >= '2004-03-05' isn't \nselective enough.\n\nThere was a workaround for this posted on hackers about a year ago as I \nrecally, that involved creating custom operators for indexing. Too much \ntrouble when there's a hackish workaround (due to the fact that events have \nto be less than a month long).\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 18 Feb 2004 17:18:22 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing filter/join order?"
},
{
"msg_contents": "On Wed, 18 Feb 2004, Josh Berkus wrote:\n\n> Stephan,\n>\n> > Can you give more information? I know that I'm not exactly certain what\n> > the situation is from the above and the original query/explain piece.\n> >\n>\n> Believe me, if I posted the query it wouldn't help. Heck, I'd have trouble\n> following it without my notes.\n>\n> a simplifed version:\n>\n> SELECT events.*, cases.case_name\n> FROM events LEFT OUTER JOIN cases ON events.case_id = cases.case_id\n> WHERE (event_date >= '2004-03-05' OR (event_date + duration) <= '2004-02-18')\n> \tAND events.status <> 0;\n>\n> ... this is to get me all vaild events which overlap with the range\n> '2004-02-18' to '2004-03-05'.\n>\n> I had thought, in 7.4, that adding an index on (event_date, (event_date +\n> duration)) would improve the execution of this query. It doesn't,\n> presumably because the multi-column index can't be used for both ascending\n> and descending sorts at the same time, and event_date >= '2004-03-05' isn't\n> selective enough.\n\nI don't think the direction issue is the problem in the above. I think\nthe problem is that given a condition like:\n a>value or b<othervalue\nan index on (a,b) doesn't appear to be considered probably since the\nb<othervalue wouldn't be indexable by that index and you can't use the\na>value alone since that'd do the wrong thing.\n\nTesting on a two column table, I see behavior like the following (with\nseqscan off)\n\nsszabo=# create table q2(a int, b int);\nCREATE TABLE\nsszabo=# create index q2ind on q2(a,b);\nCREATE INDEX\nsszabo=# set enable_seqscan=off;\nSET\nsszabo=# explain select * from q2 where a>3 and b<5;\n QUERY PLAN\n-------------------------------------------------------------------\n Index Scan using q2ind on q2 (cost=0.00..42.79 rows=112 width=8)\n Index Cond: ((a > 3) AND (b < 5))\n(2 rows)\n\nsszabo=# explain select * from q2 where a>3 or b<5;\n QUERY PLAN\n--------------------------------------------------------------------\n Seq Scan on q2 (cost=100000000.00..100000025.00 rows=556 width=8)\n Filter: ((a > 3) OR (b < 5))\n(2 rows)\n\nsszabo=# create index q2ind2 on q2(b);\nCREATE INDEX\nsszabo=# explain select * from q2 where a>3 or b<5;\n QUERY PLAN\n---------------------------------------------------------------------------\n Index Scan using q2ind, q2ind2 on q2 (cost=0.00..92.68 rows=556 width=8)\n Index Cond: ((a > 3) OR (b < 5))\n(2 rows)\n\n\n",
"msg_date": "Wed, 18 Feb 2004 19:14:52 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing filter/join order?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> SELECT events.*, cases.case_name\n> FROM events LEFT OUTER JOIN cases ON events.case_id = cases.case_id\n> WHERE (event_date >= '2004-03-05' OR (event_date + duration) <= '2004-02-18')\n> \tAND events.status <> 0;\n\n> ... this is to get me all vaild events which overlap with the range \n> '2004-02-18' to '2004-03-05'.\n\nDid you mean events that *don't* overlap with the range? Seems like\nwhat you say you want should be expressed as\n\nevent_date <= 'end-date' AND (event_date + duration) >= 'start-date'\n\nThis assumes duration is never negative of course.\n\nI think you could make this btree-indexable by negating the second\nclause. Imagine\n\ncreate index evi on events (event_date, (-(event_date+duration)))\n\nand then transforming the query to\n\nevent_date <= 'end-date' AND -(event_date + duration) <= -'start-date'\n\nbut that doesn't quite work because there's no unary minus for date or\ntimestamp types. Is this too ugly for you?\n\ncreate index evi on events (event_date, ('ref-date'-event_date-duration))\n\nevent_date <= 'end-date'\nAND ('ref-date'-event_date-duration) <= 'ref-date'-'start-date'\n\nwhere 'ref-date' is any convenient fixed reference date, say 1-1-2000.\n\nNow, what this will look like to the planner is a one-sided two-column\nrestriction, and I'm not certain that the planner will assign a\nsufficiently small selectivity estimate. But in theory it could work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 18 Feb 2004 23:26:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing filter/join order? "
},
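To make the reference-date trick above concrete, a sketch follows. The table and column names (events, event_date, duration) and the date range are taken from this thread; the choice of 2000-01-01 as the fixed reference date, and the assumption that event_date is a timestamp and duration a non-negative interval, are illustrative only.

-- Functional index whose second column grows as (event_date + duration) shrinks:
CREATE INDEX evi ON events (event_date, ((timestamp '2000-01-01' - event_date) - duration));

-- Both conditions are now one-sided "<=" tests on the two index columns:
SELECT *
FROM events
WHERE event_date <= '2004-03-05'
  AND ((timestamp '2000-01-01' - event_date) - duration)
      <= (timestamp '2000-01-01' - timestamp '2004-02-18');

Whether the planner actually picks the index still depends on the selectivity estimate, as noted above.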
{
"msg_contents": "Tom,\n\nFirst off, you are correct, I swapped the dates when typing the simplified \nquery into e-mail.\n\n> create index evi on events (event_date, ('ref-date'-event_date-duration))\n> \n> event_date <= 'end-date'\n> AND ('ref-date'-event_date-duration) <= 'ref-date'-'start-date'\n> \n> where 'ref-date' is any convenient fixed reference date, say 1-1-2000.\n> \n> Now, what this will look like to the planner is a one-sided two-column\n> restriction, and I'm not certain that the planner will assign a\n> sufficiently small selectivity estimate. But in theory it could work.\n\nInteresting idea. I'll try it just to see if it works when I have a chance. \n\nIn the meantime, for production, I'll stick with the hackish solution I was \nusing under 7.2. \n\nKnowing that events are never more than one month long for this application, I \ncan do:\n\n\"WHERE event.event_date >= (begin_date - '1 month) AND event.event_date <= \nend_date\"\n\n... which works because I have a child table which has event information by \nday:\n\nAND events.event_id IN (SELECT event_id FROM event_day\n\tWHERE calendar_day BETWEEN begin_date AND end_date);\n\nNote that this subselect isn't sufficent on its own, because once again the \nquery planner is unable to correctly estimate the selectivity of the \nsubselect. It needs the \"help\" of the filter against events.event_date.\n\nThis is the workaround I was using with 7.2. I had just hoped that some of \nthe improvements that Tom has made over the last two versions would cure the \nproblem, but no dice.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 18 Feb 2004 20:49:49 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing filter/join order?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Knowing that events are never more than one month long for this\n> application, I can do:\n\n> \"WHERE event.event_date >= (begin_date - '1 month) AND event.event_date <= \n> end_date\"\n\n> ... which works because I have a child table which has event information by \n> day:\n\nUh, why do you need the child table? Seems like the correct incantation\ngiven an assumption about maximum duration is\n\nevent_date <= 'end-date' AND (event_date + duration) >= 'start-date'\nAND event_date >= 'start-date' - 'max-duration'\n\nThe last clause is redundant with the one involving the duration field,\nbut it provides a lower bound for the index scan on event_date. The\nonly index you really need here is one on event_date, but possibly one\non (event_date, (event_date + duration)) would be marginally faster.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 19 Feb 2004 00:08:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing filter/join order? "
},
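Put together with the dates used earlier in the thread, the suggestion above comes out roughly as below. The one-month cap on duration is the application-specific assumption Josh mentioned, and the index name is a placeholder; the plain index on event_date is what the bounded scan needs.

CREATE INDEX idx_events_event_date ON events (event_date);

SELECT events.*, cases.case_name
FROM events LEFT OUTER JOIN cases ON events.case_id = cases.case_id
WHERE event_date <= '2004-03-05'                                  -- starts before the range ends
  AND (event_date + duration) >= '2004-02-18'                     -- ends after the range starts
  AND event_date >= timestamp '2004-02-18' - interval '1 month'   -- redundant, but bounds the index scan
  AND events.status <> 0;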
{
"msg_contents": "Tom,\n\n> Uh, why do you need the child table? \n\nBecause there's linked information which needs to be kept by day for multi-day \nevents. Also, it makes calendar reports easier, where one wants each day of \na multi-day event to appear on each day of the calendar.\n\n>Seems like the correct incantation\n> given an assumption about maximum duration is\n> \n> event_date <= 'end-date' AND (event_date + duration) >= 'start-date'\n> AND event_date >= 'start-date' - 'max-duration'\n\nHmmmm ... so the same as what I have, only with the extra condition for \nevent_date+duration and without the IN clause. I'll try it, thanks!\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 19 Feb 2004 09:39:12 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing filter/join order?"
},
{
"msg_contents": "Tom,\n\n> event_date <= 'end-date' AND (event_date + duration) >= 'start-date'\n> AND event_date >= 'start-date' - 'max-duration'\n\nGreat suggestion! We're down to 160ms, from about 370ms with my subselect \nworkaround. Thanks!\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 19 Feb 2004 13:17:35 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing filter/join order?"
}
] |
[
{
"msg_contents": "Hi,\n\nThanks every one for helping me. I have upgraded to 7.4.1 on redhat 8 ( rh 9 require a lot of lib's) and set the configuration sent by Chris. Now the query results in 6.3 sec waooo. I m thinking that why the 7.1 process aggregate slowly. Anyway.\n\nI still have to go for 2 sec result and now I m thinking to go for Free BSD 5.2.\n\nThe record locking by \"select ..... update for\" is working fine, but a situation arrises when the second user goes for locking the record, the task will wait untill the first user ends up his work. IS their is a way that we can know or get a message that the row/record is already locked by some one else.\n\nThanks for help.\n\nSaleem\n\n",
"msg_date": "Thu, 19 Feb 2004 14:01:20 +0500 (PKT)",
"msg_from": "\"Saleem Burhani Baloch\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "On Thursday 19 February 2004 14:31, Saleem Burhani Baloch wrote:\n> Hi,\n>\n> Thanks every one for helping me. I have upgraded to 7.4.1 on redhat 8 ( rh\n> 9 require a lot of lib's) and set the configuration sent by Chris. Now the\n> query results in 6.3 sec waooo. I m thinking that why the 7.1 process\n> aggregate slowly. Anyway.\n>\n> I still have to go for 2 sec result and now I m thinking to go for Free BSD\n> 5.2.\n\nBefore that you can try something with kernel 2.6.3. I think it could make the \ndifference you are looking at.\n\n> The record locking by \"select ..... update for\" is working fine, but a\n> situation arrises when the second user goes for locking the record, the\n> task will wait untill the first user ends up his work. IS their is a way\n> that we can know or get a message that the row/record is already locked by\n> some one else.\n\nRight now, it is hotly debated on HACKERS about adding a NOWAIT clause to \nSELECT FOR UPDATE. If you think your application deployment is away for \nmonths and can try CVS head, you can expect some action on it in coming few \ndays.\n\nAs a side bonus you would be benefitted by performance and scalability \nadditions that won't make there way in 7.4 stream.\n\nHTH\n\n Shridhar\n",
"msg_date": "Thu, 19 Feb 2004 14:56:19 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "> Thanks every one for helping me. I have upgraded to 7.4.1 on redhat 8 (\n> rh 9 require a lot of lib's) and set the configuration sent by Chris.\n> Now the query results in 6.3 sec waooo. I m thinking that why the 7.1\n> process aggregate slowly. Anyway.\n\nI'm glad we could help you Saleem :)\n\nWe knew PostgreSQL wasn't that slow :P\n\nChris\n\n\n",
"msg_date": "Thu, 19 Feb 2004 21:52:48 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "Saleem Burhani Baloch kirjutas N, 19.02.2004 kell 11:01:\n> Hi,\n> \n> Thanks every one for helping me. I have upgraded to 7.4.1 on \n> redhat 8 ( rh 9 require a lot of lib's) and set the configuration \n> sent by Chris. Now the query results in 6.3 sec waooo. I m thinking \n> that why the 7.1 process aggregate slowly. Anyway.\n> \n> I still have to go for 2 sec result \n\nWhat is the plan now ?\n\n----------------\nHannu\n\n",
"msg_date": "Thu, 19 Feb 2004 22:46:51 +0200",
"msg_from": "Hannu Krosing <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
},
{
"msg_contents": "Shridhar Daithankar <[email protected]> writes:\n> Right now, it is hotly debated on HACKERS about adding a NOWAIT\n> clause to SELECT FOR UPDATE. If you think your application\n> deployment is away for months and can try CVS head, you can expect\n> some action on it in coming few days.\n\nYou can also try using the statement_timeout configuration variable\nthat is already included with 7.4. It's not exactly \"don't wait for\nlocks\", but should approximate that behavior well enough.\n\nhttp://www.postgresql.org/docs/7.4/static/runtime-config.html#RUNTIME-CONFIG-CLIENT\n\n-Neil\n\n",
"msg_date": "Fri, 20 Feb 2004 09:34:31 -0500",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow response of PostgreSQL"
}
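A minimal sketch of the statement_timeout approach Neil points to, using a hypothetical orders table and key; the second session gets a cancel error instead of waiting indefinitely when the row is already locked (the NOWAIT clause discussed earlier does not exist in 7.4):

BEGIN;
SET LOCAL statement_timeout = 2000;   -- milliseconds; applies to this transaction only
SELECT * FROM orders WHERE order_id = 42 FOR UPDATE;
-- If another session already holds the row lock for longer than 2 seconds,
-- this statement is cancelled and the application can report that the
-- record is locked by someone else instead of blocking.
COMMIT;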
] |
[
{
"msg_contents": "Hello,\n \nHas anyone designed/implemented postgresql server on storage networks?\n \nAre there any design considerations?\n \nAre there any benchmarks for storage products (HBAs, Switches, Storage\nArrays)?\n \nAny recommendation on the design, resources, references, keeping PG in\nmind?\n \n \nThanks,\nAnjan\n \n************************************************************************\n** \n\nThis e-mail and any files transmitted with it are intended for the use\nof the addressee(s) only and may be confidential and covered by the\nattorney/client and other privileges. If you received this e-mail in\nerror, please notify the sender; do not disclose, copy, distribute, or\ntake any action in reliance on the contents of this information; and\ndelete it from your system. Any other use of this e-mail is prohibited.\n\n \n\nMessage\n\n\n\nHello,\n \nHas anyone \ndesigned/implemented postgresql server on storage networks?\n \nAre there any design \nconsiderations?\n \nAre there any \nbenchmarks for storage products (HBAs, Switches, Storage \nArrays)?\n \nAny recommendation \non the design, resources, references, keeping PG in mind?\n \n \nThanks,Anjan\n \n\n************************************************************************** \n\nThis e-mail and any files transmitted with it are intended for the use of the \naddressee(s) only and may be confidential and covered by the attorney/client and \nother privileges. If you received this e-mail in error, please notify the \nsender; do not disclose, copy, distribute, or take any action in reliance on the \ncontents of this information; and delete it from your system. Any other use of \nthis e-mail is prohibited.",
"msg_date": "Thu, 19 Feb 2004 11:28:56 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql on SAN"
},
{
"msg_contents": "Anjan,\n\n> Has anyone designed/implemented postgresql server on storage networks?\n\nYes, Zapatec.com runs their stuff this way. Probably others as well.\n\n> Are there any design considerations?\n\nI don't know. Probably.\n\n> Are there any benchmarks for storage products (HBAs, Switches, Storage\n> Arrays)?\n\nNot specific to PostgreSQL. I'm sure there are generic benchmarks. Keep \nin mind that PostgreSQL needs lots of 2-way I/O, batch writes, and random \nreads.\n\n> Any recommendation on the design, resources, references, keeping PG in\n> mind?\n\nSee above. Also keep in mind that PostgreSQL's use of I/O should improve \n100% in version 7.5.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 19 Feb 2004 09:36:47 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on SAN"
},
{
"msg_contents": "Josh Berkus wrote:\n\n>Anjan,\n>\n> \n>\n>>Has anyone designed/implemented postgresql server on storage networks?\n>> \n>>\n>\n>Yes, Zapatec.com runs their stuff this way. Probably others as well.\n>\n> \n>\n>>Are there any design considerations?\n>> \n>>\n>\n>I don't know. Probably.\n>\n> \n>\n>>Are there any benchmarks for storage products (HBAs, Switches, Storage\n>>Arrays)?\n>> \n>>\n>\n>Not specific to PostgreSQL. I'm sure there are generic benchmarks. Keep \n>in mind that PostgreSQL needs lots of 2-way I/O, batch writes, and random \n>reads.\n>\n> \n>\n>>Any recommendation on the design, resources, references, keeping PG in\n>>mind?\n>> \n>>\n>\n>See above. Also keep in mind that PostgreSQL's use of I/O should improve \n>100% in version 7.5.\n>\n> \n>\nWe run PG on a SAN array. We currently have it setup so a single PG \ninstance runs off of a single LUN, this includes the WAL logs. Apart \nfrom that we have made no other special considerations; we just treat it \nas a fast RAID array. We haven't got to the stage where the speed of the \nSAN is a problem as load hasn't increased as expected. This will change, \nwhen it does I am sure the performance list will be hearing from us ;-). \nOut current limitations, as I see it, are amount of memory and then \nprocessing power. The only problem we have had was a dodgy set of kernel \nmodules (drivers) for the fibre cards, this was because they were beta \ndrivers and obviously still had a few bugs. This was solved by reverting \nto an older version. Everything has run smoothly since then (uptime is \n153 days :-)).\n\n\nNick\n\n\n\n",
"msg_date": "Thu, 19 Feb 2004 18:06:09 +0000",
"msg_from": "Nick Barr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on SAN"
},
{
"msg_contents": "Hi All,\n\nI am using Linux 7.2 and postgresql 7.2.\n\nOur Office hours are over at 6pm but we use to keep our server \nrunning 24 hours a day. On the second day morning, Our PGSQL \nServer becomes very slow.\n\nAfter continuous usage of one hour, It gradually starts responding \nfaster ! This has become every day routine !\n\ndo u have any idea related to this !!!! Is there any other reason that I \nneed to check up?\n\nPlease any any idea to get relief daily morning problem !!\n\nThanxs,\nVishal \n",
"msg_date": "Fri, 20 Feb 2004 14:46:15 +0530",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Slow in morning hours"
},
{
"msg_contents": "On Fri, Feb 20, 2004 at 02:46:15PM +0530, [email protected] wrote:\n> \n> After continuous usage of one hour, It gradually starts responding \n> faster ! This has become every day routine !\n> \n> do u have any idea related to this !!!! Is there any other reason that I \n> need to check up?\n\nWhat's running on the machine during those hours? Maybe VACUUM is\nsucking up all your bandwidth. Or your backups. Or some other\ncron job.\n\nNote that 7.2 is pretty old. There are several performance\nimprovements in subsequent versions.\n\nA\n\n-- \nAndrew Sullivan \n",
"msg_date": "Fri, 20 Feb 2004 07:19:45 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow in morning hours"
},
{
"msg_contents": "[email protected] wrote:\n> Hi All,\n> \n> I am using Linux 7.2 and postgresql 7.2.\n> \n> Our Office hours are over at 6pm but we use to keep our server \n> running 24 hours a day. On the second day morning, Our PGSQL \n> Server becomes very slow.\n> \n> After continuous usage of one hour, It gradually starts responding \n> faster ! This has become every day routine !\n> \n> do u have any idea related to this !!!! Is there any other reason that I \n> need to check up?\n> \n> Please any any idea to get relief daily morning problem !!\n\nI've seen this happen, and not just with PostgreSQL. The reasons are many\nan varied, but here's my experience on the most common.\n\n1) As someone else suggested, there may be some daily maintenance process\n (i.e. backup) that's still running when you come in. Check this, and\n reschedule if necessary.\n\n2) Even if these nightly maintenance processes are finished when you first\n come in, they've probably completely rearranged the contents of RAM.\n Meaning, data that Linux had cached that made Postgres fast now needs\n to be fetched from disk again. There are some things you can do, such\n as adding RAM or getting faster disks, but this is a difficult problem\n to solve. Some of the nightly processes could be safely disabled,\n possibly, such as rebuilding the located database (if you don't use\n locate) Possibly (I'm guessing here) if you scheduled pg_dump to be\n the last process to run at night, it might put the cache back in a\n better state?\n\n3) First thing AM load. It's quite common for load to be higher at certain\n times of the day, and first thing in the morning is a common time for\n load to be higher than usual (especially for email servers). Check the\n load on the machine with tools like top and see if it isn't just busier\n in the morning than other times during the day. There might even be one\n or two particular queries that people only run first thing that bog the\n machine down. Depending on what you find, you may be able to optomise\n some queries. Possibly some fine-tuning could correct the problem. Or\n you might be forced to upgrade hardware if you want the machine to handle\n the higher morning load faster. First thing to determine, though, is\n whether or not the load is higher or the same.\n\nWithout more detail on the load, setting, etc of your system, these are all\nguesses. Hopefully the information is helpful, though.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Fri, 20 Feb 2004 08:48:54 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow in morning hours"
},
{
"msg_contents": "Have you tried VACUUM ANALYZE at least one a day?\n\nRegards\nOn Fri, 20 Feb 2004 [email protected] wrote:\n\n> Date: Fri, 20 Feb 2004 14:46:15 +0530\n> From: [email protected]\n> To: [email protected]\n> Cc: [email protected]\n> Subject: [PERFORM] Slow in morning hours\n>\n> Hi All,\n>\n> I am using Linux 7.2 and postgresql 7.2.\n>\n> Our Office hours are over at 6pm but we use to keep our server\n> running 24 hours a day. On the second day morning, Our PGSQL\n> Server becomes very slow.\n>\n> After continuous usage of one hour, It gradually starts responding\n> faster ! This has become every day routine !\n>\n> do u have any idea related to this !!!! Is there any other reason that I\n> need to check up?\n>\n> Please any any idea to get relief daily morning problem !!\n>\n> Thanxs,\n> Vishal\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n-- \nOlivier PRENANT \t Tel: +33-5-61-50-97-00 (Work)\n6, Chemin d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: [email protected]\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n",
"msg_date": "Fri, 20 Feb 2004 15:51:00 +0100 (MET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Slow in morning hours"
},
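For reference, the daily maintenance suggested above is just plain SQL run from a scheduled job, kept outside office hours to avoid the contention discussed elsewhere in this thread; the table name below is a placeholder:

VACUUM ANALYZE;                 -- whole database
VACUUM ANALYZE my_busy_table;   -- or just the tables that churn the most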
{
"msg_contents": "Josh Berkus wrote:\n> \n> \n> See above. Also keep in mind that PostgreSQL's use of I/O should improve \n> 100% in version 7.5.\n> \n\nReally? What happened?\n",
"msg_date": "Wed, 10 Mar 2004 19:43:57 -0500",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql on SAN"
},
{
"msg_contents": "In article <40361DBE.3636.10FEBF@localhost>,\n<[email protected]> writes:\n\n> Hi All,\n> I am using Linux 7.2 and postgresql 7.2.\n\n> Our Office hours are over at 6pm but we use to keep our server \n> running 24 hours a day. On the second day morning, Our PGSQL \n> Server becomes very slow.\n\n> After continuous usage of one hour, It gradually starts responding \n> faster ! This has become every day routine !\n\n> do u have any idea related to this !!!! Is there any other reason that I \n> need to check up?\n\n> Please any any idea to get relief daily morning problem !!\n\nI guess you're doing a VACUUM at night which invalidates the buffer\ncache. If that's what happens, it's easy to fix: run some dummy\nqueries after the VACUUM which cause the buffer cache to get filled.\n\n",
"msg_date": "05 Jun 2004 22:54:31 +0200",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow in morning hours"
}
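A sketch of what those post-VACUUM dummy queries might look like; the table names are placeholders for whatever the first interactive queries of the day usually touch, and the scans do no useful work beyond pulling those tables back through the cache:

SELECT count(*) FROM orders;
SELECT count(*) FROM order_lines WHERE created > current_date - 7;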
] |
[
{
"msg_contents": "Hi All,\n\nI have a select statement \n\nselect * from v_func_actual_costs\nwhere parent_project='10478' or proj_pk = '10478'\n\nboth the fields parent_project and proj_pk have indexes based on them, but when I ran explain plan on this statement I found that none of the indexes are being called. But, if I make two separate statement and combine them with Union ALL, the indexes are being called. The select statement in this case is\n\nselect * from ct_admin.v_func_actual_costs\nwhere parent_project='10478'\nunion all\nselect * from ct_admin.v_func_actual_costs\nwhere proj_pk = '10478' \n\nCan anybody help me to find a reason for the same. This is just a part of the query so I cannot use the Union ALL clause.\n\nThanks in advance\n\nChitra\n\n\n\n\n\n\nHi All,\n \nI have a select statement \n \nselect * from v_func_actual_costswhere \nparent_project='10478' or proj_pk = '10478'\n \nboth the fields parent_project and proj_pk have \nindexes based on them, but when I ran explain plan on this statement I \nfound that none of the indexes are being called. But, if I make two separate \nstatement and combine them with Union ALL, the indexes are being called. The \nselect statement in this case is\n \nselect * from ct_admin.v_func_actual_costswhere \nparent_project='10478'union allselect * from \nct_admin.v_func_actual_costswhere proj_pk = '10478' \n \nCan anybody help me to find a reason for the same. \nThis is just a part of the query so I cannot use the Union ALL \nclause.\n \nThanks in advance\nChitra",
"msg_date": "Fri, 20 Feb 2004 12:26:22 +0530",
"msg_from": "\"V Chitra\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index called with Union but not with OR clause"
},
{
"msg_contents": "This discussion really belongs on the performance list and I am copying\nthat list with mail-followup-to set.\n\nOn Fri, Feb 20, 2004 at 12:26:22 +0530,\n V Chitra <[email protected]> wrote:\n> Hi All,\n> \n> I have a select statement \n> \n> select * from v_func_actual_costs\n> where parent_project='10478' or proj_pk = '10478'\n> \n> both the fields parent_project and proj_pk have indexes based on them, but when I ran explain plan on this statement I found that none of the indexes are being called. But, if I make two separate statement and combine them with Union ALL, the indexes are being called. The select statement in this case is\n> \n> select * from ct_admin.v_func_actual_costs\n> where parent_project='10478'\n> union all\n> select * from ct_admin.v_func_actual_costs\n> where proj_pk = '10478' \n> \n> Can anybody help me to find a reason for the same. This is just a part of the query so I cannot use the Union ALL clause.\n\nHave you analyzed the databases recently?\n\nCan you supply explain analyze output for the queries?\n\nIt isn't necessarily faster to use two index scans instead of one sequential\nscan depending on the fraction of the table being returned and some other\nfactors. If the planner is making the wrong choice in your case, you need\nto supply the list with more information to get help figuring out why\nthe wrong choice is being made.\n",
"msg_date": "Fri, 20 Feb 2004 14:11:39 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index called with Union but not with OR clause"
}
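The information Bruno asks for comes from ANALYZE followed by EXPLAIN ANALYZE on the slow form of the query; a sketch against the view from the question:

ANALYZE;   -- refresh planner statistics first
EXPLAIN ANALYZE
SELECT * FROM ct_admin.v_func_actual_costs
WHERE parent_project = '10478' OR proj_pk = '10478';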
] |
[
{
"msg_contents": "I'm converting a SQL application to PostgreSQL. The majority of the logic\nin this application is in the stored functions in the database.\n\nSomewhere, I saw a reference to \"WITH (iscachable)\" for stored functions,\nlooking again, I'm unable to find any reference to this directive. I have\na single function that is _obviously_ safe to cache using this, and it\ngenerates no errors or problems that I can see.\n\nNow I'm looking at a lot of other functions that, if cached, would speed\nup performance considerably. Yet I'm reluctant to use this directive\nsince I can't find documentation on it anywhere.\n\nCan anyone say whether this is a supported feature in plpgsql, and is\nsafe to use? Is it simply undocumented, or am I just looking in the\nwrong place?\n\n(to reduce ambiguity, the manner in which I'm using this is:\n\nCREATE FUNCTION getconstant(VARCHAR)\nRETURNS int\nAS '\n DECLARE\n BEGIN\n\tIF $1 = ''phrase'' THEN\n\t\tRETURN 1;\n\tEND IF;\n\n\t...\n\n END;\n' LANGUAGE 'plpgsql' WITH (iscacheable);\n\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Fri, 20 Feb 2004 10:35:32 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "cacheable stored functions?"
},
{
"msg_contents": "On Fri, 20 Feb 2004, Bill Moran wrote:\n\n> I'm converting a SQL application to PostgreSQL. The majority of the logic\n> in this application is in the stored functions in the database.\n>\n> Somewhere, I saw a reference to \"WITH (iscachable)\" for stored functions,\n> looking again, I'm unable to find any reference to this directive. I have\n> a single function that is _obviously_ safe to cache using this, and it\n> generates no errors or problems that I can see.\n\nIt's been basically superceded by IMMUTABLE, and I believe they're\ndescribed in the create function reference page. Note that it doesn't\ninvolve caching as much as the fact that it can be evaluated once and\ntreated as a constant.\n",
"msg_date": "Fri, 20 Feb 2004 07:47:33 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cacheable stored functions?"
},
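Restated with the newer marker, the function from the original question would look roughly like this; the body is unchanged apart from a placeholder return standing in for the branches elided in the post:

CREATE FUNCTION getconstant(VARCHAR)
RETURNS int
AS '
  DECLARE
  BEGIN
    IF $1 = ''phrase'' THEN
        RETURN 1;
    END IF;
    RETURN 0;  -- placeholder for the remaining lookups elided in the original
  END;
' LANGUAGE 'plpgsql' IMMUTABLE;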
{
"msg_contents": "Dnia 2004-02-20 16:35, U�ytkownik Bill Moran napisa�:\n> Can anyone say whether this is a supported feature in plpgsql, and is\n> safe to use? Is it simply undocumented, or am I just looking in the\n> wrong place?\n\n\"iscachable\" is only for backward compatibility - it's changed now to \n\"IMMUTABLE\"\n\n\nYou can read more about immutable, stable and volatile functions in \nPostgresql documentation - chapter SQL Commands/CREATE FUNCTION.\n\nRegards,\nTomasz Myrta\n",
"msg_date": "Fri, 20 Feb 2004 16:59:39 +0100",
"msg_from": "Tomasz Myrta <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cacheable stored functions?"
},
{
"msg_contents": "On Friday 20 February 2004 15:35, Bill Moran wrote:\n> I'm converting a SQL application to PostgreSQL. The majority of the logic\n> in this application is in the stored functions in the database.\n>\n> Somewhere, I saw a reference to \"WITH (iscachable)\" for stored functions,\n> looking again, I'm unable to find any reference to this directive. I have\n> a single function that is _obviously_ safe to cache using this, and it\n> generates no errors or problems that I can see.\n>\n> Now I'm looking at a lot of other functions that, if cached, would speed\n> up performance considerably. Yet I'm reluctant to use this directive\n> since I can't find documentation on it anywhere.\n\n From memory, \"iscachable\" was replaced in version 7.3 by the three \nfiner-grained settings IMMUTABLE, STABLE, VOLATILE.\n\nI'm guessing the old behaviour is still there for backwards compatibility, but \nit's probably best to use the new versions.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 20 Feb 2004 16:01:28 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: cacheable stored functions?"
},
{
"msg_contents": "Richard Huxton wrote:\n> On Friday 20 February 2004 15:35, Bill Moran wrote:\n> \n>>I'm converting a SQL application to PostgreSQL. The majority of the logic\n>>in this application is in the stored functions in the database.\n>>\n>>Somewhere, I saw a reference to \"WITH (iscachable)\" for stored functions,\n>>looking again, I'm unable to find any reference to this directive. I have\n>>a single function that is _obviously_ safe to cache using this, and it\n>>generates no errors or problems that I can see.\n>>\n>>Now I'm looking at a lot of other functions that, if cached, would speed\n>>up performance considerably. Yet I'm reluctant to use this directive\n>>since I can't find documentation on it anywhere.\n> \n>>From memory, \"iscachable\" was replaced in version 7.3 by the three \n> finer-grained settings IMMUTABLE, STABLE, VOLATILE.\n> \n> I'm guessing the old behaviour is still there for backwards compatibility, but \n> it's probably best to use the new versions.\n\nThanks to everyone who replied (with more or less the same answer ;)\n\nThis has explained away my confusion, and I now have a reference to read.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n\n",
"msg_date": "Fri, 20 Feb 2004 11:48:24 -0500",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: cacheable stored functions?"
}
] |
[
{
"msg_contents": "To all,\n\nThis is a 2 question email. First is asking about general tuning of the \nApple hardware/postgres combination. The second is whether is is \npossible to speed up a particular query.\n\nPART 1\n\nHardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre \nchannel to a fully loaded 3.5TB XRaid. The XRaid is configured as two 7 \ndisk hardware based RAID5 sets software striped to form a RAID50 set. \nThe DB, WALS, etc are all on that file set. Running OSX journaled file \nsystem Running postgres 7.4.1. OSX Server 10.3.2 Postgres is compiled \nlocally with '--enable-recode' '--enable-multibyte=UNICODE' \n'CFLAGS=-mcpu=970 -mtune=970 -mpowerpc64 -O3'\n\nConfig stuff that we have changed:\n\ntcpip_socket = true\nmax_connections = 100\n\n# - Memory -\n \nshared_buffers = 16000 # min 16, at least max_connections*2, \n8KB each\nsort_mem = 256000 # min 64, size in KB\nvacuum_mem = 64000 # min 1024, size in KB\nfsync = true # turns forced synchronization on or off\nwal_sync_method = open_sync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or \nopen_datasync\nwal_buffers = 64 # min 4, 8KB each\ncheckpoint_segments = 300 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 30 # range 30-3600, in seconds\neffective_cache_size = 400000 # typically 8KB each\nrandom_page_cost = 1 # units are one sequential page fetch cost\ndefault_statistics_target = 1000 # range 1-1000\n\n\nWe are generally getting poor performance out of the RAID set, they \nclaim 200/MB/sec per channel, the best we can get with straight OS based \ndata transfers is 143MB/sec. :-( (we have a call into apple about this) \nWhen I execute the following, d_url is a big table,\n\ncreate table temp_url as select * from d_url ;\n\nI would expect to bound by IO but via iostat we are seeing only about \n30mb/sec with bursts of 100+ when the WAL is written. sy is high as \nwell and the tps seems low.\n\nCan anyone shed some light on what we might do to improve performance \nfor postgres on this platform? Also, is there a test that is available \nthat would we could run to show the maximum postgres can do on this \nplatform? This is a data warehouse system so generally we only have 1-3 \nqueries running at anytime. More often only 1. 
We are obviously \nworking with very large tables so we are interested in maximizing our IO \nthroughput.\n\n disk1 disk2 disk0 cpu\n KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us sy id\n 17.04 961 15.99 17.16 957 16.03 8.83 6 0.05 12 32 56\n 22.75 580 12.89 22.79 578 12.87 0.00 0 0.00 10 34 56\n 24.71 586 14.14 24.67 587 14.14 0.00 0 0.00 12 40 48\n 21.98 648 13.91 21.97 648 13.91 0.00 0 0.00 16 27 56\n 22.07 608 13.10 22.09 607 13.09 0.00 0 0.00 14 29 57\n 26.54 570 14.77 26.37 575 14.80 0.00 0 0.00 12 34 54\n 18.91 646 11.93 18.90 646 11.93 0.00 0 0.00 9 33 58\n 15.12 636 9.38 15.12 636 9.38 0.00 0 0.00 14 22 64\n 16.22 612 9.69 16.23 611 9.68 0.00 0 0.00 20 27 54\n 15.02 573 8.41 15.01 574 8.41 0.00 0 0.00 14 29 57\n 15.54 593 9.00 15.52 595 9.02 0.00 0 0.00 13 28 59\n 22.35 596 13.01 22.42 593 12.99 0.00 0 0.00 9 32 58\n 61.57 887 53.33 60.73 901 53.43 4.00 1 0.00 8 48 44\n 11.13 2173 23.62 11.13 2167 23.54 0.00 0 0.00 10 68 22\n 10.07 2402 23.63 10.20 2368 23.58 4.00 1 0.00 10 72 18\n 14.75 1110 15.99 14.74 1116 16.06 8.92 6 0.05 12 42 46\n 22.79 510 11.36 22.79 510 11.36 0.00 0 0.00 16 28 56\n 23.65 519 11.99 23.50 522 11.98 0.00 0 0.00 13 42 46\n 22.45 592 12.98 22.45 592 12.98 0.00 0 0.00 14 27 58\n 25.38 579 14.35 25.37 579 14.35 0.00 0 0.00 8 36 56\n\n\nPART 2\n\nTrying to understand if there is a faster way to do this? This is part \nof our nightly bulk load of a data warehouse. We are reading in new \ndata, pulling out the relevant bits, and then need to check to see if \nthey already exist in the dimension tables. Use to do this via separate \nlookups for each value, not very fast. Trying to do this all in the DB now.\n\nThe query is\n\nSELECT t1.id, t2.md5, t2.url FROM referral_temp t2 LEFT OUTER JOIN \nd_referral t1 ON t2.md5 = t1.referral_md5;\n\n\n\\d d_referral\n id | integer | not null\n referral_md5 | text | not null\n referral_raw_url | text | not null\n referral_host | text |\n referral_path | text |\n referral_query | text |\n job_control_number | integer | not null\n\n\n\\d referral_temp\n md5 | text |\n url | text |\n\nActual row count in the temp table:\n\nselect count(*) from referral_temp ;\n 502347\n\nActual row count in d_referral table:\n\nselect count(*) from d_referral ;\n 27908024\n \n\nNote: that an analyze had not been performed on the referral_temp table \nprior to the explain analyze run.\n\nexplain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT \nOUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5\n\nNested Loop Left Join (cost=0.00..3046.00 rows=1001 width=68) (actual \ntime=136.513..6440616.541 rows=502347 loops=1)\n -> Seq Scan on referral_temp t2 (cost=0.00..20.00 rows=1000 \nwidth=64) (actual time=21.730..10552.421 rows=502347 loops=1)\n -> Index Scan using d_referral_referral_md5_key on d_referral t1 \n(cost=0.00..3.01 rows=1 width=40) (actual time=12.768..14.022 rows=1 \nloops=502347)\n Index Cond: (\"outer\".md5 = t1.referral_md5)\n\n\nThanks.\n\n--sean\n Total runtime: 6441969.698 ms\n(5 rows)\n\n\nHere is an explain analyze after the analyze was done. 
Unfortunately I \nthink a lot of the data was still in cache when I did this again :-(\n\nexplain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT \nOUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;\n\nNested Loop Left Join (cost=0.00..1468759.69 rows=480082 width=149) \n(actual time=69.576..3226854.850 rows=502347 loops=1)\n -> Seq Scan on referral_temp t2 (cost=0.00..16034.81 rows=480081 \nwidth=145) (actual time=11.206..4003.521 rows=502347 loops=1)\n -> Index Scan using d_referral_referral_md5_key on d_referral t1 \n(cost=0.00..3.01 rows=1 width=40) (actual time=6.396..6.402 rows=1 \nloops=502347)\n Index Cond: (\"outer\".md5 = t1.referral_md5)\n Total runtime: 3227830.752 ms\n\n",
"msg_date": "Fri, 20 Feb 2004 14:17:10 -0500",
"msg_from": "Sean Shanny <[email protected]>",
"msg_from_op": true,
"msg_subject": "General performance questions about postgres on Apple hardware..."
},
{
"msg_contents": "On Fri, 20 Feb 2004, Sean Shanny wrote:\n\n> max_connections = 100\n> \n> # - Memory -\n> \n> shared_buffers = 16000 # min 16, at least max_connections*2, \n> 8KB each\n> sort_mem = 256000 # min 64, size in KB\n\nYou might wanna drop sort_mem somewhat and just set it during your imports \nto something big like 512000 or larger. That way with 100 users during \nthe day you won't have to worry about swap storms, and when you run your \nupdates, you get all that sort_mem.\n\n> Actual row count in the temp table:\n> \n> select count(*) from referral_temp ;\n> 502347\n> \n> Actual row count in d_referral table:\n> \n> select count(*) from d_referral ;\n> 27908024\n> \n> \n> Note: that an analyze had not been performed on the referral_temp table \n> prior to the explain analyze run.\n> \n> explain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT \n> OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5\n> \n> Nested Loop Left Join (cost=0.00..3046.00 rows=1001 width=68) (actual \n> time=136.513..6440616.541 rows=502347 loops=1)\n> -> Seq Scan on referral_temp t2 (cost=0.00..20.00 rows=1000 \n> width=64) (actual time=21.730..10552.421 rows=502347 loops=1)\n> -> Index Scan using d_referral_referral_md5_key on d_referral t1 \n> (cost=0.00..3.01 rows=1 width=40) (actual time=12.768..14.022 rows=1 \n> loops=502347)\n> Index Cond: (\"outer\".md5 = t1.referral_md5)\n> \n> \n> Thanks.\n> \n> --sean\n> Total runtime: 6441969.698 ms\n> (5 rows)\n> \n> \n> Here is an explain analyze after the analyze was done. Unfortunately I \n> think a lot of the data was still in cache when I did this again :-(\n> \n> explain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT \n> OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;\n> \n> Nested Loop Left Join (cost=0.00..1468759.69 rows=480082 width=149) \n> (actual time=69.576..3226854.850 rows=502347 loops=1)\n> -> Seq Scan on referral_temp t2 (cost=0.00..16034.81 rows=480081 \n> width=145) (actual time=11.206..4003.521 rows=502347 loops=1)\n> -> Index Scan using d_referral_referral_md5_key on d_referral t1 \n> (cost=0.00..3.01 rows=1 width=40) (actual time=6.396..6.402 rows=1 \n> loops=502347)\n> Index Cond: (\"outer\".md5 = t1.referral_md5)\n> Total runtime: 3227830.752 ms\n\nHmmm. It looks like postgresql is still picking a nested loop when it \nshould be sorting something faster. Try doing a \"set enable_nestloop = \noff\" and see what you get.\n\nIf that makes it faster, you may want to adjust the costs of the cpu_* \nstuff higher to see if that can force it to do the right thing.\n\nLooking at the amount of time taken by the nested loop, it looks like the \nproblem to me.\n\nAnd why are you doing a left join of ONE row from one table against the \nwhole temp table? Do you really need to do that? since there's only one \nrow in the source table, and I'd guess is only matches one or a few rows \nfrom the temp table, this means you're gonna have that one row and a bunch \nof null filled rows to go with it.\n\n",
"msg_date": "Fri, 20 Feb 2004 14:10:07 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General performance questions about postgres on Apple"
},
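Both suggestions above are per-session settings, so they can be applied to the nightly load alone without touching postgresql.conf; roughly (the values are the ones quoted above, not recommendations):

SET sort_mem = 512000;        -- KB, for this bulk-load session only
SET enable_nestloop = off;    -- to test whether the nested loop choice is the problem
EXPLAIN ANALYZE
SELECT t1.id, t2.md5, t2.url
FROM referral_temp t2 LEFT OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;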
{
"msg_contents": "\n\nscott.marlowe wrote:\n\n>On Fri, 20 Feb 2004, Sean Shanny wrote:\n>\n> \n>\n>>max_connections = 100\n>>\n>># - Memory -\n>> \n>>shared_buffers = 16000 # min 16, at least max_connections*2, \n>>8KB each\n>>sort_mem = 256000 # min 64, size in KB\n>> \n>>\n>\n>You might wanna drop sort_mem somewhat and just set it during your imports \n>to something big like 512000 or larger. That way with 100 users during \n>the day you won't have to worry about swap storms, and when you run your \n>updates, you get all that sort_mem.\n>\n> \n>\n>>Actual row count in the temp table:\n>>\n>>select count(*) from referral_temp ;\n>> 502347\n>>\n>>Actual row count in d_referral table:\n>>\n>>select count(*) from d_referral ;\n>> 27908024\n>> \n>>\n>>Note: that an analyze had not been performed on the referral_temp table \n>>prior to the explain analyze run.\n>>\n>>explain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT \n>>OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5\n>>\n>>Nested Loop Left Join (cost=0.00..3046.00 rows=1001 width=68) (actual \n>>time=136.513..6440616.541 rows=502347 loops=1)\n>> -> Seq Scan on referral_temp t2 (cost=0.00..20.00 rows=1000 \n>>width=64) (actual time=21.730..10552.421 rows=502347 loops=1)\n>> -> Index Scan using d_referral_referral_md5_key on d_referral t1 \n>>(cost=0.00..3.01 rows=1 width=40) (actual time=12.768..14.022 rows=1 \n>>loops=502347)\n>> Index Cond: (\"outer\".md5 = t1.referral_md5)\n>>\n>>\n>>Thanks.\n>>\n>>--sean\n>> Total runtime: 6441969.698 ms\n>>(5 rows)\n>>\n>>\n>>Here is an explain analyze after the analyze was done. Unfortunately I \n>>think a lot of the data was still in cache when I did this again :-(\n>>\n>>explain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT \n>>OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;\n>>\n>>Nested Loop Left Join (cost=0.00..1468759.69 rows=480082 width=149) \n>>(actual time=69.576..3226854.850 rows=502347 loops=1)\n>> -> Seq Scan on referral_temp t2 (cost=0.00..16034.81 rows=480081 \n>>width=145) (actual time=11.206..4003.521 rows=502347 loops=1)\n>> -> Index Scan using d_referral_referral_md5_key on d_referral t1 \n>>(cost=0.00..3.01 rows=1 width=40) (actual time=6.396..6.402 rows=1 \n>>loops=502347)\n>> Index Cond: (\"outer\".md5 = t1.referral_md5)\n>> Total runtime: 3227830.752 ms\n>> \n>>\n>\n>Hmmm. It looks like postgresql is still picking a nested loop when it \n>should be sorting something faster. Try doing a \"set enable_nestloop = \n>off\" and see what you get.\n> \n>\nNew results with the above changes: (Rather a huge improvement!!!) \nThanks Scott. 
I will next attempt to make the cpu_* changes to see if \nit the picks the correct plan.\n\nexplain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT \nOUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1669281.60..3204008.48 rows=480082 width=149) \n(actual time=157221.125..-412311.378 rows=502347 loops=1)\n Hash Cond: (\"outer\".md5 = \"inner\".referral_md5)\n -> Seq Scan on referral_temp t2 (cost=0.00..16034.81 rows=480081 \nwidth=145) (actual time=11.537..1852.336 rows=502347 loops=1)\n -> Hash (cost=1356358.48..1356358.48 rows=30344048 width=40) \n(actual time=157187.530..157187.530 rows=0 loops=1)\n -> Seq Scan on d_referral t1 (cost=0.00..1356358.48 \nrows=30344048 width=40) (actual time=14.134..115048.285 rows=27908024 \nloops=1)\n Total runtime: 212595.909 ms\n(6 rows)\n \nTime: 213094.984 ms\ntripmaster=# explain analyze SELECT t1.id, t2.md5, t2.url from url_temp \nt2 LEFT OUTER JOIN d_url t1 ON t2.md5 = t1.url_md5;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=2023843.40..3157938.15 rows=1379872 width=191) \n(actual time=178150.113..867074.579 rows=1172920 loops=1)\n Hash Cond: (\"outer\".md5 = \"inner\".url_md5)\n -> Seq Scan on url_temp t2 (cost=0.00..50461.72 rows=1379872 \nwidth=187) (actual time=6.597..6692.324 rows=1172920 loops=1)\n -> Hash (cost=1734904.72..1734904.72 rows=28018272 width=40) \n(actual time=178124.568..178124.568 rows=0 loops=1)\n -> Seq Scan on d_url t1 (cost=0.00..1734904.72 rows=28018272 \nwidth=40) (actual time=16.912..2639059.078 rows=23239137 loops=1)\n Total runtime: 242846.965 ms\n(6 rows)\n \nTime: 243190.900 ms\n\n>If that makes it faster, you may want to adjust the costs of the cpu_* \n>stuff higher to see if that can force it to do the right thing.\n>\n>Looking at the amount of time taken by the nested loop, it looks like the \n>problem to me.\n>\n>And why are you doing a left join of ONE row from one table against the \n>whole temp table? Do you really need to do that? since there's only one \n>row in the source table, and I'd guess is only matches one or a few rows \n>from the temp table, this means you're gonna have that one row and a bunch \n>of null filled rows to go with it.\n>\n>\n> \n>\n",
"msg_date": "Fri, 20 Feb 2004 16:57:42 -0500",
"msg_from": "Sean Shanny <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: General performance questions about postgres on Apple"
},
{
"msg_contents": "Sean Shanny <[email protected]> writes:\n> New results with the above changes: (Rather a huge improvement!!!) \n> Thanks Scott. I will next attempt to make the cpu_* changes to see if \n> it the picks the correct plan.\n\n> explain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT \n> OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=1669281.60..3204008.48 rows=480082 width=149) \n> (actual time=157221.125..-412311.378 rows=502347 loops=1)\n> Hash Cond: (\"outer\".md5 = \"inner\".referral_md5)\n> -> Seq Scan on referral_temp t2 (cost=0.00..16034.81 rows=480081 \n> width=145) (actual time=11.537..1852.336 rows=502347 loops=1)\n> -> Hash (cost=1356358.48..1356358.48 rows=30344048 width=40) \n> (actual time=157187.530..157187.530 rows=0 loops=1)\n> -> Seq Scan on d_referral t1 (cost=0.00..1356358.48 \n> rows=30344048 width=40) (actual time=14.134..115048.285 rows=27908024 \n> loops=1)\n> Total runtime: 212595.909 ms\n> (6 rows)\n\nIt seems like the planner is overestimating the cost of a seqscan\nrelative to indexed access. Note that the above large seqscan is priced\nat 1356358.48 cost units vs 115048.285 actual msec, which says that a\nsequential page fetch is taking about 0.1 msec on your hardware.\n(You should check the actual size of d_referral to verify this, though.)\nThe other plan made it look like an indexed fetch was costing several\nmilliseconds. You may have a situation where you need to raise\nrandom_page_cost, rather than lowering it as people more often do.\n\nWhat are you using for random_page_cost anyway? It doesn't look like\nyou are at the default.\n\nThis also suggests that the performance issue with your RAID array\nhas to do with seek time rather than transfer bandwidth...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Feb 2004 18:30:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General performance questions about postgres on Apple "
},
{
"msg_contents": "Tom,\n\nWe have the following setting for random page cost:\n\nrandom_page_cost = 1 # units are one sequential page fetch cost\n\nAny suggestions on what to bump it up to?\n\nWe are waiting to hear back from Apple on the speed issues, so far we \nare not impressed with the hardware in helping in the IO department. \nOur DB is about 263GB with indexes now so there is not way it is going \nto fit into memory. :-( I have taken the step of breaking out the data \ninto month based groups just to keep the table sizes down. Our current \nmonths table has around 72 million rows in it as of today. The joys of \nbuilding a data warehouse and trying to make it as fast as possible.\n\nThanks.\n\n--sean\n\n\n\nTom Lane wrote:\n\n>Sean Shanny <[email protected]> writes:\n> \n>\n>>New results with the above changes: (Rather a huge improvement!!!) \n>>Thanks Scott. I will next attempt to make the cpu_* changes to see if \n>>it the picks the correct plan.\n>> \n>>\n>\n> \n>\n>>explain analyze SELECT t1.id, t2.md5, t2.url from referral_temp t2 LEFT \n>>OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;\n>> QUERY PLAN\n>>----------------------------------------------------------------------------------------------------------------------------------------------\n>> Hash Left Join (cost=1669281.60..3204008.48 rows=480082 width=149) \n>>(actual time=157221.125..-412311.378 rows=502347 loops=1)\n>> Hash Cond: (\"outer\".md5 = \"inner\".referral_md5)\n>> -> Seq Scan on referral_temp t2 (cost=0.00..16034.81 rows=480081 \n>>width=145) (actual time=11.537..1852.336 rows=502347 loops=1)\n>> -> Hash (cost=1356358.48..1356358.48 rows=30344048 width=40) \n>>(actual time=157187.530..157187.530 rows=0 loops=1)\n>> -> Seq Scan on d_referral t1 (cost=0.00..1356358.48 \n>>rows=30344048 width=40) (actual time=14.134..115048.285 rows=27908024 \n>>loops=1)\n>> Total runtime: 212595.909 ms\n>>(6 rows)\n>> \n>>\n>\n>It seems like the planner is overestimating the cost of a seqscan\n>relative to indexed access. Note that the above large seqscan is priced\n>at 1356358.48 cost units vs 115048.285 actual msec, which says that a\n>sequential page fetch is taking about 0.1 msec on your hardware.\n>(You should check the actual size of d_referral to verify this, though.)\n>The other plan made it look like an indexed fetch was costing several\n>milliseconds. You may have a situation where you need to raise\n>random_page_cost, rather than lowering it as people more often do.\n>\n>What are you using for random_page_cost anyway? It doesn't look like\n>you are at the default.\n>\n>This also suggests that the performance issue with your RAID array\n>has to do with seek time rather than transfer bandwidth...\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n",
"msg_date": "Sun, 22 Feb 2004 21:48:54 -0500",
"msg_from": "Sean Shanny <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: General performance questions about postgres on Apple"
},
{
"msg_contents": "Sean Shanny <[email protected]> writes:\n> We have the following setting for random page cost:\n> random_page_cost = 1 # units are one sequential page fetch cost\n> Any suggestions on what to bump it up to?\n\nWell, the default setting is 4 ... what measurements prompted you to\nreduce it to 1? The particular example you showed suggested that the\ntrue value on your setup might be 10 or more.\n\nNow I would definitely not suggest that you settle on any particular\nvalue based on only one test case. You need to try to determine an\nappropriate average value, bearing in mind that there's likely to be\nlots of noise in any particular measurement.\n\nBut in general, setting random_page_cost to 1 is only reasonable when\nyou are dealing with a fully-cached-in-RAM database, which yours isn't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 22 Feb 2004 22:24:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General performance questions about postgres on Apple "
},
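Since random_page_cost can also be set per session, candidate values can be compared against representative queries before anything is changed in postgresql.conf; for example, against the join from earlier in the thread:

SET random_page_cost = 4;     -- the shipped default
EXPLAIN ANALYZE SELECT t1.id, t2.md5, t2.url
FROM referral_temp t2 LEFT OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;

SET random_page_cost = 10;    -- nearer what the single test case above suggested
EXPLAIN ANALYZE SELECT t1.id, t2.md5, t2.url
FROM referral_temp t2 LEFT OUTER JOIN d_referral t1 ON t2.md5 = t1.referral_md5;

As Tom notes, one test case is mostly noise; the value should come from an average over many representative queries.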
{
"msg_contents": "On Sun, 22 Feb 2004, Sean Shanny wrote:\n\n> Tom,\n> \n> We have the following setting for random page cost:\n> \n> random_page_cost = 1 # units are one sequential page fetch cost\n> \n> Any suggestions on what to bump it up to?\n> \n> We are waiting to hear back from Apple on the speed issues, so far we \n> are not impressed with the hardware in helping in the IO department. \n> Our DB is about 263GB with indexes now so there is not way it is going \n> to fit into memory. :-( I have taken the step of breaking out the data \n> into month based groups just to keep the table sizes down. Our current \n> months table has around 72 million rows in it as of today. The joys of \n> building a data warehouse and trying to make it as fast as possible.\n\nYou may be able to achieve similar benefits with a clustered index.\n\nsee cluster:\n\n\\h cluster\nCommand: CLUSTER\nDescription: cluster a table according to an index\nSyntax:\nCLUSTER indexname ON tablename\nCLUSTER tablename\nCLUSTER\n\nI've found this can greatly increase speed, but on 263 gigs of data, I'd \nrun it when you had a couple days free. You might wanna test it on a \nsmaller test set you can afford to chew up some I/O CPU time on over a \nweekend.\n\n",
"msg_date": "Mon, 23 Feb 2004 09:25:13 -0700 (MST)",
"msg_from": "\"scott.marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General performance questions about postgres on Apple"
},
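Applied to the fact table posted in the follow-up message, the suggestion would look roughly like the sketch below; the table and index names are taken from that later post, and the ANALYZE step is an assumption on my part rather than something specified above.

    -- Hedged sketch: physically order the fact table by its date index.
    -- CLUSTER rewrites the whole table under an exclusive lock, so on a
    -- ~263GB warehouse it needs the maintenance window mentioned above,
    -- and the ordering is not maintained for rows inserted later.
    CLUSTER idx_pageviews_date ON f_pageviews;
    ANALYZE f_pageviews;    -- refresh planner statistics after the rewrite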
{
"msg_contents": "Scott,\n\nWe did try clustering on the date_key for the fact table below for a \nmonths worth of data as most of our requests for data are date range \nbased, i.e. get me info for the time period between 2004-02-01 and \n2004-02-07. This normally results in a plan that is doing an index scan \non the date_key which in theory should be fast. However we have found \nthat it is almost always faster to run a sequential scan on the data set \ndue to the size, and probably as Tom pointed out, the high seek time we \nseem to be experiencing with the MAC hardware which kills us when using \nthe index to pop all over the disk. We saw no improvement after having \nclustered based on the date_key.\n\nI am certainly open to any suggestions on how to deal with speed issues \non these sorts of large tables, it isn't going to go away for us. :-( \n\nWe are working on trying to make the table below smaller in record size \nso we can get more records in a page. An example is we are removing the \nsubscriber_key which is 32 characters wide and replacing it with an int \n(user_id) which is an FK to a dimension table. \n\nI welcome any advice from folks that have used postgres to build data \nwarehouses.\n\nThanks.\n\n--sean\n\n\n Table \"public.f_pageviews\"\n Column | Type | Modifiers\n------------------------+---------+-------------------------------------------------------------\n id | integer | not null default \nnextval('public.f_pageviews_id_seq'::text)\n date_key | integer | not null\n time_key | integer | not null\n content_key | integer | not null\n location_key | integer | not null\n session_key | integer | not null\n subscriber_key | text | not null\n persistent_cookie_key | integer | not null\n ip_key | integer | not null\n referral_key | integer | not null\n servlet_key | integer | not null\n tracking_key | integer | not null\n provider_key | text | not null\n marketing_campaign_key | integer | not null\n orig_airport | text | not null\n dest_airport | text | not null\n commerce_page | boolean | not null default false\n job_control_number | integer | not null\n sequenceid | integer | not null default 0\n url_key | integer | not null\n useragent_key | integer | not null\n web_server_name | text | not null default 'Not Available'::text\n cpc | integer | not null default 0\n referring_servlet_key | integer | not null default 1\n first_page_key | integer | not null default 1\n newsletterid_key | text | not null default 'Not Available'::text\n userid_key | integer |\nIndexes:\n \"f_pageviews_pkey\" primary key, btree (id)\n \"idx_pageviews_date\" btree (date_key)\n \"idx_pageviews_session\" btree (session_key)\n\n\nscott.marlowe wrote:\n\n>On Sun, 22 Feb 2004, Sean Shanny wrote:\n>\n> \n>\n>>Tom,\n>>\n>>We have the following setting for random page cost:\n>>\n>>random_page_cost = 1 # units are one sequential page fetch cost\n>>\n>>Any suggestions on what to bump it up to?\n>>\n>>We are waiting to hear back from Apple on the speed issues, so far we \n>>are not impressed with the hardware in helping in the IO department. \n>>Our DB is about 263GB with indexes now so there is not way it is going \n>>to fit into memory. :-( I have taken the step of breaking out the data \n>>into month based groups just to keep the table sizes down. Our current \n>>months table has around 72 million rows in it as of today. 
The joys of \n>>building a data warehouse and trying to make it as fast as possible.\n>> \n>>\n>\n>You may be able to achieve similar benefits with a clustered index.\n>\n>see cluster:\n>\n>\\h cluster\n>Command: CLUSTER\n>Description: cluster a table according to an index\n>Syntax:\n>CLUSTER indexname ON tablename\n>CLUSTER tablename\n>CLUSTER\n>\n>I've found this can greatly increase speed, but on 263 gigs of data, I'd \n>run it when you had a couple days free. You might wanna test it on a \n>smaller test set you can afford to chew up some I/O CPU time on over a \n>weekend.\n>\n>\n> \n>\n",
"msg_date": "Mon, 23 Feb 2004 11:50:50 -0500",
"msg_from": "Sean Shanny <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: General performance questions about postgres on Apple"
},
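The row-narrowing step described above might look something like the sketch below; the d_subscriber dimension and its subscriber_md5 column are hypothetical stand-ins, since only the fact table definition was posted.

    -- Hedged sketch: populate the integer userid_key already present in the
    -- posted table definition, then drop the 32-character text key.
    UPDATE f_pageviews
       SET userid_key = s.id
      FROM d_subscriber s                           -- hypothetical dimension
     WHERE s.subscriber_md5 = f_pageviews.subscriber_key;

    ALTER TABLE f_pageviews DROP COLUMN subscriber_key;

Note that dropping the column only hides it; existing rows shrink only when they are rewritten, so reloading or rebuilding each month's table is what actually reduces the on-disk row size.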
{
"msg_contents": "Scott,\n\n> I am certainly open to any suggestions on how to deal with speed issues \n> on these sorts of large tables, it isn't going to go away for us. :-( \n\nI'm not sure what to suggest. I can't think of anything off the top of my \nhead that would improve cripplingly slow random seek times.\n\nThis sort of problem has personally caused me to dump and replace various RAID \ncontrollers in the past. I have to say that I have not been impressed with \nthe Mac as a database server platform in the past; I've had no end of issues \nwith memory and disk management.\n\nI talked to the Apple Server staff at the last MacWorld about some of these \nissues and they admitted that servers are still a \"new thing\" for Apple as a \ncompany, and they're still tweaking OSX server.\n\nBTW, I wasn't clear from your description: did you mean that you have 14 \ndisks?\n\nOh, and for testing real random seek time, you can run bonnie++ which is \nfindable on freshmeat. This should give you a benchmark to e-mail the Apple \npeople. I'd be interested in being cc'd on your communications with them, \nas we use OSX for webserving and would like to see better support for \ndatabase serving.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 23 Feb 2004 12:11:17 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General performance questions about postgres on Apple"
},
{
"msg_contents": ">Sean Shanny\n> Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre\n> channel to a fully loaded 3.5TB XRaid. The XRaid is configured as two\n7\n> disk hardware based RAID5 sets software striped to form a RAID50 set.\n> The DB, WALS, etc are all on that file set. Running OSX journaled\nfile\n> system Running postgres 7.4.1. OSX Server 10.3.2 Postgres is\ncompiled\n> locally with '--enable-recode' '--enable-multibyte=UNICODE'\n> 'CFLAGS=-mcpu=970 -mtune=970 -mpowerpc64 -O3'\n\nHave you tried altering the blocksize to a higher value? Say 32K?\n\n> max_connections = 100\n\nWhy have you set this to 100 when you have typically 1-3 users?\n\n> sort_mem = 256000 # min 64, size in KB\n\nIf you have only 1-3 users, then that value seems reasonable.\n\n> The query is\n> \n> SELECT t1.id, t2.md5, t2.url FROM referral_temp t2 LEFT OUTER JOIN\n> d_referral t1 ON t2.md5 = t1.referral_md5;\n> \n> \n> \\d d_referral\n> id | integer | not null\n> referral_md5 | text | not null\n> referral_raw_url | text | not null\n> referral_host | text |\n> referral_path | text |\n> referral_query | text |\n> job_control_number | integer | not null\n> \n> \n> \\d referral_temp\n> md5 | text |\n> url | text |\n\nHave you looked at using reversed indexes, as per recent postings in\n[performance]? These seemed to help considerably with lookup speed when\nusing a large URL database, which seems to be your situation here.\n\n...\n>Jeff Boes writes\n> We have a large (several million row) table with a field containing\n> URLs. Now, funny thing about URLs: they mostly start with a common\n> substring (\"http://www.\"). But not all the rows start with this, so we\n> can't just lop off the first N characters. However, we noticed some\ntime\n> ago that an index on this field wasn't as effective as an index on the\n> REVERSE of the field. So ...\n> \n> CREATE OR REPLACE FUNCTION fn_urlrev(text) returns text as '\n> return reverse(lc($_[0]))\n> ' language 'plperl' with (iscachable,isstrict);\n> \n> and then\n> \n> CREATE UNIQUE INDEX ix_links_3 ON links\n> (fn_urlrev(path_base));\n\nYou have 2 CPUs: have you tried splitting your input data file into two\ntables, then executing the same query simultaneously, to split the\nprocessing? If you get the correct plan, you should use roughly the same\nI/O but use all of the available CPU power.\n\nI'm sure we'd all be interested in your further results!\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 24 Feb 2004 00:17:28 -0000",
"msg_from": "\"Simon Riggs\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: General performance questions about postgres on Apple hardware..."
},
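For the reversed-index trick to help, lookups have to go through the same expression the index was built on. A minimal sketch of such a probe follows, assuming the fn_urlrev function and links table from the quoted post; the URL literal is invented purely for illustration.

    -- The WHERE expression must match the indexed expression so the planner
    -- can use ix_links_3; fn_urlrev is marked iscachable, so the right-hand
    -- side folds to a constant at plan time.
    SELECT *
      FROM links
     WHERE fn_urlrev(path_base) = fn_urlrev('http://www.example.com/some/page');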
{
"msg_contents": "Simon Riggs wrote:\n\n>>Sean Shanny\n>>Hardware: Apple G5 dual 2.0 with 8GB memory attached via dual fibre\n>>channel to a fully loaded 3.5TB XRaid. The XRaid is configured as two\n>> \n>>\n>7\n> \n>\n>>disk hardware based RAID5 sets software striped to form a RAID50 set.\n>>The DB, WALS, etc are all on that file set. Running OSX journaled\n>> \n>>\n>file\n> \n>\n>>system Running postgres 7.4.1. OSX Server 10.3.2 Postgres is\n>> \n>>\n>compiled\n> \n>\n>>locally with '--enable-recode' '--enable-multibyte=UNICODE'\n>>'CFLAGS=-mcpu=970 -mtune=970 -mpowerpc64 -O3'\n>> \n>>\n>\n>Have you tried altering the blocksize to a higher value? Say 32K?\n> \n>\nThat is on our to do list. We had made that change while running on BSD \n5.1on a Dell 2650 with 4GB and 5 10K SCSI drive in RAID 0. Did not see \na huge improvement. \n\n> \n>\n>>max_connections = 100\n>> \n>>\n>\n>Why have you set this to 100 when you have typically 1-3 users?\n> \n>\nHave already addressed that by lowering this to 50. Will drop it lower \nas time goes on.\n\n> \n>\n>>sort_mem = 256000 # min 64, size in KB\n>> \n>>\n>\n>If you have only 1-3 users, then that value seems reasonable.\n>\n> \n>\n>>The query is\n>>\n>>SELECT t1.id, t2.md5, t2.url FROM referral_temp t2 LEFT OUTER JOIN\n>>d_referral t1 ON t2.md5 = t1.referral_md5;\n>>\n>>\n>>\\d d_referral\n>> id | integer | not null\n>> referral_md5 | text | not null\n>> referral_raw_url | text | not null\n>> referral_host | text |\n>> referral_path | text |\n>> referral_query | text |\n>> job_control_number | integer | not null\n>>\n>>\n>>\\d referral_temp\n>> md5 | text |\n>> url | text |\n>> \n>>\n>\n>Have you looked at using reversed indexes, as per recent postings in\n>[performance]? These seemed to help considerably with lookup speed when\n>using a large URL database, which seems to be your situation here.\n> \n>\nWe create an MD5 of the URL and store it as referral_md5. This is our \nkey for lookup. We ran into problems with the URL as the index. The \npostgres indexing code was complaining about the URL being too long, \nhence the MD5 which thought longer to compute during the ETL phase is \nmuch quicker to match on.\n\n>...\n> \n>\n>>Jeff Boes writes\n>>We have a large (several million row) table with a field containing\n>>URLs. Now, funny thing about URLs: they mostly start with a common\n>>substring (\"http://www.\"). But not all the rows start with this, so we\n>>can't just lop off the first N characters. However, we noticed some\n>> \n>>\n>time\n> \n>\n>>ago that an index on this field wasn't as effective as an index on the\n>>REVERSE of the field. So ...\n>>\n>>CREATE OR REPLACE FUNCTION fn_urlrev(text) returns text as '\n>>return reverse(lc($_[0]))\n>>' language 'plperl' with (iscachable,isstrict);\n>>\n>>and then\n>>\n>>CREATE UNIQUE INDEX ix_links_3 ON links\n>>(fn_urlrev(path_base));\n>> \n>>\n>\n>You have 2 CPUs: have you tried splitting your input data file into two\n>tables, then executing the same query simultaneously, to split the\n>processing? If you get the correct plan, you should use roughly the same\n>I/O but use all of the available CPU power.\n> \n>\nHave not considered that.\n\n>I'm sure we'd all be interested in your further results!\n> \n>\nI will post things as I discover them.\n\n--sean\n\n>Best Regards, Simon Riggs\n>\n>\n> \n>\n",
"msg_date": "Mon, 23 Feb 2004 19:54:07 -0500",
"msg_from": "Sean Shanny <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: General performance questions about postgres on Apple"
}
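For completeness, the digest approach described above can also be done inside the database rather than in the ETL job. This is a hedged sketch using the md5() function shipped with 7.4, against the referral tables from earlier in the thread; the WHERE clause and the index name are assumptions.

    -- Hypothetical: fill in any digests the ETL job did not compute.
    UPDATE referral_temp SET md5 = md5(url) WHERE md5 IS NULL;

    -- Index name is made up; an index on referral_md5 may well exist already.
    CREATE INDEX d_referral_md5_idx ON d_referral (referral_md5);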
]