[
{
"msg_contents": "Actually, I have some queries that are slow, however I was wondering if you\ncould help me write a query that is rather simple, but I, as a true database\nnovice, can't seem to conjure. So we have stocks, as I have previously\nsaid, and I have a huge table which contains all of the opening and closing\nprices of some stocks from each day. What I like to do, in English, for\neach stock in each day is find a ratio: abs(closing-opening)/opening. Then\nI would like to average all of the ratios of each day of each individual\nstock together to find a final ratio for each stock, then I would like to\nfind the highest average, to find the best performing stock. So what query\ncan I use, and (as is appropriate for this group), how can it be optimized\nto run the fastest?",
"msg_date": "Sun, 27 Jun 2004 22:26:19 -0500",
"msg_from": "\"Bill\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance"
},
{
"msg_contents": "Usually, when you post a request like this, you should provide something a little more concrete (the CREATE TABLE statement for that table, with \nSince you didn't, I'll posit something that sounds like what you're using, and take a stab at your problem.\n\nTABLE Prices (\n stock VARCHAR(9)\n ,asof DATE,\n ,opening MONEY\n ,closing MONEY\n ,PRIMARY KEY (stock, asof)\n )\n\nSELECT stock, AVG((closing-opening)/opening) as ratio\nFROM Prices \nGROUP BY stock\nORDER BY ratio DESC LIMIT 10; -- top 10 best-performing stocks.\n\n\"\"Bill\"\" <[email protected]> wrote in message news:[email protected]...\n Actually, I have some queries that are slow, however I was wondering if you could help me write a query that is rather simple, but I, as a true database novice, can't seem to conjure. So we have stocks, as I have previously said, and I have a huge table which contains all of the opening and closing prices of some stocks from each day. What I like to do, in English, for each stock in each day is find a ratio: abs(closing-opening)/opening. Then I would like to average all of the ratios of each day of each individual stock together to find a final ratio for each stock, then I would like to find the highest average, to find the best performing stock. So what query can I use, and (as is appropriate for this group), how can it be optimized to run the fastest?",
"msg_date": "Mon, 28 Jun 2004 05:23:53 GMT",
"msg_from": "\"Mischa Sandberg\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "Bill wrote:\n> Actually, I have some queries that are slow, however I was wondering if you\n> could help me write a query that is rather simple, but I, as a true database\n> novice, can't seem to conjure. So we have stocks, as I have previously\n> said, and I have a huge table which contains all of the opening and closing\n> prices of some stocks from each day. \n\nSchemas, Bill - show us your table definitions so people can see exactly \nwhere they stand.\n\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 28 Jun 2004 10:14:11 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "Ok....so here lies the output of oclh (i.e \"\\d oclh\") \n\n Table \"public.oclh\"\n Column | Type | Modifiers \n--------+-----------------------+-------------------------------\n symbol | character varying(10) | not null default ''\n date | date | not null default '0001-01-01'\n open | numeric(12,2) | not null default '0.00'\n close | numeric(12,2) | not null default '0.00'\n low | numeric(12,2) | not null default '0.00'\n high | numeric(12,2) | not null default '0.00'\nIndexes: symbol_2_oclh_index btree (symbol, date),\n symbol_oclh_index btree (symbol, date)\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Richard Huxton\nSent: Monday, June 28, 2004 4:14 AM\nTo: Bill\nCc: [email protected]\nSubject: Re: [PERFORM] Query performance\n\nBill wrote:\n> Actually, I have some queries that are slow, however I was wondering if\nyou\n> could help me write a query that is rather simple, but I, as a true\ndatabase\n> novice, can't seem to conjure. So we have stocks, as I have previously\n> said, and I have a huge table which contains all of the opening and\nclosing\n> prices of some stocks from each day. \n\nSchemas, Bill - show us your table definitions so people can see exactly \nwhere they stand.\n\n\n-- \n Richard Huxton\n Archonet Ltd\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Mon, 28 Jun 2004 12:02:40 -0500",
"msg_from": "\"Bill\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "Bill wrote:\n> Ok....so here lies the output of oclh (i.e \"\\d oclh\") \n> \n> Table \"public.oclh\"\n> Column | Type | Modifiers \n> --------+-----------------------+-------------------------------\n> symbol | character varying(10) | not null default ''\n> date | date | not null default '0001-01-01'\n> open | numeric(12,2) | not null default '0.00'\n> close | numeric(12,2) | not null default '0.00'\n> low | numeric(12,2) | not null default '0.00'\n> high | numeric(12,2) | not null default '0.00'\n> Indexes: symbol_2_oclh_index btree (symbol, date),\n> symbol_oclh_index btree (symbol, date)\n\nWell, I'm not sure why the two indexes on the same columns, and I'm not \nsure it makes sense to have defaults for _any_ of the columns there.\n\nSo - you want:\n1. ratio = abs(closing-opening)/opening\n2. average = all the ratios of each day of each stock\n3. Highest average\n\nWell, I don't know what you mean by #2, but #1 is just:\n\nSELECT\n symbol,\n \"date\",\n abs(close - open)/open AS ratio\nFROM\n oclh\nGROUP BY\n symbol, date;\n\nI'd probably fill in a summary table with this and use that as the basis \nfor your further queries. Presumably from \"yesterday\" back, the \nratios/averages won't change.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 29 Jun 2004 09:37:49 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "Ok, thanks. So let me explain the query number 2 as this is the more\ndifficult to write. So I have a list of stocks, this table contains the\nprice of all of the stocks at the open and close date. Ok, now we have a\nratio from query (1) that returns at least a very rough index of the daily\nperformance of a given stock, with each ratio representing the stock's\nperformance in one day. Now we need to average this with the same stock's\nratio every day, to get a total average for each stock contained in the\ndatabase. Now I would simply like to find a ratio like this that represents\nthe average of every stock in the table and simply find the greatest ratio.\nSorry about the lousy explanation before, is this a bit better?\n\nHere is an example if needed.\n\nSay we have a stock by the name of YYY\n\nI know, due to query 1 that stock YYY has a abs(close-open)/open price ratio\nof for example, 1.3 on Dec 1 and (for simplicity let's say we only have two\ndates) and Dec 2 the ratio for YYY is 1.5. So the query averages and gets\n1.4. Now it needs to do this for all of the stocks in the table and sort by\nincreasing ratio.\n\n \nThanks.\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Richard Huxton\nSent: Tuesday, June 29, 2004 3:38 AM\nTo: Bill\nCc: [email protected]\nSubject: Re: [PERFORM] Query performance\n\nBill wrote:\n> Ok....so here lies the output of oclh (i.e \"\\d oclh\") \n> \n> Table \"public.oclh\"\n> Column | Type | Modifiers \n> --------+-----------------------+-------------------------------\n> symbol | character varying(10) | not null default ''\n> date | date | not null default '0001-01-01'\n> open | numeric(12,2) | not null default '0.00'\n> close | numeric(12,2) | not null default '0.00'\n> low | numeric(12,2) | not null default '0.00'\n> high | numeric(12,2) | not null default '0.00'\n> Indexes: symbol_2_oclh_index btree (symbol, date),\n> symbol_oclh_index btree (symbol, date)\n\nWell, I'm not sure why the two indexes on the same columns, and I'm not \nsure it makes sense to have defaults for _any_ of the columns there.\n\nSo - you want:\n1. ratio = abs(closing-opening)/opening\n2. average = all the ratios of each day of each stock\n3. Highest average\n\nWell, I don't know what you mean by #2, but #1 is just:\n\nSELECT\n symbol,\n \"date\",\n abs(close - open)/open AS ratio\nFROM\n oclh\nGROUP BY\n symbol, date;\n\nI'd probably fill in a summary table with this and use that as the basis \nfor your further queries. Presumably from \"yesterday\" back, the \nratios/averages won't change.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n",
"msg_date": "Tue, 29 Jun 2004 12:33:51 -0500",
"msg_from": "\"Bill\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "Bill wrote:\n> Ok, thanks. So let me explain the query number 2 as this is the more\n> difficult to write. So I have a list of stocks, this table contains the\n> price of all of the stocks at the open and close date. Ok, now we have a\n> ratio from query (1) that returns at least a very rough index of the daily\n> performance of a given stock, with each ratio representing the stock's\n> performance in one day. Now we need to average this with the same stock's\n> ratio every day, to get a total average for each stock contained in the\n> database. Now I would simply like to find a ratio like this that represents\n> the average of every stock in the table and simply find the greatest ratio.\n> Sorry about the lousy explanation before, is this a bit better?\n> \n> Here is an example if needed.\n> \n> Say we have a stock by the name of YYY\n> \n> I know, due to query 1 that stock YYY has a abs(close-open)/open price ratio\n> of for example, 1.3 on Dec 1 and (for simplicity let's say we only have two\n> dates) and Dec 2 the ratio for YYY is 1.5. So the query averages and gets\n> 1.4. Now it needs to do this for all of the stocks in the table and sort by\n> increasing ratio.\n\nWell, the simplest would be something like:\n\nCREATE VIEW my_ratios AS SELECT ...(select details we used for #1 \npreviously)\n\nQuery #1 then becomes:\nSELECT * FROM my_ratios;\n\nThen you could do:\nSELECT\n symbol,\n avg(ratio) as ratio_avg\nFROM\n my_ratios\nGROUP BY\n symbol\nORDER BY\n avg(ratio)\n;\n\nNow, in practice, I'd probably create a symbol_ratio table and fill that \none day at a time. Then #2,#3 would be easier.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 29 Jun 2004 20:03:25 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "On Tue, Jun 29, 2004 at 12:33:51 -0500,\n Bill <[email protected]> wrote:\n> Ok, thanks. So let me explain the query number 2 as this is the more\n> difficult to write. So I have a list of stocks, this table contains the\n> price of all of the stocks at the open and close date. Ok, now we have a\n> ratio from query (1) that returns at least a very rough index of the daily\n> performance of a given stock, with each ratio representing the stock's\n> performance in one day. Now we need to average this with the same stock's\n> ratio every day, to get a total average for each stock contained in the\n> database. Now I would simply like to find a ratio like this that represents\n> the average of every stock in the table and simply find the greatest ratio.\n> Sorry about the lousy explanation before, is this a bit better?\n\nYou can do something like:\n\nSELECT symbol, avg((open-close)/open) GROUP BY symbol\n ORDER BY avg((open-close)/open) DESC LIMIT 1;\n\nIf you aren't interested in the variance of the daily change, it seems like\nyou would be best off using the opening price for the first day you have\nrecorded for the stock and the closing price on the last day and looking\nat the relative change.\n",
"msg_date": "Tue, 29 Jun 2004 14:51:30 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "Thanks this query works for what I want. So here is an output of the\nexplain analyze:\n QUERY\nPLAN \n----------------------------------------------------------------------------\n------------------------------------------------------------------------\n Limit (cost=2421582.59..2421582.65 rows=25 width=29) (actual\ntime=1985800.32..1985800.44 rows=25 loops=1)\n -> Sort (cost=2421582.59..2424251.12 rows=1067414 width=29) (actual\ntime=1985800.31..1985800.35 rows=26 loops=1)\n Sort Key: avg(((open - \"close\") / (open + 1::numeric)))\n -> Aggregate (cost=2200163.04..2280219.09 rows=1067414 width=29)\n(actual time=910291.94..1984972.93 rows=22362 loops=1)\n -> Group (cost=2200163.04..2253533.74 rows=10674140\nwidth=29) (actual time=910085.96..1105064.28 rows=10674140 loops=1)\n -> Sort (cost=2200163.04..2226848.39 rows=10674140\nwidth=29) (actual time=910085.93..988909.94 rows=10674140 loops=1)\n Sort Key: symbol\n -> Seq Scan on oclh (cost=0.00..228404.40\nrows=10674140 width=29) (actual time=20.00..137720.61 rows=10674140 loops=1)\n Total runtime: 1986748.44 msec\n(9 rows)\n\nCan I get any better performance?\n\nThanks.\n\n-----Original Message-----\nFrom: Bruno Wolff III [mailto:[email protected]] \nSent: Tuesday, June 29, 2004 2:52 PM\nTo: Bill\nCc: [email protected]\nSubject: Re: [PERFORM] Query performance\n\nOn Tue, Jun 29, 2004 at 12:33:51 -0500,\n Bill <[email protected]> wrote:\n> Ok, thanks. So let me explain the query number 2 as this is the more\n> difficult to write. So I have a list of stocks, this table contains the\n> price of all of the stocks at the open and close date. Ok, now we have a\n> ratio from query (1) that returns at least a very rough index of the daily\n> performance of a given stock, with each ratio representing the stock's\n> performance in one day. Now we need to average this with the same stock's\n> ratio every day, to get a total average for each stock contained in the\n> database. Now I would simply like to find a ratio like this that\nrepresents\n> the average of every stock in the table and simply find the greatest\nratio.\n> Sorry about the lousy explanation before, is this a bit better?\n\nYou can do something like:\n\nSELECT symbol, avg((open-close)/open) GROUP BY symbol\n ORDER BY avg((open-close)/open) DESC LIMIT 1;\n\nIf you aren't interested in the variance of the daily change, it seems like\nyou would be best off using the opening price for the first day you have\nrecorded for the stock and the closing price on the last day and looking\nat the relative change.\n\n",
"msg_date": "Wed, 30 Jun 2004 08:47:03 -0500",
"msg_from": "\"Bill\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "> Can I get any better performance?\n\nYou can try bumping your sort memory way up (for this query only).\n\nAnother method would be to cluster the table by the symbol column\n(eliminates the expensive sort).\n\nIf you could run a very simple calculation against open & close numbers\nto eliminate a majority of symbols early, that would be useful as well.\n\n\n",
"msg_date": "Wed, 30 Jun 2004 10:27:25 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
}
]
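
A minimal sketch of the query this thread converges on, using the oclh table Bill posted (the FROM clause and the abs() call are filled in here because Bruno's one-liner omits them, and the LIMIT value is only illustrative):

  -- average daily abs(close - open) / open per symbol, best performers first;
  -- rows with open = 0 would need a WHERE filter to avoid division by zero
  SELECT symbol, avg(abs(close - open) / open) AS avg_ratio
  FROM oclh
  GROUP BY symbol
  ORDER BY avg_ratio DESC
  LIMIT 10;

Richard's summary-table suggestion, sketched the same way: compute each day's ratio once, then aggregate the much smaller summary instead of sorting all 10.6 million oclh rows on every run.

  CREATE TABLE daily_ratio AS
      SELECT symbol, "date", abs(close - open) / open AS ratio
      FROM oclh;
  CREATE INDEX daily_ratio_symbol ON daily_ratio (symbol);
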
[
{
"msg_contents": "Hi,\n\nI've experienced that PG up to current release does not make use of an index when aggregating. Which of course may result in unacceptable answering times\n\nThis behaviour is reproducable on any table with any aggregat function in all of my databases on every machine (PostgreSQL 7.4.2 on i386-redhat-linux-gnu and PostgreSQL 7.2.1 on i686-pc-linux-gnu)\n\nf.e. querying against a 2.8-mio-records (2.800.000) table the_table\nSELECT count(*) FROM the_table\n=> Seq scan -> takes about 12 sec\n\nSELECT Avg(num_found) AS NumFound FROM the_table --(index on num_found)\n=> Seq scan -> takes about 10 sec\n\nSELECT Sum(num_found) AS TotalFound FROM the_table --(index on num_found)\n=> Seq scan -> takes about 11 sec\n\nSELECT Max(date_) AS LatestDate FROM the_table --(index on date_)\n=> Seq scan -> takes about 14 sec\n\nBut\nSELECT date_ AS LatestDate FROM the_table ORDER BY date_ DESC LIMIT 1;\n=> Index scan -> takes 0.18 msec\n\nMS SQLServer 2000: Use of an appropriate index _whenever_ aggregating.\n\nAm I doing something wrong?\n\nGreetings Harald\n",
"msg_date": "Tue, 29 Jun 2004 08:42:14 +0200",
"msg_from": "\"Harald Lau (Sector-X)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "no index-usage on aggregate-functions?"
},
{
"msg_contents": "On Tue, 2004-06-29 at 00:42, Harald Lau (Sector-X) wrote:\n> Hi,\n> \n> I've experienced that PG up to current release does not make use of an index when aggregating. Which of course may result in unacceptable answering times\n> \n> This behaviour is reproducable on any table with any aggregat function in all of my databases on every machine (PostgreSQL 7.4.2 on i386-redhat-linux-gnu and PostgreSQL 7.2.1 on i686-pc-linux-gnu)\n> \n> f.e. querying against a 2.8-mio-records (2.800.000) table the_table\n> SELECT count(*) FROM the_table\n> => Seq scan -> takes about 12 sec\n> \n> SELECT Avg(num_found) AS NumFound FROM the_table --(index on num_found)\n> => Seq scan -> takes about 10 sec\n> \n> SELECT Sum(num_found) AS TotalFound FROM the_table --(index on num_found)\n> => Seq scan -> takes about 11 sec\n> \n> SELECT Max(date_) AS LatestDate FROM the_table --(index on date_)\n> => Seq scan -> takes about 14 sec\n> \n> But\n> SELECT date_ AS LatestDate FROM the_table ORDER BY date_ DESC LIMIT 1;\n> => Index scan -> takes 0.18 msec\n> \n> MS SQLServer 2000: Use of an appropriate index _whenever_ aggregating.\n> \n> Am I doing something wrong?\n\nYes, you're expecting an MVCC database to behave like a row locking\ndatabase.\n\nDue to the way PostgreSQL is put together, it can't count on an index\ngiving it values, only pointers to values, so to speak. This means it\ncan use an index, but it will still go to the table to get the right\nvalue.\n\nOn the other hand, the trade off is that MVCC can handle much higher\nparallel loads, usually. \n\nNote that if you're aggregate is on sub subset of a table, then an index\nscan can often be a big win, such as:\n\ncreate table z(a int, b int);\ninsert into z values (1,1); (repeat a couple thousand times)\nselect avg(b) from z where a=3; <-- this can use the index\n\nBut note that in the above, the table's rows will still have to be\naccessed to get the right values.\n\n",
"msg_date": "Tue, 29 Jun 2004 01:25:03 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: no index-usage on aggregate-functions?"
},
{
"msg_contents": "> f.e. querying against a 2.8-mio-records (2.800.000) table the_table\n> SELECT count(*) FROM the_table\n> => Seq scan -> takes about 12 sec\n\nThis cannot be made O(1) in postgres due to MVCC. You just have to live \nwith it.\n\n> SELECT Avg(num_found) AS NumFound FROM the_table --(index on num_found)\n> => Seq scan -> takes about 10 sec\n> \n> SELECT Sum(num_found) AS TotalFound FROM the_table --(index on num_found)\n> => Seq scan -> takes about 11 sec\n\nAverage and sum can never use an index AFAIK, in any db server. You \nneed information from every row.\n\n> SELECT Max(date_) AS LatestDate FROM the_table --(index on date_)\n> => Seq scan -> takes about 14 sec\n\nYep, that's due to postgresql's type extensibility. You should use th \nworkaround you point out below.\n\n> But\n> SELECT date_ AS LatestDate FROM the_table ORDER BY date_ DESC LIMIT 1;\n> => Index scan -> takes 0.18 msec\n> \n> MS SQLServer 2000: Use of an appropriate index _whenever_ aggregating.\n> \n> Am I doing something wrong?\n\nNope.\n\nChris\n\n",
"msg_date": "Tue, 29 Jun 2004 15:36:16 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: no index-usage on aggregate-functions?"
},
{
"msg_contents": "@Chris:\n\n> > SELECT count(*) FROM the_table\n> > => Seq scan -> takes about 12 sec\n> This cannot be made O(1) in postgres due to MVCC. You just have to live \n> with it.\n\nbad news\nBTW: in this case you could workaround\nselect reltuples from pg_class where relname='the_table'\n(yes, I know: presumes a regular vacuum analyse)\n\n> Average and sum can never use an index AFAIK, in any db server. You \n> need information from every row.\n\nTake a look at the SQLSrv-pendant:\ncreate index x_1 on the_table (num_found)\nselect avg(num_found) from the_table\n-> Index Scan(OBJECT:([midata].[dbo].[THE_TABLE].[x_1])\n\n(I'm not sure what Oracle does - have to re-install it first ...)\n\n\n@Scott:\n> Yes, you're expecting an MVCC database to behave like a row locking\n> database.\n\nhmmmm...\nSo, it seems that PG is not soooo well suited for a datawarehouse and/or performing extensive statistics/calculations/reportings on large tables, is it?\n\nGreetings Harald\n",
"msg_date": "Tue, 29 Jun 2004 10:46:27 +0200",
"msg_from": "\"Harald Lau (Sector-X)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: no index-usage on aggregate-functions?"
},
{
"msg_contents": "On Tue, 29 Jun 2004, Harald Lau (Sector-X) wrote:\n\n> > Average and sum can never use an index AFAIK, in any db server. You \n> > need information from every row.\n> \n> Take a look at the SQLSrv-pendant:\n> create index x_1 on the_table (num_found)\n> select avg(num_found) from the_table\n> -> Index Scan(OBJECT:([midata].[dbo].[THE_TABLE].[x_1])\n\nBut is it really faster is the question?\n\nThis sum needs all the values in that column. As far as I know it uses the\nindex because it uses less space on disk and thus is a little faster due\nto less IO. In pg the index doesn't work like that, so in pg it's faster\nto sum all values using the table itself.\n\nIf you have a WHERE clause to only sum some values, then pg will use an\nindex (if applicable) and you will see a speedup.\n\nFor min and max the situation is different, there an index can give you\nthe answer without scanning all rows. For that the workaround exist in pg. \nThe pg aggregate functions are very general and no one have special cased\nmin/max yet. Until that happen the work around works and is fast.\n\n> So, it seems that PG is not soooo well suited for a datawarehouse and/or\n> performing extensive statistics/calculations/reportings on large tables,\n> is it?\n\nI don't see how you can say that from your example. Just because it uses\nan index for the sum above does not mean that it is a lot faster. It still \nhave to do as many additions as pg has to do.\n\nSure, mvcc is best when you have both read and writes. But it should still\nbe comparable in speed even if you only do reads.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Tue, 29 Jun 2004 13:49:52 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: no index-usage on aggregate-functions?"
},
{
"msg_contents": "On Tue, Jun 29, 2004 at 10:46:27 +0200,\n \"Harald Lau (Sector-X)\" <[email protected]> wrote:\n> \n> hmmmm...\n> So, it seems that PG is not soooo well suited for a datawarehouse and/or performing extensive statistics/calculations/reportings on large tables, is it?\n\nIf you are doing lots of selects of aggregates relative to the number of\nupdates, you can cache the values of interest in derived tables and use\ntriggers to keep those tables up to date.\n",
"msg_date": "Tue, 29 Jun 2004 08:44:06 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: no index-usage on aggregate-functions?"
},
{
"msg_contents": "On Tue, 2004-06-29 at 02:46, Harald Lau (Sector-X) wrote:\n> @Chris:\n> \n> > > SELECT count(*) FROM the_table\n> > > => Seq scan -> takes about 12 sec\n> > This cannot be made O(1) in postgres due to MVCC. You just have to live \n> > with it.\n> \n> bad news\n> BTW: in this case you could workaround\n> select reltuples from pg_class where relname='the_table'\n> (yes, I know: presumes a regular vacuum analyse)\n\nNote that there ARE other options. While the inability to provide a\nspeedy count is a \"cost\" of using an MVCC system, the ability to allow\nthousands of readers to run while updates are happening underneath them\nmore than makes up for the slower aggregate performance.\n\nThe other options to this problem involve maintaining another table that\nhas a single (visible) row that is maintained by a trigger on the main\ntable that fires and updates that single row to reflect the count of the\ntable. This is costly on updates, but may be worth doing for certain\nsituations. Personally, I haven't had a great need to do a count(*) on\nmy tables that much. And on large tables, approximations are usually\nfine.\n\n> > Average and sum can never use an index AFAIK, in any db server. You \n> > need information from every row.\n> \n> Take a look at the SQLSrv-pendant:\n> create index x_1 on the_table (num_found)\n> select avg(num_found) from the_table\n> -> Index Scan(OBJECT:([midata].[dbo].[THE_TABLE].[x_1])\n> \n> (I'm not sure what Oracle does - have to re-install it first ...)\n\nThere's a good chance Oracle can use the index too. That's because both\nOracle is still a row locked database at heart. It's MVCC system sits\non top of it in roll back segments. So, the main store is serialized\nand can be indexed, while the updates live in the rollback segment.\n\nThis, however, is not paradise. This limits Oracle's performance for\nthings like long running transactions and makes it slower as the amount\nof information in the rollback segment grows. Meanwhile, PostgreSQL\nuses an in store MVCC mechanism. This system means that all index\naccesses must then hit the actual MVCC storage, since indexes aren't\neasily serialized.\n\n> @Scott:\n> > Yes, you're expecting an MVCC database to behave like a row locking\n> > database.\n> \n> hmmmm...\n> So, it seems that PG is not soooo well suited for a datawarehouse and/or performing extensive statistics/calculations/reportings on large tables, is it?\n\nOn the contrary, it makes it GREAT for datawarehousing. Not because any\none child process will be super fast, but because ALL the child\nprocesses will run reasonably fast, even under very heavy read and write\nload. Note that if you've got the memory for the hash agg algo to fire\ninto shared memory, it's pretty darned fast now, so if the data (mostly)\nfit into kernel cache you're gold. And 12 gig Intel boxes aren't that\nexpensive, compared to an Oracle license.\n\n",
"msg_date": "Tue, 29 Jun 2004 11:21:58 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: no index-usage on aggregate-functions?"
},
{
"msg_contents": "> Note that there ARE other options. While the inability to provide a\n> speedy count is a \"cost\" of using an MVCC system, the ability to allow\n> thousands of readers to run while updates are happening underneath them\n> more than makes up for the slower aggregate performance.\n\nIMO this depends on the priority of your application resp. the customers intentions and wishes\n\n> This, however, is not paradise.\n\nyou can't have it all ;-)\n\n> On the contrary, it makes it GREAT for datawarehousing. Not because any\n> one child process will be super fast, but because ALL the child\n> processes will run reasonably fast, even under very heavy read and write\n> load.\n\nWhat I meant with datawarehouse are many db's at many locations whose data are to be collected in one central db in order to mix em up, sum up or do anything equivalent.\nBut in fact my quite heavy-read/write-accessed db is running really fast since 1 1/2 years now\nEven though still on PG 7.2\nThe one and only bottleneck are the statistics and the reports - and the tables are getting larger and larger ...\n\n> Note that if you've got the memory for the hash agg algo to fire\n> into shared memory, it's pretty darned fast now,\n\nyes, I've noticed here on the testing server\n\n> so if the data (mostly)\n> fit into kernel cache you're gold. And 12 gig Intel boxes aren't that\n> expensive, compared to an Oracle license.\n\n*that's* the point ...\n\nAnyway: Greetings and thanks for your answers\nHarald\n",
"msg_date": "Tue, 29 Jun 2004 20:16:30 +0200",
"msg_from": "\"Harald Lau (Sector-X)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: no index-usage on aggregate-functions?"
}
]
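
Two of the workarounds discussed above, spelled out against Harald's the_table (the column name date_ comes from his examples; the counter table, function and trigger names are invented for illustration, and plpgsql has to be installed in the database):

  -- max() rewritten so the index on date_ can be used, as Harald already does
  SELECT date_ FROM the_table ORDER BY date_ DESC LIMIT 1;

  -- a trigger-maintained row count, the approach Scott and Bruno describe
  CREATE TABLE the_table_rowcount (n bigint NOT NULL);
  INSERT INTO the_table_rowcount VALUES (0);

  CREATE FUNCTION the_table_count_trg() RETURNS trigger AS '
  BEGIN
      IF TG_OP = ''INSERT'' THEN
          UPDATE the_table_rowcount SET n = n + 1;
      ELSE
          UPDATE the_table_rowcount SET n = n - 1;
      END IF;
      RETURN NULL;
  END;' LANGUAGE plpgsql;

  CREATE TRIGGER the_table_count
      AFTER INSERT OR DELETE ON the_table
      FOR EACH ROW EXECUTE PROCEDURE the_table_count_trg();

  -- SELECT n FROM the_table_rowcount;   -- instead of count(*) on the big table
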
[
{
"msg_contents": "I am experiencing rather slow INSERTs on loaded server. The table I am \ninserting to is:\n\nCREATE TABLE pagestats\n(\n page_id int4 NOT NULL,\n viewed timestamptz DEFAULT now(),\n session int4 NOT NULL\n) WITH OIDS;\n\nThe table is populated with 700k rows. It is VACUUM ANALYZED every \nnight, though it is only INSERTED to and SELECTED from, no UPDATES or \nDELETES. There are no indices, triggers or constraints attached to it. \nThere are about 5 inserts pre second (sometimes more, but 10/s max).\n\nThe INSERT is:\nINSERT INTO pagestats (page_id,session) VALUES (5701,1147421823)\n\nSometimes, it takes as long as 1300ms! Other queries are quite swift, \neven compplex SELECTS and most of the INSERTS run fast. But occasionally \n(every 50th or 100th INSERT) it takes forever (and stalls the webpage \nfrom loading). The only special thing about this table is, it does not \nhave a PRIMARY KEY, but I should think that this constraint would only \nslow it down even more.\n\nAny ideas what can be wrong?\n\n-- \nMichal Taborsky\nhttp://www.taborsky.cz\n\n",
"msg_date": "Tue, 29 Jun 2004 14:30:30 +0200",
"msg_from": "=?ISO-8859-2?Q?Michal_T=E1borsk=FD?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow INSERT"
},
{
"msg_contents": "=?ISO-8859-2?Q?Michal_T=E1borsk=FD?= <[email protected]> writes:\n> I am experiencing rather slow INSERTs on loaded server.\n> ... There are no indices, triggers or constraints attached to it. \n\nIt's hard to see how inserting to such a simple table would be slow.\n\n> Sometimes, it takes as long as 1300ms! Other queries are quite swift, \n> even compplex SELECTS and most of the INSERTS run fast. But occasionally \n> (every 50th or 100th INSERT) it takes forever (and stalls the webpage \n> from loading).\n\nIs the number of inserts between slowdowns perfectly repeatable? My\nfirst thought is that the fast case is associated with inserting onto a\npage that is the same one last inserted to, and the slow case is\nassociated with finding a new page to insert onto (which, given that you\nnever UPDATE or DELETE, will always mean extending the file). Given\nthat the table rows are fixed width, the number of rows that fit on a\npage should be constant, so this theory cannot be right if the number of\ninserts between slowdowns varies.\n\nAlso, are all the inserts being issued by the same server process, or\nare they scattered across multiple processes? I'm not sure this theory\nholds water unless all the inserts are done in the same process.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 2004 10:30:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow INSERT "
},
{
"msg_contents": "Tom Lane wrote:\n> It's hard to see how inserting to such a simple table would be slow.\n\nIndeed.\n\n> Is the number of inserts between slowdowns perfectly repeatable? My\n> first thought is that the fast case is associated with inserting onto a\n> page that is the same one last inserted to, and the slow case is\n> associated with finding a new page to insert onto (which, given that you\n> never UPDATE or DELETE, will always mean extending the file). Given\n> that the table rows are fixed width, the number of rows that fit on a\n> page should be constant, so this theory cannot be right if the number of\n> inserts between slowdowns varies.\n\nI ran some tests to support this hypothesis. Every 500th insert is a tad \nslower, but it is insignificant (normally the INSERT lasts 1.5ms, every \n500th is 9ms). During my tests (10 runs of 1000 INSERTS) I had \nexperienced only one \"slow\" insert (2000ms). It is clearly caused by \nother processes running on this server, but such degradation of \nperformance is highly suspicious, because the server very rarely goes \nover load 1.0. Just for the record, it is FreeBSD 4.9 and the system \nnever swaps.\n\n> Also, are all the inserts being issued by the same server process, or\n> are they scattered across multiple processes? I'm not sure this theory\n> holds water unless all the inserts are done in the same process.\n\nNope. It is a webserver, so these requests are pushed through several \npersistent connections (20-30, depends on current load). This insert \noccurs only once per pageload.\n\n-- \nMichal Taborsky\nhttp://www.taborsky.cz\n\n",
"msg_date": "Tue, 29 Jun 2004 17:09:47 +0200",
"msg_from": "Michal Taborsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow INSERT"
},
{
"msg_contents": "Michal Taborsky <[email protected]> writes:\n> I ran some tests to support this hypothesis. Every 500th insert is a tad \n> slower, but it is insignificant (normally the INSERT lasts 1.5ms, every \n> 500th is 9ms). During my tests (10 runs of 1000 INSERTS) I had \n> experienced only one \"slow\" insert (2000ms). It is clearly caused by \n> other processes running on this server, but such degradation of \n> performance is highly suspicious, because the server very rarely goes \n> over load 1.0.\n\nActually, the simpler theory is that the slowdown is caused by\nbackground checkpoint operations. Now a checkpoint would slow\n*everything* down not only this one insert, so maybe that's not\nthe right answer either, but it's my next idea. You could check\nthis to some extent by manually issuing a CHECKPOINT command and\nseeing if you get an insert hiccup. Note though that closely\nspaced checkpoints will have less effect, because less I/O will\nbe triggered when not much has changed since the last one. So\nyou'd want to wait a bit between experiments.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 2004 11:22:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow INSERT "
},
{
"msg_contents": "Tom Lane wrote:\n> Actually, the simpler theory is that the slowdown is caused by\n> background checkpoint operations. Now a checkpoint would slow\n> *everything* down not only this one insert, so maybe that's not\n> the right answer either, but it's my next idea. You could check\n> this to some extent by manually issuing a CHECKPOINT command and\n> seeing if you get an insert hiccup. Note though that closely\n> spaced checkpoints will have less effect, because less I/O will\n> be triggered when not much has changed since the last one. So\n> you'd want to wait a bit between experiments.\n\nAha! This is really the case. I've let the test run and issued manual \nCHECKPOINT command. The command itself took about 3 secs and during that \ntime I had some slow INSERTS. So we know the reason.\n\nI've read the discussion in \"Trying to minimize the impact of \ncheckpoints\" thread and I get it, that there is nothing I can do about \nit. Well, we'll have to live with that, at least until 7.5.\n\nThanks of the help all the same.\n\n-- \nMichal Taborsky\nhttp://www.taborsky.cz\n\n",
"msg_date": "Tue, 29 Jun 2004 17:47:32 +0200",
"msg_from": "Michal Taborsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow INSERT"
},
{
"msg_contents": "Michal Taborsky <[email protected]> writes:\n> I've read the discussion in \"Trying to minimize the impact of \n> checkpoints\" thread and I get it, that there is nothing I can do about \n> it. Well, we'll have to live with that, at least until 7.5.\n\nYou could experiment with the checkpoint interval (checkpoint_timeout).\nA shorter interval will mean more total I/O (the same page will get\nwritten out more often) but it should reduce the amount of I/O done by\nany one checkpoint. You might find that the extra overhead is worth it\nto reduce the spikes.\n\nBut 7.5 should provide a much better answer, yes.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 2004 12:02:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow INSERT "
}
]
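
For reference, the knobs this thread ends up pointing at, as one SQL command and two postgresql.conf lines (the values are illustrative trade-offs rather than recommendations, and the smoother background writing Tom alludes to only arrives with the 7.5/8.0 work):

  -- reproduce the hiccup on demand, as Michal did:
  CHECKPOINT;

  # postgresql.conf (7.4): more frequent but smaller checkpoints,
  # at the price of somewhat more total I/O
  checkpoint_timeout  = 120    # seconds between automatic checkpoints, default 300
  checkpoint_segments = 3      # WAL segments between forced checkpoints
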
[
{
"msg_contents": "Hello,\n\nI'm using PostgreSQL 7.4.2 (package from backports.org) \non a Debian (woody) box. The machine is IBM eServer 345\nwith two 2.8 Xeon CPUs, it has 1024MB of RAM and\ntwo 15k RPM SCSI disks running in hardware RAID1, which\nis provided by the onboard LSI Logic controller (LSI53C1030).\n\nThe database consists of two rather large tables \n(currently about 15 million rows in one table\nand about 5 million in the other one). Both tables\nhave 5 indexes (4 btree/1 hash). \nApplication running on the server INSERTs a lot of \nstuff to the tables (which is not the target use of\nthe DB, it'll add data periodically, about\n300 rows per 10 minutes). Queries (SELECTs) run perfectly\nfine on the database, thanks to the indexes we have\nhere probably. \n\nPerformance issue, I'm experiencing here, is somewhat\nweird - server gets high average load (from 5 up to 15,\n8 on average). Standard performance monitoring\nutilities (like top) show that CPUs are not loaded \n(below 20%, often near zero). \nWith kernel 2.6.x which I was using earlier, \ntop showed very high \"wa\" values (which indicate I/O waiting, AFAIK).\nI've googled some issues with 2.6 kernels and LSI Logic controllers\nrunning RAID, so I've downgraded the kernel to 2.4.26.\nThe machine started to behave a bit better, but still high\nload states look weird. Unfortunately, top with 2.4 kernels\ndoes not show \"wa\" column, so I can't be sure if the load is\ncaused by waiting for disks, but high idle values and high average\nload would suggest it. With kernel 2.6 swap was almost always 100% free,\nwith 2.4.26 Linux eats below 5 megabytes of swapspace.\nPostgreSQL is running with shared_mem set to 48000,\nsort_mem = 4096, fsync off. \n\nWhole config is available here: \nhttp://ludojad.itpp.pl/~eleven/pg-high-load.conf\n\nI've also made some iostat report (using iostat 3 1000 as\nsuggested in one of the posts):\nhttp://ludojad.itpp.pl/~eleven/iostat.log\n\nAny solutions I should consider?\nI'd be grateful getting some hints on this.\n\n-- \n11.\n",
"msg_date": "Tue, 29 Jun 2004 17:55:37 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "High load average with PostgreSQL 7.4.2 on debian/ibm eserver."
},
{
"msg_contents": "On Tue, 29 Jun 2004 17:55:37 +0200, [email protected]\n<[email protected]> wrote:\n\n> Performance issue, I'm experiencing here, is somewhat\n> weird - server gets high average load (from 5 up to 15,\n> 8 on average). Standard performance monitoring\n> utilities (like top) show that CPUs are not loaded\n> (below 20%, often near zero).\n\nSo ... you never actually say what the performance issue you\nexperience is. Having a high load average is not necessarily a\nperformance issue.\n\nWhat is it that you want to fix?\n",
"msg_date": "Tue, 29 Jun 2004 09:17:36 -0700",
"msg_from": "Marc <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average with PostgreSQL 7.4.2 on debian/ibm eserver."
},
{
"msg_contents": "[email protected] wrote:\n\n> <>I'm using PostgreSQL 7.4.2 (package from backports.org)\n> on a Debian (woody) box. The machine is IBM eServer 345\n> with two 2.8 Xeon CPUs, it has 1024MB of RAM and\n> two 15k RPM SCSI disks running in hardware RAID1, which\n> is provided by the onboard LSI Logic controller (LSI53C1030).\n\n<snip>\n\n> <>With kernel 2.6.x which I was using earlier,\n> top showed very high \"wa\" values (which indicate I/O waiting, AFAIK)\n\n\nIt sounds like you are very much bound by disk I/O. Your iostat output \nindicates a good amount of I/O going on--I bet an iostat -x /dev/sdX \nwould show very high await times (time in ms before an IO request to the \ndevice is serviced).\n\nIf your RAID controller has a battery-backed cache, check that you have \nwrite-back (as opposed to write-through) enabled. This will cause the \ncontroller to report data written only to RAID cache and not yet flushed \nto disk as sync'd. You can experience large gains in write performance \nthis way.\n\nIf write-back is already enabled, or enabling it does not give a large \nenough performance boost, you may need to buy more disks. In general, if \nyou have the budget for lots of disks, RAID 10 is the best you can do \nperformance-wise; if your budget for disks is limited, RAID 5 is the \nnext best thing. Also, you will get more bang for your buck with a \nlarger number of 10k disks than a smaller number of 15k disks.\n\nGood luck,\n\nBill Montgomery\n",
"msg_date": "Tue, 29 Jun 2004 12:22:42 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average with PostgreSQL 7.4.2 on debian/ibm"
},
{
"msg_contents": "On Tue, Jun 29, 2004 at 09:17:36AM -0700, Marc wrote:\n\n> > Performance issue, I'm experiencing here, is somewhat\n> > weird - server gets high average load (from 5 up to 15,\n> > 8 on average). Standard performance monitoring\n> > utilities (like top) show that CPUs are not loaded\n> > (below 20%, often near zero).\n> So ... you never actually say what the performance issue you\n> experience is. Having a high load average is not necessarily a\n> performance issue.\n\nWell, if the server's CPUs are idle and the machine\nis starting to hog itself, one can\nsuspect something bad going on.\n\n> What is it that you want to fix?\n\nBasically, I'm wondering if I'm already on the edge of \nperformance capabilities of this machine/configuration, or maybe\nthere's some abnormal behaviour happening (which\ncould be noticed by somebody from this mailing list, hopefully).\n\nIn particular - could someone tell me if those iostat\nvalues can tell if I'm close to upper performance boundary\nof fast SCSI (Ultra 320, 15k RPM) disks? \n\n-- \n11.\n",
"msg_date": "Tue, 29 Jun 2004 19:52:27 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: High load average with PostgreSQL 7.4.2 on debian/ibm eserver."
},
{
"msg_contents": "On Tue, 2004-06-29 at 09:55, [email protected] wrote:\n> Hello,\n> \n> I'm using PostgreSQL 7.4.2 (package from backports.org) \n> on a Debian (woody) box. The machine is IBM eServer 345\n> with two 2.8 Xeon CPUs, it has 1024MB of RAM and\n> two 15k RPM SCSI disks running in hardware RAID1, which\n> is provided by the onboard LSI Logic controller (LSI53C1030).\n> \n> The database consists of two rather large tables \n> (currently about 15 million rows in one table\n> and about 5 million in the other one). Both tables\n> have 5 indexes (4 btree/1 hash). \n> Application running on the server INSERTs a lot of \n> stuff to the tables (which is not the target use of\n> the DB, it'll add data periodically, about\n> 300 rows per 10 minutes). Queries (SELECTs) run perfectly\n> fine on the database, thanks to the indexes we have\n> here probably. \n> \n> Performance issue, I'm experiencing here, is somewhat\n> weird - server gets high average load (from 5 up to 15,\n> 8 on average). Standard performance monitoring\n> utilities (like top) show that CPUs are not loaded \n> (below 20%, often near zero). \n\nAnytime you have high load but low CPU utilization, you usually have an\nI/O bound system.\n\nAd disks to your RAID (big RAID 5 or RAID 1+0) and make sure you have\nbattery backed cache set to write back. also, put as memory as you can\nin the machine. Jumping up to 2 Gigs may help quite a bit as well.\n\n",
"msg_date": "Tue, 29 Jun 2004 16:36:55 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average with PostgreSQL 7.4.2 on"
},
{
"msg_contents": "Eleven,\n\n> In particular - could someone tell me if those iostat\n> values can tell if I'm close to upper performance boundary\n> of fast SCSI (Ultra 320, 15k RPM) disks? \n\nIt's quite possible that you need to improve your disk array; certainly I \nwould have spec'd a lot more disk than you're using (like raid 0+1 with 6 \ndisks or RAID 5 with seven disks).\n\nHowever, there's the other end as well; it's quite possible that your queries \nare doing seq scans and other disk-intensive operations that could be \navoided. Have you analyed this at all?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Tue, 29 Jun 2004 16:34:48 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average with PostgreSQL 7.4.2 on debian/ibm eserver."
},
{
"msg_contents": "[email protected] wrote:\n\n\n> Whole config is available here: \n> http://ludojad.itpp.pl/~eleven/pg-high-load.conf\n\neffective_cache_size = 4000\t# typically 8KB each\n#random_page_cost = 4\t\t# units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01\t\t# (same)\n#cpu_index_tuple_cost = 0.001\t# (same)\n#cpu_operator_cost = 0.0025\t# (same)\n\n\nThese values are too higher for your hardware, try to execute the\nexplain analyze for the queries that are running on your box and\nrepeat it lowering these values, I bet postgres is running seq scan\ninstead of an index scan.\n\nThese are the value that I use for a configuration closer to your:\n\n\neffective_cache_size = 20000\nrandom_page_cost = 2.0\ncpu_tuple_cost = 0.005\ncpu_index_tuple_cost = 0.0005\ncpu_operator_cost = 0.0025\n\n\nlast question, do you use the autovacuum daemon ?\nIf no => you have to use it\nIf yes => did you apply the patch that will not fail with\n big tables like yours ?\n\n\nif you can post the autovacuum daemon log ( last lines ).\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n",
"msg_date": "Sun, 18 Jul 2004 11:42:36 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High load average with PostgreSQL 7.4.2 on debian/ibm eserver."
}
]
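
The planner settings Gaetano suggests, gathered into one sketch (the numbers are his judgment calls for a box of roughly this size and should be checked with EXPLAIN ANALYZE before and after; shared_buffers just restates the poster's existing "shared_mem" value):

  # postgresql.conf for the dual-Xeon / 1 GB RAM server in this thread
  shared_buffers       = 48000     # as already configured
  effective_cache_size = 20000     # 8 KB pages the kernel is expected to cache (~160 MB)
  random_page_cost     = 2.0
  cpu_tuple_cost       = 0.005
  cpu_index_tuple_cost = 0.0005
  cpu_operator_cost    = 0.0025

The per-device wait times Bill Montgomery mentions can be read with "iostat -x 3" against the RAID device; consistently large await values support the I/O-bound diagnosis.
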
[
{
"msg_contents": "Hello All,\n\n We are building a web based application which is database\nintensive (we intend to use postgresql). Expect about 600\nconcurrent users. We are using Zope and python with postgresql on\nRedHat Enterprise Linux. Our server has dual intel xeon 2.4 GHz\nand 2 Gig Ram with lots of disk space with hardware raid.\n\n I would appreciate all help to get the best possible performance\nfrom postgresql, we will be using a lot of views and a lot of\npl/pgsql procedures.\n\n I am looking at good rules to use for memory, WAL, SQL Transaction\nIsolation Levels and cache configuration along with anything else\nthat I have missed.\n\n\nThank you in advance for all your help,\n\nMohan A.\n",
"msg_date": "Tue, 29 Jun 2004 22:24:59 +0530 (IST)",
"msg_from": "\"Mohan A\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "suggestions to improve performace"
},
{
"msg_contents": "Mohan,\n\n> I am looking at good rules to use for memory, WAL, SQL Transaction\n> Isolation Levels and cache configuration along with anything else\n> that I have missed.\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nFor anything beyond that, you're asking for a service I charge $155/hour \nfor ....\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Tue, 29 Jun 2004 16:29:51 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: suggestions to improve performace"
}
]
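
A heavily hedged starting point in the spirit of the article Josh links, for the dual-Xeon / 2 GB box Mohan describes. Every number below is an assumption to be benchmarked against the real workload, and 600 concurrent web users would normally be funnelled through Zope's connection pooling rather than 600 database backends:

  # postgresql.conf, first-pass guesses only (7.4 units: 8 KB pages, sort_mem in KB)
  shared_buffers       = 10000     # ~80 MB
  sort_mem             = 8192      # per sort operation, so keep modest with many connections
  effective_cache_size = 150000    # ~1.2 GB of expected kernel cache
  wal_buffers          = 8
  checkpoint_segments  = 16
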
[
{
"msg_contents": ">>>>> \"Laurent\" == Laurent Rathle <[email protected]> writes:\n\n Laurent> On Monday 28 June 2004 at 10:14, Sylvain Lhullier wrote:\n >> FYI, the Parinux video projector should be joining the RMLL\n >> (unless the board objects).\n >> \n >> We are working out a way to get it there (given that I am the one\n >> holding this precious projector and that I am not going to the\n >> RMLL).\n\n Laurent> I see no objection to that, except that in my opinion it will\n Laurent> not be insured. It was bought second-hand without an invoice,\n Laurent> and I can hardly see an insurer agreeing to reimburse a device\n Laurent> without that document. \n\nAs I recall, the guy gave us the invoice when he sold it to us.\n\n\n\n-- \nLaurent Martelli vice-president of Parinux\nhttp://www.bearteam.org/~laurent/ http://www.parinux.org/\[email protected] \n\n",
"msg_date": "Tue, 29 Jun 2004 19:30:29 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VidéoProj -> RMLL"
}
]
[
{
"msg_contents": "\n I have a problem where a query gets _much_ slower if I add statements\nto the where clause than if I just get everything and filter in the\ncode. I expected the query to be faster as the where clause gets\nbigger (and thus the data to return gets lower).\n\n I've put inline the SQL and explain analyze of both the general and\nspecific queries. I assume the problem is something to do with the\nfact that in the specific query \"ticket_groups\" is uses an index and\nis assumed to return 5 rows, but actually returns 604 and in the\ngeneric case it doesn't use an index and is assumed to return 3.5-4\nthousand and does? Is this right, and if so is it possible to get\npostgres to re-think using an index in this case (or possible up it's\nestimated row count)?\n Any help appreciated.\n\n----------------------- GENERIC -------------------------------------\n\nSELECT g.assigned_to, g.opened AS escalated, t.tid, t.opened,\n e.userid AS commiter, e.eid, e.performed_on, e.status\n FROM (events e JOIN (ticket_groups g JOIN tickets t USING(tid)) USING(tid))\n WHERE (g.gid = '37' AND \n (e.performed_on <= CAST('1088480270' AS bigint)) AND\n ((t.modified >= CAST('1086875491' AS bigint)) OR\n (t.status != '4' AND\n t.status != '5' AND\n t.status != '9')))\n ORDER BY e.userid, t.tid, e.performed_on;\n\n Sort (cost=28017.06..28054.97 rows=15166 width=58) (actual time=2057.25..2076.12 rows=26594 loops=1)\n Sort Key: e.userid, t.tid, e.performed_on\n -> Merge Join (cost=1251.59..26963.93 rows=15166 width=58) (actual time=231.81..1972.29 rows=26594 loops=1)\n Merge Cond: (\"outer\".tid = \"inner\".tid)\n -> Index Scan using idx_tid_events on events e (cost=0.00..18943.61 rows=268784 width=26) (actual time=11.48..1358.15 rows=268803 loops=1)\n Filter: (performed_on <= 1088480270::bigint)\n -> Materialize (cost=7160.00..7160.00 rows=1725 width=32) (actual time=217.14..237.94 rows=26592 loops=1)\n -> Merge Join (cost=1251.59..7160.00 rows=1725 width=32) (actual time=63.14..214.75 rows=983 loops=1)\n Merge Cond: (\"outer\".tid = \"inner\".tid)\n -> Index Scan using idx_tickets_tid on tickets t (cost=0.00..5820.13 rows=18823 width=12) (actual time=2.97..135.57 rows=6020 loops=1)\n Filter: (((status <> 4::smallint) OR (modified >= 1086875491::bigint)) AND ((status <> 5::smallint) OR (modified >= 1086875491::bigint)) AND ((status <> 9::smallint) OR (modified >= 1086875491::bigint)))\n -> Sort (cost=1251.59..1261.38 rows=3915 width=20) (actual time=60.13..62.96 rows=3699 loops=1)\n Sort Key: g.tid\n -> Seq Scan on ticket_groups g (cost=0.00..1017.94 rows=3915 width=20) (actual time=0.05..53.02 rows=3699 loops=1)\n Filter: (gid = 37)\n Total runtime: 2100.75 msec\n\n--------------------- SPECIFIC -------------------------------------\n\nSELECT g.assigned_to, g.opened AS escalated, t.tid, t.opened,\n e.userid AS commiter, e.eid, e.performed_on, e.status\n FROM (events e JOIN (ticket_groups g JOIN tickets t USING(tid)) USING(tid))\n WHERE (g.gid = '37' AND \n (e.performed_on <= CAST('1088480270' AS bigint)) AND\n ((t.modified >= CAST('1086875491' AS bigint)) OR\n (t.status != '4' AND\n t.status != '5' AND\n t.status != '9')) AND (g.assigned_to IS NOT NULL AND g.assigned_to='1540') AND e.userid='1540')\n ORDER BY e.userid, t.tid, e.performed_on;\n\n Sort (cost=5079.17..5079.18 rows=1 width=58) (actual time=218121.00..218122.02 rows=1441 loops=1)\n Sort Key: e.userid, t.tid, e.performed_on\n -> Nested Loop (cost=0.00..5079.16 rows=1 width=58) (actual time=85.28..218115.36 rows=1441 loops=1)\n Join Filter: 
(\"outer\".tid = \"inner\".tid)\n -> Nested Loop (cost=0.00..305.53 rows=1 width=46) (actual time=0.22..261.78 rows=2420 loops=1)\n -> Index Scan using idx_ticket_groups_assigned on ticket_groups g (cost=0.00..241.76 rows=5 width=20) (actual time=0.13..12.67 rows=604 loops=1)\n Index Cond: (assigned_to = 1540)\n Filter: ((gid = 37) AND (assigned_to IS NOT NULL))\n -> Index Scan using idx_tid_events on events e (cost=0.00..12.50 rows=1 width=26) (actual time=0.11..0.38 rows=4 loops=604)\n Index Cond: (e.tid = \"outer\".tid)\n Filter: ((performed_on <= 1088480270::bigint) AND (userid = 1540))\n -> Seq Scan on tickets t (cost=0.00..4538.35 rows=18823 width=12) (actual time=0.16..83.53 rows=6020 loops=2420)\n Filter: (((status <> 4::smallint) OR (modified >= 1086875491::bigint)) AND ((status <> 5::smallint) OR (modified >= 1086875491::bigint)) AND ((status <> 9::smallint) OR (modified >= 1086875491::bigint)))\n Total runtime: 218123.24 msec\n\n\n-- \nJames Antill -- [email protected]\nNeed an efficient and powerful string library for C?\nhttp://www.and.org/vstr/\n",
"msg_date": "Tue, 29 Jun 2004 17:56:36 -0400",
"msg_from": "James Antill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query gets slow when where clause increases"
},
{
"msg_contents": "On Tue, 29 Jun 2004, James Antill wrote:\n\n> -> Index Scan using idx_ticket_groups_assigned on ticket_groups g (cost=0.00..241.76 rows=5 width=20) (actual time=0.13..12.67 rows=604 loops=1)\n> Index Cond: (assigned_to = 1540)\n\nHere the planner estimated that it would find 5 rows, but it did find 604. \nI take that as a sign that you have not ran VACUUM ANALYZE recently?\n\nIf you done that, then maybe you need to change the statistics target for\nthat column. Before you set it on that column you could try to just alter\nthe default statistics target for one session like this:\n\nSET default_statistics_target TO 100;\nANALYZE;\n\nand then see if you get a better plan when you run the query afterwards.\n\nIf it helps you can either set the default_statistics_target in\npostgresql.conf or set it just for some column using ALTER TABLE.\n\n-- \n/Dennis Björklund\n\n",
"msg_date": "Thu, 1 Jul 2004 22:44:22 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query gets slow when where clause increases"
}
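A minimal sketch of the two options suggested in the reply above, using the table and column names from the thread. Both simply make the planner collect more detailed statistics so the rows=5 estimate gets closer to the 604 rows actually found; the target value of 100 is illustrative, not a figure recommended in the thread.

    -- session-wide: raise the default target, re-gather statistics, retry the query
    SET default_statistics_target TO 100;
    ANALYZE;

    -- or permanently, for just the skewed column
    ALTER TABLE ticket_groups ALTER COLUMN assigned_to SET STATISTICS 100;
    ANALYZE ticket_groups;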
] |
[
{
"msg_contents": "Creating indexes on a table affects insert performance depending on the\nnumber of indexes that have to be populated. From a query standpoint,\nindexes are a godsend in most cases.\n\nDuane\n\n-----Original Message-----\nFrom: Chris Cheston [mailto:[email protected]]\nSent: Wednesday, June 30, 2004 12:19 AM\nTo: Gavin M. Roy\nCc: [email protected]\nSubject: Re: [PERFORM] postgres 7.4 at 100%\n\n\nOh my, creating an index has absolutely reduced the times it takes to\nquery from around 700 ms to less than 1 ms!\n\nThanks so much for all your help. You've saved me!\n\nOne question:\n\nWhy would I or would I not create multiple indexes in a table? I\ncreated another index in the same table an it's improved performance\neven more.\n\nThanks,\nChris\n\nOn Tue, 29 Jun 2004 09:03:24 -0700, Gavin M. Roy <[email protected]> wrote:\n> \n> Is the from field nullable? If not, try \"create index calllogs_from on\n> calllogs ( from );\" and then do an explain analyze of your query.\n> \n> Gavin\n> \n> \n> \n> Chris Cheston wrote:\n> \n> >ok i just vacuumed it and it's taking slightly longer now to execute\n> >(only about 8 ms longer, to around 701 ms).\n> >\n> >Not using indexes for calllogs(from)... should I? The values for\n> >calllogs(from) are not unique (sorry if I'm misunderstanding your\n> >point).\n> >\n> >Thanks,\n> >\n> >Chris\n> >\n> >On Tue, 29 Jun 2004 16:21:01 +0800, Christopher Kings-Lynne\n> ><[email protected]> wrote:\n> >\n> >\n> >>>live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';\n> >>> QUERY PLAN\n>\n>>>-------------------------------------------------------------------------\n---------------------------------\n> >>> Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4) (actual\n> >>>time=0.30..574.72 rows=143485 loops=1)\n> >>> Filter: (from = 'you'::character varying)\n> >>> Total runtime: 676.24 msec\n> >>>(3 rows)\n> >>>\n> >>>\n> >>Have you got an index on calllogs(from)?\n> >>\n> >>Have you vacuumed and analyzed that table recently?\n> >>\n> >>Chris\n> >>\n> >>\n> >>\n> >>\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n> >\n> \n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n\n\n\nRE: [PERFORM] postgres 7.4 at 100%\n\n\nCreating indexes on a table affects insert performance depending on the number of indexes that have to be populated. From a query standpoint, indexes are a godsend in most cases.\nDuane\n\n-----Original Message-----\nFrom: Chris Cheston [mailto:[email protected]]\nSent: Wednesday, June 30, 2004 12:19 AM\nTo: Gavin M. Roy\nCc: [email protected]\nSubject: Re: [PERFORM] postgres 7.4 at 100%\n\n\nOh my, creating an index has absolutely reduced the times it takes to\nquery from around 700 ms to less than 1 ms!\n\nThanks so much for all your help. You've saved me!\n\nOne question:\n\nWhy would I or would I not create multiple indexes in a table? I\ncreated another index in the same table an it's improved performance\neven more.\n\nThanks,\nChris\n\nOn Tue, 29 Jun 2004 09:03:24 -0700, Gavin M. Roy <[email protected]> wrote:\n> \n> Is the from field nullable? 
If not, try \"create index calllogs_from on\n> calllogs ( from );\" and then do an explain analyze of your query.\n> \n> Gavin\n> \n> \n> \n> Chris Cheston wrote:\n> \n> >ok i just vacuumed it and it's taking slightly longer now to execute\n> >(only about 8 ms longer, to around 701 ms).\n> >\n> >Not using indexes for calllogs(from)... should I? The values for\n> >calllogs(from) are not unique (sorry if I'm misunderstanding your\n> >point).\n> >\n> >Thanks,\n> >\n> >Chris\n> >\n> >On Tue, 29 Jun 2004 16:21:01 +0800, Christopher Kings-Lynne\n> ><[email protected]> wrote:\n> >\n> >\n> >>>live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';\n> >>> QUERY PLAN\n> >>>----------------------------------------------------------------------------------------------------------\n> >>> Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4) (actual\n> >>>time=0.30..574.72 rows=143485 loops=1)\n> >>> Filter: (from = 'you'::character varying)\n> >>> Total runtime: 676.24 msec\n> >>>(3 rows)\n> >>>\n> >>>\n> >>Have you got an index on calllogs(from)?\n> >>\n> >>Have you vacuumed and analyzed that table recently?\n> >>\n> >>Chris\n> >>\n> >>\n> >>\n> >>\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n> >\n> \n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html",
"msg_date": "Wed, 30 Jun 2004 09:32:42 -0700",
"msg_from": "Duane Lee - EGOVX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 7.4 at 100%"
}
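A sketch of the index fix discussed above. One caveat the thread glosses over: "from" is a reserved word in SQL, so if the column really carries that name it has to be double-quoted both when creating the index and in queries. The index name below is made up for illustration.

    CREATE INDEX calllogs_from_idx ON calllogs ("from");
    VACUUM ANALYZE calllogs;
    EXPLAIN ANALYZE SELECT id FROM calllogs WHERE "from" = 'you';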
] |
[
{
"msg_contents": "Here is my query, that returns one row:\nSELECT f1, f2,(SELECT dfield FROM d WHERE d.ukey = f1) FROM m WHERE \nstatus IN(2) AND jid IN(17674) ORDER BY pkey DESC LIMIT 25 OFFSET 0;\n\nHere was the really bad plan chosen. This didn't come back for a long \nwhile and had to be cancelled:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..10493.05 rows=25 width=118)\n -> Index Scan Backward using m_pkey on m (cost=0.00..1883712.97 \nrows=4488 width=118)\n Filter: ((status = 2) AND (jid = 17674))\n SubPlan\n -> Index Scan using d_pkey on d (cost=0.00..3.83 rows=1 \nwidth=24)\n Index Cond: (ukey = $0)\n(6 rows)\n\nAfter an ANALYZE the plan was much better:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Limit (cost=22060.13..22060.19 rows=25 width=119)\n -> Sort (cost=22060.13..22067.61 rows=2993 width=119)\n Sort Key: serial\n -> Index Scan using m_jid_uid_key on m (cost=0.00..21887.32 \nrows=2993 width=119)\n Index Cond: (jid = 17674)\n Filter: (status = 2)\n SubPlan\n -> Index Scan using d_pkey on d (cost=0.00..3.83 \nrows=1 width=24)\n Index Cond: (ukey = $0)\n(9 rows)\n\n\nThe thing is since there was only 1 row in the (very big) table with \nthat jid, the ANALYZE didn't\ninclude that row in the stats table, so I'm figuring there was a small \nrandom change that made it\nchoose the better query.\n\nDoing: ALTER TABLE m ALTER jid SET STATISTICS 1000;\nproduce a much more accurate row guess:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Limit (cost=2909.65..2909.71 rows=25 width=115)\n -> Sort (cost=2909.65..2910.64 rows=395 width=115)\n Sort Key: serial\n -> Index Scan using m_jid_uid_key on m (cost=0.00..2892.61 \nrows=395 width=115)\n Index Cond: (jbid = 17674)\n Filter: (status = 2)\n SubPlan\n -> Index Scan using d_pkey on d (cost=0.00..3.83 \nrows=1 width=24)\n Index Cond: (userkey = $0)\n(9 rows)\n\n\nIt seems the problem is that the pg planner goes for the job with the \nlowest projected time,\nbut ignores the worst case scenario.\n\nI think the odds of this problem happening again are lower since the SET \nSTATISTICS, but I don't know what triggered the really bad plan in the \nfirst place. Did pg think that because so many rows would match the \nlimit would be filled up soon, so that a more accurate and lower \nassumption would cause it to choose the better plan?\n",
"msg_date": "Wed, 30 Jun 2004 23:14:58 -0400",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": true,
"msg_subject": "planner and worst case scenario"
}
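For the situation described above, a quick way to see whether the rare jid value made it into the collected statistics is to look at pg_stats after raising the target. This is only a sketch; the table and column names come from the post, and 1000 is the value the poster used.

    ALTER TABLE m ALTER COLUMN jid SET STATISTICS 1000;
    ANALYZE m;
    SELECT n_distinct, most_common_vals
      FROM pg_stats
     WHERE tablename = 'm' AND attname = 'jid';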
] |
[
{
"msg_contents": "\nHi there,\n\ni have a problem with a query that uses the result of a plsql function\nIn\nthe where clause:\n\nSELECT\n assignments.assignment_id,\n assignments.package_id AS package_id,\n assignments.title AS title,\n COUNT(*) AS Count\nFROM\n assignments INNER JOIN submissions ON\n (assignments.assignment_id=submissions.assignment_id)\nWHERE\n package_id=949589 AND\n submission_status(submissions.submission_id)='closed'\nGROUP BY\n assignments.assignment_id, assignments.package_id, assignments.title\nORDER BY\n assignments.title;\n\nPostgres seems to execute the function \"submission_status\" for every row\nof\nthe submissions table (~1500 rows). The query therefore takes quite a\nlot\ntime, although in fact no row is returned from the assignments table\nwhen\nthe condition package_id=949589 is used.\n\n QUERY PLAN\n------------------------------------------------------------------------\n---\n---------------------------------------------------\n Sort (cost=41.21..41.21 rows=1 width=35) (actual\ntime=4276.978..4276.978\nrows=0 loops=1)\n Sort Key: assignments.title\n -> HashAggregate (cost=41.19..41.20 rows=1 width=35) (actual\ntime=4276.970..4276.970 rows=0 loops=1)\n -> Hash Join (cost=2.40..41.18 rows=1 width=35) (actual\ntime=4276.966..4276.966 rows=0 loops=1)\n Hash Cond: (\"outer\".assignment_id =\n\"inner\".assignment_id)\n -> Seq Scan on submissions (cost=0.00..38.73 rows=9\nwidth=4) (actual time=10.902..4276.745 rows=38 loops=1)\n Filter: (submission_status(submission_id) =\n'closed'::text)\n -> Hash (cost=2.40..2.40 rows=2 width=35) (actual\ntime=0.058..0.058 rows=0 loops=1)\n -> Seq Scan on assignments (cost=0.00..2.40\nrows=2\nwidth=35) (actual time=0.015..0.052 rows=2 loops=1)\n Filter: (package_id = 949589)\n Total runtime: 4277.078 ms\n(11 rows)\n\nI therefore tried to rephrase the query, to make sure that the function\nis\nonly used for the rows returned by the join but not even the following\ndoes\nhelp (the subselect t1 does not return a single row):\n\nselect * from (\n\tSELECT\n\t a.assignment_id, a.package_id, a.title, s.submission_id,\n\t COUNT(*) AS Count\n\tFROM\n\t assignments a INNER JOIN submissions s ON\n(a.assignment_id=s.assignment_id)\n\tWHERE\n a.package_id=949589\n\tGROUP BY\n a.assignment_id, a.package_id, a.title, s.submission_id\n) t1\nwhere\n submission_status(t1.submission_id)='closed'\norder by\n title;\n\n QUERY PLAN\n------------------------------------------------------------------------\n---\n-----------------------------------------------------------\n Sort (cost=41.21..41.22 rows=1 width=188) (actual\ntime=4114.251..4114.251\nrows=0 loops=1)\n Sort Key: title\n -> Subquery Scan t1 (cost=41.20..41.20 rows=1 width=188) (actual\ntime=4114.242..4114.242 rows=0 loops=1)\n -> HashAggregate (cost=41.20..41.20 rows=1 width=39) (actual\ntime=4114.238..4114.238 rows=0 loops=1)\n -> Hash Join (cost=2.40..41.18 rows=1 width=39) (actual\ntime=4114.235..4114.235 rows=0 loops=1)\n Hash Cond: (\"outer\".assignment_id =\n\"inner\".assignment_id)\n -> Seq Scan on submissions s (cost=0.00..38.73\nrows=9 width=8) (actual time=7.179..4113.984 rows=38 loops=1)\n Filter: (submission_status(submission_id) =\n'closed'::text)\n -> Hash (cost=2.40..2.40 rows=2 width=35) (actual\ntime=0.100..0.100 rows=0 loops=1)\n -> Seq Scan on assignments a\n(cost=0.00..2.40\nrows=2 width=35) (actual time=0.045..0.094 rows=2 loops=1)\n Filter: (package_id = 949589)\n Total runtime: 4114.356 ms\n(12 rows)\n\nThe function is nevertheless executed for every row in the 
submissions\ntable. A simple \"select *, submission_status(submission_id) from\nsubmissions\" takes about the same time as the 2 queries stated above.\n\nThe whole database has been vacuum analysed right before the explain\nanalyse output has been captured.\n\nWhat can I do to reduce the time this query takes? And why is the\nfunction\nexecuted although there is no row in the result set of t1 in my\nrephrased\nquery?\n\nTIA, peter\n\nPs: table definitions:\n\n Table \"public.assignments\"\n Column | Type | Modifiers\n---------------+-----------------------------+------------------------\n assignment_id | integer | not null\n title | character varying(100) | not null\n max_grade | smallint | not null\n start_date | timestamp without time zone | not null default now()\n end_date | timestamp without time zone | not null\n over_due_date | timestamp without time zone |\n score_release | smallint | not null default 1\n package_id | integer | not null\n cal_item_id | integer |\nIndexes:\n \"assignments_pk\" primary key, btree (assignment_id)\nCheck constraints:\n \"assignments_sr_ck\" CHECK (score_release = 1 OR score_release = 2 OR\nscore_release = 3)\nForeign-key constraints:\n \"cal_item_id\" FOREIGN KEY (cal_item_id) REFERENCES\ncal_items(cal_item_id) ON DELETE SET NULL\n \"package_id_fk\" FOREIGN KEY (package_id) REFERENCES\napm_packages(package_id)\n \"assignment_id_fk\" FOREIGN KEY (assignment_id) REFERENCES\nacs_objects(object_id) ON DELETE CASCADE\n\n Table \"public.submissions\"\n Column | Type | Modifiers\n---------------+-----------------------------+-----------\n submission_id | integer | not null\n person_id | integer | not null\n assignment_id | integer | not null\n last_modified | timestamp without time zone | not null\n recovery_date | timestamp without time zone |\n grading | smallint |\n grading_date | timestamp without time zone |\nIndexes:\n \"submissions_pk\" primary key, btree (submission_id)\n \"submissions_person_ass_un\" unique, btree (person_id, assignment_id)\nForeign-key constraints:\n \"assignment_id_fk\" FOREIGN KEY (assignment_id) REFERENCES\nassignments(assignment_id)\n \"person_id_fk\" FOREIGN KEY (person_id) REFERENCES persons(person_id)\n\n--\[email protected] Tel: +43/1/31336/4341\nAbteilung für Wirtschaftsinformatik, Wirtschaftsuniversitaet Wien,\nAustria\n\n\n",
"msg_date": "Fri, 2 Jul 2004 09:48:48 +0200",
"msg_from": "\"Peter Alberer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Mysterious performance of query because of plsql function in where\n\tcondition"
},
{
"msg_contents": "\nOn Jul 2, 2004, at 3:48 AM, Peter Alberer wrote:\n>\n> Postgres seems to execute the function \"submission_status\" for every \n> row\n> of\n> the submissions table (~1500 rows). The query therefore takes quite a\n> lot\n> time, although in fact no row is returned from the assignments table\n> when\n> the condition package_id=949589 is used.\n>\n\nWell, you need to think of it this way - PG has no idea what the \nfunction does so it treats it as a \"black box\" - thus it has to run it \nfor each row to see what evaluates too - especially since it is in a \nwhere clause.\n\nIf you really want a function there you can use a SQL function instead \nof plpgsql - PG has smart enough to push that function up into your \nquery and let the optimizer look at the whole thing.\n\nYou can also take a look at the various flags you can use while \ncreating functions such as immutable, strict, etc. they can help\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Fri, 2 Jul 2004 07:52:27 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mysterious performance of query because of plsql function in\n\twhere condition"
},
{
"msg_contents": "On Fri, Jul 02, 2004 at 09:48:48 +0200,\n Peter Alberer <[email protected]> wrote:\n> \n> Postgres seems to execute the function \"submission_status\" for every row\n> of\n> the submissions table (~1500 rows). The query therefore takes quite a\n> lot\n> time, although in fact no row is returned from the assignments table\n> when\n> the condition package_id=949589 is used.\n\nIf submission_status is invertable you might want to create the\ninverse function, mark it immutable and call it with 'closed'.\nThat would allow the optimizer to compare submissions.submission_id\nto a constant.\n\nAnother option would be be to create an index on\nsubmission_status(submissions.submission_id).\n",
"msg_date": "Fri, 2 Jul 2004 08:07:10 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mysterious performance of query because of plsql function in\n\twhere condition"
},
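A sketch of the second suggestion above: an expression index on the function result lets the planner match submission_status(submission_id) = 'closed' against an index instead of evaluating the function once per row. This only works if the function can honestly be declared IMMUTABLE (its result must depend on the argument alone); the index name is illustrative.

    CREATE INDEX submissions_status_idx
        ON submissions (submission_status(submission_id));
    ANALYZE submissions;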
{
"msg_contents": "hi,\n\nPeter Alberer wrote:\n\n> Hi there,\n> \n> i have a problem with a query that uses the result of a plsql function\n> In\n> the where clause:\n> \n> SELECT\n> assignments.assignment_id,\n> assignments.package_id AS package_id,\n> assignments.title AS title,\n> COUNT(*) AS Count\n> FROM\n> assignments INNER JOIN submissions ON\n> (assignments.assignment_id=submissions.assignment_id)\n> WHERE\n> package_id=949589 AND\n> submission_status(submissions.submission_id)='closed'\n> GROUP BY\n> assignments.assignment_id, assignments.package_id, assignments.title\n> ORDER BY\n> assignments.title;\n> \n> Postgres seems to execute the function \"submission_status\" for every row\n> of\n> the submissions table (~1500 rows). \n\nwhat is submission_status actualy?\n\\df submission_status\nIs the function submission_status called stable?\n\nC.\n",
"msg_date": "Fri, 02 Jul 2004 16:23:44 +0200",
"msg_from": "CoL <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Mysterious performance of query because of plsql function in"
}
] |
[
{
"msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 1186\nLogged by: Gosen, Hitoshi\n\nEmail address: [email protected]\n\nPostgreSQL version: 7.4\n\nOperating system: linux 2.4.18\n\nDescription: Broken Index?\n\nDetails: \n\nHello All,\nWe are using PostgreSQL 7.4.2 for our website that handles over 200,000 \ntransactions a day. \nAbout a month ago, the responses from the SELECT queries on the database \nbecame terribly slow. \nWe tried to anaylze the cause of the problem, searching throught the system \nlogs and all, but nothing appeared to be out of the ordinary. \n\nWhat we did to resolve this was to dump the database, delete the database, \nrecreate the database, and finally restore it. After that, things were back \nto normal. \n\n From the above experience, we were able to hypothesize that the fault of the \nslow responses was not from a broken data or hardware failures, but from a \nbroken index, since we were able to recover 100% of the data on the same \nmachine. \n\nToday, the same problem occured, and the same actions are going to be taken \nto temporary resolve it. \n\nFinal note: we will also experiment with the 'vacuum full' command to see \nif it counters this problem. \n\n\n\n",
"msg_date": "Fri, 2 Jul 2004 04:50:07 -0300 (ADT)",
"msg_from": "\"PostgreSQL Bugs List\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #1186: Broken Index?"
},
{
"msg_contents": "On Fri, Jul 02, 2004 at 04:50:07 -0300,\n PostgreSQL Bugs List <[email protected]> wrote:\n> \n> The following bug has been logged online:\n\nThis doesn't appear to be a bug at this point. It sounds like you have\na self induced performance problem, so I am moving the discussion to\npgsql-performance.\n\n> \n> Bug reference: 1186\n> Logged by: Gosen, Hitoshi\n> \n> Email address: [email protected]\n> \n> PostgreSQL version: 7.4\n> \n> Operating system: linux 2.4.18\n> \n> Description: Broken Index?\n> \n> Details: \n> \n> Hello All,\n> We are using PostgreSQL 7.4.2 for our website that handles over 200,000 \n> transactions a day. \n> About a month ago, the responses from the SELECT queries on the database \n> became terribly slow. \n> We tried to anaylze the cause of the problem, searching throught the system \n> logs and all, but nothing appeared to be out of the ordinary. \n> \n> What we did to resolve this was to dump the database, delete the database, \n> recreate the database, and finally restore it. After that, things were back \n> to normal. \n> \n> From the above experience, we were able to hypothesize that the fault of the \n> slow responses was not from a broken data or hardware failures, but from a \n> broken index, since we were able to recover 100% of the data on the same \n> machine. \n> \n> Today, the same problem occured, and the same actions are going to be taken \n> to temporary resolve it. \n> \n> Final note: we will also experiment with the 'vacuum full' command to see \n> if it counters this problem. \n\nIt sounds like you aren't properly vacuuming your database. It is possible\nthat you need a higher FSM setting or to vacuum more frequently.\n",
"msg_date": "Fri, 2 Jul 2004 08:12:58 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #1186: Broken Index?"
},
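A minimal maintenance sketch matching the advice above. Regular VACUUM ANALYZE (with max_fsm_pages set high enough for the update rate) keeps tables and indexes from bloating, and REINDEX rebuilds an already-bloated index without the dump/restore cycle the reporter has been using. The table name below is a placeholder, not one from the report.

    VACUUM ANALYZE heavily_updated_table;
    REINDEX TABLE heavily_updated_table;
    SHOW max_fsm_pages;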
{
"msg_contents": "This is the query:\nselect max(KA) from annuncio\n\nfield KA is indexed and is int4,\n\nexplaining gives:\nexplain select max(KA) from annuncio;\nQUERY PLAN\n-----------------------------------------------------------------------\nAggregate (cost=21173.70..21173.70 rows=1 width=4)\n-> Seq Scan on annuncio (cost=0.00..20326.76 rows=338776 width=4)\n(2 rows)\n\n\nwasn't supposed to do an index scan? it takes about 1sec to get the result.\n",
"msg_date": "Fri, 02 Jul 2004 20:50:26 +0200",
"msg_from": "Edoardo Ceccarelli <[email protected]>",
"msg_from_op": false,
"msg_subject": "finding a max value"
},
{
"msg_contents": "PostgreSQL Bugs List wrote:\n> Hello All,\n> We are using PostgreSQL 7.4.2 for our website that handles over 200,000 \n> transactions a day. \n> About a month ago, the responses from the SELECT queries on the database \n> became terribly slow. \n> We tried to anaylze the cause of the problem, searching throught the system \n> logs and all, but nothing appeared to be out of the ordinary. \n> \n> What we did to resolve this was to dump the database, delete the database, \n> recreate the database, and finally restore it. After that, things were back \n> to normal. \n> \n> From the above experience, we were able to hypothesize that the fault of the \n> slow responses was not from a broken data or hardware failures, but from a \n> broken index, since we were able to recover 100% of the data on the same \n> machine. \n> \n> Today, the same problem occured, and the same actions are going to be taken \n> to temporary resolve it. \n> \n> Final note: we will also experiment with the 'vacuum full' command to see \n> if it counters this problem. \n\nThis is not for sure a bug, but a known behaviour if you don't vacuum at all\nyour db. I bet you don't use the vacuum daemon; use it or schedule a simple\nvacuum on the eavily updated table each 10 minutes. I strongly suggest you to use\nthe autovacuum daemon.\n\nDo not esitate to ask how use it.\n\n\nRegards\nGaetano Mendola\n\n\n\n",
"msg_date": "Sat, 03 Jul 2004 16:51:50 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #1186: Broken Index?"
},
{
"msg_contents": "Edoardo Ceccarelli <[email protected]> writes:\n\n> This is the query:\n> select max(KA) from annuncio\n>\n> field KA is indexed and is int4,\n>\n> explaining gives:\n> explain select max(KA) from annuncio;\n> QUERY PLAN\n> -----------------------------------------------------------------------\n> Aggregate (cost=21173.70..21173.70 rows=1 width=4)\n> -> Seq Scan on annuncio (cost=0.00..20326.76 rows=338776 width=4)\n> (2 rows)\n>\n>\n> wasn't supposed to do an index scan? it takes about 1sec to get the result.\n\n This is a known misfeature of max() in postgresql, see...\n\nhttp://archives.postgresql.org/pgsql-performance/2003-12/msg00283.php\n\n-- \n# James Antill -- [email protected]\n:0:\n* ^From: .*james@and\\.org\n/dev/null\n",
"msg_date": "Wed, 07 Jul 2004 15:29:58 -0400",
"msg_from": "James Antill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: finding a max value"
},
{
"msg_contents": "On Fri, 02 Jul 2004 20:50:26 +0200, Edoardo Ceccarelli <[email protected]> wrote:\n\n> This is the query:\n> select max(KA) from annuncio\n\n> wasn't supposed to do an index scan? it takes about 1sec to get the result.\n\n> TIP 5: Have you checked our extensive FAQ?\n\nI believe this is a FAQ.\n\nSee: http://www.postgresql.org/docs/faqs/FAQ.html#4.8\n\nTry \"select KA from annuncio order by KA desc limit 1;\"\n\n/rls\n\n-- \n:wq\n",
"msg_date": "Wed, 7 Jul 2004 15:27:19 -0500",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] finding a max value"
}
] |
[
{
"msg_contents": "[Please CC me on all replies, I'm not subscribed to this list]\n\nHi,\n\nI'm trying to find out why one of my queries is so slow -- I'm primarily\nusing PostgreSQL 7.2 (Debian stable), but I don't really get much better\nperformance with 7.4 (Debian unstable). My prototype table looks like this:\n\n CREATE TABLE opinions (\n prodid INTEGER NOT NULL,\n uid INTEGER NOT NULL,\n opinion INTEGER NOT NULL,\n PRIMARY KEY ( prodid, uid )\n );\n\nIn addition, there are separate indexes on prodid and uid. I've run VACUUM\nANALYZE before all queries, and they are repeatable. (If anybody needs the\ndata, that could be arranged -- it's not secret or anything :-) ) My query\nlooks like this:\n\nEXPLAIN ANALYZE\n SELECT o3.prodid, SUM(o3.opinion*o12.correlation) AS total_correlation FROM opinions o3\n RIGHT JOIN (\n SELECT o2.uid, SUM(o1.opinion*o2.opinion)/SQRT(count(*)+0.0) AS correlation\n FROM opinions o1 LEFT JOIN opinions o2 ON o1.prodid=o2.prodid\n WHERE o1.uid=1355\n GROUP BY o2.uid\n ) o12 ON o3.uid=o12.uid\n LEFT JOIN (\n SELECT o4.prodid, COUNT(*) as num_my_comments\n FROM opinions o4\n WHERE o4.uid=1355\n GROUP BY o4.prodid\n ) nmc ON o3.prodid=nmc.prodid\n WHERE nmc.num_my_comments IS NULL AND o3.opinion<>0 AND o12.correlation<>0\n GROUP BY o3.prodid\n ORDER BY total_correlation desc;\n\nAnd produces the query plan at\n\n http://www.samfundet.no/~sesse/queryplan.txt\n\n(The lines were a bit too long to include in an e-mail :-) ) Note that the\n\"o3.opinion<>0 AND o12.correleation<>0\" lines are an optimization; I can run\nthe query fine without them and it will produce the same results, but it\ngoes slower both in 7.2 and 7.4.\n\nThere are a few oddities here:\n\n- The \"subquery scan o12\" phase outputs 1186 rows, yet 83792 are sorted. Where\n do the other ~82000 rows come from? And why would it take ~100ms to sort the\n rows at all? (In earlier tests, this was _one full second_ but somehow that\n seems to have improved, yet without really improving the overall query time.\n shared_buffers is 4096 and sort_mem is 16384, so it should really fit into\n RAM.)\n- Why does it use uid_index for an index scan on the table, when it obviously\n has no filter on it (since it returns all the rows)? Furthermore, why would\n this take half a second? (The machine is a 950MHz machine with SCSI disks.)\n- Also, the outer sort (the sorting of the 58792 rows from the merge join)\n is slow. :-)\n\n7.4 isn't really much better:\n\n http://www.samfundet.no/~sesse/queryplan74.txt\n\nNote that this is run on a machine with almost twice the speed (in terms of\nCPU speed, at least). The same oddities are mostly present (such as o12\nreturning 1186 rows, but 58788 rows are sorted), so I really don't understand\nwhat's going on here. Any ideas on how to improve this?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 5 Jul 2004 11:18:36 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd sorting behaviour"
}
] |
[
{
"msg_contents": "Hello,\n\nCan anybody suggest any hint on this:\n\ntemp=> EXPLAIN SELECT DISTINCT \"number\" FROM \"tablex\" WHERE \"Date\" BETWEEN '2004-06-28'::date AND '2004-07-04'::date AND \"Time\" BETWEEN '00:00:00'::time AND '18:01:00'::time;\n\nUnique (cost=305669.92..306119.43 rows=89 width=8)\n\t-> Sort (cost=305669.92..305894.67 rows=89903 width=8)\n\t\tSort Key: \"number\"\n\t\t\t-> Index Scan using \"DateTimeIndex\" on \"tablex\" (cost=0.00..298272.66 rows=89903 width=8)\n\t\t\t\tIndex Cond: ((\"Date\" >= '2004-06-28'::date) AND (\"Date\" <= '2004-07-04'::date) AND (\"Time\" >= '00:00:00'::time without time zone) AND (\"Time\" <= '18:01:00'::time without time zone))\n\n\ntemp=> EXPLAIN SELECT DISTINCT \"number\" FROM \"tablex\" WHERE \"Date\" BETWEEN '2004-06-28'::date AND '2004-07-04'::date AND \"Time\" BETWEEN '00:00:00'::time AND '19:01:00'::time;\n\nUnique (cost=315252.77..315742.27 rows=97 width=8)\n\t-> Sort (cost=315252.77..315497.52 rows=97900 width=8)\n\t\tSort Key: \"number\"\n\t\t\t-> Seq Scan on \"tablex\" (cost=0.00..307137.34 rows=97900 width=8)\n\t\t\tFilter: ((\"Date\" >= '2004-06-28'::date) AND (\"Date\" <= '2004-07-04'::date) AND (\"Time\" >= '00:00:00'::time without time zone) AND (\"Time\" <= '19:01:00'::time without time zone))\n\nBasically, the difference is in upper \"Time\" value (as you can see, it's\n18:01:00 in the first query and 19:01:00 in the other one). \nThe question is - why does it use index in first case and \nit tries to do full sequential scan when the upper \"Time\" value\nis different?\n\nDateTimeIndex was created on both columns (Date/Time):\nCREATE INDEX \"DateTimeIndex\" ON \"tablex\" USING btree (\"Date\", \"Time\");\n\n-- \nwr\n",
"msg_date": "Mon, 5 Jul 2004 12:15:15 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Seq scan vs. Index scan with different query conditions"
},
{
"msg_contents": "[email protected] wrote:\n\n> -> Index Scan using \"DateTimeIndex\" on \"tablex\" (cost=0.00..298272.66 rows=89903 width=8)\n\n> -> Seq Scan on \"tablex\" (cost=0.00..307137.34 rows=97900 width=8)\n\n> Basically, the difference is in upper \"Time\" value (as you can see, it's\n> 18:01:00 in the first query and 19:01:00 in the other one). \n> The question is - why does it use index in first case and \n> it tries to do full sequential scan when the upper \"Time\" value\n> is different?\n\nLook at the rows, and more importantly the cost. PG thinks the cost in \nthe second case (seq scan) is only slightly more than in the first case \n(index), so presumably the index scan worked out more expensive.\n\nYou can test this by issuing \"SET ENABLE_SEQSCAN=OFF;\" and re-running \nthe second explain.\n\nNow, the question is whether PG is right in these cost estimates. You'll \nneed to run \"EXPLAIN ANALYSE\" rather than just EXPLAIN to see what it \nactually costs.\n\nPS - all the usual questions: make sure you've vacuumed, have you read \nthe tuning document on varlena.com?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 05 Jul 2004 12:41:16 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan vs. Index scan with different query conditions"
},
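The test suggested above, spelled out: disable sequential scans for the session only, re-run the slower query with EXPLAIN ANALYZE, and compare the actual times of the forced index plan against the seq-scan plan. A sketch using the query from the post; remember to turn the setting back on afterwards.

    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT DISTINCT "number" FROM "tablex"
     WHERE "Date" BETWEEN '2004-06-28'::date AND '2004-07-04'::date
       AND "Time" BETWEEN '00:00:00'::time AND '19:01:00'::time;
    SET enable_seqscan = on;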
{
"msg_contents": "On Mon, 2004-07-05 at 12:15 +0200, [email protected] wrote:\n> Hello,\n> \n> Can anybody suggest any hint on this:\n> \n> temp=> EXPLAIN SELECT DISTINCT \"number\" FROM \"tablex\" WHERE \"Date\" BETWEEN '2004-06-28'::date AND '2004-07-04'::date AND \"Time\" BETWEEN '00:00:00'::time AND '18:01:00'::time;\n> \n> Unique (cost=305669.92..306119.43 rows=89 width=8)\n> \t-> Sort (cost=305669.92..305894.67 rows=89903 width=8)\n> \t\tSort Key: \"number\"\n> \t\t\t-> Index Scan using \"DateTimeIndex\" on \"tablex\" (cost=0.00..298272.66 rows=89903 width=8)\n> \t\t\t\tIndex Cond: ((\"Date\" >= '2004-06-28'::date) AND (\"Date\" <= '2004-07-04'::date) AND (\"Time\" >= '00:00:00'::time without time zone) AND (\"Time\" <= '18:01:00'::time without time zone))\n> \n> \n> temp=> EXPLAIN SELECT DISTINCT \"number\" FROM \"tablex\" WHERE \"Date\" BETWEEN '2004-06-28'::date AND '2004-07-04'::date AND \"Time\" BETWEEN '00:00:00'::time AND '19:01:00'::time;\n> \n> Unique (cost=315252.77..315742.27 rows=97 width=8)\n> \t-> Sort (cost=315252.77..315497.52 rows=97900 width=8)\n> \t\tSort Key: \"number\"\n> \t\t\t-> Seq Scan on \"tablex\" (cost=0.00..307137.34 rows=97900 width=8)\n> \t\t\tFilter: ((\"Date\" >= '2004-06-28'::date) AND (\"Date\" <= '2004-07-04'::date) AND (\"Time\" >= '00:00:00'::time without time zone) AND (\"Time\" <= '19:01:00'::time without time zone))\n> \n> Basically, the difference is in upper \"Time\" value (as you can see, it's\n> 18:01:00 in the first query and 19:01:00 in the other one). \n> The question is - why does it use index in first case and \n> it tries to do full sequential scan when the upper \"Time\" value\n> is different?\n> \n> DateTimeIndex was created on both columns (Date/Time):\n> CREATE INDEX \"DateTimeIndex\" ON \"tablex\" USING btree (\"Date\", \"Time\");\n\nPostgreSQL is always going to switch at some point, where the number of\nrows that have to be read from the table exceed some percentage of the\ntotal rows in the table.\n\nWe can possibly be more helpful if you send EXPLAIN ANALYZE, rather than\njust EXPLAIN.\n\nA few things to be careful of:\n\n- Is this supposed to be a slice of midnight to 6pm, for each day\nbetween 28 June and 4 July? If you want a continuous period from\nMidnight 28 June -> 6pm 4 July you're better to have a single timestamp\nfield.\n\n- It is unlikely that the , \"Time\" on your index is adding much to your\nselectivity, and it may be that you would be better off without it.\n\n- the DISTINCT can screw up your results, and it usually means that the\nSQL is not really the best it could be. A _real_ need for DISTINCT is\nquite rare in my experience, and from what I have seen it adds overhead\nand tends to encourage bad query plans when used unnecessarily.\n\nHope this is some help.\n\nRegards,\n\t\t\t\t\tAndrew McMillan\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Make things as simple as possible, but no simpler -- Einstein\n-------------------------------------------------------------------------",
"msg_date": "Mon, 05 Jul 2004 23:44:13 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan vs. Index scan with different query"
},
{
"msg_contents": "On Mon, Jul 05, 2004 at 11:44:13PM +1200, Andrew McMillan wrote:\n\n> > DateTimeIndex was created on both columns (Date/Time):\n> > CREATE INDEX \"DateTimeIndex\" ON \"tablex\" USING btree (\"Date\", \"Time\");\n> PostgreSQL is always going to switch at some point, where the number of\n> rows that have to be read from the table exceed some percentage of the\n> total rows in the table.\n> We can possibly be more helpful if you send EXPLAIN ANALYZE, rather than\n> just EXPLAIN.\n\nUnfortunately that seq scan vs. index scan\nheuristic was wrong - full scan kills the machine \nin no time due to large amount of INSERTs happening \nin the background (I/O bottleneck).\n\n> - Is this supposed to be a slice of midnight to 6pm, for each day\n> between 28 June and 4 July? If you want a continuous period from\n> Midnight 28 June -> 6pm 4 July you're better to have a single timestamp\n> field.\n> - It is unlikely that the , \"Time\" on your index is adding much to your\n> selectivity, and it may be that you would be better off without it.\n\nYes, we've figured out that index on Date + Time is rather useless.\nThanks for the tip, we've created index upon Date column instead and\nit should be enough.\n\n> - the DISTINCT can screw up your results, and it usually means that the\n> SQL is not really the best it could be. A _real_ need for DISTINCT is\n> quite rare in my experience, and from what I have seen it adds overhead\n> and tends to encourage bad query plans when used unnecessarily.\n\nWhat do you mean? The reason for which there's DISTINCT in that query is\nbecause I want to know how many unique rows is in the table.\nDo you suggest selecting all rows and doing \"DISTINCT\"/counting \non the application level?\n\n-- \n11.\n",
"msg_date": "Mon, 5 Jul 2004 15:46:15 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Seq scan vs. Index scan with different query conditions"
},
{
"msg_contents": "On Mon, 2004-07-05 at 15:46 +0200, [email protected] wrote:\n> On Mon, Jul 05, 2004 at 11:44:13PM +1200, Andrew McMillan wrote:\n> \n> > > DateTimeIndex was created on both columns (Date/Time):\n> > > CREATE INDEX \"DateTimeIndex\" ON \"tablex\" USING btree (\"Date\", \"Time\");\n> > PostgreSQL is always going to switch at some point, where the number of\n> > rows that have to be read from the table exceed some percentage of the\n> > total rows in the table.\n> > We can possibly be more helpful if you send EXPLAIN ANALYZE, rather than\n> > just EXPLAIN.\n> \n> Unfortunately that seq scan vs. index scan\n> heuristic was wrong - full scan kills the machine \n> in no time due to large amount of INSERTs happening \n> in the background (I/O bottleneck).\n\nIn that case you could perhaps consider tweaking various parameters in\nyour postgresql.conf - with an ideal setup the switch should happen when\nthe costs are roughly equal.\n\nHave you gone through the information here:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nAlso, if table rows are regularly DELETEd or UPDATEd then you will need\nto ensure it is regularly vacuumed. Does a \"VACUUM VERBOSE tablex\" show\na large number of dead tuples? Are you running pg_autovacuum? Do you\nget similar results immediately after a \"VACUUM FULL ANALYZE tablex\"?\n\nPossibly there is an uneven distribution of rows in the table. You\ncould consider increasing the statistics target:\nALTER TABLE tablex ALTER COLUMN \"Date\" SET STATISTICS;\nANALYZE tablex;\n\n\n> > - Is this supposed to be a slice of midnight to 6pm, for each day\n> > between 28 June and 4 July? If you want a continuous period from\n> > Midnight 28 June -> 6pm 4 July you're better to have a single timestamp\n> > field.\n> > - It is unlikely that the , \"Time\" on your index is adding much to your\n> > selectivity, and it may be that you would be better off without it.\n> \n> Yes, we've figured out that index on Date + Time is rather useless.\n> Thanks for the tip, we've created index upon Date column instead and\n> it should be enough.\n\nIt may be that you are better with a single timestamp column with an\nindex on it in any case, if you want the data sorted in timestamp order.\nThen you can ORDER BY <timestamp> as well, which will encourage the\nindex use further (although this advantage tends to get lost with the\nDISTINCT). You can still access the time part for a separate comparison\njust with a cast.\n\n\n> > - the DISTINCT can screw up your results, and it usually means that the\n> > SQL is not really the best it could be. A _real_ need for DISTINCT is\n> > quite rare in my experience, and from what I have seen it adds overhead\n> > and tends to encourage bad query plans when used unnecessarily.\n> \n> What do you mean? The reason for which there's DISTINCT in that query is\n> because I want to know how many unique rows is in the table.\n> Do you suggest selecting all rows and doing \"DISTINCT\"/counting \n> on the application level?\n\nThat's fine, I've just seen it used far too many times as a substitute\nfor having an extra join, or an application that should only be\ninserting unique rows in the first place. Things like that. 
It's just\none of those things that always sets off alarm bells when I'm reviewing\nsomeone else's work, and on most of these occasions it has not been\njustified when reexamined.\n\nCheers,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\nIt is truth which you cannot contradict; you can without any difficulty\n contradict Socrates. - Plato\n-------------------------------------------------------------------------",
"msg_date": "Tue, 06 Jul 2004 07:22:05 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Seq scan vs. Index scan with different query"
}
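A sketch of the statistics suggestion above. Note that SET STATISTICS takes an explicit target value, which the snippet in the message omits; the 200 used here is purely illustrative.

    ALTER TABLE "tablex" ALTER COLUMN "Date" SET STATISTICS 200;
    ANALYZE "tablex";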
] |
[
{
"msg_contents": "I have a very simple problem. I run two select statments, they are identical except for a single\nwhere condition. The first select statment runs in 9 ms, while the second statment runs for 4000\nms \n\nSQL1 - fast 9ms\nexplain analyse select seq_ac from refseq_sequence S where seq_ac in (select seq_ac2 from\nrefseq_refseq_hits where seq_ac1 = 'NP_001217')\n\nSQL2 - very slow 4000ms\nexplain analyse select seq_ac from refseq_sequence S where seq_ac in (select seq_ac2 from\nrefseq_refseq_hits where seq_ac1 = 'NP_001217') AND S.species = 'Homo sapiens' \n\nI think the second sql statment is slower than the first one because planner is not using\nHashAggregate. Can I force HashAggregation before index scan? \n\nHere is the full output from EXPLAIN ANALYZE\n\nexplain analyse select seq_ac from refseq_sequence S where seq_ac in (select seq_ac2 from\nrefseq_refseq_hits where seq_ac1 = 'NP_001217');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=169907.83..169919.88 rows=3 width=24) (actual time=1.450..8.707 rows=53\nloops=1)\n -> HashAggregate (cost=169907.83..169907.83 rows=2 width=19) (actual time=1.192..1.876\nrows=53 loops=1)\n -> Index Scan using refseq_refseq_hits_pkey on refseq_refseq_hits (cost=0.00..169801.33\nrows=42600 width=19) (actual time=0.140..0.894 rows=54 loops=1)\n Index Cond: ((seq_ac1)::text = 'NP_001217'::text)\n -> Index Scan using refseq_sequence_pkey on refseq_sequence s (cost=0.00..6.01 rows=1\nwidth=24) (actual time=0.105..0.111 rows=1 loops=53)\n Index Cond: ((s.seq_ac)::text = (\"outer\".seq_ac2)::text)\n Total runtime: 9.110 ms\n\n\n\nexplain analyse select seq_ac from refseq_sequence S where seq_ac in (select seq_ac2 from\nrefseq_refseq_hits where seq_ac1 = 'NP_001217') and S.species = 'Homo sapiens';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..4111.66 rows=1 width=24) (actual time=504.176..3857.340 rows=30\nloops=1)\n -> Index Scan using refseq_sequence_key2 on refseq_sequence s (cost=0.00..1516.06 rows=389\nwidth=24) (actual time=0.352..491.107 rows=27391 loops=1)\n Index Cond: ((species)::text = 'Homo sapiens'::text)\n -> Index Scan using refseq_refseq_hits_pkey on refseq_refseq_hits (cost=0.00..858.14 rows=213\nwidth=19) (actual time=0.114..0.114 rows=0 loops=27391)\n Index Cond: (((refseq_refseq_hits.seq_ac1)::text = 'NP_001217'::text) AND\n((\"outer\".seq_ac)::text = (refseq_refseq_hits.seq_ac2)::text))\n Total runtime: 3857.636 ms\n",
"msg_date": "Mon, 5 Jul 2004 16:14:34 -0700 (PDT)",
"msg_from": "Eugene <[email protected]>",
"msg_from_op": true,
"msg_subject": "Forcing HashAggregation prior to index scan?"
},
{
"msg_contents": "Eugene <[email protected]> writes:\n> Can I force HashAggregation before index scan? \n\nNo. But look into why the planner's rows estimate is so bad here:\n\n> -> Index Scan using refseq_sequence_key2 on refseq_sequence s (cost=0.00..1516.06 rows=389\n> width=24) (actual time=0.352..491.107 rows=27391 loops=1)\n> Index Cond: ((species)::text = 'Homo sapiens'::text)\n\nHave you ANALYZEd this table recently? If so, maybe you need a larger\nstatistics target for the species column. The estimated row count\nshouldn't be off by a factor of seventy...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Jul 2004 23:10:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing HashAggregation prior to index scan? "
}
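A sketch of the fix suggested above, using the names from the query: raise the per-column statistics target for the skewed species column, re-analyze, and then re-check the row estimate with EXPLAIN ANALYZE. The value 100 is illustrative.

    ALTER TABLE refseq_sequence ALTER COLUMN species SET STATISTICS 100;
    ANALYZE refseq_sequence;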
] |
[
{
"msg_contents": "Hello there, I'm trying to make sure my postgres 7.4 is running as fast \nas it can in my box.\nMy hardware configuration is:\n\nHP ML-350G3\nDup processor XEON 2.8\nThree U320, 10000 rpm disks, RAID-5\nHP 641 Raid Controller.\n1GB RAM\n\nMy Software config is:\n\nRedHat 7.3 - 2.4.20-28.7smp Kernel, reporting four processors because of \nhyper threading.\nPostgres 7.4\nData directory is on a ext3 journaled filesystem (data=journal)\nLog directory is on another partition.\nIt is to be a dedicated to databases server, this means there are no \nother heavy processes running.\n\nMy tests with pgbench with fsync turned off:\n\n[root@dbs pgbench]# ./pgbench -U dba -P 4ghinec osdb -c 10 -s 11 -t 1000\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\ntps = 408.520028 (including connections establishing)\ntps = 409.697088 (excluding connections establishing)\n\n\nwith fsync turned on:\n\n[root@dbs pgbench]# ./pgbench -U dba -P 4ghinec osdb\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 1\nnumber of transactions per client: 10\nnumber of transactions actually processed: 10/10\ntps = 43.366451 (including connections establishing)\ntps = 44.867394 (excluding connections establishing)\n\nI did a lot of these tests and results are consistent. Now then, without \nfsync on I get a\n1000% improvment!!!!\n\nQuestions now:\n\n1) since I'm using ext3 with data=journal, do I need to use fsync=true?\n2) Is there not a problem with RedHat? should fsyncs asked by postgres \nto redhat be such a burden?\n3) Any other tests you would suggest me to do?\n\nthank you all\n\nRodrigo Filgueira Prates\nIT@CINTERFOR/OIT\nhttp://www.cinterfor.org.uy\n\n\n",
"msg_date": "Tue, 06 Jul 2004 11:57:51 -0300",
"msg_from": "Rodrigo Filgueira <[email protected]>",
"msg_from_op": true,
"msg_subject": "to fsync or not to fsync"
},
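One thing worth checking before drawing conclusions from the numbers above: the two pgbench runs quoted use different -c/-t values (10 clients x 1000 transactions versus the defaults), so they are not directly comparable. A small sketch of confirming what the server is actually running with; fsync itself is set in postgresql.conf rather than per session.

    SHOW fsync;
    SHOW wal_sync_method;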
{
"msg_contents": "Rodrigo,\n\n> Hello there, I'm trying to make sure my postgres 7.4 is running as fast\n> as it can in my box.\n\n> ...\n\n> My Software config is:\n> \n> RedHat 7.3 - 2.4.20-28.7smp Kernel, reporting four processors because of\n> hyper threading.\n> Postgres 7.4\n> Data directory is on a ext3 journaled filesystem (data=journal)\n\nIsn't ext3 a really slow journaled filesystem [1, 2] to begin with?\n\n[1] http://slashdot.org/article.pl?sid=04/05/11/134214\n[2] http://209.81.41.149/~jpiszcz/index.html\n\nIt might be a good experiment to figure out how different file systems\naffect perf.\n\n-- \nFrank Hsueh, EngSci Elec 0T3 + 1\n\nemail: [email protected]\n",
"msg_date": "Tue, 6 Jul 2004 11:45:10 -0400",
"msg_from": "Frank Hsueh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: to fsync or not to fsync"
},
{
"msg_contents": "Frank, thanks for your answer,\n\nThis article http://www.linuxjournal.com/print.php?sid=5841 evaluates \nperformance from a relational database point of view and it concludes \nthat ext3 is faster.\nThe articles you provided evaluate filesystems by using basic shell \ncommands, copy, tar, touch.\nI really don't know why ext3 would be faster for databases but here are \nsome tests that suggest this is true,\n\nI run pgbench with data=writeback for ext3, this is as ext2 as an ext3 \nfs can get, and results are more or less the same as with data=journal\n\nBest run with data=journal\n------------------------------------\n\n[root@dbs pgbench]# ./pgbench -U dba -P 4ghinec osdb -c 10 -s 11 -t 1000\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\ntps = 349.844978 (including connections establishing)\ntps = 350.715286 (excluding connections establishing)\n\n\nBest run with data=writeback\n----------------------------------------\n\n[root@dbs pgbench]# ./pgbench -U dba -P 4ghinec osdb -c 10 -s 11 -t 1000\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\ntps = 319.239210 (including connections establishing)\ntps = 319.961564 (excluding connections establishing)\n\nanybody else can throw some light on this?\n\n\nFrank Hsueh wrote:\n\n>Rodrigo,\n> \n>\n>>Hello there, I'm trying to make sure my postgres 7.4 is running as fast\n>>as it can in my box.\n>> \n>>\n>>My Software config is:\n>>\n>>RedHat 7.3 - 2.4.20-28.7smp Kernel, reporting four processors because of\n>>hyper threading.\n>>Postgres 7.4\n>>Data directory is on a ext3 journaled filesystem (data=journal)\n>> \n>>\n>Isn't ext3 a really slow journaled filesystem [1, 2] to begin with?\n>\n>[1] http://slashdot.org/article.pl?sid=04/05/11/134214\n>[2] http://209.81.41.149/~jpiszcz/index.html\n>\n>It might be a good experiment to figure out how different file systems\n>affect perf.\n> \n>\n \nRodrigo Filgueira Prates\nIT@CINTERFOR/OIT\nhttp://www.cinterfor.org.uy\n\n\n",
"msg_date": "Tue, 06 Jul 2004 13:43:14 -0300",
"msg_from": "Rodrigo Filgueira <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: to fsync or not to fsync (ext3?)"
}
] |
[
{
"msg_contents": "I have been trying to get keyword searching quicker and have now decided to make smaller tables.\n\nI was working with tables of about 50 000 000 rows but now will use about 1.5million rows, I am using a TSearch2 to find keywords in the titles column of the rows. I have indexes using a gist index on the TSearch2 using 'default'.\nI wanted to know if it would be quicker to change my words in the title into crc32 bit integers and then look for them using an index which would be on the crc numbers. I could then cluster on the crc numbers and this should result in a fast search but I just want to know if you know of better ways of doing keyword searching, I need it to be about 1-4 seconds and return usually around 1-5 thousand rows.\n\nI know places like google and ebay do seaches and only return 200 or 100 max rows, this can be done on our current system very fast as you just limit the number returned but for our purposes I need to return all the rows,\n\nThanks\nsmokedvw\n\n\n\n\n\n\nI have been trying to get keyword searching quicker \nand have now decided to make smaller tables.\n \nI was working with tables of about 50 000 000 rows \nbut now will use about 1.5million rows, I am using a TSearch2 to find keywords \nin the titles column of the rows. I have indexes using a gist index on the \nTSearch2 using 'default'.\nI wanted to know if it would be quicker to change \nmy words in the title into crc32 bit integers and then look for them using an \nindex which would be on the crc numbers. I could then cluster on the crc \nnumbers and this should result in a fast search but I just want to know if you \nknow of better ways of doing keyword searching, I need it to be about 1-4 \nseconds and return usually around 1-5 thousand rows.\n \nI know places like google and ebay do seaches and \nonly return 200 or 100 max rows, this can be done on our current system very \nfast as you just limit the number returned but for our purposes I need to return \nall the rows,\n \nThanks\nsmokedvw",
"msg_date": "Tue, 6 Jul 2004 10:52:59 -0700",
"msg_from": "\"borajetta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Keyword searching question"
}
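A generic tsearch2 sketch of the kind of setup being described, since the poster's actual schema isn't shown; all names here are illustrative. The gist index is what keeps the tsvector match from becoming a sequential scan over the 1.5 million titles.

    ALTER TABLE items ADD COLUMN title_fti tsvector;
    UPDATE items SET title_fti = to_tsvector('default', title);
    CREATE INDEX items_title_fti_idx ON items USING gist (title_fti);
    VACUUM ANALYZE items;

    SELECT count(*) FROM items
     WHERE title_fti @@ to_tsquery('default', 'keyword');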
] |
[
{
"msg_contents": "Forgot to mention that were running post gres 7.4.2 using a dual optiron system and 4gig ram. The results are used by the PHP web server.\nThanks again\n ----- Original Message ----- \n From: borajetta \n To: [email protected] \n Sent: Tuesday, July 06, 2004 10:52 AM\n Subject: Keyword searching question\n\n\n I have been trying to get keyword searching quicker and have now decided to make smaller tables.\n\n I was working with tables of about 50 000 000 rows but now will use about 1.5million rows, I am using a TSearch2 to find keywords in the titles column of the rows. I have indexes using a gist index on the TSearch2 using 'default'.\n I wanted to know if it would be quicker to change my words in the title into crc32 bit integers and then look for them using an index which would be on the crc numbers. I could then cluster on the crc numbers and this should result in a fast search but I just want to know if you know of better ways of doing keyword searching, I need it to be about 1-4 seconds and return usually around 1-5 thousand rows.\n\n I know places like google and ebay do seaches and only return 200 or 100 max rows, this can be done on our current system very fast as you just limit the number returned but for our purposes I need to return all the rows,\n\n Thanks\n smokedvw\n\n\n\n\n\n\nForgot to mention that were running post gres 7.4.2 \nusing a dual optiron system and 4gig ram. The results are used by the PHP web \nserver.\nThanks again\n\n----- Original Message ----- \nFrom:\nborajetta \nTo: [email protected]\n\nSent: Tuesday, July 06, 2004 10:52 \n AM\nSubject: Keyword searching question\n\nI have been trying to get keyword searching \n quicker and have now decided to make smaller tables.\n \nI was working with tables of about 50 000 000 \n rows but now will use about 1.5million rows, I am using a TSearch2 to find \n keywords in the titles column of the rows. I have indexes using a gist \n index on the TSearch2 using 'default'.\nI wanted to know if it would be quicker to change \n my words in the title into crc32 bit integers and then look for them using an \n index which would be on the crc numbers. I could then cluster on the crc \n numbers and this should result in a fast search but I just want to know if you \n know of better ways of doing keyword searching, I need it to be about 1-4 \n seconds and return usually around 1-5 thousand rows.\n \nI know places like google and ebay do seaches and \n only return 200 or 100 max rows, this can be done on our current system very \n fast as you just limit the number returned but for our purposes I need to \n return all the rows,\n \nThanks\nsmokedvw",
"msg_date": "Tue, 6 Jul 2004 10:56:29 -0700",
"msg_from": "\"borajetta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Keyword searching question"
}
] |
[
{
"msg_contents": "\n\nI would like to ask you where i can find information about the\nimplementation of the inheritance relationship in Postgres.\n\nThere are several ways to store and to retrieve instances\ncontained in an hierarchie.\n\nWhich clustering and buffer replacement policy implements Postgres?\n\nThere is a system table called pg_inherits, but how is it used during\nhierarchies traversing?\n",
"msg_date": "Wed, 7 Jul 2004 18:13:42 +0300 (EET DST)",
"msg_from": "Ioannis Theoharis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Implementatiion of Inheritance in Postgres"
}
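While the question above is really about internals, the pg_inherits catalog it mentions is easy to inspect directly: it records one row per direct parent/child inheritance link, and a query on a parent table is expanded by the planner into scans of the parent plus each child listed there. A small sketch of reading it:

    SELECT c.relname AS child, p.relname AS parent, i.inhseqno
      FROM pg_inherits i
      JOIN pg_class c ON c.oid = i.inhrelid
      JOIN pg_class p ON p.oid = i.inhparent
     ORDER BY parent, child;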
] |
[
{
"msg_contents": "Hi,\n\nUsing PostgreSQL 7.4.2 on Solaris. I'm trying to \nimprove performance on some queries to my databases so\nI wanted to try out various index structures. \n\nSince I'm going to be running my performance tests\nrepeatedly, I created some SQL scripts to delete and \nrecreate various index configurations. One of the\nscripts contains the commands for recreating the \n'original' index configuration (i.e. the one I've \nalready got some numbers for). Only thing is now\nwhen I delete and recreate the original indexes then\nrun the query, I'm finding the performance has gone\ncompletely down the tubes compared to what I \noriginally had. A query that used to take 5 minutes\nto complete now takes hours to complete.\n\nFor what it's worth my query looks something like:\n\nselect * from tbl_1, tbl_2 where tbl_1.id = tbl_2.id\nand tbl_2.name like 'x%y%' and tbl_1.x > 1234567890123\norder by tbl_1.x;\n\ntbl_1 is very big (> 2 million rows)\ntbl_2 is relatively small (7000 or so rows)\ntbl_1.x is a numeric(13)\ntbl_1.id & tbl_2.id are integers\ntbl_2.name is a varchar(64)\n\nI've run 'VACUUM ANALYZE' on both tables involved in\nthe query. I also used 'EXPLAIN' and observed that\nthe query plan is completely changed from what it \nwas originally. \n\nAny idea why this would be? I would have thougth \nthat a freshly created index would have better \nperformance not worse. I have not done any inserts\nor updates since recreating the indexes.\n\nthanks in advance,\n\nBill C\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Wed, 7 Jul 2004 09:16:40 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Terrible performance after deleting/recreating indexes"
},
{
"msg_contents": "Bill Chandler wrote:\n\n> Hi,\n> \n> Using PostgreSQL 7.4.2 on Solaris. I'm trying to \n> improve performance on some queries to my databases so\n> I wanted to try out various index structures. \n> \n> Since I'm going to be running my performance tests\n> repeatedly, I created some SQL scripts to delete and \n> recreate various index configurations. One of the\n> scripts contains the commands for recreating the \n> 'original' index configuration (i.e. the one I've \n> already got some numbers for). Only thing is now\n> when I delete and recreate the original indexes then\n> run the query, I'm finding the performance has gone\n> completely down the tubes compared to what I \n> originally had. A query that used to take 5 minutes\n> to complete now takes hours to complete.\n> \n> For what it's worth my query looks something like:\n> \n> select * from tbl_1, tbl_2 where tbl_1.id = tbl_2.id\n> and tbl_2.name like 'x%y%' and tbl_1.x > 1234567890123\n> order by tbl_1.x;\n> \n> tbl_1 is very big (> 2 million rows)\n> tbl_2 is relatively small (7000 or so rows)\n> tbl_1.x is a numeric(13)\n> tbl_1.id & tbl_2.id are integers\n> tbl_2.name is a varchar(64)\n> \n> I've run 'VACUUM ANALYZE' on both tables involved in\n> the query. I also used 'EXPLAIN' and observed that\n> the query plan is completely changed from what it \n> was originally. \n\nGet an explain analyze. That gives actual v/s planned time spent. See what is \ncausing the difference. A discrepency between planned and actual row is usually \na indication of out-of-date stats.\n\n\nWhich are the indexes on these tables? You should list fields with indexes first \nin where clause. Also list most selective field first so that it eliminates as \nmany rows as possible in first scan.\n\n\nI hope you have read the tuning articles on varlena.com and applied some basic \ntuning.\n\nAnd post the table schema, hardware config, postgresql config(important ones of \ncourse) and explain analyze for queries. That would be something to start with.\n\n Shridhar\n",
"msg_date": "Thu, 08 Jul 2004 14:37:37 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Terrible performance after deleting/recreating indexes"
},
{
"msg_contents": "Thanks for the advice.\n\nOn further review it appears I am only getting this \nperformance degradation when I run the command via\na JDBC app. If I do the exact same query from\npsql, the performance is fine. I've tried both the\nJDBC2 and JDBC3 jars. Same results.\n\nIt definitely seems to correspond to deleting and\nrecreating the indexes, though. The same query thru\nJDBC worked fine before recreating the indexes. \n\nDoes that make any sense at all?\n\nthanks\n\nBill\n\n--- Shridhar Daithankar <[email protected]>\nwrote:\n> Bill Chandler wrote:\n> \n> > Hi,\n> > \n> > Using PostgreSQL 7.4.2 on Solaris. I'm trying to \n> > improve performance on some queries to my\n> databases so\n> > I wanted to try out various index structures. \n> > \n> > Since I'm going to be running my performance tests\n> > repeatedly, I created some SQL scripts to delete\n> and \n> > recreate various index configurations. One of the\n> > scripts contains the commands for recreating the \n> > 'original' index configuration (i.e. the one I've \n> > already got some numbers for). Only thing is now\n> > when I delete and recreate the original indexes\n> then\n> > run the query, I'm finding the performance has\n> gone\n> > completely down the tubes compared to what I \n> > originally had. A query that used to take 5\n> minutes\n> > to complete now takes hours to complete.\n> > \n> > For what it's worth my query looks something like:\n> > \n> > select * from tbl_1, tbl_2 where tbl_1.id =\n> tbl_2.id\n> > and tbl_2.name like 'x%y%' and tbl_1.x >\n> 1234567890123\n> > order by tbl_1.x;\n> > \n> > tbl_1 is very big (> 2 million rows)\n> > tbl_2 is relatively small (7000 or so rows)\n> > tbl_1.x is a numeric(13)\n> > tbl_1.id & tbl_2.id are integers\n> > tbl_2.name is a varchar(64)\n> > \n> > I've run 'VACUUM ANALYZE' on both tables involved\n> in\n> > the query. I also used 'EXPLAIN' and observed\n> that\n> > the query plan is completely changed from what it \n> > was originally. \n> \n> Get an explain analyze. That gives actual v/s\n> planned time spent. See what is \n> causing the difference. A discrepency between\n> planned and actual row is usually \n> a indication of out-of-date stats.\n> \n> \n> Which are the indexes on these tables? You should\n> list fields with indexes first \n> in where clause. Also list most selective field\n> first so that it eliminates as \n> many rows as possible in first scan.\n> \n> \n> I hope you have read the tuning articles on\n> varlena.com and applied some basic \n> tuning.\n> \n> And post the table schema, hardware config,\n> postgresql config(important ones of \n> course) and explain analyze for queries. That would\n> be something to start with.\n> \n> Shridhar\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the\n> unregister command\n> (send \"unregister YourEmailAddressHere\" to\n> [email protected])\n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - Send 10MB messages!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Thu, 8 Jul 2004 13:49:21 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Terrible performance after deleting/recreating indexes"
},
{
"msg_contents": "That is interesting - both psql and JDBC merely submit statements for \nthe backend to process, so generally you would expect no difference in \nexecution plan or performance.\n\nIt might be worth setting \"log_statement=true\" in postgresql.conf and \nchecking that you are executing *exactly* the same statement in both \nJDBC and psql.\n\nregards\n\nMark\n\nP.s : lets see the output from EXPLAIN ANALYZE :-)\n\nBill Chandler wrote:\n\n>Thanks for the advice.\n>\n>On further review it appears I am only getting this \n>performance degradation when I run the command via\n>a JDBC app. If I do the exact same query from\n>psql, the performance is fine. I've tried both the\n>JDBC2 and JDBC3 jars. Same results.\n>\n>\n> \n>\n> \n> \n>\n",
"msg_date": "Fri, 09 Jul 2004 12:14:32 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Terrible performance after deleting/recreating indexes"
},
{
"msg_contents": "Thanks for this tip. Turns out there is a difference.\nI am using cursors (i.e. calling setFetchSize(5000) on\nmy Statement) in JDBC. So the SQL statement is\npreceded by:\n\n DECLARE JDBC_CURS_1 CURSOR FOR ...\n\nwhich is then followed by the SQL statemnt.\n\nThis is followed by the separate statement: \n\n FETCH FORWARD 5000 FROM JDBC_CURS_1;\n\nAlso, don't know if this is significant but there\nare a few lines before both of these:\n\n set datestyle to 'ISO'; select version(), case when\npg_encoding_to_char(1) = 'SQL_ASCII' then 'UNKNOWN'\nelse getdatabaseencoding() end;\n set client_encoding = 'UNICODE\n begin;\n\nOnly thing is, though, none of this is new. I was\nusing cursors before as well.\n\nHere is the output from \"EXPLAIN ANALYZE\". Hope it \ncomes out readable:\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=50466.04..50470.45 rows=1765 width=114)\n(actual time=87237.003..88235.011 rows=108311 loops=1)\n Sort Key: iso_nep_data_update_events.lds\n -> Merge Join (cost=49240.03..50370.85 rows=1765\nwidth=114) (actual time=56658.356..65221.995\nrows=108311 loops=1)\n Merge Cond: (\"outer\".obj_id = \"inner\".obj_id)\n -> Sort (cost=198.01..198.16 rows=61\nwidth=65) (actual time=175.947..181.172 rows=3768\nloops=1)\n Sort Key: iso_nep_control.obj_id\n -> Seq Scan on iso_nep_control \n(cost=0.00..196.20 rows=61 width=65) (actual\ntime=0.056..108.151 rows=3768 loops=1)\n Filter: ((real_name)::text ~~\n'NEPOOL%REAL%'::text)\n -> Sort (cost=49042.02..49598.46\nrows=222573 width=69) (actual\ntime=56482.073..58642.901 rows=216528 loops=1)\n Sort Key:\niso_nep_data_update_events.obj_id\n -> Index Scan using iso_nep_due_idx1\non iso_nep_data_update_events (cost=0.00..7183.18\nrows=222573 width=69) (actual time=0.179..11739.104\nrows=216671 loops=1)\n Index Cond: (lds >\n1088554754000::numeric)\n Total runtime: 88643.330 ms\n(13 rows)\n\n\nHere is the actual query:\n\nselect iso_nep_DATA_UPDATE_EVENTS.lds,\n iso_nep_DATA_UPDATE_EVENTS.tsds,\n iso_nep_DATA_UPDATE_EVENTS.value,\n iso_nep_DATA_UPDATE_EVENTS.correction,\n iso_nep_DATA_UPDATE_EVENTS.delta_lds_tsds,\n iso_nep_CONTROL.real_name,\n iso_nep_CONTROL.freq,\n iso_nep_CONTROL.type from\n iso_nep_DATA_UPDATE_EVENTS, iso_nep_CONTROL\n where iso_nep_CONTROL.real_name like\n'NEPOOL%REAL%' escape '/' and\n iso_nep_DATA_UPDATE_EVENTS.obj_id =\niso_nep_CONTROL.obj_id and\n iso_nep_DATA_UPDATE_EVENTS.lds > 1088554754000\norder by lds;\n\nTwo tables: iso_nep_data_update_events and\niso_nep_control. Basically getting all columns from\nboth tables. Joining the tables on obj_id = obj_id.\nHave unique indexes on iso_nep_control.obj_id\n(clustered) and iso_nep_control.real_name. 
Have\nnon-unique indexes on iso_nep_data_update_events.lds\nand iso_nep_data_update_events.obj_id.\n\nthanks,\n\nBill\n\n--- Mark Kirkwood <[email protected]> wrote:\n> That is interesting - both psql and JDBC merely\n> submit statements for \n> the backend to process, so generally you would\n> expect no difference in \n> execution plan or performance.\n> \n> It might be worth setting \"log_statement=true\" in\n> postgresql.conf and \n> checking that you are executing *exactly* the same\n> statement in both \n> JDBC and psql.\n> \n> regards\n> \n> Mark\n> \n> P.s : lets see the output from EXPLAIN ANALYZE :-)\n> \n> Bill Chandler wrote:\n> \n> >Thanks for the advice.\n> >\n> >On further review it appears I am only getting this\n> \n> >performance degradation when I run the command via\n> >a JDBC app. If I do the exact same query from\n> >psql, the performance is fine. I've tried both the\n> >JDBC2 and JDBC3 jars. Same results.\n> >\n> >\n> > \n> >\n> > \n> > \n> >\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - 100MB free storage!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Fri, 9 Jul 2004 08:18:48 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Terrible performance after deleting/recreating indexes"
},
{
"msg_contents": "Thanks to all who have responded. I now think my\nproblem is not related to deleting/recreating indexes.\nSomehow it is related to JDBC cursors. It appears\nthat what is happening is that since I'm using \na fetch size of 5000, the command:\n\nFETCH FORWARD 5000 FROM JDBC_CURS_1\n\nis being repeatedly sent to the server as I process\nthe result set from my query. Each time this command\nis sent it it takes about 5 minutes to return which is\nabout the amount of time the whole query took to\ncomplete before the performance degredation. So in\nother words it looks as if the full select is being\nrerun on each fetch. \n\nNow the mystery is why is this happening all of the\nsudden? I have been running w/ fetch size set to 5000\nfor the last couple of weeks and it did not appear to\nbe doing this (i.e. re-running the entire select \nstatement again). Is this what I should expect when\nusing cursors? I would have thought that the server\nshould \"remember\" where it left off in the query since\nthe last fetch and continue from there.\n\nCould I have inadvertently changed a parameter\nsomewhere that would cause this behavior?\n\nthanks,\n\nBill\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Fri, 9 Jul 2004 13:24:16 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Cursors performance (was: Re: [PERFORM] Terrible performance after\n\tdeleting/recreating indexes)"
},
{
"msg_contents": "Bill,\n\nWhat happens if you do this in psql, also you can turn on duration\nlogging in the backend and log the queries.\n\ndave\nOn Fri, 2004-07-09 at 16:24, Bill Chandler wrote:\n> Thanks to all who have responded. I now think my\n> problem is not related to deleting/recreating indexes.\n> Somehow it is related to JDBC cursors. It appears\n> that what is happening is that since I'm using \n> a fetch size of 5000, the command:\n> \n> FETCH FORWARD 5000 FROM JDBC_CURS_1\n> \n> is being repeatedly sent to the server as I process\n> the result set from my query. Each time this command\n> is sent it it takes about 5 minutes to return which is\n> about the amount of time the whole query took to\n> complete before the performance degredation. So in\n> other words it looks as if the full select is being\n> rerun on each fetch. \n> \n> Now the mystery is why is this happening all of the\n> sudden? I have been running w/ fetch size set to 5000\n> for the last couple of weeks and it did not appear to\n> be doing this (i.e. re-running the entire select \n> statement again). Is this what I should expect when\n> using cursors? I would have thought that the server\n> should \"remember\" where it left off in the query since\n> the last fetch and continue from there.\n> \n> Could I have inadvertently changed a parameter\n> somewhere that would cause this behavior?\n> \n> thanks,\n> \n> Bill\n> \n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam protection around \n> http://mail.yahoo.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n> \n> \n> !DSPAM:40eefff6170301475214189!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n",
"msg_date": "Fri, 09 Jul 2004 16:39:01 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursors performance (was: Re: [PERFORM] Terrible"
},
{
"msg_contents": "\n\nOn Fri, 9 Jul 2004, Bill Chandler wrote:\n\n> Thanks to all who have responded. I now think my\n> problem is not related to deleting/recreating indexes.\n> Somehow it is related to JDBC cursors. It appears\n> that what is happening is that since I'm using \n> a fetch size of 5000, the command:\n> \n> FETCH FORWARD 5000 FROM JDBC_CURS_1\n> \n\nIf the top level node of your execution plan is a sort step, it should \ntake essentially no time to retrieve additional rows after the first \nfetch. The sort step is materializes the results so that future fetches \nsimply need to spit this data back to the client.\n\nI would agree with Dave's suggestion to use log_duration and compare the \nvalues for the first and subsequent fetches.\n\nKris Jurka\n\n",
"msg_date": "Fri, 9 Jul 2004 15:44:32 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursors performance (was: Re: [PERFORM] Terrible performance"
},
{
"msg_contents": "Using psql it peforms exactly as I'd expect. The\nrows get printed out to stdout, I hold down the space\nbar to keep everything scrolling and as every 5000\nrows go by I see a new fetch statement logged in the\nserver log. The results from this statement seem to\ncome back instaneously and the output starts scrolling\nagain immediately. Whole query takes a few minutes \nto complete.\n\nI seems like it has something to do w/ my JDBC app\nbut I can't think for the life of me what I might have\nchanged. Anyway, there's only the setFetchSize(5000)\nand the setAutoCommit(false) that are relevant to\ncursors, right? And those have been in there for \nweeks.\n\nBill\n\n--- Dave Cramer <[email protected]> wrote:\n> Bill,\n> \n> What happens if you do this in psql, also you can\n> turn on duration\n> logging in the backend and log the queries.\n> \n> dave\n> On Fri, 2004-07-09 at 16:24, Bill Chandler wrote:\n> > Thanks to all who have responded. I now think my\n> > problem is not related to deleting/recreating\n> indexes.\n> > Somehow it is related to JDBC cursors. It appears\n> > that what is happening is that since I'm using \n> > a fetch size of 5000, the command:\n> > \n> > FETCH FORWARD 5000 FROM JDBC_CURS_1\n> > \n> > is being repeatedly sent to the server as I\n> process\n> > the result set from my query. Each time this\n> command\n> > is sent it it takes about 5 minutes to return\n> which is\n> > about the amount of time the whole query took to\n> > complete before the performance degredation. So in\n> > other words it looks as if the full select is\n> being\n> > rerun on each fetch. \n> > \n> > Now the mystery is why is this happening all of\n> the\n> > sudden? I have been running w/ fetch size set to\n> 5000\n> > for the last couple of weeks and it did not appear\n> to\n> > be doing this (i.e. re-running the entire select \n> > statement again). Is this what I should expect\n> when\n> > using cursors? I would have thought that the\n> server\n> > should \"remember\" where it left off in the query\n> since\n> > the last fetch and continue from there.\n> > \n> > Could I have inadvertently changed a parameter\n> > somewhere that would cause this behavior?\n> > \n> > thanks,\n> > \n> > Bill\n> > \n> > __________________________________________________\n> > Do You Yahoo!?\n> > Tired of spam? Yahoo! Mail has the best spam\n> protection around \n> > http://mail.yahoo.com\n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please\n> send an appropriate\n> > subscribe-nomail command to\n> [email protected] so that your\n> > message can get through to the mailing list\n> cleanly\n> > \n> > \n> > \n> > !DSPAM:40eefff6170301475214189!\n> > \n> > \n> -- \n> Dave Cramer\n> 519 939 0336\n> ICQ # 14675561\n> \n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Fri, 9 Jul 2004 14:03:48 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cursors performance (was: Re: [PERFORM] Terrible performance\n\tafter deleting/recreating indexes)"
},
{
"msg_contents": "Ok, given that there are 5000 rows, the jdbc driver will actually fetch\nall 5000 when you do the fetch, so is it the speed of the connection, or\nthe actual fetch that is taking the time, again, check the server logs\nfor duration.\n\nDave\nOn Fri, 2004-07-09 at 17:03, Bill Chandler wrote:\n> Using psql it peforms exactly as I'd expect. The\n> rows get printed out to stdout, I hold down the space\n> bar to keep everything scrolling and as every 5000\n> rows go by I see a new fetch statement logged in the\n> server log. The results from this statement seem to\n> come back instaneously and the output starts scrolling\n> again immediately. Whole query takes a few minutes \n> to complete.\n> \n> I seems like it has something to do w/ my JDBC app\n> but I can't think for the life of me what I might have\n> changed. Anyway, there's only the setFetchSize(5000)\n> and the setAutoCommit(false) that are relevant to\n> cursors, right? And those have been in there for \n> weeks.\n> \n> Bill\n> \n> --- Dave Cramer <[email protected]> wrote:\n> > Bill,\n> > \n> > What happens if you do this in psql, also you can\n> > turn on duration\n> > logging in the backend and log the queries.\n> > \n> > dave\n> > On Fri, 2004-07-09 at 16:24, Bill Chandler wrote:\n> > > Thanks to all who have responded. I now think my\n> > > problem is not related to deleting/recreating\n> > indexes.\n> > > Somehow it is related to JDBC cursors. It appears\n> > > that what is happening is that since I'm using \n> > > a fetch size of 5000, the command:\n> > > \n> > > FETCH FORWARD 5000 FROM JDBC_CURS_1\n> > > \n> > > is being repeatedly sent to the server as I\n> > process\n> > > the result set from my query. Each time this\n> > command\n> > > is sent it it takes about 5 minutes to return\n> > which is\n> > > about the amount of time the whole query took to\n> > > complete before the performance degredation. So in\n> > > other words it looks as if the full select is\n> > being\n> > > rerun on each fetch. \n> > > \n> > > Now the mystery is why is this happening all of\n> > the\n> > > sudden? I have been running w/ fetch size set to\n> > 5000\n> > > for the last couple of weeks and it did not appear\n> > to\n> > > be doing this (i.e. re-running the entire select \n> > > statement again). Is this what I should expect\n> > when\n> > > using cursors? I would have thought that the\n> > server\n> > > should \"remember\" where it left off in the query\n> > since\n> > > the last fetch and continue from there.\n> > > \n> > > Could I have inadvertently changed a parameter\n> > > somewhere that would cause this behavior?\n> > > \n> > > thanks,\n> > > \n> > > Bill\n> > > \n> > > __________________________________________________\n> > > Do You Yahoo!?\n> > > Tired of spam? Yahoo! Mail has the best spam\n> > protection around \n> > > http://mail.yahoo.com\n> > > \n> > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please\n> > send an appropriate\n> > > subscribe-nomail command to\n> > [email protected] so that your\n> > > message can get through to the mailing list\n> > cleanly\n> > > \n> > > \n> > > \n> > > \n> > > \n> > > \n> > -- \n> > Dave Cramer\n> > 519 939 0336\n> > ICQ # 14675561\n> > \n> > \n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! 
Mail has the best spam protection around \n> http://mail.yahoo.com\n> \n> \n> \n> !DSPAM:40ef083f256273772718645!\n> \n> \n-- \nDave Cramer\n519 939 0336\nICQ # 14675561\n\n",
"msg_date": "Fri, 09 Jul 2004 17:10:57 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursors performance (was: Re: [PERFORM] Terrible"
},
{
"msg_contents": "Here are the result from \"log_duration = true\" \n\nDECLARE/1st FETCH: 325014.881 ms\n2nd FETCH: 324055.281 ms\n\n--- Dave Cramer <[email protected]> wrote:\n> Ok, given that there are 5000 rows, the jdbc driver\n> will actually fetch\n> all 5000 when you do the fetch, so is it the speed\n> of the connection, or\n> the actual fetch that is taking the time, again,\n> check the server logs\n> for duration.\n> \n> Dave\n> On Fri, 2004-07-09 at 17:03, Bill Chandler wrote:\n> > Using psql it peforms exactly as I'd expect. The\n> > rows get printed out to stdout, I hold down the\n> space\n> > bar to keep everything scrolling and as every 5000\n> > rows go by I see a new fetch statement logged in\n> the\n> > server log. The results from this statement seem\n> to\n> > come back instaneously and the output starts\n> scrolling\n> > again immediately. Whole query takes a few\n> minutes \n> > to complete.\n> > \n> > I seems like it has something to do w/ my JDBC app\n> > but I can't think for the life of me what I might\n> have\n> > changed. Anyway, there's only the\n> setFetchSize(5000)\n> > and the setAutoCommit(false) that are relevant to\n> > cursors, right? And those have been in there for \n> > weeks.\n> > \n> > Bill\n> > \n> > --- Dave Cramer <[email protected]> wrote:\n> > > Bill,\n> > > \n> > > What happens if you do this in psql, also you\n> can\n> > > turn on duration\n> > > logging in the backend and log the queries.\n> > > \n> > > dave\n> > > On Fri, 2004-07-09 at 16:24, Bill Chandler\n> wrote:\n> > > > Thanks to all who have responded. I now think\n> my\n> > > > problem is not related to deleting/recreating\n> > > indexes.\n> > > > Somehow it is related to JDBC cursors. It\n> appears\n> > > > that what is happening is that since I'm using\n> \n> > > > a fetch size of 5000, the command:\n> > > > \n> > > > FETCH FORWARD 5000 FROM JDBC_CURS_1\n> > > > \n> > > > is being repeatedly sent to the server as I\n> > > process\n> > > > the result set from my query. Each time this\n> > > command\n> > > > is sent it it takes about 5 minutes to return\n> > > which is\n> > > > about the amount of time the whole query took\n> to\n> > > > complete before the performance degredation.\n> So in\n> > > > other words it looks as if the full select is\n> > > being\n> > > > rerun on each fetch. \n> > > > \n> > > > Now the mystery is why is this happening all\n> of\n> > > the\n> > > > sudden? I have been running w/ fetch size set\n> to\n> > > 5000\n> > > > for the last couple of weeks and it did not\n> appear\n> > > to\n> > > > be doing this (i.e. re-running the entire\n> select \n> > > > statement again). Is this what I should\n> expect\n> > > when\n> > > > using cursors? I would have thought that the\n> > > server\n> > > > should \"remember\" where it left off in the\n> query\n> > > since\n> > > > the last fetch and continue from there.\n> > > > \n> > > > Could I have inadvertently changed a parameter\n> > > > somewhere that would cause this behavior?\n> > > > \n> > > > thanks,\n> > > > \n> > > > Bill\n> > > > \n> > > >\n> __________________________________________________\n> > > > Do You Yahoo!?\n> > > > Tired of spam? Yahoo! 
Mail has the best spam\n> > > protection around \n> > > > http://mail.yahoo.com\n> > > > \n> > > > ---------------------------(end of\n> > > broadcast)---------------------------\n> > > > TIP 3: if posting/reading through Usenet,\n> please\n> > > send an appropriate\n> > > > subscribe-nomail command to\n> > > [email protected] so that your\n> > > > message can get through to the mailing\n> list\n> > > cleanly\n> > > > \n> > > > \n> > > > \n> > > > \n> > > > \n> > > > \n> > > -- \n> > > Dave Cramer\n> > > 519 939 0336\n> > > ICQ # 14675561\n> > > \n> > > \n> > \n> > \n> > __________________________________________________\n> > Do You Yahoo!?\n> > Tired of spam? Yahoo! Mail has the best spam\n> protection around \n> > http://mail.yahoo.com\n> > \n> > \n> > \n> > !DSPAM:40ef083f256273772718645!\n> > \n> > \n> -- \n> Dave Cramer\n> 519 939 0336\n> ICQ # 14675561\n> \n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - Send 10MB messages!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Fri, 9 Jul 2004 14:18:01 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Cursors performance (was: Re: [PERFORM] Terrible performance\n\tafter deleting/recreating indexes)"
},
{
"msg_contents": "Bill Chandler wrote:\n\n> Now the mystery is why is this happening all of the\n> sudden? I have been running w/ fetch size set to 5000\n> for the last couple of weeks and it did not appear to\n> be doing this (i.e. re-running the entire select \n> statement again). Is this what I should expect when\n> using cursors? I would have thought that the server\n> should \"remember\" where it left off in the query since\n> the last fetch and continue from there.\n\nI'd check heap size, GC activity (-verbose:gc), CPU use, swapping \nactivity on the *client* side. It may be that your dataset size or \nphysical memory or something similar has changed sufficiently that GC \nresulting from the data in each 5k row batch is killing you.\n\nCan you try a trivial app that runs the same query (with same fetchsize, \nautocommit, etc) via JDBC and does nothing but steps forward through the \nresultset, and see how fast it runs? Perhaps the problem is in your \nprocessing logic.\n\n-O\n",
"msg_date": "Sat, 10 Jul 2004 09:55:35 +1200",
"msg_from": "Oliver Jowett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursors performance"
},
{
"msg_contents": "Might be worth doing a little test:\n\ni) modify your code to fetch 1 row at a time\nii) set log_duration=true in your postgresql.conf (as the other posters \nhave suggested)\n\nThen compare with running the query in psql.\n\nregards\n\nMark\n \n\n\nBill Chandler wrote:\n\n>Thanks to all who have responded. I now think my\n>problem is not related to deleting/recreating indexes.\n>Somehow it is related to JDBC cursors. It appears\n>that what is happening is that since I'm using \n>a fetch size of 5000, the command:\n>\n>FETCH FORWARD 5000 FROM JDBC_CURS_1\n>\n>is being repeatedly sent to the server as I process\n>the result set from my query. Each time this command\n>is sent it it takes about 5 minutes to return which is\n>about the amount of time the whole query took to\n>complete before the performance degredation. So in\n>other words it looks as if the full select is being\n>rerun on each fetch. \n>\n>Now the mystery is why is this happening all of the\n>sudden? I have been running w/ fetch size set to 5000\n>for the last couple of weeks and it did not appear to\n>be doing this (i.e. re-running the entire select \n>statement again). Is this what I should expect when\n>using cursors? I would have thought that the server\n>should \"remember\" where it left off in the query since\n>the last fetch and continue from there.\n>\n>Could I have inadvertently changed a parameter\n>somewhere that would cause this behavior?\n>\n>thanks,\n>\n>Bill\n>\n>__________________________________________________\n>Do You Yahoo!?\n>Tired of spam? Yahoo! Mail has the best spam protection around \n>http://mail.yahoo.com \n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n> \n>\n",
"msg_date": "Sat, 10 Jul 2004 15:06:20 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Cursors performance"
},
{
"msg_contents": "Thanks,\n\nWill try this test (I'm assuming you mean to say to\nset fetch size of 1 and rerun on both JDBC and\npsql).\n\nBTW, here is another clue: I only get the JDBC\nperformance degradation when I include the \"real_name\nlike 'NEPOOL%REAL%'\" clause. I've tried re-ordering\ntoo: i.e. putting this clause first in the statement,\nlast in the statement, etc. Doesn't seem to make any\ndifference.\n\nreal_name is a varchar(64). There is a unique index\non it.\n\nthanks,\n\nBill\n\n--- Mark Kirkwood <[email protected]> wrote:\n> Might be worth doing a little test:\n> \n> i) modify your code to fetch 1 row at a time\n> ii) set log_duration=true in your postgresql.conf\n> (as the other posters \n> have suggested)\n> \n> Then compare with running the query in psql.\n> \n> regards\n> \n> Mark\n> \n> \n> \n> Bill Chandler wrote:\n> \n> >Thanks to all who have responded. I now think my\n> >problem is not related to deleting/recreating\n> indexes.\n> >Somehow it is related to JDBC cursors. It appears\n> >that what is happening is that since I'm using \n> >a fetch size of 5000, the command:\n> >\n> >FETCH FORWARD 5000 FROM JDBC_CURS_1\n> >\n> >is being repeatedly sent to the server as I process\n> >the result set from my query. Each time this\n> command\n> >is sent it it takes about 5 minutes to return which\n> is\n> >about the amount of time the whole query took to\n> >complete before the performance degredation. So in\n> >other words it looks as if the full select is being\n> >rerun on each fetch. \n> >\n> >Now the mystery is why is this happening all of the\n> >sudden? I have been running w/ fetch size set to\n> 5000\n> >for the last couple of weeks and it did not appear\n> to\n> >be doing this (i.e. re-running the entire select \n> >statement again). Is this what I should expect\n> when\n> >using cursors? I would have thought that the\n> server\n> >should \"remember\" where it left off in the query\n> since\n> >the last fetch and continue from there.\n> >\n> >Could I have inadvertently changed a parameter\n> >somewhere that would cause this behavior?\n> >\n> >thanks,\n> >\n> >Bill\n> >\n> >__________________________________________________\n> >Do You Yahoo!?\n> >Tired of spam? Yahoo! Mail has the best spam\n> protection around \n> >http://mail.yahoo.com \n> >\n> >---------------------------(end of\n> broadcast)---------------------------\n> >TIP 8: explain analyze is your friend\n> > \n> >\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail - 50x more storage than other providers!\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Mon, 12 Jul 2004 11:07:29 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Cursors performance"
},
{
"msg_contents": "Bill,\n\nI suspect that this is an artifact of using server side prepared \nstatements. When testing this via psql you will be forming sql like:\n\nselect ...\nfrom ...\nwhere ...\nand real_name like 'NEPOOL%REAL%'\n...\n\nbut the JDBC driver with server side prepared statements is doing:\n\nselect ...\nfrom ...\nwhere ...\nand real_name like ?\n...\n\nSo when the statement is prepared, since it doesn't know what values you \nare going to use in the bind variable, it will generally take a more \nconcervative execution plan than if it knows what the bind variable is.\n\nSo I suspect the performance difference is just in the different \nexecution plans for the two different forms of the sql statement.\n\nthanks,\n--Barry\n\n\nBill Chandler wrote:\n> Thanks,\n> \n> Will try this test (I'm assuming you mean to say to\n> set fetch size of 1 and rerun on both JDBC and\n> psql).\n> \n> BTW, here is another clue: I only get the JDBC\n> performance degradation when I include the \"real_name\n> like 'NEPOOL%REAL%'\" clause. I've tried re-ordering\n> too: i.e. putting this clause first in the statement,\n> last in the statement, etc. Doesn't seem to make any\n> difference.\n> \n> real_name is a varchar(64). There is a unique index\n> on it.\n> \n> thanks,\n> \n> Bill\n> \n> --- Mark Kirkwood <[email protected]> wrote:\n> \n>>Might be worth doing a little test:\n>>\n>>i) modify your code to fetch 1 row at a time\n>>ii) set log_duration=true in your postgresql.conf\n>>(as the other posters \n>>have suggested)\n>>\n>>Then compare with running the query in psql.\n>>\n>>regards\n>>\n>>Mark\n>> \n>>\n>>\n>>Bill Chandler wrote:\n>>\n>>\n>>>Thanks to all who have responded. I now think my\n>>>problem is not related to deleting/recreating\n>>\n>>indexes.\n>>\n>>>Somehow it is related to JDBC cursors. It appears\n>>>that what is happening is that since I'm using \n>>>a fetch size of 5000, the command:\n>>>\n>>>FETCH FORWARD 5000 FROM JDBC_CURS_1\n>>>\n>>>is being repeatedly sent to the server as I process\n>>>the result set from my query. Each time this\n>>\n>>command\n>>\n>>>is sent it it takes about 5 minutes to return which\n>>\n>>is\n>>\n>>>about the amount of time the whole query took to\n>>>complete before the performance degredation. So in\n>>>other words it looks as if the full select is being\n>>>rerun on each fetch. \n>>>\n>>>Now the mystery is why is this happening all of the\n>>>sudden? I have been running w/ fetch size set to\n>>\n>>5000\n>>\n>>>for the last couple of weeks and it did not appear\n>>\n>>to\n>>\n>>>be doing this (i.e. re-running the entire select \n>>>statement again). Is this what I should expect\n>>\n>>when\n>>\n>>>using cursors? I would have thought that the\n>>\n>>server\n>>\n>>>should \"remember\" where it left off in the query\n>>\n>>since\n>>\n>>>the last fetch and continue from there.\n>>>\n>>>Could I have inadvertently changed a parameter\n>>>somewhere that would cause this behavior?\n>>>\n>>>thanks,\n>>>\n>>>Bill\n>>>\n>>>__________________________________________________\n>>>Do You Yahoo!?\n>>>Tired of spam? Yahoo! 
Mail has the best spam\n>>\n>>protection around \n>>\n>>>http://mail.yahoo.com \n>>>\n>>>---------------------------(end of\n>>\n>>broadcast)---------------------------\n>>\n>>>TIP 8: explain analyze is your friend\n>>> \n>>>\n>>\n>>---------------------------(end of\n>>broadcast)---------------------------\n>>TIP 5: Have you checked our extensive FAQ?\n>>\n>> \n>>http://www.postgresql.org/docs/faqs/FAQ.html\n>>\n> \n> \n> \n> \n> \t\t\n> __________________________________\n> Do you Yahoo!?\n> Yahoo! Mail - 50x more storage than other providers!\n> http://promotions.yahoo.com/new_mail\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n",
"msg_date": "Mon, 12 Jul 2004 14:05:12 -0700",
"msg_from": "Barry Lind <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Cursors performance"
},
{
"msg_contents": "\n\nOn Mon, 12 Jul 2004, Barry Lind wrote:\n\n> Bill,\n> \n> I suspect that this is an artifact of using server side prepared \n> statements. When testing this via psql you will be forming sql like:\n\nI don't think so. The 7.4 driver can use either cursors or server\nprepared statements, not both. He's definitely using cursors, so I server \nprepared statements don't come into the mix here.\n\nKris Jurka\n\n",
"msg_date": "Mon, 12 Jul 2004 16:11:53 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Cursors performance"
},
{
"msg_contents": "All,\n\nLooks like I may have beaten this one to death. May\nhave to chalk it up to a limitation for now due to\ndeadlines and revisit it later. \n\nOne final clue before I go: if I change my wildcard to\n'NEPOOL%' from 'NEPOOL%REAL%' my query completes much\nfaster. Of course this makes sense since it's much\neasier to search a string for a prefix than it is to\ndo a complex regular expression match. I just didn't\nexpect it to be orders of magnitude difference.\n\nThe table containing the string being searched is only\n7500 rows but I am joining it with a table with\n2.5 million rows. So maybe there's something I can do\nto do the wildcard search on the smaller table first\nthen do the join. \n\nOk, thanks again to all who responded. Really\nappreciate the tips on logging statements and\nduration, etc.\n\nregards,\n\nBill\n--- Kris Jurka <[email protected]> wrote:\n> \n> \n> On Mon, 12 Jul 2004, Barry Lind wrote:\n> \n> > Bill,\n> > \n> > I suspect that this is an artifact of using server\n> side prepared \n> > statements. When testing this via psql you will\n> be forming sql like:\n> \n> I don't think so. The 7.4 driver can use either\n> cursors or server\n> prepared statements, not both. He's definitely\n> using cursors, so I server \n> prepared statements don't come into the mix here.\n> \n> Kris Jurka\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\n> [email protected]\n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - Send 10MB messages!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Tue, 13 Jul 2004 09:13:46 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Cursors performance"
},
{
"msg_contents": "Bill Chandler <[email protected]> writes:\n> One final clue before I go: if I change my wildcard to\n> 'NEPOOL%' from 'NEPOOL%REAL%' my query completes much\n> faster.\n\nCould we see the exact queries and EXPLAIN ANALYZE output for both\ncases? I'm wondering if the plan changes. I think that the planner\nwill believe that the latter pattern is significantly more selective\n(how much more selective depends on exactly which PG version you're\nusing); if this results in a bad row-count estimate then a bad plan\ncould get picked.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Jul 2004 13:08:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Cursors performance "
}
] |
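One concrete way to try the idea raised at the end of this thread, doing the wildcard search on the small table first and then joining, is to phrase the small-table filter as a subselect in FROM and compare EXPLAIN ANALYZE for both forms. This is only a sketch against the table and column names quoted above; it returns the same rows, it just makes the intended join order explicit:

    SELECT d.lds, d.tsds, d.value, d.correction, d.delta_lds_tsds,
           c.real_name, c.freq, c.type
    FROM (SELECT obj_id, real_name, freq, type
            FROM iso_nep_control
           WHERE real_name LIKE 'NEPOOL%REAL%') AS c
    JOIN iso_nep_data_update_events AS d ON d.obj_id = c.obj_id
    WHERE d.lds > 1088554754000
    ORDER BY d.lds;

With only about 3800 control rows surviving the LIKE (per the posted plan), a nested loop driving index scans on iso_nep_data_update_events.obj_id may beat the large sort plus merge join shown earlier; EXPLAIN ANALYZE on both versions is the way to confirm.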
[
{
"msg_contents": "Hello,\n\nI have been a happy postgresql developer for a few years now. Recently\nI have discovered a very strange phenomenon in regards to inserting\nrows.\n\nMy app inserts millions of records a day, averaging about 30 rows a\nsecond. I use autovac to make sure my stats and indexes are up to date.\nRarely are rows ever deleted. Each day a brand new set of tables is\ncreated and eventually the old tables are dropped. The app calls\nfunctions which based on some simple logic perform the correct inserts.\n\n\nThe problem I am seeing is that after a particular database gets kinda\nold, say a couple of months, performance begins to degrade. Even after\ncreating brand new tables my insert speed is slow in comparison ( by a\nmagnitude of 5 or more ) with a brand new schema which has the exact\nsame tables. I am running on an IBM 360 dual processor Linux server\nwith a 100 gig raid array spanning 5 scsi disks. The machine has 1 gig\nof ram of which 500 meg is dedicated to Postgresql.\n\nJust to be clear, the question I have is why would a brand new db schema\nallow inserts faster than an older schema with brand new tables? Since\nthe tables are empty to start, vacuuming should not be an issue at all.\nEach schema is identical in every way except the db name and creation\ndate.\n\nAny ideas are appreciated.\n\nThanks,\n\nT.R. Missner\n",
"msg_date": "Wed, 7 Jul 2004 11:24:11 -0600",
"msg_from": "\"Missner, T. R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "inserting into brand new database faster than old database"
},
{
"msg_contents": "I don't think I have enough detail about your app. Couple of questions, \nare there any tables that recieve a lot of inserts / updates / deletes \nthat are not deleted and recreated often? If so, one possibility is \nthat you don't have a large enough FSM settings and your table is \nactually growing despite using autovac. Does that sounds possbile to you?\n\nMissner, T. R. wrote:\n\n> Hello,\n> \n> I have been a happy postgresql developer for a few years now. Recently\n> I have discovered a very strange phenomenon in regards to inserting\n> rows.\n> \n> My app inserts millions of records a day, averaging about 30 rows a\n> second. I use autovac to make sure my stats and indexes are up to date.\n> Rarely are rows ever deleted. Each day a brand new set of tables is\n> created and eventually the old tables are dropped. The app calls\n> functions which based on some simple logic perform the correct inserts.\n> \n> \n> The problem I am seeing is that after a particular database gets kinda\n> old, say a couple of months, performance begins to degrade. Even after\n> creating brand new tables my insert speed is slow in comparison ( by a\n> magnitude of 5 or more ) with a brand new schema which has the exact\n> same tables. I am running on an IBM 360 dual processor Linux server\n> with a 100 gig raid array spanning 5 scsi disks. The machine has 1 gig\n> of ram of which 500 meg is dedicated to Postgresql.\n> \n> Just to be clear, the question I have is why would a brand new db schema\n> allow inserts faster than an older schema with brand new tables? Since\n> the tables are empty to start, vacuuming should not be an issue at all.\n> Each schema is identical in every way except the db name and creation\n> date.\n> \n> Any ideas are appreciated.\n> \n> Thanks,\n> \n> T.R. Missner\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n",
"msg_date": "Wed, 07 Jul 2004 15:17:10 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inserting into brand new database faster than old database"
},
{
"msg_contents": "Missner, T. R. wrote:\n\n> Hello,\n> \n> I have been a happy postgresql developer for a few years now. Recently\n> I have discovered a very strange phenomenon in regards to inserting\n> rows.\n> \n> My app inserts millions of records a day, averaging about 30 rows a\n> second. I use autovac to make sure my stats and indexes are up to date.\n> Rarely are rows ever deleted. Each day a brand new set of tables is\n> created and eventually the old tables are dropped. The app calls\n> functions which based on some simple logic perform the correct inserts.\n\nHave you profiled where the time goes in a brand new schema and a degraded \ndatabase? Is it IO? Is it CPU? Is the function making decision becoming bottleneck?\n\n> The problem I am seeing is that after a particular database gets kinda\n> old, say a couple of months, performance begins to degrade. Even after\n> creating brand new tables my insert speed is slow in comparison ( by a\n> magnitude of 5 or more ) with a brand new schema which has the exact\n> same tables. I am running on an IBM 360 dual processor Linux server\n> with a 100 gig raid array spanning 5 scsi disks. The machine has 1 gig\n> of ram of which 500 meg is dedicated to Postgresql.\n> \n> Just to be clear, the question I have is why would a brand new db schema\n> allow inserts faster than an older schema with brand new tables? Since\n> the tables are empty to start, vacuuming should not be an issue at all.\n> Each schema is identical in every way except the db name and creation\n> date.\n\nYou can do few things.\n\n- Get explain analyze. See the difference between actual and projected timings. \nThe difference is the hint about where planner is going wrong.\n\n- Is IO your bottleneck? Are vacuum taking longer and longer? If yes then you \ncould try the vacuum delay patch. If your IO is saturated for any reason, \neverything is going to crawl\n\n- Are your indexes bloat free? If you are using pre7.x,vacuum does not clean up \nindexes. You need to reindex.\n\n- Have you vacuumed the complete database? If the catalogs collect dead space it \ncould cause degradation too but that is just a guess.\n\nBasically monitor slow inserts and try to find out where time is spent.\n\nHTH\n\n Shridhar\n",
"msg_date": "Thu, 08 Jul 2004 14:32:43 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inserting into brand new database faster than old database"
},
{
"msg_contents": "\"Missner, T. R.\" <[email protected]> writes:\n> ... Each day a brand new set of tables is\n> created and eventually the old tables are dropped.\n\nYou did not say which PG version you are using (tut tut) but my first\nthought is that it's a pre-7.4 release and your problems trace to bloat\nin the system-catalog indexes. The indexes on pg_class and pg_attribute\nwould be quite likely to suffer serious bloat if you continually create\nand drop tables, because the range of useful table OIDs will be\ncontinually shifting. We didn't fix this until 7.4.\n\nIf you are seeing this in 7.4.* then more investigation is needed...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Jul 2004 22:52:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: inserting into brand new database faster than old database "
}
] |
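A quick way to look for the system-catalog index bloat Tom describes (most likely on pre-7.4 servers that create and drop tables all day) is to watch the page counts of pg_class and pg_attribute and of their indexes over time. The query below only reads the catalogs; what counts as bloated is whatever looks out of proportion to reltuples for your installation:

    -- relpages is in 8 kB blocks; the catalog indexes share the name prefix
    SELECT relname, relkind, relpages, reltuples
    FROM pg_class
    WHERE relname LIKE 'pg_class%' OR relname LIKE 'pg_attribute%'
    ORDER BY relpages DESC;

If the index pages keep growing while reltuples stays roughly flat, a REINDEX of those catalogs (older releases may only allow this from a standalone backend) or the upgrade to 7.4 mentioned above is the remedy.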
[
{
"msg_contents": "I do have one table that acts as a lookup table and grows in size as the\napp runs, however in the tests I have been doing I have dropped and\nrecreated all tables including the lookup table.\n\nI keep wondering how disk is allocated to a particular DB. Also is there\nany way I could tell whether the writes to disk are the bottleneck?\n\n\n\nT.R. Missner\nLevel(3) Communications\nSSID tools\nSenior Software Engineer\n\n\n-----Original Message-----\nFrom: Matthew T. O'Connor [mailto:[email protected]] \nSent: Wednesday, July 07, 2004 1:17 PM\nTo: Missner, T. R.\nCc: [email protected]\nSubject: Re: [PERFORM] inserting into brand new database faster than old\ndatabase\n\nI don't think I have enough detail about your app. Couple of questions,\n\nare there any tables that recieve a lot of inserts / updates / deletes \nthat are not deleted and recreated often? If so, one possibility is \nthat you don't have a large enough FSM settings and your table is \nactually growing despite using autovac. Does that sounds possbile to\nyou?\n\nMissner, T. R. wrote:\n\n> Hello,\n> \n> I have been a happy postgresql developer for a few years now.\nRecently\n> I have discovered a very strange phenomenon in regards to inserting\n> rows.\n> \n> My app inserts millions of records a day, averaging about 30 rows a\n> second. I use autovac to make sure my stats and indexes are up to\ndate.\n> Rarely are rows ever deleted. Each day a brand new set of tables is\n> created and eventually the old tables are dropped. The app calls\n> functions which based on some simple logic perform the correct\ninserts.\n> \n> \n> The problem I am seeing is that after a particular database gets kinda\n> old, say a couple of months, performance begins to degrade. Even\nafter\n> creating brand new tables my insert speed is slow in comparison ( by a\n> magnitude of 5 or more ) with a brand new schema which has the exact\n> same tables. I am running on an IBM 360 dual processor Linux server\n> with a 100 gig raid array spanning 5 scsi disks. The machine has 1\ngig\n> of ram of which 500 meg is dedicated to Postgresql.\n> \n> Just to be clear, the question I have is why would a brand new db\nschema\n> allow inserts faster than an older schema with brand new tables?\nSince\n> the tables are empty to start, vacuuming should not be an issue at\nall.\n> Each schema is identical in every way except the db name and creation\n> date.\n> \n> Any ideas are appreciated.\n> \n> Thanks,\n> \n> T.R. Missner\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if\nyour\n> joining column's datatypes do not match\n> \n",
"msg_date": "Wed, 7 Jul 2004 13:28:15 -0600",
"msg_from": "\"Missner, T. R.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: inserting into brand new database faster than old database"
}
] |
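For the two follow-up questions above, whether the lookup table is outgrowing the free space map and whether disk writes are the bottleneck, here are two quick checks that should work on 7.4 (the configuration values shown are examples, not recommendations):

    -- a database-wide VACUUM VERBOSE ends with a summary of how many FSM
    -- pages are needed versus what max_fsm_pages currently allows
    VACUUM VERBOSE;

    -- watch whether the lookup table's on-disk size keeps climbing between runs
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY relpages DESC
    LIMIT 10;

    -- postgresql.conf (example values only); raise these if the VACUUM VERBOSE
    -- summary reports more pages needed than allocated
    -- max_fsm_pages = 200000
    -- max_fsm_relations = 1000

For the write question, watching iostat or vmstat on the Linux box while the insert load runs (busy disks with mostly idle CPUs point at I/O) is the usual first check.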
[
{
"msg_contents": "\nCan someone explain what I'm missing here? This query does what I\nexpect--it uses the \"foo\" index on the openeddatetime, callstatus,\ncalltype, callkey fields:\n\nelon2=# explain analyse select * from call where aspid='123C' and\nOpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n23:59:59.999' order by openeddatetime desc, callstatus desc, calltype\ndesc, callkey desc limit 26;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-----------------------------\n Limit (cost=0.00..103.76 rows=26 width=297) (actual time=0.07..0.58\nrows=26 loops=1)\n -> Index Scan Backward using foo on call (cost=0.00..1882805.77\nrows=471781 width=297) (actual time=0.06..0.54 rows=27 loops=1)\n Index Cond: ((openeddatetime >= '2000-01-01\n00:00:00-07'::timestamp with time zone) AND (openeddatetime <=\n'2004-06-24 23:59:59.999-07'::timestamp with time zone))\n Filter: (aspid = '123C'::bpchar)\n Total runtime: 0.66 msec\n(5 rows)\n\n\nHowever, this query performs a sequence scan on the table, ignoring the\ncall_idx13 index (the only difference is the addition of the aspid field\nin the order by clause):\n\nelon2=# explain analyse select * from call where aspid='123C' and\nOpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n23:59:59.999' order by aspid, openeddatetime desc, callstatus desc,\ncalltype desc, callkey desc limit 26;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------------\n Limit (cost=349379.41..349379.48 rows=26 width=297) (actual\ntime=32943.52..32943.61 rows=26 loops=1)\n -> Sort (cost=349379.41..350558.87 rows=471781 width=297) (actual\ntime=32943.52..32943.56 rows=27 loops=1)\n Sort Key: aspid, openeddatetime, callstatus, calltype, callkey\n -> Seq Scan on call (cost=0.00..31019.36 rows=471781\nwidth=297) (actual time=1.81..7318.13 rows=461973 loops=1)\n Filter: ((aspid = '123C'::bpchar) AND (openeddatetime >=\n'2000-01-01 00:00:00-07'::timestamp with time zone) AND (openeddatetime\n<= '2004-06-24 23:59:59.999-07'::timestamp with time zone))\n Total runtime: 39353.86 msec\n(6 rows)\n\n\nHere's the structure of the table in question:\n\n\n Table \"public.call\"\n Column | Type | Modifiers \n------------------+--------------------------+-----------\n aspid | character(4) | \n lastmodifiedtime | timestamp with time zone | \n moduser | character(13) | \n callkey | character(13) | \n calltype | text | \n callqueueid | text | \n openeddatetime | timestamp with time zone | \n assigneddatetime | timestamp with time zone | \n closeddatetime | timestamp with time zone | \n reopeneddatetime | timestamp with time zone | \n openedby | text | \n callstatus | character(1) | \n callpriority | text | \n callreasontext | text | \n keyword1 | text | \n keyword2 | text | \n callername | text | \n custfirstname | text | \n custlastname | text | \n custssntin | character(9) |\ncustssnseq | text | \n custdbccode | character(9) | \n custlongname | text | \n custtypecode | character(2) | \n custphone | text | \n custid | character(9) | \n assigneduserid | character varying(30) | \n historyitemcount | integer | \n callertype | text | \n callerphoneext | text | \n followupdate | text | \n hpjobnumber | character(11) | \nIndexes: call_idx1 unique btree (aspid, callkey),\n call_aspid btree 
(aspid),\n call_aspid_opendedatetime btree (aspid, openeddatetime),\n call_idx10 btree (aspid, keyword1, openeddatetime, callstatus,\ncalltype\n, custtypecode, custid, callkey),\n call_idx11 btree (aspid, keyword2, openeddatetime, callstatus,\ncalltype\n, custtypecode, custid, callkey),\n call_idx12 btree (aspid, custtypecode, custid, openeddatetime,\ncallstat\nus, calltype, callkey),\n call_idx13 btree (aspid, openeddatetime, callstatus, calltype,\ncallkey),\n call_idx14 btree (aspid, callqueueid, callstatus, callkey),\n call_idx2 btree (aspid, callqueueid, openeddatetime,\ncusttypecode, call\nstatus, callkey),\n call_idx3 btree (aspid, assigneduserid, openeddatetime,\ncusttypecode, c\nallstatus, callkey),\n call_idx4 btree (aspid, custid, custtypecode, callkey,\ncallstatus),\n call_idx7 btree (aspid, calltype, custtypecode, custid,\ncallstatus, cal\nlkey),\n call_idx9 btree (aspid, assigneduserid, callstatus,\nfollowupdate),\n foo btree (openeddatetime, callstatus, calltype, callkey)\n\n\n\n\nTIA,\n\n-Joel\n",
"msg_date": "Wed, 7 Jul 2004 14:27:27 -0700",
"msg_from": "Joel McGraw <[email protected]>",
"msg_from_op": true,
"msg_subject": "query plan wierdness?"
},
{
"msg_contents": "The limit is tricking you.\nI guess a sequential scan is cheaper than an index scan with the limit 26 found there.\n\nI am wrong?\n\nGreets\n\n-- \n-------------------------------------------\nGuido Barosio\nBuenos Aires, Argentina\n-------------------------------------------\n\n",
"msg_date": "Wed, 7 Jul 2004 18:45:42 -0300",
"msg_from": "Guido Barosio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan wierdness?"
},
{
"msg_contents": "On Wed, 7 Jul 2004, Joel McGraw wrote:\n\n> However, this query performs a sequence scan on the table, ignoring the\n> call_idx13 index (the only difference is the addition of the aspid field\n> in the order by clause):\n>\n> elon2=# explain analyse select * from call where aspid='123C' and\n> OpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n> 23:59:59.999' order by aspid, openeddatetime desc, callstatus desc,\n> calltype desc, callkey desc limit 26;\n>\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> ------------------------------------------------------------\n> Limit (cost=349379.41..349379.48 rows=26 width=297) (actual\n> time=32943.52..32943.61 rows=26 loops=1)\n> -> Sort (cost=349379.41..350558.87 rows=471781 width=297) (actual\n> time=32943.52..32943.56 rows=27 loops=1)\n> Sort Key: aspid, openeddatetime, callstatus, calltype, callkey\n> -> Seq Scan on call (cost=0.00..31019.36 rows=471781\n> width=297) (actual time=1.81..7318.13 rows=461973 loops=1)\n> Filter: ((aspid = '123C'::bpchar) AND (openeddatetime >=\n> '2000-01-01 00:00:00-07'::timestamp with time zone) AND (openeddatetime\n> <= '2004-06-24 23:59:59.999-07'::timestamp with time zone))\n> Total runtime: 39353.86 msec\n> (6 rows)\n\n\nHmm, what does it say after a set enable_seqscan=off?\n\nAlso, what does it say if you use aspid desc rather than just aspid in the\norder by?\n",
"msg_date": "Wed, 7 Jul 2004 15:50:31 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan wierdness?"
},
{
"msg_contents": "\n> However, this query performs a sequence scan on the table, ignoring the\n> call_idx13 index (the only difference is the addition of the aspid field\n> in the order by clause):\n\nYou do not have an index which matches the ORDER BY, so PostgreSQL\ncannot simply scan the index for the data you want. Thus is needs to\nfind all matching rows, order them, etc.\n\n> 23:59:59.999' order by aspid, openeddatetime desc, callstatus desc,\n> calltype desc, callkey desc limit 26;\n\naspid ASC, openeddatetime DESC, callstatus DESC, calltype DESC\n\n> call_idx13 btree (aspid, openeddatetime, callstatus, calltype,\n> callkey),\n\nThis index is: aspid ASC, openeddatetime ASC, callstatus ASC, calltype\nASC, callkey ASC\n\nA reverse scan, would of course be DESC, DESC, DESC, DESC, DESC --\nneither of which matches your requested order by, thus cannot help the\nreduce the lines looked at to 26.\n\nThis leaves your WHERE clause to restrict the dataset and it doesn't do\na very good job of it. There are more than 450000 rows matching the\nwhere clause, which means the sequential scan was probably the right\nchoice (unless you have over 10 million entries in the table).\n\n\nSince your WHERE clause contains a single aspid, an improvement to the\nPostgreSQL optimizer may be to ignore that field in the ORDER BY as\norder is no longer important since there is only one possible value. If\nit did ignore aspid, it would use a plan similar to the first one you\nprovided.\n\nYou can accomplish the same thing by leaving out aspid ASC OR by setting\nit to aspid DESC in the ORDER BY. Leaving it out entirely will be\nslightly faster, but DESC will cause PostgreSQL to use index\n\"call_idx13\".\n\n\n",
"msg_date": "Thu, 08 Jul 2004 12:50:21 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan wierdness?"
},
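A concrete sketch of the two rewrites Rod describes, written against the call table and call_idx13 index from this thread (untested against the poster's data, so treat it as an illustration rather than the thread's verified fix):

    -- Every ORDER BY column descending: a backward scan of call_idx13
    -- (aspid, openeddatetime, callstatus, calltype, callkey) can then
    -- satisfy the LIMIT without sorting the whole result.
    SELECT *
      FROM call
     WHERE aspid = '123C'
       AND openeddatetime BETWEEN '2000-01-01 00:00:00.0'
                              AND '2004-06-24 23:59:59.999'
     ORDER BY aspid DESC, openeddatetime DESC, callstatus DESC,
              calltype DESC, callkey DESC
     LIMIT 26;

    -- Or drop aspid from the ORDER BY entirely, since the WHERE clause
    -- pins it to a single value; Rod expects this to be slightly faster,
    -- though later messages show the planner may still prefer a
    -- sequential scan when no LIMIT is present.
    SELECT *
      FROM call
     WHERE aspid = '123C'
       AND openeddatetime BETWEEN '2000-01-01 00:00:00.0'
                              AND '2004-06-24 23:59:59.999'
     ORDER BY openeddatetime DESC, callstatus DESC, calltype DESC,
              callkey DESC
     LIMIT 26;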
{
"msg_contents": "Hi,\n\nI tested vacuum_mem setting under a \n4CPU and 4G RAM machine. I am the only person \non that machine.\n\nThe table:\n tablename | size_kb | reltuples\n---------------------------+-------------------------\n big_t | 2048392 | 7.51515e+06\n\nCase 1:\n1. vacuum full big_t;\n2. begin;\n update big_t set email = lpad('a', 255, 'b');\n rollback;\n3. set vacuum_mem=655360; -- 640M\n4. vacuum big_t;\nIt takes 1415,375 ms\nAlso from top, the max SIZE is 615M while \nSHARE is always 566M\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM\n TIME COMMAND\n5914 postgres 16 0 615M 615M 566M D 7.5 15.8\n 21:21 postgres: postgres mydb xxx.xxx.xxx.xxx:34361\nVACUUM\n\nCase 2:\n1. vacuum full big_t;\n2. begin;\n update big_t set email = lpad('a', 255, 'b');\n rollback;\n3. set vacuum_mem=65536; -- 64M\n4. vacuum big_t;\nIt takes 1297,798 ms\nAlso from top, the max SIZE is 615M while \nSHARE is always 566M\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM\n TIME COMMAND\n 3613 postgres 15 0 615M 615M 566M D 17.1 15.8\n 9:04 postgres: postgres mydb xxx.xxx.xxx.xxx:34365\nVACUUM\n\nIt seems vacuum_mem does not have performance \neffect at all.\n\nIn reality, we vaccum nightly and I want to find out \nwhich vacuum_mem value is the \nbest to short vacuum time.\n\nAny thoughts?\n\nThanks,\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - Send 10MB messages!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Thu, 8 Jul 2004 11:03:43 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "vacuum_mem "
},
{
"msg_contents": "> It seems vacuum_mem does not have performance \n> effect at all.\n\nWrong conclusion. It implies that your test case takes less than 64M of\nmemory to track your removed tuples. I think it takes 8 bytes to track a\ntuple for vacuuming an index, which means it should be able to track\n800000 deletions. Since you're demonstration had 750000 for removal,\nit's under the limit.\n\nTry your test again with 32MB; it should make a single sequential pass\non the table, and 2 passes on each index for that table.\n\nEither that, or do a few more aborted updates.\n\n\n",
"msg_date": "Thu, 08 Jul 2004 14:25:30 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum_mem"
}
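A minimal way to watch the effect Rod describes, using the big_t table from the earlier message; the vacuum_mem values are only examples, and the dead tuples have to be regenerated (with the aborted UPDATE from the original test) before each run:

    -- (re-run the aborted UPDATE to create dead tuples before each VACUUM)
    -- Small vacuum_mem: the dead-tuple list overflows, so VACUUM VERBOSE
    -- reports more than one pass over each index.
    SET vacuum_mem = 32768;      -- 32MB (vacuum_mem is in kilobytes)
    VACUUM VERBOSE big_t;

    -- Large vacuum_mem: everything is tracked at once, one pass per index.
    SET vacuum_mem = 655360;     -- 640MB, as in the original test
    VACUUM VERBOSE big_t;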
] |
[
{
"msg_contents": "Well, you're kind of right. I removed the limit, and now _both_\nversions of the query perform a sequence scan!\n\nOh, I forgot to include in my original post: this is PostgreSQL 7.3.4\n(on x86 Linux and sparc Solaris 6)\n\n-Joel\n\n-----Original Message-----\nFrom: Guido Barosio [mailto:[email protected]] \nSent: Wednesday, July 07, 2004 2:46 PM\nTo: Joel McGraw\nCc: [email protected]\nSubject: Re: [PERFORM] query plan wierdness?\n\nThe limit is tricking you.\nI guess a sequential scan is cheaper than an index scan with the limit\n26 found there.\n\nI am wrong?\n\nGreets\n\n-- \n-------------------------------------------\nGuido Barosio\nBuenos Aires, Argentina\n-------------------------------------------\n\n-- CONFIDENTIALITY NOTICE --\n\nThis message is intended for the sole use of the individual and entity to whom it is addressed, and may contain information that is privileged, confidential and exempt from disclosure under applicable law. If you are not the intended addressee, nor authorized to receive for the intended addressee, you are hereby notified that you may not use, copy, disclose or distribute to anyone the message or any information contained in the message. If you have received this message in error, please immediately advise the sender by reply email, and delete the message. Thank you.\n",
"msg_date": "Wed, 7 Jul 2004 15:12:29 -0700",
"msg_from": "Joel McGraw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query plan wierdness?"
}
] |
[
{
"msg_contents": "[Apologies if this reaches the list twice -- I sent a copy before\n subscribing, but it seems to be stuck waiting for listmaster forever, so I\n subscribed and sent it again.]\n\nHi,\n\nI'm trying to find out why one of my queries is so slow -- I'm primarily\nusing PostgreSQL 7.2 (Debian stable), but I don't really get much better\nperformance with 7.4 (Debian unstable). My prototype table looks like this:\n\n CREATE TABLE opinions (\n prodid INTEGER NOT NULL,\n uid INTEGER NOT NULL,\n opinion INTEGER NOT NULL,\n PRIMARY KEY ( prodid, uid )\n );\n\nIn addition, there are separate indexes on prodid and uid. I've run VACUUM\nANALYZE before all queries, and they are repeatable. (If anybody needs the\ndata, that could be arranged -- it's not secret or anything :-) ) My query\nlooks like this:\n\nEXPLAIN ANALYZE\n SELECT o3.prodid, SUM(o3.opinion*o12.correlation) AS total_correlation FROM opinions o3\n RIGHT JOIN (\n SELECT o2.uid, SUM(o1.opinion*o2.opinion)/SQRT(count(*)+0.0) AS correlation\n FROM opinions o1 LEFT JOIN opinions o2 ON o1.prodid=o2.prodid\n WHERE o1.uid=1355\n GROUP BY o2.uid\n ) o12 ON o3.uid=o12.uid\n LEFT JOIN (\n SELECT o4.prodid, COUNT(*) as num_my_comments\n FROM opinions o4\n WHERE o4.uid=1355\n GROUP BY o4.prodid\n ) nmc ON o3.prodid=nmc.prodid\n WHERE nmc.num_my_comments IS NULL AND o3.opinion<>0 AND o12.correlation<>0\n GROUP BY o3.prodid\n ORDER BY total_correlation desc;\n\nAnd produces the query plan at\n\n http://www.samfundet.no/~sesse/queryplan.txt\n\n(The lines were a bit too long to include in an e-mail :-) ) Note that the\n\"o3.opinion<>0 AND o12.correleation<>0\" lines are an optimization; I can run\nthe query fine without them and it will produce the same results, but it\ngoes slower both in 7.2 and 7.4.\n\nThere are a few oddities here:\n\n- The \"subquery scan o12\" phase outputs 1186 rows, yet 83792 are sorted. Where\n do the other ~82000 rows come from? And why would it take ~100ms to sort the\n rows at all? (In earlier tests, this was _one full second_ but somehow that\n seems to have improved, yet without really improving the overall query time.\n shared_buffers is 4096 and sort_mem is 16384, so it should really fit into\n RAM.)\n- Why does it use uid_index for an index scan on the table, when it obviously\n has no filter on it (since it returns all the rows)? Furthermore, why would\n this take half a second? (The machine is a 950MHz machine with SCSI disks.)\n- Also, the outer sort (the sorting of the 58792 rows from the merge join)\n is slow. :-)\n\n7.4 isn't really much better:\n\n http://www.samfundet.no/~sesse/queryplan74.txt\n\nNote that this is run on a machine with almost twice the speed (in terms of\nCPU speed, at least). The same oddities are mostly present (such as o12\nreturning 1186 rows, but 58788 rows are sorted), so I really don't understand\nwhat's going on here. Any ideas on how to improve this?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 8 Jul 2004 12:19:13 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Odd sorting behaviour"
},
{
"msg_contents": "Hi,\n\nI'm really stuck and I wonder if any of you could help.\n\nI have an application which will be sitting on a quite large database\n(roughly 8-16GB). The nature of the application is such that, on a second by\nsecond basis, the working set of the database is likely to be a substantial\nportion (e.g. between 50 and 70%) of the data - Just imagine an almost\nstochastic sampling of the data in each table, and you'll get an idea.\nPotentially quite smelly.\n\nTo start with, I thought. No problems. Just configure a DB server with an\nobscene amount of RAM (e.g. 64GB), and configure PG with a shared buffer\ncache that is big enough to hold every page of data in the database, plus\n10% or whatever to allow for a bit of room, ensuring that there is enough\nRAM in the box so that all the backend processes can do their thing, and\nall the other services can do their thing, and the swap system on the host\nremains idle.\n\nApparently not :(\n\nI've read a number of places now saying that the PG cache has an optimal\nsize which isn't \"as big as you can make it without affecting other stuff on\nthe machine\".\n\nThe suggestion is to let linux take the strain for the lion's share of the\ncaching (using its buffer cache), and just make the PG cache big enough to\nhold the data it needs for individual queries.\n\n___\n\nIgnoring for a moment the problem of answering the question 'so how big\nshall I make the PG cache?', and ignoring the possibility that as the\ndatabase content changes over the months this answer will need updating from\ntime to time for optimal performance, does anyone have any actual experience\nwith trying to maintain a large, mainly RAM resident database?\n\nWhat is it about the buffer cache that makes it so unhappy being able to\nhold everything? I don't want to be seen as a cache hit fascist, but isn't\nit just better if the data is just *there*, available in the postmaster's\naddress space ready for each backend process to access it, rather than\nexpecting the Linux cache mechanism, optimised as it may be, to have to do\nthe caching?\n\nIs it that the PG cache entries are accessed through a 'not particularly\noptimal for large numbers of tuples' type of strategy? (Optimal though it\nmight be for more modest numbers).\n\nAnd on a more general note, with the advent of 64 bit addressing and rising\nRAM sizes, won't there, with time, be more and more DB applications that\nwould want to capitalise on the potential speed improvements that come with\nnot having to work hard to get the right bits in the right bit of memory all\nthe time?\n\nAnd finally, am I worrying too much, and actually this problem is common to\nall databases?\n\nThanks for reading,\n\nAndy\n\n\n\n\n\n",
"msg_date": "Thu, 8 Jul 2004 19:05:06 +0100",
"msg_from": "\"Andy Ballingall\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Working on huge RAM based datasets"
},
{
"msg_contents": "> What is it about the buffer cache that makes it so unhappy being able to\n> hold everything? I don't want to be seen as a cache hit fascist, but isn't\n> it just better if the data is just *there*, available in the postmaster's\n> address space ready for each backend process to access it, rather than\n> expecting the Linux cache mechanism, optimised as it may be, to have to do\n> the caching?\n\nBecause the PostgreSQL buffer management algorithms are pitiful compared \nto Linux's. In 7.5, it's improved with the new ARC algorithm, but still \n- the Linux disk buffer cache will be very fast.\n\nChris\n\n",
"msg_date": "Fri, 09 Jul 2004 09:41:50 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "Thanks, Chris.\n\n> > What is it about the buffer cache that makes it so unhappy being able to\n> > hold everything? I don't want to be seen as a cache hit fascist, but\nisn't\n> > it just better if the data is just *there*, available in the\npostmaster's\n> > address space ready for each backend process to access it, rather than\n> > expecting the Linux cache mechanism, optimised as it may be, to have to\ndo\n> > the caching?\n>\n> Because the PostgreSQL buffer management algorithms are pitiful compared\n> to Linux's. In 7.5, it's improved with the new ARC algorithm, but still\n> - the Linux disk buffer cache will be very fast.\n>\n\nI've had that reply elsewhere too. Initially, I was afraid that there was a\nmemory copy involved if the OS buffer cache supplied a block of data to PG,\nbut I've learned a lot more about the linux buffer cache, so it now makes\nmore sense to me why it's not a terrible thing to let the OS manage the\nlions' share of the caching on a high RAM system.\n\nOn another thread, (not in this mailing list), someone mentioned that there\nare a class of databases which, rather than caching bits of database file\n(be it in the OS buffer cache or the postmaster workspace), construct a a\nwell indexed memory representation of the entire data in the postmaster\nworkspace (or its equivalent), and this, remaining persistent, allows the DB\nto service backend queries far quicker than if the postmaster was working\nwith the assumption that most of the data was on disk (even if, in practice,\nlarge amounts or perhaps even all of it resides in OS cache).\n\nThough I'm no stranger to data management in general, I'm still in a steep\nlearning curve for databases in general and PG in particular, but I just\nwondered how big a subject this is in the development group for PG at the\nmoment?\n\nAfter all, we're now seeing the first wave of 'reasonably priced' 64 bit\nservers supported by a proper 64 bit OS (e.g. linux). HP are selling a 4\nOpteron server which can take 256GB of RAM, and that starts at $10000 (ok -\nthey don't give you that much RAM for that price - not yet, anyway!)\n\nThis is the future, isn't it? Each year, a higher percentage of DB\napplications will be able to fit entirely in RAM, and that percentage is\ngoing to be quite significant in just a few years. The disk system gets\nrelegated to a data preload on startup and servicing the writes as the\nserver does its stuff.\n\nRegards,\nAndy\n\n\n",
"msg_date": "Fri, 9 Jul 2004 10:28:24 +0100",
"msg_from": "\"Andy Ballingall\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "\"\"Andy Ballingall\"\" <[email protected]> wrote in message\nnews:011301c46597$15d145c0$0300a8c0@lappy...\n\n> On another thread, (not in this mailing list), someone mentioned that\nthere\n> are a class of databases which, rather than caching bits of database file\n> (be it in the OS buffer cache or the postmaster workspace), construct a a\n> well indexed memory representation of the entire data in the postmaster\n> workspace (or its equivalent), and this, remaining persistent, allows the\nDB\n> to service backend queries far quicker than if the postmaster was working\n> with the assumption that most of the data was on disk (even if, in\npractice,\n> large amounts or perhaps even all of it resides in OS cache).\n\nAs a historical note, System R (grandaddy of all relational dbs) worked this\nway.\nAnd it worked under ridiculous memory constraints by modern standards.\n\nSpace-conscious MOLAP databases do this, FWIW.\n\nSybase 11 bitmap indexes pretty much amount to this, too.\n\nI've built a SQL engine that used bitmap indexes within B-Tree indexes,\nmaking it practical to index every field of every table (the purpose of the\nengine).\n\nYou can also build special-purpose in-memory representations to test for\nexistence (of a key), when you expect a lot of failures. Google\n\"superimposed coding\" e.g. http://www.dbcsoftware.com/dbcnews/NOV94.TXT\n\n\n",
"msg_date": "Fri, 09 Jul 2004 22:23:03 GMT",
"msg_from": "\"Mischa Sandberg\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inverted-list databases (was: Working on huge RAM based datasets)"
},
{
"msg_contents": "Oops - sorry - I confused my numbers. The opteron machine in mind *only* has\nup to 64GB of RAM (e.g. HP DL585) - here's the datapage:\n\nhttp://h18004.www1.hp.com/products/servers/proliantdl585/index.html\n\nStill - with *just* 64GB of RAM, that would comfortably provide for the type\nof scenario I envisage. Is that still enough for your app?\n\nThe 256GB number came from something I read saying that the current crop of\n64 bit chips will allow up to 256GB of RAM in principle, so it is just a\nmatter of time before the memory limit shoots up on these simple products.\n\nIf you are prepared to pay a bit more, already there are some big memory\noptions on linux:\n\nE.g. you can have up to 192GB in an SGI Altix 350:\n\nhttp://www.sgi.com/servers/altix/downloads/altix350_at_a_glance.pdf\n\nOr up to 4 terabytes in it's bigger brother the Altix 3000 - but that's\ngetting a bit esoteric.\n\nhttp://www.sgi.com/servers/altix/\n\n(This won lots of awards recently)\n\nThe nice thing about the two things above is that they run linux in a single\naddress space NUMA setup, and in theory you can just bolt on more CPUs and\nmore RAM as your needs grow.\n\nThanks,\nAndy\n\n\n\n\n----- Original Message ----- \nFrom: \"J. Andrew Rogers\" <[email protected]>\nTo: \"Andy Ballingall\" <[email protected]>\nSent: Friday, July 09, 2004 10:40 PM\nSubject: Re: [PERFORM] Working on huge RAM based datasets\n\n\n> On Fri, 2004-07-09 at 02:28, Andy Ballingall wrote:\n> > After all, we're now seeing the first wave of 'reasonably priced' 64 bit\n> > servers supported by a proper 64 bit OS (e.g. linux). HP are selling a 4\n> > Opteron server which can take 256GB of RAM, and that starts at $10000\n(ok -\n> > they don't give you that much RAM for that price - not yet, anyway!)\n>\n>\n> Which server is this?! They are selling an Opteron system that can hold\n> 256 GB of RAM?\n>\n> I looked on their site, and couldn't find anything like that. I run\n> some MASSIVE memory codes that don't need a lot of CPU, and if such a\n> box existed, I'd be very interested.\n>\n> cheers,\n>\n> j. andrew rogers\n>\n>\n>\n----- Original Message ----- \nFrom: \"J. Andrew Rogers\" <[email protected]>\nTo: \"Andy Ballingall\" <[email protected]>\nSent: Friday, July 09, 2004 10:40 PM\nSubject: Re: [PERFORM] Working on huge RAM based datasets\n\n\n> On Fri, 2004-07-09 at 02:28, Andy Ballingall wrote:\n> > After all, we're now seeing the first wave of 'reasonably priced' 64 bit\n> > servers supported by a proper 64 bit OS (e.g. linux). HP are selling a 4\n> > Opteron server which can take 256GB of RAM, and that starts at $10000\n(ok -\n> > they don't give you that much RAM for that price - not yet, anyway!)\n>\n>\n> Which server is this?! They are selling an Opteron system that can hold\n> 256 GB of RAM?\n>\n> I looked on their site, and couldn't find anything like that. I run\n> some MASSIVE memory codes that don't need a lot of CPU, and if such a\n> box existed, I'd be very interested.\n>\n> cheers,\n>\n> j. andrew rogers\n>\n>\n>\n\n\n",
"msg_date": "Sat, 10 Jul 2004 13:25:37 +0100",
"msg_from": "\"Andy Ballingall\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "Quoth [email protected] (\"Andy Ballingall\"):\n> This is the future, isn't it? Each year, a higher percentage of DB\n> applications will be able to fit entirely in RAM, and that percentage is\n> going to be quite significant in just a few years. The disk system gets\n> relegated to a data preload on startup and servicing the writes as the\n> server does its stuff.\n\nRegrettably, this may be something that fits better with MySQL, as it\nalready has an architecture oriented to having different \"storage\nengines\" in behind.\n\nThere may be merit to the notion of implementing in-memory databases;\nsome assumptions change:\n\n - You might use bitmap indices, although that probably \"kills\" MVCC;\n\n - You might use T-trees rather than B-trees for indices, although\n research seems to indicate that B-trees win out if there is a \n great deal of concurrent access;\n\n - It can become worthwhile to use compression schemes to fit more\n records into memory that wouldn't be worthwhile if using demand\n paging.\n\nIf you really want to try this, then look at Konstantin Knizhnik's\nFastDB system:\n http://www.ispras.ru/~knizhnik/fastdb.html\n\nIt assumes that your application will be a monolithic C++ process; if\nthat isn't the case, then performance will probably suffer due to\nthrowing in context switches.\n\nThe changes in assumptions are pretty vital ones, that imply you're\nheading in a fairly different direction than that which PostgreSQL\nseems to be taking. \n\nThat's not to say that there isn't merit to building a database system\nusing T-trees and bitmap indices attuned to applications where\nmain-memory storage is key; it's just that the proposal probably\nshould go somewhere else.\n-- \noutput = (\"cbbrowne\" \"@\" \"cbbrowne.com\")\nhttp://www3.sympatico.ca/cbbrowne/languages.html\nHow does the guy who drives the snowplow get to work in the mornings?\n",
"msg_date": "Sat, 10 Jul 2004 09:06:34 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "On Thu, Jul 08, 2004 at 12:19:13PM +0200, Steinar H. Gunderson wrote:\n> I'm trying to find out why one of my queries is so slow -- I'm primarily\n> using PostgreSQL 7.2 (Debian stable), but I don't really get much better\n> performance with 7.4 (Debian unstable). My prototype table looks like this:\n\nI hate to nag, but it's been a week with no reply; did anybody look at this?\nIs there any more information I can supply to make it easier?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 15 Jul 2004 01:11:20 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd sorting behaviour"
},
{
"msg_contents": "Steinar,\n\n> - The \"subquery scan o12\" phase outputs 1186 rows, yet 83792 are sorted. \nWhere\n> do the other ~82000 rows come from? And why would it take ~100ms to sort \nthe\n> rows at all? (In earlier tests, this was _one full second_ but somehow \nthat\n> seems to have improved, yet without really improving the overall query \ntime.\n\nI'm puzzled by the \"83792\" rows as well. I've a feeling that Explain Analyze \nis failing to output a step.\n\n> - Why does it use uid_index for an index scan on the table, when it \nobviously\n> has no filter on it (since it returns all the rows)? \n\nIn order to support the merge join. It should be a bit faster to do the sort \nusing the index than the actual table. Also, because you pass the <> 0 \ncondition.\n\n> Furthermore, why would\n> this take half a second? (The machine is a 950MHz machine with SCSI \ndisks.)\n\nI don't see half a second here.\n\n> - Also, the outer sort (the sorting of the 58792 rows from the merge join)\n> is slow. :-)\n\nI don't see a sort after the merge join. Which version are we talking about? \nI'm looking at the 7.4 version because that outputs more detail.\n\nMost of your time is spent in that merge join. Why don't you try doubling \nsort_mem temporarily to see how it does? Or even raising shared_buffers?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 14 Jul 2004 18:41:01 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sorting behaviour"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> - The \"subquery scan o12\" phase outputs 1186 rows, yet 83792 are sorted. \n> Where\n>> do the other ~82000 rows come from?\n\n> I'm puzzled by the \"83792\" rows as well. I've a feeling that Explain\n> Analyze is failing to output a step.\n\nNo, it's not missing anything. The number being reported here is the\nnumber of rows pulled from the plan node --- but this plan node is on\nthe inside of a merge join, and one of the properties of merge join is\nthat it will do partial rescans of its inner input in the presence of\nequal keys in the outer input. If you have, say, 10 occurrences of\n\"42\" in the outer input, then any \"42\" rows in the inner input have to\nbe rescanned 10 times. EXPLAIN ANALYZE will count each of them as 10\nrows returned by the input node.\n\nThe large multiple here (80-to-one overscan) says that you've got a lot\nof duplicate values in the outer input. This is generally a good\nsituation to *not* use a mergejoin in ;-). We do have some logic in the\nplanner that attempts to estimate the extra cost involved in such\nrescanning, but I'm not sure how accurate the cost model is.\n\n> Most of your time is spent in that merge join. Why don't you try doubling \n> sort_mem temporarily to see how it does? Or even raising shared_buffers?\n\nRaising shared_buffers seems unlikely to help. I do agree with raising\nsort_mem --- not so much to make the merge faster as to encourage the\nthing to try a hash join instead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jul 2004 00:52:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sorting behaviour "
},
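One way to try Tom's suggestion in a single session, sketched against the opinions table defined at the start of the thread; the join below is a cut-down stand-in for the full correlation query (invented for illustration), and 65536 (64MB) is only a starting value:

    SET sort_mem = 65536;        -- sort_mem is in kilobytes
    EXPLAIN ANALYZE
    SELECT o1.prodid, count(*)
      FROM opinions o1
      JOIN opinions o2 USING (prodid)
     WHERE o1.uid = 1355
     GROUP BY o1.prodid;
    -- If the plan still shows a merge join rather than a hash join here,
    -- the full query is unlikely to switch either.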
{
"msg_contents": "On Thu, Jul 15, 2004 at 12:52:38AM -0400, Tom Lane wrote:\n> No, it's not missing anything. The number being reported here is the\n> number of rows pulled from the plan node --- but this plan node is on\n> the inside of a merge join, and one of the properties of merge join is\n> that it will do partial rescans of its inner input in the presence of\n> equal keys in the outer input. If you have, say, 10 occurrences of\n> \"42\" in the outer input, then any \"42\" rows in the inner input have to\n> be rescanned 10 times. EXPLAIN ANALYZE will count each of them as 10\n> rows returned by the input node.\n\nOK, that makes sense, although it seems to me as is loops= should have been\nsomething larger than 1 if the data was scanned multiple times.\n\n> The large multiple here (80-to-one overscan) says that you've got a lot\n> of duplicate values in the outer input. This is generally a good\n> situation to *not* use a mergejoin in ;-). We do have some logic in the\n> planner that attempts to estimate the extra cost involved in such\n> rescanning, but I'm not sure how accurate the cost model is.\n\nHum, I'm not sure if I'm in the termiology here -- \"outer input\" in \"A left\njoin B\" is A, right? But yes, I do have a lot of duplicates, that seems to\nmatch my data well.\n\n> Raising shared_buffers seems unlikely to help. I do agree with raising\n> sort_mem --- not so much to make the merge faster as to encourage the\n> thing to try a hash join instead.\n\nsort_mem is already 16384, which I thought would be plenty -- I tried\nincreasing it to 65536 which made exactly zero difference. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 15 Jul 2004 14:08:54 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd sorting behaviour"
},
{
"msg_contents": "Steinar,\n\n> sort_mem is already 16384, which I thought would be plenty -- I tried\n> increasing it to 65536 which made exactly zero difference. :-)\n\nWell, then the next step is increasing the statistical sampling on the 3 join \ncolumns in that table. Try setting statistics to 500 for each of the 3 \ncols, analyze, and see if that makes a difference.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 15 Jul 2004 11:11:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sorting behaviour"
},
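Spelled out, Josh's suggestion looks like this for the prototype opinions table (500 is the value he names; applying it to all three columns is an assumption based on the table definition earlier in the thread):

    ALTER TABLE opinions ALTER COLUMN prodid  SET STATISTICS 500;
    ALTER TABLE opinions ALTER COLUMN uid     SET STATISTICS 500;
    ALTER TABLE opinions ALTER COLUMN opinion SET STATISTICS 500;
    ANALYZE opinions;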
{
"msg_contents": "On Thu, Jul 15, 2004 at 11:11:33AM -0700, Josh Berkus wrote:\n>> sort_mem is already 16384, which I thought would be plenty -- I tried\n>> increasing it to 65536 which made exactly zero difference. :-)\n> Well, then the next step is increasing the statistical sampling on the 3 join \n> columns in that table. Try setting statistics to 500 for each of the 3 \n> cols, analyze, and see if that makes a difference.\n\nMade no difference on either version (7.2 or 7.4).\n\nBTW, you guys can stop Cc-ing me now; I'm subscribed. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 16 Jul 2004 00:16:12 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd sorting behaviour"
},
{
"msg_contents": "On Thu, Jul 15, 2004 at 02:08:54PM +0200, Steinar H. Gunderson wrote:\n> sort_mem is already 16384, which I thought would be plenty -- I tried\n> increasing it to 65536 which made exactly zero difference. :-)\n\nI've tried some further tweaking, but I'm still unable to force it into doing\na hash join -- any ideas how I can find out why it chooses a merge join?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 20 Jul 2004 13:51:16 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd sorting behaviour"
},
{
"msg_contents": "Steinar,\n\n> I've tried some further tweaking, but I'm still unable to force it into\n> doing a hash join -- any ideas how I can find out why it chooses a merge\n> join?\n\nI'm sorry, I can't really give your issue the attention it deserves. At this \npoint, I'd have to get a copy of your database, and play around with \nalternate query structures; and I don't have time. Sorry!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 20 Jul 2004 10:02:49 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sorting behaviour"
},
{
"msg_contents": "Steinar,\n\n> I've tried some further tweaking, but I'm still unable to force it into\n> doing a hash join -- any ideas how I can find out why it chooses a merge\n> join?\n\nActually, quick question -- have you tried setting enable_mergjoin=false to \nsee the plan the system comes up with? Is it in fact faster?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 20 Jul 2004 10:06:08 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sorting behaviour"
},
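The experiment Josh is proposing is session-local, so it is easy to try and undo; sketched here with the same cut-down stand-in query as above rather than the full correlation query:

    SET enable_mergejoin = false;
    EXPLAIN ANALYZE
    SELECT o1.prodid, count(*)
      FROM opinions o1
      JOIN opinions o2 USING (prodid)
     WHERE o1.uid = 1355
     GROUP BY o1.prodid;
    RESET enable_mergejoin;      -- back to the default for the rest of the session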
{
"msg_contents": "On Tue, Jul 20, 2004 at 10:06:08AM -0700, Josh Berkus wrote:\n> Actually, quick question -- have you tried setting enable_mergjoin=false to \n> see the plan the system comes up with? Is it in fact faster?\n\nIt is significantly faster -- 1200ms vs. 1900ms (on 7.4, at least). Some of\nthe merge joins are changed to nested loop joins, though, which probably\nreduces the overall performance, so I guess there's more to gain if I can get\nit to convert only that merge join to a hash join. The sum and multiplication\nparts still take 400ms or so, though (is this normal? :-) ), so I guess\nthere's a lower limit :-)\n\nI could of course post the updated query plan if anybody is interested; let\nme know. (The data is still available if anybody needs it as well, of\ncourse.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 20 Jul 2004 19:21:00 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd sorting behaviour"
},
{
"msg_contents": "> I could of course post the updated query plan if anybody is interested; let\n> me know. (The data is still available if anybody needs it as well, of\n> course.)\n\nI've taken a look and managed to cut out quite a bit of used time.\nYou'll need to confirm it's the same results though (I didn't -- it is\nthe same number of results (query below)\n\nFirst off, \"DROP INDEX prodid_index;\". It doesn't help anything since\nthe primary key is just as usable, but it does take enough space that it\ncauses thrashing in the buffer_cache. Any queries based on prodid will\nuse the index for the PRIMARY KEY instead.\n\nSecondly, I had no luck getting the hashjoin but this probably doesn't\nmatter. I've assumed that the number of users will climb faster than the\nproduct set offered, and generated additional data via the below command\nrun 4 times:\n\n INSERT INTO opinions SELECT prodid, uid + (SELECT max(uid) FROM\n opinions), opinion FROM opinions;\n\nI found that by this point, the hashjoin and mergejoin have essentially\nthe same performance -- in otherwords, as you grow you'll want the\nmergejoin eventually so I wouldn't worry about it too much.\n\n\nNew Query cuts about 1/3rd the time, forcing hashjoin gets another 1/3rd\nbut see the above note:\n\n SELECT o3.prodid\n , SUM(o3.opinion*o12.correlation) AS total_correlation\n FROM opinions o3\n\n -- Plain join okay since o12.correlation <> 0\n -- eliminates any NULLs anyway.\n -- Was RIGHT JOIN\n JOIN (SELECT o2.uid\n , SUM(o1.opinion*o2.opinion)/SQRT(count(*)::numeric)\n AS correlation\n FROM opinions AS o1\n JOIN opinions AS o2 USING (prodid)\n WHERE o1.uid = 1355\n GROUP BY o2.uid\n ) AS o12 USING (uid)\n\n -- Was old Left join\n WHERE o3.prodid NOT IN (SELECT prodid\n FROM opinions AS o4\n WHERE uid = 1355)\n AND o3.opinion <> 0 \n AND o12.correlation <> 0\nGROUP BY o3.prodid\nORDER BY total_correlation desc;\n\n\n",
"msg_date": "Tue, 20 Jul 2004 22:18:19 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sorting behaviour"
},
{
"msg_contents": "On Tue, Jul 20, 2004 at 10:18:19PM -0400, Rod Taylor wrote:\n> I've taken a look and managed to cut out quite a bit of used time.\n> You'll need to confirm it's the same results though (I didn't -- it is\n> the same number of results (query below)\n\nIt looks very much like the same results.\n\n> Secondly, I had no luck getting the hashjoin but this probably doesn't\n> matter. I've assumed that the number of users will climb faster than the\n> product set offered, and generated additional data via the below command\n> run 4 times:\n\nActually, the number of users won't climb that much faster; what will\nprobably increase is the number of opinions.\n\n> I found that by this point, the hashjoin and mergejoin have essentially\n> the same performance -- in otherwords, as you grow you'll want the\n> mergejoin eventually so I wouldn't worry about it too much.\n\nHm, OK.\n\n> -- Plain join okay since o12.correlation <> 0\n> -- eliminates any NULLs anyway.\n> -- Was RIGHT JOIN\n\nOK, that makes sense (although I don't really see why it should be faster).\n\n> -- Was old Left join\n> WHERE o3.prodid NOT IN (SELECT prodid\n> FROM opinions AS o4\n> WHERE uid = 1355)\n\nAs my server is 7.2 and not 7.4, that obviously won't help much :-) Thanks\nanyway, though -- we'll upgrade eventually, and it'll help then. \n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 21 Jul 2004 12:04:10 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Odd sorting behaviour"
},
{
"msg_contents": "On Wed, 2004-07-21 at 06:04, Steinar H. Gunderson wrote:\n> On Tue, Jul 20, 2004 at 10:18:19PM -0400, Rod Taylor wrote:\n> > I've taken a look and managed to cut out quite a bit of used time.\n> > You'll need to confirm it's the same results though (I didn't -- it is\n> > the same number of results (query below)\n> \n> It looks very much like the same results.\n\nOh.. On my (slow) laptop it cut the time back significantly..\n\n> As my server is 7.2 and not 7.4, that obviously won't help much :-) Thanks\n> anyway, though -- we'll upgrade eventually, and it'll help then. \n\nI see. Yeah, avoid NOT IN like a plague on 7.2.\n\n\n",
"msg_date": "Wed, 21 Jul 2004 10:58:12 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Odd sorting behaviour"
}
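Since the poster is stuck on 7.2 for now, the NOT IN subquery in Rod's rewrite can be expressed with the original style of anti-join instead; this is a sketch only and, as with Rod's version, the results should be checked against the original query:

    SELECT o3.prodid, SUM(o3.opinion * o12.correlation) AS total_correlation
      FROM opinions o3
      JOIN (SELECT o2.uid,
                   SUM(o1.opinion * o2.opinion) / SQRT(count(*) + 0.0) AS correlation
              FROM opinions o1
              JOIN opinions o2 USING (prodid)
             WHERE o1.uid = 1355
             GROUP BY o2.uid) AS o12 USING (uid)
      -- anti-join replaces "prodid NOT IN (...)", which 7.2 handles badly
      LEFT JOIN (SELECT DISTINCT prodid FROM opinions WHERE uid = 1355) AS mine
             ON o3.prodid = mine.prodid
     WHERE mine.prodid IS NULL
       AND o3.opinion <> 0
       AND o12.correlation <> 0
     GROUP BY o3.prodid
     ORDER BY total_correlation DESC;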
] |
[
{
"msg_contents": "> What is it about the buffer cache that makes it so unhappy being able\nto\n> hold everything? I don't want to be seen as a cache hit fascist, but\nisn't\n> it just better if the data is just *there*, available in the\npostmaster's\n> address space ready for each backend process to access it, rather than\n> expecting the Linux cache mechanism, optimised as it may be, to have\nto do\n> the caching?\n\nThe disk cache on most operating systems is optimized. Plus, keeping\nshared buffers low gives you more room to bump up the sort memory, which\nwill make your big queries run faster.\n\nMerlin\n\n",
"msg_date": "Fri, 9 Jul 2004 10:16:36 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Working on huge RAM based datasets"
},
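As a rough illustration of Merlin's point for a hypothetical 4GB Linux box (the numbers are invented; shared_buffers and effective_cache_size would normally be set in postgresql.conf, since shared_buffers needs a restart):

    -- postgresql.conf (illustrative values only):
    --   shared_buffers       = 10000     -- ~80MB of 8kB buffers; keep it modest
    --   effective_cache_size = 375000    -- ~3GB expected in the OS disk cache
    -- Sort memory can then be raised per session for the big queries:
    SET sort_mem = 65536;                 -- 64MB per sort, per backend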
{
"msg_contents": "\n>The disk cache on most operating systems is optimized. Plus, keeping\nshared buffers low gives you more room to bump up the sort memory, which\nwill make your big queries run faster.\n\nThanks merlin,\n\nWhether the OS caches the data or PG does, you still want it cached. If your\nsorting backends gobble up the pages that otherwise would be filled with the\ndatabase buffers, then your postmaster will crawl, as it'll *really* have to\nwait for stuff from disk. In my scenario, you'd spec the machine so that\nthere would be plenty of memory for *everything*.\n\nOn your OS optimisation point, OS caches are, of course, optimised. But\npeople have told me that PG's caching strategy is simply less well\noptimised, and *that* is the reason for keeping the shared buffer cache down\nin my scenario. That's a shame in a way, but I understand why it is the way\nit is - other things have been addressed which speed up operations in\ndifferent ways. My 'all in RAM' scenario is very rare at the moment, so why\nwaste valuable development resources on developing optimised RAM based data\nstructures to hold the data for quicker query execution when hardly anyone\nwill see the benefit?\n\nHowever - it won't be so rare for too much longer... If I gave you a half a\nterabyte of RAM and a 4 processor 64 bit machine, I'm sure you could imagine\nhow much quicker databases could run if they were optimised for this sort of\nplatform.\n\nAnyway, I'm looking forward to experimenting with stuff the way it works at\nthe moment.\n\nMany thanks,\nAndy\n\n\n",
"msg_date": "Fri, 9 Jul 2004 17:08:05 +0100",
"msg_from": "\"Andy Ballingall\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "On 7/9/2004 10:16 AM, Merlin Moncure wrote:\n\n>> What is it about the buffer cache that makes it so unhappy being able\n> to\n>> hold everything? I don't want to be seen as a cache hit fascist, but\n> isn't\n>> it just better if the data is just *there*, available in the\n> postmaster's\n>> address space ready for each backend process to access it, rather than\n>> expecting the Linux cache mechanism, optimised as it may be, to have\n> to do\n>> the caching?\n> \n> The disk cache on most operating systems is optimized. Plus, keeping\n> shared buffers low gives you more room to bump up the sort memory, which\n> will make your big queries run faster.\n\nPlus, the situation will change dramatically with 7.5 where the disk \ncache will have less information than the PG shared buffers, which will \nbecome sequential scan resistant and will know that a block was pulled \nin on behalf of vacuum and not because the regular database access \npattern required it.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n",
"msg_date": "Sun, 11 Jul 2004 10:12:46 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "Martha Stewart called it a Good Thing when [email protected] (Jan Wieck) wrote:\n> On 7/9/2004 10:16 AM, Merlin Moncure wrote:\n>>> What is it about the buffer cache that makes it so unhappy being\n>>> able to hold everything? I don't want to be seen as a cache hit\n>>> fascist, but isn't it just better if the data is just *there*,\n>>> available in the postmaster's address space ready for each backend\n>>> process to access it, rather than expecting the Linux cache\n>>> mechanism, optimised as it may be, to have to do the caching?\n\n>> The disk cache on most operating systems is optimized. Plus,\n>> keeping shared buffers low gives you more room to bump up the sort\n>> memory, which will make your big queries run faster.\n\n> Plus, the situation will change dramatically with 7.5 where the disk\n> cache will have less information than the PG shared buffers, which\n> will become sequential scan resistant and will know that a block was\n> pulled in on behalf of vacuum and not because the regular database\n> access pattern required it.\n\nIt'll be very curious how this changes things.\n\nI _think_ it means that shared buffer usage becomes more efficient\nboth for small and large buffers, since vacuums and seq scans\nshouldn't \"eviscerate\" the shared buffers the way they can in earlier\nversions.\n\nWhat would be most interesting to see is whether this makes it wise to\nincrease shared buffer size. It may be more effective to bump down\nthe cache a little, and bump up sort memory; hard to tell.\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in String.concat \"@\" [name;tld];;\nhttp://cbbrowne.com/info/spreadsheets.html\n\"But life wasn't yes-no, on-off. Life was shades of gray, and\nrainbows not in the order of the spectrum.\"\n-- L. E. Modesitt, Jr., _Adiamante_\n",
"msg_date": "Sun, 11 Jul 2004 13:04:44 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "> What would be most interesting to see is whether this makes it wise to\n> increase shared buffer size. It may be more effective to bump down\n> the cache a little, and bump up sort memory; hard to tell.\n\nHow do we go about scheduling tests with the OSDL folks? If they could\ndo 10 runs with buffers between 1k and 500k it would help us get a broad\nview of the situation.\n\n\n",
"msg_date": "Mon, 12 Jul 2004 10:38:08 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "Rond, Chris,\n\n> > What would be most interesting to see is whether this makes it wise to\n> > increase shared buffer size. It may be more effective to bump down\n> > the cache a little, and bump up sort memory; hard to tell.\n>\n> How do we go about scheduling tests with the OSDL folks? If they could\n> do 10 runs with buffers between 1k and 500k it would help us get a broad\n> view of the situation.\n\nYes. We'll need to. However, I'd like to wait until we're officially in \nBeta. I'll be seeing the OSDL folks in person (PostgreSQL+OSDL BOF at Linux \nWorld Expo!!) in a couple of weeks.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 12 Jul 2004 09:38:01 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "On 7/12/2004 12:38 PM, Josh Berkus wrote:\n\n> Rond, Chris,\n> \n>> > What would be most interesting to see is whether this makes it wise to\n>> > increase shared buffer size. It may be more effective to bump down\n>> > the cache a little, and bump up sort memory; hard to tell.\n>>\n>> How do we go about scheduling tests with the OSDL folks? If they could\n>> do 10 runs with buffers between 1k and 500k it would help us get a broad\n>> view of the situation.\n> \n> Yes. We'll need to. However, I'd like to wait until we're officially in \n> Beta. I'll be seeing the OSDL folks in person (PostgreSQL+OSDL BOF at Linux \n> World Expo!!) in a couple of weeks.\n> \n\nDon't forget to add that ARC needs some time actually to let the \nalgorithm adjust the queue sizes and populate the cache according to the \naccess pattern. You can't start a virgin postmaster and then slam on the \naccellerator of your test application by launching 500 concurrent \nclients out of the blue and expect that it starts off airborne.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n",
"msg_date": "Mon, 12 Jul 2004 14:01:10 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
}
] |
[
{
"msg_contents": "> \n> > However, this query performs a sequence scan on the table, ignoring\nthe\n> > call_idx13 index (the only difference is the addition of the aspid\nfield\n> > in the order by clause):\n> \n> You do not have an index which matches the ORDER BY, so PostgreSQL\n> cannot simply scan the index for the data you want. Thus is needs to\n> find all matching rows, order them, etc.\n> \n> > 23:59:59.999' order by aspid, openeddatetime desc, callstatus desc,\n> > calltype desc, callkey desc limit 26;\n> \n> aspid ASC, openeddatetime DESC, callstatus DESC, calltype DESC\n> \n> > call_idx13 btree (aspid, openeddatetime, callstatus,\ncalltype,\n> > callkey),\n> \n> This index is: aspid ASC, openeddatetime ASC, callstatus ASC, calltype\n> ASC, callkey ASC\n> \n\nOK, that makes sense; however, this doesn't:\n\nelon2=# explain analyse select * from call where aspid='123C' and\nOpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n23:59:59.999' order by aspid asc, openeddatetime asc, callstatus asc,\ncalltype asc, callkey asc;\n \nQUERY PLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------\n Sort (cost=342903.52..344071.99 rows=467384 width=295) (actual\ntime=33159.38..33897.22 rows=461973 loops=1)\n Sort Key: aspid, openeddatetime, callstatus, calltype, callkey\n -> Seq Scan on call (cost=0.00..31019.36 rows=467384 width=295)\n(actual time=1.80..7373.75 rows=461973 loops=1)\n Filter: ((aspid = '123C'::bpchar) AND (openeddatetime >=\n'2000-01-01 00:00:00-07'::timestamp with time zone) AND (openeddatetime\n<= '2004-06-24 23:59:59.999-07'::timestamp with time zone))\n Total runtime: 38043.03 msec\n(5 rows)\n\n\nI've modified the \"order by\" to reflect the call_idx13 index, yet the\nquery still causes a sequence scan of the table.\n\n\n\n> A reverse scan, would of course be DESC, DESC, DESC, DESC, DESC --\n> neither of which matches your requested order by, thus cannot help the\n> reduce the lines looked at to 26.\n\n\nTo clarify, the query that the programmer wants is:\n\nselect * from call where aspid='123C' and OpenedDateTime between\n'2000-01-01 00:00:00.0' and '2004-06-24 23:59:59.999' order by aspid,\nopeneddatetime desc, callstatus desc, calltype desc, callkey desc;\n\nWe had started playing with placing limits on the query to address\nanother, unrelated problem.\n\nHowever, out of curiosity I did some testing with varying limits to see\nat which point the planner decided to do a sequence scan instead of\nusing the index.\n\n\n\nelon2=# explain analyse select * from call where aspid='123C' and\nOpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n23:59:59.999' order by aspid, openeddatetime , callstatus , calltype ,\ncallkey limit 92785;\n \nQUERY PLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------------------------------------\n Limit (cost=0.00..343130.36 rows=92785 width=295) (actual\ntime=0.17..1835.55 rows=92785 loops=1)\n -> Index Scan using call_idx13 on call (cost=0.00..1728444.76\nrows=467384 width=295) (actual time=0.17..1699.56 rows=92786 loops=1)\n Index Cond: ((aspid = '123C'::bpchar) AND (openeddatetime >=\n'2000-01-01 00:00:00-07'::timestamp with time zone) AND (openeddatetime\n<= '2004-06-24 23:59:59.999-07'::timestamp with time zone))\n Total runtime: 1901.43 
msec\n(4 rows)\n\n\n\nelon2=# explain analyse select * from call where aspid='123C' and\nOpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n23:59:59.999' order by aspid, openeddatetime , callstatus , calltype ,\ncallkey limit 92786;\n \nQUERY PLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------------------------------------\n Limit (cost=0.00..343134.06 rows=92786 width=295) (actual\ntime=0.17..1834.09 rows=92786 loops=1)\n -> Index Scan using call_idx13 on call (cost=0.00..1728444.76\nrows=467384 width=295) (actual time=0.17..1698.16 rows=92787 loops=1)\n Index Cond: ((aspid = '123C'::bpchar) AND (openeddatetime >=\n'2000-01-01 00:00:00-07'::timestamp with time zone) AND (openeddatetime\n<= '2004-06-24 23:59:59.999-07'::timestamp with time zone))\n Total runtime: 1899.97 msec\n(4 rows)\n\n\nelon2=# select count(*) from call;\n count\n--------\n 507392\n(1 row)\n \n\n> \n> This leaves your WHERE clause to restrict the dataset and it doesn't\ndo\n> a very good job of it. There are more than 450000 rows matching the\n> where clause, which means the sequential scan was probably the right\n> choice (unless you have over 10 million entries in the table).\n> \n> \n> Since your WHERE clause contains a single aspid, an improvement to the\n> PostgreSQL optimizer may be to ignore that field in the ORDER BY as\n> order is no longer important since there is only one possible value.\nIf\n> it did ignore aspid, it would use a plan similar to the first one you\n> provided.\n> \n> You can accomplish the same thing by leaving out aspid ASC OR by\nsetting\n> it to aspid DESC in the ORDER BY. Leaving it out entirely will be\n> slightly faster, but DESC will cause PostgreSQL to use index\n> \"call_idx13\".\n> \n> \n\nAgain, that makes sense to me, but if I remove aspid from the query it\nstill ignores the index....\n\n\nelon2=# explain analyse select * from call where aspid='123C' and\nOpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n23:59:59.999' order by openeddatetime desc, callstatus desc, calltype\ndesc, callkey desc;\n \nQUERY PLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------\n Sort (cost=342903.52..344071.99 rows=467384 width=295) (actual\ntime=17598.31..18304.26 rows=461973 loops=1)\n Sort Key: openeddatetime, callstatus, calltype, callkey\n -> Seq Scan on call (cost=0.00..31019.36 rows=467384 width=295)\n(actual time=1.78..7337.85 rows=461973 loops=1)\n Filter: ((aspid = '123C'::bpchar) AND (openeddatetime >=\n'2000-01-01 00:00:00-07'::timestamp with time zone) AND (openeddatetime\n<= '2004-06-24 23:59:59.999-07'::timestamp with time zone))\n Total runtime: 21665.43 msec\n(5 rows)\n\n\nSetting enable_seqscan=off still doesn't cause the desired index to be\nselected:\n\nelon2=# explain analyse select * from call where aspid='123C' and\nOpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n23:59:59.999' order by aspid desc, openeddatetime desc, callstatus desc,\ncalltype desc, callkey desc;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-------------------------\n Sort (cost=355314.41..356482.87 rows=467384 width=295) (actual\ntime=33382.92..34088.10 rows=461973 
loops=1)\n   Sort Key: aspid, openeddatetime, callstatus, calltype, callkey\n   ->  Index Scan using call_aspid on call  (cost=0.00..43430.25\nrows=467384 width=295) (actual time=0.24..7915.21 rows=461973 loops=1)\n         Index Cond: (aspid = '123C'::bpchar)\n         Filter: ((openeddatetime >= '2000-01-01 00:00:00-07'::timestamp\nwith time zone) AND (openeddatetime <= '2004-06-24\n23:59:59.999-07'::timestamp with time zone))\n Total runtime: 39196.39 msec\n\n\nThanks for your help (and sorry for the long post),\n\n-Joel\n",
"msg_date": "Fri, 9 Jul 2004 11:15:49 -0700",
"msg_from": "Joel McGraw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query plan wierdness?"
},
{
"msg_contents": "> OK, that makes sense; however, this doesn't:\n> \n> elon2=# explain analyse select * from call where aspid='123C' and\n> OpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n> 23:59:59.999' order by aspid asc, openeddatetime asc, callstatus asc,\n> calltype asc, callkey asc;\n\n> I've modified the \"order by\" to reflect the call_idx13 index, yet the\n> query still causes a sequence scan of the table.\n\nThis query shown above does not have a limit where the original one had\nLIMIT 26. PostgreSQL has determined that pulling out all the table rows,\nand sorting them in CPU is cheaper than pulling out all index rows, then\nrandomly pulling out all table rows.\n\nNormally, that would be well on the mark. You can sort a large number of\ntuples for a single random disk seek, but this is not true for you. \n\nConsidering you're pulling out 450k rows in 8 seconds, I'd also guess\nthe data is mostly in memory. Is that normal? Or is this a result of\nhaving run several test queries against the same data multiple times?\n\nIf it's normal, bump your effective_cache parameter higher to move the\nsort vs. scan threshold.\n\n> Again, that makes sense to me, but if I remove aspid from the query it\n> still ignores the index....\n\nYou've changed 2 variables. You removed the aspid AND removed the LIMIT.\nAdd back the limit of 26 like you originally showed, and it'll do what I\ndescribed.\n\n> Setting enable_seqscan=off still doesn't cause the desired index to be\n> selected:\n> \n> elon2=# explain analyse select * from call where aspid='123C' and\n> OpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n> 23:59:59.999' order by aspid desc, openeddatetime desc, callstatus desc,\n> calltype desc, callkey desc;\n> \n> QUERY PLAN\n> \n> ------------------------------------------------------------------------\n> ------------------------------------------------------------------------\n> -------------------------\n> Sort (cost=355314.41..356482.87 rows=467384 width=295) (actual\n> time=33382.92..34088.10 rows=461973 loops=1)\n> Sort Key: aspid, openeddatetime, callstatus, calltype, callkey\n> -> Index Scan using call_aspid on call (cost=0.00..43430.25\n> rows=467384 width=295) (actual time=0.24..7915.21 rows=461973 loops=1)\n> Index Cond: (aspid = '123C'::bpchar)\n> Filter: ((openeddatetime >= '2000-01-01 00:00:00-07'::timestamp\n> with time zone) AND (openeddatetime <= '2004-06-24\n> 23:59:59.999-07'::timestamp with time zone))\n> Total runtime: 39196.39 msec\n\nI'm a little surprised at this. I should have done a reverse index scan\nand skipped the sort step. In fact, with a very simple select, I get\nthis:\n\nrbt=# \\d t\n Table \"public.t\"\n Column | Type | Modifiers\n--------+--------------------------------+-----------\n col1 | bpchar |\n col2 | timestamp(0) without time zone |\n col3 | integer |\n col4 | integer |\n col5 | integer |\nIndexes:\n \"t_idx\" btree (col1, col2, col3, col4, col5)\n\nrbt=# set enable_seqscan = false;\nSET\nrbt=# explain analyze select * from t order by col1 desc, col2 desc,\ncol3 desc, col4 desc, col5 desc;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------\n Index Scan Backward using t_idx on t (cost=0.00..6.20 rows=18\nwidth=52) (actual time=0.046..0.219 rows=18 loops=1)\n Total runtime: 1.813 ms\n(2 rows)\n\nAny chance you could put together a test case demonstrating the above\nbehaviour? 
Everything from CREATE TABLE, through dataload to the EXPLAIN\nANALYZE.\n\n",
"msg_date": "Fri, 09 Jul 2004 15:18:51 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan wierdness?"
}
] |
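A self-contained test case of the sort Rod asks for might look like the sketch below. The table name, column types, data volume, and the use of generate_series() to synthesize rows are all invented for illustration; only the index layout and the ORDER BY/LIMIT shape mirror the query discussed in the thread:

CREATE TABLE call_test (
    aspid          char(4),
    openeddatetime timestamptz,
    callstatus     char(1),
    calltype       char(1),
    callkey        char(16)
);

-- load synthetic rows, timestamps spread over roughly a year
INSERT INTO call_test
SELECT '123C',
       timestamptz '2000-01-01' + (n::text || ' minutes')::interval,
       'O', 'T', 'K' || n::text
FROM generate_series(1, 500000) AS g(n);

CREATE INDEX call_test_idx13 ON call_test
    (aspid, openeddatetime, callstatus, calltype, callkey);
ANALYZE call_test;

-- all-descending ORDER BY plus a small LIMIT: the plan of interest is a
-- backward index scan with no separate sort step
EXPLAIN ANALYZE
SELECT * FROM call_test
WHERE aspid = '123C'
  AND openeddatetime BETWEEN '2000-01-01' AND '2004-06-24'
ORDER BY aspid DESC, openeddatetime DESC, callstatus DESC,
         calltype DESC, callkey DESC
LIMIT 26;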
[
{
"msg_contents": ">>>>> \"Matthieu\" == Matthieu Compin <[email protected]> writes:\n\n Matthieu> bonjour � tous! J'avais �mis l'id�e d'acheteer une borne\n Matthieu> wifi pour nos futurs manifestation public. Je viens donc\n Matthieu> de discuter pour avoir qq infos et des prix.\n\nJe pense que c'est un bon investissement. \n\n+1 donc\n\n\n-- \nLaurent Martelli vice-pr�sident de Parinux\nhttp://www.bearteam.org/~laurent/ http://www.parinux.org/\[email protected] \n\n",
"msg_date": "Sat, 10 Jul 2004 12:00:09 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: achat borne wifi"
}
] |
[
{
"msg_contents": "Jan wrote:\n> > The disk cache on most operating systems is optimized. Plus,\nkeeping\n> > shared buffers low gives you more room to bump up the sort memory,\nwhich\n> > will make your big queries run faster.\n> \n> Plus, the situation will change dramatically with 7.5 where the disk\n> cache will have less information than the PG shared buffers, which\nwill\n> become sequential scan resistant and will know that a block was pulled\n> in on behalf of vacuum and not because the regular database access\n> pattern required it.\n\nHm. In my experience the different between data cached between shared\nbuffers and the O/S is not very much...both are fast. However, I almost\nalways see dramatic performance speedups for bumping up work mem. Are\nyou suggesting that it will be advantageous to bump up shared buffers?\n\nMerlin\n\n\n",
"msg_date": "Mon, 12 Jul 2004 10:05:06 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Working on huge RAM based datasets"
}
] |
[
{
"msg_contents": "Andy wrote:\n> Whether the OS caches the data or PG does, you still want it cached.\nIf\n> your\n> sorting backends gobble up the pages that otherwise would be filled\nwith\n> the\n> database buffers, then your postmaster will crawl, as it'll *really*\nhave\n> to\n> wait for stuff from disk. In my scenario, you'd spec the machine so\nthat\n> there would be plenty of memory for *everything*.\n\nThat's the whole point: memory is a limited resource. If pg is\ncrawling, then the problem is simple: you need more memory. The\nquestion is: is it postgresql's responsibility to manage that resource?\nPg is a data management tool, not a memory management tool. The same\n'let's manage everything' argument also frequently gets brought up wrt\nfile i/o, because people assume the o/s sucks at file management. In\nreality, they are quite good, and through use of the generic interface\nthe administrator is free to choose a file system that best suits the\nneeds of the application.\n\nAt some point, hard disks will be replaced by solid state memory\ntechnologies...do you really want to recode your memory manager when\nthis happens because all your old assumptions are no longer correct?\n\nMerlin\n",
"msg_date": "Mon, 12 Jul 2004 10:23:06 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "Sorry for the late reply - I've been away.\n\nMerlin, I'd like to come back with a few more points!\n\n>That's the whole point: memory is a limited resource. If pg is\n>crawling, then the problem is simple: you need more memory.\n\nMy posting only relates to the scenario where RAM is not a limiting factor,\na scenario which shall become increasingly common over the next few years,\nas 64 bit processors and OSs allow the exploitation of ever larger, ever\ncheaper RAM.\n\n> The question is: is it postgresql's responsibility to manage that\nresource?\n\nI think you are confusing the issue of RAM and address space.\n\nAny application can acquire a piece of address space for its own use. It is\nthe responsibility of the application to do what it needs with that address\nspace. I'm interested in how PG could do something better in its address\nspace when it knows that it can fit all the data it operates on within that\naddress space.\n\nThough the OS is responsible for determining whether that address space is\nRAM resident or not, in my scenario, this is irrelevant, because there\n*will* be enough RAM for everything, and the OS will, in that scenario,\nallow all the address space to become RAM resident.\n\nI am not advocating undermining the OS in any way. It would be stupid to\nmake PGSQL take over the running of the hardware.\n\n>Pg is a data management tool, not a memory management tool.\n\nI'm not criticising PG. PG is actually a 'DISK/MEMORY' data management tool.\nIt manages data which lives on disks, but it can only operate on that data\nin memory, and goes to some lengths to try to fit bits of disk data in a\ndefined piece of memory, and push them back out again.\n\nAt the moment, this model assumes that RAM is a scarce resource.\n\nThe model still 'sort of' works when RAM is actually not scarce, because the\nOS effectively uses that extra RAM to make IO *appear* to be quicker, and\nindeed, I've found that a hint has been added to PG to tell it how much the\nOS is likely to be caching.\n\nBut the question is this:\n\n\"If you wrote a DB from scratch with the assumption that *all* the data\ncould fit in the address space allocated by the postmaster, and you were\nconfident that the OS had enough RAM so that you never suffered vmem page\nmisses, couldn't you make things go much faster?\"\n\nA more pertinent question is:\n\n\"Could PG be extended to have a flag, which when enabled, told it to operate\nwith the assumption that it could fit all the disk data in RAM, and\nimplement the data organisation optimisations that rely on the persistence\nof data in address space?\"\n\n\n>The same\n>'let's manage everything' argument also frequently gets brought up wrt\n>file i/o, because people assume the o/s sucks at file management.\n\nWell, I'm not saying this.\n\nI have substantial experience with high performance file IO through a\nfilesystem.\n\nBut if you are interested in high speed IO, naive 'let the OS do everything'\napproach isn't often good enough. You, the application, have to be aware\nthat the order and timing of IO requests, along with the size of IO block\nyou cause to trigger, have a dramatic impact on the speed with which the\ndata reaches your app, OS or no OS. Most high speed storage still relies on\nspinning things containing data that can only be accessed in a certain way,\nand data movement is page boundary sensitive. 
The OS may hide these details\nfrom you, but you, the app writer, have to have an understanding of the\nunderlying reality if you want to optimise performance.\n\nI want to stress that at no point am I advocating *not* using the OS. PG\nshould do ALL IO and memory allocation through the OS, otherwise you end up\nwith a platform specific product that is of little use.\n\nThat given, there is still the opportunity for PG to be able to operate far\nmore efficiently in my high memory scenario.\n\nWouldn't your backend processes like to have the entire database sitting\nready in address space (ram resident, of course!), indexes all fully built?\nNo tuple more than a few machine instructions away?\n\nImagine the postmaster isn't having to frantically decide which bits of data\nto kick out of the workspace in order to keep the backends happy. Imagine\nthe postmaster isn't having to build structures to keep track of the newly\nread in blocks of data from 'disk' (or OS cache).\n\nIs this not a compelling scenario?\n\n>At some point, hard disks will be replaced by solid state memory\n>technologies...\n\nThis is irrelevant to my scenario, though solid state disks would allow\nwrite speeds to improve, which would add to the gains which I am fishing for\nhere.\n\n>do you really want to recode your memory manager when\n>this happens because all your old assumptions are no longer correct?\n\nMy scenario assumes nothing about how the data is stored, but you are right\nto flag the problems that arise when original assumptions about hardware\nbecome incorrect.\n\nFor example, PG assumes that RAM is a rare resource, and it assumes the\npostmaster cannot fit the entire database in a single address space.\n\n*These* assumptions are now not correct, following the 64bit address space\nbreakthrough.\n\nThe availability of 64 bit addressing and huge banks of RAM is of enormous\nsignificance to databases, and itIt is the whole reason for my post.\n\nOver the next 5-10 years, an increasing proportion of databases will fit\ncomfortably in RAM resident address space on commodity equipment.\n\nSo, the question for the people involved in PG is: *how* can PG be improved\nto make use of this, and reap the very substantial speed gains in this\nscenario, without breaking the existing usage scenarios of PG in the\ntraditional 'DB > RAM' scenario?\n\nThe answer isn't \"undermine the OS\". The answer is \"make the postmaster able\nto build and operate with persistent, query optimised representations of the\ndisk data\".\n\nYes, I guess that might be a lot of work.. But the DB that delivers this\nperformance will be very well placed in the next 5 years, don't you think?\n\nThanks for your comments,\n\nRegards,\nAndy\n\n\n",
"msg_date": "Mon, 19 Jul 2004 10:12:12 +0100",
"msg_from": "\"Andy Ballingall\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
},
{
"msg_contents": "Sorry for the late reply - I've been away, and I've had problems posting too\n:(\n\nMerlin, I'd like to come back with a few more points!\n\n>That's the whole point: memory is a limited resource. If pg is\n>crawling, then the problem is simple: you need more memory.\n\nMy posting only relates to the scenario where RAM is not a limiting factor,\na scenario which shall become increasingly common over the next few years,\nas 64 bit processors and OSs allow the exploitation of ever larger, ever\ncheaper RAM. Incidentally, If PG is crawling, memory might be the\nproblem...but not necessarily - could be disk bound on writes.\n\n\n> The question is: is it postgresql's responsibility to manage that\nresource?\n\nI think you are confusing the issue of RAM and address space.\n\nAny application can acquire a piece of address space for its own use. It is\nthe responsibility of the application to do what it needs with that address\nspace. I'm interested in how PG could do something better in its address\nspace when it knows that it can fit all the data it operates on within that\naddress space.\n\nThough the OS is responsible for determining whether that address space is\nRAM resident or not, in my scenario, this is irrelevant, because there\n*will* be enough RAM for everything, and the OS will, in that scenario,\nallow all the address space to become RAM resident.\n\nI am not advocating undermining the OS in any way. It would be stupid to\nmake PGSQL take over the running of the hardware. I've learned the hard way\nthat bypassing the OS is just a big pain up the backside!\n\n>Pg is a data management tool, not a memory management tool.\n\nI'm not criticising PG. PG is actually a 'DISK/MEMORY' data management tool.\nIt manages data which lives on disks, but it can only operate on that data\nin memory, and goes to some lengths to try to fit bits of disk data in a\ndefined piece of memory, and push them back out again.\n\nAt the moment, this model assumes that RAM is a scarce resource.\n\nThe model still 'sort of' works when RAM is actually not scarce, because the\nOS effectively uses that extra RAM to make IO *appear* to be quicker, and\nindeed, I've found that a hint has been added to PG to tell it how much the\nOS is likely to be caching.\n\nBut the question is this:\n\n\"If you wrote a DB from scratch with the assumption that *all* the data\ncould fit in the address space allocated by the postmaster, and you were\nconfident that the OS had enough RAM so that you never suffered vmem page\nmisses, couldn't you make things go much faster?\"\n\nA more pertinent question is:\n\n\"Could PG be extended to have a flag, which when enabled, told it to operate\nwith the assumption that it could fit all the disk data in RAM, and\nimplement the data organisation optimisations that rely on the persistence\nof data in address space?\"\n\n\n>The same\n>'let's manage everything' argument also frequently gets brought up wrt\n>file i/o, because people assume the o/s sucks at file management.\n\nWell, I'm not saying this.\n\nI have substantial experience with high performance file IO through a\nfilesystem.\n\nBut if you are interested in high speed IO, naive 'let the OS do everything'\napproach isn't often good enough. You, the application, have to be aware\nthat the order and timing of IO requests, along with the size of IO block\nyou cause to trigger, have a dramatic impact on the speed with which the\ndata reaches your app, OS or no OS. 
Most high speed storage still relies on\nspinning things containing data that can only be accessed in a certain way,\nand data movement is page boundary sensitive. The OS may hide these details\nfrom you, but you, the app writer, have to have an understanding of the\nunderlying reality if you want to optimise performance.\n\nI want to stress that at no point am I advocating *not* using the OS. PG\nshould do ALL IO and memory allocation through the OS, otherwise you end up\nwith a platform specific product that is of little use.\n\nThat given, there is still the opportunity for PG to be able to operate far\nmore efficiently in my high memory scenario.\n\nWouldn't your backend processes like to have the entire database sitting\nready in address space (ram resident, of course!), indexes all fully built?\nNo tuple more than a few machine instructions away?\n\nImagine the postmaster isn't having to frantically decide which bits of data\nto kick out of the workspace in order to keep the backends happy. Imagine\nthe postmaster isn't having to build structures to keep track of the newly\nread in blocks of data from 'disk' (or OS cache). Imagine that everything\nwas just there...\n\nIs this not a compelling scenario?\n\n>At some point, hard disks will be replaced by solid state memory\n>technologies...\n\nThis is irrelevant to my scenario. The optimisations I crave are to do with\ngetting the entire database in a query-optimised form near to the CPUS -\ni.e. in fast RAM. (I'd expect solid state disk ram to be much slower than\nthe RAM that sits nearer the CPU).\n\nThe speed of the persistent storage system (whether spinning platters or\nsome sort of persistent solid state memory) isn't really of direct relevance\nto my argument. Solid state disks would allow write speeds to improve, which\nwould add to the gains which I am fishing for. So when they come, I'll be\nhappy.\n\nStill. Solid state disks aren't really an option right now. Big RAM is.\n\n\n>do you really want to recode your memory manager when\n>this happens because all your old assumptions are no longer correct?\n\nMy scenario assumes nothing about how the data is stored, but you are right\nto flag the problems that arise when original assumptions about hardware\nbecome incorrect.\n\nFor example, PG assumes that RAM is a rare resource, and it assumes the\npostmaster cannot fit the entire database in a single address space.\n\n*These* assumptions are now not correct, following the 64bit address space\nbreakthrough.\n\nThe availability of 64 bit addressing and huge banks of RAM is of enormous\nsignificance to databases, and it is the whole reason for my post.\n\nOver the next 5-10 years, an increasing proportion of databases will fit\ncomfortably in RAM resident address space on commodity equipment. My\nparticular nationwide application will fit into RAM now. Hence my interest!\n\nSo, the question for the people involved in PG is: *how* can PG be improved\nto make use of this, and reap the very substantial speed gains in this\nscenario, without breaking the existing usage scenarios of PG in the\ntraditional 'DB > RAM' scenario?\n\nThe answer isn't \"undermine the OS\". The answer might be\"make the postmaster\nable\nto build and operate with persistent, query optimised representations of the\ndisk data\".\n\nYes, I guess that might be a lot of work.. 
But the DB that delivers this\nperformance will be very well placed in the next 5 years, don't you think?\n\nAnyway - I look forward to further feedback, and thanks for your comments so\nfar.\n\nRegards,\nAndy\n\n\n",
"msg_date": "Tue, 20 Jul 2004 11:45:07 +0100",
"msg_from": "\"abhousehunt\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Working on huge RAM based datasets"
}
] |
[
{
"msg_contents": "\nWhen I went to 7.4.3 (Slackware 9.1) w/ JDBC, the improvements are that it doesn't initially take much memory (have 512M) and didn't swap. I ran a full vaccum and a cluster before installation, however speed degaded to 1 *second* / update of one row in 150 rows of data, within a day! pg_autovacuum now gives excellent performance however it is taking 66M of swap; only 270k cached.\n\n\n\n\n_______________________________________________\nJoin Excite! - http://www.excite.com\nThe most personalized portal on the Web!\n",
"msg_date": "Mon, 12 Jul 2004 12:59:05 -0400 (EDT)",
"msg_from": "\"Jim Ewert\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Swapping in 7.4.3"
},
{
"msg_contents": "Jim Ewert wrote:\n> When I went to 7.4.3 (Slackware 9.1) w/ JDBC, the improvements are that it doesn't initially take much memory (have 512M) and didn't swap. I ran a full vaccum and a cluster before installation, however speed degaded to 1 *second* / update of one row in 150 rows of data, within a day! pg_autovacuum now gives excellent performance however it is taking 66M of swap; only 270k cached.\n> \n\nAre you saying that your system stays fast now that you are using \npg_autovacuum, but pg_autovacuum is using 66M of memory? Please \nclarify, I'm not sure what question you want an answered.\n\nMatthew\n\n",
"msg_date": "Tue, 13 Jul 2004 16:26:09 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swapping in 7.4.3"
}
] |
[
{
"msg_contents": ">>>>> \"alix\" == alix <[email protected]> writes:\n\n alix> Le Mon, 12 Jul 2004 12:37:47 +0200 (CEST) Jean-Luc Ancey\n alix> <[email protected]> & stef �crivit:\n\n >> > Pour moi la m�me chose que l'ann�e derni�re : on > s'abstient.\n\n alix> Pour moi la m�me chose que l'ann�e derni�re : on y va.\n\nIdem pour moi.\n\n-- \nLaurent Martelli vice-pr�sident de Parinux\nhttp://www.bearteam.org/~laurent/ http://www.parinux.org/\[email protected] \n\n",
"msg_date": "Mon, 12 Jul 2004 20:33:27 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fw: invitation au \"Village du Logiciel Libre\" de la"
}
] |
[
{
"msg_contents": "> \n> Considering you're pulling out 450k rows in 8 seconds, I'd also guess\n> the data is mostly in memory. Is that normal? Or is this a result of\n> having run several test queries against the same data multiple times?\n> \n\nAh yes, that would have been the result of running the query several\ntimes...\n\n\nOddly enough, I put the same database on a different machine, and the\nquery now behaves as I hoped all along. Notice that I'm using the\n\"real\" query, with the aspid in asc and the other fields in desc order,\nyet the query does use the call_idx13 index:\n\n\ncsitech=# explain analyse select * from call where aspid='123C' and\nOpenedDateTime between '2000-01-01 00:00:00.0' and '2004-06-24\n23:59:59.999' order by aspid, openeddatetime desc, callstatus desc,\ncalltype desc, callkey desc;\n \nQUERY PLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n----------------------------------------------------------\n Sort (cost=60.01..60.05 rows=14 width=696) (actual\ntime=42393.56..43381.85 rows=510705 loops=1)\n Sort Key: aspid, openeddatetime, callstatus, calltype, callkey\n -> Index Scan using call_idx13 on call (cost=0.00..59.74 rows=14\nwidth=696) (actual time=0.33..19679.01 rows=510705 loops=1)\n Index Cond: ((aspid = '123C'::bpchar) AND (openeddatetime >=\n'2000-01-01 00:00:00-07'::timestamp with time zone) AND (openeddatetime\n<= '2004-06-24 23:59:59.999-07'::timestamp with time zone))\n Total runtime: 43602.05 msec\n\n\nFWIW, this is different hardware (Solaris 9/Sparc), but the same version\nof Postgres (7.3.4). The data is a superset of the data in the other\ndatabase (they are both snapshots taken from production).\n\nI dropped and recreated the index on the other (Linux) machine, ran\nvacuum analyse, then tried the query again. It still performs a\nsequence scan on the call table. :(\n\n\n> \n> Any chance you could put together a test case demonstrating the above\n> behaviour? Everything from CREATE TABLE, through dataload to the\nEXPLAIN\n> ANALYZE.\n\n\nForgive me for being thick: what exactly would be involved? Due to\nHIPAA regulations, I cannot \"expose\" any of the data.\n\n<background>\nI hesitated to bring this up because I wanted to focus on the technical\nissues rather than have this degenerate into a religious war. The chief\ndeveloper in charge of the project brought this query to my attention.\nHe has a fair amount of political sway in the company, and is now\nlobbying to switch to MySQL because he maintains that PostgreSQL is\nbroken and/or too slow for our needs. He has apparently benchmarked the\nsame query using MySQL and gotten much more favorable results (I have\nbeen unable to corroborate this yet).\n</background>\n\n\n-Joel\n\n-- CONFIDENTIALITY NOTICE --\n\nThis message is intended for the sole use of the individual and entity to whom it is addressed, and may contain information that is privileged, confidential and exempt from disclosure under applicable law. If you are not the intended addressee, nor authorized to receive for the intended addressee, you are hereby notified that you may not use, copy, disclose or distribute to anyone the message or any information contained in the message. If you have received this message in error, please immediately advise the sender by reply email, and delete the message. Thank you.\n",
"msg_date": "Mon, 12 Jul 2004 16:54:07 -0700",
"msg_from": "Joel McGraw <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query plan wierdness?"
},
{
"msg_contents": "> Oddly enough, I put the same database on a different machine, and the\n> query now behaves as I hoped all along. Notice that I'm using the\n> \"real\" query, with the aspid in asc and the other fields in desc order,\n> yet the query does use the call_idx13 index:\n\nNotice that while it only takes 19 seconds to pull the data out of the\ntable, it is spending 30 seconds sorting it -- so the index scan isn't\nbuying you very much.\n\nTry it again with ORDER BY ascid DESC and you should get the query down\nto 20 seconds in total on that Sparc; so I wouldn't call it exactly what\nyou wanted.\n\nhe decision about whether to use an index or not, is borderline. And as\nyou've shown they take approximately the same amount of time. Use of an\nindex will not necessarily be faster than a sequential scan -- but the\npenalty for accidentally selecting one when it shouldn't have is much\nhigher.\n\n> > Any chance you could put together a test case demonstrating the above\n> > behaviour? Everything from CREATE TABLE, through dataload to the\n> EXPLAIN\n> > ANALYZE.\n> \n> \n> Forgive me for being thick: what exactly would be involved? Due to\n> HIPAA regulations, I cannot \"expose\" any of the data.\n\nOf course. But that doesn't mean you couldn't create table different\nname and muck around with the values. But you're getting what you want,\nso it isn't a problem anymore.\n\n> <background>\n> I hesitated to bring this up because I wanted to focus on the technical\n> issues rather than have this degenerate into a religious war. The chief\n> developer in charge of the project brought this query to my attention.\n> He has a fair amount of political sway in the company, and is now\n> lobbying to switch to MySQL because he maintains that PostgreSQL is\n> broken and/or too slow for our needs. He has apparently benchmarked the\n> same query using MySQL and gotten much more favorable results (I have\n> been unable to corroborate this yet).\n> </background>\n\nI wouldn't be surprised if MySQL did run this single query faster with\nnothing else going on during that time. MySQL was designed primarily\nwith a single user in mind, but it is unlikely this will be your\nproduction situation so the benchmark is next to useless.\n\nConnect 50 clients to the databases running this (and a mixture of other\nselects) while another 20 clients are firing off updates, inserts,\ndeletes on these and other structures -- or whatever matches your full\nproduction load.\n\nThis is what PostgreSQL (and a number of other DBs) are designed for,\ntypical production loads.\n\n\n",
"msg_date": "Mon, 12 Jul 2004 22:06:42 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan wierdness?"
}
] |
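For reference, the change Rod describes is simply making every sort key descend, so the requested ordering matches call_idx13 read backwards and the separate sort step can be dropped. A sketch of the reworked query, with the table and column names as reported earlier in the thread:

EXPLAIN ANALYZE
SELECT *
FROM call
WHERE aspid = '123C'
  AND openeddatetime BETWEEN '2000-01-01 00:00:00.0'
                         AND '2004-06-24 23:59:59.999'
ORDER BY aspid DESC, openeddatetime DESC, callstatus DESC,
         calltype DESC, callkey DESC;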
[
{
"msg_contents": "Hi all,\n\nI'm storing some timestamps as integers (UTF) in a table and I want to\nquery by <= and >= for times between a certain period. The table has\ngotten quite large and is now very slow in querying.\n\nI think it's time to create an index for the timestamp column.\n\nI tried using an rtree (for <= and >= optimization):\n\ncreate INDEX logs_timestamp ON logs using rtree (timestamp);\n\nbut I get \n\nERROR: data type integer has no default operator class for access\nmethod \"rtree\"\n You must specify an operator class for the index or define a\n default operator class for the data type\n\nDo I have to create an rtree type for my timestamp integer column? \n\nExisting rtree columns are below.\n\nPls help.\n\nThanks,\nChris\n\nserver=> select am.amname as acc_method, opc.opcname as ops_name from\npg_am am, pg_opclass opc where opc.opcamid = am.oid order by\nacc_method, ops_name;\n acc_method | ops_name \n------------+-----------------\n btree | abstime_ops\n btree | bit_ops\n btree | bool_ops\n btree | bpchar_ops\n btree | bytea_ops\n btree | char_ops\n btree | cidr_ops\n btree | date_ops\n btree | float4_ops\n btree | float8_ops\n btree | inet_ops\n btree | int2_ops\n btree | int4_ops\n btree | int8_ops\n btree | interval_ops\n btree | macaddr_ops\n btree | name_ops\n btree | numeric_ops\n btree | oid_ops\n btree | oidvector_ops\n btree | text_ops\n btree | time_ops\n btree | timestamp_ops\n btree | timestamptz_ops\n btree | timetz_ops\n btree | varbit_ops\n btree | varchar_ops\n hash | bpchar_ops\n hash | char_ops\n hash | cidr_ops\n hash | date_ops\n hash | float4_ops\n hash | float8_ops\n hash | inet_ops\n hash | int2_ops\n hash | int4_ops\n hash | int8_ops\n hash | interval_ops\n hash | macaddr_ops\n hash | name_ops\n hash | oid_ops\n hash | oidvector_ops\n hash | text_ops\n hash | time_ops\n hash | timestamp_ops\n hash | timestamptz_ops\n hash | timetz_ops\n hash | varchar_ops\n rtree | bigbox_ops\n rtree | box_ops\n rtree | poly_ops\n(51 rows)\n",
"msg_date": "Mon, 12 Jul 2004 22:51:27 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to create an index for type timestamp column using rtree?"
},
{
"msg_contents": "Chris Cheston <[email protected]> writes:\n> I'm storing some timestamps as integers (UTF) in a table and I want to\n> query by <= and >= for times between a certain period.\n\nbtree can handle range queries nicely; why do you think you need an\nrtree? rtree is for 2-dimensional datums which a timestamp is not ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 13 Jul 2004 02:14:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to create an index for type timestamp column using rtree? "
},
{
"msg_contents": "> I'm storing some timestamps as integers (UTF) in a table and I want to\n> query by <= and >= for times between a certain period. The table has\n> gotten quite large and is now very slow in querying.\n> \n> I think it's time to create an index for the timestamp column.\n\nUh, yeah.\n\n> I tried using an rtree (for <= and >= optimization):\n\nBad idea.\n\n> Do I have to create an rtree type for my timestamp integer column? \n\nWhy do you want an rtree index? They're for multidimensional polygonal \ndata and stuff. Just create a normal index...\n\nChris\n\n",
"msg_date": "Tue, 13 Jul 2004 14:33:48 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to create an index for type timestamp column using"
},
{
"msg_contents": "Thanks, Chris and Tom.\nI had read *incorrectly* that rtrees are better for <= and >= comparisons.\n\nChris\n\nOn Tue, 13 Jul 2004 14:33:48 +0800, Christopher Kings-Lynne\n<[email protected]> wrote:\n> > I'm storing some timestamps as integers (UTF) in a table and I want to\n> > query by <= and >= for times between a certain period. The table has\n> > gotten quite large and is now very slow in querying.\n> >\n> > I think it's time to create an index for the timestamp column.\n> \n> Uh, yeah.\n> \n> > I tried using an rtree (for <= and >= optimization):\n> \n> Bad idea.\n> \n> > Do I have to create an rtree type for my timestamp integer column?\n> \n> Why do you want an rtree index? They're for multidimensional polygonal\n> data and stuff. Just create a normal index...\n> \n> Chris\n> \n>\n",
"msg_date": "Tue, 13 Jul 2004 00:56:02 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to create an index for type timestamp column using rtree?"
}
] |
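For completeness, the ordinary btree index the replies recommend is all that is needed for <=/>= range filters on an integer column. A sketch using the table and column names from the original post; the epoch values in the WHERE clause are arbitrary placeholders:

CREATE INDEX logs_timestamp ON logs USING btree (timestamp);
ANALYZE logs;

EXPLAIN ANALYZE
SELECT *
FROM logs
WHERE timestamp >= 1089590400   -- start of the period, Unix epoch seconds
  AND timestamp <= 1089676800;  -- end of the period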
[
{
"msg_contents": "Hi,\n\nI have a database with 10 tables having about 50 000 000 records ...\nEvery day I have to delete about 20 000 records, inserting about the same in \none of this table.\n\nThen I make some agregations inside the other tables to get some week results, \nand globals result by users. That mean about 180 000 to 300 000 insert by \ntable each days.\n\nThe time for the calculation of the request is about 2 / 4 minutes ... I do \nthe request inside a temporary table ... then I do an\ninsert into my_table select * from temp_table.\n\nAnd that's the point, the INSERT take about (depending of the tables) 41 \nminutes up to 2 hours ... only for 180 to 300 000 INSERTs ...\n\nThe table have index really usefull ... so please do not tell me to delete \nsome of them ... and I can't drop them before inserting data ... it's really \ntoo long to regenerate them ...\n\nI'm configured with no flush, I have 8 Gb of RAM, and RAID 5 with SCSI 7200 \nharddrive ... I'm using Linux Debian, with a PostgreSQL version compiled by \nmyself in 7.4.3.\n\nWhat can I do to get better results ?? (configuration option, and/or hardware \nupdate ?)\nWhat can I give you to get more important informations to help me ?\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n",
"msg_date": "Tue, 13 Jul 2004 18:50:52 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert are going slower ..."
},
{
"msg_contents": "Herve,\n\n> What can I do to get better results ?? (configuration option, and/or\n> hardware update ?)\n> What can I give you to get more important informations to help me ?\n\n1) What PostgreSQL version are you using?\n\n2) What's your VACUUM, ANALYZE, VACUUM FULL, REINDEX schedule?\n\n3) Can you list the non-default settings in your PostgreSQL.conf? \nParticularly, shared_buffers, sort_mem, checkpoint_segments, estimated_cache, \nand max_fsm_pages?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 13 Jul 2004 10:10:30 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Josh,\n\nLe mardi 13 Juillet 2004 19:10, Josh Berkus a écrit :\n>\n> > What can I do to get better results ?? (configuration option, and/or\n> > hardware update ?)\n> > What can I give you to get more important informations to help me ?\n>\n> 1) What PostgreSQL version are you using?\n\nv7.4.3\n\n> 2) What's your VACUUM, ANALYZE, VACUUM FULL, REINDEX schedule?\n\nVACUUM FULL VERBOSE ANALYZE;\n\nEvery day after the calculation I was talking about ...\n\n> 3) Can you list the non-default settings in your PostgreSQL.conf?\n> Particularly, shared_buffers, sort_mem, checkpoint_segments,\n> estimated_cache, and max_fsm_pages?\n\nshared_buffers = 48828\nsort_mem = 512000\nvacuum_mem = 409600\nmax_fsm_pages = 50000000\nmax_fsm_relations = 2000\nmax_files_per_process = 2000\nwal_buffers = 1000\ncheckpoint_segments = 3\neffective_cache_size = 5000000\nrandom_page_cost = 3\t\ndefault_statistics_target = 20\njoin_collapse_limit = 10\n\nRegards,\n-- \nHervé Piedvache\n\n",
"msg_date": "Wed, 14 Jul 2004 01:42:11 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "\nFrom: \"Hervᅵ Piedvache\" <[email protected]>\nSent: Tuesday, July 13, 2004 11:42 PM\n\n\n> effective_cache_size = 5000000\n\nlooks like madness to me.\nmy (modest) understanding of this, is that\nyou are telling postgres to assume a 40Gb disk\ncache (correct me if I am wrong).\n\nbtw, how much effect does this setting have on the planner?\n\nis there a recommended procedure to estimate\nthe best value for effective_cache_size on a\ndedicated DB server ?\n\ngnari\n\n\n\n\n\n",
"msg_date": "Wed, 14 Jul 2004 10:06:05 -0000",
"msg_from": "\"gnari\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "gnari wrote:\n> is there a recommended procedure to estimate\n> the best value for effective_cache_size on a\n> dedicated DB server ?\n\nRule of thumb(On linux): on a typically loaded machine, observe cache memory of \nthe machine and allocate good chunk of it as effective cache.\n\nTo define good chunck of it, you need to consider how many other things are \nrunning on that machine. If it is file server + web server + database server, \nyou have to allocate the resources depending upon requirement.\n\nBut remember It does not guarantee that it will be a good value. It is just a \nstarting point..:-) You have to tune it further if required.\n\nHTH\n\n Shridhar\n",
"msg_date": "Wed, 14 Jul 2004 15:43:22 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Le mercredi 14 Juillet 2004 12:13, Shridhar Daithankar a ᅵcrit :\n> gnari wrote:\n> > is there a recommended procedure to estimate\n> > the best value for effective_cache_size on a\n> > dedicated DB server ?\n>\n> Rule of thumb(On linux): on a typically loaded machine, observe cache\n> memory of the machine and allocate good chunk of it as effective cache.\n>\n> To define good chunck of it, you need to consider how many other things are\n> running on that machine. If it is file server + web server + database\n> server, you have to allocate the resources depending upon requirement.\n>\n> But remember It does not guarantee that it will be a good value. It is just\n> a starting point..:-) You have to tune it further if required.\n\nIn my case it's a PostgreSQL dedicated server ...\n\neffective_cache_size = 5000000 \n\nFor me I give to the planner the information that the kernel is able to cache \n5000000 disk page in RAM\n\n>free\n total used free shared buffers cached\nMem: 7959120 7712164 246956 0 17372 7165704\n-/+ buffers/cache: 529088 7430032\nSwap: 2097136 9880 2087256\n\nWhat should I put ?\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n",
"msg_date": "Wed, 14 Jul 2004 16:08:27 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Hervᅵ Piedvache wrote:\n> In my case it's a PostgreSQL dedicated server ...\n> \n> effective_cache_size = 5000000 \n> \n> For me I give to the planner the information that the kernel is able to cache \n> 5000000 disk page in RAM\n\nThat is what? 38GB of RAM?\n> \n> \n>>free\n> \n> total used free shared buffers cached\n> Mem: 7959120 7712164 246956 0 17372 7165704\n> -/+ buffers/cache: 529088 7430032\n> Swap: 2097136 9880 2087256\n> \n> What should I put ?\n\n7165704 / 8 = 895713\n\nSo counting variations, I would say 875000. That is a 8GB box, right? So 875000 \nis about 7000MB. Which should be rather practical. Of course you can give it \neverything you can but that's upto you.\n\nCan you get explain analze for inserts? I think some foreign key check etc. are \ntaking long and hence it accumulates. But that is just a wild guess.\n\nOff the top of my head, you have allocated roughly 48K shard buffers which seems \nbit on higher side. Can you check with something like 10K-15K?\n\nHTH\n\n Shridhar\n",
"msg_date": "Wed, 14 Jul 2004 20:02:47 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
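Shridhar's figure is just the kernel's \"cached\" value from free, expressed in PostgreSQL's 8 KB pages: 7165704 KB / 8 = 895713, rounded down to leave some headroom. As a postgresql.conf sketch (the exact value is illustrative, not a recommendation):

# roughly 7 GB of kernel cache, counted in 8 KB pages
effective_cache_size = 875000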
{
"msg_contents": "Herve'\n\nI forgot to ask about your hardware. How much RAM, and what's your disk \nsetup? CPU?\n\n> sort_mem = 512000\n\nHuh? Sort_mem is in K. The above says that you've allocated 512MB sort \nmem. Is this process the *only* thing going on on the machine?\n\n> vacuum_mem = 409600\n\nAgain, 409.6MB vacuum mem? That's an odd number, and quite high. \n\n> max_fsm_pages = 50000000\n\n50million? That's quite high. Certianly enough to have an effect on your \nmemory usage. How did you calculate this number?\n\n> checkpoint_segments = 3\n\nYou should probably increase this if you have the disk space. For massive \ninsert operations, I've found it useful to have as much as 128 segments \n(although this means about 1.5GB disk space)\n\n> effective_cache_size = 5000000\n\nIf you actually have that much RAM, I'd love to play on your box. Please?\n\n> Off the top of my head, you have allocated roughly 48K shard buffers which\n> seems bit on higher side. Can you check with something like 10K-15K?\n\nShridhar, that depends on how much RAM he has. On 4GB dedicated machines, \nI've set Shared_Buffers as high as 750MB.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 14 Jul 2004 09:28:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
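Put concretely, Josh's per-setting remarks might translate into a postgresql.conf fragment like the one below; the numbers are illustrative starting points for an 8 GB box, not a tuned recommendation:

sort_mem = 32768            # in KB: 32 MB per sort, per backend
vacuum_mem = 131072         # in KB: 128 MB while VACUUM runs
checkpoint_segments = 32    # more 16 MB WAL segments help bulk inserts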
{
"msg_contents": "Josh,\n\nLe mercredi 14 Juillet 2004 18:28, Josh Berkus a écrit :\n>\n> I forgot to ask about your hardware. How much RAM, and what's your disk\n> setup? CPU?\n\n8 Gb of RAM\nBi - Intel Xeon 2.00GHz\nHard drive in SCSI RAID 5\n/dev/sdb6 101G 87G 8.7G 91% /usr/local/pgsql/data\n/dev/sda7 1.8G 129M 1.6G 8% /usr/local/pgsql/data/pg_xlog\n\nServer dedicated to PostgreSQL with only one database.\n\n> > sort_mem = 512000\n>\n> Huh? Sort_mem is in K. The above says that you've allocated 512MB sort\n> mem. Is this process the *only* thing going on on the machine?\n\nPostgreSQL dedicated server yes ... so it's too much ?\nHow you decide the good value ?\n\n> > vacuum_mem = 409600\n>\n> Again, 409.6MB vacuum mem? That's an odd number, and quite high.\n\nYep but I have 8 Gb of memory ... ;o) So why not ?\nJust explain me why it's not a good choice ... I have done this because of \nthis text from you found somewhere :\n\"As this setting only uses RAM when VACUUM is running, you may wish to \nincrease it on high-RAM machines to make VACUUM run faster (but never more \nthan 20% of available RAM!)\"\nSo that's less than 20% of my memory ...\n\n> > max_fsm_pages = 50000000\n>\n> 50million? That's quite high. Certianly enough to have an effect on\n> your memory usage. How did you calculate this number?\n\nNot done by me ... and the guy is out ... but in same time with 8 Gb of \nRAM ... that's not a crazy number ?\n\n> > checkpoint_segments = 3\n>\n> You should probably increase this if you have the disk space. For massive\n> insert operations, I've found it useful to have as much as 128 segments\n> (although this means about 1.5GB disk space)\n>\n> > effective_cache_size = 5000000\n>\n> If you actually have that much RAM, I'd love to play on your box. Please?\n\nHum ... yes as Shridhar told me the number is a crazy one and now down to \n875000 ...\n\n> > Off the top of my head, you have allocated roughly 48K shard buffers\n> > which seems bit on higher side. Can you check with something like\n> > 10K-15K?\n>\n> Shridhar, that depends on how much RAM he has. On 4GB dedicated machines,\n> I've set Shared_Buffers as high as 750MB.\n\nCould you explain me the interest to reduce this size ??\nI really miss understand this point ...\n\nregards,\n-- \nBill Footcow\n\n",
"msg_date": "Wed, 14 Jul 2004 23:19:16 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Josh,\n\nLe mercredi 14 Juillet 2004 18:28, Josh Berkus a écrit :\n>\n> > checkpoint_segments = 3\n>\n> You should probably increase this if you have the disk space. For massive\n> insert operations, I've found it useful to have as much as 128 segments\n> (although this means about 1.5GB disk space)\n\nOther point I have also read this : \n\"NOTE: Since 7.2, turning fsync off does NOT stop WAL. It does stop \ncheckpointing.\"\n\nSo ... still true for 7.4.3 ??? So I'm with fsync = off so the value of \ncheckpoint_segments have no interest ??\n\nThanks for your help...\n-- \nBill Footcow\n\n",
"msg_date": "Wed, 14 Jul 2004 23:32:05 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Hervᅵ Piedvache wrote:\n\n> Josh,\n> \n> Le mercredi 14 Juillet 2004 18:28, Josh Berkus a ᅵcrit :\n> \n>>>checkpoint_segments = 3\n>>\n>>You should probably increase this if you have the disk space. For massive\n>>insert operations, I've found it useful to have as much as 128 segments\n>>(although this means about 1.5GB disk space)\n> \n> \n> Other point I have also read this : \n> \"NOTE: Since 7.2, turning fsync off does NOT stop WAL. It does stop \n> checkpointing.\"\n> \n> So ... still true for 7.4.3 ??? So I'm with fsync = off so the value of \n> checkpoint_segments have no interest ??\n> \n> Thanks for your help...\n\nI suggest you check this first. Check the performance tuning guide..\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\nThat is a starters. As Josh suggested, increase checkpoint segments if you have \ndisk space. Correspondingly WAL disk space requirements go up as well.\n\nHTH\n\n Shridhar\n\n",
"msg_date": "Thu, 15 Jul 2004 11:51:56 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Shridhar,\n\n> I suggest you check this first. Check the performance tuning guide..\n> \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n> \n> That is a starters. As Josh suggested, increase checkpoint segments if you \nhave \n> disk space. Correspondingly WAL disk space requirements go up as well.\n\nWell, not if he has fsync=off. But having fsync off is a very bad idea. You \ndo realize, Herve', that if you lose power on that machine you'll most likely \nhave to restore from backup?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Thu, 15 Jul 2004 11:09:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Josh,\n\nLe jeudi 15 Juillet 2004 20:09, Josh Berkus a ᅵcrit :\n> > I suggest you check this first. Check the performance tuning guide..\n> >\n> > http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n> >\n> > That is a starters. As Josh suggested, increase checkpoint segments if\n> > you\n>\n> have\n>\n> > disk space. Correspondingly WAL disk space requirements go up as well.\n>\n> Well, not if he has fsync=off. But having fsync off is a very bad idea. \n> You do realize, Herve', that if you lose power on that machine you'll most\n> likely have to restore from backup?\n\nHum ... it's only for speed aspect ... I was using postgresql with this option \nsince 7.01 ... and for me fsync=on was so slow ...\nIs it really no time consuming for the system to bring it ON now with \nv7.4.3 ??\n\nTell me ...\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n",
"msg_date": "Fri, 16 Jul 2004 13:17:02 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Herve'\n\n> Hum ... it's only for speed aspect ... I was using postgresql with this\n> option since 7.01 ... and for me fsync=on was so slow ...\n> Is it really no time consuming for the system to bring it ON now with\n> v7.4.3 ??\n\nWell, I wouldn't do it until you've figured out the current performance \nproblem.\n\nThe issue with having fsync=off is that, if someone yanks the power cord on \nyour server, there is a significant chance that you will have to restore the \ndatabase from backup becuase it will be corrupted. But clearly you've been \nliving with that risk for some time.\n\nIt *is* true that there is significantly less performance difference between \n7.4 with fsync off and on than there was between 7.1 with fsync off and on. \nBut there is still a difference. In 7.0 and 7.1 (I think), when you turned \nfsync off it turned WAL off completely, resulting in a substantial difference \nin disk activity. Now, it just stops checkpointing WAL but WAL is still \nrecording -- meaning that disk activity decreases some but not a lot. The \ndifference is more noticable the more vulnerable to contention your disk \nsystem is.\n\nThe other reason not to think of fsync=off as a permanent performance tweak is \nthat we're likely to remove the option sometime in the next 2 versions, since \nan increasing number of features depend on WAL behavior, and the option is \nlargely a legacy of the 7.0 days, when WAL was sometimes buggy and needed to \nbe turned off to get the database to start.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 18 Jul 2004 10:23:21 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Josh Berkus wrote:\n\n> Herve'\n> \n> I forgot to ask about your hardware. How much RAM, and what's your disk \n> setup? CPU?\n> \n> \n>>sort_mem = 512000\n> \n> \n> Huh? Sort_mem is in K. The above says that you've allocated 512MB sort \n> mem. Is this process the *only* thing going on on the machine?\n\nAnd also is not system wide but let me say \"for backend\"...\n\n\n\nRegards\nGaetano Mendola\n",
"msg_date": "Mon, 26 Jul 2004 16:12:31 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "Hervᅵ Piedvache wrote:\n\n> Josh,\n> \n> Le mardi 13 Juillet 2004 19:10, Josh Berkus a ᅵcrit :\n> \n>>>What can I do to get better results ?? (configuration option, and/or\n>>>hardware update ?)\n>>>What can I give you to get more important informations to help me ?\n>>\n>>1) What PostgreSQL version are you using?\n> \n> \n> v7.4.3\n> \n> \n>>2) What's your VACUUM, ANALYZE, VACUUM FULL, REINDEX schedule?\n> \n> \n> VACUUM FULL VERBOSE ANALYZE;\n> \n> Every day after the calculation I was talking about ...\n> \n> \n>>3) Can you list the non-default settings in your PostgreSQL.conf?\n>>Particularly, shared_buffers, sort_mem, checkpoint_segments,\n>>estimated_cache, and max_fsm_pages?\n> \n\n> sort_mem = 512000\n\nThis is too much, you are instructing Postgres to use 512MB\nfor each backend ( some time each backend can use this quantity\nmore then one )\n\n> vacuum_mem = 409600\n> max_fsm_pages = 50000000\n > max_fsm_relations = 2000\n\n50 milions ? HUG.\nwhat tell you postgres in the log after performing\na vacuum full ?\n\n> max_files_per_process = 2000\n> wal_buffers = 1000\n> checkpoint_segments = 3\n\nFor massive insert you have to increase this number,\npump it up to 16\n\n\n> effective_cache_size = 5000000\n\n5GB for 8 GB system is too much\n\n> random_page_cost = 3\t\n\non your HW you can decrease it to 2\nand also decrease the other cpu costs\n\nRegards\nGaetano Mendola\n\n\nBTW, I live in Paris too, if you need a hand...\n\n\n\n\n\n",
"msg_date": "Mon, 26 Jul 2004 16:20:15 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
},
{
"msg_contents": "On Mon, 2004-07-26 at 08:20, Gaetano Mendola wrote:\n> Hervé Piedvache wrote:\n\nSNIP\n\n> > sort_mem = 512000\n> \n> This is too much, you are instructing Postgres to use 512MB\n> for each backend ( some time each backend can use this quantity\n> more then one )\n\nagreed. If any one process needs this much sort mem, you can set it in\nthat sessions with set sort_mem anyway, so to let every sort consume up\nto 512 meg is asking for trouble.\n\n> > effective_cache_size = 5000000\n> \n> 5GB for 8 GB system is too much\n\nNo, it's not. Assuming that postgresql with all it's shared buffers is\nusing <2 gig, it's quite likely that the kernel is caching at least 5\ngigs of disk data. Effective cache size doesn't set any cache size, it\ntells the planner about how much the kernel is caching.\n\n> > random_page_cost = 3\t\n> \n> on your HW you can decrease it to 2\n> and also decrease the other cpu costs\n\nOn fast machines it often winds up needing to be set somewhere around\n1.2 to 2.0\n\n",
"msg_date": "Mon, 26 Jul 2004 10:25:38 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert are going slower ..."
}
] |
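Scott's suggestion in practice: keep the global sort_mem modest and raise it only in the session that runs the big aggregation. A sketch using the table names from Herve's description (the value is just an illustration):

SET sort_mem = 262144;   -- KB, i.e. 256 MB for this session only
INSERT INTO my_table SELECT * FROM temp_table;
RESET sort_mem;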
[
{
"msg_contents": "\n\n\n\nShould I be concerned that my vacuum process has taken upwards of 100 +\nminutes to complete? I dropped all indexes before starting and also\nincreased the vacuum_mem before starting.\nLooking at the output below, it appears that a vacuum full hasn't been done\non this table for quite sometime. Would I be better off exporting the data\nvacuuming the table and reimporting the data? I cannot drop the table do\nto views attached to the table\n\n\nmdc_oz=# set vacuum_mem = 10240;\nSET\nmdc_oz=# vacuum full verbose cdm.cdm_ddw_Tran_item;\nINFO: vacuuming \"cdm.cdm_ddw_tran_item\"\nINFO: \"cdm_ddw_tran_item\": found 15322404 removable, 10950460 nonremovable\nrow versions in 934724 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 233 to 308 bytes long.\nThere were 1081 unused item pointers.\nTotal free space (including removable row versions) is 4474020460 bytes.\n544679 pages are or will become empty, including 0 at the end of the table.\n692980 pages containing 4433398408 free bytes are potential move\ndestinations.\nCPU 29.55s/4.13u sec elapsed 107.82 sec.\n\n\nTIA\nPatrick Hatcher\n\n",
"msg_date": "Wed, 14 Jul 2004 14:29:48 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuum full 100 mins plus?"
},
{
"msg_contents": "\n\n\n\n\n\nAnswered my own question. I gave up the vacuum full after 150 mins. I was\nable to export to a file, vacuum full the empty table, and reimport in less\nthan 10 mins. I suspect the empty item pointers and the sheer number of\nremovable rows was causing an issue.\n\n\n\n \n Patrick Hatcher \n <[email protected] \n om> To \n Sent by: [email protected] \n pgsql-performance cc \n -owner@postgresql \n .org Subject \n [PERFORM] vacuum full 100 mins \n plus? \n 07/14/2004 02:29 \n PM \n \n \n \n \n\n\n\n\n\n\n\n\nShould I be concerned that my vacuum process has taken upwards of 100 +\nminutes to complete? I dropped all indexes before starting and also\nincreased the vacuum_mem before starting.\nLooking at the output below, it appears that a vacuum full hasn't been done\non this table for quite sometime. Would I be better off exporting the data\nvacuuming the table and reimporting the data? I cannot drop the table do\nto views attached to the table\n\n\nmdc_oz=# set vacuum_mem = 10240;\nSET\nmdc_oz=# vacuum full verbose cdm.cdm_ddw_Tran_item;\nINFO: vacuuming \"cdm.cdm_ddw_tran_item\"\nINFO: \"cdm_ddw_tran_item\": found 15322404 removable, 10950460 nonremovable\nrow versions in 934724 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 233 to 308 bytes long.\nThere were 1081 unused item pointers.\nTotal free space (including removable row versions) is 4474020460 bytes.\n544679 pages are or will become empty, including 0 at the end of the table.\n692980 pages containing 4433398408 free bytes are potential move\ndestinations.\nCPU 29.55s/4.13u sec elapsed 107.82 sec.\n\n\nTIA\nPatrick Hatcher\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n\n",
"msg_date": "Wed, 14 Jul 2004 15:36:46 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: vacuum full 100 mins plus?"
},
{
"msg_contents": "Patrick,\n\n> Answered my own question. I gave up the vacuum full after 150 mins. I was\n> able to export to a file, vacuum full the empty table, and reimport in less\n> than 10 mins. I suspect the empty item pointers and the sheer number of\n> removable rows was causing an issue.\n\nYeah. If you've a table that's not been vacuumed in a month, it's often \nfaster to clean it out and import it.\n\nI've seen vacuums take up to 3 hours in really bad cases.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 14 Jul 2004 17:13:13 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full 100 mins plus?"
},
{
"msg_contents": "A long time ago, in a galaxy far, far away, [email protected] (Patrick Hatcher) wrote:\n> Answered my own question. I gave up the vacuum full after 150 mins. I was\n> able to export to a file, vacuum full the empty table, and reimport in less\n> than 10 mins. I suspect the empty item pointers and the sheer number of\n> removable rows was causing an issue.\n\nIn that case, you'd be a little further better off if the steps were:\n - drop indices;\n - copy table to file (perhaps via pg_dump -t my_table);\n - truncate the table, or drop-and-recreate, both of which make\n it unnecessary to do _any_ vacuum of the result;\n - recreate indices, probably with SORT_MEM set high, to minimize\n paging to disk\n - analyze the table (no need to vacuum if you haven't created any\n dead tuples)\n - cut SORT_MEM back down to \"normal\" sizes\n-- \noutput = reverse(\"gro.gultn\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/spreadsheets.html\nSigns of a Klingon Programmer #6: \"Debugging? Klingons do not\ndebug. Our software does not coddle the weak.\"\n",
"msg_date": "Wed, 14 Jul 2004 22:21:29 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full 100 mins plus?"
},
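A SQL sketch of Christopher's recipe, using the table name from Patrick's post; the COPY file path and the commented-out index statements are placeholders, and COPY to a server-side file requires superuser rights:

-- DROP INDEX ...;                      -- drop the table's indexes first (names not shown in the thread)
COPY cdm.cdm_ddw_tran_item TO '/tmp/tran_item.copy';
TRUNCATE TABLE cdm.cdm_ddw_tran_item;   -- leaves no dead tuples, so no vacuum is needed
COPY cdm.cdm_ddw_tran_item FROM '/tmp/tran_item.copy';
SET sort_mem = 262144;                  -- large sort memory speeds up the index builds
-- CREATE INDEX ...;                    -- recreate the dropped indexes here
RESET sort_mem;
ANALYZE cdm.cdm_ddw_tran_item;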
{
"msg_contents": "Christopher Browne <[email protected]> writes:\n> A long time ago, in a galaxy far, far away, [email protected] (Patrick Hatcher) wrote:\n>> Answered my own question. I gave up the vacuum full after 150 mins. I was\n>> able to export to a file, vacuum full the empty table, and reimport in less\n>> than 10 mins. I suspect the empty item pointers and the sheer number of\n>> removable rows was causing an issue.\n\n> In that case, you'd be a little further better off if the steps were:\n> - drop indices;\n> - copy table to file (perhaps via pg_dump -t my_table);\n> - truncate the table, or drop-and-recreate, both of which make\n> it unnecessary to do _any_ vacuum of the result;\n> - recreate indices, probably with SORT_MEM set high, to minimize\n> paging to disk\n> - analyze the table (no need to vacuum if you haven't created any\n> dead tuples)\n> - cut SORT_MEM back down to \"normal\" sizes\n\nRather than doing all this manually, you can just CLUSTER on any handy\nindex. In 7.5, another possibility is to issue one of the forms of\nALTER TABLE that force a table rewrite.\n\nThe range of usefulness of VACUUM FULL is really looking narrower and\nnarrower to me. I can foresee a day when we'll abandon it completely.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 15 Jul 2004 00:36:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full 100 mins plus? "
},
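For comparison, the CLUSTER alternative Tom mentions is a single statement (index and table names below are placeholders, using the 7.3/7.4 syntax). It rewrites the table in index order, dropping all dead tuples and rebuilding the indexes, so it needs enough free disk space for a temporary second copy of the table:

    CLUSTER my_table_pkey ON my_table;   -- rewrite my_table ordered by that index
    ANALYZE my_table;                    -- CLUSTER does not refresh statistics, so analyze afterwards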
{
"msg_contents": "Tom Lane wrote:\n\n> Christopher Browne <[email protected]> writes:\n>> A long time ago, in a galaxy far, farpliers [email protected] (Patrick\n>> Hatcher) wrote:\n>>> Answered my own question. I gave up the vacuum full after 150 mins. I\n>>> was able to export to a file, vacuum full the empty table, and reimport\n>>> in less\n>>> than 10 mins. I suspect the empty item pointers and the sheer number of\n>>> removable rows was causing an issue.\n> \n>> In that case, you'd be a little further better off if the steps were:\n>> - drop indices;\n>> - copy table to file (perhaps via pg_dump -t my_table);\n>> - truncate the table, or drop-and-recreate, both of which make\n>> it unnecessary to do _any_ vacuum of the result;\n>> - recreate indices, probably with SORT_MEM set high, to minimize\n>> paging to disk\n>> - analyze the table (no need to vacuum if you haven't created any\n>> dead tuples)\n>> - cut SORT_MEM back down to \"normal\" sizes\n> \n> Rather than doing all this manually, you can just CLUSTER on any handy\n> index. In 7.5, another possibility is to issue one of the forms of\n> ALTER TABLE that force a table rewrite.\n> \n> The range of usefulness of VACUUM FULL is really looking narrower and\n> narrower to me. I can foresee a day when we'll abandon it completely.\n\nI would love to see this 10lb sledge hammer go away when we have enough tiny\nscrewdrivers and needlenose pliers to make it obsolete!\n\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n",
"msg_date": "Thu, 15 Jul 2004 11:14:37 -0400",
"msg_from": "Mike Rylander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuum full 100 mins plus?"
}
] |
[
{
"msg_contents": "Hi,\n\nI operate a database server with postgres 7.3.2, 2 Xeon CPU's and 1 GB \nRam. The database-files reside on a local SCSI-HD, the system runs on \nSuSE Enterprise Server 8.0.\n\nThe size of the dumped database is about 300 MB, the database is \nvacuumed (-a -z) every night an reindexed every week.\n\nThe database server is used by a lamp applicationsserver over a 100 MBit \nNetwork.\n\nMy Problem is, that the DB-Server is unbelievable slow!\n\nI monitor the database with top. When a (small) query is executed, I see \nthe postmaster task for 2-3 seconds with 80-100 % CPU load. On other \nsystems with the same database but less CPU and RAM power, I see the \ntask for one moment with 2-8 % CPU load. The other systems behave normal \nthe speed is ok.\n\nAlso a restart of the postmaster brings no improvement.\n\nThe database grows very slowly. The main load comes from SELECT's and \nnot from INSERT's or UPDATE's, but the performance gets slower day by day...\n\nI have no idea where to search for the speed break!\n\nStefan\n",
"msg_date": "Thu, 15 Jul 2004 12:24:10 +0200",
"msg_from": "Stefan <[email protected]>",
"msg_from_op": true,
"msg_subject": "extrem bad performance"
},
{
"msg_contents": "> The database grows very slowly. The main load comes from SELECT's and \n> not from INSERT's or UPDATE's, but the performance gets slower day by day...\n> \n> I have no idea where to search for the speed break!\n\nLets start with an example. Please send us an EXPLAIN ANALYZE of a\ncouple of the poorly performing queries.\n\n\n",
"msg_date": "Fri, 16 Jul 2004 22:31:36 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extrem bad performance"
},
{
"msg_contents": "Rod Taylor wrote:\n> Lets start with an example. Please send us an EXPLAIN ANALYZE of a\n> couple of the poorly performing queries.\nthanks for your answer. the problem was solved by using FULL(!) VACUUM.\n\nregards,\n\nStefan\n",
"msg_date": "Wed, 21 Jul 2004 18:25:55 +0200",
"msg_from": "Stefan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: extrem bad performance"
}
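For anyone hitting the same symptom: plain VACUUM only marks dead rows as reusable, so once a table has already bloated badly a one-off VACUUM FULL (or the dump/reload and CLUSTER approaches discussed in the previous thread) is needed, after which frequent ordinary vacuums keep it from bloating again. A minimal sketch, with the table name as a placeholder:

    VACUUM FULL VERBOSE bloated_table;   -- one-off: compacts the table and returns the space to the OS
    ANALYZE bloated_table;               -- refresh planner statistics
    VACUUM ANALYZE bloated_table;        -- from then on, run this regularly (or use pg_autovacuum)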
] |
[
{
"msg_contents": "\nWith pg_autovaccum it's now at 95M swap; averaging 5MB / day increase with same load. Cache slightly increases or decreases according to top.\n\n --- On Tue 07/13, Matthew T. O'Connor < [email protected] > wrote:\nFrom: Matthew T. O'Connor [mailto: [email protected]]\nTo: [email protected]\n Cc: [email protected]\nDate: Tue, 13 Jul 2004 16:26:09 -0400\nSubject: Re: [PERFORM] Swapping in 7.4.3\n\nJim Ewert wrote:<br>> When I went to 7.4.3 (Slackware 9.1) w/ JDBC, the improvements are that it doesn't initially take much memory (have 512M) and didn't swap. I ran a full vaccum and a cluster before installation, however speed degaded to 1 *second* / update of one row in 150 rows of data, within a day! pg_autovacuum now gives excellent performance however it is taking 66M of swap; only 270k cached.<br>> <br><br>Are you saying that your system stays fast now that you are using <br>pg_autovacuum, but pg_autovacuum is using 66M of memory? Please <br>clarify, I'm not sure what question you want an answered.<br><br>Matthew<br><br>\n\n_______________________________________________\nJoin Excite! - http://www.excite.com\nThe most personalized portal on the Web!\n",
"msg_date": "Thu, 15 Jul 2004 09:49:33 -0400 (EDT)",
"msg_from": "\"Jim Ewert\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Swapping in 7.4.3"
},
{
"msg_contents": "This is normal. My personal workstation has been up for 16 days, and it\nshows 65 megs used for swap. The linux kernel looks for things that\nhaven't been accessed in quite a while and tosses them into swap to free\nup the memory for other uses.\n\nThis isn't PostgreSQL's fault, or anything elses. It's how a typical\nUnix kernel works. I.e. you're seeing a problem that simply isn't\nthere.\n\nOn Thu, 2004-07-15 at 07:49, Jim Ewert wrote:\n> With pg_autovaccum it's now at 95M swap; averaging 5MB / day increase with same load. Cache slightly increases or decreases according to top.\n> \n> --- On Tue 07/13, Matthew T. O'Connor < [email protected] > wrote:\n> From: Matthew T. O'Connor [mailto: [email protected]]\n> To: [email protected]\n> Cc: [email protected]\n> Date: Tue, 13 Jul 2004 16:26:09 -0400\n> Subject: Re: [PERFORM] Swapping in 7.4.3\n> \n> Jim Ewert wrote:<br>> When I went to 7.4.3 (Slackware 9.1) w/ JDBC, the improvements are that it doesn't initially take much memory (have 512M) and didn't swap. I ran a full vaccum and a cluster before installation, however speed degaded to 1 *second* / update of one row in 150 rows of data, within a day! pg_autovacuum now gives excellent performance however it is taking 66M of swap; only 270k cached.<br>> <br><br>Are you saying that your system stays fast now that you are using <br>pg_autovacuum, but pg_autovacuum is using 66M of memory? Please <br>clarify, I'm not sure what question you want an answered.<br><br>Matthew<br><br>\n> \n> _______________________________________________\n> Join Excite! - http://www.excite.com\n> The most personalized portal on the Web!\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n\n",
"msg_date": "Thu, 15 Jul 2004 10:20:34 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swapping in 7.4.3"
},
{
"msg_contents": "> This is normal. My personal workstation has been up for 16 \n> days, and it shows 65 megs used for swap. The linux kernel \n> looks for things that haven't been accessed in quite a while \n> and tosses them into swap to free up the memory for other uses.\n> \n> This isn't PostgreSQL's fault, or anything elses. It's how a \n> typical Unix kernel works. I.e. you're seeing a problem that \n> simply isn't there.\n\nActually it (and other OSes) does slightly better than that. It _copies_\nthe least recently used pages into swap, but leaves them in memory. Then\nwhen there really is a need to swap stuff out there is no need to actually\nwrite to swap because it's already been done, and conversely if those pages\nare wanted then they don't have to be read from disk because they were never\nremoved from memory.\n\n\n",
"msg_date": "Fri, 16 Jul 2004 08:35:32 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swapping in 7.4.3"
}
] |
[
{
"msg_contents": "I've been using the adaptec ZCR raid cards in our servers for a while \nnow, mostly small systems with 3 or 6 disks, and we've been very happy \nwith them. However, we're building a new DB machine with 14 U320 15K \nSCA drives, and we've run into a performance bottlenkeck with the ZCR \ncard where it just won't scale well. Without going into too many \ndetails, we've tested RAID5, RAID10 and RAID50 on pretty much every \narray size from 4-14 disks (raid 50 tests used more drives), using JFS, \nreiserfs and EXT3. With every different configuration, performance \ndidn't improve after array size became greater than 6 disks.. We used \nvarious benchmarks, including pgbench with scale factors of 10, 100, \n1000, 5000 and clients of 10, 15, 30 and 45. We've done many other \ntests and monitoring tools, and we've come to the conclusion that the \nZCR is the problem.\n\nWe're looking into getting an Adaptec 2200S or the Megaraid 320 2x \nwhich have better processors, and hopefully better performance. We \nfeel that the use of the AIC7930 as the CPU on the ZCR just doesn't \ncut it and a faster raid controller would work better. Does anyone out \nthere have any experience with these cards with postgresql and linux? \nIf so, would you be willing to share your experiences and possibly give \na recommendation?\n\n--brian\n\n",
"msg_date": "Thu, 15 Jul 2004 12:07:12 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": true,
"msg_subject": "hardware raid suggestions"
},
{
"msg_contents": "Brian,\n\n> We're looking into getting an Adaptec 2200S or the Megaraid 320 2x\n> which have better processors, and hopefully better performance. We\n> feel that the use of the AIC7930 as the CPU on the ZCR just doesn't\n> cut it and a faster raid controller would work better. Does anyone out\n> there have any experience with these cards with postgresql and linux?\n> If so, would you be willing to share your experiences and possibly give\n> a recommendation?\n\nYes, my experience with adaptecs has been universally bad. I just really \ndon't think that \"the SCSI-2 card company\" is up to making high-end raid \ncards.\n\nMegaRaid is generally positively reviewed in a lot of places. Be careful to \norder the battery back-up at the same time as the Raid card; the batteries \nhave the annoying habit of going off the market for months at a time.\n\nYou should also consider looking into driver issues. In general, the RAID \ncard drivers distributed for Linux simply aren't as good as those the same \ncompanies write for Windows or Unix. That may be your issue with the ZCR, as \nwell as CPU.\n\nOh, and don't bother with the upgrade if you're not getting battery backup. \nYou need it.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 15 Jul 2004 19:55:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware raid suggestions"
},
{
"msg_contents": "Not sure what your hw platform is, but I always used to get fantastic \nperformance from Compaq Smart Array battery backed cards. Note that I \nhaven't bought any recently so HP may have \"hp invent\"-ed them...\n\nBut whatever the brand - if you get a swag of battery backed cache you \nwon't know yourself. It's fun to install an OS on them as well - watch \nthe drive format and verify take 10 seconds ;)\n\nAnother option to look at is outboard raid boxes that present a single \ndrive \"interface\" to the server - I know people who swear by them.\n-- \nMark Aufflick\n e [email protected]\n w www.pumptheory.com (work)\n w mark.aufflick.com (personal)\n p +61 438 700 647\nOn 16/07/2004, at 4:07 AM, Brian Hirt wrote:\n\n> I've been using the adaptec ZCR raid cards in our servers for a while \n> now, mostly small systems with 3 or 6 disks, and we've been very happy \n> with them. However, we're building a new DB machine with 14 U320 15K \n> SCA drives, and we've run into a performance bottlenkeck with the ZCR \n> card where it just won't scale well. Without going into too many \n> details, we've tested RAID5, RAID10 and RAID50 on pretty much every \n> array size from 4-14 disks (raid 50 tests used more drives), using \n> JFS, reiserfs and EXT3. With every different configuration, \n> performance didn't improve after array size became greater than 6 \n> disks.. We used various benchmarks, including pgbench with scale \n> factors of 10, 100, 1000, 5000 and clients of 10, 15, 30 and 45. \n> We've done many other tests and monitoring tools, and we've come to \n> the conclusion that the ZCR is the problem.\n>\n> We're looking into getting an Adaptec 2200S or the Megaraid 320 2x \n> which have better processors, and hopefully better performance. We \n> feel that the use of the AIC7930 as the CPU on the ZCR just doesn't \n> cut it and a faster raid controller would work better. Does anyone out \n> there have any experience with these cards with postgresql and linux? \n> If so, would you be willing to share your experiences and possibly \n> give a recommendation?\n>\n> --brian\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n> ======================================================================= \n> =\n> Pain free spam & virus protection by: www.mailsecurity.net.au\n> Forward undetected SPAM to: [email protected]\n> ======================================================================= \n> =\n>\n\n\n========================================================================\n Pain free spam & virus protection by: www.mailsecurity.net.au\n Forward undetected SPAM to: [email protected]\n========================================================================\n\n",
"msg_date": "Fri, 16 Jul 2004 13:25:17 +1000",
"msg_from": "Mark Aufflick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware raid suggestions"
},
{
"msg_contents": "> We're looking into getting an Adaptec 2200S or the Megaraid 320 2x\n> which have better processors, and hopefully better performance. We\n> feel that the use of the AIC7930 as the CPU on the ZCR just doesn't\n> cut it and a faster raid controller would work better. Does anyone out\n> there have any experience with these cards with postgresql and linux?\n> If so, would you be willing to share your experiences and possibly give\n> a recommendation?\n\nI have worked with at least four major name brands of scsi and ide raid\ncontrollers and so far the one I have found to be generally the most\nfeatured and fastest is the ICP Vortex controllers\n(http://www.icp-vortex.com/). It is also more expensive than the others\nbut has been worth the cost IMHO. It has a command line utility to\nmeasure disk performance and I believe the source code for it is\navailable. I have measured over 200 MB/s reads off these controllers on\n3u disk array units. I'm sure I could have gotten more with additional\ntuning.\n\nFred\n",
"msg_date": "Fri, 16 Jul 2004 20:20:40 -0700 (PDT)",
"msg_from": "\"Fred Moyer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware raid suggestions"
},
{
"msg_contents": "Brian Hirt wrote:\n\n> I've been using the adaptec ZCR raid cards in our servers for a while \n> now, mostly small systems with 3 or 6 disks, and we've been very happy \n> with them. However, we're building a new DB machine with 14 U320 15K \n> SCA drives, and we've run into a performance bottlenkeck with the ZCR \n> card where it just won't scale well. Without going into too many \n> details, we've tested RAID5, RAID10 and RAID50 on pretty much every \n> array size from 4-14 disks (raid 50 tests used more drives), using JFS, \n> reiserfs and EXT3. With every different configuration, performance \n> didn't improve after array size became greater than 6 disks.. We used \n> various benchmarks, including pgbench with scale factors of 10, 100, \n> 1000, 5000 and clients of 10, 15, 30 and 45. We've done many other \n> tests and monitoring tools, and we've come to the conclusion that the \n> ZCR is the problem.\n> \n> We're looking into getting an Adaptec 2200S or the Megaraid 320 2x which \n> have better processors, and hopefully better performance. We feel that \n> the use of the AIC7930 as the CPU on the ZCR just doesn't cut it and a \n> faster raid controller would work better. Does anyone out there have any \n> experience with these cards with postgresql and linux? If so, would you \n> be willing to share your experiences and possibly give a recommendation?\n> \n\nDid you consider the option of use an external storage array ?\nWe are using the dell emc CX600\n\nhttp://www.dell.com/downloads/emea/products/pvaul/en/Dell_EMC_cx600_specs.pdf\n\nand I'm forgotting to have a disk behind...\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n",
"msg_date": "Mon, 26 Jul 2004 16:01:15 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hardware raid suggestions"
}
] |
[
{
"msg_contents": "I lost the email that had the fix for this and now I need it again...\ncan someone or tom let me know what the fix was, I can't find it in any\nof my emails or archived on the internet\n\n \n\nThis is what I got...\n\n \n\nTwo servers, one debian, one fedora\n\n \n\nDebain dual 3ghz, 1 gig ram, ide, PostgreSQL 7.2.1 on i686-pc-linux-gnu,\ncompiled by GCC 2.95.4\n\n \n\n \n\nFedora: Dual 3ghz, 1 gig ram, scsi, PostgreSQL 7.3.4-RH on\ni386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) 3.3.2\n20031022 (Red Hat Linux 3.3.2-1)\n\n \n\n \n\nBoth have same databases, Both have had vacume full ran on them. Both\ndoing the same query\n\n \n\nSelect * from vpopmail; The vpopmail is a view, this is the view\n\n \n\n \n\n View \"vpopmail\"\n\n Column | Type | Modifiers \n\n-----------+------------------------+-----------\n\n pw_name | character varying(32) | \n\n pw_domain | character varying(64) | \n\n pw_passwd | character varying | \n\n pw_uid | integer | \n\n pw_gid | integer | \n\n pw_gecos | character varying | \n\n pw_dir | character varying(160) | \n\n pw_shell | character varying(20) | \n\nView definition: SELECT ea.email_name AS pw_name, ea.domain AS\npw_domain, get_pwd(u.username, '127.0.0.1'::\"varchar\", '101'::\"varchar\",\n'MD5'::\"varchar\") AS pw_passwd, 0 AS pw_uid, 0 AS pw_gid, ''::\"varchar\"\nAS pw_gecos, ei.directory AS pw_dir, ei.quota AS pw_shell FROM\nemail_addresses ea, email_info ei, users u, user_resources ur WHERE\n(((((ea.user_resource_id = ei.user_resource_id) AND (get_pwd(u.username,\n'127.0.0.1'::\"varchar\", '101'::\"varchar\", 'MD5'::\"varchar\") IS NOT\nNULL)) AND (ur.id = ei.user_resource_id)) AND (u.id = ur.user_id)) AND\n(NOT (EXISTS (SELECT forwarding.email_id FROM forwarding WHERE\n(forwarding.email_id = ea.id)))));\n\n \n\n \n\n \n\nBoth are set to the same buffers and everything... this is the execution\ntime:\n\n \n\nDebian: Total runtime: 35594.81 msec\n\n \n\nFedora: Total runtime: 2279869.08 msec\n\n \n\nHuge difference as you can see... 
here are the pastes of the stuff\n\n \n\nDebain:\n\n \n\nuser_acl=# explain analyze SELECT count(*) from vpopmail;\n\nNOTICE: QUERY PLAN:\n\n \n\nAggregate (cost=438231.94..438231.94 rows=1 width=20) (actual time=35594.67..35594.67 rows=1 loops=1)\n\n -> Hash Join (cost=434592.51..438142.51 rows=35774 width=20) (actual time=34319.24..35537.11 rows=70613 loops=1)\n\n -> Seq Scan on email_info ei (cost=0.00..1721.40 rows=71640 width=4) (actual time=0.04..95.13 rows=71689 loops=1)\n\n -> Hash (cost=434328.07..434328.07 rows=35776 width=16) (actual time=34319.00..34319.00 rows=0 loops=1)\n\n -> Hash Join (cost=430582.53..434328.07 rows=35776 width=16) (actual time=2372.45..34207.21 rows=70613 loops=1)\n\n -> Seq Scan on users u (cost=0.00..1938.51 rows=71283 width=4) (actual time=0.81..30119.58 rows=70809 loops=1)\n\n -> Hash (cost=430333.64..430333.64 rows=35956 width=12) (actual time=2371.51..2371.51 rows=0 loops=1)\n\n -> Hash Join (cost=2425.62..430333.64 rows=35956 width=12) (actual time=176.73..2271.14 rows=71470 loops=1)\n\n -> Seq Scan on email_addresses ea (cost=0.00..426393.25 rows=35956 width=4) (actual time=0.06..627.49 rows=71473 loops=1)\n\n SubPlan\n\n -> Index Scan using forwarding_idx on forwarding (cost=0.00..5.88 rows=1 width=4) (actual time=0.00..0.00 rows=0 loops=71960)\n\n -> Hash (cost=1148.37..1148.37 rows=71637 width=8) (actual time=176.38..176.38 rows=0 loops=1)\n\n -> Seq Scan on user_resources ur (cost=0.00..1148.37 rows=71637 width=8) (actual time=0.03..82.21 rows=71686 loops=1)\n\nTotal runtime: 35594.81 msec\n\n \n\nEXPLAIN\n\n \n\nAnd for fedora it's\n\n \n\nAggregate (cost=416775.52..416775.52 rows=1 width=20) (actual time=2279868.57..2279868.58 rows=1 loops=1)\n -> Hash Join (cost=413853.79..416686.09 rows=35772 width=20) (actual time=2279271.26..2279803.91 rows=70841 loops=1)\n Hash Cond: (\"outer\".user_resource_id = \"inner\".id)\n -> Seq Scan on email_info ei (cost=0.00..1666.07 rows=71907 width=4) (actual time=8.12..171.10 rows=71907 loops=1)\n -> Hash (cost=413764.36..413764.36 rows=35772 width=16) (actual time=2279263.03..2279263.03 rows=0 loops=1)\n -> Hash Join (cost=410712.87..413764.36 rows=35772 width=16) (actual time=993.90..2279008.72 rows=70841 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".user_id)\n -> Seq Scan on users u (cost=0.00..1888.85 rows=71548 width=4) (actual time=18.38..2277152.51 rows=71028 loops=1)\n Filter: (get_pwd(username, '127.0.0.1'::character varying, '101'::character varying, 'MD5'::character varying) IS NOT NULL)\n -> Hash (cost=410622.99..410622.99 rows=35952 width=12) (actual time=975.40..975.40 rows=0 loops=1)\n -> Hash Join (cost=408346.51..410622.99 rows=35952 width=12) (actual time=507.52..905.91 rows=71697 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".user_resource_id)\n -> Seq Scan on user_resources ur (cost=0.00..1108.04 rows=71904 width=8) (actual time=0.05..95.65 rows=71904 loops=1)\n -> Hash (cost=408256.29..408256.29 rows=36091 width=4) (actual time=507.33..507.33 rows=0 loops=1)\n -> Seq Scan on email_addresses ea (cost=0.00..408256.29 rows=36091 width=4) (actual time=0.15..432.83 rows=71700 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using forwarding_idx on forwarding (cost=0.00..5.63 rows=1 width=4) (actual time=0.00..0.00 rows=0 loops=72182)\n Index Cond: (email_id = $0)\n Total runtime: 2279869.08 msec\n\n(20 rows)",
"msg_date": "Thu, 15 Jul 2004 12:35:23 -0700",
"msg_from": "\"Andrew Matthews\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wierd issues"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a query, which runs fast for one id (query 1)\nand slow for other id (query 2)\nthough both plans and cost are same except\nthese two qeries return different number of rows.\n\nexplain analyze\nSELECT *\nFROM user U LEFT JOIN user_timestamps T USING\n(user_id), user_alias A\nWHERE U.user_id = A.user_id AND A.domain_id=7551070;\n\n\\g\n \n QUERY PLAN \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=234.22..61015.98 rows=12 width=238)\n(actual time=7.73..7.73 rows=0 loops=1)\n Merge Cond: (\"outer\".user_id = \"inner\".user_id)\n -> Merge Join (cost=0.00..58585.67 rows=909864\nwidth=180) (actual time=0.07..0.07 rows=1 loops=1)\n Merge Cond: (\"outer\".user_id =\n\"inner\".user_id)\n -> Index Scan using user_pkey on user u \n(cost=0.00..29714.99 rows=909864 width=156) (actual\ntime=0.04..0.04 rows=1 loops=1)\n -> Index Scan using user_timestamps_uid_idx\non user_timestamps t (cost=0.00..16006.05 rows=706896\nwidth=24) (actual time=0.02..0.02 rows=1 loops=1)\n -> Sort (cost=234.22..234.25 rows=12 width=58)\n(actual time=7.65..7.65 rows=0 loops=1)\n Sort Key: a.user_id\n -> Seq Scan on user_alias a \n(cost=0.00..234.00 rows=12 width=58) (actual\ntime=7.61..7.61 rows=0 loops=1)\n Filter: (domain_id = 7551070)\n Total runtime: 7.96 msec\n(11 rows)\n\nexplain analyze\nSELECT *\nFROM user U LEFT JOIN user_timestamps T USING\n(user_id), user_alias A\nWHERE U.user_id = A.user_id AND\nA.domain_id=2005921193;\n\\g\n \n QUERY PLAN \n \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=247.92..61035.28 rows=332\nwidth=238) (actual time=94511.70..95127.94 rows=493\nloops=1)\n Merge Cond: (\"outer\".user_id = \"inner\".user_id)\n -> Merge Join (cost=0.00..58585.67 rows=909864\nwidth=180) (actual time=6.43..93591.06 rows=897655\nloops=1)\n Merge Cond: (\"outer\".user_id =\n\"inner\".user_id)\n -> Index Scan using user_pkey on user u \n(cost=0.00..29714.99 rows=909864 width=156) (actual\ntime=6.29..55634.85 rows=897655 loops=1)\n -> Index Scan using user_timestamps_uid_idx\non user_timestamps t (cost=0.00..16006.05 rows=706896\nwidth=24) (actual time=0.10..20331.13 rows=700466\nloops=1)\n -> Sort (cost=247.92..248.75 rows=332 width=58)\n(actual time=10.76..11.17 rows=493 loops=1)\n Sort Key: a.user_id\n -> Seq Scan on user_alias a \n(cost=0.00..234.00 rows=332 width=58) (actual\ntime=7.43..9.86 rows=493 loops=1)\n Filter: (domain_id = 2005921193)\n Total runtime: 95128.74 msec\n(11 rows)\n\nI also know if I change the order of 2nd query, it\nwill run much faster:\n\nexplain analyze\nSELECT *\nFROM (user_alias A JOIN user U USING (user_id) ) LEFT\nJOIN user_timestamps T USING (user_id)\nWHERE A.domain_id=2005921193;\n\\g\n \n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..2302.31 rows=332 width=238)\n(actual time=15.32..256.54 rows=493 loops=1)\n -> Nested Loop (cost=0.00..1263.43 rows=332\nwidth=214) (actual time=15.17..130.58 rows=493\nloops=1)\n -> Seq Scan on user_alias a \n(cost=0.00..234.00 rows=332 width=58) (actual\ntime=15.04..21.01 rows=493 loops=1)\n Filter: (domain_id = 2005921193)\n -> Index Scan using user_pkey on user u 
\n(cost=0.00..3.08 rows=1 width=156) (actual\ntime=0.17..0.17 rows=1 loops=493)\n Index Cond: (\"outer\".user_id =\nu.user_id)\n -> Index Scan using user_timestamps_uid_idx on\nuser_timestamps t (cost=0.00..3.11 rows=1 width=24)\n(actual time=0.16..0.23 rows=1 loops=493)\n Index Cond: (\"outer\".user_id = t.user_id)\n Total runtime: 257.79 msec\n(9 rows)\n\n\n\nuser with 911932 rows\n user_id - PK\n\nuser_timestamps with 708851 rows\n user_id - FK with index \n\nuser_alias with 9689 rows\n user_id - FK with index\n domain_id - no index on this column\n\nMy questions are:\n1. Why 1st \"Merge Join\" in 2nd query gets actual\nrows=897655 while 1st \"Merge Join\" in 1st query is\nactual rows=1?\n\nIf I know the answer, I will understand:\nWhy 1st \"Merge Join\" in 2nd query took so longer time\nthan 1st \"Merge Join\" in 1st query?\n\n2. Why PG optimzer is not smart enough to use 3rd\n(nested Loop) plan?\n\nThanks,\n\n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nVote for the stars of Yahoo!'s next ad campaign!\nhttp://advision.webevents.yahoo.com/yahoo/votelifeengine/\n\n",
"msg_date": "Fri, 16 Jul 2004 14:36:30 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": true,
"msg_subject": "same plan, different time"
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> SELECT *\n> FROM user U LEFT JOIN user_timestamps T USING\n> (user_id), user_alias A\n> WHERE U.user_id = A.user_id AND A.domain_id=7551070;\n\nIck. Try changing the join order, perhaps\n\nSELECT *\nFROM (user U JOIN user_alias A ON (U.user_id = A.user_id))\n LEFT JOIN user_timestamps T USING (user_id)\nWHERE A.domain_id=7551070;\n\nAs you have it, the entire LEFT JOIN has to be formed first,\nand the useful restriction clause only gets applied later.\n\nThe fact that the case with 7551070 finishes quickly is just\nblind luck --- the slow case is much more representative.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 17 Jul 2004 00:43:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same plan, different time "
}
] |
[
{
"msg_contents": "Pg Performers,\n\nThis might be a out of the ordinary question, or perhaps I have been out\nof the loop for a while but does PostgreSQL (or any other database) have\nsupport for lazy index updates. What I mean by lazy index updates is\nindex updating which occur at a regular interval rather than per\ntransaction.\n\nI have found that inserts and updates tend to slow down when the database\ngets really big. I think it is likely an effect of updating indexes when\nthe insert or update occurs.\n\nLooking forward to feedback and possibly direction on my lazy index update\nquestion.\n\nTIA,\n\nFred\n",
"msg_date": "Fri, 16 Jul 2004 20:30:34 -0700 (PDT)",
"msg_from": "\"Fred Moyer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Scaling with lazy index updates"
},
{
"msg_contents": "Howdy Fred,\n\n> This might be a out of the ordinary question, or perhaps I have been out\n> of the loop for a while but does PostgreSQL (or any other database) have\n> support for lazy index updates. What I mean by lazy index updates is\n> index updating which occur at a regular interval rather than per\n> transaction.\n\nIn a word: No.\n\nThe issue with \"asynchronous index updates\" (which is what you asked about) is \nthat they don't work with the way PostgreSQL uses indexes. If the index \nhasn't been updated, then when a query uses an index scan the row simply \nwouldn't show up. If that's acceptable behavior for you, then perhaps you \ncould consider asynchronous *table* updates, done at the application layer, \nwhich would be much easier to implement.\n\nWe do as much as we can by offloading b-tree \"cleanup\" for indexes until \nVACUUM/REINDEX, which is called manually. \n\nHmmm. Can you think of an example of an RDBMS which does *not* update \nindexes immediately (and transactionally)? I can't.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Sat, 17 Jul 2004 17:02:20 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling with lazy index updates"
}
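One way to picture the application-level "asynchronous table updates" Josh suggests is an index-free staging table that absorbs the cheap inserts, plus a periodic job that moves its contents into the fully indexed main table in one batch. This is only an illustrative sketch with invented table names, not a drop-in design:

    CREATE TABLE events_staging AS SELECT * FROM events WHERE false;  -- same columns, no indexes
    -- the application INSERTs into events_staging; every few minutes a job runs:
    BEGIN;
    LOCK TABLE events_staging IN EXCLUSIVE MODE;        -- hold off new inserts during the flush
    INSERT INTO events SELECT * FROM events_staging;    -- index maintenance on events happens in one batch
    DELETE FROM events_staging;                         -- TRUNCATE is faster, but only runs inside a transaction on 7.4+
    COMMIT;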
] |
[
{
"msg_contents": "Hi all,\n\nI've been searching the list for a while but couldn't find any up-to-date \ninformation relating to my problem.\nWe have a production server with postgresql on cygwin that currently deels \nwith about 200 Gigs of data (1 big IDE drive). We plan to move to linux \nfor some reasons I don't have to explain.\nOur aim is also to be able to increase our storage capacity up to \napproximately 1 or 2 terabytes and to speed up our production process. As \nwe are a small \"microsoft addicted\" company , we have some difficulties to \nchoose the best configuration that would best meet our needs.\nOur production process is based on transaction (mostly huge inserts) and \ndisk access is the main bottlle-neck.\n\nOur main concern is hardware related :\n\nWould NAS or SAN be good solutions ? (I've read that NAS uses NFS which \ncould slow down the transfer rate ??)\nHas anyone ever tried one of these with postgresql ? \n\nI would appreciate any comments.\nThanks in advance.\n\nBenjamin.\n\n================================================\nBenjamin Simon - Ingénieur Développement Cartographie\nhttp://www.loxane.com\ntel : 01 30 40 24 00\nFax : 01 30 40 24 04\n\nLOXANE \n271, Chaussée Jules César 95250 Beauchamp\nFrance\nHi all,\n\nI've been searching the list for a while but couldn't find any up-to-date information relating to my problem.\nWe have a production server with postgresql on cygwin that currently deels with about 200 Gigs of data (1 big IDE drive). We plan to move to linux for some reasons I don't have to explain.\nOur aim is also to be able to increase our storage capacity up to approximately 1 or 2 terabytes and to speed up our production process. As we are a small \"microsoft addicted\" company , we have some difficulties to choose the best configuration that would best meet our needs.\nOur production process is based on transaction (mostly huge inserts) and disk access is the main bottlle-neck.\n\nOur main concern is hardware related :\n\nWould NAS or SAN be good solutions ? (I've read that NAS uses NFS which could slow down the transfer rate ??)\nHas anyone ever tried one of these with postgresql ? \n\nI would appreciate any comments.\nThanks in advance.\n\nBenjamin.\n\n================================================\nBenjamin Simon - Ingénieur Développement Cartographie\nhttp://www.loxane.com\ntel : 01 30 40 24 00\nFax : 01 30 40 24 04\n\nLOXANE \n271, Chaussée Jules César 95250 Beauchamp\nFrance",
"msg_date": "Tue, 20 Jul 2004 09:52:56 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "NAS, SAN or any alternate solution ?"
},
{
"msg_contents": "On Tue, 2004-07-20 at 01:52, [email protected] wrote:\n> Hi all,\n> \n> I've been searching the list for a while but couldn't find any\n> up-to-date information relating to my problem.\n> We have a production server with postgresql on cygwin that currently\n> deels with about 200 Gigs of data (1 big IDE drive). We plan to move\n> to linux for some reasons I don't have to explain.\n> Our aim is also to be able to increase our storage capacity up to\n> approximately 1 or 2 terabytes and to speed up our production process.\n> As we are a small \"microsoft addicted\" company , we have some\n> difficulties to choose the best configuration that would best meet our\n> needs.\n> Our production process is based on transaction (mostly huge inserts)\n> and disk access is the main bottlle-neck.\n> \n> Our main concern is hardware related :\n> \n> Would NAS or SAN be good solutions ? (I've read that NAS uses NFS\n> which could slow down the transfer rate ??)\n> Has anyone ever tried one of these with postgresql ? \n\nYour best bet would likely be a large external RAID system with lots o\ncache. Next would be a fast internal RAID card like the LSI Megaraid\ncards, with lots of drives and batter backed cache. Next would be a\nSAN, but be careful, there may be issues with some cards and their\ndrivers under linux, research them well before deciding. NFS is right\nout if you want good performance AND reliability.\n\nThe cheapest solution that is likely to meet your needs would be the\ninternal RAID card with battery backed cache. \n\n",
"msg_date": "Tue, 20 Jul 2004 02:20:56 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAS, SAN or any alternate solution ?"
},
{
"msg_contents": "...and on Tue, Jul 20, 2004 at 09:52:56AM +0200, [email protected] used the keyboard:\n> Hi all,\n> \n> I've been searching the list for a while but couldn't find any up-to-date \n> information relating to my problem.\n> We have a production server with postgresql on cygwin that currently deels \n> with about 200 Gigs of data (1 big IDE drive). We plan to move to linux \n> for some reasons I don't have to explain.\n> Our aim is also to be able to increase our storage capacity up to \n> approximately 1 or 2 terabytes and to speed up our production process. As \n> we are a small \"microsoft addicted\" company , we have some difficulties to \n> choose the best configuration that would best meet our needs.\n> Our production process is based on transaction (mostly huge inserts) and \n> disk access is the main bottlle-neck.\n> \n> Our main concern is hardware related :\n> \n> Would NAS or SAN be good solutions ? (I've read that NAS uses NFS which \n> could slow down the transfer rate ??)\n> Has anyone ever tried one of these with postgresql ? \n> \n> I would appreciate any comments.\n> Thanks in advance.\n\nHello Simon,\n\nWe're testing 3ware Escalade 9000, which is a hardware-raid SATA\ncontroller with VERY good support for Linux (including direct access\nfor S.M.A.R.T. applications, which is a serious problem with other\nRAID controllers), featuring RAID levels 0, 1, 10, 5, JBOD, up to\n12 SATA channels (that's 3ware Escalade 9500S-12, they also come in\n4- and 8-channel versions, up to four cards can be fitted into a\nsystem), up to 1GB battery-backed ECC RAM (128MB out-of-the-box)\nand most of all, excellent tuning guides that actually manage to\nexceed the scope of merely making you come up with good benchmark\nresults for that controller in a specific test environment.\n\nOur preliminary tests show that a setup of four 250GB SATA Maxtors\nthat aren't really qualified as fast drives, in RAID5 can deliver\nblock writes of 50MB/s, rewrites at about 35MB/s and reads of\napproximately 180MB/s, which is rougly 2.5-times the performance\nof previous Escalades.\n\nYou can find more info on Escalade 9000 series, benchmarks and\nother stuff here:\n\n http://www.3ware.com/products/serial_ata9000.asp\n http://www.3ware.com/products/benchmarks_sata.asp\n http://www.3ware.dk/fileadmin/3ware/documents/Benchmarks/Linux_kernel_2.6_Benchmarking.pdf\n\nOh, and not to forget - the price for a 3ware 9500S-12, the version\nwe're testing ranges between EUR1000 and EUR1500, depending on the\ncontract you have with the reseller and the intended use of the\ndevice. SATA disks are dirt-cheap nowadays, as has been mentioned\nbefore.\n\nI do agree on the reliability of cache-usage setting those drives\nreport though, it may or may not be true. But one never knows that\nfor sure with SCSI drives either. At least you can assert that\nproper controller cache sizing with drives that usually feature\n8MB (!!!) cache, will mostly ensure that even the largest amount\nof data that could fit into a hard disk cache of the entire array\n(96MB) will still be available in the controller cache after a\npower failure, for it to be re-checked and ensured it is properly\nwritten.\n\nHope this helps,\n-- \n Grega Bremec\n Senior Administrator\n Noviforum Ltd., Software & Media\n http://www.noviforum.si/",
"msg_date": "Tue, 20 Jul 2004 14:23:35 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAS, SAN or any alternate solution ?"
},
{
"msg_contents": "> \n> Oh, and not to forget - the price for a 3ware 9500S-12, the version\n> we're testing ranges between EUR1000 and EUR1500, depending on the\n> contract you have with the reseller and the intended use of the\n> device. SATA disks are dirt-cheap nowadays, as has been mentioned\n> before.\n> \n\nCorrection, EUR500 and EUR1000, VAT not included. :)\n\nSorry for the mix-up.\n-- \n Grega Bremec\n Senior Administrator\n Noviforum Ltd., Software & Media\n http://www.noviforum.si/",
"msg_date": "Tue, 20 Jul 2004 15:01:35 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAS, SAN or any alternate solution ?"
},
{
"msg_contents": "[email protected] wrote:\n> Would NAS or SAN be good solutions ? (I've read that NAS uses NFS which \n> could slow down the transfer rate ??)\n\n> Has anyone ever tried one of these with postgresql ? \n\nNot (yet) with Postgres, but my company has run ~100GB Oracle database \non NAS (NetApp) for the past couple of years. We've found it to \noutperform local attached storage, and it has been extremely reliable \nand flexible. Our DBAs wouldn't give it up without a fight.\n\nJoe\n",
"msg_date": "Tue, 20 Jul 2004 09:44:42 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAS, SAN or any alternate solution ?"
},
{
"msg_contents": "> Would NAS or SAN be good solutions ? (I've read that NAS uses NFS\n> which could slow down the transfer rate ??)\n> Has anyone ever tried one of these with postgresql ? \n\nI've used both a NetApp and Hitachi based SANs with PostgreSQL. Both\nwork as well as expected, but do require some tweeking as they normally\nare not optimized for the datablock size that PostgreSQL likes to deal\nwith (8k by default) -- this can make as much as a 50% difference in\nperformance levels.\n\nFor a NAS setup, be VERY careful that the NFS implementation you're\nusing has the semantics that the database requires (do plenty of failure\ntesting -- pull plugs and things at random). iSCSI looks more promising,\nbut I've not tested how gracefully it fails.\n\nHave your supplier run a bunch of benchmarks for random IO with 8k\nblocks.\n\nOne side note, SANs seem to be very good at scaling across multiple jobs\nfrom multiple sources, but beware your Fibre Channel drivers -- mine\nseems to spend quite a bit of time managing interrupts and I've not\nfound a way to put it into a polling mode (I'm not a Linux person and\nthat trick usually happens for me on the BSDs).\n\n\n",
"msg_date": "Tue, 20 Jul 2004 13:25:22 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAS, SAN or any alternate solution ?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nRod Taylor wrote:\n| I've used both a NetApp and Hitachi based SANs with PostgreSQL. Both\n| work as well as expected, but do require some tweeking as they normally\n| are not optimized for the datablock size that PostgreSQL likes to deal\n| with (8k by default) -- this can make as much as a 50% difference in\n| performance levels.\n\nI'm looking for documentation about the datablock size you mentioned above.\n\nMy goal is to tune the disk / filesystem on our prototype system. It's\nan EMC disk array, so sectors on disk are 512 bytes of usable space.\nWe've decided to go with RAID 10 since the goal is to maximize\nperformance. Currently the raid element size is set at 16 sectors which\nis 8192 bytes of payload. I've got a sysadmin working on getting XFS\ngoing with 8192 byte blocks. My next task will be to calculate the\namount of space used by XFS for headers etc. to find out how much of\nthose 8192 bytes can be used for the postgres payload. Then configure\npostgres to use datablocks that size. So I'm looking for details on how\nto manipulate the size of the datablock.\n\nI'm also not entirely sure how to make the datablocks line up with the\nfilesystem blocks. Any suggestions on this would be greatly appreciated.\n\n- --\nAndrew Hammond 416-673-4138 [email protected]\nDatabase Administrator, Afilias Canada Corp.\nCB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (GNU/Linux)\n\niD8DBQFBUeHmgfzn5SevSpoRAu2sAJ4nHHup5lhp4+RcgBPGoJpUFoE1SQCgyvW1\nixyAvqb7ZkB+IIdGb36mpxI=\n=uDLW\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 22 Sep 2004 16:34:47 -0400",
"msg_from": "Andrew Hammond <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAS, SAN or any alternate solution ?"
},
{
"msg_contents": "> Rod Taylor wrote:\n> | I've used both a NetApp and Hitachi based SANs with PostgreSQL. Both\n> | work as well as expected, but do require some tweeking as they normally\n> | are not optimized for the datablock size that PostgreSQL likes to deal\n> | with (8k by default) -- this can make as much as a 50% difference in\n> | performance levels.\n\n> I'm also not entirely sure how to make the datablocks line up with the\n> filesystem blocks. Any suggestions on this would be greatly appreciated.\n\nWe just played with Veritas settings while running pg_bench on a 200GB\ndatabase. I no longer have access to the NetApp, but the settings for\nthe Hitachi are below.\n\nIn tunefstab we have:\n\nread_pref_io=8192,read_nstream=4,write_pref_io=8192,write_nstream=2\n\nIn fstab it's:\n\tdefaults,mincache=tmpcache,noatime\n\n\nIf you have better settings, please shoot them over so we can try them\nout. Perhaps even get someone over there to write a new SAN section in\nthe Tuning Chapter.\n\n",
"msg_date": "Wed, 22 Sep 2004 16:52:59 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAS, SAN or any alternate solution ?"
},
{
"msg_contents": "\nAndrew Hammond <[email protected]> writes:\n\n> My goal is to tune the disk / filesystem on our prototype system. It's\n> an EMC disk array, so sectors on disk are 512 bytes of usable space.\n> We've decided to go with RAID 10 since the goal is to maximize\n> performance. Currently the raid element size is set at 16 sectors which\n> is 8192 bytes of payload. \n\nDo people find it works well to have a stripe size that small? It seems like\nit would be better to have it be at least a few filesystem/postgres blocks so\nthat subsequent reads stand a chance of being sequential and not causing\nanother spindle to have to seek. Does this depend on whether it's an DSS load\nvs an OLTP load? If it's a single query at a time DSS system perhaps small\nblocksizes work best to get maximum throughput?\n\n> I've got a sysadmin working on getting XFS going with 8192 byte blocks.\n\nHaving your filesystem block size match postgres's block size is probably a\ngood idea. So 8k blocks is good.\n\n> My next task will be to calculate the amount of space used by XFS for\n> headers etc. to find out how much of those 8192 bytes can be used for the\n> postgres payload.\n\nNo filesystem that I know of uses up space in every block. The overhead is all\nstored elsewhere in blocks exclusively contain such overhead data. So just\nsetting postgres to 8k which the default would work well.\n\n> Then configure postgres to use datablocks that size. So I'm looking for\n> details on how to manipulate the size of the datablock.\n\nLook in pg_config_manual.h in src/include. Postgres has to be recompiled to\nchange it and the database has to be reinitialized. But it could be set to 16k\nor 32k. In which case you would probably want to adjust your filesystem to\nmatch. But unless you do experiments you won't know if it would be of any\nbenefit to change.\n\n> I'm also not entirely sure how to make the datablocks line up with the\n> filesystem blocks. Any suggestions on this would be greatly appreciated.\n\nThey just will. The files start on a block boundary, so every 8k is a new\nblock. Postgres stores 8k at a time always.\n\n-- \ngreg\n\n",
"msg_date": "22 Sep 2004 23:41:21 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NAS, SAN or any alternate solution ?"
}
] |
[
{
"msg_contents": "Thanks a lot Scott.\n\nIt seems that we were totally wrong when considering a network storage \nsolution. I've read your techdoc \nhttp://techdocs.postgresql.org/guides/DiskTuningGuide and found many \ninteresting remarks. \nI think that we will know focus on external Raid systems which seem to be \nrelativily affordable compared to NAS or SAN (we would have had the budget \nfor one of these). \nAs we don't plan to have more than 5 connections (I.E process), we think \nSATA drives would fit our requirements. Could this be an issue for an \nafter crash recovery ?\nWe also hesitate concerning the raid level to use. We are currently \ncomparing raid 1+0 and raid 5 but we have no actual idea on which one to \nuse.\n\nOur priorities are : \n1) performance\n2) recovery\n3) price\n4) back-up \n\nIt could be nice to have any comments from people who have already set up \na similar platform, giving some precise details of the hardware \nconfiguration :\n - brand of the raid device, \n - technology used (SCSI/IDE, RAID level ...), \n - size of the database, number of disks/size of disks ...\n\nSuch a knowledge base may be useful to convince people to migrate to \nopensource cheap reliable solutions. \nThanks again.\n\nBenjamin.\n\n\n\n\n\n\n\"Scott Marlowe\" <[email protected]>\nEnvoyé par : [email protected]\n20/07/2004 10:20\n\n \n Pour : [email protected]\n cc : [email protected]\n Objet : Re: [PERFORM] NAS, SAN or any alternate solution ?\n\n\nOn Tue, 2004-07-20 at 01:52, [email protected] wrote:\n> Hi all,\n> \n> I've been searching the list for a while but couldn't find any\n> up-to-date information relating to my problem.\n> We have a production server with postgresql on cygwin that currently\n> deels with about 200 Gigs of data (1 big IDE drive). We plan to move\n> to linux for some reasons I don't have to explain.\n> Our aim is also to be able to increase our storage capacity up to\n> approximately 1 or 2 terabytes and to speed up our production process.\n> As we are a small \"microsoft addicted\" company , we have some\n> difficulties to choose the best configuration that would best meet our\n> needs.\n> Our production process is based on transaction (mostly huge inserts)\n> and disk access is the main bottlle-neck.\n> \n> Our main concern is hardware related :\n> \n> Would NAS or SAN be good solutions ? (I've read that NAS uses NFS\n> which could slow down the transfer rate ??)\n> Has anyone ever tried one of these with postgresql ? \n\nYour best bet would likely be a large external RAID system with lots o\ncache. Next would be a fast internal RAID card like the LSI Megaraid\ncards, with lots of drives and batter backed cache. Next would be a\nSAN, but be careful, there may be issues with some cards and their\ndrivers under linux, research them well before deciding. NFS is right\nout if you want good performance AND reliability.\n\nThe cheapest solution that is likely to meet your needs would be the\ninternal RAID card with battery backed cache. \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n\nThanks a lot Scott.\n\nIt seems that we were totally wrong when considering a network storage solution. I've read your techdoc http://techdocs.postgresql.org/guides/DiskTuningGuide and found many interesting remarks. 
\nI think that we will know focus on external Raid systems which seem to be relativily affordable compared to NAS or SAN (we would have had the budget for one of these). \nAs we don't plan to have more than 5 connections (I.E process), we think SATA drives would fit our requirements. Could this be an issue for an after crash recovery ?\nWe also hesitate concerning the raid level to use. We are currently comparing raid 1+0 and raid 5 but we have no actual idea on which one to use.\n\nOur priorities are : \n1) performance\n2) recovery\n3) price\n4) back-up \n\nIt could be nice to have any comments from people who have already set up a similar platform, giving some precise details of the hardware configuration :\n - brand of the raid device, \n - technology used (SCSI/IDE, RAID level ...), \n - size of the database, number of disks/size of disks ...\n\nSuch a knowledge base may be useful to convince people to migrate to opensource cheap reliable solutions. \nThanks again.\n\nBenjamin.\n\n\n\n\n\n\n\n\n\"Scott Marlowe\" <[email protected]>\nEnvoyé par : [email protected]\n20/07/2004 10:20\n\n \n Pour : [email protected]\n cc : [email protected]\n Objet : Re: [PERFORM] NAS, SAN or any alternate solution ?\n\n\nOn Tue, 2004-07-20 at 01:52, [email protected] wrote:\n> Hi all,\n> \n> I've been searching the list for a while but couldn't find any\n> up-to-date information relating to my problem.\n> We have a production server with postgresql on cygwin that currently\n> deels with about 200 Gigs of data (1 big IDE drive). We plan to move\n> to linux for some reasons I don't have to explain.\n> Our aim is also to be able to increase our storage capacity up to\n> approximately 1 or 2 terabytes and to speed up our production process.\n> As we are a small \"microsoft addicted\" company , we have some\n> difficulties to choose the best configuration that would best meet our\n> needs.\n> Our production process is based on transaction (mostly huge inserts)\n> and disk access is the main bottlle-neck.\n> \n> Our main concern is hardware related :\n> \n> Would NAS or SAN be good solutions ? (I've read that NAS uses NFS\n> which could slow down the transfer rate ??)\n> Has anyone ever tried one of these with postgresql ? \n\nYour best bet would likely be a large external RAID system with lots o\ncache. Next would be a fast internal RAID card like the LSI Megaraid\ncards, with lots of drives and batter backed cache. Next would be a\nSAN, but be careful, there may be issues with some cards and their\ndrivers under linux, research them well before deciding. NFS is right\nout if you want good performance AND reliability.\n\nThe cheapest solution that is likely to meet your needs would be the\ninternal RAID card with battery backed cache. \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html",
"msg_date": "Tue, 20 Jul 2004 11:32:15 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "=?iso-8859-1?Q?R=E9f=2E_=3A_Re=3A__NAS=2C_SAN_or_any_alternate?=\n\tsolution ?"
},
{
"msg_contents": "\n\[email protected] wrote:\n\n>\n> As we don't plan to have more than 5 connections (I.E process), we \n> think SATA drives would fit our requirements. Could this be an issue \n> for an after crash recovery ?\n>\nIf you can disable the write ATA write cache, then you have safety. \nUnfortunately many cards under Linux show up as SCSI devices, and you \ncan't access this setting. Does anyone know if the newer SATA cards let \nyou control this?\n\nYou might want to keep and eye on the upcoming native windows port in \n7.5 - It will come with a fearsome array of caveats... but you have been \nrunning cygwin in production! - and I am inclined to think the native \nport will be more solid than this configuration.\n\nregards\n\nMark\n\n\n\n\n\n\n",
"msg_date": "Tue, 20 Jul 2004 22:04:28 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?ISO-8859-1?Q?R=E9f=2E_=3A_Re=3A_=5BPERFORM=5D_NAS=2C?="
},
{
"msg_contents": "On Tue, 2004-07-20 at 03:32, [email protected] wrote:\n> Thanks a lot Scott.\n> \n> It seems that we were totally wrong when considering a network storage\n> solution. I've read your techdoc\n> http://techdocs.postgresql.org/guides/DiskTuningGuide and found many\n> interesting remarks. \n> I think that we will know focus on external Raid systems which seem to\n> be relativily affordable compared to NAS or SAN (we would have had the\n> budget for one of these). \n> As we don't plan to have more than 5 connections (I.E process), we\n> think SATA drives would fit our requirements. Could this be an issue\n> for an after crash recovery ?\n\nIf you're looking at (S)ATA RAID, definitely look at escalade, as\nanother poster mentioned. Last year I and a few other folks on the\nlists were testing RAID controllers for survival of the power plug pull\ntest, and the Escalade passed (someone else did the testing, I tested\nthe LSI MegaRAID 320-2 controller with battery backed cache). \n\n> We also hesitate concerning the raid level to use. We are currently\n> comparing raid 1+0 and raid 5 but we have no actual idea on which one\n> to use.\n> \n> Our priorities are : \n> 1) performance\n> 2) recovery\n> 3) price\n> 4) back-up \n\nBasically, for a smaller number of drivers, RAID 1+0 is almost always a\nwin over RAID 5. As the number of drives in the array grows, RAID 5\nusually starts to pull back in the lead. RAID 5 definitely gives you\nthe most storage for your dollar of any of the redundant array types. \nThe more important point of a RAID controller is that it have battery\nbacked cache to make sure that the database server isn't waiting for WAL\nwrites all the time. A single port LSI Megaraid 320-1 controller is\nonly about $500 or less, the last time I checked (with battery backed\ncache, order it WITH the battery and cache, otherwise you may have a\nhard time finding the right parts later on.) It supports hot spares for\nautomatic rebuild.\n\n\n\n",
"msg_date": "Tue, 20 Jul 2004 09:28:54 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: =?ISO-8859-1?Q?R=E9f=2E?= : Re: NAS, SAN or any"
}
] |
[
{
"msg_contents": "I must say that cygwin did well (there exists good software on windows, \ni've found one)... as a prototype ... when I look at the postgresql poll \n(http://www.postgresql.org/survey.php?View=1&SurveyID=11), it seems like \nI'm not alone !!\nActually, the major problem was the limit of the available allocable \nmemory restricted by cygwin.\n\nWe don't plan to wait for the 7.5 win native version of postgresql. It was \nhard enough to decide moving to linux, I don't want to rollback \neverything :)\nThanks for the advice, I will definetely have a look at the new version \nanyway as soon as it is released.\n\nRegards,\nBenjamin.\n\n\n\n\n\nMark Kirkwood <[email protected]>\n20/07/2004 12:04\n\n \n Pour : [email protected]\n cc : [email protected]\n Objet : Re: Réf. : Re: [PERFORM] NAS, SAN or any alternate solution ?\n\n\n\n\[email protected] wrote:\n\n>\n> As we don't plan to have more than 5 connections (I.E process), we \n> think SATA drives would fit our requirements. Could this be an issue \n> for an after crash recovery ?\n>\nIf you can disable the write ATA write cache, then you have safety. \nUnfortunately many cards under Linux show up as SCSI devices, and you \ncan't access this setting. Does anyone know if the newer SATA cards let \nyou control this?\n\nYou might want to keep and eye on the upcoming native windows port in \n7.5 - It will come with a fearsome array of caveats... but you have been \nrunning cygwin in production! - and I am inclined to think the native \nport will be more solid than this configuration.\n\nregards\n\nMark\n\n\n\n\n\n\n\n\n\nI must say that cygwin did well (there exists good software on windows, i've found one)... as a prototype ... when I look at the postgresql poll (http://www.postgresql.org/survey.php?View=1&SurveyID=11), it seems like I'm not alone !!\nActually, the major problem was the limit of the available allocable memory restricted by cygwin.\n\nWe don't plan to wait for the 7.5 win native version of postgresql. It was hard enough to decide moving to linux, I don't want to rollback everything :)\nThanks for the advice, I will definetely have a look at the new version anyway as soon as it is released.\n\nRegards,\nBenjamin.\n\n\n\n\n\n\n\nMark Kirkwood <[email protected]>\n20/07/2004 12:04\n\n \n Pour : [email protected]\n cc : [email protected]\n Objet : Re: Réf. : Re: [PERFORM] NAS, SAN or any alternate solution ?\n\n\n\n\[email protected] wrote:\n\n>\n> As we don't plan to have more than 5 connections (I.E process), we \n> think SATA drives would fit our requirements. Could this be an issue \n> for an after crash recovery ?\n>\nIf you can disable the write ATA write cache, then you have safety. \nUnfortunately many cards under Linux show up as SCSI devices, and you \ncan't access this setting. Does anyone know if the newer SATA cards let \nyou control this?\n\nYou might want to keep and eye on the upcoming native windows port in \n7.5 - It will come with a fearsome array of caveats... but you have been \nrunning cygwin in production! - and I am inclined to think the native \nport will be more solid than this configuration.\n\nregards\n\nMark",
"msg_date": "Tue, 20 Jul 2004 12:18:10 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "=?iso-8859-1?Q?R=E9f=2E_=3A_Re=3A_R=E9f=2E_=3A_Re=3A__NAS=2C_SAN?=\n\tor any alternate solution ?"
}
] |
[
{
"msg_contents": "I have (among other things) a parent table with 200 records and a child table with 20MM or more. I set up referential integrity on the FK with ON DELETE CASCADE.\n\nIt appears that when a DELETE is done on the parent table, the child table deletion is done with a sequential scan. I say this because it took over four minutes to delete a parent record THAT HAD NO CHILDREN. The DB is recently analyzed and SELECTs in the child table are done by the appropriate index on the FK.\n\nLet me guess, the cascade trigger's query plan is decided at schema load time, when the optimizer has no clue. Is there a way to fix this without writing my own triggers, using PL/PGSQL EXECUTE to delay the planner?\n\nAnd by the way, if FK conditions like IN (1,3,4) could be handled in a single invocation of the trigger, so much the better.\n\n",
"msg_date": "Tue, 20 Jul 2004 12:19:11 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Unbearably slow cascading deletes"
},
{
"msg_contents": "\nOn Tue, 20 Jul 2004 [email protected] wrote:\n\n> I have (among other things) a parent table with 200 records and a child\n> table with 20MM or more. I set up referential integrity on the FK with\n> ON DELETE CASCADE.\n>\n> It appears that when a DELETE is done on the parent table, the child\n> table deletion is done with a sequential scan. I say this because it\n> took over four minutes to delete a parent record THAT HAD NO CHILDREN.\n> The DB is recently analyzed and SELECTs in the child table are done by\n> the appropriate index on the FK.\n>\n> Let me guess, the cascade trigger's query plan is decided at schema load\n> time, when the optimizer has no clue. Is there a way to fix this without\n> writing my own triggers, using PL/PGSQL EXECUTE to delay the planner?\n\nThe query plan should be decided at the first cascaded delete for the key\nin the session. However, IIRC, it's using $arguments for the key values,\nso it's possible that that is giving it a different plan than it would get\nif the value were known. What do you get if you prepare the query with an\nargument for the key and use explain execute?\n\n",
"msg_date": "Tue, 20 Jul 2004 12:45:10 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unbearably slow cascading deletes"
},
{
"msg_contents": "\nOn Tue, 20 Jul 2004, Stephan Szabo wrote:\n\n>\n> On Tue, 20 Jul 2004 [email protected] wrote:\n>\n> > I have (among other things) a parent table with 200 records and a child\n> > table with 20MM or more. I set up referential integrity on the FK with\n> > ON DELETE CASCADE.\n> >\n> > It appears that when a DELETE is done on the parent table, the child\n> > table deletion is done with a sequential scan. I say this because it\n> > took over four minutes to delete a parent record THAT HAD NO CHILDREN.\n> > The DB is recently analyzed and SELECTs in the child table are done by\n> > the appropriate index on the FK.\n> >\n> > Let me guess, the cascade trigger's query plan is decided at schema load\n> > time, when the optimizer has no clue. Is there a way to fix this without\n> > writing my own triggers, using PL/PGSQL EXECUTE to delay the planner?\n>\n> The query plan should be decided at the first cascaded delete for the key\n> in the session. However, IIRC, it's using $arguments for the key values,\n> so it's possible that that is giving it a different plan than it would get\n> if the value were known. What do you get if you prepare the query with an\n> argument for the key and use explain execute?\n\nTo be clear, I mean prepare/explain execute an example select/delete from\nthe fk.\n",
"msg_date": "Tue, 20 Jul 2004 12:55:14 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unbearably slow cascading deletes"
},
{
"msg_contents": "> I have (among other things) a parent table with 200 records and a child table with 20MM or more. I set up referential integrity on the FK with ON DELETE CASCADE.\n> \n> It appears that when a DELETE is done on the parent table, the child table deletion is done with a sequential scan. I say this because it took over four minutes to delete a parent record THAT HAD NO CHILDREN. The DB is recently analyzed and SELECTs in the child table are done by the appropriate index on the FK.\n\nDo you have an index on the foreign key field?\n\nChris\n\n",
"msg_date": "Wed, 21 Jul 2004 09:37:52 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unbearably slow cascading deletes"
}
] |
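A minimal sketch of the index Chris asks about above, using the hypothetical names that appear in the follow-up messages (childtable, fk); without an index on the referencing column, every cascaded delete from the parent has to scan the whole child table:

CREATE INDEX childtable_fk_idx ON childtable (fk);
ANALYZE childtable;

With such an index in place, the per-key DELETE issued by the ON DELETE CASCADE trigger can use an index scan instead of sequentially scanning 20 million child rows.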
[
{
"msg_contents": "PREPARE c(int4) AS DELETE FROM childtable WHERE fk=$1;\nEXPLAIN EXECUTE c(-1);\n\ngives an index scan.\n\nPREPARE c2(int4) AS DELETE FROM parenttable WHERE key=$1;\nEXPLAIN EXECUTE c2(1);\n\ngives a seq scan on the parent table (itself a little curious) and no explanation of what the triggers are doing.\n",
"msg_date": "Tue, 20 Jul 2004 12:59:40 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Unbearably slow cascading deletes"
}
] |
[
{
"msg_contents": "I FOUND IT!\n\nA second trigger that doesn't belong......\n\nOK, we're set now, and thanks for showing me some ways to check what the planner is up to. Is there a way of seeing what the triggers will do?\n",
"msg_date": "Tue, 20 Jul 2004 13:06:46 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Unbearably slow cascading deletes"
}
] |
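For the closing question above ("Is there a way of seeing what the triggers will do?"), a hedged sketch: psql's \d tablename lists a table's triggers, and the system catalogs can be queried directly. The table name childtable is assumed, as in the earlier examples:

SELECT t.tgname, p.proname AS trigger_function
FROM pg_trigger t
JOIN pg_proc p ON p.oid = t.tgfoid
WHERE t.tgrelid = 'childtable'::regclass;

This shows which trigger functions are attached (including the RI_ConstraintTrigger entries that implement the foreign key), which is how a stray extra trigger like the one found above can be spotted; it does not show the query plans the triggers will run.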
[
{
"msg_contents": "Hello,\n\nI have a request like SELECT ... WHERE x<=A<=y AND t<=B<=u AND z<=C<=w AND ..\n5 columns are in BETWEEN clauses.\n\nWhat is the best index I could use?\n\nIf I create btree index on all columns (A,B,C..), here is what explain\nanalyze gives me:\n-----------------------------------------------------------------\n Index Scan using all_ind on test2 (cost=0.00..4.51 rows=1 width=24) (actual ti\nme=0.000..0.000 rows=5 loops=1)\n Index Cond: ((a >= '2004-07-20 23:50:50'::timestamp without time zone) AND (a\n <= '2004-07-21 23:50:50'::timestamp without time zone) AND (b >= '2004-07-20 23\n:50:50'::timestamp without time zone) AND (b <= '2004-07-21 23:50:50'::timestamp\n without time zone) AND (c >= '2004-07-20 23:50:50'::timestamp without time zone\n) AND (c <= '2004-07-21 23:50:50'::timestamp without time zone))\n\n\nIs such search really optimal?\n\nI remember we used k-d trees for geometric data with independent\ncoords.. Is that the same as btree for multiple columns I wonder.\n\n\n\n-- \nBest regards,\n Ilia mailto:[email protected]\n\n",
"msg_date": "Wed, 21 Jul 2004 01:46:11 +0400",
"msg_from": "Ilia Kantor <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index type"
},
{
"msg_contents": "Ilia,\n\n> If I create btree index on all columns (A,B,C..), here is what explain\n> analyze gives me:\n> -----------------------------------------------------------------\n> Index Scan using all_ind on test2 (cost=0.00..4.51 rows=1 width=24)\n> (actual ti me=0.000..0.000 rows=5 loops=1)\n> Index Cond: ((a >= '2004-07-20 23:50:50'::timestamp without time zone)\n> AND (a <= '2004-07-21 23:50:50'::timestamp without time zone) AND (b >=\n> '2004-07-20 23\n>\n> :50:50'::timestamp without time zone) AND (b <= '2004-07-21\n> : 23:50:50'::timestamp\n>\n> without time zone) AND (c >= '2004-07-20 23:50:50'::timestamp without time\n> zone ) AND (c <= '2004-07-21 23:50:50'::timestamp without time zone))\n\nLooks good to me. It's a fully indexed search, which it should be with \nBETWEEN. The only thing you need to ask yourself is whether or not you've \nselected the columns in the most selective order (e.g. most selective column \nfirst).\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 16 Aug 2004 16:16:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index type"
}
] |
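To make Josh's point about column order concrete, a small sketch using the test2 table from the question; the assumption (to be checked against the real data) is that column a has the narrowest, most selective range:

-- lead the composite index with the most selective range column
CREATE INDEX all_ind ON test2 (a, b, c);

SELECT * FROM test2
WHERE a BETWEEN '2004-07-20 23:50:50' AND '2004-07-21 23:50:50'
  AND b BETWEEN '2004-07-20 23:50:50' AND '2004-07-21 23:50:50'
  AND c BETWEEN '2004-07-20 23:50:50' AND '2004-07-21 23:50:50';

If b or c turns out to be more selective, that column should be listed first in the index instead.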
[
{
"msg_contents": "Hi all,\n\tI was wondering if part or all of Postgres would be able to take\nadvantage of a beowulf cluster to increase performance? If not then why\nnot, and if so then how would/could it benefit from being on a cluster?\n\n\tThanks for the enlightenment in advance.\n\n-Joe\n\n\n",
"msg_date": "Wed, 21 Jul 2004 15:45:56 -0500",
"msg_from": "joe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Beowulf Cluster & Postgresql?"
},
{
"msg_contents": "On Wed, 2004-07-21 at 14:45, joe wrote:\n> Hi all,\n> \tI was wondering if part or all of Postgres would be able to take\n> advantage of a beowulf cluster to increase performance? If not then why\n> not, and if so then how would/could it benefit from being on a cluster?\n> \n> \tThanks for the enlightenment in advance.\n\nThat type of clustering helps with large parallel processes that are\nloosely interrelated or none at all.\n\nIn PostgreSQL, as in most databases, all actions that change the data in\nthe database tend to be highly interrelated, so it becomes very\nexpensive to pass all that locking information back and forth. The very\nthing a cluster would be good at, lots of reads, very few writies, is\nthe antithesis of what postgresql is built to be good at, lots of writes\nas well as lots of reads.\n\nBasically, clustering tends to make the database faster at reads and\nslower at writes. While there are clustering solutions out there,\nBeowulf clustering is oriented towards highly parallel CPU intensive\nworkloads, while PostgreSQL tends to be I/O intensive, and since all the\ndata needs to be stored in one \"master\" place, adding nodes doesn't\nusually help with making writes faster.\n\n",
"msg_date": "Wed, 21 Jul 2004 15:08:30 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beowulf Cluster & Postgresql?"
},
{
"msg_contents": "You might want to take a look at Matt Dillon's Backplane database. It is \ndesigned to work in a multi-node environment :\n\nhttp://www.backplane.com/\n\nregards\n\nMark\njoe wrote:\n\n>Hi all,\n>\tI was wondering if part or all of Postgres would be able to take\n>advantage of a beowulf cluster to increase performance? If not then why\n>not, and if so then how would/could it benefit from being on a cluster?\n>\n>\tThanks for the enlightenment in advance.\n>\n>-Joe\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n",
"msg_date": "Thu, 22 Jul 2004 14:47:28 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beowulf Cluster & Postgresql?"
}
] |
[
{
"msg_contents": "Hello,\n\nUsing a test client application that performs 100000 insert operations on a\ntable, with the client application running on the same machine as the\nPostgres server, I get the following results for the time taken to run the\ntest:\n\nUnix domain socket connection: 26 seconds\nInet domain socket ('localhost'): 35 seconds\n\nThe table has two columns, a timestamp and a character(16), no indexes.\n\nBut with the server running on one machine and the client running on\nanother, the two machines being connected by a 100 Mb ethernet, with nothing\nelse on the network, this test takes 17 minutes to run. I have tried\nchanging the frequency of COMMIT operations, but with only a small effect.\n\nThe machines used are P4s running FreeBSD 5.2.1. The Postgres version is\n7.4.3. Can anyone tell me why there's such a big difference?\n\nThanks,\nWilliam\n\n",
"msg_date": "Fri, 23 Jul 2004 15:20:54 +0930",
"msg_from": "\"William Carney\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance over a LAN"
},
{
"msg_contents": "> But with the server running on one machine and the client running on\n> another, the two machines being connected by a 100 Mb ethernet, with nothing\n> else on the network, this test takes 17 minutes to run. I have tried\n> changing the frequency of COMMIT operations, but with only a small effect.\n\nAre you using separate INSERT statements? Try using COPY instead, it's \nmuch faster.\n\nchris\n\n",
"msg_date": "Fri, 23 Jul 2004 14:02:38 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance over a LAN"
},
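A rough sketch of the COPY approach Chris suggests, with a made-up table standing in for the one described in the question (a timestamp plus a character(16) column); the whole batch travels in a single stream rather than one client/server round trip per INSERT. Data columns are separated by a tab:

CREATE TABLE loadtest (ts timestamp, code character(16));

COPY loadtest (ts, code) FROM STDIN;
2004-07-23 15:20:54	row0000000000001
2004-07-23 15:20:55	row0000000000002
\.

Reading from a server-side file works the same way: COPY loadtest (ts, code) FROM '/tmp/loadtest.dat'; — all names and paths here are illustrative only.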
{
"msg_contents": "I don't think that's the advice being looked for here - if this \nbehaviour is repeatable, then there is something askew with the inet \nprotocol.\n\nWhat is the client application written in? Do you know what version of \nthe postgres network protocol your driver code is using? Are the \ninserts inside a transaction?\n\nI'm very interested in this issue since the environment I now work in \nhas a lot of network connected databases and the performance is much \nless than I am used to with local databases.\n\nMark.\n-- \nMark Aufflick\n e [email protected]\n w www.pumptheory.com (work)\n w mark.aufflick.com (personal)\n p +61 438 700 647\nOn 23/07/2004, at 4:02 PM, Christopher Kings-Lynne wrote:\n\n>> But with the server running on one machine and the client running on\n>> another, the two machines being connected by a 100 Mb ethernet, with \n>> nothing\n>> else on the network, this test takes 17 minutes to run. I have tried\n>> changing the frequency of COMMIT operations, but with only a small \n>> effect.\n>\n> Are you using separate INSERT statements? Try using COPY instead, \n> it's much faster.\n>\n> chris\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> ======================================================================= \n> =\n> Pain free spam & virus protection by: www.mailsecurity.net.au\n> Forward undetected SPAM to: [email protected]\n> ======================================================================= \n> =\n>\n\n\n========================================================================\n Pain free spam & virus protection by: www.mailsecurity.net.au\n Forward undetected SPAM to: [email protected]\n========================================================================\n\n",
"msg_date": "Fri, 23 Jul 2004 16:20:17 +1000",
"msg_from": "Mark Aufflick <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance over a LAN"
},
{
"msg_contents": "On Thu, 2004-07-22 at 23:50, William Carney wrote:\n> Hello,\n> \n> Using a test client application that performs 100000 insert operations on a\n> table, with the client application running on the same machine as the\n> Postgres server, I get the following results for the time taken to run the\n> test:\n> \n> Unix domain socket connection: 26 seconds\n> Inet domain socket ('localhost'): 35 seconds\n> \n> The table has two columns, a timestamp and a character(16), no indexes.\n> \n> But with the server running on one machine and the client running on\n> another, the two machines being connected by a 100 Mb ethernet, with nothing\n> else on the network, this test takes 17 minutes to run. I have tried\n> changing the frequency of COMMIT operations, but with only a small effect.\n> \n> The machines used are P4s running FreeBSD 5.2.1. The Postgres version is\n> 7.4.3. Can anyone tell me why there's such a big difference?\n\nAre you using the exact same script locally as across the network?\n\nHave you checked to see how fast you can copy just a plain text file\nacross the network connection?\n\nHave you checked your system to see if you're getting lots of network\nerrors or anything like that?\n\n",
"msg_date": "Fri, 23 Jul 2004 00:29:11 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance over a LAN"
},
{
"msg_contents": "\nI tested the LAN connection by transferring around some large (150 MByte)\nfiles, and got consistent transfer rates of about 10 MBytes/second in both\ndirections without any problems, which is what I would expect. Netstat says\nthat there are no errors, so I think that the ethernet is working OK. Maybe\nthere's some latency somewhere but I have no reason to think that anything's\nabnormal.\n\nThe test program is a C program with embedded SQL (ecpg). The only\ndifference between the tests was the address used in the EXEC SQL CONNECT\n.. statement. The inserts are committed to the database by performing an\nEXEC SQL COMMIT after every N of them; I tried various values of N up to\nseveral hundred, but it didn't make much difference. Using psql I can see\nrecords appearing in the database in groups of that size. I'm not sure about\nall of the protocol versions. I downloaded the complete Postgres source and\nbuilt it only a few days ago. Ecpg says that it's version is 3.1.1. I'm not\ngetting any errors reported anywhere, it's just that things are surprisingly\nslow over the LAN for some reason.\n\nWilliam\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Mark\n> Aufflick\n> Sent: Friday, 23 July 2004 3:50 PM\n> To: Christopher Kings-Lynne\n> Cc: William Carney; [email protected]\n> Subject: Re: [PERFORM] Performance over a LAN\n>\n>\n> I don't think that's the advice being looked for here - if this\n> behaviour is repeatable, then there is something askew with the inet\n> protocol.\n>\n> What is the client application written in? Do you know what version of\n> the postgres network protocol your driver code is using? Are the\n> inserts inside a transaction?\n>\n> I'm very interested in this issue since the environment I now work in\n> has a lot of network connected databases and the performance is much\n> less than I am used to with local databases.\n>\n> Mark.\n> --\n> Mark Aufflick\n> e [email protected]\n> w www.pumptheory.com (work)\n> w mark.aufflick.com (personal)\n> p +61 438 700 647\n> On 23/07/2004, at 4:02 PM, Christopher Kings-Lynne wrote:\n>\n> >> But with the server running on one machine and the client running on\n> >> another, the two machines being connected by a 100 Mb ethernet, with\n> >> nothing\n> >> else on the network, this test takes 17 minutes to run. I have tried\n> >> changing the frequency of COMMIT operations, but with only a small\n> >> effect.\n> >\n> > Are you using separate INSERT statements? 
Try using COPY instead,\n> > it's much faster.\n> >\n> > chris\n> >\n> >\n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n> > =======================================================================\n> > =\n> > Pain free spam & virus protection by: www.mailsecurity.net.au\n> > Forward undetected SPAM to: [email protected]\n> > =======================================================================\n> > =\n> >\n>\n>\n> ========================================================================\n> Pain free spam & virus protection by: www.mailsecurity.net.au\n> Forward undetected SPAM to: [email protected]\n> ========================================================================\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Fri, 23 Jul 2004 17:27:53 +0930",
"msg_from": "\"William Carney\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance over a LAN"
},
{
"msg_contents": "On Fri, 2004-07-23 at 01:50, William Carney wrote:\n> Hello,\n> \n> Using a test client application that performs 100000 insert operations on a\n> table, with the client application running on the same machine as the\n> Postgres server, I get the following results for the time taken to run the\n> test:\n> \n> Unix domain socket connection: 26 seconds\n> Inet domain socket ('localhost'): 35 seconds\n\n> The machines used are P4s running FreeBSD 5.2.1. The Postgres version is\n> 7.4.3. Can anyone tell me why there's such a big difference?\n\nDomains sockets have significantly less work to do than inet sockets as\nwell as less delays for the transmission itself.\n\n\n",
"msg_date": "Fri, 23 Jul 2004 07:43:07 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance over a LAN"
},
{
"msg_contents": "\nOn Jul 23, 2004, at 3:57 AM, William Carney wrote:\n\n>\n> I tested the LAN connection by transferring around some large (150 \n> MByte)\n> files, and got consistent transfer rates of about 10 MBytes/second in \n> both\n> directions without any problems, which is what I would expect. Netstat \n> says\n\nIt would be interesting to run something like ntop that can show you \ncurrent network usage... unless you are doing a large COPY the PG \nprotocol has a lot of back and forth messages...\n\n-- \nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Fri, 23 Jul 2004 08:07:23 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance over a LAN"
},
{
"msg_contents": "On Fri, Jul 23, 2004 at 03:20:54PM +0930, William Carney wrote:\n\n> But with the server running on one machine and the client running on\n> another, the two machines being connected by a 100 Mb ethernet, with nothing\n> else on the network, this test takes 17 minutes to run. I have tried\n> changing the frequency of COMMIT operations, but with only a small effect.\n>\n> The machines used are P4s running FreeBSD 5.2.1. The Postgres version is\n> 7.4.3. Can anyone tell me why there's such a big difference?\n\nCan you reproduce this problem in a tiny test case? If your application\nis doing other networky things (e.g. many name resolutions that hang\nfor 10 seconds each), they may be slowing down the PostgreSQL work. \n\nJust a WAG.\n\n-mike\n",
"msg_date": "Fri, 23 Jul 2004 11:34:16 -0400",
"msg_from": "Michael Adler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance over a LAN"
},
{
"msg_contents": "--- William Carney <[email protected]> wrote:\n\n\n> The test program is a C program with embedded SQL\n> (ecpg). The only\n> difference between the tests was the address used in\n> the EXEC SQL CONNECT\n> .. statement. The inserts are committed to the\n> database by performing an\n> EXEC SQL COMMIT after every N of them; I tried\n> various values of N up to\n> several hundred, but it didn't make much difference.\n> Using psql I can see\n> records appearing in the database in groups of that\n> size. I'm not sure about\n> all of the protocol versions. I downloaded the\n> complete Postgres source and\n> built it only a few days ago. Ecpg says that it's\n> version is 3.1.1. I'm not\n> getting any errors reported anywhere, it's just that\n> things are surprisingly\n> slow over the LAN for some reason.\n> \n> William\n\n\nIt's probably the number of round trips to the server.\n If pg can accept host variable arrays, try using a\nthousand element array or something to do your\ninserts.\n\ne.g. char mycharhv[1000][10]\n\nthen set up the mycharhvs[1][..], [2][...] etc and\nfling them at the database with a single insert\nstatement.\n\nI just tried this with the following program:\n\n#include <stdio.h>\nexec sql include sqlca;\nexec sql begin declare section;\nchar db[10];\nchar inserts[5000][10];\nexec sql end declare section;\nint main(void) {\nunsigned int n;\n strcpy(db,\"mydb\");\n exec sql connect to :db;\n printf(\"sqlcode connect %i\\n\",sqlca.sqlcode);\n for(n=0;n<5000;n++) {\n strcpy(inserts[n],\"hello\");\n }\n exec sql insert into gaz values (:inserts);\n printf(\"sqlcode insert %i\\n\",sqlca.sqlcode);\n exec sql commit work;\n}\n\n\nThis didn't work on pg, I only got one row inserted.\nThis is using ecpg 2.9.0, pg 7.2.2\n\nOn Oracle with PRO*C this causes 5000 rows to be\nwritten with one insert and is a technique I've used\nto get better network performance with Oracle.\n\nIs this fixed in newer versions? If not, it sounds\nlike a good feature.\n\n\n\t\n\t\n\t\t\n___________________________________________________________ALL-NEW Yahoo! Messenger - sooooo many all-new ways to express yourself http://uk.messenger.yahoo.com\n",
"msg_date": "Fri, 23 Jul 2004 17:26:08 +0100 (BST)",
"msg_from": "=?iso-8859-1?q?Gary=20Cowell?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance over a LAN"
},
{
"msg_contents": "\n\"William Carney\" <[email protected]> writes:\n\n> The machines used are P4s running FreeBSD 5.2.1. The Postgres version is\n> 7.4.3. Can anyone tell me why there's such a big difference?\n\nYou're going to have to run tcpdump and see where the delays are. It might be\nhard to decode the postgres protocol though.\n\nWhich driver are you using? I wonder if it isn't the same nagle+delayed ack\nproblem that came up recently.\n\n-- \ngreg\n\n",
"msg_date": "23 Jul 2004 13:09:47 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance over a LAN"
}
] |
[
{
"msg_contents": "I hall\nI have a query in this form:\n\nempdb=# explain analyze select * from v_past_connections where id_user = 26195 and login_time > '2004-07-21';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_user_logs_login_time on user_logs (cost=0.00..14.10 rows=1 width=28) (actual time=66.890..198.998 rows=5 loops=1)\n Index Cond: (login_time > '2004-07-21 00:00:00+02'::timestamp with time zone)\n Filter: (id_user = 26195)\n Total runtime: 199.083 ms\n(4 rows)\n\n\nas you see the index on the time stamp column is used\n\nThe table have indexes on both columns:\n\nempdb=# explain analyze select * from v_past_connections where login_time > '2004-07-21';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_user_logs_login_time on user_logs (cost=0.00..12.90 rows=481 width=28) (actual time=7.338..661.300 rows=22477 loops=1)\n Index Cond: (login_time > '2004-07-21 00:00:00+02'::timestamp with time zone)\n Total runtime: 676.472 ms\n(3 rows)\n\nempdb=# explain analyze select * from v_past_connections where id_user = 26195;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_user_user_logs on user_logs (cost=0.00..252.47 rows=320 width=28) (actual time=4.420..100.122 rows=221 loops=1)\n Index Cond: (id_user = 26195)\n Total runtime: 100.348 ms\n(3 rows)\n\n\nThe rows filtered out with both condictions are two order of magnitude differents,\nalso the extimated rows are close to real numbers:\n\n\nempdb=# select count(*) from v_past_connections where id_user = 26195;\n count\n-------\n 221\n(1 row)\n\nempdb=# select count(*) from v_past_connections where login_time > '2004-07-21';\n count\n-------\n 22441\n(1 row)\n\n\nwhy then the planner choose to do an index scan using the filter that retrieve a bigger ammount of rows ? A bug ?\n\n\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 23 Jul 2004 10:26:52 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Wrong index choosen?"
},
{
"msg_contents": "On Fri, 23 Jul 2004, Gaetano Mendola wrote:\n\n> empdb=# explain analyze select * from v_past_connections where login_time > '2004-07-21';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using idx_user_logs_login_time on user_logs (cost=0.00..12.90 rows=481 width=28) (actual time=7.338..661.300 rows=22477 loops=1)\n> Index Cond: (login_time > '2004-07-21 00:00:00+02'::timestamp with time zone)\n> Total runtime: 676.472 ms\n> (3 rows)\n\nIn this plan it estimates to get 481 but it got 22477. So the estimation \nwas very wrong. You can increase the statistics tarhet on the login_time \nand it will probably be better (after the next analyze).\n\n> why then the planner choose to do an index scan using the filter that\n> retrieve a bigger ammount of rows ? A bug ?\n\nBecause it has to decide on the plan before it knows exactly what the \nresult will be. As seen above, the estimation was wrong and thus the plan \nwas not as good as it could have been.\n\nIn this case you probably also want to create a combined index on both\ncolumns:\n\nCREATE INDEX foo ON user_log (id_user, login_time);\n\nps. This letter belonged to pgsql-performance and not pgsql-hackers.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Fri, 23 Jul 2004 14:01:56 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong index choosen?"
},
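A concrete version of the two suggestions above, reusing the table and column names from the posted plans (user_logs, id_user, login_time); the statistics target of 100 and the index name are arbitrary examples:

ALTER TABLE user_logs ALTER COLUMN login_time SET STATISTICS 100;
ANALYZE user_logs;
CREATE INDEX idx_user_logs_user_time ON user_logs (id_user, login_time);

After the ANALYZE, the estimate for the login_time range condition should track the real row count much more closely, which is what lets the planner prefer the more selective index.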
{
"msg_contents": "Dennis Bjorklund <[email protected]> writes:\n> In this plan it estimates to get 481 but it got 22477. So the estimation \n> was very wrong. You can increase the statistics tarhet on the login_time \n> and it will probably be better (after the next analyze).\n\nGiven the nature of the data (login times), I'd imagine that the problem\nis simply that he hasn't analyzed recently enough. A bump in stats\ntarget may not be needed, but he's going to have to re-analyze that\ncolumn often if he wants this sort of query to be estimated accurately,\nbecause the fraction of entries later than a given time T is *always*\ngoing to be changing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 23 Jul 2004 10:45:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Wrong index choosen? "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nTom Lane wrote:\n\n| Dennis Bjorklund <[email protected]> writes:\n|\n|>In this plan it estimates to get 481 but it got 22477. So the estimation\n|>was very wrong. You can increase the statistics tarhet on the login_time\n|>and it will probably be better (after the next analyze).\n|\n|\n| Given the nature of the data (login times), I'd imagine that the problem\n| is simply that he hasn't analyzed recently enough. A bump in stats\n| target may not be needed, but he's going to have to re-analyze that\n| column often if he wants this sort of query to be estimated accurately,\n| because the fraction of entries later than a given time T is *always*\n| going to be changing.\n\nWell know that I think about it, I felt my shoulders covered by\npg_autovacuum but looking at the log I see that table never analyzed!\nAaargh.\n\nI already applied the patch for the autovacuum but evidently I have to\nmake it more aggressive, I'm sorry that I can not made him more aggressive\nonly for this table.\n\n\nThank you all.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBAU3g7UpzwH2SGd4RAhbEAKDLbKXLGRqphBbfyBh6cu7QoqFQhACfdDtu\ncGS0K1UuTuwTDp4P2JjQ30A=\n=aepf\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Fri, 23 Jul 2004 19:41:53 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Wrong index choosen?"
},
{
"msg_contents": "Gaetano Mendola wrote:\n> Tom Lane wrote:\n> | Given the nature of the data (login times), I'd imagine that the problem\n> | is simply that he hasn't analyzed recently enough. A bump in stats\n> | target may not be needed, but he's going to have to re-analyze that\n> | column often if he wants this sort of query to be estimated accurately,\n> | because the fraction of entries later than a given time T is *always*\n> | going to be changing.\n> \n> Well know that I think about it, I felt my shoulders covered by\n> pg_autovacuum but looking at the log I see that table never analyzed!\n> Aaargh.\n> \n> I already applied the patch for the autovacuum but evidently I have to\n> make it more aggressive, I'm sorry that I can not made him more aggressive\n> only for this table.\n\nYeah, the version of autovacuum in 7.4 contrib doesn't allow table \nspecific settings. The patch I have sumbitted for 7.5 does, so \nhopefully this will be better in the future.\n\nYou can however set the VACUUM and ANALYZE thresholds independently. \nSo perhpaps it will help you if you set your ANALYZE setting to be very \naggressive and your VACUUM settings to something more standard.\n\nMatthew\n",
"msg_date": "Fri, 23 Jul 2004 14:12:09 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Wrong index choosen?"
},
{
"msg_contents": "Hi all,\n\njust as a question.\n\nThere will be some day a feature that let you force\nthe planner to use an specific index, like oracle\ndoes?\n\nOf course the planner is smart enough most times but\nsometimes such an option would be usefull, don't you\nthink so?\n\nThanx in advance,\nJaime Casanova\n\n_________________________________________________________\nDo You Yahoo!?\nInformaci�n de Estados Unidos y Am�rica Latina, en Yahoo! Noticias.\nVis�tanos en http://noticias.espanol.yahoo.com\n",
"msg_date": "Fri, 23 Jul 2004 16:51:11 -0500 (CDT)",
"msg_from": "=?iso-8859-1?q?Jaime=20Casanova?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Wrong index choosen?"
},
{
"msg_contents": "On Fri, 2004-07-23 at 15:51, Jaime Casanova wrote:\n> Hi all,\n> \n> just as a question.\n> \n> There will be some day a feature that let you force\n> the planner to use an specific index, like oracle\n> does?\n> \n> Of course the planner is smart enough most times but\n> sometimes such an option would be usefull, don't you\n> think so?\n\nA planner that always made the right choice would be the most useful\nthing. After that, the ability to \"push\" the planner towards an index\nwould be pretty nice.\n\nAdding features that make PostgreSQL more error prone (i.e. forcing\nparticular index usage, etc.) and harder to drive but allow an expert to\nget what they want is kind of a dangerous road to tread.\n\n",
"msg_date": "Fri, 23 Jul 2004 16:02:53 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Wrong index choosen?"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMatthew T. O'Connor wrote:\n\n| Gaetano Mendola wrote:\n|\n|> Tom Lane wrote:\n|> | Given the nature of the data (login times), I'd imagine that the\n|> problem\n|> | is simply that he hasn't analyzed recently enough. A bump in stats\n|> | target may not be needed, but he's going to have to re-analyze that\n|> | column often if he wants this sort of query to be estimated accurately,\n|> | because the fraction of entries later than a given time T is *always*\n|> | going to be changing.\n|>\n|> Well know that I think about it, I felt my shoulders covered by\n|> pg_autovacuum but looking at the log I see that table never analyzed!\n|> Aaargh.\n|>\n|> I already applied the patch for the autovacuum but evidently I have to\n|> make it more aggressive, I'm sorry that I can not made him more\n|> aggressive\n|> only for this table.\n|\n|\n| Yeah, the version of autovacuum in 7.4 contrib doesn't allow table\n| specific settings. The patch I have sumbitted for 7.5 does, so\n| hopefully this will be better in the future.\n|\n| You can however set the VACUUM and ANALYZE thresholds independently. So\n| perhpaps it will help you if you set your ANALYZE setting to be very\n| aggressive and your VACUUM settings to something more standard.\n\nWell I think pg_autovacuum as is in 7.4 can not help me for this particular\ntable.\n\nThe table have 4.8 milions rows and I have for that table almost 10252 new\nentries for day.\n\nI'm using pg_autovacuum with -a 200 -A 0.8 this means a threashold for\nthat table equal to: 3849008 and if I understod well the way pg_autovacuum\nworks this means have an analyze each 375 days, and I need an analyze for\neach day, at least.\n\nSo I think is better for me put an analyze for that table in the cron.\n\nAm I wrong ?\n\n\nRegards\nGaetano Mendola\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBAag87UpzwH2SGd4RAqb1AJ416ioVEY5T/dqnAQsaaqqoWcU3ZACghzsO\n4xMowWp/MM8+i7DhoRO4018=\n=/gNn\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sat, 24 Jul 2004 02:07:27 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Wrong index choosen?"
},
{
"msg_contents": "Gaetano Mendola wrote:\n> Well I think pg_autovacuum as is in 7.4 can not help me for this particular\n> table.\n> \n> The table have 4.8 milions rows and I have for that table almost 10252 new\n> entries for day.\n> \n> I'm using pg_autovacuum with -a 200 -A 0.8 this means a threashold for\n> that table equal to: 3849008 and if I understod well the way pg_autovacuum\n> works this means have an analyze each 375 days, and I need an analyze for\n> each day, at least.\n> \n> So I think is better for me put an analyze for that table in the cron.\n> \n> Am I wrong ?\n\nNo, I think you are right. You could do something like -a 1000 -A \n.00185, but that will probably for an analyze too often for most of your \nother tables.\n\n\n",
"msg_date": "Fri, 23 Jul 2004 21:49:38 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Wrong index choosen?"
}
] |
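As a sanity check of the numbers above, assuming the contrib pg_autovacuum formula of threshold = base value + scaling factor x reltuples: with -a 200 -A 0.8 and about 4.8 million rows, the analyze threshold is roughly 200 + 0.8 x 4,800,000, about 3.84 million changed rows, and at roughly 10,000 inserts per day that is on the order of a year between analyzes, matching the 375-day estimate. The cron-based workaround is then simply a daily:

ANALYZE user_logs;

(table name taken from the earlier plans), which keeps the login_time statistics fresh without making the global autovacuum settings overly aggressive.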
[
{
"msg_contents": "Hi all,\n\nin a large table (millions of rows) I am using \na variable-length user\ndefined type \nto store relatively short field entries, i.e. the \nlength of certain\nfields could be restricted to 2^16 or even \n2^8 characters.\n\nNow I wonder whether it would be possible \nto save storage place, by\nusing a smaller length field, i.e. 2 Byte or \neven better 1 Byte per entry instead of 4 \nByte.\n\nIs there a possibility to customize the size of \nthe length field or does\nPostgresql already internally optimize this?\n\nThanks in advance,\n Martin. \n\n\n",
"msg_date": "Fri, 23 Jul 2004 23:28:15 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "variable length - user defined types/storage place"
}
] |
[
{
"msg_contents": "Hi all - \nI've got a schema I'm working on modifying, nad I need some help getting\nthe best performance out. The orginal schema has a many to many linkage\nbetween a couple tables, using a two column linkage table. This is used\nto represent groups of people and their relationship to an object\n(authors, copyrightholders, maintainers) This worked fine, and, with the\nright indixes, is quite zippy. Approximate schems:\n\ntable content (\ncontentid serial,\nname text,\n<...>\nauthorgroupid int,\ncpholdergroupid int,\nmaintgroupid int)\n\ntable groups (\npersonid text,\ngroupid int)\n\nNote that neither grouid nor personid are unique.\n\nNow the users want not just groups, but ordered lists. Well, that's just\nfine: we could do it with another column in the groups linkage table,\nand some additional logic in the middleware for detecting identical\ngroups, but it occured to me that PG's array types are just the ticket\nfor ordered lists like this.\n\nSo, by dropping arrays of personids (authors, copyrightholders,\nmaintainers, ...) into the content table, I can do everything I need.\n\nOnly one problem. Retreiving all the content for a particular\nperson/role is fairly common. Queries of the form:\n\nSELECT * from content c join groups g on c.authorgroupid = g.personid\nwhere personid = 'ross';\n\nwork fine and use the index on groups.personid.\n\nIn the new schema, the same thing is:\n\nSELECT * from content where 42 = ANY (authors);\n\nWorks fine, but for the life of me I can't find nor figure out how to\nbuild an index that will be used to speed this along. Any ideas?\n\nI'm using 7.4.3, BTW.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nResearch Scientist phone: 713-348-6166\nThe Connexions Project http://cnx.rice.edu fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n\n\n",
"msg_date": "Sun, 25 Jul 2004 23:57:10 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "arrays and indexes"
},
{
"msg_contents": "\n\"Ross J. Reedstrom\" <[email protected]> writes:\n\n> In the new schema, the same thing is:\n> \n> SELECT * from content where 42 = ANY (authors);\n> \n> Works fine, but for the life of me I can't find nor figure out how to\n> build an index that will be used to speed this along. Any ideas?\n\nWell that's basically the problem with denormalized data like this.\n\nHave you resolved what you're going to do if two sessions try to add a user to\nthe same group at the same time? Or how you'll go about removing a user from\nall his groups in one shot?\n\nBasically, if you denormalize in this fashion it becomes hard to use the\ngroups as anything but single monolithic objects. Whereas normalized data can\nbe queried and updated from other points of view like in the case you name\nabove.\n\nPostgres does have a way to do what you ask, though. It involves GiST indexes\nand the operators from the contrib/intarray directory from the Postgres\nsource.\n\nHowever I warn you in advance that this is fairly esoteric stuff and will take\nsome time to get used to. And at least in my case I found the indexes didn't\nactually help much for my data sets, probably because they just weren't big\nenough to benefit.\n\n-- \ngreg\n\n",
"msg_date": "26 Jul 2004 02:27:20 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: arrays and indexes"
},
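A hedged sketch of the contrib/intarray route Greg mentions, assuming the authors column is an integer array (int[]) as in Ross's examples; gist__int_ops and the && (overlap) operator come from the intarray module, which must be installed from contrib first:

CREATE INDEX content_authors_gist ON content USING gist (authors gist__int_ops);

-- intarray form of "42 = ANY (authors)" that the GiST index can be used for:
SELECT * FROM content WHERE authors && '{42}'::int[];

Whether the index actually pays off depends on the data, as Greg notes in the same message.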
{
"msg_contents": "Ross wrote:\n> Hi all -\n> I've got a schema I'm working on modifying, nad I need some help\ngetting\n> the best performance out. The orginal schema has a many to many\nlinkage\n> between a couple tables, using a two column linkage table. This is\nused\n> to represent groups of people and their relationship to an object\n> (authors, copyrightholders, maintainers) This worked fine, and, with\nthe\n> right indixes, is quite zippy. Approximate schems:\n> \n> table content (\n> contentid serial,\n> name text,\n> <...>\n> authorgroupid int,\n> cpholdergroupid int,\n> maintgroupid int)\n> \n> table groups (\n> personid text,\n> groupid int)\n> \n> Note that neither grouid nor personid are unique.\n> \n> Now the users want not just groups, but ordered lists. Well, that's\njust\n> fine: we could do it with another column in the groups linkage table,\n> and some additional logic in the middleware for detecting identical\n> groups, but it occured to me that PG's array types are just the ticket\n> for ordered lists like this.\n> \n> So, by dropping arrays of personids (authors, copyrightholders,\n> maintainers, ...) into the content table, I can do everything I need.\n> \n> Only one problem. Retreiving all the content for a particular\n> person/role is fairly common. Queries of the form:\n> \n> SELECT * from content c join groups g on c.authorgroupid = g.personid\n> where personid = 'ross';\n> \n> work fine and use the index on groups.personid.\n> \n> In the new schema, the same thing is:\n> \n> SELECT * from content where 42 = ANY (authors);\n> \n> Works fine, but for the life of me I can't find nor figure out how to\n> build an index that will be used to speed this along. Any ideas?\n> \n> I'm using 7.4.3, BTW.\n\nArrays are usually a bad choice to put in your tables with a couple of\nexceptions. Keep in mind that you can generate the array in the query\nstage using custom aggregates if you prefer to deal with them on the\nclient side. The basic problem is they introduce flexibility issues and\nare usually better handled by moving the data to a dependant table.\n\nHere are cases you might want to consider using arrays in your tables:\n1. Your array bounds are small and known at design time (think: pay by\nquarter example in the docs).\n2. Your array will not contain more than one or two dependant elements.\n3. You are dealing with an extreme performance situation and you have\ntried doing things the proper way first.\n\nThere are other exceptions...arrays can be a powerful tool albeit a\ndangerous one...just know what you are getting into. A firm\nunderstanding of relational principles are a tremendous help.\n\nIf your array bounds are known, it possible to get around the index\nproblem in limited cases by using a custom function (but only when the\narray bounds are known:\n\ncreate function any_quarter_over_10k (numeric[]) returns boolean as\n'\n select \n\tcase \n\t when $1[1] = > 10000 then true\n when $1[2] = > 10000 then true\n when $1[3] = > 10000 then true\n when $1[4] = > 10000 then true\n else false\n\tend;\t\t\n\n' language 'sql' IMMUTABLE;\n\ncreate index t_q_10k_idx on t(any_quarter_over_10k(salary_qtr));\n\nselect * from t where any_quarter_over_10k(t.salary_qtr) = true;\n\n\nGood luck!\nMerlin\n",
"msg_date": "Mon, 26 Jul 2004 10:56:54 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: arrays and indexes"
},
{
"msg_contents": "On Mon, Jul 26, 2004 at 02:27:20AM -0400, Greg Stark wrote:\n> \n> \"Ross J. Reedstrom\" <[email protected]> writes:\n> \n> > In the new schema, the same thing is:\n> > \n> > SELECT * from content where 42 = ANY (authors);\n> > \n> > Works fine, but for the life of me I can't find nor figure out how to\n> > build an index that will be used to speed this along. Any ideas?\n> \n> Well that's basically the problem with denormalized data like this.\n> \n> Have you resolved what you're going to do if two sessions try to add a user to\n> the same group at the same time? Or how you'll go about removing a user from\n> all his groups in one shot?\n\nWe've got plenty of interlocks in the middleware to handle the first\n(mainly because this is an authoring system where everyone has to agree\nto participate, and acknowledge the open license on the materials)\n\nSecond, they _can't_ be removed: we're effectively a write only archive.\nEven if we weren't it would be a rare event and could go slowly (loop\nover groups in the middleware, probably)\n\n> \n> Basically, if you denormalize in this fashion it becomes hard to use the\n> groups as anything but single monolithic objects. Whereas normalized data can\n> be queried and updated from other points of view like in the case you name\n> above.\n\nThese groups _really are_ ideal for Joe Conway's work on arrays: we need\nordered vectors, so we'd be sorting all the time, otherwise. They're\nstatic, and they're read only. The one thing they're not is fixed, known\nsize (Sorry Merlin). They work fine for the query as shown: the only\nissue is performance.\n\n> Postgres does have a way to do what you ask, though. It involves GiST\n> indexes and the operators from the contrib/intarray directory from the\n> Postgres source.\n\nWell, yes, that's how it used to be done. I figured the new array\nsupport should be able to handle it without the addon, however.\n\n> However I warn you in advance that this is fairly esoteric stuff and\n> will take some time to get used to. And at least in my case I found\n> the indexes didn't actually help much for my data sets, probably\n> because they just weren't big enough to benefit.\n\nI know that they should help in this case: we've got lots of content.\nAny particular author or maintainter will be in a small fraction of\nthose. i.e.: it's ideal for an index. And the current joined case uses\nan index, when it's available. I'll take a look at the GiST/contrib work,\nanyway. \n\nThanks - \n\nRoss \n-- \nRoss Reedstrom, Ph.D. [email protected]\nResearch Scientist phone: 713-348-6166\nThe Connexions Project http://cnx.rice.edu fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n",
"msg_date": "Mon, 26 Jul 2004 13:47:03 -0500",
"msg_from": "\"Ross J. Reedstrom\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: arrays and indexes"
},
{
"msg_contents": "\n>> > SELECT * from content where 42 = ANY (authors);\n\n>> Postgres does have a way to do what you ask, though. It involves GiST\n>> indexes and the operators from the contrib/intarray directory from the\n>> Postgres source.\n\n\tI have tried to use these indexes, and the performance was very good. It\ncan be faster (in fact much faster) than a join with an additional table,\nbecause you don't have a join. The SQL array syntax is a pain, though.\n",
"msg_date": "Mon, 26 Jul 2004 22:35:01 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: arrays and indexes"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n\n> These groups _really are_ ideal for Joe Conway's work on arrays: we need\n> ordered vectors, so we'd be sorting all the time, otherwise. They're\n> static, and they're read only. The one thing they're not is fixed, known\n> size (Sorry Merlin). They work fine for the query as shown: the only\n> issue is performance.\n\nWell just as long as you understand the trade-offs. Denormalizing can be\nuseful but you need to know what flexibility you're losing too.\n\n> > Postgres does have a way to do what you ask, though. It involves GiST\n> > indexes and the operators from the contrib/intarray directory from the\n> > Postgres source.\n> \n> Well, yes, that's how it used to be done. I figured the new array\n> support should be able to handle it without the addon, however.\n\nI think you can btree index arrays now, which is new, but it's not useful for\nthe kind of lookup you're doing. It would only be useful for joining on array\ntypes or looking for groups with given content, or things like that.\n\n> > However I warn you in advance that this is fairly esoteric stuff and\n> > will take some time to get used to. And at least in my case I found\n> > the indexes didn't actually help much for my data sets, probably\n> > because they just weren't big enough to benefit.\n> \n> I know that they should help in this case: we've got lots of content.\n> Any particular author or maintainter will be in a small fraction of\n> those. i.e.: it's ideal for an index. And the current joined case uses\n> an index, when it's available. I'll take a look at the GiST/contrib work,\n> anyway. \n\nI would be curious to know how it goes. My own project uses denormalized sets\nstored as arrays as well, though in my case they're precalculated from the\nfully normalized data. I tried to use GiST indexes but ran into problems\ncombining the btree-GiST code with array GiST code in a multicolumn index. I\nstill don't really know why it failed, but after two days building the index I\ngave up.\n\n-- \ngreg\n\n",
"msg_date": "26 Jul 2004 16:40:32 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: arrays and indexes"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> I would be curious to know how it goes. My own project uses\n> denormalized sets stored as arrays as well, though in my case they're\n> precalculated from the fully normalized data. I tried to use GiST\n> indexes but ran into problems combining the btree-GiST code with array\n> GiST code in a multicolumn index. I still don't really know why it\n> failed, but after two days building the index I gave up.\n\nSounds like a bug to me. Could you put together a test case?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Jul 2004 17:21:12 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: arrays and indexes "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> > I still don't really know why it failed, but after two days building the\n> > index I gave up.\n> \n> Sounds like a bug to me. Could you put together a test case?\n\nAt the time I contacted one of the GiST authors and we went over things for a\nwhile. They diagnosed the problem as being caused by having a poor selectivity\nGiST btree as the leading column in the index.\n\nHe seemed to think this was fairly fundamental and wasn't something they were\ngoing to be able to address. And I was fairly certain I didn't want to turn\nthe index upside down to have the more selective columns first (as is usually\nnormal) for various reasons.\n\nSo I gave it up as a lost cause. In any case in my application it was unlikely\nto really help. I expect that leading btree index to narrow the search to only\na few hundred or few thousand records in the normal case. So the access times\nare already within reason even having to dig through all the records. And\nsince other queries are likely to need other records from that set I'll need\nthem all in cache eventually. There are a lot of array columns to search\nthrough, so the added i/o to read all those indexes would probably be a net\nloss when they push other things out of cache.\n\nI could try setting up a test case, but I think all it took was having a\nbtree-gist index that was insufficiently selective. In my case I had about 900\ninteger values each on the order of 100-1000 records.\n\n-- \ngreg\n\n",
"msg_date": "27 Jul 2004 00:32:37 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: arrays and indexes"
}
] |
[
{
"msg_contents": "Hello --\n\nTo increase query (i.e. select) performance, we're trying to get \npostgres to use an index based on a timestamp column in a given table.\n\nEvent-based data is put into this table several times a minute, with the \ntimestamp indicating when a particular row was placed in the table.\n\nThe table is purged daily, retaining only the rows that are less than 7 \ndays old. That is, any row within the table is less than 1 week old (+ \n1 day, since the purge is daily).\n\nA typical number of rows in the table is around 400,000.\n\nA \"VACUUM FULL ANALYZE\" is performed every 3 hours.\n\n\nThe problem:\nWe often query the table to extract those rows that are, say, 10 minutes \nold or less.\n\nGiven there are 10080 minutes per week, the planner could, properly \nconfigured, estimate the number of rows returned by such a query to be:\n\n10 min/ 10080 min * 400,000 = 0.001 * 400,000 = 400.\n\nMaking an index scan, with the timestamp field the index, far faster \nthen a sequential scan.\n\n\nHowever, we can't get the planner to do an timestamp-based index scan.\n\nAnyone know what to do?\n\n\nHere's the table specs:\n\nmonitor=# \\d \"eventtable\"\n Table \"public.eventtable\"\n Column | Type | \nModifiers\n-----------+-----------------------------+--------------------------------------------------------------\n timestamp | timestamp without time zone | not null default \n('now'::text)::timestamp(6) with time zone\n key | bigint | not null default \nnextval('public.\"eventtable_key_seq\"'::text)\n propagate | boolean |\n facility | character(10) |\n priority | character(10) |\n host | character varying(128) | not null\n message | text | not null\nIndexes:\n \"eventtable_pkey\" primary key, btree (\"timestamp\", \"key\")\n \"eventtable_host\" btree (host)\n \"eventtable_timestamp\" btree (\"timestamp\")\n\n\nHere's a query (with \"explain analyze\"):\n\nmonitor=# explain analyze select * from \"eventtable\" where timestamp > \nCURRENT_TIMESTAMP - INTERVAL '10 minutes';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on \"eventtable\" (cost=0.00..19009.97 rows=136444 width=155) \n(actual time=11071.073..11432.522 rows=821 loops=1)\n Filter: ((\"timestamp\")::timestamp with time zone > \n(('now'::text)::timestamp(6) with time zone - '@ 10 mins'::interval))\n Total runtime: 11433.384 ms\n(3 rows)\n\n\nHere's something strange. We try to disable sequential scans, but to no \navail. The estimated cost skyrockets, though:\n\nmonitor=# set enable_seqscan = false;\nSET\nmonitor=# explain analyze select * from \"eventtable\" where timestamp > \nCURRENT_TIMESTAMP - INTERVAL '10 minutes';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on \"eventtable\" (cost=100000000.00..100019009.97 rows=136444 \nwidth=155) (actual time=9909.847..9932.438 rows=1763 loops=1)\n Filter: ((\"timestamp\")::timestamp with time zone > \n(('now'::text)::timestamp(6) with time zone - '@ 10 mins'::interval))\n Total runtime: 9934.353 ms\n(3 rows)\n\nmonitor=# set enable_seqscan = true;\nSET\nmonitor=#\n\n\n\nAny help is greatly appreciated :)\n\n-- Harmon\n\n\n\n",
"msg_date": "Mon, 26 Jul 2004 10:49:26 -0400",
"msg_from": "\"Harmon S. Nine\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Timestamp-based indexing"
},
{
"msg_contents": "\n\nHarmon S. Nine wrote:\n\n> monitor=# explain analyze select * from \"eventtable\" where timestamp > \n> CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n> QUERY PLAN\n\nTry\n\nSELECT * FROM eventtable where timestamp BETWEEN (CURRENT_TIMESTAMP - \nINTERVAL '10 minutes') AND CURRENT_TIMESTAMP;\n\nThis should will use a range off valid times. What your query is doing \nis looking for 10 minutes ago to an infinate future. Statically \nspeaking that should encompass most of the table because you have an \ninfinate range. No index will be used. If you assign a range the \nplanner can fiqure out what you are looking for.\n\n-- \nKevin Barnard\nSpeed Fulfillment and Call Center\[email protected]\n214-258-0120\n\n",
"msg_date": "Mon, 26 Jul 2004 09:58:38 -0500",
"msg_from": "Kevin Barnard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp-based indexing"
},
{
"msg_contents": "VACUUM FULL ANALYZE every 3 hours seems a little severe. You will \nprobably be be served just as well by VACUUM ANALYZE. But you probably \ndon't need the VACUUM part most of the time. You might try doing an \nANALYZE on the specific tables you are having issues with. Since \nANALYZE should be much quicker and not have the performance impact of a \nVACUUM, you could do it every hour, or even every 15 minutes.\n\nGood luck...\n\nHarmon S. Nine wrote:\n\n> Hello --\n>\n> To increase query (i.e. select) performance, we're trying to get \n> postgres to use an index based on a timestamp column in a given table.\n>\n> Event-based data is put into this table several times a minute, with \n> the timestamp indicating when a particular row was placed in the table.\n>\n> The table is purged daily, retaining only the rows that are less than \n> 7 days old. That is, any row within the table is less than 1 week old \n> (+ 1 day, since the purge is daily).\n>\n> A typical number of rows in the table is around 400,000.\n>\n> A \"VACUUM FULL ANALYZE\" is performed every 3 hours.\n>\n>\n> The problem:\n> We often query the table to extract those rows that are, say, 10 \n> minutes old or less.\n>\n> Given there are 10080 minutes per week, the planner could, properly \n> configured, estimate the number of rows returned by such a query to be:\n>\n> 10 min/ 10080 min * 400,000 = 0.001 * 400,000 = 400.\n>\n> Making an index scan, with the timestamp field the index, far faster \n> then a sequential scan.\n>\n>\n> However, we can't get the planner to do an timestamp-based index scan.\n>\n> Anyone know what to do?\n>\n>\n> Here's the table specs:\n>\n> monitor=# \\d \"eventtable\"\n> Table \"public.eventtable\"\n> Column | Type | \n> Modifiers\n> -----------+-----------------------------+-------------------------------------------------------------- \n>\n> timestamp | timestamp without time zone | not null default \n> ('now'::text)::timestamp(6) with time zone\n> key | bigint | not null default \n> nextval('public.\"eventtable_key_seq\"'::text)\n> propagate | boolean |\n> facility | character(10) |\n> priority | character(10) |\n> host | character varying(128) | not null\n> message | text | not null\n> Indexes:\n> \"eventtable_pkey\" primary key, btree (\"timestamp\", \"key\")\n> \"eventtable_host\" btree (host)\n> \"eventtable_timestamp\" btree (\"timestamp\")\n>\n>\n> Here's a query (with \"explain analyze\"):\n>\n> monitor=# explain analyze select * from \"eventtable\" where timestamp > \n> CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------- \n>\n> Seq Scan on \"eventtable\" (cost=0.00..19009.97 rows=136444 width=155) \n> (actual time=11071.073..11432.522 rows=821 loops=1)\n> Filter: ((\"timestamp\")::timestamp with time zone > \n> (('now'::text)::timestamp(6) with time zone - '@ 10 mins'::interval))\n> Total runtime: 11433.384 ms\n> (3 rows)\n>\n>\n> Here's something strange. We try to disable sequential scans, but to \n> no avail. 
The estimated cost skyrockets, though:\n>\n> monitor=# set enable_seqscan = false;\n> SET\n> monitor=# explain analyze select * from \"eventtable\" where timestamp > \n> CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------- \n>\n> Seq Scan on \"eventtable\" (cost=100000000.00..100019009.97 rows=136444 \n> width=155) (actual time=9909.847..9932.438 rows=1763 loops=1)\n> Filter: ((\"timestamp\")::timestamp with time zone > \n> (('now'::text)::timestamp(6) with time zone - '@ 10 mins'::interval))\n> Total runtime: 9934.353 ms\n> (3 rows)\n>\n> monitor=# set enable_seqscan = true;\n> SET\n> monitor=#\n>\n>\n>\n> Any help is greatly appreciated :)\n>\n> -- Harmon\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Mon, 26 Jul 2004 11:09:53 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp-based indexing"
},
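A sketch of the lighter maintenance schedule suggested above (the table name is taken from the original post; the exact intervals are assumptions, not something stated in the thread):

    -- every 15 minutes or so, refresh the planner statistics only
    ANALYZE "eventtable";
    -- once a day, right after the purge of week-old rows, reclaim dead space
    VACUUM ANALYZE "eventtable";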
{
"msg_contents": "Thank you for your response :)\n\nThis improves the row estimation, but it is still using a sequential scan.\n\nIt really seems like the query would go faster if an index scan was \nused, given the number of rows fetched (both estimated and actual) is \nsignificantly less than the number of rows in the table.\n\nIs there some way to get the planner to use the timestamp as an index on \nthese queries?\n\n\nmonitor=# explain analyze select * from \"eventtable\" where timestamp \nbetween (CURRENT_TIMESTAMP - INTERVAL '10 min') AND CURRENT_TIMESTAMP;\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on \"eventtable\" (cost=0.00..23103.29 rows=2047 width=155) \n(actual time=10227.253..10276.944 rows=1662 loops=1)\n Filter: (((\"timestamp\")::timestamp with time zone >= \n(('now'::text)::timestamp(6) with time zone - '@ 10 mins'::interval)) \nAND ((\"timestamp\")::timestamp with time zone <= \n('now'::text)::timestamp(6) with time zone))\n Total runtime: 10278.628 ms\n(3 rows)\n\n\nmonitor=# SELECT COUNT(*) FROM \"eventtable\";\n count\n--------\n 425602\n(1 row)\n\nmonitor=#\n\n\n-- Harmon\n\n\nKevin Barnard wrote:\n\n>\n>\n> Harmon S. Nine wrote:\n>\n>> monitor=# explain analyze select * from \"eventtable\" where timestamp \n>> > CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n>> QUERY PLAN\n>\n>\n> Try\n>\n> SELECT * FROM eventtable where timestamp BETWEEN (CURRENT_TIMESTAMP - \n> INTERVAL '10 minutes') AND CURRENT_TIMESTAMP;\n>\n> This should will use a range off valid times. What your query is \n> doing is looking for 10 minutes ago to an infinate future. Statically \n> speaking that should encompass most of the table because you have an \n> infinate range. No index will be used. If you assign a range the \n> planner can fiqure out what you are looking for.\n>\n\n",
"msg_date": "Mon, 26 Jul 2004 11:42:26 -0400",
"msg_from": "\"Harmon S. Nine\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Timestamp-based indexing"
},
{
"msg_contents": "We were getting a little desperate, so we engaged in overkill to rule \nout lack-of-analyze as a cause for the slow queries.\n\nThanks for your advice :)\n\n-- Harmon\n\nMatthew T. O'Connor wrote:\n\n> VACUUM FULL ANALYZE every 3 hours seems a little severe. You will \n> probably be be served just as well by VACUUM ANALYZE. But you \n> probably don't need the VACUUM part most of the time. You might try \n> doing an ANALYZE on the specific tables you are having issues with. \n> Since ANALYZE should be much quicker and not have the performance \n> impact of a VACUUM, you could do it every hour, or even every 15 minutes.\n>\n> Good luck...\n>\n> Harmon S. Nine wrote:\n>\n>> Hello --\n>>\n>> To increase query (i.e. select) performance, we're trying to get \n>> postgres to use an index based on a timestamp column in a given table.\n>>\n>> Event-based data is put into this table several times a minute, with \n>> the timestamp indicating when a particular row was placed in the table.\n>>\n>> The table is purged daily, retaining only the rows that are less than \n>> 7 days old. That is, any row within the table is less than 1 week \n>> old (+ 1 day, since the purge is daily).\n>>\n>> A typical number of rows in the table is around 400,000.\n>>\n>> A \"VACUUM FULL ANALYZE\" is performed every 3 hours.\n>>\n>>\n>> The problem:\n>> We often query the table to extract those rows that are, say, 10 \n>> minutes old or less.\n>>\n>> Given there are 10080 minutes per week, the planner could, properly \n>> configured, estimate the number of rows returned by such a query to be:\n>>\n>> 10 min/ 10080 min * 400,000 = 0.001 * 400,000 = 400.\n>>\n>> Making an index scan, with the timestamp field the index, far faster \n>> then a sequential scan.\n>>\n>>\n>> However, we can't get the planner to do an timestamp-based index scan.\n>>\n>> Anyone know what to do?\n>>\n>>\n>> Here's the table specs:\n>>\n>> monitor=# \\d \"eventtable\"\n>> Table \"public.eventtable\"\n>> Column | Type | \n>> Modifiers\n>> -----------+-----------------------------+-------------------------------------------------------------- \n>>\n>> timestamp | timestamp without time zone | not null default \n>> ('now'::text)::timestamp(6) with time zone\n>> key | bigint | not null default \n>> nextval('public.\"eventtable_key_seq\"'::text)\n>> propagate | boolean |\n>> facility | character(10) |\n>> priority | character(10) |\n>> host | character varying(128) | not null\n>> message | text | not null\n>> Indexes:\n>> \"eventtable_pkey\" primary key, btree (\"timestamp\", \"key\")\n>> \"eventtable_host\" btree (host)\n>> \"eventtable_timestamp\" btree (\"timestamp\")\n>>\n>>\n>> Here's a query (with \"explain analyze\"):\n>>\n>> monitor=# explain analyze select * from \"eventtable\" where timestamp \n>> > CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------------------- \n>>\n>> Seq Scan on \"eventtable\" (cost=0.00..19009.97 rows=136444 width=155) \n>> (actual time=11071.073..11432.522 rows=821 loops=1)\n>> Filter: ((\"timestamp\")::timestamp with time zone > \n>> (('now'::text)::timestamp(6) with time zone - '@ 10 mins'::interval))\n>> Total runtime: 11433.384 ms\n>> (3 rows)\n>>\n>>\n>> Here's something strange. We try to disable sequential scans, but to \n>> no avail. 
The estimated cost skyrockets, though:\n>>\n>> monitor=# set enable_seqscan = false;\n>> SET\n>> monitor=# explain analyze select * from \"eventtable\" where timestamp \n>> > CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n>> QUERY PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------- \n>>\n>> Seq Scan on \"eventtable\" (cost=100000000.00..100019009.97 \n>> rows=136444 width=155) (actual time=9909.847..9932.438 rows=1763 \n>> loops=1)\n>> Filter: ((\"timestamp\")::timestamp with time zone > \n>> (('now'::text)::timestamp(6) with time zone - '@ 10 mins'::interval))\n>> Total runtime: 9934.353 ms\n>> (3 rows)\n>>\n>> monitor=# set enable_seqscan = true;\n>> SET\n>> monitor=#\n>>\n>>\n>>\n>> Any help is greatly appreciated :)\n>>\n>> -- Harmon\n>>\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>>\n>\n\n",
"msg_date": "Mon, 26 Jul 2004 11:46:42 -0400",
"msg_from": "\"Harmon S. Nine\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Timestamp-based indexing"
},
{
"msg_contents": "\"Matthew T. O'Connor\" <[email protected]> writes:\n> VACUUM FULL ANALYZE every 3 hours seems a little severe.\n\nIf rows are only deleted once a day, that's a complete waste of time,\nindeed.\n\nI'd suggest running a plain VACUUM just after the deletion pass is done.\nANALYZEs are a different matter and possibly need to be done every\nfew hours, seeing that your maximum timestamp value is constantly\nchanging.\n\n>> monitor=# set enable_seqscan = false;\n>> SET\n>> monitor=# explain analyze select * from \"eventtable\" where timestamp > \n>> CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n>> QUERY PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------- \n>> \n>> Seq Scan on \"eventtable\" (cost=100000000.00..100019009.97 rows=136444 \n>> width=155) (actual time=9909.847..9932.438 rows=1763 loops=1)\n>> Filter: ((\"timestamp\")::timestamp with time zone > \n>> (('now'::text)::timestamp(6) with time zone - '@ 10 mins'::interval))\n>> Total runtime: 9934.353 ms\n\nYou've got some datatype confusion, too. CURRENT_TIMESTAMP yields\ntimestamp with time zone, and since you made the timestamp column\ntimestamp without time zone, you've got a cross-type comparison which is\nnot indexable (at least not in 7.4). My opinion is that you chose the\nwrong type for the column. Values that represent specific real-world\ninstants should always be timestamp with time zone, so that they mean\nthe same thing if you look at them in a different time zone.\n\nAnother issue here is that because CURRENT_TIMESTAMP - INTERVAL '10\nminutes' isn't a constant, the planner isn't able to make use of the\nstatistics gathered by ANALYZE anyway. That's why the rowcount estimate\nhas nothing to do with reality. Unless you force the decision with\n\"set enable_seqscan\", the planner will never pick an indexscan with this\nrowcount estimate. The standard advice for getting around this is to\nhide the nonconstant calculation inside a function that's deliberately\nmislabeled immutable. For example,\n\ncreate function ago(interval) returns timestamp with time zone as\n'select now() - $1' language sql strict immutable;\n\nselect * from \"eventtable\" where timestamp > ago('10 minutes');\n\nThe planner folds the \"ago('10 minutes')\" to a constant, checks the\nstatistics, and should do the right thing. Note however that this\ntechnique may break if you put a call to ago() inside a function\nor prepared statement --- it's only safe in interactive queries,\nwhere you don't care that the value is reduced to a constant during\nplanning instead of during execution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Jul 2004 11:59:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp-based indexing "
},
{
"msg_contents": "\nOn Mon, 26 Jul 2004, Harmon S. Nine wrote:\n\n> However, we can't get the planner to do an timestamp-based index scan.\n>\n> Anyone know what to do?\n\nI'd wonder if the type conversion is causing you problems.\nCURRENT_TIMESTAMP - INTERVAL '10 minutes' is a timestamp with time zone\nwhile the column is timestamp without time zone. Casting\nCURRENT_TIMESTAMP to timestamp without time zone seemed to make it able to\nchoose an index scan on 7.4.\n\n",
"msg_date": "Mon, 26 Jul 2004 09:02:03 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp-based indexing"
},
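A sketch of the cast Stephan describes, assuming the column is left as timestamp without time zone:

    SELECT * FROM "eventtable"
    WHERE "timestamp" >
          (CURRENT_TIMESTAMP - INTERVAL '10 minutes')::timestamp without time zone;

With both sides of the comparison the same type, the 7.4 planner is at least able to consider the index on "timestamp".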
{
"msg_contents": "THAT WAS IT!!\n\nThank you very much.\nIs there a way to change the type of \"CURRENT_TIMESTAMP\" to \"timestamp \nwithout time zone\" so that casting isn't needed?\n\n\nBTW, isn't this a bug?\n\n-- Harmon\n\n\nStephan Szabo wrote:\n\n>On Mon, 26 Jul 2004, Harmon S. Nine wrote:\n>\n> \n>\n>>However, we can't get the planner to do an timestamp-based index scan.\n>>\n>>Anyone know what to do?\n>> \n>>\n>\n>I'd wonder if the type conversion is causing you problems.\n>CURRENT_TIMESTAMP - INTERVAL '10 minutes' is a timestamp with time zone\n>while the column is timestamp without time zone. Casting\n>CURRENT_TIMESTAMP to timestamp without time zone seemed to make it able to\n>choose an index scan on 7.4.\n>\n> \n>\n\n\n\n\n\n\n\n\nTHAT WAS IT!!\n\nThank you very much.\nIs there a way to change the type of \"CURRENT_TIMESTAMP\" to \"timestamp\nwithout time zone\" so that casting isn't needed?\n\n\nBTW, isn't this a bug?\n\n-- Harmon\n\n\nStephan Szabo wrote:\n\nOn Mon, 26 Jul 2004, Harmon S. Nine wrote:\n\n \n\nHowever, we can't get the planner to do an timestamp-based index scan.\n\nAnyone know what to do?\n \n\n\nI'd wonder if the type conversion is causing you problems.\nCURRENT_TIMESTAMP - INTERVAL '10 minutes' is a timestamp with time zone\nwhile the column is timestamp without time zone. Casting\nCURRENT_TIMESTAMP to timestamp without time zone seemed to make it able to\nchoose an index scan on 7.4.",
"msg_date": "Mon, 26 Jul 2004 14:37:48 -0400",
"msg_from": "\"Harmon S. Nine\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Timestamp-based indexing"
},
{
"msg_contents": "Hi,\n\nHow about changing:\n\nCURRENT_TIMESTAMP - INTERVAL '10 minutes'\nto\n'now'::timestamptz - INTERVAL '10 minutes'\n\nIt seems to me that Postgres will treat it as\na constant.\n\nThanks,\n\n--- Tom Lane <[email protected]> wrote:\n> \"Matthew T. O'Connor\" <[email protected]> writes:\n> > VACUUM FULL ANALYZE every 3 hours seems a little\n> severe.\n> \n> If rows are only deleted once a day, that's a\n> complete waste of time,\n> indeed.\n> \n> I'd suggest running a plain VACUUM just after the\n> deletion pass is done.\n> ANALYZEs are a different matter and possibly need to\n> be done every\n> few hours, seeing that your maximum timestamp value\n> is constantly\n> changing.\n> \n> >> monitor=# set enable_seqscan = false;\n> >> SET\n> >> monitor=# explain analyze select * from\n> \"eventtable\" where timestamp > \n> >> CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n> >> QUERY PLAN\n> >>\n>\n-------------------------------------------------------------------------------------------------------------------------------------\n> \n> >> \n> >> Seq Scan on \"eventtable\" \n> (cost=100000000.00..100019009.97 rows=136444 \n> >> width=155) (actual time=9909.847..9932.438\n> rows=1763 loops=1)\n> >> Filter: ((\"timestamp\")::timestamp with time zone\n> > \n> >> (('now'::text)::timestamp(6) with time zone - '@\n> 10 mins'::interval))\n> >> Total runtime: 9934.353 ms\n> \n> You've got some datatype confusion, too. \n> CURRENT_TIMESTAMP yields\n> timestamp with time zone, and since you made the\n> timestamp column\n> timestamp without time zone, you've got a cross-type\n> comparison which is\n> not indexable (at least not in 7.4). My opinion is\n> that you chose the\n> wrong type for the column. Values that represent\n> specific real-world\n> instants should always be timestamp with time zone,\n> so that they mean\n> the same thing if you look at them in a different\n> time zone.\n> \n> Another issue here is that because CURRENT_TIMESTAMP\n> - INTERVAL '10\n> minutes' isn't a constant, the planner isn't able to\n> make use of the\n> statistics gathered by ANALYZE anyway. That's why\n> the rowcount estimate\n> has nothing to do with reality. Unless you force\n> the decision with\n> \"set enable_seqscan\", the planner will never pick an\n> indexscan with this\n> rowcount estimate. The standard advice for getting\n> around this is to\n> hide the nonconstant calculation inside a function\n> that's deliberately\n> mislabeled immutable. For example,\n> \n> create function ago(interval) returns timestamp with\n> time zone as\n> 'select now() - $1' language sql strict immutable;\n> \n> select * from \"eventtable\" where timestamp > ago('10\n> minutes');\n> \n> The planner folds the \"ago('10 minutes')\" to a\n> constant, checks the\n> statistics, and should do the right thing. Note\n> however that this\n> technique may break if you put a call to ago()\n> inside a function\n> or prepared statement --- it's only safe in\n> interactive queries,\n> where you don't care that the value is reduced to a\n> constant during\n> planning instead of during execution.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose\n> an index scan if your\n> joining column's datatypes do not match\n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail - You care about security. So do we.\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Mon, 26 Jul 2004 14:26:36 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp-based indexing "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> How about changing:\n\n> CURRENT_TIMESTAMP - INTERVAL '10 minutes'\n> to\n> 'now'::timestamptz - INTERVAL '10 minutes'\n\n> It seems to me that Postgres will treat it as\n> a constant.\n\nYeah, that works too, though again it might burn you if used inside a\nfunction or prepared statement. What you're doing here is to push the\nfreezing of the \"now\" value even further upstream, namely to initial\nparsing of the command.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 26 Jul 2004 17:40:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp-based indexing "
},
{
"msg_contents": ">>It seems to me that Postgres will treat it as\n>>a constant.\n> \n> \n> Yeah, that works too, though again it might burn you if used inside a\n> function or prepared statement. What you're doing here is to push the\n> freezing of the \"now\" value even further upstream, namely to initial\n> parsing of the command.\n\nWhat I do in my apps to get postgres to use the timestamp indexes in \nsome situations is to just generate the current timestamp in iso format \nand then just insert it into the query as a constant, for that run of \nthe query.\n\nChris\n\n",
"msg_date": "Tue, 27 Jul 2004 09:53:01 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp-based indexing"
},
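A sketch of what that looks like in practice; the literal below is only an example of a value the application would generate at run time:

    SELECT * FROM "eventtable"
    WHERE "timestamp" > '2004-07-26 10:39:00';

Because the right-hand side is a plain constant, the planner can use the column statistics and the timestamp index directly.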
{
"msg_contents": "Harmon,\n\n> A \"VACUUM FULL ANALYZE\" is performed every 3 hours.\n\nThe FULL part should not be necessary if you've set your max_fsm_pages high \nenough.\n\n> Given there are 10080 minutes per week, the planner could, properly\n> configured, estimate the number of rows returned by such a query to be:\n>\n> 10 min/ 10080 min * 400,000 = 0.001 * 400,000 = 400.\n\nThe planner doesn't work that way.\n\n> monitor=# explain analyze select * from \"eventtable\" where timestamp >\n> CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n\nHmmm. What verison of PostgreSQL are you running? I seem to remember an \nissue in one version with selecting comparisons against now(). What \nhappens when you supply a constant instead of ( current_timestamp - interval \n'10 minutes' ) ?\n\n> Here's something strange. We try to disable sequential scans, but to no\n> avail. The estimated cost skyrockets, though:\n\nThat's how \"enable_*=false\" works in most cases.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 16 Aug 2004 16:09:13 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp-based indexing"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> monitor=# explain analyze select * from \"eventtable\" where timestamp >\n>> CURRENT_TIMESTAMP - INTERVAL '10 minutes';\n\n> Hmmm. What verison of PostgreSQL are you running? I seem to remember an \n> issue in one version with selecting comparisons against now().\n\nI'm also wondering about the exact datatype of the \"timestamp\" column.\nIf it's timestamp without timezone, then the above is a cross-datatype\ncomparison (timestamp vs timestamptz) and hence not indexable before\n8.0. This could be fixed easily by using the right current-time\nfunction, viz LOCALTIMESTAMP not CURRENT_TIMESTAMP. (Consistency has\nobviously never been a high priority with the SQL committee :-(.)\n\nLess easily but possibly better in the long run, change the column type\nto timestamp with time zone. IMHO, columns representing definable\nreal-world time instants should always be timestamptz, because the other\nway leaves you open to serious confusion about what the time value\nreally means.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 16 Aug 2004 19:25:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp-based indexing "
}
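A sketch of the LOCALTIMESTAMP variant, assuming the column stays timestamp without time zone as in the table definition earlier in the thread:

    SELECT * FROM "eventtable"
    WHERE "timestamp" > LOCALTIMESTAMP - INTERVAL '10 minutes';

LOCALTIMESTAMP yields timestamp without time zone, so the comparison is no longer cross-type.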
] |
[
{
"msg_contents": "Dear all,\n\nThis is my two last line of a vacuum full verbose analyse;.\n\nINFO: ᅵfree space map: 27 relations, 4336 pages stored; 3232 total pages \nneeded\nDETAIL: ᅵAllocated FSM size: 2000 relations + 50000000 pages = 293088 kB\n\nWhat are the good parameters to set with those informations :\n\nmax_fsm_pages = 30000 ᅵ\nmax_fsm_relations = 100 \n\nAny other parameters could be defined with this informations ?\n\nThanks per advance,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n",
"msg_date": "Tue, 27 Jul 2004 09:35:33 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Little understanding for tuning ..."
}
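Going only by the log lines quoted above (a sketch, not advice from the list), the reported needs are 27 relations and 3232 pages, so postgresql.conf values with a comfortable margin above those should be enough, for example:

    max_fsm_relations = 100      # reported need: 27 relations
    max_fsm_pages = 30000        # reported need: 3232 pages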
] |
[
{
"msg_contents": "I am in a situation where I have to treat a table as logically ordered\nbased on an index. Right now I'm doing this via queries, and a I need a\nbetter way to do it. Cursors do not meet my requirements, because they\nare always insensitive. Also, my performance requirements are\nextreme...I need 100% index usage.\n\nCurrently, I use queries to do this. Unfortunately, the queries can get\nkind of complex because many if the indexes (keys, really) are over 3 or\nmore columns in a table.\n\nSo, for a table t with a three part key over columns a,b,c, the query to\nread the next value from t for given values a1, b1, c1 is\n\nselect * from t where\n\ta >= a1 and\n (a > a1 or b >= b1) and\n (a > a1 or b > b1 or c > c1)\n\nIn about 95% of cases, the planner correctly selects the index t(a,b,c)\nand uses it. However, the 5% remaining cases usually come at the worst\ntime, when large tables and 3 or 4 part keys are involved. In those\ncases sometimes the planner applies the filter to a, but not b or c with\na large performance hit. Manipulating statistics on the table does not\nseem to help.\n\nInterestingly, it is possible to rewrite the above query by switching\nand with or and >= with >. However when written that way, the planner\nalmost never gets it right.\n\nMy problem is deceptively simple: how you read the next record from a\ntable based on a given set of values? In practice, this is difficult to\nimplement. If anybody can suggest a alternative/better way to this, I'm\nall ears.\n\nMerlin\n",
"msg_date": "Tue, 27 Jul 2004 09:07:02 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "best way to fetch next/prev record based on index"
},
{
"msg_contents": "Hi, Merlin,\n\nOn Tue, 27 Jul 2004 09:07:02 -0400\n\"Merlin Moncure\" <[email protected]> wrote:\n\n> So, for a table t with a three part key over columns a,b,c, the query\n> to read the next value from t for given values a1, b1, c1 is\n> \n> select * from t where\n> \ta >= a1 and\n> (a > a1 or b >= b1) and\n> (a > a1 or b > b1 or c > c1)\n\nYou mut not rely on such trickery to get any ordering, as the SQL data\nmodel contains no ordering, and a query optimizer is free to deliver you\nthe tuples in any order it feels like.\n\nWhy don't you add a 'ORDER BY a,b,c ASC' to your query?\n\n> Interestingly, it is possible to rewrite the above query by switching\n> and with or and >= with >. However when written that way, the planner\n> almost never gets it right.\n\nThat's the reason why you cannot rely on any implicit ordering, the\nplanner is free to rewrite a query as it likes as long as it delivers\nthe same tuples, but in any order it wants.\n\n> My problem is deceptively simple: how you read the next record from a\n> table based on a given set of values? In practice, this is difficult\n> to implement. If anybody can suggest a alternative/better way to\n> this, I'm all ears.\n\nSo you really want something like\n\n'SELECT * FROM t WHERE a>=a1 AND b>=b1 AND c>=c1 ORDER BY a,b,c ASC LIMIT 1'\n\n\nHTH,\nMarkus\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Tue, 27 Jul 2004 16:13:25 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "Hi, Merlin,\n\nOn Tue, 27 Jul 2004 16:13:25 +0200, I myself wrote:\n\n\n> You mut not\n\nShould be \"must\", not \"mut\" :-)\n\n> > My problem is deceptively simple: how you read the next record from\n> > a table based on a given set of values? In practice, this is\n> > difficult to implement. If anybody can suggest a alternative/better\n> > way to this, I'm all ears.\n> \n> So you really want something like\n> \n> 'SELECT * FROM t WHERE a>=a1 AND b>=b1 AND c>=c1 ORDER BY a,b,c ASC\n> LIMIT 1'\n\nSorry, as you want the _next_, and I assume that a1, b1 and c1 are the\ncurrent row's values, you should rather use something like:\n\n'SELECT * FROM t WHERE a>=a1 AND b>=b1 AND c>=c1 ORDER BY a,b,c ASC\nLIMIT 1 OFFSET 1'\n\nHTH,\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Tue, 27 Jul 2004 16:21:17 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Correction of best way to fetch next/prev record based on\n index"
},
{
"msg_contents": "You only want one record to be returned? Tack a LIMIT 1 onto the end of \nthe query.\n\n> My problem is deceptively simple: how you read the next record from a\n> table based on a given set of values? In practice, this is difficult to\n> implement. If anybody can suggest a alternative/better way to this, I'm\n> all ears.\n\n\n",
"msg_date": "Tue, 27 Jul 2004 10:36:34 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> So, for a table t with a three part key over columns a,b,c, the query to\n> read the next value from t for given values a1, b1, c1 is\n\n> select * from t where\n> \ta >= a1 and\n> (a > a1 or b >= b1) and\n> (a > a1 or b > b1 or c > c1)\n\n> In about 95% of cases, the planner correctly selects the index t(a,b,c)\n> and uses it.\n\nI'm surprised it's that good. Why not do\n\n\tselect * from t where a >= a1 and b >= b1 and c >= c1\n\torder by a,b,c\n\tlimit 1 offset 1;\n\nwhich has a much more obvious translation to an indexscan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Jul 2004 11:14:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index "
},
{
"msg_contents": "\n\n> Interestingly, it is possible to rewrite the above query by switching\n> and with or and >= with >. However when written that way, the planner\n> almost never gets it right.\n\nWell, note it's still not really getting it right even in your case. It's\ndoing an index scan on a>=a1 but if you have lots of values in your table\nwhere a=a1 and b<b1 then it's going to unnecessarily read through all of\nthose.\n\n\nOne thing that can help is to add ORDER BY a,b,c LIMIT 1 to your query. That\nwill virtually guarantee that it uses an index scan, which will at least avoid\nmaking it scan all the records *after* finding the match. However it still\ndoesn't seem to make Postgres use an Index Cond to allow it to do an instant\nlookup.\n\nI expected WHERE (a,b,c) > (a1,b1,c1) to work however it doesn't. It appears\nto mean a>a1 AND b>b1 AND c>c1 which isn't at all what you want. I imagine the\nstandard dictates this meaning.\n\n> My problem is deceptively simple: how you read the next record from a\n> table based on a given set of values? In practice, this is difficult to\n> implement. If anybody can suggest a alternative/better way to this, I'm\n> all ears.\n\nI've done this a million times for simple integer keys, but I've never had to\ndo it for multi-column keys. It seems it would be nice if some syntax similar\nto (a,b,c) > (a1,b1,c1) worked for this.\n\n-- \ngreg\n\n",
"msg_date": "27 Jul 2004 13:12:31 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
}
] |
[
{
"msg_contents": "Hello,\n\nAre there any tools that help with postgres/postgis performance tuning?\n\nSo they measure the acutal tuple costs and cpu power, or suggest optimal\nvalues for the index sample counts?\n\nI could imagine that some profiling on a typical workload (or realistic\nsimulation thereof) could be automatically converted into hints how to\ntweak the config file.\n\nMarkus\n\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Tue, 27 Jul 2004 15:15:31 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Automagic tuning"
},
{
"msg_contents": "> Are there any tools that help with postgres/postgis performance tuning?\n> \n> So they measure the acutal tuple costs and cpu power, or suggest optimal\n> values for the index sample counts?\n\nHave you turned on the stat_* settings in postgresql.conf and then \nexamined the pg_stat_* system views?\n\nChris\n",
"msg_date": "Tue, 27 Jul 2004 22:19:49 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automagic tuning"
},
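A sketch of what enabling and reading those statistics might look like; these parameter names are from the 7.x/8.0 era and were renamed or removed in later releases:

    # postgresql.conf
    stats_start_collector = true
    stats_block_level = true
    stats_row_level = true

    -- then, per table:
    SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd, n_tup_del
    FROM pg_stat_user_tables;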
{
"msg_contents": "Hi, Cristopher,\n\nChristopher Kings-Lynne schrieb:\n>> Are there any tools that help with postgres/postgis performance tuning?\n>>\n>> So they measure the acutal tuple costs and cpu power, or suggest optimal\n>> values for the index sample counts?\n>\n> Have you turned on the stat_* settings in postgresql.conf and then\n> examined the pg_stat_* system views?\n\nAs far as I examined, those views only count several things like fetched\nrows and pages, and cache hits.\n\nI would like something that really measures values like random_page_cost\nor cpu_tuple_cost that are hardware dependent.\n\nI assume such thing does not exist?\n\nMarkus\n\n--\nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 z�rich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com",
"msg_date": "Mon, 31 Jan 2005 15:54:10 +0100",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Automagic tuning"
},
{
"msg_contents": "Markus,\n\n> As far as I examined, those views only count several things like fetched\n> rows and pages, and cache hits.\n>\n> I would like something that really measures values like random_page_cost\n> or cpu_tuple_cost that are hardware dependent.\n>\n> I assume such thing does not exist?\n\nNope. You gotta whip out your calculator and run some queries.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 31 Jan 2005 12:09:31 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automagic tuning"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> I would like something that really measures values like random_page_cost\n>> or cpu_tuple_cost that are hardware dependent.\n>> \n>> I assume such thing does not exist?\n\n> Nope. You gotta whip out your calculator and run some queries.\n\nPreferably a whole lot of queries. All the measurement techniques I can\nthink of are going to have a great deal of noise, so you shouldn't\ntwiddle these cost settings based on just a few examples.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Jan 2005 15:26:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automagic tuning "
},
{
"msg_contents": "On Mon, Jan 31, 2005 at 03:26:12PM -0500, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> >> I would like something that really measures values like random_page_cost\n> >> or cpu_tuple_cost that are hardware dependent.\n> >> \n> >> I assume such thing does not exist?\n> \n> > Nope. You gotta whip out your calculator and run some queries.\n> \n> Preferably a whole lot of queries. All the measurement techniques I can\n> think of are going to have a great deal of noise, so you shouldn't\n> twiddle these cost settings based on just a few examples.\n\nAre there any examples of how you can take numbers from pg_stats_* or\nexplain analize and turn them into configuration settings (such and\nrandom page cost)?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 31 Jan 2005 22:52:11 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automagic tuning"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Mon, Jan 31, 2005 at 03:26:12PM -0500, Tom Lane wrote:\n>> Preferably a whole lot of queries. All the measurement techniques I can\n>> think of are going to have a great deal of noise, so you shouldn't\n>> twiddle these cost settings based on just a few examples.\n\n> Are there any examples of how you can take numbers from pg_stats_* or\n> explain analize and turn them into configuration settings (such and\n> random page cost)?\n\nWell, the basic idea is to adjust random_page_cost so that the ratio of\nestimated cost to real elapsed time (as shown by EXPLAIN ANALYZE) is the\nsame for seqscans and indexscans. What you have to watch out for is\nthat the estimated cost model is oversimplified and doesn't take into\naccount a lot of real-world factors, such as the activity of other\nconcurrent processes. The reason for needing a whole lot of tests is\nessentially to try to average out the effects of those unmodeled\nfactors, so that you have a number that makes sense within the planner's\nlimited view of reality.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Feb 2005 00:06:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automagic tuning "
},
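A sketch of the kind of measurement described above; the table and predicate are placeholders, and the point is only to compare estimated cost against actual time for both plan types:

    SET enable_seqscan = off;
    EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_col > 42;  -- index scan: note cost vs. actual ms
    SET enable_seqscan = on;
    SET enable_indexscan = off;
    EXPLAIN ANALYZE SELECT * FROM some_table WHERE some_col > 42;  -- seq scan: note cost vs. actual ms
    RESET enable_indexscan;

random_page_cost would then be nudged until the cost-to-time ratios of the two plans roughly agree, averaged over many such queries.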
{
"msg_contents": "On Tue, Feb 01, 2005 at 12:06:27AM -0500, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > On Mon, Jan 31, 2005 at 03:26:12PM -0500, Tom Lane wrote:\n> >> Preferably a whole lot of queries. All the measurement techniques I can\n> >> think of are going to have a great deal of noise, so you shouldn't\n> >> twiddle these cost settings based on just a few examples.\n> \n> > Are there any examples of how you can take numbers from pg_stats_* or\n> > explain analize and turn them into configuration settings (such and\n> > random page cost)?\n> \n> Well, the basic idea is to adjust random_page_cost so that the ratio of\n> estimated cost to real elapsed time (as shown by EXPLAIN ANALYZE) is the\n> same for seqscans and indexscans. What you have to watch out for is\n> that the estimated cost model is oversimplified and doesn't take into\n> account a lot of real-world factors, such as the activity of other\n> concurrent processes. The reason for needing a whole lot of tests is\n> essentially to try to average out the effects of those unmodeled\n> factors, so that you have a number that makes sense within the planner's\n> limited view of reality.\n\nGiven that, I guess the next logical question is: what would it take to\ncollect stats on queries so that such an estimate could be made? And\nwould it be possible/make sense to gather stats useful for tuning the\nother parameters?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 1 Feb 2005 00:06:20 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Automagic tuning"
}
] |
[
{
"msg_contents": "> > So, for a table t with a three part key over columns a,b,c, the\nquery\n> > to read the next value from t for given values a1, b1, c1 is\n> >\n> > select * from t where\n> > \ta >= a1 and\n> > (a > a1 or b >= b1) and\n> > (a > a1 or b > b1 or c > c1)\n> \n> You mut not rely on such trickery to get any ordering, as the SQL data\n> model contains no ordering, and a query optimizer is free to deliver\nyou\n> the tuples in any order it feels like.\n> \n> Why don't you add a 'ORDER BY a,b,c ASC' to your query?\n\nLeft that part out (oops) :). My queries always have that at the end\n(or they will give incorrect results!). All are suffixed with order by\na,b,c limit n. n is manipulated in some cases for progressive read\nahead (kind of like fetch 'n' in cursors)).\n\nThe basic problem is the planner can't always match the query to the\nindex. So, either the planner has to be helped/fixed or I have to\nexplore another solution. This seems to happen most when the 'a' column\nhas very poor selectivity. In this case, the planner will only examine\nthe 'a' part of the key.\n\nMerlin\n",
"msg_date": "Tue, 27 Jul 2004 10:21:32 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "Hi, Merlin,\n\nOn Tue, 27 Jul 2004 10:21:32 -0400\n\"Merlin Moncure\" <[email protected]> wrote:\n\n> The basic problem is the planner can't always match the query to the\n> index. So, either the planner has to be helped/fixed or I have to\n> explore another solution. This seems to happen most when the 'a'\n> column has very poor selectivity. In this case, the planner will only\n> examine the 'a' part of the key.\n\nSo it may help to add some more indices so you have an index for all permutations,\n\nCreate an index on (a,b,c), (a,c,b), (b,c,a), (b,a,c), (c,a,b) and (c,b,a).\n\nSo as long as one of the rows has enough selectivity, the planner should\nbe able to select the correct index. Maybe increasing the number of\nrandom samples for the rows is useful.\n\nHTH,\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Tue, 27 Jul 2004 16:50:44 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
}
] |
[
{
"msg_contents": "Markus wrote:\n> > The basic problem is the planner can't always match the query to the\n> > index. So, either the planner has to be helped/fixed or I have to\n> > explore another solution. This seems to happen most when the 'a'\n> > column has very poor selectivity. In this case, the planner will\nonly\n> > examine the 'a' part of the key.\n> \n> So it may help to add some more indices so you have an index for all\n> permutations,\n> \n> Create an index on (a,b,c), (a,c,b), (b,c,a), (b,a,c), (c,a,b) and\n> (c,b,a).\n\nIt is mathematically impossible for any index except for (a,b,c) to\nwork. Although, in theory, (a,b) could be used...but that wouldn't\nhelp. In any event, creating 1000s and 1000s of extra indices is not an\noption.\n\nHere is some log snippets illustrating my problem:\n\nHere is a log snippet illustrating things when everything is working ok\n(notice the sub-millisecond times):\n\nprepare data3_read_next_menu_item_recent_file_0 (character varying,\nnumeric, numeric, numeric)\n\tas select xmin, xmax, *\n\tfrom data3.menu_item_recent_file\n\twhere mir_user_id >= $1 and \n\t\t(mir_user_id > $1 or mir_menu_item_id >= $2) and \n\t\t(mir_user_id > $1 or mir_menu_item_id > $2 or\nmir_sequence_no > $3)\n\torder by mir_user_id, mir_menu_item_id, mir_sequence_no\n\tlimit $4 0.000849704 sec\ndata3_read_next_menu_item_recent_file_0 0.000435999 sec params:\n$1=MERLIN $2=00057 $3=00000001 $4=1 \ndata3_read_next_menu_item_recent_file_0 0.0117151 sec params: $1=MERLIN\n$2=00058 $3=00000002 $4=2 \ndata3_read_next_menu_item_recent_file_0 0.0385374 sec params: $1=MERLIN\n$2=00203 $3=00000005 $4=3 \ndata3_read_next_menu_item_recent_file_0 0.0211677 sec params: $1=MERLIN\n$2=00449 $3=00000010 $4=4 \ndata3_read_next_menu_item_recent_file_0 0.000818999 sec params:\n$1=MERLIN $2=00813 $3=00000008 $4=5\n\nHere is a log snippet when there is a problem:\ndata3_start_nl_line_file_0 37.2677 sec params: $1= $2=008768 $3=003 $4=1\n\nprepare data3_read_next_line_file_0 (character, numeric, numeric,\nnumeric)\n\tas select xmin, xmax, *\n\tfrom data3.line_file\n\twhere li_quote_flag >= $1 and \n\t\t(li_quote_flag > $1 or li_order_no >= $2) and \n\t\t(li_quote_flag > $1 or li_order_no > $2 or li_seq_no\n> $3)\n\torder by li_quote_flag, li_order_no, li_seq_no\n\tlimit $4 0.000839501 sec\ndata3_read_next_line_file_0 0.313869 sec params: $1= $2=008768 $3=005\n$4=1 \ndata3_read_next_line_file_0 0.343179 sec params: $1= $2=008768 $3=006\n$4=2 \ndata3_read_next_line_file_0 0.308703 sec params: $1= $2=008768 $3=008\n$4=3 \ndata3_read_next_line_file_0 0.306802 sec params: $1= $2=008768 $3=011\n$4=4 \ndata3_read_next_line_file_0 0.311033 sec params: $1= $2=008768 $3=015\n$4=5\n\nin the above statements, .3 sec to return a single row is very poor.\nExplain only matches li_quote_flag to the index which offers very poor\nselectivity. li_quote_flag is a char(1) and there is an index on\nline_file on the three where columns in the proper order.\n\nMerlin\n",
"msg_date": "Tue, 27 Jul 2004 11:03:14 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index"
}
] |
[
{
"msg_contents": "> > select * from t where\n> > \ta >= a1 and\n> > (a > a1 or b >= b1) and\n> > (a > a1 or b > b1 or c > c1)\n> \n> > In about 95% of cases, the planner correctly selects the index\nt(a,b,c)\n> > and uses it.\n> \n> I'm surprised it's that good. Why not do\n\nIt is. In fact, it's so good, I mistakenly assumed it would get it\nright all the time. That led me directly to my current situation.\n \n> \tselect * from t where a >= a1 and b >= b1 and c >= c1\n> \torder by a,b,c\n> \tlimit 1 offset 1;\nNote: I left off the limit/order part of the query in my original\nexample.\n\nMy previous experience with offset was that it's not practical for this\ntype of use. Query time degrades when offset gets large...it's\nbasically n^2/2 for a scan of a table. If offset was pumped up to O(1)\nfor any sized offset, the problem would be trivial. \n\nPlus, your where clause does not guarantee results.\n\nImagine:\na b c\n2 3 4\n4 2 1\n\nc !> c1\n\nThe only other way to rewrite the query is thus (pg has much more\ntrouble with this form):\nselect * from t where\n\ta > a1 or\n (a >= a1 and b > b1) or\n (a >= a1 and b >= b1 and c > c1)\n\nMerlin\n",
"msg_date": "Tue, 27 Jul 2004 11:29:53 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index "
},
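A concrete version of the two-row counterexample above, using a throwaway table just to make the failure visible:

    CREATE TABLE t (a int, b int, c int);
    INSERT INTO t VALUES (2, 3, 4), (4, 2, 1);
    -- the current row is (2,3,4); the true next row in (a,b,c) order is (4,2,1),
    -- but it fails the "b >= 3" filter, so this returns no row at all:
    SELECT * FROM t WHERE a >= 2 AND b >= 3 AND c >= 4
    ORDER BY a, b, c LIMIT 1 OFFSET 1;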
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> Plus, your where clause does not guarantee results.\n\nNo, but in combination with the ORDER BY it does. Please note also\nthat the offset would *always* be one, so your gripe about it not\nscaling seems misguided to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Jul 2004 11:33:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n> \"Merlin Moncure\" <[email protected]> writes:\n>> Plus, your where clause does not guarantee results.\n\n> No, but in combination with the ORDER BY it does.\n\nOh, wait, you're right --- I'm mis-visualizing the situation.\n\nHmm, it sure seems like there ought to be an easy way to do this...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Jul 2004 11:40:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index "
},
{
"msg_contents": "I said:\n> Oh, wait, you're right --- I'm mis-visualizing the situation.\n> Hmm, it sure seems like there ought to be an easy way to do this...\n\nThe problem is that a multi-column index doesn't actually have the\nsemantics you want. If you are willing to consider adding another\nindex (or replacing the existing 3-column guy), how about\n\ncreate index ti on t((array[a,b,c]));\n\nselect * from t where array[a,b,c] >= array[a1,b1,c1]\norder by array[a,b,c]\nlimit 1 offset 1;\n\nThis seems to do the right thing in 7.4 and later. It does require that\nall three columns have the same datatype though; you weren't specific\nabout the data model ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 27 Jul 2004 12:00:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index "
}
] |
[
{
"msg_contents": "> Hmm, it sure seems like there ought to be an easy way to do this...\n\nHere is the only alternative that I see:\ncreate function column_stacker(text[] columns, text[] types) returns\ntext\n[...]\nlanguage 'C' immutable;\n\nthe above function stacks the columns together in a single string for\neasy range indexing.\n\ncreate index on t_idx(array[t.a::text, t.b::text, t.c::text],\narray['int', 'int', 'char(2)']);\n\nThis is a lot more complicated then it sounds but it can be done. The\nuse of arrays is forced because of limitations in the way pg handles\nparameters (no big deal). The real disadvantage here is that it these\nindexes don't help with normal queries so every key gets two indexes :(.\n\nI'm just looking for a nudge in the right direction here...if the answer\nis GIST, I'll start researching that, etc. The ideal solution for me\nwould be a smarter planner or a simple C function to get the next record\nout of the index (exposed through a UDF).\n\nEverything has to stay generic...the ultimate goal is an ISAM driver for\npg.\n\nMerlin\n\n\n",
"msg_date": "Tue, 27 Jul 2004 12:02:17 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index "
}
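For comparison, a pure-SQL sketch of the same stacking idea, with hypothetical names and the simplifying assumptions that a and b are non-negative integers (so fixed-width zero-padding keeps text order in line with numeric order) and that text comparison behaves predictably (e.g. C locale):

    -- hypothetical stacking function; IMMUTABLE so it can back an index
    CREATE FUNCTION stack_key(int, int, text) RETURNS text AS '
        SELECT lpad($1::text, 10, ''0'') || lpad($2::text, 10, ''0'') || $3;
    ' LANGUAGE sql IMMUTABLE;

    CREATE INDEX t_stack_idx ON t (stack_key(a, b, c));

    -- next record after key (2, 3, '01')
    SELECT * FROM t
    WHERE stack_key(a, b, c) > stack_key(2, 3, '01')
    ORDER BY stack_key(a, b, c)
    LIMIT 1;

As noted above, an index like this only helps the stacked comparison, so the underlying columns typically still need their own index for ordinary queries.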
] |
[
{
"msg_contents": "Greg wrote:\n> One thing that can help is to add ORDER BY a,b,c LIMIT 1 to your\nquery.\n> That\n> will virtually guarantee that it uses an index scan, which will at\nleast\n> avoid\n> making it scan all the records *after* finding the match. However it\nstill\n> doesn't seem to make Postgres use an Index Cond to allow it to do an\n> instant\n> lookup.\n\nYes, order by/limit was accidentally left of my original example. My\nproblem is with the word 'virtually'.\n\n> do it for multi-column keys. It seems it would be nice if some syntax\n> similar\n> to (a,b,c) > (a1,b1,c1) worked for this.\n\n'nice' would be an understatement...\nif the above syntax is not defined in the standard, I would humbly\nsuggest, well, beg for it to work as you thought it did. That would be\nGREAT! ISMT it may be that that is in fact standard...(I don't have it,\nso I don't know).\n\nMerlin\n\n\n",
"msg_date": "Tue, 27 Jul 2004 13:28:08 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "\n\"Merlin Moncure\" <[email protected]> writes:\n\n> > do it for multi-column keys. It seems it would be nice if some syntax\n> > similar to (a,b,c) > (a1,b1,c1) worked for this.\n> \n> 'nice' would be an understatement...\n>\n> if the above syntax is not defined in the standard, I would humbly suggest,\n> well, beg for it to work as you thought it did. That would be GREAT! ISMT it\n> may be that that is in fact standard...(I don't have it, so I don't know).\n\n\nHum. It would seem my intuition matches the SQL92 spec and Postgres gets this\nwrong.\n\n From page 208 (Section 8.2.7) of\n http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt \n\n\n 7) Let Rx and Ry be the two <row value constructor>s of the <com-\n parison predicate> and let RXi and RYi be the i-th <row value\n constructor element>s of Rx and Ry, respectively. \"Rx <comp op>\n Ry\" is true, false, or unknown as follows:\n\n a) \"x = Ry\" is true if and only if RXi = RYi for all i.\n\n b) \"x <> Ry\" is true if and only if RXi <> RYi for some i.\n\n c) \"x < Ry\" is true if and only if RXi = RYi for all i < n and\n RXn < RYn for some n.\n\n d) \"x > Ry\" is true if and only if RXi = RYi for all i < n and\n RXn > RYn for some n.\n\n ...\n\n\n(This is A July 10, 1992 Proposed revision, I don't know how far it differs\nfrom the final. I imagine they mean \"Rx\" in all the places they use \"x\" alone)\n\nThat fairly clearly specifies (a,b,c) < (a1,b1,c1) to work the way you want it\nto. Less-than-or-equal is then defined based on the above definition.\n\n\nEven if Postgres did this right I'm not sure that would solve your index woes.\nI imagine the first thing Postgres would do is rewrite it into regular scalar\nexpressions. Ideally the optimizer should be capable of then deducing from the\nscalar expressions that an index scan would be useful.\n\n-- \ngreg\n\n",
"msg_date": "27 Jul 2004 14:38:48 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
}
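A tiny example of where the two interpretations part ways: under the spec the leading column decides, while the AND-of-elements expansion Postgres generated at the time (see the make_row_op comment quoted later in the thread) requires every element to compare true:

    SELECT (1, 2, 3) < (4, 5, 6);   -- true under either reading
    SELECT (1, 9, 9) < (4, 0, 0);   -- SQL92: true, since 1 < 4; AND expansion: false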
] |
[
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> wrote ..\n[snip]\n> select * from t where\n> \ta >= a1 and\n> (a > a1 or b >= b1) and\n> (a > a1 or b > b1 or c > c1)\n\nI don't see why this is guaranteed to work without an ORDER BY clause, even if TABLE t is clustered on the correct index. Am I missing something? I have two suggestions: \n\n(1) I think I would have written \n\nSELECT * FROM t WHERE\n(a >= a1 AND b>=b1 AND c>=c1) ORDER BY a,b,c LIMIT 1 OFFSET 1;\n\nusing the way LIMIT cuts down on sort time (I've never tried it with both LIMIT and OFFSET, though; you could always use LIMIT 2 and skip a record client-side if that works better).\n\n(2) I've seen code where depending on the types and values of the fields, it was possible to construct a string from a, b, c by some sort of concatenation where the index now agreed with the lexicographic (dictionary) ordering on the string. Postgres could do that with a functional index, if your values can be used with this trick.\n",
"msg_date": "Tue, 27 Jul 2004 10:37:24 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index"
}
] |
[
{
"msg_contents": "> SELECT * FROM t WHERE\n> (a >= a1 AND b>=b1 AND c>=c1) ORDER BY a,b,c LIMIT 1 OFFSET 1;\n> \n> using the way LIMIT cuts down on sort time (I've never tried it with\nboth\n> LIMIT and OFFSET, though; you could always use LIMIT 2 and skip a\nrecord\n> client-side if that works better).\n\nDon't want to further clutter the list (answered this question several\ntimes already), but your query does not work. What I meant to write\nwas:\n\nselect * from t where\n\ta >= a1 and\n (a > a1 or b >= b1) and\n (a > a1 or b > b1 or c > c1)\n order by a, b, c limit 1\n\nThe problem with your query is it excludes all values of c >= c1\nregardless of values of a and b.\n\nMerlin\n",
"msg_date": "Tue, 27 Jul 2004 14:17:15 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "best way to fetch next/prev record based on index"
}
] |
[
{
"msg_contents": "Greg Stark wrote:\n> > > do it for multi-column keys. It seems it would be nice if some\nsyntax\n> > > similar to (a,b,c) > (a1,b1,c1) worked for this.\n> Hum. It would seem my intuition matches the SQL92 spec and Postgres\ngets\n> this\n> wrong.\n[...] \n> Even if Postgres did this right I'm not sure that would solve your\nindex\n> woes.\n> I imagine the first thing Postgres would do is rewrite it into regular\n> scalar\n> expressions. Ideally the optimizer should be capable of then deducing\nfrom\n> the\n> scalar expressions that an index scan would be useful.\n\nWow. For once, the standard is my friend. Well, what has to be done?\n:) Does pg do it the way it does for a reason? From the outside it\nseems like the planner would have an easier job if it can make a field\nby field comparison. \n\nWould a patch introducing the correct behavior (per the standard) be\naccepted? It seems pretty complicated (not to mention the planner\nissues).\n\nMerlin\n\n\n",
"msg_date": "Tue, 27 Jul 2004 15:20:49 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "\nOn Tue, 27 Jul 2004, Merlin Moncure wrote:\n\n> Greg Stark wrote:\n> > > > do it for multi-column keys. It seems it would be nice if some\n> syntax\n> > > > similar to (a,b,c) > (a1,b1,c1) worked for this.\n> > Hum. It would seem my intuition matches the SQL92 spec and Postgres\n> gets\n> > this\n> > wrong.\n> [...]\n> > Even if Postgres did this right I'm not sure that would solve your\n> index\n> > woes.\n> > I imagine the first thing Postgres would do is rewrite it into regular\n> > scalar\n> > expressions. Ideally the optimizer should be capable of then deducing\n> from\n> > the\n> > scalar expressions that an index scan would be useful.\n>\n> Wow. For once, the standard is my friend. Well, what has to be done?\n> :) Does pg do it the way it does for a reason? From the outside it\n> seems like the planner would have an easier job if it can make a field\n> by field comparison.\n>\n> Would a patch introducing the correct behavior (per the standard) be\n> accepted? It seems pretty complicated (not to mention the planner\n> issues).\n\nGiven the comment on make_row_op,\n /*\n * XXX it's really wrong to generate a simple AND combination for < <=\n * > >=. We probably need to invent a new runtime node type to handle\n * those correctly. For the moment, though, keep on doing this ...\n */\nI'd expect it'd be accepted.\n\n",
"msg_date": "Tue, 27 Jul 2004 20:45:34 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "\nStephan Szabo <[email protected]> writes:\n\n> Given the comment on make_row_op,\n> /*\n> * XXX it's really wrong to generate a simple AND combination for < <=\n> * > >=. We probably need to invent a new runtime node type to handle\n> * those correctly. For the moment, though, keep on doing this ...\n> */\n> I'd expect it'd be accepted.\n\n\nHm, this code is new. As of version 1.169 2004/04/18 it only accepted \"=\" and\n\"<>\" operators:\n\n /* Combining operators other than =/<> is dubious... */\n if (row_length != 1 &&\n strcmp(opname, \"=\") != 0 &&\n strcmp(opname, \"<>\") != 0)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"row comparison cannot use operator %s\",\n opname)));\n\n\nI think perhaps it's a bad idea to be introducing support for standard syntax\nuntil we can support the correct semantics. It will only mislead people and\ncreate backwards-compatibility headaches when we fix it to work properly.\n\nRemoving <,<=,>,>= would be trivial. Patch (untested):\n\n--- parse_expr.c.~1.174.~\t2004-07-28 01:01:12.000000000 -0400\n+++ parse_expr.c\t2004-07-28 01:52:29.000000000 -0400\n@@ -1695,11 +1695,7 @@\n \t */\n \toprname = strVal(llast(opname));\n \n-\tif ((strcmp(oprname, \"=\") == 0) ||\n-\t\t(strcmp(oprname, \"<\") == 0) ||\n-\t\t(strcmp(oprname, \"<=\") == 0) ||\n-\t\t(strcmp(oprname, \">\") == 0) ||\n-\t\t(strcmp(oprname, \">=\") == 0))\n+\tif (strcmp(oprname, \"=\") == 0)\n \t{\n \t\tboolop = AND_EXPR;\n \t}\n\n\nFixing it to write out complex boolean expressions wouldn't be too hard, but\nI'm not clear it would be worth it, since I suspect the end result would be as\nthe comment indicates, to introduce a new runtime node.\n\n-- \ngreg\n\n",
"msg_date": "28 Jul 2004 01:53:17 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "\nGreg Stark <[email protected]> writes:\n\n> Fixing it to write out complex boolean expressions wouldn't be too hard, but\n> I'm not clear it would be worth it, since I suspect the end result would be as\n> the comment indicates, to introduce a new runtime node.\n\nJust to prove that it isn't really all that hard, I took a stab at doing this.\nThis is basically only my second attempt to write even trivial bits of server\nbackend code so I certainly don't suggest anyone try using this code.\n\nIn fact it doesn't quite compile, because I have a bit of confusion between\nthe char *oprname and List *opname variables. Now I could clear that up, but\nI'm missing one piece of the puzzle. To make it work I do need a way to\nconstruct a List *opname from \">\" or \"=\" and I don't know how to do that.\n\nI think that's all I'm missing, but perhaps in the morning I'll look at this\ncode and wonder \"what was I thinking?!\"\n\n\nThis approach won't get the optimizer to actually use an index for these\ncomparisons, but it will fix the semantics to match the spec. Later we can\neither improve the optimizer to detect expressions like this (which I think\nwould be cooler since some users may write them by hand and not use the\nrow-expression approach, but I don't see how to do it), or introduce a new\nrun-time node and have the optimizer handle it. But at least we won't have to\nworry about backwards-compatibility issues with the semantics changing.\n\nOh, I tried to stick to the style, but sometimes I couldn't help myself. I\nsuppose I would have to fix up the style the rest of the way if I got it\nworking and you wanted a patch to apply.\n\n\n/*\n * Transform a \"row op row\" construct\n */\nstatic Node *\nmake_row_op(ParseState *pstate, List *opname, Node *ltree, Node *rtree)\n{\n Node *result = NULL;\n RowExpr *lrow,\n *rrow;\n List *largs,\n *rargs;\n char *oprname;\n\n /* Inputs are untransformed RowExprs */\n lrow = (RowExpr *) transformExpr(pstate, ltree);\n rrow = (RowExpr *) transformExpr(pstate, rtree);\n Assert(IsA(lrow, RowExpr));\n Assert(IsA(rrow, RowExpr));\n largs = lrow->args;\n rargs = rrow->args;\n\n if (list_length(largs) != list_length(rargs))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"unequal number of entries in row expression\")));\n\n oprname = strVal(llast(opname));\n\n if (strcmp(oprname, \"=\") == 0) \n {\n result = make_row_op_simple(pstate, \"=\", largs, rargs);\n }\n\n else if (strcmp(oprname, \"<>\") == 0) \n {\n result = make_row_op_simple(pstate, \"<>\", largs, rargs);\n }\n\n else if ((strcmp(oprname, \"<\") == 0) ||\n (strcmp(oprname, \">\") == 0)) \n {\n result = make_row_op_complex(pstate, oprname, largs, rargs);\n }\n \n /* alternatively these last two could just create negated < and >\n * expressions. 
Which is better depends on whether the extra clause\n * confuses the optimizer more or less than having to push the NOTs down \n */\n\n else if (strcmp(oprname, \">=\") == 0)\n {\n Node *branch = make_row_op_simple(pstate, \"=\", largs, rargs);\n result = make_row_op_complex(pstate, \">\", largs, rargs);\n result = (Node *) makeBoolExpr(OR_EXPR, list_make2(result, branch));\n }\n\n else if (strcmp(oprname, \"<=\") == 0)\n {\n Node *branch = make_row_op_simple(pstate, \"=\", largs, rargs);\n result = make_row_op_complex(pstate, \"<\", largs, rargs);\n result = (Node *) makeBoolExpr(OR_EXPR, list_make2(result, branch));\n }\n \n\n else\n {\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"operator %s is not supported for row expressions\",\n oprname)));\n }\n\n return result;\n}\n\n/*\n * Handle something like\n * (A,B,C) = (X,Y,Z)\n * By constructing\n * (A=X) AND (B=Y) AND (C=Z)\n *\n */\n\nstatic Node *\nmake_row_op_simple(ParseState *pstate, char *oprname, \n List *largs, List *rargs)\n{\n ListCell *l, *r;\n BoolExprType boolop;\n Node *result;\n \n boolop = strcmp(oprname, \"<>\")==0 ? OR_EXPR : AND_EXPR;\n \n forboth(l, largs, r, rargs)\n {\n Node *larg = (Node *) lfirst(l);\n Node *rarg = (Node *) lfirst(r);\n Node *cmp;\n \n cmp = (Node *) make_op(pstate, opname, larg, rarg);\n cmp = coerce_to_boolean(pstate, cmp, \"row comparison\");\n if (result == NULL)\n result = cmp;\n else\n result = (Node *) makeBoolExpr(boolop,\n list_make2(result, cmp));\n }\n \n if (result == NULL)\n {\n /* zero-length rows? Generate constant TRUE or FALSE */\n if (boolop == AND_EXPR)\n result = makeBoolConst(true, false);\n else\n result = makeBoolConst(false, false);\n }\n\n return result;\n}\n\n\n/*\n * Handles something like:\n * (A,B,C) > (X,Y,Z)\n *\n * By constructing something like:\n * ( ( A > X) OR (A=X AND B>Y) OR (A=X AND B=Y AND C>Z) )\n *\n */\n\nstatic Node *\nmake_row_op_complex(ParseState *pstate, char *oprname, \n List *largs, List *rargs)\n{\n ListCell *l, *outer_l,\n *r, *outer_r;\n Node *result;\n \n forboth(outer_l, largs, outer_r, rargs) \n {\n Node *outer_larg = (Node *) lfirst(outer_l);\n Node *outer_rarg = (Node *) lfirst(outer_r);\n Node *branch = NULL;\n Node *cmp;\n \n /* all leading elements have to be equal */\n forboth(l, largs, r, rargs)\n {\n Node *larg = (Node *) lfirst(l);\n Node *rarg = (Node *) lfirst(r);\n Node *cmp;\n \n if (larg == outer_larg) {\n break;\n }\n \n cmp = (Node *) make_op(pstate, \"=\", larg, rarg);\n cmp = coerce_to_boolean(pstate, cmp, \"row comparison\");\n if (branch == NULL)\n branch = cmp;\n else\n branch = (Node *) makeBoolExpr(AND_EXPR,\n list_make2(branch, cmp));\n }\n \n /* trailing element has to be strictly greater or less than */\n \n cmp = (Node *) make_op(pstate, oprname, outer_larg, outer_rarg);\n cmp = coerce_to_boolean(pstate, cmp, \"row comparison\");\n branch = branch==NULL ? cmp : (Node *) makeBoolExpr(AND_EXPR, list_make2(branch, cmp));\n \n result = result==NULL ? branch : (Node *) makeBoolExpr(OR_EXPR, list_make2(result, branch));\n }\n \n if (result == NULL)\n {\n /* zero-length rows? Generate constant FALSE */\n result = makeBoolConst(true, false);\n }\n \n return result;\n}\n\n-- \ngreg\n\n",
"msg_date": "28 Jul 2004 03:14:49 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Removing <,<=,>,>= would be trivial.\n\n... and not backwards-compatible. If we did that then cases involving\nunlabeled row expressions would no longer work as they did in prior\nreleases. For example\n\n\tselect (1,2,3) < (4,5,6);\n\nis accepted by all releases back to 7.0, and probably much further (but\n7.0 is the oldest I have handy to test). The only reason the code in\nparse_expr.c appears new is that the functionality used to be in gram.y.\n\nI'd like to see this fixed to comply with the spec, but given the lack\nof complaints about the existing behavior over so many years, ripping\nit out meanwhile doesn't seem appropriate.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 28 Jul 2004 10:07:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> The only reason the code in parse_expr.c appears new is that the\n> functionality used to be in gram.y.\n\nAh, that was what I was missing. Though it's odd since it seems there was code\nin parse_expr.c to handle the \"=\" case specially.\n\n> I'd like to see this fixed to comply with the spec, but given the lack\n> of complaints about the existing behavior over so many years, ripping\n> it out meanwhile doesn't seem appropriate.\n\nI tried my hand at this last night and think I did an ok first pass. But I'm\nmissing one piece of the puzzle to get it to compile.\n\nWhat do I need to know to be able to construct a List* suitable for passing as\nthe second arg to make_op() knowing only that I want to create a List* to\nrepresent \"=\" or \"<\" or so on?\n\nI also had another question I didn't ask in the other email. In the midst of a\nforboth() loop, how would I tell if I'm at the last element of the lists?\nWould lnext(l)==NULL do it?\n\n-- \ngreg\n\n",
"msg_date": "28 Jul 2004 12:00:25 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> The only reason the code in parse_expr.c appears new is that the\n>> functionality used to be in gram.y.\n\n> Ah, that was what I was missing. Though it's odd since it seems there was code\n> in parse_expr.c to handle the \"=\" case specially.\n\nIIRC, the case involving a subselect, eg\n\t... WHERE (1,2) = ANY (SELECT a, b FROM foo) ...\nhas always been handled in parse_expr.c, but cases involving simple\nrows were previously expanded in gram.y. One of the reasons I moved\nthe logic over to parse_expr.c was the thought that it would be easier\nto do it right in parse_expr.c --- gram.y would not be allowed to look\nup related operators, which seems necessary to handle the construct\nper spec.\n\n> I tried my hand at this last night and think I did an ok first pass.\n\nThe main issue in my mind is whether to invent a separate node type for\nrow comparisons. This is probably a good idea for a number of reasons,\nthe most obvious being that there's no way to avoid multiple evaluations\nof the subexpressions if you try to expand it into simple comparisons.\nAlso it seems likely that the planner would find it easier to recognize\nthe relationship to a multicolumn index than if the thing is expanded.\n(But teaching the planner and the index mechanisms themselves about this\nis going to be a major project in any case.)\n\nOne thing I did not like about your first pass is that it makes\nunsupportable assumptions about there being a semantic relationship\nbetween operators named, say, '<' and '<='. Postgres used to have such\nbogosity in a number of places but we've managed to get rid of most of\nit. (Offhand I think the only remaining hard-wired assumption about\noperators of particular names having particular semantics is that the\nforeign key mechanisms assume '=' must be the right operator to compare\nkeys with. Eventually we need to get rid of that too.)\n\nIMHO the right way to do this is to look up a suitable btree operator\nclass and use the appropriate member operators of that class. (In a\nseparate-node-type implementation, we'd probably ignore the operators\nas such altogether, and just call the btree comparison function of the\nopclass.) It's not entirely clear to me how to select the opclass when\nthe initially given inputs are of different types, though. In the\npresent code we leave it to oper() to do the right thing, including\npossibly coercing the inputs to matching types. Possibly we should\nstill apply oper(), but then insist that the selected operator appear\nas a btree opclass member, comparable to the way we handle sort\noperators now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 28 Jul 2004 12:56:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> One thing I did not like about your first pass is that it makes\n> unsupportable assumptions about there being a semantic relationship\n> between operators named, say, '<' and '<='. \n\nHm, I think I even had caught that issue on the mailing list previously.\n\nIn that case though, it seems even the existing code is insufficient. Instead\nof testing whether the operator with strcmp against \"=\" and \"<>\" it should\nperhaps be looking for an operator class and the strategy number for the\noperator and its negator.\n\n-- \ngreg\n\n",
"msg_date": "28 Jul 2004 18:54:35 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> One thing I did not like about your first pass is that it makes\n>> unsupportable assumptions about there being a semantic relationship\n>> between operators named, say, '<' and '<='. \n\n> In that case though, it seems even the existing code is insufficient.\n\nWell, yeah, we know the existing code is broken ;-)\n\n> Instead of testing whether the operator with strcmp against \"=\" and\n> \"<>\" it should perhaps be looking for an operator class and the\n> strategy number for the operator and its negator.\n\nProbably. You can find some relevant code in indxpath.c in the stuff\nthat tries to determine whether partial indexes are relevant.\n\nI think that the ideal behavior is that we not look directly at the\noperator name at all. For example it's not too unreasonable to want\nto write (a,b) ~<~ (c,d) if you have an index that uses those\nnon-locale-aware operators. We should find the operator that matches\nthe name and input arguments, and then try to make sense of the operator\nsemantics by matching it to btree opclasses.\n\nNote that it's possible to find multiple matches, for example if someone\nhas installed a \"reverse sort\" opclass. I think we would want to prefer\na match in the datatype's default opclass, if there is one, but\notherwise we can probably follow the lead of the existing code and\nassume that any match is equally good.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 28 Jul 2004 19:11:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index "
}
] |
[
{
"msg_contents": "Hello,\n\nI having a great deal of difficulty getting postgres to do a hash join.\nEven if I disable nestloop and mergejoin in postgres.conf, the optimizer\nstill refuses to select hash join. This behavior is killing my\nperformance.\n\nPostgres version is 7.3.2 and relevant tables are vacuum analyzed.\n\nHere's an overview of what I'm doing:\n\nI have one table of network logs ordered by time values. The other table\nis a set of hosts (approximately 60) that are infected by a worm. I want\nto do this query on the dataset:\n\nstandb=# explain SELECT count (allflow_tv_sobig.tv_s) FROM\nallflow_tv_sobig, blaster_set WHERE allflow_tv_sobig.src =\nblaster_set.label AND allflow_tv_sobig.tv_s >= 1060101118::bigint and\nallflow_tv_sobig.tv_s < 1060187518::bigint;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=185785.06..185785.06 rows=1 width=32)\n -> Merge Join (cost=174939.71..184986.38 rows=319472 width=32)\n Merge Cond: (\"outer\".label = \"inner\".src)\n -> Index Scan using blaster_set_x on blaster_set\n(cost=0.00..3.67 rows=66 width=12)\n -> Sort (cost=174939.71..178073.92 rows=1253684 width=20)\n Sort Key: allflow_tv_sobig.src\n -> Index Scan using allflow_tv_sobig_x on allflow_tv_sobig\n(cost=0.00..47955.63 rows=1253684 width=20)\n Index Cond: ((tv_s >= 1060101118::bigint) AND (tv_s <\n1060187518::bigint))\n(8 rows) \n\nBasically I just want to use the smaller table as a filtering mechanism so\nthat I only get resulted for hosts in that table. Rather than do the\nsensible thing, which is scan the list of infected hosts, then scan the\ntraffic table and ignore entries that aren't in the first list, the\noptimizer insists on SORTING the table of network traffic according to\nsource address. Considering that this table is very large, these queries\nare taking forever.\n\nDoing it in a nested loop, while it doesn't require sorting, still takes a\nvery long time as well.\n\nIs there anyway that I can force the optimizer to do this the right way,\naside from adding each IP manually to a disgustingly bloated 'where'\nclause?\n\n\nThanks,\n-S\n\n\n\n",
"msg_date": "Tue, 27 Jul 2004 15:38:05 -0400 (EDT)",
"msg_from": "Stan Bielski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer refuses to hash join"
},
{
"msg_contents": "On Tue, 27 Jul 2004, Stan Bielski wrote:\n\n> I having a great deal of difficulty getting postgres to do a hash join.\n> Even if I disable nestloop and mergejoin in postgres.conf, the optimizer\n> still refuses to select hash join. This behavior is killing my\n> performance.\n>\n> Postgres version is 7.3.2 and relevant tables are vacuum analyzed.\n>\n> Here's an overview of what I'm doing:\n>\n> I have one table of network logs ordered by time values. The other table\n> is a set of hosts (approximately 60) that are infected by a worm. I want\n> to do this query on the dataset:\n>\n> standb=# explain SELECT count (allflow_tv_sobig.tv_s) FROM\n> allflow_tv_sobig, blaster_set WHERE allflow_tv_sobig.src =\n> blaster_set.label AND allflow_tv_sobig.tv_s >= 1060101118::bigint and\n> allflow_tv_sobig.tv_s < 1060187518::bigint;\n\nCan you send explain analyze results for the normal case and the nested\nloop case? It's generally more useful than plain explain.\n\nI'd also wonder if blaster_set.label is unique such that you might be able\nto write the condition as an exists clause and if that's better. If you\nwere running 7.4, I'd suggest IN, but that'll certainly be painful in 7.3.\n",
"msg_date": "Thu, 29 Jul 2004 09:08:55 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer refuses to hash join"
},
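One way to write the EXISTS form suggested above, assuming blaster_set.label has no duplicates and keeping the original column names:

    SELECT count(a.tv_s)
    FROM allflow_tv_sobig a
    WHERE a.tv_s >= 1060101118::bigint
      AND a.tv_s <  1060187518::bigint
      AND EXISTS (SELECT 1
                  FROM blaster_set b
                  WHERE b.label = a.src);

On 7.3 this is typically executed as a per-row subplan probe into blaster_set (ideally via its index) rather than a sort of the 1.2 million-row traffic table.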
{
"msg_contents": "Stan Bielski <[email protected]> writes:\n> I having a great deal of difficulty getting postgres to do a hash join.\n> Even if I disable nestloop and mergejoin in postgres.conf, the optimizer\n> still refuses to select hash join.\n\nAre you sure the join condition is hashjoinable? You didn't say\nanything about the datatypes involved ...\n\nIf it is, the other possibility is that you need to increase sort_mem\nto accommodate the hash table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 29 Jul 2004 13:06:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer refuses to hash join "
}
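Two quick checks for the suggestions above: confirm the join columns really have the same (hashable) type, and give the hash table more room for the current session before re-running EXPLAIN (in 7.3 the hash table size is governed by sort_mem):

    -- column types on both sides of the join condition
    SELECT c.relname, a.attname, format_type(a.atttypid, a.atttypmod) AS type
    FROM pg_class c
    JOIN pg_attribute a ON a.attrelid = c.oid
    WHERE c.relname IN ('allflow_tv_sobig', 'blaster_set')
      AND a.attname IN ('src', 'label');

    -- more memory for hashing, this session only (value in KB)
    SET sort_mem = 65536;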
] |
[
{
"msg_contents": "> Greg Stark <[email protected]> writes:\n> This approach won't get the optimizer to actually use an index for\nthese\n> comparisons, but it will fix the semantics to match the spec. Later we\ncan\n> either improve the optimizer to detect expressions like this (which I\n> think\n> would be cooler since some users may write them by hand and not use\nthe\n> row-expression approach, but I don't see how to do it), or introduce a\nnew\n> run-time node and have the optimizer handle it. But at least we won't\nhave\n> to\n> worry about backwards-compatibility issues with the semantics\nchanging.\n> \n> Oh, I tried to stick to the style, but sometimes I couldn't help\nmyself. I\n> suppose I would have to fix up the style the rest of the way if I got\nit\n> working and you wanted a patch to apply.\n\nRegarding the <= and >= operators: can you apply them in the complex\npass? If you can, this might be more efficient.\n\n> /*\n> * Handles something like:\n> * (A,B,C) > (X,Y,Z)\n> *\n> * By constructing something like:\n> * ( ( A > X) OR (A=X AND B>Y) OR (A=X AND B=Y AND C>Z) )\n> * ^\n> */ |\n\nthe last comparison of the last major clause (or the only comparison for\na single field row construct) is a special case. In > cases use >, in\n>= cases use >=, etc.; this is logical equivalent to doing or of simple\n= intersected with complex >. \n\nIs this step of the transformation visible to the optimizer/planner?\nFor purely selfish reasons, it would be really nice if a field by field\nrow construction could get a fast path to the index if the fields match\nthe index fields.\n\nMerlin\n",
"msg_date": "Wed, 28 Jul 2004 08:47:32 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index"
}
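Spelled out, the >= case with the special-cased last comparison looks like this (one branch per leading-equality prefix, with >= only on the final column, and a1/b1/c1 standing for the key's constants):

    -- (a, b, c) >= (a1, b1, c1)
    SELECT * FROM t
    WHERE a > a1
       OR (a = a1 AND b > b1)
       OR (a = a1 AND b = b1 AND c >= c1);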
] |
[
{
"msg_contents": "> Greg Stark <[email protected]> writes:\n> > Removing <,<=,>,>= would be trivial.\n> \n> ... and not backwards-compatible. If we did that then cases involving\n> unlabeled row expressions would no longer work as they did in prior\n> releases. For example\n> \n> \tselect (1,2,3) < (4,5,6);\n> \n> is accepted by all releases back to 7.0, and probably much further\n(but\n> 7.0 is the oldest I have handy to test). The only reason the code in\n> parse_expr.c appears new is that the functionality used to be in\ngram.y.\n> \n> I'd like to see this fixed to comply with the spec, but given the lack\n> of complaints about the existing behavior over so many years, ripping\n> it out meanwhile doesn't seem appropriate.\n\nJust to clarify:\nI think Greg is arguing to bring pg to SQL 92 spec and not remove\nanything. ISTM the SQL standard was designed with exactly my problem in\nmind: how to get the next key in a table. \n\nIMHO, relying \non select (1,2,3) < (4,5,6); \nto give a result which is neither standard nor documented seems to be\nbad style. The current methodology could cause pg to give incorrect\nresults in TPC benchmarks...not good. Also, it's trivial to rewrite\nthat comparison with the old behavior using 'and'. OTOH, it is not\ntrivial to rewrite the comparison to do it the correct way...it's kind\nof an SQL 'trick question'. Most likely, a very small minority of pg\nusers are even away of the above syntax anyways.\n\nTo be fair, I'm a big fan of deprecating features for at least one\nrelease for compatibility reasons. It's no big deal to me, because I'm\nalready writing the queries out the long way anyways. My interests are\nin the optimizer. If there is a way to enhance it so that it\nmulti-column comparisons in a logical way, that would be great. Is this\ntheoretically possible (probable)?\n\nMerlin\n\n",
"msg_date": "Wed, 28 Jul 2004 11:38:00 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index "
},
{
"msg_contents": "\tHello,\n\n\tI'm building a kind of messaging/forum application with postgresql. There \nare users which will send each others messages, send messages to forums, \netc.\n\tI would like to put the messages in folders, so each user has several \nfolders (inbox, sent...), and forums are also folders.\n\n\tA message will appear in the sender's \"sent\" folder and the receiver's \ninbox, or receiving forum folder.\n\n\tThere are two ways I can do this :\n\t- either by placing two folder fields (outbox_folder_id and \nreceiving_folder_id) in the messages table, which can both point to a \nfolder or be null. When a user sends a message to another user/folder, \nthese fields are set appropriately.\n\t- or by having a messages table, and a link table linking messages to \nfolders.\n\n\tI have built a test database with about 20000 messages in 2000 folders \n(10 messages per folder).\n\n\tFinding all the messages in a folder takes more than 600 times longer \nwith the link table than with the single table approach (66 ms vs. 0.12 \nms).\n\n\tIs this normal ? I have checked explain analyze and it uses indexes in \nall cases. The query plans look right to me.\n\n",
"msg_date": "Wed, 28 Jul 2004 18:53:12 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Join performance"
}
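For concreteness, the two layouts being compared might look like this (table and index names are guesses from the description):

    -- layout 1: folder references directly on the message
    CREATE TABLE messages (
        message_id          serial PRIMARY KEY,
        outbox_folder_id    integer,
        receiving_folder_id integer,
        body                text
    );
    CREATE INDEX messages_recv_idx ON messages (receiving_folder_id);

    -- layout 2: messages linked to folders through a link table
    CREATE TABLE message_folders (
        folder_id  integer NOT NULL,
        message_id integer NOT NULL,
        PRIMARY KEY (folder_id, message_id)
    );

    -- "all messages in folder 42" under layout 2
    SELECT m.*
    FROM messages m
    JOIN message_folders mf ON mf.message_id = m.message_id
    WHERE mf.folder_id = 42;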
] |
[
{
"msg_contents": "Hi everyone,\n\nsomebody can help me??????? my boss want to migrate to\nORACLE................\n\nwe have a BIG problem of performance,it's slow....\nwe use postgres 7.3 for php security application with approximately 4\nmillions of insertion by day and 4 millions of delete and update\nand archive db with 40 millions of archived stuff...\n\nwe have 10 databases for our clients and a centralized database for the\ngeneral stuff.\n\ndatabase specs:\n\ndouble XEON 2.4 on DELL PowerEdge2650\n2 gigs of RAM\n5 SCSI Drive RAID 5 15rpm\n\ntasks:\n\n4 millions of transactions by day\n160 open connection 24 hours by day 7 days by week\npg_autovacuum running 24/7\nreindex on midnight\n\npostgresql.conf:\n\ntcpip_socket = true\n#ssl = false\n\nmax_connections = 256\n#superuser_reserved_connections = 2\n\n#port = 5432\n#hostname_lookup = false\n#show_source_port = false\n\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n\n#virtual_host = ''\n\n#krb_server_keyfile = ''\n\n\n#\n# Shared Memory Size\n#\n#shared_buffers = 256 # min max_connections*2 or 16, 8KB each\n#shared_buffers = 196000 # min max_connections*2 or 16, 8KB\neach\nshared_buffers = 128000 # min max_connections*2 or 16, 8KB each\n#max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\nmax_fsm_pages = 1000000 # min 1000, fsm is free space map, ~6 bytes\n#max_locks_per_transaction = 64 # min 10\n#wal_buffers = 8 # min 4, typically 8KB each\n\n#\n# Non-shared Memory Sizes\n#\n#sort_mem = 32168 # min 64, size in KB\n#vacuum_mem = 8192 # min 1024, size in KB\nvacuum_mem = 65536 # min 1024, size in KB\n\n#\n# Write-ahead log (WAL)\n#\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n#\n#fsync = true\n#wal_sync_method = fsync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or\nopen_datasync\n#wal_debug = 0 # range 0-16\n\n\n#\n# Optimizer Parameters\n#\n#enable_seqscan = true\n#enable_indexscan = true\n#enable_tidscan = true\n#enable_sort = true\n#enable_nestloop = true\n#enable_mergejoin = true\n#enable_hashjoin = true\n\n#effective_cache_size = 1000 # typically 8KB each\neffective_cache_size = 196608 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n#default_statistics_target = 10 # range 1-1000\n\n#\n# GEQO Optimizer Parameters\n#\n#geqo = true\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n#geqo_threshold = 11\n#geqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_random_seed = -1 # auto-compute seed\n\n\n#\n# Message display\n#\nserver_min_messages =notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log,\nfatal,\n # panic\nclient_min_messages =notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n#silent_mode = false\n\n#log_connections =true\n#log_pid =true\n#log_statement =true\n#log_duration =true\n#log_timestamp =true\n\nlog_min_error_statement =error\n# Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error,\npanic(off)\n\n#debug_print_parse = 
false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n\n#explain_pretty_print = true\n\n# requires USE_ASSERT_CHECKING\n#debug_assertions = true\n\n\n#\n# Syslog\n#\nsyslog = 0 # range 0-2\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n\n\n#\n# Statistics\n#\nshow_parser_stats = false\nshow_planner_stats =false\nshow_executor_stats = false\nshow_statement_stats =false\n\n# requires BTREE_BUILD_STATS\n#show_btree_build_stats = false\n\n\n#\n# Access statistics collection\n#\nstats_start_collector = true\nstats_reset_on_server_start = false\nstats_command_string = true\nstats_row_level = true\nstats_block_level = true\n\n\n#\n# Lock Tracing\n#\n#trace_notify = false\n\n# requires LOCK_DEBUG\n#trace_locks = false\n#trace_userlocks = false\n#trace_lwlocks = false\n#debug_deadlocks = false\n#trace_lock_oidmin = 16384\n#trace_lock_table = 0\n\n\n#\n# Misc\n#\n#autocommit = true\n#dynamic_library_path = '$libdir'\n#search_path = '$user,public'\n#datestyle = 'iso, us'\n#timezone = unknown # actually, defaults to TZ environment\nsetting\n#australian_timezones = false\n#client_encoding = sql_ascii # actually, defaults to database encoding\n#authentication_timeout = 60 # 1-600, in seconds\n#deadlock_timeout = 1000 # in milliseconds\n#default_transaction_isolation = 'read committed'\n#max_expr_depth = 10000 # min 10\n#max_files_per_process = 1000 # min 25\n#password_encryption = true\n#sql_inheritance = true\n#transform_null_equals = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n#db_user_namespace = false\n\n****************************************************************************\n***************************************************************************\n\nStephane Tessier, CISSP\nDevelopment Team Leader\n450-430-8166 X:206\n\n\n\n\n\n\n\nHi \neveryone,\n \nsomebody can help \nme??????? 
my boss want to migrate to ORACLE................\n \nwe have a BIG \nproblem of performance,it's slow....\nwe use postgres 7.3 \nfor php security application with approximately 4 millions of insertion by \nday and 4 millions of delete and update\nand archive db with \n40 millions of archived stuff...\n \nwe have 10 databases \nfor our clients and a centralized database for the general \nstuff.\n \ndatabase \nspecs:\n \ndouble XEON 2.4 on \nDELL PowerEdge2650\n2 gigs of \nRAM\n5 SCSI Drive RAID 5 \n15rpm\n \ntasks:\n \n4 millions of \ntransactions by day\n160 open connection \n24 hours by day 7 days by week\npg_autovacuum \nrunning 24/7\nreindex on \nmidnight\n \npostgresql.conf:\n \ntcpip_socket = \ntrue#ssl = false\n \nmax_connections = \n256 #superuser_reserved_connections = 2\n \n#port = 5432 \n#hostname_lookup = false#show_source_port = false\n \n#unix_socket_directory = ''#unix_socket_group = \n''#unix_socket_permissions = 0777 # octal\n \n#virtual_host = \n''\n \n#krb_server_keyfile \n= ''\n \n## Shared Memory \nSize##shared_buffers = \n256 # min \nmax_connections*2 or 16, 8KB each#shared_buffers = \n196000 \n# min max_connections*2 or 16, 8KB eachshared_buffers = \n128000 # min max_connections*2 \nor 16, 8KB each#max_fsm_relations = 1000 \n# min 10, fsm is free space map, ~40 bytesmax_fsm_pages = \n1000000 # min 1000, fsm is free \nspace map, ~6 bytes#max_locks_per_transaction = 64 # min 10#wal_buffers \n= \n8 \n# min 4, typically 8KB each\n \n## Non-shared Memory \nSizes##sort_mem = 32168 # min 64, \nsize in KB#vacuum_mem = \n8192 \n# min 1024, size in KBvacuum_mem = \n65536 \n# min 1024, size in KB\n \n## Write-ahead log \n(WAL)##checkpoint_segments = 3 \n# in logfile segments, min 1, 16MB each#checkpoint_timeout = \n300 # range 30-3600, in \nseconds##commit_delay = \n0 \n# range 0-100000, in microseconds#commit_siblings = \n5 # range \n1-1000##fsync = true #wal_sync_method = \nfsync # the default varies across \nplatforms:# \n# fsync, fdatasync, open_sync, or open_datasync#wal_debug = \n0 \n# range 0-16\n \n## Optimizer \nParameters##enable_seqscan = true#enable_indexscan = \ntrue#enable_tidscan = true#enable_sort = true#enable_nestloop = \ntrue#enable_mergejoin = true#enable_hashjoin = true\n \n#effective_cache_size = 1000 # typically 8KB \neacheffective_cache_size = 196608 # typically 8KB \neach#random_page_cost = \n4 # units are one \nsequential page fetch cost#cpu_tuple_cost = \n0.01 # \n(same)#cpu_index_tuple_cost = 0.001 # \n(same)#cpu_operator_cost = 0.0025 # (same)\n \n#default_statistics_target = 10 # range 1-1000\n \n## GEQO Optimizer \nParameters##geqo = true#geqo_selection_bias = \n2.0 # range 1.5-2.0#geqo_threshold = \n11#geqo_pool_size = \n0 # \ndefault based on tables in statement, \n \n# range 128-1024#geqo_effort = 1#geqo_generations = \n0#geqo_random_seed = \n-1 # auto-compute \nseed\n \n## Message \ndisplay#server_min_messages =notice # Values, in \norder of decreasing \ndetail: \n# debug5, debug4, debug3, debug2, \ndebug1, \n# info, notice, warning, error, log, \nfatal, \n# panicclient_min_messages =notice # \nValues, in order of decreasing \ndetail: \n# debug5, debug4, debug3, debug2, \ndebug1, \n# log, info, notice, warning, error#silent_mode = false\n \n#log_connections =true #log_pid =true #log_statement \n=true#log_duration =true#log_timestamp =true \n \nlog_min_error_statement =error# Values in order of increasing \nseverity: \n# debug5, debug4, debug3, debug2, \ndebug1, \n# info, notice, warning, error, panic(off)\n \n#debug_print_parse = 
false#debug_print_rewritten = \nfalse#debug_print_plan = false#debug_pretty_print = false\n \n#explain_pretty_print = true\n \n# requires USE_ASSERT_CHECKING#debug_assertions = true\n \n## Syslog#syslog = \n0 \n# range 0-2syslog_facility = 'LOCAL0'syslog_ident = 'postgres'\n \n## \nStatistics#show_parser_stats = falseshow_planner_stats \n=falseshow_executor_stats = falseshow_statement_stats =false \n \n# requires BTREE_BUILD_STATS#show_btree_build_stats = false\n \n## Access statistics \ncollection#stats_start_collector = truestats_reset_on_server_start = \nfalse stats_command_string = true stats_row_level = true \nstats_block_level = true \n \n## Lock \nTracing##trace_notify = false\n \n# requires LOCK_DEBUG#trace_locks = false#trace_userlocks = \nfalse#trace_lwlocks = false#debug_deadlocks = \nfalse#trace_lock_oidmin = 16384#trace_lock_table = 0\n \n## Misc##autocommit = \ntrue#dynamic_library_path = '$libdir'#search_path = \n'$user,public'#datestyle = 'iso, us'#timezone = \nunknown \n# actually, defaults to TZ environment setting#australian_timezones = \nfalse#client_encoding = sql_ascii # actually, defaults to \ndatabase encoding#authentication_timeout = 60 # 1-600, in \nseconds#deadlock_timeout = 1000 # \nin milliseconds#default_transaction_isolation = 'read \ncommitted'#max_expr_depth = \n10000 # min \n10#max_files_per_process = 1000 # min 25#password_encryption \n= true#sql_inheritance = true#transform_null_equals = \nfalse#statement_timeout = \n0 # 0 is disabled, in \nmilliseconds#db_user_namespace = false\n*******************************************************************************************************************************************************\n \nStephane Tessier, CISSPDevelopment Team \nLeader450-430-8166 X:206",
"msg_date": "Wed, 28 Jul 2004 13:08:06 -0400",
"msg_from": "\"Stephane Tessier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "my boss want to migrate to ORACLE"
},
{
"msg_contents": "On Thu, 29 Jul 2004 03:08 am, Stephane Tessier wrote:\n> Hi everyone,\n>\n> somebody can help me??????? my boss want to migrate to\n> ORACLE................\n>\n> we have a BIG problem of performance,it's slow....\n> we use postgres 7.3 for php security application with approximately 4\n> millions of insertion by day and 4 millions of delete and update\n> and archive db with 40 millions of archived stuff...\nThis is heavy update. as I say below, what is the vacuum setup like?\n>\n> we have 10 databases for our clients and a centralized database for the\n> general stuff.\n>\n> database specs:\n>\n> double XEON 2.4 on DELL PowerEdge2650\n> 2 gigs of RAM\n> 5 SCSI Drive RAID 5 15rpm\n>\n> tasks:\n>\n> 4 millions of transactions by day\n> 160 open connection 24 hours by day 7 days by week\n> pg_autovacuum running 24/7\n> reindex on midnight\nWhere is your pg_autovacuum config? how often is it set to vacuum? and \nanalyze for that matter.\n\n\n> postgresql.conf:\n>\n> tcpip_socket = true\n> #ssl = false\n>\n> max_connections = 256\n> #superuser_reserved_connections = 2\n>\n> #port = 5432\n> #hostname_lookup = false\n> #show_source_port = false\n>\n> #unix_socket_directory = ''\n> #unix_socket_group = ''\n> #unix_socket_permissions = 0777 # octal\n>\n> #virtual_host = ''\n>\n> #krb_server_keyfile = ''\n>\n>\n> #\n> # Shared Memory Size\n> #\n> #shared_buffers = 256 # min max_connections*2 or 16, 8KB each\n> #shared_buffers = 196000 # min max_connections*2 or 16, 8KB\n> each\n> shared_buffers = 128000 # min max_connections*2 or 16, 8KB each\n> #max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\n> max_fsm_pages = 1000000 # min 1000, fsm is free space map, ~6 bytes\n> #max_locks_per_transaction = 64 # min 10\n> #wal_buffers = 8 # min 4, typically 8KB each\nI would assume given heavy update you need more WAL buffers, but then I don't \nknow a lot.\n>\n> #\n> # Non-shared Memory Sizes\n> #\n> #sort_mem = 32168 # min 64, size in KB\n> #vacuum_mem = 8192 # min 1024, size in KB\n> vacuum_mem = 65536 # min 1024, size in KB\n>\n> #\n> # Write-ahead log (WAL)\n> #\n> #checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n> #checkpoint_timeout = 300 # range 30-3600, in seconds\n3 checkpoint_segments is too low for the number of inserts/delete/updates you \nare doing. 
you need a much larger check_point, something like 10+ but the \ntuning docs will give you a better idea.\n\n> #\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 5 # range 1-1000\n> #\n> #fsync = true\n> #wal_sync_method = fsync # the default varies across platforms:\n> # # fsync, fdatasync, open_sync, or\n> open_datasync\n> #wal_debug = 0 # range 0-16\n>\n>\n> #\n> # Optimizer Parameters\n> #\n> #enable_seqscan = true\n> #enable_indexscan = true\n> #enable_tidscan = true\n> #enable_sort = true\n> #enable_nestloop = true\n> #enable_mergejoin = true\n> #enable_hashjoin = true\n>\n> #effective_cache_size = 1000 # typically 8KB each\n> effective_cache_size = 196608 # typically 8KB each\n> #random_page_cost = 4 # units are one sequential page fetch cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n>\n> #default_statistics_target = 10 # range 1-1000\n>\n> #\n> # GEQO Optimizer Parameters\n> #\n> #geqo = true\n> #geqo_selection_bias = 2.0 # range 1.5-2.0\n> #geqo_threshold = 11\n> #geqo_pool_size = 0 # default based on tables in statement,\n> # range 128-1024\n> #geqo_effort = 1\n> #geqo_generations = 0\n> #geqo_random_seed = -1 # auto-compute seed\n>\n>\n> #\n> # Message display\n> #\n> server_min_messages =notice # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2, debug1,\n> # info, notice, warning, error, log,\n> fatal,\n> # panic\n> client_min_messages =notice # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2, debug1,\n> # log, info, notice, warning, error\n> #silent_mode = false\n>\n> #log_connections =true\n> #log_pid =true\n> #log_statement =true\n> #log_duration =true\n> #log_timestamp =true\n>\n> log_min_error_statement =error\n> # Values in order of increasing severity:\n> # debug5, debug4, debug3, debug2,\n> debug1, # info, notice, warning, error, panic(off)\n>\n> #debug_print_parse = false\n> #debug_print_rewritten = false\n> #debug_print_plan = false\n> #debug_pretty_print = false\n>\n> #explain_pretty_print = true\n>\n> # requires USE_ASSERT_CHECKING\n> #debug_assertions = true\n>\n>\n> #\n> # Syslog\n> #\n> syslog = 0 # range 0-2\n> syslog_facility = 'LOCAL0'\n> syslog_ident = 'postgres'\n>\n>\n> #\n> # Statistics\n> #\n> show_parser_stats = false\n> show_planner_stats =false\n> show_executor_stats = false\n> show_statement_stats =false\n>\n> # requires BTREE_BUILD_STATS\n> #show_btree_build_stats = false\n>\n>\n> #\n> # Access statistics collection\n> #\n> stats_start_collector = true\n> stats_reset_on_server_start = false\n> stats_command_string = true\n> stats_row_level = true\n> stats_block_level = true\n>\n>\n> #\n> # Lock Tracing\n> #\n> #trace_notify = false\n>\n> # requires LOCK_DEBUG\n> #trace_locks = false\n> #trace_userlocks = false\n> #trace_lwlocks = false\n> #debug_deadlocks = false\n> #trace_lock_oidmin = 16384\n> #trace_lock_table = 0\n>\n>\n> #\n> # Misc\n> #\n> #autocommit = true\n> #dynamic_library_path = '$libdir'\n> #search_path = '$user,public'\n> #datestyle = 'iso, us'\n> #timezone = unknown # actually, defaults to TZ environment\n> setting\n> #australian_timezones = false\n> #client_encoding = sql_ascii # actually, defaults to database encoding\n> #authentication_timeout = 60 # 1-600, in seconds\n> #deadlock_timeout = 1000 # in milliseconds\n> #default_transaction_isolation = 'read committed'\n> #max_expr_depth = 10000 # min 10\n> #max_files_per_process = 1000 # min 25\n> #password_encryption = true\n> 
#sql_inheritance = true\n> #transform_null_equals = false\n> #statement_timeout = 0 # 0 is disabled, in milliseconds\n> #db_user_namespace = false\n>\n> ***************************************************************************\n>*\n> ***************************************************************************\n>\n> Stephane Tessier, CISSP\n> Development Team Leader\n> 450-430-8166 X:206\n",
"msg_date": "Fri, 30 Jul 2004 11:21:47 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
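A small postgresql.conf sketch in the direction suggested above; the exact numbers are illustrative only and should be checked against the tuning docs:

    # sized for heavy insert/update traffic (each segment is 16 MB on disk)
    checkpoint_segments = 20
    checkpoint_timeout = 300
    wal_buffers = 64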
{
"msg_contents": "Quoting Stephane Tessier <[email protected]>:\n\n\nGeneral parameter suggestions:\n\n> shared_buffers = 128000 # min max_connections*2 or 16, 8KB each\n> effective_cache_size = 196608 # typically 8KB each\n\nTry reducing shared_buffers (say 30000). There has been much discussion regards\nsetting this parameter - most folks see minimal gains, or even performance\n*loss* with settings higher than 30000.\n\n> #sort_mem = 32168 # min 64, size in KB\n\nDepending on what queries you are experiencing problems with, you might want to\nset this parameter to something. However with 160 connections you need to be\ncareful, as each one uses this much memory (if it needs to sort).\n\n> #wal_buffers = 8\n\nTry this in the 100-1000 range\n\n> #checkpoint_segments = 3\n\nTry this quite a lot higher - say 10-50.\n\n\nPg_autovacuum:\n\nI have not used this, so can't really give any help about setup. However you can\ncheck how well it is working by selecting relname, reltuples, relpages out of\npg_class, and see if you have relations with stupidly low numbers of rows per\npage (clearly you need to be ANALYZing reasonably frequently for this query to\ngive accurate information).\n\nOther Comments:\n\nTo provide you with more help, we need to know what is slow. Particulary what\nqueries are the most troublesome (post EXPLAIN ANALYZE of them). You can\nidentify these by setting the log_statement and log_duration parameters.\n\nFinally: (comment for your boss), it is *not* a given that ORACLE will perform\nany better.....\n\nregards\n\nMark\n\n",
"msg_date": "Fri, 30 Jul 2004 14:33:42 +1200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
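One way to write the pg_class check described above; relations averaging very few rows per page are candidates for more aggressive vacuuming (VACUUM/ANALYZE recently so reltuples and relpages are current):

    SELECT relname, reltuples, relpages,
           round(reltuples / relpages) AS rows_per_page
    FROM pg_class
    WHERE relkind = 'r'
      AND relpages > 10
    ORDER BY rows_per_page
    LIMIT 20;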
{
"msg_contents": "A furthur thought or two:\n\n- you are *sure* that it is Postgres that is slow? (could be Php...or your\n machine could be out of some resource - see next 2 points)\n- is your machine running out of cpu or memory?\n- is your machine seeing huge io transfers or long io waits?\n- are you running Php on this machine as well as Postgres?\n- what os (and what release) are you running? (guessing Linux but...)\n\nAs an aside, they always say this but: Postgres 7.4 generally performs better\nthan 7.3...so an upgrade could be worth it - *after* you have solved/identified\nthe other issues.\n\nbest wishes\n\nMark\n\nQuoting Stephane Tessier <[email protected]>:\n\n> Hi everyone,\n>\n> somebody can help me??????? my boss want to migrate to\n> ORACLE................\n>\n> we have a BIG problem of performance,it's slow....\n> we use postgres 7.3 for php security application with approximately 4\n> millions of insertion by day and 4 millions of delete and update\n> and archive db with 40 millions of archived stuff...\n>\n> we have 10 databases for our clients and a centralized database for the\n> general stuff.\n>\n> database specs:\n>\n> double XEON 2.4 on DELL PowerEdge2650\n> 2 gigs of RAM\n> 5 SCSI Drive RAID 5 15rpm\n>\n> tasks:\n>\n> 4 millions of transactions by day\n> 160 open connection 24 hours by day 7 days by week\n> pg_autovacuum running 24/7\n> reindex on midnight\n\n\n",
"msg_date": "Fri, 30 Jul 2004 14:59:53 +1200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
{
"msg_contents": "\nOn Jul 28, 2004, at 1:08 PM, Stephane Tessier wrote:\n\n> we have a BIG problem of performance,it's slow....\n\nCan you isolate which part is slow? (log_min_duration is useful for \nfinding your slow running queries)\n\n> we use postgres 7.3 for php security application with approximately 4 \n> millions of insertion by day and 4 millions of delete and update\n\nThat is pretty heavy write volume. Are these updates done in batches \nor \"now and then\"? If they are done in batches you could speed them up \nby wrapping them inside a transaction.\n\n> #shared_buffers = 256 # min max_connections*2 or 16, 8KB each\n> #shared_buffers = 196000 # min max_connections*2 or 16, \n> 8KB each\n> shared_buffers = 128000 # min max_connections*2 or 16, 8KB each\n>\nToo much. Generally over 10000 will stop benefitting you.\n\n> #wal_buffers = 8 # min 4, typically 8KB each\n\nMight want to bump this up\n\n> #checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n\nGiven your write volume, increase this up a bit.. oh.. 20 or 30 of them \nwill help a lot.\nBut it will use 16*30MB of disk space.\n\nOracle is *NOT* a silver bullet.\nIt will not instantly make your problems go away.\n\nI'm working on a project porting some things to Oracle and as a test I \nalso ported it to Postgres. And you know what? Postgres is running \nabout 30% faster than Oracle. The Oracle lovers here are not too happy \nwith that one :) Just so you know..\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Fri, 30 Jul 2004 08:33:52 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
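If the 4 million daily inserts arrive in batches, the batching suggestion above can be as simple as wrapping each batch in one transaction, so the whole batch costs one commit/fsync instead of one per row (table and column names here are hypothetical):

    BEGIN;
    INSERT INTO event_log (src_ip, event_time) VALUES ('10.0.0.1', now());
    INSERT INTO event_log (src_ip, event_time) VALUES ('10.0.0.2', now());
    -- ... the rest of the batch ...
    COMMIT;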
{
"msg_contents": "I think with your help guys I'll do it!\n\nI'm working on it!\n\nI'll work on theses issues:\n\nwe have space for more ram(we use 2 gigs on possibility of 3 gigs)\niowait is very high 98% --> look like postgresql wait for io access\nraid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\nwrite on disk\nuse more transactions (we have a lot of insert/update without transaction).\ncpu look like not running very hard\n\n*php is not running on the same machine\n*redhat enterprise 3.0 ES\n*the version of postgresql is 7.3.4(using RHDB from redhat)\n*pg_autovacuum running at 12 and 24 hour each day\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of\[email protected]\nSent: 29 juillet, 2004 23:00\nTo: Stephane Tessier\nCc: [email protected]\nSubject: Re: [PERFORM] my boss want to migrate to ORACLE\n\n\nA furthur thought or two:\n\n- you are *sure* that it is Postgres that is slow? (could be Php...or your\n machine could be out of some resource - see next 2 points)\n- is your machine running out of cpu or memory?\n- is your machine seeing huge io transfers or long io waits?\n- are you running Php on this machine as well as Postgres?\n- what os (and what release) are you running? (guessing Linux but...)\n\nAs an aside, they always say this but: Postgres 7.4 generally performs\nbetter\nthan 7.3...so an upgrade could be worth it - *after* you have\nsolved/identified\nthe other issues.\n\nbest wishes\n\nMark\n\nQuoting Stephane Tessier <[email protected]>:\n\n> Hi everyone,\n>\n> somebody can help me??????? my boss want to migrate to\n> ORACLE................\n>\n> we have a BIG problem of performance,it's slow....\n> we use postgres 7.3 for php security application with approximately 4\n> millions of insertion by day and 4 millions of delete and update\n> and archive db with 40 millions of archived stuff...\n>\n> we have 10 databases for our clients and a centralized database for the\n> general stuff.\n>\n> database specs:\n>\n> double XEON 2.4 on DELL PowerEdge2650\n> 2 gigs of RAM\n> 5 SCSI Drive RAID 5 15rpm\n>\n> tasks:\n>\n> 4 millions of transactions by day\n> 160 open connection 24 hours by day 7 days by week\n> pg_autovacuum running 24/7\n> reindex on midnight\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n",
"msg_date": "Fri, 30 Jul 2004 09:56:17 -0400",
"msg_from": "\"Stephane Tessier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
{
"msg_contents": "On Fri, 2004-07-30 at 07:56, Stephane Tessier wrote:\n> I think with your help guys I'll do it!\n> \n> I'm working on it!\n> \n> I'll work on theses issues:\n> \n> we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n> iowait is very high 98% --> look like postgresql wait for io access\n> raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n> write on disk\n\nJust get battery backed cache on your RAID controller. RAID0 is way too\nunreliable for a production environment. One disk dies and all your\ndata is just gone.\n\n> use more transactions (we have a lot of insert/update without transaction).\n> cpu look like not running very hard\n> \n> *php is not running on the same machine\n> *redhat enterprise 3.0 ES\n> *the version of postgresql is 7.3.4(using RHDB from redhat)\n> *pg_autovacuum running at 12 and 24 hour each day\n> \n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of\n> [email protected]\n> Sent: 29 juillet, 2004 23:00\n> To: Stephane Tessier\n> Cc: [email protected]\n> Subject: Re: [PERFORM] my boss want to migrate to ORACLE\n> \n> \n> A furthur thought or two:\n> \n> - you are *sure* that it is Postgres that is slow? (could be Php...or your\n> machine could be out of some resource - see next 2 points)\n> - is your machine running out of cpu or memory?\n> - is your machine seeing huge io transfers or long io waits?\n> - are you running Php on this machine as well as Postgres?\n> - what os (and what release) are you running? (guessing Linux but...)\n> \n> As an aside, they always say this but: Postgres 7.4 generally performs\n> better\n> than 7.3...so an upgrade could be worth it - *after* you have\n> solved/identified\n> the other issues.\n> \n> best wishes\n> \n> Mark\n> \n> Quoting Stephane Tessier <[email protected]>:\n> \n> > Hi everyone,\n> >\n> > somebody can help me??????? my boss want to migrate to\n> > ORACLE................\n> >\n> > we have a BIG problem of performance,it's slow....\n> > we use postgres 7.3 for php security application with approximately 4\n> > millions of insertion by day and 4 millions of delete and update\n> > and archive db with 40 millions of archived stuff...\n> >\n> > we have 10 databases for our clients and a centralized database for the\n> > general stuff.\n> >\n> > database specs:\n> >\n> > double XEON 2.4 on DELL PowerEdge2650\n> > 2 gigs of RAM\n> > 5 SCSI Drive RAID 5 15rpm\n> >\n> > tasks:\n> >\n> > 4 millions of transactions by day\n> > 160 open connection 24 hours by day 7 days by week\n> > pg_autovacuum running 24/7\n> > reindex on midnight\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n",
"msg_date": "Fri, 30 Jul 2004 09:14:51 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
{
"msg_contents": "Stephane Tessier wrote:\n\n>I think with your help guys I'll do it!\n>\n>I'm working on it!\n>\n>I'll work on theses issues:\n>\n>we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n>iowait is very high 98% --> look like postgresql wait for io access\n>raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n>write on disk\n>use more transactions (we have a lot of insert/update without transaction).\n>cpu look like not running very hard\n>\n>*php is not running on the same machine\n>*redhat enterprise 3.0 ES\n>*the version of postgresql is 7.3.4(using RHDB from redhat)\n>*pg_autovacuum running at 12 and 24 hour each day\n> \n>\nWhat do you mean by \"pg_autovacuum running at 12 and 24 hour each day\"?\n",
"msg_date": "Fri, 30 Jul 2004 13:56:16 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
{
"msg_contents": "On Fri, 30 Jul 2004, Matthew T. O'Connor wrote:\n\n> Stephane Tessier wrote:\n>\n> >I think with your help guys I'll do it!\n> >\n> >I'm working on it!\n> >\n> >I'll work on theses issues:\n> >\n> >we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n> >iowait is very high 98% --> look like postgresql wait for io access\n> >raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n> >write on disk\n> >use more transactions (we have a lot of insert/update without transaction).\n> >cpu look like not running very hard\n> >\n> >*php is not running on the same machine\n> >*redhat enterprise 3.0 ES\n> >*the version of postgresql is 7.3.4(using RHDB from redhat)\n> >*pg_autovacuum running at 12 and 24 hour each day\n> >\n> >\n> What do you mean by \"pg_autovacuum running at 12 and 24 hour each day\"?\n\nI suspect he means at 1200 and 2400 each day (i.e noon and midnight).\n\n-- \nDan Langille - http://www.langille.org/\n",
"msg_date": "Fri, 30 Jul 2004 14:11:41 -0400 (EDT)",
"msg_from": "Dan Langille <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
{
"msg_contents": "pg_autovacuum is a daemon, not something that get's run twice a day. \nI think that's what the question Matthew was getting @. I'm not sure \nwhat would happen to performance if pg_autovacuum was launched twice a \nday from cron, but you could end up in an ugly situation if it starts \nup.\n\n--brian\n\nOn Jul 30, 2004, at 12:11 PM, Dan Langille wrote:\n\n> On Fri, 30 Jul 2004, Matthew T. O'Connor wrote:\n>\n>> Stephane Tessier wrote:\n>>\n>>> I think with your help guys I'll do it!\n>>>\n>>> I'm working on it!\n>>>\n>>> I'll work on theses issues:\n>>>\n>>> we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n>>> iowait is very high 98% --> look like postgresql wait for io access\n>>> raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for \n>>> each\n>>> write on disk\n>>> use more transactions (we have a lot of insert/update without \n>>> transaction).\n>>> cpu look like not running very hard\n>>>\n>>> *php is not running on the same machine\n>>> *redhat enterprise 3.0 ES\n>>> *the version of postgresql is 7.3.4(using RHDB from redhat)\n>>> *pg_autovacuum running at 12 and 24 hour each day\n>>>\n>>>\n>> What do you mean by \"pg_autovacuum running at 12 and 24 hour each \n>> day\"?\n>\n> I suspect he means at 1200 and 2400 each day (i.e noon and midnight).\n>\n> -- \n> Dan Langille - http://www.langille.org/\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n",
"msg_date": "Fri, 30 Jul 2004 12:29:02 -0600",
"msg_from": "Brian Hirt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
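If the real intent is a twice-daily maintenance pass rather than a resident pg_autovacuum, a plain VACUUM ANALYZE driven by cron does that; a sketch (the database name is a placeholder, and connection options may be needed):

    # crontab entry: run at 00:00 and 12:00 every day
    0 0,12 * * * psql -c "VACUUM ANALYZE" mydb

pg_autovacuum itself is meant to be started once and left running so it can decide when each table needs attention, not launched from cron.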
{
"msg_contents": "After a long battle with technology, [email protected] (\"Stephane Tessier\"), an earthling, wrote:\n> I think with your help guys I'll do it!\n>\n> I'm working on it!\n>\n> I'll work on theses issues:\n>\n> we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n\nThat _may_ help; not completely clear.\n\n> iowait is very high 98% --> look like postgresql wait for io access\n\nIn that case, if you haven't got a RAID controller with battery backed\ncache, then that should buy you a BIG boost in performance. Maybe\n$1500 USD; that could be money FABULOUSLY well spent.\n\n> raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n> write on disk\n\nI try to avoid talking about RAID levels, and leave them to others\n:-).\n\nSticking WAL on a solid state disk would be WAY COOL; you almost\ncertainly are hitting WAL really hard, which eventually cooks disks.\nWhat is unfortunate is that there doesn't seem to be a \"low end\" 1GB\nSSD; I'd hope that would cost $5K, and that might give a bigger boost\nthan the battery-backed RAID controller with lotsa cache.\n\n> use more transactions (we have a lot of insert/update without\n> transaction).\n\nThat'll help unless you get the RAID controller, in which case WAL\nupdates become much cheaper.\n\n> cpu look like not running very hard\n\nNot surprising.\n\n> *php is not running on the same machine\n> *redhat enterprise 3.0 ES\n> *the version of postgresql is 7.3.4(using RHDB from redhat)\n\nAll makes sense. It would be attractive to move to 7.4.2 or 7.4.3;\nthey're really quite a lot faster. If there's no option to migrate\nquickly, then 7.5 has interesting cache management changes that ought\nto help even more, particularly with your vacuuming issues :-).\n\nBut it's probably better to get two incremental changes; migrating to\n7.4, and being able to tell the boss \"That improved performance by\nx%\", and then doing _another_ upgrade that _also_ improves things\nshould provide a pretty compelling argument in favour of keeping up\nthe good work with PostgreSQL.\n\n> *pg_autovacuum running at 12 and 24 hour each day\n\nThat really doesn't make sense.\n\nThe point of pg_autovacuum is for it to run 24 hours a day.\n\nIf you kick it off twice, once at 11:59, then stop it, and then once\nat 23:59, and then stop it, it shouldn't actually do any work. Or\nhave you set it up with a 'sleep period' of ~12 hours?\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://www.ntlug.org/~cbbrowne/sgml.html\nThe *Worst* Things to Say to a Police Officer: Hey, is that a 9 mm?\nThat's nothing compared to this .44 magnum.\n",
"msg_date": "Fri, 30 Jul 2004 21:22:25 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
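One inexpensive way to act on the WAL observation above, short of buying an SSD, is to give pg_xlog its own spindle. A sketch, assuming a data directory of /var/lib/pgsql/data and a dedicated disk mounted at /wal (both paths are placeholders; run as the postgres user with the server stopped):

    pg_ctl -D /var/lib/pgsql/data stop
    mv /var/lib/pgsql/data/pg_xlog /wal/pg_xlog
    ln -s /wal/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl -D /var/lib/pgsql/data start

This separates the constant WAL writes from the data-file writes, which is aimed at exactly the 98% iowait situation described.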
{
"msg_contents": "On Fri, 2004-07-30 at 19:22, Christopher Browne wrote:\n> After a long battle with technology, [email protected] (\"Stephane Tessier\"), an earthling, wrote:\n> > I think with your help guys I'll do it!\n> >\n> > I'm working on it!\n> >\n> > I'll work on theses issues:\n> >\n> > we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n> \n> That _may_ help; not completely clear.\n> \n> > iowait is very high 98% --> look like postgresql wait for io access\n> \n> In that case, if you haven't got a RAID controller with battery backed\n> cache, then that should buy you a BIG boost in performance. Maybe\n> $1500 USD; that could be money FABULOUSLY well spent.\n> \n> > raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n> > write on disk\n> \n> I try to avoid talking about RAID levels, and leave them to others\n> :-).\n\nFYI, in a previous post on this topic, the original poster put up a top\noutput that showed the machine using 2 gigs of swap with 150 Meg for\nkernel cache, and all the memory being used by a few postgresql\nprocesses. The machine was simply configured to give WAY too much\nmemory to shared buffers and sort mem and this was likely causing the\nbig slowdown.\n\nAdding a battery backed caching RAID controller and properly configuring\npostgresql.conf should get him into the realm of reasonable performance.\n\n",
"msg_date": "Fri, 30 Jul 2004 22:14:41 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
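As a rough illustration of postgresql.conf settings that avoid the swapping scenario described above on a 2 GB machine, something along these lines is a common starting point (the values are only suggestions to be verified by testing, not a recommendation for this specific workload):

    shared_buffers = 10000          # about 80 MB of 8 KB buffers
    sort_mem = 8192                 # in KB, per sort and per backend -- remember the 160 connections
    effective_cache_size = 150000   # planner hint only, roughly the OS cache size in 8 KB pages
    wal_buffers = 16
    checkpoint_segments = 20        # each WAL segment is 16 MB

The key distinction is that shared_buffers and sort_mem are actually allocated (sort_mem potentially many times over), while effective_cache_size is just a hint to the planner.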
{
"msg_contents": "Regarding Raid5 at all, you might want to look at http://www.baarf.com\n\n\"\"Stephane Tessier\"\" <[email protected]> wrote in message\nnews:001f01c4763c$fcef40f0$4e00020a@develavoie...\n> I think with your help guys I'll do it!\n>\n> I'm working on it!\n>\n> I'll work on theses issues:\n>\n> we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n> iowait is very high 98% --> look like postgresql wait for io access\n> raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n....\n\n\n",
"msg_date": "Sun, 01 Aug 2004 19:13:12 GMT",
"msg_from": "\"Mischa Sandberg\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
{
"msg_contents": "I checked and we have a 128 megs battery backed cache on the raid\ncontroller...\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]]\nSent: 30 juillet, 2004 11:15\nTo: Stephane Tessier\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] my boss want to migrate to ORACLE\n\n\nOn Fri, 2004-07-30 at 07:56, Stephane Tessier wrote:\n> I think with your help guys I'll do it!\n>\n> I'm working on it!\n>\n> I'll work on theses issues:\n>\n> we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n> iowait is very high 98% --> look like postgresql wait for io access\n> raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n> write on disk\n\nJust get battery backed cache on your RAID controller. RAID0 is way too\nunreliable for a production environment. One disk dies and all your\ndata is just gone.\n\n> use more transactions (we have a lot of insert/update without\ntransaction).\n> cpu look like not running very hard\n>\n> *php is not running on the same machine\n> *redhat enterprise 3.0 ES\n> *the version of postgresql is 7.3.4(using RHDB from redhat)\n> *pg_autovacuum running at 12 and 24 hour each day\n>\n>\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of\n> [email protected]\n> Sent: 29 juillet, 2004 23:00\n> To: Stephane Tessier\n> Cc: [email protected]\n> Subject: Re: [PERFORM] my boss want to migrate to ORACLE\n>\n>\n> A furthur thought or two:\n>\n> - you are *sure* that it is Postgres that is slow? (could be Php...or your\n> machine could be out of some resource - see next 2 points)\n> - is your machine running out of cpu or memory?\n> - is your machine seeing huge io transfers or long io waits?\n> - are you running Php on this machine as well as Postgres?\n> - what os (and what release) are you running? (guessing Linux but...)\n>\n> As an aside, they always say this but: Postgres 7.4 generally performs\n> better\n> than 7.3...so an upgrade could be worth it - *after* you have\n> solved/identified\n> the other issues.\n>\n> best wishes\n>\n> Mark\n>\n> Quoting Stephane Tessier <[email protected]>:\n>\n> > Hi everyone,\n> >\n> > somebody can help me??????? my boss want to migrate to\n> > ORACLE................\n> >\n> > we have a BIG problem of performance,it's slow....\n> > we use postgres 7.3 for php security application with approximately 4\n> > millions of insertion by day and 4 millions of delete and update\n> > and archive db with 40 millions of archived stuff...\n> >\n> > we have 10 databases for our clients and a centralized database for the\n> > general stuff.\n> >\n> > database specs:\n> >\n> > double XEON 2.4 on DELL PowerEdge2650\n> > 2 gigs of RAM\n> > 5 SCSI Drive RAID 5 15rpm\n> >\n> > tasks:\n> >\n> > 4 millions of transactions by day\n> > 160 open connection 24 hours by day 7 days by week\n> > pg_autovacuum running 24/7\n> > reindex on midnight\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n\n",
"msg_date": "Mon, 2 Aug 2004 17:18:38 -0400",
"msg_from": "\"Stephane Tessier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
{
"msg_contents": "oups,\n\ni changed for RAID 10(strip and mirror)....\n\n-----Original Message-----\nFrom: James Thornton [mailto:[email protected]]\nSent: 2 aout, 2004 17:32\nTo: Stephane Tessier\nCc: 'Scott Marlowe'; [email protected];\[email protected]\nSubject: Re: [PERFORM] my boss want to migrate to ORACLE\n\n\nStephane Tessier wrote:\n\n> I checked and we have a 128 megs battery backed cache on the raid\n> controller...\n\n>>we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n>>iowait is very high 98% --> look like postgresql wait for io access\n>>raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n>>write on disk\n> \n> Just get battery backed cache on your RAID controller. RAID0 is way too\n> unreliable for a production environment. One disk dies and all your\n> data is just gone.\n\nI'm the one who sent the e-mail about RAID 5's 4 writes, but I suggested \nyou look at RAID 10, not RAID 0.\n\n-- \n\n James Thornton\n______________________________________________________\nInternet Business Consultant, http://jamesthornton.com\n\n",
"msg_date": "Mon, 2 Aug 2004 17:30:52 -0400",
"msg_from": "\"Stephane Tessier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
{
"msg_contents": "Stephane Tessier wrote:\n\n> I checked and we have a 128 megs battery backed cache on the raid\n> controller...\n\n>>we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n>>iowait is very high 98% --> look like postgresql wait for io access\n>>raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n>>write on disk\n> \n> Just get battery backed cache on your RAID controller. RAID0 is way too\n> unreliable for a production environment. One disk dies and all your\n> data is just gone.\n\nI'm the one who sent the e-mail about RAID 5's 4 writes, but I suggested \nyou look at RAID 10, not RAID 0.\n\n-- \n\n James Thornton\n______________________________________________________\nInternet Business Consultant, http://jamesthornton.com\n\n",
"msg_date": "Mon, 02 Aug 2004 16:32:19 -0500",
"msg_from": "James Thornton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
{
"msg_contents": "It may be worth pricing up expansion options e.g. 256M or more.\nThe other path to consider is changing RAID5 -> RAID10 if your card supports it.\n\nHowever, I would recommend reducing that shared_buffers setting and doing your\nperformance measurements *again* - before changing anything else. This is\nbecause you want to ensure that all your io hammering is not just because you\nare making the machine swap (by giving postgres too much memory as Scott\nmentioned)!\n\nQuoting Stephane Tessier <[email protected]>:\n\n> I checked and we have a 128 megs battery backed cache on the raid\n> controller...\n\n\n",
"msg_date": "Tue, 3 Aug 2004 10:47:18 +1200",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
},
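A quick way to do that "measure again" step, and to tell swapping apart from genuine database I/O, is to watch the machine while the load is running; a sketch with standard Linux tools (iostat needs the sysstat package):

    vmstat 5      # si/so show swap-in/swap-out; high values mean the memory settings are too big
    free -m       # how much RAM is in swap versus OS cache
    iostat -x 5   # per-device utilisation, to see which disks are saturated

If si/so stay near zero while the disks are pegged, the iowait is real database I/O; if they are busy, shrink shared_buffers and sort_mem first.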
{
"msg_contents": "Is it set to write back or write through? Also, you may want to look at\nlowering the stripe size. The default on many RAID controllers is 128k,\nbut for PostgreSQL 8k to 32k seems a good choice. But that's not near\nas important as the cache setting being write back.\n\nOn Mon, 2004-08-02 at 15:18, Stephane Tessier wrote:\n> I checked and we have a 128 megs battery backed cache on the raid\n> controller...\n> \n> -----Original Message-----\n> From: Scott Marlowe [mailto:[email protected]]\n> Sent: 30 juillet, 2004 11:15\n> To: Stephane Tessier\n> Cc: [email protected]; [email protected]\n> Subject: Re: [PERFORM] my boss want to migrate to ORACLE\n> \n> \n> On Fri, 2004-07-30 at 07:56, Stephane Tessier wrote:\n> > I think with your help guys I'll do it!\n> >\n> > I'm working on it!\n> >\n> > I'll work on theses issues:\n> >\n> > we have space for more ram(we use 2 gigs on possibility of 3 gigs)\n> > iowait is very high 98% --> look like postgresql wait for io access\n> > raid5 -->raid0 if i'm right raid5 use 4 writes(parity,data, etc) for each\n> > write on disk\n> \n> Just get battery backed cache on your RAID controller. RAID0 is way too\n> unreliable for a production environment. One disk dies and all your\n> data is just gone.\n> \n> > use more transactions (we have a lot of insert/update without\n> transaction).\n> > cpu look like not running very hard\n> >\n> > *php is not running on the same machine\n> > *redhat enterprise 3.0 ES\n> > *the version of postgresql is 7.3.4(using RHDB from redhat)\n> > *pg_autovacuum running at 12 and 24 hour each day\n> >\n> >\n> >\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of\n> > [email protected]\n> > Sent: 29 juillet, 2004 23:00\n> > To: Stephane Tessier\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] my boss want to migrate to ORACLE\n> >\n> >\n> > A furthur thought or two:\n> >\n> > - you are *sure* that it is Postgres that is slow? (could be Php...or your\n> > machine could be out of some resource - see next 2 points)\n> > - is your machine running out of cpu or memory?\n> > - is your machine seeing huge io transfers or long io waits?\n> > - are you running Php on this machine as well as Postgres?\n> > - what os (and what release) are you running? (guessing Linux but...)\n> >\n> > As an aside, they always say this but: Postgres 7.4 generally performs\n> > better\n> > than 7.3...so an upgrade could be worth it - *after* you have\n> > solved/identified\n> > the other issues.\n> >\n> > best wishes\n> >\n> > Mark\n> >\n> > Quoting Stephane Tessier <[email protected]>:\n> >\n> > > Hi everyone,\n> > >\n> > > somebody can help me??????? 
my boss want to migrate to\n> > > ORACLE................\n> > >\n> > > we have a BIG problem of performance,it's slow....\n> > > we use postgres 7.3 for php security application with approximately 4\n> > > millions of insertion by day and 4 millions of delete and update\n> > > and archive db with 40 millions of archived stuff...\n> > >\n> > > we have 10 databases for our clients and a centralized database for the\n> > > general stuff.\n> > >\n> > > database specs:\n> > >\n> > > double XEON 2.4 on DELL PowerEdge2650\n> > > 2 gigs of RAM\n> > > 5 SCSI Drive RAID 5 15rpm\n> > >\n> > > tasks:\n> > >\n> > > 4 millions of transactions by day\n> > > 160 open connection 24 hours by day 7 days by week\n> > > pg_autovacuum running 24/7\n> > > reindex on midnight\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: the planner will ignore your desire to choose an index scan if your\n> > joining column's datatypes do not match\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 8: explain analyze is your friend\n> >\n> \n> \n\n",
"msg_date": "Mon, 02 Aug 2004 18:17:16 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: my boss want to migrate to ORACLE"
}
] |
[
{
"msg_contents": "I have the following query that I have to quit after 10 mins. I have vacuum full analyzed each of the tables and still nothing seems to help. There are indexes on all the join fields. I've included the query, explain and the table structures. Please let me know if there is anything else I should provide.\n\n\nSELECT m.r_score,m.f_score,m.m_score,m.rfm_score,ed.email_id, '00'::varchar as event_id, \n COUNT(e.email_adr) AS count\n FROM cdm.cdm_indiv_mast m \n INNER JOIN cdm.cdm_email_data e on e.indiv_fkey = m.indiv_key\n INNER JOIN cdm.email_sent es on es.email_address = e.email_adr\n inner join cdm.email_description ed on ed.email_id = es.email_id\n where (ed.event_date::date between '2003-10-07' and '2003-12-31')\n and m.m_score >0\n GROUP BY 1,2,3,4,5,6\n\n\nQUERY PLAN\nHashAggregate (cost=551716.04..551724.14 rows=3241 width=58)\n -> Hash Join (cost=417315.57..546391.36 rows=304267 width=58)\n Hash Cond: ((\"outer\".email_address)::text = (\"inner\".email_adr)::text)\n -> Nested Loop (cost=0.00..85076.89 rows=1309498 width=42)\n -> Seq Scan on email_description ed (cost=0.00..20.72 rows=3 width=19)\n Filter: (((event_date)::date >= '2003-10-07'::date) AND ((event_date)::date <= '2003-12-31'::date))\n -> Index Scan using emailsnt_id_idx on email_sent es (cost=0.00..20964.10 rows=591036 width=42)\n Index Cond: ((\"outer\".email_id)::text = (es.email_id)::text)\n -> Hash (cost=404914.28..404914.28 rows=1202517 width=39)\n -> Hash Join (cost=144451.53..404914.28 rows=1202517 width=39)\n Hash Cond: (\"outer\".indiv_fkey = \"inner\".indiv_key)\n -> Seq Scan on cdm_email_data e (cost=0.00..93002.83 rows=5175383 width=31)\n -> Hash (cost=134399.24..134399.24 rows=1202517 width=24)\n -> Index Scan using m_score_idx on cdm_indiv_mast m (cost=0.00..134399.24 rows=1202517 width=24)\n Index Cond: (m_score > 0)\n\n\n\nCREATE TABLE cdm.cdm_email_data\n(\n email_adr varchar(75) NOT NULL,\n opt_out char(1) DEFAULT 'n'::bpchar,\n indiv_fkey int8 NOT NULL,\n CONSTRAINT email_datuniq UNIQUE (email_adr, indiv_fkey)\n) WITHOUT OIDS;\nCREATE INDEX emaildat_email_idx ON cdm.cdm_email_data USING btree (email_adr);\n\nCREATE TABLE cdm.cdm_indiv_mast\n(\n name_first varchar(20),\n name_middle varchar(20),\n name_last varchar(30),\n name_suffix varchar(5),\n addr1 varchar(40),\n addr2 varchar(40),\n addr3 varchar(40),\n city varchar(25),\n state varchar(7),\n r_score int4,\n f_score int4,\n m_score int4,\n rfm_score int4,\n rfm_segment int4,\n CONSTRAINT indiv_mast_pk PRIMARY KEY (indiv_key)\n) WITH OIDS;\nCREATE INDEX f_score_idx ON cdm.cdm_indiv_mast USING btree (f_score);\nCREATE INDEX m_score_idx ON cdm.cdm_indiv_mast USING btree (m_score);\nCREATE INDEX r_score_idx ON cdm.cdm_indiv_mast USING btree (r_score);\n\nCREATE TABLE cdm.email_description\n(\n email_id varchar(20) NOT NULL,\n event_date timestamp,\n affiliate varchar(75),\n event_name varchar(100),\n mailing varchar(255),\n category varchar(100),\n div_code varchar(30),\n mkt_category varchar(50),\n merch_code varchar(50),\n campaign_code varchar(50),\n offer_code varchar(30),\n CONSTRAINT email_desc_pk PRIMARY KEY (email_id)\n) WITHOUT OIDS;\nCREATE INDEX email_desc_id_idx ON cdm.email_description USING btree (email_id);\nCREATE INDEX eml_desc_date_idx ON cdm.email_description USING btree (event_date);\n\nCREATE TABLE cdm.email_sent\n(\n email_address varchar(75),\n email_id varchar(20),\n email_sent_ts timestamp,\n email_type char(1)\n) WITHOUT OIDS;\nCREATE INDEX emailsnt_id_idx ON cdm.email_sent USING btree (email_id);\nCREATE 
INDEX email_sent_emailidx ON cdm.email_sent USING btree (email_address);\n\n\n-TIA\n-Patrick Hatcher\n",
"msg_date": "Thu, 29 Jul 2004 15:54:49 +0000",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Extremely slow query..."
}
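One thing that stands out in the plan above is that the date filter is written as (event_date)::date BETWEEN ..., which casts the indexed timestamp column and so cannot use eml_desc_date_idx. A possible rewrite keeps the same range but leaves the column bare; worth checking in isolation first (this is a suggestion to test, not a guaranteed fix for the whole query):

    EXPLAIN ANALYZE
    SELECT email_id
    FROM cdm.email_description ed
    WHERE ed.event_date >= '2003-10-07'
      AND ed.event_date <  '2004-01-01';   -- covers through the end of 2003-12-31

The same compare-the-column-not-a-cast predicate can then be substituted into the big query; with email_description this small it may not change much, but the principle applies wherever the date range is the selective part.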
] |
[
{
"msg_contents": "Just an update on this:\nqueries in the 'next key' form on fields(a,b,c)\nonly ever use the index for the first field (a). I just never noticed\nthat before...in most cases this is selective enough. In most logical\ncases in multi part keys the most important stuff comes first.\n\nCuriously, queries in the Boolean reversed form (switching and with or,\netc.) find the index but do not apply any condition, kind of like an\nindexed sequential scan...\n\nWell, if and when the rowtype comparison can be made to work over multi\npart keys (and the optimizer is made to do tricks there), postgres can\nbe made to give much better generalized ISAM access. In the meantime,\nI'll just troubleshoot specific cases via application specific behavior\nas they come up. In any case, many thanks to Greg and Tom for taking\nthe time to pick this apart.\n\nMerlin\n",
"msg_date": "Thu, 29 Jul 2004 12:48:51 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index "
},
{
"msg_contents": "\n\"Merlin Moncure\" <[email protected]> writes:\n\n> Well, if and when the rowtype comparison can be made to work over multi\n> part keys (and the optimizer is made to do tricks there), postgres can\n> be made to give much better generalized ISAM access. In the meantime,\n> I'll just troubleshoot specific cases via application specific behavior\n> as they come up. In any case, many thanks to Greg and Tom for taking\n> the time to pick this apart.\n\nWell I'm not sure whether you caught it, but Tom did come up with a\nwork-around that works with the current infrastructure if all the columns\ninvolved are the same datatype.\n\nYou can create a regular btree index on the expression array[a,b,c] and then\ndo your lookup using array[a,b,c] > array[a1,b1,c1].\n\nThis will only work in 7.4, not previous releases, for several reasons.\n\n-- \ngreg\n\n",
"msg_date": "29 Jul 2004 13:57:55 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
}
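Spelling out the workaround Greg describes, as a sketch assuming a table t with three integer columns a, b and c (all names are placeholders) on PostgreSQL 7.4:

    CREATE INDEX t_abc_idx ON t ((array[a, b, c]));

    -- fetch the record following the current key (1, 5, 42), in key order
    SELECT *
    FROM t
    WHERE array[a, b, c] > array[1, 5, 42]
    ORDER BY array[a, b, c]
    LIMIT 1;

The ORDER BY on the same expression is what lets the planner walk t_abc_idx in key order instead of sorting; as noted, the columns all have to share a datatype for the array constructor to work.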
] |
[
{
"msg_contents": "Greg Stark wrote:\n> Well I'm not sure whether you caught it, but Tom did come up with a\n> work-around that works with the current infrastructure if all the\ncolumns\n> involved are the same datatype.\n> \n> You can create a regular btree index on the expression array[a,b,c]\nand\n> then\n> do your lookup using array[a,b,c] > array[a1,b1,c1].\n\nUnfortunately, ISAM files allow keys based on combinations of fields on\nany type. So this is not an option. (I have spent over 6 months\nresearching this problem).\n\nHowever, this would work:\nCreate index on t(stackparam(array[a::text,b::text,c::text),\narray['char(2)', 'int', 'date')];\n\nWith the 'type strings' queried out in advance. stackparam(text[],\ntext[]) is a C function with uses the types and cats the strings\ntogether in such a way that preserves sorting. In any case, this is an\nugly and inefficient mess, and I have no desire to do this unless there\nis no other way. I would much rather see postgres 'get' (a,b,c) > (a1,\nb1, c1)...if there is even a chance this is possible, I'll direct my\nefforts there. IMNSHO, this form was invented by the SQL folks for\ndealing with data in an ISAM manner, postgres should be able do it and\ndo it well.\n\nMerlin\n",
"msg_date": "Thu, 29 Jul 2004 14:23:15 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> I would much rather see postgres 'get' (a,b,c) > (a1,\n> b1, c1)...if there is even a chance this is possible, I'll direct my\n> efforts there.\n\nFor the ISAM context this whole discussion is kinda moot, because you\nreally don't want to have to plan and execute a fairly expensive query\nfor every record fetch. If we had all the improvements discussed, it\nwould be a reasonably cheap operation by the standards of \"execute\nan independent query\", but by the standards of \"fetch next record\nin my query\" it'll still suck. (Using PREPARE might help, but only\nsomewhat.)\n\nIt strikes me that what you really want for ISAM is to improve the\ncursor mechanism so it will do the things you need. I'm not sure\nwhat's involved, but let's talk about that angle for a bit.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 29 Jul 2004 14:41:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index "
},
{
"msg_contents": "\n\"Merlin Moncure\" <[email protected]> writes:\n\n> However, this would work:\n> Create index on t(stackparam(array[a::text,b::text,c::text),\n> array['char(2)', 'int', 'date')];\n\nWell, I fear not all datatypes sort properly when treated as text. Notably\nintegers don't. \"10\" sorts before \"2\" for example. You could probably deal\nwith this with careful attention to each datatype you're converting if you're\ninterested in going to that length.\n\n-- \ngreg\n\n",
"msg_date": "29 Jul 2004 14:49:34 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: best way to fetch next/prev record based on index"
}
] |
[
{
"msg_contents": "> \"Merlin Moncure\" <[email protected]> writes:\n> > I would much rather see postgres 'get' (a,b,c) > (a1,\n> > b1, c1)...if there is even a chance this is possible, I'll direct my\n> > efforts there.\n> \n> For the ISAM context this whole discussion is kinda moot, because you\n> really don't want to have to plan and execute a fairly expensive query\n> for every record fetch. If we had all the improvements discussed, it\n> would be a reasonably cheap operation by the standards of \"execute\n> an independent query\", but by the standards of \"fetch next record\n> in my query\" it'll still suck. (Using PREPARE might help, but only\n> somewhat.)\n> \n> It strikes me that what you really want for ISAM is to improve the\n> cursor mechanism so it will do the things you need. I'm not sure\n> what's involved, but let's talk about that angle for a bit.\n\nSorry for the long post, but here's an explanation of why I think things\nare better off where they are.\n\nI've created a middleware that translates ISAM file I/O on the fly to\nSQL and uses prepared statements over parse/bind to execute them. This\nis why I was so insistent against scoping prepared statement lifetime to\ntransaction level. \nCursors seem attractive at first but they are a decidedly mixed bag.\nFirst of all, PostgreSQL cursors are insensitive, which absolutely\nprecludes their use. Supposing they weren't though, I'm not so sure I'd\nuse them if it was possible to do the same things via vanilla queries. \n\nIt turns out that prepared statements get the job done. Moving them to\nparse/bind got me a 30% drop in server cpu time and statement execution\ntimes run between .1 and .5 ms for random reads. Sequential reads go\nfrom that fast to slower depending on index performance. So, I don't\nhave performance issues except where the index doesn't deliver. \n\n2000-10000 reads/sec is competitive with commercial ISAM filesystems on\nthe pc (assuming application is not running on the server), and it is\nfar better than any other commercial ISAM emulation I've played with up\nto this point. Don't have concrete #s, but the server can deliver 2-3\ntimes that in concurrency situations, in many cases the application\nperformance is actually network bound...this is all on pc hardware. Of\ncourse, mainframes can do much, much better than this but that's really\noutside the scope of what I'm trying to do.\n\nSo, things run pretty fast as they are, and keeping things running\nthrough queries keeps things generic and flexible. Also, queries\nconsume zero resources on the server except for the time it takes to\nprocess and deal with them. Cursors, OTOH, have to be opened up for\nevery table for every user. Cursors are read-only always, whereas views\ncan be updated with rules. It's worth noting that other commercial ISAM\nemulation systems (for example Acu4GL by AcuCorp) cut queries on the fly\nas well, even when cursor options are available.\n\nIf cursors became sensitive, they would be worth consideration. I've\nnever really had 1000 large ones open at once, be interesting to see how\nthat worked. In ideal object would be a 'pseudo cursor', an insensitive\ncursor shared by multiple users with each having their own record\npointer. This might be better handled via middleware though (this might\nalso give better options handling different locking scenarios).\n\nMerlin\n",
"msg_date": "Thu, 29 Jul 2004 16:11:49 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: best way to fetch next/prev record based on index "
}
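For reference, a minimal sketch of the prepared-statement approach described above, using a hypothetical two-part key (f1 integer, f2 date) on a table t; the statement is planned once per connection and afterwards only executed:

    PREPARE next_rec (integer, date) AS
      SELECT * FROM t
      WHERE f1 > $1 OR (f1 = $1 AND f2 > $2)
      ORDER BY f1, f2
      LIMIT 1;

    EXECUTE next_rec(42, '2004-08-01');

The WHERE clause is the expanded "next key" condition for a two-field key; how much of a multi-column index gets used for it is exactly the limitation discussed earlier in the thread.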
] |
[
{
"msg_contents": "can anyone imagine why my query refuses to use indexes for a \nfoo.val<=bar.val query (but works for foo.val=bar.val foo.val<=1). the \ni_value fields are ints (int4), and selectivity of both of these queries \nis 2/10000. and I'm running pgsql 7.4.3.\n\nAlso, the default_statistics_target is 1000 and I increased the \neffective_cache_size from 1000 to 80000 but it still doesn't work (but \nseems sort of unrelated. there seems to be something fundamentally wrong, \nbecause even if I set enable_seqscan=off and all the other crazy plans it \ntries, it never gets to index scan).\n\nIf anyone has any ideas, please let me know, I've been trying to get this \nworking for a long time now.\n\nthis query produces the correct plan using Index Scan:\n\nINSERT INTO answer\nSELECT foo.cid as pid, bar.cid as sid,count(bar.cid) as cnt, 1\nFROM foo, bar\n WHERE foo.field=bar.field AND\n (\n foo.op='i=i' AND bar.op='i=i' AND bar.i_value=foo.i_value\n )\nGROUP BY foo.cid,bar.cid;\n\n------explain analyze----------\n Subquery Scan \"*SELECT*\" (cost=9.04..9.07 rows=2 width=20) (actual \ntime=1.000..1.000 rows=2 loops=1)\n -> HashAggregate (cost=9.04..9.04 rows=2 width=8) (actual \ntime=1.000..1.000 rows=2 loops=1)\n -> Nested Loop (cost=0.00..9.02 rows=2 width=8) (actual \ntime=1.000..1.000 rows=2 loops=1)\n Join Filter: ((\"outer\".field)::text = \n(\"inner\".field)::text)\n -> Seq Scan on foo (cost=0.00..1.01 rows=1 width=17) \n(actual time=0.000..0.000 rows=1 loops=1)\n Filter: ((op)::text = 'i=i'::text)\n -> Index Scan using bar_index_op_i_value on bar \n(cost=0.00..7.98 rows=2 width=17)\n (actual time=1.000..1.000 rows=2 loops=1)\n Index Cond: (((bar.op)::text = 'i=i'::text) AND \n(bar.i_value = \"outer\".i_value))\n Total runtime: 1.000 ms\n\n-------------------------\n\nthis almost identical query doesn't (with = as <=):\n\nINSERT INTO answer\nSELECT foo.cid as pid, bar.cid as sid,count(bar.cid) as cnt, 5\nFROM foo, bar\n WHERE foo.field=bar.field AND\n (\n foo.op='i<=i' AND bar.op='i=i' AND bar.i_value<=foo.i_value\n )\nGROUP BY foo.cid,bar.cid;\n\n-------------Explain-------\nTable contains 9 rows\nQUERY PLAN\n----------\nSubquery Scan \"*SELECT*\" (cost=385.02..435.03 rows=3334 width=20) (actual \ntime=50.000..50.000 rows=2 loops=1)\n -> HashAggregate (cost=385.02..393.35 rows=3334 width=8) (actual \ntime=50.000..50.000 rows=2 loops=1)\n -> Nested Loop (cost=0.00..360.01 rows=3334 width=8) (actual \ntime=9.000..50.000 rows=2 loops=1)\n Join Filter: (((\"outer\".field)::text = \n(\"inner\".field)::text) AND (\"inner\".i_value <= \"outer\".i_value))\n -> Seq Scan on foo (cost=0.00..1.01 rows=1 width=17) \n(actual time=0.000..0.000 rows=1 loops\n=1)\n Filter: ((op)::text = 'i<=i'::text)\n -> Seq Scan on bar (cost=0.00..209.00 rows=10000 width=17) \n(actual time=0.000..29.000 rows=\n10000 loops=1)\n Filter: ((op)::text = 'i=i'::text)\nTotal runtime: 51.000 ms\n\n----------------------------\n\nThese queries both are on tables foo with 10,000 values (with i_value \nvalues from 1-5000) and\none single entry in bar (with i_value=1)\n\nIf table bar has more than 1 entry, it resorts to a merge join, why can't \nI get this to use Index Scan also?\n\nThis is what happens if bar has like 20 values (1-20):\n\nINSERT INTO answer\nSELECT foo.cid as pid, bar.cid as sid,count(bar.cid) as cnt, 5\nFROM foo, bar\n WHERE foo.field=bar.field AND\n (\n foo.op='i<=i' AND bar.op='i=i' AND bar.i_value<=foo.i_value\n )\nGROUP BY foo.cid,bar.cid;\n\n-------------Explain-------\nTable contains 8 rows\nQUERY 
PLAN\n----------\nSubquery Scan \"*SELECT*\" (cost=2.74..3.04 rows=20 width=20) (actual \ntime=0.000..0.000 rows=20 loops=1)\n -> HashAggregate (cost=2.74..2.79 rows=20 width=8) (actual \ntime=0.000..0.000 rows=20 loops=1)\n -> Merge Join (cost=1.29..2.59 rows=20 width=8) (actual \ntime=0.000..0.000 rows=20 loops=1)\n Merge Cond: (\"outer\".i_value = \"inner\".i_value)\n Join Filter: ((\"inner\".field)::text = (\"outer\".field)::text)\n -> Index Scan using bar_index on bar (cost=0.00..500.98 \nrows=10000 width=17) (actu\nal time=0.000..0.000 rows=21 loops=1)\n Filter: ((op)::text = 'i=i'::text)\n -> Sort (cost=1.29..1.32 rows=10 width=17) (actual \ntime=0.000..0.000 rows=19 loops=1)\n Sort Key: foo.i_value\n -> Seq Scan on foo (cost=0.00..1.12 rows=10 width=17) \n(actual time=0.000..0.000 rows=\n10 loops=1)\n Filter: ((op)::text = 'i=i'::text)\nTotal runtime: 4.000 ms\n\nThanks!\n\n--Kris\n\n\"Love is like Pi: natural, irrational, and very important.\"\n -- Lisa Hoffman\n\n",
"msg_date": "Thu, 29 Jul 2004 17:04:51 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Index works with foo.val=bar.val but not foo.val<=bar.val"
}
] |
[
{
"msg_contents": "Stephane wrote:\nHi everyone,\n \nsomebody can help me??????? my boss want to migrate to ORACLE................\n\n#fsync = true \n[snip]\n\nAre you using battery baked RAID? \n\nYour problem is probably due to the database syncing all the time. With fsync one, you get 1 sync per transaction that updates, deletes, etc. 4 million writes/day = 46 writes/sec avg. Of course, writes will be very bursty, and when you get over 100 you are going to have problems even on 15k system. All databases have this problem, including Oracle. Keeping WAL and data on separate volumes helps a lot. It's quicker and easier to use hardware solution tho.\n\nIf you want to run fsync on with that much I/O, consider using a battery backed raid controller that caches writes. This will make a *big* difference. A quick'n'dirty test is to turn fsync off for a little while to see if this fixes your performance problems.\n\nMerlin\n\n\n",
"msg_date": "Fri, 30 Jul 2004 10:13:20 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: my boss want to migrate to ORACLE"
}
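If you do try the quick-and-dirty test mentioned above, fsync is a postgresql.conf setting rather than a per-session SET; a sketch (test systems only -- with fsync off, a crash or power loss can corrupt the database):

    # postgresql.conf
    fsync = false

    pg_ctl reload    # depending on the release a full restart may be required

If throughput jumps with fsync off, that points straight at WAL sync cost, which is what a battery-backed write cache addresses while keeping fsync on.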
] |
[
{
"msg_contents": "Stan Bielski <[email protected]> writes:\n> On Thu, 29 Jul 2004, Tom Lane wrote:\n>> Are you sure the join condition is hashjoinable? You didn't say\n>> anything about the datatypes involved ...\n\n> My apologies. The columns that I want to join are both type 'inet'.\n> Shouldn't that be hashjoinable? \n\nDepends on your PG version. The raw type isn't hashjoinable, because\nits '=' operator ignores the inet-vs-cidr flag. Before 7.4 the operator\nwas (correctly) marked not hashjoinable. In 7.4 it was (incorrectly)\nmarked hashjoinable, due no doubt to momentary brain fade on my part.\nFor 7.5 it is hashjoinable and the join will actually work, because we\nadded a specialized hash function that also ignores the inet-vs-cidr flag.\n\nIf you are joining data that is all inet or all cidr (no mixtures),\nthen 7.4 works okay, which is why we didn't notice the bug right away.\nIf that's good enough for now, you could emulate the 7.4 behavior in\nearlier releases by setting the oprcanhash flag in pg_operator for the\ninet equality operator.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 30 Jul 2004 12:49:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer refuses to hash join "
}
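Spelled out, the catalog tweak Tom describes would look something like the following (superuser only, and editing system catalogs is at your own risk; the regtype cast assumes 7.3 or later -- on 7.2 you would look the type OIDs up in pg_type instead):

    UPDATE pg_operator
    SET oprcanhash = true
    WHERE oprname = '='
      AND oprleft  = 'inet'::regtype
      AND oprright = 'inet'::regtype;

As noted above, this only behaves correctly if the joined columns are all inet or all cidr, never a mixture.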
] |
[
{
"msg_contents": "\nTo the person who was looking for a $5k midlevel SSD drive (sorry, I hit 'd'\ntoo fast):\n\n http://www.tigicorp.com/tigijet_exp_s.htm\n\nI found this via this interesting survey of SSD products:\n\n http://www.storagesearch.com/ssd-buyers-guide.html\n\nIncidentally it seems the popular Platypus SSD PCI cards are no more, Platypus\nappears to have gone out of business.\n\n-- \ngreg\n\n",
"msg_date": "01 Aug 2004 07:42:33 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": true,
"msg_subject": "SSD Drives"
}
] |
[
{
"msg_contents": "\n\nHi, i would like to answer if there is any way in postgres to find the\npage miss hits caused during a query execution.\n\n\nIs there something like explain analyze with the page miss hits???\n",
"msg_date": "Mon, 2 Aug 2004 11:11:43 +0300 (EET DST)",
"msg_from": "Ioannis Theoharis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Page Miss Hits"
},
{
"msg_contents": "On Mon, 2004-08-02 at 02:11, Ioannis Theoharis wrote:\n> Hi, i would like to answer if there is any way in postgres to find the\n> page miss hits caused during a query execution.\n> \n> \n> Is there something like explain analyze with the page miss hits???\n\nYou're making a basic assumption that is (at least currently) untrue,\nand that is that PostgreSQL has it's own cache. It doesn't. It has a\nbuffer that drops buffer back into the free pool when the last\nreferencing backend concludes and shuts down. So, PostgreSQL currently\nrelies on the kernel to cache for it. So, what you need is a tool that\nmonitors the kernel cache usage and its hit rate. I'm not familiar with\nany, but I'm sure something out there likely does that.\n\n",
"msg_date": "Mon, 02 Aug 2004 10:19:27 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Page Miss Hits"
},
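One thing PostgreSQL itself can report -- separate from the kernel-cache question above -- is hits and misses against its own shared buffers, via the block-level statistics views; a sketch (requires stats_block_level = true in postgresql.conf, and note that a "read" here may still be satisfied by the OS cache):

    SELECT relname,
           heap_blks_read,   -- blocks not found in shared buffers
           heap_blks_hit,    -- blocks found in shared buffers
           idx_blks_read,
           idx_blks_hit
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC;

This is per table rather than per query, so it is only an approximation of the per-query "page miss hits" the original poster asked about.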
{
"msg_contents": "Scott Marlowe wrote:\n> On Mon, 2004-08-02 at 02:11, Ioannis Theoharis wrote:\n> \n>>Hi, i would like to answer if there is any way in postgres to find the\n>>page miss hits caused during a query execution.\n>>\n>>\n>>Is there something like explain analyze with the page miss hits???\n> \n> \n> You're making a basic assumption that is (at least currently) untrue,\n> and that is that PostgreSQL has it's own cache.\n\nAre you sure of this ? What is the meaning of the ARC recently introduced\nthen ?\n\n\n\nRegards\nGaetano Mendola\n\n",
"msg_date": "Mon, 02 Aug 2004 18:43:32 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Page Miss Hits"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nScott Marlowe wrote:\n| On Mon, 2004-08-02 at 10:43, Gaetano Mendola wrote:\n|\n|>Scott Marlowe wrote:\n|>\n|>>On Mon, 2004-08-02 at 02:11, Ioannis Theoharis wrote:\n|>>\n|>>\n|>>>Hi, i would like to answer if there is any way in postgres to find the\n|>>>page miss hits caused during a query execution.\n|>>>\n|>>>\n|>>>Is there something like explain analyze with the page miss hits???\n|>>\n|>>\n|>>You're making a basic assumption that is (at least currently) untrue,\n|>>and that is that PostgreSQL has it's own cache.\n|>\n|>Are you sure of this ? What is the meaning of the ARC recently introduced\n|>then ?\n|\n|\n| Yes I am. Test it yourself, setup a couple of backends, select * from\n| some big tables, then, one at a time, shut down the psql clients and\n| when the last one closes, the shared mem goes away. Run another client,\n| do select * from the big table, and watch the client size grow from a\n| few meg to a size large enough to hold the whole table (or however much\n| your shared_buffers will hold.)\n|\n| While someone may make ARC and the shared buffers act like a cache some\n| day (can't be that hard, most of the work is done really) right now it's\n| not how it works.\n|\n| ARC still helps, since it makes sure the shared_buffers don't all get\n| flushed from the useful small datasets when a seq scan gets executed.\n\nI'm still not convinced. Why the last backend alive, have to throw away\nbunch of memory copied in the SHM? And again, the ARC is a replacement\npolicy for a cache, which one ?\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBDqkL7UpzwH2SGd4RAsQFAKCWVpCXKgRfE1nc44ZmtEaIrtNaIQCgr4fd\nHx2NiuRzV0UQ3Na9g/zQbzE=\n=XWua\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Mon, 02 Aug 2004 22:50:20 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Page Miss Hits"
},
{
"msg_contents": "> | ARC still helps, since it makes sure the shared_buffers don't all get\n> | flushed from the useful small datasets when a seq scan gets executed.\n> \n> I'm still not convinced. Why the last backend alive, have to throw away\n> bunch of memory copied in the SHM? And again, the ARC is a replacement\n> policy for a cache, which one ?\n\nAs you know, ARC is a recent addition. I've not seen any benchmarks\ndemonstrating that the optimal SHARED_BUFFERS setting is different today\nthan it was in the past.\n\nWe know it's changed, but the old buffer strategy had an equally hard\ntime with a small buffer as it did a large one. Does that mean the\nmiddle of the curve is still at 15k buffers but the extremes are handled\nbetter? Or something completely different?\n\nPlease feel free to benchmark 7.5 (OSDL folks should be able to help us\nas well) and report back.\n\n",
"msg_date": "Mon, 02 Aug 2004 17:26:16 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Page Miss Hits"
},
{
"msg_contents": "Rod Taylor wrote:\n>>| ARC still helps, since it makes sure the shared_buffers don't all get\n>>| flushed from the useful small datasets when a seq scan gets executed.\n>>\n>>I'm still not convinced. Why the last backend alive, have to throw away\n>>bunch of memory copied in the SHM? And again, the ARC is a replacement\n>>policy for a cache, which one ?\n> \n> \n> As you know, ARC is a recent addition. I've not seen any benchmarks\n> demonstrating that the optimal SHARED_BUFFERS setting is different today\n> than it was in the past.\n> \n> We know it's changed, but the old buffer strategy had an equally hard\n> time with a small buffer as it did a large one. Does that mean the\n> middle of the curve is still at 15k buffers but the extremes are handled\n> better? Or something completely different?\n> \n> Please feel free to benchmark 7.5 (OSDL folks should be able to help us\n> as well) and report back.\n\nI know, I know.\n\nWe were discussing about the fact that postgres use a his own cache or not;\nand for the OP pleasure then if is possible retrieve hit and miss information\nfrom that cache.\n\nFor benchmarch may be is better that you look not at the particular implementation\ndone in postgresql but at the general improvements that the ARC replacement\npolicy introduce. If I'm not wrong till now postgres was using an LRU,\naround you can find some articles like these:\n\nhttp://www.almaden.ibm.com/StorageSystems/autonomic_storage/ARC/rj10284.pdf\nhttp://www.almaden.ibm.com/cs/people/dmodha/arcfast.pdf\n\nwhere are showns the improvements.\n\nAs you wrote no one did benchmarks on demostrating with the \"brute force\" that\nARC is better but on the paper should be.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 03 Aug 2004 00:13:56 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Page Miss Hits"
}
] |
[
{
"msg_contents": "We have a \"companies\" and a \"contacts\" table with about 3000 records\neach.\n\nWe run the following SQL-Command which runs about 2 MINUTES !:\n\nSELECT count(*) FROM contacts LEFT JOIN companies ON contacts.sid =\ncompanies.intfield01\n\ncontacts.sid (type text, b-tree index on it)\ncompanies.intfield01 (type bigint, b-tree index on it)\n\ncomfire=> explain analyze SELECT count(*) FROM prg_contacts LEFT JOIN\nprg_addresses ON prg_contacts.sid=prg_addresses.intfield01;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=495261.02..495261.02 rows=1 width=15) (actual\ntime=40939.38..40939.38 rows=1 loops=1)\n -> Nested Loop (cost=0.00..495253.81 rows=2885 width=15) (actual\ntime=0.05..40930.14 rows=2866 loops=1)\n\t-> Seq Scan on prg_contacts (cost=0.00..80.66 rows=2866\nwidth=7) (actual time=0.01..18.10 rows=2866 loops=1)\n\t-> Seq Scan on prg_addresses (cost=0.00..131.51 rows=2751\nwidth=8) (actual time=0.03..6.25 rows=2751 loops=2866)\nTotal runtime: 40939.52 msec\n\nEXPLAIN\n\nNote:\n- We need the left join because we need all contacts even if they are\nnot assigned to a company\n- We are not able to change the datatypes of the joined fields\nbecause we use a standard software (btw who cares: SuSE Open Exchange\nServer)\n- When we use a normal join (without LEFT or a where clause) the SQL\nruns immediately using the indexes\n\nHow can I force the usage of the indexes when using \"left join\". Or\nany other SQL construct that does the same !? Can anybody please give\nus a hint !?\n\nThanks in forward.\n\nGreetings\nAchim\n",
"msg_date": "Mon, 2 Aug 2004 14:08:51 +0200 (CEST)",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "No index usage with \"left join\""
},
{
"msg_contents": "> SELECT count(*) FROM contacts LEFT JOIN companies ON contacts.sid =\n> companies.intfield01\n> \n> contacts.sid (type text, b-tree index on it)\n> companies.intfield01 (type bigint, b-tree index on it)\n<snip>\n> How can I force the usage of the indexes when using \"left join\". Or\n> any other SQL construct that does the same !? Can anybody please give\n> us a hint !?\n\nYou really don't need to use indexes since you're fetching all\ninformation from both tables.\n\nAnyway, we can be fairly sure this isn't PostgreSQL 7.4 (which would\nlikely choose a far better plan -- hash join rather than nested loop) as\nit won't join a bigint to a text field without a cast.\n\nTry this:\n\tset enable_nestloop = false;\n SELECT count(*) FROM contacts LEFT JOIN companies ON\n cast(contacts.sid as bigint) = companies.intfield01;\n\tset enable_nestloop = true;\n\n\n",
"msg_date": "Mon, 02 Aug 2004 08:45:22 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No index usage with \"left join\""
},
{
"msg_contents": "Rod Taylor <[email protected]> writes:\n>> How can I force the usage of the indexes when using \"left join\".\n\n> Anyway, we can be fairly sure this isn't PostgreSQL 7.4 (which would\n> likely choose a far better plan -- hash join rather than nested loop)\n\nIndeed, the lack of any join-condition line in the EXPLAIN output\nimplies it's 7.2 or older. IIRC 7.4 is the first release that is\ncapable of using merge or hash join with a condition more complicated\nthan plain \"Var = Var\". In this case, since the two fields are of\ndifferent datatypes, the planner sees something like \"Var = Var::text\"\n(ie, there's an inserted cast function). 7.2 will just say \"duh, too\ncomplicated for me\" and generate a nestloop. With the columns being\nof different datatypes, you don't even have a chance for an inner\nindexscan in the nestloop.\n\nIn short: change the column datatypes to be the same, or update to\n7.4.something. There are no other solutions.\n\n(Well, if you were really desperate you could create a set of\nmergejoinable \"text op bigint\" comparison operators, and then 7.2\nwould be able to cope; but I should think that updating to 7.4 would\nbe much less work.) \n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 02 Aug 2004 10:03:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No index usage with \"left join\" "
},
{
"msg_contents": "On Mon, 2004-08-02 at 06:08, [email protected] wrote:\n> We have a \"companies\" and a \"contacts\" table with about 3000 records\n> each.\n> \n> We run the following SQL-Command which runs about 2 MINUTES !:\n> \n> SELECT count(*) FROM contacts LEFT JOIN companies ON contacts.sid =\n> companies.intfield01\n> \n> contacts.sid (type text, b-tree index on it)\n> companies.intfield01 (type bigint, b-tree index on it)\n> \n> comfire=> explain analyze SELECT count(*) FROM prg_contacts LEFT JOIN\n> prg_addresses ON prg_contacts.sid=prg_addresses.intfield01;\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=495261.02..495261.02 rows=1 width=15) (actual\n> time=40939.38..40939.38 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..495253.81 rows=2885 width=15) (actual\n> time=0.05..40930.14 rows=2866 loops=1)\n> \t-> Seq Scan on prg_contacts (cost=0.00..80.66 rows=2866\n> width=7) (actual time=0.01..18.10 rows=2866 loops=1)\n> \t-> Seq Scan on prg_addresses (cost=0.00..131.51 rows=2751\n> width=8) (actual time=0.03..6.25 rows=2751 loops=2866)\n> Total runtime: 40939.52 msec\n> \n> EXPLAIN\n> \n> Note:\n> - We need the left join because we need all contacts even if they are\n> not assigned to a company\n> - We are not able to change the datatypes of the joined fields\n> because we use a standard software (btw who cares: SuSE Open Exchange\n> Server)\n> - When we use a normal join (without LEFT or a where clause) the SQL\n> runs immediately using the indexes\n> \n> How can I force the usage of the indexes when using \"left join\". Or\n> any other SQL construct that does the same !? Can anybody please give\n> us a hint !?\n\nWhy in the world would the database use the index in this case? You're\nretrieving every single row, so it may as well hit the data store\ndirectly. By the way, unlike many other databases that can just hit the\nindex, PostgreSQL always has to go back to the data store anyway to get\nthe real value, so if it's gonna hit more than some small percentage of\nrows, it's usually a win to just seq scan it. Try restricting your\nquery with a where clause to one or two rows and see what you get.\n\n",
"msg_date": "Mon, 02 Aug 2004 10:08:49 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No index usage with \"left join\""
}
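To make that suggestion concrete, compare the plans with and without a selective restriction (the literal is just an example value):

    EXPLAIN ANALYZE SELECT * FROM contacts WHERE sid = '12345';

With a WHERE clause that touches a handful of rows, the b-tree index on sid should be chosen; for an unrestricted count(*) over every row, the sequential scan genuinely is the cheaper plan.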
] |
[
{
"msg_contents": "Cannot you do a cast in your query? Does that help with using the indexes?\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of\[email protected]\nSent: maandag 2 augustus 2004 14:09\nTo: [email protected]\nSubject: [PERFORM] No index usage with \"left join\"\n\n\nWe have a \"companies\" and a \"contacts\" table with about 3000 records\neach.\n\nWe run the following SQL-Command which runs about 2 MINUTES !:\n\nSELECT count(*) FROM contacts LEFT JOIN companies ON contacts.sid =\ncompanies.intfield01\n\ncontacts.sid (type text, b-tree index on it)\ncompanies.intfield01 (type bigint, b-tree index on it)\n\ncomfire=> explain analyze SELECT count(*) FROM prg_contacts LEFT JOIN\nprg_addresses ON prg_contacts.sid=prg_addresses.intfield01;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=495261.02..495261.02 rows=1 width=15) (actual\ntime=40939.38..40939.38 rows=1 loops=1)\n -> Nested Loop (cost=0.00..495253.81 rows=2885 width=15) (actual\ntime=0.05..40930.14 rows=2866 loops=1)\n\t-> Seq Scan on prg_contacts (cost=0.00..80.66 rows=2866\nwidth=7) (actual time=0.01..18.10 rows=2866 loops=1)\n\t-> Seq Scan on prg_addresses (cost=0.00..131.51 rows=2751\nwidth=8) (actual time=0.03..6.25 rows=2751 loops=2866)\nTotal runtime: 40939.52 msec\n\nEXPLAIN\n\nNote:\n- We need the left join because we need all contacts even if they are\nnot assigned to a company\n- We are not able to change the datatypes of the joined fields\nbecause we use a standard software (btw who cares: SuSE Open Exchange\nServer)\n- When we use a normal join (without LEFT or a where clause) the SQL\nruns immediately using the indexes\n\nHow can I force the usage of the indexes when using \"left join\". Or\nany other SQL construct that does the same !? Can anybody please give\nus a hint !?\n\nThanks in forward.\n\nGreetings\nAchim\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n",
"msg_date": "Mon, 2 Aug 2004 14:16:41 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: No index usage with \"left join\""
}
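A sketch of the cast Tim is asking about, written against the table names from the original post. This assumes every contacts.sid value really is numeric and that the server accepts this cast spelling; with matching types on both sides of the join condition, the index on companies.intfield01 at least becomes a candidate:

    SELECT count(*)
    FROM contacts
    LEFT JOIN companies ON contacts.sid::bigint = companies.intfield01;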
] |
[
{
"msg_contents": "Hi all,\n\nMy system is a PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 20020903 (Red Hat Linux 8.0 3.2-7). It has a Pentium III-733 Mhz with 512 MB ram. It is connected to my workststation (dual XEON 1700 with 1 Gb RAM) with a 100 Mb switched network.\n\nI have a table with 31 columns, all fixed size datatypes. It contains 88393 rows. Doing a \"select * from table\" with PGAdmin III in it's SQL window, it takes a total of 9206 ms query runtime an a 40638 ms data retrievel runtime.\n\nIs this a reasonable time to get 88393 rows from the database?\n\nIf not, what can I do to find the bottleneck (and eventually make it faster)?\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n",
"msg_date": "Mon, 2 Aug 2004 14:21:31 +0200",
"msg_from": "\"Joost Kraaijeveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "What kind of performace can I expect and how to measure?"
},
{
"msg_contents": "On Mon, 2004-08-02 at 06:21, Joost Kraaijeveld wrote:\n> Hi all,\n> \n> My system is a PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 20020903 (Red Hat Linux 8.0 3.2-7). It has a Pentium III-733 Mhz with 512 MB ram. It is connected to my workststation (dual XEON 1700 with 1 Gb RAM) with a 100 Mb switched network.\n> \n> I have a table with 31 columns, all fixed size datatypes. It contains 88393 rows. Doing a \"select * from table\" with PGAdmin III in it's SQL window, it takes a total of 9206 ms query runtime an a 40638 ms data retrievel runtime.\n\nThis means it took the backend about 9 seconds to prepare the data, and\n40 or so seconds total (including the 9 I believe) for the client to\nretrieve and then display it.\n\n> Is this a reasonable time to get 88393 rows from the database?\n\nDepends on your row size really. I'm certain you're not CPU bound if\nyou've only got one hard drive. Put that data on a 20 way RAID5 array\nand I'm sure it would come back a little quicker.\n\n> If not, what can I do to find the bottleneck (and eventually make it faster)?\n\nThe bottleneck is almost always IO to start with. First, as another\ndrive and mirror it. Then go to RAID 1+0, then add more and more\ndrives.\n\nRead this document about performance tuning:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n\n\n",
"msg_date": "Mon, 02 Aug 2004 10:11:46 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What kind of performace can I expect and how to"
}
] |
[
{
"msg_contents": "Joost wrote:\n> My system is a PostgreSQL 7.4.1 on i686-pc-linux-gnu, compiled by GCC\ngcc\n> (GCC) 20020903 (Red Hat Linux 8.0 3.2-7). It has a Pentium III-733 Mhz\n> with 512 MB ram. It is connected to my workststation (dual XEON 1700\nwith\n> 1 Gb RAM) with a 100 Mb switched network.\n> \n> I have a table with 31 columns, all fixed size datatypes. It contains\n> 88393 rows. Doing a \"select * from table\" with PGAdmin III in it's SQL\n> window, it takes a total of 9206 ms query runtime an a 40638 ms data\n> retrievel runtime.\n> \n> Is this a reasonable time to get 88393 rows from the database?\n> \n> If not, what can I do to find the bottleneck (and eventually make it\n> faster)?\n\nThe 9206 ms time is what the database actually spent gathering the data\nand sending it to you. This is non-negotiable unless you bump up\nhardware, etc, or fetch less data. This time usually scales linearly\n(or close to it) with the size of the dataset you fetch.\n\nThe 40638 ms time is pgAdmin putting the data in the grid. This time\nspent here is dependant on your client and starts to get really nasty\nwith large tables. Future versions of pgAdmin might be able to deal\nbetter with large datasets (cursor based fetch is one proposed\nsolution). In the meantime, I would suggest using queries to refine\nyour terms a little bit...(do you really need to view all 80k records at\nonce?).\n\nMerlin\n\n",
"msg_date": "Mon, 2 Aug 2004 08:33:34 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What kind of performace can I expect and how to measure?"
}
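The cursor-based fetch Merlin mentions as a proposed pgAdmin feature can already be simulated by hand; a rough sketch, with my_big_table standing in for the real table name:

    BEGIN;
    DECLARE grid_cur CURSOR FOR SELECT * FROM my_big_table;
    FETCH FORWARD 1000 FROM grid_cur;   -- first screenful
    FETCH FORWARD 1000 FROM grid_cur;   -- next chunk, repeated as needed
    CLOSE grid_cur;
    COMMIT;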
] |
[
{
"msg_contents": " TIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\nGreetz,\nGuido\n\n> Cannot you do a cast in your query? Does that help with using the indexes?\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of\n> [email protected]\n> Sent: maandag 2 augustus 2004 14:09\n> To: [email protected]\n> Subject: [PERFORM] No index usage with \"left join\"\n> \n> \n> We have a \"companies\" and a \"contacts\" table with about 3000 records\n> each.\n> \n> We run the following SQL-Command which runs about 2 MINUTES !:\n> \n> SELECT count(*) FROM contacts LEFT JOIN companies ON contacts.sid =\n> companies.intfield01\n> \n> contacts.sid (type text, b-tree index on it)\n> companies.intfield01 (type bigint, b-tree index on it)\n> \n> comfire=> explain analyze SELECT count(*) FROM prg_contacts LEFT JOIN\n> prg_addresses ON prg_contacts.sid=prg_addresses.intfield01;\n> NOTICE: QUERY PLAN:\n> \n> Aggregate (cost=495261.02..495261.02 rows=1 width=15) (actual\n> time=40939.38..40939.38 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..495253.81 rows=2885 width=15) (actual\n> time=0.05..40930.14 rows=2866 loops=1)\n> \t-> Seq Scan on prg_contacts (cost=0.00..80.66 rows=2866\n> width=7) (actual time=0.01..18.10 rows=2866 loops=1)\n> \t-> Seq Scan on prg_addresses (cost=0.00..131.51 rows=2751\n> width=8) (actual time=0.03..6.25 rows=2751 loops=2866)\n> Total runtime: 40939.52 msec\n> \n> EXPLAIN\n> \n> Note:\n> - We need the left join because we need all contacts even if they are\n> not assigned to a company\n> - We are not able to change the datatypes of the joined fields\n> because we use a standard software (btw who cares: SuSE Open Exchange\n> Server)\n> - When we use a normal join (without LEFT or a where clause) the SQL\n> runs immediately using the indexes\n> \n> How can I force the usage of the indexes when using \"left join\". Or\n> any other SQL construct that does the same !? Can anybody please give\n> us a hint !?\n> \n> Thanks in forward.\n> \n> Greetings\n> Achim\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n",
"msg_date": "Mon, 2 Aug 2004 09:38:42 -0300 (GMT+3)",
"msg_from": "G u i d o B a r o s i o <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: No index usage with"
},
{
"msg_contents": "G u i d o B a r o s i o wrote:\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\nAnd this is fixed in 7.5/8.0.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 2 Aug 2004 08:47:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No index usage with"
}
] |
[
{
"msg_contents": "Hi Merlin,\n\n> The 9206 ms time is what the database actually spent \n> gathering the data and sending it to you. This is non-negotiable unless you bump up\n> hardware, etc, or fetch less data. This time usually scales linearly\n> (or close to it) with the size of the dataset you fetch.\n>\n> The 40638 ms time is pgAdmin putting the data in the grid. This time\nSo it take PostgreSQL 9206 ms to get the data AND send it to the client. It than takes PGAdmin 40638 ms to display the data?\n\n> solution). In the meantime, I would suggest using queries to refine\n> your terms a little bit...(do you really need to view all 80k \n> records at once?).\nThe application is build in Clarion, a 4 GL environment. We do not have any influence over the query it generates and executes.\n\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n",
"msg_date": "Mon, 2 Aug 2004 14:47:29 +0200",
"msg_from": "\"Joost Kraaijeveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What kind of performace can I expect and how to measure?"
}
] |
[
{
"msg_contents": "> Hi Merlin,\n> \n> > The 9206 ms time is what the database actually spent\n> > gathering the data and sending it to you. This is non-negotiable\nunless\n> you bump up\n> > hardware, etc, or fetch less data. This time usually scales\nlinearly\n> > (or close to it) with the size of the dataset you fetch.\n> >\n> > The 40638 ms time is pgAdmin putting the data in the grid. This\ntime\n> So it take PostgreSQL 9206 ms to get the data AND send it to the\nclient.\n> It than takes PGAdmin 40638 ms to display the data?\n\nThat is correct. This is not a problem with pgAdmin, or postgres, but a\nproblem with grids. Conceptually, SQL tables are an in an unordered,\ninfinite space and grids require an ordered, finite space. All 4GLs and\ndata managers have this problem. The real solution is to refine your\nquery in a meaningful way (80k rows is more than a human being can deal\nwith in a practical sense). If you can't do that, install an arbitrary\nlimit on the result set where performance breaks down, could be 10-100k\ndepending on various factors.\n\nTo simulate a finite, ordered, dataset, pgAdmin takes all the result\ndata and puts it in GUI controls are not designed to hold 100k rows\ndata...this is a design compromise to allow editing.\n\nMerlin\n\n\n\n\n\n",
"msg_date": "Mon, 2 Aug 2004 10:14:33 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What kind of performace can I expect and how to measure?"
}
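One possible way to impose the arbitrary cap Merlin describes, with placeholder table and column names; ORDER BY plus LIMIT/OFFSET also gives the grid a stable paging order:

    SELECT * FROM my_big_table ORDER BY id LIMIT 10000;
    SELECT * FROM my_big_table ORDER BY id LIMIT 10000 OFFSET 10000;   -- next slice, if really needed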
] |
[
{
"msg_contents": "Thanks I found the same info on the tigi and like what I saw. I also\nspoke with a consulting firm that has used them and also says good\nthings, but they have not tried it with postgres. I will post an\nanalysis of performance once we have the equipment ordered and\ninstalled. \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Greg Stark\nSent: Sunday, August 01, 2004 5:43 AM\nTo: [email protected]\nSubject: [PERFORM] SSD Drives\n\n\nTo the person who was looking for a $5k midlevel SSD drive (sorry, I hit\n'd'\ntoo fast):\n\n http://www.tigicorp.com/tigijet_exp_s.htm\n\nI found this via this interesting survey of SSD products:\n\n http://www.storagesearch.com/ssd-buyers-guide.html\n\nIncidentally it seems the popular Platypus SSD PCI cards are no more,\nPlatypus\nappears to have gone out of business.\n\n-- \ngreg\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n",
"msg_date": "Mon, 2 Aug 2004 09:02:56 -0600",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SSD Drives"
}
] |
[
{
"msg_contents": "Hi\n\nI have 2 tables like this:\nCREATE TABLE query (\n\tquery_id \tint not null,\n\tdat \tvarchar(64) null ,\n\tsub_acc_id \tint null ,\n\tquery_ip \tvarchar(64) null ,\n\tosd_user_type \tvarchar(64) null \n)\n;\n\nCREATE TABLE trans (\n\ttransaction_id \tvarchar(64) not null ,\n\tdate \tvarchar(64) null ,\n\tquery_id \tint not null ,\n\tsub_acc_id \tint null ,\n\treg_acc_id \tint null \n)\n;\n\nCREATE UNIQUE INDEX query_query_id_idx\nON query (query_id)\n;\n\nCREATE INDEX trans_reg_acc_id_idx\nON trans (reg_acc_id)\n;\n\nCREATE INDEX trans_query_id_idx\nON trans(query_id)\n;\nosd=> select count(*) from trans\nosd-> ;\n count\n--------\n 598809\n(1 row)\n \nosd=>\nosd=> select count(*) from query\nosd-> ;\n count\n--------\n 137042\n(1 row)\n\nI just vacuum analyse'd the database. \n\nTrying to run this query:\nEXPLAIN ANALYSE\nselect * FROM trans\nWHERE query_id NOT IN (select query_id FROM query)\n\nbut it will remain like that forever (cancelled after 30 min).\n\nMy postgresql.conf is the default:\n# - Memory -\n \nshared_buffers = 1000 # min 16, at least max_connections*2,\n8KB each\n#sort_mem = 1024 # min 64, size in KB\n#vacuum_mem = 8192 # min 1024, size in KB\n\nShould I adjust something?\n\nUsing postgresql 7.4.2, saw in release notes that IN/NOT IN queries are\nat least as faster than EXISTS.\n\nThank you!\n-- \nMarius Andreiana\nGaluna - Solutii Linux in Romania\nhttp://www.galuna.ro\n\n",
"msg_date": "Tue, 03 Aug 2004 11:49:02 +0300",
"msg_from": "Marius Andreiana <[email protected]>",
"msg_from_op": true,
"msg_subject": "NOT IN query takes forever"
},
{
"msg_contents": "\nOn Tue, 3 Aug 2004, Marius Andreiana wrote:\n\n> I just vacuum analyse'd the database.\n>\n> Trying to run this query:\n> EXPLAIN ANALYSE\n> select * FROM trans\n> WHERE query_id NOT IN (select query_id FROM query)\n>\n> but it will remain like that forever (cancelled after 30 min).\n>\n> My postgresql.conf is the default:\n> # - Memory -\n>\n> shared_buffers = 1000 # min 16, at least max_connections*2,\n> 8KB each\n> #sort_mem = 1024 # min 64, size in KB\n> #vacuum_mem = 8192 # min 1024, size in KB\n>\n> Should I adjust something?\n\nProbably sort_mem. It's probably estimating that it can't hash the result\ninto the 1MB of sort_mem so it's probably falling back to some sort of\nnested execution.\n\n",
"msg_date": "Tue, 3 Aug 2004 07:03:31 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN query takes forever"
}
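One way to test Stephan's theory without touching postgresql.conf: raise sort_mem for the current session only (the 32768 figure is just an example; the unit is KB) and look at the plan again:

    SET sort_mem = 32768;
    EXPLAIN
    SELECT * FROM trans WHERE query_id NOT IN (SELECT query_id FROM query);
    -- with enough sort_mem the filter should show a hashed subplan rather than a plain one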
] |
[
{
"msg_contents": "Hello all.\n \nI am managing a large database with lots of transactions in different\ntables.\nThe largest tables have around 5-6 millions tuples and around 50000-60000\ninserts and maybe 20000 updates pr day.\nWhile the smalest tables have only a few tuples and a few updates /inserts\npr day. In addition we have small tables with many updates/inserts. So what\nI am saying is that there is all kinds of tables and uses of tables in our\ndatabase.\nThis, I think, makes it difficult to set up pg_autovacuum. I am now running\nvacuum jobs on different tables in cron. \n \nWhat things should I consider when setting but base and threshold values in\npg_autovacuum? Since the running of vacuum and analyze is relative to the\ntable size, as it must be, I think it is difficult to cover all tables..\n \nAre there anyone who have some thoughts around this?\n \nRegards\nRune\n \n\n\n\nMelding\n\n\nHello \nall.\n \nI am managing a \nlarge database with lots of transactions in different \ntables.\nThe largest tables \nhave around 5-6 millions tuples and around 50000-60000 inserts and maybe 20000 \nupdates pr day.\nWhile the smalest \ntables have only a few tuples and a few updates /inserts pr day. In addition we \nhave small tables with many updates/inserts. So what I am saying is that there \nis all kinds of tables and uses of tables in our database.\nThis, I think, makes \nit difficult to set up pg_autovacuum. I am now running vacuum jobs on different \ntables in cron. \n \nWhat things should I \nconsider when setting but base and threshold values in pg_autovacuum? Since the \nrunning of vacuum and analyze is relative to the table size, as it must be, I \nthink it is difficult to cover all tables..\n \nAre there anyone who \nhave some thoughts around this?\n \nRegards\nRune",
"msg_date": "Tue, 3 Aug 2004 11:09:26 +0200 ",
"msg_from": "\"Lending, Rune\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_autovacuum parameters"
},
{
"msg_contents": "Lending, Rune wrote:\n\n> Hello all.\n> \n> I am managing a large database with lots of transactions in different \n> tables.\n> The largest tables have around 5-6 millions tuples and around \n> 50000-60000 inserts and maybe 20000 updates pr day.\n> While the smalest tables have only a few tuples and a few updates \n> /inserts pr day. In addition we have small tables with many \n> updates/inserts. So what I am saying is that there is all kinds of \n> tables and uses of tables in our database.\n> This, I think, makes it difficult to set up pg_autovacuum. I am now \n> running vacuum jobs on different tables in cron.\n> \n> What things should I consider when setting but base and threshold values \n> in pg_autovacuum? Since the running of vacuum and analyze is relative to \n> the table size, as it must be, I think it is difficult to cover all tables..\n\nOne of the biggest problems with the version of pg_autovacuum in 7.4 \ncontrib is that you can only specify one set of thresholds, which often \nisn't flexible enough. That said the thresholds are based on table \nsince since you specify both a base value and a scaling factor so \npg_autovacuum -v 1000 -V 1 will vacuum a table with 100 rows every 200 \nupdates, but will vacuum a table with 1,000,000 rows every 1,000,100 \nupdates.\n\n> Are there anyone who have some thoughts around this?\n\nBasically, you should be able to use pg_autovacuum to do most of the \nvacuuming, if there are a few tables that aren't getting vacuumed often \nenough, then you can add a vacuum command to cron for those specific tables.\n\nMatthew\n\n\n",
"msg_date": "Tue, 03 Aug 2004 10:19:32 -0400",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_autovacuum parameters"
},
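A sketch of the mixed setup Matthew describes: let pg_autovacuum cover the general case and add cron entries only for the stragglers. The database and table names below are placeholders, and the schedule is illustrative:

    # crontab entries (illustrative)
    15 * * * *  psql -d mydb -c 'VACUUM ANALYZE hot_little_table'
    45 3 * * *  psql -d mydb -c 'VACUUM ANALYZE big_history_table'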
{
"msg_contents": "Matthew T. O'Connor wrote:\n\n> Lending, Rune wrote:\n> \n>> Hello all.\n>> \n>> I am managing a large database with lots of transactions in different \n>> tables.\n>> The largest tables have around 5-6 millions tuples and around \n>> 50000-60000 inserts and maybe 20000 updates pr day.\n>> While the smalest tables have only a few tuples and a few updates \n>> /inserts pr day. In addition we have small tables with many \n>> updates/inserts. So what I am saying is that there is all kinds of \n>> tables and uses of tables in our database.\n>> This, I think, makes it difficult to set up pg_autovacuum. I am now \n>> running vacuum jobs on different tables in cron.\n>> \n>> What things should I consider when setting but base and threshold \n>> values in pg_autovacuum? Since the running of vacuum and analyze is \n>> relative to the table size, as it must be, I think it is difficult to \n>> cover all tables..\n> \n> \n> One of the biggest problems with the version of pg_autovacuum in 7.4 \n> contrib is that you can only specify one set of thresholds, which often \n> isn't flexible enough. That said the thresholds are based on table \n> since since you specify both a base value and a scaling factor so \n> pg_autovacuum -v 1000 -V 1 will vacuum a table with 100 rows every 200 \n> updates, but will vacuum a table with 1,000,000 rows every 1,000,100 \n> updates.\n> \n>> Are there anyone who have some thoughts around this?\n> \n> \n> Basically, you should be able to use pg_autovacuum to do most of the \n> vacuuming, if there are a few tables that aren't getting vacuumed often \n> enough, then you can add a vacuum command to cron for those specific \n> tables.\n\nAnd in the version 7.5^H^H^H8.0 ( Tom Lane docet :-) ) I think is possible\nspecify that thresholds per table...\n\n\nRegards\nGateano Mendola\n\n\n\n\n",
"msg_date": "Tue, 03 Aug 2004 17:54:44 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_autovacuum parameters"
}
] |
[
{
"msg_contents": "> Trying to run this query:\n> EXPLAIN ANALYSE\n> select * FROM trans\n> WHERE query_id NOT IN (select query_id FROM query)\n> \n> but it will remain like that forever (cancelled after 30 min).\n\nexplain analyze actually runs the query to do timings. Just run explain\nand see what you come up with. More than likely there is a nestloop in\nthere which is causing the long query time.\n\nTry bumping up shared buffers some and sort mem as much as you safely\ncan.\n\nMerlin\n",
"msg_date": "Tue, 3 Aug 2004 08:05:23 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NOT IN query takes forever"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> Try bumping up shared buffers some and sort mem as much as you safely\n> can.\n\nsort_mem is probably the issue here. The only reasonable way to do NOT\nIN is with a hash table, and the default setting of sort_mem is probably\ntoo small to support a 137042-element table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 Aug 2004 10:59:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN query takes forever "
},
{
"msg_contents": "On Tue, 2004-08-03 at 08:05 -0400, Merlin Moncure wrote:\n> > Trying to run this query:\n> > EXPLAIN ANALYSE\n> > select * FROM trans\n> > WHERE query_id NOT IN (select query_id FROM query)\n> > \n> > but it will remain like that forever (cancelled after 30 min).\n> \n> explain analyze actually runs the query to do timings. Just run explain\n> and see what you come up with. More than likely there is a nestloop in\n> there which is causing the long query time.\n> \n> Try bumping up shared buffers some and sort mem as much as you safely\n> can.\nThank you, that did it!\n\nWith\nshared_buffers = 3000\t\t# min 16, at least max_connections*2, 8KB each\nsort_mem = 128000\t\t# min 64, size in KB\n\nit takes <3 seconds (my hardware is not server-class).\n\n-- \nMarius Andreiana\nGaluna - Solutii Linux in Romania\nhttp://www.galuna.ro\n\n",
"msg_date": "Tue, 03 Aug 2004 19:02:42 +0300",
"msg_from": "Marius Andreiana <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN query takes forever"
},
{
"msg_contents": "Marius Andreiana wrote:\n\n> On Tue, 2004-08-03 at 08:05 -0400, Merlin Moncure wrote:\n> \n>>>Trying to run this query:\n>>>EXPLAIN ANALYSE\n>>>select * FROM trans\n>>>WHERE query_id NOT IN (select query_id FROM query)\n>>>\n>>>but it will remain like that forever (cancelled after 30 min).\n>>\n>>explain analyze actually runs the query to do timings. Just run explain\n>>and see what you come up with. More than likely there is a nestloop in\n>>there which is causing the long query time.\n>>\n>>Try bumping up shared buffers some and sort mem as much as you safely\n>>can.\n> \n> Thank you, that did it!\n> \n> With\n> shared_buffers = 3000\t\t# min 16, at least max_connections*2, 8KB each\n> sort_mem = 128000\t\t# min 64, size in KB\n\n128 MB for sort_mem is too much, consider that in this way each backend can\nuse 128 MB for sort operations...\nAlso shared_buffers = 3000 means 24MB that is not balanced with the 128MB\nneeded for sort...\nTry to bump up 128 MB for shared_buffer ( may be you need to instruct your\nOS to allow that ammount of shared memory usage ) and 24MB for sort_mem.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 03 Aug 2004 19:28:27 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN query takes forever"
},
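In concrete terms, Gaetano's suggestion might look like the sketch below. The exact figures are illustrative, and the sysctl line is Linux-specific since the original poster is on Red Hat:

    # postgresql.conf
    shared_buffers = 16384        # 16384 x 8KB buffers = 128 MB
    sort_mem = 24576              # 24 MB per sort operation

    # allow a SysV shared memory segment that big (value in bytes, with some headroom)
    sysctl -w kernel.shmmax=150000000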
{
"msg_contents": "> explain analyze actually runs the query to do timings. Just run explain\n> and see what you come up with. More than likely there is a nestloop in\n> there which is causing the long query time.\n> \n> Try bumping up shared buffers some and sort mem as much as you safely\n> can.\n\nJust use an EXISTS query I suggest.\n\nChris\n",
"msg_date": "Wed, 04 Aug 2004 09:32:53 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN query takes forever"
},
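A sketch of the rewrite Chris means; it returns the same rows as the NOT IN form as long as query.query_id contains no NULLs, and it lets the planner probe the index on query(query_id) row by row instead of building a large in-memory hash:

    SELECT t.*
    FROM trans t
    WHERE NOT EXISTS (SELECT 1 FROM query q WHERE q.query_id = t.query_id);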
{
"msg_contents": "On Tue, 2004-08-03 at 19:28 +0200, Gaetano Mendola wrote:\n> > With\n> > shared_buffers = 3000\t\t# min 16, at least max_connections*2, 8KB each\n> > sort_mem = 128000\t\t# min 64, size in KB\n> \n> 128 MB for sort_mem is too much, consider that in this way each backend can\n> use 128 MB for sort operations...\n> Also shared_buffers = 3000 means 24MB that is not balanced with the 128MB\n> needed for sort...\n> Try to bump up 128 MB for shared_buffer ( may be you need to instruct your\n> OS to allow that ammount of shared memory usage ) and 24MB for sort_mem.\nThanks for the advice. I increased shmmax to allow shared_buffers to be\n128mb and set sort_mem to 24mb.\n\n-- \nMarius Andreiana\nGaluna - Solutii Linux in Romania\nhttp://www.galuna.ro\n\n",
"msg_date": "Wed, 04 Aug 2004 08:40:52 +0300",
"msg_from": "Marius Andreiana <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] NOT IN query takes forever"
}
] |
[
{
"msg_contents": "> > Try bumping up shared buffers some and sort mem as much as you\nsafely\n> > can.\n> Thank you, that did it!\n> \n> With\n> shared_buffers = 3000\t\t# min 16, at least max_connections*2,\n8KB\n> each\n> sort_mem = 128000\t\t# min 64, size in KB\n> \n> it takes <3 seconds (my hardware is not server-class).\n\nBe careful...sort_mem applies to each connection and (IIRC) in some\ncases more than once to a connection. Of all the configuration\nparameters, sort_mem (IMO) is the most important and the hardest to get\nright. 128k (or 128MB) is awfully high unless you have a ton of memory\n(you don't) or you are running in single connection scenarios. Do some\nexperimentation by lowering the value until you get a good balance\nbetween potential memory consumption and speed.\n\nMerlin\n",
"msg_date": "Tue, 3 Aug 2004 12:10:04 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: NOT IN query takes forever"
},
{
"msg_contents": "On Tue, 2004-08-03 at 10:10, Merlin Moncure wrote:\n> > > Try bumping up shared buffers some and sort mem as much as you\n> safely\n> > > can.\n> > Thank you, that did it!\n> > \n> > With\n> > shared_buffers = 3000\t\t# min 16, at least max_connections*2,\n> 8KB\n> > each\n> > sort_mem = 128000\t\t# min 64, size in KB\n> > \n> > it takes <3 seconds (my hardware is not server-class).\n> \n> Be careful...sort_mem applies to each connection and (IIRC) in some\n> cases more than once to a connection. Of all the configuration\n> parameters, sort_mem (IMO) is the most important and the hardest to get\n> right. 128k (or 128MB) is awfully high unless you have a ton of memory\n> (you don't) or you are running in single connection scenarios. Do some\n> experimentation by lowering the value until you get a good balance\n> between potential memory consumption and speed.\n\nMinor nit, sort_mem actually applies to EACH sort individually, so a\nquery that had to run three sorts could use 3 x sort_mem.\n\nNote that one can set sort_mem per backend connection with set\nsort_mem=128000 if need be so as not to use up all the memory with other\nbackends.\n\n",
"msg_date": "Tue, 03 Aug 2004 12:26:28 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: NOT IN query takes forever"
}
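Scott's per-backend approach, spelled out; only the current connection pays for the large setting:

    SET sort_mem = 128000;                          -- this backend only
    SELECT * FROM trans WHERE query_id NOT IN (SELECT query_id FROM query);
    RESET sort_mem;                                 -- back to the configured default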
] |
[
{
"msg_contents": "I run a Perl/CGI driven website that makes extensive use of PostgreSQL \n(7.4.3) for everything from user information to formatting and display \nof specific sections of the site. The server itself, is a dual \nprocessor AMD Opteron 1.4Ghz w/ 2GB Ram and 2 x 120GB hard drives \nmirrored for redundancy running under FreeBSD 5.2.1 (AMD64).\n\nRecently loads on the site have increased during peak hours to the point \nof showing considerable loss in performance. This can be observed \nwhen connections move from the 120 concurrent connections to PostgreSQL \nto roughly 175 or more. Essentially, the machine seems to struggle \nto keep up with continual requests and slows down respectively as \nresources are tied down.\n\nCode changes have been made to the scripts to essentially back off in \nhigh load working environments which have worked to an extent. \nHowever, as loads continue to increase the database itself is not taking \nwell to the increased traffic taking place.\n\nHaving taken a look at 'Tuning PostgreSQL for Performance' \n(http://www.varlena.com/GeneralBits/Tidbits/perf.html) using it as best \nI could in order to set my settings. However, even with statistics \ndisabled and ever setting tweaked things still consider to deteriorate.\n\nIs there anything anyone can recommend in order to give the system a \nnecessary speed boost? It would seem to me that a modest dataset of \nroughly a Gig combined with that type of hardware should be able to \nhandle substantially more load then what it is. Can anyone provide me \nwith clues as where to pursue? Would disabling 'fsync' provide more \nperformance if I choose that information may be lost in case of a crash?\n\nIf anyone needs access to logs, settings et cetera. Please ask, I \nsimply wish to test the waters first on what is needed. Thanks!\n\n\tMartin Foster\n\[email protected]\n\n",
"msg_date": "Tue, 03 Aug 2004 18:05:04 GMT",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Bottleneck"
},
{
"msg_contents": "On Tue, 3 Aug 2004, Martin Foster wrote:\n\n> to roughly 175 or more. Essentially, the machine seems to struggle \n> to keep up with continual requests and slows down respectively as \n> resources are tied down.\n\nI suggest you try to find queries that are slow and check to see if the \nplans are optimal for those queries.\n\nThere are some logging options for logging quries that run longer then a \nuser set limit. That can help finding the slow queries. Just doing some \nlogging for some typical page fetches often show things that can be done \nbetter. For example, it's not uncommon to see the same information beeing \npulled several times by misstake.\n\nMaybe you can also try something like connection pooling. I'm not sure how\nmuch that can give, but for small queries the connection time is usually\nthe big part.\n\n> Would disabling 'fsync' provide more performance if I choose that\n> information may be lost in case of a crash?\n\nI would not do that. In most cases the performance increase is modest and\nthe data corruption risk after a crash is much bigger so it's not worth\nit.\n\nIf you have a lot of small inserts then it might be faster with this, but\nif possible it's much better to try to do more work in a transaction then \nbefore.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Tue, 3 Aug 2004 20:35:36 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
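The logging option Dennis refers to is available in 7.4; a minimal postgresql.conf sketch, with the 500 ms threshold picked arbitrarily:

    log_min_duration_statement = 500   # log any statement that runs longer than 500 ms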
{
"msg_contents": "Martin Foster wrote:\n\n> I run a Perl/CGI driven website that makes extensive use of PostgreSQL \n> (7.4.3) for everything from user information to formatting and display \n> of specific sections of the site. The server itself, is a dual \n> processor AMD Opteron 1.4Ghz w/ 2GB Ram and 2 x 120GB hard drives \n> mirrored for redundancy running under FreeBSD 5.2.1 (AMD64).\n> \n> Recently loads on the site have increased during peak hours to the point \n> of showing considerable loss in performance. This can be observed \n> when connections move from the 120 concurrent connections to PostgreSQL \n> to roughly 175 or more. Essentially, the machine seems to struggle \n> to keep up with continual requests and slows down respectively as \n> resources are tied down.\n> \n> Code changes have been made to the scripts to essentially back off in \n> high load working environments which have worked to an extent. However, \n> as loads continue to increase the database itself is not taking well to \n> the increased traffic taking place.\n> \n> Having taken a look at 'Tuning PostgreSQL for Performance' \n> (http://www.varlena.com/GeneralBits/Tidbits/perf.html) using it as best \n> I could in order to set my settings. However, even with statistics \n> disabled and ever setting tweaked things still consider to deteriorate.\n> \n> Is there anything anyone can recommend in order to give the system a \n> necessary speed boost? It would seem to me that a modest dataset of \n> roughly a Gig combined with that type of hardware should be able to \n> handle substantially more load then what it is. Can anyone provide me \n> with clues as where to pursue? Would disabling 'fsync' provide more \n> performance if I choose that information may be lost in case of a crash?\n> \n> If anyone needs access to logs, settings et cetera. Please ask, I \n> simply wish to test the waters first on what is needed. Thanks!\n\nTell us about your tipical queries, show us your configuration file.\nThe access are only in read only mode or do you have concurrent writers\nand readers ? During peak hours your processors are tied to 100% ?\nWhat say the vmstat and the iostat ?\n\nMay be you are not using indexes some where, or may be yes but the\nplanner is not using it... In two words we needs other informations\nin order to help you.\n\n\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Tue, 03 Aug 2004 20:41:23 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Hello,\n\nIt sounds to me like you are IO bound. 2x120GB hard drives just isn't \ngoing to cut it with that many connections (as a general rule). Are you \nswapping ?\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n\nMartin Foster wrote:\n> I run a Perl/CGI driven website that makes extensive use of PostgreSQL \n> (7.4.3) for everything from user information to formatting and display \n> of specific sections of the site. The server itself, is a dual \n> processor AMD Opteron 1.4Ghz w/ 2GB Ram and 2 x 120GB hard drives \n> mirrored for redundancy running under FreeBSD 5.2.1 (AMD64).\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL",
"msg_date": "Tue, 03 Aug 2004 11:52:28 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Gaetano Mendola wrote:\n> Martin Foster wrote:\n> \n>> I run a Perl/CGI driven website that makes extensive use of PostgreSQL \n>> (7.4.3) for everything from user information to formatting and display \n>> of specific sections of the site. The server itself, is a dual \n>> processor AMD Opteron 1.4Ghz w/ 2GB Ram and 2 x 120GB hard drives \n>> mirrored for redundancy running under FreeBSD 5.2.1 (AMD64).\n>>\n>> Recently loads on the site have increased during peak hours to the \n>> point of showing considerable loss in performance. This can be \n>> observed when connections move from the 120 concurrent connections to \n>> PostgreSQL to roughly 175 or more. Essentially, the machine seems \n>> to struggle to keep up with continual requests and slows down \n>> respectively as resources are tied down.\n>>\n>> Code changes have been made to the scripts to essentially back off in \n>> high load working environments which have worked to an extent. \n>> However, as loads continue to increase the database itself is not \n>> taking well to the increased traffic taking place.\n>>\n>> Having taken a look at 'Tuning PostgreSQL for Performance' \n>> (http://www.varlena.com/GeneralBits/Tidbits/perf.html) using it as \n>> best I could in order to set my settings. However, even with \n>> statistics disabled and ever setting tweaked things still consider to \n>> deteriorate.\n>>\n>> Is there anything anyone can recommend in order to give the system a \n>> necessary speed boost? It would seem to me that a modest dataset of \n>> roughly a Gig combined with that type of hardware should be able to \n>> handle substantially more load then what it is. Can anyone provide me \n>> with clues as where to pursue? Would disabling 'fsync' provide more \n>> performance if I choose that information may be lost in case of a crash?\n>>\n>> If anyone needs access to logs, settings et cetera. Please ask, I \n>> simply wish to test the waters first on what is needed. Thanks!\n> \n> \n> Tell us about your tipical queries, show us your configuration file.\n> The access are only in read only mode or do you have concurrent writers\n> and readers ? During peak hours your processors are tied to 100% ?\n> What say the vmstat and the iostat ?\n> \n> May be you are not using indexes some where, or may be yes but the\n> planner is not using it... In two words we needs other informations\n> in order to help you.\n> \n> \n> \n> Regards\n> Gaetano Mendola\n> \n> \n\nI included all the files in attachments, which will hopefully cut down \non any replied to Emails. As for things like connection pooling, the \nweb server makes use of Apache::DBI to pool the connections for the Perl \nscripts being driven on that server. For the sake of being thorough, \na quick 'apachectl status' was thrown in when the database was under a \ngood load.\n\nSince it would rather slow things down to wait for the servers to really \nget bogged down with load averages of 20.00 and more, I opted to choose \na period of time where we are a bit busier then normal. You will be \nable to see how the system behaves under a light load and subsequently \nreaching 125 or so concurrent connections.\n\nThe queries themselves are simple, normally drawing information from one \ntable with few conditions or in the most complex cases using joins on \ntwo table or sub queries. 
These behave very well and always have, the \nproblem is that these queries take place in rather large amounts due to \nthe dumb nature of the scripts themselves.\n\nOver a year ago when I was still using MySQL for the project, the \nstatistics generated would report well over 65 queries per second under \nloads ranging from 130 to 160 at peak but averaged over the weeks of \noperation. Looking at the Apache status, one can see that it averages \nonly roughly 2.5 requests per second giving you a slight indication as \nto what is taking place.\n\nA quick run of 'systat -ifstat' shows the following graph:\n\n\n /0 /1 /2 /3 /4 /5 /6 /7 /8 /9 /10\nLoad Average >>>>>>>>>>>\n\nInterface Traffic Peak Total\n lo0 in 0.000 KB/s 0.000 KB/s 37.690 GB\n out 0.000 KB/s 0.000 KB/s 37.690 GB\n\n em0 in 34.638 KB/s 41.986 KB/s 28.998 GB\n out 70.777 KB/s 70.777 KB/s 39.553 GB\n\nEm0 is a full duplexed 100Mbs connection to an internal switch that \nsupports the servers directly. Load on the loopback was cut down \nconsiderably once I stopped using pg_autovaccum since its performance \nbenefits under low load were buried under the hindrance it caused when \ntraffic was high.\n\nI am sure that there are some places that could benefit from some \noptimization. Especially in the case of indexes, however as a whole the \nproblem seems to be related more to the massive onslaught of queries \nthen it does anything else.\n\nAlso note that some of these scripts run for longer durations even if \nthey are web based. Some run as long as 30 minutes, making queries to \nthe database from periods of wait from five seconds to twenty-five \nseconds. Under high duress the timeouts should back out, based on \nthe time needed for the query to respond, normally averaging 0.008 seconds.\n\nDoes this help at all, or is more detail needed on the matter?\n\n\tMartin Foster\n\[email protected]",
"msg_date": "Wed, 04 Aug 2004 03:49:11 GMT",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "\n\n> The queries themselves are simple, normally drawing information from one\n> table with few conditions or in the most complex cases using joins on\n> two table or sub queries. These behave very well and always have, the\n> problem is that these queries take place in rather large amounts due to\n> the dumb nature of the scripts themselves.\n\n\tHum, maybe this \"dumb\" thing is where to look at ?\n\n\tI'm no expert, but I have had the same situation with a very dump PHP \napplication, namely osCommerce, which averaged about 140 (!!!!!) queries \non a page !\n\n\tI added some traces to queries, and some logging, only to see that the \nstupid programmers did something like (pseudo code):\n\n\tfor id in id_list:\n\t\tselect stuff from database where id=id\n\n\tGeee...\n\n\tI replaced it by :\n\n\tselect stuff from database where id in (id_list)\n\n\tAnd this saved about 20 requests... The code was peppered by queries like \nthat. In the end it went from 140 queries to about 20, which is still way \ntoo much IMHO, but I couldn't go lower without an extensive rewrite.\n\n\tIf you have a script making many selects, it's worth grouping them, even \nusing stored procedures.\n\n\tFor instance using the classical \"tree in a table\" to store a tree of \nproduct categories :\n\ncreate table categories\n(\n\tid serial primary key,\n\tparent_id references categories(id),\n\tetc\n);\n\n\tYou basically have these choices in order to display the tree :\n\n\t- select for parent_id=0 (root)\n\t- for each element, select its children\n\t- and so on\n\n\tOR\n\n\t- make a stored procedure which does that. At least 3x faster and a lot \nless CPU overhead.\n\n\tOR (if you have say 50 rows in the table which was my case)\n\n\t- select the entire table and build your tree in the script\n\tIt was a little bit faster than the stored procedure.\n\n\tCould you give an example of your dumb scripts ? It's good to optimize a \ndatabase, but it's even better to remove useless queries...\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Wed, 04 Aug 2004 08:40:41 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
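A minimal illustration of the collapse Pierre-Frédéric describes, with made-up table and column names; one round trip replaces the per-id loop:

    -- instead of repeating this inside a loop:
    --   SELECT name, price FROM products WHERE id = 17;
    --   SELECT name, price FROM products WHERE id = 42;
    -- fetch the whole list at once:
    SELECT id, name, price FROM products WHERE id IN (17, 42, 63, 101);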
{
"msg_contents": "Martin Foster wrote:\n\n> Gaetano Mendola wrote:\n> \n>> Martin Foster wrote:\n>>\n>>> I run a Perl/CGI driven website that makes extensive use of \n>>> PostgreSQL (7.4.3) for everything from user information to formatting \n>>> and display of specific sections of the site. The server itself, is \n>>> a dual processor AMD Opteron 1.4Ghz w/ 2GB Ram and 2 x 120GB hard \n>>> drives mirrored for redundancy running under FreeBSD 5.2.1 (AMD64).\n>>>\n>>> Recently loads on the site have increased during peak hours to the \n>>> point of showing considerable loss in performance. This can be \n>>> observed when connections move from the 120 concurrent connections to \n>>> PostgreSQL to roughly 175 or more. Essentially, the machine seems \n>>> to struggle to keep up with continual requests and slows down \n>>> respectively as resources are tied down.\n>>>\n>>> Code changes have been made to the scripts to essentially back off in \n>>> high load working environments which have worked to an extent. \n>>> However, as loads continue to increase the database itself is not \n>>> taking well to the increased traffic taking place.\n>>>\n>>> Having taken a look at 'Tuning PostgreSQL for Performance' \n>>> (http://www.varlena.com/GeneralBits/Tidbits/perf.html) using it as \n>>> best I could in order to set my settings. However, even with \n>>> statistics disabled and ever setting tweaked things still consider to \n>>> deteriorate.\n>>>\n>>> Is there anything anyone can recommend in order to give the system a \n>>> necessary speed boost? It would seem to me that a modest dataset of \n>>> roughly a Gig combined with that type of hardware should be able to \n>>> handle substantially more load then what it is. Can anyone provide \n>>> me with clues as where to pursue? Would disabling 'fsync' provide \n>>> more performance if I choose that information may be lost in case of \n>>> a crash?\n>>>\n>>> If anyone needs access to logs, settings et cetera. Please ask, I \n>>> simply wish to test the waters first on what is needed. Thanks!\n>>\n>>\n>>\n>> Tell us about your tipical queries, show us your configuration file.\n>> The access are only in read only mode or do you have concurrent writers\n>> and readers ? During peak hours your processors are tied to 100% ?\n>> What say the vmstat and the iostat ?\n>>\n>> May be you are not using indexes some where, or may be yes but the\n>> planner is not using it... In two words we needs other informations\n>> in order to help you.\n>>\n>>\n>>\n>> Regards\n>> Gaetano Mendola\n>>\n>>\n> \n> I included all the files in attachments, which will hopefully cut down \n> on any replied to Emails. As for things like connection pooling, the \n> web server makes use of Apache::DBI to pool the connections for the Perl \n> scripts being driven on that server. 
For the sake of being thorough, \n> a quick 'apachectl status' was thrown in when the database was under a \n> good load.\n\nLet start from your postgres configuration:\n\nshared_buffers = 8192 <==== This is really too small for your configuration\nsort_mem = 2048\n\nwal_buffers = 128 <==== This is really too small for your configuration\n\neffective_cache_size = 16000\n\nchange this values in:\n\nshared_buffers = 50000\nsort_mem = 16084\n\nwal_buffers = 1500\n\neffective_cache_size = 32000\n\n\nto bump up the shm usage you have to configure your OS in order to be\nallowed to use that ammount of SHM.\n\nThis are the numbers that I feel good for your HW, the second step now is\nanalyze your queries\n\n> The queries themselves are simple, normally drawing information from one \n> table with few conditions or in the most complex cases using joins on \n> two table or sub queries. These behave very well and always have, the \n> problem is that these queries take place in rather large amounts due to \n> the dumb nature of the scripts themselves.\n\nShow us the explain analyze on that queries, how many rows the tables are\ncontaining, the table schema could be also usefull.\n\n\n\nregards\nGaetano Mendola\n\n\n\n\n\n\n\n\n",
"msg_date": "Wed, 04 Aug 2004 17:25:42 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
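Gaetano's numbers, written out as they would appear in postgresql.conf, plus the FreeBSD shared-memory tunables they imply; the sysctl values are illustrative rather than prescriptive:

    # postgresql.conf
    shared_buffers = 50000          # roughly 400 MB of shared buffers
    sort_mem = 16084                # KB per sort
    wal_buffers = 1500
    effective_cache_size = 32000    # ~256 MB assumed in the OS cache

    # FreeBSD (sysctl or /boot/loader.conf), sized to fit the segment above
    kern.ipc.shmmax=536870912       # bytes
    kern.ipc.shmall=131072          # pages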
{
"msg_contents": "On Wed, Aug 04, 2004 at 03:49:11AM +0000, Martin Foster wrote:\n> Also note that some of these scripts run for longer durations even if \n> they are web based. Some run as long as 30 minutes, making queries to \n> the database from periods of wait from five seconds to twenty-five \n> seconds. Under high duress the timeouts should back out, based on \n> the time needed for the query to respond, normally averaging 0.008 seconds.\n\nI would start by EXPLAIN ANALYZE'ing those 30 minute queries. \n\n> martin@io ~$ vmstat\n> procs memory page disks faults cpu\n> r b w avm fre flt re pi po fr sr ad4 ad6 in sy cs us sy id\n> 0 0 0 498532 122848 3306 0 0 0 740 0 0 0 788 0 1675 16 21 63\n> \n\nvmstat without a \"delay\" argument (e.g. 'vmstat 1') gives you a\ncumulative or average since boot. You'd probably get better\ninformation by doing a real-time sampling of stats during normal and\nheavy load. \n\n> martin@io ~$ ps -uax\n> USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\n> postgres 32084 0.0 0.2 91616 3764 p0- R Mon12PM 4:08.99 /usr/local/bin/postmaster -D /var/postgres (postgres)\n> postgres 80333 0.0 2.1 94620 44372 ?? S 8:57PM 0:01.00 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n> postgres 80599 0.0 2.1 94652 44780 ?? S 8:59PM 0:00.97 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n> postgres 80616 0.0 2.4 94424 50396 ?? S 8:59PM 0:00.89 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n> postgres 80715 0.0 2.2 94444 46804 ?? S 9:00PM 0:00.68 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n> postgres 80788 0.0 2.1 94424 43944 ?? S 9:00PM 0:00.93 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n> postgres 80811 0.0 2.1 94424 43884 ?? S 9:00PM 0:00.94 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n> postgres 80902 0.0 2.1 94424 43380 ?? S 9:01PM 0:00.76 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n> postgres 80949 0.0 2.2 94424 45248 ?? S 9:01PM 0:00.67 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n> postgres 81020 0.0 2.1 94424 42924 ?? S 9:02PM 0:00.74 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n\nAll the connections in your email are idle. You may benefit from using\npgpool instead of Apache::DBI (I've never tried). \n\nhttp://www.mail-archive.com/[email protected]/msg00760.html\n\n",
"msg_date": "Wed, 4 Aug 2004 11:36:24 -0400",
"msg_from": "Michael Adler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Michael Adler wrote:\n\n> On Wed, Aug 04, 2004 at 03:49:11AM +0000, Martin Foster wrote:\n> \n>>Also note that some of these scripts run for longer durations even if \n>>they are web based. Some run as long as 30 minutes, making queries to \n>>the database from periods of wait from five seconds to twenty-five \n>>seconds. Under high duress the timeouts should back out, based on \n>>the time needed for the query to respond, normally averaging 0.008 seconds.\n> \n> \n> I would start by EXPLAIN ANALYZE'ing those 30 minute queries. \n> \n\nThe Apache process will run for 30 minutes at a time, not the query \nitself. Essentially, while that process is running it will check for \nnew records in the table at varying intervals, since it will increase \ntimeouts based on load or lack of activity in order to reduce load to \nthe database.\n\n> \n>>martin@io ~$ vmstat\n>> procs memory page disks faults cpu\n>> r b w avm fre flt re pi po fr sr ad4 ad6 in sy cs us sy id\n>> 0 0 0 498532 122848 3306 0 0 0 740 0 0 0 788 0 1675 16 21 63\n>>\n> \n> \n> vmstat without a \"delay\" argument (e.g. 'vmstat 1') gives you a\n> cumulative or average since boot. You'd probably get better\n> information by doing a real-time sampling of stats during normal and\n> heavy load. \n> \n> \n>>martin@io ~$ ps -uax\n>>USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\n>>postgres 32084 0.0 0.2 91616 3764 p0- R Mon12PM 4:08.99 /usr/local/bin/postmaster -D /var/postgres (postgres)\n>>postgres 80333 0.0 2.1 94620 44372 ?? S 8:57PM 0:01.00 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n>>postgres 80599 0.0 2.1 94652 44780 ?? S 8:59PM 0:00.97 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n>>postgres 80616 0.0 2.4 94424 50396 ?? S 8:59PM 0:00.89 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n>>postgres 80715 0.0 2.2 94444 46804 ?? S 9:00PM 0:00.68 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n>>postgres 80788 0.0 2.1 94424 43944 ?? S 9:00PM 0:00.93 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n>>postgres 80811 0.0 2.1 94424 43884 ?? S 9:00PM 0:00.94 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n>>postgres 80902 0.0 2.1 94424 43380 ?? S 9:01PM 0:00.76 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n>>postgres 80949 0.0 2.2 94424 45248 ?? S 9:01PM 0:00.67 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n>>postgres 81020 0.0 2.1 94424 42924 ?? S 9:02PM 0:00.74 postmaster: ethereal ethereal 192.168.1.6 idle in trans\n> \n> \n> All the connections in your email are idle. You may benefit from using\n> pgpool instead of Apache::DBI (I've never tried). \n> \n> http://www.mail-archive.com/[email protected]/msg00760.html\n> \n\nI will take a look into pgpool and see if it will serve as the solution \nI need. The pre-pooling of children sounds like a good choice, however \nsince overhead is already a point of worry I almost wonder if I can host \nit on another server in order to drop that overhead on the servers directly.\n\nAnyone have experience with this on running it on the same machine or a \ndifferent machine then the database proper? Of course, if this works \nas it should, I could easily put an older database server back into \noperation provided pgpool does weighted load balancing.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Wed, 04 Aug 2004 12:21:26 -0400",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Gaetano Mendola wrote:\n\n> Martin Foster wrote:\n> \n>> Gaetano Mendola wrote:\n>>\n>>> Martin Foster wrote:\n>>>\n>>>> I run a Perl/CGI driven website that makes extensive use of \n>>>> PostgreSQL (7.4.3) for everything from user information to \n>>>> formatting and display of specific sections of the site. The \n>>>> server itself, is a dual processor AMD Opteron 1.4Ghz w/ 2GB Ram and \n>>>> 2 x 120GB hard drives mirrored for redundancy running under FreeBSD \n>>>> 5.2.1 (AMD64).\n>>>>\n>>>> Recently loads on the site have increased during peak hours to the \n>>>> point of showing considerable loss in performance. This can be \n>>>> observed when connections move from the 120 concurrent connections \n>>>> to PostgreSQL to roughly 175 or more. Essentially, the machine \n>>>> seems to struggle to keep up with continual requests and slows down \n>>>> respectively as resources are tied down.\n>>>>\n>>>> Code changes have been made to the scripts to essentially back off \n>>>> in high load working environments which have worked to an extent. \n>>>> However, as loads continue to increase the database itself is not \n>>>> taking well to the increased traffic taking place.\n>>>>\n>>>> Having taken a look at 'Tuning PostgreSQL for Performance' \n>>>> (http://www.varlena.com/GeneralBits/Tidbits/perf.html) using it as \n>>>> best I could in order to set my settings. However, even with \n>>>> statistics disabled and ever setting tweaked things still consider \n>>>> to deteriorate.\n>>>>\n>>>> Is there anything anyone can recommend in order to give the system a \n>>>> necessary speed boost? It would seem to me that a modest dataset \n>>>> of roughly a Gig combined with that type of hardware should be able \n>>>> to handle substantially more load then what it is. Can anyone \n>>>> provide me with clues as where to pursue? Would disabling 'fsync' \n>>>> provide more performance if I choose that information may be lost in \n>>>> case of a crash?\n>>>>\n>>>> If anyone needs access to logs, settings et cetera. Please ask, I \n>>>> simply wish to test the waters first on what is needed. Thanks!\n>>>\n>>>\n>>>\n>>>\n>>> Tell us about your tipical queries, show us your configuration file.\n>>> The access are only in read only mode or do you have concurrent writers\n>>> and readers ? During peak hours your processors are tied to 100% ?\n>>> What say the vmstat and the iostat ?\n>>>\n>>> May be you are not using indexes some where, or may be yes but the\n>>> planner is not using it... In two words we needs other informations\n>>> in order to help you.\n>>>\n>>>\n>>>\n>>> Regards\n>>> Gaetano Mendola\n>>>\n>>>\n>>\n>> I included all the files in attachments, which will hopefully cut down \n>> on any replied to Emails. As for things like connection pooling, \n>> the web server makes use of Apache::DBI to pool the connections for \n>> the Perl scripts being driven on that server. 
For the sake of being \n>> thorough, a quick 'apachectl status' was thrown in when the database \n>> was under a good load.\n> \n> \n> Let start from your postgres configuration:\n> \n> shared_buffers = 8192 <==== This is really too small for your \n> configuration\n> sort_mem = 2048\n> \n> wal_buffers = 128 <==== This is really too small for your configuration\n> \n> effective_cache_size = 16000\n> \n> change this values in:\n> \n> shared_buffers = 50000\n> sort_mem = 16084\n> \n> wal_buffers = 1500\n> \n> effective_cache_size = 32000\n> \n> \n> to bump up the shm usage you have to configure your OS in order to be\n> allowed to use that ammount of SHM.\n> \n> This are the numbers that I feel good for your HW, the second step now is\n> analyze your queries\n> \n>> The queries themselves are simple, normally drawing information from \n>> one table with few conditions or in the most complex cases using joins \n>> on two table or sub queries. These behave very well and always have, \n>> the problem is that these queries take place in rather large amounts \n>> due to the dumb nature of the scripts themselves.\n> \n> \n> Show us the explain analyze on that queries, how many rows the tables are\n> containing, the table schema could be also usefull.\n> \n> \n> \n> regards\n> Gaetano Mendola\n> \n\nI will look into moving up those values and seeing how they interact \nwith the system once I get back from work. Since it was requested, I \nhave a visual representation of an older schema, one that was used under \nMySQL. Note that all of the timestamps are now properly set to \nLOCALTIME on PostgreSQL.\n\nhttp://prdownloads.sourceforge.net/ethereal-realms/ethereal-3_0_0.png?download\n\nThe amount of rows for tables of note are as follows:\n Puppeteer 1606\n Puppet 33176\n Realm 83\n Post 36156\n Audit 61961\n\nThe post table is continually cleared of old information since the \nnature of the information is time very critical and archiving would only \nhinder performance. As a result, this will vary wildly based on time \nof day since users (Puppeteers) tend to post more during peak hours.\n\nNOTE: The scripts make use of different schema's with the same\n information in order to virtualize the script in order\n to support more then one site on the same hardware.\n\nOn a side note, this would be a normal post-authentication session once \nin realm for getting new posts:\n * Script is executed and schema is determined through stored procedure;\n * Formatting information is fetched from Tag and RealmDesign as needed;\n * Script will retrieve stored parameters in the Param table;\n * Script will decode, analyze and authenticate against Puppeteer;\n * Script will scan the Puppet and Post tables to generate posts;\n * Sub-query to determine ignored puppeteers/users;\n * Sub-query to determine ignored puppets/handles; and\n * Loop above if necessary until expiry of script delaying\n the execution of the script from 5 to 25 seconds.\n\nThis should provide an idea on that portion. of course the flow \nchanges when one posts, but is handled by a different script instance as \nis authentication et cetera.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Wed, 04 Aug 2004 12:36:45 -0400",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
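For concreteness, the OS-level change Gaetano alludes to ("to bump up the shm usage you have to configure your OS") would, on the FreeBSD 5.x machine described at the start of the thread, amount to raising the SysV shared memory limits. The sysctl names below are the standard FreeBSD ones, but the sizes are only a sketch worked out around shared_buffers = 50000 (50000 x 8 KB, roughly 400 MB) and are not taken from the original posts; depending on the release some of them may need to go into /boot/loader.conf rather than /etc/sysctl.conf.

    # /etc/sysctl.conf (illustrative values; verify with `sysctl -a | grep ipc`)
    kern.ipc.shmmax=471859200    # largest single segment, in bytes (~450 MB of headroom)
    kern.ipc.shmall=115200       # total shared memory, in 4 KB pages
    kern.ipc.shm_use_phys=1      # wire the segment so the buffer cache is never paged out

A high max_connections may also require raising the semaphore limits (kern.ipc.semmni and friends), which PostgreSQL will complain about explicitly at startup if they are too low.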
{
"msg_contents": "\n\tApache processes running for 30 minutes ?.....\n\n\tMy advice : use frames and Javascript !\n\n\tIn your webpage, you have two frames : \"content\" and \"refresh\".\n\n\t\"content\" starts empty (say, just a title on top of the page).\n\t\"refresh\" is refreshed every five seconds from a script on your server. \nThis script generates a javascript which \"document.write()'s\" new entries \nin the \"content\" frame, thus adding new records in the upper frame.\n\n\tThus, the refreshing uses a new request every 5 seconds, which terminates \nvery fast, and does not hog an Apache process.\n\n\tTurn keepalive timeout down.\n",
"msg_date": "Thu, 05 Aug 2004 08:40:35 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "On Thu, Aug 05, 2004 at 08:40:35AM +0200, Pierre-Fr�d�ric Caillaud wrote:\n> \tApache processes running for 30 minutes ?.....\n> \n> \tMy advice : use frames and Javascript !\n\nMy advice: Stay out of frames and Javascript if you can avoid it. The first\nis severely outdated technology, and the other one might well be disabled at\nthe client side.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 5 Aug 2004 17:04:03 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "On Wed, 2004-08-04 at 17:25 +0200, Gaetano Mendola wrote:\n\n> > The queries themselves are simple, normally drawing information from one \n> > table with few conditions or in the most complex cases using joins on \n> > two table or sub queries. These behave very well and always have, the \n> > problem is that these queries take place in rather large amounts due to \n> > the dumb nature of the scripts themselves.\n> \n> Show us the explain analyze on that queries, how many rows the tables are\n> containing, the table schema could be also usefull.\n> \n\nIf the queries themselves are optimized as much as they can be, and as\nyou say, its just the sheer amount of similar queries hitting the\ndatabase, you could try using prepared queries for ones that are most\noften executed to eliminate some of the overhead. \n\nI've had relatively good success with this in the past, and it doesn't\ntake very much code modification.\n\n-- \nMike Benoit <[email protected]>\n\n",
"msg_date": "Fri, 06 Aug 2004 10:06:51 -0700",
"msg_from": "Mike Benoit <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Gaetano Mendola wrote:\n> \n> \n> Let start from your postgres configuration:\n> \n> shared_buffers = 8192 <==== This is really too small for your \n> configuration\n> sort_mem = 2048\n> \n> wal_buffers = 128 <==== This is really too small for your configuration\n> \n> effective_cache_size = 16000\n> \n> change this values in:\n> \n> shared_buffers = 50000\n> sort_mem = 16084\n> \n> wal_buffers = 1500\n> \n> effective_cache_size = 32000\n> \n> \n> to bump up the shm usage you have to configure your OS in order to be\n> allowed to use that ammount of SHM.\n> \n> This are the numbers that I feel good for your HW, the second step now is\n> analyze your queries\n> \n\nThese changes have yielded some visible improvements, with load averages \nrarely going over the anything noticeable. However, I do have a \nquestion on the matter, why do these values seem to be far higher then \nwhat a frequently pointed to document would indicate as necessary?\n\nhttp://www.varlena.com/GeneralBits/Tidbits/perf.html\n\nI am simply curious, as this clearly shows that my understanding of \nPostgreSQL is clearly lacking when it comes to tweaking for the hardware.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Fri, 06 Aug 2004 18:58:31 -0400",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Mike Benoit wrote:\n\n> On Wed, 2004-08-04 at 17:25 +0200, Gaetano Mendola wrote:\n> \n> \n>>>The queries themselves are simple, normally drawing information from one \n>>>table with few conditions or in the most complex cases using joins on \n>>>two table or sub queries. These behave very well and always have, the \n>>>problem is that these queries take place in rather large amounts due to \n>>>the dumb nature of the scripts themselves.\n>>\n>>Show us the explain analyze on that queries, how many rows the tables are\n>>containing, the table schema could be also usefull.\n>>\n> \n> \n> If the queries themselves are optimized as much as they can be, and as\n> you say, its just the sheer amount of similar queries hitting the\n> database, you could try using prepared queries for ones that are most\n> often executed to eliminate some of the overhead. \n> \n> I've had relatively good success with this in the past, and it doesn't\n> take very much code modification.\n> \n\nOne of the biggest problems is most probably related to the indexes. \nSince the performance penalty of logging the information needed to see \nwhich queries are used and which are not is a slight problem, then I \ncannot really make use of it for now.\n\nHowever, I am curious how one would go about preparing query? Is this \nsimilar to the DBI::Prepare statement with placeholders and simply \nchanging the values passed on execute? Or is this something database \nlevel such as a view et cetera?\n\nSELECT\n Post.PostIDNumber,\n Post.$format,\n Post.PuppeteerLogin,\n Post.PuppetName,\n Post.PostCmd,\n Post.PostClass\nFROM Post\nWHERE Post.PostIDNumber > ?::INT\n AND (Post.PostTo='all' OR Post.PostTo=?)\n AND (NOT EXISTS (SELECT PuppetIgnore.PuppetLogin\n\tFROM PuppetIgnore\n\tWHERE PuppetIgnore.PuppetIgnore='global'\n\t AND PuppetIgnore.PuppeteerLogin=?\n\t AND PuppetIgnore.PuppetLogin=Post.PuppeteerLogin)\n OR Post.PuppeteerLogin IS NULL)\n AND (NOT EXISTS (SELECT PuppetIgnore.PuppetName\n\tFROM PuppetIgnore\n\tWHERE PuppetIgnore.PuppetIgnore='single'\n\t AND PuppetIgnore.PuppeteerLogin=?\n\t AND PuppetIgnore.PuppetName=Post.PuppetName)\n OR Post.PuppetName IS NULL)\nORDER BY Post.PostIDNumber LIMIT 100\n\nThe range is determined from the previous run or through a query listed \nbelow. It was determined that using INT was far faster then limiting \nby timestamp.\n\nSELECT MIN(PostIDNumber)\nFROM Post\nWHERE RealmName=?\n AND PostClass IN ('general','play')\n AND PostTo='all'\n\nThe above simply provides a starting point, nothing more. Once posts \nare pulled the script will throw in the last pulled number as to start \nfrom a fresh point.\n\nUnder MySQL time was an stored as an INT which may have helped it handle \ntimestamps more efficiently. It also made use of three or more \nqueries, where two were done to generate an IN statement for the query \nactually running at the time.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n",
"msg_date": "Fri, 06 Aug 2004 23:18:49 GMT",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
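Since the question of what a "prepared query" means comes up here, the server-side counterpart of the DBI placeholders shown above is PostgreSQL's PREPARE/EXECUTE. The sketch below is only illustrative: it reuses the table and column names from Martin's query but drops the ignore sub-selects and the interpolated $format column, and the statement name, parameter types and literal values are invented.

    -- prepared once per (pooled) connection
    PREPARE get_posts (integer, text) AS
        SELECT PostIDNumber, PuppeteerLogin, PuppetName, PostCmd, PostClass
          FROM Post
         WHERE PostIDNumber > $1
           AND (PostTo = 'all' OR PostTo = $2)
         ORDER BY PostIDNumber
         LIMIT 100;

    -- then run as often as needed with only the parameters changing
    EXECUTE get_posts(36000, 'someuser');

Because Apache::DBI keeps the connection alive between requests, the statement only has to be planned once per Apache child rather than once per page view, which is where the saving Mike describes comes from.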
{
"msg_contents": "Martin Foster wrote:\n\n> Gaetano Mendola wrote:\n> \n>>\n>>\n>> Let start from your postgres configuration:\n>>\n>> shared_buffers = 8192 <==== This is really too small for your \n>> configuration\n>> sort_mem = 2048\n>>\n>> wal_buffers = 128 <==== This is really too small for your \n>> configuration\n>>\n>> effective_cache_size = 16000\n>>\n>> change this values in:\n>>\n>> shared_buffers = 50000\n>> sort_mem = 16084\n>>\n>> wal_buffers = 1500\n>>\n>> effective_cache_size = 32000\n>>\n>>\n>> to bump up the shm usage you have to configure your OS in order to be\n>> allowed to use that ammount of SHM.\n>>\n>> This are the numbers that I feel good for your HW, the second step now is\n>> analyze your queries\n>>\n> \n> These changes have yielded some visible improvements, with load averages \n> rarely going over the anything noticeable. However, I do have a \n> question on the matter, why do these values seem to be far higher then \n> what a frequently pointed to document would indicate as necessary?\n> \n> http://www.varlena.com/GeneralBits/Tidbits/perf.html\n> \n> I am simply curious, as this clearly shows that my understanding of \n> PostgreSQL is clearly lacking when it comes to tweaking for the hardware.\n\nUnfortunately there is no a \"wizard tuning\" for postgres so each one of\nus have a own \"school\". The data I gave you are oversized to be sure\nto achieve improvements. Now you can start to decrease these values\n( starting from the wal_buffers ) in order to find the good compromise\nwith your HW.\n\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 07 Aug 2004 01:24:18 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Martin Foster <[email protected]> writes:\n> Gaetano Mendola wrote:\n>> change this values in:\n>> shared_buffers = 50000\n>> sort_mem = 16084\n>> \n>> wal_buffers = 1500\n\nThis value of wal_buffers is simply ridiculous.\n\nThere isn't any reason to set wal_buffers higher than the amount of\nWAL log data that will be generated by a single transaction, because\nwhatever is in the buffers will be flushed at transaction commit.\nIf you are mainly dealing with heavy concurrency then it's the mean time\nbetween transaction commits that matters, and that's even less than the\naverage transaction length.\n\nEven if you are mainly interested in the performance of large updating\ntransactions that are not concurrent with anything else (bulk data load,\nperhaps), I'm not sure that I see any value in setting wal_buffers so\nhigh. The data will have to go to disk before commit in any case, and\nbuffering so much of it just means that you are going to have a serious\nspike in disk traffic right before commit. It's almost certainly better\nto keep wal_buffers conservatively small and let the data trickle out as\nthe transaction proceeds. I don't actually think there is anything very\nwrong with the default value (8) ... perhaps it is too small, but it's\nnot two orders of magnitude too small.\n\nIn 8.0, the presence of the background writer may make it useful to run\nwith wal_buffers somewhat higher than before, but I still doubt that\norder-of-a-thousand buffers would be useful. The RAM would almost\ncertainly be better spent on general-purpose disk buffers or kernel\ncache.\n\nNote though that this is just informed opinion, as I've never done or\nseen any benchmarks that examine the results of changing wal_buffers\nwhile holding other things constant. Has anyone tried it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 Aug 2004 20:00:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck "
},
{
"msg_contents": "On Fri, 2004-08-06 at 17:24, Gaetano Mendola wrote:\n> Martin Foster wrote:\n> \n> > Gaetano Mendola wrote:\n> > \n> >>\n> >>\n> >> Let start from your postgres configuration:\n> >>\n> >> shared_buffers = 8192 <==== This is really too small for your \n> >> configuration\n> >> sort_mem = 2048\n> >>\n> >> wal_buffers = 128 <==== This is really too small for your \n> >> configuration\n> >>\n> >> effective_cache_size = 16000\n> >>\n> >> change this values in:\n> >>\n> >> shared_buffers = 50000\n> >> sort_mem = 16084\n> >>\n> >> wal_buffers = 1500\n> >>\n> >> effective_cache_size = 32000\n> >>\n> >>\n> >> to bump up the shm usage you have to configure your OS in order to be\n> >> allowed to use that ammount of SHM.\n> >>\n> >> This are the numbers that I feel good for your HW, the second step now is\n> >> analyze your queries\n> >>\n> > \n> > These changes have yielded some visible improvements, with load averages \n> > rarely going over the anything noticeable. However, I do have a \n> > question on the matter, why do these values seem to be far higher then \n> > what a frequently pointed to document would indicate as necessary?\n> > \n> > http://www.varlena.com/GeneralBits/Tidbits/perf.html\n> > \n> > I am simply curious, as this clearly shows that my understanding of \n> > PostgreSQL is clearly lacking when it comes to tweaking for the hardware.\n> \n> Unfortunately there is no a \"wizard tuning\" for postgres so each one of\n> us have a own \"school\". The data I gave you are oversized to be sure\n> to achieve improvements. Now you can start to decrease these values\n> ( starting from the wal_buffers ) in order to find the good compromise\n> with your HW.\n\nFYI, my school of tuning is to change one thing at a time some\nreasonable percentage (shared_buffers from 1000 to 2000) and measure the\nchange under simulated load. Make another change, test it, chart the\nshape of the change line. It should look something like this for most\nfolks:\n\nshared_buffers | q/s (more is better)\n100 | 20\n200 | 45\n400 | 80\n1000 | 100\n... levels out here...\n8000 | 110\n10000 | 108\n20000 | 40\n30000 | 20\n\nNote it going back down as we exceed our memory and start swapping\nshared_buffers. Where that happens on your machine is determined by\nmany things like your machine's memory, memory bandwidth, type of load,\netc... but it will happen on most machines and when it does, it often\nhappens at the worst times, under heavy parallel load.\n\nUnless testing shows it's faster, 10000 or 25% of mem (whichever is\nless) is usually a pretty good setting for shared_buffers. Large data\nsets may require more than 10000, but going over 25% on machines with\nlarge memory is usually a mistake, especially servers that do anything\nother than just PostgreSQL.\n\nYou're absolutely right about one thing, there's no automatic wizard for\ntuning this stuff.\n\n",
"msg_date": "Fri, 06 Aug 2004 18:56:18 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
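Scott's change-one-thing-and-measure approach is easy to mechanize against a test copy of the database. The loop below is only a sketch (the paths, database name and the use of pgbench, which shows up later in this thread, are assumptions), but it produces exactly the kind of shared_buffers-versus-throughput table he describes.

    # run against a test instance, not the live site
    for bufs in 1000 2000 4000 8000 10000 20000; do
        perl -pi -e "s/^shared_buffers.*/shared_buffers = $bufs/" \
            /usr/local/pgsql/data/postgresql.conf
        pg_ctl -D /usr/local/pgsql/data restart
        sleep 10
        echo -n "$bufs  "
        pgbench -S -c 10 -t 1000 testdb | grep 'excluding connections'
    done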
{
"msg_contents": "Scott Marlowe wrote:\n\n> On Fri, 2004-08-06 at 17:24, Gaetano Mendola wrote:\n> \n>>Martin Foster wrote:\n>>\n>>\n>>>Gaetano Mendola wrote:\n>>>\n>>>\n>>>>\n>>>>Let start from your postgres configuration:\n>>>>\n>>>>shared_buffers = 8192 <==== This is really too small for your \n>>>>configuration\n>>>>sort_mem = 2048\n>>>>\n>>>>wal_buffers = 128 <==== This is really too small for your \n>>>>configuration\n>>>>\n>>>>effective_cache_size = 16000\n>>>>\n>>>>change this values in:\n>>>>\n>>>>shared_buffers = 50000\n>>>>sort_mem = 16084\n>>>>\n>>>>wal_buffers = 1500\n>>>>\n>>>>effective_cache_size = 32000\n>>>>\n>>>>\n>>>>to bump up the shm usage you have to configure your OS in order to be\n>>>>allowed to use that ammount of SHM.\n>>>>\n>>>>This are the numbers that I feel good for your HW, the second step now is\n>>>>analyze your queries\n>>>>\n>>>\n>>>These changes have yielded some visible improvements, with load averages \n>>>rarely going over the anything noticeable. However, I do have a \n>>>question on the matter, why do these values seem to be far higher then \n>>>what a frequently pointed to document would indicate as necessary?\n>>>\n>>>http://www.varlena.com/GeneralBits/Tidbits/perf.html\n>>>\n>>>I am simply curious, as this clearly shows that my understanding of \n>>>PostgreSQL is clearly lacking when it comes to tweaking for the hardware.\n>>\n>>Unfortunately there is no a \"wizard tuning\" for postgres so each one of\n>>us have a own \"school\". The data I gave you are oversized to be sure\n>>to achieve improvements. Now you can start to decrease these values\n>>( starting from the wal_buffers ) in order to find the good compromise\n>>with your HW.\n> \n> \n> FYI, my school of tuning is to change one thing at a time some\n> reasonable percentage (shared_buffers from 1000 to 2000) and measure the\n> change under simulated load. Make another change, test it, chart the\n> shape of the change line. It should look something like this for most\n> folks:\n> \n> shared_buffers | q/s (more is better)\n> 100 | 20\n> 200 | 45\n> 400 | 80\n> 1000 | 100\n> ... levels out here...\n> 8000 | 110\n> 10000 | 108\n> 20000 | 40\n> 30000 | 20\n> \n> Note it going back down as we exceed our memory and start swapping\n> shared_buffers. Where that happens on your machine is determined by\n> many things like your machine's memory, memory bandwidth, type of load,\n> etc... but it will happen on most machines and when it does, it often\n> happens at the worst times, under heavy parallel load.\n> \n> Unless testing shows it's faster, 10000 or 25% of mem (whichever is\n> less) is usually a pretty good setting for shared_buffers. Large data\n> sets may require more than 10000, but going over 25% on machines with\n> large memory is usually a mistake, especially servers that do anything\n> other than just PostgreSQL.\n> \n> You're absolutely right about one thing, there's no automatic wizard for\n> tuning this stuff.\n> \n\nWhich rather points out the crux of the problem. This is a live system, \nmeaning changes made need to be as informed as possible, and that \nchanging values for the sake of testing can lead to potential problems \nin service.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n",
"msg_date": "Sat, 07 Aug 2004 04:02:58 GMT",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "On Fri, 2004-08-06 at 22:02, Martin Foster wrote:\n> Scott Marlowe wrote:\n> \n> > On Fri, 2004-08-06 at 17:24, Gaetano Mendola wrote:\n> > \n> >>Martin Foster wrote:\n> >>\n> >>\n> >>>Gaetano Mendola wrote:\n> >>>\n> >>>\n> >>>>\n> >>>>Let start from your postgres configuration:\n> >>>>\n> >>>>shared_buffers = 8192 <==== This is really too small for your \n> >>>>configuration\n> >>>>sort_mem = 2048\n> >>>>\n> >>>>wal_buffers = 128 <==== This is really too small for your \n> >>>>configuration\n> >>>>\n> >>>>effective_cache_size = 16000\n> >>>>\n> >>>>change this values in:\n> >>>>\n> >>>>shared_buffers = 50000\n> >>>>sort_mem = 16084\n> >>>>\n> >>>>wal_buffers = 1500\n> >>>>\n> >>>>effective_cache_size = 32000\n> >>>>\n> >>>>\n> >>>>to bump up the shm usage you have to configure your OS in order to be\n> >>>>allowed to use that ammount of SHM.\n> >>>>\n> >>>>This are the numbers that I feel good for your HW, the second step now is\n> >>>>analyze your queries\n> >>>>\n> >>>\n> >>>These changes have yielded some visible improvements, with load averages \n> >>>rarely going over the anything noticeable. However, I do have a \n> >>>question on the matter, why do these values seem to be far higher then \n> >>>what a frequently pointed to document would indicate as necessary?\n> >>>\n> >>>http://www.varlena.com/GeneralBits/Tidbits/perf.html\n> >>>\n> >>>I am simply curious, as this clearly shows that my understanding of \n> >>>PostgreSQL is clearly lacking when it comes to tweaking for the hardware.\n> >>\n> >>Unfortunately there is no a \"wizard tuning\" for postgres so each one of\n> >>us have a own \"school\". The data I gave you are oversized to be sure\n> >>to achieve improvements. Now you can start to decrease these values\n> >>( starting from the wal_buffers ) in order to find the good compromise\n> >>with your HW.\n> > \n> > \n> > FYI, my school of tuning is to change one thing at a time some\n> > reasonable percentage (shared_buffers from 1000 to 2000) and measure the\n> > change under simulated load. Make another change, test it, chart the\n> > shape of the change line. It should look something like this for most\n> > folks:\n> > \n> > shared_buffers | q/s (more is better)\n> > 100 | 20\n> > 200 | 45\n> > 400 | 80\n> > 1000 | 100\n> > ... levels out here...\n> > 8000 | 110\n> > 10000 | 108\n> > 20000 | 40\n> > 30000 | 20\n> > \n> > Note it going back down as we exceed our memory and start swapping\n> > shared_buffers. Where that happens on your machine is determined by\n> > many things like your machine's memory, memory bandwidth, type of load,\n> > etc... but it will happen on most machines and when it does, it often\n> > happens at the worst times, under heavy parallel load.\n> > \n> > Unless testing shows it's faster, 10000 or 25% of mem (whichever is\n> > less) is usually a pretty good setting for shared_buffers. Large data\n> > sets may require more than 10000, but going over 25% on machines with\n> > large memory is usually a mistake, especially servers that do anything\n> > other than just PostgreSQL.\n> > \n> > You're absolutely right about one thing, there's no automatic wizard for\n> > tuning this stuff.\n> > \n> \n> Which rather points out the crux of the problem. 
This is a live system, \n> meaning changes made need to be as informed as possible, and that \n> changing values for the sake of testing can lead to potential problems \n> in service.\n\nBut if you make those changes slowly, as I was showing, you should see\nthe small deleterious effects like I was showing long before they become\ncatastrophic. To just jump shared_buffers to 50000 is not a good idea,\nespecially if the sweet spot is likely lower than that. \n\n",
"msg_date": "Fri, 06 Aug 2004 22:36:13 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Scott Marlowe wrote:\n\n> On Fri, 2004-08-06 at 22:02, Martin Foster wrote:\n> \n>>Scott Marlowe wrote:\n>>\n>>\n>>>On Fri, 2004-08-06 at 17:24, Gaetano Mendola wrote:\n>>>\n>>>\n>>>>Martin Foster wrote:\n>>>>\n>>>>\n>>>>\n>>>>>Gaetano Mendola wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>>>Let start from your postgres configuration:\n>>>>>>\n>>>>>>shared_buffers = 8192 <==== This is really too small for your \n>>>>>>configuration\n>>>>>>sort_mem = 2048\n>>>>>>\n>>>>>>wal_buffers = 128 <==== This is really too small for your \n>>>>>>configuration\n>>>>>>\n>>>>>>effective_cache_size = 16000\n>>>>>>\n>>>>>>change this values in:\n>>>>>>\n>>>>>>shared_buffers = 50000\n>>>>>>sort_mem = 16084\n>>>>>>\n>>>>>>wal_buffers = 1500\n>>>>>>\n>>>>>>effective_cache_size = 32000\n>>>>>>\n>>>>>>\n>>>>>>to bump up the shm usage you have to configure your OS in order to be\n>>>>>>allowed to use that ammount of SHM.\n>>>>>>\n>>>>>>This are the numbers that I feel good for your HW, the second step now is\n>>>>>>analyze your queries\n>>>>>>\n>>>>>\n>>>>>These changes have yielded some visible improvements, with load averages \n>>>>>rarely going over the anything noticeable. However, I do have a \n>>>>>question on the matter, why do these values seem to be far higher then \n>>>>>what a frequently pointed to document would indicate as necessary?\n>>>>>\n>>>>>http://www.varlena.com/GeneralBits/Tidbits/perf.html\n>>>>>\n>>>>>I am simply curious, as this clearly shows that my understanding of \n>>>>>PostgreSQL is clearly lacking when it comes to tweaking for the hardware.\n>>>>\n>>>>Unfortunately there is no a \"wizard tuning\" for postgres so each one of\n>>>>us have a own \"school\". The data I gave you are oversized to be sure\n>>>>to achieve improvements. Now you can start to decrease these values\n>>>>( starting from the wal_buffers ) in order to find the good compromise\n>>>>with your HW.\n>>>\n>>>\n>>>FYI, my school of tuning is to change one thing at a time some\n>>>reasonable percentage (shared_buffers from 1000 to 2000) and measure the\n>>>change under simulated load. Make another change, test it, chart the\n>>>shape of the change line. It should look something like this for most\n>>>folks:\n>>>\n>>>shared_buffers | q/s (more is better)\n>>>100 | 20\n>>>200 | 45\n>>>400 | 80\n>>>1000 | 100\n>>>... levels out here...\n>>>8000 | 110\n>>>10000 | 108\n>>>20000 | 40\n>>>30000 | 20\n>>>\n>>>Note it going back down as we exceed our memory and start swapping\n>>>shared_buffers. Where that happens on your machine is determined by\n>>>many things like your machine's memory, memory bandwidth, type of load,\n>>>etc... but it will happen on most machines and when it does, it often\n>>>happens at the worst times, under heavy parallel load.\n>>>\n>>>Unless testing shows it's faster, 10000 or 25% of mem (whichever is\n>>>less) is usually a pretty good setting for shared_buffers. Large data\n>>>sets may require more than 10000, but going over 25% on machines with\n>>>large memory is usually a mistake, especially servers that do anything\n>>>other than just PostgreSQL.\n>>>\n>>>You're absolutely right about one thing, there's no automatic wizard for\n>>>tuning this stuff.\n>>>\n>>\n>>Which rather points out the crux of the problem. 
This is a live system, \n>>meaning changes made need to be as informed as possible, and that \n>>changing values for the sake of testing can lead to potential problems \n>>in service.\n> \n> \n> But if you make those changes slowly, as I was showing, you should see\n> the small deleterious effects like I was showing long before they become\n> catastrophic. To just jump shared_buffers to 50000 is not a good idea,\n> especially if the sweet spot is likely lower than that. \n> \n\nWhile I agree, there are also issues with the fact that getting \nconsistent results from this site are very much difficult to do, since \nit is based on the whims of users visiting one of three sites hosted on \nthe same hardware.\n\nNow that being said, having wal_buffers at 8 certainly would not be a \ngood idea, since the database logs themselves were warning of excessive \nwrites in that region. I am not hoping for a perfect intermix ratio, \nthat will solve all my problems.\n\nBut a good idea on a base that will allow me to gain a fair load would \ncertainly be a good option. Right now, the load being handled is not \nmuch more then a single processor system did with half the memory. \nCertainly this architecture should be able to take more of a beating \nthen this?\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Sat, 07 Aug 2004 00:39:35 -0400",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Fri, 2004-08-06 at 22:02, Martin Foster wrote:\n> \n>>Scott Marlowe wrote:\n>>\n>>\n>>>On Fri, 2004-08-06 at 17:24, Gaetano Mendola wrote:\n>>>\n>>>\n>>>>Martin Foster wrote:\n>>>>\n>>>>\n>>>>\n>>>>>Gaetano Mendola wrote:\n>>>>>\n>>>>>\n>>>>>\n>>>>>>Let start from your postgres configuration:\n>>>>>>\n>>>>>>shared_buffers = 8192 <==== This is really too small for your \n>>>>>>configuration\n>>>>>>sort_mem = 2048\n>>>>>>\n>>>>>>wal_buffers = 128 <==== This is really too small for your \n>>>>>>configuration\n>>>>>>\n>>>>>>effective_cache_size = 16000\n>>>>>>\n>>>>>>change this values in:\n>>>>>>\n>>>>>>shared_buffers = 50000\n>>>>>>sort_mem = 16084\n>>>>>>\n>>>>>>wal_buffers = 1500\n>>>>>>\n>>>>>>effective_cache_size = 32000\n>>>>>>\n>>>>>>\n>>>>>>to bump up the shm usage you have to configure your OS in order to be\n>>>>>>allowed to use that ammount of SHM.\n>>>>>>\n>>>>>>This are the numbers that I feel good for your HW, the second step now is\n>>>>>>analyze your queries\n>>>>>>\n>>>>>\n>>>>>These changes have yielded some visible improvements, with load averages \n>>>>>rarely going over the anything noticeable. However, I do have a \n>>>>>question on the matter, why do these values seem to be far higher then \n>>>>>what a frequently pointed to document would indicate as necessary?\n>>>>>\n>>>>>http://www.varlena.com/GeneralBits/Tidbits/perf.html\n>>>>>\n>>>>>I am simply curious, as this clearly shows that my understanding of \n>>>>>PostgreSQL is clearly lacking when it comes to tweaking for the hardware.\n>>>>\n>>>>Unfortunately there is no a \"wizard tuning\" for postgres so each one of\n>>>>us have a own \"school\". The data I gave you are oversized to be sure\n>>>>to achieve improvements. Now you can start to decrease these values\n>>>>( starting from the wal_buffers ) in order to find the good compromise\n>>>>with your HW.\n>>>\n>>>\n>>>FYI, my school of tuning is to change one thing at a time some\n>>>reasonable percentage (shared_buffers from 1000 to 2000) and measure the\n>>>change under simulated load. Make another change, test it, chart the\n>>>shape of the change line. It should look something like this for most\n>>>folks:\n>>>\n>>>shared_buffers | q/s (more is better)\n>>>100 | 20\n>>>200 | 45\n>>>400 | 80\n>>>1000 | 100\n>>>... levels out here...\n>>>8000 | 110\n>>>10000 | 108\n>>>20000 | 40\n>>>30000 | 20\n>>>\n>>>Note it going back down as we exceed our memory and start swapping\n>>>shared_buffers. Where that happens on your machine is determined by\n>>>many things like your machine's memory, memory bandwidth, type of load,\n>>>etc... but it will happen on most machines and when it does, it often\n>>>happens at the worst times, under heavy parallel load.\n>>>\n>>>Unless testing shows it's faster, 10000 or 25% of mem (whichever is\n>>>less) is usually a pretty good setting for shared_buffers. Large data\n>>>sets may require more than 10000, but going over 25% on machines with\n>>>large memory is usually a mistake, especially servers that do anything\n>>>other than just PostgreSQL.\n>>>\n>>>You're absolutely right about one thing, there's no automatic wizard for\n>>>tuning this stuff.\n>>>\n>>\n>>Which rather points out the crux of the problem. 
This is a live system, \n>>meaning changes made need to be as informed as possible, and that \n>>changing values for the sake of testing can lead to potential problems \n>>in service.\n> \n> \n> But if you make those changes slowly, as I was showing, you should see\n> the small deleterious effects like I was showing long before they become\n> catastrophic. To just jump shared_buffers to 50000 is not a good idea,\n> especially if the sweet spot is likely lower than that. \n\nAs you can see 50000 are less then 20% of his total memory and I strongly\nfell that 50000 is not oversized for his hardware ( as wal_buffers isn't),\nmay be could be for his database activity but for sure that value ( values )\ncan not be source of problems.\n\nI'd like to have a wizard that could be run also for hours in order to find the\ngood compromise for all GUC parameters , may be a genetic algoritm can help.\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 07 Aug 2004 12:03:44 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Tom Lane wrote:\n\n> Martin Foster <[email protected]> writes:\n> \n>>Gaetano Mendola wrote:\n>>\n>>>change this values in:\n>>>shared_buffers = 50000\n>>>sort_mem = 16084\n>>>\n>>>wal_buffers = 1500\n> \n> \n> This value of wal_buffers is simply ridiculous.\n\nInstead I think is ridiculous a wal_buffers = 8 ( 64KB ) by default.\n\n> There isn't any reason to set wal_buffers higher than the amount of\n> WAL log data that will be generated by a single transaction, because\n> whatever is in the buffers will be flushed at transaction commit.\n> If you are mainly dealing with heavy concurrency then it's the mean time\n> between transaction commits that matters, and that's even less than the\n> average transaction length.\n\nI partially agree with you, tell me how decide that value without\neven now the typical queries, the tipical load ... nothing.\nI suggested to OP to keep the wal_buffers so high in order to eliminate one\nfreedom of degree in his performance problems. You can see from following reply,\n\n\n========================================================================\nGaetano Mendola wrote:\nUnfortunately there is no a \"wizard tuning\" for postgres so each one of\nus have a own \"school\". The data I gave you are oversized to be sure\nto achieve improvements. Now you can start to decrease these values\n( starting from the wal_buffers ) in order to find the good compromise\nwith your HW.\n========================================================================\n\nHowever wal_buffers = 1500 means ~12 MB that are not so expensive considering\na server with 2GB of ram and I think that is a good compromise if you are not\nstarving for RAM.\n\n\nI had a discussion about how fine tuning a postgres server with a client,\nmy question was: are you planning to have someone that periodically take a\nlook at your server activities in order to use your hardware at the best?\nEasy answer: No, because when the server is overloaded I will buy a bigger\none that is less expensive that pay someone, considering also that shareolders\nprefer increase the capex that pay salaries ( if the company close the hardware\ncan be selled :-( ).\n\nThis is the real world out there.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 07 Aug 2004 12:11:09 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
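Since the numbers being argued over are easy to misread, the memory actually behind each setting works out as follows (each buffer is one 8 KB block in a default build); this is simple arithmetic, not a benchmark result.

    shared_buffers = 50000  ->  50000 x 8 KB  ~ 390 MB   (roughly 19% of the 2 GB of RAM)
    wal_buffers    = 1500   ->   1500 x 8 KB  ~ 12 MB
    wal_buffers    = 8      ->      8 x 8 KB  = 64 KB    (the default Gaetano objects to)

This is why Gaetano treats 50000 buffers as under 20% of memory and 1500 WAL buffers as cheap, while Tom's objection is about flushing behaviour at commit rather than about the 12 MB itself.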
{
"msg_contents": ">> This value of wal_buffers is simply ridiculous.\n> \n> \n> Instead I think is ridiculous a wal_buffers = 8 ( 64KB ) by default.\n\nThere is no point making WAL buffers higher than 8. I have done much \ntesting of this and it makes not the slightest difference to performance \nthat I could measure.\n\nChris\n\n",
"msg_date": "Sat, 07 Aug 2004 18:49:50 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "On 8/3/2004 2:05 PM, Martin Foster wrote:\n\n> I run a Perl/CGI driven website that makes extensive use of PostgreSQL \n> (7.4.3) for everything from user information to formatting and display \n> of specific sections of the site. The server itself, is a dual \n> processor AMD Opteron 1.4Ghz w/ 2GB Ram and 2 x 120GB hard drives \n> mirrored for redundancy running under FreeBSD 5.2.1 (AMD64).\n> \n> Recently loads on the site have increased during peak hours to the point \n> of showing considerable loss in performance. This can be observed \n> when connections move from the 120 concurrent connections to PostgreSQL \n> to roughly 175 or more. Essentially, the machine seems to struggle \n> to keep up with continual requests and slows down respectively as \n> resources are tied down.\n\nHave you taken a look at pgpool? I know, it sounds silly to *reduce* the \nnumber of DB connections through a connection pool, but it can help.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Sat, 07 Aug 2004 10:21:00 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n\n>>> This value of wal_buffers is simply ridiculous.\n>>\n>>\n>>\n>> Instead I think is ridiculous a wal_buffers = 8 ( 64KB ) by default.\n> \n> \n> There is no point making WAL buffers higher than 8. I have done much \n> testing of this and it makes not the slightest difference to performance \n> that I could measure.\n> \n> Chris\n> \n\nNo point? I had it at 64 if memory serves and logs were warning me that \nraising this value would be desired because of excessive IO brought upon \nfrom the logs being filled far too often.\n\nIt would seem to me that 8 is a bit low in at least a few circumstances.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n",
"msg_date": "Sun, 08 Aug 2004 05:14:32 GMT",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Jan Wieck wrote:\n\n> On 8/3/2004 2:05 PM, Martin Foster wrote:\n> \n>> I run a Perl/CGI driven website that makes extensive use of PostgreSQL \n>> (7.4.3) for everything from user information to formatting and display \n>> of specific sections of the site. The server itself, is a dual \n>> processor AMD Opteron 1.4Ghz w/ 2GB Ram and 2 x 120GB hard drives \n>> mirrored for redundancy running under FreeBSD 5.2.1 (AMD64).\n>>\n>> Recently loads on the site have increased during peak hours to the \n>> point of showing considerable loss in performance. This can be \n>> observed when connections move from the 120 concurrent connections to \n>> PostgreSQL to roughly 175 or more. Essentially, the machine seems \n>> to struggle to keep up with continual requests and slows down \n>> respectively as resources are tied down.\n> \n> \n> Have you taken a look at pgpool? I know, it sounds silly to *reduce* the \n> number of DB connections through a connection pool, but it can help.\n> \n> \n> Jan\n> \n\nI am currently making use of Apache::DBI which overrides the \nDBI::disconnect call and keeps a pool of active connections for use when \nneed be. Since it offloads the pooling to the webserver, it seems more \nadvantageous then pgpool which while being able to run on a external \nsystem is not adding another layer of complexity.\n\nAnyone had any experience with both Apache::DBI and pgpool? For my \nneeds they seem to do essentially the same thing, simply that one is \ninvisible to the code while the other requires adding the complexity of \na proxy.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Sun, 08 Aug 2004 01:29:16 -0400",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
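For readers unfamiliar with it, Apache::DBI is switched on in the web server configuration rather than in the CGI scripts themselves, which is also why its "pool" is really one persistent connection per Apache child. The snippet below shows the usual way of loading it; the file names are the conventional ones, not taken from Martin's actual setup.

    # httpd.conf (mod_perl): must be loaded before any script does a "use DBI"
    PerlModule Apache::DBI

    # or equivalently, near the top of a startup.pl pulled in with PerlRequire:
    #   use Apache::DBI;
    #   use DBI;

With MaxClients set to a few hundred this still means a few hundred backends, which is the distinction Jeff and Jan draw further down the thread when comparing it with pgpool.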
{
"msg_contents": "On Fri, 2004-08-06 at 23:18 +0000, Martin Foster wrote:\n> Mike Benoit wrote:\n> \n> > On Wed, 2004-08-04 at 17:25 +0200, Gaetano Mendola wrote:\n> > \n> > \n> >>>The queries themselves are simple, normally drawing information from one \n> >>>table with few conditions or in the most complex cases using joins on \n> >>>two table or sub queries. These behave very well and always have, the \n> >>>problem is that these queries take place in rather large amounts due to \n> >>>the dumb nature of the scripts themselves.\n> >>\n> >>Show us the explain analyze on that queries, how many rows the tables are\n> >>containing, the table schema could be also usefull.\n> >>\n> > \n> > \n> > If the queries themselves are optimized as much as they can be, and as\n> > you say, its just the sheer amount of similar queries hitting the\n> > database, you could try using prepared queries for ones that are most\n> > often executed to eliminate some of the overhead. \n> > \n> > I've had relatively good success with this in the past, and it doesn't\n> > take very much code modification.\n> > \n> \n> One of the biggest problems is most probably related to the indexes. \n> Since the performance penalty of logging the information needed to see \n> which queries are used and which are not is a slight problem, then I \n> cannot really make use of it for now.\n> \n> However, I am curious how one would go about preparing query? Is this \n> similar to the DBI::Prepare statement with placeholders and simply \n> changing the values passed on execute? Or is this something database \n> level such as a view et cetera?\n> \n\nYes, always optimize your queries and GUC settings first and foremost.\nThats where you are likely to gain the most performance. After that if\nyou still want to push things even further I would try prepared queries.\nI'm not familiar with DBI::Prepare at all, but I don't think its what\nyour looking for.\n\nThis is what you want:\nhttp://www.postgresql.org/docs/current/static/sql-prepare.html\n\n\n-- \nMike Benoit <[email protected]>\n\n",
"msg_date": "Sun, 08 Aug 2004 01:21:17 -0700",
"msg_from": "Mike Benoit <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "\nOn Aug 8, 2004, at 1:29 AM, Martin Foster wrote:\n\n> I am currently making use of Apache::DBI which overrides the \n> DBI::disconnect call and keeps a pool of active connections for use \n> when need be. Since it offloads the pooling to the webserver, it \n> seems more advantageous then pgpool which while being able to run on a \n> external system is not adding another layer of complexity.\n>\n\nApache::DBI is not the same sort of a pool as pgpool. DB connections \nare not shared among all your apache children (A common misconception). \n So if you have 300 apache kids you can have have 300 db connections. \nWith pgpool connections are shared among all of them so even though \nyou have 300 kids you only have say 32 db connections.\n\n> Anyone had any experience with both Apache::DBI and pgpool? For my \n> needs they seem to do essentially the same thing, simply that one is \n> invisible to the code while the other requires adding the complexity \n> of a proxy.\n>\n\nBoth are invisible to the app. (With pgpool it thinks it is connecting \nto a regular old PG server)\n\nAnd I've been running pgpool in production for months. It just sits \nthere. Doesn't take much to set it up or configure it. Works like a \nchamp\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Sun, 8 Aug 2004 08:10:24 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
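To give an idea of what "doesn't take much to set it up" means in practice, a minimal pgpool configuration is on the order of the snippet below. The parameter names are recalled from the pgpool 2.x sample file and should be checked against the pgpool.conf.sample shipped with whatever version is installed; the numbers are purely illustrative.

    # pgpool.conf (sketch)
    port = 9999                 # the scripts connect here instead of 5432
    backend_host_name = ''      # empty string = local PostgreSQL via UNIX socket
    backend_port = 5432
    num_init_children = 64      # pre-forked pgpool processes = max concurrent clients
    max_pool = 4                # cached backend connections per pgpool process

num_init_children is the value that matters for Martin's situation further down: it caps how many clients pgpool will serve at once, so it has to be sized against Apache's MaxClients rather than left at the default of 32.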
{
"msg_contents": "On 8/8/2004 8:10 AM, Jeff wrote:\n\n> On Aug 8, 2004, at 1:29 AM, Martin Foster wrote:\n> \n>> I am currently making use of Apache::DBI which overrides the \n>> DBI::disconnect call and keeps a pool of active connections for use \n>> when need be. Since it offloads the pooling to the webserver, it \n>> seems more advantageous then pgpool which while being able to run on a \n>> external system is not adding another layer of complexity.\n>>\n> \n> Apache::DBI is not the same sort of a pool as pgpool. DB connections \n> are not shared among all your apache children (A common misconception). \n> So if you have 300 apache kids you can have have 300 db connections. \n> With pgpool connections are shared among all of them so even though \n> you have 300 kids you only have say 32 db connections.\n\nAnd this is exactly where the pgpool advantage lies. Especially with the \nTPC-W, the Apache is serving a mix of PHP (or whatever CGI technique is \nused) and static content like images. Since the 200+ Apache kids serve \nany of that content by random and the emulated browsers very much \nencourage it to ramp up MaxClients children by using up to 4 concurrent \nimage connections, one does end up with MaxClients DB connections that \nare all relatively low frequently used. In contrast to that the real \npgpool causes lesser, more active DB connections, which is better for \nperformance.\n\n\n> \n>> Anyone had any experience with both Apache::DBI and pgpool? For my \n>> needs they seem to do essentially the same thing, simply that one is \n>> invisible to the code while the other requires adding the complexity \n>> of a proxy.\n>>\n> \n> Both are invisible to the app. (With pgpool it thinks it is connecting \n> to a regular old PG server)\n> \n> And I've been running pgpool in production for months. It just sits \n> there. Doesn't take much to set it up or configure it. Works like a \n> champ\n\nAnd it buys you some extra admin feature people like to forget about it. \nOne can shut down one pool for one web application only. That gives you \ninstant single user access to one database without shutting down the \nwhole webserver or tempering with the pg_hba.conf file.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Sun, 08 Aug 2004 09:52:01 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "> And this is exactly where the pgpool advantage lies. \n> Especially with the \n> TPC-W, the Apache is serving a mix of PHP (or whatever CGI \n> technique is \n> used) and static content like images. Since the 200+ Apache \n> kids serve \n> any of that content by random and the emulated browsers very much \n> encourage it to ramp up MaxClients children by using up to 4 \n> concurrent \n> image connections, one does end up with MaxClients DB \n> connections that \n> are all relatively low frequently used. In contrast to that the real \n> pgpool causes lesser, more active DB connections, which is better for \n> performance.\n\nThere are two well-worn and very mature techniques for dealing with the\nissue of web apps using one DB connection per apache process, both of which\nwork extremely well and attack the issue at its source.\n\n1)\tUse a front-end caching proxy like Squid as an accelerator. Static\ncontent will be served by the accelerator 99% of the time. Additionally,\nlarge pages can be served immediately to the accelerator by Apache, which\ncan then go on to serve another request without waiting for the end user's\ndial-up connection to pull the data down. Massive speedup, fewer apache\nprocesses needed.\n\n2)\tServe static content off an entirely separate apache server than the\ndynamic content, but by using separate domains (e.g. 'static.foo.com').\n\nPersonally I favour number 1. Our last biggish peak saw 6000 open HTTP and\nHTTPS connections and only 200 apache children, all of them nice and busy,\nnot hanging around on street corners looking bored. During quiet times\nApache drops back to its configured minimum of 40 kids. Option 2 has the\nadvantage that you can use a leaner build for the 'dynamic' apache server,\nbut with RAM so plentiful these days that's a less useful property.\n\nBasically this puts the 'pooling' back in the stateless HTTP area where it\ntruly belongs and can be proven not to have any peculiar side effects\n(especially when it comes to transaction safety). Even better, so long as\nyou use URL parameters for searches and the like, you can have the\naccelerator cache those pages for a certain time too so long as slightly\nstale results are OK.\n\nI'm sure pgpool and the like have their place, but being band-aids for\npoorly configured websites probably isn't the best use for them.\n\nM\n\n",
"msg_date": "Sun, 8 Aug 2004 15:29:39 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
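Matt's first option (Squid as an accelerator in front of Apache) usually boils down to a handful of squid.conf directives plus moving Apache to another port. The directive names below are the Squid 2.5-era httpd_accel_* ones from memory, so treat them as an assumption to verify against the local squid.conf documentation rather than a drop-in configuration.

    # squid.conf (sketch): Squid listens on :80, Apache moves to 127.0.0.1:8080
    http_port 80
    httpd_accel_host 127.0.0.1
    httpd_accel_port 8080
    httpd_accel_single_host on
    httpd_accel_with_proxy off        # act as a pure accelerator, not a general proxy
    httpd_accel_uses_host_header on   # needed if several virtual hosts are served

Static objects and cacheable pages are then answered by Squid directly, and slow dial-up clients tie up a lightweight Squid connection rather than a heavyweight Apache child, which is the effect Matt describes.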
{
"msg_contents": "Jeff wrote:\n> \n> On Aug 8, 2004, at 1:29 AM, Martin Foster wrote:\n> \n>> I am currently making use of Apache::DBI which overrides the \n>> DBI::disconnect call and keeps a pool of active connections for use \n>> when need be. Since it offloads the pooling to the webserver, it \n>> seems more advantageous then pgpool which while being able to run on a \n>> external system is not adding another layer of complexity.\n>>\n> \n> Apache::DBI is not the same sort of a pool as pgpool. DB connections \n> are not shared among all your apache children (A common misconception). \n> So if you have 300 apache kids you can have have 300 db connections. \n> With pgpool connections are shared among all of them so even though you \n> have 300 kids you only have say 32 db connections.\n> \n\nSeems that you are right, never noticed that from the documentation \nbefore. I always assumed it had something to do with the long \nlasting/persistent scripts that would remain in transactions for \nextended periods of time.\n\nHere is an odd question. While the server run 7.4.x, the client \nconnects with 7.3.x. Would this in itself make a difference in \nperformance as the protocols are different? At least based from \npgpool's documentation.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n\n\n",
"msg_date": "Sun, 08 Aug 2004 11:49:10 -0400",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "On 8-8-2004 16:29, Matt Clark wrote:\n> There are two well-worn and very mature techniques for dealing with the\n> issue of web apps using one DB connection per apache process, both of which\n> work extremely well and attack the issue at its source.\n> \n> 1)\tUse a front-end caching proxy like Squid as an accelerator. Static\n> content will be served by the accelerator 99% of the time. Additionally,\n> large pages can be served immediately to the accelerator by Apache, which\n> can then go on to serve another request without waiting for the end user's\n> dial-up connection to pull the data down. Massive speedup, fewer apache\n> processes needed.\n\nAnother version of this 1) is to run with a \"content accelerator\"; our \n\"favourite\" is to run Tux in front of Apache. It takes over the \nconnection-handling stuff, has a very low memoryprofile (compared to \nApache) and very little overhead. What it does, is to serve up all \n\"simple\" content (although you can have cgi/php/perl and other languages \nbeing processed by it, entirely disabling the need for apache in some \ncases) and forwards/proxies everything it doesn't understand to an \nApache/other webserver running at the same machine (which runs on \nanother port).\n\nI think there are a few advantages over Squid; since it is partially \ndone in kernel-space it can be slightly faster in serving up content, \napart from its simplicity which will probably matter even more. You'll \nhave no caching issues for pages that should not be cached or static \nfiles that change periodically (like every few seconds). Afaik Tux can \nhandle more than 10 times as much ab-generated requests per second than \na default-compiled Apache on the same machine.\nAnd besides the speed-up, you can do any request you where able to do \nbefore, since Tux will simply forward it to Apache if it didn't \nunderstand it.\n\nAnyway, apart from all that. Reducing the amount of apache-connections \nis nice, but not really the same as reducing the amount of \npooled-connections using a db-pool... You may even be able to run with \n1000 http-connections, 40 apache-processes and 10 db-connections. In \ncase of the non-pooled setup, you'd still have 40 db-connections.\n\nIn a simple test I did, I did feel pgpool had quite some overhead \nthough. So it should be well tested, to find out where the \nturnover-point is where it will be a gain instead of a loss...\n\nBest regards,\n\nArjen van der Meijden\n\n",
"msg_date": "Sun, 08 Aug 2004 18:02:32 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "Arjen van der Meijden wrote:\n\n> On 8-8-2004 16:29, Matt Clark wrote:\n> \n>> There are two well-worn and very mature techniques for dealing with the\n>> issue of web apps using one DB connection per apache process, both of \n>> which\n>> work extremely well and attack the issue at its source.\n>>\n>> 1) Use a front-end caching proxy like Squid as an accelerator. Static\n>> content will be served by the accelerator 99% of the time. Additionally,\n>> large pages can be served immediately to the accelerator by Apache, which\n>> can then go on to serve another request without waiting for the end \n>> user's\n>> dial-up connection to pull the data down. Massive speedup, fewer apache\n>> processes needed.\n> \n> \n> Another version of this 1) is to run with a \"content accelerator\"; our \n> \"favourite\" is to run Tux in front of Apache. It takes over the \n> connection-handling stuff, has a very low memoryprofile (compared to \n> Apache) and very little overhead. What it does, is to serve up all \n> \"simple\" content (although you can have cgi/php/perl and other languages \n> being processed by it, entirely disabling the need for apache in some \n> cases) and forwards/proxies everything it doesn't understand to an \n> Apache/other webserver running at the same machine (which runs on \n> another port).\n> \n> I think there are a few advantages over Squid; since it is partially \n> done in kernel-space it can be slightly faster in serving up content, \n> apart from its simplicity which will probably matter even more. You'll \n> have no caching issues for pages that should not be cached or static \n> files that change periodically (like every few seconds). Afaik Tux can \n> handle more than 10 times as much ab-generated requests per second than \n> a default-compiled Apache on the same machine.\n> And besides the speed-up, you can do any request you where able to do \n> before, since Tux will simply forward it to Apache if it didn't \n> understand it.\n> \n> Anyway, apart from all that. Reducing the amount of apache-connections \n> is nice, but not really the same as reducing the amount of \n> pooled-connections using a db-pool... You may even be able to run with \n> 1000 http-connections, 40 apache-processes and 10 db-connections. In \n> case of the non-pooled setup, you'd still have 40 db-connections.\n> \n> In a simple test I did, I did feel pgpool had quite some overhead \n> though. So it should be well tested, to find out where the \n> turnover-point is where it will be a gain instead of a loss...\n> \n> Best regards,\n> \n> Arjen van der Meijden\n> \n\nOther then images, there are very few static pages being loaded up by \nthe user. Since they make up a very small portion of the traffic, it \ntends to be an optimization we can forgo for now.\n\nI attempted to make use of pgpool. At the default 32 connections \npre-forked the webserver almost immediately tapped out the pgpool base \nand content stopped being served because no new processes were being \nforked to make up for it.\n\nSo I raised it to a higher value (256) and it immediately segfaulted and \ndropped the core. So not sure exactly how to proceed, since I rather \nneed the thing to fork additional servers as load hits and not the other \nway around.\n\nUnless I had it configured oddly, but it seems work differently then an \nApache server would to handle content.\n\n\tMartin Foster\n\tCreator/Designer Ethereal Realms\n\[email protected]\n",
"msg_date": "Sun, 08 Aug 2004 18:18:15 GMT",
"msg_from": "Martin Foster <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "> Jeff wrote:\n> > \n> > On Aug 8, 2004, at 1:29 AM, Martin Foster wrote:\n> > \n> >> I am currently making use of Apache::DBI which overrides the \n> >> DBI::disconnect call and keeps a pool of active connections for use \n> >> when need be. Since it offloads the pooling to the webserver, it \n> >> seems more advantageous then pgpool which while being able to run on a \n> >> external system is not adding another layer of complexity.\n> >>\n> > \n> > Apache::DBI is not the same sort of a pool as pgpool. DB connections \n> > are not shared among all your apache children (A common misconception). \n> > So if you have 300 apache kids you can have have 300 db connections. \n> > With pgpool connections are shared among all of them so even though you \n> > have 300 kids you only have say 32 db connections.\n> > \n> \n> Seems that you are right, never noticed that from the documentation \n> before. I always assumed it had something to do with the long \n> lasting/persistent scripts that would remain in transactions for \n> extended periods of time.\n> \n> Here is an odd question. While the server run 7.4.x, the client \n> connects with 7.3.x. Would this in itself make a difference in \n> performance as the protocols are different? At least based from \n> pgpool's documentation.\n\nIn this case the server fall back from V3 protocol (employed in 7.4 or\nlater) to V2 protocol (employed in from 6.4 to 7.3.x). As far as\npgpool concerning, performance difference is significant. Of course\nthat depends on the implementation though.\n\nFYI here is the outline of the testing using pgbench.\n\nH/W: Pentium4 2.4GHz x2/memory 1GB/HDD IDE 80GB (all PCs are same spec)\nS/W: RedHat Linux 9/PostgreSQL 7.3.6/7.4.3\n\npostgresql.conf:\ntcpip_socket = true\nmax_connections = 512\nshared_buffers = 2048\n\nhost A: pgbench, host B: pgpool, host C: PostgreSQL 7.3.6 or 7.4.3\npgbench parameters: -S -c 10 -t 1000\n\nresult:\n\t\t\t\t\t\tTPS\t\tratio(7.4.3)\t\tratio(7.3.6)\n----------------------------------------------------------------------------------------------------\nwithout pgpool\t\t\t\t\t4357.625059\t100%\t\t\t100%\nwith pgpool(connection pool mode)\t\t4330.290294\t99.4%\t\t\t94.1%\nwith pgpool(replication mode)\t\t\t4297.614996\t98.6%\t\t\t87.6%\nwith pgpoo(replication with strictmode)\t\t4270.223136\t98.0%\t\t\t81.5%\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 09 Aug 2004 10:09:23 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "> Arjen van der Meijden wrote:\n> \n> > On 8-8-2004 16:29, Matt Clark wrote:\n> > \n> >> There are two well-worn and very mature techniques for dealing with the\n> >> issue of web apps using one DB connection per apache process, both of \n> >> which\n> >> work extremely well and attack the issue at its source.\n> >>\n> >> 1) Use a front-end caching proxy like Squid as an accelerator. Static\n> >> content will be served by the accelerator 99% of the time. Additionally,\n> >> large pages can be served immediately to the accelerator by Apache, which\n> >> can then go on to serve another request without waiting for the end \n> >> user's\n> >> dial-up connection to pull the data down. Massive speedup, fewer apache\n> >> processes needed.\n> > \n> > \n> > Another version of this 1) is to run with a \"content accelerator\"; our \n> > \"favourite\" is to run Tux in front of Apache. It takes over the \n> > connection-handling stuff, has a very low memoryprofile (compared to \n> > Apache) and very little overhead. What it does, is to serve up all \n> > \"simple\" content (although you can have cgi/php/perl and other languages \n> > being processed by it, entirely disabling the need for apache in some \n> > cases) and forwards/proxies everything it doesn't understand to an \n> > Apache/other webserver running at the same machine (which runs on \n> > another port).\n> > \n> > I think there are a few advantages over Squid; since it is partially \n> > done in kernel-space it can be slightly faster in serving up content, \n> > apart from its simplicity which will probably matter even more. You'll \n> > have no caching issues for pages that should not be cached or static \n> > files that change periodically (like every few seconds). Afaik Tux can \n> > handle more than 10 times as much ab-generated requests per second than \n> > a default-compiled Apache on the same machine.\n> > And besides the speed-up, you can do any request you where able to do \n> > before, since Tux will simply forward it to Apache if it didn't \n> > understand it.\n> > \n> > Anyway, apart from all that. Reducing the amount of apache-connections \n> > is nice, but not really the same as reducing the amount of \n> > pooled-connections using a db-pool... You may even be able to run with \n> > 1000 http-connections, 40 apache-processes and 10 db-connections. In \n> > case of the non-pooled setup, you'd still have 40 db-connections.\n> > \n> > In a simple test I did, I did feel pgpool had quite some overhead \n> > though. So it should be well tested, to find out where the \n> > turnover-point is where it will be a gain instead of a loss...\n\nI don't know what were the configurations you are using, but I noticed\nthat UNIX domain sockets are preferred for the connection bwteen\nclients and pgpool. When I tested using pgbench -C (involving\nconnection estblishing for each transaction),\nwith-pgpool-configuration 10 times faster than without-pgpool-conf if\nusing UNIX domain sockets, while there is only 3.6 times speed up with\nTCP/IP sockets.\n\n> > Best regards,\n> > \n> > Arjen van der Meijden\n> > \n> \n> Other then images, there are very few static pages being loaded up by \n> the user. Since they make up a very small portion of the traffic, it \n> tends to be an optimization we can forgo for now.\n> \n> I attempted to make use of pgpool. 
At the default 32 connections \n> pre-forked the webserver almost immediately tapped out the pgpool base \n> and content stopped being served because no new processes were being \n> forked to make up for it.\n> \n> So I raised it to a higher value (256) and it immediately segfaulted and \n> dropped the core. So not sure exactly how to proceed, since I rather \n> need the thing to fork additional servers as load hits and not the other \n> way around.\n\nWhat version of pgpool did you test? I know that certain version\n(actually 2.0.2) had such that problem. Can you try again with the\nlatest verison of pgpool? (it's 2.0.6).\n--\nTatsuo Ishii\n",
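Whether a client goes over a Unix domain socket or over TCP/IP is decided purely by the host parameter, so the difference Tatsuo measured is easy to try from psql. The port, socket directory and database name below are assumptions (9999 and /tmp are pgpool's usual defaults):

psql -p 9999 -h /tmp bench          # a directory as "host" selects the Unix domain socket
psql -p 9999 -h 127.0.0.1 bench     # TCP/IP to the same pgpool instance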
"msg_date": "Mon, 09 Aug 2004 10:12:07 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "On Sun, 8 Aug 2004, Matt Clark wrote:\n\n> > And this is exactly where the pgpool advantage lies.\n> > Especially with the\n> > TPC-W, the Apache is serving a mix of PHP (or whatever CGI\n> > technique is\n> > used) and static content like images. Since the 200+ Apache\n> > kids serve\n> > any of that content by random and the emulated browsers very much\n> > encourage it to ramp up MaxClients children by using up to 4\n> > concurrent\n> > image connections, one does end up with MaxClients DB\n> > connections that\n> > are all relatively low frequently used. In contrast to that the real\n> > pgpool causes lesser, more active DB connections, which is better for\n> > performance.\n>\n> There are two well-worn and very mature techniques for dealing with the\n> issue of web apps using one DB connection per apache process, both of which\n> work extremely well and attack the issue at its source.\n>\n> 1)\tUse a front-end caching proxy like Squid as an accelerator. Static\n> content will be served by the accelerator 99% of the time. Additionally,\n> large pages can be served immediately to the accelerator by Apache, which\n> can then go on to serve another request without waiting for the end user's\n> dial-up connection to pull the data down. Massive speedup, fewer apache\n> processes needed.\n\nSquid also takes away the work of doing SSL (presuming you're running it\non a different machine). Unfortunately it doesn't support HTTP/1.1 which\nmeans that most generated pages (those that don't set Content-length) end\nup forcing squid to close and then reopen the connection to the web\nserver.\n\nBecause you no longer need to worry about keeping Apache processes around\nto dribble data to people on the wrong end of modems you can reduce\nMaxClients quite a bit (to, say, 10 or 20 per web server). This keeps the\nnumber of PostgreSQL connections down. I'd guess that above some point\nyou're going to reduce performance by increasing MaxClients and running\nqueries in parallel rather than queueing the request and doing them\nserially.\n\nI've also had some problems when Squid had a large number of connections\nopen (several thousand); though that may have been because of my\nhalf_closed_clients setting. Squid 3 coped a lot better when I tried it\n(quite a few months ago now - and using FreeBSD and the special kqueue\nsystem call) but crashed under some (admittedly synthetic) conditions.\n\n> I'm sure pgpool and the like have their place, but being band-aids for\n> poorly configured websites probably isn't the best use for them.\n\nYou still have periods of time when the web servers are busy using their\nCPUs to generate HTML rather than waiting for database queries. This is\nespecially true if you cache a lot of data somewhere on the web servers\nthemselves (which, in my experience, reduces the database load a great\ndeal). If you REALLY need to reduce the number of connections (because you\nhave a large number of web servers doing a lot of computation, say) then\nit might still be useful.\n",
"msg_date": "Tue, 10 Aug 2004 15:35:31 +0100 (BST)",
"msg_from": "Alex Hayward <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
},
{
"msg_contents": "\n> Squid also takes away the work of doing SSL (presuming you're running it\n> on a different machine). Unfortunately it doesn't support HTTP/1.1 which\n> means that most generated pages (those that don't set Content-length) end\n> up forcing squid to close and then reopen the connection to the web\n> server.\n\nIt is true that it doesn't support http/1.1, but 'most generated pages'? \nUnless they are actually emitted progressively they should have a\nperfectly good content-length header.\n\n> I've also had some problems when Squid had a large number of connections\n> open (several thousand); though that may have been because of my\n> half_closed_clients setting. Squid 3 coped a lot better when I tried it\n> (quite a few months ago now - and using FreeBSD and the special kqueue\n> system call) but crashed under some (admittedly synthetic) conditions.\n\nIt runs out of the box with a very conservative setting for max open file\ndescriptors - this may or may not be the cause of the problems you have\nseen. Certainly I ran squid with >16,000 connections back in 1999...\n\n> You still have periods of time when the web servers are busy using their\n> CPUs to generate HTML rather than waiting for database queries. This is\n> especially true if you cache a lot of data somewhere on the web servers\n> themselves (which, in my experience, reduces the database load a great\n> deal). If you REALLY need to reduce the number of connections (because you\n> have a large number of web servers doing a lot of computation, say) then\n> it might still be useful.\n\nAha, a postgres related topic in this thread! What you say is very true,\nbut then given that the connection overhead is so vanishingly small, why\nnot simply run without a persistent DB connection in this case? I would\nmaintain that if your webservers are holding open idle DB connections for\nso long that it's a problem, then simply close the connections!\n\nM\n",
"msg_date": "Tue, 10 Aug 2004 21:04:56 +0100 (BST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Performance Bottleneck"
}
] |
[
{
"msg_contents": "Hello,\n\nmy web application grows slower and slower over time. After some \nprofiling I came to the conclusion that my SQL queries are the biggest \ntime spenders (25 seconds). Obviously I need to optimise my queries and \nmaybe introduce some new indexes.\n\nThe problem is, that my application uses dynamic queries. I therefor can \nnot determine what are the most common queries.\n\nI have used the postgresql logging ption before. Is there a tool to \nanalyze the logfile for the most common and/or most time consuming queries?\n\nTIA\n\nUlrich\n\n",
"msg_date": "Wed, 04 Aug 2004 14:00:39 +0200",
"msg_from": "Ulrich Wisser <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to know which queries are to be optimised?"
},
{
"msg_contents": "On Wed, Aug 04, 2004 at 14:00:39 +0200,\n Ulrich Wisser <[email protected]> wrote:\n\nThis topic really belongs on the performance list. I have copied that\nlist and set followups to go there and copy you.\n\n> \n> my web application grows slower and slower over time. After some \n> profiling I came to the conclusion that my SQL queries are the biggest \n> time spenders (25 seconds). Obviously I need to optimise my queries and \n> maybe introduce some new indexes.\n\nThis sounds like you aren't doing proper maintainance. You need to be\nvacuuming with a large enough FSM setting.\n\n> The problem is, that my application uses dynamic queries. I therefor can \n> not determine what are the most common queries.\n> \n> I have used the postgresql logging ption before. Is there a tool to \n> analyze the logfile for the most common and/or most time consuming queries?\n\nYou can log queries that run for at least a specified amount of time.\nThis will be useful in finding what the long running queries are.\nYou can then use explain analyse to see why they are long running.\n",
"msg_date": "Wed, 11 Aug 2004 11:27:01 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to know which queries are to be optimised?"
},
{
"msg_contents": "Hello,\n\nMy linux admin left the job. We had a PostgreSQL\ninstalled under his username. He used to maintain it.\nNow I am looking at the Linux box and I am just a\nsuper duper newbie in Linux administration.\n\nThe previosu admin had a database created under his\nname coz PostgreSQL dosnt allow root database.\n\nNow I want to get the data back and use it? I dont\nmind if I have to use a different DB?\n\nI dont even know where he isntalled the PostgreSQL\nbinaries and data?\n\nWhere can I all this information?\n\nI am feeling really stupid but thank GOD we dont have\nany live databases running?\n\nRegards,\nKaram\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail - Helps protect you from nasty viruses.\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Wed, 11 Aug 2004 11:41:43 -0700 (PDT)",
"msg_from": "Karam Chand <[email protected]>",
"msg_from_op": false,
"msg_subject": "My admin left the job and I am stuck"
},
{
"msg_contents": "Hi Bruno,\n\n>>my web application grows slower and slower over time. After some \n>>profiling I came to the conclusion that my SQL queries are the biggest \n>>time spenders (25 seconds). Obviously I need to optimise my queries and \n>>maybe introduce some new indexes.\n> \n> This sounds like you aren't doing proper maintainance. You need to be\n> vacuuming with a large enough FSM setting.\n\nI do a vacuum full analyze every night.\nHow can I see if my FSM setting is appropriate?\n\n>>The problem is, that my application uses dynamic queries. I therefor can \n>>not determine what are the most common queries.\n>>\n>>I have used the postgresql logging ption before. Is there a tool to \n>>analyze the logfile for the most common and/or most time consuming queries?\n> \n> \n> You can log queries that run for at least a specified amount of time.\n> This will be useful in finding what the long running queries are.\n> You can then use explain analyse to see why they are long running.\n\nBut is there a tool that could compile a summary out of the log? The log \ngrows awefully big after a short time.\n\nThanks\n\n/Ulrich\n\n\n\n\n\n",
"msg_date": "Thu, 12 Aug 2004 12:37:44 +0200",
"msg_from": "Ulrich Wisser <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] How to know which queries are to be optimised?"
},
{
"msg_contents": "Ulrich Wisser wrote:\n>> You can log queries that run for at least a specified amount of time.\n>> This will be useful in finding what the long running queries are.\n>> You can then use explain analyse to see why they are long running.\n> \n> But is there a tool that could compile a summary out of the log? The log \n> grows awefully big after a short time.\n\nYou might want to look at the \"Practical Query Analyser\" - haven't used \nit myself yet, but it seems a sensible idea.\n\nhttp://pqa.projects.postgresql.org/\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 12 Aug 2004 12:16:20 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] How to know which queries are to be optimised?"
},
{
"msg_contents": "> I do a vacuum full analyze every night.\n> How can I see if my FSM setting is appropriate?\n\nOn a busy website, run vacuum analyze once an hour, or even better, use\ncontrib/pg_autovacuum\n\nChris\n\n\n",
"msg_date": "Thu, 12 Aug 2004 22:07:45 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] How to know which queries are to be optimised?"
},
{
"msg_contents": "> But is there a tool that could compile a summary out of the log? The log\n> grows awefully big after a short time.\n\nActually, yes there is. Check out www.pgfoundry.org. I think it's called\npqa or postgres query analyzer or somethign.\n\nChris\n\n",
"msg_date": "Thu, 12 Aug 2004 22:09:16 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] How to know which queries are to be optimised?"
},
{
"msg_contents": "All of the resources you need are at http://www.posgresql.org/\n\nBut, a quick note on how to connect to the database, you don't need to \nreally know where it's installed, just that it's running and accepting \nconnections. Under Linux \"netstat -tapn\" will show you all of the open \nTCP ports and what's listening to each. For Postgres, the default port \nis 5432 and the process is called postmaster. You may need to be logged \nin as root to see this.\n\nFirst thing is that knowing SQL basics is pretty much required before \nyou can really investigate what he had set up.\nSecond thing is to connect by using the following command: psql\nWithout any arguments, it will connect to the local machine and to a \ndatabase named for the current user. Once in psql, type \\d to see a \nlist of the user-defined tables. To see a list of databases type \\l and \nyou will be shown a list of databases.\n\n From there, you can explore until your heart's content.\n\nEverything is SQL compliant, so if you know SQL, you shouldn't have any \nproblems.\n\nAs for Linux admin, the most important thing to remember is that \neverything is case sensitive. ls != LS\n\nIf you're coming from the MS world, take some time to really learn \nLinux... I'm sure you'll like it and eventually you'll prefer it. I use \nLinux as my only OS at home, and I finally have a job where I use Linux \nat work (they adopted it after I submitted a proposal).\n\nhope this helps!\nLaura\n\nKaram Chand wrote:\n\n>Hello,\n>\n>My linux admin left the job. We had a PostgreSQL\n>installed under his username. He used to maintain it.\n>Now I am looking at the Linux box and I am just a\n>super duper newbie in Linux administration.\n>\n>The previosu admin had a database created under his\n>name coz PostgreSQL dosnt allow root database.\n>\n>Now I want to get the data back and use it? I dont\n>mind if I have to use a different DB?\n>\n>I dont even know where he isntalled the PostgreSQL\n>binaries and data?\n>\n>Where can I all this information?\n>\n>I am feeling really stupid but thank GOD we dont have\n>any live databases running?\n>\n>Regards,\n>Karam\n> \n>\n-- \nThanks,\nLaura Vance\nSystems Engineer\nWinfree Academy Charter Schools\n6221 Riverside Dr. Ste 110\nIrving, Tx 75039\nWeb: www.winfreeacademy.com\n\n\n",
"msg_date": "Thu, 12 Aug 2004 13:02:48 -0500",
"msg_from": "Laura Vance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: My admin left the job and I am stuck"
},
{
"msg_contents": "Hi,\n\n>> But is there a tool that could compile a summary out of the log? The \n>> log grows awefully big after a short time.\n\nThere's also pg_analyzer to check out.\n\nhttp://www.samse.fr/GPL/pg_analyzer/\n\nSome of it's features are: written in Perl and produces HTML output.\n\n> You might want to look at the \"Practical Query Analyser\" - haven't used \n> it myself yet, but it seems a sensible idea.\n> \n> http://pqa.projects.postgresql.org/\n\n\nCheers,\nRudi.\n",
"msg_date": "Fri, 13 Aug 2004 08:50:17 +1000",
"msg_from": "Rudi Starcevic <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] How to know which queries are to be optimised?"
}
] |
[
{
"msg_contents": "Hi,\n\nI have some problem of performance on a PG database, and I don't\nknow how to improve. I Have two questions : one about the storage\nof data, one about tuning queries. If possible !\n\nMy job is to compare Oracle and Postgres. All our operational databases\nhave been running under Oracle for about fifteen years. Now I try to replace\nOracle by Postgres.\n\nI have a test platform under linux (Dell server, 4 Gb RAM, bi-processor,\nLinux Red Hat 9 (2.4.20-31.9)) with 2 databases, 1 with Oracle\n(V8i or V9i it's quite the same), 1 with PG (7.4.2). Both databases\nhave the same structure, same content, about 100 Gb each. I developped\nsome benches, representative of our use of databases. My problem\nis that I have tables (relations) with more than 100 millions rows,\nand each row has about 160 fields and an average size 256 bytes.\n\nFor Oracle I have a SGA size of 500 Mb.\nFor PG I have a postgresql.conf as :\n\tmax_connections = 1500\n\tshared_buffers = 30000\n\tsort_mem = 50000\n\teffective_cache_size = 200000\nand default value for other parameters.\n\nI have a table named \"data\" which looks like this :\nbench=> \\d data\n Table \"public.data\"\n Column | Type | Modifiers \n------------+-----------------------------+-----------\n num_poste | numeric(9,0) | not null\n dat | timestamp without time zone | not null\n datrecu | timestamp without time zone | not null\n rr1 | numeric(5,1) | \n qrr1 | numeric(2,0) | ...\n ... all numeric fields\n ...\n Indexes:\n \"pk_data\" primary key, btree (num_poste, dat)\n \"i_data_dat\" btree (dat)\n\nIt contains 1000 different values of \"num_poste\" and for each one \n125000 different values of \"dat\" (1 row per hour, 15 years). \n\nI run a vacuum analyze of the table.\n\nbench=> select * from tailledb ;\n schema | relfilenode | table | index | reltuples | size \n--------+-------------+------------------+------------+-------------+----------\n public | 125615917 | data | | 1.25113e+08 | 72312040\n public | 251139049 | data | i_data_dat | 1.25113e+08 | 2744400\n public | 250870177 | data | pk_data | 1.25113e+08 | 4395480\n\nMy first remark is that the table takes a lot of place on disk, about\n70 Gb, instead of 35 Gb with oracle.\n125 000 000 rows x 256 b = about 32 Gb. This calculation gives an idea\nnot so bad for oracle. What about for PG ? How data is stored ?\n\n\nThe different queries of the bench are \"simple\" queries (no join,\nsub-query, ...) and are using indexes (I \"explained\" each one to\nbe sure) :\nQ1 select_court : access to about 700 rows : 1 \"num_poste\" and 1 month\n\t(using PK : num_poste=p1 and dat between p2 and p3)\nQ2 select_moy : access to about 7000 rows : 10 \"num_poste\" and 1 month\n\t(using PK : num_poste between p1 and p1+10 and dat between p2 and p3)\nQ3 select_long : about 250 000 rows : 2 \"num_poste\" \n\t(using PK : num_poste in (p1,p1+2))\nQ4 select_tres_long : about 3 millions rows : 25 \"num_poste\" \n\t(using PK : num_poste between p1 and p1 + 25)\n\nThe result is that for \"short queries\" (Q1 and Q2) it runs in a few\nseconds on both Oracle and PG. The difference becomes important with\nQ3 : 8 seconds with oracle\n 80 sec with PG\nand too much with Q4 : 28s with oracle\n 17m20s with PG !\n\nOf course when I run 100 or 1000 parallel queries such as Q3 or Q4, \nit becomes a disaster !\nI can't understand these results. The way to execute queries is the\nsame I think. 
I've read recommended articles on the PG site.\nI tried with a table containing 30 millions rows, results are similar.\n\nWhat can I do ?\n\nThanks for your help ! \n\n********************************************************************\n* Les points de vue exprimes sont strictement personnels et *\n* n'engagent pas la responsabilite de METEO-FRANCE. *\n********************************************************************\n* Valerie SCHNEIDER Tel : +33 (0)5 61 07 81 91 *\n* METEO-FRANCE / DSI/DEV Fax : +33 (0)5 61 07 81 09 *\n* 42, avenue G. Coriolis Email : [email protected] *\n* 31057 TOULOUSE Cedex - FRANCE http://www.meteo.fr *\n********************************************************************\n\n",
"msg_date": "Wed, 4 Aug 2004 12:44:43 +0000 (GMT)",
"msg_from": "Valerie Schneider DSI/DEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning queries on large database"
},
{
"msg_contents": "> \tsort_mem = 50000\n\nThat is way, way too large. Try more like 5000 or lower.\n\n> num_poste | numeric(9,0) | not null\n\nFor starters numerics are really, really slow compared to integers. Why\naren't you using an integer for this field since youhave '0' decimal\nplaces.\n\n> schema | relfilenode | table | index | reltuples | size\n> --------+-------------+------------------+------------+-------------+----------\n> public | 125615917 | data | | 1.25113e+08 | 72312040\n> public | 251139049 | data | i_data_dat | 1.25113e+08 | 2744400\n> public | 250870177 | data | pk_data | 1.25113e+08 | 4395480\n>\n> My first remark is that the table takes a lot of place on disk, about\n> 70 Gb, instead of 35 Gb with oracle.\n\nIntegers will take a lot less space than numerics.\n\n> The different queries of the bench are \"simple\" queries (no join,\n> sub-query, ...) and are using indexes (I \"explained\" each one to\n> be sure) :\n> Q1 select_court : access to about 700 rows : 1 \"num_poste\" and 1 month\n> \t(using PK : num_poste=p1 and dat between p2 and p3)\n> Q2 select_moy : access to about 7000 rows : 10 \"num_poste\" and 1 month\n> \t(using PK : num_poste between p1 and p1+10 and dat between p2 and p3)\n> Q3 select_long : about 250 000 rows : 2 \"num_poste\"\n> \t(using PK : num_poste in (p1,p1+2))\n> Q4 select_tres_long : about 3 millions rows : 25 \"num_poste\"\n> \t(using PK : num_poste between p1 and p1 + 25)\n>\n> The result is that for \"short queries\" (Q1 and Q2) it runs in a few\n> seconds on both Oracle and PG. The difference becomes important with\n> Q3 : 8 seconds with oracle\n> 80 sec with PG\n> and too much with Q4 : 28s with oracle\n> 17m20s with PG !\n>\n> Of course when I run 100 or 1000 parallel queries such as Q3 or Q4,\n> it becomes a disaster !\n\nPlease reply with the EXPLAIN ANALYZE output of these queries so we can\nhave some idea of how to help you.\n\nChris\n\n\n",
"msg_date": "Wed, 4 Aug 2004 21:21:51 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Tuning queries on large database"
},
{
"msg_contents": "On Wed, 2004-08-04 at 08:44, Valerie Schneider DSI/DEV wrote:\n> Hi,\n> \n> I have some problem of performance on a PG database, and I don't\n> know how to improve. I Have two questions : one about the storage\n> of data, one about tuning queries. If possible !\n> \n> My job is to compare Oracle and Postgres. All our operational databases\n> have been running under Oracle for about fifteen years. Now I try to replace\n> Oracle by Postgres.\n\nYou may assume some additional hardware may be required -- this would be\npurchased out of the Oracle License budget :)\n\n> My first remark is that the table takes a lot of place on disk, about\n> 70 Gb, instead of 35 Gb with oracle.\n> 125 000 000 rows x 256 b = about 32 Gb. This calculation gives an idea\n> not so bad for oracle. What about for PG ? How data is stored ?\n\nThis is due to the datatype you've selected. PostgreSQL does not convert\nNUMERIC into a more appropriate integer format behind the scenes, nor\nwill it use the faster routines for the math when it is an integer.\nCurrently it makes the assumption that if you've asked for numeric\nrather than integer or float that you are dealing with either large\nnumbers or require high precision math.\n\nChanging most of your columns to integer + Check constraint (where\nnecessary) will give you a large speed boost and reduce disk\nrequirements a little.\n\n> The different queries of the bench are \"simple\" queries (no join,\n> sub-query, ...) and are using indexes (I \"explained\" each one to\n> be sure) :\n\nCare to send us the EXPLAIN ANALYZE output for each of the 4 queries\nafter you've improved the datatype selection?\n\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\nPGP Key: http://www.rbt.ca/signature.asc",
"msg_date": "Wed, 04 Aug 2004 09:26:42 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning queries on large database"
},
{
"msg_contents": "Valerie Schneider DSI/DEV wrote:\n\n> Hi,\n> \n> I have some problem of performance on a PG database, and I don't\n> know how to improve. I Have two questions : one about the storage\n> of data, one about tuning queries. If possible !\n> \n> My job is to compare Oracle and Postgres. All our operational databases\n> have been running under Oracle for about fifteen years. Now I try to replace\n> Oracle by Postgres.\n\nShow us the explain analyze on your queries.\n\nRegards\nGaetano Mendola\n\n\n\n\n",
"msg_date": "Wed, 04 Aug 2004 17:34:56 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning queries on large database"
},
{
"msg_contents": "\n>> not so bad for oracle. What about for PG ? How data is stored\n\n\tI agree with the datatype issue. Smallint, bigint, integer... add a \nconstraint...\n\n\tAlso the way order of the records in the database is very important. As \nyou seem to have a very large static population in your table, you should \ninsert it, ordered by your favourite selection index (looks like it's \nposte).\n\n\tAlso, you have a lot of static data which pollutes your table. Why not \ncreate two tables, one for the current year, and one for all the past \nyears. Use a view to present a merged view.\n",
"msg_date": "Wed, 04 Aug 2004 17:50:56 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning queries on large database"
}
] |
[
{
"msg_contents": "Can anyone give a good reference site/book for getting the most out of \nyour postgres server.\n\nAll I can find is contradicting theories on how to work out your settings.\n\nThis is what I followed to setup our db server that serves our web \napplications.\n\nhttp://www.phpbuilder.com/columns/smith20010821.php3?page=2\n\nWe have a Dell Poweredge with the following spec.\n\nCPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\nCPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\nCPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\nCPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\nPhysical Memory: 2077264 kB\nSwap Memory: 2048244 kB\n\nApache on the Web server can take up to 300 connections and PHP is using \n pg_pconnect\n\nPostgres is set with the following.\n\nmax_connections = 300\nshared_buffers = 38400\nsort_mem = 12000\n\nBut Apache is still maxing out the non-super user connection limit.\n\nThe machine is under no load and I would like to up the max_connections \nbut I would like to know more about what you need to consider before \ndoing so.\n\nThe only other source I've found is this:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nBut following its method my postgres server locks up straight away as it \nrecommends setting max_connections to 16 for Web sites?\n\nIs there a scientific method for optimizing postgres or is it all \n'finger in the air' and trial and error.\n",
"msg_date": "Wed, 04 Aug 2004 13:45:55 +0100",
"msg_from": "Paul Serby <[email protected]>",
"msg_from_op": true,
"msg_subject": "The black art of postgresql.conf tweaking"
},
{
"msg_contents": "\nOn Aug 4, 2004, at 8:45 AM, Paul Serby wrote:\n>\n> Apache on the Web server can take up to 300 connections and PHP is \n> using pg_pconnect\n>\n> Postgres is set with the following.\n>\n> max_connections = 300\n> shared_buffers = 38400\n> sort_mem = 12000\n>\n> But Apache is still maxing out the non-super user connection limit.\n>\n\nDid you restart PG after making that change?\n(you need to restart, reload won't change max_connections)\n\nAlso, you're sort_mem is likely too high (That is the amount of memory \nthat can be used PER SORT) and you s hould back down on shared_buffers. \n(General consensus is don't go over 10k shared buffers)\n\nAnother thing you may want to try is using pgpool and regular \npg_connect - this way you only have a pool of say, 32 connections to \nthe DB that are shared among all apache instances. This gets rid of \nthe need to have hundreds of idle postgres' sitting around. \nConnecting to pgpool is very fast. We use it in production here and it \nworks wonderfully. And it is 100% transparent to your application.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Wed, 4 Aug 2004 09:02:20 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "Paul Serby wrote:\n\n> Apache on the Web server can take up to 300 connections and PHP is using \n> pg_pconnect\n\n> max_connections = 300\n> But Apache is still maxing out the non-super user connection limit.\n\nDon't forget also that some connections are reserved for superusers \n(usually 2), so if you want 300 users, you need to set max_connections \nto 300 + superuser_reserved_connections.\n\n-- \nMichal Taborsky\nhttp://www.taborsky.cz\n\n",
"msg_date": "Wed, 04 Aug 2004 15:10:55 +0200",
"msg_from": "Michal Taborsky <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "Am Mittwoch, 4. August 2004 14:45 schrieb Paul Serby:\n> Apache on the Web server can take up to 300 connections and PHP is using\n> pg_pconnect\n>\n> Postgres is set with the following.\n>\n> max_connections = 300\n> shared_buffers = 38400\n> sort_mem = 12000\n>\n> But Apache is still maxing out the non-super user connection limit.\n\nfor most websites 300 connections is far too much (imagine even 10 request per \nsecond for 10 hours a day ends up to 10.8 Mio pages a month)\n\nbut anyway: you should first focus on closing your http connection to the user \nas fast as possible. then you dont need so much concurrent connections which \nkeep db connections open and uses memory.\n\nI did the following:\n- apache: keepalive off\n- apache patch: lingerd (google for it)\n- apache mod_gzip\n- pg_pconnect\n\nthis keeps your http connection as short as possible, so the apache child is \nready to serve the next client. \n\nImagine 5 seconds of keepalive 1 second on lingering half-closed tcp \nconnections and 4 more seconds for transport of uncompressed content.\n\nin this scenario your apache child uses memory an your pooled db connection \nfor 10 seconds while doing nothing!\n\nin my experience apache in standard configuration can be the main bottleneck. \nand teh combination of keepalive off, lingerd and mod_gzip is GREAT and i \ndidn't found much sites propagating a configuration like this.\n\nkind regards,\njanning\n\np.s: sorry for being slightly off topic and talking about apache but when it \ncomes to performance it is always important to look at the complete system.\n",
"msg_date": "Wed, 4 Aug 2004 15:26:30 +0200",
"msg_from": "Janning Vygen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "\nOn 04/08/2004 13:45 Paul Serby wrote:\n> Can anyone give a good reference site/book for getting the most out of \n> your postgres server.\n> \n> All I can find is contradicting theories on how to work out your \n> settings.\n> \n> This is what I followed to setup our db server that serves our web \n> applications.\n> \n> http://www.phpbuilder.com/columns/smith20010821.php3?page=2\n> \n> We have a Dell Poweredge with the following spec.\n> \n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> Physical Memory: 2077264 kB\n> Swap Memory: 2048244 kB\n> \n> Apache on the Web server can take up to 300 connections and PHP is \n> using pg_pconnect\n> \n> Postgres is set with the following.\n> \n> max_connections = 300\n> shared_buffers = 38400\n\nMight be higher that neccessary. Some people reckon that there's no \nmeasurable performance going above ~10,000 buffers\n\n\n> sort_mem = 12000\n\nDo you really need 12MB of sort memory? Remember that this is per \nconnection so you could end up with 300x that being allocated in a worst \ncase scenario.\n\n> \n> But Apache is still maxing out the non-super user connection limit.\n> \n> The machine is under no load and I would like to up the max_connections \n> but I would like to know more about what you need to consider before \n> doing so.\n\nI can't think why you should be maxing out when under no load. Maybe you \nneed to investigate this further.\n\n> \n> The only other source I've found is this:\n> \n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n> \n> But following its method my postgres server locks up straight away as it \n> recommends setting max_connections to 16 for Web sites?\n\nI think you've mis-interpreted that. She's talking about using persistent \nconnections - i.e., connection pooling.\n\n> \n> Is there a scientific method for optimizing postgres or is it all \n> 'finger in the air' and trial and error.\n\nPosting more details of the queries which are giving the performance \nproblems will enable people to help you. You're vacuum/analyzing regularly \nof course ;) People will want to know:\n\n- PostgreSQL version\n- hardware configuration (SCSI or IDE? RAID level?)\n- table schemas\n- queries together with EXPLAIN ANALYZE output\n\n\nalso output from utils like vmstat, top etc may be of use.\n\nHTH\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Wed, 4 Aug 2004 14:44:08 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "Janning Vygen wrote:\n\n>Am Mittwoch, 4. August 2004 14:45 schrieb Paul Serby:\n> \n>\n>>Apache on the Web server can take up to 300 connections and PHP is using\n>> pg_pconnect\n>>\n>>Postgres is set with the following.\n>>\n>>max_connections = 300\n>>shared_buffers = 38400\n>>sort_mem = 12000\n>>\n>>But Apache is still maxing out the non-super user connection limit\n>>\nThe number of connections in apache and in postgresql are the keys. \n From what you've described, apache can have up to 300 child processes. \nIf this application uses a different identity or db for different \nconnects then you may have more than one connection open per process \neasily exhausting your available connections. Also, your application \nmay open multiple connections to postgresql per process. See if \nsetting max_connections in postgres to a larger works, but you may want \nto reduce your sort_mem proportionately to keep from overbooking your \nsystem.\n\n>for most websites 300 connections is far too much (imagine even 10 request per \n>second for 10 hours a day ends up to 10.8 Mio pages a month)\n>\n>but anyway: you should first focus on closing your http connection to the user \n>as fast as possible. then you dont need so much concurrent connections which \n>keep db connections open and uses memory.\n>\n>I did the following:\n>- apache: keepalive off\n>- apache patch: lingerd (google for it)\n>- apache mod_gzip\n>- pg_pconnect\n> \n>\nKeepAlive for 2 or 3 seconds is quite sufficient. This keeps the \ncurrent number of connections down for those browsers that support it, \nand keeps the server from leaving too many open. We found KeepAlive \noff caused too many http connections to be opened and closed for our \napplications and hardware to keep up. The benefit is facilitating a \nrapid succession of requests: page loads, graphics, embedded objects, \nframes, etc.\n\n>p.s: sorry for being slightly off topic and talking about apache but when it \n>comes to performance it is always important to look at the complete system.\n> \n>\nGood advice.\n",
"msg_date": "Wed, 04 Aug 2004 09:51:12 -0500",
"msg_from": "Thomas Swan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "Paul Serby wrote:\n\n> Can anyone give a good reference site/book for getting the most out of \n> your postgres server.\n> \n> All I can find is contradicting theories on how to work out your settings.\n> \n> This is what I followed to setup our db server that serves our web \n> applications.\n> \n> http://www.phpbuilder.com/columns/smith20010821.php3?page=2\n> \n> We have a Dell Poweredge with the following spec.\n> \n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> Physical Memory: 2077264 kB\n> Swap Memory: 2048244 kB\n> \n> Apache on the Web server can take up to 300 connections and PHP is using \n> pg_pconnect\n> \n> Postgres is set with the following.\n> \n> max_connections = 300\n> shared_buffers = 38400\n> sort_mem = 12000\n> \n> But Apache is still maxing out the non-super user connection limit.\n\nTell us the value MaxClients in your apache configuration\n\n\n\n\nRegards\nGaetano Mendola\n\n\n",
"msg_date": "Wed, 04 Aug 2004 17:29:48 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "Paul,\n\n> > Physical Memory: 2077264 kB\n\n> > sort_mem = 12000\n\nHmmm. Someone may already have mentioned this, but that looks problematic. \nYou're allowing up to 12MB per sort, and up to 300 connections. Even if each \nconcurrent connection averages only one sort (and they can use more) that's \n3600MB ... roughly 1.5 times your *total* RAM, leaving out RAM for Apache, \npostmaster, shared buffers, etc.\n\nI strongly suggest that you either decrease your total connections or your \nsort_mem, or both.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 6 Aug 2004 09:29:19 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJosh Berkus wrote:\n\n| Paul,\n|\n|\n|>>Physical Memory: 2077264 kB\n|\n|\n|>>sort_mem = 12000\n|\n|\n| Hmmm. Someone may already have mentioned this, but that looks problematic.\n| You're allowing up to 12MB per sort, and up to 300 connections. Even if each\n| concurrent connection averages only one sort (and they can use more) that's\n| 3600MB ... roughly 1.5 times your *total* RAM, leaving out RAM for Apache,\n| postmaster, shared buffers, etc.\n|\n| I strongly suggest that you either decrease your total connections or your\n| sort_mem, or both.\n\nOf course your are speaking about the \"worst case\", I aplly in scenarios like\nthis on the rule 80/20: 80% of connection will perform a sort and 20% will allocate\nmemory for the sort operation in the same window time:\n\n300 -- 80% --> 240 --> 20% --> 48\n\n\n48 * 12MB = 576 MB\n\nthat seems resonable with the total ammount of memory available.\n\nAm I too optimistic?\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBE81z7UpzwH2SGd4RAuzzAJ98Ze0HQedKaZ/laT7P1OS44FG0CwCfaWkY\nMAR1TEY1+x61PoXjK/K8Q4Y=\n=8UmF\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Fri, 06 Aug 2004 20:27:01 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "Gaetano,\n\n> Of course your are speaking about the \"worst case\", I aplly in scenarios \nlike\n> this on the rule 80/20: 80% of connection will perform a sort and 20% will \nallocate\n> memory for the sort operation in the same window time:\n\nWell, I suppose it depends on how aggresive your connection pooling is. If \nyou minimize idle connections, then 300 connections can mean 200 concurrent \nqueries. And since Paul *is* having problems, this is worth looking into.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Fri, 6 Aug 2004 16:18:40 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJosh Berkus wrote:\n\n| Gaetano,\n|\n|\n|>Of course your are speaking about the \"worst case\", I aplly in scenarios\n|\n| like\n|\n|>this on the rule 80/20: 80% of connection will perform a sort and 20% will\n|\n| allocate\n|\n|>memory for the sort operation in the same window time:\n|\n|\n| Well, I suppose it depends on how aggresive your connection pooling is. If\n| you minimize idle connections, then 300 connections can mean 200 concurrent\n| queries. And since Paul *is* having problems, this is worth looking into.\n\nWith 4 CPU ( like Paul have ) there is a lot of space in order to have 200\nconcurrent connection running but I don't believe that all 200 togheter are\nallocating space for sort, I have not seen the code but I'm quite confident\nthat the memory for sort is released as soon the sort operation is over,\nnot at the end of connection.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBFBcn7UpzwH2SGd4RAuNhAJ0f+NVUlRUszX+gUE6EfYiFYQy5JQCgnaRj\nHcguR1U3CgvQiZ4a56PBtVU=\n=6Jzo\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Sat, 07 Aug 2004 01:41:28 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "hi,\n\nPaul Serby wrote:\n\n> Can anyone give a good reference site/book for getting the most out of \n> your postgres server.\n> \n> All I can find is contradicting theories on how to work out your settings.\n> \n> This is what I followed to setup our db server that serves our web \n> applications.\n> \n> http://www.phpbuilder.com/columns/smith20010821.php3?page=2\n> \n> We have a Dell Poweredge with the following spec.\n> \n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n> Physical Memory: 2077264 kB\n> Swap Memory: 2048244 kB\n> \n> Apache on the Web server can take up to 300 connections and PHP is using \n> pg_pconnect\n> \n> Postgres is set with the following.\n> \n> max_connections = 300\n> shared_buffers = 38400\n> sort_mem = 12000\n> \n> But Apache is still maxing out the non-super user connection limit.\n> \n> The machine is under no load and I would like to up the max_connections \n> but I would like to know more about what you need to consider before \n> doing so.\n\nOne more: In php.ini, set the pgsql.max_persistent lower then 300\n\n; Maximum number of persistent links. -1 means no limit.\npgsql.max_persistent = -1 -> change this\n\nC.\n",
"msg_date": "Mon, 09 Aug 2004 13:16:03 +0200",
"msg_from": "CoL <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nThanks to everyone for there help.\n\nI've changed my postgres settings to the following\n\nmax_connections = 500\nshared_buffers = 10000\nsort_mem = 2000\neffective_cache_size = 5000\n\nThe 'effective_cache_size' is just a guess, but some references suggest\nit so I added it.\n\nDropping the Apache Keep-alive down to 3 seconds seems to have was a\ngreat tip I now have far less idle connections hanging about.\n\nI've not maxed out the connections since making the changes, but I'm\nstill not convinced everything is running as well as it could be. I've\ngot some big result sets that need sorting and I'm sure I could spare a\nbit more sort memory.\n\nWhere does everyone get there information about the settings? I still\ncan't find anything that helps explain each of the settings and how you\ndetermine there optimal settings.\n\nIf anyone wants interested here is a table schema form one of the most\nused tables.\n\nCREATE TABLE \"tblForumMessages\" (\n~ \"pk_iForumMessagesID\" serial,\n~ \"fk_iParentMessageID\" integer DEFAULT 0 NOT NULL,\n~ \"fk_iAuthorID\" integer NOT NULL,\n~ \"sSubject\" character varying(255) NOT NULL,\n~ \"sBody\" text,\n~ \"fk_iImageID\" oid,\n~ \"dtCreatedOn\" timestamp with time zone DEFAULT now(),\n~ \"iType\" integer DEFAULT 0,\n~ \"bAnonymous\" boolean DEFAULT false,\n~ \"bLocked\" boolean DEFAULT false,\n~ \"dtHidden\" timestamp with time zone,\n~ \"fk_iReplyToID\" integer,\n~ \"iCreateLevel\" integer DEFAULT 7\n);\n\nThis is the query that is most called on the server explained\n\nEXPLAIN ANALYZE SELECT \"tblForumMessages\".* FROM \"tblForumMessages\"\nWHERE \"fk_iParentMessageID\" = 90 ORDER BY \"dtCreatedOn\" DESC\n\nWhich gives the following:\n\nSort (cost=8156.34..8161.71 rows=2150 width=223) (actual\ntime=0.264..0.264 rows=0 loops=1)\n~ Sort Key: \"dtCreatedOn\"\n~ -> Index Scan using \"fk_iParentMessageID_key\" on \"tblForumMessages\"\n~ (cost=0.00..8037.33 rows=2150 width=223) (actual time=0.153..0.153\nrows=0 loops=1)\n~ Index Cond: (\"fk_iParentMessageID\" = 90)\n~ Total runtime: 0.323 ms\n\nSELECT COUNT(*) FROM \"tblForumMessages\" WHERE \"fk_iParentMessageID\" = 90\nReturns: 22920\n\nSELECT COUNT(*) FROM \"tblForumMessages\"\nReturns: 429913\n\nPaul Serby wrote:\n| Can anyone give a good reference site/book for getting the most out of\n| your postgres server.\n|\n| All I can find is contradicting theories on how to work out your settings.\n|\n| This is what I followed to setup our db server that serves our web\n| applications.\n|\n| http://www.phpbuilder.com/columns/smith20010821.php3?page=2\n|\n| We have a Dell Poweredge with the following spec.\n|\n| CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n| CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n| CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n| CPU: Intel(R) Xeon(TM) CPU 3.06GHz (512 KB Cache)\n| Physical Memory: 2077264 kB\n| Swap Memory: 2048244 kB\n|\n| Apache on the Web server can take up to 300 connections and PHP is using\n| pg_pconnect\n|\n| Postgres is set with the following.\n|\n| max_connections = 300\n| shared_buffers = 38400\n| sort_mem = 12000\n|\n| But Apache is still maxing out the non-super user connection limit.\n|\n| The machine is under no load and I would like to up the max_connections\n| but I would like to know more about what you need to consider before\n| doing so.\n|\n| The only other source I've found is this:\n|\n| http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n|\n| But 
following its method my postgres server locks up straight away as it\n| recommends setting max_connections to 16 for Web sites?\n|\n| Is there a scientific method for optimizing postgres or is it all\n| 'finger in the air' and trial and error.\n|\n| ---------------------------(end of broadcast)---------------------------\n| TIP 1: subscribe and unsubscribe commands go to [email protected]\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBF4nxp51pUZR6gxsRAi8cAJ9HBfpNMGQR7vurk0wYW+p6KfqZzACfc9NX\nk72iabZxK+gku06Pf7NmHfQ=\n=Ftv6\n-----END PGP SIGNATURE-----\n",
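One thing the plan above hints at, offered as an aside rather than something Paul asked about: the explicit sort on "dtCreatedOn" disappears if an index matches both the filter and the ordering, for example (index name invented):

CREATE INDEX "idx_ForumMessages_parent_created"
    ON "tblForumMessages" ("fk_iParentMessageID", "dtCreatedOn");

With that index the planner can walk the index backwards for the DESC ordering and return the matching rows already sorted, which matters far more for parents with tens of thousands of replies than for this empty one.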
"msg_date": "Mon, 09 Aug 2004 15:28:01 +0100",
"msg_from": "Paul Serby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
},
{
"msg_contents": "On Monday 09 Aug 2004 7:58 pm, Paul Serby wrote:\n> I've not maxed out the connections since making the changes, but I'm\n> still not convinced everything is running as well as it could be. I've\n> got some big result sets that need sorting and I'm sure I could spare a\n> bit more sort memory.\n\nYou could set the sort mem for that connection before issuing the query.\n\ni.e.\n\n# set sort_mem=20000;\n# select * ....;\n\nAnd reset it back. Setting it globally is not that good. If you do it \nselectively, that would tune it as per your needs..\n\n> Where does everyone get there information about the settings? I still\n> can't find anything that helps explain each of the settings and how you\n> determine there optimal settings.\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nHTH\n\n Shridhar\n",
"msg_date": "Mon, 9 Aug 2004 20:16:47 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: The black art of postgresql.conf tweaking"
}
] |
[
{
"msg_contents": "\n>X-Original-To: [email protected]\n>X-Authentication-Warning: houston.familyhealth.com.au: chriskl owned process \ndoing -bs\n>Date: Wed, 4 Aug 2004 21:21:51 +0800 (WST)\n>From: Christopher Kings-Lynne <[email protected]>\n>To: Valerie Schneider DSI/DEV <[email protected]>\n>Cc: [email protected], <[email protected]>\n>Subject: Re: [PERFORM] Tuning queries on large database\n>MIME-Version: 1.0\n>X-Virus-Scanned: by amavisd-new at hub.org\n>X-Spam-Status: No, hits=0.0 tagged_above=0.0 required=5.0 tests=\n>X-Spam-Level: \n>X-Mailing-List: pgsql-performance\n>\n>> \tsort_mem = 50000\n>\n>That is way, way too large. Try more like 5000 or lower.\n>\n>> num_poste | numeric(9,0) | not null\n>\n>For starters numerics are really, really slow compared to integers. Why\n>aren't you using an integer for this field since youhave '0' decimal\n>places.\n>\n>> schema | relfilenode | table | index | reltuples | size\n>> \n--------+-------------+------------------+------------+-------------+----------\n>> public | 125615917 | data | | 1.25113e+08 | \n72312040\n>> public | 251139049 | data | i_data_dat | 1.25113e+08 | \n2744400\n>> public | 250870177 | data | pk_data | 1.25113e+08 | \n4395480\n>>\n>> My first remark is that the table takes a lot of place on disk, about\n>> 70 Gb, instead of 35 Gb with oracle.\n>\n>Integers will take a lot less space than numerics.\n>\n>> The different queries of the bench are \"simple\" queries (no join,\n>> sub-query, ...) and are using indexes (I \"explained\" each one to\n>> be sure) :\n>> Q1 select_court : access to about 700 rows : 1 \"num_poste\" and 1 month\n>> \t(using PK : num_poste=p1 and dat between p2 and p3)\n>> Q2 select_moy : access to about 7000 rows : 10 \"num_poste\" and 1 month\n>> \t(using PK : num_poste between p1 and p1+10 and dat between p2 and p3)\n>> Q3 select_long : about 250 000 rows : 2 \"num_poste\"\n>> \t(using PK : num_poste in (p1,p1+2))\n>> Q4 select_tres_long : about 3 millions rows : 25 \"num_poste\"\n>> \t(using PK : num_poste between p1 and p1 + 25)\n>>\n>> The result is that for \"short queries\" (Q1 and Q2) it runs in a few\n>> seconds on both Oracle and PG. 
The difference becomes important with\n>> Q3 : 8 seconds with oracle\n>> 80 sec with PG\n>> and too much with Q4 : 28s with oracle\n>> 17m20s with PG !\n>>\n>> Of course when I run 100 or 1000 parallel queries such as Q3 or Q4,\n>> it becomes a disaster !\n>\n>Please reply with the EXPLAIN ANALYZE output of these queries so we can\n>have some idea of how to help you.\n>\n>Chris\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n\nQ1 :\nbench=> explain analyze select 'Q1',min(td),max(u) from data where \nnum_poste=1000 and dat between \n(date_trunc('month',to_timestamp('31012004','ddmmyyyy')-interval '2000 \ndays'))::timestamp and \n(date_trunc('month',to_timestamp('31012004','ddmmyyyy')-interval '2000 days') + \ninterval '1 month' - interval '1 hour')::timestamp;\n \n \n QUERY PLAN \n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n----------------------------------------------------------------------\n Aggregate (cost=2501.90..2501.90 rows=1 width=21) (actual \ntime=581.460..581.461 rows=1 loops=1)\n -> Index Scan using pk_data on data (cost=0.00..2498.80 rows=619 width=21) \n(actual time=92.986..579.089 rows=744 loops=1)\n Index Cond: ((num_poste = 1000::numeric) AND (dat >= \n(date_trunc('month'::text, (to_timestamp('31012004'::text, 'ddmmyyyy'::text) - \n'2000 days'::interval)))::timestamp without time zone) AND (dat <= \n(((date_trunc('month'::text, (to_timestamp('31012004'::text, 'ddmmyyyy'::text) - \n'2000 days'::interval)) + '1 mon'::interval) - '01:00:00'::interval))::timestamp \nwithout time zone))\n Total runtime: 609.149 ms\n(4 rows)\n\n\nQ2 :\nbench=> explain analyze select 'Q2',count(*) from data where num_poste between \n100 and 100+10 and dat between \n(date_trunc('month',to_timestamp('31012004','ddmmyyyy')-interval '3000 \ndays'))::timestamp and \n(date_trunc('month',to_timestamp('31012004','ddmmyyyy')-interval '3000 days') + \ninterval '1 month' - interval '1 hour')::timestamp;\n \n \n QUERY PLAN \n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------------------\n----------------------\n Aggregate (cost=23232.05..23232.05 rows=1 width=0) (actual \ntime=5678.849..5678.850 rows=1 loops=1)\n -> Index Scan using pk_data on data (cost=0.00..23217.68 rows=5747 width=0) \n(actual time=44.408..5669.387 rows=7920 loops=1)\n Index Cond: ((num_poste >= 100::numeric) AND (num_poste <= \n110::numeric) AND (dat >= (date_trunc('month'::text, \n(to_timestamp('31012004'::text, 'ddmmyyyy'::text) - '3000 \ndays'::interval)))::timestamp without time zone) AND (dat <= \n(((date_trunc('month'::text, (to_timestamp('31012004'::text, 'ddmmyyyy'::text) - \n'3000 days'::interval)) + '1 mon'::interval) - '01:00:00'::interval))::timestamp \nwithout time zone))\n Total runtime: 5679.059 ms\n(4 rows)\n\n\nQ3 :\nbench=> explain analyze select 'Q3',sum(rr1),count(ff) from data where num_poste \nin (50,50+2);\n QUERY PLAN 
\n--------------------------------------------------------------------------------\n------------------------------------------------------------------\n Aggregate (cost=986770.56..986770.56 rows=1 width=17) (actual \ntime=75401.030..75401.031 rows=1 loops=1)\n -> Index Scan using pk_data, pk_data on data (cost=0.00..985534.43 \nrows=247225 width=17) (actual time=35.823..74885.689 rows=250226 loops=1)\n Index Cond: ((num_poste = 50::numeric) OR (num_poste = 52::numeric))\n Total runtime: 75405.666 ms\n(4 rows)\n\n\nQ4 :\nbench=> explain analyze select 'Q4',count(*) from data where num_poste between \n600 and 625;\n QUERY PLAN \n--------------------------------------------------------------------------------\n--------------------------------------------------------------\n Aggregate (cost=12166763.62..12166763.62 rows=1 width=0) (actual \ntime=1162090.302..1162090.303 rows=1 loops=1)\n -> Index Scan using pk_data on data (cost=0.00..12159021.19 rows=3096971 \nwidth=0) (actual time=94.679..1158266.561 rows=3252938 loops=1)\n Index Cond: ((num_poste >= 600::numeric) AND (num_poste <= \n625::numeric))\n Total runtime: 1162102.217 ms\n(4 rows)\n\n\nNow I'm going to recreate my table with integer and real datatype,\nand to decrease sort_mem to 5000.\nThen I'll try these queries again.\nThanks.\n\n\n\n********************************************************************\n* Les points de vue exprimes sont strictement personnels et *\n* n'engagent pas la responsabilite de METEO-FRANCE. *\n********************************************************************\n* Valerie SCHNEIDER Tel : +33 (0)5 61 07 81 91 *\n* METEO-FRANCE / DSI/DEV Fax : +33 (0)5 61 07 81 09 *\n* 42, avenue G. Coriolis Email : [email protected] *\n* 31057 TOULOUSE Cedex - FRANCE http://www.meteo.fr *\n********************************************************************\n\n",
"msg_date": "Wed, 4 Aug 2004 14:12:34 +0000 (GMT)",
"msg_from": "Valerie Schneider DSI/DEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning queries on large database"
},
{
"msg_contents": "\n\tYou often make sums. Why not use separate tables to cache these sums by \nmonth, by poste, by whatever ?\n\n\tRule on insert on the big table updates the cache tables.\n",
"msg_date": "Wed, 04 Aug 2004 17:53:27 +0200",
"msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning queries on large database"
}
] |
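The caching idea in the last reply can be sketched in SQL. This is only a minimal illustration, assuming the benchmark's data(num_poste, dat, rr1) columns and an invented cache-table name; it is not the poster's actual implementation, and a trigger (or an additional INSERT rule) would still be needed to create cache rows that do not exist yet.

```sql
-- Hypothetical cache table: one pre-aggregated row per poste and per month.
CREATE TABLE data_month_sums (
    num_poste   integer NOT NULL,
    month_start date    NOT NULL,            -- first day of the month
    sum_rr1     real    NOT NULL DEFAULT 0,
    nb_rows     bigint  NOT NULL DEFAULT 0,
    PRIMARY KEY (num_poste, month_start)
);

-- Rule on insert into the big table keeps the cache current
-- (assumes the matching cache row already exists).
CREATE RULE data_cache_upd AS ON INSERT TO data
DO UPDATE data_month_sums
      SET sum_rr1 = sum_rr1 + COALESCE(NEW.rr1, 0),
          nb_rows = nb_rows + 1
    WHERE data_month_sums.num_poste = NEW.num_poste
      AND data_month_sums.month_start = date_trunc('month', NEW.dat)::date;

-- A Q3-style sum then reads a few hundred cache rows instead of ~250 000 base rows.
SELECT sum(sum_rr1) FROM data_month_sums WHERE num_poste IN (50, 52);
```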
[
{
"msg_contents": "[forwarded to performance] \n> The result is that for \"short queries\" (Q1 and Q2) it runs in a few\n> seconds on both Oracle and PG. The difference becomes important with\n> Q3 : 8 seconds with oracle\n> 80 sec with PG\n> and too much with Q4 : 28s with oracle\n> 17m20s with PG !\n> \n> Of course when I run 100 or 1000 parallel queries such as Q3 or Q4,\n> it becomes a disaster !\n> I can't understand these results. The way to execute queries is the\n> same I think. I've read recommended articles on the PG site.\n> I tried with a table containing 30 millions rows, results are similar.\n\n\nI don't trust the Oracle #s. Lets look at Q4: returns 3 million rows.\nUsing your #s of 160 fields and 256 bytes, your are asking for a result\nset of 160 * 256 * 3M = 12 GB! This data has to be gathered by the\ndisk, assembled, and sent over the network.\n\nI don't know Oracle, but it probably has some 'smart' result set that\nuses a cursor behind the scenes to do the fetching.\n\nWith a 3M row result set, you need to strongly consider using cursors.\nTry experimenting with the same query (Q4), declared as a cursor, and\nfetch the data in 10k blocks in a loop (fetch 10000), and watch the #s\nfly.\n\nMerlin\n\n",
"msg_date": "Wed, 4 Aug 2004 11:56:41 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: Tuning queries on large database"
}
] |
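The cursor experiment described above can be run directly in psql with nothing beyond standard DECLARE/FETCH. The cursor name is invented here, the query is the benchmark's Q4, and the 10 000-row batch is simply the size mentioned in the message.

```sql
BEGIN;

-- Q4 behind a cursor: rows are streamed on demand instead of
-- materialising a ~3 million row result set in one go.
DECLARE q4_cur CURSOR FOR
    SELECT * FROM data WHERE num_poste BETWEEN 600 AND 625;

FETCH 10000 FROM q4_cur;   -- the client repeats this ...
FETCH 10000 FROM q4_cur;   -- ... until a fetch returns fewer than 10000 rows

CLOSE q4_cur;
COMMIT;
```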
[
{
"msg_contents": "I have a database that I should migrate from 7.3 -> 7.4.3 but pg_dump |\npsql seems to take forever. (Several hours) Is there anything that can I \ndo to speed it up?\n\nThe databse is primary a table with 300.000 records of about 200Kbytes\neach. ~ 60 GB. \n\nThis is becoming an issue with the daily backup too.. (running pg_dump\nover night )\n\nJesper\n\n-- \n./Jesper Krogh, [email protected]\nJabber ID: [email protected]\n\n\n",
"msg_date": "Thu, 5 Aug 2004 07:08:48 +0000 (UTC)",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump performance?"
},
{
"msg_contents": "Is it the dump or the restore that's really slow?\n\nChris\n\nJesper Krogh wrote:\n\n> I have a database that I should migrate from 7.3 -> 7.4.3 but pg_dump |\n> psql seems to take forever. (Several hours) Is there anything that can I \n> do to speed it up?\n> \n> The databse is primary a table with 300.000 records of about 200Kbytes\n> each. ~ 60 GB. \n> \n> This is becoming an issue with the daily backup too.. (running pg_dump\n> over night )\n> \n> Jesper\n> \n",
"msg_date": "Fri, 06 Aug 2004 12:46:14 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump performance?"
},
{
"msg_contents": "I gmane.comp.db.postgresql.performance, skrev Christopher Kings-Lynne:\n> Is it the dump or the restore that's really slow?\n\nPrimarily the dump, it seems to be CPU-bound on the postmaster' process. \n\nNo signs on IO-bottleneck when I try to monitor with iostat or vmstat\n\n-- \n./Jesper Krogh, [email protected]\nJabber ID: [email protected]\n\n\n",
"msg_date": "Fri, 6 Aug 2004 05:05:08 +0000 (UTC)",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump performance?"
}
] |
[
{
"msg_contents": "Hi,\nI 've decreased the sort_mem to 5000 instead of 50000.\nI recreated ma table using integer and real types instead of\nnumeric : the result is very improved for the disk space :\n\n schema | relfilenode | table | index | reltuples | size \n--------+-------------+------------------+------------+-------------+----------\n public | 253442696 | data | | 1.25113e+08 | 29760016\n public | 378639579 | data | i_data_dat | 1.25113e+08 | 2744400\n public | 378555698 | data | pk_data | 1.25113e+08 | 3295584\n\nso it takes about 28 Gb instead of 68 Gb !\n\nFor my different queries, it's better but less performant than oracle :\n\n\toracle\tPG yesterday(numeric)\tPG today(integer/real)\nQ1\t<1s\t <1s\t\t\t <1s\nQ2\t 3s\t 8s\t\t\t 4s\nQ3\t 8s\t 1m20s\t\t\t 27s\nQ4\t28s\t17m20s\t\t\t6m47s\n\nResult of EXPLAIN ANALYZE :\n\nQ1 :bench=> explain analyze select 'Q1',min(td),max(u) from data where \nnum_poste=1000 and dat between \n(date_trunc('month',to_timestamp('31012004','ddmmyyyy')-interval '2000 \ndays'))::timestamp and \n(date_trunc('month',to_timestamp('31012004','ddmmyyyy')-interval '2000 days') + \ninterval '1 month' - interval '1 hour')::timestamp;\n\n QUERY PLAN \n--------------------------------------------------------------------------------\n Aggregate (cost=2466.47..2466.47 rows=1 width=8) (actual time=261.777..261.778 \nrows=1 loops=1)\n -> Index Scan using pk_data on data (cost=0.00..2463.41 rows=611 width=8) \n(actual time=20.106..259.924 rows=744 loops=1)\n Index Cond: ((num_poste = 1000) AND (dat >= (date_trunc('month'::text, \n(to_timestamp('31012004'::text, 'ddmmyyyy'::text) - '2000 \ndays'::interval)))::timestamp without time zone) AND (dat <= \n(((date_trunc('month'::text, (to_timestamp('31012004'::text, 'ddmmyyyy'::text) - \n'2000 days'::interval)) + '1 mon'::interval) - '01:00:00'::interval))::timestamp \nwithout time zone))\n Total runtime: 262.145 ms\n(4 rows)\n\n\nQ2 : bench=> explain analyze select 'Q2',count(*) from data where num_poste \nbetween 100 and 100+10 and dat between \n(date_trunc('month',to_timestamp('31012004','ddmmyyyy')-interval '3000 \ndays'))::timestamp and \n(date_trunc('month',to_timestamp('31012004','ddmmyyyy')-interval '3000 days') + \ninterval '1 month' - interval '1 hour')::timestamp;\n \n QUERY PLAN \n--------------------------------------------------------------------------------\n Aggregate (cost=24777.68..24777.68 rows=1 width=0) (actual \ntime=4253.977..4253.978 rows=1 loops=1)\n -> Index Scan using pk_data on data (cost=0.00..24762.34 rows=6138 width=0) \n(actual time=46.602..4244.984 rows=7920 loops=1)\n Index Cond: ((num_poste >= 100) AND (num_poste <= 110) AND (dat >= \n(date_trunc('month'::text, (to_timestamp('31012004'::text, 'ddmmyyyy'::text) - \n'3000 days'::interval)))::timestamp without time zone) AND (dat <= \n(((date_trunc('month'::text, (to_timestamp('31012004'::text, 'ddmmyyyy'::text) - \n'3000 days'::interval)) + '1 mon'::interval) - '01:00:00'::interval))::timestamp \nwithout time zone))\n Total runtime: 4254.233 ms\n(4 rows)\n\n\nQ3 : bench=> explain analyze select 'Q3',sum(rr1),count(ff) from data where \nnum_poste in (50,50+2);\n QUERY PLAN \n--------------------------------------------------------------------------------\n Aggregate (cost=963455.87..963455.87 rows=1 width=8) (actual \ntime=27668.666..27668.667 rows=1 loops=1)\n -> Index Scan using pk_data, pk_data on data (cost=0.00..962236.31 \nrows=243910 width=8) (actual time=16.251..27275.468 rows=250226 loops=1)\n Index Cond: ((num_poste = 50) OR (num_poste 
= 52))\n Total runtime: 27673.837 ms\n(4 rows)\n\n\nQ4 : bench=> explain analyze select 'Q4',count(*) from data where num_poste \nbetween 600 and 625;\n QUERY PLAN \n--------------------------------------------------------------------------------\n Aggregate (cost=14086174.57..14086174.57 rows=1 width=0) (actual \ntime=428235.024..428235.025 rows=1 loops=1)\n -> Index Scan using pk_data on data (cost=0.00..14076910.99 rows=3705431 \nwidth=0) (actual time=45.283..424634.826 rows=3252938 loops=1)\n Index Cond: ((num_poste >= 600) AND (num_poste <= 625))\n Total runtime: 428235.224 ms\n(4 rows)\n\nThanks for all, Valerie.\n\n>X-Original-To: [email protected]\n>X-Authentication-Warning: houston.familyhealth.com.au: chriskl owned process \ndoing -bs\n>Date: Wed, 4 Aug 2004 21:21:51 +0800 (WST)\n>From: Christopher Kings-Lynne <[email protected]>\n>To: Valerie Schneider DSI/DEV <[email protected]>\n>Cc: [email protected], <[email protected]>\n>Subject: Re: [GENERAL] [PERFORM] Tuning queries on large database\n>MIME-Version: 1.0\n>X-Virus-Scanned: by amavisd-new at hub.org\n>X-Spam-Status: No, hits=0.0 tagged_above=0.0 required=5.0 tests=\n>X-Spam-Level: \n>X-Mailing-List: pgsql-general\n>\n>> \tsort_mem = 50000\n>\n>That is way, way too large. Try more like 5000 or lower.\n>\n>> num_poste | numeric(9,0) | not null\n>\n>For starters numerics are really, really slow compared to integers. Why\n>aren't you using an integer for this field since youhave '0' decimal\n>places.\n>\n>> schema | relfilenode | table | index | reltuples | size\n>> \n--------+-------------+------------------+------------+-------------+----------\n>> public | 125615917 | data | | 1.25113e+08 | \n72312040\n>> public | 251139049 | data | i_data_dat | 1.25113e+08 | \n2744400\n>> public | 250870177 | data | pk_data | 1.25113e+08 | \n4395480\n>>\n>> My first remark is that the table takes a lot of place on disk, about\n>> 70 Gb, instead of 35 Gb with oracle.\n>\n>Integers will take a lot less space than numerics.\n>\n>> The different queries of the bench are \"simple\" queries (no join,\n>> sub-query, ...) and are using indexes (I \"explained\" each one to\n>> be sure) :\n>> Q1 select_court : access to about 700 rows : 1 \"num_poste\" and 1 month\n>> \t(using PK : num_poste=p1 and dat between p2 and p3)\n>> Q2 select_moy : access to about 7000 rows : 10 \"num_poste\" and 1 month\n>> \t(using PK : num_poste between p1 and p1+10 and dat between p2 and p3)\n>> Q3 select_long : about 250 000 rows : 2 \"num_poste\"\n>> \t(using PK : num_poste in (p1,p1+2))\n>> Q4 select_tres_long : about 3 millions rows : 25 \"num_poste\"\n>> \t(using PK : num_poste between p1 and p1 + 25)\n>>\n>> The result is that for \"short queries\" (Q1 and Q2) it runs in a few\n>> seconds on both Oracle and PG. The difference becomes important with\n>> Q3 : 8 seconds with oracle\n>> 80 sec with PG\n>> and too much with Q4 : 28s with oracle\n>> 17m20s with PG !\n>>\n>> Of course when I run 100 or 1000 parallel queries such as Q3 or Q4,\n>> it becomes a disaster !\n>\n>Please reply with the EXPLAIN ANALYZE output of these queries so we can\n>have some idea of how to help you.\n>\n>Chris\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n\n********************************************************************\n* Les points de vue exprimes sont strictement personnels et *\n* n'engagent pas la responsabilite de METEO-FRANCE. 
*\n********************************************************************\n* Valerie SCHNEIDER Tel : +33 (0)5 61 07 81 91 *\n* METEO-FRANCE / DSI/DEV Fax : +33 (0)5 61 07 81 09 *\n* 42, avenue G. Coriolis Email : [email protected] *\n* 31057 TOULOUSE Cedex - FRANCE http://www.meteo.fr *\n********************************************************************\n\n",
"msg_date": "Thu, 5 Aug 2004 08:16:59 +0000 (GMT)",
"msg_from": "Valerie Schneider DSI/DEV <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Tuning queries on large database"
},
{
"msg_contents": "Valerie Schneider DSI/DEV wrote:\n\n > Hi,\n > I 've decreased the sort_mem to 5000 instead of 50000.\n > I recreated ma table using integer and real types instead of\n > numeric : the result is very improved for the disk space :\n >\n > schema | relfilenode | table | index | reltuples | size\n > --------+-------------+------------------+------------+-------------+----------\n > public | 253442696 | data | | 1.25113e+08 | 29760016\n > public | 378639579 | data | i_data_dat | 1.25113e+08 | 2744400\n > public | 378555698 | data | pk_data | 1.25113e+08 | 3295584\n >\n > so it takes about 28 Gb instead of 68 Gb !\n >\n > For my different queries, it's better but less performant than oracle :\n >\n > \toracle\tPG yesterday(numeric)\tPG today(integer/real)\n > Q1\t<1s\t <1s\t\t\t <1s\n > Q2\t 3s\t 8s\t\t\t 4s\n > Q3\t 8s\t 1m20s\t\t\t 27s\n > Q4\t28s\t17m20s\t\t\t6m47s\n\n\nAre you using the same disk for oracle and PG ?\n\nCould you post your actual postgresql.conf ?\nTry also to mount your partition with the option: noatime\nand try again.\n\n\nRegards\nGaetano Mendola\n\n\n\n",
"msg_date": "Thu, 05 Aug 2004 11:53:59 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Tuning queries on large database"
},
{
"msg_contents": "I am guessing that Oracle can satisfy Q4 entirely via index access, \nwhereas Pg has to visit the table as well.\n\nHaving said that, a few partial indexes may be worth trying out on \ndata.num_poste (say 10 or so), this won't help the table access but \ncould lower the index cost. If you combine this with loading the data in \nnum_poste order (or run CLUSTER), you may get closer to Oracle's time \nfor this query.\n\nregards\n\nMark\n\nValerie Schneider DSI/DEV wrote:\n\n>For my different queries, it's better but less performant than oracle :\n>\n>\toracle\tPG yesterday(numeric)\tPG today(integer/real)\n>\n>Q4\t28s\t17m20s\t\t\t6m47s\n>\n>\n>\n>Q4 : bench=> explain analyze select 'Q4',count(*) from data where num_poste \n>between 600 and 625;\n> QUERY PLAN \n>--------------------------------------------------------------------------------\n> Aggregate (cost=14086174.57..14086174.57 rows=1 width=0) (actual \n>time=428235.024..428235.025 rows=1 loops=1)\n> -> Index Scan using pk_data on data (cost=0.00..14076910.99 rows=3705431 \n>width=0) (actual time=45.283..424634.826 rows=3252938 loops=1)\n> Index Cond: ((num_poste >= 600) AND (num_poste <= 625))\n> Total runtime: 428235.224 ms\n>(4 rows)\n>\n>Thanks for all, Valerie.\n>\n> \n>\n",
"msg_date": "Thu, 05 Aug 2004 22:09:58 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Tuning queries on large database"
},
{
"msg_contents": "> so it takes about 28 Gb instead of 68 Gb !\n\nHuzzah!\n\n> For my different queries, it's better but less performant than oracle :\n\nNot surprising. Oracle has a number of optimizations that we don't have\nimplemented at this point, particularly where aggregates are involved. \n\nOne that PG could use, particularly for Q4, is the ability to execute a\nselective sequential scan based on a read of the index -- right now it\npulls in actual data from the table structure while following the index\n-- creates unnecessary disk-head movement.\n\nThe only solution to that, at the moment, is to cluster the table by\npk_data.\n\n\nI am curious though, could you run the below query on both systems and\nreport back times?\n\n select 'Q4', * from data where num_poste between 600 and 625;\n\nI'm wondering if Oracle is using a shortcut since the count(*) doesn't\nactually require the data -- just knowledge of whether a matching row\nexists or not.\n\n\n",
"msg_date": "Thu, 05 Aug 2004 09:18:47 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Tuning queries on large database"
}
] |
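A sketch of the two suggestions above: partial indexes on slices of num_poste, and physically ordering the table by its primary key. The index name and the 600-625 slice are illustrative only, and CLUSTER rewrites the whole table under an exclusive lock, so it is a one-off maintenance step rather than something to run routinely.

```sql
-- One of roughly ten partial indexes, each covering a slice of num_poste;
-- a count over that slice can then walk a much smaller index.
CREATE INDEX i_data_poste_600_625 ON data (num_poste, dat)
    WHERE num_poste BETWEEN 600 AND 625;

-- Rewrite the table in primary-key order so the rows of one poste sit on
-- contiguous pages (7.4-era syntax; newer releases spell it CLUSTER data USING pk_data).
CLUSTER pk_data ON data;
ANALYZE data;
```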
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nValerie Schneider DSI/DEV wrote:\n\n| #---------------------------------------------------------------------------\n| # RESOURCE USAGE (except WAL)\n| #---------------------------------------------------------------------------\n|\n| # - Memory -\n|\n| shared_buffers = 30000 # min 16, at least max_connections*2, 8KB each\n| #sort_mem = 1024 # min 64, size in KB\n| sort_mem = 5000 # min 64, size in KB\n| #vacuum_mem = 8192 # min 1024, size in KB\n|\n| # - Free Space Map -\n|\n| #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n| #max_fsm_relations = 1000 # min 100, ~50 bytes each\n|\n| # - Kernel Resource Usage -\n|\n| #max_files_per_process = 1000 # min 25\n| #preload_libraries = ''\n|\n|\n| #---------------------------------------------------------------------------\n| # WRITE AHEAD LOG\n| #---------------------------------------------------------------------------\n|\n| # - Settings -\n|\n| #fsync = true # turns forced synchronization on or off\n| #wal_sync_method = fsync # the default varies across platforms:\n| # fsync, fdatasync, open_sync, or open_datasync\n| #wal_buffers = 8 # min 4, 8KB each\n|\n| # - Checkpoints -\n|\n| #checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n| checkpoint_segments = 30 # in logfile segments, min 1, 16MB each\n| #checkpoint_timeout = 300 # range 30-3600, in seconds\n| #checkpoint_warning = 30 # 0 is off, in seconds\n| #commit_delay = 0 # range 0-100000, in microseconds\n| #commit_siblings = 5 # range 1-1000\n|\n|\n| #---------------------------------------------------------------------------\n| # QUERY TUNING\n| #---------------------------------------------------------------------------\n|\n| # - Planner Method Enabling -\n|\n| #enable_hashagg = true\n| #enable_hashjoin = true\n| #enable_indexscan = true\n| #enable_mergejoin = true\n| #enable_nestloop = true\n| enable_seqscan = false\n| #enable_sort = true\n| #enable_tidscan = true\n|\n| # - Planner Cost Constants -\n|\n| #effective_cache_size = 1000 # typically 8KB each\n| effective_cache_size = 200000 # typically 8KB each\n| #random_page_cost = 4 # units are one sequential page fetch cost\n| random_page_cost = 2 # units are one sequential page fetch cost\n| #cpu_tuple_cost = 0.01 # (same)\n| #cpu_index_tuple_cost = 0.001 # (same)\n| #cpu_operator_cost = 0.0025 # (same)\n|\n| # - Genetic Query Optimizer -\n|\n| #geqo = true\n| #geqo_threshold = 11\n| #geqo_effort = 1\n| #geqo_generations = 0\n| #geqo_pool_size = 0 # default based on tables in statement,\n| # range 128-1024\n| #geqo_selection_bias = 2.0 # range 1.5-2.0\n\n\n\nYour wal_buffers is too small try do bump up your wal_buffers to ~3000,\nand see the effects.\n\nwhy did you disable the sequential_scan (see later) ?\n\nTry also to lower the cpu_costs:\n\ncpu_tuple_cost = 0.005\ncpu_index_tuple_cost = 0.0005 # (same)\ncpu_operator_cost = 0.0005 # (same)\n\nthis will push the optimizer to choose the index scans.\nIf not show us the explain with enable_seqscan = false\nand with enable_seqscan = true\n\n*Mount also your partition with the noatime parameter*\n\n\n\n\nRegards\nGaeatano Mendola\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFBEjEY7UpzwH2SGd4RAtxnAKDuTtYZvWMXL7zjHWU20VFtm2V1OACg/Y1l\nGZuQ5RviMB2nB4M8G6PW17U=\n=HxGz\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Thu, 05 Aug 2004 15:07:38 +0200",
"msg_from": "Gaetano Mendola <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Tuning queries on large database"
}
] |
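The planner-cost values proposed above can be tried per session before editing postgresql.conf. The statements below are only an assumed test procedure using the figures from the message; wal_buffers and shared_buffers cannot be changed this way and still need a config edit plus restart, and noatime is a filesystem mount option, not a database setting.

```sql
-- Session-level experiment with the suggested planner costs.
SET cpu_tuple_cost = 0.005;
SET cpu_index_tuple_cost = 0.0005;
SET cpu_operator_cost = 0.0005;

-- Compare plans and timings with and without sequential scans allowed,
-- as requested in the message.
SET enable_seqscan = true;
EXPLAIN ANALYZE SELECT count(*) FROM data WHERE num_poste BETWEEN 600 AND 625;
SET enable_seqscan = false;
EXPLAIN ANALYZE SELECT count(*) FROM data WHERE num_poste BETWEEN 600 AND 625;
```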
[
{
"msg_contents": "Another question:\nHow are you benchmarking your queries? Are you running them from within\npsql? Do you plan to run these queries from within an application?\n\npsql introduces some overhead because it has to scan the result set to\ndetermine the widths of the columns for formatting purposes. Try\nreturning a result set inside the libpq library if you know C and\ncompare the times. Of course, if you are already using libpq, this\nmoot. If you do know libpq, try setting up a loop that fetches the data\nin 10k block in a loop...I will wager that you can get this to run in\nunder two minutes (Q4).\n\nMerlin\n\n\n\n\n",
"msg_date": "Thu, 5 Aug 2004 09:20:17 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Tuning queries on large database"
}
] |