[
{
"msg_contents": "Hi,\n\nI thought that an index can be used for sorting.\nI'm a little confused about the following result:\n\ncreate index OperationsName on Operations(cOperationName);\nexplain SELECT * FROM Operations ORDER BY cOperationName;\n QUERY PLAN\n-----------------------------------------------------------------------\n Sort (cost=185.37..189.20 rows=1532 width=498)\n Sort Key: coperationname\n -> Seq Scan on operations (cost=0.00..104.32 rows=1532 width=498)\n(3 rows)\n\nIs this supposed to be so?\n\nAndrei\n\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.308 / Virus Database: 266.9.15 - Release Date: 4/16/2005\n\n",
"msg_date": "Mon, 18 Apr 2005 18:36:14 +0300",
"msg_from": "Andrei Gaspar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sort and index"
}
] |
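The trade-off behind the plan above can be sketched with two EXPLAIN statements (an illustrative aside, not part of the thread; it reuses the Operations table and cOperationName column from the message, and the plans actually chosen will depend on the installation and its statistics):

-- Full result set: the planner may prefer a sequential scan plus an explicit
-- sort, as in the plan shown above, because every row has to be read anyway.
EXPLAIN SELECT * FROM Operations ORDER BY cOperationName;

-- Partial result set: with a LIMIT the index order usually wins, since only the
-- first few index entries and their heap rows need to be fetched.
EXPLAIN SELECT * FROM Operations ORDER BY cOperationName LIMIT 10;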
[
{
"msg_contents": "> -----Original Message-----\n> From: Andrei Gaspar [mailto:[email protected]]\n> Sent: Monday, April 18, 2005 10:36 AM\n> To: [email protected]\n> Subject: [PERFORM] Sort and index\n> \n> I thought that an index can be used for sorting.\n> I'm a little confused about the following result:\n> \n> create index OperationsName on Operations(cOperationName);\n> explain SELECT * FROM Operations ORDER BY cOperationName;\n> QUERY PLAN\n> --------------------------------------------------------------\n> ---------\n> Sort (cost=185.37..189.20 rows=1532 width=498)\n> Sort Key: coperationname\n> -> Seq Scan on operations (cost=0.00..104.32 rows=1532 width=498)\n> (3 rows)\n> \n> Is this supposed to be so?\n\nSince you are fetching the entire table, you are touching all the rows.\nIf the query were to fetch the rows in index order, it would be seeking\nall over the table's tracks. By fetching in sequence order, it has a\nmuch better chance of fetching rows in a way that minimizes head seeks.\nSince disk I/O is generally 10-100x slower than RAM, the in-memory sort \ncan be surprisingly slow and still beat indexed disk access. Of course,\nthis is only true if the table can fit and be sorted entirely in memory\n(which, with 1500 rows, probably can).\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Mon, 18 Apr 2005 10:44:43 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "Thanks for the quick response\nAndrei\n\nDave Held wrote:\n\n>>-----Original Message-----\n>>From: Andrei Gaspar [mailto:[email protected]]\n>>Sent: Monday, April 18, 2005 10:36 AM\n>>To: [email protected]\n>>Subject: [PERFORM] Sort and index\n>>\n>>I thought that an index can be used for sorting.\n>>I'm a little confused about the following result:\n>>\n>>create index OperationsName on Operations(cOperationName);\n>>explain SELECT * FROM Operations ORDER BY cOperationName;\n>> QUERY PLAN\n>>--------------------------------------------------------------\n>>---------\n>> Sort (cost=185.37..189.20 rows=1532 width=498)\n>> Sort Key: coperationname\n>> -> Seq Scan on operations (cost=0.00..104.32 rows=1532 width=498)\n>>(3 rows)\n>>\n>>Is this supposed to be so?\n>> \n>>\n>\n>Since you are fetching the entire table, you are touching all the rows.\n>If the query were to fetch the rows in index order, it would be seeking\n>all over the table's tracks. By fetching in sequence order, it has a\n>much better chance of fetching rows in a way that minimizes head seeks.\n>Since disk I/O is generally 10-100x slower than RAM, the in-memory sort \n>can be surprisingly slow and still beat indexed disk access. Of course,\n>this is only true if the table can fit and be sorted entirely in memory\n>(which, with 1500 rows, probably can).\n>\n>__\n>David B. Held\n>Software Engineer/Array Services Group\n>200 14th Ave. East, Sartell, MN 56377\n>320.534.3637 320.253.7800 800.752.8129\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n>\n> \n>\n\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.308 / Virus Database: 266.9.15 - Release Date: 4/16/2005\n\n",
"msg_date": "Mon, 18 Apr 2005 19:11:38 +0300",
"msg_from": "Andrei Gaspar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 10:44:43AM -0500, Dave Held wrote:\n> > \n> > I thought that an index can be used for sorting.\n> > I'm a little confused about the following result:\n> > \n> > create index OperationsName on Operations(cOperationName);\n> > explain SELECT * FROM Operations ORDER BY cOperationName;\n> > QUERY PLAN\n> > --------------------------------------------------------------\n> > ---------\n> > Sort (cost=185.37..189.20 rows=1532 width=498)\n> > Sort Key: coperationname\n> > -> Seq Scan on operations (cost=0.00..104.32 rows=1532 width=498)\n> > (3 rows)\n> > \n> > Is this supposed to be so?\n> \n> Since you are fetching the entire table, you are touching all the rows.\n> If the query were to fetch the rows in index order, it would be seeking\n> all over the table's tracks. By fetching in sequence order, it has a\n> much better chance of fetching rows in a way that minimizes head seeks.\n> Since disk I/O is generally 10-100x slower than RAM, the in-memory sort \n> can be surprisingly slow and still beat indexed disk access. Of course,\n> this is only true if the table can fit and be sorted entirely in memory\n> (which, with 1500 rows, probably can).\n\nOut of curiosity, what are the results of the following queries?\n(Queries run twice to make sure time differences aren't due to\ncaching.)\n\nSET enable_seqscan TO on;\nSET enable_indexscan TO off;\nEXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\nEXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n\nSET enable_seqscan TO off;\nSET enable_indexscan TO on;\nEXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\nEXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n\nSELECT version();\n\nWith 1500 rows of random data, I consistently see better performance\nwith an index scan (about twice as fast as a sequence scan), and\nthe planner uses an index scan if it has a choice (i.e., when\nenable_seqscan and enable_indexscan are both on). But my test case\nand postgresql.conf settings might be different enough from yours\nto account for different behavior.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Mon, 18 Apr 2005 11:10:13 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 10:44:43AM -0500, Dave Held wrote:\n> Since you are fetching the entire table, you are touching all the rows.\n> If the query were to fetch the rows in index order, it would be seeking\n> all over the table's tracks. By fetching in sequence order, it has a\n> much better chance of fetching rows in a way that minimizes head seeks.\n> Since disk I/O is generally 10-100x slower than RAM, the in-memory sort \n> can be surprisingly slow and still beat indexed disk access. Of course,\n> this is only true if the table can fit and be sorted entirely in memory\n> (which, with 1500 rows, probably can).\n\nActually, the planner (at least in 7.4) isn't smart enough to consider\nif the sort would fit in memory or not. I'm running a test right now to\nsee if it's actually faster to use an index in this case.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 19:42:34 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Actually, the planner (at least in 7.4) isn't smart enough to consider\n> if the sort would fit in memory or not.\n\nReally? Have you read cost_sort()?\n\nIt's certainly possible that the calculation is all wet, but to claim\nthat the issue is not considered is just wrong.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Apr 2005 23:01:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index "
},
{
"msg_contents": "On Tue, Apr 19, 2005 at 11:01:26PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Actually, the planner (at least in 7.4) isn't smart enough to consider\n> > if the sort would fit in memory or not.\n> \n> Really? Have you read cost_sort()?\n> \n> It's certainly possible that the calculation is all wet, but to claim\n> that the issue is not considered is just wrong.\n\nTo be fair, no, I haven't looked at the code. This is based strictly on\nanecdotal evidence on a 120M row table. I'm currently running a test to\nsee how an index scan compares to a seqscan. I also got the same results\nwhen I added a where clause that would restrict it to about 7% of the\ntable.\n\nActually, after running some tests (below), the plan cost does change\nwhen I change sort_mem (it was originally 50000).\n\nstats=# \\d email_contrib\n Table \"public.email_contrib\"\n Column | Type | Modifiers \n------------+---------+-----------\n project_id | integer | not null\n id | integer | not null\n date | date | not null\n team_id | integer | \n work_units | bigint | not null\nIndexes:\n \"email_contrib_pkey\" primary key, btree (project_id, id, date)\n \"email_contrib__pk24\" btree (id, date) WHERE (project_id = 24)\n \"email_contrib__pk25\" btree (id, date) WHERE (project_id = 25)\n \"email_contrib__pk8\" btree (id, date) WHERE (project_id = 8)\n \"email_contrib__project_date\" btree (project_id, date)\nForeign-key constraints:\n \"fk_email_contrib__id\" FOREIGN KEY (id) REFERENCES stats_participant(id) ON UPDATE CASCADE\n \"fk_email_contrib__team_id\" FOREIGN KEY (team_id) REFERENCES stats_team(team) ON UPDATE CASCADE\n\nstats=# explain select * from email_contrib where project_id=8 order by project_id, id, date;\n QUERY PLAN \n--------------------------------------------------------------------------------\n Sort (cost=3613476.05..3635631.71 rows=8862263 width=24)\n Sort Key: project_id, id, date\n -> Seq Scan on email_contrib (cost=0.00..2471377.50 rows=8862263 width=24)\n Filter: (project_id = 8)\n(4 rows)\n\nstats=# explain select * from email_contrib order by project_id, id, date;\n QUERY PLAN \n----------------------------------------------------------------------------------\n Sort (cost=25046060.83..25373484.33 rows=130969400 width=24)\n Sort Key: project_id, id, date\n -> Seq Scan on email_contrib (cost=0.00..2143954.00 rows=130969400 width=24)\n(3 rows)\n\nstats=# select 8862263::float/130969400;\n ?column? 
\n--------------------\n 0.0676666687027657\n(1 row)\n\nstats=# explain select * from email_contrib where project_id=8 order by project_id, id, date;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------\n Index Scan using email_contrib_pkey on email_contrib (cost=0.00..6832005.57 rows=8862263 width=24)\n Index Cond: (project_id = 8)\n(2 rows)\n\nstats=# explain select * from email_contrib order by project_id, id, date;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Index Scan using email_contrib_pkey on email_contrib (cost=0.00..100055905.62 rows=130969400 width=24)\n(1 row)\n\nstats=# set enable_seqscan=on;\nSET\nstats=# set sort_mem=1000;\nSET\nstats=# explain select * from email_contrib order by project_id, id, date;\n QUERY PLAN \n----------------------------------------------------------------------------------\n Sort (cost=28542316.63..28869740.13 rows=130969400 width=24)\n Sort Key: project_id, id, date\n -> Seq Scan on email_contrib (cost=0.00..2143954.00 rows=130969400 width=24)\n(3 rows)\n\nstats=# \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 22:40:41 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "Michael Fuhr wrote:\n\n>On Mon, Apr 18, 2005 at 10:44:43AM -0500, Dave Held wrote:\n> \n>\n>>>I thought that an index can be used for sorting.\n>>>I'm a little confused about the following result:\n>>>\n>>>create index OperationsName on Operations(cOperationName);\n>>>explain SELECT * FROM Operations ORDER BY cOperationName;\n>>> QUERY PLAN\n>>>--------------------------------------------------------------\n>>>---------\n>>> Sort (cost=185.37..189.20 rows=1532 width=498)\n>>> Sort Key: coperationname\n>>> -> Seq Scan on operations (cost=0.00..104.32 rows=1532 width=498)\n>>>(3 rows)\n>>>\n>>>Is this supposed to be so?\n>>> \n>>>\n>>Since you are fetching the entire table, you are touching all the rows.\n>>If the query were to fetch the rows in index order, it would be seeking\n>>all over the table's tracks. By fetching in sequence order, it has a\n>>much better chance of fetching rows in a way that minimizes head seeks.\n>>Since disk I/O is generally 10-100x slower than RAM, the in-memory sort \n>>can be surprisingly slow and still beat indexed disk access. Of course,\n>>this is only true if the table can fit and be sorted entirely in memory\n>>(which, with 1500 rows, probably can).\n>> \n>>\n>\n>Out of curiosity, what are the results of the following queries?\n>(Queries run twice to make sure time differences aren't due to\n>caching.)\n>\n>SET enable_seqscan TO on;\n>SET enable_indexscan TO off;\n>EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n>EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n>\n>SET enable_seqscan TO off;\n>SET enable_indexscan TO on;\n>EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n>EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n>\n>SELECT version();\n>\n>With 1500 rows of random data, I consistently see better performance\n>with an index scan (about twice as fast as a sequence scan), and\n>the planner uses an index scan if it has a choice (i.e., when\n>enable_seqscan and enable_indexscan are both on). But my test case\n>and postgresql.conf settings might be different enough from yours\n>to account for different behavior.\n>\n> \n>\nHere is the output from the statements above. 
I know the times seem too \nsmall to care, but what triggered my question is the fact that in the \nlogs there are a lot of lines like (i replaced the list of 43 fields \nwith *).\nI use ODBC (8.0.1.1) and to change the application to cache the table \nisn't feasible.\n\n2005-04-19 10:07:05 LOG: duration: 937.000 ms statement: PREPARE \n\"_PLAN35b0068\" as SELECT * FROM Operations ORDER BY \ncOperationName;EXECUTE \"_PLAN35b0068\"\n2005-04-19 10:07:09 LOG: duration: 1344.000 ms statement: PREPARE \n\"_PLAN35b0068\" as SELECT * FROM Operations ORDER BY \ncOperationName;EXECUTE \"_PLAN35b0068\"\n2005-04-19 10:07:15 LOG: duration: 1031.000 ms statement: PREPARE \n\"_PLAN35b0068\" as SELECT * FROM Operations ORDER BY \ncOperationName;EXECUTE \"_PLAN35b0068\"\n2005-04-19 10:07:19 LOG: duration: 734.000 ms statement: PREPARE \n\"_PLAN35b0068\" as SELECT * FROM Operations ORDER BY \ncOperationName;EXECUTE \"_PLAN35b0068\"\n\nThe times reported by explain analyze are so small though, the intervals \nreported in pg_log are more real,\n\n\ntkp=# SET enable_seqscan TO on;\nSET\ntkp=# SET enable_indexscan TO off;\nSET\ntkp=# EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Sort (cost=185.37..189.20 rows=1532 width=498) (actual \ntime=235.000..235.000 rows=1532 loops=1)\n Sort Key: coperationname\n -> Seq Scan on operations (cost=0.00..104.32 rows=1532 width=498) \n(actual time=0.000..124.000 rows=1532 loops=1)\n Total runtime: 267.000 ms\n(4 rows)\n\ntkp=# EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Sort (cost=185.37..189.20 rows=1532 width=498) (actual \ntime=16.000..16.000 rows=1532 loops=1)\n Sort Key: coperationname\n -> Seq Scan on operations (cost=0.00..104.32 rows=1532 width=498) \n(actual time=0.000..0.000 rows=1532 loops=1)\n Total runtime: 31.000 ms\n(4 rows)\n\ntkp=#\ntkp=# SET enable_seqscan TO off;\nSET\ntkp=# SET enable_indexscan TO on;\nSET\ntkp=# EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using operationsname on operations (cost=0.00..350.01 \nrows=1532 width=498) (actual time=16.000..62.000 rows=1532 loops=1)\n Total runtime: 62.000 ms\n(2 rows)\n\ntkp=# EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using operationsname on operations (cost=0.00..350.01 \nrows=1532 width=498) (actual time=0.000..16.000 rows=1532 loops=1)\n Total runtime: 16.000 ms\n(2 rows)\n\ntkp=#\ntkp=# SELECT version();\n version\n------------------------------------------------------------------------------------------\n PostgreSQL 8.0.2 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) \n3.4.2 (mingw-special)\n(1 row)\n\n\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.308 / Virus Database: 266.9.18 - Release Date: 4/19/2005\n\n",
"msg_date": "Wed, 20 Apr 2005 20:10:42 +0300",
"msg_from": "Andrei Gaspar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "I've run some performance tests. The actual test case is at\nhttp://stats.distributed.net/~decibel/timing.sql, and the results are at\nhttp://stats.distributed.net/~decibel/timing.log. In a nutshell, doing\nan index scan appears to be about 2x faster than a sequential scan and a\nsort.\n\nSomething else of interest is that going from 50M of sort memory to 3G\nsped the sort up by 900 seconds. If someone wants to record data about\nthe effect of sort_mem on on-disk sorts somewhere (maybe in the docs?) I\ncan run some more tests for that case.\n\nIn any case, it's clear that the planner is making the wrong choice\nhere. BTW, changing random_page_cost to 3 or 4 doesn't change the plan.\n\nOn Tue, Apr 19, 2005 at 10:40:41PM -0500, Jim C. Nasby wrote:\n> On Tue, Apr 19, 2005 at 11:01:26PM -0400, Tom Lane wrote:\n> > \"Jim C. Nasby\" <[email protected]> writes:\n> > > Actually, the planner (at least in 7.4) isn't smart enough to consider\n> > > if the sort would fit in memory or not.\n> > \n> > Really? Have you read cost_sort()?\n> > \n> > It's certainly possible that the calculation is all wet, but to claim\n> > that the issue is not considered is just wrong.\n> \n> To be fair, no, I haven't looked at the code. This is based strictly on\n> anecdotal evidence on a 120M row table. I'm currently running a test to\n> see how an index scan compares to a seqscan. I also got the same results\n> when I added a where clause that would restrict it to about 7% of the\n> table.\n> \n> Actually, after running some tests (below), the plan cost does change\n> when I change sort_mem (it was originally 50000).\n> \n> stats=# \\d email_contrib\n> Table \"public.email_contrib\"\n> Column | Type | Modifiers \n> ------------+---------+-----------\n> project_id | integer | not null\n> id | integer | not null\n> date | date | not null\n> team_id | integer | \n> work_units | bigint | not null\n> Indexes:\n> \"email_contrib_pkey\" primary key, btree (project_id, id, date)\n> \"email_contrib__pk24\" btree (id, date) WHERE (project_id = 24)\n> \"email_contrib__pk25\" btree (id, date) WHERE (project_id = 25)\n> \"email_contrib__pk8\" btree (id, date) WHERE (project_id = 8)\n> \"email_contrib__project_date\" btree (project_id, date)\n> Foreign-key constraints:\n> \"fk_email_contrib__id\" FOREIGN KEY (id) REFERENCES stats_participant(id) ON UPDATE CASCADE\n> \"fk_email_contrib__team_id\" FOREIGN KEY (team_id) REFERENCES stats_team(team) ON UPDATE CASCADE\n> \n> stats=# explain select * from email_contrib where project_id=8 order by project_id, id, date;\n> QUERY PLAN \n> --------------------------------------------------------------------------------\n> Sort (cost=3613476.05..3635631.71 rows=8862263 width=24)\n> Sort Key: project_id, id, date\n> -> Seq Scan on email_contrib (cost=0.00..2471377.50 rows=8862263 width=24)\n> Filter: (project_id = 8)\n> (4 rows)\n> \n> stats=# explain select * from email_contrib order by project_id, id, date;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------\n> Sort (cost=25046060.83..25373484.33 rows=130969400 width=24)\n> Sort Key: project_id, id, date\n> -> Seq Scan on email_contrib (cost=0.00..2143954.00 rows=130969400 width=24)\n> (3 rows)\n> \n> stats=# select 8862263::float/130969400;\n> ?column? 
\n> --------------------\n> 0.0676666687027657\n> (1 row)\n> \n> stats=# explain select * from email_contrib where project_id=8 order by project_id, id, date;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------\n> Index Scan using email_contrib_pkey on email_contrib (cost=0.00..6832005.57 rows=8862263 width=24)\n> Index Cond: (project_id = 8)\n> (2 rows)\n> \n> stats=# explain select * from email_contrib order by project_id, id, date;\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------\n> Index Scan using email_contrib_pkey on email_contrib (cost=0.00..100055905.62 rows=130969400 width=24)\n> (1 row)\n> \n> stats=# set enable_seqscan=on;\n> SET\n> stats=# set sort_mem=1000;\n> SET\n> stats=# explain select * from email_contrib order by project_id, id, date;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------\n> Sort (cost=28542316.63..28869740.13 rows=130969400 width=24)\n> Sort Key: project_id, id, date\n> -> Seq Scan on email_contrib (cost=0.00..2143954.00 rows=130969400 width=24)\n> (3 rows)\n> \n> stats=# \n> \n> -- \n> Jim C. Nasby, Database Consultant [email protected] \n> Give your computer some brain candy! www.distributed.net Team #1828\n> \n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 22 Apr 2005 20:54:04 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> I've run some performance tests. The actual test case is at\n> http://stats.distributed.net/~decibel/timing.sql, and the results are at\n> http://stats.distributed.net/~decibel/timing.log. In a nutshell, doing\n> an index scan appears to be about 2x faster than a sequential scan and a\n> sort.\n\n... for one test case, on one platform, with a pretty strong bias to the\nfully-cached state since you ran the test multiple times consecutively.\n\nPast experience has generally been that an explicit sort is quicker,\nso you'll have to pardon me for suspecting that this case may be\natypical. Is the table nearly in order by pkey, by any chance?\n\n> In any case, it's clear that the planner is making the wrong choice\n> here. BTW, changing random_page_cost to 3 or 4 doesn't change the plan.\n\nFeel free to propose better cost equations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Apr 2005 22:08:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index "
},
{
"msg_contents": "On Fri, Apr 22, 2005 at 10:08:06PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > I've run some performance tests. The actual test case is at\n> > http://stats.distributed.net/~decibel/timing.sql, and the results are at\n> > http://stats.distributed.net/~decibel/timing.log. In a nutshell, doing\n> > an index scan appears to be about 2x faster than a sequential scan and a\n> > sort.\n> \n> ... for one test case, on one platform, with a pretty strong bias to the\n> fully-cached state since you ran the test multiple times consecutively.\n\nThe table is 6.5G and the box only has 4G, so I suspect it's not cached.\n\n> Past experience has generally been that an explicit sort is quicker,\n> so you'll have to pardon me for suspecting that this case may be\n> atypical. Is the table nearly in order by pkey, by any chance?\n\nIt might be, but there's no way I can check with a multi-key index,\nright?\n\nI'll re-run the tests with a single column index on a column with a\ncorrelation of 16%\n\n> > In any case, it's clear that the planner is making the wrong choice\n> > here. BTW, changing random_page_cost to 3 or 4 doesn't change the plan.\n> \n> Feel free to propose better cost equations.\n\nWhere would I look in code to see what's used now?\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 22 Apr 2005 22:00:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n>> Feel free to propose better cost equations.\n\n> Where would I look in code to see what's used now?\n\nAll the gold is hidden in src/backend/optimizer/path/costsize.c.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Apr 2005 01:00:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index "
},
{
"msg_contents": "On Sat, Apr 23, 2005 at 01:00:40AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> >> Feel free to propose better cost equations.\n> \n> > Where would I look in code to see what's used now?\n> \n> All the gold is hidden in src/backend/optimizer/path/costsize.c.\n> \n> \t\t\tregards, tom lane\n\nAfter setting up a second test that orders the table by a highly\nnon-correlated column, I think I've found part of the problem. The\nestimated index scan cost for (project_id, id, date) is\n0.00..100117429.34 while the estimate for work_units is\n0.00..103168408.62; almost no difference, even though project_id\ncorrelation is .657 while work_units correlation is .116. This is with\nrandom_page_cost set to 1.1; if I set it much higher I can't force the\nindex scan (BTW, would it make more sense to set the cost of a disable\nseqscan to either pages or tuples * disable_cost?), but even with only a\n10% overhead on random page fetches it seems logical that the two\nestimates should be much farther apart. If you look at the results of\nthe initial run (http://stats.distributed.net/~decibel/timing.log),\nyou'll see that the cost of the index scan is way overestimated. Looking\nat the code, the runcost is calculated as\n\n run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n\nwhere csquared is indexCorrelation^2. Why is indexCorrelation squared?\nThe comments say a linear interpolation between min_IO and max_IO is\nused, but ISTM that if it was linear then instead of csquared,\nindexCorrelation would just be used.\n\nBy the way, I'm running a test for ordering by work_units right now, and\nI included code to allocate and zero 3.3G of memory (out of 4G) between\nsteps to clear the kernel buffers. This brought the seqscan times up to\n~6800 seconds, so it seems there was in fact buffering going on in the\nfirst test. The second test has been running an index scan for over 14\nhours now, so clearly a seqscan+sort is the way to go for a highly\nuncorrelated index (at least one that won't fit in\neffective_cache_size).\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Sun, 24 Apr 2005 17:01:46 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "On Sun, 24 Apr 2005 17:01:46 -0500, \"Jim C. Nasby\" <[email protected]>\nwrote:\n>> >> Feel free to propose better cost equations.\n\nI did. More than once.\n\n>estimated index scan cost for (project_id, id, date) is\n>0.00..100117429.34 while the estimate for work_units is\n>0.00..103168408.62; almost no difference,\n\n~3%\n\n> even though project_id correlation is .657\n\nThis is divided by the number of index columns, so the index correlation\nis estimated to be 0.219.\n\n> while work_units correlation is .116.\n\nSo csquared is 0.048 and 0.013, respectively, and you get a result not\nfar away from the upper bound in both cases. The cost estimations\ndiffer by only 3.5% of (max_IO_cost - min_IO_cost).\n\n>you'll see that the cost of the index scan is way overestimated. Looking\n>at the code, the runcost is calculated as\n>\n> run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n>\n>where csquared is indexCorrelation^2. Why is indexCorrelation squared?\n>The comments say a linear interpolation between min_IO and max_IO is\n>used, but ISTM that if it was linear then instead of csquared,\n>indexCorrelation would just be used.\n\nIn my tests I got much more plausible results with\n\n\t1 - (1 - abs(correlation))^2\n\nJim, are you willing to experiment with one or two small patches of\nmine? What version of Postgres are you running?\n\nServus\n Manfred\n",
"msg_date": "Wed, 11 May 2005 17:59:10 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "First, I've got some updated numbers up at\nhttp://stats.distributed.net/~decibel/\n\ntiming2.log shows that the planner actually under-estimates an index\nscan by several orders of magnitude. Granted, random_page_cost is set to\nan unrealistic 1.1 (otherwise I can't force the index scan), but that\nalone isn't enough to explain the difference.\n\nOn Wed, May 11, 2005 at 05:59:10PM +0200, Manfred Koizar wrote:\n> On Sun, 24 Apr 2005 17:01:46 -0500, \"Jim C. Nasby\" <[email protected]>\n> wrote:\n> >> >> Feel free to propose better cost equations.\n> \n> I did. More than once.\n> \n> >estimated index scan cost for (project_id, id, date) is\n> >0.00..100117429.34 while the estimate for work_units is\n> >0.00..103168408.62; almost no difference,\n> \n> ~3%\n> \n> > even though project_id correlation is .657\n> \n> This is divided by the number of index columns, so the index correlation\n> is estimated to be 0.219.\n\nThat seems like a pretty bad assumption to make.\n\nIs there any eta on having statistics for multi-column indexes?\n\n> >you'll see that the cost of the index scan is way overestimated. Looking\n> >at the code, the runcost is calculated as\n> >\n> > run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n> >\n> >where csquared is indexCorrelation^2. Why is indexCorrelation squared?\n> >The comments say a linear interpolation between min_IO and max_IO is\n> >used, but ISTM that if it was linear then instead of csquared,\n> >indexCorrelation would just be used.\n> \n> In my tests I got much more plausible results with\n> \n> \t1 - (1 - abs(correlation))^2\n\nWhat's the theory behind that?\n\nAnd I'd still like to know why correlation squared is used.\n\n> Jim, are you willing to experiment with one or two small patches of\n> mine? What version of Postgres are you running?\n\nIt depends on the patches, since this is a production machine. Currently\nit's running 7.4.*mumble*, though I need to upgrade to 8, which I was\nintending to do via slony. Perhaps the best thing would be for me to get\nthat setup and we can experiment against version 8.0.3.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Wed, 11 May 2005 16:15:16 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "On Wed, 11 May 2005 16:15:16 -0500, \"Jim C. Nasby\" <[email protected]>\nwrote:\n>> This is divided by the number of index columns, so the index correlation\n>> is estimated to be 0.219.\n>\n>That seems like a pretty bad assumption to make.\n\nAny assumption we make without looking at entire index tuples has to be\nbad. A new GUC variable secondary_correlation introduced by my patch at\nleast gives you a chance to manually control the effects of additional\nindex columns.\n\n>> In my tests I got much more plausible results with\n>> \n>> \t1 - (1 - abs(correlation))^2\n>\n>What's the theory behind that?\n\nThe same as for csquared -- pure intuition. But the numbers presented\nin http://archives.postgresql.org/pgsql-hackers/2002-10/msg00072.php\nseem to imply that in this case my intiution is better ;-)\n\nActually above formula was not proposed in that mail. AFAIR it gives\nresults between p2 and p3.\n\n>And I'd still like to know why correlation squared is used.\n\nOn Wed, 02 Oct 2002 18:48:49 -0400, Tom Lane <[email protected]> wrote:\n|The indexCorrelation^2 algorithm was only a quick hack with no theory\n|behind it :-(.\n\n>It depends on the patches, since this is a production machine. Currently\n>it's running 7.4.*mumble*,\n\nThe patch referenced in\nhttp://archives.postgresql.org/pgsql-hackers/2003-08/msg00931.php is\nstill available. It doesn't touch too many places and should be easy to\nreview. I'm using it and its predecessors in production for more than\ntwo years. Let me know, if the 74b1 version does not apply cleanly to\nyour source tree.\n\nServus\n Manfred\n",
"msg_date": "Thu, 12 May 2005 20:54:48 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
},
{
"msg_contents": "On Thu, May 12, 2005 at 08:54:48PM +0200, Manfred Koizar wrote:\n> On Wed, 11 May 2005 16:15:16 -0500, \"Jim C. Nasby\" <[email protected]>\n> wrote:\n> >> This is divided by the number of index columns, so the index correlation\n> >> is estimated to be 0.219.\n> >\n> >That seems like a pretty bad assumption to make.\n> \n> Any assumption we make without looking at entire index tuples has to be\n> bad. A new GUC variable secondary_correlation introduced by my patch at\n> least gives you a chance to manually control the effects of additional\n> index columns.\n\nIt seems it would be much better to gather statistics on any\nmulti-column indexes, but I know that's probably beyond what's\nreasonable for your patch.\n\nAlso, my data (http://stats.distributed.net/~decibel) indicates that\nmax_io isn't high enough. Look specifically at timing2.log compared to\ntiming.log. Thouggh, it is possibile that this is because of having\nrandom_page_cost set to 1.1 (if I set it much higher I can't force the\nindex scan because the index estimate actually exceeds the cost of the\nseqscan with the disable cost added in).\n\n> >It depends on the patches, since this is a production machine. Currently\n> >it's running 7.4.*mumble*,\n> \n> The patch referenced in\n> http://archives.postgresql.org/pgsql-hackers/2003-08/msg00931.php is\n> still available. It doesn't touch too many places and should be easy to\n> review. I'm using it and its predecessors in production for more than\n> two years. Let me know, if the 74b1 version does not apply cleanly to\n> your source tree.\n\nLooks reasonable; I'll give it a shot on 8.0 once I have replication\nhappening.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Sat, 14 May 2005 08:37:22 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort and index"
}
] |
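The cost interpolation argued over in this thread can be made concrete with a small illustrative query (not from the original messages; the sample correlations are the ones Jim reported, and pg_stats is where the planner's per-column correlation figures live). A weight near 1 pulls the index-scan estimate toward min_IO_cost, while a weight near 0 leaves it near max_IO_cost, which is why squaring keeps both of Jim's indexes close to the worst-case estimate:

-- Per-column correlations as gathered by ANALYZE.
SELECT attname, correlation FROM pg_stats WHERE tablename = 'email_contrib';

-- Compare the current weighting (correlation squared) with Manfred's proposal;
-- the weight plugs into: run_cost += max_IO_cost + weight * (min_IO_cost - max_IO_cost)
SELECT c AS correlation,
       c ^ 2                AS csquared_weight,
       1 - (1 - abs(c)) ^ 2 AS manfred_weight
FROM (SELECT 0.116::float8 AS c
      UNION ALL SELECT 0.219
      UNION ALL SELECT 0.657) AS samples;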
[
{
"msg_contents": "Don't you think \"optimal stripe width\" would be\na good question to research the binaries for? I'd\nthink that drives the answer, largely. (uh oh, pun alert)\n\nEG, oracle issues IO requests (this may have changed _just_ \nrecently) in 64KB chunks, regardless of what you ask for. \nSo when I did my striping (many moons ago, when the Earth \nwas young...) I did it in 128KB widths, and set the oracle \n\"multiblock read count\" according. For oracle, any stripe size\nunder 64KB=stupid, anything much over 128K/258K=wasteful. \n\nI am eager to find out how PG handles all this. \n\n\n- Ross\n\n\n\np.s. <Brooklyn thug accent> 'You want a database record? I \n gotcher record right here' http://en.wikipedia.org/wiki/Akashic_Records\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Alex Turner\nSent: Monday, April 18, 2005 2:21 PM\nTo: Jacques Caron\nCc: Greg Stark; William Yu; [email protected]\nSubject: Re: [PERFORM] How to improve db performance with $7K?\n\n\nSo I wonder if one could take this stripe size thing further and say that a larger stripe size is more likely to result in requests getting served parallized across disks which would lead to increased performance?\n\nAgain, thanks to all people on this list, I know that I have learnt a _hell_ of alot since subscribing.\n\nAlex Turner\nnetEconomist\n\nOn 4/18/05, Alex Turner <[email protected]> wrote:\n> Ok - well - I am partially wrong...\n> \n> If you're stripe size is 64Kb, and you are reading 256k worth of data, \n> it will be spread across four drives, so you will need to read from \n> four devices to get your 256k of data (RAID 0 or 5 or 10), but if you \n> are only reading 64kb of data, I guess you would only need to read \n> from one disk.\n> \n> So my assertion that adding more drives doesn't help is pretty \n> wrong... particularly with OLTP because it's always dealing with \n> blocks that are smaller that the stripe size.\n> \n> Alex Turner\n> netEconomist\n> \n> On 4/18/05, Jacques Caron <[email protected]> wrote:\n> > Hi,\n> >\n> > At 18:56 18/04/2005, Alex Turner wrote:\n> > >All drives are required to fill every request in all RAID levels\n> >\n> > No, this is definitely wrong. In many cases, most drives don't \n> > actually have the data requested, how could they handle the request?\n> >\n> > When reading one random sector, only *one* drive out of N is ever \n> > used to service any given request, be it RAID 0, 1, 0+1, 1+0 or 5.\n> >\n> > When writing:\n> > - in RAID 0, 1 drive\n> > - in RAID 1, RAID 0+1 or 1+0, 2 drives\n> > - in RAID 5, you need to read on all drives and write on 2.\n> >\n> > Otherwise, what would be the point of RAID 0, 0+1 or 1+0?\n> >\n> > Jacques.\n> >\n> >\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n",
"msg_date": "Mon, 18 Apr 2005 18:41:37 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 06:41:37PM -0000, Mohan, Ross wrote:\n> Don't you think \"optimal stripe width\" would be\n> a good question to research the binaries for? I'd\n> think that drives the answer, largely. (uh oh, pun alert)\n> \n> EG, oracle issues IO requests (this may have changed _just_ \n> recently) in 64KB chunks, regardless of what you ask for. \n> So when I did my striping (many moons ago, when the Earth \n> was young...) I did it in 128KB widths, and set the oracle \n> \"multiblock read count\" according. For oracle, any stripe size\n> under 64KB=stupid, anything much over 128K/258K=wasteful. \n> \n> I am eager to find out how PG handles all this. \n\nAFAIK PostgreSQL requests data one database page at a time (normally\n8k). Of course the OS might do something different.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 19:12:24 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "I wonder if thats something to think about adding to Postgresql? A\nsetting for multiblock read count like Oracle (Although having said\nthat I believe that Oracle natively caches pages much more\naggressively that postgresql, which allows the OS to do the file\ncaching).\n\nAlex Turner\nnetEconomist\n\nP.S. Oracle changed this with 9i, you can change the Database block\nsize on a tablespace by tablespace bassis making it smaller for OLTP\ntablespaces and larger for Warehousing tablespaces (at least I think\nit's on a tablespace, might be on a whole DB).\n\nOn 4/19/05, Jim C. Nasby <[email protected]> wrote:\n> On Mon, Apr 18, 2005 at 06:41:37PM -0000, Mohan, Ross wrote:\n> > Don't you think \"optimal stripe width\" would be\n> > a good question to research the binaries for? I'd\n> > think that drives the answer, largely. (uh oh, pun alert)\n> >\n> > EG, oracle issues IO requests (this may have changed _just_\n> > recently) in 64KB chunks, regardless of what you ask for.\n> > So when I did my striping (many moons ago, when the Earth\n> > was young...) I did it in 128KB widths, and set the oracle\n> > \"multiblock read count\" according. For oracle, any stripe size\n> > under 64KB=stupid, anything much over 128K/258K=wasteful.\n> >\n> > I am eager to find out how PG handles all this.\n> \n> AFAIK PostgreSQL requests data one database page at a time (normally\n> 8k). Of course the OS might do something different.\n> --\n> Jim C. Nasby, Database Consultant [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n> \n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n",
"msg_date": "Wed, 20 Apr 2005 13:40:25 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
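For anyone wanting to confirm the "one database page at a time (normally 8k)" figure mentioned above on their own server: the page size is fixed when PostgreSQL is built, and reasonably recent releases expose it as a read-only setting (an illustrative check, not from the thread):

SHOW block_size;                        -- normally 8192
SELECT current_setting('block_size');   -- same value, usable inside a query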
[
{
"msg_contents": "All,\n\nA couple of questions regarding REINDEX command:\n\nRunning PostgreSQL 7.4.2 on Solaris.\n\n1) When is it necessary to run REINDEX or drop/create\nan index? All I could really find in the docs is:\n\n\"In some situations it is worthwhile to rebuild\nindexes periodically with the REINDEX command. (There\nis also contrib/reindexdb which can reindex an entire\ndatabase.) However, PostgreSQL 7.4 has substantially\nreduced the need for this activity compared to earlier\nreleases.\"\n\nWhat are these situations? We have a database with\nsome large tables. Currently we reindex (actually\ndrop/create) nightly. But as the tables have grown\nthis has become prohibitively time-consuming. \nAccording to the above comment it may not be necessary\nat all. \n\n2) If reindexing is necessary, how can this be done in\na non-obtrusive way in a production environment. Our\ndatabase is being updated constantly. REINDEX locks\nclient apps out while in progress. Same with \"CREATE\nINDEX\" when we drop/create. The table can have over\n10 million row. Recreating the indexes seems to take\nhours. This is too long to lock the client apps out. \nIs there any other solution?\n\nthanks,\n\nBill\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nMake Yahoo! your home page \nhttp://www.yahoo.com/r/hs\n",
"msg_date": "Mon, 18 Apr 2005 12:21:42 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question on REINDEX"
},
{
"msg_contents": "Bill,\n\n> 1) When is it necessary to run REINDEX or drop/create\n> an index? All I could really find in the docs is:\n\nIf you need to VACUUM FULL, you need to REINDEX as well. For example, if you \ndrop millions of rows from a table.\n\n> 2) If reindexing is necessary, how can this be done in\n> a non-obtrusive way in a production environment. Our\n> database is being updated constantly. REINDEX locks\n> client apps out while in progress. Same with \"CREATE\n> INDEX\" when we drop/create. The table can have over\n> 10 million row. Recreating the indexes seems to take\n> hours. This is too long to lock the client apps out.\n> Is there any other solution?\n\nBetter to up your max_fsm_pages and do regular VACUUMs regularly and \nfrequently so that you don't have to REINDEX at all.\n\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 18 Apr 2005 12:33:56 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 12:21:42 -0700,\n Bill Chandler <[email protected]> wrote:\n> \n> Running PostgreSQL 7.4.2 on Solaris.\n> \n> 1) When is it necessary to run REINDEX or drop/create\n> an index? All I could really find in the docs is:\n> \n> \"In some situations it is worthwhile to rebuild\n> indexes periodically with the REINDEX command. (There\n> is also contrib/reindexdb which can reindex an entire\n> database.) However, PostgreSQL 7.4 has substantially\n> reduced the need for this activity compared to earlier\n> releases.\"\n\nIn pathologic cases it is possible to have a lot of empty space on a lot\nof your index pages. Reindexing would change that to a smaller number.\nIn earlier versions, I think it was possible to have completely empty\npages and this happened for patterns of use (new values monotonically\nincreasing, oldest values deleted first) that were actually seen in\npractice.\n",
"msg_date": "Mon, 18 Apr 2005 14:53:58 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> 1) When is it necessary to run REINDEX or drop/create\n>> an index? All I could really find in the docs is:\n\n> If you need to VACUUM FULL, you need to REINDEX as well. For example, if you\n> drop millions of rows from a table.\n\nThat's probably a pretty good rule of thumb. It's worth noting that\nVACUUM FULL tends to actively bloat indexes, not reduce them in size,\nbecause it has to create new index entries for the rows it moves before\nit can delete the old ones. So if a VACUUM FULL moves many rows you\nare likely to see the indexes get bigger not smaller.\n\n> Better to up your max_fsm_pages and do regular VACUUMs regularly and \n> frequently so that you don't have to REINDEX at all.\n\nYes, definitely. Also consider using CLUSTER rather than VACUUM FULL\nwhen you need to clean up after massive deletions from a table. It's\nnot any less intrusive in terms of locking, but it's often faster and it\navoids the index bloat problem (since it effectively does a REINDEX).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Apr 2005 16:13:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX "
}
] |
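A hedged sketch of the maintenance pattern recommended in this thread (the table and index names are hypothetical, and relpages is only refreshed by VACUUM/ANALYZE, so treat the sizes as approximations):

-- Watch table and index sizes to judge whether bloat justifies an exclusive lock.
SELECT relname, relkind, relpages
FROM pg_class
WHERE relname IN ('mytable', 'mytable_pkey');

-- Routine case: frequent plain VACUUMs (with a large enough max_fsm_pages) keep
-- free space reusable without locking out clients.
VACUUM ANALYZE mytable;

-- After a massive delete: CLUSTER rewrites the table in index order and rebuilds
-- its indexes, avoiding the index bloat VACUUM FULL tends to cause.
CLUSTER mytable_pkey ON mytable;  -- 7.4/8.0 syntax; newer releases spell it CLUSTER mytable USING mytable_pkey

-- Or rebuild just the indexes.
REINDEX TABLE mytable;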
[
{
"msg_contents": "All,\n\nIf I run the command \"vacuumdb mydb\" I understand that\nit does some disk space recovery (but not as much as\n\"vacuumdb --full mydb\"). \n\nQuestion: if I run the command \"vacuumdb --analyze\nmydb\" does it still do the aforementioned disk space\nrecovery AS WELL AS update query planning statistics? \nOr are those two completely separate operations\nrequiring separate invocations of 'vacuumdb'.\n\nthanks,\n\nBill\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nPlan great trips with Yahoo! Travel: Now over 17,000 guides!\nhttp://travel.yahoo.com/p-travelguide\n",
"msg_date": "Mon, 18 Apr 2005 12:27:08 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question on vacuumdb"
},
{
"msg_contents": "On Mon, Apr 18, 2005 at 12:27:08 -0700,\n Bill Chandler <[email protected]> wrote:\n> All,\n> \n> If I run the command \"vacuumdb mydb\" I understand that\n> it does some disk space recovery (but not as much as\n> \"vacuumdb --full mydb\"). \n\nYou are better off not using vacuum full unless some unusual event has\nbloated your database. By running normal vacuums often enough (and with\na large enough fsm setting) your database should reach a steady state size.\n\n> Question: if I run the command \"vacuumdb --analyze\n> mydb\" does it still do the aforementioned disk space\n> recovery AS WELL AS update query planning statistics? \n> Or are those two completely separate operations\n> requiring separate invocations of 'vacuumdb'.\n\nIt is better to do both with one command.\n",
"msg_date": "Mon, 18 Apr 2005 14:58:19 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on vacuumdb"
}
] |
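For reference, the SQL behind the commands discussed above (illustrative; mytable is a placeholder): a single pass both recovers dead-tuple space for reuse and refreshes planner statistics, so "vacuumdb --analyze mydb" does not need to be split into two runs.

VACUUM ANALYZE;           -- whole database: space recovery plus fresh statistics
VACUUM ANALYZE mytable;   -- or restricted to one table
-- VACUUM FULL mytable;   -- only for unusual bloat; takes an exclusive lock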
[
{
"msg_contents": "\nWe have been using Postgresql for many years now...\nWe have always used it with the native OS it was build from, FreeBSD.\nFreeBSD is rock solid stable. Very reliable.\n\nWith so many rumors about Linux being faster especialy the 2.6.x\nkernel, I have decided to give it another try. I have not used Linux in\n6 years. The last linux I used was 5.2.\n\nI tried slackware 10.1 and Gentoo 2005.0 both at the same time.\nI have used different file system seems rumors has it that JFS is\nfaster than ext3.\n\nAfter running benchmarks after benchmarks, I concluded Linux 2.6.x\nkernel is indeed faster. Without much details I would guess Linux is\nabout 30% - 50% faster according to pgbench. \n\nFurthermore I like Gentoo very much since it allows lots of\noptimizations while compiling.\n\nStability was not as good as Freebsd. Under heavy loads I have seen\nhangs and riad drivers are not as stable as Freebsd.\n\nServer performance between intel and amd was also very different. Intel\nserver performance for both pentium 4 and xeon was about the same\nbetween Linux and BSD. But AMD cpu shows large speed improvement under\nLinux.\n\nThis is just from my experience that I would like to share for other\nPostgresql users.\n\nRegards,\n\nAndrew\n[url]http://www.PriceComparison.com[/url]\nOnline Shopping Starts Here!\n\n\n\n--\nPriceComparison.com\n------------------------------------------------------------------------\nPosted via http://www.codecomments.com\n------------------------------------------------------------------------\n \n",
"msg_date": "Tue, 19 Apr 2005 00:21:06 -0500",
"msg_from": "PriceComparison.com <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql faster in Linux than FreeBSD?"
}
] |
[
{
"msg_contents": "\n> \n> Josh Berkus <[email protected]> writes:\n> >> 1) When is it necessary to run REINDEX or drop/create\n> >> an index? All I could really find in the docs is:\n> \n> > If you need to VACUUM FULL, you need to REINDEX as well. \n> For example, \n> > if you drop millions of rows from a table.\n> \n> That's probably a pretty good rule of thumb. It's worth \n> noting that VACUUM FULL tends to actively bloat indexes, not \n> reduce them in size, because it has to create new index \n> entries for the rows it moves before it can delete the old \n> ones. So if a VACUUM FULL moves many rows you are likely to \n> see the indexes get bigger not smaller.\n> \n\nIs my current understanding correct:\n\n1) VACUUM defragments each page locally - moves free space to the end of\npage.\n\n2) VACUUM FULL defragments table globally - tries to fill up all\npartially free pages and deletes all resulting empty pages.\n\n3) Both VACUUM and VACUUM FULL do only local defragment for indexes.\n\n4) If you want indexes to become fully defragmented, you need to\nREINDEX.\n\n\nIf you happen to use triggers for denormalization, like I do, then you\nhave a lot of updates, which means that tables and indexes become quicky\ncluttered with pages, which contain mostly dead tuples. If those tables\nand indexes fill up shared buffers, then PostgreSQL slows down, because\nit has to do a lot more IO than normal. Regular VACUUM FULL helped, but\nI needed REINDEX as well, otherwise indexes grew bigger than tables\nitself!\n\n> > Better to up your max_fsm_pages and do regular VACUUMs regularly and\n> > frequently so that you don't have to REINDEX at all.\n> \n> Yes, definitely. Also consider using CLUSTER rather than \n> VACUUM FULL when you need to clean up after massive deletions \n> from a table. It's not any less intrusive in terms of \n> locking, but it's often faster and it avoids the index bloat \n> problem (since it effectively does a REINDEX).\n> \n\nHmm, thanks for a tip. BTW, is output of \n\nselect count(1), sum(relpages) from pg_class where relkind in\n('r','i','t')\n\ngood estimate for max_fsm_relations and max_fsm_pages?\nAre these parameters used only during VACUUM or in runtime too?\n\n Tambet\n",
"msg_date": "Tue, 19 Apr 2005 11:33:06 +0300",
"msg_from": "\"Tambet Matiisen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question on REINDEX "
},
{
"msg_contents": "\"Tambet Matiisen\" <[email protected]> writes:\n> Is my current understanding correct:\n\n> 1) VACUUM defragments each page locally - moves free space to the end of\n> page.\n\n> 2) VACUUM FULL defragments table globally - tries to fill up all\n> partially free pages and deletes all resulting empty pages.\n\nBoth versions of VACUUM do within-page defragmentation. Also, both\nversions will remove entirely-empty pages at the end of a table.\nThe difference is that VACUUM FULL actively attempts to make pages\nat the end empty, by moving their contents into free space in earlier\npages. Plain VACUUM never does cross-page data movement, which is\nhow come it doesn't need as strong a lock.\n\nBTW, VACUUM FULL does the data movement back-to-front, and stops as soon\nas it finds a tuple it cannot move down; which is a reasonable strategy\nsince the goal is merely to make the file shorter. But it's entirely\nlikely that there will be lots of empty space left at the end. For\ninstance the final state could have one 4K tuple in the last page and\nup to 4K-1 free bytes in every earlier page.\n\n> 3) Both VACUUM and VACUUM FULL do only local defragment for indexes.\n\n> 4) If you want indexes to become fully defragmented, you need to\n> REINDEX.\n\nI don't think \"defragment\" is a notion that applies to indexes, at least\nnot in the same way as for tables. It's true that there is no\ncross-page data movement in either case. In the last release or two\nwe've been able to recognize and recycle entirely-empty pages in both\nbtree and hash indexes, but such pages are almost never returned to the\nOS; they're put on a freelist for re-use within the index, instead.\n\nIf you allow the table to grow to much more than its \"normal\" size,\nie, you allow many dead tuples to be formed, then getting back to\n\"normal\" size is going to require VACUUM FULL + REINDEX (or you can use\nCLUSTER or some varieties of ALTER TABLE). This is not the recommended\nmaintenance process however. Sufficiently frequent plain VACUUMs should\ngenerally hold the free space to a tolerable level without requiring\nany exclusive locking.\n\n> Hmm, thanks for a tip. BTW, is output of \n> select count(1), sum(relpages) from pg_class where relkind in\n> ('r','i','t')\n> good estimate for max_fsm_relations and max_fsm_pages?\n\nWithin that one database, yes --- don't forget you must sum these\nnumbers across all DBs in the cluster. Also you need some slop\nin the max_fsm_pages setting because of quantization in the space\nusage. It's probably easier to let VACUUM VERBOSE do the calculation\nfor you.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Apr 2005 10:06:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX "
},
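A minimal sketch of the CLUSTER route mentioned above, with made-up table and index names (assume a table "orders" with a primary-key index "orders_pkey"); it only illustrates the 8.0-era syntax, not a tested recipe:

    -- Rewrites the heap in index order and rebuilds the table's indexes,
    -- compacting both under an exclusive lock (as noted above, no less
    -- intrusive than VACUUM FULL in terms of locking).
    CLUSTER orders_pkey ON orders;
    -- Refresh planner statistics afterwards, since the physical ordering changed.
    ANALYZE orders;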
{
"msg_contents": "Tambet,\n\n> Hmm, thanks for a tip. BTW, is output of\n>\n> select count(1), sum(relpages) from pg_class where relkind in\n> ('r','i','t')\n\nWell, if you do that for all databases in the cluster, it's the number you \nstart with. However, setting FSM_pages to that would be assuming that you \nexcpected 100% of the rows to be replaced by UPDATES or DELETEs before you \nran VACUUM. I generally run VACUUM a little sooner than that.\n\nSee the end portion of:\nhttp://www.powerpostgresql.com/PerfList\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 19 Apr 2005 08:57:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> select count(1), sum(relpages) from pg_class where relkind in\n>> ('r','i','t')\n\n> Well, if you do that for all databases in the cluster, it's the number you \n> start with. However, setting FSM_pages to that would be assuming that you \n> excpected 100% of the rows to be replaced by UPDATES or DELETEs before you \n> ran VACUUM. I generally run VACUUM a little sooner than that.\n\nNot at all. What it says is that you expect 100% of the pages to have\nuseful amounts of free space, which is a *much* weaker criterion.\n\nI think you can usually get away with setting max_fsm_pages to less than\nyour actual disk footprint, but I'm not sure how much less. It'd\nprobably depend a lot on your usage pattern --- for instance,\ninsert-only history tables don't need any FSM space.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Apr 2005 12:24:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX "
},
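A hedged sketch of the sizing check being discussed; the query has to be run in every database of the cluster and the results added up before comparing them with the server settings (nothing beyond the standard catalogs and SHOW is assumed):

    -- Relations and pages in the current database that the free space map may need to track.
    SELECT count(*) AS relations, sum(relpages) AS pages
    FROM pg_class
    WHERE relkind IN ('r', 'i', 't');

    -- Current limits to compare the cluster-wide totals against.
    SHOW max_fsm_relations;
    SHOW max_fsm_pages;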
{
"msg_contents": "Tom,\n\n> Not at all. What it says is that you expect 100% of the pages to have\n> useful amounts of free space, which is a *much* weaker criterion.\n\nHmmm. Good point. \n\nThis seems to be another instance where my rule-of-thumb was based on false \nlogic but nevertheless arrived at correct numbers. I've seldom, if ever, set \nFSM_pages above 50% of the pages in the active database ... and never run \nout.\n\nHmmmm .... actually, it seems like, if you are vacuuming regularly, you only \n*do* need to track pages that have been touched by DELETE or UPDATE. Other \npages would have already been vacuumed and not have any useful free space \nleft. Yes?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 19 Apr 2005 10:56:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n>> Not at all. What it says is that you expect 100% of the pages to have\n>> useful amounts of free space, which is a *much* weaker criterion.\n\n> Hmmmm .... actually, it seems like, if you are vacuuming regularly, you only \n> *do* need to track pages that have been touched by DELETE or UPDATE. Other \n> pages would have already been vacuumed and not have any useful free space \n> left. Yes?\n\nWell, the space has to be remembered until it's reused. On the other\nhand, there's nothing that says FSM has to be aware of all the free\nspace available at all times --- the real criterion to avoid bloat\nis that after a VACUUM, enough space is logged in FSM to satisfy all\nthe insertions that will happen before the next VACUUM. So you could\nhave situations where free space is temporarily forgotten (for lack\nof slots in FSM), but other free space gets used instead, and eventually\na later VACUUM re-finds that free space and puts it into FSM.\n\nI think it's true that the more often you vacuum, the less FSM you need,\nbut this doesn't have much to do with how much free space is actually\nout there on disk. It's because you only need enough FSM to record the\nfree space you'll need until the next vacuum.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Apr 2005 14:02:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX "
},
{
"msg_contents": "On Tue, Apr 19, 2005 at 10:06:40AM -0400, Tom Lane wrote:\n\n> BTW, VACUUM FULL does the data movement back-to-front, and stops as soon\n> as it finds a tuple it cannot move down; which is a reasonable strategy\n> since the goal is merely to make the file shorter. But it's entirely\n> likely that there will be lots of empty space left at the end. For\n> instance the final state could have one 4K tuple in the last page and\n> up to 4K-1 free bytes in every earlier page.\n\nAm I right in thinking that vacuum does at least two passes: one\nfront-to-back to find removable tuples, and other back-to-front for\nmovement? Because if it doesn't work this way, it wouldn't relabel\n(change Xmin/Xmax) tuples in early pages. Or does it do something\ndifferent?\n\nI know maintenance_work_mem is used for storing TIDs of to-be-moved\ntuples for index cleanup ... how does it relate to the above?\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Crear es tan dif�cil como ser libre\" (Elsa Triolet)\n",
"msg_date": "Tue, 19 Apr 2005 14:28:26 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Am I right in thinking that vacuum does at least two passes: one\n> front-to-back to find removable tuples, and other back-to-front for\n> movement?\n\nVACUUM FULL, yes. VACUUM only does the first one.\n\n> I know maintenance_work_mem is used for storing TIDs of to-be-moved\n> tuples for index cleanup ... how does it relate to the above?\n\nTIDs of to-be-deleted tuples, actually. Movable tuples aren't stored,\nthey're just found on-the-fly during the back-to-front pass.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Apr 2005 14:34:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question on REINDEX "
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Alex Turner [mailto:[email protected]]\n> Sent: Monday, April 18, 2005 5:50 PM\n> To: Bruce Momjian\n> Cc: Kevin Brown; [email protected]\n> Subject: Re: [PERFORM] How to improve db performance with $7K?\n> \n> Does it really matter at which end of the cable the queueing is done\n> (Assuming both ends know as much about drive geometry etc..)?\n> [...]\n\nThe parenthetical is an assumption I'd rather not make. If my\nperformance depends on my kernel knowing how my drive is laid\nout, I would always be wondering if a new drive is going to \nbreak any of the kernel's geometry assumptions. Drive geometry\ndoesn't seem like a kernel's business any more than a kernel\nshould be able to decode the ccd signal of an optical mouse.\nThe kernel should queue requests at a level of abstraction that\ndoesn't depend on intimate knowledge of drive geometry, and the\ndrive should queue requests on the concrete level where geometry\nmatters. A drive shouldn't guess whether a process is trying to\nread a file sequentially, and a kernel shouldn't guess whether\nsector 30 is contiguous with sector 31 or not.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Tue, 19 Apr 2005 08:34:19 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Whilst I admire your purist approach, I would say that if it is\nbeneficial to performance that a kernel understand drive geometry,\nthen it is worth investigating teaching it how to deal with that!\n\nI was less referrring to the kernel as I was to the controller.\n\nLets say we invented a new protocol that including the drive telling\nthe controller how it was layed out at initialization time so that the\ncontroller could make better decisions about re-ordering seeks. It\nwould be more cost effective to have that set of electronics just once\nin the controller, than 8 times on each drive in an array, which would\nyield better performance to cost ratio. Therefore I would suggest it\nis something that should be investigated. After all, why implemented\nTCQ on each drive, if it can be handled more effeciently at the other\nend by the controller for less money?!\n\nAlex Turner\nnetEconomist\n\nOn 4/19/05, Dave Held <[email protected]> wrote:\n> > -----Original Message-----\n> > From: Alex Turner [mailto:[email protected]]\n> > Sent: Monday, April 18, 2005 5:50 PM\n> > To: Bruce Momjian\n> > Cc: Kevin Brown; [email protected]\n> > Subject: Re: [PERFORM] How to improve db performance with $7K?\n> >\n> > Does it really matter at which end of the cable the queueing is done\n> > (Assuming both ends know as much about drive geometry etc..)?\n> > [...]\n> \n> The parenthetical is an assumption I'd rather not make. If my\n> performance depends on my kernel knowing how my drive is laid\n> out, I would always be wondering if a new drive is going to\n> break any of the kernel's geometry assumptions. Drive geometry\n> doesn't seem like a kernel's business any more than a kernel\n> should be able to decode the ccd signal of an optical mouse.\n> The kernel should queue requests at a level of abstraction that\n> doesn't depend on intimate knowledge of drive geometry, and the\n> drive should queue requests on the concrete level where geometry\n> matters. A drive shouldn't guess whether a process is trying to\n> read a file sequentially, and a kernel shouldn't guess whether\n> sector 30 is contiguous with sector 31 or not.\n> \n> __\n> David B. Held\n> Software Engineer/Array Services Group\n> 200 14th Ave. East, Sartell, MN 56377\n> 320.534.3637 320.253.7800 800.752.8129\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n",
"msg_date": "Wed, 20 Apr 2005 13:04:04 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "The Linux kernel is definitely headed this way. The 2.6 allows for \nseveral different I/O scheduling algorithms. A brief overview about the \ndifferent modes:\n\nhttp://nwc.serverpipeline.com/highend/60400768\n\nAlthough a much older article from the beta-2.5 days, more indepth info \nfrom one of the programmers who developed the AS scheduler and worked on \nthe deadline scheduler:\n\nhttp://kerneltrap.org/node/657\n\nI think I'm going to start testing the deadline scheduler for our data \nprocessing server for a few weeks before trying it on our production \nservers.\n\n\n\n\nAlex Turner wrote:\n> Whilst I admire your purist approach, I would say that if it is\n> beneficial to performance that a kernel understand drive geometry,\n> then it is worth investigating teaching it how to deal with that!\n> \n> I was less referrring to the kernel as I was to the controller.\n> \n> Lets say we invented a new protocol that including the drive telling\n> the controller how it was layed out at initialization time so that the\n> controller could make better decisions about re-ordering seeks. It\n> would be more cost effective to have that set of electronics just once\n> in the controller, than 8 times on each drive in an array, which would\n> yield better performance to cost ratio. Therefore I would suggest it\n> is something that should be investigated. After all, why implemented\n> TCQ on each drive, if it can be handled more effeciently at the other\n> end by the controller for less money?!\n> \n> Alex Turner\n> netEconomist\n> \n> On 4/19/05, Dave Held <[email protected]> wrote:\n> \n>>>-----Original Message-----\n>>>From: Alex Turner [mailto:[email protected]]\n>>>Sent: Monday, April 18, 2005 5:50 PM\n>>>To: Bruce Momjian\n>>>Cc: Kevin Brown; [email protected]\n>>>Subject: Re: [PERFORM] How to improve db performance with $7K?\n>>>\n>>>Does it really matter at which end of the cable the queueing is done\n>>>(Assuming both ends know as much about drive geometry etc..)?\n>>>[...]\n>>\n>>The parenthetical is an assumption I'd rather not make. If my\n>>performance depends on my kernel knowing how my drive is laid\n>>out, I would always be wondering if a new drive is going to\n>>break any of the kernel's geometry assumptions. Drive geometry\n>>doesn't seem like a kernel's business any more than a kernel\n>>should be able to decode the ccd signal of an optical mouse.\n>>The kernel should queue requests at a level of abstraction that\n>>doesn't depend on intimate knowledge of drive geometry, and the\n>>drive should queue requests on the concrete level where geometry\n>>matters. A drive shouldn't guess whether a process is trying to\n>>read a file sequentially, and a kernel shouldn't guess whether\n>>sector 30 is contiguous with sector 31 or not.\n>>\n>>__\n>>David B. Held\n>>Software Engineer/Array Services Group\n>>200 14th Ave. East, Sartell, MN 56377\n>>320.534.3637 320.253.7800 800.752.8129\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 7: don't forget to increase your free space map settings\n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n",
"msg_date": "Wed, 20 Apr 2005 22:02:42 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "Good question. If the SCSI system was moving the head from track 1 to 10, and a request then came in for track 5, could the system make the head stop at track 5 on its way to track 10? That is something that only the controller could do. However, I have no idea if SCSI does that.\n\n|| SCSI, AFAIK, does NOT do this. What SCSI can do is allow \"next\" request insertion into head\n of request queue (queue-jumping), and/or defer request ordering to done by drive per se (queue\n re-ordering). I have looked, in vain, for evidence that SCSI somehow magically \"stops in the\n middle of request to pick up data\" (my words, not yours) \n\nThe only part I am pretty sure about is that real-world experience shows SCSI is better for a mixed I/O environment. Not sure why, exactly, but the command queueing obviously helps, and I am not sure what else does.\n\n|| TCQ is the secret sauce, no doubt. I think NCQ (the SATA version of per se drive request reordering) \n should go a looong way (but not all the way) toward making SATA 'enterprise acceptable'. Multiple \n initiators (e.g. more than one host being able to talk to a drive) is a biggie, too. AFAIK only SCSI\n drives/controllers do that for now. \n\n\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n",
"msg_date": "Tue, 19 Apr 2005 14:34:51 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Mohan, Ross wrote:\n> The only part I am pretty sure about is that real-world experience shows SCSI is better for a mixed I/O environment. Not sure why, exactly, but the command queueing obviously helps, and I am not sure what else does.\n> \n> || TCQ is the secret sauce, no doubt. I think NCQ (the SATA version of per se drive request reordering) \n> should go a looong way (but not all the way) toward making SATA 'enterprise acceptable'. Multiple \n> initiators (e.g. more than one host being able to talk to a drive) is a biggie, too. AFAIK only SCSI\n> drives/controllers do that for now. \n\nWhat is 'multiple initiators' used for in the real world?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 19 Apr 2005 12:10:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "\n\[email protected] wrote on 04/19/2005 11:10:22 AM:\n>\n> What is 'multiple initiators' used for in the real world?\n\nI asked this same question and got an answer off list: Somebody said their\nSAN hardware used multiple initiators. I would try to check the archives\nfor you, but this thread is becoming more of a rope.\n\nMultiple initiators means multiple sources on the bus issuing I/O\ninstructions to the drives. In theory you can have two computers on the\nsame SCSI bus issuing I/O requests to the same drive, or to anything else\non the bus, but I've never seen this implemented. Others have noted this\nfeature as being a big deal, so somebody is benefiting from it.\n\nRick\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania\n19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n",
"msg_date": "Tue, 19 Apr 2005 11:22:17 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On Tue, Apr 19, 2005 at 11:22:17AM -0500, [email protected] wrote:\n> \n> \n> [email protected] wrote on 04/19/2005 11:10:22 AM:\n> >\n> > What is 'multiple initiators' used for in the real world?\n> \n> I asked this same question and got an answer off list: Somebody said their\n> SAN hardware used multiple initiators. I would try to check the archives\n> for you, but this thread is becoming more of a rope.\n> \n> Multiple initiators means multiple sources on the bus issuing I/O\n> instructions to the drives. In theory you can have two computers on the\n> same SCSI bus issuing I/O requests to the same drive, or to anything else\n> on the bus, but I've never seen this implemented. Others have noted this\n> feature as being a big deal, so somebody is benefiting from it.\n\nIt's a big deal for Oracle clustering, which relies on shared drives. Of\ncourse most people doing Oracle clustering are probably using a SAN and\nnot raw SCSI...\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 19:15:23 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Now that we've hashed out which drives are quicker and more money equals \nfaster...\n\nLet's say you had a server with 6 separate 15k RPM SCSI disks, what raid \noption would you use for a standalone postgres server?\n\na) 3xRAID1 - 1 for data, 1 for xlog, 1 for os?\nb) 1xRAID1 for OS/xlog, 1xRAID5 for data\nc) 1xRAID10 for OS/xlong/data\nd) 1xRAID1 for OS, 1xRAID10 for data\ne) .....\n\nI was initially leaning towards b, but after talking to Josh a bit, I suspect \nthat with only 4 disks the raid5 might be a performance detriment vs 3 raid 1s \nor some sort of split raid10 setup.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Tue, 19 Apr 2005 18:00:42 -0700 (PDT)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "What to do with 6 disks?"
},
{
"msg_contents": "http://stats.distributed.net is setup with the OS, WAL, and temp on a\nRAID1 and the database on a RAID10. The drives are 200G SATA with a\n3ware raid card. I don't think the controller has battery-backed cache,\nbut I'm not sure. In any case, it's almost never disk-bound on the\nmirror; when it's disk-bound it's usually the RAID10. But this is a\nread-mostly database. If it was write-heavy, that might not be the case.\n\nAlso, in general, I see very little disk activity from the OS itself, so\nI don't think there's a large disadvantage to having it on the same\ndrives as part of your database. I would recommend different filesystems\nfor each, though. (ie: not one giant / partition)\n\nOn Tue, Apr 19, 2005 at 06:00:42PM -0700, Jeff Frost wrote:\n> Now that we've hashed out which drives are quicker and more money equals \n> faster...\n> \n> Let's say you had a server with 6 separate 15k RPM SCSI disks, what raid \n> option would you use for a standalone postgres server?\n> \n> a) 3xRAID1 - 1 for data, 1 for xlog, 1 for os?\n> b) 1xRAID1 for OS/xlog, 1xRAID5 for data\n> c) 1xRAID10 for OS/xlong/data\n> d) 1xRAID1 for OS, 1xRAID10 for data\n> e) .....\n> \n> I was initially leaning towards b, but after talking to Josh a bit, I \n> suspect that with only 4 disks the raid5 might be a performance detriment \n> vs 3 raid 1s or some sort of split raid10 setup.\n> \n> -- \n> Jeff Frost, Owner \t<[email protected]>\n> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n> Phone: 650-780-7908\tFAX: 650-649-1954\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 20:55:59 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What to do with 6 disks?"
},
{
"msg_contents": "My experience:\n\n1xRAID10 for postgres\n1xRAID1 for OS + WAL\n\n\nJeff Frost wrote:\n> Now that we've hashed out which drives are quicker and more money equals \n> faster...\n> \n> Let's say you had a server with 6 separate 15k RPM SCSI disks, what raid \n> option would you use for a standalone postgres server?\n> \n> a) 3xRAID1 - 1 for data, 1 for xlog, 1 for os?\n> b) 1xRAID1 for OS/xlog, 1xRAID5 for data\n> c) 1xRAID10 for OS/xlong/data\n> d) 1xRAID1 for OS, 1xRAID10 for data\n> e) .....\n> \n> I was initially leaning towards b, but after talking to Josh a bit, I \n> suspect that with only 4 disks the raid5 might be a performance \n> detriment vs 3 raid 1s or some sort of split raid10 setup.\n> \n",
"msg_date": "Tue, 19 Apr 2005 20:06:54 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What to do with 6 disks?"
},
{
"msg_contents": "Jeff,\n\n> Let's say you had a server with 6 separate 15k RPM SCSI disks, what raid\n> option would you use for a standalone postgres server?\n>\n> a) 3xRAID1 - 1 for data, 1 for xlog, 1 for os?\n> b) 1xRAID1 for OS/xlog, 1xRAID5 for data\n> c) 1xRAID10 for OS/xlong/data\n> d) 1xRAID1 for OS, 1xRAID10 for data\n> e) .....\n>\n> I was initially leaning towards b, but after talking to Josh a bit, I\n> suspect that with only 4 disks the raid5 might be a performance detriment\n> vs 3 raid 1s or some sort of split raid10 setup.\n\nKnowing that your installation is read-heavy, I'd recommend (d), with the WAL \non the same disk as the OS, i.e.\n\nRAID1 2 disks OS, pg_xlog\nRAID 1+0 4 disks pgdata\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 19 Apr 2005 20:07:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What to do with 6 disks?"
},
{
"msg_contents": "> RAID1 2 disks OS, pg_xlog\n> RAID 1+0 4 disks pgdata\n\nLooks like the consensus is RAID 1 for OS, pg_xlog and RAID10 for pgdata. Now \nhere's another performance related question:\n\nI've seen quite a few folks touting the Opteron as 2.5x faster with postgres \nthan a Xeon box. What makes the Opteron so quick? Is it that Postgres \nreally prefers to run in 64-bit mode?\n\nWhen I look at AMD's TPC-C scores where they are showing off the Opteron \nhttp://www.amd.com/us-en/Processors/ProductInformation/0,,30_118_8796_8800~96125,00.html\nIt doesn't appear 2.5x as fast as the Xeon systems, though I have heard from a \nfew Postgres folks that a dual Opteron is 2.5x as fast as a dual Xeon. I \nwould think that AMD would be all over that press if they could show it, so \nwhat am I missing? Is it a bus speed thing? Better south bridge on the \nboards?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Tue, 19 Apr 2005 21:40:08 -0700 (PDT)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": ">I've seen quite a few folks touting the Opteron as 2.5x \n>faster with postgres than a Xeon box. What makes the \n>Opteron so quick? Is it that Postgres really prefers to \n>run in 64-bit mode?\n\n\nI don't know about 2.5x faster (perhaps on specific types \nof loads), but the reason Opterons rock for database \napplications is their insanely good memory bandwidth and \nlatency that scales much better than the Xeon. Opterons \nalso have a ccNUMA-esque I/O fabric and two dedicated \non-die memory channels *per processor* -- no shared bus \nthere, closer to real UNIX server iron than a glorified \nPC.\n\nWe run a large Postgres database on a dual Opteron in \n32-bit mode that crushes Xeons running at higher clock \nspeeds. It has little to do with bitness or theoretical \ninstruction dispatch, and everything to do with the \nsuperior memory controller and I/O fabric. Databases are \nall about moving chunks of data around and the Opteron \nsystems were engineered to do this very well and in a very \nscalable fashion. For the money, it is hard to argue with \nthe price/performance of Opteron based servers. We \nstarted with one dual Opteron postgres server just over a \nyear ago (with an equivalent uptime) and have considered \nnothing but Opterons for database servers since. Opterons \nreally are clearly superior to Xeons for this application. \n I don't work for AMD, just a satisfied customer. :-)\n\n\nre: 6 disks. Unless you are tight on disk space, a hot \nspare might be nice as well depending on your needs.\n\nCheers,\n\nJ. Andrew Rogers\n",
"msg_date": "Tue, 19 Apr 2005 23:02:28 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "On Tue, 19 Apr 2005, J. Andrew Rogers wrote:\n\n> I don't know about 2.5x faster (perhaps on specific types of loads), but the \n> reason Opterons rock for database applications is their insanely good memory \n> bandwidth and latency that scales much better than the Xeon. Opterons also \n> have a ccNUMA-esque I/O fabric and two dedicated on-die memory channels *per \n> processor* -- no shared bus there, closer to real UNIX server iron than a \n> glorified PC.\n\nThanks J! That's exactly what I was suspecting it might be. Actually, I \nfound an anandtech benchmark that shows the Opteron coming in at close to 2.0x \nperformance:\n\nhttp://www.anandtech.com/linux/showdoc.aspx?i=2163&p=2\n\nIt's an Opteron 150 (2.4ghz) vs. Xeon 3.6ghz from August. I wonder if the \ndifferences are more pronounced with the newer Opterons.\n\n-Jeff\n",
"msg_date": "Tue, 19 Apr 2005 23:21:32 -0700 (PDT)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "I posted this link a few months ago and there was some surprise over the \ndifference in postgresql compared to other DBs. (Not much surprise in \nOpteron stomping on Xeon in pgsql as most people here have had that \nexperience -- the surprise was in how much smaller the difference was in \nother DBs.) If it was across the board +100% in MS-SQL, MySQL, etc -- \nyou can chalk in up to overall better CPU architecture. Most of the time \nthough, the numbers I've seen show +0-30% for [insert DB here] and a \nhuge whopping +++++ for pgsql. Why the pronounced preference for \npostgresql, I'm not sure if it was explained fully.\n\nBTW, the Anandtech test compares single CPU systems w/ 1GB of RAM. Go to \ndual/quad and SMP Xeon will suffer even more since it has to share a \nfixed amount of FSB/memory bandwidth amongst all CPUs. Xeons also seem \nto suffer more from context-switch storms. Go > 4GB of RAM and the Xeon \nsuffers another hit due to the lack of a 64-bit IOMMU. Devices cannot \nmap to addresses > 4GB which means the OS has to do extra work in \ncopying data from/to > 4GB anytime you have IO. (Although this penalty \nmight exist all the time in 64-bit mode for Xeon if Linux/Windows took \nthe expedient and less-buggy route of using a single method versus \nchecking whether target addresses are > or < 4GB.)\n\n\n\nJeff Frost wrote:\n> On Tue, 19 Apr 2005, J. Andrew Rogers wrote:\n> \n>> I don't know about 2.5x faster (perhaps on specific types of loads), \n>> but the reason Opterons rock for database applications is their \n>> insanely good memory bandwidth and latency that scales much better \n>> than the Xeon. Opterons also have a ccNUMA-esque I/O fabric and two \n>> dedicated on-die memory channels *per processor* -- no shared bus \n>> there, closer to real UNIX server iron than a glorified PC.\n> \n> \n> Thanks J! That's exactly what I was suspecting it might be. Actually, \n> I found an anandtech benchmark that shows the Opteron coming in at close \n> to 2.0x performance:\n> \n> http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=2\n> \n> It's an Opteron 150 (2.4ghz) vs. Xeon 3.6ghz from August. I wonder if \n> the differences are more pronounced with the newer Opterons.\n> \n> -Jeff\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n",
"msg_date": "Wed, 20 Apr 2005 08:09:37 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "On Apr 19, 2005, at 11:07 PM, Josh Berkus wrote:\n\n> RAID1 2 disks OS, pg_xlog\n> RAID 1+0 4 disks pgdata\n>\n\nThis is my preferred setup, but I do it with 6 disks on RAID10 for \ndata, and since I have craploads of disk space I set checkpoint \nsegments to 256 (and checkpoint timeout to 5 minutes)\n\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Wed, 20 Apr 2005 11:36:29 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What to do with 6 disks?"
},
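As a small, hedged illustration of the settings referred to above (the values are simply the ones mentioned in the message, not a recommendation):

    -- postgresql.conf equivalents: checkpoint_segments = 256, checkpoint_timeout = 300 (5 minutes).
    SHOW checkpoint_segments;
    SHOW checkpoint_timeout;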
{
"msg_contents": "On Apr 20, 2005, at 12:40 AM, Jeff Frost wrote:\n\n> I've seen quite a few folks touting the Opteron as 2.5x faster with \n> postgres than a Xeon box. What makes the Opteron so quick? Is it \n> that Postgres really prefers to run in 64-bit mode?\n>\n\nThe I/O path on the opterons seems to be much faster, and having 64-bit \nall the way to the disk controller helps... just be sure to run a \n64-bit version of your OS.\n\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806",
"msg_date": "Wed, 20 Apr 2005 11:38:06 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
}
] |
[
{
"msg_contents": "Clustered file systems is the first/best example that\ncomes to mind. Host A and Host B can both request from diskfarm, eg. \n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]] \nSent: Tuesday, April 19, 2005 12:10 PM\nTo: Mohan, Ross\nCc: [email protected]\nSubject: Re: [PERFORM] How to improve db performance with $7K?\n\n\nMohan, Ross wrote:\n> The only part I am pretty sure about is that real-world experience \n> shows SCSI is better for a mixed I/O environment. Not sure why, \n> exactly, but the command queueing obviously helps, and I am not sure \n> what else does.\n> \n> || TCQ is the secret sauce, no doubt. I think NCQ (the SATA version \n> || of per se drive request reordering)\n> should go a looong way (but not all the way) toward making SATA 'enterprise acceptable'. Multiple \n> initiators (e.g. more than one host being able to talk to a drive) is a biggie, too. AFAIK only SCSI\n> drives/controllers do that for now.\n\nWhat is 'multiple initiators' used for in the real world?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 19 Apr 2005 16:12:44 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "Mohan, Ross wrote:\n> Clustered file systems is the first/best example that\n> comes to mind. Host A and Host B can both request from diskfarm, eg. \n\nSo one host writes to part of the disk and another host writes to a\ndifferent part?\n\n---------------------------------------------------------------------------\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]] \n> Sent: Tuesday, April 19, 2005 12:10 PM\n> To: Mohan, Ross\n> Cc: [email protected]\n> Subject: Re: [PERFORM] How to improve db performance with $7K?\n> \n> \n> Mohan, Ross wrote:\n> > The only part I am pretty sure about is that real-world experience \n> > shows SCSI is better for a mixed I/O environment. Not sure why, \n> > exactly, but the command queueing obviously helps, and I am not sure \n> > what else does.\n> > \n> > || TCQ is the secret sauce, no doubt. I think NCQ (the SATA version \n> > || of per se drive request reordering)\n> > should go a looong way (but not all the way) toward making SATA 'enterprise acceptable'. Multiple \n> > initiators (e.g. more than one host being able to talk to a drive) is a biggie, too. AFAIK only SCSI\n> > drives/controllers do that for now.\n> \n> What is 'multiple initiators' used for in the real world?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 19 Apr 2005 12:16:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
},
{
"msg_contents": "On 4/19/05, Mohan, Ross <[email protected]> wrote:\n> Clustered file systems is the first/best example that\n> comes to mind. Host A and Host B can both request from diskfarm, eg.\n\nSomething like a Global File System?\n\nhttp://www.redhat.com/software/rha/gfs/\n\n(I believe some other company did develop it some time in the past;\nhmm, probably the guys doing LVM stuff?).\n\nAnyway the idea is that two machines have same filesystem mounted and\nthey share it. The locking I believe is handled by communication\nbetween computers using \"host to host\" SCSI commands.\n\nI never used it, I've only heard about it from a friend who used to\nwork with it in CERN.\n\n Regards,\n Dawid\n",
"msg_date": "Wed, 20 Apr 2005 10:55:44 +0200",
"msg_from": "Dawid Kuroczko <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "Well, more like they both are allowed to issue disk\nrequests and the magical \"clustered file system\" manages\nlocking, etc. \n\nIn reality, any disk is only reading/writing to one part of\nthe disk at any given time, of course, but that in the multiple\ninitiator deal, multiple streams of requests from multiple hosts\ncan be queued. \n\n\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:[email protected]] \nSent: Tuesday, April 19, 2005 12:16 PM\nTo: Mohan, Ross\nCc: [email protected]\nSubject: Re: [PERFORM] How to improve db performance with $7K?\n\n\nMohan, Ross wrote:\n> Clustered file systems is the first/best example that\n> comes to mind. Host A and Host B can both request from diskfarm, eg.\n\nSo one host writes to part of the disk and another host writes to a different part?\n\n---------------------------------------------------------------------------\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:[email protected]]\n> Sent: Tuesday, April 19, 2005 12:10 PM\n> To: Mohan, Ross\n> Cc: [email protected]\n> Subject: Re: [PERFORM] How to improve db performance with $7K?\n> \n> \n> Mohan, Ross wrote:\n> > The only part I am pretty sure about is that real-world experience\n> > shows SCSI is better for a mixed I/O environment. Not sure why, \n> > exactly, but the command queueing obviously helps, and I am not sure \n> > what else does.\n> > \n> > || TCQ is the secret sauce, no doubt. I think NCQ (the SATA version\n> > || of per se drive request reordering)\n> > should go a looong way (but not all the way) toward making SATA 'enterprise acceptable'. Multiple \n> > initiators (e.g. more than one host being able to talk to a drive) is a biggie, too. AFAIK only SCSI\n> > drives/controllers do that for now.\n> \n> What is 'multiple initiators' used for in the real world?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> [email protected] | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 19 Apr 2005 16:25:24 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "Folks,\n\nParams: PostgreSQL 8.0.1 on Solaris 10\nStatistics = 500\n(tablenames have been changed to protect NDA)\n\ne1=# select tablename, null_frac, correlation, n_distinct from pg_stats where \ntablename = 'clickstream1' andattname = 'session_id';\n tablename | null_frac | correlation | n_distinct\n----------------------+-----------+-------------+------------\n clickstream1 | 0 | 0.412034 | 378174\n(2 rows)\n\ne1=# select count(distinct session_id) from clickstream1;\n count\n---------\n 3174813\n\nAs you can see, n_distinct estimation is off by a factor of 10x and it's \ncausing query planning problems. Any suggested hacks to improve the \nhistogram on this?\n\n(BTW, increasing the stats to 1000 only doubles n_distinct, and doesn't solve \nthe problem)\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 19 Apr 2005 12:09:05 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> As you can see, n_distinct estimation is off by a factor of 10x and it's \n> causing query planning problems. Any suggested hacks to improve the \n> histogram on this?\n\nWhat's the histogram itself look like? (I'd like to see the whole\npg_stats row not just part of it ...) There's probably no point in\nshowing the target=1000 version, but maybe target=100 would be\ninformative.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Apr 2005 16:44:49 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested? "
},
{
"msg_contents": "Tom,\n\n> What's the histogram itself look like? (I'd like to see the whole\n> pg_stats row not just part of it ...) There's probably no point in\n> showing the target=1000 version, but maybe target=100 would be\n> informative.\n\nHere is the stats = 100 version. Notice that n_distinct has gone down.\n\n schemaname | tablename | attname | null_frac | avg_width | \nn_distinct | most_common_vals \n| most_common_freqs \n| histogram_bounds | \ncorrelation\n------------+----------------------+------------+-----------+-----------+------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public | web_site_activity_fa | session_id | 0 | 8 | \n96107 | 
\n{4393922,6049228,6026260,4394034,60341,4393810,2562999,2573850,3006299,4705488,2561499,4705258,3007378,4705490,60327,60352,2560950,2567640,2569852,3006604,4394329,2570739,2406633,2407292,3006356,4393603,4394121,6449083,2565815,4387881,2406770,2407081,2564340,3007328,2406578,2407295,2562813,2567603,4387835,71014,2566253,2566900,6103079,2289424,2407597,2567627,2568333,3457448,23450,23670,60743,70739,2406818,2406852,2407511,2562816,3007446,6306095,60506,71902,591543,1169136,1447077,2285047,2406830,2573964,6222758,61393,70955,70986,71207,71530,262368,2289213,2406899,2567361,2775952,3006824,4387864,6239825,6244853,6422152,1739,58600,179293,278473,488407,1896390,2286976,2407020,2546720,2677019,2984333,3006133,3007497,3310286,3631413,3801909,4366116,4388025} \n| \n{0.00166667,0.00146667,0.0013,0.0011,0.000933333,0.0009,0.0008,0.0008,0.000733333,0.000733333,0.0007,0.000633333,0.0006,0.0006,0.000566667,0.000566667,0.000566667,0.000566667,0.000566667,0.000566667,0.000566667,0.000533333,0.0005,0.0005,0.0005,0.0005,0.0005,0.0005,0.000466667,0.000466667,0.000433333,0.000433333,0.000433333,0.000433333,0.0004,0.0004,0.0004,0.0004,0.0004,0.000366667,0.000366667,0.000366667,0.000366667,0.000333333,0.000333333,0.000333333,0.000333333,0.000333333,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000266667,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002} \n| \n{230,58907,88648,156764,216759,240405,264601,289047,312630,339947,364452,386486,409427,434075,455140,475759,500086,521530,544703,680376,981066,1313419,1712592,1860151,1882452,1905328,1927504,1948159,1970054,1990408,2014501,2038573,2062786,2087163,2110129,2132196,2155657,2181058,2204976,2228575,2256229,2283897,2352453,2407153,2457716,2542081,2572119,2624133,2699592,2771254,2832224,2908151,2951500,3005088,3032889,3137244,3158685,3179395,3203681,3261587,3304359,3325577,3566688,3621357,3645094,3718667,3740821,3762386,3783169,3804593,3826503,3904589,3931012,3957675,4141934,4265118,4288568,4316898,4365625,4473965,4535752,4559700,4691802,4749478,5977208,6000272,6021416,6045939,6078912,6111900,6145155,6176422,6206627,6238291,6271270,6303067,6334117,6365200,6395250,6424719,6888329} \n| 0.41744\n\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 19 Apr 2005 14:33:58 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Tom,\n\nAny thoughts? This is really messing up query execution all across the \ndatabase ...\n\n--Josh\n\n> Here is the stats = 100 version. Notice that n_distinct has gone down.\n>\n> schemaname | tablename | attname | null_frac | avg_width |\n> n_distinct | most_common_vals\n>\n> | most_common_freqs\n> | histogram_bounds |\n>\n> correlation\n\n>-------------------+------------- public | web_site_activity_fa |\n> session_id | 0 | 8 | 96107 |\n> {4393922,6049228,6026260,4394034,60341,4393810,2562999,2573850,3006299,4705\n>488,2561499,4705258,3007378,4705490,60327,60352,2560950,2567640,2569852,3006\n>604,4394329,2570739,2406633,2407292,3006356,4393603,4394121,6449083,2565815,\n>4387881,2406770,2407081,2564340,3007328,2406578,2407295,2562813,2567603,4387\n>835,71014,2566253,2566900,6103079,2289424,2407597,2567627,2568333,3457448,23\n>450,23670,60743,70739,2406818,2406852,2407511,2562816,3007446,6306095,60506,\n>71902,591543,1169136,1447077,2285047,2406830,2573964,6222758,61393,70955,709\n>86,71207,71530,262368,2289213,2406899,2567361,2775952,3006824,4387864,623982\n>5,6244853,6422152,1739,58600,179293,278473,488407,1896390,2286976,2407020,25\n>46720,2677019,2984333,3006133,3007497,3310286,3631413,3801909,4366116,438802\n>5}\n>\n> {0.00166667,0.00146667,0.0013,0.0011,0.000933333,0.0009,0.0008,0.0008,0.000\n>733333,0.000733333,0.0007,0.000633333,0.0006,0.0006,0.000566667,0.000566667,\n>0.000566667,0.000566667,0.000566667,0.000566667,0.000566667,0.000533333,0.00\n>05,0.0005,0.0005,0.0005,0.0005,0.0005,0.000466667,0.000466667,0.000433333,0.\n>000433333,0.000433333,0.000433333,0.0004,0.0004,0.0004,0.0004,0.0004,0.00036\n>6667,0.000366667,0.000366667,0.000366667,0.000333333,0.000333333,0.000333333\n>,0.000333333,0.000333333,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.\n>0003,0.0003,0.0003,0.000266667,0.000266667,0.000266667,0.000266667,0.0002666\n>67,0.000266667,0.000266667,0.000266667,0.000266667,0.000233333,0.000233333,0\n>.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000\n>233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.0002333\n>33,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0\n>002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002}\n>\n> {230,58907,88648,156764,216759,240405,264601,289047,312630,339947,364452,38\n>6486,409427,434075,455140,475759,500086,521530,544703,680376,981066,1313419,\n>1712592,1860151,1882452,1905328,1927504,1948159,1970054,1990408,2014501,2038\n>573,2062786,2087163,2110129,2132196,2155657,2181058,2204976,2228575,2256229,\n>2283897,2352453,2407153,2457716,2542081,2572119,2624133,2699592,2771254,2832\n>224,2908151,2951500,3005088,3032889,3137244,3158685,3179395,3203681,3261587,\n>3304359,3325577,3566688,3621357,3645094,3718667,3740821,3762386,3783169,3804\n>593,3826503,3904589,3931012,3957675,4141934,4265118,4288568,4316898,4365625,\n>4473965,4535752,4559700,4691802,4749478,5977208,6000272,6021416,6045939,6078\n>912,6111900,6145155,6176422,6206627,6238291,6271270,6303067,6334117,6365200,\n>6395250,6424719,6888329}\n>\n> | 0.41744\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 20 Apr 2005 10:59:01 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\nHi.\n\nSometimes, if the random number generator, that PostgreSQL uses,\nisn't good enough, the randomly selected pages for the statistics\nmight not be random enough.\n\nSolaris is unknown to me. Maybe the used random number generator there\nisn't good enough?\n\nGood statistics depend on good random numbers.\n\nSo, for example, if you have one million pages, but the upper bound for \nthe random\nnumbers is one hundred thousand pages, the statistics might get tuned.\n\nOr some random number generator has for example only 32000 different values.\n\nRegards,\nMarko Ristola\n\nJosh Berkus wrote:\n\n>Tom,\n>\n>Any thoughts? This is really messing up query execution all across the \n>database ...\n>\n>--Josh\n>\n> \n>\n>>Here is the stats = 100 version. Notice that n_distinct has gone down.\n>>\n>> schemaname | tablename | attname | null_frac | avg_width |\n>>n_distinct | most_common_vals\n>>\n>>| most_common_freqs\n>>| histogram_bounds |\n>>\n>>correlation\n>> \n>>\n>\n> \n>\n>>-------------------+------------- public | web_site_activity_fa |\n>>session_id | 0 | 8 | 96107 |\n>>{4393922,6049228,6026260,4394034,60341,4393810,2562999,2573850,3006299,4705\n>>488,2561499,4705258,3007378,4705490,60327,60352,2560950,2567640,2569852,3006\n>>604,4394329,2570739,2406633,2407292,3006356,4393603,4394121,6449083,2565815,\n>>4387881,2406770,2407081,2564340,3007328,2406578,2407295,2562813,2567603,4387\n>>835,71014,2566253,2566900,6103079,2289424,2407597,2567627,2568333,3457448,23\n>>450,23670,60743,70739,2406818,2406852,2407511,2562816,3007446,6306095,60506,\n>>71902,591543,1169136,1447077,2285047,2406830,2573964,6222758,61393,70955,709\n>>86,71207,71530,262368,2289213,2406899,2567361,2775952,3006824,4387864,623982\n>>5,6244853,6422152,1739,58600,179293,278473,488407,1896390,2286976,2407020,25\n>>46720,2677019,2984333,3006133,3007497,3310286,3631413,3801909,4366116,438802\n>>5}\n>>\n>>{0.00166667,0.00146667,0.0013,0.0011,0.000933333,0.0009,0.0008,0.0008,0.000\n>>733333,0.000733333,0.0007,0.000633333,0.0006,0.0006,0.000566667,0.000566667,\n>>0.000566667,0.000566667,0.000566667,0.000566667,0.000566667,0.000533333,0.00\n>>05,0.0005,0.0005,0.0005,0.0005,0.0005,0.000466667,0.000466667,0.000433333,0.\n>>000433333,0.000433333,0.000433333,0.0004,0.0004,0.0004,0.0004,0.0004,0.00036\n>>6667,0.000366667,0.000366667,0.000366667,0.000333333,0.000333333,0.000333333\n>>,0.000333333,0.000333333,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.0003,0.\n>>0003,0.0003,0.0003,0.000266667,0.000266667,0.000266667,0.000266667,0.0002666\n>>67,0.000266667,0.000266667,0.000266667,0.000266667,0.000233333,0.000233333,0\n>>.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000\n>>233333,0.000233333,0.000233333,0.000233333,0.000233333,0.000233333,0.0002333\n>>33,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0\n>>002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002,0.0002}\n>>\n>>{230,58907,88648,156764,216759,240405,264601,289047,312630,339947,364452,38\n>>6486,409427,434075,455140,475759,500086,521530,544703,680376,981066,1313419,\n>>1712592,1860151,1882452,1905328,1927504,1948159,1970054,1990408,2014501,2038\n>>573,2062786,2087163,2110129,2132196,2155657,2181058,2204976,2228575,2256229,\n>>2283897,2352453,2407153,2457716,2542081,2572119,2624133,2699592,2771254,2832\n>>224,2908151,2951500,3005088,3032889,3137244,3158685,3179395,3203681,3261587,\n>>3304359,3325577,3566688,3621357,3645094,3718667,3740821,3762386,3783169,3804\n>>593,3826503,3904589,3931012,3957675,4141934,4265118,42885
68,4316898,4365625,\n>>4473965,4535752,4559700,4691802,4749478,5977208,6000272,6021416,6045939,6078\n>>912,6111900,6145155,6176422,6206627,6238291,6271270,6303067,6334117,6365200,\n>>6395250,6424719,6888329}\n>>\n>>| 0.41744\n>> \n>>\n>\n> \n>\n\n",
"msg_date": "Fri, 22 Apr 2005 21:39:12 +0300",
"msg_from": "Marko Ristola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Marko,\n\n> Sometimes, if the random number generator, that PostgreSQL uses,\n> isn't good enough, the randomly selected pages for the statistics\n> might not be random enough.\n>\n> Solaris is unknown to me. Maybe the used random number generator there\n> isn't good enough?\n\nHmmm. Good point. Will have to test on Linux.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 22 Apr 2005 11:52:51 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\n> > Solaris is unknown to me. Maybe the used random number generator there\n> > isn't good enough?\n>\n> Hmmm. Good point. Will have to test on Linux.\n\nNope:\n\nLinux 2.4.20:\n\ntest=# select tablename, attname, n_distinct from pg_stats where tablename = \n'web_site_activity_fa';\n tablename | attname | n_distinct\n----------------------+---------------------+------------\n web_site_activity_fa | session_id | 626127\n\ntest=# select count(distinct session_id) from web_site_activity_fa;\n count\n---------\n 3174813\n(1 row)\n\n... I think the problem is in our heuristic sampling code. I'm not the first \nperson to have this kind of a problem. Will be following up with tests ...\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 22 Apr 2005 13:36:08 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\nJosh Berkus <[email protected]> writes:\n\n> ... I think the problem is in our heuristic sampling code. I'm not the first \n> person to have this kind of a problem. Will be following up with tests ...\n\nI looked into this a while back when we were talking about changing the\nsampling method. The conclusions were discouraging. Fundamentally, using\nconstant sized samples of data for n_distinct is bogus. Constant sized samples\nonly work for things like the histograms that can be analyzed through standard\nstatistics population sampling which depends on the law of large numbers.\n\nn_distinct requires you to estimate how frequently very rare things occur. You\ncan't apply the law of large numbers because even a single instance of a value\nout of a large pool alters the results disproportionately.\n\nTo get a valid estimate for n_distinct you would need to sample a fixed\npercentage of the table. Not a fixed size sample. That just isn't practical.\nMoreover, I think the percentage would have to be quite large. Even if you\nsampled half the table I think the confidence interval would be quite wide.\n\nThe situation is worsened because it's unclear how to interpolate values for\nsubsets of the table. If the histogram says you have a million records and\nyou're adding a clause that has a selectivity of 50% then half a million is a\ngood guess. But if what you care about is n_distinct and you start with a\nmillion records with 1,000 distinct values and then apply a clause that\nfilters 50% of them, how do you estimate the resulting n_distinct? 500 is\nclearly wrong, but 1,000 is wrong too. You could end up with anywhere from 0\nto 1,000 and you have no good way to figure out where the truth lies.\n\nSo I fear this is fundamentally a hopeless situation. It's going to be\ndifficult to consistently get good plans for any queries that depend on good\nestimates for n_distinct.\n\n-- \ngreg\n\n",
"msg_date": "23 Apr 2005 04:02:05 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Greg,\n\n> I looked into this a while back when we were talking about changing the\n> sampling method. The conclusions were discouraging. Fundamentally, using\n> constant sized samples of data for n_distinct is bogus. Constant sized\n> samples only work for things like the histograms that can be analyzed\n> through standard statistics population sampling which depends on the law of\n> large numbers.\n\nWell, unusual distributions are certainly tough. But I think the problem \nexists even for relatively well-distributed populations. Part of it is, I \nbelieve, the formula we are using:\n\nn*d / (n - f1 + f1*n/N)\n\nThis is an estimation formula from Haas and Stokes in IBM Research Report RJ \n10025, and is called the DUJ1 formula. \n(http://www.almaden.ibm.com/cs/people/peterh/jasa3rj.pdf) It appears to \nsuck. For example, take my table:\n\nrows: 26million (N)\ndistinct values: 3.4million\n\nI took a random sample of 1000 rows (n) from that table. It contained:\n968 values that occurred only once (f1)\n981 distinct values (d)\n\nAny human being looking at that sample would assume a large number of distinct \nvalues; after all, 96.8% of the values occurred only once. But the formula \ngives us:\n\n1000*981 / ( 1000 - 968 + ( 968 * 1000/26000000 ) ) = 30620\n\nThis is obviously dramatically wrong, by a factor of 100. The math gets worse \nas the sample size goes down:\n\nSample 250, 248 distinct values, 246 unique values:\n\n250*248 / ( 250 - 246 + ( 246 * 250 / 26000000 ) ) = 15490\n\nEven in a case with an ovewhelming majority of unique values, the formula gets \nit wrong:\n\n999 unque values in sample\n998 appearing only once:\n\n1000*999 / ( 1000 - 998 + ( 998 * 1000 / 26000000 ) ) = 490093\n\nThis means that, with a sample size of 1000 a table of 26million rows cannot \never have with this formula more than half a million distinct values, unless \nthe column is a unique column.\n\nOverall, our formula is inherently conservative of n_distinct. That is, I \nbelieve that it is actually computing the *smallest* number of distinct \nvalues which would reasonably produce the given sample, rather than the \n*median* one. This is contrary to the notes in analyze.c, which seem to \nthink that we're *overestimating* n_distinct. \n\nThis formula appears broken everywhere:\n\nTable: 969000 rows\nDistinct values: 374000\nSample Size: 1000\nUnique values in sample: 938\nValues appearing only once: 918\n\n1000*938 / ( 1000 - 918 + ( 918 * 1000 / 969000 ) ) = 11308\n\nAgain, too small by a factor of 20x. \n\nThis is so broken, in fact, that I'm wondering if we've read the paper right? \nI've perused the paper on almaden, and the DUJ1 formula appears considerably \nmore complex than the formula we're using. \n\nCan someone whose math is more recent than calculus in 1989 take a look at \nthat paper, and look at the formula toward the bottom of page 10, and see if \nwe are correctly interpreting it? I'm particularly confused as to what \"q\" \nand \"d-sub-n\" represent. Thanks!\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 23 Apr 2005 16:39:11 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Josh Berkus said:\n>\n>\n> Well, unusual distributions are certainly tough. But I think the\n> problem exists even for relatively well-distributed populations.\n> Part of it is, I believe, the formula we are using:\n>\n> n*d / (n - f1 + f1*n/N)\n>\n[snip]\n>\n> This is so broken, in fact, that I'm wondering if we've read the paper\n> right? I've perused the paper on almaden, and the DUJ1 formula\n> appears considerably more complex than the formula we're using.\n>\n> Can someone whose math is more recent than calculus in 1989 take a look\n> at that paper, and look at the formula toward the bottom of page 10,\n> and see if we are correctly interpreting it? I'm particularly\n> confused as to what \"q\" and \"d-sub-n\" represent. Thanks!\n>\n\nMath not too recent ...\n\nI quickly read the paper and independently came up with the same formula you\nsay above we are applying. The formula is on the page that is numbered 6,\nalthough it's the tenth page in the PDF.\n\nq = n/N = ratio of sample size to population size\nd_sub_n = d = number of distinct classes in sample\n\ncheers\n\nandrew\n\n\n\n\n",
"msg_date": "Sat, 23 Apr 2005 18:44:28 -0500 (CDT)",
"msg_from": "\"Andrew Dunstan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "People,\n\n> Can someone whose math is more recent than calculus in 1989 take a look at\n> that paper, and look at the formula toward the bottom of page 10, and see\n> if we are correctly interpreting it? I'm particularly confused as to\n> what \"q\" and \"d-sub-n\" represent. Thanks!\n\nActually, I managed to solve for these and it appears we are using the formula \ncorrectly. It's just a bad formula.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 23 Apr 2005 16:53:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Overall, our formula is inherently conservative of n_distinct. That is, I \n> believe that it is actually computing the *smallest* number of distinct \n> values which would reasonably produce the given sample, rather than the \n> *median* one. This is contrary to the notes in analyze.c, which seem to \n> think that we're *overestimating* n_distinct. \n\nWell, the notes are there because the early tests I ran on that formula\ndid show it overestimating n_distinct more often than not. Greg is\ncorrect that this is inherently a hard problem :-(\n\nI have nothing against adopting a different formula, if you can find\nsomething with a comparable amount of math behind it ... but I fear\nit'd only shift the failure cases around.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Apr 2005 00:48:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested? "
},
{
"msg_contents": "\nHere is my opinion.\nI hope this helps.\n\nMaybe there is no one good formula:\n\nOn boolean type, there are at most 3 distinct values.\nThere is an upper bound for fornames in one country.\nThere is an upper bound for last names in one country.\nThere is a fixed number of states and postal codes in one country.\n\nOn the other hand, with timestamp, every value could be distinct.\nA primary key with only one column has only distinct values.\nIf the integer column refers with a foreign key into another table's\nonly primary key, we could take advantage of that knolege.\nA column with a unique index has only distinct values.\n\nFirst ones are for classifying and the second ones measure continuous\nor discrete time or something like the time.\n\nThe upper bound for classifying might be 3 (boolean), or it might be\none million. The properties of the distribution might be hard to guess.\n\nHere is one way:\n\n1. Find out the number of distinct values for 500 rows.\n2. Try to guess, how many distinct values are for 1000 rows.\n Find out the real number of distinct values for 1000 rows.\n3. If the guess and the reality are 50% wrong, do the iteration for \n2x1000 rows.\nIterate using a power of two to increase the samples, until you trust the\nestimate enough.\n\nSo, in the phase two, you could try to guess with two distinct formulas:\nOne for the classifying target (boolean columns hit there).\nAnother one for the timestamp and numerical values.\n\nIf there are one million classifications on one column, how you\ncan find it out, by other means than checking at least two million\nrows?\n\nThis means, that the user should have a possibility to tell the lower\nbound for the number of rows for sampling.\n\n\nRegards,\nMarko Ristola\n\nTom Lane wrote:\n\n>Josh Berkus <[email protected]> writes:\n> \n>\n>>Overall, our formula is inherently conservative of n_distinct. That is, I \n>>believe that it is actually computing the *smallest* number of distinct \n>>values which would reasonably produce the given sample, rather than the \n>>*median* one. This is contrary to the notes in analyze.c, which seem to \n>>think that we're *overestimating* n_distinct. \n>> \n>>\n>\n>Well, the notes are there because the early tests I ran on that formula\n>did show it overestimating n_distinct more often than not. Greg is\n>correct that this is inherently a hard problem :-(\n>\n>I have nothing against adopting a different formula, if you can find\n>something with a comparable amount of math behind it ... but I fear\n>it'd only shift the failure cases around.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n>\n\n",
"msg_date": "Sun, 24 Apr 2005 20:09:15 +0300",
"msg_from": "Marko Ristola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Josh Berkus <[email protected]> writes:\n> \n>\n>>Overall, our formula is inherently conservative of n_distinct. That is, I \n>>believe that it is actually computing the *smallest* number of distinct \n>>values which would reasonably produce the given sample, rather than the \n>>*median* one. This is contrary to the notes in analyze.c, which seem to \n>>think that we're *overestimating* n_distinct. \n>> \n>>\n>\n>Well, the notes are there because the early tests I ran on that formula\n>did show it overestimating n_distinct more often than not. Greg is\n>correct that this is inherently a hard problem :-(\n>\n>I have nothing against adopting a different formula, if you can find\n>something with a comparable amount of math behind it ... but I fear\n>it'd only shift the failure cases around.\n>\n>\n> \n>\n\nThe math in the paper does not seem to look at very low levels of q (= \nsample to pop ratio).\n\nThe formula has a range of [d,N]. It appears intuitively (i.e. I have \nnot done any analysis) that at very low levels of q, as f1 moves down \nfrom n, the formula moves down from N towards d very rapidly. I did a \ntest based on the l_comments field in a TPC lineitems table. The test \nset has N = 6001215, D = 2921877. In my random sample of 1000 I got d = \n976 and f1 = 961, for a DUJ1 figure of 24923, which is too low by 2 \norders of magnitude.\n\nI wonder if this paper has anything that might help: \nhttp://www.stat.washington.edu/www/research/reports/1999/tr355.ps - if I \nwere more of a statistician I might be able to answer :-)\n\ncheers\n\nandrew\n\n\n",
"msg_date": "Sun, 24 Apr 2005 13:58:10 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Andrew,\n\n> The math in the paper does not seem to look at very low levels of q (=\n> sample to pop ratio).\n\nYes, I think that's the failing. Mind you, I did more testing and found out \nthat for D/N ratios of 0.1 to 0.3, the formula only works within 5x accuracy \n(which I would consider acceptable) with a sample size of 25% or more (which \nis infeasable in any large table). The formula does work for populations \nwhere D/N is much lower, say 0.01. So overall it seems to only work for 1/4 \nof cases; those where n/N is large and D/N is low. And, annoyingly, that's \nprobably the population where accurate estimation is least crucial, as it \nconsists mostly of small tables.\n\nI've just developed (not original, probably, but original to *me*) a formula \nthat works on populations where n/N is very small and D/N is moderate (i.e. \n0.1 to 0.4):\n\nN * (d/n)^(sqrt(N/n))\n\nHowever, I've tested it only on (n/N < 0.005 and D/N > 0.1 and D/N < 0.4) \npopulations, and only 3 of them to boot. I'd appreciate other people trying \nit on their own data populations, particularly very different ones, like D/N \n> 0.7 or D/N < 0.01.\n\nFurther, as Andrew points out we presumably do page sampling rather than \npurely random sampling so I should probably read the paper he referenced. \nWorking on it now ....\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 24 Apr 2005 11:30:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Folks,\n\n> I wonder if this paper has anything that might help:\n> http://www.stat.washington.edu/www/research/reports/1999/tr355.ps - if I\n> were more of a statistician I might be able to answer :-)\n\nActually, that paper looks *really* promising. Does anyone here have enough \nmath to solve for D(sub)Md on page 6? I'd like to test it on samples of < \n0.01%. \n\nTom, how does our heuristic sampling work? Is it pure random sampling, or \npage sampling?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 24 Apr 2005 12:08:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Tom, how does our heuristic sampling work? Is it pure random sampling, or \n> page sampling?\n\nManfred probably remembers better than I do, but I think the idea is\nto approximate pure random sampling as best we can without actually\nexamining every page of the table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Apr 2005 00:59:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested? "
},
{
"msg_contents": "On Sat, 2005-04-23 at 16:39 -0700, Josh Berkus wrote:\nGreg Stark wrote\n> > I looked into this a while back when we were talking about changing the\n> > sampling method. The conclusions were discouraging. Fundamentally, using\n> > constant sized samples of data for n_distinct is bogus. Constant sized\n> > samples only work for things like the histograms that can be analyzed\n> > through standard statistics population sampling which depends on the law of\n> > large numbers.\n\nISTM Greg's comments are correct. There is no way to calculate this with\nconsistent accuracy when using a constant sized sample. (If it were,\nthen people wouldnt bother to hold large databases...)\n\n> Overall, our formula is inherently conservative of n_distinct. That is, I \n> believe that it is actually computing the *smallest* number of distinct \n> values which would reasonably produce the given sample, rather than the \n> *median* one. This is contrary to the notes in analyze.c, which seem to \n> think that we're *overestimating* n_distinct. \n\nThe only information you can determine from a sample is the smallest\nnumber of distinct values that would reasonably produce the given\nsample. There is no meaningful concept of a median one... (You do have\nan upper bound: the number of rows in the table, but I cannot see any\nmeaning from taking (Nrows+estimatedN_distinct)/2 ).\n\nEven if you use Zipf's Law to predict the frequency of occurrence, you'd\nstill need to estimate the parameters for the distribution.\n\nMost other RDBMS make optimizer statistics collection an unsampled\nscan. Some offer this as one of their options, as well as the ability to\ndefine the sample size in terms of fixed number of rows or fixed\nproportion of the table.\n\nMy suggested hack for PostgreSQL is to have an option to *not* sample,\njust to scan the whole table and find n_distinct accurately. Then\nanybody who doesn't like the estimated statistics has a clear path to\ntake. \n\nThe problem of poorly derived base statistics is a difficult one. When\nconsidering join estimates we already go to the trouble of including MFV\ncomparisons to ensure an upper bound of join selectivity is known. If\nthe statistics derived are themselves inaccurate the error propagation\ntouches every other calculation in the optimizer. GIGO.\n\nWhat price a single scan of a table, however large, when incorrect\nstatistics could force scans and sorts to occur when they aren't\nactually needed ?\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 25 Apr 2005 09:57:47 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> My suggested hack for PostgreSQL is to have an option to *not* sample,\n> just to scan the whole table and find n_distinct accurately.\n> ...\n> What price a single scan of a table, however large, when incorrect\n> statistics could force scans and sorts to occur when they aren't\n> actually needed ?\n\nIt's not just the scan --- you also have to sort, or something like\nthat, if you want to count distinct values. I doubt anyone is really\ngoing to consider this a feasible answer for large tables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Apr 2005 11:23:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested? "
},
{
"msg_contents": "On Mon, 2005-04-25 at 11:23 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > My suggested hack for PostgreSQL is to have an option to *not* sample,\n> > just to scan the whole table and find n_distinct accurately.\n> > ...\n> > What price a single scan of a table, however large, when incorrect\n> > statistics could force scans and sorts to occur when they aren't\n> > actually needed ?\n> \n> It's not just the scan --- you also have to sort, or something like\n> that, if you want to count distinct values. I doubt anyone is really\n> going to consider this a feasible answer for large tables.\n\nAssuming you don't use the HashAgg plan, which seems very appropriate\nfor the task? (...but I understand the plan otherwise).\n\nIf that was the issue, then why not keep scanning until you've used up\nmaintenance_work_mem with hash buckets, then stop and report the result.\n\nThe problem is if you don't do the sort once for statistics collection\nyou might accidentally choose plans that force sorts on that table. I'd\nrather do it once...\n\nThe other alternative is to allow an ALTER TABLE command to set\nstatistics manually, but I think I can guess what you'll say to that!\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Mon, 25 Apr 2005 19:49:01 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Simon, Tom:\n\nWhile it's not possible to get accurate estimates from a fixed size sample, I \nthink it would be possible from a small but scalable sample: say, 0.1% of all \ndata pages on large tables, up to the limit of maintenance_work_mem. \n\nSetting up these samples as a % of data pages, rather than a pure random sort, \nmakes this more feasable; for example, a 70GB table would only need to sample \nabout 9000 data pages (or 70MB). Of course, larger samples would lead to \nbetter accuracy, and this could be set through a revised GUC (i.e., \nmaximum_sample_size, minimum_sample_size). \n\nI just need a little help doing the math ... please?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 25 Apr 2005 12:13:18 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Guys,\n\n> While it's not possible to get accurate estimates from a fixed size sample,\n> I think it would be possible from a small but scalable sample: say, 0.1% of\n> all data pages on large tables, up to the limit of maintenance_work_mem.\n\nBTW, when I say \"accurate estimates\" here, I'm talking about \"accurate enough \nfor planner purposes\" which in my experience is a range between 0.2x to 5x.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 25 Apr 2005 12:18:26 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\n\nJosh Berkus wrote:\n\n>Simon, Tom:\n>\n>While it's not possible to get accurate estimates from a fixed size sample, I \n>think it would be possible from a small but scalable sample: say, 0.1% of all \n>data pages on large tables, up to the limit of maintenance_work_mem. \n>\n>Setting up these samples as a % of data pages, rather than a pure random sort, \n>makes this more feasable; for example, a 70GB table would only need to sample \n>about 9000 data pages (or 70MB). Of course, larger samples would lead to \n>better accuracy, and this could be set through a revised GUC (i.e., \n>maximum_sample_size, minimum_sample_size). \n>\n>I just need a little help doing the math ... please?\n> \n>\n\n\nAfter some more experimentation, I'm wondering about some sort of \nadaptive algorithm, a bit along the lines suggested by Marko Ristola, \nbut limited to 2 rounds.\n\nThe idea would be that we take a sample (either of fixed size, or some \nsmall proportion of the table) , see how well it fits a larger sample \n(say a few times the size of the first sample), and then adjust the \nformula accordingly to project from the larger sample the estimate for \nthe full population. Math not worked out yet - I think we want to ensure \nthat the result remains bounded by [d,N].\n\ncheers\n\nandrew\n\n\n",
"msg_date": "Mon, 25 Apr 2005 16:43:10 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> On Mon, 2005-04-25 at 11:23 -0400, Tom Lane wrote:\n>> It's not just the scan --- you also have to sort, or something like\n>> that, if you want to count distinct values. I doubt anyone is really\n>> going to consider this a feasible answer for large tables.\n\n> Assuming you don't use the HashAgg plan, which seems very appropriate\n> for the task? (...but I understand the plan otherwise).\n\nThe context here is a case with a very large number of distinct\nvalues... keep in mind also that we have to do this for *all* the\ncolumns of the table. A full-table scan for each column seems\nright out to me.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Apr 2005 17:10:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested? "
},
{
"msg_contents": "On Sun, 2005-04-24 at 00:48 -0400, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> > Overall, our formula is inherently conservative of n_distinct. That is, I \n> > believe that it is actually computing the *smallest* number of distinct \n> > values which would reasonably produce the given sample, rather than the \n> > *median* one. This is contrary to the notes in analyze.c, which seem to \n> > think that we're *overestimating* n_distinct. \n> \n> Well, the notes are there because the early tests I ran on that formula\n> did show it overestimating n_distinct more often than not. Greg is\n> correct that this is inherently a hard problem :-(\n> \n> I have nothing against adopting a different formula, if you can find\n> something with a comparable amount of math behind it ... but I fear\n> it'd only shift the failure cases around.\n> \n\nPerhaps the formula is not actually being applied?\n\nThe code looks like this...\n if (nmultiple == 0)\n {\n /* If we found no repeated values, assume it's a unique column */\n\tstats->stadistinct = -1.0;\n }\n else if (toowide_cnt == 0 && nmultiple == ndistinct)\n {\n\t/*\n\t * Every value in the sample appeared more than once. Assume\n\t * the column has just these values.\n\t */\n\tstats->stadistinct = ndistinct;\n }\n else\n {\n\t/*----------\n\t * Estimate the number of distinct values using the estimator\n\t * proposed by Haas and Stokes in IBM Research Report RJ 10025:\n\t\t\n\nThe middle chunk of code looks to me like if we find a distribution\nwhere values all occur at least twice, then we won't bother to apply the\nHaas and Stokes equation. That type of frequency distribution would be\nvery common in a set of values with very high ndistinct, especially when\nsampled.\n\nThe comment\n\t * Every value in the sample appeared more than once. Assume\n\t * the column has just these values.\ndoesn't seem to apply when using larger samples, as Josh is using.\n\nLooking at Josh's application it does seem likely that when taking a\nsample, all site visitors clicked more than once during their session,\nespecially if they include home page, adverts, images etc for each page.\n\nCould it be that we have overlooked this simple explanation and that the\nHaas and Stokes equation is actually quite good, but just not being\napplied?\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 26 Apr 2005 21:31:38 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Simon,\n\n> Could it be that we have overlooked this simple explanation and that the\n> Haas and Stokes equation is actually quite good, but just not being\n> applied?\n\nThat's probably part of it, but I've tried Haas and Stokes on a pure random \nsample and it's still bad, or more specifically overly conservative.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 26 Apr 2005 13:45:53 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "On Mon, 2005-04-25 at 17:10 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Mon, 2005-04-25 at 11:23 -0400, Tom Lane wrote:\n> >> It's not just the scan --- you also have to sort, or something like\n> >> that, if you want to count distinct values. I doubt anyone is really\n> >> going to consider this a feasible answer for large tables.\n> \n> > Assuming you don't use the HashAgg plan, which seems very appropriate\n> > for the task? (...but I understand the plan otherwise).\n> \n> The context here is a case with a very large number of distinct\n> values... \n\nYes, but is there another way of doing this other than sampling a larger\nproportion of the table? I don't like that answer either, for the\nreasons you give.\n\nThe manual doesn't actually say this, but you can already alter the\nsample size by setting one of the statistics targets higher, but all of\nthose samples are fixed sample sizes, not a proportion of the table\nitself. It seems reasonable to allow an option to scan a higher\nproportion of the table. (It would be even better if you could say \"keep\ngoing until you run out of memory, then stop\", to avoid needing to have\nan external sort mode added to ANALYZE).\n\nOracle and DB2 allow a proportion of the table to be specified as a\nsample size during statistics collection. IBM seem to be ignoring their\nown research note on estimating ndistinct...\n\n> keep in mind also that we have to do this for *all* the\n> columns of the table. \n\nYou can collect stats for individual columns. You need only use an\noption to increase sample size when required.\n\nAlso, if you have a large table and the performance of ANALYZE worries\nyou, set some fields to 0. Perhaps that should be the default setting\nfor very long text columns, since analyzing those doesn't help much\n(usually) and takes ages. (I'm aware we already don't analyze var length\ncolumn values > 1024 bytes).\n\n> A full-table scan for each column seems\n> right out to me.\n\nSome systems analyze multiple columns simultaneously.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 26 Apr 2005 22:02:31 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\n\nSimon Riggs wrote:\n\n>The comment\n>\t * Every value in the sample appeared more than once. Assume\n>\t * the column has just these values.\n>doesn't seem to apply when using larger samples, as Josh is using.\n>\n>Looking at Josh's application it does seem likely that when taking a\n>sample, all site visitors clicked more than once during their session,\n>especially if they include home page, adverts, images etc for each page.\n>\n>Could it be that we have overlooked this simple explanation and that the\n>Haas and Stokes equation is actually quite good, but just not being\n>applied?\n>\n>\n> \n>\n\nNo, it is being aplied. If every value in the sample appears more than \nonce, then f1 in the formula is 0, and the result is then just d, the \nnumber of distinct values in the sample.\n\ncheers\n\nandrew\n",
"msg_date": "Tue, 26 Apr 2005 17:41:20 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\n Hi everybody!\n\n Perhaps the following papers are relevant to the discussion here\n (their contact authors have been cc'd):\n\n\n 1. The following proposes effective algorithms for using block-level \n sampling for n_distinct estimation:\n\n \"Effective use of block-level sampling in statistics estimation\"\n by Chaudhuri, Das and Srivastava, SIGMOD 2004.\n\n http://www-db.stanford.edu/~usriv/papers/block-sampling.pdf\n\n\n 2. In a single scan, it is possible to estimate n_distinct by using\n a very simple algorithm:\n\n \"Distinct sampling for highly-accurate answers to distinct value\n queries and event reports\" by Gibbons, VLDB 2001.\n\n http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf\n\n\n 3. In fact, Gibbon's basic idea has been extended to \"sliding windows\" \n (this extension is useful in streaming systems like Aurora / Stream):\n\n \"Distributed streams algorithms for sliding windows\"\n by Gibbons and Tirthapura, SPAA 2002.\n\n http://home.eng.iastate.edu/~snt/research/tocs.pdf\n\n\n Thanks,\n Gurmeet\n\n ----------------------------------------------------\n Gurmeet Singh Manku Google Inc.\n http://www.cs.stanford.edu/~manku (650) 967 1890\n ----------------------------------------------------\n\n",
"msg_date": "Tue, 26 Apr 2005 15:00:48 -0700 (PDT)",
"msg_from": "Gurmeet Manku <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\nThis one looks *really* good. \n\n http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf\n\nIt does require a single full table scan but it works in O(n) time and\nconstant space and it guarantees the confidence intervals for the estimates it\nprovides like the histograms do for regular range scans.\n\nIt can even keep enough data to provide estimates for n_distinct when\nunrelated predicates are applied. I'm not sure Postgres would want to do this\nthough; this seems like it's part of the cross-column correlation story more\nthan the n_distinct story. It seems to require keeping an entire copy of the\nsampled record in the stats tables which would be prohibitive quickly in wide\ntables (it would be O(n^2) storage in the number of columns) .\n\nIt also seems like a lot of work to implement. Nothing particular that would\nbe impossible, but it does require storing a moderately complex data\nstructure. Perhaps Postgres's new support for data structures will make this\neasier.\n\n-- \ngreg\n\n",
"msg_date": "26 Apr 2005 19:03:07 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "On Tue, 2005-04-26 at 19:03 -0400, Greg Stark wrote:\n> This one looks *really* good. \n> \n> http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf\n> \n> It does require a single full table scan \n\nAck.. Not by default please.\n\nI have a few large append-only tables (vacuum isn't necessary) which do\nneed stats rebuilt periodically.\n\nLets just say that we've been working hard to upgrade to 8.0 primarily\nbecause pg_dump was taking over 18 hours to make a backup.\n\n-- \nRod Taylor <[email protected]>\n\n",
"msg_date": "Tue, 26 Apr 2005 19:10:11 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Rod Taylor <[email protected]> writes:\n\n> On Tue, 2005-04-26 at 19:03 -0400, Greg Stark wrote:\n> > This one looks *really* good. \n> > \n> > http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf\n> > \n> > It does require a single full table scan \n> \n> Ack.. Not by default please.\n> \n> I have a few large append-only tables (vacuum isn't necessary) which do\n> need stats rebuilt periodically.\n\nThe algorithm can also naturally be implemented incrementally. Which would be\nnice for your append-only tables. But that's not Postgres's current philosophy\nwith statistics. Perhaps some trigger function that you could install yourself\nto update statistics for a newly inserted record would be useful.\n\n\nThe paper is pretty straightforward and easy to read, but here's an executive\nsummary:\n\nThe goal is to gather a uniform sample of *distinct values* in the table as\nopposed to a sample of records.\n\nInstead of using a fixed percentage sampling rate for each record, use a hash\nof the value to determine whether to include it. At first include everything,\nbut if the sample space overflows throw out half the values based on their\nhash value. Repeat until finished.\n\nIn the end you'll have a sample of 1/2^n of your distinct values from your\nentire data set where n is large enough for you sample to fit in your\npredetermined constant sample space.\n\n-- \ngreg\n\n",
"msg_date": "26 Apr 2005 19:28:21 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "On Tue, 2005-04-26 at 19:28 -0400, Greg Stark wrote:\n> Rod Taylor <[email protected]> writes:\n> \n> > On Tue, 2005-04-26 at 19:03 -0400, Greg Stark wrote:\n> > > This one looks *really* good. \n> > > \n> > > http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf\n> > > \n> > > It does require a single full table scan \n> > \n> > Ack.. Not by default please.\n> > \n> > I have a few large append-only tables (vacuum isn't necessary) which do\n> > need stats rebuilt periodically.\n> \n> The algorithm can also naturally be implemented incrementally. Which would be\n> nice for your append-only tables. But that's not Postgres's current philosophy\n> with statistics. Perhaps some trigger function that you could install yourself\n> to update statistics for a newly inserted record would be useful.\n\nIf when we have partitions, that'll be good enough. If partitions aren't\navailable this would be quite painful to anyone with large tables --\nmuch as the days of old used to be painful for ANALYZE.\n\n-- \n\n",
"msg_date": "Tue, 26 Apr 2005 20:16:32 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Rod Taylor <[email protected]> writes:\n> If when we have partitions, that'll be good enough. If partitions aren't\n> available this would be quite painful to anyone with large tables --\n> much as the days of old used to be painful for ANALYZE.\n\nYeah ... I am very un-enthused about these suggestions to make ANALYZE\ngo back to doing a full scan ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Apr 2005 00:14:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested? "
},
{
"msg_contents": "Quoting Andrew Dunstan <[email protected]>: \n \n> After some more experimentation, I'm wondering about some sort of \n> adaptive algorithm, a bit along the lines suggested by Marko \nRistola, but limited to 2 rounds. \n> \n> The idea would be that we take a sample (either of fixed size, or \n> some small proportion of the table) , see how well it fits a larger \nsample \n> > (say a few times the size of the first sample), and then adjust \nthe > formula accordingly to project from the larger sample the \nestimate for the full population. Math not worked out yet - I think we \nwant to ensure that the result remains bounded by [d,N]. \n \nPerhaps I can save you some time (yes, I have a degree in Math). If I \nunderstand correctly, you're trying extrapolate from the correlation \nbetween a tiny sample and a larger sample. Introducing the tiny sample \ninto any decision can only produce a less accurate result than just \ntaking the larger sample on its own; GIGO. Whether they are consistent \nwith one another has no relationship to whether the larger sample \ncorrelates with the whole population. You can think of the tiny sample \nlike \"anecdotal\" evidence for wonderdrugs. \n-- \n\"Dreams come true, not free.\" -- S.Sondheim, ITW \n\n",
"msg_date": "Tue, 26 Apr 2005 22:38:04 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> Rod Taylor <[email protected]> writes:\n> > If when we have partitions, that'll be good enough. If partitions aren't\n> > available this would be quite painful to anyone with large tables --\n> > much as the days of old used to be painful for ANALYZE.\n> \n> Yeah ... I am very un-enthused about these suggestions to make ANALYZE\n> go back to doing a full scan ...\n\nWell one option would be to sample only a small number of records, but add the\ndata found from those records to the existing statistics. This would make\nsense for a steady-state situation, but make it hard to recover from a drastic\nchange in data distribution. I think in the case of n_distinct it would also\nbias the results towards underestimating n_distinct but perhaps that could be\ncorrected for.\n\nBut I'm unclear for what situation this is a concern. \n\nFor most use cases users have to run vacuum occasionally. In those cases\n\"vacuum analyze\" would be no worse than a straight normal vacuum. Note that\nthis algorithm doesn't require storing more data because of the large scan or\nperforming large sorts per column. It's purely O(n) time and O(1) space.\n\nOn the other hand, if you have tables you aren't vacuuming that means you\nperform zero updates or deletes. In which case some sort of incremental\nstatistics updating would be a good solution. A better solution even than\nsampling.\n\n-- \ngreg\n\n",
"msg_date": "27 Apr 2005 01:59:30 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "On Tue, 2005-04-26 at 15:00 -0700, Gurmeet Manku wrote:\n\n> 2. In a single scan, it is possible to estimate n_distinct by using\n> a very simple algorithm:\n> \n> \"Distinct sampling for highly-accurate answers to distinct value\n> queries and event reports\" by Gibbons, VLDB 2001.\n> \n> http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf\n\nThat looks like the one...\n\n...though it looks like some more complex changes to the current\nalgorithm to use it, and we want the other stats as well...\n\n> 3. In fact, Gibbon's basic idea has been extended to \"sliding windows\" \n> (this extension is useful in streaming systems like Aurora / Stream):\n> \n> \"Distributed streams algorithms for sliding windows\"\n> by Gibbons and Tirthapura, SPAA 2002.\n> \n> http://home.eng.iastate.edu/~snt/research/tocs.pdf\n> \n\n...and this offers the possibility of calculating statistics at load\ntime, as part of the COPY command\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 27 Apr 2005 08:45:10 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\n\nMischa Sandberg wrote:\n\n> \n>Perhaps I can save you some time (yes, I have a degree in Math). If I \n>understand correctly, you're trying extrapolate from the correlation \n>between a tiny sample and a larger sample. Introducing the tiny sample \n>into any decision can only produce a less accurate result than just \n>taking the larger sample on its own; GIGO. Whether they are consistent \n>with one another has no relationship to whether the larger sample \n>correlates with the whole population. You can think of the tiny sample \n>like \"anecdotal\" evidence for wonderdrugs. \n>\n> \n>\n\nOk, good point.\n\nI'm with Tom though in being very wary of solutions that require even \none-off whole table scans. Maybe we need an additional per-table \nstatistics setting which could specify the sample size, either as an \nabsolute number or as a percentage of the table. It certainly seems that \nwhere D/N ~ 0.3, the estimates on very large tables at least are way way \nout.\n\nOr maybe we need to support more than one estimation method.\n\nOr both ;-)\n\ncheers\n\nandrew\n\n\n",
"msg_date": "Wed, 27 Apr 2005 09:43:31 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Mischa,\n\n> >Perhaps I can save you some time (yes, I have a degree in Math). If I\n> >understand correctly, you're trying extrapolate from the correlation\n> >between a tiny sample and a larger sample. Introducing the tiny sample\n> >into any decision can only produce a less accurate result than just\n> >taking the larger sample on its own; GIGO. Whether they are consistent\n> >with one another has no relationship to whether the larger sample\n> >correlates with the whole population. You can think of the tiny sample\n> >like \"anecdotal\" evidence for wonderdrugs.\n\nActually, it's more to characterize how large of a sample we need. For \nexample, if we sample 0.005 of disk pages, and get an estimate, and then \nsample another 0.005 of disk pages and get an estimate which is not even \nclose to the first estimate, then we have an idea that this is a table which \ndefies analysis based on small samples. Wheras if the two estimates are < \n1.0 stdev apart, we can have good confidence that the table is easily \nestimated. Note that this doesn't require progressively larger samples; any \ntwo samples would work.\n\n> I'm with Tom though in being very wary of solutions that require even\n> one-off whole table scans. Maybe we need an additional per-table\n> statistics setting which could specify the sample size, either as an\n> absolute number or as a percentage of the table. It certainly seems that\n> where D/N ~ 0.3, the estimates on very large tables at least are way way\n> out.\n\nOh, I think there are several other cases where estimates are way out. \nBasically the estimation method we have doesn't work for samples smaller than \n0.10. \n\n> Or maybe we need to support more than one estimation method.\n\nYes, actually. We need 3 different estimation methods:\n1 for tables where we can sample a large % of pages (say, >= 0.1)\n1 for tables where we sample a small % of pages but are \"easily estimated\"\n1 for tables which are not easily estimated by we can't afford to sample a \nlarge % of pages.\n\nIf we're doing sampling-based estimation, I really don't want people to lose \nsight of the fact that page-based random sampling is much less expensive than \nrow-based random sampling. We should really be focusing on methods which \nare page-based.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 27 Apr 2005 08:25:16 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Quoting Josh Berkus <[email protected]>:\n\n> > >Perhaps I can save you some time (yes, I have a degree in Math). If I\n> > >understand correctly, you're trying extrapolate from the correlation\n> > >between a tiny sample and a larger sample. Introducing the tiny sample\n> > >into any decision can only produce a less accurate result than just\n> > >taking the larger sample on its own; GIGO. Whether they are consistent\n> > >with one another has no relationship to whether the larger sample\n> > >correlates with the whole population. You can think of the tiny sample\n> > >like \"anecdotal\" evidence for wonderdrugs.\n>\n> Actually, it's more to characterize how large of a sample we need. For\n> example, if we sample 0.005 of disk pages, and get an estimate, and then\n> sample another 0.005 of disk pages and get an estimate which is not even\n> close to the first estimate, then we have an idea that this is a table\nwhich\n> defies analysis based on small samples. Wheras if the two estimates\nare <\n> 1.0 stdev apart, we can have good confidence that the table is easily\n> estimated. Note that this doesn't require progressively larger\nsamples; any\n> two samples would work.\n\nWe're sort of wandering away from the area where words are a good way\nto describe the problem. Lacking a common scratchpad to work with,\ncould I suggest you talk to someone you consider has a background in\nstats, and have them draw for you why this doesn't work?\n\nAbout all you can get out of it is, if the two samples are\ndisjunct by a stddev, yes, you've demonstrated that the union\nof the two populations has a larger stddev than either of them;\nbut your two stddevs are less info than the stddev of the whole.\nBreaking your sample into two (or three, or four, ...) arbitrary pieces\nand looking at their stddevs just doesn't tell you any more than what\nyou start with.\n\n-- \n\"Dreams come true, not free.\" -- S.Sondheim, ITW \n\n",
"msg_date": "Thu, 28 Apr 2005 08:21:36 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Well, this guy has it nailed. He cites Flajolet and Martin, which was (I \nthought) as good as you could get with only a reasonable amount of memory per \nstatistic. Unfortunately, their hash table is a one-shot deal; there's no way \nto maintain it once the table changes. His incremental update doesn't degrade \nas the table changes. If there isn't the same wrangle of patent as with the \nARC algorithm, and if the existing stats collector process can stand the extra \ntraffic, then this one is a winner. \n \nMany thanks to the person who posted this reference in the first place; so \nsorry I canned your posting and can't recall your name. \n \nNow, if we can come up with something better than the ARC algorithm ... \n\n",
"msg_date": "Thu, 28 Apr 2005 22:10:18 -0700",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Distinct-Sampling (Gibbons paper) for Postgres"
},
{
"msg_contents": "\n> Now, if we can come up with something better than the ARC algorithm ...\n\nTom already did. His clock-sweep patch is already in the 8.1 source.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 28 Apr 2005 22:22:51 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Distinct-Sampling (Gibbons paper) for Postgres"
},
{
"msg_contents": "\n Actually, the earliest paper that solves the distinct_n estimation\n problem in 1 pass is the following:\n\n \"Estimating simple functions on the union of data streams\"\n by Gibbons and Tirthapura, SPAA 2001.\n http://home.eng.iastate.edu/~snt/research/streaming.pdf\n\n The above paper addresses a more difficult problem (1 pass\n _and_ a distributed setting).\n\n\n Gibbon's followup paper in VLDB 2001 limits the problem to a \n single machine and contains primarily experimental results (for\n a database audience). The algorithmic breakthrough had already been\n accomplished in the SPAA paper.\n\n Gurmeet\n\n-- \n ----------------------------------------------------\n Gurmeet Singh Manku Google Inc.\n http://www.cs.stanford.edu/~manku (650) 967 1890\n ----------------------------------------------------\n\n",
"msg_date": "Mon, 2 May 2005 09:14:00 -0700 (PDT)",
"msg_from": "Gurmeet Manku <[email protected]>",
"msg_from_op": false,
"msg_subject": "Citation for \"Bad n_distinct estimation; hacks suggested?\""
},
{
"msg_contents": "Hi, Josh,\n\nJosh Berkus wrote:\n\n> Yes, actually. We need 3 different estimation methods:\n> 1 for tables where we can sample a large % of pages (say, >= 0.1)\n> 1 for tables where we sample a small % of pages but are \"easily estimated\"\n> 1 for tables which are not easily estimated by we can't afford to sample a \n> large % of pages.\n> \n> If we're doing sampling-based estimation, I really don't want people to lose \n> sight of the fact that page-based random sampling is much less expensive than \n> row-based random sampling. We should really be focusing on methods which \n> are page-based.\n\nWould it make sense to have a sample method that scans indices? I think\nthat, at least for tree based indices (btree, gist), rather good\nestimates could be derived.\n\nAnd the presence of a unique index should lead to 100% distinct values\nestimation without any scan at all.\n\nMarkus\n\n",
"msg_date": "Tue, 03 May 2005 15:06:23 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Hello Everybody,\n\nWe recently upgraded to Postgres 7.4 from 7.3.9 and noticed that the\nforeign key constraints compile noticeably faster. In 7.3 the\nconstraints would typically take more than an hour to run on our\nproduction data. Now they take a minute or two.\n\nCan anybody explain such a major performance improvement ?\n\nThanks\n\n-- \nAshish Arte\nOpen Sky Software\n\n",
"msg_date": "Tue, 03 May 2005 10:11:23 -0500",
"msg_from": "Ashish Arte <[email protected]>",
"msg_from_op": false,
"msg_subject": "Foreign key constraints compile faster in 7.4"
},
{
"msg_contents": "Ashish Arte <[email protected]> writes:\n> We recently upgraded to Postgres 7.4 from 7.3.9 and noticed that the\n> foreign key constraints compile noticeably faster. In 7.3 the\n> constraints would typically take more than an hour to run on our\n> production data. Now they take a minute or two.\n\n> Can anybody explain such a major performance improvement ?\n\nHey, we do do some work on this thing from time to time ;-)\n\nProbably you are talking about this:\n\n2003-10-06 12:38 tgl\n\n\t* src/: backend/commands/tablecmds.c,\n\tbackend/utils/adt/ri_triggers.c, include/commands/trigger.h: During\n\tALTER TABLE ADD FOREIGN KEY, try to check the existing rows using a\n\tsingle LEFT JOIN query instead of firing the check trigger for each\n\trow individually. Stephan Szabo, with some kibitzing from Tom Lane\n\tand Jan Wieck.\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 May 2005 11:59:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Foreign key constraints compile faster in 7.4 "
},
{
"msg_contents": "Quoting Markus Schaber <[email protected]>:\n\n> Hi, Josh,\n> \n> Josh Berkus wrote:\n> \n> > Yes, actually. We need 3 different estimation methods:\n> > 1 for tables where we can sample a large % of pages (say, >= 0.1)\n> > 1 for tables where we sample a small % of pages but are \"easily\n> estimated\"\n> > 1 for tables which are not easily estimated by we can't afford to\n> sample a \n> > large % of pages.\n> > \n> > If we're doing sampling-based estimation, I really don't want\n> people to lose \n> > sight of the fact that page-based random sampling is much less\n> expensive than \n> > row-based random sampling. We should really be focusing on\n> methods which \n> > are page-based.\n\nOkay, although given the track record of page-based sampling for\nn-distinct, it's a bit like looking for your keys under the streetlight,\nrather than in the alley where you dropped them :-)\n\nHow about applying the distinct-sampling filter on a small extra data\nstream to the stats collector? \n\n-- \nEngineers think equations approximate reality.\nPhysicists think reality approximates the equations.\nMathematicians never make the connection.\n\n",
"msg_date": "Tue, 3 May 2005 14:33:10 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Mischa,\n\n> Okay, although given the track record of page-based sampling for\n> n-distinct, it's a bit like looking for your keys under the streetlight,\n> rather than in the alley where you dropped them :-)\n\nBad analogy, but funny.\n\nThe issue with page-based vs. pure random sampling is that to do, for example, \n10% of rows purely randomly would actually mean loading 50% of pages. With \n20% of rows, you might as well scan the whole table.\n\nUnless, of course, we use indexes for sampling, which seems like a *really \ngood* idea to me ....\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 3 May 2005 14:43:44 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Josh Berkus wrote:\n> Mischa,\n>\n>\n>>Okay, although given the track record of page-based sampling for\n>>n-distinct, it's a bit like looking for your keys under the streetlight,\n>>rather than in the alley where you dropped them :-)\n>\n>\n> Bad analogy, but funny.\n>\n> The issue with page-based vs. pure random sampling is that to do, for example,\n> 10% of rows purely randomly would actually mean loading 50% of pages. With\n> 20% of rows, you might as well scan the whole table.\n>\n> Unless, of course, we use indexes for sampling, which seems like a *really\n> good* idea to me ....\n>\n\nBut doesn't an index only sample one column at a time, whereas with\npage-based sampling, you can sample all of the columns at once. And not\nall columns would have indexes, though it could be assumed that if a\ncolumn doesn't have an index, then it doesn't matter as much for\ncalculations such as n_distinct.\n\nBut if you had 5 indexed rows in your table, then doing it index wise\nmeans you would have to make 5 passes instead of just one.\n\nThough I agree that page-based sampling is important for performance\nreasons.\n\nJohn\n=:->",
"msg_date": "Tue, 03 May 2005 19:45:17 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "John,\n\n> But doesn't an index only sample one column at a time, whereas with\n> page-based sampling, you can sample all of the columns at once. \n\nHmmm. Yeah, we're not currently doing that though. Another good idea ...\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 3 May 2005 17:52:54 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "Quoting Josh Berkus <[email protected]>: \n \n> Mischa, \n> \n> > Okay, although given the track record of page-based sampling for \n> > n-distinct, it's a bit like looking for your keys under the \n> streetlight, \n> > rather than in the alley where you dropped them :-) \n> \n> Bad analogy, but funny. \n \nBad analogy? Page-sampling effort versus row-sampling effort, c'est \nmoot. It's not good enough for stats to produce good behaviour on the \naverage. Straight random sampling, page or row, is going to cause \nenough untrustworthy engine behaviour,for any %ages small enough to \nallow sampling from scratch at any time. \n \nI'm curious what the problem is with relying on a start-up plus \nincremental method, when the method in the distinct-sampling paper \ndoesn't degenerate: you can start when the table is still empty. \nConstructing an index requires an initial full scan plus incremental \nupdate; what's the diff? \n \n> Unless, of course, we use indexes for sampling, which seems like a \n> *really \n> good* idea to me .... \n \n\"distinct-sampling\" applies for indexes, too. I started tracking the \ndiscussion of this a bit late. Smart method for this is in VLDB'92: \nGennady Antoshenkov, \"Random Sampling from Pseudo-ranked B+-trees\". I \ndon't think this is online anywhere, except if you have a DBLP \nmembership. Does nybod else know better? \nAntoshenkov was the brains behind some of the really cool stuff in DEC \nRdb (what eventually became Oracle). Compressed bitmap indices, \nparallel competing query plans, and smart handling of keys with \nhyperbolic distributions. \n-- \nEngineers think equations approximate reality. \nPhysicists think reality approximates the equations. \nMathematicians never make the connection. \n\n",
"msg_date": "Tue, 3 May 2005 19:52:16 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Bad n_distinct estimation; hacks suggested?"
}
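As a concrete spot check for the n_distinct drift discussed in this thread, the sampled estimate can be compared against the true value. The table and column names below (web_hits, session_id) are placeholders, and the COUNT(DISTINCT ...) is a full scan meant only as a one-off sanity check.

    -- Planner's sampled estimate (a negative value is a fraction of the row count):
    SELECT attname, n_distinct
    FROM pg_stats
    WHERE tablename = 'web_hits' AND attname = 'session_id';

    -- Ground truth, full scan, for comparison only:
    SELECT count(DISTINCT session_id) AS actual_n_distinct FROM web_hits;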
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]]\n> Sent: Tuesday, April 19, 2005 2:09 PM\n> To: pgsql-perform\n> Subject: [PERFORM] Bad n_distinct estimation; hacks suggested?\n> \n> [...]\n> (BTW, increasing the stats to 1000 only doubles n_distinct, \n> and doesn't solve the problem)\n\nSpeaking of which, is there a reason why statistics are limited\nto 1000? Performance?\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Tue, 19 Apr 2005 15:01:57 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad n_distinct estimation; hacks suggested?"
}
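For reference, the per-column knob the question refers to looks like the sketch below (table and column names are placeholders). In the releases current at the time, both the per-column and the server-wide targets are capped at 1000, and default_statistics_target defaults to 10.

    ALTER TABLE web_hits ALTER COLUMN session_id SET STATISTICS 1000;
    ANALYZE web_hits;
    SHOW default_statistics_target;  -- server-wide default used when no per-column target is set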
] |
[
{
"msg_contents": "A friend of mine has an application where he's copying in 4000 rows at a\ntime into a table that has about 4M rows. Each row is 40-50 bytes. This\nis taking 25 seconds on a dual PIII-1GHz with 1G of RAM and a 2 disk\nSATA mirror, running FBSD 4.10-stable. There's one index on the table.\n\nWhat's really odd is that neither the CPU or the disk are being\nhammered. The box appears to be pretty idle; the postgresql proces is\nusing 4-5% CPU.\n\nI seem to recall others running into this before, but I can't remember\nwhat the issue was and I can't find it in the archives.\n\nThis is version 8.0, btw.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 19 Apr 2005 21:00:33 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow copy with little CPU/disk usage"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> A friend of mine has an application where he's copying in 4000 rows at a\n> time into a table that has about 4M rows. Each row is 40-50 bytes. This\n> is taking 25 seconds on a dual PIII-1GHz with 1G of RAM and a 2 disk\n> SATA mirror, running FBSD 4.10-stable. There's one index on the table.\n\nIf there's no hidden costs such as foreign key checks, that does seem\npretty dang slow.\n\n> What's really odd is that neither the CPU or the disk are being\n> hammered. The box appears to be pretty idle; the postgresql proces is\n> using 4-5% CPU.\n\nIt's very hard to believe that *neither* disk nor CPU is maxed.\nCan we see a reproducible test case, please?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 19 Apr 2005 23:05:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow copy with little CPU/disk usage "
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n\n> What's really odd is that neither the CPU or the disk are being\n> hammered. The box appears to be pretty idle; the postgresql proces is\n> using 4-5% CPU.\n\nIs he committing every row? In that case you would see fairly low i/o\nbandwidth usage because most of the time is being spent seeking and waiting\nfor rotational latency.\n\n-- \ngreg\n\n",
"msg_date": "20 Apr 2005 00:34:27 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow copy with little CPU/disk usage"
},
{
"msg_contents": "Quoting Tom Lane <[email protected]>: \n \n> \"Jim C. Nasby\" <[email protected]> writes: \n> > A friend of mine has an application where he's copying in 4000 rows at a \n> > time into a table that has about 4M rows. Each row is 40-50 bytes. This \n> > is taking 25 seconds on a dual PIII-1GHz with 1G of RAM and a 2 disk \n> > SATA mirror, running FBSD 4.10-stable. There's one index on the table. \n> \n> If there's no hidden costs such as foreign key checks, that does seem \n> pretty dang slow. \n> \n> > What's really odd is that neither the CPU or the disk are being \n> > hammered. The box appears to be pretty idle; the postgresql proces is \n> > using 4-5% CPU. \n-- \nThis sounds EXACTLY like my problem, if you make the box to a Xeon 2.4GHz, 2GB \nRAM ... with two SCSI drives (xlog and base); loading 10K rows of about 200 \nbytes each; takes about 20 secs at the best, and much longer at the worst. By \nany chance does your friend have several client machines/processes trying to \nmass-load rows at the same time? Or at least some other processes updating \nthat table in a bulkish way? What I get is low diskio, low cpu, even low \ncontext-switches ... and I'm betting he should take a look at pg_locks. For my \nown problem, I gather that an exclusive lock is necessary while updating \nindexes and heap, and the multiple processes doing the update can make that \npathological. \n \nAnyway, have your friend check pg_locks. \n \n \n\"Dreams come true, not free.\" -- S.Sondheim, ITW \n\n",
"msg_date": "Tue, 19 Apr 2005 21:37:21 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow copy with little CPU/disk usage "
},
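A minimal sketch of what "check pg_locks" means in practice: run something like the following from a second session while the slow COPY is underway. It is a generic query, not tied to this particular setup.

    SELECT pid, relation::regclass AS relation, mode, granted
    FROM pg_locks
    ORDER BY pid;
    -- rows with granted = f show what the loading backend is waiting on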
{
"msg_contents": "No, this is a single process. And there's known issues with context\nstorms on Xeons, so that might be what you're seeing.\n\nOn Tue, Apr 19, 2005 at 09:37:21PM -0700, Mischa Sandberg wrote:\n> Quoting Tom Lane <[email protected]>: \n> \n> > \"Jim C. Nasby\" <[email protected]> writes: \n> > > A friend of mine has an application where he's copying in 4000 rows at a \n> > > time into a table that has about 4M rows. Each row is 40-50 bytes. This \n> > > is taking 25 seconds on a dual PIII-1GHz with 1G of RAM and a 2 disk \n> > > SATA mirror, running FBSD 4.10-stable. There's one index on the table. \n> > \n> > If there's no hidden costs such as foreign key checks, that does seem \n> > pretty dang slow. \n> > \n> > > What's really odd is that neither the CPU or the disk are being \n> > > hammered. The box appears to be pretty idle; the postgresql proces is \n> > > using 4-5% CPU. \n> -- \n> This sounds EXACTLY like my problem, if you make the box to a Xeon 2.4GHz, 2GB \n> RAM ... with two SCSI drives (xlog and base); loading 10K rows of about 200 \n> bytes each; takes about 20 secs at the best, and much longer at the worst. By \n> any chance does your friend have several client machines/processes trying to \n> mass-load rows at the same time? Or at least some other processes updating \n> that table in a bulkish way? What I get is low diskio, low cpu, even low \n> context-switches ... and I'm betting he should take a look at pg_locks. For my \n> own problem, I gather that an exclusive lock is necessary while updating \n> indexes and heap, and the multiple processes doing the update can make that \n> pathological. \n> \n> Anyway, have your friend check pg_locks. \n> \n> \n> \"Dreams come true, not free.\" -- S.Sondheim, ITW \n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Wed, 20 Apr 2005 17:22:27 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow copy with little CPU/disk usage"
},
{
"msg_contents": "No, he's using either COPY or \\COPY.\n\nOn Wed, Apr 20, 2005 at 12:34:27AM -0400, Greg Stark wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> \n> > What's really odd is that neither the CPU or the disk are being\n> > hammered. The box appears to be pretty idle; the postgresql proces is\n> > using 4-5% CPU.\n> \n> Is he committing every row? In that case you would see fairly low i/o\n> bandwidth usage because most of the time is being spent seeking and waiting\n> for rotational latency.\n> \n> -- \n> greg\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Wed, 20 Apr 2005 17:23:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow copy with little CPU/disk usage"
}
] |
[
{
"msg_contents": "Is there a way to look at the stats tables and tell what is jamming up your \npostgres server the most? Other than seeing long running queries and watch \ntop, atop, iostat, vmstat in separate xterms...I'm wondering if postgres keeps \nsome stats on what it spends the most time doing or if there's a way to \nextract that sort of info from other metrics it keeps in the stats table?\n\nMaybe a script which polls the stats table and correlates the info with stats \nabout the system in /proc?\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Tue, 19 Apr 2005 21:44:52 -0700 (PDT)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to tell what your postgresql server is doing"
},
{
"msg_contents": "> Is there a way to look at the stats tables and tell what is jamming up \n> your postgres server the most? Other than seeing long running queries \n> and watch top, atop, iostat, vmstat in separate xterms...I'm wondering \n> if postgres keeps some stats on what it spends the most time doing or if \n> there's a way to extract that sort of info from other metrics it keeps \n> in the stats table?\n> \n> Maybe a script which polls the stats table and correlates the info with \n> stats about the system in /proc?\n\nTurn on logging of all queries, sample for a few hours or one day. Then \n run Practical Query Analyzer (PQA on pgfoundry.org) over it to get \naggregate query information.\n\nChris\n",
"msg_date": "Wed, 20 Apr 2005 13:21:41 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to tell what your postgresql server is doing"
}
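A sketch of the logging setup described above, using the 8.0 parameter names (7.4 spells log_statement as a boolean). These lines go in postgresql.conf for the sampling window and can be reverted afterwards.

    log_statement = 'all'              # capture every statement for later analysis with PQA
    log_min_duration_statement = 0     # alternative: log each statement together with its duration
    log_line_prefix = '%t [%p] '       # timestamps and backend PIDs make aggregation easier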
] |
[
{
"msg_contents": "We have a table with 1M rows that contain sessions with a start and\nfinish timestamps. When joining this table with a 10k table with rounded\ntimestamps, explain shows me sequential scans are used, and the join\ntakes about 6 hours (2s per seq scan on session table * 10000):\n\n Nested Loop (cost=252.80..233025873.16 rows=1035480320 width=97)\nJoin Filter: ((\"outer\".starttime <= \"inner\".ts) AND (\"outer\".finishtime\n>= \"inner\".ts))\n -> Seq Scan on sessions us (cost=0.00..42548.36 rows=924536\nwidth=105) -> Materialize (cost=252.80..353.60 rows=10080 width=8)\n -> Seq Scan on duration du (cost=0.00..252.80 rows=10080 width=8)\n\nHowever, during the initial loading of the data (we first load into text\ntables, then convert to tables using timestamps etc, then run this\nquery) the same query took only 12 minutes. While debugging, I increased\ncpu_tuple_cost to 0.1 (from 0.01). Now the explain shows an index scan,\nand the run time comes down to 11 minutes:\n\n Nested Loop (cost=0.00..667700310.42 rows=1035480320 width=97)\n -> Seq Scan on sessions us (cost=0.00..125756.60 rows=924536 width=105)\n -> Index Scan using ix_du_ts on duration du (cost=0.00..604.46\nrows=1120 width=8)\n Index Cond: ((\"outer\".starttime <= du.ts) AND\n(\"outer\".finishtime >= du.ts))\n\nI am glad that I found a way to force the use of the index, but still\ncan't explain why in the initial run the planner made the right choice,\nbut now I need to give it a hand. Could this have to do with the\nstatistics of the tables? I make very sure (during the initial load and\nwhile testing) that I vacuum analyze all tables after I fill them.\n\nI'm runing postgres 7.4.7.\n\nAny help is appreciated.\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Wed, 20 Apr 2005 16:22:24 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "When are index scans used over seq scans?"
},
{
"msg_contents": "Richard van den Berg wrote:\n\n>We have a table with 1M rows that contain sessions with a start and\n>finish timestamps. When joining this table with a 10k table with rounded\n>timestamps, explain shows me sequential scans are used, and the join\n>takes about 6 hours (2s per seq scan on session table * 10000):\n>\n> Nested Loop (cost=252.80..233025873.16 rows=1035480320 width=97)\n>Join Filter: ((\"outer\".starttime <= \"inner\".ts) AND (\"outer\".finishtime\n>\n>\n>>= \"inner\".ts))\n>>\n>>\n> -> Seq Scan on sessions us (cost=0.00..42548.36 rows=924536\n>width=105) -> Materialize (cost=252.80..353.60 rows=10080 width=8)\n> -> Seq Scan on duration du (cost=0.00..252.80 rows=10080 width=8)\n>\n>However, during the initial loading of the data (we first load into text\n>tables, then convert to tables using timestamps etc, then run this\n>query) the same query took only 12 minutes. While debugging, I increased\n>cpu_tuple_cost to 0.1 (from 0.01). Now the explain shows an index scan,\n>and the run time comes down to 11 minutes:\n>\n> Nested Loop (cost=0.00..667700310.42 rows=1035480320 width=97)\n> -> Seq Scan on sessions us (cost=0.00..125756.60 rows=924536 width=105)\n> -> Index Scan using ix_du_ts on duration du (cost=0.00..604.46\n>rows=1120 width=8)\n> Index Cond: ((\"outer\".starttime <= du.ts) AND\n>(\"outer\".finishtime >= du.ts))\n>\n>I am glad that I found a way to force the use of the index, but still\n>can't explain why in the initial run the planner made the right choice,\n>but now I need to give it a hand. Could this have to do with the\n>statistics of the tables? I make very sure (during the initial load and\n>while testing) that I vacuum analyze all tables after I fill them.\n>\n>I'm runing postgres 7.4.7.\n>\n>Any help is appreciated.\n>\n>\n>\nI believe the problem is that postgres doesn't recognize how restrictive\na date-range is unless it uses constants.\nSo saying:\n\nselect blah from du WHERE time between '2004-10-10' and '2004-10-15';\nWill properly use the index, because it realizes it only returns a few rows.\nHowever\nselect blah from du, us where du.ts between us.starttime and us.finishtime;\nDoesn't know how selective that BETWEEN is.\n\nThis has been discussed as a future improvement to the planner (in\n8.*). I don't know the current status.\n\nAlso, in the future, you really should post your table schema, and\nexplain analyze instead of just explain. (I realize that with a 6hr\nquery it is a little painful.)\n\nNotice that in the above plans, the expected number of rows drops from\n10k down to 1k (which is probably where the planner decides to switch).\nAnd if you actually did the analyze probably the number of rows is much\nlower still.\n\nProbably you should try to find out the status of multi-table\nselectivity. It was discussed in the last couple of months.\n\nJohn\n=:->",
"msg_date": "Wed, 20 Apr 2005 09:34:56 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When are index scans used over seq scans?"
},
{
"msg_contents": "Richard van den Berg <[email protected]> writes:\n> We have a table with 1M rows that contain sessions with a start and\n> finish timestamps. When joining this table with a 10k table with rounded\n> timestamps, explain shows me sequential scans are used, and the join\n> takes about 6 hours (2s per seq scan on session table * 10000):\n\n> Nested Loop (cost=252.80..233025873.16 rows=1035480320 width=97)\n> Join Filter: ((\"outer\".starttime <= \"inner\".ts) AND (\"outer\".finishtime\n>> = \"inner\".ts))\n> -> Seq Scan on sessions us (cost=0.00..42548.36 rows=924536\n> width=105) -> Materialize (cost=252.80..353.60 rows=10080 width=8)\n> -> Seq Scan on duration du (cost=0.00..252.80 rows=10080 width=8)\n\nThe explain shows no such thing. What is the *actual* runtime of\neach plan per EXPLAIN ANALYZE, please?\n\n(In general, any time you are complaining about planner misbehavior,\nit is utterly pointless to give only planner estimates and not reality.\nBy definition, you don't think the estimates are right.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Apr 2005 10:39:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When are index scans used over seq scans? "
},
{
"msg_contents": "John A Meinel wrote:\n> I believe the problem is that postgres doesn't recognize how restrictive\n> a date-range is unless it uses constants.\n\nAnd it does when using BETWEEN with int for example? Impressive. :-)\n\n> select blah from du WHERE time between '2004-10-10' and '2004-10-15';\n> Will properly use the index, because it realizes it only returns a few\n> rows.\n\nCorrect, it does.\n\n> Probably you should try to find out the status of multi-table\n> selectivity. It was discussed in the last couple of months.\n\nI can't find the posts you are refering to. What is the priciple of\nmulti-table selectivity?\n\nYour explanation sounds very plausible.. I don't mind changing the\ncpu_tuple_cost before running BETWEEN with timestamps, they are easy\nenough to spot.\n\nThanks,\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Wed, 20 Apr 2005 17:15:37 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When are index scans used over seq scans?"
},
{
"msg_contents": "Richard van den Berg wrote:\n\n>John A Meinel wrote:\n>\n>\n>>I believe the problem is that postgres doesn't recognize how restrictive\n>>a date-range is unless it uses constants.\n>>\n>>\n>\n>And it does when using BETWEEN with int for example? Impressive. :-)\n>\n>\n>\n>>select blah from du WHERE time between '2004-10-10' and '2004-10-15';\n>>Will properly use the index, because it realizes it only returns a few\n>>rows.\n>>\n>>\n>\n>Correct, it does.\n>\n>\n>\n>>Probably you should try to find out the status of multi-table\n>>selectivity. It was discussed in the last couple of months.\n>>\n>>\n>\n>I can't find the posts you are refering to. What is the priciple of\n>multi-table selectivity?\n>\n>Your explanation sounds very plausible.. I don't mind changing the\n>cpu_tuple_cost before running BETWEEN with timestamps, they are easy\n>enough to spot.\n>\n>Thanks,\n>\n>\n>\nWell, there was a thread titled \"date - range\"\nThere is also \"recognizing range constraints\" which started with \"plan\nfor relatively simple query seems to be very inefficient\".\n\nSorry that I gave you poor search terms.\n\nAnyway, \"date - range\" gives an interesting workaround. Basically you\nstore date ranges with a different structure, which allows fast index\nlookups.\n\nThe other threads are just discussing the possibility of improving the\nplanner so that it recognizes WHERE a > b AND a < c, is generally more\nrestrictive.\n\nThere was a discussion about how to estimate selectivity, but I think it\nmostly boils down that except for pathological cases, a > b AND a < c is\nalways more restrictive than just a > b, or a < c.\n\nSome of it may be also be found in pgsql-hackers, rather than\npgsql-performance, but I'm not subscribed to -hackers, so most of it\nshould be in -performance.\n\nJohn\n=:->\n\ncaveat, I'm not a developer, I just read a lot of the list.",
"msg_date": "Wed, 20 Apr 2005 10:37:34 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When are index scans used over seq scans?"
},
{
"msg_contents": "Tom Lane wrote:\n> The explain shows no such thing. What is the *actual* runtime of\n> each plan per EXPLAIN ANALYZE, please?\n\nI took a simplified version of the problem (the actual query that took 6\nhours joins 3 tables). With cpu_tuple_cost = 0.1:\n\n Nested Loop (cost=0.00..667700310.42 rows=1035480320 width=97) (actual\ntime=31.468..42629.629 rows=6171334 loops=1)\n -> Seq Scan on sessions us (cost=0.00..125756.60 rows=924536\nwidth=105) (actual time=31.366..3293.523 rows=924536 loops=1)\n -> Index Scan using ix_du_ts on duration du (cost=0.00..604.46\nrows=1120 width=8) (actual time=0.004..0.011 rows=7 loops=924536)\n Index Cond: ((\"outer\".starttimetrunc <= du.ts) AND\n(\"outer\".finishtimetrunc >= du.ts))\n Total runtime: 44337.937 ms\n\nThe explain analyze for cpu_tuple_cost = 0.01 is running now. If it\ntakes hours, I'll send it to the list tomorrow.\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Wed, 20 Apr 2005 17:54:45 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When are index scans used over seq scans?"
},
{
"msg_contents": "Tom Lane wrote:\n> The explain shows no such thing. What is the *actual* runtime of\n> each plan per EXPLAIN ANALYZE, please?\n\nOk, it took 3.5 hours to complete. :-/\n\nThis is with the default cpu_tuple_cost = 0.01:\n\n Nested Loop (cost=252.80..233010147.16 rows=1035480320 width=98)\n(actual time=0.369..12672213.137 rows=6171334 loops=1)\n Join Filter: ((\"outer\".starttimetrunc <= \"inner\".ts) AND\n(\"outer\".finishtimetrunc >= \"inner\".ts))\n -> Seq Scan on sessions us (cost=0.00..26822.36 rows=924536\nwidth=106) (actual time=0.039..5447.349 rows=924536 loops=1)\n -> Materialize (cost=252.80..353.60 rows=10080 width=8) (actual\ntime=0.000..2.770 rows=10080 loops=924536)\n -> Seq Scan on duration du (cost=0.00..252.80 rows=10080\nwidth=8) (actual time=0.019..13.397 rows=10080 loops=1)\n Total runtime: 12674486.670 ms\n\nOnce again with cpu_tuple_cost = 0.1:\n\n Nested Loop (cost=0.00..667684584.42 rows=1035480320 width=98) (actual\ntime=42.892..39877.928 rows=6171334 loops=1)\n -> Seq Scan on sessions us (cost=0.00..110030.60 rows=924536\nwidth=106) (actual time=0.020..917.803 rows=924536 loops=1)\n -> Index Scan using ix_du_ts on duration du (cost=0.00..604.46\nrows=1120 width=8) (actual time=0.004..0.011 rows=7 loops=924536)\n Index Cond: ((\"outer\".starttimetrunc <= du.ts) AND\n(\"outer\".finishtimetrunc >= du.ts))\n Total runtime: 41635.468 ms\n(5 rows)\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Thu, 21 Apr 2005 10:15:40 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When are index scans used over seq scans?"
},
{
"msg_contents": "Thanks a lot John for the correct search terms. :-)\n\nThe suggestion in\nhttp://archives.postgresql.org/pgsql-performance/2005-04/msg00029.php to\nadd a constraint that checks (finishtime >= starttime) does not make a\ndifference for me. Still seq scans are used.\n\nThe width solution explained in\nhttp://archives.postgresql.org/pgsql-performance/2005-04/msg00027.php\nand\nhttp://archives.postgresql.org/pgsql-performance/2005-04/msg00116.php\ndoes make a huge difference when selecting 1 timestamp using a BETWEEN\n(2ms vs 2sec), but as soon as I put 2 timestamps in a table and try a\njoin, everything goes south (7.7sec). I have 10k timestamps in the\nduration table. :-(\n\nI'm getting more confused on how the planner decides to use indexes. For\nexample, if I try:\n\nexplain analyze select us.oid from sessions us where '2005-04-10\n23:11:00' between us.starttimetrunc and us.finishtimetrunc;\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sessions_st_ft_idx2 on sessions us\n(cost=0.00..18320.73 rows=4765 width=4) (actual time=0.063..2.455\nrows=279 loops=1)\n Index Cond: (('2005-04-10 23:11:00'::timestamp without time zone <=\nfinishtimetrunc) AND ('2005-04-10 23:11:00'::timestamp without time zone\n>= starttimetrunc))\n Total runtime: 2.616 ms\n\nis uses the index! However, if I change the date it does not:\n\nexplain analyze select us.oid from sessions us where '2005-04-09\n23:11:00' between us.starttimetrunc and us.finishtimetrunc;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\nSeq Scan on sessions us (cost=0.00..68173.04 rows=41575 width=4)\n(actual time=553.424..1981.695 rows=64 loops=1)\n Filter: (('2005-04-09 23:11:00'::timestamp without time zone >=\nstarttimetrunc) AND ('2005-04-09 23:11:00'::timestamp without time zone\n<= finishtimetrunc))\n Total runtime: 1981.802 ms\n\nThe times in sessions go from '2005-04-04 00:00:00' to '2005-04-10\n23:59:00' so both are valid times to query for, but April 10th is more\ntowards the end. A little experimenting shows that if I go earlier than\n'2005-04-10 13:26:15' seq scans are being used. I was thinking this\ntimestamp would have something to do with the histogram_bounds in\npg_stats, but I cannot find a match:\n\n starttimetrunc | {\"2005-04-04 00:05:00\",\"2005-04-04\n11:49:00\",\"2005-04-04 22:03:00\",\"2005-04-05 10:54:00\",\"2005-04-05\n21:08:00\",\"2005-04-06 10:28:00\",\"2005-04-07 01:57:00\",\"2005-04-07\n15:55:00\",\"2005-04-08 10:18:00\",\"2005-04-08 17:12:00\",\"2005-04-10 23:57:00\"}\n finishtimetrunc | {\"2005-04-04 00:05:00.93\",\"2005-04-04\n11:53:00.989999\",\"2005-04-04 22:35:00.38\",\"2005-04-05\n11:13:00.029999\",\"2005-04-05 21:31:00.989999\",\"2005-04-06\n10:45:01\",\"2005-04-07 02:08:08.25\",\"2005-04-07 16:20:00.93\",\"2005-04-08\n10:25:00.409999\",\"2005-04-08 17:15:00.949999\",\"2005-04-11 02:08:19\"}\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n Have you visited our new DNA Portal?\n-------------------------------------------\n",
"msg_date": "Thu, 21 Apr 2005 14:14:26 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When are index scans used over seq scans?"
},
{
"msg_contents": "Richard van den Berg <[email protected]> writes:\n> This is with the default cpu_tuple_cost = 0.01:\n\n> Nested Loop (cost=252.80..233010147.16 rows=1035480320 width=98)\n> (actual time=0.369..12672213.137 rows=6171334 loops=1)\n> Join Filter: ((\"outer\".starttimetrunc <= \"inner\".ts) AND\n> (\"outer\".finishtimetrunc >= \"inner\".ts))\n> -> Seq Scan on sessions us (cost=0.00..26822.36 rows=924536\n> width=106) (actual time=0.039..5447.349 rows=924536 loops=1)\n> -> Materialize (cost=252.80..353.60 rows=10080 width=8) (actual\n> time=0.000..2.770 rows=10080 loops=924536)\n> -> Seq Scan on duration du (cost=0.00..252.80 rows=10080\n> width=8) (actual time=0.019..13.397 rows=10080 loops=1)\n> Total runtime: 12674486.670 ms\n\nHmm, that *is* showing rather a spectacularly large amount of time in\nthe join itself: if I did the arithmetic right,\n\nregression=# select 12672213.137 - (5447.349 + 2.770*924536 + 13.397);\n ?column?\n--------------\n 10105787.671\n(1 row)\n\nwhich is almost 80% of the entire runtime. Which is enormous.\nWhat are those column datatypes exactly? Perhaps you are incurring a\ndatatype conversion cost? Straight timestamp-vs-timestamp comparison\nis fairly cheap, but any sort of conversion will cost dearly.\n\nThe planner's model for the time spent in the join itself is\n\t(cpu_tuple_cost + 2 * cpu_operator_cost) * n_tuples\n(the 2 because you have 2 operators in the join condition)\nso you'd have to raise one or the other of these parameters\nto model this situation accurately. But I have a hard time\nbelieving that cpu_tuple_cost is really as high as 0.1.\nIt seems more likely that the cpu_operator_cost is underestimated,\nwhich leads me to question what exactly is happening in those\ncomparisons.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Apr 2005 10:25:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When are index scans used over seq scans? "
},
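Plugging the row counts from the slow plan into the formula Tom quotes, with the default cost settings, gives a rough cross-check of where the estimate comes from; this is only back-of-the-envelope arithmetic, not the planner's exact calculation.

    -- (cpu_tuple_cost + 2 * cpu_operator_cost) * n_tuples, with defaults 0.01 and 0.0025:
    SELECT (0.01 + 2 * 0.0025) * (924536::numeric * 10080) AS join_cpu_term;
    -- about 139.8 million, roughly 60% of the ~233M nested-loop estimate shown earlier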
{
"msg_contents": "John A Meinel wrote:\n> You might try doing:\n> ALTER TABLE us ALTER COLUMN starttimetrunc SET STATISTICS 200;\n> ALTER TABLE us ALTER COLUMN finishtimetrunc SET STATISTICS 200;\n> VACUUM ANALYZE us;\n\nI've been looking into that. While increasing the statistics makes the\nplanner use the index for simple selects, it still does not for joins.\n\nAnother thing that threw me off is that after a \"vacuum analyze\" a\n\"select * from us where 'x' between start and finish\" uses seq scans,\nwhile after just an \"analyze\" is uses the index! I thought both\nstatements were supposed to update the statistics in the same way? (This\nis with 7.4.7.)\n\n> You have 2 tables, a duration, and a from->to table, right? How many\n> rows in each? \n\nDuration: 10k\nSessions: 1M\n\n> Anyway, you can play around with it by using stuff like:\n> SET enable_seqscan TO off;\n\nThis doesn't help much. Instead of turning seqscans off this setting\nincreases its cost with 100M. Since my query already has a cost of about\n400M-800M this doesn't matter much.\n\nFor now, the only reliable way of forcing the use of the index is to set\ncpu_tuple_cost = 1.\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n\n",
"msg_date": "Thu, 21 Apr 2005 17:54:33 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When are index scans used over seq scans?"
},
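For experiments like the ones above without editing postgresql.conf, the override can be scoped to a single transaction; the query is the shape used in this thread, and SET LOCAL reverts automatically at ROLLBACK.

    BEGIN;
    SET LOCAL cpu_tuple_cost = 0.1;  -- the value that flipped the plan earlier in the thread
    EXPLAIN ANALYZE
      SELECT us.oid
      FROM sessions us, duration du
      WHERE du.ts BETWEEN us.starttimetrunc AND us.finishtimetrunc;
    ROLLBACK;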
{
"msg_contents": "Tom Lane wrote:\n> which is almost 80% of the entire runtime. Which is enormous.\n> What are those column datatypes exactly? \n\n Table \"richard.sessions\"\n Column | Type | Modifiers\n------------------------+-----------------------------+-----------\n[unrelated columns removed]\n starttimetrunc | timestamp without time zone |\n finishtimetrunc | timestamp without time zone |\nIndexes:\n \"rb_us_st_ft_idx\" btree (starttimetrunc, finishtimetrunc)\n \"rb_us_st_ft_idx2\" btree (finishtimetrunc, starttimetrunc)\nCheck constraints:\n \"date_check\" CHECK (finishtimetrunc >= starttimetrunc)\n\n Table \"richard.duration\"\n Column | Type | Modifiers\n--------+-----------------------------+-----------\n ts | timestamp without time zone |\n\n> Perhaps you are incurring a datatype conversion cost? \n\nNot that I can tell.\n\n> It seems more likely that the cpu_operator_cost is underestimated,\n\nAs you perdicted, increasing cpu_operator_cost from 0.0025 to 0.025 also\ncauses the planner to use the index on duration.\n\n> which leads me to question what exactly is happening in those\n> comparisons.\n\nYour guess is as good as mine (actually, yours is much better). I can\nput together a reproducable test case if you like..\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n Have you visited our new DNA Portal?\n-------------------------------------------\n",
"msg_date": "Thu, 21 Apr 2005 18:16:45 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When are index scans used over seq scans?"
},
{
"msg_contents": "Richard van den Berg <[email protected]> writes:\n> Tom Lane wrote:\n>> Perhaps you are incurring a datatype conversion cost? \n\n> Not that I can tell.\n\nNo, apparently not. Hmm ... timestamp_cmp_internal is just a couple of\nisnan() checks and one or two floating-point compares. Should be pretty\ndang cheap. Unless isnan() is ridiculously expensive on your hardware?\nMore likely there is some bottleneck that we are not thinking of.\n\nAre the tables in question particularly wide (many columns)?\n\n>> which leads me to question what exactly is happening in those\n>> comparisons.\n\n> Your guess is as good as mine (actually, yours is much better). I can\n> put together a reproducable test case if you like..\n\nI'm thinking it would be interesting to look at a gprof profile of the\nnestloop case. If you can rebuild with profiling and get one, that\nwould be fine, or you can make up a test case that shows the same slow\njoining behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Apr 2005 12:23:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When are index scans used over seq scans? "
},
{
"msg_contents": "Tom Lane wrote:\n> Are the tables in question particularly wide (many columns)?\n\nYes they are. They both have 20 columns. If I cut down the duration\ntable to just 1 column of timestamps, the planner uses the index.\n\nInteresting, so I could store just the timestamps in another table (view\ndoesn't help) to speed up this query.\n\nI am using the debian package. How can I tell if profiling is enabled?\n\nThanks a lot,\n\n-- \nRichard van den Berg, CISSP\n-------------------------------------------\nTrust Factory B.V. | www.dna-portal.net\nBazarstraat 44a | www.trust-factory.com\n2518AK The Hague | Phone: +31 70 3620684\nThe Netherlands | Fax : +31 70 3603009\n-------------------------------------------\n",
"msg_date": "Fri, 22 Apr 2005 11:04:17 +0200",
"msg_from": "Richard van den Berg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: When are index scans used over seq scans?"
}
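One way to try the "narrow side table" idea that closes this thread is sketched below; the new table name is made up, and the original 20-column duration table is left untouched.

    CREATE TABLE duration_ts AS SELECT ts FROM duration;
    CREATE INDEX duration_ts_ts_idx ON duration_ts (ts);
    ANALYZE duration_ts;
    -- then run the join against duration_ts instead of the full-width duration table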
] |
[
{
"msg_contents": "In terms of vendor specific models -\n\nDoes anyone have any good/bad experiences/recommendations for a 4-way\nOpteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal\ndrives) models?\n\nThis is in comparison with the new Dell 6850 (it has PCIexpress, faster\nFSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but\nmuch better than previous 6650s).\n\nThanks,\nAnjan\n\n\n-----Original Message-----\nFrom: William Yu [mailto:[email protected]] \nSent: Wednesday, April 20, 2005 11:10 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)\n\nI posted this link a few months ago and there was some surprise over the\n\ndifference in postgresql compared to other DBs. (Not much surprise in \nOpteron stomping on Xeon in pgsql as most people here have had that \nexperience -- the surprise was in how much smaller the difference was in\n\nother DBs.) If it was across the board +100% in MS-SQL, MySQL, etc -- \nyou can chalk in up to overall better CPU architecture. Most of the time\n\nthough, the numbers I've seen show +0-30% for [insert DB here] and a \nhuge whopping +++++ for pgsql. Why the pronounced preference for \npostgresql, I'm not sure if it was explained fully.\n\nBTW, the Anandtech test compares single CPU systems w/ 1GB of RAM. Go to\n\ndual/quad and SMP Xeon will suffer even more since it has to share a \nfixed amount of FSB/memory bandwidth amongst all CPUs. Xeons also seem \nto suffer more from context-switch storms. Go > 4GB of RAM and the Xeon \nsuffers another hit due to the lack of a 64-bit IOMMU. Devices cannot \nmap to addresses > 4GB which means the OS has to do extra work in \ncopying data from/to > 4GB anytime you have IO. (Although this penalty \nmight exist all the time in 64-bit mode for Xeon if Linux/Windows took \nthe expedient and less-buggy route of using a single method versus \nchecking whether target addresses are > or < 4GB.)\n\n\n\nJeff Frost wrote:\n> On Tue, 19 Apr 2005, J. Andrew Rogers wrote:\n> \n>> I don't know about 2.5x faster (perhaps on specific types of loads), \n>> but the reason Opterons rock for database applications is their \n>> insanely good memory bandwidth and latency that scales much better \n>> than the Xeon. Opterons also have a ccNUMA-esque I/O fabric and two \n>> dedicated on-die memory channels *per processor* -- no shared bus \n>> there, closer to real UNIX server iron than a glorified PC.\n> \n> \n> Thanks J! That's exactly what I was suspecting it might be.\nActually, \n> I found an anandtech benchmark that shows the Opteron coming in at\nclose \n> to 2.0x performance:\n> \n> http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=2\n> \n> It's an Opteron 150 (2.4ghz) vs. Xeon 3.6ghz from August. I wonder if\n\n> the differences are more pronounced with the newer Opterons.\n> \n> -Jeff\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if\nyour\n> joining column's datatypes do not match\n> \n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Wed, 20 Apr 2005 11:46:46 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "Anjan Dave wrote:\n> In terms of vendor specific models -\n> \n> Does anyone have any good/bad experiences/recommendations for a 4-way\n> Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal\n> drives) models?\n> \n> This is in comparison with the new Dell 6850 (it has PCIexpress, faster\n> FSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but\n> much better than previous 6650s).\n\nDell cuts too many corners to be a good server.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 20 Apr 2005 11:50:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "On Wednesday 20 April 2005 17:50, Bruce Momjian wrote:\n> Anjan Dave wrote:\n> > In terms of vendor specific models -\n> >\n> > Does anyone have any good/bad experiences/recommendations for a 4-way\n> > Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal\n> > drives) models?\n> >\n> > This is in comparison with the new Dell 6850 (it has PCIexpress, faster\n> > FSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but\n> > much better than previous 6650s).\n>\n> Dell cuts too many corners to be a good server.\n\nHi\n\nWhich corners do Dell cut compared to the competition ?\n\nThanks\n\nChristian\n",
"msg_date": "Wed, 20 Apr 2005 18:14:12 +0200",
"msg_from": "Christian Sander =?iso-8859-1?q?R=F8snes?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "Anjan,\n\n> Does anyone have any good/bad experiences/recommendations for a 4-way\n> Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal\n> drives) models?\n\nLast I checked, the v40z only takes 5 drives, unless you yank the cd-rom and \nget an extra disk tray. That's the main defect of the model, the second \nbeing its truly phenominal noise level. Other than that (and price) and \nexcellent Opteron machine.\n\nThe HPs are at root pretty good machines -- and take 6 drives, so I expect \nyou're mixed up there. However, they use HP's proprietary RAID controller \nwhich is seriously defective. So you need to factor replacing the RAID \ncontroller into the cost.\n\n> This is in comparison with the new Dell 6850 (it has PCIexpress, faster\n> FSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but\n> much better than previous 6650s).\n\nYes, but you can still expect the 6650 to have 1/2 the performance ... or \nless ... of the above-name models. It:\n1) is Xeon 32-bit\n2) uses a cheap northbridge which makes the Xeon's cache contention even worse\n3) depending on the model and options, may ship with a cheap Adaptec raid card \ninstead of an LSI or other good card\n\nIf all you *need* is 1/2 the performance of an Opteron box, and you can get a \ngood deal, then go for it. But don't be under the illusion that Dell is \ncompetitive with Sun, IBM, HP, Penguin or Microway on servers.\n\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 20 Apr 2005 09:39:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n\n> Last I checked, the v40z only takes 5 drives, unless you yank the cd-rom and \n> get an extra disk tray. That's the main defect of the model, the second \n> being its truly phenominal noise level. Other than that (and price) and \n> excellent Opteron machine.\n\nIncidentally, Sun sells a bunch of v20z and v40z machines on Ebay as some kind\nof marketing strategy. You can pick one up for only a slightly absurd price if\nyou're happy with the configurations listed there. (And if you're in the US).\n\n-- \ngreg\n\n",
"msg_date": "20 Apr 2005 13:10:05 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "> The HPs are at root pretty good machines -- and take 6 drives, so I expect \n> you're mixed up there. However, they use HP's proprietary RAID controller \n> which is seriously defective. So you need to factor replacing the RAID \n> controller into the cost.\n\nDo you have any additional materials on what is defective with their\nraid controllers?\n-- \n\n",
"msg_date": "Wed, 20 Apr 2005 14:01:09 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "On 4/20/05, Anjan Dave <[email protected]> wrote:\n> In terms of vendor specific models -\n> \n> Does anyone have any good/bad experiences/recommendations for a 4-way\n> Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal\n> drives) models?\n\nWe are going with the 90nm HPs for production. They \"feel\" like\nbeefier boxes than the Suns, but the Suns cost a LOT less, IIRC. \nWe're only using the internal drives for the OS. PG gets access to a\nfibre-channel array, HP StorageWorks 3000. I _can't wait_ to get this\nin.\n\nOur dev box is a 130nm DL585 with 16G of RAM and an HP SCSI array, and\nI have absolutely zero complaints. :)\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n",
"msg_date": "Wed, 20 Apr 2005 14:11:47 -0400",
"msg_from": "Mike Rylander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "Quoth [email protected] (Christian Sander R�snes):\n> On Wednesday 20 April 2005 17:50, Bruce Momjian wrote:\n>> Anjan Dave wrote:\n>> > In terms of vendor specific models -\n>> >\n>> > Does anyone have any good/bad experiences/recommendations for a\n>> > 4-way Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5\n>> > internal drives) models?\n>> >\n>> > This is in comparison with the new Dell 6850 (it has PCIexpress,\n>> > faster FSB 667MHz, which doesn't match up with AMD's total IO\n>> > bandwidth, but much better than previous 6650s).\n>>\n>> Dell cuts too many corners to be a good server.\n>\n> Hi\n>\n> Which corners do Dell cut compared to the competition ?\n\nThey seem to be buying the \"cheapest components of the week\" such that\nthey need to customize BIOSes to make them work as opposed to getting\nthe \"Grade A\" stuff that works well out of the box.\n\nWe got a bunch of quad-Xeon boxes in; the MegaRAID controllers took\nplenty o' revisits from Dell folk before they got sorta stable. \n\nDell replaced more SCSI drives on their theory that the problem was\nbad disks than I care to remember. And if they were sufficiently\nsuspicious of the disk drives for that, that tells you that they don't\ntrust the disk they're selling terribly much, which leaves me even\nless reassured...\n-- \noutput = reverse(\"moc.liamg\" \"@\" \"enworbbc\")\nhttp://linuxdatabases.info/info/spreadsheets.html\nWhere do you *not* want to go today? \"Confutatis maledictis, flammis\nacribus addictis\" (<http://www.hex.net/~cbbrowne/msprobs.html>\n",
"msg_date": "Wed, 20 Apr 2005 21:56:03 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon"
}
] |
[
{
"msg_contents": "kewl. \n\nWell, 8k request out of PG kernel might turn into an \"X\"Kb request at\ndisk/OS level, but duly noted. \n\nDid you scan the code for this, or are you pulling this recollection from\nthe cognitive archives? :-)\n\n\n\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Tuesday, April 19, 2005 8:12 PM\nTo: Mohan, Ross\nCc: [email protected]\nSubject: Re: [PERFORM] How to improve db performance with $7K?\n\n\nOn Mon, Apr 18, 2005 at 06:41:37PM -0000, Mohan, Ross wrote:\n> Don't you think \"optimal stripe width\" would be\n> a good question to research the binaries for? I'd\n> think that drives the answer, largely. (uh oh, pun alert)\n> \n> EG, oracle issues IO requests (this may have changed _just_\n> recently) in 64KB chunks, regardless of what you ask for. \n> So when I did my striping (many moons ago, when the Earth \n> was young...) I did it in 128KB widths, and set the oracle \n> \"multiblock read count\" according. For oracle, any stripe size\n> under 64KB=stupid, anything much over 128K/258K=wasteful. \n> \n> I am eager to find out how PG handles all this.\n\nAFAIK PostgreSQL requests data one database page at a time (normally 8k). Of course the OS might do something different.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Wed, 20 Apr 2005 16:17:50 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
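The 8k figure mentioned above is the compile-time block size; a quick way to confirm it on a given server is shown below (recent releases expose it as a read-only setting, and pg_controldata reports the same value for older ones).

    SHOW block_size;  -- 8192 unless the server was built with a non-default BLCKSZ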
] |
[
{
"msg_contents": "There have been some discussions on this list and others in general about Dell's version of RAID cards, and server support, mainly linux support.\n\nBefore I venture into having another vendor in the shop I want to know if there are any dos/don't's about 4-way Opteron offerings from Sun and HP.\n\nDon't want to put the topic on a different tangent, but I would be interested in the discussion of AMD Vs. XEON in terms of actual products available today.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Christian Sander Røsnes [mailto:[email protected]] \nSent: Wednesday, April 20, 2005 12:14 PM\nTo: Bruce Momjian\nCc: [email protected]\nSubject: Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)\n\nOn Wednesday 20 April 2005 17:50, Bruce Momjian wrote:\n> Anjan Dave wrote:\n> > In terms of vendor specific models -\n> >\n> > Does anyone have any good/bad experiences/recommendations for a 4-way\n> > Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal\n> > drives) models?\n> >\n> > This is in comparison with the new Dell 6850 (it has PCIexpress, faster\n> > FSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but\n> > much better than previous 6650s).\n>\n> Dell cuts too many corners to be a good server.\n\nHi\n\nWhich corners do Dell cut compared to the competition ?\n\nThanks\n\nChristian\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n",
"msg_date": "Wed, 20 Apr 2005 12:33:08 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "There have been some discussions on this list and others in general about\nDell's version of RAID cards, and server support, mainly linux support.\n\nI was pretty impressed with the Dell guy. He spent the day with me remotely\nand went through my system 6650 with powervault. Changed my drives from ext3\nto ext2 with no journaling checked all the drivers and such.\n\nI did not see any marked improvement, but I dont think my issues are\nrelated to the hardware.\n\nI am giving up on postgres and three developers two months of work and\ntrying MYSQL.\n\nI have posted several items and not got a response (not that I expect folks\nto drop everything). I want to thank everyone who has been of help and there\nare several.\n\nIt just is running way slow on several of my views. I tried them today in\nMYSQL and found that the MYSQL was beating out my MSSQL.\n\nOn certain items I could get PG to work ok, but it never was faster the\nMSSQL. On certain items it is taking several minutes compared to a few\nseconds on MYSQL. \n\nI really like the environment and feel I have learned a lot in the past few\nmonths, but bottom line for me is speed. We bought a 30K Dell 6650 to get\nbetter performance. I chose PG because MSSQL was 70K to license. I believe\nthe MYSQL will be 250.00 to license for us, but I may choose the 4k platinum\nsupport just to feel safe about having some one to touch base with in the\nevent of an issue.\n\nAgain thanks to everyone who has answered my newb questions and helped me\nget it on the 3 spindles and tweek the install. Commandpromt.com was a big\nhelp and if I wanted to budget a bunch more $ and mostly if I was at liberty\nto share my database with them they may of helped me get through all the\nissues. I am not sure I am walking away feeling real good about postgres,\nbecause it just should not take a rocket scientist to get it to work, and I\nused to think I was fairly smart and could figure stuff out and I hate\nadmitting defeat (especially since we have everything working with postgres\nnow).\n\n",
"msg_date": "Wed, 20 Apr 2005 14:15:29 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Opteron vs Xeon (Was: What to do with 6 disks?)"
},
{
"msg_contents": "Joel,\n\n> I did not see any marked improvement, but I dont think my issues are\n> related to the hardware.\n\nIf you won't believe it, then we certainly can't convince you. AFAIK your bad \nview is a bad query plan made worse by the Dell's hardware problems.\n\n> I am giving up on postgres and three developers two months of work and\n> trying MYSQL.\n\nI'd suggest testing your *whole* application and not just this one query. And \nremember that you need to test InnoDB tables if you want transactions.\n\n>\n> I have posted several items and not got a response (not that I expect folks\n> to drop everything). I want to thank everyone who has been of help and\n> there are several.\n\nHmmm ... I see about 25 responses to some of your posts on this list. \nIncluding ones by some of our head developers. That's more than you'd get \nout of a paid MSSQL support contract, I know from experience.\n\nIf you want anything more, then you'll need a \"do-or-die\" contract with a \nsupport company. If your frustration is because you can't find this kind of \nhelp than I completely understand ... I have a waiting list for performance \ncontracts myself. (and, if you hired me the first thing I'd tell you is to \njunk the Dell)\n\n> I really like the environment and feel I have learned a lot in the past few\n> months, but bottom line for me is speed. We bought a 30K Dell 6650 to get\n> better performance. \n\nWould have been smart to ask on this list *before* buying the Dell, hey? Even \na Google of this mailing list would have been informative.\n\n> I chose PG because MSSQL was 70K to license. I believe \n> the MYSQL will be 250.00 to license for us, but I may choose the 4k\n> platinum support just to feel safe about having some one to touch base with\n> in the event of an issue.\n\nHmmm ... you're willing to pay MySQL $4k but expect the PG community to solve \nall your problems with free advice and a couple $100 with CMD? I sense an \napples vs. barca loungers comparison here ...\n\n> I am not sure I am walking away feeling real good about\n> postgres, because it just should not take a rocket scientist to get it to\n> work, and I used to think I was fairly smart and could figure stuff out and\n> I hate admitting defeat (especially since we have everything working with\n> postgres now).\n\nWhile I understand your frustration (I've been frustrated more than a few \ntimes with issues that stump me on Linux, for example) it's extremely unfair \nto lash out at a community that has provided you a lot of free advice because \nthe advice hasn't fixed everything yet. By my reading, you first raised your \nquery issue 6 days ago. 6 days is not a lot of time for getting *free* \ntroubleshooting help by e-mail. Certainly it's going to take more than 6 days \nto port to MySQL. \n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 20 Apr 2005 11:53:35 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Sorry if you feel I am lashing out at a community.\nJust to say it again, I am very appreciative of all the help everyone has\nsupplied.\n\nI am running on more then just the 4 proc Dell (in fact my tests have been\nmostly on desktops).\n\nI have MSSQL running on a 2 proc dell which until my load has increased\n(over aprx 2 years) it was just fine. I totally agree that there are better\nsolutions based on this lists comments, but I have all Dell hardware now and\nresist trying different vendors just to suit Postgres. I was under the\nimpression there were still issues with 64bit postgres and Linux (or at\nleast were when I purchased). I believed I could make my next aquistion a\nopteron based hardware.\n\nAgain I am not at all trying to critasize any one, so please except my\napology if I some how came across with that attitude. I am very disappointed\nat this point. My views may not be that great (although I am not saying that\neither), but they run ok on MSSQL and appear to run ok on MYSQL.\n\nI wish I did understand what I am doing wrong because I do not wish to\nrevisit engineering our application for MYSQL.\n\nI would of spent more $ with Command, but he does need my data base to help\nme and I am not able to do that.\n\nI agree testing the whole app is the only way to see and unfortunately it is\na time consuming bit. I do not have to spend 4k on MYSQL, that is if I want\nto have their premium support. I can spend $250.00 a server for the\ncommercial license if I find the whole app does run well. I just loaded the\ndata last night and only had time to convert one view this morning. I am\nsure it is something I do not understand and not a problem with postgres. I\nalso am willing to take time to get more knowledgeable, but my time is\nrunning out and I feel honestly stupid.\n\nI have been in the process of converting for over two months and have said\nseveral times these lists are a godsend. \n\nIt was never my intention to make you feel like I was flaming anyone\ninvolved. On the contrary, I feel many have taken time to look at my\nquestions and given excellent advice. I know I check the archives so\nhopefully that time will help others after me. \n\nI may yet find that MYSQL is not a good fit as well. I have my whole app\nconverted at this point and find pg works well for a lot of my usage. \n\nThere are some key reporting views that need to retrieve many rows with many\njoins that just take too long to pull the data. I told my boss just now that\nif I try to de-normalize many of these data sets (like 6 main groups of data\nthat the reporting may work, but as is many of my web pages are timing out\n(these are pages that still work on MSSQL and the 2 proc machine).\n\nThanks again for all the help and know I truly appreciate what time every\none has spent on my issues.\n\nI may find that revisiting the datasets is a way to make PG work, or as you\nmentioned maybe I can get some one with more knowledge to step in locally. I\ndid ask Tom if he knew of anyone, maybe some one else on the list is aware\nof a professional in the Tampa FL area.\n\nRealistically I don't think a 30k$ Dell is a something that needs to be\njunked. I am pretty sure if I got MSSQL running on it, it would outperform\nmy two proc box. I can agree it may not have been the optimal platform. 
My\ndecision is not based solely on the performance on the 4 proc box.\n\nJoel Fradkin\n \n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Wednesday, April 20, 2005 1:54 PM\nTo: Joel Fradkin\nCc: [email protected]\nSubject: Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n\nJoel,\n\n> I did not see any marked improvement, but I don't think my issues are\n> related to the hardware.\n\nIf you won't believe it, then we certainly can't convince you. AFAIK your\nbad \nview is a bad query plan made worse by the Dell's hardware problems.\n\n> I am giving up on postgres and three developers two months of work and\n> trying MYSQL.\n\nI'd suggest testing your *whole* application and not just this one query.\nAnd \nremember that you need to test InnoDB tables if you want transactions.\n\n>\n> I have posted several items and not got a response (not that I expect\nfolks\n> to drop everything). I want to thank everyone who has been of help and\n> there are several.\n\nHmmm ... I see about 25 responses to some of your posts on this list. \nIncluding ones by some of our head developers. That's more than you'd get \nout of a paid MSSQL support contract, I know from experience.\n\nIf you want anything more, then you'll need a \"do-or-die\" contract with a \nsupport company. If your frustration is because you can't find this kind of \nhelp than I completely understand ... I have a waiting list for performance \ncontracts myself. (and, if you hired me the first thing I'd tell you is to \njunk the Dell)\n\n> I really like the environment and feel I have learned a lot in the past\nfew\n> months, but bottom line for me is speed. We bought a 30K Dell 6650 to get\n> better performance. \n\nWould have been smart to ask on this list *before* buying the Dell, hey?\nEven \na Google of this mailing list would have been informative.\n\n> I chose PG because MSSQL was 70K to license. I believe \n> the MYSQL will be 250.00 to license for us, but I may choose the 4k\n> platinum support just to feel safe about having some one to touch base\nwith\n> in the event of an issue.\n\nHmmm ... you're willing to pay MySQL $4k but expect the PG community to\nsolve \nall your problems with free advice and a couple $100 with CMD? I sense an \napples vs. barca loungers comparison here ...\n\n> I am not sure I am walking away feeling real good about\n> postgres, because it just should not take a rocket scientist to get it to\n> work, and I used to think I was fairly smart and could figure stuff out\nand\n> I hate admitting defeat (especially since we have everything working with\n> postgres now).\n\nWhile I understand your frustration (I've been frustrated more than a few \ntimes with issues that stump me on Linux, for example) it's extremely unfair\n\nto lash out at a community that has provided you a lot of free advice\nbecause \nthe advice hasn't fixed everything yet. By my reading, you first raised\nyour \nquery issue 6 days ago. 6 days is not a lot of time for getting *free* \ntroubleshooting help by e-mail. Certainly it's going to take more than 6\ndays \nto port to MySQL. \n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n",
"msg_date": "Wed, 20 Apr 2005 15:52:45 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Joel,\n\n> I have MSSQL running on a 2 proc dell which until my load has increased\n> (over aprx 2 years) it was just fine. I totally agree that there are better\n> solutions based on this lists comments, but I have all Dell hardware now\n> and resist trying different vendors just to suit Postgres. I was under the\n> impression there were still issues with 64bit postgres and Linux (or at\n> least were when I purchased). I believed I could make my next aquistion a\n> opteron based hardware.\n\nYeah, sorry, the Dell stuff is a sore point with me. You can't imagine the \nnumber of conversations I have that go like this:\n\"We're having a severe performance problem with PostgreSQL\"\n\"What hardware/OS are you using?\"\n\"Dell *650 with RHAS 3.0 ....\"\n\nBTW, which Update version is your RHAS? If you're on Update3, you can grab \nmore performance right there by upgrading to Update4.\n\n> Again I am not at all trying to critasize any one, so please except my\n> apology if I some how came across with that attitude. I am very\n> disappointed at this point. My views may not be that great (although I am\n> not saying that either), but they run ok on MSSQL and appear to run ok on\n> MYSQL.\n\nYeah. I think you'll find a few things that are vice-versa. For that \nmatter, I can point to a number of queries we run better than Oracle, and a \nnumber we don't.\n\nYour particular query problem seems to stem from some bad estimates. Can you \npost an EXPLAIN ANALYZE based on all the advice people have given you so far?\n\n> I wish I did understand what I am doing wrong because I do not wish to\n> revisit engineering our application for MYSQL.\n\nI can imagine. \n\n> I would of spent more $ with Command, but he does need my data base to help\n> me and I am not able to do that.\n\nYes. For that matter, it'll take longer to troubleshoot on this list because \nof your security concerns.\n\n> I agree testing the whole app is the only way to see and unfortunately it\n> is a time consuming bit. I do not have to spend 4k on MYSQL, that is if I\n> want to have their premium support. I can spend $250.00 a server for the\n> commercial license if I find the whole app does run well. I just loaded the\n> data last night and only had time to convert one view this morning. I am\n> sure it is something I do not understand and not a problem with postgres. I\n> also am willing to take time to get more knowledgeable, but my time is\n> running out and I feel honestly stupid.\n\nYou're not. You have a real query problem and it will require further \ntroubleshooting to solve. Some of us make a pretty handsome living solving \nthese kinds of problems, it take a lot of expert knowledge.\n\n> It was never my intention to make you feel like I was flaming anyone\n> involved. On the contrary, I feel many have taken time to look at my\n> questions and given excellent advice. I know I check the archives so\n> hopefully that time will help others after me.\n\nWell, I overreacted too. Sorry!\n\n> I may find that revisiting the datasets is a way to make PG work, or as you\n> mentioned maybe I can get some one with more knowledge to step in locally.\n> I did ask Tom if he knew of anyone, maybe some one else on the list is\n> aware of a professional in the Tampa FL area.\n\nWell, Robert Treat is in Florida but I'm pretty sure he's busy full-time.\n\n> Realistically I don't think a 30k$ Dell is a something that needs to be\n> junked. I am pretty sure if I got MSSQL running on it, it would outperform\n> my two proc box. 
I can agree it may not have been the optimal platform. My\n> decision is not based solely on the performance on the 4 proc box.\n\nOh, certainly it's too late to buy a Sunfire or eServer instead. You just \ncould have gotten far more bang for the buck with some expert advice, that's \nall. But don't bother with Dell support any further, they don't really have \nthe knowledge to help you.\n\nSo ... new EXPLAIN ANALYZE ?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 20 Apr 2005 13:22:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Joel Fradkin wrote:\n...\n\n>I would of spent more $ with Command, but he does need my data base to help\n>me and I am not able to do that.\n>\n>\n...\n\nWhat if someone were to write an anonymization script. Something that\nchanges any of the \"data\" of the database, but leaves all of the\nrelational information. It could turn all strings into some sort of\nhashed version, so you don't give out any identifiable information.\nIt could even modify relational entries, as long as it updated both\nends, and this didn't affect the actual performance at all.\n\nI don't think this would be very hard to write. Especially if you can\ngive a list of the tables, and what columns need to be modified.\n\nProbably this would generally be a useful script to have for cases like\nthis where databases are confidential, but need to be tuned by someone else.\n\nWould that be reasonable?\nI would think that by renaming columns, hashing the data in the columns,\nand renaming tables, most of the proprietary information is removed,\nwithout removing the database information.\n\nJohn\n=:->",
"msg_date": "Wed, 20 Apr 2005 15:42:24 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
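A minimal sketch of the kind of anonymization script John describes, assuming a hypothetical table tblcase with confidential text columns (casename, casenotes) and a related table tblcaseaudit; md5() (built in since 7.4) maps each distinct value to a distinct hash, so row counts, equality classes, and join behavior stay roughly realistic while the readable content is lost:

BEGIN;
-- hash the confidential text columns in place
UPDATE tblcase SET casename = md5(casename), casenotes = md5(casenotes);
-- if a text column is also used as a join key, apply the same hash on the other end
UPDATE tblcaseaudit SET casename = md5(casename);
COMMIT;

Run this against a copy of the database, then pg_dump the copy for whoever is doing the tuning.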
{
"msg_contents": "\nOn Apr 20, 2005, at 4:22 PM, Josh Berkus wrote:\n\n>> Realistically I don't think a 30k$ Dell is a something that needs to \n>> be\n>> junked. I am pretty sure if I got MSSQL running on it, it would \n>> outperform\n>> my two proc box. I can agree it may not have been the optimal \n>> platform. My\n>> decision is not based solely on the performance on the 4 proc box.\n>\n> Oh, certainly it's too late to buy a Sunfire or eServer instead. You \n> just\n> could have gotten far more bang for the buck with some expert advice, \n> that's\n> all. But don't bother with Dell support any further, they don't \n> really have\n> the knowledge to help you.\n>\n\nFWIW, I have a $20k Dell box (PE2650 with 14-disk external PowerVault \nRAID enclosure) which I'm phasing out for a dual opteron box because it \ncan't handle the load. It will be re-purposed as a backup system. \nDamn waste of money, but complaining customers can cost more...\n\nTrust me, it is likely your Dell hardware, as moving to the Opteron \nsystem has improved performance tremendously with fewer disks. Same \namount of RAM and other basic configurations. Both have LSI based RAID \ncards, even.\n\n",
"msg_date": "Wed, 20 Apr 2005 17:01:47 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "I did think of something similar just loading the data tables with junk\nrecords and I may visit that idea with Josh.\n\nI did just do some comparisons on timing of a plain select * from tbl where\nindexed column = x and it was considerably slower then both MSSQL and MYSQL,\nso I am still a bit confused. This still might be configuration issue (I ran\non my 2gig desktop and the 8 gig Linux box comparisons were all ran on the\nsame machines as far MSSQL, MYSQL, and Postgres.\nI turned off postgres when running MYSQL and turned off MYSQL when running\npostgres, MSSQL had one of the two running while I tested it.\n\nFor the 360,000 records returned MYSQL did it in 40 seconds first run and 17\nseconds second run.\n\nMSSQL did it in 56 seconds first run and 16 seconds second run.\n\nPostgres was on the second run\nTotal query runtime: 17109 ms.\nData retrieval runtime: 72188 ms.\n331640 rows retrieved.\n\nSo like 89 on the second run.\nThe first run was 147 secs all told.\n\nThese are all on my 2 meg desktop running XP.\nI can post the config. I noticed the postgres was using 70% of the cpu while\nMSSQL was 100%.\n\nJoel Fradkin\n \n\n>I would of spent more $ with Command, but he does need my data base to help\n>me and I am not able to do that.\n>\n>\n...\n\nWhat if someone were to write an anonymization script. Something that\nchanges any of the \"data\" of the database, but leaves all of the\nrelational information. It could turn all strings into some sort of\nhashed version, so you don't give out any identifiable information.\nIt could even modify relational entries, as long as it updated both\nends, and this didn't affect the actual performance at all.\n\nI don't think this would be very hard to write. Especially if you can\ngive a list of the tables, and what columns need to be modified.\n\nProbably this would generally be a useful script to have for cases like\nthis where databases are confidential, but need to be tuned by someone else.\n\nWould that be reasonable?\nI would think that by renaming columns, hashing the data in the columns,\nand renaming tables, most of the proprietary information is removed,\nwithout removing the database information.\n\nJohn\n=:->\n\n\n",
"msg_date": "Wed, 20 Apr 2005 20:26:08 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Joel Fradkin wrote:\n\n>I did think of something similar just loading the data tables with junk\n>records and I may visit that idea with Josh.\n>\n>I did just do some comparisons on timing of a plain select * from tbl where\n>indexed column = x and it was considerably slower then both MSSQL and MYSQL,\n>so I am still a bit confused. This still might be configuration issue (I ran\n>on my 2gig desktop and the 8 gig Linux box comparisons were all ran on the\n>same machines as far MSSQL, MYSQL, and Postgres.\n>I turned off postgres when running MYSQL and turned off MYSQL when running\n>postgres, MSSQL had one of the two running while I tested it.\n>\n>For the 360,000 records returned MYSQL did it in 40 seconds first run and 17\n>seconds second run.\n>\n>MSSQL did it in 56 seconds first run and 16 seconds second run.\n>\n>Postgres was on the second run\n>Total query runtime: 17109 ms.\n>Data retrieval runtime: 72188 ms.\n>331640 rows retrieved.\n>\n>So like 89 on the second run.\n>The first run was 147 secs all told.\n>\n>These are all on my 2 meg desktop running XP.\n>I can post the config. I noticed the postgres was using 70% of the cpu while\n>MSSQL was 100%.\n>\n>Joel Fradkin\n>\n>\nWhy is MYSQL returning 360,000 rows, while Postgres is only returning\n330,000? This may not be important at all, though.\nI also assume you are selecting from a plain table, not a view.\n\nI suppose knowing your work_mem, and shared_buffers settings would be\nuseful.\n\nHow were you measuring \"data retrieval time\"? And how does this compare\nto what you were measuring on the other machines? It might be possible\nthat what you are really measuring is just the time it takes psql to\nload up all the data into memory, and then print it out. And since psql\ndefaults to measuring entry lengths for each column, this may not be\ntruly comparable.\nIt *looks* like it only takes 18s for postgres to get the data, but then\nit is taking 72s to transfer the data to you. That would be network\nlatency, or something like that, not database latency.\nAnd I would say that 18s is very close to 16 or 17 seconds.\n\nI don't know what commands you were issuing, or how you measured,\nthough. You might be using some other interface (like ODBC), which I\ncan't say too much about.\n\nJohn\n=:->",
"msg_date": "Wed, 20 Apr 2005 19:45:39 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
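For reference, the settings John asks about can be read straight from any live session; these are the standard run-time parameter names, so nothing about Joel's schema needs to be assumed:

SHOW shared_buffers;
SHOW work_mem;
SHOW effective_cache_size;

Joel posts his full postgresql.conf a little further down the thread, which answers the same question in more detail.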
{
"msg_contents": "Joel,\n\nOk, please try this:\n\nALTER TABLE tblresponseheader ALTER COLUMN clientnum SET STATISTICS 1000;\nALTER TABLE tblresponseheader ALTER COLUMN locationid SET STATISTICS 1000;\nALTER TABLE tbllocation ALTER COLUMN clientnum SET STATISTICS 1000;\nALTER TABLE tbllocation ALTER COLUMN divisionid SET STATISTICS 1000;\nALTER TABLE tbllocation ALTER COLUMN regionid SET STATISTICS 1000;\nANALYZE tblresponseheader;\nANALYZE tbllocation;\n\nThen run the EXPLAIN ANALYZE again. (on Linux)\n\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 20 Apr 2005 20:32:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
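If it helps to confirm the ALTERs took effect before re-running EXPLAIN ANALYZE, the per-column target is stored in the system catalog; a small check along these lines (same table and column names as in Josh's commands):

SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'tbllocation'::regclass
  AND attname IN ('clientnum', 'divisionid', 'regionid');

A value of 1000 there, followed by the ANALYZE, means the planner has much finer-grained statistics to estimate the join selectivities from.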
{
"msg_contents": "John A Meinel <[email protected]> writes:\n> Joel Fradkin wrote:\n>> Postgres was on the second run\n>> Total query runtime: 17109 ms.\n>> Data retrieval runtime: 72188 ms.\n>> 331640 rows retrieved.\n\n> How were you measuring \"data retrieval time\"?\n\nI suspect he's using pgadmin. We've seen reports before suggesting that\npgadmin can be amazingly slow, eg here\nhttp://archives.postgresql.org/pgsql-performance/2004-10/msg00427.php\nwhere the *actual* data retrieval time as shown by EXPLAIN ANALYZE\nwas under three seconds, but pgadmin claimed the query runtime was 22\nsec and data retrieval runtime was 72 sec.\n\nI wouldn't be too surprised if that time was being spent formatting\nthe data into a table for display inside pgadmin. It is a GUI after\nall, not a tool for pushing vast volumes of data around.\n\nIt'd be interesting to check the runtimes for the same query with\nLIMIT 3000, ie, see if a tenth as much data takes a tenth as much\nprocessing time or not. The backend code should be pretty darn\nlinear in this regard, but maybe pgadmin isn't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Apr 2005 00:35:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon "
},
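A sketch of the scaling test Tom suggests, runnable in psql with \timing; the table and filter value here are placeholders standing in for Joel's real query:

\timing
SELECT * FROM tblresponseheader WHERE clientnum = 'xxx' LIMIT 3000;
SELECT * FROM tblresponseheader WHERE clientnum = 'xxx' LIMIT 30000;
SELECT * FROM tblresponseheader WHERE clientnum = 'xxx';

If the psql times scale roughly linearly with the row count while pgadmin's "data retrieval runtime" does not, the extra time is client-side rendering rather than anything the database is doing.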
{
"msg_contents": "Joel Fradkin wrote:\n> I did think of something similar just loading the data tables with junk\n> records and I may visit that idea with Josh.\n> \n> I did just do some comparisons on timing of a plain select * from tbl where\n> indexed column = x and it was considerably slower then both MSSQL and MYSQL,\n> so I am still a bit confused. This still might be configuration issue (I ran\n> on my 2gig desktop and the 8 gig Linux box comparisons were all ran on the\n> same machines as far MSSQL, MYSQL, and Postgres.\n> I turned off postgres when running MYSQL and turned off MYSQL when running\n> postgres, MSSQL had one of the two running while I tested it.\n> \n> For the 360,000 records returned MYSQL did it in 40 seconds first run and 17\n> seconds second run.\n> \n> MSSQL did it in 56 seconds first run and 16 seconds second run.\n> \n> Postgres was on the second run\n> Total query runtime: 17109 ms.\n> Data retrieval runtime: 72188 ms.\n> 331640 rows retrieved.\n\nBeware!\n From the data, I can see that you're probably using pgAdmin3.\nThe time to execute your query including transfer of all data to the \nclient is 17s in this example, while displaying it (i.e. pure GUI and \nmemory alloc stuff) takes 72s. Execute to a file to avoid this.\n\nRegards,\nAndreas\n",
"msg_date": "Thu, 21 Apr 2005 13:05:46 +0000",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Why is MYSQL returning 360,000 rows, while Postgres is only returning\n330,000? This may not be important at all, though.\nI also assume you are selecting from a plain table, not a view.\n\nYes plain table. Difference in rows is one of the datasets had sears data in\nit. It (speed differences found) is much worse on some of my views, which is\nwhat forced me to start looking at other options.\n\nI suppose knowing your work_mem, and shared_buffers settings would be\nuseful. I have posted my configs, but will add the Tampa to the bottom\nagain. My desktop has\n# - Memory -\n\nshared_buffers = 8000\t\t# min 16, at least max_connections*2, 8KB\neach\nwork_mem = 8000#1024\t\t# min 64, size in KB\nmaintenance_work_mem = 16384\t# min 1024, size in KB\n#max_stack_depth = 2048\t\t# min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 30000#20000\t\t# min max_fsm_relations*16, 6 bytes\neach\nmax_fsm_relations = 1000\t# min 100, ~50 bytes each\n# - Planner Cost Constants -\n\neffective_cache_size = 80000#1000\t# typically 8KB each\nrandom_page_cost = 2\t\t# units are one sequential page fetch cost\n\nHow were you measuring \"data retrieval time\"? And how does this compare\nto what you were measuring on the other machines? It might be possible\nthat what you are really measuring is just the time it takes psql to\nload up all the data into memory, and then print it out. And since psql\ndefaults to measuring entry lengths for each column, this may not be\ntruly comparable.\nIt *looks* like it only takes 18s for postgres to get the data, but then\nit is taking 72s to transfer the data to you. That would be network\nlatency, or something like that, not database latency.\nAnd I would say that 18s is very close to 16 or 17 seconds.\nThis was ran on the machine with database (as was MYSQL and MSSQL).\nThe PG timing was from PGADMIN and the 18 secs was second run, first run was\nSame time to return the data and 70 secs to do the first part like 147 secs\nall told, compared to the 40 seconds first run of MYSQL and 56 Seconds\nMSSQL. MYSQL was done in their query tool, it returns the rows as well and\nMSSQL was done in their query analyzer. All three tools appear to use a\nsimilar approach. Just an FYI doing an explain analyze of my problem view\ntook much longer then actually returning the data in MSSQL and MYSQL. I have\ndone extensive testing with MYSQL (just this table and two of my problem\nviews). I am not using the transactional version, because I need the best\nspeed.\n\n\nI don't know what commands you were issuing, or how you measured,\nthough. You might be using some other interface (like ODBC), which I\ncan't say too much about.\n\nJohn\n=:->\n\nThis is the Linux box config.\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Please note that re-commenting a setting is NOT sufficient to revert it\n# to the default value, unless you restart the postmaster.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. 
Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. If you edit the file on a running system, you have \n# to SIGHUP the postmaster for the changes to take effect, or use \n# \"pg_ctl reload\". Some settings, such as listen_address, require\n# a postmaster shutdown and restart to take effect.\n\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n# data_directory = 'ConfigDir'\t\t# use data in another directory\n#data_directory = '/pgdata/data'\n# hba_file = 'ConfigDir/pg_hba.conf'\t# the host-based authentication file\n# ident_file = 'ConfigDir/pg_ident.conf' # the IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\n# external_pid_file = '(none)'\t\t# write an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\n#listen_addresses = 'localhost'\t# what IP interface(s) to listen on; \n\t\t\t\t# defaults to localhost, '*' = any\n\nlisten_addresses = '*'\nport = 5432\nmax_connections = 100\n\t# note: increasing max_connections costs about 500 bytes of shared\n\t# memory per connection slot, in addition to costs from\nshared_buffers\n\t# and max_locks_per_transaction.\n#superuser_reserved_connections = 2\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\t# octal\n#rendezvous_name = ''\t\t# defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60\t# 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 12288 #5000 min 16, at least max_connections*2, 8KB each\n#work_mem = 1024\t\t# min 64, size in KB\nwork_mem = 16384 # 8192\n#maintenance_work_mem = 16384\t# min 1024, size in KB\n#max_stack_depth = 2048\t\t# min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 100000\t#30000\t# min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1500 #1000\t# min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000\t# min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0\t\t# 0-1000 milliseconds\n#vacuum_cost_page_hit = 1\t# 0-10000 credits\n#vacuum_cost_pagE_miss = 10\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t# 0-10000 credits\n#vacuum_cost_limit = 200\t# 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200\t\t# 10-10000 milliseconds between rounds\n#bgwriter_percent = 1\t\t# 0-100% of dirty buffers in each round\n#bgwriter_maxpages = 100\t# 0-1000 buffers max per round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\nfsync = true\t\t\t# turns forced synchronization on or 
off\nwal_sync_method = open_sync# fsync\t# the default varies across\nplatforms:\n\t\t\t\t# fsync, fdatasync, open_sync, or\nopen_datasync\nwal_buffers = 2048#8\t\t# min 4, 8KB each\n#commit_delay = 0\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t# range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 100 #3\t# in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300\t# range 30-3600, in seconds\n#checkpoint_warning = 30\t# 0 is off, in seconds\n\n# - Archiving -\n\n#archive_command = ''\t\t# command to use to archive a logfile\nsegment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\nenable_mergejoin = false\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\neffective_cache_size = 262144\t#40000 typically 8KB each\n#random_page_cost = 4\t\t# units are one sequential page fetch cost\nrandom_page_cost = 2\n#cpu_tuple_cost = 0.01\t\t# (same)\n#cpu_index_tuple_cost = 0.001\t# (same)\n#cpu_operator_cost = 0.0025\t# (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = true\n#geqo_threshold = 12\n#geqo_effort = 5\t\t# range 1-10\n#geqo_pool_size = 0\t\t# selects default based on effort\n#geqo_generations = 0\t\t# selects default based on effort\n#geqo_selection_bias = 2.0\t# range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 250#10\t# range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8\t# 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr'\t# Valid values are combinations of stderr,\n # syslog and eventlog, depending on\n # platform.\n\n# This is relevant when logging to stderr:\nredirect_stderr = true # Enable capturing of stderr into log files.\n# These are only relevant if redirect_stderr is true:\nlog_directory = 'pg_log' # Directory where log files are written.\n # May be specified absolute or relative to\nPGDATA\nlog_filename = 'postgresql-%a.log' # Log file name pattern.\n # May include strftime() escapes\nlog_truncate_on_rotation = true # If true, any existing log file of the \n # same name as the new log file will be\ntruncated\n # rather than appended to. But such truncation\n # only occurs on time-driven rotation,\n # not on restarts or size-driven rotation.\n # Default is false, meaning append to existing \n # files in all cases.\nlog_rotation_age = 1440 # Automatic rotation of logfiles will happen\nafter\n # so many minutes. 0 to disable.\nlog_rotation_size = 0 # Automatic rotation of logfiles will happen\nafter\n # so many kilobytes of log output. 
0 to\ndisable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice\t# Values, in order of decreasing detail:\n\t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n\t\t\t\t# log, notice, warning, error\n\n#log_min_messages = notice\t# Values, in order of decreasing detail:\n\t\t\t\t# debug5, debug4, debug3, debug2, debug1,\n\t\t\t\t# info, notice, warning, error, log,\nfatal,\n\t\t\t\t# panic\n\n#log_error_verbosity = default\t# terse, default, or verbose messages\n\n#log_min_error_statement = panic # Values in order of increasing severity:\n\t\t\t\t # debug5, debug4, debug3, debug2, debug1,\n\t\t\t\t # info, notice, warning, error,\npanic(off)\n\t\t\t\t \n#log_min_duration_statement = -1 # -1 is disabled, in milliseconds.\n\n#silent_mode = false\t\t # DO NOT USE without syslog or\nredirect_stderr\n\n# - What to Log -\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n#log_connections = false\n#log_disconnections = false\n#log_duration = false\n#log_line_prefix = ''\t\t# e.g. '<%u%%%d> ' \n\t\t\t\t# %u=user name %d=database name\n\t\t\t\t# %r=remote host and port\n\t\t\t\t# %p=PID %t=timestamp %i=command tag\n\t\t\t\t# %c=session id %l=session line number\n\t\t\t\t# %s=session start timestamp %x=transaction\nid\n\t\t\t\t# %q=stop here in non-session processes\n\t\t\t\t# %%='%'\n#log_statement = 'none'\t\t# none, mod, ddl, all\n#log_hostname = false\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = false\n#log_planner_stats = false\n#log_executor_stats = false\n#log_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = true\n#stats_command_string = false\n#stats_block_level = false\n#stats_row_level = false\n#stats_reset_on_server_start = true\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public'\t# schema names\n#default_tablespace = ''\t# a tablespace name, or '' for default\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0\t\t# 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown\t\t# actually, defaults to TZ environment\nsetting\n#australian_timezones = false\n#extra_float_digits = 0\t\t# min -15, max 2\n#client_encoding = sql_ascii\t# actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'en_US.UTF-8'\t\t# locale for system error message\nstrings\nlc_monetary = 'en_US.UTF-8'\t\t# locale for monetary formatting\nlc_numeric = 'en_US.UTF-8'\t\t# locale for number formatting\nlc_time = 'en_US.UTF-8'\t\t\t# locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000\t# in milliseconds\n#max_locks_per_transaction = 64\t# min 
10, ~200*max_connections bytes each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced\t# advanced, extended, or basic\n#sql_inheritance = true\n#default_with_oids = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\n",
"msg_date": "Thu, 21 Apr 2005 09:53:51 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "\nI suspect he's using pgadmin. \nYup I was, but I did try running on the linux box in psql, but it was\nrunning to the screen and took forever because of that.\n\nThe real issue is returning to my app using ODBC is very slow (Have not\ntested the ODBC for MYSQL, MSSQL is ok (the two proc dell is running out of\nsteam but been good until this year when we about doubled our demand by\nadding sears as a client).\n\nUsing odbc to postgres on some of the views (Josh from Command is having me\ndo some very specific testing) is timing out with a 10 minute time limit.\nThese are pages that still respond using MSSQL (this is wehere production is\nusing the duel proc and the test is using the 4 proc).\n\nI have a tool that hooks to all three databases so I can try it with that\nand see if I get different responses.\n\nJoel\n\n",
"msg_date": "Thu, 21 Apr 2005 10:36:22 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Hrm... I was about to suggest that for timing just the query (and not\noutput/data transfer time) using explain analyze, but then I remembered\nthat explain analyze can incur some non-trivial overhead with the timing\ncalls. Is there a way to run the query but have psql ignore the output?\nIf so, you could use \\timing.\n\nIn any case, it's not valid to use pgadmin to time things.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 22 Apr 2005 21:01:51 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
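One way to get what Jim is after in plain psql is to keep \timing on but throw the result rows away with \o, both standard psql meta-commands; the query is a placeholder:

\timing
\o /dev/null
SELECT * FROM tblresponseheader WHERE clientnum = 'xxx';
\o

This removes the cost of drawing the rows on the terminal (psql still fetches and formats every row), so it measures query plus transfer time rather than the bare backend time, while the Time: line still shows up on screen.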
{
"msg_contents": "Jim C. Nasby wrote:\n> Hrm... I was about to suggest that for timing just the query (and not\n> output/data transfer time) using explain analyze, but then I remembered\n> that explain analyze can incur some non-trivial overhead with the timing\n> calls. Is there a way to run the query but have psql ignore the output?\n> If so, you could use \\timing.\n\nWould timing \"SELECT COUNT(*) FROM (query)\" work?\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Fri, 22 Apr 2005 23:30:45 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
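Kevin's wrapper spelled out, with a placeholder inner query; note that the derived table needs an alias:

\timing
SELECT count(*) FROM (
    SELECT * FROM tblresponseheader WHERE clientnum = 'xxx'
) AS sub;

The backend still has to produce every row of the inner query, but only a single number crosses the wire, so network and client formatting drop out of the measurement.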
{
"msg_contents": "Jim, Kevin,\n\n> > Hrm... I was about to suggest that for timing just the query (and not\n> > output/data transfer time) using explain analyze, but then I remembered\n> > that explain analyze can incur some non-trivial overhead with the timing\n> > calls. Is there a way to run the query but have psql ignore the output?\n> > If so, you could use \\timing.\n>\n> Would timing \"SELECT COUNT(*) FROM (query)\" work?\n\nJust \\timing would work fine; PostgreSQL doesn't return anything until it has \nthe whole result set. That's why MSSQL vs. PostgreSQL timing comparisons are \ndeceptive unless you're careful: MSSQL returns the results on block at a \ntime, and reports execution time as the time required to return the *first* \nblock, as opposed to Postgres which reports the time required to return the \nwhole dataset.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 23 Apr 2005 11:27:42 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Josh Berkus wrote:\n> Jim, Kevin,\n> \n> > > Hrm... I was about to suggest that for timing just the query (and not\n> > > output/data transfer time) using explain analyze, but then I remembered\n> > > that explain analyze can incur some non-trivial overhead with the timing\n> > > calls. Is there a way to run the query but have psql ignore the output?\n> > > If so, you could use \\timing.\n> >\n> > Would timing \"SELECT COUNT(*) FROM (query)\" work?\n> \n> Just \\timing would work fine; PostgreSQL doesn't return anything until it has \n> the whole result set. \n\nHmm...does \\timing show the amount of elapsed time between query start\nand the first results handed to it by the database (even if the\ndatabase itself has prepared the entire result set for transmission by\nthat time), or between query start and the last result handed to it by\nthe database?\n\nBecause if it's the latter, then things like server<->client network\nbandwidth are going to affect the results that \\timing shows, and it\nwon't necessarily give you a good indicator of how well the database\nbackend is performing. I would expect that timing SELECT COUNT(*)\nFROM (query) would give you an idea of how the backend is performing,\nbecause the amount of result set data that has to go over the wire is\ntrivial.\n\nEach is, of course, useful in its own right, and you want to be able\nto measure both (so, for instance, you can get an idea of just how\nmuch your network affects the overall performance of your queries).\n\n\n> That's why MSSQL vs. PostgreSQL timing comparisons are \n> deceptive unless you're careful: MSSQL returns the results on block at a \n> time, and reports execution time as the time required to return the *first* \n> block, as opposed to Postgres which reports the time required to return the \n> whole dataset.\n\nInteresting. I had no idea MSSQL did that, but I can't exactly say\nI'm surprised. :-)\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n",
"msg_date": "Tue, 26 Apr 2005 19:46:52 -0700",
"msg_from": "Kevin Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "BTW, http://stats.distributed.net/~decibel/base.log is a test I ran;\nselect count(*) was ~6x faster than explain analyze select *.\n\nOn Tue, Apr 26, 2005 at 07:46:52PM -0700, Kevin Brown wrote:\n> Josh Berkus wrote:\n> > Jim, Kevin,\n> > \n> > > > Hrm... I was about to suggest that for timing just the query (and not\n> > > > output/data transfer time) using explain analyze, but then I remembered\n> > > > that explain analyze can incur some non-trivial overhead with the timing\n> > > > calls. Is there a way to run the query but have psql ignore the output?\n> > > > If so, you could use \\timing.\n> > >\n> > > Would timing \"SELECT COUNT(*) FROM (query)\" work?\n> > \n> > Just \\timing would work fine; PostgreSQL doesn't return anything until it has \n> > the whole result set. \n> \n> Hmm...does \\timing show the amount of elapsed time between query start\n> and the first results handed to it by the database (even if the\n> database itself has prepared the entire result set for transmission by\n> that time), or between query start and the last result handed to it by\n> the database?\n> \n> Because if it's the latter, then things like server<->client network\n> bandwidth are going to affect the results that \\timing shows, and it\n> won't necessarily give you a good indicator of how well the database\n> backend is performing. I would expect that timing SELECT COUNT(*)\n> FROM (query) would give you an idea of how the backend is performing,\n> because the amount of result set data that has to go over the wire is\n> trivial.\n> \n> Each is, of course, useful in its own right, and you want to be able\n> to measure both (so, for instance, you can get an idea of just how\n> much your network affects the overall performance of your queries).\n> \n> \n> > That's why MSSQL vs. PostgreSQL timing comparisons are \n> > deceptive unless you're careful: MSSQL returns the results on block at a \n> > time, and reports execution time as the time required to return the *first* \n> > block, as opposed to Postgres which reports the time required to return the \n> > whole dataset.\n> \n> Interesting. I had no idea MSSQL did that, but I can't exactly say\n> I'm surprised. :-)\n> \n> \n> -- \n> Kevin Brown\t\t\t\t\t [email protected]\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Wed, 27 Apr 2005 16:01:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
}
] |
[
{
"msg_contents": "right, the oracle system uses a second \"low latency\" bus to \nmanage locking information (at the block level) via a\ndistributed lock manager. (but this is slightly different\nalbeit related to a clustered file system and OS-managed\nlocking, eg) \n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Dawid Kuroczko\nSent: Wednesday, April 20, 2005 4:56 AM\nTo: [email protected]\nSubject: Re: [PERFORM] How to improve db performance with $7K?\n\n\nOn 4/19/05, Mohan, Ross <[email protected]> wrote:\n> Clustered file systems is the first/best example that\n> comes to mind. Host A and Host B can both request from diskfarm, eg.\n\nSomething like a Global File System?\n\nhttp://www.redhat.com/software/rha/gfs/\n\n(I believe some other company did develop it some time in the past; hmm, probably the guys doing LVM stuff?).\n\nAnyway the idea is that two machines have same filesystem mounted and they share it. The locking I believe is handled by communication between computers using \"host to host\" SCSI commands.\n\nI never used it, I've only heard about it from a friend who used to work with it in CERN.\n\n Regards,\n Dawid\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n",
"msg_date": "Wed, 20 Apr 2005 16:44:01 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "Dear Postgres Masters:\n\n \n\nWe are using postgres 7.4 in our java application on RedHat linux. The\nJava application connects to Postgres via JDBC. The application goes\nthrough a 'discovery' phase, whereas it adds large amount of data into\npostgres. Typically, we are adding about a million records in various\ntables. The application also issues multiple queries to the database at\nthe same time. We do not delete any records during the discovery phase.\nBoth the java application and the postgres are installed on the same\nmachine. \n\n \n\nAt the beginning, the application is able to add in the order of 100\nrecord per minute. Gradually (after several hours), it slows down to\nless than 10 records per minute. At this time, postgres processes take\nbetween 80-99% of CPU. When we reindex the database, the speed bumps up\nto about 30 records per minute. Now, postgres server takes between\n50-70% CPU.\n\n \n\nWe have the following in the postgresql.conf :\n\n \n\nmax_fsm_pages = 500000\n\nfsync = false\n\n \n\nWe certainly can not live with this kind of performance. I believe\npostgres should be able to handle much larger datasets but I can not\npoint my finger as to what are we doing wrong. Can somebody please point\nme to the right direction.\n\n \n\nWith kind regards,\n\n \n\n-- Shachindra Agarwal. \n\n\n\n\n\n\n\n\n\n\nDear Postgres Masters:\n \nWe are using postgres 7.4 in our java application on RedHat\nlinux. The Java application connects to Postgres via JDBC. The application goes\nthrough a ‘discovery’ phase, whereas it adds large amount of data\ninto postgres. Typically, we are adding about a million records in various\ntables. The application also issues multiple queries to the database at the\nsame time. We do not delete any records during the discovery phase. Both\nthe java application and the postgres are installed on the same machine. \n \nAt the beginning, the application is able to add in the\norder of 100 record per minute. Gradually (after several hours), it slows down\nto less than 10 records per minute. At this time, postgres processes take\nbetween 80-99% of CPU. When we reindex the database, the speed bumps up to\nabout 30 records per minute. Now, postgres server takes between 50-70% CPU.\n \nWe have the following in the postgresql.conf :\n \nmax_fsm_pages = 500000\nfsync = false\n \nWe certainly can not live with this kind of performance. I\nbelieve postgres should be able to handle much larger datasets but I can not\npoint my finger as to what are we doing wrong. Can somebody please point me to\nthe right direction.\n \nWith kind regards,\n \n-- Shachindra Agarwal.",
"msg_date": "Wed, 20 Apr 2005 12:16:05 -0500",
"msg_from": "\"Shachindra Agarwal\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres slowdown question"
},
{
"msg_contents": "Shachindra Agarwal wrote:\n\n> Dear Postgres Masters:\n>\n> We are using postgres 7.4 in our java application on RedHat linux. The \n> Java application connects to Postgres via JDBC. The application goes \n> through a ‘discovery’ phase, whereas it adds large amount of data into \n> postgres. Typically, we are adding about a million records in various \n> tables. The application also issues multiple queries to the database \n> at the same time. We do not delete any records during the discovery \n> phase. Both the java application and the postgres are installed on the \n> same machine.\n>\n> At the beginning, the application is able to add in the order of 100 \n> record per minute. Gradually (after several hours), it slows down to \n> less than 10 records per minute. At this time, postgres processes take \n> between 80-99% of CPU. When we reindex the database, the speed bumps \n> up to about 30 records per minute. Now, postgres server takes between \n> 50-70% CPU.\n>\n> We have the following in the postgresql.conf :\n>\n> max_fsm_pages = 500000\n>\n> fsync = false\n>\n> We certainly can not live with this kind of performance. I believe \n> postgres should be able to handle much larger datasets but I can not \n> point my finger as to what are we doing wrong. Can somebody please \n> point me to the right direction.\n>\n> With kind regards,\n>\n> -- Shachindra Agarwal.\n>\nA few questions first. How are you loading the data? Are you using \nINSERT or COPY? Are you using a transaction, or are you autocommitting \neach row?\n\nYou really need a transaction, and preferably use COPY. Both can help \nperformance a lot. (In some of the tests, single row inserts can be \n10-100x slower than doing it in bulk.)\n\nAlso, it sounds like you have a foreign key issue. That as things fill \nup, the foreign key reference checks are slowing you down.\nAre you using ANALYZE as you go? A lot of times when you only have <1000 \nrows a sequential scan is faster than using an index, and if you don't \ninform postgres that you have more rows, it might still use the old seqscan.\n\nThere are other possibilities, but it would be nice to know about your \ntable layout, and possibly an EXPLAIN ANALYZE of the inserts that are \ngoing slow.\n\nJohn\n=:->\n\nPS> I don't know if JDBC supports COPY, but it certainly should support \ntransactions.",
"msg_date": "Wed, 20 Apr 2005 15:48:16 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres slowdown question"
}
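A rough illustration of the difference John is pointing at, using a made-up table name; the first form pays per-row commit overhead, the second amortizes it over many rows, and COPY avoids most per-statement overhead entirely:

-- slow: one autocommitted INSERT per row
INSERT INTO discovery_items (id, name) VALUES (1, 'a');

-- better: batch many rows per transaction
BEGIN;
INSERT INTO discovery_items (id, name) VALUES (1, 'a');
INSERT INTO discovery_items (id, name) VALUES (2, 'b');
COMMIT;

-- best for bulk loads: COPY from a tab-separated file
COPY discovery_items (id, name) FROM '/tmp/items.tab';

-- keep the planner informed as the tables grow
ANALYZE discovery_items;

From JDBC the middle form corresponds to Connection.setAutoCommit(false) plus addBatch()/executeBatch(); as far as I know the JDBC driver of that era had no COPY interface, which matches John's postscript.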
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Alex Turner [mailto:[email protected]]\n> Sent: Wednesday, April 20, 2005 12:04 PM\n> To: Dave Held\n> Cc: [email protected]\n> Subject: Re: [PERFORM] How to improve db performance with $7K?\n> \n> [...]\n> Lets say we invented a new protocol that including the drive telling\n> the controller how it was layed out at initialization time so that the\n> controller could make better decisions about re-ordering seeks. It\n> would be more cost effective to have that set of electronics just once\n> in the controller, than 8 times on each drive in an array, which would\n> yield better performance to cost ratio. \n\nAssuming that a single controller would be able to service 8 drives \nwithout delays. The fact that you want the controller to have fairly\nintimate knowledge of the drives implies that this is a semi-soft \nsolution requiring some fairly fat hardware compared to firmware that is\nhard-wired for one drive. Note that your controller has to be 8x as fast\nas the on-board drive firmware. There's definitely a balance there, and \nit's not entirely clear to me where the break-even point is.\n\n> Therefore I would suggest it is something that should be investigated. \n> After all, why implemented TCQ on each drive, if it can be handled more\n> effeciently at the other end by the controller for less money?!\n\nBecause it might not cost less. ;) However, I can see where you might \nwant the controller to drive the actual hardware when you have a RAID\nsetup that requires synchronized seeks, etc. But in that case, it's \ndoing one computation for multiple drives, so there really is a win.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Wed, 20 Apr 2005 12:16:25 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "Alex et al., \n\n\nI wonder if thats something to think about adding to Postgresql? A setting for multiblock read count like Oracle (Although \n\n|| I would think so, yea. GMTA: I was just having this micro-chat with Mr. Jim Nasby. \n\nhaving said that I believe that Oracle natively caches pages much more aggressively that postgresql, which allows the OS to do the file caching).\n\n|| Yea...and it can rely on what is likely a lot more robust and nuanced caching algorithm, but...i don't\n know enough (read: anything) about PG's to back that comment up. \n\n\nAlex Turner\nnetEconomist\n\nP.S. Oracle changed this with 9i, you can change the Database block size on a tablespace by tablespace bassis making it smaller for OLTP tablespaces and larger for Warehousing tablespaces (at least I think it's on a tablespace, might be on a whole DB).\n\n||Yes, it's tspace level. \n\n\n\nOn 4/19/05, Jim C. Nasby <[email protected]> wrote:\n> On Mon, Apr 18, 2005 at 06:41:37PM -0000, Mohan, Ross wrote:\n> > Don't you think \"optimal stripe width\" would be\n> > a good question to research the binaries for? I'd\n> > think that drives the answer, largely. (uh oh, pun alert)\n> >\n> > EG, oracle issues IO requests (this may have changed _just_\n> > recently) in 64KB chunks, regardless of what you ask for. So when I \n> > did my striping (many moons ago, when the Earth was young...) I did \n> > it in 128KB widths, and set the oracle \"multiblock read count\" \n> > according. For oracle, any stripe size under 64KB=stupid, anything \n> > much over 128K/258K=wasteful.\n> >\n> > I am eager to find out how PG handles all this.\n> \n> AFAIK PostgreSQL requests data one database page at a time (normally \n> 8k). Of course the OS might do something different.\n> --\n> Jim C. Nasby, Database Consultant [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n> \n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n",
"msg_date": "Wed, 20 Apr 2005 19:01:05 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to improve db performance with $7K?"
}
] |
[
{
"msg_contents": "Stats are updated only after transaction ends. In case you have a really\nlong transaction you need something else. \n\nTo help myself I made a little Perl utility to parse strace output. It\nrecognizes read/write calls, extracts file handle, finds the file name\nusing information in /proc filesystem, then uses oid2name utility to\ntranslate file name to PostgreSQL relation name. See attachment.\n\nIt works well enough for me, but I didn't take time to polish it.\nBasically it works with Linux /proc filesystem layout, expects\nPostgreSQL data directory to be /home/postgres/data and oid2name in\n/usr/lib/postgresql/bin. Usage is pgtrace <pid>.\n\n Tambet\n\n> -----Original Message-----\n> From: Jeff Frost [mailto:[email protected]] \n> Sent: Wednesday, April 20, 2005 7:45 AM\n> To: [email protected]\n> Subject: How to tell what your postgresql server is doing\n> \n> \n> Is there a way to look at the stats tables and tell what is \n> jamming up your \n> postgres server the most? Other than seeing long running \n> queries and watch \n> top, atop, iostat, vmstat in separate xterms...I'm wondering \n> if postgres keeps \n> some stats on what it spends the most time doing or if \n> there's a way to \n> extract that sort of info from other metrics it keeps in the \n> stats table?\n> \n> Maybe a script which polls the stats table and correlates the \n> info with stats \n> about the system in /proc?\n> \n> -- \n> Jeff Frost, Owner \t<[email protected]>\n> Frost Consulting, LLC \thttp://www.frostconsultingllc.com/\n> Phone: 650-780-7908\tFAX: 650-649-1954\n>",
"msg_date": "Wed, 20 Apr 2005 22:27:41 +0300",
"msg_from": "\"Tambet Matiisen\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to tell what your postgresql server is doing"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm having a pretty serious problem with postgresql's performance. \nCurrently, I have a cron task that is set to restart and vacuumdb -faz \nevery six hours. If that doesn't happen, the disk goes from 10% full \nto 95% full within 2 days (and it's a 90GB disk...with the database \nbeing a 2MB download after dump), and the CPU goes from running at \naround a 2% load to a 99+% load right away (the stats look like a \nsquare wave).\n\nSo it's problem-hunting time, I guess. The problem has something to do \nwith the following errors (there are a lot; I'm posting a short sample)\nNOTICE: relation \"pg_depend\" TID 43/27: InsertTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_depend\" TID 43/28: InsertTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_depend\" TID 43/29: InsertTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_depend\" TID 43/30: InsertTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_depend\" TID 43/31: InsertTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_depend\" TID 43/32: InsertTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_type\" TID 17/44: InsertTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_type\" TID 17/45: InsertTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_attribute\" TID 133/11: \nInsertTransactionInProgress 209545 --- can't shrink relation\nNOTICE: relation \"pg_attribute\" TID 133/12: \nInsertTransactionInProgress 209545 --- can't shrink relation\nNOTICE: relation \"pg_attribute\" TID 133/13: \nInsertTransactionInProgress 209545 --- can't shrink relation\nNOTICE: relation \"pg_attribute\" TID 133/14: \nInsertTransactionInProgress 209545 --- can't shrink relation\nNOTICE: relation \"pg_attribute\" TID 133/15: \nInsertTransactionInProgress 209545 --- can't shrink relation\nNOTICE: relation \"pg_attribute\" TID 133/16: \nInsertTransactionInProgress 209545 --- can't shrink relation\nNOTICE: relation \"pg_attribute\" TID 133/17: \nInsertTransactionInProgress 209545 --- can't shrink relation\nNOTICE: relation \"pg_class\" TID 41/18: DeleteTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_class\" TID 41/19: DeleteTransactionInProgress \n209545 --- can't shrink relation\nNOTICE: relation \"pg_class\" TID 41/20: DeleteTransactionInProgress \n209545 --- can't shrink relation\n\nWhen I vacuum full, I can't get rid of these errors unless I restart \nthe database (and then I restart, vacuum full, and everything's fine). \nAnd once I do a successful vacuum full, CPU usage returns to normal, \nand the disk is no longer almost full (back to 10% full). I'm at a loss \nto figure out where the problem is coming from and how to fix it.\n\nMy machine: XServe G5 Dual 2GHz running Mac OS X Server 10.3.9. \nPostgresql 8.0.1\n\nThanks for any responses/ideas/solutions (best of all!),\nRichard\n\n",
"msg_date": "Wed, 20 Apr 2005 12:28:45 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disk filling, CPU filling, renegade inserts and deletes?"
},
{
"msg_contents": "> I'm having a pretty serious problem with postgresql's performance. \n> Currently, I have a cron task that is set to restart and vacuumdb -faz \n> every six hours. If that doesn't happen, the disk goes from 10% full \n> to 95% full within 2 days (and it's a 90GB disk...with the database \n> being a 2MB download after dump), and the CPU goes from running at \n> around a 2% load to a 99+% load right away (the stats look like a \n> square wave).\n\nAre you running frequent queries which use temporary tables?\n\n\n-- \n\n",
"msg_date": "Wed, 20 Apr 2005 15:36:58 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and"
},
{
"msg_contents": "No, I don't think so. I don't think there are any temp table queries \n(and I'll check), but even if there are, site traffic is very low, and \nqueries would be very infrequent.\n\nOn Apr 20, 2005, at 12:36 PM, Rod Taylor wrote:\n\n>> I'm having a pretty serious problem with postgresql's performance.\n>> Currently, I have a cron task that is set to restart and vacuumdb -faz\n>> every six hours. If that doesn't happen, the disk goes from 10% full\n>> to 95% full within 2 days (and it's a 90GB disk...with the database\n>> being a 2MB download after dump), and the CPU goes from running at\n>> around a 2% load to a 99+% load right away (the stats look like a\n>> square wave).\n>\n> Are you running frequent queries which use temporary tables?\n>\n>\n> -- \n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n",
"msg_date": "Wed, 20 Apr 2005 13:05:27 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and"
},
{
"msg_contents": "As a follow-up, I've found a function that used the following code:\n\nCREATE TEMPORARY TABLE results\n(nOrder integer,\npage_id integer,\nname text)\nWITHOUT OIDS\nON COMMIT DROP;\n\nI would assume that the \"WITHOUT OIDS\" would be part of the source of \nthe problem, so I've commented it out.\n\nOn Apr 20, 2005, at 12:36 PM, Rod Taylor wrote:\n\n>> I'm having a pretty serious problem with postgresql's performance.\n>> Currently, I have a cron task that is set to restart and vacuumdb -faz\n>> every six hours. If that doesn't happen, the disk goes from 10% full\n>> to 95% full within 2 days (and it's a 90GB disk...with the database\n>> being a 2MB download after dump), and the CPU goes from running at\n>> around a 2% load to a 99+% load right away (the stats look like a\n>> square wave).\n>\n> Are you running frequent queries which use temporary tables?\n>\n>\n> -- \n>\n\n",
"msg_date": "Wed, 20 Apr 2005 13:43:50 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes?"
},
{
"msg_contents": "Richard Plotkin <[email protected]> writes:\n> I'm having a pretty serious problem with postgresql's performance. \n> Currently, I have a cron task that is set to restart and vacuumdb -faz \n> every six hours. If that doesn't happen, the disk goes from 10% full \n> to 95% full within 2 days (and it's a 90GB disk...with the database \n> being a 2MB download after dump), and the CPU goes from running at \n> around a 2% load to a 99+% load right away (the stats look like a \n> square wave).\n\nQ: what have you got the FSM parameters set to?\n\nQ: what exactly is bloating? Without knowing which tables or indexes\nare growing, it's hard to speculate about the exact causes. Use du and\noid2name, or look at pg_class.relpages after a plain VACUUM.\n\nIt's likely that the real answer is \"you need to vacuum more often\nthan every six hours\", but I'm trying not to jump to conclusions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Apr 2005 16:51:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
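A minimal form of the relpages check suggested above (a sketch added for illustration, not part of the original reply); running it after a plain VACUUM and comparing successive outputs shows which relations are actually growing:

    SELECT relname, relkind, relpages, reltuples
    FROM pg_class
    ORDER BY relpages DESC
    LIMIT 20;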
{
"msg_contents": "Hi Tom,\n\n> Q: what have you got the FSM parameters set to?\n\nHere's from postgresql.conf -- FSM at default settings.\n# - Memory -\n\nshared_buffers = 30400 # min 16, at least max_connections*2, \n8KB each\nwork_mem = 32168 # min 64, size in KB\n#maintenance_work_mem = 16384 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\nmax_files_per_process = 750 #1000 # min 25\n#preload_libraries = ''\n\n\n> Q: what exactly is bloating? Without knowing which tables or indexes\n> are growing, it's hard to speculate about the exact causes. Use du and\n> oid2name, or look at pg_class.relpages after a plain VACUUM.\n\nThis I do not know. I've disabled the cron jobs and will let the \nsystem bloat, then I will gather statistics (I'll give it 12-24 hours).\n\n> It's likely that the real answer is \"you need to vacuum more often\n> than every six hours\", but I'm trying not to jump to conclusions.\n\nThat could be it, except that I would expect the problem to then look \nmore like a gradual increase in CPU usage and a gradual increase in use \nof disk space. Mine could be an invalid assumption, but the system \nhere looks like it goes from no problem to 100% problem within a \nminute.\n\nThanks again!\nRichard\n\n",
"msg_date": "Wed, 20 Apr 2005 14:06:03 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
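As a hedged aside, not part of the original exchange: on 7.4/8.0-era servers a database-wide VACUUM VERBOSE ends with a summary of free space map usage, which is one way to tell whether the default max_fsm_pages shown above is too small. The FSM settings themselves can only be raised in postgresql.conf followed by a restart.

    VACUUM VERBOSE;          -- the closing INFO lines report pages stored vs. total pages needed
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;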
{
"msg_contents": "More info on what is bloating:\n\nIt's only in one database (the one that's most used), and after running \noid2name on the bloated files, the result is (mysteriously) empty. \nHere's the run on the three enormous files:\n\n$ /usr/local/bin/oid2name -d smt -o 160779\n From database \"smt\":\n Filenode Table Name\n----------------------\n\n$ /usr/local/bin/oid2name -d smt -o 65782869\n From database \"smt\":\n Filenode Table Name\n----------------------\n\n$ /usr/local/bin/oid2name -d smt -o 83345634\n From database \"smt\":\n Filenode Table Name\n----------------------\n\nThe file list looks like this (with normal sized files mostly removed):\n1.0G ./106779\n1.0G ./106779.1\n1.0G ./106779.2\n1.0G ./106779.3\n978M ./106779.4\n1.0G ./65782869\n248M ./65782869.1\n 0B ./65782871\n8.0K ./65782873\n780M ./83345634\n 0B ./83345636\n8.0K ./83345638\n\nSo does the empty result mean it's a temporary table? There is one \ntemporary table (in the function previously mentioned) that does get \ncreated and dropped with some regularity.\n\nThanks again,\nRichard\n\nOn Apr 20, 2005, at 2:06 PM, Richard Plotkin wrote:\n\n> Hi Tom,\n>\n>> Q: what have you got the FSM parameters set to?\n>\n> Here's from postgresql.conf -- FSM at default settings.\n> # - Memory -\n>\n> shared_buffers = 30400 # min 16, at least max_connections*2, \n> 8KB each\n> work_mem = 32168 # min 64, size in KB\n> #maintenance_work_mem = 16384 # min 1024, size in KB\n> #max_stack_depth = 2048 # min 100, size in KB\n>\n> # - Free Space Map -\n>\n> #max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes \n> each\n> #max_fsm_relations = 1000 # min 100, ~50 bytes each\n>\n> # - Kernel Resource Usage -\n>\n> max_files_per_process = 750 #1000 # min 25\n> #preload_libraries = ''\n>\n>\n>> Q: what exactly is bloating? Without knowing which tables or indexes\n>> are growing, it's hard to speculate about the exact causes. Use du \n>> and\n>> oid2name, or look at pg_class.relpages after a plain VACUUM.\n>\n> This I do not know. I've disabled the cron jobs and will let the \n> system bloat, then I will gather statistics (I'll give it 12-24 \n> hours).\n>\n>> It's likely that the real answer is \"you need to vacuum more often\n>> than every six hours\", but I'm trying not to jump to conclusions.\n>\n> That could be it, except that I would expect the problem to then look \n> more like a gradual increase in CPU usage and a gradual increase in \n> use of disk space. Mine could be an invalid assumption, but the \n> system here looks like it goes from no problem to 100% problem within \n> a minute.\n>\n> Thanks again!\n> Richard\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if \n> your\n> joining column's datatypes do not match\n>\n\n",
"msg_date": "Thu, 21 Apr 2005 11:38:22 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
{
"msg_contents": "On Thu, Apr 21, 2005 at 11:38:22AM -0700, Richard Plotkin wrote:\n> More info on what is bloating:\n> \n> It's only in one database (the one that's most used), and after running \n> oid2name on the bloated files, the result is (mysteriously) empty. \n> Here's the run on the three enormous files:\n> \n> $ /usr/local/bin/oid2name -d smt -o 160779\n> From database \"smt\":\n> Filenode Table Name\n> ----------------------\n\nTry -f instead of -o ...\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"World domination is proceeding according to plan\" (Andrew Morton)\n",
"msg_date": "Thu, 21 Apr 2005 16:46:18 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes?"
},
{
"msg_contents": "That returned the same result. I also tried oid2name -d smt -x -i -S \nand, separately -s, and also separately, -d with all other databases, \nand none of the databases turned up any listing, in either oid or \nfilenode, for any of these three bloated files. One thing I've noticed \nis that these oids are all extremely large numbers, whereas the rest of \nthe oids in /data/base/* are no higher than 40000 or 50000.\n\nOn Apr 21, 2005, at 1:46 PM, Alvaro Herrera wrote:\n\n> On Thu, Apr 21, 2005 at 11:38:22AM -0700, Richard Plotkin wrote:\n>> More info on what is bloating:\n>>\n>> It's only in one database (the one that's most used), and after \n>> running\n>> oid2name on the bloated files, the result is (mysteriously) empty.\n>> Here's the run on the three enormous files:\n>>\n>> $ /usr/local/bin/oid2name -d smt -o 160779\n>> From database \"smt\":\n>> Filenode Table Name\n>> ----------------------\n>\n> Try -f instead of -o ...\n>\n> -- \n> Alvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n> \"World domination is proceeding according to plan\" (Andrew \n> Morton)\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n",
"msg_date": "Thu, 21 Apr 2005 14:01:36 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes?"
},
{
"msg_contents": "I've also now tried looking at pg_class.relpages. I compared the \nresults before and after vacuum. The results stayed the same, except \nfor five rows that increased after the vacuum. Here is the select on \nthose rows after the vacuum:\n\n relname | relnamespace | reltype | relowner | \nrelam | relfilenode | reltablespace | relpages | reltuples | \nreltoastrelid | reltoastidxid | relhasindex | relisshared | relkind | \nrelnatts | relchecks | reltriggers | relukeys | relfkeys | relrefs | \nrelhasoids | relhaspkey | relhasrules | relhassubclass | relacl\n---------------------------------+--------------+---------+---------- \n+-------+-------------+---------------+----------+----------- \n+---------------+---------------+-------------+-------------+--------- \n+----------+-----------+-------------+----------+----------+--------- \n+------------+------------+-------------+---------------- \n+---------------\n pg_attribute_relid_attnam_index | 11 | 0 | 1 | \n 403 | 16686 | 0 | 292 | 10250 | \n0 | 0 | f | f | i | 2 | \n 0 | 0 | 0 | 0 | 0 | f | f \n | f | f |\n pg_class_oid_index | 11 | 0 | 1 | \n 403 | 16690 | 0 | 18 | 2640 | \n0 | 0 | f | f | i | 1 | \n 0 | 0 | 0 | 0 | 0 | f | f \n | f | f |\n pg_depend_depender_index | 11 | 0 | 1 | \n 403 | 16701 | 0 | 52 | 6442 | \n0 | 0 | f | f | i | 3 | \n 0 | 0 | 0 | 0 | 0 | f | f \n | f | f |\n pg_type_oid_index | 11 | 0 | 1 | \n 403 | 16731 | 0 | 8 | 1061 | \n0 | 0 | f | f | i | 1 | \n 0 | 0 | 0 | 0 | 0 | f | f \n | f | f |\n pg_depend | 11 | 16677 | 1 | \n 0 | 16676 | 0 | 32 | 4200 | \n0 | 0 | t | f | r | 7 | \n 0 | 0 | 0 | 0 | 0 | f | f \n | f | f | {=r/postgres}\n\n\n\nOn Apr 20, 2005, at 1:51 PM, Tom Lane wrote:\n\n> Richard Plotkin <[email protected]> writes:\n>> I'm having a pretty serious problem with postgresql's performance.\n>> Currently, I have a cron task that is set to restart and vacuumdb -faz\n>> every six hours. If that doesn't happen, the disk goes from 10% full\n>> to 95% full within 2 days (and it's a 90GB disk...with the database\n>> being a 2MB download after dump), and the CPU goes from running at\n>> around a 2% load to a 99+% load right away (the stats look like a\n>> square wave).\n>\n> Q: what have you got the FSM parameters set to?\n>\n> Q: what exactly is bloating? Without knowing which tables or indexes\n> are growing, it's hard to speculate about the exact causes. Use du and\n> oid2name, or look at pg_class.relpages after a plain VACUUM.\n>\n> It's likely that the real answer is \"you need to vacuum more often\n> than every six hours\", but I'm trying not to jump to conclusions.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Fri, 22 Apr 2005 09:29:39 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
{
"msg_contents": "If anybody has additional advice on this problem, I would really, \nreally appreciate it...\n\nI updated postgres to 8.0.2, am running vacuumdb -faz every 3 hours, \nand 50 minutes after a vacuum the CPU usage still skyrocketed, and the \ndisk started filling. This time, there is only a single file that is \nspanning multiple GB, but running oid2name again returns no result on \nthe oid or filenode.\n\nWith the increased vacuuming, fixed temp tables, etc., I really am at a \nloss for what's happening, and could really use some additional help.\n\nThank you,\nRichard\n\n",
"msg_date": "Sat, 23 Apr 2005 09:57:20 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
{
"msg_contents": "I also forgot to mention, vacuumdb fails on the command line now with \nthe following error:\nvacuumdb: could not connect to database smt: FATAL: sorry, too many \nclients already\n\nOn Apr 23, 2005, at 9:57 AM, Richard Plotkin wrote:\n\n> If anybody has additional advice on this problem, I would really, \n> really appreciate it...\n>\n> I updated postgres to 8.0.2, am running vacuumdb -faz every 3 hours, \n> and 50 minutes after a vacuum the CPU usage still skyrocketed, and the \n> disk started filling. This time, there is only a single file that is \n> spanning multiple GB, but running oid2name again returns no result on \n> the oid or filenode.\n>\n> With the increased vacuuming, fixed temp tables, etc., I really am at \n> a loss for what's happening, and could really use some additional \n> help.\n>\n> Thank you,\n> Richard\n>\n\n",
"msg_date": "Sat, 23 Apr 2005 09:59:36 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
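A quick check (suggested here for illustration, not taken from the thread) to confirm that the connection limit, rather than vacuum itself, is what is being hit:

    SELECT count(*) AS open_backends FROM pg_stat_activity;
    SHOW max_connections;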
{
"msg_contents": "Richard Plotkin <[email protected]> writes:\n> I updated postgres to 8.0.2, am running vacuumdb -faz every 3 hours, \n> and 50 minutes after a vacuum the CPU usage still skyrocketed, and the \n> disk started filling. This time, there is only a single file that is \n> spanning multiple GB, but running oid2name again returns no result on \n> the oid or filenode.\n\nWhat is the filename exactly (full path)?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Apr 2005 14:06:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
{
"msg_contents": "/usr/local/pgsql/data/base/17234/42791\n/usr/local/pgsql/data/base/17234/42791.1\n/usr/local/pgsql/data/base/17234/42791.2\n/usr/local/pgsql/data/base/17234/42791.3\n/usr/local/pgsql/data/base/17234/42791.4\n/usr/local/pgsql/data/base/17234/42791.5\n/usr/local/pgsql/data/base/17234/42791.6\n/usr/local/pgsql/data/base/17234/42791.7\n/usr/local/pgsql/data/base/17234/42791.8\n/usr/local/pgsql/data/base/17234/42791.9\n/usr/local/pgsql/data/base/17234/42791.10\n/usr/local/pgsql/data/base/17234/42791.11\n\nOn Apr 23, 2005, at 11:06 AM, Tom Lane wrote:\n\n> Richard Plotkin <[email protected]> writes:\n>> I updated postgres to 8.0.2, am running vacuumdb -faz every 3 hours,\n>> and 50 minutes after a vacuum the CPU usage still skyrocketed, and the\n>> disk started filling. This time, there is only a single file that is\n>> spanning multiple GB, but running oid2name again returns no result on\n>> the oid or filenode.\n>\n> What is the filename exactly (full path)?\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Sat, 23 Apr 2005 11:11:38 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
{
"msg_contents": "Richard Plotkin <[email protected]> writes:\n> /usr/local/pgsql/data/base/17234/42791\n> /usr/local/pgsql/data/base/17234/42791.1\n> /usr/local/pgsql/data/base/17234/42791.2\n> /usr/local/pgsql/data/base/17234/42791.3\n> ...\n\nWell, that is certainly a table or index of some kind.\n\nGo into database 17234 --- if you are not certain which one that is, see\n\tselect datname from pg_database where oid = 17234\nand do\n\tselect relname from pg_class where relfilenode = 42791\n\nThe only way I could see for this to not find the table is if the table\ncreation has not been committed yet. Do you have any apps that create\nand fill a table in a single transaction?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Apr 2005 14:17:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
{
"msg_contents": "Hi Tom,\n\nThanks for your responses this morning. I did the select relname, and \nit returned 0 rows. I do have one function that creates a temp table \nand fills it within the same transaction. I'm pasting it below. \nPerhaps the \"ON COMMIT DROP\" is causing problems, and I need to drop \nthe table at the end of the function instead of using ON COMMIT DROP?\n\n--\n-- Name: crumbs(integer, text, boolean); Type: FUNCTION; Schema: public\n--\n\nCREATE FUNCTION crumbs(integer, text, boolean) RETURNS text\n AS $_$DECLARE\n\n\tstarting_page ALIAS FOR $1;\n\t\n\tcurrent_page integer;\n\t\n\tdelimiter text DEFAULT ': ';\n\t\n\twithLinkTags BOOLEAN DEFAULT FALSE;\n\t\n\tpage_id_temp INTEGER;\n\t\n\tpage_name_temp TEXT;\n\t\n\tcurrent_nOrder INTEGER := 1;\n\t\n\tpage_results record;\n\t\n\tpath TEXT DEFAULT '';\n\t\nBEGIN\n\n\tIF starting_page IS NULL\n\tTHEN\n\t\tRETURN NULL;\n\tEND IF;\n\n\tcurrent_page := starting_page;\n\t\n\tIF $2 IS NOT NULL\n\tTHEN\n\t\tdelimiter := $2;\n\tEND IF;\n\t\n\tIF $3 IS NOT NULL\n\tTHEN\n\t\twithLinkTags := $3;\n\tEND IF;\n\t\n\t--Create a table consisting of three columns: nOrder, page_id, name\n\t\n\tCREATE TEMPORARY TABLE results\n\t(nOrder integer,\n\tpage_id integer,\n\tname text)\n\tON COMMIT DROP;\n\n\t--Select the current page into the results table\n\t\n\tSELECT INTO\n\t\tpage_id_temp,\n\t\tpage_name_temp\n\t\t\n\t\tp.page_id,\n\t\tCASE WHEN p.title_abbr IS NOT NULL\n\t\t\tTHEN p.title_abbr\n\t\t\tELSE p.title\n\t\tEND as name\n\t\t\n\tFROM page p\n\t\t\n\tWHERE p.page_id = starting_page;\n\t\n\tIF FOUND\n\tTHEN\n\t\tEXECUTE 'INSERT INTO results (nOrder, page_id, name)\n\t\tVALUES ('\t|| current_nOrder || ','\n\t\t\t\t\t|| page_id_temp || ','\n\t\t\t\t\t|| quote_literal(page_name_temp)\n\t\t|| ')';\n\t\n\t\tcurrent_nOrder := current_nOrder + 1;\n\tEND IF;\n\t\n\t--Loop through results for page parents\n\t\n\tLOOP\n\t\n\t\tSELECT INTO\n\t\t\tpage_id_temp,\n\t\t\tpage_name_temp\n\n\t\t\tparent.page_id as parent_id,\n\t\t\tCASE WHEN parent.title_abbr IS NOT NULL\n\t\t\t\tTHEN parent.title_abbr\n\t\t\t\tELSE parent.title\n\t\t\tEND as name\n\t\t\n\t\tFROM page AS child\n\t\t\n\t\tINNER JOIN page AS parent\n\t\t\tON child.subcat_id = parent.page_id\n\t\t\t\n\t\tWHERE child.page_id = current_page;\n\t\t\n\t\tIF FOUND\n\t\tTHEN\n\t\t\n\t\t\tEXECUTE 'INSERT INTO results (nOrder, page_id, name)\n\t\t\tVALUES ('\t|| current_nOrder || ','\n\t\t\t\t\t\t|| page_id_temp || ','\n\t\t\t\t\t\t|| quote_literal(page_name_temp)\n\t\t\t|| ')';\n\t\t\n\t\t\tcurrent_page = page_id_temp;\n\t\t\t\n\t\t\tcurrent_nOrder := current_nOrder + 1;\n\t\t\t\n\t\tELSE\n\t\t\n\t\t\tEXIT;\n\t\t\n\t\tEND IF;\n\t\n\tEND LOOP;\n\t\n\t\n\tSELECT INTO\n\t\tpage_id_temp,\n\t\tpage_name_temp\n\t\n\t\tc.default_page as parent_id,\n\t\tc.name\n\t\t\t\n\tFROM page p\n\t\t\n\tINNER JOIN category c\n\t\tON c.cat_id = p.cat_id\n\t\t\t\n\tWHERE page_id = starting_page;\n\t\n\tIF FOUND\n\tTHEN\n\t\t\n\t\tEXECUTE 'INSERT INTO results (nOrder, page_id, name)\n\t\tVALUES ('\t|| current_nOrder || ','\n\t\t\t\t\t|| page_id_temp || ','\n\t\t\t\t\t|| quote_literal(page_name_temp)\n\t\t|| ')';\n\t\n\tEND IF;\n\t\n\tFOR page_results IN EXECUTE 'SELECT * FROM results ORDER BY nOrder \nDESC' LOOP\n\t\t\n\t\tIF path = ''\n\t\tTHEN\n\t\t\tIF withLinkTags IS TRUE\n\t\t\tTHEN\n\t\t\t\tpath := '<a href=\"index.php?pid=' || page_results.page_id || '\">';\n\t\t\t\tpath := path || page_results.name;\n\t\t\t\tpath := path || '</a>';\n\t\t\tELSE\n\t\t\t\tpath := page_results.name;\n\t\t\tEND 
IF;\n\t\tELSE\n\t\t\tIF withLinkTags IS TRUE\n\t\t\tTHEN\n\t\t\t\tpath := path || delimiter;\n\t\t\t\tpath := path || '<a href=\"index.php?pid=' || page_results.page_id \n|| '\">';\n\t\t\t\tpath := path || page_results.name;\n\t\t\t\tpath := path || '</a>';\n\t\t\tELSE\n\t\t\t\tpath := path || delimiter || page_results.name;\n\t\t\tEND IF;\n\t\tEND IF;\n\t\t\n\tEND LOOP;\n\t\n\tRETURN path;\n\nEND;$_$\n LANGUAGE plpgsql;\nOn Apr 23, 2005, at 11:17 AM, Tom Lane wrote:\n\n> Richard Plotkin <[email protected]> writes:\n>> /usr/local/pgsql/data/base/17234/42791\n>> /usr/local/pgsql/data/base/17234/42791.1\n>> /usr/local/pgsql/data/base/17234/42791.2\n>> /usr/local/pgsql/data/base/17234/42791.3\n>> ...\n>\n> Well, that is certainly a table or index of some kind.\n>\n> Go into database 17234 --- if you are not certain which one that is, \n> see\n> \tselect datname from pg_database where oid = 17234\n> and do\n> \tselect relname from pg_class where relfilenode = 42791\n>\n> The only way I could see for this to not find the table is if the table\n> creation has not been committed yet. Do you have any apps that create\n> and fill a table in a single transaction?\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Sat, 23 Apr 2005 11:37:30 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
{
"msg_contents": "Richard Plotkin <[email protected]> writes:\n> Thanks for your responses this morning. I did the select relname, and \n> it returned 0 rows. I do have one function that creates a temp table \n> and fills it within the same transaction. I'm pasting it below. \n> Perhaps the \"ON COMMIT DROP\" is causing problems, and I need to drop \n> the table at the end of the function instead of using ON COMMIT DROP?\n\nWell, I think we can conclude that the function is pushing way more\ndata into the temp table than you expect. I am wondering if that loop\nin the middle of the function is turning into an infinite loop --- could\nit be finding some sort of cycle in your page data? You might want to\nadd some RAISE NOTICE commands to the loop so you can track what it's\ndoing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Apr 2005 15:50:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
},
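A sketch of the kind of instrumentation suggested above, written against the variable names in the crumbs() function quoted earlier; the RAISE NOTICE line and the arbitrary 100-iteration cap are debugging additions, not part of the original function, and the fragment is meant to replace only the parent-walking loop:

    LOOP
        RAISE NOTICE 'crumbs: iteration %, current_page %', current_nOrder, current_page;
        IF current_nOrder > 100 THEN
            RAISE EXCEPTION 'crumbs: possible cycle in page data at page %', current_page;
        END IF;
        -- ... existing SELECT INTO parent / INSERT INTO results logic unchanged ...
    END LOOP;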
{
"msg_contents": "Hi Tom,\n\nThanks! That's exactly what it was. There was a discrepancy in the \ndata that turned this into an endless loop. Everything has been \nrunning smoothly since I made a change.\n\nThanks so much,\nRichard\n\nOn Apr 23, 2005, at 12:50 PM, Tom Lane wrote:\n\n> Richard Plotkin <[email protected]> writes:\n>> Thanks for your responses this morning. I did the select relname, and\n>> it returned 0 rows. I do have one function that creates a temp table\n>> and fills it within the same transaction. I'm pasting it below.\n>> Perhaps the \"ON COMMIT DROP\" is causing problems, and I need to drop\n>> the table at the end of the function instead of using ON COMMIT DROP?\n>\n> Well, I think we can conclude that the function is pushing way more\n> data into the temp table than you expect. I am wondering if that loop\n> in the middle of the function is turning into an infinite loop --- \n> could\n> it be finding some sort of cycle in your page data? You might want to\n> add some RAISE NOTICE commands to the loop so you can track what it's\n> doing.\n>\n> \t\t\tregards, tom lane\n>\n\n",
"msg_date": "Sun, 24 Apr 2005 20:13:47 -0700",
"msg_from": "Richard Plotkin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Disk filling, CPU filling, renegade inserts and deletes? "
}
] |
[
{
"msg_contents": "He is running RHAS4, which is the latest 2.6.x kernel from RH. I believe\nit should have done away with the RHAS3.0 Update 3 IO issue.\n\nanjan\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Wednesday, April 20, 2005 4:23 PM\nTo: Joel Fradkin\nCc: [email protected]\nSubject: Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n\nJoel,\n\n> I have MSSQL running on a 2 proc dell which until my load has\nincreased\n> (over aprx 2 years) it was just fine. I totally agree that there are\nbetter\n> solutions based on this lists comments, but I have all Dell hardware\nnow\n> and resist trying different vendors just to suit Postgres. I was under\nthe\n> impression there were still issues with 64bit postgres and Linux (or\nat\n> least were when I purchased). I believed I could make my next\naquistion a\n> opteron based hardware.\n\nYeah, sorry, the Dell stuff is a sore point with me. You can't imagine\nthe \nnumber of conversations I have that go like this:\n\"We're having a severe performance problem with PostgreSQL\"\n\"What hardware/OS are you using?\"\n\"Dell *650 with RHAS 3.0 ....\"\n\nBTW, which Update version is your RHAS? If you're on Update3, you can\ngrab \nmore performance right there by upgrading to Update4.\n\n> Again I am not at all trying to critasize any one, so please except my\n> apology if I some how came across with that attitude. I am very\n> disappointed at this point. My views may not be that great (although I\nam\n> not saying that either), but they run ok on MSSQL and appear to run ok\non\n> MYSQL.\n\nYeah. I think you'll find a few things that are vice-versa. For that \nmatter, I can point to a number of queries we run better than Oracle,\nand a \nnumber we don't.\n\nYour particular query problem seems to stem from some bad estimates.\nCan you \npost an EXPLAIN ANALYZE based on all the advice people have given you so\nfar?\n\n> I wish I did understand what I am doing wrong because I do not wish to\n> revisit engineering our application for MYSQL.\n\nI can imagine. \n\n> I would of spent more $ with Command, but he does need my data base to\nhelp\n> me and I am not able to do that.\n\nYes. For that matter, it'll take longer to troubleshoot on this list\nbecause \nof your security concerns.\n\n> I agree testing the whole app is the only way to see and unfortunately\nit\n> is a time consuming bit. I do not have to spend 4k on MYSQL, that is\nif I\n> want to have their premium support. I can spend $250.00 a server for\nthe\n> commercial license if I find the whole app does run well. I just\nloaded the\n> data last night and only had time to convert one view this morning. I\nam\n> sure it is something I do not understand and not a problem with\npostgres. I\n> also am willing to take time to get more knowledgeable, but my time is\n> running out and I feel honestly stupid.\n\nYou're not. You have a real query problem and it will require further \ntroubleshooting to solve. Some of us make a pretty handsome living\nsolving \nthese kinds of problems, it take a lot of expert knowledge.\n\n> It was never my intention to make you feel like I was flaming anyone\n> involved. On the contrary, I feel many have taken time to look at my\n> questions and given excellent advice. I know I check the archives so\n> hopefully that time will help others after me.\n\nWell, I overreacted too. 
Sorry!\n\n> I may find that revisiting the datasets is a way to make PG work, or\nas you\n> mentioned maybe I can get some one with more knowledge to step in\nlocally.\n> I did ask Tom if he knew of anyone, maybe some one else on the list is\n> aware of a professional in the Tampa FL area.\n\nWell, Robert Treat is in Florida but I'm pretty sure he's busy\nfull-time.\n\n> Realistically I don't think a 30k$ Dell is a something that needs to\nbe\n> junked. I am pretty sure if I got MSSQL running on it, it would\noutperform\n> my two proc box. I can agree it may not have been the optimal\nplatform. My\n> decision is not based solely on the performance on the 4 proc box.\n\nOh, certainly it's too late to buy a Sunfire or eServer instead. You\njust \ncould have gotten far more bang for the buck with some expert advice,\nthat's \nall. But don't bother with Dell support any further, they don't really\nhave \nthe knowledge to help you.\n\nSo ... new EXPLAIN ANALYZE ?\n\n-- \n\n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n",
"msg_date": "Wed, 20 Apr 2005 16:22:19 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
}
] |
[
{
"msg_contents": "Thanks for the note. Please see my responses below:\n\n-----Original Message-----\nFrom: John A Meinel [mailto:[email protected]] \nSent: Wednesday, April 20, 2005 3:48 PM\nTo: Shachindra Agarwal\nCc: [email protected]\nSubject: Re: [PERFORM] postgres slowdown question\n\nShachindra Agarwal wrote:\n\n> Dear Postgres Masters:\n>\n> We are using postgres 7.4 in our java application on RedHat linux. The\n\n> Java application connects to Postgres via JDBC. The application goes \n> through a 'discovery' phase, whereas it adds large amount of data into\n\n> postgres. Typically, we are adding about a million records in various \n> tables. The application also issues multiple queries to the database \n> at the same time. We do not delete any records during the discovery \n> phase. Both the java application and the postgres are installed on the\n\n> same machine.\n>\n> At the beginning, the application is able to add in the order of 100 \n> record per minute. Gradually (after several hours), it slows down to \n> less than 10 records per minute. At this time, postgres processes take\n\n> between 80-99% of CPU. When we reindex the database, the speed bumps \n> up to about 30 records per minute. Now, postgres server takes between \n> 50-70% CPU.\n>\n> We have the following in the postgresql.conf :\n>\n> max_fsm_pages = 500000\n>\n> fsync = false\n>\n> We certainly can not live with this kind of performance. I believe \n> postgres should be able to handle much larger datasets but I can not \n> point my finger as to what are we doing wrong. Can somebody please \n> point me to the right direction.\n>\n> With kind regards,\n>\n> -- Shachindra Agarwal.\n>\nA few questions first. How are you loading the data? Are you using \nINSERT or COPY? Are you using a transaction, or are you autocommitting \neach row?\n\nYou really need a transaction, and preferably use COPY. Both can help \nperformance a lot. (In some of the tests, single row inserts can be \n10-100x slower than doing it in bulk.)\n\n>> We are using JDBC which supports 'inserts' and 'transactions'. We are\nusing both. The business logic adds one business object at a time. Each\nobject is added within its own transaction. Each object add results in 5\nrecords in various tables in the the database. So, a commit is performed\nafter every 5 inserts.\n\nAlso, it sounds like you have a foreign key issue. That as things fill \nup, the foreign key reference checks are slowing you down.\nAre you using ANALYZE as you go? A lot of times when you only have <1000\n\nrows a sequential scan is faster than using an index, and if you don't \ninform postgres that you have more rows, it might still use the old\nseqscan.\n\n>> This could be the issue. I will start 'analyze' in a cron job. I will\nupdate you with the results.\n\nThere are other possibilities, but it would be nice to know about your \ntable layout, and possibly an EXPLAIN ANALYZE of the inserts that are \ngoing slow.\n\nJohn\n=:->\n\nPS> I don't know if JDBC supports COPY, but it certainly should support \ntransactions.\n\n\n",
"msg_date": "Wed, 20 Apr 2005 18:16:28 -0500",
"msg_from": "\"Shachindra Agarwal\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres slowdown question"
},
{
"msg_contents": "Shachindra Agarwal wrote:\n\n>Thanks for the note. Please see my responses below:\n>\n>\n...\n\n>\n>\n>>>We are using JDBC which supports 'inserts' and 'transactions'. We are\n>>>\n>>>\n>using both. The business logic adds one business object at a time. Each\n>object is added within its own transaction. Each object add results in 5\n>records in various tables in the the database. So, a commit is performed\n>after every 5 inserts.\n>\n>\n>\nWell, 5 inserts per commit is pretty low. It would be nice to see more\nlike 100 inserts per commit. Would it be possible during the \"discovery\"\nphase to put the begin/commit logic a little bit higher?\nRemember, each COMMIT requires at least one fsync. (I realize you have\nfsync off for now). But commit is pretty expensive.\n\n>Also, it sounds like you have a foreign key issue. That as things fill\n>up, the foreign key reference checks are slowing you down.\n>Are you using ANALYZE as you go? A lot of times when you only have <1000\n>\n>rows a sequential scan is faster than using an index, and if you don't\n>inform postgres that you have more rows, it might still use the old\n>seqscan.\n>\n>\n>\n>>>This could be the issue. I will start 'analyze' in a cron job. I will\n>>>\n>>>\n>update you with the results.\n>\n>There are other possibilities, but it would be nice to know about your\n>table layout, and possibly an EXPLAIN ANALYZE of the inserts that are\n>going slow.\n>\n>John\n>=:->\n>\n>PS> I don't know if JDBC supports COPY, but it certainly should support\n>transactions.\n>\n>\nLet us know if ANALYZE helps. If you are not deleting or updating\nanything, you probably don't need to do VACUUM ANALYZE, but you might\nthink about it. It is a little more expensive since it has to go to\nevery tuple, rather than just a random sampling.\n\nJohn\n=:->",
"msg_date": "Wed, 20 Apr 2005 18:35:34 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres slowdown question"
}
] |
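In SQL terms, the batching advice above amounts to something like the following sketch; the table and column names are invented for illustration, and a JDBC client would issue the same statements with setAutoCommit(false) plus addBatch/executeBatch:

    BEGIN;
    INSERT INTO discovered_object (name, kind) VALUES ('host-001', 'host');
    INSERT INTO discovered_object (name, kind) VALUES ('host-002', 'host');
    -- ... on the order of 100 inserts per COMMIT rather than 5 ...
    COMMIT;

    -- Bulk alternative: COPY loads many rows in a single statement.
    COPY discovered_object (name, kind) FROM STDIN;   -- then stream tab-separated rows, ending with \.

    ANALYZE discovered_object;   -- keep planner estimates current while the tables grow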
[
{
"msg_contents": "Hi,\n\nI have a series of tables with identical structure. Some contain a few\nthousand rows and some contain 3,000,000 rows. Another applicate writes\nthe rows and my applicate reads then just by selecting where pk >\nlast_seen_pk limit 2000.\n\nI've found that one of the tables, when selecting from it that one of\nthe tables is many times slower than the others. \n\nFor instance when reading data in batches of 2000 rows, it seems to take\n26 seconds to query from dave_data_update_events with 1593600, but only\n1 or two seconds to query from jane_data_update_events with 3100000\nrows!\n\nThis is ther SQL used....\n\n\n|\n|select \n| events.event_id, ctrl.real_name, events.tsds, events.value, \n| events.lds, events.correction, ctrl.type, ctrl.freq \n|from dave_data_update_events events, dave_control ctrl \n|where events.obj_id = ctrl.obj_id and \n|events.event_id > 32128893::bigint \n|order by events.event_id \n|limit 2000\n|\n\nHere is the structure of the tables...\n\n|\n|CREATE TABLE dave_control (\n| obj_id numeric(6,0) NOT NULL,\n| real_name character varying(64) NOT NULL,\n| \"type\" numeric(2,0) NOT NULL,\n| freq numeric(2,0) NOT NULL\n|);\n|\n|CREATE TABLE dave_data_update_events (\n| lds numeric(13,0) NOT NULL,\n| obj_id numeric(6,0) NOT NULL,\n| tsds numeric(13,0) NOT NULL,\n| value character varying(22) NOT NULL,\n| correction numeric(1,0) NOT NULL,\n| delta_lds_tsds numeric(13,0) NOT NULL,\n| event_id bigserial NOT NULL\n|);\n|\n|CREATE UNIQUE INDEX dave_control_obj_id_idx ON dave_control USING btree\n(obj_id);\n|ALTER TABLE dave_control CLUSTER ON dave_control_obj_id_idx;\n|\n|CREATE UNIQUE INDEX dave_control_real_name_idx ON dave_control USING\nbtree (real_name);\n|\n|CREATE INDEX dave_data_update_events_lds_idx ON dave_data_update_events\nUSING btree (lds);\n|\n|CREATE INDEX dave_data_update_events_obj_id_idx ON\ndave_data_update_events USING btree (obj_id);\n|\n|ALTER TABLE ONLY dave_control\n| ADD CONSTRAINT dave_control_obj_id_key UNIQUE (obj_id);\n|\n|ALTER TABLE ONLY dave_control\n| ADD CONSTRAINT dave_control_real_name_key UNIQUE (real_name);\n|\n|ALTER TABLE ONLY dave_data_update_events\n| ADD CONSTRAINT dave_data_update_events_event_id_key UNIQUE\n(event_id);\n|\n\nThere are several pairs of tables, but with names like rod, jane,\nfredie, etc.. instead of dave.\nThe first thing to note about the scheme (not designed by me) is that\nthe control table is clustered on obj_id, but the data_update_events\ntable is not clustered. Does that mean the rows will be stored in order\nof insert? That might be ok, because data_update_events table is like a\nqueue and I read it in the order the new rows are inserted.\n\nWhat also seems weird to me is that the control table has some unique\nindexes created on it, but the data_upate_events table just has a unique\nconstraint. 
Will postgres use an index in the background to enforce\nthis constraint?\n\nWhen looking at the indexes on the all the tables in DbVisualiser my\ncolleague noticed that the cardinality of the indexes on the rod, jane\nand fredie tables was consistent, but for dave the cardinality was\nstrange...\n\n|\n|SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE\nrelname LIKE 'dave_data%';\n|\n|relname relkind reltuples relpages\n|======================================= ======= ========= ========\n|dave_data_update_events r 1593600.0 40209\n|dave_data_update_events_event_id_keyi1912320.0 29271\n|dave_data_update_events_event_id_seqS 1.0 1\n|dave_data_update_events_lds_idx i 1593600.0 6139\n|dave_data_update_events_obj_id_idx i 1593600.0 6139\n|iso_pjm_data_update_events_obj_id_idxi1593600.0 6139\n|\n\nNote that there are only 1593600 rows in the table, so why the 1912320\nfigure?\n\nOf course I checked that the row count was correct...\n\n|\n|EXPLAIN ANALYZE \n|select count(*) from iso_pjm_data_update_events\n|\n|QUERY PLAN\n|Aggregate (cost=60129.00..60129.00 rows=1 width=0) (actual\ntime=35933.292..35933.293 rows=1 loops=1)\n| -> Seq Scan on iso_pjm_data_update_events (cost=0.00..56145.00\nrows=1593600 width=0) (actual time=0.213..27919.497 rows=1593600\nloops=1)\n|Total runtime: 35933.489 ms\n|\n\nand...\n\n|\n|select count(*) from iso_pjm_data_update_events\n|\n|count\n|1593600\n|\n\nso it's not that there are any undeleted rows lying around.\n\nSo any comments on the index structure? Any ideas why the cardinality\nof the index is greater than the number of rows in the table? Was it\nbecause the table used to be larger?\n\nAlso any ideas on how else to track down the big performance difference\nbetween tables of the same structure?\n\n\n",
"msg_date": "Thu, 21 Apr 2005 11:13:10 +0100",
"msg_from": "\"David Roussel\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How can an index be larger than a table"
},
{
"msg_contents": "David,\n\n> What also seems weird to me is that the control table has some unique\n> indexes created on it, but the data_upate_events table just has a unique\n> constraint. Will postgres use an index in the background to enforce\n> this constraint?\n\nIf you somehow have a unique constraint without a unique index, something is \nseriously broken. I suspect hacking of system tables.\n\nOtherwise, it sounds like you have index bloat due to mass deletions. Run \nREINDEX, or, preferably, VACUUM FULL and then REINDEX.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 21 Apr 2005 10:31:51 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can an index be larger than a table"
},
{
"msg_contents": "[email protected] (Josh Berkus) writes:\n> David,\n>\n>> What also seems weird to me is that the control table has some unique\n>> indexes created on it, but the data_upate_events table just has a unique\n>> constraint. �Will postgres use an index in the background to enforce\n>> this constraint?\n>\n> If you somehow have a unique constraint without a unique index, something is \n> seriously broken. I suspect hacking of system tables.\n>\n> Otherwise, it sounds like you have index bloat due to mass deletions. Run \n> REINDEX, or, preferably, VACUUM FULL and then REINDEX.\n\nThere is in a sense no \"best order\" for this; VACUUM FULL will wind up\nfurther injuring the indices when it reorganizes the table, which\nmeans that whether you do it first or last, you'll have \"index injury.\"\n\nThis actually seems a plausible case for CLUSTER. (And as Douglas &\nDouglas says, \"You can become a hero by using CLUSTER.\" :-))\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www.ntlug.org/~cbbrowne/sap.html\nRules of the Evil Overlord #78. \"I will not tell my Legions of Terror\n\"And he must be taken alive!\" The command will be: ``And try to take\nhim alive if it is reasonably practical.''\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Thu, 21 Apr 2005 13:51:18 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can an index be larger than a table"
}
] |
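For concreteness, the two maintenance paths discussed above look like this against the table from the thread; the choice of the event_id index for CLUSTER is just an example, since the rows are read back in event_id order:

    REINDEX TABLE dave_data_update_events;

    -- or rewrite the heap and rebuild all of its indexes in one pass:
    CLUSTER dave_data_update_events_event_id_key ON dave_data_update_events;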
[
{
"msg_contents": "Hi everybody,\n\nOne of our clients was using SQL-Server and decided to switch to\nPostgreSQL 8.0.1.\n\nHardware: Dual processor Intel(R) Xeon(TM) CPU 3.40GHz\nOS: Enterprise Linux with 2.6.9-5 SMP kernel\nFilesystem: ext3\nSHMMAX: $ cat /proc/sys/kernel/shmmax\n6442450944 <--- beleive that's ~6.5 GB, total ram is 8GB\nDatabase: 15GB in size with a few tables with over 80 million rows.\n\nHere is a snippit from the output of \nSELECT oid , relname, relpages, reltuples \n FROM pg_class ORDER BY relpages DESC;\n oid | relname | relpages | reltuples \n-----------+---------------------------------+----------+-------------\n 16996 | CurrentAusClimate | 474551 | 8.06736e+07\n 16983 | ClimateChangeModel40 | 338252 | 5.31055e+07\n 157821816 | PK_CurrentAusClimate | 265628 | 8.06736e+07\n 157835995 | idx_climateid | 176645 | 8.06736e+07\n 157835996 | idx_ausposnum | 176645 | 8.06736e+07\n 157835997 | idx_climatevalue \t\t | 176645 | 8.06736e+07\n 157821808 | PK_ClimateModelChange_40 | 174858 | 5.31055e+07\n 157821788 | IX_iMonth001 | 116280 | 5.31055e+07\n 157821787 | IX_ClimateId | 116280 | 5.31055e+07\n 157821786 | IX_AusPosNumber | 116280 | 5.31055e+07\n 17034 | NeighbourhoodTable | 54312 | 1.00476e+07\n 157821854 | PK_NeighbourhoodTable | 27552 | 1.00476e+07\n 157821801 | IX_NeighbourhoodId | 22002 | 1.00476e+07\n 157821800 | IX_NAusPosNumber | 22002 | 1.00476e+07\n 157821799 | IX_AusPosNumber006 | 22002 | 1.00476e+07\n[...]\n\nTo test the performance of the database we ran one of the most demanding\nqueries that exist with the following embarrassing results:\n\nQuery Execution time on:\nSQL-Server (dual processor xeon) 3min 11sec\nPostgreSQL (SMP IBM Linux server) 5min 30sec\n\nNow I have not touch the $PGDATA/postgresql.conf (As I know very little \nabout memory tuning) Have run VACCUM & ANALYZE.\n\nThe client understands that they may not match the performance for a\nsingle query as there is no multithreading. So they asked me to\ndemonstrate the benefits of Postgresql's multiprocessing capabilities.\n\nTo do that I modified the most demanding query to create a second query\nand ran them in parallel:\n\n$ time ./run_test1.sh\n$ cat ./run_test1.sh\n/usr/bin/time -p psql -f ./q1.sql ausclimate > q1.out 2>q1.time &\n/usr/bin/time -p psql -f ./q2.sql ausclimate > q2.out 2>q2.time\n\nand the time taken is *twice* that for the original. The modification was \nminor. The queries do make use of both CPUs:\n\n 2388 postgres 16 0 79640 15m 11m R 80.9 0.2 5:05.81 postmaster\n 2389 postgres 16 0 79640 15m 11m R 66.2 0.2 5:04.25 postmaster\n\nBut I can't understand why there's no performance improvement and infact\nthere seems to be no benefit of multiprocessing. Any ideas? I don't know\nenough about the locking procedures employed by postgres but one would\nthink this shouldn't be and issue with read-only queries.\n\nPlease don't hesitate to ask me for more info like, the query or the\noutput of explain, or stats on memory usage. I just wanted to keep this \nshort and provide more info as the cogs start turning :-)\n\nThanks & Regards\nShoaib\n\n\n",
"msg_date": "Thu, 21 Apr 2005 21:49:53 +1000 (EST)",
"msg_from": "\"Shoaib Burq (VPAC)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "two queries and dual cpu (perplexed)"
},
{
"msg_contents": "Shoaib Burq (VPAC) schrieb:\n\n>Hi everybody,\n>\n>One of our clients was using SQL-Server and decided to switch to\n>PostgreSQL 8.0.1.\n>\n>Hardware: Dual processor Intel(R) Xeon(TM) CPU 3.40GHz\n>OS: Enterprise Linux with 2.6.9-5 SMP kernel\n>Filesystem: ext3\n>SHMMAX: $ cat /proc/sys/kernel/shmmax\n>6442450944 <--- beleive that's ~6.5 GB, total ram is 8GB\n>Database: 15GB in size with a few tables with over 80 million rows.\n>\n>Here is a snippit from the output of \n>SELECT oid , relname, relpages, reltuples \n> FROM pg_class ORDER BY relpages DESC;\n> oid | relname | relpages | reltuples \n>-----------+---------------------------------+----------+-------------\n> 16996 | CurrentAusClimate | 474551 | 8.06736e+07\n> 16983 | ClimateChangeModel40 | 338252 | 5.31055e+07\n> 157821816 | PK_CurrentAusClimate | 265628 | 8.06736e+07\n> 157835995 | idx_climateid | 176645 | 8.06736e+07\n> 157835996 | idx_ausposnum | 176645 | 8.06736e+07\n> 157835997 | idx_climatevalue \t\t | 176645 | 8.06736e+07\n> 157821808 | PK_ClimateModelChange_40 | 174858 | 5.31055e+07\n> 157821788 | IX_iMonth001 | 116280 | 5.31055e+07\n> 157821787 | IX_ClimateId | 116280 | 5.31055e+07\n> 157821786 | IX_AusPosNumber | 116280 | 5.31055e+07\n> 17034 | NeighbourhoodTable | 54312 | 1.00476e+07\n> 157821854 | PK_NeighbourhoodTable | 27552 | 1.00476e+07\n> 157821801 | IX_NeighbourhoodId | 22002 | 1.00476e+07\n> 157821800 | IX_NAusPosNumber | 22002 | 1.00476e+07\n> 157821799 | IX_AusPosNumber006 | 22002 | 1.00476e+07\n>[...]\n>\n>To test the performance of the database we ran one of the most demanding\n>queries that exist with the following embarrassing results:\n>\n>Query Execution time on:\n>SQL-Server (dual processor xeon) 3min 11sec\n>PostgreSQL (SMP IBM Linux server) 5min 30sec\n>\n>Now I have not touch the $PGDATA/postgresql.conf (As I know very little \n>about memory tuning) Have run VACCUM & ANALYZE.\n>\n>The client understands that they may not match the performance for a\n>single query as there is no multithreading. So they asked me to\n>demonstrate the benefits of Postgresql's multiprocessing capabilities.\n>\n>To do that I modified the most demanding query to create a second query\n>and ran them in parallel:\n>\n>$ time ./run_test1.sh\n>$ cat ./run_test1.sh\n>/usr/bin/time -p psql -f ./q1.sql ausclimate > q1.out 2>q1.time &\n>/usr/bin/time -p psql -f ./q2.sql ausclimate > q2.out 2>q2.time\n>\n>and the time taken is *twice* that for the original. The modification was \n>minor. The queries do make use of both CPUs:\n>\n> 2388 postgres 16 0 79640 15m 11m R 80.9 0.2 5:05.81 postmaster\n> 2389 postgres 16 0 79640 15m 11m R 66.2 0.2 5:04.25 postmaster\n>\n>But I can't understand why there's no performance improvement and infact\n>there seems to be no benefit of multiprocessing. Any ideas? I don't know\n>enough about the locking procedures employed by postgres but one would\n>think this shouldn't be and issue with read-only queries.\n>\n>Please don't hesitate to ask me for more info like, the query or the\n>output of explain, or stats on memory usage. 
I just wanted to keep this \n>short and provide more info as the cogs start turning :-)\n>\n>Thanks & Regards\n>Shoaib\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> \n>\nI think you should post the SQL-Statement and EXPLAIN ANALYSE - Output \nhere to get a usefull awnser.\n(EXPLAIN ANALYSE SELECT * FROM x WHERE ---)\n\nDaniel\n",
"msg_date": "Thu, 21 Apr 2005 14:19:06 +0200",
"msg_from": "Daniel Schuchardt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
{
"msg_contents": "\nOn Apr 21, 2005, at 7:49 AM, Shoaib Burq (VPAC) wrote:\n\n> Now I have not touch the $PGDATA/postgresql.conf (As I know very little\n> about memory tuning) Have run VACCUM & ANALYZE.\n>\nYou should really, really bump up shared_buffers and given you have 8GB \nof ram this query would likely benefit from more work_mem.\n\n> and the time taken is *twice* that for the original. The modification \n> was\n> minor. The queries do make use of both CPUs:\n>\nIs this an IO intensive query? If running both in parellel results in \n2x the run time and you have sufficient cpus it would (to me) indicate \nyou don't have enough IO bandwidth to satisfy the query.\n\nCan we see an explain analyze of the query? Could be a bad plan and a \nbad plan will never give good performance.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Thu, 21 Apr 2005 08:24:15 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
{
"msg_contents": "On Thu, 21 Apr 2005, Jeff wrote:\n\n>\n> On Apr 21, 2005, at 7:49 AM, Shoaib Burq (VPAC) wrote:\n>\n> > Now I have not touch the $PGDATA/postgresql.conf (As I know very little\n> > about memory tuning) Have run VACCUM & ANALYZE.\n> >\n> You should really, really bump up shared_buffers and given you have 8GB\n> of ram this query would likely benefit from more work_mem.\n\nI'd recommend shared_buffers = 10600. Its possible that work_mem in the\nhundreds of megabytes might have a good impact, but its hard to say\nwithout seeing the EXPLAIN ANALYZE output.\n\nGavin\n\n",
"msg_date": "Thu, 21 Apr 2005 22:33:26 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
{
"msg_contents": "On Thu, Apr 21, 2005 at 08:24:15AM -0400, Jeff wrote:\n> \n> On Apr 21, 2005, at 7:49 AM, Shoaib Burq (VPAC) wrote:\n> \n> >Now I have not touch the $PGDATA/postgresql.conf (As I know very little\n> >about memory tuning) Have run VACCUM & ANALYZE.\n> >\n> You should really, really bump up shared_buffers and given you have 8GB \n> of ram this query would likely benefit from more work_mem.\n> \n> >and the time taken is *twice* that for the original. The modification \n> >was\n> >minor. The queries do make use of both CPUs:\n> >\n> Is this an IO intensive query? If running both in parellel results in \n> 2x the run time and you have sufficient cpus it would (to me) indicate \n> you don't have enough IO bandwidth to satisfy the query.\n> \n\nI would add to Jeff's comments, that the default configuration parameters\nare fairly-to-very conservative which tends to produce plans with more I/O.\nBumping your shared_buffers, work_mem, and effective_cache_size should\nallow the planner to favor plans that utilize more memory but require\nless I/O. Also, with small amounts of work_mem, hash joins cannot be\nused and the planner will resort to nested loops.\n\nKen\n",
"msg_date": "Thu, 21 Apr 2005 07:36:23 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
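A rough per-session experiment along the lines suggested above; the numbers are illustrative rather than recommendations, and shared_buffers itself can only be raised in postgresql.conf followed by a restart (for example the shared_buffers = 10600 mentioned earlier):

    SET work_mem = 262144;               -- in KB on 8.0, i.e. 256MB for this session
    SET effective_cache_size = 655360;   -- in 8KB pages, i.e. roughly 5GB of OS cache
    EXPLAIN ANALYZE SELECT count(*) FROM "getfutureausclimate";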
{
"msg_contents": "\nhere's explain sorry about the mess: I can attach it as text-file if you \nlike.\n\nausclimate=# explain ANALYZE select count(*) from \"getfutureausclimate\";\n\n\n \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1069345.85..1069345.85 rows=1 width=0) (actual \ntime=443241.241..443241.242 rows=1 loops=1)\n -> Subquery Scan getfutureausclimate (cost=1069345.61..1069345.81 \nrows=16 width=0) (actual time=411449.034..436165.259 rows=13276368 \nloops=1)\n -> Sort (cost=1069345.61..1069345.65 rows=16 width=58) (actual \ntime=411449.026..426001.199 rows=13276368 loops=1)\n Sort Key: \"Aus40_DEM\".\"AusPosNumber\", \n\"CurrentAusClimate\".\"iMonth\"\n -> Nested Loop (cost=2.19..1069345.29 rows=16 width=58) \n(actual time=135.390..366902.373 rows=13276368 loops=1)\n -> Nested Loop (cost=2.19..1067304.07 rows=44 \nwidth=68) (actual time=107.627..186390.137 rows=13276368 loops=1)\n -> Nested Loop (cost=2.19..1067038.94 rows=44 \nwidth=52) (actual time=87.255..49743.796 rows=13276368 loops=1)\n -> Nested Loop (cost=2.19..8.09 rows=1 \nwidth=32) (actual time=52.684..52.695 rows=1 loops=1)\n -> Merge Join (cost=2.19..2.24 \nrows=1 width=24) (actual time=28.000..28.007 rows=1 loops=1)\n Merge Cond: \n(\"outer\".\"ClimateId\" = \"inner\".\"ClimateId\")\n -> Sort (cost=1.17..1.19 \nrows=7 width=10) (actual time=10.306..10.307 rows=3 loops=1)\n Sort Key: \n\"ClimateVariables\".\"ClimateId\"\n -> Seq Scan on \n\"ClimateVariables\" (cost=0.00..1.07 rows=7 width=10) (actual \ntime=10.277..10.286 rows=7 loops=1)\n -> Sort (cost=1.02..1.02 \nrows=1 width=14) (actual time=17.679..17.680 rows=1 loops=1)\n Sort Key: \n\"GetFutureClimateParameters\".\"ClimateId\"\n -> Seq Scan on \n\"GetFutureClimateParameters\" (cost=0.00..1.01 rows=1 width=14) (actual \ntime=17.669..17.671 rows=1 loops=1)\n -> Index Scan using \n\"PK_ScenarioEmissionLevels\" on \"ScenarioEmissionLevels\" (cost=0.00..5.83 \nrows=1 width=18) (actual time=24.676..24.679 rows=1 loops=1)\n Index Cond: \n((\"ScenarioEmissionLevels\".\"ScenarioId\" = \"outer\".\"ScenarioId\") AND \n(\"ScenarioEmissionLevels\".\"iYear\" = \"outer\".\"iYear\") AND \n(\"ScenarioEmissionLevels\".\"LevelId\" = \"outer\".\"LevelId\"))\n -> Index Scan using \"IX_ClimateId\" on \n\"ClimateChangeModel40\" (cost=0.00..1063711.75 rows=265528 width=20) \n(actual time=34.564..19435.855 rows=13276368 loops=1)\n Index Cond: (\"outer\".\"ClimateId\" = \n\"ClimateChangeModel40\".\"ClimateId\")\n -> Index Scan using \"PK_Aus40_DEM\" on \n\"Aus40_DEM\" (cost=0.00..6.01 rows=1 width=16) (actual time=0.005..0.006 \nrows=1 loops=13276368)\n Index Cond: (\"outer\".\"AusPosNumber\" = \n\"Aus40_DEM\".\"AusPosNumber\")\n -> Index Scan using \"PK_CurrentAusClimate\" on \n\"CurrentAusClimate\" (cost=0.00..46.20 rows=11 width=14) (actual \ntime=0.007..0.009 rows=1 loops=13276368)\n Index Cond: ((\"CurrentAusClimate\".\"ClimateId\" = \n\"outer\".\"ClimateId\") AND (\"outer\".\"AusPosNumber\" = \n\"CurrentAusClimate\".\"AusPosNum\") AND (\"CurrentAusClimate\".\"iMonth\" = \n\"outer\".\"iMonth\"))\n Total runtime: 443983.269 ms\n(25 rows)\n\n\nSheeeesshh...\n\n> You should really, really bump up shared_buffers and given you have 8GB \n> of ram this query would likely benefit from more work_mem.\n\nI actually tried that and there was a decrease in performance. 
Are the\nshared_buffers and work_mem the only things I should change to start with? \nIf so what's the reasoning.\n\n\n> Is this an IO intensive query? If running both in parellel results in \n> 2x the run time and you have sufficient cpus it would (to me) indicate \n> you don't have enough IO bandwidth to satisfy the query.\n\nYes I think so too: ... I am just compiling some io stats...\n\nAlso will jump on to irc...\n\n> \nWhoa! thanks all... I am overwhelmed with the help I am getting... I love \nit!\n\n\n",
"msg_date": "Thu, 21 Apr 2005 22:44:51 +1000 (EST)",
"msg_from": "\"Shoaib Burq (VPAC)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
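For readers following the thread: a minimal sketch of how the two settings discussed above can be tried for a single session, without editing postgresql.conf for every experiment. The numbers are illustrative only; in 8.0 work_mem is an integer number of kilobytes, and shared_buffers can only be changed in postgresql.conf followed by a restart.

    -- per-session experiment; 262144 KB = 256 MB for the big Sort step in the plan
    SET work_mem = 262144;
    EXPLAIN ANALYZE SELECT count(*) FROM "getfutureausclimate";
    RESET work_mem;
    -- postgresql.conf (restart required); 10600 pages of 8 KB is roughly 83 MB
    -- shared_buffers = 10600

A larger work_mem mainly helps the Sort node visible in the plan above; it does nothing for the nested-loop misestimate discussed in the replies below.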
{
"msg_contents": "On Thu, 21 Apr 2005 10:44 pm, Shoaib Burq (VPAC) wrote:\n> -> Nested Loop (cost=2.19..1069345.29 rows=16 width=58) (actual time=135.390..366902.373 rows=13276368 loops=1)\n> -> Nested Loop (cost=2.19..1067304.07 rows=44 width=68) (actual time=107.627..186390.137 rows=13276368 loops=1)\n> -> Nested Loop (cost=2.19..1067038.94 rows=44 width=52) (actual time=87.255..49743.796 rows=13276368 loops=1)\n\nOUCH, OUCH, OUCH.\n\nMost if not all of the time is going on nested loop joins. The tuple estimates are off by a factore of 10^6 which is means it's chosing the wrong\njoin type.\n\nyou could set enable_seqscan to OFF; to test what he performance is like with a different plan, and then set it back on.\n\nHowever you really need to get the row count estimates up to something comparable. within a factor of 10 at least.\nA number of the other rows estimates seem to be off by a reasonable amount too. You may want to bump up the statistics on the relevant\ncolumns. I can't find what they are from looking at that, I probably should be able too, but it's late.\n\nIf you get the stats up to something near the real values, then the planner will choose a different plan, which should give a huge performance\nincrease.\n\nRegards\n\nRussell Smith.\n\n",
"msg_date": "Thu, 21 Apr 2005 23:29:05 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
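A sketch of the planner-toggle test Russell describes, kept to one session so nothing changes globally. Plain EXPLAIN (no ANALYZE) is enough to see whether the join strategy changes before committing to a long timed run; the view name is the one used earlier in the thread.

    SET enable_seqscan = off;        -- or: SET enable_nestloop = off;
    EXPLAIN SELECT count(*) FROM "getfutureausclimate";
    -- only run the timed version if the new plan looks sane:
    EXPLAIN ANALYZE SELECT count(*) FROM "getfutureausclimate";
    RESET enable_seqscan;            -- or: RESET enable_nestloop;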
{
"msg_contents": "\nhere are some i/o stats with the unchanged postgresql.conf. Gonna change\nit now and have another go.\n\n\n[postgres@dbsql1 MultiCPU_test]$ vmstat 10\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 0 0 25808 710356 6348860 972052 2 4 73 29 1 3 1 0 \n99 0\n 2 0 25808 647636 6348960 1034784 0 0 3226 3048 1054 92819 55 19 \n25 1\n 2 0 25808 585684 6349032 1096660 0 0 3203 3057 1053 96375 55 19 \n25 1\n 2 0 25808 521940 6349112 1160364 0 0 3388 2970 1052 95563 54 19 \n26 1\n 2 0 25808 463636 6349184 1218568 0 0 2804 3037 1048 93560 55 19 \n25 1\n 2 0 25808 405460 6349264 1276696 0 0 2794 3047 1046 96971 55 19 \n25 1\n 2 0 25808 343956 6349340 1338160 0 0 3151 3040 1049 96629 55 20 \n25 1\n 2 0 25808 287252 6349412 1394732 0 0 2666 2990 1045 95173 54 20 \n25 1\n 2 0 25808 230804 6349484 1451168 0 0 2678 2966 1044 95577 54 19 \n26 1\n 2 0 25808 169428 6349560 1512428 0 0 3164 3015 1048 98451 55 19 \n25 1\n 2 0 25808 110484 6349640 1571304 0 0 2910 2970 1050 98214 55 20 \n25 0\n 0 0 25808 50260 6349716 1631408 0 0 3049 3015 1049 99830 55 20 \n25 1\n 1 0 25808 8512 6349788 1673156 0 0 2934 2959 1047 95940 54 19 \n24 3\n 2 1 25808 8768 6349796 1672944 0 0 2552 2984 1043 97893 55 19 \n18 8\n 1 1 25808 8384 6349824 1673256 0 0 2596 3032 1051 94646 55 19 \n19 6\n 2 1 25808 8960 6349856 1672680 0 0 2982 3028 1052 94486 55 20 \n19 6\n 1 1 25808 8960 6349884 1672584 0 0 3125 2919 1052 86969 52 20 \n19 8\n 2 0 25808 6196 6349912 1675276 0 0 2809 3064 1046 99147 55 20 \n19 5\n 1 1 25808 9216 6349976 1672152 0 0 2898 3076 1047 93271 55 19 \n21 6\n 2 0 25808 6580 6349316 1663972 0 0 3150 2982 1048 94964 54 22 \n20 4\n 2 0 25808 7692 6349348 1674480 0 0 2742 3006 1045 97488 54 21 \n21 4\n 2 1 25808 8232 6346244 1676700 0 0 2900 3022 1048 92496 54 20 \n19 8\n 2 0 25808 7104 6346192 1678044 0 0 3284 2958 1057 97265 55 20 \n18 7\n 2 0 25808 8488 6346168 1676776 0 0 2609 3031 1047 93965 55 19 \n20 7\n 2 1 25808 8680 6346184 1676488 0 0 3067 3044 1051 96594 55 19 \n19 6\n 2 0 25808 8576 6346168 1676640 0 0 2900 3070 1047 96300 55 19 \n20 6\n 2 1 25808 9152 6346156 1676176 0 0 3010 2993 1049 98271 55 20 \n19 7\n 2 0 25808 7040 6346172 1678200 0 0 3242 3034 1050 97669 55 20 \n21 4\n 1 1 25808 8900 6346192 1676344 0 0 2859 3014 1052 91220 53 19 \n21 6\n 2 1 25808 8512 6346188 1676824 0 0 2737 2960 1049 100609 55 \n20 18 6\n 2 0 25808 7204 6346236 1678000 0 0 2972 3045 1050 94851 55 19 \n17 9\n 1 0 25808 7116 6346208 1678028 0 0 3053 2996 1048 98901 55 19 \n20 5\n 2 1 25808 9180 6346196 1676068 0 0 2857 3067 1047 100629 56 \n21 20 3\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 3 1 25808 8896 6346172 1676500 0 0 3138 3022 1049 97937 55 20 \n20 5\n 2 1 25808 9088 6346188 1676212 0 0 2844 3022 1047 97664 55 19 \n20 5\n 1 1 25808 8920 6346248 1676288 0 0 3017 3024 1049 99644 55 20 \n17 7\n 1 1 25808 8064 6346116 1677168 0 0 2824 3037 1047 99171 55 20 \n19 5\n 2 1 25820 8472 6344336 1678596 0 0 2969 2957 1047 96396 54 21 \n18 7\n 2 1 25820 9208 6344300 1677884 0 0 3072 3031 1050 95017 54 19 \n22 5\n 1 0 25820 7848 6344328 1679148 0 0 3229 3011 1050 97108 55 19 \n20 5\n 2 1 25820 8960 6344348 1678040 0 0 2701 2954 1046 98485 54 20 \n21 5\n 2 0 25820 7900 6344368 1679244 0 0 2604 2931 1044 97198 54 20 \n19 7\n 2 0 25820 9240 6344424 1677896 0 0 2990 3015 1048 102414 56 \n20 19 5\n 2 0 25820 8924 6344436 1678088 0 0 
3256 2991 1049 96709 55 19 \n21 5\n 1 1 25820 8900 6344456 1678204 0 0 2761 3030 1051 96498 55 20 \n20 5\n 2 0 25820 7628 6344440 1679444 0 0 2952 3012 1053 96534 55 20 \n19 6\n 2 0 25820 7080 6344472 1679956 0 0 2848 3079 1050 95074 56 19 \n19 6\n 2 0 25820 8928 6344444 1678080 0 0 2985 3021 1049 96806 55 20 \n18 7\n 2 1 25820 7976 6344976 1676892 11 0 3429 3062 1083 92817 55 19 \n18 8\n 2 0 25820 8096 6345080 1676652 0 0 2662 2989 1056 91921 54 19 \n17 10\n 1 0 25820 7424 6345128 1677352 0 0 2956 3029 1054 99385 56 19 \n20 5\n 2 0 25820 6664 6345232 1677724 0 0 3358 3030 1064 95929 55 19 \n21 5\n 1 0 25820 7268 6345320 1676956 0 0 2681 3012 1082 97744 54 20 \n18 7\n 2 0 25820 6944 6345364 1677184 0 0 3156 3022 1061 98055 55 19 \n22 4\n 2 0 25820 8668 6345420 1675428 0 0 2990 3018 1050 94734 55 19 \n22 5\n 2 1 25820 8724 6345464 1675452 0 0 2677 2967 1055 100760 55 \n20 18 7\n 2 1 25820 9260 6345508 1674796 0 0 3296 3233 1054 99711 55 20 \n20 5\n 2 0 25820 6196 6345556 1677944 0 0 2861 2950 1066 93289 53 19 \n23 6\n 2 0 25820 8052 6345620 1675908 0 0 3012 2920 1051 94428 54 19 \n20 7\n 2 1 25820 9000 6345672 1675040 0 0 2645 2980 1045 99992 56 20 \n17 8\n 2 1 25820 8296 6345728 1675732 0 0 3216 3058 1052 91934 54 19 \n21 5\n 2 0 25820 7900 6345796 1676072 0 0 3009 3022 1052 96303 55 19 \n20 7\n 2 0 25820 8516 6345844 1675344 0 0 2586 2956 1048 95812 54 20 \n19 8\n 2 1 25820 9000 6345892 1674752 0 0 3225 3028 1055 99786 54 20 \n21 5\n 0 1 25820 9128 6345768 1674684 0 1 2868 3016 1049 98301 55 21 \n19 6\n 2 1 25820 8160 6345828 1675576 0 0 3079 3056 1050 93725 55 19 \n21 5\n\n\n\n\nOn Thu, 21 Apr 2005, Shoaib Burq (VPAC) wrote:\n\n> \n> here's explain sorry about the mess: I can attach it as text-file if you \n> like.\n> \n> ausclimate=# explain ANALYZE select count(*) from \"getfutureausclimate\";\n> \n> \n> \n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1069345.85..1069345.85 rows=1 width=0) (actual \n> time=443241.241..443241.242 rows=1 loops=1)\n> -> Subquery Scan getfutureausclimate (cost=1069345.61..1069345.81 \n> rows=16 width=0) (actual time=411449.034..436165.259 rows=13276368 \n> loops=1)\n> -> Sort (cost=1069345.61..1069345.65 rows=16 width=58) (actual \n> time=411449.026..426001.199 rows=13276368 loops=1)\n> Sort Key: \"Aus40_DEM\".\"AusPosNumber\", \n> \"CurrentAusClimate\".\"iMonth\"\n> -> Nested Loop (cost=2.19..1069345.29 rows=16 width=58) \n> (actual time=135.390..366902.373 rows=13276368 loops=1)\n> -> Nested Loop (cost=2.19..1067304.07 rows=44 \n> width=68) (actual time=107.627..186390.137 rows=13276368 loops=1)\n> -> Nested Loop (cost=2.19..1067038.94 rows=44 \n> width=52) (actual time=87.255..49743.796 rows=13276368 loops=1)\n> -> Nested Loop (cost=2.19..8.09 rows=1 \n> width=32) (actual time=52.684..52.695 rows=1 loops=1)\n> -> Merge Join (cost=2.19..2.24 \n> rows=1 width=24) (actual time=28.000..28.007 rows=1 loops=1)\n> Merge Cond: \n> (\"outer\".\"ClimateId\" = \"inner\".\"ClimateId\")\n> -> Sort (cost=1.17..1.19 \n> rows=7 width=10) (actual time=10.306..10.307 rows=3 loops=1)\n> Sort Key: \n> \"ClimateVariables\".\"ClimateId\"\n> -> Seq Scan on \n> \"ClimateVariables\" (cost=0.00..1.07 rows=7 width=10) (actual \n> time=10.277..10.286 rows=7 loops=1)\n> -> Sort (cost=1.02..1.02 \n> rows=1 width=14) (actual time=17.679..17.680 rows=1 
loops=1)\n> Sort Key: \n> \"GetFutureClimateParameters\".\"ClimateId\"\n> -> Seq Scan on \n> \"GetFutureClimateParameters\" (cost=0.00..1.01 rows=1 width=14) (actual \n> time=17.669..17.671 rows=1 loops=1)\n> -> Index Scan using \n> \"PK_ScenarioEmissionLevels\" on \"ScenarioEmissionLevels\" (cost=0.00..5.83 \n> rows=1 width=18) (actual time=24.676..24.679 rows=1 loops=1)\n> Index Cond: \n> ((\"ScenarioEmissionLevels\".\"ScenarioId\" = \"outer\".\"ScenarioId\") AND \n> (\"ScenarioEmissionLevels\".\"iYear\" = \"outer\".\"iYear\") AND \n> (\"ScenarioEmissionLevels\".\"LevelId\" = \"outer\".\"LevelId\"))\n> -> Index Scan using \"IX_ClimateId\" on \n> \"ClimateChangeModel40\" (cost=0.00..1063711.75 rows=265528 width=20) \n> (actual time=34.564..19435.855 rows=13276368 loops=1)\n> Index Cond: (\"outer\".\"ClimateId\" = \n> \"ClimateChangeModel40\".\"ClimateId\")\n> -> Index Scan using \"PK_Aus40_DEM\" on \n> \"Aus40_DEM\" (cost=0.00..6.01 rows=1 width=16) (actual time=0.005..0.006 \n> rows=1 loops=13276368)\n> Index Cond: (\"outer\".\"AusPosNumber\" = \n> \"Aus40_DEM\".\"AusPosNumber\")\n> -> Index Scan using \"PK_CurrentAusClimate\" on \n> \"CurrentAusClimate\" (cost=0.00..46.20 rows=11 width=14) (actual \n> time=0.007..0.009 rows=1 loops=13276368)\n> Index Cond: ((\"CurrentAusClimate\".\"ClimateId\" = \n> \"outer\".\"ClimateId\") AND (\"outer\".\"AusPosNumber\" = \n> \"CurrentAusClimate\".\"AusPosNum\") AND (\"CurrentAusClimate\".\"iMonth\" = \n> \"outer\".\"iMonth\"))\n> Total runtime: 443983.269 ms\n> (25 rows)\n> \n> \n> Sheeeesshh...\n> \n> > You should really, really bump up shared_buffers and given you have 8GB \n> > of ram this query would likely benefit from more work_mem.\n> \n> I actually tried that and there was a decrease in performance. Are the\n> shared_buffers and work_mem the only things I should change to start with? \n> If so what's the reasoning.\n> \n> \n> > Is this an IO intensive query? If running both in parellel results in \n> > 2x the run time and you have sufficient cpus it would (to me) indicate \n> > you don't have enough IO bandwidth to satisfy the query.\n> \n> Yes I think so too: ... I am just compiling some io stats...\n> \n> Also will jump on to irc...\n> \n> > \n> Whoa! thanks all... I am overwhelmed with the help I am getting... I love \n> it!\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \nShoaib Burq\n--\nVPAC - Geospatial Applications Developer\nBuilding 91, 110 Victoria Street, \nCarlton South, Vic 3053, Australia\n_______________________________________________________________\nw: www.vpac.org | e: sab_AT_vpac_DOT_org | mob: +61.431-850039\n\n\n",
"msg_date": "Thu, 21 Apr 2005 23:32:06 +1000 (EST)",
"msg_from": "\"Shoaib Burq (VPAC)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
{
"msg_contents": "\n> Is this an IO intensive query? If running both in parellel results in \n> 2x the run time and you have sufficient cpus it would (to me) indicate \n> you don't have enough IO bandwidth to satisfy the query.\n\nany tips on how to verify this?\n\n",
"msg_date": "Fri, 22 Apr 2005 00:53:03 +1000 (EST)",
"msg_from": "\"Shoaib Burq (VPAC)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
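One way to verify it, assuming the usual Linux tools are on the box (iostat comes from the sysstat package): watch the disks on the database server while one copy of the query runs, then while two run side by side. If the device is already close to 100% busy with a single query, running a second one can only stretch both out.

    # on the database server, while the query is running
    vmstat 10        # bi/bo = blocks in/out per second, wa = % CPU time stalled on I/O
    iostat -x 10     # per-device utilisation and average request queue length
    top              # confirms whether the CPUs, rather than the disks, are saturated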
{
"msg_contents": "\nJust tried it with the following changes:\n\nshared_buffers = 10600\nwork_mem = 102400\nenable_seqscan = false\n\nstill no improvement\n\nOk here's the Plan with the enable_seqscan = false:\nausclimate=# explain ANALYZE select count(*) from \"getfutureausclimate\";\n \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=101069350.74..101069350.74 rows=1 width=0) (actual \ntime=461651.787..461651.787 rows=1 loops=1)\n -> Subquery Scan getfutureausclimate (cost=101069350.50..101069350.70 \nrows=16 width=0) (actual time=426142.382..454571.397 rows=13276368 \nloops=1)\n -> Sort (cost=101069350.50..101069350.54 rows=16 width=58) \n(actual time=426142.375..444428.278 rows=13276368 loops=1)\n Sort Key: \"Aus40_DEM\".\"AusPosNumber\", \n\"CurrentAusClimate\".\"iMonth\"\n -> Nested Loop (cost=100000001.02..101069350.18 rows=16 \nwidth=58) (actual time=72.740..366588.646 rows=13276368 loops=1)\n -> Nested Loop (cost=100000001.02..101067308.96 \nrows=44 width=68) (actual time=35.788..184032.873 rows=13276368 loops=1)\n -> Nested Loop \n(cost=100000001.02..101067043.83 rows=44 width=52) (actual \ntime=35.753..47971.652 rows=13276368 loops=1)\n -> Nested Loop \n(cost=100000001.02..100000012.98 rows=1 width=32) (actual \ntime=7.433..7.446 rows=1 loops=1)\n -> Merge Join \n(cost=100000001.02..100000007.13 rows=1 width=24) (actual \ntime=7.403..7.412 rows=1 loops=1)\n Merge Cond: \n(\"outer\".\"ClimateId\" = \"inner\".\"ClimateId\")\n -> Index Scan using \n\"PK_ClimateVariables\" on \"ClimateVariables\" (cost=0.00..6.08 rows=7 \nwidth=10) (actual time=0.011..0.015 rows=3 loops=1)\n -> Sort \n(cost=100000001.02..100000001.03 rows=1 width=14) (actual \ntime=7.374..7.375 rows=1 loops=1)\n Sort Key: \n\"GetFutureClimateParameters\".\"ClimateId\"\n -> Seq Scan on \n\"GetFutureClimateParameters\" (cost=100000000.00..100000001.01 rows=1 \nwidth=14) (actual time=7.361..7.362 rows=1 loops=1)\n -> Index Scan using \n\"PK_ScenarioEmissionLevels\" on \"ScenarioEmissionLevels\" (cost=0.00..5.83 \nrows=1 width=18) (actual time=0.021..0.024 rows=1 loops=1)\n Index Cond: \n((\"ScenarioEmissionLevels\".\"ScenarioId\" = \"outer\".\"ScenarioId\") AND \n(\"ScenarioEmissionLevels\".\"iYear\" = \"outer\".\"iYear\") AND \n(\"ScenarioEmissionLevels\".\"LevelId\" = \"outer\".\"LevelId\"))\n -> Index Scan using \"IX_ClimateId\" on \n\"ClimateChangeModel40\" (cost=0.00..1063711.75 rows=265528 width=20) \n(actual time=28.311..17212.703 rows=13276368 loops=1)\n Index Cond: (\"outer\".\"ClimateId\" = \n\"ClimateChangeModel40\".\"ClimateId\")\n -> Index Scan using \"PK_Aus40_DEM\" on \n\"Aus40_DEM\" (cost=0.00..6.01 rows=1 width=16) (actual time=0.005..0.006 \nrows=1 loops=13276368)\n Index Cond: (\"outer\".\"AusPosNumber\" = \n\"Aus40_DEM\".\"AusPosNumber\")\n -> Index Scan using \"PK_CurrentAusClimate\" on \n\"CurrentAusClimate\" (cost=0.00..46.20 rows=11 width=14) (actual \ntime=0.007..0.009 rows=1 loops=13276368)\n Index Cond: ((\"CurrentAusClimate\".\"ClimateId\" =\n\"outer\".\"ClimateId\") AND (\"outer\".\"AusPosNumber\" =\n\"CurrentAusClimate\".\"AusPosNum\") AND (\"CurrentAusClimate\".\"iMonth\" =\n\"outer\".\"iMonth\"))\n Total runtime: 462218.120 ms\n(23 rows)\n\n\n\n\n\n\nOn Thu, 21 Apr 2005, Russell Smith wrote:\n\n> On Thu, 21 Apr 2005 10:44 pm, Shoaib Burq (VPAC) wrote:\n> > �-> �Nested 
Loop �(cost=2.19..1069345.29 rows=16 width=58) (actual time=135.390..366902.373 rows=13276368 loops=1)\n> > � � � � � � � � � � �-> �Nested Loop �(cost=2.19..1067304.07 rows=44 width=68) (actual time=107.627..186390.137 rows=13276368 loops=1)\n> > � � � � � � � � � � � � � �-> �Nested Loop �(cost=2.19..1067038.94 rows=44 width=52) (actual time=87.255..49743.796 rows=13276368 loops=1)\n> \n> OUCH, OUCH, OUCH.\n> \n> Most if not all of the time is going on nested loop joins. The tuple estimates are off by a factore of 10^6 which is means it's chosing the wrong\n> join type.\n> \n> you could set enable_seqscan to OFF; to test what he performance is like with a different plan, and then set it back on.\n> \n> However you really need to get the row count estimates up to something comparable. within a factor of 10 at least.\n> A number of the other rows estimates seem to be off by a reasonable amount too. You may want to bump up the statistics on the relevant\n> columns. I can't find what they are from looking at that, I probably should be able too, but it's late.\n> \n> If you get the stats up to something near the real values, then the planner will choose a different plan, which should give a huge performance\n> increase.\n> \n> Regards\n> \n> Russell Smith.\n> \n> \n\n-- \nShoaib Burq\n--\nVPAC - Geospatial Applications Developer\nBuilding 91, 110 Victoria Street, \nCarlton South, Vic 3053, Australia\n_______________________________________________________________\nw: www.vpac.org | e: sab_AT_vpac_DOT_org | mob: +61.431-850039\n\n\n\n",
"msg_date": "Fri, 22 Apr 2005 01:01:19 +1000 (EST)",
"msg_from": "\"Shoaib Burq (VPAC)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
{
"msg_contents": "Shoaib Burq (VPAC) wrote:\n\n>Just tried it with the following changes:\n>\n>shared_buffers = 10600\n>work_mem = 102400\n>enable_seqscan = false\n>\n>still no improvement\n>\n>Ok here's the Plan with the enable_seqscan = false:\n>ausclimate=# explain ANALYZE select count(*) from \"getfutureausclimate\";\n>\n>\nActually, you probably don't want enable_seqscan=off, you should try:\nSET enable_nestloop TO off.\nThe problem is that it is estimating there will only be 44 rows, but in\nreality there are 13M rows. It almost definitely should be doing a\nseqscan with a sort and merge join.\n\nAlso, please attach you explain analyzes, the wrapping is really hard to\nread.\n\nI don't understand how postgres could get the number of rows that wrong.\n\nIt seems to be misestimating the number of entries in IX_ClimateId\n\nHere:\n\n-> Index Scan using \"PK_Aus40_DEM\" on \"Aus40_DEM\" (cost=0.00..6.01 rows=1 width=16) (actual time=0.005..0.006 rows=1 loops=13276368)\n Index Cond: (\"outer\".\"AusPosNumber\" = \"Aus40_DEM\".\"AusPosNumber\")\n-> Index Scan using \"PK_CurrentAusClimate\" on \"CurrentAusClimate\" (cost=0.00..46.20 rows=11 width=14) (actual time=0.007..0.009 rows=1 loops=13276368)\n\nThe first index scan is costing you 0.006*13276368=79s, and the second one is 119s.\n\nI can't figure out exactly what is where from the formatting, but the query that seems misestimated is:\n-> Index Scan using \"IX_ClimateId\" on \"ClimateChangeModel40\" (cost=0.00..1063711.75 rows=265528 width=20) (actual time=28.311..17212.703 rows=13276368 loops=1)\n Index Cond: (\"outer\".\"ClimateId\" = \"ClimateChangeModel40\".\"ClimateId\")\n\nIs there an unexpected correlaction between\nClimateChangeModel40\".\"ClimateId\" and whatever \"outer\" is at this point?\n\nJohn\n=:->",
"msg_date": "Thu, 21 Apr 2005 10:14:21 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
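A sketch of how to see what the planner actually knows about the join column John is pointing at; the column name is taken from Gavin's and John's later replies. Getting no row back from this query would mean the table has never been ANALYZEd, which by itself would explain estimates that are off by orders of magnitude.

    SELECT tablename, attname, null_frac, n_distinct, correlation
      FROM pg_stats
     WHERE tablename = 'ClimateChangeModel40'
       AND attname   = 'ClimateId';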
{
"msg_contents": "Please see attached the output from explain analyse. This is with the \n\n\tshared_buffers = 10600\n\twork_mem = 102400\n\tenable_seqscan = true\n\nBTW I guess should mention that I am doing the select count(*) on a View.\n\nRan the Explain analyse with the nestedloop disabled but it was taking \nforever... and killed it after 30mins.\n\nThanks\nshoaib\nOn Thu, 21 Apr 2005, John A Meinel wrote:\n\n> Shoaib Burq (VPAC) wrote:\n> \n> >Just tried it with the following changes:\n> >\n> >shared_buffers = 10600\n> >work_mem = 102400\n> >enable_seqscan = false\n> >\n> >still no improvement\n> >\n> >Ok here's the Plan with the enable_seqscan = false:\n> >ausclimate=# explain ANALYZE select count(*) from \"getfutureausclimate\";\n> >\n> >\n> Actually, you probably don't want enable_seqscan=off, you should try:\n> SET enable_nestloop TO off.\n> The problem is that it is estimating there will only be 44 rows, but in\n> reality there are 13M rows. It almost definitely should be doing a\n> seqscan with a sort and merge join.\n> \n> Also, please attach you explain analyzes, the wrapping is really hard to\n> read.\n> \n> I don't understand how postgres could get the number of rows that wrong.\n> \n> It seems to be misestimating the number of entries in IX_ClimateId\n> \n> Here:\n> \n> -> Index Scan using \"PK_Aus40_DEM\" on \"Aus40_DEM\" (cost=0.00..6.01 rows=1 width=16) (actual time=0.005..0.006 rows=1 loops=13276368)\n> Index Cond: (\"outer\".\"AusPosNumber\" = \"Aus40_DEM\".\"AusPosNumber\")\n> -> Index Scan using \"PK_CurrentAusClimate\" on \"CurrentAusClimate\" (cost=0.00..46.20 rows=11 width=14) (actual time=0.007..0.009 rows=1 loops=13276368)\n> \n> The first index scan is costing you 0.006*13276368=79s, and the second one is 119s.\n> \n> I can't figure out exactly what is where from the formatting, but the query that seems misestimated is:\n> -> Index Scan using \"IX_ClimateId\" on \"ClimateChangeModel40\" (cost=0.00..1063711.75 rows=265528 width=20) (actual time=28.311..17212.703 rows=13276368 loops=1)\n> Index Cond: (\"outer\".\"ClimateId\" = \"ClimateChangeModel40\".\"ClimateId\")\n> \n> Is there an unexpected correlaction between\n> ClimateChangeModel40\".\"ClimateId\" and whatever \"outer\" is at this point?\n> \n> John\n> =:->\n> \n>",
"msg_date": "Fri, 22 Apr 2005 13:33:31 +1000 (EST)",
"msg_from": "\"Shoaib Burq (VPAC)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
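Rather than killing exploratory runs by hand, a session-level timeout keeps an EXPLAIN ANALYZE experiment bounded. The value is in milliseconds; 10 minutes here is just an example.

    SET statement_timeout = 600000;
    EXPLAIN ANALYZE SELECT count(*) FROM "getfutureausclimate";
    RESET statement_timeout;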
{
"msg_contents": "On Fri, 22 Apr 2005, Shoaib Burq (VPAC) wrote:\n\n> Please see attached the output from explain analyse. This is with the\n>\n> \tshared_buffers = 10600\n> \twork_mem = 102400\n> \tenable_seqscan = true\n>\n> BTW I guess should mention that I am doing the select count(*) on a View.\n>\n> Ran the Explain analyse with the nestedloop disabled but it was taking\n> forever... and killed it after 30mins.\n\nTry increasing stats collection on ClimateChangeModel40.ClimateId:\n\nalter table ClimateChangeModel40 alter column ClimateId set statistics 1000;\nanalyze ClimateChangeModel40;\n\nGavin\n",
"msg_date": "Fri, 22 Apr 2005 13:57:17 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
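After the ALTER/ANALYZE above, a quick way to check whether the estimate has moved is to EXPLAIN just the problematic predicate on its own; the literal below is a placeholder, not a value from the real data. Before the fix the planner guessed roughly 265 thousand rows for a condition that really matches about 13 million.

    EXPLAIN SELECT * FROM "ClimateChangeModel40" WHERE "ClimateId" = 42;  -- 42 is hypothetical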
{
"msg_contents": "\nOn Apr 21, 2005, at 11:33 PM, Shoaib Burq (VPAC) wrote:\n\n>\n> BTW I guess should mention that I am doing the select count(*) on a \n> View.\n>\n\nA bit of a silly question...\nbut are you actually selecting all the rows from this query in \nproduction or would it be more selective? ie select * from bigslowview \nwhere bah = 'snort'?\n\n\n> Ran the Explain analyse with the nestedloop disabled but it was taking\n> forever... and killed it after 30mins.\n>\n\nIf it takes too long you can run just plain explain (no analyze) and it \nwill show you the plan. This is nearly always instant... it'll give \nyou a clue as to if your setting changes did anything.\n\nYou may need to end up breaking some parts of this up into subqueries. \nI've had to do this before. I had one query that just ran too dang \nslow as a join so I modified it into a subquery type deal. Worked \ngreat. However since you are selecting ALL rows I doubt that will help \nmuch.\n\nAnother option may be to use materialized views. Not sure how \n\"dynamic\" your data model is. It could help.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Fri, 22 Apr 2005 07:48:29 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
},
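Since 8.0 has no built-in materialized views, Jeff's suggestion usually means maintaining a summary table by hand. A rough sketch, with made-up object names and index columns guessed from the ORDER BY seen in the earlier plans:

    CREATE TABLE getfutureausclimate_summary AS
        SELECT * FROM "getfutureausclimate";
    CREATE INDEX getfutureausclimate_summary_idx
        ON getfutureausclimate_summary ("AusPosNumber", "iMonth");
    ANALYZE getfutureausclimate_summary;

    -- refresh whenever the underlying data changes
    BEGIN;
    TRUNCATE getfutureausclimate_summary;
    INSERT INTO getfutureausclimate_summary SELECT * FROM "getfutureausclimate";
    COMMIT;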
{
"msg_contents": "John A Meinel <[email protected]> writes:\n> Actually, you probably don't want enable_seqscan=off, you should try:\n> SET enable_nestloop TO off.\n> The problem is that it is estimating there will only be 44 rows, but in\n> reality there are 13M rows. It almost definitely should be doing a\n> seqscan with a sort and merge join.\n\nNot nestloops anyway.\n\n> I don't understand how postgres could get the number of rows that wrong.\n\nNo stats, or out-of-date stats is the most likely bet.\n\n> I can't figure out exactly what is where from the formatting, but the query that seems misestimated is:\n> -> Index Scan using \"IX_ClimateId\" on \"ClimateChangeModel40\" (cost=0.00..1063711.75 rows=265528 width=20) (actual time=28.311..17212.703 rows=13276368 loops=1)\n> Index Cond: (\"outer\".\"ClimateId\" = \"ClimateChangeModel40\".\"ClimateId\")\n\nYeah, that's what jumped out at me too. It's not the full explanation\nfor the join number being so far off, but this one at least you have a\nchance to fix by updating the stats on ClimateChangeModel40.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Apr 2005 20:10:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed) "
},
{
"msg_contents": "OK ... so just to clearify... (and pardon my ignorance):\n\nI need to increase the value of 'default_statistics_target' variable and \nthen run VACUUM ANALYZE, right? If so what should I choose for the \n'default_statistics_target'?\n\nBTW I only don't do any sub-selection on the View.\n\nI have attached the view in question and the output of:\nSELECT oid , relname, relpages, reltuples \n FROM pg_class ORDER BY relpages DESC;\n\nreg\nshoaib\n\nOn Sat, 23 Apr 2005, Tom Lane wrote:\n\n> John A Meinel <[email protected]> writes:\n> > Actually, you probably don't want enable_seqscan=off, you should try:\n> > SET enable_nestloop TO off.\n> > The problem is that it is estimating there will only be 44 rows, but in\n> > reality there are 13M rows. It almost definitely should be doing a\n> > seqscan with a sort and merge join.\n> \n> Not nestloops anyway.\n> \n> > I don't understand how postgres could get the number of rows that wrong.\n> \n> No stats, or out-of-date stats is the most likely bet.\n> \n> > I can't figure out exactly what is where from the formatting, but the query that seems misestimated is:\n> > -> Index Scan using \"IX_ClimateId\" on \"ClimateChangeModel40\" (cost=0.00..1063711.75 rows=265528 width=20) (actual time=28.311..17212.703 rows=13276368 loops=1)\n> > Index Cond: (\"outer\".\"ClimateId\" = \"ClimateChangeModel40\".\"ClimateId\")\n> \n> Yeah, that's what jumped out at me too. It's not the full explanation\n> for the join number being so far off, but this one at least you have a\n> chance to fix by updating the stats on ClimateChangeModel40.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n-- \nShoaib Burq\n--\nVPAC - Geospatial Applications Developer\nBuilding 91, 110 Victoria Street, \nCarlton South, Vic 3053, Australia\n_______________________________________________________________\nw: www.vpac.org | e: sab_AT_vpac_DOT_org | mob: +61.431-850039",
"msg_date": "Wed, 27 Apr 2005 00:31:18 +1000 (EST)",
"msg_from": "\"Shoaib Burq (VPAC)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two queries and dual cpu (perplexed) "
},
{
"msg_contents": "Shoaib Burq (VPAC) wrote:\n> OK ... so just to clearify... (and pardon my ignorance):\n>\n> I need to increase the value of 'default_statistics_target' variable and\n> then run VACUUM ANALYZE, right? If so what should I choose for the\n> 'default_statistics_target'?\n>\n> BTW I only don't do any sub-selection on the View.\n>\n> I have attached the view in question and the output of:\n> SELECT oid , relname, relpages, reltuples\n> FROM pg_class ORDER BY relpages DESC;\n>\n> reg\n> shoaib\n\nActually, you only need to alter the statistics for that particular\ncolumn, not for all columns in the db.\n\nWhat you want to do is:\n\nALTER TABLE \"ClimateChangeModel40\"\n\tALTER COLUMN <whatever the column is>\n\tSET STATISTICS 100;\nVACUUM ANALYZE \"ClimateChangeModel40\";\n\nThe column is just the column that you have the \"IX_ClimateId\" index on,\nI don't know which one that is.\n\nThe statistics value ranges from 1 - 1000, the default being 10, and for\nindexed columns you are likely to want somewhere between 100-200.\n\nIf you set it to 100 and the planner is still mis-estimating the number\nof rows, try 200, etc.\n\nThe reason to keep the number low is because with a high number the\nplanner has to spend more time planning. But especially for queries like\nthis one, you'd rather the query planner spent a little bit more time\nplanning, and got the right plan.\n\nJohn\n=:->",
"msg_date": "Tue, 26 Apr 2005 10:05:26 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
}
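To tie the last two messages together: the per-column form John gives is the targeted fix, while default_statistics_target changes the default used for every column at the next ANALYZE. A sketch of both, with illustrative values; the per-column setting overrides the global default.

    # postgresql.conf (global default; takes effect after a reload, for subsequent ANALYZE runs)
    default_statistics_target = 100

    -- per-column override, no restart required
    ALTER TABLE "ClimateChangeModel40" ALTER COLUMN "ClimateId" SET STATISTICS 200;
    ANALYZE "ClimateChangeModel40";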
] |
[
{
"msg_contents": "> John A Meinel <[email protected]> writes:\n> > Joel Fradkin wrote:\n> >> Postgres was on the second run\n> >> Total query runtime: 17109 ms.\n> >> Data retrieval runtime: 72188 ms.\n> >> 331640 rows retrieved.\n> \n> > How were you measuring \"data retrieval time\"?\n> \n> I suspect he's using pgadmin. We've seen reports before suggesting\nthat\n> pgadmin can be amazingly slow, eg here\n> http://archives.postgresql.org/pgsql-performance/2004-10/msg00427.php\n> where the *actual* data retrieval time as shown by EXPLAIN ANALYZE\n> was under three seconds, but pgadmin claimed the query runtime was 22\n> sec and data retrieval runtime was 72 sec.\n\nThe problem is that pgAdmin takes your query results and puts it in a\ngrid. The grid is not designed to be used in that way for large\ndatasets. The time complexity is not linear and really breaks down\naround 10k-100k rows depending on various factors. pgAdmin users just\nhave to become used to it and use limit or the filter feature at\nappropriate times.\n\nThe ms sql enterprise manager uses cursors which has its own set of\nnasty issues (no mvcc).\n\nIn fairness, unless you are running with \\a switch, psql adds a fair\namount of time to the query too.\n\nJoel:\n\"Postgres was on the second run\nTotal query runtime: 17109 ms.\nData retrieval runtime: 72188 ms.\n331640 rows retrieved.\"\n\nThe Data retrieval runtime is time spend by pgAdmin formatting, etc.\nThe query runtime is the actual timing figure you should be concerned\nwith (you are not comparing apples to apples). I can send you a utility\nI wrote in Delphi which adds only a few seconds overhead for 360k result\nset. Or, go into psql, throw \\a switch, and run query.\n\nor: \npsql -A -c \"select * from myview where x\" > output.txt\n\nit should finish the above in 16-17 sec plus the time to write out the\nfile.\n\nJoel, I have a lot of experience with all three databases you are\nevaluating and you are making a huge mistake switching to mysql. you\ncan make a decent case for ms sql, but it's quite expensive at your\nlevel of play as you know.\n\nMerlin\n\n\n\n",
"msg_date": "Thu, 21 Apr 2005 08:56:09 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon "
}
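A way to reproduce Merlin's numbers and separate server-side query time from client-side rendering, using only psql. The view name and filter are the placeholders from his message; the database name and output path are illustrative.

    # unaligned, tuples-only, written straight to a file
    time psql -A -t -d wazagua -c "select * from myview where x" > /tmp/out.txt

Inside an interactive psql session, \timing prints the elapsed time of each statement, which gives the same comparison without leaving the client.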
] |
[
{
"msg_contents": "FWIW, ODBC has variables to tweak, as well. fetch/buffer sizes, and the like. \n\nMaybe one of the ODBC cognoscenti here can chime in more concretely....\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Joel Fradkin\nSent: Thursday, April 21, 2005 10:36 AM\nTo: 'Tom Lane'; 'John A Meinel'\nCc: 'Postgresql Performance'\nSubject: Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n\n\n\nI suspect he's using pgadmin. \nYup I was, but I did try running on the linux box in psql, but it was running to the screen and took forever because of that.\n\nThe real issue is returning to my app using ODBC is very slow (Have not tested the ODBC for MYSQL, MSSQL is ok (the two proc dell is running out of steam but been good until this year when we about doubled our demand by adding sears as a client).\n\nUsing odbc to postgres on some of the views (Josh from Command is having me do some very specific testing) is timing out with a 10 minute time limit. These are pages that still respond using MSSQL (this is wehere production is using the duel proc and the test is using the 4 proc).\n\nI have a tool that hooks to all three databases so I can try it with that and see if I get different responses.\n\nJoel\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n",
"msg_date": "Thu, 21 Apr 2005 14:42:16 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Here is the connect string I am using.\nIt could be horrid as I cut it from ODBC program.\n\nSession(\"StringConn\") =\n\"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=;\nPWD=;ReadOnly=0;Protocol=6.4;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;\nShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVar\ncharSize=254;MaxLongVarcharSize=8190;Debug=0;CommLog=0;Optimizer=1;Ksqo=1;Us\neDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Pa\nrse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;LFConversion=1;UpdatableC\nursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseS\nerverSidePrepare=0\"\n\nJoel Fradkin\n \n\n-----Original Message-----\nFrom: Mohan, Ross [mailto:[email protected]] \nSent: Thursday, April 21, 2005 9:42 AM\nTo: [email protected]\nSubject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n\nFWIW, ODBC has variables to tweak, as well. fetch/buffer sizes, and the\nlike. \n\nMaybe one of the ODBC cognoscenti here can chime in more concretely....\n\n\n\n\n\n",
"msg_date": "Thu, 21 Apr 2005 10:53:51 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon"
}
] |
[
{
"msg_contents": "Joel, thanks. A couple of things jump out there for\nme, not a problem for a routine ODBC connection, but\nperhaps in the \"lotsa stuff\" context of your current\nexplorations, it might be relevant?\n\nI am completely shooting from the hip, here, but...if\nit were my goose to cook, I'd be investigating\n\nSession(\"StringConn\") = \"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=;\nPWD=;ReadOnly=0;Protocol=6.4;\n\n|| Protocol? Is this related to version? is the driver waaaay old?\n\n\nFakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;\nShowSystemTables=0;ConnSettings=;Fetch=100;\n\n|| Fetch great for OLTP, lousy for batch?\n\n\nSocket=4096;UnknownSizes=0;MaxVarcharSize=254;MaxLongVarcharSize=8190;\n\n|| what ARE the datatypes and sizes in your particular case? \n\nDebug=0;\n\n|| a run with debug=1 probably would spit up something interesting....\n\nCommLog=0;Optimizer=1;\n\n|| Optimizer? that's a new one on me....\n\nKsqo=1;UseDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Parse=0;CancelAsFreeStmt=;ExtraSysTablePrefixes=dd_;LFConversion=1;UpdatableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseServerSidePrepare=0\"\n\n|| that's about all I can see, prima facie. I'll be very curious to know if ODBC is\n any part of your performance equation. \n\n\nHTH, \n\nRoss\n\n-----Original Message-----\nFrom: Joel Fradkin [mailto:[email protected]] \nSent: Thursday, April 21, 2005 10:54 AM\nTo: Mohan, Ross\nCc: [email protected]; PostgreSQL Perform\nSubject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n\n\nHere is the connect string I am using.\nIt could be horrid as I cut it from ODBC program.\n\nSession(\"StringConn\") = \"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=;\nPWD=;ReadOnly=0;Protocol=6.4;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;\nShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVar\ncharSize=254;MaxLongVarcharSize=8190;Debug=0;CommLog=0;Optimizer=1;Ksqo=1;Us\neDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Pa\nrse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;LFConversion=1;UpdatableC\nursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseS\nerverSidePrepare=0\"\n\nJoel Fradkin\n \n\n-----Original Message-----\nFrom: Mohan, Ross [mailto:[email protected]] \nSent: Thursday, April 21, 2005 9:42 AM\nTo: [email protected]\nSubject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n\nFWIW, ODBC has variables to tweak, as well. fetch/buffer sizes, and the like. \n\nMaybe one of the ODBC cognoscenti here can chime in more concretely....\n\n\n\n\n\n",
"msg_date": "Thu, 21 Apr 2005 15:01:10 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Hate to be dumb, but unfortunately I am.\n\nCould you give me an idea what I should be using, or is there a good\nresource for me to check out.\nI have been spending so much time with config and moving data, converting\netc, I never looked at the odbc settings (didn't even think about it until\nJosh brought it up). I did ask him for his advice, but would love a second\nopinion.\n\nOur data is a bit of a mixture, some records have text items most are\nvarchars and integers with a bit of Booleans mixed in.\n\nI am running 8.0.2 so not sure if the protocol is ODBC or Postgres?\n\nThanks for responding I appreciate any help \n\nJoel Fradkin\n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mohan, Ross\nSent: Thursday, April 21, 2005 10:01 AM\nTo: [email protected]\nSubject: Re: [ODBC] [PERFORM] Joel's Performance Issues WAS : Opteron vs\nXeon\n\nJoel, thanks. A couple of things jump out there for\nme, not a problem for a routine ODBC connection, but\nperhaps in the \"lotsa stuff\" context of your current\nexplorations, it might be relevant?\n\nI am completely shooting from the hip, here, but...if\nit were my goose to cook, I'd be investigating\n\nSession(\"StringConn\") =\n\"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=;\nPWD=;ReadOnly=0;Protocol=6.4;\n\n|| Protocol? Is this related to version? is the driver waaaay old?\n\n\nFakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;\nShowSystemTables=0;ConnSettings=;Fetch=100;\n\n|| Fetch great for OLTP, lousy for batch?\n\n\nSocket=4096;UnknownSizes=0;MaxVarcharSize=254;MaxLongVarcharSize=8190;\n\n|| what ARE the datatypes and sizes in your particular case? \n\nDebug=0;\n\n|| a run with debug=1 probably would spit up something interesting....\n\nCommLog=0;Optimizer=1;\n\n|| Optimizer? that's a new one on me....\n\nKsqo=1;UseDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAs\nChar=1;Parse=0;CancelAsFreeStmt=;ExtraSysTablePrefixes=dd_;LFConversion=1;Up\ndatableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinar\ny=0;UseServerSidePrepare=0\"\n\n\n|| that's about all I can see, prima facie. I'll be very curious to know\nif ODBC is\n any part of your performance equation. \n\n\nHTH, \n\nRoss\n\n-----Original Message-----\nFrom: Joel Fradkin [mailto:[email protected]] \nSent: Thursday, April 21, 2005 10:54 AM\nTo: Mohan, Ross\nCc: [email protected]; PostgreSQL Perform\nSubject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n\n\nHere is the connect string I am using.\nIt could be horrid as I cut it from ODBC program.\n\nSession(\"StringConn\") =\n\"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=;\nPWD=;ReadOnly=0;Protocol=6.4;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;\nShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVar\ncharSize=254;MaxLongVarcharSize=8190;Debug=0;CommLog=0;Optimizer=1;Ksqo=1;Us\neDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Pa\nrse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;LFConversion=1;UpdatableC\nursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseS\nerverSidePrepare=0\"\n\nJoel Fradkin\n \n\n-----Original Message-----\nFrom: Mohan, Ross [mailto:[email protected]] \nSent: Thursday, April 21, 2005 9:42 AM\nTo: [email protected]\nSubject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n\nFWIW, ODBC has variables to tweak, as well. fetch/buffer sizes, and the\nlike. 
\n\nMaybe one of the ODBC cognoscenti here can chime in more concretely....\n\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n",
"msg_date": "Sat, 23 Apr 2005 10:18:31 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "\nHere is, how you can receive all one billion rows with\npieces of 2048 rows. This changes PostgreSQL and ODBC behaviour:\n\nChange ODBC data source configuration in the following way:\n\nFetch = 2048\nUseDeclareFetch = 1\n\nIt does not create core dumps with 32 bit computers with billions of rows!\nThis is a bit slower than fetching all rows at once. Scalability means \nsometimes\na bit less speed :(\n\nWith UseDeclareFetch=1 you might get even 150 thousands rows per second.\nWith UseDeclareFetch=0 the backend might be able to send about 200 \nthousands rows per\nsecond.\n\nSo, these high numbers come, if all the results are already in memory, \nand no disc\naccesses are needed. These are about the peak speeds with VARCHAR, \nwithout Unicode,\nwith Athlon64 home computer.\n\nWith sequential disc scan, more typical fetching\nspeed is about 50-100 thousands rows per second.\n\nPostgreSQL ODBC row fetching speed is very good.\nPerhaps with better discs, with RAID10, the current upper limit about \n200 thousands\nrows per second could be achieved??\n\nSo the in memory examples show, that the hard disc is normally\nthe bottleneck. It is on the server side.\nMy experiments are done in Linux. In Windows, the speed might be a bit \ndifferent\nby a constant factor (algorithmically).\n\nThese speeds depend on very many factos even on sequential scan.\nODBC speed is affected by the number of columns fetched and the types of \nthe columns.\nIntegers are processed faster than textual or date columns.\n\nThe network latency is decreased with UseDeclareFetc=1 by increasing the \nFetch=2048\nparameter: With Fetch=1 you get a bad performance with lots of rows, but \nif you fetch\nmore data from the server once per 2048 rows, the network latency \naffects only once for\nthe 2048 row block.\n\nRegards,\nMarko Ristola\n\nJoel Fradkin wrote:\n\n>Hate to be dumb, but unfortunately I am.\n>\n>Could you give me an idea what I should be using, or is there a good\n>resource for me to check out.\n>I have been spending so much time with config and moving data, converting\n>etc, I never looked at the odbc settings (didn't even think about it until\n>Josh brought it up). I did ask him for his advice, but would love a second\n>opinion.\n>\n>Our data is a bit of a mixture, some records have text items most are\n>varchars and integers with a bit of Booleans mixed in.\n>\n>I am running 8.0.2 so not sure if the protocol is ODBC or Postgres?\n>\n>Thanks for responding I appreciate any help \n>\n>Joel Fradkin\n> \n>-----Original Message-----\n>From: [email protected]\n>[mailto:[email protected]] On Behalf Of Mohan, Ross\n>Sent: Thursday, April 21, 2005 10:01 AM\n>To: [email protected]\n>Subject: Re: [ODBC] [PERFORM] Joel's Performance Issues WAS : Opteron vs\n>Xeon\n>\n>Joel, thanks. A couple of things jump out there for\n>me, not a problem for a routine ODBC connection, but\n>perhaps in the \"lotsa stuff\" context of your current\n>explorations, it might be relevant?\n>\n>I am completely shooting from the hip, here, but...if\n>it were my goose to cook, I'd be investigating\n>\n>Session(\"StringConn\") =\n>\"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=;\n>PWD=;ReadOnly=0;Protocol=6.4;\n>\n>|| Protocol? Is this related to version? 
is the driver waaaay old?\n>\n>\n>FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;\n>ShowSystemTables=0;ConnSettings=;Fetch=100;\n>\n>|| Fetch great for OLTP, lousy for batch?\n>\n>\n>Socket=4096;UnknownSizes=0;MaxVarcharSize=254;MaxLongVarcharSize=8190;\n>\n>|| what ARE the datatypes and sizes in your particular case? \n>\n>Debug=0;\n>\n>|| a run with debug=1 probably would spit up something interesting....\n>\n>CommLog=0;Optimizer=1;\n>\n>|| Optimizer? that's a new one on me....\n>\n>Ksqo=1;UseDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAs\n>Char=1;Parse=0;CancelAsFreeStmt=;ExtraSysTablePrefixes=dd_;LFConversion=1;Up\n>datableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinar\n>y=0;UseServerSidePrepare=0\"\n>\n>\n>|| that's about all I can see, prima facie. I'll be very curious to know\n>if ODBC is\n> any part of your performance equation. \n>\n>\n>HTH, \n>\n>Ross\n>\n>-----Original Message-----\n>From: Joel Fradkin [mailto:[email protected]] \n>Sent: Thursday, April 21, 2005 10:54 AM\n>To: Mohan, Ross\n>Cc: [email protected]; PostgreSQL Perform\n>Subject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n>\n>\n>Here is the connect string I am using.\n>It could be horrid as I cut it from ODBC program.\n>\n>Session(\"StringConn\") =\n>\"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=;\n>PWD=;ReadOnly=0;Protocol=6.4;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;\n>ShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVar\n>charSize=254;MaxLongVarcharSize=8190;Debug=0;CommLog=0;Optimizer=1;Ksqo=1;Us\n>eDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Pa\n>rse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;LFConversion=1;UpdatableC\n>ursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseS\n>erverSidePrepare=0\"\n>\n>Joel Fradkin\n> \n>\n>-----Original Message-----\n>From: Mohan, Ross [mailto:[email protected]] \n>Sent: Thursday, April 21, 2005 9:42 AM\n>To: [email protected]\n>Subject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n>\n>FWIW, ODBC has variables to tweak, as well. fetch/buffer sizes, and the\n>like. \n>\n>Maybe one of the ODBC cognoscenti here can chime in more concretely....\n>\n>\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n",
"msg_date": "Sun, 24 Apr 2005 10:15:03 +0300",
"msg_from": "Marko Ristola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon"
},
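The same two options Marko describes, shown in the connection-string form Joel is already using. Only the fetch-related keys are spelled out here; everything else can stay as in his original string, and the host and database name are simply copied from it.

    DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UseDeclareFetch=1;Fetch=2048;...

With UseDeclareFetch=1 the driver wraps the query in a cursor and pulls Fetch rows at a time, so the client never tries to hold the whole result set in memory; the trade-off is an extra round trip per block, which is why Marko suggests a block size in the thousands rather than the default 100.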
{
"msg_contents": "Thanks we will try that, we are working on a test suit for the way our app\ngets data (ODBC).\nwe plan to include updates, inserts, and selects and all three at once with\na log of the results.\nThen we should use a stress test tool to see how it works with multiple\ninstances (I used Microsoft's tool last time I did stress testing).\n\nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n-----Original Message-----\nFrom: Marko Ristola [mailto:[email protected]] \nSent: Sunday, April 24, 2005 2:15 AM\nTo: Joel Fradkin\nCc: 'Mohan, Ross'; [email protected];\[email protected]\nSubject: Re: [ODBC] [PERFORM] Joel's Performance Issues WAS : Opteron vs\nXeon\n\n\nHere is, how you can receive all one billion rows with\npieces of 2048 rows. This changes PostgreSQL and ODBC behaviour:\n\nChange ODBC data source configuration in the following way:\n\nFetch = 2048\nUseDeclareFetch = 1\n\nIt does not create core dumps with 32 bit computers with billions of rows!\nThis is a bit slower than fetching all rows at once. Scalability means \nsometimes\na bit less speed :(\n\nWith UseDeclareFetch=1 you might get even 150 thousands rows per second.\nWith UseDeclareFetch=0 the backend might be able to send about 200 \nthousands rows per\nsecond.\n\nSo, these high numbers come, if all the results are already in memory, \nand no disc\naccesses are needed. These are about the peak speeds with VARCHAR, \nwithout Unicode,\nwith Athlon64 home computer.\n\nWith sequential disc scan, more typical fetching\nspeed is about 50-100 thousands rows per second.\n\nPostgreSQL ODBC row fetching speed is very good.\nPerhaps with better discs, with RAID10, the current upper limit about \n200 thousands\nrows per second could be achieved??\n\nSo the in memory examples show, that the hard disc is normally\nthe bottleneck. It is on the server side.\nMy experiments are done in Linux. In Windows, the speed might be a bit \ndifferent\nby a constant factor (algorithmically).\n\nThese speeds depend on very many factos even on sequential scan.\nODBC speed is affected by the number of columns fetched and the types of \nthe columns.\nIntegers are processed faster than textual or date columns.\n\nThe network latency is decreased with UseDeclareFetc=1 by increasing the \nFetch=2048\nparameter: With Fetch=1 you get a bad performance with lots of rows, but \nif you fetch\nmore data from the server once per 2048 rows, the network latency \naffects only once for\nthe 2048 row block.\n\nRegards,\nMarko Ristola\n\nJoel Fradkin wrote:\n\n>Hate to be dumb, but unfortunately I am.\n>\n>Could you give me an idea what I should be using, or is there a good\n>resource for me to check out.\n>I have been spending so much time with config and moving data, converting\n>etc, I never looked at the odbc settings (didn't even think about it until\n>Josh brought it up). 
I did ask him for his advice, but would love a second\n>opinion.\n>\n>Our data is a bit of a mixture, some records have text items most are\n>varchars and integers with a bit of Booleans mixed in.\n>\n>I am running 8.0.2 so not sure if the protocol is ODBC or Postgres?\n>\n>Thanks for responding I appreciate any help \n>\n>Joel Fradkin\n> \n>-----Original Message-----\n>From: [email protected]\n>[mailto:[email protected]] On Behalf Of Mohan, Ross\n>Sent: Thursday, April 21, 2005 10:01 AM\n>To: [email protected]\n>Subject: Re: [ODBC] [PERFORM] Joel's Performance Issues WAS : Opteron vs\n>Xeon\n>\n>Joel, thanks. A couple of things jump out there for\n>me, not a problem for a routine ODBC connection, but\n>perhaps in the \"lotsa stuff\" context of your current\n>explorations, it might be relevant?\n>\n>I am completely shooting from the hip, here, but...if\n>it were my goose to cook, I'd be investigating\n>\n>Session(\"StringConn\") =\n>\"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=\n;\n>PWD=;ReadOnly=0;Protocol=6.4;\n>\n>|| Protocol? Is this related to version? is the driver waaaay old?\n>\n>\n>FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;\n>ShowSystemTables=0;ConnSettings=;Fetch=100;\n>\n>|| Fetch great for OLTP, lousy for batch?\n>\n>\n>Socket=4096;UnknownSizes=0;MaxVarcharSize=254;MaxLongVarcharSize=8190;\n>\n>|| what ARE the datatypes and sizes in your particular case? \n>\n>Debug=0;\n>\n>|| a run with debug=1 probably would spit up something interesting....\n>\n>CommLog=0;Optimizer=1;\n>\n>|| Optimizer? that's a new one on me....\n>\n>Ksqo=1;UseDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsA\ns\n>Char=1;Parse=0;CancelAsFreeStmt=;ExtraSysTablePrefixes=dd_;LFConversion=1;U\np\n>datableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBina\nr\n>y=0;UseServerSidePrepare=0\"\n>\n>\n>|| that's about all I can see, prima facie. I'll be very curious to know\n>if ODBC is\n> any part of your performance equation. \n>\n>\n>HTH, \n>\n>Ross\n>\n>-----Original Message-----\n>From: Joel Fradkin [mailto:[email protected]] \n>Sent: Thursday, April 21, 2005 10:54 AM\n>To: Mohan, Ross\n>Cc: [email protected]; PostgreSQL Perform\n>Subject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n>\n>\n>Here is the connect string I am using.\n>It could be horrid as I cut it from ODBC program.\n>\n>Session(\"StringConn\") =\n>\"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=\n;\n>PWD=;ReadOnly=0;Protocol=6.4;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0\n;\n>ShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVa\nr\n>charSize=254;MaxLongVarcharSize=8190;Debug=0;CommLog=0;Optimizer=1;Ksqo=1;U\ns\n>eDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;P\na\n>rse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;LFConversion=1;Updatable\nC\n>ursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;Use\nS\n>erverSidePrepare=0\"\n>\n>Joel Fradkin\n> \n>\n>-----Original Message-----\n>From: Mohan, Ross [mailto:[email protected]] \n>Sent: Thursday, April 21, 2005 9:42 AM\n>To: [email protected]\n>Subject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n>\n>FWIW, ODBC has variables to tweak, as well. fetch/buffer sizes, and the\n>like. 
\n>\n>Maybe one of the ODBC cognoscenti here can chime in more concretely....\n>\n>\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n",
"msg_date": "Mon, 25 Apr 2005 09:08:52 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Tried changing the settings and saw no change in a test using asp.\nThe test does several selects on views and tables.\nIt actually seemed to take a bit longer.\n\nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n-----Original Message-----\nFrom: Marko Ristola [mailto:[email protected]] \nSent: Sunday, April 24, 2005 2:15 AM\nTo: Joel Fradkin\nCc: 'Mohan, Ross'; [email protected];\[email protected]\nSubject: Re: [ODBC] [PERFORM] Joel's Performance Issues WAS : Opteron vs\nXeon\n\n\nHere is, how you can receive all one billion rows with\npieces of 2048 rows. This changes PostgreSQL and ODBC behaviour:\n\nChange ODBC data source configuration in the following way:\n\nFetch = 2048\nUseDeclareFetch = 1\n\nIt does not create core dumps with 32 bit computers with billions of rows!\nThis is a bit slower than fetching all rows at once. Scalability means \nsometimes\na bit less speed :(\n\nWith UseDeclareFetch=1 you might get even 150 thousands rows per second.\nWith UseDeclareFetch=0 the backend might be able to send about 200 \nthousands rows per\nsecond.\n\nSo, these high numbers come, if all the results are already in memory, \nand no disc\naccesses are needed. These are about the peak speeds with VARCHAR, \nwithout Unicode,\nwith Athlon64 home computer.\n\nWith sequential disc scan, more typical fetching\nspeed is about 50-100 thousands rows per second.\n\nPostgreSQL ODBC row fetching speed is very good.\nPerhaps with better discs, with RAID10, the current upper limit about \n200 thousands\nrows per second could be achieved??\n\nSo the in memory examples show, that the hard disc is normally\nthe bottleneck. It is on the server side.\nMy experiments are done in Linux. In Windows, the speed might be a bit \ndifferent\nby a constant factor (algorithmically).\n\nThese speeds depend on very many factos even on sequential scan.\nODBC speed is affected by the number of columns fetched and the types of \nthe columns.\nIntegers are processed faster than textual or date columns.\n\nThe network latency is decreased with UseDeclareFetc=1 by increasing the \nFetch=2048\nparameter: With Fetch=1 you get a bad performance with lots of rows, but \nif you fetch\nmore data from the server once per 2048 rows, the network latency \naffects only once for\nthe 2048 row block.\n\nRegards,\nMarko Ristola\n\nJoel Fradkin wrote:\n\n>Hate to be dumb, but unfortunately I am.\n>\n>Could you give me an idea what I should be using, or is there a good\n>resource for me to check out.\n>I have been spending so much time with config and moving data, converting\n>etc, I never looked at the odbc settings (didn't even think about it until\n>Josh brought it up). 
I did ask him for his advice, but would love a second\n>opinion.\n>\n>Our data is a bit of a mixture, some records have text items most are\n>varchars and integers with a bit of Booleans mixed in.\n>\n>I am running 8.0.2 so not sure if the protocol is ODBC or Postgres?\n>\n>Thanks for responding I appreciate any help \n>\n>Joel Fradkin\n> \n>-----Original Message-----\n>From: [email protected]\n>[mailto:[email protected]] On Behalf Of Mohan, Ross\n>Sent: Thursday, April 21, 2005 10:01 AM\n>To: [email protected]\n>Subject: Re: [ODBC] [PERFORM] Joel's Performance Issues WAS : Opteron vs\n>Xeon\n>\n>Joel, thanks. A couple of things jump out there for\n>me, not a problem for a routine ODBC connection, but\n>perhaps in the \"lotsa stuff\" context of your current\n>explorations, it might be relevant?\n>\n>I am completely shooting from the hip, here, but...if\n>it were my goose to cook, I'd be investigating\n>\n>Session(\"StringConn\") =\n>\"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=\n;\n>PWD=;ReadOnly=0;Protocol=6.4;\n>\n>|| Protocol? Is this related to version? is the driver waaaay old?\n>\n>\n>FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;\n>ShowSystemTables=0;ConnSettings=;Fetch=100;\n>\n>|| Fetch great for OLTP, lousy for batch?\n>\n>\n>Socket=4096;UnknownSizes=0;MaxVarcharSize=254;MaxLongVarcharSize=8190;\n>\n>|| what ARE the datatypes and sizes in your particular case? \n>\n>Debug=0;\n>\n>|| a run with debug=1 probably would spit up something interesting....\n>\n>CommLog=0;Optimizer=1;\n>\n>|| Optimizer? that's a new one on me....\n>\n>Ksqo=1;UseDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsA\ns\n>Char=1;Parse=0;CancelAsFreeStmt=;ExtraSysTablePrefixes=dd_;LFConversion=1;U\np\n>datableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBina\nr\n>y=0;UseServerSidePrepare=0\"\n>\n>\n>|| that's about all I can see, prima facie. I'll be very curious to know\n>if ODBC is\n> any part of your performance equation. \n>\n>\n>HTH, \n>\n>Ross\n>\n>-----Original Message-----\n>From: Joel Fradkin [mailto:[email protected]] \n>Sent: Thursday, April 21, 2005 10:54 AM\n>To: Mohan, Ross\n>Cc: [email protected]; PostgreSQL Perform\n>Subject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n>\n>\n>Here is the connect string I am using.\n>It could be horrid as I cut it from ODBC program.\n>\n>Session(\"StringConn\") =\n>\"DRIVER={PostgreSQL};DATABASE=wazagua;SERVER=192.168.123.252;PORT=5432;UID=\n;\n>PWD=;ReadOnly=0;Protocol=6.4;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0\n;\n>ShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVa\nr\n>charSize=254;MaxLongVarcharSize=8190;Debug=0;CommLog=0;Optimizer=1;Ksqo=1;U\ns\n>eDeclareFetch=0;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;P\na\n>rse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;LFConversion=1;Updatable\nC\n>ursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;Use\nS\n>erverSidePrepare=0\"\n>\n>Joel Fradkin\n> \n>\n>-----Original Message-----\n>From: Mohan, Ross [mailto:[email protected]] \n>Sent: Thursday, April 21, 2005 9:42 AM\n>To: [email protected]\n>Subject: RE: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n>\n>FWIW, ODBC has variables to tweak, as well. fetch/buffer sizes, and the\n>like. 
\n>\n>Maybe one of the ODBC cognoscenti here can chime in more concretely....\n>\n>\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n",
"msg_date": "Mon, 25 Apr 2005 10:07:39 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon"
}
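For readers following Marko's explanation above, the UseDeclareFetch/Fetch settings roughly correspond to the driver running the query through a server-side cursor and pulling the result in fixed-size blocks. A loose sketch of that idea in plain SQL; the cursor and table names are placeholders, not anything the driver actually uses:

    BEGIN;
    DECLARE odbc_block_cur CURSOR FOR
        SELECT * FROM some_large_table;    -- placeholder for the application's query
    FETCH 2048 FROM odbc_block_cur;        -- one block per round trip (Fetch = 2048 above)
    -- ...repeated until FETCH returns no rows...
    CLOSE odbc_block_cur;
    COMMIT;

The trade-off Marko describes (slightly lower raw speed, much better memory behaviour) comes from paying the network latency once per block instead of buffering the whole result at once.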
] |
[
{
"msg_contents": "All,\n\nRunning PostgreSQL 7.4.2, Solaris.\n\nClient is reporting that the size of an index is\ngreater than the number of rows in the table (1.9\nmillion vs. 1.5 million). Index was automatically\ncreated from a 'bigserial unique' column.\n\nDatabase contains several tables with exactly the same\ncolumns (including 'bigserial unique' column). This\nis the only table where this index is out of line with\nthe actual # of rows. \n\nQueries on this table take 40 seconds to retrieve 2000\nrows as opposed to 1-2 seconds on the other tables.\n\nWe have been running 'VACUUM ANALYZE' very regularly. \nIn fact, our vacuum schedule has probably been\noverkill. We have been running on a per-table basis\nafter every update (many per day, only inserts\noccurring) and after every purge (one per day,\ndeleting a day's worth of data). \n\nIt is theoretically possible that at some time a\nprocess was run that deleted all rows in the table\nfollowed by a VACUUM FULL. In this case we would have\ndropped/recreated our own indexes on the table but not\nthe index automatically created for the bigserial\ncolumn. If that happened, could that cause these\nsymptoms?\n\nWhat about if an out-of-the-ordinary number of rows\nwere deleted (say 75% of rows in the table, as opposed\nto normal 5%) followed by a 'VACUUM ANALYZE'? Could\nthings get out of whack because of that situation?\n\nthanks,\n\nBill\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Thu, 21 Apr 2005 10:00:18 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index bloat problem?"
},
{
"msg_contents": "Bill,\n\n> What about if an out-of-the-ordinary number of rows\n> were deleted (say 75% of rows in the table, as opposed\n> to normal 5%) followed by a 'VACUUM ANALYZE'? Could\n> things get out of whack because of that situation?\n\nYes. You'd want to run REINDEX after and event like that. As you should now.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 21 Apr 2005 10:22:03 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
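A minimal sketch of the maintenance step Josh recommends, assuming a hypothetical table named events with ordinary indexes; the names and the purge condition are illustrative only:

    DELETE FROM events WHERE event_time < '2005-01-01';   -- the unusually large purge
    VACUUM ANALYZE events;   -- records freed space in the FSM and refreshes statistics
    REINDEX TABLE events;    -- rebuilds all indexes on the table; note it takes an exclusive lock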
{
"msg_contents": "Is:\n\nREINDEX DATABASE blah\n\nsupposed to rebuild all indices in the database, or must you specify\neach table individualy? (I'm asking because I just tried it and it\nonly did system tables)\n\nAlex Turner\nnetEconomist\n\nOn 4/21/05, Josh Berkus <[email protected]> wrote:\n> Bill,\n> \n> > What about if an out-of-the-ordinary number of rows\n> > were deleted (say 75% of rows in the table, as opposed\n> > to normal 5%) followed by a 'VACUUM ANALYZE'? Could\n> > things get out of whack because of that situation?\n> \n> Yes. You'd want to run REINDEX after and event like that. As you should now.\n> \n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>\n",
"msg_date": "Thu, 21 Apr 2005 13:33:28 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "[email protected] (Josh Berkus) writes:\n> Bill,\n>\n>> What about if an out-of-the-ordinary number of rows\n>> were deleted (say 75% of rows in the table, as opposed\n>> to normal 5%) followed by a 'VACUUM ANALYZE'? �Could\n>> things get out of whack because of that situation?\n>\n> Yes. You'd want to run REINDEX after and event like that. As you should now.\n\nBased on Tom's recent comments, I'd be inclined to handle this via\ndoing a CLUSTER, which has the \"triple heroism effect\" of:\n\n a) Reorganizing the entire table to conform with the relevant index order,\n b) Having the effect of VACUUM FULL, and\n c) Having the effect of REINDEX\n\nall in one command.\n\nIt has all of the \"oops, that blocked me for 20 minutes\" effect of\nREINDEX and VACUUM FULL, but at least it doesn't have the effect\ntwice...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www.ntlug.org/~cbbrowne/sap.html\nRules of the Evil Overlord #78. \"I will not tell my Legions of Terror\n\"And he must be taken alive!\" The command will be: ``And try to take\nhim alive if it is reasonably practical.''\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Thu, 21 Apr 2005 13:47:24 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
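For reference, the CLUSTER route Chris describes looks roughly like this on 7.4/8.0, with hypothetical index and table names:

    CLUSTER events_pkey ON events;   -- rewrites the table in events_pkey order and rebuilds its indexes
    ANALYZE events;                  -- CLUSTER does not refresh planner statistics, so analyze afterwards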
{
"msg_contents": "Alex,\n\n> REINDEX DATABASE blah\n>\n> supposed to rebuild all indices in the database, or must you specify\n> each table individualy? (I'm asking because I just tried it and it\n> only did system tables)\n\n\"DATABASE\n\n Recreate all system indexes of a specified database. Indexes on user tables \nare not processed. Also, indexes on shared system catalogs are skipped except \nin stand-alone mode (see below). \"\n\nhttp://www.postgresql.org/docs/8.0/static/sql-reindex.html\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 21 Apr 2005 10:50:29 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
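Since REINDEX DATABASE only rebuilds system indexes, per the documentation quoted above, user indexes have to be reindexed by name. A hedged sketch with placeholder names, plus one way to generate the statements from the catalog:

    REINDEX TABLE events;        -- every index on one table
    REINDEX INDEX events_pkey;   -- or a single index

    -- generate per-table REINDEX statements for the public schema
    SELECT 'REINDEX TABLE ' || relname || ';'
    FROM pg_class
    WHERE relkind = 'r'
      AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public');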
{
"msg_contents": "Quoting Bill Chandler <[email protected]>:\n\n> Running PostgreSQL 7.4.2, Solaris.\n> Client is reporting that the size of an index is\n> greater than the number of rows in the table (1.9\n> million vs. 1.5 million). Index was automatically\n> created from a 'bigserial unique' column.\n\n> We have been running 'VACUUM ANALYZE' very regularly. \n> In fact, our vacuum schedule has probably been\n> overkill. We have been running on a per-table basis\n> after every update (many per day, only inserts\n> occurring) and after every purge (one per day,\n> deleting a day's worth of data). \n> \n> What about if an out-of-the-ordinary number of rows\n> were deleted (say 75% of rows in the table, as opposed\n> to normal 5%) followed by a 'VACUUM ANALYZE'? Could\n> things get out of whack because of that situation?\n\nI gather you mean, out-of-the-ordinary for most apps, but not for this client?\n\nIn case nobody else has asked: is your max_fsm_pages big enough to handle all\nthe deleted pages, across ALL tables hit by the purge? If not, you're\nhaemorrhaging pages, and VACUUM is probably warning you about exactly that.\n\nIf that's not a problem, you might want to consider partitioning the data.\nTake a look at inherited tables. For me, they're a good approximation of\nclustered indexes (sigh, miss'em) and equivalent to table spaces.\n\nMy app is in a similar boat to yours: up to 1/3 of a 10M-row table goes away\nevery day. For each of the child tables that is a candidate to be dropped, there\nis a big prologue txn, whichs moves (INSERT then DELETE) the good rows into a\nchild table that is NOT to be dropped. Then BANG pull the plug on the tables you\ndon't want. MUCH faster than DELETE: the dropped tables' files' disk space goes\naway in one shot, too.\n\nJust my 2c.\n\n",
"msg_date": "Thu, 21 Apr 2005 11:41:48 -0700",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
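A compressed sketch of the inheritance scheme Mischa describes; every name here is invented for illustration, and on 7.4/8.0 the benefit is simply that DROP TABLE returns the old partition's disk space in one shot instead of leaving dead rows for VACUUM:

    CREATE TABLE events_parent (obj_id bigint, lds timestamp, payload text);
    CREATE TABLE events_20050421 () INHERITS (events_parent);
    CREATE TABLE events_20050422 () INHERITS (events_parent);

    -- nightly purge: move the rows worth keeping, then drop the whole child table
    INSERT INTO events_20050422
        SELECT * FROM events_20050421 WHERE obj_id IN (SELECT obj_id FROM keep_list);
    DROP TABLE events_20050421;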
{
"msg_contents": "Bill Chandler <[email protected]> writes:\n> Client is reporting that the size of an index is\n> greater than the number of rows in the table (1.9\n> million vs. 1.5 million).\n\nThis thread seems to have wandered away without asking the critical\nquestion \"what did you mean by that?\"\n\nIt's not possible for an index to have more rows than there are in\nthe table unless something is seriously broken. And there aren't\nany SQL operations that let you inspect an index directly anyway.\nSo: what is the actual observation that led you to the above\nconclusion? Facts, please, not inferences.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Apr 2005 01:57:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem? "
},
{
"msg_contents": "On 22 Apr 2005, at 06:57, Tom Lane wrote:\n> Bill Chandler <[email protected]> writes:\n>> Client is reporting that the size of an index is\n>> greater than the number of rows in the table (1.9\n>> million vs. 1.5 million).\n>\n> This thread seems to have wandered away without asking the critical\n> question \"what did you mean by that?\"\n>\n> It's not possible for an index to have more rows than there are in\n> the table unless something is seriously broken. And there aren't\n> any SQL operations that let you inspect an index directly anyway.\n> So: what is the actual observation that led you to the above\n> conclusion? Facts, please, not inferences.\n\nI work for the client in question. Glad you picked up on that point. I \ncovered the detail in my my post \"How can an index be larger than a \ntable\" on 21 Apr. 2005. I guess I was too detailed, and too much info \nput people off.\nhttp://archives.postgresql.org/pgsql-performance/2005-04/msg00553.php\n\nquoting from there...\n\n|\n|SELECT relname, relkind, reltuples, relpages FROM pg_class WHERE\nrelname LIKE 'dave_data%';\n|\n|relname relkind reltuples relpages\n|======================================= ======= ========= ========\n|dave_data_update_events r 1593600.0 40209\n|dave_data_update_events_event_id_key i 1912320.0 29271\n|dave_data_update_events_event_id_seq S 1.0 1\n|dave_data_update_events_lds_idx i 1593600.0 6139\n|dave_data_update_events_obj_id_idx i 1593600.0 6139\n|iso_pjm_data_update_events_obj_id_idx i 1593600.0 6139\n|\n\nNote that there are only 1593600 rows in the table, so why the 1912320\nfigure?\n\nOf course I checked that the row count was correct...\n\n|\n|EXPLAIN ANALYZE\n|select count(*) from iso_pjm_data_update_events\n|\n|QUERY PLAN\n|Aggregate (cost=60129.00..60129.00 rows=1 width=0) (actual\ntime=35933.292..35933.293 rows=1 loops=1)\n| -> Seq Scan on iso_pjm_data_update_events (cost=0.00..56145.00\nrows=1593600 width=0) (actual time=0.213..27919.497 rows=1593600\nloops=1)\n|Total runtime: 35933.489 ms\n|\n\nand...\n\n|\n|select count(*) from iso_pjm_data_update_events\n|\n|count\n|1593600\n|\n\nso it's not that there are any undeleted rows lying around\n\n",
"msg_date": "Fri, 22 Apr 2005 09:44:05 +0100",
"msg_from": "David Roussel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem? "
},
{
"msg_contents": "David Roussel <[email protected]> writes:\n> |dave_data_update_events r 1593600.0 40209\n> |dave_data_update_events_event_id_key i 1912320.0 29271\n\nHmm ... what PG version is this, and what does VACUUM VERBOSE on\nthat table show?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Apr 2005 10:06:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem? "
},
{
"msg_contents": "On Fri, 22 Apr 2005 10:06:33 -0400, \"Tom Lane\" <[email protected]> said:\n> David Roussel <[email protected]> writes:\n> > |dave_data_update_events r 1593600.0 40209\n> > |dave_data_update_events_event_id_key i 1912320.0 29271\n> \n> Hmm ... what PG version is this, and what does VACUUM VERBOSE on\n> that table show?\n\nPG 7.4\n\nThe disparity seems to have sorted itself out now, so hampering futher\ninvestigations. I guess the regular inserts of new data, and the nightly\ndeletion and index recreation did it. However, we did suffer reduced\nperformance and the strange cardinality for several days before it went \naway. For what it's worth..\n\nndb=# vacuum verbose iso_pjm_data_update_events;\nINFO: vacuuming \"public.iso_pjm_data_update_events\"\nINFO: index \"iso_pjm_data_update_events_event_id_key\" now contains\n1912320 row versions in 29271 pages\nDETAIL: 21969 index pages have been deleted, 20000 are currently\nreusable.\nCPU 6.17s/0.88u sec elapsed 32.55 sec.\nINFO: index \"iso_pjm_data_update_events_lds_idx\" now contains 1912320\nrow versions in 7366 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 3.52s/0.57u sec elapsed 14.35 sec.\nINFO: index \"iso_pjm_data_update_events_obj_id_idx\" now contains\n1912320 row versions in 7366 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 3.57s/0.58u sec elapsed 12.87 sec.\nINFO: \"iso_pjm_data_update_events\": found 0 removable, 1912320\nnonremovable row versions in 40209 pages\nDETAIL: 159384 dead row versions cannot be removed yet.\nThere were 745191 unused item pointers.\n0 pages are entirely empty.\nCPU 18.26s/3.62u sec elapsed 74.35 sec.\nVACUUM\n\nAfter each insert is does this...\n\nVACUUM ANALYZE iso_pjm_DATA_UPDATE_EVENTS\nVACUUM ANALYZE iso_pjm_CONTROL\n\nEach night it does this...\n\nBEGIN\nDROP INDEX iso_pjm_control_obj_id_idx\nDROP INDEX iso_pjm_control_real_name_idx\nDROP INDEX iso_pjm_data_update_events_lds_idx\nDROP INDEX iso_pjm_data_update_events_obj_id_idx\nCREATE UNIQUE INDEX iso_pjm_control_obj_id_idx ON\niso_pjm_control(obj_id)\nCLUSTER iso_pjm_control_obj_id_idx ON iso_pjm_control\nCREATE UNIQUE INDEX iso_pjm_control_real_name_idx ON\niso_pjm_control(real_name)\nCREATE INDEX iso_pjm_data_update_events_lds_idx ON\niso_pjm_data_update_events(lds)\nCREATE INDEX iso_pjm_data_update_events_obj_id_idx ON\niso_pjm_data_update_events(obj_id)\nCOMMIT\n\nNote there is no reference to iso_pjm_data_update_events_event_id_key\nwhich is the index that went wacky on us. Does that seem weird to you?\n\nThanks\n\nDavid\n",
"msg_date": "Fri, 22 Apr 2005 17:16:36 +0100",
"msg_from": "\"David Roussel\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "\"David Roussel\" <[email protected]> writes:\n> Note there is no reference to iso_pjm_data_update_events_event_id_key\n> which is the index that went wacky on us. Does that seem weird to you?\n\nWhat that says is that that index doesn't belong to that table. You\nsure it wasn't a chance coincidence of names that made you think it did?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Apr 2005 13:28:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem? "
}
] |
[
{
"msg_contents": "\n--- Josh Berkus <[email protected]> wrote:\n> Bill,\n> \n> > What about if an out-of-the-ordinary number of\n> rows\n> > were deleted (say 75% of rows in the table, as\n> opposed\n> > to normal 5%) followed by a 'VACUUM ANALYZE'?\n> ���Could\n> > things get out of whack because of that situation?\n> \n> Yes. You'd want to run REINDEX after and event like\n> that. As you should now.\n> \n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n\nThank you. Though I must say, that is very\ndiscouraging. REINDEX is a costly operation, timewise\nand due to the fact that it locks out other processes\nfrom proceeding. Updates are constantly coming in and\nqueries are occurring continuously. A REINDEX could\npotentially bring the whole thing to a halt.\n\nHonestly, this seems like an inordinate amount of\nbabysitting for a production application. I'm not\nsure if the client will be willing to accept it. \n\nAdmittedly my knowledge of the inner workings of an\nRDBMS is limited, but could somebody explain to me why\nthis would be so? If you delete a bunch of rows why\ndoesn't the index get updated at the same time? Is\nthis a common issue among all RDBMSs or is it\nsomething that is PostgreSQL specific? Is there any\nway around it?\n\nthanks,\n\nBill\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Thu, 21 Apr 2005 10:38:55 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "Bill,\n\n> Honestly, this seems like an inordinate amount of\n> babysitting for a production application. I'm not\n> sure if the client will be willing to accept it.\n\nWell, then, tell them not to delete 75% of the rows in a table at once. I \nimagine that operation brought processing to a halt, too.\n\n> Admittedly my knowledge of the inner workings of an\n> RDBMS is limited, but could somebody explain to me why\n> this would be so? If you delete a bunch of rows why\n> doesn't the index get updated at the same time? \n\nIt does get updated. What doesn't happen is the space getting reclaimed. In \na *normal* data situation, those dead nodes would be replaced with new index \nnodes. However, a mass-delete-in-one-go messes that system up.\n\n> Is \n> this a common issue among all RDBMSs or is it\n> something that is PostgreSQL specific? \n\nSpeaking from experience, this sort of thing affects MSSQL as well, although \nthe maintenance routines are different.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 21 Apr 2005 10:42:38 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "Bill,\n\n> Honestly, this seems like an inordinate amount of\n> babysitting for a production application. I'm not\n> sure if the client will be willing to accept it.\n\nWell, then, tell them not to delete 75% of the rows in a table at once. I \nimagine that operation brought processing to a halt, too.\n\nIf the client isn't willing to accept the consequences of their own bad data \nmanagement, I'm not really sure what you expect us to do about it.\n\n> Admittedly my knowledge of the inner workings of an\n> RDBMS is limited, but could somebody explain to me why\n> this would be so? If you delete a bunch of rows why\n> doesn't the index get updated at the same time? \n\nIt does get updated. What doesn't happen is the space getting reclaimed. In \na *normal* data situation, the dead nodes are recycled for new rows. But \ndoing a massive delete operation upsets that, and generally needs to be \nfollowed by a REINDEX.\n\n> Is \n> this a common issue among all RDBMSs or is it\n> something that is PostgreSQL specific? \n\nSpeaking from experience, this sort of thing affects MSSQL as well, although \nthe maintenance routines are different.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n",
"msg_date": "Thu, 21 Apr 2005 10:44:48 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "\n>>Is \n>>this a common issue among all RDBMSs or is it\n>>something that is PostgreSQL specific? \n>> \n>>\n>\n>Speaking from experience, this sort of thing affects MSSQL as well, although \n>the maintenance routines are different.\n>\n> \n>\nYes, this is true with MSSQL too, however sql server implements a defrag \nindex that doesn't lock up the table..\n\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/tsqlref/ts_dbcc_30o9.asp\n\n\"DBCC INDEXDEFRAG can defragment clustered and nonclustered indexes on \ntables and views. DBCC INDEXDEFRAG defragments the leaf level of an \nindex so that the physical order of the pages matches the left-to-right \nlogical order of the leaf nodes, thus improving index-scanning performance.\n\n....Every five minutes, DBCC INDEXDEFRAG will report to the user an \nestimated percentage completed. DBCC INDEXDEFRAG can be terminated at \nany point in the process, and *any completed work is retained.*\"\n\n-michael\n\n",
"msg_date": "Thu, 21 Apr 2005 14:24:57 -0400",
"msg_from": "Michael Guerin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "Michael,\n\n> ....Every five minutes, DBCC INDEXDEFRAG will report to the user an\n> estimated percentage completed. DBCC INDEXDEFRAG can be terminated at\n> any point in the process, and *any completed work is retained.*\"\n\nKeen. Sounds like something for our TODO list.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 21 Apr 2005 11:28:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "Same thing happens in Oracle\n\nALTER INDEX <blah> rebuild\n\nTo force a rebuild. It will mark the free blocks as 'free' below the\nPCTFREE value for the tablespace.\n\nBasically If you build an index with 9999 entries. and each entry is\n1/4 of a block, the database will write 2500 blocks to the disk. If\nyou delete a random 75% of the index values, you will now have 2500\nblocks that have 75% free space. The database will reuse that free\nspace in those blocks as you insert new values, but until then, you\nstill have 2500 blocks worth of data on a disk, that is only 25% full.\n Rebuilding the index forces the system to physically re-allocate all\nthat data space, and now you have just 2499 entries, that use 625\nblocks.\n\nI'm not sure that 'blocks' is the correct term in postgres, it's\nsegments in Oracle, but the concept remains the same.\n\nAlex Turner\nnetEconomist\n\nOn 4/21/05, Bill Chandler <[email protected]> wrote:\n> \n> --- Josh Berkus <[email protected]> wrote:\n> > Bill,\n> >\n> > > What about if an out-of-the-ordinary number of\n> > rows\n> > > were deleted (say 75% of rows in the table, as\n> > opposed\n> > > to normal 5%) followed by a 'VACUUM ANALYZE'?\n> > Could\n> > > things get out of whack because of that situation?\n> >\n> > Yes. You'd want to run REINDEX after and event like\n> > that. As you should now.\n> >\n> > --\n> > Josh Berkus\n> > Aglio Database Solutions\n> > San Francisco\n> >\n> \n> Thank you. Though I must say, that is very\n> discouraging. REINDEX is a costly operation, timewise\n> and due to the fact that it locks out other processes\n> from proceeding. Updates are constantly coming in and\n> queries are occurring continuously. A REINDEX could\n> potentially bring the whole thing to a halt.\n> \n> Honestly, this seems like an inordinate amount of\n> babysitting for a production application. I'm not\n> sure if the client will be willing to accept it.\n> \n> Admittedly my knowledge of the inner workings of an\n> RDBMS is limited, but could somebody explain to me why\n> this would be so? If you delete a bunch of rows why\n> doesn't the index get updated at the same time? Is\n> this a common issue among all RDBMSs or is it\n> something that is PostgreSQL specific? Is there any\n> way around it?\n> \n> thanks,\n> \n> Bill\n> \n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam protection around\n> http://mail.yahoo.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>\n",
"msg_date": "Thu, 21 Apr 2005 15:12:09 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "On Thu, Apr 21, 2005 at 11:28:43AM -0700, Josh Berkus wrote:\n> Michael,\n> \n> > ....Every five minutes, DBCC INDEXDEFRAG will report to the user an\n> > estimated percentage completed. DBCC INDEXDEFRAG can be terminated at\n> > any point in the process, and *any completed work is retained.*\"\n> \n> Keen. Sounds like something for our TODO list.\n> \n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\nSee http://archives.postgresql.org/pgsql-general/2005-03/msg01465.php\nfor my thoughts on a non-blocking alternative to REINDEX. I got no\nreplies to that message. :-(\n\n\nI've almost got a working solution integrated in the backend that does\ncorrect WAL logging and everything. (Writing the code to write and\nreplay WAL logs for complicated operations can be very annoying!)\n\nFor now I've gone with a syntax of:\n\n REINDEX INDEX btree_index_name INCREMENTAL;\n\n(For now it's not a proper index AM (accessor method), instead the\ngeneric index code knows this is only supported for btrees and directly\ncalls the btree_compress function.)\n\nIt's not actually a REINDEX per-se in that it doesn't rebuild the whole\nindex. It holds brief exclusive locks on the index while it shuffles\nitems around to pack the leaf pages fuller. There were issues with the\ncode I attached to the above message that have been resolved with the\nnew code. With respect to the numbers provided in that e-mail the new\ncode also recycles more pages than before.\n\nOnce I've finished it up I'll prepare and post a patch.\n\n-- \nDave Chapeskie\nOpenPGP Key ID: 0x3D2B6B34\n",
"msg_date": "Thu, 21 Apr 2005 15:33:05 -0400",
"msg_from": "Dave Chapeskie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "Dave,\n\n> See http://archives.postgresql.org/pgsql-general/2005-03/msg01465.php\n> for my thoughts on a non-blocking alternative to REINDEX. I got no\n> replies to that message. :-(\n\nWell, sometimes you have to be pushy. Say, \"Hey, comments please?\"\n\nThe hackers list is about 75 posts a day, it's easy for people to lose track \nof stuff they meant to comment on.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 21 Apr 2005 12:55:01 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "You would be interested in\nhttp://archives.postgresql.org/pgsql-hackers/2005-04/msg00565.php\n\nOn Thu, Apr 21, 2005 at 03:33:05PM -0400, Dave Chapeskie wrote:\n> On Thu, Apr 21, 2005 at 11:28:43AM -0700, Josh Berkus wrote:\n> > Michael,\n> > \n> > > ....Every five minutes, DBCC INDEXDEFRAG will report to the user an\n> > > estimated percentage completed. DBCC INDEXDEFRAG can be terminated at\n> > > any point in the process, and *any completed work is retained.*\"\n> > \n> > Keen. Sounds like something for our TODO list.\n> > \n> > -- \n> > Josh Berkus\n> > Aglio Database Solutions\n> > San Francisco\n> \n> See http://archives.postgresql.org/pgsql-general/2005-03/msg01465.php\n> for my thoughts on a non-blocking alternative to REINDEX. I got no\n> replies to that message. :-(\n> \n> \n> I've almost got a working solution integrated in the backend that does\n> correct WAL logging and everything. (Writing the code to write and\n> replay WAL logs for complicated operations can be very annoying!)\n> \n> For now I've gone with a syntax of:\n> \n> REINDEX INDEX btree_index_name INCREMENTAL;\n> \n> (For now it's not a proper index AM (accessor method), instead the\n> generic index code knows this is only supported for btrees and directly\n> calls the btree_compress function.)\n> \n> It's not actually a REINDEX per-se in that it doesn't rebuild the whole\n> index. It holds brief exclusive locks on the index while it shuffles\n> items around to pack the leaf pages fuller. There were issues with the\n> code I attached to the above message that have been resolved with the\n> new code. With respect to the numbers provided in that e-mail the new\n> code also recycles more pages than before.\n> \n> Once I've finished it up I'll prepare and post a patch.\n> \n> -- \n> Dave Chapeskie\n> OpenPGP Key ID: 0x3D2B6B34\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 22 Apr 2005 22:12:38 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
}
] |
[
{
"msg_contents": "--- [email protected] wrote:\n> I gather you mean, out-of-the-ordinary for most\n> apps, but not for this client?\n\nActually, no. The normal activity is to delete 3-5%\nof the rows per day, followed by a VACUUM ANALYZE. \nThen over the course of the day (in multiple\ntransactions) about the same amount are INSERTed (each\ntransaction followed by a VACUUM ANALYZE on just the\nupdated table). So 75% deletion is just out of the\nordinary for this app. However, on occasion, deleting\n75% of rows is a legitimate action for the client to\ntake. It would be nice if they didn't have to\nremember to do things like REINDEX or CLUSTER or\nwhatever on just those occasions.\n \n> In case nobody else has asked: is your max_fsm_pages\n> big enough to handle all\n> the deleted pages, across ALL tables hit by the\n> purge? If not, you're\n> haemorrhaging pages, and VACUUM is probably warning\n> you about exactly that.\n\nThis parameter is most likely set incorrectly. So\nthat could be causing problems. Could that be a\nculprit for the index bloat, though?\n\n> If that's not a problem, you might want to consider\n> partitioning the data.\n> Take a look at inherited tables. For me, they're a\n> good approximation of\n> clustered indexes (sigh, miss'em) and equivalent to\n> table spaces.\n> \n> My app is in a similar boat to yours: up to 1/3 of a\n> 10M-row table goes away\n> every day. For each of the child tables that is a\n> candidate to be dropped, there\n> is a big prologue txn, whichs moves (INSERT then\n> DELETE) the good rows into a\n> child table that is NOT to be dropped. Then BANG\n> pull the plug on the tables you\n> don't want. MUCH faster than DELETE: the dropped\n> tables' files' disk space goes\n> away in one shot, too.\n> \n> Just my 2c.\n\nThanks.\n\nBill \n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Thu, 21 Apr 2005 12:03:18 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "Quoting Bill Chandler <[email protected]>:\n\n> ... The normal activity is to delete 3-5% of the rows per day,\n> followed by a VACUUM ANALYZE. \n...\n> However, on occasion, deleting 75% of rows is a \n> legitimate action for the client to take. \n\n> > In case nobody else has asked: is your max_fsm_pages\n> > big enough to handle all the deleted pages, \n> > across ALL tables hit by the purge?\n\n> This parameter is most likely set incorrectly. So\n> that could be causing problems. Could that be a\n> culprit for the index bloat, though?\n\nLook at the last few lines of vacuum verbose output.\nIt will say something like:\n\nfree space map: 55 relations, 88416 pages stored; 89184 total pages needed\n Allocated FSM size: 1000 relations + 1000000 pages = 5920 kB shared memory.\n\n\"1000000\" here is [max_fsm_pages] from my postgresql.conf.\nIf the \"total pages needed\" is bigger than the pages \nfsm is allocated for, then you are bleeding.\n-- \n\"Dreams come true, not free.\" -- S.Sondheim, ITW\n\n",
"msg_date": "Thu, 21 Apr 2005 15:15:22 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
}
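One way to act on the VACUUM VERBOSE tail Mischa quotes; the numbers in the comments are purely illustrative, not recommendations:

    SHOW max_fsm_pages;        -- current setting
    SHOW max_fsm_relations;
    -- If the "total pages needed" figure from VACUUM VERBOSE exceeds max_fsm_pages,
    -- raise it in postgresql.conf and restart, for example:
    --   max_fsm_pages = 200000
    --   max_fsm_relations = 1000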
] |
[
{
"msg_contents": "\nHi folks,\n\n\nI'm doing a simple lookup in a small table by an unique id, and I'm\nwondering, why explains tells me seqscan is used instead the key.\n\nThe table looks like:\n\n id\tbigint\t\tprimary key,\n a\tvarchar,\n b\tvarchar,\n c \tvarchar\n \nand I'm quering: select * from foo where id = 2;\n\nI've got only 15 records in this table, but I wanna have it as \nfast as possible since its used (as a map between IDs and names) \nfor larger queries.\n\n\nthx\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. -- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n",
"msg_date": "Thu, 21 Apr 2005 21:05:44 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": true,
"msg_subject": "index not used"
},
{
"msg_contents": "On Thursday 21 April 2005 12:05, Enrico Weigelt wrote:\n> Hi folks,\n>\n>\n> I'm doing a simple lookup in a small table by an unique id, and I'm\n> wondering, why explains tells me seqscan is used instead the key.\n>\n> The table looks like:\n>\n> id\tbigint\t\tprimary key,\n> a\tvarchar,\n> b\tvarchar,\n> c \tvarchar\n>\n> and I'm quering: select * from foo where id = 2;\n>\n> I've got only 15 records in this table, but I wanna have it as\n> fast as possible since its used (as a map between IDs and names)\n> for larger queries.\n\nThe over head to load the index, fetch the record in there, then check the \ntable for visibility and return the value, is far greater than just doing 15 \ncompares in the original table.\n\n\n\n>\n>\n> thx\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\n\nhttp://www.wavefire.com\nph: 250.717.0200\nfx: 250.763.1759\n",
"msg_date": "Thu, 21 Apr 2005 12:23:52 -0700",
"msg_from": "Darcy Buskermolen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index not used"
},
{
"msg_contents": "On Thu, 21 Apr 2005, Enrico Weigelt wrote:\n\n> I'm doing a simple lookup in a small table by an unique id, and I'm\n> wondering, why explains tells me seqscan is used instead the key.\n>\n> The table looks like:\n>\n> id\tbigint\t\tprimary key,\n> a\tvarchar,\n> b\tvarchar,\n> c \tvarchar\n>\n> and I'm quering: select * from foo where id = 2;\n>\n> I've got only 15 records in this table, but I wanna have it as\n> fast as possible since its used (as a map between IDs and names)\n> for larger queries.\n\nTwo general things:\n For 15 records, an index scan may not be faster. For simple tests\n you can play with enable_seqscan to see, but for more complicated\n queries it's a little harder to tell.\n If you're using a version earlier than 8.0, you'll need to quote\n or cast the value you're searching for due to problems with\n cross-type comparisons (the 2 would be treated as int4).\n",
"msg_date": "Thu, 21 Apr 2005 14:15:08 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index not used"
}
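Spelling out Stephan's two points against the poster's own example table foo (pre-8.0 servers need the cast or the quoted literal; 8.0 resolves the cross-type comparison itself):

    -- int4 literal against a bigint column: older planners will not use the index
    EXPLAIN SELECT * FROM foo WHERE id = 2;

    -- cast or quote the constant so it matches the indexed type
    EXPLAIN SELECT * FROM foo WHERE id = 2::bigint;
    EXPLAIN SELECT * FROM foo WHERE id = '2';

    -- for testing only: disable seqscans to compare the two plans
    SET enable_seqscan = off;
    EXPLAIN SELECT * FROM foo WHERE id = 2::bigint;
    SET enable_seqscan = on;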
] |
[
{
"msg_contents": "If id is PK, the query shoudl return 1 row only...\n--- Enrico Weigelt <[email protected]> wrote:\n> \n> Hi folks,\n> \n> \n> I'm doing a simple lookup in a small table by an\n> unique id, and I'm\n> wondering, why explains tells me seqscan is used\n> instead the key.\n> \n> The table looks like:\n> \n> id\tbigint\t\tprimary key,\n> a\tvarchar,\n> b\tvarchar,\n> c \tvarchar\n> \n> and I'm quering: select * from foo where id = 2;\n> \n> I've got only 15 records in this table, but I wanna\n> have it as \n> fast as possible since its used (as a map between\n> IDs and names) \n> for larger queries.\n> \n> \n> thx\n> -- \n>\n---------------------------------------------------------------------\n> Enrico Weigelt == metux IT service\n> \n> phone: +49 36207 519931 www: \n> http://www.metux.de/\n> fax: +49 36207 519932 email: \n> [email protected]\n> cellphone: +49 174 7066481\n>\n---------------------------------------------------------------------\n> -- DSL ab 0 Euro. -- statische IP -- UUCP --\n> Hosting -- Webshops --\n>\n---------------------------------------------------------------------\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nYahoo! Small Business - Try our new resources site!\nhttp://smallbusiness.yahoo.com/resources/ \n",
"msg_date": "Thu, 21 Apr 2005 13:39:27 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index not used"
}
] |
[
{
"msg_contents": "Mischa,\n\nThanks. Yes, I understand that not having a large\nenough max_fsm_pages is a problem and I think that it\nis most likely the case for the client. What I wasn't\nsure of was if the index bloat we're seeing is the\nresult of the \"bleeding\" you're talking about or\nsomething else.\n\nIf I deleted 75% of the rows but had a max_fsm_pages\nsetting that still exceeded the pages required (as\nindicated in VACUUM output), would that solve my\nindexing problem or would I still need to REINDEX\nafter such a purge?\n\nregards,\n\nBill\n\n--- Mischa Sandberg <[email protected]> wrote:\n> Quoting Bill Chandler <[email protected]>:\n> \n> > ... The normal activity is to delete 3-5% of the\n> rows per day,\n> > followed by a VACUUM ANALYZE. \n> ...\n> > However, on occasion, deleting 75% of rows is a \n> > legitimate action for the client to take. \n> \n> > > In case nobody else has asked: is your\n> max_fsm_pages\n> > > big enough to handle all the deleted pages, \n> > > across ALL tables hit by the purge?\n> \n> > This parameter is most likely set incorrectly. So\n> > that could be causing problems. Could that be a\n> > culprit for the index bloat, though?\n> \n> Look at the last few lines of vacuum verbose output.\n> It will say something like:\n> \n> free space map: 55 relations, 88416 pages stored;\n> 89184 total pages needed\n> Allocated FSM size: 1000 relations + 1000000 pages\n> = 5920 kB shared memory.\n> \n> \"1000000\" here is [max_fsm_pages] from my\n> postgresql.conf.\n> If the \"total pages needed\" is bigger than the pages\n> \n> fsm is allocated for, then you are bleeding.\n> -- \n> \"Dreams come true, not free.\" -- S.Sondheim, ITW\n> \n> \n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n",
"msg_date": "Thu, 21 Apr 2005 15:59:26 -0700 (PDT)",
"msg_from": "Bill Chandler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "Bill Chandler wrote:\n\n>Mischa,\n>\n>Thanks. Yes, I understand that not having a large\n>enough max_fsm_pages is a problem and I think that it\n>is most likely the case for the client. What I wasn't\n>sure of was if the index bloat we're seeing is the\n>result of the \"bleeding\" you're talking about or\n>something else.\n>\n>If I deleted 75% of the rows but had a max_fsm_pages\n>setting that still exceeded the pages required (as\n>indicated in VACUUM output), would that solve my\n>indexing problem or would I still need to REINDEX\n>after such a purge?\n>\n>regards,\n>\n>Bill\n>\n>\nI don't believe VACUUM re-packs indexes. It just removes empty index\npages. So if you have 1000 index pages all with 1 entry in them, vacuum\ncannot reclaim any pages. REINDEX re-packs the pages to 90% full.\n\nfsm just needs to hold enough pages that all requests have free space\nthat can be used before your next vacuum. It is just a map letting\npostgres know where space is available for a new fill.\n\nJohn\n=:->",
"msg_date": "Thu, 21 Apr 2005 18:54:08 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
},
{
"msg_contents": "Bill,\n\n> If I deleted 75% of the rows but had a max_fsm_pages\n> setting that still exceeded the pages required (as\n> indicated in VACUUM output), would that solve my\n> indexing problem or would I still need to REINDEX\n> after such a purge?\n\nDepends on the performance you're expecting. The FSM relates the the re-use \nof nodes, not taking up free space. So after you've deleted 75% of rows, \nthe index wouldn't shrink. It just wouldn't grow when you start adding rows.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 21 Apr 2005 16:56:18 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index bloat problem?"
}
] |
[
{
"msg_contents": "\nHi folks,\n\n\ndo foreign keys have any influence on performance (besides slowing\ndown huge inserts) ? do they bring any performance improvement ?\n\n\nthx\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n cellphone: +49 174 7066481\n---------------------------------------------------------------------\n -- DSL ab 0 Euro. -- statische IP -- UUCP -- Hosting -- Webshops --\n---------------------------------------------------------------------\n",
"msg_date": "Fri, 22 Apr 2005 02:06:15 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": true,
"msg_subject": "foreign key performance "
},
{
"msg_contents": "On Fri, Apr 22, 2005 at 02:06:15AM +0200, Enrico Weigelt wrote:\n\n> do foreign keys have any influence on performance (besides slowing\n> down huge inserts) ? do they bring any performance improvement ?\n\nNo. They only cause additional tables to be visited to enforce them.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Ciencias pol�ticas es la ciencia de entender por qu�\n los pol�ticos act�an como lo hacen\" (netfunny.com)\n",
"msg_date": "Thu, 21 Apr 2005 23:55:21 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: foreign key performance"
}
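A small illustration of Alvaro's point, with made-up table names; the query shown in the comment is only an approximation of what the RI triggers of that era issue:

    CREATE TABLE parent (id bigint PRIMARY KEY);
    CREATE TABLE child (
        id        bigint PRIMARY KEY,
        parent_id bigint REFERENCES parent (id)
    );
    -- Each INSERT or UPDATE on child now also runs, behind the scenes, something like
    --   SELECT 1 FROM ONLY parent WHERE id = $1 FOR UPDATE;
    -- i.e. an extra indexed lookup and row lock per modified row: pure overhead, never a speed-up.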
] |
[
{
"msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Andreas Pflug\n> Sent: 21 April 2005 14:06\n> To: Joel Fradkin\n> Cc: 'John A Meinel'; [email protected]; \n> [email protected]\n> Subject: Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n> \n> Beware!\n> From the data, I can see that you're probably using pgAdmin3.\n> The time to execute your query including transfer of all data to the \n> client is 17s in this example, while displaying it (i.e. pure GUI and \n> memory alloc stuff) takes 72s. Execute to a file to avoid this.\n\nPerhaps we should add a guruhint there for longer runtimes?\n\nRegards, dave\n",
"msg_date": "Fri, 22 Apr 2005 09:08:01 +0100",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Dave Page wrote:\n> \n> \n> \n>>-----Original Message-----\n>>From: [email protected] \n>>[mailto:[email protected]] On Behalf Of \n>>Andreas Pflug\n>>Sent: 21 April 2005 14:06\n>>To: Joel Fradkin\n>>Cc: 'John A Meinel'; [email protected]; \n>>[email protected]\n>>Subject: Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon\n>>\n>>Beware!\n>> From the data, I can see that you're probably using pgAdmin3.\n>>The time to execute your query including transfer of all data to the \n>>client is 17s in this example, while displaying it (i.e. pure GUI and \n>>memory alloc stuff) takes 72s. Execute to a file to avoid this.\n> \n> \n> Perhaps we should add a guruhint there for longer runtimes?\n\nYup, easily done as replacement for the \"max rows exceeded\" message box.\nAdded to TODO.txt.\n\n\nRegards,\nAndreas\n\n",
"msg_date": "Fri, 22 Apr 2005 09:47:31 +0000",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "I just finished testing Postgres, MYSQL, and MSSQL on my machine (2 gigs\ninternal XP).\n\nI have adjusted the postgres config to what I think is an ok place and have\nmysql default and mssql default.\n\nUsing Aqua studio a program that hooks to all three I have found:\n\n Initial exec Second exec Returning 331,640 records on all 3 database\nMSSQL 468ms 16ms 2 mins 3 secs\nMYSQL 14531ms 6625ms 2 mins 42 secs \nPostgr 52120ms 11702ms 2 mins 15 secs\n\nNot sure if this proves your point on PGadmin versus MYSQL query tool versus\nMSSQL Query tool, but it certainly seems encouraging.\n\nI am going to visit Josh's tests he wanted me to run on the LINUX server.\n \nJoel Fradkin\n \n\n\n",
"msg_date": "Fri, 22 Apr 2005 13:51:08 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "\nAre you using 8.0.2? I hope so because there were some Win32\nperformance changes related to fsync() in that release.\n\n---------------------------------------------------------------------------\n\nJoel Fradkin wrote:\n> I just finished testing Postgres, MYSQL, and MSSQL on my machine (2 gigs\n> internal XP).\n> \n> I have adjusted the postgres config to what I think is an ok place and have\n> mysql default and mssql default.\n> \n> Using Aqua studio a program that hooks to all three I have found:\n> \n> Initial exec Second exec Returning 331,640 records on all 3 database\n> MSSQL 468ms 16ms 2 mins 3 secs\n> MYSQL 14531ms 6625ms 2 mins 42 secs \n> Postgr 52120ms 11702ms 2 mins 15 secs\n> \n> Not sure if this proves your point on PGadmin versus MYSQL query tool versus\n> MSSQL Query tool, but it certainly seems encouraging.\n> \n> I am going to visit Josh's tests he wanted me to run on the LINUX server.\n> \n> Joel Fradkin\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 22 Apr 2005 14:03:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
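A couple of quick checks relevant to Bruce's question about running 8.0.2 and its Win32 fsync changes; nothing here is specific to Joel's setup:

    SELECT version();       -- confirms the exact server release
    SHOW fsync;
    SHOW wal_sync_method;   -- which sync primitive the WAL is using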
{
"msg_contents": "On Fri, Apr 22, 2005 at 01:51:08PM -0400, Joel Fradkin wrote:\n> I just finished testing Postgres, MYSQL, and MSSQL on my machine (2 gigs\n> internal XP).\n> \n> I have adjusted the postgres config to what I think is an ok place and have\n> mysql default and mssql default.\n>\n> Using Aqua studio a program that hooks to all three I have found:\n> \n> Initial exec Second exec Returning 331,640 records on all 3 database\n> MSSQL 468ms 16ms 2 mins 3 secs\n> MYSQL 14531ms 6625ms 2 mins 42 secs \n> Postgr 52120ms 11702ms 2 mins 15 secs\n\nOne further question is: is this really a meaningful test? I mean, in\nproduction are you going to query 300000 rows regularly? And is the\nsystem always going to be used by only one user? I guess the question\nis if this big select is representative of the load you expect in\nproduction.\n\nWhat happens if you execute the query more times? Do the times stay the\nsame as the second run?\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Use it up, wear it out, make it do, or do without\"\n",
"msg_date": "Fri, 22 Apr 2005 14:30:54 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Quoting Alvaro Herrera <[email protected]>:\n\n> One further question is: is this really a meaningful test? I mean, in\n> production are you going to query 300000 rows regularly? And is the\n> system always going to be used by only one user? I guess the question\n> is if this big select is representative of the load you expect in\n> production.\n\nWhile there may be some far-out queries that nobody would try,\nyou might be surprised what becomes the norm for queries,\nas soon as the engine feasibly supports them. SQL is used for\nwarehouse and olap apps, as a data queue, and as the co-ordinator\nor bridge for (non-SQL) replication apps. In all of these,\nyou see large updates, large result sets and volatile tables\n(\"large\" to me means over 20% of a table and over 1M rows).\n\nTo answer your specific question: yes, every 30 mins,\nin a data redistribution app that makes a 1M-row query, \nand writes ~1000 individual update files, of overlapping sets of rows. \nIt's the kind of operation SQL doesn't do well, \nso you have to rely on one big query to get the data out.\n\nMy 2c\n-- \n\"Dreams come true, not free.\" -- S.Sondheim, ITW\n\n",
"msg_date": "Fri, 22 Apr 2005 13:53:50 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "One further question is: is this really a meaningful test? I mean, in\nproduction are you going to query 300000 rows regularly? \n\nIt is a query snippet if you will as the view I posted for audit and case\nwhere tables are joined are more likely to be ran.\n\nJosh and I worked over this until we got explain analyze on the linux box to\n1 sec. I was just using this as a test as I don't have my views set up on\nMYSQL.\n\nSo many of my reports pull huge data sets (comprised of normalized joins).\nI am thinking I probably have to modify to using an non normalized table,\nand Josh is sending me information on using cursors instead of selects.\n\nAnd is the system always going to be used by only one user? \nNo we have 400+ concurrent users\n\nI guess the question is if this big select is representative of the load you\nexpect in production.\nYes we see many time on the two processor box running MSSQL large return\nsets using 100%cpu for 5-30 seconds.\n\nWhat happens if you execute the query more times? Do the times stay the\nsame as the second run?\nI will definitely have to pressure testing prior to going live in\nproduction. I have not done concurrent tests as honestly single user tests\nare failing, so multiple user testing is not something I need yet.\n\nJoel\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Use it up, wear it out, make it do, or do without\"\n\n",
"msg_date": "Fri, 22 Apr 2005 17:04:19 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
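Since Joel mentions that Josh is sending him information on cursors, a minimal sketch of that pattern; the view and filter names are placeholders:

    BEGIN;
    DECLARE report_cur CURSOR FOR
        SELECT * FROM case_summary_view WHERE clientnum = 'XYZ';
    FETCH FORWARD 1000 FROM report_cur;   -- repeat until it returns no rows
    CLOSE report_cur;
    COMMIT;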
{
"msg_contents": "On Fri, Apr 22, 2005 at 05:04:19PM -0400, Joel Fradkin wrote:\n> And is the system always going to be used by only one user? \n> No we have 400+ concurrent users\n> \n> I guess the question is if this big select is representative of the load you\n> expect in production.\n> Yes we see many time on the two processor box running MSSQL large return\n> sets using 100%cpu for 5-30 seconds.\n> \n> What happens if you execute the query more times? Do the times stay the\n> same as the second run?\n> I will definitely have to pressure testing prior to going live in\n> production. I have not done concurrent tests as honestly single user tests\n> are failing, so multiple user testing is not something I need yet.\n\nI would very, very strongly encourage you to run multi-user tests before\ndeciding on mysql. Mysql is nowhere near as capable when it comes to\nconcurrent operations as PostgreSQL is. From what others have said, it\ndoesn't take many concurrent operations for it to just fall over. I\ncan't speak from experience because I avoid mysql like the plague,\nthough. :)\n\nLikewise, MSSQL will probably look better single-user than it will\nmulti-user. Unless you're going to only access the database single-user,\nit's just not a valid test case (and by the way, this is true no matter\nwhat database you're looking at. Multiuser access is where you uncover\nyour real bottlenecks.)\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 22 Apr 2005 21:11:07 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "I would very, very strongly encourage you to run multi-user tests before\ndeciding on mysql. Mysql is nowhere near as capable when it comes to\nconcurrent operations as PostgreSQL is. From what others have said, it\ndoesn't take many concurrent operations for it to just fall over. I\ncan't speak from experience because I avoid mysql like the plague,\nthough. :)\n\nI am just testing the water so to speak, if it cant handle single user tests\nthen multiple user tests are kind of a waste of time.\n\nI am trying to de-normalize my view into a table to see if I can get my app\nto work. It is a good idea anyway but raises a ton of questions about\ndealing with the data post a case being closed etc; also on multiple child\nrelationships like merchandise and payments etc.\n\nI did do a test of all three (MSSQL, MYSQL,and postgres) in aqua studio ,\nall on the same machine running the servers and found postgres beat out\nMYSQL, but like any other test it may have been an issue with aqua studio\nand mysql in any case I have not made a decision to use mysql I am still\nresearching fixes for postgres.\n\nI am waiting to here back from Josh on using cursors and trying to flatten\nlong running views. \n\nI am a little disappointed I have not understood enough to get my analyzer\nto use the proper plan, we had to set seqscan off to get the select from\nresponse_line to work fast and I had to turn off merge joins to get assoc\nlist to work fast. Once I am up I can try to learn more about it, I am so\nglad there are so many folks here willing to take time to educate us newb's.\n\n\n",
"msg_date": "Sat, 23 Apr 2005 10:11:28 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Joel Fradkin wrote:\n> I would very, very strongly encourage you to run multi-user tests before\n> deciding on mysql. Mysql is nowhere near as capable when it comes to\n> concurrent operations as PostgreSQL is. From what others have said, it\n> doesn't take many concurrent operations for it to just fall over. I\n> can't speak from experience because I avoid mysql like the plague,\n> though. :)\n> \n> I am just testing the water so to speak, if it cant handle single user tests\n> then multiple user tests are kind of a waste of time.\n\nJoel I think you are missing the point on the above comment. The above\ncomment as I read is, o.k. you are having problems with PostgreSQL BUT \nMySQL isn't going to help you and you will see that in multi-user tests.\n\nMySQL is known to work very well on small databases without a lot of \nconcurrent sessions. I don't think anybody here would argue that.\n\nWhere MySQL runs into trouble is larger databases with lots of \nconcurrent connections.\n\n> I am a little disappointed I have not understood enough to get my analyzer\n> to use the proper plan, we had to set seqscan off to get the select from\n> response_line to work fast and I had to turn off merge joins to get assoc\n> list to work fast. Once I am up I can try to learn more about it, I am so\n> glad there are so many folks here willing to take time to educate us newb's.\n\nSincerely,\n\nJoshua D. Drake\nCommand Prompt, Inc.\n\n\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Sat, 23 Apr 2005 08:35:52 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "\n\n> I am just testing the water so to speak, if it cant handle single user\n> tests then multiple user tests are kind of a waste of time.\n\nAt the risk of being even more pedantic, let me point out that if you are\ngoing to be running your application with multiple users the reverse is\neven more true, 'If it can't handle multiple user tests then single user\ntests are kind of a waste of time'.\n\nbrew\n\n ==========================================================================\n Strange Brew ([email protected])\n Check out my Stock Option Covered Call website http://www.callpix.com\n and my Musician's Online Database Exchange http://www.TheMode.com\n ==========================================================================\n\n",
"msg_date": "Sat, 23 Apr 2005 12:03:33 -0400 (EDT)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Centuries ago, Nostradamus foresaw when [email protected] (\"Joel Fradkin\") would write:\n> I am just testing the water so to speak, if it cant handle single\n> user tests then multiple user tests are kind of a waste of time.\n\nI would suggest that if multi-user functionality is needed, then\nstarting with single user tests is a similar waste of time.\n\nThere's good reason to look at it this way... \n\nIt is all too common for people to try to start building things with\nprimitive functionality, and then try to \"evolve\" the system into what\nthey need. It is possible for that to work, if the \"base\" covers\nenough of the necessary functionality.\n\nIn practice, we have watched Windows evolve in such a fashion with\nrespect to multiuser support, and, in effect, it has never really\ngotten it. Microsoft started by hacking something on top of MS-DOS,\nand by the time enough applications had enough dependancies on the way\nthat worked, it has essentially become impossible for them to migrate\nproperly to a multiuser model since applications are normally designed\nwith the myopic \"this is MY computer!\" model of the world.\n\nYou may not need _total_ functionality in the beginning, but,\nparticularly for multiuser support, which has deep implications for\napplications, it needs to be there In The Beginning.\n-- \noutput = reverse(\"moc.liamg\" \"@\" \"enworbbc\")\nhttp://linuxdatabases.info/info/lisp.html\nA CONS is an object which cares. -- Bernie Greenberg.\n",
"msg_date": "Sat, 23 Apr 2005 14:27:12 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
}
] |
[
{
"msg_contents": "I saw an interesting thought in another thread about placing database data\nin a partition that uses cylinders at the outer edge of the disk. I want\nto try this. Are the lower number cylinders closer to the edge of a SCSI\ndisk or is it the other way around? What about ATA?\n\nCheers,\n\nRick\n\n",
"msg_date": "Fri, 22 Apr 2005 10:49:50 -0500",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Disk Edge Partitioning"
}
] |
[
{
"msg_contents": "Hi there,\n \nWe need to update a table of about 1.2GB (and about 900k rows) size. I\nwas wondering if I should let the regular cron job take care of clean up\n(vacuum db Mon-Sat, vacuum full on Sun, followed by Reindex script), or\nmanually do this on the table followed by the update.\n \nThis is what I used to find the table size, which probably doesn't\ninclude the index size. Is there a way to find out size of indexes?\n \nselect relpages * 8192 as size_in_bytes from pg_class where relnamespace\n= (select oid from pg_namespace where nspname = 'public') and relname =\n'r_itemcategory';\n \n \nThanks,\n\nAnjan\n \n \n************************************************************************\n******************\nThis e-mail and any files transmitted with it are intended for the use\nof the \naddressee(s) only and may be confidential and covered by the\nattorney/client \nand other privileges. If you received this e-mail in error, please\nnotify the \nsender; do not disclose, copy, distribute, or take any action in\nreliance on \nthe contents of this information; and delete it from your system. Any\nother \nuse of this e-mail is prohibited.\n************************************************************************\n******************\n\n \n\n\n\n\n\n\n\n\n\n\n\nHi there, We need to update a table of about 1.2GB (and about 900k rows) size. I was wondering if I should let the regular cron job take care of clean up (vacuum db Mon-Sat, vacuum full on Sun, followed by Reindex script), or manually do this on the table followed by the update. This is what I used to find the table size, which probably doesn’t include the index size. Is there a way to find out size of indexes? select relpages * 8192 as size_in_bytes from pg_class where relnamespace = (select oid from pg_namespace where nspname = 'public') and relname = 'r_itemcategory'; Thanks,\nAnjan ******************************************************************************************This e-mail and any files transmitted with it are intended for the use of the addressee(s) only and may be confidential and covered by the attorney/client and other privileges. If you received this e-mail in error, please notify the sender; do not disclose, copy, distribute, or take any action in reliance on the contents of this information; and delete it from your system. Any other use of this e-mail is prohibited.******************************************************************************************",
"msg_date": "Fri, 22 Apr 2005 17:50:29 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Updating table, precautions?"
},
{
"msg_contents": "Anjan,\n\n> This is what I used to find the table size, which probably doesn't\n> include the index size. Is there a way to find out size of indexes?\n>\n> select relpages * 8192 as size_in_bytes from pg_class where relnamespace\n> = (select oid from pg_namespace where nspname = 'public') and relname =\n> 'r_itemcategory';\n\nSee the code in CVS in the \"newsysviews\" project in pgFoundry. Andrew coded \nup a nice pg_user_table_storage view which gives table, index and TOAST size.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Fri, 22 Apr 2005 15:33:26 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Updating table, precautions?"
}
] |
[
{
"msg_contents": "Building a single-column index on a dual opteron with 4G of memory, data\non a 4 SATA RAID10; OS, logs and tempsace on a SATA mirror, with\nsort_mem set to 2.5G, create index is actually CPU bound for large\nportions of time. The postgresql process and system time are accounting\nfor an entire CPU, and systat (this is a FreeBSD5.2 box) is generally\nshowing 80% utilization on the RAID10 and 40% on the mirror.\n\nNot a performance problem, but I thought some people might be\ninterested. The RAID10 is doing about 28-32MB/s, I would think this\nwouldn't be enough to swamp the CPU but I guess I would be thinking\nwrong.\n\nBTW, the column I'm indexing is a bigint with a low correlation.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 22 Apr 2005 22:09:20 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interesting numbers on a CREATE INDEX"
}
] |
[
{
"msg_contents": "\"Index Scan using ix_tblviwauditcube_clientnum on tblviwauditcube\n(cost=0.00..35895.75 rows=303982 width=708) (actual time=0.145..1320.432\nrows=316490 loops=1)\"\n\n\" Index Cond: ((clientnum)::text = 'MSI'::text)\"\n\n\"Total runtime: 1501.028 ms\"\n\n \n\nJoel Fradkin\n\n \n\nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n\n \n\[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n\n \n\n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\"Index Scan using ix_tblviwauditcube_clientnum on tblviwauditcube \n(cost=0.00..35895.75 rows=303982 width=708) (actual time=0.145..1320.432\nrows=316490 loops=1)\"\n\" Index Cond: ((clientnum)::text = 'MSI'::text)\"\n\"Total runtime: 1501.028 ms\"\n \nJoel Fradkin\n\n \n\nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n\n \n\[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\n© 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the\nintended recipient, please contact the sender by reply email and delete and\ndestroy all copies of the original message, including attachments.",
"msg_date": "Sat, 23 Apr 2005 16:18:36 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "flattening the file might work for me here is the analyze."
}
] |
[
{
"msg_contents": "> In practice, we have watched Windows evolve in such a fashion with\n> respect to multiuser support, and, in effect, it has never really\n> gotten it. Microsoft started by hacking something on top of MS-DOS,\n> and by the time enough applications had enough dependancies on the way\n> that worked, it has essentially become impossible for them to migrate\n> properly to a multiuser model since applications are normally designed\n> with the myopic \"this is MY computer!\" model of the world.\n\nCompletely false. NT was a complete rewrite (1993ish) and was\ninherently multi-user with even the GDI running as a user level process\n(no longer however). The NT kernel was scalable and portable, running\non the Alpha, MIPS, etc.\n\nHowever, you do have a point with applications...many win32 developers\nhave a very bad habit about expecting their apps to install and run as\nroot. However, this is generally not a problem with Microsoft stuff.\nIn short, the problem is really people, not the technology.\n\nMerlin\n",
"msg_date": "Mon, 25 Apr 2005 09:52:16 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
},
{
"msg_contents": "Martha Stewart called it a Good Thing when [email protected] (\"Merlin Moncure\") wrote:\n>> In practice, we have watched Windows evolve in such a fashion with\n>> respect to multiuser support, and, in effect, it has never really\n>> gotten it. Microsoft started by hacking something on top of MS-DOS,\n>> and by the time enough applications had enough dependancies on the way\n>> that worked, it has essentially become impossible for them to migrate\n>> properly to a multiuser model since applications are normally designed\n>> with the myopic \"this is MY computer!\" model of the world.\n>\n> Completely false. NT was a complete rewrite (1993ish) and was\n> inherently multi-user with even the GDI running as a user level\n> process (no longer however). The NT kernel was scalable and\n> portable, running on the Alpha, MIPS, etc.\n\nCompletely irrelevant. When Win32 was deployed, the notion that more\nthan a tiny fraction of the users would be running Win32 apps on\nmultiuser platforms was absolutely laughable. It continued to be\nlaughable until well into this century, when Microsoft ceased to sell\nsystems based on MS-DOS.\n\n> However, you do have a point with applications...many win32 developers\n> have a very bad habit about expecting their apps to install and run as\n> root. However, this is generally not a problem with Microsoft stuff.\n> In short, the problem is really people, not the technology.\n\nReality is that it is all about the applications.\n\nMicrosoft spent _years_ pushing people from MS-DOS to Windows 3.1 to\nWfW to Windows 95, and had to do a lot of hard pushing.\n\nThe result of that was that a lot of vendors built Win32 applications\nfor Windows 95.\n\nNone of those systems supported multiple users, so the usage and\nexperience with Win32 pointed everyone to the construction of single\nuser applications.\n\nAt that point, whether Windows NT did or didn't support multiple users\nbecame irrelevant. Usage patterns had to be oriented towards single\nuser operation because that's all Win32 could be used to support for\nthe vast majority that _weren't_ running Windows NT.\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in String.concat \"@\" [name;tld];;\nhttp://linuxfinances.info/info/x.html\nBut what can you do with it? -- ubiquitous cry from Linux-user\npartner. -- Andy Pearce, <[email protected]>\n",
"msg_date": "Mon, 25 Apr 2005 18:03:52 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
}
] |
[
{
"msg_contents": "> I am waiting to here back from Josh on using cursors and trying to\nflatten\n> long running views.\n> \n> I am a little disappointed I have not understood enough to get my\nanalyzer\n> to use the proper plan, we had to set seqscan off to get the select\nfrom\n> response_line to work fast and I had to turn off merge joins to get\nassoc\n> list to work fast. Once I am up I can try to learn more about it, I am\nso\n> glad there are so many folks here willing to take time to educate us\n> newb's.\n\nI am not a big fan of tweaking the optimizer because you are robbing\nPeter to pay Paul, so to speak. pg 8.1 may come out with new optimizer\ntweaks and you'll have to do it all over again. If the optimizer is not\n'getting' your view, there are a few different approaches to fixing the\nproblem.\n\nI am also not a big fan of de-normalizing your database. Essentially\nyou are lighting a fuse that may blow up later. Here are some general\napproaches to planner optimization that can help out in tricky\nsituations.\n\n1. Split up views. Often overlooked but can provide good enhancements.\nIf your view is based on 3 or more tables, has left/right joins,\nconsider breaking it up into two or more views. Views can be based on\nviews and it is easier to force the planner to pick good plans this way.\nIf you can find other uses for component views in other queries, so much\nthe better.\n\n2. Materialize your view. Use lazy materialization, i.e. you query the\nview into a table at scheduled times. Now we are trading disk spaces\nand coherence for performance...this may not fit your requirements but\nthe nice thing about it is that it will help give us the 'ideal plan'\nrunning time which we are shooting for.\n\n3. pl/pgsql. Using combinations of loops, refcursors, and queries, you\ncan cut code that should give you comparable performance to the ideal\nplan. If you can do the actual work here as well (no data returned to\nclient), you get a tremendous win. Also pl/pgsql works really well for\nrecursive sets and other things that are difficult to run in the context\nof a single query. Just be aware of the disadvantages:\na. not portable\nb. maintenance overhead\nc. require relatively high developer skill set\n\nI will go out on a limb and say that mastering the above approaches can\nprovide the solution to virtually any performance problem within the\nlimits of your hardware and the problem complexity.\n\nBased on your questions, it sounds to me like your #1 problem is your\ndeveloper skillset relative to your requirements. However, this is\neasily solvable...just keep attacking the problem and don't be afraid to\nbring in outside help (which you've already done, that's a start!).\n\nMerlin\n",
"msg_date": "Mon, 25 Apr 2005 10:13:34 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Joel's Performance Issues WAS : Opteron vs Xeon"
}
] |
[
{
"msg_contents": "If I have a freshly CLUSTERed table and queries that want to do a\nmerge join, it seems to me that quite a bit of time is spent\nunnecessarily sorting the already-sorted table. An example such\nquery I found in my log files is shown below. If I read the\nEXPLAIN ANALYZE output correctly, it's saying that roughly half\nthe time (570-269 = 300 out of 670 ms) was spent sorting the\nalready sorted data.\n\n=====================\n \\d entity_facids;\n Table \"public.entity_facids\"\n Column | Type | Modifiers\n -----------+-----------+-----------\n entity_id | integer |\n fac_ids | integer[] |\n Indexes:\n \"entity_facids__entity_id\" btree (entity_id)\n fli=# cluster entity_facids__entity_id on entity_facids;\n CLUSTER\n fli=#\n fli=# explain analyze select * from userfeatures.point_features join entity_facids using (entity_id) where featureid=118;\n QUERY PLAN\n -------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=9299.37..9738.34 rows=1078 width=117) (actual time=536.989..667.648 rows=2204 loops=1)\n Merge Cond: (\"outer\".entity_id = \"inner\".entity_id)\n -> Sort (cost=37.27..38.45 rows=471 width=85) (actual time=14.289..16.303 rows=2204 loops=1)\n Sort Key: point_features.entity_id\n -> Index Scan using point_features__featureid on point_features (cost=0.00..16.36 rows=471 width=85) (actual time=0.030..9.360 rows=2204 loops=1)\n Index Cond: (featureid = 118)\n -> Sort (cost=9262.10..9475.02 rows=85168 width=36) (actual time=518.471..570.038 rows=59112 loops=1)\n Sort Key: entity_facids.entity_id\n -> Seq Scan on entity_facids (cost=0.00..2287.68 rows=85168 width=36) (actual time=0.093..268.679 rows=85168 loops=1)\n Total runtime: 693.161 ms\n (10 rows)\n fli=#\n\n====================\n\nI understand that the optimizer can not in general know that\na CLUSTERed table stays CLUSTERed when inserts or updates happen;\nbut I was wondering if anyone has any clever ideas on how I can\navoid this sort step.\n\n\nPerhaps in the future, could the table set a bit to remember it\nis freshly clustered, and clear that bit the first time any\nchanges are even attempted in the table? Or, if not, would\nthat be possible if Hannu Krosing's read-only-table idea\n http://archives.postgresql.org/pgsql-hackers/2005-04/msg00660.php\nhappened?\n",
"msg_date": "Mon, 25 Apr 2005 13:14:25 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": true,
"msg_subject": "half the query time in an unnecessary(?) sort?"
},
{
"msg_contents": "Ron,\n\n> If I have a freshly CLUSTERed table and queries that want to do a\n> merge join, it seems to me that quite a bit of time is spent\n> unnecessarily sorting the already-sorted table. An example such\n> query I found in my log files is shown below. If I read the\n> EXPLAIN ANALYZE output correctly, it's saying that roughly half\n> the time (570-269 = 300 out of 670 ms) was spent sorting the\n> already sorted data.\n\nIt still has to sort because the clustering isn't guarenteed to be 100%. \nHowever, such sorts should be very quick as they have little work to do.\n\nLooking at your analyze, though, I think it's not the sort that's taking the \ntime as it is that the full sorted entity_id column won't fit in work_mem. \nTry increasing it?\n\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 25 Apr 2005 15:11:41 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: half the query time in an unnecessary(?) sort?"
},
{
"msg_contents": "\nJosh Berkus wrote: [quoted out of order]\n>Ron,\n> \n> Looking at your analyze, though, I think it's not the sort that's taking the \n> time as it is that the full sorted entity_id column won't fit in work_mem. \n> Try increasing it?\n\nYup, that indeed fixed this particular query since neither table was\nparticularly large.\n\n\n> It still has to sort because the clustering isn't guarenteed to be 100%. \n\nI guess I was contemplating whether or not there are some conditions\nwhere it could be 100% (perhaps combined with Hannu's read only\ntable speculation).\n\n> However, such sorts should be very quick as they have little work to do.\n\nTrue, so long as the table can fit in work-mem. For much larger tables\nIMHO it'd be nice to be able to simply do a seq-scan on them if there were\nsome way of knowing that they were sorted.\n\n",
"msg_date": "Mon, 25 Apr 2005 22:45:57 -0700",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: half the query time in an unnecessary(?) sort?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]]\n> Sent: Sunday, April 24, 2005 2:08 PM\n> To: Andrew Dunstan\n> Cc: Tom Lane; Greg Stark; Marko Ristola; pgsql-perform;\n> [email protected]\n> Subject: Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks\n> suggested?\n> \n> [...]\n> Actually, that paper looks *really* promising. Does anyone here have\n> enough math to solve for D(sub)Md on page 6? I'd like to test it on\n> samples of < 0.01%. \n> [...]\n\nD_Md = [1 - sqrt(f_1 / s)] D_b + sqrt(f_1 / s) D_B\n\ns = block size\n\nf~_1 = median frequency within blocks for distinct values occurring in\n only one block\n\nD_b = d + f_1^(b+1)\n\nd = distinct classes in the sample\n\nf_1^(b+1) = number of distinct values occurring in a single block in\n a sample of b+1 blocks\n\nD_B = d + [B / (b + 1)] f_1^(b+1)\n\nb = sample size (in blocks)\n\nB = total table size (in blocks)\n\nf_k and f~_k are the only tricky functions here, but they are easy to \nunderstand:\n\nSuppose our column contains values from the set {a, b, c, ..., z}.\nSuppose we have a sample of b = 10 blocks.\nSuppose that the value 'c' occurs in exactly 3 blocks (we don't care\nhow often it occurs *within* those blocks).\nSuppose that the value 'f' also occurs in exactly 3 blocks.\nSuppose that the values 'h', 'p', and 'r' occur in exactly 3 blocks.\nSuppose that no other value occurs in exactly 3 blocks.\n\nf_3^b = 5\n\nThis is because there are 5 distinct values that occur in exactly\n3 blocks. f_1^b is the number of distinct values that occur in\nexactly 1 block, regardless of how often it occurs within that block.\n\nNote that when you select a sample size of b blocks, you actually\nneed to sample b+1 blocks to compute f_1^(b+1). This is actually\npedantry since all occurrences of b in the formula are really b+1.\n\nf~ is slightly trickier. First, we pick the distinct values that\noccur in only one block. Then, we count how often each value\noccurs within its block. To wit:\n\nSuppose we have a set {d, q, y, z} of values that occur in only\none block.\nSuppose that d occurs 3x, q occurs 1x, y occurs 8x, and z occurs 6x.\n\nThe function f- would take the mean of these counts to determine\nthe \"cluster frequency\". So f- here would be 4.5. This allows\none to compute D_MF.\n\nThe function f~ takes the median of this sample, which is 3 or 6\n(or I suppose you could even take the mean of the two medians if\nyou wanted).\n\nNo tricky math involved. That should be enough to tell you how to\nwrite the estimator.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Mon, 25 Apr 2005 16:00:18 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Andrew Dunstan [mailto:[email protected]]\n> Sent: Monday, April 25, 2005 3:43 PM\n> To: [email protected]\n> Cc: pgsql-perform; [email protected]\n> Subject: Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks\n> suggested?\n> \n> Josh Berkus wrote:\n> \n> >Simon, Tom:\n> >\n> >While it's not possible to get accurate estimates from a \n> >fixed size sample, I think it would be possible from a\n> >small but scalable sample: say, 0.1% of all data pages on\n> >large tables, up to the limit of maintenance_work_mem. \n\nNote that the results obtained in the cited paper were obtained\nfrom samples of 5 and 10%. It should also warrant caution\nthat the authors don't offer any proofs of confidence bounds, \neven for the \"average\" case.\n\n> [...]\n> After some more experimentation, I'm wondering about some\n> sort of adaptive algorithm, a bit along the lines suggested\n> by Marko Ristola, but limited to 2 rounds.\n\nOne path might be to use the published algorithm and simply\nrecompute the statistics after every K blocks are sampled,\nwhere K is a reasonably small number. If it looks like the\nstatistics are converging on a value, then take a few more\nsamples, check against the trend value and quit. Otherwise \ncontinue until some artificial limit is reached.\n\n> The idea would be that we take a sample (either of fixed \n> size, or some small proportion of the table), see how well\n> it fits a larger sample (say a few times the size of the\n> first sample), and then adjust the formula accordingly to\n> project from the larger sample the estimate for the full\n> population. Math not worked out yet - I think we want to\n> ensure that the result remains bounded by [d,N].\n\nThe crudest algorithm could be something like the Newton-\nRalphson method for finding roots. Just adjust the predicted\nvalue up or down until it comes within an error tolerance of\nthe observed value for the current sample. No need to choose\npowers of 2, and I would argue that simply checking every so\noften on the way to a large sample that can be terminated\nearly is more efficient than sampling and resampling. Of\ncourse, the crude algorithm would almost certainly be I/O\nbound, so if a more sophisticated algorithm would give a\nbetter prediction by spending a few more CPU cycles on each\nsample block gathered, then that seems like a worthwhile\navenue to pursue.\n\nAs far as configuration goes, the user is most likely to\ncare about how long it takes to gather the statistics or\nhow accurate they are. So it would probably be best to\nterminate the sampling process on a user-defined percentage\nof the table size and the minimum error tolerance of the\nalgorithmic prediction value vs. the computed sample value.\n\nIf someone wants a fast and dirty statistic, they set the\nrow percent low and the error tolerance high, which will\neffectively make the blocks read the limiting factor. If\nthey want an accurate statistic, they set the row percent\nas high as they feel they can afford, and the error \ntolerance as low as they need to in order to get the query \nplans they want.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Mon, 25 Apr 2005 17:41:41 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
}
] |
[
{
"msg_contents": "Hi all,\n\nIa a Guy from Germany an a strong Postgres believer!\nIt is the best OpenSource Database i have ever have bee tasted and i \ntry to using\nit in any Database Environments.\n\nIt is exiting to see thadt Verison 8.0 has Tablespaces like ORACLE and DB/2,\nbut i need Partitioning on a few very large Tables.\n\nThe Tabeles are not verry complex, but it is extremely Large (1 GByte \nand above)\nand i think Table Partitioning is the right Way to spiltt them off on \nsome physical\nHarddrives. Iam not sure thadt a common Harddrive RAID or SAN Storage\nSystem will do it for me. The ORACLE Table Partitioning Features are verry\nusefull but my favorite Datebase is PSQL.\n\nIs there any Plans thadt Postgres will support Partitioning in the near \nFuture?\n\nThanks\n\n",
"msg_date": "Tue, 26 Apr 2005 10:07:33 +0200",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table Partitioning: Will it be supported in Future? (splitting large\n\tTables)"
},
{
"msg_contents": "[email protected] wrote:\n> Hi all,\n> \n> Ia a Guy from Germany an a strong Postgres believer!\n> It is the best OpenSource Database i have ever have bee tasted and i \n> try to using\n> it in any Database Environments.\n> \n> It is exiting to see thadt Verison 8.0 has Tablespaces like ORACLE and \n> DB/2,\n> but i need Partitioning on a few very large Tables.\n\nI believe these are being worked on at the moment. You might want to \nsearch the archives of the hackers mailing list to see if the plans will \nsuit your needs.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 26 Apr 2005 09:22:27 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Partitioning: Will it be supported in Future?"
},
{
"msg_contents": "Hmm,\n\nI have asked some Peoples on the List an some one has posted this links\n\nhttp://archives.postgresql.org/pgsql-performance/2004-12/msg00101.php\n\nIt is quite usefull to read but iam not sure thadt theese Trick is verry \nhelpfull.\n\nI want to splitt my 1GByte Table into some little Partitions but how \nshould i do thadt?\nWith the ORACLE Partitioning Option, i can Configurering my Table withe \nEnterprise\nManager or SQL Plus but in this case it looks like Trap.\n\nShould i really decrease my Tabledata size and spread them to other \nTables with the\nsame Structure by limiting Records???\n\nThe next Problem i see, how should i do a Insert/Update/Delete on 4 \nTables of the\nsame Structure at one Query???\n\nNo missunderstanding. We talking not about normalization or \nrestructuring the Colums\nof a table. We talking about Partitioning and in this case at Postgres \n(emultation\nof Partitioning wir UNIONS for Performance tuning)..\n\n\nJosh\n\n\n",
"msg_date": "Tue, 26 Apr 2005 11:57:33 +0200",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table Partitioning: Will it be supported in Future?"
},
{
"msg_contents": "[email protected] wrote:\n> Hmm,\n> \n> I have asked some Peoples on the List an some one has posted this links\n> \n> http://archives.postgresql.org/pgsql-performance/2004-12/msg00101.php\n> \n> It is quite usefull to read but iam not sure thadt theese Trick is verry \n> helpfull.\n> \n> I want to splitt my 1GByte Table into some little Partitions but how \n> should i do thadt?\n> With the ORACLE Partitioning Option, i can Configurering my Table withe \n> Enterprise\n> Manager or SQL Plus but in this case it looks like Trap.\n> \n> Should i really decrease my Tabledata size and spread them to other \n> Tables with the\n> same Structure by limiting Records???\n> \n> The next Problem i see, how should i do a Insert/Update/Delete on 4 \n> Tables of the\n> same Structure at one Query???\n> \n> No missunderstanding. We talking not about normalization or \n> restructuring the Colums\n> of a table. We talking about Partitioning and in this case at Postgres \n> (emultation\n> of Partitioning wir UNIONS for Performance tuning)..\n\n From your description I don't see evidence that you should need to \npartition your table at all. A 1GB table is very common for pgsql. Spend \nsome hard disks on your storage subsystem and you'll gain the \nperformance you want, without trouble on the SQL side. For specific \nrequirements, you might see improvements from partial indexes.\n\nRegards,\nAndreas\n",
"msg_date": "Tue, 26 Apr 2005 15:53:46 +0000",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Partitioning: Will it be supported in Future?"
},
{
"msg_contents": "Richard,\n\n> I believe these are being worked on at the moment. You might want to\n> search the archives of the hackers mailing list to see if the plans will\n> suit your needs.\n\nActually, this is being discussed through the Bizgres project: \nwww.bizgres.org.\n\nHowever, I agree that a 1GB table is not in need of partitioning.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 26 Apr 2005 17:00:13 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Partitioning: Will it be supported in Future?"
}
] |
[
{
"msg_contents": "Hi all again,\n\nMy next queststion is dedicated to blobs in my Webapplication (using \nTomcat 5 and JDBC\nintegrated a the J2EE Appserver JBoss).\n\nFilesystems with many Filesystem Objects can slow down the Performance \nat opening\nand reading Data.\n\nMy Question:\nCan i speedup my Webapplication if i store my JPEG Images with small\nsizes inside my PostgreSQL Database (on verry large Databasis over 1 GByte\nand above without Images at this time!)\n\nI hope some Peoples can give me a Tip or Hint where in can\nsome usefull Information about it!\n\nThanks\nJosh\n\n\n\n",
"msg_date": "Tue, 26 Apr 2005 10:22:30 +0200",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BLOB's bypassing the OS Filesystem for better Image loading speed?"
},
{
"msg_contents": "[email protected] wrote:\n> Hi all again,\n> \n> My next queststion is dedicated to blobs in my Webapplication (using \n> Tomcat 5 and JDBC\n> integrated a the J2EE Appserver JBoss).\n> \n> Filesystems with many Filesystem Objects can slow down the Performance \n> at opening\n> and reading Data.\n\nWhich filesystems? I know ext2 used to have issues with many-thousands \nof files in one directory, but that was a directory scanning issue \nrather than file reading.\n\n> My Question:\n> Can i speedup my Webapplication if i store my JPEG Images with small\n> sizes inside my PostgreSQL Database (on verry large Databasis over 1 GByte\n> and above without Images at this time!)\n\nNo. Otherwise the filesystem people would build their filesystems on top \nof PostgreSQL not the other way around. Of course, if you want image \nupdates to be part of a database transaction, then it might be worth \nstoring them in the database.\n\n> I hope some Peoples can give me a Tip or Hint where in can\n> some usefull Information about it!\n\nLook into having a separate server (process or actual hardware) to \nhandle requests for static text and images. Keep the Java server for \nactually processing data.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 26 Apr 2005 10:05:32 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BLOB's bypassing the OS Filesystem for better Image"
},
{
"msg_contents": "\n> Which filesystems? I know ext2 used to have issues with many-thousands \n> of files in one directory, but that was a directory scanning issue \n> rather than file reading.\n\n From my Point of view i think it is better to let one Process do the \noperation to an Postgres Cluster Filestructure as\nif i bypass it with a second process.\n\nFor example:\nA User loads up some JPEG Images over HTTP.\n\na) (Filesystem)\nOn Filesystem it would be written in a File with a random generated \nFilename (timestamp or what ever)\n(the Directory Expands and over a Million Fileobjects with will be \narchived, written, replaced, e.t.c)\n\nb) (Database)\nThe JPEG Image Information will be stored into a BLOB as Part of a \nspecial Table, where is linked\nwit the custid of the primary Usertable.\n\n From my Point of view is any outside Process (must be created, forked, \nMemory allocated, e.t.c)\na bad choice. I think it is generall better to Support the Postmaster in \nall Ways and do some\nHardware RAID Configurations.\n\n>> My Question:\n>> Can i speedup my Webapplication if i store my JPEG Images with small\n>> sizes inside my PostgreSQL Database (on verry large Databasis over 1 \n>> GByte\n>> and above without Images at this time!)\n>\n>\n> No. Otherwise the filesystem people would build their filesystems on \n> top of PostgreSQL not the other way around. Of course, if you want \n> image updates to be part of a database transaction, then it might be \n> worth storing them in the database.\n\nHmm, ORACLE is going the other Way. All File Objects can be stored into \nthe Database if the DB\nhas the IFS Option (Database Filesystem and Fileserver insinde the \nDatabase).\n\n\n>\n>> I hope some Peoples can give me a Tip or Hint where in can\n>> some usefull Information about it!\n>\n> Look into having a separate server (process or actual hardware) to \n> handle requests for static text and images. Keep the Java server for \n> actually processing\n\n\nThanks\n\n",
"msg_date": "Tue, 26 Apr 2005 11:34:45 +0200",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BLOB's bypassing the OS Filesystem for better Image"
},
{
"msg_contents": "\n\tMy laptop reads an entire compiled linux kernel (23000 files totalling \n250 MBytes) in about 1.5 seconds if they're in cache. It's about 15.000 \nfiles/second. You think it's slow ? If you want to read them in random \norder, you'll probably use something else than a laptop drive, but you get \nthe idea.\n\n\tFilesystem is reiser4.\n\n\tIf you use ext2, you'll have a problem with many files in the same \ndirectory because I believe it uses a linear search, hence time \nproportional to the number of files (ouch). I once tried to put a million \n1-kbyte files in a directory ; it was with reiserfs 3, and it didn't seem \nto feel anything close to molested. I believe it took some 10 minutes, but \nit was two years ago so I don't remember very well. NTFS took a day, that \nI do remember ! By curiosity I tried to stuff 1 million 1KB files in a \ndirectory on my laptop right now, It took a bit less than two minutes.\n\nOn Tue, 26 Apr 2005 11:34:45 +0200, [email protected] <[email protected]> \nwrote:\n\n>\n>> Which filesystems? I know ext2 used to have issues with many-thousands \n>> of files in one directory, but that was a directory scanning issue \n>> rather than file reading.\n>\n> From my Point of view i think it is better to let one Process do the \n> operation to an Postgres Cluster Filestructure as\n> if i bypass it with a second process.\n>\n> For example:\n> A User loads up some JPEG Images over HTTP.\n>\n> a) (Filesystem)\n> On Filesystem it would be written in a File with a random generated \n> Filename (timestamp or what ever)\n> (the Directory Expands and over a Million Fileobjects with will be \n> archived, written, replaced, e.t.c)\n>\n> b) (Database)\n> The JPEG Image Information will be stored into a BLOB as Part of a \n> special Table, where is linked\n> wit the custid of the primary Usertable.\n>\n> From my Point of view is any outside Process (must be created, forked, \n> Memory allocated, e.t.c)\n> a bad choice. I think it is generall better to Support the Postmaster in \n> all Ways and do some\n> Hardware RAID Configurations.\n>\n>>> My Question:\n>>> Can i speedup my Webapplication if i store my JPEG Images with small\n>>> sizes inside my PostgreSQL Database (on verry large Databasis over 1 \n>>> GByte\n>>> and above without Images at this time!)\n>>\n>>\n>> No. Otherwise the filesystem people would build their filesystems on \n>> top of PostgreSQL not the other way around. Of course, if you want \n>> image updates to be part of a database transaction, then it might be \n>> worth storing them in the database.\n>\n> Hmm, ORACLE is going the other Way. All File Objects can be stored into \n> the Database if the DB\n> has the IFS Option (Database Filesystem and Fileserver insinde the \n> Database).\n>\n>\n>>\n>>> I hope some Peoples can give me a Tip or Hint where in can\n>>> some usefull Information about it!\n>>\n>> Look into having a separate server (process or actual hardware) to \n>> handle requests for static text and images. Keep the Java server for \n>> actually processing\n>\n>\n> Thanks\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n",
"msg_date": "Mon, 02 May 2005 01:22:49 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BLOB's bypassing the OS Filesystem for better Image"
},
{
"msg_contents": "* [email protected] <[email protected]> wrote:\n\nHi,\n\n> My next queststion is dedicated to blobs in my Webapplication (using \n> Tomcat 5 and JDBC\n> integrated a the J2EE Appserver JBoss).\n> \n> Filesystems with many Filesystem Objects can slow down the Performance \n> at opening and reading Data.\n\nAs others already pointed out, you probably meant: overcrowded\ndirectories can make some filesystems slow. For ext2 this is the case.\nInstead reiserfs is designed to handle very large directories\n(in fact by using similar indices like an database does).\n\nIf your application is an typical web app your will probably have\nthe situation:\n\n+ images get read quite often, while they get updated quite seldom. \n+ you dont want to use image content in quries (ie. match against it)\n+ the images will be transfered directly, without further processing\n+ you can give the upload and the download-server access to a shared\n filesystem or synchronize their filesystems (ie rsync)\n\nUnder this assumptions, I'd suggest directly using the filesystem.\nThis should save some load, ie. \n\n+ no transfer from postgres -> webserver and further processing \n (server side application) necessary, the webserver can directly \n fetch files from filesystem\n+ no further processing (server side application) necessary\n+ backup and synchronization is quite trivial (good old fs tools)\n+ clustering (using many image webservers) is quite trivial\n\nAlready mentioned that you've got to choose the right filesystem or \nat least the right fs organization (ie. working with a n-level hierachy\nto keep directory sizes small and lookups fast).\n\nAn RDBMS can do this for you and so will save some implementation work, \nbut I don't think it will be noticably faster than an good fs-side\nimplementation.\n\n\nOf course there may be a lot of good reasons to put images into the\ndatabase, ie. if some clients directly work on db connections and \nall work (including image upload) should be done over the db link.\n\n\ncu\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n",
"msg_date": "Thu, 12 May 2005 01:26:23 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BLOB's bypassing the OS Filesystem for better Image loading\n speed?"
},
{
"msg_contents": "\n>> Filesystems with many Filesystem Objects can slow down the Performance\n>> at opening and reading Data.\n\n\tOn my laptop, lighttpd takes upto 15000 hits PER SECOND on static 2-3 Kb \nfiles (tested with apachebench 2).\n\tApache is slower, of course : 3-4000 hits per second which is not that \nbad.\n\tUsing a dynamic script with images in the database, you should account \nfor query and transmission overhead, dynamic page overhead... mmm, I'd say \nusing a fast application server you could maybe get 2-300 images served \nper second from the database, and that's very optimistic. And then the \ndatabase will crawl, it will be disintegrated by the incoming flow of \nuseless requests... scalability will be awful...\n\tNot mentioning that browsers ask the server \"has this image changed since \nthe last time ?\" (HEAD request) and then they don't download it if it \ndoesn't. The server just stat()'s the file. statting a file on any decent \nfilesystem (ie. XFS Reiser JFS etc.) should take less than 10 microseconds \nif the information is in the cache. You'll have to look in the database to \ncheck the date... more queries !\n\n\tIf you want to control download rights on files, you can still put the \nfiles on the filesystem (which is the right choice IMHO) and use a dynamic \nscript to serve them. Even better, you could use lighttpd's authorized \nfile download feature.\n\n\tThe only case I see putting files in a database as interesting is if you \nwant them to be part of a transaction. In that case, why not...\n",
"msg_date": "Thu, 12 May 2005 12:02:56 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BLOB's bypassing the OS Filesystem for better Image loading\n speed?"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Shoaib Burq (VPAC) [mailto:[email protected]]\n> Sent: Tuesday, April 26, 2005 9:31 AM\n> To: Tom Lane\n> Cc: John A Meinel; Russell Smith; Jeff; \n> [email protected]\n> Subject: Re: [PERFORM] two queries and dual cpu (perplexed)\n> \n> \n> OK ... so just to clearify... (and pardon my ignorance):\n> \n> I need to increase the value of 'default_statistics_target' \n> variable and then run VACUUM ANALYZE, right?\n\nNot necessarily. You can set the statistics for a single\ncolumn with ALTER TABLE.\n\n> If so what should I choose for the 'default_statistics_target'?\n> [...]\n\nSince you have a decently large table, go for the max setting\nwhich is 1000.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Tue, 26 Apr 2005 09:58:00 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: two queries and dual cpu (perplexed)"
}
] |
[
{
"msg_contents": "Maybe he needs to spend $7K on performance improvements? \n\n;-)\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Josh Berkus\nSent: Tuesday, April 26, 2005 8:00 PM\nTo: Richard Huxton\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Table Partitioning: Will it be supported in Future?\n\n\nRichard,\n\n> I believe these are being worked on at the moment. You might want to \n> search the archives of the hackers mailing list to see if the plans \n> will suit your needs.\n\nActually, this is being discussed through the Bizgres project: \nwww.bizgres.org.\n\nHowever, I agree that a 1GB table is not in need of partitioning.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n",
"msg_date": "Tue, 26 Apr 2005 16:58:31 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table Partitioning: Will it be supported in Future?"
},
{
"msg_contents": "On 4/26/05, Mohan, Ross <[email protected]> wrote:\n> Maybe he needs to spend $7K on performance improvements?\n> \n> ;-)\n> \n\nAAAAAAAAAAARRRRRRRRRRRGGGGGGGGGGG!!!!!!!!!\n\nI will forever hate the number 7,000 from this day forth!\n\nSeriously, though, I've never seen a thread on any list wander on so\naimlessly for so long.\n\nPlease, mommy, make it stop!\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n",
"msg_date": "Wed, 27 Apr 2005 00:42:10 +0000",
"msg_from": "Mike Rylander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table Partitioning: Will it be supported in Future?"
}
] |
[
{
"msg_contents": "On March 21, 2005 8:07 AM, Hannu Krosing wrote:\n> On L, 2005-03-19 at 23:47 -0500, Tom Lane wrote:\n> > Well, partitioning on the primary key would be Good Enough for 95% or\n> > 99% of the real problems out there. I'm not excited about adding a\n> > large chunk of complexity to cover another few percent.\n> \n> Are you sure that partitioning on anything else than PK would be\n> significantly harder ?\n> \n> I have a case where I do manual partitioning over start_time\n> (timestamp), but the PK is an id from a sequence. They are almost, but\n> not exactly in the same order. And I don't think that moving the PK to\n> be (start_time, id) just because of \"partitioning on PK only\" would be a\n> good design in any way.\n> \n> So please don't design the system to partition on PK only.\n\nI agree. I have used table partitioning to implement pseudo-partitioning, and I am very pleased with the results so far. Real partitioning would be even better, but I am partitioning by timestamp, and this is not the PK, and I don't wish to make it one.\n\n-Roger\n",
"msg_date": "Tue, 26 Apr 2005 12:52:53 -0700",
"msg_from": "\"Roger Hand\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: What needs to be done for real Partitioning?"
}
] |
[
{
"msg_contents": "I have this query that takes a little over 8 min to run:\nselect client,max(atime) as atime from usage_access where atime >=\n(select atime - '1 hour'::interval from usage_access order by atime\ndesc limit 1) group by client;\n\nI think it can go a lot faster. Any suggestions on improving this? DB\nis 7.3.4 I think. (There is no index on client because it is very big\nand this data is used infrequently.)\n\nexplain ANALYZE select client,max(atime) as atime from usage_access\nwhere atime >= (select atime - '1 hour'::interval from usage_access\norder by atime desc limit 1) group by client;\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3525096.28..3620450.16 rows=1271385 width=20)\n(actual time=482676.95..482693.69 rows=126 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.59 rows=1 width=8) (actual\ntime=0.40..0.41 rows=1 loops=1)\n -> Index Scan Backward using usage_access_atime on\nusage_access (cost=0.00..22657796.18 rows=38141552 width=8) (actual\ntime=0.39..0.40 rows=2 loops=1)\n -> Group (cost=3525096.28..3588665.53 rows=12713851 width=20)\n(actual time=482676.81..482689.29 rows=3343 loops=1)\n -> Sort (cost=3525096.28..3556880.90 rows=12713851\nwidth=20) (actual time=482676.79..482679.16 rows=3343 loops=1)\n Sort Key: client\n -> Seq Scan on usage_access (cost=0.00..1183396.40\nrows=12713851 width=20) (actual time=482641.57..482659.18 rows=3343\nloops=1)\n Filter: (atime >= $0)\n Total runtime: 482694.65 msec\n\n\nI'm starting to understand this, which is quite frightening to me. I\nthought that maybe if I shrink the number of rows down I could improve\nthings a bit, but my first attempt didn't work. I thought I'd replace\nthe \"from usage_access\" with this query instead:\nselect * from usage_access where atime >= (select atime - '1\nhour'::interval from usage_access order by atime desc limit 1);\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on usage_access (cost=0.00..1183396.40 rows=12713851\nwidth=116) (actual time=481796.22..481839.43 rows=3343 loops=1)\n Filter: (atime >= $0)\n InitPlan\n -> Limit (cost=0.00..0.59 rows=1 width=8) (actual\ntime=0.41..0.42 rows=1 loops=1)\n -> Index Scan Backward using usage_access_atime on\nusage_access (cost=0.00..22657796.18 rows=38141552 width=8) (actual\ntime=0.40..0.41 rows=2 loops=1)\n Total runtime: 481842.47 msec\n\nIt doesn't look like this will help at all.\n\nThis table is primarily append, however I just recently deleted a few\nmillion rows from the table, if that helps anyone.\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Tue, 26 Apr 2005 15:16:57 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": true,
"msg_subject": "speed up query with max() and odd estimates"
},
{
"msg_contents": "On Tue, Apr 26, 2005 at 03:16:57PM -0500, Matthew Nuzum wrote:\n> Seq Scan on usage_access (cost=0.00..1183396.40 rows=12713851\n> width=116) (actual time=481796.22..481839.43 rows=3343 loops=1)\n\nThat's a gross misestimation -- four orders of magnitude off!\n\nHave you considering doing this in two steps, first getting out whatever\ncomes from the subquery and then doing the query? Have you ANALYZEd recently?\nDo you have an index on atime?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 26 Apr 2005 22:48:53 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed up query with max() and odd estimates"
},
{
"msg_contents": "On 4/26/05, Steinar H. Gunderson <[email protected]> wrote:\n> On Tue, Apr 26, 2005 at 03:16:57PM -0500, Matthew Nuzum wrote:\n> > Seq Scan on usage_access (cost=0.00..1183396.40 rows=12713851\n> > width=116) (actual time=481796.22..481839.43 rows=3343 loops=1)\n> \n> That's a gross misestimation -- four orders of magnitude off!\n> \n> Have you considering doing this in two steps, first getting out whatever\n> comes from the subquery and then doing the query? Have you ANALYZEd recently?\n> Do you have an index on atime?\n> \n\nYes, there is an index on atime. I'll re-analyze but I'm pretty\ncertain that runs nightly.\n\nRegarding two steps, are you suggesting:\nbegin;\nselect * into temp_table...;\nselect * from temp_table...;\ndrop temp_table;\nrollback;\n\nI have not tried that but will.\n\nBTW, I created an index on clients just for the heck of it and there\nwas no improvement. (actually, a slight degradation)\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Tue, 26 Apr 2005 16:02:12 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: speed up query with max() and odd estimates"
},
{
"msg_contents": "Matthew Nuzum wrote:\n> I have this query that takes a little over 8 min to run:\n> select client,max(atime) as atime from usage_access where atime >=\n> (select atime - '1 hour'::interval from usage_access order by atime\n> desc limit 1) group by client;\n>\n> I think it can go a lot faster. Any suggestions on improving this? DB\n> is 7.3.4 I think. (There is no index on client because it is very big\n> and this data is used infrequently.)\nSwitch to Postgres 8.0.2 :)\n\nActually, I think one problem that you are running into is that postgres\n(at least used to) has problems with selectivity of date fields when\nusing a non-constant parameter.\n\nSo it isn't switching over to using an index, even though you are\nrestricting the access time.\n\nI would guess that creating a multi-column index on (client, atime)\n*might* get you the best performance.\n\nTry adding the index, and then doing this query:\n\nselect atime from usage_access where client = <client_id>\n\torder by atime desc limit 1;\n\nIf you can get that query to use an index, then you can put it in a\nloop. Something like:\n\nCREATE FUNCTION last_client_access() RETURNS SETOF time AS '\nDECLARE\n\tclient_id INT;\n\tclient_time TIME;\nBEGIN\n\tFOR client_id IN SELECT id FROM <client_list> LOOP\n\t\tSELECT INTO client_time atime FROM usage_access\n\t\t\tWHERE client = client_id\n\t\t\tORDER BY atime DESC LIMIT 1;\n\t\tRETURN NEXT client_time;\n\tEND LOOP;\nEND;\n' LANGUAGE plpgsql;\n\nIf you really need high speed, you could create a partial index for each\nclient id, something like:\nCREATE INDEX usage_access_atime_client1_idx ON usage_access(atime)\n\tWHERE client = client1;\n\nBut that is a lot of indexes to maintain.\n\nI'm hoping that the multi-column index would be enough.\n\nYou might also try something like:\n\nSELECT client, max(atime) FROM usage_access\n WHERE atime > now - '1 hour'::interval\n GROUP BY client;\n\nnow is more of a constant, so postgres might have a better time figuring\nout the selectivity. I don't know your table, but I assume you are\nconstantly inserting new rows, and the largest atime value will be close\nto now(). Remember, in this query (and in your original query) clients\nwith their last access time > then 1 hour since the max time (of all\nclients) will not be shown. (Example, client 1 accessed yesterday,\nclient 2 accessed right now your original last atime would be today,\nwhich would hide client 1).\n\nAlso, if it is simply a problem of the planner mis-estimating the\nselectivity of the row, you can alter the statistics for atime.\n\nALTER TABLE usage_access ALTER COLUMN atime SET STATISTICS 1000;\n\nI'm not really sure what else to try, but you might start there.\n\nAlso, I still recommend upgrading to postgres 8, as I think it handles a\nlot of these things better. 
(7.3 is pretty old).\n\nJohn\n=:->\n\n>\n> explain ANALYZE select client,max(atime) as atime from usage_access\n> where atime >= (select atime - '1 hour'::interval from usage_access\n> order by atime desc limit 1) group by client;\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=3525096.28..3620450.16 rows=1271385 width=20)\n> (actual time=482676.95..482693.69 rows=126 loops=1)\n> InitPlan\n> -> Limit (cost=0.00..0.59 rows=1 width=8) (actual\n> time=0.40..0.41 rows=1 loops=1)\n> -> Index Scan Backward using usage_access_atime on\n> usage_access (cost=0.00..22657796.18 rows=38141552 width=8) (actual\n> time=0.39..0.40 rows=2 loops=1)\n> -> Group (cost=3525096.28..3588665.53 rows=12713851 width=20)\n> (actual time=482676.81..482689.29 rows=3343 loops=1)\n> -> Sort (cost=3525096.28..3556880.90 rows=12713851\n> width=20) (actual time=482676.79..482679.16 rows=3343 loops=1)\n> Sort Key: client\n> -> Seq Scan on usage_access (cost=0.00..1183396.40\n> rows=12713851 width=20) (actual time=482641.57..482659.18 rows=3343\n> loops=1)\n> Filter: (atime >= $0)\n> Total runtime: 482694.65 msec\n>\n>\n> I'm starting to understand this, which is quite frightening to me. I\n> thought that maybe if I shrink the number of rows down I could improve\n> things a bit, but my first attempt didn't work. I thought I'd replace\n> the \"from usage_access\" with this query instead:\n> select * from usage_access where atime >= (select atime - '1\n> hour'::interval from usage_access order by atime desc limit 1);\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on usage_access (cost=0.00..1183396.40 rows=12713851\n> width=116) (actual time=481796.22..481839.43 rows=3343 loops=1)\n> Filter: (atime >= $0)\n> InitPlan\n> -> Limit (cost=0.00..0.59 rows=1 width=8) (actual\n> time=0.41..0.42 rows=1 loops=1)\n> -> Index Scan Backward using usage_access_atime on\n> usage_access (cost=0.00..22657796.18 rows=38141552 width=8) (actual\n> time=0.40..0.41 rows=2 loops=1)\n> Total runtime: 481842.47 msec\n>\n> It doesn't look like this will help at all.\n>\n> This table is primarily append, however I just recently deleted a few\n> million rows from the table, if that helps anyone.\n>",
"msg_date": "Tue, 26 Apr 2005 16:51:09 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: speed up query with max() and odd estimates"
},
{
"msg_contents": "On 4/26/05, Steinar H. Gunderson <[email protected]> wrote:\n> On Tue, Apr 26, 2005 at 03:16:57PM -0500, Matthew Nuzum wrote:\n> > Seq Scan on usage_access (cost=0.00..1183396.40 rows=12713851\n> > width=116) (actual time=481796.22..481839.43 rows=3343 loops=1)\n> \n> That's a gross misestimation -- four orders of magnitude off!\n> \n> Have you considering doing this in two steps, first getting out whatever\n> comes from the subquery and then doing the query? \n\nWell, I don't know if the estimates are correct now or not, but I\nfound that your suggestion of doing it in two steps helped a lot.\n\nFor the archives, here's what made a drastic improvement:\n\nThis batch program had an overhead of 25 min to build hash tables\nusing the sql queries. It is now down to about 47 seconds.\n\nThe biggest improvements (bringing it down to 9 min) were to get rid\nof all instances of `select max(field) from ...` and replacing them\nwith `select field from ... order by field desc limit 1`\n\nThen, to get it down to the final 47 seconds I changed this query:\nSELECT client,max(atime) as atime from usage_access where atime >=\n(select atime - '1 hour'::interval from usage_access order by atime\ndesc limit 1) group by client;\n\nTo these three queries:\nSELECT atime - '1 hour'::interval from usage_access order by atime desc limit 1;\nSELECT client, atime into temporary table recent_sessions from\nusage_access where atime >= '%s';\nSELECT client, max(atime) as atime from recent_sessions group by client;\n\nThanks for the help.\n-- \nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Tue, 26 Apr 2005 17:32:54 -0500",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: speed up query with max() and odd estimates"
}
] |
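[Editor's note: illustration only, not part of the archived thread.] A minimal SQL sketch consolidating the two-step rewrite Matthew describes above, assuming the usage_access(client, atime) table from the thread; the literal timestamp is a placeholder for whatever value step 1 returns:

    -- Step 1: compute the cutoff once, instead of leaving it as a subquery
    SELECT atime - '1 hour'::interval AS cutoff
      FROM usage_access
     ORDER BY atime DESC
     LIMIT 1;

    -- Step 2: materialize the small recent slice using that literal, then aggregate it
    SELECT client, atime
      INTO TEMPORARY TABLE recent_sessions
      FROM usage_access
     WHERE atime >= '2005-04-26 16:00:00';  -- paste the cutoff returned by step 1

    SELECT client, max(atime) AS atime
      FROM recent_sessions
     GROUP BY client;

Because the cutoff is a plain literal in step 2, the planner can use the index on atime instead of misestimating the selectivity of the subquery.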
[
{
"msg_contents": "> -----Original Message-----\n> From: Gurmeet Manku [mailto:[email protected]]\n> Sent: Tuesday, April 26, 2005 5:01 PM\n> To: Simon Riggs\n> Cc: Tom Lane; [email protected]; Greg Stark; Marko Ristola;\n> pgsql-perform; [email protected]; Utkarsh Srivastava;\n> [email protected]\n> Subject: Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks\n> suggested?\n> \n> [...]\n> 2. In a single scan, it is possible to estimate n_distinct by using\n> a very simple algorithm:\n> \n> \"Distinct sampling for highly-accurate answers to distinct value\n> queries and event reports\" by Gibbons, VLDB 2001.\n> \n> http://www.aladdin.cs.cmu.edu/papers/pdfs/y2001/dist_sampl.pdf\n> \n> [...]\n\nThis paper looks the most promising, and isn't too different \nfrom what I suggested about collecting stats over the whole table\ncontinuously. What Gibbons does is give a hard upper bound on\nthe sample size by using a logarithmic technique for storing\nsample information. His technique appears to offer very good \nerror bounds and confidence intervals as shown by tests on \nsynthetic and real data. I think it deserves a hard look from \npeople hacking the estimator.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Tue, 26 Apr 2005 17:43:14 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
}
] |
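[Editor's note: illustration only, not from the archived message.] Until a better n_distinct estimator lands, the practical workaround in this era of PostgreSQL is to raise the column's statistics target and re-ANALYZE, as suggested elsewhere in these threads; the table and column names below are placeholders:

    ALTER TABLE mytable ALTER COLUMN mycol SET STATISTICS 1000;
    ANALYZE mytable;

This only enlarges the row sample ANALYZE takes for that column; it does not implement the one-pass distinct-sampling scheme from the Gibbons paper.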
[
{
"msg_contents": "-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Joel Fradkin\nSent: Wednesday, April 27, 2005 9:02 AM\nTo: PostgreSQL Perform\nSubject: [PERFORM] Final decision\n\n\n\nI spent a great deal of time over the past week looking seriously at\nPostgres and MYSQL.\n\nObjectively I am not seeing that much of an improvement in speed with MYSQL,\nand we have a huge investment in postgrs.\n\nSo I am planning on sticking with postgres fro our production database\n(going live this weekend).\n\n \n\nMany people have offered a great deal of help and I appreciate all that time\nand energy.\n\nI did not find any resolutions to my issues with Commandprompt.com (we only\nworked together 2.5 hours).\n\n \n\nMost of my application is working about the same speed as MSSQL server\n(unfortunately its twice the speed box, but as many have pointed out it\ncould be an issue with the 4 proc dell). I spent considerable time with Dell\nand could see my drives are delivering 40 meg per sec.\n\n \n\nThings I still have to make better are my settings in config, I have it set\nto no merge joins and no seq scans.\n\nI am going to have to use flattened history files for reporting (I saw huge\ndifference here the view for audit cube took 10 minutes in explain analyze\nand the flattened file took under one second).\n\n \n\nI understand both of these practices are not desirable, but I am at a place\nwhere I have to get it live and these are items I could not resolve.\n\nI may try some more time with Commanpromt.com, or seek other professional\nhelp.\n\n \n\nIn stress testing I found Postgres was holding up very well (but my IIS\nservers could not handle much of a load to really push the server).\n\nI have a few desktops acting as IIS servers at the moment and if I pushed\npast 50 consecutive users it pretty much blew the server up.\n\nOn inserts that number was like 7 consecutive users and updates was also\nlike 7 users.\n\n \n\nI believe that was totally IIS not postgres, but I am curious as to if using\npostgres odbc will put more stress on the IIS side then MSSQL did.\n\nI did have a question if any folks are using two servers one for reporting\nand one for data entry what system should be the beefier?\n\nI have a 2proc machine I will be using and I can either put Sears off by\nthemselves on this machine or split up functionality and have one for\nreporting and one for inserts and updates; so not sure which machine would\nbe best for which spot (reminder the more robust is a 4proc with 8 gigs and\n2 proc is 4 gigs, both dells).\n\n \n\nThank you for any ideas in this arena.\n\n \n\nJoel Fradkin\n\n \n\n \n\n \n\n \n\n \n\n \n\nYou didnt tell us what OS are you using, windows?\n\nIf you want good performance you must install unix on that machine,\n\n \n\n---\n\n \n\n \n\n\n\n\n\n\n\n\n \n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]]On Behalf Of Joel \n FradkinSent: Wednesday, April 27, 2005 9:02 AMTo: \n PostgreSQL PerformSubject: [PERFORM] Final \n decision\n\nI spent a great deal of time over \n the past week looking seriously at Postgres and MYSQL.\nObjectively I am not seeing that \n much of an improvement in speed with MYSQL, and we have a huge investment in \n postgrs.\nSo I am planning on sticking with \n postgres fro our production database (going live this \n weekend).\n \nMany people have offered a great \n deal of help and I appreciate all that time and energy.\nI did not find any resolutions to \n my issues with 
Commandprompt.com (we only worked together 2.5 \n hours).\n \nMost of my application is working \n about the same speed as MSSQL server (unfortunately its twice the speed box, \n but as many have pointed out it could be an issue with the 4 proc dell). I \n spent considerable time with Dell and could see my drives are delivering 40 \n meg per sec.\n \nThings I still have to make better \n are my settings in config, I have it set to no merge joins and no seq \n scans.\nI am going to have to use \n flattened history files for reporting (I saw huge difference here the view for \n audit cube took 10 minutes in explain analyze and the flattened file took \n under one second).\n \nI understand both of these \n practices are not desirable, but I am at a place where I have to get it live \n and these are items I could not resolve.\nI may try some more time with \n Commanpromt.com, or seek other professional help.\n \nIn stress testing I found Postgres \n was holding up very well (but my IIS servers could not handle much of a load \n to really push the server).\nI have a few desktops acting as \n IIS servers at the moment and if I pushed past 50 consecutive users it pretty \n much blew the server up.\nOn inserts that number was like 7 \n consecutive users and updates was also like 7 users.\n \nI believe that was totally IIS not \n postgres, but I am curious as to if using postgres odbc will put more stress \n on the IIS side then MSSQL did.\nI did have a question if any folks \n are using two servers one for reporting and one for data entry what system \n should be the beefier?\nI have a 2proc machine I will be \n using and I can either put Sears off by themselves on this machine or split up \n functionality and have one for reporting and one for inserts and updates; so \n not sure which machine would be best for which spot (reminder the more robust \n is a 4proc with 8 gigs and 2 proc is 4 gigs, both dells).\n \nThank you for any ideas in this \n arena.\n \nJoel Fradkin\n\n \n \n \n \n \n \nYou didnt tell us what \n OS are you using, windows?\nIf you want good \n performance you must install unix on that \n machine,\n \n---",
"msg_date": "Wed, 27 Apr 2005 08:59:41 -0600",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Sorry I am using Redhat AS4 and postgres 8.0.2\n\nJoel\n\n \n\nYou didnt tell us what OS are you using, windows?\n\nIf you want good performance you must install unix on that machine,\n\n \n\n---\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nSorry I am using Redhat AS4 and postgres\n8.0.2\nJoel\n\n \nYou didnt tell us what OS\nare you using, windows?\nIf you want good\nperformance you must install unix on that machine,\n \n---",
"msg_date": "Wed, 27 Apr 2005 11:17:23 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
}
] |
[
{
"msg_contents": "I spent a great deal of time over the past week looking seriously at\nPostgres and MYSQL.\n\nObjectively I am not seeing that much of an improvement in speed with MYSQL,\nand we have a huge investment in postgrs.\n\nSo I am planning on sticking with postgres fro our production database\n(going live this weekend).\n\n \n\nMany people have offered a great deal of help and I appreciate all that time\nand energy.\n\nI did not find any resolutions to my issues with Commandprompt.com (we only\nworked together 2.5 hours).\n\n \n\nMost of my application is working about the same speed as MSSQL server\n(unfortunately its twice the speed box, but as many have pointed out it\ncould be an issue with the 4 proc dell). I spent considerable time with Dell\nand could see my drives are delivering 40 meg per sec.\n\n \n\nThings I still have to make better are my settings in config, I have it set\nto no merge joins and no seq scans.\n\nI am going to have to use flattened history files for reporting (I saw huge\ndifference here the view for audit cube took 10 minutes in explain analyze\nand the flattened file took under one second).\n\n \n\nI understand both of these practices are not desirable, but I am at a place\nwhere I have to get it live and these are items I could not resolve.\n\nI may try some more time with Commanpromt.com, or seek other professional\nhelp.\n\n \n\nIn stress testing I found Postgres was holding up very well (but my IIS\nservers could not handle much of a load to really push the server).\n\nI have a few desktops acting as IIS servers at the moment and if I pushed\npast 50 consecutive users it pretty much blew the server up.\n\nOn inserts that number was like 7 consecutive users and updates was also\nlike 7 users.\n\n \n\nI believe that was totally IIS not postgres, but I am curious as to if using\npostgres odbc will put more stress on the IIS side then MSSQL did.\n\nI did have a question if any folks are using two servers one for reporting\nand one for data entry what system should be the beefier?\n\nI have a 2proc machine I will be using and I can either put Sears off by\nthemselves on this machine or split up functionality and have one for\nreporting and one for inserts and updates; so not sure which machine would\nbe best for which spot (reminder the more robust is a 4proc with 8 gigs and\n2 proc is 4 gigs, both dells).\n\n \n\nThank you for any ideas in this arena.\n\n \n\nJoel Fradkin\n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\nI spent a great deal of time over the past week looking\nseriously at Postgres and MYSQL.\nObjectively I am not seeing that much of an improvement in\nspeed with MYSQL, and we have a huge investment in postgrs.\nSo I am planning on sticking with postgres fro our\nproduction database (going live this weekend).\n \nMany people have offered a great deal of help and I\nappreciate all that time and energy.\nI did not find any resolutions to my issues with\nCommandprompt.com (we only worked together 2.5 hours).\n \nMost of my application is working about the same speed as\nMSSQL server (unfortunately its twice the speed box, but as many have pointed\nout it could be an issue with the 4 proc dell). 
I spent considerable time with\nDell and could see my drives are delivering 40 meg per sec.\n \nThings I still have to make better are my settings in config,\nI have it set to no merge joins and no seq scans.\nI am going to have to use flattened history files for\nreporting (I saw huge difference here the view for audit cube took 10 minutes\nin explain analyze and the flattened file took under one second).\n \nI understand both of these practices are not desirable, but\nI am at a place where I have to get it live and these are items I could not\nresolve.\nI may try some more time with Commanpromt.com, or seek other\nprofessional help.\n \nIn stress testing I found Postgres was holding up very well\n(but my IIS servers could not handle much of a load to really push the server).\nI have a few desktops acting as IIS servers at the moment\nand if I pushed past 50 consecutive users it pretty much blew the server up.\nOn inserts that number was like 7 consecutive users and\nupdates was also like 7 users.\n \nI believe that was totally IIS not postgres, but I am\ncurious as to if using postgres odbc will put more stress on the IIS side then\nMSSQL did.\nI did have a question if any folks are using two servers one\nfor reporting and one for data entry what system should be the beefier?\nI have a 2proc machine I will be using and I can either put\nSears off by themselves on this machine or split up functionality and have one\nfor reporting and one for inserts and updates; so not sure which machine would\nbe best for which spot (reminder the more robust is a 4proc with 8 gigs and 2\nproc is 4 gigs, both dells).\n \nThank you for any ideas in this arena.\n \nJoel Fradkin",
"msg_date": "Wed, 27 Apr 2005 11:01:41 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Final decision"
},
{
"msg_contents": "\n> \n> I did have a question if any folks are using two servers one for\n> reporting and one for data entry what system should be the beefier?\n\nYeah. We started putting up slaves for reporting purposes and\napplication specific areas using Slony replicating partial data sets to\nvarious locations -- some for reporting.\n\nIf your reports have a long runtime and don't require transactional\nsafety for writes (daily summary written or results aren't recorded in\nthe DB at all) this is probably something to consider.\n\nI understand that PGAdmin makes Slony fairly painless to setup, but it\ncan be time consuming to get going and Slony can add new complications\ndepending on the data size and what you're doing with it -- but they're\nworking hard to reduce the impact of those complications.\n\n> I have a 2proc machine I will be using and I can either put Sears off\n> by themselves on this machine or split up functionality and have one\n> for reporting and one for inserts and updates; so not sure which\n> machine would be best for which spot (reminder the more robust is a\n> 4proc with 8 gigs and 2 proc is 4 gigs, both dells).\n> \n> \n> \n> Thank you for any ideas in this arena.\n> \n> \n> \n> Joel Fradkin\n> \n> \n> \n> \n> \n> \n> \n> \n> \n-- \n\n",
"msg_date": "Wed, 27 Apr 2005 11:23:43 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Joel,\n\n> So I am planning on sticking with postgres fro our production database\n> (going live this weekend).\n\nGlad to have you.\n\n> I did not find any resolutions to my issues with Commandprompt.com (we only\n> worked together 2.5 hours).\n\nBTW, your performance troubleshooting will continue to be hampered if you \ncan't share actual queries and data structure. I strongly suggest that you \nmake a confidentiality contract with a support provider so that you can give \nthem detailed (rather than general) problem reports.\n\n> Most of my application is working about the same speed as MSSQL server\n> (unfortunately its twice the speed box, but as many have pointed out it\n> could be an issue with the 4 proc dell). I spent considerable time with\n> Dell and could see my drives are delivering 40 meg per sec.\n\nFWIW, on a v40z I get 180mb/s. So your disk array on the Dell is less than \nideal ... basically, what you have is a more expensive box, not a faster \none :-(\n\n> Things I still have to make better are my settings in config, I have it set\n> to no merge joins and no seq scans.\n\nYeah, I'm also finding that our estimator underestimates the real cost of \nmerge joins on some systems. Basically we need a sort-cost variable, \nbecause I've found an up to 2x difference in sort cost depending on \narchitecture.\n\n> I am going to have to use flattened history files for reporting (I saw huge\n> difference here the view for audit cube took 10 minutes in explain analyze\n> and the flattened file took under one second).\n> I understand both of these practices are not desirable, but I am at a place\n> where I have to get it live and these are items I could not resolve.\n\nFlattening data for reporting is completely reasonable; I do it all the time.\n\n> I believe that was totally IIS not postgres, but I am curious as to if\n> using postgres odbc will put more stress on the IIS side then MSSQL did.\n\nActually, I think the problem may be ODBC. Our ODBC driver is not the best \nand is currently being re-built from scratch. Is using npgsql, a much \nhigher-performance driver (for .NET) out of the question? According to one \ncompany, npgsql performs better than drivers supplied by Microsoft.\n\n> I did have a question if any folks are using two servers one for reporting\n> and one for data entry what system should be the beefier?\n\nDepends on the relative # of users. This is often a good approach, because \nthe requirements for DW reporting and OLTP are completely different. \nBasically:\nOLTP: Many slow processors, disk array set up for fast writes, moderate shared \nmem, low work_mem.\nDW: Few fast processors, disk array set up for fast reads, high shared mem and \nwork mem.\n\nIf reporting is at least 1/4 of your workload, I'd suggest spinning that off \nto the 2nd machine before putting one client on that machine. That way you \ncan also use the 2nd machine as a failover back-up.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 27 Apr 2005 09:13:35 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Joel Fradkin wrote:\n> I spent a great deal of time over the past week looking seriously at\n> Postgres and MYSQL.\n>\n> Objectively I am not seeing that much of an improvement in speed with\n> MYSQL, and we have a huge investment in postgrs.\n>\n> So I am planning on sticking with postgres fro our production database\n> (going live this weekend).\n\nGlad to hear it. Good luck.\n>\n>\n...\n>\n> Things I still have to make better are my settings in config, I have it\n> set to no merge joins and no seq scans.\n\nJust realize, you probably *don't* want to set that in postgresql.conf.\nYou just want to issue an \"SET enable_seqscan TO off\" before issuing one\nof the queries that are mis-planned.\n\nBecause there are lots of times when merge join and seq scan is actually\nfaster than the alternatives. And since I don't think you tested every\nquery you are going to run, you probably want to let the planner handle\nthe ones it gets right. (Usually it doesn't quite a good job.)\n\nAlso, I second the notion of getting a confidentiality contract. There\nhave been several times where someone had a pathological case, and by\nsending the data to someone (Tom Lane), they were able to track down and\nfix the problem.\n\n>\n> I am going to have to use flattened history files for reporting (I saw\n> huge difference here the view for audit cube took 10 minutes in explain\n> analyze and the flattened file took under one second).\n>\n>\n>\n> I understand both of these practices are not desirable, but I am at a\n> place where I have to get it live and these are items I could not resolve.\n\nNothing wrong with a properly updated flattened table. You just need to\nbe careful to keep it consistent with the rest of the data. (Update\ntriggers/lazy materialization, etc)\n\n>\n> I may try some more time with Commanpromt.com, or seek other\n> professional help.\n>\n>\n>\n> In stress testing I found Postgres was holding up very well (but my IIS\n> servers could not handle much of a load to really push the server).\n>\n> I have a few desktops acting as IIS servers at the moment and if I\n> pushed past 50 consecutive users it pretty much blew the server up.\n>\n> On inserts that number was like 7 consecutive users and updates was also\n> like 7 users.\n>\n>\n>\n> I believe that was totally IIS not postgres, but I am curious as to if\n> using postgres odbc will put more stress on the IIS side then MSSQL did.\n>\n\nWhat do you mean by \"blew up\"? I assume you have IIS on a different\nmachine than the database. Are you saying that the database slowed down\ndramatically, or that the machine crashed, or just that the web\ninterface became unresponsive?\n\n> I did have a question if any folks are using two servers one for\n> reporting and one for data entry what system should be the beefier?\n>\n> I have a 2proc machine I will be using and I can either put Sears off by\n> themselves on this machine or split up functionality and have one for\n> reporting and one for inserts and updates; so not sure which machine\n> would be best for which spot (reminder the more robust is a 4proc with 8\n> gigs and 2 proc is 4 gigs, both dells).\n>\n\nIt probably depends on what queries are being done, and what kind of\ntimes you need. Usually the update machine needs the stronger hardware,\nso that it can do the writing.\n\nBut it depends if you can wait longer to update data than to query data,\nobviously the opposite is true. 
It all depends on load, and that is\npretty much application defined.\n\n>\n>\n> Thank you for any ideas in this arena.\n>\n>\n>\n> Joel Fradkin\n>\n\nJohn\n=:->",
"msg_date": "Wed, 27 Apr 2005 12:20:37 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "BTW, your performance troubleshooting will continue to be hampered if you \ncan't share actual queries and data structure. I strongly suggest that you\n\nmake a confidentiality contract with a support provider so that you can\ngive them detailed (rather than general) problem reports.\n\nI am glad to hear your perspective, maybe my rollout is not as off base as I\nthought.\n\nFYI it is not that I can not share specifics (I have posted a few table\nstructures and views here and on pgsql, I just can not backup the entire\ndatabase and ship it off to a consultant.\n\nWhat I had suggested with Commandprompt was to use remote connectivity for\nhim to have access to our server directly. In this way I can learn by\nwatching what types of test he does and it allows him to do tests with our\ndata set.\n\nOnce I am in production that will not be something I want tests done on, so\nit may have to wait until we get a development box with a similar deployment\n(at the moment development is on a XP machine and production will be on\nLinux (The 4 proc is linux and will be our production).\n\nThank you for letting me know what I can hope to see in the way of disk\naccess on the next hardware procurement, I may email you off list to get the\nspecific brands etc that you found that kind of through put with.\n\n\n\n",
"msg_date": "Wed, 27 Apr 2005 14:17:25 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Just realize, you probably *don't* want to set that in postgresql.conf.\nYou just want to issue an \"SET enable_seqscan TO off\" before issuing one\nof the queries that are mis-planned.\n\nI believe all the tested queries (90 some odd views) saw an improvement.\nI will however take the time to verify this and take your suggestion as I\ncan certainly put the appropriate settings in each as opposed to using the\nconfig option, Thanks for the good advice (I believe Josh from\nCommandprompt.com also suggested this approach and I in my lazy self some\nhow blurred the concept.)\n\n\nAlso, I second the notion of getting a confidentiality contract. There\nhave been several times where someone had a pathological case, and by\nsending the data to someone (Tom Lane), they were able to track down and\nfix the problem.\n\nExcellent point, Our data is confidential, but I should write something to\nallow me to ship concept without confidential, so in the future I can just\nsend a backup and not have it break our agreements, but allow minds greater\nthen my own to see, and feel my issues.\n\n\nWhat do you mean by \"blew up\"? \nIIS testing was being done with an old 2300 and a optiplex both machines\nreached 100%CPU utilization and the test suite (ASP code written in house by\none of programmers) was not returning memory correctly, so it ran out of\nmemory and died. Prior to death I did see cpu utilization on the 4proc linux\nbox running postgres fluctuate and at times hit the 100% level, but the\nserver seemed very stable. I did fix the memory usage of the suite and was\nable to see 50 concurrent users with fairly high RPS especially on select\ntesting, the insert and update seemed to fall apart (many 404 errors etc)\n\n\nI assume you have IIS on a different\nmachine than the database. Are you saying that the database slowed down\ndramatically, or that the machine crashed, or just that the web\ninterface became unresponsive? Just the web interface.\n\nIt probably depends on what queries are being done, and what kind of\ntimes you need. Usually the update machine needs the stronger hardware,\nso that it can do the writing.\n\nBut it depends if you can wait longer to update data than to query data,\nobviously the opposite is true. It all depends on load, and that is\npretty much application defined.\n\nI am guessing our app is like 75% data entry and 25% reporting, but the\nreporting is taking the toll SQL wise.\n\nThis was from my insert test with 15 users.\nTest type: Dynamic \n Simultaneous browser connections: 15 \n Warm up time (secs): 0 \n Test duration: 00:00:03:13 \n Test iterations: 200 \n Detailed test results generated: Yes\nResponse Codes \n \n Response Code: 403 - The server understood the request, but is refusing to\nfulfill it. \n Count: 15 \n Percent (%): 0.29 \n \n \n Response Code: 302 - The requested resource resides temporarily under a\ndifferent URI (Uniform Resource Identifier). \n Count: 200 \n Percent (%): 3.85 \n \n \n Response Code: 200 - The request completed successfully. 
\n Count: 4,980 \n Percent (%): 95.86 \n \nMy select test with 25 users had this\nProperties \n \n Test type: Dynamic \n Simultaneous browser connections: 25 \n Warm up time (secs): 0 \n Test duration: 00:00:06:05 \n Test iterations: 200 \n Detailed test results generated: Yes \n \nSummary \n \n Total number of requests: 187 \n Total number of connections: 200 \n \n Average requests per second: 0.51 \n Average time to first byte (msecs): 30,707.42 \n Average time to last byte (msecs): 30,707.42 \n Average time to last byte per iteration (msecs): 28,711.44 \n \n Number of unique requests made in test: 1 \n Number of unique response codes: 1 \n \nErrors Counts \n \n HTTP: 0 \n DNS: 0 \n Socket: 26 \n \nAdditional Network Statistics \n \n Average bandwidth (bytes/sec): 392.08 \n \n Number of bytes sent (bytes): 64,328 \n Number of bytes received (bytes): 78,780 \n \n Average rate of sent bytes (bytes/sec): 176.24 \n Average rate of received bytes (bytes/sec): 215.84 \n \n Number of connection errors: 0 \n Number of send errors: 13 \n Number of receive errors: 13 \n Number of timeout errors: 0 \n \nResponse Codes \n \n Response Code: 200 - The request completed successfully. \n Count: 187 \n Percent (%): 100.00 \n \n\n\n\nJoel\n\n",
"msg_date": "Wed, 27 Apr 2005 14:47:50 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Joel Fradkin wrote:\n\n...\n\n>\n> I am guessing our app is like 75% data entry and 25% reporting, but the\n> reporting is taking the toll SQL wise.\n>\n> This was from my insert test with 15 users.\n> Test type: Dynamic\n> Simultaneous browser connections: 15\n> Warm up time (secs): 0\n> Test duration: 00:00:03:13\n> Test iterations: 200\n> Detailed test results generated: Yes\n> Response Codes\n>\n> Response Code: 403 - The server understood the request, but is refusing to\n> fulfill it.\n> Count: 15\n> Percent (%): 0.29\n>\n>\n> Response Code: 302 - The requested resource resides temporarily under a\n> different URI (Uniform Resource Identifier).\n> Count: 200\n> Percent (%): 3.85\n>\n>\n> Response Code: 200 - The request completed successfully.\n> Count: 4,980\n> Percent (%): 95.86\n>\n> My select test with 25 users had this\n> Properties\n>\n> Test type: Dynamic\n> Simultaneous browser connections: 25\n> Warm up time (secs): 0\n> Test duration: 00:00:06:05\n> Test iterations: 200\n> Detailed test results generated: Yes\n>\n> Summary\n>\n> Total number of requests: 187\n> Total number of connections: 200\n>\n> Average requests per second: 0.51\n> Average time to first byte (msecs): 30,707.42\n> Average time to last byte (msecs): 30,707.42\n> Average time to last byte per iteration (msecs): 28,711.44\n>\n> Number of unique requests made in test: 1\n> Number of unique response codes: 1\n\nWell, having a bandwidth of 392Bps seems *really* low. I mean that is a\nvery old modem speed (3200 baud).\n\nI'm wondering if you are doing a lot of aggregating in the web server,\nand if you couldn't move some of that into the database by using plpgsql\nfunctions.\n\nThat would take some of the load off of your IIS servers, and possibly\nimprove your overall bandwidth.\n\nBut I do agree, it looks like the select side is where you are hurting.\nIf I understand the numbers correctly, you can do 5k inserts in 3min,\nbut are struggling to do 200 selects in 6min.\n\nJohn\n=:->\n\n>\n> Errors Counts\n>\n> HTTP: 0\n> DNS: 0\n> Socket: 26\n>\n> Additional Network Statistics\n>\n> Average bandwidth (bytes/sec): 392.08\n>\n> Number of bytes sent (bytes): 64,328\n> Number of bytes received (bytes): 78,780\n>\n> Average rate of sent bytes (bytes/sec): 176.24\n> Average rate of received bytes (bytes/sec): 215.84\n>\n> Number of connection errors: 0\n> Number of send errors: 13\n> Number of receive errors: 13\n> Number of timeout errors: 0\n>\n> Response Codes\n>\n> Response Code: 200 - The request completed successfully.\n> Count: 187\n> Percent (%): 100.00\n>\n>\n>\n>\n> Joel\n>",
"msg_date": "Wed, 27 Apr 2005 14:43:48 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
}
] |
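[Editor's note: illustration only, not part of the archived thread.] Two of the stopgaps Joel settled on can be scoped more narrowly than postgresql.conf, as John suggests above. A sketch under the assumption of placeholder object names (viwaudit, flat_audit, clientnum are invented for the example):

    -- scope the planner override to one session/query instead of the config file
    SET enable_seqscan TO off;
    SELECT clientnum, count(*) FROM viwaudit GROUP BY clientnum;  -- the mis-planned report
    SET enable_seqscan TO on;

    -- refresh a flattened reporting table from the slow view
    BEGIN;
    DROP TABLE flat_audit;
    CREATE TABLE flat_audit AS SELECT * FROM viwaudit;
    CREATE INDEX flat_audit_clientnum_idx ON flat_audit (clientnum);
    ANALYZE flat_audit;
    COMMIT;

Keeping the override per-query leaves the planner free to use sequential scans and merge joins where they genuinely win, while the flattened table trades some staleness for fast reporting.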
[
{
"msg_contents": "> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]]\n> Sent: Wednesday, April 27, 2005 10:25 AM\n> To: Andrew Dunstan\n> Cc: Mischa Sandberg; pgsql-perform; [email protected]\n> Subject: Re: [HACKERS] [PERFORM] Bad n_distinct estimation; hacks\n> suggested?\n> \n> [...]\n> Actually, it's more to characterize how large of a sample\n> we need. For example, if we sample 0.005 of disk pages, and\n> get an estimate, and then sample another 0.005 of disk pages\n> and get an estimate which is not even close to the first\n> estimate, then we have an idea that this is a table which \n> defies analysis based on small samples. \n\nI buy that.\n\n> Wheras if the two estimates are < 1.0 stdev apart, we can\n> have good confidence that the table is easily estimated. \n\nI don't buy that. A negative indication is nothing more than\nproof by contradiction. A positive indication is mathematical\ninduction over the set, which in this type of context is \nlogically unsound. There is no reason to believe that two\nsmall samples with a small difference imply that a table is\neasily estimated rather than that you got unlucky in your\nsamples.\n\n> [...]\n> Yes, actually. We need 3 different estimation methods:\n> 1 for tables where we can sample a large % of pages\n> (say, >= 0.1)\n> 1 for tables where we sample a small % of pages but are \n> \"easily estimated\"\n> 1 for tables which are not easily estimated by we can't \n> afford to sample a large % of pages.\n\nI don't buy that the first and second need to be different\nestimation methods. I think you can use the same block\nsample estimator for both, and simply stop sampling at\ndifferent points. If you set the default to be a fixed\nnumber of blocks, you could get a large % of pages on\nsmall tables and a small % of pages on large tables, which\nis exactly how you define the first two cases. However,\nI think such a default should also be overridable to a\n% of the table or a desired accuracy.\n\nOf course, I would recommend the distinct sample technique\nfor the third case.\n\n> If we're doing sampling-based estimation, I really don't\n> want people to lose sight of the fact that page-based random\n> sampling is much less expensive than row-based random\n> sampling. We should really be focusing on methods which \n> are page-based.\n\nOf course, that savings comes at the expense of having to\naccount for factors like clustering within blocks. So block\nsampling is more efficient, but can also be less accurate.\nNonetheless, I agree that of the sampling estimators, block\nsampling is the better technique.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Wed, 27 Apr 2005 10:47:36 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\n\"Dave Held\" <[email protected]> writes:\n\n> > Actually, it's more to characterize how large of a sample\n> > we need. For example, if we sample 0.005 of disk pages, and\n> > get an estimate, and then sample another 0.005 of disk pages\n> > and get an estimate which is not even close to the first\n> > estimate, then we have an idea that this is a table which \n> > defies analysis based on small samples. \n> \n> I buy that.\n\nBetter yet is to use the entire sample you've gathered of .01 and then perform\nanalysis on that sample to see what the confidence interval is. Which is\neffectively the same as what you're proposing except looking at every possible\npartition.\n\nUnfortunately the reality according to the papers that were sent earlier is\nthat you will always find the results disappointing. Until your sample is\nnearly the entire table your estimates for n_distinct will be extremely\nunreliable.\n\n-- \ngreg\n\n",
"msg_date": "27 Apr 2005 13:16:48 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
},
{
"msg_contents": "\nFirst I will comment my original idea.\nSecond I will give another improved suggestion (an idea).\nI hope, that they will be useful for you.\n\n(I don't know, wether the first one was useful at all because it showed,\nthat I and some others of us are not very good with statistics :( )\n\nI haven't looked about the PostgreSQL code, so I don't know, that what \nis possible\nnow, and what is not. I do know, that the full table scan and after that \nincremental\nstatistics changes are a very big change, without looking at the code.\n\n\n\nI meant the following idea:\n- compare two equal sized samples. Then redo the same thing with double\nsized samples. So do lots of unnecessary work.\nCheck out the correlation of the two samples to try to guess the \ndistribution.\n\nSo I tried to give you an idea, not to give you a full answer into the \nwhole problem.\n\n\nI did read some parts of the attached PDFs. They did convince me,\nthat it seems, that the heuristics for the hard cases would actually read\nalmost the whole table in many cases.\n\nI did cover the \"too little sample\" problem by stating that the\nuser should be able to give the minimum size of samples. This way you would\navoid the too small sampling problem. My purpose was not to achieve at\nmost 5% wrong estimates, but to decrease the 2000% wrong estimates, that \nare\nseen now sometimes.\n\nConclusions:\n- No heuristics or similar thing of small samples will grant excellent \nresults.\n- If you need excellent estimates, you need to process the whole table!\n- Some special cases, like primary keys and the unique indexes and special\ncase column types do give easy ways to make estimates:\nFor example, wether a boolean column has zero, one or two distinct \nvalues, it does not matter\nso much ??? Hashing seems the right choise for all of them.\n\nIf I have understund correctly, the full table scans are out of\nquestions for large tables at this time.\n\nThe percentage idea of taking 10% samples seems good.\n\n\nSo here is another suggestion:\n1. Do a full percentage scan, starting at an arbitrary position. If the \nuser's data is not\nhomogenous, this hurts it, but this way it is faster.\nDuring that scan, try to figure out all those columns, that have at most \n100 distinct values.\n\nOf course, with it you can't go into 100% accuracy, but if the full \ntable scan is out of question now,\nit is better, if the accuracy is for example at most ten times wrong.\n\nYou could also improve accuracy by instead of doing a 10% partial table \nscan, you could\ndo 20 pieces of 0,5 percent partial table scans: This would improve \naccuracy a bit, but keep\nthe speed almost the same as the partial table scan.\n\nHere are questions for the statisticians for distinct values calculation:\n\nIf we want at most 1000% tolerance, how big percentage of table's one\ncolumn must be processed?\n\nIf we want at most 500% tolerance, how big percentage of table's one\ncolumn must be processed?\n\nIf we want at most 250% tolerance, how big percentage of table's one\ncolumn must be processed?\n\nBetter to assume, that there are at most 100 distinct values on a table,\nif it helps calculations.\n\nIf we try to get as much with one discontinuous partial table scan\n(0,1-10% sample), here is the information, we can gather:\n\n1. We could gather a histogram for max(100) distinct values for each \ncolumn for every column.\n2. We could measure variance and average, and the number of rows for \nthese 100 distinct values.\n3. 
We could count the number of rows, that didn't match with these 100 \ndistinct values:\nthey were left out from the histogram.\n4. We could get a minimum and a maximum value for each column.\n\n=> We could get exact information about the sample with one 0,1-10% pass \nfor many columns.\n\nWhat you statisticans can gather about these values?\nMy idea is programmatical combined with statistics:\n+ Performance: scan for example 100 blocks each of size 100Mb, because \ndisc I/O\nis much faster this way.\n+ Enables larger table percentage. I hope it helps with the statistics \nformula.\n Required because of more robust statistics: take those blocks at random\n (not over each other) places to decrease the effect from hitting \ninto statistically\n bad parts on the table.\n+ Less table scan passes: scan all columns with limited hashing in the \nfirst pass.\n+ All easy columns are found here with one pass.\n+- Harder columns need an own pass each, but we have some preliminary\n knoledge of them on the given sample after all (minimum and maximum \nvalues\n and the histogram of the 100 distinct values).\n\nMarko Ristola\n\nGreg Stark wrote:\n\n>\"Dave Held\" <[email protected]> writes:\n>\n> \n>\n>>>Actually, it's more to characterize how large of a sample\n>>>we need. For example, if we sample 0.005 of disk pages, and\n>>>get an estimate, and then sample another 0.005 of disk pages\n>>>and get an estimate which is not even close to the first\n>>>estimate, then we have an idea that this is a table which \n>>>defies analysis based on small samples. \n>>> \n>>>\n>>I buy that.\n>> \n>>\n>\n>Better yet is to use the entire sample you've gathered of .01 and then perform\n>analysis on that sample to see what the confidence interval is. Which is\n>effectively the same as what you're proposing except looking at every possible\n>partition.\n>\n>Unfortunately the reality according to the papers that were sent earlier is\n>that you will always find the results disappointing. Until your sample is\n>nearly the entire table your estimates for n_distinct will be extremely\n>unreliable.\n>\n> \n>\n\n",
"msg_date": "Thu, 28 Apr 2005 20:44:37 +0300",
"msg_from": "Marko Ristola <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Bad n_distinct estimation; hacks suggested?"
}
] |
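[Editor's note: illustration only.] A quick way to see how far off the estimator actually is for a given column, with placeholder table and column names; note the second query scans the whole table, so it is only practical as an occasional check:

    -- planner's estimate; a negative value means a fraction of the row count
    SELECT n_distinct
      FROM pg_stats
     WHERE tablename = 'mytable' AND attname = 'mycol';

    -- ground truth
    SELECT count(DISTINCT mycol) FROM mytable;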
[
{
"msg_contents": "I've ported enough of my companies database to Postgres to make\nwarehousing on PG a real possibility. I thought I would toss my data\nmigration architecture ideas out for the list to shoot apart..\n\n1. Script on production server dumps the production database (MSSQL) to\na set of delimited text files. \n2. Script on production server moves files via FTP to a Postgres\ndatabase server. \n3. File Alteration Monitor trigger on PG server executes script when\nlast file is transferred.\n4. Script on PG server drops the target database (by issuing a \"dropdb\"\ncommand).\n5. Script on PG server re-creates target database. (createdb command)\n6. Script on PG server re-creates the tables.\n7. Script on PG server issues COPY commands to import data.\n8. Script on PG server indexes tables.\n9. Script on PG server builds de-normalized reporting tables.\n10. Script on PG server indexes the reporting tables.\n11. Script on PG server creates needed reporting functions.\n12. Vacuum analyze?\n\nMy question revolves around the drop/create for the database. Is their\nsignificant downside to this approach? I'm taking this approach because\nit is simpler from a scripting point of view to simply start from\nscratch on each warehouse update. If I do not drop the database I would\nneed to delete the contents of each table and drop all indexes prior to\nthe COPY/data import. My assumption is all the table deletes and index\ndrops would be more expensive then just droping/re-creating the entire\ndatabase.\n\nAlso, is the Vacuum analyze step needed on a freshly minted database\nwhere the indexes have all been newly created?\n\nThanks in advance for all feedback.\n\n\n-- \n\n",
"msg_date": "Wed, 27 Apr 2005 11:07:14 -0500",
"msg_from": "Richard Rowell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Suggestions for a data-warehouse migration routine"
},
{
"msg_contents": "Richard Rowell wrote:\n> I've ported enough of my companies database to Postgres to make\n> warehousing on PG a real possibility. I thought I would toss my data\n> migration architecture ideas out for the list to shoot apart..\n>\n> 1. Script on production server dumps the production database (MSSQL) to\n> a set of delimited text files.\n> 2. Script on production server moves files via FTP to a Postgres\n> database server.\n> 3. File Alteration Monitor trigger on PG server executes script when\n> last file is transferred.\n> 4. Script on PG server drops the target database (by issuing a \"dropdb\"\n> command).\n> 5. Script on PG server re-creates target database. (createdb command)\n> 6. Script on PG server re-creates the tables.\n> 7. Script on PG server issues COPY commands to import data.\n> 8. Script on PG server indexes tables.\n> 9. Script on PG server builds de-normalized reporting tables.\n> 10. Script on PG server indexes the reporting tables.\n> 11. Script on PG server creates needed reporting functions.\n> 12. Vacuum analyze?\n>\n> My question revolves around the drop/create for the database. Is their\n> significant downside to this approach? I'm taking this approach because\n> it is simpler from a scripting point of view to simply start from\n> scratch on each warehouse update. If I do not drop the database I would\n> need to delete the contents of each table and drop all indexes prior to\n> the COPY/data import. My assumption is all the table deletes and index\n> drops would be more expensive then just droping/re-creating the entire\n> database.\n\nI believe you are correct. If you are going to completely wipe the\ndatabase, just drop it and re-create. Deleting is much slower than\ndropping. (One of the uses of partitioning is so that you can just drop\none of the tables, rather than deleting the entries). Dropping the whole\ndb skips any Foreign Key checks, etc.\n\n>\n> Also, is the Vacuum analyze step needed on a freshly minted database\n> where the indexes have all been newly created?\n>\n> Thanks in advance for all feedback.\n\nANALYZE is needed, since you haven't updated any of your statistics yet.\nSo the planner doesn't really know how many rows there are.\n\nVACUUM probably isn't since everything should be pretty well aligned.\n\nJohn\n=:->",
"msg_date": "Wed, 27 Apr 2005 12:23:56 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for a data-warehouse migration routine"
},
{
"msg_contents": "Quoting Richard Rowell <[email protected]>:\n\n> I've ported enough of my companies database to Postgres to make\n> warehousing on PG a real possibility. I thought I would toss my\n> data\n> migration architecture ideas out for the list to shoot apart..\n> \n[...]\nNot much feedback required.\n\nYes, dropping the entire database is faster and simpler.\nIf your database is small enough that you can rebuild it from scratch\nevery time, go for it.\n\nYes, vacuum analyze required; creating indexes alone does not create\nstatistics.\n\n From a I'd dump an extract of pg_stat[io_]user_(tables|indexes)\nto see how index usage and table load changes over time.\n-- \n\"Dreams come true, not free.\" -- S.Sondheim, ITW \n\n",
"msg_date": "Thu, 28 Apr 2005 08:00:53 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for a data-warehouse migration routine"
}
] |
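[Editor's note: illustration only.] The PostgreSQL side of steps 6-8 and 12 of Richard's routine, with placeholder table, column, and file names (sales_fact and the '|' delimiter are assumptions); run this after createdb:

    CREATE TABLE sales_fact (
        id        integer,
        sale_date date,
        amount    numeric(12,2)
    );

    -- bulk load the delimited extract produced on the MSSQL side
    COPY sales_fact FROM '/data/warehouse/sales_fact.txt' WITH DELIMITER '|';

    -- index after loading, then collect statistics; a VACUUM is unnecessary on a fresh load
    CREATE INDEX sales_fact_date_idx ON sales_fact (sale_date);
    ANALYZE sales_fact;

Creating indexes after the COPY rather than before keeps the load itself fast, and the final ANALYZE gives the planner row counts and distributions for the new tables.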
[
{
"msg_contents": " \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Josh Berkus\n> Sent: 27 April 2005 17:14\n> To: Joel Fradkin\n> Cc: PostgreSQL Perform\n> Subject: Re: [PERFORM] Final decision\n> \n> Actually, I think the problem may be ODBC. Our ODBC driver \n> is not the best \n> and is currently being re-built from scratch. \n\nIt is? No-one told the developers...\n\nRegards, Dave\n\n[and yes, I know Joshua said Command Prompt are rewriting /their/\ndriver]\n",
"msg_date": "Wed, 27 Apr 2005 17:29:09 +0100",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Dave,\n\n> > Actually, I think the problem may be ODBC. Our ODBC driver\n> > is not the best\n> > and is currently being re-built from scratch.\n>\n> It is? No-one told the developers...\n>\n> Regards, Dave\n>\n> [and yes, I know Joshua said Command Prompt are rewriting /their/\n> driver]\n\nOK. Well, let's put it this way: the v3 and v3.5 drivers will not be based \non the current driver, unless you suddenly have a bunch of free time.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 27 Apr 2005 09:36:34 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Dave Page wrote:\n> \n> \n> \n>>-----Original Message-----\n>>From: [email protected] \n>>[mailto:[email protected]] On Behalf Of \n>>Josh Berkus\n>>Sent: 27 April 2005 17:14\n>>To: Joel Fradkin\n>>Cc: PostgreSQL Perform\n>>Subject: Re: [PERFORM] Final decision\n>>\n>>Actually, I think the problem may be ODBC. Our ODBC driver \n>>is not the best \n>>and is currently being re-built from scratch. \n> \n> \n> It is? No-one told the developers...\n\nWe have mentioned it on the list.\n\nhttp://www.linuxdevcenter.com/pub/a/linux/2002/07/16/drake.html\n\n> Regards, Dave\n> \n> [and yes, I know Joshua said Command Prompt are rewriting /their/\n> driver]\n\n:) No we are rewriting a complete OSS driver.\n\nSincerely,\n\nJoshua D. Drake\nCommand Prompt, Inc.\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedication Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n",
"msg_date": "Wed, 27 Apr 2005 09:46:02 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": ">>\n>> It is? No-one told the developers...\n> \n> \n> We have mentioned it on the list.\n> \n> http://www.linuxdevcenter.com/pub/a/linux/2002/07/16/drake.html\n\nOoops ;)\n\nhttp://archives.postgresql.org/pgsql-odbc/2005-03/msg00109.php\n\nSincerely,\n\nJoshua D. Drake\nCommand Prompt, Inc.\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedication Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n",
"msg_date": "Wed, 27 Apr 2005 10:01:55 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Joshua,\n\nThis article was in July 2002, so is there update to this information? \nWhen will a new ODBC driver be available for testing? Is there a release \nof the ODBC driver with better performance than 7.0.3.0200 for a 7.4.x \ndatabase?\n\n\nSteve Poe\n\n>\n> We have mentioned it on the list.\n>\n> http://www.linuxdevcenter.com/pub/a/linux/2002/07/16/drake.html\n>\n>> Regards, Dave\n>>\n>> [and yes, I know Joshua said Command Prompt are rewriting /their/\n>> driver]\n>\n>\n> :) No we are rewriting a complete OSS driver.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n> Command Prompt, Inc.\n>\n>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>\n>\n>\n\n",
"msg_date": "Wed, 27 Apr 2005 12:15:38 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
}
] |
[
{
"msg_contents": "Hello,\n\n \n\nI am trying to understand what I need to do for this system to stop\nusing swap. Maybe it's something simple, or obvious for the situation.\nI'd appreciate some thoughts/suggestions.\n\n \n\nSome background: \n\nThis is a quad XEON (yes, Dell) with 12GB of RAM, pg 7.4...pretty heavy\non concurrent usage. With peak traffic (db allows 1000 connections, in\nline with the number of app servers and connection pools for each)\nfollowing is from 'top' (sorted by mem) Shared_buffers is 170MB,\nsort_mem 2MB. Both WAL and pgdata are on separate LUNs on fibre channel\nstorage, RAID10.\n\n \n\n972 processes: 971 sleeping, 1 running, 0 zombie, 0 stopped\n\nCPU states: cpu user nice system irq softirq iowait idle\n\n total 57.2% 0.0% 23.2% 0.0% 3.6% 82.8% 232.4%\n\n cpu00 22.0% 0.0% 9.1% 0.1% 0.9% 18.7% 48.8%\n\n cpu01 17.5% 0.0% 5.8% 0.0% 2.3% 19.7% 54.4%\n\n cpu02 7.8% 0.0% 3.7% 0.0% 0.0% 20.8% 67.5%\n\n cpu03 9.7% 0.0% 4.4% 0.0% 0.5% 23.6% 61.5%\n\nMem: 12081744k av, 12055220k used, 26524k free, 0k shrd,\n71828k buff\n\n 9020480k actv, 1741348k in_d, 237396k in_c\n\nSwap: 4096532k av, 472872k used, 3623660k free 9911176k\ncached\n\n \n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU\nCOMMAND\n\n21397 postgres 22 0 181M 180M 175M D 25.9 1.5 85:17 0\npostmaster\n\n23820 postgres 15 0 178M 177M 175M S 0.0 1.5 1:53 3\npostmaster\n\n24428 postgres 15 0 178M 177M 175M S 0.0 1.5 1:35 3\npostmaster\n\n24392 postgres 15 0 178M 177M 175M S 2.7 1.5 2:07 2\npostmaster\n\n23610 postgres 15 0 178M 177M 175M S 0.0 1.5 0:29 2\npostmaster\n\n24395 postgres 15 0 178M 177M 175M S 0.0 1.5 1:12 1\npostmaster\n\n...\n\n...\n\n-bash-2.05b$ free\n\n total used free shared\nbuffers cached\n\nMem: 12081744 12055536 26208 0 66704\n9943988\n\n-/+ buffers/cache: 2044844 10036900\n\nSwap: 4096532 512744 3583788\n\n \n\nAs you can see the system starts utilizing swap at some point, with so\nmany processes. Some time ago we had decided to keep the connections\nfrom the pool open for longer periods of time, possibly to avoid\nconnection maintenance overhead on the db. At that time the traffic was\nnot as high as it is today, which might be causing this, because for the\nmost part, non-idle postmaster processes are only a few, except when the\nsystem becomes busy and suddenly you see a lot of selects piling up, and\nload averages shooting upwards. I am thinking closing out connections\nsooner might help the system release some memory to the kernel. Swapping\nadds up to the IO, although OS is on separate channel than postgres.\n\n \n\nI can add more memory, but I want to make sure I haven't missed out\nsomething obvious.\n\n \n\nThanks!\n\nAnjan\n\n \n\n \n************************************************************************\n******************\nThis e-mail and any files transmitted with it are intended for the use\nof the \naddressee(s) only and may be confidential and covered by the\nattorney/client \nand other privileges. If you received this e-mail in error, please\nnotify the \nsender; do not disclose, copy, distribute, or take any action in\nreliance on \nthe contents of this information; and delete it from your system. Any\nother \nuse of this e-mail is prohibited.\n************************************************************************\n******************\n\n \n\n\n\n\n\n\n\n\n\n\n\n\nHello,\n \nI am trying to understand what I need to do for this system\nto stop using swap. Maybe it’s something simple, or obvious for the\nsituation. 
I’d appreciate some thoughts/suggestions.\n \nSome background: \nThis is a quad XEON (yes, Dell) with 12GB of RAM, pg 7.4…pretty\nheavy on concurrent usage. With peak traffic (db allows 1000 connections, in\nline with the number of app servers and connection pools for each) following is\nfrom ‘top’ (sorted by mem) Shared_buffers is 170MB, sort_mem 2MB.\nBoth WAL and pgdata are on separate LUNs on fibre channel storage, RAID10.\n \n972 processes: 971 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: cpu \nuser nice system irq \nsoftirq iowait idle\n \ntotal 57.2% 0.0% 23.2% \n0.0% 3.6% 82.8% 232.4%\n \ncpu00 22.0% 0.0% \n9.1% 0.1% 0.9% \n18.7% 48.8%\n \ncpu01 17.5% 0.0% \n5.8% 0.0% 2.3% \n19.7% 54.4%\n \ncpu02 7.8% 0.0% \n3.7% 0.0% 0.0% \n20.8% 67.5%\n \ncpu03 9.7% 0.0% \n4.4% 0.0% 0.5% \n23.6% 61.5%\nMem: 12081744k av, 12055220k used, 26524k\nfree, 0k shrd, 71828k buff\n \n9020480k actv, 1741348k in_d, 237396k in_c\nSwap: 4096532k av, 472872k used, 3623660k\nfree \n9911176k cached\n \n PID USER PRI NI \nSIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n21397 postgres 22 0 181M 180M \n175M D 25.9 1.5 85:17 0 postmaster\n23820 postgres 15 0 178M 177M \n175M S 0.0 1.5 1:53 3\npostmaster\n24428 postgres 15 0 178M 177M \n175M S 0.0 1.5 1:35 3\npostmaster\n24392 postgres 15 0 178M 177M \n175M S 2.7 1.5 2:07 2\npostmaster\n23610 postgres 15 0 178M 177M \n175M S 0.0 1.5 0:29 2\npostmaster\n24395 postgres 15 0 178M 177M \n175M S 0.0 1.5 1:12 1\npostmaster\n…\n…\n-bash-2.05b$ free\n \n total used \nfree shared \nbuffers cached\nMem: 12081744 \n12055536 \n26208 \n0 66704 9943988\n-/+ buffers/cache: 2044844 \n10036900\nSwap: \n4096532 512744 3583788\n \nAs you can see the system starts utilizing swap at some\npoint, with so many processes. Some time ago we had decided to keep the\nconnections from the pool open for longer periods of time, possibly to avoid\nconnection maintenance overhead on the db. At that time the traffic was not as\nhigh as it is today, which might be causing this, because for the most part,\nnon-idle postmaster processes are only a few, except when the system becomes\nbusy and suddenly you see a lot of selects piling up, and load averages\nshooting upwards. I am thinking closing out connections sooner might help the\nsystem release some memory to the kernel. Swapping adds up to the IO, although\nOS is on separate channel than postgres.\n \nI can add more memory, but I want to make sure I haven’t\nmissed out something obvious.\n \nThanks!\nAnjan\n \n ******************************************************************************************This e-mail and any files transmitted with it are intended for the use of the addressee(s) only and may be confidential and covered by the attorney/client and other privileges. If you received this e-mail in error, please notify the sender; do not disclose, copy, distribute, or take any action in reliance on the contents of this information; and delete it from your system. Any other use of this e-mail is prohibited.******************************************************************************************",
"msg_date": "Wed, 27 Apr 2005 13:48:15 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is this system swapping?"
},
{
"msg_contents": "\"Anjan Dave\" <[email protected]> writes:\n\n> Some background: \n> \n> This is a quad XEON (yes, Dell) with 12GB of RAM, pg 7.4...pretty heavy\n> on concurrent usage. With peak traffic (db allows 1000 connections, in\n> line with the number of app servers and connection pools for each)\n> following is from 'top' (sorted by mem) Shared_buffers is 170MB,\n> sort_mem 2MB. Both WAL and pgdata are on separate LUNs on fibre channel\n> storage, RAID10.\n> \n> 972 processes: 971 sleeping, 1 running, 0 zombie, 0 stopped\n> \n> CPU states: cpu user nice system irq softirq iowait idle\n> total 57.2% 0.0% 23.2% 0.0% 3.6% 82.8% 232.4%\n\nThis looks to me like most of your server processes are sitting around idle\nmost of the time.\n\n> 21397 postgres 22 0 181M 180M 175M D 25.9 1.5 85:17 0\n> postmaster\n> \n> 23820 postgres 15 0 178M 177M 175M S 0.0 1.5 1:53 3\n> postmaster\n\nSo each process is taking up 8-11M of ram beyond the shared memory. 1,000 x\n10M is 10G. Add in some memory for page tables and kernel data structures, as\nwell as the kernel's need to keep some memory set aside for filesystem buffers\n(what you really want all that memory being used for anyways) and you've used\nup all your 12G.\n\nI would seriously look at tuning those connection pools down. A lot. If your\nserver processes are sitting idle over half the time I would at least cut it\nby a factor of 2.\n\nWorking the other direction: you have four processors (I guess you have\nhyperthreading turned off?) so ideally what you want is four runnable\nprocesses at all times and as few others as possible. If your load typically\nspends about half the time waiting on i/o (which is what that top output says)\nthen you want a total of 8 connections.\n\nRealistically you might not be able to predict which app server will be\nproviding the load at any given time, so you might want 8 connections per app\nserver. \n\nAnd you might have some load that's more i/o intensive than the 50% i/o load\nshown here. Say you think some loads will be 80% i/o, you might want 20\nconnections for those loads. If you had 10 app servers with 20 connections\neach for a total of 200 connections I suspect that would be closer to right\nthan having 1,000 connections.\n\n200 connections would consume 2G of ram leaving you with 10G of filesystem\ncache. Which might in turn decrease the percentage of time waiting on i/o,\nwhich would decrease the number of processes you need even further...\n\n-- \ngreg\n\n",
"msg_date": "27 Apr 2005 14:29:12 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is this system swapping?"
},
{
"msg_contents": "\nOn Apr 27, 2005, at 1:48 PM, Anjan Dave wrote:\n\n> As you can see the system starts utilizing swap at some point, with so \n> many processes. Some time ago we had decided to keep the connections \n> from the pool open for longer\n\nYou've shown the system has used swap but not that it is swapping. \nHaving swap in use is fine - there is likely plenty of code and whatnot \nthat is not being used so it dumped it out to swap. However if you are \nactively moving data to/from swap that is bad. Very bad. Especially on \nlinux.\n\nTo tell if you are swapping you need to watch the output of say, vmstat \n1 and look at the si and so columns.\n\nLinux is very swap happy and likes to swap things for fun and profit.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Wed, 27 Apr 2005 14:29:42 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is this system swapping?"
},
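A minimal sketch of the same check driven from code, for anyone who wants to log swap activity alongside application metrics. It assumes a Linux 2.6-style /proc/vmstat that exposes pswpin/pswpout counters (kernel dependent; older 2.4 kernels report swap activity elsewhere) and simply prints per-second swap-in/swap-out deltas, the same signal as the si/so columns of vmstat 1:

    import java.io.BufferedReader;
    import java.io.FileReader;

    // Polls /proc/vmstat once a second and prints page swap-in/out deltas,
    // i.e. the same numbers vmstat reports in its si/so columns.
    public class SwapWatch {
        static long[] readCounters() throws Exception {
            long in = 0, out = 0;
            BufferedReader r = new BufferedReader(new FileReader("/proc/vmstat"));
            String line;
            while ((line = r.readLine()) != null) {
                String[] f = line.split("\\s+");
                if (f[0].equals("pswpin"))  in  = Long.parseLong(f[1]);
                if (f[0].equals("pswpout")) out = Long.parseLong(f[1]);
            }
            r.close();
            return new long[] { in, out };
        }

        public static void main(String[] args) throws Exception {
            long[] prev = readCounters();
            while (true) {
                Thread.sleep(1000);
                long[] cur = readCounters();
                System.out.println("si=" + (cur[0] - prev[0]) + " so=" + (cur[1] - prev[1]));
                prev = cur;
            }
        }
    }

Sustained non-zero deltas under load are the bad sign Jeff describes; swap that was used once and then never touched again is not.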
{
"msg_contents": "\nOn Apr 27, 2005, at 2:29 PM, Greg Stark wrote:\n\n> \"AI would seriously look at tuning those connection pools down. A lot. \n> If your\n> server processes are sitting idle over half the time I would at least \n> cut it\n> by a factor of 2.\n>\n\nAre you (Anjan) using real or fake connection pooling - ie pgpool \nversus php's persistent connections ? I'd strongly recommend looking \nat pgpool. it does connection pooling correctly (A set of X connections \nshared among the entire box rather than 1 per web server)\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Wed, 27 Apr 2005 15:28:56 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is this system swapping?"
},
{
"msg_contents": "Jeff <[email protected]> writes:\n\n> Are you (Anjan) using real or fake connection pooling - ie pgpool versus php's\n> persistent connections ? I'd strongly recommend looking at pgpool. it does\n> connection pooling correctly (A set of X connections shared among the entire\n> box rather than 1 per web server)\n\nHaving one connection per web process isn't \"fake connection pooling\", it's a\ncompletely different arrangement. And there's nothing \"incorrect\" about it. \n\nIn fact I think it's generally superior to having a layer like pgpool having\nto hand off all your database communication. Having to do an extra context\nswitch to handle every database communication is crazy. \n\nFor typical web sites where the database is the only slow component there's\nnot much point in having more web server processes than connections anyways,\nAll your doing is transferring the wait time from waiting for a web server\nprocess to waiting for a database process.\n\nMost applications that find they need connection pooling are using it to work\naround a poorly architected system that is mixing static requests (like\nimages) and database driven requests in the same web server.\n\nHowever, your application sounds like it's more involved than a typical web\nserver. If it's handling many slow resources, such as connections to multiple\ndatabases, SOAP services, mail, or other network services then you may well\nneed that many processes. In which case you'll need something like pgpool.\n\n-- \ngreg\n\n",
"msg_date": "27 Apr 2005 19:46:10 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is this system swapping?"
},
{
"msg_contents": "Greg,\n\n> In fact I think it's generally superior to having a layer like pgpool\n> having to hand off all your database communication. Having to do an extra\n> context switch to handle every database communication is crazy.\n\nAlthough, one of their issues is that their database connection pooling is \nper-server. Which means that a safety margin of pre-allocated connections \n(something they need since they get bursts of 1000 new users in a few \nseconds) has to be maintained per server, increasing the total number of \nconnections. \n\nSo a pooling system that allowed them to hold 100 free connections centrally \nrather than 10 per server might be a win.\n\nBetter would be getting some of this stuff offloaded onto database replication \nslaves.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 27 Apr 2005 17:02:49 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is this system swapping?"
},
{
"msg_contents": "\nOn Apr 27, 2005, at 7:46 PM, Greg Stark wrote:\n\n> In fact I think it's generally superior to having a layer like pgpool \n> having\n> to hand off all your database communication. Having to do an extra \n> context\n> switch to handle every database communication is crazy.\n>\n\nI suppose this depends on how many machines / how much traffic you have.\n\nIn one setup I run here I get away with 32 * 4 db connections instead \nof 500 * 4. Pretty simple to see the savings on the db machine. (Yes, \nit is a \"bad design\" as you said where static & dynamic content are \nserved from the same box. However it also saves money since I don't \nneed machines sitting around serving up pixel.gif vs \nmyBigApplication.cgi)\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Thu, 28 Apr 2005 08:13:43 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is this system swapping?"
}
] |
[
{
"msg_contents": "Sorry, I didn't attach vmstat, the system does actively swap pages. Not\nto the point where it crawls, but for some brief periods the console\nbecomes a bit unresponsive. I am taking this as a sign to prevent future\nproblems.\n\nanjan\n\n-----Original Message-----\nFrom: Jeff [mailto:[email protected]] \nSent: Wednesday, April 27, 2005 2:30 PM\nTo: Anjan Dave\nCc: [email protected]\nSubject: Re: [PERFORM] Why is this system swapping?\n\n\nOn Apr 27, 2005, at 1:48 PM, Anjan Dave wrote:\n\n> As you can see the system starts utilizing swap at some point, with so\n\n> many processes. Some time ago we had decided to keep the connections \n> from the pool open for longer\n\nYou've shown the system has used swap but not that it is swapping. \nHaving swap in use is fine - there is likely plenty of code and whatnot \nthat is not being used so it dumped it out to swap. However if you are \nactively moving data to/from swap that is bad. Very bad. Especially on \nlinux.\n\nTo tell if you are swapping you need to watch the output of say, vmstat \n1 and look at the si and so columns.\n\nLinux is very swap happy and likes to swap things for fun and profit.\n\n--\n\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 27 Apr 2005 14:44:27 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is this system swapping?"
}
] |
[
{
"msg_contents": " \n\n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]] \n> Sent: 27 April 2005 17:46\n> To: Dave Page\n> Cc: Josh Berkus; Joel Fradkin; PostgreSQL Perform\n> Subject: Re: [PERFORM] Final decision\n> \n> > It is? No-one told the developers...\n> \n> We have mentioned it on the list.\n\nErr, yes. But that's not quite the same as core telling us the current\ndriver is being replaced.\n\nRegards, Dave.\n",
"msg_date": "Wed, 27 Apr 2005 20:47:17 +0100",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Dave, folks,\n\n> Err, yes. But that's not quite the same as core telling us the current\n> driver is being replaced.\n\nSorry, I spoke off the cuff. I also was unaware that work on the current \ndriver had renewed. Us Core people are not omnicient, believe it or don't.\n\nMind you, having 2 different teams working on two different ODBC drivers is a \nproblem for another list ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 27 Apr 2005 20:09:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "On Wed, Apr 27, 2005 at 08:09:27PM -0700, Josh Berkus wrote:\n\n> Mind you, having 2 different teams working on two different ODBC drivers is a \n> problem for another list ...\n\nOnly two? I thought another commercial entity was also working on their\nown ODBC driver, so there may be three of them.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Always assume the user will do much worse than the stupidest thing\nyou can imagine.\" (Julien PUYDT)\n",
"msg_date": "Wed, 27 Apr 2005 23:36:37 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "ODBC driver overpopulation (was Re: Final decision)"
},
{
"msg_contents": ">>Mind you, having 2 different teams working on two different ODBC drivers is a \n>>problem for another list ...\n> \n> \n> Only two? I thought another commercial entity was also working on their\n> own ODBC driver, so there may be three of them.\n\nWell I only know of one company actually working on ODBC actively and \nthat is Command Prompt, If there are others I would like to hear about \nit because I would rather work with someone than against them.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n",
"msg_date": "Wed, 27 Apr 2005 20:38:34 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ODBC driver overpopulation (was Re: Final decision)"
},
{
"msg_contents": "Dave Page wrote:\n> \n> \n> \n>>-----Original Message-----\n>>From: Joshua D. Drake [mailto:[email protected]] \n>>Sent: 27 April 2005 17:46\n>>To: Dave Page\n>>Cc: Josh Berkus; Joel Fradkin; PostgreSQL Perform\n>>Subject: Re: [PERFORM] Final decision\n>>\n>>\n>>>It is? No-one told the developers...\n>>\n>>We have mentioned it on the list.\n> \n> \n> Err, yes. But that's not quite the same as core telling us the current\n> driver is being replaced.\n\nWell I don't think anyone knew that the current driver is still being \nmaintained?\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Regards, Dave.\n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n",
"msg_date": "Wed, 27 Apr 2005 20:43:53 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> >>Mind you, having 2 different teams working on two different ODBC drivers is a \n> >>problem for another list ...\n> > \n> > \n> > Only two? I thought another commercial entity was also working on their\n> > own ODBC driver, so there may be three of them.\n> \n> Well I only know of one company actually working on ODBC actively and \n> that is Command Prompt, If there are others I would like to hear about \n> it because I would rather work with someone than against them.\n\nWell, you should talk to Pervasive because they have a team working on\nimproving the existing driver. I am sure they would want to work\ntogether too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 27 Apr 2005 23:44:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ODBC driver overpopulation (was Re: [PERFORM] Final decision)"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Dave Page wrote:\n> > \n> > \n> > \n> >>-----Original Message-----\n> >>From: Joshua D. Drake [mailto:[email protected]] \n> >>Sent: 27 April 2005 17:46\n> >>To: Dave Page\n> >>Cc: Josh Berkus; Joel Fradkin; PostgreSQL Perform\n> >>Subject: Re: [PERFORM] Final decision\n> >>\n> >>\n> >>>It is? No-one told the developers...\n> >>\n> >>We have mentioned it on the list.\n> > \n> > \n> > Err, yes. But that's not quite the same as core telling us the current\n> > driver is being replaced.\n> \n> Well I don't think anyone knew that the current driver is still being \n> maintained?\n\nWe have been looking for someone to take over ODBC and Pervasive agreed\nto do it, but there wasn't a big announcement about it. I have\ndiscussed this with them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 27 Apr 2005 23:47:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Final decision"
}
] |
[
{
"msg_contents": "Yes, HT is turned off (I haven't seen any recommendations to keep it\non).\n\nThis is when we were seeing 30 to 50% less traffic (users) than today -\nwe didn't want the idle connections in the pool to expire too soon\n(default 30 secs, after which it goes back to pool) and reopen it\nquickly, or not have sufficient available (default 20 conns, we raised\nit to 50), so we figured a number per app server (50) and set that to\nexpire after a very long time, so as to avoid any overhead, and always\nhave the connection available whenever needed, without opening a new\none. \n\nBut now, for *some* reason, in some part of the day, we use up almost\nall connections in each app's pool. After that since they are set to\nexpire after a long time, they remain there, taking up DB resources.\n\nI will be trimming down the idle-timeout to a few minutes first, see if\nthat helps.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Greg Stark [mailto:[email protected]] \nSent: Wednesday, April 27, 2005 2:29 PM\nTo: Anjan Dave\nCc: [email protected]\nSubject: Re: [PERFORM] Why is this system swapping?\n\n\"Anjan Dave\" <[email protected]> writes:\n\n> Some background: \n> \n> This is a quad XEON (yes, Dell) with 12GB of RAM, pg 7.4...pretty\nheavy\n> on concurrent usage. With peak traffic (db allows 1000 connections, in\n> line with the number of app servers and connection pools for each)\n> following is from 'top' (sorted by mem) Shared_buffers is 170MB,\n> sort_mem 2MB. Both WAL and pgdata are on separate LUNs on fibre\nchannel\n> storage, RAID10.\n> \n> 972 processes: 971 sleeping, 1 running, 0 zombie, 0 stopped\n> \n> CPU states: cpu user nice system irq softirq iowait\nidle\n> total 57.2% 0.0% 23.2% 0.0% 3.6% 82.8%\n232.4%\n\nThis looks to me like most of your server processes are sitting around\nidle\nmost of the time.\n\n> 21397 postgres 22 0 181M 180M 175M D 25.9 1.5 85:17 0\n> postmaster\n> \n> 23820 postgres 15 0 178M 177M 175M S 0.0 1.5 1:53 3\n> postmaster\n\nSo each process is taking up 8-11M of ram beyond the shared memory.\n1,000 x\n10M is 10G. Add in some memory for page tables and kernel data\nstructures, as\nwell as the kernel's need to keep some memory set aside for filesystem\nbuffers\n(what you really want all that memory being used for anyways) and you've\nused\nup all your 12G.\n\nI would seriously look at tuning those connection pools down. A lot. If\nyour\nserver processes are sitting idle over half the time I would at least\ncut it\nby a factor of 2.\n\nWorking the other direction: you have four processors (I guess you have\nhyperthreading turned off?) so ideally what you want is four runnable\nprocesses at all times and as few others as possible. If your load\ntypically\nspends about half the time waiting on i/o (which is what that top output\nsays)\nthen you want a total of 8 connections.\n\nRealistically you might not be able to predict which app server will be\nproviding the load at any given time, so you might want 8 connections\nper app\nserver. \n\nAnd you might have some load that's more i/o intensive than the 50% i/o\nload\nshown here. Say you think some loads will be 80% i/o, you might want 20\nconnections for those loads. If you had 10 app servers with 20\nconnections\neach for a total of 200 connections I suspect that would be closer to\nright\nthan having 1,000 connections.\n\n200 connections would consume 2G of ram leaving you with 10G of\nfilesystem\ncache. 
Which might in turn decrease the percentage of time waiting on\ni/o,\nwhich would decrease the number of processes you need even further...\n\n-- \n\ngreg\n\n\n",
"msg_date": "Wed, 27 Apr 2005 16:53:36 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is this system swapping?"
}
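To make the pool-tuning discussion concrete, here is a rough sketch of the two knobs being talked about: a hard cap on open connections per app server, and an idle timeout measured in minutes rather than hours. It is illustrative only, not Resin's or pgpool's implementation, and every name and number in it is hypothetical:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // A toy bounded pool: never more than maxConn backends open from this JVM,
    // and idle connections are closed after idleMillis instead of being held
    // open indefinitely, so the database server gets its memory back.
    public class TinyPool {
        private static class Idle {
            final Connection c; final long since;
            Idle(Connection c, long since) { this.c = c; this.since = since; }
        }

        private final String url, user, pass;
        private final int maxConn;
        private final long idleMillis;
        private final Deque<Idle> idle = new ArrayDeque<Idle>();
        private int open = 0;

        TinyPool(String url, String user, String pass, int maxConn, long idleMillis) {
            this.url = url; this.user = user; this.pass = pass;
            this.maxConn = maxConn; this.idleMillis = idleMillis;
        }

        synchronized Connection get() throws SQLException, InterruptedException {
            for (;;) {
                Idle i = idle.pollFirst();
                if (i != null) return i.c;          // reuse the most recently released connection
                if (open < maxConn) {
                    open++;
                    return DriverManager.getConnection(url, user, pass);
                }
                wait();                              // cap reached: block until a release
            }
        }

        synchronized void release(Connection c) {
            idle.addFirst(new Idle(c, System.currentTimeMillis()));
            notify();
        }

        // Call periodically (e.g. once a minute) to close long-idle connections.
        synchronized void reap() throws SQLException {
            long cutoff = System.currentTimeMillis() - idleMillis;
            while (!idle.isEmpty() && idle.peekLast().since < cutoff) {
                idle.pollLast().c.close();
                open--;
            }
        }
    }

If there are 20 app servers, something like new TinyPool(url, user, pass, 20, 120000) caps the worst case at 400 backends instead of 1000, and anything idle for more than two minutes is handed back to the server.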
] |
[
{
"msg_contents": "Using Resin's connection pooling. We are looking into pgpool alongside\nslony to separate some reporting functionality.\n\n-anjan\n\n-----Original Message-----\nFrom: Jeff [mailto:[email protected]] \nSent: Wednesday, April 27, 2005 3:29 PM\nTo: Greg Stark\nCc: Anjan Dave; [email protected]\nSubject: Re: [PERFORM] Why is this system swapping?\n\n\nOn Apr 27, 2005, at 2:29 PM, Greg Stark wrote:\n\n> \"AI would seriously look at tuning those connection pools down. A lot.\n\n> If your\n> server processes are sitting idle over half the time I would at least \n> cut it\n> by a factor of 2.\n>\n\nAre you (Anjan) using real or fake connection pooling - ie pgpool \nversus php's persistent connections ? I'd strongly recommend looking \nat pgpool. it does connection pooling correctly (A set of X connections \nshared among the entire box rather than 1 per web server)\n\n--\n\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Wed, 27 Apr 2005 16:55:32 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is this system swapping?"
}
] |
[
{
"msg_contents": " \n\n> -----Original Message-----\n> From: Josh Berkus [mailto:[email protected]] \n> Sent: 28 April 2005 04:09\n> To: Dave Page\n> Cc: Joshua D. Drake; Joel Fradkin; PostgreSQL Perform\n> Subject: Re: [PERFORM] Final decision\n> \n> Dave, folks,\n> \n> > Err, yes. But that's not quite the same as core telling us \n> the current\n> > driver is being replaced.\n> \n> Sorry, I spoke off the cuff. I also was unaware that work \n> on the current \n> driver had renewed. Us Core people are not omnicient, \n> believe it or don't.\n\nI was under the impression that you and Bruce negiotiated the developer\ntime! Certainly you and I chatted about it on IRC once... Ahh, well.\nNever mind.\n\n> Mind you, having 2 different teams working on two different \n> ODBC drivers is a \n> problem for another list ...\n\nAbsolutely.\n\nRegards, Dave.\n",
"msg_date": "Thu, 28 Apr 2005 08:39:39 +0100",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Final decision"
}
] |
[
{
"msg_contents": "\nHi folks,\n\n\nthere's often some talk about indices cannot be used if datatypes\ndont match. \n\nOn a larger (and long time growed) application I tend to use OID \nfor references on new tables while old stuff is using integer.\nIs the planner smart enough to see both as compatible datatype\nor is manual casting required ?\n\n\nthx\n-- \n---------------------------------------------------------------------\n Enrico Weigelt == metux IT service\n phone: +49 36207 519931 www: http://www.metux.de/\n fax: +49 36207 519932 email: [email protected]\n---------------------------------------------------------------------\n Realtime Forex/Stock Exchange trading powered by postgresSQL :))\n http://www.fxignal.net/\n---------------------------------------------------------------------\n",
"msg_date": "Fri, 29 Apr 2005 04:35:13 +0200",
"msg_from": "Enrico Weigelt <[email protected]>",
"msg_from_op": true,
"msg_subject": "index on different types"
},
{
"msg_contents": "On Fri, Apr 29, 2005 at 04:35:13AM +0200, Enrico Weigelt wrote:\n> \n> there's often some talk about indices cannot be used if datatypes\n> dont match. \n\nPostgreSQL 8.0 is smarter than previous versions in this respect.\nIt'll use an index if possible even when the types don't match.\n\n> On a larger (and long time growed) application I tend to use OID \n> for references on new tables while old stuff is using integer.\n\nIf you're using OIDs as primary keys then you might wish to reconsider.\nSee the caveats in the documentation and in the FAQ:\n\nhttp://www.postgresql.org/docs/8.0/interactive/datatype-oid.html\nhttp://www.postgresql.org/docs/faqs.FAQ.html#4.12\n\n> Is the planner smart enough to see both as compatible datatype\n> or is manual casting required ?\n\nYou can use EXPLAIN to see what the planner will do, but be aware\nthat the planner won't always use an index even if it could: if it\nthinks a sequential scan would be faster then it won't use an index.\nTo see if using an index is possible, you could set enable_seqscan\nto off before executing EXPLAIN. In any case, a foreign key column\nprobably ought to have the same type as the column it references --\nis there a reason for making them different?\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n",
"msg_date": "Thu, 28 Apr 2005 21:28:34 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index on different types"
}
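A small sketch of running that check from client code rather than from psql; the table and column names are placeholders, and SET enable_seqscan only affects the current session:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Disables seqscans for this session only, then prints the plan so you can
    // see whether the planner is able to use the index at all.
    public class ExplainCheck {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
            Statement st = conn.createStatement();
            st.execute("SET enable_seqscan = off");
            ResultSet rs = st.executeQuery(
                    "EXPLAIN SELECT * FROM orders WHERE customer_id = 42");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
            rs.close();
            st.close();
            conn.close();
        }
    }

If the plan still shows a sequential scan with seqscans disabled, the index genuinely cannot be used for that predicate, which is usually the point where matching the column types (or adding an explicit cast) is the fix.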
] |
[
{
"msg_contents": "Howdy!\n\nI'm converting an application to be using postgresql instead of oracle.\nThere seems to be only one issue left, batch inserts in postgresql seem\nsignificant slower than in oracle. I have about 200 batch jobs, each\nconsisting of about 14 000 inserts. Each job takes 1.3 seconds in\npostgresql and 0.25 seconds in oracle. With 200 jobs this means several\nmore minutes to complete the task. By fixing this I think the\napplication using postgresql over all would be faster than when using\noracle.\n\nI'd like some advice of what could enhance the performance. I use\nPostgreSQL 8. The table that is loaded with a bunch of data has no\nindexes. I use Gentoo Linux on a P4 3GHz with 1GB RAM. I use JDBC from\njdbc.postgresql.org, postgresql-8.0-311.jdbc3.jar.\n\nI've changed a few parameters as I hoped that would help me:\nwal_buffers = 64\ncheckpoint_segments = 10\nshared_buffers = 15000\nwork_mem = 4096\nmaintenance_work_mem = 70000\neffective_cache_size = 30000\nshmmax is 150000000\n\nThese settings made creating index faster for instance. Don't know if\nthey can be tweaked further so these batch jobs are executed faster?\nSome setting I forgot to tweak? I tried setting fsync to false, but that\ndidnt change anything.\n\nSomething like this is what runs and takes a bit too long imho:\n\nconn.setAutoCommit(false);\npst = conn.prepareStatement(\"INSERT INTO tmp (...) VALUES (?,?)\");\nfor (int i = 0; i < len; i++) {\n pst.setInt(0, 2);\n pst.setString(1, \"xxx\");\n pst.addBatch();\n}\npst.executeBatch();\nconn.commit();\n\nThis snip takes 1.3 secs in postgresql. How can I lower that?\n\nThanks, Tim\n\n",
"msg_date": "Mon, 2 May 2005 15:52:31 +0200 (CEST)",
"msg_from": "=?iso-8859-1?Q?Tim_Terleg=E5rd?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "batch inserts are \"slow\""
},
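One small correction worth noting before tuning anything: JDBC parameter indices are 1-based, so setInt(0, ...) in the snippet above would throw at runtime (the posted code is presumably simplified from a working version). A self-contained form of the same pattern, with placeholder connection details and an assumed two-column tmp(a, b) table:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Same pattern as the original post, but runnable: one transaction, one
    // prepared statement, parameters numbered from 1.
    public class BatchInsert {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
            conn.setAutoCommit(false);
            PreparedStatement pst =
                    conn.prepareStatement("INSERT INTO tmp (a, b) VALUES (?, ?)");
            for (int i = 0; i < 14000; i++) {
                pst.setInt(1, 2);             // parameter indices start at 1
                pst.setString(2, "xxx");
                pst.addBatch();
            }
            pst.executeBatch();
            conn.commit();
            pst.close();
            conn.close();
        }
    }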
{
"msg_contents": "> conn.setAutoCommit(false);\n> pst = conn.prepareStatement(\"INSERT INTO tmp (...) VALUES (?,?)\");\n> for (int i = 0; i < len; i++) {\n> pst.setInt(0, 2);\n> pst.setString(1, \"xxx\");\n> pst.addBatch();\n> }\n> pst.executeBatch();\n> conn.commit();\n> \n> This snip takes 1.3 secs in postgresql. How can I lower that?\n\nYou're batching them as one transaction, and using a prepared query both \nof which are good. I guess the next step for a great performance \nimprovement is to use the COPY command. However, you'd have to find out \nhow to access that via Java.\n\nI have a nasty suspicion that the release JDBC driver doesn't support it \nand you may have to apply a patch.\n\nAsk on [email protected] perhaps.\n\nChris\n",
"msg_date": "Mon, 02 May 2005 23:10:33 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
},
{
"msg_contents": "On 5/2/05, Tim Terlegård <[email protected]> wrote:\n> Howdy!\n> \n> I'm converting an application to be using postgresql instead of oracle.\n> There seems to be only one issue left, batch inserts in postgresql seem\n> significant slower than in oracle. I have about 200 batch jobs, each\n> consisting of about 14 000 inserts. Each job takes 1.3 seconds in\n> postgresql and 0.25 seconds in oracle. With 200 jobs this means several\n> more minutes to complete the task. By fixing this I think the\n> application using postgresql over all would be faster than when using\n> oracle.\n\nJust as on Oracle you would use SQL*Loader for this application, you\nshould use the COPY syntax for PostgreSQL. You will find it a lot\nfaster. I have used it by building the input files and executing\n'psql' with a COPY command, and also by using it with a subprocess,\nboth are quite effective.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 2 May 2005 11:17:18 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
},
{
"msg_contents": "=?iso-8859-1?Q?Tim_Terleg=E5rd?= <[email protected]> writes:\n> There seems to be only one issue left, batch inserts in postgresql seem\n> significant slower than in oracle. I have about 200 batch jobs, each\n> consisting of about 14 000 inserts.\n\n> conn.setAutoCommit(false);\n> pst = conn.prepareStatement(\"INSERT INTO tmp (...) VALUES (?,?)\");\n> for (int i = 0; i < len; i++) {\n> pst.setInt(0, 2);\n> pst.setString(1, \"xxx\");\n> pst.addBatch();\n> }\n> pst.executeBatch();\n> conn.commit();\n\nHmm. It's good that you are wrapping this in a transaction, but I\nwonder about doing it as a single \"batch\". I have no idea what the\ninternal implementation of batches in JDBC is like, but it seems\npossible that it would have some performance issues with 14000\nstatements in a batch.\n\nHave you checked whether the bulk of the runtime is being consumed\non the server or client side?\n\nAlso, make sure that the JDBC driver is using \"real\" prepared statements\n--- until pretty recently, it faked them. I think build 311 is new\nenough, but it would be good to check in the docs or by asking on pgsql-jdbc.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 02 May 2005 11:29:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\" "
},
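One way to act on the prepared-statement point from application code is sketched below. It assumes a driver new enough to expose org.postgresql.PGStatement.setPrepareThreshold() (current pgjdbc releases do; whether build 311 already has it is exactly the kind of thing to confirm on pgsql-jdbc), which asks for a named server-side prepared statement from the first execution onward:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import org.postgresql.PGStatement;

    // Requests a server-side ("real") prepared statement immediately instead of
    // relying on the driver's default threshold. Connection details are placeholders.
    public class ForceServerPrepare {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
            PreparedStatement pst =
                    conn.prepareStatement("INSERT INTO tmp (a, b) VALUES (?, ?)");
            // The cast works on the driver's own statement object, not on a
            // wrapper handed out by a connection pool.
            ((PGStatement) pst).setPrepareThreshold(1);
            // ... setInt/setString/addBatch/executeBatch/commit as in the original post ...
            pst.close();
            conn.close();
        }
    }

Recent drivers also accept a prepareThreshold parameter on the connection URL, which sets the same thing connection-wide.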
{
"msg_contents": "[email protected] (Christopher Petrilli) writes:\n> On 5/2/05, Tim Terleg�rd <[email protected]> wrote:\n>> Howdy!\n>> \n>> I'm converting an application to be using postgresql instead of\n>> oracle. There seems to be only one issue left, batch inserts in\n>> postgresql seem significant slower than in oracle. I have about 200\n>> batch jobs, each consisting of about 14 000 inserts. Each job takes\n>> 1.3 seconds in postgresql and 0.25 seconds in oracle. With 200 jobs\n>> this means several more minutes to complete the task. By fixing\n>> this I think the application using postgresql over all would be\n>> faster than when using oracle.\n>\n> Just as on Oracle you would use SQL*Loader for this application, you\n> should use the COPY syntax for PostgreSQL. You will find it a lot\n> faster. I have used it by building the input files and executing\n> 'psql' with a COPY command, and also by using it with a subprocess,\n> both are quite effective.\n\nI'd suggest taking a peek at the PGForge project, pgloader\n<http://pgfoundry.org/projects/pgloader/>.\n\nThis is intended to provide somewhat analagous functionality to\nSQL*Loader; a particularly useful thing about it is that it will load\nthose records that it can, and generate a file consisting of just the\nfailures.\n\nIt uses COPY, internally, so it does run reasonably fast.\n\nTo the extent to which it is inadequate, it would be neat to see some\nenhancements...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www.ntlug.org/~cbbrowne/sap.html\nRules of the Evil Overlord #78. \"I will not tell my Legions of Terror\n\"And he must be taken alive!\" The command will be: ``And try to take\nhim alive if it is reasonably practical.''\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Mon, 02 May 2005 12:16:42 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
},
{
"msg_contents": "> > I'm converting an application to be using postgresql instead of oracle.\n> > There seems to be only one issue left, batch inserts in postgresql seem\n> > significant slower than in oracle. I have about 200 batch jobs, each\n> > consisting of about 14 000 inserts. Each job takes 1.3 seconds in\n> > postgresql and 0.25 seconds in oracle. With 200 jobs this means several\n> > more minutes to complete the task. By fixing this I think the\n> > application using postgresql over all would be faster than when using\n> > oracle.\n>\n> Just as on Oracle you would use SQL*Loader for this application, you\n> should use the COPY syntax for PostgreSQL. You will find it a lot\n> faster. I have used it by building the input files and executing\n> 'psql' with a COPY command, and also by using it with a subprocess,\n> both are quite effective.\n\nI tried this now. Now it's down to 0.45 seconds. It feels a bit hacky to\nrun /usr/bin/psql from java, but it sure works. Thanks for the hint!\n\nTim\n\n",
"msg_date": "Tue, 3 May 2005 19:44:08 +0200 (CEST)",
"msg_from": "=?iso-8859-1?Q?Tim_Terleg=E5rd?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: batch inserts are \"slow\""
},
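For reference, the same trick works without the intermediate text file by streaming the rows straight into psql's standard input. A rough sketch; the database name, table layout, tab-separated row format and passwordless local authentication are all assumptions here:

    import java.io.BufferedWriter;
    import java.io.OutputStreamWriter;

    // Streams rows directly into `psql -c "COPY ... FROM STDIN"` without writing
    // a temporary file first. psql reads the COPY data from its standard input;
    // closing the stream ends the COPY.
    public class CopyViaPsql {
        public static void main(String[] args) throws Exception {
            ProcessBuilder pb = new ProcessBuilder(
                    "/usr/bin/psql", "-d", "mydb", "-c", "COPY tmp (a, b) FROM STDIN");
            pb.redirectErrorStream(true);
            Process psql = pb.start();
            BufferedWriter out = new BufferedWriter(
                    new OutputStreamWriter(psql.getOutputStream()));
            for (int i = 0; i < 14000; i++) {
                out.write("2\txxx\n");        // default COPY text format: tab-separated rows
            }
            out.close();                       // EOF terminates the COPY
            int rc = psql.waitFor();
            System.out.println("psql exited with " + rc);
        }
    }

Error handling is the usual catch: if psql exits non-zero the whole COPY was rolled back, so the caller has to be prepared to retry that chunk.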
{
"msg_contents": "Tim Terleg�rd wrote:\n>>\n>>Just as on Oracle you would use SQL*Loader for this application, you\n>>should use the COPY syntax for PostgreSQL. You will find it a lot\n>>faster. I have used it by building the input files and executing\n>>'psql' with a COPY command, and also by using it with a subprocess,\n>>both are quite effective.\n> \n> \n> I tried this now. Now it's down to 0.45 seconds. It feels a bit hacky to\n> run /usr/bin/psql from java, but it sure works. Thanks for the hint!\n\nThere was a patch against 7.4 that provided direct JDBC access to\nPostgreSQL's COPY. (I have it installed here and *love* it - it\ngives outstanding performance.) However, it hasn't made into an\nofficial release yet. I don't know why, perhaps there's\na problem yet to be solved with it ('works for me', though)?\n\nIs this still on the board? I won't upgrade past 7.4 without it.\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Tue, 03 May 2005 11:29:54 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
},
{
"msg_contents": "On 5/3/05, Tim Terlegård <[email protected]> wrote:\n> > Just as on Oracle you would use SQL*Loader for this application, you\n> > should use the COPY syntax for PostgreSQL. You will find it a lot\n> > faster. I have used it by building the input files and executing\n> > 'psql' with a COPY command, and also by using it with a subprocess,\n> > both are quite effective.\n> \n> I tried this now. Now it's down to 0.45 seconds. It feels a bit hacky to\n> run /usr/bin/psql from java, but it sure works. Thanks for the hint!\n\nIt may feel hacky, but I think if you want to use SQL*Loader on\nOracle, you have to do the same thing. I know a C++ app that I use\nthat runs SQL*Loader about once per second to deal with a HUGE volume\n(10K/sec). In fact, moving the load files onto ramdisk has helped a\nlot.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Tue, 3 May 2005 15:17:20 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
}
] |
[
{
"msg_contents": "> > Howdy!\n> >\n> > I'm converting an application to be using postgresql instead of oracle.\n> > There seems to be only one issue left, batch inserts in postgresql seem\n> > significant slower than in oracle. I have about 200 batch jobs, each\n> > consisting of about 14 000 inserts. Each job takes 1.3 seconds in\n> > postgresql and 0.25 seconds in oracle. With 200 jobs this means several\n> > more minutes to complete the task. By fixing this I think the\n> > application using postgresql over all would be faster than when using\n> > oracle.\n>\n> Have you tried COPY statement?\n\nI did that now. I copied all 3 million rows of data into a text file and\nexecuted the COPY command. It takes about 0.25 seconds per job. So that's\nmuch better. I'm afraid jdbc doesn't support COPY though. But now I know\nwhat the theoretical lower limit is atleast.\n\nShould it be possible to get anyway nearer 0.25s from my current 1.3s?\n\nTim\n\n",
"msg_date": "Mon, 2 May 2005 17:20:07 +0200 (CEST)",
"msg_from": "=?iso-8859-1?Q?Tim_Terleg=E5rd?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: batch inserts are \"slow\""
},
{
"msg_contents": "On 5/2/05, Tim Terlegård <[email protected]> wrote:\n> > > Howdy!\n> > >\n> > > I'm converting an application to be using postgresql instead of oracle.\n> > > There seems to be only one issue left, batch inserts in postgresql seem\n> > > significant slower than in oracle. I have about 200 batch jobs, each\n> > > consisting of about 14 000 inserts. Each job takes 1.3 seconds in\n> > > postgresql and 0.25 seconds in oracle. With 200 jobs this means several\n> > > more minutes to complete the task. By fixing this I think the\n> > > application using postgresql over all would be faster than when using\n> > > oracle.\n> >\n> > Have you tried COPY statement?\n> \n> I did that now. I copied all 3 million rows of data into a text file and\n> executed the COPY command. It takes about 0.25 seconds per job. So that's\n> much better. I'm afraid jdbc doesn't support COPY though. But now I know\n> what the theoretical lower limit is atleast.\n> \n> Should it be possible to get anyway nearer 0.25s from my current 1.3s?\n\nMy experience says 'no'. What you're likely seeing is the parse\noverhead of the setup. When you use COPY (as opposed to \\copy), the\npostmaster is reading the file directory. There's just a lot less\noverhead.\n\nCan you write the files on disk and then kick off the psql process to run them?\n\nChris\n\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 2 May 2005 11:27:37 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
}
] |
[
{
"msg_contents": "We ran into the need to use COPY, but our application is also in Java.\nWe wrote a JNI bridge to a C++ routine that uses the libpq library to do\nthe COPY. The coding is a little bit weird, but not too complicated -\nthe biggest pain in the neck is probably getting it into your build\nsystem.\n\nHere's the Java tutorial on JNI:\nhttp://java.sun.com/docs/books/tutorial/native1.1/concepts/index.html \n\nHope that helps!\n\n- DAP\n\n>-----Original Message-----\n>From: [email protected] \n>[mailto:[email protected]] On Behalf Of \n>Christopher Kings-Lynne\n>Sent: Monday, May 02, 2005 11:11 AM\n>To: [email protected]\n>Cc: [email protected]\n>Subject: Re: [PERFORM] batch inserts are \"slow\"\n>\n>> conn.setAutoCommit(false);\n>> pst = conn.prepareStatement(\"INSERT INTO tmp (...) VALUES \n>(?,?)\"); for \n>> (int i = 0; i < len; i++) {\n>> pst.setInt(0, 2);\n>> pst.setString(1, \"xxx\");\n>> pst.addBatch();\n>> }\n>> pst.executeBatch();\n>> conn.commit();\n>> \n>> This snip takes 1.3 secs in postgresql. How can I lower that?\n>\n>You're batching them as one transaction, and using a prepared \n>query both of which are good. I guess the next step for a \n>great performance improvement is to use the COPY command. \n>However, you'd have to find out how to access that via Java.\n>\n>I have a nasty suspicion that the release JDBC driver doesn't \n>support it and you may have to apply a patch.\n>\n>Ask on [email protected] perhaps.\n>\n>Chris\n>\n>---------------------------(end of \n>broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to \n>[email protected]\n>\n",
"msg_date": "Mon, 2 May 2005 11:53:33 -0400",
"msg_from": "\"David Parker\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: batch inserts are \"slow\""
},
{
"msg_contents": "Hi, all,\n\nDavid Parker wrote:\n> We ran into the need to use COPY, but our application is also in Java.\n> We wrote a JNI bridge to a C++ routine that uses the libpq library to do\n> the COPY. The coding is a little bit weird, but not too complicated -\n> the biggest pain in the neck is probably getting it into your build\n> system.\n\nThere are several hacks floating around that add COPY capabilities to\nthe pgjdbc driver. As they all are rather simple hacks, they have not\nbeen included in the cvs yet, but they tend to work fine.\n\nMarkus\n\n",
"msg_date": "Tue, 03 May 2005 16:40:46 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
},
{
"msg_contents": "People,\n\n> There are several hacks floating around that add COPY capabilities to\n> the pgjdbc driver. As they all are rather simple hacks, they have not\n> been included in the cvs yet, but they tend to work fine.\n\nFWIW, Dave Cramer just added beta COPY capability to JDBC. Contact him on \nthe JDBC list for details; I think he needs testers.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 3 May 2005 08:41:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
},
{
"msg_contents": "\n\nOn Tue, 3 May 2005, Josh Berkus wrote:\n\n> > There are several hacks floating around that add COPY capabilities to\n> > the pgjdbc driver. As they all are rather simple hacks, they have not\n> > been included in the cvs yet, but they tend to work fine.\n> \n> FWIW, Dave Cramer just added beta COPY capability to JDBC. Contact him on \n> the JDBC list for details; I think he needs testers.\n> \n\nI believe Dave has remerged a patch for COPY I posted over a year ago, but \nhe has not yet published it. I would guess it has the same bugs as the \noriginal (transaction + error handling) and will meet the same objections \nthat kept the original patch out of the driver in the first place (we want \na friendlier API than just a data stream).\n\nKris Jurka\n",
"msg_date": "Tue, 3 May 2005 10:57:21 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
},
{
"msg_contents": "Testing list access\n",
"msg_date": "Tue, 03 May 2005 19:33:06 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": false,
"msg_subject": "Testing list access"
},
{
"msg_contents": "Kris is correct,\n\nThis code was not added or even submitted to CVS. The purpose of this \nwas to work\nout the bugs with people who are actually using copy.\n\nThe api is a separate issue however. There's no reason that copy can't \nsupport more\nthan one api.\n\nDave\n\nKris Jurka wrote:\n\n>On Tue, 3 May 2005, Josh Berkus wrote:\n>\n> \n>\n>>>There are several hacks floating around that add COPY capabilities to\n>>>the pgjdbc driver. As they all are rather simple hacks, they have not\n>>>been included in the cvs yet, but they tend to work fine.\n>>> \n>>>\n>>FWIW, Dave Cramer just added beta COPY capability to JDBC. Contact him on \n>>the JDBC list for details; I think he needs testers.\n>>\n>> \n>>\n>\n>I believe Dave has remerged a patch for COPY I posted over a year ago, but \n>he has not yet published it. I would guess it has the same bugs as the \n>original (transaction + error handling) and will meet the same objections \n>that kept the original patch out of the driver in the first place (we want \n>a friendlier API than just a data stream).\n>\n>Kris Jurka\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n",
"msg_date": "Tue, 03 May 2005 15:58:59 -0400",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: batch inserts are \"slow\""
}
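For readers coming to this thread later: official pgjdbc releases eventually shipped a public COPY API (org.postgresql.copy.CopyManager), which removes the need for the patches discussed here. A sketch assuming such a newer driver; it does not apply to the 8.0-era builds in this thread, and the table layout is a placeholder:

    import java.io.StringReader;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    // Pushes rows through the driver's COPY API instead of batched INSERTs.
    // The cast works on the driver's own connection, not on a pool wrapper.
    public class CopyViaDriver {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
            CopyManager cm = ((PGConnection) conn).getCopyAPI();
            StringBuilder rows = new StringBuilder();
            for (int i = 0; i < 14000; i++) {
                rows.append("2\txxx\n");      // tab-separated COPY text format
            }
            long n = cm.copyIn("COPY tmp (a, b) FROM STDIN",
                               new StringReader(rows.toString()));
            System.out.println("copied " + n + " rows");
            conn.close();
        }
    }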
] |
[
{
"msg_contents": "\n\n\n\nIn our application we have tables that we regularly load with 5-10 million\nrecords daily. We *were* using INSERT (I know... Still kicking ourselves\nfor *that* design decision), and we now converting over to COPY. For the\nsake of robustness, we are planning on breaking the entire load into chunks\nof a couple hundred thousand records each. This is to constrain the amount\nof data we'd have to re-process if one of the COPYs fails.\n\nMy question is, are there any advantages, drawbacks, or outright\nrestrictions to using multiple simultaneous COPY commands to load data into\nthe same table? One issue that comes to mind is the loss of data\nsequencing if we have multiple chunks interleaving records in the table at\nthe same time. But from a purely technical point of view, is there any\nreason why the backend would not be happy with two or more COPY commands\ntrying to insert data into the same table at the same time? Does COPY take\nout any locks on a table?\n\nThanks in advance,\n--- Steve\n\n",
"msg_date": "Tue, 3 May 2005 11:55:15 -0400",
"msg_from": "Steven Rosenstein <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "Steven Rosenstein <[email protected]> writes:\n> My question is, are there any advantages, drawbacks, or outright\n> restrictions to using multiple simultaneous COPY commands to load data into\n> the same table?\n\nIt will work; not sure about whether there is any performance benefit.\nI vaguely recall someone having posted about doing this, so you might\ncheck the archives.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 03 May 2005 12:12:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "> Steven Rosenstein <[email protected]> writes:\n> > My question is, are there any advantages, drawbacks, or outright\n> > restrictions to using multiple simultaneous COPY commands to load\n> data into\n> > the same table?\n\nDo you mean, multiple COPY commands (connections) being putline'd from\nthe same thread (process)? I have indirect evidence that this may hurt.\n\nTwo copy commands from different threads/processes are fine, and can\nhelp, if they alternate contention on some other resource (disk/CPU).\n\nI'm basing this on being at the third generation of a COPY\nimplementation. The app loads about 1M objects/hour from 6 servers. \nEach object is split across four tables.\nThe batch load opens four connections and firehoses records down each.\nA batch is 10K objects.\n\nCOPY invokes all the same logic as INSERT on the server side\n(rowexclusive locking, transaction log, updating indexes, rules). \nThe difference is that all the rows are inserted as a single\ntransaction. This reduces the number of fsync's on the xlog,\nwhich may be a limiting factor for you. You'll want to crank \nWAL_BUFFERS and CHECKPOINT_SEGMENTS to match, though. \nOne of my streams has 6K records; I run with WB=1000, CS=128.\n\nThe downside I found with multiple clients inserting large blocks of\nrows was, that they serialized. I THINK that's because at some point\nthey all needed to lock the same portions of the same indexes. I'm still\nworking on how to avoid that, tuning the batch size and inserting into a\n \"queue\" table with fewer indexes.\n\nCOPY (via putline) didn't do measurably better than INSERT until I\nbatched 40 newline-separate rows into one putline call, which improved\nit 2-3:1. The suspect problem was stalling on the TCP stream; the driver\nwas flushing small packets. This may or may not be relevant to you;\ndepends on how much processing (waiting) your app does between posting\nof rows.\n\nIn such a case, writing alternately to two TCP streams from the same\nprocess increases the likelihood of a stall. I've never tested that\nset-up; it would have been heading AWAY from the solution in my case.\n\nHope that helps.\n-- \nEngineers think equations approximate reality.\nPhysicists think reality approximates the equations.\nMathematicians never make the connection.\n\n",
"msg_date": "Tue, 3 May 2005 11:53:17 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "On 5/3/05, Tom Lane <[email protected]> wrote:\n> Steven Rosenstein <[email protected]> writes:\n> > My question is, are there any advantages, drawbacks, or outright\n> > restrictions to using multiple simultaneous COPY commands to load data into\n> > the same table?\n> \n> It will work; not sure about whether there is any performance benefit.\n> I vaguely recall someone having posted about doing this, so you might\n> check the archives.\n> \n\nI may be one of Tom's vague \"voices\". ;) The only issue would be that\nyou need to remove all you UNIQUE constraints before sending multiple\nCOPYs to the server. This includes the PRIMARY KEY constraint. To\nthe backend, COPY is just like INSERT and all constraints need to be\nchecked and this will block the commit of one of the COPY streams.\n\nHowever, multiple COPYs may no be needed. I regularly load several\ntable totaling around 50M rows with a single COPY per table. I drop\n(actually, this is during DB reload, so I don't yet create...) all\nfkeys, constraints and indexes and the data loads in a matter of 5\nminutes or so.\n\nHope that helps!\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n",
"msg_date": "Wed, 4 May 2005 11:08:46 +0000",
"msg_from": "Mike Rylander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
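The drop-then-recreate pattern Mike describes, sketched end to end. The index and constraint names are hypothetical, and on a shared table you would want to be sure nothing needs them while they are gone:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Drops the expensive-to-maintain bits, bulk loads, then rebuilds them.
    // Names (load_table, its index and fkey) are placeholders.
    public class BulkLoad {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "secret");
            Statement st = conn.createStatement();

            st.execute("ALTER TABLE load_table DROP CONSTRAINT load_table_fkey");
            st.execute("DROP INDEX load_table_idx");

            // ... run the COPY here (psql subprocess or a driver COPY API) ...

            st.execute("CREATE INDEX load_table_idx ON load_table (some_col)");
            st.execute("ALTER TABLE load_table ADD CONSTRAINT load_table_fkey " +
                       "FOREIGN KEY (other_col) REFERENCES parent_table (id)");
            st.close();
            conn.close();
        }
    }

Rebuilding the index afterwards also tends to leave it more compact than one that was maintained row by row during the load.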
{
"msg_contents": "> COPY invokes all the same logic as INSERT on the server side\n> (rowexclusive locking, transaction log, updating indexes, rules). \n> The difference is that all the rows are inserted as a single\n> transaction. This reduces the number of fsync's on the xlog,\n> which may be a limiting factor for you. You'll want to crank \n> WAL_BUFFERS and CHECKPOINT_SEGMENTS to match, though. \n> One of my streams has 6K records; I run with WB=1000, CS=128.\n\nSo what's the difference between a COPY and a batch of INSERT\nstatements. Also, surely, fsyncs only occur at the end of a\ntransaction, no need to fsync before a commit has been issued, right?\n\nDavid\n",
"msg_date": "Wed, 04 May 2005 23:41:30 +0100",
"msg_from": "\"David Roussel\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "Quoting David Roussel <[email protected]>:\n\n> > COPY invokes all the same logic as INSERT on the server side\n> > (rowexclusive locking, transaction log, updating indexes, rules). \n> > The difference is that all the rows are inserted as a single\n> > transaction. This reduces the number of fsync's on the xlog,\n> > which may be a limiting factor for you. You'll want to crank \n> > WAL_BUFFERS and CHECKPOINT_SEGMENTS to match, though. \n> > One of my streams has 6K records; I run with WB=1000, CS=128.\n> \n> So what's the difference between a COPY and a batch of INSERT\n> statements. Also, surely, fsyncs only occur at the end of a\n> transaction, no need to fsync before a commit has been issued,\n> right?\n\nSorry, I was comparing granularities the other way araound. As far as\nxlog is concerned, a COPY is ALWAYS one big txn, no matter how many\nputline commands you use to feed the copy. With inserts, you can choose\nwhether to commit every row, every nth row, etc.\n\nCopy makes better use of the TCP connection for transmission. COPY uses\nthe TCP connection like a one-way pipe. INSERT is like an RPC: the\nsender has to wait until the insert's return status roundtrips.\n-- \nEngineers think equations approximate reality.\nPhysicists think reality approximates the equations.\nMathematicians never make the connection.\n\n",
"msg_date": "Wed, 4 May 2005 16:11:32 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "David Roussel wrote:\n>>COPY invokes all the same logic as INSERT on the server side\n>>(rowexclusive locking, transaction log, updating indexes, rules).\n>>The difference is that all the rows are inserted as a single\n>>transaction. This reduces the number of fsync's on the xlog,\n>>which may be a limiting factor for you. You'll want to crank\n>>WAL_BUFFERS and CHECKPOINT_SEGMENTS to match, though.\n>>One of my streams has 6K records; I run with WB=1000, CS=128.\n>\n>\n> So what's the difference between a COPY and a batch of INSERT\n> statements. Also, surely, fsyncs only occur at the end of a\n> transaction, no need to fsync before a commit has been issued, right?\n\nI think COPY also has the advantage that for index updates it only grabs\nthe lock once, rather than grabbing and releasing for each row. But I\nbelieve you are right that fsync only happens on COMMIT.\n\n>\n> David\n\nJohn\n=:->",
"msg_date": "Wed, 04 May 2005 18:23:55 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "On 5/4/05, Mischa Sandberg <[email protected]> wrote:\n> Quoting David Roussel <[email protected]>:\n> \n> > > COPY invokes all the same logic as INSERT on the server side\n> > > (rowexclusive locking, transaction log, updating indexes, rules).\n> > > The difference is that all the rows are inserted as a single\n> > > transaction. This reduces the number of fsync's on the xlog,\n> > > which may be a limiting factor for you. You'll want to crank\n> > > WAL_BUFFERS and CHECKPOINT_SEGMENTS to match, though.\n> > > One of my streams has 6K records; I run with WB=1000, CS=128.\n> >\n> > So what's the difference between a COPY and a batch of INSERT\n> > statements. Also, surely, fsyncs only occur at the end of a\n> > transaction, no need to fsync before a commit has been issued,\n> > right?\n> \n> Sorry, I was comparing granularities the other way araound. As far as\n> xlog is concerned, a COPY is ALWAYS one big txn, no matter how many\n> putline commands you use to feed the copy. With inserts, you can choose\n> whether to commit every row, every nth row, etc.\n> \n> Copy makes better use of the TCP connection for transmission. COPY uses\n> the TCP connection like a one-way pipe. INSERT is like an RPC: the\n> sender has to wait until the insert's return status roundtrips.\n\nI have found even greater performance increases by using COPY FROM\n<filename> not COPY FROM STDIN. This allows the backend process to\ndirectly read the file, rather than shoving it over a pipe (thereby\npotentially hitting the CPU multiple times). My experience is that\nthis is anywhere from 5-10x faster than INSERT statements on the\nwhole, and sometimes 200x.\n\nChris\n\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Wed, 4 May 2005 19:29:29 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "\n\nOn Wed, 4 May 2005, Mischa Sandberg wrote:\n\n> Copy makes better use of the TCP connection for transmission. COPY uses\n> the TCP connection like a one-way pipe. INSERT is like an RPC: the\n> sender has to wait until the insert's return status roundtrips.\n\nNot true. A client may send any number of Bind/Execute messages on a \nprepared statement before a Sync message. So multiple inserts may be sent \nin one network roundtrip. This is exactly how the JDBC driver \nimplements batch statements. There is some limit to the number of queries \nin flight at any given moment because there is the potential to deadlock \nif both sides of network buffers are filled up and each side is blocked \nwaiting on a write. The JDBC driver has conservatively selected 256 as \nthe maximum number of queries to send at once.\n\nKris Jurka\n",
"msg_date": "Wed, 4 May 2005 18:33:50 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "> So what's the difference between a COPY and a batch of INSERT\n> statements. Also, surely, fsyncs only occur at the end of a\n> transaction, no need to fsync before a commit has been issued, right?\n\nWith COPY, the data being inserted itself does not have to pass through \nthe postgresql parser.\n\nChris\n",
"msg_date": "Thu, 05 May 2005 09:51:22 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n>> So what's the difference between a COPY and a batch of INSERT\n>> statements. Also, surely, fsyncs only occur at the end of a\n>> transaction, no need to fsync before a commit has been issued, right?\n\n> With COPY, the data being inserted itself does not have to pass through \n> the postgresql parser.\n\nAlso, there is a whole lot of one-time-per-statement overhead that can\nbe amortized across many rows instead of only one. Stuff like opening\nthe target table, looking up the per-column I/O conversion functions,\nidentifying trigger functions if any, yadda yadda. It's not *that*\nexpensive, but compared to an operation as small as inserting a single\nrow, it's significant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 04 May 2005 22:22:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT "
},
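If the data really has to arrive as individual INSERTs, a prepared statement at least takes the lexing/parsing and planning out of the per-row loop; some of the other per-statement setup Tom describes still happens on every execution, so this is a middle ground rather than a substitute for COPY. A minimal sketch with an invented table:

PREPARE row_ins (int, text) AS
    INSERT INTO load_test VALUES ($1, $2);
EXECUTE row_ins(1, 'one');
EXECUTE row_ins(2, 'two');
DEALLOCATE row_ins;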
{
"msg_contents": "Quoting Kris Jurka <[email protected]>: \n \n> On Wed, 4 May 2005, Mischa Sandberg wrote: \n> \n> > Copy makes better use of the TCP connection for transmission. COPY \n> uses \n> > the TCP connection like a one-way pipe. INSERT is like an RPC: the \n> > sender has to wait until the insert's return status roundtrips. \n> \n> Not true. A client may send any number of Bind/Execute messages on \na \n> prepared statement before a Sync message. So multiple inserts may \nbe \n> sent \n> in one network roundtrip. This is exactly how the JDBC driver \n> implements batch statements. There is some limit to the number of \n> queries \n> in flight at any given moment because there is the potential to \n> deadlock \n> if both sides of network buffers are filled up and each side is \n> blocked \n> waiting on a write. The JDBC driver has conservatively selected 256 \n> as \n> the maximum number of queries to send at once. \n \nHunh. Interesting optimization in the JDBC driver. I gather it is \nsending a string of (;)-separated inserts. Sounds like \nefficient-but-risky stuff we did for ODBC drivers at Simba ... gets \ninteresting when one of the insert statements in the middle fails. \nGood to know. Hope that the batch size is parametric, given that you \ncan have inserts with rather large strings bound to 'text' columns in \nPG --- harder to identify BLOBs when talking to PG, than when talking \nto MSSQL/Oracle/Sybase. \n \n-- \nEngineers think equations approximate reality. \nPhysicists think reality approximates the equations. \nMathematicians never make the connection. \n\n",
"msg_date": "Wed, 4 May 2005 21:41:46 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "\n\nOn Wed, 4 May 2005, Mischa Sandberg wrote:\n\n> Quoting Kris Jurka <[email protected]>: \n> \n> > Not true. A client may send any number of Bind/Execute messages on \n> > a prepared statement before a Sync message.\n\n> Hunh. Interesting optimization in the JDBC driver. I gather it is \n> sending a string of (;)-separated inserts.\n\nNo, it uses the V3 protocol and a prepared statement and uses \nBind/Execute, as I mentioned.\n\n> Sounds like efficient-but-risky stuff we did for ODBC drivers at Simba\n> ... gets interesting when one of the insert statements in the middle\n> fails.\n\nWhen running inside a transaction (as you really want to do anyway when\nbulk loading) it is well defined, it is a little odd for auto commit mode\nthough. In autocommit mode the transaction boundary is at the Sync\nmessage, not the individual Execute messages, so you will get some\nrollback on error. The JDBC spec is poorly defined in this area, so we\ncan get away with this.\n\n> Good to know. Hope that the batch size is parametric, given that\n> you can have inserts with rather large strings bound to 'text' columns\n> in PG --- harder to identify BLOBs when talking to PG, than when talking\n> to MSSQL/Oracle/Sybase.\n\nThe batch size is not a parameter and I don't think it needs to be. The \nissue of filling both sides of network buffers and deadlocking only needs \nto be avoided on one side. The response to an insert request is small and \nnot dependent on the size of the data sent, so we can send as much as we \nwant as long as the server doesn't send much back to us.\n\nKris Jurka\n",
"msg_date": "Thu, 5 May 2005 04:15:40 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "Christopher Petrilli wrote:\n> On 5/4/05, Mischa Sandberg <[email protected]> wrote:\n> \n>>Quoting David Roussel <[email protected]>:\n>>\n>>\n>>>>COPY invokes all the same logic as INSERT on the server side\n>>>>(rowexclusive locking, transaction log, updating indexes, rules).\n>>>>The difference is that all the rows are inserted as a single\n>>>>transaction. This reduces the number of fsync's on the xlog,\n>>>>which may be a limiting factor for you. You'll want to crank\n>>>>WAL_BUFFERS and CHECKPOINT_SEGMENTS to match, though.\n>>>>One of my streams has 6K records; I run with WB=1000, CS=128.\n>>>\n>>>So what's the difference between a COPY and a batch of INSERT\n>>>statements. Also, surely, fsyncs only occur at the end of a\n>>>transaction, no need to fsync before a commit has been issued,\n>>>right?\n>>\n>>Sorry, I was comparing granularities the other way araound. As far as\n>>xlog is concerned, a COPY is ALWAYS one big txn, no matter how many\n>>putline commands you use to feed the copy. With inserts, you can choose\n>>whether to commit every row, every nth row, etc.\n>>\n>>Copy makes better use of the TCP connection for transmission. COPY uses\n>>the TCP connection like a one-way pipe. INSERT is like an RPC: the\n>>sender has to wait until the insert's return status roundtrips.\n> \n> \n> I have found even greater performance increases by using COPY FROM\n> <filename> not COPY FROM STDIN. This allows the backend process to\n> directly read the file, rather than shoving it over a pipe (thereby\n> potentially hitting the CPU multiple times). My experience is that\n> this is anywhere from 5-10x faster than INSERT statements on the\n> whole, and sometimes 200x.\n> \n> Chris\n> \n\nUnfortunately, COPY FROM '<file>' can only be done by a superuser. If \nyou that option then that is great. If not...\n\n-- \nKind Regards,\nKeith\n",
"msg_date": "Thu, 05 May 2005 20:53:31 -0400",
"msg_from": "Keith Worthington <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
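For sessions without superuser rights, psql's client-side \copy is the usual workaround: psql reads the file itself and feeds it to the backend as COPY ... FROM STDIN, so no server-side file access (or superuser privilege) is needed, at the cost of pushing the data through the client connection. A rough sketch, with an invented table and path:

-- server-side: the file must be readable by the backend, and the
-- session must be a superuser
COPY load_test FROM '/var/tmp/load_test.dat';

-- client-side: psql opens the local file and streams it over the
-- existing connection; works for ordinary users
\copy load_test from '/var/tmp/load_test.dat'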
{
"msg_contents": "On Wed, May 04, 2005 at 10:22:56PM -0400, Tom Lane wrote:\n> Also, there is a whole lot of one-time-per-statement overhead that can\n> be amortized across many rows instead of only one. Stuff like opening\n> the target table, looking up the per-column I/O conversion functions,\n> identifying trigger functions if any, yadda yadda. It's not *that*\n> expensive, but compared to an operation as small as inserting a single\n> row, it's significant.\n\nHas thought been given to supporting inserting multiple rows in a single\ninsert? DB2 supported:\n\nINSERT INTO table VALUES(\n (1,2,3),\n (4,5,6),\n (7,8,9)\n);\n\nI'm not sure how standard that is or if other databases support it.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Fri, 6 May 2005 01:51:29 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "On Fri, 6 May 2005, Jim C. Nasby wrote:\n\n> Has thought been given to supporting inserting multiple rows in a single\n> insert? DB2 supported:\n> \n> INSERT INTO table VALUES(\n> (1,2,3),\n> (4,5,6),\n> (7,8,9)\n> );\n> \n> I'm not sure how standard that is or if other databases support it.\n\nThe sql standard include this, except that you can not have the outer ().\nSo it should be\n\nINSERT INTO table VALUES\n (1,2,3),\n (4,5,6),\n (7,8,9);\n\nDo DB2 demand these extra ()?\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Fri, 6 May 2005 09:30:46 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "In article <[email protected]>,\nDennis Bjorklund <[email protected]> writes:\n\n> On Fri, 6 May 2005, Jim C. Nasby wrote:\n>> Has thought been given to supporting inserting multiple rows in a single\n>> insert? DB2 supported:\n>> \n>> INSERT INTO table VALUES(\n>> (1,2,3),\n>> (4,5,6),\n>> (7,8,9)\n>> );\n>> \n>> I'm not sure how standard that is or if other databases support it.\n\n> The sql standard include this, except that you can not have the outer ().\n> So it should be\n\n> INSERT INTO table VALUES\n> (1,2,3),\n> (4,5,6),\n> (7,8,9);\n\nSince MySQL has benn supporting this idiom for ages, it can't be\nstandard ;-)\n\n",
"msg_date": "06 May 2005 11:27:43 +0200",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "On Fri, May 06, 2005 at 01:51:29 -0500,\n \"Jim C. Nasby\" <[email protected]> wrote:\n> On Wed, May 04, 2005 at 10:22:56PM -0400, Tom Lane wrote:\n> > Also, there is a whole lot of one-time-per-statement overhead that can\n> > be amortized across many rows instead of only one. Stuff like opening\n> > the target table, looking up the per-column I/O conversion functions,\n> > identifying trigger functions if any, yadda yadda. It's not *that*\n> > expensive, but compared to an operation as small as inserting a single\n> > row, it's significant.\n> \n> Has thought been given to supporting inserting multiple rows in a single\n> insert? DB2 supported:\n> \n> INSERT INTO table VALUES(\n> (1,2,3),\n> (4,5,6),\n> (7,8,9)\n> );\n> \n> I'm not sure how standard that is or if other databases support it.\n\nIt's on the TODO list. I don't remember anyone bringing this up for about\na year now, so I doubt anyone is actively working on it.\n",
"msg_date": "Fri, 6 May 2005 07:38:31 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
},
{
"msg_contents": "Bruno Wolff III <[email protected]> writes:\n> \"Jim C. Nasby\" <[email protected]> wrote:\n>> Has thought been given to supporting inserting multiple rows in a single\n>> insert?\n\n> It's on the TODO list. I don't remember anyone bringing this up for about\n> a year now, so I doubt anyone is actively working on it.\n\nIt is on TODO but I think it is only there for standards compliance.\nIt won't produce near as much of a speedup as using COPY does ---\nin particular, trying to put thousands of rows through at once with\nsuch a command would probably be a horrible idea. You'd still have\nto pay the price of lexing/parsing, and there would also be considerable\nflailing about with deducing the data type of the VALUES() construct.\n(Per spec that can be used in SELECT FROM, not only in INSERT, and so\nit's not clear to what extent we can use knowledge of the insert target\ncolumns to avoid running the generic union-type-resolution algorithm for\neach column of the VALUES() :-(.) Add on the price of shoving an\nenormous expression tree through the planner and executor, and it starts\nto sound pretty grim.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 May 2005 09:51:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT "
},
{
"msg_contents": "On Fri, May 06, 2005 at 09:30:46AM +0200, Dennis Bjorklund wrote:\n> The sql standard include this, except that you can not have the outer ().\n> So it should be\n> \n> INSERT INTO table VALUES\n> (1,2,3),\n> (4,5,6),\n> (7,8,9);\n> \n> Do DB2 demand these extra ()?\n\nMy recollection is that it does, but it's been a few years...\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Sun, 8 May 2005 12:12:32 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY vs INSERT"
}
] |
[
{
"msg_contents": "Please refer to part 1 for question and query 1\n\nCheers\nJona\n----------------------------------------------------------------------------------------------------------------------- \n\nQuery 2:\nEXPLAIN ANALYZE\nSELECT DISTINCT CatType_Tbl.id, CatType_Tbl.url, Category_Tbl.name, \nMin(SubCatType_Tbl.id) AS subcatid\nFROM (CatType_Tbl\nINNER JOIN Category_Tbl ON CatType_Tbl.id = Category_Tbl.cattpid AND \nCategory_Tbl.enabled = true\nINNER JOIN Language_Tbl ON Language_Tbl.id = Category_Tbl.langid AND \nLanguage_Tbl.sysnm = UPPER('us') AND Language_Tbl.enabled = true\nINNER JOIN SubCatType_Tbl ON CatType_Tbl.id = SubCatType_Tbl.cattpid AND \nSubCatType_Tbl.enabled = true\nINNER JOIN SCT2SubCatType_Tbl ON SubCatType_Tbl.id = \nSCT2SubCatType_Tbl.subcattpid\nINNER JOIN Price_Tbl ON SCT2SubCatType_Tbl.sctid = Price_Tbl.sctid AND \nPrice_Tbl.affid = 8)\nWHERE CatType_Tbl.spcattpid = 1 AND CatType_Tbl.enabled = true\nGROUP BY CatType_Tbl.id, CatType_Tbl.url, Category_Tbl.name\nORDER BY CatType_Tbl.id ASC\n\nPlan on PostGre 7.3.6 on Red Hat Linux 3.2.3-39\n\"Unique (cost=94.57..94.58 rows=1 width=147) (actual \ntime=134.85..134.86 rows=4 loops=1)\"\n\" -> Sort (cost=94.57..94.57 rows=1 width=147) (actual \ntime=134.85..134.85 rows=4 loops=1)\"\n\" Sort Key: cattype_tbl.id, cattype_tbl.url, category_tbl.name, \nmin(subcattype_tbl.id)\"\n\" -> Aggregate (cost=94.54..94.56 rows=1 width=147) (actual \ntime=127.49..134.77 rows=4 loops=1)\"\n\" -> Group (cost=94.54..94.55 rows=1 width=147) (actual \ntime=114.85..132.44 rows=2117 loops=1)\"\n\" -> Sort (cost=94.54..94.55 rows=1 width=147) \n(actual time=114.84..116.10 rows=2117 loops=1)\"\n\" Sort Key: cattype_tbl.id, cattype_tbl.url, \ncategory_tbl.name\"\n\" -> Nested Loop (cost=4.54..94.53 rows=1 \nwidth=147) (actual time=0.64..52.65 rows=2117 loops=1)\"\n\" -> Nested Loop (cost=4.54..88.51 \nrows=1 width=143) (actual time=0.55..18.23 rows=2838 loops=1)\"\n\" -> Hash Join (cost=4.54..8.93 \nrows=1 width=135) (actual time=0.44..1.34 rows=48 loops=1)\"\n\" Hash Cond: (\"outer\".langid \n= \"inner\".id)\"\n\" -> Hash Join \n(cost=3.47..7.84 rows=1 width=131) (actual time=0.35..1.05 rows=96 \nloops=1)\"\n\" Hash Cond: \n(\"outer\".cattpid = \"inner\".id)\"\n\" -> Seq Scan on \nsubcattype_tbl (cost=0.00..3.98 rows=79 width=8) (actual \ntime=0.03..0.37 rows=156 loops=1)\"\n\" Filter: \n(enabled = true)\"\n\" -> Hash \n(cost=3.46..3.46 rows=1 width=123) (actual time=0.30..0.30 rows=0 loops=1)\"\n\" -> Hash Join \n(cost=1.50..3.46 rows=1 width=123) (actual time=0.12..0.29 rows=10 \nloops=1)\"\n\" Hash \nCond: (\"outer\".cattpid = \"inner\".id)\"\n\" -> Seq \nScan on category_tbl (cost=0.00..1.80 rows=32 width=51) (actual \ntime=0.03..0.13 rows=64 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" -> Hash \n(cost=1.50..1.50 rows=1 width=72) (actual time=0.07..0.07 rows=0 loops=1)\"\n\" -> \nSeq Scan on cattype_tbl (cost=0.00..1.50 rows=1 width=72) (actual \ntime=0.04..0.06 rows=5 loops=1)\"\n\" \nFilter: ((spcattpid = 1) AND (enabled = true))\"\n\" -> Hash (cost=1.07..1.07 \nrows=1 width=4) (actual time=0.05..0.05 rows=0 loops=1)\"\n\" -> Seq Scan on \nlanguage_tbl (cost=0.00..1.07 rows=1 width=4) (actual time=0.05..0.05 \nrows=1 loops=1)\"\n\" Filter: \n(((sysnm)::text = 'US'::text) AND (enabled = true))\"\n\" -> Index Scan using subcat_uq on \nsct2subcattype_tbl (cost=0.00..79.26 rows=26 width=8) (actual \ntime=0.01..0.17 rows=59 loops=48)\"\n\" Index Cond: (\"outer\".id = \nsct2subcattype_tbl.subcattpid)\"\n\" -> Index Scan using 
aff_price_uq on \nprice_tbl (cost=0.00..6.01 rows=1 width=4) (actual time=0.01..0.01 \nrows=1 loops=2838)\"\n\" Index Cond: ((price_tbl.affid = \n8) AND (\"outer\".sctid = price_tbl.sctid))\"\n\"Total runtime: 135.39 msec\"\n\nPlan on PostGre 7.3.9 on Red Hat Linux 3.2.3-49\n\"Unique (cost=1046.36..1046.54 rows=1 width=75) (actual \ntime=279.67..279.69 rows=4 loops=1)\"\n\" -> Sort (cost=1046.36..1046.40 rows=15 width=75) (actual \ntime=279.67..279.67 rows=4 loops=1)\"\n\" Sort Key: cattype_tbl.id, cattype_tbl.url, category_tbl.name, \nmin(subcattype_tbl.id)\"\n\" -> Aggregate (cost=1044.22..1046.07 rows=15 width=75) (actual \ntime=266.85..279.20 rows=4 loops=1)\"\n\" -> Group (cost=1044.22..1045.70 rows=148 width=75) \n(actual time=245.28..275.37 rows=2555 loops=1)\"\n\" -> Sort (cost=1044.22..1044.59 rows=148 width=75) \n(actual time=245.27..248.35 rows=2555 loops=1)\"\n\" Sort Key: cattype_tbl.id, cattype_tbl.url, \ncategory_tbl.name\"\n\" -> Hash Join (cost=141.39..1038.89 rows=148 \nwidth=75) (actual time=67.81..153.45 rows=2555 loops=1)\"\n\" Hash Cond: (\"outer\".sctid = \n\"inner\".sctid)\"\n------------------- Bad idea, price_tbl hold 38.5k records\n\" -> Seq Scan on price_tbl \n(cost=0.00..883.48 rows=2434 width=4) (actual time=0.86..67.25 rows=4570 \nloops=1)\"\n\" Filter: (affid = 8)\"\n\" -> Hash (cost=140.62..140.62 rows=309 \nwidth=71) (actual time=66.85..66.85 rows=0 loops=1)\"\n\" -> Hash Join (cost=9.87..140.62 \nrows=309 width=71) (actual time=9.08..60.17 rows=3214 loops=1)\"\n\" Hash Cond: \n(\"outer\".subcattpid = \"inner\".id)\"\n------------------- Bad idea, sct2subcattype_tbl hold 5.5k records\n\" -> Seq Scan on \nsct2subcattype_tbl (cost=0.00..99.26 rows=5526 width=8) (actual \ntime=0.01..30.16 rows=5526 loops=1)\"\n\" -> Hash (cost=9.85..9.85 \nrows=9 width=63) (actual time=8.97..8.97 rows=0 loops=1)\"\n\" -> Hash Join \n(cost=4.99..9.85 rows=9 width=63) (actual time=7.96..8.87 rows=48 loops=1)\"\n\" Hash Cond: \n(\"outer\".cattpid = \"inner\".cattpid)\"\n\" -> Seq Scan on \nsubcattype_tbl (cost=0.00..3.98 rows=156 width=8) (actual \ntime=0.02..0.50 rows=156 loops=1)\"\n\" Filter: \n(enabled = true)\"\n\" -> Hash \n(cost=4.98..4.98 rows=2 width=55) (actual time=7.83..7.83 rows=0 loops=1)\"\n\" -> Hash \nJoin (cost=2.58..4.98 rows=2 width=55) (actual time=7.34..7.81 rows=5 \nloops=1)\"\n\" \nHash Cond: (\"outer\".cattpid = \"inner\".id)\"\n\" -> \nHash Join (cost=1.08..3.36 rows=13 width=27) (actual time=0.32..0.73 \nrows=32 loops=1)\"\n\" \nHash Cond: (\"outer\".langid = \"inner\".id)\"\n\" \n-> Seq Scan on category_tbl (cost=0.00..1.80 rows=64 width=23) (actual \ntime=0.02..0.24 rows=64 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Hash (cost=1.07..1.07 rows=1 width=4) (actual time=0.10..0.10 \nrows=0 loops=1)\"\n\" \n-> Seq Scan on language_tbl (cost=0.00..1.07 rows=1 width=4) (actual \ntime=0.09..0.10 rows=1 loops=1)\"\n\" \nFilter: (((sysnm)::text = 'US'::text) AND (enabled = true))\"\n\" -> \nHash (cost=1.50..1.50 rows=5 width=28) (actual time=6.87..6.87 rows=0 \nloops=1)\"\n\" \n-> Seq Scan on cattype_tbl (cost=0.00..1.50 rows=5 width=28) (actual \ntime=6.81..6.86 rows=5 loops=1)\"\n\" \nFilter: ((spcattpid = 1) AND (enabled = true))\"\n\"Total runtime: 297.35 msec\"\n----------------------------------------------------------------------------------------------------------------------- \n\n\nQuery 3:\nEXPLAIN ANALYZE\nSELECT Count(DISTINCT SCT2SubCatType_Tbl.id) AS num\nFROM (SCT2SubCatType_Tbl\nINNER JOIN StatConTrans_Tbl ON SCT2SubCatType_Tbl.sctid 
= \nStatConTrans_Tbl.id AND StatConTrans_Tbl.enabled = true\nINNER JOIN SCT2Lang_Tbl ON StatConTrans_Tbl.id = SCT2Lang_Tbl.sctid\nINNER JOIN Info_Tbl ON StatConTrans_Tbl.id = Info_Tbl.sctid\nINNER JOIN Language_Tbl ON SCT2Lang_Tbl.langid = Language_Tbl.id AND \nInfo_Tbl.langid = Language_Tbl.id AND Language_Tbl.sysnm = UPPER('us') \nAND Language_Tbl.enabled = true\nINNER JOIN Price_Tbl ON StatConTrans_Tbl.id = Price_Tbl.sctid AND \nPrice_Tbl.affid = 8\nINNER JOIN StatCon_Tbl ON StatConTrans_Tbl.id = StatCon_Tbl.sctid AND \nStatCon_Tbl.ctpid = 1\nINNER JOIN B2SC_Tbl ON StatCon_Tbl.id = B2SC_Tbl.scid AND B2SC_Tbl.bid = 80\nINNER JOIN B2M_Tbl ON StatCon_Tbl.mtpid = B2M_Tbl.mtpid AND B2M_Tbl.bid \n= 80\nINNER JOIN SMSC2MimeType_Tbl ON StatCon_Tbl.mtpid = \nSMSC2MimeType_Tbl.mtpid AND SMSC2MimeType_Tbl.smscid = 3)\nWHERE SCT2SubCatType_Tbl.subcattpid = 138\n\nPlan on PostGre 7.3.6 on Red Hat Linux 3.2.3-39\n\"Aggregate (cost=495.96..495.96 rows=1 width=60) (actual \ntime=62.36..62.36 rows=1 loops=1)\"\n\" -> Nested Loop (cost=80.40..495.96 rows=1 width=60) (actual \ntime=62.30..62.30 rows=0 loops=1)\"\n\" Join Filter: (\"outer\".mtpid = \"inner\".mtpid)\"\n\" -> Nested Loop (cost=80.40..489.99 rows=1 width=56) (actual \ntime=62.30..62.30 rows=0 loops=1)\"\n\" -> Nested Loop (cost=80.40..484.07 rows=1 width=52) \n(actual time=62.29..62.29 rows=0 loops=1)\"\n\" -> Nested Loop (cost=80.40..478.05 rows=1 \nwidth=44) (actual time=62.29..62.29 rows=0 loops=1)\"\n\" Join Filter: (\"inner\".sctid = \"outer\".sctid)\"\n\" -> Nested Loop (cost=80.40..472.03 rows=1 \nwidth=40) (actual time=62.29..62.29 rows=0 loops=1)\"\n\" -> Nested Loop (cost=80.40..315.17 \nrows=1 width=36) (actual time=1.01..27.62 rows=69 loops=1)\"\n\" Join Filter: (\"outer\".id = \n\"inner\".sctid)\"\n\" -> Nested Loop \n(cost=80.40..309.14 rows=1 width=24) (actual time=0.91..26.31 rows=69 \nloops=1)\"\n\" -> Hash Join \n(cost=80.40..275.63 rows=6 width=20) (actual time=0.80..25.18 rows=69 \nloops=1)\"\n\" Hash Cond: \n(\"outer\".sctid = \"inner\".sctid)\"\n\" -> Hash Join \n(cost=1.08..195.80 rows=43 width=12) (actual time=0.15..21.75 rows=3947 \nloops=1)\"\n\" Hash Cond: \n(\"outer\".langid = \"inner\".id)\"\n----------------- Bad idea, sct2lang_tbl has 8.6k records, but the query \nis still faster than the one below where it uses its index?\n\" -> Seq Scan on \nsct2lang_tbl (cost=0.00..150.79 rows=8679 width=8) (actual \ntime=0.03..10.70 rows=8679 loops=1)\"\n\" -> Hash \n(cost=1.07..1.07 rows=1 width=4) (actual time=0.05..0.05 rows=0 loops=1)\"\n\" -> Seq \nScan on language_tbl (cost=0.00..1.07 rows=1 width=4) (actual \ntime=0.04..0.05 rows=1 loops=1)\"\n\" \nFilter: (((sysnm)::text = 'US'::text) AND (enabled = true))\"\n\" -> Hash \n(cost=79.26..79.26 rows=26 width=8) (actual time=0.39..0.39 rows=0 \nloops=1)\"\n\" -> Index Scan \nusing subcat_uq on sct2subcattype_tbl (cost=0.00..79.26 rows=26 \nwidth=8) (actual time=0.10..0.33 rows=69 loops=1)\"\n\" Index \nCond: (subcattpid = 138)\"\n\" -> Index Scan using \nstatcontrans_pk on statcontrans_tbl (cost=0.00..5.99 rows=1 width=4) \n(actual time=0.01..0.01 rows=1 loops=69)\"\n\" Index Cond: \n(statcontrans_tbl.id = \"outer\".sctid)\"\n\" Filter: (enabled = \ntrue)\"\n\" -> Index Scan using ctp_statcon \non statcon_tbl (cost=0.00..6.01 rows=1 width=12) (actual \ntime=0.01..0.01 rows=1 loops=69)\"\n\" Index Cond: \n((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid = 1))\"\n\" -> Index Scan using b2sc_uq on \nb2sc_tbl (cost=0.00..156.38 rows=38 width=4) (actual 
time=0.50..0.50 \nrows=0 loops=69)\"\n\" Index Cond: ((b2sc_tbl.bid = 80) \nAND (\"outer\".id = b2sc_tbl.scid))\"\n\" -> Index Scan using aff_price_uq on \nprice_tbl (cost=0.00..6.01 rows=1 width=4) (never executed)\"\n\" Index Cond: ((price_tbl.affid = 8) AND \n(price_tbl.sctid = \"outer\".sctid))\"\n\" -> Index Scan using info_uq on info_tbl \n(cost=0.00..6.00 rows=1 width=8) (never executed)\"\n\" Index Cond: ((info_tbl.sctid = \"outer\".sctid) \nAND (info_tbl.langid = \"outer\".langid))\"\n\" -> Index Scan using b2m_uq on b2m_tbl (cost=0.00..5.91 \nrows=1 width=4) (never executed)\"\n\" Index Cond: ((b2m_tbl.bid = 80) AND (\"outer\".mtpid \n= b2m_tbl.mtpid))\"\n\" -> Index Scan using smsc2mimetype_uq on smsc2mimetype_tbl \n(cost=0.00..5.95 rows=1 width=4) (never executed)\"\n\" Index Cond: ((smsc2mimetype_tbl.smscid = 3) AND \n(smsc2mimetype_tbl.mtpid = \"outer\".mtpid))\"\n\"Total runtime: 62.98 msec\"\n\nPlan on PostGre 7.3.9 on Red Hat Linux 3.2.3-49\n\"Aggregate (cost=538.89..538.89 rows=1 width=60) (actual \ntime=1699.01..1699.01 rows=1 loops=1)\"\n\" -> Nested Loop (cost=1.08..538.89 rows=1 width=60) (actual \ntime=1698.94..1698.94 rows=0 loops=1)\"\n\" Join Filter: (\"outer\".mtpid = \"inner\".mtpid)\"\n\" -> Nested Loop (cost=1.08..533.21 rows=1 width=56) (actual \ntime=1698.94..1698.94 rows=0 loops=1)\"\n\" -> Nested Loop (cost=1.08..526.96 rows=1 width=52) \n(actual time=1698.93..1698.93 rows=0 loops=1)\"\n\" -> Nested Loop (cost=1.08..518.61 rows=1 \nwidth=44) (actual time=1698.93..1698.93 rows=0 loops=1)\"\n\" -> Nested Loop (cost=1.08..501.22 rows=3 \nwidth=40) (actual time=6.98..466.52 rows=69 loops=1)\"\n\" Join Filter: (\"inner\".sctid = \n\"outer\".sctid)\"\n\" -> Nested Loop (cost=1.08..440.65 \nrows=3 width=28) (actual time=6.92..331.23 rows=69 loops=1)\"\n\" Join Filter: (\"inner\".id = \n\"outer\".sctid)\"\n\" -> Hash Join (cost=1.08..418.73 \nrows=4 width=24) (actual time=3.59..172.15 rows=69 loops=1)\"\n\" Hash Cond: (\"outer\".langid \n= \"inner\".id)\"\n\" -> Nested Loop \n(cost=0.00..417.51 rows=19 width=20) (actual time=3.20..171.27 rows=138 \nloops=1)\"\n\" -> Nested Loop \n(cost=0.00..287.97 rows=16 width=12) (actual time=3.15..5.27 rows=69 \nloops=1)\"\n\" -> Index Scan \nusing subcat_uq on sct2subcattype_tbl (cost=0.00..92.56 rows=33 \nwidth=8) (actual time=3.06..3.45 rows=69 loops=1)\"\n\" Index \nCond: (subcattpid = 138)\"\n\" -> Index Scan \nusing aff_price_uq on price_tbl (cost=0.00..5.88 rows=1 width=4) \n(actual time=0.02..0.02 rows=1 loops=69)\"\n\" Index \nCond: ((price_tbl.affid = 8) AND (price_tbl.sctid = \"outer\".sctid))\"\n\" -> Index Scan using \nsct2lang_uq on sct2lang_tbl (cost=0.00..8.13 rows=2 width=8) (actual \ntime=1.10..2.39 rows=2 loops=69)\"\n\" Index Cond: \n(sct2lang_tbl.sctid = \"outer\".sctid)\"\n\" -> Hash (cost=1.07..1.07 \nrows=1 width=4) (actual time=0.20..0.20 rows=0 loops=1)\"\n\" -> Seq Scan on \nlanguage_tbl (cost=0.00..1.07 rows=1 width=4) (actual time=0.18..0.19 \nrows=1 loops=1)\"\n\" Filter: \n(((sysnm)::text = 'US'::text) AND (enabled = true))\"\n\" -> Index Scan using \nstatcontrans_pk on statcontrans_tbl (cost=0.00..5.88 rows=1 width=4) \n(actual time=2.29..2.30 rows=1 loops=69)\"\n\" Index Cond: \n(statcontrans_tbl.id = \"outer\".sctid)\"\n\" Filter: (enabled = true)\"\n\" -> Index Scan using ctp_statcon on \nstatcon_tbl (cost=0.00..20.40 rows=5 width=12) (actual time=1.95..1.95 \nrows=1 loops=69)\"\n\" Index Cond: ((statcon_tbl.sctid = \n\"outer\".sctid) AND (statcon_tbl.ctpid = 1))\"\n\" -> Index Scan using 
sc2b on b2sc_tbl \n(cost=0.00..5.96 rows=1 width=4) (actual time=17.86..17.86 rows=0 \nloops=69)\"\n\" Index Cond: ((\"outer\".id = \nb2sc_tbl.scid) AND (b2sc_tbl.bid = 80))\"\n\" -> Index Scan using info_uq on info_tbl \n(cost=0.00..5.93 rows=1 width=8) (never executed)\"\n\" Index Cond: ((info_tbl.sctid = \"outer\".sctid) \nAND (info_tbl.langid = \"outer\".langid))\"\n\" -> Index Scan using b2m_uq on b2m_tbl (cost=0.00..5.59 \nrows=1 width=4) (never executed)\"\n\" Index Cond: ((b2m_tbl.bid = 80) AND (\"outer\".mtpid \n= b2m_tbl.mtpid))\"\n\" -> Index Scan using smsc2mimetype_uq on smsc2mimetype_tbl \n(cost=0.00..5.67 rows=1 width=4) (never executed)\"\n\" Index Cond: ((smsc2mimetype_tbl.smscid = 3) AND \n(smsc2mimetype_tbl.mtpid = \"outer\".mtpid))\"\n\"Total runtime: 1710.07 msec\"\n\n",
"msg_date": "Tue, 03 May 2005 19:41:37 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad choice of query plan from PG 7.3.6 to PG 7.3.9 part 2"
}
] |
[
{
"msg_contents": "Please refer to part 1a for questions and part 2 for more queries and \nquery plans.\nWhy won't this list accept my questions and sample data in one mail???\n\n/Jona\n---------------------------------------------------------------------------------------------------- \n\nQuery 1:\nEXPLAIN ANALYZE\nSELECT DISTINCT StatConTrans_Tbl.id, Code_Tbl.sysnm AS code, \nPriceCat_Tbl.amount AS price, Country_Tbl.currency,\n CreditsCat_Tbl.amount AS credits, Info_Tbl.title, Info_Tbl.description\nFROM (SCT2SubCatType_Tbl\nINNER JOIN SCT2Lang_Tbl ON SCT2SubCatType_Tbl.sctid = SCT2Lang_Tbl.sctid\nINNER JOIN Language_Tbl ON SCT2Lang_Tbl.langid = Language_Tbl.id AND \nLanguage_Tbl.sysnm = UPPER('us') AND Language_Tbl.enabled = true\nINNER JOIN Info_Tbl ON SCT2SubCatType_Tbl.sctid = Info_Tbl.sctid AND \nLanguage_Tbl.id = Info_Tbl.langid\nINNER JOIN SubCatType_Tbl ON SCT2SubCatType_Tbl.subcattpid = \nSubCatType_Tbl.id AND SubCatType_Tbl.enabled = true\nINNER JOIN CatType_Tbl ON SubCatType_Tbl.cattpid = CatType_Tbl.id AND \nCatType_Tbl.enabled = true\nINNER JOIN SuperCatType_Tbl ON CatType_Tbl.spcattpid = \nSuperCatType_Tbl.id AND SuperCatType_Tbl.enabled = true\nINNER JOIN StatConTrans_Tbl ON SCT2SubCatType_Tbl.sctid = \nStatConTrans_Tbl.id AND StatConTrans_Tbl.enabled = true\nINNER JOIN Price_Tbl ON StatConTrans_Tbl.id = Price_Tbl.sctid AND \nPrice_Tbl.affid = 8\nINNER JOIN PriceCat_Tbl ON Price_Tbl.prccatid = PriceCat_Tbl.id AND \nPriceCat_Tbl.enabled = true\nINNER JOIN Country_Tbl ON PriceCat_Tbl.cntid = Country_Tbl.id AND \nCountry_Tbl.enabled = true\nINNER JOIN CreditsCat_Tbl ON Price_Tbl.crdcatid = CreditsCat_Tbl.id AND \nCreditsCat_Tbl.enabled = true\nINNER JOIN StatCon_Tbl ON StatConTrans_Tbl.id = StatCon_Tbl.sctid AND \nStatCon_Tbl.ctpid = 1\nINNER JOIN Code_Tbl ON SuperCatType_Tbl.id = Code_Tbl.spcattpid AND \nCode_Tbl.affid = 8 AND Code_Tbl.cdtpid = 1)\nWHERE SCT2SubCatType_Tbl.subcattpid = 79\nORDER BY StatConTrans_Tbl.id DESC\nLIMIT 8 OFFSET 0\n\nPlan on PostGre 7.3.6 on Red Hat Linux 3.2.3-39\n\"Limit (cost=178.59..178.61 rows=1 width=330) (actual time=22.77..28.51 \nrows=4 loops=1)\"\n\" -> Unique (cost=178.59..178.61 rows=1 width=330) (actual \ntime=22.77..28.50 rows=4 loops=1)\"\n\" -> Sort (cost=178.59..178.60 rows=1 width=330) (actual \ntime=22.76..22.85 rows=156 loops=1)\"\n\" Sort Key: statcontrans_tbl.id, code_tbl.sysnm, \npricecat_tbl.amount, country_tbl.currency, creditscat_tbl.amount, \ninfo_tbl.title, info_tbl.description\"\n\" -> Hash Join (cost=171.19..178.58 rows=1 width=330) \n(actual time=3.39..6.55 rows=156 loops=1)\"\n\" Hash Cond: (\"outer\".cntid = \"inner\".id)\"\n\" -> Nested Loop (cost=170.13..177.51 rows=1 \nwidth=312) (actual time=3.27..5.75 rows=156 loops=1)\"\n\" Join Filter: (\"inner\".sctid = \"outer\".sctid)\"\n\" -> Hash Join (cost=170.13..171.48 rows=1 \nwidth=308) (actual time=3.12..3.26 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".crdcatid = \n\"inner\".id)\"\n\" -> Hash Join (cost=169.03..170.38 \nrows=1 width=300) (actual time=3.00..3.11 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".spcattpid = \n\"inner\".spcattpid)\"\n\" -> Hash Join \n(cost=167.22..168.56 rows=1 width=253) (actual time=2.88..2.97 rows=4 \nloops=1)\"\n\" Hash Cond: (\"outer\".id = \n\"inner\".prccatid)\"\n\" -> Seq Scan on \npricecat_tbl (cost=0.00..1.29 rows=12 width=12) (actual time=0.04..0.08 \nrows=23 loops=1)\"\n\" Filter: (enabled = \ntrue)\"\n\" -> Hash \n(cost=167.21..167.21 rows=1 width=241) (actual time=2.80..2.80 rows=0 \nloops=1)\"\n\" -> Nested Loop 
\n(cost=3.77..167.21 rows=1 width=241) (actual time=1.31..2.79 rows=4 \nloops=1)\"\n\" Join Filter: \n(\"inner\".sctid = \"outer\".sctid)\"\n\" -> Nested \nLoop (cost=3.77..161.19 rows=1 width=229) (actual time=1.19..2.60 \nrows=4 loops=1)\"\n\" Join \nFilter: (\"outer\".sctid = \"inner\".sctid)\"\n\" -> Hash \nJoin (cost=3.77..155.17 rows=1 width=44) (actual time=1.07..2.37 rows=4 \nloops=1)\"\n\" \nHash Cond: (\"outer\".langid = \"inner\".id)\"\n\" -> \nNested Loop (cost=2.69..154.06 rows=7 width=40) (actual time=0.90..2.18 \nrows=8 loops=1)\"\n\" \nJoin Filter: (\"outer\".sctid = \"inner\".sctid)\"\n\" \n-> Nested Loop (cost=2.69..21.30 rows=1 width=32) (actual \ntime=0.78..1.94 rows=4 loops=1)\"\n\" \n-> Nested Loop (cost=2.69..15.30 rows=1 width=28) (actual \ntime=0.66..1.76 rows=4 loops=1)\"\n\" \n-> Hash Join (cost=2.69..7.07 rows=1 width=20) (actual time=0.39..1.15 \nrows=154 loops=1)\"\n\" \nHash Cond: (\"outer\".cattpid = \"inner\".id)\"\n\" \n-> Seq Scan on subcattype_tbl (cost=0.00..3.98 rows=79 width=8) \n(actual time=0.03..0.35 rows=156 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Hash (cost=2.68..2.68 rows=3 width=12) (actual time=0.31..0.31 \nrows=0 loops=1)\"\n\" \n-> Hash Join (cost=1.15..2.68 rows=3 width=12) (actual time=0.16..0.27 \nrows=31 loops=1)\"\n\" \nHash Cond: (\"outer\".spcattpid = \"inner\".id)\"\n\" \n-> Seq Scan on cattype_tbl (cost=0.00..1.41 rows=16 width=8) (actual \ntime=0.04..0.09 rows=31 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Hash (cost=1.14..1.14 rows=6 width=4) (actual time=0.06..0.06 \nrows=0 loops=1)\"\n\" \n-> Seq Scan on supercattype_tbl (cost=0.00..1.14 rows=6 width=4) \n(actual time=0.03..0.05 rows=10 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Index Scan using subcat_uq on sct2subcattype_tbl (cost=0.00..5.97 \nrows=1 width=8) (actual time=0.00..0.00 rows=0 loops=154)\"\n\" \nIndex Cond: ((sct2subcattype_tbl.subcattpid = \"outer\".id) AND \n(sct2subcattype_tbl.subcattpid = 79))\"\n\" \n-> Index Scan using statcontrans_pk on statcontrans_tbl \n(cost=0.00..5.99 rows=1 width=4) (actual time=0.04..0.04 rows=1 loops=4)\"\n\" \nIndex Cond: (\"outer\".sctid = statcontrans_tbl.id)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Index Scan using sct2lang_uq on sct2lang_tbl (cost=0.00..132.22 \nrows=43 width=8) (actual time=0.04..0.05 rows=2 loops=4)\"\n\" \nIndex Cond: (\"outer\".id = sct2lang_tbl.sctid)\"\n\" -> \nHash (cost=1.07..1.07 rows=1 width=4) (actual time=0.11..0.11 rows=0 \nloops=1)\"\n\" \n-> Seq Scan on language_tbl (cost=0.00..1.07 rows=1 width=4) (actual \ntime=0.10..0.11 rows=1 loops=1)\"\n\" \nFilter: (((sysnm)::text = 'US'::text) AND (enabled = true))\"\n\" -> Index \nScan using info_uq on info_tbl (cost=0.00..6.00 rows=1 width=185) \n(actual time=0.05..0.05 rows=1 loops=4)\"\n\" \nIndex Cond: ((info_tbl.sctid = \"outer\".sctid) AND (info_tbl.langid = \n\"outer\".langid))\"\n\" -> Index Scan \nusing aff_price_uq on price_tbl (cost=0.00..6.01 rows=1 width=12) \n(actual time=0.03..0.03 rows=1 loops=4)\"\n\" Index \nCond: ((price_tbl.affid = 8) AND (price_tbl.sctid = \"outer\".sctid))\"\n\" -> Hash (cost=1.81..1.81 rows=1 \nwidth=47) (actual time=0.08..0.08 rows=0 loops=1)\"\n\" -> Seq Scan on code_tbl \n(cost=0.00..1.81 rows=1 width=47) (actual time=0.04..0.07 rows=5 loops=1)\"\n\" Filter: ((affid = 8) \nAND (cdtpid = 1))\"\n\" -> Hash (cost=1.09..1.09 rows=4 \nwidth=8) (actual time=0.06..0.06 rows=0 loops=1)\"\n\" -> Seq Scan on creditscat_tbl \n(cost=0.00..1.09 rows=4 width=8) (actual time=0.03..0.04 rows=7 
loops=1)\"\n\" Filter: (enabled = true)\"\n\" -> Index Scan using ctp_statcon on \nstatcon_tbl (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 \nrows=39 loops=4)\"\n\" Index Cond: ((statcon_tbl.sctid = \n\"outer\".sctid) AND (statcon_tbl.ctpid = 1))\"\n\" -> Hash (cost=1.06..1.06 rows=2 width=18) (actual \ntime=0.06..0.06 rows=0 loops=1)\"\n\" -> Seq Scan on country_tbl (cost=0.00..1.06 \nrows=2 width=18) (actual time=0.04..0.05 rows=4 loops=1)\"\n\" Filter: (enabled = true)\"\n\"Total runtime: 29.56 msec\"\n\nPlan on PostGre 7.3.9 on Red Hat Linux 3.2.3-49\n\"Limit (cost=545.53..545.60 rows=1 width=135) (actual \ntime=1251.71..1261.25 rows=4 loops=1)\"\n\" -> Unique (cost=545.53..545.60 rows=1 width=135) (actual \ntime=1251.71..1261.24 rows=4 loops=1)\"\n\" -> Sort (cost=545.53..545.54 rows=4 width=135) (actual \ntime=1251.70..1251.90 rows=156 loops=1)\"\n\" Sort Key: statcontrans_tbl.id, code_tbl.sysnm, \npricecat_tbl.amount, country_tbl.currency, creditscat_tbl.amount, \ninfo_tbl.title, info_tbl.description\"\n\" -> Nested Loop (cost=485.61..545.49 rows=4 width=135) \n(actual time=603.77..1230.96 rows=156 loops=1)\"\n\" Join Filter: (\"inner\".sctid = \"outer\".sctid)\"\n\" -> Hash Join (cost=485.61..486.06 rows=3 \nwidth=131) (actual time=541.87..542.22 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".crdcatid = \"inner\".id)\"\n\" -> Hash Join (cost=484.51..484.90 rows=3 \nwidth=123) (actual time=529.09..529.36 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".spcattpid = \n\"inner\".spcattpid)\"\n\" -> Hash Join (cost=482.68..482.93 \nrows=3 width=114) (actual time=517.60..517.77 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".cntid = \n\"inner\".id)\"\n\" -> Merge Join \n(cost=481.60..481.80 rows=4 width=105) (actual time=517.36..517.43 \nrows=4 loops=1)\"\n\" Merge Cond: (\"outer\".id = \n\"inner\".prccatid)\"\n\" -> Sort (cost=1.81..1.87 \nrows=23 width=12) (actual time=8.44..8.45 rows=6 loops=1)\"\n\" Sort Key: \npricecat_tbl.id\"\n\" -> Seq Scan on \npricecat_tbl (cost=0.00..1.29 rows=23 width=12) (actual time=8.31..8.37 \nrows=23 loops=1)\"\n\" Filter: \n(enabled = true)\"\n\" -> Sort \n(cost=479.80..479.81 rows=4 width=93) (actual time=508.87..508.87 rows=4 \nloops=1)\"\n\" Sort Key: \nprice_tbl.prccatid\"\n\" -> Nested Loop \n(cost=13.69..479.75 rows=4 width=93) (actual time=444.70..508.81 rows=4 \nloops=1)\"\n\" Join Filter: \n(\"inner\".sctid = \"outer\".sctid)\"\n\" -> Nested \nLoop (cost=13.69..427.04 rows=9 width=81) (actual time=444.60..508.62 \nrows=4 loops=1)\"\n\" Join \nFilter: (\"outer\".sctid = \"inner\".sctid)\"\n\" -> \nNested Loop (cost=13.69..377.03 rows=8 width=44) (actual \ntime=345.13..398.38 rows=4 loops=1)\"\n\" \nJoin Filter: (\"outer\".sctid = \"inner\".id)\"\n\" -> \nHash Join (cost=13.69..327.32 rows=8 width=40) (actual \ntime=219.17..272.27 rows=4 loops=1)\"\n\" \nHash Cond: (\"outer\".langid = \"inner\".id)\"\n\" \n-> Nested Loop (cost=12.61..325.92 rows=42 width=36) (actual \ntime=209.77..262.79 rows=8 loops=1)\"\n\" \n-> Hash Join (cost=12.61..106.32 rows=27 width=28) (actual \ntime=101.88..102.00 rows=4 loops=1)\"\n\" \nHash Cond: (\"outer\".cattpid = \"inner\".id)\"\n\" \n-> Hash Join (cost=9.47..102.68 rows=33 width=16) (actual \ntime=84.14..84.21 rows=4 loops=1)\"\n\" \nHash Cond: (\"outer\".subcattpid = \"inner\".id)\"\n\" \n-> Index Scan using subcat_uq on sct2subcattype_tbl (cost=0.00..92.56 \nrows=33 width=8) (actual time=83.33..83.37 rows=4 loops=1)\"\n\" \nIndex Cond: (subcattpid = 79)\"\n\" \n-> Hash (cost=3.98..3.98 rows=156 width=8) (actual 
time=0.76..0.76 \nrows=0 loops=1)\"\n\" \n-> Seq Scan on subcattype_tbl (cost=0.00..3.98 rows=156 width=8) \n(actual time=0.03..0.49 rows=156 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Hash (cost=3.07..3.07 rows=27 width=12) (actual time=17.58..17.58 \nrows=0 loops=1)\"\n\" \n-> Hash Join (cost=1.16..3.07 rows=27 width=12) (actual \ntime=17.30..17.52 rows=31 loops=1)\"\n\" \nHash Cond: (\"outer\".spcattpid = \"inner\".id)\"\n\" \n-> Seq Scan on cattype_tbl (cost=0.00..1.41 rows=31 width=8) (actual \ntime=0.02..0.12 rows=31 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Hash (cost=1.14..1.14 rows=10 width=4) (actual time=17.09..17.09 \nrows=0 loops=1)\"\n\" \n-> Seq Scan on supercattype_tbl (cost=0.00..1.14 rows=10 width=4) \n(actual time=17.05..17.07 rows=10 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Index Scan using sct2lang_uq on sct2lang_tbl (cost=0.00..8.13 \nrows=2 width=8) (actual time=26.97..40.18 rows=2 loops=4)\"\n\" \nIndex Cond: (\"outer\".sctid = sct2lang_tbl.sctid)\"\n\" \n-> Hash (cost=1.07..1.07 rows=1 width=4) (actual time=9.04..9.04 \nrows=0 loops=1)\"\n\" \n-> Seq Scan on language_tbl (cost=0.00..1.07 rows=1 width=4) (actual \ntime=9.02..9.03 rows=1 loops=1)\"\n\" \nFilter: (((sysnm)::text = 'US'::text) AND (enabled = true))\"\n\" -> \nIndex Scan using statcontrans_pk on statcontrans_tbl (cost=0.00..5.88 \nrows=1 width=4) (actual time=31.51..31.52 rows=1 loops=4)\"\n\" \nIndex Cond: (statcontrans_tbl.id = \"outer\".sctid)\"\n\" \nFilter: (enabled = true)\"\n\" -> Index \nScan using info_uq on info_tbl (cost=0.00..5.93 rows=1 width=37) \n(actual time=27.54..27.54 rows=1 loops=4)\"\n\" \nIndex Cond: ((info_tbl.sctid = \"outer\".sctid) AND (info_tbl.langid = \n\"outer\".langid))\"\n\" -> Index Scan \nusing aff_price_uq on price_tbl (cost=0.00..5.88 rows=1 width=12) \n(actual time=0.03..0.03 rows=1 loops=4)\"\n\" Index \nCond: ((price_tbl.affid = 8) AND (price_tbl.sctid = \"outer\".sctid))\"\n\" -> Hash (cost=1.06..1.06 rows=4 \nwidth=9) (actual time=0.05..0.05 rows=0 loops=1)\"\n\" -> Seq Scan on \ncountry_tbl (cost=0.00..1.06 rows=4 width=9) (actual time=0.02..0.03 \nrows=4 loops=1)\"\n\" Filter: (enabled = \ntrue)\"\n\" -> Hash (cost=1.81..1.81 rows=8 \nwidth=9) (actual time=11.31..11.31 rows=0 loops=1)\"\n\" -> Seq Scan on code_tbl \n(cost=0.00..1.81 rows=8 width=9) (actual time=11.24..11.29 rows=5 loops=1)\"\n\" Filter: ((affid = 8) AND \n(cdtpid = 1))\"\n\" -> Hash (cost=1.09..1.09 rows=7 width=8) \n(actual time=12.59..12.59 rows=0 loops=1)\"\n\" -> Seq Scan on creditscat_tbl \n(cost=0.00..1.09 rows=7 width=8) (actual time=12.55..12.57 rows=7 loops=1)\"\n\" Filter: (enabled = true)\"\n\" -> Index Scan using ctp_statcon on statcon_tbl \n(cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 \nloops=4)\"\n\" Index Cond: ((statcon_tbl.sctid = \n\"outer\".sctid) AND (statcon_tbl.ctpid = 1))\"\n\"Total runtime: 1299.02 msec\"\n\n",
"msg_date": "Tue, 03 May 2005 19:56:44 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad choice of query plan from PG 7.3.6 to PG 7.3.9 part 1b"
}
] |
[
{
"msg_contents": "Hi,\n\nI have postgres 8.0.2 installed on FreeBSD FreeBSD 4.11-RELEASE with 2GB \nof RAM.\n\nWhen trying to set max_connections=256 I get the following error message:\n\nFATAL: could not create semaphores: No space left on device\nDETAIL: Failed system call was semget(5432017, 17, 03600).\nHINT: This error does *not* mean that you have run out of disk space.\n It occurs when either the system limit for the maximum number of \nsemaphore sets (SEMMNI), or the system wide maximum number of semaphores \n(SEMMNS), would be exceeded. You need to raise the respective kernel \nparameter. Alternatively, reduce PostgreSQL's consumption of semaphores \nby reducing its max_connections parameter (currently 256).\n The PostgreSQL documentation contains more information about \nconfiguring your system for PostgreSQL.\n\nI have read through the kernel resources documentation for postgres 8 \nand set values accordingly. \nSome settings are not staying after a reboot, even if I place them in \n/boot/loader.conf.\n\nSo far I'm able to get max_connections to 250.\n\nHere is a dump of kern.ipc values:\n\nkern.ipc.maxsockbuf: 262144\nkern.ipc.sockbuf_waste_factor: 8\nkern.ipc.somaxconn: 128\nkern.ipc.max_linkhdr: 16\nkern.ipc.max_protohdr: 60\nkern.ipc.max_hdr: 76\nkern.ipc.max_datalen: 136\nkern.ipc.nmbclusters: 65536\nkern.ipc.msgmax: 16384\nkern.ipc.msgmni: 40\nkern.ipc.msgmnb: 2048\nkern.ipc.msgtql: 40\nkern.ipc.msgssz: 8\nkern.ipc.msgseg: 2048\nkern.ipc.semmap: 30\nkern.ipc.semmni: 256\nkern.ipc.semmns: 272\nkern.ipc.semmnu: 30\nkern.ipc.semmsl: 60\nkern.ipc.semopm: 100\nkern.ipc.semume: 10\nkern.ipc.semusz: 92\nkern.ipc.semvmx: 32767\nkern.ipc.semaem: 16384\nkern.ipc.shmmax: 33554432\nkern.ipc.shmmin: 1\nkern.ipc.shmmni: 192\nkern.ipc.shmseg: 128\nkern.ipc.shmall: 8192\nkern.ipc.shm_use_phys: 0\nkern.ipc.shm_allow_removed: 0\nkern.ipc.mbuf_wait: 32\nkern.ipc.mbtypes: 38 551 3 0 0 0 0 0 0 0 0 0 0 0 0 0\nkern.ipc.nmbufs: 262144\nkern.ipc.nsfbufs: 8704\nkern.ipc.nsfbufspeak: 7\nkern.ipc.nsfbufsused: 0\nkern.ipc.m_clreflimithits: 0\nkern.ipc.mcl_pool_max: 0\nkern.ipc.mcl_pool_now: 0\nkern.ipc.maxsockets: 65536\n\nAnd boot/loader.conf:\n\nuserconfig_script_load=\"YES\"\nkern.ipc.nmbclusters=\"65536\"\nkern.maxfiles=\"65536\"\nkern.maxfilesperproc=\"65536\"\nnet.inet.tcp.mssdflt=\"1460\"\nkern.somaxconn=\"4096\"\nkern.ipc.semmns=\"272\"\nkern.ipc.semmni=\"256\"\nkern.ipc.shmmax=\"66099200\"\n\nkern.ipc.shmmax and kern.ipc.shmmin will not stay to what I set them to.\n\nWhat am I doing wrong or not doing at all?\n\nYour help is greatly appreciated.\n\nRegards,\nChris.\n\n\n\n\n\n\n\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.308 / Virus Database: 266.11.2 - Release Date: 5/2/2005\n\n",
"msg_date": "Wed, 04 May 2005 11:12:52 +1000",
"msg_from": "Chris Hebrard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Kernel Resources and max_connections"
},
{
"msg_contents": "Chris Hebrard wrote:\n\n> kern.ipc.shmmax and kern.ipc.shmmin will not stay to what I set them to.\n> \n> What am I doing wrong or not doing at all?\n> \n\nThese need to go in /etc/sysctl.conf. You might need to set shmall as well.\n\n(This not-very-clear distinction between what is sysctl'abe and what is \na kernel tunable is a bit of a downer).\n\ncheers\n\nMark\n\n\n",
"msg_date": "Wed, 04 May 2005 13:46:34 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Kernel Resources and max_connections"
},
{
"msg_contents": "Mark Kirkwood wrote:\n\n> Chris Hebrard wrote:\n>\n>> kern.ipc.shmmax and kern.ipc.shmmin will not stay to what I set them to.\n>>\n>> What am I doing wrong or not doing at all?\n>>\n>\n> These need to go in /etc/sysctl.conf. You might need to set shmall as \n> well.\n>\n> (This not-very-clear distinction between what is sysctl'abe and what \n> is a kernel tunable is a bit of a downer).\n>\n> cheers\n>\n> Mark\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n>\nThanks for your reply,\n\nI set the values in etc/sysctl.conf:\n\n# $FreeBSD: src/etc/sysctl.conf,v 1.1.2.3 2002/04/15 00:44:13 dougb Exp $\n#\n# This file is read when going to multi-user and its contents piped thru\n# ``sysctl'' to adjust kernel values. ``man 5 sysctl.conf'' for details.\n#\n# Added by IMP 2005-05-04\nnet.inet.tcp.rfc1323=1\nkern.ipc.somaxconn=1024\nkern.ipc.maxsockbuf=8388608\nnet.inet.tcp.sendspace=3217968\nnet.inet.tcp.recvspace=3217968\nkern.ipc.semmns=\"272\"\nkern.ipc.semmni=\"256\"\nkern.ipc.shmmax=\"66099200\"\nkern.ipc.shmmin=\"256\"\n\n\nAfter a restart both shmmax and shmmin are now 0 and postgres failed to \nstart.\n\nkern.ipc.maxsockbuf: 8388608\nkern.ipc.sockbuf_waste_factor: 8\nkern.ipc.somaxconn: 1024\nkern.ipc.max_linkhdr: 16\nkern.ipc.max_protohdr: 60\nkern.ipc.max_hdr: 76\nkern.ipc.max_datalen: 136\nkern.ipc.nmbclusters: 65536\nkern.ipc.msgmax: 16384\nkern.ipc.msgmni: 40\nkern.ipc.msgmnb: 2048\nkern.ipc.msgtql: 40\nkern.ipc.msgssz: 8\nkern.ipc.msgseg: 2048\nkern.ipc.semmap: 30\nkern.ipc.semmni: 10\nkern.ipc.semmns: 60\nkern.ipc.semmnu: 30\nkern.ipc.semmsl: 60\nkern.ipc.semopm: 100\nkern.ipc.semume: 10\nkern.ipc.semusz: 92\nkern.ipc.semvmx: 32767\nkern.ipc.semaem: 16384\nkern.ipc.shmmax: 0\nkern.ipc.shmmin: 0\nkern.ipc.shmmni: 192\nkern.ipc.shmseg: 128\nkern.ipc.shmall: 8192\nkern.ipc.shm_use_phys: 0\nkern.ipc.shm_allow_removed: 0\nkern.ipc.mbuf_wait: 32\nkern.ipc.mbtypes: 24 550 2 0 0 0 0 0 0 0 0 0 0 0 0 0\nkern.ipc.nmbufs: 262144\nkern.ipc.nsfbufs: 8704\nkern.ipc.nsfbufspeak: 0\nkern.ipc.nsfbufsused: 0\nkern.ipc.m_clreflimithits: 0\nkern.ipc.mcl_pool_max: 0\nkern.ipc.mcl_pool_now: 0\nkern.ipc.maxsockets: 65536\n\nI'm lost here.\nChris.\n\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.308 / Virus Database: 266.11.2 - Release Date: 5/2/2005\n\n",
"msg_date": "Wed, 04 May 2005 12:04:21 +1000",
"msg_from": "Chris Hebrard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Kernel Resources and max_connections"
},
{
"msg_contents": "On Wed, May 04, 2005 at 01:46:34PM +1200, Mark Kirkwood wrote:\n> (This not-very-clear distinction between what is sysctl'abe and what is \n> a kernel tunable is a bit of a downer).\n\nI think this is documented somewhere, though I can't think of where\nright now.\n\nAlso, note that some sysctl's can only be set in /boot/loader.conf.\nhw.ata.wc=0 is an example (which you want to set on any box with IDE\ndrives if you want fsync to actually do what it thinks it's doing).\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 3 May 2005 23:07:18 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Kernel Resources and max_connections"
},
{
"msg_contents": "Chris Hebrard wrote:\n> \n> I set the values in etc/sysctl.conf:\n> \n> # $FreeBSD: src/etc/sysctl.conf,v 1.1.2.3 2002/04/15 00:44:13 dougb Exp $\n> #\n> # This file is read when going to multi-user and its contents piped thru\n> # ``sysctl'' to adjust kernel values. ``man 5 sysctl.conf'' for details.\n> #\n> # Added by IMP 2005-05-04\n> net.inet.tcp.rfc1323=1\n> kern.ipc.somaxconn=1024\n> kern.ipc.maxsockbuf=8388608\n> net.inet.tcp.sendspace=3217968\n> net.inet.tcp.recvspace=3217968\n> kern.ipc.semmns=\"272\"\n> kern.ipc.semmni=\"256\"\n> kern.ipc.shmmax=\"66099200\"\n> kern.ipc.shmmin=\"256\"\n> \n> \n> After a restart both shmmax and shmmin are now 0 and postgres failed to \n> start.\n> \n>\nHmmm - puzzling. One point to check, did you take them out of \n/boot/loader.conf ?\n\nAssuming so, maybe don't quote 'em (see below).\n\nFinally you need to to set shmall, otherwise it will over(under)ride the \nshmmax setting. So try:\n\nnet.inet.tcp.rfc1323=1\nkern.ipc.somaxconn=1024\nkern.ipc.maxsockbuf=8388608\nnet.inet.tcp.sendspace=3217968\nnet.inet.tcp.recvspace=3217968\nkern.ipc.semmns=272\nkern.ipc.semmni=256\nkern.ipc.shmmax=66099200\nkern.ipc.shmmin=256\nkern.ipc.shmall=32768\n\n\n\n\n\n\n",
"msg_date": "Wed, 04 May 2005 17:16:36 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Kernel Resources and max_connections"
}
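A note on the numbers above: kern.ipc.shmall is counted in pages (4 kB on i386), so it has to cover shmmax or the smaller of the two limits wins. For the shmmax used in this thread that works out to roughly:

  66099200 bytes / 4096 bytes per page = 16137.5, i.e. at least 16138 pages

Mark's suggested 32768 leaves plenty of headroom; a value much below ~16138 would effectively cap usable shared memory regardless of the shmmax setting.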
] |
[
{
"msg_contents": "Problem sovled by setting:\n\nkern.ipc.semmni: 280\nkern.ipc.semmns: 300\n\nChris.\n\n\n> Mark Kirkwood wrote:\n>\n>> Chris Hebrard wrote:\n>>\n>>> kern.ipc.shmmax and kern.ipc.shmmin will not stay to what I set them \n>>> to.\n>>>\n>>> What am I doing wrong or not doing at all?\n>>>\n>>\n>> These need to go in /etc/sysctl.conf. You might need to set shmall as \n>> well.\n>>\n>> (This not-very-clear distinction between what is sysctl'abe and what \n>> is a kernel tunable is a bit of a downer).\n>>\n>> cheers\n>>\n>> Mark\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>>\n>>\n>>\n>> kern.ipc.maxsockbuf: 8388608\n>> kern.ipc.sockbuf_waste_factor: 8\n>> kern.ipc.somaxconn: 1024\n>> kern.ipc.max_linkhdr: 16\n>> kern.ipc.max_protohdr: 60\n>> kern.ipc.max_hdr: 76\n>> kern.ipc.max_datalen: 136\n>> kern.ipc.nmbclusters: 65536\n>> kern.ipc.msgmax: 16384\n>> kern.ipc.msgmni: 40\n>> kern.ipc.msgmnb: 2048\n>> kern.ipc.msgtql: 40\n>> kern.ipc.msgssz: 8\n>> kern.ipc.msgseg: 2048\n>> kern.ipc.semmap: 30\n>> kern.ipc.semmni: 256\n>> kern.ipc.semmns: 272\n>> kern.ipc.semmnu: 30\n>> kern.ipc.semmsl: 60\n>> kern.ipc.semopm: 100\n>> kern.ipc.semume: 10\n>> kern.ipc.semusz: 92\n>> kern.ipc.semvmx: 32767\n>> kern.ipc.semaem: 16384\n>> kern.ipc.shmmax: 66099200\n>> kern.ipc.shmmin: 256\n>> kern.ipc.shmmni: 192\n>> kern.ipc.shmseg: 128\n>> kern.ipc.shmall: 8192\n>> kern.ipc.shm_use_phys: 0\n>> kern.ipc.shm_allow_removed: 0\n>> kern.ipc.mbuf_wait: 32\n>> kern.ipc.mbtypes: 37 552 3 0 0 0 0 0 0 0 0 0 0 0 0 0\n>> kern.ipc.nmbufs: 262144\n>> kern.ipc.nsfbufs: 8704\n>> kern.ipc.nsfbufspeak: 4\n>> kern.ipc.nsfbufsused: 0\n>> kern.ipc.m_clreflimithits: 0\n>> kern.ipc.mcl_pool_max: 0\n>> kern.ipc.mcl_pool_now: 0\n>> kern.ipc.maxsockets: 65536\n>\n>\n> I've got the values to what I want them to be now, after loading some \n> values in loader.conf and others in sysctl.conf.\n>\n> loader.conf:\n>\n> userconfig_script_load=\"YES\"\n> kern.ipc.nmbclusters=\"65536\"\n> kern.maxfiles=\"65536\"\n> kern.maxfilesperproc=\"65536\"\n> net.inet.tcp.mssdflt=\"1460\"\n> kern.somaxconn=\"4096\"\n> kern.ipc.semmns=\"272\"\n> kern.ipc.semmni=\"256\"\n>\n> sysctl.conf:\n>\n> net.inet.tcp.rfc1323=1\n> kern.ipc.somaxconn=1024\n> kern.ipc.maxsockbuf=8388608\n> net.inet.tcp.sendspace=3217968\n> net.inet.tcp.recvspace=3217968\n> kern.ipc.shmmax=66099200\n> kern.ipc.shmmin=256\n> kern.ipc.shmall=16138\n>\n> and kern.ipc values are now:\n> kern.ipc.maxsockbuf: 8388608\n> kern.ipc.sockbuf_waste_factor: 8\n> kern.ipc.somaxconn: 1024\n> kern.ipc.max_linkhdr: 16\n> kern.ipc.max_protohdr: 60\n> kern.ipc.max_hdr: 76\n> kern.ipc.max_datalen: 136\n> kern.ipc.nmbclusters: 65536\n> kern.ipc.msgmax: 16384\n> kern.ipc.msgmni: 40\n> kern.ipc.msgmnb: 2048\n> kern.ipc.msgtql: 40\n> kern.ipc.msgssz: 8\n> kern.ipc.msgseg: 2048\n> kern.ipc.semmap: 30\n> kern.ipc.semmni: 256\n> kern.ipc.semmns: 272\n> kern.ipc.semmnu: 30\n> kern.ipc.semmsl: 60\n> kern.ipc.semopm: 100\n> kern.ipc.semume: 10\n> kern.ipc.semusz: 92\n> kern.ipc.semvmx: 32767\n> kern.ipc.semaem: 16384\n> kern.ipc.shmmax: 66099200\n> kern.ipc.shmmin: 256\n> kern.ipc.shmmni: 192\n> kern.ipc.shmseg: 128\n> kern.ipc.shmall: 16138\n> kern.ipc.shm_use_phys: 0\n> kern.ipc.shm_allow_removed: 0\n> kern.ipc.mbuf_wait: 32\n> kern.ipc.mbtypes: 7 550 3 0 0 0 0 0 0 0 0 0 0 0 0 0\n> kern.ipc.nmbufs: 262144\n> kern.ipc.nsfbufs: 8704\n> kern.ipc.nsfbufspeak: 6\n> kern.ipc.nsfbufsused: 0\n> 
kern.ipc.m_clreflimithits: 0\n> kern.ipc.mcl_pool_max: 0\n> kern.ipc.mcl_pool_now: 0\n> kern.ipc.maxsockets: 65536\n>\n>\n> Postgres still refuses to start with 256 max_connections.\n>\n> Chris.\n>\n>\n>\n>\n\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.308 / Virus Database: 266.11.2 - Release Date: 5/2/2005\n\n",
"msg_date": "Wed, 04 May 2005 12:40:10 +1000",
"msg_from": "Chris Hebrard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Kernel Resources Solved"
}
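Rough arithmetic behind the failure and the fix, for anyone hitting the same wall: each semget in the original error asks for a set of 17 semaphores (16 usable plus a marker), and the server needs roughly one semaphore per allowed backend. With max_connections = 256 plus a handful for auxiliary processes (or anything else on the box already holding SysV semaphores), the postmaster ends up wanting a 17th set, so approximately:

  SEMMNS needed: 17 sets x 17 = 289   (the earlier 272 = 16 x 17 came up just short)
  SEMMNI needed: 17 sets              (the configured 256 was already ample)

which is consistent with semmni 280 / semmns 300 being enough.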
] |
[
{
"msg_contents": "Hello,\n\nI have a table collecting stats that shows 5 Index Tuples Fetched but no Index Scans. Should there not be at least one Index Scan showing in the stats?\n\nMike\n",
"msg_date": "Wed, 4 May 2005 09:58:25 -0500",
"msg_from": "\"Mike G.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table stats"
},
{
"msg_contents": " > Should there not be at least one Index Scan showing in the stats?\n\nnot if there was a table scan\n\n",
"msg_date": "Fri, 6 May 2005 00:59:32 +0100",
"msg_from": "David Roussel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table stats"
}
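For reference, the counters being discussed live in the pg_stat_user_tables view, so the sequential and index figures can be pulled side by side for the table in question (the table name below is just a placeholder):

SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
FROM pg_stat_user_tables
WHERE relname = 'my_table';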
] |
[
{
"msg_contents": "Hi\n\nI'm currently experiencing problems with long query execution times.\nWhat I believe makes these problems particularly interesting is the \ndifference in execution plans between our test server running PostGreSQL \n7.3.6 and our production server running PostGreSQL 7.3.9.\nThe test server is an upgraded \"home machine\", a Pentium 4 with 1GB of \nmemory and IDE disk.\nThe production server is a dual CPU XEON Pentium 4 with 2GB memory and \nSCSI disks.\nOne should expect the production server to be faster, but appearently \nnot as the outlined query plans below shows.\n\nMy questions can be summoned up to:\n1) How come the query plans between the 2 servers are different?\n2) How come the production server in general estimates the cost of the \nquery plans so horribly wrong? (ie. it chooses a bad query plan where as \nthe test server chooses a good plan)\n3) In Query 2, how come the production server refuses the use its \nindexes (subcat_uq and aff_price_uq, both unique indexes) where as the \ntest server determines that the indexes are the way to go\n4) In Query 3, how come the test server refuses to use its index \n(sct2lang_uq) and the production server uses it? And why is the test \nserver still faster eventhough it makes a sequential scan of a table \nwith 8.5k records in?\n\nPlease note, a VACUUM ANALYSE is run on the production server once a day \n(used to be once an hour but it seemed to make no difference), however \nthere are generally no writes to the tables used in the queries.\n\nIf anyone could shed some light on these issues I would truly appreciate \nit.\n\n\nCheers\nJona\n\nPS. Please refer to part 2 for the other queries and query plans\n\n---------------------------------------------------------------------------------------------------- \n\nQuery 1:\nEXPLAIN ANALYZE\nSELECT DISTINCT StatConTrans_Tbl.id, Code_Tbl.sysnm AS code, \nPriceCat_Tbl.amount AS price, Country_Tbl.currency,\n CreditsCat_Tbl.amount AS credits, Info_Tbl.title, Info_Tbl.description\nFROM (SCT2SubCatType_Tbl\nINNER JOIN SCT2Lang_Tbl ON SCT2SubCatType_Tbl.sctid = SCT2Lang_Tbl.sctid\nINNER JOIN Language_Tbl ON SCT2Lang_Tbl.langid = Language_Tbl.id AND \nLanguage_Tbl.sysnm = UPPER('us') AND Language_Tbl.enabled = true\nINNER JOIN Info_Tbl ON SCT2SubCatType_Tbl.sctid = Info_Tbl.sctid AND \nLanguage_Tbl.id = Info_Tbl.langid\nINNER JOIN SubCatType_Tbl ON SCT2SubCatType_Tbl.subcattpid = \nSubCatType_Tbl.id AND SubCatType_Tbl.enabled = true\nINNER JOIN CatType_Tbl ON SubCatType_Tbl.cattpid = CatType_Tbl.id AND \nCatType_Tbl.enabled = true\nINNER JOIN SuperCatType_Tbl ON CatType_Tbl.spcattpid = \nSuperCatType_Tbl.id AND SuperCatType_Tbl.enabled = true\nINNER JOIN StatConTrans_Tbl ON SCT2SubCatType_Tbl.sctid = \nStatConTrans_Tbl.id AND StatConTrans_Tbl.enabled = true\nINNER JOIN Price_Tbl ON StatConTrans_Tbl.id = Price_Tbl.sctid AND \nPrice_Tbl.affid = 8\nINNER JOIN PriceCat_Tbl ON Price_Tbl.prccatid = PriceCat_Tbl.id AND \nPriceCat_Tbl.enabled = true\nINNER JOIN Country_Tbl ON PriceCat_Tbl.cntid = Country_Tbl.id AND \nCountry_Tbl.enabled = true\nINNER JOIN CreditsCat_Tbl ON Price_Tbl.crdcatid = CreditsCat_Tbl.id AND \nCreditsCat_Tbl.enabled = true\nINNER JOIN StatCon_Tbl ON StatConTrans_Tbl.id = StatCon_Tbl.sctid AND \nStatCon_Tbl.ctpid = 1\nINNER JOIN Code_Tbl ON SuperCatType_Tbl.id = Code_Tbl.spcattpid AND \nCode_Tbl.affid = 8 AND Code_Tbl.cdtpid = 1)\nWHERE SCT2SubCatType_Tbl.subcattpid = 79\nORDER BY StatConTrans_Tbl.id DESC\nLIMIT 8 OFFSET 0\n\nPlan on PostGre 7.3.6 on Red Hat Linux 
3.2.3-39\n\"Limit (cost=178.59..178.61 rows=1 width=330) (actual time=22.77..28.51 \nrows=4 loops=1)\"\n\" -> Unique (cost=178.59..178.61 rows=1 width=330) (actual \ntime=22.77..28.50 rows=4 loops=1)\"\n\" -> Sort (cost=178.59..178.60 rows=1 width=330) (actual \ntime=22.76..22.85 rows=156 loops=1)\"\n\" Sort Key: statcontrans_tbl.id, code_tbl.sysnm, \npricecat_tbl.amount, country_tbl.currency, creditscat_tbl.amount, \ninfo_tbl.title, info_tbl.description\"\n\" -> Hash Join (cost=171.19..178.58 rows=1 width=330) \n(actual time=3.39..6.55 rows=156 loops=1)\"\n\" Hash Cond: (\"outer\".cntid = \"inner\".id)\"\n\" -> Nested Loop (cost=170.13..177.51 rows=1 \nwidth=312) (actual time=3.27..5.75 rows=156 loops=1)\"\n\" Join Filter: (\"inner\".sctid = \"outer\".sctid)\"\n\" -> Hash Join (cost=170.13..171.48 rows=1 \nwidth=308) (actual time=3.12..3.26 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".crdcatid = \n\"inner\".id)\"\n\" -> Hash Join (cost=169.03..170.38 \nrows=1 width=300) (actual time=3.00..3.11 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".spcattpid = \n\"inner\".spcattpid)\"\n\" -> Hash Join \n(cost=167.22..168.56 rows=1 width=253) (actual time=2.88..2.97 rows=4 \nloops=1)\"\n\" Hash Cond: (\"outer\".id = \n\"inner\".prccatid)\"\n\" -> Seq Scan on \npricecat_tbl (cost=0.00..1.29 rows=12 width=12) (actual time=0.04..0.08 \nrows=23 loops=1)\"\n\" Filter: (enabled = \ntrue)\"\n\" -> Hash \n(cost=167.21..167.21 rows=1 width=241) (actual time=2.80..2.80 rows=0 \nloops=1)\"\n\" -> Nested Loop \n(cost=3.77..167.21 rows=1 width=241) (actual time=1.31..2.79 rows=4 \nloops=1)\"\n\" Join Filter: \n(\"inner\".sctid = \"outer\".sctid)\"\n\" -> Nested \nLoop (cost=3.77..161.19 rows=1 width=229) (actual time=1.19..2.60 \nrows=4 loops=1)\"\n\" Join \nFilter: (\"outer\".sctid = \"inner\".sctid)\"\n\" -> Hash \nJoin (cost=3.77..155.17 rows=1 width=44) (actual time=1.07..2.37 rows=4 \nloops=1)\"\n\" \nHash Cond: (\"outer\".langid = \"inner\".id)\"\n\" -> \nNested Loop (cost=2.69..154.06 rows=7 width=40) (actual time=0.90..2.18 \nrows=8 loops=1)\"\n\" \nJoin Filter: (\"outer\".sctid = \"inner\".sctid)\"\n\" \n-> Nested Loop (cost=2.69..21.30 rows=1 width=32) (actual \ntime=0.78..1.94 rows=4 loops=1)\"\n\" \n-> Nested Loop (cost=2.69..15.30 rows=1 width=28) (actual \ntime=0.66..1.76 rows=4 loops=1)\"\n\" \n-> Hash Join (cost=2.69..7.07 rows=1 width=20) (actual time=0.39..1.15 \nrows=154 loops=1)\"\n\" \nHash Cond: (\"outer\".cattpid = \"inner\".id)\"\n\" \n-> Seq Scan on subcattype_tbl (cost=0.00..3.98 rows=79 width=8) \n(actual time=0.03..0.35 rows=156 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Hash (cost=2.68..2.68 rows=3 width=12) (actual time=0.31..0.31 \nrows=0 loops=1)\"\n\" \n-> Hash Join (cost=1.15..2.68 rows=3 width=12) (actual time=0.16..0.27 \nrows=31 loops=1)\"\n\" \nHash Cond: (\"outer\".spcattpid = \"inner\".id)\"\n\" \n-> Seq Scan on cattype_tbl (cost=0.00..1.41 rows=16 width=8) (actual \ntime=0.04..0.09 rows=31 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Hash (cost=1.14..1.14 rows=6 width=4) (actual time=0.06..0.06 \nrows=0 loops=1)\"\n\" \n-> Seq Scan on supercattype_tbl (cost=0.00..1.14 rows=6 width=4) \n(actual time=0.03..0.05 rows=10 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Index Scan using subcat_uq on sct2subcattype_tbl (cost=0.00..5.97 \nrows=1 width=8) (actual time=0.00..0.00 rows=0 loops=154)\"\n\" \nIndex Cond: ((sct2subcattype_tbl.subcattpid = \"outer\".id) AND \n(sct2subcattype_tbl.subcattpid = 79))\"\n\" \n-> Index Scan using statcontrans_pk on 
statcontrans_tbl \n(cost=0.00..5.99 rows=1 width=4) (actual time=0.04..0.04 rows=1 loops=4)\"\n\" \nIndex Cond: (\"outer\".sctid = statcontrans_tbl.id)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Index Scan using sct2lang_uq on sct2lang_tbl (cost=0.00..132.22 \nrows=43 width=8) (actual time=0.04..0.05 rows=2 loops=4)\"\n\" \nIndex Cond: (\"outer\".id = sct2lang_tbl.sctid)\"\n\" -> \nHash (cost=1.07..1.07 rows=1 width=4) (actual time=0.11..0.11 rows=0 \nloops=1)\"\n\" \n-> Seq Scan on language_tbl (cost=0.00..1.07 rows=1 width=4) (actual \ntime=0.10..0.11 rows=1 loops=1)\"\n\" \nFilter: (((sysnm)::text = 'US'::text) AND (enabled = true))\"\n\" -> Index \nScan using info_uq on info_tbl (cost=0.00..6.00 rows=1 width=185) \n(actual time=0.05..0.05 rows=1 loops=4)\"\n\" \nIndex Cond: ((info_tbl.sctid = \"outer\".sctid) AND (info_tbl.langid = \n\"outer\".langid))\"\n\" -> Index Scan \nusing aff_price_uq on price_tbl (cost=0.00..6.01 rows=1 width=12) \n(actual time=0.03..0.03 rows=1 loops=4)\"\n\" Index \nCond: ((price_tbl.affid = 8) AND (price_tbl.sctid = \"outer\".sctid))\"\n\" -> Hash (cost=1.81..1.81 rows=1 \nwidth=47) (actual time=0.08..0.08 rows=0 loops=1)\"\n\" -> Seq Scan on code_tbl \n(cost=0.00..1.81 rows=1 width=47) (actual time=0.04..0.07 rows=5 loops=1)\"\n\" Filter: ((affid = 8) \nAND (cdtpid = 1))\"\n\" -> Hash (cost=1.09..1.09 rows=4 \nwidth=8) (actual time=0.06..0.06 rows=0 loops=1)\"\n\" -> Seq Scan on creditscat_tbl \n(cost=0.00..1.09 rows=4 width=8) (actual time=0.03..0.04 rows=7 loops=1)\"\n\" Filter: (enabled = true)\"\n\" -> Index Scan using ctp_statcon on \nstatcon_tbl (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 \nrows=39 loops=4)\"\n\" Index Cond: ((statcon_tbl.sctid = \n\"outer\".sctid) AND (statcon_tbl.ctpid = 1))\"\n\" -> Hash (cost=1.06..1.06 rows=2 width=18) (actual \ntime=0.06..0.06 rows=0 loops=1)\"\n\" -> Seq Scan on country_tbl (cost=0.00..1.06 \nrows=2 width=18) (actual time=0.04..0.05 rows=4 loops=1)\"\n\" Filter: (enabled = true)\"\n\"Total runtime: 29.56 msec\"\n\nPlan on PostGre 7.3.9 on Red Hat Linux 3.2.3-49\n\"Limit (cost=545.53..545.60 rows=1 width=135) (actual \ntime=1251.71..1261.25 rows=4 loops=1)\"\n\" -> Unique (cost=545.53..545.60 rows=1 width=135) (actual \ntime=1251.71..1261.24 rows=4 loops=1)\"\n\" -> Sort (cost=545.53..545.54 rows=4 width=135) (actual \ntime=1251.70..1251.90 rows=156 loops=1)\"\n\" Sort Key: statcontrans_tbl.id, code_tbl.sysnm, \npricecat_tbl.amount, country_tbl.currency, creditscat_tbl.amount, \ninfo_tbl.title, info_tbl.description\"\n\" -> Nested Loop (cost=485.61..545.49 rows=4 width=135) \n(actual time=603.77..1230.96 rows=156 loops=1)\"\n\" Join Filter: (\"inner\".sctid = \"outer\".sctid)\"\n\" -> Hash Join (cost=485.61..486.06 rows=3 \nwidth=131) (actual time=541.87..542.22 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".crdcatid = \"inner\".id)\"\n\" -> Hash Join (cost=484.51..484.90 rows=3 \nwidth=123) (actual time=529.09..529.36 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".spcattpid = \n\"inner\".spcattpid)\"\n\" -> Hash Join (cost=482.68..482.93 \nrows=3 width=114) (actual time=517.60..517.77 rows=4 loops=1)\"\n\" Hash Cond: (\"outer\".cntid = \n\"inner\".id)\"\n\" -> Merge Join \n(cost=481.60..481.80 rows=4 width=105) (actual time=517.36..517.43 \nrows=4 loops=1)\"\n\" Merge Cond: (\"outer\".id = \n\"inner\".prccatid)\"\n\" -> Sort (cost=1.81..1.87 \nrows=23 width=12) (actual time=8.44..8.45 rows=6 loops=1)\"\n\" Sort Key: \npricecat_tbl.id\"\n\" -> Seq Scan on \npricecat_tbl (cost=0.00..1.29 rows=23 
width=12) (actual time=8.31..8.37 \nrows=23 loops=1)\"\n\" Filter: \n(enabled = true)\"\n\" -> Sort \n(cost=479.80..479.81 rows=4 width=93) (actual time=508.87..508.87 rows=4 \nloops=1)\"\n\" Sort Key: \nprice_tbl.prccatid\"\n\" -> Nested Loop \n(cost=13.69..479.75 rows=4 width=93) (actual time=444.70..508.81 rows=4 \nloops=1)\"\n\" Join Filter: \n(\"inner\".sctid = \"outer\".sctid)\"\n\" -> Nested \nLoop (cost=13.69..427.04 rows=9 width=81) (actual time=444.60..508.62 \nrows=4 loops=1)\"\n\" Join \nFilter: (\"outer\".sctid = \"inner\".sctid)\"\n\" -> \nNested Loop (cost=13.69..377.03 rows=8 width=44) (actual \ntime=345.13..398.38 rows=4 loops=1)\"\n\" \nJoin Filter: (\"outer\".sctid = \"inner\".id)\"\n\" -> \nHash Join (cost=13.69..327.32 rows=8 width=40) (actual \ntime=219.17..272.27 rows=4 loops=1)\"\n\" \nHash Cond: (\"outer\".langid = \"inner\".id)\"\n\" \n-> Nested Loop (cost=12.61..325.92 rows=42 width=36) (actual \ntime=209.77..262.79 rows=8 loops=1)\"\n\" \n-> Hash Join (cost=12.61..106.32 rows=27 width=28) (actual \ntime=101.88..102.00 rows=4 loops=1)\"\n\" \nHash Cond: (\"outer\".cattpid = \"inner\".id)\"\n\" \n-> Hash Join (cost=9.47..102.68 rows=33 width=16) (actual \ntime=84.14..84.21 rows=4 loops=1)\"\n\" \nHash Cond: (\"outer\".subcattpid = \"inner\".id)\"\n\" \n-> Index Scan using subcat_uq on sct2subcattype_tbl (cost=0.00..92.56 \nrows=33 width=8) (actual time=83.33..83.37 rows=4 loops=1)\"\n\" \nIndex Cond: (subcattpid = 79)\"\n\" \n-> Hash (cost=3.98..3.98 rows=156 width=8) (actual time=0.76..0.76 \nrows=0 loops=1)\"\n\" \n-> Seq Scan on subcattype_tbl (cost=0.00..3.98 rows=156 width=8) \n(actual time=0.03..0.49 rows=156 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Hash (cost=3.07..3.07 rows=27 width=12) (actual time=17.58..17.58 \nrows=0 loops=1)\"\n\" \n-> Hash Join (cost=1.16..3.07 rows=27 width=12) (actual \ntime=17.30..17.52 rows=31 loops=1)\"\n\" \nHash Cond: (\"outer\".spcattpid = \"inner\".id)\"\n\" \n-> Seq Scan on cattype_tbl (cost=0.00..1.41 rows=31 width=8) (actual \ntime=0.02..0.12 rows=31 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Hash (cost=1.14..1.14 rows=10 width=4) (actual time=17.09..17.09 \nrows=0 loops=1)\"\n\" \n-> Seq Scan on supercattype_tbl (cost=0.00..1.14 rows=10 width=4) \n(actual time=17.05..17.07 rows=10 loops=1)\"\n\" \nFilter: (enabled = true)\"\n\" \n-> Index Scan using sct2lang_uq on sct2lang_tbl (cost=0.00..8.13 \nrows=2 width=8) (actual time=26.97..40.18 rows=2 loops=4)\"\n\" \nIndex Cond: (\"outer\".sctid = sct2lang_tbl.sctid)\"\n\" \n-> Hash (cost=1.07..1.07 rows=1 width=4) (actual time=9.04..9.04 \nrows=0 loops=1)\"\n\" \n-> Seq Scan on language_tbl (cost=0.00..1.07 rows=1 width=4) (actual \ntime=9.02..9.03 rows=1 loops=1)\"\n\" \nFilter: (((sysnm)::text = 'US'::text) AND (enabled = true))\"\n\" -> \nIndex Scan using statcontrans_pk on statcontrans_tbl (cost=0.00..5.88 \nrows=1 width=4) (actual time=31.51..31.52 rows=1 loops=4)\"\n\" \nIndex Cond: (statcontrans_tbl.id = \"outer\".sctid)\"\n\" \nFilter: (enabled = true)\"\n\" -> Index \nScan using info_uq on info_tbl (cost=0.00..5.93 rows=1 width=37) \n(actual time=27.54..27.54 rows=1 loops=4)\"\n\" \nIndex Cond: ((info_tbl.sctid = \"outer\".sctid) AND (info_tbl.langid = \n\"outer\".langid))\"\n\" -> Index Scan \nusing aff_price_uq on price_tbl (cost=0.00..5.88 rows=1 width=12) \n(actual time=0.03..0.03 rows=1 loops=4)\"\n\" Index \nCond: ((price_tbl.affid = 8) AND (price_tbl.sctid = \"outer\".sctid))\"\n\" -> Hash (cost=1.06..1.06 rows=4 \nwidth=9) (actual 
time=0.05..0.05 rows=0 loops=1)\"\n\" -> Seq Scan on \ncountry_tbl (cost=0.00..1.06 rows=4 width=9) (actual time=0.02..0.03 \nrows=4 loops=1)\"\n\" Filter: (enabled = \ntrue)\"\n\" -> Hash (cost=1.81..1.81 rows=8 \nwidth=9) (actual time=11.31..11.31 rows=0 loops=1)\"\n\" -> Seq Scan on code_tbl \n(cost=0.00..1.81 rows=8 width=9) (actual time=11.24..11.29 rows=5 loops=1)\"\n\" Filter: ((affid = 8) AND \n(cdtpid = 1))\"\n\" -> Hash (cost=1.09..1.09 rows=7 width=8) \n(actual time=12.59..12.59 rows=0 loops=1)\"\n\" -> Seq Scan on creditscat_tbl \n(cost=0.00..1.09 rows=7 width=8) (actual time=12.55..12.57 rows=7 loops=1)\"\n\" Filter: (enabled = true)\"\n\" -> Index Scan using ctp_statcon on statcon_tbl \n(cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 \nloops=4)\"\n\" Index Cond: ((statcon_tbl.sctid = \n\"outer\".sctid) AND (statcon_tbl.ctpid = 1))\"\n\"Total runtime: 1299.02 msec\"\n\n\n\n",
"msg_date": "Thu, 05 May 2005 11:52:01 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad choice of query plan from PG 7.3.6 to PG 7.3.9 part 1"
},
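A side note on narrowing down plan differences like the ones above: probing a single join arm on both machines, with and without sequential scans allowed, shows quickly whose cost estimates are off. The statements below are only a sketch reusing the table and filter from Query 1; they are not something posted in the thread.

ANALYZE sct2subcattype_tbl;
EXPLAIN ANALYZE
SELECT * FROM sct2subcattype_tbl WHERE subcattpid = 79;

SET enable_seqscan = off;   -- make the planner price the index path
EXPLAIN ANALYZE
SELECT * FROM sct2subcattype_tbl WHERE subcattpid = 79;
RESET enable_seqscan;

Comparing the estimated row counts against the actual ones in both outputs shows whether the statistics, rather than the hardware, explain the different plans.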
{
"msg_contents": "Jona <[email protected]> writes:\n> I'm currently experiencing problems with long query execution times.\n> What I believe makes these problems particularly interesting is the \n> difference in execution plans between our test server running PostGreSQL \n> 7.3.6 and our production server running PostGreSQL 7.3.9.\n> The test server is an upgraded \"home machine\", a Pentium 4 with 1GB of \n> memory and IDE disk.\n> The production server is a dual CPU XEON Pentium 4 with 2GB memory and \n> SCSI disks.\n> One should expect the production server to be faster, but appearently \n> not as the outlined query plans below shows.\n\nI think the plans are fine; it looks to me like the production server\nhas serious table-bloat or index-bloat problems, probably because of\ninadequate vacuuming. For instance compare these entries:\n\n-> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)\n Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid = 1))\n\n-> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)\n Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid = 1))\n\nAppears to be exactly the same task ... but the test server spent\n1.24 msec total while the production server spent 687.36 msec total.\nThat's more than half of your problem right there. Some of the other\nscans seem a lot slower on the production machine too.\n\n> 1) How come the query plans between the 2 servers are different?\n\nThe production server's rowcount estimates are pretty good, the test\nserver's are not. How long since you vacuumed/analyzed the test server?\n\nIt'd be interesting to see the output of \"vacuum verbose statcon_tbl\"\non both servers ...\n\n\t\t\tregards, tom lane\n\nPS: if you post any more query plans, please try to use software that\ndoesn't mangle the formatting so horribly ...\n",
"msg_date": "Thu, 05 May 2005 12:02:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9 part 1 "
},
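To put a number on the bloat suspected here, the system catalogs can be queried directly: relpages is the relation's size in disk pages (8 kB each by default) and reltuples is the row count estimate from the last VACUUM or ANALYZE. A minimal sketch using the relation names from this thread, to be run on both servers and compared:

SELECT relname, relkind, relpages, reltuples
FROM pg_class
WHERE relname IN ('statcon_tbl', 'statcon_pk', 'ctp_statcon')
ORDER BY relname;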
{
"msg_contents": "Thank you for the swift reply.\nThe test server is hardly ever vacuumed as it in general sees very \nlimited traffic. vacuum is only necessary if the server sees a lot of \nwrite operations, i.e. update, delete, insert right?\n\nWhat explains the different choice of query plans then?\nAs can be seen from the following snippets the test server decides to \nuse an index twice in Query 2, where as the live server decides to do a \nfull scan of tables with 38.5k and 5.5k records.\nIn Query 3 it's vice versa.\nSeems strange to me...\n\nQuery 2:\n------------------- Bad idea, price_tbl hold 38.5k records\nTest:\n-> Index Scan using aff_price_uq on price_tbl (cost=0.00..6.01 rows=1 \nwidth=4) (actual time=0.01..0.01 rows=1 loops=2838)\"\nLive:\n-> Seq Scan on price_tbl (cost=0.00..883.48 rows=2434 width=4) (actual \ntime=0.86..67.25 rows=4570 loops=1)\"\nFilter: (affid = 8)\"\n\n------------------- Bad idea, sct2subcattype_tbl hold 5.5k records\nTest:\n-> Index Scan using subcat_uq on sct2subcattype_tbl (cost=0.00..79.26 \nrows=26 width=8) (actual time=0.01..0.17 rows=59 loops=48)\nLive:\n -> Seq Scan on sct2subcattype_tbl (cost=0.00..99.26 rows=5526 \nwidth=8) (actual time=0.01..30.16 rows=5526 loops=1)\"\n\nQuery 3:\n----------------- Bad idea, sct2lang_tbl has 8.6k records\nTest:\n -> Seq Scan on sct2lang_tbl (cost=0.00..150.79 rows=8679 width=8) \n(actual time=0.03..10.70 rows=8679 loops=1)\"\nLive:\n-> Index Scan using sct2lang_uq on sct2lang_tbl (cost=0.00..8.13 \nrows=2 width=8) (actual time=1.10..2.39 rows=2 loops=69)\"\n\nWill get a VACUUM VERBOSE of StatCon_Tbl\n\nCheers\nJona\n\nPS: The query plans are extracted using pgAdmin on Windows, if you can \nrecommend a better cross-platform postgre client I'd be happy to try it out.\n\nTom Lane wrote:\n\n>Jona <[email protected]> writes:\n> \n>\n>>I'm currently experiencing problems with long query execution times.\n>>What I believe makes these problems particularly interesting is the \n>>difference in execution plans between our test server running PostGreSQL \n>>7.3.6 and our production server running PostGreSQL 7.3.9.\n>>The test server is an upgraded \"home machine\", a Pentium 4 with 1GB of \n>>memory and IDE disk.\n>>The production server is a dual CPU XEON Pentium 4 with 2GB memory and \n>>SCSI disks.\n>>One should expect the production server to be faster, but appearently \n>>not as the outlined query plans below shows.\n>> \n>>\n>\n>I think the plans are fine; it looks to me like the production server\n>has serious table-bloat or index-bloat problems, probably because of\n>inadequate vacuuming. For instance compare these entries:\n>\n>-> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)\n> Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid = 1))\n>\n>-> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)\n> Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid = 1))\n>\n>Appears to be exactly the same task ... but the test server spent\n>1.24 msec total while the production server spent 687.36 msec total.\n>That's more than half of your problem right there. Some of the other\n>scans seem a lot slower on the production machine too.\n>\n> \n>\n>>1) How come the query plans between the 2 servers are different?\n>> \n>>\n>\n>The production server's rowcount estimates are pretty good, the test\n>server's are not. 
How long since you vacuumed/analyzed the test server?\n>\n>It'd be interesting to see the output of \"vacuum verbose statcon_tbl\"\n>on both servers ...\n>\n>\t\t\tregards, tom lane\n>\n>PS: if you post any more query plans, please try to use software that\n>doesn't mangle the formatting so horribly ...\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n",
"msg_date": "Fri, 06 May 2005 09:28:06 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9"
},
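One clarification on the assumption above: VACUUM reclaims space left behind by writes, but ANALYZE is what refreshes the planner statistics, and in 7.3 nothing updates them automatically (there is no autovacuum yet). Even a read-mostly database needs something along these lines from time to time; this is a sketch, not a command taken from the thread:

VACUUM ANALYZE;               -- whole database
VACUUM ANALYZE statcon_tbl;   -- or just the tables that have changed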
{
"msg_contents": "Results of VACUUM VERBOSE from both servers\n\nTest server:\ncomm=# VACUUM VERBOSE StatCon_Tbl;\nINFO: --Relation public.statcon_tbl--\nINFO: Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.\n Total CPU 0.02s/0.00u sec elapsed 0.04 sec.\nINFO: --Relation pg_toast.pg_toast_179851--\nINFO: Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 0, \nUnUsed 0.\n Total CPU 4.03s/0.40u sec elapsed 70.99 sec.\nVACUUM\n\nLive Server:\ncomm=# VACUUM VERBOSE StatCon_Tbl;\nINFO: --Relation public.statcon_tbl--\nINFO: Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.\n Total CPU 0.01s/0.00u sec elapsed 0.60 sec.\nINFO: --Relation pg_toast.pg_toast_891830--\nINFO: Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, \nUnUsed 5487.\n Total CPU 4.44s/0.34u sec elapsed 35.48 sec.\nVACUUM\n\nCheers\nJona\n\nTom Lane wrote:\n\n>Jona <[email protected]> writes:\n> \n>\n>>I'm currently experiencing problems with long query execution times.\n>>What I believe makes these problems particularly interesting is the \n>>difference in execution plans between our test server running PostGreSQL \n>>7.3.6 and our production server running PostGreSQL 7.3.9.\n>>The test server is an upgraded \"home machine\", a Pentium 4 with 1GB of \n>>memory and IDE disk.\n>>The production server is a dual CPU XEON Pentium 4 with 2GB memory and \n>>SCSI disks.\n>>One should expect the production server to be faster, but appearently \n>>not as the outlined query plans below shows.\n>> \n>>\n>\n>I think the plans are fine; it looks to me like the production server\n>has serious table-bloat or index-bloat problems, probably because of\n>inadequate vacuuming. For instance compare these entries:\n>\n>-> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)\n> Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid = 1))\n>\n>-> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)\n> Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid = 1))\n>\n>Appears to be exactly the same task ... but the test server spent\n>1.24 msec total while the production server spent 687.36 msec total.\n>That's more than half of your problem right there. Some of the other\n>scans seem a lot slower on the production machine too.\n>\n> \n>\n>>1) How come the query plans between the 2 servers are different?\n>> \n>>\n>\n>The production server's rowcount estimates are pretty good, the test\n>server's are not. 
How long since you vacuumed/analyzed the test server?\n>\n>It'd be interesting to see the output of \"vacuum verbose statcon_tbl\"\n>on both servers ...\n>\n>\t\t\tregards, tom lane\n>\n>PS: if you post any more query plans, please try to use software that\n>doesn't mangle the formatting so horribly ...\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n",
"msg_date": "Fri, 06 May 2005 09:34:34 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9"
},
{
"msg_contents": "You didn't do analyze.\n\nChris\n\nJona wrote:\n> Results of VACUUM VERBOSE from both servers\n> \n> Test server:\n> comm=# VACUUM VERBOSE StatCon_Tbl;\n> INFO: --Relation public.statcon_tbl--\n> INFO: Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.\n> Total CPU 0.02s/0.00u sec elapsed 0.04 sec.\n> INFO: --Relation pg_toast.pg_toast_179851--\n> INFO: Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep 0, \n> UnUsed 0.\n> Total CPU 4.03s/0.40u sec elapsed 70.99 sec.\n> VACUUM\n> \n> Live Server:\n> comm=# VACUUM VERBOSE StatCon_Tbl;\n> INFO: --Relation public.statcon_tbl--\n> INFO: Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.\n> Total CPU 0.01s/0.00u sec elapsed 0.60 sec.\n> INFO: --Relation pg_toast.pg_toast_891830--\n> INFO: Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, \n> UnUsed 5487.\n> Total CPU 4.44s/0.34u sec elapsed 35.48 sec.\n> VACUUM\n> \n> Cheers\n> Jona\n> \n> Tom Lane wrote:\n> \n>>Jona <[email protected]> <mailto:[email protected]> writes:\n>> \n>>\n>>>I'm currently experiencing problems with long query execution times.\n>>>What I believe makes these problems particularly interesting is the \n>>>difference in execution plans between our test server running PostGreSQL \n>>>7.3.6 and our production server running PostGreSQL 7.3.9.\n>>>The test server is an upgraded \"home machine\", a Pentium 4 with 1GB of \n>>>memory and IDE disk.\n>>>The production server is a dual CPU XEON Pentium 4 with 2GB memory and \n>>>SCSI disks.\n>>>One should expect the production server to be faster, but appearently \n>>>not as the outlined query plans below shows.\n>>> \n>>>\n>>I think the plans are fine; it looks to me like the production server\n>>has serious table-bloat or index-bloat problems, probably because of\n>>inadequate vacuuming. For instance compare these entries:\n>>\n>>-> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..6.01 rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)\n>> Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid = 1))\n>>\n>>-> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..20.40 rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)\n>> Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid = 1))\n>>\n>>Appears to be exactly the same task ... but the test server spent\n>>1.24 msec total while the production server spent 687.36 msec total.\n>>That's more than half of your problem right there. Some of the other\n>>scans seem a lot slower on the production machine too.\n>>\n>> \n>>\n>>>1) How come the query plans between the 2 servers are different?\n>>> \n>>>\n>>The production server's rowcount estimates are pretty good, the test\n>>server's are not. How long since you vacuumed/analyzed the test server?\n>>\n>>It'd be interesting to see the output of \"vacuum verbose statcon_tbl\"\n>>on both servers ...\n>>\n>>\t\t\tregards, tom lane\n>>\n>>PS: if you post any more query plans, please try to use software that\n>>doesn't mangle the formatting so horribly ...\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 4: Don't 'kill -9' the postmaster\n>> \n>>\n",
"msg_date": "Fri, 06 May 2005 16:41:02 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9"
},
{
"msg_contents": "Now with analyze\n\nTest Server:\ncomm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;\nINFO: --Relation public.statcon_tbl--\nINFO: Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.\n Total CPU 0.02s/0.00u sec elapsed 1.98 sec.\nINFO: --Relation pg_toast.pg_toast_179851--\nINFO: Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.\n Total CPU 1.75s/0.23u sec elapsed 30.36 sec.\nINFO: Analyzing public.statcon_tbl\nVACUUM\n\nLive Server:\ncomm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;\nINFO: --Relation public.statcon_tbl--\nINFO: Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.\n Total CPU 0.00s/0.01u sec elapsed 0.01 sec.\nINFO: --Relation pg_toast.pg_toast_891830--\nINFO: Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, \nUnUsed 5487.\n Total CPU 3.21s/0.47u sec elapsed 18.03 sec.\nINFO: Analyzing public.statcon_tbl\nVACUUM\n\nHave done some sampling running the same query a few times through the \npast few hours and it appears that the VACUUM has helped.\nThe following are the results after the Vacuum:\n\nAfter VACUUM VERBOSE:\nIndex Scan using ctp_statcon on statcon_tbl (cost=0.00..21.29 rows=5 \nwidth=4) (actual time=0.07..0.37 rows=39 loops=4)\nIndex Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid \n= 1))\n\nAfter VACUUM ANALYZE VERBOSE:\nIndex Scan using ctp_statcon on statcon_tbl (cost=0.00..20.03 rows=5 \nwidth=4) (actual time=0.09..0.37 rows=39 loops=4)\nIndex Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND (statcon_tbl.ctpid \n= 1))\n\nOnly question remains why one server uses its indexes and the other \ndon't eventhough VACUUM ANALYZE has now been run on both servers?\nAnd even more interesting, before the VACUUM ANALYZEit was the server \nwhere no vacuum had taken place that used its index.\n\nCheers\nJona\n\nChristopher Kings-Lynne wrote:\n\n> You didn't do analyze.\n>\n> Chris\n>\n> Jona wrote:\n>\n>> Results of VACUUM VERBOSE from both servers\n>>\n>> Test server:\n>> comm=# VACUUM VERBOSE StatCon_Tbl;\n>> INFO: --Relation public.statcon_tbl--\n>> INFO: Pages 338: Changed 338, Empty 0; Tup 11494: Vac 0, Keep 0, \n>> UnUsed 0.\n>> Total CPU 0.02s/0.00u sec elapsed 0.04 sec.\n>> INFO: --Relation pg_toast.pg_toast_179851--\n>> INFO: Pages 85680: Changed 85680, Empty 0; Tup 343321: Vac 0, Keep \n>> 0, UnUsed 0.\n>> Total CPU 4.03s/0.40u sec elapsed 70.99 sec.\n>> VACUUM\n>>\n>> Live Server:\n>> comm=# VACUUM VERBOSE StatCon_Tbl;\n>> INFO: --Relation public.statcon_tbl--\n>> INFO: Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, \n>> UnUsed 6101.\n>> Total CPU 0.01s/0.00u sec elapsed 0.60 sec.\n>> INFO: --Relation pg_toast.pg_toast_891830--\n>> INFO: Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, \n>> UnUsed 5487.\n>> Total CPU 4.44s/0.34u sec elapsed 35.48 sec.\n>> VACUUM\n>>\n>> Cheers\n>> Jona\n>>\n>> Tom Lane wrote:\n>>\n>>> Jona <[email protected]> <mailto:[email protected]> writes:\n>>> \n>>>\n>>>> I'm currently experiencing problems with long query execution times.\n>>>> What I believe makes these problems particularly interesting is the \n>>>> difference in execution plans between our test server running \n>>>> PostGreSQL 7.3.6 and our production server running PostGreSQL 7.3.9.\n>>>> The test server is an upgraded \"home machine\", a Pentium 4 with 1GB \n>>>> of memory and IDE disk.\n>>>> The production server is a dual CPU XEON Pentium 4 with 2GB memory \n>>>> and SCSI disks.\n>>>> One should expect the production server to be faster, but \n>>>> appearently not 
as the outlined query plans below shows.\n>>>> \n>>>\n>>> I think the plans are fine; it looks to me like the production server\n>>> has serious table-bloat or index-bloat problems, probably because of\n>>> inadequate vacuuming. For instance compare these entries:\n>>>\n>>> -> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..6.01 \n>>> rows=1 width=4) (actual time=0.05..0.31 rows=39 loops=4)\n>>> Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND \n>>> (statcon_tbl.ctpid = 1))\n>>>\n>>> -> Index Scan using ctp_statcon on statcon_tbl (cost=0.00..20.40 \n>>> rows=5 width=4) (actual time=27.97..171.84 rows=39 loops=4)\n>>> Index Cond: ((statcon_tbl.sctid = \"outer\".sctid) AND \n>>> (statcon_tbl.ctpid = 1))\n>>>\n>>> Appears to be exactly the same task ... but the test server spent\n>>> 1.24 msec total while the production server spent 687.36 msec total.\n>>> That's more than half of your problem right there. Some of the other\n>>> scans seem a lot slower on the production machine too.\n>>>\n>>> \n>>>\n>>>> 1) How come the query plans between the 2 servers are different?\n>>>> \n>>>\n>>> The production server's rowcount estimates are pretty good, the test\n>>> server's are not. How long since you vacuumed/analyzed the test \n>>> server?\n>>>\n>>> It'd be interesting to see the output of \"vacuum verbose statcon_tbl\"\n>>> on both servers ...\n>>>\n>>> regards, tom lane\n>>>\n>>> PS: if you post any more query plans, please try to use software that\n>>> doesn't mangle the formatting so horribly ...\n>>>\n>>> ---------------------------(end of \n>>> broadcast)---------------------------\n>>> TIP 4: Don't 'kill -9' the postmaster\n>>> \n>>>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n",
"msg_date": "Fri, 06 May 2005 13:14:51 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9"
},
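If the estimates are still off after ANALYZE, one knob available in 7.3 is the per-column statistics target, which makes ANALYZE sample more values for the columns used in the filters. A hedged sketch: the column names are taken from the plans above, and 100 is just an illustrative increase over the default of 10.

ALTER TABLE price_tbl ALTER COLUMN affid SET STATISTICS 100;
ALTER TABLE sct2subcattype_tbl ALTER COLUMN subcattpid SET STATISTICS 100;
ANALYZE price_tbl;
ANALYZE sct2subcattype_tbl;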
{
"msg_contents": "Jona <[email protected]> writes:\n> Test Server:\n> comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;\n> INFO: --Relation public.statcon_tbl--\n> INFO: Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.\n> Total CPU 0.02s/0.00u sec elapsed 1.98 sec.\n> INFO: --Relation pg_toast.pg_toast_179851--\n> INFO: Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.\n> Total CPU 1.75s/0.23u sec elapsed 30.36 sec.\n> INFO: Analyzing public.statcon_tbl\n> VACUUM\n\n> Live Server:\n> comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;\n> INFO: --Relation public.statcon_tbl--\n> INFO: Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.\n> Total CPU 0.00s/0.01u sec elapsed 0.01 sec.\n> INFO: --Relation pg_toast.pg_toast_891830--\n> INFO: Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, \n> UnUsed 5487.\n> Total CPU 3.21s/0.47u sec elapsed 18.03 sec.\n> INFO: Analyzing public.statcon_tbl\n> VACUUM\n\nHm, the physical table sizes aren't very different, which suggests that\nthe problem must lie with the indexes. Unfortunately, VACUUM in 7.3\ndoesn't tell you anything about indexes if it doesn't have any dead rows\nto clean up. Could you look at pg_class.relpages for all the indexes\nof this table, and see what that shows?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 May 2005 09:59:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9 "
},
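For reference, a catalog query along these lines lists every index on a table together with its page count, which is what is being asked for here. It is a generic sketch rather than the exact query used later in the thread:

SELECT i.relname AS index_name, i.relpages, i.reltuples
FROM pg_class t
JOIN pg_index x ON x.indrelid = t.oid
JOIN pg_class i ON i.oid = x.indexrelid
WHERE t.relname = 'statcon_tbl';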
{
"msg_contents": "Wouldn't the VACUUM have made them equivalent??\n\nanyway, here's the info for relpages:\nLive Server: 424\nTest Server: 338\n\nPlease note though that there're more rows on the live server than on \nthe test server due to recent upload.\nTotal Row counts are as follows:\nLive Server: 12597\nTest Server: 11494\n\nWhen the problems started the tables had identical size though.\n\nCheers\nJona\n\nTom Lane wrote:\n\n>Jona <[email protected]> writes:\n> \n>\n>>Test Server:\n>>comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;\n>>INFO: --Relation public.statcon_tbl--\n>>INFO: Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.\n>> Total CPU 0.02s/0.00u sec elapsed 1.98 sec.\n>>INFO: --Relation pg_toast.pg_toast_179851--\n>>INFO: Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.\n>> Total CPU 1.75s/0.23u sec elapsed 30.36 sec.\n>>INFO: Analyzing public.statcon_tbl\n>>VACUUM\n>> \n>>\n>\n> \n>\n>>Live Server:\n>>comm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;\n>>INFO: --Relation public.statcon_tbl--\n>>INFO: Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.\n>> Total CPU 0.00s/0.01u sec elapsed 0.01 sec.\n>>INFO: --Relation pg_toast.pg_toast_891830--\n>>INFO: Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, \n>>UnUsed 5487.\n>> Total CPU 3.21s/0.47u sec elapsed 18.03 sec.\n>>INFO: Analyzing public.statcon_tbl\n>>VACUUM\n>> \n>>\n>\n>Hm, the physical table sizes aren't very different, which suggests that\n>the problem must lie with the indexes. Unfortunately, VACUUM in 7.3\n>doesn't tell you anything about indexes if it doesn't have any dead rows\n>to clean up. Could you look at pg_class.relpages for all the indexes\n>of this table, and see what that shows?\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n> \n>\n\n\n\n\n\n\n\nWouldn't the VACUUM have made them equivalent??\n\nanyway, here's the info for relpages:\nLive Server: 424\nTest Server: 338\n\nPlease note though that there're more rows on the live server than on\nthe test server due to recent upload.\nTotal Row counts are as follows:\nLive Server: 12597\nTest Server: 11494\n\nWhen the problems started the tables had identical size though.\n\nCheers\nJona\n\nTom Lane wrote:\n\nJona <[email protected]> writes:\n \n\nTest Server:\ncomm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;\nINFO: --Relation public.statcon_tbl--\nINFO: Pages 338: Changed 0, Empty 0; Tup 11494: Vac 0, Keep 0, UnUsed 0.\n Total CPU 0.02s/0.00u sec elapsed 1.98 sec.\nINFO: --Relation pg_toast.pg_toast_179851--\nINFO: Pages 85680: Changed 0, Empty 0; Tup 343321: Vac 0, Keep 0, UnUsed 0.\n Total CPU 1.75s/0.23u sec elapsed 30.36 sec.\nINFO: Analyzing public.statcon_tbl\nVACUUM\n \n\n\n \n\nLive Server:\ncomm=# VACUUM ANALYZE VERBOSE StatCon_Tbl;\nINFO: --Relation public.statcon_tbl--\nINFO: Pages 424: Changed 0, Empty 0; Tup 12291: Vac 0, Keep 0, UnUsed 6101.\n Total CPU 0.00s/0.01u sec elapsed 0.01 sec.\nINFO: --Relation pg_toast.pg_toast_891830--\nINFO: Pages 89234: Changed 0, Empty 0; Tup 352823: Vac 0, Keep 0, \nUnUsed 5487.\n Total CPU 3.21s/0.47u sec elapsed 18.03 sec.\nINFO: Analyzing public.statcon_tbl\nVACUUM\n \n\n\nHm, the physical table sizes aren't very different, which suggests that\nthe problem must lie with the indexes. Unfortunately, VACUUM in 7.3\ndoesn't tell you anything about indexes if it doesn't have any dead rows\nto clean up. 
Could you look at pg_class.relpages for all the indexes\nof this table, and see what that shows?\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org",
"msg_date": "Sat, 07 May 2005 14:00:19 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9"
},
{
"msg_contents": "Jona <[email protected]> writes:\n> anyway, here's the info for relpages:\n> Live Server: 424\n> Test Server: 338\n\nI was asking about the indexes associated with the table, not the table\nitself.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 07 May 2005 11:40:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9 "
},
{
"msg_contents": "Sorry Tom, misread your mail! My bad :-(\n\nI believe the following is the data you need ?\nLive Server\nrelname \trelpages\nctp_statcon \t72\nstatcon_pk \t135\n\n\t\nTest Server\nrelname \trelpages\nctp_statcon \t34\nstatcon_pk \t28\n\n\nHave executed the following query to obtain that data:\nSELECT relname, relpages\nFROM pg_class\nWHERE relname = 'statcon_pk' OR relname = 'sc2ctp_fk' OR relname = \n'sc2mtp_fk' OR relname = 'sc2sc_fk' OR relname = 'ctp_statcon'\n\nThe size difference for the index is surprisingly big I think, \nconsidering that there's only around 1000 rows more in the table on the \nlive server than on the server.\nCount for Live Server: 12597\nCount for Test Server: 11494\nAny insight into this?\n\nCheers\nJona\n\nPS: The meta data for the table is:\nCREATE TABLE statcon_tbl\n(\n id serial NOT NULL,\n data bytea,\n wm bool DEFAULT 'FALSE',\n created timestamp DEFAULT now(),\n modified timestamp DEFAULT now(),\n enabled bool DEFAULT 'TRUE',\n bitsperpixel int4 DEFAULT 0,\n mtpid int4,\n sctid int4,\n ctpid int4,\n CONSTRAINT statcon_pk PRIMARY KEY (id),\n CONSTRAINT sc2ctp_fk FOREIGN KEY (ctpid) REFERENCES contype_tbl (id) \nON UPDATE CASCADE ON DELETE CASCADE,\n CONSTRAINT sc2mtp_fk FOREIGN KEY (mtpid) REFERENCES mimetype_tbl (id) \nON UPDATE CASCADE ON DELETE CASCADE,\n CONSTRAINT sc2sct_fk FOREIGN KEY (sctid) REFERENCES statcontrans_tbl \n(id) ON UPDATE CASCADE ON DELETE CASCADE\n)\nWITHOUT OIDS;\nCREATE INDEX ctp_statcon ON statcon_tbl USING btree (sctid, ctpid);\n\n\nTom Lane wrote:\n\n>Jona <[email protected]> writes:\n> \n>\n>>anyway, here's the info for relpages:\n>>Live Server: 424\n>>Test Server: 338\n>> \n>>\n>\n>I was asking about the indexes associated with the table, not the table\n>itself.\n>\n>\t\t\tregards, tom lane\n> \n>\n\n\n\n\n\n\n\nSorry Tom, misread your mail! 
My bad :-(\n\nI believe the following is the data you need ?\n\n \n\nLive Server\n\n\nrelname\nrelpages\n\n\nctp_statcon\n72\n\n\nstatcon_pk\n135\n\n\n\n\n\n\n\n\nTest Server\n\n\nrelname\nrelpages\n\n\nctp_statcon\n34\n\n\nstatcon_pk\n28\n\n\n\n\nHave executed the following query to obtain that data:\nSELECT relname, relpages\nFROM pg_class\nWHERE relname = 'statcon_pk' OR relname = 'sc2ctp_fk' OR relname =\n'sc2mtp_fk' OR relname = 'sc2sc_fk' OR relname = 'ctp_statcon'\n\nThe size difference for the index is surprisingly big I think,\nconsidering that there's only around 1000 rows more in the table on the\nlive server than on the server.\nCount for Live Server: 12597\nCount for Test Server: 11494\nAny insight into this?\n\nCheers\nJona\n\nPS: The meta data for the table is:\nCREATE TABLE statcon_tbl\n(\n id serial NOT NULL,\n data bytea,\n wm bool DEFAULT 'FALSE',\n created timestamp DEFAULT now(),\n modified timestamp DEFAULT now(),\n enabled bool DEFAULT 'TRUE',\n bitsperpixel int4 DEFAULT 0,\n mtpid int4,\n sctid int4,\n ctpid int4,\n CONSTRAINT statcon_pk PRIMARY KEY (id),\n CONSTRAINT sc2ctp_fk FOREIGN KEY (ctpid) REFERENCES contype_tbl (id)\nON UPDATE CASCADE ON DELETE CASCADE,\n CONSTRAINT sc2mtp_fk FOREIGN KEY (mtpid) REFERENCES mimetype_tbl (id)\nON UPDATE CASCADE ON DELETE CASCADE,\n CONSTRAINT sc2sct_fk FOREIGN KEY (sctid) REFERENCES statcontrans_tbl\n(id) ON UPDATE CASCADE ON DELETE CASCADE\n) \nWITHOUT OIDS;\nCREATE INDEX ctp_statcon ON statcon_tbl USING btree (sctid, ctpid);\n\n\nTom Lane wrote:\n\nJona <[email protected]> writes:\n \n\nanyway, here's the info for relpages:\nLive Server: 424\nTest Server: 338\n \n\n\nI was asking about the indexes associated with the table, not the table\nitself.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 08 May 2005 13:10:09 +0200",
"msg_from": "Jona <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad choice of query plan from PG 7.3.6 to PG 7.3.9"
}
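The thread stops here, but the relpages figures above (135 vs 28 pages for statcon_pk, 72 vs 34 for ctp_statcon) point at plain index bloat, and on 7.3 the usual remedy is to rebuild the affected indexes. A sketch of what that would look like; it is not a step actually posted in the thread, and REINDEX holds an exclusive lock while it runs:

REINDEX INDEX statcon_pk;
REINDEX INDEX ctp_statcon;
-- or rebuild everything on the table in one go:
REINDEX TABLE statcon_tbl;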
] |
[
{
"msg_contents": "I think we should put some notes about SELinux causing issues with \npgsql in the OS notes or FAQ.\n\nMyself and a few coworkers just spent a few hours tracking down why \npg_dump would produce no output. We'd fire it up in strace and we'd \nsee all the successful write calls, but not output.\n\nWe copied pg_dump from another machine and it worked fine, and that \nmachine was running the same OS & pg rpms.\n\nEventually we found it was SELinux was preventing pg_dump from \nproducing output.\n\nAny thoughts? I could write up a short blurb but I'm not terribly \nfamiliar with selinux. we just disabled the whole thing to make it work.\n\nFor the record:\nCentOS 4.0\npostgresql-8.0.2-1PGDG.i686.rpm (and associated) rpms from \npostgresql.org's ftp server\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n\n\n",
"msg_date": "Fri, 6 May 2005 10:43:49 -0400",
"msg_from": "Jeff - <[email protected]>",
"msg_from_op": true,
"msg_subject": "SELinux & Redhat"
},
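For anyone hitting the same thing, there is a middle ground between living with the failure and disabling SELinux outright: switching to permissive mode confirms whether SELinux is the culprit without turning it off for good. A sketch using the stock RHEL4/CentOS4 tools; the database name mydb is made up, nothing here is quoted from the thread:

getenforce                    # prints Enforcing, Permissive or Disabled
setenforce 0                  # permissive: violations are logged, not blocked (needs root)
pg_dump mydb > /tmp/mydb.sql  # retry the failing dump as the usual user
setenforce 1                  # back to enforcing

Any "avc: denied" messages land in syslog, as the later posts in this thread show.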
{
"msg_contents": "Jeff - wrote:\n> I think we should put some notes about SELinux causing issues with \n> pgsql in the OS notes or FAQ.\n> \n> Myself and a few coworkers just spent a few hours tracking down why \n> pg_dump would produce no output. We'd fire it up in strace and we'd \n> see all the successful write calls, but not output.\n> \n> We copied pg_dump from another machine and it worked fine, and that \n> machine was running the same OS & pg rpms.\n> \n> Eventually we found it was SELinux was preventing pg_dump from \n> producing output.\n> \n> Any thoughts? I could write up a short blurb but I'm not terribly \n> familiar with selinux. we just disabled the whole thing to make it work.\n> \n> For the record:\n> CentOS 4.0\n> postgresql-8.0.2-1PGDG.i686.rpm (and associated) rpms from \n> postgresql.org's ftp server\n\nA blurb about what? No one else has reported such a problem so we have\nno reason to assume it isn't a misconfiguration on your end.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 6 May 2005 10:55:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELinux & Redhat"
},
{
"msg_contents": "Am Freitag, 6. Mai 2005 16:55 schrieb Bruce Momjian:\n> A blurb about what? No one else has reported such a problem so we have\n> no reason to assume it isn't a misconfiguration on your end.\n\n*Countless* people are constantly reporting problems that can be attributed to \nselinux. We really need to write something about it. Of course, most \npeople, including myself, just solve these issues by turning off selinux, but \nI'd be interested in a more thorough treatment.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Fri, 6 May 2005 17:13:39 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELinux & Redhat"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Am Freitag, 6. Mai 2005 16:55 schrieb Bruce Momjian:\n> > A blurb about what? No one else has reported such a problem so we have\n> > no reason to assume it isn't a misconfiguration on your end.\n> \n> *Countless* people are constantly reporting problems that can be attributed to \n> selinux. We really need to write something about it. Of course, most \n> people, including myself, just solve these issues by turning off selinux, but \n> I'd be interested in a more thorough treatment.\n\nWho makes SE Linux? Is it SuSE? What would we say in an FAQ? I would\nrather report something to people using that OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 6 May 2005 11:21:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELinux & Redhat"
},
{
"msg_contents": "Jeff - <[email protected]> writes:\n> Eventually we found it was SELinux was preventing pg_dump from \n> producing output.\n\nThat's a new one on me. Why was it doing that --- mislabeling on\nthe pg_dump executable, or what?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 May 2005 11:23:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELinux & Redhat "
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> Am Freitag, 6. Mai 2005 16:55 schrieb Bruce Momjian:\n>> A blurb about what? No one else has reported such a problem so we have\n>> no reason to assume it isn't a misconfiguration on your end.\n\n> *Countless* people are constantly reporting problems that can be\n> attributed to selinux.\n\nThat's mostly because selinux outright broke postgres in the initial\nFC3 releases :-(. I have to take most of the blame for this myself;\nI didn't realize there might be problems, and didn't test adequately.\nI believe the problems are all resolved in the latest Fedora RPMs,\nthough this pg_dump report may be something new.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 May 2005 11:28:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELinux & Redhat "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nHi,\n\nOn Fri, 6 May 2005, Tom Lane wrote:\n\n> Jeff - <[email protected]> writes:\n>> Eventually we found it was SELinux was preventing pg_dump from\n>> producing output.\n>\n> That's a new one on me. Why was it doing that --- mislabeling on\n> the pg_dump executable, or what?\n\nLooking at the strace report that someone has sent me before, there is a \nproblem with devices:\n\n===================================================================\n<snip>\nfstat64(1, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 3), ...}) = 0\nioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, 0xbfe16a8c) = -1 ENOTTY\n(Inappropriate ioctl for device)\nmmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,\n0) = 0xb7dee000\nwrite(1, \"pg_dump dumps a database as a te\"..., 2367) = 2367\nmunmap(0xb7dee000, 4096) = 0\nexit_group(0) = ?\n===================================================================\n\nThis one is from a server with SELinux enabled. My server does not produce \nthis, and uses virtual console (as expected?). However with SELinux \nenabled, it wants to use ramdisk (expected? I think no...)\n\nRegards,\n- --\nDevrim GUNDUZ \ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.tdmsoft.com.tr http://www.gunduz.org\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.1 (GNU/Linux)\n\niD8DBQFCe45Btl86P3SPfQ4RAhpbAJ0UhBh8dlOEpPsNm2NB1QIJ82X2swCg7JOg\nA1OCBrZRHxoOPQo0U9hNdNY=\n=ENTC\n-----END PGP SIGNATURE-----\n",
"msg_date": "Fri, 6 May 2005 18:33:20 +0300 (EEST)",
"msg_from": "Devrim GUNDUZ <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELinux & Redhat "
},
{
"msg_contents": "On Fri, May 06, 2005 at 11:21:26AM -0400, Bruce Momjian wrote:\n> Peter Eisentraut wrote:\n> > Am Freitag, 6. Mai 2005 16:55 schrieb Bruce Momjian:\n> > > A blurb about what? No one else has reported such a problem so we have\n> > > no reason to assume it isn't a misconfiguration on your end.\n> > \n> > *Countless* people are constantly reporting problems that can be attributed to \n> > selinux. We really need to write something about it. Of course, most \n> > people, including myself, just solve these issues by turning off selinux, but \n> > I'd be interested in a more thorough treatment.\n> \n> Who makes SE Linux? Is it SuSE? What would we say in an FAQ? I would\n> rather report something to people using that OS.\n\nIt's linux-distribution agnostic. Redhat is including it on its\ndistributions, as is Debian. Not sure about the others but that is\nalready a large population. (Of course it's Linux only.)\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Aprende a avergonzarte m�s ante ti que ante los dem�s\" (Dem�crito)\n",
"msg_date": "Fri, 6 May 2005 11:42:38 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELinux & Redhat"
},
{
"msg_contents": "\nOn May 6, 2005, at 11:23 AM, Tom Lane wrote:\n\n> Jeff - <[email protected]> writes:\n>\n>> Eventually we found it was SELinux was preventing pg_dump from\n>> producing output.\n>>\n>\n> That's a new one on me. Why was it doing that --- mislabeling on\n> the pg_dump executable, or what?\n>\n\nWe've got a stock CentOS 4 install\nI nabbed the rpms I mentioned (8.0.2) (-rw-r--r-- 1 root root \n2955126 May 4 11:51 postgresql-8.0.2-1PGDG.i686.rpm & company)\n\nfrom /etc/selinux/targeted/contexts/files/file_contexts I see\n\nfile_contexts:/usr/bin/pg_dump -- \nsystem_u:object_r:postgresql_exec_t\nfile_contexts:/usr/bin/pg_dumpall -- \nsystem_u:object_r:postgresql_exec_t\n\nSyslog logs:\n\nMay 6 09:01:25 starslice kernel: audit(1115384485.559:0): avc: \ndenied { execute_no_trans } for pid=4485 exe=/bin/bash path=/usr/ \nbin/pg_dump dev=sda3 ino=5272966 \nscontext=user_u:system_r:postgresql_t \ntcontext=system_u:object_r:postgresql_exec_t tclass=file\n\n\nSELinux is on and under system-config-securitylevel's selinux tab, \n\"SELinux Protection services\" disable postgresql is not clicked.\n\nWhen I run pg_dump w/these settings the following happens running \npg_dump (.broken is hte original file from the rpm)\n\nbash-3.00$ /usr/bin/pg_dump.broken planet\nbash-3.00$\n\nStracing it I get\n....\nwrite(1, \"file_pkey; Type: CONSTRAINT; Sch\"..., 4096) = 4096\nwrite(1, \"\\n-- Name: userprofile_pkey; Type\"..., 4096) = 4096\nwrite(1, \"_idx_1 OWNER TO planet;\\n\\n--\\n-- N\"..., 4096) = 4096\nrt_sigaction(SIGPIPE, {SIG_IGN}, {SIG_DFL}, 8) = 0\nsend(3, \"X\\0\\0\\0\\4\", 5, 0) = 5\nrt_sigaction(SIGPIPE, {SIG_DFL}, {SIG_IGN}, 8) = 0\nclose(3) = 0\nwrite(1, \"me: top3_cmtcount_idx; Type: IND\"..., 3992) = 3992\nmunmap(0xb7df0000, 4096) = 0\nexit_group(0) = ?\n\n\nand what is interesting is it seems only sometimes things get logged \nto syslog about the failure.\n\nIf I copy the file (not mv) it will work (possibly due to xattrs \nbeing set?)\n\nand if I disable pg checking, (or selinux all together) it works.\n\n\nCOOL, HUH?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n\n\n",
"msg_date": "Fri, 6 May 2005 11:46:26 -0400",
"msg_from": "Jeff - <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELinux & Redhat "
},
{
"msg_contents": "Jeff - <[email protected]> writes:\n> When I run pg_dump w/these settings the following happens running \n> pg_dump (.broken is hte original file from the rpm)\n\n> bash-3.00$ /usr/bin/pg_dump.broken planet\n> bash-3.00$\n\nDoes it work if you direct the output into a file, instead of letting it\ncome to your terminal (which seems a bit useless anyway)?\n\nI've been bugging dwalsh about the fact that the selinux policy\ndisallows writes to /dev/tty to things it thinks are daemons;\nthat seems pretty stupid. But pg_dump isn't a daemon so there's\nno reason for it to be restricted this way anyway...\n\n> and what is interesting is it seems only sometimes things get logged \n> to syslog about the failure.\n\nSomeone told me there's a rate limit on selinux complaints going to\nsyslog, to keep it from swamping your logs. I suspect there are some\nactual bugs there too, because I've noticed cases where an action was\nblocked and there wasn't any log message, nor enough activity to\njustify a rate limit. Feel free to file a bugzilla report if you can\nget a reproducible case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 06 May 2005 11:57:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SELinux & Redhat "
},
{
"msg_contents": "\nOn May 6, 2005, at 11:57 AM, Tom Lane wrote:\n>> bash-3.00$ /usr/bin/pg_dump.broken planet\n>> bash-3.00$\n>>\n>\n> Does it work if you direct the output into a file, instead of \n> letting it\n> come to your terminal (which seems a bit useless anyway)?\n>\n\nInteresting.\n\nRedirecting it worked, but I'm pretty sure at one point it didn't \nwork. (I could also be smoking crack).\n\nHmm.. piping it into another app worked.\n\nI only found out about this when another developer here tried to run \nit and got nothing.\n\nin any case, it might be something useful to jot somewhere.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n\n\n",
"msg_date": "Fri, 6 May 2005 13:37:49 -0400",
"msg_from": "Jeff - <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SELinux & Redhat "
},
{
"msg_contents": "After reading the comparisons between Opteron and Xeon processors for Linux,\nI'd like to add an Opteron box to our stable of Dells and Sparcs, for comparison.\n\nIBM, Sun and HP have their fairly pricey Opteron systems.\nThe IT people are not swell about unsupported purchases off ebay.\nAnyone care to suggest any other vendors/distributors?\nLooking for names with national support, so that we can recommend as much to our\ncustomers.\n\nMany thanks in advance.\n-- \n\"Dreams come true, not free.\" -- S.Sondheim\n\n",
"msg_date": "Fri, 6 May 2005 14:39:11 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Whence the Opterons?"
},
{
"msg_contents": "Mischa,\n\nWhat kind of budget are you on? penguincomputing.com deals with\nOpteron servers. I looked at a couple of their servers before deciding\non a HP DL145.\n\nIan\n\nOn 5/6/05, Mischa Sandberg <[email protected]> wrote:\n> After reading the comparisons between Opteron and Xeon processors for Linux,\n> I'd like to add an Opteron box to our stable of Dells and Sparcs, for comparison.\n> \n> IBM, Sun and HP have their fairly pricey Opteron systems.\n> The IT people are not swell about unsupported purchases off ebay.\n> Anyone care to suggest any other vendors/distributors?\n> Looking for names with national support, so that we can recommend as much to our\n> customers.\n> \n> Many thanks in advance.\n> --\n> \"Dreams come true, not free.\" -- S.Sondheim\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n>\n",
"msg_date": "Fri, 6 May 2005 18:09:19 -0400",
"msg_from": "Ian Meyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": ">IBM, Sun and HP have their fairly pricey Opteron systems.\n>The IT people are not swell about unsupported purchases off ebay.\n\n\nMischa,\n\nI certainly understand your concern, but the price and support \nsometimes go hand-in-hand. You may have to pick your batttles if your \nwant more bang for the buck or more support. I might be wrong on this, \nbut not everything you buy on E-Bay is unsupported.\n\nWe purchase a dual Operton from Sun off their E-Bay store for about $3K \nless than the \"buy it now\" price.\n\n From an IT perspective, support is not as critical if I can do it \nmyself. If it is for business 24/7 operations, then the company should \nbe able to put some money behind what they want to put their business \non. Your mileage may vary.\n\nSteve\n\n\n",
"msg_date": "Fri, 06 May 2005 15:29:28 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "Please wait a week before buying Sun v20z's or v40z's from off of Ebay \n(j/k). (As I'm in the process of picking up a few) From everything I \nhear the v20z/v40z's are a great way to go and I'll know more in 15 days \nor so.\n\nRegards,\n\nGavin\n\n\nSteve Poe wrote:\n\n>> IBM, Sun and HP have their fairly pricey Opteron systems.\n>> The IT people are not swell about unsupported purchases off ebay.\n>\n>\n>\n> Mischa,\n>\n> I certainly understand your concern, but the price and support \n> sometimes go hand-in-hand. You may have to pick your batttles if your \n> want more bang for the buck or more support. I might be wrong on this, \n> but not everything you buy on E-Bay is unsupported.\n>\n> We purchase a dual Operton from Sun off their E-Bay store for about \n> $3K less than the \"buy it now\" price.\n>\n> From an IT perspective, support is not as critical if I can do it \n> myself. If it is for business 24/7 operations, then the company should \n> be able to put some money behind what they want to put their business \n> on. Your mileage may vary.\n>\n> Steve\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n",
"msg_date": "Fri, 06 May 2005 15:44:49 -0700",
"msg_from": "\"Gavin M. Roy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "On Fri, May 06, 2005 at 02:39:11PM -0700, Mischa Sandberg wrote:\n> IBM, Sun and HP have their fairly pricey Opteron systems.\n\nWe've had some quite good experiences with the HP boxes. They're not\ncheap, it's true, but boy are they sweet.\n\nA\n\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n",
"msg_date": "Sat, 7 May 2005 10:57:46 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "A-\n\n> On Fri, May 06, 2005 at 02:39:11PM -0700, Mischa Sandberg wrote:\n> > IBM, Sun and HP have their fairly pricey Opteron systems.\n>\n> We've had some quite good experiences with the HP boxes. They're not\n> cheap, it's true, but boy are they sweet.\n\nQuestion, though: is HP still using their proprietary RAID card? And, if so, \nhave they fixed its performance problems?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 7 May 2005 14:00:34 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "Mischa Sandberg wrote:\n> After reading the comparisons between Opteron and Xeon processors for Linux,\n> I'd like to add an Opteron box to our stable of Dells and Sparcs, for comparison.\n> IBM, Sun and HP have their fairly pricey Opteron systems.\n> The IT people are not swell about unsupported purchases off ebay.\n> Anyone care to suggest any other vendors/distributors?\n\nCheck out the Tyan Transport systems. Tyan are an ex Sparc clone manufacturer, and\nreleased the second available Opteron board - widely considered the first serious\nOpteron board to hit the market.\n\nSam.\n",
"msg_date": "Mon, 09 May 2005 09:59:07 +1200",
"msg_from": "Sam Vilain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "Thanks to everyone for their pointers to suppliers of Opteron systems.\n\nThe system I'm pricing is under a tighter budget than a production machine,\nbecause it will be for perftests. Our customers tend to run on Dells but\noccasionally run on (Sun) Opterons. \n\n\n",
"msg_date": "Sun, 8 May 2005 15:33:11 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "Mischa Sandberg wrote:\n> After reading the comparisons between Opteron and Xeon processors for Linux,\n> I'd like to add an Opteron box to our stable of Dells and Sparcs, for comparison.\n> \n> IBM, Sun and HP have their fairly pricey Opteron systems.\n> The IT people are not swell about unsupported purchases off ebay.\n> Anyone care to suggest any other vendors/distributors?\n> Looking for names with national support, so that we can recommend as much to our\n> customers.\n\nMonarch Computer http://www.monarchcomputer.com/\n\nThey have prebuilt and custom built systems.\n\n-- \nUntil later, Geoffrey\n",
"msg_date": "Sun, 08 May 2005 22:17:41 -0400",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "Geoffrey wrote:\n> Mischa Sandberg wrote:\n>\n>> After reading the comparisons between Opteron and Xeon processors for\n>> Linux,\n>> I'd like to add an Opteron box to our stable of Dells and Sparcs, for\n>> comparison.\n>>\n>> IBM, Sun and HP have their fairly pricey Opteron systems.\n>> The IT people are not swell about unsupported purchases off ebay.\n>> Anyone care to suggest any other vendors/distributors?\n>> Looking for names with national support, so that we can recommend as\n>> much to our\n>> customers.\n>\n>\n> Monarch Computer http://www.monarchcomputer.com/\n>\n> They have prebuilt and custom built systems.\n>\n\nI believe we have had some issues with their workstation class systems.\nJust a high rate of part failure, and DOA systems.\n\nI believe they RMA'd without too great of difficulties, but it was\nsomething like 50%.\n\nJohn\n=:->",
"msg_date": "Mon, 09 May 2005 10:18:15 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "John A Meinel wrote:\n> Geoffrey wrote:\n> \n>> Mischa Sandberg wrote:\n>>\n>>> After reading the comparisons between Opteron and Xeon processors for\n>>> Linux,\n>>> I'd like to add an Opteron box to our stable of Dells and Sparcs, for\n>>> comparison.\n>>>\n>>> IBM, Sun and HP have their fairly pricey Opteron systems.\n>>> The IT people are not swell about unsupported purchases off ebay.\n>>> Anyone care to suggest any other vendors/distributors?\n>>> Looking for names with national support, so that we can recommend as\n>>> much to our\n>>> customers.\n>>\n>>\n>>\n>> Monarch Computer http://www.monarchcomputer.com/\n>>\n>> They have prebuilt and custom built systems.\n>>\n> \n> I believe we have had some issues with their workstation class systems.\n> Just a high rate of part failure, and DOA systems.\n> \n> I believe they RMA'd without too great of difficulties, but it was\n> something like 50%.\n\nOuch, that's not been my experiences. I'd like to hear if anyone else \nhas had such problems with Monarch.\n\n-- \nUntil later, Geoffrey\n",
"msg_date": "Mon, 09 May 2005 16:00:59 -0400",
"msg_from": "Geoffrey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "On Sat, May 07, 2005 at 02:00:34PM -0700, Josh Berkus wrote:\n> \n> Question, though: is HP still using their proprietary RAID card? And, if so, \n> have they fixed its performance problems?\n\nAccording to my folks here, we're using the CCISS controllers, so I\nguess they are. The systems are nevertheless performing very well --\nwe did a load test that was pretty impressive. Also, Chris Browne\npointed me to this for the drivers:\n\nhttp://sourceforge.net/projects/cciss/\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nA certain description of men are for getting out of debt, yet are\nagainst all taxes for raising money to pay it off.\n\t\t--Alexander Hamilton\n",
"msg_date": "Fri, 13 May 2005 17:07:35 -0400",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "I will second the nod to Penguin computing. We have a bit of Penguin\nhardware here (though the majority is Dell). We did have issues with\none machine a couple of years ago, but Penguin was very pro-active in\naddressing that.\n\nWe recently picked up a Dual Opteron system from them and have been very\npleased with it so far. \n\nI would be careful of the RHES that it ships with though. We had\nmachine lockups immediately after the suggested kernel update (had to\ndown grade manually). Also, the RH supplied Postgres binary has issues,\nso you would need to compile Postgres yourself until the next RH update.\n\nOn Fri, 2005-05-06 at 14:39 -0700, Mischa Sandberg wrote:\n> After reading the comparisons between Opteron and Xeon processors for Linux,\n> I'd like to add an Opteron box to our stable of Dells and Sparcs, for comparison.\n> \n> IBM, Sun and HP have their fairly pricey Opteron systems.\n> The IT people are not swell about unsupported purchases off ebay.\n> Anyone care to suggest any other vendors/distributors?\n> Looking for names with national support, so that we can recommend as much to our\n> customers.\n> \n> Many thanks in advance.\n-- \n--\nRichard Rowell\[email protected]\nBowman Systems\n(318) 213-8780\n\n",
"msg_date": "Fri, 10 Jun 2005 07:34:21 -0500",
"msg_from": "Richard Rowell <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
}
] |
[
{
"msg_contents": "Good Day,\n\nI'm hoping this is the right place to send this. I have a query that's \ncausing me some grief trying to optimize it. The query cost is fine \nuntil I order the data set. Mind you after it's been ran and cached, \nsubsequent calls to it are near instant. The Query in question is:\n\nselect mr.*,m.* from maillog_received mr JOIN maillog m ON mr.maillog_id \n= m.id WHERE mr.subscription=89 and m.spam=1 ORDER BY m.msg_date desc \nlimit 10\n\nThe strucutre of the tables involved in the query are as follows:\n\n Table \"public.maillog_received\"\n Column | Type | \nModifiers\n--------------+------------------------+------------------------------------------------------------------\n id | integer | not null default \nnextval('public.maillog_received_id_seq'::text)\n maillog_id | bigint | not null\n subscription | integer | not null\n local_part | character varying(255) | not null\n domain | character varying(255) | not null\nIndexes:\n \"maillog_received_pkey\" PRIMARY KEY, btree (id)\n \"maillog_received_subscription_idx\" btree (subscription)\n \"maillog_received_subscription_maillog_id_idx\" btree (subscription, \nmaillog_id)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (subscription) REFERENCES subscriptions(id) ON \nDELETE CASCADE\n \"$2\" FOREIGN KEY (maillog_id) REFERENCES maillog(id) ON DELETE CASCADE\nTriggers:\n checkit BEFORE INSERT OR UPDATE ON maillog_received FOR EACH ROW \nEXECUTE PROCEDURE check_sub()\n\n Table \"public.maillog\"\n Column | Type | \nModifiers\n-----------------+-----------------------------+---------------------------------------------------------\n id | bigint | not null default \nnextval('public.maillog_id_seq'::text)\n message_id | character varying(16) |\n msg_date | timestamp without time zone | not null\n from_local_part | character varying(255) | not null\n from_domain | character varying(255) | not null\n from_ip | character varying(128) |\n from_host | character varying(255) |\n subject | character varying(255) |\n virus | integer | not null default 0\n spam | integer | not null default 0\n list | integer | not null default 0\n bad_recipient | integer | not null default 0\n bad_sender | integer | not null default 0\n bad_relay | integer | not null default 0\n bad_file | integer | not null default 0\n bad_mime | integer | not null default 0\n sascore | numeric(7,2) |\n sareport | text |\n vscanreport | text |\n contentreport | text |\n bypassed | integer | not null default 0\n delivered | integer | not null default 1\n complete | integer | not null default 1\nIndexes:\n \"maillog_pkey\" PRIMARY KEY, btree (id)\n \"maillog_msg_date_idx\" btree (msg_date)\n\nEXPLAIN ANALYZE gives me the following:\n\n Limit (cost=31402.85..31402.87 rows=10 width=306) (actual \ntime=87454.203..87454.334 rows=10 loops=1)\n -> Sort (cost=31402.85..31405.06 rows=886 width=306) (actual \ntime=87454.187..87454.240 rows=10 loops=1)\n Sort Key: m.msg_date\n -> Nested Loop (cost=0.00..31359.47 rows=886 width=306) \n(actual time=4.740..86430.468 rows=26308 loops=1)\n -> Index Scan using maillog_received_subscription_idx on \nmaillog_received mr (cost=0.00..17789.73 rows=4479 width=43) (actual \ntime=0.030..33554.061 rows=65508 loops=1)\n Index Cond: (subscription = 89)\n -> Index Scan using maillog_pkey on maillog m \n(cost=0.00..3.02 rows=1 width=263) (actual time=0.776..0.780 rows=0 \nloops=65508)\n Index Cond: (\"outer\".maillog_id = m.id)\n Filter: (spam = 1)\n Total runtime: 87478.068 ms\n\nNow there is a lot of data in these tables, at least a 
few million \nrecords, but I'd hoped to get a bit better performance :) Now another \nodd thing I will mention, if I take the database schema to a second \nserver (both running postgres 8.0.2 on FreeBSD 5.3), I get a much \ndifferent (and to me, it looks much more effecient) query plan (though \nit's with substantially less data (about 500,000 records in the table)):\n\n Limit (cost=0.00..6482.60 rows=10 width=311) (actual \ntime=25.340..26.885 rows=10 loops=1)\n -> Nested Loop (cost=0.00..1175943.99 rows=1814 width=311) (actual \ntime=25.337..26.867 rows=10 loops=1)\n -> Index Scan Backward using maillog_msg_date_idx on maillog \nm (cost=0.00..869203.93 rows=51395 width=270) (actual \ntime=25.156..26.050 rows=48 loops=1)\n Filter: (spam = 1)\n -> Index Scan using \nmaillog_received_subscription_maillog_id_idx on maillog_received mr \n(cost=0.00..5.96 rows=1 width=41) (actual time=0.011..0.012 rows=0 loops=48)\n Index Cond: ((mr.subscription = 89) AND (mr.maillog_id = \n\"outer\".id))\n Total runtime: 27.016 ms\n\nAny suggestions?\n\n-- \nRegards,\n\nDerek Buttineau\nInternet Systems Developer\nCompu-SOLVE Internet Services\nCompu-SOLVE Technologies Inc.\n\n705.725.1212 x255\n\n",
"msg_date": "Fri, 06 May 2005 15:48:32 -0400",
"msg_from": "Derek Buttineau|Compu-SOLVE <[email protected]>",
"msg_from_op": true,
"msg_subject": "ORDER BY Optimization"
},
{
"msg_contents": "while you weren't looking, Derek Buttineau|Compu-SOLVE wrote:\n\n> I'm hoping this is the right place to send this.\n\nThe PostgreSQL Performance list, [email protected]\nwould be more appropriate. I'm copying my followup there, as well.\n\nAs for your query, almost all the time is actually spent in the\nnestloop, not the sort. Compare:\n\n> -> Sort (cost=31402.85..31405.06 rows=886 width=306) (actual\n> time=87454.187..87454.240 rows=10 loops=1)\n\nvs.\n\n> -> Nested Loop (cost=0.00..31359.47 rows=886 width=306)\n> (actual time=4.740..86430.468 rows=26308 loops=1)\n\nThat's 50-ish ms versus 80-odd seconds.\n\nIt seems to me a merge join might be more appropriate here than a\nnestloop. What's your work_mem set at? Off-the-cuff numbers show the\ndataset weighing in the sub-ten mbyte range.\n\nProvided it's not already at least that big, and you don't want to up\nit permanently, try saying:\n\nSET work_mem = 10240; -- 10 mbytes\n\nimmediately before running this query (uncached, of course) and see\nwhat happens.\n\nAlso, your row-count estimates look pretty off-base. When were these\ntables last VACUUMed or ANALYZEd?\n\n/rls\n\n-- \n:wq\n",
"msg_date": "Fri, 6 May 2005 15:35:30 -0500",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] ORDER BY Optimization"
},
{
"msg_contents": "Thanks for the response :)\n\n>That's 50-ish ms versus 80-odd seconds.\n>\n>It seems to me a merge join might be more appropriate here than a\n>nestloop. What's your work_mem set at? Off-the-cuff numbers show the\n>dataset weighing in the sub-ten mbyte range.\n>\n>Provided it's not already at least that big, and you don't want to up\n>it permanently, try saying:\n>\n>SET work_mem = 10240; -- 10 mbytes\n> \n>\nIt's currently set at 16mb, I've also tried upping sort_mem as well \nwithout any noticible impact on the uncached query. :(\n\n>immediately before running this query (uncached, of course) and see\n>what happens.\n>\n>Also, your row-count estimates look pretty off-base. When were these\n>tables last VACUUMed or ANALYZEd?\n> \n>\nI'm not entirely sure what's up with the row-count estimates, the tables \nare updated quite frequently (and VACUUM is also run quite frequently), \nhowever I had just run a VACUUM ANALYZE on both databases before running \nthe explain.\n\nI'm also still baffled at the differences in the plans between the two \nservers, on the one that uses the index to sort, I get for comparison a \nnestloop of:\n\nNested Loop (cost=0.00..1175943.99 rows=1814 width=311) (actual \ntime=25.337..26.867 rows=10 loops=1)\n\nThe plan that the \"live\" server seems to be using seems fairly inefficient.\n\nDerek\n",
"msg_date": "Fri, 06 May 2005 16:54:57 -0400",
"msg_from": "Derek Buttineau|Compu-SOLVE <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] ORDER BY Optimization"
},
{
"msg_contents": "[ cc list limited to -performance ]\n\nDerek Buttineau|Compu-SOLVE <[email protected]> writes:\n>> It seems to me a merge join might be more appropriate here than a\n>> nestloop.\n\nAfter some experimentation, I don't seem to be able to get the planner\nto generate a mergejoin based on a backwards index scan. I suspect\nit's not considering the idea of a merge using descending order at all.\nMight be a good enhancement, although we'd need to figure out how to\nkeep this from just uselessly doubling the number of mergejoin paths\nconsidered :-(\n\nIn the meantime, the nestloop is the only hope for avoiding a\nfull-scan-and-sort.\n\n> I'm not entirely sure what's up with the row-count estimates, the tables \n> are updated quite frequently (and VACUUM is also run quite frequently),\n\nThey're probably not as bad as they look. The estimates for the lower\nnodes are made without regard to the LIMIT, but the actuals of course\nreflect the fact that the LIMIT stopped execution of the plan early.\n\nThe problem with this query is that the \"fast\" plan depends on the\nassumption that as we scan in backwards m.msg_date order, it won't take\nvery long to find 10 rows that join to mr rows with mr.subscription=89.\nIf that's true then the plan wins, if it's not true then the plan can\nlose big. That requires a pretty good density of rows with\nsubscription=89, and in particular a good density near the end of the\nmsg_date order. The planner effectively assumes that the proportion of\nrows with subscription=89 isn't changing over time, but perhaps it is\n--- if there's been a lot recently that could explain why the \"fast\"\nplan is fast. In any case I suppose that the reason the larger server\ndoesn't want to try that plan is that its statistics show a much lower\ndensity of rows with subscription=89, and so the plan doesn't look\npromising compared to something that wins if there are few rows with\nsubscription=89 ... which the other plan does.\n\nYou could probably get your larger server to try the no-sort plan if\nyou said \"set enable_sort = 0\" first. It would be interesting to\ncompare the EXPLAIN ANALYZE results for that case with the other\nserver.\n\nThe contents of the pg_stats row for mr.subscription in each server\nwould be informative, too. One rowcount estimate that does look\nwrong is\n\n -> Index Scan using maillog_received_subscription_idx on \nmaillog_received mr (cost=0.00..17789.73 rows=4479 width=43) (actual \ntime=0.030..33554.061 rows=65508 loops=1)\n Index Cond: (subscription = 89)\n\nso the stats row is suggesting there are only 4479 rows with\nsubscription = 89 when really there are 65508. (The preceding\ndiscussion hopefully makes it clear why this is a potentially critical\nmistake.) This suggests that you may need to raise your statistics\ntargets.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 07 May 2005 18:55:45 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] ORDER BY Optimization "
},
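A minimal sketch of the statistics-target change suggested above, using the column names from this thread; the target of 300 matches what Derek later reports trying, and the pg_stats query is just one way to see what the planner now believes about the column:

-- Sample mr.subscription more thoroughly, then refresh the statistics.
ALTER TABLE maillog_received ALTER COLUMN subscription SET STATISTICS 300;
ANALYZE maillog_received;

-- Inspect the planner's view of the column's distribution.
SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'maillog_received'
  AND attname = 'subscription';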
{
"msg_contents": "Thanks for the response :)\n\n>You could probably get your larger server to try the no-sort plan if\n>you said \"set enable_sort = 0\" first. It would be interesting to\n>compare the EXPLAIN ANALYZE results for that case with the other\n>server.\n>\n> \n>\n\nOdd, I went to investigate this switch on the larger server, but the \nquery planner is now using the reverse index sort for this particular \nsubscription. I'm guessing it's now accumulated enough rows for the \nplanner to justify the reverse sort?\n\n Limit (cost=0.00..14808.49 rows=10 width=299) (actual \ntime=3.760..11.689 rows=10 loops=1)\n -> Nested Loop (cost=0.00..15594816.65 rows=10531 width=299) \n(actual time=3.750..11.600 rows=10 loops=1)\n -> Index Scan Backward using maillog_msg_date_idx on maillog \nm (cost=0.00..805268.22 rows=2454190 width=256) (actual \ntime=0.132..5.548 rows=194 loops=1)\n Filter: (spam = 1)\n -> Index Scan using \nmaillog_received_subscription_maillog_id_idx on maillog_received mr \n(cost=0.00..6.01 rows=1 width=43) (actual time=0.020..0.021 rows=0 \nloops=194)\n Index Cond: ((mr.subscription = 89) AND (mr.maillog_id = \n\"outer\".id))\n Total runtime: 11.878 ms\n\n\nI decided to try the same query with enable_sort on and off to see what \nsort of a difference it made roughly:\n\nWith enable_sort = 1:\n\n Limit (cost=7515.77..7515.79 rows=10 width=299) (actual \ntime=13153.300..13153.412 rows=10 loops=1)\n -> Sort (cost=7515.77..7516.26 rows=196 width=299) (actual \ntime=13153.288..13153.324 rows=10 loops=1)\n Sort Key: m.msg_date\n -> Nested Loop (cost=0.00..7508.30 rows=196 width=299) \n(actual time=0.171..13141.099 rows=853 loops=1)\n -> Index Scan using maillog_received_subscription_idx on \nmaillog_received mr (cost=0.00..4266.90 rows=1069 width=43) (actual \ntime=0.095..5240.645 rows=993 loops=1)\n Index Cond: (subscription = 15245)\n -> Index Scan using maillog_pkey on maillog m \n(cost=0.00..3.02 rows=1 width=256) (actual time=7.893..7.902 rows=1 \nloops=993)\n Index Cond: (\"outer\".maillog_id = m.id)\n Filter: (spam = 1)\n Total runtime: 13153.812 ms\n\nWith enable_sort = 0;\n\n Limit (cost=0.00..795580.99 rows=10 width=299) (actual \ntime=108.345..3801.446 rows=10 loops=1)\n -> Nested Loop (cost=0.00..15593387.49 rows=196 width=299) (actual \ntime=108.335..3801.352 rows=10 loops=1)\n -> Index Scan Backward using maillog_msg_date_idx on maillog \nm (cost=0.00..805194.97 rows=2453965 width=256) (actual \ntime=0.338..3338.096 rows=15594 loops=1)\n Filter: (spam = 1)\n -> Index Scan using \nmaillog_received_subscription_maillog_id_idx on maillog_received mr \n(cost=0.00..6.01 rows=1 width=43) (actual time=0.020..0.020 rows=0 \nloops=15594)\n Index Cond: ((mr.subscription = 15245) AND (mr.maillog_id \n= \"outer\".id))\n Total runtime: 3801.676 ms\n\nIn comparsion, query plan on the smaller server (it used a sort for this \nsubscription vs a reverse scan):\n\n Limit (cost=197.37..197.38 rows=6 width=313) (actual \ntime=883.576..883.597 rows=10 loops=1)\n -> Sort (cost=197.37..197.38 rows=6 width=313) (actual \ntime=883.571..883.577 rows=10 loops=1)\n Sort Key: m.msg_date\n -> Nested Loop (cost=0.00..197.29 rows=6 width=313) (actual \ntime=106.334..873.928 rows=47 loops=1)\n -> Index Scan using maillog_received_subscription_idx on \nmaillog_received mr (cost=0.00..109.17 rows=28 width=41) (actual \ntime=47.289..389.775 rows=58 loops=1)\n Index Cond: (subscription = 15245)\n -> Index Scan using maillog_pkey on maillog m \n(cost=0.00..3.13 rows=1 width=272) (actual 
time=8.319..8.322 rows=1 \nloops=58)\n Index Cond: (\"outer\".maillog_id = m.id)\n Filter: (spam = 1)\n Total runtime: 883.820 ms\n\n>The contents of the pg_stats row for mr.subscription in each server\n>would be informative, too. \n>\nI've increased the statistics targets to 300, so these rows are pretty \nbulky, however I've included the rows as text files to this message \n(pg_stats_large.txt and pg_stats_small.txt).\n\n>One rowcount estimate that does look\n>wrong is\n>\n> -> Index Scan using maillog_received_subscription_idx on \n>maillog_received mr (cost=0.00..17789.73 rows=4479 width=43) (actual \n>time=0.030..33554.061 rows=65508 loops=1)\n> Index Cond: (subscription = 89)\n>\n>so the stats row is suggesting there are only 4479 rows with\n>subscription = 89 when really there are 65508. (The preceding\n>discussion hopefully makes it clear why this is a potentially critical\n>mistake.)\n>\nThis could potentially make sense on the larger server (if my \nunderstanding of the vacuum process is correct). The regular \nmaintenance of the large server (which is currently the only one being \nupdated regularily), does a vacuum analyze once per day, a scheduled \nvacuum once / hour, and autovacuum for the remainder of the time (which \nmight be overkill). With the function of these tables, it is easily \npossible for the maillog_received table to gain 60,000 rows for a single \nsubscription in 24 hours. With that information, maybe it might be in \nmy best interest to run the analyze hourly?\n\n> This suggests that you may need to raise your statistics\n>targets.\n>\n> \n>\nI've raised the statistics targets to 300 on these columns, it seems to \nhave helped some. Interesting enough, when I query on the \nmost_common_vals for the subscription argument, they tend to use the \nreverse scan method vs the sort. \n\nThanks again,\n\nDerek",
"msg_date": "Mon, 09 May 2005 14:10:02 -0400",
"msg_from": "Derek Buttineau|Compu-SOLVE <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ORDER BY Optimization"
}
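If statistics really do need refreshing as often as hourly, ANALYZE can be pointed at just the fast-moving table, or even a single column, which keeps the maintenance cheap; a sketch only, not something prescribed in the thread:

-- Cheap, targeted refresh of the column whose distribution shifts quickly.
ANALYZE maillog_received (subscription);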
] |
[
{
"msg_contents": "Jeroen van Iddekinge wrote:\n> Hi,\n>\n>\n> I understand that when a table contains only a few rows it is better to\n> do a sequence scan than an index scan. But is this also for a table with\n> 99 records?\n>\n\n...\n\n> explain select * from tblFolders where id=90;\n> QUERY PLAN\n> -----------------------------------------------------------\n> Seq Scan on tblfolders (cost=0.00..3.24 rows=1 width=50)\n> Filter: (id = 90)\n>\n>\n> (I have analyze table bit still a sequence scan).\n>\n> With how manys rows it is ok to do an index scan or sequence scan? How\n> is this calculated in pg?\n>\n> Regards\n> Jer\n\nIt depends on how many pages need to be read. To do an index scan you\nneed to read the index pages, and then you read the page where the\nactual row resides.\n\nUsually the comment is if you are selecting >5% of the rows, seqscan is\nfaster than an index scan. If I'm reading your query correctly, it is\nestimating needing to read about 3 pages to get the row you are asking\nfor. If you used an index, it probably would have to read at least that\nmany pages, and they would not be a sequential read, so it should be slower.\n\nIf you want to prove it, try:\n\n\\timing\nEXPLAIN ANALYZE SELECT * FROM tblFolders WHERE id=90;\nEXPLAIN ANALYZE SELECT * FROM tblFolders WHERE id=90;\nEXPLAIN ANALYZE SELECT * FROM tblFolders WHERE id=90;\n\nSET enable_seqscan TO OFF;\n\nEXPLAIN ANALYZE SELECT * FROM tblFolders WHERE id=90;\nEXPLAIN ANALYZE SELECT * FROM tblFolders WHERE id=90;\nEXPLAIN ANALYZE SELECT * FROM tblFolders WHERE id=90;\n\n\nRun multiple times to make sure everything is cached, and take the\nfastest time. On your machine it might be true that the index scan is\nslightly faster than the seqscan in this exact circumstance. But I have\nthe feeling the time is trivially different, and if you had say 70 rows\nit would favor seqscan. Probably somewhere at 150-200 rows it will\nswitch on it's own.\n\nYou could tweak with several settings to get it to do an index scan\nearlier, but these would probably break other queries. You don't need to\ntune for 100 rows, more like 100k or 100M.\n\nJohn\n=:->",
"msg_date": "Sun, 08 May 2005 08:44:28 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequence scan on PK"
},
{
"msg_contents": "Hi,\n\n\nI understand that when a table contains only a few rows it is better to \ndo a sequence scan than an index scan. But is this also for a table with \n99 records?\n\nA table contains\nid integer (primary key)\nname varchar(70)\nparent integer\ncomment text\nowner integer\ninheritAccess integer\ndefaultAccess integer\nsequence bigint\ncontentsinheritaccessmove integer\ncontentsinheritaccessadd integer\n\n\nexplain select * from tblFolders where id=90;\n QUERY PLAN\n-----------------------------------------------------------\n Seq Scan on tblfolders (cost=0.00..3.24 rows=1 width=50)\n Filter: (id = 90)\n\n\n(I have analyze table bit still a sequence scan).\n\nWith how manys rows it is ok to do an index scan or sequence scan? How \nis this calculated in pg?\n\nRegards\n Jer\n",
"msg_date": "Sun, 08 May 2005 14:59:13 +0000",
"msg_from": "Jeroen van Iddekinge <[email protected]>",
"msg_from_op": false,
"msg_subject": "sequence scan on PK"
},
{
"msg_contents": "Hi,\n\n> Thanks for respone.\n> The index scan was a little bit faster for id=1 and faster for id=99.\n> \n> Which settings shoud I change for this? cpu_index_tuple_cost , \n> cpu_operator_cost, cpu_tuple_cost?\n\nYou should lower random_page_cost to make the planner choose an index \nscan vs sequential scan.\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com/\nhttp://phppgads.com/\n",
"msg_date": "Sun, 08 May 2005 18:18:46 +0200",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence scan on PK"
},
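A session-local way to try that advice before touching postgresql.conf; the value 2 here is only an example, not a recommendation from this thread:

SHOW random_page_cost;               -- current setting
SET random_page_cost = 2;            -- affects this session only
EXPLAIN ANALYZE SELECT * FROM tblFolders WHERE id = 90;
RESET random_page_cost;              -- back to the configured value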
{
"msg_contents": "Jeroen van Iddekinge wrote:\n>\n>>\n>> You could tweak with several settings to get it to do an index scan\n>> earlier, but these would probably break other queries. You don't need to\n>> tune for 100 rows, morelike 100k or 100M.\n>\n>\n> Thanks for respone.\n> The index scan was a little bit faster for id=1 and faster for id=99.\n>\n> Which settings shoud I change for this? cpu_index_tuple_cost ,\n> cpu_operator_cost, cpu_tuple_cost?\n>\n>\n> Jer.\n\nWell, I would start with *don't*. You are only looking at one query,\nwhich is pretty much fast already, and probably is not going to be the\nbottleneck. You are optimizing the wrong thing.\n\nThat being said, because you have everything cached in ram (since it is\na tiny table), you probably would set random_page_cost = 2.\n\nIn theory it should really never be lower than 2, though if you are\ntrying to force an index scan you can do it.\n\nJohn\n=:->",
"msg_date": "Sun, 08 May 2005 11:24:36 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sequence scan on PK"
},
{
"msg_contents": "Jeroen van Iddekinge <[email protected]> writes:\n>> You could tweak with several settings to get it to do an index scan\n>> earlier, but these would probably break other queries. You don't need to\n>> tune for 100 rows, morelike 100k or 100M.\n\n> Which settings shoud I change for this?\n\nI'd agree with John's response: if you change any settings based on just\nthis one test case, you're a fool. But usually random_page_cost is the\nbest knob to twiddle if you wish to encourage indexscans.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 08 May 2005 12:49:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence scan on PK "
},
{
"msg_contents": "Hi,\n\n> Yes , it was a bit to high (18) so a lowered it. It speeded up some \n> pages for about 5%.\n\n18? The default is 4 if I can remember correctly. I wonder if your db \nhas ever seen an index scan ;)\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com/\nhttp://phppgads.com/\n",
"msg_date": "Sun, 08 May 2005 19:05:11 +0200",
"msg_from": "Matteo Beccati <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence scan on PK"
},
{
"msg_contents": "\n>\n> You could tweak with several settings to get it to do an index scan\n> earlier, but these would probably break other queries. You don't need to\n> tune for 100 rows, morelike 100k or 100M.\n\nThanks for respone.\nThe index scan was a little bit faster for id=1 and faster for id=99.\n\nWhich settings shoud I change for this? cpu_index_tuple_cost , \ncpu_operator_cost, cpu_tuple_cost?\n\n\nJer.\n\n\n",
"msg_date": "Sun, 08 May 2005 17:49:36 +0000",
"msg_from": "Jeroen van Iddekinge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence scan on PK"
},
{
"msg_contents": "\n>\n> You should lower random_page_cost to make the planner choose an index \n> scan vs sequential scan.\n>\nYes , it was a bit to high (18) so a lowered it. It speeded up some \npages for about 5%.\n\nReg. Jer\n",
"msg_date": "Sun, 08 May 2005 18:32:14 +0000",
"msg_from": "Jeroen van Iddekinge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence scan on PK"
},
{
"msg_contents": "Matteo Beccati wrote:\n\n> Hi,\n>\n>> Yes , it was a bit to high (18) so a lowered it. It speeded up some \n>> pages for about 5%.\n>\n>\n> 18? The default is 4 if I can remember correctly. I wonder if your db \n> has ever seen an index scan ;)\n>\n\nI was expermenting how much some setting influence has on the perfomance \nof some web application.\nSo I think i forgot to change the setting back and got some strange \nquery plans.\n\nThanks\nJer\n\n",
"msg_date": "Sun, 08 May 2005 20:15:38 +0000",
"msg_from": "Jeroen van Iddekinge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence scan on PK"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nTom Lane wrote:\n| Jeroen van Iddekinge <[email protected]> writes:\n|\n|>>You could tweak with several settings to get it to do an index scan\n|>>earlier, but these would probably break other queries. You don't need to\n|>>tune for 100 rows, morelike 100k or 100M.\n|\n|\n|>Which settings shoud I change for this?\n|\n|\n| I'd agree with John's response: if you change any settings based on just\n| this one test case, you're a fool. But usually random_page_cost is the\n| best knob to twiddle if you wish to encourage indexscans.\n|\n\nPerhaps just a small comment - before starting the tuning process, you\nwant to make sure the query planner has the right ideas about the nature\nof data contained in your indexed column.\n\nSometimes, if you insert reasonably sized batches of records containing\nthe same value for that column (for example in a multicolumn key where\nyou usually retrieve by only one column), statistics collector (used to)\nget out of sync with reality with regard to cardinality of data, because\nthe default snapshot is too small to provide it with objective insight.\nIf you're intimate with your data, you probably want to increase\nstatistics target on that column and/or do some other statistics-related\nmagic and ANALYZE the table again; that alone can mean the difference\nbetween a sequential and an index scan where appropriate, and most\nimportantly, you don't need to distort the database's understanding of\nyour hardware to achieve optimal plans (provided you have the value set\nto proper values, of course), so you won't get bitten where you don't\nexpect it. :)\n\nAgain, this might not pertain to a 100-row table, but is a good thing\n[tm] to know when optimizing. I personally would prefer to look at that\naspect of optimizer's understanding of data before anything else.\n\nHope this helps.\n\nRegards,\n- --\nGrega Bremec\ngregab at p0f dot net\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\n\niD8DBQFCfvizfu4IwuB3+XoRAhI1AJ92uhoh0u9q7/XPllH37o5KXlpJdwCfQ+2b\nsJhq4ZWDdZU9x4APoGOsMes=\n=Tq99\n-----END PGP SIGNATURE-----\n",
"msg_date": "Mon, 09 May 2005 07:44:20 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sequence scan on PK"
}
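A sketch of the kind of check described above, with made-up table and column names ("orders.customer_id" stands in for whatever column drives the filter): raise the per-column statistics target, re-ANALYZE, and compare the planner's idea of the column's cardinality against reality.

ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;

-- What the planner believes ...
SELECT n_distinct, most_common_vals
FROM pg_stats
WHERE tablename = 'orders' AND attname = 'customer_id';

-- ... versus what is actually in the table.
SELECT count(DISTINCT customer_id) FROM orders;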
] |
[
{
"msg_contents": "Sorry to bother everyone with yet another \"my query isn't using an \nindex\" problem but I am over my head on this one.. I am open to ways \nof restructuring this query to perform better.\n\nI have a table, 'ea', with 22 million rows in it. VACUUM ANALYZE has \nbeen just run on the table.\n\nThis is the result of:\n\nexplain analyze\nselect distinct\n\tem.incidentid,\n\tea.recordtext as retdata,\n\teg.long,\n\teg.lat\nfrom\n\tea, em, eg\nwhere\n\tem.incidentid = ea.incidentid and\n\tem.incidentid = eg.incidentid and\n\tem.entrydate >= '2005-1-1 00:00' and\n\tem.entrydate <= '2005-5-9 00:00'\n\tand ea.incidentid in (\n\t\tselect\n\t\t\tincidentid\n\t\tfrom\n\t\t\tea\n\t\twhere\n\t\t\trecordtext like '%RED%'\n\t)\n\n\tand ea.incidentid in (\n\t\tselect\n\t\t\tincidentid\n\t\tfrom\n\t\t\tea\n\t\twhere\n\t\t\trecordtext like '%CORVETTE%'\n\t)\n\tand ( recordtext like '%RED%' or recordtext like '%CORVETTE%' ) \norder by em.entrydate\n\n\n---------------------\nANALYZE RESULTS\n---------------------\n\n Unique (cost=774693.72..774693.76 rows=1 width=159) (actual \ntime=446787.056..446787.342 rows=72 loops=1)\n -> Sort (cost=774693.72..774693.72 rows=1 width=159) (actual \ntime=446787.053..446787.075 rows=72 loops=1)\n Sort Key: em.incidentid, public.ea.recordtext, eg.long, eg.lat\n -> Nested Loop (cost=771835.10..774693.71 rows=1 width=159) \n(actual time=444378.655..446786.746 rows=72 loops=1)\n -> Nested Loop (cost=771835.10..774688.81 rows=1 \nwidth=148) (actual time=444378.532..446768.381 rows=72 loops=1)\n -> Nested Loop IN Join \n(cost=771835.10..774678.88 rows=2 width=81) (actual \ntime=444367.080..446191.864 rows=701 loops=1)\n -> Nested Loop (cost=771835.10..774572.05 \nrows=42 width=64) (actual time=444366.859..445463.232 rows=1011 \nloops=1)\n -> HashAggregate \n(cost=771835.10..771835.10 rows=1 width=17) (actual \ntime=444366.702..444368.583 rows=473 loops=1)\n -> Seq Scan on ea \n(cost=0.00..771834.26 rows=335 width=17) (actual \ntime=259.746..444358.837 rows=592 loops=1)\n Filter: \n((recordtext)::text ~~ '%CORVETTE%'::text)\n -> Index Scan using ea1 on ea \n(cost=0.00..2736.43 rows=42 width=47) (actual time=2.085..2.309 rows=2 \nloops=473)\n Index Cond: \n((ea.incidentid)::text = (\"outer\".incidentid)::text)\n Filter: (((recordtext)::text ~~ \n'%RED%'::text) OR ((recordtext)::text ~~ '%CORVETTE%'::text))\n -> Index Scan using ea1 on ea \n(cost=0.00..2733.81 rows=42 width=17) (actual time=0.703..0.703 rows=1 \nloops=1011)\n Index Cond: \n((\"outer\".incidentid)::text = (ea.incidentid)::text)\n Filter: ((recordtext)::text ~~ \n'%RED%'::text)\n -> Index Scan using em_incidentid_idx on em \n(cost=0.00..4.95 rows=1 width=67) (actual time=0.820..0.821 rows=0 \nloops=701)\n Index Cond: ((\"outer\".incidentid)::text = \n(em.incidentid)::text)\n Filter: ((entrydate >= '2005-01-01 \n00:00:00'::timestamp without time zone) AND (entrydate <= '2005-05-09 \n00:00:00'::timestamp without time zone))\n -> Index Scan using eg_incidentid_idx on eg \n(cost=0.00..4.89 rows=1 width=79) (actual time=0.245..0.246 rows=1 \nloops=72)\n Index Cond: ((\"outer\".incidentid)::text = \n(eg.incidentid)::text)\n Total runtime: 446871.880 ms\n(22 rows)\n\n\n-------------------------\nEXPLANATION\n-------------------------\nThe reason for the redundant LIKE clause is that first, I only want \nthose \"incidentid\"s that contain the words 'RED' and 'CORVETTE'. BUT, \nthose two words may exist across multiple records with the same \nincidentid. 
Then, I only want to actually work with the rows that \ncontain one of the words. This query will repeat the same logic for \nhowever many keywords are entered by the user. I have investigated \ntext searching options and have not found them to be congruous with my \napplication.\n\nWhy is it choosing a sequential scan one part of the query when \nsearching for the words, yet using an index scan for another part of \nit? Is there a better way to structure the query to give it better \nhints?\n\nI'm using 8.0.1 on a 4-way Opteron with beefy RAID-10 and 12GB of RAM.\n\nThank you for any advice.\n\n-Dan\n\n\n\n",
"msg_date": "Sun, 8 May 2005 17:20:35 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query tuning help"
},
{
"msg_contents": "Dan,\n\n> and ( recordtext like '%RED%' or recordtext like '%CORVETTE%' ) \n\nIt is simply not possible to use B-tree indexes on these kind of text queries. \nB-trees require you to start at the \"left\" side of the field, because B-trees \nlocate records via <> tests. \"Anywhere in the field\" text search requires a \nFull Text Index.\n\n> The reason for the redundant LIKE clause is that first, I only want\n> those \"incidentid\"s that contain the words 'RED' and 'CORVETTE'. BUT,\n> those two words may exist across multiple records with the same\n> incidentid. Then, I only want to actually work with the rows that\n> contain one of the words. This query will repeat the same logic for\n> however many keywords are entered by the user. I have investigated\n> text searching options and have not found them to be congruous with my\n> application.\n\nSounds like you either need to restructure your application, restructure your \ndatabase (so that you're not doing \"anywhere in field\" searches), or buy 32GB \nof ram so that you can cache the whole table.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 8 May 2005 17:48:18 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
},
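To make the left-anchored point concrete, here is a sketch against the table from this thread (not something proposed by the posters): a btree over recordtext can serve a pattern that fixes the start of the string, but never one with a leading wildcard. The varchar_pattern_ops opclass is used because recordtext is a varchar column and the database locale is presumably not C.

CREATE INDEX ea_recordtext_prefix ON ea (recordtext varchar_pattern_ops);

EXPLAIN SELECT incidentid FROM ea WHERE recordtext LIKE 'RED%';   -- eligible for the index
EXPLAIN SELECT incidentid FROM ea WHERE recordtext LIKE '%RED%';  -- still a sequential scan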
{
"msg_contents": "On Mon, 9 May 2005 09:20 am, Dan Harris wrote:\n> Sorry to bother everyone with yet another \"my query isn't using an \n> index\" problem but I am over my head on this one.. I am open to ways \n> of restructuring this query to perform better.\n> \n> I have a table, 'ea', with 22 million rows in it. VACUUM ANALYZE has \n> been just run on the table.\n> \n> This is the result of:\n> \n> explain analyze\n> select distinct\n> em.incidentid,\n> ea.recordtext as retdata,\n> eg.long,\n> eg.lat\n> from\n> ea, em, eg\n> where\n> em.incidentid = ea.incidentid and\n> em.incidentid = eg.incidentid and\n> em.entrydate >= '2005-1-1 00:00' and\n> em.entrydate <= '2005-5-9 00:00'\n> and ea.incidentid in (\n> select\n> incidentid\n> from\n> ea\n> where\n> recordtext like '%RED%'\n> )\n> \n> and ea.incidentid in (\n> select\n> incidentid\n> from\n> ea\n> where\n> recordtext like '%CORVETTE%'\n> )\n> and ( recordtext like '%RED%' or recordtext like '%CORVETTE%' ) \n> order by em.entrydate\n> \nYou cannot use an index for %CORVETTE%, or %RED%. There is no way\nfor the index to know if a row had that in the middle without scanning the whole\nindex. So it's much cheaper to do a sequence scan.\n\nOne possible way to make the query faster is to limit based on date, as you will only get about 700 rows.\nAnd then don't use subselects, as they are doing full sequence scans. I think this query does what you do \nabove, and I think it will be faster, but I don't know.\n\nselect distinct em.incidentid, ea.recordtext as retdata, eg.long, eg.lat\nFROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= '2005-1-1 00:00'\nAND em.entrydate <= '2005-5-9 00:00' AND ea.recordtext like '%RED%' AND ea.recordtext like '%CORVETTE%')\nJOIN eg ON em.incidentid = eg.incidentid WHERE (recordtext like '%RED%' or recordtext like '%CORVETTE%' );\n\n> \n> ---------------------\n> ANALYZE RESULTS\n> ---------------------\n> \n> Unique (cost=774693.72..774693.76 rows=1 width=159) (actual time=446787.056..446787.342 rows=72 loops=1)\n> -> Sort (cost=774693.72..774693.72 rows=1 width=159) (actual time=446787.053..446787.075 rows=72 loops=1)\n> Sort Key: em.incidentid, public.ea.recordtext, eg.long, eg.lat\n> -> Nested Loop (cost=771835.10..774693.71 rows=1 width=159) (actual time=444378.655..446786.746 rows=72 loops=1)\n> -> Nested Loop (cost=771835.10..774688.81 rows=1 width=148) (actual time=444378.532..446768.381 rows=72 loops=1)\n> -> Nested Loop IN Join (cost=771835.10..774678.88 rows=2 width=81) (actual time=444367.080..446191.864 rows=701 loops=1)\n> -> Nested Loop (cost=771835.10..774572.05 rows=42 width=64) (actual time=444366.859..445463.232 rows=1011 loops=1)\n> -> HashAggregate (cost=771835.10..771835.10 rows=1 width=17) (actual time=444366.702..444368.583 rows=473 loops=1)\n> -> Seq Scan on ea (cost=0.00..771834.26 rows=335 width=17) (actual time=259.746..444358.837 rows=592 loops=1)\n> Filter: ((recordtext)::text ~~ '%CORVETTE%'::text)\n> -> Index Scan using ea1 on ea (cost=0.00..2736.43 rows=42 width=47) (actual time=2.085..2.309 rows=2 loops=473)\n> Index Cond: ((ea.incidentid)::text = (\"outer\".incidentid)::text)\n> Filter: (((recordtext)::text ~~ '%RED%'::text) OR ((recordtext)::text ~~ '%CORVETTE%'::text))\n> -> Index Scan using ea1 on ea (cost=0.00..2733.81 rows=42 width=17) (actual time=0.703..0.703 rows=1 loops=1011)\n> Index Cond: ((\"outer\".incidentid)::text = (ea.incidentid)::text)\n> Filter: ((recordtext)::text ~~ '%RED%'::text)\n> -> Index Scan using em_incidentid_idx on em 
(cost=0.00..4.95 rows=1 width=67) (actual time=0.820..0.821 rows=0 loops=701)\n> Index Cond: ((\"outer\".incidentid)::text = (em.incidentid)::text)\n> Filter: ((entrydate >= '2005-01-01 00:00:00'::timestamp without time zone) AND (entrydate <= '2005-05-09 00:00:00'::timestamp without time zone))\n> -> Index Scan using eg_incidentid_idx on eg (cost=0.00..4.89 rows=1 width=79) (actual time=0.245..0.246 rows=1 loops=72)\n> Index Cond: ((\"outer\".incidentid)::text = (eg.incidentid)::text)\n> Total runtime: 446871.880 ms\n> (22 rows)\n> \n> \n> -------------------------\n> EXPLANATION\n> -------------------------\n> The reason for the redundant LIKE clause is that first, I only want \n> those \"incidentid\"s that contain the words 'RED' and 'CORVETTE'. BUT, \n> those two words may exist across multiple records with the same \n> incidentid. Then, I only want to actually work with the rows that \n> contain one of the words. This query will repeat the same logic for \n> however many keywords are entered by the user. I have investigated \n> text searching options and have not found them to be congruous with my \n> application.\n> \n> Why is it choosing a sequential scan one part of the query when \n> searching for the words, yet using an index scan for another part of \n> it? Is there a better way to structure the query to give it better \n> hints?\n> \n> I'm using 8.0.1 on a 4-way Opteron with beefy RAID-10 and 12GB of RAM.\n> \n> Thank you for any advice.\n> \n> -Dan\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n> \n",
"msg_date": "Mon, 9 May 2005 10:51:14 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": "Russell Smith <[email protected]> writes:\n> On Mon, 9 May 2005 09:20 am, Dan Harris wrote:\n>> and ( recordtext like '%RED%' or recordtext like '%CORVETTE%' ) \n>> \n> You cannot use an index for %CORVETTE%, or %RED%.\n\nNot a btree index anyway. Dan might have some success here with a\nfull-text-indexing package (eg, contrib/tsearch2)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 08 May 2005 20:58:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help "
},
{
"msg_contents": "\nOn May 8, 2005, at 6:51 PM, Russell Smith wrote:\n\n> On Mon, 9 May 2005 09:20 am, Dan Harris wrote:\n> You cannot use an index for %CORVETTE%, or %RED%. There is no way\n> for the index to know if a row had that in the middle without scanning \n> the whole\n> index. So it's much cheaper to do a sequence scan.\n>\n\nWhile I believe you, I'm confused by this line in my original EXPLAIN \nANALYZE:\n\n>> -> Index Scan using ea1 on ea (cost=0.00..2736.43 rows=42 width=47) \n>> (actual time=2.085..2.309 rows=2 loops=473)\n>> Index Cond: \n>> ((ea.incidentid)::text = (\"outer\".incidentid)::text)\n>> Filter: (((recordtext)::text \n>> ~~ '%RED%'::text) OR ((recordtext)::text ~~ '%CORVETTE%'::text))\n\nDoesn't that mean it was using an index to filter? Along those lines, \nbefore I created index 'ea1', the query was much much slower. So, it \nseemed like creating this index made a difference.\n\n> One possible way to make the query faster is to limit based on date, \n> as you will only get about 700 rows.\n> And then don't use subselects, as they are doing full sequence scans. \n> I think this query does what you do\n> above, and I think it will be faster, but I don't know.\n>\n\nI REALLY like this idea! If I could just filter by date first and then \nsequential scan through those, it should be very manageable. Hopefully \nI can keep this goal while still accommodating the requirement listed \nin my next paragraph.\n\n> select distinct em.incidentid, ea.recordtext as retdata, eg.long, \n> eg.lat\n> FROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= \n> '2005-1-1 00:00'\n> AND em.entrydate <= '2005-5-9 00:00' AND ea.recordtext like '%RED%' \n> AND ea.recordtext like '%CORVETTE%')\n> JOIN eg ON em.incidentid = eg.incidentid WHERE (recordtext like \n> '%RED%' or recordtext like '%CORVETTE%' );\n>\n\nI have run this, and while it is very fast, I'm concerned it's not \ndoing what I need. Here's the situation:\n\nDue to the format of the systems with which I integrate ( I have no \ncontrol over these formats ), we will get these 'recordtext' values one \nline at a time, accumulating over time. The only way I can find to \nmake this work is to insert a new record for each line. The problem \nis, that when someone wants to search multiple keywords, they expect \nthese words to be matched across multiple records with a given incident \nnumber.\n\n For a very simple example:\n\nIncidentID\t\tDate\t\t\t\tRecordtext\n--------------\t\t-------------\t\t\t \n-------------------------------------------------------\n11111\t\t\t2005-05-01 14:21\tblah blah blah RED blah blah\n2222\t\t\t2005-05-01 14:23\tnot what we are looking for\n11111\t\t\t2005-05-02 02:05\tblah CORVETTE blah blah\n\nSo, doing a search with an 'and' condition, e.g. WHERE RECORDTEXT LIKE \n'%RED%' AND RECORDTEXT LIKE '%CORVETTE%' , will not match because the \ncondition will only be applied to a single row of recordtext at a time, \nnot a whole group with the same incident number.\n\nIf I were to use tsearch2 for full-text indexing, would I need to \ncreate another table that merges all of my recordtext rows into a \nsingle 'text' field type? If so, this is where I run into problems, as \nmy logic also needs to match multiple words in their original order. I \nmay also receive additional updates to the previous data. In that \ncase, I need to replace the original record with the latest version of \nit. 
If I have already concatenated these rows into a single field, the \nlogic to in-line replace only the old text that has changed is very \nvery difficult at best. So, that's the reason I had to do two \nsubqueries in my example. Please tell me if I misunderstood your logic \nand it really will match given my condition above, but it didn't seem \nlike it would.\n\nThanks again for the quick responses! This list has been a great \nresource for me.\n\n-Dan\n\n",
"msg_date": "Sun, 8 May 2005 19:49:30 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query tuning help"
},
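One way to express "every keyword appears somewhere under the same incident, but only rows containing at least one keyword are returned", while still filtering by date first, is with EXISTS subqueries. This is only a sketch built from the tables in this thread, not something the posters verified; em.entrydate is added to the select list so that DISTINCT and ORDER BY stay compatible.

SELECT DISTINCT em.incidentid, em.entrydate, ea.recordtext AS retdata, eg.long, eg.lat
FROM em
JOIN ea ON ea.incidentid = em.incidentid
JOIN eg ON eg.incidentid = em.incidentid
WHERE em.entrydate >= '2005-1-1 00:00'
  AND em.entrydate <= '2005-5-9 00:00'
  AND (ea.recordtext LIKE '%RED%' OR ea.recordtext LIKE '%CORVETTE%')
  -- every keyword must appear somewhere under the same incident
  AND EXISTS (SELECT 1 FROM ea r
              WHERE r.incidentid = em.incidentid AND r.recordtext LIKE '%RED%')
  AND EXISTS (SELECT 1 FROM ea c
              WHERE c.incidentid = em.incidentid AND c.recordtext LIKE '%CORVETTE%')
ORDER BY em.entrydate;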
{
"msg_contents": "Dan,\n\n> While I believe you, I'm confused by this line in my original EXPLAIN\n>\n> ANALYZE:\n> >> -> Index Scan using ea1 on ea (cost=0.00..2736.43 rows=42 width=47)\n> >> (actual time=2.085..2.309 rows=2 loops=473)\n> >> Index Cond:\n> >> ((ea.incidentid)::text = (\"outer\".incidentid)::text)\n> >> Filter: (((recordtext)::text\n> >> ~~ '%RED%'::text) OR ((recordtext)::text ~~ '%CORVETTE%'::text))\n\nThe index named is matching based on incidentid -- the join condition. The \n\"filter\" is applied against the table rows, i.e. a scan.\n\n> If I were to use tsearch2 for full-text indexing, would I need to\n> create another table that merges all of my recordtext rows into a\n> single 'text' field type? \n\nNo. Read the OpenFTS docs, they are fairly clear on how to set up a simple \nFTS index. (TSearch2 ~~ OpenFTS)\n\n> If so, this is where I run into problems, as \n> my logic also needs to match multiple words in their original order. \n\nYou do that by doubling up ... that is, use the FTS index to pick all rows \nthat contain \"RED\" and \"CORVETTE\", and then check the order. I'll also note \nthat your current query is not checking word order. \n\nExample:\nWHERE recordtext_fti @@ to_tsquery ('default', 'RED && CORVETTE')\n\tAND recordtext LIKE '%RED%CORVETTE%'\n\nI'm doing something fairly similar on one of my projects and it works very \nwell.\n\nThe limitations on TSearch2 indexes are:\n1) they are expensive to update, so your data loads would be noticably slower. \n2) they are only fast when cached in RAM (and when cached, are *very* fast). \nSo if you have a variety of other processes that tend to fill up RAM between \nsearches, you may find them less useful.\n3) You have to create a materialized index column next to recordtext, which \nwill increase the size of the table.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 8 May 2005 19:06:47 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
},
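To make the approach Josh sketches above concrete, the tsearch2 side of it might look roughly like the following. This is only a sketch: it assumes contrib/tsearch2 is installed with its 'default' configuration, the column/index/trigger names are illustrative rather than anything from the original thread, and the exact boolean-operator syntax accepted by to_tsquery varied a little between tsearch2 releases.

-- materialized tsvector column plus a GiST index on it
ALTER TABLE ea ADD COLUMN recordtext_fti tsvector;
UPDATE ea SET recordtext_fti = to_tsvector('default', recordtext);
CREATE INDEX ea_fti_idx ON ea USING gist (recordtext_fti);

-- keep the column current on future inserts/updates (tsearch2's bundled trigger)
CREATE TRIGGER ea_fti_update BEFORE INSERT OR UPDATE ON ea
    FOR EACH ROW EXECUTE PROCEDURE tsearch2(recordtext_fti, recordtext);
VACUUM ANALYZE ea;

-- the FTS index narrows the candidate rows; the LIKE re-check enforces word order
SELECT incidentid, recordtext
FROM ea
WHERE recordtext_fti @@ to_tsquery('default', 'RED & CORVETTE')
  AND recordtext LIKE '%RED%CORVETTE%';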
{
"msg_contents": "Dan Harris <[email protected]> writes:\n>> -> Index Scan using ea1 on ea (cost=0.00..2736.43 rows=42 width=47) (actual time=2.085..2.309 rows=2 loops=473)\n>> Index Cond: ((ea.incidentid)::text = (\"outer\".incidentid)::text)\n>> Filter: (((recordtext)::text ~~ '%RED%'::text) OR ((recordtext)::text ~~ '%CORVETTE%'::text))\n\n> Doesn't that mean it was using an index to filter?\n\nNo. The \"Index Cond\" shows it is using the index only for the join\ncondition. A \"Filter\" is an additional filter condition that happens to\nget applied at this plan node --- but it'll be applied to every row the\nindex finds for the index condition.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 08 May 2005 22:28:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help "
},
{
"msg_contents": "\nOn May 8, 2005, at 8:06 PM, Josh Berkus wrote:\n>\n>> If I were to use tsearch2 for full-text indexing, would I need to\n>> create another table that merges all of my recordtext rows into a\n>> single 'text' field type?\n>\n> No. Read the OpenFTS docs, they are fairly clear on how to set up a \n> simple\n> FTS index. (TSearch2 ~~ OpenFTS)\n>\n>> If so, this is where I run into problems, as\n>> my logic also needs to match multiple words in their original order.\n\nI have been reading the Tsearch2 docs and either I don't understand \nsomething or I'm not communicating my situation clearly enough. It \nseems that Tsearch2 has a concept of \"document\". And, in everything I \nam reading, they expect your \"document\" to be all contained in a single \nrow. Since my words can be spread across multiple rows, I don't see \nthat Tsearch2 will combine all 'recordtext' row values with the same \n\"incidentid\" into a single vector. Am I overlooking something in the \ndocs?\n\n>\n> I'm doing something fairly similar on one of my projects and it works \n> very\n> well.\n>\n\nI'd be curious what similarities they have? Is it the searching across \nmultiple rows or the order of words?\n\n> The limitations on TSearch2 indexes are:\n> 1) they are expensive to update, so your data loads would be noticably \n> slower.\n> 2) they are only fast when cached in RAM (and when cached, are *very* \n> fast).\n> So if you have a variety of other processes that tend to fill up RAM \n> between\n> searches, you may find them less useful.\n> 3) You have to create a materialized index column next to recordtext, \n> which\n> will increase the size of the table.\n\nDuly noted. If this method can search across rows, I'm willing to \naccept this overhead for the speed it would add.\n\nIn the meantime, is there any way I can reach my goal without Tsearch2 \nby just restructuring my query to narrow down the results by date \nfirst, then seq scan for the 'likes'?\n\n-Dan\n\n",
"msg_date": "Sun, 8 May 2005 20:31:38 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": "On Mon, 9 May 2005 11:49 am, Dan Harris wrote:\n> \n> On May 8, 2005, at 6:51 PM, Russell Smith wrote:\n> \n[snip]\n> > select distinct em.incidentid, ea.recordtext as retdata, eg.long, \n> > eg.lat\n> > FROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= \n> > '2005-1-1 00:00'\n> > AND em.entrydate <= '2005-5-9 00:00' AND ea.recordtext like '%RED%' \n> > AND ea.recordtext like '%CORVETTE%')\n> > JOIN eg ON em.incidentid = eg.incidentid WHERE (recordtext like \n> > '%RED%' or recordtext like '%CORVETTE%' );\n> >\n> \n> I have run this, and while it is very fast, I'm concerned it's not \n> doing what I need.\nHow fast is very fast?\n\n\n> Here's the situation: \n> \n> Due to the format of the systems with which I integrate ( I have no \n> control over these formats ), we will get these 'recordtext' values one \n> line at a time, accumulating over time. The only way I can find to \n> make this work is to insert a new record for each line. The problem \n> is, that when someone wants to search multiple keywords, they expect \n> these words to be matched across multiple records with a given incident \n> number.\n> \n> For a very simple example:\n> \n> IncidentID Date Recordtext\n> -------------- ------------- \n> -------------------------------------------------------\n> 11111 2005-05-01 14:21 blah blah blah RED blah blah\n> 2222 2005-05-01 14:23 not what we are looking for\n> 11111 2005-05-02 02:05 blah CORVETTE blah blah\n> \n> So, doing a search with an 'and' condition, e.g. WHERE RECORDTEXT LIKE \n> '%RED%' AND RECORDTEXT LIKE '%CORVETTE%' , will not match because the \n> condition will only be applied to a single row of recordtext at a time, \n> not a whole group with the same incident number.\n> \n> If I were to use tsearch2 for full-text indexing, would I need to \n> create another table that merges all of my recordtext rows into a \n> single 'text' field type? If so, this is where I run into problems, as \n> my logic also needs to match multiple words in their original order. I \n> may also receive additional updates to the previous data. In that \n> case, I need to replace the original record with the latest version of \n> it. If I have already concatenated these rows into a single field, the \n> logic to in-line replace only the old text that has changed is very \n> very difficult at best. So, that's the reason I had to do two \n> subqueries in my example. Please tell me if I misunderstood your logic \n> and it really will match given my condition above, but it didn't seem \n> like it would.\n> \n> Thanks again for the quick responses! 
This list has been a great \n> resource for me.\n> \nselect distinct em.incidentid, ea.recordtext as retdata, eg.long, eg.lat\nFROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= '2005-1-1 00:00'\nAND em.entrydate <= '2005-5-9 00:00' AND (ea.recordtext like '%RED%' OR ea.recordtext like '%CORVETTE%'))\nJOIN eg ON em.incidentid = eg.incidentid WHERE \nem.incidentid IN\n(select distinct em.incidentid, ea.recordtext as retdata, eg.long, eg.lat\nFROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= '2005-1-1 00:00'\nAND em.entrydate <= '2005-5-9 00:00' AND ea.recordtext like '%CORVETTE%'))\nJOIN eg ON em.incidentid = eg.incidentid) AND \nem.incidentid IN\n(select distinct em.incidentid, ea.recordtext as retdata, eg.long, eg.lat\nFROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= '2005-1-1 00:00'\nAND em.entrydate <= '2005-5-9 00:00' AND ea.recordtext like '%RED%'))\nJOIN eg ON em.incidentid = eg.incidentid)\n\nThis may be more accurate. However I would call it VERY NASTY. Josh's solutions may be better.\nHowever much of the data should be in memory once the subplans are done, so it may be quite fast.\n\n> -Dan\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n> \n",
"msg_date": "Mon, 9 May 2005 12:32:05 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": "\nOn May 8, 2005, at 8:32 PM, Russell Smith wrote:\n>> I have run this, and while it is very fast, I'm concerned it's not\n>> doing what I need.\n> How fast is very fast?\n>\n\nIt took 35 seconds to complete versus ~450 my old way.\n\n>\n> select distinct em.incidentid, ea.recordtext as retdata, eg.long, \n> eg.lat\n> FROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= \n> '2005-1-1 00:00'\n> AND em.entrydate <= '2005-5-9 00:00' AND (ea.recordtext like '%RED%' \n> OR ea.recordtext like '%CORVETTE%'))\n> JOIN eg ON em.incidentid = eg.incidentid WHERE\n> em.incidentid IN\n> (select distinct em.incidentid, ea.recordtext as retdata, eg.long, \n> eg.lat\n> FROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= \n> '2005-1-1 00:00'\n> AND em.entrydate <= '2005-5-9 00:00' AND ea.recordtext like \n> '%CORVETTE%'))\n> JOIN eg ON em.incidentid = eg.incidentid) AND\n> em.incidentid IN\n> (select distinct em.incidentid, ea.recordtext as retdata, eg.long, \n> eg.lat\n> FROM em JOIN ea ON (em.incidentid = ea.incidentid AND em.entrydate >= \n> '2005-1-1 00:00'\n> AND em.entrydate <= '2005-5-9 00:00' AND ea.recordtext like '%RED%'))\n> JOIN eg ON em.incidentid = eg.incidentid)\n>\n\nYes, it is nasty, but so was my previous query :) So long as this is \nfaster, I'm ok with that. I'll see if i can make this work. Thank you \nvery much.\n\n-Dan\n\n",
"msg_date": "Sun, 8 May 2005 20:49:07 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": "On Sun, 8 May 2005 20:31:38 -0600, Dan Harris <[email protected]> wrote:\n> Duly noted. If this method can search across rows, I'm willing to \n> accept this overhead for the speed it would add.\n\nYou could use intersect to search across rows. Using tsearch2 will look\nup the RED and CORVETTE using the index and intersect will pull out the\ncommmon rows.\n\n> In the meantime, is there any way I can reach my goal without Tsearch2 \n> by just restructuring my query to narrow down the results by date \n> first, then seq scan for the 'likes'?\n\n\nselect distinct\n\tem.incidentid,\n\tea.recordtext as retdata,\n\teg.long,\n\teg.lat\n>from\n\tea, em, eg, \n\t(\n\t\tselect\n\t\t\tea.incidentid\n\t\tfrom\n\t\t\tea, em\n\t\twhere\n\t\t\tem.incidentid = ea.incidentid and\n\t\t\tem.entrydate >= '2005-1-1 00:00' and\n\t\t\tem.entrydate <= '2005-5-9 00:00' and\n\t\t\trecordtext like '%RED%'\n\n\t\tintersect\n\n\t\tselect\n\t\t\tea.incidentid\n\t\tfrom\n\t\t\tea, em\n\t\twhere\n\t\t\tem.incidentid = ea.incidentid and\n\t\t\tem.entrydate >= '2005-1-1 00:00' and\n\t\t\tem.entrydate <= '2005-5-9 00:00' and\n\t\t\trecordtext like '%CORVETTE%'\n\t) as iid\nwhere\n\tem.incidentid = ea.incidentid and\n\tem.incidentid = eg.incidentid and\n\tem.entrydate >= '2005-1-1 00:00' and\n\tem.entrydate <= '2005-5-9 00:00'\n\tand ea.incidentid = iid.incidentid \n\tand ( recordtext like '%RED%' or recordtext like '%CORVETTE%' )\norder by em.entrydate\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. :\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n",
"msg_date": "Mon, 09 May 2005 14:17:17 +1000",
"msg_from": "Klint Gore <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": "In article <[email protected]>,\nDan Harris <[email protected]> writes:\n\n> On May 8, 2005, at 8:06 PM, Josh Berkus wrote:\n>> \n>>> If I were to use tsearch2 for full-text indexing, would I need to\n>>> create another table that merges all of my recordtext rows into a\n>>> single 'text' field type?\n>> \n>> No. Read the OpenFTS docs, they are fairly clear on how to set up\n>> a simple\n>> FTS index. (TSearch2 ~~ OpenFTS)\n>> \n>>> If so, this is where I run into problems, as\n>>> my logic also needs to match multiple words in their original order.\n\n> I have been reading the Tsearch2 docs and either I don't understand\n> something or I'm not communicating my situation clearly enough. It\n> seems that Tsearch2 has a concept of \"document\". And, in everything I\n> am reading, they expect your \"document\" to be all contained in a\n> single row. Since my words can be spread across multiple rows, I\n> don't see that Tsearch2 will combine all 'recordtext' row values with\n> the same \"incidentid\" into a single vector. Am I overlooking\n> something in the docs?\n\nAFAICS no, but you could create a separate table containing just the\ndistinct incidentids and the tsearch2 vectors of all recordtexts\nmatching that incidentid. This table would get updated solely by\ntriggers on the original table and would provide a fast way to get all\nincidentids for RED and CORVETTE. The question is: would this reduce\nthe number of rows to check more than filtering on date?\n\n",
"msg_date": "09 May 2005 13:39:30 +0200",
"msg_from": "Harald Fuchs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
},
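A rough sketch of the trigger-maintained summary table Harald describes, with all object names hypothetical. It assumes contrib/tsearch2 and plpgsql are available, and it ignores deletes and concurrent updates for brevity:

-- one row per incident, holding the concatenated record text and its tsvector
CREATE TABLE ea_fti (
    incidentid varchar PRIMARY KEY,
    alltext    text,
    fti        tsvector
);
CREATE INDEX ea_fti_idx ON ea_fti USING gist (fti);

CREATE OR REPLACE FUNCTION ea_fti_refresh() RETURNS trigger AS '
DECLARE
    txt text;
BEGIN
    -- rebuild the summary row for this incident from all of its ea rows
    SELECT array_to_string(ARRAY(
               SELECT recordtext FROM ea
               WHERE incidentid = NEW.incidentid), '' '') INTO txt;
    DELETE FROM ea_fti WHERE incidentid = NEW.incidentid;
    INSERT INTO ea_fti (incidentid, alltext, fti)
    VALUES (NEW.incidentid, txt, to_tsvector(''default'', txt));
    RETURN NEW;
END' LANGUAGE plpgsql;

CREATE TRIGGER ea_fti_trig AFTER INSERT OR UPDATE ON ea
    FOR EACH ROW EXECUTE PROCEDURE ea_fti_refresh();

-- which incidents mention both words anywhere among their records?
SELECT incidentid FROM ea_fti
WHERE fti @@ to_tsquery('default', 'RED & CORVETTE');

Whether this beats simply filtering on date first depends, as Harald says, on how selective the keywords are compared with the date range.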
{
"msg_contents": "Quoting Russell Smith <[email protected]>:\n\n> On Mon, 9 May 2005 11:49 am, Dan Harris wrote:\n> > On May 8, 2005, at 6:51 PM, Russell Smith wrote:\n> [snip]\n> > select distinct em.incidentid, ea.recordtext as retdata, eg.long, eg.lat\n> > FROM em\n> > JOIN ea ON em.incidentid = ea.incidentid --- slight paraphrase /Mischa.\n> > AND em.entrydate between '2005-1-1' and '2005-5-9'\n> > AND ea.recordtext like '%RED%' AND ea.recordtext like\n'%CORVETTE%'\n\n> > Here's the situation:\n> > Due to the format of the systems with which I integrate ( I have no\n> > control over these formats ), we will get these 'recordtext' values one\n> > line at a time, accumulating over time. The only way I can find to\n> > make this work is to insert a new record for each line. The problem\n> > is, that when someone wants to search multiple keywords, they expect\n> > these words to be matched across multiple records with a given incident\n> > number.\n> >\n> > For a very simple example:\n> >\n> > IncidentID Date Recordtext\n> > -------------- -------------\n> > 11111 2005-05-01 14:21 blah blah blah RED blah blah\n> > 2222 2005-05-01 14:23 not what we are looking for\n> > 11111 2005-05-02 02:05 blah CORVETTE blah blah\n> >\n\nselect em.incidentid, ea.recordtest as retdata\nfrom em\njoin ( -- equivalent to \"where incidentid in (...)\", sometimes faster.\n select incidentid\n from em join ea using (incidentid)\n where em.entrydate between '2005-1-1' and '2005-5-9'\n group by incidentid\n having 1 = min(case when recordtest like '%RED%' then 1 end)\n and 1 = min(case when recordtest like '%CORVETTE%' then 1 end)\n ) as X using (incidentid);\n\n\n",
"msg_date": "Mon, 9 May 2005 10:31:58 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
},
{
"msg_contents": "Hi Dan,\n\nI tried to understand your query, but I couldn't get my understanding of \nthe query and your description in sync.\n\nWhy do you use sub selects? Wouldn't a simple \"recordtext like '%RED%'\" \ndo the trick too?\n\nYou combine all your where conditions with and. To me this looks like \nyou get only rows with RED and CORVETTE.\n\n From your description I would rewrite the query as\n\nexplain analyze\nselect distinct\n em.incidentid,\n ea.recordtext as retdata,\n eg.long,\n eg.lat\nfrom\n ea join em using(incidentid) join eg using(incidentid)\nwhere\n em.entrydate >= '2005-1-1 00:00'::date\n and em.entrydate <= '2005-5-9 00:00'::date\n and ( recordtext like '%RED%' or recordtext like '%CORVETTE%' )\norder by em.entrydate\n\n\nThat should give you all rows containing one of the words.\nDoes it work?\nIs is faster? Is it fast enough?\n\nUlrich\n",
"msg_date": "Wed, 11 May 2005 14:47:31 +0200",
"msg_from": "Ulrich Wisser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query tuning help"
}
] |
[
{
"msg_contents": "Greetings,\n\nWe are working on speeding up the queries by creating indexes. We have \nqueries with searching criteria such as \"select ... where *col1='...'*\". \nThis is a simple query with only \"=\" operation. As a result I setup hash \nindex on column \"col1\". While, in postgreSQL 8 doc, it is wirttern:\n\n*Note: * Testing has shown PostgreSQL's hash indexes to perform no \nbetter than B-tree indexes, and the index size and build time for hash \nindexes is much worse. For these reasons, hash index use is presently \ndiscouraged.\n\nMay I know for simple \"=\" operation query, for \"Hash index\" vs. \"B-tree\" \nindex, which can provide better performance please?\n\nThanks,\nEmi\n",
"msg_date": "Mon, 09 May 2005 09:40:35 -0400",
"msg_from": "Ying Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"Hash index\" vs. \"b-tree index\" (PostgreSQL 8.0)"
},
{
"msg_contents": "Ying Lu wrote:\n> May I know for simple \"=\" operation query, for \"Hash index\" vs. \"B-tree\" \n> index, which can provide better performance please?\n\nI don't think we've found a case in which the hash index code \noutperforms B+-tree indexes, even for \"=\". The hash index code also has \na number of additional issues: for example, it isn't WAL safe, it has \nrelatively poor concurrency, and creating a hash index is significantly \nslower than creating a b+-tree index.\n\n-Neil\n",
"msg_date": "Tue, 10 May 2005 00:36:42 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Hash index\" vs. \"b-tree index\" (PostgreSQL 8.0)"
},
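For anyone who would rather verify this on their own data than trust the documentation, both index types can be built side by side and compared; a minimal sketch (table, column and index names are illustrative, and psql's \timing is handy for comparing the CREATE INDEX build times as well):

CREATE TABLE t (col1 text);
-- ... load representative data ...
CREATE INDEX t_col1_btree ON t (col1);
CREATE INDEX t_col1_hash ON t USING hash (col1);
ANALYZE t;

-- the planner will pick only one of them; to time each separately, drop the
-- other inside a transaction and roll back afterwards:
-- BEGIN; DROP INDEX t_col1_btree; EXPLAIN ANALYZE SELECT ...; ROLLBACK;
EXPLAIN ANALYZE SELECT * FROM t WHERE col1 = 'abc';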
{
"msg_contents": "On 5/9/05, Neil Conway <[email protected]> wrote:\n> I don't think we've found a case in which the hash index code\n> outperforms B+-tree indexes, even for \"=\". The hash index code also has\n> a number of additional issues: for example, it isn't WAL safe, it has\n> relatively poor concurrency, and creating a hash index is significantly\n> slower than creating a b+-tree index.\n\nThis being the case, is there ever ANY reason for someone to use it? \nIf not, then shouldn't we consider deprecating it and eventually\nremoving it. This would reduce complexity, I think.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n",
"msg_date": "Mon, 9 May 2005 11:27:54 -0400",
"msg_from": "Christopher Petrilli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL 8.0)"
},
{
"msg_contents": "Christopher Petrilli wrote:\n> This being the case, is there ever ANY reason for someone to use it?\n\nWell, someone might fix it up at some point in the future. I don't think \nthere's anything fundamentally wrong with hash indexes, it is just that \nthe current implementation is a bit lacking.\n\n> If not, then shouldn't we consider deprecating it and eventually\n> removing it.\n\nI would personally consider the code to be deprecated already.\n\nI don't think there is much to be gained b removing it: the code is \npretty isolated from the rest of the tree, and (IMHO) not a significant \nmaintenance burden.\n\n-Neil\n",
"msg_date": "Tue, 10 May 2005 01:34:57 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "On Tue, May 10, 2005 at 01:34:57AM +1000, Neil Conway wrote:\n> Christopher Petrilli wrote:\n> >This being the case, is there ever ANY reason for someone to use it?\n> \n> Well, someone might fix it up at some point in the future. I don't think \n> there's anything fundamentally wrong with hash indexes, it is just that \n> the current implementation is a bit lacking.\n> \n> >If not, then shouldn't we consider deprecating it and eventually\n> >removing it.\n> \n> I would personally consider the code to be deprecated already.\n> \n> I don't think there is much to be gained b removing it: the code is \n> pretty isolated from the rest of the tree, and (IMHO) not a significant \n> maintenance burden.\n\nThat may be true, but it's also a somewhat 'developer-centric' view. ;)\n\nHaving indexes that people shouldn't be using does add confusion for\nusers, and presents the opportunity for foot-shooting. I don't know what\npurpose they once served, but if there's no advantage to them they\nshould be officially depricated and eventually removed. Even if there is\nsome kind of advantage (would they possibly speed up hash joins?), if\nthere's no plans to fix them they should still be removed. If someone\never really wanted to do something with, the code would still be in CVS.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 9 May 2005 11:34:23 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> Having indexes that people shouldn't be using does add confusion for\n> users, and presents the opportunity for foot-shooting.\n\nEmitting a warning/notice on hash-index creation is something I've \nsuggested in the past -- that would be fine with me.\n\n> Even if there is some kind of advantage (would they possibly speed up\n> hash joins?)\n\nNo, hash joins and hash indexes are unrelated.\n\n-Neil\n",
"msg_date": "Tue, 10 May 2005 02:38:41 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "On Tue, May 10, 2005 at 02:38:41AM +1000, Neil Conway wrote:\n> Jim C. Nasby wrote:\n> >Having indexes that people shouldn't be using does add confusion for\n> >users, and presents the opportunity for foot-shooting.\n> \n> Emitting a warning/notice on hash-index creation is something I've \n> suggested in the past -- that would be fine with me.\n\nProbably not a bad idea.\n\n> >Even if there is some kind of advantage (would they possibly speed up\n> >hash joins?)\n> \n> No, hash joins and hash indexes are unrelated.\n\nI know they are now, but does that have to be the case? Like I said, I\ndon't know the history, so I don't know why we even have them to begin\nwith.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Mon, 9 May 2005 11:51:33 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "> *Note: * Testing has shown PostgreSQL's hash indexes to perform no \n> better than B-tree indexes, and the index size and build time for hash \n> indexes is much worse. For these reasons, hash index use is presently \n> discouraged.\n>\n> May I know for simple \"=\" operation query, for \"Hash index\" vs. \n> \"B-tree\" index, which can provide better performance please?\n\nIf you trust the documentation use a b-tree. If you don't trust the \ndocumentation do your own tests.\n\nplease don't cross post.\n\n",
"msg_date": "Mon, 9 May 2005 23:38:47 +0100",
"msg_from": "David Roussel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"Hash index\" vs. \"b-tree index\" (PostgreSQL 8.0)"
},
{
"msg_contents": "Jim C. Nasby wrote:\n >> No, hash joins and hash indexes are unrelated.\n> I know they are now, but does that have to be the case?\n\nI mean, the algorithms are fundamentally unrelated. They share a bit of \ncode such as the hash functions themselves, but they are really solving \ntwo different problems (disk based indexing with (hopefully) good \nconcurrency and WAL logging vs. in-memory joins via hashing with spill \nto disk if needed).\n\n> Like I said, I don't know the history, so I don't know why we even\n> have them to begin with.\n\nAs I said, the idea of using hash indexes for better performance on \nequality scans is perfectly valid, it is just the implementation that \nneeds work.\n\n-Neil\n",
"msg_date": "Tue, 10 May 2005 10:14:11 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Tue, May 10, 2005 at 02:38:41AM +1000, Neil Conway wrote:\n> > Jim C. Nasby wrote:\n> > >Having indexes that people shouldn't be using does add confusion for\n> > >users, and presents the opportunity for foot-shooting.\n> > \n> > Emitting a warning/notice on hash-index creation is something I've \n> > suggested in the past -- that would be fine with me.\n> \n> Probably not a bad idea.\n\nAgreed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 9 May 2005 22:20:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> Jim C. Nasby wrote:\n>>> No, hash joins and hash indexes are unrelated.\n>> I know they are now, but does that have to be the case?\n\n> I mean, the algorithms are fundamentally unrelated. They share a bit of \n> code such as the hash functions themselves, but they are really solving \n> two different problems\n\nIn fact, up till fairly recently they didn't even share the hash\nfunctions. Which was a bug not a feature, but the fact remains ---\nthere's not a lot of commonality.\n\n>> Like I said, I don't know the history, so I don't know why we even\n>> have them to begin with.\n\nI think it's largely because some Berkeley grad student had a need to\nimplement hash indexes for academic reasons ;-)\n\n> As I said, the idea of using hash indexes for better performance on \n> equality scans is perfectly valid, it is just the implementation that \n> needs work.\n\nI was thinking about that earlier today. It seems to me there is a\nwindow within which hash indexes are theoretically superior, but it\nmight be pretty narrow. The basic allure of a hash index is that you\nlook at the search key, do some allegedly-trivial computations, and go\ndirectly to the relevant index page(s); whereas a btree index requires\ndescending through several upper levels of index pages to reach the\ntarget leaf page. On the other hand, once you reach the target index\npage, a hash index has no better method than linear scan through all\nthe page's index entries to find the actually wanted key(s); in fact you\nhave to search all the pages in that index bucket. A btree index can\nuse binary search to find the relevant items within the page.\n\nSo it strikes me that important parameters include the index entry size\nand the number of entries matching any particular key value.\n\nbtree will probably win for smaller keys, on two grounds: it will have\nfewer tree levels to descend through, because of higher fan-out, and it\nwill be much more efficient at finding the target entries within the\ntarget page when there are many entries per page. (As against this,\nit will have to work harder at each upper tree page to decide where to\ndescend to --- but I think that's a second-order consideration.)\n\nhash will tend to win as the number of duplicate keys increases, because\nits relative inefficiency at finding the matches within a particular\nbucket will become less significant. (The ideal situation for a hash\nindex is to have only one actual key value per bucket. You can't really\nafford to store only one index entry per bucket, because of the sheer\nI/O volume that would result, so you need multiple entries that will all\nbe responsive to your search.) (This also brings up the thought that\nit might be interesting to support hash buckets smaller than a page ...\nbut I don't know how to make that work in an adaptive fashion.)\n\nI suspect that people haven't looked hard for a combination of these\nparameters within which hash can win. Of course the real question for\nus is whether the window is wide enough to justify the maintenance\neffort for hash.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 00:10:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n> On the other hand, once you reach the target index page, a hash index\n> has no better method than linear scan through all the page's index\n> entries to find the actually wanted key(s)\n\nI wonder if it would be possible to store the keys in a hash bucket in \nsorted order, provided that the necessary ordering is defined for the \nindex keys -- considering the ubiquity of b+-trees in Postgres, the \nchances of an ordering being defined are pretty good. Handling overflow \npages would be tricky: perhaps we could store the entries in a given \npage in sorted order, but not try to maintain that order for the hash \nbucket as a whole. This would mean we'd need to do a binary search for \neach page of the bucket, but that would still be a win.\n\n-Neil\n",
"msg_date": "Tue, 10 May 2005 14:25:06 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> Tom Lane wrote:\n>> On the other hand, once you reach the target index page, a hash index\n>> has no better method than linear scan through all the page's index\n>> entries to find the actually wanted key(s)\n\n> I wonder if it would be possible to store the keys in a hash bucket in \n> sorted order, provided that the necessary ordering is defined for the \n> index keys -- considering the ubiquity of b+-trees in Postgres, the \n> chances of an ordering being defined are pretty good.\n\nI have a gut reaction against that: it makes hash indexes fundamentally\nsubservient to btrees. We shouldn't bring in concepts that are outside\nthe basic opclass abstraction.\n\nHowever: what about storing the things in hashcode order? Ordering uint32s\ndoesn't seem like any big conceptual problem.\n\nI think that efficient implementation of this would require explicitly\nstoring the hash code for each index entry, which we don't do now, but\nit seems justifiable on multiple grounds --- besides this point, the\nsearch could avoid doing the data-type-specific comparison if the hash\ncode isn't equal.\n\nThere is evidence in the code that indexes used to store more info than\nwhat we now think of as a \"standard\" index tuple. I am not sure when\nthat went away or what it'd cost to bring it back, but it seems worth\nlooking into.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 00:54:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "Tom Lane wrote:\n> I have a gut reaction against that: it makes hash indexes fundamentally\n> subservient to btrees.\n\nI wouldn't say \"subservient\" -- if there is no ordering defined for the \nindex key, we just do a linear scan.\n\n> However: what about storing the things in hashcode order? Ordering uint32s\n> doesn't seem like any big conceptual problem.\n\nHmm, my memory of the hash code is a bit fuzzy. Do I understand correctly?\n\n- we only use some of the bits in the hash to map from the hash of a key \nto its bucket\n\n- therefore within a bucket, we can still distinguish most of the \nnon-equal tuples from one another by comparing their full hash values\n\n- if we keep the entries in a bucket (or page, I guess -- per earlier \nmail) sorted by full hash value, we can use that to perform a binary search\n\nSounds like a good idea to me. How likely is it that the hash index will \nbe sufficiently large that we're using most of the bits in the hash just \nto map hash values to buckets, so that the binary search won't be very \neffective? (At this point many of the distinct keys in each bucket will \nbe full-on hash collisions, although sorting by the key values \nthemselves would still be effective.)\n\n> I think that efficient implementation of this would require explicitly\n> storing the hash code for each index entry, which we don't do now, but\n> it seems justifiable on multiple grounds --- besides this point, the\n> search could avoid doing the data-type-specific comparison if the hash\n> code isn't equal.\n\nAnother benefit is that it would speed up page splits -- there would be \nno need to rehash all the keys in a bucket when doing the split.\n\n-Neil\n",
"msg_date": "Tue, 10 May 2005 15:29:48 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> However: what about storing the things in hashcode order? Ordering uint32s\n> doesn't seem like any big conceptual problem.\n> \n> I think that efficient implementation of this would require explicitly\n> storing the hash code for each index entry, which we don't do now, but\n> it seems justifiable on multiple grounds --- besides this point, the\n> search could avoid doing the data-type-specific comparison if the hash\n> code isn't equal.\n\nIt seems that means doubling the size of the hash index. That's a pretty big\ni/o to cpu tradeoff. \n\nWhat if the hash index stored *only* the hash code? That could be useful for\nindexing large datatypes that would otherwise create large indexes. A good\nhash function should have a pretty low collision rate anyways so the\noccasional extra i/o should more than be outweighed by the decrease in i/o\nneeded to use the index.\n\n-- \ngreg\n\n",
"msg_date": "10 May 2005 02:12:17 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> I think that efficient implementation of this would require explicitly\n>> storing the hash code for each index entry,\n\n> It seems that means doubling the size of the hash index. That's a pretty big\n> i/o to cpu tradeoff. \n\nHardly. The minimum possible size of a hash entry today is 8 bytes\nheader plus 4 bytes datum, plus there's a 4-byte line pointer to factor\nin. So under the most pessimistic assumptions, storing the hash code\nwould add 25% to the size. (On MAXALIGN=8 hardware, it might cost you\nnothing at all.)\n\n> What if the hash index stored *only* the hash code? That could be useful for\n> indexing large datatypes that would otherwise create large indexes.\n\nHmm, that could be a thought.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 09:53:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "On Tue, May 10, 2005 at 10:14:11AM +1000, Neil Conway wrote:\n> Jim C. Nasby wrote:\n> >> No, hash joins and hash indexes are unrelated.\n> >I know they are now, but does that have to be the case?\n> \n> I mean, the algorithms are fundamentally unrelated. They share a bit of \n> code such as the hash functions themselves, but they are really solving \n> two different problems (disk based indexing with (hopefully) good \n> concurrency and WAL logging vs. in-memory joins via hashing with spill \n> to disk if needed).\n\nWell, in a hash-join right now you normally end up feeding at least one\nside of the join with a seqscan. Wouldn't it speed things up\nconsiderably if you could look up hashes in the hash index instead? That\nway you can eliminate going to the heap for any hashes that match. Of\ncourse, if limited tuple visibility info was added to hash indexes\n(similar to what I think is currently happening to B-tree's), many of\nthe heap scans could be eliminated as well. A similar method could also\nbe used for hash aggregates, assuming they use the same hash.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 10 May 2005 10:32:45 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "On Tue, May 10, 2005 at 12:10:57AM -0400, Tom Lane wrote:\n> be responsive to your search.) (This also brings up the thought that\n> it might be interesting to support hash buckets smaller than a page ...\n> but I don't know how to make that work in an adaptive fashion.)\n\nIIRC, other databases that support hash indexes also allow you to define\nthe bucket size, so it might be a good start to allow for that. DBA's\nusually have a pretty good idea of what a table will look like in\nproduction, so if there's clear documentation on the effect of bucket\nsize a good DBA should be able to make a good decision.\n\nWhat's the challange to making it adaptive, comming up with an algorithm\nthat gives you the optimal bucket size (which I would think there's\nresearch on...) or allowing the index to accommodate different bucket\nsizes existing in the index at once? (Presumably you don't want to\nre-write the entire index every time it looks like a different bucket\nsize would help.)\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 10 May 2005 10:39:27 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> What's the challange to making it adaptive, comming up with an algorithm\n> that gives you the optimal bucket size (which I would think there's\n> research on...) or allowing the index to accommodate different bucket\n> sizes existing in the index at once? (Presumably you don't want to\n> re-write the entire index every time it looks like a different bucket\n> size would help.)\n\nExactly. That's (a) expensive and (b) really hard to fit into the WAL\nparadigm --- I think we could only handle it as a REINDEX. So if it\nwere adaptive at all I think we'd have to support multiple bucket sizes\nexisting simultaneously in the index, and I do not see a good way to do\nthat.\n\nAllowing a bucket size to be specified at CREATE INDEX doesn't seem out\nof line though. We'd have to think up a scheme for index-AM-specific\nindex parameters ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 11:49:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "On Tue, May 10, 2005 at 11:49:50AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > What's the challange to making it adaptive, comming up with an algorithm\n> > that gives you the optimal bucket size (which I would think there's\n> > research on...) or allowing the index to accommodate different bucket\n> > sizes existing in the index at once? (Presumably you don't want to\n> > re-write the entire index every time it looks like a different bucket\n> > size would help.)\n> \n> Exactly. That's (a) expensive and (b) really hard to fit into the WAL\n> paradigm --- I think we could only handle it as a REINDEX. So if it\n> were adaptive at all I think we'd have to support multiple bucket sizes\n> existing simultaneously in the index, and I do not see a good way to do\n> that.\n\nI'm not really familiar enough with hash indexes to know if this would\nwork, but if the maximum bucket size was known you could use that to\ndetermine a maximum range of buckets to look at. In some cases, that\nrange would include only one bucket, otherwise it would be a set of\nbuckets. If you found a set of buckets, I think you could then just go\nto the specific one you need.\n\nIf we assume that the maximum bucket size is one page it becomes more\nrealistic to take an existing large bucket and split it into several\nsmaller ones. This could be done on an update to the index page, or a\nbackground process could handle it.\n\nIn any case, should this go on the TODO list?\n\n> Allowing a bucket size to be specified at CREATE INDEX doesn't seem out\n> of line though. We'd have to think up a scheme for index-AM-specific\n> index parameters ...\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 10 May 2005 12:26:52 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> > What if the hash index stored *only* the hash code? That could be useful for\n> > indexing large datatypes that would otherwise create large indexes.\n> \n> Hmm, that could be a thought.\n\nHm, if you go this route of having hash indexes store tuples ordered by hash\ncode and storing the hash code in the index, then it seems hash indexes become\njust a macro for a btree index of HASH(index columns). \n\nI'm not saying that to criticize this plan. In fact I think that captures most\n(though not all) of what a hash index should be. \n\nIt would be pretty useful. In fact if it isn't how hash indexes are\nimplemented then it might be useful to provide a user visible hash(ROW)\nfunction that allows creating such indexes as functional indexes. Though\nhiding it would make the SQL simpler.\n\n-- \ngreg\n\n",
"msg_date": "10 May 2005 13:35:59 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
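Something close to Greg's "btree over HASH(index columns)" idea can already be approximated with an expression index; a sketch, using md5() as a stand-in hash function and hypothetical names:

-- index a hash of a large column instead of the column itself
CREATE INDEX docs_body_md5 ON docs (md5(body));

-- the query must repeat the indexed expression so the planner can use it;
-- re-checking the original column guards against hash collisions
SELECT *
FROM docs
WHERE md5(body) = md5('...search value...')
  AND body = '...search value...';

This keeps the index small for wide values, at the cost of losing range and ordering queries, which is essentially the trade-off hash indexes aim at.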
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Well, in a hash-join right now you normally end up feeding at least one\n> side of the join with a seqscan. Wouldn't it speed things up\n> considerably if you could look up hashes in the hash index instead?\n\nThat's called a \"nestloop with inner index scan\", not a hash join.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 14:07:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "Quoting \"Jim C. Nasby\" <[email protected]>:\n\n> I'm not really familiar enough with hash indexes to know if this\n> would\n> work, but if the maximum bucket size was known you could use that to\n> determine a maximum range of buckets to look at. In some cases, that\n> range would include only one bucket, otherwise it would be a set of\n> buckets. If you found a set of buckets, I think you could then just\n> go\n> to the specific one you need.\n> \n> If we assume that the maximum bucket size is one page it becomes\n> more\n> realistic to take an existing large bucket and split it into several\n> smaller ones. This could be done on an update to the index page, or\n> a\n> background process could handle it.\n> \n> In any case, should this go on the TODO list?\n> \n> > Allowing a bucket size to be specified at CREATE INDEX doesn't seem\n> out\n> > of line though. We'd have to think up a scheme for\n> index-AM-specific\n> > index parameters ...\n> -- \n> Jim C. Nasby, Database Consultant [email protected] \n> Give your computer some brain candy! www.distributed.net Team #1828\n\nGoogle \"dynamic hash\" or \"linear hash\". It takes care of not needing to\nhave varying bucket sizes.\n\nHash indexes are useful if you ALWAYS require disk access; they behave\nlike worst-case random cache-thrash tests. That's probably why dbs have\ngravitated toward tree indexes instead. On the other hand, there's more\n(good) to hashing than initially meets the eye.\n\nDynamic multiway hashing has come a long way from just splicing the bits\ntogether from multiple columns' hash values. If you can lay your hands\non Tim Merrett's old text \"Relational Information Systems\", it's an\neye-opener. Picture an efficient terabyte spreadsheet.\n\nFor one thing, unlike a btree, a multicolumn hash is symmetric: it\ndoesn't matter which column(s) you do not specify in a partial match.\n\nFor another, a multiway hash is useful for much lower selectivity than a\nbtree. I built such indexes for OLAP cubes, and some dimensions were\nonly 10 elements wide. At the point where btree indexing becomes worse\nthan seqscan, a multiway hash tells you which 10% of the disk to scan.\n\n",
"msg_date": "Tue, 10 May 2005 11:12:45 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n>>> What if the hash index stored *only* the hash code? That could be useful for\n>>> indexing large datatypes that would otherwise create large indexes.\n>> \n>> Hmm, that could be a thought.\n\n> Hm, if you go this route of having hash indexes store tuples ordered by hash\n> code and storing the hash code in the index, then it seems hash indexes become\n> just a macro for a btree index of HASH(index columns). \n\nNo, not at all, because searching such an index will require a tree\ndescent, thus negating the one true advantage of hash indexes. I see\nthe potential value of sorting by hashcode within an individual page,\nbut that doesn't mean we should do the same across the whole index.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 15:50:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "Quoting \"Jim C. Nasby\" <[email protected]>:\n\n> Well, in a hash-join right now you normally end up feeding at least\n> one\n> side of the join with a seqscan. Wouldn't it speed things up\n> considerably if you could look up hashes in the hash index instead?\n\nYou might want to google on \"grace hash\" and \"hybrid hash\".\n\nThe PG hash join is the simplest possible: build a hash table in memory,\nand match an input stream against it.\n\n*Hybrid hash* is where you spill the hash to disk in a well-designed\nway. Instead of thinking of it as building a hash table in memory, think\nof it as partitioning one input; if some or all of it fits in memory,\nall the better. The boundary condition is the same. \n\nThe real wizard of hybrid hash has to be Goetz Graefe, who sadly has now\njoined the MS Borg. He demonstrated that for entire-table joins, hybrid\nhash completely dominates sort-merge. MSSQL now uses what he developed\nas an academic, but I don't know what the patent state is.\n\n\"Grace hash\" is the original implementation of hybrid hash:\n Kitsuregawa, M., Tanaka, H., and Moto-oka, T. (1984).\n Architecture and Performance of Relational Algebra Machine Grace. \n\n\n",
"msg_date": "Tue, 10 May 2005 14:35:58 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "If the original paper was published in 1984, then it's been more than 20\nyears. Any potential patents would already have expired, no?\n\n-- Mark Lewis\n\nOn Tue, 2005-05-10 at 14:35, Mischa Sandberg wrote:\n> Quoting \"Jim C. Nasby\" <[email protected]>:\n> \n> > Well, in a hash-join right now you normally end up feeding at least\n> > one\n> > side of the join with a seqscan. Wouldn't it speed things up\n> > considerably if you could look up hashes in the hash index instead?\n> \n> You might want to google on \"grace hash\" and \"hybrid hash\".\n> \n> The PG hash join is the simplest possible: build a hash table in memory,\n> and match an input stream against it.\n> \n> *Hybrid hash* is where you spill the hash to disk in a well-designed\n> way. Instead of thinking of it as building a hash table in memory, think\n> of it as partitioning one input; if some or all of it fits in memory,\n> all the better. The boundary condition is the same. \n> \n> The real wizard of hybrid hash has to be Goetz Graefe, who sadly has now\n> joined the MS Borg. He demonstrated that for entire-table joins, hybrid\n> hash completely dominates sort-merge. MSSQL now uses what he developed\n> as an academic, but I don't know what the patent state is.\n> \n> \"Grace hash\" is the original implementation of hybrid hash:\n> Kitsuregawa, M., Tanaka, H., and Moto-oka, T. (1984).\n> Architecture and Performance of Relational Algebra Machine Grace. \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Tue, 10 May 2005 15:46:04 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Mischa Sandberg <[email protected]> writes:\n> The PG hash join is the simplest possible: build a hash table in memory,\n> and match an input stream against it.\n\n> *Hybrid hash* is where you spill the hash to disk in a well-designed\n> way. Instead of thinking of it as building a hash table in memory, think\n> of it as partitioning one input; if some or all of it fits in memory,\n> all the better. The boundary condition is the same. \n\n[ raised eyebrow... ] Apparently you've not read the code. It's been\nhybrid hashjoin since we got it from Berkeley. Probably not the best\npossible implementation of the concept, but we do understand about spill\nto disk.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 18:56:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "Quoting Tom Lane <[email protected]>:\n\n> Mischa Sandberg <[email protected]> writes:\n> > The PG hash join is the simplest possible: build a hash table in\n> memory, and match an input stream against it.\n> \n> [ raised eyebrow... ] Apparently you've not read the code. It's\n> been hybrid hashjoin since we got it from Berkeley. Probably not the\n> best possible implementation of the concept, but we do \n> understand about spill to disk.\n\nApologies. I stopped reading around line 750 (PG 8.0.1) in\nsrc/backend/executor/nodeHashjoin.c\n\nif (!node->hj_hashdone)\n{\n ....\n /*\n * execute the Hash node, to build the hash table\n */\n hashNode->hashtable = hashtable;\n (void) ExecProcNode((PlanState *) hashNode);\n ...\n\nand missed the comment:\n /*\n * Open temp files for outer batches,\n */\n\nWill quietly go and read twice, talk once. \n\n",
"msg_date": "Tue, 10 May 2005 16:24:01 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "\nTom Lane <[email protected]> writes:\n\n> No, not at all, because searching such an index will require a tree\n> descent, thus negating the one true advantage of hash indexes. \n\nThe hash index still has to do a tree descent, it just has a larger branching\nfactor than the btree index.\n\nbtree indexes could have a special case hack to optionally use a large\nbranching factor for the root node, effectively turning them into hash\nindexes. That would be useful for cases where you know the values will be very\nevenly distributed and won't need to scan ranges, ie, when you're indexing a\nhash function.\n\n-- \ngreg\n\n",
"msg_date": "10 May 2005 19:56:15 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Quoting Mark Lewis <[email protected]>:\n\n> If the original paper was published in 1984, then it's been more than\n> 20 years. Any potential patents would already have expired, no?\n\nDon't know, but the idea is pervasive among different vendors ...\nperhaps that's a clue. \n\nAnd having now read beyond the start of ExecHashJoin(), I can see that\nPG does indeed implement Grace hash; and the implementation is nice and\nclean.\n\nIf there were room for improvement, (and I didn't see it in the source)\nit would be the logic to:\n\n- swap inner and outer inputs (batches) when the original inner turned\nout to be too large for memory, and the corresponding outer did not. If\nyou implement that anyway (complicates the loops) then it's no trouble\nto just hash the smaller of the two, every time; saves some CPU.\n\n- recursively partition batches where both inner and outer input batch\nends up being too large for memory, too; or where the required number of\nbatch output buffers alone is too large for working RAM. This is only\nfor REALLY big inputs.\n\nNote that you don't need a bad hash function to get skewed batch sizes;\nyou only need a skew distribution of the values being hashed.\n\n\n",
"msg_date": "Tue, 10 May 2005 17:14:26 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "\nIs there a TODO anywhere in this discussion? If so, please let me know.\n\n---------------------------------------------------------------------------\n\nMischa Sandberg wrote:\n> Quoting Mark Lewis <[email protected]>:\n> \n> > If the original paper was published in 1984, then it's been more than\n> > 20 years. Any potential patents would already have expired, no?\n> \n> Don't know, but the idea is pervasive among different vendors ...\n> perhaps that's a clue. \n> \n> And having now read beyond the start of ExecHashJoin(), I can see that\n> PG does indeed implement Grace hash; and the implementation is nice and\n> clean.\n> \n> If there were room for improvement, (and I didn't see it in the source)\n> it would be the logic to:\n> \n> - swap inner and outer inputs (batches) when the original inner turned\n> out to be too large for memory, and the corresponding outer did not. If\n> you implement that anyway (complicates the loops) then it's no trouble\n> to just hash the smaller of the two, every time; saves some CPU.\n> \n> - recursively partition batches where both inner and outer input batch\n> ends up being too large for memory, too; or where the required number of\n> batch output buffers alone is too large for working RAM. This is only\n> for REALLY big inputs.\n> \n> Note that you don't need a bad hash function to get skewed batch sizes;\n> you only need a skew distribution of the values being hashed.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 May 2005 21:51:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Quoting Bruce Momjian <[email protected]>:\n\n> \n> Is there a TODO anywhere in this discussion? If so, please let me\n> know.\n> \n\nUmm... I don't think so. I'm not clear on what TODO means yet. 'Up for\nconsideration'? If a \"TODO\" means committing to do, I would prefer to\nfollow up on a remote-schema (federated server) project first.\n...\n\n> > If there were room for improvement, (and I didn't see it in the\n> source)\n> > it would be the logic to:\n> > \n> > - swap inner and outer inputs (batches) when the original inner\n> turned\n> > out to be too large for memory, and the corresponding outer did\n> not. If\n> > you implement that anyway (complicates the loops) then it's no\n> trouble\n> > to just hash the smaller of the two, every time; saves some CPU.\n> > \n> > - recursively partition batches where both inner and outer input\n> batch\n> > ends up being too large for memory, too; or where the required\n> number of\n> > batch output buffers alone is too large for working RAM. This is\n> only\n> > for REALLY big inputs.\n> > \n> > Note that you don't need a bad hash function to get skewed batch\n> sizes;\n> > you only need a skew distribution of the values being hashed.\n\n\n",
"msg_date": "Tue, 10 May 2005 19:02:02 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Mischa Sandberg wrote:\n> Quoting Bruce Momjian <[email protected]>:\n> \n> > \n> > Is there a TODO anywhere in this discussion? If so, please let me\n> > know.\n> > \n> \n> Umm... I don't think so. I'm not clear on what TODO means yet. 'Up for\n> consideration'? If a \"TODO\" means committing to do, I would prefer to\n> follow up on a remote-schema (federated server) project first.\n\nTODO means it is a change that most people think would be a good idea. \nIt is not a committment from anyone to actually do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 10 May 2005 22:03:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Quoting Bruce Momjian <[email protected]>:\n\n> Mischa Sandberg wrote:\n> > Quoting Bruce Momjian <[email protected]>:\n> > > Is there a TODO anywhere in this discussion? If so, please let\n> me\n> > > know.\n> > > \n> > \n> > Umm... I don't think so. I'm not clear on what TODO means yet. 'Up\n> for\n> > consideration'? If a \"TODO\" means committing to do, I would prefer\n> to\n> > follow up on a remote-schema (federated server) project first.\n> \n> TODO means it is a change that most people think would be a good\n> idea. \n> It is not a committment from anyone to actually do it.\n\nI think there has not been enough commentary from \"most people\".\n\n\n",
"msg_date": "Tue, 10 May 2005 19:05:38 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Is there a TODO anywhere in this discussion? If so, please let me know.\n\nThere are a couple:\n\n- consider changing hash indexes to keep the entries in a hash bucket \nsorted, to allow a binary search rather than a linear scan\n\n- consider changing hash indexes to store each key's hash value in \naddition to or instead of the key value.\n\nYou should probably include a pointer to this discussion as well.\n\n(I'd like to take a look at implementing these if I get a chance.)\n\n-Neil\n",
"msg_date": "Wed, 11 May 2005 12:14:22 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Tom Lane <[email protected]> writes:\n>> No, not at all, because searching such an index will require a tree\n>> descent, thus negating the one true advantage of hash indexes. \n\n> The hash index still has to do a tree descent, it just has a larger branching\n> factor than the btree index.\n\nThere is *no* tree descent in a hash index: you go directly to the\nbucket you want.\n\nIf the bucket spans more than one page, you pay something, but this\nstrikes me as being equivalent to the case of multiple equal keys\nspanning multiple pages in a btree. It works, but it's not the design\ncenter.\n\n> btree indexes could have a special case hack to optionally use a large\n> branching factor for the root node, effectively turning them into hash\n> indexes.\n\nNo, because you'd still have to fetch and search the root node.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 22:17:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n> Greg Stark wrote:\n>> What if the hash index stored *only* the hash code?\n\n> Attached is a WIP patch that implements this.\n\nPerformance?\n\n> I'm posting mainly because I wasn't sure what to do to avoid false \n> positives in the case of hash collisions. In the hash AM code it is \n> somewhat awkward to fetch the pointed-to heap tuple and recheck the \n> scankey.[1] I just did the first thing that came to mind -- I marked all \n> the hash AM opclasses as \"lossy\", so the index qual is rechecked. This \n> works, but suggestions for a better way to do things would be welcome.\n\nAFAICS that's the *only* way to do it.\n\nI disagree completely with the idea of forcing this behavior for all\ndatatypes. It could only be sensible for fairly wide values; you don't\nsave enough to justify the lossiness otherwise.\n\nIt would be interesting to look into whether it could be driven on a\nper-opclass basis. Then you could have, eg, \"text_lossy_hash_ops\"\nas a non-default opclass the DBA could select if he wanted this\nbehavior. (The code could perhaps use the amopreqcheck flag to tell\nit which way to behave.) If that seems unworkable, I'd prefer to see us\nset this up as a new index AM type, which would share a lot of code with\nthe old.\n\n[ BTW, posting patches to pgsql-general seems pretty off-topic. ]\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 May 2005 09:57:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL "
},
{
"msg_contents": "Neil Conway <[email protected]> writes:\n\n> I'm posting mainly because I wasn't sure what to do to avoid false positives in\n> the case of hash collisions. In the hash AM code it is somewhat awkward to\n> fetch the pointed-to heap tuple and recheck the scankey.[1] I just did the\n> first thing that came to mind -- I marked all the hash AM opclasses as \"lossy\",\n> so the index qual is rechecked. This works, but suggestions for a better way to\n> do things would be welcome.\n\nI would have thought that would be the only way worth considering.\n\nConsider for example a query involving two or more hash indexes and the new\nbitmap indexscan plan. You don't want to fetch the tuples if you can eliminate\nthem using one of the other indexes. \n\n-- \ngreg\n\n",
"msg_date": "11 May 2005 09:59:05 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Tom Lane wrote:\n> Performance?\n\nI'll run some benchmarks tomorrow, as it's rather late in my time zone. \nIf anyone wants to post some benchmark results, they are welcome to.\n\n> I disagree completely with the idea of forcing this behavior for all\n> datatypes. It could only be sensible for fairly wide values; you don't\n> save enough to justify the lossiness otherwise.\n\nI think it would be premature to decide about this before we see some \nperformance numbers. I'm not fundamentally opposed, though.\n\n> [ BTW, posting patches to pgsql-general seems pretty off-topic. ]\n\nNot any more than discussing implementation details is :) But your point \nis well taken, I'll send future patches to -patches.\n\n-Neil\n",
"msg_date": "Thu, 12 May 2005 00:14:50 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
},
{
"msg_contents": "Was curious why you pointed out SQL-MED as a SQL-standard approach to\nfederated servers. Always thought of it as covering access to non-SQL\ndata, the way the lo_* interface works; as opposed to meshing compatible\n(to say nothing of identical) SQL servers. Just checked Jim Melton's\nlast word on that, to make sure, too. Is there something beyond that,\nthat I'm missing?\n\nThe approach that made first best sense to me (perhaps from having gone\nthere before) is to leave the SQL syntactically unchanged, and to manage\nfederated relations via pg_ tables and probably procedures. MSSQL and\nSybase went that route. It won't preclude moving to a system embedded in\nthe SQL language. \n\nThe hurdles for federated SQL service are:\n- basic syntax (how to refer to a remote object)\n- connection management and delegated security\n- timeouts and temporary connection failures\n- efficient distributed queries with >1 remote table\n- distributed transactions\n- interserver integrity constraints\n\nSometimes the lines get weird because of opportunistic implementations.\nFor example, for the longest time, MSSQL supported server.db.user.object\nreferences WITHIN STORED PROCEDURES, since the proc engine could hide\nsome primitive connection management. \n\nPG struck me as such a natural for cross-server queries, because\nit keeps everything out in the open, including statistics.\nPG is also well set-up to handle heterogeneous table types,\nand has functions that return rowsets. Nothing needs to be bent out of\nshape syntactically, or in the cross-server interface, to get over the\nhurdles above.\n\nThe fact that queries hence transactions can't span multiple databases\ntells me, PG has a way to go before it can handle dependency on a\ndistributed transaction monitor. \n\nMy 2c.\n\n",
"msg_date": "Wed, 11 May 2005 14:21:22 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Federated PG servers -- Was: Re: [GENERAL] \"Hash index\" vs. \"b-tree\n\tindex\" (PostgreSQL"
},
{
"msg_contents": "\nAdded to TODO:\n\n* Consider sorting hash buckets so entries can be found using a binary\n search, rather than a linear scan\n* In hash indexes, consider storing the hash value with or instead\n of the key itself\n\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> Bruce Momjian wrote:\n> > Is there a TODO anywhere in this discussion? If so, please let me know.\n> \n> There are a couple:\n> \n> - consider changing hash indexes to keep the entries in a hash bucket \n> sorted, to allow a binary search rather than a linear scan\n> \n> - consider changing hash indexes to store each key's hash value in \n> addition to or instead of the key value.\n> \n> You should probably include a pointer to this discussion as well.\n> \n> (I'd like to take a look at implementing these if I get a chance.)\n> \n> -Neil\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Fri, 27 May 2005 18:07:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] \"Hash index\" vs. \"b-tree index\" (PostgreSQL"
}
] |
[
{
"msg_contents": "You also want to consider any whitebox opteron system being on the\ncompatibility list of your storage vendor, as well as RedHat, etc. With\nEMC you can file an RPQ via your sales contacts to get it approved,\nthough not sure how lengthy/painful that process might be, or if it's\ngonna be worth it.\n\nRead the article devoted to the v40z on anandtech.com.\n\nI am also trying to get a quad-Opteron versus the latest quad-XEON from\nDell (6850), but it's hard to justify a difference between a 15K dell\nversus a 30k v40z for a 5-8% performance gain (read the XEON Vs. Opteron\nDatabase comparo on anandtech.com)...\n\nThanks,\nAnjan\n\n\n-----Original Message-----\nFrom: Geoffrey [mailto:[email protected]] \nSent: Sunday, May 08, 2005 10:18 PM\nTo: Mischa Sandberg\nCc: [email protected]\nSubject: Re: [PERFORM] Whence the Opterons?\n\nMischa Sandberg wrote:\n> After reading the comparisons between Opteron and Xeon processors for\nLinux,\n> I'd like to add an Opteron box to our stable of Dells and Sparcs, for\ncomparison.\n> \n> IBM, Sun and HP have their fairly pricey Opteron systems.\n> The IT people are not swell about unsupported purchases off ebay.\n> Anyone care to suggest any other vendors/distributors?\n> Looking for names with national support, so that we can recommend as\nmuch to our\n> customers.\n\nMonarch Computer http://www.monarchcomputer.com/\n\nThey have prebuilt and custom built systems.\n\n-- \n\nUntil later, Geoffrey\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n",
"msg_date": "Mon, 9 May 2005 11:05:58 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "Anjan Dave wrote:\n> You also want to consider any whitebox opteron system being on the\n> compatibility list of your storage vendor, as well as RedHat, etc. With\n> EMC you can file an RPQ via your sales contacts to get it approved,\n> though not sure how lengthy/painful that process might be, or if it's\n> gonna be worth it.\n>\n> Read the article devoted to the v40z on anandtech.com.\n>\n> I am also trying to get a quad-Opteron versus the latest quad-XEON from\n> Dell (6850), but it's hard to justify a difference between a 15K dell\n> versus a 30k v40z for a 5-8% performance gain (read the XEON Vs. Opteron\n> Database comparo on anandtech.com)...\n>\n> Thanks,\n> Anjan\n>\n\n15k vs 30k is indeed a big difference. But also realize that Postgres\nhas a specific benefit to Opterons versus Xeons. The context switching\nstorm happens less on an Opteron for some reason.\n\nI would venture a much greater benefit than 5-8%, more like 10-50%.\n\nJohn\n=:->",
"msg_date": "Mon, 09 May 2005 10:22:03 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
}
] |
[
{
"msg_contents": "Wasn't the context switching issue occurring in specific cases only?\n\nI haven't seen any benchmarks for a 50% performance difference. Neither\nhave I seen any benchmarks of pure disk IO performance of specific\nmodels of Dell vs HP or Sun Opterons.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: John A Meinel [mailto:[email protected]] \nSent: Monday, May 09, 2005 11:22 AM\nTo: Anjan Dave\nCc: Geoffrey; Mischa Sandberg; [email protected]\nSubject: Re: [PERFORM] Whence the Opterons?\n\nAnjan Dave wrote:\n> You also want to consider any whitebox opteron system being on the\n> compatibility list of your storage vendor, as well as RedHat, etc.\nWith\n> EMC you can file an RPQ via your sales contacts to get it approved,\n> though not sure how lengthy/painful that process might be, or if it's\n> gonna be worth it.\n>\n> Read the article devoted to the v40z on anandtech.com.\n>\n> I am also trying to get a quad-Opteron versus the latest quad-XEON\nfrom\n> Dell (6850), but it's hard to justify a difference between a 15K dell\n> versus a 30k v40z for a 5-8% performance gain (read the XEON Vs.\nOpteron\n> Database comparo on anandtech.com)...\n>\n> Thanks,\n> Anjan\n>\n\n15k vs 30k is indeed a big difference. But also realize that Postgres\nhas a specific benefit to Opterons versus Xeons. The context switching\nstorm happens less on an Opteron for some reason.\n\nI would venture a much greater benefit than 5-8%, more like 10-50%.\n\nJohn\n=:->\n\n",
"msg_date": "Mon, 9 May 2005 11:29:55 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "Anjan Dave wrote:\n> Wasn't the context switching issue occurring in specific cases only?\n>\n> I haven't seen any benchmarks for a 50% performance difference. Neither\n> have I seen any benchmarks of pure disk IO performance of specific\n> models of Dell vs HP or Sun Opterons.\n>\n> Thanks,\n> Anjan\n>\n\nWell, I'm speaking more from what I remember reading, than personal\ntesting. Probably 50% is too high, but I thought I remembered it being\nmore general than just specific cases.\n\nI agree, though, that disk bandwidth is probably more important than CPU\nissues, though. And the extra 15k might get you a lot of disk performance.\n\nJohn\n=:->",
"msg_date": "Mon, 09 May 2005 10:40:07 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "On Mon, 9 May 2005, John A Meinel wrote:\n\n> Well, I'm speaking more from what I remember reading, than personal\n> testing. Probably 50% is too high, but I thought I remembered it being\n> more general than just specific cases.\n\nAnadtech had a benchmark here:\n\nhttp://www.anandtech.com/linux/showdoc.aspx?i=2163&p=2\n\nIt's a little old, as it's listing an Opteron 150 vs 3.6 Xeon, but it does\nshow that the opteron comes in almost twice as fast as the Xeon doing\nPostgres.\n\n\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Mon, 9 May 2005 10:08:27 -0700 (PDT)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "Unfortunately, Anandtech only used Postgres just a single time in his \nbenchmarks. And what it did show back then was a huge performance \nadvantage for the Opteron architecture over Xeon in this case. Where the \nfastest Opterons were just 15% faster in MySQL/MSSQL/DB2 than the \nfastest Xeons, it was 100%+ faster in Postgres. He probably got rid of \nPostgres from his benchmark suite since it favors Opteron too much. As a \ngeneral hardware review site, makes senses that he needs to get more \nneutral apps in order to get free systems to review and (ahem) ad dollars.\n\nThat being said, I wouldn't get a quad Opteron system anyways now that \nthe dual core Opterons are available. A DP+DC system would be faster and \ncheaper than a pure quad system. Unless of course, I needed a QP+DC for \n8-way SMP.\n\n\n\n\n\n\nAnjan Dave wrote:\n> Wasn't the context switching issue occurring in specific cases only?\n> \n> I haven't seen any benchmarks for a 50% performance difference. Neither\n> have I seen any benchmarks of pure disk IO performance of specific\n> models of Dell vs HP or Sun Opterons.\n> \n> Thanks,\n> Anjan\n> \n>>EMC you can file an RPQ via your sales contacts to get it approved,\n>>though not sure how lengthy/painful that process might be, or if it's\n>>gonna be worth it.\n>>\n>>Read the article devoted to the v40z on anandtech.com.\n>>\n>>I am also trying to get a quad-Opteron versus the latest quad-XEON\n> \n> from\n> \n>>Dell (6850), but it's hard to justify a difference between a 15K dell\n>>versus a 30k v40z for a 5-8% performance gain (read the XEON Vs.\n> \n> Opteron\n> \n>>Database comparo on anandtech.com)...\n>>\n>>Thanks,\n>>Anjan\n>>\n> \n> \n> 15k vs 30k is indeed a big difference. But also realize that Postgres\n> has a specific benefit to Opterons versus Xeons. The context switching\n> storm happens less on an Opteron for some reason.\n> \n> I would venture a much greater benefit than 5-8%, more like 10-50%.\n",
"msg_date": "Mon, 09 May 2005 10:23:34 -0700",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
}
] |
[
{
"msg_contents": "I took an unbiased look and did some tests.\nObjectively for me MYSQL was not an improvement for speed.\nI had read the benchmarks in pcmagazine from a while back as well.\n\nI did some tests using ODBC, and .net connections and also used aqua studios\n(hooks up to both data bases) and found postgres a bit faster.\n\nI did spend more time getting a feeling for setup on postgres, but I was at\na point of desperation as some queries were still too slow on postgres.\n\nI ended up re-engineering my app to use simpler(flattened) data sets.\nI still have a few I am working through, but all in all it is running better\nthen when I was on MSSQL, and MYSQL was just slower on the tests I did.\n\nI loaded both MYSQL and postgres on both my 4 processor Dell running red hat\nAS3 and Windows XP on a optiplex.\n\nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Amit V Shah\nSent: Tuesday, May 24, 2005 12:22 PM\nTo: 'Joshua D. Drake'\nCc: '[email protected]'\nSubject: Re: [PERFORM] Need help to decide Mysql vs Postgres\n\nHi Josh,\n\nThanks for the prompt reply !! Actually migration is inevitable. We have a\ntotally messed up schema, not normalized and stuff like that. So the goal of\nthe migration is to get a new and better normalized schema. That part is\ndone already. Now the decision point is, should we go with postgres or\nmysql. \n\nThanks,\nAmit\n \n-----Original Message-----\nFrom: Joshua D. Drake [mailto:[email protected]]\nSent: Tuesday, May 24, 2005 1:15 PM\nTo: Amit V Shah\nCc: '[email protected]'\nSubject: Re: [PERFORM] Need help to decide Mysql vs Postgres\n\n\n> \n> I am not trying to start a mysql vs postgres war so please dont\n> misunderstand me .... I tried to look around for mysql vs postgres\narticles,\n> but most of them said mysql is better in speed. However those articles\nwere\n> very old so I dont know about recent stage. Please comment !!!\n\nIt is my experience that MySQL is faster under smaller load scenarios. \nSay 5 - 10 connections only doing simple SELECTS. E.g; a dymanic website.\n\nIt is also my experience that PostgreSQL is faster and more stable under\nconsistent and heavy load. I have customers you regularly are using up \nto 500 connections.\n\nNote that alot of this depends on how your database is designed. Foreign \nkeys slow things down.\n\nI think it would be important for you to look at your overall goal of \nmigration. MySQL is really not a bad product \"IF\" you are willing to \nwork within its limitations.\n\nPostgreSQL is a real RDMS, it is like Oracle or DB2 and comes with a \ncomparable feature set. Only you can decide if that is what you need.\n\nSincerely,\n\nJoshua D. Drake\nCommand Prompt, Inc.\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 
1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Mon, 9 May 2005 13:30:36 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "Hi Josh,\n\nThanks for the prompt reply !! Actually migration is inevitable. We have a\ntotally messed up schema, not normalized and stuff like that. So the goal of\nthe migration is to get a new and better normalized schema. That part is\ndone already. Now the decision point is, should we go with postgres or\nmysql. \n\nThanks,\nAmit\n \n-----Original Message-----\nFrom: Joshua D. Drake [mailto:[email protected]]\nSent: Tuesday, May 24, 2005 1:15 PM\nTo: Amit V Shah\nCc: '[email protected]'\nSubject: Re: [PERFORM] Need help to decide Mysql vs Postgres\n\n\n> \n> I am not trying to start a mysql vs postgres war so please dont\n> misunderstand me .... I tried to look around for mysql vs postgres\narticles,\n> but most of them said mysql is better in speed. However those articles\nwere\n> very old so I dont know about recent stage. Please comment !!!\n\nIt is my experience that MySQL is faster under smaller load scenarios. \nSay 5 - 10 connections only doing simple SELECTS. E.g; a dymanic website.\n\nIt is also my experience that PostgreSQL is faster and more stable under\nconsistent and heavy load. I have customers you regularly are using up \nto 500 connections.\n\nNote that alot of this depends on how your database is designed. Foreign \nkeys slow things down.\n\nI think it would be important for you to look at your overall goal of \nmigration. MySQL is really not a bad product \"IF\" you are willing to \nwork within its limitations.\n\nPostgreSQL is a real RDMS, it is like Oracle or DB2 and comes with a \ncomparable feature set. Only you can decide if that is what you need.\n\nSincerely,\n\nJoshua D. Drake\nCommand Prompt, Inc.\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n\n",
"msg_date": "Tue, 24 May 2005 13:22:08 -0400",
"msg_from": "Amit V Shah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": "Amit V Shah wrote:\n> Hi Josh,\n> \n> Thanks for the prompt reply !! Actually migration is inevitable. We have a\n> totally messed up schema, not normalized and stuff like that. So the goal of\n> the migration is to get a new and better normalized schema. That part is\n> done already. Now the decision point is, should we go with postgres or\n> mysql. \n\nO.k. then I would ask myself this:\n\nWould I trust my brand new data that I have put all this effort into, \nthat finally looks the way that I want it to look, to a database that \ntruncates information?\n\nPostgreSQL is truly ACID compliant. Even if it is a little slower (which \nunder normal use I don't find to be the case) wouldn't the reliability \nof PostgreSQL make up for say the 10% net difference in performance?\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n> Thanks,\n> Amit\n> \n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]]\n> Sent: Tuesday, May 24, 2005 1:15 PM\n> To: Amit V Shah\n> Cc: '[email protected]'\n> Subject: Re: [PERFORM] Need help to decide Mysql vs Postgres\n> \n> \n> \n>>I am not trying to start a mysql vs postgres war so please dont\n>>misunderstand me .... I tried to look around for mysql vs postgres\n> \n> articles,\n> \n>>but most of them said mysql is better in speed. However those articles\n> \n> were\n> \n>>very old so I dont know about recent stage. Please comment !!!\n> \n> \n> It is my experience that MySQL is faster under smaller load scenarios. \n> Say 5 - 10 connections only doing simple SELECTS. E.g; a dymanic website.\n> \n> It is also my experience that PostgreSQL is faster and more stable under\n> consistent and heavy load. I have customers you regularly are using up \n> to 500 connections.\n> \n> Note that alot of this depends on how your database is designed. Foreign \n> keys slow things down.\n> \n> I think it would be important for you to look at your overall goal of \n> migration. MySQL is really not a bad product \"IF\" you are willing to \n> work within its limitations.\n> \n> PostgreSQL is a real RDMS, it is like Oracle or DB2 and comes with a \n> comparable feature set. Only you can decide if that is what you need.\n> \n> Sincerely,\n> \n> Joshua D. Drake\n> Command Prompt, Inc.\n> \n> \n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n",
"msg_date": "Tue, 24 May 2005 14:36:07 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
},
{
"msg_contents": ">> Thanks for the prompt reply !! Actually migration is inevitable. We have a\n>> totally messed up schema, not normalized and stuff like that. So the goal \n>> of\n>> the migration is to get a new and better normalized schema. That part is\n>> done already. Now the decision point is, should we go with postgres or\n>> mysql.\n\nComing in a little late, but you might find these links interesting... not \nsure how up to date and/or accurate they are, but might give you some \nthings to look into.\n\nhttp://sql-info.de/mysql/gotchas.html\nhttp://sql-info.de/postgresql/postgres-gotchas.html\n",
"msg_date": "Wed, 25 May 2005 08:52:24 -0700 (PDT)",
"msg_from": "Philip Hallstrom <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Need help to decide Mysql vs Postgres"
}
] |
[
{
"msg_contents": "Hello\n\nHow can i know a capacity of a pg database ?\nHow many records my table can have ?\nI saw in a message that someone have 50 000 records it's possible in a table ?\n(My table have 8 string field (length 32 car)).\nThanks for your response.\n\n\nNanou\n\n\n",
"msg_date": "Mon, 9 May 2005 21:22:40 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "PGSQL Capacity"
},
{
"msg_contents": "On Mon, May 09, 2005 at 09:22:40PM +0200, [email protected] wrote:\n> How can i know a capacity of a pg database ?\n> How many records my table can have ?\n> I saw in a message that someone have 50 000 records it's possible in a table ?\n> (My table have 8 string field (length 32 car)).\n> Thanks for your response.\n\nYou can have several million records in a table easily -- I've done 10\nmillion personally, but you can find people doing that many records a _day_.\nHitting 1 billion records should probably not be impossible either -- it all\ndepends on your hardware, and perhaps more importantly, what kind of queries\nyou're running against it. 50000 is absolutely no problem at all.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 9 May 2005 21:32:18 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL Capacity"
},
{
"msg_contents": "[email protected] wrote:\n> Hello\n>\n> How can i know a capacity of a pg database ?\n> How many records my table can have ?\n> I saw in a message that someone have 50 000 records it's possible in a table ?\n> (My table have 8 string field (length 32 car)).\n> Thanks for your response.\n>\n>\n> Nanou\n\nThe capacity for a PG database is just the limit of how much disk space\nyou have. There are people who have 100 million or even up to a billion\nrows in one table. If you are in the billions you have to take some care\nabout OID wraparound, but at the very least the maximum number of rows\nin one table is at least 2Billion.\n\nWith 8 string fields and 32 chars each, if they are all filled, you will\ntake about (32+4)*8 = 324 + overhead bytes each, so I would estimate\nabout 400 per. If we add a whole lot to be safe, and say 1k per row, you\ncan fit 1Million rows in 1GB of disk space. There is more taken up by\nindexes, etc.\n\nGenerally, you are disk limited, not limited by Postgres.\n\nJohn\n=:->",
"msg_date": "Mon, 09 May 2005 14:40:52 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL Capacity"
},
{
"msg_contents": "[email protected] writes:\n> How can i know a capacity of a pg database ?\n> How many records my table can have ?\n> I saw in a message that someone have 50 000 records it's possible in a table ?\n> (My table have 8 string field (length 32 car)).\n> Thanks for your response.\n\nThe capacity is much more likely to be limited by the size of the disk\ndrives and filesystems you have available to you than by anything\nelse.\n\nIf your table consists of 8- 32 character strings, then each tuple\nwill consume around 256 bytes of memory, and you will be able to fit\non the order of 30 tuples into each 8K page.\n\nBy default, you can extend a single table file to up to 1GB before it\nsplits off to another piece. That would mean each file can have about\n3.9M tuples. From there, you can have as many 1GB pieces as the disk\nwill support. So you could have (plenty * 3.9M tuples), which could\nadd up to be rather large.\n\nIf you're expecting 50K records, that will be no big deal at all.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www.ntlug.org/~cbbrowne/sap.html\nRules of the Evil Overlord #78. \"I will not tell my Legions of Terror\n\"And he must be taken alive!\" The command will be: ``And try to take\nhim alive if it is reasonably practical.''\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Mon, 09 May 2005 17:46:23 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL Capacity"
},
{
"msg_contents": "http://stats.distributed.net has a table that's 130M rows.\nhttp://stats.distributed.net/participant/phistory.php?project_id=8&id=39622\nis a page that hits that table, and as you can see it's quite fast. This\nis on a dual opteron with 4G of memory.\n\nUnless you're looking for sub millisecond response times, 50k rows is\nnothing.\n\nOn Mon, May 09, 2005 at 09:32:18PM +0200, Steinar H. Gunderson wrote:\n> On Mon, May 09, 2005 at 09:22:40PM +0200, [email protected] wrote:\n> > How can i know a capacity of a pg database ?\n> > How many records my table can have ?\n> > I saw in a message that someone have 50 000 records it's possible in a table ?\n> > (My table have 8 string field (length 32 car)).\n> > Thanks for your response.\n> \n> You can have several million records in a table easily -- I've done 10\n> million personally, but you can find people doing that many records a _day_.\n> Hitting 1 billion records should probably not be impossible either -- it all\n> depends on your hardware, and perhaps more importantly, what kind of queries\n> you're running against it. 50000 is absolutely no problem at all.\n> \n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 10 May 2005 16:40:24 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PGSQL Capacity"
}
] |
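A quick way to sanity-check the sizing estimates traded in the capacity thread above is to ask the catalog directly. This is a sketch only, assuming a placeholder table name my_table and the default 8 KB block size; note that the per-row arithmetic quoted above works out to 8 * (32 + 4) = 288 bytes of column data rather than 324, although the roughly 1 KB/row budget used there remains a comfortable over-estimate. After a VACUUM ANALYZE, pg_class carries the page and row counts the planner itself uses:

-- 'my_table' is a placeholder name; 8192 bytes is the default block size.
VACUUM ANALYZE my_table;

SELECT relname,
       relpages,                        -- 8 KB heap pages in use
       relpages * 8192 AS approx_bytes,
       reltuples        AS approx_rows  -- planner's row-count estimate
  FROM pg_class
 WHERE relname = 'my_table';

Dividing approx_bytes by approx_rows gives the observed average row footprint, which turns "how many rows can this table hold" into the disk-space question the replies above describe.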
[
{
"msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]\n> Sent: Monday, May 09, 2005 2:23 PM\n> To: [email protected]\n> Subject: [PERFORM] PGSQL Capacity\n> \n> How can i know a capacity of a pg database ?\n> How many records my table can have ?\n> I saw in a message that someone have 50 000 records it's \n> possible in a table ?\n> (My table have 8 string field (length 32 car)).\n> Thanks for your response.\n\nYou're basically limited to the size of your hard drive(s).\nThe actual limit is in the terabyte range.\n\n__\nDavid B. Held\nSoftware Engineer/Array Services Group\n200 14th Ave. East, Sartell, MN 56377\n320.534.3637 320.253.7800 800.752.8129\n",
"msg_date": "Mon, 9 May 2005 14:55:40 -0500",
"msg_from": "\"Dave Held\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PGSQL Capacity"
}
] |
[
{
"msg_contents": "The DP+DC isn't available yet, from Sun. Only QP+DC is, for which the\nbid opens at 38k, that is a bit pricey -:)\n\n\n-----Original Message-----\nFrom: William Yu [mailto:[email protected]] \nSent: Monday, May 09, 2005 1:24 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Whence the Opterons?\n\nUnfortunately, Anandtech only used Postgres just a single time in his \nbenchmarks. And what it did show back then was a huge performance \nadvantage for the Opteron architecture over Xeon in this case. Where the\n\nfastest Opterons were just 15% faster in MySQL/MSSQL/DB2 than the \nfastest Xeons, it was 100%+ faster in Postgres. He probably got rid of \nPostgres from his benchmark suite since it favors Opteron too much. As a\n\ngeneral hardware review site, makes senses that he needs to get more \nneutral apps in order to get free systems to review and (ahem) ad\ndollars.\n\nThat being said, I wouldn't get a quad Opteron system anyways now that \nthe dual core Opterons are available. A DP+DC system would be faster and\n\ncheaper than a pure quad system. Unless of course, I needed a QP+DC for \n8-way SMP.\n\n\n\n\n\n\nAnjan Dave wrote:\n> Wasn't the context switching issue occurring in specific cases only?\n> \n> I haven't seen any benchmarks for a 50% performance difference.\nNeither\n> have I seen any benchmarks of pure disk IO performance of specific\n> models of Dell vs HP or Sun Opterons.\n> \n> Thanks,\n> Anjan\n> \n>>EMC you can file an RPQ via your sales contacts to get it approved,\n>>though not sure how lengthy/painful that process might be, or if it's\n>>gonna be worth it.\n>>\n>>Read the article devoted to the v40z on anandtech.com.\n>>\n>>I am also trying to get a quad-Opteron versus the latest quad-XEON\n> \n> from\n> \n>>Dell (6850), but it's hard to justify a difference between a 15K dell\n>>versus a 30k v40z for a 5-8% performance gain (read the XEON Vs.\n> \n> Opteron\n> \n>>Database comparo on anandtech.com)...\n>>\n>>Thanks,\n>>Anjan\n>>\n> \n> \n> 15k vs 30k is indeed a big difference. But also realize that Postgres\n> has a specific benefit to Opterons versus Xeons. The context switching\n> storm happens less on an Opteron for some reason.\n> \n> I would venture a much greater benefit than 5-8%, more like 10-50%.\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n",
"msg_date": "Mon, 9 May 2005 16:10:03 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Whence the Opterons?"
},
{
"msg_contents": "\nIron Systems has a fair selection of opteron machines, up to 4 way. \nThe one I have has Tyan guts.\n\n\nOn May 9, 2005, at 4:10 PM, Anjan Dave wrote:\n\n> The DP+DC isn't available yet, from Sun. Only QP+DC is, for which the\n> bid opens at 38k, that is a bit pricey -:)\n>\n>\n> -----Original Message-----\n> From: William Yu [mailto:[email protected]]\n> Sent: Monday, May 09, 2005 1:24 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] Whence the Opterons?\n>\n> Unfortunately, Anandtech only used Postgres just a single time in his\n> benchmarks. And what it did show back then was a huge performance\n> advantage for the Opteron architecture over Xeon in this case. \n> Where the\n>\n> fastest Opterons were just 15% faster in MySQL/MSSQL/DB2 than the\n> fastest Xeons, it was 100%+ faster in Postgres. He probably got rid of\n> Postgres from his benchmark suite since it favors Opteron too much. \n> As a\n>\n> general hardware review site, makes senses that he needs to get more\n> neutral apps in order to get free systems to review and (ahem) ad\n> dollars.\n>\n> That being said, I wouldn't get a quad Opteron system anyways now that\n> the dual core Opterons are available. A DP+DC system would be \n> faster and\n>\n> cheaper than a pure quad system. Unless of course, I needed a QP+DC \n> for\n> 8-way SMP.\n>\n>\n>\n>\n>\n>\n> Anjan Dave wrote:\n>\n>> Wasn't the context switching issue occurring in specific cases only?\n>>\n>> I haven't seen any benchmarks for a 50% performance difference.\n>>\n> Neither\n>\n>> have I seen any benchmarks of pure disk IO performance of specific\n>> models of Dell vs HP or Sun Opterons.\n>>\n>> Thanks,\n>> Anjan\n>>\n>>\n>>> EMC you can file an RPQ via your sales contacts to get it approved,\n>>> though not sure how lengthy/painful that process might be, or if \n>>> it's\n>>> gonna be worth it.\n>>>\n>>> Read the article devoted to the v40z on anandtech.com.\n>>>\n>>> I am also trying to get a quad-Opteron versus the latest quad-XEON\n>>>\n>>\n>> from\n>>\n>>\n>>> Dell (6850), but it's hard to justify a difference between a 15K \n>>> dell\n>>> versus a 30k v40z for a 5-8% performance gain (read the XEON Vs.\n>>>\n>>\n>> Opteron\n>>\n>>\n>>> Database comparo on anandtech.com)...\n>>>\n>>> Thanks,\n>>> Anjan\n>>>\n>>>\n>>\n>>\n>> 15k vs 30k is indeed a big difference. But also realize that Postgres\n>> has a specific benefit to Opterons versus Xeons. The context \n>> switching\n>> storm happens less on an Opteron for some reason.\n>>\n>> I would venture a much greater benefit than 5-8%, more like 10-50%.\n>>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Mon, 9 May 2005 23:03:23 -0400",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Whence the Opterons?"
}
] |
[
{
"msg_contents": "Seems to be only using like 360 meg out of 7 gig free (odd thing is I did\nsee some used swap 4k out of 1.9) with a bunch of users (this may be normal,\nbut it is not going overly fast so thought I would ask).\n\nItems I modified per commandprompt.coma nd watching this list etc.\n\n \n\nshared_buffers = 24576\n\nwork_mem = 32768\n\nmax_fsm_pages = 100000\n\nmax_fsm_relations = 1500\n\nfsync = true\n\nwal_sync_method = open_sync\n\nwal_buffers = 2048\n\ncheckpoint_segments = 100 \n\neffective_cache_size = 524288\n\ndefault_statistics_target = 250\n\n \n\nAny help is appreciated.\n\n \n\n \n\n \n\nJoel Fradkin\n\n \n\nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n\n \n\[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n\n \n\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nSeems to be only using like 360 meg out of 7 gig free (odd\nthing is I did see some used swap 4k out of 1.9) with a bunch of users (this\nmay be normal, but it is not going overly fast so thought I would ask).\nItems I modified per commandprompt.coma nd watching this\nlist etc.\n \nshared_buffers = 24576\nwork_mem = 32768\nmax_fsm_pages = 100000\nmax_fsm_relations = 1500\nfsync = true\nwal_sync_method = open_sync\nwal_buffers = 2048\ncheckpoint_segments = 100 \neffective_cache_size =\n524288\ndefault_statistics_target =\n250\n \nAny help is appreciated.\n \n \n \nJoel Fradkin\n\n \n\nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n\n \n\[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\n© 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the\nintended recipient, please contact the sender by reply email and delete and\ndestroy all copies of the original message, including attachments.",
"msg_date": "Mon, 9 May 2005 16:55:53 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configing 8 gig box."
},
{
"msg_contents": "On Mon, May 09, 2005 at 04:55:53PM -0400, Joel Fradkin wrote:\n> Seems to be only using like 360 meg out of 7 gig free (odd thing is I did\n> see some used swap 4k out of 1.9) with a bunch of users (this may be normal,\n> but it is not going overly fast so thought I would ask).\n\nThis is perfectly normal. Each postgresql backend will only report\nmemory usage roughly equal to shared_buffers plus the size of the code\n(16M or so?). If it's in the middle of a sort or vacuum, it will use\nmore memory.\n\nIt's not uncommon for modern OS's to swap out stuff that's not being\nused. They would rather have the memory available for disk caching,\nwhich is normally a good trade-off.\n\nFor reference, on a 4G box running FreeBSD, there's currently 18M of\nswap used. Postgresql processes typically show 53M of total VM, with\n~22M resident. This is with shared buffers of 2000.\n\n> Items I modified per commandprompt.coma nd watching this list etc.\n> \n> \n> \n> shared_buffers = 24576\n> \n> work_mem = 32768\n> \n> max_fsm_pages = 100000\n> \n> max_fsm_relations = 1500\n> \n> fsync = true\n> \n> wal_sync_method = open_sync\n> \n> wal_buffers = 2048\n> \n> checkpoint_segments = 100 \n> \n> effective_cache_size = 524288\n> \n> default_statistics_target = 250\n> \n> \n> \n> Any help is appreciated.\n> \n> \n> \n> \n> \n> \n> \n> Joel Fradkin\n> \n> \n> \n> Wazagua, Inc.\n> 2520 Trailmate Dr\n> Sarasota, Florida 34243\n> Tel. 941-753-7111 ext 305\n> \n> \n> \n> [email protected]\n> www.wazagua.com\n> Powered by Wazagua\n> Providing you with the latest Web-based technology & advanced tools.\n> C 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n> This email message is for the use of the intended recipient(s) and may\n> contain confidential and privileged information. Any unauthorized review,\n> use, disclosure or distribution is prohibited. If you are not the intended\n> recipient, please contact the sender by reply email and delete and destroy\n> all copies of the original message, including attachments.\n> \n> \n> \n> \n> \n> \n> \n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 10 May 2005 16:45:31 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configing 8 gig box."
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Mon, May 09, 2005 at 04:55:53PM -0400, Joel Fradkin wrote:\n>> Seems to be only using like 360 meg out of 7 gig free (odd thing is I did\n>> see some used swap 4k out of 1.9) with a bunch of users (this may be normal,\n>> but it is not going overly fast so thought I would ask).\n\n> This is perfectly normal. Each postgresql backend will only report\n> memory usage roughly equal to shared_buffers plus the size of the code\n> (16M or so?). If it's in the middle of a sort or vacuum, it will use\n> more memory.\n\nOne thing to note is that depending on which Unix variant you are using,\ntop may claim that any particular backend process is using the portion\nof shared memory that it's actually physically touched. This means that\nthe claimed size of a backend process will grow as it runs (and randomly\nneeds to touch pages that are in different slots of the shared-memory\nbuffers) regardless of any actual objective growth in memory needs.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 May 2005 00:10:36 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configing 8 gig box. "
}
] |
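For reference, the settings quoted in the thread above already account for most of what top was showing. A small worked check, assuming the default 8 KB block size (the figures below are derived from the posted settings, not measured on the box in question):

-- shared_buffers        24576 * 8 KB = 192 MB  (the only large shared allocation)
-- wal_buffers            2048 * 8 KB =  16 MB
-- effective_cache_size 524288 * 8 KB =   4 GB  (a planner hint only; nothing is allocated)
SHOW shared_buffers;
SHOW effective_cache_size;

So a resident size in the few-hundred-megabyte range is expected; the rest of the 8 GB shows up as the operating system's file cache, which is the behaviour described in the replies.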
[
{
"msg_contents": "I wanted to get some opinions about row prefetching. AFAIK, there is no \nprefetching done by PostgreSQL; all prefetching is delegated to the operating \nsystem. \n\nThe hardware (can't say enough good things about it):\n\nAthlon 64, dual channel\n4GB ram\n240GB usable 4 disk raid5 (ATA133)\nFedora Core 3\nPostgreSQL 7.4.7\n\nI have what is essentially a data warehouse of stock data. Each day has \naround 30,000 records (tickers). A typical operation is to get the 200 day \nsimple moving average (of price) for each ticker and write the result to a \nsummary table. In running this process (Perl/DBI), it is typical to see \n70-80% I/O wait time with postgres running a about 8-9%. If I run the next \nday's date, the postgres cache and file cache is now populated with 199 days \nof the needed data, postgres runs 80-90% of CPU and total run time is greatly \nreduced. My conclusion is that this is a high cache hit rate in action. \n\nI've done other things that make sense, like using indexes, playing with the \nplanner constants and turning up the postgres cache buffers. \n\nEven playing with extream hdparm read-ahead numbers (i.e. 64738), there is no \napparent difference in database performance. The random nature of the I/O \ndrops disk reads down to about 1MB/sec for the array. A linear table scan \ncan easily yield 70-80MB/sec on this system. Total table size is usually \naround 1GB and with indexes should be able to fit completely in main memory.\n\nOther databases like Oracle and DB2 implement some sort of row prefetch. Has \nthere been serious consideration of implementing something like a prefetch \nsubsystem? Does anyone have any opinions as to why this would be a bad idea \nfor postgres? \n\nPostges is great for a multiuser environment and OLTP applications. However, \nin this set up, a data warehouse, the observed performance is not what I \nwould hope for. \n\nRegards,\n\nMatt Olson\nOcean Consulting\nhttp://www.oceanconsulting.com/\n",
"msg_date": "Mon, 9 May 2005 19:10:26 -0700",
"msg_from": "Matt Olson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Prefetch"
},
{
"msg_contents": "My only comment is what is the layout of your data (just one table with\nindexes?).\nI found on my date with dozens of joins my view speed was not good for me to\nuse, so I made a flat file with no joins and it flies.\n\nJoel Fradkin\n \nWazagua, Inc.\n2520 Trailmate Dr\nSarasota, Florida 34243\nTel. 941-753-7111 ext 305\n \[email protected]\nwww.wazagua.com\nPowered by Wazagua\nProviding you with the latest Web-based technology & advanced tools.\nC 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc\n This email message is for the use of the intended recipient(s) and may\ncontain confidential and privileged information. Any unauthorized review,\nuse, disclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply email and delete and destroy\nall copies of the original message, including attachments.\n \n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Matt Olson\nSent: Monday, May 09, 2005 9:10 PM\nTo: [email protected]\nSubject: [PERFORM] Prefetch\n\nI wanted to get some opinions about row prefetching. AFAIK, there is no \nprefetching done by PostgreSQL; all prefetching is delegated to the\noperating \nsystem. \n\nThe hardware (can't say enough good things about it):\n\nAthlon 64, dual channel\n4GB ram\n240GB usable 4 disk raid5 (ATA133)\nFedora Core 3\nPostgreSQL 7.4.7\n\nI have what is essentially a data warehouse of stock data. Each day has \naround 30,000 records (tickers). A typical operation is to get the 200 day \nsimple moving average (of price) for each ticker and write the result to a \nsummary table. In running this process (Perl/DBI), it is typical to see \n70-80% I/O wait time with postgres running a about 8-9%. If I run the next\n\nday's date, the postgres cache and file cache is now populated with 199 days\n\nof the needed data, postgres runs 80-90% of CPU and total run time is\ngreatly \nreduced. My conclusion is that this is a high cache hit rate in action. \n\nI've done other things that make sense, like using indexes, playing with the\n\nplanner constants and turning up the postgres cache buffers. \n\nEven playing with extream hdparm read-ahead numbers (i.e. 64738), there is\nno \napparent difference in database performance. The random nature of the I/O \ndrops disk reads down to about 1MB/sec for the array. A linear table scan \ncan easily yield 70-80MB/sec on this system. Total table size is usually \naround 1GB and with indexes should be able to fit completely in main memory.\n\nOther databases like Oracle and DB2 implement some sort of row prefetch.\nHas \nthere been serious consideration of implementing something like a prefetch \nsubsystem? Does anyone have any opinions as to why this would be a bad idea\n\nfor postgres? \n\nPostges is great for a multiuser environment and OLTP applications.\nHowever, \nin this set up, a data warehouse, the observed performance is not what I \nwould hope for. \n\nRegards,\n\nMatt Olson\nOcean Consulting\nhttp://www.oceanconsulting.com/\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n",
"msg_date": "Mon, 16 May 2005 08:45:01 -0400",
"msg_from": "\"Joel Fradkin\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch"
}
] |
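The moving-average job described in the prefetch thread above is dominated by how many of the needed pages are already cached, which is easier to see in query form. A minimal sketch, assuming hypothetical tables daily_price(ticker, obs_date, close_price) and sma_summary(ticker, obs_date, sma_200) since the thread does not give the real schema, and using 200 calendar days as a stand-in for 200 trading days; on a 7.4-era server without window functions, the usual shape is a correlated aggregate over an index on (ticker, obs_date):

-- Hypothetical schema; loads one day's summary rows.
INSERT INTO sma_summary (ticker, obs_date, sma_200)
SELECT p.ticker,
       p.obs_date,
       (SELECT avg(p2.close_price)
          FROM daily_price p2
         WHERE p2.ticker = p.ticker
           AND p2.obs_date > p.obs_date - 200
           AND p2.obs_date <= p.obs_date) AS sma_200
  FROM daily_price p
 WHERE p.obs_date = DATE '2005-05-09';

Each ticker touches roughly 200 index-ordered rows, so the second-day speedup reported above is the expected cache-hit effect rather than something a read-ahead setting can reproduce.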
[
{
"msg_contents": "Hi,\n\nI'm having problems with the query optimizer and FULL OUTER JOIN on \nPostgreSQL 7.4. I cannot get it to use my indexes with full outer joins. \nI might be naive, but I think that it should be possible?\n\nI have two BIG tables (virtually identical) with 3 NOT NULL columns \nStation_id, TimeObs, Temp_XXXX, with indexes on (Station_id, TimeObs) \nand valid ANALYSE (set statistics=100). I want to join the two tables \nwith a FULL OUTER JOIN.\n\nWhen I specify the query as:\n\nselect temp_max_60min,temp_dry_at_2m\nfrom station s natural join\ntemp_dry_at_2m a full outer join temp_max_60min b using (station_id, timeobs)\nwhere s.wmo_id=6065 \nand timeobs='2004-1-1 0:0:0'\nand '2004-1-1 0:0:0' between s.startdate and s.enddate;\n\nI get the correct results, BUT LOUSY performance, and the following explain:\n\n Nested Loop Left Join (cost=5.84..163484.08 rows=1349 width=12) (actual time=66146.815..119005.381 rows=1 loops=1)\n Filter: (COALESCE(\"outer\".timeobs, \"inner\".timeobs) = '2004-01-01 00:00:00'::timestamp without time zone)\n -> Hash Join (cost=5.84..155420.24 rows=1349 width=16) (actual time=8644.449..110836.038 rows=109826 loops=1)\n Hash Cond: (\"outer\".station_id = \"inner\".station_id)\n -> Seq Scan on temp_dry_at_2m a (cost=0.00..120615.94 rows=6956994 width=16) (actual time=0.024..104548.515 rows=6956994 loops=1)\n -> Hash (cost=5.84..5.84 rows=1 width=4) (actual time=0.114..0.114 rows=0 loops=1)\n -> Index Scan using wmo_idx on station (cost=0.00..5.84 rows=1 width=4) (actual time=0.105..0.108 rows=1 loops=1)\n Index Cond: ((wmo_id = 6065) AND ('2004-01-01 00:00:00'::timestamp without time zone >= startdate) AND ('2004-01-01 00:00:00'::timestamp without time zone <= enddate))\n -> Index Scan using temp_max_60min_idx on temp_max_60min b (cost=0.00..5.96 rows=1 width=20) (actual time=0.071..0.071 rows=0 loops=109826)\n Index Cond: ((\"outer\".station_id = b.station_id) AND (\"outer\".timeobs = b.timeobs))\n Total runtime: 119005.499 ms\n(11 rows)\n\nIf I change the query to (and thus negates the full outer join):\n\nselect temp_max_60min,temp_dry_at_2m\nfrom station s natural join\ntemp_dry_at_2m a full outer join temp_max_60min b using (station_id, timeobs)\nwhere s.wmo_id=6065 \nand _a.timeobs='2004-1-1 0:0:0' and b._timeobs='2004-1-1 0:0:0' \nand '2004-1-1 0:0:0' between s.startdate and s.enddate;\n\n\nI get wrong results (In the case where one of the records is missing in \none of the tables), BUT GOOD performance, and this query plan:\n\n Nested Loop (cost=0.00..17.83 rows=1 width=12) (actual time=79.221..79.236 rows=1 loops=1)\n -> Nested Loop (cost=0.00..11.82 rows=1 width=24) (actual time=65.517..65.526 rows=1 loops=1)\n -> Index Scan using wmo_idx on station (cost=0.00..5.83 rows=1 width=4) (actual time=0.022..0.026 rows=1 loops=1)\n Index Cond: ((wmo_id = 6065) AND ('2004-01-01 00:00:00'::timestamp without time zone >= startdate) AND ('2004-01-01 00:00:00'::timestamp without time zone <= enddate))\n -> Index Scan using temp_max_60min_idx on temp_max_60min b (cost=0.00..5.97 rows=1 width=20) (actual time=65.483..65.486 rows=1 loops=1)\n Index Cond: ((\"outer\".station_id = b.station_id) AND (b.timeobs = '2004-01-01 00:00:00'::timestamp without time zone))\n -> Index Scan using temp_dry_at_2m_idx on temp_dry_at_2m a (cost=0.00..6.00 rows=1 width=16) (actual time=13.694..13.698 rows=1 loops=1)\n Index Cond: ((\"outer\".station_id = a.station_id) AND (a.timeobs = '2004-01-01 00:00:00'::timestamp without time zone))\n Total runtime: 79.340 ms\n(9 
rows)\n\n\nIf further info like EXPLAIN VERBOSE is useful please say so and I will \nprovide it.\n\nThanks in advance!\nKim Bisgaard.",
"msg_date": "Tue, 10 May 2005 11:03:34 +0200",
"msg_from": "Kim Bisgaard <[email protected]>",
"msg_from_op": true,
"msg_subject": "full outer performance problem"
},
{
"msg_contents": "Kim Bisgaard wrote:\n> Hi,\n>\n> I'm having problems with the query optimizer and FULL OUTER JOIN on\n> PostgreSQL 7.4. I cannot get it to use my indexes with full outer joins.\n> I might be naive, but I think that it should be possible?\n>\n> I have two BIG tables (virtually identical) with 3 NOT NULL columns\n> Station_id, TimeObs, Temp_XXXX, with indexes on (Station_id, TimeObs)\n> and valid ANALYSE (set statistics=100). I want to join the two tables\n> with a FULL OUTER JOIN.\n>\n> When I specify the query as:\n>\n> select temp_max_60min,temp_dry_at_2m\n> from station s natural join\n> temp_dry_at_2m a full outer join temp_max_60min b using (station_id, timeobs)\n> where s.wmo_id=6065\n> and timeobs='2004-1-1 0:0:0'\n> and '2004-1-1 0:0:0' between s.startdate and s.enddate;\n>\n> I get the correct results, BUT LOUSY performance, and the following explain:\n>\n> Nested Loop Left Join (cost=5.84..163484.08 rows=1349 width=12) (actual time=66146.815..119005.381 rows=1 loops=1)\n> Filter: (COALESCE(\"outer\".timeobs, \"inner\".timeobs) = '2004-01-01 00:00:00'::timestamp without time zone)\n> -> Hash Join (cost=5.84..155420.24 rows=1349 width=16) (actual time=8644.449..110836.038 rows=109826 loops=1)\n\nWell, the estimate here is quite a bit off. It thinks you will be\ngetting 1349 (which is probably why it picked a nested loop plan), but\nthen it is getting 109826 rows.\nI'm guessing it is misunderstanding the selectivity of the timeobs column.\n\n> Hash Cond: (\"outer\".station_id = \"inner\".station_id)\n> -> Seq Scan on temp_dry_at_2m a (cost=0.00..120615.94 rows=6956994 width=16) (actual time=0.024..104548.515 rows=6956994 loops=1)\n> -> Hash (cost=5.84..5.84 rows=1 width=4) (actual time=0.114..0.114 rows=0 loops=1)\n> -> Index Scan using wmo_idx on station (cost=0.00..5.84 rows=1 width=4) (actual time=0.105..0.108 rows=1 loops=1)\n> Index Cond: ((wmo_id = 6065) AND ('2004-01-01 00:00:00'::timestamp without time zone >= startdate) AND ('2004-01-01 00:00:00'::timestamp without time zone <= enddate))\n> -> Index Scan using temp_max_60min_idx on temp_max_60min b (cost=0.00..5.96 rows=1 width=20) (actual time=0.071..0.071 rows=0 loops=109826)\n> Index Cond: ((\"outer\".station_id = b.station_id) AND (\"outer\".timeobs = b.timeobs))\n> Total runtime: 119005.499 ms\n> (11 rows)\n\nI think the bigger problem is that a full outer join says grab all rows,\neven if they are null.\n\nWhat about this query:\nSELECT temp_max_60min,temp_dry_at_2m\n FROM (station s LEFT JOIN temp_dry_at_2m a USING (station_id, timeobs)\n LEFT JOIN temp_max_60min b USING (station_id, timeobs)\nwhere s.wmo_id=6065\nand timeobs='2004-1-1 0:0:0'\nand '2004-1-1 0:0:0' between s.startdate and s.enddate;\n\nAfter that, you should probably have a multi-column index on\n(station_id, timeobs), which lets postgres use just that index for the\nlookup, rather than using an index and then a filter. 
(Looking at your\nnext query you might already have that index).\n\n>\n> If I change the query to (and thus negates the full outer join):\n\nThis is the same query, I think you messed up your copy and paste.\n\n>\n> select temp_max_60min,temp_dry_at_2m\n> from station s natural join\n> temp_dry_at_2m a full outer join temp_max_60min b using (station_id, timeobs)\n> where s.wmo_id=6065\n> and _a.timeobs='2004-1-1 0:0:0' and b._timeobs='2004-1-1 0:0:0'\n> and '2004-1-1 0:0:0' between s.startdate and s.enddate;\n>\n>\n> I get wrong results (In the case where one of the records is missing in\n> one of the tables), BUT GOOD performance, and this query plan:\n>\n> Nested Loop (cost=0.00..17.83 rows=1 width=12) (actual time=79.221..79.236 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..11.82 rows=1 width=24) (actual time=65.517..65.526 rows=1 loops=1)\n> -> Index Scan using wmo_idx on station (cost=0.00..5.83 rows=1 width=4) (actual time=0.022..0.026 rows=1 loops=1)\n> Index Cond: ((wmo_id = 6065) AND ('2004-01-01 00:00:00'::timestamp without time zone >= startdate) AND ('2004-01-01 00:00:00'::timestamp without time zone <= enddate))\n> -> Index Scan using temp_max_60min_idx on temp_max_60min b (cost=0.00..5.97 rows=1 width=20) (actual time=65.483..65.486 rows=1 loops=1)\n> Index Cond: ((\"outer\".station_id = b.station_id) AND (b.timeobs = '2004-01-01 00:00:00'::timestamp without time zone))\n> -> Index Scan using temp_dry_at_2m_idx on temp_dry_at_2m a (cost=0.00..6.00 rows=1 width=16) (actual time=13.694..13.698 rows=1 loops=1)\n> Index Cond: ((\"outer\".station_id = a.station_id) AND (a.timeobs = '2004-01-01 00:00:00'::timestamp without time zone))\n> Total runtime: 79.340 ms\n> (9 rows)\n>\n>\n> If further info like EXPLAIN VERBOSE is useful please say so and I will\n> provide it.\n>\n> Thanks in advance!\n> Kim Bisgaard.\n\nI still feel like you will have a problem with an outer join in this\ncircumstance, because it will have to scan all of both tables.\n\nI think what you are wanting is \"give me everything where station_id =\nX, and there is a row in either a or b\".\nI think my LEFT JOIN example does that, but I also think there would be\na subselect form which would work, which might do better. Something like:\n\nSELECT temp_max_60min,temp_dry_at_2m\n\tFROM (SELECT station_id, timeobs FROM station s\n\t WHERE s.wmo_id=6065\n\t AND timeobs = '2004-1-1 0:0:0'\n\t AND '2004-1-1 0:0:0' BETWEEN s.startdate AND s.enddate\n\t ) AS s\n\tJOIN (SELECT temp_max_60min, temp_dry_at_2m\n\t\tFROM temp_dry_at_2m a\n\t\tFULL OUTER JOIN temp_max_60min b\n\t\tUSING (station_id, timeobs)\n\t\tWHERE station_id = s.station_id\n\t\t AND timeobs = '2004-1-1 0:0:0'\n\t )\n;\n\nIf I did this correctly, you should have a very restrictive scan done on\nstation, which only returns a few rows based on timeobs & station_id.\nBut it might be better to turn that final FULL OUTER JOIN into 2 LEFT\nJOINs like I did the first time:\n\nSELECT temp_max_60min,temp_dry_at_2m\n\tFROM (SELECT station_id, timeobs FROM station s\n\t WHERE s.wmo_id=6065\n\t AND timeobs = '2004-1-1 0:0:0'\n\t AND '2004-1-1 0:0:0' BETWEEN s.startdate AND s.enddate\n\t ) AS s\n\tLEFT JOIN temp_dry_at_2m a USING (station_id, timeobs)\n\tLEFT JOIN temp_max_60min b USING (station_id, timeobs)\n;\n\nI would hope postgres could do this from just my earlier plan. And I\nhope I understand what you want, such that 2 LEFT JOINS work better than\nyour FULL OUTER JOIN. 
If you only want rows where one of both temp_dry\nor temp_max exist, you probably could just add the line:\n\nWHERE (temp_max_60_min IS NOT NULL OR temp_dry_at_2m IS NOT NULL)\n\n\nJohn\n=:->",
"msg_date": "Tue, 10 May 2005 09:28:54 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: full outer performance problem"
},
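A balanced sketch of the two-LEFT-JOIN idea John proposes above, written against the table and column names used in this thread. The query as quoted has an unbalanced opening parenthesis, and here the timeobs test is moved into the ON clauses so both joins stay outer; whether this matches Kim's exact schema (in particular whether station carries its own timeobs column) is an assumption, so treat it as an illustration rather than a verified replacement:

```sql
-- Sketch: anchor both outer joins on station and put the timeobs test in the
-- ON clauses, so a station with only one of the two readings still comes back
-- as a single row with a NULL in the missing column.
SELECT b.temp_max_60min, a.temp_dry_at_2m
FROM station s
LEFT JOIN temp_dry_at_2m a
       ON a.station_id = s.station_id
      AND a.timeobs = '2004-01-01 00:00:00'
LEFT JOIN temp_max_60min b
       ON b.station_id = s.station_id
      AND b.timeobs = '2004-01-01 00:00:00'
WHERE s.wmo_id = 6065
  AND '2004-01-01 00:00:00' BETWEEN s.startdate AND s.enddate
  -- Uncomment to match the FULL OUTER JOIN exactly, i.e. drop the station
  -- row when *both* readings are missing:
  -- AND (a.temp_dry_at_2m IS NOT NULL OR b.temp_max_60min IS NOT NULL)
;
```

With the (station_id, timeobs) indexes in place, both outer joins should come back as the cheap index probes already visible in the second plan Kim posted, rather than the sequential scan over all 6.9M rows.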
{
"msg_contents": "Kim Bisgaard <[email protected]> writes:\n> I have two BIG tables (virtually identical) with 3 NOT NULL columns \n> Station_id, TimeObs, Temp_XXXX, with indexes on (Station_id, TimeObs) \n> and valid ANALYSE (set statistics=100). I want to join the two tables \n> with a FULL OUTER JOIN.\n\nI'm confused. If the columns are NOT NULL, why isn't this a valid\ntransformation of your original query?\n\n> select temp_max_60min,temp_dry_at_2m\n> from station s natural join\n> temp_dry_at_2m a full outer join temp_max_60min b using (station_id, timeobs)\n> where s.wmo_id=6065 \n> and _a.timeobs='2004-1-1 0:0:0' and b._timeobs='2004-1-1 0:0:0' \n> and '2004-1-1 0:0:0' between s.startdate and s.enddate;\n\nSeems like it's not eliminating any rows that would otherwise succeed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 10:45:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: full outer performance problem "
},
  {
    "msg_contents": "Sorry for not listing the exact layout of temp_XXXX:\n\nobsdb=> \\d temp_dry_at_2m\n Table \"public.temp_dry_at_2m\"\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n obshist_id | integer | not null\n station_id | integer | not null\n timeobs | timestamp without time zone | not null\n temp_dry_at_2m | real | not null\nIndexes:\n \"temp_dry_at_2m_pkey\" primary key, btree (obshist_id)\n \"temp_dry_at_2m_idx\" btree (station_id, timeobs)\n\nThe difference between the two queries is if a (station_id,timeobs) row \nis missing in one table, then the first returns one record(null,9.3) \nwhile the second return no records.\n\nRegards,\nKim Bisgaard.\n\nTom Lane wrote:\n\n>Kim Bisgaard <[email protected]> writes:\n> \n>\n>>I have two BIG tables (virtually identical) with 3 NOT NULL columns \n>>Station_id, TimeObs, Temp_XXXX, with indexes on (Station_id, TimeObs) \n>>and valid ANALYSE (set statistics=100). I want to join the two tables \n>>with a FULL OUTER JOIN.\n>> \n>>\n>\n>I'm confused. If the columns are NOT NULL, why isn't this a valid\n>transformation of your original query?\n>\n> \n>\n>>select temp_max_60min,temp_dry_at_2m\n>>from station s natural join\n>>temp_dry_at_2m a full outer join temp_max_60min b using (station_id, timeobs)\n>>where s.wmo_id=6065 \n>>and _a.timeobs='2004-1-1 0:0:0' and b._timeobs='2004-1-1 0:0:0' \n>>and '2004-1-1 0:0:0' between s.startdate and s.enddate;\n>> \n>>\n>\n>Seems like it's not eliminating any rows that would otherwise succeed.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> \n>\n\n-- \nKim Bisgaard\n\nComputer Department Phone: +45 3915 7562 (direct)\nDanish Meteorological Institute Fax: +45 3915 7460 (division)",
"msg_date": "Wed, 11 May 2005 09:05:05 +0200",
"msg_from": "Kim Bisgaard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: full outer performance problem"
},
{
"msg_contents": "Hi,\n\nLook for my comments further down...\n\nJohn A Meinel wrote:\n\n> Kim Bisgaard wrote:\n>\n>> Hi,\n>>\n>> I'm having problems with the query optimizer and FULL OUTER JOIN on\n>> PostgreSQL 7.4. I cannot get it to use my indexes with full outer joins.\n>> I might be naive, but I think that it should be possible?\n>>\n>> I have two BIG tables (virtually identical) with 3 NOT NULL columns\n>> Station_id, TimeObs, Temp_XXXX, with indexes on (Station_id, TimeObs)\n>> and valid ANALYSE (set statistics=100). I want to join the two tables\n>> with a FULL OUTER JOIN.\n>>\n>> When I specify the query as:\n>>\n>> select temp_max_60min,temp_dry_at_2m\n>> from station s natural join\n>> temp_dry_at_2m a full outer join temp_max_60min b using (station_id, \n>> timeobs)\n>> where s.wmo_id=6065\n>> and timeobs='2004-1-1 0:0:0'\n>> and '2004-1-1 0:0:0' between s.startdate and s.enddate;\n>>\n>> I get the correct results, BUT LOUSY performance, and the following \n>> explain:\n>>\n>> Nested Loop Left Join (cost=5.84..163484.08 rows=1349 width=12) \n>> (actual time=66146.815..119005.381 rows=1 loops=1)\n>> Filter: (COALESCE(\"outer\".timeobs, \"inner\".timeobs) = '2004-01-01 \n>> 00:00:00'::timestamp without time zone)\n>> -> Hash Join (cost=5.84..155420.24 rows=1349 width=16) (actual \n>> time=8644.449..110836.038 rows=109826 loops=1)\n>\n>\n> Well, the estimate here is quite a bit off. It thinks you will be\n> getting 1349 (which is probably why it picked a nested loop plan), but\n> then it is getting 109826 rows.\n> I'm guessing it is misunderstanding the selectivity of the timeobs \n> column.\n\nI think you are right..\n\n>\n>> Hash Cond: (\"outer\".station_id = \"inner\".station_id)\n>> -> Seq Scan on temp_dry_at_2m a (cost=0.00..120615.94 \n>> rows=6956994 width=16) (actual time=0.024..104548.515 rows=6956994 \n>> loops=1)\n>> -> Hash (cost=5.84..5.84 rows=1 width=4) (actual \n>> time=0.114..0.114 rows=0 loops=1)\n>> -> Index Scan using wmo_idx on station \n>> (cost=0.00..5.84 rows=1 width=4) (actual time=0.105..0.108 rows=1 \n>> loops=1)\n>> Index Cond: ((wmo_id = 6065) AND ('2004-01-01 \n>> 00:00:00'::timestamp without time zone >= startdate) AND ('2004-01-01 \n>> 00:00:00'::timestamp without time zone <= enddate))\n>> -> Index Scan using temp_max_60min_idx on temp_max_60min b \n>> (cost=0.00..5.96 rows=1 width=20) (actual time=0.071..0.071 rows=0 \n>> loops=109826)\n>> Index Cond: ((\"outer\".station_id = b.station_id) AND \n>> (\"outer\".timeobs = b.timeobs))\n>> Total runtime: 119005.499 ms\n>> (11 rows)\n>\n>\n> I think the bigger problem is that a full outer join says grab all rows,\n> even if they are null.\n>\n> What about this query:\n> SELECT temp_max_60min,temp_dry_at_2m\n> FROM (station s LEFT JOIN temp_dry_at_2m a USING (station_id, timeobs)\n> LEFT JOIN temp_max_60min b USING (station_id, timeobs)\n> where s.wmo_id=6065\n> and timeobs='2004-1-1 0:0:0'\n> and '2004-1-1 0:0:0' between s.startdate and s.enddate;\n\nThis works very well, and gives the correct result - thanks!!\n\n>\n> After that, you should probably have a multi-column index on\n> (station_id, timeobs), which lets postgres use just that index for the\n> lookup, rather than using an index and then a filter. (Looking at your\n> next query you might already have that index).\n\nYes I have.\n\n>\n>>\n>> If I change the query to (and thus negates the full outer join):\n>\n>\n> This is the same query, I think you messed up your copy and paste.\n\nNope. 
Changed \"and timeobs='2004-1-1 0:0:0' \" to \"and \na.timeobs='2004-1-1 0:0:0' and b.timeobs='2004-1-1 0:0:0' \"\n\n>\n>>\n>> select temp_max_60min,temp_dry_at_2m\n>> from station s natural join\n>> temp_dry_at_2m a full outer join temp_max_60min b using (station_id, \n>> timeobs)\n>> where s.wmo_id=6065\n>> and _a.timeobs='2004-1-1 0:0:0' and b._timeobs='2004-1-1 0:0:0'\n>> and '2004-1-1 0:0:0' between s.startdate and s.enddate;\n>>\n>>\n>> I get wrong results (In the case where one of the records is missing in\n>> one of the tables), BUT GOOD performance, and this query plan:\n>>\n>> Nested Loop (cost=0.00..17.83 rows=1 width=12) (actual \n>> time=79.221..79.236 rows=1 loops=1)\n>> -> Nested Loop (cost=0.00..11.82 rows=1 width=24) (actual \n>> time=65.517..65.526 rows=1 loops=1)\n>> -> Index Scan using wmo_idx on station (cost=0.00..5.83 \n>> rows=1 width=4) (actual time=0.022..0.026 rows=1 loops=1)\n>> Index Cond: ((wmo_id = 6065) AND ('2004-01-01 \n>> 00:00:00'::timestamp without time zone >= startdate) AND ('2004-01-01 \n>> 00:00:00'::timestamp without time zone <= enddate))\n>> -> Index Scan using temp_max_60min_idx on temp_max_60min b \n>> (cost=0.00..5.97 rows=1 width=20) (actual time=65.483..65.486 rows=1 \n>> loops=1)\n>> Index Cond: ((\"outer\".station_id = b.station_id) AND \n>> (b.timeobs = '2004-01-01 00:00:00'::timestamp without time zone))\n>> -> Index Scan using temp_dry_at_2m_idx on temp_dry_at_2m a \n>> (cost=0.00..6.00 rows=1 width=16) (actual time=13.694..13.698 rows=1 \n>> loops=1)\n>> Index Cond: ((\"outer\".station_id = a.station_id) AND \n>> (a.timeobs = '2004-01-01 00:00:00'::timestamp without time zone))\n>> Total runtime: 79.340 ms\n>> (9 rows)\n>>\n>>\n>> If further info like EXPLAIN VERBOSE is useful please say so and I will\n>> provide it.\n>>\n>> Thanks in advance!\n>> Kim Bisgaard.\n>\n>\n> I still feel like you will have a problem with an outer join in this\n> circumstance, because it will have to scan all of both tables.\n\nMaybe I misunderstand outer joins but since there are no rows with \nNULLs, I think it is a matter of finding the rows that are there or \nmakeing one up one if there are no rows?\n\n>\n> I think what you are wanting is \"give me everything where station_id =\n> X, and there is a row in either a or b\".\n> I think my LEFT JOIN example does that, but I also think there would be\n> a subselect form which would work, which might do better. 
Something like:\n>\n> SELECT temp_max_60min,temp_dry_at_2m\n> FROM (SELECT station_id, timeobs FROM station s\n> WHERE s.wmo_id=6065\n> AND timeobs = '2004-1-1 0:0:0'\n> AND '2004-1-1 0:0:0' BETWEEN s.startdate AND s.enddate\n> ) AS s\n> JOIN (SELECT temp_max_60min, temp_dry_at_2m\n> FROM temp_dry_at_2m a\n> FULL OUTER JOIN temp_max_60min b\n> USING (station_id, timeobs)\n> WHERE station_id = s.station_id\n> AND timeobs = '2004-1-1 0:0:0'\n> )\n> ;\n>\n> If I did this correctly, you should have a very restrictive scan done on\n> station, which only returns a few rows based on timeobs & station_id.\n> But it might be better to turn that final FULL OUTER JOIN into 2 LEFT\n> JOINs like I did the first time:\n>\n> SELECT temp_max_60min,temp_dry_at_2m\n> FROM (SELECT station_id, timeobs FROM station s\n> WHERE s.wmo_id=6065\n> AND timeobs = '2004-1-1 0:0:0'\n> AND '2004-1-1 0:0:0' BETWEEN s.startdate AND s.enddate\n> ) AS s\n> LEFT JOIN temp_dry_at_2m a USING (station_id, timeobs)\n> LEFT JOIN temp_max_60min b USING (station_id, timeobs)\n> ;\n>\n> I would hope postgres could do this from just my earlier plan. And I\n> hope I understand what you want, such that 2 LEFT JOINS work better than\n> your FULL OUTER JOIN. If you only want rows where one of both temp_dry\n> or temp_max exist, you probably could just add the line:\n>\n> WHERE (temp_max_60_min IS NOT NULL OR temp_dry_at_2m IS NOT NULL)\n\n\nThanks John. You have opened my eyes for a new way to formulate my queries!\n\n>\n>\n> John\n> =:->\n\n\n-- \nKim Bisgaard\n\nComputer Department Phone: +45 3915 7562 (direct)\nDanish Meteorological Institute Fax: +45 3915 7460 (division)\n\n",
"msg_date": "Wed, 11 May 2005 10:20:25 +0200",
"msg_from": "Kim Bisgaard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: full outer performance problem"
}
] |
[
{
"msg_contents": "What is the status of Postgres support for any sort of multi-machine \nscaling support? What are you meant to do once you've upgraded your \nbox and tuned the conf files as much as you can? But your query load \nis just too high for a single machine?\n\nUpgrading stock Dell boxes (I know we could be using better machines, \nbut I am trying to tackle the real issue) is not a hugely price \nefficient way of getting extra performance, nor particularly scalable \nin the long term.\n\nSo, when/is PG meant to be getting a decent partitioning system? \nMySQL is getting one (eventually) which is apparently meant to be \nsimiliar to Oracle's according to the docs. Clusgres does not appear \nto be widely/or at all used, and info on it seems pretty thin on the \nground, so I am\nnot too keen on going with that. Is the real solution to multi- \nmachine partitioning (as in, not like MySQLs MERGE tables) on \nPostgreSQL actually doing it in our application API? This seems like \na less than perfect solution once we want to add redundancy and \nthings into the mix. \n",
"msg_date": "Tue, 10 May 2005 11:03:26 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning / Clustering"
},
{
"msg_contents": "Alex Stapleton wrote:\n> What is the status of Postgres support for any sort of multi-machine\n> scaling support? What are you meant to do once you've upgraded your box\n> and tuned the conf files as much as you can? But your query load is\n> just too high for a single machine?\n>\n> Upgrading stock Dell boxes (I know we could be using better machines,\n> but I am trying to tackle the real issue) is not a hugely price\n> efficient way of getting extra performance, nor particularly scalable\n> in the long term.\n\nSwitch from Dell Xeon boxes, and go to Opterons. :) Seriously, Dell is\nfar away from Big Iron. I don't know what performance you are looking\nfor, but you can easily get into inserting 10M rows/day with quality\nhardware.\n\nBut actually is it your SELECT load that is too high, or your INSERT\nload, or something inbetween.\n\nBecause Slony is around if it is a SELECT problem.\nhttp://gborg.postgresql.org/project/slony1/projdisplay.php\n\nBasically, Slony is a Master/Slave replication system. So if you have\nINSERT going into the Master, you can have as many replicated slaves,\nwhich can handle your SELECT load.\nSlony is an asynchronous replicator, so there is a time delay from the\nINSERT until it will show up on a slave, but that time could be pretty\nsmall.\n\nThis would require some application level support, since an INSERT goes\nto a different place than a SELECT. But there has been some discussion\nabout pg_pool being able to spread the query load, and having it be\naware of the difference between a SELECT and an INSERT and have it route\nthe query to the correct host. The biggest problem being that functions\ncould cause a SELECT func() to actually insert a row, which pg_pool\nwouldn't know about. There are 2 possible solutions, a) don't do that\nwhen you are using this system, b) add some sort of comment hint so that\npg_pool can understand that the select is actually an INSERT, and needs\nto be done on the master.\n\n>\n> So, when/is PG meant to be getting a decent partitioning system? MySQL\n> is getting one (eventually) which is apparently meant to be similiar to\n> Oracle's according to the docs. Clusgres does not appear to be\n> widely/or at all used, and info on it seems pretty thin on the ground,\n> so I am\n> not too keen on going with that. Is the real solution to multi- machine\n> partitioning (as in, not like MySQLs MERGE tables) on PostgreSQL\n> actually doing it in our application API? This seems like a less than\n> perfect solution once we want to add redundancy and things into the mix.\n\nThere is also PGCluster\nhttp://pgfoundry.org/projects/pgcluster/\n\nWhich is trying to be more of a Synchronous multi-master system. I\nhaven't heard of Clusgres, so I'm guessing it is an older attempt, which\nhas been overtaken by pgcluster.\n\nJust realize that clusters don't necessarily scale like you would want\nthem too. Because at some point you have to insert into the same table,\nwhich means you need to hold a lock which prevents the other machine\nfrom doing anything. And with synchronous replication, you have to wait\nfor all of the machines to get a copy of the data before you can say it\nhas been committed, which does *not* scale well with the number of machines.\n\nIf you can make it work, I think having a powerful master server, who\ncan finish an INSERT quickly, and then having a bunch of Slony slaves\nwith a middleman (like pg_pool) to do load balancing among them, is the\nbest way to scale up. 
There are still some requirements, like not having\nto see the results of an INSERT instantly (though if you are using\nhinting to pg_pool, you could hint that this query must be done on the\nmaster, realizing that the more you do it, the more you slow everything\ndown).\n\nJohn\n=:->\n\nPS> I don't know what functionality has been actually implemented in\npg_pool, just that it was discussed in the past. Slony-II is also in the\nworks.",
"msg_date": "Tue, 10 May 2005 09:41:05 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
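To make the comment-hint idea above concrete: the hint syntax below is invented purely for illustration (no pg_pool release of this era parses such hints), and record_login() and the logins table are made-up names. The point is only that the application, not the pooler, knows which statements actually write:

```sql
-- Hypothetical hint convention, not a real pg_pool feature: a router could
-- string-match the leading comment and send this statement to the Slony
-- master, because the SELECT below writes a row via a function.
/* route: master */ SELECT record_login(42, now());

-- Plain reads could be balanced across the slaves, accepting that a row
-- committed on the master a moment ago may not have replicated yet.
SELECT count(*) FROM logins WHERE user_id = 42;
```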
{
"msg_contents": "\nI think that perhaps he was trying to avoid having to buy \"Big Iron\" at all.\n\nWith all the Opteron v. Xeon around here, and talk of $30,000 machines,\nperhaps it would be worth exploring the option of buying 10 cheapass\nmachines for $300 each. At the moment, that $300 buys you, from Dell, a\n2.5Ghz Pentium 4 w/ 256mb of RAM and a 40Gb hard drive and gigabit ethernet.\nThe aggregate CPU and bandwidth is pretty stupendous, but not as easy to\nharness as a single machine.\n\nFor those of us looking at batch and data warehousing applications, it would\nbe really handy to be able to partition databases, tables, and processing\nload across banks of cheap hardware.\n\nYes, clustering solutions can distribute the data, and can even do it on a\nper-table basis in some cases. This still leaves it up to the application's\nlogic to handle reunification of the data.\n\nIdeas:\n\t1. Create a table/storage type that consists of a select statement\non another machine. While I don't think the current executor is capable of\nworking on multiple nodes of an execution tree at the same time, it would be\ngreat if it could offload a select of tuples from a remote table to an\nentirely different server and merge the resulting data into the current\nexecution. I believe MySQL has this, and Oracle may implement it in another\nway.\n\n\t2. There is no #2 at this time, but I'm sure one can be\nhypothesized.\n\n...Google and other companies have definitely proved that one can harness\nhuge clusters of cheap hardware. It can't be _that_ hard, can it. :)\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of John A Meinel\nSent: Tuesday, May 10, 2005 7:41 AM\nTo: Alex Stapleton\nCc: [email protected]\nSubject: Re: [PERFORM] Partitioning / Clustering\n\nAlex Stapleton wrote:\n> What is the status of Postgres support for any sort of multi-machine \n> scaling support? What are you meant to do once you've upgraded your \n> box and tuned the conf files as much as you can? But your query load \n> is just too high for a single machine?\n>\n> Upgrading stock Dell boxes (I know we could be using better machines, \n> but I am trying to tackle the real issue) is not a hugely price \n> efficient way of getting extra performance, nor particularly scalable \n> in the long term.\n\nSwitch from Dell Xeon boxes, and go to Opterons. :) Seriously, Dell is far\naway from Big Iron. I don't know what performance you are looking for, but\nyou can easily get into inserting 10M rows/day with quality hardware.\n\nBut actually is it your SELECT load that is too high, or your INSERT load,\nor something inbetween.\n\nBecause Slony is around if it is a SELECT problem.\nhttp://gborg.postgresql.org/project/slony1/projdisplay.php\n\nBasically, Slony is a Master/Slave replication system. So if you have INSERT\ngoing into the Master, you can have as many replicated slaves, which can\nhandle your SELECT load.\nSlony is an asynchronous replicator, so there is a time delay from the\nINSERT until it will show up on a slave, but that time could be pretty\nsmall.\n\nThis would require some application level support, since an INSERT goes to a\ndifferent place than a SELECT. But there has been some discussion about\npg_pool being able to spread the query load, and having it be aware of the\ndifference between a SELECT and an INSERT and have it route the query to the\ncorrect host. 
The biggest problem being that functions could cause a SELECT\nfunc() to actually insert a row, which pg_pool wouldn't know about. There\nare 2 possible solutions, a) don't do that when you are using this system,\nb) add some sort of comment hint so that pg_pool can understand that the\nselect is actually an INSERT, and needs to be done on the master.\n\n>\n> So, when/is PG meant to be getting a decent partitioning system? \n> MySQL is getting one (eventually) which is apparently meant to be \n> similiar to Oracle's according to the docs. Clusgres does not appear \n> to be widely/or at all used, and info on it seems pretty thin on the \n> ground, so I am not too keen on going with that. Is the real solution \n> to multi- machine partitioning (as in, not like MySQLs MERGE tables) \n> on PostgreSQL actually doing it in our application API? This seems \n> like a less than perfect solution once we want to add redundancy and \n> things into the mix.\n\nThere is also PGCluster\nhttp://pgfoundry.org/projects/pgcluster/\n\nWhich is trying to be more of a Synchronous multi-master system. I haven't\nheard of Clusgres, so I'm guessing it is an older attempt, which has been\novertaken by pgcluster.\n\nJust realize that clusters don't necessarily scale like you would want them\ntoo. Because at some point you have to insert into the same table, which\nmeans you need to hold a lock which prevents the other machine from doing\nanything. And with synchronous replication, you have to wait for all of the\nmachines to get a copy of the data before you can say it has been committed,\nwhich does *not* scale well with the number of machines.\n\nIf you can make it work, I think having a powerful master server, who can\nfinish an INSERT quickly, and then having a bunch of Slony slaves with a\nmiddleman (like pg_pool) to do load balancing among them, is the best way to\nscale up. There are still some requirements, like not having to see the\nresults of an INSERT instantly (though if you are using hinting to pg_pool,\nyou could hint that this query must be done on the master, realizing that\nthe more you do it, the more you slow everything down).\n\nJohn\n=:->\n\nPS> I don't know what functionality has been actually implemented in\npg_pool, just that it was discussed in the past. Slony-II is also in the\nworks.\n\n",
"msg_date": "Tue, 10 May 2005 08:02:50 -0700",
"msg_from": "\"Adam Haberlach\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
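Idea #1 already has a rough counterpart in contrib/dblink, which ships with PostgreSQL: a local view can be defined over a SELECT that runs on another machine. The host, database, table and column names below are illustrative only, and every scan of the view re-runs the remote query in full; no quals or join work are pushed to the far side.

```sql
-- A "table that consists of a select statement on another machine",
-- approximated with contrib/dblink. Names are illustrative.
CREATE VIEW remote_orders AS
SELECT *
FROM dblink('host=10.0.0.2 dbname=shard2 user=app',
            'SELECT order_id, customer_id, total FROM orders')
     AS t(order_id integer, customer_id integer, total numeric);

-- The view can then be queried, and joined to local tables, as if it were
-- local, but the whole remote result set is shipped over each time.
SELECT customer_id, sum(total)
FROM remote_orders
GROUP BY customer_id;
```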
{
"msg_contents": "\nOn 10 May 2005, at 15:41, John A Meinel wrote:\n\n> Alex Stapleton wrote:\n>\n>> What is the status of Postgres support for any sort of multi-machine\n>> scaling support? What are you meant to do once you've upgraded \n>> your box\n>> and tuned the conf files as much as you can? But your query load is\n>> just too high for a single machine?\n>>\n>> Upgrading stock Dell boxes (I know we could be using better machines,\n>> but I am trying to tackle the real issue) is not a hugely price\n>> efficient way of getting extra performance, nor particularly scalable\n>> in the long term.\n>>\n>\n> Switch from Dell Xeon boxes, and go to Opterons. :) Seriously, Dell is\n> far away from Big Iron. I don't know what performance you are looking\n> for, but you can easily get into inserting 10M rows/day with quality\n> hardware.\n\nBetter hardware = More Efficient != More Scalable\n\n> But actually is it your SELECT load that is too high, or your INSERT\n> load, or something inbetween.\n>\n> Because Slony is around if it is a SELECT problem.\n> http://gborg.postgresql.org/project/slony1/projdisplay.php\n>\n> Basically, Slony is a Master/Slave replication system. So if you have\n> INSERT going into the Master, you can have as many replicated slaves,\n> which can handle your SELECT load.\n> Slony is an asynchronous replicator, so there is a time delay from the\n> INSERT until it will show up on a slave, but that time could be pretty\n> small.\n\n<snip>\n\n>\n>>\n>> So, when/is PG meant to be getting a decent partitioning system? \n>> MySQL\n>> is getting one (eventually) which is apparently meant to be \n>> similiar to\n>> Oracle's according to the docs. Clusgres does not appear to be\n>> widely/or at all used, and info on it seems pretty thin on the \n>> ground,\n>> so I am\n>> not too keen on going with that. Is the real solution to multi- \n>> machine\n>> partitioning (as in, not like MySQLs MERGE tables) on PostgreSQL\n>> actually doing it in our application API? This seems like a less \n>> than\n>> perfect solution once we want to add redundancy and things into \n>> the mix.\n>>\n>\n> There is also PGCluster\n> http://pgfoundry.org/projects/pgcluster/\n>\n> Which is trying to be more of a Synchronous multi-master system. I\n> haven't heard of Clusgres, so I'm guessing it is an older attempt, \n> which\n> has been overtaken by pgcluster.\n>\n> Just realize that clusters don't necessarily scale like you would want\n> them too. Because at some point you have to insert into the same \n> table,\n> which means you need to hold a lock which prevents the other machine\n> from doing anything. And with synchronous replication, you have to \n> wait\n> for all of the machines to get a copy of the data before you can \n> say it\n> has been committed, which does *not* scale well with the number of \n> machines.\n\nThis is why I mention partitioning. It solves this issue by storing \ndifferent data sets on different machines under the same schema. \nThese seperate chunks of the table can then be replicated as well for \ndata redundancy and so on. MySQL are working on these things, but PG \njust has a bunch of third party extensions, I wonder why these are \nnot being integrated into the main trunk :/ Thanks for pointing me to \nPGCluster though. It looks like it should be better than Slony at least.\n\n",
"msg_date": "Tue, 10 May 2005 16:10:30 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "\nOn 10 May 2005, at 16:02, Adam Haberlach wrote:\n\n>\n> I think that perhaps he was trying to avoid having to buy \"Big \n> Iron\" at all.\n\nYou would be right. Although we are not against paying a bit more \nthan $300 for a server ;)\n\n> With all the Opteron v. Xeon around here, and talk of $30,000 \n> machines,\n> perhaps it would be worth exploring the option of buying 10 cheapass\n> machines for $300 each. At the moment, that $300 buys you, from \n> Dell, a\n> 2.5Ghz Pentium 4 w/ 256mb of RAM and a 40Gb hard drive and gigabit \n> ethernet.\n> The aggregate CPU and bandwidth is pretty stupendous, but not as \n> easy to\n> harness as a single machine.\n\n<snip>\n\n> Yes, clustering solutions can distribute the data, and can even do \n> it on a\n> per-table basis in some cases. This still leaves it up to the \n> application's\n> logic to handle reunification of the data.\n\nIf your going to be programming that sort of logic into your API in \nthe beginning, it's not too much more work to add basic replication, \nload balancing and partitioning into it either. But the DB should be \nable to do it for you, adding that stuff in later is often more \ndifficult and less likely to get done.\n\n> Ideas:\n> 1. Create a table/storage type that consists of a select statement\n> on another machine. While I don't think the current executor is \n> capable of\n> working on multiple nodes of an execution tree at the same time, it \n> would be\n> great if it could offload a select of tuples from a remote table to an\n> entirely different server and merge the resulting data into the \n> current\n> execution. I believe MySQL has this, and Oracle may implement it \n> in another\n> way.\n\nMySQL sort of has this, it's not as good as Oracle's though. \nApparently there is a much better version of it in 5.1 though, that \nshould make it to stable sometime next year I imagine.\n\n> 2. There is no #2 at this time, but I'm sure one can be\n> hypothesized.\n\nI would of thought a particularly smart version of pg_pool could do \nit. It could partition data to different servers if it knew which \ncolumns to key by on each table.\n\n> ...Google and other companies have definitely proved that one can \n> harness\n> huge clusters of cheap hardware. It can't be _that_ hard, can it. :)\n\nI shudder to think how much the \"Big Iron\" equivalent of a google \ndata-center would cost.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of John A \n> Meinel\n> Sent: Tuesday, May 10, 2005 7:41 AM\n> To: Alex Stapleton\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Partitioning / Clustering\n>\n> Alex Stapleton wrote:\n>\n>> What is the status of Postgres support for any sort of multi-machine\n>> scaling support? What are you meant to do once you've upgraded your\n>> box and tuned the conf files as much as you can? But your query load\n>> is just too high for a single machine?\n>>\n>> Upgrading stock Dell boxes (I know we could be using better machines,\n>> but I am trying to tackle the real issue) is not a hugely price\n>> efficient way of getting extra performance, nor particularly scalable\n>> in the long term.\n>>\n>\n> Switch from Dell Xeon boxes, and go to Opterons. :) Seriously, Dell \n> is far\n> away from Big Iron. 
I don't know what performance you are looking \n> for, but\n> you can easily get into inserting 10M rows/day with quality hardware.\n>\n> But actually is it your SELECT load that is too high, or your \n> INSERT load,\n> or something inbetween.\n>\n> Because Slony is around if it is a SELECT problem.\n> http://gborg.postgresql.org/project/slony1/projdisplay.php\n>\n> Basically, Slony is a Master/Slave replication system. So if you \n> have INSERT\n> going into the Master, you can have as many replicated slaves, \n> which can\n> handle your SELECT load.\n> Slony is an asynchronous replicator, so there is a time delay from the\n> INSERT until it will show up on a slave, but that time could be pretty\n> small.\n>\n> This would require some application level support, since an INSERT \n> goes to a\n> different place than a SELECT. But there has been some discussion \n> about\n> pg_pool being able to spread the query load, and having it be aware \n> of the\n> difference between a SELECT and an INSERT and have it route the \n> query to the\n> correct host. The biggest problem being that functions could cause \n> a SELECT\n> func() to actually insert a row, which pg_pool wouldn't know about. \n> There\n> are 2 possible solutions, a) don't do that when you are using this \n> system,\n> b) add some sort of comment hint so that pg_pool can understand \n> that the\n> select is actually an INSERT, and needs to be done on the master.\n>\n>\n>>\n>> So, when/is PG meant to be getting a decent partitioning system?\n>> MySQL is getting one (eventually) which is apparently meant to be\n>> similiar to Oracle's according to the docs. Clusgres does not appear\n>> to be widely/or at all used, and info on it seems pretty thin on the\n>> ground, so I am not too keen on going with that. Is the real solution\n>> to multi- machine partitioning (as in, not like MySQLs MERGE tables)\n>> on PostgreSQL actually doing it in our application API? This seems\n>> like a less than perfect solution once we want to add redundancy and\n>> things into the mix.\n>>\n>\n> There is also PGCluster\n> http://pgfoundry.org/projects/pgcluster/\n>\n> Which is trying to be more of a Synchronous multi-master system. I \n> haven't\n> heard of Clusgres, so I'm guessing it is an older attempt, which \n> has been\n> overtaken by pgcluster.\n>\n> Just realize that clusters don't necessarily scale like you would \n> want them\n> too. Because at some point you have to insert into the same table, \n> which\n> means you need to hold a lock which prevents the other machine from \n> doing\n> anything. And with synchronous replication, you have to wait for \n> all of the\n> machines to get a copy of the data before you can say it has been \n> committed,\n> which does *not* scale well with the number of machines.\n>\n> If you can make it work, I think having a powerful master server, \n> who can\n> finish an INSERT quickly, and then having a bunch of Slony slaves \n> with a\n> middleman (like pg_pool) to do load balancing among them, is the \n> best way to\n> scale up. There are still some requirements, like not having to see \n> the\n> results of an INSERT instantly (though if you are using hinting to \n> pg_pool,\n> you could hint that this query must be done on the master, \n> realizing that\n> the more you do it, the more you slow everything down).\n>\n> John\n> =:->\n>\n> PS> I don't know what functionality has been actually implemented in\n> pg_pool, just that it was discussed in the past. 
Slony-II is also \n> in the\n> works.\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> [email protected])\n>\n>\n\n",
"msg_date": "Tue, 10 May 2005 16:24:23 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "> exploring the option of buying 10 cheapass\n> machines for $300 each. At the moment, that $300 buys you, from Dell, a\n> 2.5Ghz Pentium 4\n\nBuy cheaper ass Dells with an AMD 64 3000+. Beats the crap out of the 2.5\nGHz Pentium, especially for PostgreSQL.\n\nSee the thread \"Whence the Opterons\" for more....\n\nRick\n\[email protected] wrote on 05/10/2005 10:02:50 AM:\n\n>\n> I think that perhaps he was trying to avoid having to buy \"Big Iron\" at\nall.\n>\n> With all the Opteron v. Xeon around here, and talk of $30,000 machines,\n> perhaps it would be worth exploring the option of buying 10 cheapass\n> machines for $300 each. At the moment, that $300 buys you, from Dell, a\n> 2.5Ghz Pentium 4 w/ 256mb of RAM and a 40Gb hard drive and gigabit\nethernet.\n> The aggregate CPU and bandwidth is pretty stupendous, but not as easy to\n> harness as a single machine.\n>\n> For those of us looking at batch and data warehousing applications, it\nwould\n> be really handy to be able to partition databases, tables, and processing\n> load across banks of cheap hardware.\n>\n> Yes, clustering solutions can distribute the data, and can even do it on\na\n> per-table basis in some cases. This still leaves it up to the\napplication's\n> logic to handle reunification of the data.\n>\n> Ideas:\n> 1. Create a table/storage type that consists of a select statement\n> on another machine. While I don't think the current executor is capable\nof\n> working on multiple nodes of an execution tree at the same time, it would\nbe\n> great if it could offload a select of tuples from a remote table to an\n> entirely different server and merge the resulting data into the current\n> execution. I believe MySQL has this, and Oracle may implement it in\nanother\n> way.\n>\n> 2. There is no #2 at this time, but I'm sure one can be\n> hypothesized.\n>\n> ...Google and other companies have definitely proved that one can harness\n> huge clusters of cheap hardware. It can't be _that_ hard, can it. :)\n>\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of John A\nMeinel\n> Sent: Tuesday, May 10, 2005 7:41 AM\n> To: Alex Stapleton\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Partitioning / Clustering\n>\n> Alex Stapleton wrote:\n> > What is the status of Postgres support for any sort of multi-machine\n> > scaling support? What are you meant to do once you've upgraded your\n> > box and tuned the conf files as much as you can? But your query load\n> > is just too high for a single machine?\n> >\n> > Upgrading stock Dell boxes (I know we could be using better machines,\n> > but I am trying to tackle the real issue) is not a hugely price\n> > efficient way of getting extra performance, nor particularly scalable\n> > in the long term.\n>\n> Switch from Dell Xeon boxes, and go to Opterons. :) Seriously, Dell is\nfar\n> away from Big Iron. I don't know what performance you are looking for,\nbut\n> you can easily get into inserting 10M rows/day with quality hardware.\n>\n> But actually is it your SELECT load that is too high, or your INSERT\nload,\n> or something inbetween.\n>\n> Because Slony is around if it is a SELECT problem.\n> http://gborg.postgresql.org/project/slony1/projdisplay.php\n>\n> Basically, Slony is a Master/Slave replication system. 
So if you have\nINSERT\n> going into the Master, you can have as many replicated slaves, which can\n> handle your SELECT load.\n> Slony is an asynchronous replicator, so there is a time delay from the\n> INSERT until it will show up on a slave, but that time could be pretty\n> small.\n>\n> This would require some application level support, since an INSERT goes\nto a\n> different place than a SELECT. But there has been some discussion about\n> pg_pool being able to spread the query load, and having it be aware of\nthe\n> difference between a SELECT and an INSERT and have it route the query to\nthe\n> correct host. The biggest problem being that functions could cause a\nSELECT\n> func() to actually insert a row, which pg_pool wouldn't know about. There\n> are 2 possible solutions, a) don't do that when you are using this\nsystem,\n> b) add some sort of comment hint so that pg_pool can understand that the\n> select is actually an INSERT, and needs to be done on the master.\n>\n> >\n> > So, when/is PG meant to be getting a decent partitioning system?\n> > MySQL is getting one (eventually) which is apparently meant to be\n> > similiar to Oracle's according to the docs. Clusgres does not appear\n> > to be widely/or at all used, and info on it seems pretty thin on the\n> > ground, so I am not too keen on going with that. Is the real solution\n> > to multi- machine partitioning (as in, not like MySQLs MERGE tables)\n> > on PostgreSQL actually doing it in our application API? This seems\n> > like a less than perfect solution once we want to add redundancy and\n> > things into the mix.\n>\n> There is also PGCluster\n> http://pgfoundry.org/projects/pgcluster/\n>\n> Which is trying to be more of a Synchronous multi-master system. I\nhaven't\n> heard of Clusgres, so I'm guessing it is an older attempt, which has been\n> overtaken by pgcluster.\n>\n> Just realize that clusters don't necessarily scale like you would want\nthem\n> too. Because at some point you have to insert into the same table, which\n> means you need to hold a lock which prevents the other machine from doing\n> anything. And with synchronous replication, you have to wait for all of\nthe\n> machines to get a copy of the data before you can say it has been\ncommitted,\n> which does *not* scale well with the number of machines.\n>\n> If you can make it work, I think having a powerful master server, who can\n> finish an INSERT quickly, and then having a bunch of Slony slaves with a\n> middleman (like pg_pool) to do load balancing among them, is the best way\nto\n> scale up. There are still some requirements, like not having to see the\n> results of an INSERT instantly (though if you are using hinting to\npg_pool,\n> you could hint that this query must be done on the master, realizing that\n> the more you do it, the more you slow everything down).\n>\n> John\n> =:->\n>\n> PS> I don't know what functionality has been actually implemented in\n> pg_pool, just that it was discussed in the past. Slony-II is also in the\n> works.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n",
"msg_date": "Tue, 10 May 2005 10:43:54 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Adam Haberlach wrote:\n> I think that perhaps he was trying to avoid having to buy \"Big Iron\" at all.\n>\n> With all the Opteron v. Xeon around here, and talk of $30,000 machines,\n> perhaps it would be worth exploring the option of buying 10 cheapass\n> machines for $300 each. At the moment, that $300 buys you, from Dell, a\n> 2.5Ghz Pentium 4 w/ 256mb of RAM and a 40Gb hard drive and gigabit ethernet.\n> The aggregate CPU and bandwidth is pretty stupendous, but not as easy to\n> harness as a single machine.\n>\n> For those of us looking at batch and data warehousing applications, it would\n> be really handy to be able to partition databases, tables, and processing\n> load across banks of cheap hardware.\n>\n> Yes, clustering solutions can distribute the data, and can even do it on a\n> per-table basis in some cases. This still leaves it up to the application's\n> logic to handle reunification of the data.\n\nSure. A lot of this is application dependent, though. For instance\nforeign key constraints. In a general cluster solution, you would allow\nforeign keys across partitions. I have a feeling this would be extra\nslow, and hard to do correctly. Multi-machine transactions are also a\ndifficulty, since WAL now has to take into account all machines, and you\nhave to wait for fsync on all of them.\n\nI'm not sure how Oracle does it, but these things seem like they prevent\nclustering from really scaling very well.\n\n>\n> Ideas:\n> \t1. Create a table/storage type that consists of a select statement\n> on another machine. While I don't think the current executor is capable of\n> working on multiple nodes of an execution tree at the same time, it would be\n> great if it could offload a select of tuples from a remote table to an\n> entirely different server and merge the resulting data into the current\n> execution. I believe MySQL has this, and Oracle may implement it in another\n> way.\n>\n> \t2. There is no #2 at this time, but I'm sure one can be\n> hypothesized.\n>\n> ...Google and other companies have definitely proved that one can harness\n> huge clusters of cheap hardware. It can't be _that_ hard, can it. :)\n\nAgain, it depends on the application. A generic database with lots of\ncross reference integrity checking does not work on a cluster very well.\nA very distributed db where you don't worry about cross references does\nscale. Google made a point of making their application work in a\ndistributed manner.\n\nIn the other post he mentions that pg_pool could naturally split out the\nrows into different machines based on partitioning, etc. I would argue\nthat it is more of a custom pool daemon based on the overall\napplication. Because you have to start dealing with things like\ncross-machine joins. Who handles that? the pool daemon has to, since it\nis the only thing that talks to both tables. I think you could certainly\nwrite a reasonably simple application specific daemon where all of the\nclients send their queries to, and it figures out where they need to go,\nand aggregates them as necessary. But a fully generic one is *not*\nsimple at all, and I think is far out of the scope of something like\npg_pool.\n\nI'm guessing that PGCluster is looking at working on that, and it might\nbe true that pg_pool is thinking about it. 
But just thinking about the\nvery simple query:\n\nSELECT row1, row2 FROM table1_on_machine_a NATURAL JOIN table2_on_machine_b\nWHERE restrict_table_1 AND restrict_table_2\nAND restrict_1_based_on_2;\n\nThis needs to be broken into something like:\n\nSELECT row1 FROM table1_on_machine_a\n\tWHERE restrict_table_1\n\tORDER BY join_column;\nSELECT row2 FROM table2_on_machine_b\n\tWHERE restrict_table_2\n\tORDER BY join_column;\n\nThen these rows need to be merge_joined, and the restrict_1_based_on_2\nneeds to be applied.\nThis is in no way trivial, and I think it is outside the scope of\npg_pool. Now maybe if you restrict yourself so that each query stays\nwithin one machine you can make it work. Or write your own app to handle\nsome of this transparently for the clients. But I would expect to make\nthe problem feasible, it would not be a generic solution.\n\nMaybe I'm off base, I don't really keep track of pg_pool/PGCluster/etc.\nBut I can see that the problem is very difficult. Not at the very least,\nthis is a simple query. And it doesn't even do optimizations. You might\nactually prefer the above to be done with a Nestloop style, where\ntable_1 is selected, and then for each row you do a single index select\non table_2. But how is your app going to know that? It has to have the\nstatistics from the backend databases. And if it has to place an extra\nquery to get those statistics, you just hurt your scalability even more.\nWhereas big-iron already has all the statistics, and can optimize the\nquery plan.\n\nPerhaps pg_cluster will handle this, by maintaining full statistics\nacross the cluster on each machine, so that more optimal queries can be\nperformed. I don't really know.\n\nJohn\n=:->",
"msg_date": "Tue, 10 May 2005 10:46:54 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
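The decomposition John sketches above — two pre-filtered, pre-sorted scans followed by a merge join in the middleware — could look roughly like the following. This is only an illustrative sketch under assumed names: the hosts, tables, columns and filter predicates are invented, and it is not how pg_pool or PGCluster actually behave.

# Hypothetical middleware that runs John's two per-machine queries and
# merge-joins the sorted streams locally. Everything named here is made up.
import psycopg2

def sorted_rows(dsn, sql):
    conn = psycopg2.connect(dsn)
    try:
        cur = conn.cursor()
        cur.execute(sql)
        return cur.fetchall()
    finally:
        conn.close()

a = sorted_rows("host=machine_a dbname=app",
                "SELECT join_col, row1 FROM table1 WHERE region = 'EU' ORDER BY join_col")
b = sorted_rows("host=machine_b dbname=app",
                "SELECT join_col, row2 FROM table2 WHERE status = 'open' ORDER BY join_col")

cross_restriction = lambda ra, rb: True   # stand-in for restrict_1_based_on_2

# classic two-pointer merge join over the two sorted result sets
results, i, j = [], 0, 0
while i < len(a) and j < len(b):
    if a[i][0] < b[j][0]:
        i += 1
    elif a[i][0] > b[j][0]:
        j += 1
    else:
        key = a[i][0]
        i2 = i
        while i2 < len(a) and a[i2][0] == key:
            i2 += 1
        j2 = j
        while j2 < len(b) and b[j2][0] == key:
            j2 += 1
        # equal keys: emit the cross product of the matching runs,
        # filtered by the cross-table restriction
        for ra in a[i:i2]:
            for rb in b[j:j2]:
                if cross_restriction(ra, rb):
                    results.append((ra[1], rb[1]))
        i, j = i2, j2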
{
"msg_contents": "\n\n> SELECT row1, row2 FROM table1_on_machine_a NATURAL JOIN \n> table2_on_machine_b\n> WHERE restrict_table_1 AND restrict_table_2\n> AND restrict_1_based_on_2;\n\n\tI don't think that's ever going to be efficient...\n\tWhat would be efficient would be, for instance, a Join of a part of a \ntable against another part of another table which both happen to be on the \nsame machine, because the partitioning was done with this in mind (ie. for \ninstance partitioning on client_id and keeping the information for each \nclient on the same machine).\n\n\tYou could build your smart pool daemon in pl/pgsql and use dblink ! At \nleast you have the query parser built-in.\n\n\tI wonder how Oracle does it ;)\n",
"msg_date": "Tue, 10 May 2005 19:29:59 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Alex,\n\n> This is why I mention partitioning. It solves this issue by storing \n> different data sets on different machines under the same schema. \n\nThat's clustering, actually. Partitioning is simply dividing up a table into \nchunks and using the chunks intelligently. Putting those chunks on seperate \nmachines is another thing entirely. \n\nWe're working on partitioning through the Bizgres sub-project:\nwww.bizgres.org / http://pgfoundry.org/projects/bizgres/\n... and will be pushing it to the main PostgreSQL when we have something.\n\nI invite you to join the mailing list.\n\n> These seperate chunks of the table can then be replicated as well for \n> data redundancy and so on. MySQL are working on these things, \n\nDon't hold your breath. MySQL, to judge by their first \"clustering\" \nimplementation, has a *long* way to go before they have anything usable. In \nfact, at OSCON their engineers were asking Jan Wieck for advice.\n\nIf you have $$$ to shell out, my employer (GreenPlum) has a multi-machine \ndistributed version of PostgreSQL. It's proprietary, though. \nwww.greenplum.com.\n\nIf you have more time than money, I understand that Stanford is working on \nthis problem:\nhttp://www-db.stanford.edu/~bawa/\n\nBut, overall, some people on this list are very mistaken in thinking it's an \neasy problem. GP has devoted something like 5 engineers for 3 years to \ndevelop their system. Oracle spent over $100 million to develop RAC. \n\n> but PG \n> just has a bunch of third party extensions, I wonder why these are \n> not being integrated into the main trunk :/ \n\nBecause it represents a host of complex functionality which is not applicable \nto most users? Because there are 4 types of replication and 3 kinds of \nclusering and not all users want the same kind?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 10 May 2005 10:46:24 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Quoting [email protected]:\n\n> > exploring the option of buying 10 cheapass\n> > machines for $300 each. At the moment, that $300 buys you, from\n> Dell, a\n> > 2.5Ghz Pentium 4\n> \n> Buy cheaper ass Dells with an AMD 64 3000+. Beats the crap out of\n> the 2.5\n> GHz Pentium, especially for PostgreSQL.\n\nWhence \"Dells with an AMD 64\" ?? Perhaps you skimmed:\n\n http://www.thestreet.com/tech/kcswanson/10150604.html\nor\n http://www.eweek.com/article2/0,1759,1553822,00.asp\n\n\n\n\n",
"msg_date": "Tue, 10 May 2005 13:49:05 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "On Tue, May 10, 2005 at 07:29:59PM +0200, PFC wrote:\n> \tI wonder how Oracle does it ;)\n\nOracle *clustering* demands shared storage. So you've shifted your money\nfrom big-iron CPUs to big-iron disk arrays.\n\nOracle replication works similar to Slony, though it supports a lot more\nmodes (ie: syncronous).\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 10 May 2005 16:50:54 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Quoting Alex Stapleton <[email protected]>:\n\n> This is why I mention partitioning. It solves this issue by storing \n> different data sets on different machines under the same schema. \n> These seperate chunks of the table can then be replicated as well for\n> data redundancy and so on. MySQL are working on these things, but PG \n> just has a bunch of third party extensions, I wonder why these are \n> not being integrated into the main trunk :/ Thanks for pointing me to\n> PGCluster though. It looks like it should be better than Slony at\n> least.\n\nAcross a decade or two of projects, including creating a federated\ndatabase engine for Simba, I've become rather dubious of horizontal\npartitions (across disks or servers), either to improve performance, or\njust to scale up and not lose performance. [[The one exception is for\n<emphasis> non-time-critical read-only</emphasis> systems, with\nSlony-style replication.]]\n\nThe most successful high-volume systems I've seen have broken up\ndatabases functionally, like a pipeline, where different applications\nuse different sections of the pipe. \n\nThe highest-volume system I've worked on is Acxiom's gigantic\ndata-cleansing system. This is the central clearinghouse for every scrap\nof demographic that can be associated with some North American,\nsomewhere. Think of D&B for 300M people (some dead). The volumes are\njust beyond belief, for both updates and queries. At Acxiom, the\ndatasets are so large, even after partitioning, that they just\nconstantly cycle them through memory, and commands are executes in\nconvoys --- sort of like riding a paternoster.\n..........\nAnybody been tracking on what Mr Stonebraker's been up to, lately?\nDatastream management. Check it out. Like replication, everybody\nhand-rolled their own datastream systems until finally somebody else\ngeneralized it well enough that it didn't have to be built from scratch\nevery time.\n\nDatastream systems require practically no locking, let alone distributed\ntransactions. They give you some really strong guarantees on transaction\nelapsed-time and throughput. \n.......\nWhere is this all leading? Well, for scaling data like this, the one\nfeature that you need is the ability of procedures/rules on one server\nto perform queries/procedures on another. MSSQL has linked servers and\n(blech) OpenQuery. This lets you do reasonably-efficient work when you\nonly deal with one table at a time. Do NOT try anything fancy with\nmulti-table joins; timeouts are unavoidable, and painful.\n\nPostgres has a natural advantage in such a distributed server system:\nall table/index stats are openly available through the SQL interface,\nfor one server to make rational query plans involving another server's\nresources. God! I would have killed for that when I was writing a\nfederated SQL engine; the kluges you need to do this at arms-length from\nthat information are true pain.\n\nSo where should I go look, to see what's been done so far, on a Postgres\nthat can treat another PG server as a new table type?\n\n\n",
"msg_date": "Tue, 10 May 2005 14:55:55 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "On Tue, May 10, 2005 at 02:55:55PM -0700, Mischa Sandberg wrote:\n> just beyond belief, for both updates and queries. At Acxiom, the\n> datasets are so large, even after partitioning, that they just\n> constantly cycle them through memory, and commands are executes in\n> convoys --- sort of like riding a paternoster.\n\nSpeaking of which... what's the status of the patch that allows seqscans\nto piggyback on already running seqscans on the same table?\n\n> So where should I go look, to see what's been done so far, on a Postgres\n> that can treat another PG server as a new table type?\n\nTo the best of my knowledge no such work has been done. There is a\nproject (who's name escapes me) that lets you run queries against a\nremote postgresql server from a postgresql connection to a different\nserver, which could serve as the basis for what you're proposing.\n\nBTW, given your experience, you might want to check out Bizgres.\n(http://pgfoundry.org/projects/bizgres/) I'm sure your insights would be\nmost welcome.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Tue, 10 May 2005 17:15:54 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "> This is why I mention partitioning. It solves this issue by storing \n> different data sets on different machines under the same schema. These \n> seperate chunks of the table can then be replicated as well for data \n> redundancy and so on. MySQL are working on these things\n\n*laff*\n\nYeah, like they've been working on views for the last 5 years, and still \nhaven't released them :D :D :D\n\nChris\n",
"msg_date": "Wed, 11 May 2005 09:24:10 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Quoting Christopher Kings-Lynne <[email protected]>:\n\n> > This is why I mention partitioning. It solves this issue by storing\n> > different data sets on different machines under the same schema. \n> > These seperate chunks of the table can then be replicated as well for \n> > data redundancy and so on. MySQL are working on these things\n> *laff*\n> Yeah, like they've been working on views for the last 5 years, and\n> still haven't released them :D :D :D\n\n? \nhttp://dev.mysql.com/doc/mysql/en/create-view.html\n...for MySQL 5.0.1+ ?\n\n",
"msg_date": "Tue, 10 May 2005 18:34:26 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Mischa Sandberg wrote:\n> Quoting Christopher Kings-Lynne <[email protected]>:\n> \n> \n>>>This is why I mention partitioning. It solves this issue by storing\n>>>different data sets on different machines under the same schema. \n>>>These seperate chunks of the table can then be replicated as well for \n>>>data redundancy and so on. MySQL are working on these things\n>>\n>>*laff*\n>>Yeah, like they've been working on views for the last 5 years, and\n>>still haven't released them :D :D :D\n> \n> \n> ? \n> http://dev.mysql.com/doc/mysql/en/create-view.html\n> ...for MySQL 5.0.1+ ?\n\nYes but MySQL 5 isn't out yet (considered stable).\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Tue, 10 May 2005 18:48:48 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "\n>>*laff*\n>>Yeah, like they've been working on views for the last 5 years, and\n>>still haven't released them :D :D :D\n> \n> \n> ? \n> http://dev.mysql.com/doc/mysql/en/create-view.html\n> ...for MySQL 5.0.1+ ?\n\nGive me a call when it's RELEASED.\n\nChris\n",
"msg_date": "Wed, 11 May 2005 09:59:14 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Quoting Christopher Kings-Lynne <[email protected]>:\n\n> \n> >>*laff*\n> >>Yeah, like they've been working on views for the last 5 years, and\n> >>still haven't released them :D :D :D\n> > \n> > ? \n> > http://dev.mysql.com/doc/mysql/en/create-view.html\n> > ...for MySQL 5.0.1+ ?\n> \n> Give me a call when it's RELEASED.\n\n\n:-) Touche'\n\n\n",
"msg_date": "Tue, 10 May 2005 19:06:27 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Quoting \"Jim C. Nasby\" <[email protected]>:\n\n> To the best of my knowledge no such work has been done. There is a\n> project (who's name escapes me) that lets you run queries against a\n> remote postgresql server from a postgresql connection to a different\n> server, which could serve as the basis for what you're proposing.\n\nOkay, if the following looks right to the powerthatbe, I'd like to start\na project. Here's the proposition:\n\n\"servername.dbname.schema.object\" would change RangeVar, which would\naffect much code. \"dbname.schema.object\" itself is not implemented in\n8.0. So, simplicity dictates something like:\n\ntable pg_remote(schemaname text, connectby text, remoteschema text)\n\nThe pg_statistic info from a remote server cannot be cached in local\npg_statistic, without inventing pseudo reloids as well as a\npseudoschema. Probably cleaner to cache it somewhere else. I'm still\nreading down the path that puts pg_statistic data where costsize can get\nat it.\n\nFirst step: find out whether one can link libpq.so to postmaster :-)\n\n",
"msg_date": "Tue, 10 May 2005 19:19:13 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Josh Berkus wrote:\n> Don't hold your breath. MySQL, to judge by their first \"clustering\" \n> implementation, has a *long* way to go before they have anything usable.\n\nOh? What's wrong with MySQL's clustering implementation?\n\n-Neil\n",
"msg_date": "Wed, 11 May 2005 12:49:06 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Neil Conway wrote:\n> Josh Berkus wrote:\n> \n>> Don't hold your breath. MySQL, to judge by their first \"clustering\" \n>> implementation, has a *long* way to go before they have anything usable.\n> \n> \n> Oh? What's wrong with MySQL's clustering implementation?\n\nRam only tables :)\n\n> \n> -Neil\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Tue, 10 May 2005 20:11:33 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "On Tue, May 10, 2005 at 08:02:50 -0700,\n Adam Haberlach <[email protected]> wrote:\n> \n> \n> With all the Opteron v. Xeon around here, and talk of $30,000 machines,\n> perhaps it would be worth exploring the option of buying 10 cheapass\n> machines for $300 each. At the moment, that $300 buys you, from Dell, a\n> 2.5Ghz Pentium 4 w/ 256mb of RAM and a 40Gb hard drive and gigabit ethernet.\n> The aggregate CPU and bandwidth is pretty stupendous, but not as easy to\n> harness as a single machine.\n\nThat isn't going to be ECC ram. I don't think you really want to use\nnon-ECC ram in a critical database.\n",
"msg_date": "Tue, 10 May 2005 23:12:24 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> Neil Conway wrote:\n>> Oh? What's wrong with MySQL's clustering implementation?\n> \n> Ram only tables :)\n\nSure, but that hardly makes it not \"usable\". Considering the price of \nRAM these days, having enough RAM to hold the database (distributed over \nthe entire cluster) is perfectly acceptable for quite a few people.\n\n(Another deficiency is in 4.0, predicates in queries would not be pushed \ndown to storage nodes -- so you had to stream the *entire* table over \nthe network, and then apply the WHERE clause at the frontend query node. \nThat is fixed in 5.0, though.)\n\n-Neil\n",
"msg_date": "Wed, 11 May 2005 14:39:22 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Neil,\n\n> Sure, but that hardly makes it not \"usable\". Considering the price of\n> RAM these days, having enough RAM to hold the database (distributed over\n> the entire cluster) is perfectly acceptable for quite a few people.\n\nThe other problem, as I was told it at OSCON, was that these were not \nhigh-availability clusters; it's impossible to add a server to an existing \ncluster, and a server going down is liable to take the whole cluster down. \nMind you, I've not tried that aspect of it myself; once I saw the ram-only \nrule, we switched to something else.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 10 May 2005 21:44:06 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "On Tue, 2005-05-10 at 11:03 +0100, Alex Stapleton wrote:\n> So, when/is PG meant to be getting a decent partitioning system? \n\nISTM that your question seems to confuse where code comes from. Without\nmeaning to pick on you, or reply rudely, I'd like to explore that\nquestion. Perhaps it should be a FAQ entry.\n\nAll code is written by someone, and those people need to eat. Some\npeople are fully or partly funded to perform their tasks on this project\n(coding, patching, etc). Others contribute their time for a variety of\nreasons where involvement has a positive benefit.\n\nYou should ask these questions:\n- Is anyone currently working on (Feature X)?\n- If not, Can I do it myself?\n- If not, and I still want it, can I fund someone else to build it for\nme?\n\nAsking \"when is Feature X going to happen\" is almost certainly going to\nget the answer \"never\" otherwise, if the initial development is large\nand complex. There are many TODO items that have lain untouched for\nyears, even though adding the feature has been discussed and agreed.\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Wed, 11 May 2005 08:16:20 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Josh Berkus wrote:\n> The other problem, as I was told it at OSCON, was that these were not \n> high-availability clusters; it's impossible to add a server to an existing \n> cluster\n\nYeah, that's a pretty significant problem.\n\n> a server going down is liable to take the whole cluster down.\n\nThat's news to me. Do you have more information on this?\n\n-Neil\n",
"msg_date": "Wed, 11 May 2005 17:55:10 +1000",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "For an interesting look at scalability, clustering, caching, etc for a\nlarge site have a look at how livejournal did it.\nhttp://www.danga.com/words/2004_lisa/lisa04.pdf\n\nThey have 2.6 Million active users, posting 200 new blog entries per\nminute, plus many comments and countless page views.\n\nAlthough this system is of a different sort to the type I work on it's\ninteresting to see how they've made it scale.\n\nThey use mysql on dell hardware! And found single master replication did\nnot scale. There's a section on multimaster replication, not sure if\nthey use it. The main approach they use is to parition users into\nspefic database clusters. Caching is done using memcached at the\napplication level to avoid hitting the db for rendered pageviews.\n\nIt's interesting that the solution livejournal have arrived at is quite\nsimilar in ways to the way google is set up.\n\nDavid\n",
"msg_date": "Wed, 11 May 2005 08:57:57 +0100",
"msg_from": "\"David Roussel\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
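The "partition users into specific database clusters" approach described above usually boils down to a small directory database that maps each user to a cluster, with the application routing all of that user's queries there. A minimal sketch, with invented DSNs and table names (LiveJournal's real implementation may well differ):

import psycopg2

CLUSTER_DSN = {
    1: "host=db-cluster-1 dbname=app",
    2: "host=db-cluster-2 dbname=app",
}

def cluster_for_user(directory_conn, user_id):
    # the tiny, heavily cached "global" lookup: which cluster owns this user?
    cur = directory_conn.cursor()
    cur.execute("SELECT cluster_id FROM user_directory WHERE user_id = %s", (user_id,))
    return cur.fetchone()[0]

def fetch_entries(directory_conn, user_id):
    # all of one user's data lives on a single cluster, so no cross-node joins
    dsn = CLUSTER_DSN[cluster_for_user(directory_conn, user_id)]
    with psycopg2.connect(dsn) as conn:
        cur = conn.cursor()
        cur.execute(
            "SELECT title, posted_at FROM blog_entries WHERE user_id = %s "
            "ORDER BY posted_at DESC LIMIT 20", (user_id,))
        return cur.fetchall()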
{
"msg_contents": "\nOn 11 May 2005, at 08:16, Simon Riggs wrote:\n\n> On Tue, 2005-05-10 at 11:03 +0100, Alex Stapleton wrote:\n>\n>> So, when/is PG meant to be getting a decent partitioning system?\n>>\n>\n> ISTM that your question seems to confuse where code comes from. \n> Without\n> meaning to pick on you, or reply rudely, I'd like to explore that\n> question. Perhaps it should be a FAQ entry.\n>\n> All code is written by someone, and those people need to eat. Some\n> people are fully or partly funded to perform their tasks on this \n> project\n> (coding, patching, etc). Others contribute their time for a variety of\n> reasons where involvement has a positive benefit.\n>\n> You should ask these questions:\n> - Is anyone currently working on (Feature X)?\n> - If not, Can I do it myself?\n> - If not, and I still want it, can I fund someone else to build it for\n> me?\n>\n> Asking \"when is Feature X going to happen\" is almost certainly \n> going to\n> get the answer \"never\" otherwise, if the initial development is large\n> and complex. There are many TODO items that have lain untouched for\n> years, even though adding the feature has been discussed and agreed.\n>\n> Best Regards, Simon Riggs\n>\n\nAcceptable Answers to 'So, when/is PG meant to be getting a decent \npartitioning system?':\n\n 1. Person X is working on it I believe.\n 2. It's on the list, but nobody has done anything about it yet\n 3. Your welcome to take a stab at it, I expect the community \nwould support your efforts as well.\n 4. If you have a huge pile of money you could probably buy the \nMoon. Thinking along those lines, you can probably pay someone to \nwrite it for you.\n 5. It's a stupid idea, and it's never going to work, and heres \nwhy..............\n\nUnacceptable Answers to the same question:\n\n 1. Yours.\n\nBe more helpful, and less arrogant please. Everyone else who has \ncontributed to this thread has been very helpful in clarifying the \nstate of affairs and pointing out what work is and isn't being done, \nand alternatives to just waiting for PG do it for you.\n\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n\n",
"msg_date": "Wed, 11 May 2005 09:40:00 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "\nOn 11 May 2005, at 08:57, David Roussel wrote:\n\n> For an interesting look at scalability, clustering, caching, etc for a\n> large site have a look at how livejournal did it.\n> http://www.danga.com/words/2004_lisa/lisa04.pdf\n\nI have implemented similar systems in the past, it's a pretty good \ntechnique, unfortunately it's not very \"Plug-and-Play\" as you have to \nbase most of your API on memcached (I imagine MySQLs NDB tables might \nwork as well actually) for it to work well.\n\n> They have 2.6 Million active users, posting 200 new blog entries per\n> minute, plus many comments and countless page views.\n>\n> Although this system is of a different sort to the type I work on it's\n> interesting to see how they've made it scale.\n>\n> They use mysql on dell hardware! And found single master \n> replication did\n> not scale. There's a section on multimaster replication, not sure if\n> they use it. The main approach they use is to parition users into\n> spefic database clusters. Caching is done using memcached at the\n> application level to avoid hitting the db for rendered pageviews\n\nI don't think they are storing pre-rendered pages (or bits of) in \nmemcached, but are principally storing the data for the pages in it. \nGluing pages together is not a hugely intensive process usually :)\nThe only problem with memcached is that the clients clustering/ \npartitioning system will probably break if a node dies, and probably \nget confused if you add new nodes onto it as well. Easily extensible \nclustering (no complete redistribution of data required when you add/ \nremove nodes) with the data distributed across nodes seems to be \nnothing but a pipe dream right now.\n\n> It's interesting that the solution livejournal have arrived at is \n> quite\n> similar in ways to the way google is set up.\n\nDon't Google use indexing servers which keep track of where data is? \nSo that you only need to update them when you add or move data, \ndeletes don't even have to be propagated among indexes immediately \nreally because you'll find out if data isn't there when you visit \nwhere it should be. Or am I talking crap?\n\n> David\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n\n",
"msg_date": "Wed, 11 May 2005 09:50:34 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Clustering"
},
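The node-failure/node-addition problem Alex raises is what consistent hashing is usually the answer to: only the keys owned by the changed node get remapped. A toy sketch of the idea (not what the memcached client libraries of the time shipped with, which mostly used simple modulo hashing):

import hashlib
from bisect import bisect

class HashRing:
    def __init__(self, nodes, replicas=100):
        # place several virtual points per node on the ring to even out load
        self.ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes for i in range(replicas)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        # walk clockwise to the first virtual point at or after the key's hash
        idx = bisect(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
print(ring.node_for("session:abc123"))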
{
"msg_contents": "\nOn 11 May 2005, at 09:50, Alex Stapleton wrote:\n\n>\n> On 11 May 2005, at 08:57, David Roussel wrote:\n>\n>\n>> For an interesting look at scalability, clustering, caching, etc \n>> for a\n>> large site have a look at how livejournal did it.\n>> http://www.danga.com/words/2004_lisa/lisa04.pdf\n>>\n>\n> I have implemented similar systems in the past, it's a pretty good \n> technique, unfortunately it's not very \"Plug-and-Play\" as you have \n> to base most of your API on memcached (I imagine MySQLs NDB tables \n> might work as well actually) for it to work well.\n>\n>\n>> They have 2.6 Million active users, posting 200 new blog entries per\n>> minute, plus many comments and countless page views.\n>>\n>> Although this system is of a different sort to the type I work on \n>> it's\n>> interesting to see how they've made it scale.\n>>\n>> They use mysql on dell hardware! And found single master \n>> replication did\n>> not scale. There's a section on multimaster replication, not sure if\n>> they use it. The main approach they use is to parition users into\n>> spefic database clusters. Caching is done using memcached at the\n>> application level to avoid hitting the db for rendered pageviews\n>>\n>\n> I don't think they are storing pre-rendered pages (or bits of) in \n> memcached, but are principally storing the data for the pages in \n> it. Gluing pages together is not a hugely intensive process usually :)\n> The only problem with memcached is that the clients clustering/ \n> partitioning system will probably break if a node dies, and \n> probably get confused if you add new nodes onto it as well. Easily \n> extensible clustering (no complete redistribution of data required \n> when you add/remove nodes) with the data distributed across nodes \n> seems to be nothing but a pipe dream right now.\n>\n>\n>> It's interesting that the solution livejournal have arrived at is \n>> quite\n>> similar in ways to the way google is set up.\n>>\n>\n> Don't Google use indexing servers which keep track of where data \n> is? So that you only need to update them when you add or move data, \n> deletes don't even have to be propagated among indexes immediately \n> really because you'll find out if data isn't there when you visit \n> where it should be. Or am I talking crap?\n\nThat will teach me to RTFA first ;) Ok so LJ maintain an index of \nwhich cluster each user is on, kinda of like google do :)\n\n>\n>> David\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 8: explain analyze is your friend\n>>\n>>\n>>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n",
"msg_date": "Wed, 11 May 2005 10:03:45 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "> Acceptable Answers to 'So, when/is PG meant to be getting a decent \n> partitioning system?':\n> \n> 1. Person X is working on it I believe.\n> 2. It's on the list, but nobody has done anything about it yet\n> 3. Your welcome to take a stab at it, I expect the community would \n> support your efforts as well.\n> 4. If you have a huge pile of money you could probably buy the \n> Moon. Thinking along those lines, you can probably pay someone to write \n> it for you.\n> 5. It's a stupid idea, and it's never going to work, and heres \n> why..............\n> \n> Unacceptable Answers to the same question:\n> \n> 1. Yours.\n> \n> Be more helpful, and less arrogant please. Everyone else who has \n> contributed to this thread has been very helpful in clarifying the \n> state of affairs and pointing out what work is and isn't being done, \n> and alternatives to just waiting for PG do it for you.\n\nPlease YOU be more helpful and less arrogant. I thought your inital \nemail was arrogant, demanding and insulting. Your followup email has \ndone nothing to dispel my impression. Simon (one of PostgreSQL's major \ncontributors AND one of the very few people working on partitioning in \nPostgreSQL, as you requested) told you all the reasons clearly and politely.\n\nChris\n",
"msg_date": "Wed, 11 May 2005 17:13:43 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Mischa Sandberg <[email protected]> writes:\n> So, simplicity dictates something like:\n\n> table pg_remote(schemaname text, connectby text, remoteschema text)\n\nPrevious discussion of this sort of thing concluded that we wanted to\nfollow the SQL-MED standard.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 May 2005 09:27:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering "
},
{
"msg_contents": "David,\n\n> It's interesting that the solution livejournal have arrived at is quite\n> similar in ways to the way google is set up.\n\nYes, although again, they're using memcached as pseudo-clustering software, \nand as a result are limited to what fits in RAM (RAM on 27 machines, but it's \nstill RAM). And due to limitations on memcached, the whole thing blows \nwhenever a server goes out (the memcached project is working on this). But \nany LJ user could tell you that it's a low-availability system.\n\nHowever, memcached (and for us, pg_memcached) is an excellent way to improve \nhorizontal scalability by taking disposable data (like session information) \nout of the database and putting it in protected RAM. On some websites, \nadding memcached can result is as much as a 60% decrease in database traffic.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 11 May 2005 10:13:51 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
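For concreteness, the pattern Josh describes — keep disposable session data in memcached and only touch the database on a cache miss — looks roughly like this with the python-memcached client; the key scheme and the sessions table are invented for the example:

import memcache
import psycopg2

mc = memcache.Client(["127.0.0.1:11211"])
db = psycopg2.connect("dbname=app")

def get_session(session_id):
    # fast path: protected RAM, no database hit
    data = mc.get("session:" + session_id)
    if data is not None:
        return data
    # slow path: fall back to the database and repopulate the cache
    cur = db.cursor()
    cur.execute("SELECT data FROM sessions WHERE id = %s", (session_id,))
    row = cur.fetchone()
    if row:
        mc.set("session:" + session_id, row[0], time=1800)  # 30 minute TTL
        return row[0]
    return None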
{
"msg_contents": "Alex Stapleton <[email protected]> writes:\n\n> Acceptable Answers to 'So, when/is PG meant to be getting a decent\n> partitioning system?':\n...\n> 3. Your welcome to take a stab at it, I expect the community would\n> support your efforts as well.\n\nAs long as we're being curt all around, this one's not acceptable on the basis\nthat it's not grammatical.\n\n-- \ngreg\n\n",
"msg_date": "11 May 2005 16:02:01 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "On Wed, 2005-05-11 at 17:13 +0800, Christopher Kings-Lynne wrote:\n> > Alex Stapleton wrote\n> > Be more helpful, and less arrogant please. \n> \n> Simon told you all the reasons clearly and politely.\n\nThanks Chris for your comments.\n\nPostgreSQL can always do with one more developer and my sole intent was\nto encourage Alex and other readers to act themselves. If my words seem\narrogant, then I apologise to any and all that think so.\n\nBest Regards, Simon Riggs \n\n\n",
"msg_date": "Wed, 11 May 2005 21:22:13 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "On Wed, May 11, 2005 at 08:57:57AM +0100, David Roussel wrote:\n> For an interesting look at scalability, clustering, caching, etc for a\n> large site have a look at how livejournal did it.\n> http://www.danga.com/words/2004_lisa/lisa04.pdf\n> \n> They have 2.6 Million active users, posting 200 new blog entries per\n> minute, plus many comments and countless page views.\n\nNeither of which is that horribly impressive. 200 TPM is less than 4TPS.\nWhile I haven't run high transaction rate databases under PostgreSQL, I\nsuspect others who have will say that 4TPS isn't that big of a deal.\n\n> Although this system is of a different sort to the type I work on it's\n> interesting to see how they've made it scale.\n> \n> They use mysql on dell hardware! And found single master replication did\n> not scale. There's a section on multimaster replication, not sure if\nProbably didn't scale because they used to use MyISAM.\n\n> they use it. The main approach they use is to parition users into\n> spefic database clusters. Caching is done using memcached at the\nWhich means they've got a huge amount of additional code complexity, not\nto mention how many times you can't post something because 'that cluster\nis down for maintenance'.\n\n> application level to avoid hitting the db for rendered pageviews.\nMemcached is about the only good thing I've seen come out of\nlivejournal.\n\n> It's interesting that the solution livejournal have arrived at is quite\n> similar in ways to the way google is set up.\n\nExcept that unlike LJ, google stays up and it's fast. Though granted, LJ\nis quite a bit faster than it was 6 months ago.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n",
"msg_date": "Wed, 11 May 2005 16:28:19 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "\n\n> However, memcached (and for us, pg_memcached) is an excellent way to \n> improve\n> horizontal scalability by taking disposable data (like session \n> information)\n> out of the database and putting it in protected RAM.\n\n\tSo, what is the advantage of such a system versus, say, a \"sticky \nsessions\" system where each session is assigned to ONE application server \n(not PHP then) which keeps it in RAM as native objects instead of \nserializing and deserializing it on each request ?\n\tI'd say the sticky sessions should perform a lot better, and if one \nmachine dies, only the sessions on this one are lost.\n\tBut of course you can't do it with PHP as you need an app server which \ncan manage sessions. Potentially the savings are huge, though.\n\n\tOn Google, their distributed system spans a huge number of PCs and it has \nredundancy, ie. individual PC failure is a normal thing and is a part of \nthe system, it is handled gracefully. I read a paper on this matter, it's \npretty impressive. The google filesystem has nothing to do with databases \nthough, it's more a massive data store / streaming storage.\n",
"msg_date": "Thu, 12 May 2005 00:35:16 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "\nOn 11 May 2005, at 23:35, PFC wrote:\n\n>\n>\n>\n>> However, memcached (and for us, pg_memcached) is an excellent way \n>> to improve\n>> horizontal scalability by taking disposable data (like session \n>> information)\n>> out of the database and putting it in protected RAM.\n>>\n>\n> So, what is the advantage of such a system versus, say, a \n> \"sticky sessions\" system where each session is assigned to ONE \n> application server (not PHP then) which keeps it in RAM as native \n> objects instead of serializing and deserializing it on each request ?\n> I'd say the sticky sessions should perform a lot better, and if \n> one machine dies, only the sessions on this one are lost.\n> But of course you can't do it with PHP as you need an app \n> server which can manage sessions. Potentially the savings are huge, \n> though.\n\nTheres no reason it couldn't be done with PHP to be fair as long as \nyou could ensure that the client was always routed back to the same \nmachines. Which has it's own set of issues entirely. I am not \nentirely sure that memcached actually does serialize data when it's \ncomitted into memcached either, although I could be wrong, I have not \nlooked at the source. Certainly if you can ensure that a client \nalways goes back to the same machine you can simplify the whole thing \nhugely. It's generally not that easy though, you need a proxy server \nof some description capable of understanding the HTTP traffic and \nmaintaining a central session lookup table to redirect with. Which \nisn't really solving the problem so much as moving it somewhere else. \nInstead of needing huge memcached pools, you need hardcore \nloadbalancers. Load Balancers tend to cost $$$$$ in comparison. \nDistributed sticky sessions are a rather nice idea, I would like to \nhear a way of implementing them cheaply (and on PHP) as well. I may \nhave to give that some thought in fact. Oh yeah, and load balancers \nsoftware often sucks in annoying (if not always important) ways.\n\n> On Google, their distributed system spans a huge number of PCs \n> and it has redundancy, ie. individual PC failure is a normal thing \n> and is a part of the system, it is handled gracefully. I read a \n> paper on this matter, it's pretty impressive. The google filesystem \n> has nothing to do with databases though, it's more a massive data \n> store / streaming storage.\n>\n\nSince when did Massive Data stores have nothing to do with DBs? Isn't \nOracle Cluster entirely based on forming an enormous scalable disk \narray to store your DB on?\n\n",
"msg_date": "Wed, 11 May 2005 23:56:54 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "\n> machines. Which has it's own set of issues entirely. I am not entirely \n> sure that memcached actually does serialize data when it's comitted into\n\n\tI think it does, ie. it's a simple mapping of [string key] => [string \nvalue].\n\n> memcached either, although I could be wrong, I have not looked at the \n> source. Certainly if you can ensure that a client always goes back to \n> the same machine you can simplify the whole thing hugely. It's generally \n> not that easy though, you need a proxy server of some description \n> capable of understanding the HTTP traffic and maintaining a central\n\n\tYes...\n\tYou could implement it by mapping servers to the hash of the user session \nid.\n\tStatistically, the servers would get the same numbers of sessions on each \nof them, but you have to trust statistics...\n\tIt does eliminate the lookup table though.\n\n> idea, I would like to hear a way of implementing them cheaply (and on \n> PHP) as well. I may have to give that some thought in fact. Oh yeah, and \n> load balancers software often sucks in annoying (if not always \n> important) ways.\n\n\tYou can use lighttpd as a load balancer, I believe it has a stick \nsessions plugin (or you could code one in, it's open source after all). It \ndefinitely support simple round-robin load balancing, acting as a proxy to \nany number of independent servers.\n\n\n>> matter, it's pretty impressive. The google filesystem has nothing to do \n>> with databases though, it's more a massive data store / streaming \n>> storage.\n>\n> Since when did Massive Data stores have nothing to do with DBs? Isn't \n> Oracle Cluster entirely based on forming an enormous scalable disk array \n> to store your DB on?\n\n\tUm, well, the Google Filesystem is (like its name implies) a filesystem \ndesigned to store huge files in a distributed and redundant manner. Files \nare structured as a stream of records (which are themselves big in size) \nand it's designed to support appending records to these stream files \nefficiently and without worrying about locking.\n\n\tIt has no querying features however, that is why I said it was not a \ndatabase.\n\n\tI wish I could find the whitepaper, I think the URL was on this list some \nday, maybe it's on Google's site ?\n",
"msg_date": "Thu, 12 May 2005 12:09:34 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
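PFC's idea of mapping servers to a hash of the user session id, which removes the central lookup table, is about as simple as it gets; a tiny sketch with made-up server names (note that, unlike consistent hashing, changing the server list remaps most sessions):

import zlib

APP_SERVERS = ["app1.example.com", "app2.example.com", "app3.example.com"]

def server_for_session(session_id):
    # statistically even spread, no shared state needed
    return APP_SERVERS[zlib.crc32(session_id.encode()) % len(APP_SERVERS)]

print(server_for_session("3f9a2c7e"))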
{
"msg_contents": "Having local sessions is unnesesary, and here is my logic:\n\nGeneraly most people have less than 100Mb of bandwidth to the internet.\n\nIf you make the assertion that you are transferring equal or less\nsession data between your session server (lets say an RDBMS) and the\napp server than you are between the app server and the client, an out\nof band 100Mb network for session information is plenty of bandwidth. \nThis also represents OLTP style traffic, which postgresql is pretty\ngood at. You should easily be able to get over 100Tps. 100 hits per\nsecond is an awful lot of traffic, more than any website I've managed\nwill ever see.\n\nWhy solve the complicated clustered sessions problem, when you don't\nreally need to?\n\nAlex Turner\nnetEconomist\n\nOn 5/11/05, PFC <[email protected]> wrote:\n> \n> \n> > However, memcached (and for us, pg_memcached) is an excellent way to\n> > improve\n> > horizontal scalability by taking disposable data (like session\n> > information)\n> > out of the database and putting it in protected RAM.\n> \n> So, what is the advantage of such a system versus, say, a \"sticky\n> sessions\" system where each session is assigned to ONE application server\n> (not PHP then) which keeps it in RAM as native objects instead of\n> serializing and deserializing it on each request ?\n> I'd say the sticky sessions should perform a lot better, and if one\n> machine dies, only the sessions on this one are lost.\n> But of course you can't do it with PHP as you need an app server which\n> can manage sessions. Potentially the savings are huge, though.\n> \n> On Google, their distributed system spans a huge number of PCs and it has\n> redundancy, ie. individual PC failure is a normal thing and is a part of\n> the system, it is handled gracefully. I read a paper on this matter, it's\n> pretty impressive. The google filesystem has nothing to do with databases\n> though, it's more a massive data store / streaming storage.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n",
"msg_date": "Thu, 12 May 2005 10:08:09 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
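A quick back-of-envelope check of the bandwidth argument above, with an assumed per-request session payload (the 8 KB figure is illustrative, not from the thread):

session_bytes = 8 * 1024              # assumed session payload per request
requests_per_sec = 100
mbit_per_sec = session_bytes * 8 * requests_per_sec / 1e6
print(round(mbit_per_sec, 1), "Mbit/s")   # ~6.6 Mbit/s, comfortably under 100 Mbit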
{
"msg_contents": "\nOn 12 May 2005, at 15:08, Alex Turner wrote:\n\n> Having local sessions is unnesesary, and here is my logic:\n>\n> Generaly most people have less than 100Mb of bandwidth to the \n> internet.\n>\n> If you make the assertion that you are transferring equal or less\n> session data between your session server (lets say an RDBMS) and the\n> app server than you are between the app server and the client, an out\n> of band 100Mb network for session information is plenty of bandwidth.\n> This also represents OLTP style traffic, which postgresql is pretty\n> good at. You should easily be able to get over 100Tps. 100 hits per\n> second is an awful lot of traffic, more than any website I've managed\n> will ever see.\n>\n> Why solve the complicated clustered sessions problem, when you don't\n> really need to?\n\n100 hits a second = 8,640,000 hits a day. I work on a site which does \n > 100 million dynamic pages a day. In comparison Yahoo probably does \n > 100,000,000,000 (100 billion) views a day\n if I am interpreting Alexa's charts correctly. Which is about \n1,150,000 a second.\n\nNow considering the site I work on is not even in the top 1000 on \nAlexa, theres a lot of sites out there which need to solve this \nproblem I would assume.\n\nThere are also only so many hash table lookups a single machine can \ndo, even if its a Quad Opteron behemoth.\n\n\n> Alex Turner\n> netEconomist\n>\n> On 5/11/05, PFC <[email protected]> wrote:\n>\n>>\n>>\n>>\n>>> However, memcached (and for us, pg_memcached) is an excellent way to\n>>> improve\n>>> horizontal scalability by taking disposable data (like session\n>>> information)\n>>> out of the database and putting it in protected RAM.\n>>>\n>>\n>> So, what is the advantage of such a system versus, say, a \n>> \"sticky\n>> sessions\" system where each session is assigned to ONE application \n>> server\n>> (not PHP then) which keeps it in RAM as native objects instead of\n>> serializing and deserializing it on each request ?\n>> I'd say the sticky sessions should perform a lot better, \n>> and if one\n>> machine dies, only the sessions on this one are lost.\n>> But of course you can't do it with PHP as you need an app \n>> server which\n>> can manage sessions. Potentially the savings are huge, though.\n>>\n>> On Google, their distributed system spans a huge number of \n>> PCs and it has\n>> redundancy, ie. individual PC failure is a normal thing and is a \n>> part of\n>> the system, it is handled gracefully. I read a paper on this \n>> matter, it's\n>> pretty impressive. The google filesystem has nothing to do with \n>> databases\n>> though, it's more a massive data store / streaming storage.\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 1: subscribe and unsubscribe commands go to \n>> [email protected]\n>>\n>>\n>\n>\n\n",
"msg_date": "Thu, 12 May 2005 16:16:53 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Ok - my common sense alarm is going off here...\n\nThere are only 6.446 billion people worldwide. 100 Billion page views\nwould require every person in the world to view 18 pages of yahoo\nevery day. Not very likely.\n\nhttp://www.internetworldstats.com/stats.htm\nsuggests that there are around 1 billion people actualy on the internet.\n\nThat means each and every person on the internet has to view 100 pages\nper day of yahoo.\n\npretty unlikely IMHO. I for one don't even use Yahoo ;)\n\n100 million page views per day suggests that 1 in 100 people on the\ninternet each viewed 10 pages of a site. Thats a pretty high\npercentage if you ask me.\n\nIf I visit 20 web sites in a day, and see an average of 10 pages per\nsite. that means only about 2000 or so sites generate 100 million page\nviews in a day or better.\n\n100 million pageviews averages to 1157/sec, which we'll double for\npeak load to 2314.\n\nI can easily see a system doing 2314 hash lookups per second. Hell I\nwrote a system that could do a thousand times that four years ago on a\nsingle 1Ghz Athlon. Heck - you can get 2314 lookups/sec on a 486 ;)\n\nGiven that session information doesn't _have_ to persist to storage,\nand can be kept in RAM. A single server could readily manage session\ninformation for even very large sites (of course over a million\nconcurrent users could really start chewing into RAM, but if you are\nYahoo, you can probably afford a box with 100GB of RAM ;).\n\nWe get over 1000 tps on a dual opteron with a couple of mid size RAID\narrays on 10k discs with fsync on for small transactions. I'm sure\nthat could easily be bettered with a few more dollars.\n\nMaybe my number are off, but somehow it doesn't seem like that many\npeople need a highly complex session solution to me.\n\nAlex Turner\nnetEconomist\n\nOn 5/12/05, Alex Stapleton <[email protected]> wrote:\n> \n> On 12 May 2005, at 15:08, Alex Turner wrote:\n> \n> > Having local sessions is unnesesary, and here is my logic:\n> >\n> > Generaly most people have less than 100Mb of bandwidth to the\n> > internet.\n> >\n> > If you make the assertion that you are transferring equal or less\n> > session data between your session server (lets say an RDBMS) and the\n> > app server than you are between the app server and the client, an out\n> > of band 100Mb network for session information is plenty of bandwidth.\n> > This also represents OLTP style traffic, which postgresql is pretty\n> > good at. You should easily be able to get over 100Tps. 100 hits per\n> > second is an awful lot of traffic, more than any website I've managed\n> > will ever see.\n> >\n> > Why solve the complicated clustered sessions problem, when you don't\n> > really need to?\n> \n> 100 hits a second = 8,640,000 hits a day. I work on a site which does\n> > 100 million dynamic pages a day. In comparison Yahoo probably does\n> > 100,000,000,000 (100 billion) views a day\n> if I am interpreting Alexa's charts correctly. 
Which is about\n> 1,150,000 a second.\n> \n> Now considering the site I work on is not even in the top 1000 on\n> Alexa, theres a lot of sites out there which need to solve this\n> problem I would assume.\n> \n> There are also only so many hash table lookups a single machine can\n> do, even if its a Quad Opteron behemoth.\n> \n> \n> > Alex Turner\n> > netEconomist\n> >\n> > On 5/11/05, PFC <[email protected]> wrote:\n> >\n> >>\n> >>\n> >>\n> >>> However, memcached (and for us, pg_memcached) is an excellent way to\n> >>> improve\n> >>> horizontal scalability by taking disposable data (like session\n> >>> information)\n> >>> out of the database and putting it in protected RAM.\n> >>>\n> >>\n> >> So, what is the advantage of such a system versus, say, a\n> >> \"sticky\n> >> sessions\" system where each session is assigned to ONE application\n> >> server\n> >> (not PHP then) which keeps it in RAM as native objects instead of\n> >> serializing and deserializing it on each request ?\n> >> I'd say the sticky sessions should perform a lot better,\n> >> and if one\n> >> machine dies, only the sessions on this one are lost.\n> >> But of course you can't do it with PHP as you need an app\n> >> server which\n> >> can manage sessions. Potentially the savings are huge, though.\n> >>\n> >> On Google, their distributed system spans a huge number of\n> >> PCs and it has\n> >> redundancy, ie. individual PC failure is a normal thing and is a\n> >> part of\n> >> the system, it is handled gracefully. I read a paper on this\n> >> matter, it's\n> >> pretty impressive. The google filesystem has nothing to do with\n> >> databases\n> >> though, it's more a massive data store / streaming storage.\n> >>\n> >> ---------------------------(end of\n> >> broadcast)---------------------------\n> >> TIP 1: subscribe and unsubscribe commands go to\n> >> [email protected]\n> >>\n> >>\n> >\n> >\n> \n>\n",
"msg_date": "Thu, 12 May 2005 12:05:32 -0400",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "Alex Turner wrote:\n> Ok - my common sense alarm is going off here...\n>\n> There are only 6.446 billion people worldwide. 100 Billion page views\n> would require every person in the world to view 18 pages of yahoo\n> every day. Not very likely.\n>\n> http://www.internetworldstats.com/stats.htm\n> suggests that there are around 1 billion people actualy on the internet.\n>\n> That means each and every person on the internet has to view 100 pages\n> per day of yahoo.\n>\n> pretty unlikely IMHO. I for one don't even use Yahoo ;)\n>\n> 100 million page views per day suggests that 1 in 100 people on the\n> internet each viewed 10 pages of a site. Thats a pretty high\n> percentage if you ask me.\n\nIn general I think your point is valid. Just remember that it probably\nalso matters how you count page views. Because technically images are a\nseparate page (and this thread did discuss serving up images). So if\nthere are 20 graphics on a specific page, that is 20 server hits just\nfor that one page.\n\nI could easily see an image heavy site getting 100 hits / page. Which\nstarts meaning that if 1M users hit 10 pages, then you get 1M*10*100 = 1G.\n\nI still think 100G views on a single website is a lot, but 100M is\ncertainly possible.\n\nJohn\n=:->",
"msg_date": "Thu, 12 May 2005 11:19:56 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "\n> 100 hits a second = 8,640,000 hits a day. I work on a site which does > \n> 100 million dynamic pages a day. In comparison Yahoo probably does > \n> 100,000,000,000 (100 billion) views a day\n> if I am interpreting Alexa's charts correctly. Which is about \n> 1,150,000 a second.\n\n\nRead the help on Alexa's site... ;)\n",
"msg_date": "Thu, 12 May 2005 18:25:05 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "People,\n\n> In general I think your point is valid. Just remember that it probably\n> also matters how you count page views. Because technically images are a\n> separate page (and this thread did discuss serving up images). So if\n> there are 20 graphics on a specific page, that is 20 server hits just\n> for that one page.\n\nAlso, there's bots and screen-scrapers and RSS, web e-mails, and web services \nand many other things which create hits but are not \"people\". I'm currently \nworking on clickstream for a site which is nowhere in the top 100, and is \ngetting 3 million real hits a day ... and we know for a fact that at least \n1/4 of that is bots.\n\nRegardless, the strategy you should be employing for a high traffic site is \nthat if your users hit the database for anything other than direct \ninteraction (like filling out a webform) then you're lost. Use memcached, \nsquid, lighttpd caching, ASP.NET caching, pools, etc. Keep the load off the \ndatabase except for the stuff that only the database can do.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 12 May 2005 10:33:41 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "\nOn 12 May 2005, at 18:33, Josh Berkus wrote:\n\n> People,\n>\n>\n>> In general I think your point is valid. Just remember that it \n>> probably\n>> also matters how you count page views. Because technically images \n>> are a\n>> separate page (and this thread did discuss serving up images). So if\n>> there are 20 graphics on a specific page, that is 20 server hits just\n>> for that one page.\n>>\n>\n> Also, there's bots and screen-scrapers and RSS, web e-mails, and \n> web services\n> and many other things which create hits but are not \"people\". I'm \n> currently\n> working on clickstream for a site which is nowhere in the top 100, \n> and is\n> getting 3 million real hits a day ... and we know for a fact that \n> at least\n> 1/4 of that is bots.\n\nI doubt bots are generally Alexa toolbar enabled.\n\n> Regardless, the strategy you should be employing for a high traffic \n> site is\n> that if your users hit the database for anything other than direct\n> interaction (like filling out a webform) then you're lost. Use \n> memcached,\n> squid, lighttpd caching, ASP.NET caching, pools, etc. Keep the \n> load off the\n> database except for the stuff that only the database can do.\n\nThis is the aproach I would take as well. There is no point storing \nstuff in a DB, if your only doing direct lookups on it and it isn't \nthe sort of data that you care so much about the integrity of.\n\n\n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n\n",
"msg_date": "Thu, 12 May 2005 18:45:32 +0100",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / Clustering"
},
{
"msg_contents": "\n\n> If you make the assertion that you are transferring equal or less\n> session data between your session server (lets say an RDBMS) and the\n> app server than you are between the app server and the client, an out\n> of band 100Mb network for session information is plenty of bandwidth.\n\n\tSo if you count on a mean page size of 6-8 kbytes gzipped, that will \nprevent you from caching the N first results of the Big Slow Search Query \nin a native object in the user session state (say, a list of integers \nindicating which rows match), so you will have to redo the Big Slow Search \nQuery everytime the user clicks on Next Page instead of grabbing a set of \ncached row id's and doing a fast SELECT WHERE id IN ...\n\tThis is the worst case ... I'd gzip() the row id's and stuff them in the \nsession, that's always better than blowing up the database with the Big \nSlow Search Query everytime someone does Next Page...\n\n> This also represents OLTP style traffic, which postgresql is pretty\n> good at. You should easily be able to get over 100Tps. 100 hits per\n> second is an awful lot of traffic, more than any website I've managed\n> will ever see.\n\n\tOn the latest anandtech benchmarks, 100 hits per second on a blog/forum \nsoftware is a big bi-opteron server running dotNET, at 99% load... it's a \nlot if you count only dynamic page hits.\n\n",
"msg_date": "Sat, 14 May 2005 15:00:47 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / Clustering"
}
] |
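To make the session-caching idea in the thread above concrete, here is a minimal SQL sketch of the pattern PFC describes: run the expensive search once, keep only the matching ids in the application session, and serve each "Next Page" from a cheap primary-key lookup. The table and column names are invented for illustration; the thread does not show the real schema.

```sql
-- Step 1: run the Big Slow Search Query once and keep only the matching ids
-- (the application caches this id list in the user's session, compressed).
SELECT o.id
FROM orders o
JOIN order_lines l ON l.order_id = o.id
JOIN customers c   ON c.id = o.customer_id
WHERE c.region = 'EU'
  AND l.status = 'backordered'
ORDER BY o.created_at DESC;

-- Step 2: each "Next Page" click replays only a cheap primary-key lookup
-- over one page's worth of the cached ids.
SELECT *
FROM orders
WHERE id IN (1012, 1047, 1093, 1120, 1188, 1202, 1275, 1301, 1333, 1360)
ORDER BY created_at DESC;
```

The trade-off is staleness: the cached id list does not see rows added after the original search ran, which is usually acceptable for paging through search results.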
[
{
"msg_contents": "Hi,\n\nEnvironment :\n - Fedora Core 2 (kernel 2.6.10)\n - postgresql 7.4.7\n\nI am running a huge query with several joins that needs around 900\nMbytes of Memory (System RAM is 512 Mbytes).\n\nWhen the system starts to swap the postgres process, CPU consumed drops\nto around 2% (instead of around 50% on another system with kernel 2.4).\nThe query was still working after more than 4 hours, spending the time\nwith 98% if IO wait.\n\nSo I tried to run the postmaster with the environment variable\nLD_ASSUME_KERNEL=2.4.1. With this, the query last 7 minutes !!!\n\nWhy swapping is so bad with the new kernel ?\n\nThanks,\n\nGuillaume\n\n",
"msg_date": "Tue, 10 May 2005 15:01:13 +0200",
"msg_from": "\"Guillaume Nobiron\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Swapping and Kernel 2.6"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGuillaume Nobiron wrote:\n| Hi,\n|\n| Environment :\n| - Fedora Core 2 (kernel 2.6.10)\n| - postgresql 7.4.7\n|\n| I am running a huge query with several joins that needs around 900\n| Mbytes of Memory (System RAM is 512 Mbytes).\n|\n| When the system starts to swap the postgres process, CPU consumed drops\n| to around 2% (instead of around 50% on another system with kernel 2.4).\n| The query was still working after more than 4 hours, spending the time\n| with 98% if IO wait.\n|\n| So I tried to run the postmaster with the environment variable\n| LD_ASSUME_KERNEL=2.4.1. With this, the query last 7 minutes !!!\n|\n| Why swapping is so bad with the new kernel ?\n|\n\nHello, Guillaume.\n\nYour swapping issue may not necessarily have to do with bad, perhaps\njust slightly mistuned for your usage profile, virtual memory\nmanagement. I think /proc/sys/vm/swappiness only appeared in the 2.6\nseries of kernels (or late in 2.4), but it has rather significant effect\non the way kernel handles pages that are swapped out to disk, and most\nimportantly, those that have been swapped back in again, yet still\noccupy the swap space - this new mechanism usually results in a certain\namount of memory being reserved swap cache, which leaves less where it's\nmore important to have it. You might want to read more about it in this\n(rather lengthy) kerneltrap articles: http://kerneltrap.org/node/3000\nand http://kerneltrap.org/node/3828\n\nWhat we'd probably need here though (after you've verified playing with\nswappiness doesn't help), is a couple of minutes worth of typical memory\nmanager behaviour while this transaction of yours is taking place,\nespecially where swapin/swapout goes mad. Try running \"vmstat 5\" while\nit's running and see if there are any interesting patterns, also, be\nsure to include enough context before and after such events (up to half\na minute on each side should do).\n\nMy guess is, you'd do well with an extra gigabyte or so of memory -\n512MB really isn't much nowadays. Why make I/O even worse bottleneck\nthan it needs to be by forcing pages of active memory in and out? :)\n\nKind regards,\n- --\n~ Grega Bremec\n~ gregab at p0f dot net\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\n\niD8DBQFCh8XHfu4IwuB3+XoRAicxAJwI0FzZIpXpxlJlZMXVJUJaqdj0EgCfRNuw\nDr58jtIgHDtjq/LCjd2Kr1s=\n=iLle\n-----END PGP SIGNATURE-----\n",
"msg_date": "Sun, 15 May 2005 23:57:27 +0200",
"msg_from": "Grega Bremec <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Swapping and Kernel 2.6"
}
] |
[
{
"msg_contents": "I wanted to get some opinions about row prefetching. AFAIK, there is no\nprefetching done by PostgreSQL; all prefetching is delegated to the operating\nsystem.\n\nThe hardware (can't say enough good things about it):\n\nAthlon 64, dual channel\n4GB ram\n240GB usable 4 disk raid5 (ATA133)\nFedora Core 3\nPostgreSQL 7.4.7\n\nI have what is essentially a data warehouse of stock data. Each day has\naround 30,000 records (tickers). A typical operation is to get the 200 day\nsimple moving average (of price) for each ticker and write the result to a\nsummary table. In running this process (Perl/DBI), it is typical to see\n70-80% I/O wait time with postgres running a about 8-9%. If I run the next\nday's date, the postgres cache and file cache is now populated with 199 days\nof the needed data, postgres runs 80-90% of CPU and total run time is greatly\nreduced. My conclusion is that this is a high cache hit rate in action.\n\nI've done other things that make sense, like using indexes, playing with the\nplanner constants and turning up the postgres cache buffers.\n\nEven playing with extream hdparm read-ahead numbers (i.e. 64738) yields no\napparent difference in database performance. The random nature of the I/O\ndrops disk reads down to about 1MB/sec for the array. A linear table scan\ncan easily yield 70-80MB/sec on this system. Total table size is usually\naround 1GB and with indexes should be able to fit completely in main memory.\n\nOther databases like Oracle and DB2 implement some sort of row prefetch. Has\nthere been serious consideration of implementing something like a prefetch\nsubsystem? Does anyone have any opinions as to why this would be a bad idea\nfor postgres?\n\nPostges is great for a multiuser environment and OLTP applications. However,\nin this set up, a data warehouse, the observed performance is not what I\nwould hope for.\n\nRegards,\n\nMatt Olson\nOcean Consulting\nhttp://www.oceanconsulting.com/\n\n",
"msg_date": "Tue, 10 May 2005 06:52:51 -0700",
"msg_from": "Matt Olson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Prefetch"
},
{
"msg_contents": "Matt Olson <[email protected]> writes:\n> Other databases like Oracle and DB2 implement some sort of row prefetch. Has\n> there been serious consideration of implementing something like a prefetch\n> subsystem?\n\nNo.\n\n> Does anyone have any opinions as to why this would be a bad idea for\n> postgres? \n\nWe know even less than the OS does about disk layout, and not a lot more\nthan it about what our next request will be. (If we're doing a seqscan,\nthen of course that's not true, but I would expect the OS to be able to\nfigure that one out and do readahead.)\n\nYou haven't shown us your problem queries, but I think that conventional\nquery tuning would be a more appropriate answer. In particular I wonder\nwhether you shouldn't be looking at ways to calculate multiple\naggregates in parallel.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 10:16:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch "
},
{
"msg_contents": "> I've done other things that make sense, like using indexes, playing with the\n> planner constants and turning up the postgres cache buffers.\n\nAfter you load the new days data try running CLUSTER on the structure\nusing a key of (stockID, date) -- probably your primary key.\n\nThis should significantly reduce the amount of IO required for your\ncalculations involving a few stocks over a period of time.\n\n-- \n\n",
"msg_date": "Tue, 10 May 2005 10:17:59 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch"
},
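Rod's CLUSTER suggestion, spelled out as SQL. The table and index names are assumptions (the thread never shows Matt's actual schema), and the two CLUSTER spellings reflect the syntax change between older and newer releases.

```sql
-- Hypothetical names; order the table physically by (ticker, date) so the
-- ~200 rows needed per ticker land on adjacent pages instead of random ones.
CREATE INDEX price_history_ticker_date_idx
    ON price_history (ticker_id, trade_date);

CLUSTER price_history USING price_history_ticker_date_idx;  -- 8.3 and later
-- CLUSTER price_history_ticker_date_idx ON price_history;  -- 7.x / 8.0 syntax

-- The ordering is not maintained as new rows arrive, so the nightly batch
-- would need to re-run CLUSTER (and ANALYZE) periodically.
ANALYZE price_history;
```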
{
"msg_contents": "\nMatt Olson <[email protected]> writes:\n\n> I've done other things that make sense, like using indexes, playing with the\n> planner constants and turning up the postgres cache buffers.\n> \n> Even playing with extream hdparm read-ahead numbers (i.e. 64738) yields no\n> apparent difference in database performance. The random nature of the I/O\n> drops disk reads down to about 1MB/sec for the array. A linear table scan\n> can easily yield 70-80MB/sec on this system. Total table size is usually\n> around 1GB and with indexes should be able to fit completely in main memory.\n\nActually forcing things to use indexes is the wrong direction to go if you're\ntrying to process lots of data and want to stream it off disk as rapidly as\npossible. I would think about whether you can structure your data such that\nyou can use sequential scans. That might mean partitioning your raw data into\nseparate tables and then accessing only the partitions that are relevant to\nthe query.\n\nIn your application that might be hard. It sounds like you would need more or\nless one table per stock ticker which would really be hard to manage.\n\nOne thing you might look into is using the CLUSTER command. But postgres\ndoesn't maintain the cluster ordering so it would require periodically\nrerunning it.\n\nI'm a bit surprised by your 1MB/s rate. I would expect to see about 10MB/s\neven for completely random reads. Is it possible you're seeing something else\ninterfering? Do you have INSERT/UPDATE/DELETE transactions happening\nconcurrently with this select scan? If so you should strongly look into\nseparating the transaction log from the data files.\n\n-- \ngreg\n\n",
"msg_date": "10 May 2005 14:13:11 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch"
},
{
"msg_contents": "My postgres binaries and WAL are on a separate disk from the raid array. The \ntable I'm doing the selects from is probably about 4GB in size and 18-20 \nmillion records. No concurrent or dependent inserts or deletes are going on.\n\nTom's point and your points about optimizing the application are well taken. \nI know my approach is sub optimal and prone to getting caught by latency \nissues (seek times, cache hit rates, etc.). However, the question of \nprefetch in my mind is all about eliminating latencies, so, I thought my \nproblem would be good for the sake of discussing prefetching.\n\nThe two approaches I'm in the process of testing are Rod and Greg's suggestion \nof using 'CLUSTER'. And for the sake of not letting a good idea get away, \nI'll probably spend time on doing a parallel query approach which Tom \nsuggested. \n\nI'll report back to the list what I find and maybe do some _rough_ \nbenchmarking. This is a production app, so I can't get too much in the way \nof the daily batches. \n\n-- \nMatt Olson\nOcean Consulting\nhttp://www.oceanconsulting.com/\n\nOn Tuesday 10 May 2005 11:13 am, Greg Stark wrote:\n> Matt Olson writes:\n> > I've done other things that make sense, like using indexes, playing with\n> > the planner constants and turning up the postgres cache buffers.\n> >\n> > Even playing with extream hdparm read-ahead numbers (i.e. 64738) yields\n> > no apparent difference in database performance. The random nature of the\n> > I/O drops disk reads down to about 1MB/sec for the array. A linear table\n> > scan can easily yield 70-80MB/sec on this system. Total table size is\n> > usually around 1GB and with indexes should be able to fit completely in\n> > main memory.\n>\n> Actually forcing things to use indexes is the wrong direction to go if\n> you're trying to process lots of data and want to stream it off disk as\n> rapidly as possible. I would think about whether you can structure your\n> data such that you can use sequential scans. That might mean partitioning\n> your raw data into separate tables and then accessing only the partitions\n> that are relevant to the query.\n>\n> In your application that might be hard. It sounds like you would need more\n> or less one table per stock ticker which would really be hard to manage.\n>\n> One thing you might look into is using the CLUSTER command. But postgres\n> doesn't maintain the cluster ordering so it would require periodically\n> rerunning it.\n>\n> I'm a bit surprised by your 1MB/s rate. I would expect to see about 10MB/s\n> even for completely random reads. Is it possible you're seeing something\n> else interfering? Do you have INSERT/UPDATE/DELETE transactions happening\n> concurrently with this select scan? If so you should strongly look into\n> separating the transaction log from the data files.\n\n\n",
"msg_date": "Tue, 10 May 2005 11:36:31 -0700",
"msg_from": "Matt Olson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Actually forcing things to use indexes is the wrong direction to go if you're\n> trying to process lots of data and want to stream it off disk as rapidly as\n> possible. I would think about whether you can structure your data such that\n> you can use sequential scans.\n\nAgreed.\n\n> In your application that might be hard. It sounds like you would need more or\n> less one table per stock ticker which would really be hard to manage.\n\nActually, in a previous lifetime I used to do pretty much the same stuff\nMatt is working on. The reason I suggested parallelizing is that what\nyou want is usually not so much the 200day moving average of FOO, as the\n200day moving averages of a whole bunch of things. If your input table\ncontains time-ordered data for all those things, then a seqscan works\nout pretty well.\n\n> One thing you might look into is using the CLUSTER command. But postgres\n> doesn't maintain the cluster ordering so it would require periodically\n> rerunning it.\n\nIf the desired sort order is time-based, it falls out pretty much for\nfree in this application, because historical data doesn't change -- you\nare only interested in appending at the right.\n\nIn said previous lifetime, we used Postgres for tracking our actual\ntransactions, but we built a custom file format for storing the\nindividual tick data. That's not stuff you need transactional semantics\nfor; the historical data is what it is. Besides, you need to compress\nit as much as you can because there's always too much of it. Machines\nare faster and disk space cheaper than they were at the time, but I'd\nstill question the wisdom of using a Postgres row for each daily bar,\nlet alone finer-grain data.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 10 May 2005 14:53:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch "
},
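Tom's point about computing all the moving averages in one time-ordered pass can be written directly with window functions, which only appeared in PostgreSQL 8.4, well after this thread; on 7.4/8.0 the same idea had to be coded in the client or in PL/pgSQL. Table and column names below are invented.

```sql
-- One sequential pass over time-ordered data yields the 200-day simple
-- moving average for every ticker at once (8.4+; names are hypothetical).
SELECT ticker_id,
       trade_date,
       avg(close_price) OVER (
           PARTITION BY ticker_id
           ORDER BY trade_date
           ROWS BETWEEN 199 PRECEDING AND CURRENT ROW
       ) AS sma_200
FROM price_history
ORDER BY ticker_id, trade_date;
```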
{
"msg_contents": "Matt Olson wrote:\n> Other databases like Oracle and DB2 implement some sort of row prefetch. Has\n> there been serious consideration of implementing something like a prefetch\n> subsystem? Does anyone have any opinions as to why this would be a bad idea\n> for postgres?\n> Postges is great for a multiuser environment and OLTP applications. However,\n> in this set up, a data warehouse, the observed performance is not what I\n> would hope for.\n\nOracle doesn't pre-fetch data to get its fast results in this case.\npre-fetching doesn't give you the 100 times speed increases.\n\nBitmap indexes are very important for data mining. You might want to see\n\n http://www.it.iitb.ac.in/~rvijay/dbms/proj/\n\nI have no idea how well developed this is, but this is often the biggest\nwin with Data Warehousing. If it works, you'll get results back in seconds,\nif it doesn't you'll have plenty of time while your queries run to reflect on\nthe possibility that commercial databases might actually have important features\nthat haven't even penetrated the awareness of most free database developers.\n\nAnother trick you can use with large data sets like this when you want results\nback in seconds is to have regularly updated tables that aggregate the data\nalong each column normally aggregated against the main data set.\n\nOf couse, Pg doesn't have the nice features that make this just work and make\nqueries against your data source faster (called \"OLAP Query rewrite\" in\nOracle), so you'll have to put a lot of work into changing your application to\nfigure out when to use the summary tables. As far as I know it doesn't have\nmaterialized views, either, so updating these summary tables is also a more\ncomplex task than just a single REFRESH command.\n\nMaybe some bright person will prove me wrong by posting some working\ninformation about how to get these apparently absent features working.\n\nYou might also want to consider ditching RAID 5 and switching to plain\nmirroring. RAID 5 is a helluva performance penalty (by design). This is\nwhy they said RAID - fast, cheap, reliable - pick any two. RAID 5 ain't\nfast. But that's probably not your main problem.\n\nSam.\n\n> \n> Regards,\n> \n> Matt Olson\n> Ocean Consulting\n> http://www.oceanconsulting.com/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n",
"msg_date": "Wed, 11 May 2005 16:33:50 +1200",
"msg_from": "Sam Vilain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch"
},
{
"msg_contents": "> Another trick you can use with large data sets like this when you want \n> results\n> back in seconds is to have regularly updated tables that aggregate the data\n> along each column normally aggregated against the main data set.\n\n> Maybe some bright person will prove me wrong by posting some working\n> information about how to get these apparently absent features working.\n\nMost people just use simple triggers to maintain aggregate summary tables...\n\nChris\n",
"msg_date": "Wed, 11 May 2005 12:53:05 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch"
},
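A minimal sketch of the trigger-maintained summary table Chris mentions, using invented table and column names. It only handles INSERTs, and two sessions inserting the first row for a brand-new date can still race, so real code needs to handle both; as the following messages point out, per-row triggers also get expensive under bulk loads.

```sql
CREATE TABLE daily_sales_summary (
    sale_date date PRIMARY KEY,
    total     numeric NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION maintain_daily_sales() RETURNS trigger AS $$
BEGIN
    UPDATE daily_sales_summary
       SET total = total + NEW.amount
     WHERE sale_date = NEW.sale_date;
    IF NOT FOUND THEN
        -- concurrent first-inserts for the same date can collide here;
        -- a retry loop or exception handler would be needed in production
        INSERT INTO daily_sales_summary (sale_date, total)
        VALUES (NEW.sale_date, NEW.amount);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sales_summary_trg
    AFTER INSERT ON sales
    FOR EACH ROW EXECUTE PROCEDURE maintain_daily_sales();
```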
{
"msg_contents": "On Wed, 2005-05-11 at 12:53 +0800, Christopher Kings-Lynne wrote:\n> > Another trick you can use with large data sets like this when you want \n> > results\n> > back in seconds is to have regularly updated tables that aggregate the data\n> > along each column normally aggregated against the main data set.\n> \n> > Maybe some bright person will prove me wrong by posting some working\n> > information about how to get these apparently absent features working.\n> \n> Most people just use simple triggers to maintain aggregate summary tables...\n\nAgreed. I've also got a view which calls a function that will 1) use the\nsummary table where data exists, or 2) calculate the summary\ninformation, load it into summary table, and send a copy to the client\n(partial query results cache).\n\nIt's not all nicely abstracted behind user friendly syntax, but most of\nthose features can be cobbled together (with effort) in PostgreSQL.\n-- \n\n",
"msg_date": "Wed, 11 May 2005 09:42:30 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n>> Another trick you can use with large data sets like this when you want \n>> results\n>> back in seconds is to have regularly updated tables that aggregate the \n>> data\n>> along each column normally aggregated against the main data set.\n> \n> \n>> Maybe some bright person will prove me wrong by posting some working\n>> information about how to get these apparently absent features working.\n> \n> \n> Most people just use simple triggers to maintain aggregate summary \n> tables...\n> \n> Chris\n\nHowever, if (insert) triggers prove to be too much of a performance hit, try \ncron'd functions that perform the aggregation for you. This system works well \nfor us, using the pk's (sequence) for start and stop points.\n\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n",
"msg_date": "Wed, 11 May 2005 07:23:20 -0700",
"msg_from": "Bricklen Anderson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch"
},
{
"msg_contents": "Quoting Christopher Kings-Lynne <[email protected]>:\n\n> > Another trick you can use with large data sets like this when you\n> want \n> > results\n> > back in seconds is to have regularly updated tables that aggregate\n> the data\n> > along each column normally aggregated against the main data set.\n> \n> > Maybe some bright person will prove me wrong by posting some\n> working\n> > information about how to get these apparently absent features\n> working.\n> \n> Most people just use simple triggers to maintain aggregate summary\n> tables...\n\nDon't know if this is more appropriate to bizgres, but:\nWhat the first poster is talking about is what OLAP cubes do.\n\nFor big aggregating systems (OLAP), triggers perform poorly, \ncompared to messy hand-rolled code. You may have dozens\nof aggregates at various levels. Consider the effect of having \neach detail row cascade into twenty updates. \n\nIt's particularly silly-looking when data is coming in as \nbatches of thousands of rows in a single insert, e.g.\n\n COPY temp_table FROM STDIN;\n UPDATE fact_table ... FROM ... temp_table\n INSERT INTO fact_table ...FROM...temp_table\n\n (the above pair of operations is so common, \n Oracle added its \"MERGE\" operator for it).\n\nHence my recent post (request) for using RULES to aggregate \n--- given no luck with triggers \"FOR EACH STATEMENT\".\n\n",
"msg_date": "Wed, 11 May 2005 12:06:56 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch"
}
] |
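The COPY/UPDATE/INSERT batch-merge idiom Mischa describes at the end of the thread, written out with hypothetical table names. PostgreSQL had no MERGE statement at the time, and INSERT ... ON CONFLICT only arrived in 9.5, so the staging-table pattern was the usual workaround.

```sql
-- Stage the incoming batch first.
CREATE TEMP TABLE staging_facts (
    dim_id    integer,
    fact_date date,
    measure   numeric
);
COPY staging_facts FROM STDIN;   -- batch rows supplied by the loading client

-- Update the facts that already exist...
UPDATE fact_table f
   SET measure = s.measure
  FROM staging_facts s
 WHERE f.dim_id = s.dim_id
   AND f.fact_date = s.fact_date;

-- ...then insert the ones that do not.
INSERT INTO fact_table (dim_id, fact_date, measure)
SELECT s.dim_id, s.fact_date, s.measure
  FROM staging_facts s
  LEFT JOIN fact_table f
         ON f.dim_id = s.dim_id
        AND f.fact_date = s.fact_date
 WHERE f.dim_id IS NULL;
```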
[
{
"msg_contents": "\nHi Alex,\n\nActually, our product can partition data among several clustered nodes\nrunning PostgreSQL, if that is what you are looking for. Data is\ndistributed based on a designated column. Other tables can be\nreplicated to all nodes.\n\nFor SELECTs, it also knows when it can join locally or it needs to ship\nrows as part of the query plan. For FK constraints (discussed here), it\nalso knows when it can enforce them locally or not.\n\nPlease let me know if you would like some more information.\n\nRegards,\n\nTom Drayton\nExtenDB\nhttp://www.extendb.com\n\n\n\n> This is why I mention partitioning. It solves this issue by storing \n> different data sets on different machines under the same schema. \n> These seperate chunks of the table can then be replicated as \n> well for \n> data redundancy and so on. MySQL are working on these things, but PG \n> just has a bunch of third party extensions, I wonder why these are \n> not being integrated into the main trunk :/ Thanks for \n> pointing me to \n> PGCluster though. It looks like it should be better than \n> Slony at least.\n>\n",
"msg_date": "Tue, 10 May 2005 20:26:02 +0200",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "=?iso-8859-1?Q?RE:__Partitioning_/_Clustering?="
}
] |
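For comparison with the multi-node product described above: within a single stock PostgreSQL instance of that era, the closest building block for splitting a table by a designated column was table inheritance with CHECK constraints. This sketch uses invented names and says nothing about ExtenDB's internals.

```sql
CREATE TABLE hits (
    hit_time  timestamptz NOT NULL,
    url       text,
    client_ip inet
);

CREATE TABLE hits_2005_05 (
    CHECK (hit_time >= '2005-05-01' AND hit_time < '2005-06-01')
) INHERITS (hits);

CREATE TABLE hits_2005_06 (
    CHECK (hit_time >= '2005-06-01' AND hit_time < '2005-07-01')
) INHERITS (hits);

-- The application (or a rule/trigger on the parent) routes inserts to the
-- right child; a query against the parent sees every partition.
INSERT INTO hits_2005_05 VALUES ('2005-05-14 10:00', '/index.html', '10.0.0.1');
SELECT count(*) FROM hits WHERE hit_time >= '2005-05-01';
```

Constraint exclusion, which lets the planner skip partitions ruled out by their CHECK constraints, was only added in 8.1, so at the time of this thread the parent-table scan still visited every child.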
[
{
"msg_contents": "for time-series and \"insane fast\", nothing beats kdB, I believe\n\nwww.kx.com\n\nNot trying to Quisling-out PG here, just hoping to respond to Mr. Olson....\n\n\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Tom Lane\nSent: Tuesday, May 10, 2005 2:54 PM\nTo: Greg Stark\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Prefetch \n\n\nGreg Stark <[email protected]> writes:\n> Actually forcing things to use indexes is the wrong direction to go if \n> you're trying to process lots of data and want to stream it off disk \n> as rapidly as possible. I would think about whether you can structure \n> your data such that you can use sequential scans.\n\nAgreed.\n\n> In your application that might be hard. It sounds like you would need \n> more or less one table per stock ticker which would really be hard to \n> manage.\n\nActually, in a previous lifetime I used to do pretty much the same stuff Matt is working on. The reason I suggested parallelizing is that what you want is usually not so much the 200day moving average of FOO, as the 200day moving averages of a whole bunch of things. If your input table contains time-ordered data for all those things, then a seqscan works out pretty well.\n\n> One thing you might look into is using the CLUSTER command. But \n> postgres doesn't maintain the cluster ordering so it would require \n> periodically rerunning it.\n\nIf the desired sort order is time-based, it falls out pretty much for free in this application, because historical data doesn't change -- you are only interested in appending at the right.\n\nIn said previous lifetime, we used Postgres for tracking our actual transactions, but we built a custom file format for storing the individual tick data. That's not stuff you need transactional semantics for; the historical data is what it is. Besides, you need to compress it as much as you can because there's always too much of it. Machines are faster and disk space cheaper than they were at the time, but I'd still question the wisdom of using a Postgres row for each daily bar, let alone finer-grain data.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n",
"msg_date": "Tue, 10 May 2005 19:17:07 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Prefetch - OffTopic"
},
{
"msg_contents": "[email protected] (\"Mohan, Ross\") writes:\n> for time-series and \"insane fast\", nothing beats kdB, I believe\n>\n> www.kx.com\n\n... Which is well and fine if you're prepared to require that all of\nthe staff that interact with data are skilled APL hackers. Skilled\nenough that they're all ready to leap into Whitney's ASCII-based\nvariant, K.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www.ntlug.org/~cbbrowne/functional.html\nRules of the Evil Overlord #78. \"I will not tell my Legions of Terror\n\"And he must be taken alive!\" The command will be: ``And try to take\nhim alive if it is reasonably practical.''\"\n<http://www.eviloverlord.com/>\n",
"msg_date": "Tue, 10 May 2005 16:14:05 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Prefetch - OffTopic"
}
] |
[
{
"msg_contents": "Yes, that would be a sufficient (although not necessary) condition for being well and fine with kdB. \nLast time I used APL was.....pre-Gregorian, so yea, that's scary to me, too. \n\n( Of course, one can use C/ODBC or Java/JDBC to reach kdB; once there, one uses SQL92, or\nproprietary kSQL. )\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Chris Browne\nSent: Tuesday, May 10, 2005 4:14 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Prefetch - OffTopic\n\n\[email protected] (\"Mohan, Ross\") writes:\n> for time-series and \"insane fast\", nothing beats kdB, I believe\n>\n> www.kx.com\n\n... Which is well and fine if you're prepared to require that all of the staff that interact with data are skilled APL hackers. Skilled enough that they're all ready to leap into Whitney's ASCII-based variant, K.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\") http://www.ntlug.org/~cbbrowne/functional.html\nRules of the Evil Overlord #78. \"I will not tell my Legions of Terror \"And he must be taken alive!\" The command will be: ``And try to take him alive if it is reasonably practical.''\" <http://www.eviloverlord.com/>\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n",
"msg_date": "Tue, 10 May 2005 21:28:23 -0000",
"msg_from": "\"Mohan, Ross\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Prefetch - OffTopic"
}
] |
[
{
"msg_contents": "Hi,\n\n\nI have a query that is giving the optimizer (and me) great headache. When\nits in the good mood the optimizer chooses Hash Left Join and the query\nexecutes in 13ms or so, but sometimes (more and more often) it chooses\nNested Loop Left Join and the execution time goes up to 2-30sec.\n\nThe query:\nSELECT COUNT(DISTINCT a.tid) FROM axp_temp_order_match a LEFT OUTER JOIN (\nSELECT ol.tid, ds.orid FROM axp_dayschedule ds JOIN axp_order_line ol ON\nol.olid = ds.olid JOIN axp_order o ON ds.orid = o.orid WHERE o.status >= 100\nAND ds.day between '2005-05-12' and '2005-05-12' AND ds.used = '1' ) b ON\n(a.tid = b.tid) WHERE b.tid IS NULL AND a.sid = 16072;\n\nGood plan:\n=========\nAggregate (cost=221.93..221.93 rows=1 width=4) (actual time=34.262..34.266\nrows=1 loops=1)\n -> Hash Left Join (cost=9.07..220.86 rows=426 width=4) (actual\ntime=34.237..34.237 rows=0 loops=1)\n Hash Cond: (\"outer\".tid = \"inner\".tid)\n Filter: (\"inner\".tid IS NULL)\n -> Index Scan using axp_temp_order_match_idx1 on\naxp_temp_order_match a (cost=0.00..209.65 rows=426 width=4) (actual\ntime=0.277..0.512 rows=6 loops=1)\n Index Cond: (sid = 16072)\n -> Hash (cost=9.07..9.07 rows=1 width=4) (actual\ntime=32.777..32.777 rows=0 loops=1)\n -> Nested Loop (cost=0.00..9.07 rows=1 width=4) (actual\ntime=0.208..31.563 rows=284 loops=1)\n -> Nested Loop (cost=0.00..6.05 rows=1 width=4)\n(actual time=0.178..20.684 rows=552 loops=1)\n -> Index Scan using axp_dayschedule_day_idx on\naxp_dayschedule ds (cost=0.00..3.02 rows=1 width=8) (actual\ntime=0.036..3.973 rows=610 loops=1)\n Index Cond: ((\"day\" >= '2005-05-12'::date)\nAND (\"day\" <= '2005-05-12'::date))\n Filter: (used = B'1'::\"bit\")\n -> Index Scan using axp_order_orid_key on\naxp_order o (cost=0.00..3.02 rows=1 width=4) (actual time=0.009..0.013\nrows=1 loops=610)\n Index Cond: (\"outer\".orid = o.orid)\n Filter: (status >= 100)\n -> Index Scan using axp_order_line_pk on\naxp_order_line ol (cost=0.00..3.01 rows=1 width=8) (actual\ntime=0.006..0.008 rows=1 loops=552)\n Index Cond: (ol.olid = \"outer\".olid)\n Total runtime: 34.581 ms\n\nBad plan (same query different values):\n=======================================\n Aggregate (cost=11.54..11.54 rows=1 width=4) (actual\ntime=11969.281..11969.285 rows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..11.53 rows=1 width=4) (actual\ntime=25.730..11967.180 rows=338 loops=1)\n Join Filter: (\"outer\".tid = \"inner\".tid)\n Filter: (\"inner\".tid IS NULL)\n -> Index Scan using axp_temp_order_match_idx1 on\naxp_temp_order_match a (cost=0.00..2.45 rows=1 width=4) (actual\ntime=0.027..2.980 rows=471 loops=1)\n Index Cond: (sid = 16092)\n -> Nested Loop (cost=0.00..9.07 rows=1 width=4) (actual\ntime=0.088..24.350 rows=285 loops=471)\n -> Nested Loop (cost=0.00..6.04 rows=1 width=8) (actual\ntime=0.067..15.649 rows=317 loops=471)\n -> Index Scan using axp_dayschedule_day_idx on\naxp_dayschedule ds (cost=0.00..3.02 rows=1 width=8) (actual\ntime=0.015..3.557 rows=606 loops=471)\n Index Cond: ((\"day\" >= '2005-05-13'::date) AND\n(\"day\" <= '2005-05-13'::date))\n Filter: (used = B'1'::\"bit\")\n -> Index Scan using axp_order_line_pk on\naxp_order_line ol (cost=0.00..3.01 rows=1 width=8) (actual\ntime=0.006..0.008 rows=1 loops=285426)\n Index Cond: (ol.olid = \"outer\".olid)\n -> Index Scan using axp_order_orid_key on axp_order o\n(cost=0.00..3.02 rows=1 width=4) (actual time=0.009..0.013 rows=1\nloops=149307)\n Index Cond: (\"outer\".orid = o.orid)\n Filter: (status >= 100)\n Total runtime: 11969.443 
ms\n\nPlease note that sometimes when I get \"bad plan\" in the logfile, I just\nre-run the query and the optimizer chooses the more efficient one. Sometimes\nit does not.\n\nAny ideas?\n\npostgresql-8.0.2 on 2x3.2 GHz Xeon with 2GB ram Linux 2.6\nshared_buffers = 15000\nwork_mem = 128000\neffective_cache_size = 200000\nrandom_page_cost = (tried 1.0 - 4, seemingly without effect on this\nparticular issue).\n\nEdin\n\n\n\n",
"msg_date": "Wed, 11 May 2005 10:34:45 +0200",
"msg_from": "\"Edin Kadribasic\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizer wrongly picks Nested Loop Left Join"
},
{
"msg_contents": "Edin Kadribasic wrote:\n> Hi,\n>\n>\n> I have a query that is giving the optimizer (and me) great headache. When\n> its in the good mood the optimizer chooses Hash Left Join and the query\n> executes in 13ms or so, but sometimes (more and more often) it chooses\n> Nested Loop Left Join and the execution time goes up to 2-30sec.\n>\n> The query:\n> SELECT COUNT(DISTINCT a.tid) FROM axp_temp_order_match a LEFT OUTER JOIN (\n> SELECT ol.tid, ds.orid FROM axp_dayschedule ds JOIN axp_order_line ol ON\n> ol.olid = ds.olid JOIN axp_order o ON ds.orid = o.orid WHERE o.status >= 100\n> AND ds.day between '2005-05-12' and '2005-05-12' AND ds.used = '1' ) b ON\n> (a.tid = b.tid) WHERE b.tid IS NULL AND a.sid = 16072;\n>\n\nUnfortunately, because Hash Join doesn't report the number of rows\n(rows=0 always), it's hard to tell how good the estimator is. But I\n*can* say that the NestLoop estimation is way off.\n\n>\n> Good plan:\n> =========\n> Aggregate (cost=221.93..221.93 rows=1 width=4) (actual time=34.262..34.266\n> rows=1 loops=1)\n> -> Hash Left Join (cost=9.07..220.86 rows=426 width=4) (actual\n> time=34.237..34.237 rows=0 loops=1)\n> Hash Cond: (\"outer\".tid = \"inner\".tid)\n> Filter: (\"inner\".tid IS NULL)\n> -> Index Scan using axp_temp_order_match_idx1 on\n> axp_temp_order_match a (cost=0.00..209.65 rows=426 width=4) (actual\n> time=0.277..0.512 rows=6 loops=1)\n> Index Cond: (sid = 16072)\n> -> Hash (cost=9.07..9.07 rows=1 width=4) (actual\n> time=32.777..32.777 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..9.07 rows=1 width=4) (actual\n> time=0.208..31.563 rows=284 loops=1)\n> -> Nested Loop (cost=0.00..6.05 rows=1 width=4)\n> (actual time=0.178..20.684 rows=552 loops=1)\n> -> Index Scan using axp_dayschedule_day_idx on\n> axp_dayschedule ds (cost=0.00..3.02 rows=1 width=8) (actual\n> time=0.036..3.973 rows=610 loops=1)\n> Index Cond: ((\"day\" >= '2005-05-12'::date)\n> AND (\"day\" <= '2005-05-12'::date))\n> Filter: (used = B'1'::\"bit\")\n> -> Index Scan using axp_order_orid_key on\n> axp_order o (cost=0.00..3.02 rows=1 width=4) (actual time=0.009..0.013\n> rows=1 loops=610)\n> Index Cond: (\"outer\".orid = o.orid)\n> Filter: (status >= 100)\n> -> Index Scan using axp_order_line_pk on\n> axp_order_line ol (cost=0.00..3.01 rows=1 width=8) (actual\n> time=0.006..0.008 rows=1 loops=552)\n> Index Cond: (ol.olid = \"outer\".olid)\n> Total runtime: 34.581 ms\n>\n> Bad plan (same query different values):\n> =======================================\n> Aggregate (cost=11.54..11.54 rows=1 width=4) (actual\n> time=11969.281..11969.285 rows=1 loops=1)\n> -> Nested Loop Left Join (cost=0.00..11.53 rows=1 width=4) (actual\n> time=25.730..11967.180 rows=338 loops=1)\n\nSee here, it thinks it will only have to do 1 nestloop, which would be\nquite fast, but it hast to do 338.\n\n> Join Filter: (\"outer\".tid = \"inner\".tid)\n> Filter: (\"inner\".tid IS NULL)\n> -> Index Scan using axp_temp_order_match_idx1 on\n> axp_temp_order_match a (cost=0.00..2.45 rows=1 width=4) (actual\n> time=0.027..2.980 rows=471 loops=1)\n> Index Cond: (sid = 16092)\n> -> Nested Loop (cost=0.00..9.07 rows=1 width=4) (actual\n> time=0.088..24.350 rows=285 loops=471)\n\nSame thing here.\n\n> -> Nested Loop (cost=0.00..6.04 rows=1 width=8) (actual\n> time=0.067..15.649 rows=317 loops=471)\n\nAnd here.\n\n> -> Index Scan using axp_dayschedule_day_idx on\n> axp_dayschedule ds (cost=0.00..3.02 rows=1 width=8) (actual\n> time=0.015..3.557 rows=606 loops=471)\n\nThis estimate is way off too, but it is off in both 
plans.\n\n> Index Cond: ((\"day\" >= '2005-05-13'::date) AND\n> (\"day\" <= '2005-05-13'::date))\n> Filter: (used = B'1'::\"bit\")\n> -> Index Scan using axp_order_line_pk on\n> axp_order_line ol (cost=0.00..3.01 rows=1 width=8) (actual\n> time=0.006..0.008 rows=1 loops=285426)\n\nThis is probably what is killing you. It is doing a single lookup 285k\ntimes. The above plan only does it 552 times.\n> Index Cond: (ol.olid = \"outer\".olid)\n> -> Index Scan using axp_order_orid_key on axp_order o\n> (cost=0.00..3.02 rows=1 width=4) (actual time=0.009..0.013 rows=1\n> loops=149307)\n> Index Cond: (\"outer\".orid = o.orid)\n> Filter: (status >= 100)\n> Total runtime: 11969.443 ms\n>\n> Please note that sometimes when I get \"bad plan\" in the logfile, I just\n> re-run the query and the optimizer chooses the more efficient one. Sometime\n> it does not.\n\nYou work_mem is quite high relative to your total Ram, hopefully you\ndon't have many allowed concurrent connections. But that is a side point.\n\nI assume the tables are freshly VACUUM ANALYZEd. Have you tried altering\nthe statistics for the columns, one of them to look at is\naxp_dayschedule(day). That one seems to be consistently incorrect.\nPerhaps because a between with the same date is too restrictive in\npostgres? I don't really know.\n\n\nAlso, looking at the original query, maybe I am missing something, but\nyou are doing a COUNT(DISTINCT) with a LEFT OUTER JOIN.\n\nIf it is a LEFT JOIN, isn't that the same as\n\nSELECT COUNT(DISTINCT a.tid) FROM axp_temp_order_match a\n WHERE a.sid = 16072;\n\nI also have to wonder about:\n\nSELECT ... a.tid FROM a LEFT JOIN (...) b ON (a.tid = b.tid)\n WHERE b.tid IS NULL\n\nIsn't that equivalent to\n\nSELECT COUNT(DISTINCT a.tid) FROM a WHERE a.tid IS NULL;\n\nWhich is also equivalent to\nSELECT CASE WHEN EXISTS (SELECT a.tid FROM a WHERE a.tid IS NULL AND\na.sid = 16072) THEN 1 ELSE 0 END;\n\nI could be wrong, but if a.tid IS NULL (because b.tid IS NULL, and a.tid\n= b.tid), DISTINCT can only return 0 or 1 rows, which is the same as\nusing a case statement. You could also put a LIMIT in there, since you\nknow DISTINCT can only return 1 row\n\nSELECT COUNT(a.tid) FROM a WHERE a.tid IS NULL AND a.sid = ... LIMIT 1;\n\nHowever, one final point, COUNT(column) where column is NULL doesn't\ncount anything. So really your query can be replaced with:\n\nSELECT 0;\n\nNow maybe this is a generated query, and under other circumstances it\ngives a different query which actually is important.\n\nThe biggest thing is that using a COUNT() and a DISTINCT on a column\nthat is on the left side of a LEFT JOIN, sounds like you can get rid of\nthe entire right side of that join.\n\nJohn\n=:->\n\n>\n> Any ideas?\n>\n> postgresql-8.0.2 on 2x3.2 GHz Xeon with 2GB ram Linux 2.6\n> shared_buffers = 15000\n> work_mem = 128000\n> effective_cache_size = 200000\n> random_page_cost = (tried 1.0 - 4, seemingly without effect on this\n> particular issue).\n>\n> Edin\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match",
"msg_date": "Wed, 11 May 2005 09:04:47 -0500",
"msg_from": "John A Meinel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer wrongly picks Nested Loop Left Join"
},
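John's observation that the LEFT JOIN ... WHERE b.tid IS NULL pattern is really an anti-join can also be expressed with NOT EXISTS, shown below against the tables from the thread (the sid constant is just the example value). On 8.0 this form is not necessarily planned any better; it is an equivalent rewrite worth testing, not a guaranteed fix.

```sql
SELECT count(DISTINCT a.tid)
FROM axp_temp_order_match a
WHERE a.sid = 16072
  AND NOT EXISTS (
        SELECT 1
        FROM axp_dayschedule ds
        JOIN axp_order_line ol ON ol.olid = ds.olid
        JOIN axp_order o       ON ds.orid = o.orid
        WHERE o.status >= 100
          AND ds.day BETWEEN '2005-05-12' AND '2005-05-12'
          AND ds.used = '1'
          AND ol.tid = a.tid
      );
```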
{
"msg_contents": "\"Edin Kadribasic\" <[email protected]> writes:\n> I have a query that is giving the optimizer (and me) great headache.\n\nThe main problem seems to be that the rowcount estimates for\naxp_temp_order_match and axp_dayschedule are way off:\n\n> -> Index Scan using axp_temp_order_match_idx1 on\n> axp_temp_order_match a (cost=0.00..209.65 rows=426 width=4) (actual\n> time=0.277..0.512 rows=6 loops=1)\n> Index Cond: (sid = 16072)\n\n> -> Index Scan using axp_dayschedule_day_idx on\n> axp_dayschedule ds (cost=0.00..3.02 rows=1 width=8) (actual\n> time=0.036..3.973 rows=610 loops=1)\n> Index Cond: ((\"day\" >= '2005-05-12'::date)\n> AND (\"day\" <= '2005-05-12'::date))\n> Filter: (used = B'1'::\"bit\")\n\n> -> Index Scan using axp_temp_order_match_idx1 on\n> axp_temp_order_match a (cost=0.00..2.45 rows=1 width=4) (actual\n> time=0.027..2.980 rows=471 loops=1)\n> Index Cond: (sid = 16092)\n\n> -> Index Scan using axp_dayschedule_day_idx on\n> axp_dayschedule ds (cost=0.00..3.02 rows=1 width=8) (actual\n> time=0.015..3.557 rows=606 loops=471)\n> Index Cond: ((\"day\" >= '2005-05-13'::date) AND\n> (\"day\" <= '2005-05-13'::date))\n> Filter: (used = B'1'::\"bit\")\n\nDo you ANALYZE these tables on a regular basis? If so, it may be\nnecessary to increase the statistics target to the point where you\nget better estimates.\n\n> Please note that sometimes when I get \"bad plan\" in the logfile, I just\n> re-run the query and the optimizer chooses the more efficient one.\n\nThat's fairly hard to believe, unless you've got autovacuum running\nin the background.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 May 2005 10:18:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer wrongly picks Nested Loop Left Join "
},
{
"msg_contents": "John A Meinel <[email protected]> writes:\n> Unfortunately, because Hash Join doesn't report the number of rows\n> (rows=0 always), it's hard to tell how good the estimator is.\n\nThis is only a cosmetic problem because you can just look at the number\nof rows actually emitted by the Hash node's child; that's always exactly\nthe number loaded into the hashtable.\n\n(But having said that, it is fixed in CVS tip.)\n\nYou may be confused though --- the Hash node is not the Hash Join node.\nA zero report from Hash Join does actually mean that it emitted zero\nrows.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 May 2005 11:03:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizer wrongly picks Nested Loop Left Join "
},
{
"msg_contents": "From: \"Tom Lane\" <[email protected]>\n> \"Edin Kadribasic\" <[email protected]> writes:\n> > I have a query that is giving the optimizer (and me) great headache.\n>\n> The main problem seems to be that the rowcount estimates for\n> axp_temp_order_match and axp_dayschedule are way off:\n>\n> > -> Index Scan using axp_temp_order_match_idx1 on\n> > axp_temp_order_match a (cost=0.00..209.65 rows=426 width=4) (actual\n> > time=0.277..0.512 rows=6 loops=1)\n> > Index Cond: (sid = 16072)\n>\n> > -> Index Scan using axp_dayschedule_day_idx\non\n> > axp_dayschedule ds (cost=0.00..3.02 rows=1 width=8) (actual\n> > time=0.036..3.973 rows=610 loops=1)\n> > Index Cond: ((\"day\" >=\n'2005-05-12'::date)\n> > AND (\"day\" <= '2005-05-12'::date))\n> > Filter: (used = B'1'::\"bit\")\n>\n> > -> Index Scan using axp_temp_order_match_idx1 on\n> > axp_temp_order_match a (cost=0.00..2.45 rows=1 width=4) (actual\n> > time=0.027..2.980 rows=471 loops=1)\n> > Index Cond: (sid = 16092)\n>\n> > -> Index Scan using axp_dayschedule_day_idx on\n> > axp_dayschedule ds (cost=0.00..3.02 rows=1 width=8) (actual\n> > time=0.015..3.557 rows=606 loops=471)\n> > Index Cond: ((\"day\" >= '2005-05-13'::date)\nAND\n> > (\"day\" <= '2005-05-13'::date))\n> > Filter: (used = B'1'::\"bit\")\n>\n> Do you ANALYZE these tables on a regular basis? If so, it may be\n> necessary to increase the statistics target to the point where you\n> get better estimates.\n\nIncreasing statistics didn't seem to help, but both of you gave me an idea\nof what might be wrong. axp_temp_order match contains temporary matches for\na search. Just before execution of that query the new matches are inserted\ninto the table under a new search id (sid column). Since the ANALYZE was\nthat before it it grossly underestimates the number of matches for that sid.\nAs this table is relatively small inserting ANALYZE axp_temp_order_match\njust before running the query does not introduce a great perforance penalty\n(50ms) and it reduces the query execution time from up to 50s down to ~20ms.\n\n> > Please note that sometimes when I get \"bad plan\" in the logfile, I just\n> > re-run the query and the optimizer chooses the more efficient one.\n>\n> That's fairly hard to believe, unless you've got autovacuum running\n> in the background.\n\nThe application had ANALYZE axp_temp_order_match placed in the \"slightly\"\nwrong location, before the large insert was done (1000 rows with a new\nsid). So when the app run the next search, previous search got correctly\nanalyzed and the query execution time dropped dramatically as I was trying\nto EXPLAIN ANALYZE query recorded in the log file.\n\nThanks for your help,\n\nEdin\n\n",
"msg_date": "Thu, 12 May 2005 01:28:25 +0200",
"msg_from": "\"Edin Kadribasic\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizer wrongly picks Nested Loop Left Join "
}
] |
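The resolution Edin describes, as a sequence of statements: populate the scratch table first, ANALYZE it, and only then run the big query. The sid/tid column names come from the thread; the literal rows are illustrative placeholders, since the application's real insert is not shown.

```sql
-- 1. The application loads the new batch of candidate matches for a search.
INSERT INTO axp_temp_order_match (sid, tid) VALUES (16093, 101);
INSERT INTO axp_temp_order_match (sid, tid) VALUES (16093, 102);
-- ...roughly a thousand rows per search in the case described above...

-- 2. Refresh statistics so the planner sees a realistic row count for the
--    new sid instead of a stale estimate of one row (~50 ms per the thread).
ANALYZE axp_temp_order_match;

-- 3. Only now run the COUNT(DISTINCT ...) LEFT JOIN query from the start of
--    the thread.
```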
[
{
"msg_contents": "Hi,\n\nWe have some performances problem on a particular query.\n\nWe reproduced the problem on a 7.4.5 and on a 7.4.7 server.\n* we load the dump in a new database\n* query: it's fast (< 1ms)\n* VACUUM FULL ANALYZE;\n* query: it's really slow (130ms) and it's another plan\n* set enable_seqscan=off;\n* query: it's fast (< 1ms) : it uses the best plan\n\nI attached the EXPLAIN ANALYZE outputs, the query and the tables\ndescription. I really can't understand why the planner chooses this plan\nand especially the line :\n-> Index Scan using acs_objects_object_id_p_hhkb1 on acs_objects t98\n(cost=0.00..2554.07 rows=33510 width=81) (actual time=0.043..56.392\nrows=33510 loops=1).\nI never saw an index scan on such a number of lines. For your\ninformation, there are 33510 lines in this table so it scans the whole\ntable.\n\nThe problem seems to be the left join on the acs_objects t98 table for\nthe parent_application_id as if I remove it or if I change it to a\nsubquery, it's ok. The query is automatically generated by a persistence \nlayer so I can't really rewrite it.\n\nThanks for any help\n\nRegards\n\n--\nGuillaume",
"msg_date": "Wed, 11 May 2005 19:23:32 +0200",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad plan after vacuum analyze"
},
{
"msg_contents": "Guillaume,\n\n> We reproduced the problem on a 7.4.5 and on a 7.4.7 server.\n> * we load the dump in a new database\n> * query: it's fast (< 1ms)\n> * VACUUM FULL ANALYZE;\n> * query: it's really slow (130ms) and it's another plan\n> * set enable_seqscan=off;\n> * query: it's fast (< 1ms) : it uses the best plan\n\nLooking at this, the planner seems convinced that the merge join is the \neasiest way to do the OUTER JOINS, but it appears to be wrong; a nested loop \nis faster.\n\nThis isn't the only place I've encountered our optimizer doing this -- \nunderestimating the cost of a merge join. This seems to be becuase the \nmerge_join vs. nested_loop decision seems to be being made in the planner \nwithout taking the double-sort and index access costs into account. This \nquery is an excellent example:\n\n\"good\" plan:\n Nested Loop Left Join (cost=2.44..17.36 rows=1 width=5532) (actual \ntime=0.441..0.466 rows=1 loops=1)\n Join Filter: (\"outer\".parent_application_id = \"inner\".application_id)\n -> Nested Loop Left Join (cost=2.44..15.73 rows=1 width=5214) (actual \ntime=0.378..0.402 rows=1 loops=1)\n\nSee, here the planner thinks that the 2 nested loops will cost \"35\". \n\n\"bad\" plan:\n Merge Right Join (cost=9.27..9.48 rows=1 width=545) (actual \ntime=129.364..129.365 rows=1 loops=1)\n Merge Cond: (\"outer\".application_id = \"inner\".parent_application_id)\n -> Index Scan using applicati_applicati_id_p_ogstm on applications t116 \n(cost=0.00..5.51 rows=28 width=20) (actual time=0.030..0.073 rows=28 loops=1)\n -> Sort (cost=9.27..9.27 rows=1 width=529) (actual time=129.202..129.203 \nrows=1 loops=1)\n Sort Key: t22.parent_application_id\n -> Merge Right Join (cost=8.92..9.26 rows=1 width=529) (actual \ntime=129.100..129.103 rows=1 loops=1)\n Merge Cond: (\"outer\".object_id = \"inner\".parent_application_id)\n -> Index Scan using acs_objects_object_id_p_hhkb1 on \nacs_objects t98 (cost=0.00..2554.07 rows=33510 width=81) (actual \ntime=0.043..56.392 rows=33510 loops=1)\n -> Sort (cost=8.92..8.93 rows=1 width=452) (actual \ntime=0.309..0.310 rows=1 loops=1)\n Sort Key: t22.parent_application_id\n\nHere the planner chooses a merge right join. This decision seems to have been \nmade entirely on the basis of the cost of the join itself (total of 17) \nwithout taking the cost of the sort and index access (total of 2600+) into \naccount.\n\nTom, is this a possible error in planner logic?\n\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 11 May 2005 11:10:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan after vacuum analyze"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> -> Merge Right Join (cost=8.92..9.26 rows=1 width=529) (actual \n> time=129.100..129.103 rows=1 loops=1)\n> Merge Cond: (\"outer\".object_id = \"inner\".parent_application_id)\n> -> Index Scan using acs_objects_object_id_p_hhkb1 on \n> acs_objects t98 (cost=0.00..2554.07 rows=33510 width=81) (actual \n> time=0.043..56.392 rows=33510 loops=1)\n> -> Sort (cost=8.92..8.93 rows=1 width=452) (actual \n> time=0.309..0.310 rows=1 loops=1)\n> Sort Key: t22.parent_application_id\n\n> Here the planner chooses a merge right join. This decision seems to have been \n> made entirely on the basis of the cost of the join itself (total of 17) \n> without taking the cost of the sort and index access (total of 2600+) into \n> account.\n\n> Tom, is this a possible error in planner logic?\n\nNo, it certainly hasn't forgotten to add in the costs of the inputs.\nThere might be a bug here, but if so it's much more subtle than that.\n\nIt looks to me like the planner believes that the one value of\nt22.parent_application_id joins to something very early in the\nacs_objects_object_id_p_hhkb1 sort order, and that it will therefore not\nbe necessary to run the indexscan to completion (or indeed very far at\nall, considering that it's including such a small fraction of the total\nindexscan cost).\n\nandrew@supernews pointed out recently that this effect doesn't apply to\nthe outer side of an outer join; releases before 7.4.8 mistakenly think\nit does. But unless my wires are totally crossed today, acs_objects is\nthe nullable side here and so that error isn't applicable anyway.\n\nSo, the usual questions: have these two tables been ANALYZEd lately?\nIf so, can we see the pg_stats rows for the object_id and\nparent_application_id columns?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 May 2005 14:58:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan after vacuum analyze "
},
{
"msg_contents": "Tom,\n\n> So, the usual questions: have these two tables been ANALYZEd lately?\n\nYes, of course.\nAs I wrote in my previous mail, here is how I reproduce the problem:\n- we load the dump in a new database (to be sure, there is no problem on \nan index or something like that)\n- query: it's fast (< 1ms)\n- *VACUUM FULL ANALYZE;*\n- query: it's really slow (130ms) and it's another plan\n- set enable_seqscan=off;\n- query: it's fast (< 1ms) : it uses the best plan\n\nI reproduced it on two different servers exactly like that (7.4.5 and \n7.4.7).\n\nI first met the problem on a production database with a VACUUM ANALYZE \nrun every night (and we don't have too many inserts a day on this database).\n\n> If so, can we see the pg_stats rows for the object_id and\n> parent_application_id columns?\n\nSee attached file.\n\nIf you're interested in a dump of these tables, just tell me. There \naren't any confidential information in them.\n\nRegards\n\n--\nGuillaume",
"msg_date": "Wed, 11 May 2005 21:32:11 +0200",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad plan after vacuum analyze"
},
{
"msg_contents": "Guillaume Smet <[email protected]> writes:\n>> If so, can we see the pg_stats rows for the object_id and\n>> parent_application_id columns?\n\n> See attached file.\n\nWell, those stats certainly appear to justify the planner's belief that\nthe indexscan needn't run very far: the one value of\nparent_application_id is 1031 and this is below the smallest value of\nobject_id seen by analyze. You might have better luck if you increase\nthe statistics target for acs_objects.object_id. (It'd be interesting\nto know what fraction of acs_objects actually does have object_id < 1032.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 May 2005 15:38:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan after vacuum analyze "
},
{
"msg_contents": " > Well, those stats certainly appear to justify the planner's belief that\n > the indexscan needn't run very far: the one value of\n > parent_application_id is 1031 and this is below the smallest value of\n > object_id seen by analyze.\n\nYes, it seems rather logical but why does it cost so much if it should \nbe an effective way to find the row?\n\n > You might have better luck if you increase\n > the statistics target for acs_objects.object_id.\n\nWhat do you mean exactly?\n\n > (It'd be interesting\n > to know what fraction of acs_objects actually does have object_id < \n1032.)\n\nccm_perf=# SELECT COUNT(*) FROM acs_objects WHERE object_id<1032;\n count\n-------\n 15\n\nccm_perf=# SELECT COUNT(*) FROM acs_objects;\n count\n-------\n 33510\n\n\n--\nGuillaume\n",
"msg_date": "Wed, 11 May 2005 21:57:02 +0200",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad plan after vacuum analyze"
},
{
"msg_contents": "Ah-ha, I can replicate the problem. This example uses tenk1 from the\nregression database, which has a column unique2 containing just the\nintegers 0..9999.\n\nregression=# create table t1(f1 int);\nCREATE TABLE\nregression=# insert into t1 values(5);\nINSERT 154632 1\nregression=# insert into t1 values(7);\nINSERT 154633 1\nregression=# analyze t1;\nANALYZE\nregression=# explain analyze select * from tenk1 right join t1 on (unique2=f1);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Merge Right Join (cost=1.03..1.37 rows=2 width=248) (actual time=0.507..0.617 rows=2 loops=1)\n Merge Cond: (\"outer\".unique2 = \"inner\".f1)\n -> Index Scan using tenk1_unique2 on tenk1 (cost=0.00..498.24 rows=10024 width=244) (actual time=0.126..0.242 rows=9 loops=1)\n -> Sort (cost=1.03..1.03 rows=2 width=4) (actual time=0.145..0.153 rows=2 loops=1)\n Sort Key: t1.f1\n -> Seq Scan on t1 (cost=0.00..1.02 rows=2 width=4) (actual time=0.029..0.049 rows=2 loops=1)\n Total runtime: 1.497 ms\n(7 rows)\n\nThe planner correctly perceives that only a small part of the unique2\nindex will need to be scanned, and hence thinks the merge is cheap ---\nmuch cheaper than if the whole index had to be scanned. And it is.\nNotice that only 9 rows were actually pulled from the index. Once\nwe got to unique2 = 8, nodeMergejoin.c could see that no more matches\nto f1 were possible.\n\nBut watch this:\n\nregression=# insert into t1 values(null);\nINSERT 154634 1\nregression=# explain analyze select * from tenk1 right join t1 on (unique2=f1);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Merge Right Join (cost=1.03..1.37 rows=2 width=248) (actual time=0.560..290.874 rows=3 loops=1)\n Merge Cond: (\"outer\".unique2 = \"inner\".f1)\n -> Index Scan using tenk1_unique2 on tenk1 (cost=0.00..498.24 rows=10024 width=244) (actual time=0.139..106.982 rows=10000 loops=1)\n -> Sort (cost=1.03..1.03 rows=2 width=4) (actual time=0.181..0.194 rows=3 loops=1)\n Sort Key: t1.f1\n -> Seq Scan on t1 (cost=0.00..1.02 rows=2 width=4) (actual time=0.032..0.067 rows=3 loops=1)\n Total runtime: 291.670 ms\n(7 rows)\n\nSee what happened to the actual costs of the indexscan? All of a sudden\nwe had to scan the whole index because there was a null in the other\ninput, and nulls sort high.\n\nI wonder if it is worth fixing nodeMergejoin.c to not even try to match\nnulls to the other input. We'd have to add a check to see if the join\noperator is strict or not, but it nearly always will be.\n\nThe alternative would be to make the planner only believe in the\nshort-circuit path occuring if it thinks that the other input is\nentirely non-null ... but this seems pretty fragile, since it only\ntakes one null to mess things up, and ANALYZE can hardly be counted\non to detect one null in a table.\n\nIn the meantime it seems like the quickest answer for Guillaume might\nbe to try to avoid keeping any NULLs in parent_application_id.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 11 May 2005 16:32:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan after vacuum analyze "
},
{
"msg_contents": "Josh, Tom,\n\nThanks for your explanations.\n\n> In the meantime it seems like the quickest answer for Guillaume might\n> be to try to avoid keeping any NULLs in parent_application_id.\n\nI can't do that as the majority of the applications don't have any \nparent one. Moreover, we use a third party application and we cannot \nmodify all its internals.\n\nAnyway, I tried to work on the statistics as you told me and here are \nthe results:\nccm_perf=# ALTER TABLE acs_objects ALTER COLUMN object_id SET STATISTICS 30;\nALTER TABLE\nccm_perf=# ANALYZE acs_objects;\nANALYZE\n\nccm_perf=# \\i query_section.sql\n... correct plan ...\n Total runtime: 0.555 ms\n\nSo I think I will use this solution for the moment.\n\nThanks a lot for your help.\n\nRegards\n\n--\nGuillaume\n",
"msg_date": "Wed, 11 May 2005 22:59:40 +0200",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad plan after vacuum analyze"
},
{
"msg_contents": "Quoting Guillaume Smet <[email protected]>:\n\n> Hi,\n> \n> We have some performances problem on a particular query.\n...\n\nI have to say it, this was the best laid-out set of details behind a\nproblem I've ever seen on this list; I'm going to try live up to it, the\nnext time I have a problem of my own.\n\n\n",
"msg_date": "Wed, 11 May 2005 14:27:45 -0700",
"msg_from": "Mischa Sandberg <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan after vacuum analyze"
},
{
"msg_contents": "В Срд, 11/05/2005 в 22:59 +0200, Guillaume Smet пишет:\n\n> Anyway, I tried to work on the statistics as you told me and here are \n> the results:\n> ccm_perf=# ALTER TABLE acs_objects ALTER COLUMN object_id SET STATISTICS 30;\n> ALTER TABLE\n> ccm_perf=# ANALYZE acs_objects;\n> ANALYZE\n> \n> ccm_perf=# \\i query_section.sql\n> ... correct plan ...\n> Total runtime: 0.555 ms\n\nGiven Tom's analysis, how can increasing the stats target change which\nplan is chosen?\n\n-- \nMarkus Bertheau <[email protected]>",
"msg_date": "Fri, 13 May 2005 20:35:35 +0200",
"msg_from": "Markus Bertheau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan after vacuum analyze"
}
] |
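The statistics-target workaround from the end of the thread, together with the kind of pg_stats lookup Tom asked for, collected in one place. The table and column names are the real ones discussed above.

```sql
-- Sample object_id in more detail so ANALYZE notices the low end of its range.
ALTER TABLE acs_objects ALTER COLUMN object_id SET STATISTICS 30;
ANALYZE acs_objects;

-- Handy checks when a plan flips after ANALYZE:
SELECT null_frac, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'acs_objects'
  AND attname = 'object_id';

SELECT count(*) FROM acs_objects WHERE object_id < 1032;   -- 15 of 33510 rows
```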