[
{
"msg_contents": "Hi Michael,\n\nThanls for your response.\n\nMichael Fuhr wrote:\n> On Mon, Mar 06, 2006 at 04:29:49PM +0100, Joost Kraaijeveld wrote:\n>> Below are some results of running pgbench, run on a machine that\n>> is doing nothing else than running PostgreSQL woth pgbench. The\n>> strange thing is that the results are *constantly alternating* hight\n>> (750-850 transactions)and low (50-80 transactions), no matter how\n>> many test I run. If I wait a long time (> 5 minutes) after running\n>> the test, I always get a hight score, followed by a low one, followed\n>> by a high one, low one etc.\n> \n> The default checkpoint_timeout is 300 seconds (5 minutes). Is it\n> coincidence that the \"long time\" between fast results is about the\n> same? \nI have not measured the \"long wait time\". But I can run multiple test in 3 minutes: the fast test lasts 3 sec, the long one 40 secs (see below). During the tests there is not much activity on the partition where the logfiles are (other controller and disk than the database and swap)\n\npostgres@panoramix:/usr/lib/postgresql/8.1/bin$ time ./pgbench -c 10 -t 150 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 150\nnumber of transactions actually processed: 1500/1500\ntps = 531.067258 (including connections establishing)\ntps = 541.694790 (excluding connections establishing)\n\nreal 0m2.892s\nuser 0m0.105s\nsys 0m0.145s\n\n\npostgres@panoramix:/usr/lib/postgresql/8.1/bin$ time ./pgbench -c 10 -t 150 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 100\nnumber of clients: 10\nnumber of transactions per client: 150\nnumber of transactions actually processed: 1500/1500\ntps = 37.064000 (including connections establishing)\ntps = 37.114023 (excluding connections establishing)\n\nreal 0m40.531s\nuser 0m0.088s\nsys 0m0.132s\n\n>What's your setting? \nDefault.\n\n> Are your test results more consistent\n> if you execute CHECKPOINT between them?\nCould you tell me how I could do that?\n\n\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n",
"msg_date": "Mon, 6 Mar 2006 19:46:05 +0100",
"msg_from": "\"Joost Kraaijeveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can anyone explain this pgbench results?"
},
{
"msg_contents": "On Mon, Mar 06, 2006 at 07:46:05PM +0100, Joost Kraaijeveld wrote:\n> Michael Fuhr wrote:\n> > What's your setting? \n>\n> Default.\n\nHave you tweaked postgresql.conf at all? If so, what non-default\nsettings are you using?\n\n> > Are your test results more consistent\n> > if you execute CHECKPOINT between them?\n>\n> Could you tell me how I could do that?\n\nConnect to the database as a superuser and execute a CHECKPOINT\nstatement.\n\nhttp://www.postgresql.org/docs/8.1/interactive/sql-checkpoint.html\n\n From the command line you could do something like\n\npsql -c checkpoint\npgbench -c 10 -t 150 test\npsql -c checkpoint\npgbench -c 10 -t 150 test\npsql -c checkpoint\npgbench -c 10 -t 150 test\n\n-- \nMichael Fuhr\n",
"msg_date": "Mon, 6 Mar 2006 13:17:16 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this pgbench results?"
}
]
[
{
"msg_contents": "There seems to be many posts on this issue but I not yet found an answer to the seq scan issue.\n\nI am having an issue with a joins. I am using 8.0.3 on FC4 \n\nQuery: select * from ommemberrelation where srcobj='somevalue' and dstobj in (select objectid from omfilesysentry where name='dir15_file80');\n\nColumns srcobj, dstobj & name are all indexed.\n\nI ran test adding records to ommemberrelation and omfilesysentry up to 32K in each to simulate and measured query times. The graph is O(n²) like. i.e sequencial scan \n\nThe columns in the where clauses are indexed, and yes I did VACUUM ANALYZE FULL. I even tried backup restore of the entire db. No difference. \n\nTurning sequencial scan off results in a O(n log n) like graph, \n\nExplain analyze confirms sequencial scan. A majority (70ms) of the 91ms query is as a result of -> Seq Scan on ommemberrelation Timing is on.\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=486.19..101533.99 rows=33989 width=177) (actual time=5.493..90.682 rows=1 loops=1)\n Join Filter: (\"outer\".dstobj = \"inner\".objectid)\n -> Seq Scan on ommemberrelation (cost=0.00..2394.72 rows=33989 width=177) (actual time=0.078..70.887 rows=100 loops=1)\n Filter: (srcobj = '3197a4e6-abf1-11da-a0f9-000fb05ab829'::text)\n -> Materialize (cost=486.19..487.48 rows=129 width=16) (actual time=0.004..0.101 rows=26 loops=100)\n -> Append (cost=0.00..486.06 rows=129 width=16) (actual time=0.063..1.419 rows=26 loops=1)\n -> Index Scan using omfilesysentry_name_idx on omfilesysentry (cost=0.00..8.30 rows=2 width=16) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: (name = 'dir15_file80'::text)\n -> Index Scan using omfile_name_idx on omfile omfilesysentry (cost=0.00..393.85 rows=101 width=16) (actual time=0.033..0.291 rows=26 loops=1)\n Index Cond: (name = 'dir15_file80'::text)\n -> Seq Scan on omdirectory omfilesysentry (cost=0.00..24.77 rows=11 width=16) (actual time=0.831..0.831 rows=0 loops=1)\n Filter: (name = 'dir15_file80'::text)\n -> Index Scan using omfilesequence_name_idx on omfilesequence omfilesysentry (cost=0.00..8.30 rows=2 width=16) (actual time=0.014..0.014 rows=0 loops=1)\n Index Cond: (name = 'dir15_file80'::text)\n -> Index Scan using omclipfile_name_idx on omclipfile omfilesysentry (cost=0.00..8.30 rows=2 width=16) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (name = 'dir15_file80'::text)\n -> Index Scan using omimagefile_name_idx on omimagefile omfilesysentry (cost=0.00..8.30 rows=2 width=16) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (name = 'dir15_file80'::text)\n -> Index Scan using omcollection_name_idx on omcollection omfilesysentry (cost=0.00..8.30 rows=2 width=16) (actual time=0.008..0.008 rows=0 loops=1)\n Index Cond: (name = 'dir15_file80'::text)\n -> Index Scan using omhomedirectory_name_idx on omhomedirectory omfilesysentry (cost=0.00..8.30 rows=2 width=16) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (name = 'dir15_file80'::text)\n -> Seq Scan on omrootdirectory omfilesysentry (cost=0.00..1.05 rows=1 width=16) (actual time=0.013..0.013 rows=0 loops=1)\n Filter: (name = 'dir15_file80'::text)\n -> Index Scan using omwarehousedirectory_name_idx on omwarehousedirectory omfilesysentry (cost=0.00..8.30 rows=2 width=16) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (name = 'dir15_file80'::text)\n -> Index Scan 
using omtask_name_idx on omtask omfilesysentry (cost=0.00..8.30 rows=2 width=16) (actual time=0.009..0.009 rows=0 loops=1)\n Index Cond: (name = 'dir15_file80'::text) Total runtime: 91.019 ms\n(29 rows)\n\nSo why is the planner not using the index? Everything I have read indicates sequencial scanning should be left on and the planner should do the right thing. \n\nThis is a quote from 1 web site:\n\n\"These options are pretty much only for use in query testing; frequently one sets \"enable_seqscan = false\" in order to determine if the planner is unnecessarily discarding an index, for example. However, it would require very unusual circumstances to change any of them to false in the .conf file.\"\n\nSo how do I determine why the planner is unnecessarily discarding the index? \n\nThanks\n\n\n\n",
"msg_date": "Mon, 6 Mar 2006 13:46:47 -0500",
"msg_from": "\"Harry Hehl\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sequencial scan instead of using index "
},
{
"msg_contents": "On m�n, 2006-03-06 at 13:46 -0500, Harry Hehl wrote:\n> Query: select * from ommemberrelation where srcobj='somevalue' \n> and dstobj in (select objectid from omfilesysentry where name='dir15_file80');\n> \n> Columns srcobj, dstobj & name are all indexed.\n\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop IN Join (cost=486.19..101533.99 rows=33989 width=177) (actual time=5.493..90.682 rows=1 loops=1)\n> Join Filter: (\"outer\".dstobj = \"inner\".objectid)\n> -> Seq Scan on ommemberrelation (cost=0.00..2394.72 rows=33989 width=177) (actual time=0.078..70.887 rows=100 loops=1)\n> Filter: (srcobj = '3197a4e6-abf1-11da-a0f9-000fb05ab829'::text)\n> -> Materialize (cost=486.19..487.48 rows=129 width=16) (actual time=0.004..0.101 rows=26 loops=100)\n\nLooks like the planner is expecting 33989 rows, making \nan index scan a ppor choice, but in fact only 100 rows\nactually match your srcobj value.\n\nCould we see the explain analyze with enable_seqscan\n= false please ?\n\nPossibly you might want totry to increase the statistics\ntarget for this columns , as in:\n ALTER TABLE ommemberrelation ALTER COLUMN srcobj\n SET STATISTICS 1000;\n ANALYZE;\nand try again (with enable_seqscan=true)\n\nA target of 1000 ismost probably overkill, but\nstart with this value, and if it improves matters,\nyou can experiment with lower settings.\n\ngnari\n\n\n",
"msg_date": "Mon, 06 Mar 2006 20:51:46 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequencial scan instead of using index"
},
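Ragnar's request above — an EXPLAIN ANALYZE with sequential scans disabled — can be reproduced in a single psql session. A minimal sketch, reusing only the table and filter value quoted from the thread; the session-level SET is for diagnosis, not a production setting:

    SET enable_seqscan = off;   -- diagnostic only
    EXPLAIN ANALYZE
    SELECT * FROM ommemberrelation
    WHERE srcobj = '3197a4e6-abf1-11da-a0f9-000fb05ab829'
      AND dstobj IN (SELECT objectid FROM omfilesysentry
                     WHERE name = 'dir15_file80');
    RESET enable_seqscan;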
{
"msg_contents": "Harry Hehl wrote:\n> There seems to be many posts on this issue but I not yet found an answer to the seq scan issue.\n> \n> I am having an issue with a joins. I am using 8.0.3 on FC4 \n> \n> Query: select * from ommemberrelation where srcobj='somevalue' and dstobj in (select objectid from omfilesysentry where name='dir15_file80');\n> \n> Columns srcobj, dstobj & name are all indexed.\n> \n> \n\nThe planner is over-estimating the number of rows here (33989 vs 100):\n\n-> Seq Scan on ommemberrelation (cost=0.00..2394.72 rows=33989 \nwidth=177) (actual time=0.078..70.887 rows=100 loops=1)\n\nThe usual way to attack this is to up the sample size for ANALYZE:\n\nALTER TABLE ommemberrelation ALTER COLUMN srcobj SET STATISTICS 100;\nALTER TABLE ommemberrelation ALTER COLUMN dstobj SET STATISTICS 100;\n-- or even 1000.\nANALYZE ommemberrelation;\n\nThen try EXPLAIN ANALYZE again.\n\n\nIf you can upgrade to 8.1.(3), then the planner can consider paths that \nuse *both* the indexes on srcobj and dstobj (which would probably be the \nbusiness!).\n\nCheers\n\nMark\n",
"msg_date": "Tue, 07 Mar 2006 18:04:13 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequencial scan instead of using index"
}
]
[
{
"msg_contents": "Hi everyone,\n\nI'm experimenting with PostgreSQL, but since I'm no expert DBA, I'm \nexperiencing some performance issues.\n\nPlease take a look at the following query:\n\nSELECT\n /*groups.\"name\" AS t2_r1,\n groups.\"id\" AS t2_r3,\n groups.\"user_id\" AS t2_r0,\n groups.\"pretty_url\" AS t2_r2,\n locations.\"postal_code\" AS t0_r6,\n locations.\"pretty_url\" AS t0_r7,\n locations.\"id\" AS t0_r8,\n locations.\"colony_id\" AS t0_r0,\n locations.\"user_id\" AS t0_r1,\n locations.\"group_id\" AS t0_r2,\n locations.\"distinction\" AS t0_r3,\n locations.\"street\" AS t0_r4,\n locations.\"street_2\" AS t0_r5,\n schools.\"updated\" AS t1_r10,\n schools.\"level_id\" AS t1_r4,\n schools.\"pretty_url\" AS t1_r11,\n schools.\"user_id\" AS t1_r5,\n schools.\"id\" AS t1_r12,\n schools.\"type_id\" AS t1_r6,\n schools.\"distinction\" AS t1_r7,\n schools.\"cct\" AS t1_r8,\n schools.\"created_on\" AS t1_r9,\n schools.\"location_id\" AS t1_r0,\n schools.\"service_id\" AS t1_r1,\n schools.\"sustentation_id\" AS t1_r2,\n schools.\"dependency_id\" AS t1_r3*/\n groups.*,\n locations.*,\n schools.*\nFROM locations\nLEFT OUTER JOIN groups ON groups.id = locations.group_id\nLEFT OUTER JOIN schools ON schools.location_id = locations.id\nWHERE (colony_id = 71501)\nORDER BY groups.name, locations.distinction, schools.distinction\n\nAs you can see, I've commented out some parts. I did that as an \nexperiment, and it improved the query by 2x. I really don't understand \nhow is that possible... I also tried changing the second join to an \nINNER join, and that improves it a little bit also.\n\nAnyway, the main culprit seems to be that second join. Here's the output \nfrom EXPLAIN:\n\nSort (cost=94315.15..94318.02 rows=1149 width=852)\n Sort Key: groups.name, locations.distinction, schools.distinction\n -> Merge Left Join (cost=93091.96..94256.74 rows=1149 width=852)\n Merge Cond: (\"outer\".id = \"inner\".location_id)\n -> Sort (cost=4058.07..4060.94 rows=1148 width=646)\n Sort Key: locations.id\n -> Hash Left Join (cost=1.01..3999.72 rows=1148 width=646)\n Hash Cond: (\"outer\".group_id = \"inner\".id)\n -> Index Scan using locations_colony_id on \nlocations (cost=0.00..3992.91 rows=1148 width=452)\n Index Cond: (colony_id = 71501)\n -> Hash (cost=1.01..1.01 rows=1 width=194)\n -> Seq Scan on groups (cost=0.00..1.01 \nrows=1 width=194)\n -> Sort (cost=89033.90..89607.67 rows=229510 width=206)\n Sort Key: schools.location_id\n -> Seq Scan on schools (cost=0.00..5478.10 rows=229510 \nwidth=206)\n\nI don't completely understand what that output means, but it would seem \nthat the first join costs about 4000, but if I remove that join from the \nquery, the performance difference is negligible. 
So as I said, it seems \nthe problem is the join on the schools table.\n\nI hope it's ok for me to post the relevant tables here, so here they are \n(I removed some constraints and indexes that aren't relevant to the \nquery above):\n\nCREATE TABLE groups\n(\n user_id int4 NOT NULL,\n name varchar(50) NOT NULL,\n pretty_url varchar(50) NOT NULL,\n id serial NOT NULL,\n CONSTRAINT groups_pk PRIMARY KEY (id),\n)\n\nCREATE TABLE locations\n(\n colony_id int4 NOT NULL,\n user_id int4 NOT NULL,\n group_id int4 NOT NULL,\n distinction varchar(60) NOT NULL,\n street varchar(60) NOT NULL,\n street_2 varchar(50) NOT NULL,\n postal_code varchar(5) NOT NULL,\n pretty_url varchar(60) NOT NULL,\n id serial NOT NULL,\n CONSTRAINT locations_pk PRIMARY KEY (id),\n CONSTRAINT colony FOREIGN KEY (colony_id)\n REFERENCES colonies (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"group\" FOREIGN KEY (group_id)\n REFERENCES groups (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n)\nCREATE INDEX locations_fki_colony\n ON locations\n USING btree\n (colony_id);\nCREATE INDEX locations_fki_group\n ON locations\n USING btree\n (group_id);\n\nCREATE TABLE schools\n(\n location_id int4 NOT NULL,\n service_id int4 NOT NULL,\n sustentation_id int4 NOT NULL,\n dependency_id int4 NOT NULL,\n level_id int4 NOT NULL,\n user_id int4 NOT NULL,\n type_id int4 NOT NULL,\n distinction varchar(25) NOT NULL,\n cct varchar(20) NOT NULL,\n created_on timestamp(0) NOT NULL,\n updated timestamp(0),\n pretty_url varchar(25) NOT NULL,\n id serial NOT NULL,\n CONSTRAINT schools_pk PRIMARY KEY (id),\n CONSTRAINT \"location\" FOREIGN KEY (location_id)\n REFERENCES locations (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n)\nCREATE INDEX schools_fki_location\n ON schools\n USING btree\n (location_id);\n\nSo I'm wondering what I'm doing wrong. I migrated this database from \nMySQL, and on there it ran pretty fast.\n\nKind regards,\nIvan V.\n\n",
"msg_date": "Mon, 06 Mar 2006 18:15:55 -0600",
"msg_from": "\"i.v.r.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help understanding indexes, explain, and optimizing a query"
},
{
"msg_contents": "i.v.r. wrote:\n> Hi everyone,\n> \n> I'm experimenting with PostgreSQL, but since I'm no expert DBA, I'm \n> experiencing some performance issues.\n> \n> Please take a look at the following query:\n> \n> SELECT\n> /*groups.\"name\" AS t2_r1,\n> groups.\"id\" AS t2_r3,\n> groups.\"user_id\" AS t2_r0,\n> groups.\"pretty_url\" AS t2_r2,\n> locations.\"postal_code\" AS t0_r6,\n> locations.\"pretty_url\" AS t0_r7,\n> locations.\"id\" AS t0_r8,\n> locations.\"colony_id\" AS t0_r0,\n> locations.\"user_id\" AS t0_r1,\n> locations.\"group_id\" AS t0_r2,\n> locations.\"distinction\" AS t0_r3,\n> locations.\"street\" AS t0_r4,\n> locations.\"street_2\" AS t0_r5,\n> schools.\"updated\" AS t1_r10,\n> schools.\"level_id\" AS t1_r4,\n> schools.\"pretty_url\" AS t1_r11,\n> schools.\"user_id\" AS t1_r5,\n> schools.\"id\" AS t1_r12,\n> schools.\"type_id\" AS t1_r6,\n> schools.\"distinction\" AS t1_r7,\n> schools.\"cct\" AS t1_r8,\n> schools.\"created_on\" AS t1_r9,\n> schools.\"location_id\" AS t1_r0,\n> schools.\"service_id\" AS t1_r1,\n> schools.\"sustentation_id\" AS t1_r2,\n> schools.\"dependency_id\" AS t1_r3*/\n> groups.*,\n> locations.*,\n> schools.*\n> FROM locations\n> LEFT OUTER JOIN groups ON groups.id = locations.group_id\n> LEFT OUTER JOIN schools ON schools.location_id = locations.id\n> WHERE (colony_id = 71501)\n> ORDER BY groups.name, locations.distinction, schools.distinction\n> \n> As you can see, I've commented out some parts. I did that as an \n> experiment, and it improved the query by 2x. I really don't understand \n> how is that possible... I also tried changing the second join to an \n> INNER join, and that improves it a little bit also.\n> \n> Anyway, the main culprit seems to be that second join. Here's the output \n> from EXPLAIN:\n> \n> Sort (cost=94315.15..94318.02 rows=1149 width=852)\n> Sort Key: groups.name, locations.distinction, schools.distinction\n> -> Merge Left Join (cost=93091.96..94256.74 rows=1149 width=852)\n> Merge Cond: (\"outer\".id = \"inner\".location_id)\n> -> Sort (cost=4058.07..4060.94 rows=1148 width=646)\n> Sort Key: locations.id\n> -> Hash Left Join (cost=1.01..3999.72 rows=1148 width=646)\n> Hash Cond: (\"outer\".group_id = \"inner\".id)\n> -> Index Scan using locations_colony_id on \n> locations (cost=0.00..3992.91 rows=1148 width=452)\n> Index Cond: (colony_id = 71501)\n> -> Hash (cost=1.01..1.01 rows=1 width=194)\n> -> Seq Scan on groups (cost=0.00..1.01 \n> rows=1 width=194)\n> -> Sort (cost=89033.90..89607.67 rows=229510 width=206)\n> Sort Key: schools.location_id\n> -> Seq Scan on schools (cost=0.00..5478.10 rows=229510 \n> width=206)\n> \n> I don't completely understand what that output means, but it would seem \n> that the first join costs about 4000, but if I remove that join from the \n> query, the performance difference is negligible. 
So as I said, it seems \n> the problem is the join on the schools table.\n> \n> I hope it's ok for me to post the relevant tables here, so here they are \n> (I removed some constraints and indexes that aren't relevant to the \n> query above):\n> \n> CREATE TABLE groups\n> (\n> user_id int4 NOT NULL,\n> name varchar(50) NOT NULL,\n> pretty_url varchar(50) NOT NULL,\n> id serial NOT NULL,\n> CONSTRAINT groups_pk PRIMARY KEY (id),\n> )\n> \n> CREATE TABLE locations\n> (\n> colony_id int4 NOT NULL,\n> user_id int4 NOT NULL,\n> group_id int4 NOT NULL,\n> distinction varchar(60) NOT NULL,\n> street varchar(60) NOT NULL,\n> street_2 varchar(50) NOT NULL,\n> postal_code varchar(5) NOT NULL,\n> pretty_url varchar(60) NOT NULL,\n> id serial NOT NULL,\n> CONSTRAINT locations_pk PRIMARY KEY (id),\n> CONSTRAINT colony FOREIGN KEY (colony_id)\n> REFERENCES colonies (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"group\" FOREIGN KEY (group_id)\n> REFERENCES groups (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> )\n> CREATE INDEX locations_fki_colony\n> ON locations\n> USING btree\n> (colony_id);\n> CREATE INDEX locations_fki_group\n> ON locations\n> USING btree\n> (group_id);\n> \n> CREATE TABLE schools\n> (\n> location_id int4 NOT NULL,\n> service_id int4 NOT NULL,\n> sustentation_id int4 NOT NULL,\n> dependency_id int4 NOT NULL,\n> level_id int4 NOT NULL,\n> user_id int4 NOT NULL,\n> type_id int4 NOT NULL,\n> distinction varchar(25) NOT NULL,\n> cct varchar(20) NOT NULL,\n> created_on timestamp(0) NOT NULL,\n> updated timestamp(0),\n> pretty_url varchar(25) NOT NULL,\n> id serial NOT NULL,\n> CONSTRAINT schools_pk PRIMARY KEY (id),\n> CONSTRAINT \"location\" FOREIGN KEY (location_id)\n> REFERENCES locations (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> )\n> CREATE INDEX schools_fki_location\n> ON schools\n> USING btree\n> (location_id);\n> \n> So I'm wondering what I'm doing wrong. I migrated this database from \n> MySQL, and on there it ran pretty fast.\n\nHave you done an 'analyze' or 'vacuum analyze' over these tables?\n\nA left outer join gets *everything* from the second table:\n\n > LEFT OUTER JOIN groups ON groups.id = locations.group_id\n > LEFT OUTER JOIN schools ON schools.location_id = locations.id\n\nSo they will load everything from groups and schools. Maybe they should \nbe left join's not left outer joins?\n\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 07 Mar 2006 11:40:19 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help understanding indexes, explain, and optimizing"
},
{
"msg_contents": "Chris escribi�:\n> Have you done an 'analyze' or 'vacuum analyze' over these tables?\n>\n> A left outer join gets *everything* from the second table:\n>\n> > LEFT OUTER JOIN groups ON groups.id = locations.group_id\n> > LEFT OUTER JOIN schools ON schools.location_id = locations.id\n>\n> So they will load everything from groups and schools. Maybe they \n> should be left join's not left outer joins?\n>\n>\nYes, I did that. I tried your other suggestion and it did improve it by \nabout 200ms.\n\nI also repurposed the query by selecting first from the groups table and \njoining with the locations and schools tables, and that made all the \ndifference. Now it's down to\n32ms. Yipee!\n\nThanks!\n\nIvan V.\n\n",
"msg_date": "Mon, 06 Mar 2006 20:11:47 -0600",
"msg_from": "\"i.v.r.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help understanding indexes, explain, and optimizing"
},
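Ivan does not post the rewritten query, but a reconstruction along the lines he describes — driving the join from groups instead of locations — might look like the sketch below. This is hypothetical; only the table and column names are taken from the schema posted earlier in the thread:

    SELECT groups.*, locations.*, schools.*
    FROM groups
    JOIN locations     ON locations.group_id  = groups.id
    LEFT JOIN schools  ON schools.location_id = locations.id
    WHERE locations.colony_id = 71501
    ORDER BY groups.name, locations.distinction, schools.distinction;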
{
"msg_contents": "Actually I think LEFT OUTER JOIN is equivalent to LEFT JOIN. The\nPostgres manual says that the word OUTER is optional. Either way you\nget \"...all rows in the qualified Cartesian product (i.e., all combined\nrows that pass its join condition), plus one copy of each row in the\nleft-hand table for which there was no right-hand row that passed the\njoin condition.\"\n\nIt sounds like the original posters problem was a less than optimal join\norder, and from what I understand Postgres can't reorder left joins. \n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Chris\nSent: Monday, March 06, 2006 6:40 PM\nTo: i.v.r.\nCc: [email protected]\nSubject: Re: [PERFORM] Help understanding indexes, explain, and\noptimizing\n\ni.v.r. wrote:\n> Hi everyone,\n[Snip]\n> So I'm wondering what I'm doing wrong. I migrated this database from \n> MySQL, and on there it ran pretty fast.\n\nHave you done an 'analyze' or 'vacuum analyze' over these tables?\n\nA left outer join gets *everything* from the second table:\n\n > LEFT OUTER JOIN groups ON groups.id = locations.group_id\n > LEFT OUTER JOIN schools ON schools.location_id = locations.id\n\nSo they will load everything from groups and schools. Maybe they should \nbe left join's not left outer joins?\n\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Tue, 7 Mar 2006 09:02:58 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help understanding indexes, explain, and optimizing"
},
{
"msg_contents": "\"Dave Dutcher\" <[email protected]> writes:\n> It sounds like the original posters problem was a less than optimal join\n> order, and from what I understand Postgres can't reorder left joins. \n\nNot really relevant to the OP's immediate problem, but: that's fixed in\nCVS HEAD.\n\nhttp://archives.postgresql.org/pgsql-hackers/2005-12/msg00760.php\nhttp://archives.postgresql.org/pgsql-committers/2005-12/msg00352.php\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Mar 2006 10:34:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help understanding indexes, explain, and optimizing "
}
]
[
{
"msg_contents": "[Please copy the mailing list on replies.]\n\nOn Mon, Mar 06, 2006 at 09:38:20PM +0100, Joost Kraaijeveld wrote:\n> Michael Fuhr wrote:\n> > Have you tweaked postgresql.conf at all? If so, what non-default\n> > settings are you using? \n> \n> Yes, I have tweaked the following settings:\n> \n> shared_buffers = 40000\n> work_mem = 512000\n> maintenance_work_mem = 512000\n> max_fsm_pages = 40000\n> effective_cache_size = 131072\n\nAre you sure you need work_mem that high? How did you decide on\nthat value? Are all other settings at their defaults? No changes\nto the write ahead log (WAL) or background writer (bgwriter) settings?\nWhat version of PostgreSQL are you running? The paths in your\noriginal message suggest 8.1.x.\n\n> >>> Are your test results more consistent\n> > psql -c checkpoint\n> > pgbench -c 10 -t 150 test\n> > psql -c checkpoint\n> > pgbench -c 10 -t 150 test\n> > psql -c checkpoint\n> > pgbench -c 10 -t 150 test\n>\n> OK, that leads to a consistant hight score. I also noticed that\n> \"psql -c checkpoint\" results in I/O on the database partition but\n> not on the partition that has the logfiles (pg_xlog directory). Do\n> you know if that how it should be?\n\nA checkpoint updates the database files with the data from the\nwrite-ahead log; you're seeing those writes to the database partition.\nThe postmaster does checkpoints every checkpoint_timeout seconds\n(default 300) or every checkpoint_segment log segments (default 3);\nit also uses a background writer to trickle pages to the database\nfiles between checkpoints so the checkpoints don't have as much\nwork to do. I've been wondering if your pgbench runs are being\naffected by that background activity; the fact that you get\nconsistently good performance after forcing a checkpoint suggests\nthat that might be the case.\n\nIf you run pgbench several times without intervening checkpoints,\ndo your postmaster logs have any messages like \"checkpoints are\noccurring too frequently\"? It might be useful to increase\ncheckpoint_warning up to the value of checkpoint_timeout and then\nsee if you get any such messages during pgbench runs. If checkpoints\nare happening a lot more often than every checkpoint_timeout seconds\nthen try increasing checkpoint_segments (assuming you have the disk\nspace). After doing so, restart the database and run pgbench several\ntimes without intervening checkpoints and see if performance is\nmore consistent.\n\nNote that tuning PostgreSQL for pgbench performance might be\nirrelevant for your actual needs unless your usage patterns happen\nto resemble what pgbench does.\n\n-- \nMichael Fuhr\n",
"msg_date": "Mon, 6 Mar 2006 20:22:09 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can anyone explain this pgbench results?"
}
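To make the knobs Michael mentions concrete, the relevant postgresql.conf entries in 8.1 would look something like the sketch below; the values are only illustrative starting points, not a recommendation for this particular machine:

    # checkpoint-related settings (PostgreSQL 8.1)
    checkpoint_timeout  = 300   # seconds between forced checkpoints (the default)
    checkpoint_segments = 10    # default is 3; raise if checkpoints occur too frequently
    checkpoint_warning  = 300   # log a warning when checkpoints come closer together than this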
]
[
{
"msg_contents": "Hi Michael,\n\nMichael Fuhr wrote:\n>>> Have you tweaked postgresql.conf at all? If so, what non-default\n>>> settings are you using?\n>> \n>> Yes, I have tweaked the following settings:\n>> \n>> shared_buffers = 40000\n>> work_mem = 512000\n>> maintenance_work_mem = 512000\n>> max_fsm_pages = 40000\n>> effective_cache_size = 131072\n> \n> Are you sure you need work_mem that high? How did you decide on\n> that value? \nI have used http://www.powerpostgresql.com/Downloads/annotated_conf_80.html , expecting that the differences between 8.0 and 8.1 do not invalidate the recommendations. I have checked with (some) of my (large) queries and adjusted upward untill I had no temp files in the PGDATA/base/DB_OID/pgsql_tmp. (The warning about\n\n> Are all other settings at their defaults? \nYep.\n\n> No changes to the write ahead log (WAL) or background writer (bgwriter) settings?\nNo, because the forementioned document explicitely states that it has recomendations on these subjects.\n\n> What version of PostgreSQL are you running? The paths in your\n> original message suggest 8.1.x.\nDebian's Ecth 8.1.0-3\n\n> A checkpoint updates the database files with the data from the\n> write-ahead log; you're seeing those writes to the database partition.\n> The postmaster does checkpoints every checkpoint_timeout seconds\n> (default 300) or every checkpoint_segment log segments (default 3);\n> it also uses a background writer to trickle pages to the database\n> files between checkpoints so the checkpoints don't have as much\n> work to do. I've been wondering if your pgbench runs are being\n> affected by that background activity; the fact that you get\n> consistently good performance after forcing a checkpoint suggests\n> that that might be the case. \nOK, thanks. \n\nTo be sure if I understand it correctly:\n\n1. Every update/insert is first written to a WAL log file which is in the PGDATA/pg_xlog directory. \n2. Routinely the background writer than writes the changes to the PGDATA/base/DB_OID/ directory.\n2. Postmaster forces after 300 secs or if the log segments are full (which ever comes first?) a checkpoint so that the WAL log file are empty ( I assume that that are the changes the background writer has not written yet since the last forced checkpont?).\n\n> If you run pgbench several times without intervening checkpoints,\n> do your postmaster logs have any messages like \"checkpoints are\n> occurring too frequently\"? It might be useful to increase\n> checkpoint_warning up to the value of checkpoint_timeout and then\n> see if you get any such messages during pgbench runs. If checkpoints\n> are happening a lot more often than every checkpoint_timeout seconds\n> then try increasing checkpoint_segments (assuming you have the disk\n> space). After doing so, restart the database and run pgbench several\n> times without intervening checkpoints and see if performance is\n> more consistent.\nI will try that this day.\n\n> Note that tuning PostgreSQL for pgbench performance might be\n> irrelevant for your actual needs unless your usage patterns happen\n> to resemble what pgbench does.\n\nThe advantage of using pgbench is a repeatable short command that leads to something that is showing in actual real world usage.\n\nMy problem is with the raw performance of my disk array (3Ware 9500S-8 SATA RAID5 controller with 5 disks). I am having *very* serious performance problems if I do large updates on my databases. E.g. 
an update of 1 (boolean) column in a table (update prototype.customers set deleted = false) that has 368915 records last forever (> 3500 secs ). The only noticable disk activity during such an update is on the disk/partition that has the PGDATA/base/DB_OID/ directory (/dev/sdc, the 3Ware 9800S-8 RAID 5 array). There is *no* noticable disk activity on the disk/partition that hase the PGDATA/pg_xlog directory (/dev/sdb, on a Sil 3114 on-board SAT controller). The throughtput during the update is ~ 2 MB/sec. The thoughtput during a large file copy or running bonnie (a benchmark) is > 40 MB/sec. My primary goal is to understand the differences ( and than sue the guilty ones ;-)), and than maybe either learn to live with it or find a solution. The number of write operations/sec during the update is ~ 2000 /sec. I suspect that the RAID card cannot handle a lot of small write operations (with fsync?) in a short time without performance penalty (and yes, the write cache on the controller is enabled).\n\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n",
"msg_date": "Tue, 7 Mar 2006 11:34:18 +0100",
"msg_from": "\"Joost Kraaijeveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can anyone explain this pgbench results?"
}
]
[
{
"msg_contents": "Hi Michael,\n\nMichael Fuhr wrote:\n> If you run pgbench several times without intervening checkpoints,\n> do your postmaster logs have any messages like \"checkpoints are\n> occurring too frequently\"? It might be useful to increase\n> checkpoint_warning up to the value of checkpoint_timeout and then\n> see if you get any such messages during pgbench runs. If checkpoints\n> are happening a lot more often than every checkpoint_timeout seconds\n> then try increasing checkpoint_segments (assuming you have the disk\n> space). After doing so, restart the database and run pgbench several\n> times without intervening checkpoints and see if performance is\n> more consistent.\nI got the \"checkpoints are occurring too frequently\". Increasing the number of checkpoint_segments from the default 3 to 10 resulted in more tests without performance penalty (~ 5-6 tests). The perfomance penalty is also a little less. It takes several minutes for the background writer to catch up.\n\nThis will solve my problems at the customers site (they do not run sm many sales transaction per second), but not my own problem while converting the old database to a new databse :-(. Maybe I should invest in other hardware or re-arrange my RAID5 in a RAID10 (or 50???).\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n",
"msg_date": "Tue, 7 Mar 2006 14:08:40 +0100",
"msg_from": "\"Joost Kraaijeveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can anyone explain this pgbench results?"
}
]
[
{
"msg_contents": "Hello,\nA friend asked for help to accelerate some postgresql queries on postgresql\n8.1.2 for windows.\nHe is comparing with firebird.\nFirebird was being up to 90 times faster at some queries.\nAttached is a gziped text file containing some steps I tried on a simple\nexample query.\nCould get improvements from 270 seconds to 74 seconds.\nBut Firebird effortlessly still can perform the same query at 20 seconds.\nPlease, do you have some suggestion?\nThanks.\nAndre Felipe Machado",
"msg_date": "Tue, 7 Mar 2006 13:29:22 -0300",
"msg_from": "\"andremachado\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "andremachado wrote:\n> Hello,\n> A friend asked for help to accelerate some postgresql queries on postgresql\n> 8.1.2 for windows.\n> He is comparing with firebird.\n> Firebird was being up to 90 times faster at some queries.\n> Attached is a gziped text file containing some steps I tried on a simple\n> example query.\n> Could get improvements from 270 seconds to 74 seconds.\n> But Firebird effortlessly still can perform the same query at 20 seconds.\n> Please, do you have some suggestion?\n> Thanks.\n> \nTry increasing your work mem and shared buffers considerably.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> Andre Felipe Machado\n>\n> \n> ------------------------------------------------------------------------\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n\n",
"msg_date": "Tue, 07 Mar 2006 08:44:13 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "Andre,\n \n I noticed that enable_bitmapscan and enable_seqscan are off, is there a reason for it? Have you tried with enable_bitmapscan on?\n \n How much RAM do you have? What kind of disks are being used?\n \n Beste regards,\n \n Reimer\n 55-47-33270878\n Blumenau - SC - Brazil\n \nandremachado <[email protected]> escreveu:\n Hello,\nA friend asked for help to accelerate some postgresql queries on postgresql\n8.1.2 for windows.\nHe is comparing with firebird.\nFirebird was being up to 90 times faster at some queries.\nAttached is a gziped text file containing some steps I tried on a simple\nexample query.\nCould get improvements from 270 seconds to 74 seconds.\nBut Firebird effortlessly still can perform the same query at 20 seconds.\nPlease, do you have some suggestion?\nThanks.\nAndre Felipe Machado\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to [email protected] so that your\nmessage can get through to the mailing list cleanly\n\n\n\t\t\n---------------------------------\n Yahoo! Acesso Gr�tis \nInternet r�pida e gr�tis. Instale o discador agora!\nAndre, I noticed that enable_bitmapscan and enable_seqscan are off, is there a reason for it? Have you tried with enable_bitmapscan on? How much RAM do you have? What kind of disks are being used? Beste regards, Reimer 55-47-33270878 Blumenau - SC - Brazil andremachado <[email protected]> escreveu: Hello,A friend asked for help to accelerate some postgresql queries on postgresql8.1.2 for windows.He is comparing with firebird.Firebird was being up to 90 times faster at some queries.Attached is a gziped text file containing some steps I tried on a simpleexample query.Could get improvements from 270 seconds to 74 seconds.But Firebird effortless\n ly still\n can perform the same query at 20 seconds.Please, do you have some suggestion?Thanks.Andre Felipe Machado---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriatesubscribe-nomail command to [email protected] so that yourmessage can get through to the mailing list cleanly\n \nYahoo! Acesso Gr�tis \nInternet r�pida e gr�tis. Instale o discador agora!",
"msg_date": "Tue, 7 Mar 2006 13:57:39 -0300 (ART)",
"msg_from": "Carlos Henrique Reimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "On Tue, 2006-03-07 at 10:29, andremachado wrote:\n> Hello,\n> A friend asked for help to accelerate some postgresql queries on postgresql\n> 8.1.2 for windows.\n> He is comparing with firebird.\n> Firebird was being up to 90 times faster at some queries.\n> Attached is a gziped text file containing some steps I tried on a simple\n> example query.\n> Could get improvements from 270 seconds to 74 seconds.\n> But Firebird effortlessly still can perform the same query at 20 seconds.\n> Please, do you have some suggestion?\n\nFirst off, PostgreSQL on Windows is still kinda new, so it's quite\npossible that on some flavor of unix the disparity we're seeing wouldn't\nbe so great. You may be seeing some issue with PostgreSQL's fairly new\nwindows port instead of some basic postgresql problem.\n\nIs this running on the same basic hardware for both databases? I would\nimagine so, but just wanted to check.\n\nAs someone else mentioned, try cranking up work mem, and to a lesser\nextent, shared_buffers. \n\nAlso, as mentioned, why are bitmap scans and seq scans turned off? \nBitmap scans are quite a nice improvement, and sometimes, a sequential\nscan is faster than an index. Forcing PostgreSQL to always use an index\nit not really a good idea.\n\nLastly, I noticed that after you clusters on all your indexes, the query\nplanner switched from a merge join to a hash join, and it was slower. \nYou might wanna try turning off hash joins for a quick test to see if\nmerge joins are any faster.\n\nLastly, you might want to compare the two databases running on linux or\nBSD to see how they compare there.\n\n\n",
"msg_date": "Tue, 07 Mar 2006 11:08:53 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance"
},
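The hash-join experiment Scott suggests is a per-session switch; a minimal sketch (the statement in the middle is a placeholder for whatever query is being benchmarked):

    SET enable_hashjoin = off;                  -- diagnostic only, per session
    -- EXPLAIN ANALYZE <the query under test>;
    RESET enable_hashjoin;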
{
"msg_contents": "Scott Marlowe wrote:\n\n> Lastly, I noticed that after you clusters on all your indexes, the query\n> planner switched from a merge join to a hash join, and it was slower. \n> You might wanna try turning off hash joins for a quick test to see if\n> merge joins are any faster.\n\nAnyway please note that clustering \"all indexes\" does not really make\nsense. You can cluster only on one index. If you cluster on another,\nthen the first clustering will be lost. Better make sure to cluster on\nthe one index where it makes the most difference.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 7 Mar 2006 14:15:14 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance"
},
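For reference, the 8.1 syntax clusters one table on one index; the names below are placeholders, not objects from the test case:

    CLUSTER some_index ON some_table;   -- physically reorders some_table by some_index
    ANALYZE some_table;                 -- refresh statistics afterwards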
{
"msg_contents": "On Tue, 2006-03-07 at 11:15, Alvaro Herrera wrote:\n> Scott Marlowe wrote:\n> \n> > Lastly, I noticed that after you clusters on all your indexes, the query\n> > planner switched from a merge join to a hash join, and it was slower. \n> > You might wanna try turning off hash joins for a quick test to see if\n> > merge joins are any faster.\n> \n> Anyway please note that clustering \"all indexes\" does not really make\n> sense. You can cluster only on one index. If you cluster on another,\n> then the first clustering will be lost. Better make sure to cluster on\n> the one index where it makes the most difference.\n\nNote that I was referring to his clustering on an index for each table. \nI.e. not on every single index. but he clustered on four tables /\nindexes at once, so that was what I was referring to. Sorry for any\nconfusion there.\n\nSo, do you see any obvious, low hanging fruit here?\n",
"msg_date": "Tue, 07 Mar 2006 11:18:59 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Tue, 2006-03-07 at 11:15, Alvaro Herrera wrote:\n> > Scott Marlowe wrote:\n> > \n> > > Lastly, I noticed that after you clusters on all your indexes, the query\n> > > planner switched from a merge join to a hash join, and it was slower. \n> > > You might wanna try turning off hash joins for a quick test to see if\n> > > merge joins are any faster.\n> > \n> > Anyway please note that clustering \"all indexes\" does not really make\n> > sense. You can cluster only on one index. If you cluster on another,\n> > then the first clustering will be lost. Better make sure to cluster on\n> > the one index where it makes the most difference.\n> \n> Note that I was referring to his clustering on an index for each table. \n> I.e. not on every single index. but he clustered on four tables /\n> indexes at once, so that was what I was referring to. Sorry for any\n> confusion there.\n\nAh, sorry, I misinterpreted.\n\n> So, do you see any obvious, low hanging fruit here?\n\nSorry, I didn't look at his test case very closely :-(\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 7 Mar 2006 14:22:24 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> So, do you see any obvious, low hanging fruit here?\n\nIt would help if we were being told the whole truth about the settings\nbeing used. The first few plans are clearly suffering from the\n\"enable_seqscan = off\" error, but the last few don't seem to be. I\ndon't trust the SHOW ALL at all since it disagrees with the immediately\nfollowing retail SHOWs --- there is seemingly a whole lot of parameter\nchanging going on that we are not being told about.\n\nIt'd also be a good idea to know something about the datatypes involved,\nparticularly for the join keys.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Mar 2006 14:56:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance "
},
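The datatype question Tom raises can be answered directly from psql; a sketch, with the table name as a placeholder for the ones in the attached test case:

    \d some_table                        -- psql: shows column types and indexes
    SELECT column_name, data_type
    FROM information_schema.columns
    WHERE table_name = 'some_table';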
{
"msg_contents": "Hello,\nMany thanks for the valuable suggestions and insights.\nThe defaults enable_bitmapscan and enable_seqscan were altered by my\nfriend. He already re enabled them (maybe even while I was trying some\nof the queries).\nThe machine is a P4 3.2GHz, 1 GBram, sata hd, windows 2000. I did not\nused pg on win before to have any advice to my friend.\nThe previously attached file contains SOME relevant info from the psql\nsession, in order to not clutter file.\nWhen some server parameter was modified (at least by me) and server\nrestarted, a new sholl parameter was issued to show the new value.\nFirebird is running at the same machine.\nAs you can see by the session log, indexes were created on the columns\nused and tables was first clustered on the indexes actually used by the\nquery.\nThe subsequent cluster commands only recluster on the same indexes\npreviously clustered.\nshared_buffers was increased from 1000 to 16384 pages\neffective_cache_size was increased from 1000 to 65535 pages and at the\nfinal steps REDUCED to 8192 pages\nwork_mem was increased from 1024 first to 16384 KB and then to 65535\nKB.\nThe first 2 parameters reduced time 18%.\nwork_mem reduced time almost 66%.\nBut work_mem easily can exhaust ram with many users connected, as each\nconnection query will use this amount of memory (if I can remember).\nHow much it can grow at this 1 gbram win machine?\nSome of the docs I already read suggested that indexes should be\nentirely contained in ram. How to dimension the parameters?\nOther docs adviced that some memory parameters could actually degrade\nperformance if too big. There are peak points at the performance curve\nby adjusting mem parameters.\nI hope tomorrow execute explain with the bitmapscan and seqscan enabled.\nbitmapscans are almost always faster?\n\nThe data, as far I know, are a sample real app data (hey, if and when in\nproduction it will be even large?). They are almost true random as my\nfriend informed, and according to him, cluster should not really be of\nbenefit. It seems confirmed by the various explain analyze commands\nbefore and after clustering.\n\nAny suggestions? Do you see some obvious error on the steps at the\nprevious session log file?\nIt seems that Firebird windows can use adequately as much ram it finds\nand postgresql windows can not. How dimension ram to the indexes? Only\nby trial and error? I tried some suggested values found at some tuning\ndocs suitable to the available system ram.\n\nThanks \nAndre Felipe\n\n\n\n\n",
"msg_date": "Tue, 07 Mar 2006 22:40:15 -0300",
"msg_from": "Andre Felipe Machado <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "Andre,\n \n Could not Postgresql file cache being killed by firebird activity?\nHaven�t you tried decrease ramdom_page_cost to 3 or 2?\n\nIt would be better if only one person will make configuration changes, \notherwise it will be difficult to measure each configuration change impact.\n\nReimer\n\n\n\nAndre Felipe Machado <[email protected]> escreveu:\n Hello,\nMany thanks for the valuable suggestions and insights.\nThe defaults enable_bitmapscan and enable_seqscan were altered by my\nfriend. He already re enabled them (maybe even while I was trying some\nof the queries).\nThe machine is a P4 3.2GHz, 1 GBram, sata hd, windows 2000. I did not\nused pg on win before to have any advice to my friend.\nThe previously attached file contains SOME relevant info from the psql\nsession, in order to not clutter file.\nWhen some server parameter was modified (at least by me) and server\nrestarted, a new sholl parameter was issued to show the new value.\nFirebird is running at the same machine.\nAs you can see by the session log, indexes were created on the columns\nused and tables was first clustered on the indexes actually used by the\nquery.\nThe subsequent cluster commands only recluster on the same indexes\npreviously clustered.\nshared_buffers was increased from 1000 to 16384 pages\neffective_cache_size was increased from 1000 to 65535 pages and at the\nfinal steps REDUCED to 8192 pages\nwork_mem was increased from 1024 first to 16384 KB and then to 65535\nKB.\nThe first 2 parameters reduced time 18%.\nwork_mem reduced time almost 66%.\nBut work_mem easily can exhaust ram with many users connected, as each\nconnection query will use this amount of memory (if I can remember).\nHow much it can grow at this 1 gbram win machine?\nSome of the docs I already read suggested that indexes should be\nentirely contained in ram. How to dimension the parameters?\nOther docs adviced that some memory parameters could actually degrade\nperformance if too big. There are peak points at the performance curve\nby adjusting mem parameters.\nI hope tomorrow execute explain with the bitmapscan and seqscan enabled.\nbitmapscans are almost always faster?\n\nThe data, as far I know, are a sample real app data (hey, if and when in\nproduction it will be even large?). They are almost true random as my\nfriend informed, and according to him, cluster should not really be of\nbenefit. It seems confirmed by the various explain analyze commands\nbefore and after clustering.\n\nAny suggestions? Do you see some obvious error on the steps at the\nprevious session log file?\nIt seems that Firebird windows can use adequately as much ram it finds\nand postgresql windows can not. How dimension ram to the indexes? Only\nby trial and error? I tried some suggested values found at some tuning\ndocs suitable to the available system ram.\n\nThanks \nAndre Felipe\n\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/docs/faq\n\n\n\t\t\n---------------------------------\n Yahoo! Acesso Gr�tis \nInternet r�pida e gr�tis. 
Instale o discador agora!\nAndre, Could not Postgresql file cache being killed by firebird activity?Haven�t you tried decrease ramdom_page_cost to 3 or 2?It would be better if only one person will make configuration changes, otherwise it will be difficult to measure each configuration change impact.ReimerAndre Felipe Machado <[email protected]> escreveu: Hello,Many thanks for the valuable suggestions and insights.The defaults enable_bitmapscan and enable_seqscan were altered by myfriend. He already re enabled them (maybe even while I was trying someof the queries).The machine is a P4 3.2GHz, 1 GBram, sata hd, windows 2000. I did notused pg on win before to have any advice to my friend.The previously attached file contains SOME relevant info from the psql\n session,\n in order to not clutter file.When some server parameter was modified (at least by me) and serverrestarted, a new sholl parameter was issued to show the new value.Firebird is running at the same machine.As you can see by the session log, indexes were created on the columnsused and tables was first clustered on the indexes actually used by thequery.The subsequent cluster commands only recluster on the same indexespreviously clustered.shared_buffers was increased from 1000 to 16384 pageseffective_cache_size was increased from 1000 to 65535 pages and at thefinal steps REDUCED to 8192 pageswork_mem was increased from 1024 first to 16384 KB and then to 65535KB.The first 2 parameters reduced time 18%.work_mem reduced time almost 66%.But work_mem easily can exhaust ram with many users connected, as eachconnection query will use this amount of memory (if I can remember).How much it can grow at this 1 gbram win\n machine?Some of the docs I already read suggested that indexes should beentirely contained in ram. How to dimension the parameters?Other docs adviced that some memory parameters could actually degradeperformance if too big. There are peak points at the performance curveby adjusting mem parameters.I hope tomorrow execute explain with the bitmapscan and seqscan enabled.bitmapscans are almost always faster?The data, as far I know, are a sample real app data (hey, if and when inproduction it will be even large?). They are almost true random as myfriend informed, and according to him, cluster should not really be ofbenefit. It seems confirmed by the various explain analyze commandsbefore and after clustering.Any suggestions? Do you see some obvious error on the steps at theprevious session log file?It seems that Firebird windows can use adequately as much ram it findsand postgresql windows can not. How dimens\n ion ram\n to the indexes? Onlyby trial and error? I tried some suggested values found at some tuningdocs suitable to the available system ram.Thanks Andre Felipe---------------------------(end of broadcast)---------------------------TIP 3: Have you checked our extensive FAQ?http://www.postgresql.org/docs/faq\n \nYahoo! Acesso Gr�tis \nInternet r�pida e gr�tis. Instale o discador agora!",
"msg_date": "Thu, 9 Mar 2006 07:41:31 -0300 (ART)",
"msg_from": "Carlos Henrique Reimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "\n\n> I hope tomorrow execute explain with the bitmapscan and seqscan enabled.\n> bitmapscans are almost always faster?\n\n\tLike all the rest, they're just a tool, which works great when used in \nits intended purpose :\n\n\t- Fetching just a few percent of the rows from a table is better served \nby an index scan\n\t- Fetching a lot of rows (>30-50%) from a table is better served by a seq \nscan\n\t- Bitmap scan comes in between and it's a very welcome addition.\n\n\tAlso Bitmap scan will save your life if you have complex searches, like \nif you run a dating site and have an index on blondes and an index on boob \nsize, because it can use several indexes in complex AND/OR queries.\n\n\tCommon wisdom says simpler databases can be faster than postgres on \nsimple queries.\n\n\tReality check with pg 8.1 driven by PHP :\n\n- SELECT 1\n\tmysql 5\t~ 42 us\n\tpostgres\t~ 70 us\n\n- SELECT * FROM users WHERE id=1\n\tmysql 5\t~ 180 us\n\tpostgres\t~ 160 us\n\n\tOf course people doing stupid things, like using the database to keep a \nhit counter on their website which is updated on every hit, will say that \npostgres is slow.\n",
"msg_date": "Thu, 09 Mar 2006 13:05:07 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
},
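A small sketch of the situation PFC describes, where 8.1 can AND the bitmaps of two single-column indexes before touching the heap; the table and indexes here are hypothetical, purely to illustrate the point:

    CREATE INDEX users_city_idx ON users (city);
    CREATE INDEX users_age_idx  ON users (age);
    EXPLAIN ANALYZE
    SELECT * FROM users WHERE city = 'Nijmegen' AND age BETWEEN 20 AND 30;
    -- a combined plan shows a BitmapAnd over two Bitmap Index Scans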
{
"msg_contents": "Andre Felipe Machado wrote:\n\n>It seems that Firebird windows can use adequately as much ram it finds\n>and postgresql windows can not.\n> \n>\nPostgreSQL relies on the OS cache to utilize RAM. Make sure that most of \nthe RAM is 'available' so Windows can do its thing.\n\neffective_cache_size should be set correspondingly high - at least 65535.\n\nshared_buffers should be as low as you can get away with (allowing for \nmultiple users). 16384 is 12.5% of your RAM and far too high.\n\nAFAIK, PostgreSQL still doesn't differentiate between index blocks and \ndata blocks.\n\n>work_mem reduced time almost 66%.\n>But work_mem easily can exhaust ram with many users connected, as each\n>connection query will use this amount of memory (if I can remember).\n>How much it can grow at this 1 gbram win machine?\n> \n>\n\nwork_mem has to be just big enough to allow hash joins to operate \nefficiently. This varies from query to query and can be set in your code \naccordingly. However, the 1024 default is just too low for most \napplications and you'll probably find even 4096 is a huge improvement. \nYou need to find the minimum that delivers acceptable performance in \nmost queries and boost it for selected queries as required.\n\nBTW, which version of Firebird is this?\n",
"msg_date": "Fri, 10 Mar 2006 09:51:36 +1000",
"msg_from": "David Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
},
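Setting work_mem per session, as David suggests, avoids a dangerously high global value; a sketch (65536 is simply the KB figure tried earlier in the thread, and 8.1 takes the setting as an integer number of KB):

    -- postgresql.conf: keep a modest global default, e.g. work_mem = 4096
    SET work_mem = 65536;    -- raise it only for the session running the heavy sort/hash
    -- run the expensive query here
    RESET work_mem;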
{
"msg_contents": "Hello,\nI got good results on tuning postgresql performance for my friend.\nOne of the queries took almost 10 minutes.\n\nNow it completes on 26 miliseconds! (at the second run)\n\nA combination of query otimization, indexes choosing (with some droping\nand clustering), server parameters reconfigurations.\nFirebird still execute it on almost 2 minutes, much slower.\n\n\nFirebird is much slower than Postgresql at queries without joins.\nPostgresql is lightning faster than Firebird when manually tunned and\nwithout using joins and aggregates functions.\n\n\nThe example query and its explain analyze results are attached, with the\n\"show all\" output of each config iteration, and indexes created.\n(UPDATE: i am sending msg from home and does not have the correct log\nfile here. Will send the file at monday....)\n\n\nBUT\nthere are some issues still unknown.\nThe example query executes consistently at 56 seconds, and even at 39\nseconds.\nFirebird executes the same query at 54 seconds the first time and at 20\nseconds at next times.\nToday I went to the machine (was previously executing pg commands\nremotely) to observe the windows behaviour.\n\nPostgresql uses around 30% cpu and hard disk heavily (not so as vacuum)\nat all executions.\nFirebird uses around 40% cpu and hard disk heavily at the first\nexecution.\nThe second execution uses around 60% cpu and **NO** disk activity.\n\nThe previously cited query running at 26 miliseconds down from 10\nminutes, can achieve this performance at the second run, with **NO**\ndisk activity.\nAt the first run it uses 1,7 seconds, down from 10 minutes.\n\nThe hard disk is clearly a bottleneck.\n1,7 seconds against 26 miliseconds.\n\n\nSo,\nHow \"convince\" postgresql to use windows disk cache or to read all\nindexes to ram?\nIt seems that effective_cache_size does not tell postgresql to actually\nuse windows disk cache.\nWhat parameter must be configured?\nDo you have some suggestions?\nRegards.\nAndre Felipe Machado\n\nwww.techforce.com.br\n\n\n",
"msg_date": "Fri, 10 Mar 2006 22:39:57 -0300",
"msg_from": "Andre Felipe Machado <[email protected]>",
"msg_from_op": false,
"msg_subject": "firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "Andre Felipe Machado wrote:\n\n>Postgresql uses around 30% cpu and hard disk heavily (not so as vacuum)\n>at all executions.\n>Firebird uses around 40% cpu and hard disk heavily at the first\n>execution.\n>The second execution uses around 60% cpu and **NO** disk activity.\n>\n>The previously cited query running at 26 miliseconds down from 10\n>minutes, can achieve this performance at the second run, with **NO**\n>disk activity.\n>At the first run it uses 1,7 seconds, down from 10 minutes.\n>\n>The hard disk is clearly a bottleneck.\n>1,7 seconds against 26 miliseconds.\n>\n>\n>So,\n>How \"convince\" postgresql to use windows disk cache or to read all\n>indexes to ram?\n>It seems that effective_cache_size does not tell postgresql to actually\n>use windows disk cache.\n>What parameter must be configured?\n>Do you have some suggestions?\n> \n>\nAssuming these are selects and that you have already vacuumed, etc.\n\nLook at memory useage. It seems likely that you have a difference in \ncaching behavior. PostgreSQL has its own cache, and failing that will \nuse the OS disk cache. So there may be a number of possible issues \ninvolved including whether the data is staying in the OS cache, how much \nmemory is being used for caching, etc. It is also likely that the \nWindows version of PostgreSQL may have some issues in these areas that \nthe UNIX/Linux versions may not simply because it is more immature.\n\nYou might even try a vacuum full to retrieve space. This may mean \nsmaller tables, more likely to remain in disk cache, etc. But that \nwould not provide any indication of scalability.\n\nBest Wishes,\nChris Travers\nMetatron Technology Consulting",
"msg_date": "Fri, 10 Mar 2006 22:14:15 -0800",
"msg_from": "Chris Travers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "On Fri, Mar 10, 2006 at 10:39:57PM -0300, Andre Felipe Machado wrote:\n> It seems that effective_cache_size does not tell postgresql to actually\n> use windows disk cache.\n\nNo, it just tells PostgreSQL how much cache memory it should expect to\nhave.\n\n> What parameter must be configured?\n> Do you have some suggestions?\n\nWell, you could try increasing shared_buffers, but the real question is\nwhy Windows isn't caching the data. Are you sure that the data you're\nreading is small enough to fit entirely in memory? Remember that\nFirebird has a completely different on-disk storage layout than\nPostgreSQL, so just because the table fits in memory there doesn't mean\nit will do so on PostgreSQL.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 14 Mar 2006 14:26:54 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
}
] |
[
{
"msg_contents": "Good afternoon,\n\nRelatively new to PostgreSQL and have been assigned the task of capturing\ncache I/O % hits. I figured out (thanks to other posts) how to turn on the\ncapture and what stats to (kind of) capture. I did find a view in the\narchives as shown below but it does not execute, error follows. I'm using\n8.0.1 so that shouldn't be the issue. Any help will be appreciated.\n\nAlso, I also found pg_reset_stats.tar.gz in the archives with a lot of talk\nregarding its addition as a patch, did it ever make it in? If not, can I\nget a copy of it somewhere? The tar.gz gets corrupted when I download it.\n\nThank you,\nTim\n\nCREATE VIEW cache_hits AS SELECT relname, ROUND(CASE WHEN heap_blks_hit = 0\nTHEN 0 ELSE ((heap_blks_hit::float /(heap_blks_read::float +\nheap_blks_hit::float)) * 100) END ,2) as heap, ROUND(CASE WHEN idx_blks_hit\n= 0 THEN 0 ELSE ((idx_blks_hit::float /(idx_blks_read::float +\nidx_blks_hit::float)) * 100) END,2) as index,ROUND(CASE WHEN toast_blks_hit\n= 0 THEN 0 ELSE ((toast_blks_hit::float /(toast_blks_read::float +\ntoast_blks_hit::float)) * 100) END,2) as toast FROM pg_statio_user_tables\nWHERE heap_blks_read <> 0 or idx_blks_read <> 0 OR toast_blks_read <> 0\n\nunion select 'ALL TABLES', ROUND(CASE WHEN sum(heap_blks_hit) = 0 THEN 0\nELSE ((sum(heap_blks_hit::float) /(sum(heap_blks_read::float) +\nsum(heap_blks_hit::float))) * 100) END ,2) as heap, ROUND(CASE WHEN\nsum(idx_blks_hit) = 0 THEN 0 ELSE ((sum(idx_blks_hit::float)\n/(sum(idx_blks_read::float) + sum(idx_blks_hit::float))) * 100) END,2) as\nindex,ROUND(CASE WHEN sum(toast_blks_hit) = 0 THEN 0 ELSE\n((sum(toast_blks_hit::float) /(sum(toast_blks_read::float) +\nsum(toast_blks_hit::float))) * 100) END,2) as toast FROM\npg_statio_user_tables HAVING sum(heap_blks_read) <> 0 or sum(idx_blks_read)\n<> 0 OR sum(toast_blks_read) <> 0 ;\n\nERROR: function round(double precision, integer) does not exist\nHINT: No function matches the given name and argument types. You may need\nto add explicit type casts.\n\n\n\n\n\npg_reset_stats + cache I/O %\n\n\nGood afternoon,\nRelatively new to PostgreSQL and have been assigned the task of capturing cache I/O % hits. I figured out (thanks to other posts) how to turn on the capture and what stats to (kind of) capture. I did find a view in the archives as shown below but it does not execute, error follows. I'm using 8.0.1 so that shouldn't be the issue. Any help will be appreciated.\nAlso, I also found pg_reset_stats.tar.gz in the archives with a lot of talk regarding its addition as a patch, did it ever make it in? If not, can I get a copy of it somewhere? 
The tar.gz gets corrupted when I download it.\nThank you,\nTim\nCREATE VIEW cache_hits AS SELECT relname, ROUND(CASE WHEN heap_blks_hit = 0\nTHEN 0 ELSE ((heap_blks_hit::float /(heap_blks_read::float +\nheap_blks_hit::float)) * 100) END ,2) as heap, ROUND(CASE WHEN idx_blks_hit\n= 0 THEN 0 ELSE ((idx_blks_hit::float /(idx_blks_read::float +\nidx_blks_hit::float)) * 100) END,2) as index,ROUND(CASE WHEN toast_blks_hit\n= 0 THEN 0 ELSE ((toast_blks_hit::float /(toast_blks_read::float +\ntoast_blks_hit::float)) * 100) END,2) as toast FROM pg_statio_user_tables\nWHERE heap_blks_read <> 0 or idx_blks_read <> 0 OR toast_blks_read <> 0\nunion select 'ALL TABLES', ROUND(CASE WHEN sum(heap_blks_hit) = 0 THEN 0\nELSE ((sum(heap_blks_hit::float) /(sum(heap_blks_read::float) +\nsum(heap_blks_hit::float))) * 100) END ,2) as heap, ROUND(CASE WHEN\nsum(idx_blks_hit) = 0 THEN 0 ELSE ((sum(idx_blks_hit::float)\n/(sum(idx_blks_read::float) + sum(idx_blks_hit::float))) * 100) END,2) as\nindex,ROUND(CASE WHEN sum(toast_blks_hit) = 0 THEN 0 ELSE\n((sum(toast_blks_hit::float) /(sum(toast_blks_read::float) +\nsum(toast_blks_hit::float))) * 100) END,2) as toast FROM\npg_statio_user_tables HAVING sum(heap_blks_read) <> 0 or sum(idx_blks_read)\n<> 0 OR sum(toast_blks_read) <> 0 ;\nERROR: function round(double precision, integer) does not exist\nHINT: No function matches the given name and argument types. You may need to add explicit type casts.",
"msg_date": "Tue, 7 Mar 2006 14:07:24 -0500 ",
"msg_from": "\"mcelroy, tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_reset_stats + cache I/O %"
},
{
"msg_contents": "\"mcelroy, tim\" <[email protected]> writes:\n> ERROR: function round(double precision, integer) does not exist\n\nTry coercing to numeric instead of float. Also, it'd be a good idea to\nput that coercion outside the sum()'s instead of inside --- summing\nbigints is probably noticeably faster than summing numerics.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 07 Mar 2006 14:36:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_reset_stats + cache I/O % "
}
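A sketch of the view above rewritten along the lines Tom suggests: numeric casts instead of float, with the cast applied outside the sum()s in the aggregate branch. It is untested and simply assumes the same pg_statio_user_tables columns quoted earlier:

    CREATE VIEW cache_hits AS
    SELECT relname,
           round(CASE WHEN heap_blks_hit = 0 THEN 0
                      ELSE heap_blks_hit::numeric * 100
                           / (heap_blks_read + heap_blks_hit) END, 2) AS heap,
           round(CASE WHEN idx_blks_hit = 0 THEN 0
                      ELSE idx_blks_hit::numeric * 100
                           / (idx_blks_read + idx_blks_hit) END, 2) AS "index",
           round(CASE WHEN toast_blks_hit = 0 THEN 0
                      ELSE toast_blks_hit::numeric * 100
                           / (toast_blks_read + toast_blks_hit) END, 2) AS toast
      FROM pg_statio_user_tables
     WHERE heap_blks_read <> 0 OR idx_blks_read <> 0 OR toast_blks_read <> 0
    UNION ALL
    SELECT 'ALL TABLES',
           round(CASE WHEN sum(heap_blks_hit) = 0 THEN 0
                      ELSE sum(heap_blks_hit)::numeric * 100
                           / (sum(heap_blks_read) + sum(heap_blks_hit)) END, 2),
           round(CASE WHEN sum(idx_blks_hit) = 0 THEN 0
                      ELSE sum(idx_blks_hit)::numeric * 100
                           / (sum(idx_blks_read) + sum(idx_blks_hit)) END, 2),
           round(CASE WHEN sum(toast_blks_hit) = 0 THEN 0
                      ELSE sum(toast_blks_hit)::numeric * 100
                           / (sum(toast_blks_read) + sum(toast_blks_hit)) END, 2)
      FROM pg_statio_user_tables
    HAVING sum(heap_blks_read) <> 0 OR sum(idx_blks_read) <> 0
        OR sum(toast_blks_read) <> 0;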
] |
[
{
"msg_contents": "\nJim C. Nasby wrote:\n \n> Speaking of 'disks', what's your exact layout? Do you have a 5 drive\n> raid5 for the OS and the database, 1 drive for swap and 1 drive for\n> pg_xlog?\n\nOn a Sil SATA 3114 controller:\n/dev/sda OS + Swap\n/dev/sdb /var with pg_xlog\n\nOn the 3Ware 9500S-8, 5 disk array:\n/dev/sdc with the database (and very safe, my MP3 collection ;-))\n\nAs I wrote in one of my posts to Michael, I suspect that the card is not handling the amount of write operations as well as I expected. I wonder if anyone else sees the same characteristics with this kind of card.\n\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n",
"msg_date": "Tue, 7 Mar 2006 20:49:30 +0100",
"msg_from": "\"Joost Kraaijeveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can anyone explain this pgbench results?"
},
{
"msg_contents": "On Tue, Mar 07, 2006 at 08:49:30PM +0100, Joost Kraaijeveld wrote:\n> \n> Jim C. Nasby wrote:\n> \n> > Speaking of 'disks', what's your exact layout? Do you have a 5 drive\n> > raid5 for the OS and the database, 1 drive for swap and 1 drive for\n> > pg_xlog?\n> \n> On a Sil SATA 3114 controller:\n> /dev/sda OS + Swap\n> /dev/sdb /var with pg_xlog\n> \n> On the 3Ware 9500S-8, 5 disk array:\n> /dev/sdc with the database (and very safe, my MP3 collection ;-))\n> \n> As I wrote in one of my posts to Michael, I suspect that the card is not handling the amount of write operations as well as I expected. I wonder if anyone else sees the same characteristics with this kind of card.\n\nWell, the problem is that you're using RAID5, which has a huge write\noverhead. You're unlikely to get good performance with it.\n\nAlso, it sounds like sda and sdb are not mirrored. If that's the case,\nyou have no protection from a drive failure taking out your entire\ndatabase, because you'd lose pg_xlog.\n\nIf you want better performance your best bets are to either setup RAID10\nor if you don't care about the data, just go to RAID0.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 7 Mar 2006 13:59:30 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this pgbench results?"
}
] |
[
{
"msg_contents": "Jim C. Nasby wrote:\n> Well, the problem is that you're using RAID5, which has a huge write\n> overhead. You're unlikely to get good performance with it.\nApparently. But I had no idea that the performance hit would be that big. \n\nRunning bonnie or copying a large file with dd show that the card can do 30-50 MB/sec. Running a large update on my postgresql database however, show a throughtput of ~ 2MB/sec, doing between ~ 2500 - 2300 writes/second (avarage). with an utilisation of almost always 100%, and large await times ( almost always > 700), large io-wait percentages (>50%), all measured with iostat.\n \n> Also, it sounds like sda and sdb are not mirrored. If that's the case,\n> you have no protection from a drive failure taking out your entire\n> database, because you'd lose pg_xlog.\n> \n> If you want better performance your best bets are to either\n> setup RAID10 or if you don't care about the data, just go to RAID0.\nBecause it is just my development machine I think I will opt for the last option. More diskspace left.\n\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n",
"msg_date": "Tue, 7 Mar 2006 21:15:37 +0100",
"msg_from": "\"Joost Kraaijeveld\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Can anyone explain this pgbench results?"
},
{
"msg_contents": "On Tue, Mar 07, 2006 at 09:15:37PM +0100, Joost Kraaijeveld wrote:\n> Jim C. Nasby wrote:\n> > Well, the problem is that you're using RAID5, which has a huge write\n> > overhead. You're unlikely to get good performance with it.\n> Apparently. But I had no idea that the performance hit would be that big. \n> \n> Running bonnie or copying a large file with dd show that the card can do 30-50 MB/sec. Running a large update on my postgresql database however, show a throughtput of ~ 2MB/sec, doing between ~ 2500 - 2300 writes/second (avarage). with an utilisation of almost always 100%, and large await times ( almost always > 700), large io-wait percentages (>50%), all measured with iostat.\n\nWhile there are some issues with PostgreSQL not getting as close to the\ntheoretical maximum of a dd bs=8k (you did match the block size to\nPostgreSQL's page size, right? :) ), a bigger issue in this case is that\nbetter cards are able to remove much/all of the RAID5 write penalty in\nthe case where you're doing a large sequential write, because it will\njust blow entire stripes down to disk. This is very different from doing\na more random IO. And it's also very possible that if you use a block\nsize that's smaller than the stripe size that the controller won't be\nable to pick up on that.\n\nIn any case, RAID0 will absolutely be the fastest performance you can\nget.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 7 Mar 2006 14:21:02 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can anyone explain this pgbench results?"
}
] |
[
{
"msg_contents": "Thanks Tom, sorry I neglected to copy the list on my previous email.....\n\nDoes this query make sense and is it valid for an accurate cache % hit ratio\nfor the entire DB? I would assume I could use the same logic with other\nviews such as pg_stat_user_tables to get a per table ratio?\n\nSELECT 100 - round((blks_hit::numeric / (blks_hit::numeric +\nblks_read::numeric)) * 100,2)\nAS \"Cache % Hit\"\nFROM pg_stat_database\nWHERE datname = 'Fix1';\n\n<RETURNS>\n\nCache % Hit\n--------------------\n 98.06\n(1 row)\n\nThank you,\nTim\n\n -----Original Message-----\nFrom: \tTom Lane [mailto:[email protected]] \nSent:\tTuesday, March 07, 2006 2:37 PM\nTo:\tmcelroy, tim\nCc:\t'[email protected]'\nSubject:\tRe: [PERFORM] pg_reset_stats + cache I/O % \n\n\"mcelroy, tim\" <[email protected]> writes:\n> ERROR: function round(double precision, integer) does not exist\n\nTry coercing to numeric instead of float. Also, it'd be a good idea to\nput that coercion outside the sum()'s instead of inside --- summing\nbigints is probably noticeably faster than summing numerics.\n\n\t\t\tregards, tom lane\n\n\n\n\n\nRE: [PERFORM] pg_reset_stats + cache I/O % \n\n\nThanks Tom, sorry I neglected to copy the list on my previous email.....\n\nDoes this query make sense and is it valid for an accurate cache % hit ratio for the entire DB? I would assume I could use the same logic with other views such as pg_stat_user_tables to get a per table ratio?\nSELECT 100 - round((blks_hit::numeric / (blks_hit::numeric + blks_read::numeric)) * 100,2)\nAS \"Cache % Hit\"\nFROM pg_stat_database\nWHERE datname = 'Fix1';\n\n<RETURNS>\n\nCache % Hit\n--------------------\n 98.06\n(1 row)\n\nThank you,\nTim\n\n -----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, March 07, 2006 2:37 PM\nTo: mcelroy, tim\nCc: '[email protected]'\nSubject: Re: [PERFORM] pg_reset_stats + cache I/O % \n\n\"mcelroy, tim\" <[email protected]> writes:\n> ERROR: function round(double precision, integer) does not exist\n\nTry coercing to numeric instead of float. Also, it'd be a good idea to\nput that coercion outside the sum()'s instead of inside --- summing\nbigints is probably noticeably faster than summing numerics.\n\n regards, tom lane",
"msg_date": "Wed, 8 Mar 2006 08:59:51 -0500 ",
"msg_from": "\"mcelroy, tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_reset_stats + cache I/O % "
},
{
"msg_contents": "Out of curiosity, why do you want this info? More important, do the\nfolks who are looking at this understand that a key part of PostgreSQL's\ntuning strategy is to let the OS handle the bulk of the caching?\n\nOn Wed, Mar 08, 2006 at 08:59:51AM -0500, mcelroy, tim wrote:\n> Thanks Tom, sorry I neglected to copy the list on my previous email.....\n> \n> Does this query make sense and is it valid for an accurate cache % hit ratio\n> for the entire DB? I would assume I could use the same logic with other\n> views such as pg_stat_user_tables to get a per table ratio?\n> \n> SELECT 100 - round((blks_hit::numeric / (blks_hit::numeric +\n> blks_read::numeric)) * 100,2)\n> AS \"Cache % Hit\"\n> FROM pg_stat_database\n> WHERE datname = 'Fix1';\n> \n> <RETURNS>\n> \n> Cache % Hit\n> --------------------\n> 98.06\n> (1 row)\n> \n> Thank you,\n> Tim\n> \n> -----Original Message-----\n> From: \tTom Lane [mailto:[email protected]] \n> Sent:\tTuesday, March 07, 2006 2:37 PM\n> To:\tmcelroy, tim\n> Cc:\t'[email protected]'\n> Subject:\tRe: [PERFORM] pg_reset_stats + cache I/O % \n> \n> \"mcelroy, tim\" <[email protected]> writes:\n> > ERROR: function round(double precision, integer) does not exist\n> \n> Try coercing to numeric instead of float. Also, it'd be a good idea to\n> put that coercion outside the sum()'s instead of inside --- summing\n> bigints is probably noticeably faster than summing numerics.\n> \n> \t\t\tregards, tom lane\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 8 Mar 2006 12:27:40 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_reset_stats + cache I/O %"
}
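The yea-or-nay asked for above never really gets answered, but as written the expression looks inverted: blks_hit / (blks_hit + blks_read) is already the hit fraction, so subtracting it from 100 reports the buffer-cache miss percentage under the label "Cache % Hit". A sketch of the version that returns the hit percentage itself (same pg_stat_database columns, the database name is just the one used above, and this only measures PostgreSQL's own shared buffers, not the OS cache Jim mentions):

    SELECT datname,
           round(blks_hit::numeric * 100
                 / nullif(blks_hit + blks_read, 0), 2) AS "Cache % Hit"
      FROM pg_stat_database
     WHERE datname = 'Fix1';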
] |
[
{
"msg_contents": "Hi,\n\nIm having a dude with a new inplementation in a web site.\nThe ojective is create a search as fast as possible. I have thought two \npossibilities to do that:\n\nI have several items. Those items has 1 or more of capacity. Each \ncapacity, has several dates (From 1 january to 10 of april, for \nexample). The dates covers 366 days, the current year, and they are \nindeterminated ranges. Per each date, it has price per day, per week, \nper15days and per month.\n\nI have designed two possibilities:\n\nFirst: \nIdItem StartDate EndDate Capacity PricePerDay PricePerWeek* \n PricePer15days* PricePerMonth*\n 1 1-1-2005 10-1-2005 2 100 \n 90 85 80\n 1 11-1-2005 20-1-2005 2 105 \n 94 83 82\n 1 21-1-2005 5-2-2005 4 405 \n 394 283 182\n 2 ...\nRight now arround 30.000 rows, in one year is spected to have 60.000 rows\n\n* In order to compare right, all prices will be translated to days. \nExample, PricePerWeek will have the Week Price / 7 and go on\n\nSecond\nIdItem Capacity Days \n Week 15Days Month Year\n 1 2 [Array of 365 values, one per day of \nyear] [ .Array. ] [ .Array. ] [ .Array. ] [ .Array. ]\n ^__ Each item of array its a price\n\nRight now arround 2.500 rows. in one year is spected to have 5.000 rows\n\nI have to compare prices or prices and dates or prices and dates and \ncapacity or capacity and prices\n\nI have no experience working with arrays on a table. Is it fast?\nWitch one do u think will have better performance?\nAny good idea?\n\nI hope this is enouth information.\nThanks in advance,\nRuben Rubio Rey\n",
"msg_date": "Wed, 08 Mar 2006 15:28:36 +0100",
"msg_from": "Ruben Rubio Rey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is good idea an array of 365 elements in a cell of a table, in order\n\tto perform searchs?"
},
{
"msg_contents": "If you need to compare stuff on a day-by-day basis, I think you'll be\nmuch better off just expanding stuff into a table of:\n\nitem_id int NOT NULL\n, day date NOT NULL\n, capacitiy ...\n, price_per_day ...\n, price_per_week ...\n, PRIMARY KEY( item_id, day )\n\n(Note that camel case and databases don't mix well...)\n\nSure, you're de-normalizing here, but the key is that you're putting the\ndata into a format where you can easily do things like:\n\nSELECT sum(capacity) FROM ... WHERE day = '2006-12-18';\n\nTrying to do that with arrays would be noticably more complex. And if\nyou wanted to do a whole month or something? Yeck...\n\nBTW, another option is to roll price_per_15_days and price_per_month\ninto a different table, since you'd only need 24 rows per item. Might be\nworth the trade-off in complexity depending on the specifics of the\napplication.\n\nOn Wed, Mar 08, 2006 at 03:28:36PM +0100, Ruben Rubio Rey wrote:\n> Hi,\n> \n> Im having a dude with a new inplementation in a web site.\n> The ojective is create a search as fast as possible. I have thought two \n> possibilities to do that:\n> \n> I have several items. Those items has 1 or more of capacity. Each \n> capacity, has several dates (From 1 january to 10 of april, for \n> example). The dates covers 366 days, the current year, and they are \n> indeterminated ranges. Per each date, it has price per day, per week, \n> per15days and per month.\n> \n> I have designed two possibilities:\n> \n> First: \n> IdItem StartDate EndDate Capacity PricePerDay PricePerWeek* \n> PricePer15days* PricePerMonth*\n> 1 1-1-2005 10-1-2005 2 100 \n> 90 85 80\n> 1 11-1-2005 20-1-2005 2 105 \n> 94 83 82\n> 1 21-1-2005 5-2-2005 4 405 \n> 394 283 182\n> 2 ...\n> Right now arround 30.000 rows, in one year is spected to have 60.000 rows\n> \n> * In order to compare right, all prices will be translated to days. \n> Example, PricePerWeek will have the Week Price / 7 and go on\n> \n> Second\n> IdItem Capacity Days \n> Week 15Days Month Year\n> 1 2 [Array of 365 values, one per day of \n> year] [ .Array. ] [ .Array. ] [ .Array. ] [ .Array. ]\n> ^__ Each item of array its a price\n> \n> Right now arround 2.500 rows. in one year is spected to have 5.000 rows\n> \n> I have to compare prices or prices and dates or prices and dates and \n> capacity or capacity and prices\n> \n> I have no experience working with arrays on a table. Is it fast?\n> Witch one do u think will have better performance?\n> Any good idea?\n> \n> I hope this is enouth information.\n> Thanks in advance,\n> Ruben Rubio Rey\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 8 Mar 2006 12:53:10 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is good idea an array of 365 elements in a cell of a table,\n\tin order to perform searchs?"
}
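A slightly more filled-in sketch of the one-row-per-item-per-day layout described above; the column types, precisions and the sample search are illustrative assumptions rather than requirements:

    CREATE TABLE item_day (
        item_id         int           NOT NULL,
        day             date          NOT NULL,
        capacity        int           NOT NULL,
        price_per_day   numeric(10,2) NOT NULL,
        price_per_week  numeric(10,2),   -- already divided by 7, as in the first design
        PRIMARY KEY (item_id, day)
    );

    -- searches on dates, capacity and price become plain, index-friendly filters
    SELECT item_id, min(price_per_day) AS best_daily_price
      FROM item_day
     WHERE day BETWEEN '2006-07-01' AND '2006-07-15'
       AND capacity >= 2
     GROUP BY item_id;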
] |
[
{
"msg_contents": "I actually need this info as I was tasked by management to provide it. Not\nsure if they understand that or not, I do but management does like to see\nhow well the system and its components are performing. Also, I would\nutilize these results to test any cache tuning changes I may make. \n\nTim\n\n -----Original Message-----\nFrom: \tJim C. Nasby [mailto:[email protected]] \nSent:\tWednesday, March 08, 2006 1:28 PM\nTo:\tmcelroy, tim\nCc:\t'Tom Lane'; '[email protected]'\nSubject:\tRe: [PERFORM] pg_reset_stats + cache I/O %\n\nOut of curiosity, why do you want this info? More important, do the\nfolks who are looking at this understand that a key part of PostgreSQL's\ntuning strategy is to let the OS handle the bulk of the caching?\n\nOn Wed, Mar 08, 2006 at 08:59:51AM -0500, mcelroy, tim wrote:\n> Thanks Tom, sorry I neglected to copy the list on my previous email.....\n> \n> Does this query make sense and is it valid for an accurate cache % hit\nratio\n> for the entire DB? I would assume I could use the same logic with other\n> views such as pg_stat_user_tables to get a per table ratio?\n> \n> SELECT 100 - round((blks_hit::numeric / (blks_hit::numeric +\n> blks_read::numeric)) * 100,2)\n> AS \"Cache % Hit\"\n> FROM pg_stat_database\n> WHERE datname = 'Fix1';\n> \n> <RETURNS>\n> \n> Cache % Hit\n> --------------------\n> 98.06\n> (1 row)\n> \n> Thank you,\n> Tim\n> \n> -----Original Message-----\n> From: \tTom Lane [mailto:[email protected]] \n> Sent:\tTuesday, March 07, 2006 2:37 PM\n> To:\tmcelroy, tim\n> Cc:\t'[email protected]'\n> Subject:\tRe: [PERFORM] pg_reset_stats + cache I/O % \n> \n> \"mcelroy, tim\" <[email protected]> writes:\n> > ERROR: function round(double precision, integer) does not exist\n> \n> Try coercing to numeric instead of float. Also, it'd be a good idea to\n> put that coercion outside the sum()'s instead of inside --- summing\n> bigints is probably noticeably faster than summing numerics.\n> \n> \t\t\tregards, tom lane\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n\n\n\nRE: [PERFORM] pg_reset_stats + cache I/O %\n\n\nI actually need this info as I was tasked by management to provide it. Not sure if they understand that or not, I do but management does like to see how well the system and its components are performing. Also, I would utilize these results to test any cache tuning changes I may make. \nTim\n\n -----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Wednesday, March 08, 2006 1:28 PM\nTo: mcelroy, tim\nCc: 'Tom Lane'; '[email protected]'\nSubject: Re: [PERFORM] pg_reset_stats + cache I/O %\n\nOut of curiosity, why do you want this info? More important, do the\nfolks who are looking at this understand that a key part of PostgreSQL's\ntuning strategy is to let the OS handle the bulk of the caching?\n\nOn Wed, Mar 08, 2006 at 08:59:51AM -0500, mcelroy, tim wrote:\n> Thanks Tom, sorry I neglected to copy the list on my previous email.....\n> \n> Does this query make sense and is it valid for an accurate cache % hit ratio\n> for the entire DB? 
I would assume I could use the same logic with other\n> views such as pg_stat_user_tables to get a per table ratio?\n> \n> SELECT 100 - round((blks_hit::numeric / (blks_hit::numeric +\n> blks_read::numeric)) * 100,2)\n> AS \"Cache % Hit\"\n> FROM pg_stat_database\n> WHERE datname = 'Fix1';\n> \n> <RETURNS>\n> \n> Cache % Hit\n> --------------------\n> 98.06\n> (1 row)\n> \n> Thank you,\n> Tim\n> \n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Tuesday, March 07, 2006 2:37 PM\n> To: mcelroy, tim\n> Cc: '[email protected]'\n> Subject: Re: [PERFORM] pg_reset_stats + cache I/O % \n> \n> \"mcelroy, tim\" <[email protected]> writes:\n> > ERROR: function round(double precision, integer) does not exist\n> \n> Try coercing to numeric instead of float. Also, it'd be a good idea to\n> put that coercion outside the sum()'s instead of inside --- summing\n> bigints is probably noticeably faster than summing numerics.\n> \n> regards, tom lane\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461",
"msg_date": "Wed, 8 Mar 2006 13:35:35 -0500 ",
"msg_from": "\"mcelroy, tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_reset_stats + cache I/O %"
},
{
"msg_contents": "On Wed, Mar 08, 2006 at 01:35:35PM -0500, mcelroy, tim wrote:\n> I actually need this info as I was tasked by management to provide it. Not\n> sure if they understand that or not, I do but management does like to see\n> how well the system and its components are performing. Also, I would\n> utilize these results to test any cache tuning changes I may make. \n\nWhat I feared. While monitoring cache hit % over time isn't a bad idea,\nit's less than half the picture, which makes fertile ground for\noptimizing for some mythical target instead of actual system\nperformance. If the \"conclusion\" from these numbers is that\nshared_buffers needs to get set larger than min(50000, 10% of memory)\nI'd very seriously re-consider how performance tuning is being done.\n\nBut hopefully I'm just being paranoid and you guys are just doing a\ngreat job of monitoring things and keeping on the ball. :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 8 Mar 2006 21:23:37 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_reset_stats + cache I/O %"
}
] |
[
{
"msg_contents": "Adding -performance back; you should do a reply-all if you want to reply to list messages.\n\n> From: Jeremy Haile [mailto:[email protected]]\n> > Can you point us at more info about this? I can't even find \n> a website\n> > for Ingres...\n> \n> Ingres is based off of the same original codebase that PostgreSQL was\n> based upon (a long time ago) It is owned by Computer \n> Associates and was\n> open sourced last year. It supports clustering and replication, and\n> I've seen an Ingres install set up as a cluster backed by a \n> SAN before. \n> I just haven't talked to anyone (at least unbiased) who has used this\n> type of setup in production, and I'm not fully aware of the\n> advantages/disadvantages of this type of setup with Ingres. \n> Since this\n> group seems pretty knowledgable about performance advantages \n> (and we are\n> currently running PostgreSQL), I wanted to see if there were any\n> experiences or opinions.\n> \n> Here is a link to their website:\n> http://opensource.ca.com/projects/ingres\n> \n> \n> > Perhaps if you posted your performance requirements someone \n> could help\n> > point you to a solution that would meet them.\n> \n> This is honestly more of a curiousity question at the moment, \n> so I don't\n> have any specific numbers. We definitely have a requirement for\n> failover in the case of a machine failure, so we at least need\n> Master->Slave replication. However, I wanted to solicit \n> information on\n> clustering alternatives as well, since scalability will likely be a\n> future problem for our database. \n\nAhh, ok... that's likely a much different requirement than true clustering.\n\nWhat a lot of folks do right now is segregate their application into a read-only stream and the more interactive read-write streams, and then use Slony to replicate data to a number of machines for the read-only work. This way anyone who's hitting the site read-only (and can handle some possible delay) will just hit one of the slave machines. People who are doing interactive work (updating data) will hit the master. Since most applications do far more reading than they do writing, this is a pretty good way to load-balance.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 8 Mar 2006 13:24:17 -0600",
"msg_from": "\"Jim Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres and Ingres R3 / SAN"
},
{
"msg_contents": "Folks,\n\n> > Ingres is based off of the same original codebase that PostgreSQL was\n> > based upon (a long time ago) \n\nThis is wrong. According to Andrew Yu and others who date back to the \noriginal POSTGRES, development of Postgres involved several of the same \nteam members as INGRES (most notably Stonebraker himself) but the two \ndatabase systems share no code. So the two systems share some ideas and \nalgorithms, but Postgres is a ground-up rewrite without borrowed code.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sat, 11 Mar 2006 15:06:52 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres and Ingres R3 / SAN"
}
] |
[
{
"msg_contents": "Sorry if this is the wrong list .......\n\nI'm in the process of developing an application based on gtk & postgress for \nboth windows & linux.\n\nShort, simple and to the point - I'm using embedded SQL .... is there anything \nI should know about using postgress in multiple threads, under linux OR \nwindows? I've not been able to find anything in the FAQ or documentation \nregarding this\n",
"msg_date": "Wed, 8 Mar 2006 19:32:37 -0500",
"msg_from": "Gorshkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "embedded postgres and threading"
}
] |
[
{
"msg_contents": "Hi all !\n\n I wanna test my system performance when using pgCluster.\n I'm using postgreSQL 8.1.0 and i've downloaded\npgcluster-1.5.0rc7 \n and pgcluster-1.5.0rc7-patch.\n\n Do i need to recompile postgreSQL with the patch?\n Can i use pgcluster-1.5 with this version of postgreSQL?\n\n Thx all\n\n\n\n\n\n\n\n\n\n Hi all !\n\n I wanna test my system performance when using pgCluster.\n I'm using postgreSQL 8.1.0 and i've downloaded pgcluster-1.5.0rc7 \n and pgcluster-1.5.0rc7-patch.\n\n Do i need to recompile postgreSQL with the patch?\n Can i use pgcluster-1.5 with this version of postgreSQL?\n\n Thx all",
"msg_date": "Thu, 09 Mar 2006 11:24:36 +0100",
"msg_from": "Javier Somoza <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgCluster and postgres 8.1"
},
{
"msg_contents": "Javier Somoza wrote:\n> I wanna test my system performance when using pgCluster.\n> I'm using postgreSQL 8.1.0 and i've downloaded pgcluster-1.5.0rc7\n> and pgcluster-1.5.0rc7-patch.\n> \n> Do i need to recompile postgreSQL with the patch?\n> Can i use pgcluster-1.5 with this version of postgreSQL?\n\nWhat does the documentation that comes with the patch say?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Fri, 10 Mar 2006 12:48:51 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgCluster and postgres 8.1"
}
] |
[
{
"msg_contents": "Sorry I realized your fears :)\n\nPostgreSQL is a new (last four months) install here and I'm responsible for\nit. Great DB and I enjoy working with it a lot and learning the nuances of\nit. Keep in mind that the management are 'old-time' system folks who love\ncharts showing system and in this case DB performance. I'm basically just\nusing the out-of-the-box defaults in my postgresql.conf file and that seems\nto be working so far. But as the DB grows I just need a way to prove the DB\nis functioning properly when apps get slow. You know the old you're guilty\ntill proven innocent syndrome.... Ok enough on that. \n\nYes, thank you we try to keep on the ball regarding system monitoring. BTW\n- I'm still waiting to see if anyone out there can say yea or nay if the SQL\nI wrote is a valid indicator of overall cache % hit?\n\n> SELECT 100 - round((blks_hit::numeric / (blks_hit::numeric +\n> blks_read::numeric)) * 100,2)\n> AS \"Cache % Hit\"\n> FROM pg_stat_database\n> WHERE datname = 'Fix1';\n> \n> <RETURNS>\n> \n> Cache % Hit\n> --------------------\n> 98.06\n> (1 row)\n\nThank you,\nTim\n\n\n -----Original Message-----\nFrom: \tJim C. Nasby [mailto:[email protected]] \nSent:\tWednesday, March 08, 2006 10:24 PM\nTo:\tmcelroy, tim\nCc:\t'[email protected]'\nSubject:\tRe: [PERFORM] pg_reset_stats + cache I/O %\n\nOn Wed, Mar 08, 2006 at 01:35:35PM -0500, mcelroy, tim wrote:\n> I actually need this info as I was tasked by management to provide it.\nNot\n> sure if they understand that or not, I do but management does like to see\n> how well the system and its components are performing. Also, I would\n> utilize these results to test any cache tuning changes I may make. \n\nWhat I feared. While monitoring cache hit % over time isn't a bad idea,\nit's less than half the picture, which makes fertile ground for\noptimizing for some mythical target instead of actual system\nperformance. If the \"conclusion\" from these numbers is that\nshared_buffers needs to get set larger than min(50000, 10% of memory)\nI'd very seriously re-consider how performance tuning is being done.\n\nBut hopefully I'm just being paranoid and you guys are just doing a\ngreat job of monitoring things and keeping on the ball. :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n\n\n\nRE: [PERFORM] pg_reset_stats + cache I/O %\n\n\nSorry I realized your fears :)\n\nPostgreSQL is a new (last four months) install here and I'm responsible for it. Great DB and I enjoy working with it a lot and learning the nuances of it. Keep in mind that the management are 'old-time' system folks who love charts showing system and in this case DB performance. I'm basically just using the out-of-the-box defaults in my postgresql.conf file and that seems to be working so far. But as the DB grows I just need a way to prove the DB is functioning properly when apps get slow. You know the old you're guilty till proven innocent syndrome.... Ok enough on that. \nYes, thank you we try to keep on the ball regarding system monitoring. 
BTW - I'm still waiting to see if anyone out there can say yea or nay if the SQL I wrote is a valid indicator of overall cache % hit?\n> SELECT 100 - round((blks_hit::numeric / (blks_hit::numeric +\n> blks_read::numeric)) * 100,2)\n> AS \"Cache % Hit\"\n> FROM pg_stat_database\n> WHERE datname = 'Fix1';\n> \n> <RETURNS>\n> \n> Cache % Hit\n> --------------------\n> 98.06\n> (1 row)\n\nThank you,\nTim\n\n\n -----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Wednesday, March 08, 2006 10:24 PM\nTo: mcelroy, tim\nCc: '[email protected]'\nSubject: Re: [PERFORM] pg_reset_stats + cache I/O %\n\nOn Wed, Mar 08, 2006 at 01:35:35PM -0500, mcelroy, tim wrote:\n> I actually need this info as I was tasked by management to provide it. Not\n> sure if they understand that or not, I do but management does like to see\n> how well the system and its components are performing. Also, I would\n> utilize these results to test any cache tuning changes I may make. \n\nWhat I feared. While monitoring cache hit % over time isn't a bad idea,\nit's less than half the picture, which makes fertile ground for\noptimizing for some mythical target instead of actual system\nperformance. If the \"conclusion\" from these numbers is that\nshared_buffers needs to get set larger than min(50000, 10% of memory)\nI'd very seriously re-consider how performance tuning is being done.\n\nBut hopefully I'm just being paranoid and you guys are just doing a\ngreat job of monitoring things and keeping on the ball. :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461",
"msg_date": "Thu, 9 Mar 2006 08:13:30 -0500 ",
"msg_from": "\"mcelroy, tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_reset_stats + cache I/O %"
},
{
"msg_contents": "On Thu, Mar 09, 2006 at 08:13:30AM -0500, mcelroy, tim wrote:\n> charts showing system and in this case DB performance. I'm basically just\n> using the out-of-the-box defaults in my postgresql.conf file and that seems\n\nUgh... the default config won't get you far. Take a look here:\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n\nOr, I've been planning on posting a website with some better \"canned\"\npostgresql.conf config files for different configurations; if you send\nme specs on the machine you're running on I'll come up with something\nthat's at least more reasonable.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 10 Mar 2006 09:15:49 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_reset_stats + cache I/O %"
}
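Before reaching for canned settings it can help to confirm what the server is actually running with; a quick check (the parameter list here is just the handful discussed in this thread):

    SELECT name, setting
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                    'effective_cache_size', 'wal_buffers', 'checkpoint_segments');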
] |
[
{
"msg_contents": "> Is it possible to get a stack trace from the stuck process? \n> I dunno if you've got anything gdb-equivalent under Windows, \n> but that's the first thing I'd be interested in ...\n\nTry Process Explorer from www.sysinternals.com.\n\n//Magnus\n",
"msg_date": "Thu, 9 Mar 2006 22:22:04 +0100",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hanging queries on dual CPU windows "
}
] |
[
{
"msg_contents": "I typed up a description of a situation where the only viable option to \nimprove performance was to use a materialized view, which, when implemented, \nwas found to improve performance twenty-sevenfold, even with a fairly small \namount of excess data (which is antipated to grow). I thought this might be \nof use to anybody else in a similar situation, so I thought I'd post it here.\n\nhttp://community.seattleserver.com/viewtopic.php?t=11\n\nFeel free to reproduce as you see fit.\n\nCheers,\n-- \nCasey Allen Shobe | [email protected] | 206-381-2800\nSeattleServer.com, Inc. | http://www.seattleserver.com\n",
"msg_date": "Fri, 10 Mar 2006 02:25:08 +0000",
"msg_from": "Casey Allen Shobe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using materialized views for commonly-queried subsets"
},
{
"msg_contents": "See also\nhttp://www.jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nOn Fri, Mar 10, 2006 at 02:25:08AM +0000, Casey Allen Shobe wrote:\n> I typed up a description of a situation where the only viable option to \n> improve performance was to use a materialized view, which, when implemented, \n> was found to improve performance twenty-sevenfold, even with a fairly small \n> amount of excess data (which is antipated to grow). I thought this might be \n> of use to anybody else in a similar situation, so I thought I'd post it here.\n> \n> http://community.seattleserver.com/viewtopic.php?t=11\n> \n> Feel free to reproduce as you see fit.\n> \n> Cheers,\n> -- \n> Casey Allen Shobe | [email protected] | 206-381-2800\n> SeattleServer.com, Inc. | http://www.seattleserver.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 10 Mar 2006 09:17:37 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using materialized views for commonly-queried subsets"
}
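For releases of that era there is no built-in materialized view object, so both write-ups boil down to maintaining one by hand: snapshot the expensive subset into an ordinary table, index it, and refresh it on a schedule. A minimal illustrative sketch; the table, columns and refresh policy are invented for the example, not taken from either article, and it assumes plpgsql is installed:

    -- snapshot the commonly-queried subset into a real table
    CREATE TABLE active_customers_mv AS
        SELECT customer_id, name, region
          FROM customers
         WHERE active;

    CREATE INDEX active_customers_mv_region_idx
        ON active_customers_mv (region);

    -- crude full refresh; run from cron or after bulk loads
    CREATE OR REPLACE FUNCTION refresh_active_customers_mv() RETURNS void AS $$
    BEGIN
        DELETE FROM active_customers_mv;
        INSERT INTO active_customers_mv
            SELECT customer_id, name, region
              FROM customers
             WHERE active;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

Queries then read from active_customers_mv instead of the underlying tables, trading some staleness between refreshes for the kind of speedup described above.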
] |
[
{
"msg_contents": "Hello,\n\nI could need some help.\n\n I have a Postgresql database \n\nWhen i do a query on my homeserver the result is given back fast but when i do the same query on my webhost server the query is useless because of the processtime (200 times slower (56366.20 / 281.000 = 200.59) ). My Pc is just a simple pc in reference to the high quality systems my webhost uses.\n\nI have included the query plan and the table\n\nQuery:\n\nexplain analyze SELECT B.gegevensnaam AS boss, E.gegevensnaam \nFROM nieuw_gegevens AS E \nLEFT OUTER JOIN \nnieuw_gegevens AS B \nON B.lft \n= (SELECT MAX(lft) \nFROM nieuw_gegevens AS S \nWHERE E.lft > S.lft \nAND E.lft < S.rgt) order by boss, gegevensnaam \n\nOn the WEBHOST: \n\nQUERY PLAN \nSort (cost=1654870.86..1654871.87 rows=403 width=38) (actual time=56365.13..56365.41 rows=403 loops=1) \n Sort Key: b.gegevensnaam, e.gegevensnaam \n -> Nested Loop (cost=0.00..1654853.42 rows=403 width=38) (actual time=92.76..56360.79 rows=403 loops=1) \n Join Filter: (\"inner\".lft = (subplan)) \n -> Seq Scan on nieuw_gegevens e (cost=0.00..8.03 rows=403 width=19) (actual time=0.03..1.07 rows=403 loops=1) \n -> Seq Scan on nieuw_gegevens b (cost=0.00..8.03 rows=403 width=19) (actual time=0.00..0.79 rows=403 loops=403) \n SubPlan \n -> Aggregate (cost=10.16..10.16 rows=1 width=4) (actual time=0.34..0.34 rows=1 loops=162409) \n -> Seq Scan on nieuw_gegevens s (cost=0.00..10.04 rows=45 width=4) (actual time=0.20..0.33 rows=2 loops=162409) \n Filter: (($0 > lft) AND ($0 < rgt)) \nTotal runtime: 56366.20 msec \n\n11 row(s) \n\nTotal runtime: 56,370.345 ms \n\n\nOn my HOMESERVER: \n\nQUERY PLAN \nSort (cost=12459.00..12461.04 rows=813 width=290) (actual time=281.000..281.000 rows=403 loops=1) \n Sort Key: b.gegevensnaam, e.gegevensnaam \n -> Merge Left Join (cost=50.94..12419.71 rows=813 width=290) (actual time=281.000..281.000 rows=403 loops=1) \n Merge Cond: (\"outer\".\"?column3?\" = \"inner\".lft) \n -> Sort (cost=25.47..26.48 rows=403 width=149) (actual time=281.000..281.000 rows=403 loops=1) \n Sort Key: (subplan) \n -> Seq Scan on nieuw_gegevens e (cost=0.00..8.03 rows=403 width=149) (actual time=0.000..281.000 rows=403 loops=1) \n SubPlan \n -> Aggregate (cost=10.16..10.16 rows=1 width=4) (actual time=0.697..0.697 rows=1 loops=403) \n -> Seq Scan on nieuw_gegevens s (cost=0.00..10.05 rows=45 width=4) (actual time=0.308..0.658 rows=2 loops=403) \n Filter: (($0 > lft) AND ($0 < rgt)) \n -> Sort (cost=25.47..26.48 rows=403 width=149) (actual time=0.000..0.000 rows=770 loops=1) \n Sort Key: b.lft \n -> Seq Scan on nieuw_gegevens b (cost=0.00..8.03 rows=403 width=149) (actual time=0.000..0.000 rows=403 loops=1) \nTotal runtime: 281.000 ms \n\n15 row(s) \n\nTotal runtime: 287.273 ms \n\n\nAs you can see the query isn't useful anymore because of the processtime. Please Also notice that both systems use a different query plan. \nAlso on the webhost we have a loop of 162409 (403 rows * 403 rows).\nBoth systems also use a different postgresql version. 
But I cannot believe that the performance difference between 1 version could be this big regarding self outer join queries!\n\nTable \n\nCREATE TABLE nieuw_gegevens \n( \n gegevensid int4 NOT NULL DEFAULT nextval('nieuw_gegevens_gegevensid_seq'::text), \n gegevensnaam varchar(255) NOT NULL, \n lft int4 NOT NULL, \n rgt int4 NOT NULL, \n keyword text, \n CONSTRAINT nieuw_gegevens_pkey PRIMARY KEY (gegevensid), \n CONSTRAINT nieuw_gegevens_gegevensnaam_key UNIQUE (gegevensnaam) \n) \nWITH OIDS; \n\n\nDoes anyone now how to resolve this problem? Could it be that the configuration of the webhost postgresql could me wrong?\n\nthank you",
"msg_date": "Fri, 10 Mar 2006 08:11:44 +0100",
"msg_from": "\"NbForYou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Process Time X200"
},
{
"msg_contents": "On Fri, Mar 10, 2006 at 08:11:44AM +0100, NbForYou wrote:\n> As you can see the query isn't useful anymore because of the\n> processtime. Please Also notice that both systems use a different\n> query plan.\n> Also on the webhost we have a loop of 162409 (403 rows * 403 rows).\n> Both systems also use a different postgresql version. But I cannot\n> believe that the performance difference between 1 version could be\n> this big regarding self outer join queries!\n\nWhat versions are both servers? I'd guess that the webhost is using\n7.3 or earlier and you're using 7.4 or later. I created a table\nlike yours, populated it with test data, and ran your query on\nseveral versions of PostgreSQL. I saw the same horrible plan on\n7.3 and the same good plan on later versions. The 7.4 Release Notes\ndo mention improvements in query planning; apparently one of those\nimprovements is making the difference.\n\n-- \nMichael Fuhr\n",
"msg_date": "Fri, 10 Mar 2006 01:59:43 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process Time X200"
},
{
"msg_contents": "Hey Michael, you sure know your stuff!\n\nVersions:\n\nPostgreSQL 7.3.9-RH running on the webhost.\nPostgreSQL 8.0.3 running on my homeserver.\n\nSo the only solution is to ask my webhost to upgrade its postgresql?\nThe question is will he do that? After all a license fee is required for\ncommercial use. And running a webhosting service is a commercial use.\n\nthanks for replying and going through the effort of creating the database \nand populating it.\n\nNick\n\n\n\n----- Original Message ----- \nFrom: \"Michael Fuhr\" <[email protected]>\nTo: \"NbForYou\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, March 10, 2006 9:59 AM\nSubject: Re: [PERFORM] Process Time X200\n\n\n> On Fri, Mar 10, 2006 at 08:11:44AM +0100, NbForYou wrote:\n>> As you can see the query isn't useful anymore because of the\n>> processtime. Please Also notice that both systems use a different\n>> query plan.\n>> Also on the webhost we have a loop of 162409 (403 rows * 403 rows).\n>> Both systems also use a different postgresql version. But I cannot\n>> believe that the performance difference between 1 version could be\n>> this big regarding self outer join queries!\n>\n> What versions are both servers? I'd guess that the webhost is using\n> 7.3 or earlier and you're using 7.4 or later. I created a table\n> like yours, populated it with test data, and ran your query on\n> several versions of PostgreSQL. I saw the same horrible plan on\n> 7.3 and the same good plan on later versions. The 7.4 Release Notes\n> do mention improvements in query planning; apparently one of those\n> improvements is making the difference.\n>\n> -- \n> Michael Fuhr\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n",
"msg_date": "Fri, 10 Mar 2006 10:11:00 +0100",
"msg_from": "\"NbForYou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Process Time X200"
},
{
"msg_contents": "On 10.03.2006, at 10:11 Uhr, NbForYou wrote:\n\n> So the only solution is to ask my webhost to upgrade its postgresql?\n\nSeems to be.\n\n> The question is will he do that?\n\nYou are the customer. If they don't, go to another provider.\n\n> After all a license fee is required for\n> commercial use. And running a webhosting service is a commercial use.\n\nNo license fee is required for any use of PostgreSQL. Read the license:\n\n\"Permission to use, copy, modify, and distribute this software and \nits documentation for any purpose, without fee, and without a written \nagreement is hereby granted, provided that the above copyright notice \nand this paragraph and the following two paragraphs appear in all \ncopies.\"\n\nA commercial license is needed for MySQL, not for PostgreSQL.\n\ncug\n\n\n-- \nPharmaLine, Essen, GERMANY\nSoftware and Database Development",
"msg_date": "Fri, 10 Mar 2006 10:23:32 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process Time X200"
},
{
"msg_contents": "On f�s, 2006-03-10 at 10:11 +0100, NbForYou wrote:\n> Hey Michael, you sure know your stuff!\n> \n> Versions:\n> \n> PostgreSQL 7.3.9-RH running on the webhost.\n> PostgreSQL 8.0.3 running on my homeserver.\n> \n> So the only solution is to ask my webhost to upgrade its postgresql?\n> The question is will he do that? After all a license fee is required for\n> commercial use. And running a webhosting service is a commercial use.\n\nA licence fee for what? Certainly not for postgresql.\n\ngnari\n\n\n",
"msg_date": "Fri, 10 Mar 2006 09:35:15 +0000",
"msg_from": "Ragnar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process Time X200"
},
{
"msg_contents": "NbForYou wrote:\n> Hey Michael, you sure know your stuff!\n> \n> Versions:\n> \n> PostgreSQL 7.3.9-RH running on the webhost.\n> PostgreSQL 8.0.3 running on my homeserver.\n> \n> So the only solution is to ask my webhost to upgrade its postgresql?\n> The question is will he do that? After all a license fee is required for\n> commercial use. And running a webhosting service is a commercial use.\n\nNo, you're thinking of MySQL - PostgreSQL is free for anyone, for any \npurpose. You can even distribute your own changes without giving them \nback to the community if you want to complicate your life.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 10 Mar 2006 09:40:47 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process Time X200"
},
{
"msg_contents": "Ok, Everybody keeps saying that Postgresql is free...\n\nSo I contacted my webhost and their respons was they have to pay a license \nfee.\n\nBut because they use PLESK as a service I think they are refering to a fee \nPLESK charges them\nfor the use combination PLESK - POSTGRESQL\n\nI do not know however that this information is accurate...\n\nI thank everybody who have responded so far. Great feedback!\n\n\n----- Original Message ----- \nFrom: \"Richard Huxton\" <[email protected]>\nTo: \"NbForYou\" <[email protected]>\nCc: \"Michael Fuhr\" <[email protected]>; <[email protected]>\nSent: Friday, March 10, 2006 10:40 AM\nSubject: Re: [PERFORM] Process Time X200\n\n\n> NbForYou wrote:\n>> Hey Michael, you sure know your stuff!\n>>\n>> Versions:\n>>\n>> PostgreSQL 7.3.9-RH running on the webhost.\n>> PostgreSQL 8.0.3 running on my homeserver.\n>>\n>> So the only solution is to ask my webhost to upgrade its postgresql?\n>> The question is will he do that? After all a license fee is required for\n>> commercial use. And running a webhosting service is a commercial use.\n>\n> No, you're thinking of MySQL - PostgreSQL is free for anyone, for any \n> purpose. You can even distribute your own changes without giving them back \n> to the community if you want to complicate your life.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n",
"msg_date": "Fri, 10 Mar 2006 11:45:06 +0100",
"msg_from": "\"NbForYou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Process Time X200"
},
{
"msg_contents": "\n\n> Ok, Everybody keeps saying that Postgresql is free...\n>\n> So I contacted my webhost and their respons was they have to pay a \n> license fee.\n>\n> But because they use PLESK as a service I think they are refering to a \n> fee PLESK charges them\n> for the use combination PLESK - POSTGRESQL\n\n\tProbably.\n\tAlthough in my humble opinion, proposing postgres 7.3 in 2006 is a bit \ndisrespectful to the considerable work that has been done by the postgres \nteam since that release.\n\n\tIf you don't find a host to your liking, and you have a large website, as \nyou say, consider a dedicated server. Prices are quite accessible now, you \ncan install the latest version of Postgres. Going from 7.3 to 8.1, and \nhaving your own server with all its resources dedicated to running your \nsite, will probably enhance your performance. Consider lighttpd which is a \nspeed demon and uses very little resources.\n",
"msg_date": "Fri, 10 Mar 2006 12:38:54 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process Time X200"
},
{
"msg_contents": "On 3/10/06, NbForYou <[email protected]> wrote:\n> Hey Michael, you sure know your stuff!\n>\n> Versions:\n>\n> PostgreSQL 7.3.9-RH running on the webhost.\n> PostgreSQL 8.0.3 running on my homeserver.\n>\n> So the only solution is to ask my webhost to upgrade its postgresql?\n> The question is will he do that? After all a license fee is required for\n> commercial use. And running a webhosting service is a commercial use.\n>\n> thanks for replying and going through the effort of creating the database\n> and populating it.\n>\n> Nick\n>\n\nYou can look at the explain analyze output of the query from pg 7.3,\nfigure out why the plan is bad and tweak your query to get optimum\nperformance.\n\nYes, I agree with the other statements that say, \"upgrade to 7.4 or\n8.x if you can\" but if you can't, then you can still work on it.\n\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Fri, 10 Mar 2006 09:48:22 -0600",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process Time X200"
},
{
"msg_contents": "On Fri, 2006-03-10 at 04:45, NbForYou wrote:\n> Ok, Everybody keeps saying that Postgresql is free...\n> \n> So I contacted my webhost and their respons was they have to pay a license \n> fee.\n> \n> But because they use PLESK as a service I think they are refering to a fee \n> PLESK charges them\n> for the use combination PLESK - POSTGRESQL\n> \n> I do not know however that this information is accurate...\n> \n> I thank everybody who have responded so far. Great feedback!\n\nI think it's time to get a new hosting provider.\n\nIf they're still running PostgreSQL 7.3.9 (the latest 7.3 is 7.3.14, and\n8.1.3 is amazingly faster than 7.3.anything...) then they're likely not\nupdating other vital components either, and therefore it's only a matter\nof time before your machine gets hacked.\n",
"msg_date": "Fri, 10 Mar 2006 10:46:56 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process Time X200"
},
{
"msg_contents": "On Fri, Mar 10, 2006 at 10:46:56AM -0600, Scott Marlowe wrote:\n> I think it's time to get a new hosting provider.\n> \n> If they're still running PostgreSQL 7.3.9 (the latest 7.3 is 7.3.14, and\n> 8.1.3 is amazingly faster than 7.3.anything...) then they're likely not\n> updating other vital components either, and therefore it's only a matter\n> of time before your machine gets hacked.\n\nOr you lose data. IIRC there have been some data-loss bugs fixed between\n7.3.9 and 7.3.14.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 14 Mar 2006 14:11:25 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Process Time X200"
}
] |
[
{
"msg_contents": "> > Is it possible to get a stack trace from the stuck process? \n> I dunno \n> > if you've got anything gdb-equivalent under Windows, but that's the \n> > first thing I'd be interested in ...\n> \n> Here ya go:\n> \n> http://www.devisser-siderius.com/stack1.jpg\n> http://www.devisser-siderius.com/stack2.jpg\n> http://www.devisser-siderius.com/stack3.jpg\n> \n> There are three threads in the process. I guess thread 1 \n> (stack1.jpg) is the most interesting.\n> \n> I also noted that cranking up concurrency in my app \n> reproduces the problem in about 4 minutes ;-)\n\nActually, stack2 looks very interesting. Does it \"stay stuck\" in pg_queue_signal? That's really not supposed to happen.\n\nAlso, can you confirm that stack1 actually *stops* in pgwin32_waitforsinglesocket? Or does it go out and come back? ;-)\n\n(A good signal of this is to check the cswitch delta. If it stays at zero, then it's stuck. If it shows any values, that means it's actuall going out and coming back)\n\nAnd finally, is this 8.0 or 8.1? There have been some significant changes in the handling of the signals between the two...\n\n//Magnus\n",
"msg_date": "Fri, 10 Mar 2006 10:20:15 +0100",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hanging queries on dual CPU windows"
},
{
"msg_contents": "On Friday 10 March 2006 04:20, Magnus Hagander wrote:\n> > > Is it possible to get a stack trace from the stuck process?\n> >\n> > I dunno\n> >\n> > > if you've got anything gdb-equivalent under Windows, but that's the\n> > > first thing I'd be interested in ...\n> >\n> > Here ya go:\n> >\n> > http://www.devisser-siderius.com/stack1.jpg\n> > http://www.devisser-siderius.com/stack2.jpg\n> > http://www.devisser-siderius.com/stack3.jpg\n> >\n> > There are three threads in the process. I guess thread 1\n> > (stack1.jpg) is the most interesting.\n> >\n> > I also noted that cranking up concurrency in my app\n> > reproduces the problem in about 4 minutes ;-)\n>\n\nJust reproduced again. \n\n> Actually, stack2 looks very interesting. Does it \"stay stuck\" in\n> pg_queue_signal? That's really not supposed to happen.\n\nYes it does. \n\n>\n> Also, can you confirm that stack1 actually *stops* in\n> pgwin32_waitforsinglesocket? Or does it go out and come back? ;-)\n>\n> (A good signal of this is to check the cswitch delta. If it stays at zero,\n> then it's stuck. If it shows any values, that means it's actuall going out\n> and coming back)\n\nI only see CSwitch change once I click OK on the thread window. Once I do \nthat, it goes up to 3 and back to blank again. The 'context switches' counter \ndoes not increase like it does for other processes (like e.g. process \nexplorer itself).\n\nAnother thing which may or may not be of interest: Nothing is listed in the \n'TCP/IP' tab for the stuck process. I would have expected to see at least the \nsocket of the client connection there??\n\n>\n> And finally, is this 8.0 or 8.1? There have been some significant changes\n> in the handling of the signals between the two...\n\nThis is 8.1.3 on Windows 2003 Server. Also reproduced on 8.1.0 and 8.1.1 (also \non 2K3). \n\n>\n> //Magnus\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser [email protected]\n\n Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n",
"msg_date": "Fri, 10 Mar 2006 09:03:14 -0500",
"msg_from": "Jan de Visser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hanging queries on dual CPU windows"
},
{
"msg_contents": "On Friday 10 March 2006 09:03, Jan de Visser wrote:\n> On Friday 10 March 2006 04:20, Magnus Hagander wrote:\n> > > > Is it possible to get a stack trace from the stuck process?\n> > >\n> > > I dunno\n> > >\n> > > > if you've got anything gdb-equivalent under Windows, but that's the\n> > > > first thing I'd be interested in ...\n> > >\n> > > Here ya go:\n> > >\n> > > http://www.devisser-siderius.com/stack1.jpg\n> > > http://www.devisser-siderius.com/stack2.jpg\n> > > http://www.devisser-siderius.com/stack3.jpg\n> > >\n> > > There are three threads in the process. I guess thread 1\n> > > (stack1.jpg) is the most interesting.\n> > >\n> > > I also noted that cranking up concurrency in my app\n> > > reproduces the problem in about 4 minutes ;-)\n>\n> Just reproduced again.\n>\n> > Actually, stack2 looks very interesting. Does it \"stay stuck\" in\n> > pg_queue_signal? That's really not supposed to happen.\n>\n> Yes it does.\n\nAn update on that: There is actually *two* processes in this state, both \nhanging in pg_queue_signal. I've looked at the source of that, and the \nobvious candidate for hanging is EnterCriticalSection. I also found this:\n\nhttp://blogs.msdn.com/larryosterman/archive/2005/03/02/383685.aspx\n\nwhere they say:\n\n\"\nIn addition, for Windows 2003, SP1, the EnterCriticalSection API has a subtle \nchange that's intended tor resolve many of the lock convoy issues. Before \nWin2003 SP1, if 10 threads were blocked on EnterCriticalSection and all 10 \nthreads had the same priority, then EnterCriticalSection would service those \nthreads in a FIFO (first -in, first-out) basis. Starting in Windows 2003 \nSP1, the EnterCriticalSection will wake up a random thread from the waiting \nthreads. If all the threads are doing the same thing (like a thread pool) \nthis won't make much of a difference, but if the different threads are doing \ndifferent work (like the critical section protecting a widely accessed \nobject), this will go a long way towards removing lock convoy semantics.\n\"\n\nCould it be they broke it when they did that????\n\n\n>\n> > Also, can you confirm that stack1 actually *stops* in\n> > pgwin32_waitforsinglesocket? Or does it go out and come back? ;-)\n> >\n> > (A good signal of this is to check the cswitch delta. If it stays at\n> > zero, then it's stuck. If it shows any values, that means it's actuall\n> > going out and coming back)\n>\n> I only see CSwitch change once I click OK on the thread window. Once I do\n> that, it goes up to 3 and back to blank again. The 'context switches'\n> counter does not increase like it does for other processes (like e.g.\n> process explorer itself).\n>\n> Another thing which may or may not be of interest: Nothing is listed in the\n> 'TCP/IP' tab for the stuck process. I would have expected to see at least\n> the socket of the client connection there??\n>\n> > And finally, is this 8.0 or 8.1? There have been some significant changes\n> > in the handling of the signals between the two...\n>\n> This is 8.1.3 on Windows 2003 Server. Also reproduced on 8.1.0 and 8.1.1\n> (also on 2K3).\n>\n> > //Magnus\n>\n> jan\n\n-- \n--------------------------------------------------------------\nJan de Visser [email protected]\n\n Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n",
"msg_date": "Fri, 10 Mar 2006 09:32:59 -0500",
"msg_from": "Jan de Visser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hanging queries on dual CPU windows"
},
{
"msg_contents": "On Friday 10 March 2006 09:32, Jan de Visser wrote:\n> > > Actually, stack2 looks very interesting. Does it \"stay stuck\" in\n> > > pg_queue_signal? That's really not supposed to happen.\n> >\n> > Yes it does.\n>\n> An update on that: There is actually *two* processes in this state, both\n> hanging in pg_queue_signal. I've looked at the source of that, and the\n> obvious candidate for hanging is EnterCriticalSection. I also found this:\n>\n> http://blogs.msdn.com/larryosterman/archive/2005/03/02/383685.aspx\n>\n> where they say:\n>\n> \"\n> In addition, for Windows 2003, SP1, the EnterCriticalSection API has a\n> subtle change that's intended tor resolve many of the lock convoy issues.\n> Before Win2003 SP1, if 10 threads were blocked on EnterCriticalSection and\n> all 10 threads had the same priority, then EnterCriticalSection would\n> service those threads in a FIFO (first -in, first-out) basis. Starting in\n> Windows 2003 SP1, the EnterCriticalSection will wake up a random thread\n> from the waiting threads. If all the threads are doing the same thing\n> (like a thread pool) this won't make much of a difference, but if the\n> different threads are doing different work (like the critical section\n> protecting a widely accessed object), this will go a long way towards\n> removing lock convoy semantics. \"\n>\n> Could it be they broke it when they did that????\n\nSee also this:\n\nhttp://bugs.mysql.com/bug.php?id=12071\n\nIt appears the mysql people ran into this and concluded it is a Windows bug \nthey needed to work around.\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser [email protected]\n\n Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n",
"msg_date": "Fri, 10 Mar 2006 09:47:22 -0500",
"msg_from": "Jan de Visser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hanging queries on dual CPU windows"
}
] |
[
{
"msg_contents": "Hello list.\n\nWe have compared 2 IBM x servers:\n\n IBM X206\nIBM X226\n ---------------------- \n -------------------\nprocessor Pentium 4 3.2 Ghz\nXeon 3.0 Ghz\nmain memory 1.25 GB\n4 GB\ndiscs 2 x SCSI RAID1 10000RPM 1 x\nATA 7200 RPM\n\nLINUX 2.6 (SUSE 9)\nsame\nPGSQL 7.4\nsame\npostgresql.conf attached\nsame\n\n\nWe have bij means of an informix-4GL program done the following test:\n\n\ncreate table : name char(18)\n adres char(20)\n key integer\n\ncreate index on (key)\n Time\nat X206 Time at X226\n ----------------\n---- ------------------\n\ninsert record (key goes from 1 to 10000) 6 sec.\n41 sec.\nselect record (key goes from 1 to 10000) 4\n4\ndelete record (key goes from 1 to 10000) 6\n41\n\n\nThis is ofcourse a totally unexpected results (you should think off the\nopposite).\n\nFunny is that the select time is the same for both machines.\n\nDoes anybody has any any idea what can cause this strange results or where\nwe\ncan start our investigations?\n\n\nRegards\n\n\nHenk Sanders",
"msg_date": "Fri, 10 Mar 2006 10:50:20 +0100",
"msg_from": "\"H.J. Sanders\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "x206-x225"
},
{
"msg_contents": "H.J. Sanders wrote:\n> X206 IBM X226\n> ---------------------- -------------------\n> processor Pentium 4 3.2 \n> Ghz Xeon 3.0 Ghz \n> main memory 1.25 \n> GB 4 GB \n> discs 2 x SCSI RAID1 10000RPM \n> 1 x ATA 7200 RPM\n\nNoting that the SCSI discs are on the *slower* machine.\n\n> Time at X206 Time at X226\n> -------------------- ------------------\n> insert record (1 to 10000) 6 sec. 41 sec.\n> select record (1 to 10000) 4 4\n> delete record (1 to 10000) 6 41\n> \n> \n> This is ofcourse a totally unexpected results (you should think off the \n> opposite).\n\nYour ATA disk is lying about disk caching being turned off. Assuming \neach insert is in a separate transaction, then it's not going to do \n10,000 / 6 = 1667 transactions/sec - that's faster than it's rotational \nspeed.\n\n> Funny is that the select time is the same for both machines.\n\nBecause you're limited by the speed to read from RAM.\n\nBy the way - these sort of tests are pretty much meaningless in any \npractical terms.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 10 Mar 2006 13:40:22 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "The primary slow down is probably between your system bus from main memory\nto your disk storage. If you notice from your statistics that the select\nstatements are very close. This is because all the data you need is already\nin system memory. The primary bottle neck is probably disk I/O. Scsi will\nalways be faster than ATA. Scsi devices have dedicated hardware for getting\ndata to and from the disc to the main system bus without requiring a trip\nthrough the CPU.\n\nYou may be able to speed up the ata disc by enabling DMA by using hdparm.\n\nhdparm -d1 /dev/hda (or whatever your device is)\n\n-Daniel\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\n\niD8DBQBEEYzX9SJ2nhowvKERAoiFAKCLR+7a7ReZ2mjjPjpONHLGIQD1SgCeNNON\nV1kbyATIFVPWuf1W6Ji0IFg=\n=5Msr\n-----END PGP SIGNATURE-----\n\nOn 3/10/06, Richard Huxton <[email protected]> wrote:\n>\n> H.J. Sanders wrote:\n> > X206 IBM X226\n> > ---------------------- -------------------\n> > processor Pentium 4 3.2\n> > Ghz Xeon 3.0 Ghz\n> > main memory 1.25\n> > GB 4 GB\n> > discs 2 x SCSI RAID1 10000RPM\n> > 1 x ATA 7200 RPM\n>\n> Noting that the SCSI discs are on the *slower* machine.\n>\n> > Time at X206 Time at X226\n> > -------------------- ------------------\n> > insert record (1 to 10000) 6 sec. 41 sec.\n> > select record (1 to 10000) 4 4\n> > delete record (1 to 10000) 6 41\n> >\n> >\n> > This is ofcourse a totally unexpected results (you should think off the\n> > opposite).\n>\n> Your ATA disk is lying about disk caching being turned off. Assuming\n> each insert is in a separate transaction, then it's not going to do\n> 10,000 / 6 = 1667 transactions/sec - that's faster than it's rotational\n> speed.\n>\n> > Funny is that the select time is the same for both machines.\n>\n> Because you're limited by the speed to read from RAM.\n>\n> By the way - these sort of tests are pretty much meaningless in any\n> practical terms.\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\nThe primary slow down is probably between your system bus from main\nmemory to your disk storage. If you notice from your statistics that\nthe select statements are very close. This is because all the data you\nneed is already in system memory. The primary bottle neck is probably\ndisk I/O. Scsi will always be faster than ATA. Scsi devices have\ndedicated hardware for getting data to and from the disc to the main\nsystem bus without requiring a trip through the CPU. \n\nYou may be able to speed up the ata disc by enabling DMA by using hdparm.\n\nhdparm -d1 /dev/hda (or whatever your device is)\n\n-Daniel\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2 (GNU/Linux)\n\niD8DBQBEEYzX9SJ2nhowvKERAoiFAKCLR+7a7ReZ2mjjPjpONHLGIQD1SgCeNNON\nV1kbyATIFVPWuf1W6Ji0IFg=\n=5Msr\n-----END PGP SIGNATURE-----\nOn 3/10/06, Richard Huxton <[email protected]> wrote:\nH.J. Sanders wrote:>\nX206 \nIBM X226>\n---------------------- ------------------->\nprocessor Pentium\n4 3.2>\nGhz Xeon\n3.0 Ghz> main\nmemory 1.25>\nGB 4\nGB>\ndiscs 2\nx SCSI RAID1 10000RPM> 1 x ATA 7200 RPMNoting that the SCSI discs are on the *slower* machine.> Time at X206 Time at X226> -------------------- ------------------> insert record (1 to 10000) 6 sec. 
41 sec.\n>\nselect record (1 to\n10000) 4 4>\ndelete record (1 to\n10000) 6 \n41>>> This is ofcourse a totally unexpected results (you should think off the> opposite).Your ATA disk is lying about disk caching being turned off. Assumingeach insert is in a separate transaction, then it's not going to do\n10,000 / 6 = 1667 transactions/sec - that's faster than it's rotationalspeed.> Funny is that the select time is the same for both machines.Because you're limited by the speed to read from RAM.\nBy the way - these sort of tests are pretty much meaningless in anypractical terms.-- Richard Huxton Archonet Ltd---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend",
"msg_date": "Fri, 10 Mar 2006 09:27:48 -0500",
"msg_from": "\"Daniel Blaisdell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "On Fri, 2006-03-10 at 13:40 +0000, Richard Huxton wrote:\n> Your ATA disk is lying about disk caching being turned off. Assuming \n> each insert is in a separate transaction, then it's not going to do \n> 10,000 / 6 = 1667 transactions/sec - that's faster than it's rotational \n> speed.\nCould you explain the calculation? Why should the number of transactions\nbe related to the rotational speed of the disk, without saying anything\nabout the number of bytes per rotation?\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n\n",
"msg_date": "Sat, 11 Mar 2006 08:49:38 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "On Sat, 11 Mar 2006, Joost Kraaijeveld wrote:\n\n> On Fri, 2006-03-10 at 13:40 +0000, Richard Huxton wrote:\n>> Your ATA disk is lying about disk caching being turned off. Assuming\n>> each insert is in a separate transaction, then it's not going to do\n>> 10,000 / 6 = 1667 transactions/sec - that's faster than it's rotational\n>> speed.\n> Could you explain the calculation? Why should the number of transactions\n> be related to the rotational speed of the disk, without saying anything\n> about the number of bytes per rotation?\n\neach transaction requires a sync to the disk, a sync requires a real \nwrite (which you then wait for), so you can only do one transaction per \nrotation.\n\nDavid Lang\n",
"msg_date": "Fri, 10 Mar 2006 23:57:16 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
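
A quick back-of-the-envelope check of the figures in this thread, assuming a 7200 RPM disk and one fsync'd commit per platter revolution, as David describes:

    7200 RPM / 60 s          = 120 revolutions/sec  ->  at most ~120 synced commits/sec
    10,000 inserts / 6 sec   ~ 1,667 commits/sec    ->  roughly 14x that ceiling

So unless the inserts were batched into far fewer transactions, the reported rate can only come from a volatile write cache acknowledging the fsyncs before the data reaches the platter.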
{
"msg_contents": "On Fri, 2006-03-10 at 23:57 -0800, David Lang wrote:\n> On Sat, 11 Mar 2006, Joost Kraaijeveld wrote:\n> \n> > On Fri, 2006-03-10 at 13:40 +0000, Richard Huxton wrote:\n> >> Your ATA disk is lying about disk caching being turned off. Assuming\n> >> each insert is in a separate transaction, then it's not going to do\n> >> 10,000 / 6 = 1667 transactions/sec - that's faster than it's rotational\n> >> speed.\n> > Could you explain the calculation? Why should the number of transactions\n> > be related to the rotational speed of the disk, without saying anything\n> > about the number of bytes per rotation?\n> \n> each transaction requires a sync to the disk, a sync requires a real \n> write (which you then wait for), so you can only do one transaction per \n> rotation.\nNot according to a conversation I had with Western Digital about the\nwrite performance of my own SATA disks. What I understand from their\nexplanation their disk are limited by the MB/sec and not by the number\nof writes/second, e.g. I could write 50 MB/sec *in 1 bit/write* on my\ndisk. This would suggest that the maximum transactions of my disk\n(overhead of OS and PostgreSQL ignored) would be 50MB / (transaction\nsize in MB) per second. Or am I missing something (what would not\nsurprise me, as I do not understand the perforance of my system at\nall ;-))?\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n\n",
"msg_date": "Sat, 11 Mar 2006 09:17:09 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "\n>> each transaction requires a sync to the disk, a sync requires a real\n>> write (which you then wait for), so you can only do one transaction per\n>> rotation.\n> Not according to a conversation I had with Western Digital about the\n\n\nIt depends if you consider that \"written to the disk\" means \"data is \nsomewhere between the OS cache and the platter\" or \"data is writter on the \nplatter and will survive a power loss\".\n\nPostgres wants the second option, of course.\n\nFor that, the data has to be on the disk. Thus, the disk has to seek, wait \ntill the desired sector arrives in front of the head, write, and tell the \nOS it's done. Your disk just stores data in its embedded RAM buffer and \ntells the OS it's written, but if you lose power, you lose anything that's \nin the disk embedded RAM cache...\n\nAdvanced RAID cards have battery backed up RAM cache precisely for that \npurpose. Your harddisk doesn't.\n",
"msg_date": "Sat, 11 Mar 2006 12:33:50 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "On Sat, 2006-03-11 at 12:33 +0100, PFC wrote:\n> >> each transaction requires a sync to the disk, a sync requires a real\n> >> write (which you then wait for), so you can only do one transaction per\n> >> rotation.\n> > Not according to a conversation I had with Western Digital about the\n> \n> \n> It depends if you consider that \"written to the disk\" means \"data is \n> somewhere between the OS cache and the platter\" or \"data is writter on the \n> platter and will survive a power loss\".\n> \n> Postgres wants the second option, of course.\n\nI assume that for PostgreSQL \"written to disk\" is after fsync returned\nsuccessfully. In practice that could very well mean that the data is\nstill in a cache somewhere (controller or harddisk, not in the OS\nanymore, see also man page of fsync)\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n\n",
"msg_date": "Sat, 11 Mar 2006 15:26:14 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "Joost Kraaijeveld <[email protected]> writes:\n> I assume that for PostgreSQL \"written to disk\" is after fsync returned\n> successfully. In practice that could very well mean that the data is\n> still in a cache somewhere (controller or harddisk, not in the OS\n> anymore, see also man page of fsync)\n\nWhat it had better mean, if you want your database to be reliable,\nis that the data is stored someplace that will survive a system crash\n(power outage, kernel panic, etc). A battery-backed RAM cache is OK,\nassuming that total failure of the RAID controller is not one of the\nevents you consider likely enough to need protection against.\n\nThe description of your SATA drive makes it sound like the drive\ndoes not put data on the platter before reporting \"write complete\",\nbut only stores it in on-board RAM cache. It is highly unlikely\nthat there is any battery backing for that cache, and therefore that\ndrive is not to be trusted.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 11 Mar 2006 11:59:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225 "
},
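
For what it's worth, on Linux the on-drive write cache Tom is talking about can usually be inspected and switched off with hdparm. A rough sketch, assuming an (S)ATA drive at /dev/sda (the device name, and whether the drive honours the flag, vary by hardware and kernel):

    hdparm -W /dev/sda      # show whether the volatile write cache is enabled
    hdparm -W0 /dev/sda     # turn it off (fsync becomes trustworthy, writes get slower)
    hdparm -W1 /dev/sda     # turn it back on

With the cache off, commit rates typically fall back to something near the one-commit-per-revolution ceiling discussed earlier in the thread, which is a good sanity check that fsync really reaches the platters.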
{
"msg_contents": "On Sat, 2006-03-11 at 11:59 -0500, Tom Lane wrote:\n> Joost Kraaijeveld <[email protected]> writes:\n> > I assume that for PostgreSQL \"written to disk\" is after fsync returned\n> > successfully. In practice that could very well mean that the data is\n> > still in a cache somewhere (controller or harddisk, not in the OS\n> > anymore, see also man page of fsync)\n> \n> What it had better mean, if you want your database to be reliable,\n> is that the data is stored someplace that will survive a system crash\n> (power outage, kernel panic, etc). A battery-backed RAM cache is OK,\n> assuming that total failure of the RAID controller is not one of the\n> events you consider likely enough to need protection against.\n\nMaybe I should have expressed myself better. The parent post said: \n\n> It depends if you consider that \"written to the disk\" means \"data is \n> somewhere between the OS cache and the platter\" or \"data is written on\n> the platter and will survive a power loss\".\n>\n> Postgres wants the second option, of course.\n\nWith my remark I meant that the only thing *PostgreSQL* can expect is\nthat the data is out of the OS: there is no greater guarantee in the\nfsync function. If the *database administrator* wants better guarantees,\nhe (or she) better read your advise.\n\n> The description of your SATA drive makes it sound like the drive\n> does not put data on the platter before reporting \"write complete\",\n> but only stores it in on-board RAM cache. It is highly unlikely\n> that there is any battery backing for that cache, and therefore that\n> drive is not to be trusted.\nYep, the drives have a write cache, and indeed, they are not backed up\nby a battery (neither is my RAID controller) but as this is a\ntest/development machine, I don't really care. \n\nYou made me rethink my production machine thought. I will have to check\nthe drives and the state of their write cache of that machine. Thanks\nfor that.\n\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n\n",
"msg_date": "Sat, 11 Mar 2006 19:26:36 +0100",
"msg_from": "Joost Kraaijeveld <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "On Sat, 11 Mar 2006, Joost Kraaijeveld wrote:\n\n> Date: Sat, 11 Mar 2006 09:17:09 +0100\n> From: Joost Kraaijeveld <[email protected]>\n> To: David Lang <[email protected]>\n> Cc: Richard Huxton <[email protected]>, [email protected]\n> Subject: Re: [PERFORM] x206-x225\n> \n> On Fri, 2006-03-10 at 23:57 -0800, David Lang wrote:\n>> On Sat, 11 Mar 2006, Joost Kraaijeveld wrote:\n>>\n>>> On Fri, 2006-03-10 at 13:40 +0000, Richard Huxton wrote:\n>>>> Your ATA disk is lying about disk caching being turned off. Assuming\n>>>> each insert is in a separate transaction, then it's not going to do\n>>>> 10,000 / 6 = 1667 transactions/sec - that's faster than it's rotational\n>>>> speed.\n>>> Could you explain the calculation? Why should the number of transactions\n>>> be related to the rotational speed of the disk, without saying anything\n>>> about the number of bytes per rotation?\n>>\n>> each transaction requires a sync to the disk, a sync requires a real\n>> write (which you then wait for), so you can only do one transaction per\n>> rotation.\n> Not according to a conversation I had with Western Digital about the\n> write performance of my own SATA disks. What I understand from their\n> explanation their disk are limited by the MB/sec and not by the number\n> of writes/second, e.g. I could write 50 MB/sec *in 1 bit/write* on my\n> disk. This would suggest that the maximum transactions of my disk\n> (overhead of OS and PostgreSQL ignored) would be 50MB / (transaction\n> size in MB) per second. Or am I missing something (what would not\n> surprise me, as I do not understand the perforance of my system at\n> all ;-))?\n\nbut if you do a 1 bit write, and wait for it to complete, and then do \nanother 1 bit write that belongs on disk immediatly after the first one \n(and wait for it to complete) you have to wait until the disk rotates to \nthe point that it can make the write before it's really safe on disk.\n\nso you can do one transaction in less then one rotation, but if you do 50 \ntransactions you must wait at least 49 (and a fraction) roatations.\n\nif the disk cache is turned on then you don't have to wait for this, but \nyou also will loose the data if you loose power so it's really not safe.\n\nDavid Lang\n",
"msg_date": "Sat, 11 Mar 2006 13:15:41 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "On Fri, Mar 10, 2006 at 11:57:16PM -0800, David Lang wrote:\n> On Sat, 11 Mar 2006, Joost Kraaijeveld wrote:\n> \n> >On Fri, 2006-03-10 at 13:40 +0000, Richard Huxton wrote:\n> >>Your ATA disk is lying about disk caching being turned off. Assuming\n> >>each insert is in a separate transaction, then it's not going to do\n> >>10,000 / 6 = 1667 transactions/sec - that's faster than it's rotational\n> >>speed.\n> >Could you explain the calculation? Why should the number of transactions\n> >be related to the rotational speed of the disk, without saying anything\n> >about the number of bytes per rotation?\n> \n> each transaction requires a sync to the disk, a sync requires a real \n> write (which you then wait for), so you can only do one transaction per \n> rotation.\n\nBut shouldn't it be possible to batch up WAL writes and syncs? In other\nwords, if you have 5 transactions that all COMMIT at exactly the same\ntime, it should be possible to get all 5 WAL pages (I'll assume each\none generated a small enough change so as not to require multiple WAL\npages) to the drive before the platter comes around to the right\nposition. The drive should then be able to write all 5 at once. At\nleast, theoretically...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 14 Mar 2006 14:32:53 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Fri, Mar 10, 2006 at 11:57:16PM -0800, David Lang wrote:\n>> On Sat, 11 Mar 2006, Joost Kraaijeveld wrote:\n>>\n>>> On Fri, 2006-03-10 at 13:40 +0000, Richard Huxton wrote:\n>>>> Your ATA disk is lying about disk caching being turned off. Assuming\n>>>> each insert is in a separate transaction, then it's not going to do\n>>>> 10,000 / 6 = 1667 transactions/sec - that's faster than it's rotational\n>>>> speed.\n>>> Could you explain the calculation? Why should the number of transactions\n>>> be related to the rotational speed of the disk, without saying anything\n>>> about the number of bytes per rotation?\n>> each transaction requires a sync to the disk, a sync requires a real \n>> write (which you then wait for), so you can only do one transaction per \n>> rotation.\n> \n> But shouldn't it be possible to batch up WAL writes and syncs? In other\n> words, if you have 5 transactions that all COMMIT at exactly the same\n> time, it should be possible to get all 5 WAL pages (I'll assume each\n> one generated a small enough change so as not to require multiple WAL\n> pages) to the drive before the platter comes around to the right\n> position. The drive should then be able to write all 5 at once. At\n> least, theoretically...\n\nI think you mean this...\n\nhttp://www.postgresql.org/docs/8.1/static/runtime-config-wal.html\n\ncommit_delay (integer)\n\n Time delay between writing a commit record to the WAL buffer and \nflushing the buffer out to disk, in microseconds. A nonzero delay can \nallow multiple transactions to be committed with only one fsync() system \ncall, if system load is high enough that additional transactions become \nready to commit within the given interval. But the delay is just wasted \nif no other transactions become ready to commit. Therefore, the delay is \nonly performed if at least commit_siblings other transactions are active \nat the instant that a server process has written its commit record. The \ndefault is zero (no delay).\n\ncommit_siblings (integer)\n\n Minimum number of concurrent open transactions to require before \nperforming the commit_delay delay. A larger value makes it more probable \nthat at least one other transaction will become ready to commit during \nthe delay interval. The default is five.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 14 Mar 2006 21:37:33 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
{
"msg_contents": "On Tue, Mar 14, 2006 at 09:37:33PM +0000, Richard Huxton wrote:\n> >But shouldn't it be possible to batch up WAL writes and syncs? In other\n> >words, if you have 5 transactions that all COMMIT at exactly the same\n> >time, it should be possible to get all 5 WAL pages (I'll assume each\n> >one generated a small enough change so as not to require multiple WAL\n> >pages) to the drive before the platter comes around to the right\n> >position. The drive should then be able to write all 5 at once. At\n> >least, theoretically...\n> \n> I think you mean this...\n> \n> http://www.postgresql.org/docs/8.1/static/runtime-config-wal.html\n> \n> commit_delay (integer)\n\nNo, that's not what I mean at all. On a system doing a large number of\nWAL-generating transactions per second, it's certainly possible for\nmultiple transactions to commit in the period of time it takes for the\nplatter to rotate back into position to allow for writing of the WAL\ndata. What I don't know is if those multiple transactions would actually\nmake it to the platter on that rotation, or if they'd serialize,\nresulting in one commit per revolution. I do know that there's no\ntheoretical reason that they couldn't, it's just a matter of putting\nenough intelligence in the drive.\n\nPerhaps this is something that SCSI supports and (S)ATA doesn't, since\nSCSI allows multiple transactions to be 'in flight' on the bus at once.\n\nBut since you mention commit_delay, this does lead to an interesting\npossible use: set it equal to the effective rotational period of the\ndrive. If you know your transaction load well enough, you could possibly\ngain some benefit here. But of course a RAID controller with a BBU would\nbe a better bet...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 14 Mar 2006 16:08:18 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
},
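
To make that concrete: a 7200 RPM drive revolves once every 60/7200 s ≈ 8.3 ms, so a postgresql.conf sketch along these lines would hold a commit for roughly one revolution in the hope of piggybacking other commits onto the same fsync. The numbers are purely illustrative, not a recommendation, and as noted a battery-backed write cache is usually the better answer:

    # postgresql.conf -- illustrative values only
    commit_delay = 8300       # microseconds, about one revolution of a 7200 RPM disk
    commit_siblings = 3       # only wait if at least 3 other transactions are open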
{
"msg_contents": "Jim C. Nasby wrote:\n>> I think you mean this...\n>>\n>> http://www.postgresql.org/docs/8.1/static/runtime-config-wal.html\n>>\n>> commit_delay (integer)\n> \n> No, that's not what I mean at all. On a system doing a large number of\n> WAL-generating transactions per second, it's certainly possible for\n> multiple transactions to commit in the period of time it takes for the\n> platter to rotate back into position to allow for writing of the WAL\n> data. What I don't know is if those multiple transactions would actually\n> make it to the platter on that rotation, or if they'd serialize,\n> resulting in one commit per revolution. I do know that there's no\n> theoretical reason that they couldn't, it's just a matter of putting\n> enough intelligence in the drive.\n> \n> Perhaps this is something that SCSI supports and (S)ATA doesn't, since\n> SCSI allows multiple transactions to be 'in flight' on the bus at once.\n\nSCSI Command queueing:\nhttp://www.storagereview.com/guide2000/ref/hdd/if/scsi/protCQR.html\n\nSATA \"native command queuing\":\nhttp://www.tomshardware.com/2004/11/16/can_command_queuing_turbo_charge_sata/\n\n> But since you mention commit_delay, this does lead to an interesting\n> possible use: set it equal to the effective rotational period of the\n> drive. If you know your transaction load well enough, you could possibly\n> gain some benefit here. But of course a RAID controller with a BBU would\n> be a better bet...\n\nI suppose as long as you always have several transactions trying to \ncommit, have a separate spindle(s) for the WAL then you could improve \nthroughput at the cost of the shortest transaction times. Of course, it \nmight be that the increase in lock duration etc. might outweigh any \nbenefits. I'd suspect the cost/gain would be highly variable with \nchanges in workload, and as you say write-cache+BBU seems more sensible.\n\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 15 Mar 2006 09:26:09 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: x206-x225"
}
] |
[
{
"msg_contents": "Hello list.\n\nReading my own e-mail I notice I made a very important mistake.\n\nThe X206 has 1 x ATA 7200 RPM\nThe X226 has 2 x SCSI RAID1 10000RPM\n\nI corrected it below.\n\nSorry .\n\n\nHenk Sanders\n\n\n -----Oorspronkelijk bericht-----\nVan: [email protected]\n[mailto:[email protected]]Namens H.J. Sanders\nVerzonden: vrijdag 10 maart 2006 10:50\nAan: [email protected]\nOnderwerp: [PERFORM] x206-x225\n\n\n Hello list.\n\n We have compared 2 IBM x servers:\n\n IBM X206\nIBM X226\n ---------------------- \n -------------------\n processor Pentium 4 3.2 Ghz\nXeon 3.0 Ghz\n main memory 1.25 GB\n4 GB\n discs 1 x ATA 7200 RPM\n2 x SCSI RAID1 10000RPM\n\n LINUX 2.6 (SUSE 9)\nsame\n PGSQL 7.4\nsame\n postgresql.conf attached\nsame\n\n\n We have bij means of an informix-4GL program done the following test:\n\n\n create table : name char(18)\n adres char(20)\n key integer\n\n create index on (key)\n Ti\nme at X206 Time at X226\n --------------\n------ ------------------\n\n insert record (key goes from 1 to 10000) 6 sec.\n41 sec.\n select record (key goes from 1 to 10000) 4\n4\n delete record (key goes from 1 to 10000) 6\n41\n\n\n This is ofcourse a totally unexpected results (you should think off the\nopposite).\n\n Funny is that the select time is the same for both machines.\n\n Does anybody has any any idea what can cause this strange results or where\nwe\n can start our investigations?\n\n\n Regards\n\n\n Henk Sanders",
"msg_date": "Fri, 10 Mar 2006 10:59:46 +0100",
"msg_from": "\"H.J. Sanders\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: x206-x226"
}
] |
[
{
"msg_contents": "There is not possibility to use another database. It's the best option I \nhave seen. We have been working in postgres in last 3 years, and this is \nthe first problem I have seen. (The database is working in a large \nwebsite, 6.000 visits per day in a dedicated server)\n\nAny other idea?\n\n\nChethana, Rao (IE10) wrote:\n\n>USUALLY POSTGRES DATABASE TAKES MORE TIME, COMPARED TO OTHER DATABASES. \n>HOWEVER U CAN FINETUNE THE PERFORMANCE OF POSTGRESQL.\n>IF U HAVE AN OPTION GO FOR SQLITE, MYSQL OR FIREBIRD.\n>\n>-----Original Message-----\n>From: [email protected]\n>[mailto:[email protected]] On Behalf Of Ruben Rubio\n>Rey\n>Sent: Friday, March 10, 2006 2:06 AM\n>To: [email protected]\n>Subject: [PERFORM] Query time\n>\n>Hi,\n>\n>I think im specting problems with a 7.4.8 postgres database.\n>\n>Sometimes some big query takes between 5 to 15 seconds. It happens \n>sometimes all the day it does not depend if database is busy.\n>\n>I have measured that sentence in 15 - 70 ms in normal circunstances.\n>\n>Why sometimes its takes too much time?\n>How can I fix it?\n>Is a postgres version problem, database problem or query problem?\n>\n>Any ideas will be apreciatted.\n>\n>Ruben Rubio\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n>\n>\n> \n>\n\n",
"msg_date": "Fri, 10 Mar 2006 11:29:53 +0100",
"msg_from": "Ruben Rubio Rey <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query time"
},
{
"msg_contents": "On Fri, Mar 10, 2006 at 11:29:53AM +0100, Ruben Rubio Rey wrote:\n> There is not possibility to use another database. It's the best option I \n> have seen. We have been working in postgres in last 3 years, and this is \n> the first problem I have seen. (The database is working in a large \n> website, 6.000 visits per day in a dedicated server)\n> \n> Any other idea?\n> \n> \n> Chethana, Rao (IE10) wrote:\n> \n> >USUALLY POSTGRES DATABASE TAKES MORE TIME, COMPARED TO OTHER DATABASES. \n> >HOWEVER U CAN FINETUNE THE PERFORMANCE OF POSTGRESQL.\n> >IF U HAVE AN OPTION GO FOR SQLITE, MYSQL OR FIREBIRD.\n\nIf I were you I wouldn't believe any performance recommendations from\nsomeone who can't find their caps-lock key or spell \"you\".\n\nThe fact is, on any meaningful benchmark current versions of PostgreSQL\nare on par with other databases. Any benchmark that shows PostgreSQL to\nbe 'slow' is almost certain to be very old and/or does a very poor job\nof reflecting how client-server databases are normally used. The one\ncaveat is that PostgreSQL is often overkill for single user embedded\ndatabase type apps.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 10 Mar 2006 09:26:59 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query time"
}
] |
[
{
"msg_contents": "Hi,\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Tom Lane\n> Sent: Thursday, March 09, 2006 9:11 PM\n> To: Jan de Visser\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Hanging queries on dual CPU windows \n> \n> \n> Jan de Visser <[email protected]> writes:\n> > Furtermore, it does not happen on Linux machines, both \n> single CPU and dual \n> > CPU, nor on single CPU windows machines. We can only \n> reproduce on a dual CPU \n> > windows machine, and if we take one CPU out, it does not happen.\n> > ...\n> > Which showed me that several transactions where waiting for \n> a particular row \n> > which was locked by another transaction. This transaction \n> had no pending \n> > locks (so no deadlock), but just does not complete and hence never \n> > relinquishes the lock.\n> \n> Is the stuck transaction still consuming CPU time, or just stopped?\n> \n> Is it possible to get a stack trace from the stuck process? I dunno\n> if you've got anything gdb-equivalent under Windows, but that's the\n> first thing I'd be interested in ...\n\nDebugging Tools for Windows from Microsoft\nhttp://www.microsoft.com/whdc/devtools/debugging/installx86.mspx\n\nAdditinonally you need a symbol-file or you use\n\"SRV*c:\\debug\\symbols*http://msdl.microsoft.com/download/symbols\"\nto load the symbol-file dynamically from the net.\n\nBest regards\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n\n\n\nHakan Kocaman\nSoftware-Development\n\ndigame.de GmbH\nRichard-Byrd-Str. 4-8\n50829 Köln\n\nTel.: +49 (0) 221 59 68 88 31\nFax: +49 (0) 221 59 68 88 98\nEmail: [email protected]\n\n \n",
"msg_date": "Fri, 10 Mar 2006 15:32:19 +0100",
"msg_from": "\"Hakan Kocaman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hanging queries on dual CPU windows "
}
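
As a rough illustration of using that package to get the stack trace Tom asked for, with cdb (the console flavour of WinDbg); the PID and the local symbol cache path are placeholders:

    cdb -p 1234
    0:000> .sympath SRV*c:\debug\symbols*http://msdl.microsoft.com/download/symbols
    0:000> .reload
    0:000> ~* kb        (dump the stack of every thread in the backend)
    0:000> qd           (detach without killing the process)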
] |
[
{
"msg_contents": "> > > > I dunno\n> > > >\n> > > > > if you've got anything gdb-equivalent under Windows, \n> but that's \n> > > > > the first thing I'd be interested in ...\n> > > >\n> > > > Here ya go:\n> > > >\n> > > > http://www.devisser-siderius.com/stack1.jpg\n> > > > http://www.devisser-siderius.com/stack2.jpg\n> > > > http://www.devisser-siderius.com/stack3.jpg\n> > > >\n> > > > There are three threads in the process. I guess thread 1\n> > > > (stack1.jpg) is the most interesting.\n> > > >\n> > > > I also noted that cranking up concurrency in my app \n> reproduces the \n> > > > problem in about 4 minutes ;-)\n> >\n> > Just reproduced again.\n> >\n> > > Actually, stack2 looks very interesting. Does it \"stay stuck\" in \n> > > pg_queue_signal? That's really not supposed to happen.\n> >\n> > Yes it does.\n> \n> An update on that: There is actually *two* processes in this \n> state, both hanging in pg_queue_signal. I've looked at the \n> source of that, and the obvious candidate for hanging is \n> EnterCriticalSection. I also found this:\n> \n> http://blogs.msdn.com/larryosterman/archive/2005/03/02/383685.aspx\n> \n> where they say:\n> \n> \"\n> In addition, for Windows 2003, SP1, the EnterCriticalSection \n> API has a subtle change that's intended tor resolve many of \n> the lock convoy issues. Before\n> Win2003 SP1, if 10 threads were blocked on \n> EnterCriticalSection and all 10 threads had the same \n> priority, then EnterCriticalSection would service those \n> threads in a FIFO (first -in, first-out) basis. Starting in \n> Windows 2003 SP1, the EnterCriticalSection will wake up a \n> random thread from the waiting threads. If all the threads \n> are doing the same thing (like a thread pool) this won't make \n> much of a difference, but if the different threads are doing \n> different work (like the critical section protecting a widely \n> accessed object), this will go a long way towards removing \n> lock convoy semantics.\n> \"\n> \n> Could it be they broke it when they did that????\n\nIn theory, yes, but it still seems a bit far fetched :-(\n\nIf you have the env to rebuild, can you try changing the order of the lines:\n\tResetEvent(pgwin32_signal_event);\n\tLeaveCriticalSection(&pg_signal_crit_sec);\n\nin backend/port/win32/signal.c\n\n\nAnd if not, can you also try disabling the stats collector and see if that makes a difference. (Could be a workaround..)\n\n\n//Magnus\n",
"msg_date": "Fri, 10 Mar 2006 16:11:00 +0100",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hanging queries on dual CPU windows"
},
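
For reference, the experiment Magnus is asking for amounts to nothing more than swapping those two calls in backend/port/win32/signal.c (identifiers exactly as quoted in the message above):

    /* current order */
    ResetEvent(pgwin32_signal_event);
    LeaveCriticalSection(&pg_signal_crit_sec);

    /* order to test: leave the critical section before resetting the event */
    LeaveCriticalSection(&pg_signal_crit_sec);
    ResetEvent(pgwin32_signal_event);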
{
"msg_contents": "On Friday 10 March 2006 10:11, Magnus Hagander wrote:\n> > Could it be they broke it when they did that????\n>\n> In theory, yes, but it still seems a bit far fetched :-(\n\nWell, I rolled back SP1 and am running my test again. Looking much better, \nhasn't locked up in 45mins now, whereas before it would lock up within 5mins.\n\nSo I think they broke something.\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser [email protected]\n\n Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n",
"msg_date": "Fri, 10 Mar 2006 11:13:35 -0500",
"msg_from": "Jan de Visser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hanging queries on dual CPU windows"
},
{
"msg_contents": "Hello.\n\nRecently I've discovered an interesting thing (Postgres version 8.1.3):\n\nexample table:\n\nCREATE TABLE test (\n id INT,\n name TEXT,\n comment TEXT,\n phone TEXT,\n visible BOOLEAN\n);\n\nthen,\nCREATE INDEX i1 ON test(phone);\nCREATE INDEX i2 ON test(phone, visible);\nCREATE INDEX i3 ON test(phone, visible) WHERE visible;\n\nthen insert lot's of data\nand try to execute query like:\n\nSELECT * FROM test WHERE phone='12345' AND visible;\n\nuses index i1, and filters all visible fields.\nWhen I drop index i1, postgres starts to use index i2\nand the query began to work much more faster.\n\nWhen I drop index i2, postgres uses index i3 which is faster than i2 ofcourse.\n\nI've noticed that planner estimated all queries for all three cases with the same cost.\nSo, is it a planner bad estimate or what?\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n",
"msg_date": "Fri, 10 Mar 2006 19:45:45 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": false,
"msg_subject": "one-field index vs. multi-field index planner estimates"
},
{
"msg_contents": "Evgeny Gridasov <[email protected]> writes:\n> Recently I've discovered an interesting thing (Postgres version 8.1.3):\n\nHave you ANALYZEd the table since loading it? What fraction of the rows\nhave visible = true?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Mar 2006 12:09:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: one-field index vs. multi-field index planner estimates "
},
{
"msg_contents": "Tom,\n\nofcourse I've analyzed it.\nvisible is true for about 0.3% of all rows.\ntesting table contains about 300,000-500,000 rows.\n\nOn Fri, 10 Mar 2006 12:09:19 -0500\nTom Lane <[email protected]> wrote:\n\n> Evgeny Gridasov <[email protected]> writes:\n> > Recently I've discovered an interesting thing (Postgres version 8.1.3):\n> \n> Have you ANALYZEd the table since loading it? What fraction of the rows\n> have visible = true?\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n",
"msg_date": "Fri, 10 Mar 2006 20:28:23 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: one-field index vs. multi-field index planner"
},
{
"msg_contents": "Evgeny Gridasov <[email protected]> writes:\n> ofcourse I've analyzed it.\n> visible is true for about 0.3% of all rows.\n\nWell, I get an indexscan on i3 ... there isn't going to be any\nstrong reason for the planner to prefer i2 over i1, given that\nthe phone column is probably near-unique and the i2 index will be\nbigger than i1. I don't see why it wouldn't like i3 though. Could\nwe see the EXPLAIN ANALYZE results with and without i3?\n\nregression=# CREATE TABLE test (phone TEXT, visible BOOLEAN);\nCREATE TABLE\nregression=# insert into test select (z/2)::text,(z%1000)<=3 from generate_series(1,300000) z;\nINSERT 0 300000\nregression=# CREATE INDEX i1 ON test(phone);\nCREATE INDEX\nregression=# CREATE INDEX i2 ON test(phone, visible);\nCREATE INDEX\nregression=# CREATE INDEX i3 ON test(phone, visible) WHERE visible;\nCREATE INDEX\nregression=# analyze test;\nANALYZE\nregression=# explain SELECT * FROM test WHERE phone='12345' AND visible;\n QUERY PLAN\n----------------------------------------------------------------\n Index Scan using i3 on test (cost=0.00..5.82 rows=1 width=10)\n Index Cond: ((phone = '12345'::text) AND (visible = true))\n(2 rows)\n\nregression=# drop index i3;\nDROP INDEX\nregression=# explain SELECT * FROM test WHERE phone='12345' AND visible;\n QUERY PLAN\n----------------------------------------------------------------\n Index Scan using i2 on test (cost=0.00..5.82 rows=1 width=10)\n Index Cond: ((phone = '12345'::text) AND (visible = true))\n Filter: visible\n(3 rows)\n\nregression=#\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Mar 2006 13:58:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: one-field index vs. multi-field index planner "
}
] |
[
{
"msg_contents": " \nWe have large tables that hold statistics based on time. They are of the\nform.\n\nCREATE TABLE stats (\n id serial primary key,\n logtime timestamptz,\n d1 int,\n s1 bigint\n);\n\nCREATE INDEX idx on stats(logtime);\n\nSome of these tables have new data inserted at a rate of 500,000+ rows /\nhour. The entire table will grow to being 10's to 100's of millions of\nrows in size. (Yes, we are also paritioning these, it's the size of an\nindividual partition that we're talking about).\n\nWe tend to analyze these tables every day or so and this doesn't always\nprove to be sufficient....\n\nOur application is a reporting application and the end users typically\nlike to query the newest data the most. As such, the queries of the\nform...\n\n\nselect \n *\nfrom stats\ninner join dimension_d1 using (d1)\nwhere logtime between X and Y and d1.something = value; \n\nThis usually results in a hash join (good thing) where the dimension\ntable is loaded into the hash table and it index scans stats using idx\nindex.\n\nThe trouble starts when both X and Y are times \"after\" the last analyze.\nThis restriction clause is outside the range of values in the historgram\ncreated by the last analyze. Postgres's estimate on the number of rows\nreturned here is usually very low and incorrect, as you'd expect... \n\nTrouble can occur when the planner will \"flip\" its decision and decide\nto hash join by loading the results of the index scan on idx into the\nhash table instead of the dimension table.... \n\nSince the table is so large and the system is busy (disk not idle at\nall), doing an analyze on this table in the production system can take\n1/2 hour! (statistics collector set to 100). We can't \"afford\" to\nanalyze more often...\n\nIt certainly would be nice if postgres could understand somehow that\nsome columns are \"dynamic\" and that it's histogram could be stretched to\nthe maximal values or some other technique for estimating rows to the\nright of the range of values in the histogram...\n\nOr have some concept of error bars on it's planner decisions....\n\nSuggestions? Comments?\n\n\nMarc\n",
"msg_date": "Fri, 10 Mar 2006 12:30:41 -0500",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trouble managing planner for timestamptz columns"
},
{
"msg_contents": "\"Marc Morin\" <[email protected]> writes:\n> We tend to analyze these tables every day or so and this doesn't always\n> prove to be sufficient....\n\nSeems to me you just stated your problem. Instead of having the planner\nmake wild extrapolations, why not set up a cron job to analyze these\ntables more often? Or use autovacuum which will do it for you.\n\n> Since the table is so large and the system is busy (disk not idle at\n> all), doing an analyze on this table in the production system can take\n> 1/2 hour! (statistics collector set to 100).\n\nI'd believe that for vacuum analyze, but analyze alone should be cheap.\nHave you perhaps got some weird datatypes in the table? Maybe you\nshould back off the stats target a bit?\n\nWe do support analyzing selected columns, so you might try something\nlike a cron job analyzing only the timestamp column, with a suitably low\nstats target for that column. This would yield numbers far more\nreliable than any extrapolation the planner could do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Mar 2006 13:31:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trouble managing planner for timestamptz columns "
}
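A minimal sketch of the per-column approach suggested above, using the stats/logtime schema from the first message (the target value of 10 is only illustrative, not a figure recommended in the thread):

    -- Keep a small, cheap-to-build histogram for just the timestamp column...
    ALTER TABLE stats ALTER COLUMN logtime SET STATISTICS 10;
    -- ...and re-analyze only that column from a frequent cron job.
    ANALYZE stats (logtime);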
] |
[
{
"msg_contents": "> > > Could it be they broke it when they did that????\n> >\n> > In theory, yes, but it still seems a bit far fetched :-(\n> \n> Well, I rolled back SP1 and am running my test again. Looking \n> much better, hasn't locked up in 45mins now, whereas before \n> it would lock up within 5mins.\n> \n> So I think they broke something.\n\nWow. I guess I was lucky that I didn't say it was impossible :-)\n\n\nBut what really is happening. What other thread is actually holding the\ncritical section at this point, causing us to block? The only places it\ngets held is while looping the signal queue, but it is released while\ncalling the signal function itself...\n\nBut they obviously *have* been messing with critical sections, so maybe\nthey accidentally changed something else as well...\n\nWhat bothers me is that nobody else has reported this. It could be that\nthis was exposed by the changes to the signal handling done for 8.1, and\nthe ppl with this level of concurrency are either still on 8.0 or just\nnot on SP1 for their windows boxes yet... Do you have any other software\ninstalled on the machine? That might possibly interfere in some way?\n\nBut let's have it run for a bit longer to confirm this does help. If so,\nwe could perhaps recode that part using a Mutex instead of a critical\nsection - since it's not a performance critical path, the difference\nshouldn't be large. If I code up a patch for that, can you re-apply SP1\nand test it? Or is this a production system you can't really touch?\n\n//Magnus\n",
"msg_date": "Fri, 10 Mar 2006 19:25:57 +0100",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hanging queries on dual CPU windows"
},
{
"msg_contents": "On Friday 10 March 2006 13:25, Magnus Hagander wrote:\n> > > > Could it be they broke it when they did that????\n> > >\n> > > In theory, yes, but it still seems a bit far fetched :-(\n> >\n> > Well, I rolled back SP1 and am running my test again. Looking\n> > much better, hasn't locked up in 45mins now, whereas before\n> > it would lock up within 5mins.\n> >\n> > So I think they broke something.\n>\n> Wow. I guess I was lucky that I didn't say it was impossible :-)\n>\n>\n> But what really is happening. What other thread is actually holding the\n> critical section at this point, causing us to block? The only places it\n> gets held is while looping the signal queue, but it is released while\n> calling the signal function itself...\n>\n> But they obviously *have* been messing with critical sections, so maybe\n> they accidentally changed something else as well...\n>\n> What bothers me is that nobody else has reported this. It could be that\n> this was exposed by the changes to the signal handling done for 8.1, and\n> the ppl with this level of concurrency are either still on 8.0 or just\n> not on SP1 for their windows boxes yet... Do you have any other software\n> installed on the machine? That might possibly interfere in some way?\n\nJust a JDK, JBoss, cygwin (running sshd), and a VNC Server. I don't think that \ninterferes.\n\n>\n> But let's have it run for a bit longer to confirm this does help. \n\nI turned it off after 2.5hr. The longest I had to wait before, with less load, \nwas 1.45hr.\n\n> If so, \n> we could perhaps recode that part using a Mutex instead of a critical\n> section - since it's not a performance critical path, the difference\n> shouldn't be large. If I code up a patch for that, can you re-apply SP1\n> and test it? Or is this a production system you can't really touch?\n\nI can do whatever the hell I want with it, so if you could cook up a patch \nthat would be great.\n\nAs a BTW: I reinstalled SP1 and turned stats collection off. That also seems \nto work, but is not really a solution since we want to use autovacuuming.\n\n>\n> //Magnus\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser [email protected]\n\n Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n",
"msg_date": "Fri, 10 Mar 2006 14:27:39 -0500",
"msg_from": "Jan de Visser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hanging queries on dual CPU windows"
},
{
"msg_contents": "On Friday 10 March 2006 14:27, Jan de Visser wrote:\n> As a BTW: I reinstalled SP1 and turned stats collection off. That also\n> seems to work, but is not really a solution since we want to use\n> autovacuuming.\n\nI lied. I hangs now. Just takes a lot longer...\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser [email protected]\n\n Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n",
"msg_date": "Fri, 10 Mar 2006 14:37:13 -0500",
"msg_from": "Jan de Visser <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hanging queries on dual CPU windows"
}
] |
[
{
"msg_contents": "Well this analyze just took 12 minutes... Stats target of 100.\n\n# time psql xxx xxx -c \"analyze elem_trafficstats_1\"\nANALYZE\n\nreal 12m1.070s\nuser 0m0.001s\nsys 0m0.015s \n\n\nA large table, but by far, not the largest... Have about 1 dozen or so\ntables like this, so analyzing them will take 3-4 hours of time... No\nweird datatypes, just bigints for facts, timestamptz and ints for\ndimensions.\n\nMy problem is not the analyze itself, it's the fact that our db is\nreally busy doing stuff.... Analyze I/O is competing... I am random I/O\nbound like crazy.\n\nIf I set the stats target to 10, I get\n\n# time psql xxxx xxx -c \"set session default_statistics_target to\n10;analyze elem_trafficstats_1\"\nANALYZE\n\nreal 2m15.733s\nuser 0m0.009s\nsys 0m2.255s \n\nBetter, but not sure what side affect this would have.\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Friday, March 10, 2006 1:31 PM\n> To: Marc Morin\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Trouble managing planner for \n> timestamptz columns \n> \n> \"Marc Morin\" <[email protected]> writes:\n> > We tend to analyze these tables every day or so and this doesn't \n> > always prove to be sufficient....\n> \n> Seems to me you just stated your problem. Instead of having \n> the planner make wild extrapolations, why not set up a cron \n> job to analyze these tables more often? Or use autovacuum \n> which will do it for you.\n> \n> > Since the table is so large and the system is busy (disk \n> not idle at \n> > all), doing an analyze on this table in the production \n> system can take\n> > 1/2 hour! (statistics collector set to 100).\n> \n> I'd believe that for vacuum analyze, but analyze alone should \n> be cheap.\n> Have you perhaps got some weird datatypes in the table? \n> Maybe you should back off the stats target a bit?\n> \n> We do support analyzing selected columns, so you might try \n> something like a cron job analyzing only the timestamp \n> column, with a suitably low stats target for that column. \n> This would yield numbers far more reliable than any \n> extrapolation the planner could do.\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Fri, 10 Mar 2006 16:47:08 -0500",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trouble managing planner for timestamptz columns "
},
{
"msg_contents": "\"Marc Morin\" <[email protected]> writes:\n> Well this analyze just took 12 minutes... Stats target of 100.\n> # time psql xxx xxx -c \"analyze elem_trafficstats_1\"\n\nTry analyzing just the one column, and try reducing its stats target to\n10. It does make a difference:\n\nsorttest=# set default_statistics_target TO 100;\nSET\nTime: 0.382 ms\nsorttest=# analyze verbose d10;\nINFO: analyzing \"public.d10\"\nINFO: \"d10\": scanned 30000 of 833334 pages, containing 3600000 live rows and 0 dead rows; 30000 rows in sample, 100000080 estimated total rows\nANALYZE\nTime: 137186.347 ms\nsorttest=# set default_statistics_target TO 10;\nSET\nTime: 0.418 ms\nsorttest=# analyze verbose d10(col1);\nINFO: analyzing \"public.d10\"\nINFO: \"d10\": scanned 3000 of 833334 pages, containing 360000 live rows and 0 dead rows; 3000 rows in sample, 100000080 estimated total rows\nANALYZE\nTime: 17206.018 ms\nsorttest=#\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 10 Mar 2006 18:40:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trouble managing planner for timestamptz columns "
}
] |
[
{
"msg_contents": "Hi.\n\nI am new to postgres and i need help from u.i hope i get positive response.. though my questions mite seem silly to u...\n\niam working on postgres.. i have around 1 lakh records in almost 12 tables..\n1 ) when i try to query for count or for any thg it takes a long time to return the result. How to avoid this\n\n2) also i want to know how to increase the performance..( i do vacuum once in a day)\n\n3) apart from that iam connecting to it through asp.net.. so when i try to fetch rcords the connection breaks.. \nhow to avoid this..(very immp)\n\n4) also in the tables i put a column -- serial id . so when i try to insert new records after deleting the records(lets say at that time the last sequence number was 100).. when i insert new record it will start with 101..\nsuppose the sequence number reaches its maximum limit can i use the previous 1-100 values, by using the cycled option.\n\nThks..\n\n\n\n\n\n\n \nHi.\n\nI am new to postgres and i need help from u.i hope i get positive response.. though my questions mite seem silly to u...\n\niam working on postgres.. i have around 1 lakh records in almost 12 tables..\n1 ) when i try to query for count or for any thg it takes a long time to return the result. How to avoid this\n\n2) also i want to know how to increase the performance..( i do vacuum once in a day)\n\n3) apart from that iam connecting to it through asp.net.. so when i try to fetch rcords the connection breaks.. \nhow to avoid this..(very immp)\n\n4) also in the tables i put a column -- serial id . so when i try to insert new records after deleting the records(lets say at that time the last sequence number was 100).. when i insert new record it will start with 101..\nsuppose the sequence number reaches its maximum limit can i use the previous 1-100 values, by using the cycled option.\n\nThks..",
"msg_date": "12 Mar 2006 11:46:25 -0000",
"msg_from": "\"Phadnis\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "help needed asap...."
},
{
"msg_contents": "On 12 Mar 2006 11:46:25 -0000, Phadnis <[email protected]> wrote:\n> Hi.\n>\n> I am new to postgres and i need help from u.i hope i get positive response.. though my questions mite seem silly to u...\n>\n> iam working on postgres.. i have around 1 lakh records in almost 12 tables..\n> 1 ) when i try to query for count or for any thg it takes a long time to return the result. How to avoid this\n>\n> 2) also i want to know how to increase the performance..( i do vacuum once in a day)\n>\n\nThese two questions are applicable to this list... your other\nquestions may get quicker responses on the users list.\n\nHowever, you haven't provided enough information for anyone here to\nhelp. Here's what you should do:\n\nFind queries that you think should be faster than they are. For\nexample, if your query is \"Select count(*) from foo\" you can get\nimportant performance information about the query by running:\nEXPLAIN ANALYZE select count(*) from foo\n\nSend the details of the query, including the output from the explain\nanalyze output (which looks pretty meaningless until you've learned\nwhat to look for) to the list with a detailed question.\n\nAlso, for general performance hints, tell the list what your setup is,\nwhat items you've tweaked (and maybe why).\n\nGenerally, be as generous with your details as you can. Also, have you\ngoogled around for hints? Here's a good website with information:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\nNotice there's a section on performance tips.\n\nAlso, this list works because volunteers who have knowledge and free\ntime choose to help when they can. If you really need answers ASAP,\nthere are a few organizations who provide paid support.\n\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Sun, 12 Mar 2006 12:38:27 -0600",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help needed asap...."
},
{
"msg_contents": "On Sun, Mar 12, 2006 at 11:46:25 -0000,\n Phadnis <[email protected]> wrote:\n> �\n> 1 ) when i try to query for count or for any thg it takes a long time to return the result. How to avoid this\n\nPostgres doesn't cache counts, so if you are counting a lot of records, this\nmay take a while to run. If you do a lot of counts or need them to be fast\neven if it slows other things down, there are some things you can do to address\nthis. Several strategies have been repeatedly discussed in the archives.\n",
"msg_date": "Sun, 12 Mar 2006 15:00:51 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help needed asap...."
}
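One of the strategies alluded to above for cheap counts is a counter table maintained by triggers. A rough sketch, assuming a hypothetical table named mytable (all names here are invented for illustration):

    CREATE TABLE mytable_count (n bigint NOT NULL);
    INSERT INTO mytable_count SELECT count(*) FROM mytable;

    -- Trigger function keeps the counter in step with inserts and deletes.
    CREATE OR REPLACE FUNCTION mytable_count_trig() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE mytable_count SET n = n + 1;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE mytable_count SET n = n - 1;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER mytable_count_t AFTER INSERT OR DELETE ON mytable
        FOR EACH ROW EXECUTE PROCEDURE mytable_count_trig();

    -- count(*) then becomes a single-row read:
    SELECT n FROM mytable_count;

Note the trade-off: every writer now updates the same counter row, so this suits read-mostly tables far better than heavily concurrent ones.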
] |
[
{
"msg_contents": "If I only insert data into a table, never update or delete, then I should never have to vacuum it. Is that correct?\n\nThanks,\nCraig\n",
"msg_date": "Mon, 13 Mar 2006 07:02:33 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "No vacuum for insert-only database?"
},
{
"msg_contents": "Craig A. James wrote:\n> If I only insert data into a table, never update or delete, then I should \n> never have to vacuum it. Is that correct?\n\nYou still need to vacuum eventually, to avoid transaction Id wraparound\nissues. But not as often.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 13 Mar 2006 11:09:49 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No vacuum for insert-only database?"
},
{
"msg_contents": "Alvaro Herrera wrote:\n>>If I only insert data into a table, never update or delete, then I should \n>>never have to vacuum it. Is that correct?\n> \n> You still need to vacuum eventually, to avoid transaction Id wraparound\n> issues. But not as often.\n\nThanks. Any suggestions for what \"not as often\" means? For example, if my database will never contain more than 10 million rows, is that a problem? 100 million rows? When does transaction ID wraparound become a problem?\n\nCraig\n",
"msg_date": "Mon, 13 Mar 2006 09:19:32 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: No vacuum for insert-only database?"
},
{
"msg_contents": "Craig A. James wrote:\n> Alvaro Herrera wrote:\n> >>If I only insert data into a table, never update or delete, then I should \n> >>never have to vacuum it. Is that correct?\n> >\n> >You still need to vacuum eventually, to avoid transaction Id wraparound\n> >issues. But not as often.\n> \n> Thanks. Any suggestions for what \"not as often\" means? For example, if my \n> database will never contain more than 10 million rows, is that a problem? \n> 100 million rows? When does transaction ID wraparound become a problem?\n\nTransaction ID wraparound will be a problem at a bit less than 2 billion\ntransactions. So if you vacuum the table every 1 billion transactions\nyou are safe. I suggest you read the \"routine maintenance\" section in\nthe docs; the wraparound issue is explained there.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Mon, 13 Mar 2006 13:41:35 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No vacuum for insert-only database?"
},
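For reference, a quick way to see how far each database is from that limit (this is the same catalog query that comes up in a later thread about vacuuming the template databases):

    -- age() is measured in transactions; values creeping toward ~2 billion need a vacuum.
    SELECT datname, age(datfrozenxid)
    FROM pg_database
    ORDER BY age(datfrozenxid) DESC;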
{
"msg_contents": "On Mon, Mar 13, 2006 at 09:19:32 -0800,\n \"Craig A. James\" <[email protected]> wrote:\n> Alvaro Herrera wrote:\n> >>If I only insert data into a table, never update or delete, then I should \n> >>never have to vacuum it. Is that correct?\n> >\n> >You still need to vacuum eventually, to avoid transaction Id wraparound\n> >issues. But not as often.\n> \n> Thanks. Any suggestions for what \"not as often\" means? For example, if my \n> database will never contain more than 10 million rows, is that a problem? \n> 100 million rows? When does transaction ID wraparound become a problem?\n\nI believe it is at billion (10^9).\n",
"msg_date": "Mon, 13 Mar 2006 12:52:08 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No vacuum for insert-only database?"
},
{
"msg_contents": "Craig,\n\n> Transaction ID wraparound will be a problem at a bit less than 2 billion\n> transactions. So if you vacuum the table every 1 billion transactions\n> you are safe. I suggest you read the \"routine maintenance\" section in\n> the docs; the wraparound issue is explained there.\n\nFor reference, we calculated on a data warehouse with about 700 million \nrows in the main fact table that we had 6 years until XID wraparound. \nMind you, that's partly because all of our rows were inserted in large \nbatches of 100,000 rows each.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 13 Mar 2006 11:16:53 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: No vacuum for insert-only database?"
}
] |
[
{
"msg_contents": "Hello,\nAttached is the text file containing the last rounds of configurations.\nThis time, used \"show all\" just before issuing each relevant \"explain analyze\"\nto ensure available information.\nNote that the last runs are being executed concurrently with other problematic\nquery that is consuming 100% cpu for HOURS.\nSome people suggested to reduce shared buffers, but for few users (1 or 2\nsimultaneously for this app, as my friend told me), it could be large.\n\nhttp://candle.pha.pa.us/main/writings/pgsql/hw_performance/\nbrought some light over the subject. For few users, could be a viable alternative.\n\nBut, despite these huge improvements in speed, there are other problematic\nqueries with postgresql.\nOne of them is:\n\nselect count(distinct NF.ID_NF ) as contagem, DE.AM_REFERENCIA as campo\nfrom DECLARACAO DE inner join CADASTRO CAD on\n(CAD.ID_DECLARACAO=DE.ID_DECLARACAO)\ninner join NOTA_FISCAL NF on (NF.ID_CADASTRO=CAD.ID_CADASTRO)\ninner join EMPRESA EMP on (EMP.ID_EMPRESA=DE.ID_EMPRESA)\ninner join ARQUIVO_PROCESSADO ARQ on (ARQ.ID_ARQUIVO=DE.ID_ARQUIVO)\ngroup by DE.AM_REFERENCIA\norder by DE.AM_REFERENCIA\n\nfirebird windows executed in 1min30s\npostgresql windows is running for 3 hours and still not finished.\n\nI already know that count() is VERY performance problematic in postgresql.\nIs there a way to work around this?\nUnfortunately, the deadline for my friend project is approaching and he is\ngiving up postgresql for firebird. \nIf some work around is available, he will give another try. But i already saw\nthat count and joins are still problem.\nHe asked me if other people are struggling with poor performance and wondered\nif all other users are issuing simple queries only.\nAny suggestions?\nThanks .\nAndre Felipe Machado",
"msg_date": "Mon, 13 Mar 2006 15:11:54 -0300",
"msg_from": "\"andremachado\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "Andre,\n\n> http://candle.pha.pa.us/main/writings/pgsql/hw_performance/\n> brought some light over the subject. For few users, could be a viable\n> alternative.\n\nThat article is very old. Read this instead:\nhttp://www.powerpostgresql.com/PerfList\n\n> select count(distinct NF.ID_NF ) as contagem, DE.AM_REFERENCIA as campo\n> from DECLARACAO DE inner join CADASTRO CAD on\n> (CAD.ID_DECLARACAO=DE.ID_DECLARACAO)\n> inner join NOTA_FISCAL NF on (NF.ID_CADASTRO=CAD.ID_CADASTRO)\n> inner join EMPRESA EMP on (EMP.ID_EMPRESA=DE.ID_EMPRESA)\n> inner join ARQUIVO_PROCESSADO ARQ on (ARQ.ID_ARQUIVO=DE.ID_ARQUIVO)\n> group by DE.AM_REFERENCIA\n> order by DE.AM_REFERENCIA\n>\n> firebird windows executed in 1min30s\n> postgresql windows is running for 3 hours and still not finished.\n\nHow about an EXPLAIN?\n\nAnd, did you run ANALYZE on the data?\n\n> I already know that count() is VERY performance problematic in\n> postgresql. Is there a way to work around this?\n> Unfortunately, the deadline for my friend project is approaching and he\n> is giving up postgresql for firebird.\n> If some work around is available, he will give another try. But i\n> already saw that count and joins are still problem.\n> He asked me if other people are struggling with poor performance and\n> wondered if all other users are issuing simple queries only.\n\nNo, actually we excel at complex queries. Some of the data warehousing \nstuff I run involves queries more than a page long. Either you're \nhitting some Windows-specific problem, or you still have some major basic \ntuning issues.\n\nThat being said, there's nothing wrong with Firebird if he wants to use it.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 13 Mar 2006 11:15:00 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "On Mon, 2006-03-13 at 12:11, andremachado wrote:\n> Hello,\n> Attached is the text file containing the last rounds of configurations.\n> This time, used \"show all\" just before issuing each relevant \"explain analyze\"\n> to ensure available information.\n> Note that the last runs are being executed concurrently with other problematic\n> query that is consuming 100% cpu for HOURS.\n> Some people suggested to reduce shared buffers, but for few users (1 or 2\n> simultaneously for this app, as my friend told me), it could be large.\n> \n> http://candle.pha.pa.us/main/writings/pgsql/hw_performance/\n> brought some light over the subject. For few users, could be a viable alternative.\n> \n> But, despite these huge improvements in speed, there are other problematic\n> queries with postgresql.\n> One of them is:\n> \n> select count(distinct NF.ID_NF ) as contagem, DE.AM_REFERENCIA as campo\n> from DECLARACAO DE inner join CADASTRO CAD on\n> (CAD.ID_DECLARACAO=DE.ID_DECLARACAO)\n> inner join NOTA_FISCAL NF on (NF.ID_CADASTRO=CAD.ID_CADASTRO)\n> inner join EMPRESA EMP on (EMP.ID_EMPRESA=DE.ID_EMPRESA)\n> inner join ARQUIVO_PROCESSADO ARQ on (ARQ.ID_ARQUIVO=DE.ID_ARQUIVO)\n> group by DE.AM_REFERENCIA\n> order by DE.AM_REFERENCIA\n> \n> firebird windows executed in 1min30s\n> postgresql windows is running for 3 hours and still not finished.\n> \n> I already know that count() is VERY performance problematic in postgresql.\n> Is there a way to work around this?\n\nWell, it's not uncommon in mvcc databases. My testing against Oracle\n9.x series showed little difference on similar machines. In fact, my\nworkstation running PostgreSQL was faster at count() queries than our\nold Sun 420 running Oracle, which has much more memory.\n\nCan we see an explain output and schema (if needed) for this query? \nJust plain explain, not analyze, since, like you said, it's been running\nfor hours.\n\nI'd like to just add, that if you use any database long enough, you'll\neventually come up with queries that it runs slow on that other\ndatabases run quickly on. It's just the nature of the beast. That\nsaid, I've never seen a team work so hard to fix poorly performing\nqueries as the guys that write PostgreSQL. If there's a natural, basic\nfix to the problem, you'll see it pretty quick, whether that be in the\nquery itself, the planner, or the execution of the query. And if it's\njust not possible in PostgreSQL, you'll usually hear that pretty quick\ntoo.\n",
"msg_date": "Mon, 13 Mar 2006 14:02:26 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance"
}
] |
[
{
"msg_contents": "Good evening,\n\nDoes anyone know how much of a performance hit turning stats_block_level and\nstats_row_level on will incur? Do both need to be on to gather cache\nrelated statistics? I know the annotated_conf_80 document states to only\nturn them on for debug but if they're not that performance intensive I\ncannot see the harm.\n\nThank you,\nTim McElroy\n\n\n\n\n\n\nPG Statistics\n\n\nGood evening,\n\nDoes anyone know how much of a performance hit turning stats_block_level and stats_row_level on will incur? Do both need to be on to gather cache related statistics? I know the annotated_conf_80 document states to only turn them on for debug but if they're not that performance intensive I cannot see the harm.\nThank you,\nTim McElroy",
"msg_date": "Mon, 13 Mar 2006 18:49:39 -0500",
"msg_from": "\"mcelroy, tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG Statistics"
},
{
"msg_contents": "On Mon, Mar 13, 2006 at 06:49:39PM -0500, mcelroy, tim wrote:\n> Does anyone know how much of a performance hit turning stats_block_level and\n> stats_row_level on will incur? Do both need to be on to gather cache\n> related statistics? I know the annotated_conf_80 document states to only\n> turn them on for debug but if they're not that performance intensive I\n> cannot see the harm.\n\nI ran some tests a few months ago and found that stats_command_string\nhad a significant impact, whereas stats_block_level and stats_row_level\nwere almost negligible. Here are my test results:\n\nhttp://archives.postgresql.org/pgsql-performance/2005-12/msg00307.php\n\nYour results may vary. If you see substantially different results\nthen please post the particulars.\n\n-- \nMichael Fuhr\n",
"msg_date": "Mon, 13 Mar 2006 17:18:53 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG Statistics"
},
{
"msg_contents": "Tim,\n\nWhen I have done ODBC load tests with stats_block_level enabled on (20\nmins. per test), I've seen about 3-4% performance hit. Your mileage may\nvary.\n\nSteve Poe\n\nOn Mon, 2006-03-13 at 18:49 -0500, mcelroy, tim wrote:\n> Good evening,\n> \n> Does anyone know how much of a performance hit turning\n> stats_block_level and stats_row_level on will incur? Do both need to\n> be on to gather cache related statistics? I know the\n> annotated_conf_80 document states to only turn them on for debug but\n> if they're not that performance intensive I cannot see the harm.\n> \n> Thank you, \n> Tim McElroy\n> \n\n",
"msg_date": "Mon, 13 Mar 2006 16:37:51 -0800",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG Statistics"
}
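For context on what the block-level statistics actually buy you: once stats_block_level is on, the pg_statio views start accumulating buffer-cache hit/read counts. A sketch of the kind of query they enable (standard 8.x view and column names):

    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC;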
] |
[
{
"msg_contents": "Hi all,\n\nI'm trying to work out why my 8.1 system is slower than my 7.4 system \nfor importing data.\n\nThe import is a lot of \"insert into\" commands - it's a converted \ndatabase from another system so I can't change it to copy commands.\n\n\nMy uncommented config options:\n\n\nautovacuum = off\n\nbgwriter_all_maxpages = 15\nbgwriter_all_percent = 10.0\nbgwriter_delay = 2000\nbgwriter_lru_maxpages = 10\nbgwriter_lru_percent = 5.0\n\ncheckpoint_segments = 10\n\ncommit_delay = 100000\ncommit_siblings = 500\n\ntemp_buffers = 500\n\nwal_buffers = 16\n\nmax_connections = 16\n\nshared_buffers = 256\n\n\n(I was playing around with the bgwriter stuff to see if it made any \ndifferences, so I could be making it worse).\n\nIt's a pretty small machine - 2.6GHz with 512M RAM.\n\nMy main concern is 7.4 on a smaller machine with less memory is faster \nto import this data.\n\n\nSuggestions on what I need to do would be fantastic, thanks!\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 14 Mar 2006 11:40:11 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "import performance"
},
{
"msg_contents": "On Tue, 14 Mar 2006, Chris wrote:\n\n> Hi all,\n>\n> I'm trying to work out why my 8.1 system is slower than my 7.4 system\n> for importing data.\n>\n> The import is a lot of \"insert into\" commands - it's a converted\n> database from another system so I can't change it to copy commands.\n>\n>\n> My uncommented config options:\n>\n>\n> autovacuum = off\n>\n> bgwriter_all_maxpages = 15\n> bgwriter_all_percent = 10.0\n\nThe above is a bit high.\n\n> bgwriter_delay = 2000\n\nThis too.\n\n> bgwriter_lru_maxpages = 10\n> bgwriter_lru_percent = 5.0\n>\n> checkpoint_segments = 10\n>\n> commit_delay = 100000\n> commit_siblings = 500\n\nWay too high\n\n>\n> temp_buffers = 500\n>\n> wal_buffers = 16\n\nMake this at least 64.\n\n>\n> max_connections = 16\n>\n> shared_buffers = 256\n\nMake this higher too. If this is a dedicated machine with 512 MB of ram,\nset it to something like 125000.\n\nYou may need to adjust shared memory settings for your operating system.\nSee the manual for details.\n\nThanks,\n\nGavin\n",
"msg_date": "Tue, 14 Mar 2006 11:48:28 +1100 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: import performance"
},
{
"msg_contents": "[Snip]\n> >\n> > shared_buffers = 256\n> \n> Make this higher too. If this is a dedicated machine with 512 MB of\nram,\n> set it to something like 125000.\n> \n> You may need to adjust shared memory settings for your operating\nsystem.\n> See the manual for details.\n> \n\nWhoa. Maybe I'm wrong, but isn't each buffer 8192 bytes? So you are\nsuggesting that he set his shared buffers to a gigabyte on a machine\nwith 512 MB of ram? Or was that just a miscalculation?\n\nDave\n\n",
"msg_date": "Mon, 13 Mar 2006 18:57:05 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: import performance"
},
{
"msg_contents": "On Mon, 13 Mar 2006, Dave Dutcher wrote:\n\n> [Snip]\n> > >\n> > > shared_buffers = 256\n> >\n> > Make this higher too. If this is a dedicated machine with 512 MB of\n> ram,\n> > set it to something like 125000.\n> >\n> > You may need to adjust shared memory settings for your operating\n> system.\n> > See the manual for details.\n> >\n>\n> Whoa. Maybe I'm wrong, but isn't each buffer 8192 bytes? So you are\n> suggesting that he set his shared buffers to a gigabyte on a machine\n> with 512 MB of ram? Or was that just a miscalculation?\n\nOne to many zeros. Oops.\n\nGavin\n",
"msg_date": "Tue, 14 Mar 2006 12:00:19 +1100 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: import performance"
},
{
"msg_contents": "Gavin Sherry wrote:\n> On Tue, 14 Mar 2006, Chris wrote:\n> \n> \n>>Hi all,\n>>\n>>I'm trying to work out why my 8.1 system is slower than my 7.4 system\n>>for importing data.\n>>\n>>The import is a lot of \"insert into\" commands - it's a converted\n>>database from another system so I can't change it to copy commands.\n>>\n\n<snip>\n\nnew config variables...\n\nautovacuum = off\n\nbgwriter_all_maxpages = 15\nbgwriter_all_percent = 2.0\nbgwriter_delay = 500\nbgwriter_lru_maxpages = 10\nbgwriter_lru_percent = 5.0\n\ncheckpoint_segments = 10\ncheckpoint_timeout = 300\n\ncommit_delay = 10000\ncommit_siblings = 10\n\nfsync = on\n\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\n\nlog_connections = on\nlog_destination = 'syslog'\nlog_disconnections = on\nlog_duration = on\nlog_statement = 'all'\n\nmax_connections = 16\n\nredirect_stderr = on\n\nshared_buffers = 12500\n\nsilent_mode = off\n\nstats_command_string = off\n\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n\ntemp_buffers = 500\n\nwal_buffers = 256\n\n\nI changed a couple of things and restarted postgres before trying again. \nStill getting pretty insert times :(\n\nINSERT 0 1\nTime: 1251.956 ms\nINSERT 0 1\nTime: 700.244 ms\nINSERT 0 1\nTime: 851.254 ms\nINSERT 0 1\nTime: 407.725 ms\nINSERT 0 1\nTime: 267.881 ms\nINSERT 0 1\nTime: 575.834 ms\nINSERT 0 1\nTime: 371.914 ms\nINSERT 0 1\n\n\nThe table schema is bare:\n\nCREATE TABLE ArticleLive_articlepages (\n PageID serial not null,\n ArticleID integer default '0',\n SortOrderID integer default '0',\n Title varchar(100) NOT NULL default '',\n Content text,\n PRIMARY KEY (PageID)\n);\n\n(I know the fields will be lowercased...).\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 14 Mar 2006 12:24:22 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: import performance"
},
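Not raised in this thread, but a standard workaround for dump files made of many individual INSERT statements is to batch them inside explicit transactions, so the per-statement commit (and its fsync) is paid once per batch instead of once per row. A sketch against the table above (values are placeholders):

    BEGIN;
    INSERT INTO articlelive_articlepages (articleid, sortorderid, title, content)
        VALUES (1, 1, 'Page one', '...');
    INSERT INTO articlelive_articlepages (articleid, sortorderid, title, content)
        VALUES (1, 2, 'Page two', '...');
    -- ... a few thousand more rows ...
    COMMIT;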
{
"msg_contents": "On Tue, 14 Mar 2006 12:24:22 +1100\nChris <[email protected]> wrote:\n\n> Gavin Sherry wrote:\n> > On Tue, 14 Mar 2006, Chris wrote:\n> > \n> > \n> >>Hi all,\n> >>\n> >>I'm trying to work out why my 8.1 system is slower than my 7.4\n> >>system for importing data.\n> >>\n> >>The import is a lot of \"insert into\" commands - it's a converted\n> >>database from another system so I can't change it to copy commands.\n\n Are you on the same hardware specifically in your disk subsystem? \n Anything else different about how the two servers are used? \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Mon, 13 Mar 2006 19:33:18 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: import performance"
},
{
"msg_contents": "Frank Wiles wrote:\n> On Tue, 14 Mar 2006 12:24:22 +1100\n> Chris <[email protected]> wrote:\n> \n> \n>>Gavin Sherry wrote:\n>>\n>>>On Tue, 14 Mar 2006, Chris wrote:\n>>>\n>>>\n>>>\n>>>>Hi all,\n>>>>\n>>>>I'm trying to work out why my 8.1 system is slower than my 7.4\n>>>>system for importing data.\n>>>>\n>>>>The import is a lot of \"insert into\" commands - it's a converted\n>>>>database from another system so I can't change it to copy commands.\n> \n> \n> Are you on the same hardware specifically in your disk subsystem? \n> Anything else different about how the two servers are used? \n\nDifferent hardware.\n\n7.4 is running on a 500MHz computer with 256M compared to 8.1 running on \na 2.6GHz with 512M.\n\nThe only notable config variables on that machine (the rest are logging):\n\ncommit_delay = 10000\n\ncheckpoint_segments = 10\ncheckpoint_warning = 300\n\ninsert times:\n\nTime: 63.756 ms\nINSERT 13584074 1\nTime: 46.465 ms\nINSERT 13584075 1\nTime: 70.518 ms\nINSERT 13584077 1\nTime: 59.864 ms\nINSERT 13584078 1\nTime: 35.984 ms\n\nTons of difference :/\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 14 Mar 2006 12:42:21 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: import performance"
},
{
"msg_contents": "Chris <[email protected]> writes:\n> Tons of difference :/\n\nHave you checked that the I/O performance is comparable? It seems\npossible that there's something badly misconfigured about the disks\non your new machine. Benchmarking with \"bonnie\" or some such would\nbe useful; also try looking at \"iostat 1\" output while running the\ninserts on both machines.\n\nAlso, are the inserts just trivial \"insert values (... some constants ...)\"\nor is there more to it than that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 13 Mar 2006 20:51:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: import performance "
},
{
"msg_contents": "On Tue, 14 Mar 2006 12:42:21 +1100\nChris <[email protected]> wrote:\n\n> Different hardware.\n> \n> 7.4 is running on a 500MHz computer with 256M compared to 8.1 running\n> on a 2.6GHz with 512M.\n\n Well when it comes to inserts CPU and RAM have almost nothing to do \n with it. What are the hard disk differences? Does the old server\n have fast SCSI disk and the new box SATA? Or the old server was\n on a RAID volume and the new one isn't, etc... those are the sort\n of hardware differences that are important in this particular\n case. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Mon, 13 Mar 2006 19:57:27 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: import performance"
},
{
"msg_contents": "Tom Lane wrote:\n> Chris <[email protected]> writes:\n> \n>>Tons of difference :/\n> \n> \n> Have you checked that the I/O performance is comparable? It seems\n> possible that there's something badly misconfigured about the disks\n> on your new machine. Benchmarking with \"bonnie\" or some such would\n> be useful; also try looking at \"iostat 1\" output while running the\n> inserts on both machines.\n\nI'll check out bonnie, thanks.\n\nhdparm shows a world of difference (which I can understand) - that being \nthe old server is a lot slower.\n\nhdparm -t /dev/hda\n/dev/hda:\n Timing buffered disk reads: 24 MB in 3.13 seconds = 7.67 MB/sec\n\nhdparm -T /dev/hda\n/dev/hda:\n Timing cached reads: 596 MB in 2.00 seconds = 298.00 MB/sec\n\n\n\nNewer server:\nhdparm -t /dev/hda\n/dev/hda:\n Timing buffered disk reads: 70 MB in 3.02 seconds = 23.15 MB/sec\n\nhdparm -T /dev/hda\n/dev/hda:\n Timing cached reads: 1512 MB in 2.00 seconds = 754.44 MB/sec\n\n> Also, are the inserts just trivial \"insert values (... some constants ...)\"\n> or is there more to it than that?\n\nStraight inserts, no foreign keys, triggers etc.\n\n\nThe only other thing I can see is the old server is ext2:\n/dev/hda4 on / type ext2 (rw,errors=remount-ro)\n\nthe new one is ext3:\n/dev/hda2 on / type ext3 (rw)\n\n\nIf it's a server issue not a postgres issue I'll keep playing :) I \nthought my config was bad but I guess not.\n\nThanks for all the help.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 14 Mar 2006 13:27:28 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: import performance"
},
{
"msg_contents": "On Tue, 14 Mar 2006, Chris wrote:\n\n> The only other thing I can see is the old server is ext2:\n> /dev/hda4 on / type ext2 (rw,errors=remount-ro)\n>\n> the new one is ext3:\n> /dev/hda2 on / type ext3 (rw)\n\nthis is actually a fairly significant difference.\n\nwith ext3 most of your data actually gets written twice, once to the \njournal and a second time to the spot on the disk it's actually going to \nlive.\n\nin addition there are significant differences in how things are arranged \non disk between the two filesystems, (overridable at mount, but only \nchanges future new files). the ext3 layout is supposed to be better for a \ngeneral purpose filesystem, but I've found common cases (lots of files and \ndirectories) where it's significantly slower, and I think postgres will \nfall into those layouts.\n\ntry makeing a xfs filesystem for your postgres data and see what sort of \nperformance you get on it.\n\nDavid Lang\n",
"msg_date": "Mon, 13 Mar 2006 22:13:53 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: import performance"
},
{
"msg_contents": "David Lang wrote:\n> On Tue, 14 Mar 2006, Chris wrote:\n> \n>> The only other thing I can see is the old server is ext2:\n>> /dev/hda4 on / type ext2 (rw,errors=remount-ro)\n>>\n>> the new one is ext3:\n>> /dev/hda2 on / type ext3 (rw)\n> \n> \n> this is actually a fairly significant difference.\n> \n> with ext3 most of your data actually gets written twice, once to the \n> journal and a second time to the spot on the disk it's actually going to \n> live.\n> \n> in addition there are significant differences in how things are arranged \n> on disk between the two filesystems, (overridable at mount, but only \n> changes future new files). the ext3 layout is supposed to be better for \n> a general purpose filesystem, but I've found common cases (lots of files \n> and directories) where it's significantly slower, and I think postgres \n> will fall into those layouts.\n> \n> try makeing a xfs filesystem for your postgres data and see what sort of \n> performance you get on it.\n\nInteresting.\n\nTo be honest I think I'm just lucky with my really old server. I can't \nsee any particular tweaks in regards to drives or anything else. I have \nanother server running postgres 7.4.something and it's as slow as the \n8.1 system.\n\n#1 is running 2.4.x kernel - pg 7.4 (debian package) - good performance. \next2.\n\n#2 is running 2.2.x kernel (I know I know).. - pg 7.4 (debian package) \n- reasonable performance. ext2.\n\n#3 is running 2.6.x kernel - pg 8.1 (fedora package) - reasonable \nperformance. ext3.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Tue, 14 Mar 2006 17:28:34 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: import performance"
}
] |
[
{
"msg_contents": "Hello,\nAttached is a file containing the problematic queries cited yesterday, with\n\"explain\", \"\\di\" and \"show all\" outputs.\nThe first one finished in almost 4 hours. Firebird for windows finished in 1m30s.\nThe second one CRASHED after some hours, without finishing. The error message\nis at the file too.\nI will ask my friend to reduce shared_buffers to 16000 as this number gave the\nbest results for his machine.\nDo you have any suggestion?\nRegards.\nAndre Felipe Machado",
"msg_date": "Tue, 14 Mar 2006 09:02:49 -0300",
"msg_from": "\"andremachado\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "On 3/14/06, andremachado <[email protected]> wrote:\n> Hello,\n> Attached is a file containing the problematic queries cited yesterday, with\n> \"explain\", \"\\di\" and \"show all\" outputs.\n> The first one finished in almost 4 hours. Firebird for windows finished in 1m30s.\n> The second one CRASHED after some hours, without finishing. The error message\n> is at the file too.\n> I will ask my friend to reduce shared_buffers to 16000 as this number gave the\n> best results for his machine.\n> Do you have any suggestion?\n> Regards.\n> Andre Felipe Machado\n\nAre you looking for help optimizing the postgresql database generally\nor for help making those queries run faster?\n\n1. do all basic stuff. (analyze, etc etc)\n\n2. for first query, try rewriting without explicit join\nselect count(distinct NF.ID_NF) as contagem,\n DE.AM_REFERENCIA as campo\n from DECLARACAO DE, CADASTRO CAD, NOTA_FISCAL NF, EMPRESA EMP,\n ARQUIVO_PROCESSADO ARQ\n where CAD.ID_DECLARACAO=DE.ID_DECLARACAO and\n NF.ID_CADASTRO=CAD.ID_CADASTRO and\n EMP.ID_EMPRESA=DE.ID_EMPRESA and\n ARQ.ID_ARQUIVO=DE.ID_ARQUIVO\n group by DE.AM_REFERENCIA order by DE.AM_REFERENCIA ;\n\n3. second query is a mess. remove try removing explicit joins and\nreplace 'where in' with 'where exists'\n\n4. your tables look like classic overuse of surrogate keys. Do some\nexperimentation with natural keys to reduce the number of joins\ninvolved.\n\nMerlin\n",
"msg_date": "Tue, 14 Mar 2006 09:06:36 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "On Tue, Mar 14, 2006 at 09:02:49AM -0300, andremachado wrote:\n> Hello,\n> Attached is a file containing the problematic queries cited yesterday, with\n> \"explain\", \"\\di\" and \"show all\" outputs.\n> The first one finished in almost 4 hours. Firebird for windows finished in 1m30s.\n> The second one CRASHED after some hours, without finishing. The error message\n> is at the file too.\n\nPANIC: could not open file \"pg_xlog/0000000100000018000000E7\" (log file\n24, segment 231): Invalid argument\n\nIIRC that means you have a data corruption issue.\n\nAs for the queries, EXPLAIN ANALYZE would be in order here. It looks\nlike the first one might benefit from increasing work_memory.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 14 Mar 2006 15:10:51 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
}
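As a cheap way to test the work_mem suggestion above without touching the .conf file, it can be raised for a single session (on 8.1 the value is in kilobytes; the figure below is only an example):

    SET work_mem = 131072;   -- 128 MB for this session only
    -- ... re-run the problem query / EXPLAIN ANALYZE here ...
    RESET work_mem;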
] |
[
{
"msg_contents": "Thank you for the insight Michael. I'll be performing some tests with the\nvarious setting on/off this week and will post the results.\n\nTim\n\n -----Original Message-----\nFrom: \tMichael Fuhr [mailto:[email protected]] \nSent:\tMonday, March 13, 2006 7:19 PM\nTo:\tmcelroy, tim\nCc:\t'[email protected]'\nSubject:\tRe: [PERFORM] PG Statistics\n\nOn Mon, Mar 13, 2006 at 06:49:39PM -0500, mcelroy, tim wrote:\n> Does anyone know how much of a performance hit turning stats_block_level\nand\n> stats_row_level on will incur? Do both need to be on to gather cache\n> related statistics? I know the annotated_conf_80 document states to only\n> turn them on for debug but if they're not that performance intensive I\n> cannot see the harm.\n\nI ran some tests a few months ago and found that stats_command_string\nhad a significant impact, whereas stats_block_level and stats_row_level\nwere almost negligible. Here are my test results:\n\nhttp://archives.postgresql.org/pgsql-performance/2005-12/msg00307.php\n\nYour results may vary. If you see substantially different results\nthen please post the particulars.\n\n-- \nMichael Fuhr\n\n\n\n\n\nRE: [PERFORM] PG Statistics\n\n\nThank you for the insight Michael. I'll be performing some tests with the various setting on/off this week and will post the results.\nTim\n\n -----Original Message-----\nFrom: Michael Fuhr [mailto:[email protected]] \nSent: Monday, March 13, 2006 7:19 PM\nTo: mcelroy, tim\nCc: '[email protected]'\nSubject: Re: [PERFORM] PG Statistics\n\nOn Mon, Mar 13, 2006 at 06:49:39PM -0500, mcelroy, tim wrote:\n> Does anyone know how much of a performance hit turning stats_block_level and\n> stats_row_level on will incur? Do both need to be on to gather cache\n> related statistics? I know the annotated_conf_80 document states to only\n> turn them on for debug but if they're not that performance intensive I\n> cannot see the harm.\n\nI ran some tests a few months ago and found that stats_command_string\nhad a significant impact, whereas stats_block_level and stats_row_level\nwere almost negligible. Here are my test results:\n\nhttp://archives.postgresql.org/pgsql-performance/2005-12/msg00307.php\n\nYour results may vary. If you see substantially different results\nthen please post the particulars.\n\n-- \nMichael Fuhr",
"msg_date": "Tue, 14 Mar 2006 08:50:04 -0500",
"msg_from": "\"mcelroy, tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG Statistics"
}
] |
[
{
"msg_contents": "Thanks you Steve. As mentioned in my other reply to Michael Fuhr I'll post\nthe results from tests to be performed this week.\n\nTim\n\n -----Original Message-----\nFrom: \tSteve Poe [mailto:[email protected]] \nSent:\tMonday, March 13, 2006 7:38 PM\nTo:\tmcelroy, tim\nCc:\t'[email protected]'\nSubject:\tRe: [PERFORM] PG Statistics\n\nTim,\n\nWhen I have done ODBC load tests with stats_block_level enabled on (20\nmins. per test), I've seen about 3-4% performance hit. Your mileage may\nvary.\n\nSteve Poe\n\nOn Mon, 2006-03-13 at 18:49 -0500, mcelroy, tim wrote:\n> Good evening,\n> \n> Does anyone know how much of a performance hit turning\n> stats_block_level and stats_row_level on will incur? Do both need to\n> be on to gather cache related statistics? I know the\n> annotated_conf_80 document states to only turn them on for debug but\n> if they're not that performance intensive I cannot see the harm.\n> \n> Thank you, \n> Tim McElroy\n> \n\n\n\n\n\nRE: [PERFORM] PG Statistics\n\n\nThanks you Steve. As mentioned in my other reply to Michael Fuhr I'll post the results from tests to be performed this week.\nTim\n\n -----Original Message-----\nFrom: Steve Poe [mailto:[email protected]] \nSent: Monday, March 13, 2006 7:38 PM\nTo: mcelroy, tim\nCc: '[email protected]'\nSubject: Re: [PERFORM] PG Statistics\n\nTim,\n\nWhen I have done ODBC load tests with stats_block_level enabled on (20\nmins. per test), I've seen about 3-4% performance hit. Your mileage may\nvary.\n\nSteve Poe\n\nOn Mon, 2006-03-13 at 18:49 -0500, mcelroy, tim wrote:\n> Good evening,\n> \n> Does anyone know how much of a performance hit turning\n> stats_block_level and stats_row_level on will incur? Do both need to\n> be on to gather cache related statistics? I know the\n> annotated_conf_80 document states to only turn them on for debug but\n> if they're not that performance intensive I cannot see the harm.\n> \n> Thank you, \n> Tim McElroy\n>",
"msg_date": "Tue, 14 Mar 2006 08:51:17 -0500",
"msg_from": "\"mcelroy, tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG Statistics"
}
] |
[
{
"msg_contents": "Hi, \n\n Do we have to vacuum template0 database regularly ? We got this warning this morning while vacuuming databases. As a part of my daily vacuum job I do vacuum of quartz, helix_fdc and affiliate databases which are the \none's which are heavily updated and used. But today I realized that usps, template1 and template0 is also being used in a transaction somehow based on this (SELECT datname, age(datfrozenxid) FROM pg_database;) query.\nActually we dont do any updates on usps , template1 and templat0 databases but some how still the age(datfrozenxid keeps incrementing. \n\n My question now is do I have to vacuum daily template1 and template0 databse, is there any harm on vacuuming these databases daily ?, since these are postgres system tables I am kind of worried. \n I was told that template0 is freezed but not sure why the age(datfrozenxid keeps incrementing. \n I am going to vacuum usps from now anyway. We are using Postgres version 8.0.2 \n\n\n If some one can please help me on this it would be really great, this is a production database and we cant afford to loose anything. \n\nThanks!\nPallav. \n\n\nMessage from the log\n---------------------\nWARNING: some databases have not been vacuumed in 1618393379 transactions\nHINT: Better vacuum them within 529090268 transactions, or you may have a wraparound failure.\n\n\n\nSELECT datname, age(datfrozenxid) FROM pg_database;\n datname | age\n-----------+------------\n quartz | 1076729648\n helix_fdc | 1078452246\n usps | 1621381218\n affiliate | 1078561327\n template1 | 1621381218\n template0 | 1621381218\n(6 rows)\n \n \nSELECT datname, age(datfrozenxid) FROM pg_database;\n datname | age\n-----------+------------\n quartz | 1076770467\n helix_fdc | 1078493065\n usps | 1621422037\n affiliate | 1078602146\n template1 | 1621422037\n template0 | 1621422037\n(6 rows)\n \n \nI ran this just 2 minutes apart and you can see the age value changes for \ntemplate0 and template1\n\n",
"msg_date": "Tue, 14 Mar 2006 11:44:12 -0500",
"msg_from": "Pallav Kalva <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum template databases, Urgent: Production problem"
},
{
"msg_contents": "Pallav Kalva <[email protected]> writes:\n> Do we have to vacuum template0 database regularly ?\n\nNo, and in fact you can't because it's marked not datallowconn.\nBut you do need to vacuum template1 and usps every now and then.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 14 Mar 2006 12:27:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum template databases, Urgent: Production problem "
},
{
"msg_contents": "On Tue, Mar 14, 2006 at 11:44:12AM -0500, Pallav Kalva wrote:\n> Hi, \n> \n> Do we have to vacuum template0 database regularly ? We got this warning \n> this morning while vacuuming databases. As a part of my daily vacuum job \n> I do vacuum of quartz, helix_fdc and affiliate databases which are the \n> one's which are heavily updated and used. But today I realized that usps, \n> template1 and template0 is also being used in a transaction somehow based \n> on this (SELECT datname, age(datfrozenxid) FROM pg_database;) query.\n> Actually we dont do any updates on usps , template1 and templat0 databases \n> but some how still the age(datfrozenxid keeps incrementing. \n> My question now is do I have to vacuum daily template1 and template0 \n> databse, is there any harm on vacuuming these databases daily ?, since \n> these are postgres system tables I am kind of worried. I was told that \n> template0 is freezed but not sure why the age(datfrozenxid keeps \n> incrementing. I am going to vacuum usps from now anyway. We are using \n> Postgres version 8.0.2 \n\nYou should upgrade to 8.0.6; data loss bugs have been fixed in there.\n\nIf you never update USPS you can do a vacuum freeze on it and you won't\nneed to worry about XID rollover. Same with template1. But if they're\nsmall databases it's probably safer just to periodically vacuum (once a\nmonth or so).\n\nIf you up to 8.1 and enable autovacuum, it should take care of all of\nthis for you.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 14 Mar 2006 15:13:56 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum template databases, Urgent: Production problem"
}
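A sketch of the VACUUM FREEZE suggestion for the static database, run while connected to it (usps in this case):

    -- Freeze every existing row; per the advice above, a database that is
    -- never written to afterwards no longer needs routine vacuuming for
    -- wraparound protection.
    VACUUM FREEZE;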
] |
[
{
"msg_contents": "If one adds the '-a' arg to vacuumdb wouldn't that vacuum all databases\nincluding template1? \n\nTim\n\n -----Original Message-----\nFrom: \[email protected]\n[mailto:[email protected]] On Behalf Of Tom Lane\nSent:\tTuesday, March 14, 2006 12:28 PM\nTo:\tPallav Kalva\nCc:\[email protected]\nSubject:\tRe: [PERFORM] Vacuum template databases, Urgent: Production\nproblem \n\nPallav Kalva <[email protected]> writes:\n> Do we have to vacuum template0 database regularly ?\n\nNo, and in fact you can't because it's marked not datallowconn.\nBut you do need to vacuum template1 and usps every now and then.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n\n\nRE: [PERFORM] Vacuum template databases, Urgent: Production problem \n\n\nIf one adds the '-a' arg to vacuumdb wouldn't that vacuum all databases including template1? \n\nTim\n\n -----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Tom Lane\nSent: Tuesday, March 14, 2006 12:28 PM\nTo: Pallav Kalva\nCc: [email protected]\nSubject: Re: [PERFORM] Vacuum template databases, Urgent: Production problem \n\nPallav Kalva <[email protected]> writes:\n> Do we have to vacuum template0 database regularly ?\n\nNo, and in fact you can't because it's marked not datallowconn.\nBut you do need to vacuum template1 and usps every now and then.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly",
"msg_date": "Tue, 14 Mar 2006 12:28:17 -0500",
"msg_from": "\"mcelroy, tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum template databases, Urgent: Production probl"
},
{
"msg_contents": "On Tue, Mar 14, 2006 at 12:28:17PM -0500, mcelroy, tim wrote:\n> If one adds the '-a' arg to vacuumdb wouldn't that vacuum all databases\n> including template1? \n\nIt does on 8.1...\n\[email protected][15:15]~:18%vacuumdb -va | & grep template1\nvacuumdb: vacuuming database \"template1\"\[email protected][15:16]~:19%\n\nTry it and find out.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 14 Mar 2006 15:16:35 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum template databases, Urgent: Production probl"
}
] |
[
{
"msg_contents": "Hello,\nMany thanks for your suggestions.\nI am trying to optimize server configs, as (presumed) my friend already\noptimized his queries and firebird windows is executing them fast.\nYou could see at the new attached file the results of the queries rewrite.\nUnfortunately, the first query simply returned the same estimated costs by the\nplanner.\nThe second one, using EXISTS, multiplied its cost almost 200 times!\n\"Exists\" is painfully slow.\nThe shared_buffers was reduced again.\nDo you have any suggestions?\nMany thanks.\nAndre Felipe Machado",
"msg_date": "Tue, 14 Mar 2006 15:33:20 -0300",
"msg_from": "\"andremachado\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "firebird X postgresql 8.1.2 windows, performance comparison"
},
{
"msg_contents": "On 3/14/06, andremachado <[email protected]> wrote:\n> Unfortunately, the first query simply returned the same estimated costs by the\n> planner.\n\nCan you try making a big increase to work_mem .conf parameter (as much\nas is reasonalbe) and see if that helps either query?\n\nok, thats understandable. you do have indexes on all the id columns, yes?\n\n> The second one, using EXISTS, multiplied its cost almost 200 times!\n\nregardless of what the planner said, could you please try running\nquery with explain analyze? also:\n1. DE.ID_ARQUIVO in (10) could be written as DE.ID_ARQUIVO = 10\n\n2. and CAD.ID_DECLARACAO=DE.ID_DECLARACAO\n and CAD.ID_CADASTRO=NOTA_FISCAL.ID_CADASTRO\ncould possibly beneift from key on CAD(ID_DECLARACAO, ID_CADASTRO)\nalso, you could try adding an index on DE(ID_ARQUIVO, ID_DECLARACAO)\n\n3. and (select sum(ITEM_NOTA.VA_TOTAL) from ITEM_NOTA\n where ITEM_NOTA.ID_NF = NOTA_FISCAL.ID_NF) < 999999999999;\n\nthis is probably the major performance killer. you have to somehow\noptimize the 'sum' out of the target of the major where clause. One\nway to possibly tackle that is to attempt to materialze the sum into\nnota_fiscal.\n\nmerlin\n",
"msg_date": "Tue, 14 Mar 2006 14:36:25 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: firebird X postgresql 8.1.2 windows, performance comparison"
}
] |
[
{
"msg_contents": "Humm, well I am running 8.0.1 and use that option and see the following in\nmy vacuum output log:\n\nvacuumdb: vacuuming database \"template1\"\n\nSo I would assume that it is being vacuumed? Maybe I'm wrong. If so, we\nshould be upgrading soon and it won't be an issue.\n\nThanks,\nTim\n\n -----Original Message-----\nFrom: \tJim C. Nasby [mailto:[email protected]] \nSent:\tTuesday, March 14, 2006 4:17 PM\nTo:\tmcelroy, tim\nCc:\t'Tom Lane'; [email protected]\nSubject:\tRe: [PERFORM] Vacuum template databases, Urgent: Production\nprobl\n\nOn Tue, Mar 14, 2006 at 12:28:17PM -0500, mcelroy, tim wrote:\n> If one adds the '-a' arg to vacuumdb wouldn't that vacuum all databases\n> including template1? \n\nIt does on 8.1...\n\[email protected][15:15]~:18%vacuumdb -va | & grep template1\nvacuumdb: vacuuming database \"template1\"\[email protected][15:16]~:19%\n\nTry it and find out.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n\n\n\nRE: [PERFORM] Vacuum template databases, Urgent: Production probl\n\n\nHumm, well I am running 8.0.1 and use that option and see the following in my vacuum output log:\n\nvacuumdb: vacuuming database \"template1\"\n\nSo I would assume that it is being vacuumed? Maybe I'm wrong. If so, we should be upgrading soon and it won't be an issue.\nThanks,\nTim\n\n -----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Tuesday, March 14, 2006 4:17 PM\nTo: mcelroy, tim\nCc: 'Tom Lane'; [email protected]\nSubject: Re: [PERFORM] Vacuum template databases, Urgent: Production probl\n\nOn Tue, Mar 14, 2006 at 12:28:17PM -0500, mcelroy, tim wrote:\n> If one adds the '-a' arg to vacuumdb wouldn't that vacuum all databases\n> including template1? \n\nIt does on 8.1...\n\[email protected][15:15]~:18%vacuumdb -va | & grep template1\nvacuumdb: vacuuming database \"template1\"\[email protected][15:16]~:19%\n\nTry it and find out.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461",
"msg_date": "Tue, 14 Mar 2006 16:19:37 -0500",
"msg_from": "\"mcelroy, tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum template databases, Urgent: Production probl"
},
{
"msg_contents": "On Tue, Mar 14, 2006 at 04:19:37PM -0500, mcelroy, tim wrote:\n> Humm, well I am running 8.0.1 and use that option and see the following in\n> my vacuum output log:\n> \n> vacuumdb: vacuuming database \"template1\"\n> \n> So I would assume that it is being vacuumed? Maybe I'm wrong. If so, we\n> should be upgrading soon and it won't be an issue.\n\nMy guess is that vacuumdb -a will vacuum anything that it's allowed to\nconnect to, which normally means every database except for template0. If\nyou want the Real Answer, look in the source.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 14 Mar 2006 15:24:11 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum template databases, Urgent: Production probl"
},
{
"msg_contents": "\nOn Mar 14, 2006, at 4:19 PM, mcelroy, tim wrote:\n\n> Humm, well I am running 8.0.1 and use that option and see the \n> following in\n> my vacuum output log:\n>\n> vacuumdb: vacuuming database \"template1\"\n>\n\nit has done so since at least 7.4, probably 7.3. the \"-a\" flag \nreally does what is says.\n\n",
"msg_date": "Tue, 14 Mar 2006 17:18:24 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum template databases, Urgent: Production probl"
}
] |
[
{
"msg_contents": "Hello list.\n\nI recently tried to do a slony replica of my database, and doing it falied.\nI retried, and then it succeeded (why it failed is another story).\n\nThis caused that in the replica there is a lot of dead tuples ( If i\nunderstand correctly, a failure in creating the replica means a HUGE aborted\ntransaction - and Slony should TRUNCATE the table, getting rid of dead\ntuples, but that is a subject for another list).\n\nso I did vacuum full verbose analyze (does it make sense ?)\n\nThis hanged on a (quite large) table:\n\nINFO: vacuuming \"public.calls\"\nINFO: \"calls\": found 7980456 removable, 3989705 nonremovable row versions\nin 296943 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 136 to 224 bytes long.\nThere were 891 unused item pointers.\nTotal free space (including removable row versions) is 1594703944 bytes.\n197958 pages are or will become empty, including 0 at the end of the table.\n212719 pages containing 1588415680 free bytes are potential move\ndestinations.\nCPU 7.25s/3.28u sec elapsed 144.95 sec.\nINFO: index \"calls_pkey\" now contains 3989705 row versions in 8975 pages\nDETAIL: 108927 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.35s/0.59u sec elapsed 39.03 sec.\nINFO: index \"calls_cli\" now contains 3989705 row versions in 13504 pages\nDETAIL: 108927 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.51s/0.60u sec elapsed 58.60 sec.\nINFO: index \"calls_dnis\" now contains 3989705 row versions in 13600 pages\nDETAIL: 108927 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.60s/0.90u sec elapsed 27.05 sec.\nINFO: index \"calls_u\" now contains 3989705 row versions in 23820 pages\nDETAIL: 108927 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.92s/0.78u sec elapsed 80.51 sec.\nINFO: index \"calls_z\" now contains 3989705 row versions in 13607 pages\nDETAIL: 108927 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.60s/0.85u sec elapsed 39.77 sec.\n\nIt was hanging in this state for more than 3 hours, and I had to kill the\nvacuum process.\n\n From iostat I saw that there was continuous write activity, steadilly about\n1.3 MB/s (the disk system can do about 40 MB/s), and there were iowait\nprocesses. There was no read activity.\n\nThere were no other clients for that database (but there were clients in\nother databases in the instance).\n\nversion is 8.1.0 . Autovacuum is off. I upped maintenance_work_mem to 512 MB\n. Any hints? 
If nothing comes up today, I am scratching that replica.\n\n\ntelefony=# \\d calls\n Table \"public.calls\"\n Column | Type |\nModifiers\n---------------+-----------------------------+------------------------------\n------------------------------\n dt | timestamp without time zone |\n machine_ip | integer |\n port | integer |\n filename | character varying(15) |\n account | character(11) |\n duration | integer |\n ani | character(32) |\n application | character(32) |\n dnis | integer |\n z | integer |\n client | integer |\n taryfa | integer |\n operator | character varying(20) |\n id | integer | not null default\nnextval(('seq_calls_id'::text)::regclass)\n outgoing | character(12) |\n release_cause | text |\n waiting | integer |\n oper_pin | integer |\nIndexes:\n \"calls_pkey\" PRIMARY KEY, btree (id)\n \"calls_u\" UNIQUE, btree (dt, dnis, port, machine_ip, account)\n \"calls_cli\" btree (client, dt)\n \"calls_dnis\" btree (dnis, dt)\n \"calls_z\" btree (z, dt)\nTriggers:\n _ctele_denyaccess_5 BEFORE INSERT OR DELETE OR UPDATE ON calls FOR EACH\nROW EXECUTE PROCEDURE _ctele.denyaccess('_ctele')\n\n\nPozdrawiam\nMarcin Ma�k\n\n",
"msg_date": "Wed, 15 Mar 2006 11:09:52 +0100",
"msg_from": "=?iso-8859-2?Q?Marcin_Ma=F1k?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM FULL hangs"
},
{
"msg_contents": "Marcin Mańk wrote:\n> Hello list.\n> \n> I recently tried to do a slony replica of my database, and doing it falied.\n> I retried, and then it succeeded (why it failed is another story).\n> \n> This caused that in the replica there is a lot of dead tuples ( If i\n> understand correctly, a failure in creating the replica means a HUGE aborted\n> transaction - and Slony should TRUNCATE the table, getting rid of dead\n> tuples, but that is a subject for another list).\n> \n> so I did vacuum full verbose analyze (does it make sense ?)\n\nFair enough. If you want empty tables TRUNCATE is probably a better bet \nthough.\n\n> This hanged on a (quite large) table:\n> \n> INFO: vacuuming \"public.calls\"\n> INFO: \"calls\": found 7980456 removable, 3989705 nonremovable row versions\n> in 296943 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> Nonremovable row versions range from 136 to 224 bytes long.\n> There were 891 unused item pointers.\n> Total free space (including removable row versions) is 1594703944 bytes.\n> 197958 pages are or will become empty, including 0 at the end of the table.\n> 212719 pages containing 1588415680 free bytes are potential move\n> destinations.\n\nOK, so there are 7.9 million removable rows and 3.9 million nonremovable \nso truncate isn't an option since you have data you presumably want to \nkeep. It estimates about 200,000 pages will become empty, but none of \nthem are at the end of the table. This represents 1.5GB of unused \ndisk-space.\n\nI'm a bit puzzled as to how you managed to get so much free space at the \nstart of the table. Did the replication work on the second try?\n\n> CPU 7.25s/3.28u sec elapsed 144.95 sec.\n> INFO: index \"calls_pkey\" now contains 3989705 row versions in 8975 pages\n> DETAIL: 108927 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.35s/0.59u sec elapsed 39.03 sec.\n> INFO: index \"calls_cli\" now contains 3989705 row versions in 13504 pages\n> DETAIL: 108927 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.51s/0.60u sec elapsed 58.60 sec.\n> INFO: index \"calls_dnis\" now contains 3989705 row versions in 13600 pages\n> DETAIL: 108927 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.60s/0.90u sec elapsed 27.05 sec.\n> INFO: index \"calls_u\" now contains 3989705 row versions in 23820 pages\n> DETAIL: 108927 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.92s/0.78u sec elapsed 80.51 sec.\n> INFO: index \"calls_z\" now contains 3989705 row versions in 13607 pages\n> DETAIL: 108927 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.60s/0.85u sec elapsed 39.77 sec.\n\nIt's done all the indexes (and seems to have done them quite quickly), \nand is presumably working on the data now.\n\n> It was hanging in this state for more than 3 hours, and I had to kill the\n> vacuum process.\n> \n>>From iostat I saw that there was continuous write activity, steadilly about\n> 1.3 MB/s (the disk system can do about 40 MB/s), and there were iowait\n> processes. There was no read activity.\n> \n> There were no other clients for that database (but there were clients in\n> other databases in the instance).\n\nOK, so you might well be getting the vacuum writing one page, then WAL, \nthen vacuum, etc. 
That will mean the disk spends most of its time \nseeking back and fore. How many disks do you have, and is the WAL on a \nseparate set of disks?\n\nI think it's just taking a long time because you have so many pages to \nmove and not enough disk bandwidth. Of course the root of the problem is \nthat you had so many dead rows after a failed replication, but you're \nright and that's another email.\n\n-- \n Richard Huxton\n Archonet Ltd\n\n",
"msg_date": "Wed, 15 Mar 2006 11:54:02 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL hangs"
},
{
"msg_contents": "\n> I'm a bit puzzled as to how you managed to get so much free space at the\n> start of the table. Did the replication work on the second try?\n\nIt actually worked on third try, I guess.\n\n> OK, so you might well be getting the vacuum writing one page, then WAL,\n> then vacuum, etc. That will mean the disk spends most of its time\n> seeking back and fore. How many disks do you have, and is the WAL on a\n> separate set of disks?\n\nIt is 2 spindles software RAID1 . Till now there were no performance\nproblems with this machine that would mandate trying anything more fancy,\nthis machine is low traffic.\n\nGreetings\nMarcin Ma�k\n\n",
"msg_date": "Wed, 15 Mar 2006 19:43:35 +0100",
"msg_from": "=?iso-8859-2?Q?Marcin_Ma=F1k?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VACUUM FULL hangs"
}
] |
[
{
"msg_contents": "Attached is a simplified example of a performance problem we have seen,\nwith a workaround and a suggestion for enhancement (hence both the\nperformance and hackers lists).\n\nOur software is allowing users to specify the start and end dates for a\nquery. When they enter the same date for both, the optimizer makes a\nvery bad choice. We can work around it in application code by using an\nequality test if both dates match. I think the planner should be able\nto make a better choice here. (One obvious way to fix it would be to\nrewrite \"BETWEEN a AND b\" as \"= a\" when a is equal to b, but it seems\nlike there is some underlying problem which should be fixed instead (or\nin addition to) this.\n\nThe first query uses BETWEEN with the same date for both min and max\nvalues. The second query uses an equality test for the same date. The\nthird query uses BETWEEN with a two-day range. In all queries, there\nare less than 4,600 rows for the specified cotfcNo value out of over 18\nmillion rows in the table. We tried boosting the statistics samples for\nthe columns in the selection, which made the estimates of rows more\naccurate, but didn't change the choice of plans.\n\n-Kevin",
"msg_date": "Wed, 15 Mar 2006 11:56:53 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BETWEEN optimizer problems with single-value range"
},
{
"msg_contents": "Kevin Grittner <[email protected]> schrieb:\n\n> Attached is a simplified example of a performance problem we have seen,\n\nOdd. Can you tell us your PG-Version?\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Wed, 15 Mar 2006 19:17:35 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value range"
},
{
"msg_contents": ">>> On Wed, Mar 15, 2006 at 12:17 pm, in message\n<20060315181735.GA22240@KanotixBox>, Andreas Kretschmer\n<[email protected]> wrote: \n> Kevin Grittner <[email protected]> schrieb:\n> \n>> Attached is a simplified example of a performance problem we have\nseen,\n> \n> Odd. Can you tell us your PG- Version?\n\nI know we really should move to 8.1.3, but I haven't gotten to it yet. \nWe're on a build from the 8.1 stable branch as of February 10th, with a\npatch to allow ANSI standard interpretation of string literals. (So\nthis is 8.1.2 with some 8.1.3 changes plus the string literal patch.)\n\nIf there are any changes in that time frame which might affect this\nissue, I could deploy a standard release and make sure that I see the\nsame behavior. Let me know.\n\n-Kevin\n\n\n",
"msg_date": "Wed, 15 Mar 2006 12:48:51 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BETWEEN optimizer problems with single-value"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n>> Odd. Can you tell us your PG- Version?\n\n> this is 8.1.2 with some 8.1.3 changes plus the string literal patch.)\n\n8.1 is certainly capable of devising the plan you want, for example\nin the regression database:\n\nregression=# explain select * from tenk1 where thousand = 10 and tenthous between 42 and 144;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Index Scan using tenk1_thous_tenthous on tenk1 (cost=0.00..6.01 rows=1 width=244)\n Index Cond: ((thousand = 10) AND (tenthous >= 42) AND (tenthous <= 144))\n(2 rows)\n\nIt looks to me like this is a matter of bad cost estimation, ie, it's\nthinking the other index is cheaper to use. Why that is is not clear.\nCan we see the pg_stats rows for ctofcNo and calDate?\n\nAlso, try to force it to generate the plan you want, so we can see what\nit thinks the cost is for that. If you temporarily drop the wrong index\nyou should be able to get there:\n\n\tbegin;\n\tdrop index \"Cal_CalDate\";\n\texplain analyze select ... ;\n\t-- repeat as needed if it chooses some other wrong index\n\trollback;\n\nI hope you have a play copy of the database to do this in ---\nalthough it would be safe to do the above in a live DB, the DROP would\nexclusive-lock the table until you finish the experiment and rollback,\nwhich probably is not good for response time ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Mar 2006 14:17:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value "
},
{
"msg_contents": "On 3/15/06, Kevin Grittner <[email protected]> wrote:\n> Attached is a simplified example of a performance problem we have seen,\n> with a workaround and a suggestion for enhancement (hence both the\n> performance and hackers lists).\n\n\nHi Kevin. In postgres 8.2 you will be able to use the row-wise\ncomparison for your query which should guarantee good worst case\nperformance without having to maintain two separate query forms. it\nis also a more elegant syntax as you will see.\n\nSELECT \"CA\".\"calDate\", \"CA\".\"startTime\"\n FROM \"Cal\" \"CA\"\n WHERE (\"CA\".\"ctofcNo\", \"CA\".\"calDate\") BETWEEN\n (2192, '2006-03-15') and (2192, '2006-03-15')\n ORDER BY \"ctofcNo\", \"calDate\", \"startTime\";\n\nBe warned this will not work properly in pg < 8.2. IMO, row-wise is\nthe best way to write this type of a query. Please note the row\nconstructor and the addition of ctofcNo into the order by clause to\nforce use of the index.\n\nMerlin\n",
"msg_date": "Wed, 15 Mar 2006 14:36:51 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] BETWEEN optimizer problems with single-value range"
},
{
"msg_contents": ">>> On Wed, Mar 15, 2006 at 1:17 pm, in message\n<[email protected]>,\nTom Lane <[email protected]> wrote: \n> \n> 8.1 is certainly capable of devising the plan you want, for example\n> in the regression database:\n> \n> regression=# explain select * from tenk1 where thousand = 10 and\ntenthous \n> between 42 and 144;\n> QUERY PLAN\n>\n------------------------------------------------------------------------------------\n> Index Scan using tenk1_thous_tenthous on tenk1 (cost=0.00..6.01\nrows=1 \n> width=244)\n> Index Cond: ((thousand = 10) AND (tenthous >= 42) AND (tenthous <=\n144))\n> (2 rows)\n\nThat matches one of the examples where it optimized well. I only saw\nthe bad plan when low and high ends of the BETWEEN range were equal.\n\n> It looks to me like this is a matter of bad cost estimation, ie,\nit's\n> thinking the other index is cheaper to use. Why that is is not\nclear.\n> Can we see the pg_stats rows for ctofcNo and calDate?\n\n schemaname | tablename | attname | null_frac | avg_width | n_distinct\n| most_common_vals \n | \n most_common_freqs | \n histogram_bounds \n | correlation\n------------+-----------+---------+-----------+-----------+------------+-----------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------+-------------\n public | Cal | calDate | 0 | 4 | 2114\n|\n{2003-06-02,2000-06-20,2001-04-16,2003-06-17,2003-12-01,2004-10-12,2001-04-23,2001-10-15,2002-03-06,2002-05-03}\n|\n{0.00333333,0.00233333,0.00233333,0.00233333,0.00233333,0.00233333,0.002,0.002,0.002,0.002}\n|\n{1986-03-14,1999-06-11,2000-07-14,2001-05-18,2002-03-21,2002-12-04,2003-08-12,2004-05-13,2005-02-01,2005-09-28,2080-12-31}\n| 0.0545768\n public | Cal | ctofcNo | 0 | 8 | 669\n| {0793,1252,1571,0964,0894,1310,\"DA \",0944,1668,0400} \n |\n{0.024,0.019,0.015,0.0123333,0.012,0.011,0.0106667,0.01,0.00966667,0.00866667}\n | {0000,0507,0733,0878,1203,1336,14AG,1633,1971,3705,YVJO} \n | \n-0.0179665\n(2 rows)\n\n\n> Also, try to force it to generate the plan you want, so we can see\nwhat\n> it thinks the cost is for that. If you temporarily drop the wrong\nindex\n> you should be able to get there:\n> \n> \tbegin;\n> \tdrop index \"Cal_CalDate\";\n> \texplain analyze select ... ;\n> \t-- repeat as needed if it chooses some other wrong index\n> \trollback;\n\n Sort (cost=4.03..4.03 rows=1 width=12) (actual time=48.484..48.486\nrows=4 loops=1)\n Sort Key: \"calDate\", \"startTime\"\n -> Index Scan using \"Cal_CtofcNo\" on \"Cal\" \"CA\" (cost=0.00..4.02\nrows=1 width=12) (actual time=36.750..48.228 rows=4 loops=1)\n Index Cond: (((\"ctofcNo\")::bpchar = '2192'::bpchar) AND\n((\"calDate\")::date >= '2006-03-15'::date) AND ((\"calDate\")::date <=\n'2006-03-15'::date))\n Total runtime: 56.616 ms\n\n",
"msg_date": "Wed, 15 Mar 2006 14:25:58 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: BETWEEN optimizer problems with single-value"
},
{
"msg_contents": "Merlin Moncure <[email protected]> schrieb:\n\n> On 3/15/06, Kevin Grittner <[email protected]> wrote:\n> > Attached is a simplified example of a performance problem we have seen,\n> > with a workaround and a suggestion for enhancement (hence both the\n> > performance and hackers lists).\n> \n> \n> Hi Kevin. In postgres 8.2 you will be able to use the row-wise\n\n8.2? AFAIK, Feature freeze in juni/juli this year...\nRelease august/september.\n\n\n> comparison for your query which should guarantee good worst case\n> performance without having to maintain two separate query forms. it\n\nPerhaps, a bitmap index scan (since 8.1) are useful for such querys.\nThats why i asked which version.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Wed, 15 Mar 2006 21:47:00 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value range"
},
{
"msg_contents": "On 3/15/06, Andreas Kretschmer <[email protected]> wrote:\n> Merlin Moncure <[email protected]> schrieb:\n>\n> > On 3/15/06, Kevin Grittner <[email protected]> wrote:\n> > > Attached is a simplified example of a performance problem we have seen,\n> > > with a workaround and a suggestion for enhancement (hence both the\n> > > performance and hackers lists).\n> >\n> >\n> > Hi Kevin. In postgres 8.2 you will be able to use the row-wise\n>\n> 8.2? AFAIK, Feature freeze in juni/juli this year...\n> Release august/september.\n\nyes, but I was addressing kevin's point about enhancing the server...\n\n> > comparison for your query which should guarantee good worst case\n> > performance without having to maintain two separate query forms. it\n>\n> Perhaps, a bitmap index scan (since 8.1) are useful for such querys.\n> Thats why i asked which version.\n\nI think you will find that reading a range of records from a table\nordered by an index utilizing the 8.2 comparison feature is much\nfaster than a bitmap index scan.\n\nMerlin\n",
"msg_date": "Wed, 15 Mar 2006 17:11:16 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] BETWEEN optimizer problems with single-value range"
},
{
"msg_contents": "On Wed, 2006-03-15 at 11:56 -0600, Kevin Grittner wrote:\n> Attached is a simplified example of a performance problem we have seen,\n> with a workaround and a suggestion for enhancement (hence both the\n> performance and hackers lists).\n> \n> Our software is allowing users to specify the start and end dates for a\n> query. When they enter the same date for both, the optimizer makes a\n> very bad choice. We can work around it in application code by using an\n> equality test if both dates match. I think the planner should be able\n> to make a better choice here. \n\n> (One obvious way to fix it would be to\n> rewrite \"BETWEEN a AND b\" as \"= a\" when a is equal to b, but it seems\n> like there is some underlying problem which should be fixed instead (or\n> in addition to) this.\n\nThat might work, but I'm not sure if that is in itself the problem and\nit would be mostly wasted overhead in 99% of cases.\n\nThe main issue appears to be that the planner chooses \"Cal_CalDate\"\nindex rather than \"Cal_CtofcNo\" index when the BETWEEN values match. \n\nIt seems that the cost of the first and third EXPLAINs is equal, yet for\nsome reason it chooses different indexes in each case. My understanding\nwas that it would pick the first index created if plan costs were equal.\nIs that behaviour repeatable with each query?\n\nISTM that if we have equal plan costs then we should be choosing the\nindex for which we have more leading columns, since that is more likely\nto lead to a more selective answer. But the plan selection is a simple\n\"pick the best, or if they're equal pick the best sort order\".\n\n> The first query uses BETWEEN with the same date for both min and max\n> values. The second query uses an equality test for the same date. The\n> third query uses BETWEEN with a two-day range. In all queries, there\n> are less than 4,600 rows for the specified cotfcNo value out of over 18\n> million rows in the table. We tried boosting the statistics samples for\n> the columns in the selection, which made the estimates of rows more\n> accurate, but didn't change the choice of plans.\n\nThe selectivity seems the same in both - clamped to a minimum of 1 row,\nso changing that doesn't look like it would help.\n\nBest Regards, Simon Riggs\n\n\n\n\n",
"msg_date": "Wed, 15 Mar 2006 23:05:08 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] BETWEEN optimizer problems with single-value range"
},
{
"msg_contents": ">>> On Wed, Mar 15, 2006 at 5:05 pm, in message\n<[email protected]>, Simon Riggs\n<[email protected]> wrote: \n> On Wed, 2006- 03- 15 at 11:56 - 0600, Kevin Grittner wrote:\n> \n>> (One obvious way to fix it would be to\n>> rewrite \"BETWEEN a AND b\" as \"= a\" when a is equal to b, but it\nseems\n>> like there is some underlying problem which should be fixed instead\n(or\n>> in addition to) this.\n> \n> That might work, but I'm not sure if that is in itself the problem\nand\n> it would be mostly wasted overhead in 99% of cases.\n\nIt sounds like we agree.\n\n> The main issue appears to be that the planner chooses \"Cal_CalDate\"\n> index rather than \"Cal_CtofcNo\" index when the BETWEEN values match.\n\n\nAgreed.\n\n> It seems that the cost of the first and third EXPLAINs is equal, yet\nfor\n> some reason it chooses different indexes in each case. My\nunderstanding\n> was that it would pick the first index created if plan costs were\nequal.\n> Is that behaviour repeatable with each query?\n\nIt seems to be a consistent pattern, although strictly speaking our\nevidence is anecdotal. We've got hundreds of known failures with the\nBETWEEN variant on equal dates and no known successes. We have a few\ndozen tests of the equality variant with 100% success in those tests.\n\n> ISTM that if we have equal plan costs then we should be choosing the\n> index for which we have more leading columns, since that is more\nlikely\n> to lead to a more selective answer. But the plan selection is a\nsimple\n> \"pick the best, or if they're equal pick the best sort order\".\n\n> The selectivity seems the same in both - clamped to a minimum of 1\nrow,\n> so changing that doesn't look like it would help.\n\nThe fact that it costs these as equivalent is surprising in itself, and\nmight be worth examining. This might be an example of something I\nsuggested a while ago -- that the rounding a row estimate to an integer\non the basis that \"you can't read half a row\" is not necessarily wise,\nbecause you can have a 50% chance of reading a row versus a higher or\nlower percentage.\n\n-Kevin\n\n\n",
"msg_date": "Wed, 15 Mar 2006 17:34:33 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] BETWEEN optimizer problems with single-value"
},
{
"msg_contents": "On Wed, 2006-03-15 at 14:17 -0500, Tom Lane wrote:\n\n> It looks to me like this is a matter of bad cost estimation, ie, it's\n> thinking the other index is cheaper to use. Why that is is not clear.\n> Can we see the pg_stats rows for ctofcNo and calDate?\n\nISTM that when the BETWEEN constants match we end up in this part of\nclauselist_selectivity()...\n\n\n",
"msg_date": "Thu, 16 Mar 2006 00:07:06 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value"
},
{
"msg_contents": "On Thu, 2006-03-16 at 00:07 +0000, Simon Riggs wrote:\n> On Wed, 2006-03-15 at 14:17 -0500, Tom Lane wrote:\n> \n> > It looks to me like this is a matter of bad cost estimation, ie, it's\n> > thinking the other index is cheaper to use. Why that is is not clear.\n> > Can we see the pg_stats rows for ctofcNo and calDate?\n> \n> ISTM that when the BETWEEN constants match we end up in this part of\n> clauselist_selectivity()...\n\n(and now for the whole email...)\n\n\t/*\n\t * It's just roundoff error; use a small positive\n\t * value\n\t */\n\ts2 = 1.0e-10;\n\nso that the planner underestimates the cost of using \"Cal_CalDate\" so\nthat it ends up the same as \"Cal_CtofcNo\", and then we pick\n\"Cal_CalDate\" because it was created first.\n\nUsing 1.0e-10 isn't very useful... the selectivity for a range should\nnever be less than the selectivity for an equality, so we should simply\nput in a test against one of the pseudo constants and use that as the\nminimal value. That should lead to raising the apparent cost of\nCal_CalDate so that Cal_CtofcNo can take precedence.\n\nBest Regards, Simon Riggs\n\n\n\n\n\n",
"msg_date": "Thu, 16 Mar 2006 00:24:38 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n>> ISTM that when the BETWEEN constants match we end up in this part of\n>> clauselist_selectivity()...\n\nYeah, I think you are right.\n\n> so that the planner underestimates the cost of using \"Cal_CalDate\" so\n> that it ends up the same as \"Cal_CtofcNo\", and then we pick\n> \"Cal_CalDate\" because it was created first.\n\nNo, it doesn't end up the same --- but the difference is small enough to\nbe in the roundoff-error regime. The real issue here is that we're\neffectively assuming that one row will be fetched from the index in both\ncases, and this is clearly not the case for the Cal_CalDate index. So\nwe need a more accurate estimate for the boundary case.\n\n> Using 1.0e-10 isn't very useful... the selectivity for a range should\n> never be less than the selectivity for an equality, so we should simply\n> put in a test against one of the pseudo constants and use that as the\n> minimal value.\n\nThat's easier said than done, because you'd first have to find the\nappropriate equality operator to use (ie, one having semantics that\nagree with the inequality operators). Another point is that the above\nstatement is simply wrong, consider\n\tcalDate BETWEEN '2006-03-15' AND '2006-03-14'\nfor which an estimate of zero really is correct.\n\nPossibly we could drop this code's reliance on seeing\nSCALARLTSEL/SCALARGTSEL as the estimators, and instead try to locate a\ncommon btree opclass for the operators --- which would then let us\nidentify the right equality operator to use, and also let us distinguish\n> from >= etc. If we're trying to get the boundary cases right I\nsuspect we have to account for that. I could see such an approach being\ntremendously slow though :-(, because we'd go looking for btree\nopclasses even for operators that have nothing to do with < or >.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 15 Mar 2006 21:05:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value "
},
{
"msg_contents": "On Wed, 2006-03-15 at 21:05 -0500, Tom Lane wrote:\n> So we need a more accurate estimate for the boundary case.\n\nAgreed.\n\n> > Using 1.0e-10 isn't very useful... the selectivity for a range should\n> > never be less than the selectivity for an equality, so we should simply\n> > put in a test against one of the pseudo constants and use that as the\n> > minimal value.\n> \n> That's easier said than done, because you'd first have to find the\n> appropriate equality operator to use (ie, one having semantics that\n> agree with the inequality operators). \n...\n\nKevin: this is also the reason we can't simply transform the WHERE\nclause into a more appropriate form...\n\n> Possibly we could drop this code's reliance on seeing\n> SCALARLTSEL/SCALARGTSEL as the estimators, and instead try to locate a\n> common btree opclass for the operators --- which would then let us\n> identify the right equality operator to use, and also let us distinguish\n> > from >= etc. If we're trying to get the boundary cases right I\n> suspect we have to account for that. I could see such an approach being\n> tremendously slow though :-(, because we'd go looking for btree\n> opclasses even for operators that have nothing to do with < or >.\n\nTrying to get the information in the wrong place would be very\nexpensive, I agree. But preparing that information when we have access\nto it and passing it through the plan would be much cheaper. Relating\nop->opclass will be very useful in other places in planning, even if any\none case seems not to justify the work to record it. (This case feels\nlike deja vu, all over again.)\n\nThe operator and the opclass are only connected via an index access\nmethod, but for a particular index each column has only one opclass. So\nthe opclass will have a 1-1 correspondence with the operator for *that*\nplan only, realising that other plans might have different\ncorrespondences. find_usable_indexes() or thereabouts could annotate a\nrestriction OpExpr with the opclass it will use. \n\nOnce we have the link, clauselist_selectivity() can trivially compare\nopclasses for both OpExprs, then retrieve other information for that\nopclass for various purposes.\n\nSeems lots of work for such a corner case, but would be worth it if this\nsolves other problems as well.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Thu, 16 Mar 2006 11:53:52 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> Trying to get the information in the wrong place would be very\n> expensive, I agree. But preparing that information when we have access\n> to it and passing it through the plan would be much cheaper.\n\nWhere would that be?\n\n> The operator and the opclass are only connected via an index access\n> method, but for a particular index each column has only one opclass.\n\nIf you're proposing making clauselist_selectivity depend on what indexes\nexist, I think that's very much the wrong approach. In the first place,\nit still has to give usable answers for unindexed columns, and in the\nsecond place there might be multiple indexes with different opclasses\nfor the same column, so the ambiguity problem still exists.\n\nI have been wondering if we shouldn't add some more indexes on pg_amop\nor something to make it easier to do this sort of lookup --- we\ndefinitely seem to be finding multiple reasons to want to look up\nwhich opclasses contain a given operator.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Mar 2006 10:57:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value "
},
{
"msg_contents": "On Thu, 2006-03-16 at 10:57 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > Trying to get the information in the wrong place would be very\n> > expensive, I agree. But preparing that information when we have access\n> > to it and passing it through the plan would be much cheaper.\n> \n> Where would that be?\n> \n> > The operator and the opclass are only connected via an index access\n> > method, but for a particular index each column has only one opclass.\n> \n> If you're proposing making clauselist_selectivity depend on what indexes\n> exist, I think that's very much the wrong approach. \n\nUsing available information sounds OK to me. Guess you're thinking of\nthe lack of plan invalidation?\n\n> In the first place,\n> it still has to give usable answers for unindexed columns, and in the\n> second place there might be multiple indexes with different opclasses\n> for the same column, so the ambiguity problem still exists.\n\nI was thinking that we would fill out the OpExpr with different\nopclasses for each plan, so each one sees a different story. (I was\nthinking there was a clauselist for each plan; if not, there could be.)\nSo the multiple index problem shouldn't exist.\n\nNon-indexed cases still cause the problem, true.\n\n> I have been wondering if we shouldn't add some more indexes on pg_amop\n> or something to make it easier to do this sort of lookup --- we\n> definitely seem to be finding multiple reasons to want to look up\n> which opclasses contain a given operator.\n\nAgreed, but still looking for better way than that.\n\n[BTW how do you add new indexes to system tables? I want to add one to\npg_inherits but not sure where to look.]\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Thu, 16 Mar 2006 19:28:18 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value"
},
{
"msg_contents": "Simon Riggs wrote:\n\n> [BTW how do you add new indexes to system tables? I want to add one to\n> pg_inherits but not sure where to look.]\n\nSee src/include/catalog/indexing.h -- I don't remember if there's\nanything else that needs modification.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 16 Mar 2006 15:41:52 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value"
},
{
"msg_contents": "On Thu, 2006-03-16 at 15:41 -0400, Alvaro Herrera wrote:\n> Simon Riggs wrote:\n> \n> > [BTW how do you add new indexes to system tables? I want to add one to\n> > pg_inherits but not sure where to look.]\n> \n> See src/include/catalog/indexing.h -- I don't remember if there's\n> anything else that needs modification.\n\nThat was easy: many thanks!\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Thu, 16 Mar 2006 19:43:22 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> I was thinking that we would fill out the OpExpr with different\n> opclasses for each plan, so each one sees a different story. (I was\n> thinking there was a clauselist for each plan; if not, there could be.)\n\nThis is backwards: there isn't a plan yet. If there were, having\nclauselist_selectivity return different answers depending on what index\nthe plan was thinking of using would still be wrong.\n\n> [BTW how do you add new indexes to system tables? I want to add one to\n> pg_inherits but not sure where to look.]\n\nsrc/include/catalog/indexing.h\n\nOffhand I think adding a new entry is all you have to do. You may also\nwant a syscache to go with it, which'll take a bit more work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Mar 2006 14:45:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value "
},
{
"msg_contents": "On Thu, 2006-03-16 at 14:45 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > [BTW how do you add new indexes to system tables? I want to add one to\n> > pg_inherits but not sure where to look.]\n> \n> src/include/catalog/indexing.h\n> \n> Offhand I think adding a new entry is all you have to do. You may also\n> want a syscache to go with it, which'll take a bit more work.\n\nI see its actually postgres.bki... I never scrolled to the bottom before\nnow.\n\nI'll have a go.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Thu, 16 Mar 2006 20:00:19 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN optimizer problems with single-value"
}
] |
[
{
"msg_contents": "Hello,\n\nAfter fixing the hanging problems I reported here earlier (by uninstalling \nW2K3 SP1), I'm running into another weird one.\n\nAfter doing a +/- 8hr cycle of updates and inserts (what we call a 'batch'), \nthe first 'reporting' type query on tables involved in that write cycle is \nvery slow. As an example, I have a query which according to EXPLAIN ANALYZE \ntakes about 1.1s taking 46s. After this one hit, everything is back to \nnormal, and subsequent executions of the same query are in fact subsecond. \nRestarting the appserver and pgsql does not make the slowness re-appear, only \nrunning another batch will.\n\nDuring the 'write'/batch cycle, a large number of rows in various tables are \ninserted and subsequently (repeatedly) updated. The reporting type queries \nafter that are basically searches on those tables.\n\nAnybody any ideas?\n\nThanks,\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser [email protected]\n\n Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n",
"msg_date": "Wed, 15 Mar 2006 14:39:13 -0500",
"msg_from": "Jan de Visser <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow SELECTS after large update cycle"
},
{
"msg_contents": "Jan de Visser wrote:\n> Hello,\n> \n> After fixing the hanging problems I reported here earlier (by uninstalling \n> W2K3 SP1), I'm running into another weird one.\n> \n> After doing a +/- 8hr cycle of updates and inserts (what we call a 'batch'), \n> the first 'reporting' type query on tables involved in that write cycle is \n> very slow. As an example, I have a query which according to EXPLAIN ANALYZE \n> takes about 1.1s taking 46s. After this one hit, everything is back to \n> normal, and subsequent executions of the same query are in fact subsecond. \n> Restarting the appserver and pgsql does not make the slowness re-appear, only \n> running another batch will.\n> \n> During the 'write'/batch cycle, a large number of rows in various tables are \n> inserted and subsequently (repeatedly) updated. The reporting type queries \n> after that are basically searches on those tables.\n\nAfter a large batch you need to run 'analyze' over the tables involved \nto get postgresql to update it's statistics so it can work out which \nindexes etc it should use.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Thu, 16 Mar 2006 10:11:55 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECTS after large update cycle"
},
{
"msg_contents": "On Wed, 2006-03-15 at 14:39 -0500, Jan de Visser wrote:\n\n> After fixing the hanging problems I reported here earlier (by uninstalling \n> W2K3 SP1), I'm running into another weird one.\n> \n> After doing a +/- 8hr cycle of updates and inserts (what we call a 'batch'), \n> the first 'reporting' type query on tables involved in that write cycle is \n> very slow. As an example, I have a query which according to EXPLAIN ANALYZE \n> takes about 1.1s taking 46s. After this one hit, everything is back to \n> normal, and subsequent executions of the same query are in fact subsecond. \n> Restarting the appserver and pgsql does not make the slowness re-appear, only \n> running another batch will.\n> \n> During the 'write'/batch cycle, a large number of rows in various tables are \n> inserted and subsequently (repeatedly) updated. The reporting type queries \n> after that are basically searches on those tables.\n> \n> Anybody any ideas?\n\nThis is caused by updating the commit status hint bits on each row\ntouched by the SELECTs. This turns the first SELECT into a write\noperation.\n\nTry running a scan of the whole table to take the hit before you give it\nback to the users.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 15 Mar 2006 23:21:27 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow SELECTS after large update cycle"
},
{
"msg_contents": "On Wednesday 15 March 2006 18:21, Simon Riggs wrote:\n> On Wed, 2006-03-15 at 14:39 -0500, Jan de Visser wrote:\n> > After fixing the hanging problems I reported here earlier (by\n> > uninstalling W2K3 SP1), I'm running into another weird one.\n> >\n> > After doing a +/- 8hr cycle of updates and inserts (what we call a\n> > 'batch'), the first 'reporting' type query on tables involved in that\n> > write cycle is very slow. As an example, I have a query which according\n> > to EXPLAIN ANALYZE takes about 1.1s taking 46s. After this one hit,\n> > everything is back to normal, and subsequent executions of the same query\n> > are in fact subsecond. Restarting the appserver and pgsql does not make\n> > the slowness re-appear, only running another batch will.\n> >\n> > During the 'write'/batch cycle, a large number of rows in various tables\n> > are inserted and subsequently (repeatedly) updated. The reporting type\n> > queries after that are basically searches on those tables.\n> >\n> > Anybody any ideas?\n>\n> This is caused by updating the commit status hint bits on each row\n> touched by the SELECTs. This turns the first SELECT into a write\n> operation.\n>\n> Try running a scan of the whole table to take the hit before you give it\n> back to the users.\n\nThanks Simon. I didn't know about the cause, but I expected the answer to be \n'deal with it', as it is. At least I can explain it now...\n\n>\n> Best Regards, Simon Riggs\n\njan\n\n-- \n--------------------------------------------------------------\nJan de Visser [email protected]\n\n Baruk Khazad! Khazad ai-menu!\n--------------------------------------------------------------\n",
"msg_date": "Wed, 15 Mar 2006 19:42:00 -0500",
"msg_from": "Jan de Visser <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow SELECTS after large update cycle"
}
] |
[
{
"msg_contents": "We were seeing clusters of query timeouts with our web site, which were\ncorrected by adjusting the configuration of the background writer. I'm\nposting just to provide information which others might find useful -- I\ndon't have any problem I'm trying to solve in this regard.\n\nThe web site gets 1 to 2 million hits per day, with about the same\nnumber of select queries run to provide data for the web pages. The\nload is distributed across multiple databases. (We have four, but the\nload is easily handled by any two of them, and we often take one or two\nout of web use for maintenance or special statistical runs.) Each\ndatabase gets the same stream of modification requests -- about 2.7\nmillion database transactions per day. Each transaction can contain\nmultiple inserts, updates, or deletes. The peak times for both the web\nrequests and the data modifications are in the afternoon on business\ndays. Most web queries run under a timeout limit of 20 seconds.\n\nDuring peak times, we would see clusters of timeouts (where queries\nexceeded the 20 second limit) on very simple queries which normally run\nin a few milliseconds. The pattern suggested that checkpoints were at\nfault. I boosted the settings for the background writer from the\ndefaults to the values below, and we saw a dramatic reduction in these\ntimeouts. We also happened to have one machine which had been out of\nthe replication mix which was in \"catch up\" mode, processing the\ntransaction stream as fast as the database could handle it, without any\nweb load. We saw the transaction application rate go up by a factor of\nfour when I applied these changes:\n\nbgwriter_lru_percent = 2.0\nbgwriter_lru_maxpages = 250\nbgwriter_all_percent = 1.0\nbgwriter_all_maxpages = 250\n\nThis was with shared_buffers = 20000, so that last value was\neffectively limited to 200 by the percentage.\n\nI then did some calculations, based on the sustained write speed of our\ndrive array (as measured by copying big files to it), and we tried\nthis:\n\nbgwriter_lru_percent = 20.0\nbgwriter_lru_maxpages = 200\nbgwriter_all_percent = 10.0\nbgwriter_all_maxpages = 600\n\nThis almost totally eliminated the clusters of timeouts, and caused the\ntransaction application rate to increase by a factor of eight over the\nalready-improved speed. (That is, we were running 30 to 35 times as\nmany transactions per minute into the database, compared to the default\nbackground writer configuration.) I'm going to let these settings\nsettle in for a week or two before we try adjusting them further (to see\nif we can eliminate those last few timeouts of this type).\n\nI guess my point is that people shouldn't be shy about boosting these\nnumbers by a couple orders of magnitude from the default values. It may\nalso be worth considering whether the defaults should be something more\naggressive.\n\n-Kevin\n\n",
"msg_date": "Wed, 15 Mar 2006 13:43:45 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Background writer configuration"
},
{
"msg_contents": "\n> I then did some calculations, based on the sustained write speed of our\n> drive array (as measured by copying big files to it), and we tried\n> this:\n>\n> bgwriter_lru_percent = 20.0\n> bgwriter_lru_maxpages = 200\n> bgwriter_all_percent = 10.0\n> bgwriter_all_maxpages = 600\n>\n> This almost totally eliminated the clusters of timeouts, and caused the\n> transaction application rate to increase by a factor of eight over the\n> already-improved speed. (That is, we were running 30 to 35 times as\n> many transactions per minute into the database, compared to the default\n> background writer configuration.) I'm going to let these settings\n> settle in for a week or two before we try adjusting them further (to see\n> if we can eliminate those last few timeouts of this type).\n\n\nCan you tell us what type of array you have?\n\nJoshua D. Drake\n\n>\n> I guess my point is that people shouldn't be shy about boosting these\n> numbers by a couple orders of magnitude from the default values. It may\n> also be worth considering whether the defaults should be something more\n> aggressive.\n>\n> -Kevin\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n",
"msg_date": "Wed, 15 Mar 2006 11:54:33 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": ">>> On Wed, Mar 15, 2006 at 1:54 pm, in message\n<[email protected]>, \"Joshua D. Drake\"\n<[email protected]> wrote: \n\n>> I then did some calculations, based on the sustained write speed of\nour\n>> drive array (as measured by copying big files to it), and we tried\n>> this:\n>>\n>> bgwriter_lru_percent = 20.0\n>> bgwriter_lru_maxpages = 200\n>> bgwriter_all_percent = 10.0\n>> bgwriter_all_maxpages = 600\n>>\n>> This almost totally eliminated the clusters of timeouts, and caused\nthe\n>> transaction application rate to increase by a factor of eight over\nthe\n>> already- improved speed. (That is, we were running 30 to 35 times\nas\n>> many transactions per minute into the database, compared to the\ndefault\n>> background writer configuration.) I'm going to let these settings\n>> settle in for a week or two before we try adjusting them further (to\nsee\n>> if we can eliminate those last few timeouts of this type).\n> \n> \n> Can you tell us what type of array you have?\n\nEach machine has a RAID5 array of 13 (plus one hot spare)\n 15,000 RPM Ultra 320 SCSI drives\n2 machines using IBM ServRaid6M battery backed caching controllers\n2 machines using IBM ServRaid4MX battery backed caching controllers\n\n\n",
"msg_date": "Wed, 15 Mar 2006 14:43:01 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": "Kevin,\n\nplease, could you post other settings from your postgresql.conf?\n\ninterested in:\n\nbgwriter_delay\nshared_buffers\ncheckpoint_segments \ncheckpoint_timeout\nwal_buffers\n\nOn Wed, 15 Mar 2006 13:43:45 -0600\n\"Kevin Grittner\" <[email protected]> wrote:\n\n> We were seeing clusters of query timeouts with our web site, which were\n> corrected by adjusting the configuration of the background writer. I'm\n> posting just to provide information which others might find useful -- I\n> don't have any problem I'm trying to solve in this regard.\n> \n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n",
"msg_date": "Thu, 16 Mar 2006 21:15:23 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": ">>> On Thu, Mar 16, 2006 at 12:15 pm, in message\n<[email protected]>, Evgeny Gridasov\n<[email protected]> wrote: \n> \n> please, could you post other settings from your postgresql.conf?\n\nEverything in postgresql.conf which is not commented out:\n\nlisten_addresses = '*' # what IP interface(s) to listen on;\nmax_connections = 600 # note: increasing\nmax_connections costs\nshared_buffers = 20000 # min 16 or max_connections*2,\n8KB each\nwork_mem = 10240 # min 64, size in KB\nmax_fsm_pages = 1400000 # min max_fsm_relations*16, 6\nbytes each\nbgwriter_lru_percent = 20.0 # 0-100% of LRU buffers\nscanned/round\nbgwriter_lru_maxpages = 200 # 0-1000 buffers max\nwritten/round\nbgwriter_all_percent = 10.0 # 0-100% of all buffers\nscanned/round\nbgwriter_all_maxpages = 600 # 0-1000 buffers max\nwritten/round\nfull_page_writes = off # recover from partial page\nwrites\nwal_buffers = 20 # min 4, 8KB each\ncheckpoint_segments = 10 # in logfile segments, min 1,\n16MB each\neffective_cache_size = 524288 # typically 8KB each\nrandom_page_cost = 2 # units are one sequential page\nfetch\nredirect_stderr = on # Enable capturing of stderr\ninto log\nlog_line_prefix = '[%m] %p %q<%u %d %r> ' #\nSpecial values:\nstats_start_collector = on\nstats_block_level = on\nstats_row_level = on\nautovacuum = true # enable autovacuum\nsubprocess?\nautovacuum_naptime = 10 # time between autovacuum runs, in\nsecs\nautovacuum_vacuum_threshold = 1 # min # of tuple updates before\nautovacuum_analyze_threshold = 1 # min # of tuple updates\nbefore\nautovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before\nautovacuum_analyze_scale_factor = 0.1 # fraction of rel size before\nlc_messages = 'C' # locale for system error\nmessage\nlc_monetary = 'C' # locale for monetary\nformatting\nlc_numeric = 'C' # locale for number formatting\nlc_time = 'C' # locale for time formatting\nsql_inheritance = off\nstandard_conforming_strings = on\n\n",
"msg_date": "Thu, 16 Mar 2006 15:58:53 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": "Yesterday we recieved a new server 2xAMD64(2core x 2chips = 4 cores)\n8GB RAM and RAID-1 (LSI megaraid)\nI've maid some tests with pgbench (scaling 1000, database size ~ 16Gb)\n\nFirst of all, I'd like to mention that it was strange to see that\nthe server performance degraded by 1-2% when we changed kernel/userland to x86_64\nfrom default installed i386 userland/amd64 kernel. The operating system was Debian Linux,\nfilesystem ext3.\n\nbg_writer_*_percent/maxpages setting did not dramatically increase performance,\nbut setting bg_writer_delay to values x10 original setting (2000-4000) increased\ntransaction rate by 4-7 times.\nI've tried shared buffers 32768, 65536, performance was almost equal.\n\nfor all tests:\ncheckpoint_segments = 16 \ncheckpoint_timeout = 900\nshared_buffers=65536\nwal_buffers=128:\n\n\nbgwriter_delay = 200\nbgwriter_lru_percent = 10.0\nbgwriter_lru_maxpages = 100\nbgwriter_all_percent = 5.0\nbgwriter_all_maxpages = 50\n\nresult:\n./pgbench -c 32 -t 500 -U postgres regression\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1000\nnumber of clients: 32\nnumber of transactions per client: 500\nnumber of transactions actually processed: 16000/16000\ntps = 112.740903 (including connections establishing)\ntps = 112.814327 (excluding connections establishing)\n\n(disk activity about 2-4mb/sec writing)\n\n\nbgwriter_delay = 4000\nbgwriter_lru_percent = 10.0\nbgwriter_lru_maxpages = 100\nbgwriter_all_percent = 5.0\nbgwriter_all_maxpages = 50\n\nresult:\n./pgbench -c 32 -t 500 -U postgres regression\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1000\nnumber of clients: 32\nnumber of transactions per client: 500\nnumber of transactions actually processed: 16000/16000\ntps = 508.637831 (including connections establishing)\ntps = 510.107981 (excluding connections establishing)\n\n(disk activity about 20-40 mb/sec writing)\n\nSetting bgwriter_delay to higher values leads to slower postgresql shutdown time\n(I see postgresql writer process writing to disk). 
Sometimes postgresql didn't\nshutdown correctly (doesn't complete background writing ?).\n\nI've found some settings with which system behaves strange:\n\n./pgbench -c 32 -t 3000 -U postgres regression\n\nvmstat 1:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 1 25 528 14992 22884 7876736 0 0 457 383 77 83 1 0 94 5\n 0 7 632 14728 22892 7875780 0 88 4412 9456 1594 21623 9 5 8 78\n 1 19 796 16904 22928 7872712 0 16 3536 9053 1559 19717 9 4 12 75\n 0 4 872 14928 22936 7874208 0 36 3036 9092 1574 20874 9 4 2 85\n 0 24 912 16292 22964 7872068 0 44 3020 9316 1581 19922 9 4 9 78\n 0 1 912 17800 22980 7869876 0 0 2596 8700 1560 19926 9 4 4 84\n 4 23 996 18284 22996 7868292 32 0 3396 11048 1657 22802 11 5 3 81\n 0 22 960 14728 23020 7871448 52 0 3020 9648 1613 21641 9 4 5 82\n 0 28 1008 15440 23028 7869624 0 48 2992 10052 1608 21430 9 5 5 82\n 1 16 1088 17328 23044 7867196 0 0 2460 7884 1530 16536 8 3 9 79\n 0 23 1088 18440 23052 7865556 0 0 3256 10128 1635 22587 10 4 4 81\n 1 29 1076 14728 23076 7868604 0 0 2968 9860 1597 21518 10 5 7 79\n 1 24 1136 15952 23084 7866700 0 40 2696 8900 1560 19311 9 4 5 81\n 0 14 1208 17200 23112 7864736 0 16 2888 9508 1603 20634 10 4 6 80\n 0 21 1220 18520 23120 7862828 0 72 2816 9487 1572 19888 10 4 7 79\n 1 21 1220 14792 23144 7866000 0 0 2960 9536 1599 20331 9 5 5 81\n 1 24 1220 16392 23152 7864088 0 0 2860 8932 1583 19288 9 4 3 84\n 0 18 1276 18000 23168 7862048 0 0 2792 8592 1553 18843 9 4 9 78\n 1 17 1348 19144 23176 7860132 0 16 2840 9604 1583 20654 10 4 6 80\n 0 22 64 15112 23200 7864264 528 0 3280 8785 1582 19339 9 4 7 80\n 0 25 16 16008 23212 7862664 4 0 2764 8964 1605 18471 9 4 8 79\n 0 26 16 17544 23236 7860872 0 0 3008 9848 1590 20527 10 4 7 79\n 1 7 16 18704 23244 7858960 0 0 2756 8760 1564 19875 9 4 4 84\n 1 25 16 15120 23268 7861996 0 0 2768 8512 1550 18518 9 3 12 75\n 1 25 16 18076 23276 7859812 0 0 2484 8580 1536 18391 8 4 8 80\n 0 3 16 17832 23300 7862916 0 0 2888 8864 1586 21450 9 4 4 83\n 0 14 16 24280 23308 7866036 0 0 2816 9140 1537 20655 9 4 7 81\n 1 1 16 54452 23348 7867968 0 0 1808 6988 1440 14235 6 9 24 61\n 0 1 16 51988 23348 7868036 0 0 60 4180 1344 885 1 10 72 16\n 0 2 16 51988 23348 7868036 0 0 0 3560 1433 50 0 0 75 25\n 0 2 16 51988 23348 7868036 0 0 0 2848 1364 46 0 0 75 25\n 0 2 16 51988 23348 7868036 0 0 0 2560 1350 44 0 0 75 25\n 0 4 16 51996 23360 7868092 0 0 0 2603 1328 60 0 0 72 28\n 0 4 16 52060 23360 7868092 0 0 0 2304 1306 46 0 0 75 25\n 0 4 16 52140 23360 7868092 0 0 0 2080 1288 40 0 0 75 25\n 0 2 16 52140 23360 7868092 0 0 0 2552 1321 48 0 0 75 25\n 0 2 16 52220 23360 7868092 0 0 0 2560 1335 44 0 0 75 25\n 0 2 16 52220 23360 7868092 0 0 0 2560 1340 48 0 0 75 25\n 0 2 16 52284 23360 7868092 0 0 0 2560 1338 48 0 0 75 25\n... continued\n\nduring the time with zero read io and write io about 2500 I see many hanging \npostgresql processes executing UPDATE or COMMIT. 
This lasts for a minute or so,\nafter that I see the same IO which was during benchmark start.\n\nWhat happens during this period?\n\nOn Thu, 16 Mar 2006 15:58:53 -0600\n\"Kevin Grittner\" <[email protected]> wrote:\n\n> >>> On Thu, Mar 16, 2006 at 12:15 pm, in message\n> <[email protected]>, Evgeny Gridasov\n> <[email protected]> wrote: \n> > \n> > please, could you post other settings from your postgresql.conf?\n> \n> Everything in postgresql.conf which is not commented out:\n> \n> listen_addresses = '*' # what IP interface(s) to listen on;\n> max_connections = 600 # note: increasing\n> max_connections costs\n> shared_buffers = 20000 # min 16 or max_connections*2,\n> 8KB each\n> work_mem = 10240 # min 64, size in KB\n> max_fsm_pages = 1400000 # min max_fsm_relations*16, 6\n> bytes each\n> bgwriter_lru_percent = 20.0 # 0-100% of LRU buffers\n> scanned/round\n> bgwriter_lru_maxpages = 200 # 0-1000 buffers max\n> written/round\n> bgwriter_all_percent = 10.0 # 0-100% of all buffers\n> scanned/round\n> bgwriter_all_maxpages = 600 # 0-1000 buffers max\n> written/round\n> full_page_writes = off # recover from partial page\n> writes\n\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n",
"msg_date": "Fri, 17 Mar 2006 15:24:48 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": "\n> First of all, I'd like to mention that it was strange to see that\n> the server performance degraded by 1-2% when we changed kernel/userland \n> to x86_64\n> from default installed i386 userland/amd64 kernel. The operating system \n> was Debian Linux,\n> filesystem ext3.\n\n\tDid you use postgres compiled for AMD64 with the 64 kernel, or did you \nuse a 32 bit postgres in emulation mode ?\n",
"msg_date": "Fri, 17 Mar 2006 14:35:15 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": "template1=# select version();\n version \n---------------------------------------------------------------------------------------------\n PostgreSQL 8.1.3 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian 1:3.3.5-13)\n(1 row)\n\n\nOn Fri, 17 Mar 2006 14:35:15 +0100\nPFC <[email protected]> wrote:\n\n> \n> > First of all, I'd like to mention that it was strange to see that\n> > the server performance degraded by 1-2% when we changed kernel/userland \n> > to x86_64\n> > from default installed i386 userland/amd64 kernel. The operating system \n> > was Debian Linux,\n> > filesystem ext3.\n> \n> \tDid you use postgres compiled for AMD64 with the 64 kernel, or did you \n> use a 32 bit postgres in emulation mode ?\n> \n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n",
"msg_date": "Fri, 17 Mar 2006 17:50:17 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": "I got this :\n\ntemplate1=# select version();\n version\n------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.1.2 on x86_64-pc-linux-gnu, compiled by GCC \nx86_64-pc-linux-gnu-gcc (GCC) 3.4.4 (Gentoo 3.4.4-r1, ssp-3.4.4-1.0, \npie-8.7.8)\n(1 ligne)\n\nNormally you should get a noticeable performance boost by using userland \nexecutables compiled for the 64 platform... strange...\n\n\nOn Fri, 17 Mar 2006 15:50:17 +0100, Evgeny Gridasov <[email protected]> \nwrote:\n\n> template1=# select version();\n> version\n> ---------------------------------------------------------------------------------------------\n> PostgreSQL 8.1.3 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 \n> (Debian 1:3.3.5-13)\n> (1 row)\n>\n>\n> On Fri, 17 Mar 2006 14:35:15 +0100\n> PFC <[email protected]> wrote:\n>\n>>\n>> > First of all, I'd like to mention that it was strange to see that\n>> > the server performance degraded by 1-2% when we changed \n>> kernel/userland\n>> > to x86_64\n>> > from default installed i386 userland/amd64 kernel. The operating \n>> system\n>> > was Debian Linux,\n>> > filesystem ext3.\n>>\n>> \tDid you use postgres compiled for AMD64 with the 64 kernel, or did you\n>> use a 32 bit postgres in emulation mode ?\n>>\n>\n\n\n",
"msg_date": "Fri, 17 Mar 2006 15:55:31 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": ">>> On Fri, Mar 17, 2006 at 6:24 am, in message\n<[email protected]>, Evgeny Gridasov\n<[email protected]> wrote: \n> \n> I've maid some tests with pgbench\n\n\nIf possible, tune the background writer with your actual application\ncode under normal load. Optimal tuning is going to vary based on usage\npatterns. You can change these settings on the fly by editing the\npostgresql.conf file and running pg_ctl reload. This is very nice, as\nit allowed us to try various settings in our production environment\nwhile two machines dealt with normal update and web traffic and another\nwas in a saturated update process.\n\nFor us, the key seems to be to get the dirty blocks pushed out to the\nOS level cache as soon as possible, so that the OS can deal with them\nbefore the checkpoint comes along.\n\n> for all tests:\n> checkpoint_segments = 16 \n> checkpoint_timeout = 900\n> shared_buffers=65536\n> wal_buffers=128:\n\n> ./pgbench - c 32 - t 500 - U postgres regression\n\nUnless you are going to be running in short bursts of activity, be sure\nthat the testing is sustained long enough to get through several\ncheckpoints and settle into a \"steady state\" with any caching\ncontroller, etc. On the face of it, it doesn't seem like this test\nshows anything except how it would behave with a relatively short burst\nof activity sandwiched between big blocks of idle time. I think your\nsecond test may look so good because it is just timing how fast it can\npush a few rows into cache space.\n\n> Setting bgwriter_delay to higher values leads to slower postgresql\nshutdown time\n> (I see postgresql writer process writing to disk). Sometimes\npostgresql didn't\n> shutdown correctly (doesn't complete background writing ?).\n\nYeah, here's where it gets to trying to finish all the work you avoided\nmeasuring in your benchmark.\n\n-Kevin\n\n",
"msg_date": "Fri, 17 Mar 2006 09:08:23 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Background writer configuration"
},
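A minimal sketch of the reload-and-retest cycle Kevin describes above: adjust the background writer settings, reload without a restart, confirm the live values, then run a sustained pgbench. The PGDATA path, the editor, and the transaction count are illustrative assumptions rather than commands taken from the thread; only the reload step and the "span several checkpoints" advice come from the posts.

  # e.g. raise bgwriter_all_maxpages / bgwriter_all_percent
  $EDITOR $PGDATA/postgresql.conf

  # bgwriter_* settings take effect on reload, no restart needed
  pg_ctl reload -D $PGDATA

  # check what the running server is actually using
  psql -c "SELECT name, setting FROM pg_settings WHERE name LIKE 'bgwriter%';"

  # make the run long enough to pass through several checkpoints before judging tps
  pgbench -c 32 -t 20000 -U postgres regression

Watching vmstat or iostat alongside the run makes it easier to see whether writes are being smoothed out between checkpoints or still arriving in bursts at checkpoint time.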
{
"msg_contents": "\nOn Mar 17, 2006, at 4:24 AM, Evgeny Gridasov wrote:\n\n> Yesterday we recieved a new server 2xAMD64(2core x 2chips = 4 cores)\n> 8GB RAM and RAID-1 (LSI megaraid)\n> I've maid some tests with pgbench (scaling 1000, database size ~ 16Gb)\n>\n> First of all, I'd like to mention that it was strange to see that\n> the server performance degraded by 1-2% when we changed kernel/ \n> userland to x86_64\n> from default installed i386 userland/amd64 kernel. The operating \n> system was Debian Linux,\n> filesystem ext3.\n\n64 bit binaries usually run marginally slower than 32 bit binaries.\nAIUI the main reason is that they're marginally bigger, so fit less\nwell in cache, have to haul themselves over the memory channels\nand so on. They're couch potato binaries. I've seen over 10% performance\nloss in compute-intensive code, so a couple of percent isn't too\nbad at all.\n\nIf that 64 bit addressing gets you cheap access to lots of RAM, and\nyour main applications can make good use of that then\nthat can easily outweigh the overall loss in performance\n\nCheers,\n Steve\n\n",
"msg_date": "Fri, 17 Mar 2006 08:56:58 -0800",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": "On Fri, Mar 17, 2006 at 08:56:58AM -0800, Steve Atkins wrote:\n> 64 bit binaries usually run marginally slower than 32 bit binaries.\n\nThis depends a bit on the application, and what you mean by \"64 bit\" (ie.\nwhat architecture). Some specialized applications actually benefit from\nhaving a 64-bit native data type (especially stuff working with a small\namount of bitfields -- think an anagram program), but Postgres is probably\nnot among them unless you do lots of arithmetic on bigints. amd64 has the\nadded benefit that you get twice as many registers available in 64-bit mode\n(16 vs. 8 -- the benefit gets even bigger when you consider that a few of\nthose go to stack pointers etc.), so in some code you might get a few percent\nextra from that, too.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 17 Mar 2006 18:20:01 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": "On 2006-03-17, at 15:50, Evgeny Gridasov wrote:\n\n> template1=# select version();\n> version\n> ---------------------------------------------------------------------- \n> -----------------------\n> PostgreSQL 8.1.3 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) \n> 3.3.5 (Debian 1:3.3.5-13)\n> (1 row)\n\nHow about something like:\n$ file /usr/lib/postgresql/bin/postgres\n(or whatever directory postmaster binary is in) instead?\n\n-- \n11.\n\n",
"msg_date": "Fri, 17 Mar 2006 18:56:32 +0100",
"msg_from": "11 <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
},
{
"msg_contents": "eugene@test:~$ file /usr/lib/postgresql/8.1/bin/postgres \n/usr/lib/postgresql/8.1/bin/postgres: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.0, dynamically linked (uses shared libs), stripped\n\nOn Fri, 17 Mar 2006 18:56:32 +0100\n11 <[email protected]> wrote:\n\n> On 2006-03-17, at 15:50, Evgeny Gridasov wrote:\n> \n> > template1=# select version();\n> > version\n> > ---------------------------------------------------------------------- \n> > -----------------------\n> > PostgreSQL 8.1.3 on x86_64-pc-linux-gnu, compiled by GCC cc (GCC) \n> > 3.3.5 (Debian 1:3.3.5-13)\n> > (1 row)\n> \n> How about something like:\n> $ file /usr/lib/postgresql/bin/postgres\n> (or whatever directory postmaster binary is in) instead?\n\n\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n",
"msg_date": "Fri, 17 Mar 2006 21:36:06 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Background writer configuration"
}
] |
[
{
"msg_contents": "Hello,\n\nWe are experiencing performances problem with a quad Xeon MP and\nPostgreSQL 7.4 for a year now. Our context switch rate is not so high\nbut the load of the server is blocked to 4 even on very high load and\nwe have 60% cpu idle even in this case. Our database fits in RAM and\nwe don't have any IO problem. I saw this post from Tom Lane\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00249.php\nand several other references to problem with Xeon MP and I suspect our\nproblems are related to this.\nWe tried to put our production load on a dual standard Xeon on monday\nand it performs far better with the same configuration parameters.\n\nI know that work has been done by Tom for PostgreSQL 8.1 on\nmultiprocessor support but I didn't find any information on if it\nsolves the problem with Xeon MP or not.\n\nMy question is should we expect a resolution of our problem by\nswitching to 8.1 or will we still have problems and should we consider\na hardware change? We will try to upgrade next tuesday so we will have\nthe real answer soon but if anyone has any experience or information\non this, he will be very welcome.\n\nThanks for your help.\n\n--\nGuillaume\n",
"msg_date": "Thu, 16 Mar 2006 11:45:12 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL and Xeon MP"
},
{
"msg_contents": "Guillaume Smet wrote:\n> Hello,\n> \n> We are experiencing performances problem with a quad Xeon MP and\n> PostgreSQL 7.4 for a year now.\n\nI had a similar issue with a client the other week.\n\n> Our context switch rate is not so high\n> but the load of the server is blocked to 4 even on very high load and\n> we have 60% cpu idle even in this case. Our database fits in RAM and\n> we don't have any IO problem.\n\nActually, I think that's part of the problem - it's the memory bandwidth.\n\n > I saw this post from Tom Lane\n> http://archives.postgresql.org/pgsql-performance/2004-04/msg00249.php\n> and several other references to problem with Xeon MP and I suspect our\n> problems are related to this.\n\nYou should be seeing context-switching jump dramatically if it's the \n\"classic\" multi-Xeon problem. There's a point at which it seems to just \nescalate without a corresponding jump in activity.\n\n> We tried to put our production load on a dual standard Xeon on monday\n> and it performs far better with the same configuration parameters.\n> \n> I know that work has been done by Tom for PostgreSQL 8.1 on\n> multiprocessor support but I didn't find any information on if it\n> solves the problem with Xeon MP or not.\n\nI checked with Tom last week. Thread starts below:\n http://archives.postgresql.org/pgsql-hackers/2006-02/msg01118.php\n\nHe's of the opinion that 8.1.3 will be an improvement.\n\n> My question is should we expect a resolution of our problem by\n> switching to 8.1 or will we still have problems and should we consider\n> a hardware change? We will try to upgrade next tuesday so we will have\n> the real answer soon but if anyone has any experience or information\n> on this, he will be very welcome.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 16 Mar 2006 11:21:47 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "Richard,\n\n> You should be seeing context-switching jump dramatically if it's the\n> \"classic\" multi-Xeon problem. There's a point at which it seems to just\n> escalate without a corresponding jump in activity.\n\nNo we don't have this problem of very high context switching in our\ncase even when the database is very slow. When I mean very slow, we\nhave pages which loads in a few seconds in the normal case (load\nbetween 3 and 4) which takes several minutes (up to 5-10 minutes) to\nbe generated in the worst case (load at 4 but really bad\nperformances).\nIf I take a look on our cpu load graph, in one year, the cpu load was\nnever higher than 5 even in the worst cases...\n\n> I checked with Tom last week. Thread starts below:\n> http://archives.postgresql.org/pgsql-hackers/2006-02/msg01118.php\n>\n> He's of the opinion that 8.1.3 will be an improvement.\n\nThanks for pointing me this thread, I searched in -performance not in\n-hackers as the original thread was in -performance. We planned a\nmigration to 8.1.3 so we'll see what happen with this version.\n\nDo you plan to test it before next tuesday? If so, I'm interested in\nyour results. I'll post our results here as soon as we complete the\nupgrade.\n\n--\nGuillaume\n",
"msg_date": "Thu, 16 Mar 2006 13:28:11 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "Guillaume Smet wrote:\n> Richard,\n> \n>> You should be seeing context-switching jump dramatically if it's the\n>> \"classic\" multi-Xeon problem. There's a point at which it seems to just\n>> escalate without a corresponding jump in activity.\n> \n> No we don't have this problem of very high context switching in our\n> case even when the database is very slow. When I mean very slow, we\n> have pages which loads in a few seconds in the normal case (load\n> between 3 and 4) which takes several minutes (up to 5-10 minutes) to\n> be generated in the worst case (load at 4 but really bad\n> performances).\n\nVery strange.\n\n> If I take a look on our cpu load graph, in one year, the cpu load was\n> never higher than 5 even in the worst cases...\n> \n>> I checked with Tom last week. Thread starts below:\n>> http://archives.postgresql.org/pgsql-hackers/2006-02/msg01118.php\n>>\n>> He's of the opinion that 8.1.3 will be an improvement.\n> \n> Thanks for pointing me this thread, I searched in -performance not in\n> -hackers as the original thread was in -performance. We planned a\n> migration to 8.1.3 so we'll see what happen with this version.\n> \n> Do you plan to test it before next tuesday? If so, I'm interested in\n> your results. I'll post our results here as soon as we complete the\n> upgrade.\n\nThe client has just bought an Opteron to run on, I'm afraid. I might try \n8.1 on the Xeon but it'll just be to see what happens and that won't be \nfor a while.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 16 Mar 2006 12:55:59 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "Hi Guillaume,\n\nI had a similar issue last summer. Could you please provide details \nabout your XEON MP server and some statistics (context-switches/load/CPU \nusage)?\n\nI tried different servers (x86) with different results. I saw a \ndifference between XEON MP w/ and w/o EMT64. The memory bandwidth makes \nalso a difference.\n\nWhat version of XEON MP does your server have?\nWhich type of RAM does you server have?\nDo you use Hyperthreading?\n\nYou should provide details from the XEON DP?\n\nRegards\nSven.\n\nGuillaume Smet schrieb:\n> Richard,\n> \n>> You should be seeing context-switching jump dramatically if it's the\n>> \"classic\" multi-Xeon problem. There's a point at which it seems to just\n>> escalate without a corresponding jump in activity.\n> \n> No we don't have this problem of very high context switching in our\n> case even when the database is very slow. When I mean very slow, we\n> have pages which loads in a few seconds in the normal case (load\n> between 3 and 4) which takes several minutes (up to 5-10 minutes) to\n> be generated in the worst case (load at 4 but really bad\n> performances).\n> If I take a look on our cpu load graph, in one year, the cpu load was\n> never higher than 5 even in the worst cases...\n> \n>> I checked with Tom last week. Thread starts below:\n>> http://archives.postgresql.org/pgsql-hackers/2006-02/msg01118.php\n>>\n>> He's of the opinion that 8.1.3 will be an improvement.\n> \n> Thanks for pointing me this thread, I searched in -performance not in\n> -hackers as the original thread was in -performance. We planned a\n> migration to 8.1.3 so we'll see what happen with this version.\n> \n> Do you plan to test it before next tuesday? If so, I'm interested in\n> your results. I'll post our results here as soon as we complete the\n> upgrade.\n> \n> --\n> Guillaume\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n-- \n/This email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you are not the intended recipient, you should not\ncopy it, re-transmit it, use it or disclose its contents, but should\nreturn it to the sender immediately and delete your copy from your\nsystem. Thank you for your cooperation./\n\nSven Geisler <[email protected]> Tel +49.30.5362.1627 Fax .1638\nSenior Developer, AEC/communications GmbH Berlin, Germany\n",
"msg_date": "Thu, 16 Mar 2006 14:11:16 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "On 3/16/06, Richard Huxton <[email protected]> wrote:\n> Very strange.\n\nSure. I can't find any logical explanation for that but it is the\nbehaviour we have for more than a year now (the site was migrated from\nOracle to PostgreSQL on january 2005).\nWe check iostat, vmstat and so on without any hint on why we have this\nbehaviour.\n\n> The client has just bought an Opteron to run on, I'm afraid. I might try\n> 8.1 on the Xeon but it'll just be to see what happens and that won't be\n> for a while.\n\nI don't think it will be an option for us so I will have more\ninformation next week.\n",
"msg_date": "Thu, 16 Mar 2006 14:41:50 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "Sven,\n\nOn 3/16/06, Sven Geisler <[email protected]> wrote:\n> What version of XEON MP does your server have?\n\nThe server is a dell 6650 from end of 2004 with 4 xeon mp 2.2 and 2MB\ncache per proc.\n\nHere are the information from Dell:\n4x PROCESSOR, 80532, 2.2GHZ, 2MB cache, 400Mhz, SOCKET F\n8x DUAL IN-LINE MEMORY MODULE, 512MB, 266MHz\n\n> Do you use Hyperthreading?\n\nNo, we don't use it.\n\n> You should provide details from the XEON DP?\n\nThe only problem is that the Xeon DP is installed with a 2.6 kernel\nand a postgresql 8.1.3 (it is used to test the migration from 7.4 to\n8.1.3). So it's very difficult to really compare the two behaviours.\n\nIt's a Dell 2850 with:\n2 x PROCESSOR, 80546K, 2.8G, 1MB cache, XEON NOCONA, 800MHz\n4 x DUAL IN-LINE MEMORY MODULE, 1GB, 400MHz\n\nThis server is obviously newer than the other one.\n\n--\nGuillaume\n",
"msg_date": "Thu, 16 Mar 2006 15:17:32 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "On 3/16/06, Sven Geisler <[email protected]> wrote:\n> Hi Guillaume,\n>\n> I had a similar issue last summer. Could you please provide details\n> about your XEON MP server and some statistics (context-switches/load/CPU\n> usage)?\n\nI forgot the statistics:\nCPU load usually from 1 to 4.\nCPU usage < 40% for each processor usually and sometimes when the\nserver completely hangs, it grows to 60%..,\n\nHere is a top output of the server at this time:\n 15:21:17 up 138 days, 13:25, 1 user, load average: 1.29, 1.25, 1.38\n82 processes: 81 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 25.7% 0.0% 3.9% 0.0% 0.3% 0.1% 69.7%\n cpu00 29.3% 0.0% 4.7% 0.1% 0.5% 0.0% 65.0%\n cpu01 20.7% 0.0% 1.9% 0.0% 0.3% 0.0% 76.8%\n cpu02 25.5% 0.0% 5.5% 0.0% 0.1% 0.3% 68.2%\n cpu03 27.3% 0.0% 3.3% 0.0% 0.1% 0.1% 68.8%\nMem: 3857224k av, 3298580k used, 558644k free, 0k shrd, 105172k buff\n 2160124k actv, 701304k in_d, 56400k in_c\nSwap: 4281272k av, 6488k used, 4274784k free 2839348k cached\n\nWe have currently between 3000 and 13000 context switches/s, average\nof 5000 I'd say visually.\n\nHere is a top output I had on november 17 when the server completely\nhangs (several minutes for each page of the website) and it is typical\nof this server behaviour:\n17:08:41 up 19 days, 15:16, 1 user, load average: 4.03, 4.26, 4.36\n288 processes: 285 sleeping, 3 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 59.0% 0.0% 8.8% 0.2% 0.0% 0.0% 31.9%\n cpu00 52.3% 0.0% 13.3% 0.9% 0.0% 0.0% 33.3%\n cpu01 65.7% 0.0% 7.6% 0.0% 0.0% 0.0% 26.6%\n cpu02 58.0% 0.0% 7.6% 0.0% 0.0% 0.0% 34.2%\n cpu03 60.0% 0.0% 6.6% 0.0% 0.0% 0.0% 33.3%\nMem: 3857224k av, 3495880k used, 361344k free, 0k shrd, 92160k buff\n 2374048k actv, 463576k in_d, 37708k in_c\nSwap: 4281272k av, 25412k used, 4255860k free 2173392k cached\n\nAs you can see, load is blocked to 4, no iowait and cpu idle of 30%.\n\nVmstat showed 5000 context switches/s on average so we had no context\nswitch storm.\n",
"msg_date": "Thu, 16 Mar 2006 15:30:07 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "On Thu, Mar 16, 2006 at 11:45:12AM +0100, Guillaume Smet wrote:\n> Hello,\n> \n> We are experiencing performances problem with a quad Xeon MP and\n> PostgreSQL 7.4 for a year now. Our context switch rate is not so high\n> but the load of the server is blocked to 4 even on very high load and\n> we have 60% cpu idle even in this case. Our database fits in RAM and\n> we don't have any IO problem. I saw this post from Tom Lane\n> http://archives.postgresql.org/pgsql-performance/2004-04/msg00249.php\n> and several other references to problem with Xeon MP and I suspect our\n> problems are related to this.\n> We tried to put our production load on a dual standard Xeon on monday\n> and it performs far better with the same configuration parameters.\n> \n> I know that work has been done by Tom for PostgreSQL 8.1 on\n> multiprocessor support but I didn't find any information on if it\n> solves the problem with Xeon MP or not.\n> \n> My question is should we expect a resolution of our problem by\n> switching to 8.1 or will we still have problems and should we consider\n> a hardware change? We will try to upgrade next tuesday so we will have\n> the real answer soon but if anyone has any experience or information\n> on this, he will be very welcome.\n> \n> Thanks for your help.\n> \n> --\n> Guillaume\n> \n\nGuillaume,\n\nWe had a similar problem with poor performance on a Xeon DP and \nPostgreSQL 7.4.x. 8.0 came out in time for preliminary testing but\nit did not solve the problem and our production systems went live\nusing a different database product. We are currently testing against\n8.1.x and the seemingly bizarre lack of performance is gone. I would\nsuspect that a quad-processor box would have the same issue. I would\ndefinitely recommend giving 8.1 a try.\n\nKen\n",
"msg_date": "Thu, 16 Mar 2006 08:30:20 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> Here is a top output I had on november 17 when the server completely\n> hangs (several minutes for each page of the website) and it is typical\n> of this server behaviour:\n> 17:08:41 up 19 days, 15:16, 1 user, load average: 4.03, 4.26, 4.36\n> 288 processes: 285 sleeping, 3 running, 0 zombie, 0 stopped\n> CPU states: cpu user nice system irq softirq iowait idle\n> total 59.0% 0.0% 8.8% 0.2% 0.0% 0.0% 31.9%\n> cpu00 52.3% 0.0% 13.3% 0.9% 0.0% 0.0% 33.3%\n> cpu01 65.7% 0.0% 7.6% 0.0% 0.0% 0.0% 26.6%\n> cpu02 58.0% 0.0% 7.6% 0.0% 0.0% 0.0% 34.2%\n> cpu03 60.0% 0.0% 6.6% 0.0% 0.0% 0.0% 33.3%\n> Mem: 3857224k av, 3495880k used, 361344k free, 0k shrd, 92160k buff\n> 2374048k actv, 463576k in_d, 37708k in_c\n> Swap: 4281272k av, 25412k used, 4255860k free 2173392k cached\n\n> As you can see, load is blocked to 4, no iowait and cpu idle of 30%.\n\nCan you try strace'ing some of the backend processes while the system is\nbehaving like this? I suspect what you'll find is a whole lot of\ndelaying select() calls due to high contention for spinlocks ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Mar 2006 10:20:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Xeon MP "
},
{
"msg_contents": "Hi Guillaume,\n\nGuillaume Smet schrieb:\n>\n> The server is a dell 6650 from end of 2004 with 4 xeon mp 2.2 and 2MB\n> cache per proc.\n> \n> Here are the information from Dell:\n> 4x PROCESSOR, 80532, 2.2GHZ, 2MB cache, 400Mhz, SOCKET F\n> 8x DUAL IN-LINE MEMORY MODULE, 512MB, 266MHz\n> \n....\n> \n>> You should provide details from the XEON DP?\n> \n> The only problem is that the Xeon DP is installed with a 2.6 kernel\n> and a postgresql 8.1.3 (it is used to test the migration from 7.4 to\n> 8.1.3). So it's very difficult to really compare the two behaviours.\n> \n> It's a Dell 2850 with:\n> 2 x PROCESSOR, 80546K, 2.8G, 1MB cache, XEON NOCONA, 800MHz\n> 4 x DUAL IN-LINE MEMORY MODULE, 1GB, 400MHz\n> \n\nDid you compare 7.4 on a 4-way with 8.1 on a 2-way?\nHow many queries and clients did you use to test the performance?\nHow much faster is the XEON DP?\n\nI think, you can expect that your XEON DP is faster on a single query \nbecause CPU and RAM are faster. The overall performance can be better on \nyour XEON DP if you only have a few clients.\n\nI guess, the newer hardware and the newer PostgreSQL version cause the \nbetter performance.\n\nRegards\nSven.\n",
"msg_date": "Thu, 16 Mar 2006 16:20:31 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "On 3/16/06, Sven Geisler <[email protected]> wrote:\n> Did you compare 7.4 on a 4-way with 8.1 on a 2-way?\n\nI know there are too many parameters changing between the two servers\nbut I can't really change anything before tuesday. On tuesday, we will\nbe able to compare both servers with the same software.\n\n> How many queries and clients did you use to test the performance?\n\nGooglebot is indexing this site generating 2-3 mbits/s of traffic so\nwe use the googlebot to stress this server. There was a lot of clients\nand a lot of queries.\n\n> How much faster is the XEON DP?\n\nWell, on high load, PostgreSQL scales well on the DP (load at 40,\nqueries slower but still performing well) and is awfully slow on the\nMP box.\n",
"msg_date": "Thu, 16 Mar 2006 17:05:38 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "On 3/16/06, Tom Lane <[email protected]> wrote:\n> Can you try strace'ing some of the backend processes while the system is\n> behaving like this? I suspect what you'll find is a whole lot of\n> delaying select() calls due to high contention for spinlocks ...\n\nTom,\n\nI think we can try to do it.\n\nYou mean strace -p pid with pid on some of the postgres process not on\nthe postmaster itself, does you? Do we need other options?\nWhich pattern should we expect? I'm not really familiar with strace\nand its output.\n\nThanks for your help.\n",
"msg_date": "Thu, 16 Mar 2006 17:08:46 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> You mean strace -p pid with pid on some of the postgres process not on\n> the postmaster itself, does you?\n\nRight, pick a couple that are accumulating CPU time.\n\n> Do we need other options?\n\nstrace will generate a *whole lot* of output to stderr. I usually do\nsomething like\n\tstrace -p pid 2>outfile\nand then control-C it after a few seconds.\n\n> Which pattern should we expect?\n\nWhat we want to find out is if there's a lot of select()s and/or\nsemop()s shown in the result. Ideally there wouldn't be any, but\nI fear that's not what you'll find.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 16 Mar 2006 11:34:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Xeon MP "
},
{
"msg_contents": "Hi Guillaume,\n\nGuillaume Smet schrieb:\n>> How much faster is the XEON DP?\n> \n> Well, on high load, PostgreSQL scales well on the DP (load at 40,\n> queries slower but still performing well) and is awfully slow on the\n> MP box.\n\nI know what you mean with awfully slow.\nI think, your application is facing contention. The contention becomes \nlarger as more CPU you have. PostgreSQL 8.1 is addressing contention on \nmultiprocessor servers as you mentioned before.\n\nI guess, you will see that your 4-way XEON MP isn't that bad if you \ncompare both servers with the same PostgreSQL version.\n\nRegards\nSven.\n",
"msg_date": "Thu, 16 Mar 2006 17:36:52 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "On 3/16/06, Tom Lane <[email protected]> wrote:\n> What we want to find out is if there's a lot of select()s and/or\n> semop()s shown in the result. Ideally there wouldn't be any, but\n> I fear that's not what you'll find.\n\nOK, I'll try to do it on monday before our upgrade then see what\nhappens with PostgreSQL 8.1.3.\n\nThanks for your help.\n",
"msg_date": "Thu, 16 Mar 2006 18:53:12 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and Xeon MP"
},
{
"msg_contents": "On 3/16/06, Tom Lane <[email protected]> wrote:\n> Can you try strace'ing some of the backend processes while the system is\n> behaving like this? I suspect what you'll find is a whole lot of\n> delaying select() calls due to high contention for spinlocks ...\n\nAs announced, we have migrated our production server from 7.4.8 to\n8.1.3 this morning. We did some strace'ing before the migration and\nyou were right on the select calls. We had a lot of them even when the\ndatabase was not highly loaded (one every 3-4 lines).\n\nAfter the upgrade, we have the expected behaviour with a more linear\nscalability and a growing cpu load when the database is highly loaded\n(and no cpu idle anymore in this case). We have fewer context switches\ntoo.\n\n8.1.3 definitely is far better for quad Xeon MP and I recommend the\nupgrade for everyone having this sort of problem.\n\nTom, thanks for your great work on this problem.\n\n--\nGuillaume\n",
"msg_date": "Tue, 21 Mar 2006 17:57:54 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL and Xeon MP"
}
] |
[
{
"msg_contents": "PostgreSQL tuned to the max and still too slow? Database too big to \nfit into memory? Here's the solution! http://www.superssd.com/ \nproducts/tera-ramsan/\n\nAnyone purchasing one will be expected to post benchmarks! :)\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n",
"msg_date": "Thu, 16 Mar 2006 12:33:28 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": true,
"msg_subject": "1 TB of memory"
},
{
"msg_contents": "Jim Nasby wrote:\n> PostgreSQL tuned to the max and still too slow? Database too big to \n> fit into memory? Here's the solution! \n> http://www.superssd.com/products/tera-ramsan/\n>\n> Anyone purchasing one will be expected to post benchmarks! :)\n\nAnd give us one :)\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n",
"msg_date": "Thu, 16 Mar 2006 10:44:51 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "On 3/16/06, Jim Nasby <[email protected]> wrote:\n> PostgreSQL tuned to the max and still too slow? Database too big to\n> fit into memory? Here's the solution! http://www.superssd.com/\n> products/tera-ramsan/\n>\n> Anyone purchasing one will be expected to post benchmarks! :)\n\nPricing is tight-lipped, but searching shows $1.85 /GB. That's close\nto $500,000 for 250GB. One report says a person paid $219,000 for 32GB\nand 1TB costs \"well over $1,000,000.\"\n\nBut they \"guarantee the performance.\"\n\nToo rich for me.\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Thu, 16 Mar 2006 14:46:26 -0600",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "Luke,\n\n> With a single 3 Gbyte/second infiniband connection to the device?\n\nHey, take it easy! Jim's post was tongue-in-cheek.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Thu, 16 Mar 2006 21:43:25 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "Jim,\n\n> PostgreSQL tuned to the max and still too slow? Database too big to\n> fit into memory? Here's the solution! http://www.superssd.com/\n> products/tera-ramsan/\n\nWith a single 3 Gbyte/second infiniband connection to the device?\n\nYou'd be better off with 4 x $10K servers that do 800MB/s from disk each and\na Bizgres MPP - then you'd do 3.2GB/s (faster than the SSD) at a price 1/10\nof the SSD, and you'd have 24TB of RAID5 disk under you.\n\nPlus - need more speed? Add 12 more servers, and you'd run at 12.8GB/s and\nhave 96TB of disk to work with, and you'd *still* spend less on HW and SW\nthan the SSD.\n \n- Luke\n\n\n",
"msg_date": "Thu, 16 Mar 2006 22:44:25 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "Jim,\n\nOn 3/16/06 10:44 PM, \"Luke Lonergan\" <[email protected]> wrote:\n\n> Plus - need more speed? Add 12 more servers, and you'd run at 12.8GB/s and\n> have 96TB of disk to work with, and you'd *still* spend less on HW and SW\n> than the SSD.\n\nAnd I forgot to mention that with these 16 servers you'd have 64 CPUs and\n256GB of RAM working for you in addition to the 96TB of disk. Every query\nwould use all of that RAM and all of those CPUs, all at the same time.\n\nBy comparison, with the SSD, you'd have 1 CPU trying to saturate 1\nconnection to the SSD. If you do anything other than just access the data\nthere (order by, group by, join, aggregation, functions), you'll be faced\nwith trying to have 1 CPU do all the work on 1 TB of data. I suggest that\nit won't be any faster than having the 1 TB on disk for most queries, as you\nwould be CPU bound.\n\nBy comparison, with the MPP system, all 64 CPUs would be used at one time to\nprocess the N TB of data and if you grew from N TB to 2N TB, you could\ndouble the machine size and it would take the same amount of time to do 2N\nas it did to do N. That's what data parallelism and scaling is all about.\nWithout it, you don't have a prayer of using all 1TB of data in queries.\n\n- Luke\n\n\n",
"msg_date": "Thu, 16 Mar 2006 23:24:00 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "On Thu, Mar 16, 2006 at 10:44:25PM -0800, Luke Lonergan wrote:\n>You'd be better off with 4 x $10K servers that do 800MB/s from disk each and\n>a Bizgres MPP - then you'd do 3.2GB/s (faster than the SSD) at a price 1/10\n>of the SSD, and you'd have 24TB of RAID5 disk under you.\n\nExcept, of course, that your solution doesn't have a seek time of zero. \nThat approach is great for applications that are limited by their \nsequential scan speed, not so good for applications with random access. \nAt 3.2 GB/s it would still take over 5 minutes to seqscan a TB, so you'd \nprobably want some indices--and you're not going to be getting 800MB/s \nper system doing random index scans from rotating disk (but you might \nwith SSD). Try not to beat your product drum quite so loud...\n\nMike Stone\n",
"msg_date": "Fri, 17 Mar 2006 07:31:23 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "For God's sake buy a mainframe! =o)\n\nOn 3/17/06, Michael Stone <[email protected]> wrote:\n> On Thu, Mar 16, 2006 at 10:44:25PM -0800, Luke Lonergan wrote:\n> >You'd be better off with 4 x $10K servers that do 800MB/s from disk each and\n> >a Bizgres MPP - then you'd do 3.2GB/s (faster than the SSD) at a price 1/10\n> >of the SSD, and you'd have 24TB of RAID5 disk under you.\n>\n> Except, of course, that your solution doesn't have a seek time of zero.\n> That approach is great for applications that are limited by their\n> sequential scan speed, not so good for applications with random access.\n> At 3.2 GB/s it would still take over 5 minutes to seqscan a TB, so you'd\n> probably want some indices--and you're not going to be getting 800MB/s\n> per system doing random index scans from rotating disk (but you might\n> with SSD). Try not to beat your product drum quite so loud...\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n",
"msg_date": "Fri, 17 Mar 2006 05:38:14 -0800",
"msg_from": "\"Rodrigo Madera\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "We got a quote for one of these (entirely for comedy value of course) \nand it was in the region of �1,500,000 give or take a few thousand.\n\nOn 16 Mar 2006, at 18:33, Jim Nasby wrote:\n\n> PostgreSQL tuned to the max and still too slow? Database too big to \n> fit into memory? Here's the solution! http://www.superssd.com/ \n> products/tera-ramsan/\n>\n> Anyone purchasing one will be expected to post benchmarks! :)\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n",
"msg_date": "Fri, 17 Mar 2006 13:47:58 +0000",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "On 3/16/06, Jim Nasby <[email protected]> wrote:\n> PostgreSQL tuned to the max and still too slow? Database too big to\n> fit into memory? Here's the solution! http://www.superssd.com/\n> products/tera-ramsan/\n>\n> Anyone purchasing one will be expected to post benchmarks! :)\n\nI like their approach...ddr ram + raid sanity backup + super reliable\npower system. Their prices are on jupiter (and i dont mean jupiter,\nfl) but hopefully there will be some competition and the invetible\ndecline in prices. When prices drop from the current 1-2k$/Gb to a\nmore realistic 250$/Gb there will be no reason not to throw one into a\nserver. You could already make a case for an entry level one to\nhandle the WAL and perhaps a few key tables/indexes, particularly ones\nthat are frequenct vacuum targets.\n\nddr approach is much faster than flash nvram inherintly and has a\nvirtually unlimited duty cycle. My prediction is that by 2010 SSD\nwill be relatively commonplace in the server market, barring some\nrediculous goverment intervention (patentes, etc).\n\nmerlin\n",
"msg_date": "Fri, 17 Mar 2006 08:55:48 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "On 3/17/06, Rodrigo Madera <[email protected]> wrote:\n> I don't know about you databasers that crunch in some selects, updates\n> and deletes, but my personal developer workstation is planned to be a\n> 4x 300GB SATA300 with a dedicated RAID stripping controller (no\n> checksums, just speedup) and 4x AMD64 CPUs... not to mention 2GB for\n> each processor... all this in a nice server motherboard...\n\nno doubt, that will handle quite a lot of data. in fact, most\ndatabases (contrary to popular opinion) are cpu bound, not i/o bound. \nHowever, at some point a different set of rules come into play. This\npoint is constantly chaning due to the relentless march of hardware\nbut I'd suggest that at around 1TB you can no longer count on things\nto run quickly just depending on o/s file caching to bail you out. \nOr, you may have a single table + indexes thats 50 gb that takes 6\nhours to vacuum sucking all your i/o.\n\nanother useful aspect of SSD is the relative value of using system\nmemory is much less, so you can reduce swappiness and tune postgres to\nrely more on the filesystem and give all your memory to work_mem and\nsuch.\n\nmerlin\n",
"msg_date": "Fri, 17 Mar 2006 09:57:36 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "Josh,\n\nOn 3/16/06 9:43 PM, \"Josh Berkus\" <[email protected]> wrote:\n\n>> With a single 3 Gbyte/second infiniband connection to the device?\n> \n> Hey, take it easy! Jim's post was tongue-in-cheek.\n\nYou're right - I insulted his bandwidth, sorry :-)\n\n- Luke\n\n\n",
"msg_date": "Fri, 17 Mar 2006 07:59:16 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "On Thu, Mar 16, 2006 at 10:44:25PM -0800, Luke Lonergan wrote:\n> Jim,\n> \n> > PostgreSQL tuned to the max and still too slow? Database too big to\n> > fit into memory? Here's the solution! http://www.superssd.com/\n> > products/tera-ramsan/\n> \n> With a single 3 Gbyte/second infiniband connection to the device?\n> \n> You'd be better off with 4 x $10K servers that do 800MB/s from disk each and\n> a Bizgres MPP - then you'd do 3.2GB/s (faster than the SSD) at a price 1/10\n> of the SSD, and you'd have 24TB of RAID5 disk under you.\n> \n> Plus - need more speed? Add 12 more servers, and you'd run at 12.8GB/s and\n> have 96TB of disk to work with, and you'd *still* spend less on HW and SW\n> than the SSD.\n\nNow what happens as soon as you start doing random I/O? :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 17 Mar 2006 11:36:53 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "On 3/17/06, Luke Lonergan <[email protected]> wrote:\n> > Now what happens as soon as you start doing random I/O? :)\n> If you are accessing 3 rows at a time from among billions, the problem you\n> have is mostly access time - so an SSD might be very good for some OLTP\n> applications. However - the idea of putting Terabytes of data into an SSD\n> through a thin straw of a channel is silly.\n\nI'll 'byte' on this..right now the price for gigabyte of ddr ram is\nhovering around 60$/gigabyte. If you conveniently leave aside the\nproblem of making ddr ram fault tolerant vs making disks tolerant, you\nare getting 10 orders of magnitude faster seek time and unlimited\nbandwidth...at least from the physical device. While SANs are getting\ncheaper they are still fairly expensive at 1-5$/gigabyte depending on\nvarious factors. You can do the same tricks on SSD storage as with\ndisks.\n\nSSD storage is 1-2k$/gigabyte currently, but I think there is huge\nroom to maneuver price-wise after the major players recoup their\ninvestments and market forces kick in. IMO this process is already in\nplay and the next cycle of hardware upgrades in the enterprise will be\nupdating critical servers with SSD storage. Im guessing by as early\n2010 a significant percentage of enterpise storage will be SSD of some\nflavor.\n\nmerlin\n",
"msg_date": "Fri, 17 Mar 2006 16:28:07 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "On Fri, 2006-03-17 at 15:28, Merlin Moncure wrote:\n> On 3/17/06, Luke Lonergan <[email protected]> wrote:\n> > > Now what happens as soon as you start doing random I/O? :)\n> > If you are accessing 3 rows at a time from among billions, the problem you\n> > have is mostly access time - so an SSD might be very good for some OLTP\n> > applications. However - the idea of putting Terabytes of data into an SSD\n> > through a thin straw of a channel is silly.\n> \n> I'll 'byte' on this..right now the price for gigabyte of ddr ram is\n> hovering around 60$/gigabyte. If you conveniently leave aside the\n> problem of making ddr ram fault tolerant vs making disks tolerant, you\n> are getting 10 orders of magnitude faster seek time and unlimited\n> bandwidth...at least from the physical device. While SANs are getting\n> cheaper they are still fairly expensive at 1-5$/gigabyte depending on\n> various factors. You can do the same tricks on SSD storage as with\n> disks.\n> \n> SSD storage is 1-2k$/gigabyte currently, but I think there is huge\n> room to maneuver price-wise after the major players recoup their\n> investments and market forces kick in. IMO this process is already in\n> play and the next cycle of hardware upgrades in the enterprise will be\n> updating critical servers with SSD storage. Im guessing by as early\n> 2010 a significant percentage of enterpise storage will be SSD of some\n> flavor.\n\nNow I'm envisioning building something with commodity 1U servers hold 4\nto 16 gigs ram, and interconnected with 1g or 10g ethernet.\n\nOpen Source SSD via iSCSI with commodity hardware... hmmm. sounds like\na useful project.\n",
"msg_date": "Fri, 17 Mar 2006 16:07:08 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "Jim,\n\nOn 3/17/06 9:36 AM, \"Jim C. Nasby\" <[email protected]> wrote:\n\n> Now what happens as soon as you start doing random I/O? :)\n\nWell - given that we've divided the data into 32 separate segments, and that\nseeking is done in parallel over all 256 disk drives, random I/O rocks hard\nand scales. Of course, the parallelizing planner is designed to minimize\nseeking as much as possible, as is the normal Postgres planner, but with\nmore segment and more parallel platters, seeking is faster.\n\nThe biggest problem with this idea of \"put huge amounts of data on your SSD\nand everything is infinitely fast\" is that it ignores several critical\nscaling factors:\n- How much bandwidth is available in and out of the device?\n- Does that bandwidth scale as you grow the data?\n- As you grow the data, how long does it take to use the data?\n- Can more than 1 CPU use the data at once? Do they share the path to the\ndata?\n\nIf you are accessing 3 rows at a time from among billions, the problem you\nhave is mostly access time - so an SSD might be very good for some OLTP\napplications. However - the idea of putting Terabytes of data into an SSD\nthrough a thin straw of a channel is silly.\n\nNote that SSDs have been around for a *long* time. I was using them on Cray\nX/MP and 2 supercomputers back in 1987-92, when we had a 4 Million Word SSD\nconnected over a 2GB/s channel. In fact, some people I worked with built a\nmachine with 4 Cray 2 computers that shared an SSD between them for parallel\ncomputing and it was very effective, and also ungodly expensive and special\npurpose.\n\n- Luke\n\n\n",
"msg_date": "Fri, 17 Mar 2006 15:03:19 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "\nOn Mar 17, 2006, at 8:55 AM, Merlin Moncure wrote:\n\n> I like their approach...ddr ram + raid sanity backup + super reliable\n> power system. Their prices are on jupiter (and i dont mean jupiter,\n> fl) but hopefully there will be some competition and the invetible\n\nNothing unique to them. I have a 4 year old SSD from a now out-of- \nbusiness company, Imperial Technology. Initially we bought it for \nabout $20k with 1GB of RAM. Subsequently upgraded to 5GB for another \n$20k. The speed is wicked fast even with just ultra2 SCSI (4 \nchannels). The unit has the same battery backup to disk stuff \n(although it only does the backup at power fail).\n\nAt one time they quoted me about $80k to upgrade it to a full 32MB \nthat the unit supports. I passed.\n\nFor my use it was worth the price. However, given the speed increase \nof other components since then, I don't think I'd buy one today. \nParallelism (if you can do it like Luke suggested) is the way to go.\n\nAnd no, I have not run a database on one of these... though I am \ntempted to...\n\n",
"msg_date": "Mon, 20 Mar 2006 14:04:23 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "\nOn Mar 17, 2006, at 5:07 PM, Scott Marlowe wrote:\n\n> Open Source SSD via iSCSI with commodity hardware... hmmm. sounds \n> like\n> a useful project.\n\nshhhhh! don't give away our top secret plans!\n\n",
"msg_date": "Mon, 20 Mar 2006 14:07:53 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "> > I like their approach...ddr ram + raid sanity backup + super reliable\n> > power system. Their prices are on jupiter (and i dont mean jupiter,\n> > fl) but hopefully there will be some competition and the invetible\n>\n> Nothing unique to them. I have a 4 year old SSD from a now out-of-\n> business company, Imperial Technology. Initially we bought it for\n> about $20k with 1GB of RAM. Subsequently upgraded to 5GB for another\n> $20k. The speed is wicked fast even with just ultra2 SCSI (4\n> channels). The unit has the same battery backup to disk stuff\n> (although it only does the backup at power fail).\n\nyou may or may not be intersted to know they are back in business :).\n\n> For my use it was worth the price. However, given the speed increase\n> of other components since then, I don't think I'd buy one today.\n> Parallelism (if you can do it like Luke suggested) is the way to go.\n\nThats an interesting statement. My personal opionion is that SSD will\nultimately take over the database storage market as well as most\nconsumer level devices for primary storage. except perhaps for very\nlarge databases (>1tb). Hard disk drives will displace tapes for\nbackup storage.\n\nmerlin\n",
"msg_date": "Mon, 20 Mar 2006 14:44:44 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
},
{
"msg_contents": "\nOn Mar 20, 2006, at 2:44 PM, Merlin Moncure wrote:\n\n>> For my use it was worth the price. However, given the speed increase\n>> of other components since then, I don't think I'd buy one today.\n>> Parallelism (if you can do it like Luke suggested) is the way to go.\n>\n> Thats an interesting statement. My personal opionion is that SSD will\n> ultimately take over the database storage market as well as most\n> consumer level devices for primary storage. except perhaps for very\n\nI tend to agree with you that perhaps one day when the $$ are right, \nbut that day is not today.\n\n",
"msg_date": "Mon, 20 Mar 2006 17:24:53 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 1 TB of memory"
}
] |
[
{
"msg_contents": "Hi\n\nI have a performance problem when traversing a table in index order with\nmultiple columns including a date column in date reverse order. Below\nfollows a simplified description of the table, the index and the\nassociated query\n\n\\d prcdedit\n prcdedit_prcd | character(20) |\n prcdedit_date | timestamp without time zone |\n\nIndexes:\n \"prcdedit_idx\" btree (prcdedit_prcd, prcdedit_date)\n\nWhen invoking a query such as \n\nselect oid, prcdedit_prcd, prcdedit_date, 'dd/mm/yyyy hh24:mi:ss') as\nmydate where prcdedit_prcd > 'somevalue' order by prcdedit_prcd,\nprcdedit_date desc;\n\nthe peformance is dismal.\n\nHowever removing the 'desc' qualifier as follows the query flys\n\nselect oid, prcdedit_prcd, prcdedit_date, 'dd/mm/yyyy hh24:mi:ss') as\nmydate where prcdedit_prcd > 'somevalue' order by prcdedit_prcd,\nprcdedit_date;\n\nPostgreSQL Version = 8.1.2\n\nRow count on the table is > 300000\n\nExplain is as follows for desc\n Limit (cost=81486.35..81486.41 rows=25 width=230) (actual\ntime=116619.652..116619.861 rows=25 loops=1)\n -> Sort (cost=81486.35..82411.34 rows=369997 width=230) (actual\ntime=116619.646..116619.729 rows=25 loops=1)\n Sort Key: prcdedit_prcd, prcdedit_date, oid\n -> Bitmap Heap Scan on prcdedit (cost=4645.99..23454.94\nrows=369997 width=230) (actual time=376.952..11798.834 rows=369630\nloops=1)\n Recheck Cond: (prcdedit_prcd > '063266 \n'::bpchar)\n -> Bitmap Index Scan on prcdedit_idx \n(cost=0.00..4645.99 rows=369997 width=0) (actual time=366.048..366.048\nrows=369630 loops=1)\n Index Cond: (prcdedit_prcd > '063266 \n'::bpchar)\n Total runtime: 116950.175 ms\n\nand as follows when I remove the 'desc'\n\n Limit (cost=0.00..2.34 rows=25 width=230) (actual time=0.082..0.535\nrows=25 loops=1)\n -> Index Scan using prcdedit_idx on prcdedit (cost=0.00..34664.63\nrows=369997 width=230) (actual time=0.075..0.405 rows=25 loops=1)\n Index Cond: (prcdedit_prcd > '063266 '::bpchar)\n Total runtime: 0.664 ms\n\n\nAny assistance/advice much appreciated.\n\n-- \nRegards\nTheo\n\n",
"msg_date": "Thu, 16 Mar 2006 22:34:51 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexes with descending date columns"
}
] |
[
{
"msg_contents": "The US Dept of Homeland Security has at least two =10=TB SSDs.\n<begin speculation>\nRumor is they are being used for Carnivore or an offshoot/descendent of Carnivore.\n<end speculation>\n\nGood luck getting them to give you benchmark data.\n\nYou need >deep< pockets to afford >= 1TB of SSD.\n(...and as the example shows, perhaps more money than sense.)\nRon\n\n-----Original Message-----\n>From: Jim Nasby <[email protected]>\n>Sent: Mar 16, 2006 1:33 PM\n>To: [email protected]\n>Subject: [PERFORM] 1 TB of memory\n>\n>PostgreSQL tuned to the max and still too slow? Database too big to \n>fit into memory? Here's the solution! http://www.superssd.com/ \n>products/tera-ramsan/\n>\n>Anyone purchasing one will be expected to post benchmarks! :)\n>--\n>Jim C. Nasby, Sr. Engineering Consultant [email protected]\n>Pervasive Software http://pervasive.com work: 512-231-6117\n>vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n\n",
"msg_date": "Thu, 16 Mar 2006 15:41:54 -0500 (EST)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 1 TB of memory"
}
] |
[
{
"msg_contents": "explain analyze\nselect distinct eventmain.incidentid, eventmain.entrydate, \neventgeo.long, eventgeo.lat, eventgeo.geox, eventgeo.geoy\nfrom eventmain, eventgeo\nwhere\n eventmain.incidentid = eventgeo.incidentid and\n ( long > -104.998027962962 and long < -104.985957781349 ) and\n ( lat > 39.7075542720006 and lat < 39.7186195832938 ) and\n eventmain.entrydate > '2006-1-1 00:00' and\n eventmain.entrydate <= '2006-3-17 00:00'\norder by\n eventmain.entrydate;\n\n QUERY \nPLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=121313.81..121330.72 rows=451 width=178) (actual \ntime=723719.761..723726.875 rows=1408 loops=1)\n -> Sort (cost=121313.81..121314.94 rows=451 width=178) (actual \ntime=723719.755..723721.807 rows=1408 loops=1)\n Sort Key: eventmain.entrydate, eventmain.disposition, \neventmain.incidentid, eventgeo.reportingarea, eventgeo.beatid, \neventmain.finaltype, eventmain.casenumber, eventgeo.eventlocation, \neventmain.insertdate, eventmain.priority, eventgeo.long, eventgeo.lat, \neventgeo.geox, eventgeo.geoy\n -> Nested Loop (cost=0.00..121293.93 rows=451 width=178) \n(actual time=1916.230..723712.900 rows=1408 loops=1)\n -> Index Scan using eventgeo_lat_idx on eventgeo \n(cost=0.00..85488.05 rows=10149 width=76) (actual time=0.402..393376.129 \nrows=22937 loops=1)\n Index Cond: ((lat > 39.7075542720006::double \nprecision) AND (lat < 39.7186195832938::double precision))\n Filter: ((long > -104.998027962962::double \nprecision) AND (long < -104.985957781349::double precision))\n -> Index Scan using eventmain_incidentid_idx on \neventmain (cost=0.00..3.52 rows=1 width=119) (actual \ntime=14.384..14.392 rows=0 loops=22937)\n Index Cond: ((eventmain.incidentid)::text = \n(\"outer\".incidentid)::text)\n Filter: ((entrydate > '2006-01-01 \n00:00:00'::timestamp without time zone) AND (entrydate <= '2006-03-17 \n00:00:00'::timestamp without time zone))\n\n Total runtime: >>> 723729.238 ms(!) <<<\n\n\n\nI'm trying to figure out why it's consuming so much time on the index \nscan for eventgeo_lat_idx. Also, I have an index on \"long\" that the \nplanner does not appear to find helpful.\n\nThere are 3.3 million records in eventmain and eventgeo. The server has \na reasonably fast RAID10 setup with 16x 15k RPM drives and 12GB of RAM ( \n11GB listed as \"cache\" by vmstat ). Running version 8.0.2 on linux \nkernel 2.6.12.\n\nI have just vacuum analyze'd both tables, rebuilt the eventgeo_lat_idx \nindex and reran the query multiple times to see if caching helped ( it \ndidn't help much ). The server seems to be fine utilizing other fields \nfrom this table but using \"long\" and \"lat\" seem to drag it down \nsignificantly.\n\n Is it because there's such slight differences between the records, \nsince they are all within a few hundredths of a degree from each other?\n\nThanks for your time and ideas.\n\n-Dan\n",
"msg_date": "Thu, 16 Mar 2006 14:40:32 -0700",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help optimizing a slow index scan"
},
{
"msg_contents": "Dan Harris wrote:\n> explain analyze \n.... doh.. sorry to reply to my own post. But I messed up copying some \nof the fields into the select statement that you'll see in the \"Sort \nKey\" section of the analyze results. The mistake was mine. Everything \nelse is \"normal\" between the query and the plan.\n\n-Dan\n",
"msg_date": "Thu, 16 Mar 2006 14:43:56 -0700",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "Markus Bertheau wrote:\n> Have you tried using a GIST index on lat & long? These things are\n> meant for two-dimensional data, whereas btree doesn't handle\n> two-dimensional data that well. How many rows satisfy either of the\n> long / lat condition?\n>\n> \n>> \nAccording to the analyze, less than 500 rows matched. I'll look into \nGIST indexes, thanks for the feedback.\n\n-Dan\n",
"msg_date": "Fri, 17 Mar 2006 08:34:26 -0700",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "Dan Harris wrote:\n> Markus Bertheau wrote:\n>> Have you tried using a GIST index on lat & long? These things are\n>> meant for two-dimensional data, whereas btree doesn't handle\n>> two-dimensional data that well. How many rows satisfy either of the\n>> long / lat condition?\n>>\n>> \n>>> \n> According to the analyze, less than 500 rows matched. I'll look into \n> GIST indexes, thanks for the feedback.\n>\n> -Dan\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\nWhen I try to create a GIST index, I get the following error:\n\ncreate index eventgeo_lat_idx on eventgeo using GIST (lat);\n\nERROR: data type double precision has no default operator class for \naccess method \"gist\"\nHINT: You must specify an operator class for the index or define a \ndefault operator class for the data type.\n\nI'm not sure what a \"default operator class\" is, exactly..\n\n-Dan\n",
"msg_date": "Fri, 17 Mar 2006 08:53:44 -0700",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "On 3/16/06, Dan Harris <[email protected]> wrote:\n> explain analyze\n> select distinct eventmain.incidentid, eventmain.entrydate,\n> eventgeo.long, eventgeo.lat, eventgeo.geox, eventgeo.geoy\n> from eventmain, eventgeo\n> where\n> eventmain.incidentid = eventgeo.incidentid and\n> ( long > -104.998027962962 and long < -104.985957781349 ) and\n> ( lat > 39.7075542720006 and lat < 39.7186195832938 ) and\n> eventmain.entrydate > '2006-1-1 00:00' and\n> eventmain.entrydate <= '2006-3-17 00:00'\n> order by\n> eventmain.entrydate;\n\nAs others will probably mention, effective queries on lot/long which\nis a spatial problem will require r-tree or gist. I don't have a lot\nof experience with exotic indexes but this may be the way to go.\n\nOne easy optimization to consider making is to make an index on either\n(incidentid, entrydate) or (incident_id,long) which ever is more\nselective.\n\nThis is 'yet another query' that would be fun to try out and tweak\nusing the 8.2 upcoming row-wise comparison.\n\nmerlin\n",
"msg_date": "Fri, 17 Mar 2006 11:56:03 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "On Fri, Mar 17, 2006 at 08:34:26 -0700,\n Dan Harris <[email protected]> wrote:\n> Markus Bertheau wrote:\n> >Have you tried using a GIST index on lat & long? These things are\n> >meant for two-dimensional data, whereas btree doesn't handle\n> >two-dimensional data that well. How many rows satisfy either of the\n> >long / lat condition?\n> >\n> > \n> >> \n> According to the analyze, less than 500 rows matched. I'll look into \n> GIST indexes, thanks for the feedback.\n\nHave you looked at using the Earth Distance contrib module? If a spherical\nmodel of the earth is suitable for your application, then it may work for you\nand might be easier than trying to create Gist indexes yourself.\n",
"msg_date": "Fri, 17 Mar 2006 11:38:27 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan"
},
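To make the Earth Distance suggestion concrete, here is a minimal sketch (not from the original thread) of the contrib/cube + contrib/earthdistance approach applied to the eventgeo table discussed above. It assumes both contrib modules are installed (via their SQL scripts on the 8.x releases in this thread, CREATE EXTENSION on current ones), that a "points within N metres of a centre" search is an acceptable stand-in for the original lat/long box, and that the 1000 metre radius and index name are placeholders:

    -- functional GiST index over the spherical position
    CREATE INDEX eventgeo_earth_idx ON eventgeo USING gist (ll_to_earth(lat, long));

    -- events within roughly 1 km of a centre point; the earth_box test is
    -- indexable, earth_distance then trims the bounding cube down to a circle
    SELECT incidentid, lat, long
      FROM eventgeo
     WHERE earth_box(ll_to_earth(39.713, -104.992), 1000) @> ll_to_earth(lat, long)
       AND earth_distance(ll_to_earth(39.713, -104.992),
                          ll_to_earth(lat, long)) < 1000;

On releases of that era the cube containment operator is spelled @ rather than @>, so the WHERE clause may need adjusting to match the installed version.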
{
"msg_contents": "On 3/17/06, Bruno Wolff III <[email protected]> wrote:\n> Have you looked at using the Earth Distance contrib module? If a spherical\n> model of the earth is suitable for your application, then it may work for you\n> and might be easier than trying to create Gist indexes yourself.\n\nearth distance = great stuff. If the maximum error is known then you\ncan just pad the distance and filter the result on the client if exact\nprecision is needed.\n\nMerlin\n",
"msg_date": "Fri, 17 Mar 2006 13:07:34 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "Try contrib/btree_gist.\nI've tried that one, but for my case it didn't help much.\nThe performance was almost equal or even slower than built-in btree.\n\nOn Fri, 17 Mar 2006 08:53:44 -0700\nDan Harris <[email protected]> wrote:\n\n> Dan Harris wrote:\n> > Markus Bertheau wrote:\n> >> Have you tried using a GIST index on lat & long? These things are\n> >> meant for two-dimensional data, whereas btree doesn't handle\n> >> two-dimensional data that well. How many rows satisfy either of the\n> >> long / lat condition?\n> >>\n> >> \n> >>> \n> > According to the analyze, less than 500 rows matched. I'll look into \n> > GIST indexes, thanks for the feedback.\n> >\n> > -Dan\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: Don't 'kill -9' the postmaster\n> \n> When I try to create a GIST index, I get the following error:\n> \n> create index eventgeo_lat_idx on eventgeo using GIST (lat);\n> \n> ERROR: data type double precision has no default operator class for \n> access method \"gist\"\n> HINT: You must specify an operator class for the index or define a \n> default operator class for the data type.\n> \n> I'm not sure what a \"default operator class\" is, exactly..\n> \n> -Dan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n",
"msg_date": "Fri, 17 Mar 2006 22:32:53 +0300",
"msg_from": "Evgeny Gridasov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "Merlin Moncure wrote:\n\n> As others will probably mention, effective queries on lot/long which\n> is a spatial problem will require r-tree or gist. I don't have a lot\n> of experience with exotic indexes but this may be the way to go.\n>\n> One easy optimization to consider making is to make an index on either\n> (incidentid, entrydate) or (incident_id,long) which ever is more\n> selective.\n>\n> This is 'yet another query' that would be fun to try out and tweak\n> using the 8.2 upcoming row-wise comparison.\n>\n> merlin\n> \nThanks to everyone for your suggestions. One problem I ran into is that \napparently my version doesn't support the GIST index that was \nmentioned. \"function 'box' doesn't exist\" ).. So I'm guessing that both \nthis as well as the Earth Distance contrib require me to add on some \nmore pieces that aren't there.\n\nFurthermore, by doing so, I am tying my queries directly to \n\"postgres-isms\". One of the long term goals of this project is to be \nable to fairly transparently support any ANSI SQL-compliant back end \nwith the same code base. If I had full control over the query designs, \nI could make stored procedures to abstract this. However, I have to \ndeal with a \"gray box\" third-party reporting library that isn't so \nflexible. I'll certainly consider going with something \npostgre-specific, but only as a last resort.\n\nI tried the multi-column index as mentioned above but didn't see any \nnoticeable improvement in elapsed time, although the planner did use the \nnew index.\n\nWhat is the real reason for the index not being very effective on these \ncolumns? Although the numbers are in a very limited range, it seems \nthat the records would be very selective as it's not terribly common for \nmultiple rows to share the same coords.\n\nIs the \"8.2. upcoming row-wise comparison\" something that would be \nlikely to help me?\n\nThanks again for your input\n",
"msg_date": "Fri, 17 Mar 2006 13:44:54 -0700",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "On 3/17/06, Dan Harris <[email protected]> wrote:\n> Merlin Moncure wrote:\n> Thanks to everyone for your suggestions. One problem I ran into is that\n> apparently my version doesn't support the GIST index that was\n> mentioned. \"function 'box' doesn't exist\" ).. So I'm guessing that both\n> this as well as the Earth Distance contrib require me to add on some\n> more pieces that aren't there.\n\nearth distance is a contrib module that has to be built and installed.\nit does use some pg-isms so I guess that can be ruled out. GIST is a\nbit more complex and I would consider reading the documentation very\ncarefully regarding them and make your own determination.\n\n> Furthermore, by doing so, I am tying my queries directly to\n> \"postgres-isms\". [snip]\n\n> I tried the multi-column index as mentioned above but didn't see any\n> noticeable improvement in elapsed time, although the planner did use the\n> new index.\n\ndid you try both flavors of the multiple key index I suggested? (there\nwere other possiblities, please experiment)\n\n> Is the \"8.2. upcoming row-wise comparison\" something that would be\n> likely to help me?\n\npossibly. good news is that rwc is ansi sql. you can see my blog\nabout it here: http://people.planetpostgresql.org/merlin/\n\nSpecifically, if you can order your table with an order by statement\nsuch that the records you want are contingous, then yes. However,\neven though it's ansi sql, various commercial databases implement rwc\nimproperly or not at all (mysql, to their credit, gets it right) and I\nstill feel like an exotic index or some other nifty pg trick might be\nthe best performance approach here).\n\nMerlin\n",
"msg_date": "Fri, 17 Mar 2006 16:41:38 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "Dan Harris <[email protected]> writes:\n> Furthermore, by doing so, I am tying my queries directly to \n> \"postgres-isms\". One of the long term goals of this project is to be \n> able to fairly transparently support any ANSI SQL-compliant back end \n> with the same code base.\n\nUnfortunately, there isn't any portable or standard (not exactly the\nsame thing ;-)) SQL functionality for dealing gracefully with\ntwo-dimensional searches, which is what your lat/long queries are.\nYou should accept right now that you can have portability or you can\nhave good performance, not both.\n\nMerlin's enthusiasm for row-comparison queries is understandable because\nthat fix definitely helped a common problem. But row comparison has\nnothing to do with searches in two independent dimensions. Row\ncomparison basically makes it easier to exploit the natural behavior of\nmulticolumn btree indexes ... but a multicolumn btree index does not\nefficiently support queries that involve separate range limitations on\neach index column. (If you think about the index storage order you'll\nsee why: the answer entries are not contiguous in the index.)\n\nTo support two-dimensional searches you really need a non-btree index\nstructure, such as GIST. Since this isn't standard, demanding a\nportable answer won't get you anywhere. (I don't mean to suggest that\nPostgres is the only database that has such functionality, just that\nthe DBs that do have it don't agree on any common API.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Mar 2006 23:41:11 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan "
},
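A sketch of what Tom describes, using only the built-in geometric types rather than an add-on; the index name is invented, the column names are the ones from the original query, and it assumes a server with a GiST operator class for box (current releases ship one; the 8.0.2 server in this thread would need contrib/rtree_gist or its old rtree access method instead). Each row's position is packed into a zero-area box so the && overlap operator can drive the index:

    CREATE INDEX eventgeo_pos_idx
        ON eventgeo USING gist (box(point(long, lat), point(long, lat)));

    SELECT incidentid, lat, long
      FROM eventgeo
     WHERE box(point(long, lat), point(long, lat))
           && box(point(-104.998027962962, 39.7075542720006),
                  point(-104.985957781349, 39.7186195832938));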
{
"msg_contents": "On Fri, Mar 17, 2006 at 11:41:11PM -0500, Tom Lane wrote:\n> Dan Harris <[email protected]> writes:\n> > Furthermore, by doing so, I am tying my queries directly to \n> > \"postgres-isms\". One of the long term goals of this project is to be \n> > able to fairly transparently support any ANSI SQL-compliant back end \n> > with the same code base.\n> \n> Unfortunately, there isn't any portable or standard (not exactly the\n> same thing ;-)) SQL functionality for dealing gracefully with\n> two-dimensional searches, which is what your lat/long queries are.\n\nThe OpenGIS Simple Features Specification[1] is a step in that\ndirection, no? PostGIS[2], MySQL[3], and Oracle Spatial[4] implement\nto varying degrees. With PostGIS you do have to add non-standard\noperators to a query's predicate to benefit from GiST indexes on\nspatial columns, but the rest of the query can be straight out of\nthe SQL and OGC standards.\n\n[1] http://www.opengeospatial.org/docs/99-049.pdf\n[2] http://www.postgis.org/\n[3] http://dev.mysql.com/doc/refman/5.0/en/spatial-extensions.html\n[4] http://www.oracle.com/technology/products/spatial/index.html\n\n-- \nMichael Fuhr\n",
"msg_date": "Fri, 17 Mar 2006 22:29:41 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan"
},
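For the OGC/PostGIS flavour Michael mentions, a rough and version-dependent sketch (AddGeometryColumn is the classic 1.x API, while ST_MakeEnvelope needs PostGIS 1.5 or later; the column and index names are illustrative) of how the same bounding-box search could look once a geometry column is populated from the existing lat/long values:

    -- one-time setup: add, populate and index a geometry column
    SELECT AddGeometryColumn('eventgeo', 'geom', 4326, 'POINT', 2);
    UPDATE eventgeo SET geom = ST_SetSRID(ST_MakePoint(long, lat), 4326);
    CREATE INDEX eventgeo_geom_idx ON eventgeo USING gist (geom);

    -- bounding-box search; && is the index-accelerated overlap operator
    SELECT incidentid
      FROM eventgeo
     WHERE geom && ST_MakeEnvelope(-104.998027962962, 39.7075542720006,
                                   -104.985957781349, 39.7186195832938, 4326);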
{
"msg_contents": "I may be wrong but we in astronomy have several sky indexing schemes, which\nallows to effectively use classical btree index. See \nhttp://www.sai.msu.su/~megera/oddmuse/index.cgi/SkyPixelization\nfor details. Sergei Koposov has developed Q3C contrib module for \nPostgreSQL 8.1+ and we use it with billiard size astronomical catalogs.\n\n\n \tOleg\n\nOn Fri, 17 Mar 2006, Michael Fuhr wrote:\n\n> On Fri, Mar 17, 2006 at 11:41:11PM -0500, Tom Lane wrote:\n>> Dan Harris <[email protected]> writes:\n>>> Furthermore, by doing so, I am tying my queries directly to\n>>> \"postgres-isms\". One of the long term goals of this project is to be\n>>> able to fairly transparently support any ANSI SQL-compliant back end\n>>> with the same code base.\n>>\n>> Unfortunately, there isn't any portable or standard (not exactly the\n>> same thing ;-)) SQL functionality for dealing gracefully with\n>> two-dimensional searches, which is what your lat/long queries are.\n>\n> The OpenGIS Simple Features Specification[1] is a step in that\n> direction, no? PostGIS[2], MySQL[3], and Oracle Spatial[4] implement\n> to varying degrees. With PostGIS you do have to add non-standard\n> operators to a query's predicate to benefit from GiST indexes on\n> spatial columns, but the rest of the query can be straight out of\n> the SQL and OGC standards.\n>\n> [1] http://www.opengeospatial.org/docs/99-049.pdf\n> [2] http://www.postgis.org/\n> [3] http://dev.mysql.com/doc/refman/5.0/en/spatial-extensions.html\n> [4] http://www.oracle.com/technology/products/spatial/index.html\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Sat, 18 Mar 2006 11:50:48 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "On Sat, Mar 18, 2006 at 11:50:48 +0300,\n Oleg Bartunov <[email protected]> wrote:\n> I may be wrong but we in astronomy have several sky indexing schemes, which\n> allows to effectively use classical btree index. See \n> http://www.sai.msu.su/~megera/oddmuse/index.cgi/SkyPixelization\n> for details. Sergei Koposov has developed Q3C contrib module for \n> PostgreSQL 8.1+ and we use it with billiard size astronomical catalogs.\n\nNote that Earth Distance can also be used for astronomy. If you use an\nappropiate radius, distances will be in degrees.\n",
"msg_date": "Sat, 18 Mar 2006 09:00:19 -0600",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan"
},
{
"msg_contents": "On Fri, 17 Mar 2006, Evgeny Gridasov wrote:\n\n> Try contrib/btree_gist.\n\ncontrib/btree_gist does nothing more than built-in btree - it's just\nan support for multicolumn GiST indices.\n\n> I've tried that one, but for my case it didn't help much.\n> The performance was almost equal or even slower than built-in btree.\n>\n> On Fri, 17 Mar 2006 08:53:44 -0700\n> Dan Harris <[email protected]> wrote:\n>\n>> Dan Harris wrote:\n>>> Markus Bertheau wrote:\n>>>> Have you tried using a GIST index on lat & long? These things are\n>>>> meant for two-dimensional data, whereas btree doesn't handle\n>>>> two-dimensional data that well. How many rows satisfy either of the\n>>>> long / lat condition?\n>>>>\n>>>>\n>>>>>\n>>> According to the analyze, less than 500 rows matched. I'll look into\n>>> GIST indexes, thanks for the feedback.\n>>>\n>>> -Dan\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 2: Don't 'kill -9' the postmaster\n>>\n>> When I try to create a GIST index, I get the following error:\n>>\n>> create index eventgeo_lat_idx on eventgeo using GIST (lat);\n>>\n>> ERROR: data type double precision has no default operator class for\n>> access method \"gist\"\n>> HINT: You must specify an operator class for the index or define a\n>> default operator class for the data type.\n>>\n>> I'm not sure what a \"default operator class\" is, exactly..\n>>\n>> -Dan\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Sun, 19 Mar 2006 09:26:24 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help optimizing a slow index scan"
}
] |
[
{
"msg_contents": "> I have a performance problem when traversing a table in index order with\n> multiple columns including a date column in date reverse order. Below\n> follows a simplified description of the table, the index and the\n> associated query\n> \n> \\d prcdedit\n> prcdedit_prcd | character(20) |\n> prcdedit_date | timestamp without time zone |\n> \n> Indexes:\n> \"prcdedit_idx\" btree (prcdedit_prcd, prcdedit_date)\n\nDepending on how you use the table, there are three possible solutions.\n\nFirst, if it makes sense in the domain, using an ORDER BY where _both_ columns are used descending will make PG search the index in reverse and will be just as fast as when both as searched by the default ascending.\n\nSecond possibility: Create a dummy column whose value depends on the negative of prcdedit_date, e.g., -extract(epoch from prcdedit_date), keep the dummy column in sync with the original column using triggers, and rewrite your queries to use ORDER BY prcdedit_prod, dummy_column.\n\nThird: Create an index on a function which sorts in the order you want, and then always sort using the function index (you could use the -extract(epoch...) gimmick for that, among other possibilities.)\n\nHTH.\n",
"msg_date": "Thu, 16 Mar 2006 22:25:19 -0800",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Indexes with descending date columns"
},
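A minimal sketch of the third suggestion above (the functional-index route), applied to the prcdedit table from the original post; the negated epoch is only one possible strictly decreasing immutable expression, and the index name is made up:

    -- expression index: the negated epoch makes the date part sort descending
    CREATE INDEX prcdedit_prcd_revdate_idx
        ON prcdedit (prcdedit_prcd, (-extract(epoch FROM prcdedit_date)));

    -- rewritten query: ascending order on the negated epoch == descending date
    SELECT oid, prcdedit_prcd, prcdedit_date
      FROM prcdedit
     WHERE prcdedit_prcd > 'somevalue'
     ORDER BY prcdedit_prcd, -extract(epoch FROM prcdedit_date)
     LIMIT 25;

The ORDER BY expression has to match the indexed expression exactly, otherwise the planner falls back to an explicit sort.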
{
"msg_contents": "On Fri, 2006-03-17 at 08:25, [email protected] wrote:\n> > I have a performance problem when traversing a table in index order with\n> > multiple columns including a date column in date reverse order. Below\n> > follows a simplified description of the table, the index and the\n> > associated query\n> > \n> > \\d prcdedit\n> > prcdedit_prcd | character(20) |\n> > prcdedit_date | timestamp without time zone |\n> > \n> > Indexes:\n> > \"prcdedit_idx\" btree (prcdedit_prcd, prcdedit_date)\n> \n> Depending on how you use the table, there are three possible solutions.\n> \n> First, if it makes sense in the domain, using an ORDER BY where _both_ columns are used descending will make PG search the index in reverse and will be just as fast as when both as searched by the default ascending.\n> \n> Second possibility: Create a dummy column whose value depends on the negative of prcdedit_date, e.g., -extract(epoch from prcdedit_date), keep the dummy column in sync with the original column using triggers, and rewrite your queries to use ORDER BY prcdedit_prod, dummy_column.\n> \n> Third: Create an index on a function which sorts in the order you want, and then always sort using the function index (you could use the -extract(epoch...) gimmick for that, among other possibilities.)\n> \n> HTH.\n\nAll good input - thanks, however, before I start messing with my stuff\nwhich I know will be complex - some questions to any of the developers\non the list.\n\ni Is it feasible to extend index creation to support descending \n columns? ... this is supported on other commercial and non\n commercial databases, but I do not know if this is a SQL standard.\n\nii If no to i, is it feasible to extend PostgreSQL to allow traversing\n an index in column descending and column ascending order - assuming\n an order by on more than one column with column order not \n in the same direction and indexes existing? ... if that makes sense.\n\n-- \nRegards\nTheo\n\n",
"msg_date": "Thu, 23 Mar 2006 13:09:49 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes with descending date columns"
},
{
"msg_contents": "Theo Kramer wrote:\n\n> All good input - thanks, however, before I start messing with my stuff\n> which I know will be complex - some questions to any of the developers\n> on the list.\n> \n> i Is it feasible to extend index creation to support descending \n> columns? ... this is supported on other commercial and non\n> commercial databases, but I do not know if this is a SQL standard.\n\nThis can be done. You need to create an operator class which specifies\nthe reverse sort order (i.e. reverse the operators), and then use it in\nthe new index.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 23 Mar 2006 09:24:49 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes with descending date columns"
},
{
"msg_contents": "On Thu, 2006-03-23 at 16:16, Alvaro Herrera wrote:\n> Theo Kramer wrote:\n> \n> > All good input - thanks, however, before I start messing with my stuff\n> > which I know will be complex - some questions to any of the developers\n> > on the list.\n> > \n> > i Is it feasible to extend index creation to support descending \n> > columns? ... this is supported on other commercial and non\n> > commercial databases, but I do not know if this is a SQL standard.\n> \n> This can be done. You need to create an operator class which specifies\n> the reverse sort order (i.e. reverse the operators), and then use it in\n> the new index.\n\nHmmm, would that then result in the following syntax being valid?\n\n create index my_idx on my_table (c1, c2 desc, c3, c4 desc) ;\n\nwhere my_table is defined as\n\n create table my_table (\n c1 text,\n c2 timestamp,\n c3 integer,\n c4 integer\n );\n\nIf so, I would appreciate any pointers on where to start on this -\nalready fumbling my way through Interfacing Extensions To Indexes in the\nmanual...\n\nRegards\nTheo\n-- \nRegards\nTheo\n\n",
"msg_date": "Fri, 24 Mar 2006 06:32:29 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes with descending date columns"
},
{
"msg_contents": "Theo Kramer <[email protected]> writes:\n> If so, I would appreciate any pointers on where to start on this -\n> already fumbling my way through Interfacing Extensions To Indexes in the\n> manual...\n\nSearch the PG list archives for discussions of reverse-sort opclasses.\nIt's really pretty trivial, once you've created a negated btree\ncomparison function for the datatype.\n\nThis is the sort of thing that we are almost but not quite ready to put\ninto the standard distribution. The issues that are bugging me have to\ndo with whether NULLs sort low or high --- right now, if you make a\nreverse-sort opclass, it will effectively sort NULLs low instead of\nhigh, and that has some unpleasant consequences because the rest of the\nsystem isn't prepared for variance on the point (in particular I'm\nafraid this could break mergejoins). I'd like to see us make \"NULLs\nlow\" vs \"NULLs high\" be a defined property of opclasses, and deal with\nthe fallout from that, and then we could put reverse-sort opclasses for\nall the standard datatypes into the regular distribution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 23 Mar 2006 23:59:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes with descending date columns "
},
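A sketch of the reverse-sort operator class Alvaro and Tom describe, for timestamp without time zone (the type used in the earlier prcdedit example); it wraps the built-in timestamp_cmp support function with its arguments swapped and maps the btree strategy numbers to the reversed operators. The names are invented, and the NULLs-sort-first caveat Tom raises applies:

    -- comparison function with the arguments swapped
    CREATE FUNCTION timestamp_cmp_desc(timestamp, timestamp) RETURNS integer
        AS 'SELECT timestamp_cmp($2, $1)' LANGUAGE sql IMMUTABLE STRICT;

    CREATE OPERATOR CLASS timestamp_desc_ops FOR TYPE timestamp USING btree AS
        OPERATOR 1 > ,
        OPERATOR 2 >= ,
        OPERATOR 3 = ,
        OPERATOR 4 <= ,
        OPERATOR 5 < ,
        FUNCTION 1 timestamp_cmp_desc(timestamp, timestamp);

    -- mixed-direction index: prcdedit_prcd ascending, prcdedit_date effectively descending
    CREATE INDEX prcdedit_mixed_idx
        ON prcdedit (prcdedit_prcd, prcdedit_date timestamp_desc_ops);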
{
"msg_contents": "On Thu, Mar 23, 2006 at 01:09:49PM +0200, Theo Kramer wrote:\n> ii If no to i, is it feasible to extend PostgreSQL to allow traversing\n> an index in column descending and column ascending order - assuming\n> an order by on more than one column with column order not \n> in the same direction and indexes existing? ... if that makes sense.\n\nYes.\n\nstats=# explain select * from email_contrib order by project_id desc, id desc, date desc limit 10;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..31.76 rows=10 width=24)\n -> Index Scan Backward using email_contrib_pkey on email_contrib (cost=0.00..427716532.18 rows=134656656 width=24)\n(2 rows)\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 04:21:38 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes with descending date columns"
},
{
"msg_contents": "Hi,\n\nI have a select like\n\nSELECT (array[20]+array[21]+ ... +array[50]+array[51]) as total\nFROM table\nWHERE\n(array[20]+array[21]+ ... +array[50]+array[51])<5000\nAND array[20]<>0\nAND array[21]<>0\n ...\nAND array[50]<>0\nAND array[51])<>0\n\nAny ideas to make this query faster?\n",
"msg_date": "Fri, 24 Mar 2006 13:41:50 +0100",
"msg_from": "Ruben Rubio Rey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Array performance"
},
{
"msg_contents": "On Fri, Mar 24, 2006 at 01:41:50PM +0100, Ruben Rubio Rey wrote:\n> Hi,\n> \n> I have a select like\n> \n> SELECT (array[20]+array[21]+ ... +array[50]+array[51]) as total\n> FROM table\n> WHERE\n> (array[20]+array[21]+ ... +array[50]+array[51])<5000\n\nhttp://www.varlena.com/GeneralBits/109.php might provide some useful\ninsights. I also recall seeing something about sum operators for arrays,\nbut I can't recall where.\n\n> AND array[20]<>0\n> AND array[21]<>0\n> ...\n> AND array[50]<>0\n> AND array[51])<>0\n\nUhm... please don't tell me that you're using 0 in place of NULL...\n\nYou might be able to greatly simplify that by use of ANY; you'd need to\nditch elements 1-19 though:\n\n... WHERE NOT ANY(array) = 0\n\nSee http://www.postgresql.org/docs/8.1/interactive/arrays.html\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 06:52:45 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array performance"
},
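One wrinkle with the ANY suggestion above: as the later messages in this thread show, ANY/ALL are only accepted on the right-hand side of the operator, so "WHERE NOT ANY(array) = 0" will not parse. A form that does parse, using an array slice to keep only elements 20..51 (array_col stands in for the real column name), is:

    ... WHERE 0 <> ALL (array_col[20:51])

which is true only when no element of the slice equals zero.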
{
"msg_contents": "Jim C. Nasby wrote:\n\n>On Fri, Mar 24, 2006 at 01:41:50PM +0100, Ruben Rubio Rey wrote:\n> \n>\n>>Hi,\n>>\n>>I have a select like\n>>\n>>SELECT (array[20]+array[21]+ ... +array[50]+array[51]) as total\n>>FROM table\n>>WHERE\n>>(array[20]+array[21]+ ... +array[50]+array[51])<5000\n>> \n>>\n>\n>http://www.varlena.com/GeneralBits/109.php might provide some useful\n>insights. I also recall seeing something about sum operators for arrays,\n>but I can't recall where.\n> \n>\nI ll check it out, seems to be very useful\nIs faster create a function to sum the array?\n\n> \n>\n>>AND array[20]<>0\n>>AND array[21]<>0\n>>...\n>>AND array[50]<>0\n>>AND array[51])<>0\n>> \n>>\n>\n>Uhm... please don't tell me that you're using 0 in place of NULL...\n> \n>\nmmm ... i have read in postgres documentation that null values on arrays \nare not supported ...\n\n>You might be able to greatly simplify that by use of ANY; you'd need to\n>ditch elements 1-19 though:\n>\n>... WHERE NOT ANY(array) = 0\n> \n>\nYep this is much better.\n\n>See http://www.postgresql.org/docs/8.1/interactive/arrays.html\n> \n>\n\n\n",
"msg_date": "Fri, 24 Mar 2006 14:01:29 +0100",
"msg_from": "Ruben Rubio Rey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array performance"
},
{
"msg_contents": "On Fri, Mar 24, 2006 at 02:01:29PM +0100, Ruben Rubio Rey wrote:\n> >http://www.varlena.com/GeneralBits/109.php might provide some useful\n> >insights. I also recall seeing something about sum operators for arrays,\n> >but I can't recall where.\n> > \n> >\n> I ll check it out, seems to be very useful\n> Is faster create a function to sum the array?\n\nThere's been talk of having one, but I don't think any such thing\ncurrently exists.\n\n> >>AND array[20]<>0\n> >>AND array[21]<>0\n> >>...\n> >>AND array[50]<>0\n> >>AND array[51])<>0\n> >> \n> >>\n> >\n> >Uhm... please don't tell me that you're using 0 in place of NULL...\n> > \n> >\n> mmm ... i have read in postgres documentation that null values on arrays \n> are not supported ...\n\nDamn, you're right. Another reason I tend to stay away from them...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 07:06:19 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array performance"
},
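Since no built-in array-sum function exists in the releases discussed here, a minimal plpgsql sketch of a slice-summing helper; the function name and the lo/hi parameters are invented for illustration, missing elements count as zero, and the dollar quoting as written needs 8.0 or later:

    CREATE OR REPLACE FUNCTION int_array_slice_sum(a integer[], lo integer, hi integer)
    RETURNS integer AS $$
    DECLARE
        total integer := 0;
        i     integer;
    BEGIN
        FOR i IN lo .. hi LOOP
            total := total + COALESCE(a[i], 0);  -- out-of-range subscripts yield NULL
        END LOOP;
        RETURN total;
    END;
    $$ LANGUAGE plpgsql IMMUTABLE;

    -- usage, mirroring the original query:
    --   SELECT int_array_slice_sum(arr, 20, 51) AS total
    --     FROM some_table
    --    WHERE int_array_slice_sum(arr, 20, 51) < 5000;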
{
"msg_contents": "On Fri, Mar 24, 2006 at 07:06:19AM -0600, Jim C. Nasby wrote:\n> On Fri, Mar 24, 2006 at 02:01:29PM +0100, Ruben Rubio Rey wrote:\n> > mmm ... i have read in postgres documentation that null values on arrays \n> > are not supported ...\n> \n> Damn, you're right. Another reason I tend to stay away from them...\n\n8.2 will support NULL array elements.\n\nhttp://archives.postgresql.org/pgsql-committers/2005-11/msg00385.php\nhttp://developer.postgresql.org/docs/postgres/arrays.html\n\ntest=> SELECT '{1,2,NULL,3,4}'::integer[];\n int4 \n----------------\n {1,2,NULL,3,4}\n(1 row)\n\n-- \nMichael Fuhr\n",
"msg_date": "Fri, 24 Mar 2006 06:32:53 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array performance"
},
{
"msg_contents": "With 8.1.3, I get an error when trying to do this on a Text[] column :\n.. WHERE ANY(array) LIKE 'xx%'\n\nIndeed, I get rejected even with:\n.. WHERE ANY(array) = 'xx'\n\nIn both cases, the error is: ERROR: syntax error at or near \"any\" ...\n\nIt would only work as documented in the manual (8.10.5):\nSELECT * FROM sal_emp WHERE 10000 = ANY (pay_by_quarter);\n\nIt appears that this restriction is still in place in 8.2:\n>http://developer.postgresql.org/docs/postgres/arrays.html\n\nIs that the case?\n\nThanks in advance,\nKC. \n\n\n\nWith 8.1.3, I get an error when trying to do this on a Text[] column\n:\n.. WHERE ANY(array) LIKE 'xx%'\n\nIndeed, I get rejected even with:\n.. WHERE ANY(array) = 'xx'\n\nIn both cases, the error is: ERROR: syntax error at or near\n\"any\" ... \n\nIt would only work as documented in the manual (8.10.5):\nSELECT * FROM sal_emp WHERE 10000 = ANY (pay_by_quarter);\n\nIt appears that this restriction is still in place in\n8.2:\n\nhttp://developer.postgresql.org/docs/postgres/arrays.html\n\nIs that the case?\nThanks in advance,\nKC.",
"msg_date": "Fri, 24 Mar 2006 23:25:00 +0800",
"msg_from": "K C Lau <[email protected]>",
"msg_from_op": false,
"msg_subject": "limitation using LIKE on ANY(array)"
},
{
"msg_contents": "Ruben Rubio Rey <[email protected]> writes:\n> SELECT (array[20]+array[21]+ ... +array[50]+array[51]) as total\n> FROM table\n> WHERE\n> (array[20]+array[21]+ ... +array[50]+array[51])<5000\n> AND array[20]<>0\n> AND array[21]<>0\n> ...\n> AND array[50]<>0\n> AND array[51])<>0\n\n> Any ideas to make this query faster?\n\nWhat's the array datatype? Integer or float would probably go a lot\nfaster than NUMERIC, if that's what you're using now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Mar 2006 10:51:44 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array performance "
},
{
"msg_contents": "K C Lau <[email protected]> writes:\n> Indeed, I get rejected even with:\n> .. WHERE ANY(array) = 'xx'\n\n> It would only work as documented in the manual (8.10.5):\n> SELECT * FROM sal_emp WHERE 10000 = ANY (pay_by_quarter);\n\nThat's not changing any time soon; the SQL spec defines only the second\nsyntax for ANY, and I believe there would be syntactic ambiguity if we\ntried to allow the other.\n\n> With 8.1.3, I get an error when trying to do this on a Text[] column :\n> .. WHERE ANY(array) LIKE 'xx%'\n\nIf you're really intent on doing that, make an operator for \"reverse\nLIKE\" and use it with the ANY on the right-hand side.\n\nregression=# create function rlike(text,text) returns bool as\nregression-# 'select $2 like $1' language sql strict immutable;\nCREATE FUNCTION\nregression=# create operator ~~~ (procedure = rlike, leftarg = text,\nregression(# rightarg = text, commutator = ~~);\nCREATE OPERATOR\nregression=# select 'xx%' ~~~ any(array['aaa','bbb']);\n ?column?\n----------\n f\n(1 row)\n\nregression=# select 'xx%' ~~~ any(array['aaa','xxb']);\n ?column?\n----------\n t\n(1 row)\n\nregression=#\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Mar 2006 11:25:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limitation using LIKE on ANY(array) "
},
{
"msg_contents": "Thank you very much, Tom. We'll try it and report if there is any \nsignificant impact performance-wise.\n\nBest regards,\nKC.\n\nAt 00:25 06/03/25, Tom Lane wrote:\n>K C Lau <[email protected]> writes:\n> > Indeed, I get rejected even with:\n> > .. WHERE ANY(array) = 'xx'\n>\n> > It would only work as documented in the manual (8.10.5):\n> > SELECT * FROM sal_emp WHERE 10000 = ANY (pay_by_quarter);\n>\n>That's not changing any time soon; the SQL spec defines only the second\n>syntax for ANY, and I believe there would be syntactic ambiguity if we\n>tried to allow the other.\n>\n> > With 8.1.3, I get an error when trying to do this on a Text[] column :\n> > .. WHERE ANY(array) LIKE 'xx%'\n>\n>If you're really intent on doing that, make an operator for \"reverse\n>LIKE\" and use it with the ANY on the right-hand side.\n>\n>regression=# create function rlike(text,text) returns bool as\n>regression-# 'select $2 like $1' language sql strict immutable;\n>CREATE FUNCTION\n>regression=# create operator ~~~ (procedure = rlike, leftarg = text,\n>regression(# rightarg = text, commutator = ~~);\n>CREATE OPERATOR\n>regression=# select 'xx%' ~~~ any(array['aaa','bbb']);\n> ?column?\n>----------\n> f\n>(1 row)\n>\n>regression=# select 'xx%' ~~~ any(array['aaa','xxb']);\n> ?column?\n>----------\n> t\n>(1 row)\n>\n>regression=#\n>\n> regards, tom lane\n\n",
"msg_date": "Sat, 25 Mar 2006 09:48:53 +0800",
"msg_from": "K C Lau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: limitation using LIKE on ANY(array) "
},
{
"msg_contents": "Tom Lane wrote:\n\n>Ruben Rubio Rey <[email protected]> writes:\n> \n>\n>>SELECT (array[20]+array[21]+ ... +array[50]+array[51]) as total\n>>FROM table\n>>WHERE\n>>(array[20]+array[21]+ ... +array[50]+array[51])<5000\n>>AND array[20]<>0\n>>AND array[21]<>0\n>> ...\n>>AND array[50]<>0\n>>AND array[51])<>0\n>>\n>Any ideas to make this query faster?\n> \n>\n>\n>What's the array datatype? Integer or float would probably go a lot\n>faster than NUMERIC, if that's what you're using now.\n> \n>\nAlready its integer[]\n",
"msg_date": "Mon, 27 Mar 2006 09:03:03 +0200",
"msg_from": "Ruben Rubio Rey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array performance"
},
{
"msg_contents": "On Fri, 2006-03-24 at 12:21, Jim C. Nasby wrote:\n> On Thu, Mar 23, 2006 at 01:09:49PM +0200, Theo Kramer wrote:\n> > ii If no to i, is it feasible to extend PostgreSQL to allow traversing\n> > an index in column descending and column ascending order - assuming\n> > an order by on more than one column with column order not \n> > in the same direction and indexes existing? ... if that makes sense.\n> \n> Yes.\n> \n> stats=# explain select * from email_contrib order by project_id desc, id desc, date desc limit 10;\n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..31.76 rows=10 width=24)\n> -> Index Scan Backward using email_contrib_pkey on email_contrib (cost=0.00..427716532.18 rows=134656656 width=24)\n> (2 rows)\n\nNot quite what I mean - redo the above as follows and then see what\nexplain returns\n\nexplain select * from email_contrib order by project_id, id, date desc\nlimit 10;\n\n-- \nRegards\nTheo\n\n",
"msg_date": "Wed, 29 Mar 2006 12:52:31 +0200",
"msg_from": "Theo Kramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes with descending date columns"
},
{
"msg_contents": "On Wed, Mar 29, 2006 at 12:52:31PM +0200, Theo Kramer wrote:\n> On Fri, 2006-03-24 at 12:21, Jim C. Nasby wrote:\n> > On Thu, Mar 23, 2006 at 01:09:49PM +0200, Theo Kramer wrote:\n> > > ii If no to i, is it feasible to extend PostgreSQL to allow traversing\n> > > an index in column descending and column ascending order - assuming\n> > > an order by on more than one column with column order not \n> > > in the same direction and indexes existing? ... if that makes sense.\n> > \n> > Yes.\n> > \n> > stats=# explain select * from email_contrib order by project_id desc, id desc, date desc limit 10;\n> > QUERY PLAN \n> > ------------------------------------------------------------------------------------------------------------------------\n> > Limit (cost=0.00..31.76 rows=10 width=24)\n> > -> Index Scan Backward using email_contrib_pkey on email_contrib (cost=0.00..427716532.18 rows=134656656 width=24)\n> > (2 rows)\n> \n> Not quite what I mean - redo the above as follows and then see what\n> explain returns\n> \n> explain select * from email_contrib order by project_id, id, date desc\n> limit 10;\n\nAhh. There's a hack to do that by defining a new opclass that reverses <\nand >, and then doing ORDER BY project_id, id, date USING new_opclass.\n\nI think there's a TODO about this, but I'm not sure...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 31 Mar 2006 09:55:54 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes with descending date columns"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Wed, Mar 29, 2006 at 12:52:31PM +0200, Theo Kramer wrote:\n> > On Fri, 2006-03-24 at 12:21, Jim C. Nasby wrote:\n> > > On Thu, Mar 23, 2006 at 01:09:49PM +0200, Theo Kramer wrote:\n> > > > ii If no to i, is it feasible to extend PostgreSQL to allow traversing\n> > > > an index in column descending and column ascending order - assuming\n> > > > an order by on more than one column with column order not \n> > > > in the same direction and indexes existing? ... if that makes sense.\n> > > \n> > > Yes.\n> > > \n> > > stats=# explain select * from email_contrib order by project_id desc, id desc, date desc limit 10;\n> > > QUERY PLAN \n> > > ------------------------------------------------------------------------------------------------------------------------\n> > > Limit (cost=0.00..31.76 rows=10 width=24)\n> > > -> Index Scan Backward using email_contrib_pkey on email_contrib (cost=0.00..427716532.18 rows=134656656 width=24)\n> > > (2 rows)\n> > \n> > Not quite what I mean - redo the above as follows and then see what\n> > explain returns\n> > \n> > explain select * from email_contrib order by project_id, id, date desc\n> > limit 10;\n> \n> Ahh. There's a hack to do that by defining a new opclass that reverses <\n> and >, and then doing ORDER BY project_id, id, date USING new_opclass.\n> \n> I think there's a TODO about this, but I'm not sure...\n\nYes, and updated:\n\n\t* Allow the creation of indexes with mixed ascending/descending\n\t specifiers\n\t\n\t This is possible now by creating an operator class with reversed sort\n\t operators. One complexity is that NULLs would then appear at the start\n\t of the result set, and this might affect certain sort types, like\n\t merge join.\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Sat, 8 Apr 2006 23:26:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes with descending date columns"
},
{
"msg_contents": "Hi, Bruce,\n\nBruce Momjian wrote:\n\n>>Ahh. There's a hack to do that by defining a new opclass that reverses <\n>>and >, and then doing ORDER BY project_id, id, date USING new_opclass.\n>>\n>>I think there's a TODO about this, but I'm not sure...\n> \n> Yes, and updated:\n> \n> \t* Allow the creation of indexes with mixed ascending/descending\n> \t specifiers\n> \t\n> \t This is possible now by creating an operator class with reversed sort\n> \t operators. One complexity is that NULLs would then appear at the start\n> \t of the result set, and this might affect certain sort types, like\n> \t merge join.\n\nI think it would be better to allow \"index zig-zag scans\" for\nmulti-column index.[1]\n\nSo it traverses in a given order on the higher order column, and the sub\ntrees for each specific high order value is traversed in reversed order.\n>From my knowledge at least of BTrees, and given correct commutator\ndefinitions, this should be not so complicated to implement.[2]\n\nThis would allow the query planner to use the same index for arbitrary\nASC/DESC combinations of the given columns.\n\n\nJust a thought,\nMarkus\n\n\n[1] It may make sense to implement the mixed specifiers on indices as\nwell, to allow CLUSTERing on mixed search order.\n\n[2] But I admit that I currently don't have enough knowledge in\nPostgreSQL index scan internals to know whether it really is easy to\nimplement.\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Tue, 11 Apr 2006 19:59:39 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexes with descending date columns"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a problem with the postgres planner, which gives a cost to\nindex scan which is much higher than actual cost (worst case\nconsidered, e.g. without any previous disk cache), and am posting\nhere for advices for tweaking cost constants. Because of this\nproblem, the planner typically chooses a seq scan when an index\nscan would be more efficient, and I would like to correct this if\npossible.\n\nReading the documentation and postgresql list archives, I have\nrun ANALYZE right before my tests, I have increased the\nstatistics target to 50 for the considered table; my problem is\nthat the index scan cost reported by EXPLAIN seems to be around\n12.7 times higher that it should, a figure I suppose incompatible\n(too large) for just random_page_cost and effective_cache_size\ntweaks.\n\n\nStructure of the table:\n\n\\d sent_messages\n Table \"public.sent_messages\"\n Column | Type | Modifiers \n----------+--------------------------+----------------------------------------------------------------\n uid | integer | not null default nextval('public.sent_messages_uid_seq'::text)\n sender | character varying(25) | \n receiver | character varying(25) | \n action | character varying(25) | \n cost | integer | \n date | timestamp with time zone | not null default ('now'::text)::timestamp(6) with time zone\n status | character varying(128) | \n theme | character varying(25) | \n operator | character varying(15) | \nIndexes:\n \"sent_messages_pkey\" primary key, btree (uid)\n \"idx_sent_msgs_date_theme_status\" btree (date, theme, status)\n\n\nWhat I did:\n\n- SET default_statistics_target = 50\n\n- VACUUM FULL ANALYZE VERBOSE sent_messages - copied so that you\n can have a look at rows and pages taken up by relations\n\nINFO: vacuuming \"public.sent_messages\"\nINFO: \"sent_messages\": found 0 removable, 3692284 nonremovable row versions in 55207 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 103 to 177 bytes long.\nThere were 150468 unused item pointers.\nTotal free space (including removable row versions) is 2507320 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n2469 pages containing 262732 free bytes are potential move destinations.\nCPU 0.57s/0.20u sec elapsed 11.27 sec.\nINFO: index \"sent_messages_pkey\" now contains 3692284 row versions in 57473 pages\nDETAIL: 0 index row versions were removed.\n318 index pages have been deleted, 318 are currently reusable.\nCPU 2.80s/1.27u sec elapsed 112.69 sec.\nINFO: index \"idx_sent_msgs_date_theme_status\" now contains 3692284 row versions in 88057 pages\nDETAIL: 0 index row versions were removed.\n979 index pages have been deleted, 979 are currently reusable.\nCPU 4.22s/1.51u sec elapsed 246.88 sec.\nINFO: \"sent_messages\": moved 0 row versions, truncated 55207 to 55207 pages\nDETAIL: CPU 1.87s/3.18u sec elapsed 42.71 sec.\nINFO: vacuuming \"pg_toast.pg_toast_77852470\"\nINFO: \"pg_toast_77852470\": found 0 removable, 0 nonremovable row versions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_77852470_index\" now contains 0 row versions in 1 pages\nDETAIL: 0 index pages have been 
deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: analyzing \"public.sent_messages\"\nINFO: \"sent_messages\": 55207 pages, 15000 rows sampled, 3666236 estimated total rows\n\n- select rows of the table with a range condition on \"date\", find\n a range for which seq scan and index scan runtimes seem to be\n very close (I use Linux, I cat a 2G file to /dev/null between\n each request to flush disk cache, on a machine of 1G real RAM\n and 1G of swap, so that this is the worst case tested for index\n scan), notice that the cost used by the planner is 12.67 times\n higher for index scan, at a position it should be around 1 so\n that planner could make sensible choices:\n\nEXPLAIN ANALYZE SELECT * FROM sent_messages WHERE date > '2005-09-01' AND date < '2005-09-19';\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on sent_messages (cost=0.00..110591.26 rows=392066 width=78) (actual time=7513.205..13095.147 rows=393074 loops=1)\n Filter: ((date > '2005-09-01 00:00:00+00'::timestamp with time zone) AND (date < '2005-09-19 00:00:00+00'::timestamp with time zone))\n Total runtime: 14272.522 ms\n\n\nSET enable_seqscan = false\n\nEXPLAIN ANALYZE SELECT * FROM sent_messages WHERE date > '2005-09-01' AND date < '2005-09-19';\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_sent_msgs_date_theme_status on sent_messages (cost=0.00..1402124.26 rows=392066 width=78) (actual time=142.638..12677.378 rows=393074 loops=1)\n Index Cond: ((date > '2005-09-01 00:00:00+00'::timestamp with time zone) AND (date < '2005-09-19 00:00:00+00'::timestamp with time zone))\n Total runtime: 13846.504 ms\n\n\n Please notice that an index on the \"date\" column only would be\n much more efficient for the considered request (and I have\n confirmed this by creating and trying it), but I don't\n necessarily would need this index if the existing index was\n used. Of course real queries use smaller date ranges.\n\n- I then tried to tweak random_page_cost and effective_cache_size\n following advices from documentation:\n\nSET random_page_cost = 2;\nSET effective_cache_size = 10000;\nEXPLAIN SELECT * FROM sent_messages WHERE date > '2005-09-01' AND date < '2005-09-19';\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_sent_msgs_date_theme_status on sent_messages (cost=0.00..595894.94 rows=392066 width=78)\n Index Cond: ((date > '2005-09-01 00:00:00+00'::timestamp with time zone) AND (date < '2005-09-19 00:00:00+00'::timestamp with time zone))\n\n\n We can see that estimated index scan cost goes down but by a\n factor of approx. 2.3 which is far from enough to \"fix\" it. I\n am reluctant in changing way more the random_page_cost and\n effective_cache_size values as I'm suspecting it might have\n other (bad) consequences if it is too far away from reality\n (even if Linux is known to aggressively cache), the application\n being multithreaded (there is a warning about concurrent\n queries using different indexes in documentation). 
But I\n certainly could benefit from others' experience on this matter.\n\n\nI apologize for this long email but I wanted to be sure I gave\nenough information on the data and things I have tried to fix the\nproblem myself. If anyone can see what I am doing wrong, I would\nbe very interested in pointers.\n\nThanks in advance!\n\nBtw, I use postgres 7.4.5 with -B 1000 -N 500 and all\npostgresql.conf default values except timezone = 'UTC', on an\next3 partition with data=ordered, and run Linux 2.6.12.\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "17 Mar 2006 11:09:50 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
{
"msg_contents": "Guillaume,\n\nOn 17 Mar 2006 11:09:50 +0100, Guillaume Cottenceau\nwrote:\n> Reading the documentation and postgresql list archives, I have\n> run ANALYZE right before my tests, I have increased the\n> statistics target to 50 for the considered table; my problem is\n> that the index scan cost reported by EXPLAIN seems to be around\n> 12.7 times higher that it should, a figure I suppose incompatible\n> (too large) for just random_page_cost and effective_cache_size\n> tweaks.\n\nIt's not surprising you have a high cost for an index scan which is\nplanned to return and returns so much rows. I really don't think the\nplanner does something wrong on this one.\nAFAIK, increasing the statistics target won't do anything to reduce\nthe cost as the planner estimation for the number of returned rows is\nalready really accurate and probably can't be better.\n\n> Of course real queries use smaller date ranges.\n\nWhat about providing us the respective plans for your real queries?\nAnd in a real case. It's a bad idea to compare index scan and seqscan\nwhen your data have to be loaded in RAM.\nBefore doing so create an index on the date column to have the most\neffective index possible.\n\n> - I then tried to tweak random_page_cost and effective_cache_size\n> following advices from documentation:\n>\n> SET random_page_cost = 2;\n\nrandom_page_cost is the way to go for this sort of thing but I don't\nthink it's a good idea to have it too low globally and I'm still\nthinking the problem is that your test case is not accurate.\n\n--\nGuillaume\n",
"msg_date": "Sat, 18 Mar 2006 11:20:45 +0100",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
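If the single-column index suggested above is worth trying, a minimal sketch is shown below; the index name is invented here, the table and column come from the original post, and later in the thread the extra index is said to cost roughly 100-150MB of disk on these ~3.7M rows:

CREATE INDEX idx_sent_msgs_date ON sent_messages (date);
ANALYZE sent_messages;

-- then re-run the comparison without having to disable seqscans globally
EXPLAIN ANALYZE SELECT * FROM sent_messages
WHERE date > '2005-09-01' AND date < '2005-09-19';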
{
"msg_contents": "Guillaume Cottenceau wrote:\n\n> \n> SET random_page_cost = 2;\n> SET effective_cache_size = 10000;\n> EXPLAIN SELECT * FROM sent_messages WHERE date > '2005-09-01' AND date < '2005-09-19';\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using idx_sent_msgs_date_theme_status on sent_messages (cost=0.00..595894.94 rows=392066 width=78)\n> Index Cond: ((date > '2005-09-01 00:00:00+00'::timestamp with time zone) AND (date < '2005-09-19 00:00:00+00'::timestamp with time zone))\n> \n> \n> We can see that estimated index scan cost goes down but by a\n> factor of approx. 2.3 which is far from enough to \"fix\" it. I\n> am reluctant in changing way more the random_page_cost and\n> effective_cache_size values as I'm suspecting it might have\n> other (bad) consequences if it is too far away from reality\n> (even if Linux is known to aggressively cache), the application\n> being multithreaded (there is a warning about concurrent\n> queries using different indexes in documentation). But I\n> certainly could benefit from others' experience on this matter.\n> \n> \n> I apologize for this long email but I wanted to be sure I gave\n> enough information on the data and things I have tried to fix the\n> problem myself. If anyone can see what I am doing wrong, I would\n> be very interested in pointers.\n> \n> Thanks in advance!\n> \n\n\n> Btw, I use postgres 7.4.5 with -B 1000 -N 500 and all\n> postgresql.conf default values except timezone = 'UTC', on an\n> ext3 partition with data=ordered, and run Linux 2.6.12.\n> \n\nI didn't see any mention of how much memory is on your server, but \nprovided you have say 1G, and are using the box solely for a database \nserver, I would increase both shared_buffers and effective_cache size.\n\nshared_buffer = 12000\neffective_cache_size = 25000\n\nThis would mean you are reserving 100M for Postgres to cache relation \npages, and informing the planner that it can expect ~200M available from \nthe disk buffer cache. To give a better recommendation, we need to know \nmore about your server and workload (e.g server memory configuration and \nusage plus how close you get to 500 connections).\n\nCheers\n\nMark\n\n\n\n",
"msg_date": "Sun, 19 Mar 2006 16:14:34 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,"
},
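Both of the settings suggested above are expressed in 8 kB buffer pages on a default build, so the figures translate roughly as follows (plain arithmetic, assuming the standard block size):

SELECT 12000 * 8 / 1024.0 AS shared_buffers_mb,   -- about 94 MB reserved for PostgreSQL
       25000 * 8 / 1024.0 AS effective_cache_mb;  -- about 195 MB assumed from the OS cache

which is where the "100M for Postgres" and "~200M from the disk buffer cache" figures come from.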
{
"msg_contents": "Guillaume,\n\nThanks for your answer.\n\n> On 17 Mar 2006 11:09:50 +0100, Guillaume Cottenceau\n> wrote:\n> > Reading the documentation and postgresql list archives, I have\n> > run ANALYZE right before my tests, I have increased the\n> > statistics target to 50 for the considered table; my problem is\n> > that the index scan cost reported by EXPLAIN seems to be around\n> > 12.7 times higher that it should, a figure I suppose incompatible\n> > (too large) for just random_page_cost and effective_cache_size\n> > tweaks.\n> \n> It's not surprising you have a high cost for an index scan which is\n> planned to return and returns so much rows. I really don't think the\n> planner does something wrong on this one.\n\nMy point is that the planner's cost estimate is way above the\nactual cost of the query, so the planner doesn't use the best\nplan. Even if the index returns so much rows, actual cost of the\nquery is so that index scan (worst case, all disk cache flushed)\nis still better than seq scan but the planner uses seq scan.\n\n> AFAIK, increasing the statistics target won't do anything to reduce\n> the cost as the planner estimation for the number of returned rows is\n> already really accurate and probably can't be better.\n\nOk, thanks.\n \n> > Of course real queries use smaller date ranges.\n> \n> What about providing us the respective plans for your real queries?\n> And in a real case. It's a bad idea to compare index scan and seqscan\n\nThe original query is more complicated and sometimes involves\nrestricting the resultset with another constraint. I am not sure\nit is very interesting to show it; I know that best performance\nwould be achieved with an index on the date column for the shown\nquery, and an index on the date column and the other column when\ndoing a query on these..\n\n> when your data have to be loaded in RAM.\n\nWhat do you mean? That I should not flush disk cache before\ntiming? I did so to find the worst case.. I am not sure it is the\nbest solution.. maybe half worst case would be? but this depends\na lot on whether the index pages would stay in disk cache or not\nbefore next query.. which cannot be told for sure unless a full\nserious timing of the real application is done (and my\napplication can be used in quite different scenarios, which means\nsuch a test is not entirely possible/meaningful).\n\n> Before doing so create an index on the date column to have the most\n> effective index possible.\n\nYes, as I said, I know that doing this would improve a lot the\nqueries. My point was to understand why the cost of the index\nscan is so \"inaccurate\" compared to actual cost. Adding an index\non the date column enlarges the data by 100-150M so I'd rather\nsave this if possible.\n\n> > - I then tried to tweak random_page_cost and effective_cache_size\n> > following advices from documentation:\n> >\n> > SET random_page_cost = 2;\n> \n> random_page_cost is the way to go for this sort of thing but I don't\n> think it's a good idea to have it too low globally and I'm still\n\nThanks, I suspected so.\n\n> thinking the problem is that your test case is not accurate.\n\nOk.\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "20 Mar 2006 09:14:32 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
{
"msg_contents": "Hi Mark,\n\nThanks for your reply.\n\n> Guillaume Cottenceau wrote:\n\n[...]\n\n> > Btw, I use postgres 7.4.5 with -B 1000 -N 500 and all\n> > postgresql.conf default values except timezone = 'UTC', on an\n> > ext3 partition with data=ordered, and run Linux 2.6.12.\n> \n> I didn't see any mention of how much memory is on your server, but\n> provided you have say 1G, and are using the box solely for a database\n> server, I would increase both shared_buffers and effective_cache size.\n\nThis test machine has 1G of (real) memory, servers often have 2G\nor 4G. The thing is that the application runs on the same\nmachine, and as it is a java application, it takes up a little\nmemory too (we can say half of it should go to java and half to\npostgres, I guess). Determining the best memory \"plan\" is not so\neasy, though your information is priceless and will help a lot!\n \n> shared_buffer = 12000\n> effective_cache_size = 25000\n> \n> This would mean you are reserving 100M for Postgres to cache relation\n> pages, and informing the planner that it can expect ~200M available\n> from the disk buffer cache. To give a better recommendation, we need\n\nOk, thanks. I wanted to investigate this field, but as the\napplication is multithreaded and uses a lot of postgres clients,\nI wanted to make sure the shared_buffers values is globally for\npostgres, not just per (TCP) connection to postgres, before\nincreasing the value, fearing to take the whole server down.\n\nOn a server with 235 connections and -N 512 -B 1024, reading\nhttp://virtualthreads.blogspot.com/2006/02/understanding-memory-usage-on-linux.html\nI came up with the following figure:\n\nfor i in `pidof postmaster`; do pmap -d $i | grep -i writeable ; done | perl -MMDK::Common -ne 'do { push @a, $1; $tot += $1 } if /writeable.private: (\\d+)K/; END { print \"total postgres private memory: ${tot}K\\nmin: \" . min(@a) . \"K\\nmax: \" . max(@a) . \"K\\n\"; }'\ntotal postgres private memory: 432080K\nmin: 936K\nmax: 4216K\n\nAs the server has 2G of memory, I was reluctant to increase the\namount of shared memory since overall postgres memory use seems\nalready quite high - though 100M more would not kill the server,\nobviously. Btw, can you comment on the upper figures?\n\n> to know more about your server and workload (e.g server memory\n> configuration and usage plus how close you get to 500 connections).\n\nDepending on the server, it can have 200, up to around 400\nconnections open. As of workload, I am not sure what metrics are\nsuitable. Typically postgres can be seen in the top processes but\nmost queries are quick and average load average reported by the\nlinux kernel is nearly always below 0.3, and often 0.1. These are\nsingle or dual xeon 2.8 GHz machines with hardware raid (megaraid\nor percraid driver) with reasonable performance.\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "20 Mar 2006 09:35:14 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 09:35:14AM +0100, Guillaume Cottenceau wrote:\n> > shared_buffer = 12000\n> > effective_cache_size = 25000\n> > \n> > This would mean you are reserving 100M for Postgres to cache relation\n> > pages, and informing the planner that it can expect ~200M available\n> > from the disk buffer cache. To give a better recommendation, we need\n> \n> Ok, thanks. I wanted to investigate this field, but as the\n> application is multithreaded and uses a lot of postgres clients,\n> I wanted to make sure the shared_buffers values is globally for\n> postgres, not just per (TCP) connection to postgres, before\n> increasing the value, fearing to take the whole server down.\n\nshared_buffer is for the entire 'cluster', not per-connection or\nper-database.\n\nAlso, effective_cache_size of 25000 on a 1G machine seems pretty\nconservative to me. I'd set it to at least 512MB, if not closer to\n800MB.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 03:50:49 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 09:14:32AM +0100, Guillaume Cottenceau wrote:\n> Guillaume,\n> \n> Thanks for your answer.\n> \n> > On 17 Mar 2006 11:09:50 +0100, Guillaume Cottenceau\n> > wrote:\n> > > Reading the documentation and postgresql list archives, I have\n> > > run ANALYZE right before my tests, I have increased the\n> > > statistics target to 50 for the considered table; my problem is\n> > > that the index scan cost reported by EXPLAIN seems to be around\n> > > 12.7 times higher that it should, a figure I suppose incompatible\n> > > (too large) for just random_page_cost and effective_cache_size\n> > > tweaks.\n> > \n> > It's not surprising you have a high cost for an index scan which is\n> > planned to return and returns so much rows. I really don't think the\n> > planner does something wrong on this one.\n> \n> My point is that the planner's cost estimate is way above the\n> actual cost of the query, so the planner doesn't use the best\n> plan. Even if the index returns so much rows, actual cost of the\n> query is so that index scan (worst case, all disk cache flushed)\n> is still better than seq scan but the planner uses seq scan.\n\nYes. The cost estimator for an index scan supposedly does a linear\ninterpolation between a minimum cost and a maximum cost depending on the\ncorrelation of the first field in the index. The problem is that while\nthe comment states it's a linear interpolation, the actual formula\nsquares the correlation before interpolating. This means that unless the\ncorrelation is very high, you're going to get an unrealistically high\ncost for an index scan. I have data that supports this at\nhttp://stats.distributed.net/~decibel/, but I've never been able to get\naround to testing a patch to see if it improves things.\n\n<snip>\n> > thinking the problem is that your test case is not accurate.\n> \n> Ok.\n\nActually, I suspect your test case was probably fine, but take a look at\nthe data I've got and see what you think. If you want to spend some time\non this it should be possible to come up with a test case that uses\neither pgbench or dbt2/3 to generate data, so that others can easily\nreproduce (I can't really make the data I used for my testing\navailable).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 03:57:00 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
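The correlation statistic Jim describes is visible directly in pg_stats, so it is easy to check how well the leading index column tracks the physical row order before blaming the interpolation formula (table and column names taken from the original post):

SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'sent_messages'
  AND attname = 'date';

A value near +1 or -1 keeps the index scan estimate close to min_IO_cost; anything near zero pushes it toward the worst case under the current squared formula.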
{
"msg_contents": "\"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n\n[...]\n\n> > My point is that the planner's cost estimate is way above the\n> > actual cost of the query, so the planner doesn't use the best\n> > plan. Even if the index returns so much rows, actual cost of the\n> > query is so that index scan (worst case, all disk cache flushed)\n> > is still better than seq scan but the planner uses seq scan.\n> \n> Yes. The cost estimator for an index scan supposedly does a linear\n> interpolation between a minimum cost and a maximum cost depending on the\n> correlation of the first field in the index. The problem is that while\n> the comment states it's a linear interpolation, the actual formula\n> squares the correlation before interpolating. This means that unless the\n> correlation is very high, you're going to get an unrealistically high\n> cost for an index scan. I have data that supports this at\n> http://stats.distributed.net/~decibel/, but I've never been able to get\n> around to testing a patch to see if it improves things.\n\nInteresting.\n\nIt would be nice to investigate the arguments behind the choice\nyou describe for the formula used to perform the interpolation. I\nhave absolutely no knowledge on pg internals so this is rather\nnew/fresh for me, I have no idea how smart that choice is (but\nbased on my general feeling about pg, I'm suspecting this is\nactually smart but I am not smart enough to see why ;p).\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "21 Mar 2006 11:13:06 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Mon, Mar 20, 2006 at 09:35:14AM +0100, Guillaume Cottenceau wrote:\n> \n>>>shared_buffer = 12000\n>>>effective_cache_size = 25000\n>>>\n>>>This would mean you are reserving 100M for Postgres to cache relation\n>>>pages, and informing the planner that it can expect ~200M available\n>>>from the disk buffer cache. To give a better recommendation, we need\n>>\n>>Ok, thanks. I wanted to investigate this field, but as the\n>>application is multithreaded and uses a lot of postgres clients,\n>>I wanted to make sure the shared_buffers values is globally for\n>>postgres, not just per (TCP) connection to postgres, before\n>>increasing the value, fearing to take the whole server down.\n> \n> \n> shared_buffer is for the entire 'cluster', not per-connection or\n> per-database.\n> \n> Also, effective_cache_size of 25000 on a 1G machine seems pretty\n> conservative to me. I'd set it to at least 512MB, if not closer to\n> 800MB.\n\nI was going to recommend higher - but not knowing what else was running, \nkept it to quite conservative :-)... and given he's running java, the \nJVM could easily eat 512M all by itself!\n\nCheers\n\nMark\n",
"msg_date": "Tue, 21 Mar 2006 22:40:45 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 10:40:45PM +1200, Mark Kirkwood wrote:\n> I was going to recommend higher - but not knowing what else was running, \n> kept it to quite conservative :-)... and given he's running java, the \n> JVM could easily eat 512M all by itself!\n\nOh, didn't pick up on java being in the mix. Yeah, it can be a real pig.\nI think people often place too much emphasis on having a seperate\napplication server, but in the case of java you often have no choice.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 05:51:00 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 11:13:06AM +0100, Guillaume Cottenceau wrote:\n> \"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n> \n> [...]\n> \n> > > My point is that the planner's cost estimate is way above the\n> > > actual cost of the query, so the planner doesn't use the best\n> > > plan. Even if the index returns so much rows, actual cost of the\n> > > query is so that index scan (worst case, all disk cache flushed)\n> > > is still better than seq scan but the planner uses seq scan.\n> > \n> > Yes. The cost estimator for an index scan supposedly does a linear\n> > interpolation between a minimum cost and a maximum cost depending on the\n> > correlation of the first field in the index. The problem is that while\n> > the comment states it's a linear interpolation, the actual formula\n> > squares the correlation before interpolating. This means that unless the\n> > correlation is very high, you're going to get an unrealistically high\n> > cost for an index scan. I have data that supports this at\n> > http://stats.distributed.net/~decibel/, but I've never been able to get\n> > around to testing a patch to see if it improves things.\n> \n> Interesting.\n> \n> It would be nice to investigate the arguments behind the choice\n> you describe for the formula used to perform the interpolation. I\n> have absolutely no knowledge on pg internals so this is rather\n> new/fresh for me, I have no idea how smart that choice is (but\n> based on my general feeling about pg, I'm suspecting this is\n> actually smart but I am not smart enough to see why ;p).\n\nIf you feel like running some tests, you need to change\n\n run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n\nin src/backend/optimizer/path/costsize.c to something like\n\n run_cost += max_IO_cost + abs(indexCorrelation) * (min_IO_cost - max_IO_cost);\n\nThat might not produce a perfect cost estimate, but I'll wager that it\nwill be substantially better than what's in there now. FYI, see also\nhttp://archives.postgresql.org/pgsql-performance/2005-04/msg00669.php\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 05:58:35 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
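To see how much the squaring matters, the same interpolation can be computed both ways with made-up min and max costs of roughly the same order of magnitude as the plans earlier in the thread (illustrative arithmetic only, not the planner's exact Mackert-Lohman figures):

SELECT 1400000 + 0.3 * 0.3 * (100000 - 1400000) AS cost_with_csquared,
       1400000 + 0.3       * (100000 - 1400000) AS cost_with_abs_correlation;

At a correlation of 0.3 the current formula lands at about 1,283,000, still close to the worst case, while the linear form drops to about 1,010,000, 30% of the way toward the best case.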
{
"msg_contents": "\"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n\n> On Tue, Mar 21, 2006 at 10:40:45PM +1200, Mark Kirkwood wrote:\n> > I was going to recommend higher - but not knowing what else was running, \n> > kept it to quite conservative :-)... and given he's running java, the \n> > JVM could easily eat 512M all by itself!\n> \n> Oh, didn't pick up on java being in the mix. Yeah, it can be a real pig.\n> I think people often place too much emphasis on having a seperate\n> application server, but in the case of java you often have no choice.\n\nFortunately the servers use 2G or 4G of memory, only my test\nmachine had 1G, as I believe I precised in a message; so I'm\ndefinitely going to use Mark's advices to enlarge a lot the\nshared buffers. Btw, what about sort_mem? I have seen it only\nlittle referenced in the documentation.\n\nAlso, I'd still be interested in comments on the result of pmap\nshowing around 450M of \"private memory\" used by pg, if anyone can\nshare insight about it. Though most people seem freebsd-oriented,\nand this might be very much linux-centric.\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "21 Mar 2006 14:03:19 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner with index scan cost way off actual cost,"
},
{
"msg_contents": "\"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n\n> If you feel like running some tests, you need to change\n> \n> run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n> \n> in src/backend/optimizer/path/costsize.c to something like\n> \n> run_cost += max_IO_cost + abs(indexCorrelation) * (min_IO_cost - max_IO_cost);\n\nShort after the beginning of a discussion about planner\nassociating too high cost for index scan, I'm suggested to change\nsource-code.. I'm already frightened about the near future :)\n\n> That might not produce a perfect cost estimate, but I'll wager that it\n> will be substantially better than what's in there now. FYI, see also\n> http://archives.postgresql.org/pgsql-performance/2005-04/msg00669.php\n\nSad that Tom didn't share his thoughts about your cost algorithm\nquestion in this message.\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "21 Mar 2006 14:30:22 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 02:30:22PM +0100, Guillaume Cottenceau wrote:\n> \"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n> \n> > If you feel like running some tests, you need to change\n> > \n> > run_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n> > \n> > in src/backend/optimizer/path/costsize.c to something like\n> > \n> > run_cost += max_IO_cost + abs(indexCorrelation) * (min_IO_cost - max_IO_cost);\n> \n> Short after the beginning of a discussion about planner\n> associating too high cost for index scan, I'm suggested to change\n> source-code.. I'm already frightened about the near future :)\n\nWell, this is mostly because I've just never gotten around to following\nup on this.\n\n> > That might not produce a perfect cost estimate, but I'll wager that it\n> > will be substantially better than what's in there now. FYI, see also\n> > http://archives.postgresql.org/pgsql-performance/2005-04/msg00669.php\n> \n> Sad that Tom didn't share his thoughts about your cost algorithm\n> question in this message.\n\nSee above. :)\n\nIf someone comes up with a before and after comparison showing that the\nchange makes the estimator more accurate I'm sure the code will be\nchanged in short order. The nice thing about this case is that basically\nany PostgreSQL user can do the heavy lifting, instead of relying on the\nprimary contributors for a change.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 11:45:57 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,\n\tadvices to tweak cost constants?"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 02:03:19PM +0100, Guillaume Cottenceau wrote:\n> \"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n> \n> > On Tue, Mar 21, 2006 at 10:40:45PM +1200, Mark Kirkwood wrote:\n> > > I was going to recommend higher - but not knowing what else was running, \n> > > kept it to quite conservative :-)... and given he's running java, the \n> > > JVM could easily eat 512M all by itself!\n> > \n> > Oh, didn't pick up on java being in the mix. Yeah, it can be a real pig.\n> > I think people often place too much emphasis on having a seperate\n> > application server, but in the case of java you often have no choice.\n> \n> Fortunately the servers use 2G or 4G of memory, only my test\n> machine had 1G, as I believe I precised in a message; so I'm\n> definitely going to use Mark's advices to enlarge a lot the\n> shared buffers. Btw, what about sort_mem? I have seen it only\n> little referenced in the documentation.\n\nThe biggest issue with setting work_mem (you're not doing current\ndevelopment on 7.4 are you?) is ensuring that you don't push the server\ninto swapping. Remember that work_mem controls how much memory can be\nused for EACH sort or hash (maybe others) operation. Each query can\nconsume multiples of work_mem (since it can do multiple sorts, for\nexample), and of course each backend could be running a query at the\nsame time. Because of all this it's pretty difficult to make work_mem\nrecomendations without knowing a lot more about your environment.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 11:49:03 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,"
},
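Given the 200-400 open connections mentioned earlier, one low-risk way to apply this advice is to keep the global value small and raise it only in the sessions that actually need a big sort. A hedged sketch for 7.4, where the setting is still called sort_mem and the unit is kB (the query is just a placeholder report against the table from the original post):

BEGIN;
SET LOCAL sort_mem = 32768;   -- 32 MB for this transaction only
SELECT theme, count(*)
FROM sent_messages
GROUP BY theme
ORDER BY count(*) DESC;
COMMIT;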
{
"msg_contents": "On Fri, 2006-03-17 at 11:09 +0100, Guillaume Cottenceau wrote:\n\n> INFO: index \"idx_sent_msgs_date_theme_status\" now contains 3692284 row versions in 88057 pages\n\n> SET effective_cache_size = 10000;\n\nSET effective_cache_size > 88057, round up to 100000\n\nto ensure the index cost calculation knows the whole index will be\ncached, which it clearly could be with 4GB RAM.\n\nIf the cost is still wrong, it is because the index order doesn't\ncorrelate physically with the key columns. Use CLUSTER.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 21 Mar 2006 20:57:27 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,"
},
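If the correlation really is the culprit, the CLUSTER step Simon mentions would look roughly like this for the table in question (7.4-era syntax; it takes an exclusive lock and rewrites the whole table, so it needs a maintenance window, and the ordering decays again as new rows are appended):

CLUSTER idx_sent_msgs_date_theme_status ON sent_messages;
ANALYZE sent_messages;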
{
"msg_contents": "\"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n\n> On Tue, Mar 21, 2006 at 02:03:19PM +0100, Guillaume Cottenceau wrote:\n> > \"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n> > \n> > > On Tue, Mar 21, 2006 at 10:40:45PM +1200, Mark Kirkwood wrote:\n> > > > I was going to recommend higher - but not knowing what else was running, \n> > > > kept it to quite conservative :-)... and given he's running java, the \n> > > > JVM could easily eat 512M all by itself!\n> > > \n> > > Oh, didn't pick up on java being in the mix. Yeah, it can be a real pig.\n> > > I think people often place too much emphasis on having a seperate\n> > > application server, but in the case of java you often have no choice.\n> > \n> > Fortunately the servers use 2G or 4G of memory, only my test\n> > machine had 1G, as I believe I precised in a message; so I'm\n> > definitely going to use Mark's advices to enlarge a lot the\n> > shared buffers. Btw, what about sort_mem? I have seen it only\n> > little referenced in the documentation.\n> \n> The biggest issue with setting work_mem (you're not doing current\n> development on 7.4 are you?) is ensuring that you don't push the server\n\nYes, we use 7.4.5 actually, because \"it just works\", so production\nwants to first deal with all the things that don't work before\nupgrading. I have recently discovered about the background writer\nof 8.x which could be a supplementary reason to push for an\nugprade though.\n\n> into swapping. Remember that work_mem controls how much memory can be\n> used for EACH sort or hash (maybe others) operation. Each query can\n> consume multiples of work_mem (since it can do multiple sorts, for\n> example), and of course each backend could be running a query at the\n> same time. Because of all this it's pretty difficult to make work_mem\n> recomendations without knowing a lot more about your environment.\n\nOk, I see. Thanks for the info!\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "22 Mar 2006 09:04:29 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner with index scan cost way off actual cost,"
},
{
"msg_contents": "On Wed, 2006-03-22 at 02:04, Guillaume Cottenceau wrote:\n> \"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n> \n> > On Tue, Mar 21, 2006 at 02:03:19PM +0100, Guillaume Cottenceau wrote:\n> > > \"Jim C. Nasby\" <jnasby 'at' pervasive.com> writes:\n> > > \n> > > > On Tue, Mar 21, 2006 at 10:40:45PM +1200, Mark Kirkwood wrote:\n> > > > > I was going to recommend higher - but not knowing what else was running, \n> > > > > kept it to quite conservative :-)... and given he's running java, the \n> > > > > JVM could easily eat 512M all by itself!\n> > > > \n> > > > Oh, didn't pick up on java being in the mix. Yeah, it can be a real pig.\n> > > > I think people often place too much emphasis on having a seperate\n> > > > application server, but in the case of java you often have no choice.\n> > > \n> > > Fortunately the servers use 2G or 4G of memory, only my test\n> > > machine had 1G, as I believe I precised in a message; so I'm\n> > > definitely going to use Mark's advices to enlarge a lot the\n> > > shared buffers. Btw, what about sort_mem? I have seen it only\n> > > little referenced in the documentation.\n> > \n> > The biggest issue with setting work_mem (you're not doing current\n> > development on 7.4 are you?) is ensuring that you don't push the server\n> \n> Yes, we use 7.4.5 actually, because \"it just works\", so production\n> wants to first deal with all the things that don't work before\n> upgrading. I have recently discovered about the background writer\n> of 8.x which could be a supplementary reason to push for an\n> ugprade though.\n\nImagine you get a call from the manufacturer of your car. There's a\nproblem with the fuel pump, and, in a small percentage of accidents,\nyour car could catch fire and kill everyone inside.\n\nDo you go in for the recall, or ignore it because you just want your car\nto \"just work?\"\n\nIn the case of the third number in postgresql releases, that's what\nyou're talking about. the updates that have come after the 7.4.5\nversion, just talking 7.4 series here, have included a few crash and\ndata loss fixes. Rare, but possible.\n\nDon't worry about upgrading to 8.x until later, fine, but you should\nreally be upgrading to the latest patch level of 7.4.\n\nI fight this same fight at work, by the way. It's hard convincing\npeople that the updates are security / crash / data loss only...\n",
"msg_date": "Wed, 22 Mar 2006 10:16:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: planner with index scan cost way off actual cost,"
},
{
"msg_contents": "Hi Scott,\n\nScott Marlowe <smarlowe 'at' g2switchworks.com> writes:\n\n> On Wed, 2006-03-22 at 02:04, Guillaume Cottenceau wrote:\n\n[...]\n\n> > Yes, we use 7.4.5 actually, because \"it just works\", so production\n> > wants to first deal with all the things that don't work before\n> > upgrading. I have recently discovered about the background writer\n> > of 8.x which could be a supplementary reason to push for an\n> > ugprade though.\n> \n> Imagine you get a call from the manufacturer of your car. There's a\n> problem with the fuel pump, and, in a small percentage of accidents,\n> your car could catch fire and kill everyone inside.\n> \n> Do you go in for the recall, or ignore it because you just want your car\n> to \"just work?\"\n\nAh, this holy computer/OS/whatever-to-cars comparison.. How many\nmillion electrons would the world save if computer people would\nabandon it? :)\n\n> In the case of the third number in postgresql releases, that's what\n> you're talking about. the updates that have come after the 7.4.5\n> version, just talking 7.4 series here, have included a few crash and\n> data loss fixes. Rare, but possible.\n\nI guess we didn't know that. I for myself have (a bit more)\nexcuses because I'm on the development side :) But I've passed\nthe information to the operation team, thank you.\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "22 Mar 2006 17:25:56 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: planner with index scan cost way off actual cost,"
}
]
[
{
"msg_contents": "About a year ago we decided to migrate our central database that powers various\nintranet tools from MySQL to PostgreSQL. We have about 130 tables and about\n10GB of data that stores various status information for a variety of services\nfor our intranet. We generally have somewhere between 150-200 connections to\nthe database at any given time and probably anywhere between 5-10 new \nconnections being made every second and about 100 queries per second. Most\nof the queries and transactions are very small due to the fact that the tools\nwere designed to work around the small functionality of MySQL 3.23 DB. \nOur company primarily uses FreeBSD and we are stuck on FreeBSD 4.X series due\nto IT support issues, but I believe I may be able to get more performance out\nof our server by reconfiguring and setting up the postgresql.conf file up \nbetter. The performance is not as good as I was hoping at the moment and \nit seems as if the database is not making use of the available ram.\n\nsnapshot of active server:\nlast pid: 5788; load averages: 0.32, 0.31, 0.28 up 127+15:16:08 13:59:24\n169 processes: 1 running, 168 sleeping\nCPU states: 5.4% user, 0.0% nice, 9.9% system, 0.0% interrupt, 84.7% idle\nMem: 181M Active, 2632M Inact, 329M Wired, 179M Cache, 199M Buf, 81M Free\nSwap: 4096M Total, 216K Used, 4096M Free\n\n PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND\n14501 pgsql 2 0 254M 242M select 2 76:26 1.95% 1.95% postgre\n 5720 root 28 0 2164K 1360K CPU0 0 0:00 1.84% 0.88% top\n 5785 pgsql 2 0 255M 29296K sbwait 0 0:00 3.00% 0.15% postgre\n 5782 pgsql 2 0 255M 11900K sbwait 0 0:00 3.00% 0.15% postgre\n 5772 pgsql 2 0 255M 11708K sbwait 2 0:00 1.54% 0.15% postgre\n\n\nHere is my current configuration:\n\nDual Xeon 3.06Ghz 4GB RAM\nAdaptec 2200S 48MB cache & 4 disks configured in RAID5\nFreeBSD 4.11 w/kernel options:\noptions SHMMAXPGS=65536\noptions SEMMNI=256\noptions SEMMNS=512\noptions SEMUME=256\noptions SEMMNU=256\noptions SMP # Symmetric MultiProcessor Kernel\noptions APIC_IO # Symmetric (APIC) I/O\n\nThe OS is installed on the local single disk and postgres data directory\nis on the RAID5 partition. Maybe Adaptec 2200S RAID5 performance is not as\ngood as the vendor claimed. It was my impression that the raid controller \nthese days are optimized for RAID5 and going RAID10 would not benefit me much.\n\nAlso, I may be overlooking a postgresql.conf setting. I have attached the \nconfig file.\n\nIn summary, my questions:\n\n1. Would running PG on FreeBSD 5.x or 6.x or Linux improve performance?\n\n2. Should I change SCSI controller config to use RAID 10 instead of 5?\n\n3. Why isn't postgres using all 4GB of ram for at least caching table for reads?\n\n4. Are there any other settings in the conf file I could try to tweak?",
"msg_date": "Fri, 17 Mar 2006 14:11:16 -0800",
"msg_from": "Kenji Morishige <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best OS & Configuration for Dual Xeon w/4GB & Adaptec RAID 2200S"
},
{
"msg_contents": "On Fri, 2006-03-17 at 16:11, Kenji Morishige wrote:\n> About a year ago we decided to migrate our central database that powers various\n> intranet tools from MySQL to PostgreSQL. We have about 130 tables and about\n> 10GB of data that stores various status information for a variety of services\n> for our intranet. We generally have somewhere between 150-200 connections to\n> the database at any given time and probably anywhere between 5-10 new \n> connections being made every second and about 100 queries per second. Most\n> of the queries and transactions are very small due to the fact that the tools\n> were designed to work around the small functionality of MySQL 3.23 DB. \n> Our company primarily uses FreeBSD and we are stuck on FreeBSD 4.X series due\n> to IT support issues,\n\nThere were a LOT of performance enhancements to FreeBSD with the 5.x\nseries release. I'd recommend fast tracking the database server to the\n5.x branch. 4-stable was release 6 years ago. 5-stable was released\ntwo years ago.\n\n> but I believe I may be able to get more performance out\n> of our server by reconfiguring and setting up the postgresql.conf file up \n> better.\n\nCan't hurt. But if your OS isn't doing the job, postgresql.conf can\nonly do so much, nee?\n\n> The performance is not as good as I was hoping at the moment and \n> it seems as if the database is not making use of the available ram.\n> snapshot of active server:\n> last pid: 5788; load averages: 0.32, 0.31, 0.28 up 127+15:16:08 13:59:24\n> 169 processes: 1 running, 168 sleeping\n> CPU states: 5.4% user, 0.0% nice, 9.9% system, 0.0% interrupt, 84.7% idle\n> Mem: 181M Active, 2632M Inact, 329M Wired, 179M Cache, 199M Buf, 81M Free\n> Swap: 4096M Total, 216K Used, 4096M Free\n> \n> PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND\n> 14501 pgsql 2 0 254M 242M select 2 76:26 1.95% 1.95% postgre\n> 5720 root 28 0 2164K 1360K CPU0 0 0:00 1.84% 0.88% top\n> 5785 pgsql 2 0 255M 29296K sbwait 0 0:00 3.00% 0.15% postgre\n> 5782 pgsql 2 0 255M 11900K sbwait 0 0:00 3.00% 0.15% postgre\n> 5772 pgsql 2 0 255M 11708K sbwait 2 0:00 1.54% 0.15% postgre\n\nThat doesn't look good. Is this machine freshly rebooted, or has it\nbeen running postgres for a while? 179M cache and 199M buffer with 2.6\ngig inactive is horrible for a machine running a 10gig databases.\n\nFor comparison, here's what my production linux boxes show in top:\n 16:42:27 up 272 days, 14:49, 1 user, load average: 1.02, 1.04, 1.00\n162 processes: 161 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 0.2% 0.0% 0.4% 0.0% 0.0% 0.4% 98.7%\n cpu00 0.4% 0.0% 0.4% 0.0% 0.0% 0.0% 99.0%\n cpu01 0.0% 0.0% 0.4% 0.0% 0.0% 0.9% 98.5%\nMem: 6096912k av, 4529208k used, 1567704k free, 0k shrd, 306884k buff\n 2398948k actv, 1772072k in_d, 78060k in_c\nSwap: 4192880k av, 157480k used, 4035400k free 3939332k cached\n \nPID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n24000 postgres 15 0 752 524 456 S 0.0 0.0 0:00 1 rotatelogs\n24012 postgres 15 0 1536 1420 1324 S 0.0 0.0 7:11 0 postmaster\n24015 postgres 15 0 2196 2032 996 S 0.0 0.0 56:07 0 postmaster\n24016 postgres 15 0 1496 1352 1004 S 0.0 0.0 233:46 1 postmaster\n\nNote that the kernel here is caching ~3.9 gigs of data. so, postgresql\ndoesn't have to. 
Also, the disk buffers are sitting at > 300 Megs.\n\nIf FreeBSD 4.x can't or won't cache more than that, there's an OS issue\nhere, either endemic to FreeBSD 4.x, or your configuration of it.\n\n\n> Dual Xeon 3.06Ghz 4GB RAM\n\nMake sure hyperthreading is disabled, it's generally a performance loss\nfor pgsql.\n\n> Adaptec 2200S 48MB cache & 4 disks configured in RAID5\n\nI'm not a huge fan of adaptec RAID controllers, and 48 Megs ain't much. \nBut for what you're doing, I'd expect it to run well enough. Have you\ntested this array with bonnie++ to see what kind of performance it gets\nin general? There could be some kind of hardware issue going on you're\nnot seeing in the logs.\n\nIs that memory cache set to write back not through, and does it have\nbattery backup (the cache, not the machine)?\n\n> The OS is installed on the local single disk and postgres data directory\n> is on the RAID5 partition. Maybe Adaptec 2200S RAID5 performance is not as\n> good as the vendor claimed. It was my impression that the raid controller \n> these days are optimized for RAID5 and going RAID10 would not benefit me much.\n\nYou have to be careful about RAID 10, since many controllers serialize\naccess through multiple levels of RAID, and therefore wind up being\nslower in RAID 10 or 50 than in RAID 1 or 5.\n\n> Also, I may be overlooking a postgresql.conf setting. I have attached the \n> config file.\n\nIf you're doing a lot of small transactions you might see some gain from\nincreasing commit_delay to 100 to 1000 and commit siblings to 25 to\n100. It won't set the world on fire, but it's given me a 25% boost on\ncertain loads with lots of small transactions\n\n> \n> In summary, my questions:\n> \n> 1. Would running PG on FreeBSD 5.x or 6.x or Linux improve performance?\n\nIt most probably would. I'd at least test it out.\n\n> 2. Should I change SCSI controller config to use RAID 10 instead of 5?\n\nMaybe. With that controller, and many others in its league, you may be\nslowing things down doing that. You may be better off with a simple\nRAID 1 instead as well. Also, if you've got a problem with the\ncontroller serializing multiple raid levels, you might see the best\nperformance with one raid level on the controller and the other handled\nby the kernel. BSD does do kernel level RAID, right?\n\n> 3. Why isn't postgres using all 4GB of ram for at least caching table for reads?\n\nBecause that's your Operating System's job.\n\n> 4. Are there any other settings in the conf file I could try to tweak?\n\nWith the later versions of PostgreSQL, it's gotten better at doing the\nOS job of caching, IF you set it to use enough memory. You might try\ncranking up shared memory / shared_buffers to something large like 75%\nof the machine memory and see if that does help. With 7.4 and before,\nit's generally a really bad idea. Looking at your postgresql.conf it\nappears you're running a post-7.4 version, so you might be able to get\naway with handing over all the ram to the database.\n\nNow that the tuning stuff is out of the way. Have you been using the\nlogging to look for individual slow queries and run explain analyze on\nthem? Are you analyzing your database and vacuuming it too?\n",
"msg_date": "Fri, 17 Mar 2006 17:00:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
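As a rough sketch of the commit grouping described above, and of catching the individual slow statements worth an EXPLAIN ANALYZE, the settings can be tried per session before committing them to postgresql.conf (8.0 setting names; commit_delay is in microseconds, log_min_duration_statement is in milliseconds and needs superuser rights when set this way; the values are only starting points):

SET commit_delay = 500;
SET commit_siblings = 25;
SET log_min_duration_statement = 200;   -- log anything slower than 200 ms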
{
"msg_contents": "> Here is my current configuration:\n>\n> Dual Xeon 3.06Ghz 4GB RAM\n> Adaptec 2200S 48MB cache & 4 disks configured in RAID5\n> FreeBSD 4.11 w/kernel options:\n> options SHMMAXPGS=65536\n> options SEMMNI=256\n> options SEMMNS=512\n> options SEMUME=256\n> options SEMMNU=256\n> options SMP # Symmetric MultiProcessor Kernel\n> options APIC_IO # Symmetric (APIC) I/O\n>\n> The OS is installed on the local single disk and postgres data directory\n> is on the RAID5 partition. Maybe Adaptec 2200S RAID5 performance is not as\n> good as the vendor claimed. It was my impression that the raid controller\n> these days are optimized for RAID5 and going RAID10 would not benefit me much.\n\nI don't know whether 'systat -vmstat' is available on 4.x, if so try\nto issue the command with 'systat -vmstat 1' for 1 sec. updates. This\nwill (amongst much other info) show how much disk-transfer you have.\n\n> Also, I may be overlooking a postgresql.conf setting. I have attached the\n> config file.\n\nYou could try to lower shared_buffers from 30000 to 16384. Setting\nthis value too high can in some cases be counterproductive according\nto doc's I read.\n\nAlso try to lower work_mem from 16384 to 8192 or 4096. This setting is\nfor each sort, so it does become expensive in terms of memory when\nmany sorts are being carried out. It does depend on the complexity of\nyour sorts of course.\n\nTry to do a vacuum analyse in your crontab. If your aliases-file is\nset up correctly mails generated by crontab will be forwarded to a\nhuman being. I have the following in my (root) crontab (and mail to\nroot forwarded to me):\n\ntime /usr/local/bin/psql -d dbname -h dbhost -U username -c \"vacuum\nanalyse verbose;\"\n\n> In summary, my questions:\n>\n> 1. Would running PG on FreeBSD 5.x or 6.x or Linux improve performance?\n\nGoing to 6.x would probably increase overall performance, but you have\nto try it out first. Many people report increased performance just by\nupgrading, some report that it grinds to a halt. But SMP-wise 6.x is a\nmore mature release than 4.x is. Changes to the kernel from being\ngiant-locked in 4.x to be \"fine-grained locked\" started in 5.x and\nhave improved in 6.x. The disk- and network-layer should behave\nbetter.\n\nLinux, don't know. If your expertise is in FreeBSD try this first and\nthen move to Linux (or Solaris 10) if 6.x does not meet your\nexpectations.\n\n> 3. Why isn't postgres using all 4GB of ram for at least caching table for reads?\n\nI guess it's related to the usage of the i386-architecture in general.\nIf the zzeons are the newer noconas you can try the amd64-port\ninstead. This can utilize more memory (without going through PAE).\n\n> 4. Are there any other settings in the conf file I could try to tweak?\n\nmax_fsm_pages and max_fsm_relations. You can look at the bottom of\nvacuum analyze and increase the values:\n\nINFO: free space map: 153 relations, 43445 pages stored; 45328 total\npages needed\n\nRaise max_fsm_pages so it meet or exceed 'total pages needed' and\nmax_fsm_relations to relations.\n\nThis is finetuning though. It's more important to set work- and\nmaintenance-mem correct.\n\nhth\nClaus\n",
"msg_date": "Sat, 18 Mar 2006 00:03:29 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec RAID 2200S"
},
{
"msg_contents": "Kenji Morishige <[email protected]> writes:\n> ... We generally have somewhere between 150-200 connections to\n> the database at any given time and probably anywhere between 5-10 new \n> connections being made every second and about 100 queries per second. Most\n> of the queries and transactions are very small due to the fact that the tools\n> were designed to work around the small functionality of MySQL 3.23 DB.\n\nYou should think seriously about putting in some sort of\nconnection-pooling facility. Postgres backends aren't especially\nlightweight things; the overhead involved in forking a process and then\ngetting its internal caches populated etc. is significant. You don't\nwant to be doing that for one small query, at least not if you're doing\nso many times a second.\n\n> it seems as if the database is not making use of the available ram.\n\nPostgres generally relies on the kernel to do the bulk of the disk\ncaching. Your shared_buffers setting of 30000 seems quite reasonable to\nme; I don't think you want to bump it up (not much anyway). I'm not too\nfamiliar with FreeBSD and so I'm not clear on what \"Inact\" is:\n\n> Mem: 181M Active, 2632M Inact, 329M Wired, 179M Cache, 199M Buf, 81M Free\n> Swap: 4096M Total, 216K Used, 4096M Free\n\nIf \"Inact\" covers disk pages cached by the kernel then this is looking\nreasonably good. If it's something else then you got a problem, but\nfixing it is a kernel issue not a database issue.\n\n> #max_fsm_pages = 20000\t\t# min max_fsm_relations*16, 6 bytes each\n\nYou almost certainly need to bump this way up. 20000 is enough to cover\ndirty pages in about 200MB of database, which is only a fiftieth of\nwhat you say your disk footprint is. Unless most of your data is\nstatic, you're going to be suffering severe table bloat over time due\nto inability to recycle free space properly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 17 Mar 2006 18:14:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec RAID 2200S "
},
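The arithmetic behind the max_fsm_pages point is simple enough to check: each slot in the free space map tracks one 8 kB page, so the default covers only a small slice of a 10 GB footprint (rough figures, assuming the default block size):

SELECT 20000 * 8192 / (1024 * 1024)            AS mb_tracked_by_default_fsm,  -- about 156 MB
       10 * 1024 * 1024 * 1024::bigint / 8192  AS heap_pages_in_10gb;         -- about 1.3 million

which is why a value sized from the 'pages needed' line of VACUUM VERBOSE output is needed here rather than the default.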
{
"msg_contents": "On Fri, 2006-03-17 at 17:03, Claus Guttesen wrote:\n> > Here is my current configuration:\n\n> > Also, I may be overlooking a postgresql.conf setting. I have attached the\n> > config file.\n> \n> You could try to lower shared_buffers from 30000 to 16384. Setting\n> this value too high can in some cases be counterproductive according\n> to doc's I read.\n\nFYI, that was very true before 8.0, but since the introduction of better\ncache management algorithms, you can have pretty big shared_buffers\nsettings.\n\n> Also try to lower work_mem from 16384 to 8192 or 4096. This setting is\n> for each sort, so it does become expensive in terms of memory when\n> many sorts are being carried out. It does depend on the complexity of\n> your sorts of course.\n\nBut looking at his usage of RAM on his box, it doesn't look like one at\nthe time that snapshot was taken. Assuming the box was busy then, he's\nOK. Otherwise, he'd show a usage of swapping, which he doesn't.\n",
"msg_date": "Fri, 17 Mar 2006 17:21:16 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "> 4. Are there any other settings in the conf file I could try to tweak?\n\nOne more thing :-)\n\nI stumbled over this setting, this made the db (PG 7.4.9) make use of\nthe index rather than doing a sequential scan and it reduced a query\nfrom several minutes to some 20 seconds.\n\nrandom_page_cost = 2 (original value was 4).\n\nAnother thing you ought to do is to to get the four-five most used\nqueries and do an explain analyze in these. Since our website wasn't\nprepared for this type of statistics I simply did a tcpdump, grep'ed\nall select's, sorted them and sorted them unique so I could see which\nqueries were used most.\n\nregards\nClaus\n",
"msg_date": "Sat, 18 Mar 2006 00:29:17 +0100",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec RAID 2200S"
},
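One way to try the random_page_cost = 2 suggestion without committing it globally is to set it per session around one of the queries turned up by the survey; the SELECT below is only a stand-in that runs anywhere (it reads the system catalog) so the sketch stays self-contained:

SHOW random_page_cost;
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT relname FROM pg_class WHERE relname LIKE 'pg_%';
RESET random_page_cost;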
{
"msg_contents": "Scott Marlowe wrote:\n> On Fri, 2006-03-17 at 16:11, Kenji Morishige wrote:\n> \n>>About a year ago we decided to migrate our central database that powers various\n>>intranet tools from MySQL to PostgreSQL. We have about 130 tables and about\n>>10GB of data that stores various status information for a variety of services\n>>for our intranet. We generally have somewhere between 150-200 connections to\n>>the database at any given time and probably anywhere between 5-10 new \n>>connections being made every second and about 100 queries per second. Most\n>>of the queries and transactions are very small due to the fact that the tools\n>>were designed to work around the small functionality of MySQL 3.23 DB. \n>>Our company primarily uses FreeBSD and we are stuck on FreeBSD 4.X series due\n>>to IT support issues,\n> \n> \n> There were a LOT of performance enhancements to FreeBSD with the 5.x\n> series release. I'd recommend fast tracking the database server to the\n> 5.x branch. 4-stable was release 6 years ago. 5-stable was released\n> two years ago.\n> \n> \n\nI would recommend skipping 5.x and using 6.0 - as it performs measurably \nbetter than 5.x. In particular the vfs layer is no longer under the \nGIANT lock, so you will get considerably improved concurrent filesystem \naccess on your dual Xeon.\n\nRegards\n\nMark\n",
"msg_date": "Sat, 18 Mar 2006 13:00:37 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "Thanks guys, I'm studying each of your responses and am going to start to \nexperiement. Unfortunately, I don't have another box with similar specs to\ndo a perfect experiment with, but I think I'm going to go ahead and open a \nservice window to ungrade the box to FBSD6.0 and apply some other changes. It\nalso gives me the chance to go from 8.0.1 to 8.1 series which I been wanting\nto do as well. Thanks guys and I will see if any of your suggestions make \na noticable difference. I also have been looking at log result of slow queries\nand making necessary indexes to make those go faster.\n\n-Kenji\n\nOn Sat, Mar 18, 2006 at 12:29:17AM +0100, Claus Guttesen wrote:\n> > 4. Are there any other settings in the conf file I could try to tweak?\n> \n> One more thing :-)\n> \n> I stumbled over this setting, this made the db (PG 7.4.9) make use of\n> the index rather than doing a sequential scan and it reduced a query\n> from several minutes to some 20 seconds.\n> \n> random_page_cost = 2 (original value was 4).\n> \n> Another thing you ought to do is to to get the four-five most used\n> queries and do an explain analyze in these. Since our website wasn't\n> prepared for this type of statistics I simply did a tcpdump, grep'ed\n> all select's, sorted them and sorted them unique so I could see which\n> queries were used most.\n> \n> regards\n> Claus\n",
"msg_date": "Fri, 17 Mar 2006 16:08:55 -0800",
"msg_from": "Kenji Morishige <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec RAID 2200S"
},
{
"msg_contents": "Kenji,\n\n\nOn 3/17/06 4:08 PM, \"Kenji Morishige\" <[email protected]> wrote:\n\n> Thanks guys, I'm studying each of your responses and am going to start to\n> experiement.\n\nI notice that no one asked you about your disk bandwidth - the Adaptec 2200S\nis a \"known bad\" controller - the bandwidth to/from in RAID5 is about 1/2 to\n1/3 of a single disk drive, which is far too slow for a 10GB database, and\nIMO should disqualify a RAID adapter from being used at all.\n\nWithout fixing this, I'd suggest that all of the other tuning described here\nwill have little value, provided your working set is larger than your RAM.\n\nYou should test the I/O bandwidth using these simple tests:\n time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=1000000 && sync\"\n\nthen:\n time dd if=bigfile of=/dev/null bs=8k\n\nYou should get on the order of 150MB/s on four disk drives in RAID5.\n\nAnd before people jump in about \"random I/O\", etc, the sequential scan test\nwill show whether the controller is just plain bad very quickly. If it\ncan't do sequential fast, it won't do seeks fast either.\n\n- Luke\n\n\n",
"msg_date": "Sun, 19 Mar 2006 11:26:16 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "On Fri, Mar 17, 2006 at 05:00:34PM -0600, Scott Marlowe wrote:\n> > last pid: 5788; load averages: 0.32, 0.31, 0.28 up 127+15:16:08 13:59:24\n> > 169 processes: 1 running, 168 sleeping\n> > CPU states: 5.4% user, 0.0% nice, 9.9% system, 0.0% interrupt, 84.7% idle\n> > Mem: 181M Active, 2632M Inact, 329M Wired, 179M Cache, 199M Buf, 81M Free\n> > Swap: 4096M Total, 216K Used, 4096M Free\n> > \n> > PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND\n> > 14501 pgsql 2 0 254M 242M select 2 76:26 1.95% 1.95% postgre\n> > 5720 root 28 0 2164K 1360K CPU0 0 0:00 1.84% 0.88% top\n> > 5785 pgsql 2 0 255M 29296K sbwait 0 0:00 3.00% 0.15% postgre\n> > 5782 pgsql 2 0 255M 11900K sbwait 0 0:00 3.00% 0.15% postgre\n> > 5772 pgsql 2 0 255M 11708K sbwait 2 0:00 1.54% 0.15% postgre\n> \n> That doesn't look good. Is this machine freshly rebooted, or has it\n> been running postgres for a while? 179M cache and 199M buffer with 2.6\n> gig inactive is horrible for a machine running a 10gig databases.\n\nNo, this is perfectly fine. Inactive memory in FreeBSD isn't the same as\nFree. It's the same as 'active' memory except that it's pages that\nhaven't been accessed in X amount of time (between 100 and 200 ms, I\nthink). When free memory starts getting low, FBSD will start moving\npages from the inactive queue to the free queue (possibly resulting in\nwrites to disk along the way).\n\nIIRC, Cache is the directory cache, and Buf is disk buffers, which is\nsomewhat akin to shared_buffers in PostgreSQL.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 20 Mar 2006 08:45:04 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "On Mon, 2006-03-20 at 08:45, Jim C. Nasby wrote:\n> On Fri, Mar 17, 2006 at 05:00:34PM -0600, Scott Marlowe wrote:\n> > > last pid: 5788; load averages: 0.32, 0.31, 0.28 up 127+15:16:08 13:59:24\n> > > 169 processes: 1 running, 168 sleeping\n> > > CPU states: 5.4% user, 0.0% nice, 9.9% system, 0.0% interrupt, 84.7% idle\n> > > Mem: 181M Active, 2632M Inact, 329M Wired, 179M Cache, 199M Buf, 81M Free\n> > > Swap: 4096M Total, 216K Used, 4096M Free\n> > > \n> > > PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND\n> > > 14501 pgsql 2 0 254M 242M select 2 76:26 1.95% 1.95% postgre\n> > > 5720 root 28 0 2164K 1360K CPU0 0 0:00 1.84% 0.88% top\n> > > 5785 pgsql 2 0 255M 29296K sbwait 0 0:00 3.00% 0.15% postgre\n> > > 5782 pgsql 2 0 255M 11900K sbwait 0 0:00 3.00% 0.15% postgre\n> > > 5772 pgsql 2 0 255M 11708K sbwait 2 0:00 1.54% 0.15% postgre\n> > \n> > That doesn't look good. Is this machine freshly rebooted, or has it\n> > been running postgres for a while? 179M cache and 199M buffer with 2.6\n> > gig inactive is horrible for a machine running a 10gig databases.\n> \n> No, this is perfectly fine. Inactive memory in FreeBSD isn't the same as\n> Free. It's the same as 'active' memory except that it's pages that\n> haven't been accessed in X amount of time (between 100 and 200 ms, I\n> think). When free memory starts getting low, FBSD will start moving\n> pages from the inactive queue to the free queue (possibly resulting in\n> writes to disk along the way).\n> \n> IIRC, Cache is the directory cache, and Buf is disk buffers, which is\n> somewhat akin to shared_buffers in PostgreSQL.\n\nSo, then, the inact is pretty much the same as kernel buffers in linux?\n",
"msg_date": "Mon, 20 Mar 2006 10:20:20 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "\nOn Mar 17, 2006, at 5:11 PM, Kenji Morishige wrote:\n\n> In summary, my questions:\n>\n> 1. Would running PG on FreeBSD 5.x or 6.x or Linux improve \n> performance?\n\nFreeBSD 6.x will definitely get you improvements. Many speedup \nimprovements have been made to both the generic disk layer and the \nspecific drivers. However, the current best of breed RAID controller \nis the LSI 320-x (I use 320-2X). I have one box into which this \ncard will not fit (Thanks Sun, for making a box with only low-profile \nslots!) so I use an Adaptec 2230SLP card in it. Testing shows it is \nabout 80% speed of a LSI 320-2x on sequential workload (load DB, run \nsome queries, rebuild indexes, etc.)\n\nIf you do put on FreeBSD 6, I'd love to see the output of \"diskinfo - \nv -t\" on your RAID volume(s).\n\n>\n> 2. Should I change SCSI controller config to use RAID 10 instead of 5?\n\nI use RAID10.\n\n>\n> 3. Why isn't postgres using all 4GB of ram for at least caching \n> table for reads?\n\nI think FreeBSD has a hard upper limit on the total ram it will use \nfor disk cache. I haven't been able to get reliable, irrefutable, \nanswers about it, though.\n\n>\n> 4. Are there any other settings in the conf file I could try to tweak?\n\nI like to bump up the checkpoint segments to 256.\n\n",
"msg_date": "Mon, 20 Mar 2006 14:15:22 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec RAID 2200S"
},
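Written out as commands for reference; the device name and data directory below are examples rather than details taken from the thread.

    # FreeBSD raw-volume benchmark mentioned above:
    diskinfo -v -t /dev/da0
    # Larger WAL allocation as suggested, set in postgresql.conf:
    #   checkpoint_segments = 256
    # then have the server re-read its configuration:
    pg_ctl -D /usr/local/pgsql/data reload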
{
"msg_contents": "On Mon, 20 Mar 2006, Jim C. Nasby wrote:\n\n> No, this is perfectly fine. Inactive memory in FreeBSD isn't the same as\n> Free. It's the same as 'active' memory except that it's pages that\n> haven't been accessed in X amount of time (between 100 and 200 ms, I\n> think). When free memory starts getting low, FBSD will start moving\n> pages from the inactive queue to the free queue (possibly resulting in\n> writes to disk along the way).\n>\n> IIRC, Cache is the directory cache, and Buf is disk buffers, which is\n> somewhat akin to shared_buffers in PostgreSQL.\n\nI don't believe that's true. I'm not an expert in FreeBSD's VM internals,\nbut this is how I believe it works:\n\nActive pages are pages currently mapped in to a process's address space.\n\nInactive pages are pages which are marked dirty (must be written to\nbacking store before they can be freed) and which are not mapped in to a\nprocess's address. They're still associated with a VM object of some kind\n- like part of a process's virtual address space or a as part of the cache\nfor a file on disk. If it's still part of a process's virtual address\nspace and is accessed a fault is generated. The page is then put back in\nto the address mappings.\n\nCached pages are like inactive pages but aren't dirty. Then can be either\nre-mapped or freed immediately.\n\nFree pages are properly free. Wired pages are unswappable. Buf I'm not\nsure about. It doesn't represent that amount of memory used to cache files\non disk, I'm sure of that. The sysctl -d description is 'KVA memory used\nfor bufs', so I suspect that it's the amount of kernel virtual address\nspace mapped to pages in the 'active', 'inactive' and 'cache' queues.\n\n-- \n Alex Hayward\n Seatbooker\n\n",
"msg_date": "Mon, 20 Mar 2006 19:46:13 +0000 (GMT)",
"msg_from": "Alex Hayward <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "Vivek Khera wrote:\n\n>\n> On Mar 17, 2006, at 5:11 PM, Kenji Morishige wrote:\n>\n>> In summary, my questions:\n>>\n>> 1. Would running PG on FreeBSD 5.x or 6.x or Linux improve performance?\n>\n>\n> FreeBSD 6.x will definitely get you improvements. Many speedup \n> improvements have been made to both the generic disk layer and the \n> specific drivers. However, the current best of breed RAID controller \n> is the LSI 320-x (I use 320-2X). I have one box into which this \n> card will not fit (Thanks Sun, for making a box with only low-profile \n> slots!) so I use an Adaptec 2230SLP card in it. Testing shows it is \n> about 80% speed of a LSI 320-2x on sequential workload (load DB, run \n> some queries, rebuild indexes, etc.)\n>\n> If you do put on FreeBSD 6, I'd love to see the output of \"diskinfo - \n> v -t\" on your RAID volume(s).\n>\nNot directly related ...\ni have a HP dl380 g3 with array 5i controlled (1+0), these are my results\n\nshiva2# /usr/sbin/diskinfo -v -t /dev/da2s1d\n/dev/da2s1d\n 512 # sectorsize\n 218513555456 # mediasize in bytes (204G)\n 426784288 # mediasize in sectors\n 52301 # Cylinders according to firmware.\n 255 # Heads according to firmware.\n 32 # Sectors according to firmware.\n\nSeek times:\n Full stroke: 250 iter in 1.138232 sec = 4.553 msec\n Half stroke: 250 iter in 1.084474 sec = 4.338 msec\n Quarter stroke: 500 iter in 1.690313 sec = 3.381 msec\n Short forward: 400 iter in 0.752646 sec = 1.882 msec\n Short backward: 400 iter in 1.306270 sec = 3.266 msec\n Seq outer: 2048 iter in 0.766676 sec = 0.374 msec\n Seq inner: 2048 iter in 0.803759 sec = 0.392 msec\nTransfer rates:\n outside: 102400 kbytes in 2.075984 sec = 49326 kbytes/sec\n middle: 102400 kbytes in 2.100510 sec = 48750 kbytes/sec\n inside: 102400 kbytes in 2.042313 sec = 50139 kbytes/sec\n\n\nis this good enough?\n",
"msg_date": "Mon, 20 Mar 2006 14:52:45 -0600",
"msg_from": "Miguel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "Miguel,\n\nOn 3/20/06 12:52 PM, \"Miguel\" <[email protected]> wrote:\n\n> i have a HP dl380 g3 with array 5i controlled (1+0), these are my results\n\nAnother \"known bad\" RAID controller. The Smartarray 5i is horrible on Linux\n- this is the first BSD result I've seen.\n \n> Seek times:\n> Full stroke: 250 iter in 1.138232 sec = 4.553 msec\n> Half stroke: 250 iter in 1.084474 sec = 4.338 msec\n\nThese seem OK - are they \"access times\" or are they actually \"seek times\"?\nSeems like with RAID 10, you should get better by maybe double.\n\n> Transfer rates:\n> outside: 102400 kbytes in 2.075984 sec = 49326 kbytes/sec\n> middle: 102400 kbytes in 2.100510 sec = 48750 kbytes/sec\n> inside: 102400 kbytes in 2.042313 sec = 50139 kbytes/sec\n> \n> \n> is this good enough?\n\nIt's pretty slow. How many disk drives do you have?\n\n- Luke\n\n\n",
"msg_date": "Mon, 20 Mar 2006 12:59:13 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "Luke Lonergan wrote:\n\n>Miguel,\n>\n>On 3/20/06 12:52 PM, \"Miguel\" <[email protected]> wrote:\n>\n> \n>\n>>i have a HP dl380 g3 with array 5i controlled (1+0), these are my results\n>> \n>>\n>\n>Another \"known bad\" RAID controller. The Smartarray 5i is horrible on Linux\n>- this is the first BSD result I've seen.\n> \n> \n>\n>>Seek times:\n>> Full stroke: 250 iter in 1.138232 sec = 4.553 msec\n>> Half stroke: 250 iter in 1.084474 sec = 4.338 msec\n>> \n>>\n>\n>These seem OK - are they \"access times\" or are they actually \"seek times\"?\n> \n>\ni dont know, how can i check?\n\n>Transfer rates:\n> outside: 102400 kbytes in 2.075984 sec = 49326 kbytes/sec\n> middle: 102400 kbytes in 2.100510 sec = 48750 kbytes/sec\n> inside: 102400 kbytes in 2.042313 sec = 50139 kbytes/sec\n>\n>\n>is this good enough?\n>It's pretty slow. How many disk drives do you have?\n>\n>\n> \n>\nI have 6 ultra a320 72G 10k discs\n\n---\nMiguel\n",
"msg_date": "Mon, 20 Mar 2006 15:12:37 -0600",
"msg_from": "Miguel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "Miguel,\n\nOn 3/20/06 1:12 PM, \"Miguel\" <[email protected]> wrote:\n\n> i dont know, how can i check?\n\nNo matter - it's the benchmark that would tell you, it's probably \"access\ntime\" that's being measured even though the text says \"seek time\". The\ndifference is that seek time represents only the head motion, where access\ntime is the whole access including seek. Access times of 4.5ms are typical\nof a single 10K RPM SCSI disk drive like the Seagate barracuda.\n\n>> Transfer rates:\n>> outside: 102400 kbytes in 2.075984 sec = 49326 kbytes/sec\n>> middle: 102400 kbytes in 2.100510 sec = 48750 kbytes/sec\n>> inside: 102400 kbytes in 2.042313 sec = 50139 kbytes/sec\n>> \n> I have 6 ultra a320 72G 10k discs\n\nYah - ouch. With 6 drives in a RAID10, you should expect 3 drives worth of\nsequential scan performance, or anywhere from 100MB/s to 180MB/s. You're\ngetting from half to 1/3 of the performance you'd get with a decent raid\ncontroller.\n\nIf you add a simple SCSI adapter like the common LSI U320 adapter to your\nDL380G3 and then run software RAID, you will get more than 150MB/s with less\nCPU consumption. I'd also expect you'd get down to about 2ms access times.\n\nThis might not be easy for you to do, and you might prefer hardware RAID\nadapters, but I don't have a recommendation for you there. I'd stay away\nfrom the HP line.\n\n- Luke \n\n\n",
"msg_date": "Mon, 20 Mar 2006 13:27:56 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "Luke Lonergan wrote:\n\n>>>Transfer rates:\n>>> outside: 102400 kbytes in 2.075984 sec = 49326 kbytes/sec\n>>> middle: 102400 kbytes in 2.100510 sec = 48750 kbytes/sec\n>>> inside: 102400 kbytes in 2.042313 sec = 50139 kbytes/sec\n>>>\n>>> \n>>>\n>>I have 6 ultra a320 72G 10k discs\n>> \n>>\n>\n>Yah - ouch. With 6 drives in a RAID10, you should expect 3 drives worth of\n>sequential scan performance, or anywhere from 100MB/s to 180MB/s. You're\n>getting from half to 1/3 of the performance you'd get with a decent raid\n>controller.\n>\n>If you add a simple SCSI adapter like the common LSI U320 adapter to your\n>DL380G3 and then run software RAID, you will get more than 150MB/s with less\n>CPU consumption. I'd also expect you'd get down to about 2ms access times.\n>\n>This might not be easy for you to do, and you might prefer hardware RAID\n>adapters, but I don't have a recommendation for you there. I'd stay away\n>from the HP line.\n>\n> \n>\nThis is my new postgreql 8.1.3 server, so i have many options (in fact, \nany option) to choose from, i want maximum performance, if i undestood \nyou well, do you mean using something like vinum?\ni forgot to mention that the 6 discs are in a MSA500 G2 external \nstoradge, additionally i have two 36G a320 10k in raid 10 for the os \ninstalled in the server slots.\n---\nMiguel\n\n",
"msg_date": "Mon, 20 Mar 2006 15:51:41 -0600",
"msg_from": "Miguel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "Miguel,\n\n\nOn 3/20/06 1:51 PM, \"Miguel\" <[email protected]> wrote:\n\n> i forgot to mention that the 6 discs are in a MSA500 G2 external\n> storadge, additionally i have two 36G a320 10k in raid 10 for the os\n> installed in the server slots.\n\nI just checked online and I think the MSA500 G2 has it's own SCSI RAID\ncontrollers, so you are actually just using the 5i as a SCSI attach, which\nit's not good at (no reordering/command queueing, etc). So, just using a\nsimple SCSI adapter to connect to the MSA might be a big win.\n\n- Luke \n\n\n",
"msg_date": "Mon, 20 Mar 2006 14:04:25 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "Luke Lonergan wrote:\n\n>Miguel,\n>\n>\n>On 3/20/06 1:51 PM, \"Miguel\" <[email protected]> wrote:\n>\n> \n>\n>>i forgot to mention that the 6 discs are in a MSA500 G2 external\n>>storadge, additionally i have two 36G a320 10k in raid 10 for the os\n>>installed in the server slots.\n>> \n>>\n>\n>I just checked online and I think the MSA500 G2 has it's own SCSI RAID\n>controllers,\n>\nYes, it has its own redundant controller,\n\n> so you are actually just using the 5i as a SCSI attach, which\n>it's not good at (no reordering/command queueing, etc). So, just using a\n>simple SCSI adapter to connect to the MSA might be a big win.\n> \n>\n\nI will try a LS320 and will let you know if i got any performance gain,\nthanks for your advises\n\n---\nMiguel\n\n",
"msg_date": "Mon, 20 Mar 2006 16:14:13 -0600",
"msg_from": "Miguel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": ">> If you do put on FreeBSD 6, I'd love to see the output of \n>> \"diskinfo - v -t\" on your RAID volume(s).\n>>\n> Not directly related ...\n> i have a HP dl380 g3 with array 5i controlled (1+0), these are my \n> results\n> [...]\n> is this good enough?\n\nIs that on a loaded box or a mostly quiet box? Those number seem \nrather low for my tastes. For comparison, here are numbers from a \nDell 1850 with a built-in PERC 4e/Si RAID in a two disk mirror. All \nnumbers below are on mostly or totally quiet disk systems.\n\namrd0\n 512 # sectorsize\n 73274490880 # mediasize in bytes (68G)\n 143114240 # mediasize in sectors\n 8908 # Cylinders according to firmware.\n 255 # Heads according to firmware.\n 63 # Sectors according to firmware.\n\nSeek times:\n Full stroke: 250 iter in 0.756718 sec = 3.027 msec\n Half stroke: 250 iter in 0.717824 sec = 2.871 msec\n Quarter stroke: 500 iter in 1.972368 sec = 3.945 msec\n Short forward: 400 iter in 1.193179 sec = 2.983 msec\n Short backward: 400 iter in 1.322440 sec = 3.306 msec\n Seq outer: 2048 iter in 0.271402 sec = 0.133 msec\n Seq inner: 2048 iter in 0.271151 sec = 0.132 msec\nTransfer rates:\n outside: 102400 kbytes in 1.080339 sec = 94785 \nkbytes/sec\n middle: 102400 kbytes in 1.166021 sec = 87820 \nkbytes/sec\n inside: 102400 kbytes in 1.461498 sec = 70065 \nkbytes/sec\n\n\nAnd for the *real* disks.... In the following two cases, I used a \nDell 1425SC with 1GB RAM and connected the controllers to the same \nDell PowerVault 14 disk U320 array (one controller at a time, \nobviously). For each controller each pair of the mirror was on the \nopposite channel of the controller for optimal speed. disk 0 is a \nRAID1 of two drives, and disk 1 is a RAID10 of the remaining 12 \ndrives. All running FreeBSD 6.0 RELEASE. 
First I tested the Adaptec \n2230SLP and got these:\n\naacd0\n 512 # sectorsize\n 36385456128 # mediasize in bytes (34G)\n 71065344 # mediasize in sectors\n 4423 # Cylinders according to firmware.\n 255 # Heads according to firmware.\n 63 # Sectors according to firmware.\n\nSeek times:\n Full stroke: 250 iter in 2.288389 sec = 9.154 msec\n Half stroke: 250 iter in 1.657302 sec = 6.629 msec\n Quarter stroke: 500 iter in 2.756597 sec = 5.513 msec\n Short forward: 400 iter in 1.205275 sec = 3.013 msec\n Short backward: 400 iter in 1.249310 sec = 3.123 msec\n Seq outer: 2048 iter in 0.412770 sec = 0.202 msec\n Seq inner: 2048 iter in 0.428585 sec = 0.209 msec\nTransfer rates:\n outside: 102400 kbytes in 1.204412 sec = 85021 \nkbytes/sec\n middle: 102400 kbytes in 1.347325 sec = 76002 \nkbytes/sec\n inside: 102400 kbytes in 2.036832 sec = 50274 \nkbytes/sec\n\n\naacd1\n 512 # sectorsize\n 218307231744 # mediasize in bytes (203G)\n 426381312 # mediasize in sectors\n 26541 # Cylinders according to firmware.\n 255 # Heads according to firmware.\n 63 # Sectors according to firmware.\n\nSeek times:\n Full stroke: 250 iter in 0.856699 sec = 3.427 msec\n Half stroke: 250 iter in 1.475651 sec = 5.903 msec\n Quarter stroke: 500 iter in 2.693270 sec = 5.387 msec\n Short forward: 400 iter in 1.127831 sec = 2.820 msec\n Short backward: 400 iter in 1.216876 sec = 3.042 msec\n Seq outer: 2048 iter in 0.416340 sec = 0.203 msec\n Seq inner: 2048 iter in 0.436471 sec = 0.213 msec\nTransfer rates:\n outside: 102400 kbytes in 1.245798 sec = 82196 \nkbytes/sec\n middle: 102400 kbytes in 1.169033 sec = 87594 \nkbytes/sec\n inside: 102400 kbytes in 1.390840 sec = 73625 \nkbytes/sec\n\n\nAnd the LSI 320-2X card:\n\namrd0\n 512 # sectorsize\n 35999711232 # mediasize in bytes (34G)\n 70311936 # mediasize in sectors\n 4376 # Cylinders according to firmware.\n 255 # Heads according to firmware.\n 63 # Sectors according to firmware.\n\nSeek times:\n Full stroke: 250 iter in 0.737130 sec = 2.949 msec\n Half stroke: 250 iter in 0.694498 sec = 2.778 msec\n Quarter stroke: 500 iter in 2.040667 sec = 4.081 msec\n Short forward: 400 iter in 1.418592 sec = 3.546 msec\n Short backward: 400 iter in 0.896076 sec = 2.240 msec\n Seq outer: 2048 iter in 0.292390 sec = 0.143 msec\n Seq inner: 2048 iter in 0.300836 sec = 0.147 msec\nTransfer rates:\n outside: 102400 kbytes in 1.102025 sec = 92920 \nkbytes/sec\n middle: 102400 kbytes in 1.247608 sec = 82077 \nkbytes/sec\n inside: 102400 kbytes in 1.905603 sec = 53736 \nkbytes/sec\n\n\namrd1\n 512 # sectorsize\n 215998267392 # mediasize in bytes (201G)\n 421871616 # mediasize in sectors\n 26260 # Cylinders according to firmware.\n 255 # Heads according to firmware.\n 63 # Sectors according to firmware.\n\nSeek times:\n Full stroke: 250 iter in 0.741648 sec = 2.967 msec\n Half stroke: 250 iter in 1.021720 sec = 4.087 msec\n Quarter stroke: 500 iter in 2.220321 sec = 4.441 msec\n Short forward: 400 iter in 0.945948 sec = 2.365 msec\n Short backward: 400 iter in 1.036555 sec = 2.591 msec\n Seq outer: 2048 iter in 0.378911 sec = 0.185 msec\n Seq inner: 2048 iter in 0.457275 sec = 0.223 msec\nTransfer rates:\n outside: 102400 kbytes in 0.986572 sec = 103794 \nkbytes/sec\n middle: 102400 kbytes in 0.998528 sec = 102551 \nkbytes/sec\n inside: 102400 kbytes in 0.857322 sec = 119442 \nkbytes/sec\n\n\n",
"msg_date": "Mon, 20 Mar 2006 17:39:36 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec RAID 2200S"
},
{
"msg_contents": "Vivek Khera wrote:\n\n>>> If you do put on FreeBSD 6, I'd love to see the output of \"diskinfo \n>>> - v -t\" on your RAID volume(s).\n>>>\n>> Not directly related ...\n>> i have a HP dl380 g3 with array 5i controlled (1+0), these are my \n>> results\n>> [...]\n>> is this good enough?\n>\n>\n> Is that on a loaded box or a mostly quiet box? Those number seem \n> rather low for my tastes. For comparison, here are numbers from a \n> Dell 1850 with a built-in PERC 4e/Si RAID in a two disk mirror. All \n> numbers below are on mostly or totally quiet disk systems.\n\nMy numbers are on totally quiet box, i've just installed it.\n\n>\n> amrd0\n> 512 # sectorsize\n> 73274490880 # mediasize in bytes (68G)\n> 143114240 # mediasize in sectors\n> 8908 # Cylinders according to firmware.\n> 255 # Heads according to firmware.\n> 63 # Sectors according to firmware.\n>\n> Seek times:\n> Full stroke: 250 iter in 0.756718 sec = 3.027 msec\n> Half stroke: 250 iter in 0.717824 sec = 2.871 msec\n> Quarter stroke: 500 iter in 1.972368 sec = 3.945 msec\n> Short forward: 400 iter in 1.193179 sec = 2.983 msec\n> Short backward: 400 iter in 1.322440 sec = 3.306 msec\n> Seq outer: 2048 iter in 0.271402 sec = 0.133 msec\n> Seq inner: 2048 iter in 0.271151 sec = 0.132 msec\n> Transfer rates:\n> outside: 102400 kbytes in 1.080339 sec = 94785 \n> kbytes/sec\n> middle: 102400 kbytes in 1.166021 sec = 87820 \n> kbytes/sec\n> inside: 102400 kbytes in 1.461498 sec = 70065 \n> kbytes/sec\n>\n>\nUmm, in my box i see better seektimes but worst transfer rates, does it \nmake sense?\ni think i have something wrong, the question i cant answer is what \ntunning am i missing?\n\n---\nMiguel\n\n\n\n\n\n",
"msg_date": "Mon, 20 Mar 2006 17:04:10 -0600",
"msg_from": "Miguel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "\nOn Mar 20, 2006, at 6:04 PM, Miguel wrote:\n\n> Umm, in my box i see better seektimes but worst transfer rates, \n> does it make sense?\n> i think i have something wrong, the question i cant answer is what \n> tunning am i missing?\n\nWell, I forgot to mention I have 15k RPM disks, so the transfers \nshould be faster.\n\nI did no tuning to the disk configurations. I think your controller \nis either just not supported well in FreeBSD, or is bad in general...\n\nI *really* wish LSI would make a low profile card that would fit in a \nSun X4100... as it stands the only choice for dual channel cards is \nthe adaptec 2230SLP...\n\n",
"msg_date": "Mon, 20 Mar 2006 18:05:35 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec RAID 2200S"
},
{
"msg_contents": "Vivek Khera wrote:\n\n>\n> On Mar 20, 2006, at 6:04 PM, Miguel wrote:\n>\n>> Umm, in my box i see better seektimes but worst transfer rates, does \n>> it make sense?\n>> i think i have something wrong, the question i cant answer is what \n>> tunning am i missing?\n>\n>\n> Well, I forgot to mention I have 15k RPM disks, so the transfers \n> should be faster.\n>\n> I did no tuning to the disk configurations. I think your controller \n> is either just not supported well in FreeBSD, or is bad in general...\n\n:-(\n\nI guess you are right, i made a really bad choice, i better look at dell \nnext time,\nthanks\n\n---\nMiguel\n",
"msg_date": "Mon, 20 Mar 2006 17:15:01 -0600",
"msg_from": "Miguel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "\n\tThis is a 2-Disk Linux software RAID1 with 2 7200RPM IDE Drives, 1 PATA \nand 1 SATA :\n\napollo13 ~ # hdparm -t /dev/md0\n\n/dev/md0:\n Timing buffered disk reads: 156 MB in 3.02 seconds = 51.58 MB/sec\napollo13 ~ # hdparm -t /dev/md0\n\n/dev/md0:\n Timing buffered disk reads: 168 MB in 3.06 seconds = 54.87 MB/sec\n\n\tThis is a 5-Disk Linux software RAID5 with 4 7200RPM IDE Drives and 1 \n5400RPM, 3 SATA and 2 PATA:\n\napollo13 ~ # hdparm -t /dev/md2\n/dev/md2:\n Timing buffered disk reads: 348 MB in 3.17 seconds = 109.66 MB/sec\n\napollo13 ~ # hdparm -t /dev/md2\n/dev/md2:\n Timing buffered disk reads: 424 MB in 3.00 seconds = 141.21 MB/sec\n\napollo13 ~ # hdparm -t /dev/md2\n/dev/md2:\n Timing buffered disk reads: 426 MB in 3.00 seconds = 141.88 MB/sec\n\napollo13 ~ # hdparm -t /dev/md2\n/dev/md2:\n Timing buffered disk reads: 426 MB in 3.01 seconds = 141.64 MB/sec\n\n\n\tThe machine is a desktop Athlon 64 3000+, buggy nforce3 chipset, 1G \nDDR400, Gentoo Linux 2.6.15-ck4 running in 64 bit mode.\n\tThe bottleneck is the PCI bus.\n\n\tExpensive SCSI hardware RAID cards with expensive 10Krpm harddisks should \nnot get humiliated by such a simple (and cheap) setup. (I'm referring to \nthe 12-drive RAID10 mentioned before, not the other one which was a simple \n2-disk mirror). Toms hardware benchmarked some hardware RAIDs and got \nhumongous transfer rates... hm ?\n",
"msg_date": "Tue, 21 Mar 2006 00:27:04 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Mon, 2006-03-20 at 08:45, Jim C. Nasby wrote:\n> \n>>On Fri, Mar 17, 2006 at 05:00:34PM -0600, Scott Marlowe wrote:\n>>\n>>>>last pid: 5788; load averages: 0.32, 0.31, 0.28 up 127+15:16:08 13:59:24\n>>>>169 processes: 1 running, 168 sleeping\n>>>>CPU states: 5.4% user, 0.0% nice, 9.9% system, 0.0% interrupt, 84.7% idle\n>>>>Mem: 181M Active, 2632M Inact, 329M Wired, 179M Cache, 199M Buf, 81M Free\n>>>>Swap: 4096M Total, 216K Used, 4096M Free\n>>>>\n>>>> PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND\n>>>>14501 pgsql 2 0 254M 242M select 2 76:26 1.95% 1.95% postgre\n>>>> 5720 root 28 0 2164K 1360K CPU0 0 0:00 1.84% 0.88% top\n>>>> 5785 pgsql 2 0 255M 29296K sbwait 0 0:00 3.00% 0.15% postgre\n>>>> 5782 pgsql 2 0 255M 11900K sbwait 0 0:00 3.00% 0.15% postgre\n>>>> 5772 pgsql 2 0 255M 11708K sbwait 2 0:00 1.54% 0.15% postgre\n>>>\n>>>That doesn't look good. Is this machine freshly rebooted, or has it\n>>>been running postgres for a while? 179M cache and 199M buffer with 2.6\n>>>gig inactive is horrible for a machine running a 10gig databases.\n>>\n>>No, this is perfectly fine. Inactive memory in FreeBSD isn't the same as\n>>Free. It's the same as 'active' memory except that it's pages that\n>>haven't been accessed in X amount of time (between 100 and 200 ms, I\n>>think). When free memory starts getting low, FBSD will start moving\n>>pages from the inactive queue to the free queue (possibly resulting in\n>>writes to disk along the way).\n>>\n>>IIRC, Cache is the directory cache, and Buf is disk buffers, which is\n>>somewhat akin to shared_buffers in PostgreSQL.\n> \n> \n> So, then, the inact is pretty much the same as kernel buffers in linux?\n> \n\nI think Freebsd 'Inactive' corresponds pretty closely to Linux's \n'Inactive Dirty'|'Inactive Laundered'|'Inactive Free'.\n\n From what I can see, 'Buf' is a bit misleading e.g. read a 1G file \nrandomly and you increase 'Inactive' by about 1G - 'Buf' might get to \n200M. However read the file again and you'll see zero i/o in vmstat or \ngstat. From reading the Freebsd architecture docs, I think 'Buf' \nconsists of those pages from 'Inactive' or 'Active' that were last kvm \nmapped for read/write operations. However 'Buf' is restricted to a \nfairly small size (various sysctls), so really only provides a lower \nbound on the file buffer cache activity.\n\nSorry to not really answer your question Scott - how are Linux kernel \nbuffers actually defined?\n\nCheers\n\nMark\n",
"msg_date": "Tue, 21 Mar 2006 14:57:37 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "Mark Kirkwood wrote:\n> \n> I think Freebsd 'Inactive' corresponds pretty closely to Linux's \n> 'Inactive Dirty'|'Inactive Laundered'|'Inactive Free'.\n> \n\nHmmm - on second thoughts I think I've got that wrong :-(, since in \nLinux all the file buffer pages appear in 'Cached' don't they...\n\n(I also notice that 'Inactive Laundered' does not seem to be mentioned \nin vanilla - read non-Redhat - 2.6 kernels)\n\nSo I think its more correct to say Freebsd 'Inactive' is similar to \nLinux 'Inactive' + some|most of Linux 'Cached'.\n\nA good discussion of how the Freebsd vm works is here:\n\nhttp://www.freebsd.org/doc/en_US.ISO8859-1/books/arch-handbook/vm.html\n\nIn particular:\n\n\"FreeBSD reserves a limited amount of KVM to hold mappings from struct \nbufs, but it should be made clear that this KVM is used solely to hold \nmappings and does not limit the ability to cache data.\"\n\nCheers\n\nMark\n",
"msg_date": "Tue, 21 Mar 2006 15:51:35 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 03:51:35PM +1200, Mark Kirkwood wrote:\n> Mark Kirkwood wrote:\n> >\n> >I think Freebsd 'Inactive' corresponds pretty closely to Linux's \n> >'Inactive Dirty'|'Inactive Laundered'|'Inactive Free'.\n> >\n> \n> Hmmm - on second thoughts I think I've got that wrong :-(, since in \n> Linux all the file buffer pages appear in 'Cached' don't they...\n> \n> (I also notice that 'Inactive Laundered' does not seem to be mentioned \n> in vanilla - read non-Redhat - 2.6 kernels)\n> \n> So I think its more correct to say Freebsd 'Inactive' is similar to \n> Linux 'Inactive' + some|most of Linux 'Cached'.\n> \n> A good discussion of how the Freebsd vm works is here:\n> \n> http://www.freebsd.org/doc/en_US.ISO8859-1/books/arch-handbook/vm.html\n> \n> In particular:\n> \n> \"FreeBSD reserves a limited amount of KVM to hold mappings from struct \n> bufs, but it should be made clear that this KVM is used solely to hold \n> mappings and does not limit the ability to cache data.\"\n\nIt's worth noting that starting in either 2.4 or 2.6, linux pretty much\nadopted the FreeBSD VM system (or so I've been told).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 04:08:53 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 07:46:13PM +0000, Alex Hayward wrote:\n> On Mon, 20 Mar 2006, Jim C. Nasby wrote:\n> \n> > No, this is perfectly fine. Inactive memory in FreeBSD isn't the same as\n> > Free. It's the same as 'active' memory except that it's pages that\n> > haven't been accessed in X amount of time (between 100 and 200 ms, I\n> > think). When free memory starts getting low, FBSD will start moving\n> > pages from the inactive queue to the free queue (possibly resulting in\n> > writes to disk along the way).\n> >\n> > IIRC, Cache is the directory cache, and Buf is disk buffers, which is\n> > somewhat akin to shared_buffers in PostgreSQL.\n> \n> I don't believe that's true. I'm not an expert in FreeBSD's VM internals,\n> but this is how I believe it works:\n> \n> Active pages are pages currently mapped in to a process's address space.\n> \n> Inactive pages are pages which are marked dirty (must be written to\n> backing store before they can be freed) and which are not mapped in to a\n> process's address. They're still associated with a VM object of some kind\n\nActually, a page that is in the inactive queue *may* be dirty. In fact,\nif you start with a freshly booted system (or one that's been recently\nstarved of memory) and read in a large file, you'll see the inactive\nqueue grow even though the pages haven't been dirtied.\n\n> - like part of a process's virtual address space or a as part of the cache\n> for a file on disk. If it's still part of a process's virtual address\n> space and is accessed a fault is generated. The page is then put back in\n> to the address mappings.\n> \n> Cached pages are like inactive pages but aren't dirty. Then can be either\n> re-mapped or freed immediately.\n> \n> Free pages are properly free. Wired pages are unswappable. Buf I'm not\n> sure about. It doesn't represent that amount of memory used to cache files\n> on disk, I'm sure of that. The sysctl -d description is 'KVA memory used\n> for bufs', so I suspect that it's the amount of kernel virtual address\n> space mapped to pages in the 'active', 'inactive' and 'cache' queues.\n> \n> -- \n> Alex Hayward\n> Seatbooker\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 04:23:36 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 02:15:22PM -0500, Vivek Khera wrote:\n> I think FreeBSD has a hard upper limit on the total ram it will use \n> for disk cache. I haven't been able to get reliable, irrefutable, \n> answers about it, though.\n\nIt does not. Any memory in the inactive queue is effectively your 'disk\ncache'. Pages start out in the active queue, and if they aren't used\nfairly frequently they will move into the inactive queue. From there\nthey will be moved to the cache queue, but only if the cache queue falls\nbelow a certain threshold, because in order to go into the cache queue\nthe page must be marked clean, possibly incurring a write to disk. AFAIK\npages only go into the free queue if they have been completely released\nby all objects that were referencing them, so it's theoretically\nposisble for that queue to go to 0.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 04:32:19 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec RAID 2200S"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Mon, Mar 20, 2006 at 02:15:22PM -0500, Vivek Khera wrote:\n> \n>>I think FreeBSD has a hard upper limit on the total ram it will use \n>>for disk cache. I haven't been able to get reliable, irrefutable, \n>>answers about it, though.\n> \n> \n> It does not. Any memory in the inactive queue is effectively your 'disk\n> cache'. Pages start out in the active queue, and if they aren't used\n> fairly frequently they will move into the inactive queue. From there\n> they will be moved to the cache queue, but only if the cache queue falls\n> below a certain threshold, because in order to go into the cache queue\n> the page must be marked clean, possibly incurring a write to disk. AFAIK\n> pages only go into the free queue if they have been completely released\n> by all objects that were referencing them, so it's theoretically\n> posisble for that queue to go to 0.\n\nExactly.\n\nThe so-called limit (controllable via various sysctl's) is on the amount \nof memory used for kvm mapped pages, not cached pages, i.e - its a \nsubset of the cached pages that are set up for immediate access (the \nothers require merely to be shifted from the 'Inactive' queue to this \none before they can be operated on - a relatively cheap operation).\n\nSo its really all about accounting, in a sense - whether pages end up in \nthe 'Buf' or 'Inactive' queue, they are still cached!\n\nCheers\n\nMark\n",
"msg_date": "Tue, 21 Mar 2006 23:03:26 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
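For anyone who wants to see the raw counters behind top's Active/Inact/Cache/Wired/Free and Buf figures, FreeBSD exposes them as sysctls. A sketch only; the exact OIDs can differ a little between releases.

    # Page-queue sizes, in pages (multiply by the page size for bytes):
    sysctl vm.stats.vm.v_page_size
    sysctl vm.stats.vm.v_active_count vm.stats.vm.v_inactive_count \
           vm.stats.vm.v_cache_count vm.stats.vm.v_wire_count \
           vm.stats.vm.v_free_count
    # Kernel VA currently used for buffer mappings (the 'Buf' line in top):
    sysctl vfs.bufspace vfs.maxbufspace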
{
"msg_contents": "On Tue, Mar 21, 2006 at 11:03:26PM +1200, Mark Kirkwood wrote:\n> Jim C. Nasby wrote:\n> >On Mon, Mar 20, 2006 at 02:15:22PM -0500, Vivek Khera wrote:\n> >\n> >>I think FreeBSD has a hard upper limit on the total ram it will use \n> >>for disk cache. I haven't been able to get reliable, irrefutable, \n> >>answers about it, though.\n> >\n> >\n> >It does not. Any memory in the inactive queue is effectively your 'disk\n> >cache'. Pages start out in the active queue, and if they aren't used\n> >fairly frequently they will move into the inactive queue. From there\n> >they will be moved to the cache queue, but only if the cache queue falls\n> >below a certain threshold, because in order to go into the cache queue\n> >the page must be marked clean, possibly incurring a write to disk. AFAIK\n> >pages only go into the free queue if they have been completely released\n> >by all objects that were referencing them, so it's theoretically\n> >posisble for that queue to go to 0.\n> \n> Exactly.\n> \n> The so-called limit (controllable via various sysctl's) is on the amount \n> of memory used for kvm mapped pages, not cached pages, i.e - its a \n> subset of the cached pages that are set up for immediate access (the \n> others require merely to be shifted from the 'Inactive' queue to this \n> one before they can be operated on - a relatively cheap operation).\n> \n> So its really all about accounting, in a sense - whether pages end up in \n> the 'Buf' or 'Inactive' queue, they are still cached!\n\nSo what's the difference between Buf and Active then? Just that active\nmeans it's a code page, or that it's been directly mapped into a\nprocesses memory (perhaps via mmap)?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 05:46:59 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 01:27:56PM -0800, Luke Lonergan wrote:\n> >> Transfer rates:\n> >> outside: 102400 kbytes in 2.075984 sec = 49326 kbytes/sec\n> >> middle: 102400 kbytes in 2.100510 sec = 48750 kbytes/sec\n> >> inside: 102400 kbytes in 2.042313 sec = 50139 kbytes/sec\n> >> \n> > I have 6 ultra a320 72G 10k discs\n> \n> Yah - ouch. With 6 drives in a RAID10, you should expect 3 drives worth of\n> sequential scan performance, or anywhere from 100MB/s to 180MB/s. You're\n> getting from half to 1/3 of the performance you'd get with a decent raid\n> controller.\n> \n> If you add a simple SCSI adapter like the common LSI U320 adapter to your\n> DL380G3 and then run software RAID, you will get more than 150MB/s with less\n> CPU consumption. I'd also expect you'd get down to about 2ms access times.\n\nFWIW, here's my dirt-simple workstation, with 2 segate SATA drives setup\nas a mirror using software (first the mirror, then one of the raw\ndrives):\n\[email protected][5:43]~:15>sudo diskinfo -vt /dev/mirror/gm0\nPassword:\n/dev/mirror/gm0\n 512 # sectorsize\n 300069051904 # mediasize in bytes (279G)\n 586072367 # mediasize in sectors\n\nSeek times:\n Full stroke: 250 iter in 1.416409 sec = 5.666 msec\n Half stroke: 250 iter in 1.404503 sec = 5.618 msec\n Quarter stroke: 500 iter in 2.887344 sec = 5.775 msec\n Short forward: 400 iter in 2.101949 sec = 5.255 msec\n Short backward: 400 iter in 2.373578 sec = 5.934 msec\n Seq outer: 2048 iter in 0.209539 sec = 0.102 msec\n Seq inner: 2048 iter in 0.347499 sec = 0.170 msec\nTransfer rates:\n outside: 102400 kbytes in 3.183924 sec = 32162 kbytes/sec\n middle: 102400 kbytes in 3.216232 sec = 31838 kbytes/sec\n inside: 102400 kbytes in 4.242779 sec = 24135 kbytes/sec\n\[email protected][5:43]~:16>sudo diskinfo -vt /dev/ad4\n/dev/ad4\n 512 # sectorsize\n 300069052416 # mediasize in bytes (279G)\n 586072368 # mediasize in sectors\n 581421 # Cylinders according to firmware.\n 16 # Heads according to firmware.\n 63 # Sectors according to firmware.\n\nSeek times:\n Full stroke: 250 iter in 5.835744 sec = 23.343 msec\n Half stroke: 250 iter in 4.364424 sec = 17.458 msec\n Quarter stroke: 500 iter in 6.981597 sec = 13.963 msec\n Short forward: 400 iter in 2.157210 sec = 5.393 msec\n Short backward: 400 iter in 2.330445 sec = 5.826 msec\n Seq outer: 2048 iter in 0.181176 sec = 0.088 msec\n Seq inner: 2048 iter in 0.198974 sec = 0.097 msec\nTransfer rates:\n outside: 102400 kbytes in 1.715810 sec = 59680 kbytes/sec\n middle: 102400 kbytes in 1.937027 sec = 52865 kbytes/sec\n inside: 102400 kbytes in 3.260515 sec = 31406 kbytes/sec\n\nNo, I don't know why the transfer rates for the mirror are 1/2 that as the raw\ndevice. :(\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 05:49:37 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
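One thing that may be worth ruling out here, offered as a guess rather than a diagnosis: gmirror's read-balancing algorithm affects single-stream sequential reads. The device name gm0 is taken from the message above; the rest is a sketch based on gmirror(8).

    # Show the mirror's current Balance setting and component state:
    gmirror list gm0
    # Try another balance algorithm (load, round-robin, split, prefer):
    gmirror configure -b load gm0
    # ...and rerun the benchmark:
    diskinfo -v -t /dev/mirror/gm0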
{
"msg_contents": "On Tue, 21 Mar 2006, Jim C. Nasby wrote:\n\n> On Tue, Mar 21, 2006 at 11:03:26PM +1200, Mark Kirkwood wrote:\n> >\n> > So its really all about accounting, in a sense - whether pages end up in\n> > the 'Buf' or 'Inactive' queue, they are still cached!\n>\n> So what's the difference between Buf and Active then? Just that active\n> means it's a code page, or that it's been directly mapped into a\n> processes memory (perhaps via mmap)?\n\nI don't think that Buf and Active are mutually exclusive. Try adding up\nActive, Inactive, Cache, Wired, Buf and Free - it'll come to more than\nyour physical memory.\n\nActive gives an amount of physical memory. Buf gives an amount of\nkernel-space virtual memory which provide the kernel with a window on to\npages in the other categories. In fact, I don't think that 'Buf' really\nbelongs in the list as it doesn't represent a 'type' of page at all.\n\n-- \n Alex Hayward\n Seatbooker\n",
"msg_date": "Tue, 21 Mar 2006 12:22:31 +0000 (GMT)",
"msg_from": "Alex Hayward <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 12:22:31PM +0000, Alex Hayward wrote:\n> On Tue, 21 Mar 2006, Jim C. Nasby wrote:\n> \n> > On Tue, Mar 21, 2006 at 11:03:26PM +1200, Mark Kirkwood wrote:\n> > >\n> > > So its really all about accounting, in a sense - whether pages end up in\n> > > the 'Buf' or 'Inactive' queue, they are still cached!\n> >\n> > So what's the difference between Buf and Active then? Just that active\n> > means it's a code page, or that it's been directly mapped into a\n> > processes memory (perhaps via mmap)?\n> \n> I don't think that Buf and Active are mutually exclusive. Try adding up\n> Active, Inactive, Cache, Wired, Buf and Free - it'll come to more than\n> your physical memory.\n> \n> Active gives an amount of physical memory. Buf gives an amount of\n> kernel-space virtual memory which provide the kernel with a window on to\n> pages in the other categories. In fact, I don't think that 'Buf' really\n> belongs in the list as it doesn't represent a 'type' of page at all.\n\nAhhh, I get it... a KVM (what's that stand for anyway?) is required any\ntime the kernel wants to access a page that doesn't belong to it, right?\n\nAnd actually, I just checked 4 machines and adding all the queues plus\nbuf together didn't add up to total memory except on one of them (there\nadding just the queues came close; 1507.6MB on a 1.5GB machine).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 06:34:17 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "Jim,\n\nOn 3/21/06 3:49 AM, \"Jim C. Nasby\" <[email protected]> wrote:\n\n> No, I don't know why the transfer rates for the mirror are 1/2 that as the raw\n> device. :(\n\nWell - lessee. Would those drives be attached to a Silicon Image (SII) SATA\ncontroller? A Highpoint?\n\nI found in testing about 2 years ago that under Linux (looks like you're\nBSD), most SATA controllers other than the Intel PIIX are horribly broken\nfrom a performance standpoint, probably due to bad drivers but I'm not sure.\n\nNow I think whatever is commonly used by Nforce 4 implementations seems to\nwork ok, but we don't count on them for RAID configurations yet.\n\n- Luke\n\n\n",
"msg_date": "Tue, 21 Mar 2006 07:25:07 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "\nOn Mar 20, 2006, at 6:27 PM, PFC wrote:\n\n> \tExpensive SCSI hardware RAID cards with expensive 10Krpm harddisks \n> should not get humiliated by such a simple (and cheap) setup. (I'm \n> referring to the 12-drive RAID10 mentioned before, not the other \n> one which was a simple 2-disk mirror). Toms hardware benchmarked \n> some hardware RAIDs and got humongous transfer rates... hm ?\n>\n\nI'll put up my \"slow\" 12 disk SCSI array up against your IDE array on \na large parallel load any day.\n\n",
"msg_date": "Tue, 21 Mar 2006 10:33:32 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
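For anyone who wants numbers behind the "large parallel load" comparison, a crude sketch: run several sequential readers at once and compare the aggregate throughput to the single-stream figure. The file names are placeholders, and the files should be large enough that the OS cache does not dominate the result.

    # Assuming big1..big4 are pre-created multi-GB files on the array under test:
    time sh -c 'for f in big1 big2 big3 big4; do
                  dd if=$f of=/dev/null bs=8k &
                done; wait'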
{
"msg_contents": "\nOn Mar 21, 2006, at 6:03 AM, Mark Kirkwood wrote:\n\n> The so-called limit (controllable via various sysctl's) is on the \n> amount of memory used for kvm mapped pages, not cached pages, i.e - \n> its a subset of the cached pages that are set up for immediate \n> access (the\n\nThanks... now that makes sense to me.\n\n\n",
"msg_date": "Tue, 21 Mar 2006 10:34:16 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "\n \n> [email protected][5:43]~:15>sudo diskinfo -vt /dev/mirror/gm0\n\nCan anyone point me to where I can find diskinfo or an equivalent to run on\nmy debian system, I have been googling for the last hour but can't find it!\nI would like to analyse my own disk setup for comparison\n\nThanks for any help\n\nAdam\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n",
"msg_date": "Tue, 21 Mar 2006 15:40:25 +0000",
"msg_from": "Adam Witney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 07:25:07AM -0800, Luke Lonergan wrote:\n> Jim,\n> \n> On 3/21/06 3:49 AM, \"Jim C. Nasby\" <[email protected]> wrote:\n> \n> > No, I don't know why the transfer rates for the mirror are 1/2 that as the raw\n> > device. :(\n> \n> Well - lessee. Would those drives be attached to a Silicon Image (SII) SATA\n> controller? A Highpoint?\n> \n> I found in testing about 2 years ago that under Linux (looks like you're\n> BSD), most SATA controllers other than the Intel PIIX are horribly broken\n> from a performance standpoint, probably due to bad drivers but I'm not sure.\n> \n> Now I think whatever is commonly used by Nforce 4 implementations seems to\n> work ok, but we don't count on them for RAID configurations yet.\n\natapci1: <nVidia nForce4 SATA150 controller>\n\nAnd note that this is using FreeBSD gmirror, not the built-in raid\ncontroller.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 11:59:01 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "\n\n>> \tExpensive SCSI hardware RAID cards with expensive 10Krpm harddisks \n>> should not get humiliated by such a simple (and cheap) setup. (I'm \n>> referring to the 12-drive RAID10 mentioned before, not the other one \n>> which was a simple 2-disk mirror). Toms hardware benchmarked some \n>> hardware RAIDs and got humongous transfer rates... hm ?\n>>\n>\n> I'll put up my \"slow\" 12 disk SCSI array up against your IDE array on a \n> large parallel load any day.\n\n\tSure, and I have no doubt that yours will be immensely faster on parallel \nloads than mine, but still, it should also be the case on sequential \nscan... especially since I have desktop PCI and the original poster has a \nreal server with PCI-X I think.\n",
"msg_date": "Tue, 21 Mar 2006 20:04:54 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "Adam Witney wrote:\n> \n> \n>>[email protected][5:43]~:15>sudo diskinfo -vt /dev/mirror/gm0\n> \n> \n> Can anyone point me to where I can find diskinfo or an equivalent to run on\n> my debian system, I have been googling for the last hour but can't find it!\n> I would like to analyse my own disk setup for comparison\n> \n\nI guess you could use hdparm (-t or -T flags do a simple benchmark).\n\nThough iozone or bonnie++ are probably better.\n\n\nCheers\n\nMark\n\n",
"msg_date": "Wed, 22 Mar 2006 07:14:00 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
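Spelled out for the Debian case, with placeholder device and scratch paths; bonnie++'s working-set size should be at least twice RAM so the figures are not just cache speed.

    # Quick buffered-read and cached-read figures:
    hdparm -tT /dev/md0
    # A fuller benchmark; -s is the file size in MB, -u the user to run as:
    mkdir -p /var/tmp/bench
    bonnie++ -d /var/tmp/bench -s 4096 -u nobody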
{
"msg_contents": "On Wed, 22 Mar 2006, Mark Kirkwood wrote:\n\n> Adam Witney wrote:\n>> \n>>> [email protected][5:43]~:15>sudo diskinfo -vt /dev/mirror/gm0\n>> \n>> Can anyone point me to where I can find diskinfo or an equivalent to run on\n>> my debian system, I have been googling for the last hour but can't find it!\n>> I would like to analyse my own disk setup for comparison\n>\n> I guess you could use hdparm (-t or -T flags do a simple benchmark).\n>\n> Though iozone or bonnie++ are probably better.\n\nYou might also have a look at lmdd for sequential read/write performance from \nthe lmbench suite: http://sourceforge.net/projects/lmbench\n\nAs numbers from lmdd are seen on this frequently.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Tue, 21 Mar 2006 11:22:22 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
},
{
"msg_contents": "\nOn Mar 21, 2006, at 2:04 PM, PFC wrote:\n\n> especially since I have desktop PCI and the original poster has a \n> real server with PCI-X I think.\n\nthat was me :-)\n\nbut yeah, I never seem to get full line speed for some reason. i \ndon't know if it is because of inadequate measurement tools or what...\n\n",
"msg_date": "Tue, 21 Mar 2006 15:32:01 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB & Adaptec"
},
{
"msg_contents": "\nOn Mar 21, 2006, at 12:59 PM, Jim C. Nasby wrote:\n\n> atapci1: <nVidia nForce4 SATA150 controller>\n>\n> And note that this is using FreeBSD gmirror, not the built-in raid\n> controller.\n\nI get similar counter-intuitive slowdown with gmirror SATA disks on \nan IBM e326m I'm evaluating. If/when I buy one I'll get the onboard \nSCSI RAID instead.\n\nThe IBM uses ServerWorks chipset, which shows up to freebsd 6.0 as \n\"generic ATA\" and only does UDMA33 transfers.\n\n",
"msg_date": "Tue, 21 Mar 2006 21:22:52 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
}
] |
[
{
"msg_contents": "Hi,\nI have enabled the autovacuum daemon, but occasionally still get a\nmessage telling me I need to run vacuum when I access a table in\npgadmin. Is this normal? Should I use scripts instead of the daemon?\nWould posting config options make this a much more sensible question?\nCheers\nAntoine\n\n--\nThis is where I should put some witty comment.\n",
"msg_date": "Sat, 18 Mar 2006 13:01:24 +0100",
"msg_from": "Antoine <[email protected]>",
"msg_from_op": true,
"msg_subject": "n00b autovacuum question"
},
{
"msg_contents": "More detail please. It sounds like you running 8.1 and talking about \nthe integrated autovacuum is that correct? Also, what is the message \nspecifically from pgadmin?\n\nMatt\n\nAntoine wrote:\n> Hi,\n> I have enabled the autovacuum daemon, but occasionally still get a\n> message telling me I need to run vacuum when I access a table in\n> pgadmin. Is this normal? Should I use scripts instead of the daemon?\n> Would posting config options make this a much more sensible question?\n> Cheers\n> Antoine\n> \n> --\n> This is where I should put some witty comment.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n",
"msg_date": "Sat, 18 Mar 2006 10:54:14 -0500",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: n00b autovacuum question"
},
{
"msg_contents": "Antoine wrote:\n> Hi,\n> I have enabled the autovacuum daemon, but occasionally still get a\n> message telling me I need to run vacuum when I access a table in\n> pgadmin.\n\npgAdmin notices a discrepancy between real rowcount and estimated \nrowcount and thus suggests to run vacuum/analyze; it won't examine \nautovacuum rules so it might warn although autovac is running ok.\n\nIf you're sure autovacuum is running fine, just dismiss the message. \nIt's a hint for newbies that do *not* run vacuum because they don't know \nwhat it's good for.\n\nRegards\nAndreas\n",
"msg_date": "Sat, 18 Mar 2006 18:46:00 +0100",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: n00b autovacuum question"
},
{
"msg_contents": "On 18/03/06, Andreas Pflug <[email protected]> wrote:\n> Antoine wrote:\n> > Hi,\n> > I have enabled the autovacuum daemon, but occasionally still get a\n> > message telling me I need to run vacuum when I access a table in\n> > pgadmin.\n>\n> pgAdmin notices a discrepancy between real rowcount and estimated\n> rowcount and thus suggests to run vacuum/analyze; it won't examine\n> autovacuum rules so it might warn although autovac is running ok.\n>\n> If you're sure autovacuum is running fine, just dismiss the message.\n\nI guess that is my problem - I a not sure it is running fine. The\nprocess is definitely running but I am getting lots of complaints\nabout performance. This probably has lots to do with crap code and not\nmuch to do with the database but I am still searching the maintenance\navenue... We have a massive project coming up and I want to go for\nPostgres (the boss wants Oracle). If I can't get my stuff together I\nam not sure my arguments will stick... problem is I don't really have\nthe time to experiment properly.\nCheers\nAntoine\n\n--\nThis is where I should put some witty comment.\n",
"msg_date": "Sun, 19 Mar 2006 13:27:29 +0100",
"msg_from": "Antoine <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: n00b autovacuum question"
},
{
"msg_contents": "Antoine wrote:\n> On 18/03/06, Andreas Pflug <[email protected]> wrote:\n> \n>>Antoine wrote:\n>>\n>>>Hi,\n>>>I have enabled the autovacuum daemon, but occasionally still get a\n>>>message telling me I need to run vacuum when I access a table in\n>>>pgadmin.\n\nBring up the postgresql.conf editor on that server, and watch if pgadmin \ncomplains.\n\nRegards,\nAndreas\n",
"msg_date": "Sun, 19 Mar 2006 15:47:32 +0100",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: n00b autovacuum question"
},
{
"msg_contents": "On 19/03/06, Andreas Pflug <[email protected]> wrote:\n> Antoine wrote:\n> > On 18/03/06, Andreas Pflug <[email protected]> wrote:\n> >\n> >>Antoine wrote:\n> >>\n> >>>Hi,\n> >>>I have enabled the autovacuum daemon, but occasionally still get a\n> >>>message telling me I need to run vacuum when I access a table in\n> >>>pgadmin.\n>\n> Bring up the postgresql.conf editor on that server, and watch if pgadmin\n> complains.\n\nHi,\nI am not sure I understand what \"bring up\" means. Could you explain?\nThanks\nAntoine\n\n--\nThis is where I should put some witty comment.\n",
"msg_date": "Sun, 19 Mar 2006 16:34:39 +0100",
"msg_from": "Antoine <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: n00b autovacuum question"
}
] |
[
{
"msg_contents": "Hello,\n\nDoes anybody know how to build a database model to include sizes for rings, tshirts, etc?\n\n\nthe current database is built like:\n\ntable product\n=========\n\nproductid int8 PK\nproductname charvar(255)\nquantity int4\n\n\nwhat i want now is that WHEN (not all products have multiple sizes) there are multiple sizes available. The sizes are stored into the database. I was wondering to include a extra table:\n\ntable sizes:\n========\nproductid int8 FK\nsize varchar(100)\n\n\nbut then i have a quantity problem. Because now not all size quantities can be stored into this table, because it allready exist in my product table.\n\nHow do professionals do it? How do they make their model to include sizes if any available?\n\n\n\n\n\n\nHello,\n \nDoes anybody know how to build a database model to \ninclude sizes for rings, tshirts, etc?\n \n \nthe current database is built like:\n \ntable product\n=========\n \nproductid int8 PK\nproductname charvar(255)\nquantity int4\n \n \nwhat i want now is that WHEN (not all products have \nmultiple sizes) there are multiple sizes available. The sizes are stored into \nthe database. I was wondering to include a extra table:\n \ntable sizes:\n========\nproductid int8 FK\nsize varchar(100)\n \n \nbut then i have a quantity problem. Because now not \nall size quantities can be stored into this table, because it allready exist in \nmy product table.\n \nHow do professionals do it? How do they make their \nmodel to include sizes if any available?",
"msg_date": "Sat, 18 Mar 2006 16:03:59 +0100",
"msg_from": "\"NbForYou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "database model tshirt sizes"
},
{
"msg_contents": "We have size and color in the product table itself. It is really an\nattribute of the product. If you update the availability of the product\noften, I would split out the quantity into a separate table so that you can\ntruncate and update as needed.\n\nPatrick Hatcher\nDevelopment Manager Analytics/MIO\nMacys.com\n\n\n \n \"NbForYou\" \n <nbforyou@hotmail \n .com> To \n Sent by: <[email protected]> \n pgsql-performance cc \n -owner@postgresql \n .org Subject \n [PERFORM] database model tshirt \n sizes \n 03/18/06 07:03 AM \n \n \n \n \n \n\n\n\n\nHello,\n\nDoes anybody know how to build a database model to include sizes for rings,\ntshirts, etc?\n\n\nthe current database is built like:\n\ntable product\n=========\n\nproductid int8 PK\nproductname charvar(255)\nquantity int4\n\n\nwhat i want now is that WHEN (not all products have multiple sizes) there\nare multiple sizes available. The sizes are stored into the database. I was\nwondering to include a extra table:\n\ntable sizes:\n========\nproductid int8 FK\nsize varchar(100)\n\n\nbut then i have a quantity problem. Because now not all size quantities can\nbe stored into this table, because it allready exist in my product table.\n\nHow do professionals do it? How do they make their model to include sizes\nif any available?\n\n",
"msg_date": "Sun, 19 Mar 2006 05:59:35 -0800",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database model tshirt sizes"
},
{
"msg_contents": "another approach would be:\n\ntable product:\n> productid int8 PK\n> productname charvar(255)\n\ntable versions\n> productid int8 FK\n> versionid int8 PK\n> size\n> color\n> ...\n> quantity int4\n\nan example would be then:\n\ntable product:\n- productid: 123, productname: 'nice cotton t-shirt'\n- productid: 442, productname: 'another cotton t-shirt'\n\ntable versions:\n- productid: 123, versionid: 1, color: 'black', size: 'all', quantity: 11\n- productid: 442, versionid: 2, color: 'yellow', size: 'l', quantity: 1\n- productid: 442, versionid: 2, color: 'yellow', size: 's', quantity: 4\n- productid: 442, versionid: 2, color: 'red', size: 'xl', quantity: 9\n- productid: 442, versionid: 2, color: 'blue', size: 's', quantity: 0\n\n\nthat way you can have more than 1 quantity / color / size combination per \nproduct and still have products that come in one size. so instead of only \nusing a 2nd table for cases where more than one size is available, you would \nalways use a 2nd table. this probably reduces your code complexity quite a \nbit and only needs 1 JOIN.\n\n- thomas\n\n\n\n----- Original Message ----- \nFrom: \"Patrick Hatcher\" <[email protected]>\nTo: \"NbForYou\" <[email protected]>\nCc: <[email protected]>; \n<[email protected]>\nSent: Sunday, March 19, 2006 2:59 PM\nSubject: Re: [PERFORM] database model tshirt sizes\n\n\n> We have size and color in the product table itself. It is really an\n> attribute of the product. If you update the availability of the product\n> often, I would split out the quantity into a separate table so that you \n> can\n> truncate and update as needed.\n>\n> Patrick Hatcher\n> Development Manager Analytics/MIO\n> Macys.com\n>\n>\n>\n> \"NbForYou\"\n> <nbforyou@hotmail\n> .com> To\n> Sent by: <[email protected]>\n> pgsql-performance cc\n> -owner@postgresql\n> .org Subject\n> [PERFORM] database model tshirt\n> sizes\n> 03/18/06 07:03 AM\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> Hello,\n>\n> Does anybody know how to build a database model to include sizes for \n> rings,\n> tshirts, etc?\n>\n>\n> the current database is built like:\n>\n> table product\n> =========\n>\n> productid int8 PK\n> productname charvar(255)\n> quantity int4\n>\n>\n> what i want now is that WHEN (not all products have multiple sizes) there\n> are multiple sizes available. The sizes are stored into the database. I \n> was\n> wondering to include a extra table:\n>\n> table sizes:\n> ========\n> productid int8 FK\n> size varchar(100)\n>\n>\n> but then i have a quantity problem. Because now not all size quantities \n> can\n> be stored into this table, because it allready exist in my product table.\n>\n> How do professionals do it? How do they make their model to include sizes\n> if any available?\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n",
"msg_date": "Sun, 19 Mar 2006 15:37:53 +0100",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: database model tshirt sizes"
},
{
"msg_contents": "So a default value for all products would be size:\"all\"\n\nfor example, the same tshirt shop also sells cdroms.\n\nIt size attribute would be to place it to be :\"all\". (because we cannot \nplace an uniqe index on null values)\n\nBut the industry evolves and so in time the same cdrom is now available for \npc and playstation.\n\nSo i would like to have it as 1 productid but with different attributes: pc \n(with quantity 5) and playstation (with quantity 3).\nSo when I do an insert for this 2 products with 1 productid it would be \nlike:\n\ninsert into versions (productid,size,quantity) values (345,'pc',5);\ninsert into versions (productid,size,quantity) values (345,'playstation',3);\n\nif however the product existed we get an error:\n\nbecause the default value version \"all\" did also exist and is now obsolete\n\npopulation versions:\n================\n\nproductid: 123, versionid: 1, color: 'black', size: 'all', quantity: 11\nproductid: 442, versionid: 2, color: 'yellow', size: 'l', quantity: 1\nproductid: 442, versionid: 2, color: 'yellow', size: 's', quantity: 4\nproductid: 442, versionid: 2, color: 'red', size: 'xl', quantity: 9\nproductid: 442, versionid: 2, color: 'blue', size: 's', quantity: 0\nproductid: 345, versionid: 3, color: null, size: 'all', quantity: 15\nproductid: 345, versionid: 3, color: null, size: 'pc', quantity: 5\nproductid: 345, versionid: 3, color: null, size: 'playstation', quantity: 3\n\nWOULD HAVE TO BE:\n\npopulation versions:\n================\n\nproductid: 123, versionid: 1, color: 'black', size: 'all', quantity: 11\nproductid: 442, versionid: 2, color: 'yellow', size: 'l', quantity: 1\nproductid: 442, versionid: 2, color: 'yellow', size: 's', quantity: 4\nproductid: 442, versionid: 2, color: 'red', size: 'xl', quantity: 9\nproductid: 442, versionid: 2, color: 'blue', size: 's', quantity: 0\nproductid: 345, versionid: 3, color: null, size: 'pc', quantity: 5\nproductid: 345, versionid: 3, color: null, size: 'playstation', quantity: 3\n\nALSO:\n\nwhat is versionid used for?\n\n\n----- Original Message ----- \nFrom: <[email protected]>\nTo: \"NbForYou\" <[email protected]>\nCc: <[email protected]>; \n<[email protected]>\nSent: Sunday, March 19, 2006 3:37 PM\nSubject: Re: [PERFORM] database model tshirt sizes\n\n\n> another approach would be:\n>\n> table product:\n>> productid int8 PK\n>> productname charvar(255)\n>\n> table versions\n>> productid int8 FK\n>> versionid int8 PK\n>> size\n>> color\n>> ...\n>> quantity int4\n>\n> an example would be then:\n>\n> table product:\n> - productid: 123, productname: 'nice cotton t-shirt'\n> - productid: 442, productname: 'another cotton t-shirt'\n>\n> table versions:\n> - productid: 123, versionid: 1, color: 'black', size: 'all', quantity: 11\n> - productid: 442, versionid: 2, color: 'yellow', size: 'l', quantity: 1\n> - productid: 442, versionid: 2, color: 'yellow', size: 's', quantity: 4\n> - productid: 442, versionid: 2, color: 'red', size: 'xl', quantity: 9\n> - productid: 442, versionid: 2, color: 'blue', size: 's', quantity: 0\n>\n>\n> that way you can have more than 1 quantity / color / size combination per \n> product and still have products that come in one size. so instead of only \n> using a 2nd table for cases where more than one size is available, you \n> would always use a 2nd table. 
this probably reduces your code complexity \n> quite a bit and only needs 1 JOIN.\n>\n> - thomas\n>\n>\n>\n> ----- Original Message ----- \n> From: \"Patrick Hatcher\" <[email protected]>\n> To: \"NbForYou\" <[email protected]>\n> Cc: <[email protected]>; \n> <[email protected]>\n> Sent: Sunday, March 19, 2006 2:59 PM\n> Subject: Re: [PERFORM] database model tshirt sizes\n>\n>\n>> We have size and color in the product table itself. It is really an\n>> attribute of the product. If you update the availability of the product\n>> often, I would split out the quantity into a separate table so that you \n>> can\n>> truncate and update as needed.\n>>\n>> Patrick Hatcher\n>> Development Manager Analytics/MIO\n>> Macys.com\n>>\n>>\n>>\n>> \"NbForYou\"\n>> <nbforyou@hotmail\n>> .com> To\n>> Sent by: <[email protected]>\n>> pgsql-performance cc\n>> -owner@postgresql\n>> .org Subject\n>> [PERFORM] database model tshirt\n>> sizes\n>> 03/18/06 07:03 AM\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> Hello,\n>>\n>> Does anybody know how to build a database model to include sizes for \n>> rings,\n>> tshirts, etc?\n>>\n>>\n>> the current database is built like:\n>>\n>> table product\n>> =========\n>>\n>> productid int8 PK\n>> productname charvar(255)\n>> quantity int4\n>>\n>>\n>> what i want now is that WHEN (not all products have multiple sizes) there\n>> are multiple sizes available. The sizes are stored into the database. I \n>> was\n>> wondering to include a extra table:\n>>\n>> table sizes:\n>> ========\n>> productid int8 FK\n>> size varchar(100)\n>>\n>>\n>> but then i have a quantity problem. Because now not all size quantities \n>> can\n>> be stored into this table, because it allready exist in my product table.\n>>\n>> How do professionals do it? How do they make their model to include sizes\n>> if any available?\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n",
"msg_date": "Sun, 19 Mar 2006 18:43:53 +0100",
"msg_from": "\"NbForYou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: database model tshirt sizes"
}
] |
[
{
"msg_contents": "Hi,\nIs there any work on the cards for implementing other partitioning\nstrategies? I see mysql 5.1 will have support for hashes and stuff but\ndidn't see anything in the todos for postgres.\nCheers\nAntoine\n--\nThis is where I should put some witty comment.\n",
"msg_date": "Sun, 19 Mar 2006 13:31:42 +0100",
"msg_from": "Antoine <[email protected]>",
"msg_from_op": true,
"msg_subject": "partitioning"
},
{
"msg_contents": "On Sun, Mar 19, 2006 at 01:31:42PM +0100, Antoine wrote:\n> Hi,\n> Is there any work on the cards for implementing other partitioning\n> strategies? I see mysql 5.1 will have support for hashes and stuff but\n> didn't see anything in the todos for postgres.\n\nYou'd have to provide a pretty convincing argument for providing hash\npartitioning I think. I can't really think of any real-world scenarios\nwhere it's better than other forms.\n\nIn any case, the next logical step on the partitioning front is to add\nsome 'syntactic sugar' to make it easier for people to work with\npartitions. I seem to remember some discussion about that, but I don't\nrecall where it lead to.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 03:59:32 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partitioning"
}
] |
[
{
"msg_contents": "\n-----Original Message-----\nFrom: \"Luke Lonergan\"<[email protected]>\nSent: 19/03/06 16:26:58\nTo: \"Kenji Morishige\"<[email protected]>, \"Claus Guttesen\"<[email protected]>\nCc: \"[email protected]\"<[email protected]>\nSubject: Re: [PERFORM] Best OS & Configuration for Dual Xeon w/4GB &\n\n> I notice that no one asked you about your disk bandwidth - the Adaptec 2200S\n>is a \"known bad\" controller - \n\nAgreed - We have a couple at work which got relagated to use in 'toy' boxes when we realised how bad they were, long before they ever saw any production use.\n\nRegards, Dave\n\n-----Unmodified Original Message-----\nKenji,\n\n\nOn 3/17/06 4:08 PM, \"Kenji Morishige\" <[email protected]> wrote:\n\n> Thanks guys, I'm studying each of your responses and am going to start to\n> experiement.\n\nI notice that no one asked you about your disk bandwidth - the Adaptec 2200S\nis a \"known bad\" controller - the bandwidth to/from in RAID5 is about 1/2 to\n1/3 of a single disk drive, which is far too slow for a 10GB database, and\nIMO should disqualify a RAID adapter from being used at all.\n\nWithout fixing this, I'd suggest that all of the other tuning described here\nwill have little value, provided your working set is larger than your RAM.\n\nYou should test the I/O bandwidth using these simple tests:\n time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=1000000 && sync\"\n\nthen:\n time dd if=bigfile of=/dev/null bs=8k\n\nYou should get on the order of 150MB/s on four disk drives in RAID5.\n\nAnd before people jump in about \"random I/O\", etc, the sequential scan test\nwill show whether the controller is just plain bad very quickly. If it\ncan't do sequential fast, it won't do seeks fast either.\n\n- Luke\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n",
"msg_date": "Sun, 19 Mar 2006 18:19:40 -0000",
"msg_from": "\"Dave Page\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best OS & Configuration for Dual Xeon w/4GB &"
}
] |
[
{
"msg_contents": "I have a case where it seems the planner should be able to infer more \nfrom its partial indexes than it is doing. Observe:\n\npx=# select version();\n version\n------------------------------------------------------------------------\n PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2\n(1 row)\n\npx=# \\d pxmdvalue\n Table \"store.pxmdvalue\"\n Column | Type | Modifiers\n------------+----------+-----------\n entityid | bigint | not null\n fieldid | integer | not null\n value | text | not null\n datatypeid | integer | not null\n tsi | tsvector |\nIndexes:\n \"pxmdvalue_pk\" PRIMARY KEY, btree (entityid, fieldid)\n \"pxmdvalue_atom_val_idx\" btree (value) WHERE datatypeid = 22\n \"pxmdvalue_bigint_val_idx\" btree ((value::bigint)) WHERE datatypeid \n= 43\n \"pxmdvalue_datatypeid_idx\" btree (datatypeid)\n \"pxmdvalue_int_val_idx\" btree ((value::integer)) WHERE datatypeid = 16\n \"pxmdvalue_str32_val0_idx\" btree (lower(value)) WHERE datatypeid = \n2 AND octet_length(value) < 2700\n \"pxmdvalue_str32_val1_idx\" btree (lower(value) text_pattern_ops) \nWHERE datatypeid = 2 AND octet_length(value) < 2700\n \"pxmdvalue_str_val0_idx\" btree (lower(value)) WHERE datatypeid = 85 \nAND octet_length(value) < 2700\n \"pxmdvalue_str_val1_idx\" btree (lower(value) text_pattern_ops) \nWHERE datatypeid = 85 AND octet_length(value) < 2700\n \"pxmdvalue_time_val_idx\" btree (px_text2timestamp(value)) WHERE \ndatatypeid = 37\n\npx=# explain analyse select * from pxmdvalue where datatypeid = 43 and \nfieldid = 857 and cast(value as bigint) = '1009';\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on pxmdvalue (cost=2143.34..2685.74 rows=1 \nwidth=245) (actual time=144.411..144.415 rows=1 loops=1)\n Recheck Cond: (((value)::bigint = 1009::bigint) AND (datatypeid = 43))\n Filter: (fieldid = 857)\n -> BitmapAnd (cost=2143.34..2143.34 rows=138 width=0) (actual \ntime=144.394..144.394 rows=0 loops=1)\n -> Bitmap Index Scan on pxmdvalue_bigint_val_idx \n(cost=0.00..140.23 rows=1758 width=0) (actual time=0.021..0.021 rows=2 \nloops=1)\n Index Cond: ((value)::bigint = 1009::bigint)\n -> Bitmap Index Scan on pxmdvalue_datatypeid_idx \n(cost=0.00..2002.85 rows=351672 width=0) (actual time=144.127..144.127 \nrows=346445 loops=1)\n Index Cond: (datatypeid = 43)\n Total runtime: 144.469 ms\n(9 rows)\n\npx=# drop index pxmdvalue_datatypeid_idx;\nDROP INDEX\npx=# explain analyse select * from pxmdvalue where datatypeid = 43 and \nfieldid = 857 and cast(value as bigint) = '1009';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using pxmdvalue_bigint_val_idx on pxmdvalue \n(cost=0.00..6635.06 rows=1 width=245) (actual time=0.018..0.022 rows=1 \nloops=1)\n Index Cond: ((value)::bigint = 1009::bigint)\n Filter: (fieldid = 857)\n Total runtime: 0.053 ms\n(4 rows)\n\n\n\nNotice the two bitmap index scans in the first version of the query. The \none that hits the pxmdvalue_bigint_val_idx actually subsumes the work of \nthe second one, as it is a partial index on the same condition that the \nsecond bitmap scan is checking. So that second bitmap scan is a complete \nwaste of time and effort, afaict. 
When I remove the \npxmdvalue_datatypeid_idx index, to prevent it using that second bitmap \nscan, the resulting query is much faster, although its estimated cost is \nrather higher.\n\nAny clues, anyone? Is this indeed a limitation of the query planner, in \nthat it doesn't realise that the partial index is all it needs here? Or \nis something else going on that is leading the cost estimation astray?\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen [email protected]\nProximity Pty Ltd http://www.proximity.com.au/\n",
"msg_date": "Mon, 20 Mar 2006 15:27:21 +1100",
"msg_from": "Tim Allen <[email protected]>",
"msg_from_op": true,
"msg_subject": "partial indexes and inference"
},
{
"msg_contents": "I suspect you've found an issue with how the planner evaluates indexes\nfor bitmap scans. My guess is that that section of the planner needs to\nbe taught to look for partial indexes.\n\nYou should also try\n\ncast(value as bigint) = 1009\n\nThe planner may be getting confused by the '1009'.\n\nOn Mon, Mar 20, 2006 at 03:27:21PM +1100, Tim Allen wrote:\n> I have a case where it seems the planner should be able to infer more \n> from its partial indexes than it is doing. Observe:\n> \n> px=# select version();\n> version\n> ------------------------------------------------------------------------\n> PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2\n> (1 row)\n> \n> px=# \\d pxmdvalue\n> Table \"store.pxmdvalue\"\n> Column | Type | Modifiers\n> ------------+----------+-----------\n> entityid | bigint | not null\n> fieldid | integer | not null\n> value | text | not null\n> datatypeid | integer | not null\n> tsi | tsvector |\n> Indexes:\n> \"pxmdvalue_pk\" PRIMARY KEY, btree (entityid, fieldid)\n> \"pxmdvalue_atom_val_idx\" btree (value) WHERE datatypeid = 22\n> \"pxmdvalue_bigint_val_idx\" btree ((value::bigint)) WHERE datatypeid \n> = 43\n> \"pxmdvalue_datatypeid_idx\" btree (datatypeid)\n> \"pxmdvalue_int_val_idx\" btree ((value::integer)) WHERE datatypeid = 16\n> \"pxmdvalue_str32_val0_idx\" btree (lower(value)) WHERE datatypeid = \n> 2 AND octet_length(value) < 2700\n> \"pxmdvalue_str32_val1_idx\" btree (lower(value) text_pattern_ops) \n> WHERE datatypeid = 2 AND octet_length(value) < 2700\n> \"pxmdvalue_str_val0_idx\" btree (lower(value)) WHERE datatypeid = 85 \n> AND octet_length(value) < 2700\n> \"pxmdvalue_str_val1_idx\" btree (lower(value) text_pattern_ops) \n> WHERE datatypeid = 85 AND octet_length(value) < 2700\n> \"pxmdvalue_time_val_idx\" btree (px_text2timestamp(value)) WHERE \n> datatypeid = 37\n> \n> px=# explain analyse select * from pxmdvalue where datatypeid = 43 and \n> fieldid = 857 and cast(value as bigint) = '1009';\n> \n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on pxmdvalue (cost=2143.34..2685.74 rows=1 \n> width=245) (actual time=144.411..144.415 rows=1 loops=1)\n> Recheck Cond: (((value)::bigint = 1009::bigint) AND (datatypeid = 43))\n> Filter: (fieldid = 857)\n> -> BitmapAnd (cost=2143.34..2143.34 rows=138 width=0) (actual \n> time=144.394..144.394 rows=0 loops=1)\n> -> Bitmap Index Scan on pxmdvalue_bigint_val_idx \n> (cost=0.00..140.23 rows=1758 width=0) (actual time=0.021..0.021 rows=2 \n> loops=1)\n> Index Cond: ((value)::bigint = 1009::bigint)\n> -> Bitmap Index Scan on pxmdvalue_datatypeid_idx \n> (cost=0.00..2002.85 rows=351672 width=0) (actual time=144.127..144.127 \n> rows=346445 loops=1)\n> Index Cond: (datatypeid = 43)\n> Total runtime: 144.469 ms\n> (9 rows)\n> \n> px=# drop index pxmdvalue_datatypeid_idx;\n> DROP INDEX\n> px=# explain analyse select * from pxmdvalue where datatypeid = 43 and \n> fieldid = 857 and cast(value as bigint) = '1009';\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using pxmdvalue_bigint_val_idx on pxmdvalue \n> (cost=0.00..6635.06 rows=1 width=245) (actual time=0.018..0.022 rows=1 \n> loops=1)\n> Index Cond: ((value)::bigint = 1009::bigint)\n> Filter: (fieldid = 857)\n> Total runtime: 0.053 ms\n> (4 rows)\n> \n> \n> \n> Notice 
the two bitmap index scans in the first version of the query. The \n> one that hits the pxmdvalue_bigint_val_idx actually subsumes the work of \n> the second one, as it is a partial index on the same condition that the \n> second bitmap scan is checking. So that second bitmap scan is a complete \n> waste of time and effort, afaict. When I remove the \n> pxmdvalue_datatypeid_idx index, to prevent it using that second bitmap \n> scan, the resulting query is much faster, although its estimated cost is \n> rather higher.\n> \n> Any clues, anyone? Is this indeed a limitation of the query planner, in \n> that it doesn't realise that the partial index is all it needs here? Or \n> is something else going on that is leading the cost estimation astray?\n> \n> Tim\n> \n> -- \n> -----------------------------------------------\n> Tim Allen [email protected]\n> Proximity Pty Ltd http://www.proximity.com.au/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 04:04:12 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: partial indexes and inference"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a strange problem with my Postgres application. The problem is\nthat the data entered in the application never reaches the database,\nalthough the record id (serial) is generated, and the record can be\nretrieved again, and be modified. Multiple records can be added and\nmodified. But when i check the data with psql, the record is not\nthere.\nThe application uses persistant database connection, and when i check\nthe status of the connection, it shows: \"idle in transaction\". I am\npretty sure that every insert is being committed with explicit\n\"commit()\" . It always worked before.... weird.\n\nthanks for any hints\nKsenia.\n",
"msg_date": "Mon, 20 Mar 2006 11:46:11 +0100",
"msg_from": "\"Ksenia Marasanova\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "data doesnt get saved in the database / idle in transaction"
},
{
"msg_contents": "\n\"\"Ksenia Marasanova\"\" <[email protected]> wrote\n>\n> The application uses persistant database connection, and when i check\n> the status of the connection, it shows: \"idle in transaction\". I am\n> pretty sure that every insert is being committed with explicit\n> \"commit()\" . It always worked before.... weird.\n>\n\nTry to use the following command to see what commands reach the server:\n\n set log_statement = \"all\";\n\nRegards,\nQingqing\n\n\n",
"msg_date": "Mon, 20 Mar 2006 20:03:14 +0800",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: data doesnt get saved in the database / idle in transaction"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 08:03:14PM +0800, Qingqing Zhou wrote:\n> \n> \"\"Ksenia Marasanova\"\" <[email protected]> wrote\n> >\n> > The application uses persistant database connection, and when i check\n> > the status of the connection, it shows: \"idle in transaction\". I am\n> > pretty sure that every insert is being committed with explicit\n> > \"commit()\" . It always worked before.... weird.\n> >\n> \n> Try to use the following command to see what commands reach the server:\n> \n> set log_statement = \"all\";\n\nI'd bet that the commits aren't making it over.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 04:05:18 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: data doesnt get saved in the database / idle in transaction"
}
] |
[
{
"msg_contents": "Ok, here's the deal:\n\nI am responisble for an exciting project of evaluating migration of a medium/large application for a well-known swedish car&truck manufacturer from a proprietary DB to Postgres. The size of the database is currently about 50Gb, annual growth depending on sales, but probably in the 30-50Gb range.\n\nMigrating the schema was easily done, mostly involving a search/replace of some vendor specific datatypes. The next step is to migrate the data itself, and for this we have written a Java app relying on JDBC metadata to map the tables in the source schema to the target schema. The goal right now is to find the set of parameters that gives as short bulk insert time as possible, minimizing downtime while the data itself is migrated.\n\nThe machine used for the study is a Dell PE2850, 6GB memory, 1xXEON 3.0GHz/2MB cache, internal SCSI 0+1 raid (currently 4x36GB 10000rpm striped+mirrored, two more 146GB 15000rpm disks will arrive later). Not sure about the brand/model of the raid controller, so I'll leave that for now. File system is ext3(I know, maybe not the optimal choice but this is how it was when I got it) with a 8k block size. The OS currently installed is CentOS4.\n\nUntil the new disks arrive, both the OS itself, pg_xlog and the data reside on the same disks. When they arrive, I will probably move the data to the new disks (need two more to get raid 0+1, though) and leave the OS + pg_xlog on the 10000rpm disks. Mounting the 15000rpm data disks with the noatime option (this is safe, right?) and using a 16kb block size (for read performance) will probably be considered as well.\n\nNOTE: this machine/configuration is NOT what we will be using in production if the study turns out OK, it's just supposed to work as a development machine in the first phase whose purpose more or less is to get familiar with configurating Postgres and see if we can get the application up and running (we will probably use a 64bit platform and either a FC SAN or internal raid with a battery backed cache for production use, if all goes well).\n\nThe first thing I did when I got the machine was to do a raw dd write test:\n\n# time bash -c \"(dd if=/dev/zero of=/opt/bigfile count=1310720 bs=8k && sync)\"\n1310720+0 records in\n1310720+0 records out\n\nreal 2m21.438s\nuser 0m0.998s\nsys 0m51.347s\n\n(10*1024)Mb/~141s => ~75.5Mb/s\n\nAs a simple benchmark, I created a simple table without PK/indexes with 1k wide rows:\n\ncreate table iotest.one_kb_rows\n(\n the_col char(1024) not null\n);\n\nTo fill the table, I use this simple function:\n\ncreate or replace function iotest.writestress(megs integer) returns void as $$\ndeclare\n char_str char(1024) := repeat('x', 1024);\nbegin\n for i in 1..megs loop\n for j in 1..1024 loop\n insert into one_kb_rows(the_col) values (char_str);\n end loop;\n end loop;\nend;\n$$\nlanguage plpgsql;\n\nThen, I tested how long it takes to write 10Gb of data to this table:\n\niotest=> \\timing\nTiming is on.\n\niotest=> select writestress((10*1024));\n writestress\n-------------\n\n(1 row)\n\nTime: 379971.252 ms\n\nThis gives that 10Gb takes ~380s => ~27Mb/s (with fsync=off), compared to the raw dd result (~75.5Mb/s).\n\nI assume this difference is due to: \n- simultaneous WAL write activity (assumed: for each byte written to the table, at least one byte is also written to WAL, in effect: 10Gb data inserted in the table equals 20Gb written to disk)\n- lousy test method (it is done using a function => the transaction size is 10Gb, and 10Gb will *not* fit 
in wal_buffers :) )\n- poor config\n- something else? \n\nI have tried to read up as much as possible on Postgres configuration (disk layout, buffer management, WAL sync methods, etc) and found this post regarding bgwriter tweaking: http://archives.postgresql.org/pgsql-performance/2006-03/msg00218.php - which explains the bgwriter config below.\n\nAll params in postgresql.conf that are not commented out:\n---------------------------------------------------------\nmax_connections = 100\nsuperuser_reserved_connections = 2\nshared_buffers = 16000 \nbgwriter_lru_percent = 20 \nbgwriter_lru_maxpages = 160 \nbgwriter_all_percent = 10 \nbgwriter_all_maxpages = 320 \nfsync = off \nwal_sync_method = open_sync \nwal_buffers = 128 \ncheckpoint_segments = 3 \nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' \nlog_rotation_age = 1440 \nlog_line_prefix = '%m: (%u@%d) ' \nlc_messages = 'C' \nlc_monetary = 'C' \nlc_numeric = 'C' \nlc_time = 'C' \n\nfsync can safely be kept off during data migration as we are able to restart the procedure without losing data if something goes wrong. Increasing chekpoint_segments to 8/16/32 only increased the insert time, so I kept it at the default. I will increase shared_buffers and effective_cache_size as soon as it's time to tweak read performance, but for now I'm just focusing on write performance.\n\n\nPostgres version used: \n\niotest=> select version();\n version \n---------------------------------------------------------------------------------------------------\n PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2)\n(1 row)\n\n\nI want to make sure I have made correct assumptions before I carry on, so comments are welcome.\n\n- Mikael\n",
"msg_date": "Mon, 20 Mar 2006 15:59:14 +0100",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Migration study, step 1: bulk write performance optimization"
},
{
"msg_contents": "Mikael Carneholm wrote:\n\n> I am responisble for an exciting project of evaluating migration of a\n> medium/large application for a well-known swedish car&truck manufacturer\n> ... The goal right now is to find the set of parameters that gives as\n> short bulk insert time as possible, minimizing downtime while the data\n> itself is migrated.\n\nIf you haven't explored the COPY command yet, check it out. It is stunningly fast compared to normal INSERT commands.\n\n http://www.postgresql.org/docs/8.1/static/sql-copy.html\n\npg_dump and pg_restore make use of the COPY command. Since you're coming from a different vendor, you'd have to dump the data into a COPY-compatible set of files yourself. But it will be worth the effort.\n\nCraig\n",
"msg_date": "Mon, 20 Mar 2006 07:12:18 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance optimization"
},
{
"msg_contents": "Mikael,\n\nI've just recently passed such an experience, i.e. migrating from\nanother vendor to postgres of a DB about the same size category you\nhave.\n\nI think you got it right with the fsync turned off during migration\n(just don't forget to turn it back after finishing ;-), and using tables\nwithout indexes/foreign keys. In our case recreating all the\nindexes/foreign keys/other constraints took actually longer than the raw\ndata transfer itself... but it's possible that the process was not tuned\n100%, we are still learning how to tune postgres...\n\nWhat I can add from our experience: ext3 turned out lousy for our\napplication, and converting to XFS made a quite big improvement for our\nDB load. I don't have hard figures, but I think it was some 30%\nimprovement in overall speed, and it had a huge improvement for heavy\nload times... what I mean is that with ext3 we had multiple parallel big\ntasks executing in more time than if we would have executed them\nsequentially, and with XFS that was gone, load scales linearly. In any\ncase you should test the performance of your application on different FS\nand different settings, as this could make a huge difference.\n\nAnd another thing, we're still fighting with performance problems due to\nthe fact that our application was designed to perform well with the\nother DB product... I think you'll have more work to do in this regard\nthan just some search/replace ;-)\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Mon, 20 Mar 2006 16:19:12 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "> using a 16kb block size (for read performance) will probably be \n> considered as well.\n\n\tHm, this means that when postgres wants to write just one 8k page, the OS \nwill have to read 16k, replace half of it with the new block, and write \n16k again... I guess it should be better to stick with the usual block \nsize. Also, it will have to read 16k every time it rally wants to read one \npage... which happens quite often except for seq scan.\n\n> NOTE: this machine/configuration is NOT what we will be using in \n> production if the study turns out OK, it's just supposed to work as a \n> development machine in the first phase whose purpose more or less is to \n> get familiar with configurating Postgres and see if we can get the \n> application up and running (we will probably use a 64bit platform and\n\n\tOpteron xDDD\n\n\tUse XFS or Reiser... ext3 isn't well suited for this. use noatime AND \nnodiratime.\n\n\tIt's safe to turn off fsync while importing your data.\n\tFor optimum speed, put the WAL on another physical disk.\n\n\tLook in the docs which of maintenance_work_mem, or work_mem or sort_mem \nis used for index creation, and set it to a very large value, to speed up \nthat index creation. Create your indexes with fsync=off also.\n\n\n",
"msg_date": "Mon, 20 Mar 2006 17:35:00 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance optimization"
},
{
"msg_contents": "Others are reporting better performance on 8.1.x with very large \nshared buffers. You may want to try tweaking that possibly as high as \n20% of available memory\n\nDave\nOn 20-Mar-06, at 9:59 AM, Mikael Carneholm wrote:\n\n> Ok, here's the deal:\n>\n> I am responisble for an exciting project of evaluating migration of \n> a medium/large application for a well-known swedish car&truck \n> manufacturer from a proprietary DB to Postgres. The size of the \n> database is currently about 50Gb, annual growth depending on sales, \n> but probably in the 30-50Gb range.\n>\n> Migrating the schema was easily done, mostly involving a search/ \n> replace of some vendor specific datatypes. The next step is to \n> migrate the data itself, and for this we have written a Java app \n> relying on JDBC metadata to map the tables in the source schema to \n> the target schema. The goal right now is to find the set of \n> parameters that gives as short bulk insert time as possible, \n> minimizing downtime while the data itself is migrated.\n>\n> The machine used for the study is a Dell PE2850, 6GB memory, 1xXEON \n> 3.0GHz/2MB cache, internal SCSI 0+1 raid (currently 4x36GB 10000rpm \n> striped+mirrored, two more 146GB 15000rpm disks will arrive later). \n> Not sure about the brand/model of the raid controller, so I'll \n> leave that for now. File system is ext3(I know, maybe not the \n> optimal choice but this is how it was when I got it) with a 8k \n> block size. The OS currently installed is CentOS4.\n>\n> Until the new disks arrive, both the OS itself, pg_xlog and the \n> data reside on the same disks. When they arrive, I will probably \n> move the data to the new disks (need two more to get raid 0+1, \n> though) and leave the OS + pg_xlog on the 10000rpm disks. Mounting \n> the 15000rpm data disks with the noatime option (this is safe, \n> right?) 
and using a 16kb block size (for read performance) will \n> probably be considered as well.\n>\n> NOTE: this machine/configuration is NOT what we will be using in \n> production if the study turns out OK, it's just supposed to work as \n> a development machine in the first phase whose purpose more or less \n> is to get familiar with configurating Postgres and see if we can \n> get the application up and running (we will probably use a 64bit \n> platform and either a FC SAN or internal raid with a battery backed \n> cache for production use, if all goes well).\n>\n> The first thing I did when I got the machine was to do a raw dd \n> write test:\n>\n> # time bash -c \"(dd if=/dev/zero of=/opt/bigfile count=1310720 \n> bs=8k && sync)\"\n> 1310720+0 records in\n> 1310720+0 records out\n>\n> real 2m21.438s\n> user 0m0.998s\n> sys 0m51.347s\n>\n> (10*1024)Mb/~141s => ~75.5Mb/s\n>\n> As a simple benchmark, I created a simple table without PK/indexes \n> with 1k wide rows:\n>\n> create table iotest.one_kb_rows\n> (\n> the_col char(1024) not null\n> );\n>\n> To fill the table, I use this simple function:\n>\n> create or replace function iotest.writestress(megs integer) returns \n> void as $$\n> declare\n> char_str char(1024) := repeat('x', 1024);\n> begin\n> for i in 1..megs loop\n> for j in 1..1024 loop\n> insert into one_kb_rows(the_col) values (char_str);\n> end loop;\n> end loop;\n> end;\n> $$\n> language plpgsql;\n>\n> Then, I tested how long it takes to write 10Gb of data to this table:\n>\n> iotest=> \\timing\n> Timing is on.\n>\n> iotest=> select writestress((10*1024));\n> writestress\n> -------------\n>\n> (1 row)\n>\n> Time: 379971.252 ms\n>\n> This gives that 10Gb takes ~380s => ~27Mb/s (with fsync=off), \n> compared to the raw dd result (~75.5Mb/s).\n>\n> I assume this difference is due to:\n> - simultaneous WAL write activity (assumed: for each byte written \n> to the table, at least one byte is also written to WAL, in effect: \n> 10Gb data inserted in the table equals 20Gb written to disk)\n> - lousy test method (it is done using a function => the transaction \n> size is 10Gb, and 10Gb will *not* fit in wal_buffers :) )\n> - poor config\n> - something else?\n>\n> I have tried to read up as much as possible on Postgres \n> configuration (disk layout, buffer management, WAL sync methods, \n> etc) and found this post regarding bgwriter tweaking: http:// \n> archives.postgresql.org/pgsql-performance/2006-03/msg00218.php - \n> which explains the bgwriter config below.\n>\n> All params in postgresql.conf that are not commented out:\n> ---------------------------------------------------------\n> max_connections = 100\n> superuser_reserved_connections = 2\n> shared_buffers = 16000\n> bgwriter_lru_percent = 20\n> bgwriter_lru_maxpages = 160\n> bgwriter_all_percent = 10\n> bgwriter_all_maxpages = 320\n> fsync = off\n> wal_sync_method = open_sync\n> wal_buffers = 128\n> checkpoint_segments = 3\n> log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\n> log_rotation_age = 1440\n> log_line_prefix = '%m: (%u@%d) '\n> lc_messages = 'C'\n> lc_monetary = 'C'\n> lc_numeric = 'C'\n> lc_time = 'C'\n>\n> fsync can safely be kept off during data migration as we are able \n> to restart the procedure without losing data if something goes \n> wrong. Increasing chekpoint_segments to 8/16/32 only increased the \n> insert time, so I kept it at the default. 
I will increase \n> shared_buffers and effective_cache_size as soon as it's time to \n> tweak read performance, but for now I'm just focusing on write \n> performance.\n>\n>\n> Postgres version used:\n>\n> iotest=> select version();\n> version\n> ---------------------------------------------------------------------- \n> -----------------------------\n> PostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) \n> 3.4.4 20050721 (Red Hat 3.4.4-2)\n> (1 row)\n>\n>\n> I want to make sure I have made correct assumptions before I carry \n> on, so comments are welcome.\n>\n> - Mikael\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Mon, 20 Mar 2006 12:44:31 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance optimization"
},
{
"msg_contents": "At 03:44 PM 3/21/2006, Simon Riggs wrote:\n>On Mon, 2006-03-20 at 15:59 +0100, Mikael Carneholm wrote:\n>\n> > This gives that 10Gb takes ~380s => ~27Mb/s (with fsync=off), \n> compared to the raw dd result (~75.5Mb/s).\n> >\n> > I assume this difference is due to:\n> > - simultaneous WAL write activity (assumed: for each byte written \n> to the table, at least one byte is also written to WAL, in effect: \n> 10Gb data inserted in the table equals 20Gb written to disk)\n> > - lousy test method (it is done using a function => the \n> transaction size is 10Gb, and 10Gb will *not* fit in wal_buffers :) )\n> > - poor config\n>\n> > checkpoint_segments = 3\n>\n>With those settings, you'll be checkpointing every 48 Mb, which will be\n>every about once per second. Since the checkpoint will take a reasonable\n>amount of time, even with fsync off, you'll be spending most of your\n>time checkpointing. bgwriter will just be slowing you down too because\n>you'll always have more clean buffers than you can use, since you have\n>132MB of shared_buffers, yet flushing all of them every checkpoint.\nIIRC, Josh Berkus did some benches that suggests in pg 8.x a value of \n64 - 256 is best for checkpoint_segments as long as you have the RAM available.\n\nI'd suggest trying values of 64, 128, and 256 and setting \ncheckpoint_segments to the best of those.\n\nRon \n\n\n",
"msg_date": "Mon, 20 Mar 2006 16:17:09 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "I've seen it said here several times that \"update == delete + insert\". On the other hand, I've noticed that \"alter table [add|drop] column ...\" is remarkably fast, even for very large tables, which leads me to wonder whether each column's contents are in a file specifically for that column.\n\nMy question: Suppose I have a very \"wide\" set of data, say 100 columns, and one of those columns will be updated often, but the others are fairly static. I have two choices:\n\nDesign 1:\n create table a (\n id integer,\n frequently_updated integer);\n\n create table b(\n id integer,\n infrequently_updated_1 integer,\n infrequently_updated_2 integer,\n infrequently_updated_3 integer,\n ... etc.\n infrequently_updated_99 integer);\n\nDesign 2:\n create table c(\n id integer,\n frequently_updated integer,\n infrequently_updated_1 integer,\n infrequently_updated_2 integer,\n infrequently_updated_3 integer,\n ... etc.\n infrequently_updated_99 integer);\n\nIf \"update == delete + insert\" is strictly true, then \"Design 2\" would be poor since 99 columns would be moved around with each update. But if columns are actually stored in separate files, the Designs 1 and 2 would be essentially equivalent when it comes to vacuuming.\n\nThanks,\nCraig\n",
"msg_date": "Mon, 20 Mar 2006 14:49:43 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "update == delete + insert?"
},
{
"msg_contents": "go with design 1, update does = delete + insert.\n\n\n---------- Original Message -----------\nFrom: \"Craig A. James\" <[email protected]>\nTo: [email protected]\nSent: Mon, 20 Mar 2006 14:49:43 -0800\nSubject: [PERFORM] update == delete + insert?\n\n> I've seen it said here several times that \"update == delete + insert\". On the other hand, I've noticed that \n> \"alter table [add|drop] column ...\" is remarkably fast, even for very large tables, which leads me to wonder \n> whether each column's contents are in a file specifically for that column.\n> \n> My question: Suppose I have a very \"wide\" set of data, say 100 columns, and one of those columns will be \n> updated often, but the others are fairly static. I have two choices:\n> \n> Design 1:\n> create table a (\n> id integer,\n> frequently_updated integer);\n> \n> create table b(\n> id integer,\n> infrequently_updated_1 integer,\n> infrequently_updated_2 integer,\n> infrequently_updated_3 integer,\n> ... etc.\n> infrequently_updated_99 integer);\n> \n> Design 2:\n> create table c(\n> id integer,\n> frequently_updated integer,\n> infrequently_updated_1 integer,\n> infrequently_updated_2 integer,\n> infrequently_updated_3 integer,\n> ... etc.\n> infrequently_updated_99 integer);\n> \n> If \"update == delete + insert\" is strictly true, then \"Design 2\" would be poor since 99 columns would be moved \n> around with each update. But if columns are actually stored in separate files, the Designs 1 and 2 would be \n> essentially equivalent when it comes to vacuuming.\n> \n> Thanks,\n> Craig\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n------- End of Original Message -------\n\n",
"msg_date": "Mon, 20 Mar 2006 17:56:34 -0500",
"msg_from": "\"Jim Buttafuoco\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update == delete + insert?"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> I've seen it said here several times that \"update == delete + insert\". On the other hand, I've noticed that \"alter table [add|drop] column ...\" is remarkably fast, even for very large tables, which leads me to wonder whether each column's contents are in a file specifically for that column.\n\nNo. The reason \"drop column\" is fast is that we make no attempt to\nremove the data from existing rows; we only mark the column's entry in\nthe system catalogs as deleted. \"add column\" is only fast if you are\nadding a column with no default (a/k/a default NULL). In that case\nlikewise we don't have to modify existing rows; the desired behavior\nfalls out from the fact that the tuple access routines return NULL if\nasked to fetch a column beyond those existing in a particular tuple.\n\nYou can read about the storage layout in\nhttp://developer.postgresql.org/docs/postgres/storage.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Mar 2006 18:22:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update == delete + insert? "
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> If you haven't explored the COPY command yet, check it out. It is stunningly fast compared to normal INSERT commands.\n\nNote also that his \"benchmark\" is testing multiple INSERTs issued within\na loop in a plpgsql function, which has got nearly nothing to do with\nthe performance that will be obtained from INSERTs issued by a client\n(especially if said INSERTs aren't prepared and/or aren't batched into\ntransactions).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 20 Mar 2006 19:27:17 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance optimization "
},
{
"msg_contents": "On 3/20/06, Craig A. James <[email protected]> wrote:\n> I've seen it said here several times that \"update == delete + insert\". On the other hand, I've noticed that \"alter table [add|drop] column ...\" is remarkably fast, even for very large tables, which leads me to wonder whether each column's contents are in a file specifically for that column.\n>\n> My question: Suppose I have a very \"wide\" set of data, say 100 columns, and one of those columns will be updated often, but the others are fairly static. I have two choices:\n>\n> Design 1:\n> create table a (\n> id integer,\n> frequently_updated integer);\n>\n> create table b(\n> id integer,\n> infrequently_updated_1 integer,\n> infrequently_updated_2 integer,\n> infrequently_updated_3 integer,\n> ... etc.\n> infrequently_updated_99 integer);\n>\n> Design 2:\n> create table c(\n> id integer,\n> frequently_updated integer,\n> infrequently_updated_1 integer,\n> infrequently_updated_2 integer,\n> infrequently_updated_3 integer,\n> ... etc.\n> infrequently_updated_99 integer);\n>\n> If \"update == delete + insert\" is strictly true, then \"Design 2\" would be poor since 99 columns would be moved around with each update. But if columns are actually stored in separate files, the Designs 1 and 2 would be essentially equivalent when it comes to vacuuming.\n>\n> Thanks,\n> Craig\n>\n\ndesign 1 is normalized and better\ndesign 2 is denormalized and a bad approach no matter the RDBMS\n\nupdate does delete + insert, and vacuum is the way to recover the space\n\n--\nAtentamente,\nJaime Casanova\n\n\"What they (MySQL) lose in usability, they gain back in benchmarks, and that's\nall that matters: getting the wrong answer really fast.\"\n Randal L. Schwartz\n",
"msg_date": "Mon, 20 Mar 2006 20:38:15 -0500",
"msg_from": "\"Jaime Casanova\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update == delete + insert?"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 08:38:15PM -0500, Jaime Casanova wrote:\n> On 3/20/06, Craig A. James <[email protected]> wrote:\n> > Design 1:\n> > create table a (\n> > id integer,\n> > frequently_updated integer);\n> >\n> > create table b(\n> > id integer,\n> > infrequently_updated_1 integer,\n> > infrequently_updated_2 integer,\n> > infrequently_updated_3 integer,\n> > ... etc.\n> > infrequently_updated_99 integer);\n> >\n> > Design 2:\n> > create table c(\n> > id integer,\n> > frequently_updated integer,\n> > infrequently_updated_1 integer,\n> > infrequently_updated_2 integer,\n> > infrequently_updated_3 integer,\n> > ... etc.\n> > infrequently_updated_99 integer);\n> design 1 is normalized and better\n> design 2 is denormalized and a bad approach no matter the RDBMS\n\nHow is design 1 denormalized?\n\n> \"What they (MySQL) lose in usability, they gain back in benchmarks, and that's\n> all that matters: getting the wrong answer really fast.\"\n> Randal L. Schwartz\n\nWhere's that quote from?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 05:23:48 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update == delete + insert?"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 04:19:12PM +0100, Csaba Nagy wrote:\n> What I can add from our experience: ext3 turned out lousy for our\n> application, and converting to XFS made a quite big improvement for our\n> DB load. I don't have hard figures, but I think it was some 30%\n> improvement in overall speed, and it had a huge improvement for heavy\n> load times... what I mean is that with ext3 we had multiple parallel big\n> tasks executing in more time than if we would have executed them\n> sequentially, and with XFS that was gone, load scales linearly. In any\n> case you should test the performance of your application on different FS\n> and different settings, as this could make a huge difference.\n\nDid you try mounting ext3 whith data=writeback by chance? People have\nfound that makes a big difference in performance.\n\nhttp://archives.postgresql.org/pgsql-performance/2003-01/msg00320.php\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 05:27:59 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
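For reference, a sketch of how data=writeback is typically applied, assuming the data directory sits on its own partition (device and mount point are invented); the option is set at mount time, e.g. via /etc/fstab, and picked up on the next mount:

# /etc/fstab
/dev/sdb1   /var/lib/pgsql/data   ext3   noatime,data=writeback   0 2

# then remount the partition (or reboot) so the new options take effect
umount /var/lib/pgsql/data && mount /var/lib/pgsql/data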
{
"msg_contents": "> Did you try mounting ext3 whith data=writeback by chance? People have\n> found that makes a big difference in performance.\n\nI'm not sure, there's other people here doing the OS stuff - I'm pretty\nmuch ignorant about what \"data=writeback\" could mean :-D\n\nThey knew however that for the data partitions no FS journaling is\nneeded, and for the WAL partition meta data journaling is enough, so I\nguess they tuned ext3 for this.\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Tue, 21 Mar 2006 12:52:46 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 12:52:46PM +0100, Csaba Nagy wrote:\n> They knew however that for the data partitions no FS journaling is\n> needed, and for the WAL partition meta data journaling is enough, so I\n> guess they tuned ext3 for this.\n\nFor the record, that's the wrong way round. For the data partitioning\nmetadata journaling is enough, and for the WAL partition you don't need any\nFS journaling at all.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 21 Mar 2006 12:56:18 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "> For the record, that's the wrong way round. For the data partitioning\n> metadata journaling is enough, and for the WAL partition you don't need any\n> FS journaling at all.\n\nYes, you're right: the data partition shouldn't loose file creation,\ndeletion, etc., which is not important for the WAL partition where the\nWAL files are mostly recycled... right ?\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Tue, 21 Mar 2006 12:59:13 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 12:56:18PM +0100, Steinar H. Gunderson wrote:\n> On Tue, Mar 21, 2006 at 12:52:46PM +0100, Csaba Nagy wrote:\n> > They knew however that for the data partitions no FS journaling is\n> > needed, and for the WAL partition meta data journaling is enough, so I\n> > guess they tuned ext3 for this.\n> \n> For the record, that's the wrong way round. For the data partitioning\n> metadata journaling is enough, and for the WAL partition you don't need any\n> FS journaling at all.\n\nAre you sure? Metadate changes are probably a lot more common on the WAL\npartition. In any case, I don't see why there should be a difference.\nThe real issue is: is related filesystem metadata sync'd as part of a\nfile being fsync'd?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 06:01:58 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 06:01:58AM -0600, Jim C. Nasby wrote:\n> Are you sure? Metadate changes are probably a lot more common on the WAL\n> partition. In any case, I don't see why there should be a difference.\n> The real issue is: is related filesystem metadata sync'd as part of a\n> file being fsync'd?\n\nI've been told on this list that PostgreSQL actually takes care to fill a new\nWAL file with zeroes etc. when initializing it; dig a few months back and I'm\nsure it's there.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 21 Mar 2006 13:10:32 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 01:10:32PM +0100, Steinar H. Gunderson wrote:\n> On Tue, Mar 21, 2006 at 06:01:58AM -0600, Jim C. Nasby wrote:\n> > Are you sure? Metadate changes are probably a lot more common on the WAL\n> > partition. In any case, I don't see why there should be a difference.\n> > The real issue is: is related filesystem metadata sync'd as part of a\n> > file being fsync'd?\n> \n> I've been told on this list that PostgreSQL actually takes care to fill a new\n> WAL file with zeroes etc. when initializing it; dig a few months back and I'm\n> sure it's there.\n\nThat's fine and all, but does no good if the filesystem doesn't know\nthat the file exists on a crash. The same concern is also true on the\ndata partition, although it's less likely to be a problem because if you\nhappen to crash soon after a DDL operation it's likely that you haven't\nhad a checkpoint yet, so the operation will likely be repeated during\nWAL replay. But depending on that is a race condition.\n\nBasically, you need to know for certain that if PostgreSQL creates a\nfile and then fsync's it that that file is safely on disk, and that the\nfilesystem knows how to find it (ie: the metadata is also on disk in\nsome fashion).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 06:18:39 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 06:18:39AM -0600, Jim C. Nasby wrote:\n> Basically, you need to know for certain that if PostgreSQL creates a\n> file and then fsync's it that that file is safely on disk, and that the\n> filesystem knows how to find it (ie: the metadata is also on disk in\n> some fashion).\n\nIt seems to do, quoting Tom from\nhttp://archives.postgresql.org/pgsql-performance/2005-11/msg00184.php:\n\n== snip ==\n No, Mike is right: for WAL you shouldn't need any journaling. This is\n because we zero out *and fsync* an entire WAL file before we ever\n consider putting live WAL data in it. During live use of a WAL file,\n its metadata is not changing. As long as the filesystem follows\n the minimal rule of syncing metadata about a file when it fsyncs the\n file, all the live WAL files should survive crashes OK.\n\n We can afford to do this mainly because WAL files can normally be\n recycled instead of created afresh, so the zero-out overhead doesn't\n get paid during normal operation.\n\n You do need metadata journaling for all non-WAL PG files, since we don't\n fsync them every time we extend them; which means the filesystem could\n lose track of which disk blocks belong to such a file, if it's not\n journaled.\n== snip ==\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 21 Mar 2006 13:29:54 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 01:29:54PM +0100, Steinar H. Gunderson wrote:\n> On Tue, Mar 21, 2006 at 06:18:39AM -0600, Jim C. Nasby wrote:\n> > Basically, you need to know for certain that if PostgreSQL creates a\n> > file and then fsync's it that that file is safely on disk, and that the\n> > filesystem knows how to find it (ie: the metadata is also on disk in\n> > some fashion).\n> \n> It seems to do, quoting Tom from\n> http://archives.postgresql.org/pgsql-performance/2005-11/msg00184.php:\n\n404 :(\n\n> \n> == snip ==\n> its metadata is not changing. As long as the filesystem follows\n> the minimal rule of syncing metadata about a file when it fsyncs the\n> file, all the live WAL files should survive crashes OK.\n\nAnd therin lies the rub: file metadata *must* commit to disk as part of\nan fsync, and it's needed for both WAL and heap data. It's needed for\nheap data because as soon as a checkpoint completes, PostgreSQL is free\nto erase any WAL info about previous DDL changes.\n\nOn FreeBSD, if you're using softupdates, the filesystem will properly\norder writes to the drive so that metadata must be written before file\ndata; this ensures that an fsync on the file will first write any\nmetadata before writing the data itself.\n\nWith fsync turned off, any metadata-changing commands will wait for the\nmetadata to commit to disk before returning (unless you run async...)\n\nI'm not really sure how this all plays out on a journalling filesystem.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 06:45:25 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 06:01:58AM -0600, Jim C. Nasby wrote:\n>On Tue, Mar 21, 2006 at 12:56:18PM +0100, Steinar H. Gunderson wrote:\n>> For the record, that's the wrong way round. For the data partitioning\n>> metadata journaling is enough, and for the WAL partition you don't need any\n>> FS journaling at all.\n>\n>Are you sure? \n\nYes. :) You actually shouldn't need metadata journaling in either \ncase--fsck will do the same thing. But fsck can take a *very* long time \non a large paritition, so for your data partition the journaling fs is a \nbig win. But your wal partition isn't likely to have very many files \nand should fsck in a snap, and data consistency is taken care of by \nsynchronous operations. (Which is the reason you really don't need/want \ndata journalling.)\n\nMike Stone\n",
"msg_date": "Tue, 21 Mar 2006 07:48:43 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
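A sketch of the split being described, with invented devices and paths: metadata journaling (ext3's default data=ordered mode) for the data area, and no journal at all for the WAL partition, since its few files fsck quickly and their contents are protected by synchronous writes:

# data partition: ext3, default data=ordered journals metadata only
mkfs.ext3 /dev/sdc1
mount -o noatime /dev/sdc1 /var/lib/pgsql/data

# WAL partition: plain ext2, no journal
mkfs.ext2 /dev/sdd1
mount -o noatime /dev/sdd1 /var/lib/pgsql/data/pg_xlog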
{
"msg_contents": "> > design 1 is normalized and better\n> > design 2 is denormalized and a bad approach no matter the RDBMS\n>\n> How is design 1 denormalized?\n\nIt isn't :)...he said it is normalized. Design 2 may or may not be\nde-normalized (IMO there is not enough information to make that\ndetermination) but as stated it's a good idea to split the table on\npractical grounds.\n\nmerlin\n",
"msg_date": "Tue, 21 Mar 2006 09:12:08 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update == delete + insert?"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 09:12:08AM -0500, Merlin Moncure wrote:\n> > > design 1 is normalized and better\n> > > design 2 is denormalized and a bad approach no matter the RDBMS\n> >\n> > How is design 1 denormalized?\n> \n> It isn't :)...he said it is normalized. Design 2 may or may not be\n> de-normalized (IMO there is not enough information to make that\n> determination) but as stated it's a good idea to split the table on\n> practical grounds.\n\nErr, sorry, got the number backwards. My point is that 2 isn't\ndenormalized afaik, at least not based just on the example. But yes, in\na case like this, vertical partitioning can make a lot of sense.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 11:38:13 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update == delete + insert?"
},
{
"msg_contents": "On Mon, 2006-03-20 at 15:59 +0100, Mikael Carneholm wrote:\n\n> This gives that 10Gb takes ~380s => ~27Mb/s (with fsync=off), compared to the raw dd result (~75.5Mb/s).\n> \n> I assume this difference is due to: \n> - simultaneous WAL write activity (assumed: for each byte written to the table, at least one byte is also written to WAL, in effect: 10Gb data inserted in the table equals 20Gb written to disk)\n> - lousy test method (it is done using a function => the transaction size is 10Gb, and 10Gb will *not* fit in wal_buffers :) )\n> - poor config\n\n> checkpoint_segments = 3 \n\nWith those settings, you'll be checkpointing every 48 Mb, which will be\nevery about once per second. Since the checkpoint will take a reasonable\namount of time, even with fsync off, you'll be spending most of your\ntime checkpointing. bgwriter will just be slowing you down too because\nyou'll always have more clean buffers than you can use, since you have\n132MB of shared_buffers, yet flushing all of them every checkpoint.\n\nPlease read you're logfile, which should have relevant WARNING messages.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 21 Mar 2006 20:44:50 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
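An illustrative postgresql.conf fragment for the situation described above (the numbers are examples, not recommendations): with 16 MB WAL segments, checkpoint_segments = 3 forces a checkpoint about every 48 MB written, so a bulk load wants this set much higher:

checkpoint_segments = 64     # ~1 GB of WAL (64 * 16 MB) between forced checkpoints
checkpoint_timeout = 300     # seconds
checkpoint_warning = 30      # log a warning if checkpoints come closer together than this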
{
"msg_contents": "On Mon, 2006-03-20 at 15:17, Ron wrote:\n> At 03:44 PM 3/21/2006, Simon Riggs wrote:\n> >On Mon, 2006-03-20 at 15:59 +0100, Mikael Carneholm wrote:\n> >\n> > > This gives that 10Gb takes ~380s => ~27Mb/s (with fsync=off), \n> > compared to the raw dd result (~75.5Mb/s).\n> > >\n> > > I assume this difference is due to:\n> > > - simultaneous WAL write activity (assumed: for each byte written \n> > to the table, at least one byte is also written to WAL, in effect: \n> > 10Gb data inserted in the table equals 20Gb written to disk)\n> > > - lousy test method (it is done using a function => the \n> > transaction size is 10Gb, and 10Gb will *not* fit in wal_buffers :) )\n> > > - poor config\n> >\n> > > checkpoint_segments = 3\n> >\n> >With those settings, you'll be checkpointing every 48 Mb, which will be\n> >every about once per second. Since the checkpoint will take a reasonable\n> >amount of time, even with fsync off, you'll be spending most of your\n> >time checkpointing. bgwriter will just be slowing you down too because\n> >you'll always have more clean buffers than you can use, since you have\n> >132MB of shared_buffers, yet flushing all of them every checkpoint.\n> IIRC, Josh Berkus did some benches that suggests in pg 8.x a value of \n> 64 - 256 is best for checkpoint_segments as long as you have the RAM available.\n> \n> I'd suggest trying values of 64, 128, and 256 and setting \n> checkpoint_segments to the best of those.\n\nI've also found that modest increases in commit_siblings and\ncommit_delay help a lot on certain types of imports.\n",
"msg_date": "Tue, 21 Mar 2006 15:19:22 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "Ron wrote:\n\n> IIRC, Josh Berkus did some benches that suggests in pg 8.x a value of \n> 64 - 256 is best for checkpoint_segments as long as you have the RAM \n> available.\n\nI think you are confusing checkpoint_segments with wal_buffers.\ncheckpoint_segments certainly has little to do with available RAM!\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 21 Mar 2006 17:20:38 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> I've also found that modest increases in commit_siblings and\n> commit_delay help a lot on certain types of imports.\n\nOn a data import? Those really should have zero effect on a\nsingle-process workload. Or are you doing multiple concurrent imports?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 21 Mar 2006 16:56:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance "
},
{
"msg_contents": "On Tue, 2006-03-21 at 15:56, Tom Lane wrote:\n> Scott Marlowe <[email protected]> writes:\n> > I've also found that modest increases in commit_siblings and\n> > commit_delay help a lot on certain types of imports.\n> \n> On a data import? Those really should have zero effect on a\n> single-process workload. Or are you doing multiple concurrent imports?\n\nThat, and it's a machine that's doing other things. Also, a lot of the\nimports are NOT bundled up into groups of transactions. i.e. lots and\nlots of individual insert queries.\n",
"msg_date": "Tue, 21 Mar 2006 16:23:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performance"
}
] |
[
{
"msg_contents": "Hello!\n\nCan I Increment the perfomance of execution query?\n\nWhere is the instrument to analyze the query runnnig for create a Index \nquery for a single optimize that?\n\nthank's\n\nMarco \"Furetto\" Berri\n",
"msg_date": "Mon, 20 Mar 2006 15:59:25 +0100",
"msg_from": "Marco Furetto <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Feromance"
},
{
"msg_contents": "Marco,\n\nCould you give us the query you would like to improve performance?\n\n\n----- Original Message ----- \nFrom: \"Marco Furetto\" <[email protected]>\nTo: <[email protected]>\nSent: Monday, March 20, 2006 11:59 AM\nSubject: [PERFORM] Query Feromance\n\n\n> Hello!\n> \n> Can I Increment the perfomance of execution query?\n> \n> Where is the instrument to analyze the query runnnig for create a Index \n> query for a single optimize that?\n> \n> thank's\n> \n> Marco \"Furetto\" Berri\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n",
"msg_date": "Mon, 20 Mar 2006 13:34:47 -0300",
"msg_from": "\"Reimer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Feromance"
},
{
"msg_contents": "Hello!\n\nI'm managing the db of a \"Content Management environment\" and I'm\nsearching for a \"Query analyzer\" to improve performance because i don't\nknow how many and what type of queries are executing on the system (for\nthe \"where and join\" block).\n\nIf i could have query's stats i could Optimize the queries indexes.\n\nThank's a lot!\n\nMarco \"Furetto\" Berri\n\n\n\n\nReimer wrote:\n> Marco,\n> \n> Could you give us the query you would like to improve performance?\n> \n> \n> ----- Original Message ----- From: \"Marco Furetto\" <[email protected]>\n> To: <[email protected]>\n> Sent: Monday, March 20, 2006 11:59 AM\n> Subject: [PERFORM] Query Feromance\n> \n> \n>> Hello!\n>>\n>> Can I Increment the perfomance of execution query?\n>>\n>> Where is the instrument to analyze the query runnnig for create a \n>> Index query for a single optimize that?\n>>\n>> thank's\n>>\n>> Marco \"Furetto\" Berri\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n> \n\n",
"msg_date": "Tue, 21 Mar 2006 09:25:50 +0100",
"msg_from": "Marco Furetto <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Feromance"
},
{
"msg_contents": "Hi,\n\nOn Tuesday 21 March 2006 09:25, Marco Furetto wrote:\n| I'm managing the db of a \"Content Management environment\" and I'm\n| searching for a \"Query analyzer\" to improve performance because i don't\n| know how many and what type of queries are executing on the system (for\n| the \"where and join\" block).\n\nas a first step, I'd enable query duration logging; in postgresql.conf\nI have set\n\n log_min_duration_statement = 3000\n\nthis will log each query that needs more than 3 seconds to complete.\n\nThe next step would be to \"explain analyze\" the problematic queries.\n\nCiao,\nThomas\n\n-- \nThomas Pundt <[email protected]> ---- http://rp-online.de/ ----\n",
"msg_date": "Tue, 21 Mar 2006 09:43:57 +0100",
"msg_from": "Thomas Pundt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Feromance"
},
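A concrete illustration of the two steps suggested above (the query itself is a made-up example):

# postgresql.conf
log_min_duration_statement = 3000    # log every statement that runs longer than 3000 ms

-- then, for a statement that shows up in the log:
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;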
{
"msg_contents": "ok, I enable query duration logging in postgresql.conf.\n\nwhere is the instruments for analyze the statistics queries executing on \nmy db?\n\nEg.: Number of query executing, total time for executing a single query, \netc...\n\n\nThank's\n\nMarco \"Furetto\" Berri\n\n\nThomas Pundt wrote:\n> Hi,\n> \n> On Tuesday 21 March 2006 09:25, Marco Furetto wrote:\n> | I'm managing the db of a \"Content Management environment\" and I'm\n> | searching for a \"Query analyzer\" to improve performance because i don't\n> | know how many and what type of queries are executing on the system (for\n> | the \"where and join\" block).\n> \n> as a first step, I'd enable query duration logging; in postgresql.conf\n> I have set\n> \n> log_min_duration_statement = 3000\n> \n> this will log each query that needs more than 3 seconds to complete.\n> \n> The next step would be to \"explain analyze\" the problematic queries.\n> \n> Ciao,\n> Thomas\n> \n",
"msg_date": "Tue, 21 Mar 2006 10:56:34 +0100",
"msg_from": "Marco Furetto <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Feromance"
},
{
"msg_contents": "Hi,\n\nOn Tuesday 21 March 2006 10:56, Marco Furetto wrote:\n| ok, I enable query duration logging in postgresql.conf.\n|\n| where is the instruments for analyze the statistics queries executing on\n| my db?\n|\n| Eg.: Number of query executing, total time for executing a single query,\n| etc...\n\nI don't know if there are tools or settings available for PostgreSQL that do\nsuch number-of-query-accounting; but you can set the \nlog_min_duration_statement value to 0 to log all statements with their\nduration.\n\nSee http://www.postgresql.org/docs/8.1/interactive/runtime-config.html\nfor more options on runtime configuration, especially 17.7 and 17.8\nmight be of interest for you.\n\nCiao,\nThomas\n\n-- \nThomas Pundt <[email protected]> ---- http://rp-online.de/ ----\n",
"msg_date": "Tue, 21 Mar 2006 11:58:43 +0100",
"msg_from": "Thomas Pundt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Feromance"
},
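Those sections cover logging and the run-time statistics collector; as a sketch, the 8.1-era statistics settings and the views that expose the collected numbers look roughly like this (verify the names against the docs for your version):

# postgresql.conf
stats_start_collector = on
stats_command_string  = on   # needed to see the current query in pg_stat_activity
stats_block_level     = on
stats_row_level       = on

-- then, per-backend activity and per-table usage:
SELECT * FROM pg_stat_activity;
SELECT * FROM pg_stat_user_tables ORDER BY seq_scan DESC;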
{
"msg_contents": "I find another java program for monitory application query: \nhttp://www.p6spy.com/\nwith interface\nhttp://www.jahia.net/jahia/page597.html\n\nThomas Pundt wrote:\n> Hi,\n> \n> On Tuesday 21 March 2006 09:25, Marco Furetto wrote:\n> | I'm managing the db of a \"Content Management environment\" and I'm\n> | searching for a \"Query analyzer\" to improve performance because i don't\n> | know how many and what type of queries are executing on the system (for\n> | the \"where and join\" block).\n> \n> as a first step, I'd enable query duration logging; in postgresql.conf\n> I have set\n> \n> log_min_duration_statement = 3000\n> \n> this will log each query that needs more than 3 seconds to complete.\n> \n> The next step would be to \"explain analyze\" the problematic queries.\n> \n> Ciao,\n> Thomas\n> \n",
"msg_date": "Wed, 22 Mar 2006 10:18:22 +0100",
"msg_from": "Marco Furetto <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Feromance"
}
] |
[
{
"msg_contents": "\nI have to say I've been really impressed with the quality and diversity \nof tools here to increase performance for PostgreSQL. But I keep seeing \na lot of the same basic things repeated again and again. Has anyone \nlooked into a \"smart\" or auto-adjusting resource manager for postgres? \n\nConsider for instance you set it to aggressively use system resources, \nthen it would do things like notice that it needs more work mem after \nprofiling a few thousand queries and adds it for you, or that a specific \nindex or table should be moved to a different spindle and does it in the \nbackground, or that query plans keep screwing up on a particular table \nso it knows to up the amount of stastics it keeps on that table.\n\nIs this a crazy idea or something someone's already working on?\n\nOrion\n\n",
"msg_date": "Mon, 20 Mar 2006 11:12:34 -0800",
"msg_from": "Orion Henry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Auto performance tuning?"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 11:12:34AM -0800, Orion Henry wrote:\n> \n> I have to say I've been really impressed with the quality and diversity \n> of tools here to increase performance for PostgreSQL. But I keep seeing \n> a lot of the same basic things repeated again and again. Has anyone \n> looked into a \"smart\" or auto-adjusting resource manager for postgres? \n> \n> Consider for instance you set it to aggressively use system resources, \n> then it would do things like notice that it needs more work mem after \n> profiling a few thousand queries and adds it for you, or that a specific \n> index or table should be moved to a different spindle and does it in the \n> background, or that query plans keep screwing up on a particular table \n> so it knows to up the amount of stastics it keeps on that table.\n> \n> Is this a crazy idea or something someone's already working on?\n\nFeel free to submit a patch. :)\n\nSeriously, the issue here is that everyone who donates code for\nPostgreSQL already knows how to tune it, so they're unlikely to come up\nwith a tool to do it for them (which is much harder than you might\nthink).\n\nThere is the configurator project on pgFoundry, which is a start in the\nright direction. Perhaps at some point a commercial entity might come\nout with some kind of automatic tuning tool as well. But I doubt you'll\nsee anything come out of the core developers.\n\nAlso, note that you could probably write such a tool without embedding\nit into the backend, so don't let that scare you off. :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 05:35:06 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto performance tuning?"
}
] |
[
{
"msg_contents": "Hi All,\n \n I want to compare performance of postgresql database with some other database.\n \n Somebody must have done some performance testing. \n \n Can you pls. share that data (performance figures) with me? And if possible pls. share procedure also, that how you have done the same?\n \n Thanks In Advance,\n -Amit\n\nHi All, I want to compare performance of postgresql database with some other database. Somebody must have done some performance testing. Can you pls. share that data (performance figures) with me? And if possible pls. share procedure also, that how you have done the same? Thanks In Advance, -Amit",
"msg_date": "Mon, 20 Mar 2006 21:59:54 -0800 (PST)",
"msg_from": "Amit Soni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Perfomance test figures"
},
{
"msg_contents": "On Mon, Mar 20, 2006 at 09:59:54PM -0800, Amit Soni wrote:\n> Hi All,\n> \n> I want to compare performance of postgresql database with some other database.\n> \n> Somebody must have done some performance testing. \n> \n> Can you pls. share that data (performance figures) with me? And if possible pls. share procedure also, that how you have done the same?\n\nSadly, there's very little in the way of meaningful benchmarks,\nespecially ones that aren't ancient. A SQLite user recently did some\ntesting, but his workload was a single-user case, something that SQLite\nis ideally suited for (and not very interesting for anything in the\nenterprise world).\n\nI've been wanting to do a PostgreSQL vs MySQL head-to-head using DBT2\nfor some time. I even have hardware to do it on. What I haven't been\nable to find is the time. If this is something you'd be interested in\nhelping with, please let me know. Depending on how much work you wanted\nto do there could be money in it as well (Pervasive would pay for a\nperformance comparison whitepaper).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 05:38:13 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perfomance test figures"
},
{
"msg_contents": "On 3/21/06, Amit Soni <[email protected]> wrote:\n> I want to compare performance of postgresql database with some other\n> database.\n>\n> Somebody must have done some performance testing.\n>\n> Can you pls. share that data (performance figures) with me? And if possibleu\n> pls. share procedure also, that how you have done the same?\n\nUnfortunately, most database tests are synthetic and not very helpful.\n Compounding the problem is that the 'best' way to use the database\ndiffers between platforms (case in point: with postgresql you want to\nuse stored procedures for simple qeries, and with mysql you don't want\nto use them).\n\nThere are a couple of public benchmarks out there...you could try\nhitting one of them. But this is no substitute for developing\nsimulations of your workload and doing your own in-house benchmarks.\n\nis there a particular reason for wanting to compare various sql databases?\n\nmerlin\n",
"msg_date": "Tue, 21 Mar 2006 09:03:20 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Perfomance test figures"
}
] |
[
{
"msg_contents": "Currently, it appears that SELECT * INTO new_table FROM old_table logs\neach page as it's written to WAL. Is this actually needed? Couldn't the\ndatabase simply log that the SELECT ... INTO statement was executed\ninstead? Doing so would likely result in a large performance improvement\nin most installs. Is there no provision for writing anything but data\npage changes (or whole pages) to WAL?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 21 Mar 2006 06:22:11 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "On Tue, 2006-03-21 at 06:22 -0600, Jim C. Nasby wrote:\n> Currently, it appears that SELECT * INTO new_table FROM old_table logs\n> each page as it's written to WAL. Is this actually needed? Couldn't the\n> database simply log that the SELECT ... INTO statement was executed\n> instead? Doing so would likely result in a large performance improvement\n> in most installs. Is there no provision for writing anything but data\n> page changes (or whole pages) to WAL?\n\nAFAIK it takes the same code path as CREATE TABLE AS SELECT, which\nalready does exactly what you suggest (except when using PITR).\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 21 Mar 2006 20:33:50 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "\n\"Simon Riggs\" <[email protected]> wrote\n> On Tue, 2006-03-21 at 06:22 -0600, Jim C. Nasby wrote:\n> > Currently, it appears that SELECT * INTO new_table FROM old_table logs\n> > each page as it's written to WAL. Is this actually needed? Couldn't the\n> > database simply log that the SELECT ... INTO statement was executed\n> > instead? Doing so would likely result in a large performance improvement\n> > in most installs. Is there no provision for writing anything but data\n> > page changes (or whole pages) to WAL?\n>\n> AFAIK it takes the same code path as CREATE TABLE AS SELECT, which\n> already does exactly what you suggest (except when using PITR).\n>\n\nAs I read, they did take the same code path, but did they \"simply log that\nthe SELECT ... INTO statement was executed\"? If so, how can we rely on the\nunreliable content of the old_table to do recovery?\n\nRegards,\nQingqing\n\n\n",
"msg_date": "Wed, 22 Mar 2006 14:20:39 +0800",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 08:33:50PM +0000, Simon Riggs wrote:\n> On Tue, 2006-03-21 at 06:22 -0600, Jim C. Nasby wrote:\n> > Currently, it appears that SELECT * INTO new_table FROM old_table logs\n> > each page as it's written to WAL. Is this actually needed? Couldn't the\n> > database simply log that the SELECT ... INTO statement was executed\n> > instead? Doing so would likely result in a large performance improvement\n> > in most installs. Is there no provision for writing anything but data\n> > page changes (or whole pages) to WAL?\n> \n> AFAIK it takes the same code path as CREATE TABLE AS SELECT, which\n> already does exactly what you suggest (except when using PITR).\n\nOk, I saw disk activity on the base directory and assumed it was pg_xlog\nstuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore\ndefault_tablepsace and create the new tables in the base directory. I'm\nguessing that's a bug... (this is on 8.1.2, btw).\n\nAlso, why do we log rows for CTAS/SELECT INTO when PITR is in use for\nsimple SELECTs (ones that don't call non-deterministic functions)? The\ndata should alread be available AFAICS...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 22 Mar 2006 06:47:32 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
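A sketch of the behaviour being reported and one possible workaround on 8.1 (the tablespace and table names are hypothetical):

SET default_tablespace = fast_disk;

-- on 8.1 this lands in the database's default location, not fast_disk:
SELECT * INTO new_table FROM old_table;

-- spelling it out in two steps honours the tablespace:
CREATE TABLE new_table2 (LIKE old_table) TABLESPACE fast_disk;
INSERT INTO new_table2 SELECT * FROM old_table;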
{
"msg_contents": "On Wed, 2006-03-22 at 06:47 -0600, Jim C. Nasby wrote:\n\n> Also, why do we log rows for CTAS/SELECT INTO when PITR is in use for\n> simple SELECTs (ones that don't call non-deterministic functions)? The\n> data should alread be available AFAICS...\n\nNot sure what you're asking... SELECTs don't produce WAL.\n\nPITR wants all changes. Without PITR we can optimise certain logging\nactions.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 22 Mar 2006 13:08:34 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "On Wed, Mar 22, 2006 at 01:08:34PM +0000, Simon Riggs wrote:\n> On Wed, 2006-03-22 at 06:47 -0600, Jim C. Nasby wrote:\n> \n> > Also, why do we log rows for CTAS/SELECT INTO when PITR is in use for\n> > simple SELECTs (ones that don't call non-deterministic functions)? The\n> > data should alread be available AFAICS...\n> \n> Not sure what you're asking... SELECTs don't produce WAL.\n\nYes, there'd have to be some special kind of WAL entry that specifies\nwhat select statement was used in CTAS.\n\n> PITR wants all changes. Without PITR we can optimise certain logging\n> actions.\n\nThe only change here is that we're creating a new table based on the\nresults of a SELECT. If that SELECT doesn't use anything that's\nnon-deterministic, then the machine doing the recovery should already\nhave all the data it needs, provided that we log the SELECT that was\nused in the CTAS.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 22 Mar 2006 07:19:10 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n>> PITR wants all changes. Without PITR we can optimise certain logging\n>> actions.\n\n> The only change here is that we're creating a new table based on the\n> results of a SELECT. If that SELECT doesn't use anything that's\n> non-deterministic, then the machine doing the recovery should already\n> have all the data it needs, provided that we log the SELECT that was\n> used in the CTAS.\n\nThis is based on a fundamental misconception about the way PITR\nlog-shipping works. We log actions at the physical level (put this\ntuple here), not the logical here's-the-statement-we-executed level.\nThe two approaches cannot mix, because as soon as there's any physical\ndiscrepancy at all, physical-level actions would be incorrectly applied\nto the slave database.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Mar 2006 10:06:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command "
},
{
"msg_contents": "On Wed, Mar 22, 2006 at 10:06:05AM -0500, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> >> PITR wants all changes. Without PITR we can optimize certain logging\n> >> actions.\n> \n> > The only change here is that we're creating a new table based on the\n> > results of a SELECT. If that SELECT doesn't use anything that's\n> > non-deterministic, then the machine doing the recovery should already\n> > have all the data it needs, provided that we log the SELECT that was\n> > used in the CTAS.\n> \n> This is based on a fundamental misconception about the way PITR\n> log-shipping works. We log actions at the physical level (put this\n> tuple here), not the logical here's-the-statement-we-executed level.\n> The two approaches cannot mix, because as soon as there's any physical\n> discrepancy at all, physical-level actions would be incorrectly applied\n> to the slave database.\n\nOh, so in other words, SELECT * INTO temp FROM table is inherently\nnon-deterministic at the physical level, so the only way to be able to\nallow PITR to work is to duplicate all the physical changes. Darn.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 22 Mar 2006 09:14:34 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Oh, so in other words, SELECT * INTO temp FROM table is inherently\n> non-deterministic at the physical level, so the only way to be able to\n> allow PITR to work is to duplicate all the physical changes. Darn.\n\nWell, lemme put it this way: I'm not prepared to require that PG be\ndeterministic at the physical level. One obvious source of\nnon-determinancy is the FSM, which is likely to hand out different free\nspace to different transactions depending on what else is going on at\nthe same time. There are others, such as deliberately random\ntie-breaking during btree index insertion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Mar 2006 10:35:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command "
},
{
"msg_contents": "On Wed, 2006-03-22 at 16:35, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Oh, so in other words, SELECT * INTO temp FROM table is inherently\n> > non-deterministic at the physical level, so the only way to be able to\n> > allow PITR to work is to duplicate all the physical changes. Darn.\n> \n> Well, lemme put it this way: I'm not prepared to require that PG be\n> deterministic at the physical level. One obvious source of\n> non-determinancy is the FSM, which is likely to hand out different free\n> space to different transactions depending on what else is going on at\n> the same time. There are others, such as deliberately random\n> tie-breaking during btree index insertion.\n\nWhile you're at talking about WAL and PITR... I see from the aboce\ndiscussion that PITR is already demanding special handling in the code\n(I hope I got this one right, as the following are based on this).\n\nWhat if the PITR logging would be disconnected from the WAL logging\ncompletely ?\n\nWhat I mean is to introduce a WAL subscription mechanism, which\nbasically means some incoming connections where we stream the log\nrecords. We don't need to write them to disk at all in the normal case,\nI guess usually PITR will store the records on some other machine so it\nmeans network, not disk. And it doesn't need to be done synchronously,\nit can lag behind the running transactions, and we can do it in batches\nof WAL records.\n\nIt also would mean that the local WAL does not need to log the things\nwhich are only needed for the PITR... that would likely mean some spared\nWAL disk activity. Of course it also would mean that the local WAL and\nPITR WAL are not the same, but that is not an issue I guess.\n\nIt would also permit immediate recycling of the WAL files if the current\narchiving style is not used.\n\nThe drawbacks I can see (please add yours):\n1) the need for the subscription management code with the added\ncomplexity it implies;\n2) problems if the WAL stream lags too much behind;\n3) problems if the subscribed client's connection is interrupted;\n\nNr. 2 could be solved by saving the PITR WAL separately if the lag grows\nover a threshold, and issue a warning. This could still be acceptable,\nas the writing doesn't have to be synchronous and can be made in\nrelatively large blocks.\nThere could be a second bigger lag threshold which completely cancels\nthe subscription. All these thresholds should be configurable, as it\ndepends on the application what's more important, to have the standby\navailable all the time or have the primary faster if loaded...\n\nNr. 3. can be solved by either canceling the subscription on connection\ndrop, or by allowing a certain amount of time after which the\nsubscription is canceled. The client can reconnect before this timeout\nexpires. In the meantime the primary can store the PITR WAL on disk as\nmentioned above...\n\nCheers,\nCsaba.\n\n\n",
"msg_date": "Wed, 22 Mar 2006 17:19:58 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "\n\nOn Wed, 22 Mar 2006, Jim C. Nasby wrote:\n\n> Ok, I saw disk activity on the base directory and assumed it was pg_xlog\n> stuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore\n> default_tablepsace and create the new tables in the base directory. I'm\n> guessing that's a bug... (this is on 8.1.2, btw).\n\nThis has been fixed in CVS HEAD as part of a patch to allow additional \noptions to CREATE TABLE AS.\n\nhttp://archives.postgresql.org/pgsql-patches/2006-02/msg00211.php\n\nKris Jurka\n\n",
"msg_date": "Wed, 22 Mar 2006 14:37:28 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "On Wed, Mar 22, 2006 at 02:37:28PM -0500, Kris Jurka wrote:\n> \n> \n> On Wed, 22 Mar 2006, Jim C. Nasby wrote:\n> \n> >Ok, I saw disk activity on the base directory and assumed it was pg_xlog\n> >stuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore\n> >default_tablepsace and create the new tables in the base directory. I'm\n> >guessing that's a bug... (this is on 8.1.2, btw).\n> \n> This has been fixed in CVS HEAD as part of a patch to allow additional \n> options to CREATE TABLE AS.\n> \n> http://archives.postgresql.org/pgsql-patches/2006-02/msg00211.php\n\nI'll argue that the current behavior is still a bug and should be fixed.\nWould it be difficult to patch 8.1 (and 8.0 if there were tablespaces\nthen...) to honor default_tablespace?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 04:05:14 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "On Wed, Mar 22, 2006 at 02:20:39PM +0800, Qingqing Zhou wrote:\n> \n> \"Simon Riggs\" <[email protected]> wrote\n> > On Tue, 2006-03-21 at 06:22 -0600, Jim C. Nasby wrote:\n> > > Currently, it appears that SELECT * INTO new_table FROM old_table logs\n> > > each page as it's written to WAL. Is this actually needed? Couldn't the\n> > > database simply log that the SELECT ... INTO statement was executed\n> > > instead? Doing so would likely result in a large performance improvement\n> > > in most installs. Is there no provision for writing anything but data\n> > > page changes (or whole pages) to WAL?\n> >\n> > AFAIK it takes the same code path as CREATE TABLE AS SELECT, which\n> > already does exactly what you suggest (except when using PITR).\n> >\n> \n> As I read, they did take the same code path, but did they \"simply log that\n> the SELECT ... INTO statement was executed\"? If so, how can we rely on the\n> unreliable content of the old_table to do recovery?\n\nWhy would the content of the old_table be unreliable? If we've replayed\nlogs up to the point of the CTAS then any data that would be visible to\nthe CTAS should be fine, no?\n\nThough, the way Tom put it in one of his replies it sounds like WAL\ndoesn't do any kind of statement logging, only data logging. If that's\nthe case I'm not sure that the CTAS would actually get replayed. But I\nsuspect I'm just misunderstanding...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 04:07:31 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "Jim C. Nasby wrote:\n\n> Why would the content of the old_table be unreliable? If we've replayed\n> logs up to the point of the CTAS then any data that would be visible to\n> the CTAS should be fine, no?\n> \n> Though, the way Tom put it in one of his replies it sounds like WAL\n> doesn't do any kind of statement logging, only data logging. If that's\n> the case I'm not sure that the CTAS would actually get replayed. But I\n> suspect I'm just misunderstanding...\n\nThe CTAS doesn't get logged (nor replayed obviously). What happens is\nthat the involved files are fsync'ed before transaction commit, AFAIR.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Fri, 24 Mar 2006 08:39:02 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "On Fri, Mar 24, 2006 at 08:39:02AM -0400, Alvaro Herrera wrote:\n> Jim C. Nasby wrote:\n> \n> > Why would the content of the old_table be unreliable? If we've replayed\n> > logs up to the point of the CTAS then any data that would be visible to\n> > the CTAS should be fine, no?\n> > \n> > Though, the way Tom put it in one of his replies it sounds like WAL\n> > doesn't do any kind of statement logging, only data logging. If that's\n> > the case I'm not sure that the CTAS would actually get replayed. But I\n> > suspect I'm just misunderstanding...\n> \n> The CTAS doesn't get logged (nor replayed obviously). What happens is\n> that the involved files are fsync'ed before transaction commit, AFAIR.\n\nAhh, yes, that sounds right. Might be a nice gain to be had if there was\nsome way to log the statement, but I suspect getting WAL to support that\nwould be extremely non-trivial.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 07:01:21 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Fri, Mar 24, 2006 at 08:39:02AM -0400, Alvaro Herrera wrote:\n> > Jim C. Nasby wrote:\n> > \n> > > Why would the content of the old_table be unreliable? If we've replayed\n> > > logs up to the point of the CTAS then any data that would be visible to\n> > > the CTAS should be fine, no?\n> > > \n> > > Though, the way Tom put it in one of his replies it sounds like WAL\n> > > doesn't do any kind of statement logging, only data logging. If that's\n> > > the case I'm not sure that the CTAS would actually get replayed. But I\n> > > suspect I'm just misunderstanding...\n> > \n> > The CTAS doesn't get logged (nor replayed obviously). What happens is\n> > that the involved files are fsync'ed before transaction commit, AFAIR.\n> \n> Ahh, yes, that sounds right. Might be a nice gain to be had if there was\n> some way to log the statement, but I suspect getting WAL to support that\n> would be extremely non-trivial.\n\nNone at all, at least in the current incarnation, I think, because said\nquery execution is dependent on the contents of the FSM, which is itself\ndependent on the timing of VACUUM and other stuff. Such an action,\nrunning with a different FSM content, can very trivially cause data\ncorruption.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 24 Mar 2006 09:47:20 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "On Fri, Mar 24, 2006 at 09:47:20AM -0400, Alvaro Herrera wrote:\n> Jim C. Nasby wrote:\n> > On Fri, Mar 24, 2006 at 08:39:02AM -0400, Alvaro Herrera wrote:\n> > > Jim C. Nasby wrote:\n> > > \n> > > > Why would the content of the old_table be unreliable? If we've replayed\n> > > > logs up to the point of the CTAS then any data that would be visible to\n> > > > the CTAS should be fine, no?\n> > > > \n> > > > Though, the way Tom put it in one of his replies it sounds like WAL\n> > > > doesn't do any kind of statement logging, only data logging. If that's\n> > > > the case I'm not sure that the CTAS would actually get replayed. But I\n> > > > suspect I'm just misunderstanding...\n> > > \n> > > The CTAS doesn't get logged (nor replayed obviously). What happens is\n> > > that the involved files are fsync'ed before transaction commit, AFAIR.\n> > \n> > Ahh, yes, that sounds right. Might be a nice gain to be had if there was\n> > some way to log the statement, but I suspect getting WAL to support that\n> > would be extremely non-trivial.\n> \n> None at all, at least in the current incarnation, I think, because said\n> query execution is dependent on the contents of the FSM, which is itself\n> dependent on the timing of VACUUM and other stuff. Such an action,\n> running with a different FSM content, can very trivially cause data\n> corruption.\n\nOh, duh, because subsiquent operations will depend on the heap being in\na very specific state. Oh well.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 07:59:46 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "On Fri, 24 Mar 2006, Jim C. Nasby wrote:\n\n> On Wed, Mar 22, 2006 at 02:37:28PM -0500, Kris Jurka wrote:\n>>\n>> On Wed, 22 Mar 2006, Jim C. Nasby wrote:\n>>\n>>> Ok, I saw disk activity on the base directory and assumed it was pg_xlog\n>>> stuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore\n>>> default_tablepsace and create the new tables in the base directory. I'm\n>>> guessing that's a bug... (this is on 8.1.2, btw).\n>>\n>> This has been fixed in CVS HEAD as part of a patch to allow additional\n>> options to CREATE TABLE AS.\n>>\n>> http://archives.postgresql.org/pgsql-patches/2006-02/msg00211.php\n>\n> I'll argue that the current behavior is still a bug and should be fixed.\n> Would it be difficult to patch 8.1 (and 8.0 if there were tablespaces\n> then...) to honor default_tablespace?\n\nHere are patches that fix this for 8.0 and 8.1.\n\nKris Jurka",
"msg_date": "Fri, 24 Mar 2006 13:09:26 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "On Fri, 2006-04-21 at 19:56 -0400, Bruce Momjian wrote:\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n> \n> \thttp://momjian.postgresql.org/cgi-bin/pgpatches\n> \n> It will be applied as soon as one of the PostgreSQL committers reviews\n> and approves it.\n\nThis patch should now be referred to as \n\tallow CREATE TABLE AS/SELECT INTO to use default_tablespace\nor something similar.\n\nThe name of the original thread no longer bears any resemblance to the\nintention of this patch as submitted in its final form.\n\nI've no objection to the patch, which seems to fill a functional\ngap/bug.\n\n-- \n Simon Riggs\n EnterpriseDB http://www.enterprisedb.com/\n\n",
"msg_date": "Sat, 22 Apr 2006 00:11:09 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches\n\nIt will be applied as soon as one of the PostgreSQL committers reviews\nand approves it.\n\n---------------------------------------------------------------------------\n\n\nKris Jurka wrote:\n> \n> \n> On Fri, 24 Mar 2006, Jim C. Nasby wrote:\n> \n> > On Wed, Mar 22, 2006 at 02:37:28PM -0500, Kris Jurka wrote:\n> >>\n> >> On Wed, 22 Mar 2006, Jim C. Nasby wrote:\n> >>\n> >>> Ok, I saw disk activity on the base directory and assumed it was pg_xlog\n> >>> stuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore\n> >>> default_tablepsace and create the new tables in the base directory. I'm\n> >>> guessing that's a bug... (this is on 8.1.2, btw).\n> >>\n> >> This has been fixed in CVS HEAD as part of a patch to allow additional\n> >> options to CREATE TABLE AS.\n> >>\n> >> http://archives.postgresql.org/pgsql-patches/2006-02/msg00211.php\n> >\n> > I'll argue that the current behavior is still a bug and should be fixed.\n> > Would it be difficult to patch 8.1 (and 8.0 if there were tablespaces\n> > then...) to honor default_tablespace?\n> \n> Here are patches that fix this for 8.0 and 8.1.\n> \n> Kris Jurka\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Fri, 21 Apr 2006 19:56:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL logging of SELECT ... INTO command"
},
{
"msg_contents": "\nBackpatched to 8.0.X and 8.1.X.\n\n---------------------------------------------------------------------------\n\nKris Jurka wrote:\n> \n> \n> On Fri, 24 Mar 2006, Jim C. Nasby wrote:\n> \n> > On Wed, Mar 22, 2006 at 02:37:28PM -0500, Kris Jurka wrote:\n> >>\n> >> On Wed, 22 Mar 2006, Jim C. Nasby wrote:\n> >>\n> >>> Ok, I saw disk activity on the base directory and assumed it was pg_xlog\n> >>> stuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore\n> >>> default_tablepsace and create the new tables in the base directory. I'm\n> >>> guessing that's a bug... (this is on 8.1.2, btw).\n> >>\n> >> This has been fixed in CVS HEAD as part of a patch to allow additional\n> >> options to CREATE TABLE AS.\n> >>\n> >> http://archives.postgresql.org/pgsql-patches/2006-02/msg00211.php\n> >\n> > I'll argue that the current behavior is still a bug and should be fixed.\n> > Would it be difficult to patch 8.1 (and 8.0 if there were tablespaces\n> > then...) to honor default_tablespace?\n> \n> Here are patches that fix this for 8.0 and 8.1.\n> \n> Kris Jurka\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian http://candle.pha.pa.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Wed, 26 Apr 2006 19:02:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] WAL logging of SELECT ... INTO command"
}
] |
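A quick SQL sketch of the behaviour this thread settles on: once the fix is in (CVS HEAD, later backpatched to 8.0.X and 8.1.X), CREATE TABLE AS and SELECT INTO should honor default_tablespace. The tablespace and table names below are illustrative, not taken from the patch or the posts:

-- assumes a tablespace was created earlier, e.g. CREATE TABLESPACE ts_data LOCATION '/some/dir';
SET default_tablespace = ts_data;

CREATE TABLE orders_copy AS SELECT * FROM orders;   -- should land in ts_data once the fix is applied
-- SELECT * INTO orders_copy2 FROM orders;          -- likewise

-- verify where the new table was actually placed
SELECT tablename, tablespace FROM pg_tables WHERE tablename = 'orders_copy';

On an unpatched 8.1.2 the tablespace column should come back empty (the database default), which is the behaviour reported at the start of the thread.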
[
{
"msg_contents": "Hi all,\n I'm having a very strange performance \nproblems on a fresh install of postgres 8.1.3\nI've just installed it with default option and \n--enable-thread-safety without tweaking config files yet.\n\nThe import of a small SQL files into the DB (6 \ntables with 166.500 total records, INSERT syntax)\ntook me more than 18 minutes as shown below \n(output of \"time ./psql benchmarks < dump.sql\")\n\nreal 18m33.062s\nuser 0m10.386s\nsys 0m7.707s\n\nThe server is an\n- Intel(R) Xeon(TM) CPU 3.60GHz - 1MB L2\n- 1 GB RAM\n- 2x HDD SCSI U320 RAID 1 Hardware (HP 6i controller)\n\nThe same import, tried on an another low-end \nserver with a fresh install of postgres 8.1.3 gave me:\n\nreal 2m4.497s\nuser 0m6.234s\nsys 0m6.148s\n\nDuring the test, the postmaster on the first \nserver (the slow one) uses only a 4% CPU, while \non the second one it reaches 50% cpu usage\n\nI was thinking on a IO bandwidth saturation, but \n\"vmstat 1\" during the import shows me small values for io/bo column\n\nSearching the archive of the ml I found a Disk IO \ntest I suddenly ran on the slower server as follow\n\n# time bash -c \"dd if=/dev/zero of=bigfile bs=8k \ncount=200000 && sync\" (write test)\n# time dd if=bigfile of=/dev/null bs=8k (read test)\n\noutput of \"vmstat 1\" during the above test follows:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n\nWrite test\n 0 11 540 2344 12152 863456 4 0 340 27848 1709 695 6 53 0 41\n 0 11 540 2344 12180 863516 4 0 44 45500 1623 386 0 2 0 98\n 0 5 540 3168 12200 862520 0 0 264 44888 1573 315 1 2 0 97\n\nRead test\n 0 2 440 2328 6076 849120 0 0 94552 0 1550 624 3 10 0 87\n 0 2 440 2248 6104 848936 0 0 94508 0 1567 715 7 10 0 83\n 0 3 440 2824 6148 847828 0 0 \n102540 448 1511 675 14 11 0 75\n\nValues of io/(bi-bo) during the disk test are a \nlot higher than during the import operation....\n\nI really have no more clues .... :(\n\nDo you have any ideas ?\n\nTnx in advance\n\nRegards\n\n\nEdoardo Serra\nWeBRainstorm S.r.l.\nIT, Internet services & consulting\nVia Pio Foà 83/C\n10126 Torino\nTel: +39 011 6966881\n\n",
"msg_date": "Tue, 21 Mar 2006 13:46:16 +0100",
"msg_from": "Edoardo Serra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postmaster using only 4-5% CPU"
},
{
"msg_contents": "The low end server by chance doesn't have an IDE disk that lies about\nwrite completion, or a battery backed disk controller? Try disabling\nfsync on the new server to get comparable figures.\n\nMarkus Bertheau\n\n2006/3/21, Edoardo Serra <[email protected]>:\n> Hi all,\n> I'm having a very strange performance\n> problems on a fresh install of postgres 8.1.3\n> I've just installed it with default option and\n> --enable-thread-safety without tweaking config files yet.\n>\n> The import of a small SQL files into the DB (6\n> tables with 166.500 total records, INSERT syntax)\n> took me more than 18 minutes as shown below\n> (output of \"time ./psql benchmarks < dump.sql\")\n>\n> real 18m33.062s\n> user 0m10.386s\n> sys 0m7.707s\n>\n> The server is an\n> - Intel(R) Xeon(TM) CPU 3.60GHz - 1MB L2\n> - 1 GB RAM\n> - 2x HDD SCSI U320 RAID 1 Hardware (HP 6i controller)\n>\n> The same import, tried on an another low-end\n> server with a fresh install of postgres 8.1.3 gave me:\n>\n> real 2m4.497s\n> user 0m6.234s\n> sys 0m6.148s\n>\n> During the test, the postmaster on the first\n> server (the slow one) uses only a 4% CPU, while\n> on the second one it reaches 50% cpu usage\n>\n> I was thinking on a IO bandwidth saturation, but\n> \"vmstat 1\" during the import shows me small values for io/bo column\n>\n> Searching the archive of the ml I found a Disk IO\n> test I suddenly ran on the slower server as follow\n>\n> # time bash -c \"dd if=/dev/zero of=bigfile bs=8k\n> count=200000 && sync\" (write test)\n> # time dd if=bigfile of=/dev/null bs=8k (read test)\n>\n> output of \"vmstat 1\" during the above test follows:\n>\n> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n>\n> Write test\n> 0 11 540 2344 12152 863456 4 0 340 27848 1709 695 6 53 0 41\n> 0 11 540 2344 12180 863516 4 0 44 45500 1623 386 0 2 0 98\n> 0 5 540 3168 12200 862520 0 0 264 44888 1573 315 1 2 0 97\n>\n> Read test\n> 0 2 440 2328 6076 849120 0 0 94552 0 1550 624 3 10 0 87\n> 0 2 440 2248 6104 848936 0 0 94508 0 1567 715 7 10 0 83\n> 0 3 440 2824 6148 847828 0 0\n> 102540 448 1511 675 14 11 0 75\n>\n> Values of io/(bi-bo) during the disk test are a\n> lot higher than during the import operation....\n>\n> I really have no more clues .... :(\n>\n> Do you have any ideas ?\n>\n> Tnx in advance\n>\n> Regards\n>\n>\n> Edoardo Serra\n> WeBRainstorm S.r.l.\n> IT, Internet services & consulting\n> Via Pio Foà 83/C\n> 10126 Torino\n> Tel: +39 011 6966881\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n",
"msg_date": "Tue, 21 Mar 2006 20:10:56 +0600",
"msg_from": "\"Markus Bertheau\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster using only 4-5% CPU"
},
{
"msg_contents": "Edoardo Serra <osdevel 'at' webrainstorm.it> writes:\n\n> Hi all,\n> I'm having a very strange performance problems on a fresh\n> install of postgres 8.1.3\n> I've just installed it with default option and --enable-thread-safety\n> without tweaking config files yet.\n> \n> The import of a small SQL files into the DB (6 tables with 166.500\n> total records, INSERT syntax)\n> took me more than 18 minutes as shown below (output of \"time ./psql\n> benchmarks < dump.sql\")\n> \n> real 18m33.062s\n> user 0m10.386s\n> sys 0m7.707s\n> \n> The server is an\n> - Intel(R) Xeon(TM) CPU 3.60GHz - 1MB L2\n> - 1 GB RAM\n> - 2x HDD SCSI U320 RAID 1 Hardware (HP 6i controller)\n\nI have seen similar very low performance for INSERTs, although\nusing SCSI 320 disk, controlled by LSI Logic 53C1030 (using\nFusion MPT SCSI Host driver 3.01.18 on Linux 2.6.11). Something\nlike tens of INSERTs per second into a small table, no more.\n\"iostat\" reports very large figures in the \"await\" field compared\nto other servers using raid1 controllers, that's my best guess,\nbut I was unable to find why and how to fix (and the vendor has\nbeen very helpless until now). I'm wondering if we don't have an\nissue with the driver but have no more clue.\n\n-- \nGuillaume Cottenceau\n",
"msg_date": "21 Mar 2006 15:34:48 +0100",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster using only 4-5% CPU"
},
{
"msg_contents": "On Tue, 2006-03-21 at 06:46, Edoardo Serra wrote:\n> Hi all,\n> I'm having a very strange performance \n> problems on a fresh install of postgres 8.1.3\n> I've just installed it with default option and \n> --enable-thread-safety without tweaking config files yet.\n> \n> The import of a small SQL files into the DB (6 \n> tables with 166.500 total records, INSERT syntax)\n> took me more than 18 minutes as shown below \n> (output of \"time ./psql benchmarks < dump.sql\")\n> \n> real 18m33.062s\n> user 0m10.386s\n> sys 0m7.707s\n> \n> The server is an\n> - Intel(R) Xeon(TM) CPU 3.60GHz - 1MB L2\n> - 1 GB RAM\n> - 2x HDD SCSI U320 RAID 1 Hardware (HP 6i controller)\n> \n> The same import, tried on an another low-end \n> server with a fresh install of postgres 8.1.3 gave me:\n> \n> real 2m4.497s\n> user 0m6.234s\n> sys 0m6.148s\n\nHere's what's happening. On the \"fast\" machine, you are almost\ncertainly using IDE drives. PostgreSQL uses a system call called\n\"fsync\" when writing data out. It writes the data to the write ahead\nlogs, calls fsync, and waits for it to return.\n\nfsync() tells the drive to flush its write buffers to disk and tell the\nOS when it has completed this.\n\nSCSI drives dutifully write out those buffers, and then, only after\nthey're written, tell the OS that yes, the data is written out. Since\nSCSI drives can do other things while this is going on, by using command\nqueueing, this is no great harm to performance, since the drive and OS\ncan transfer other data into / out of buffers during this fsync\noperation.\n\nMeanwhile, back in the jungle... The machine with IDE drives operates\ndifferently. Most, if not all, IDE drives, when told by the OS to\nfsync() tell the OS immediately that the fsync() call has completed, and\nthe data is written to the drive. Shortly thereafter, the drive\nactually commences to write the data out. When it gets a chance.\n\nThe reason IDE drives do this is that until very recently, the IDE\ninterface allowed only one operation at a time to be \"in flight\" on an\ninterface / drive.\n\nSo, if the IDE drive really did write the data out, then report that it\nwas done, it would be much slower than the SCSI drive listed above,\nbecause ALL operations on it would stop, waiting in line, for the caches\nto flush to the platters.\n\nFor PostgreSQL, the way IDE drives operate is dangerous. Write data\nout, call fsync(), get an immediate return, mark the data as committed,\nmove on the next operation, operator trips over power cord / power\nconditioner explodes, power supply dies, brown out causes the machine to\nreboot, et. al., and when the machine comes up, PostgreSQL politely\ninforms you that your database is corrupt, and you come to the\npgsql-general group asking how to get your database back online. Very\nbad.\n\nWith SCSI drives, the same scenario results in a machine that comes\nright back up and keeps on trucking.\n\nSo, what's happening to you is that on the machine with SCSI drives,\nPostgreSQL, the OS, and the drives are operating properly, making sure\nyour data is secure, and, unfortunately, taking its sweet time doing\nit. Given that your .sql file is probably individual inserts without a\ntransaction, this is normal.\n\nTry wrapping the inserts in the sql file in begin; / commit; statements,\nlike so:\n\nbegin;\ninsert into table ...\n(100,000 inserts here)\ninsert into table ...\ncommit;\n\nand it should fly. And, if there's a single bad row, the whole import\nrolls back. 
Which means you don't have to figure out where the import\nstopped or which rows did or didn't take. You just fix the one or two\nbad rows, and run the whole import again.\n\nWhen a good friend of mine first started using PostgreSQL, he was a\ntotal MySQL bigot. He was importing a 10,000 row dataset, and made a\nsmartassed remark after 10 minutes how it would have imported in minutes\non MySQL. It was a test database, so I had him stop the import, delete\nall the imported rows, and wrap the whole import inside begin; and\ncommit; \n\nThe import took about 20 seconds or so. \n\nNow, for the interesting test. Run the import on both machines, with\nthe begin; commit; pairs around it. Halfway through the import, pull\nthe power cord, and see which one comes back up. Don't do this to\nservers with data you like, only test machines, obviously. For an even\nmore interesting test, do this with MySQL, Oracle, DB2, etc...\n\nI've been amazed that the looks of horror I get for suggesting such a\ntest are about the same from an Oracle DBA as they are from a MySQL\nDBA. :)\n",
"msg_date": "Tue, 21 Mar 2006 11:44:40 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster using only 4-5% CPU"
},
{
"msg_contents": "At 18.44 21/03/2006, Scott Marlowe wrote:\n>Here's what's happening. On the \"fast\" machine, you are almost\n>certainly using IDE drives.\n\nOh yes, the fast machine has IDE drives, you got it ;)\n\n>Meanwhile, back in the jungle... The machine with IDE drives operates\n>differently. Most, if not all, IDE drives, when told by the OS to\n>fsync() tell the OS immediately that the fsync() call has completed, and\n>the data is written to the drive. Shortly thereafter, the drive\n>actually commences to write the data out. When it gets a chance.\n\nI really didn't know this behaviour of IDE drives.\nI was stracing the postmaster while investigating the problem and noticed\nmany fsync syscalls (one after each INSERT).\n\nI was investigating on it but I didn't explain me why SCSI was slower.\nYou helped me a lot ;) tnx\n\n>For PostgreSQL, the way IDE drives operate is dangerous. Write data\n>out, call fsync(), get an immediate return, mark the data as committed,\n>move on the next operation, operator trips over power cord / power\n>conditioner explodes, power supply dies, brown out causes the machine to\n>reboot, et. al., and when the machine comes up, PostgreSQL politely\n>informs you that your database is corrupt, and you come to the\n>pgsql-general group asking how to get your database back online. Very\n>bad.\n\nYes, it sounds very bad... what about SATA drives ?\nI heard about command queueing in SATA but I don't know if the kernel \nhandles it properly\n\n>Try wrapping the inserts in the sql file in begin; / commit; statements,\n>like so:\n>\n>begin;\n>insert into table ...\n>(100,000 inserts here)\n>insert into table ...\n>commit;\n>\n>and it should fly.\n\nOh, yes with the insert wrapped in a transaction the import time is as follows:\n- SCSI: 35 secs\n- IDE: 50 secs\n\n>When a good friend of mine first started using PostgreSQL, he was a\n>total MySQL bigot. He was importing a 10,000 row dataset, and made a\n>smartassed remark after 10 minutes how it would have imported in minutes\n>on MySQL. It was a test database, so I had him stop the import, delete\n>all the imported rows, and wrap the whole import inside begin; and\n>commit;\n>\n>The import took about 20 seconds or so.\n\n;)\n\n>Now, for the interesting test. Run the import on both machines, with\n>the begin; commit; pairs around it. Halfway through the import, pull\n>the power cord, and see which one comes back up. Don't do this to\n>servers with data you like, only test machines, obviously. For an even\n>more interesting test, do this with MySQL, Oracle, DB2, etc...\n\nI will surely run a test like this ;)\n\nTnx a lot again for help\n\nRegards\n\nEdoardo Serra\n\n",
"msg_date": "Thu, 23 Mar 2006 10:14:24 +0100",
"msg_from": "Edoardo Serra <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postmaster using only 4-5% CPU"
},
{
"msg_contents": "On Thu, Mar 23, 2006 at 10:14:24AM +0100, Edoardo Serra wrote:\n> >Now, for the interesting test. Run the import on both machines, with\n> >the begin; commit; pairs around it. Halfway through the import, pull\n> >the power cord, and see which one comes back up. Don't do this to\n> >servers with data you like, only test machines, obviously. For an even\n> >more interesting test, do this with MySQL, Oracle, DB2, etc...\n> \n> I will surely run a test like this ;)\n\nIf you do, I'd be *very* interested in the results. Pervasive would\nprobably pay for a whitepaper about this, btw (see\nhttp://www.pervasivepostgres.com/postgresql/partners_in_publishing.asp).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 04:16:54 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster using only 4-5% CPU"
},
{
"msg_contents": "On Fri, 2006-03-24 at 04:16, Jim C. Nasby wrote:\n> On Thu, Mar 23, 2006 at 10:14:24AM +0100, Edoardo Serra wrote:\n> > >Now, for the interesting test. Run the import on both machines, with\n> > >the begin; commit; pairs around it. Halfway through the import, pull\n> > >the power cord, and see which one comes back up. Don't do this to\n> > >servers with data you like, only test machines, obviously. For an even\n> > >more interesting test, do this with MySQL, Oracle, DB2, etc...\n> > \n> > I will surely run a test like this ;)\n> \n> If you do, I'd be *very* interested in the results. Pervasive would\n> probably pay for a whitepaper about this, btw (see\n> http://www.pervasivepostgres.com/postgresql/partners_in_publishing.asp).\n\nHehe. good luck with it.\n\nAt the last company I worked at I was the PostgreSQL DBA, and I could\nnot get one single Oracle, DB2, MySQL, MSSQL, Ingres, or other DBA to\nagree to that kind of test.\n\n6 months later, when all three power conditioners blew at once (amazing\nwhat a 1/4\" piece of wire can do, eh?) and we lost all power in our\nhosting center, there was one, and only one, database server that came\nback up without errors, and we know which one that was. No other\ndatabase there was up in less than 2 hours. So, I wandered the floor\nwatching the folks panic who were trying to bring their systems back\nup. \n\nAnd you know what? They still didn't want to test their systems for\nrecovery from a power loss situation.\n",
"msg_date": "Fri, 24 Mar 2006 10:42:58 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postmaster using only 4-5% CPU"
}
] |
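A minimal psql-session sketch of the batching advice above — running an INSERT-style dump inside one transaction so there is a single WAL flush at COMMIT instead of one synchronous commit per row. The file name is a placeholder, and this assumes the dump contains no BEGIN/COMMIT or VACUUM statements of its own:

BEGIN;
\i dump.sql
COMMIT;

Newer psql versions can do the same from the command line with psql --single-transaction -f dump.sql. The point of the thread stands either way: with fsync on and a drive that honestly reports write completion, it is the per-row commits that make the import slow, not PostgreSQL itself.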
[
{
"msg_contents": "Hi guys,\n\nI'm trying to figure out when Sequence Scan is better than Index Scan. I \njust want to know this because I disabled the sequence scan in \npostgresql and receive a better result. :)\n\nTwo tables.\n\nTable 1 (1 million rows )\n-----------\nid\ntext\ntable2_id\n\nTable 2 (300 thousand rows)\n----------\nid\ntext 2\n\nWhen I join these two tables I have a sequence_scan. :(\n\nThanks in advance.\n\nFernando Lujan\n",
"msg_date": "Tue, 21 Mar 2006 15:08:07 -0300",
"msg_from": "Fernando Lujan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sequence Scan vs. Index scan"
},
{
"msg_contents": "On Tue, Mar 21, 2006 at 03:08:07PM -0300, Fernando Lujan wrote:\n> I'm trying to figure out when Sequence Scan is better than Index Scan. I \n> just want to know this because I disabled the sequence scan in \n> postgresql and receive a better result. :)\n\nThat is a very broad question, and you're introducing somewhat of a false\nchoice since you're talking about joins (a join can be solved by more methods\nthan just \"sequential scan\" or not).\n\nCould you please paste the exact query you're using, with EXPLAIN ANALYZE for\nboth the case with and without sequential scans?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 21 Mar 2006 19:17:32 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequence Scan vs. Index scan"
},
{
"msg_contents": "Fernando,\n\nIf you need to read all the table for example it would be better to read \nonly the data pages instead of read data and index pages.\n\nReimer\n\n----- Original Message ----- \nFrom: \"Fernando Lujan\" <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, March 21, 2006 3:08 PM\nSubject: [PERFORM] Sequence Scan vs. Index scan\n\n\n> Hi guys,\n>\n> I'm trying to figure out when Sequence Scan is better than Index Scan. I \n> just want to know this because I disabled the sequence scan in postgresql \n> and receive a better result. :)\n>\n> Two tables.\n>\n> Table 1 (1 million rows )\n> -----------\n> id\n> text\n> table2_id\n>\n> Table 2 (300 thousand rows)\n> ----------\n> id\n> text 2\n>\n> When I join these two tables I have a sequence_scan. :(\n>\n> Thanks in advance.\n>\n> Fernando Lujan\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n",
"msg_date": "Tue, 21 Mar 2006 15:23:08 -0300",
"msg_from": "\"Reimer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequence Scan vs. Index scan"
},
{
"msg_contents": "2006/3/21, Reimer <[email protected]>:\n>\n> Fernando,\n>\n> If you need to read all the table for example it would be better to read\n> only the data pages instead of read data and index pages.\n>\n> Reimer\n>\n> ----- Original Message -----\n> From: \"Fernando Lujan\" <[email protected]>\n> To: <[email protected]>\n> Sent: Tuesday, March 21, 2006 3:08 PM\n> Subject: [PERFORM] Sequence Scan vs. Index scan\n>\n>\n> > Hi guys,\n> >\n> > I'm trying to figure out when Sequence Scan is better than Index Scan. I\n> > just want to know this because I disabled the sequence scan in\n> postgresql\n> > and receive a better result. :)\n> >\n> > Two tables.\n> >\n> > Table 1 (1 million rows )\n> > -----------\n> > id\n> > text\n> > table2_id\n> >\n> > Table 2 (300 thousand rows)\n> > ----------\n> > id\n> > text 2\n> >\n> > When I join these two tables I have a sequence_scan. :(\n> >\n> > Thanks in advance.\n> >\n> > Fernando Lujan\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: explain analyze is your friend\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nHi, I've got the same situation:\n\nENABLE_SEQSCAN ON -> 5,031 ms\nENABLE_SEQSCAN OFF -> 406 ms\n\nTables definition:\n-----------------------\n\nCREATE TABLE liquidacionesos\n(\n codigoliquidacionos serial NOT NULL,\n codigoobrasocial int4 NOT NULL,\n quincena char(1) NOT NULL,\n per_m char(2) NOT NULL,\n per_a char(4) NOT NULL,\n nombreliquidacion varchar(60) NOT NULL,\n codigotipoliquidacionos int2 NOT NULL,\n importe numeric(12,2) NOT NULL,\n conformado bool NOT NULL,\n facturada bool NOT NULL,\n codigoremito int4 NOT NULL DEFAULT 0,\n codigoprofesion int2 NOT NULL DEFAULT 0,\n matriculaprofesional int4 NOT NULL DEFAULT 0,\n letrafactura char(1) NOT NULL DEFAULT ' '::bpchar,\n numerofactura varchar(13) NOT NULL DEFAULT '0000-00000000'::character\nvarying,\n importegravado numeric(12,2) NOT NULL DEFAULT 0,\n importenogravado numeric(12,2) NOT NULL DEFAULT 0,\n importeiva numeric(12,2) NOT NULL DEFAULT 0,\n importefactura numeric(12,2) NOT NULL DEFAULT 0,\n fechahora_cga timestamp NOT NULL DEFAULT now(),\n userid varchar(20) NOT NULL DEFAULT \"current_user\"(),\n numerosecuencia int4 NOT NULL DEFAULT 0,\n CONSTRAINT liqos_pkey PRIMARY KEY (codigoliquidacionos)\n)\nWITHOUT OIDS TABLESPACE data;\nALTER TABLE liquidacionesos ALTER COLUMN codigoliquidacionos SET STATISTICS\n100;\nALTER TABLE liquidacionesos ALTER COLUMN per_a SET STATISTICS 100;\nALTER TABLE liquidacionesos ALTER COLUMN per_m SET STATISTICS 100;\nALTER TABLE liquidacionesos ALTER COLUMN quincena SET STATISTICS 100;\nALTER TABLE liquidacionesos ALTER COLUMN codigoobrasocial SET STATISTICS\n100;\nCREATE INDEX ixliqos_periodo\n ON liquidacionesos\n USING btree\n (per_a, per_m, quincena);\n\n\nCREATE TABLE detalleprestaciones\n(\n codigoliquidacionos int4 NOT NULL,\n numerosecuencia int4 NOT NULL,\n codigoprofesionclisanhosp int2 NOT NULL,\n matriculaprofesionalclisanhosp int4 NOT NULL,\n codigoctmclisanhosp int4 NOT NULL,\n codigoprofesionefector int2 NOT NULL,\n matriculaprofesionalefector int4 NOT NULL,\n codigoctmefector int4 NOT NULL,\n fechaprestacion date NOT NULL,\n codigonn char(6) NOT NULL,\n cantidad int2 NOT NULL,\n codigofacturacion int2 NOT NULL,\n porcentajehonorarios numeric(6,2) 
NOT NULL,\n porcentajederechos numeric(6,2) NOT NULL,\n importehonorarios numeric(12,2) NOT NULL,\n importederechos numeric(12,2) NOT NULL,\n importegastos numeric(12,2) NOT NULL,\n importegastosnogravados numeric(12,2) NOT NULL,\n importecompensacion numeric(12,2) NOT NULL,\n codigopadron int2 NOT NULL,\n codigoafiliado char(15) NOT NULL,\n numerobono varchar(15) NOT NULL,\n matriculaprofesionalprescriptor int4 NOT NULL,\n codigodevolucion int2 NOT NULL,\n importeforzado bool NOT NULL,\n codigotramo int2 NOT NULL DEFAULT 0,\n campocomodin int2 NOT NULL,\n fechahora_cga timestamp NOT NULL DEFAULT now(),\n userid varchar(20) NOT NULL DEFAULT \"current_user\"(),\n CONSTRAINT dp_pkey PRIMARY KEY (codigoliquidacionos, numerosecuencia)\n)\nWITHOUT OIDS TABLESPACE data;\nALTER TABLE detalleprestaciones ALTER COLUMN codigoliquidacionos SET\nSTATISTICS 100;\n\nboth vacummed and analyzed\ntable detalleprestaciones 5,408,590 rec\ntable liquidacionesos 16,752 rec\n\nQuery:\n--------\n\nSELECT DP.CodigoProfesionEfector, DP.MatriculaProfesionalEfector,\nSUM((ImporteHonorarios+ImporteD\nerechos+ImporteCompensacion)*Cantidad+ImporteGastos+ImporteGastosNoGravados)\nAS Importe\nFROM DetallePrestaciones DP INNER JOIN LiquidacionesOS L ON\nDP.CodigoLiquidacionOS=L.CodigoLiquidacionOS\nWHERE L.Per_a='2005' AND L.Facturada AND L.CodigoObraSocial IN(54)\nGROUP BY DP.CodigoProfesionEfector, DP.MatriculaProfesionalEfector;\n\nExplains:\n------------\nWith SET ENABLE_SEQSCAN TO ON;\nHashAggregate (cost=251306.99..251627.36 rows=11650 width=78)\n -> Hash Join (cost=1894.30..250155.54 rows=153526 width=78)\n Hash Cond: (\"outer\".codigoliquidacionos =\n\"inner\".codigoliquidacionos)\n -> Seq Scan on detalleprestaciones dp \n(cost=0.00..219621.32rows=5420932 width=82)\n -> Hash (cost=1891.01..1891.01 rows=1318 width=4)\n -> Bitmap Heap Scan on liquidacionesos l (cost=\n43.89..1891.01 rows=1318 width=4)\n Recheck Cond: (codigoobrasocial = 54)\n Filter: ((per_a = '2005'::bpchar) AND facturada)\n -> Bitmap Index Scan on ixliqos_os \n(cost=0.00..43.89rows=4541 width=0)\n Index Cond: (codigoobrasocial = 54)\n\nWith SET ENABLE_SEQSCAN TO OFF;\nHashAggregate (cost=2943834.84..2944155.21 rows=11650 width=78)\n -> Nested Loop (cost=0.00..2942683.39 rows=153526 width=78)\n -> Index Scan using liqos_pkey on liquidacionesos l (cost=\n0.00..3020.21 rows=1318 width=4)\n Filter: ((per_a = '2005'::bpchar) AND facturada AND\n(codigoobrasocial = 54))\n -> Index Scan using dp_pkey on detalleprestaciones dp (cost=\n0.00..2214.90 rows=1240 width=82)\n Index Cond: (dp.codigoliquidacionos =\n\"outer\".codigoliquidacionos)\n\nThanks for your time!!!!\nAlejandro\n",
"msg_date": "Wed, 22 Mar 2006 08:50:20 -0300",
"msg_from": "\"Alejandro D. Burne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequence Scan vs. Index scan"
},
{
"msg_contents": "On Wed, Mar 22, 2006 at 08:50:20AM -0300, Alejandro D. Burne wrote:\n> Explains:\n> ------------\n> With SET ENABLE_SEQSCAN TO ON;\n> HashAggregate (cost=251306.99..251627.36 rows=11650 width=78)\n\nYou'll need to post EXPLAIN ANALYZE results, not just EXPLAIN.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 22 Mar 2006 13:13:33 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequence Scan vs. Index scan"
},
{
"msg_contents": "2006/3/22, Steinar H. Gunderson <[email protected]>:\n>\n> On Wed, Mar 22, 2006 at 08:50:20AM -0300, Alejandro D. Burne wrote:\n> > Explains:\n> > ------------\n> > With SET ENABLE_SEQSCAN TO ON;\n> > HashAggregate (cost=251306.99..251627.36 rows=11650 width=78)\n>\n> You'll need to post EXPLAIN ANALYZE results, not just EXPLAIN.\n>\n> /* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\nSorry, this is the result:\n\nWITH SET ENABLE_SEQSCAN TO ON;\n\nHashAggregate (cost=251306.99..251627.36 rows=11650 width=78) (actual time=\n25089.024..25090.340 rows=1780 loops=1)\n -> Hash Join (cost=1894.30..250155.54 rows=153526 width=78) (actual\ntime=3190.599..24944.418 rows=38009 loops=1)\n Hash Cond: (\"outer\".codigoliquidacionos =\n\"inner\".codigoliquidacionos)\n -> Seq Scan on detalleprestaciones dp \n(cost=0.00..219621.32rows=5420932 width=82) (actual time=\n0.058..23198.852 rows=5421786 loops=1)\n -> Hash (cost=1891.01..1891.01 rows=1318 width=4) (actual time=\n60.777..60.777 rows=1530 loops=1)\n -> Bitmap Heap Scan on liquidacionesos l (cost=\n43.89..1891.01 rows=1318 width=4) (actual time=1.843..59.574 rows=1530\nloops=1)\n Recheck Cond: (codigoobrasocial = 54)\n Filter: ((per_a = '2005'::bpchar) AND facturada)\n -> Bitmap Index Scan on ixliqos_os \n(cost=0.00..43.89rows=4541 width=0) (actual time=\n1.439..1.439 rows=4736 loops=1)\n Index Cond: (codigoobrasocial = 54)\nTotal runtime: 25090.920 ms\n\nWITH SET ENABLE_SEQSCAN TO OFF;\nHashAggregate (cost=2943834.84..2944155.21 rows=11650 width=78) (actual\ntime=1479.361..1480.641 rows=1780 loops=1)\n -> Nested Loop (cost=0.00..2942683.39 rows=153526 width=78) (actual\ntime=195.690..1345.494 rows=38009 loops=1)\n -> Index Scan using liqos_pkey on liquidacionesos l (cost=\n0.00..3020.21 rows=1318 width=4) (actual time=174.546..666.761 rows=1530\nloops=1)\n Filter: ((per_a = '2005'::bpchar) AND facturada AND\n(codigoobrasocial = 54))\n -> Index Scan using dp_pkey on detalleprestaciones dp (cost=\n0.00..2214.90 rows=1240 width=82) (actual time=0.333..0.422 rows=25\nloops=1530)\n Index Cond: (dp.codigoliquidacionos =\n\"outer\".codigoliquidacionos)\nTotal runtime: 1481.244 ms\n\nThanks again, Alejandro\n\n2006/3/22, Steinar H. Gunderson <[email protected]>:\nOn Wed, Mar 22, 2006 at 08:50:20AM -0300, Alejandro D. 
Burne wrote:> Explains:> ------------> With SET ENABLE_SEQSCAN TO ON;> HashAggregate (cost=251306.99..251627.36 rows=11650 width=78)\nYou'll need to post EXPLAIN ANALYZE results, not just EXPLAIN./* Steinar */--Homepage: http://www.sesse.net/---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\nSorry, this is the result:\n\nWITH SET ENABLE_SEQSCAN TO ON;\n\nHashAggregate (cost=251306.99..251627.36 rows=11650 width=78) (actual time=25089.024..25090.340 rows=1780 loops=1)\n -> Hash Join (cost=1894.30..250155.54 rows=153526\nwidth=78) (actual time=3190.599..24944.418 rows=38009 loops=1)\n Hash Cond: (\"outer\".codigoliquidacionos = \"inner\".codigoliquidacionos)\n -> Seq Scan on\ndetalleprestaciones dp (cost=0.00..219621.32 rows=5420932\nwidth=82) (actual time=0.058..23198.852 rows=5421786 loops=1)\n -> Hash \n(cost=1891.01..1891.01 rows=1318 width=4) (actual time=60.777..60.777\nrows=1530 loops=1)\n \n-> Bitmap Heap Scan on liquidacionesos l \n(cost=43.89..1891.01 rows=1318 width=4) (actual time=1.843..59.574\nrows=1530 loops=1)\n \nRecheck Cond: (codigoobrasocial = 54)\n \nFilter: ((per_a = '2005'::bpchar) AND facturada)\n \n-> Bitmap Index Scan on ixliqos_os (cost=0.00..43.89\nrows=4541 width=0) (actual time=1.439..1.439 rows=4736 loops=1)\n \nIndex Cond: (codigoobrasocial = 54)\nTotal runtime: 25090.920 ms\n\nWITH SET ENABLE_SEQSCAN TO OFF;\nHashAggregate (cost=2943834.84..2944155.21 rows=11650 width=78) (actual time=1479.361..1480.641 rows=1780 loops=1)\n -> Nested Loop (cost=0.00..2942683.39 rows=153526\nwidth=78) (actual time=195.690..1345.494 rows=38009 loops=1)\n -> Index Scan using\nliqos_pkey on liquidacionesos l (cost=0.00..3020.21 rows=1318\nwidth=4) (actual time=174.546..666.761 rows=1530 loops=1)\n \nFilter: ((per_a = '2005'::bpchar) AND facturada AND (codigoobrasocial =\n54))\n -> Index Scan using\ndp_pkey on detalleprestaciones dp (cost=0.00..2214.90 rows=1240\nwidth=82) (actual time=0.333..0.422 rows=25 loops=1530)\n \nIndex Cond: (dp.codigoliquidacionos = \"outer\".codigoliquidacionos)\nTotal runtime: 1481.244 ms\n\nThanks again, Alejandro",
"msg_date": "Wed, 22 Mar 2006 09:23:53 -0300",
"msg_from": "\"Alejandro D. Burne\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sequence Scan vs. Index scan"
}
] |
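In the plans above, the index-driven plan is costed at roughly 2.9 million against about 250 thousand for the hash join, yet it runs in ~1.5 s versus ~25 s — a sign that the cost model, not the executor, is what needs adjusting. Rather than leaving enable_seqscan off globally, a hedged sketch of the usual first steps (the values are starting points to experiment with, not recommendations):

-- make sure the statistics are current after the bulk loads
ANALYZE liquidacionesos;
ANALYZE detalleprestaciones;

-- session-level experiment: if most of the working set is cached,
-- tell the planner that random index access is cheaper than it assumes
SET effective_cache_size = 100000;   -- in 8 kB pages, i.e. roughly 800 MB
SET random_page_cost = 2;            -- default is 4

-- then re-run EXPLAIN ANALYZE on the original join and compare it against
-- the enable_seqscan = off plan before changing postgresql.conf permanently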
[
{
"msg_contents": "Assuming you are joining on \"Table 1\".id = \"Table 2\".id - do you have indexes on both columns? Have you analyzed your tables + indexes (are there statistics available?) If not those criterias are met, it is unlikely that postgres will choose an index scan.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Fernando\nLujan\nSent: den 21 mars 2006 19:08\nTo: [email protected]\nSubject: [PERFORM] Sequence Scan vs. Index scan\n\n\nHi guys,\n\nI'm trying to figure out when Sequence Scan is better than Index Scan. I \njust want to know this because I disabled the sequence scan in \npostgresql and receive a better result. :)\n\nTwo tables.\n\nTable 1 (1 million rows )\n-----------\nid\ntext\ntable2_id\n\nTable 2 (300 thousand rows)\n----------\nid\ntext 2\n\nWhen I join these two tables I have a sequence_scan. :(\n\nThanks in advance.\n\nFernando Lujan\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n",
"msg_date": "Tue, 21 Mar 2006 19:22:43 +0100",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sequence Scan vs. Index scan"
}
] |
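A short sketch of the checklist in the reply above, written against the illustrative two-table layout from the original question (table1 carrying a table2_id column that references table2.id):

-- index both sides of the join; table2.id is usually covered already if it is the primary key
CREATE INDEX table1_table2_id_idx ON table1 (table2_id);

-- refresh the planner statistics so the row estimates match the real data
ANALYZE table1;
ANALYZE table2;

-- then look at the chosen plan instead of forcing one
EXPLAIN ANALYZE
SELECT * FROM table1 t1 JOIN table2 t2 ON t1.table2_id = t2.id;

Note that a join which really does touch most of both tables is often cheapest as sequential scans feeding a hash or merge join, so an index scan is not automatically the better outcome.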
[
{
"msg_contents": "It's time to build a new white box postgresql test box/workstation. My Athlon \nXP system is getting a little long in the tooth. Have any of you performance \nfolks evaluated the Socket 939 boards on the market these days? I'd like to \nfind something that doesn't have terrible SATA disk performance. I'm planning \nto install Gentoo x86_64 on it and run software raid, so I won't be using the \nfakeraid controllers as raid. I have been eyeing the Abit AN8 32X board, but \nI don't really need SLI, though having an extra PCI-e might be nice in the \nfuture.\n\nIf you respond off-list, I'll summarize and post the results back.\n\nThanks for any input.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Tue, 21 Mar 2006 18:48:26 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "motherboard recommendations"
}
] |
[
{
"msg_contents": ">>On Mon, 2006-03-20 at 15:59 +0100, Mikael Carneholm wrote:\n\n>> This gives that 10Gb takes ~380s => ~27Mb/s (with fsync=off), compared to the raw dd result (~75.5Mb/s).\n>> \n>> I assume this difference is due to: \n>> - simultaneous WAL write activity (assumed: for each byte written to the table, at least one byte is also written to WAL, in effect: 10Gb data inserted in the table equals 20Gb written to disk)\n>> - lousy test method (it is done using a function => the transaction size is 10Gb, and 10Gb will *not* fit in wal_buffers :) )\n>> - poor config\n\n>> checkpoint_segments = 3 \n\n>With those settings, you'll be checkpointing every 48 Mb, which will be\n>every about once per second. Since the checkpoint will take a reasonable\n>amount of time, even with fsync off, you'll be spending most of your\n>time checkpointing. bgwriter will just be slowing you down too because\n>you'll always have more clean buffers than you can use, since you have\n>132MB of shared_buffers, yet flushing all of them every checkpoint.\n\n>Please read you're logfile, which should have relevant WARNING messages.\n\nIt does (\"LOG: checkpoints are occurring too frequently (2 seconds apart)\") However, I tried increasing checkpoint_segments to 32 (512Mb) making it checkpoint every 15 second or so, but that gave a more uneven insert rate than with checkpoint_segments=3. Maybe 64 segments (1024Mb) would be a better value? If I set checkpoint_segments to 64, what would a reasonable bgwriter setup be? I still need to improve my understanding of the relations between checkpoint_segments <-> shared_buffers <-> bgwriter... :/\n\n- Mikael\n\n",
"msg_date": "Wed, 22 Mar 2006 10:04:49 +0100",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Migration study, step 1: bulk write performanceoptimization"
},
{
"msg_contents": "On Wed, 2006-03-22 at 10:04 +0100, Mikael Carneholm wrote:\n> but that gave a more uneven insert rate\n\nNot sure what you mean, but happy to review test results.\n\nYou should be able to tweak other parameters from here as you had been\ntrying. Your bgwriter will be of some benefit now if you set it\naggressively enough to keep up.\n\nYour thoughts on this process are welcome...\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 22 Mar 2006 09:46:29 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write"
},
{
"msg_contents": "On Wed, Mar 22, 2006 at 10:04:49AM +0100, Mikael Carneholm wrote:\n> It does (\"LOG: checkpoints are occurring too frequently (2 seconds apart)\") However, I tried increasing checkpoint_segments to 32 (512Mb) making it checkpoint every 15 second or so, but that gave a more uneven insert rate than with checkpoint_segments=3. Maybe 64 segments (1024Mb) would be a better value? If I set checkpoint_segments to 64, what would a reasonable bgwriter setup be? I still need to improve my understanding of the relations between checkpoint_segments <-> shared_buffers <-> bgwriter... :/\n\nProbably the easiest way is to set checkpoint_segments to something like\n128 or 256 (or possibly higher), and then make bg_writer more aggressive\nby increasing bgwriter_*_maxpages dramatically (maybe start with 200).\nYou might want to up lru_percent as well, otherwise it will take a\nminimum of 20 seconds to fully scan.\n\nBasically, slowly start increasing settings until performance smooths\nout.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 22 Mar 2006 06:55:17 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performanceoptimization"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Wed, Mar 22, 2006 at 10:04:49AM +0100, Mikael Carneholm wrote:\n>> It does (\"LOG: checkpoints are occurring too frequently (2 seconds apart)\") However, I tried increasing checkpoint_segments to 32 (512Mb) making it checkpoint every 15 second or so, but that gave a more uneven insert rate than with checkpoint_segments=3. Maybe 64 segments (1024Mb) would be a better value? If I set checkpoint_segments to 64, what would a reasonable bgwriter setup be? I still need to improve my understanding of the relations between checkpoint_segments <-> shared_buffers <-> bgwriter... :/\n\n> Probably the easiest way is to set checkpoint_segments to something like\n> 128 or 256 (or possibly higher), and then make bg_writer more aggressive\n> by increasing bgwriter_*_maxpages dramatically (maybe start with 200).\n\nDefinitely. You really don't want checkpoints happening oftener than\nonce per several minutes (five or ten if possible). Push\ncheckpoint_segments as high as you need to make that happen, and then\nexperiment with making the bgwriter parameters more aggressive in order\nto smooth out the disk write behavior. Letting the physical writes\nhappen via bgwriter is WAY cheaper than checkpointing.\n\nbgwriter parameter tuning is still a bit of a black art, so we'd be\ninterested to hear what works well for you.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 22 Mar 2006 09:34:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migration study, step 1: bulk write performanceoptimization "
}
] |
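Pulling the advice in this thread together, a starting-point postgresql.conf excerpt for 8.1 — every number here is illustrative and meant to be tuned against the observed checkpoint spacing and insert-rate smoothness, not copied as-is:

# space checkpoints out to once per several minutes under load
checkpoint_segments = 128        # 16 MB per WAL segment
checkpoint_timeout = 600         # seconds
checkpoint_warning = 300         # still log if they come closer together than this

# make the background writer aggressive enough to keep up, so the
# checkpoint itself has little left to flush
bgwriter_delay = 200             # ms between rounds (the default)
bgwriter_lru_percent = 20.0
bgwriter_lru_maxpages = 200
bgwriter_all_percent = 5.0
bgwriter_all_maxpages = 200

As Tom Lane notes above, bgwriter tuning is still something of a black art, so the *_percent and *_maxpages values are exactly the kind of thing to raise gradually while watching for the write bursts to even out.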
[
{
"msg_contents": "Thanks, will try that. I'll report on the progress later, I have some unit tests to set up first but as soon as that is done I'll go back to optimizing insert performance.\n\nRegards,\nMikael.\n\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]]\nSent: den 22 mars 2006 13:55\nTo: Mikael Carneholm\nCc: Simon Riggs; [email protected]\nSubject: Re: [PERFORM] Migration study, step 1: bulk write\nperformanceoptimization\n\n\nOn Wed, Mar 22, 2006 at 10:04:49AM +0100, Mikael Carneholm wrote:\n> It does (\"LOG: checkpoints are occurring too frequently (2 seconds apart)\") However, I tried increasing checkpoint_segments to 32 (512Mb) making it checkpoint every 15 second or so, but that gave a more uneven insert rate than with checkpoint_segments=3. Maybe 64 segments (1024Mb) would be a better value? If I set checkpoint_segments to 64, what would a reasonable bgwriter setup be? I still need to improve my understanding of the relations between checkpoint_segments <-> shared_buffers <-> bgwriter... :/\n\nProbably the easiest way is to set checkpoint_segments to something like\n128 or 256 (or possibly higher), and then make bg_writer more aggressive\nby increasing bgwriter_*_maxpages dramatically (maybe start with 200).\nYou might want to up lru_percent as well, otherwise it will take a\nminimum of 20 seconds to fully scan.\n\nBasically, slowly start increasing settings until performance smooths\nout.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 22 Mar 2006 14:12:43 +0100",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Migration study, step 1: bulk write performanceoptimization"
}
] |
[
{
"msg_contents": "All,\n\nHas anyone tested PostgreSQL 8.1.x compiled with Intel's Linux C/C++\ncompiler?\n\nGreg\n\n--\n Greg Spiegelberg\n [email protected]\n ISOdx Product Development Manager\n Cranel, Inc.\n \n",
"msg_date": "Wed, 22 Mar 2006 08:56:04 -0500",
"msg_from": "\"Spiegelberg, Greg\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Intel C/C++ Compiler Tests"
},
{
"msg_contents": "Greg,\n\n\nOn 3/22/06 5:56 AM, \"Spiegelberg, Greg\" <[email protected]> wrote:\n\n> Has anyone tested PostgreSQL 8.1.x compiled with Intel's Linux C/C++\n> compiler?\n\nWe used to compile 8.0 with icc and 7.x before that. We found very good\nperformance gains for Intel P4 architecture processors and some gains for\nAMD Athlon.\n\nLately, the gcc compilers have caught up with icc on pipelining\noptimizations and they generate better code for Opteron than icc, so we\nfound that icc was significantly slower than gcc on Opteron and no different\non P4/Xeon.\n\nMaybe things have changed in newer versions of icc, the last tests I did\nwere about 1 year ago.\n\n- Luke \n\n\n",
"msg_date": "Wed, 22 Mar 2006 09:10:58 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Intel C/C++ Compiler Tests"
}
] |
[
{
"msg_contents": "I have a database with foreign keys enabled on the schema. I receive different \nfiles, some of them are huge. And I need to load these files in the database \nevery night. There are several scenerios that I want to design an optimal \nsolution for -\n\n1. One of the file has around 80K records and I have to delete everything from \nthe table and load this file. The provider never provides a \"delta file\" so I \ndont have a way to identify which records are already present and which are \nnew. If I dont delete everything and insert fresh, I have to make around 80K \nselects to decide if the records exist or not. Now there are lot of tables \nthat have foreign keys linked with this table so unless I disable the foreign \nkeys, I cannot really delete anything from this table. What would be a good \npractise here?\n\n2. Another file that I receive has around 150K records that I need to load in \nthe database. Now one of the fields is logically a \"foreign key\" to another \ntable, and it is linked to the parent table via a database generated unique \nID instead of the actual value. But the file comes with the actual value. So \nonce again, I have to either drop the foreign key, or make 150K selects to \ndetermine the serial ID so that the foreign key is satisfied. What would be a \ngood strategy in this scenerio ?\n\nPlease pardon my inexperience with database !\n\nThanks,\nAmit\n",
"msg_date": "Wed, 22 Mar 2006 10:32:10 -0500",
"msg_from": "ashah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Massive Inserts Strategies"
},
{
"msg_contents": "\n\tFor both cases, you could COPY your file into a temporary table and do a \nbig JOIN with your existing table, one for inserting new rows, and one for \nupdating existing rows.\n\tDoing a large bulk query is a lot more efficient than doing a lot of \nselects. Vacuum afterwards, and you'll be fine.\n",
"msg_date": "Wed, 22 Mar 2006 17:07:52 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive Inserts Strategies"
},
{
"msg_contents": "Load the files into a temp table and go from there...\n\nCOPY ... FROM file;\nUPDATE existing_table SET ... WHERE ...;\nINSERT INTO existing_table SELECT * FROM temp_table WHERE NOT EXISTS(\nSELECT * FROM existing_table WHERE ...)\n\nOn Wed, Mar 22, 2006 at 10:32:10AM -0500, ashah wrote:\n> I have a database with foreign keys enabled on the schema. I receive different \n> files, some of them are huge. And I need to load these files in the database \n> every night. There are several scenerios that I want to design an optimal \n> solution for -\n> \n> 1. One of the file has around 80K records and I have to delete everything from \n> the table and load this file. The provider never provides a \"delta file\" so I \n> dont have a way to identify which records are already present and which are \n> new. If I dont delete everything and insert fresh, I have to make around 80K \n> selects to decide if the records exist or not. Now there are lot of tables \n> that have foreign keys linked with this table so unless I disable the foreign \n> keys, I cannot really delete anything from this table. What would be a good \n> practise here?\n> \n> 2. Another file that I receive has around 150K records that I need to load in \n> the database. Now one of the fields is logically a \"foreign key\" to another \n> table, and it is linked to the parent table via a database generated unique \n> ID instead of the actual value. But the file comes with the actual value. So \n> once again, I have to either drop the foreign key, or make 150K selects to \n> determine the serial ID so that the foreign key is satisfied. What would be a \n> good strategy in this scenerio ?\n> \n> Please pardon my inexperience with database !\n> \n> Thanks,\n> Amit\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 22 Mar 2006 10:09:25 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive Inserts Strategies"
},
{
"msg_contents": "I tried this solution, but ran into following problem.\n\nThe temp_table has columns (col1, col2, col3).\n\nThe original_table has columns (col0, col1, col2, col3)\n\nNow the extra col0 on the original_table is the unique generated ID by the \ndatabase.\n\nHow can I make your suggestions work in that case .. ?\n\nOn Wednesday 22 March 2006 11:09 am, Jim C. Nasby wrote:\n> Load the files into a temp table and go from there...\n>\n> COPY ... FROM file;\n> UPDATE existing_table SET ... WHERE ...;\n> INSERT INTO existing_table SELECT * FROM temp_table WHERE NOT EXISTS(\n> SELECT * FROM existing_table WHERE ...)\n>\n> On Wed, Mar 22, 2006 at 10:32:10AM -0500, ashah wrote:\n> > I have a database with foreign keys enabled on the schema. I receive\n> > different files, some of them are huge. And I need to load these files in\n> > the database every night. There are several scenerios that I want to\n> > design an optimal solution for -\n> >\n> > 1. One of the file has around 80K records and I have to delete everything\n> > from the table and load this file. The provider never provides a \"delta\n> > file\" so I dont have a way to identify which records are already present\n> > and which are new. If I dont delete everything and insert fresh, I have\n> > to make around 80K selects to decide if the records exist or not. Now\n> > there are lot of tables that have foreign keys linked with this table so\n> > unless I disable the foreign keys, I cannot really delete anything from\n> > this table. What would be a good practise here?\n> >\n> > 2. Another file that I receive has around 150K records that I need to\n> > load in the database. Now one of the fields is logically a \"foreign key\"\n> > to another table, and it is linked to the parent table via a database\n> > generated unique ID instead of the actual value. But the file comes with\n> > the actual value. So once again, I have to either drop the foreign key,\n> > or make 150K selects to determine the serial ID so that the foreign key\n> > is satisfied. What would be a good strategy in this scenerio ?\n> >\n> > Please pardon my inexperience with database !\n> >\n> > Thanks,\n> > Amit\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n",
"msg_date": "Tue, 28 Mar 2006 10:59:49 -0500",
"msg_from": "ashah <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive Inserts Strategies"
},
{
"msg_contents": "Hi, ashah,\n\nashah wrote:\n> I tried this solution, but ran into following problem.\n> \n> The temp_table has columns (col1, col2, col3).\n> \n> The original_table has columns (col0, col1, col2, col3)\n\n> Now the extra col0 on the original_table is the unique generated ID by\n> the database.\n\nINSERT INTO original_table (col1, col2, col3) SELECT col1, col2, col3\nFROM temp_table WHERE ...\n\nHTH,\nMarkus\n\n\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Tue, 28 Mar 2006 18:18:44 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive Inserts Strategies"
},
{
"msg_contents": "Is there some other unique key you can test on?\n\nTake a look at http://lnk.nu/cvs.distributed.net/8qt.sql lines 169-216\nfor an exammple. In this case we use a different method for assigning\nIDs than you probably will, but the idea remains.\n\nOn Tue, Mar 28, 2006 at 10:59:49AM -0500, ashah wrote:\n> I tried this solution, but ran into following problem.\n> \n> The temp_table has columns (col1, col2, col3).\n> \n> The original_table has columns (col0, col1, col2, col3)\n> \n> Now the extra col0 on the original_table is the unique generated ID by the \n> database.\n> \n> How can I make your suggestions work in that case .. ?\n> \n> On Wednesday 22 March 2006 11:09 am, Jim C. Nasby wrote:\n> > Load the files into a temp table and go from there...\n> >\n> > COPY ... FROM file;\n> > UPDATE existing_table SET ... WHERE ...;\n> > INSERT INTO existing_table SELECT * FROM temp_table WHERE NOT EXISTS(\n> > SELECT * FROM existing_table WHERE ...)\n> >\n> > On Wed, Mar 22, 2006 at 10:32:10AM -0500, ashah wrote:\n> > > I have a database with foreign keys enabled on the schema. I receive\n> > > different files, some of them are huge. And I need to load these files in\n> > > the database every night. There are several scenerios that I want to\n> > > design an optimal solution for -\n> > >\n> > > 1. One of the file has around 80K records and I have to delete everything\n> > > from the table and load this file. The provider never provides a \"delta\n> > > file\" so I dont have a way to identify which records are already present\n> > > and which are new. If I dont delete everything and insert fresh, I have\n> > > to make around 80K selects to decide if the records exist or not. Now\n> > > there are lot of tables that have foreign keys linked with this table so\n> > > unless I disable the foreign keys, I cannot really delete anything from\n> > > this table. What would be a good practise here?\n> > >\n> > > 2. Another file that I receive has around 150K records that I need to\n> > > load in the database. Now one of the fields is logically a \"foreign key\"\n> > > to another table, and it is linked to the parent table via a database\n> > > generated unique ID instead of the actual value. But the file comes with\n> > > the actual value. So once again, I have to either drop the foreign key,\n> > > or make 150K selects to determine the serial ID so that the foreign key\n> > > is satisfied. What would be a good strategy in this scenerio ?\n> > >\n> > > Please pardon my inexperience with database !\n> > >\n> > > Thanks,\n> > > Amit\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Have you searched our list archives?\n> > >\n> > > http://archives.postgresql.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 28 Mar 2006 11:38:27 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive Inserts Strategies"
}
] |
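A minimal sketch of the staging-table approach discussed in the thread above, covering both scenarios. Every object name here (temp_table, temp_table2, original_table, parent_table, child_table, col1..col3) and the choice of col1 as the natural key are illustrative assumptions, not the poster's actual schema; the COPY options would need to match the real file format.

-- Scenario 1: bulk-load the nightly file into a staging table, then upsert.
CREATE TEMP TABLE temp_table (col1 text, col2 text, col3 text);
COPY temp_table FROM '/path/to/nightly_file';

-- Update the rows that already exist, matching on the natural key.
UPDATE original_table
   SET col2 = t.col2, col3 = t.col3
  FROM temp_table t
 WHERE original_table.col1 = t.col1;

-- Insert the rows that are new; the serial column col0 fills itself in.
INSERT INTO original_table (col1, col2, col3)
SELECT t.col1, t.col2, t.col3
  FROM temp_table t
 WHERE NOT EXISTS (SELECT 1 FROM original_table o WHERE o.col1 = t.col1);

-- Scenario 2: after loading the second file into its own staging table,
-- resolve its textual parent value to the serial id with a single join
-- instead of 150K individual SELECTs.
INSERT INTO child_table (parent_id, other_col)
SELECT p.id, t.other_col
  FROM temp_table2 t
  JOIN parent_table p ON p.actual_value = t.parent_value;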
[
{
"msg_contents": "I'd like to know if the latest PostgreSQL release can scale up by\nutilizing multiple cpu or dual core cpu to boost up the sql\nexecutions.\n\nI already do a research on the PostgreSQL mailing archives and only\nfound old threads dating back 2000. A lot of things have improved with\nPostgreSQL and hopefully the support for multiple cpu or dual cores is\nalready provided.\n\n--\nhttp://jojopaderes.multiply.com\nhttp://jojopaderes.wordpress.com\n",
"msg_date": "Thu, 23 Mar 2006 14:19:24 +0800",
"msg_from": "\"Jojo Paderes\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Scaling up PostgreSQL in Multiple CPU / Dual Core Powered Servers"
},
{
"msg_contents": "[email protected] (\"Jojo Paderes\") wrote:\n> I'd like to know if the latest PostgreSQL release can scale up by\n> utilizing multiple cpu or dual core cpu to boost up the sql\n> executions.\n>\n> I already do a research on the PostgreSQL mailing archives and only\n> found old threads dating back 2000. A lot of things have improved with\n> PostgreSQL and hopefully the support for multiple cpu or dual cores is\n> already provided.\n\nIf you submit multiple concurrent queries, they can be concurrently\nprocessed on separate CPUs; that has long been supported, and people\nhave been using SMP systems to this end for years.\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in String.concat \"@\" [name;tld];;\nhttp://cbbrowne.com/info/spreadsheets.html\n\"In other words -- and this is the rock solid principle on which the\nwhole of the Corporation's Galaxy-wide success is founded -- their\nfundamental design flaws are completely hidden by their superficial\ndesign flaws.\" -- HHGTG\n",
"msg_date": "Thu, 23 Mar 2006 08:23:42 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core Powered Servers"
},
{
"msg_contents": "On Thu, 23 Mar 2006 14:19:24 +0800\n\"Jojo Paderes\" <[email protected]> wrote:\n\n> I'd like to know if the latest PostgreSQL release can scale up by\n> utilizing multiple cpu or dual core cpu to boost up the sql\n> executions.\n> \n> I already do a research on the PostgreSQL mailing archives and only\n> found old threads dating back 2000. A lot of things have improved with\n> PostgreSQL and hopefully the support for multiple cpu or dual cores is\n> already provided.\n\n Yes PostgreSQL can take advantage of multiple CPUs and core, has been\n able to for quite some time. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Thu, 23 Mar 2006 10:26:42 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "On Thu, 2006-03-23 at 00:19, Jojo Paderes wrote:\n> I'd like to know if the latest PostgreSQL release can scale up by\n> utilizing multiple cpu or dual core cpu to boost up the sql\n> executions.\n> \n> I already do a research on the PostgreSQL mailing archives and only\n> found old threads dating back 2000. A lot of things have improved with\n> PostgreSQL and hopefully the support for multiple cpu or dual cores is\n> already provided.\n\nCan a single query be split up into parts and run on separate processors\nat the same time? No.\n\nCan multiple incoming queries be run on different processors for better\nperformance? Yes.\n\nHas someone been working on the problem of splitting a query into pieces\nand running it on multiple CPUs / multiple machines? Yes. Bizgress has\ndone that. \n",
"msg_date": "Thu, 23 Mar 2006 10:38:21 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "\n> Has someone been working on the problem of splitting a query into pieces\n> and running it on multiple CPUs / multiple machines? Yes. Bizgress has\n> done that. \n\nI believe that is limited to Bizgress MPP yes?\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n",
"msg_date": "Thu, 23 Mar 2006 08:43:31 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "On Thu, 2006-03-23 at 10:43, Joshua D. Drake wrote:\n> > Has someone been working on the problem of splitting a query into pieces\n> > and running it on multiple CPUs / multiple machines? Yes. Bizgress has\n> > done that. \n> \n> I believe that is limited to Bizgress MPP yes?\n\nYep. I hope that someday it will be released to the postgresql global\ndev group for inclusion. Or at least parts of it.\n",
"msg_date": "Thu, 23 Mar 2006 11:02:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "Martha Stewart called it a Good Thing when [email protected] (Scott Marlowe) wrote:\n> On Thu, 2006-03-23 at 10:43, Joshua D. Drake wrote:\n>> > Has someone been working on the problem of splitting a query into pieces\n>> > and running it on multiple CPUs / multiple machines? Yes. Bizgress has\n>> > done that. \n>> \n>> I believe that is limited to Bizgress MPP yes?\n>\n> Yep. I hope that someday it will be released to the postgresql global\n> dev group for inclusion. Or at least parts of it.\n\nQuestion: Does the Bizgress/MPP use threading for this concurrency?\nOr forking?\n\nIf it does so via forking, that's more portable, and less dependent on\nspecific complexities of threading implementations (which amounts to\nnon-portability ;-)).\n\nMost times Jan comes to town, we spend a few minutes musing about the\n\"splitting queries across threads\" problem, and dismiss it again; if\nthere's the beginning of a \"split across processes,\" that's decidedly\nneat :-).\n-- \noutput = reverse(\"moc.liamg\" \"@\" \"enworbbc\")\nhttp://linuxfinances.info/info/internet.html\nWhy do we put suits in a garment bag, and put garments in a suitcase? \n",
"msg_date": "Thu, 23 Mar 2006 21:22:34 -0500",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "On Thu, Mar 23, 2006 at 09:22:34PM -0500, Christopher Browne wrote:\n> Martha Stewart called it a Good Thing when [email protected] (Scott Marlowe) wrote:\n> > On Thu, 2006-03-23 at 10:43, Joshua D. Drake wrote:\n> >> > Has someone been working on the problem of splitting a query into pieces\n> >> > and running it on multiple CPUs / multiple machines? Yes. Bizgress has\n> >> > done that. \n> >> \n> >> I believe that is limited to Bizgress MPP yes?\n> >\n> > Yep. I hope that someday it will be released to the postgresql global\n> > dev group for inclusion. Or at least parts of it.\n> \n> Question: Does the Bizgress/MPP use threading for this concurrency?\n> Or forking?\n> \n> If it does so via forking, that's more portable, and less dependent on\n> specific complexities of threading implementations (which amounts to\n> non-portability ;-)).\n> \n> Most times Jan comes to town, we spend a few minutes musing about the\n> \"splitting queries across threads\" problem, and dismiss it again; if\n> there's the beginning of a \"split across processes,\" that's decidedly\n> neat :-).\n\nCorrect me if I'm wrong, but there's no way to (reasonably) accomplish\nthat without having some dedicated extra processes laying around that\nyou can use to execute the queries, no? In other words, the cost of a\nfork() during query execution would be too prohibitive...\n\nFWIW, DB2 executes all queries in a dedicated set of processes. The\nprocess handling the connection from the client will pass a query\nrequest off to one of the executor processes. I can't remember which\nprocess actually plans the query, but I know that the executor runs it.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 04:14:40 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "Christopher,\n\nOn 3/23/06 6:22 PM, \"Christopher Browne\" <[email protected]> wrote:\n\n> Question: Does the Bizgress/MPP use threading for this concurrency?\n> Or forking?\n> \n> If it does so via forking, that's more portable, and less dependent on\n> specific complexities of threading implementations (which amounts to\n> non-portability ;-)).\n\nOK - I'll byte:\n\nIt's process based, we fork backends at slice points in the execution plan.\n\nTo take care of the startup latency problem, we persist sets of these\nbackends, called \"gangs\". They appear, persist for connection scope for\nreuse, then are disbanded.\n\n> Most times Jan comes to town, we spend a few minutes musing about the\n> \"splitting queries across threads\" problem, and dismiss it again; if\n> there's the beginning of a \"split across processes,\" that's decidedly\n> neat :-).\n\n:-)\n\n- Luke\n\n\n",
"msg_date": "Fri, 24 Mar 2006 07:02:15 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "[email protected] (\"Jim C. Nasby\") writes:\n> On Thu, Mar 23, 2006 at 09:22:34PM -0500, Christopher Browne wrote:\n>> Martha Stewart called it a Good Thing when [email protected] (Scott Marlowe) wrote:\n>> > On Thu, 2006-03-23 at 10:43, Joshua D. Drake wrote:\n>> >> > Has someone been working on the problem of splitting a query into pieces\n>> >> > and running it on multiple CPUs / multiple machines? Yes. Bizgress has\n>> >> > done that. \n>> >> \n>> >> I believe that is limited to Bizgress MPP yes?\n>> >\n>> > Yep. I hope that someday it will be released to the postgresql global\n>> > dev group for inclusion. Or at least parts of it.\n>> \n>> Question: Does the Bizgress/MPP use threading for this concurrency?\n>> Or forking?\n>> \n>> If it does so via forking, that's more portable, and less dependent on\n>> specific complexities of threading implementations (which amounts to\n>> non-portability ;-)).\n>> \n>> Most times Jan comes to town, we spend a few minutes musing about the\n>> \"splitting queries across threads\" problem, and dismiss it again; if\n>> there's the beginning of a \"split across processes,\" that's decidedly\n>> neat :-).\n>\n> Correct me if I'm wrong, but there's no way to (reasonably) accomplish\n> that without having some dedicated extra processes laying around that\n> you can use to execute the queries, no? In other words, the cost of a\n> fork() during query execution would be too prohibitive...\n\nCounterexample...\n\nThe sort of scenario we keep musing about is where you split off a\n(thread|process) for each partition of a big table. There is in fact\na natural such partitioning, in that tables get split at the 1GB mark,\nby default.\n\nConsider doing a join against 2 tables that are each 8GB in size\n(e.g. - they consist of 8 data files). Let's assume that the query\nplan indicates doing seq scans on both.\n\nYou *know* you'll be reading through 16 files, each 1GB in size.\nSpawning a process for each of those files doesn't strike me as\n\"prohibitively expensive.\"\n\nA naive read on this is that you might start with one backend process,\nwhich then spawns 16 more. Each of those backends is scanning through\none of those 16 files; they then throw relevant tuples into shared\nmemory to be aggregated/joined by the central one.\n\nThat particular scenario is one where the fork()s would hardly be\nnoticeable.\n\n> FWIW, DB2 executes all queries in a dedicated set of processes. The\n> process handling the connection from the client will pass a query\n> request off to one of the executor processes. I can't remember which\n> process actually plans the query, but I know that the executor runs\n> it.\n\nIt seems to me that the kinds of cases where extra processes/threads\nwould be warranted are quite likely to be cases where fork()ing may be\nan immaterial cost.\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in String.concat \"@\" [name;tld];;\nhttp://www.ntlug.org/~cbbrowne/languages.html\nTECO Madness: a moment of convenience, a lifetime of regret.\n-- Dave Moon\n",
"msg_date": "Fri, 24 Mar 2006 13:21:23 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "[email protected] (\"Luke Lonergan\") writes:\n> Christopher,\n>\n> On 3/23/06 6:22 PM, \"Christopher Browne\" <[email protected]> wrote:\n>\n>> Question: Does the Bizgress/MPP use threading for this concurrency?\n>> Or forking?\n>> \n>> If it does so via forking, that's more portable, and less dependent on\n>> specific complexities of threading implementations (which amounts to\n>> non-portability ;-)).\n>\n> OK - I'll byte:\n>\n> It's process based, we fork backends at slice points in the execution plan.\n\nBy \"slice points\", do you mean that you'd try to partition tables\n(e.g. - if there's a Seq Scan on a table with 8 1GB segments, you\ncould spawn as many as 8 processes), or that two scans that are then\nmerge joined means a process for each scan, and a process for the\nmerge join? Or perhaps both :-). Or perhaps something else entirely ;-).\n\n> To take care of the startup latency problem, we persist sets of\n> these backends, called \"gangs\". They appear, persist for connection\n> scope for reuse, then are disbanded.\n\nIf only that could happen to more gangs...\n-- \noutput = (\"cbbrowne\" \"@\" \"cbbrowne.com\")\nhttp://cbbrowne.com/info/multiplexor.html\n\"I'm sorry, the teleportation booth you have reached is not in service\nat this time. Please hand-reassemble your molecules or call an\noperator to help you....\"\n",
"msg_date": "Fri, 24 Mar 2006 13:24:09 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:\n>A naive read on this is that you might start with one backend process,\n>which then spawns 16 more. Each of those backends is scanning through\n>one of those 16 files; they then throw relevant tuples into shared\n>memory to be aggregated/joined by the central one.\n\nOf course, table scanning is going to be IO limited in most cases, and \nhaving every query spawn 16 independent IO threads is likely to slow \nthings down in more cases than it speeds them up. It could work if you \nhave a bunch of storage devices, but at that point it's probably easier \nand more direct to implement a clustered approach.\n\nMike Stone\n",
"msg_date": "Fri, 24 Mar 2006 14:21:24 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:\n> > Correct me if I'm wrong, but there's no way to (reasonably) accomplish\n> > that without having some dedicated extra processes laying around that\n> > you can use to execute the queries, no? In other words, the cost of a\n> > fork() during query execution would be too prohibitive...\n> \n> Counterexample...\n> \n> The sort of scenario we keep musing about is where you split off a\n> (thread|process) for each partition of a big table. There is in fact\n> a natural such partitioning, in that tables get split at the 1GB mark,\n> by default.\n> \n> Consider doing a join against 2 tables that are each 8GB in size\n> (e.g. - they consist of 8 data files). Let's assume that the query\n> plan indicates doing seq scans on both.\n> \n> You *know* you'll be reading through 16 files, each 1GB in size.\n> Spawning a process for each of those files doesn't strike me as\n> \"prohibitively expensive.\"\n\nHave you ever tried reading from 2 large files on a disk at the same\ntime, let alone 16? The results ain't pretty.\n\nWhat you're suggesting maybe makes sense if the two tables are in\ndifferent tablespaces, provided you have some additional means to know\nif those two tablespaces are on the same set of spindles. Though even\nhere the usefulness is somewhat suspect, because CPU is a hell of a lot\nfaster than disks are, unless you have a whole lot of disks. Of course,\nthis is exactly the target market for MPP.\n\nWhere parallel execution really makes sense is when you're doing things\nlike sorts or hash operations, because those are relatively\nCPU-intensive.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 13:24:04 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "[email protected] (Michael Stone) writes:\n\n> On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:\n>>A naive read on this is that you might start with one backend process,\n>>which then spawns 16 more. Each of those backends is scanning through\n>>one of those 16 files; they then throw relevant tuples into shared\n>>memory to be aggregated/joined by the central one.\n>\n> Of course, table scanning is going to be IO limited in most cases, and\n> having every query spawn 16 independent IO threads is likely to slow\n> things down in more cases than it speeds them up. It could work if you\n> have a bunch of storage devices, but at that point it's probably\n> easier and more direct to implement a clustered approach.\n\nAll stipulated, yes. It obviously wouldn't be terribly useful to scan\nmore aggressively than I/O bandwidth can support. The point is that\nthis is one of the kinds of places where concurrent processing could\ndo some good...\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/spiritual.html\nSave the whales. Collect the whole set. \n",
"msg_date": "Fri, 24 Mar 2006 14:33:21 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
},
{
"msg_contents": "\nAdded to TODO:\n\t\n\t* Experiment with multi-threaded backend better resource utilization\n\t This would allow a single query to make use of multiple CPU's or\n\t multiple I/O channels simultaneously.\n\n\n---------------------------------------------------------------------------\n\nChris Browne wrote:\n> [email protected] (Michael Stone) writes:\n> \n> > On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:\n> >>A naive read on this is that you might start with one backend process,\n> >>which then spawns 16 more. Each of those backends is scanning through\n> >>one of those 16 files; they then throw relevant tuples into shared\n> >>memory to be aggregated/joined by the central one.\n> >\n> > Of course, table scanning is going to be IO limited in most cases, and\n> > having every query spawn 16 independent IO threads is likely to slow\n> > things down in more cases than it speeds them up. It could work if you\n> > have a bunch of storage devices, but at that point it's probably\n> > easier and more direct to implement a clustered approach.\n> \n> All stipulated, yes. It obviously wouldn't be terribly useful to scan\n> more aggressively than I/O bandwidth can support. The point is that\n> this is one of the kinds of places where concurrent processing could\n> do some good...\n> -- \n> let name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\n> http://cbbrowne.com/info/spiritual.html\n> Save the whales. Collect the whole set. \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian http://candle.pha.pa.us\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Sun, 9 Apr 2006 16:24:44 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Scaling up PostgreSQL in Multiple CPU / Dual Core"
}
] |
[
{
"msg_contents": "Hello, I have a big problem with one of my databases. When i run my \nquery, after a few minutes, the postmaster shows 99% mem i top, and \nthe server becomes totally unresponsive.\n\nI get this message when I try to cancel the query:\n\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\n\nThis works fine on a different machine with the same database \nsettings and about 30% less records. The other machine is running \nPostgreSQL 8.0.3\nThe troubled one is running 8.1.2\n\n\nAny help is greatly appreciated!\n\nThanks\n\n\n\n\n\nThe machine has 2x Intel dual core processors (3GHz) and 2 Gigs of ram.\n\n#----------------------------------------------------------------------- \n----\n# RESOURCE USAGE (except WAL)\n#----------------------------------------------------------------------- \n----\n\n# - Memory -\n\nshared_buffers = 8192 # min 16 or \nmax_connections*2, 8KB each\n#temp_buffers = 1000 # min 100, 8KB each\n#max_prepared_transactions = 5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of \nshared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 4096 # min 64, size in KB\nmaintenance_work_mem = 262144 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\n\n\n\n\nMy query:\n\nSELECT r.id AS id, max(r.name) AS name, max(companyid) AS companyid, \nmax(extract(epoch from r.updated)) as r_updated, hydra.join(co.value) \nAS contacts, hydra.join(ad.postalsite) AS postalsites FROM records r \nLEFT OUTER JOIN contacts co ON(r.id = co.record AND co.type IN \n(1,11,101,3)) LEFT OUTER JOIN addresses ad ON(r.id = ad.record) WHERE \nr.original IS NULL GROUP BY r.id;\n\n\nThe hydra.join function\n-- Aggregates a column to an array\n\nDROP FUNCTION hydra.join_aggregate(text, text) CASCADE;\nDROP FUNCTION hydra.join_aggregate_to_array(text);\n\nCREATE FUNCTION hydra.join_aggregate(text, text) RETURNS text\n AS 'select $1 || ''|'' || $2'\n LANGUAGE sql IMMUTABLE STRICT;\n\nCREATE FUNCTION hydra.join_aggregate_to_array(text) RETURNS text[]\n AS 'SELECT string_to_array($1, ''|'')'\n LANGUAGE sql IMMUTABLE STRICT;\n\nCREATE AGGREGATE hydra.join (\n BASETYPE = text\n,SFUNC = hydra.join_aggregate\n,STYPE = text\n,FINALFUNC = hydra.join_aggregate_to_array\n);\n\n\n\n\n\nTables:\nrecords: 757278 rows\ncontacts: 2256253 rows\naddresses: 741536 rows\n\n\n\n\n\n\n\n\nExplain:\n\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------\nGroupAggregate (cost=636575.63..738618.40 rows=757278 width=75)\n -> Merge Left Join (cost=636575.63..694469.65 rows=1681120 \nwidth=75)\n Merge Cond: (\"outer\".id = \"inner\".record)\n -> Merge Left Join (cost=523248.93..552247.54 \nrows=1681120 width=63)\n Merge Cond: (\"outer\".id = \"inner\".record)\n -> Sort (cost=164044.73..165937.93 rows=757278 \nwidth=48)\n Sort Key: r.id\n -> Seq Scan on records r (cost=0.00..19134.78 \nrows=757278 width=48)\n Filter: (original IS NULL)\n -> Sort (cost=359204.20..363407.00 rows=1681120 \nwidth=19)\n Sort Key: co.record\n -> Seq Scan on contacts co \n(cost=0.00..73438.06 rows=1681120 width=19)\n Filter: ((\"type\" = 1) OR (\"type\" = 11) OR \n(\"type\" = 101) OR (\"type\" = 3))\n -> Sort (cost=113326.70..115180.54 rows=741536 width=16)\n Sort Key: ad.record\n -> Seq Scan on addresses ad (cost=0.00..20801.36 
\nrows=741536 width=16)\n(16 rows)\n\n\n\n\n\n\n\nse_companies=# \\d records;\n Table \"public.records\"\n Column | Type | \nModifiers\n-----------------+-------------------------- \n+------------------------------------------------------\nid | integer | not null default nextval \n('records_id_seq'::regclass)\ncompanyid | character varying(16) | default ''::character \nvarying\ncategories | integer[] |\nnace | integer[] |\nname | character varying(255) | default ''::character \nvarying\nupdated | timestamp with time zone | default \n('now'::text)::timestamp(6) with time zone\nupdater | integer |\nowner | integer |\nloaner | integer |\ninfo | text |\noriginal | integer |\nactive | boolean | default true\ncategoryquality | integer | not null default 0\nsearchwords | character varying(128)[] |\npriority | integer |\ncategorized | timestamp with time zone |\ninfopage | boolean |\nnational | boolean |\npassword | character varying(32) |\nlogin | boolean |\ndeleted | boolean | not null default false\nreference | integer[] |\nnuinfo | text |\nbrands | integer[] |\nvolatile | boolean | not null default false\nIndexes:\n \"records_pkey\" PRIMARY KEY, btree (id) CLUSTER\n \"original_is_null\" btree (original) WHERE original IS NULL\n \"records_category_rdtree_idx\" gist (categories)\n \"records_categoryquality_idx\" btree (categoryquality)\n \"records_lower_name_idx\" btree (lower(name::text))\n \"records_original_idx\" btree (original)\n \"records_owner\" btree (\"owner\")\n \"records_updated_idx\" btree (updated)\nForeign-key constraints:\n \"records_original_fkey\" FOREIGN KEY (original) REFERENCES records \n(id)\n\nse_companies=# \\d contacts;\n Table \"public.contacts\"\n Column | Type | Modifiers\n-------------+------------------------ \n+-------------------------------------------------------\nid | integer | not null default nextval \n('contacts_id_seq'::regclass)\nrecord | integer |\ntype | integer |\nvalue | character varying(128) |\ndescription | character varying(255) |\npriority | integer |\nitescotype | integer |\noriginal | integer |\nsource | integer |\nreference | character varying(32) |\nquality | integer |\ndeleted | boolean | not null default false\nsearchable | boolean | not null default true\nvisible | boolean | not null default true\nIndexes:\n \"contacts_pkey\" PRIMARY KEY, btree (id)\n \"contacts_original_idx\" btree (original)\n \"contacts_quality_idx\" btree (quality)\n \"contacts_record_idx\" btree (record)\n \"contacts_source_reference_idx\" btree (source, reference)\n \"contacts_value_idx\" btree (value)\nForeign-key constraints:\n \"contacts_original_fkey\" FOREIGN KEY (original) REFERENCES \ncontacts(id)\n\nse_companies=# \\d addresses;\n Table \"public.addresses\"\n Column | Type | \nModifiers\n--------------+-------------------------- \n+--------------------------------------------------------\nid | integer | not null default nextval \n('addresses_id_seq'::regclass)\nrecord | integer |\naddress | character varying(128) |\nextra | character varying(32) |\npostalcode | character varying(16) |\npostalsite | character varying(64) |\ndescription | character varying(255) |\nposition | point |\nuncertainty | integer | default 99999999\npriority | integer |\ntype | integer |\nplace | character varying(64) |\nfloor | integer |\nside | character varying(8) |\nhousename | character varying(64) |\noriginal | integer |\nsource | integer |\nreference | character varying(64) |\nquality | integer |\ndeleted | boolean | not null default false\nsearchable | boolean | not null default 
true\nvisible | boolean | not null default true\nmunicipality | integer |\nmap | boolean | not null default true\ngeocoded | timestamp with time zone | default now()\nIndexes:\n \"addresses_pkey\" PRIMARY KEY, btree (id)\n \"addresses_lower_address_postalcode\" btree (lower \n(address::text), lower(postalcode::text))\n \"addresses_original_idx\" btree (original)\n \"addresses_record_idx\" btree (record)\n \"addresses_source_reference_idx\" btree (source, reference)\nForeign-key constraints:\n \"addresses_original_fkey\" FOREIGN KEY (original) REFERENCES \naddresses(id)\n\n",
"msg_date": "Thu, 23 Mar 2006 13:12:08 +0100",
"msg_from": "Bendik Rognlien Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with query, server totally unresponsive"
},
{
"msg_contents": "On Thu, Mar 23, 2006 at 01:12:08PM +0100, Bendik Rognlien Johansen wrote:\n> Hello, I have a big problem with one of my databases. When i run my \n> query, after a few minutes, the postmaster shows 99% mem i top, and \n> the server becomes totally unresponsive.\n\nYou've got a bunch of sorts going on; could you be pushing the machine\ninto swapping?\n\n> I get this message when I try to cancel the query:\n> \n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n \nDid you send a kill of some kind to the backend?\n \n> The machine has 2x Intel dual core processors (3GHz) and 2 Gigs of ram.\n\nUnless I missed some big news recently, no such CPU exists.\nHyperthreading is absolutely not the same as dual core, and many people\nhave found that it's best to disable hyperthreading on database servers.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 04:25:20 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with query, server totally unresponsive"
},
{
"msg_contents": "\n\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Jim C. Nasby\n> Subject: Re: [PERFORM] Problem with query, server totally unresponsive\n> \n> On Thu, Mar 23, 2006 at 01:12:08PM +0100, Bendik Rognlien Johansen\nwrote:\n> > Hello, I have a big problem with one of my databases. When i run my\n> > query, after a few minutes, the postmaster shows 99% mem i top, and\n> > the server becomes totally unresponsive.\n> \n> You've got a bunch of sorts going on; could you be pushing the machine\n> into swapping?\n> \n> > I get this message when I try to cancel the query:\n> >\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. Attempting reset: Failed.\n> \n> Did you send a kill of some kind to the backend?\n> \n> > The machine has 2x Intel dual core processors (3GHz) and 2 Gigs of\nram.\n> \n> Unless I missed some big news recently, no such CPU exists.\n> Hyperthreading is absolutely not the same as dual core, and many\npeople\n> have found that it's best to disable hyperthreading on database\nservers.\n\nMaybe I'm confused by the marketing, but I think those CPUs do exist.\nAccording to New Egg the Pentium D 830 and the Pentium D 930 both are\ndual core Pentiums that run at 3Ghz. It also specifically says these\nprocessors don't support hyper threading, so I believe they really have\ntwo cores. Maybe you are thinking he was talking about a 3Ghz Core\nDuo.\n\nhttp://www.newegg.com/Product/ProductList.asp?Category=34&N=2000340000+5\n0001157+1302820275+1051007392&Submit=ENE\n\nDave\n\n\n",
"msg_date": "Fri, 24 Mar 2006 08:46:54 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with query, server totally unresponsive"
},
{
"msg_contents": "On Fri, Mar 24, 2006 at 08:46:54AM -0600, Dave Dutcher wrote:\n> > > The machine has 2x Intel dual core processors (3GHz) and 2 Gigs of\n> ram.\n> > \n> > Unless I missed some big news recently, no such CPU exists.\n> > Hyperthreading is absolutely not the same as dual core, and many\n> people\n> > have found that it's best to disable hyperthreading on database\n> servers.\n> \n> Maybe I'm confused by the marketing, but I think those CPUs do exist.\n> According to New Egg the Pentium D 830 and the Pentium D 930 both are\n> dual core Pentiums that run at 3Ghz. It also specifically says these\n> processors don't support hyper threading, so I believe they really have\n> two cores. Maybe you are thinking he was talking about a 3Ghz Core\n> Duo.\n\nA quick google shows I'm just behind the times; Intel does have true\ndual-core CPUs now.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 08:57:49 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problem with query, server totally unresponsive"
}
] |
[
{
"msg_contents": "Seems the problem was with the custom aggregate function not being \nable to handle thousands of rows.\n",
"msg_date": "Thu, 23 Mar 2006 14:51:27 +0100",
"msg_from": "Bendik Rognlien Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problem with query, forget previous message"
}
] |
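The hydra.join aggregate defined earlier in this archive builds its state by repeated text concatenation, which copies the whole accumulated string on every input row and therefore degrades roughly quadratically with group size, a plausible match for the "thousands of rows" failure reported above. A sketch of an array-based alternative, modelled on the array_accum example in the PostgreSQL documentation; the aggregate name is invented, and the old-style CREATE AGGREGATE syntax mirrors the original definition:

CREATE AGGREGATE hydra.join_array (
    BASETYPE = text,
    SFUNC = array_append,   -- appends each value to a growing text[] state
    STYPE = text[],
    INITCOND = '{}'
);

-- Usage against the contacts table from the earlier thread:
SELECT record, hydra.join_array(value)
  FROM contacts
 GROUP BY record;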
[
{
"msg_contents": "Hi there.\n\nI have hit a edge in the planning and I hope you can help.\n\nThe system uses a lot of stored procedures to move as much of the \nintelligence into the database layer as possible.\n\nMy (development) query looks like and runs reasonably fast:\n\nexplain analyze select dataset_id, entity, sum(amount) from \nentrydata_current where flow_direction in (select * from \noutflow_direction(dataset_id)) and dataset_id in (122,112,125,89,111) \ngroup by dataset_id, entity;\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=918171.00..918171.30 rows=24 width=19) (actual \ntime=11533.297..11533.340 rows=50 loops=1)\n -> Bitmap Heap Scan on entrydata_current (cost=676.72..917736.04 \nrows=57994 width=19) (actual time=23.921..11425.373 rows=37870 loops=1)\n Recheck Cond: ((dataset_id = 122) OR (dataset_id = 112) OR \n(dataset_id = 125) OR (dataset_id = 89) OR (dataset_id = 111))\n Filter: (subplan)\n -> BitmapOr (cost=676.72..676.72 rows=117633 width=0) (actual \ntime=15.765..15.765 rows=0 loops=1)\n -> Bitmap Index Scan on entrydata_current_dataset_idx \n(cost=0.00..83.97 rows=14563 width=0) (actual time=1.881..1.881 \nrows=13728 loops=1)\n Index Cond: (dataset_id = 122)\n -> Bitmap Index Scan on entrydata_current_dataset_idx \n(cost=0.00..156.12 rows=27176 width=0) (actual time=3.508..3.508 \nrows=25748 loops=1)\n Index Cond: (dataset_id = 112)\n -> Bitmap Index Scan on entrydata_current_dataset_idx \n(cost=0.00..124.24 rows=21498 width=0) (actual time=2.729..2.729 \nrows=20114 loops=1)\n Index Cond: (dataset_id = 125)\n -> Bitmap Index Scan on entrydata_current_dataset_idx \n(cost=0.00..102.20 rows=17771 width=0) (actual time=2.351..2.351 \nrows=17344 loops=1)\n Index Cond: (dataset_id = 89)\n -> Bitmap Index Scan on entrydata_current_dataset_idx \n(cost=0.00..210.19 rows=36625 width=0) (actual time=5.292..5.292 \nrows=37118 loops=1)\n Index Cond: (dataset_id = 111)\n SubPlan\n -> Function Scan on outflow_direction (cost=0.00..12.50 \nrows=1000 width=4) (actual time=0.093..0.095 rows=4 loops=114052)\n Total runtime: 11540.506 ms\n(18 rows)\n\nThe problem is, that the application should not need to know the five \ndataset_ids (it will always know one - its own). So I make a function to \nreturn the five ids and then the query looks like:\n\nexplain select dataset_id, entity, sum(amount) from entrydata_current \nwhere flow_direction in (select * from outflow_direction(dataset_id)) \nand dataset_id in (select * from get_dataset_ids(122)) group by \ndataset_id, entity;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=24672195.68..24672203.88 rows=656 width=19)\n -> Hash IN Join (cost=15.00..24660005.45 rows=1625364 width=19)\n Hash Cond: (\"outer\".dataset_id = \"inner\".get_dataset_ids)\n -> Index Scan using entrydata_current_dataset_idx on \nentrydata_current (cost=0.00..24558405.20 rows=1625364 width=19)\n Filter: (subplan)\n SubPlan\n -> Function Scan on outflow_direction \n(cost=0.00..12.50 rows=1000 width=4)\n -> Hash (cost=12.50..12.50 rows=1000 width=4)\n -> Function Scan on get_dataset_ids (cost=0.00..12.50 \nrows=1000 width=4)\n(9 rows)\n\nwhich does not return within 10 minutes - which is unacceptable.\n\nIs there any way to get a better plan for the second ? 
The planner \nshould really see the two queries as equal, as there are no dependencies \nbetween the outer query and get_dataset_ids (isn't it called constant \nfolding?).\n\nThanks in advance\n\nSvenne",
"msg_date": "Fri, 24 Mar 2006 13:49:17 +0100",
"msg_from": "Svenne Krap <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance problems with multiple layers of functions"
},
{
"msg_contents": "On Fri, Mar 24, 2006 at 01:49:17PM +0100, Svenne Krap wrote:\n> explain select dataset_id, entity, sum(amount) from entrydata_current \n> where flow_direction in (select * from outflow_direction(dataset_id)) \n> and dataset_id in (select * from get_dataset_ids(122)) group by \n> dataset_id, entity;\n<snip> \n> which does not return within 10 minutes - which is unacceptable.\n\n\nThe issue is that the planner has no way to know what's comming back\nfrom get_dataset_ids.\n\nI think your best bet will be to wrap that select into it's own function\nand have that function prepare the query statement, going back to\nhard-coded values. So you could do something like:\n\nSQL := 'SELECT ... AND dataset_id IN (''' || get_dataset_ids(122) ||\n''');' (yeah, I know that won't work as written, but you get the idea).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 24 Mar 2006 06:59:19 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems with multiple layers of functions"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Fri, Mar 24, 2006 at 01:49:17PM +0100, Svenne Krap wrote:\n>> explain select dataset_id, entity, sum(amount) from entrydata_current \n>> where flow_direction in (select * from outflow_direction(dataset_id)) \n>> and dataset_id in (select * from get_dataset_ids(122)) group by \n>> dataset_id, entity;\n\n> The issue is that the planner has no way to know what's comming back\n> from get_dataset_ids.\n\nMore specifically, the first IN is not optimizable into a join because\nthe results of the sub-SELECT depend on the current row of the outer\nquery. The second IN is being optimized fine, but the first one is\nwhat's killing you.\n\nI'd suggest refactoring the functions into something that returns a set\nof outflow_direction/dataset_id pairs, and then phrase the query as\n\nwhere (flow_direction, dataset_id) in (select * from new_func(122))\n\nYou could do it without refactoring:\n\nwhere (flow_direction, dataset_id) in\n (select outflow_direction(id),id from get_dataset_ids(122) id)\n\nhowever this won't work if outflow_direction() is a plpgsql function\nbecause of limitations in plpgsql's set-function support. (It will work\nif outflow_direction() is a SQL function, or you could kluge it as a SQL\nfunction wrapper around a plpgsql function.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Mar 2006 11:02:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems with multiple layers of functions "
},
{
"msg_contents": "Tom Lane wrote:\n> where (flow_direction, dataset_id) in (select * from new_func(122))\n> \n\nIs this form of multi-column IN mentioned anywhere in the docs? I can't \nfind it.\n\nSvenne",
"msg_date": "Fri, 24 Mar 2006 20:16:29 +0100",
"msg_from": "Svenne Krap <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance problems with multiple layers of functions"
},
{
"msg_contents": "Svenne Krap <[email protected]> writes:\n> Tom Lane wrote:\n>> where (flow_direction, dataset_id) in (select * from new_func(122))\n\n> Is this form of multi-column IN mentioned anywhere in the docs? I can't \n> find it.\n\nSure, look under \"Subquery Expressions\". 8.0 and later refer to it as a\nrow_constructor, but it's documented at least as far back as 7.3.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 24 Mar 2006 14:23:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems with multiple layers of functions "
},
{
"msg_contents": "\n\tWhoa !\n\n\tbookmark_delta contains very few rows but is inserted/deleted very \noften... the effect is spectacular !\n\tI guess I'll have to vacuum analyze this table every minute...\n\n\nannonces=# EXPLAIN ANALYZE SELECT id, priority FROM annonces WHERE id IN \n(SELECT annonce_id FROM bookmark_delta);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Hash IN Join (cost=32.12..8607.08 rows=1770 width=6) (actual \ntime=387.011..387.569 rows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n -> Seq Scan on annonces (cost=0.00..7796.00 rows=101500 width=6) \n(actual time=0.022..164.369 rows=101470 loops=1)\n -> Hash (cost=27.70..27.70 rows=1770 width=4) (actual \ntime=0.013..0.013 rows=5 loops=1)\n -> Seq Scan on bookmark_delta (cost=0.00..27.70 rows=1770 \nwidth=4) (actual time=0.004..0.010 rows=5 loops=1)\n Total runtime: 387.627 ms\n(6 lignes)\n\nannonces=# EXPLAIN ANALYZE SELECT id, priority FROM annonces a, (SELECT \nannonce_id FROM bookmark_delta GROUP BY annonce_id) foo WHERE \na.id=foo.annonce_id;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=32.12..10409.31 rows=1770 width=6) (actual \ntime=0.081..0.084 rows=1 loops=1)\n -> HashAggregate (cost=32.12..49.83 rows=1770 width=4) (actual \ntime=0.038..0.040 rows=1 loops=1)\n -> Seq Scan on bookmark_delta (cost=0.00..27.70 rows=1770 \nwidth=4) (actual time=0.024..0.027 rows=5 loops=1)\n -> Index Scan using annonces_pkey on annonces a (cost=0.00..5.83 \nrows=1 width=6) (actual time=0.039..0.040 rows=1 loops=1)\n Index Cond: (a.id = \"outer\".annonce_id)\n Total runtime: 0.163 ms\n(6 lignes)\n\nannonces=# vacuum bookmark_delta ;\nVACUUM\nannonces=# EXPLAIN ANALYZE SELECT id, priority FROM annonces WHERE id IN \n(SELECT annonce_id FROM bookmark_delta);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Hash IN Join (cost=32.12..8607.08 rows=1770 width=6) (actual \ntime=195.284..196.063 rows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".annonce_id)\n -> Seq Scan on annonces (cost=0.00..7796.00 rows=101500 width=6) \n(actual time=0.014..165.626 rows=101470 loops=1)\n -> Hash (cost=27.70..27.70 rows=1770 width=4) (actual \ntime=0.008..0.008 rows=2 loops=1)\n -> Seq Scan on bookmark_delta (cost=0.00..27.70 rows=1770 \nwidth=4) (actual time=0.003..0.004 rows=2 loops=1)\n Total runtime: 196.122 ms\n(6 lignes)\n\nannonces=# vacuum analyze bookmark_delta ;\nVACUUM\nannonces=# EXPLAIN ANALYZE SELECT id, priority FROM annonces WHERE id IN \n(SELECT annonce_id FROM bookmark_delta);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1.02..6.88 rows=1 width=6) (actual time=0.025..0.027 \nrows=1 loops=1)\n -> HashAggregate (cost=1.02..1.03 rows=1 width=4) (actual \ntime=0.011..0.012 rows=1 loops=1)\n -> Seq Scan on bookmark_delta (cost=0.00..1.02 rows=2 width=4) \n(actual time=0.004..0.006 rows=2 loops=1)\n -> Index Scan using annonces_pkey on annonces (cost=0.00..5.83 rows=1 \nwidth=6) (actual time=0.009..0.010 rows=1 loops=1)\n Index Cond: (annonces.id = \"outer\".annonce_id)\n Total runtime: 0.104 ms\n(6 lignes)\n",
"msg_date": "Fri, 24 Mar 2006 23:54:37 +0100",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Query plan from hell"
},
{
"msg_contents": "On 24.03.2006, at 23:54 Uhr, PFC wrote:\n\n> \tbookmark_delta contains very few rows but is inserted/deleted very \n> often... the effect is spectacular !\n> \tI guess I'll have to vacuum analyze this table every minute...\n\nWhat about using autovacuum?\n\ncug\n\n-- \nPharmaLine, Essen, GERMANY\nSoftware and Database Development",
"msg_date": "Sat, 25 Mar 2006 10:52:56 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan from hell"
}
] |
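A sketch of the refactoring Tom Lane suggests above: a single SQL set-returning function yielding (flow_direction, dataset_id) pairs, so the correlated IN becomes an uncorrelated one that the planner can turn into a join. The composite type, the function name outflow_pairs, and the integer column types are assumptions; as noted in the thread, this form requires outflow_direction() to be a SQL function (or a SQL wrapper around the plpgsql one):

CREATE TYPE direction_pair AS (flow_direction integer, dataset_id integer);

CREATE FUNCTION outflow_pairs(integer) RETURNS SETOF direction_pair AS $$
    -- call the set-returning function in the target list, as in Tom's example
    SELECT outflow_direction(id), id FROM get_dataset_ids($1) AS id;
$$ LANGUAGE sql STABLE;

SELECT dataset_id, entity, sum(amount)
  FROM entrydata_current
 WHERE (flow_direction, dataset_id) IN (SELECT * FROM outflow_pairs(122))
 GROUP BY dataset_id, entity;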
[
{
"msg_contents": "Hello!\n\n\tFirst tried some searching around, but did not find anything useful so I gave up and decided to ask here... I am \nwondering how do pair of 1.5GHz Itanium2(4MB cache) stack up against pair of AMD or Intel server CPUs as far as \npostgresql performance is concerned? Is it worthy or not?\n\nThanks in advance.\n\nTomaž\n",
"msg_date": "Sat, 25 Mar 2006 20:11:22 +0100",
"msg_from": "Tomaz Borstnar <[email protected]>",
"msg_from_op": true,
"msg_subject": "experiences needed - how does Itanium2/1.5GHz(4MB) compare to AMD\n\tand Intel CPUs as far as Postgresql is concerned"
},
{
"msg_contents": "On Sat, Mar 25, 2006 at 08:11:22PM +0100, Tomaz Borstnar wrote:\n> Hello!\n> \n> \tFirst tried some searching around, but did not find anything useful \n> \tso I gave up and decided to ask here... I am wondering how do pair of \n> 1.5GHz Itanium2(4MB cache) stack up against pair of AMD or Intel server \n> CPUs as far as postgresql performance is concerned? Is it worthy or not?\n\n-performance would be a better place to ask, so I'm moving this there.\n\nThe general consensus is that your best bet CPU-wise is Opterons.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 27 Mar 2006 06:14:07 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: experiences needed - how does Itanium2/1.5GHz(4MB) compare to AMD\n\tand Intel CPUs as far as Postgresql is concerned"
}
] |
[
{
"msg_contents": "Hi,\n\nI guess this is an age-old 100times answered question, but I didn't find \nthe answer to it yet (neither in the FAQ nor in the mailing list archives).\n\nQuestion: I have a table with 2.5M rows. count(*) on this table is \nrunning 4 minutes long. (dual opteron, 4gig ram, db on 4 disk raid10 \narray (sata, not scsi)) Is this normal? How could I make it run faster?\nMaybe make it run faster for the 2nd time? Which parameters should I \nchange in postgresql.conf and how?\n\n\n\n\n\n-- \n�dv�zlettel,\nG�briel �kos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n",
"msg_date": "Mon, 27 Mar 2006 15:34:32 +0200",
"msg_from": "=?ISO-8859-2?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "count(*) performance"
},
{
"msg_contents": "On Mon, Mar 27, 2006 at 03:34:32PM +0200, G?briel ?kos wrote:\n> Hi,\n> \n> I guess this is an age-old 100times answered question, but I didn't find \n> the answer to it yet (neither in the FAQ nor in the mailing list archives).\n> \n> Question: I have a table with 2.5M rows. count(*) on this table is \n> running 4 minutes long. (dual opteron, 4gig ram, db on 4 disk raid10 \n> array (sata, not scsi)) Is this normal? How could I make it run faster?\n> Maybe make it run faster for the 2nd time? Which parameters should I \n> change in postgresql.conf and how?\n\nFirst, count(*) on PostgreSQL tends to be slow because you can't do\nindex covering[1].\n\nBut in this case, I'd bet money that if it's taking 4 minutes something\nelse is wrong. Have you been vacuuming that table frequently enough?\nWhat's SELECT relpages FROM pg_class WHERE relname='tablename' show?\n\n[1] http://www.pervasive-postgres.com/lp/newsletters/2006/Insights_postgres_Feb.asp#5\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 27 Mar 2006 07:41:04 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
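A quick way to act on the relpages check suggested above: compare the table's page count with its estimated row count, since far more pages than the live rows can account for (8KB per page with the default block size) usually indicates dead-tuple bloat that vacuuming should reclaim. The table name is a placeholder:

SELECT relname, relpages, reltuples, relpages * 8 AS approx_kb
  FROM pg_class
 WHERE relname = 'tablename';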
{
"msg_contents": "Gabriel,\n\nOn 3/27/06 5:34 AM, \"Gábriel Ákos\" <[email protected]> wrote:\n\n> Question: I have a table with 2.5M rows. count(*) on this table is\n> running 4 minutes long. (dual opteron, 4gig ram, db on 4 disk raid10\n> array (sata, not scsi)) Is this normal? How could I make it run faster?\n> Maybe make it run faster for the 2nd time? Which parameters should I\n> change in postgresql.conf and how?\n\nBefore changing anything with your Postgres configuration, you should check\nyour hard drive array performance. All select count(*) does is a sequential\nscan of your data, and if the table is larger than memory, or if it's the\nfirst time you've scanned it, it is limited by your disk speed.\n\nTo test your disk speed, use the following commands and report the times\nhere:\n\n time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=500000 && sync\"\n time dd if=bigfile of=/dev/null bs=8k\n\nIf these are taking a long time, from another session watch the I/O rate\nwith \"vmstat 1\" for a while and report that here.\n\n- Luke \n\n\n",
"msg_date": "Mon, 27 Mar 2006 08:31:58 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "Luke Lonergan wrote:\n\n> To test your disk speed, use the following commands and report the times\n> here:\n> \n> time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=500000 && sync\"\n\nroot@panther:/fast # time bash -c \"dd if=/dev/zero of=bigfile bs=8k\ncount=500000 && sync\"\n500000+0 records in\n500000+0 records out\n4096000000 bytes transferred in 45.469404 seconds (90082553 bytes/sec)\nreal 0m56.880s\nuser 0m0.112s\nsys 0m18.937s\n\n> time dd if=bigfile of=/dev/null bs=8k\n\nroot@panther:/fast # time dd if=bigfile of=/dev/null bs=8k\n500000+0 records in\n500000+0 records out\n4096000000 bytes transferred in 53.542147 seconds (76500481 bytes/sec)\n\nreal 0m53.544s\nuser 0m0.048s\nsys 0m10.637s\n\nI guess these values aren't that bad :)\n\n-- \n�dv�zlettel,\nG�briel �kos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n\n",
"msg_date": "Mon, 27 Mar 2006 20:04:46 +0200",
"msg_from": "=?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "Jim C. Nasby wrote:\n\n> But in this case, I'd bet money that if it's taking 4 minutes something\n> else is wrong. Have you been vacuuming that table frequently enough?\n\nThat gave me an idea. I thought that autovacuum is doing it right, but I\nissued a vacuum full analyze verbose , and it worked all the day.\nAfter that I've tweaked memory settings a bit too (more fsm_pages)\n\nNow:\n\nstaging=# SELECT count(*) from infx.infx_product;\n count\n---------\n 3284997\n(1 row)\n\nTime: 1301.049 ms\n\nAs I saw the output, the database was compressed to 10% of its size :)\nThis table has quite big changes every 4 hour, let's see how it works.\nMaybe I'll have to issue full vacuums from cron regularly.\n\n> What's SELECT relpages FROM pg_class WHERE relname='tablename' show?\n\nThis went to 10% as well, now it's around 156000 pages.\n\nRegards,\nAkos\n\n\n-- \n�dv�zlettel,\nG�briel �kos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n\n",
"msg_date": "Mon, 27 Mar 2006 20:05:31 +0200",
"msg_from": "=?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "Gabriel,\n\nOn 3/27/06 10:05 AM, \"Gábriel Ákos\" <[email protected]> wrote:\n\n> That gave me an idea. I thought that autovacuum is doing it right, but I\n> issued a vacuum full analyze verbose , and it worked all the day.\n> After that I've tweaked memory settings a bit too (more fsm_pages)\n\nOops! I replied to your disk speed before I saw this.\n\nThe only thing is - you probably don't want to do a \"vacuum full\", but\nrather a simple \"vacuum\" should be enough.\n\n- Luke\n\n\n",
"msg_date": "Mon, 27 Mar 2006 10:14:45 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "Luke Lonergan wrote:\n> Gabriel,\n> \n> On 3/27/06 10:05 AM, \"G�briel �kos\" <[email protected]> wrote:\n> \n>> That gave me an idea. I thought that autovacuum is doing it right, but I\n>> issued a vacuum full analyze verbose , and it worked all the day.\n>> After that I've tweaked memory settings a bit too (more fsm_pages)\n> \n> Oops! I replied to your disk speed before I saw this.\n> \n> The only thing is - you probably don't want to do a \"vacuum full\", but\n> rather a simple \"vacuum\" should be enough.\n\nI thought that too. Autovacuum is running on our system but it didn't do \nthe trick. Anyway the issue is solved, thank you all for helping. :)\n\n\n-- \n�dv�zlettel,\nG�briel �kos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n",
"msg_date": "Mon, 27 Mar 2006 20:21:57 +0200",
"msg_from": "=?ISO-8859-1?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "Does that mean that even though autovacuum is turned on, you still \nshould do a regular vacuum analyze periodically?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Mar 27, 2006, at 11:14 AM, Luke Lonergan wrote:\n\n> Gabriel,\n>\n> On 3/27/06 10:05 AM, \"G�briel �kos\" <[email protected]> wrote:\n>\n>> That gave me an idea. I thought that autovacuum is doing it right, \n>> but I\n>> issued a vacuum full analyze verbose , and it worked all the day.\n>> After that I've tweaked memory settings a bit too (more fsm_pages)\n>\n> Oops! I replied to your disk speed before I saw this.\n>\n> The only thing is - you probably don't want to do a \"vacuum full\", but\n> rather a simple \"vacuum\" should be enough.\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>",
"msg_date": "Mon, 27 Mar 2006 12:20:54 -0700",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "G�briel �kos wrote:\n> Luke Lonergan wrote:\n>> Gabriel,\n>>\n>> On 3/27/06 10:05 AM, \"G�briel �kos\" <[email protected]> wrote:\n>>\n>>> That gave me an idea. I thought that autovacuum is doing it right, but I\n>>> issued a vacuum full analyze verbose , and it worked all the day.\n>>> After that I've tweaked memory settings a bit too (more fsm_pages)\n>>\n>> Oops! I replied to your disk speed before I saw this.\n>>\n>> The only thing is - you probably don't want to do a \"vacuum full\", but\n>> rather a simple \"vacuum\" should be enough.\n> \n> I thought that too. Autovacuum is running on our system but it didn't do \n> the trick. Anyway the issue is solved, thank you all for helping. :)\n\nYeah, it would be nice of autovacuum had some way of raising a flag to \nthe admin that given current settings (thresholds, FSM etc...), it's not \nkeeping up with the activity. I don't know how to do this, but I hope \nsomeone else has some good ideas.\n\nMatt\n\n",
"msg_date": "Mon, 27 Mar 2006 14:21:38 -0500",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "Brendan Duddridge wrote:\n> Does that mean that even though autovacuum is turned on, you still \n> should do a regular vacuum analyze periodically?\n\nNo, it probably means you have set FSM settings too low, or not tuned\nthe autovacuum parameters to your specific situation.\n\nA bug in the autovacuum daemon is not unexpected however, so if it\ndoesn't work after tuning, let us know.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 27 Mar 2006 15:35:28 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "On 27.03.2006, at 21:20 Uhr, Brendan Duddridge wrote:\n\n> Does that mean that even though autovacuum is turned on, you still \n> should do a regular vacuum analyze periodically?\n\nIt seems that there are situations where autovacuum does not a really \ngood job.\n\nHowever, in our application I have made stupid design decision which \nI want to change as soon as possible. I have a \"visit count\" column \nin one of the very large tables, so updates are VERY regular. I've \njust checked and saw that autovacuum does a great job with that.\n\nNevertheless I have set up a cron job to do a standard vacuum every \nmonth. I've used vacuum full only once after I did a bulk update of \nabout 200.000 rows ...\n\ncug\n\n-- \nPharmaLine, Essen, GERMANY\nSoftware and Database Development",
"msg_date": "Mon, 27 Mar 2006 21:43:43 +0200",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "Gábriel Ákos wrote:\n\n> I thought that too. Autovacuum is running on our system but it didn't do\n> the trick. Anyway the issue is solved, thank you all for helping. :)\n\nHi, Gabriel, it may be that your Free Space Map (FSM) setting is way to\nlow.\n\nTry increasing it.\n\nBtw, VACUUM outputs a Warning if FSM is not high enough, maybe you can\nfind useful hints in the log file.\n\nHTH\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n\n",
"msg_date": "Tue, 28 Mar 2006 13:49:45 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "On Mon, Mar 27, 2006 at 12:20:54PM -0700, Brendan Duddridge wrote:\n> Does that mean that even though autovacuum is turned on, you still \n> should do a regular vacuum analyze periodically?\n\nDoing a periodic vacuumdb -avz and keeping an eye on the last few lines\nisn't a bad idea. It would also be helpful if there was a log parser\nthat could take a look at the output of a vacuumdb -av and look for any\nproblem areas, such as relations that have a lot of free space in them.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 28 Mar 2006 11:26:35 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
}
] |
[
{
"msg_contents": "Hello everybody ,\nI use PostgreSQL 8.1.3 on a bi-processor Xeon and I would know how to do to enable a parallelism for \nthe execution of queries. Indeed , when I analyse the use of the cpus during a query the result is that for \nsome minutes a cpu is used while the other not and after it is the contrary. So they are not used at the same \ntime and i would know what i have to do in order cpus work together .\nThanks and sorry for my english,\nHello everybody ,\nI use PostgreSQL 8.1.3 on a bi-processor Xeon and I would know how to do to enable a parallelism for \nthe execution of queries. Indeed , when I analyse the use of the cpus during a query the result is that for \nsome minutes a cpu is used while the other not and after it is the contrary. So they are not used at the same \ntime and i would know what i have to do in order cpus work together .\nThanks and sorry for my english,",
"msg_date": "Mon, 27 Mar 2006 16:25:25 +0200 (CEST)",
"msg_from": "luchot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query parallelism"
},
{
"msg_contents": "On Mon, Mar 27, 2006 at 04:25:25PM +0200, luchot wrote:\n> Hello everybody ,\n> I use PostgreSQL 8.1.3 on a bi-processor Xeon and I would know how to do to enable a parallelism for \n> the execution of queries. Indeed , when I analyse the use of the cpus during a query the result is that for \n> some minutes a cpu is used while the other not and after it is the contrary. So they are not used at the same \n> time and i would know what i have to do in order cpus work together .\n> Thanks and sorry for my english,\n\nPostgreSQL has no support for intra-query parallelism at this time.\nGreenplum's MPP might do what you're looking for.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 27 Mar 2006 08:31:48 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query parallelism"
}
] |
[
{
"msg_contents": "[PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]\nI have a simple join on two tables that takes way too long. Can you help\nme understand what's wrong? There are indexes defined on the relevant columns.\nI just did a fresh vacuum --full --analyze on the two tables.\nIs there something I'm not seeing?\n[CPU is 950Mhz AMD, 256MB RAM, 15k rpm scsi disk]\n-- George Young\n\nTable sizes: parameters has 2.1512e+07 tuples, run_opsets has 211745 tuples.\n\nexplain analyze SELECT ro.run, ro.opset_num, p.step_num FROM run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = p.opset_num and ro.run='team9';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=2.16..7957.40 rows=6707 width=22) (actual time=14.986..70197.129 rows=43050 loops=1)\n -> Index Scan using run_opsets_pkey on run_opsets ro (cost=0.00..128.75 rows=71 width=18) (actual time=0.386..62.959 rows=263 loops=1)\n Index Cond: (run = 'team9'::text)\n -> Bitmap Heap Scan on parameters p (cost=2.16..109.93 rows=27 width=22) (actual time=1.591..266.211 rows=164 loops=263)\n Recheck Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n -> Bitmap Index Scan on parameters_idx (cost=0.00..2.16 rows=27 width=0) (actual time=1.153..1.153 rows=164 loops=263)\n Index Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n Total runtime: 70237.727 ms\n(8 rows)\n\n Table \"public.run_opsets\"\n Column | Type | Modifiers\n--------------+-----------------------------+-------------------------\n run | text | not null\n opset | text |\n opset_ver | integer |\n opset_num | integer | not null\n status | opset_status |\n date_started | timestamp without time zone |\n date_done | timestamp without time zone |\n work_started | timestamp without time zone |\n lock_user | text | default 'NO-USER'::text\n lock_pid | integer |\n needs_review | text |\nIndexes:\n \"run_opsets_pkey\" PRIMARY KEY, btree (run, opset_num) CLUSTER\n\n\n-- Table \"public.parameters\"\n Column | Type | Modifiers\n-----------+---------+-------------------------------\n run | text | not null\n opset_num | integer | not null\n opset | text | not null\n opset_ver | integer | not null\n step_num | integer | not null\n step | text | not null\n step_ver | integer | not null\n name | text | not null\n value | text |\n split | boolean | not null default false\n wafers | text[] | not null default '{}'::text[]\nIndexes:\n \"parameters_idx\" btree (run, opset_num, step_num, opset, opset_ver, step, step_ver, name, split, wafers)\n \"parameters_opset_idx\" btree (opset, step, name)\n \"parameters_step_idx\" btree (step, name)\n\n\n\"Are the gods not just?\" \"Oh no, child.\nWhat would become of us if they were?\" (CSL)\n",
"msg_date": "Mon, 27 Mar 2006 13:47:33 -0500",
"msg_from": "george young <[email protected]>",
"msg_from_op": true,
"msg_subject": "simple join uses indexes, very slow"
},
{
"msg_contents": "On Mon, 2006-03-27 at 13:47 -0500, george young wrote:\n\n> Table sizes: parameters has 2.1512e+07 tuples, run_opsets has 211745 tuples.\n> \n> explain analyze SELECT ro.run, ro.opset_num, p.step_num FROM run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = p.opset_num and ro.run='team9';\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=2.16..7957.40 rows=6707 width=22) (actual time=14.986..70197.129 rows=43050 loops=1)\n> -> Index Scan using run_opsets_pkey on run_opsets ro (cost=0.00..128.75 rows=71 width=18) (actual time=0.386..62.959 rows=263 loops=1)\n> Index Cond: (run = 'team9'::text)\n> -> Bitmap Heap Scan on parameters p (cost=2.16..109.93 rows=27 width=22) (actual time=1.591..266.211 rows=164 loops=263)\n> Recheck Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n> -> Bitmap Index Scan on parameters_idx (cost=0.00..2.16 rows=27 width=0) (actual time=1.153..1.153 rows=164 loops=263)\n> Index Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n> Total runtime: 70237.727 ms\n> (8 rows)\n\nThe planner appears to be underestimating the number of rows retrieved\nin both cases, then multiplying them together to make it worse.\nMulti-column indexes provide less accurate estimates (right now).\n\nLooks like a hash join might be faster. What is your work_mem set to?\n\nCan you SET enable_nestloop=off and rerun the EXPLAIN ANALYZE?\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 28 Mar 2006 09:30:54 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "On Tue, 28 Mar 2006 09:30:54 +0100\nSimon Riggs <[email protected]> threw this fish to the penguins:\n\n> On Mon, 2006-03-27 at 13:47 -0500, george young wrote:\n> \n> > Table sizes: parameters has 2.1512e+07 tuples, run_opsets has 211745 tuples.\n> > \n> > explain analyze SELECT ro.run, ro.opset_num, p.step_num FROM run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = p.opset_num and ro.run='team9';\n> > QUERY PLAN\n> > --------------------------------------------------------------------------------------------------------------------------------------------\n> > Nested Loop (cost=2.16..7957.40 rows=6707 width=22) (actual time=14.986..70197.129 rows=43050 loops=1)\n> > -> Index Scan using run_opsets_pkey on run_opsets ro (cost=0.00..128.75 rows=71 width=18) (actual time=0.386..62.959 rows=263 loops=1)\n> > Index Cond: (run = 'team9'::text)\n> > -> Bitmap Heap Scan on parameters p (cost=2.16..109.93 rows=27 width=22) (actual time=1.591..266.211 rows=164 loops=263)\n> > Recheck Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n> > -> Bitmap Index Scan on parameters_idx (cost=0.00..2.16 rows=27 width=0) (actual time=1.153..1.153 rows=164 loops=263)\n> > Index Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n> > Total runtime: 70237.727 ms\n> > (8 rows)\n> \n> The planner appears to be underestimating the number of rows retrieved\n> in both cases, then multiplying them together to make it worse.\n> Multi-column indexes provide less accurate estimates (right now).\n> \n> Looks like a hash join might be faster. What is your work_mem set to?\nwork_mem= 1024\n\n\n> Can you SET enable_nestloop=off and rerun the EXPLAIN ANALYZE?\nnewschm3=> set enable_nestloop=off ;\nSET\nnewschm3=> explain analyze SELECT ro.run, ro.opset_num, p.step_num FROM run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = p.opset_num and ro.run='team9';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=34177.87..34291.36 rows=6707 width=22) (actual time=68421.681..68547.686 rows=43050 loops=1)\n Merge Cond: (\"outer\".opset_num = \"inner\".opset_num)\n -> Sort (cost=130.93..131.11 rows=71 width=18) (actual time=107.744..107.901 rows=263 loops=1)\n Sort Key: ro.opset_num\n -> Index Scan using run_opsets_pkey on run_opsets ro (cost=0.00..128.75 rows=71 width=18) (actual time=57.641..106.096 rows=263 loops=1)\n Index Cond: (run = 'team9'::text)\n -> Sort (cost=34046.94..34070.02 rows=9231 width=22) (actual time=68301.325..68358.087 rows=43050 loops=1)\n Sort Key: p.opset_num\n -> Bitmap Heap Scan on parameters p (cost=272.31..33438.97 rows=9231 width=22) (actual time=526.462..67363.577 rows=43050 loops=1)\n Recheck Cond: ('team9'::text = run)\n -> Bitmap Index Scan on parameters_idx (cost=0.00..272.31 rows=9231 width=0) (actual time=483.500..483.500 rows=43050 loops=1)\n Index Cond: ('team9'::text = run)\n Total runtime: 68595.868 ms\n(13 rows)\n\n-- George Young\n\n-- \n\"Are the gods not just?\" \"Oh no, child.\nWhat would become of us if they were?\" (CSL)\n",
"msg_date": "Tue, 28 Mar 2006 10:22:00 -0500",
"msg_from": "george young <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "Hi, George,\n\ngeorge young wrote:\n\n>>Looks like a hash join might be faster. What is your work_mem set to?\n> \n> work_mem= 1024\n\nThis is 1 Megabyte. By all means, increase it, if possible.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Tue, 28 Mar 2006 17:26:46 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "\n\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of george young\n> Sent: Monday, March 27, 2006 12:48 PM\n> To: [email protected]\n> Subject: [PERFORM] simple join uses indexes, very slow\n> \n[Snip]\n> \n> Indexes:\n> \"parameters_idx\" btree (run, opset_num, step_num, opset,\nopset_ver,\n> step, step_ver, name, split, wafers)\n> \"parameters_opset_idx\" btree (opset, step, name)\n> \"parameters_step_idx\" btree (step, name)\n> \n\n\nHave you tried creating some different indexes on parameters? I don't\nknow if it should matter or not, but I would try some indexes like:\n\n(run, opset_num) //Without all the other columns\n(opset_num, run) //Backwards\n(opset_num)\n\nI don't really know Postgres internals all that well. It just seems to\nme that parameters_idx has a lot of columns this query is not interested\nin. I'd just be curious to see what happens.\n\n\n\n\n",
"msg_date": "Tue, 28 Mar 2006 10:18:25 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "On Tue, Mar 28, 2006 at 10:18:25AM -0600, Dave Dutcher wrote:\n>> \"parameters_idx\" btree (run, opset_num, step_num, opset,\n> opset_ver,\n>> step, step_ver, name, split, wafers)\n>> \"parameters_opset_idx\" btree (opset, step, name)\n>> \"parameters_step_idx\" btree (step, name)\n> Have you tried creating some different indexes on parameters? I don't\n> know if it should matter or not, but I would try some indexes like:\n> \n> (run, opset_num) //Without all the other columns\n> (opset_num, run) //Backwards\n> (opset_num)\n\nAn index on (A,B,C) can be used for a query on (A,B) or (A), so it doesn't\nreally matter. It isn't usable for a query on (B), (C) or (B,C), though. (The\nindex rows will get bigger, of course, so you'll need more I/O if you want to\nscan large parts of it, but I guess that's beside the point.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 28 Mar 2006 18:29:08 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Steinar H. Gunderson\n> Sent: Tuesday, March 28, 2006 10:29 AM\n> \n> An index on (A,B,C) can be used for a query on (A,B) or (A), so it\ndoesn't\n> really matter. It isn't usable for a query on (B), (C) or (B,C),\nthough.\n> (The\n> index rows will get bigger, of course, so you'll need more I/O if you\nwant\n> to\n> scan large parts of it, but I guess that's beside the point.)\n\n\nI guess what I am really curious about is why was the OP getting an\nexpensive sort when the planner tried a merge join? Most of the time\nwas spent sorting the parameters parameters table by opset_num even\nthough opset_num is indexed. Isn't Postgres able to walk the index\ninstead of sorting? I was wondering if maybe Postgres wasn't\nrecognizing that it could just walk the index because the opset_num\ncolumn isn't the first in the index.\n\n\n\n\n",
"msg_date": "Tue, 28 Mar 2006 11:20:19 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "On Tue, Mar 28, 2006 at 06:29:08PM +0200, Steinar H. Gunderson wrote:\n> On Tue, Mar 28, 2006 at 10:18:25AM -0600, Dave Dutcher wrote:\n> >> \"parameters_idx\" btree (run, opset_num, step_num, opset,\n> > opset_ver,\n> >> step, step_ver, name, split, wafers)\n> >> \"parameters_opset_idx\" btree (opset, step, name)\n> >> \"parameters_step_idx\" btree (step, name)\n> > Have you tried creating some different indexes on parameters? I don't\n> > know if it should matter or not, but I would try some indexes like:\n> > \n> > (run, opset_num) //Without all the other columns\n> > (opset_num, run) //Backwards\n> > (opset_num)\n> \n> An index on (A,B,C) can be used for a query on (A,B) or (A), so it doesn't\n> really matter. It isn't usable for a query on (B), (C) or (B,C), though. (The\n> index rows will get bigger, of course, so you'll need more I/O if you want to\n> scan large parts of it, but I guess that's beside the point.)\n\nNote that given how statistics currenly work, there are many situations\nwhere the planner will refuse to use a multi-column index. This probably\nwon't change until there's some concept of multi-column statistics, at\nleast for multi-column indexes.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 28 Mar 2006 11:30:47 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "On Tue, Mar 28, 2006 at 11:20:19AM -0600, Dave Dutcher wrote:\n> I guess what I am really curious about is why was the OP getting an\n> expensive sort when the planner tried a merge join?\n\nA merge join requires sorted inputs.\n\n> Most of the time was spent sorting the parameters parameters table by\n> opset_num even though opset_num is indexed. Isn't Postgres able to walk the\n> index instead of sorting?\n\nThe time of an index scan vs. a sequential scan + sort depends on several\nfactors, so it's not just a matter of walking the index whenever there is one.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Tue, 28 Mar 2006 19:34:43 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "On Tue, 2006-03-28 at 10:22 -0500, george young wrote:\n\n> work_mem= 1024\n\nSet that higher.\n\nTry a couple of other plans using enable_* and let us have the EXPLAIN\nANALYZE plans.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Tue, 28 Mar 2006 19:17:49 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "On Tue, 28 Mar 2006 19:17:49 +0100\nSimon Riggs <[email protected]> threw this fish to the penguins:\n\n> On Tue, 2006-03-28 at 10:22 -0500, george young wrote:\n> \n> > work_mem= 1024\n> \n> Set that higher.\n> \n> Try a couple of other plans using enable_* and let us have the EXPLAIN\n> ANALYZE plans.\nI tried this, but it doesn't seem to have made much difference that I can see:\n\nnewschm3=> show work_mem;\n work_mem\n----------\n 8024\n\nnewschm3=> explain analyze SELECT ro.run, ro.opset_num, p.step_num FROM run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = p.opset_num and ro.run='team9';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=2.16..7957.40 rows=6707 width=22) (actual time=292.739..107672.525 rows=43050 loops=1)\n -> Index Scan using run_opsets_pkey on run_opsets ro (cost=0.00..128.75 rows=71 width=18) (actual time=115.134..197.818 rows=263 loops=1)\n Index Cond: (run = 'team9'::text)\n -> Bitmap Heap Scan on parameters p (cost=2.16..109.93 rows=27 width=22) (actual time=2.559..408.125 rows=164 loops=263)\n Recheck Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n -> Bitmap Index Scan on parameters_idx (cost=0.00..2.16 rows=27 width=0) (actual time=2.099..2.099 rows=164 loops=263)\n Index Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n Total runtime: 107860.493 ms\n(8 rows)\n\nnewschm3=> shoe enable_nestloop;\nERROR: syntax error at or near \"shoe\" at character 1\nLINE 1: shoe enable_nestloop;\n ^\nnewschm3=> show enable_nestloop;\n enable_nestloop\n-----------------\n on\n(1 row)\n\nnewschm3=> set enable_nestloop=off;\nSET\nnewschm3=> explain analyze SELECT ro.run, ro.opset_num, p.step_num FROM run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = p.opset_num and ro.run='team9';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=34177.87..34291.36 rows=6707 width=22) (actual time=64654.744..64760.875 rows=43050 loops=1)\n Merge Cond: (\"outer\".opset_num = \"inner\".opset_num)\n -> Sort (cost=130.93..131.11 rows=71 width=18) (actual time=62.177..62.333 rows=263 loops=1)\n Sort Key: ro.opset_num\n -> Index Scan using run_opsets_pkey on run_opsets ro (cost=0.00..128.75 rows=71 width=18) (actual time=40.415..55.745 rows=263 loops=1)\n Index Cond: (run = 'team9'::text)\n -> Sort (cost=34046.94..34070.02 rows=9231 width=22) (actual time=64592.526..64615.228 rows=43050 loops=1)\n Sort Key: p.opset_num\n -> Bitmap Heap Scan on parameters p (cost=272.31..33438.97 rows=9231 width=22) (actual time=333.975..64126.200 rows=43050 loops=1)\n Recheck Cond: ('team9'::text = run)\n -> Bitmap Index Scan on parameters_idx (cost=0.00..272.31 rows=9231 width=0) (actual time=309.199..309.199 rows=43050 loops=1)\n Index Cond: ('team9'::text = run)\n Total runtime: 64919.714 ms\n(13 rows)\n\n-- \n\"Are the gods not just?\" \"Oh no, child.\nWhat would become of us if they were?\" (CSL)\n",
"msg_date": "Tue, 28 Mar 2006 15:56:56 -0500",
"msg_from": "george young <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Steinar H. Gunderson\n> A merge join requires sorted inputs.\n> \n> > Most of the time was spent sorting the parameters parameters table\nby\n> > opset_num even though opset_num is indexed. Isn't Postgres able to\nwalk\n> the\n> > index instead of sorting?\n> \n> The time of an index scan vs. a sequential scan + sort depends on\nseveral\n> factors, so it's not just a matter of walking the index whenever there\nis\n> one.\n\nI was just looking this over again and I realized I misread the query\nplan. The slowest step was the Bitmap Heap Scan not the sort. (The\nsort was relatively fast.)\n\n\n\n",
"msg_date": "Tue, 28 Mar 2006 19:30:23 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "george young wrote:\n> [PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]\n> I have a simple join on two tables that takes way too long. Can you help\n> me understand what's wrong? There are indexes defined on the relevant columns.\n> I just did a fresh vacuum --full --analyze on the two tables.\n> Is there something I'm not seeing?\n> [CPU is 950Mhz AMD, 256MB RAM, 15k rpm scsi disk]\n> -- George Young\n> \n> Table sizes: parameters has 2.1512e+07 tuples, run_opsets has 211745 tuples.\n> \n> explain analyze SELECT ro.run, ro.opset_num, p.step_num FROM run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = p.opset_num and ro.run='team9';\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=2.16..7957.40 rows=6707 width=22) (actual time=14.986..70197.129 rows=43050 loops=1)\n> -> Index Scan using run_opsets_pkey on run_opsets ro (cost=0.00..128.75 rows=71 width=18) (actual time=0.386..62.959 rows=263 loops=1)\n> Index Cond: (run = 'team9'::text)\n> -> Bitmap Heap Scan on parameters p (cost=2.16..109.93 rows=27 width=22) (actual time=1.591..266.211 rows=164 loops=263)\n> Recheck Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n> -> Bitmap Index Scan on parameters_idx (cost=0.00..2.16 rows=27 width=0) (actual time=1.153..1.153 rows=164 loops=263)\n> Index Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n> Total runtime: 70237.727 ms\n> (8 rows)\n> \n> Table \"public.run_opsets\"\n> Column | Type | Modifiers\n> --------------+-----------------------------+-------------------------\n> run | text | not null\n> opset | text |\n> opset_ver | integer |\n> opset_num | integer | not null\n> status | opset_status |\n> date_started | timestamp without time zone |\n> date_done | timestamp without time zone |\n> work_started | timestamp without time zone |\n> lock_user | text | default 'NO-USER'::text\n> lock_pid | integer |\n> needs_review | text |\n> Indexes:\n> \"run_opsets_pkey\" PRIMARY KEY, btree (run, opset_num) CLUSTER\n> \n> \n> -- Table \"public.parameters\"\n> Column | Type | Modifiers\n> -----------+---------+-------------------------------\n> run | text | not null\n> opset_num | integer | not null\n> opset | text | not null\n> opset_ver | integer | not null\n> step_num | integer | not null\n> step | text | not null\n> step_ver | integer | not null\n> name | text | not null\n> value | text |\n> split | boolean | not null default false\n> wafers | text[] | not null default '{}'::text[]\n> Indexes:\n> \"parameters_idx\" btree (run, opset_num, step_num, opset, opset_ver, step, step_ver, name, split, wafers)\n> \"parameters_opset_idx\" btree (opset, step, name)\n> \"parameters_step_idx\" btree (step, name)\n\nMore for my own information (because nobody else has suggested it), \nwould it make a difference if 'run' was a varchar field rather than text?\n\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Wed, 29 Mar 2006 16:15:52 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "\nIf your looking for suggestions, I would suggest updating the 8.1.x you \nhave installed to the latest version, as of typing this is 8.1.3 ;) Most \nnotable is some of the -bug- fixes that are in since 8.1.0, for example;\n\n* Fix incorrect optimizations of outer-join conditions (Tom)\n\nYou know, minor point releases aren't adding new features or changing \nbasic functionality, they are pure and simple bugfixes. If I was in \n-your- position, I would run (don't walk ;) and install upto 8.1.3\n\nof course, thats jst my 2c, feel free to ignore :D\nRegards\nStef\n\nChris wrote:\n\n> george young wrote:\n>\n>> [PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]\n>> I have a simple join on two tables that takes way too long. Can you \n>> help\n>> me understand what's wrong? There are indexes defined on the \n>> relevant columns.\n>> I just did a fresh vacuum --full --analyze on the two tables.\n>> Is there something I'm not seeing?\n>> [CPU is 950Mhz AMD, 256MB RAM, 15k rpm scsi disk]\n>> -- George Young\n>>\n>> Table sizes: parameters has 2.1512e+07 tuples, run_opsets has 211745 \n>> tuples.\n>>\n>> explain analyze SELECT ro.run, ro.opset_num, p.step_num FROM \n>> run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = \n>> p.opset_num and ro.run='team9';\n>> \n>> QUERY PLAN\n>> -------------------------------------------------------------------------------------------------------------------------------------------- \n>>\n>> Nested Loop (cost=2.16..7957.40 rows=6707 width=22) (actual \n>> time=14.986..70197.129 rows=43050 loops=1)\n>> -> Index Scan using run_opsets_pkey on run_opsets ro \n>> (cost=0.00..128.75 rows=71 width=18) (actual time=0.386..62.959 \n>> rows=263 loops=1)\n>> Index Cond: (run = 'team9'::text)\n>> -> Bitmap Heap Scan on parameters p (cost=2.16..109.93 rows=27 \n>> width=22) (actual time=1.591..266.211 rows=164 loops=263)\n>> Recheck Cond: (('team9'::text = p.run) AND \n>> (\"outer\".opset_num = p.opset_num))\n>> -> Bitmap Index Scan on parameters_idx (cost=0.00..2.16 \n>> rows=27 width=0) (actual time=1.153..1.153 rows=164 loops=263)\n>> Index Cond: (('team9'::text = p.run) AND \n>> (\"outer\".opset_num = p.opset_num))\n>> Total runtime: 70237.727 ms\n>> (8 rows)\n>>\n>> Table \"public.run_opsets\"\n>> Column | Type | Modifiers\n>> --------------+-----------------------------+-------------------------\n>> run | text | not null\n>> opset | text |\n>> opset_ver | integer |\n>> opset_num | integer | not null\n>> status | opset_status |\n>> date_started | timestamp without time zone |\n>> date_done | timestamp without time zone |\n>> work_started | timestamp without time zone |\n>> lock_user | text | default 'NO-USER'::text\n>> lock_pid | integer |\n>> needs_review | text |\n>> Indexes:\n>> \"run_opsets_pkey\" PRIMARY KEY, btree (run, opset_num) CLUSTER\n>>\n>>\n>> -- Table \"public.parameters\"\n>> Column | Type | Modifiers\n>> -----------+---------+-------------------------------\n>> run | text | not null\n>> opset_num | integer | not null\n>> opset | text | not null\n>> opset_ver | integer | not null\n>> step_num | integer | not null\n>> step | text | not null\n>> step_ver | integer | not null\n>> name | text | not null\n>> value | text |\n>> split | boolean | not null default false\n>> wafers | text[] | not null default '{}'::text[]\n>> Indexes:\n>> \"parameters_idx\" btree (run, opset_num, step_num, opset, \n>> opset_ver, step, step_ver, name, split, wafers)\n>> \"parameters_opset_idx\" btree (opset, step, name)\n>> 
\"parameters_step_idx\" btree (step, name)\n>\n>\n> More for my own information (because nobody else has suggested it), \n> would it make a difference if 'run' was a varchar field rather than text?\n>\n>\n\n",
"msg_date": "Wed, 29 Mar 2006 01:08:15 -0500",
"msg_from": "stef <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "On Wed, 29 Mar 2006 01:08:15 -0500\nstef <[email protected]> threw this fish to the penguins:\n\n> \n> If your looking for suggestions, I would suggest updating the 8.1.x you \n> have installed to the latest version, as of typing this is 8.1.3 ;) Most \n> notable is some of the -bug- fixes that are in since 8.1.0, for example;\n> \n> * Fix incorrect optimizations of outer-join conditions (Tom)\n> \n> You know, minor point releases aren't adding new features or changing \n> basic functionality, they are pure and simple bugfixes. If I was in \n> -your- position, I would run (don't walk ;) and install upto 8.1.3\n\nI just did this(8.1.3). I also moved the server to a host with more\nram and faster cpu. And I did cluster on the main index of the large\nparameters table. The result is less than a second instead of 70\nseconds. \n\nSorry I didn't have time to isolate the individual effects\nof the above changes, but sometimes you just have to do \"a bunch of\ngood things\" and move on. For your enjoyment here's the latest analyze:\n\nnewschm3=> explain analyze SELECT ro.run, ro.opset_num, p.step_num FROM run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = p.opset_num and ro.run='team9';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..6194.18 rows=9186 width=22) (actual time=0.477..175.554 rows=43050 loops=1)\n -> Index Scan using run_opsets_pkey on run_opsets ro (cost=0.00..122.27 rows=68 width=18) (actual time=0.222..1.093 rows=263 loops=1)\n Index Cond: (run = 'team9'::text)\n -> Index Scan using parameters_idx on parameters p (cost=0.00..88.72 rows=46 width=22) (actual time=0.023..0.498 rows=164 loops=263)\n Index Cond: (('team9'::text = p.run) AND (\"outer\".opset_num = p.opset_num))\n Total runtime: 190.821 ms\n\nThank you all very much for you help!\n\n-- George Young\n\n> \n> of course, thats jst my 2c, feel free to ignore :D\n> Regards\n> Stef\n> \n> Chris wrote:\n> \n> > george young wrote:\n> >\n> >> [PostgreSQL 8.1.0 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1]\n> >> I have a simple join on two tables that takes way too long. Can you \n> >> help\n> >> me understand what's wrong? 
There are indexes defined on the \n> >> relevant columns.\n> >> I just did a fresh vacuum --full --analyze on the two tables.\n> >> Is there something I'm not seeing?\n> >> [CPU is 950Mhz AMD, 256MB RAM, 15k rpm scsi disk]\n> >> -- George Young\n> >>\n> >> Table sizes: parameters has 2.1512e+07 tuples, run_opsets has 211745 \n> >> tuples.\n> >>\n> >> explain analyze SELECT ro.run, ro.opset_num, p.step_num FROM \n> >> run_opsets ro, parameters p WHERE ro.run = p.run AND ro.opset_num = \n> >> p.opset_num and ro.run='team9';\n> >> \n> >> QUERY PLAN\n> >> -------------------------------------------------------------------------------------------------------------------------------------------- \n> >>\n> >> Nested Loop (cost=2.16..7957.40 rows=6707 width=22) (actual \n> >> time=14.986..70197.129 rows=43050 loops=1)\n> >> -> Index Scan using run_opsets_pkey on run_opsets ro \n> >> (cost=0.00..128.75 rows=71 width=18) (actual time=0.386..62.959 \n> >> rows=263 loops=1)\n> >> Index Cond: (run = 'team9'::text)\n> >> -> Bitmap Heap Scan on parameters p (cost=2.16..109.93 rows=27 \n> >> width=22) (actual time=1.591..266.211 rows=164 loops=263)\n> >> Recheck Cond: (('team9'::text = p.run) AND \n> >> (\"outer\".opset_num = p.opset_num))\n> >> -> Bitmap Index Scan on parameters_idx (cost=0.00..2.16 \n> >> rows=27 width=0) (actual time=1.153..1.153 rows=164 loops=263)\n> >> Index Cond: (('team9'::text = p.run) AND \n> >> (\"outer\".opset_num = p.opset_num))\n> >> Total runtime: 70237.727 ms\n> >> (8 rows)\n> >>\n> >> Table \"public.run_opsets\"\n> >> Column | Type | Modifiers\n> >> --------------+-----------------------------+-------------------------\n> >> run | text | not null\n> >> opset | text |\n> >> opset_ver | integer |\n> >> opset_num | integer | not null\n> >> status | opset_status |\n> >> date_started | timestamp without time zone |\n> >> date_done | timestamp without time zone |\n> >> work_started | timestamp without time zone |\n> >> lock_user | text | default 'NO-USER'::text\n> >> lock_pid | integer |\n> >> needs_review | text |\n> >> Indexes:\n> >> \"run_opsets_pkey\" PRIMARY KEY, btree (run, opset_num) CLUSTER\n> >>\n> >>\n> >> -- Table \"public.parameters\"\n> >> Column | Type | Modifiers\n> >> -----------+---------+-------------------------------\n> >> run | text | not null\n> >> opset_num | integer | not null\n> >> opset | text | not null\n> >> opset_ver | integer | not null\n> >> step_num | integer | not null\n> >> step | text | not null\n> >> step_ver | integer | not null\n> >> name | text | not null\n> >> value | text |\n> >> split | boolean | not null default false\n> >> wafers | text[] | not null default '{}'::text[]\n> >> Indexes:\n> >> \"parameters_idx\" btree (run, opset_num, step_num, opset, \n> >> opset_ver, step, step_ver, name, split, wafers)\n> >> \"parameters_opset_idx\" btree (opset, step, name)\n> >> \"parameters_step_idx\" btree (step, name)\n> >\n> >\n> > More for my own information (because nobody else has suggested it), \n> > would it make a difference if 'run' was a varchar field rather than text?\n> >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n-- \n\"Are the gods not just?\" \"Oh no, child.\nWhat would become of us if they were?\" (CSL)\n",
"msg_date": "Wed, 29 Mar 2006 10:29:15 -0500",
"msg_from": "george young <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: simple join uses indexes, very slow"
},
{
"msg_contents": "On Wed, Mar 29, 2006 at 01:08:15AM -0500, stef wrote:\n> \n> If your looking for suggestions, I would suggest updating the 8.1.x you \n> have installed to the latest version, as of typing this is 8.1.3 ;) Most \n> notable is some of the -bug- fixes that are in since 8.1.0, for example;\n> \n> * Fix incorrect optimizations of outer-join conditions (Tom)\n> \n> You know, minor point releases aren't adding new features or changing \n> basic functionality, they are pure and simple bugfixes. If I was in \n> -your- position, I would run (don't walk ;) and install upto 8.1.3\n\nMore important, there are data loss bugfixes between 8.1.0 and 8.1.3.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 31 Mar 2006 09:47:53 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: simple join uses indexes, very slow"
}
] |
[
{
"msg_contents": "Hello to all on the list.\n\nI have developed a product that sits between the database and an\napplication that handles the storage of large binary data.\n\nThe system is fast, but I'm feeling bad as to think that I have\ncompletely reinvented the weel on this case.\n\nYou see, the engine does just stores the large data in \"containers\"\nthat are directly on the filesystem instead of going to the database\ndirectly (since some of this list's members told me it would make the\ndatabase really slow to store the data directly).\n\nSo now I have a huge dilema as to continue this reinvention or use\ndirect large objects.\n\nThe database is holding large ammounts of digital video, and I am\nwanting to put these directly into the database. What performance\nguidelines would you all give seeing my position today?\n\nThanks for all your speed-up tips,\nRodrigo\n",
"msg_date": "Mon, 27 Mar 2006 12:16:18 -0800",
"msg_from": "\"Rodrigo Madera\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Large Binary Objects Middleware"
},
{
"msg_contents": "\n\"\"Rodrigo Madera\"\" <[email protected]> wrote\n>\n> The database is holding large ammounts of digital video, and I am\n> wanting to put these directly into the database. What performance\n> guidelines would you all give seeing my position today?\n>\n\nIMHO, if you don't need transaction semantics, don't put these big things \ninto database. Instead, add a field in your table and put the link to the \nbig things in it.\n\nRegards,\nQingqing \n\n\n",
"msg_date": "Sun, 2 Apr 2006 22:36:17 +0800",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Large Binary Objects Middleware"
}
] |
[
{
"msg_contents": "This is where a \"last_vacuumed\" (and \"last_analyzed\") column in\npg_statistic(?) would come in handy. Each time vacuum or analyze has\nfinished, update the row for the specific table that was\nvacuumed/analyzed with a timestamp in the last_vacuumed/last_analyzed\ncolumn. No more guessing \"maybe I haven't vacuumed/analyzed in a while\",\nand each time a user complains about bad performance, one could request\nthe user to do a \"select s.last_vacuumed, s.last_analyzed from\npg_statistic s, pg_attribute a, pg_class c where ...\"\n\nIt SOUNDS easy to implement, but that has fooled me before... :-)\n\n- Mikael\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Guido\nNeitzer\nSent: den 27 mars 2006 21:44\nTo: Brendan Duddridge\nCc: Postgresql Performance\nSubject: Re: [PERFORM] count(*) performance\n\n\nOn 27.03.2006, at 21:20 Uhr, Brendan Duddridge wrote:\n\n> Does that mean that even though autovacuum is turned on, you still \n> should do a regular vacuum analyze periodically?\n\nIt seems that there are situations where autovacuum does not a really \ngood job.\n\nHowever, in our application I have made stupid design decision which \nI want to change as soon as possible. I have a \"visit count\" column \nin one of the very large tables, so updates are VERY regular. I've \njust checked and saw that autovacuum does a great job with that.\n\nNevertheless I have set up a cron job to do a standard vacuum every \nmonth. I've used vacuum full only once after I did a bulk update of \nabout 200.000 rows ...\n\ncug\n\n-- \nPharmaLine, Essen, GERMANY\nSoftware and Database Development\n\n\n",
"msg_date": "Mon, 27 Mar 2006 23:57:43 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "Mikael Carneholm wrote:\n> This is where a \"last_vacuumed\" (and \"last_analyzed\") column in\n> pg_statistic(?) would come in handy. Each time vacuum or analyze has\n> finished, update the row for the specific table that was\n> vacuumed/analyzed with a timestamp in the last_vacuumed/last_analyzed\n> column. No more guessing \"maybe I haven't vacuumed/analyzed in a while\",\n> and each time a user complains about bad performance, one could request\n> the user to do a \"select s.last_vacuumed, s.last_analyzed from\n> pg_statistic s, pg_attribute a, pg_class c where ...\"\n> \n> It SOUNDS easy to implement, but that has fooled me before... :-)\n\n\nIt is fairly easy to implement, however it has been discussed before and \ndecided that it wasn't necessary. What the system cares about is how \nlong it's been since the last vacuum in terms of XIDs not time. Storing \na timestamp would make it more human readable, but I'm not sure the \npowers that be want to add two new columns to some system table to \naccommodate this.\n\nMatt\n",
"msg_date": "Mon, 27 Mar 2006 17:43:02 -0500",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance"
},
{
"msg_contents": "\"Matthew T. O'Connor\" <[email protected]> writes:\n> It is fairly easy to implement, however it has been discussed before and \n> decided that it wasn't necessary. What the system cares about is how \n> long it's been since the last vacuum in terms of XIDs not time.\n\nI think Alvaro is intending to do the latter (store per-table vacuum xid\ninfo) for 8.2.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 27 Mar 2006 18:13:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: count(*) performance "
}
] |
[
{
"msg_contents": "I think it is definitely necessary from an administration point of view - as an administrator, I want to know:\n\n1) Are there any stats (at all) in a schema\n2) Are there any stats on the table that slow_query_foo is targeting\n3) If I have stats, how recent are they\n4) Could it be that there are a lot of dead tuples lying around (given the amount of traffic I know I have)\n\nThese would be (are always!) the first questions I ask myself when I'm about to identify performance problems in an app, don't know how other people do though :)\n\nMaybe something I'll try to look into this weekend, if I can spare some time.\n\n- Mikael\n\n\n-----Original Message-----\nFrom: Matthew T. O'Connor [mailto:[email protected]]\nSent: den 28 mars 2006 00:43\nTo: Mikael Carneholm\nCc: Postgresql Performance\nSubject: Re: [PERFORM] count(*) performance\n\n\nMikael Carneholm wrote:\n> This is where a \"last_vacuumed\" (and \"last_analyzed\") column in\n> pg_statistic(?) would come in handy. Each time vacuum or analyze has\n> finished, update the row for the specific table that was\n> vacuumed/analyzed with a timestamp in the last_vacuumed/last_analyzed\n> column. No more guessing \"maybe I haven't vacuumed/analyzed in a while\",\n> and each time a user complains about bad performance, one could request\n> the user to do a \"select s.last_vacuumed, s.last_analyzed from\n> pg_statistic s, pg_attribute a, pg_class c where ...\"\n> \n> It SOUNDS easy to implement, but that has fooled me before... :-)\n\n\nIt is fairly easy to implement, however it has been discussed before and \ndecided that it wasn't necessary. What the system cares about is how \nlong it's been since the last vacuum in terms of XIDs not time. Storing \na timestamp would make it more human readable, but I'm not sure the \npowers that be want to add two new columns to some system table to \naccommodate this.\n\nMatt\n",
"msg_date": "Tue, 28 Mar 2006 01:17:43 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: count(*) performance"
}
] |
[
{
"msg_contents": "Hello,\n\nI have just installed PostGreSql 8.1 on my Windows XP PC. I created a simple \ntable called users with 4 varchar fields.\n\nI am using the OleDb connection driver. In my .NET application, I populate \n3000 records into the table to test PostGreSql's speed. It takes about 3-4 \nseconds.\n\nEven worse is displaying the 3000 records in a ListView control. It takes \nabout 7 seconds. In MySQL, the exact same table and application displays the \nsame 3000 records in under 1/2 second!!!\n\nWhy is PostGreSql so slow compared to MySQL? What do you recommend I do to \nspeed up? It is such a simple query and small database. \n\n\n",
"msg_date": "Tue, 28 Mar 2006 14:14:00 +0200",
"msg_from": "\"Greg Quinn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "On 3/28/06, Greg Quinn <[email protected]> wrote:\n> I am using the OleDb connection driver. In my .NET application, I populate\n> 3000 records into the table to test PostGreSql's speed. It takes about 3-4\n> seconds.\n\nhave you tried:\n1. npgsql .net data provider\n2. odbc ado.net bridge\n\nmerlin\n",
"msg_date": "Tue, 28 Mar 2006 09:10:46 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "On Tue, Mar 28, 2006 at 02:14:00PM +0200, Greg Quinn wrote:\n> Hello,\n> \n> I have just installed PostGreSql 8.1 on my Windows XP PC. I created a \n> simple table called users with 4 varchar fields.\n> \n> I am using the OleDb connection driver. In my .NET application, I populate \n> 3000 records into the table to test PostGreSql's speed. It takes about 3-4 \n> seconds.\n> \n> Even worse is displaying the 3000 records in a ListView control. It takes \n> about 7 seconds. In MySQL, the exact same table and application displays \n> the same 3000 records in under 1/2 second!!!\n\nHave you vacuumed recently? This smells like it might be a table bloat\nproblem.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 28 Mar 2006 11:32:13 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Tue, Mar 28, 2006 at 02:14:00PM +0200, Greg Quinn wrote:\n>> Hello,\n>>\n>> I have just installed PostGreSql 8.1 on my Windows XP PC. I created a \n>> simple table called users with 4 varchar fields.\n>>\n>> I am using the OleDb connection driver. In my .NET application, I populate \n>> 3000 records into the table to test PostGreSql's speed. It takes about 3-4 \n>> seconds.\n>>\n>> Even worse is displaying the 3000 records in a ListView control. It takes \n>> about 7 seconds. In MySQL, the exact same table and application displays \n>> the same 3000 records in under 1/2 second!!!\n> \n> Have you vacuumed recently? This smells like it might be a table bloat\n> problem.\n\n\nThis could be a lot of things...\n\nHe is probably running the default postgresql.conf which is going to \nperform horribly.\n\nWhat is your work_mem? shared_buffers?\n\nAre you passing a where clause? If so is there an index on the field \nthat is subject to the clause?\n\nWhen you do the population, is it via inserts or copy?\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\n Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 28 Mar 2006 09:52:23 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "The query is,\n\nselect * from users\n\nwhich returns 4 varchar fields, there is no where clause\n\nYes, I am running the default postgres config. Basically I have been a MySQL \nuser and thought I would like to check out PostGreSql. So I did a quick \nperformance test. The performance was so different that I thought PostGreSQL \nwas nothing compared to MySQL, but now it seems its just a few configuration \noptions. Strange how the defult config would be so slow...\n\nI have begun reading the documentation but am not too sure what options I \ncan quickly tweak to get good performance, could somebody give me some tips?\n\nThanks\n\n\n----- Original Message ----- \nFrom: \"Joshua D. Drake\" <[email protected]>\nTo: \"Jim C. Nasby\" <[email protected]>\nCc: \"Greg Quinn\" <[email protected]>; <[email protected]>\nSent: Tuesday, March 28, 2006 7:52 PM\nSubject: Re: [PERFORM] Slow performance on Windows .NET and OleDb\n\n\n> Jim C. Nasby wrote:\n>> On Tue, Mar 28, 2006 at 02:14:00PM +0200, Greg Quinn wrote:\n>>> Hello,\n>>>\n>>> I have just installed PostGreSql 8.1 on my Windows XP PC. I created a \n>>> simple table called users with 4 varchar fields.\n>>>\n>>> I am using the OleDb connection driver. In my .NET application, I \n>>> populate 3000 records into the table to test PostGreSql's speed. It \n>>> takes about 3-4 seconds.\n>>>\n>>> Even worse is displaying the 3000 records in a ListView control. It \n>>> takes about 7 seconds. In MySQL, the exact same table and application \n>>> displays the same 3000 records in under 1/2 second!!!\n>>\n>> Have you vacuumed recently? This smells like it might be a table bloat\n>> problem.\n>\n>\n> This could be a lot of things...\n>\n> He is probably running the default postgresql.conf which is going to \n> perform horribly.\n>\n> What is your work_mem? shared_buffers?\n>\n> Are you passing a where clause? If so is there an index on the field that \n> is subject to the clause?\n>\n> When you do the population, is it via inserts or copy?\n>\n> Joshua D. Drake\n>\n>\n> -- \n>\n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n>\n> \n\n\n",
"msg_date": "Wed, 29 Mar 2006 08:22:08 +0200",
"msg_from": "\"Greg Quinn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "Via insert\n\n> \n> When you do the population, is it via inserts or copy?\n> \n> Joshua D. Drake\n> \n> \n> -- \n> \n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n",
"msg_date": "Wed, 29 Mar 2006 08:24:58 +0200",
"msg_from": "\"Greg Quinn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "Greg Quinn wrote:\n\n> The query is,\n>\n> select * from users\n>\n> which returns 4 varchar fields, there is no where clause\n>\n> Yes, I am running the default postgres config. Basically I have been a \n> MySQL user and thought I would like to check out PostGreSql. So I did \n> a quick performance test. The performance was so different that I \n> thought PostGreSQL was nothing compared to MySQL, but now it seems its \n> just a few configuration options. Strange how the defult config would \n> be so slow...\n\nMy english is poor but im gonna try to explain it:\n\nDefault configuration in postgres its not for good performance, its just \ndesign to make it working in any computer. Thats why u have to try to \ncustom default config file.\n\nAnyway, people says that mysql is faster (and lighter) than postgres (at \nleast with mysql 3.x vs postgres 7.4), but postgres is more advanced and \nits much harder to get data corrupted.\n\nBut there is something that you should known about postgres. Postgres \ncreates statistics of usage, and when you \"vacumm\", it optimizes each \ntable depending of usage.\n\nSo:\n - You should custom config file.\n - You should vacumm it, as someone recomended before.\n - Do u have any indexes? Remove it. To get all rows you do not need it\n \nNote that I just have use it under Linux, i have no idea about how \nshould it work on Windows.\n\n\n",
"msg_date": "Wed, 29 Mar 2006 08:58:07 +0200",
"msg_from": "Ruben Rubio Rey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "Ruben Rubio Rey wrote:\n> Greg Quinn wrote:\n> \n>> The query is,\n>>\n>> select * from users\n>>\n>> which returns 4 varchar fields, there is no where clause\n>>\n>> Yes, I am running the default postgres config. Basically I have been a \n>> MySQL user and thought I would like to check out PostGreSql. So I did \n>> a quick performance test. The performance was so different that I \n>> thought PostGreSQL was nothing compared to MySQL, but now it seems its \n>> just a few configuration options. Strange how the defult config would \n>> be so slow...\n\n> - Do u have any indexes? Remove it. To get all rows you do not need it\n\nI wouldn't do that. Postgres needs indexing just like any other database.\n\nIt might affect this query but it's not going to help other queries.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Wed, 29 Mar 2006 18:16:23 +1100",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "> how many rows does it return ? a few, or a lot ?\n\n3000 Rows - 7 seconds - very slow\n\nWhich client library may have a problem? I am using OleDb, though haven't \ntried the .NET connector yet.\n\nNetwork configuration?? I am running it off my home PC with no network. It \nis P4 2.4 with 1 Gig Ram. Windows XP\n\n----- Original Message ----- \nFrom: \"PFC\" <[email protected]>\nTo: \"Greg Quinn\" <[email protected]>\nSent: Wednesday, March 29, 2006 11:02 AM\nSubject: Re: [PERFORM] Slow performance on Windows .NET and OleDb\n\n\n>\n>> select * from users\n>> which returns 4 varchar fields, there is no where clause\n>\n> how many rows does it return ? a few, or a lot ?\n>\n>> Yes, I am running the default postgres config. Basically I have been a \n>> MySQL user and thought I would like to check out PostGreSql.\n>\n> Good idea...\n>\n> From the tests I made, on simple queries like yours, with no joins, speed \n> pf pg 8.x is about the same as mysql 5.x ; that is to say very fast. If \n> you have a performance problem on something so basic, and moreover on \n> windows, it smells like a problem in the client library, or in the TCP \n> transport between client and server.\n> I remember messages saying postgres on windows was slow some time ago \n> here, and it turned out to be a problem in the network configuration of \n> the machine.\n> \n\n\n",
"msg_date": "Wed, 29 Mar 2006 13:00:20 +0200",
"msg_from": "\"Greg Quinn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "Hi, Greg,\n\nGreg Quinn wrote:\n>>> I populate 3000 records into the table to test PostGreSql's speed.\n>>> It takes about 3-4 seconds.\n>> When you do the population, is it via inserts or copy?\n> Via insert\n\nAre those inserts encapsulated into a single transaction? If not, that's\nthe reason why it's so slow, every transaction sync()s through to the disk.\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Wed, 29 Mar 2006 13:21:15 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
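A minimal sketch in plain SQL of the single-transaction batching Markus is suggesting (the users table is the 4-varchar-column table from the test above; the values are made up). Outside an explicit transaction every INSERT commits, and syncs, on its own, which is typically where the 3-4 seconds go:

BEGIN;
INSERT INTO users VALUES ('a1', 'b1', 'c1', 'd1');
INSERT INTO users VALUES ('a2', 'b2', 'c2', 'd2');
-- ... the remaining rows ...
COMMIT;

A single COPY of the whole batch is faster still, which is presumably why the inserts-or-copy question was asked earlier in the thread.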
{
"msg_contents": "\n> 3000 Rows - 7 seconds - very slow\n\n\tOn my PC (athlon 64 3000+ running Linux), selecting 3000 rows with 4 \ncolumns out of a 29 column table takes about 105 ms, including time to \ntransfer the results and convert them to native Python objects. It takes \nabout 85 ms on a test table with only those 4 columns.\n\n\tThere is definitely a problem somewhere on your system.\n\n\tI'd suggest running this query in an infinite loop. Logically, it should \nuse 100% processor, with postgres using some percentage (30% here) and \nyour client using some other percentage (70% here). Is your processor used \nto the max ?\n",
"msg_date": "Wed, 29 Mar 2006 13:40:54 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "On 3/29/06, Greg Quinn <[email protected]> wrote:\n> > how many rows does it return ? a few, or a lot ?\n>\n> 3000 Rows - 7 seconds - very slow\n>\n> Which client library may have a problem? I am using OleDb, though haven't\n> tried the .NET connector yet.\n\n\nesilo=# create temp table use_npgsql as select v, 12345 as a, 'abcdef'\nas b, 'abcdef' as c, 4 as d from generate_series(1,100000) v;\n\nSELECT\nTime: 203.000 ms\nesilo=# explain analyze select * from use_npgsql;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Seq Scan on use_npgsql (cost=0.00..1451.16 rows=61716 width=76)\n(actual time=0.007..176.106 rows=100000 loops=1)\n Total runtime: 336.809 ms\n(2 rows)\n\nI just pulled out 100k rows in about 1/3 second. The problem is not\nyour postgresql configuration. Your problem is possibly in the oledb\ndriver. The last time I looked at it, it was not production ready.\n\nhttp://pgfoundry.org/frs/?group_id=1000140&release_id=407\n\nMerlin\n",
"msg_date": "Wed, 29 Mar 2006 09:05:12 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "You should run the select query from the psql utility to determine if \nit's PostgreSQL, or your OleDb driver that's being slow. It takes like \n185ms on one of my tables to get 7000 rows.\n\nGreg Quinn wrote:\n>> how many rows does it return ? a few, or a lot ?\n> \n> 3000 Rows - 7 seconds - very slow\n> \n> Which client library may have a problem? I am using OleDb, though \n> haven't tried the .NET connector yet.\n> \n> Network configuration?? I am running it off my home PC with no network. \n> It is P4 2.4 with 1 Gig Ram. Windows XP\n> \n> ----- Original Message ----- From: \"PFC\" <[email protected]>\n> To: \"Greg Quinn\" <[email protected]>\n> Sent: Wednesday, March 29, 2006 11:02 AM\n> Subject: Re: [PERFORM] Slow performance on Windows .NET and OleDb\n> \n> \n>>\n>>> select * from users\n>>> which returns 4 varchar fields, there is no where clause\n>>\n>> how many rows does it return ? a few, or a lot ?\n>>\n>>> Yes, I am running the default postgres config. Basically I have been \n>>> a MySQL user and thought I would like to check out PostGreSql.\n>>\n>> Good idea...\n>>\n>> From the tests I made, on simple queries like yours, with no joins, \n>> speed pf pg 8.x is about the same as mysql 5.x ; that is to say very \n>> fast. If you have a performance problem on something so basic, and \n>> moreover on windows, it smells like a problem in the client library, \n>> or in the TCP transport between client and server.\n>> I remember messages saying postgres on windows was slow some time ago \n>> here, and it turned out to be a problem in the network configuration \n>> of the machine.\n>>\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n-- \nChristopher Kings-Lynne\n\nTechnical Manager\nCalorieKing\nTel: +618.9389.8777\nFax: +618.9389.8444\[email protected]\nwww.calorieking.com\n\n",
"msg_date": "Thu, 30 Mar 2006 09:21:06 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow performance on Windows .NET and OleDb"
},
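For instance, a quick check along these lines in psql times the query on the server itself, with no OleDb or ListView involved (users is the table from the thread):

\timing
SELECT * FROM users;
EXPLAIN ANALYZE SELECT * FROM users;

If this comes back in a few hundred milliseconds while the application still takes several seconds, the time is being lost in the driver or in populating the control, not in PostgreSQL.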
{
"msg_contents": "This problem was caused by the OleDb driver. I used a 3rd party .NET \nprovider and it worked, 8000 rows in just over 100ms!\n\nCan somebody send me a sample connection string for the PostGreSql native \n.net driver please? I'm battling to find a valid connection string.\n\nThanks\n\n\n",
"msg_date": "Thu, 30 Mar 2006 07:57:23 +0200",
"msg_from": "\"Greg Quinn\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Solved] Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "I use Npgsql, and the connection string I use is real simple:\n\nServer=192.168.0.36;Database=mydb;User Id=myuserid;Password=123456\n\nHope that helps,\n\nDave\n\n> -----Original Message-----\n> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Greg Quinn\n> Sent: Wednesday, March 29, 2006 11:57 PM\n> To: [email protected]\n> Subject: [PERFORM] [Solved] Slow performance on Windows .NET and OleDb\n> \n> This problem was caused by the OleDb driver. I used a 3rd party .NET\n> provider and it worked, 8000 rows in just over 100ms!\n> \n> Can somebody send me a sample connection string for the PostGreSql\nnative\n> .net driver please? I'm battling to find a valid connection string.\n> \n> Thanks\n> \n> \n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that\nyour\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Thu, 30 Mar 2006 08:34:23 -0600",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "This is a blatant thread steal... but here we go...\nDo people have any opinions on the pgsql driver? How does it compare\nwith the odbc in terms of performance? Is it fully production ready?\nThe boss wants to go .net (instead of Java, which is my preference...)\n- will I have to spend my time defending postgres against\nmysql/postgres/sqlserver?\nCheers\nAntoine\nps. I try my best not to steal threads but sometimes... :-)\n\nOn 30/03/06, Dave Dutcher <[email protected]> wrote:\n> I use Npgsql, and the connection string I use is real simple:\n>\n> Server=192.168.0.36;Database=mydb;User Id=myuserid;Password=123456\n>\n> Hope that helps,\n>\n> Dave\n>\n> > -----Original Message-----\n> > From: [email protected]\n> [mailto:pgsql-performance-\n> > [email protected]] On Behalf Of Greg Quinn\n> > Sent: Wednesday, March 29, 2006 11:57 PM\n> > To: [email protected]\n> > Subject: [PERFORM] [Solved] Slow performance on Windows .NET and OleDb\n> >\n> > This problem was caused by the OleDb driver. I used a 3rd party .NET\n> > provider and it worked, 8000 rows in just over 100ms!\n> >\n> > Can somebody send me a sample connection string for the PostGreSql\n> native\n> > .net driver please? I'm battling to find a valid connection string.\n> >\n> > Thanks\n> >\n> >\n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that\n> your\n> > message can get through to the mailing list cleanly\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n\n--\nThis is where I should put some witty comment.\n",
"msg_date": "Fri, 31 Mar 2006 22:49:10 +0200",
"msg_from": "Antoine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] Slow performance on Windows .NET and OleDb"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm a Postgresql's user and I think that it's very very good and\nrobust. \n\nIn my work we're confuse between where database is the best choose:\nPostgresql or Mysql. The Mysql have the reputation that is very fast\nworking in the web but in our application we are estimating many access\nsimultaneous, then I think that the Postgresql is the best choice. \n\nAm I right?\n\nOur server have 1 GB of RAM, how many users can it support at the same\ntime with this memory?\n\nThanks in advanced\n\nMarcos\n\n",
"msg_date": "Tue, 28 Mar 2006 15:31:42 +0000",
"msg_from": "Marcos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Decide between Postgresql and Mysql (help of comunity)"
},
{
"msg_contents": "> So, what exactly are you planning on doing?\n\nThe application will be a chat for web, the chats will be stored in the\nserver. In a determined interval of time... more or less 2 seconds, the\napplication will be looking for new messages.\n\nI believe that it will make many accesses. The write in disc will be\nconstant.\n\nThanks :o)\n\nMarcos\n\n",
"msg_date": "Tue, 28 Mar 2006 16:55:28 +0000",
"msg_from": "Marcos <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "Marcos wrote:\n> Hi,\n> \n> I'm a Postgresql's user and I think that it's very very good and\n> robust. \n> \n> In my work we're confuse between where database is the best choose:\n> Postgresql or Mysql. The Mysql have the reputation that is very fast\n> working in the web but in our application we are estimating many access\n> simultaneous, then I think that the Postgresql is the best choice. \n> \n> Am I right?\n> \n> Our server have 1 GB of RAM, how many users can it support at the same\n> time with this memory?\n> \n> Thanks in advanced\n> \n> Marcos\n\n The RAM/users question depends largely on what the database is used\nfor and what each user is doing in the database.\n\n From what I understand, PostgreSQL is designed with stability and\nreliability as key tenants. MySQL favors performance and ease of use. An\nexample is that, last I checked, MySQL doesn't have an equivalent to\nPostgreSQL's 'fsync' which helps insure that data is actually written to\n the disk. This costs performance but increases reliability and crash\nrecovery.\n\nHTH\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Madison Kelly (Digimer)\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n",
"msg_date": "Tue, 28 Mar 2006 13:57:37 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of comunity)"
},
{
"msg_contents": "On Tue, 2006-03-28 at 09:31, Marcos wrote:\n> Hi,\n> \n> I'm a Postgresql's user and I think that it's very very good and\n> robust. \n> \n> In my work we're confuse between where database is the best choose:\n> Postgresql or Mysql. The Mysql have the reputation that is very fast\n> working in the web but in our application we are estimating many access\n> simultaneous, then I think that the Postgresql is the best choice. \n> \n> Am I right?\n> \n> Our server have 1 GB of RAM, how many users can it support at the same\n> time with this memory?\n\nThis is as much about the code in front of the database as the database\nitself. You'll want to use an architecture that supports pooled\nconnections (java, php under lighttpd, etc...) and you'll want to look\nat your read to write ratio.\n\nMySQL and PostgreSQL can handle fairly heavy parallel loads. PostgreSQL\nis generally a much better performer when you need to make a lot of\nparallel writes.\n\nBut the bigger question is which one is suited to your application in\ngeneral. If some major issue in MySQL or PostgreSQL makes it a poor\nchoice for your app, then it doesn't matter how much load it can handle,\nit's still a poor choice.\n\nGenerally speaking, MySQL is a poor choice if you're doing things like\naccounting, where the maths have to be correct. It's quite easy to ask\nMySQL to do math and get the wrong answer. It also has some serious\nproblems with referential integrity, but most of those can be worked\naround using innodb tables. But at that point, you're using the same\nbasic storage methods as PostgreSQL uses, i.e. an MVCC storage engine. \nAnd now that Oracle has bought Innodb, the availability of that in the\nfuture to MySQL is in doubt.\n\nThere's also the issue of licensing. If you'll be selling copies of\nyour app to customers, you'll be writing a check for each install to\nMySQL AB. Not so with PostgreSQL.\n\nSo, what exactly are you planning on doing?\n\nLastly, take a look here:\n\nhttp://sql-info.de/mysql/gotchas.html\n\nand here:\n\nhttp://sql-info.de/postgresql/postgres-gotchas.html\n\nfor a list of the common \"gotchas\" in both databases.\n\nGenerally you'll find the PostgreSQL gotchas are of the sort that make\nyou go \"oh, that's interesting\" and the MySQL gotchas are the kind that\nmake you go \"Dear god, you must be kidding me!\"\n\nBut that's just my opinion, I could be wrong.\n",
"msg_date": "Tue, 28 Mar 2006 12:59:52 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "\n\n> This is as much about the code in front of the database as the database\n> itself. You'll want to use an architecture that supports pooled\n> connections (java, php under lighttpd, etc...) and you'll want to look\n\n\tWell, anybody who uses PHP and cares about performance is already using \nlighttpd, no ?\n\n> MySQL and PostgreSQL can handle fairly heavy parallel loads.\n\n\tI'll only speak about MyISAM. MySQL == MyISAM. InnoDB is useless : if you \nwant transactions, use postgres.\n\tIf you say to yourself \"oh yeah, but it would be cool to use a MyISAM \ntable for stuff like hit counters etc\"... Is it the job of a SQL database \nto count hits on the root page of your site ? No. To store user sessions ? \nNo. The job of a SQL database is to efficiently handle data, not to do \nsomething that should stay in RAM in the application server process, or at \nworst, in a memcached record.\n\n\tMySQL + MyISAM has a huge advantage : it can look up data in the index \nwithout touching the tables.\n \tMySQL handles parallel SELECTs very well.\n\n\tHowever, throw in some maintenance operation which involves a long query \nwith writes (like a big joined UPDATE) and all access to your website is \nblocked while the query lasts.\n\tThis is worsened by the fact that MySQL sucks at complex queries.\n\n\tIf all of your updates are done to a few rows, MyISAM is cool, but \nsomeday you'll want to do this query which locks a table during one \nminute... and then you got a problem.\n\n\tJust be very clear about what you want to do, what types of queries \nyou'll want to run in two years... etc.\n\n\n\n",
"msg_date": "Tue, 28 Mar 2006 21:42:51 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "On Tue, 2006-03-28 at 13:42, PFC wrote:\n> > This is as much about the code in front of the database as the database\n> > itself. You'll want to use an architecture that supports pooled\n> > connections (java, php under lighttpd, etc...) and you'll want to look\n> \n> \tWell, anybody who uses PHP and cares about performance is already using \n> lighttpd, no ?\n> \n> > MySQL and PostgreSQL can handle fairly heavy parallel loads.\n> \n> \tI'll only speak about MyISAM. MySQL == MyISAM. InnoDB is useless : if you \n> want transactions, use postgres.\n\nI agree with most of what you posted, but I'm not quite sure what you\nmeant here.\n\nInnodb in and of itself is a fairly decent MVCC implementation, with, as\nusual, some limitations (it's rollback performance is HORRIFICLY bad). \nWhat really makes innodb useless to me is that there's no real support\nfor proper operation by MySQL itself. If you could force MySQL to only\nuse innodb tables, and to NEVER do the wrong things syntactically, it\nwould be ok. But there are thousands of foot-guns in the MySQL - Innodb\ncombination waiting to take off your toes. Too many to count really. \nTo me, that's what makes innodb so useless, the way MySQL fails to\nintegrate well with it.\n",
"msg_date": "Tue, 28 Mar 2006 13:50:45 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "Marcos wrote:\n\n>>So, what exactly are you planning on doing?\n>> \n>>\n>\n>The application will be a chat for web, the chats will be stored in the\n>server. In a determined interval of time... more or less 2 seconds, the\n>application will be looking for new messages.\n>\n>I believe that it will make many accesses. The write in disc will be\n>constant.\n> \n>\nOk. I would favor PostgreSQL for reasons of ease of future \ndevelopment. However, lets look at what both RDBMS's buy you:\n\nMySQL:\n1) Possibility of many components for web apps that can be used though \nthe lack of certain features (such as complex updateable views) makes \nthis possibly an issue.\n2) Great simple read performance.\n\nPostgreSQL:\n1) Possibility to integrate any other components later (including those \non MySQL via DBI-Link).\n2) Fabulous community support (and I am sure fabulous paid support too \ngiven the fact that many of those who contribute to the great community \nsupport also offer paid support).\n3) Better parallel write performance.\n4) Greater extensibility, leading to greater flexibility down the road \nshould you want to add in new components without rewriting your front-end.\n\nFor a simple chat client, you can probably put something together with \nsome Perl/CGI scripts, Jabber, and MySQL or PostgreSQL pretty easily and \nwithout much development labor at all. Indeed I would suggest that the \nRDBMS is, absent other specific concerns, the least of your issues.\n\nIn other words, both are probably adequate. It is impossible to provide \nan estimate for capacity though without knowing the app in question, \nexpected query composition, and so forth.\n\nBest Wishes,\nChris Travers\nMetatron Technology Consulting\n",
"msg_date": "Tue, 28 Mar 2006 12:50:53 -0800",
"msg_from": "Chris Travers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "\n>> So, what exactly are you planning on doing?\n>\n> The application will be a chat for web, the chats will be stored in the\n> server. In a determined interval of time... more or less 2 seconds, the\n> application will be looking for new messages.\n>\n> I believe that it will make many accesses. The write in disc will be\n> constant.\n\n\tAh, cool. That's exactly what a database is not designed for xD\n\n\tTry this, I coded this in about 1 hour as a joke last week.\n\thttp://peufeu.com/demos/xhchat/\n\tIt works in firefox and opera, uses xmlhttprequest, and the messages are \nstored in a dbm database.\n\n\tWe have also coded a real HTTP chat. I'll briefly expose the details \non-list, but if you want the gory stuff, ask privately.\n\n\tThere is a Postgres database for users, authentication, chatrooms and \nstuff. This database can be modified by a full-blown web application.\n\tOf course, messages are not stored in the database. It would be suicidal \nperformance-wise to do so.\n\n\tAn asynchronous HTTP server, using select() (lighttpd style) is coded in \nPython. It is very special-purpose server. It keeps an open connection \nwith the client (browser) and sends messages as they arrive in the \nchatroom, with no delay. The connection is interrupted only when the \nclient submits a new message via a form, but this is not mandatory.\n\n\tMy memories are a bit old, but we benchmarked it at about 4000 \nmessages/second on a low-end server (athlon something). Concurrent \nconnections are unlimited. Disk I/O is zero. I like it.\n\tIf you store messages in the database, you can hope to be about 10-50 \ntimes slower.\n",
"msg_date": "Tue, 28 Mar 2006 23:18:39 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "\n> What really makes innodb useless to me is that there's no real support\n> for proper operation by MySQL itself. If you could force MySQL to only\n> use innodb tables, and to NEVER do the wrong things syntactically, it\n> would be ok. But there are thousands of foot-guns in the MySQL\n\n\tThat's what I meant actually.\n\tAnd by saying \"if you want transactions\" I also meant \"if you want a \ndatabase system that will go to great lengths to save your ass and your \ndata instead of helping you shooting yourself in the foot, generally work \nvery well, be reliable, friendly and a pleasure to work with, which means \nmore or less, if you're rational rather than masochistic, then yeah, you \nshould use postgres\".\n\n> If you could force MySQL to only\n> use innodb tables, and to NEVER do the wrong things syntactically, it\n> would be ok.\n\n\tYou'd still need to teach it how to hash-join and everything, though. \nLife sucks when the only join type you have is merge join.\n",
"msg_date": "Tue, 28 Mar 2006 23:24:28 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "On Tue, Mar 28, 2006 at 09:42:51PM +0200, PFC wrote:\n> \tHowever, throw in some maintenance operation which involves a long \n> \tquery with writes (like a big joined UPDATE) and all access to your \n> website is blocked while the query lasts.\n> \tThis is worsened by the fact that MySQL sucks at complex queries.\n> \n> \tIf all of your updates are done to a few rows, MyISAM is cool, but \n> someday you'll want to do this query which locks a table during one \n> minute... and then you got a problem.\n\nNot to mention that MyISAM loves to eat data. Livejournal suffered at\nleast one major crash due to MyISAM corruption.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 28 Mar 2006 15:52:59 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "Heh, too quick on the send button...\n\nOn Tue, Mar 28, 2006 at 09:42:51PM +0200, PFC wrote:\n> \tI'll only speak about MyISAM. MySQL == MyISAM. InnoDB is useless : \n> \tif you want transactions, use postgres.\n> \tIf you say to yourself \"oh yeah, but it would be cool to use a \n> \tMyISAM table for stuff like hit counters etc\"... Is it the job of a SQL \n> database to count hits on the root page of your site ? No. To store user \n> sessions ? No. The job of a SQL database is to efficiently handle data, \n> not to do something that should stay in RAM in the application server \n> process, or at worst, in a memcached record.\n\nActually, it's entirely possible to do stuff like web counters, you just\nwant to do it differently in PostgreSQL. Simply insert into a table\nevery time you have a hit, and then roll that data up periodically.\n\nAnd using MyISAM is no panacea, either. Trying to keep a web counter in\na MyISAM table means you'll serialize every web page on that counter\nupdate.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 28 Mar 2006 15:56:08 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
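A rough sketch of the insert-now, roll-up-later pattern Jim describes; the table names, columns and schedule are invented for illustration:

-- one cheap insert per page view:
INSERT INTO hit_log (page, hit_time) VALUES ('/index.html', now());

-- periodic rollup, e.g. nightly from cron; only rows older than the
-- cutoff are summarized and removed, so concurrent inserts are safe:
BEGIN;
INSERT INTO hit_summary (page, day, hits)
    SELECT page, date_trunc('day', hit_time), count(*)
    FROM hit_log
    WHERE hit_time < date_trunc('day', now())
    GROUP BY page, date_trunc('day', hit_time);
DELETE FROM hit_log WHERE hit_time < date_trunc('day', now());
COMMIT;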
{
"msg_contents": "On 3/28/06, Jim C. Nasby <[email protected]> wrote:\n> Heh, too quick on the send button...\n>\n> On Tue, Mar 28, 2006 at 09:42:51PM +0200, PFC wrote:\n\n> Actually, it's entirely possible to do stuff like web counters, you just\n> want to do it differently in PostgreSQL. Simply insert into a table\n> every time you have a hit, and then roll that data up periodically.\n>\n> And using MyISAM is no panacea, either. Trying to keep a web counter in\n> a MyISAM table means you'll serialize every web page on that counter\n> update.\n\nif you want raw speed, use a sequence for a hit-counter. sequences\nare wonder-tools and very lightweight. Explain analyze for a sequence\nnextval on my desktop box reports 47 microseconds. thats 200k\nsequence updates/second. insert into a table (fsync off/cache write,\nno keys) is not much slower.\n\nPostgreSQL 8.1 saw a lot of performance improvements...but the most\nimportant (and least publicized) is the reduced latency of simple\nqueries in high cache enviroments.\n\nmerlin\n",
"msg_date": "Wed, 29 Mar 2006 08:48:22 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
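The sequence-based counter Merlin mentions can be as small as this (names invented); nextval() is cheap and non-blocking, but it is not rolled back with an aborted transaction, so the count is approximate by design:

CREATE SEQUENCE front_page_hits;

-- on every hit:
SELECT nextval('front_page_hits');

-- to read the current (approximate) count:
SELECT last_value FROM front_page_hits;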
{
"msg_contents": "\nOn Mar 28, 2006, at 1:57 PM, Madison Kelly wrote:\n\n> From what I understand, PostgreSQL is designed with stability and\n> reliability as key tenants. MySQL favors performance and ease of \n> use. An\n\n From my point of view, mysql favors single-user performance over all \nelse. Get into multiple updaters and you are competing for table \nlocks all the time. Postgres works much better with multiple clients \nwriting to it.\n\n",
"msg_date": "Wed, 29 Mar 2006 10:32:54 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of comunity)"
},
{
"msg_contents": "\nOn Mar 28, 2006, at 1:59 PM, Scott Marlowe wrote:\n\n> Generally you'll find the PostgreSQL gotchas are of the sort that make\n> you go \"oh, that's interesting\" and the MySQL gotchas are the kind \n> that\n> make you go \"Dear god, you must be kidding me!\"\n>\n> But that's just my opinion, I could be wrong.\n\nI nominate this for \"quote of the month\". :-)\n\n",
"msg_date": "Wed, 29 Mar 2006 10:35:11 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "\nOn Mar 28, 2006, at 11:55 AM, Marcos wrote:\n\n> The application will be a chat for web, the chats will be stored in \n> the\n> server. In a determined interval of time... more or less 2 seconds, \n> the\n> application will be looking for new messages.\n\nWe bought software for this purpose (phplive). It is based on mysql \nusing isam tables and is written in (surprise!) php. Two of my \n\"favorite\" techonologies! :-)\n\n",
"msg_date": "Wed, 29 Mar 2006 10:36:49 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "On Tuesday 28 March 2006 14:50, Scott Marlowe wrote:\n> On Tue, 2006-03-28 at 13:42, PFC wrote:\n> > > This is as much about the code in front of the database as the database\n> > > itself. You'll want to use an architecture that supports pooled\n> > > connections (java, php under lighttpd, etc...) and you'll want to look\n> >\n> > \tWell, anybody who uses PHP and cares about performance is already using\n> > lighttpd, no ?\n\n/flame on\nif you were *that* worried about performance, you wouldn't be using PHP or \n*any* interperted language\n/flame off\n\nsorry - couldn't resist it :-)\n",
"msg_date": "Wed, 29 Mar 2006 19:01:59 -0500",
"msg_from": "Gorshkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "Gorshkov wrote:\n> /flame on\n> if you were *that* worried about performance, you wouldn't be using PHP or \n> *any* interperted language\n> /flame off\n> \n> sorry - couldn't resist it :-)\n\nI hope this was just a joke. You should be sure to clarify - there might be some newbie out there who thinks you are seriously suggesting coding major web sites in some old-fashioned compiled language.\n\nCraig\n",
"msg_date": "Wed, 29 Mar 2006 18:23:52 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "This is off-topic for this group so I'll just give a brief reply; I'm happy to carry on more just between the two of us...\n\nGorshkov wrote:\n> That being said ..... what *is* the difference between coding a website - \n> major or otherwise - in an \"old-fashioned\" compiled language and a \n> non-compiled language, except for the amount of hoursepower and memory you \n> require?\n> \n> Old-fashioned doesn't mean bad, inappropriate, or inferior. It's just not the \n> latest-and-greatest, however it's currently defined by the geek fashion \n> police.\n\nOur experience coding web sites with C/C++ versus Perl is about a factor of ten in productivity. We only use C/C++ for CPU-intensive calculations, such as scientific prediction code. Everything else is Perl or Java.\n\nI recently re-coded 10,000 lines of C into 650 lines of Perl. Why? String handling, hash tables, and the simplicity of DBD/DBI. And there was no loss of performance, because the app was strictly I/O bound (that is, Postgres was I/O bound). Sure, the old app may not have been optimal, but we're talking about a factor of 15 reduction in lines of code.\n\nThat's not \"geek fashion\", it's good engineering. Pick the best tool for the job, and learn how to use it.\n\nCraig\n",
"msg_date": "Wed, 29 Mar 2006 19:01:26 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "On Wednesday 29 March 2006 21:23, Craig A. James wrote:\n> Gorshkov wrote:\n> > /flame on\n> > if you were *that* worried about performance, you wouldn't be using PHP\n> > or *any* interperted language\n> > /flame off\n> >\n> > sorry - couldn't resist it :-)\n>\n> I hope this was just a joke. You should be sure to clarify - there might\n> be some newbie out there who thinks you are seriously suggesting coding\n> major web sites in some old-fashioned compiled language.\n>\n\nwell yes, it was meant as a joke ..... that's *usually* what a \";-)\" means.\n\nThat being said ..... what *is* the difference between coding a website - \nmajor or otherwise - in an \"old-fashioned\" compiled language and a \nnon-compiled language, except for the amount of hoursepower and memory you \nrequire?\n\nOld-fashioned doesn't mean bad, inappropriate, or inferior. It's just not the \nlatest-and-greatest, however it's currently defined by the geek fashion \npolice.\n",
"msg_date": "Wed, 29 Mar 2006 23:07:28 -0500",
"msg_from": "Gorshkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "On Wednesday 29 March 2006 22:01, Craig A. James wrote:\n> This is off-topic for this group so I'll just give a brief reply; I'm happy\n> to carry on more just between the two of us...\n>\n> Gorshkov wrote:\n> > That being said ..... what *is* the difference between coding a website -\n> > major or otherwise - in an \"old-fashioned\" compiled language and a\n> > non-compiled language, except for the amount of hoursepower and memory\n> > you require?\n> >\n> > Old-fashioned doesn't mean bad, inappropriate, or inferior. It's just not\n> > the latest-and-greatest, however it's currently defined by the geek\n> > fashion police.\n>\n> Our experience coding web sites with C/C++ versus Perl is about a factor of\n> ten in productivity. We only use C/C++ for CPU-intensive calculations,\n> such as scientific prediction code. Everything else is Perl or Java.\n>\n> I recently re-coded 10,000 lines of C into 650 lines of Perl. Why? String\n> handling, hash tables, and the simplicity of DBD/DBI. And there was no\n> loss of performance, because the app was strictly I/O bound (that is,\n> Postgres was I/O bound). Sure, the old app may not have been optimal, but\n> we're talking about a factor of 15 reduction in lines of code.\n\n\nSounds to me like the C programmers in your past needed to learn how to re-use \ncode and make libraries. That's not a function of the language - that's a \nfunction of the programmer.\n\n>\n> That's not \"geek fashion\", it's good engineering. Pick the best tool for\n> the job, and learn how to use it.\n>\n\nThanks for making my point. You choose the best tool for the job, and \nsometimes it's \"old-fashioned\".\n\nPlease remember that - there may be newbies out there who think that if \nthey're not using the latest alpha-beta-zeta version .0006-a-r1, then they \nmust be bad programmers.\n",
"msg_date": "Wed, 29 Mar 2006 23:39:46 -0500",
"msg_from": "Gorshkov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "Hi, Craig,\n\nCraig A. James wrote:\n\n> I hope this was just a joke. You should be sure to clarify - there\n> might be some newbie out there who thinks you are seriously suggesting\n> coding major web sites in some old-fashioned compiled language.\n\nNo, but perhaps with a CMS that pregenerates static content, or\nhttp://www.tntnet.org/\n\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 30 Mar 2006 10:29:51 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "[email protected] (\"Craig A. James\") writes:\n\n> Gorshkov wrote:\n>> /flame on\n>> if you were *that* worried about performance, you wouldn't be using\n>> PHP or *any* interperted language\n>> /flame off\n>> sorry - couldn't resist it :-)\n>\n> I hope this was just a joke. You should be sure to clarify - there\n> might be some newbie out there who thinks you are seriously\n> suggesting coding major web sites in some old-fashioned compiled\n> language.\n\nActually, this seems not so bad a point...\n\nIf people are so interested in micro-managing certain bits of how\nperformance works, then it seems an excellent question to ask why NOT\nwrite all the CGIs in C.\n\nAfter all, CGI in C *won't* suffer from the performance troubles\nassociated with repetitively loading in Perl/PHP frameworks (which is\nwhy things like FastCGI, mod_perl, and such came about), and you can\nget a fair level of assurance that the compiled C won't be the\nperformance bottleneck.\n\nAnd yes, it does become natural to ask \"why not write CGIs in ASM?\"\n;-)\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www3.sympatico.ca/cbbrowne/lisp.html\n\"When I was a boy of fourteen, my father was so ignorant I could\nhardly stand to have the old man around. But when I got to be\ntwenty-one, I was astonished at how much the old man had learned in\nseven years.\" -- Mark Twain\n",
"msg_date": "Thu, 30 Mar 2006 12:22:48 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "On Thu, 2006-03-30 at 11:22, Chris Browne wrote:\n> [email protected] (\"Craig A. James\") writes:\n> \n> > Gorshkov wrote:\n> >> /flame on\n> >> if you were *that* worried about performance, you wouldn't be using\n> >> PHP or *any* interperted language\n> >> /flame off\n> >> sorry - couldn't resist it :-)\n> >\n> > I hope this was just a joke. You should be sure to clarify - there\n> > might be some newbie out there who thinks you are seriously\n> > suggesting coding major web sites in some old-fashioned compiled\n> > language.\n> \n> Actually, this seems not so bad a point...\n> \n> If people are so interested in micro-managing certain bits of how\n> performance works, then it seems an excellent question to ask why NOT\n> write all the CGIs in C.\n> \n> After all, CGI in C *won't* suffer from the performance troubles\n> associated with repetitively loading in Perl/PHP frameworks (which is\n> why things like FastCGI, mod_perl, and such came about), and you can\n> get a fair level of assurance that the compiled C won't be the\n> performance bottleneck.\n> \n> And yes, it does become natural to ask \"why not write CGIs in ASM?\"\n> ;-)\n\nBut as an aside, I've been load testing our web application. We have,\nin the test farm, two tomcat servers feeding into three jboss servers,\nfeeding into a database farm (oracle and postgresql, doing different\nthings, oracle is the transaction engine, postgresql is the \"data\ncollection bucket\" so to speak.)\n\nOur tomcat servers sit at 10% load, the jboss servers sit at 20 to 40%\nload, and the Oracle server sits at 100% load.\n\nAnd the thing is, while we can add load balanced tomcat and jboss\nservers as need be, and get nearly linear scaling from them, we can't do\nthe same for the database. That's going to require vertical scaling.\n\nAnd that, nowadays, is generally the state of web development. It's not\nthe language you're using to write it in, it's how efficiently you're\nusing your database. We can probably tweak the system we're testing now\nand get more from our databases by adjusting how hibernate hits them,\nand the types of queries that it's throwing, but in the long run, the\nbottleneck will always be the database server, because we can throw\nrelatively small amounts of money at the other layers if they happen to\nbe bogging down. Not so much with the database.\n",
"msg_date": "Thu, 30 Mar 2006 13:21:30 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "\n>> And yes, it does become natural to ask \"why not write CGIs in ASM?\"\n>> ;-)\n\n\tPersonally, I'd code it in brainfuck, for aesthetic reasons.\n\n> And that, nowadays, is generally the state of web development. It's not\n> the language you're using to write it in, it's how efficiently you're\n> using your database. We can probably tweak the system we're testing now\n> and get more from our databases by adjusting how hibernate hits them,\n> and the types of queries that it's throwing, but in the long run, the\n> bottleneck will always be the database server, because we can throw\n> relatively small amounts of money at the other layers if they happen to\n> be bogging down. Not so much with the database.\n\n\tSo, one wonders why some use 70's languages like Java instead of Lisp or \nPython, which are slower, but a lot more powerful and faster to develop \nin...\n\t(and don't have hibernate, which is a big bonus)\n\t(why do you think I don't like Java ?)\n\n",
"msg_date": "Thu, 30 Mar 2006 23:31:25 +0200",
"msg_from": "PFC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "On Thu, Mar 30, 2006 at 11:31:25PM +0200, PFC wrote:\n> \tSo, one wonders why some use 70's languages like Java instead of \n> \tLisp or Python, which are slower, but a lot more powerful and faster to \n> develop in...\n> \t(and don't have hibernate, which is a big bonus)\n> \t(why do you think I don't like Java ?)\n\nPython may not have Hibernate, but it has even worse stuff trying to do about\nthe same thing. :-)\n\nAnyhow, this is rapidly becoming offtopic for the original thread.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 31 Mar 2006 00:36:38 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "[email protected] (Scott Marlowe) writes:\n> And that, nowadays, is generally the state of web development. It's\n> not the language you're using to write it in, it's how efficiently\n> you're using your database.\n\nWhich properly puts my comments in their place :-).\n\nMore importantly, that seems like a valid statement which has a *wide*\nscope of effects and side-effects. Including some that ought to put\nPostgreSQL in a very good place, in that it provides some very good\nways of achieving high efficiency.\n\nNeat performance thing du jour: Hibernate seems to be the \"neat new\nJava persistence thing.\"\n\nI have been very unimpressed with some of the web frameworks I have\nseen thus far in their interaction with databases.\n\nWe use RT (Request Tracker) for tracking tickets, and in its attempt\nto be \"database agnostic,\" it actually only achieves being\nMySQL(tm)-specific, because they have an automated query generator\nthat is only good at one style of queries at a time. Coworkers have\nsuggested improved queries that are (on occasion) hundreds or\nthousands of times faster than what it generates; those improvements\nfall on deaf ears because they wouldn't work with all the databases.\n(Well, more precisely, they wouldn't work with MySQL(tm).)\n\nThere's a home grown flavor of Java persistence mapping; it doesn't\nseem as heinous as RT's, but it still doesn't make it overly\nconvenient to replace poor queries with more efficient ones.\n\nHibernate has a nifty thing in the form of \"Named Queries.\" It'll\noften use its own \"HQL\" to auto-generate SQL, but any time the DBAs\ncome up with something that's nicely tuned, it seems to be highly\nrecommended to generate a \"Named Query\" for that which allows a Nice\nQuery to be made part of the application without too much weeping and\ngnashing of teeth on either DBA or developers' sides.\n\nA framework that allows you to thereby \"soup up\" how efficiently you\nuse your database... Hmm... I wonder if that fits into anyone's\nnotable quote? :-).\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www3.sympatico.ca/cbbrowne/lisp.html\n\"When I was a boy of fourteen, my father was so ignorant I could\nhardly stand to have the old man around. But when I got to be\ntwenty-one, I was astonished at how much the old man had learned in\nseven years.\" -- Mark Twain\n",
"msg_date": "Thu, 30 Mar 2006 17:53:08 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "jython is a full rewrite of python in java and interface naturally with \njava classes, therefore hibernate ... and is just as easy as python.\n\nSteinar H. Gunderson a écrit :\n> On Thu, Mar 30, 2006 at 11:31:25PM +0200, PFC wrote:\n> \n>> \tSo, one wonders why some use 70's languages like Java instead of \n>> \tLisp or Python, which are slower, but a lot more powerful and faster to \n>> develop in...\n>> \t(and don't have hibernate, which is a big bonus)\n>> \t(why do you think I don't like Java ?)\n>> \n>\n> Python may not have Hibernate, but it has even worse stuff trying to do about\n> the same thing. :-)\n>\n> Anyhow, this is rapidly becoming offtopic for the original thread.\n>\n> /* Steinar */\n> \n\n\n\n\n\n\njython is a full rewrite of python in java and interface naturally with\njava classes, therefore hibernate ... and is just as easy as python.\n\nSteinar H. Gunderson a écrit :\n\nOn Thu, Mar 30, 2006 at 11:31:25PM +0200, PFC wrote:\n \n\n\tSo, one wonders why some use 70's languages like Java instead of \n\tLisp or Python, which are slower, but a lot more powerful and faster to \ndevelop in...\n\t(and don't have hibernate, which is a big bonus)\n\t(why do you think I don't like Java ?)\n \n\n\nPython may not have Hibernate, but it has even worse stuff trying to do about\nthe same thing. :-)\n\nAnyhow, this is rapidly becoming offtopic for the original thread.\n\n/* Steinar */",
"msg_date": "Fri, 31 Mar 2006 04:08:05 +0200",
"msg_from": "Philippe Marzin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
},
{
"msg_contents": "On 30.03.2006, at 23:31 Uhr, PFC wrote:\n\n> \t(why do you think I don't like Java ?)\n\nBecause you haven't used a good framework/toolkit yet? Come on, the \nlanguage doesn't really matter these days, it's all about frameworks, \ntoolkits, libraries, interfaces and so on.\n\nBut, nevertheless, this has nothing to do with a decision between \nPostgreSQL or MySQL. They can both be accessed by a myriad of \nprogramming languages, so the decision may (and should) be based on \nother things.\n\ncug\n\n-- \nPharmaLine, Essen, GERMANY\nSoftware and Database Development",
"msg_date": "Fri, 31 Mar 2006 08:56:07 +0200",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Decide between Postgresql and Mysql (help of"
}
] |
[
{
"msg_contents": "Hi,\nDoes anyone know of any fairly entry-level documentation for the\nbenefits-drawbacks of MVCC in the db? As it relates to performance?\nPostgres vs the others?\nCheers\nAntoine\n\n--\nThis is where I should put some witty comment.\n",
"msg_date": "Tue, 28 Mar 2006 22:27:39 +0200",
"msg_from": "Antoine <[email protected]>",
"msg_from_op": true,
"msg_subject": "MVCC intro and benefits docs?"
},
{
"msg_contents": "On Tue, Mar 28, 2006 at 10:27:39PM +0200, Antoine wrote:\n> Hi,\n> Does anyone know of any fairly entry-level documentation for the\n> benefits-drawbacks of MVCC in the db? As it relates to performance?\n> Postgres vs the others?\n> Cheers\n> Antoine\n\nIt's not dedicated to discussing MVCC alone, but\nhttp://www.pervasive-postgres.com/lp/newsletters/2005/Insights_opensource_Dec.asp#2\nmight provide you with some useful info.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 28 Mar 2006 15:59:20 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MVCC intro and benefits docs?"
},
{
"msg_contents": "\n\"\"Jim C. Nasby\"\" <[email protected]> wrote\n>\n> It's not dedicated to discussing MVCC alone, but\n>\nhttp://www.pervasive-postgres.com/lp/newsletters/2005/Insights_opensource_Dec.asp#2\n> might provide you with some useful info.\n> -- \n\nAnother introduction is here:\n\nhttp://www.postgresql.org/files/developer/transactions.pdf\n\nRegards,\nQingqing\n\n\n",
"msg_date": "Wed, 29 Mar 2006 12:59:34 +0800",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: MVCC intro and benefits docs?"
}
] |
[
{
"msg_contents": "Hi,\n\nI've got this message while heavily inserting into a database. What \nshould I tune and how? It is postgresql 8.1.3.\n\n2006-03-29 14:16:57.513 CEST:LOG: statistics buffer is full\n\nThanks in advance,\nAkos\n\n-- \n�dv�zlettel,\nG�briel �kos\n-=E-Mail :[email protected]|Web: http://www.i-logic.hu=-\n-=Tel/fax:+3612367353 |Mobil:+36209278894 =-\n",
"msg_date": "Wed, 29 Mar 2006 14:22:32 +0200",
"msg_from": "=?ISO-8859-2?Q?G=E1briel_=C1kos?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "statistics buffer is full"
},
{
"msg_contents": "\n\"\"G�briel �kos\"\" <[email protected]> wrote\n>\n> I've got this message while heavily inserting into a database. What should \n> I tune and how? It is postgresql 8.1.3.\n>\n> 2006-03-29 14:16:57.513 CEST:LOG: statistics buffer is full\n>\n\nSince your server is in a heavy load, so the common trick is to increase \nPGSTAT_RECVBUFFERSZ in include/pgstat.h and recompile your server.\n\nRegards,\nQingqing \n\n\n",
"msg_date": "Sun, 2 Apr 2006 22:33:05 +0800",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: statistics buffer is full"
}
] |
[
{
"msg_contents": "Greetings,\n\n\tWe have an issue where we have a database with many tables.\n\tThe layout of the database is 3 set of look alike tables with different names.\n\tEach set of tables has some referential integrety that point back to the main\n\tcontrol table.\n\n\tOn two set of tables we are able to efficiently delete referential and main record\n\twithout a problems, but on the 3rd set we have an issue where the control table is clugged\n\tand delete seem to take forever , as example on the first two set a delete of 60K record take about\n\t4 second to 10 second but on the 3rd set it can take as long as 3hours.\n\n\tThis seem to be only affecting one database , the schema and way of doing is replicated elsewhere\n\tand if efficient.\n\n\tThe question is, even after droping 3rd set integrity , dumping the table data , deleting the table,\n\tdoing a manual checkpoint and recreating the table with the dump data , with or without referential\n\tintegrity , the problems still araise.\n\n\tIf we copy the data from the live table and do a create table aaa as select * from problematic_table;\n\tthe table aaa operations are normal and efficient.\n\n\tThis is why our investigation brought us to the folowing questions:\n\n\t1. Are postgresql data file name are hashed references to table name(as oracle)? [~path to data EX:/var/log/pgsql/data/[arbitraty \t\t\t numbers]/[datafile]]?\n\n\t2. If the data files are corrupted and we re-create is it possible it uses the same files thus creating the same issue?\n\n \t3. Since we know that all the tables has that problems is there an internal table with undisclosed references to tables data files?\n\t\n\tI hope the questions were clear.\n\n\tHave a good day, and thank you in advance.\n\n\nEric Lauzon\n[Recherche & Développement]\nAbove Sécurité / Above Security\nTél : (450) 430-8166\nFax : (450) 430-1858\n\nAVERTISSEMENT CONCERNANT LA CONFIDENTIALITÉ \n\nLe présent message est à l'usage exclusif du ou des destinataires mentionnés ci-dessus. Son contenu est confidentiel et peut être assujetti au secret professionnel. Si vous avez reçu le présent message par erreur, veuillez nous en aviser immédiatement et le détruire en vous abstenant d'en faire une copie, d'en divulguer le contenu ou d'y donner suite.\n\nCONFIDENTIALITY NOTICE\n\nThis communication is intended for the exclusive use of the addressee identified above. Its content is confidential and may contain privileged information. If you have received this communication by error, please notify the sender and delete the message without copying or disclosing it.\n",
"msg_date": "Wed, 29 Mar 2006 12:58:59 -0500",
"msg_from": "\"Eric Lauzon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Database possible corruption , unsolvable mystery"
},
{
"msg_contents": "Eric Lauzon wrote:\n> This is why our investigation brought us to the folowing questions:\n> \n> 1. Are postgresql data file name are hashed references to table\n> name(as oracle)? [~path to data EX:/var/log/pgsql/data/[arbitraty\n> numbers]/[datafile]]?\n\nOID numbers - look in the contrib directory/package for the oid2name \nutility.\n\n> 2. If the data files are corrupted and we re-create is it possible it\n> uses the same files thus creating the same issue?\n\nNo\n\n> 3. Since we know that all the tables has that problems is there an\n> internal table with undisclosed references to tables data files? I\n> hope the questions were clear.\n\nYou mean a system table that could account for your problems since it \nrefers to some of your tables but not others? No.\n\nThe obvious places to start are:\n1. vacuum analyse verbose on the tables in question\n This should show whether there are a lot of \"dead\" rows\n2. explain analyse on problem queries\n To see if the query plans are correct\n3. SELECT * FROM pg_stat_???\n Assuming you have statistics gathering turned on, this might show \nunusual table accesses.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 29 Mar 2006 23:09:52 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database possible corruption , unsolvable mystery"
},
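A minimal sketch of the three checks suggested above, assuming the affected table is named problematic_table (a placeholder) and that per-table statistics are read from pg_stat_user_tables; note that the plain VACUUM syntax wants VERBOSE before ANALYZE:

    -- 1. Look for large numbers of dead row versions
    VACUUM VERBOSE ANALYZE problematic_table;

    -- 2. Time the problem statement without keeping its effects
    BEGIN;
    EXPLAIN ANALYZE DELETE FROM problematic_table WHERE id < 60000;  -- hypothetical predicate
    ROLLBACK;  -- EXPLAIN ANALYZE really executes the DELETE

    -- 3. Check access patterns (needs the stats collector enabled)
    SELECT relname, seq_scan, idx_scan, n_tup_del
    FROM pg_stat_user_tables
    WHERE relname = 'problematic_table';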
{
"msg_contents": "Can you post an explain analyze for the delete query? That will at\nleast tell you if it is the delete itself which is slow, or a trigger /\nreferential integrity constraint check. Which version of PG is this?\n\n-- Mark Lewis\n\nOn Wed, 2006-03-29 at 12:58 -0500, Eric Lauzon wrote:\n> Greetings,\n> \n> \tWe have an issue where we have a database with many tables.\n> \tThe layout of the database is 3 set of look alike tables with different names.\n> \tEach set of tables has some referential integrety that point back to the main\n> \tcontrol table.\n> \n> \tOn two set of tables we are able to efficiently delete referential and main record\n> \twithout a problems, but on the 3rd set we have an issue where the control table is clugged\n> \tand delete seem to take forever , as example on the first two set a delete of 60K record take about\n> \t4 second to 10 second but on the 3rd set it can take as long as 3hours.\n> \n> \tThis seem to be only affecting one database , the schema and way of doing is replicated elsewhere\n> \tand if efficient.\n> \n> \tThe question is, even after droping 3rd set integrity , dumping the table data , deleting the table,\n> \tdoing a manual checkpoint and recreating the table with the dump data , with or without referential\n> \tintegrity , the problems still araise.\n> \n> \tIf we copy the data from the live table and do a create table aaa as select * from problematic_table;\n> \tthe table aaa operations are normal and efficient.\n> \n> \tThis is why our investigation brought us to the folowing questions:\n> \n> \t1. Are postgresql data file name are hashed references to table name(as oracle)? [~path to data EX:/var/log/pgsql/data/[arbitraty \t\t\t numbers]/[datafile]]?\n> \n> \t2. If the data files are corrupted and we re-create is it possible it uses the same files thus creating the same issue?\n> \n> \t3. Since we know that all the tables has that problems is there an internal table with undisclosed references to tables data files?\n> \t\n> \tI hope the questions were clear.\n> \n> \tHave a good day, and thank you in advance.\n> \n> \n> Eric Lauzon\n> [Recherche & Développement]\n> Above Sécurité / Above Security\n> Tél : (450) 430-8166\n> Fax : (450) 430-1858\n> \n> AVERTISSEMENT CONCERNANT LA CONFIDENTIALITÉ \n> \n> Le présent message est à l'usage exclusif du ou des destinataires mentionnés ci-dessus. Son contenu est confidentiel et peut être assujetti au secret professionnel. Si vous avez reçu le présent message par erreur, veuillez nous en aviser immédiatement et le détruire en vous abstenant d'en faire une copie, d'en divulguer le contenu ou d'y donner suite.\n> \n> CONFIDENTIALITY NOTICE\n> \n> This communication is intended for the exclusive use of the addressee identified above. Its content is confidential and may contain privileged information. If you have received this communication by error, please notify the sender and delete the message without copying or disclosing it.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n",
"msg_date": "Wed, 29 Mar 2006 14:16:51 -0800",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database possible corruption , unsolvable mystery"
}
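When a DELETE on a referenced table is slow while the same data copied into a fresh table behaves normally, one common culprit (a general observation, not confirmed for this report) is the per-row referential-integrity check: if a child table's referencing column has no index, every deleted parent row forces a scan of the child table. A hedged sketch with hypothetical table and column names:

    BEGIN;
    EXPLAIN ANALYZE
    DELETE FROM control_table WHERE batch_id = 42;   -- hypothetical parent table and predicate
    ROLLBACK;

    -- If most of the elapsed time is unaccounted for by the plan nodes,
    -- suspect the FK triggers and index the referencing columns, e.g.:
    CREATE INDEX detail_table_control_id_idx ON detail_table (control_id);  -- hypothetical names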
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Richard Huxton [mailto:[email protected]] \n> Sent: 29 mars 2006 17:10\n> To: Eric Lauzon\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Database possible corruption , \n> unsolvable mystery\n> \n> Eric Lauzon wrote:\n> > This is why our investigation brought us to the folowing questions:\n> > \n> > 1. Are postgresql data file name are hashed references to table \n> > name(as oracle)? [~path to data EX:/var/log/pgsql/data/[arbitraty \n> > numbers]/[datafile]]?\n> \n> OID numbers - look in the contrib directory/package for the \n> oid2name utility.\n\nThis will give me the location of the databases file for a specific\ntable or index?\n\n\n\n> \n> > 2. If the data files are corrupted and we re-create is it \n> possible it \n> > uses the same files thus creating the same issue?\n> \n> No\n> \n\nhumm why would it affect only original table , and copy of that table\nrenamed back to the original table name\nbut not the copy.\n\nexample: \noriginal table name : table_problem <issue>\n\t copy name : table_problem_copy <no issue>\n\t renamed copyed table: table_problem <issue>\n\n> > 3. Since we know that all the tables has that problems is there an \n> > internal table with undisclosed references to tables data files? I \n> > hope the questions were clear.\n> \n> You mean a system table that could account for your problems \n> since it refers to some of your tables but not others? No.\n\n Well actualy its affecting only one table in a set of 5 table\n(referential integrity)\n and the table affected if the [referenced table] so it might be system\nrelated, but\n as stated if all the data is copied to a create table\ncopy_of_problematic_table as select * from problematic_table\n there is 0 issue but as soon as copy_of_problematic_table is renamed to\nproblematic_table the problems is back.\n\n But we have 2 orther set of 5 table in the same database built exactly\nthe same way and it dosen't\n seem affected by the same problems, this is why i am wandering why the\nproblems is recurent if\n internal postgresql data file are name bound ...and i am not taking\nabout the OID.\n\n\n> \n> The obvious places to start are:\n> 1. vacuum analyse verbose on the tables in question\n> This should show whether there are a lot of \"dead\" rows \n> 2. explain analyse on problem queries\n> To see if the query plans are correct 3. SELECT * FROM pg_stat_???\n> Assuming you have statistics gathering turned on, this \n> might show unusual table accesses.\n\nBtw i can't give vacuum info right now because the source database is\nbeing dumped for complete re-insertion.\n\nMabey later if this dosen't fix the problem , and as of information its\n7.4.6 [i know its not the most rescent]\nbut it is the way it is right now and we suspect the problem might have\ncome from a power outage while there was\na full vacuum and the reason why its only one table that has been\naffected is probably because it was the table being vacummed,\nbut this is only an assumption right now and more info will folow if the\nproblems persis after a full restore.\n\nThanks you :)\n-elz\n\nAVERTISSEMENT CONCERNANT LA CONFIDENTIALITE \n\nLe present message est a l'usage exclusif du ou des destinataires mentionnes ci-dessus. Son contenu est confidentiel et peut etre assujetti au secret professionnel. 
Si vous avez recu le present message par erreur, veuillez nous en aviser immediatement et le detruire en vous abstenant d'en faire une copie, d'en divulguer le contenu ou d'y donner suite.\n\nCONFIDENTIALITY NOTICE\n\nThis communication is intended for the exclusive use of the addressee identified above. Its content is confidential and may contain privileged information. If you have received this communication by error, please notify the sender and delete the message without copying or disclosing it.\n",
"msg_date": "Wed, 29 Mar 2006 17:31:17 -0500",
"msg_from": "\"Eric Lauzon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Database possible corruption , unsolvable mystery"
},
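To make the file-location answer concrete: a relation's on-disk file is named after pg_class.relfilenode and lives under $PGDATA/base/<database OID>/, which is the same mapping contrib's oid2name prints; and because a recreated table is assigned a fresh relfilenode, it does not reuse the old files. A small sketch, again using the placeholder table name:

    SELECT oid, relname, relfilenode
    FROM pg_class
    WHERE relname = 'problematic_table';
    -- heap file: $PGDATA/base/<database OID>/<relfilenode>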
{
"msg_contents": "Eric Lauzon wrote:\n\n>Mabey later if this dosen't fix the problem , and as of information its\n>7.4.6 [i know its not the most rescent]\n>but it is the way it is right now and we suspect the problem might have\n>come from a power outage while there was\n>a full vacuum and the reason why its only one table that has been\n>affected is probably because it was the table being vacummed,\n>but this is only an assumption right now and more info will folow if the\n>problems persis after a full restore.\n>\n> \n>\nHrm, you know that you -should- upgrade to at least the latest 7.4 \n(7.4.13 I think is the most recent). looking from the changelogs, there \nare a few bugs that you could be hitting;\n\n7.4.10\n * Fix race condition in transaction log management There was a \nnarrow window in which an I/O operation could be initiated for the wrong \npage, leading to an Assert failure or data corruption.\n\n7.4.9\n * Improve checking for partially-written WAL pages\n * Fix error that allowed VACUUM to remove ctid chains too soon, and \nadd more checking in code that follows ctid links. This fixes a \nlong-standing problem that could cause crashes in very rare circumstances.\n\n7.4.8\n * Repair race condition between relation extension and VACUUMThis \ncould theoretically have caused loss of a page's worth of \nfreshly-inserted data, although the scenario seems of very low \nprobability. There are no known cases of it having caused more than an \nAssert failure\n\n and these are only the ones that appear 'notably' in the changelog. \nIn short, I -really- -would- -strongly- -advise- you upgrading to \n7.4.13. Personally, I would have made this my first step, especially if \nyour data is important.\n\n There is no need for a dump/reload between minor point releases. \nAlthough there is a security fix in 7.4.8.\n\n Since the db is in a state of 'down' or repair, why not do it now ? \ntwo birds, one stone.\n\n Regards\n Stef\n",
"msg_date": "Wed, 29 Mar 2006 17:52:55 -0500",
"msg_from": "stef <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database possible corruption , unsolvable mystery"
}
] |
[
{
"msg_contents": "> Hrm, you know that you -should- upgrade to at least the latest 7.4\n> (7.4.13 I think is the most recent). looking from the \n> changelogs, there are a few bugs that you could be hitting;\n> \n> 7.4.10\n> * Fix race condition in transaction log management There \n> was a narrow window in which an I/O operation could be \n> initiated for the wrong page, leading to an Assert failure or \n> data corruption.\n> \n> 7.4.9\n> * Improve checking for partially-written WAL pages\n> * Fix error that allowed VACUUM to remove ctid chains too \n> soon, and add more checking in code that follows ctid links. \n> This fixes a long-standing problem that could cause crashes \n> in very rare circumstances.\n> \n> 7.4.8\n> * Repair race condition between relation extension and \n> VACUUMThis could theoretically have caused loss of a page's \n> worth of freshly-inserted data, although the scenario seems \n> of very low probability. There are no known cases of it \n> having caused more than an Assert failure\n> \n> and these are only the ones that appear 'notably' in the \n> changelog. \n> In short, I -really- -would- -strongly- -advise- you \n> upgrading to 7.4.13. Personally, I would have made this my \n> first step, especially if your data is important.\n> \n> There is no need for a dump/reload between minor point releases. \n> Although there is a security fix in 7.4.8.\n> \n> Since the db is in a state of 'down' or repair, why not \n> do it now ? \n> two birds, one stone.\n\nThank you , this might be a good solution , but we have a bigger upgrade\ncomming for 8.1.x later on,\nbut considering that other things out of our hands might occur , we\nmight seriously look into it after fixing\nthe current problems :) [because we dont think that upgrading right now\nwill magicly fix the problem we are having.]\nAnd on about 10 database [all 7.4.6] it is the first time this occur ,\nand the symtom is really on one table, considering\na few terabytes of data sparsed accros a few db, we might have been\nlucky yet but as of now its the first time \nwe can see performance hit only on \"delete\".\n\nBut thanks alot for the hint. [even tho we never had some unexpected\ndata failure/crash] beside this out of control\nhuman power failure that might have been the root of this [the database\nis still dumping ...few gigs :)]\n\nThanks alot all for the help,and if we find the root cause we will give\nfeed back.\n\n-elz\n\nAVERTISSEMENT CONCERNANT LA CONFIDENTIALITE \n\nLe present message est a l'usage exclusif du ou des destinataires mentionnes ci-dessus. Son contenu est confidentiel et peut etre assujetti au secret professionnel. Si vous avez recu le present message par erreur, veuillez nous en aviser immediatement et le detruire en vous abstenant d'en faire une copie, d'en divulguer le contenu ou d'y donner suite.\n\nCONFIDENTIALITY NOTICE\n\nThis communication is intended for the exclusive use of the addressee identified above. Its content is confidential and may contain privileged information. If you have received this communication by error, please notify the sender and delete the message without copying or disclosing it.\n",
"msg_date": "Wed, 29 Mar 2006 18:33:21 -0500",
"msg_from": "\"Eric Lauzon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Database possible corruption , unsolvable mystery"
},
{
"msg_contents": "Eric,\n\n> Thank you , this might be a good solution , but we have a bigger upgrade\n> comming for 8.1.x later on,\n> but considering that other things out of our hands might occur , we\n> might seriously look into it after fixing\n> the current problems :) [because we dont think that upgrading right now\n> will magicly fix the problem we are having.]\n\nIt probably won't, but it will prevent a re-occurance before you get around to \nthe 8.1 upgrade. How much time have you wasted on this issue already, an \nissue which might not have occurred if you'd kept up with patch releases? A \npatch upgrade is what, 5 minutes of downtime?\n\n> And on about 10 database [all 7.4.6] it is the first time this occur ,\n> and the symtom is really on one table, considering\n> a few terabytes of data sparsed accros a few db, we might have been\n> lucky yet but as of now its the first time\n> we can see performance hit only on \"delete\".\n\nWell, that would be in line with the issues 7.4.7-7.4.12. All of them require \nmillesecond-timing to hit the bug. You're not likely to see it more than \nonce.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 29 Mar 2006 21:14:27 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database possible corruption , unsolvable mystery"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a query that is using a sequential scan instead of an index \nscan. I've turned off sequential scans and it is in fact faster with \nthe index scan.\n\nHere's my before and after.\n\nBefore:\n\nssdev=# SET enable_seqscan TO DEFAULT;\nssdev=# explain analyze select cp.product_id\n\t\tfrom category_product cp, product_attribute_value pav\n\t\twhere cp.category_id = 1001082 and cp.product_id = pav.product_id;\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------------------------------\nHash Join (cost=25.52..52140.59 rows=5139 width=4) (actual \ntime=4.521..2580.520 rows=19695 loops=1)\n Hash Cond: (\"outer\".product_id = \"inner\".product_id)\n -> Seq Scan on product_attribute_value pav (cost=0.00..40127.12 \nrows=2387312 width=4) (actual time=0.039..1469.295 rows=2385846 loops=1)\n -> Hash (cost=23.10..23.10 rows=970 width=4) (actual \ntime=2.267..2.267 rows=1140 loops=1)\n -> Index Scan using x_category_product__category_id_fk_idx \non category_product cp (cost=0.00..23.10 rows=970 width=4) (actual \ntime=0.122..1.395 rows=1140 loops=1)\n Index Cond: (category_id = 1001082)\nTotal runtime: 2584.221 ms\n(7 rows)\n\n\nAfter:\n\nssdev=# SET enable_seqscan TO false;\nssdev=# explain analyze select cp.product_id\n\t\tfrom category_product cp, product_attribute_value pav\n\t\twhere cp.category_id = 1001082 and cp.product_id = pav.product_id;\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-------------------------------------\nNested Loop (cost=0.00..157425.22 rows=5139 width=4) (actual \ntime=0.373..71.177 rows=19695 loops=1)\n -> Index Scan using x_category_product__category_id_fk_idx on \ncategory_product cp (cost=0.00..23.10 rows=970 width=4) (actual \ntime=0.129..1.438 rows=1140 loops=1)\n Index Cond: (category_id = 1001082)\n -> Index Scan using product_attribute_value__product_id_fk_idx \non product_attribute_value pav (cost=0.00..161.51 rows=61 width=4) \n(actual time=0.016..0.053 rows=17 loops=1140)\n Index Cond: (\"outer\".product_id = pav.product_id)\nTotal runtime: 74.747 ms\n(6 rows)\n\nThere's quite a big difference in speed there. 2584.221 ms vs. 74.747 \nms.\n\nAny ideas what I can do to improve this without turning sequential \nscanning off?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com",
"msg_date": "Wed, 29 Mar 2006 20:12:28 -0700",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "Oops. I forgot to mention that I was using PostgreSQL 8.1.3 on Mac OS X.\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Mar 29, 2006, at 8:12 PM, Brendan Duddridge wrote:\n\n> Hi,\n>\n> I have a query that is using a sequential scan instead of an index \n> scan. I've turned off sequential scans and it is in fact faster \n> with the index scan.\n>\n> Here's my before and after.\n>\n> Before:\n>\n> ssdev=# SET enable_seqscan TO DEFAULT;\n> ssdev=# explain analyze select cp.product_id\n> \t\tfrom category_product cp, product_attribute_value pav\n> \t\twhere cp.category_id = 1001082 and cp.product_id = pav.product_id;\n>\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------- \n> ---------------------------------------------------------------------- \n> ----------------------------------\n> Hash Join (cost=25.52..52140.59 rows=5139 width=4) (actual \n> time=4.521..2580.520 rows=19695 loops=1)\n> Hash Cond: (\"outer\".product_id = \"inner\".product_id)\n> -> Seq Scan on product_attribute_value pav \n> (cost=0.00..40127.12 rows=2387312 width=4) (actual \n> time=0.039..1469.295 rows=2385846 loops=1)\n> -> Hash (cost=23.10..23.10 rows=970 width=4) (actual \n> time=2.267..2.267 rows=1140 loops=1)\n> -> Index Scan using \n> x_category_product__category_id_fk_idx on category_product cp \n> (cost=0.00..23.10 rows=970 width=4) (actual time=0.122..1.395 \n> rows=1140 loops=1)\n> Index Cond: (category_id = 1001082)\n> Total runtime: 2584.221 ms\n> (7 rows)\n>\n>\n> After:\n>\n> ssdev=# SET enable_seqscan TO false;\n> ssdev=# explain analyze select cp.product_id\n> \t\tfrom category_product cp, product_attribute_value pav\n> \t\twhere cp.category_id = 1001082 and cp.product_id = pav.product_id;\n>\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------- \n> ---------------------------------------------------------------------- \n> -----------------------------------------\n> Nested Loop (cost=0.00..157425.22 rows=5139 width=4) (actual \n> time=0.373..71.177 rows=19695 loops=1)\n> -> Index Scan using x_category_product__category_id_fk_idx on \n> category_product cp (cost=0.00..23.10 rows=970 width=4) (actual \n> time=0.129..1.438 rows=1140 loops=1)\n> Index Cond: (category_id = 1001082)\n> -> Index Scan using product_attribute_value__product_id_fk_idx \n> on product_attribute_value pav (cost=0.00..161.51 rows=61 width=4) \n> (actual time=0.016..0.053 rows=17 loops=1140)\n> Index Cond: (\"outer\".product_id = pav.product_id)\n> Total runtime: 74.747 ms\n> (6 rows)\n>\n> There's quite a big difference in speed there. 2584.221 ms vs. \n> 74.747 ms.\n>\n> Any ideas what I can do to improve this without turning sequential \n> scanning off?\n>\n> Thanks,\n>\n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n>\n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n>\n> http://www.clickspace.com\n>",
"msg_date": "Wed, 29 Mar 2006 20:20:17 -0700",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "Brenden,\n\n> Any ideas what I can do to improve this without turning sequential \n> scanning off?\n\nHmmm, looks like your row estimates are good. Which means it's probably your \npostgresql.conf parameters which are off. Try the following, in the order \nbelow:\n\n1) Raise effective_cache_size to 2/3 of your RAM (remember that ecs is in 8k \npages). Test again.\n\n2) Multiply all of the cpu_* costs by 0.3. Test again.\n\n3) Lower random_page_cost by steps to 3.5, then 3.0, then 2.5, then 2.0, \ntesting each time.\n\nThese are all runtime-settable parameters, so you can test them in one query \nwindow, then set them in the main postgresql.conf if they work.\n\n-- \nJosh Berkus\nSun Microsystems\nSan Francisco\n",
"msg_date": "Wed, 29 Mar 2006 21:18:49 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
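The three steps above can be tried per session before touching postgresql.conf. A sketch against the query from this thread; the values shown are illustrative starting points (the 8.1 defaults are cpu_tuple_cost 0.01, cpu_index_tuple_cost 0.001, cpu_operator_cost 0.0025, random_page_cost 4.0), not recommendations:

    SET effective_cache_size = 655360;   -- about 5 GB expressed as 8 kB pages
    SET cpu_tuple_cost = 0.003;
    SET cpu_index_tuple_cost = 0.0003;
    SET cpu_operator_cost = 0.00075;
    SET random_page_cost = 3.0;

    EXPLAIN ANALYZE
    SELECT cp.product_id
    FROM category_product cp, product_attribute_value pav
    WHERE cp.category_id = 1001082
      AND cp.product_id = pav.product_id;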
{
"msg_contents": "What's the correlation of category_id? The current index scan cost\nestimator places a heavy penalty on anything with a correlation much\nbelow about 90%.\n\nOn Wed, Mar 29, 2006 at 08:12:28PM -0700, Brendan Duddridge wrote:\n> Hi,\n> \n> I have a query that is using a sequential scan instead of an index \n> scan. I've turned off sequential scans and it is in fact faster with \n> the index scan.\n> \n> Here's my before and after.\n> \n> Before:\n> \n> ssdev=# SET enable_seqscan TO DEFAULT;\n> ssdev=# explain analyze select cp.product_id\n> \t\tfrom category_product cp, product_attribute_value pav\n> \t\twhere cp.category_id = 1001082 and cp.product_id = \n> \t\tpav.product_id;\n> \n> \n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> ------------------------------------------------------------------------ \n> ------------------------------\n> Hash Join (cost=25.52..52140.59 rows=5139 width=4) (actual \n> time=4.521..2580.520 rows=19695 loops=1)\n> Hash Cond: (\"outer\".product_id = \"inner\".product_id)\n> -> Seq Scan on product_attribute_value pav (cost=0.00..40127.12 \n> rows=2387312 width=4) (actual time=0.039..1469.295 rows=2385846 loops=1)\n> -> Hash (cost=23.10..23.10 rows=970 width=4) (actual \n> time=2.267..2.267 rows=1140 loops=1)\n> -> Index Scan using x_category_product__category_id_fk_idx \n> on category_product cp (cost=0.00..23.10 rows=970 width=4) (actual \n> time=0.122..1.395 rows=1140 loops=1)\n> Index Cond: (category_id = 1001082)\n> Total runtime: 2584.221 ms\n> (7 rows)\n> \n> \n> After:\n> \n> ssdev=# SET enable_seqscan TO false;\n> ssdev=# explain analyze select cp.product_id\n> \t\tfrom category_product cp, product_attribute_value pav\n> \t\twhere cp.category_id = 1001082 and cp.product_id = \n> \t\tpav.product_id;\n> \n> \n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> ------------------------------------------------------------------------ \n> -------------------------------------\n> Nested Loop (cost=0.00..157425.22 rows=5139 width=4) (actual \n> time=0.373..71.177 rows=19695 loops=1)\n> -> Index Scan using x_category_product__category_id_fk_idx on \n> category_product cp (cost=0.00..23.10 rows=970 width=4) (actual \n> time=0.129..1.438 rows=1140 loops=1)\n> Index Cond: (category_id = 1001082)\n> -> Index Scan using product_attribute_value__product_id_fk_idx \n> on product_attribute_value pav (cost=0.00..161.51 rows=61 width=4) \n> (actual time=0.016..0.053 rows=17 loops=1140)\n> Index Cond: (\"outer\".product_id = pav.product_id)\n> Total runtime: 74.747 ms\n> (6 rows)\n> \n> There's quite a big difference in speed there. 2584.221 ms vs. 74.747 \n> ms.\n> \n> Any ideas what I can do to improve this without turning sequential \n> scanning off?\n> \n> Thanks,\n> \n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n> \n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n> \n> http://www.clickspace.com\n> \n\n\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 31 Mar 2006 09:59:12 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "Hi Jim,\n\nI'm not quite sure what you mean by the correlation of category_id?\nThe category_id is part of a compound primary key in the \ncategory_product\ntable. The primary key on category_product is (category_id, product_id).\n\nHere's the definitions of the two tables involved in the join:\n\n Table \"public.category_product\"\n Column | Type | Modifiers\n---------------------+----------------------+-----------\ncategory_id | integer | not null\nproduct_id | integer | not null\nen_name_sort_order | integer |\nfr_name_sort_order | integer |\nmerchant_sort_order | integer |\nprice_sort_order | integer |\nmerchant_count | integer |\nis_active | character varying(5) |\nIndexes:\n \"x_category_product_pk\" PRIMARY KEY, btree (category_id, \nproduct_id)\n \"category_product__is_active_idx\" btree (is_active)\n \"category_product__merchant_sort_order_idx\" btree \n(merchant_sort_order)\n \"x_category_product__category_id_fk_idx\" btree (category_id) \nCLUSTER\n \"x_category_product__product_id_fk_idx\" btree (product_id)\nForeign-key constraints:\n \"x_category_product_category_fk\" FOREIGN KEY (category_id) \nREFERENCES category(category_id) DEFERRABLE INITIALLY DEFERRED\n \"x_category_product_product_fk\" FOREIGN KEY (product_id) \nREFERENCES product(product_id) DEFERRABLE INITIALLY DEFERRED\n\n\n\n Table \"public.product_attribute_value\"\n Column | Type | Modifiers\n----------------------------+-----------------------+-----------\nattribute_id | integer | not null\nattribute_unit_id | integer |\nattribute_value_id | integer |\nboolean_value | character varying(5) |\ndecimal_value | numeric(30,10) |\nproduct_attribute_value_id | integer | not null\nproduct_id | integer | not null\nproduct_reference_id | integer |\nstatus_code | character varying(32) |\nIndexes:\n \"product_attribute_value_pk\" PRIMARY KEY, btree \n(product_attribute_value_id)\n \"product_attribute_value__attribute_id_fk_idx\" btree (attribute_id)\n \"product_attribute_value__attribute_unit_id_fk_idx\" btree \n(attribute_unit_id)\n \"product_attribute_value__attribute_value_id_fk_idx\" btree \n(attribute_value_id)\n \"product_attribute_value__product_id_fk_idx\" btree (product_id)\n \"product_attribute_value__product_reference_id_fk_idx\" btree \n(product_reference_id)\nForeign-key constraints:\n \"product_attribute_value_attribute_fk\" FOREIGN KEY \n(attribute_id) REFERENCES attribute(attribute_id) DEFERRABLE \nINITIALLY DEFERRED\n \"product_attribute_value_attributeunit_fk\" FOREIGN KEY \n(attribute_unit_id) REFERENCES attribute_unit(attribute_unit_id) \nDEFERRABLE INITIALLY DEFERRED\n \"product_attribute_value_attributevalue_fk\" FOREIGN KEY \n(attribute_value_id) REFERENCES attribute_value(attribute_value_id) \nDEFERRABLE INITIALLY DEFERRED\n \"product_attribute_value_product_fk\" FOREIGN KEY (product_id) \nREFERENCES product(product_id) DEFERRABLE INITIALLY DEFERRED\n \"product_attribute_value_productreference_fk\" FOREIGN KEY \n(product_reference_id) REFERENCES product(product_id) DEFERRABLE \nINITIALLY DEFERRED\n\n\nNot sure if that helps answer your question, but the query is pretty \nslow. Sometimes it takes 5 - 15 seconds depending on the category_id \nspecified.\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Mar 31, 2006, at 8:59 AM, Jim C. 
Nasby wrote:\n\n> What's the correlation of category_id? The current index scan cost\n> estimator places a heavy penalty on anything with a correlation much\n> below about 90%.\n>\n> On Wed, Mar 29, 2006 at 08:12:28PM -0700, Brendan Duddridge wrote:\n>> Hi,\n>>\n>> I have a query that is using a sequential scan instead of an index\n>> scan. I've turned off sequential scans and it is in fact faster with\n>> the index scan.\n>>\n>> Here's my before and after.\n>>\n>> Before:\n>>\n>> ssdev=# SET enable_seqscan TO DEFAULT;\n>> ssdev=# explain analyze select cp.product_id\n>> \t\tfrom category_product cp, product_attribute_value pav\n>> \t\twhere cp.category_id = 1001082 and cp.product_id =\n>> \t\tpav.product_id;\n>>\n>>\n>> QUERY PLAN\n>> --------------------------------------------------------------------- \n>> ---\n>> --------------------------------------------------------------------- \n>> ---\n>> ------------------------------\n>> Hash Join (cost=25.52..52140.59 rows=5139 width=4) (actual\n>> time=4.521..2580.520 rows=19695 loops=1)\n>> Hash Cond: (\"outer\".product_id = \"inner\".product_id)\n>> -> Seq Scan on product_attribute_value pav (cost=0.00..40127.12\n>> rows=2387312 width=4) (actual time=0.039..1469.295 rows=2385846 \n>> loops=1)\n>> -> Hash (cost=23.10..23.10 rows=970 width=4) (actual\n>> time=2.267..2.267 rows=1140 loops=1)\n>> -> Index Scan using x_category_product__category_id_fk_idx\n>> on category_product cp (cost=0.00..23.10 rows=970 width=4) (actual\n>> time=0.122..1.395 rows=1140 loops=1)\n>> Index Cond: (category_id = 1001082)\n>> Total runtime: 2584.221 ms\n>> (7 rows)\n>>\n>>\n>> After:\n>>\n>> ssdev=# SET enable_seqscan TO false;\n>> ssdev=# explain analyze select cp.product_id\n>> \t\tfrom category_product cp, product_attribute_value pav\n>> \t\twhere cp.category_id = 1001082 and cp.product_id =\n>> \t\tpav.product_id;\n>>\n>>\n>> QUERY PLAN\n>> --------------------------------------------------------------------- \n>> ---\n>> --------------------------------------------------------------------- \n>> ---\n>> -------------------------------------\n>> Nested Loop (cost=0.00..157425.22 rows=5139 width=4) (actual\n>> time=0.373..71.177 rows=19695 loops=1)\n>> -> Index Scan using x_category_product__category_id_fk_idx on\n>> category_product cp (cost=0.00..23.10 rows=970 width=4) (actual\n>> time=0.129..1.438 rows=1140 loops=1)\n>> Index Cond: (category_id = 1001082)\n>> -> Index Scan using product_attribute_value__product_id_fk_idx\n>> on product_attribute_value pav (cost=0.00..161.51 rows=61 width=4)\n>> (actual time=0.016..0.053 rows=17 loops=1140)\n>> Index Cond: (\"outer\".product_id = pav.product_id)\n>> Total runtime: 74.747 ms\n>> (6 rows)\n>>\n>> There's quite a big difference in speed there. 2584.221 ms vs. 74.747\n>> ms.\n>>\n>> Any ideas what I can do to improve this without turning sequential\n>> scanning off?\n>>\n>> Thanks,\n>>\n>> ____________________________________________________________________\n>> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n>>\n>> ClickSpace Interactive Inc.\n>> Suite L100, 239 - 10th Ave. SE\n>> Calgary, AB T2G 0V9\n>>\n>> http://www.clickspace.com\n>>\n>\n>\n>\n> -- \n> Jim C. Nasby, Sr. 
Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n>",
"msg_date": "Fri, 31 Mar 2006 18:09:18 -0700",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "On 4/1/06, Brendan Duddridge <[email protected]> wrote:\n> Hi Jim,\n>\n> I'm not quite sure what you mean by the correlation of category_id?\n\nIt means how many distinct values does it have (at least that's my\nunderstanding of it ;) ).\n\nselect category_id, count(*) from category_product group by category_id;\n\nwill show you how many category_id's there are and how many products\nare in each category.\n\nHaving a lot of products in one category (or having a small amount of\ncategories) can slow things down because the db can't use the index\neffectively.. which might be what you're seeing (hence why it's fast\nfor some categories, slow for others).\n\n\n> On Mar 31, 2006, at 8:59 AM, Jim C. Nasby wrote:\n>\n> > What's the correlation of category_id? The current index scan cost\n> > estimator places a heavy penalty on anything with a correlation much\n> > below about 90%.\n> >\n> > On Wed, Mar 29, 2006 at 08:12:28PM -0700, Brendan Duddridge wrote:\n> >> Hi,\n> >>\n> >> I have a query that is using a sequential scan instead of an index\n> >> scan. I've turned off sequential scans and it is in fact faster with\n> >> the index scan.\n> >>\n> >> Here's my before and after.\n> >>\n> >> Before:\n> >>\n> >> ssdev=# SET enable_seqscan TO DEFAULT;\n> >> ssdev=# explain analyze select cp.product_id\n> >> from category_product cp, product_attribute_value pav\n> >> where cp.category_id = 1001082 and cp.product_id =\n> >> pav.product_id;\n> >>\n> >>\n> >> QUERY PLAN\n> >> ---------------------------------------------------------------------\n> >> ---\n> >> ---------------------------------------------------------------------\n> >> ---\n> >> ------------------------------\n> >> Hash Join (cost=25.52..52140.59 rows=5139 width=4) (actual\n> >> time=4.521..2580.520 rows=19695 loops=1)\n> >> Hash Cond: (\"outer\".product_id = \"inner\".product_id)\n> >> -> Seq Scan on product_attribute_value pav (cost=0.00..40127.12\n> >> rows=2387312 width=4) (actual time=0.039..1469.295 rows=2385846\n> >> loops=1)\n> >> -> Hash (cost=23.10..23.10 rows=970 width=4) (actual\n> >> time=2.267..2.267 rows=1140 loops=1)\n> >> -> Index Scan using x_category_product__category_id_fk_idx\n> >> on category_product cp (cost=0.00..23.10 rows=970 width=4) (actual\n> >> time=0.122..1.395 rows=1140 loops=1)\n> >> Index Cond: (category_id = 1001082)\n> >> Total runtime: 2584.221 ms\n> >> (7 rows)\n> >>\n> >>\n> >> After:\n> >>\n> >> ssdev=# SET enable_seqscan TO false;\n> >> ssdev=# explain analyze select cp.product_id\n> >> from category_product cp, product_attribute_value pav\n> >> where cp.category_id = 1001082 and cp.product_id =\n> >> pav.product_id;\n> >>\n> >>\n> >> QUERY PLAN\n> >> ---------------------------------------------------------------------\n> >> ---\n> >> ---------------------------------------------------------------------\n> >> ---\n> >> -------------------------------------\n> >> Nested Loop (cost=0.00..157425.22 rows=5139 width=4) (actual\n> >> time=0.373..71.177 rows=19695 loops=1)\n> >> -> Index Scan using x_category_product__category_id_fk_idx on\n> >> category_product cp (cost=0.00..23.10 rows=970 width=4) (actual\n> >> time=0.129..1.438 rows=1140 loops=1)\n> >> Index Cond: (category_id = 1001082)\n> >> -> Index Scan using product_attribute_value__product_id_fk_idx\n> >> on product_attribute_value pav (cost=0.00..161.51 rows=61 width=4)\n> >> (actual time=0.016..0.053 rows=17 loops=1140)\n> >> Index Cond: (\"outer\".product_id = pav.product_id)\n> >> Total runtime: 74.747 ms\n> >> (6 rows)\n> 
>>\n> >> There's quite a big difference in speed there. 2584.221 ms vs. 74.747\n> >> ms.\n> >>\n> >> Any ideas what I can do to improve this without turning sequential\n> >> scanning off?\n> >>\n> >> Thanks,\n> >>\n> >> ____________________________________________________________________\n> >> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n> >>\n> >> ClickSpace Interactive Inc.\n> >> Suite L100, 239 - 10th Ave. SE\n> >> Calgary, AB T2G 0V9\n> >>\n> >> http://www.clickspace.com\n> >>\n> >\n> >\n> >\n> > --\n> > Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> > Pervasive Software http://pervasive.com work: 512-231-6117\n> > vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> >\n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that\n> > your\n> > message can get through to the mailing list cleanly\n> >\n>\n>\n>\n>\n\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Sat, 1 Apr 2006 11:23:37 +1000",
"msg_from": "\"chris smith\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "Ah I see. Ok, well we have a very wide variety here...\n\ncategory_id | count\n-------------+-------\n 1000521 | 31145\n 1001211 | 22991\n 1001490 | 22019\n 1001628 | 12472\n 1000046 | 10480\n 1000087 | 10338\n 1001223 | 10020\n 1001560 | 9532\n 1000954 | 8633\n 1001314 | 8191\n 1001482 | 8140\n 1001556 | 7959\n 1001481 | 7850\n[snip...]\n 1001133 | 1\n 1000532 | 1\n 1000691 | 1\n 1000817 | 1\n 1000783 | 1\n 1000689 | 1\n\n(1157 rows)\n\nSo what's the best kind of query to handle this kind of data to make \nit fast in all cases? I'd like get down to sub-second response times.\n\ncurrently we have:\n\nselect cp.product_id\n from category_product cp, product_attribute_value pav\n where cp.category_id = 1001082 and cp.product_id =\n pav.product_id;\n\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Mar 31, 2006, at 6:23 PM, chris smith wrote:\n\n> On 4/1/06, Brendan Duddridge <[email protected]> wrote:\n>> Hi Jim,\n>>\n>> I'm not quite sure what you mean by the correlation of category_id?\n>\n> It means how many distinct values does it have (at least that's my\n> understanding of it ;) ).\n>\n> select category_id, count(*) from category_product group by \n> category_id;\n>\n> will show you how many category_id's there are and how many products\n> are in each category.\n>\n> Having a lot of products in one category (or having a small amount of\n> categories) can slow things down because the db can't use the index\n> effectively.. which might be what you're seeing (hence why it's fast\n> for some categories, slow for others).\n>\n>\n>> On Mar 31, 2006, at 8:59 AM, Jim C. Nasby wrote:\n>>\n>>> What's the correlation of category_id? The current index scan cost\n>>> estimator places a heavy penalty on anything with a correlation much\n>>> below about 90%.\n>>>\n>>> On Wed, Mar 29, 2006 at 08:12:28PM -0700, Brendan Duddridge wrote:\n>>>> Hi,\n>>>>\n>>>> I have a query that is using a sequential scan instead of an index\n>>>> scan. 
I've turned off sequential scans and it is in fact faster \n>>>> with\n>>>> the index scan.\n>>>>\n>>>> Here's my before and after.\n>>>>\n>>>> Before:\n>>>>\n>>>> ssdev=# SET enable_seqscan TO DEFAULT;\n>>>> ssdev=# explain analyze select cp.product_id\n>>>> from category_product cp, product_attribute_value pav\n>>>> where cp.category_id = 1001082 and cp.product_id =\n>>>> pav.product_id;\n>>>>\n>>>>\n>>>> QUERY PLAN\n>>>> ------------------------------------------------------------------- \n>>>> --\n>>>> ---\n>>>> ------------------------------------------------------------------- \n>>>> --\n>>>> ---\n>>>> ------------------------------\n>>>> Hash Join (cost=25.52..52140.59 rows=5139 width=4) (actual\n>>>> time=4.521..2580.520 rows=19695 loops=1)\n>>>> Hash Cond: (\"outer\".product_id = \"inner\".product_id)\n>>>> -> Seq Scan on product_attribute_value pav \n>>>> (cost=0.00..40127.12\n>>>> rows=2387312 width=4) (actual time=0.039..1469.295 rows=2385846\n>>>> loops=1)\n>>>> -> Hash (cost=23.10..23.10 rows=970 width=4) (actual\n>>>> time=2.267..2.267 rows=1140 loops=1)\n>>>> -> Index Scan using \n>>>> x_category_product__category_id_fk_idx\n>>>> on category_product cp (cost=0.00..23.10 rows=970 width=4) (actual\n>>>> time=0.122..1.395 rows=1140 loops=1)\n>>>> Index Cond: (category_id = 1001082)\n>>>> Total runtime: 2584.221 ms\n>>>> (7 rows)\n>>>>\n>>>>\n>>>> After:\n>>>>\n>>>> ssdev=# SET enable_seqscan TO false;\n>>>> ssdev=# explain analyze select cp.product_id\n>>>> from category_product cp, product_attribute_value pav\n>>>> where cp.category_id = 1001082 and cp.product_id =\n>>>> pav.product_id;\n>>>>\n>>>>\n>>>> QUERY PLAN\n>>>> ------------------------------------------------------------------- \n>>>> --\n>>>> ---\n>>>> ------------------------------------------------------------------- \n>>>> --\n>>>> ---\n>>>> -------------------------------------\n>>>> Nested Loop (cost=0.00..157425.22 rows=5139 width=4) (actual\n>>>> time=0.373..71.177 rows=19695 loops=1)\n>>>> -> Index Scan using x_category_product__category_id_fk_idx on\n>>>> category_product cp (cost=0.00..23.10 rows=970 width=4) (actual\n>>>> time=0.129..1.438 rows=1140 loops=1)\n>>>> Index Cond: (category_id = 1001082)\n>>>> -> Index Scan using product_attribute_value__product_id_fk_idx\n>>>> on product_attribute_value pav (cost=0.00..161.51 rows=61 width=4)\n>>>> (actual time=0.016..0.053 rows=17 loops=1140)\n>>>> Index Cond: (\"outer\".product_id = pav.product_id)\n>>>> Total runtime: 74.747 ms\n>>>> (6 rows)\n>>>>\n>>>> There's quite a big difference in speed there. 2584.221 ms vs. \n>>>> 74.747\n>>>> ms.\n>>>>\n>>>> Any ideas what I can do to improve this without turning sequential\n>>>> scanning off?\n>>>>\n>>>> Thanks,\n>>>>\n>>>> ___________________________________________________________________ \n>>>> _\n>>>> Brendan Duddridge | CTO | 403-277-5591 x24 | \n>>>> [email protected]\n>>>>\n>>>> ClickSpace Interactive Inc.\n>>>> Suite L100, 239 - 10th Ave. SE\n>>>> Calgary, AB T2G 0V9\n>>>>\n>>>> http://www.clickspace.com\n>>>>\n>>>\n>>>\n>>>\n>>> --\n>>> Jim C. Nasby, Sr. 
Engineering Consultant [email protected]\n>>> Pervasive Software http://pervasive.com work: 512-231-6117\n>>> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>>>\n>>> ---------------------------(end of\n>>> broadcast)---------------------------\n>>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>>> subscribe-nomail command to [email protected] so that\n>>> your\n>>> message can get through to the mailing list cleanly\n>>>\n>>\n>>\n>>\n>>\n>\n>\n> --\n> Postgresql & php tutorials\n> http://www.designmagick.com/\n>",
"msg_date": "Fri, 31 Mar 2006 18:31:47 -0700",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "On Sat, Apr 01, 2006 at 11:23:37AM +1000, chris smith wrote:\n> On 4/1/06, Brendan Duddridge <[email protected]> wrote:\n> > Hi Jim,\n> >\n> > I'm not quite sure what you mean by the correlation of category_id?\n> \n> It means how many distinct values does it have (at least that's my\n> understanding of it ;) ).\n\nYour understanding is wrong. :) What you're discussing is n_distinct.\n\nhttp://www.postgresql.org/docs/8.1/interactive/view-pg-stats.html\n\ncorrelation: \"Statistical correlation between physical row ordering and\nlogical ordering of the column values. This ranges from -1 to +1. When\nthe value is near -1 or +1, an index scan on the column will be\nestimated to be cheaper than when it is near zero, due to reduction of\nrandom access to the disk. (This column is NULL if the column data type\ndoes not have a < operator.)\"\n\nIn other words, the following will have a correlation of 1:\n\n1\n2\n3\n...\n998\n999\n1000\n\nAnd this is -1...\n\n1000\n999\n...\n2\n1\n\nWhile this would have a very low correlation:\n\n1\n1000\n2\n999\n...\n\nThe lower the correlation, the more expensive an index scan is, because\nit's more random. As I mentioned, I believe that the current index scan\ncost estimator is flawed though, because it will bias heavily against\ncorrelations that aren't close to 1 or -1.\n\nSo, what does\n\nSELECT * FROM pg_stats WHERE tablename='table' AND attname='category_id';\n\nshow?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Sat, 1 Apr 2006 09:32:47 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "Hi Jim,\n\nfrom SELECT * FROM pg_stats WHERE tablename='table' AND \nattname='category_id'\n\nI find correlation on category_product for category_id is 0.643703\n\nWould setting the index on category_id to be clustered help with this?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 1, 2006, at 8:32 AM, Jim C. Nasby wrote:\n\n> On Sat, Apr 01, 2006 at 11:23:37AM +1000, chris smith wrote:\n>> On 4/1/06, Brendan Duddridge <[email protected]> wrote:\n>>> Hi Jim,\n>>>\n>>> I'm not quite sure what you mean by the correlation of category_id?\n>>\n>> It means how many distinct values does it have (at least that's my\n>> understanding of it ;) ).\n>\n> Your understanding is wrong. :) What you're discussing is n_distinct.\n>\n> http://www.postgresql.org/docs/8.1/interactive/view-pg-stats.html\n>\n> correlation: \"Statistical correlation between physical row ordering \n> and\n> logical ordering of the column values. This ranges from -1 to +1. When\n> the value is near -1 or +1, an index scan on the column will be\n> estimated to be cheaper than when it is near zero, due to reduction of\n> random access to the disk. (This column is NULL if the column data \n> type\n> does not have a < operator.)\"\n>\n> In other words, the following will have a correlation of 1:\n>\n> 1\n> 2\n> 3\n> ...\n> 998\n> 999\n> 1000\n>\n> And this is -1...\n>\n> 1000\n> 999\n> ...\n> 2\n> 1\n>\n> While this would have a very low correlation:\n>\n> 1\n> 1000\n> 2\n> 999\n> ...\n>\n> The lower the correlation, the more expensive an index scan is, \n> because\n> it's more random. As I mentioned, I believe that the current index \n> scan\n> cost estimator is flawed though, because it will bias heavily against\n> correlations that aren't close to 1 or -1.\n>\n> So, what does\n>\n> SELECT * FROM pg_stats WHERE tablename='table' AND \n> attname='category_id';\n>\n> show?\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>",
"msg_date": "Sat, 1 Apr 2006 10:51:12 -0700",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "On 4/2/06, Jim C. Nasby <[email protected]> wrote:\n> On Sat, Apr 01, 2006 at 11:23:37AM +1000, chris smith wrote:\n> > On 4/1/06, Brendan Duddridge <[email protected]> wrote:\n> > > Hi Jim,\n> > >\n> > > I'm not quite sure what you mean by the correlation of category_id?\n> >\n> > It means how many distinct values does it have (at least that's my\n> > understanding of it ;) ).\n>\n> Your understanding is wrong. :) What you're discussing is n_distinct.\n\nGeez, I'm going well this week ;)\n\nThanks for the detailed info.\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Sun, 2 Apr 2006 10:50:44 +1000",
"msg_from": "\"chris smith\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "On 4/2/06, chris smith <[email protected]> wrote:\n> On 4/2/06, Jim C. Nasby <[email protected]> wrote:\n> > On Sat, Apr 01, 2006 at 11:23:37AM +1000, chris smith wrote:\n> > > On 4/1/06, Brendan Duddridge <[email protected]> wrote:\n> > > > Hi Jim,\n> > > >\n> > > > I'm not quite sure what you mean by the correlation of category_id?\n> > >\n> > > It means how many distinct values does it have (at least that's my\n> > > understanding of it ;) ).\n> >\n> > Your understanding is wrong. :) What you're discussing is n_distinct.\n\n<rant>\nIt'd be nice if the database developers agreed on what terms meant.\n\nhttp://dev.mysql.com/doc/refman/5.1/en/myisam-index-statistics.html\n\nThe SHOW INDEX statement displays a cardinality value based on N/S,\nwhere N is the number of rows in the table and S is the average value\ngroup size. That ratio yields an approximate number of value groups in\nthe table.\n</rant>\n\nA work colleague found that information a few weeks ago so that's\nwhere my misunderstanding came from - if I'm reading that right they\nuse n_distinct as their \"cardinality\" basis.. then again I could be\nreading that completely wrong too.\n\nI believe postgres (because it's a lot more standards compliant).. but\nsheesh - what a difference!\n\nThis week's task - stop reading mysql documentation.\n\n--\nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Sun, 2 Apr 2006 11:32:12 +1000",
"msg_from": "\"chris smith\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "chris smith wrote:\n\n> I believe postgres (because it's a lot more standards compliant).. but\n> sheesh - what a difference!\n> \n> This week's task - stop reading mysql documentation.\n\nYou don't _have_ to believe Postgres -- this is stuff taught in any\nstatistics course.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Sun, 2 Apr 2006 00:26:45 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "chris smith wrote:\n\n> <rant>\n> It'd be nice if the database developers agreed on what terms meant.\n> \n> http://dev.mysql.com/doc/refman/5.1/en/myisam-index-statistics.html\n> \n> The SHOW INDEX statement displays a cardinality value based on N/S,\n> where N is the number of rows in the table and S is the average value\n> group size. That ratio yields an approximate number of value groups in\n> the table.\n> </rant>\n> \n> A work colleague found that information a few weeks ago so that's\n> where my misunderstanding came from - if I'm reading that right they\n> use n_distinct as their \"cardinality\" basis.. then again I could be\n> reading that completely wrong too.\n> \n\nYeah that's right - e.g using the same table in postgres and mysql:\n\npgsql> SELECT attname,n_distinct,correlation\n FROM pg_stats\n WHERE tablename='fact0'\n AND attname LIKE 'd%key';\n attname | n_distinct | correlation\n---------+------------+-------------\n d0key | 10000 | -0.0211169\n d1key | 100 | 0.124012\n d2key | 10 | 0.998393\n(3 rows)\n\n\nmysql> SHOW INDEX FROM fact0\n -> ;\n+-------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+\n| Table | Non_unique | Key_name | Seq_in_index | Column_name |\nCollation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |\n+-------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+\n| fact0 | 1 | fact0_d0key | 1 | d0key | A\n | 10000 | NULL | NULL | | BTREE | |\n| fact0 | 1 | fact0_d1key | 1 | d1key | A\n | 100 | NULL | NULL | | BTREE | |\n| fact0 | 1 | fact0_d2key | 1 | d2key | A\n | 10 | NULL | NULL | | BTREE | |\n+-------+------------+-------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+\n3 rows in set (0.00 sec)\n\n\nIt is a bit confusing - '(distinct) cardinality' might be a better\nheading for their 'cardinality' column!\n\nOn the correlation business - I don't think Mysql calculates it (or if\nit does, its not displayed).\n\n\n> I believe postgres (because it's a lot more standards compliant).. but\n> sheesh - what a difference!\n> \n\nWell yes - however, to be fair to the Mysql guys, AFAICS the capture and \ndisplay of index stats (and any other optimizer related data) is not \npart of any standard.\n\n\nCheers\n\nMark\n",
"msg_date": "Sun, 02 Apr 2006 16:31:34 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n> It is a bit confusing - '(distinct) cardinality' might be a better\n> heading for their 'cardinality' column!\n\nThe usual mathematical meaning of \"cardinality\" is \"the number of\nmembers in a set\". That isn't real helpful for the point at hand,\nbecause the mathematical definition of a set disallows duplicate\nmembers, so if you're dealing with non-unique values you could argue it\neither way about whether to count duplicates or not. However, I read in\nthe SQL99 spec (3.1 Definitions)\n\n d) cardinality (of a value of a collection type): The number of\n elements in that value. Those elements need not necessarily have\n distinct values.\n\nso ... as all too often ... the mysql boys have not got a clue about\nstandards compliance. They are using this term in the opposite way\nfrom how the SQL committee uses it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 01 Apr 2006 23:46:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan "
},
{
"msg_contents": "Brendan,\n\n> But just as a follow up question to your #1 suggestion, I have 8 GB\n> of ram in my production server. You're saying to set the\n> effective_cache_size then to 5 GB roughly? Somewhere around 655360?\n> Currently it is set to 65535. Is that something that's OS dependent?\n> I'm not sure how much memory my server sets aside for disk caching.\n\nYes, about. It's really a judgement call; you're looking for the approximate \ncombined RAM available for disk caching and shared mem. However, this is \njust used as a way of estimating the probability that the data you want is \ncached in memory, so you're just trying to be order-of-magnitude accurate, \nnot to-the-MB accurate.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Sun, 2 Apr 2006 15:30:55 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "Hi Josh,\n\nThanks. I've adjusted my effective_cache_size to 5 GB, so we'll see \nhow that goes.\n\nI'm also doing some query and de-normalization optimizations so we'll \nsee how those go too.\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Apr 2, 2006, at 4:30 PM, Josh Berkus wrote:\n\n> Brendan,\n>\n>> But just as a follow up question to your #1 suggestion, I have 8 GB\n>> of ram in my production server. You're saying to set the\n>> effective_cache_size then to 5 GB roughly? Somewhere around 655360?\n>> Currently it is set to 65535. Is that something that's OS dependent?\n>> I'm not sure how much memory my server sets aside for disk caching.\n>\n> Yes, about. It's really a judgement call; you're looking for the \n> approximate\n> combined RAM available for disk caching and shared mem. However, \n> this is\n> just used as a way of estimating the probability that the data you \n> want is\n> cached in memory, so you're just trying to be order-of-magnitude \n> accurate,\n> not to-the-MB accurate.\n>\n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>",
"msg_date": "Sun, 2 Apr 2006 22:20:00 -0600",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "On Apr 2, 2006, at 6:30 PM, Josh Berkus wrote:\n>> But just as a follow up question to your #1 suggestion, I have 8 GB\n>> of ram in my production server. You're saying to set the\n>> effective_cache_size then to 5 GB roughly? Somewhere around 655360?\n>> Currently it is set to 65535. Is that something that's OS dependent?\n>> I'm not sure how much memory my server sets aside for disk caching.\n>\n> Yes, about. It's really a judgement call; you're looking for the \n> approximate\n> combined RAM available for disk caching and shared mem. However, \n> this is\n> just used as a way of estimating the probability that the data you \n> want is\n> cached in memory, so you're just trying to be order-of-magnitude \n> accurate,\n> not to-the-MB accurate.\n\nFWIW, I typically set effective_cache_size to the amount of memory in \nthe machine minus 1G for the OS and various other daemons, etc. But \nas Josh said, as long as your somewhere in the ballpark it's probably \ngood enough.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n",
"msg_date": "Tue, 4 Apr 2006 16:26:37 -0400",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
},
{
"msg_contents": "On Apr 1, 2006, at 12:51 PM, Brendan Duddridge wrote:\n> from SELECT * FROM pg_stats WHERE tablename='table' AND \n> attname='category_id'\n>\n> I find correlation on category_product for category_id is 0.643703\n>\n> Would setting the index on category_id to be clustered help with this?\n\nIt would absolutely help on the query in question. In my experience, \na correlation of 0.64 is too low to allow an index scan to be used \nfor anything but a tiny number of rows.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n",
"msg_date": "Tue, 4 Apr 2006 16:28:24 -0400",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query using SeqScan instead of IndexScan"
}
] |
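A minimal sketch of the two suggestions that close the thread above: raising effective_cache_size and clustering the table on the weakly-correlated (~0.64) category_id index. The index name below is hypothetical (it is not given in the thread), and 655360 is simply the ~5 GB value discussed for an 8 GB machine (the setting is counted in 8 kB pages on 8.0/8.1); only the order of magnitude matters.

# postgresql.conf
effective_cache_size = 655360         # assumes roughly 5 GB of an 8 GB box is available for caching

-- reorder the heap along the index whose correlation was too low (8.1 syntax),
-- then refresh statistics; CLUSTER rewrites the table and must be repeated as data changes
CLUSTER category_product_category_id_idx ON category_product;
ANALYZE category_product;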
[
{
"msg_contents": "Hullo, I have pg 8.1.3 on an 8-CPU AIX 5.3 box with 16GB of RAM, and I'm \nfinding that it's taking an age to CREATE INDEX on a large table:\n\n Column | Type | Modifiers\n----------------+------------------------+---------------------------------------------------------------------\n ID | integer | not null default nextval(('public.keyword_id_seq'::text)::regclass)\n Text | character varying(200) |\n Longitude | numeric(16,5) |\n Latitude | numeric(16,5) |\n AreaID | integer |\n SearchCount | integer | not null default 0\n Radius | integer |\n LanguageID | integer |\n KeywordType | character varying(20) |\n LowerText | character varying(200) |\n NumberOfHotels | integer |\n CountryID | integer |\n FriendlyText | character varying(200) |\nIndexes:\n\n\n2006-03-29 21:39:38 BST LOG: duration: 41411.625 ms statement: CREATE INDEX ix_keyword_areaid ON \"Keyword\" USING btree (\"AreaID\");\n2006-03-29 21:42:46 BST LOG: duration: 188550.644 ms statement: CREATE INDEX ix_keyword_lonlat ON \"Keyword\" USING btree (\"Longitude\", \"Latitude\");\n2006-03-29 21:46:41 BST LOG: duration: 234864.571 ms statement: CREATE INDEX ix_keyword_lowertext ON \"Keyword\" USING btree (\"LowerText\");\n2006-03-29 21:52:32 BST LOG: duration: 350757.565 ms statement: CREATE INDEX ix_keyword_type ON \"Keyword\" USING btree (\"KeywordType\");\n\nThe table has just under six million rows - should it really be taking \nnearly six minutes to add an index? These log snippets were taking \nduring a pg_restore on a newly created db, so there should be no issues \nwith the table needing vacuuming.\n\nWhat parameters in the postgresql.conf are pertinent here? I have\n\nshared_buffers 120000\nwork_mem 16384\nmaintenance_work_mem = 262144\n\nfor starters... any advice would be warmly welcomed!\n\nCheers,\nGavin.\n\n",
"msg_date": "Thu, 30 Mar 2006 09:26:13 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "CREATE INDEX rather sluggish"
},
{
"msg_contents": "Gavin Hamill <[email protected]> writes:\n> The table has just under six million rows - should it really be taking \n> nearly six minutes to add an index?\n\nTry running it with trace_sort enabled to get more info about where the\ntime is going.\n\nWe've been doing some considerable work on the sorting code in the last\ncouple months, so 8.2 should be better, but I'd like to verify that\nyou're not seeing something we don't know about.\n\n> maintenance_work_mem = 262144\n\nFooling with this might affect the results some.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 30 Mar 2006 10:19:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX rather sluggish "
},
{
"msg_contents": "On Thu, 2006-03-30 at 09:26 +0100, Gavin Hamill wrote: \n> Hullo, I have pg 8.1.3 on an 8-CPU AIX 5.3 box with 16GB of RAM, and I'm \n> finding that it's taking an age to CREATE INDEX on a large table:\n> \n> Column | Type | Modifiers\n> ----------------+------------------------+---------------------------------------------------------------------\n> ID | integer | not null default nextval(('public.keyword_id_seq'::text)::regclass)\n> Text | character varying(200) |\n> Longitude | numeric(16,5) |\n> Latitude | numeric(16,5) |\n> AreaID | integer |\n> SearchCount | integer | not null default 0\n> Radius | integer |\n> LanguageID | integer |\n> KeywordType | character varying(20) |\n> LowerText | character varying(200) |\n> NumberOfHotels | integer |\n> CountryID | integer |\n> FriendlyText | character varying(200) |\n> Indexes:\n> \n> \n> 2006-03-29 21:39:38 BST LOG: duration: 41411.625 ms statement: CREATE INDEX ix_keyword_areaid ON \"Keyword\" USING btree (\"AreaID\");\n> 2006-03-29 21:42:46 BST LOG: duration: 188550.644 ms statement: CREATE INDEX ix_keyword_lonlat ON \"Keyword\" USING btree (\"Longitude\", \"Latitude\");\n> 2006-03-29 21:46:41 BST LOG: duration: 234864.571 ms statement: CREATE INDEX ix_keyword_lowertext ON \"Keyword\" USING btree (\"LowerText\");\n> 2006-03-29 21:52:32 BST LOG: duration: 350757.565 ms statement: CREATE INDEX ix_keyword_type ON \"Keyword\" USING btree (\"KeywordType\");\n> \n> The table has just under six million rows - should it really be taking \n> nearly six minutes to add an index? These log snippets were taking \n> during a pg_restore on a newly created db, so there should be no issues \n> with the table needing vacuuming.\n\nThe index build time varies according to the number and type of the\ndatatypes, as well as the distribution of values in the table. As well\nas the number of rows in the table.\n\nNote the x10 factor to index AreaID (integer) v KeywordType (vchar(20))\n\n> What parameters in the postgresql.conf are pertinent here? I have\n> \n> shared_buffers 120000\n> work_mem 16384\n> maintenance_work_mem = 262144\n\nTry trace_sort = on and then rerun the index builds to see what's\nhappening there. We've speeded sort up by about 2.5 times in the current\ndevelopment version, but it does just run in single threaded mode so\nyour 8 CPUs aren't helping there.\n\nLooks like you might be just over the maintenance_work_mem limit for the\nlast index builds. You can try doubling maintenance_work_mem.\n\nThe extended runtime for KeywordType is interesting in comparison to\nLowerText, which on the face of it is a longer column. My guess would be\nthat LowerText is fairly unique and sorts quickly, whereas KeywordType\nis fairly non-unique with a high average row length that require\ncomplete string comparison before deciding it is actually the same\nvalue. You might want to try using codes rather than textual\nKeywordTypes. \n\nYou might try using partial indexes also, along the lines of\n\nCREATE INDEX ix_keyword_type ON \"Keyword\" USING btree (\"KeywordType\") WHERE KeywordType IS NOT NULL;\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Thu, 30 Mar 2006 18:08:44 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CREATE INDEX rather sluggish"
},
{
"msg_contents": "Tom Lane wrote:\n\n>Gavin Hamill <[email protected]> writes:\n> \n>\n>>The table has just under six million rows - should it really be taking \n>>nearly six minutes to add an index?\n>> \n>>\n>\n>Try running it with trace_sort enabled to get more info about where the\n>time is going.\n>\n>We've been doing some considerable work on the sorting code in the last\n>couple months, so 8.2 should be better, but I'd like to verify that\n>you're not seeing something we don't know about.\n>\n> \n>\nOKies, I dropped the db, created again so it's all clean, ran pg_restore \nagain with trace_sort on - here's the output from one of the larger \nCREATE INDEXes:\n\n2006-03-30 16:48:53 BST LOG: begin index sort: unique = f, workMem = \n262144, randomAccess = f\n2006-03-30 16:49:04 BST LOG: switching to external sort: CPU \n0.88s/9.99u sec elapsed 10.90 sec\n\n2006-03-30 16:49:44 BST LOG: autovacuum: processing database \"postgres\"\n2006-03-30 16:50:38 BST LOG: performsort starting: CPU 1.69s/102.73u \nsec elapsed 104.58 sec\n2006-03-30 16:50:44 BST LOG: autovacuum: processing database \"laterooms\"\n2006-03-30 16:51:44 BST LOG: autovacuum: processing database \"postgres\"\n2006-03-30 16:52:23 BST LOG: finished writing run 1: CPU 2.40s/206.53u \nsec elapsed 209.30 sec\n2006-03-30 16:52:39 BST LOG: finished writing final run 2: CPU \n2.51s/222.98u sec elapsed 225.89 sec\n2006-03-30 16:52:40 BST LOG: performsort done (except final merge): CPU \n2.59s/223.99u sec elapsed 226.98 sec\n2006-03-30 16:52:44 BST LOG: autovacuum: processing database \"laterooms\"\n2006-03-30 16:52:53 BST LOG: external sort ended, 21292 disk blocks \nused: CPU 3.65s/233.10u sec elapsed 239.35 sec\n2006-03-30 16:52:53 BST LOG: duration: 239381.535 ms statement: CREATE \nINDEX ix_keyword_lowertext ON \"Keyword\" USING btree (\"LowerText\");\n\n\nDuring all this, there's been about 900KB/sec of disk activity. The \ndisks are RAID1 and will happily sustain 50MB/sec with minimal system \noverhead.\n\nI'm guessing then that an external sort means disk-based...\n\n>>maintenance_work_mem = 262144\n>> \n>>\n>\n>Fooling with this might affect the results some.\n> \n>\n\nOK will tinker with that - it's not a massive problem since I hope I \nnever have to do a pg_restore once the live server is running fulltime :)\n\nRight - I bumped maintenance_work_mem up to 1GB, tried dropping the \nindex and recreating, and sure enough it's an internal sort now, \nchopping 10% off the time taken:\n\n2006-03-30 21:15:57 BST LOG: begin index sort: unique = f, workMem = \n1048576, randomAccess = f\n2006-03-30 21:16:03 BST LOG: autovacuum: processing database \"laterooms\"\n2006-03-30 21:16:12 BST LOG: performsort starting: CPU 1.20s/13.85u sec \nelapsed 15.07 sec\n2006-03-30 21:17:03 BST LOG: autovacuum: processing database \"laterooms\"\n2006-03-30 21:18:03 BST LOG: autovacuum: processing database \"laterooms\"\n2006-03-30 21:19:03 BST LOG: autovacuum: processing database \"laterooms\"\n2006-03-30 21:19:28 BST LOG: performsort done: CPU 1.20s/210.34u sec \nelapsed 211.69 sec\n2006-03-30 21:19:36 BST LOG: internal sort ended, 336538 KB used: CPU \n2.06s/212.61u sec elapsed 218.80 sec\n2006-03-30 21:19:36 BST LOG: duration: 218847.055 ms statement: CREATE \nINDEX ix_keyword_lowertext on \"Keyword\" USING btree (\"LowerText\");\n\nIf that's reasonable performance from 8.1, then that's fine - I just \ndidn't want to be inadvertantly running way under average :)\n\nCheers,\nGavin.\n\n",
"msg_date": "Thu, 30 Mar 2006 21:21:38 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX rather sluggish"
},
{
"msg_contents": "On Thu, 30 Mar 2006 18:08:44 +0100\nSimon Riggs <[email protected]> wrote:\n\nHello again Simon :)\n\n> The index build time varies according to the number and type of the\n> datatypes, as well as the distribution of values in the table. As well\n> as the number of rows in the table.\n> \n> Note the x10 factor to index AreaID (integer) v KeywordType (vchar(20))\n\nFair enough. :) Is there much of a performance increase by using fixed-length character fields instead of varchars?\n\n> Try trace_sort = on and then rerun the index builds to see what's\n> happening there. We've speeded sort up by about 2.5 times in the current\n> development version, but it does just run in single threaded mode so\n> your 8 CPUs aren't helping there.\n\nYum - I look forward to the 8.2 release =) \n \n> Looks like you might be just over the maintenance_work_mem limit for the\n> last index builds. You can try doubling maintenance_work_mem.\n\nYou were right - needed ~370MB ... I'm happy to alloc 1GB to allow for db growth..\n\n> The extended runtime for KeywordType is interesting in comparison to\n> LowerText, which on the face of it is a longer column. My guess would be\n> that LowerText is fairly unique and sorts quickly, whereas KeywordType\n> is fairly non-unique with a high average row length that require\n> complete string comparison before deciding it is actually the same\n> value. \n\n From looking at a few samples of the millions of rows it seems that it's actually KeywordType that's more unique - LowerText is simply an lowercase representation of the name of this search-keyword, so it's much less unique. Fun stuff :)\n\n> You might want to try using codes rather than textual KeywordTypes. \n\nThat makes sense - I can't get a grip on the data in KeywordType at the moment .. many are more obvious like 'RGN' 'AREA' 'MKT' 'LK' for Region, Area, Market and Lake, but many other rows have '1'.\n \n> You might try using partial indexes also, along the lines of\n> \n> CREATE INDEX ix_keyword_type ON \"Keyword\" USING btree (\"KeywordType\") WHERE KeywordType IS NOT NULL;\n\nWell, each row does have a KeywordType, so no row has a NULL entry...\n \n> Best Regards, Simon Riggs\n\nCheers :)\nGavin.\n \n",
"msg_date": "Thu, 30 Mar 2006 21:45:31 +0100",
"msg_from": "Gavin Hamill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: CREATE INDEX rather sluggish"
}
] |
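A compact sketch of the knobs exercised in the thread above, for reproducing Gavin's measurements. The 1 GB maintenance_work_mem is the value he ended up using, and trace_sort is the 8.1 developer option Tom and Simon asked for (set it in postgresql.conf, or per session if your installation permits it).

SET maintenance_work_mem = 1048576;   -- in kB: 1 GB, enough here to keep the sort internal
SET trace_sort = on;                  -- logs sort phases (performsort, internal/external) to the server log
DROP INDEX ix_keyword_lowertext;
CREATE INDEX ix_keyword_lowertext ON "Keyword" USING btree ("LowerText");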
[
{
"msg_contents": "I have noticed that a lot of people have a hard time finding out how to tune postgresql to suit their hardware.\n\nAre there any tools for automatic tuning of the parameters in postgresql.conf? A simple program running some benchmarks on cpu & disk speed, checking the amount of ram and so on and then suggesting random/seq access cost, vacuum cust, sortmem/cache settings and so on? A pg_tune utility?\n\nMaybe it could even look at runtime statistics/usage logs and help set the number of shared buffers, chekpoints, autovacuum...?\n\n/Mattias (using the default config)\n\n\n\n\n\n\n\nI have noticed that a lot of people have a hard \ntime finding out how to tune postgresql to suit their hardware.\n \nAre there any tools for automatic tuning of the \nparameters in postgresql.conf? A simple program \nrunning some benchmarks on cpu & disk speed, checking the amount of ram \nand so on and then suggesting random/seq access cost, vacuum cust, \nsortmem/cache settings and so on? A pg_tune utility?\n \nMaybe it could even look at runtime \nstatistics/usage logs and help set the number of shared buffers, chekpoints, \nautovacuum...?\n \n/Mattias (using the default config)",
"msg_date": "Thu, 30 Mar 2006 11:16:34 +0200",
"msg_from": "\"Mattias Kregert\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Automatic tuning of postgresql.conf parameters?"
}
] |
[
{
"msg_contents": "[Apologies if this already went through. I don't see it in the archives.]\n\nNormally one expects that an index scan would have a startup time of nearly \nzero. Can anyone explain this:\n\nEXPLAIN ANALYZE select activity_id from activity where state in (10000, 10001) \norder by activity_id limit 100;\n\nQUERY PLAN\n\nLimit (cost=0.00..622.72 rows=100 width=8) (actual \ntime=207356.054..207356.876 rows=100 loops=1)\n -> Index Scan using activity_pk on activity (cost=0.00..40717259.91 \nrows=6538650 width=8) (actual time=207356.050..207356.722 rows=100 loops=1)\n Filter: ((state = 10000) OR (state = 10001))\nTotal runtime: 207357.000 ms\n\nThe table has seen VACUUM FULL and REINDEX before this.\n\nThe plan choice and the statistics look right, but why does it take 3 minutes \nbefore doing anything? Or is the measurement of the actual start time \ninaccurate? This is quite reproducible, so it's not just a case of a \ntemporary I/O bottleneck, say.\n\n(PostgreSQL 8.0.3)\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Thu, 30 Mar 2006 13:59:10 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index scan startup time"
},
{
"msg_contents": "On Thu, Mar 30, 2006 at 01:59:10PM +0200, Peter Eisentraut wrote:\n> EXPLAIN ANALYZE select activity_id from activity where state in (10000, 10001) \n> order by activity_id limit 100;\n> \n> QUERY PLAN\n> \n> Limit (cost=0.00..622.72 rows=100 width=8) (actual \n> time=207356.054..207356.876 rows=100 loops=1)\n> -> Index Scan using activity_pk on activity (cost=0.00..40717259.91 \n> rows=6538650 width=8) (actual time=207356.050..207356.722 rows=100 loops=1)\n> Filter: ((state = 10000) OR (state = 10001))\n> Total runtime: 207357.000 ms\n> \n> The table has seen VACUUM FULL and REINDEX before this.\n\nThe index scan is by activity_id, not by state. Do you have an index on state\nat all?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 30 Mar 2006 14:02:06 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "On Thu, Mar 30, 2006 at 01:59:10PM +0200, Peter Eisentraut wrote:\n>The table has seen VACUUM FULL and REINDEX before this.\n\nBut no analyze?\n\nMike Stone\n",
"msg_date": "Thu, 30 Mar 2006 07:06:05 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "Am Donnerstag, 30. März 2006 14:02 schrieb Steinar H. Gunderson:\n> On Thu, Mar 30, 2006 at 01:59:10PM +0200, Peter Eisentraut wrote:\n> > EXPLAIN ANALYZE select activity_id from activity where state in (10000,\n> > 10001) order by activity_id limit 100;\n> >\n> > QUERY PLAN\n> >\n> > Limit (cost=0.00..622.72 rows=100 width=8) (actual\n> > time=207356.054..207356.876 rows=100 loops=1)\n> > -> Index Scan using activity_pk on activity (cost=0.00..40717259.91\n> > rows=6538650 width=8) (actual time=207356.050..207356.722 rows=100\n> > loops=1) Filter: ((state = 10000) OR (state = 10001))\n> > Total runtime: 207357.000 ms\n> >\n> > The table has seen VACUUM FULL and REINDEX before this.\n>\n> The index scan is by activity_id, not by state. Do you have an index on\n> state at all?\n\nThere is an index on state as well but the column is not selective enough.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Thu, 30 Mar 2006 14:23:53 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "Am Donnerstag, 30. M�rz 2006 14:06 schrieb Michael Stone:\n> On Thu, Mar 30, 2006 at 01:59:10PM +0200, Peter Eisentraut wrote:\n> >The table has seen VACUUM FULL and REINDEX before this.\n>\n> But no analyze?\n\nANALYZE as well, but the plan choice is not the point anyway.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Thu, 30 Mar 2006 14:24:27 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "On Thu, Mar 30, 2006 at 02:23:53PM +0200, Peter Eisentraut wrote:\n>>> EXPLAIN ANALYZE select activity_id from activity where state in (10000,\n>>> 10001) order by activity_id limit 100;\n>>>\n>>> QUERY PLAN\n>>>\n>>> Limit (cost=0.00..622.72 rows=100 width=8) (actual\n>>> time=207356.054..207356.876 rows=100 loops=1)\n>>> -> Index Scan using activity_pk on activity (cost=0.00..40717259.91\n>>> rows=6538650 width=8) (actual time=207356.050..207356.722 rows=100\n>>> loops=1) Filter: ((state = 10000) OR (state = 10001))\n>>> Total runtime: 207357.000 ms\n>>>\n>>> The table has seen VACUUM FULL and REINDEX before this.\n>> The index scan is by activity_id, not by state. Do you have an index on\n>> state at all?\n> There is an index on state as well but the column is not selective enough.\n\nWell, it's logical enough; it scans along activity_id until it finds one with\nstate=10000 or state=10001. You obviously have a _lot_ of records with low\nactivity_id and state none of these two, so Postgres needs to scan all those\nrecords before it founds 100 it can output. This is the “startup cost” you're\nseeing.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 30 Mar 2006 14:31:34 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "Hi, Peter,\n\nPeter Eisentraut wrote:\n>>>The table has seen VACUUM FULL and REINDEX before this.\n>>But no analyze?\n> ANALYZE as well, but the plan choice is not the point anyway.\n\nMaybe you could add a combined Index on activity_id and state, or (if\nyou use this kind of query more often) a conditional index on\nactivity_id where state in (10000,10001).\n\nBtw, PostgreSQL 8.1 could AND two bitmap index scans on the activity and\nstate indices, and get the result faster (i presume).\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 30 Mar 2006 14:35:53 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "On Thu, Mar 30, 2006 at 02:31:34PM +0200, Steinar H. Gunderson wrote:\n>Well, it's logical enough; it scans along activity_id until it finds one with\n>state=10000 or state=10001. You obviously have a _lot_ of records with low\n>activity_id and state none of these two, so Postgres needs to scan all those\n>records before it founds 100 it can output. This is the “startup cost” you're\n>seeing.\n\nYes. And the estimates are bad enough (orders of magnitude) that I can't \nhelp but wonder whether pg could come up with a better plan with better \nstatistics:\n\n>>>> -> Index Scan using activity_pk on activity (cost=0.00..40717259.91 rows=6538650 width=8) (actual time=207356.050..207356.722 rows=100 loops=1) Filter: ((state = 10000) OR (state = 10001))\n\nMike Stone\n",
"msg_date": "Thu, 30 Mar 2006 07:42:53 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "On Thu, Mar 30, 2006 at 07:42:53AM -0500, Michael Stone wrote:\n> Yes. And the estimates are bad enough (orders of magnitude) that I can't \n> help but wonder whether pg could come up with a better plan with better \n> statistics:\n> \n>>>>> -> Index Scan using activity_pk on activity (cost=0.00..40717259.91 \n>>>>> rows=6538650 width=8) (actual time=207356.050..207356.722 rows=100 \n>>>>> loops=1) Filter: ((state = 10000) OR (state = 10001))\n\nIsn't the rows=100 here because of the LIMIT?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 30 Mar 2006 14:51:47 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "Am Donnerstag, 30. März 2006 14:31 schrieb Steinar H. Gunderson:\n> Well, it's logical enough; it scans along activity_id until it finds one\n> with state=10000 or state=10001. You obviously have a _lot_ of records with\n> low activity_id and state none of these two, so Postgres needs to scan all\n> those records before it founds 100 it can output. This is the “startup\n> cost” you're seeing.\n\nThe startup cost is the cost until the plan is set up to start outputting \nrows. It is not the time until the first row is found.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Thu, 30 Mar 2006 14:59:02 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "On Thu, Mar 30, 2006 at 02:59:02PM +0200, Peter Eisentraut wrote:\n>> Well, it's logical enough; it scans along activity_id until it finds one\n>> with state=10000 or state=10001. You obviously have a _lot_ of records with\n>> low activity_id and state none of these two, so Postgres needs to scan all\n>> those records before it founds 100 it can output. This is the “startup\n>> cost” you're seeing.\n> The startup cost is the cost until the plan is set up to start outputting \n> rows. It is not the time until the first row is found.\n\nWell, point, my terminology was wrong. Still, what you're seeing is endless\nscanning for the first row. I don't know your distribution, but are you\nreally sure state wouldn't have better selectivity?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 30 Mar 2006 15:00:41 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "On Thu, Mar 30, 2006 at 02:51:47PM +0200, Steinar H. Gunderson wrote:\n>On Thu, Mar 30, 2006 at 07:42:53AM -0500, Michael Stone wrote:\n>> Yes. And the estimates are bad enough (orders of magnitude) that I can't \n>> help but wonder whether pg could come up with a better plan with better \n>> statistics:\n>> \n>>>>>> -> Index Scan using activity_pk on activity (cost=0.00..40717259.91 \n>>>>>> rows=6538650 width=8) (actual time=207356.050..207356.722 rows=100 \n>>>>>> loops=1) Filter: ((state = 10000) OR (state = 10001))\n>\n>Isn't the rows=100 here because of the LIMIT?\n\nYes. I was looking at the other side; I thought pg could estimate how \nmuch work it would have to do to hit the limit, but double-checking it \nlooks like it can't.\n\nMike Stone\n",
"msg_date": "Thu, 30 Mar 2006 08:29:17 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "Michael Stone <[email protected]> writes:\n> Yes. I was looking at the other side; I thought pg could estimate how \n> much work it would have to do to hit the limit, but double-checking it \n> looks like it can't.\n\nYes, it does, you just have to understand how to interpret the EXPLAIN\noutput. Peter had\n\nLimit (cost=0.00..622.72 rows=100 width=8) (actual time=207356.054..207356.876 rows=100 loops=1)\n -> Index Scan using activity_pk on activity (cost=0.00..40717259.91 rows=6538650 width=8) (actual time=207356.050..207356.722 rows=100 loops=1)\n Filter: ((state = 10000) OR (state = 10001))\nTotal runtime: 207357.000 ms\n\nNotice that the total cost of the LIMIT node is estimated as far less\nthan the total cost of the IndexScan node. That's exactly because the\nplanner doesn't expect the indexscan to run to completion.\n\nThe problem here appears to be a non-random correlation between state\nand activity, such that the desired state values are not randomly\nscattered in the activity sequence. The planner doesn't know about that\ncorrelation and hence can't predict the poor startup time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 30 Mar 2006 09:25:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time "
},
{
"msg_contents": "Tom Lane wrote:\n> The problem here appears to be a non-random correlation between state\n> and activity, such that the desired state values are not randomly\n> scattered in the activity sequence. The planner doesn't know about\n> that correlation and hence can't predict the poor startup time.\n\nSo from when to when is the startup time (the \"x\" in \"x..y\") actually \nmeasured? When does the clock start ticking and when does it stop? \nThat is what's confusing me.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Thu, 30 Mar 2006 17:16:13 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index scan startup time"
},
{
"msg_contents": "Peter Eisentraut <[email protected]> writes:\n> So from when to when is the startup time (the \"x\" in \"x..y\") actually \n> measured? When does the clock start ticking and when does it stop? \n> That is what's confusing me.\n\nThe planner thinks of the startup time (the first estimated-cost number)\nas the time before the output scan can start, eg, time to do the sort in\na sort node. EXPLAIN ANALYZE however reports the actual time until the\nfirst output row is delivered. When you've got a filter applied to the\nnode result, as in this case, there can be a material difference between\nthe two definitions, because of the time spent scanning rows that don't\nget past the filter.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 30 Mar 2006 10:24:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time "
},
{
"msg_contents": "On Thu, 2006-03-30 at 13:59 +0200, Peter Eisentraut wrote:\n\n> Can anyone explain this:\n> \n> EXPLAIN ANALYZE select activity_id from activity where state in (10000, 10001) \n> order by activity_id limit 100;\n> \n> QUERY PLAN\n> \n> Limit (cost=0.00..622.72 rows=100 width=8) (actual \n> time=207356.054..207356.876 rows=100 loops=1)\n> -> Index Scan using activity_pk on activity (cost=0.00..40717259.91 \n> rows=6538650 width=8) (actual time=207356.050..207356.722 rows=100 loops=1)\n> Filter: ((state = 10000) OR (state = 10001))\n> Total runtime: 207357.000 ms\n> \n\n...just adding to Tom's comments:\n\nThe interesting thing about this query is it *looks* like the index is\nbeing used to retrieve the matching rows and so the startup time looks\nwrong. However the index is being used instead of a sort to satisfy the\nORDER BY, with the state clauses being applied as after-scan filters\nsince those columns aren't part of the index. So the Index Scan starts\nat the leftmost page and scans the whole index...\n\nIf the query had chosen a sort, the startup time would have been easily\nunderstandable, but there's no indication from the EXPLAIN as to why the\nIndex Scan exists. \n\nPerhaps it should be a TODO item to make the EXPLAIN say explicitly when\nan Index Scan is being used to provide sorted output?\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Thu, 30 Mar 2006 18:19:11 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index scan startup time"
}
] |
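Following Markus' suggestion in the thread above, a sketch of a partial index that matches the ORDER BY ... LIMIT pattern directly, so the scan can begin at the first qualifying row instead of filtering its way up the whole primary key. The index name is made up, and whether the 8.0 planner proves the IN predicate against the index predicate would need to be verified on the actual data.

CREATE INDEX activity_pending_id_idx ON activity (activity_id)
    WHERE state IN (10000, 10001);

SELECT activity_id FROM activity
    WHERE state IN (10000, 10001)
    ORDER BY activity_id LIMIT 100;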
[
{
"msg_contents": "Hi folks!\n\nI have just a issue again with unused indexes. I have a database with a\ncouple of tables and I have to do an sync job with them. For marking\nwhich row has to be transfered I added a new column token (integer, I\nwill need some more tokens in near future) to every table.\n\nBefore determining wich rows to mark I first had a simple\n\nupdate <table> set token=0;\n\nOkay, this uses seq scan of course. For speeding things up, I created an\npartial index on every table like this:\n\ncreate index <table>_idx_token on <table> using (token) where token=1;\n\nAfter that I run vacuum analyse to update statistics and changed my to:\n\nupdate <table> set token=0 where token=1;\n\nI think this should be able to use my index, and indeed on one table\nthis works quite fine:\n\ntransfer=> explain analyse update ku set token=0 where token=1;\n\nQUERY PLAN\n------------------------------------------------------------------------\n Index Scan using ku_idx_token on ku (cost=0.00..1.01 rows=1\nwidth=1871) (actual time=0.169..0.169 rows=0 loops=1)\n Index Cond: (token = 1)\n Total runtime: 3.816 ms\n(3 rows)\n\nBut on most of the other tables a seq scan is still used:\n\ntransfer=> explain analyse update fak6 set token=0 where token=1;\n\nQUERY PLAN\n------------------------------------------------------------------------\n Seq Scan on fak6 (cost=0.00..301618.71 rows=24217 width=1895) (actual\ntime=96987.417..127020.919 rows=24251 loops=1)\n Filter: (token = 1)\n Total runtime: 181828.281 ms\n(3 rows)\n\nSo I tried to force using an index with setting enable_seqscan to off,\nhere are the results:\n\ntransfer=> set enable_seqscan to off;\nSET\ntransfer=> explain analyse update fak6 set token=0 where token=1;\n\nQUERY PLAN\n------------------------------------------------------------------------\n Index Scan using fak6_idx_token on fak6 (cost=0.00..301697.93\nrows=24217 width=1895) (actual time=1271.273..1271.273 rows=0 loops=1)\n Index Cond: (token = 1)\n Total runtime: 1272.572 ms\n(3 rows)\n\ntransfer=> set enable_seqscan to on;\nSET\ntransfer=> explain analyse update fak6 set token=0 where token=1;\n\nQUERY PLAN\n------------------------------------------------------------------------\n Seq Scan on fak6 (cost=0.00..301618.71 rows=24217 width=1895) (actual\ntime=93903.379..93903.379 rows=0 loops=1)\n Filter: (token = 1)\n Total runtime: 93904.679 ms\n(3 rows)\n\ntransfer=> set enable_seqscan to off;\nSET\ntransfer=> explain analyse update fak6 set token=0 where token=1;\n\nQUERY PLAN\n------------------------------------------------------------------------\n Index Scan using fak6_idx_token on fak6 (cost=0.00..301697.93\nrows=24217 width=1895) (actual time=223.721..223.721 rows=0 loops=1)\n Index Cond: (token = 1)\n Total runtime: 226.851 ms\n(3 rows)\n\nNow I'm a bit confused. The costs are nearly the same if using index or\nnot - but runtime is about 70 times faster? Any idea how I can fix this\nissue - I thought a partial index would be the right way?\n\nCheers,\nJan",
"msg_date": "Fri, 31 Mar 2006 11:16:37 +0200",
"msg_from": "Jan Kesten <[email protected]>",
"msg_from_op": true,
"msg_subject": "index not used again"
},
{
"msg_contents": "On Fri, 31 Mar 2006, Jan Kesten wrote:\n\n>\n> Hi folks!\n>\n> I have just a issue again with unused indexes. I have a database with a\n> couple of tables and I have to do an sync job with them. For marking\n> which row has to be transfered I added a new column token (integer, I\n> will need some more tokens in near future) to every table.\n>\n> Before determining wich rows to mark I first had a simple\n>\n> update <table> set token=0;\n>\n> Okay, this uses seq scan of course. For speeding things up, I created an\n> partial index on every table like this:\n>\n> create index <table>_idx_token on <table> using (token) where token=1;\n>\n> After that I run vacuum analyse to update statistics and changed my to:\n>\n> update <table> set token=0 where token=1;\n>\n> I think this should be able to use my index, and indeed on one table\n> this works quite fine:\n>\n> transfer=> explain analyse update ku set token=0 where token=1;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Index Scan using ku_idx_token on ku (cost=0.00..1.01 rows=1\n> width=1871) (actual time=0.169..0.169 rows=0 loops=1)\n> Index Cond: (token = 1)\n> Total runtime: 3.816 ms\n> (3 rows)\n>\n> But on most of the other tables a seq scan is still used:\n>\n> transfer=> explain analyse update fak6 set token=0 where token=1;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Seq Scan on fak6 (cost=0.00..301618.71 rows=24217 width=1895) (actual\n> time=96987.417..127020.919 rows=24251 loops=1)\n> Filter: (token = 1)\n> Total runtime: 181828.281 ms\n> (3 rows)\n>\n> So I tried to force using an index with setting enable_seqscan to off,\n> here are the results:\n>\n> transfer=> set enable_seqscan to off;\n> SET\n> transfer=> explain analyse update fak6 set token=0 where token=1;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Index Scan using fak6_idx_token on fak6 (cost=0.00..301697.93\n> rows=24217 width=1895) (actual time=1271.273..1271.273 rows=0 loops=1)\n> Index Cond: (token = 1)\n> Total runtime: 1272.572 ms\n> (3 rows)\n\nDid you reset the table contents between these two (remember that explain\nanalyze actually runs the query)? The second appears to be changing no\nrows from the output.\n\n",
"msg_date": "Fri, 31 Mar 2006 06:44:44 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index not used again"
},
{
"msg_contents": "Stephan Szabo schrieb:\n\n> Did you reset the table contents between these two (remember that\n> explain analyze actually runs the query)? The second appears to be\n> changing no rows from the output.\n\nI for myself did not, but as there are runnig automatic jobs\nperiodically I can't tell, if one ran in the time while I was testing\n(but I guess not). At starting my tests all rows contained a zero for\nall tokens and there should be no ones at all.\n\nIn my case rows with token set to one are really rare, about one of a\nthousand rows. I looked for fast way to find therse rows.\n\nI'll try again after a successful run - not resetting the token (not\nusing analyse this time).\n\nCheers,\nJan",
"msg_date": "Sun, 02 Apr 2006 18:20:16 +0200",
"msg_from": "Jan Kesten <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index not used again"
},
{
"msg_contents": "On Sun, 2 Apr 2006, Jan Kesten wrote:\n\n> Stephan Szabo schrieb:\n>\n> > Did you reset the table contents between these two (remember that\n> > explain analyze actually runs the query)? The second appears to be\n> > changing no rows from the output.\n>\n> I for myself did not, but as there are runnig automatic jobs\n> periodically I can't tell, if one ran in the time while I was testing\n> (but I guess not). At starting my tests all rows contained a zero for\n> all tokens and there should be no ones at all.\n\nThe reason I asked is that the explain analyze output for the first query\non fak6 (using a seqscan) seemed to imply 24k rows actually matched the\ncondition and were updated, so comparisons to the later times may be\nskewed.\n\n",
"msg_date": "Sun, 2 Apr 2006 12:46:55 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index not used again"
}
] |
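One piece of bookkeeping that matters for Jan's pattern above: every mass UPDATE of token leaves the old row versions behind, so the table needs a VACUUM (and a fresh ANALYZE) before comparing the partial-index plan against the seq scan is meaningful. A sketch, using the table and index names from the thread:

UPDATE fak6 SET token = 0 WHERE token = 1;
VACUUM ANALYZE fak6;        -- reclaim the dead versions left by the update, refresh statistics
EXPLAIN ANALYZE UPDATE fak6 SET token = 0 WHERE token = 1;
-- should now find few or no rows, ideally via fak6_idx_token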
[
{
"msg_contents": "Hello,\n\nI would like to know how my application works before\nand after data from VACUUM ANALYSE is available.\n\nIs there a way to de-'vacuum analyse' a database for\ntesting purposes?\n\nThank you,\nFred\n",
"msg_date": "Fri, 31 Mar 2006 18:02:16 +0200",
"msg_from": "Frederic Back <[email protected]>",
"msg_from_op": true,
"msg_subject": "un-'vacuum analyse'"
},
{
"msg_contents": "Frederic Back <[email protected]> writes:\n> Is there a way to de-'vacuum analyse' a database for\n> testing purposes?\n\n\"DELETE FROM pg_statistic\" will get you most of the way there.\nIt doesn't get rid of the accurate relpages/reltuples entries\nin pg_class, but since CREATE INDEX also updates those counts,\nI think it's reasonable to consider that a freshly loaded database\nwould normally have correct counts.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 31 Mar 2006 11:27:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: un-'vacuum analyse' "
}
] |
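Tom's recipe above as a runnable snippet. It requires a superuser; wrapping it in a transaction (an addition here, not part of the original advice) lets the "un-analyzed" behaviour be inspected and then rolled back.

BEGIN;
DELETE FROM pg_statistic;    -- drop all per-column statistics (superuser only)
-- run EXPLAIN on the queries of interest here to see plans without statistics
ROLLBACK;                    -- or COMMIT and later ANALYZE to rebuild the statistics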
[
{
"msg_contents": "> This is a blatant thread steal... but here we go...\n> Do people have any opinions on the pgsql driver?\n\nIt's very nice.\n\n> How does it compare with the odbc in terms of performance?\n\nI haven't measured specifically, but if you're tlaking .net it should be\nbetter. It's all in managed code, so you won't pay the repeated penalty\nof switching down to unmanaged and back all the time (the\n.net-ODBC-bridge is known not to be very fast). As a bonus your program\nwill run in an environment where the CAS policy prevents native code. \nAnd I've never had any performance problems with it myself.\n\n\n> Is it fully production ready?\n\nI beleive so. I've been using it for a long time with zero problems.\nWhile I don't use many of the exotic features in it, I doubt most people\ndo ;-) Don't get scared by the claim it's in beta - IIRC there's an RC\nout any day now, and it's been stable long before 1.0. But it's always a\ngood idea to browse through the list of known bugs and see if one will\nlikely hit you...\n\n\n//Magnus\n",
"msg_date": "Sat, 1 Apr 2006 01:27:15 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Solved] Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "On 01/04/06, Magnus Hagander <[email protected]> wrote:\n> > This is a blatant thread steal... but here we go...\n> > Do people have any opinions on the pgsql driver?\n>\n> It's very nice.\n...\n\nThanks for the tips - i'll try a couple of test apps soon.\nCheers\nAntoine\n\n\n\n\n--\nThis is where I should put some witty comment.\n",
"msg_date": "Sat, 1 Apr 2006 11:54:00 +0200",
"msg_from": "Antoine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] Slow performance on Windows .NET and OleDb"
},
{
"msg_contents": "On 3/31/06, Magnus Hagander <[email protected]> wrote:\n> > This is a blatant thread steal... but here we go...\n> > Do people have any opinions on the pgsql driver?\n\n> I beleive so. I've been using it for a long time with zero problems.\n> While I don't use many of the exotic features in it, I doubt most people\n> do ;-) Don't get scared by the claim it's in beta - IIRC there's an RC\n> out any day now, and it's been stable long before 1.0. But it's always a\n> good idea to browse through the list of known bugs and see if one will\n> likely hit you...\n\nUp until a few months ago the npgsql driver was missing a few features\nthat made it easier to work with typed datasets in the IDE...I would\nuse the odbc driver to create the dataset at design time and work with\nit at run time with the npgsql driver.\n\nLately though, it seems there is no reason not use the npgsql driver.\n\nMerlin\n",
"msg_date": "Sat, 1 Apr 2006 14:23:20 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Solved] Slow performance on Windows .NET and OleDb"
}
] |
[
{
"msg_contents": "Hi all.\n\nThere are two tables:\n\ncreate table device_types (\nid int,\nname varchar\n);\nabout 1000 rows\n\ncreate table devices (\nid int,\ntype int REFERENCES device_types(id),\nname varchar,\ndata float\n);\nabout 200000 rows\n\nAnd about 1000 functions:\ncreate function device_type1(int) returns ..\ncreate function device_type2(int) returns ..\n...\ncreate function device_type1000(int) returns ..\n\n\nWhat is faster?\n\nOne trigger with 1000 ELSE IF\nif old.type=1 then \n select device_type1(old.id);\nelse if old.type=2 then\n\tselect device_type2(old.id);\n...\nelse if old.type=1000 then\n\tselect device_type1000(old.id);\nend if;\n\nOr 1000 rules\ncreate rule device_type1 AS ON update to devices \n\twhere old.type=1 \n DO select device_type1(old.id);\ncreate rule device_type2 AS ON update to devices \n where old.type=2\n DO select device_type2(old.id);\n...\ncreate rule device_type1000 AS ON update to devices \n where old.type=1000\n DO select device_type1000(old.id);\n\nthx.\n\n-- \nС уважением,\nКлючников А.С.\n",
"msg_date": "Sun, 2 Apr 2006 12:31:49 +0400",
"msg_from": "=?utf-8?B?0JrQu9GO0YfQvdC40LrQvtCyINCQLtChLg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger vs Rule"
},
{
"msg_contents": "\nOn 2 apr 2006, at 10.31, Ключников А.С. wrote:\n> What is faster?\n> One trigger with 1000 ELSE IF\n> Or 1000 rules\n\nFaster to write and easier to maintain would be to write a trigger \nfunction in pl/pgsql which executes the right function dynamically:\n\nCREATE OR REPLACE FUNCTION exec_device_type() RETURNS trigger AS $$\n\tEXECUTE \"SELECT device_type\" || OLD.type || \"(OLD.id)\";\n$$ LANGUAGE plpgsql;\n\nBest would probably be to refactor your device_typeN() functions into \none, that would take N as an argument.\n\n\nSincerely,\n\nNiklas Johansson\n\n\n\n\n",
"msg_date": "Sun, 2 Apr 2006 23:08:44 +0200",
"msg_from": "Niklas Johansson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger vs Rule"
},
{
"msg_contents": "\nOn 2 apr 2006, at 23.08, Niklas Johansson wrote:\n\n> CREATE OR REPLACE FUNCTION exec_device_type() RETURNS trigger AS $$\n> \tEXECUTE \"SELECT device_type\" || OLD.type || \"(OLD.id)\";\n> $$ LANGUAGE plpgsql;\n\n\nSorry, I was bitten by the bedbug there: a plpgsql function needs a \nlittle more than that to be functional :)\n\nCREATE OR REPLACE FUNCTION exec_device_type() RETURNS trigger AS $$\nBEGIN\n\tEXECUTE 'SELECT device_type' || OLD.type || '(OLD.id)';\n\tRETURN NEW/OLD/NULL; -- Depending on your application.\nEND;\n$$ LANGUAGE plpgsql;\n\nBut really, you should consider reworking your schema structure. \nHaving a thousand functions doing almost the same thing is neither \nefficient, nor maintainable.\n\n\n\nSincerely,\n\nNiklas Johansson\n\n\n\n\n",
"msg_date": "Mon, 3 Apr 2006 11:04:25 +0200",
"msg_from": "Niklas Johansson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Trigger vs Rule"
},
{
"msg_contents": "* Niklas Johansson <[email protected]> [2006-04-03 11:04:25 +0200]:\n\n> \n> On 2 apr 2006, at 23.08, Niklas Johansson wrote:\n> \n> >CREATE OR REPLACE FUNCTION exec_device_type() RETURNS trigger AS $$\n> >\tEXECUTE \"SELECT device_type\" || OLD.type || \"(OLD.id)\";\n> >$$ LANGUAGE plpgsql;\n> \n> \n> Sorry, I was bitten by the bedbug there: a plpgsql function needs a \n> little more than that to be functional :)\n> \n> CREATE OR REPLACE FUNCTION exec_device_type() RETURNS trigger AS $$\n> BEGIN\n> \tEXECUTE 'SELECT device_type' || OLD.type || '(OLD.id)';\n> \tRETURN NEW/OLD/NULL; -- Depending on your application.\n> END;\n> $$ LANGUAGE plpgsql;\n> \n> But really, you should consider reworking your schema structure. \n> Having a thousand functions doing almost the same thing is neither \n> efficient, nor maintainable.\nThings are very diferent. \nFor many types functions not needed, jast update.\n\nI.e. This is a way One trigger with ~1000 else if.\nHere was a diametral opinion. \n> \n> \n> \n> Sincerely,\n> \n> Niklas Johansson\n> \n> \n> \n> \n\n-- \nС уважением,\nКлючников А.С.\nВедущий инженер ПРП \"Аналитприбор\"\n432030 г.Ульяновск, а/я 3117\nтел./факс +7 (8422) 43-44-78\nmailto: [email protected]\n",
"msg_date": "Mon, 3 Apr 2006 13:17:36 +0400",
"msg_from": "=?utf-8?B?0JrQu9GO0YfQvdC40LrQvtCyINCQLtChLg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Trigger vs Rule"
}
] |
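The thread above never shows how Niklas' generic trigger function would be attached. A minimal sketch, assuming it should fire on updates of devices; whether BEFORE or AFTER (and which value the function should RETURN) depends on what the per-type functions actually do.

CREATE TRIGGER devices_dispatch
    BEFORE UPDATE ON devices
    FOR EACH ROW
    EXECUTE PROCEDURE exec_device_type();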
[
{
"msg_contents": "\n\"Bruno Baguette\" <[email protected]> wrote\n>\n>\n> Is there a way to log all SQL queries, with the date/time when they were \n> launched, and the cost of that query (if this is possible) in order to see \n> which queries need to be optimized ?\n>\n\nSee if log_statement, log_statement_stats parameters can help you. Also, \nEXPLAIN ANALYZE can help you more on the target query.\n\nRegards,\nQingqing \n\n\n",
"msg_date": "Sun, 2 Apr 2006 22:45:34 +0800",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Logging SQL queries to optimize them ?"
}
] |
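The settings Qingqing points at, as they would look in postgresql.conf on 8.0/8.1. The log_min_duration_statement line is an addition here (not mentioned in the reply) but is commonly paired with them for finding the queries worth optimizing.

log_statement = 'all'                 # 'none', 'ddl', 'mod' or 'all'
log_statement_stats = on              # per-statement timing/resource statistics in the log
log_min_duration_statement = 1000     # also log any statement slower than 1000 ms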
[
{
"msg_contents": "I have asked this before, but haven't noticed any response, so if there\nwere any, I appologize for asking this again...\n\nI have a function that is called by middle-tier (java trough JDBC), and\nin postgres log I can see only the execution time of that function. I\nhave no idea how long are functions insde taking time to execute.\n\nSince the function is written in plpgsql I tried to calculate the\ndurations by using now() function, but realized that within the\ntransaction now() always retunrs the same value.\n\nThe good thing is that those RAISE NOTICE calls from within my function\nare logged as they're encountered, so, with carefully placed RAISE\nNOTICE calls I could see how much time are the -inside- functions\ntaking.\n\nFor instance:\n\nCREATE FUNCTION test_outer() RETURNS void\nAS\n$$BODY$$BEGIN\n\tRAISE NOTICE 'We start here'\n\tPERFORM SELECT someInternalFunction1();\n\tRAISE NOTICE 'InternalFunction1 is done now.';\n\tPERFORM SELECT someInternalFunction2();\n\tRAISE NOTICE 'InternalFunction2 is done now.';\n\t-- ... more code here\nEND$$BODY$$\nLANGUAGE 'plpgsql'\n\nIs there any other, maybe more convinient way to measure the 'inside'\nfunction performance? I also have a problem if the outer function is\nwritten in SQL, like this, for instance:\n\nCREATE FUNCTION getSomeData(param1, param2, param3)\nRETURN SETOF someType\nAS\n$$BODY$$SELECT\n\t*\nFROM\n\tsomeTable\n\tJOIN someOtherFunction($1, $2, $3) ON someTable.col =\nsomeOtherFunction.col\nWHERE\n\tsomeCondition\n$$BODY$$\nLANGUAGE 'sql'.\n\nThank you in advance,\n\n\tMario\n-- \nMario Splivalo\nMob-Art\[email protected]\n\n\"I can do it quick, I can do it cheap, I can do it well. Pick any two.\"\n\n\n",
"msg_date": "Mon, 03 Apr 2006 11:42:19 +0200",
"msg_from": "Mario Splivalo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Measuring the execution time of functions within functions..."
},
{
"msg_contents": "Mario Splivalo wrote:\n\n> Since the function is written in plpgsql I tried to calculate the\n> durations by using now() function, but realized that within the\n> transaction now() always retunrs the same value.\n\nMaybe you can use timeofday().\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Mon, 3 Apr 2006 08:50:47 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Measuring the execution time of functions within functions..."
},
{
"msg_contents": "Mario Splivalo <[email protected]> writes:\n\n> I have asked this before, but haven't noticed any response, so if there\n> were any, I appologize for asking this again...\n> \n> I have a function that is called by middle-tier (java trough JDBC), and\n> in postgres log I can see only the execution time of that function. I\n> have no idea how long are functions insde taking time to execute.\n> \n> Since the function is written in plpgsql I tried to calculate the\n> durations by using now() function, but realized that within the\n> transaction now() always retunrs the same value.\n> \n> The good thing is that those RAISE NOTICE calls from within my function\n> are logged as they're encountered, so, with carefully placed RAISE\n> NOTICE calls I could see how much time are the -inside- functions\n> taking.\n> \n> For instance:\n> \n> CREATE FUNCTION test_outer() RETURNS void\n> AS\n> $$BODY$$BEGIN\n> \tRAISE NOTICE 'We start here'\n> \tPERFORM SELECT someInternalFunction1();\n> \tRAISE NOTICE 'InternalFunction1 is done now.';\n> \tPERFORM SELECT someInternalFunction2();\n> \tRAISE NOTICE 'InternalFunction2 is done now.';\n> \t-- ... more code here\n> END$$BODY$$\n> LANGUAGE 'plpgsql'\n> \n> Is there any other, maybe more convinient way to measure the 'inside'\n> function performance? I also have a problem if the outer function is\n> written in SQL, like this, for instance:\n\nSee the timeofday() func which returns the actual time and is not\nfrozen in the current transaction. You'll need to cast it to\ntimestamp or other if wishing to do time arithmetic deltas on it.\n\nHTH\n\n-- \n-------------------------------------------------------------------------------\nJerry Sievers 305 854-3001 (home) WWW ECommerce Consultant\n 305 321-1144 (mobile\thttp://www.JerrySievers.com/\n",
"msg_date": "03 Apr 2006 12:32:09 -0400",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Measuring the execution time of functions within functions..."
}
] |
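To make the timeofday() advice above concrete, a minimal plpgsql sketch using the hypothetical function names from Mario's example. timeofday() returns text, so it is cast to timestamp before taking differences.

CREATE OR REPLACE FUNCTION test_outer() RETURNS void AS $$
DECLARE
    t0 timestamp;
BEGIN
    t0 := timeofday()::timestamp;       -- wall-clock time, not frozen like now()
    PERFORM someInternalFunction1();
    RAISE NOTICE 'InternalFunction1 took %', timeofday()::timestamp - t0;

    t0 := timeofday()::timestamp;
    PERFORM someInternalFunction2();
    RAISE NOTICE 'InternalFunction2 took %', timeofday()::timestamp - t0;
END;
$$ LANGUAGE plpgsql;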