threads (listlengths: 1 to 275)
[ { "msg_contents": "BEGIN;\n SET LOCAL enable_seqscan = off;\n SELECT id, team_id, sum(work_units) AS work_units\n INTO TEMP email_contrib_summary\n FROM email_contrib\n WHERE project_id = :ProjectID\n GROUP by id, team_id\n ;\nCOMMIT;\n\ninserts 29000 rows...\n\nUPDATE email_contrib_summary\n SET id = sp.retire_to\n FROM stats_participant sp\n WHERE sp.id = email_contrib_summary.id\n AND sp.retire_to >= 0\n AND (sp.retire_date >= (SELECT ps.last_date FROM project_statsrun ps WHERE ps.project_id = :ProjectID)\n OR sp.retire_date IS NULL)\n;\n Nested Loop (cost=0.00..5475.20 rows=982 width=54) (actual time=25.54..2173363.11 rows=29181 loops=1)\n InitPlan\n -> Seq Scan on project_statsrun ps (cost=0.00..1.06 rows=1 width=4) (actual time=0.06..0.07 rows=1 loops=1)\n Filter: (project_id = 8)\n -> Seq Scan on email_contrib_summary (cost=0.00..20.00 rows=1000 width=46) (actual time=25.11..1263.26 rows=29753 loops=1)\n -> Index Scan using stats_participant__participantretire_id on stats_participant sp (cost=0.00..5.44 rows=1 width=8) (actual time=2.16..72.93 rows=1 loops=29753)\n Index Cond: ((sp.retire_to >= 0) AND (sp.id = \"outer\".id))\n Filter: ((retire_date >= $0) OR (retire_date IS NULL))\n Total runtime: 2174315.61 msec\n\nGAH! 45 minutes to update 29k rows! BUT, if I do\n\n Hash Join (cost=41104.03..42410.14 rows=29166 width=38) (actual time=8391.81..10925.07 rows=29181 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".id)\n InitPlan\n -> Seq Scan on project_statsrun ps (cost=0.00..1.06 rows=1 width=4) (actual time=0.05..0.06 rows=1 loops=1)\n Filter: (project_id = 8)\n -> Seq Scan on email_contrib_summary (cost=0.00..496.01 rows=29701 width=30) (actual time=0.20..387.95 rows=29753 loops=1)\n -> Hash (cost=13939.69..13939.69 rows=394217 width=8) (actual time=8390.72..8390.72 rows=0 loops=1)\n -> Seq Scan on stats_participant sp (cost=0.00..13939.69 rows=394217 width=8) (actual time=0.22..5325.38 rows=389115 loops=1)\n Filter: ((retire_to >= 0) AND ((retire_date >= $0) OR (retire_date IS NULL)))\n Total runtime: 11584.09 msec\n\n\nAhhh... soothing relief...\n\nSo, question is, would it make sense to automatically do an analyze\nafter/during a SELECT INTO? Would it be very expensive to analyze the\ndata as it's being inserted? I think it's pretty well understood that\nyou want to vacuum/vacuum analyze the entire database regularly, but\nthat obviously wouldn't help temporary tables... maybe it makes the most\nsense to automatically analyze temporary tables only. For that matter,\nsince temp tables only have one operation performed on them at a time,\nmaybe it makes sense to keep stats for them up-to-date as part of every\noperation?\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Sat, 26 Apr 2003 10:15:37 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Automatic analyze on select into" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> So, question is, would it make sense to automatically do an analyze\n> after/during a SELECT INTO?\n\nI don't think so. Very often, temp tables are small and only used for\none or two operations anyway --- so the cost of an ANALYZE wouldn't be\nrepaid. 
If ANALYZE would be useful, the user can issue one.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 26 Apr 2003 12:18:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic analyze on select into " }, { "msg_contents": "On Sat, Apr 26, 2003 at 12:18:54PM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > So, question is, would it make sense to automatically do an analyze\n> > after/during a SELECT INTO?\n> \n> I don't think so. Very often, temp tables are small and only used for\n> one or two operations anyway --- so the cost of an ANALYZE wouldn't be\n> repaid. If ANALYZE would be useful, the user can issue one.\n \nOk, I'll add notes to appropriate pages in the documentation. BTW, do\nthe notes entered into the interactive docs get rolled into the formal\ndocumentation at any point?\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Sat, 26 Apr 2003 13:16:25 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Automatic analyze on select into" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Ok, I'll add notes to appropriate pages in the documentation. BTW, do\n> the notes entered into the interactive docs get rolled into the formal\n> documentation at any point?\n\nUsually, towards the end of a release cycle, someone will run through\nthem looking for stuff worth incorporating into the next generation.\nIt's pretty informal though.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 26 Apr 2003 14:40:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Automatic analyze on select into " } ]
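A minimal sketch of the manual-ANALYZE workaround Tom describes, using the temp table from the first message (project id 8 is the value visible in the quoted plan; timings will obviously differ):

BEGIN;
SET LOCAL enable_seqscan = off;
SELECT id, team_id, sum(work_units) AS work_units
  INTO TEMP email_contrib_summary
  FROM email_contrib
 WHERE project_id = 8
 GROUP BY id, team_id;
COMMIT;

-- collect stats so later joins against the temp table see the real row count
-- instead of the default 1000-row guess
ANALYZE email_contrib_summary;

With row counts available, the planner can cost the hash join shown in the faster plan above instead of looping an index scan over stats_participant 29,753 times.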
[ { "msg_contents": "Hello,\n\n I have a database that contains a large amount of Large Objects \n(>500MB). I am using this database to store images for an e-commerce \nwebsite, so I have a simple accessor script written in perl to dump out \na blob based on a virtual 'path' stored in a table (and associated with \nthe large object's OID). This system seemed to work wonderfully until I \nput more than ~500MB of binary data into the database. \n\n Now, every time I run the accessor script (via the web OR the command \nline), the postmaster process gobbles up my CPU resources (usually >30% \nfor a single process - and it's a 1GHz processor with 1GB of RAM!), and \nthe script takes a very long time to completely dump out the data.\n\n I have the same issue with an import script that reads files from the \nhard drive and puts them into Large Objects in the database. It takes a \nvery long time to import whereas before, it would run extremely fast. \n\n Are there any known issues in PostgreSQL involving databases with a \nlot of binary data? I am using PostgreSQL v7.2.3 on a linux system.\n\nThanks,\n\n\t-Jeremy\n\n-- \n------------------------\nJeremy C. Andrus\nhttp://www.jeremya.com/\n------------------------\n\n", "msg_date": "Sun, 27 Apr 2003 22:30:23 -0400", "msg_from": "Jeremy Andrus <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql BLOB issues" }, { "msg_contents": "Jeremy Andrus <[email protected]> writes:\n> I have a database that contains a large amount of Large Objects \n> (>500MB). I am using this database to store images for an e-commerce \n> website, so I have a simple accessor script written in perl to dump out \n> a blob based on a virtual 'path' stored in a table (and associated with \n> the large object's OID). This system seemed to work wonderfully until I \n> put more than ~500MB of binary data into the database. \n\nAre you talking about 500MB in one BLOB, or 500MB total?\n\nIf the former, I can well imagine swap thrashing being a problem when\nyou try to access such a large blob.\n\nIf the latter, I can't think of any reason for total blob storage to\ncause any big performance issue. Perhaps you just haven't vacuumed\npg_largeobject in a long time?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 28 Apr 2003 01:00:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql BLOB issues " }, { "msg_contents": "On Monday 28 April 2003 01:00 am, Tom Lane wrote:\n> Are you talking about 500MB in one BLOB, or 500MB total?\n\n I meant 500MB _total_. There are over 5000 separate BLOBs.\n\n I'll ask my friendly sys-admin to vacuum pg_largeobject, and I'll let \nyou know what happens :-) In general though, how much performance is \nreally gained through regular vacuuming? \n\nThanks for your help,\n\n\t-Jeremy\n\n-- \n------------------------\nJeremy C. Andrus\nhttp://www.jeremya.com/\n------------------------\n\n", "msg_date": "Mon, 28 Apr 2003 01:33:38 -0400", "msg_from": "Jeremy Andrus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql BLOB issues" }, { "msg_contents": "> I'll ask my friendly sys-admin to vacuum pg_largeobject, and I'll let \n> you know what happens :-) In general though, how much performance is \n> really gained through regular vacuuming? \n\nSignificant. It's essential to vacuum regularly.\n\nChris\n\n", "msg_date": "Mon, 28 Apr 2003 16:19:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql BLOB issues" } ]
[ { "msg_contents": "Somebody could explain me why this query...\n\n\t SELECT *\n\t FROM articulos,eans\n\t WHERE articulos.id_iinterno=eans.id_iinterno\n\t AND eans.id_iean=345\n\nis slower than this one? (the difference is the quotes around the\nnumber....)\n\n\t SELECT *\n\t FROM articulos,eans\n\t WHERE articulos.id_iinterno=eans.id_iinterno\n\t AND eans.id_iean='345'\n\nI really now why, but I don't undestand the reason. The execution plan for\nthe first query uses\nSequential scans, and the second one uses the index, as you can see here:\n\nExecution plan for the first query:\n\n\t Nested Loop (cost=0.00..8026.85 rows=1 width=133)\n\t -> Seq Scan on eans (cost=0.00..8023.74 rows=1 width=16)\n\t -> Index Scan using articulos_pk on articulos (cost=0.00..3.10 rows=1\nwidth=117)\n\nAnd this is the second:\n\n\t Nested Loop (cost=0.00..9.12 rows=1 width=133)\n\t -> Index Scan using eans_pk on eans (cost=0.00..6.01 rows=1 width=16)\n\t -> Index Scan using articulos_pk on articulos (cost=0.00..3.10 rows=1\nwidth=117)\n\nThe field id_iean is an 8 bytes integer. Also the same for the field\nid_iinterno in both tables.\n\nThe definition of the 2 tables is this:\n\n\t CREATE TABLE \"eans\" (\n\t \"id_iean\" int8 NOT NULL,\n\t \"id_iinterno\" int8,\n\t CONSTRAINT \"eans_pk\" PRIMARY KEY (\"id_iean\")\n\t ) WITH OIDS;\n\n\t CREATE TABLE \"articulos\" (\n\t \"id_iinterno\" int8 NOT NULL,\n\t \"vsdesc_calypso\" varchar(20),\n\t \"id_iseccion\" int4,\n\t \"iprecio\" int4,\n\t \"ifamilia\" int8,\n\t \"icod_proveedor\" int4,\n\t \"vsmarca\" varchar(10),\n\t \"vsdesc_larga\" varchar(22),\n\t \"bnulo\" bool,\n\t \"bcontrol_devolucion\" bool,\n\t \"itipo_pedido\" int2,\n\t \"isurtido\" int2,\n\t \"ifuera_lineal\" int2,\n\t \"idias_caducidad\" int2,\n\t \"iuni_x_caja\" int2,\n\t \"suni_medida\" varchar(2),\n\t \"suni_pedido\" varchar(3),\n\t CONSTRAINT \"articulos_pk\" PRIMARY KEY (\"id_iinterno\")\n\t ) WITH OIDS;\n\n\nWhat I don't understand is why the quotes in the number result in a diferent\nquery execution. Somebody could help me?\n\nThank you for your help.\n\nJordi Giménez .\nAnalista Software Departamento Calypso.\nSoluciones Informáticas Para El Comercio, S.L.\njgimenez(arroba)sipec.es\n\n", "msg_date": "Mon, 28 Apr 2003 12:26:03 +0200", "msg_from": "<jgimenez@sipec_quitaesto_.es>", "msg_from_op": true, "msg_subject": "Diferent execution plan for similar query" }, { "msg_contents": "On Monday 28 April 2003 15:56, jgimenez@sipec_quitaesto_.es wrote:\n> Somebody could explain me why this query...\n>\n> \t SELECT *\n> \t FROM articulos,eans\n> \t WHERE articulos.id_iinterno=eans.id_iinterno\n> \t AND eans.id_iean=345\n>\n> is slower than this one? (the difference is the quotes around the\n> number....)\n>\n> \t SELECT *\n> \t FROM articulos,eans\n> \t WHERE articulos.id_iinterno=eans.id_iinterno\n> \t AND eans.id_iean='345'\n\nIn second case, postgresql typecasted it correctly. Even \neans.id_iean=345::int8 would have worked the same way. By default postgresql \ntreats a number as int4 while comparing and integer and float8 for a real \nnumbe. I discovered that yesterday.\n\nUntil the planner/parser gets smarter, this is going to be an FAQ..\n\n Shridhar\n\n", "msg_date": "Mon, 28 Apr 2003 16:11:33 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Diferent execution plan for similar query" }, { "msg_contents": "\nShridhar Daithankar said:\n>\n> In second case, postgresql typecasted it correctly. 
Even\n> eans.id_iean=345::int8 would have worked the same way. By default\n> postgresql\n> treats a number as int4 while comparing and integer and float8 for a real\n> numbe. I discovered that yesterday.\n>\n> Until the planner/parser gets smarter, this is going to be an FAQ..\n>\n> Shridhar\n\nIs this an nontrivial change?\nBecause if it's trivial it should be done, imho.\nI've been bitten indirectly of this, and it's not too easy to find out\nalways. I think that this is one of the most unobvious performance hickups\nthere are with postgresql.\n\nMagnus\n\n", "msg_date": "Mon, 28 Apr 2003 12:59:07 +0200 (CEST)", "msg_from": "\"Magnus Naeslund(w)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Diferent execution plan for similar query" }, { "msg_contents": "On Monday 28 April 2003 16:29, Magnus Naeslund(w) wrote:\n> Shridhar Daithankar said:\n> > In second case, postgresql typecasted it correctly. Even\n> > eans.id_iean=345::int8 would have worked the same way. By default\n> > postgresql\n> > treats a number as int4 while comparing and integer and float8 for a real\n> > numbe. I discovered that yesterday.\n> >\n> > Until the planner/parser gets smarter, this is going to be an FAQ..\n> >\n> > Shridhar\n>\n> Is this an nontrivial change?\n> Because if it's trivial it should be done, imho.\n> I've been bitten indirectly of this, and it's not too easy to find out\n> always. I think that this is one of the most unobvious performance hickups\n> there are with postgresql.\n\nI would say dig into hackers archives for the consensus(??) reached.. I don't \nremember..\n\n Shridhar\n\n", "msg_date": "Mon, 28 Apr 2003 16:30:55 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Diferent execution plan for similar query" } ]
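For reference, the two formulations from this thread that do let the planner use the index on the int8 column; the cross-type comparison against a bare integer literal is only handled automatically in later releases:

SELECT *
  FROM articulos, eans
 WHERE articulos.id_iinterno = eans.id_iinterno
   AND eans.id_iean = '345';          -- quoted literal is coerced to int8

SELECT *
  FROM articulos, eans
 WHERE articulos.id_iinterno = eans.id_iinterno
   AND eans.id_iean = 345::int8;      -- explicit cast, same effect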
[ { "msg_contents": "\nShridhar Daithankar said:\n>\n> In second case, postgresql typecasted it correctly. Even\n> eans.id_iean=345::int8 would have worked the same way. By default\n> postgresql\n> treats a number as int4 while comparing and integer and float8 for a real\n> numbe. I discovered that yesterday.\n>\n> Until the planner/parser gets smarter, this is going to be an FAQ..\n>\n> Shridhar\n\nIs this an nontrivial change?\nBecause if it's trivial it should be done, imho.\nI've been bitten indirectly of this, and it's not too easy to find out\nalways. I think that this is one of the most unobvious performance hickups\nthere are with postgresql.\n\nMagnus\n\n", "msg_date": "Mon, 28 Apr 2003 13:00:03 +0200 (CEST)", "msg_from": "\"Magnus Naeslund(w)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Diferent execution plan for similar query" }, { "msg_contents": "On Mon, Apr 28, 2003 at 01:00:03PM +0200, Magnus Naeslund(w) wrote:\n> Is this an nontrivial change?\n> Because if it's trivial it should be done, imho.\n\nIt's not trivial. If it were, it would have been done already.\n\nSee the TODO entries about this, and the many discussions about it on\n-hackers, for why it's not trivial.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 28 Apr 2003 07:12:06 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Diferent execution plan for similar query" } ]
[ { "msg_contents": "select id into temp NewRetires from stats_participant where retire_to>=1\nAND retire_date = (SELECT last_date FROM Project_statsrun WHERE\nproject_id = :ProjectID);\n\nresults in a table with 5 values...\n\nexplain analyze delete from email_rank where project_id=25 and id in\n(select id from NewRetires);\n\n Index Scan using email_rank__day_rank on email_rank\n(cost=0.00..9003741627715.16 rows=45019 width=6) (actual time=408.12..9688.37 rows=3 loops=1)\n Index Cond: (project_id = 25)\n Filter: (subplan)\n SubPlan\n -> Seq Scan on newretires (cost=100000000.00..100000020.00 rows=1000 width=4) (actual time=0.01..0.05 rows=5 loops=91834)\n Total runtime: 9689.86 msec\n\nBut, there's already an index that would fit the bill here perfectly:\n\n Table \"public.email_rank\"\n Column | Type | Modifiers \n-----------------------+---------+--------------------\n project_id | integer | not null\n id | integer | not null\n first_date | date | not null\n last_date | date | not null\n day_rank | integer | not null default 0\n day_rank_previous | integer | not null default 0\n overall_rank | integer | not null default 0\n overall_rank_previous | integer | not null default 0\n work_today | bigint | not null default 0\n work_total | bigint | not null default 0\nIndexes: email_rank_pkey primary key btree (project_id, id),\n email_rank__day_rank btree (project_id, day_rank),\n email_rank__overall_rank btree (project_id, overall_rank)\n\nWhy isn't it using email_rank_pkey instead of using day_rank then a\nfilter? The original query on sybase (see below) is essentially instant,\nbecause it's using the index of (project_id, id), so it doesn't have to\nread the whole table.\n\nstats=> select project_id,count(*) from email_rank group by project_id;\n project_id | count \n------------+--------\n 5 | 327856\n 8 | 28304\n 24 | 34622\n 25 | 91834\n 205 | 331464\n\nAlso, changing the WHERE IN to a WHERE EXISTS in the delete is\nsubstantially faster in this case (3.5 seconds as opposed to 9); it\nwould be nice if the optimizer could rewrite the query on-the-fly. I\nstarted looking into this in the first place because the original query\nwas taking 6-10 seconds, which seemed too long...\n\nOriginal query:\nDELETE FROM Email_Rank\n WHERE project_id = :ProjectID\n AND id IN (SELECT id\n FROM STATS_Participant sp\n WHERE retire_to >= 1\n AND retire_date = (SELECT last_date FROM Project_statsrun WHERE project_id = :ProjectID)\n )\n;\n\nI tried changing this to an EXISTS and it takes over a minute. So in\nthis case, the range of runtimes is ~4 seconds (building the temp table\ntakes ~0.25 seconds) to over a minute.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Mon, 28 Apr 2003 13:21:43 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to get the optimizer to use an index with multiple fields" }, { "msg_contents": "On Mon, Apr 28, 2003 at 01:21:43PM -0500, Jim C. Nasby wrote:\n> I tried changing this to an EXISTS and it takes over a minute. 
So in\n> this case, the range of runtimes is ~4 seconds (building the temp table\n> takes ~0.25 seconds) to over a minute.\n \nBTW, I forgot to mention that building the temp table only takes 0.25\nseconds if I first disable sequential scans. :/\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Mon, 28 Apr 2003 13:32:44 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get the optimizer to use an index with multiple fields" }, { "msg_contents": "On Mon, 28 Apr 2003, Jim C. Nasby wrote:\n\n> select id into temp NewRetires from stats_participant where retire_to>=1\n> AND retire_date = (SELECT last_date FROM Project_statsrun WHERE\n> project_id = :ProjectID);\n> \n> results in a table with 5 values...\n> \n> explain analyze delete from email_rank where project_id=25 and id in\n> (select id from NewRetires);\n> \n> Index Scan using email_rank__day_rank on email_rank\n> (cost=0.00..9003741627715.16 rows=45019 width=6) (actual time=408.12..9688.37 rows=3 loops=1)\n> Index Cond: (project_id = 25)\n> Filter: (subplan)\n> SubPlan\n> -> Seq Scan on newretires (cost=100000000.00..100000020.00 rows=1000 width=4) (actual time=0.01..0.05 rows=5 loops=91834)\n> Total runtime: 9689.86 msec\n> \n> But, there's already an index that would fit the bill here perfectly:\n> \n> Table \"public.email_rank\"\n> Column | Type | Modifiers \n> -----------------------+---------+--------------------\n> project_id | integer | not null\n> id | integer | not null\n> first_date | date | not null\n> last_date | date | not null\n> day_rank | integer | not null default 0\n> day_rank_previous | integer | not null default 0\n> overall_rank | integer | not null default 0\n> overall_rank_previous | integer | not null default 0\n> work_today | bigint | not null default 0\n> work_total | bigint | not null default 0\n> Indexes: email_rank_pkey primary key btree (project_id, id),\n> email_rank__day_rank btree (project_id, day_rank),\n> email_rank__overall_rank btree (project_id, overall_rank)\n> \n> Why isn't it using email_rank_pkey instead of using day_rank then a\n> filter? The original query on sybase (see below) is essentially instant,\n> because it's using the index of (project_id, id), so it doesn't have to\n> read the whole table.\n\nIt looks like the seq scan is newretires up there, from your 'id in\n(select id from NewRetires);' part of your query. I.e. the where in() has \nto be done first, and the query planner has no stats on that table, so it \nassumes a seq scan will be faster in case we need the whole thing anyway.\n\nTry adding an analyze newretires in there between the two queries.\n\nNo clue as to why it's choosing one index over the other. I don't think \nthat really matters a lot, it's the seq scan on the temp table that is \ntaking your time on this.\n\n", "msg_date": "Mon, 28 Apr 2003 13:52:57 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to get the optimizer to use an index with multiple" }, { "msg_contents": "On Mon, Apr 28, 2003 at 01:52:57PM -0600, scott.marlowe wrote:\n> On Mon, 28 Apr 2003, Jim C. 
Nasby wrote:\n> \n> > select id into temp NewRetires from stats_participant where retire_to>=1\n> > AND retire_date = (SELECT last_date FROM Project_statsrun WHERE\n> > project_id = :ProjectID);\n> > \n> > results in a table with 5 values...\n> > \n> > explain analyze delete from email_rank where project_id=25 and id in\n> > (select id from NewRetires);\n> > \n> > Index Scan using email_rank__day_rank on email_rank\n> > (cost=0.00..9003741627715.16 rows=45019 width=6) (actual time=408.12..9688.37 rows=3 loops=1)\n> > Index Cond: (project_id = 25)\n> > Filter: (subplan)\n> > SubPlan\n> > -> Seq Scan on newretires (cost=100000000.00..100000020.00 rows=1000 width=4) (actual time=0.01..0.05 rows=5 loops=91834)\n> > Total runtime: 9689.86 msec\n> > \n> > But, there's already an index that would fit the bill here perfectly:\n> > \n> > Table \"public.email_rank\"\n> > Column | Type | Modifiers \n> > -----------------------+---------+--------------------\n> > project_id | integer | not null\n> > id | integer | not null\n> > first_date | date | not null\n> > last_date | date | not null\n> > day_rank | integer | not null default 0\n> > day_rank_previous | integer | not null default 0\n> > overall_rank | integer | not null default 0\n> > overall_rank_previous | integer | not null default 0\n> > work_today | bigint | not null default 0\n> > work_total | bigint | not null default 0\n> > Indexes: email_rank_pkey primary key btree (project_id, id),\n> > email_rank__day_rank btree (project_id, day_rank),\n> > email_rank__overall_rank btree (project_id, overall_rank)\n> > \n> > Why isn't it using email_rank_pkey instead of using day_rank then a\n> > filter? The original query on sybase (see below) is essentially instant,\n> > because it's using the index of (project_id, id), so it doesn't have to\n> > read the whole table.\n> \n> It looks like the seq scan is newretires up there, from your 'id in\n> (select id from NewRetires);' part of your query. I.e. the where in() has \n> to be done first, and the query planner has no stats on that table, so it \n> assumes a seq scan will be faster in case we need the whole thing anyway.\n> \n> Try adding an analyze newretires in there between the two queries.\n> \n> No clue as to why it's choosing one index over the other. I don't think \n> that really matters a lot, it's the seq scan on the temp table that is \n> taking your time on this.\n \nThere's no index at all on the temporary table; I fully expect it to\nseqscan than. :) The issue is the choice of index on email_rank. It's\nonly going to hit at most 5 rows in email_rank (which it should be able\nto figure out based on newretires and the fact that email_rank_pkey is\nunique. I didn't show it, but I did run analyze on the temporary table\n(why it doesn't have statistics I don't know...)\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Mon, 28 Apr 2003 17:54:39 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to get the optimizer to use an index with multiple" } ]
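Pulling the two suggestions in this thread together, a sketch of the sequence that sidesteps both problems (analyze the temp table so the planner stops assuming 1000 rows, and use the EXISTS form Jim measured at roughly 3.5 seconds); :ProjectID is the psql variable from the original script:

SELECT id INTO TEMP NewRetires
  FROM stats_participant
 WHERE retire_to >= 1
   AND retire_date = (SELECT last_date FROM Project_statsrun
                       WHERE project_id = :ProjectID);

ANALYZE NewRetires;

DELETE FROM email_rank
 WHERE project_id = :ProjectID
   AND EXISTS (SELECT 1 FROM NewRetires nr WHERE nr.id = email_rank.id);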
[ { "msg_contents": "\n\nHi Can anyone tell if the case below is an acceptable\nperformance ?\n\nI have a query that returns data and creates a table\nin 3 mins approx. This query is optimised and uses appropriate\nindexes for the NOT EXISTS part.\n\nCREATE TABLE t_a as SELECT \nemail,country_code,city,title1,fname1,mname1,lname1,website,address,source,ifimporter,\nifexporter,ifservice,ifmanu,creation_date from general.email_bank_import \nwhere not exists (select * from general.profile_master where \nemail=general.email_bank_import.email) ;\nSELECT\nTime: 174637.31 ms (3 mins Approx)\n\n\n\nThe problem is when i try to INSERT the data into another table\nit takes 23 mins Apprx to inser 412331 records the same query.\n\nI am providing the various details below:\n\ntradein_clients=# INSERT INTO general.profile_master \n(email,country_code,city,title1,fname1,mname1,lname1,website,address,source,ifimporter,ifexporter,\nifservice, ifmanu,creation_date) SELECT email,country_code, \ncity,title1,fname1,mname1,lname1,website,address,source,ifimporter,ifexporter,ifservice, \nifmanu,creation_date from general.email_bank_import where not exists \n(select * from general.profile_master where \nemail=general.email_bank_import.email) ;\nINSERT 0 412331\nTime: 1409510.63 ms\n\n\nThe table destination general.profile_master in which \ndata is being inserted was already having 184424 records \nbefore the INSERT the VACUUM FULL ANALZYE VERBOSE output was:\n\ntradein_clients=# VACUUM FULL VERBOSE ANALYZE profile_master ;\nINFO: --Relation general.profile_master--\nINFO: Pages 9161: Changed 0, reaped 8139, Empty 0, New 0; Tup 184424: Vac 72, \nKeep/VTL 0/0, UnUsed 118067, MinLen 154, MaxLen 2034; Re-using: Free/Avail. \nSpace 708064/337568; EndEmpty/Avail. Pages 0/1669.\n CPU 0.17s/0.03u sec elapsed 0.21 sec.\nINFO: Index profile_master_email: Pages 8921; Tuples 184424: Deleted 72.\n CPU 0.15s/0.21u sec elapsed 0.37 sec.\nINFO: Index profile_master_profile_id_pkey: Pages 1295; Tuples 184424: \nDeleted 72.\n CPU 0.03s/0.10u sec elapsed 0.16 sec.\nINFO: Rel profile_master: Pages: 9161 --> 9161; Tuple(s) moved: 0.\n CPU 0.44s/0.98u sec elapsed 15.79 sec.\nINFO: --Relation pg_toast.pg_toast_163041602--\nINFO: Pages 31: Changed 0, reaped 1, Empty 0, New 0; Tup 187: Vac 0, Keep/VTL \n0/0, UnUsed 2, MinLen 50, MaxLen 2034; Re-using: Free/Avail. Space \n24800/24788; EndEmpty/Avail. 
Pages 0/30.\n CPU 0.00s/0.00u sec elapsed 3.04 sec.\nINFO: Index pg_toast_163041602_index: Pages 2; Tuples 187: Deleted 0.\n CPU 0.00s/0.00u sec elapsed 0.49 sec.\nINFO: Rel pg_toast_163041602: Pages: 31 --> 31; Tuple(s) moved: 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\n\nINFO: Analyzing general.profile_master\nVACUUM\nIt was already vacuumed once.\n\nIndex Info: Only two indexes were existing \n\ntradein_clients=# \\d profile_master\n Table \"general.profile_master\"\n+--------------------+------------------------+-------\n| Column | Type | \n+--------------------+------------------------+-------\n| profile_id | integer | \n| userid | integer | \n| co_name | character varying(100) | \n| address | text | \n| pincode | character varying(20) | \n| city | character varying(50) | \n| country_code | character varying(2) | \n| phone_no | character varying(100) | \n| fax_no | character varying(100) | \n| email | character varying(100) | \n| website | character varying(100) | \n| title1 | character varying(15) | \n| fname1 | character varying(200) | \n| mname1 | character varying(30) | \n| lname1 | character varying(30) | \n| desg1 | character varying(100) | \n| mobile | character varying(20) | \n| title2 | character varying(15) | \n| fname2 | character varying(30) | \n| mname2 | character varying(30) | \n| lname2 | character varying(30) | \n| desg2 | character varying(100) | \n| mobile2 | character varying(20) | \n| co_branches | character varying(100) | \n| estd | smallint | \n| staff | integer | \n| prod_exp | text | \n| prod_imp | text | \n| prod_manu | text | \n| prod_serv | text | \n| ifexporter | boolean | not null \n| ifimporter | boolean | not null \n| ifservice | boolean | not null \n| ifmanu | boolean | not null \n| bankers | character varying(255) | \n| imp_exp_code | character varying(100) | \n| memb_affil | character varying(255) | \n| std_cert | character varying(255) | \n| branch_id | integer | \n| area_id | integer | \n| annual_turn | numeric | \n| annual_currency | character varying(5) | \n| exp_turn | numeric | \n| exp_currency | character varying(5) | \n| imp_turn | numeric | \n| imp_currency | character varying(5) | \n| creation_date | integer | not null \n| profile_status | character varying(10) | \n| source | character varying(20) | not null \n| company_id | integer | \n| eyp_list_id | integer | \n| iid_list_id | integer | \n| ip_list_id | integer | \n| catalog_company_id | integer | \n| extra_attributes | boolean | not null default false \n|\n------------------------------------------------------------------------\nIndexes: profile_master_profile_id_pkey primary key btree (profile_id),\n profile_master_email btree (email)\n\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n", "msg_date": "Tue, 29 Apr 2003 12:31:09 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Is 292 inserts/sec acceptable performance ?" }, { "msg_contents": "On Tuesday 29 April 2003 12:31, Rajesh Kumar Mallah wrote:\n> Hi Can anyone tell if the case below is an acceptable\n> performance ?\n>\n> I have a query that returns data and creates a table\n> in 3 mins approx. 
This query is optimised and uses appropriate\n> indexes for the NOT EXISTS part.\n>\n> CREATE TABLE t_a as SELECT\n> email,country_code,city,title1,fname1,mname1,lname1,website,address,source,\n>ifimporter, ifexporter,ifservice,ifmanu,creation_date from \n> general.email_bank_import where not exists (select * from\n> general.profile_master where\n> email=general.email_bank_import.email) ;\n> SELECT\n> Time: 174637.31 ms (3 mins Approx)\n>\n>\n>\n> The problem is when i try to INSERT the data into another table\n> it takes 23 mins Apprx to inser 412331 records the same query.\n>\n> I am providing the various details below:\n>\n> tradein_clients=# INSERT INTO general.profile_master\n> (email,country_code,city,title1,fname1,mname1,lname1,website,address,source\n>,ifimporter,ifexporter, ifservice, ifmanu,creation_date) SELECT\n> email,country_code,\n> city,title1,fname1,mname1,lname1,website,address,source,ifimporter,ifexport\n>er,ifservice, ifmanu,creation_date from general.email_bank_import where\n> not exists (select * from general.profile_master where\n> email=general.email_bank_import.email) ;\n> INSERT 0 412331\n> Time: 1409510.63 ms\n\nI am not sure if this would help but why you have to use all the fields in not \nexists clause? How about not exists for a name or profile_id? Would it be any \nfaster\n\nI assume if there are two records with half the info same, then not exists for \n1 field with index would be significantly faster than 10 fields.\n\nHTH\n\n Shridhar\n\n", "msg_date": "Tue, 29 Apr 2003 12:55:15 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is 292 inserts/sec acceptable performance ?" }, { "msg_contents": "\nYeah even 1 feild can be given in the NOT EXISTS part.\nbUt i vaugely recally tom saying that it does not matter\nand internally its converted to \"select * form tab\" from,\n\ncorrect me if i am recalling wrong.\n\nin anycase the CREATE TABLE part is working fine ie \nin 3 mins the select and table creation is over.\n\nIs the continuously entering data slowing down the NO EXISTS\npart ? in any case that inserts are supposed to be invisible\nto the NOT EXISTS part i guess.\n\n\n\n\nregds\nmallah.\n\n\nOn Tuesday 29 Apr 2003 12:55 pm, Shridhar Daithankar wrote:\n> On Tuesday 29 April 2003 12:31, Rajesh Kumar Mallah wrote:\n> > Hi Can anyone tell if the case below is an acceptable\n> > performance ?\n> >\n> > I have a query that returns data and creates a table\n> > in 3 mins approx. 
This query is optimised and uses appropriate\n> > indexes for the NOT EXISTS part.\n> >\n> > CREATE TABLE t_a as SELECT\n> > email,country_code,city,title1,fname1,mname1,lname1,website,address,sourc\n> >e, ifimporter, ifexporter,ifservice,ifmanu,creation_date from\n> > general.email_bank_import where not exists (select * from\n> > general.profile_master where\n> > email=general.email_bank_import.email) ;\n> > SELECT\n> > Time: 174637.31 ms (3 mins Approx)\n> >\n> >\n> >\n> > The problem is when i try to INSERT the data into another table\n> > it takes 23 mins Apprx to inser 412331 records the same query.\n> >\n> > I am providing the various details below:\n> >\n> > tradein_clients=# INSERT INTO general.profile_master\n> > (email,country_code,city,title1,fname1,mname1,lname1,website,address,sour\n> >ce ,ifimporter,ifexporter, ifservice, ifmanu,creation_date) SELECT\n> > email,country_code,\n> > city,title1,fname1,mname1,lname1,website,address,source,ifimporter,ifexpo\n> >rt er,ifservice, ifmanu,creation_date from general.email_bank_import\n> > where not exists (select * from general.profile_master where\n> > email=general.email_bank_import.email) ;\n> > INSERT 0 412331\n> > Time: 1409510.63 ms\n>\n> I am not sure if this would help but why you have to use all the fields in\n> not exists clause? How about not exists for a name or profile_id? Would it\n> be any faster\n>\n> I assume if there are two records with half the info same, then not exists\n> for 1 field with index would be significantly faster than 10 fields.\n>\n> HTH\n>\n> Shridhar\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n", "msg_date": "Tue, 29 Apr 2003 16:42:24 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is 292 inserts/sec acceptable performance ?" }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> Hi Can anyone tell if the case below is an acceptable\n> performance ?\n\nNot with that info. Could we see EXPLAIN ANALYZE results for both\nthe faster and slower cases?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 29 Apr 2003 10:00:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is 292 inserts/sec acceptable performance ? " }, { "msg_contents": "\n\nit really takes that long :(\ni can post it 2morrow only when i am office .\n\n\nregds\nmallah\n\n> Rajesh Kumar Mallah <[email protected]> writes:\n>> Hi Can anyone tell if the case below is an acceptable\n>> performance ?\n>\n> Not with that info. Could we see EXPLAIN ANALYZE results for both the faster and slower cases?\n>\n> \t\t\tregards, tom lane\n\n\n\n-----------------------------------------\nGet your free web based email at trade-india.com.\n \"India's Leading B2B eMarketplace.!\"\nhttp://www.trade-india.com/\n\n", "msg_date": "Tue, 29 Apr 2003 21:18:57 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is 292 inserts/sec acceptable performance ?" 
}, { "msg_contents": "\nOoops Sorry ,\n\nActually the query finished in approx 4 mins not 23 mins.\nThat performance must have been under some crazy circumstances.\nSo the insert Rate now is 1608 inserts/sec not 292 as stated \nearlier.\n\nHere is the EXPLAIN ANALYZE anyway\n\n\ntradein_clients=# begin work;EXPLAIN analyze INSERT INTO general.profile_master (email,country_code,city,title1,fname1,mname1,lname1,website,address,source,ifimporter,ifexporter,ifservice,ifmanu,creation_date) SELECT email,country_code,city,title1,fname1,mname1,lname1,website,address,source,ifimporter,ifexporter,ifservice,ifmanu,creation_date from general.email_bank_import where not exists (select * from general.profile_master where email=general.email_bank_import.email) ; rollback;\nBEGIN\nTime: 993.07 ms\n+---------------------------------------------------------------------------------------------------------------------------------------------------------+\n| QUERY PLAN |\n+---------------------------------------------------------------------------------------------------------------------------------------------------------+\n| Hash Join (cost=8.07..2395887.30 rows=279296 width=129) (actual time=2.56..151083.30 rows=394646 loops=1) |\n| Hash Cond: (\"outer\".country = \"inner\".name) |\n| -> Seq Scan on email_bank a (cost=0.00..2390293.31 rows=279296 width=109) (actual time=0.36..41475.08 rows=394646 loops=1) |\n| Filter: (NOT (subplan)) |\n| SubPlan |\n| -> Index Scan using profile_master_email on profile_master (cost=0.00..31.66 rows=7 width=678) (actual time=0.05..0.05 rows=0 loops=558731) |\n| Index Cond: (email = $0) |\n| -> Hash (cost=7.46..7.46 rows=246 width=20) (actual time=1.11..1.11 rows=0 loops=1) |\n| -> Seq Scan on countries b (cost=0.00..7.46 rows=246 width=20) (actual time=0.06..0.73 rows=246 loops=1) |\n| Total runtime: 196874.70 msec |\n+---------------------------------------------------------------------------------------------------------------------------------------------------------+\n(10 rows)\n\nTime: 198905.62 ms\nROLLBACK\nTime: 1481.41 ms\n\n\nRegds\nmallah.\n\n\nOn Tuesday 29 Apr 2003 7:30 pm, Tom Lane wrote:\n> Rajesh Kumar Mallah <[email protected]> writes:\n> > Hi Can anyone tell if the case below is an acceptable\n> > performance ?\n>\n> Not with that info. Could we see EXPLAIN ANALYZE results for both\n> the faster and slower cases?\n>\n> \t\t\tregards, tom lane\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n", "msg_date": "Wed, 30 Apr 2003 13:09:53 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Is 292 inserts/sec acceptable performance ?" } ]
[ { "msg_contents": "Like some other recent messages, this one is about getting postgresql\nto use an index that it seems it should clearly use. (But this one has\nnothing to do with count(*)).\n\nHere is my table:\n\n Column | Type | Modifiers\n--------------+-----------------------------+-----------\n dsid | character varying(20) | not null\n recid | numeric(8,0) | not null\n trans_id | character varying(16) | not null\n status | character varying(1) |\n init_ts | timestamp without time zone |\n last_ts | timestamp without time zone |\n last_form_id | character varying(8) |\n secval | character varying(20) |\nIndexes: ds_rec1 unique btree (recid),\n ds_rec2 btree (dsid)\n\nHere is my version info:\n PostgreSQL 7.3.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2\n20020903 (Red Hat Linux 8.0 3.2-7)\n\nCurrently the table ds_record has about 250,000 records in it. Of\nthose, about 3000 have dsid = 'starz'. When I need to look up all the\nrecids with this dsid in ds_record, I have the following simple query:\n\nselect recid from ds_record where dsid = 'startz';\n\nBut it doesn't use the index ds_rec2 on dsid. Here is the explain\nanalyze output:\n\nintellis2=> explain analyze select recid from ds_record where dsid =\n'starz';\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------\n Seq Scan on ds_record (cost=0.00..6186.21 rows=3484 width=12) (actual\ntime=10.60..408.12 rows=3484 loops=1)\n Filter: (dsid = 'starz'::character varying)\n Total runtime: 410.14 msec\n(3 rows)\n\nbut if I turn off seqscan I get this:\n\nintellis2=> set enable_seqscan=off;\nSET\nintellis2=> explain analyze select recid from ds_record where dsid =\n'starz';\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------\n Index Scan using ds_rec2 on ds_record (cost=0.00..7185.47 rows=3484\nwidth=12)\n(actual time=0.17..12.94 rows=3484 loops=1)\n Index Cond: (dsid = 'starz'::character varying)\n Total runtime: 14.97 msec\n(3 rows)\n\nso it is faster by more than a factor of 25 to use the index. The\nproblem gets worse when I add a join to the table.\n\nI have tried the following:\n\nalter table ds_record alter dsid set statistics 1000;\nvacuum analyze ds_record;\ndrop index ds_rec2;\nCREATE INDEX ds_rec2 ON ds_record USING btree (dsid);\n\nBut to no avail, I get the same results. \n\nInterestingly, for queries that return fewer rows it does use the\ncorrect index. For example, dsid=\"mapbuy2\" appears about 500 times in\nds_record. Here is the explain out there (with enable_seqscan back\non):\n\nintellis2=> explain analyze select recid from ds_record where dsid =\n'mapbuy2';\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using ds_rec2 on ds_record (cost=0.00..1351.17 rows=522\nwidth=12) (actual time=0.18..4.31 rows=522 loops=1)\n Index Cond: (dsid = 'mapbuy2'::character varying)\n Total runtime: 4.68 msec\n\nTo me it seems that the threshold for doing a table scan is wrong --\nwhen the rows retrieved are about 1.25% of the table it does a scan.\n\nWhat can I do to fix this -- is there something I am missing about\nsetting statistics or some configuration variable I can change? Any\ninsights would be greatly appreciated. Thank you,\n\nRob Messer\n\n\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. 
Bingo.\nhttp://search.yahoo.com\n\n", "msg_date": "Tue, 29 Apr 2003 02:03:01 -0700 (PDT)", "msg_from": "Rob Messer <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer not using index when it should " }, { "msg_contents": "> What can I do to fix this -- is there something I am missing about\n> setting statistics or some configuration variable I can change? Any\n> insights would be greatly appreciated. Thank you,\n\nIf you look at the estimates for cost, the index scan is more expensive\nby ~1/8th. But as you've shown, it's not.\n\nYou might try adjusting the random_page_cost down to something more\nappropriate for your hardware and situation.\n\nDo testing... this may cause other queries to use an index scan when\nthey should have been doing a sequential scan. Mistakenly using an\nindex can be a much more costly error (hence the high default\nrandom_page_cost).\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "29 Apr 2003 09:04:09 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer not using index when it should" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n>> What can I do to fix this -- is there something I am missing about\n>> setting statistics or some configuration variable I can change? Any\n>> insights would be greatly appreciated. Thank you,\n\n> You might try adjusting the random_page_cost down to something more\n> appropriate for your hardware and situation.\n\nAlso, is the table physically ordered by dsid? If so, is that condition\nlikely to persist? You may be looking at a test-condition artifact\nhere --- a poor estimate for an ordered table may not mean much when\nyou get to realistic database states.\n\nI assume you've done an ANALYZE of course --- what does the pg_stats row\nfor column dsid contain?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 29 Apr 2003 10:18:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer not using index when it should " } ]
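A sketch of the experiment Rod suggests, and of the stats Tom asks about; 2 is only a trial value for random_page_cost, to be validated against the rest of the workload before it goes into postgresql.conf:

SET random_page_cost = 2;     -- default is 4; lower values make index scans look cheaper
EXPLAIN ANALYZE
SELECT recid FROM ds_record WHERE dsid = 'starz';

-- the pg_stats row Tom asks about:
SELECT null_frac, n_distinct, correlation, most_common_freqs
  FROM pg_stats
 WHERE tablename = 'ds_record' AND attname = 'dsid';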
[ { "msg_contents": "\n\nIs printing timeofday() at various points a good idea\nof profiling plpgsql functions?\n\nalso is anything wrong with following fragment ?\nRAISE INFO '' % , message here ... '' , timeofday() ;\n\n\nregds\nmallah.\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n", "msg_date": "Tue, 29 Apr 2003 16:55:01 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "profiling plpgsql functions.." }, { "msg_contents": "Rajesh Kumar Mallah <[email protected]> writes:\n> Is printing timeofday() at various points a good idea\n> of profiling plpgsql functions?\n\nSure.\n\n> also is anything wrong with following fragment ?\n> RAISE INFO '' % , message here ... '' , timeofday() ;\n\nIIRC, RAISE is pretty slovenly implemented :-( ... it will only take\nplain variable references as additional arguments. So you'll have to\ndo\n\n\t\tvar := timeofday();\n\t\tRAISE INFO ''... '', var;\n\nI believe timeofday() produces TEXT, so declare the var that way.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 29 Apr 2003 10:27:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: profiling plpgsql functions.. " }, { "msg_contents": "\nYep timeofday returns text , but is there anything else\nthat equivalant that can be differenced inside plpgsql\nso that i can print the not of secs/millisecs connsumed?\n\nhmm shud i cast timeofday to timestamp and use timestamp\narithmatic ?\n\nregds\nmallah.\n\n> Rajesh Kumar Mallah <[email protected]> writes:\n>> Is printing timeofday() at various points a good idea\n>> of profiling plpgsql functions?\n>\n> Sure.\n>\n>> also is anything wrong with following fragment ?\n>> RAISE INFO '' % , message here ... '' , timeofday() ;\n>\n> IIRC, RAISE is pretty slovenly implemented :-( ... it will only take plain variable references\n> as additional arguments. So you'll have to do\n>\n> \t\tvar := timeofday();\n> \t\tRAISE INFO ''... '', var;\n>\n> I believe timeofday() produces TEXT, so declare the var that way.\n>\n> \t\t\tregards, tom lane\n\n\n\n-----------------------------------------\nGet your free web based email at trade-india.com.\n \"India's Leading B2B eMarketplace.!\"\nhttp://www.trade-india.com/\n\n", "msg_date": "Tue, 29 Apr 2003 21:26:08 +0530 (IST)", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: profiling plpgsql functions.." }, { "msg_contents": "<[email protected]> writes:\n> hmm shud i cast timeofday to timestamp and use timestamp\n> arithmatic ?\n\nYeah. It's only historical accident that it doesn't return timestamp...\n(or better use timestamptz)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 29 Apr 2003 12:01:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: profiling plpgsql functions.. " }, { "msg_contents": "\n\nthe profiling was really helpful to track down \nan absense of an appropriate index.\n\nbelieve me or not the overall speed improvement was 50 times :))\n\nfrom the order of .4 sec to .008 secs per function call\n\nregds\nmallah.\n\n\n\nOn Tuesday 29 Apr 2003 9:31 pm, Tom Lane wrote:\n> <[email protected]> writes:\n> > hmm shud i cast timeofday to timestamp and use timestamp\n> > arithmatic ?\n>\n> Yeah. 
It's only historical accident that it doesn't return timestamp...\n> (or better use timestamptz)\n>\n> \t\t\tregards, tom lane\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n-- \nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)6152172 (221) (L) ,9811255597 (M)\n\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n", "msg_date": "Thu, 1 May 2003 20:29:16 +0530", "msg_from": "Rajesh Kumar Mallah <[email protected]>", "msg_from_op": true, "msg_subject": "Re: profiling plpgsql functions.." } ]
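Putting the pieces from this thread together, a minimal 7.3-style sketch of timing one step inside plpgsql: timeofday() is captured into variables, cast for timestamp arithmetic, and RAISE is handed a plain variable as Tom notes it requires. The body is single-quoted in the pre-8.0 style, so string literals inside it use doubled quotes, and the counted table is just a stand-in for the real work being profiled:

CREATE OR REPLACE FUNCTION profile_demo() RETURNS integer AS '
DECLARE
    t0      timestamptz;
    t1      timestamptz;
    elapsed interval;
    n       integer;
BEGIN
    t0 := timeofday()::timestamptz;
    SELECT count(*) INTO n FROM pg_class;       -- the step being profiled
    t1 := timeofday()::timestamptz;
    elapsed := t1 - t0;
    RAISE INFO ''step took %'', elapsed;        -- plain variable, as RAISE requires here
    RETURN n;
END;
' LANGUAGE plpgsql;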
[ { "msg_contents": "Dear Gurus,\n\nA nasty query and its EXPLAINs are here. Read on at your own risk :)\n\nABSTRACT:\n\nSearch for the strings \"looping index scan\" and \"loops=2310\" in this\nmessage.\n\nDETAILS:\n\nYesteray, I spent two hours to optimize a view in postgresql 7.2.1. My\nproblem was that one of the index scans executed 2358 times, which is (as\nfar as I can consider) equal to 2x1179, where 2 is the #rows in a subquery,\nand 1179 is the total #rows in the table of the index scan (\"arfolyam\").\n\nFinally I managed to put it as deep in the query as possible, to reduce the\nloops to the number of query result rows (12)\n\nHowever, when I tried the same query in 7.3.2, it first complained about a\nmissing FROM clause and a missing GROUP BY for a field. I managed to\neliminate both without affecting 7.2.1 performance, but in 7.3.2, there are\nstill those 2300+ loops of the index scan.\n\nI ask your kind help, explanation or references to documented similar cases.\n\nBelow is the query and the two explains. Please forgive the raw format.\nIf you need further info (such as table defs), I'd be glad to help.\n\nNote that the two databases are not exactly the same but very similar (they\nwhere the same 2 weeks ago, but data changes occured in both, independently)\nand I don't think these differences should affect the planner.\n\nYours,\nG.\n--\nwhile (!asleep()) sheep++;\n\n---------------------------- QUERY -----------------------------------\nSELECT *,\n kerekit_penznem (netto_ertek*(afa_szazalek/100), penznem) AS afa_ertek,\n kerekit_penznem (netto_ertek*(1+afa_szazalek/100), penznem) AS\nbrutto_ertek\nFROM (\n SELECT szamla,\n kerekit_penznem(\n elsodl_netto_ertek*COALESCE(deka,1)/COALESCE(dekb,1), penznem)\n AS netto_ertek,\n konyvelesi_tetelcsoport, afa, afa_szazalek, penznem\n FROM\n (SELECT szt.szamla,\n kerekit_penznem(sum(szt.netto_egysegar * szt.mennyiseg), penznem)\n AS elsodl_netto_ertek,\n konyvelesi_tetelcsoport(szt.szamla, szt.tetelszam)\n AS konyvelesi_tetelcsoport,\n szt.afa, afa.ertek AS afa_szazalek,\n szamla.penznem AS sz_penznem,\n szamla.teljesites AS sz_teljesites,\n arf_a.deviza_kozeparfolyam as deka,\n arf_b.deviza_kozeparfolyam as dekb,\n foo_valuta AS penznem\n FROM szamla_tetele szt\n LEFT JOIN szamla ON (szamla = szamla.az)\n LEFT JOIN afa ON (afa.az = szt.afa)\n LEFT JOIN arfolyam arf_a\n ON (arf_a.ervenyes =\n (SELECT ervenyes FROM arfolyam\n WHERE ervenyes<=szamla.teljesites AND valuta = szamla.penznem\n ORDER BY 1 DESC LIMIT 1)\n AND szamla.penznem=arf_a.valuta)\n JOIN\n (SELECT az AS foo_valuta FROM valuta) AS valuta ON (true)\n LEFT JOIN arfolyam arf_b\n ON (arf_b.valuta=foo_valuta AND\n arf_b.ervenyes =\n-- this is the looping index scan --\n (SELECT ervenyes FROM arfolyam\n WHERE ervenyes<=szamla.teljesites AND valuta = foo_valuta\n ORDER BY 1 DESC LIMIT 1)\n-- end of looping index scan --\n )\n WHERE (NOT szt.archiv) AND\n (foo_valuta = 4 or arf_b.valuta notnull\n )\n GROUP BY szt.szamla, konyvelesi_tetelcsoport, szt.afa,\n sz_penznem, sz_teljesites, afa.ertek,\n arf_a.deviza_kozeparfolyam, arf_b.deviza_kozeparfolyam, foo_valuta,\npenznem\n ) foo\n) bar\nWHERE szamla=2380;\n\n---------------------------- 7.2.1 PLAN ------------------------------\nSubquery Scan foo (cost=488.97..490.94 rows=8 width=104) (actual\ntime=94.77..109.10 rows=12 loops=1)\n-> Aggregate (cost=488.97..490.94 rows=8 width=104) (actual\ntime=89.29..92.05 rows=12 loops=1)\n -> Group (cost=488.97..490.74 rows=79 width=104) (actual\ntime=88.13..88.59 rows=12 loops=1)\n -> 
Sort (cost=488.97..488.97 rows=79 width=104) (actual time=88.09..88.13\nrows=12 loops=1)\n -> Nested Loop (cost=1.05..486.50 rows=79 width=104) (actual\ntime=28.23..86.20 rows=12 loops=1)\n -> Nested Loop (cost=1.05..150.19 rows=79 width=84) (actual\ntime=12.68..25.41 rows=12 loops=1)\n -> Nested Loop (cost=1.05..135.52 rows=13 width=80) (actual\ntime=12.60..24.80 rows=2 loops=1)\n -> Hash Join (cost=1.05..79.59 rows=13 width=60) (actual\ntime=0.55..0.80 rows=2 loops=1)\n -> Nested Loop (cost=0.00..78.31 rows=13 width=46) (actual\ntime=0.23..0.42 rows=2 loops=1)\n -> Index Scan using szml_ttl_szml on szamla_tetele szt\n(cost=0.00..3.51 rows=13 width=34) (actual time=0.11..0.16 rows=2 loops=1)\n -> Index Scan using szamla_az_key on szamla (cost=0.00..5.70 rows=1\nwidth=12) (actual time=0.07..0.09 rows=1 loops=2)\n -> Hash (cost=1.04..1.04 rows=4 width=14) (actual time=0.14..0.14\nrows=0 loops=1)\n -> Seq Scan on afa (cost=0.00..1.04 rows=4 width=14) (actual\ntime=0.08..0.11 rows=4 loops=1)\n -> Index Scan using arfolyam_ervenyes on arfolyam arf_a\n(cost=0.00..3.63 rows=3 width=20) (actual time=0.01..0.01 rows=0 loops=2)\n SubPlan\n -> Limit (cost=0.00..0.17 rows=1 width=4) (actual time=11.92..11.92\nrows=0 loops=2)\n -> Index Scan Backward using arfolyam_ervenyes on arfolyam\n(cost=0.00..13.30 rows=79 width=4) (actual time=11.90..11.90 rows=0 loops=2)\n -> Limit (cost=0.00..0.17 rows=1 width=4)\n -> Index Scan Backward using arfolyam_ervenyes on arfolyam\n(cost=0.00..13.30 rows=79 width=4)\n -> Seq Scan on valuta (cost=0.00..1.06 rows=6 width=4) (actual\ntime=0.02..0.11 rows=6 loops=2)\n -> Index Scan using arfolyam_ervenyes on arfolyam arf_b (cost=0.00..3.63\nrows=3 width=20) (actual time=0.04..0.10 rows=3 loops=12)\n SubPlan\n -> Limit (cost=0.00..0.17 rows=1 width=4) (actual time=4.35..4.39\nrows=1 loops=12)\n -> Index Scan Backward using arfolyam_ervenyes on arfolyam\n(cost=0.00..13.30 rows=79 width=4) (actual time=4.33..4.37 rows=2 loops=12)\n -> Limit (cost=0.00..0.17 rows=1 width=4)\n -> Index Scan Backward using arfolyam_ervenyes on arfolyam\n(cost=0.00..13.30 rows=79 width=4)\nTotal runtime: 111.48 msec\n\n---------------------------- 7.3.2 PLAN ------------------------------\n Subquery Scan foo (cost=14542.01..15448.46 rows=3022 width=123) (actual\ntime=2264.36..2282.17 rows=12 loops=1)\n -> Aggregate (cost=14542.01..15448.46 rows=3022 width=123) (actual\ntime=2257.70..2261.08 rows=12 loops=1)\n -> Group (cost=14542.01..15372.92 rows=30215 width=123) (actual\ntime=2256.31..2256.84 rows=12 loops=1)\n -> Sort (cost=14542.01..14617.55 rows=30215 width=123)\n(actual time=2256.27..2256.31 rows=12 loops=1)\n Sort Key: szt.szamla,\nkonyvelesi_tetelcsoport(szt.szamla, szt.tetelszam), szt.afa, szamla.penznem,\nszamla.teljesites, afa.ertek, arf_a.deviza_kozeparfolyam,\narf_b.deviza_kozeparfolyam, public.valuta.az\n -> Merge Join (cost=4755.50..11038.79 rows=30215\nwidth=123) (actual time=80.88..2254.96 rows=12 loops=1)\n Merge Cond: (\"outer\".az = \"inner\".valuta)\n Join Filter: (\"inner\".ervenyes = (subplan))\n Filter: ((\"outer\".az = 4) OR (\"inner\".valuta IS\nNOT NULL))\n -> Sort (cost=4676.19..4751.73 rows=30215\nwidth=103) (actual time=56.41..56.44 rows=12 loops=1)\n Sort Key: public.valuta.az\n -> Nested Loop (cost=433.35..1375.92\nrows=30215 width=103) (actual time=55.78..56.18 rows=12 loops=1)\n -> Merge Join (cost=433.35..469.47\nrows=30 width=99) (actual time=55.68..55.72 rows=2 loops=1)\n Merge Cond: (\"outer\".penznem =\n\"inner\".valuta)\n Join Filter: 
(\"inner\".ervenyes\n= (subplan))\n -> Sort (cost=354.05..354.12\nrows=30 width=79) (actual time=32.63..32.64 rows=2 loops=1)\n Sort Key: szamla.penznem\n -> Hash Join\n(cost=121.67..353.30 rows=30 width=79) (actual time=32.37..32.49 rows=2\nloops=1)\n Hash Cond:\n(\"outer\".afa = \"inner\".az)\n -> Merge Join\n(cost=120.62..352.09 rows=30 width=58) (actual time=32.07..32.16 rows=2\nloops=1)\n Merge Cond:\n(\"outer\".az = \"inner\".szamla)\n -> Sort\n(cost=120.62..123.85 rows=1291 width=12) (actual time=25.62..27.62 rows=1285\nloops=1)\n Sort\nKey: szamla.az\n -> Seq\nScan on szamla (cost=0.00..53.91 rows=1291 width=12) (actual\ntime=0.04..16.46 rows=1314 loops=1)\n -> Index\nScan using szamla_tetele_pkey on szamla_tetele szt (cost=0.00..218.88\nrows=30 width=46) (actual time=0.13..0.18 rows=2 loops=1)\n Index\nCond: (szamla = 2380)\n Filter:\n(NOT archiv)\n -> Hash\n(cost=1.04..1.04 rows=4 width=21) (actual time=0.10..0.10 rows=0 loops=1)\n -> Seq Scan\non afa (cost=0.00..1.04 rows=4 width=21) (actual time=0.04..0.07 rows=4\nloops=1)\n -> Sort (cost=79.30..82.19\nrows=1155 width=20) (actual time=22.93..22.93 rows=1 loops=1)\n Sort Key: arf_a.valuta\n -> Seq Scan on arfolyam\narf_a (cost=0.00..20.55 rows=1155 width=20) (actual time=0.03..9.66\nrows=1155 loops=1)\n SubPlan\n -> Limit (cost=0.00..0.16\nrows=1 width=4) (never executed)\n -> Index Scan Backward\nusing arfolyam_ervenyes on arfolyam (cost=0.00..12.17 rows=77 width=4)\n(never executed)\n Index Cond:\n(ervenyes <= $0)\n Filter: (valuta =\n$1)\n -> Seq Scan on valuta\n(cost=0.00..20.00 rows=1000 width=4) (actual time=0.02..0.06 rows=6 loops=2)\n -> Sort (cost=79.30..82.19 rows=1155 width=20)\n(actual time=21.98..27.54 rows=2309 loops=1)\n Sort Key: arf_b.valuta\n -> Seq Scan on arfolyam arf_b\n(cost=0.00..20.55 rows=1155 width=20) (actual time=0.03..9.89 rows=1155\nloops=1)\n SubPlan\n -> Limit (cost=0.00..0.16 rows=1 width=4)\n(actual time=0.88..0.91 rows=1 loops=2310)\n -> Index Scan Backward using\narfolyam_ervenyes on arfolyam (cost=0.00..12.17 rows=77 width=4) (actual\ntime=0.87..0.90 rows=2 loops=2310)\n Index Cond: (ervenyes <= $0)\n Filter: (valuta = $2)\n Total runtime: 2287.30 msec\n\n", "msg_date": "Tue, 29 Apr 2003 15:01:21 +0200", "msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query Plan far worse in 7.3.2 than 7.2.1" }, { "msg_contents": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]> writes:\n> A nasty query and its EXPLAINs are here. Read on at your own risk :)\n\nIt's pretty much unreadable because of the way your mailer folded,\nspindled, and mutilated the EXPLAIN output :-(\n\nCould you resend in a more legible format? Maybe append the explain\noutput as an attachment, if you can't get the mailer to leave its\nformatting alone otherwise.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 29 Apr 2003 10:53:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Plan far worse in 7.3.2 than 7.2.1 " }, { "msg_contents": "Sure, thanks for your interest :)\n\nhope these help.\n\nG.\n--\nwhile (!asleep()) sheep++;\n\n---------------------------- cut here ------------------------------\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nSent: Tuesday, April 29, 2003 4:53 PM\n\n\n> \"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]> writes:\n> > A nasty query and its EXPLAINs are here. 
Read on at your own risk :)\n> \n> It's pretty much unreadable because of the way your mailer folded,\n> spindled, and mutilated the EXPLAIN output :-(\n> \n> Could you resend in a more legible format? Maybe append the explain\n> output as an attachment, if you can't get the mailer to leave its\n> formatting alone otherwise.", "msg_date": "Tue, 29 Apr 2003 18:03:48 +0200", "msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Plan far worse in 7.3.2 than 7.2.1" }, { "msg_contents": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]> writes:\n> ---------------------------- 7.2.1 PLAN ---------------------------------\n> \t\t -> Seq Scan on valuta (cost=0.00..1.06 rows=6 width=4) (actual time=0.02..0.11 rows=6 loops=2)\n> \n> ---------------------------- 7.3.2 PLAN ---------------------------------\n> -> Seq Scan on valuta (cost=0.00..20.00 rows=1000 width=4) (actual time=0.02..0.06 rows=6 loops=2)\n\nAh, there's the problem. You never vacuumed or analyzed \"valuta\", so\nthe 7.3 planner didn't know it had only six rows, and chose a plan that\nwas more appropriate for a larger table. The thousand-row estimate is\nthe tipoff, because that's the default assumption when there are no\nstats.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 29 Apr 2003 19:05:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Plan far worse in 7.3.2 than 7.2.1 " }, { "msg_contents": "----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nSent: Wednesday, April 30, 2003 1:05 AM\n\n\n> \"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]> writes:\n> > ---------------------------- 7.2.1\nPLAN ---------------------------------\n> > -> Seq Scan on valuta (cost=0.00..1.06 rows=6 width=4) (actual\ntime=0.02..0.11 rows=6 loops=2)\n> >\n> > ---------------------------- 7.3.2\nPLAN ---------------------------------\n> > -> Seq Scan on valuta\n(cost=0.00..20.00 rows=1000 width=4) (actual time=0.02..0.06 rows=6 loops=2)\n>\n> Ah, there's the problem. You never vacuumed or analyzed \"valuta\", so\n> the 7.3 planner didn't know it had only six rows, and chose a plan that\n> was more appropriate for a larger table. The thousand-row estimate is\n> the tipoff, because that's the default assumption when there are no\n> stats.\n>\n> regards, tom lane\n\nThanks!\n\nVACUUM ANALYZE really worked and I learned something new.\n\nThe strange part is, that I think I issued a \"VACUUM ANALYZE;\" (that should\ndo all the tables, right?) a couple of weeks before because of another\nproblem (it didn't help that time, tho)\n\nG.\n--\nwhile (!asleep()) sheep++;\n\n---------------------------- cut here ------------------------------\n\n", "msg_date": "Wed, 30 Apr 2003 13:00:40 +0200", "msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Plan far worse in 7.3.2 than 7.2.1" }, { "msg_contents": "Friends,\n\tI've got a query that has stopped using an index scan between 7.2.1 or RH\n7.1 and 7.3.2 or RH 8.0, and I can't figure out why. 
I've come up with a\nreplacement query which is a whole lot faster, but again, I can't tell why.\n\nThe original query (condensed to remove the uninteresting bits) is:\n\nSELECT COUNT(*) FROM Border_Shop_List WHERE NOT EXISTS (SELECT Foreign_Key\nFROM Sample WHERE Foreign_Key='Quantum_' || Border_Shop_List.Assignment_ID\n|| '_' || Assignment_Year || '_' || Evaluation_ID)\n\nThis runs in 667055.79 msec\n\nThe new one is:\n\nSELECT COUNT(*) FROM Border_Shop_List WHERE 'Quantum_' ||\nBorder_Shop_List.Assignment_ID || '_' || Border_Shop_List.Assignment_Year ||\n'_' || Border_Shop_List.Evaluation_ID NOT IN (SELECT Foreign_Key FROM\nSample WHERE Foreign_Key IS NOT NULL)\n\nThis runs in 16500.83 msec (~1/40th the time)\n\n\tAgain, my immediate problem is solved, but I'm trying to understand why\nthere is such a speed difference.\n\n\tI've attached explains for the two querys in both versions.\n\n\tThe schemas for the two databases are identical. If there's more info\npeople need, just let me know.\n\nThanks,\nPeter Darley", "msg_date": "Wed, 30 Apr 2003 07:39:24 -0700", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Query Plan far worse in 7.3.2 than 7.2.1 " }, { "msg_contents": "\"Peter Darley\" <[email protected]> writes:\n> SELECT COUNT(*) FROM Border_Shop_List WHERE NOT EXISTS (SELECT Foreign_Key=\n> FROM Sample WHERE Foreign_Key=3D'Quantum_' || Border_Shop_List.Assignment_=\n> ID || '_' || Assignment_Year || '_' || Evaluation_ID)\n\nWhat's the datatype of Foreign_Key?\n\nI'm betting that it's varchar(n) or char(n). The result of the ||\nexpression is text, and so the comparison can't use a varchar index\nunless you explicitly cast it to varchar:\n\tWHERE Foreign_Key = ('Quantum_' || ... || Evaluation_ID)::varchar\n\nI think 7.2 had some kluge in it that would allow a varchar index to be\nused anyway, but we took out the kluge because it was semantically wrong\n(it would also allow use of a char(n) index in place of a text\ncomparison, which alters the semantics...)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 30 Apr 2003 10:54:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Plan far worse in 7.3.2 than 7.2.1 " }, { "msg_contents": "Tom,\n\tYou hit the nail on the head, foreign_key is a varchar(250). I'll re-write\nthe queries with explicit casts.\n\tI'm hesitant to say anything, because I'm really not in a position to\ncontribute, but... It seems like there are getting to be lots of typing\nissues (this one, 2 isn't an int8, etc.) I think that people have said that\nthings are like this to support user defined data types. I would happily\nget rid of user defined data types if it would help with the type conversion\nissues. Just my 2c, for what it's worth.\nThanks,\nPeter Darley\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Wednesday, April 30, 2003 7:54 AM\nTo: Peter Darley\nCc: [email protected]\nSubject: Re: [PERFORM] Query Plan far worse in 7.3.2 than 7.2.1\n\n\n\"Peter Darley\" <[email protected]> writes:\n> SELECT COUNT(*) FROM Border_Shop_List WHERE NOT EXISTS (SELECT\nForeign_Key=\n> FROM Sample WHERE Foreign_Key=3D'Quantum_' ||\nBorder_Shop_List.Assignment_=\n> ID || '_' || Assignment_Year || '_' || Evaluation_ID)\n\nWhat's the datatype of Foreign_Key?\n\nI'm betting that it's varchar(n) or char(n). 
The result of the ||\nexpression is text, and so the comparison can't use a varchar index\nunless you explicitly cast it to varchar:\n\tWHERE Foreign_Key = ('Quantum_' || ... || Evaluation_ID)::varchar\n\nI think 7.2 had some kluge in it that would allow a varchar index to be\nused anyway, but we took out the kluge because it was semantically wrong\n(it would also allow use of a char(n) index in place of a text\ncomparison, which alters the semantics...)\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 30 Apr 2003 08:09:12 -0700", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Plan far worse in 7.3.2 than 7.2.1 " } ]
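
A minimal sketch of the two fixes discussed in this thread, using the objects the posters mention (valuta, Border_Shop_List, Sample, Foreign_Key); the full schemas are not shown in the thread, so treat this as illustrative rather than a drop-in replacement. The points are simply that ANALYZE gives the planner real row counts instead of the default 1000-row guess, and that under 7.3 the concatenated text expression must be cast to varchar before it can match an index on a varchar(n) column.

    -- Give the planner real statistics for the small table:
    ANALYZE valuta;

    -- Tom's suggested cast, so the comparison is varchar = varchar and the
    -- index on Foreign_Key (a varchar(250)) can be used again under 7.3:
    SELECT COUNT(*) FROM Border_Shop_List
    WHERE NOT EXISTS (
        SELECT Foreign_Key FROM Sample
        WHERE Foreign_Key = ('Quantum_' || Border_Shop_List.Assignment_ID
                             || '_' || Assignment_Year || '_' || Evaluation_ID)::varchar
    );
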
[ { "msg_contents": "I access Postgresql through the ODBC driver, and always only read small\nrecordsets (never updating them) with forward cursors.\n\nThe following options are defined in ADO with which I can create a\nrecordset with:\n\nCursor types:\nadOpenForwardOnly\t\t(what I currently use)\nadOpenKeyset\nadOpenDynamic\nadOpenStatic\n\nLock types:\nadLockReadOnly\nadLockPessimistic\nadLockOptimistic\t\t(what I currently use)\nadLockBatchOptimistic\n\nDo any of these offer a performance gain over others? I used to use\nadLockReadOnly with MS-SQL which really sped things up but this doesn't\nseem to work at all under Postgresql and I've been using\nadLockOptimistic instead.\n\n\nYours Unwhettedly,\nRobert John Shepherd.\n\nEditor\nDVD REVIEWER\nThe UK's BIGGEST Online DVD Magazine\nhttp://www.dvd.reviewer.co.uk\n\nFor a copy of my Public PGP key, email: [email protected] \n\n", "msg_date": "Tue, 29 Apr 2003 14:53:18 +0100", "msg_from": "\"Robert John Shepherd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Best ODBC cursor and lock types for fastest reading?" }, { "msg_contents": "Robert,\n\n> The following options are defined in ADO with which I can create a\n> recordset with:\n<snip>\n> Do any of these offer a performance gain over others? I used to use\n> adLockReadOnly with MS-SQL which really sped things up but this doesn't\n> seem to work at all under Postgresql and I've been using\n> adLockOptimistic instead.\n\nAll of the types you list were designed around the MS SQL/MSDE server \narchitecture, and many do not apply to PostgreSQL (for example, Postgres does \nnot use read locks and does not support client-side keyset cursors as far as \nI know). I wouldn't be surprised to find out that the pgODBC driver is \nignoring most of these options as irrelevant -- you should contact the pgODBC \nproject to find out.\n\nCertainly I wouldn't expect any setting other than adLockPessimistic to have \nan effect on the speed at which you get rows from the server (Pessimistic \nwould presumably declare \"SELECT FOR UPDATE\", which would be slower). \nHowever, one or more types might be faster on the client side than the \nothers; I recommmend that you set up a test case and experiment.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Tue, 29 Apr 2003 08:38:50 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best ODBC cursor and lock types for fastest reading?" }, { "msg_contents": "> All of the types you list were designed around the MS SQL/MSDE server \n> architecture, and many do not apply to PostgreSQL (for \n> example, Postgres does not use read locks and does not support\n> client-side keyset cursors as far as I know).\n\nThanks, this backs up my feelings from some of my limited experiments\nwith them.\n\nI guess I need to keep trying to rewrite my queries to avoid nested\nloops then, as this seems to be the main performance hit I get as all\nuse indexes properly.\n\n\n\nYours Unwhettedly,\nRobert John Shepherd.\n\nEditor\nDVD REVIEWER\nThe UK's BIGGEST Online DVD Magazine\nhttp://www.dvd.reviewer.co.uk\n\nFor a copy of my Public PGP key, email: [email protected] \n\n", "msg_date": "Tue, 29 Apr 2003 16:43:26 +0100", "msg_from": "\"Robert John Shepherd\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best ODBC cursor and lock types for fastest reading?" } ]
[ { "msg_contents": "I'm doing something where I just need to know if we have more than 100\nrows in a table. Not wanting to scan the whole table, I thought I'd get\ncute...\n\nexplain select count(*)\n FROM (SELECT * FROM email_rank WHERE project_id = :ProjectID LIMIT 100) AS t1;\n QUERY PLAN \n-------------------------------------------------------------------------------------\n Aggregate (cost=111.32..111.32 rows=1 width=48)\n -> Subquery Scan t1 (cost=0.00..111.07 rows=100 width=48)\n -> Limit (cost=0.00..111.07 rows=100 width=48)\n -> Seq Scan on email_rank (cost=0.00..76017.40 rows=68439 width=48)\n Filter: (project_id = 24)\n\nThe idea is that the inner-most query would only read the first 100 rows\nit finds, then stop. Instead, if explain is to be believed (and speed\ntesting seems to indicate it's accurate), we'll read the entire table,\n*then* pick the first 100 rows. Why is that?\n\nFYI...\n\n Table \"public.email_rank\"\n Column | Type | Modifiers \n-----------------------+---------+--------------------\n project_id | integer | not null\n id | integer | not null\n first_date | date | not null\n last_date | date | not null\n day_rank | integer | not null default 0\n day_rank_previous | integer | not null default 0\n overall_rank | integer | not null default 0\n overall_rank_previous | integer | not null default 0\n work_today | bigint | not null default 0\n work_total | bigint | not null default 0\nIndexes: email_rank_pkey primary key btree (project_id, id),\n email_rank__day_rank btree (project_id, day_rank),\n email_rank__overall_rank btree (project_id, overall_rank)\n\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Wed, 30 Apr 2003 01:04:25 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why LIMIT after scanning the table?" }, { "msg_contents": "On Wed, 30 Apr 2003, Jim C. Nasby wrote:\n\n> I'm doing something where I just need to know if we have more than 100\n> rows in a table. Not wanting to scan the whole table, I thought I'd get\n> cute...\n>\n> explain select count(*)\n> FROM (SELECT * FROM email_rank WHERE project_id = :ProjectID LIMIT 100) AS t1;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Aggregate (cost=111.32..111.32 rows=1 width=48)\n> -> Subquery Scan t1 (cost=0.00..111.07 rows=100 width=48)\n> -> Limit (cost=0.00..111.07 rows=100 width=48)\n> -> Seq Scan on email_rank (cost=0.00..76017.40 rows=68439 width=48)\n> Filter: (project_id = 24)\n>\n> The idea is that the inner-most query would only read the first 100 rows\n> it finds, then stop. Instead, if explain is to be believed (and speed\n> testing seems to indicate it's accurate), we'll read the entire table,\n> *then* pick the first 100 rows. Why is that?\n\nI'd suggest looking at explain analyze rather than explain. In most cases\nI've seen what it'll actually grab is limit+1 rows (I think cvs will only\ngrab limit) in the actual rows. 
It shows you the full count for the\nsequence scan in explain, but notice that the limit cost is lower than\nthat of the sequence scan.\n\n", "msg_date": "Wed, 30 Apr 2003 07:19:41 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why LIMIT after scanning the table?" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> explain select count(*)\n> FROM (SELECT * FROM email_rank WHERE project_id = :ProjectID LIMIT 100) AS t1;\n\n> The idea is that the inner-most query would only read the first 100 rows\n> it finds, then stop. Instead, if explain is to be believed (and speed\n> testing seems to indicate it's accurate), we'll read the entire table,\n> *then* pick the first 100 rows. Why is that?\n\nYou're misreading the EXPLAIN output. Try EXPLAIN ANALYZE to see how\nmany rows really get fetched.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 30 Apr 2003 10:22:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why LIMIT after scanning the table? " }, { "msg_contents": "If you only what to know if there is more than 100 rows, why not do:\n\nif exists (\nSELECT 1 FROM email_rank WHERE project_id = :ProjectID OFFSET 100 LIMIT\n1\n)\n\n\n\"Jim C. Nasby\" wrote:\n> \n> I'm doing something where I just need to know if we have more than 100\n> rows in a table. Not wanting to scan the whole table, I thought I'd get\n> cute...\n> \n> explain select count(*)\n> FROM () AS t1;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Aggregate (cost=111.32..111.32 rows=1 width=48)\n> -> Subquery Scan t1 (cost=0.00..111.07 rows=100 width=48)\n> -> Limit (cost=0.00..111.07 rows=100 width=48)\n> -> Seq Scan on email_rank (cost=0.00..76017.40 rows=68439 width=48)\n> Filter: (project_id = 24)\n> \n> The idea is that the inner-most query would only read the first 100 rows\n> it finds, then stop. Instead, if explain is to be believed (and speed\n> testing seems to indicate it's accurate), we'll read the entire table,\n> *then* pick the first 100 rows. Why is that?\n> \n> FYI...\n> \n> Table \"public.email_rank\"\n> Column | Type | Modifiers\n> -----------------------+---------+--------------------\n> project_id | integer | not null\n> id | integer | not null\n> first_date | date | not null\n> last_date | date | not null\n> day_rank | integer | not null default 0\n> day_rank_previous | integer | not null default 0\n> overall_rank | integer | not null default 0\n> overall_rank_previous | integer | not null default 0\n> work_today | bigint | not null default 0\n> work_total | bigint | not null default 0\n> Indexes: email_rank_pkey primary key btree (project_id, id),\n> email_rank__day_rank btree (project_id, day_rank),\n> email_rank__overall_rank btree (project_id, overall_rank)\n> \n> --\n> Jim C. Nasby (aka Decibel!) [email protected]\n> Member: Triangle Fraternity, Sports Car Club of America\n> Give your computer some brain candy! www.distributed.net Team #1828\n> \n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Wed, 30 Apr 2003 12:23:11 -0400", "msg_from": "Jean-Luc Lachance <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why LIMIT after scanning the table?" } ]
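
Two short follow-ups to the points made above, against the email_rank table from the first post (24 stands in for the :ProjectID parameter). EXPLAIN ANALYZE, unlike plain EXPLAIN, reports how many rows each node actually produced, which shows the Limit node stopping early; and Jean-Luc's OFFSET variant answers "are there more than 100 rows?" without counting the whole table.

    -- Plain EXPLAIN shows the seq scan's full-table estimate, but EXPLAIN ANALYZE
    -- shows the Limit node stopping after roughly 100 rows:
    EXPLAIN ANALYZE
    SELECT count(*)
    FROM (SELECT * FROM email_rank WHERE project_id = 24 LIMIT 100) AS t1;

    -- Returns one row if at least 101 matching rows exist, and no rows otherwise,
    -- so only about a hundred matching rows ever need to be fetched:
    SELECT 1
    FROM email_rank
    WHERE project_id = 24
    LIMIT 1 OFFSET 100;
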
[ { "msg_contents": "Hi all,\n\nI have a fairly large table with a char(20) field in it which I search on\nquite a bit. The problem is that I tend to do a lot of \n\"...where field like '%-d%'\" type searches on this field.\n\nIs there any to speed up this type of search?\n\nTIA,\n\nMike Diehl.\n\n", "msg_date": "Wed, 30 Apr 2003 10:34:40 -0600", "msg_from": "\"Diehl, Jeffrey\" <[email protected]>", "msg_from_op": true, "msg_subject": "Like search performance." }, { "msg_contents": "I'm not an expert, but AFAIK locale and collation heavily affects LIKE, and\nthus, IIRC there is no index search for like, maybe except the simplest\nlocales (maybe C and/or en_US?)\n\nBut if you mean it... there is a nasty trick in the archives:\n\nhttp://archives.postgresql.org/pgsql-general/2002-08/msg00819.php\n\nReally, really nasty, but really nice at the same time.\n\nG.\n--\nwhile (!asleep()) sheep++;\n\n---------------------------- cut here ------------------------------\n----- Original Message -----\nFrom: \"Diehl, Jeffrey\" <[email protected]>\nTo: <[email protected]>; <[email protected]>\nSent: Wednesday, April 30, 2003 6:34 PM\nSubject: [PERFORM] Like search performance.\n\n\n> Hi all,\n>\n> I have a fairly large table with a char(20) field in it which I search on\n> quite a bit. The problem is that I tend to do a lot of\n> \"...where field like '%-d%'\" type searches on this field.\n>\n> Is there any to speed up this type of search?\n>\n> TIA,\n>\n> Mike Diehl.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Wed, 30 Apr 2003 18:59:52 +0200", "msg_from": "=?iso-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Like search performance." }, { "msg_contents": "Mike,\n\n> I have a fairly large table with a char(20) field in it which I search on\n> quite a bit. The problem is that I tend to do a lot of \n> \"...where field like '%-d%'\" type searches on this field.\n> \n> Is there any to speed up this type of search?\n\nYes. See the tsearch module in /contrib in your postgresql source.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 30 Apr 2003 10:33:17 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Like search performance." } ]
[ { "msg_contents": "Jeffrey,\n\nThe best thing you can do is to have the wildcard % as\nlate as possible in your search condition.\nSo do like 'd%' instead of like '%d%' if you can.\n\nRegards,\nNikolaus\n\n\nOn Wed, 30 Apr 2003 10:34:40 -0600, \"Diehl, Jeffrey\"\nwrote:\n\n> \n> Hi all,\n> \n> I have a fairly large table with a char(20) field in\nit\n> which I search on\n> quite a bit. The problem is that I tend to do a lot\nof \n> \"...where field like '%-d%'\" type searches on this\n> field.\n> \n> Is there any to speed up this type of search?\n> \n> TIA,\n> \n> Mike Diehl.\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Wed, 30 Apr 2003 10:08:28 -0700 (PDT)", "msg_from": "\"Nikolaus Dilger\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Like search performance." } ]
[ { "msg_contents": "\tIs there some way to give some postgres backends higher priority. \nHence on a very busy server \"important\" queries get done faster than less \npriority that unimportant queries. \n\tI don't think this would be too difficult to do as certainly on \nLinux the process could just be reniced and the os left to figure it out.\nof course any query that is holding up another query with locks needs to \nget done quickly.\n\tI find my self with a database thats slowed to a craw because of a \nslow batch program it not letting the gui clients the speed they require \nto be usable.\n\nPeter Childs\n\n", "msg_date": "Fri, 2 May 2003 13:55:01 +0100 (BST)", "msg_from": "Peter Childs <[email protected]>", "msg_from_op": true, "msg_subject": "Query Priority" }, { "msg_contents": "On Fri, May 02, 2003 at 01:55:01PM +0100, Peter Childs wrote:\n> \tIs there some way to give some postgres backends higher priority. \n> Hence on a very busy server \"important\" queries get done faster than less \n> priority that unimportant queries. \n\nNo.\n\n> \tI don't think this would be too difficult to do as certainly on \n> Linux the process could just be reniced and the os left to figure it out.\n> of course any query that is holding up another query with locks needs to \n> get done quickly.\n\nIt's the latter condition that causes the problem for the nice-ing\n(among other things -- there's plenty of discussion about this in the\narchives. Tom Lane gave a quite long explanation one time, but I\ncan't find it right now.)\n\n> \tI find my self with a database thats slowed to a craw because of a \n> slow batch program it not letting the gui clients the speed they require \n> to be usable.\n\nSounds like what you really need is a replica database which you can\nuse for batch reports, &c. You could do this with a small-ish box,\nbecause you'd only have one client.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 2 May 2003 10:05:44 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Priority" }, { "msg_contents": "Peter Childs <[email protected]> writes:\n> \tIs there some way to give some postgres backends higher priority. \n\nNo.\n\n> \tI don't think this would be too difficult to do as certainly on \n> Linux the process could just be reniced and the os left to figure it out.\n\nRead about \"priority inversion\" in any handy CS textbook ... renicing a\nprocess won't buy you much if the locking mechanisms don't cooperate.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 02 May 2003 10:23:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Priority " }, { "msg_contents": "Peter Childs <[email protected]> writes:\n>Is there some way to give some postgres backends higher priority. 
\n\n>I find my self with a database thats slowed to a craw because of a \n>slow batch program it not letting the gui clients the speed they require \n>to be usable.\n\n\nWell, if your backend-process is CPU bound (large in memory sorts & joins)...\n\n The Linux (and SunOS and Ultrix and most certainly others) scheduler, \n has logic to make \"interactive tasks\" more responsive automatically provides \n whatever the benefit you might expect by automatically adjusting the \n priority of such processes.\n\n If you have one backend hogging the CPU, it _will_ use its entire time\n slice, and not get a \"bonus\" to it's priority. In contrast, backends\n that do simple fast queries will _not_ use their whole time slice, so\n they look like \"interactive processes\" to the kernel and do get a bonus\n to their priority.\n\n I think that this means that a reporting system where some people are\n doing big sorts and others are doing little fast queries, the little\n fast ones actually do run at a higher priority (assuming they don't\n have to block waiting for each other).\n\n These slides:\n http://www.inf.fu-berlin.de/lehre/SS01/OS/Lectures/Lecture08.pdf\n explain how this works.\n\nOn the other hand, if your process is IO bound (large table seq_scan)...\n\n I don't think setting the scheduler priority will help your application\n much anyway.\n\n I strongly suspect which I/O scheduler you're using would have a bigger\n effect than a priority setting. There are a few to choose from.\n http://lwn.net/Articles/23411/\n http://www.cs.rice.edu/~ssiyer/r/antsched/shines.html\n I haven't tried any except the default.\n\nIf the batch process you're worrying about is I/O bound and doing lots of writes...\n ... you might want to play with one of the newer I/O schedulers in\n the newer kernel...\n\n It looks like the 2.4 I/O scheduler has a bad behavior where a process\n doing lots of writes starves other processes that are trying to do\n reads...\n http://www.ussg.iu.edu/hypermail/linux/kernel/0302.2/1370.html\n all the newer schedulers seem to improve this.\n\n\n\nPS: Did anyone try any of the newer I/O schedulers? I have a reporting\n system that gets pretty unresponsive while large loads of data are\n occurring, and was curious if those patches would be the answer...\n\n", "msg_date": "Fri, 2 May 2003 20:10:02 -0700", "msg_from": "\"Ron Mayer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Priority " } ]
[ { "msg_contents": "I have a server on a standard pc right now.\nPIII 700, 1Gig ram (SD), 40 Gig IDE, RedHat 8.0, PostgreSQL 7.3.1\n\nThe database has 3 tables that just broke 10 million tuples (yeah, i think\nim entering in to the world of real databases ;-)\nIts primarly bulk (copy) inserts and queries, rarely an update.\n\nI am looking at moving this to a P4 2.4G, 2 Gig Ram(DDR), RedHat 8,\nPostgreSQL 7.3.latest\n\nMy primary reason for posting this is to help filter through the noise, and\nget me pointed in the right direction.\n\nI realize that Im a raid on linux newbie so any suggestions are appreciated.\nIm thinking I want to put this on an IDE Raid array, probably 0+1. IDE seems\nto be cheap and effective these days.\nWhat ive been able to glean from other postings is that I should have 3\ndrives, 2 for the database w/ striping and another for the WAL.\nAm I way off base here?\nI would also appreciate raid hardware suggestions (brands, etc)\nAnd as always im not afraid to RTFM if someone can point me to the FM :-)\n\nCost seems to be quite a high priority, I'm getting pretty good at making\nsomething out of nothing for everyone :)\n\nTIA for any suggestions.\nChad\n\n", "msg_date": "Fri, 2 May 2003 12:53:45 -0600", "msg_from": "\"Chad Thompson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Fri, 2 May 2003, Chad Thompson wrote:\n\n> I have a server on a standard pc right now.\n> PIII 700, 1Gig ram (SD), 40 Gig IDE, RedHat 8.0, PostgreSQL 7.3.1\n> \n> The database has 3 tables that just broke 10 million tuples (yeah, i think\n> im entering in to the world of real databases ;-)\n> Its primarly bulk (copy) inserts and queries, rarely an update.\n> \n> I am looking at moving this to a P4 2.4G, 2 Gig Ram(DDR), RedHat 8,\n> PostgreSQL 7.3.latest\n> \n> My primary reason for posting this is to help filter through the noise, and\n> get me pointed in the right direction.\n> \n> I realize that Im a raid on linux newbie so any suggestions are appreciated.\n> Im thinking I want to put this on an IDE Raid array, probably 0+1. IDE seems\n> to be cheap and effective these days.\n> What ive been able to glean from other postings is that I should have 3\n> drives, 2 for the database w/ striping and another for the WAL.\n> Am I way off base here?\n> I would also appreciate raid hardware suggestions (brands, etc)\n> And as always im not afraid to RTFM if someone can point me to the FM :-)\n> \n> Cost seems to be quite a high priority, I'm getting pretty good at making\n> something out of nothing for everyone :)\n\nMy experience has been that with IDEs, RAID-5 is pretty good (85% the \nperformance of RAID-1 in real use) X+0 in linux kernel (2.4.7 is what I \ntested, no idea on the newer kernel versions) is no faster than X where X \nis 1 or 5. I think there are parallel issues with stacking with linux \nsoftware kernel arrays. 
That said, their performance in stock RAID1 and \nRAID5 configurations is quite good.\n\nIf your writes happen during off hours, or only account for a small \nportion of your IO then a seperate drive is not gonna win you much, it's a \nheavily written environment that will gain from that.\n\n", "msg_date": "Fri, 2 May 2003 14:04:37 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "Chad,\n\n> I realize that Im a raid on linux newbie so any suggestions are appreciated.\n> Im thinking I want to put this on an IDE Raid array, probably 0+1. IDE seems\n> to be cheap and effective these days.\n> What ive been able to glean from other postings is that I should have 3\n> drives, 2 for the database w/ striping and another for the WAL.\n\nWell, RAID 0+1 is only relevant if you have more than 2 drives. Otherwise, \nit's just RAID 1 (which is a good choice for PostgreSQL).\n\nMore disks is almost always better. Putting WAL on a seperate (non-RAID) disk \nis usually a very good idea.\n\n> I would also appreciate raid hardware suggestions (brands, etc)\n> And as always im not afraid to RTFM if someone can point me to the FM :-)\n\nUse Linux Software RAID. To get hardware RAID better than Linux Software \nRAID, you have to spend $800 or more. \n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 2 May 2003 13:10:46 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "Can WAL and the swap partition be on the same drive?\n\nThanks\nChad\n----- Original Message -----\nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Chad Thompson\" <[email protected]>; \"pgsql-performance\"\n<[email protected]>\nSent: Friday, May 02, 2003 2:10 PM\nSubject: Re: [PERFORM] Looking for a cheap upgrade (RAID)\n\n\nChad,\n\n> I realize that Im a raid on linux newbie so any suggestions are\nappreciated.\n> Im thinking I want to put this on an IDE Raid array, probably 0+1. IDE\nseems\n> to be cheap and effective these days.\n> What ive been able to glean from other postings is that I should have 3\n> drives, 2 for the database w/ striping and another for the WAL.\n\nWell, RAID 0+1 is only relevant if you have more than 2 drives. Otherwise,\nit's just RAID 1 (which is a good choice for PostgreSQL).\n\nMore disks is almost always better. Putting WAL on a seperate (non-RAID)\ndisk\nis usually a very good idea.\n\n> I would also appreciate raid hardware suggestions (brands, etc)\n> And as always im not afraid to RTFM if someone can point me to the FM :-)\n\nUse Linux Software RAID. 
To get hardware RAID better than Linux Software\nRAID, you have to spend $800 or more.\n\n\n--\n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 2 May 2003 14:53:33 -0600", "msg_from": "\"Chad Thompson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "Seeing as you'll have 2 gigs of RAM, your swap partition is likely to grow \ncob webs, so where you put it probably isn't that critical.\n\nWhat I usually do is say take 4 120 Gig drives, allocate 1 gig on each for \nswap, so you have 4 gigs swap (your swap should be larger than available \nmemory in Linux for performance reasons) and the rest of the drives split \nso that say, the first 5 or so gigs of each is used to house most of the \nOS, and the rest for another RAID array hosting the database. Since the \nroot partition can't be on RAID5, you'd have to set up either a single \ndrive or a mirror set to handle that.\n\nWith that setup, you'd have 15 Gigs for the OS, 4 gigs for swap, and about \n300 gigs for the database. The nice thing about RAID 5 is that random \nread performance for parallel load gets better as you add drives. Write \nperformance gets a little better with more drives since it's likely that \nthe drives you're writing to aren't the same ones being read. \n\nSince your swap os likely to never see much use, except for offline \nstorage of long running processes that haven't been accessed recently, \nit's probably fine to put them on the same drive, but honestly, I've not \nfound a great increase from drive configuration under IDE. With SCSI, \nrearranging can make a bigger difference, maybe it's the better buss \ndesign, i don't know for sure. Test them if you have the time now, you \nwon't get to take apart a working machine after it's up to test it. :)\n\nOn Fri, 2 May 2003, Chad Thompson wrote:\n\n> Can WAL and the swap partition be on the same drive?\n> \n> Thanks\n> Chad\n> ----- Original Message -----\n> From: \"Josh Berkus\" <[email protected]>\n> To: \"Chad Thompson\" <[email protected]>; \"pgsql-performance\"\n> <[email protected]>\n> Sent: Friday, May 02, 2003 2:10 PM\n> Subject: Re: [PERFORM] Looking for a cheap upgrade (RAID)\n> \n> \n> Chad,\n> \n> > I realize that Im a raid on linux newbie so any suggestions are\n> appreciated.\n> > Im thinking I want to put this on an IDE Raid array, probably 0+1. IDE\n> seems\n> > to be cheap and effective these days.\n> > What ive been able to glean from other postings is that I should have 3\n> > drives, 2 for the database w/ striping and another for the WAL.\n> \n> Well, RAID 0+1 is only relevant if you have more than 2 drives. Otherwise,\n> it's just RAID 1 (which is a good choice for PostgreSQL).\n> \n> More disks is almost always better. Putting WAL on a seperate (non-RAID)\n> disk\n> is usually a very good idea.\n> \n> > I would also appreciate raid hardware suggestions (brands, etc)\n> > And as always im not afraid to RTFM if someone can point me to the FM :-)\n> \n> Use Linux Software RAID. 
To get hardware RAID better than Linux Software\n> RAID, you have to spend $800 or more.\n> \n> \n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n", "msg_date": "Fri, 2 May 2003 15:20:56 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "Scott,\n\n> With that setup, you'd have 15 Gigs for the OS, 4 gigs for swap, and about \n> 300 gigs for the database. The nice thing about RAID 5 is that random \n> read performance for parallel load gets better as you add drives. Write \n> performance gets a little better with more drives since it's likely that \n> the drives you're writing to aren't the same ones being read. \n\nYeah, but I've found with relatively few drives (such as the minimum of 3) \nthat RAID 5 performance is considerably worse for writes than RAID 1 -- as \nbad as 30-40% of the speed of a raw SCSI disk. This problem goes away with \nmore disks, of course.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Fri, 2 May 2003 15:10:25 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Fri, 2 May 2003, Josh Berkus wrote:\n\n> Scott,\n> \n> > With that setup, you'd have 15 Gigs for the OS, 4 gigs for swap, and about \n> > 300 gigs for the database. The nice thing about RAID 5 is that random \n> > read performance for parallel load gets better as you add drives. Write \n> > performance gets a little better with more drives since it's likely that \n> > the drives you're writing to aren't the same ones being read. \n> \n> Yeah, but I've found with relatively few drives (such as the minimum of 3) \n> that RAID 5 performance is considerably worse for writes than RAID 1 -- as \n> bad as 30-40% of the speed of a raw SCSI disk. This problem goes away with \n> more disks, of course.\n\nYeah, My RAID test box is an old dual PPro 200 with 6 to 8 2 gig drives in \nit and on two seperate scsi channels. It's truly amazing how much better \nRAID5 is when you get that many drives together. OF course, RAID 0 on \nthat setup really flies. :-0\n\nI'd have to say if you're only gonna need 50 or so gigs max, then a RAID1 \nis much easier to configure, and with a hot spare is very reliable.\n\n", "msg_date": "Fri, 2 May 2003 16:18:21 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Friday 02 May 2003 16:10, Josh Berkus wrote:\n> More disks is almost always better. Putting WAL on a seperate (non-RAID)\n> disk is usually a very good idea.\n\n From a performance POV perhaps. The subject came up on hackers recently and \nit was pointed out that if you use RAID for reliability and redundancy rather \nthan for performance, you need to keep the WAL files on the RAID too.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n\n", "msg_date": "Sat, 3 May 2003 03:57:54 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Saturday 03 May 2003 02:50, scott.marlowe wrote:\n> Seeing as you'll have 2 gigs of RAM, your swap partition is likely to grow\n> cob webs, so where you put it probably isn't that critical.\n>\n> What I usually do is say take 4 120 Gig drives, allocate 1 gig on each for\n> swap, so you have 4 gigs swap (your swap should be larger than available\n> memory in Linux for performance reasons) and the rest of the drives split\n> so that say, the first 5 or so gigs of each is used to house most of the\n> OS, and the rest for another RAID array hosting the database. Since the\n> root partition can't be on RAID5, you'd have to set up either a single\n> drive or a mirror set to handle that.\n\nSetting swap in linux is a tricky proposition. If there is no swap at all, \nlinux has behaved crazily in past. These days situation is much better.\n\nIn my experience with single IDE disk, if swap usage goes above 20-30MB due to \nshortage of memory, machine is dead in waters. Linux sometimes does memory \ninversion where swap used is half the free memory but swap is not freed but \nthat does not hurt really..\n\nSo my advice is, setting swap more tahn 128MB is waste of disk space. OK 256 \nin ultra-extreme situations.. but more than that would a be unadvisable \nsituation..\n\n Shridhar\n\n", "msg_date": "Sat, 3 May 2003 13:32:49 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Saturday 03 May 2003 13:27, D'Arcy J.M. Cain wrote:\n> On Friday 02 May 2003 16:10, Josh Berkus wrote:\n> > More disks is almost always better. Putting WAL on a seperate (non-RAID)\n> > disk is usually a very good idea.\n>\n> From a performance POV perhaps. The subject came up on hackers recently\n> and it was pointed out that if you use RAID for reliability and redundancy\n> rather than for performance, you need to keep the WAL files on the RAID\n> too.\n\nbut for performance reason, that RAID can be separate from the data RAID..:-)\n\n Shridhar\n\n-- \n\"Gee, Toto, I don't think we are in Kansas anymore.\"\n\n", "msg_date": "Sat, 3 May 2003 13:45:40 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Fri, 2003-05-02 at 13:53, Chad Thompson wrote:\n> I have a server on a standard pc right now.\n> PIII 700, 1Gig ram (SD), 40 Gig IDE, RedHat 8.0, PostgreSQL 7.3.1\n> \n> The database has 3 tables that just broke 10 million tuples (yeah, i think\n> im entering in to the world of real databases ;-)\n> Its primarly bulk (copy) inserts and queries, rarely an update.\n> \n> I am looking at moving this to a P4 2.4G, 2 Gig Ram(DDR), RedHat 8,\n> PostgreSQL 7.3.latest\n[snip]\n\nHow big do you expect the database to get? \n\nIf I may be a contrarian, if under 70GB, then why not just get a 72GB\n10K RPM SCSI drive ($160) and a SCSI 160 card? OS, swap, input files,\netc, can go on a 7200RPM IDE drive.\n\nMuch fewer moving parts than RAID, so more reliable...\n\n-- \n+-----------------------------------------------------------+\n| Ron Johnson, Jr. 
Home: [email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| An ad currently being run by the NEA (the US's biggest |\n| public school TEACHERS UNION) asks a teenager if he can |\n| find sodium and *chloride* in the periodic table of the |\n| elements. |\n| And they wonder why people think public schools suck... |\n+-----------------------------------------------------------+\n\n", "msg_date": "03 May 2003 15:39:34 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On 3 May 2003, Ron Johnson wrote:\n\n> On Fri, 2003-05-02 at 13:53, Chad Thompson wrote:\n> > I have a server on a standard pc right now.\n> > PIII 700, 1Gig ram (SD), 40 Gig IDE, RedHat 8.0, PostgreSQL 7.3.1\n> > \n> > The database has 3 tables that just broke 10 million tuples (yeah, i think\n> > im entering in to the world of real databases ;-)\n> > Its primarly bulk (copy) inserts and queries, rarely an update.\n> > \n> > I am looking at moving this to a P4 2.4G, 2 Gig Ram(DDR), RedHat 8,\n> > PostgreSQL 7.3.latest\n> [snip]\n> \n> How big do you expect the database to get? \n> \n> If I may be a contrarian, if under 70GB, then why not just get a 72GB\n> 10K RPM SCSI drive ($160) and a SCSI 160 card? OS, swap, input files,\n> etc, can go on a 7200RPM IDE drive.\n> \n> Much fewer moving parts than RAID, so more reliable...\n\nSorry, everything else is true, but RAID is far more reliable, even if \ndisk failure is more likely. Since a RAID array (1 or 5) can run with one \ndead disk, and supports auto-rebuild from hot spares, there's really no \nway a single disk can be more reliable. It may have fewer failures, but \nthat's not the same thing. \n\n", "msg_date": "Mon, 5 May 2003 10:31:53 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Sat, 3 May 2003, Shridhar Daithankar wrote:\n\n> On Saturday 03 May 2003 02:50, scott.marlowe wrote:\n> > Seeing as you'll have 2 gigs of RAM, your swap partition is likely to grow\n> > cob webs, so where you put it probably isn't that critical.\n> >\n> > What I usually do is say take 4 120 Gig drives, allocate 1 gig on each for\n> > swap, so you have 4 gigs swap (your swap should be larger than available\n> > memory in Linux for performance reasons) and the rest of the drives split\n> > so that say, the first 5 or so gigs of each is used to house most of the\n> > OS, and the rest for another RAID array hosting the database. Since the\n> > root partition can't be on RAID5, you'd have to set up either a single\n> > drive or a mirror set to handle that.\n> \n> Setting swap in linux is a tricky proposition. If there is no swap at all, \n> linux has behaved crazily in past. These days situation is much better.\n> \n> In my experience with single IDE disk, if swap usage goes above 20-30MB due to \n> shortage of memory, machine is dead in waters. Linux sometimes does memory \n> inversion where swap used is half the free memory but swap is not freed but \n> that does not hurt really..\n> \n> So my advice is, setting swap more tahn 128MB is waste of disk space. OK 256 \n> in ultra-extreme situations.. 
but more than that would a be unadvisable \n> situation..\n\nWhereas disks are ALL over 20 gigs now, and\nwhereas the linux kernel will begin killing processes when it runs out of \nmem and swap, and \nwhereas the linux kernel STILL has issues using swap when it's smaller \nthan memory (those problems have been lessened, but not eliminated), and \nwhereas the linux kernel will parallelize access to its swap partitions \nwhen it has more than one and they are at the same priority, providing \nbetter swap performance, and\nwhereas REAL servers always use more memory than you'd ever thought they \nwould,\n\nbe it declared here and now by me that using a small swap file is \npenny-wise and pound foolish.\n\n:-)\n\nSeriously, though, having once had a REAL bad experience on a production \nserver that I was trying to increase swap on (yes, some idiot set it up \nwith some tiny little 64 Meg swap file (yes, that idiot was me...)) I now \njust give every server a few gigs of swap from its three or four 40+ gig \ndrives. With 4 drives, and each one donating 256 Meg to the cause you can \nhave a gig of swap space.\n\n", "msg_date": "Mon, 5 May 2003 10:38:03 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Mon, 2003-05-05 at 11:31, scott.marlowe wrote:\n> On 3 May 2003, Ron Johnson wrote:\n> \n> > On Fri, 2003-05-02 at 13:53, Chad Thompson wrote:\n> > > I have a server on a standard pc right now.\n> > > PIII 700, 1Gig ram (SD), 40 Gig IDE, RedHat 8.0, PostgreSQL 7.3.1\n> > > \n> > > The database has 3 tables that just broke 10 million tuples (yeah, i think\n> > > im entering in to the world of real databases ;-)\n> > > Its primarly bulk (copy) inserts and queries, rarely an update.\n> > > \n> > > I am looking at moving this to a P4 2.4G, 2 Gig Ram(DDR), RedHat 8,\n> > > PostgreSQL 7.3.latest\n> > [snip]\n> > \n> > How big do you expect the database to get? \n> > \n> > If I may be a contrarian, if under 70GB, then why not just get a 72GB\n> > 10K RPM SCSI drive ($160) and a SCSI 160 card? OS, swap, input files,\n> > etc, can go on a 7200RPM IDE drive.\n> > \n> > Much fewer moving parts than RAID, so more reliable...\n> \n> Sorry, everything else is true, but RAID is far more reliable, even if \n> disk failure is more likely. Since a RAID array (1 or 5) can run with one \n> dead disk, and supports auto-rebuild from hot spares, there's really no \n> way a single disk can be more reliable. It may have fewer failures, but \n> that's not the same thing. \n\nWhat controller do you use for IDE hot-swapping and auto-rebuild?\n3Ware?\n\n-- \n+-----------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| An ad currently being run by the NEA (the US's biggest |\n| public school TEACHERS UNION) asks a teenager if he can |\n| find sodium and *chloride* in the periodic table of the |\n| elements. |\n| And they wonder why people think public schools suck... 
|\n+-----------------------------------------------------------+\n\n", "msg_date": "05 May 2003 16:54:54 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On 5 May 2003, Ron Johnson wrote:\n\n> On Mon, 2003-05-05 at 11:31, scott.marlowe wrote:\n> > On 3 May 2003, Ron Johnson wrote:\n> > \n> > > On Fri, 2003-05-02 at 13:53, Chad Thompson wrote:\n> > > > I have a server on a standard pc right now.\n> > > > PIII 700, 1Gig ram (SD), 40 Gig IDE, RedHat 8.0, PostgreSQL 7.3.1\n> > > > \n> > > > The database has 3 tables that just broke 10 million tuples (yeah, i think\n> > > > im entering in to the world of real databases ;-)\n> > > > Its primarly bulk (copy) inserts and queries, rarely an update.\n> > > > \n> > > > I am looking at moving this to a P4 2.4G, 2 Gig Ram(DDR), RedHat 8,\n> > > > PostgreSQL 7.3.latest\n> > > [snip]\n> > > \n> > > How big do you expect the database to get? \n> > > \n> > > If I may be a contrarian, if under 70GB, then why not just get a 72GB\n> > > 10K RPM SCSI drive ($160) and a SCSI 160 card? OS, swap, input files,\n> > > etc, can go on a 7200RPM IDE drive.\n> > > \n> > > Much fewer moving parts than RAID, so more reliable...\n> > \n> > Sorry, everything else is true, but RAID is far more reliable, even if \n> > disk failure is more likely. Since a RAID array (1 or 5) can run with one \n> > dead disk, and supports auto-rebuild from hot spares, there's really no \n> > way a single disk can be more reliable. It may have fewer failures, but \n> > that's not the same thing. \n> \n> What controller do you use for IDE hot-swapping and auto-rebuild?\n> 3Ware?\n\nLinux, and I don't do hot swapping with IDE, just hot rebuild from a \nspare drive. My servers are running SCSI, by the way, only the \nworkstations are running IDE. With the saved cost of a decent RAID \ncontroller (good SCSI controllers are still well over $500 most the time) \nI can afford enough hot spares to never have to worry about changing one \nout during the day.\n\n", "msg_date": "Mon, 5 May 2003 16:22:10 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Mon, 2003-05-05 at 17:22, scott.marlowe wrote:\n> On 5 May 2003, Ron Johnson wrote:\n> \n> > On Mon, 2003-05-05 at 11:31, scott.marlowe wrote:\n> > > On 3 May 2003, Ron Johnson wrote:\n> > > \n> > > > On Fri, 2003-05-02 at 13:53, Chad Thompson wrote:\n[snip]\n> > What controller do you use for IDE hot-swapping and auto-rebuild?\n> > 3Ware?\n> \n> Linux, and I don't do hot swapping with IDE, just hot rebuild from a \n> spare drive. My servers are running SCSI, by the way, only the \n> workstations are running IDE. With the saved cost of a decent RAID \n> controller (good SCSI controllers are still well over $500 most the time) \n> I can afford enough hot spares to never have to worry about changing one \n> out during the day.\n\nAh, I guess that drives go out infrequently enough that shutting \nit down at night for a swap-out isn't all that onerous...\n\nWhat controller model do you use?\n\n-- \n+-----------------------------------------------------------+\n| Ron Johnson, Jr. 
Home: [email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| An ad currently being run by the NEA (the US's biggest |\n| public school TEACHERS UNION) asks a teenager if he can |\n| find sodium and *chloride* in the periodic table of the |\n| elements. |\n| And they wonder why people think public schools suck... |\n+-----------------------------------------------------------+\n\n", "msg_date": "06 May 2003 06:04:06 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On 6 May 2003, Ron Johnson wrote:\n\n> On Mon, 2003-05-05 at 17:22, scott.marlowe wrote:\n> > On 5 May 2003, Ron Johnson wrote:\n> > \n> > > On Mon, 2003-05-05 at 11:31, scott.marlowe wrote:\n> > > > On 3 May 2003, Ron Johnson wrote:\n> > > > \n> > > > > On Fri, 2003-05-02 at 13:53, Chad Thompson wrote:\n> [snip]\n> > > What controller do you use for IDE hot-swapping and auto-rebuild?\n> > > 3Ware?\n> > \n> > Linux, and I don't do hot swapping with IDE, just hot rebuild from a \n> > spare drive. My servers are running SCSI, by the way, only the \n> > workstations are running IDE. With the saved cost of a decent RAID \n> > controller (good SCSI controllers are still well over $500 most the time) \n> > I can afford enough hot spares to never have to worry about changing one \n> > out during the day.\n> \n> Ah, I guess that drives go out infrequently enough that shutting \n> it down at night for a swap-out isn't all that onerous...\n> \n> What controller model do you use?\n\nMy preference is SymBIOS (LSI now) plain UW SCSI 160, but at work we use \nadaptec built in UW SCSI 160 on INTEL dual CPU motherboards. I've used \nRAID controllers in the past, but now I genuinely prefer linux's built in \nkernel level raid to most controllers, and the load on the server is <2% \nof one of the two CPUs, so it doesn't really slow anything else down. The \nperformance is quite good, I can read raw at about 48 Megs a second from a \npair of 10kRPM UWSCSI drives in a RAID1. These drives, individually can \npump out about 25 megs a second individually.\n\n", "msg_date": "Tue, 6 May 2003 12:12:51 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On Tue, 2003-05-06 at 13:12, scott.marlowe wrote:\n> On 6 May 2003, Ron Johnson wrote:\n> \n> > On Mon, 2003-05-05 at 17:22, scott.marlowe wrote:\n> > > On 5 May 2003, Ron Johnson wrote:\n> > > \n> > > > On Mon, 2003-05-05 at 11:31, scott.marlowe wrote:\n> > > > > On 3 May 2003, Ron Johnson wrote:\n> > > > > \n> > > > > > On Fri, 2003-05-02 at 13:53, Chad Thompson wrote:\n> > [snip]\n> > > > What controller do you use for IDE hot-swapping and auto-rebuild?\n> > > > 3Ware?\n> > > \n> > > Linux, and I don't do hot swapping with IDE, just hot rebuild from a \n> > > spare drive. My servers are running SCSI, by the way, only the \n> > > workstations are running IDE. 
With the saved cost of a decent RAID \n> > > controller (good SCSI controllers are still well over $500 most the time) \n> > > I can afford enough hot spares to never have to worry about changing one \n> > > out during the day.\n> > \n> > Ah, I guess that drives go out infrequently enough that shutting \n> > it down at night for a swap-out isn't all that onerous...\n> > \n> > What controller model do you use?\n> \n> My preference is SymBIOS (LSI now) plain UW SCSI 160, but at work we use \n> adaptec built in UW SCSI 160 on INTEL dual CPU motherboards. I've used \n> RAID controllers in the past, but now I genuinely prefer linux's built in \n> kernel level raid to most controllers, and the load on the server is <2% \n> of one of the two CPUs, so it doesn't really slow anything else down. The \n> performance is quite good, I can read raw at about 48 Megs a second from a \n> pair of 10kRPM UWSCSI drives in a RAID1. These drives, individually can \n> pump out about 25 megs a second individually.\n\nHmm, I'm confused (again)...\n\nI thought you liked IDE RAID, because of the price savings.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| The purpose of the military isn't to pay your college tuition |\n| or give you a little extra income; it's to \"kill people and |\n| break things\". Surprisingly, not everyone understands that. |\n+---------------------------------------------------------------+\n\n", "msg_date": "06 May 2003 14:33:15 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" }, { "msg_contents": "On 6 May 2003, Ron Johnson wrote:\n\n> On Tue, 2003-05-06 at 13:12, scott.marlowe wrote:\n> > On 6 May 2003, Ron Johnson wrote:\n> > \n> > > On Mon, 2003-05-05 at 17:22, scott.marlowe wrote:\n> > > > On 5 May 2003, Ron Johnson wrote:\n> > > > \n> > > > > On Mon, 2003-05-05 at 11:31, scott.marlowe wrote:\n> > > > > > On 3 May 2003, Ron Johnson wrote:\n> > > > > > \n> > > > > > > On Fri, 2003-05-02 at 13:53, Chad Thompson wrote:\n> > > [snip]\n> > > > > What controller do you use for IDE hot-swapping and auto-rebuild?\n> > > > > 3Ware?\n> > > > \n> > > > Linux, and I don't do hot swapping with IDE, just hot rebuild from a \n> > > > spare drive. My servers are running SCSI, by the way, only the \n> > > > workstations are running IDE. With the saved cost of a decent RAID \n> > > > controller (good SCSI controllers are still well over $500 most the time) \n> > > > I can afford enough hot spares to never have to worry about changing one \n> > > > out during the day.\n> > > \n> > > Ah, I guess that drives go out infrequently enough that shutting \n> > > it down at night for a swap-out isn't all that onerous...\n> > > \n> > > What controller model do you use?\n> > \n> > My preference is SymBIOS (LSI now) plain UW SCSI 160, but at work we use \n> > adaptec built in UW SCSI 160 on INTEL dual CPU motherboards. I've used \n> > RAID controllers in the past, but now I genuinely prefer linux's built in \n> > kernel level raid to most controllers, and the load on the server is <2% \n> > of one of the two CPUs, so it doesn't really slow anything else down. The \n> > performance is quite good, I can read raw at about 48 Megs a second from a \n> > pair of 10kRPM UWSCSI drives in a RAID1. 
These drives, individually can \n> > pump out about 25 megs a second individually.\n> \n> Hmm, I'm confused (again)...\n> \n> I thought you liked IDE RAID, because of the price savings.\n\nNo, I was saying that software RAID is what I like. IDE or SCSI. I just \nuse SCSI because it's on a server that happens to have come with some nice \nUW SCSI Drives. The discussion about the IDE RAID was about what \nsomeone else was using. I was just defending the use of it, as it is \nstill a great value for RAID arrays, and let's face it, the slowest IDE \nRAID you can build with new parts is probably still faster than the \nfastest SCSI RAID arrays from less than a decade ago. Now with Serial ATA \ncoming out, I expect a lot more servers to use it, and it looks like the \ndrives made for serial ATA will come in server class versions (tested for \nlonger life, greater heat resistance, etc...)\n\nOn my little 2xPPro200 I have 6 2 gig UltraWide 80 MB/sec SCSI drives, and \n2 80 gig DMA-33 drives, and the two 80 gig DMA-33 drives literally stomp \nthe 6 2 gigs into the ground, no matter how I configure it, except at \nheavy parallel access (i.e. pgbench -c 20 -t 1000) where the extra \nspindle/head count makes a big difference. And even then, the SCSIs are \nonly a tiny bit faster, say 10% or so.\n\n", "msg_date": "Tue, 6 May 2003 14:40:04 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for a cheap upgrade (RAID)" } ]
[ { "msg_contents": "I was woundering where could I find a nice large dataset. Perhaps 50\nthousand records or more \n-- \nAntoine <[email protected]>\n\n", "msg_date": "03 May 2003 01:52:41 -0400", "msg_from": "Antoine <[email protected]>", "msg_from_op": true, "msg_subject": "looking for large dataset" }, { "msg_contents": "That's a very small dataset :)\n\nChris\n\nOn 3 May 2003, Antoine wrote:\n\n> I was woundering where could I find a nice large dataset. Perhaps 50\n> thousand records or more\n> --\n> Antoine <[email protected]>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n", "msg_date": "Sat, 3 May 2003 18:04:22 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: looking for large dataset" }, { "msg_contents": "If you can create a flat file with some rows, it's pretty easy to \nduplicate them as many times as you need to get up to 50k (which, as \npreviously mentioned, is relatively small)\n\nThis might not work if you need \"real\" data - but I started with 67k rows \nof real data in my table, then copied them to a temp table, \nupdated the 3 key fields with previous value + max value, \nand inserted back into the original table. (Just to ensure my new rows \nhad new values for those 3 fields.)\n\nOn Sat, 3 May 2003, Christopher Kings-Lynne wrote:\n\n> That's a very small dataset :)\n> \n> Chris\n> \n> On 3 May 2003, Antoine wrote:\n> \n> > I was woundering where could I find a nice large dataset. Perhaps 50\n> > thousand records or more\n> > --\n> > Antoine <[email protected]>\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n", "msg_date": "Sat, 3 May 2003 11:13:17 -0400 (EDT)", "msg_from": "Becky Neville <[email protected]>", "msg_from_op": false, "msg_subject": "Re: looking for large dataset" }, { "msg_contents": "On 3 May 2003, Antoine wrote:\n\n> I was woundering where could I find a nice large dataset. Perhaps 50\n> thousand records or more \n\nI've attached a PHP script called mktestdb that reads in the dictionary at \n/usr/share/dict/words, and inserts a user defined number of rows into a \nuser defined number of columns.\n\nIt's ugly and simple. Just pipe the output to a text file or psql and off \nyou go.\n\nusage:\n\nmktestdb tablename [rows [cols]]\n\ndefault of 1 column and 1000 rows.\n\nIt would be easy enough to rewrite this in something more portable if \nsomeone wanted to.", "msg_date": "Mon, 5 May 2003 10:25:57 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: looking for large dataset" } ]
[ { "msg_contents": "I have been looking through the archives but can't find anything on this.\n\nDoes the use of WHERE field NOT IN ('A','B' etc) prevent the use of an \nindex?\nWould changing the query to WHERE field <> 'A' and field <> 'B' etc help?\n\nThe query only involves one table, and this is the only field in the where \nclause. Explain plan indicates a Sort and Seq Scan are being done.\n\nTHanks\n\n", "msg_date": "Sat, 3 May 2003 01:56:02 -0400 (EDT)", "msg_from": "Becky Neville <[email protected]>", "msg_from_op": true, "msg_subject": "NOT IN doesn't use index?" }, { "msg_contents": "On Sat, May 03, 2003 at 01:56:02AM -0400, Becky Neville wrote:\n> Does the use of WHERE field NOT IN ('A','B' etc) prevent the use of an \n> index?\n\nThat '&c.' is hiding a lot. Why not post your query and the explain\nanalyse output?\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 3 May 2003 10:52:12 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT IN doesn't use index?" } ]
[ { "msg_contents": "I am running my own database server but I don't have root privilege (and \nno hope of getting it.)\n\nI only have 3 tables, with rowcounts of 3000, 48000 and 2 million.\nI don't think this is that many rows but most things take a long time to \nrun. There are a lot of indexes on each table and creating an index on \nthe 2mil row table takes forever, which I could perhaps live with BUT -\n\ntyping something as dumb as \\! pwd is not instantaneous either and there \ndoesn't seem to be anyone else hogging up the CPU.\n\nI am on Linux and due to lack of space in my own account, I have PGDATA \npointing to /tmp.\n(This is for a class project to analyze query performance ...I can \nrecreate the data at any time if necessary.)\n\nAre there any parameters I can set to speed things up?\n\nThanks\nBecky\n\n", "msg_date": "Sat, 3 May 2003 13:40:28 -0400 (EDT)", "msg_from": "Becky Neville <[email protected]>", "msg_from_op": true, "msg_subject": "why is the db so slow?" }, { "msg_contents": "Becky Neville wrote:\n> Are there any parameters I can set to speed things up?\n> \n\nYou haven't given us much in the way of specifics to work with, but here \nis a short list of things to try/do:\n\n- read (amongst other things):\nhttp://www.us.postgresql.org/users-lounge/docs/7.3/postgres/performance-tips.html\nhttp://www.us.postgresql.org/users-lounge/docs/7.3/postgres/runtime-config.html\n\n- run \"VACUUM ANALYZE\" on your database\n- adjust key default configuration settings:\n shared_buffers = 1000 (or maybe 2000 or even 4000 -- above that you'd\n need root access, and it might not help anyway)\n sort_mem = 8192 (depending on the amount of RAM in the server, this\n might be too high/low, but start with something in\n the 4000 to 8000K range)\n- run \"EXPLAIN ANALYZE\" on your queries, and send in the results and the \ntable structure details to the list.\n\nHTH,\n\nJoe\n\n", "msg_date": "Sat, 03 May 2003 11:31:00 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why is the db so slow?" }, { "msg_contents": "Becky,\n\n> I am running my own database server but I don't have root privilege (and\n> no hope of getting it.)\n<snip>\n> typing something as dumb as \\! pwd is not instantaneous either and there\n> doesn't seem to be anyone else hogging up the CPU.\n\nIt sounds to me like the system has something wrong with it if \"pwd\" takes a \nwhile to respond. Even if CPU isn't in heavy use, I'd guess some other \nprocess is eating RAM or disk I/O\n\n> I am on Linux and due to lack of space in my own account, I have PGDATA\n> pointing to /tmp.\n> (This is for a class project to analyze query performance ...I can\n> recreate the data at any time if necessary.)\n\nReally? What class? I'm personally very interested to know of schools that \nare teaching PostgreSQL.\n\nHowever, if this is for school, PostgreSQL is not very efficient being run as \na seperate installation for each user. For multiuser installations, it is \nfar better to have one installation and many databases with restricted \npermissions.\n\nI also suspect that you database being in /tmp may be causing you problems; \nmany sysadmins put the /tmp partition on their slowest drive since it's \nregarded as disposable.\n\n> Are there any parameters I can set to speed things up?\n\nLots, the settings of many of which are a matter of debate. 
I suggest that \nyou browse through the online archives of this list, which will be far more \neducational than me giving you a few tips.\n\nHowever, be aware that no postgresql.conf settings, however clever, can make \nup for an overloaded system, poor disk configuration, or slow system I/O. \nAt best correct settings ameliorate poor performance.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Sat, 3 May 2003 11:32:56 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why is the db so slow?" }, { "msg_contents": "\nreplying to Becky and Josh's reply.....\n\n> It sounds to me like the system has something wrong with it if \"pwd\"\n> takes a while to respond. Even if CPU isn't in heavy use, I'd guess\n> some other process is eating RAM or disk I/O\n\nLook into the unix command 'top'. It lists processes and the amount of\nresources they are using. Although if it's another user using them it may\nnot detail them.... but I think you can get some idea of what other users\nare up to from the CPU idle time and server load averages from the 'top'\ndisplay.\n\n> However, if this is for school, PostgreSQL is not very efficient being\n> run as a seperate installation for each user. For multiuser\n> installations, it is far better to have one installation and many\n> databases with restricted permissions.\n\nI can attest to that, I run a web site using virtual hosting (about 80\nusers, each with their own version of Apache (and in my case, my own\nversion of postgreSQL, I have no idea what the other users are running).\nMy development Linux laptop is 5 to 10 times faster than the web site, of\ncourse, I'm it's ONLY user.\n\nbrew\n\n", "msg_date": "Sat, 3 May 2003 16:30:38 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: why is the db so slow?" } ]
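The advice in this thread boils down to a few concrete statements; this is only a sketch using 7.3-era parameter names, and big_table stands in for whichever query is actually slow (shared_buffers itself has to be edited in postgresql.conf and needs a server restart):

    -- refresh planner statistics and make dead row space reusable
    VACUUM ANALYZE;

    -- per-sort memory in kilobytes, per session; assumes RAM is available
    SET sort_mem = 8192;

    -- then measure the real statement
    EXPLAIN ANALYZE SELECT count(*) FROM big_table;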
[ { "msg_contents": "Here is the EXPLAIN output from the two queries. The first is the one \nthat uses WHERE field NOT IN ( 'a','b' etc ). The second is the (much \nfaster) one \nthat uses WHERE NOT (field = 'a' and field = 'b' etc).\n\nI don't understand why the query planner thinks there are only 38055 rows \nin the table on the slow one. I didn't run analyze in between them and the \nsecond try seems to know (correctly) that there are 1799976 rows.\n\nAlso, why does the first (slow) one think there are 38055 rows and only \nevaluate 48 rows - and yet it still takes longer. ? I assume it's due to \nthe lack of a sort, but I don't understand why using NOT IN should \nprohibit a sort.\n\n-------------slow one - ~9 minutes-----------------------\n/home/accts/ran26/cs437/Proj/code/scripts/sql\ntest=# \\i query3.sql\npsql:query3.sql:76: NOTICE: QUERY PLAN:\n\nSeq Scan on uabopen (cost=0.00..3305914.86 rows=38055 width=7) (actual \ntime=36577.26..494243.37 rows=48 loops=1)\nTotal runtime: 494243.67 msec\n\n--------------faster one - 2 minutes-----------------\npsql:query3Mod2.sql:77: NOTICE: QUERY PLAN:\n\nUnique (cost=3592408.28..3596908.22 rows=179998 width=7) (actual \ntime=104959.31..114131.22 rows=101 loops=1)\n -> Sort (cost=3592408.28..3592408.28 rows=1799976 width=7) (actual \ntime=104959.30..108425.61 rows=1799976 loops=1)\n -> Seq Scan on uabopen (cost=0.00..3305914.86 rows=1799976 \nwidth=7) (actual time=30.13..14430.99 rows=1799976 loops=1)\nTotal runtime: 114220.66 msec\n\n\n\n---------- Forwarded message ----------\nDate: Sat, 3 May 2003 13:09:22 -0400 (EDT)\nFrom: Becky Neville <[email protected]>\nTo: Andrew Sullivan <[email protected]>\nSubject: Re: [PERFORM] NOT IN doesn't use index?\n\nI didn't post it because the rest of the query is exactly the same (and\nthe NOT IN list is about a page long - although it's\napparently still shorter than the IN list.)\n\nI need to verify something and then can send the EXPLAIN output.\n \nI am running my own server and have no idea what parameters I should use \nto speed things up. Everything is dog slow.\n\n\nOn Sat, 3 May 2003, Andrew Sullivan wrote:\n\n> On Sat, May 03, 2003 at 01:56:02AM -0400, Becky Neville wrote:\n> > Does the use of WHERE field NOT IN ('A','B' etc) prevent the use of an \n> > index?\n> \n> That '&c.' is hiding a lot. Why not post your query and the explain\n> analyse output?\n> \n> A\n> \n\n", "msg_date": "Sat, 3 May 2003 13:57:52 -0400 (EDT)", "msg_from": "Becky Neville <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT IN doesn't use index? (fwd)" }, { "msg_contents": "Becky,\n\n> Here is the EXPLAIN output from the two queries. The first is the one\n> that uses WHERE field NOT IN ( 'a','b' etc ). The second is the (much\n> faster) one\n> that uses WHERE NOT (field = 'a' and field = 'b' etc).\n\nWe still can't help you if you're not posting your actual queries. We have no \nidea what's in query3.sql. We're not clairvoyant, y'know.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Sat, 3 May 2003 11:34:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT IN doesn't use index? (fwd)" }, { "msg_contents": "Becky Neville wrote:\n> Here is the EXPLAIN output from the two queries. The first is the one \n> that uses WHERE field NOT IN ( 'a','b' etc ). The second is the (much \n\nUnless you are working with Postgres 7.4devel (i.e. cvs HEAD), the IN \nconstruct is notoriously slow in Postgres. 
In cvs it is vastly improved.\n\nAlso, as I mentioned in the other reply, send in \"EXPLAIN ANALYZE\" \nresults instead of \"EXPLAIN\" (and make sure you run \"VACUUM ANALYZE\" first).\n\nJoe\n\n", "msg_date": "Sat, 03 May 2003 11:34:55 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT IN doesn't use index? (fwd)" }, { "msg_contents": "Well I think you answered my question already, but just in case\nhere are the explain results again and the query follows (I warned, it is \nlong.) And I did run VACUUM ANALYZE beforehand.\n\npsql:sql/query3.sql:76: NOTICE: QUERY PLAN:\n\nSeq Scan on uabopen (cost=0.00..3305914.86 rows=56580 width=7) (actual \ntime=36077.26..491592.22 rows=48 loops=1)\nTotal runtime: 491592.52 msec\n-------------------------------------------\n\nexplain analyze\nselect uabopen_srat_code\n FROM UABOPEN\n where uabopen_srat_code not in \n('1A','1B','1C','1E','1AC','1BC','1CC','1EC','PG1A',\n \n'PG1B','PG1C','PG1E','R1A','R1B',\n \n'R1C','R1E','RD1A','RD1B','RD1C','RD1E','TRF','WN1A',\n 'WN1B','WN1C','WN1E', 'APS')\n AND uabopen_srat_code not in \n('1F','1FD','3A','3AD','3B','3B1','3BD','3C','3CD','3F',\n \n'3FD','3G','3GD','3H','3HD','4A','4AD','5A','5AD','5B','5BD','5C','5CD',\n \n'5D','5DD','5E','5ED','5F','5FD','5G','5GD','6A','6AD','6B','6BD','6C',\n \n'6CD','6D','6DD','8A','8B','8AD','9A','9TA','9AD','9B','9BD','9C','9CD','9D','9D\\\nD',\n \n'9E','9ED','9F','9FD','9G','9GD','9H','9I','9T','ACC','CM3A','CM3B','CM3C','CM3F\\\n',\n \n'CM3G','CM3H','DEM','GR3A','GR3B','GR3C','GR3H','GR4A','GR5A','GR5B','GR5C',\n \n'GR5D','GR5E','GR5F','GR6A','GR6B','GR6C','GR6D','GR9A','GR9B','GR9C','GR9D','GR\\\n9E',\n \n'GR9F','GR9G','GR9H','GR9T','MT3B','MT3C','MT3G','MT3H','MT4A','MT9A','MT9B','MT\\\n9C',\n \n'MT9D','MT9E','MT9F','MT9G','N1','N10','N100','N101','N102','N103','N104','N105'\\\n,\n \n'N106','N107','N108','N109','ITCP','1FC','3AP','3CP','5AC',\n \n'5AP','5BC','5BP','5CC','5CP','5DC','5DP','5GC','6AC','6AP','6BC','6BP','6CC','6\\\nCP',\n \n'6DC','6DP','MT5A','MT5B','MT6A','MT6B','MT5H','MT6I','MT6H',\n \n'5HP','6H','6HC','6HP','6I','6IC','6IP','3BP','5H','5HC',\n \n'5I','5IC','5IP','GR5H','GR5I','GR6H','GR6I',\n \n'MT5I','PG5H','PG5I','PG6H','PG6I','WN5H','WN5I','WN6H','WN6I',\n \n'5CT','6CT','6DT','MT6C','MT6D','MT5C','MT5D','5DT','5HD')\n AND UABOPEN_SRAT_CODE NOT IN \n('N11','N110','N111','N112','N113','N114','N115','N116','N117','N118','N119','N12',\n \n'N120','N121','N122','N123','N124','N125','N126','N127','N128','N129','N13','N13\\\n0',\n \n'N131','N132','N133','N134','N135','N136','N137','N138','N139','N14','N140',\n \n'N141','N142','N143','N144','N145','N146','N147','N148','N149','N15','N150',\n \n'N151','N152','N153','N154','N155','N156','N157','N158'\n\n\nOn Sat, 3 May 2003, Joe Conway wrote:\n\n> Becky Neville wrote:\n> > Here is the EXPLAIN output from the two queries. The first is the one \n> > that uses WHERE field NOT IN ( 'a','b' etc ). The second is the (much \n> \n> Unless you are working with Postgres 7.4devel (i.e. cvs HEAD), the IN \n> construct is notoriously slow in Postgres. 
In cvs it is vastly improved.\n> \n> Also, as I mentioned in the other reply, send in \"EXPLAIN ANALYZE\" \n> results instead of \"EXPLAIN\" (and make sure you run \"VACUUM ANALYZE\" first).\n> \n> Joe\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n\n", "msg_date": "Sat, 3 May 2003 15:08:03 -0400 (EDT)", "msg_from": "Becky Neville <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT IN doesn't use index? (fwd)" }, { "msg_contents": "Becky Neville wrote:\n> Well I think you answered my question already, but just in case\n> here are the explain results again and the query follows (I warned, it is \n> long.) And I did run VACUUM ANALYZE beforehand.\n\n[snipped ugly query with three NOT IN clauses]\n\nHmmm, no surprise that's slow. How are those three lists of constants \ngenerated? One idea is to recast this as a left join with a FROM clause \nsubselect, e.g.\n\nselect\n uabopen_srat_code\nfrom\n uabopen u left join\n (select '1F' as uabopen_srat_code union all\n '1FD' union all\n '3A' ...) as ss\n on u.uabopen_srat_code = ss.uabopen_srat_code\nwhere ss.uabopen_srat_code is null;\n\nBut I'm not sure that will be much quicker. If the list of \nuabopen_srat_code you're filtering on comes from one of the other \ntables, you might be able to do better -- back to the question above, \nhow is that list generated? What do the other table look like?\n\nJoe\n\n", "msg_date": "Sat, 03 May 2003 12:31:12 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT IN doesn't use index? (fwd)" }, { "msg_contents": "I think that list is actually (gulp) hard coded. It's not my query. I am \ntrying to speed it up for someone else - to hopefully learn something in \nthe process that isn't dependent on what version of postgres i'm \nrunning :)\n\nI assume it's from another table but can't find it on their data model at \nthe moment. Those are all valid billing codes. The query is checking to see if \nanyone was billed under an invalid code. So if everything is ok, the query \nreturns nothing.\n\nBut there must be more to it than that...otherwise, they could just add a \nValid flag to the lookup table. \n\nIf you have any ideas for speeding it up other than using another table \nplease let me know. It only takes me 9 min to run with 2 mil rows but it \ntakes them 7 hours (51 mil rows in Oracle with many other jobs running and \npoor system maintenance.)\n\n\n\n On Sat, 3 May 2003, Joe Conway \nwrote:\n\n> Becky Neville wrote:\n> > Well I think you answered my question already, but just in case\n> > here are the explain results again and the query follows (I warned, it is \n> > long.) And I did run VACUUM ANALYZE beforehand.\n> \n> [snipped ugly query with three NOT IN clauses]\n> \n> Hmmm, no surprise that's slow. How are those three lists of constants \n> generated? One idea is to recast this as a left join with a FROM clause \n> subselect, e.g.\n> \n> select\n> uabopen_srat_code\n> from\n> uabopen u left join\n> (select '1F' as uabopen_srat_code union all\n> '1FD' union all\n> '3A' ...) as ss\n> on u.uabopen_srat_code = ss.uabopen_srat_code\n> where ss.uabopen_srat_code is null;\n> \n> But I'm not sure that will be much quicker. 
If the list of \n> uabopen_srat_code you're filtering on comes from one of the other \n> tables, you might be able to do better -- back to the question above, \n> how is that list generated? What do the other table look like?\n> \n> Joe\n> \n\n", "msg_date": "Sat, 3 May 2003 15:56:53 -0400 (EDT)", "msg_from": "Becky Neville <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT IN doesn't use index? (fwd)" }, { "msg_contents": "Becky Neville wrote:\n> I think that list is actually (gulp) hard coded. It's not my query.\n> I am trying to speed it up for someone else - to hopefully learn something\n> in the process that isn't dependent on what version of postgres i'm \n> running :)\n> \n> I assume it's from another table but can't find it on their data\n> model at the moment. Those are all valid billing codes. The query\n> is checking to see if anyone was billed under an invalid code. So if\n> everything is ok, the query returns nothing.\n\nYeah -- that sounds like there has to be a table of valid codes\nsomewhere. In that case you can substitute the \"valid_codes\" table in\nthe left join where I had the subselect with all the UNIONs.\nAlternatively you might find a NOT EXISTS method would work best. If \nthere isn't a \"valid_codes\" table, but that hard coded list is static, \nperhaps you could build one and use that.\n\n> But there must be more to it than that...otherwise, they could just\n> add a Valid flag to the lookup table.\n\nWell I certainly wouldn't query a whole table of historical information \nover and over. Can you use and date column (suitably indexed) to just \ncheck recent transactions (like since the last time you checked)?\n\n> If you have any ideas for speeding it up other than using another\n> table please let me know. It only takes me 9 min to run with 2 mil\n> rows but it takes them 7 hours (51 mil rows in Oracle with many other\n> jobs running and poor system maintenance.)\n\nAs above, are all 51 million rows recent transactions, or is that all of \neternity? If its the latter, I'd scan the whole thing once and produce a \nreport, or maybe a \"transactions_with_invalid_codes\" table.\n\n From that point on, I'd only check the transactions since the last time \nI'd checked, either based on a timestamp or even a sequence generated id \nfield. All you need to do is save off the max value each time you run, \nand then use that as the starting point next time.\n\nHTH,\n\nJoe\n\n", "msg_date": "Sat, 03 May 2003 13:24:38 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT IN doesn't use index? (fwd)" }, { "msg_contents": "On Sat, 2003-05-03 at 15:56, Becky Neville wrote:\n> I think that list is actually (gulp) hard coded. It's not my query. I am \n> trying to speed it up for someone else - to hopefully learn something in \n> the process that isn't dependent on what version of postgres i'm \n> running :)\n\nAn interesting test might be to see if the overhead of doing a character\nbased comparison (as opposed to integer based) is significant. If it\nis, previous tests show it can be significant for CPU bound queries,\nconvert all of those codes into integers and use a lookup table table to\ndo the conversion.\n\nAnother interesting thought, since you have a long running query would\nbe to attempt an inversion. Create a temporary table with the *valid*\ncodes if count(valid codes) < 2 * count(invalid codes). Run the query\nreplacing NOT IN with a join to the temporary table. 
This will reduce\nthe number of comparisons required, as a match can move onto the next\ndatum, but a NOT IN must check all values. If this helps, try indexing\n(and analyzing) the temporary table.\n\nBy far the fastest results can be achieved by not allowing invalid\nbilling codes to be inserted into the table via a constraint of somekind\n(check or fkey to summary table).\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "03 May 2003 17:16:41 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT IN doesn't use index? (fwd)" }, { "msg_contents": "AFAIK, it's only the IN (large subquery) form that is slow...\n\nChris\n\nOn Sat, 3 May 2003, Joe Conway wrote:\n\n> Becky Neville wrote:\n> > Here is the EXPLAIN output from the two queries. The first is the one\n> > that uses WHERE field NOT IN ( 'a','b' etc ). The second is the (much\n>\n> Unless you are working with Postgres 7.4devel (i.e. cvs HEAD), the IN\n> construct is notoriously slow in Postgres. In cvs it is vastly improved.\n>\n> Also, as I mentioned in the other reply, send in \"EXPLAIN ANALYZE\"\n> results instead of \"EXPLAIN\" (and make sure you run \"VACUUM ANALYZE\" first).\n>\n> Joe\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n\n", "msg_date": "Sun, 4 May 2003 12:05:42 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT IN doesn't use index? (fwd)" } ]
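Putting the last two suggestions together, a sketch of the lookup-table rewrite might look like this; valid_codes is a hypothetical table holding the hard-coded list of codes, and the LEFT JOIN anti-join replaces the three long NOT IN clauses:

    CREATE TABLE valid_codes (srat_code varchar PRIMARY KEY);
    INSERT INTO valid_codes VALUES ('1A');
    INSERT INTO valid_codes VALUES ('1B');
    -- ... one row per valid billing code ...
    ANALYZE valid_codes;

    SELECT u.uabopen_srat_code
    FROM uabopen u
         LEFT JOIN valid_codes v ON u.uabopen_srat_code = v.srat_code
    WHERE v.srat_code IS NULL;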
[ { "msg_contents": "Folks,\n\nI have a common query on a production database that's running a little too \nslow (3-6 seconds). I can currently drop the time to 0.8 seconds by \nsetting enable_seqscan = false; the main reason is the planner poorly \ndeciding to use a seq scan for the hash join between \"events\" and \"cases\", \nmostly due to a dramatically bad estimate of the number of rows required from \n\"cases\".\n\nSuggestions on how to get Postgres to use cases_pkey instead of a seq scan on \ncases without messing with global query settings in the database which might \nmake other queries run slower? (And yes, a VACUUM FULL ANALYZE was \ninvolved). \n\nThe View:\ncreate view sv_events as\nselect events.event_id, events.status, status_label, status.rollup as rstatus, \nevents.etype_id, type_name,\n\tevent_cats.ecat_id, cat_name, events.event_date, events.event_name,\n\tjw_date_format(events.event_date, events.event_tz, events.duration) as \nshow_date,\n\tcases.case_name || '(' || cases.docket || ')' as event_case,\n\tevents.case_id, cases.case_name, cases.docket, NULL::VARCHAR as tgroup_name,\n\tevents.location_id, location_name, locations.zip_code, locations.address, \nlocations.state_code, locations.city,\n\tlu.user_name as lock_name, lu.email as lock_email, lu.user_id AS lock_user\nFROM status, locations, event_types, event_cats, cases,\n\tevents LEFT OUTER JOIN lock_users lu ON events.event_id = lock_record\nWHERE events.status <> 0\n\tAND (events.status = status.status AND status.relation = 'events')\n\tAND events.location_id = locations.location_id\n\tAND event_types.etype_id = events.etype_id\n\tAND event_cats.ecat_id = event_types.ecat_id\n\tAND events.case_id = cases.case_id;\n\nThe Query:\nSELECT sv_events.*, FALSE AS fuzzy_team FROM sv_events WHERE EXISTS ( SELECT \nevent_id FROM event_days\n WHERE event_days.event_id = sv_events.event_id AND (event_day BETWEEN \n('2003-04-08'::TIMESTAMP WITHOUT TIME ZONE)\n AND ('2003-06-17 23:59'::TIMESTAMP WITHOUT TIME ZONE) ) );\n\n\nThe Explain:\njwnet_test=> \\i perform.sql\npsql:perform.sql:9: NOTICE: QUERY PLAN:\n\nLimit (cost=199572.58..199572.58 rows=10 width=368) (actual \ntime=3239.95..3239.96 rows=10 loops=1)\n -> Sort (cost=199572.58..199572.58 rows=33575 width=368) (actual \ntime=3239.92..3239.93 rows=41 loops=1)\n -> Hash Join (cost=6576.62..191013.53 rows=33575 width=368) (actual \ntime=513.49..3220.38 rows=1790 loops=1)\n -> Hash Join (cost=6574.72..189924.26 rows=14837 width=350) \n(actual time=509.20..3063.85 rows=1790 loops=1)\n -> Hash Join (cost=38.81..180804.32 rows=14837 \nwidth=304) (actual time=16.38..452.80 rows=1919 loops=1)\n -> Hash Join (cost=33.92..180539.78 rows=14837 \nwidth=252) (actual time=15.68..428.38 rows=1919 loops=1)\n -> Hash Join (cost=22.17..180231.28 \nrows=14837 width=155) (actual time=13.98..406.61 rows=1919 loops=1)\n -> Seq Scan on events \n(cost=0.00..179874.82 rows=14837 width=67) (actual time=0.27..382.47 \nrows=1919 loops=1)\n SubPlan\n -> Index Scan using \nevent_days_pk on event_days (cost=0.00..6.01 rows=1 width=4) (actual \ntime=0.01..0.01 rows=0 loops=29734)\n -> Hash (cost=21.99..21.99 rows=72 \nwidth=83) (actual time=13.66..13.66 rows=0 loops=1)\n -> Subquery Scan lu \n(cost=12.61..21.99 rows=72 width=83) (actual time=13.64..13.65 rows=1 \nloops=1)\n -> Hash Join \n(cost=12.61..21.99 rows=72 width=83) (actual time=13.63..13.64 rows=1 \nloops=1)\n -> Seq Scan on \nedit_locks (cost=0.00..7.94 rows=72 width=26) (actual time=12.82..12.83 \nrows=1 loops=1)\n -> Hash 
\n(cost=6.50..6.50 rows=150 width=57) (actual time=0.71..0.71 rows=0 loops=1)\n -> Seq Scan on \nusers (cost=0.00..6.50 rows=150 width=57) (actual time=0.01..0.47 rows=150 \nloops=1)\n -> Hash (cost=11.00..11.00 rows=300 \nwidth=97) (actual time=1.66..1.66 rows=0 loops=1)\n -> Seq Scan on locations \n(cost=0.00..11.00 rows=300 width=97) (actual time=0.01..1.11 rows=300 \nloops=1)\n -> Hash (cost=4.75..4.75 rows=56 width=52) (actual \ntime=0.60..0.60 rows=0 loops=1)\n -> Hash Join (cost=1.21..4.75 rows=56 \nwidth=52) (actual time=0.17..0.51 rows=56 loops=1)\n -> Seq Scan on event_types \n(cost=0.00..2.56 rows=56 width=31) (actual time=0.01..0.15 rows=56 loops=1)\n -> Hash (cost=1.17..1.17 rows=17 \nwidth=21) (actual time=0.07..0.07 rows=0 loops=1)\n -> Seq Scan on event_cats \n(cost=0.00..1.17 rows=17 width=21) (actual time=0.01..0.05 rows=17 loops=1)\n -> Hash (cost=3800.07..3800.07 rows=112107 width=46) \n(actual time=491.84..491.84 rows=0 loops=1)\n -> Seq Scan on cases (cost=0.00..3800.07 \nrows=112107 width=46) (actual time=0.01..277.20 rows=112107 loops=1)\n -> Hash (cost=1.88..1.88 rows=10 width=18) (actual \ntime=0.12..0.12 rows=0 loops=1)\n -> Seq Scan on status (cost=0.00..1.88 rows=10 width=18) \n(actual time=0.03..0.11 rows=10 loops=1)\nTotal runtime: 3241.09 msec\n\nThe Index Scan:\njwnet_test=> set enable_seqscan = false;\nSET VARIABLE\njwnet_test=> \\i perform.sql\npsql:perform.sql:9: NOTICE: QUERY PLAN:\n\nLimit (cost=252608.52..252608.52 rows=10 width=368) (actual \ntime=740.62..740.64 rows=10 loops=1)\n -> Sort (cost=252608.52..252608.52 rows=33469 width=368) (actual \ntime=740.60..740.61 rows=41 loops=1)\n -> Hash Join (cost=86.85..244083.21 rows=33469 width=368) (actual \ntime=20.93..720.70 rows=1790 loops=1)\n -> Hash Join (cost=80.75..242992.18 rows=14812 width=350) \n(actual time=16.69..554.62 rows=1790 loops=1)\n -> Nested Loop (cost=49.20..242664.38 rows=14812 \nwidth=253) (actual time=14.56..519.42 rows=1790 loops=1)\n -> Hash Join (cost=49.20..158631.12 rows=14812 \nwidth=207) (actual time=14.40..459.91 rows=1919 loops=1)\n -> Hash Join (cost=32.78..158355.48 \nrows=14812 width=155) (actual time=13.59..442.08 rows=1919 loops=1)\n -> Index Scan using idx_events_status \non events (cost=0.00..157988.97 rows=14812 width=67) (actual \ntime=0.08..416.67 rows=1919 loops=1)\n SubPlan\n -> Index Scan using \nevent_days_pk on event_days (cost=0.00..5.26 rows=1 width=4) (actual \ntime=0.01..0.01 rows=0 loops=29734)\n -> Hash (cost=32.60..32.60 rows=72 \nwidth=83) (actual time=13.47..13.47 rows=0 loops=1)\n -> Subquery Scan lu \n(cost=0.00..32.60 rows=72 width=83) (actual time=1.60..13.46 rows=1 loops=1)\n -> Merge Join \n(cost=0.00..32.60 rows=72 width=83) (actual time=1.59..13.45 rows=1 loops=1)\n -> Index Scan using \nusers_pkey on users (cost=0.00..19.63 rows=150 width=57) (actual \ntime=0.09..0.12 rows=3 loops=1)\n -> Index Scan using \nedit_locks_user_id on edit_locks (cost=0.00..11.51 rows=72 width=26) (actual \ntime=1.43..13.28 rows=1 loops=1)\n -> Hash (cost=16.28..16.28 rows=56 width=52) \n(actual time=0.77..0.77 rows=0 loops=1)\n -> Hash Join (cost=5.67..16.28 rows=56 \nwidth=52) (actual time=0.29..0.68 rows=56 loops=1)\n -> Index Scan using \nevent_types_pkey on event_types (cost=0.00..9.63 rows=56 width=31) (actual \ntime=0.08..0.28 rows=56 loops=1)\n -> Hash (cost=5.63..5.63 rows=17 \nwidth=21) (actual time=0.15..0.15 rows=0 loops=1)\n -> Index Scan using \nevent_cats_pkey on event_cats (cost=0.00..5.63 rows=17 width=21) (actual \ntime=0.08..0.13 rows=17 
loops=1)\n -> Index Scan using cases_pkey on cases \n(cost=0.00..5.66 rows=1 width=46) (actual time=0.02..0.02 rows=1 loops=1919)\n -> Hash (cost=30.80..30.80 rows=300 width=97) (actual \ntime=2.07..2.07 rows=0 loops=1)\n -> Index Scan using locations_pkey on locations \n(cost=0.00..30.80 rows=300 width=97) (actual time=0.09..1.61 rows=300 \nloops=1)\n -> Hash (cost=6.07..6.07 rows=10 width=18) (actual \ntime=0.08..0.08 rows=0 loops=1)\n -> Index Scan using status_relation on status \n(cost=0.00..6.07 rows=10 width=18) (actual time=0.03..0.06 rows=10 loops=1)\nTotal runtime: 741.72 msec\n\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Sat, 3 May 2003 19:28:52 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Suggestions wanted for 7.2.4 query" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> SELECT sv_events.*, FALSE AS fuzzy_team FROM sv_events WHERE EXISTS ( SELECT \n> event_id FROM event_days\n> WHERE event_days.event_id = sv_events.event_id AND (event_day BETWEEN \n> ('2003-04-08'::TIMESTAMP WITHOUT TIME ZONE)\n> AND ('2003-06-17 23:59'::TIMESTAMP WITHOUT TIME ZONE) ) );\n\nIs event_days.event_id unique? If so, try\n\nSELECT sv_events.*, FALSE AS fuzzy_team FROM sv_events, event_days\nWHERE\nevent_days.event_id = sv_events.event_id AND\n (event_days.event_day BETWEEN \n('2003-04-08'::TIMESTAMP WITHOUT TIME ZONE)\n AND ('2003-06-17 23:59'::TIMESTAMP WITHOUT TIME ZONE) );\n\nThis at least gives you some glimmer of a chance that the restriction on\nevent_day can be used to avoid computing the entire join represented by\nsv_events. With the exists() form, there's no chance...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 04 May 2003 00:42:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions wanted for 7.2.4 query " }, { "msg_contents": "Tom,\n\n> > SELECT sv_events.*, FALSE AS fuzzy_team FROM sv_events WHERE EXISTS (\n> > SELECT event_id FROM event_days\n> > WHERE event_days.event_id = sv_events.event_id AND (event_day BETWEEN\n> > ('2003-04-08'::TIMESTAMP WITHOUT TIME ZONE)\n> > AND ('2003-06-17 23:59'::TIMESTAMP WITHOUT TIME ZONE) ) );\n>\n> Is event_days.event_id unique? If so, try\n\nRegrettably, no. Event_days is an iterative list of all of the days covered \nby the event. What's unique is event_days_pk, which is event_id, event_day. \nIf I did a direct join to event_days, multi-day events would appear on the \nsearch results more than once .... which we *don't* want.\n\n> This at least gives you some glimmer of a chance that the restriction on\n> event_day can be used to avoid computing the entire join represented by\n> sv_events. With the exists() form, there's no chance...\n\nHmmm. There are other ways I can get at the date limit for sv_events; I'll \ntry that. Unfortunately, those ways require a seq scan on events, so I'm not \nsure we have a net gain here (that is, I can't imagine that a two-column \ndate calculation between two parameters could be indexed)\n\n However, by my reading, 75% of the cost of the query is the unindexed join \nbetween \"events\" and \"cases\". 
Are you saying that the planner being vague \nabout what will be returned from the EXISTS clause is what's triggering the \nseq scan on \"cases\"?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Sun, 4 May 2003 09:07:03 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestions wanted for 7.2.4 query" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> This at least gives you some glimmer of a chance that the restriction on\n>> event_day can be used to avoid computing the entire join represented by\n>> sv_events. With the exists() form, there's no chance...\n\n> Hmmm.\n\nI have to take that back (must have been out too late last night ;-)).\nThe EXISTS subquery *is* getting pushed down to become a restriction on\nevents alone; that's what the \"SubPlan\" is. However, it'd still be\nworth looking for another way to express it, because the planner is\npretty clueless about the selectivity of EXISTS restrictions. That's\nwhat's causing it to drastically overestimate the number of rows taken\nfrom \"events\" (14812 vs 1919), which in turn drives it away from using\nthe nestloop-with-inner-indexscan join style for joining to \"cases\".\n\n> Are you saying that the planner being vague about what will be\n> returned from the EXISTS clause is what's triggering the seq scan on\n> \"cases\"?\n\nRight. The nestloop/indexscan style only wins if there are not too many\nouter rows. If the EXISTS constraint actually did succeed for 14812\n\"events\" rows, the planner would probably be making the right choice to\nuse a hash join.\n\nBTW, have you tried lowering the value of \"random_page_cost\"? Looking\nat the relative costs in these examples makes me think most of your\ntables are cached in memory. Of course, if that's not true during\nday-to-day production then you need to be wary about reducing the setting.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 04 May 2003 13:23:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions wanted for 7.2.4 query " }, { "msg_contents": "Tom,\n\n> I have to take that back (must have been out too late last night ;-)).\n> The EXISTS subquery *is* getting pushed down to become a restriction on\n> events alone; that's what the \"SubPlan\" is. However, it'd still be\n> worth looking for another way to express it, because the planner is\n> pretty clueless about the selectivity of EXISTS restrictions. That's\n> what's causing it to drastically overestimate the number of rows taken\n> from \"events\" (14812 vs 1919), which in turn drives it away from using\n> the nestloop-with-inner-indexscan join style for joining to \"cases\".\n\nThat may be solvable without forcing a seq scan on \"events\", simply by \noverdetermining the criteria on date. That is, I can't apply the date \ncriteria to \"events\" because that would require running date calucations on \neach row forcing a seq scan ( i.e. (event_date + duration) between date_one \nand date_two would require a seq scan), but I can apply a broadend version of \nthe criteria to \"events\" ( i.e. event_date between (date_one - 1 month) and \n(date_two + 1 day)) which would give the planner the idea that it is \nreturning a minority of rows from \"events\".\n\nSomeday, we have to come up with a way of indexing simple multi-column \ncalculations. Unless someone did that in current source while I was behind \non -hackers?\n\n> Right. 
The nestloop/indexscan style only wins if there are not too many\n> outer rows. If the EXISTS constraint actually did succeed for 14812\n> \"events\" rows, the planner would probably be making the right choice to\n> use a hash join.\n\nHmm. Any hope of improving this in the future? Like the IN() functionality \nimprovements in 7.4?\n\n> BTW, have you tried lowering the value of \"random_page_cost\"? Looking\n> at the relative costs in these examples makes me think most of your\n> tables are cached in memory. Of course, if that's not true during\n> day-to-day production then you need to be wary about reducing the setting.\n\nNo, we're probably cached ... the machine has 1gb of RAM. Also it has a \nreally fast RAID array, at least for block disk reads, although random seek \ntimes suck. I can tweak a little. The problem is that it's a production \nmachine in use 70 hours a week, so there isn't a lot of time we can test \nperformance settings that might cause problems.\n\nThanks for the advice!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Sun, 4 May 2003 10:59:41 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestions wanted for 7.2.4 query" }, { "msg_contents": "Folks,\n\n> That may be solvable without forcing a seq scan on \"events\", simply by \n> overdetermining the criteria on date. That is, I can't apply the date \n> criteria to \"events\" because that would require running date calucations on \n> each row forcing a seq scan ( i.e. (event_date + duration) between date_one \n> and date_two would require a seq scan), but I can apply a broadend version \nof \n> the criteria to \"events\" ( i.e. event_date between (date_one - 1 month) and \n> (date_two + 1 day)) which would give the planner the idea that it is \n> returning a minority of rows from \"events\".\n\nIf anyone is interested, the above idea worked.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 5 May 2003 10:59:11 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestions wanted for 7.2.4 query" }, { "msg_contents": "Andrew,\n\n> > If anyone is interested, the above idea worked.\n> \n> I am. Thanks, that was a clever idea.\n\nThanks! 
In that case, I'll give you the full implementation:\n\n1) Build an index on the product of time and duration for the table \"events\":\njwnet_test=> create function add_time ( timestamp without time zone, interval \n)\ncal_test-> returns timestamp without time zone as '\ncal_test'> select $1 + $2;\ncal_test'> ' language 'sql' with (isstrict, iscachable);\ncal_test=> create index idx_event_ends on events(add_time(event_date, \nduration));\nCREATE\n\n2) add this as a column to the view:\ncreate view sv_events as\nselect events.event_id, events.status, status_label, status.rollup as rstatus, \nevents.etype_id, type_name,\n event_cats.ecat_id, cat_name, events.event_date, events.event_name,\n jw_date_format(events.event_date, events.event_tz, events.duration) as \nshow_date,\n cases.case_name || '(' || cases.docket || ')' as event_case,\n events.case_id, cases.case_name, cases.docket, NULL::VARCHAR as \ntgroup_name,\n events.location_id, location_name, locations.zip_code, \nlocations.address, \nlocations.state_code, locations.city,\n lu.user_name as lock_name, lu.email as lock_email, lu.user_id AS \nlock_user, add_time(events.event_date, events.duration) as end_date\nFROM status, locations, event_types, event_cats, cases,\n events LEFT OUTER JOIN lock_users lu ON events.event_id = lock_record\nWHERE events.status <> 0\n AND (events.status = status.status AND status.relation = 'events')\n AND events.location_id = locations.location_id\n AND event_types.etype_id = events.etype_id\n AND event_cats.ecat_id = event_types.ecat_id\n AND events.case_id = cases.case_id;\n\n\n3) change the query as follows:\nSELECT sv_events.*, FALSE AS fuzzy_team FROM sv_events WHERE\n(sv_events.event_date BETWEEN ('2003-04-07'::TIMESTAMP WITHOUT TIME ZONE) AND \n('2003-05-19'::TIMESTAMP WITHOUT TIME ZONE)\n or sv_events.end_date BETWEEN ('2003-04-07'::TIMESTAMP WITHOUT TIME ZONE) \nAND ('2003-05-19'::TIMESTAMP WITHOUT TIME ZONE) )\n AND EXISTS ( SELECT event_id FROM event_days WHERE event_days.event_id = \nsv_events.event_id\n AND (event_day BETWEEN ('2003-04-08'::TIMESTAMP WITHOUT TIME ZONE)\n AND ('2003-06-17 23:59'::TIMESTAMP WITHOUT TIME ZONE) ) )\n AND ( UPPER(case_name) LIKE 'RODRIGUEZ%' OR docket LIKE 'RODRIGUEZ%' OR\n UPPER(tgroup_name) LIKE 'RODRIGUEZ%' OR EXISTS (SELECT tgroup_id FROM \ntrial_groups\n JOIN cases USING(tgroup_id) WHERE trial_groups.status > 0 AND \n((UPPER(case_name)\n LIKE 'RODRIGUEZ%' OR docket LIKE 'RODRIGUEZ%') AND tgroup_id = \nsv_events.case_id)\n OR (UPPER(tgroup_name) LIKE 'RODRIGUEZ%' AND cases.case_id = \nsv_events.case_id) ) ) AND rstatus <> 0;\n\nThe new version returns in 0.85 seconds, a 75% improvement! Yahoo!\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 5 May 2003 12:27:25 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestions wanted for 7.2.4 query" } ]
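A quick way to check that the recipe above is doing its job (the date range here is just an example) is to look at the plan for a query that filters directly on the indexed expression and see whether idx_event_ends shows up:

    EXPLAIN
    SELECT event_id, event_name
    FROM events
    WHERE add_time(event_date, duration)
          BETWEEN '2003-04-07'::timestamp without time zone
              AND '2003-05-19'::timestamp without time zone;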
[ { "msg_contents": "Hello all!\n\nOn PostgreSQL V7.3.2 on TRU64 I have a table\nand applied indices for that table.\nBut on a simple query the indices are not used by the optimizer.\n(An sequential scan is used which takes a lot of time)\nI have done\nVACUUM and VACUUM analyze\nbut without any change to the optimizer.\n\nCan someone give me a hint what I should do to give the\noptimizer a start?\n--------------------------------------\n\nWell, let's start by the query\n\nwetter=# explain select * from wetter where epoche > '2001-01-01';\n QUERY PLAN\n-------------------------------------------------------------------------\n Seq Scan on wetter (cost=0.00..614795.55 rows=19054156 width=16)\n Filter: (epoche > '2001-01-01 00:00:00+00'::timestamp with time zone)\n(2 rows)\n\nwetter=#\n\n\nThe table definition is as follows:\n \\d wetter\n Table \"public.wetter\"\n Column | Type | Modifiers\n-----------+--------------------------+-----------\n sensor_id | integer | not null\n epoche | timestamp with time zone | not null\n wert | real | not null\nIndexes: wetter_pkey primary key btree (sensor_id, epoche),\n wetter_epoche_idx btree (epoche),\n wetter_sensor_id_idx btree (sensor_id)\nTriggers: RI_ConstraintTrigger_45702811,\n t_ins_wetter_wetterakt\n\nwetter=#\n\n\nThe trigger information is as follows:\nselect * from pg_trigger where tgname='RI_ConstraintTrigger_45702811';\n tgrelid | tgname | tgfoid | tgtype | tgenabled \n| tgisconstraint | tgconstrname | tgconstrrelid | tgdeferrable | \ntginitdeferred | tgnargs | tgattr | \n tgargs\n----------+-------------------------------+--------+--------+-----------+----------------+--------------+---------------+--------------+----------------+---------+--------+---------------------------------------------------------------------------------------\n 43169106 | RI_ConstraintTrigger_45702811 | 1644 | 21 | t \n | t | <unnamed> | 43169098 | f | f \n | 6 | | \n<unnamed>\\000wetter\\000sensoren_an_orten\\000UNSPECIFIED\\000sensor_id\\000sensor_id\\000\n(1 row)\n\nwetter=#\n\n\nand t_ins_wetter_wetterakt\nis a PLPGSQL Funktion which copies some information into another table\nwhen an insert or update is done.\n\n\n-- \n\nMit freundlichen Gruessen / With best regards\n Reiner Dassing\n\n", "msg_date": "Mon, 05 May 2003 14:16:27 +0200", "msg_from": "Reiner Dassing <[email protected]>", "msg_from_op": true, "msg_subject": "Indices are not used by the optimizer" }, { "msg_contents": "Are you really expecting 19 million rows to be returned -- are you\nreally going to use them all?\n\nHow about explain analyze output?\n\nHave you tried using a cursor to allow for parallel processing? 
(pull\n1000 rows, do work, pull next 1000 rows, do work, etc.)\n\n> wetter=# explain select * from wetter where epoche > '2001-01-01';\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Seq Scan on wetter (cost=0.00..614795.55 rows=19054156 width=16)\n> Filter: (epoche > '2001-01-01 00:00:00+00'::timestamp with time zone)\n> (2 rows)\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "05 May 2003 09:33:05 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indices are not used by the optimizer" }, { "msg_contents": "\nHi Reiner,\n\nnormally these kind of subjects must go\nto [email protected]\n\nWhat's important is in pg_class and pg_statistic tables.\nEspecially, you may check out histgraph bounds\nin pg_stats for attribute epoche.\n\nFor a test, did you do a\n# set enable_seqscan to OFF\n??\nOn Mon, 5 May 2003, Reiner Dassing wrote:\n\n> Hello all!\n> \n> On PostgreSQL V7.3.2 on TRU64 I have a table\n> and applied indices for that table.\n> But on a simple query the indices are not used by the optimizer.\n> (An sequential scan is used which takes a lot of time)\n> I have done\n> VACUUM and VACUUM analyze\n> but without any change to the optimizer.\n> \n> Can someone give me a hint what I should do to give the\n> optimizer a start?\n> --------------------------------------\n> \n> Well, let's start by the query\n> \n> wetter=# explain select * from wetter where epoche > '2001-01-01';\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Seq Scan on wetter (cost=0.00..614795.55 rows=19054156 width=16)\n> Filter: (epoche > '2001-01-01 00:00:00+00'::timestamp with time zone)\n> (2 rows)\n> \n> wetter=#\n> \n> \n> The table definition is as follows:\n> \\d wetter\n> Table \"public.wetter\"\n> Column | Type | Modifiers\n> -----------+--------------------------+-----------\n> sensor_id | integer | not null\n> epoche | timestamp with time zone | not null\n> wert | real | not null\n> Indexes: wetter_pkey primary key btree (sensor_id, epoche),\n> wetter_epoche_idx btree (epoche),\n> wetter_sensor_id_idx btree (sensor_id)\n> Triggers: RI_ConstraintTrigger_45702811,\n> t_ins_wetter_wetterakt\n> \n> wetter=#\n> \n> \n> The trigger information is as follows:\n> select * from pg_trigger where tgname='RI_ConstraintTrigger_45702811';\n> tgrelid | tgname | tgfoid | tgtype | tgenabled \n> | tgisconstraint | tgconstrname | tgconstrrelid | tgdeferrable | \n> tginitdeferred | tgnargs | tgattr | \n> tgargs\n> ----------+-------------------------------+--------+--------+-----------+----------------+--------------+---------------+--------------+----------------+---------+--------+---------------------------------------------------------------------------------------\n> 43169106 | RI_ConstraintTrigger_45702811 | 1644 | 21 | t \n> | t | <unnamed> | 43169098 | f | f \n> | 6 | | \n> <unnamed>\\000wetter\\000sensoren_an_orten\\000UNSPECIFIED\\000sensor_id\\000sensor_id\\000\n> (1 row)\n> \n> wetter=#\n> \n> \n> and t_ins_wetter_wetterakt\n> is a PLPGSQL Funktion which copies some information into another table\n> when an insert or update is done.\n> \n> \n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Mon, 5 
May 2003 16:38:43 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Indices are not used by the optimizer" } ]
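For reference, the two suggestions in this thread can be sketched as follows; enable_seqscan is only a temporary testing switch, w_cur is an arbitrary cursor name, and pg_stats shows the figures the planner based its estimate on:

    -- test whether the index would even be considered
    SET enable_seqscan TO off;
    EXPLAIN SELECT * FROM wetter WHERE epoche > '2001-01-01';
    SET enable_seqscan TO on;

    -- inspect the stored statistics for the filtered column
    SELECT attname, n_distinct, correlation
    FROM pg_stats
    WHERE tablename = 'wetter' AND attname = 'epoche';

    -- if all the rows really are needed, fetch them in batches via a cursor
    BEGIN;
    DECLARE w_cur CURSOR FOR
        SELECT * FROM wetter WHERE epoche > '2001-01-01';
    FETCH 1000 FROM w_cur;
    -- ... process, FETCH again as needed ...
    CLOSE w_cur;
    COMMIT;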
[ { "msg_contents": "Hi,\n\nthis is in continuation from the previous\nhttp://archives.postgresql.org/pgsql-performance/2003-05/msg00003.php\nthread.\n\nSummary:\n\nOn a table i have this situation: on the queries i do, the best plan is \nused only if NO statistics\nare produced (via ANALYZE).\nOnce i run [VACUUM] [FULL] ANALYZE; the correct index is used only \nin certain circumstances, and the planner fails to use it\nin the most common ones.\n\nSince Message \nhttp://archives.postgresql.org/pgsql-performance/2003-05/msg00003.php\nhad no responces, i thought that the best way to solve the problem\nis to provide the pg_dump for anyone willing to examine the case.\nbzip2'ed it is about 355K.\n\nThanx.\n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Mon, 5 May 2003 10:30:52 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong index usage in 7.3.2" } ]
[ { "msg_contents": "Folks,\n\nAn area in which postgresql planner & indexing could be improved have occurred \nto me over the last week. I'd like to share this ideas with you in case it \nis worthy of the todo list.\n\nPlease excuse me if this issue is already dealt with in CVS; I've been unable \nto keep up completely on HACKERS lately. Please also excuse me if this \nissue has been discussed and was tabled due to some theoretical limitation, \nsuch as x^n scaling problems\n\nTHE IDEA: The planner should keep statistics on the correlation of foreign \nkeys and apply them to the expected row counts for EXISTS clause limitations, \nand possibly for other query types as well.\n\nTo illustrate:\nDatabase \"calendar\" has two tables, events and event_days.\nEvent_days has FK on column event_id to parent table Events.\nThere is at lease one record in event_days for each record in events, and the \naverage parent-child relationship is 1 event -> 1.15 event_days records.\n\nThis query:\nSELECT events.* FROM events\nWHERE EXISTS (SELECT event_id FROM event_days \n\tWHERE event_day BETWEEN '2003-04-08' AND '2003-05-18');\n\nCurrently, (in 7.2.4 and 7.3.1) the planner makes the assumption that the \nabove EXISTS restriction will only filter events by 50% and makes other join \nand execution plans accordingly. In fact, it filters events by 96% and the \nideal execution plan should be quite different.\n\nIt would be really keen if planner statistics could be expanded to include \ncorrelation on foriegn keys in order to make more intelligent planner \ndecisions on the above type of query possible.\n\nThanks for your attention!\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 5 May 2003 12:19:58 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Hypothetical suggestions for planner, indexing improvement" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> THE IDEA: The planner should keep statistics on the correlation of foreign \n> keys and apply them to the expected row counts for EXISTS clause limitations,\n> and possibly for other query types as well.\n\nIt's a thought. Keeping complete cross-column correlation stats (for\nevery combination of columns in the DB) is obviously out of the\nquestion. If you're gonna do it you need a heuristic to tell you which\ncombinations of columns are worth keeping track of --- and foreign-key\nrelationships seem like a reasonable guide to the interesting\ncombinations.\n\nI'm not sure about the long-term usefulness of optimizing EXISTS per se.\nSeems to me that a lot of the present uses of EXISTS are workarounds\nfor Postgres' historic mistreatment of IN ... which we've attacked more\ndirectly for 7.4. But cross-column correlations are certainly useful\nfor estimating join sizes in general.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 06 May 2003 00:25:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hypothetical suggestions for planner, indexing improvement " }, { "msg_contents": "Tom,\n\n> It's a thought. Keeping complete cross-column correlation stats (for\n> every combination of columns in the DB) is obviously out of the\n> question. If you're gonna do it you need a heuristic to tell you which\n> combinations of columns are worth keeping track of --- and foreign-key\n> relationships seem like a reasonable guide to the interesting\n> combinations.\n\nYes. 
It would also make FKs something more than just an annoying (and slow) \nconstraint in PostgreSQL. And it would be a performance feature that most \nother RDBMSs don't have ;-)\n\n> I'm not sure about the long-term usefulness of optimizing EXISTS per se.\n> Seems to me that a lot of the present uses of EXISTS are workarounds\n> for Postgres' historic mistreatment of IN ... which we've attacked more\n> directly for 7.4. But cross-column correlations are certainly useful\n> for estimating join sizes in general.\n\nEXISTS is more flexible than IN; how can you do a 3-column corellation on an \nIN clause?\n\nThe reason that I mention EXISTS is because that's where the lack of \ncross-column corellation is most dramatic; the planner seems to estimate a \nflat 50% for EXISTS clauses regardless of the content.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Mon, 5 May 2003 21:33:33 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hypothetical suggestions for planner, indexing improvement" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> The reason that I mention EXISTS is because that's where the lack of \n> cross-column corellation is most dramatic; the planner seems to estimate a \n> flat 50% for EXISTS clauses regardless of the content.\n\nNo \"seems to\" about that one: see src/backend/optimizer/path/clausesel.c\n\n\telse if (is_subplan(clause))\n\t{\n\t\t/*\n\t\t * Just for the moment! FIX ME! - vadim 02/04/98\n\t\t */\n\t\ts1 = (Selectivity) 0.5;\n\t}\n\nPatches to improve this are welcome ;-). But I'm not at all sure how to\nwrite something that would extract a reliable selectivity estimate from\na subplan.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 06 May 2003 00:45:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hypothetical suggestions for planner, indexing improvement " }, { "msg_contents": "On Mon, 5 May 2003, Josh Berkus wrote:\n\n> Tom,\n> \n> > It's a thought. Keeping complete cross-column correlation stats (for\n> > every combination of columns in the DB) is obviously out of the\n> > question. If you're gonna do it you need a heuristic to tell you which\n> > combinations of columns are worth keeping track of --- and foreign-key\n> > relationships seem like a reasonable guide to the interesting\n> > combinations.\n> \n> Yes. It would also make FKs something more than just an annoying (and slow) \n> constraint in PostgreSQL. And it would be a performance feature that most \n> other RDBMSs don't have ;-)\n\n\tThat statement seams really strange, If FKs are really only a \nsafety lock to stop you from putting bad data in your database, It makes \nthem a bit pointless if you want a nice fast database and you can trust \nyour users! This does not make them useless and I still have them but from \na purely performance point of view they don't help currently!\n\tIt may be worth adding Partial Matching FKs so that a user can \nmark that they think might be a useful match to do. This would help the \nfact that in many data sets NULL can mean more than one different thing. 
\n(Don't Know, None, etc) plus using the index on IS NOT NULL queries would \nbe very handy when you need to know about all the records that you need to \nfind the information out for, or all the records with no relationship.\n\nPeter Childs\n\n> \n> > I'm not sure about the long-term usefulness of optimizing EXISTS per se.\n> > Seems to me that a lot of the present uses of EXISTS are workarounds\n> > for Postgres' historic mistreatment of IN ... which we've attacked more\n> > directly for 7.4. But cross-column correlations are certainly useful\n> > for estimating join sizes in general.\n> \n> EXISTS is more flexible than IN; how can you do a 3-column corellation on an \n> IN clause?\n> \n> The reason that I mention EXISTS is because that's where the lack of \n> cross-column corellation is most dramatic; the planner seems to estimate a \n> flat 50% for EXISTS clauses regardless of the content.\n> \n> \n\n", "msg_date": "Tue, 6 May 2003 08:25:45 +0100 (BST)", "msg_from": "Peter Childs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hypothetical suggestions for planner, indexing" }, { "msg_contents": "On Mon, 2003-05-05 at 23:25, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> > THE IDEA: The planner should keep statistics on the correlation of foreign \n> > keys and apply them to the expected row counts for EXISTS clause limitations,\n> > and possibly for other query types as well.\n> \n> It's a thought. Keeping complete cross-column correlation stats (for\n> every combination of columns in the DB) is obviously out of the\n> question. If you're gonna do it you need a heuristic to tell you which\n> combinations of columns are worth keeping track of --- and foreign-key\n> relationships seem like a reasonable guide to the interesting\n> combinations.\n\nHow about generalizing this into something useful for all queries?\n\nAnd to make this problem not just guess work on the part of the \noptimizer, how about having a separate \"backslash command\" so that \nthe DBA can add specific/important/crucial predicates that he needs\noptimized.\n\nThus, if there's a query with a large WHERE clause that has an\noptimized predicate inside it, the statistics would be used.\n\n> WHERE event_day BETWEEN '2003-04-08' AND '2003-05-18')\n\nWhat this sounds to me like, though, is that in order for it to\nwork quickly, the postmaster would have to keep track in system\ntables each value of event_day and it's COUNT(*), thus more I/O\noverhead.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| The purpose of the military isn't to pay your college tuition |\n| or give you a little extra income; it's to \"kill people and |\n| break things\". Surprisingly, not everyone understands that. |\n+---------------------------------------------------------------+\n\n", "msg_date": "06 May 2003 06:49:15 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hypothetical suggestions for planner," }, { "msg_contents": "On Mon, May 05, 2003 at 09:33:33PM -0700, Josh Berkus wrote:\n> EXISTS is more flexible than IN; how can you do a 3-column corellation on an \n> IN clause?\n \nIt would be nice to add support for multi-column IN..\n\nWHERE (a, b, c) IN (SELECT a, b, c ...)\n\nBTW, does postgresql handle IN and EXISTS differently? 
Theoretically if\nthe optimizer was good enough you could transform one to the other and\nnot worry about it. Whenever possible, I try and use IN when the\nsubselect will return a very small number of rows, since IN might be\nfaster than EXISTS in that case, though it seems most optimizers tend to\nfall apart when they see ORs, and a lot of databases transform IN to a\nOR b OR c.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Tue, 6 May 2003 08:07:47 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hypothetical suggestions for planner,\n\tindexing improvement" }, { "msg_contents": "On Tue, May 06, 2003 at 12:25:33AM -0400, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> > THE IDEA: The planner should keep statistics on the correlation of foreign \n> > keys and apply them to the expected row counts for EXISTS clause limitations,\n> > and possibly for other query types as well.\n> \n> It's a thought. Keeping complete cross-column correlation stats (for\n> every combination of columns in the DB) is obviously out of the\n> question. If you're gonna do it you need a heuristic to tell you which\n> combinations of columns are worth keeping track of --- and foreign-key\n> relationships seem like a reasonable guide to the interesting\n> combinations.\n \nWhat if the optimizer kept on-going statistics on what columns were used\nin joins to what other columns? Over time this would allow analyze to\ndetermine on it's own what cross-column correlation stats should be\nkept.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Tue, 6 May 2003 08:10:35 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Hypothetical suggestions for planner,\n\tindexing improvement" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> It would be nice to add support for multi-column IN..\n> WHERE (a, b, c) IN (SELECT a, b, c ...)\n\nRTFM...\n\n> BTW, does postgresql handle IN and EXISTS differently?\n\nYes.\n\n> Theoretically if the optimizer was good enough you could transform one\n> to the other and not worry about it.\n\nNo. 
They have different responses to NULLs in the subselect result.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 06 May 2003 09:45:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Hypothetical suggestions for planner,\n\tindexing improvement" }, { "msg_contents": "> On Mon, May 05, 2003 at 09:33:33PM -0700, Josh Berkus wrote:\n> > EXISTS is more flexible than IN; how can you do a 3-column corellation on an\n> > IN clause?\n>\n> It would be nice to add support for multi-column IN..\n>\n> WHERE (a, b, c) IN (SELECT a, b, c ...)\n\nUmm....we DO have that...\n\nChris\n\n", "msg_date": "Tue, 6 May 2003 22:04:39 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Hypothetical suggestions for planner, indexing" }, { "msg_contents": "On Tue, 2003-05-06 at 00:45, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> > The reason that I mention EXISTS is because that's where the lack of \n> > cross-column corellation is most dramatic; the planner seems to estimate a \n> > flat 50% for EXISTS clauses regardless of the content.\n> \n> No \"seems to\" about that one: see src/backend/optimizer/path/clausesel.c\n> \n> \telse if (is_subplan(clause))\n> \t{\n> \t\t/*\n> \t\t * Just for the moment! FIX ME! - vadim 02/04/98\n> \t\t */\n> \t\ts1 = (Selectivity) 0.5;\n> \t}\n> \n> Patches to improve this are welcome ;-). But I'm not at all sure how to\n> write something that would extract a reliable selectivity estimate from\n> a subplan.\n> \n\ngiven that we have so few GUC variables... \n\nwould there be any merit in adding one that would allow folks to change\nthis assumption? \n\nRobert Treat\n\n", "msg_date": "06 May 2003 11:33:41 -0400", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hypothetical suggestions for planner, indexing" }, { "msg_contents": "On Tue, 6 May 2003, Tom Lane wrote:\n\n> > It would be nice to add support for multi-column IN..\n> > WHERE (a, b, c) IN (SELECT a, b, c ...)\n> \n> RTFM...\n\nMaybe he did read the manual:\n\n6.15.3. IN (subquery form)\n\nexpression IN (subquery)\n\nThe right-hand side of this form of IN is a parenthesized subquery, which \nmust return exactly one column.\n\n-- \n/Dennis\n\n", "msg_date": "Tue, 6 May 2003 20:32:01 +0200 (CEST)", "msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Hypothetical suggestions for planner, indexing" }, { "msg_contents": "On Tue, May 06, 2003 at 09:45:07AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > It would be nice to add support for multi-column IN..\n> > WHERE (a, b, c) IN (SELECT a, b, c ...)\n> \n> RTFM...\n\nAs someone pointed out, the documentation says you can't. In this case\nthe docs are wrong (I've added a note).\n\n> > BTW, does postgresql handle IN and EXISTS differently?\n> \n> Yes.\n> \n> > Theoretically if the optimizer was good enough you could transform one\n> > to the other and not worry about it.\n> \n> No. They have different responses to NULLs in the subselect result.\n\nThey appear to operate the same... what's different?\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! 
www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Wed, 7 May 2003 00:08:01 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Hypothetical suggestions for planner,\n\tindexing improvement" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Tue, May 06, 2003 at 09:45:07AM -0400, Tom Lane wrote:\n>> RTFM...\n\n> As someone pointed out, the documentation says you can't. In this case\n> the docs are wrong (I've added a note).\n\nPerhaps you should have read to the end of the section.\n\n>>> BTW, does postgresql handle IN and EXISTS differently?\n>> \n>> Yes.\n>\n> They appear to operate the same... what's different?\n\nSupposing that tab1.col1 contains 1, NULL, 2, then for an outer\ntable row where col2 = 42\n\n\tWHERE outer.col2 IN (SELECT col1 FROM tab1)\n\nwill yield NULL (not FALSE). But\n\n\tWHERE EXISTS(SELECT * FROM tab1 WHERE col1 = outer.col2)\n\nwill yield FALSE (not NULL).\n\nThe distinction doesn't matter at the top level of WHERE, but it\nmatters a lot underneath a NOT ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 07 May 2003 01:20:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Hypothetical suggestions for planner,\n\tindexing improvement" }, { "msg_contents": "On Wed, 7 May 2003, Tom Lane wrote:\n\n> > As someone pointed out, the documentation says you can't. In this case\n> > the docs are wrong (I've added a note).\n> \n> Perhaps you should have read to the end of the section.\n\nOh, you are right (of course). It is documented later on in that section. \n\nI guess it's specified in the SQL spec, but how come a tuple is not an\nexpression? If it had been an expression the first part of that section\nwould still apply, which is why I just read the first part.\n\n-- \n/Dennis\n\n", "msg_date": "Wed, 7 May 2003 09:36:52 +0200 (CEST)", "msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Hypothetical suggestions for planner, indexing" }, { "msg_contents": "> Supposing that tab1.col1 contains 1, NULL, 2, then for an outer\n> table row where col2 = 42\n> \n> \tWHERE outer.col2 IN (SELECT col1 FROM tab1)\n> \n> will yield NULL (not FALSE). But\n> \n> \tWHERE EXISTS(SELECT * FROM tab1 WHERE col1 = outer.col2)\n> \n> will yield FALSE (not NULL).\n> \n> The distinction doesn't matter at the top level of WHERE, but it\n> matters a lot underneath a NOT ...\n \nOK, but even if a true transform can't be done, couldn't they share the\nsame set of code to fetch the data for the subquery? Going back to my\noriginal post, I tend to use IN only in cases where I think the subquery\nwill return a small result-set, and use EXISTS elsewhere. Presumably,\nthe subquery for an IN will only be run once, while EXISTS will be run\nas an inner-loop (I'm guessing here, I could be wrong). It might be\nuseful if the subquery was executed based on how many rows it\nwould/might return.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! 
www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Wed, 7 May 2003 08:36:16 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Hypothetical suggestions for planner,\n\tindexing improvement" }, { "msg_contents": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <[email protected]> writes:\n> I guess it's specified in the SQL spec, but how come a tuple is not an\n> expression? If it had been an expression the first part of that section\n> would still apply, which is why I just read the first part.\n\nWell, if you want to think of \"(42, 'foo')\" as being an expression then\nI suppose it could be considered to be all one syntax. Personally I\nthink that'd be more confusing rather than less so.\n\nAnother alternative is to put both syntax summaries at the top of the\nsubsection, and combine the two textual descriptions into one.\n\nThis is the wrong place for this discussion, though. If you want to\nhave a go at rewriting the section, why not put up a sketch on\npgsql-docs and see if people like it?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 07 May 2003 10:19:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Hypothetical suggestions for planner, indexing " } ]
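A small self-contained illustration of the two points settled above, following Tom Lane's example values (1, NULL, 2 in tab1; the outer value 42 is arbitrary). The multi-column form WHERE (a, b, c) IN (SELECT a, b, c ...) is likewise accepted, as noted in the thread.

    CREATE TEMP TABLE tab1 (col1 int);
    INSERT INTO tab1 VALUES (1);
    INSERT INTO tab1 VALUES (NULL);
    INSERT INTO tab1 VALUES (2);

    -- 42 IN (1, NULL, 2) evaluates to NULL, not FALSE, so under NOT
    -- the condition is NULL and every row is filtered out: count = 0
    SELECT count(*) FROM tab1 WHERE 42 NOT IN (SELECT col1 FROM tab1);

    -- NOT EXISTS only asks whether a matching row exists; the NULL is
    -- irrelevant, so all rows pass the filter: count = 3
    SELECT count(*) FROM tab1
     WHERE NOT EXISTS (SELECT 1 FROM tab1 t2 WHERE t2.col1 = 42);

This is why IN and EXISTS cannot simply be rewritten into each other underneath a NOT, even though they behave the same at the top level of WHERE.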
[ { "msg_contents": "Hello all!\n\nOn PostgreSQL V7.3.2 on TRU64 I recognized the following phenomena\nthat a SELECT using a difference of a timestamp and an interval\nin the WHERE clause does not use the index\nbut using a timestamp without a difference does use the index.\nThe semantic of both SELECT's is equal, i.e., the result is equal.\n\nTherefore, the second way is much faster.\n\nAny ideas?\n\n\nIn detail:\n table:\n\nwetter=# \\d wetter\n Table \"public.wetter\"\n Column | Type | Modifiers\n-----------+--------------------------+-----------\n sensor_id | integer | not null\n epoche | timestamp with time zone | not null\n wert | real | not null\nIndexes: wetter_pkey primary key btree (sensor_id, epoche),\n wetter_epoche_idx btree (epoche),\n wetter_sensor_id_idx btree (sensor_id)\nTriggers: RI_ConstraintTrigger_45702811,\n t_ins_wetter_wetterakt\n\n\n\nSelect not using index:\n-----------------------\nwetter=# explain select * from wetter where epoche between\n'2003-05-06 06:50:54+00'::timestamp-'1 days'::interval\nAND '2003-05-06 04:45:36';\n \n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on wetter (cost=0.00..768644.57 rows=10253528 width=16)\n Filter: ((epoche >= ('2003-05-05 06:50:54'::timestamp without time \nzone)::timestamp with time zone) AND (epoche <= '2003-05-06 \n04:45:36+00'::timestamp with time zone))\n(2 rows)\n\nwetter=#\n\n\n\n\nSelect using the index:\n-----------------------\nexplain select * from wetter where epoche between '2003-05-05 06:50:54' \nAND '2003-05-06 04:45:36';\n \nQUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using wetter_epoche_idx on wetter (cost=0.00..5.45 rows=1 \nwidth=16)\n Index Cond: ((epoche >= '2003-05-05 06:50:54+00'::timestamp with \ntime zone) AND (epoche <= '2003-05-06 04:45:36+00'::timestamp with time \nzone))\n(2 rows)\n\nwetter=#\n\n\n\n\n-- \n\nMit freundlichen Gruessen / With best regards\n Reiner Dassing\n\n", "msg_date": "Tue, 06 May 2003 08:59:43 +0200", "msg_from": "Reiner Dassing <[email protected]>", "msg_from_op": true, "msg_subject": "Select on timestamp-day slower than timestamp alone" }, { "msg_contents": "On Tuesday 06 May 2003 7:59 am, Reiner Dassing wrote:\n> Hello all!\n>\n> On PostgreSQL V7.3.2 on TRU64 I recognized the following phenomena\n> that a SELECT using a difference of a timestamp and an interval\n> in the WHERE clause does not use the index\n> but using a timestamp without a difference does use the index.\n> The semantic of both SELECT's is equal, i.e., the result is equal.\n>\n> Therefore, the second way is much faster.\n>\n> Any ideas?\n\n> Select not using index:\n> -----------------------\n> wetter=# explain select * from wetter where epoche between\n> '2003-05-06 06:50:54+00'::timestamp-'1 days'::interval\n> AND '2003-05-06 04:45:36';\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------\n>----------------------------------------------------------------------------\n>-------------------- Seq Scan on wetter (cost=0.00..768644.57 rows=10253528\n> width=16) Filter: ((epoche >= ('2003-05-05 06:50:54'::timestamp without\n> time zone)::timestamp with time zone) AND (epoche <= '2003-05-06\n> 04:45:36+00'::timestamp with time zone))\n> (2 rows)\n\nWell, the \"why\" is because the 
number of rows recommended is so big \n(rows=10253528) - I'm also puzzled why we get \"timestamp without time zone\". \nDoes an explicit cast to \"with time zone\" help?\n\n-- \n Richard Huxton\n\n", "msg_date": "Tue, 6 May 2003 14:04:24 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select on timestamp-day slower than timestamp alone" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Well, the \"why\" is because the number of rows recommended is so big \n> (rows=10253528) - I'm also puzzled why we get \"timestamp without time zone\". \n\nBecause that's what he specified the constant to be.\n\n> Does an explicit cast to \"with time zone\" help?\n\nWriting the constant as timestamp with time zone would fix it.\nCasting after-the-fact would not.\n\nThe reason: although both \"timestamp minus interval\" and \"timestamptz\nminus interval\" are constant-foldable, timestamp-to-timestamptz\nconversion is not (because it depends on SET TIMEZONE). So the\nplanner has to fall back to a default selectivity estimate. With real\nconstants it is able to derive a better estimate.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 06 May 2003 09:59:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select on timestamp-day slower than timestamp alone " }, { "msg_contents": "Hello Richard!\n\n\nYour proposal to use an explicit cast to \"with time zone\" helps:\n\nexplain\nselect * from wetter where epoche between\n'2003-05-06 06:50:54+00'::timestamp with time zone-'1 days'::interval\nAND '2003-05-06 04:45:36';\n\n \nQUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using wetter_epoche_idx on wetter (cost=0.00..5.45 rows=1 \nwidth=16)\n Index Cond: ((epoche >= '2003-05-05 06:50:54+00'::timestamp with \ntime zone) AND (epoche <= '2003-05-06 04:45:36+00'::timestamp with time \nzone))\n(2 rows)\n\nThe result now is like expected.\n\nThanks for the help.\nBut for your question \"why we get \"timestamp without time zone\".\"\nI have no answer.\n\nReiner\n\n\n> \n>>Select not using index:\n>>-----------------------\n>>wetter=# explain select * from wetter where epoche between\n>>'2003-05-06 06:50:54+00'::timestamp-'1 days'::interval\n>>AND '2003-05-06 04:45:36';\n>>\n>> QUERY PLAN\n>>\n>>---------------------------------------------------------------------------\n>>----------------------------------------------------------------------------\n>>-------------------- Seq Scan on wetter (cost=0.00..768644.57 rows=10253528\n>>width=16) Filter: ((epoche >= ('2003-05-05 06:50:54'::timestamp without\n>>time zone)::timestamp with time zone) AND (epoche <= '2003-05-06\n>>04:45:36+00'::timestamp with time zone))\n>>(2 rows)\n> \n> \n> Well, the \"why\" is because the number of rows recommended is so big \n> (rows=10253528) - I'm also puzzled why we get \"timestamp without time zone\". \n> Does an explicit cast to \"with time zone\" help?\n> \n\n", "msg_date": "Tue, 06 May 2003 16:00:32 +0200", "msg_from": "Reiner Dassing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Select on timestamp-day slower than timestamp alone" } ]
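For reference, the two forms from this thread side by side (same wetter table and epoche column as above). The only difference is the declared type of the left-hand constant, which decides whether the subtraction can be folded into a single timestamptz constant at plan time:

    -- Not folded: plain timestamp constant; its conversion to timestamp
    -- with time zone depends on SET TIMEZONE, so the planner falls back
    -- to a default selectivity estimate and chooses a seq scan
    SELECT * FROM wetter
     WHERE epoche BETWEEN '2003-05-06 06:50:54+00'::timestamp - '1 days'::interval
                      AND '2003-05-06 04:45:36';

    -- Folded: timestamp with time zone constant; the subtraction is
    -- evaluated at plan time and the index on epoche is used
    SELECT * FROM wetter
     WHERE epoche BETWEEN '2003-05-06 06:50:54+00'::timestamp with time zone - '1 days'::interval
                      AND '2003-05-06 04:45:36';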
[ { "msg_contents": "Hi !\n\nI have a database on PostgreSQL 7.2.1 and I have performance's problems with\nsome queries.\nI'm debbuging the query below:\n\nSelect count(*) from blcar\nwhere manide = 3811 and blide = 58090 and bcalupcod = 'MVDUY' and bcalopcod\n= 'LOCAL' and bcapag <> 'P';\n\n From the command prompt of Psql:\n\n QUERY PLAN\n----------------------------------------------------------------------------\n--------------------------------------\n Aggregate (cost=3.03..3.03 rows=1 width=0) (actual time=0.20..0.20 rows=1\nloops=1)\n -> Index Scan using iblsec on blcar (cost=0.00..3.02 rows=1 width=0)\n(actual time=0.19..0.19 rows=0 loops=1)\n Index Cond: ((manide = 3811) AND (blide = 58090))\n Filter: ((bcalupcod = 'MVDUY'::bpchar) AND (bcalopcod =\n'REPRE'::bpchar) AND (bcapag <> 'P'::bpchar))\n Total runtime: 0.30 msec\n(5 rows)\n\n From a file with a SQL sentence. (I execute it this way: \\i filename)\n\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n---------------------------------------\n Aggregate (cost=8277.10..8277.10 rows=1 width=0) (actual\ntime=1273.98..1273.98 rows=1 loops=1)\n -> Seq Scan on blcar (cost=0.00..8277.09 rows=1 width=0) (actual\ntime=1273.96..1273.96 rows=0 loops=1)\n Filter: (((manide)::numeric = 3811::numeric) AND ((blide)::numeric\n= 58090::numeric) AND (bcalupcod = 'MVDUY'::bpchar) AND (bcalopcod =\n'REPRE'::bpchar) AND (bcapag <> 'P'::bpchar))\n Total runtime: 1274.08 msec\n(4 rows)\n\nThe problem is how one understands this duality of execution plans for the\nsame sentence in two situations which are really the same.\nIt's a relevant matter, because I need to solve performance problems\ninvolved with the execution of this sentence from a program, and due to the\nexecution time this query required (according to the logfile of database), I\nunderstand that it is choosing the second plan, when it is more reasonable\nto use the first plan.\n\nThanks\n\n", "msg_date": "Tue, 6 May 2003 09:21:16 -0300", "msg_from": "\"Fabio C. Bon\" <[email protected]>", "msg_from_op": true, "msg_subject": "A query with performance problems." }, { "msg_contents": "\n[Moving to -performance since it's more ontopic there]\n\nOn Tue, 6 May 2003, Fabio C. Bon wrote:\n\n> I have a database on PostgreSQL 7.2.1 and I have performance's problems with\n> some queries.\n> I'm debbuging the query below:\n>\n> Select count(*) from blcar\n> where manide = 3811 and blide = 58090 and bcalupcod = 'MVDUY' and bcalopcod\n> = 'LOCAL' and bcapag <> 'P';\n\nWhat does the schema of the table look like?\n\nIs the SQL query in the file exactly the same text as the above?\n\n> Aggregate (cost=8277.10..8277.10 rows=1 width=0) (actual\n> time=1273.98..1273.98 rows=1 loops=1)\n> -> Seq Scan on blcar (cost=0.00..8277.09 rows=1 width=0) (actual\n> time=1273.96..1273.96 rows=0 loops=1)\n> Filter: (((manide)::numeric = 3811::numeric) AND ((blide)::numeric\n\nIt seems to want to coerce manide and blide to a numeric here, which seems\nodd.\n\n> = 58090::numeric) AND (bcalupcod = 'MVDUY'::bpchar) AND (bcalopcod =\n> 'REPRE'::bpchar) AND (bcapag <> 'P'::bpchar))\n> Total runtime: 1274.08 msec\n> (4 rows)\n\n", "msg_date": "Tue, 6 May 2003 07:39:57 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] A query with performance problems." } ]
[ { "msg_contents": "Achilleus Mantzios kirjutas K, 07.05.2003 kell 19:33:\n> Hi, few days ago, i posted some really wierd (at least to me)\n> situation (maybe a potentian bug) to the performance and bugs list\n> and to some core hacker(s) privately as well,\n> and i got no response.\n> Moreover i asked for some feedback\n> in order to understand/fix the problem myself,\n> and again received no response.\n> \n> What i asked was pretty simple:\n> \"1. Is it possible that the absense of statistics make the planer produce \n> better plans than in the case of statistcs generated with vacuum \n> analyze/analyze?\n\nYes, the planner is not perfect, the statistics are just statistics\n(based on a random sample), etc..\n\nThis question comes up at least once a month on either [PERFORM] or\n[HACKERS], search the mailing lists to get more thorough\ndiscussion/explanation.\n\n> 2. If No, i found a bug,\n\nRather a feature ;-p\n\n> 3. If yes then under what conditions??\n\nif \n\n1) ANALYZE produced skewed data which was worse than default.\n\nor.\n\n2) some costs are way off for your system (try changing them in\npostgresql.conf)\n\n> 4. If no person knows the answer or no hacker wants to dig into the \n> problem then is there a direction i must follow to understand/fix whats \n> going on myself??\"\"\n\nYou can sturt by enabling/disabling various scan methods\n\npsqldb# set enable_seqscan to off;\nSET\n\n\nand see what happens, then adjust the weights in postgresql.conf or use\nsome combination of SETs around critical queries to force the plan you\nlike.\n\n\n------------\nHannu\n\n", "msg_date": "07 May 2003 16:40:06 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": true, "msg_subject": "Re: An unresolved performance problem." }, { "msg_contents": "Achilleus,\n\n> My systems are (rather usual) linux/freebsd and the costs defined (by\n> default) in postgresql.conf worked well for all queries except\n> a cursed query on a cursed table.\n> So i start to believe its an estimation selectivity\n> problem.\n\nWe can probably fix the problem by re-writing the query then; see my previous \nexample this weekend about overdetermining criteria in order to force the use \nof an index.\n\nHow about posting the query and the EXPLAIN ANALYZE results?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Wed, 7 May 2003 08:26:52 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem." }, { "msg_contents": "----- Original Message ----- \nFrom: \"Achilleus Mantzios\" <[email protected]>\nTo: <[email protected]>; <[email protected]>;\n<[email protected]>\nSent: Wednesday, May 07, 2003 6:33 PM\nSubject: [SQL] An unresolved performance problem.\n\n\n>\n> Hi, few days ago, i posted some really wierd (at least to me)\n> situation (maybe a potentian bug) to the performance and bugs list\n> and to some core hacker(s) privately as well,\n> and i got no response.\n\nI seen around a lot of questions are remaining without any reply,\nmay be in this period the guys like Tom Lane are too busy.\n\n> Moreover i asked for some feedback\n> in order to understand/fix the problem myself,\n> and again received no response.\n>\n> What i asked was pretty simple:\n> \"1. Is it possible that the absense of statistics make the planer produce\n> better plans\n> than in the case of statistcs generated with vacuum\n> analyze/analyze?\n> 2. If No, i found a bug,\n> 3. If yes then under what conditions??\n> 4. 
If no person knows the answer or no hacker wants to dig into the\n> problem then is there a direction i must follow to understand/fix whats\n> going on myself??\"\"\n\nCan you give us more informations? Like the table structure, wich kind\nof query are you tring to do and so on...\n\n\nGaetano\n\n", "msg_date": "Wed, 7 May 2003 17:36:29 +0200", "msg_from": "\"Mendola Gaetano\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] An unresolved performance problem." }, { "msg_contents": "On Wed, 7 May 2003, Achilleus Mantzios wrote:\n\n> \n> Hi, few days ago, i posted some really wierd (at least to me)\n> situation (maybe a potentian bug) to the performance and bugs list\n> and to some core hacker(s) privately as well,\n> and i got no response.\n> Moreover i asked for some feedback\n> in order to understand/fix the problem myself,\n> and again received no response.\n> \n> What i asked was pretty simple:\n> \"1. Is it possible that the absense of statistics make the planer produce \n> better plans\n> than in the case of statistcs generated with vacuum \n> analyze/analyze?\n\nOne of the common examples of this happening was posted a few weeks back. \nsomeone was basically doing this:\n\ndelete from table;\nanalyze table;\ninsert into table (1,000,000 times);\n\nthe problem was that the table had fk constraints to another table, and \nthe query planner for the inserts (all 1,000,000 of them) assumed it was \ninserting into a mostly empty table, and therefore used seq scans instead \nof index scans.\n\nIt's not a bug, not quite a feature, just a corner case.\n\n", "msg_date": "Wed, 7 May 2003 09:42:27 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem." }, { "msg_contents": "\nHi, few days ago, i posted some really wierd (at least to me)\nsituation (maybe a potentian bug) to the performance and bugs list\nand to some core hacker(s) privately as well,\nand i got no response.\nMoreover i asked for some feedback\nin order to understand/fix the problem myself,\nand again received no response.\n\nWhat i asked was pretty simple:\n\"1. Is it possible that the absense of statistics make the planer produce \nbetter plans\nthan in the case of statistcs generated with vacuum \nanalyze/analyze?\n2. If No, i found a bug,\n3. If yes then under what conditions??\n4. If no person knows the answer or no hacker wants to dig into the \nproblem then is there a direction i must follow to understand/fix whats \ngoing on myself??\"\"\n\nPretty straight i think.\n\nWell, i stack on step 1.\n\nIt seemed to me that either my question was too naive to deserve some real \ninvestigation (doubtedly), or no one was in a position to comment on \nit (doubtedly), or that it is not considered an interesting case (possible),\nor that some people move all the mail i send to the lists to /dev/null \n(unfortunately possible too).\n\nSo Since i really have stuck to postgresql for over 2 years for both \ntechnical and emotional reasons, i would feel much more confident\nif i would reach step 2 or greater.\n\nThe table i have in question is a critical one in my application\nsince it monitors important plan maintenance data, and i have\nto move on with this problem.\n\nThanx\n\nP.S. 
the www (64.49.215.82) server is down for while.\n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Wed, 7 May 2003 14:33:24 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "An unresolved performance problem." }, { "msg_contents": "Gaetano,\n\n> I seen around a lot of questions are remaining without any reply,\n> may be in this period the guys like Tom Lane are too busy.\n\nYes, they are. Currently the major contributors are working hard to shape up \nboth 7.3.3. and 7.4 (and having a long-running discussion about the due date \nfor 7.4), so they don't have much time for questions.\n\nAnd for my part, I'm too busy with my paying job to answer all the questions \nthat get posted, as I suspect are Stephan and Bruno and several other people \nwho field newbie questions. Given the flood of requests, I have to \nprioritize ... and a question which is missing several crucial details (like \na copy of the query!!!) is going to get answered way later than a question \nwhich provides all the needed information -- if at all.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 7 May 2003 09:57:22 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unanswered Questions WAS: An unresolved performance problem." }, { "msg_contents": "On Wed, 7 May 2003 17:09:17 -0200 (GMT+2), Achilleus Mantzios\n<[email protected]> wrote:\n>I have about 10 indexes on this table, and the \"correct\" one\n>is used only if i do set enable_seqscan to off; and \n>drop all other indexes.\n\nWhat we already have is\n\n|dynacom=# EXPLAIN ANALYZE\n|SELECT count(*)\n| FROM status\n| WHERE assettable='vessels' AND appname='ISM PMS' AND apptblname='items' AND status='warn' AND isvalid AND assetidval=57;\n| \n|QUERY PLAN (fbsd)\n|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| Aggregate (cost=6.02..6.02 rows=1 width=0) (actual time=14.16..14.16 rows=1 loops=1)\n| -> Index Scan using status_all on status (cost=0.00..6.02 rows=1 width=0) (actual time=13.09..13.95 rows=75 loops=1)\n| Index Cond: ((assettable = 'vessels'::character varying) AND (assetidval = 57) AND (appname = 'ISM PMS'::character varying) AND (apptblname = 'items'::character varying) AND (status = 'warn'::character varying))\n| Filter: isvalid\n| Total runtime: 14.40 msec\n|(5 rows)\n| \n|QUERY PLAN (lnx)\n|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n| Aggregate (cost=1346.56..1346.56 rows=1 width=0) (actual time=244.05..244.05 rows=1 loops=1)\n| -> Seq Scan on status (cost=0.00..1345.81 rows=300 width=0) (actual time=0.63..243.93 rows=75 loops=1)\n| Filter: ((assettable = 'vessels'::character varying) AND (appname = 'ISM PMS'::character varying) AND (apptblname = 'items'::character varying) AND (status = 'warn'::character varying) AND isvalid AND (assetidval = 57))\n| Total runtime: 244.12 msec\n|(4 rows)\n\nNow set enable_seqscan to off, 
and show as the EXPLAIN ANALYSE output.\nIf the wrong index is used, remove it and rerun the query. Repeat\nuntil you arrive at the correct index and show us these results, too.\n\n>Otherwise i get either a seq scan or the wrong index.\n| -> Seq Scan on status (cost=0.00..1345.81 rows=300 width=0) (actual time=0.63..243.93 rows=75 loops=1)\n ^^^^\nThis seems strange, given that relpages = 562.\nWhat are your config settings? And what hardware is this running on,\nespecially how much RAM?\n\nServus\n Manfred\n\n", "msg_date": "Wed, 07 May 2003 20:42:46 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem." }, { "msg_contents": "On 7 May 2003, Hannu Krosing wrote:\n\n> Achilleus Mantzios kirjutas K, 07.05.2003 kell 19:33:\n> > Hi, few days ago, i posted some really wierd (at least to me)\n> > situation (maybe a potentian bug) to the performance and bugs list\n> > and to some core hacker(s) privately as well,\n> > and i got no response.\n> > Moreover i asked for some feedback\n> > in order to understand/fix the problem myself,\n> > and again received no response.\n> > \n> > What i asked was pretty simple:\n> > \"1. Is it possible that the absense of statistics make the planer produce \n> > better plans than in the case of statistcs generated with vacuum \n> > analyze/analyze?\n> \n> Yes, the planner is not perfect, the statistics are just statistics\n> (based on a random sample), etc..\n> \n> This question comes up at least once a month on either [PERFORM] or\n> [HACKERS], search the mailing lists to get more thorough\n> discussion/explanation.\n\nOoopss i am i [email protected] newbie \n(up to now i thought -sql was where all the fun takes place :)\n\n> \n> > 2. If No, i found a bug,\n> \n> Rather a feature ;-p\n> \n> > 3. If yes then under what conditions??\n> \n> if \n> \n> 1) ANALYZE produced skewed data which was worse than default.\n> \n> or.\n> \n> 2) some costs are way off for your system (try changing them in\n> postgresql.conf)\n> \n\nMy systems are (rather usual) linux/freebsd and the costs defined (by \ndefault) in postgresql.conf worked well for all queries except\na cursed query on a cursed table.\nSo i start to believe its an estimation selectivity\nproblem.\n\n> > 4. 
If no person knows the answer or no hacker wants to dig into the \n> > problem then is there a direction i must follow to understand/fix whats \n> > going on myself??\"\"\n> \n> You can sturt by enabling/disabling various scan methods\n> \n> psqldb# set enable_seqscan to off;\n> SET\n> \n\nI have about 10 indexes on this table, and the \"correct\" one\nis used only if i do set enable_seqscan to off; and \ndrop all other indexes.\nOtherwise i get either a seq scan or the wrong index.\n\n> \n> and see what happens, then adjust the weights in postgresql.conf or use\n> some combination of SETs around critical queries to force the plan you\n> like.\n> \n\nAlso i played with ALTER TABLE set statistics \nbut could not generate this ideal situation when\nno stats where available (right after a load).\n\nThe problem is that other queries on this table\nneed some indexes.\nI dunno whata do :(\n\n> \n> ------------\n> Hannu\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Wed, 7 May 2003 17:09:17 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem." }, { "msg_contents": "Hi,\n\nMendola Gaetano wrote:\n\n> ...\n>\n>I seen around a lot of questions are remaining without any reply,\n>may be in this period the guys like Tom Lane are too busy. ...\n> \n>\nAnd we should remember that this is still free, open source software - \nso we have no right to claim _any_ support whatsoever.\nSo thanks a lot to the PostgreSQL team for all the hard work that has \nbeen and is being put into this software.\n\n// Bernd vdB\n\n", "msg_date": "Wed, 07 May 2003 21:57:56 +0200", "msg_from": "Bernd von den Brincken <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] An unresolved performance problem." }, { "msg_contents": "Folks,\n\nI suspect that a good number of fairly simple questions aren't being \nanswered because they're either misdirected or because the poster \nhasn't included an \"answerable\" question (one with sufficient \ninformation to answer).\n\nA suggestion to partially counter this, at least for \"slow query\" type \nquestions, has been put forth. If we make it a social norm on the \npg-lists in general to reply off-list to inadequately descriptive \"slow \nquery\" questions with a canned message of helpful guidance, we may be \nable to up the level of \"answerability\" of most questions. Ideally, \nthis would make the questions more transparent, so that more responses \ncan come from folks other than the major contributors.\n\nThoughts? Josh and I have placed a draft at \nhttp://techdocs.postgresql.org/guides/SlowQueryPostingGuidelines\n\nI'd specifically like to hear whether people would suggest more of an \nemphasis on heuristics for self-help in such a message, what other info \nshould be included in a \"good\" slow query question, and people's \nthoughts on the netiquette of the whole idea.\n\nBest,\n\nRandall\n\nOn Wednesday, May 7, 2003, at 12:57 PM, Josh Berkus wrote:\n\n> Gaetano,\n>\n>> I seen around a lot of questions are remaining without any reply,\n>> may be in this period the guys like Tom Lane are too busy.\n>\n> Yes, they are. Currently the major contributors are working hard to \n> shape up\n> both 7.3.3. 
and 7.4 (and having a long-running discussion about the \n> due date\n> for 7.4), so they don't have much time for questions.\n>\n> And for my part, I'm too busy with my paying job to answer all the \n> questions\n> that get posted, as I suspect are Stephan and Bruno and several other \n> people\n> who field newbie questions. Given the flood of requests, I have to\n> prioritize ... and a question which is missing several crucial details \n> (like\n> a copy of the query!!!) is going to get answered way later than a \n> question\n> which provides all the needed information -- if at all.\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to \n> [email protected]\n>\n\n", "msg_date": "Wed, 7 May 2003 19:52:30 -0400", "msg_from": "Randall Lucas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] Unanswered Questions WAS: An unresolved performance\n problem." }, { "msg_contents": "Randall Lucas <[email protected]> writes:\n> I suspect that a good number of fairly simple questions aren't being \n> answered because they're either misdirected or because the poster \n> hasn't included an \"answerable\" question (one with sufficient \n> information to answer).\n\nThat's always been a problem, but it does seem to have been getting\nworse lately.\n\n> A suggestion to partially counter this, at least for \"slow query\" type \n> questions, has been put forth. If we make it a social norm on the \n> pg-lists in general to reply off-list to inadequately descriptive \"slow \n> query\" questions with a canned message of helpful guidance, we may be \n> able to up the level of \"answerability\" of most questions.\n\nThe idea of some canned guidance doesn't seem bad, but I'm not sure if\nit should be off-list or not. If newbies are corrected off-list then\nother newbies who might be lurking, or reading the archives, don't learn\nany better and will make the same mistakes in their turn.\n\nHow about a standard answer of \"you haven't really provided enough info\nfor us to be helpful, please see this-URL for some hints\"? That would\navoid bulking up the list archives with many copies, yet at the same\ntime the archives would provide evidence of the existence of hints...\n\n> Thoughts? Josh and I have placed a draft at \n> http://techdocs.postgresql.org/guides/SlowQueryPostingGuidelines\n\nLooks good, though I concur with Stephan's comment that the table\nschemas aren't optional.\n\nIt might be worth including a checklist of the standard kinds of errors\n(for example, datatype mismatch preventing index usage). Come to think\nof it, that starts to make it look like a FAQ list directed towards\nperformance issues. Maybe we could make this a subsection of the main\nFAQ?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 08 May 2003 00:13:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [SQL] Unanswered Questions WAS: An unresolved\n\tperformance problem." 
}, { "msg_contents": "> > I suspect that a good number of fairly simple questions aren't\n> > being answered because they're either misdirected or because the\n> > poster hasn't included an \"answerable\" question (one with\n> > sufficient information to answer).\n> \n> That's always been a problem, but it does seem to have been getting\n> worse lately.\n\nI hate to point this out, but \"TIP 4\" is getting a bit old and the 6\ntips that we throw out to probably about 40K people about 1-200 times\na day have probably reached saturation. Without looking at the\narchives, I bet anyone a shot of good scotch that, it's probably\npretty infrequent that people don't kill -9 their postmasters.\n\nAny chance we could flush out the TIPs at the bottom to include,\n\"VACUUM ANALYZE your database regularly,\" or \"When reporting a\nproblem, include the output from EXPLAIN [query],\" or \"ANALYZE tables\nbefore examining the output from an EXPLAIN [query],\" or \"Visit [url]\nfor a tutorial on (schemas|triggers|views).\"\n\n-sc\n\n-- \nSean Chittenden\n\n", "msg_date": "Wed, 7 May 2003 21:57:49 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [SQL] Unanswered Questions WAS: An unresolved\n\tperformance problem." }, { "msg_contents": "On Thu, May 08, 2003 at 10:48:52AM -0200, Achilleus Mantzios wrote:\n\n> That is, we have a marginal decrease of the total cost\n> for the index scan when random_page_cost = 1.9,\n> whereas the \"real cost\" in the means of total runtime\n> ranges from 218 msecs (seq scan) to 19 msecs (index scan).\n> (is it sane?)\n\nYou're right that the problem is the poor estimate of the cost of\nthat selection. I recall you mentioning that you'd expanded the\nstatistics on the field, but I don't recall to what. I know that\nunder some circumstances, you _really_ have to increase the stats to\nget a meaningful sample.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 8 May 2003 07:20:20 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem." }, { "msg_contents": "On Wed, 7 May 2003, Bernd von den Brincken wrote:\n\n> Hi,\n> \n> Mendola Gaetano wrote:\n> \n> > ...\n> >\n> >I seen around a lot of questions are remaining without any reply,\n> >may be in this period the guys like Tom Lane are too busy. 
...\n> > \n> >\n> And we should remember that this is still free, open source software - \n> so we have no right to claim _any_ support whatsoever.\n> So thanks a lot to the PostgreSQL team for all the hard work that has \n> been and is being put into this software.\n\nI fully support your statement.\nAlso i must add that the confidense of the longterm/power users\nis am essential element that applies to all software\n(open source included)\n\n> \n> // Bernd vdB\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Thu, 8 May 2003 10:09:47 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] An unresolved performance problem." }, { "msg_contents": "\nAbout the unanswered questions problem:\n\nThere seems to be a trade off between\ndescribing a problem as minimalistically \nas possible so that it gets the chance\nof being read (on one hand) and giving\nthe full details, explain analyze,\npg_class,pg_statistic data (on the other hand),\nin order to be more informational.\nAt the extreme cases: provide a \"query slow\" post\non one hand and provide the whole pg_dump\non the other.\nThe problem is that in the first case\n\"he hasnt given any real info\"\nand in the second case every one is avoiding\nreading 10 pages of data.\nI think i must have missed the \"golden intersection\".\n\nWell now to the point.\n\nThe problem was dealt using a hint\nfrom Mr Kenneth Marshall.\nSetting random_page_cost = 1.9 \nresulted in a smaller cost calculation\nfor the index than the seq scan.\n\nNow the question is:\nWith random_page_cost = 4 (default) \ni get \ndynacom=# EXPLAIN ANALYZE select count(*) from status where \nassettable='vessels' and appname='ISM PMS' and apptblname='items' and \nstatus='warn' and isvalid and assetidval=57;\n \nQUERY PLAN\n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1669.01..1669.01 rows=1 width=0) (actual \ntime=258.45..258.46 rows=1 loops=1)\n -> Seq Scan on status (cost=0.00..1668.62 rows=158 width=0) (actual \ntime=171.26..258.38 rows=42 loops=1)\n Filter: ((assettable = 'vessels'::character varying) AND (appname \n= 'ISM PMS'::character varying) AND (apptblname = 'items'::character \nvarying) AND (status = 'warn'::character varying) AND isvalid AND \n(assetidval = 57))\n Total runtime: 258.52 msec\n(4 rows)\n \ndynacom=#\n\nAnd with random_page_cost = 1.9, i get\ndynacom=# EXPLAIN ANALYZE select count(*) from status where \nassettable='vessels' and appname='ISM PMS' and apptblname='items' and \nstatus='warn' and isvalid and assetidval=57;\n \nQUERY PLAN\n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1650.39..1650.39 rows=1 width=0) (actual \ntime=18.86..18.86 rows=1 loops=1)\n -> Index Scan using status_all on 
status (cost=0.00..1650.04 rows=139 \nwidth=0) (actual time=18.26..18.77 rows=42 loops=1)\n Index Cond: ((assettable = 'vessels'::character varying) AND \n(assetidval = 57) AND (appname = 'ISM PMS'::character varying) AND \n(apptblname = 'items'::character varying) AND (status = 'warn'::character \nvarying))\n Filter: isvalid\n Total runtime: 18.94 msec\n(5 rows)\n \ndynacom=# \n\nThat is, we have a marginal decrease of the total cost\nfor the index scan when random_page_cost = 1.9,\nwhereas the \"real cost\" in the means of total runtime\nranges from 218 msecs (seq scan) to 19 msecs (index scan).\n(is it sane?)\n-----\n(returning to the general -performance posting problem)\nAltho a FAQ with \"please do VACUUM ANALYZE before\nposting to the lists\" is something usefull in general,\nit does not provide enuf info for the users,\nat least for \"corner cases\" (as a fellow pgsql'er\nwrote)\n\nI think in order to stop this undesirable phaenomenon\nof flooding the lists, the best way is to provide\nthe actual algorithms that govern the planer/optimiser,\nin a form of lets say \"advanced documentation\".\n\n(If there is such thing, i am sorry but i wasnt\ntold so by anyone.)\n\nOtherwise there are gonna be unhappy core hackers\n(having to examine each case individually)\nand of course bad performing systems on the users side.\n\nP.S.\n\nOf course there are newbies in postgresql,\nofcourse there are people who think that\n\"support\" is to be taken for granted,\nofcourse there are people with minimal\nprogramming/hacking skills,\nbut i think the average \"power user\"\naltho he didnt get the chance to\nfollow the \"hard core\" hacking path\nin his life, he has a CompScience BSc or MSc,\nand can deal with both complicated algoritmic\nissues and source code reading,\nand morever on the average he likes to\ngive and receive respect.\n\n(not to mention that he is the person who can\n\"spread the word\" based on strong arguments\nand solid ground)\n\nThats my 20 drachmas.\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Thu, 8 May 2003 10:48:52 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem." }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Thu, May 08, 2003 at 10:48:52AM -0200, Achilleus Mantzios wrote:\n>> That is, we have a marginal decrease of the total cost\n>> for the index scan when random_page_cost = 1.9,\n>> whereas the \"real cost\" in the means of total runtime\n>> ranges from 218 msecs (seq scan) to 19 msecs (index scan).\n>> (is it sane?)\n\n> You're right that the problem is the poor estimate of the cost of\n> that selection.\n\nAre the table and index orders the same? Oliver Elphick pointed out\nawhile ago that we're doing a bad job of index order correlation\nestimation for multi-column indexes --- the correlation is taken to\nbe much lower than it should be. But if the correlation is near\nzero anyway then this wouldn't explain Achilleus' problem...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 08 May 2003 10:42:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem. 
" }, { "msg_contents": "On Wed, May 07, 2003 at 09:57:49PM -0700, Sean Chittenden wrote:\n> I hate to point this out, but \"TIP 4\" is getting a bit old and the 6\n> tips that we throw out to probably about 40K people about 1-200\n> times a day have probably reached saturation. Without looking at\n> the archives, I bet anyone a shot of good scotch that, it's probably\n> pretty infrequent that people don't kill -9 their postmasters.\n> \n> Any chance we could flush out the TIPs at the bottom to include,\n> \"VACUUM ANALYZE your database regularly,\" or \"When reporting a\n> problem, include the output from EXPLAIN [query],\" or \"ANALYZE\n> tables before examining the output from an EXPLAIN [query],\" or\n> \"Visit [url] for a tutorial on (schemas|triggers|views).\"\n\nBetter yet, have TIPs that are appropriate to the subscribed\nlist. -performance has different posting guidelines, things to try,\netc. than does -bugs, than does -sql (than does -hackers, than does\n-interfaces, ...).\n\nI don't know how feasible it is to separate them out, but i think it's\nworth looking into.\n\n-johnnnnnnnnnnn\n\n", "msg_date": "Thu, 8 May 2003 09:47:38 -0500", "msg_from": "johnnnnnn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [SQL] Unanswered Questions WAS: An unresolved\n\tperformance problem." }, { "msg_contents": "On Thu, 8 May 2003, johnnnnnn wrote:\n\n> On Wed, May 07, 2003 at 09:57:49PM -0700, Sean Chittenden wrote:\n> > I hate to point this out, but \"TIP 4\" is getting a bit old and the 6\n> > tips that we throw out to probably about 40K people about 1-200\n> > times a day have probably reached saturation. Without looking at\n> > the archives, I bet anyone a shot of good scotch that, it's probably\n> > pretty infrequent that people don't kill -9 their postmasters.\n> > \n> > Any chance we could flush out the TIPs at the bottom to include,\n> > \"VACUUM ANALYZE your database regularly,\" or \"When reporting a\n> > problem, include the output from EXPLAIN [query],\" or \"ANALYZE\n> > tables before examining the output from an EXPLAIN [query],\" or\n> > \"Visit [url] for a tutorial on (schemas|triggers|views).\"\n> \n> Better yet, have TIPs that are appropriate to the subscribed\n> list. -performance has different posting guidelines, things to try,\n> etc. than does -bugs, than does -sql (than does -hackers, than does\n> -interfaces, ...).\n> \n> I don't know how feasible it is to separate them out, but i think it's\n> worth looking into.\n\nAgreed.\n\nAlso, some tips might well cross over, like say, vacuum and analyze \nregularly. Hmmm. Sounds like a job for a relational database :-)\n\n", "msg_date": "Thu, 8 May 2003 10:20:19 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [SQL] Unanswered Questions WAS: An unresolved\n\tperformance" }, { "msg_contents": "Achilleus Mantzios <[email protected]> writes:\n> If so, how can one find the correlation between the ordering\n> of a table and a multicolumn index?\n\nWell, it is surely no better than the correlation of the index's\nfirst column --- what is that?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 09 May 2003 08:31:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem. 
" }, { "msg_contents": "On Thu, 8 May 2003, Tom Lane wrote:\n\n> Andrew Sullivan <[email protected]> writes:\n> > On Thu, May 08, 2003 at 10:48:52AM -0200, Achilleus Mantzios wrote:\n> >> That is, we have a marginal decrease of the total cost\n> >> for the index scan when random_page_cost = 1.9,\n> >> whereas the \"real cost\" in the means of total runtime\n> >> ranges from 218 msecs (seq scan) to 19 msecs (index scan).\n> >> (is it sane?)\n> \n> > You're right that the problem is the poor estimate of the cost of\n> > that selection.\n> \n> Are the table and index orders the same? Oliver Elphick pointed out\n> awhile ago that we're doing a bad job of index order correlation\n> estimation for multi-column indexes --- the correlation is taken to\n> be much lower than it should be. But if the correlation is near\n> zero anyway then this wouldn't explain Achilleus' problem...\n\nPlease correct me if i am wrong. (i think i probably am)\nThe correlation value in pg_statistc for a column refers to the \ncorrelation between\nthe ordering of a table's tuples and the ordering of that column.\n(So it plays some role in determining the execution plan\nif an index exists on that column. Also CLUSTERing a\nsingle-column index on the table makes reordering\nof the table according to that index, that is the ordering of that \ncolumn).\n\nIs that correct??\n\n\nIf so, how can one find the correlation between the ordering\nof a table and a multicolumn index?\n\n\n\n> \n> \t\t\tregards, tom lane\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Fri, 9 May 2003 11:00:30 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem. " }, { "msg_contents": "Achilleus Mantzios <[email protected]> writes:\n> On Fri, 9 May 2003, Tom Lane wrote:\n>> Achilleus Mantzios <[email protected]> writes:\n>>> If so, how can one find the correlation between the ordering\n>>> of a table and a multicolumn index?\n>> \n>> Well, it is surely no better than the correlation of the index's\n>> first column --- what is that?\n\n> it is 1\n\nWell, that's suggestive, isn't it? What about the remaining columns?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 09 May 2003 09:08:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem. " }, { "msg_contents": "Achilleus Mantzios <[email protected]> writes:\n> On Fri, 9 May 2003, Tom Lane wrote:\n>> Well, that's suggestive, isn't it? What about the remaining columns?\n\n> The index is defined as:\n\n> status_all btree (assettable, assetidval, appname, apptblname, status, \n> isvalid)\n\n> And correlations are:\n\n> attname | correlation\n> -------------+-------------\n> assettable | 1\n> assetidval | 0.125902\n> appname | 0.942771\n> apptblname | 0.928761\n> status | 0.443405\n> isvalid | 0.970531\n\nActually, thinking twice about it, I'm not sure if the correlations of\nthe righthand columns mean anything. 
If the table were perfectly\nordered by the index, you'd expect righthand values to cycle through\ntheir range for each lefthand value, and so they'd show low\ncorrelations.\n\nThe fact that most of the columns show high correlation makes me think\nthat they are not independent --- is that right?\n\nBut anyway, I'd say that yes this table is probably quite well ordered\nby the index. You could just visually compare the results of\n\nselect * from tab\n\nselect * from tab\n order by assettable, assetidval, appname, apptblname, status, isvalid\n\nto confirm this.\n\nAnd that tells us where the problem is: the code is estimating a low\nindex correlation where it should be estimating a high one. If you\ndon't mind running a nonstandard version of Postgres, you could try\nmaking btcostestimate() in src/backend/utils/adt/selfuncs.c estimate\nthe indexCorrelation as just varCorrelation, instead of\nvarCorrelation / nKeys. This is doubtless an overcorrection in the\nother direction (which is why it hasn't been done in the official\nsources) but it's probably better than what's there, at least for\nyour purposes.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 09 May 2003 09:30:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem. " }, { "msg_contents": "On Fri, 9 May 2003, Tom Lane wrote:\n\n> Achilleus Mantzios <[email protected]> writes:\n> > If so, how can one find the correlation between the ordering\n> > of a table and a multicolumn index?\n> \n> Well, it is surely no better than the correlation of the index's\n> first column --- what is that?\n\nit is 1\n\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Fri, 9 May 2003 16:11:49 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem. " }, { "msg_contents": "On Fri, 9 May 2003, Tom Lane wrote:\n\n> >> Well, it is surely no better than the correlation of the index's\n> >> first column --- what is that?\n> \n> > it is 1\n> \n> Well, that's suggestive, isn't it? What about the remaining columns?\n\nThe index is defined as:\n\nstatus_all btree (assettable, assetidval, appname, apptblname, status, \nisvalid)\n\nAnd correlations are:\n\n attname | correlation\n-------------+-------------\n assettable | 1\n assetidval | 0.125902\n appname | 0.942771\n apptblname | 0.928761\n status | 0.443405\n isvalid | 0.970531\n\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Fri, 9 May 2003 16:22:23 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem. " }, { "msg_contents": "On Fri, 9 May 2003, Tom Lane wrote:\n\n> Achilleus Mantzios <[email protected]> writes:\n> > On Fri, 9 May 2003, Tom Lane wrote:\n> >> Well, that's suggestive, isn't it? 
What about the remaining columns?\n> \n> > The index is defined as:\n> \n> > status_all btree (assettable, assetidval, appname, apptblname, status, \n> > isvalid)\n> \n> > And correlations are:\n> \n> > attname | correlation\n> > -------------+-------------\n> > assettable | 1\n> > assetidval | 0.125902\n> > appname | 0.942771\n> > apptblname | 0.928761\n> > status | 0.443405\n> > isvalid | 0.970531\n> \n> Actually, thinking twice about it, I'm not sure if the correlations of\n> the righthand columns mean anything. If the table were perfectly\n> ordered by the index, you'd expect righthand values to cycle through\n> their range for each lefthand value, and so they'd show low\n> correlations.\n\nWhen i clustered (on onother system no to spoil the situation)\nCLUSTER status_all on status;\ni got identical results on the order (see below),\nalso i got quite high correlations.\n\n\n> \n> The fact that most of the columns show high correlation makes me think\n> that they are not independent --- is that right?\n\nWell, assettable,appname,apptblname\nhave high frequencies on one value, so they can be\nregarded as constants.\nassetidval, status and isvalid play the most part of the\nselectivity.\n(i have included the first 3 columns in the status_all index for future \nusage)\n\n> \n> But anyway, I'd say that yes this table is probably quite well ordered\n> by the index. You could just visually compare the results of\n> \n> select * from tab\n> \n> select * from tab\n> order by assettable, assetidval, appname, apptblname, status, isvalid\n> \n> to confirm this.\n> \n\nIf the table was ordered by status_all index i would show something like\n attname | correlation\n-------------+-------------\n assettable | 1\n assetidval | 1\n appname | 0.927842\n apptblname | 0.895155\n status | 0.539183\n isvalid | 0.722838\n\nIn the current (production system) situation, visually, i dont see any \ncorrelation between the two.\n\n> And that tells us where the problem is: the code is estimating a low\n> index correlation where it should be estimating a high one. If you\n> don't mind running a nonstandard version of Postgres, you could try\n> making btcostestimate() in src/backend/utils/adt/selfuncs.c estimate\n> the indexCorrelation as just varCorrelation, instead of\n> varCorrelation / nKeys. This is doubtless an overcorrection in the\n> other direction (which is why it hasn't been done in the official\n> sources) but it's probably better than what's there, at least for\n> your purposes.\n> \n\nOn the test system,\nif i cluster the table according to assetidval the optimiser\nuses the index on that column which does a pretty good job.\nEven better, if i revert the table to an ordering according\nto its id (to spoil the previous effect of the CLUSTER command)\nand i set random_page_cost = 2 i get the usage of the better\nstatus_all index.\n\nThis way the correlations seem low, but the expected selectivity\nis either way 83 rows.\n\nAre you suggesting to try the change in src/backend/utils/adt/selfuncs.c\nat this exact situation i am on my test system?? 
(its linux too)\n\nThanx a lot!\n\n> \t\t\tregards, tom lane\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Fri, 9 May 2003 17:13:59 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem. " }, { "msg_contents": "\nI changed \n*indexCorrelation = varCorrelation / nKeys;\nto \n*indexCorrelation = varCorrelation ;\n\nand i got cost=28.88\nand it beats every other index.\n\n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: [email protected]\n [email protected]\n\n", "msg_date": "Fri, 9 May 2003 17:25:12 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: An unresolved performance problem. " } ]
[ { "msg_contents": "Ok, I have two tables (Postgresql 7.3.2 on Debian):\n\n Table \"public.zip\"\n Column | Type | Modifiers\n------------+-----------------------+-----------\n zip | character varying(5) |\n city | character varying(25) |\n county | character varying(30) |\n countyfips | character varying(5) |\n state_full | character varying(30) |\n state | character varying(2) |\n citytype | character(1) |\n zipcodetyp | character(1) |\n areacode | character varying(3) |\n timezone | character varying(10) |\n dst | character(1) |\n latitude | double precision |\n longitude | double precision |\n country | character varying(10) |\nIndexes: zip_idx btree (zip)\n\n Table \"public.client_options\"\n Column | Type | Modifiers\n--------------+--------+-----------\n client_id | bigint | not null\n option_name | text | not null\n option_value | text | not null\nForeign Key constraints: [...omitted...]\n\nI wanted to do the following:\n\nmidas=# explain analyze select * from zip where zip in\n (select option_value from client_options where option_name = 'ZIP_CODE' );\n QUERY PLAN\n---------------------------------------------------------------------------\n Seq Scan on zip (cost=0.00..206467.85 rows=38028 width=112)\n (actual time=58.45..4676.76 rows=8 loops=1)\n Filter: (subplan)\n SubPlan\n -> Seq Scan on client_options (cost=0.00..5.36 rows=3 width=14)\n (actual time=0.02..0.05 rows=3 loops=76056)\n Filter: (option_name = 'ZIP_CODE'::text)\n Total runtime: 4676.87 msec\n\nOr even:\n\nmidas=# explain analyze select * from zip z, client_options c where\nc.option_name = 'ZIP_CODE' and c.option_value = z.zip;\n QUERY PLAN\n---------------------------------------------------------------------------\n Nested Loop (cost=0.00..9915.14 rows=10 width=148)\n (actual time=26.63..2864.01 rows=8 loops=1)\n Join Filter: (\"outer\".option_value = (\"inner\".zip)::text)\n -> Seq Scan on client_options c (cost=0.00..5.36 rows=3 width=36)\n (actual time=0.25..0.34 rows=3 loops=1)\n Filter: (option_name = 'ZIP_CODE'::text)\n -> Seq Scan on zip z (cost=0.00..2352.56 rows=76056 width=112)\n (actual time=0.07..809.19 rows=76056 loops=3)\n Total runtime: 2864.16 msec\n\n\nIf I wanted to do select the zip codes out of the client_options and then\nselect the zipcodes seperately, I would be looking at times of .14 msec\nand 222.82 msec respectively.\n\nOh, and yes, I have done a vacuum analyze.\n\n(the reason I'm trying to join these tables is to get longitude and\nlatitude coordinates to use with the earthdistance <@> operator, it just\ntakes entirely too long)\n\nWhat am I doing wrong?\n\nRyan\n\n", "msg_date": "Wed, 7 May 2003 09:11:49 -0500 (CDT)", "msg_from": "\"Ryan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Yet another 'why does it not use my index' question." }, { "msg_contents": "> On Wed, May 07, 2003 at 09:11:49 -0500,\n> Ryan <[email protected]> wrote:\n>> I wanted to do the following:\n>>\n>> midas=# explain analyze select * from zip where zip in\n>> (select option_value from client_options where option_name =\n>> 'ZIP_CODE' );\n>\n> Until 7.4 comes out IN will be slow and you should use a join to do\n> this.\n>\n>> midas=# explain analyze select * from zip z, client_options c where\n>> c.option_name = 'ZIP_CODE' and c.option_value = z.zip;\n>\n> I think the problem here might be related to option_value being text and\n> zip being char varying. This might prevent an index from being used to\n> do the join.\nHMMMM. I'll have to re-insert that table (it was a dbf2pg job) and change\nthat. 
Any reason why postgres is so picky about varchar/text conversion,\nconsidering they are practally the same thing?\n\nSomething intresting however. If I do this:\nselect * from zip where zip = 98404;\nI get a seq scan, as postgres types it to text.\n\nbut if I do this:\nselect * from zip where zip = '98404';\nPostgres types it as character varying and uses the index.\n\nNot that it would happen any time soon, but it would be nice if explain\nanalyze would tell you why it chose an seq scan on an indexed field.\n(e.g. You should know better than to try an index with a different type!)\n\nRyan\n\n", "msg_date": "Wed, 7 May 2003 09:43:28 -0500 (CDT)", "msg_from": "\"Ryan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Yet another 'why does it not use my index' question." }, { "msg_contents": "On Wed, May 07, 2003 at 09:11:49 -0500,\n Ryan <[email protected]> wrote:\n> I wanted to do the following:\n> \n> midas=# explain analyze select * from zip where zip in\n> (select option_value from client_options where option_name = 'ZIP_CODE' );\n\nUntil 7.4 comes out IN will be slow and you should use a join to do this.\n\n> midas=# explain analyze select * from zip z, client_options c where\n> c.option_name = 'ZIP_CODE' and c.option_value = z.zip;\n\nI think the problem here might be related to option_value being text\nand zip being char varying. This might prevent an index from being used\nto do the join.\n\n", "msg_date": "Wed, 7 May 2003 11:28:15 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another 'why does it not use my index' question." } ]
[ { "msg_contents": "I'm not sure if this a performance question or a sql question really, but\nsince my primarily peeve here is performance, here goes:\n\nI'm trying to write a query which takes the output of a join and shows me\nonly what the items that are in the main join but not in the subselect of\njust one of the tables in the join, using EXCEPT.\n\nThis is a little complicated, so please bear with me.\n\nI have two tables: an event table that logs random events as they come in,\nand a tracking table that keeps a state of events it cares about. In this\nparticular case I'm trying to obtain a list of tracking pkeys for related\nevent data that do not correspond to a certain (other) set of event data.\n\nIdeally, here is what I want:\n\nSELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk EXCEPT (SELECT events.data1,events.data2 FROM\nevents WHERE event.type = 10)\n\nThe problem I have of course is that I get an error regarding trying to use\ndifferent columns for the two queries in EXCEPT. I'm sure someone will\npoint this out, but the following suggestion will not work:\n\nSELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk EXCEPT (SELECT\ntracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk AND event.type = 10)\n\nThat won't work for two reasons... first, there are no matching entries in\nthe tracking table pointing to events where event.type = 10, meaning this\nquery would always return an empty set. And even if there were, I don't\nwant to do the join twice if its not necessary, as the events table is\nliable to be very large.\n\nThe official solution to this I believe would be to just use CORRESPONDING\nBY, but that's not supported by PG (why exactly, oh why!)\n\nSuggestions, anyone? Thanks in advance,\n Lucas.\n\n\n\n\n\n\nI'm not sure if this \na performance question or a sql question really, but since my primarily peeve \nhere is performance, here goes:\n \nI'm trying to write \na query which takes the output of a join and shows me only what the items that \nare in the main join but not in the subselect of just one of the tables in the \njoin, using EXCEPT.\n \nThis is a little \ncomplicated, so please bear with me.  \n \nI have two tables: \nan event table that logs random events as they come in, and a tracking table \nthat keeps a state of events it cares about.  In this particular case I'm \ntrying to obtain a list of tracking pkeys for related event data that do \nnot correspond to a certain (other) set of event data.\n \nIdeally, here is \nwhat I want:\n \nSELECT \ntracking.pk,events.data1,events.data2 FROM tracking,events WHERE \ntracking.event_fk = event.pk EXCEPT (SELECT events.data1,events.data2 \nFROM events WHERE event.type = 10)\n \nThe problem I have \nof course is that I get an error regarding trying to use different columns for \nthe two queries in EXCEPT.  I'm sure someone will point this out, but the \nfollowing suggestion will not work:\n\n \n\nSELECT \ntracking.pk,events.data1,events.data2 FROM tracking,events WHERE \ntracking.event_fk = event.pk EXCEPT (SELECT tracking.pk,events.data1,events.data2 FROM \ntracking,events WHERE tracking.event_fk = event.pk AND \nevent.type = 10)\n \nThat won't work for two reasons... first, \nthere are no matching entries in the tracking table pointing to events where \nevent.type = 10, meaning this query would always return an empty set.  
And \neven if there were, I don't want to do the join twice if its not necessary, as \nthe events table is liable to be very large.\n \nThe official \nsolution to this I believe would be to just use CORRESPONDING BY, but \nthat's not supported by PG (why exactly, oh why!)\n \nSuggestions, \nanyone?  Thanks in advance,\n  \nLucas.", "msg_date": "Wed, 7 May 2003 12:11:46 -0700", "msg_from": "\"Lucas Adamski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Hack around lack of CORRESPONDING BY in EXCEPT?" }, { "msg_contents": "\nOn Wed, 7 May 2003, Lucas Adamski wrote:\n\n> I'm not sure if this a performance question or a sql question really, but\n> since my primarily peeve here is performance, here goes:\n>\n> I'm trying to write a query which takes the output of a join and shows me\n> only what the items that are in the main join but not in the subselect of\n> just one of the tables in the join, using EXCEPT.\n>\n> This is a little complicated, so please bear with me.\n>\n> I have two tables: an event table that logs random events as they come in,\n> and a tracking table that keeps a state of events it cares about. In this\n> particular case I'm trying to obtain a list of tracking pkeys for related\n> event data that do not correspond to a certain (other) set of event data.\n>\n> Ideally, here is what I want:\n>\n> SELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\n> tracking.event_fk = event.pk EXCEPT (SELECT events.data1,events.data2 FROM\n> events WHERE event.type = 10)\n\nMaybe something like (if I'm right in assuming that you want any event\nwhose data1 and data2 match an event having type 10):\n\nselect tracking.pk, e.data1, e.data2 from\n tracking,\n ((select data1,data2 from events) except (select data1,data2 from events\n where event.type=10)) e\nwhere tracking.event_fk=e.pk;\n\n\n> The official solution to this I believe would be to just use CORRESPONDING\n> BY, but that's not supported by PG (why exactly, oh why!)\n\nBecause it's not entry level SQL92 and noone's implemented it yet. :)\n\n", "msg_date": "Wed, 7 May 2003 12:36:42 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" }, { "msg_contents": "\nNot sure if I'm reading your question correctly, but is this what you want?\n\nSELECT t.pk,e.data1,e.data2\nFROM tracking t left outer join events e on t.event_fk = e.pk\nWHERE e.type <> 10\n\nOR\n\nSELECT t.pk,e.data1,e.data2\nFROM tracking t inner join events e on t.event_fk = e.pk\nWHERE e.type <> 10\n\n\n\n\n \n \"Lucas Adamski\" \n <[email protected]> To: \"Postgresql Performance Mailing list (E-mail)\" \n Sent by: <[email protected]> \n pgsql-performance-owner@post cc: \n gresql.org Subject: [PERFORM] Hack around lack of CORRESPONDING BY in EXCEPT? \n \n \n 05/07/2003 12:11 PM \n \n\n\n\n\nI'm not sure if this a performance question or a sql question really, but\nsince my primarily peeve here is performance, here goes:\n\nI'm trying to write a query which takes the output of a join and shows me\nonly what the items that are in the main join but not in the subselect of\njust one of the tables in the join, using EXCEPT.\n\nThis is a little complicated, so please bear with me.\n\nI have two tables: an event table that logs random events as they come in,\nand a tracking table that keeps a state of events it cares about. 
In this\nparticular case I'm trying to obtain a list of tracking pkeys for related\nevent data that do not correspond to a certain (other) set of event data.\n\nIdeally, here is what I want:\n\nSELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk EXCEPT (SELECT events.data1,events.data2 FROM\nevents WHERE event.type = 10)\n\nThe problem I have of course is that I get an error regarding trying to use\ndifferent columns for the two queries in EXCEPT. I'm sure someone will\npoint this out, but the following suggestion will not work:\n\nSELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk EXCEPT (SELECT\ntracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk AND event.type = 10)\n\nThat won't work for two reasons... first, there are no matching entries in\nthe tracking table pointing to events where event.type = 10, meaning this\nquery would always return an empty set. And even if there were, I don't\nwant to do the join twice if its not necessary, as the events table is\nliable to be very large.\n\nThe official solution to this I believe would be to just use CORRESPONDING\nBY, but that's not supported by PG (why exactly, oh why!)\n\nSuggestions, anyone? Thanks in advance,\n Lucas.\n\n", "msg_date": "Wed, 7 May 2003 12:40:19 -0700", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" }, { "msg_contents": "\nOn Wed, 7 May 2003, Lucas Adamski wrote:\n\nOf course my last suggestion won't work since you need to get the event.pk\nfield out. The actual subquery would need to be more complicated and\nprobably involve an IN or EXISTS. :(\n\n", "msg_date": "Wed, 7 May 2003 12:43:29 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" }, { "msg_contents": "On Wed, 7 May 2003 12:11:46 -0700, \"Lucas Adamski\"\n<[email protected]> wrote:\n>I have two tables: an event table that logs random events as they come in,\n>and a tracking table that keeps a state of events it cares about. In this\n>particular case I'm trying to obtain a list of tracking pkeys for related\n>event data that do not correspond to a certain (other) set of event data.\n>\n>Ideally, here is what I want:\n>\n>SELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\n>tracking.event_fk = event.pk EXCEPT (SELECT events.data1,events.data2 FROM\n>events WHERE event.type = 10)\n\nLucas, try this untested query:\n\n\tSELECT tr.pk, ev.data1, ev.data2\n\t FROM tracking tr INNER JOIN events ev\n\t\tON tr.event_fk = ev.pk\n\t WHERE ev.type != 10;\n\n(Should also work with AND instead of WHERE.)\n\n>SELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\n>tracking.event_fk = event.pk EXCEPT (SELECT\n>tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\n>tracking.event_fk = event.pk AND event.type = 10)\n>\n>That won't work for two reasons... first, there are no matching entries in\n>the tracking table pointing to events where event.type = 10, meaning this\n>query would always return an empty set.\n\nI don't understand this. 
If there are no entries with event.type 10,\nthen the subselect returns an empty result set, and <anything> EXCEPT\n<empty> should give the original result?\n\nServus\n Manfred\n\n", "msg_date": "Wed, 07 May 2003 22:22:54 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" }, { "msg_contents": "Stephan,\n\nYup, unfortunately you are correct... I'd need to get the event.pk's out of\nthere somewhere to join with the tracking.event_fk. I can't put the\nevent.pk in the subselects as they don't match, and I would get an empty set\nback.\n\nselect tracking.pk, e.data1, e.data2 from\n tracking,\n ((select data1,data2 from events) except (select data1,data2 from events\n where event.type=10)) e\nwhere tracking.event_fk=e.pk;\n\nI wrote it originally as:\n\nSELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk EXCEPT (SELECT events.data1,events.data2 FROM\nevents WHERE event.type = 10)\n\nbecause each of these subqueries restricts the dataset greatly before doing\nthe join. I've simplified the actual problem (as the real code has a bunch\nof extraneous stuff that makes it even more obtuse), but essentially, the\ntracking table maintains a record of the last record type that was entered.\nThe type is incremented for each batch of events that is loaded. In this\ncase, I'm assuming that the latest batch is type=10 (or 5000, or 100000),\nand the tracking table references a small subset of previous events\n(possibly of types 1-9 in this example). This particular query is supposed\nto return all tracking.pk's that are present in the previous batches (types)\nbut not in the latest batch (10). I didn't mean to make it quite so obtuse,\nsorry. :)\n\nSo in this case I'm getting all of the relevant data for the new entries,\nsubtracting those from the old entries that are referred to by the tracking\nsystem, and returning those outdated tracking.pk's.\n Lucas.\n\n-----Original Message-----\nFrom: Stephan Szabo [mailto:[email protected]]\nSent: Wednesday, May 07, 2003 12:43 PM\nTo: Lucas Adamski\nCc: Postgresql Performance Mailing list (E-mail)\nSubject: Re: [PERFORM] Hack around lack of CORRESPONDING BY in EXCEPT?\n\n\n\nOn Wed, 7 May 2003, Lucas Adamski wrote:\n\nOf course my last suggestion won't work since you need to get the event.pk\nfield out. The actual subquery would need to be more complicated and\nprobably involve an IN or EXISTS. :(\n\n", "msg_date": "Wed, 7 May 2003 15:28:26 -0700", "msg_from": "\"Lucas Adamski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" }, { "msg_contents": "Manfred,\n\nI think what you propose is similar to what Patrick proposed, let me see if\nI can explain below:\n\n> -----Original Message-----\n> From: Manfred Koizar [mailto:[email protected]]\n> Sent: Wednesday, May 07, 2003 1:23 PM\n> To: Lucas Adamski\n> Cc: Postgresql Performance Mailing list (E-mail)\n> Subject: Re: [PERFORM] Hack around lack of CORRESPONDING BY in EXCEPT?\n>\n\n<snip>\n\n> Lucas, try this untested query:\n>\n> \tSELECT tr.pk, ev.data1, ev.data2\n> \t FROM tracking tr INNER JOIN events ev\n> \t\tON tr.event_fk = ev.pk\n> \t WHERE ev.type != 10;\n>\n> (Should also work with AND instead of WHERE.)\n\nThe problem is that it simply removes all events where type != 10, versus\nsubtracting all events from subselect of type 10 where data1 and data2 match\nthose in the main join. 
The goal of the query is to remove all events that\nmatch (matching being defined as both data1 and data2 matching) that are\npresent in events of type 10 and events that are referenced by the tracking\ntable, then return those tracking.pk's for entries that are left over.\n\nIts not required that I join tracking and events in the primary select\nbefore doing the EXCEPT join, but it should make it a bit more efficient.\n\n>\n> >SELECT tracking.pk,events.data1,events.data2 FROM\n> tracking,events WHERE\n> >tracking.event_fk = event.pk EXCEPT (SELECT\n> >tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\n> >tracking.event_fk = event.pk AND event.type = 10)\n> >\n> >That won't work for two reasons... first, there are no\n> matching entries in\n> >the tracking table pointing to events where event.type = 10,\n> meaning this\n> >query would always return an empty set.\n>\n> I don't understand this. If there are no entries with event.type 10,\n> then the subselect returns an empty result set, and <anything> EXCEPT\n> <empty> should give the original result?\n\nIts not that there are no entires with event.type=10, its that there may not\nbe any tracking entires for events of type 10, and if I join them before\ndoing the EXCEPT I will lose them. That's why I have to do the EXCEPT\nsubselect without joining it to the table. Thanks,\n Lucas.\n\n>\n> Servus\n> Manfred\n>\n>\n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n", "msg_date": "Wed, 7 May 2003 15:49:06 -0700", "msg_from": "\"Lucas Adamski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" }, { "msg_contents": "\nOn Wed, 7 May 2003, Lucas Adamski wrote:\n\n> I wrote it originally as:\n>\n> SELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\n> tracking.event_fk = event.pk EXCEPT (SELECT events.data1,events.data2 FROM\n> events WHERE event.type = 10)\n>\n> because each of these subqueries restricts the dataset greatly before doing\n> the join. I've simplified the actual problem (as the real code has a bunch\n> of extraneous stuff that makes it even more obtuse), but essentially, the\n> tracking table maintains a record of the last record type that was entered.\n> The type is incremented for each batch of events that is loaded. In this\n> case, I'm assuming that the latest batch is type=10 (or 5000, or 100000),\n> and the tracking table references a small subset of previous events\n> (possibly of types 1-9 in this example). This particular query is supposed\n> to return all tracking.pk's that are present in the previous batches (types)\n> but not in the latest batch (10). I didn't mean to make it quite so obtuse,\n> sorry. 
:)\n\nMaybe something like nominally like (quickly done so possibly wrong\nagain):\n\n select tracking.pk, events.data1, events.data2 from\n tracking,events where not exists (select * from events e where\n e.type=10 and e.data1=events.data1 and e.data2=events.data2)\n and tracking.event_fk=event.pk\n\nGet all tracking/event combinations, not including those where the data1/2\nmatches that of an event with type 10.\n\nThat might give dups if there are multiple events rows with that pk for\ndifferent types (but not 10).\n\n", "msg_date": "Wed, 7 May 2003 15:58:51 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" }, { "msg_contents": "On Wed, 7 May 2003 15:49:06 -0700, \"Lucas Adamski\"\n<[email protected]> wrote:\n>The problem is that it simply removes all events where type != 10, versus\n>subtracting all events from subselect of type 10 where data1 and data2 match\n>those in the main join.\n\nYes, I realized it when I read Stephan's comment immediately after I\nhad sent my mail. Should have read your requirements more thoroughly.\nSorry for the noise ...\n\nServus\n Manfred\n\n", "msg_date": "Thu, 08 May 2003 01:54:04 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" }, { "msg_contents": "Stephan,\n\nBingo! That worked perfectly, thank you! I was considering something like\nthat, but couldn't figure out the syntax offhand to join two events tables\nin that fashion. Didn't realize you could alias a table as well! Thanks\nagain,\n Lucas.\n\n> -----Original Message-----\n> From: Stephan Szabo [mailto:[email protected]]\n> Sent: Wednesday, May 07, 2003 3:59 PM\n> To: Lucas Adamski\n> Cc: Postgresql Performance Mailing list (E-mail)\n> Subject: Re: [PERFORM] Hack around lack of CORRESPONDING BY in EXCEPT?\n>\n>\n>\n> On Wed, 7 May 2003, Lucas Adamski wrote:\n>\n> > I wrote it originally as:\n> >\n> > SELECT tracking.pk,events.data1,events.data2 FROM\n> tracking,events WHERE\n> > tracking.event_fk = event.pk EXCEPT (SELECT\n> events.data1,events.data2 FROM\n> > events WHERE event.type = 10)\n> >\n> > because each of these subqueries restricts the dataset\n> greatly before doing\n> > the join. I've simplified the actual problem (as the real\n> code has a bunch\n> > of extraneous stuff that makes it even more obtuse), but\n> essentially, the\n> > tracking table maintains a record of the last record type\n> that was entered.\n> > The type is incremented for each batch of events that is\n> loaded. In this\n> > case, I'm assuming that the latest batch is type=10 (or\n> 5000, or 100000),\n> > and the tracking table references a small subset of previous events\n> > (possibly of types 1-9 in this example). This particular\n> query is supposed\n> > to return all tracking.pk's that are present in the\n> previous batches (types)\n> > but not in the latest batch (10). I didn't mean to make it\n> quite so obtuse,\n> > sorry. 
:)\n>\n> Maybe something like nominally like (quickly done so possibly wrong\n> again):\n>\n> select tracking.pk, events.data1, events.data2 from\n> tracking,events where not exists (select * from events e where\n> e.type=10 and e.data1=events.data1 and e.data2=events.data2)\n> and tracking.event_fk=event.pk\n>\n> Get all tracking/event combinations, not including those\n> where the data1/2\n> matches that of an event with type 10.\n>\n> That might give dups if there are multiple events rows with\n> that pk for\n> different types (but not 10).\n>\n>\n\n", "msg_date": "Mon, 12 May 2003 00:29:50 -0700", "msg_from": "\"Lucas Adamski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" } ]
[ { "msg_contents": "Patrick,\n\nI don't think that wouldn't quite work unfortunately, as I'm actually trying\nto filter them out based upon the values in data1 and data2. I'm using the\ndata in set 2 (data1,data2 from events where type=10) to remove rows from\nset 1 (join between events and tracking table) where set1.data1=set2.data1\nand set1.data2=set2.data2, and returning the tracking id's for any rows left\nin set 1 (that were not in set 2). I probably gave a better explaination in\nmy response to Stephan. In the case below, I would simply get all events\nwhere type<>10 from the join, regardless of whether they matched the data1\nand data2 for all type=10. Thanks,\n Lucas.\n\n-----Original Message-----\nFrom: Patrick Hatcher [mailto:[email protected]]\nSent: Wednesday, May 07, 2003 12:40 PM\nTo: [email protected]\nCc: Postgresql Performance Mailing list (E-mail);\[email protected]\nSubject: Re: [PERFORM] Hack around lack of CORRESPONDING BY in EXCEPT?\n\n\n\n\n\n\n\nNot sure if I'm reading your question correctly, but is this what you want?\n\nSELECT t.pk,e.data1,e.data2\nFROM tracking t left outer join events e on t.event_fk = e.pk\nWHERE e.type <> 10\n\nOR\n\nSELECT t.pk,e.data1,e.data2\nFROM tracking t inner join events e on t.event_fk = e.pk\nWHERE e.type <> 10\n\n\n\n\n\n \"Lucas Adamski\"\n <[email protected]> To: \"Postgresql\nPerformance Mailing list (E-mail)\"\n Sent by:\n<[email protected]>\n pgsql-performance-owner@post cc:\n gresql.org Subject:\n[PERFORM] Hack around lack of CORRESPONDING BY in EXCEPT?\n\n\n 05/07/2003 12:11 PM\n\n\n\n\n\nI'm not sure if this a performance question or a sql question really, but\nsince my primarily peeve here is performance, here goes:\n\nI'm trying to write a query which takes the output of a join and shows me\nonly what the items that are in the main join but not in the subselect of\njust one of the tables in the join, using EXCEPT.\n\nThis is a little complicated, so please bear with me.\n\nI have two tables: an event table that logs random events as they come in,\nand a tracking table that keeps a state of events it cares about. In this\nparticular case I'm trying to obtain a list of tracking pkeys for related\nevent data that do not correspond to a certain (other) set of event data.\n\nIdeally, here is what I want:\n\nSELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk EXCEPT (SELECT events.data1,events.data2 FROM\nevents WHERE event.type = 10)\n\nThe problem I have of course is that I get an error regarding trying to use\ndifferent columns for the two queries in EXCEPT. I'm sure someone will\npoint this out, but the following suggestion will not work:\n\nSELECT tracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk EXCEPT (SELECT\ntracking.pk,events.data1,events.data2 FROM tracking,events WHERE\ntracking.event_fk = event.pk AND event.type = 10)\n\nThat won't work for two reasons... first, there are no matching entries in\nthe tracking table pointing to events where event.type = 10, meaning this\nquery would always return an empty set. And even if there were, I don't\nwant to do the join twice if its not necessary, as the events table is\nliable to be very large.\n\nThe official solution to this I believe would be to just use CORRESPONDING\nBY, but that's not supported by PG (why exactly, oh why!)\n\nSuggestions, anyone? 
Thanks in advance,\n Lucas.\n\n", "msg_date": "Wed, 7 May 2003 15:37:30 -0700", "msg_from": "\"Lucas Adamski\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hack around lack of CORRESPONDING BY in EXCEPT?" } ]
[ { "msg_contents": "I hope this hasn't been answered before, I've looked at the docs here:\n http://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=index.html\n\nAnyway, I've found a (bug|feature|standard?) with type casting and index usage.\n\nI've got a table with a column that's a timestamp with time zone. This column is indexed. If I issue the \"normal\" query of: \n\n SELECT count(*) FROM foo WHERE bar > '2003-05-05':;timestamp\n\nI get the following EXPLAIN ANALYZE output:\n\nurldb=> explain select count(*) from foo where bar > '2003-05-05'::timestamp;\n QUERY PLAN \n------------------------------------------------------------------------\n Aggregate (cost=89960.75..89960.75 rows=1 width=0) (actual time=\n 56706.58..56706.58 rows=1 loops=1)\n -> Seq Scan on urlinfo (cost=0.00..87229.45 rows=1092521 width=0) (actual \n time=25.37..56537.86 rows=27490 loops=1)\n Filter: (ratedon > ('2003-05-05 00:00:00'::timestamp without time \n zone)::timestamp with time zone)\n Total runtime: 56706.67 msec\n\nSo it seems that the type conversion is killing the use of the index, even though the type conversion has to happen for the condition to be tested.\n\nIf I change this query slightly, by casting to timestamptz, I get the following EXPLAIN ANALYZE output:\n\n QUERY PLAN \n-------------------------------------------------------------------------\n Aggregate (cost=38609.70..38609.70 rows=1 width=0) (actual time=547.58..547.58 \n rows=1 loops=1)\n -> Index Scan using urlinfo_on on urlinfo (cost=0.00..38578.97 rows=12295 \n width=0) (actual time=0.18..381.95 rows=27490 loops=1)\n Index Cond: (ratedon > '2003-05-05 00:00:00-07'::timestamp with time \n zone)\n Total runtime: 548.17 msec\n\nThat's much better! Is this the way it's supposed to work?\n\n--------------------------\nDavid Olbersen \niGuard Engineer\n11415 West Bernardo Court \nSan Diego, CA 92127 \n1-858-676-2277 x2152\n\n", "msg_date": "Thu, 8 May 2003 08:36:27 -0700", "msg_from": "\"David Olbersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Type casting and indexes" }, { "msg_contents": "On Thu, 8 May 2003, David Olbersen wrote:\n\n> Anyway, I've found a (bug|feature|standard?) with type casting and index usage.\n>\n> I've got a table with a column that's a timestamp with time zone. This\n> column is indexed. 
If I issue the \"normal\" query of:\n>\n> SELECT count(*) FROM foo WHERE bar > '2003-05-05':;timestamp\n>\n> I get the following EXPLAIN ANALYZE output:\n>\n> urldb=> explain select count(*) from foo where bar > '2003-05-05'::timestamp;\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Aggregate (cost=89960.75..89960.75 rows=1 width=0) (actual time=\n> 56706.58..56706.58 rows=1 loops=1)\n> -> Seq Scan on urlinfo (cost=0.00..87229.45 rows=1092521 width=0) (actual\n> time=25.37..56537.86 rows=27490 loops=1)\n> Filter: (ratedon > ('2003-05-05 00:00:00'::timestamp without time\n> zone)::timestamp with time zone)\n> Total runtime: 56706.67 msec\n>\n> So it seems that the type conversion is killing the use of the index,\n> even though the type conversion has to happen for the condition to be\n> tested.\n\nIIRC, timestamp->timestamptz is not considered to give a constant value\n(ie, is not stable) probably since it depends on timezone settings which\ncould be changed (for example by a function) during the query, so for each\nrow the conversion from '2003-05-05 00:00:00'::timestamp without time zone\nto a timestamp with time zone can potentially give a different answer.\n\n", "msg_date": "Thu, 8 May 2003 09:13:29 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Type casting and indexes" }, { "msg_contents": "\"David Olbersen\" <[email protected]> writes:\n> So it seems that the type conversion is killing the use of the index, even though the type conversion has to happen for the condition to be tested.\n\nSeems like I just answered this yesterday ;-)\n\nNote the difference in the number of estimated rows in the two explains.\nThe reason is that the timestamptz conversion is not a constant and so\nthe planner can't get a good estimate of the number of rows that will\nsatisfy it. (And the reason it's not a constant is that it depends on\nSET TIMEZONE.)\n\nBottom line: declare the constant correctly. Or at least don't\ngratuitously cast it to the wrong thing.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 08 May 2003 12:25:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Type casting and indexes " }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> On Thu, 8 May 2003, David Olbersen wrote:\n>> So it seems that the type conversion is killing the use of the index,\n>> even though the type conversion has to happen for the condition to be\n>> tested.\n\n> IIRC, timestamp->timestamptz is not considered to give a constant value\n> (ie, is not stable)\n\nNo: it is stable, but not immutable, because it depends on SET TIMEZONE.\n(Our policy on those is if you change one mid-query, it's unspecified\nwhether the query will notice or not.) So the query is potentially\nindexable.\n\nThe problem here is that instead of seeing a constant, the planner sees\na nonconstant function invocation on the right side of '>', and so it\nhas to fall back to a default selectivity estimate instead of being able\nto extract a reasonable estimate from pg_statistic. The default\nestimate is high enough to discourage an indexscan ...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Thu, 08 May 2003 18:35:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Type casting and indexes " } ]
[ { "msg_contents": "I have realtime data flowing at a rate of 500, 512 byte packets per second.\nI want to log the info in a database table with two other columns, one for a\ntimestamp and one for a name of the packet. The max rate I can achieve is\n350 inserts per second on a sun blade 2000. The inserts are grouped in a\ntransaction and I commit every 1200 records. I am storing the binary data\nin a bytea. I am using the libpq conversion function. Not sure if that is\nslowing me down. But I think it is the insert not the conversion.\n\nAny thoughts on how to achive this goal?\n\n", "msg_date": "Sat, 10 May 2003 11:25:16 -0400", "msg_from": "\"Adam Siegel\" <[email protected]>", "msg_from_op": true, "msg_subject": "realtime data inserts" }, { "msg_contents": "\"Adam Siegel\" <[email protected]> writes:\n> I have realtime data flowing at a rate of 500, 512 byte packets per second.\n> I want to log the info in a database table with two other columns, one for a\n> timestamp and one for a name of the packet. The max rate I can achieve is\n> 350 inserts per second on a sun blade 2000. The inserts are grouped in a\n> transaction and I commit every 1200 records.\n\nHave you thought about using COPY?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 10 May 2003 12:00:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts " }, { "msg_contents": "Had the same problem recently...\n\nFormat your data like a pg text dump into a file and then...\n\nCOPY <tablename> (a,b,c) FROM stdin;\n1 2 3\n4 5 6\n\\.\n\npsql <yourdatabase < dumpfile.sql\n\nI've achieved thousands of rows per seconds with this method.\n\n- Ericson Smith\[email protected]\nhttp://www.did-it.com\n\nAdam Siegel wrote:\n\n>I have realtime data flowing at a rate of 500, 512 byte packets per second.\n>I want to log the info in a database table with two other columns, one for a\n>timestamp and one for a name of the packet. The max rate I can achieve is\n>350 inserts per second on a sun blade 2000. The inserts are grouped in a\n>transaction and I commit every 1200 records. I am storing the binary data\n>in a bytea. I am using the libpq conversion function. Not sure if that is\n>slowing me down. But I think it is the insert not the conversion.\n>\n>Any thoughts on how to achive this goal?\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n", "msg_date": "Sat, 10 May 2003 12:02:18 -0400", "msg_from": "Ericson Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" }, { "msg_contents": "Are you binding your insert? IE:\n\nprepare statement INSERT INTO blah VALUES (?, ?, ?);\n\nexecute statement (a, b, c)\n\nInstead of just \"INSERT INTO blah VALUES(a, b, c)\"\n\n\nOn Sat, May 10, 2003 at 11:25:16AM -0400, Adam Siegel wrote:\n> I have realtime data flowing at a rate of 500, 512 byte packets per second.\n> I want to log the info in a database table with two other columns, one for a\n> timestamp and one for a name of the packet. The max rate I can achieve is\n> 350 inserts per second on a sun blade 2000. The inserts are grouped in a\n> transaction and I commit every 1200 records. I am storing the binary data\n> in a bytea. I am using the libpq conversion function. Not sure if that is\n> slowing me down. But I think it is the insert not the conversion.\n> \n> Any thoughts on how to achive this goal?\n \n\n-- \nJim C. Nasby (aka Decibel!) 
[email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Sat, 10 May 2003 12:08:33 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" }, { "msg_contents": "On Sat, 2003-05-10 at 11:00, Tom Lane wrote:\n> \"Adam Siegel\" <[email protected]> writes:\n> > I have realtime data flowing at a rate of 500, 512 byte packets per second.\n> > I want to log the info in a database table with two other columns, one for a\n> > timestamp and one for a name of the packet. The max rate I can achieve is\n> > 350 inserts per second on a sun blade 2000. The inserts are grouped in a\n> > transaction and I commit every 1200 records.\n> \n> Have you thought about using COPY?\n\nGenerate a temporary file, and then system(\"COPY /tmp/foobar ...\") ?\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| The purpose of the military isn't to pay your college tuition |\n| or give you a little extra income; it's to \"kill people and |\n| break things\". Surprisingly, not everyone understands that. |\n+---------------------------------------------------------------+\n\n", "msg_date": "10 May 2003 13:31:30 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" }, { "msg_contents": "Ron Johnson <[email protected]> writes:\n> On Sat, 2003-05-10 at 11:00, Tom Lane wrote:\n>> Have you thought about using COPY?\n\n> Generate a temporary file, and then system(\"COPY /tmp/foobar ...\") ?\n\nNo, copy from stdin. No need for a temp file.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sat, 10 May 2003 22:46:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts " }, { "msg_contents": "On Sat, 2003-05-10 at 21:46, Tom Lane wrote:\n> Ron Johnson <[email protected]> writes:\n> > On Sat, 2003-05-10 at 11:00, Tom Lane wrote:\n> >> Have you thought about using COPY?\n> \n> > Generate a temporary file, and then system(\"COPY /tmp/foobar ...\") ?\n> \n> No, copy from stdin. No need for a temp file.\n\nBut wouldn't that only work if the input stream is acceptable to\nCOPY ?\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| The purpose of the military isn't to pay your college tuition |\n| or give you a little extra income; it's to \"kill people and |\n| break things\". Surprisingly, not everyone understands that. |\n+---------------------------------------------------------------+\n\n", "msg_date": "11 May 2003 11:54:50 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" }, { "msg_contents": "Ron Johnson <[email protected]> writes:\n\n> On Sat, 2003-05-10 at 21:46, Tom Lane wrote:\n> > Ron Johnson <[email protected]> writes:\n> > > On Sat, 2003-05-10 at 11:00, Tom Lane wrote:\n> > >> Have you thought about using COPY?\n> > \n> > > Generate a temporary file, and then system(\"COPY /tmp/foobar ...\") ?\n> > \n> > No, copy from stdin. 
No need for a temp file.\n> \n> But wouldn't that only work if the input stream is acceptable to\n> COPY ?\n\nYes, but you could always pipe it through a script or C program to\nmake it so...\n\n-Doug\n\n", "msg_date": "11 May 2003 12:58:43 -0400", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" }, { "msg_contents": "\nThe copy from method (PQputline) allows me to achieve around 1000 inserts \nper second. \n\n\nOn Sat, 10 May 2003, Jim C. Nasby wrote:\n\n> Are you binding your insert? IE:\n> \n> prepare statement INSERT INTO blah VALUES (?, ?, ?);\n> \n> execute statement (a, b, c)\n> \n> Instead of just \"INSERT INTO blah VALUES(a, b, c)\"\n> \n> \n> On Sat, May 10, 2003 at 11:25:16AM -0400, Adam Siegel wrote:\n> > I have realtime data flowing at a rate of 500, 512 byte packets per second.\n> > I want to log the info in a database table with two other columns, one for a\n> > timestamp and one for a name of the packet. The max rate I can achieve is\n> > 350 inserts per second on a sun blade 2000. The inserts are grouped in a\n> > transaction and I commit every 1200 records. I am storing the binary data\n> > in a bytea. I am using the libpq conversion function. Not sure if that is\n> > slowing me down. But I think it is the insert not the conversion.\n> > \n> > Any thoughts on how to achive this goal?\n> \n> \n> \n\n", "msg_date": "Mon, 12 May 2003 10:51:17 -0400 (EDT)", "msg_from": "Adam Siegel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" }, { "msg_contents": "Depends - we don't know enough about your needs. Some questions:\n\nIs this constant data or just capturing a burst?\n\nAre you feeding it through one connection or several in parallel?\n\nDid you tune your memory configs in postgresql.conf or are they still at the \nminimalized defaults?\n\nHow soon does the data need to be available for query? (Obviously there will \nbe up to a 1200 record delay just due to the transaction.)\n\nWhat generates the timestamp? Ie. is it an insert into foo values (now(), \npacketname, data) or is the app providing the timestamp?\n\nMore info about the app will help.\n\nCheers,\nSteve\n\n\nOn Saturday 10 May 2003 8:25 am, Adam Siegel wrote:\n> I have realtime data flowing at a rate of 500, 512 byte packets per second.\n> I want to log the info in a database table with two other columns, one for\n> a timestamp and one for a name of the packet. The max rate I can achieve\n> is 350 inserts per second on a sun blade 2000. The inserts are grouped in\n> a transaction and I commit every 1200 records. I am storing the binary\n> data in a bytea. I am using the libpq conversion function. Not sure if\n> that is slowing me down. But I think it is the insert not the conversion.\n>\n> Any thoughts on how to achive this goal?\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Mon, 12 May 2003 08:56:02 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" }, { "msg_contents": "Doug McNaught wrote:\n> Ron Johnson <[email protected]> writes:\n> \n> \n>>On Sat, 2003-05-10 at 21:46, Tom Lane wrote:\n>>\n>>>Ron Johnson <[email protected]> writes:\n>>>\n>>>>On Sat, 2003-05-10 at 11:00, Tom Lane wrote:\n>>>>\n>>>>>Have you thought about using COPY?\n>>>\n>>>>Generate a temporary file, and then system(\"COPY /tmp/foobar ...\") ?\n>>>\n>>>No, copy from stdin. 
No need for a temp file.\n>>\n>>But wouldn't that only work if the input stream is acceptable to\n>>COPY ?\n> \n> \n> Yes, but you could always pipe it through a script or C program to\n> make it so...\n\nlets say I have an about 1kb/s continuus datastream comming in for many \nhours and I'd like to store this data in my db using COPY table FROM stdin.\n\nAt what time should I COMMIT or close the stream to feed the database \nand COPY FROM again?\n\n\n\n", "msg_date": "Fri, 16 May 2003 22:27:42 +0200", "msg_from": "\"alex b.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" }, { "msg_contents": "You probably want to have a process that constantly stores the data in a\ntext file. Every \"n\" minutes, you will cause the logger to rotate the\ntext file, then process that batch. \n\nOver here, we are able to dump around 5,000 records per second in one of\nour tables using that methodology.\n\n- Ericson Smith\[email protected]\n\nOn Fri, 2003-05-16 at 16:27, alex b. wrote:\n> Doug McNaught wrote:\n> > Ron Johnson <[email protected]> writes:\n> > \n> > \n> >>On Sat, 2003-05-10 at 21:46, Tom Lane wrote:\n> >>\n> >>>Ron Johnson <[email protected]> writes:\n> >>>\n> >>>>On Sat, 2003-05-10 at 11:00, Tom Lane wrote:\n> >>>>\n> >>>>>Have you thought about using COPY?\n> >>>\n> >>>>Generate a temporary file, and then system(\"COPY /tmp/foobar ...\") ?\n> >>>\n> >>>No, copy from stdin. No need for a temp file.\n> >>\n> >>But wouldn't that only work if the input stream is acceptable to\n> >>COPY ?\n> > \n> > \n> > Yes, but you could always pipe it through a script or C program to\n> > make it so...\n> \n> lets say I have an about 1kb/s continuus datastream comming in for many \n> hours and I'd like to store this data in my db using COPY table FROM stdin.\n> \n> At what time should I COMMIT or close the stream to feed the database \n> and COPY FROM again?\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n-- \nEricson Smith <[email protected]>\n\n", "msg_date": "16 May 2003 16:33:55 -0400", "msg_from": "Ericson Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" }, { "msg_contents": "On Fri, 2003-05-16 at 15:33, Ericson Smith wrote:\n> You probably want to have a process that constantly stores the data in a\n> text file. Every \"n\" minutes, you will cause the logger to rotate the\n> text file, then process that batch. \n\nDoes the logger spawn the DB writer?\n\n> Over here, we are able to dump around 5,000 records per second in one of\n> our tables using that methodology.\n> \n> - Ericson Smith\n> [email protected]\n> \n> On Fri, 2003-05-16 at 16:27, alex b. wrote:\n> > Doug McNaught wrote:\n> > > Ron Johnson <[email protected]> writes:\n> > > \n> > > \n> > >>On Sat, 2003-05-10 at 21:46, Tom Lane wrote:\n> > >>\n> > >>>Ron Johnson <[email protected]> writes:\n> > >>>\n> > >>>>On Sat, 2003-05-10 at 11:00, Tom Lane wrote:\n> > >>>>\n> > >>>>>Have you thought about using COPY?\n> > >>>\n> > >>>>Generate a temporary file, and then system(\"COPY /tmp/foobar ...\") ?\n> > >>>\n> > >>>No, copy from stdin. 
No need for a temp file.\n> > >>\n> > >>But wouldn't that only work if the input stream is acceptable to\n> > >>COPY ?\n> > > \n> > > \n> > > Yes, but you could always pipe it through a script or C program to\n> > > make it so...\n> > \n> > lets say I have an about 1kb/s continuus datastream comming in for many \n> > hours and I'd like to store this data in my db using COPY table FROM stdin.\n> > \n> > At what time should I COMMIT or close the stream to feed the database \n> > and COPY FROM again?\n> > \n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/docs/faqs/FAQ.html\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:[email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| The purpose of the military isn't to pay your college tuition |\n| or give you a little extra income; it's to \"kill people and |\n| break things\". Surprisingly, not everyone understands that. |\n+---------------------------------------------------------------+\n\n", "msg_date": "16 May 2003 17:29:05 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: realtime data inserts" } ]
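To make the COPY approach discussed above concrete, a minimal sketch follows. The column layout mirrors the original post (timestamp, packet name, bytea payload), but every name below is invented for illustration and nothing here comes from the posters' actual systems.

CREATE TABLE packet_log (
    recv_time  timestamptz,
    pkt_name   text,
    pkt_data   bytea
);

-- The capture or batch process streams rows instead of issuing INSERTs;
-- data lines follow as tab-separated text, terminated by a line containing \.
-- Binary payloads must be backslash-escaped for COPY's text format.
COPY packet_log FROM STDIN;

-- For the rotate-a-file variant described above, load each closed batch
-- file in one shot from psql (which reads the file client-side):
-- \copy packet_log from 'packets_batch_0001.dat'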
[ { "msg_contents": "Hi,\n\nI have created a table whit some indexes. I analize\nthe query of this table and never use index.\n\nAfter this, I create a more simplistic table with two\ncolumns and one index and the query uses the index.\n\nLook at this:\n\npfc=# \\d document\n Table\n\"public.document\"\n Column | Type | \n Modifiers\n------------+--------------------------+----------------------------------------------------\n codi | integer | not null\ndefault nextval('seq_document'::text)\n nom | character varying(32) | not null\n descripcio | text |\n formulari | integer |\n fitxer | character varying(32) |\n tamany | integer | default -1\n data | timestamp with time zone | default\n('now'::text)::timestamp(6) with time zone\nIndexes: document_pkey primary key btree (codi),\n ind_doc1 btree (codi),\n ind_document btree (formulari)\nTriggers: RI_ConstraintTrigger_19414,\n RI_ConstraintTrigger_19418,\n RI_ConstraintTrigger_19419,\n actualitzaritemcercadocument,\n altaitemcercadocument,\n baixaitemcercadocument,\n eliminaracldocument,\n eliminaravaluaciodocument\n\npfc=# explain select * from document where codi=2;\n QUERY PLAN\n----------------------------------------------------------\n Seq Scan on document (cost=0.00..1.19 rows=1\nwidth=120)\n Filter: (codi = 2)\n(2 rows)\n\nThis query must use index document_pkey but explain\ntells us that the query does a Sequencial scan on\ntable document.\n\nLook at this simplistic case:\n\npfc=# \\d prova\n Table \"public.prova\"\n Column | Type | Modifiers\n--------+-----------------------+-----------\n codi | integer | not null\n nom | character varying(30) |\nIndexes: prova_pkey primary key btree (codi)\n\npfc=# explain select * from prova where codi=1234;\n QUERY PLAN\n-------------------------------------------------------------------------\n Index Scan using prova_pkey on prova \n(cost=0.00..5.99 rows=1 width=37)\n Index Cond: (codi = 1234)\n(2 rows)\n\nNow the query uses index, explain tell something about\nindex scan using index prova_pkey.\n\nWhat is the diference with two cases? What must I do?\nIt is a bug? I need do something else? \n\nThanks a lot for helping me.\n\nRegards,\n\nXevi.\n\n_______________________________________________________________\nYahoo! Messenger\nNueva versión: Webcam, voz, y mucho más ¡Gratis! \nDescárgalo ya desde http://messenger.yahoo.es\n\n", "msg_date": "Sun, 11 May 2003 15:16:00 +0200 (CEST)", "msg_from": "=?iso-8859-1?q?Xevi=20Serrats?= <[email protected]>", "msg_from_op": true, "msg_subject": "Yet another question about not use on indexes" }, { "msg_contents": "On Sunday 11 May 2003 2:16 pm, Xevi Serrats wrote:\n> Hi,\n>\n> I have created a table whit some indexes. I analize\n> the query of this table and never use index.\n\n> pfc=# \\d document\n> Table\n> \"public.document\"\n> Column | Type |\n> Modifiers\n> ------------+--------------------------+-----------------------------------\n>----------------- codi | integer | not null\n> default nextval('seq_document'::text)\n> nom | character varying(32) | not null\netc...\n\n> pfc=# explain select * from document where codi=2;\n> QUERY PLAN\n> ----------------------------------------------------------\n> Seq Scan on document (cost=0.00..1.19 rows=1\n> width=120)\n> Filter: (codi = 2)\n> (2 rows)\n\n1. Have you done a VACUUM ANALYSE?\n2. How many rows are in this table?\n3. Can you post the output of EXPLAIN ANALYSE SELECT... 
- that actually runs \nthe query.\n\n-- \n Richard Huxton\n\n", "msg_date": "Sun, 11 May 2003 15:08:34 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another question about not use on indexes" }, { "msg_contents": "=?iso-8859-1?q?Xevi=20Serrats?= <[email protected]> writes:\n> pfc=# explain select * from document where codi=2;\n> QUERY PLAN\n> ----------------------------------------------------------\n> Seq Scan on document (cost=0.00..1.19 rows=1 width=120)\n> Filter: (codi = 2)\n> (2 rows)\n\nJudging from the cost estimate, this table is too small to bother with\nan indexscan.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Sun, 11 May 2003 10:15:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Yet another question about not use on indexes " } ]
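A quick way to check the same point on one's own copy of the data; the query is the one from this thread, with only the comparison steps added:

VACUUM ANALYZE document;

EXPLAIN ANALYZE SELECT * FROM document WHERE codi = 2;

-- Force the index purely for comparison; on a table this small the forced
-- plan should come out no faster than the seq scan the planner chose.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM document WHERE codi = 2;
RESET enable_seqscan;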
[ { "msg_contents": "Alfranio,\n\n> I'm a new PostgresSql user and I do not know so much about the\n> performance mechanisms currently implemented and available.\n<snip>\n> Does anybody know what is happening ?\n\n90% likely: You haven't run VACUUM FULL ANALYZE in a while.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Mon, 12 May 2003 08:49:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PERFORMANCE and SIZE" }, { "msg_contents": " Hello,\n\nI'm a new PostgresSql user and I do not know so much about the\n performance mechanisms currently implemented and available.\n\n So, as a dummy user I think that something strange is happening with me.\n When I run the following command:\n\n explain analyze select * from customer\n where c_last = 'ROUGHTATION' and\n c_w_id = 1 and\n c_d_id = 1\n order by c_w_id, c_d_id, c_last, c_first limit 1;\n\n I receive the following results:\n\n (Customer table with 60.000 rows) -\n QUERY PLAN\n ---------------------------------------------------------------------------\n-----------------------------------------------------------\n Limit (cost=4.84..4.84 rows=1 width=283) (actual time=213.13..213.13\n rows=0 loops=1)\n -> Sort (cost=4.84..4.84 rows=1 width=283) (actual\n time=213.13..213.13 rows=0 loops=1)\n Sort Key: c_w_id, c_d_id, c_last, c_first\n -> Index Scan using pk_customer on customer (cost=0.00..4.83\n rows=1 width=283) (actual time=211.93..211.93 rows=0 loops=1)\n Index Cond: ((c_w_id = 1) AND (c_d_id = 1))\n Filter: (c_last = 'ROUGHTATION'::bpchar)\n Total runtime: 213.29 msec\n (7 rows)\n\n\n (Customer table with 360.000 rows) -\n QUERY PLAN\n ---------------------------------------------------------------------------\n-------------------------------------------------------------\n Limit (cost=11100.99..11101.00 rows=1 width=638) (actual\n time=20.82..20.82 rows=0 loops=1)\n -> Sort (cost=11100.99..11101.00 rows=4 width=638) (actual\n time=20.81..20.81 rows=0 loops=1)\n Sort Key: c_w_id, c_d_id, c_last, c_first\n -> Index Scan using pk_customer on customer\n (cost=0.00..11100.95 rows=4 width=638) (actual time=20.40..20.40 rows=0\n loops=1)\n Index Cond: ((c_w_id = 1) AND (c_d_id = 1))\n Filter: (c_last = 'ROUGHTATION'::bpchar)\n Total runtime: 21.11 msec\n (7 rows)\n\n Increasing the number of rows the total runtime decreases.\n The customer table has the following structure:\n CREATE TABLE customer\n (\n c_id int NOT NULL ,\n c_d_id int4 NOT NULL ,\n c_w_id int4 NOT NULL ,\n c_first char (16) NULL ,\n c_middle char (2) NULL ,\n c_last char (16) NULL ,\n c_street_1 char (20) NULL ,\n c_street_2 char (20) NULL ,\n c_city char (20) NULL ,\n c_state char (2) NULL ,\n c_zip char (9) NULL ,\n c_phone char (16) NULL ,\n c_since timestamp NULL ,\n c_credit char (2) NULL ,\n c_credit_lim numeric(12, 2) NULL ,\n c_discount numeric(4, 4) NULL ,\n c_balance numeric(12, 2) NULL ,\n c_ytd_payment numeric(12, 2) NULL ,\n c_payment_cnt int4 NULL ,\n c_delivery_cnt int4 NULL ,\n c_data text NULL\n );\n\n ALTER TABLE customer ADD\n CONSTRAINT PK_customer PRIMARY KEY\n (\n c_w_id,\n c_d_id,\n c_id\n );\n\n Does anybody know what is happening ?\n\n\n Thanks !!!!\n\n Alfranio Junior\n\n", "msg_date": "Mon, 12 May 2003 12:35:24 -0700", "msg_from": "\"Alfranio Junior\" <[email protected]>", "msg_from_op": false, "msg_subject": "PERFORMANCE and SIZE" }, { "msg_contents": "Alfranio,\n\n> And now, the optimizer started to use a table scan and in consequence gives\n> me:\n\nWhat appears to 
me to be happening is that the planner has incorrect estimates \nof the cost of an index lookup. The base estimate is contained in the \npostgresql.conf parameter:\ncpu_index_tuple_cost = 0.001\n\n From the look of things, your disk/array has much better random seek times \nthan the standard, or you have enough RAM to cache most of your tables. \nEither way, I would experiment with lowering the index_tuple_cost to, say, \n0.0003 and see if you get better use of indexes.\n\nIf that does work for you, make sure to check some other queries unrelated to \nthe \"customers\" table to make sure that the new setting doesn't mess them up \nin some way.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Tue, 13 May 2003 09:15:34 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PERFORMANCE and SIZE" }, { "msg_contents": "Josh,\n\nI ran the vacuumdb as follows:\nvacuumdb -f -v -e -a\n\nand after that,\n\nvacuumdb -z -v -e -a.\n\nAnd now, the optimizer started to use a table scan and in consequence gives\nme:\n\nexplain analyze select * from customer\n\nwhere c_last = 'ROUGHTATION' and\n\nc_w_id = 1 and\n\nc_d_id = 1\n\norder by c_w_id, c_d_id, c_last, c_first limit 1;\n\nQUERY PLAN\n\n----------------------------------------------------------------------------\n-----------------------------------------\n\nLimit (cost=6302.03..6302.03 rows=1 width=639) (actual time=208.33..208.33\nrows=0 loops=1)\n\n-> Sort (cost=6302.03..6302.04 rows=3 width=639) (actual time=208.32..208.32\nrows=0 loops=1)\n\nSort Key: c_w_id, c_d_id, c_last, c_first\n\n-> Seq Scan on customer (cost=0.00..6302.00 rows=3 width=639) (actual\ntime=207.99..207.99 rows=0 loops=1)\n\nFilter: ((c_last = 'ROUGHTATION'::bpchar) AND (c_w_id = 1) AND (c_d_id = 1))\n\nTotal runtime: 208.54 msec\n\n(6 rows)\n\n\n\n\nWhen I force the index use a receive a better result:\n\nset enable_seqscan to off;\n\nexplain analyze select * from customer\n\nwhere c_last = 'ROUGHTATION' and\n\nc_w_id = 1 and\n\nc_d_id = 1\n\norder by c_w_id, c_d_id, c_last, c_first limit 1;\n\n\nQUERY PLAN\n\n----------------------------------------------------------------------------\n-----------------------------------------------------------\n\nLimit (cost=9860.03..9860.03 rows=1 width=639) (actual time=13.98..13.98\nrows=0 loops=1)\n\n-> Sort (cost=9860.03..9860.04 rows=3 width=639) (actual time=13.98..13.98\nrows=0 loops=1)\n\nSort Key: c_w_id, c_d_id, c_last, c_first\n\n-> Index Scan using pk_customer on customer (cost=0.00..9860.00 rows=3\nwidth=639) (actual time=13.86..13.86 rows=0 loops=1)\n\nIndex Cond: ((c_w_id = 1) AND (c_d_id = 1))\n\nFilter: (c_last = 'ROUGHTATION'::bpchar)\n\nTotal runtime: 14.11 msec\n\n(7 rows)\n\nIs this the only way to force the index ?\nWhat are the reasons to the optimizer to decide for a worse plan ?\n\n> Alfranio,\n>\n> > I'm a new PostgresSql user and I do not know so much about the\n> > performance mechanisms currently implemented and available.\n> <snip>\n> > Does anybody know what is happening ?\n>\n> 90% likely: You haven't run VACUUM FULL ANALYZE in a while.\n>\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n\n", "msg_date": "Tue, 13 May 2003 13:28:58 -0700", "msg_from": "\"Alfranio Junior\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PERFORMANCE and SIZE" }, { "msg_contents": "On Tue, 13 May 2003, Josh Berkus wrote:\n\n> Alfranio,\n> \n> > And now, the optimizer started to use a table scan and in consequence gives\n> 
> me:\n> \n> What appears to me to be happening is that the planner has incorrect estimates \n> of the cost of an index lookup. The base estimate is contained in the \n> postgresql.conf parameter:\n> cpu_index_tuple_cost = 0.001\n> \n> >From the look of things, your disk/array has much better random seek times \n> than the standard, or you have enough RAM to cache most of your tables. \n> Either way, I would experiment with lowering the index_tuple_cost to, say, \n> 0.0003 and see if you get better use of indexes.\n> \n> If that does work for you, make sure to check some other queries unrelated to \n> the \"customers\" table to make sure that the new setting doesn't mess them up \n> in some way.\n\nAlso, you can lower random page cost. And make sure the query planner has \nsome idea how much effective cache you have, as it can kind of take that \ninto account too. i.e. a machine wiht 800 Meg cache is far more likely to \nhave data in memory than one 100 MEg cache. This is kernel cache I'm \ntalking about, by the way. effective cache size is set in 8k blocks.\n\n", "msg_date": "Tue, 13 May 2003 14:51:37 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PERFORMANCE and SIZE" }, { "msg_contents": "\nI have gotten so much spam, this subject line struck me as spam until I\nlooked closer. Did it catch anyone else?\n\n---------------------------------------------------------------------------\n\nAlfranio Junior wrote:\n> Hello,\n> \n> I'm a new PostgresSql user and I do not know so much about the\n> performance mechanisms currently implemented and available.\n> \n> So, as a dummy user I think that something strange is happening with me.\n> When I run the following command:\n> \n> explain analyze select * from customer\n> where c_last = 'ROUGHTATION' and\n> c_w_id = 1 and\n> c_d_id = 1\n> order by c_w_id, c_d_id, c_last, c_first limit 1;\n> \n> I receive the following results:\n> \n> (Customer table with 60.000 rows) -\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n> -----------------------------------------------------------\n> Limit (cost=4.84..4.84 rows=1 width=283) (actual time=213.13..213.13\n> rows=0 loops=1)\n> -> Sort (cost=4.84..4.84 rows=1 width=283) (actual\n> time=213.13..213.13 rows=0 loops=1)\n> Sort Key: c_w_id, c_d_id, c_last, c_first\n> -> Index Scan using pk_customer on customer (cost=0.00..4.83\n> rows=1 width=283) (actual time=211.93..211.93 rows=0 loops=1)\n> Index Cond: ((c_w_id = 1) AND (c_d_id = 1))\n> Filter: (c_last = 'ROUGHTATION'::bpchar)\n> Total runtime: 213.29 msec\n> (7 rows)\n> \n> \n> (Customer table with 360.000 rows) -\n> QUERY PLAN\n> ---------------------------------------------------------------------------\n> -------------------------------------------------------------\n> Limit (cost=11100.99..11101.00 rows=1 width=638) (actual\n> time=20.82..20.82 rows=0 loops=1)\n> -> Sort (cost=11100.99..11101.00 rows=4 width=638) (actual\n> time=20.81..20.81 rows=0 loops=1)\n> Sort Key: c_w_id, c_d_id, c_last, c_first\n> -> Index Scan using pk_customer on customer\n> (cost=0.00..11100.95 rows=4 width=638) (actual time=20.40..20.40 rows=0\n> loops=1)\n> Index Cond: ((c_w_id = 1) AND (c_d_id = 1))\n> Filter: (c_last = 'ROUGHTATION'::bpchar)\n> Total runtime: 21.11 msec\n> (7 rows)\n> \n> Increasing the number of rows the total runtime decreases.\n> The customer table has the following structure:\n> CREATE TABLE customer\n> (\n> c_id int NOT NULL ,\n> c_d_id int4 NOT 
NULL ,\n> c_w_id int4 NOT NULL ,\n> c_first char (16) NULL ,\n> c_middle char (2) NULL ,\n> c_last char (16) NULL ,\n> c_street_1 char (20) NULL ,\n> c_street_2 char (20) NULL ,\n> c_city char (20) NULL ,\n> c_state char (2) NULL ,\n> c_zip char (9) NULL ,\n> c_phone char (16) NULL ,\n> c_since timestamp NULL ,\n> c_credit char (2) NULL ,\n> c_credit_lim numeric(12, 2) NULL ,\n> c_discount numeric(4, 4) NULL ,\n> c_balance numeric(12, 2) NULL ,\n> c_ytd_payment numeric(12, 2) NULL ,\n> c_payment_cnt int4 NULL ,\n> c_delivery_cnt int4 NULL ,\n> c_data text NULL\n> );\n> \n> ALTER TABLE customer ADD\n> CONSTRAINT PK_customer PRIMARY KEY\n> (\n> c_w_id,\n> c_d_id,\n> c_id\n> );\n> \n> Does anybody know what is happening ?\n> \n> \n> Thanks !!!!\n> \n> Alfranio Junior\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 14 May 2003 23:30:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PERFORMANCE and SIZE" }, { "msg_contents": "Ha !\n\nNo - it didn't catch me -- but Yes my spam has been going through the \nroof lately.\nOver here in Australia it's in the Media alot of late - Spam increases.\nSeems like everyone is suffering.\n\nCheers\nRS.\n\n\n\n\nBruce Momjian wrote:\n\n>I have gotten so much spam, this subject line struck me as spam until I\n>looked closer. Did it catch anyone else?\n>\n>---------------------------------------------------------------------------\n>\n>Alfranio Junior wrote:\n> \n>\n>> Hello,\n>>\n>>I'm a new PostgresSql user and I do not know so much about the\n>> performance mechanisms currently implemented and available.\n>>\n>> So, as a dummy user I think that something strange is happening with me.\n>> When I run the following command:\n>>\n>> explain analyze select * from customer\n>> where c_last = 'ROUGHTATION' and\n>> c_w_id = 1 and\n>> c_d_id = 1\n>> order by c_w_id, c_d_id, c_last, c_first limit 1;\n>>\n>> I receive the following results:\n>>\n>> (Customer table with 60.000 rows) -\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------\n>>-----------------------------------------------------------\n>> Limit (cost=4.84..4.84 rows=1 width=283) (actual time=213.13..213.13\n>> rows=0 loops=1)\n>> -> Sort (cost=4.84..4.84 rows=1 width=283) (actual\n>> time=213.13..213.13 rows=0 loops=1)\n>> Sort Key: c_w_id, c_d_id, c_last, c_first\n>> -> Index Scan using pk_customer on customer (cost=0.00..4.83\n>> rows=1 width=283) (actual time=211.93..211.93 rows=0 loops=1)\n>> Index Cond: ((c_w_id = 1) AND (c_d_id = 1))\n>> Filter: (c_last = 'ROUGHTATION'::bpchar)\n>> Total runtime: 213.29 msec\n>> (7 rows)\n>>\n>>\n>> (Customer table with 360.000 rows) -\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------\n>>-------------------------------------------------------------\n>> Limit (cost=11100.99..11101.00 rows=1 width=638) (actual\n>> time=20.82..20.82 rows=0 loops=1)\n>> -> Sort (cost=11100.99..11101.00 rows=4 width=638) (actual\n>> time=20.81..20.81 rows=0 loops=1)\n>> Sort Key: c_w_id, c_d_id, c_last, c_first\n>> -> Index Scan using pk_customer on customer\n>> (cost=0.00..11100.95 
rows=4 width=638) (actual time=20.40..20.40 rows=0\n>> loops=1)\n>> Index Cond: ((c_w_id = 1) AND (c_d_id = 1))\n>> Filter: (c_last = 'ROUGHTATION'::bpchar)\n>> Total runtime: 21.11 msec\n>> (7 rows)\n>>\n>> Increasing the number of rows the total runtime decreases.\n>> The customer table has the following structure:\n>> CREATE TABLE customer\n>> (\n>> c_id int NOT NULL ,\n>> c_d_id int4 NOT NULL ,\n>> c_w_id int4 NOT NULL ,\n>> c_first char (16) NULL ,\n>> c_middle char (2) NULL ,\n>> c_last char (16) NULL ,\n>> c_street_1 char (20) NULL ,\n>> c_street_2 char (20) NULL ,\n>> c_city char (20) NULL ,\n>> c_state char (2) NULL ,\n>> c_zip char (9) NULL ,\n>> c_phone char (16) NULL ,\n>> c_since timestamp NULL ,\n>> c_credit char (2) NULL ,\n>> c_credit_lim numeric(12, 2) NULL ,\n>> c_discount numeric(4, 4) NULL ,\n>> c_balance numeric(12, 2) NULL ,\n>> c_ytd_payment numeric(12, 2) NULL ,\n>> c_payment_cnt int4 NULL ,\n>> c_delivery_cnt int4 NULL ,\n>> c_data text NULL\n>> );\n>>\n>> ALTER TABLE customer ADD\n>> CONSTRAINT PK_customer PRIMARY KEY\n>> (\n>> c_w_id,\n>> c_d_id,\n>> c_id\n>> );\n>>\n>> Does anybody know what is happening ?\n>>\n>>\n>> Thanks !!!!\n>>\n>> Alfranio Junior\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 5: Have you checked our extensive FAQ?\n>>\n>>http://www.postgresql.org/docs/faqs/FAQ.html\n>>\n>> \n>>\n>\n> \n>\n\n\n\n\n\n\n\n\nHa !\n\nNo - it didn't catch me -- but Yes my spam has been going through the\nroof lately.\nOver here in Australia it's in the Media alot of late - Spam increases.\nSeems like everyone is suffering.\n\nCheers\nRS.\n\n\n\n\nBruce Momjian wrote:\n\nI have gotten so much spam, this subject line struck me as spam until I\nlooked closer. Did it catch anyone else?\n\n---------------------------------------------------------------------------\n\nAlfranio Junior wrote:\n \n\n Hello,\n\nI'm a new PostgresSql user and I do not know so much about the\n performance mechanisms currently implemented and available.\n\n So, as a dummy user I think that something strange is happening with me.\n When I run the following command:\n\n explain analyze select * from customer\n where c_last = 'ROUGHTATION' and\n c_w_id = 1 and\n c_d_id = 1\n order by c_w_id, c_d_id, c_last, c_first limit 1;\n\n I receive the following results:\n\n (Customer table with 60.000 rows) -\n QUERY PLAN\n ---------------------------------------------------------------------------\n-----------------------------------------------------------\n Limit (cost=4.84..4.84 rows=1 width=283) (actual time=213.13..213.13\n rows=0 loops=1)\n -> Sort (cost=4.84..4.84 rows=1 width=283) (actual\n time=213.13..213.13 rows=0 loops=1)\n Sort Key: c_w_id, c_d_id, c_last, c_first\n -> Index Scan using pk_customer on customer (cost=0.00..4.83\n rows=1 width=283) (actual time=211.93..211.93 rows=0 loops=1)\n Index Cond: ((c_w_id = 1) AND (c_d_id = 1))\n Filter: (c_last = 'ROUGHTATION'::bpchar)\n Total runtime: 213.29 msec\n (7 rows)\n\n\n (Customer table with 360.000 rows) -\n QUERY PLAN\n ---------------------------------------------------------------------------\n-------------------------------------------------------------\n Limit (cost=11100.99..11101.00 rows=1 width=638) (actual\n time=20.82..20.82 rows=0 loops=1)\n -> Sort (cost=11100.99..11101.00 rows=4 width=638) (actual\n time=20.81..20.81 rows=0 loops=1)\n Sort Key: c_w_id, c_d_id, c_last, c_first\n -> Index Scan using pk_customer on customer\n (cost=0.00..11100.95 rows=4 width=638) (actual 
time=20.40..20.40 rows=0\n loops=1)\n Index Cond: ((c_w_id = 1) AND (c_d_id = 1))\n Filter: (c_last = 'ROUGHTATION'::bpchar)\n Total runtime: 21.11 msec\n (7 rows)\n\n Increasing the number of rows the total runtime decreases.\n The customer table has the following structure:\n CREATE TABLE customer\n (\n c_id int NOT NULL ,\n c_d_id int4 NOT NULL ,\n c_w_id int4 NOT NULL ,\n c_first char (16) NULL ,\n c_middle char (2) NULL ,\n c_last char (16) NULL ,\n c_street_1 char (20) NULL ,\n c_street_2 char (20) NULL ,\n c_city char (20) NULL ,\n c_state char (2) NULL ,\n c_zip char (9) NULL ,\n c_phone char (16) NULL ,\n c_since timestamp NULL ,\n c_credit char (2) NULL ,\n c_credit_lim numeric(12, 2) NULL ,\n c_discount numeric(4, 4) NULL ,\n c_balance numeric(12, 2) NULL ,\n c_ytd_payment numeric(12, 2) NULL ,\n c_payment_cnt int4 NULL ,\n c_delivery_cnt int4 NULL ,\n c_data text NULL\n );\n\n ALTER TABLE customer ADD\n CONSTRAINT PK_customer PRIMARY KEY\n (\n c_w_id,\n c_d_id,\n c_id\n );\n\n Does anybody know what is happening ?\n\n\n Thanks !!!!\n\n Alfranio Junior\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/docs/faqs/FAQ.html", "msg_date": "Thu, 15 May 2003 13:46:05 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PERFORMANCE and SIZE" } ]
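The knobs mentioned in this thread can be tried per session before touching postgresql.conf. The first value is the experiment Josh suggested; the other two are illustrative starting points only, not tuned recommendations:

SET cpu_index_tuple_cost = 0.0003;  -- default is 0.001
SET random_page_cost = 2;           -- default is 4; lower favors index scans
SET effective_cache_size = 100000;  -- counted in 8 kB blocks, so roughly 800 MB

EXPLAIN ANALYZE
SELECT * FROM customer
WHERE c_last = 'ROUGHTATION' AND c_w_id = 1 AND c_d_id = 1
ORDER BY c_w_id, c_d_id, c_last, c_first LIMIT 1;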
[ { "msg_contents": "> Jim,\n>\n>> I have a 40M row table I need to import data into, then use to create\n>> a bunch of more normalized tables. Right now all fields are varchar,\n>> but I'm going to change this so that fields that are less than a\n>> certain size are just char. Question is, how much impact is there from\n>> char being nullable vs. not nullable? src/include/access/htup.h\n>> indicates that nulls are stored in a bitmap, so I'd suspect that I\n>> should see a decent space savings from not having to include length\n>> information all the time... (most of these small fields are always the\n>> same size no matter what...)\n>\n> This is moot. PostgreSQL stores CHAR(x), VARCHAR, and TEXT in the same\n> internal format, which includes length information in the page header.\n> So you save no storage space by converting to CHAR(x) ... you might\n> even make your tables *larger* because of the space padding.\n\nSo if the internal format is identical, why does the INFERNAL database\nignore indexes when you have a text compared to a varchar?\n\nRyan\n\n", "msg_date": "Mon, 12 May 2003 13:58:03 -0500 (CDT)", "msg_from": "\"Ryan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How are null's stored?" }, { "msg_contents": "I have a 40M row table I need to import data into, then use to create a\nbunch of more normalized tables. Right now all fields are varchar, but\nI'm going to change this so that fields that are less than a certain\nsize are just char. Question is, how much impact is there from char\nbeing nullable vs. not nullable? src/include/access/htup.h indicates\nthat nulls are stored in a bitmap, so I'd suspect that I should see a\ndecent space savings from not having to include length information all\nthe time... (most of these small fields are always the same size no\nmatter what...)\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Mon, 12 May 2003 14:01:56 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "How are null's stored?" }, { "msg_contents": "Jim,\n\n> I have a 40M row table I need to import data into, then use to create a\n> bunch of more normalized tables. Right now all fields are varchar, but\n> I'm going to change this so that fields that are less than a certain\n> size are just char. Question is, how much impact is there from char\n> being nullable vs. not nullable? src/include/access/htup.h indicates\n> that nulls are stored in a bitmap, so I'd suspect that I should see a\n> decent space savings from not having to include length information all\n> the time... (most of these small fields are always the same size no\n> matter what...)\n\nThis is moot. PostgreSQL stores CHAR(x), VARCHAR, and TEXT in the same \ninternal format, which includes length information in the page header. So \nyou save no storage space by converting to CHAR(x) ... you might even make \nyour tables *larger* because of the space padding.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 12 May 2003 13:44:43 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How are null's stored?" 
}, { "msg_contents": "On Mon, May 12, 2003 at 01:58:03PM -0500, Ryan wrote:\n> So if the internal format is identical, why does the INFERNAL database\n> ignore indexes when you have a text compared to a varchar?\n\nBecause the rules for handling the two data types are not the same. \nSince spaces are significant on char(n) according to the spec, you\nhave strange rules in their handling.\n\nShort answer: use text. Varchar(n) if you must, to limit length. \nBut char(n) is almost always evil.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 12 May 2003 17:04:24 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How are null's stored?" }, { "msg_contents": "Ryan,\n\n> So if the internal format is identical, why does the INFERNAL database\n> ignore indexes when you have a text compared to a varchar?\n\nI don't seem to have this problem; I use TEXT or VARCHAR willy-nilly, \nincluding in LIKE 'string%' and UPPER(field) queries, and the indexes work \nfine.\n\nI suspect that either you're talking about TEXT to CHAR(x) comparisons, which \nare a different ball o' wax, or your query problem is something else.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 12 May 2003 15:46:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How are null's stored?" }, { "msg_contents": "On Mon, 12 May 2003, Josh Berkus wrote:\n\n> > So if the internal format is identical, why does the INFERNAL database\n> > ignore indexes when you have a text compared to a varchar?\n>\n> I don't seem to have this problem; I use TEXT or VARCHAR willy-nilly,\n> including in LIKE 'string%' and UPPER(field) queries, and the indexes work\n> fine.\n\nI can get the case he's complaining about with some cases I believe.\n\nWith an indexed varchar field, I can get 7.3.1 to give me:\nsszabo=# set enable_seqscan=off;\nSET\nsszabo=# explain select * from aq2 where a=('f' || 'g');\n QUERY PLAN\n---------------------------------------------------------------------\n Seq Scan on aq2 (cost=100000000.00..100000022.50 rows=1 width=168)\n Filter: ((a)::text = 'fg'::text)\n\nbut\n\nsszabo=# explain select * from aq2 where a=('f' || 'g')::varchar;\n QUERY PLAN\n----------------------------------------------------------------------\n Index Scan using aq2_pkey on aq2 (cost=0.00..4.82 rows=1 width=168)\n Index Cond: (a = 'fg'::character varying)\n\nor\n\nsszabo=# explain select * from aq2 where a=('f' || 'g'::varchar);\n QUERY PLAN\n----------------------------------------------------------------------\n Index Scan using aq2_pkey on aq2 (cost=0.00..4.82 rows=1 width=168)\n Index Cond: (a = 'fg'::character varying)\n\nAll in all, I'm not sure what the semantic differences between a varchar\nwith no length specified and a text are in PostgreSQL actually and if the\nwhole thing could be simplified in some way that doesn't break backwards\ncompatibility.\n\n", "msg_date": "Mon, 12 May 2003 16:19:25 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How are null's stored?" 
}, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> All in all, I'm not sure what the semantic differences between a varchar\n> with no length specified and a text are in PostgreSQL actually and if the\n> whole thing could be simplified in some way that doesn't break backwards\n> compatibility.\n\nYeah, I've been wondering about that too. A large part of the problem\nis that varchar has its own set of operators, which the planner has no\nright to assume behave exactly like the text ones ... but they do. It\nmight work to rip out the redundant varchar operators and allow indexes\non varchar to become truly textual indexes (ie, they'd be text_ops not\nvarchar_ops opclass). There might be a few tweaks needed to get the\nplanner to play nice with indexes that require implicit coercions, but\nI think it could be made to work.\n\nAnother idea that has been rattling around is to stop treating bpchar as\nbinary-equivalent to text, and in fact to make bpchar-to-text promotion\ngo through rtrim() to eliminate padding spaces.\n\nI think this stuff got put on hold because we haven't been able to come\nup with a good solution for the comparable problems in the numeric\ndatatype hierarchy. But bpchar/varchar/text is a lot simpler problem,\nand maybe could be solved with the tools we have in place already.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Mon, 12 May 2003 19:50:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How are null's stored? " }, { "msg_contents": "\nOn Mon, 12 May 2003, Tom Lane wrote:\n\n> Stephan Szabo <[email protected]> writes:\n> > All in all, I'm not sure what the semantic differences between a varchar\n> > with no length specified and a text are in PostgreSQL actually and if the\n> > whole thing could be simplified in some way that doesn't break backwards\n> > compatibility.\n>\n> Yeah, I've been wondering about that too. A large part of the problem\n> is that varchar has its own set of operators, which the planner has no\n> right to assume behave exactly like the text ones ... but they do. It\n> might work to rip out the redundant varchar operators and allow indexes\n> on varchar to become truly textual indexes (ie, they'd be text_ops not\n> varchar_ops opclass). There might be a few tweaks needed to get the\n> planner to play nice with indexes that require implicit coercions, but\n> I think it could be made to work.\n\nThis seems to possibly work on 7.4. I took my system and removed the\nvarchar comparison operators and directly made a text_ops index on a\nvarchar(30).\nThat gave me indexscans for\n col = 'a'\n col = 'a'::varchar\n col = 'a'::text\n col = 'a' || 'b'\n\nbut I don't know if it has other bad effects yet.\n\n\n> Another idea that has been rattling around is to stop treating bpchar as\n> binary-equivalent to text, and in fact to make bpchar-to-text promotion\n> go through rtrim() to eliminate padding spaces.\n\nI guess this depends on how we read the comparisons/conversions from PAD\nSPACE to NO PAD are supposed to work, but I think this would be good and\nmake things easier for alot of people since most people don't expect it,\nespecially when using functions like upper and lower that return text.\n\n", "msg_date": "Tue, 13 May 2003 12:45:27 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How are null's stored? 
" }, { "msg_contents": "Stephan Szabo <[email protected]> writes:\n> On Mon, 12 May 2003, Tom Lane wrote:\n>> It might work to rip out the redundant varchar operators and allow indexes\n>> on varchar to become truly textual indexes (ie, they'd be text_ops not\n>> varchar_ops opclass).\n\n> This seems to possibly work on 7.4. I took my system and removed the\n> varchar comparison operators and directly made a text_ops index on a\n> varchar(30).\n\nYeah, I fooled with it a little bit last night too. It seems that we'd\nneed to still have a varchar_ops entry in pg_opclass (else you get\ncomplaints about unable to select a default opclass, not to mention that\nold pg_dump files specifying varchar_ops would fail to load). But this\nentry could point to the textual comparison operators. AFAICT the\nplanner doesn't have any problem dealing with the implicit coercions\nthat it's faced with in such cases.\n\n>> Another idea that has been rattling around is to stop treating bpchar as\n>> binary-equivalent to text, and in fact to make bpchar-to-text promotion\n>> go through rtrim() to eliminate padding spaces.\n\n> I guess this depends on how we read the comparisons/conversions from PAD\n> SPACE to NO PAD are supposed to work, but I think this would be good and\n> make things easier for alot of people since most people don't expect it,\n> especially when using functions like upper and lower that return text.\n\nI tried that too, and it seemed to work as expected. Whether it's\narguably more spec-compliant than our current behavior I dunno; haven't\nlooked at that part of the spec closely...\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 13 May 2003 16:16:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How are null's stored? " }, { "msg_contents": "I did two experiments. First, as someone mentioned, changing between\nchar and varchar made absolutely no difference size-wise. In some other\nRDBMSes, performance wise char might still win out because the database\nwouldn't have to do the math to figure out where in the tuple the fields\nare. I know it's splitting hairs, but on what will be a 40M row table...\n\nSecond, I modified the table (see below; all fields were originally\nnullable):\n\nBefore:\nusps=# vacuum full analyze verbose zip4_detail;\nINFO: --Relation public.zip4_detail--\nINFO: Pages 12728: Changed 0, reaped 1, Empty 0, New 0; Tup 467140: Vac\n0, Keep/VTL 0/0, UnUsed 19, MinLen 154, MaxLen 302; Re-using:\nFree/Avail. Space 1009820/264028; EndEmpty/Avail. Pages 0/1521.\n CPU 0.65s/0.86u sec elapsed 1.51 sec.\nINFO: Rel zip4_detail: Pages: 12728 --> 12728; Tuple(s) moved: 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Analyzing public.zip4_detail\n\n\nAfter:\nINFO: --Relation public.zip4_detail--\nINFO: Pages 13102: Changed 0, reaped 6961, Empty 0, New 0; Tup 467140:\nVac 0, Keep/VTL 0/0, UnUsed 31795, MinLen 166, MaxLen 306; Re-using:\nFree/Avail. Space 1136364/190188; EndEmpty/Avail. Pages 0/1056.\n CPU 0.41s/0.79u sec elapsed 1.20 sec.\nINFO: Rel zip4_detail: Pages: 13102 --> 13102; Tuple(s) moved: 0.\n CPU 0.59s/10.02u sec elapsed 18.17 sec.\nINFO: Analyzing public.zip4_detail\n\nAs you can see, space useage actually went up, by 2.9% (pages). 
In other\nwords, it appears to be more efficient to store a null than to store an\nempty string in a varchar.\n\nusps=# select count(*) from zip4_detail where street_pre_drctn_abbrev=''\nand street_suffix_abbrev='' and street_post_drctn_abbrev='';\n-------\n 9599\n\nusps=# select count(*) from zip4_detail where street_pre_drctn_abbrev=''\nor street_suffix_abbrev='';\n--------\n 128434\n\n(all rows have at least one of the 3 fields empty)\n\nHope someone finds this info useful... :)\n\n Table \"public.zip4_detail\"\n Column | Type | Modifiers\n---------------------------+-----------------------+-----------\n zip_code | character varying(5) |\n update_key_no | character varying(10) |\n action_code | character varying(1) |\n record_type_code | character varying(1) |\n carrier_route_id | character varying(4) |\n street_pre_drctn_abbrev | character varying(2) | not null\n street_name | character varying(28) |\n street_suffix_abbrev | character varying(4) | not null\n street_post_drctn_abbrev | character varying(2) | not null\n addr_primary_low_no | character varying(10) |\n addr_primary_high_no | character varying(10) |\n addr_prmry_odd_even_code | character varying(1) |\n building_or_firm_name | character varying(40) |\n addr_secondary_abbrev | character varying(4) |\n addr_secondary_low_no | character varying(8) |\n addr_secondary_high_no | character varying(8) |\n addr_secny_odd_even_code | character varying(1) |\n zip_add_on_low_no | character varying(4) |\n zip_add_on_high_no | character varying(4) |\n base_alt_code | character varying(1) |\n lacs_status_ind | character varying(1) |\n govt_bldg_ind | character varying(1) |\n finance_no | character varying(6) |\n state_abbrev | character varying(2) |\n county_no | character varying(3) |\n congressional_dist_no | character varying(2) |\n muncipality_ctyst_key | character varying(6) |\n urbanization_ctyst_key | character varying(6) |\n prefd_last_line_ctyst_key | character varying(6) |\n\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Tue, 13 May 2003 16:55:41 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How are null's stored? -- Some numbers" }, { "msg_contents": "We have had a couple of threads recently on improving\nbpchar/varchar/text behavior by making bpchar-to-text promotion\ngo through rtrim() (instead of being a straight binary-compatible\nconversion) and getting rid of redundant operators:\n\nhttp://archives.postgresql.org/pgsql-hackers/2002-11/msg00703.php\nhttp://archives.postgresql.org/pgsql-performance/2003-05/msg00151.php\n\nI'm going to go ahead and make these changes for 7.4, as they will\nclearly improve the intuitiveness of the behavior. I don't think they\nmove us any closer to spec compliance --- the spec appears to require\na notion of a collation sequence that can be specified independently\nof the character datatype, which is something we don't have and aren't\nlikely to have very soon. (It's not clear to me that it's actually\n*useful* to specify NO PAD collation with fixed-width character data,\nor PAD SPACE with varchar data, but the spec lets you do it.) 
In the\nmeantime though these changes seem to be a win, and they will not leave\nus any worse off when we do get around to implementing collations.\n\nWe speculated about a couple of alternative solutions in the first of\nthe above-mentioned threads, but they didn't look nearly as practical\nto implement as this way.\n\nLast call for objections ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 May 2003 16:34:48 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Simplifying varchar and bpchar behavior" } ]
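Acting on Jim's measurement earlier in this thread that a NULL stores more compactly than an empty string, a sketch of the reverse conversion, using his zip4_detail column names and assuming the NOT NULL constraints can be dropped:

ALTER TABLE zip4_detail ALTER COLUMN street_pre_drctn_abbrev DROP NOT NULL;
ALTER TABLE zip4_detail ALTER COLUMN street_suffix_abbrev DROP NOT NULL;
ALTER TABLE zip4_detail ALTER COLUMN street_post_drctn_abbrev DROP NOT NULL;

UPDATE zip4_detail SET
    street_pre_drctn_abbrev  = NULLIF(street_pre_drctn_abbrev, ''),
    street_suffix_abbrev     = NULLIF(street_suffix_abbrev, ''),
    street_post_drctn_abbrev = NULLIF(street_post_drctn_abbrev, '');

VACUUM FULL ANALYZE zip4_detail;  -- reclaim the freed space and refresh statistics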
[ { "msg_contents": "\ngreetings.\n\ni have a query that is taking a rather long time to execute and have been\nlooking into setting up a partial index to help, although i'm not sure if this\nis what i want.\n\nhere is the table:\n\nid serial,\ntype_id int,\nareacode smallint,\ncontent text\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo.\nhttp://search.yahoo.com\n\n", "msg_date": "Mon, 12 May 2003 16:51:00 -0700 (PDT)", "msg_from": "csajl <[email protected]>", "msg_from_op": true, "msg_subject": "partial index / funxtional idx or bad sql?" } ]
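For reference against the question above, partial-index syntax looks like the sketch below. The table and column names come from the fuller repost that follows, the constant is invented, and whether such an index pays off depends entirely on how selective the predicate is; the repost thread ultimately fixes the query instead.

CREATE INDEX posts_area_type1_idx ON posts (areacode) WHERE type_id = 1;

-- Only queries whose WHERE clause provably implies type_id = 1 can use it.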
[ { "msg_contents": "\nmy apologies - a strange key combination sent the message early.\n\n----\ngreetings.\n\ni have a query that is taking a rather long time to execute and have been\nlooking into setting up a partial index to help, although i'm not sure if this\nis what i want.\n\nhere is the (simplified) table \"posts\":\n\nid serial\ntype_id int\nareacode smallint\ncontent text\n\nand the other table (areacodes) referenced:\n\nsite_id smallint\nareacode smallint\n\n\nthe query is:\n\nSELECT p.id, p.areacode, p.content \nFROM posts p\nWHERE p.type_id = ?\nAND p.areacode in (\n select areacode from areacodes\n where site_id = ?\n )\n\n\nthe \"posts\" table has 100,000 rows of varying data, across areacodes and types.\ngiven the type_id and site_id, the query is currently taking ~4 seconds to\nreturn 8500 rows (on a dual proc/ gig ram linux box).\n\nindexes on table \"posts\" are:\nprimary key (id)\nand another on both (type_id, areacode)\n\nindex on the table \"areacodes\" is (site_id, areacode).\n\nwould a parital index help in speeding up this query?\nare my current indexes counter productive? \nor is it just my sql that need help?\n\n\nthanks much for any help or pointers to information.\n\n- seth\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo.\nhttp://search.yahoo.com\n\n", "msg_date": "Mon, 12 May 2003 17:07:46 -0700 (PDT)", "msg_from": "csajl <[email protected]>", "msg_from_op": true, "msg_subject": "[repost] partial index / funxtional idx or bad sql?" }, { "msg_contents": "Seth,\n\n> SELECT p.id, p.areacode, p.content \n> FROM posts p\n> WHERE p.type_id = ?\n> AND p.areacode in (\n> select areacode from areacodes\n> where site_id = ?\n> )\n\nUnless you're using 7.4 from CVS, you want to get rid of that IN:\n\n SELECT p.id, p.areacode, p.content \n FROM posts p\n WHERE p.type_id = ?\n AND EXISTS (\n select areacode from areacodes\n where site_id = ?\n and p.areacode = areacodes.areacode\n );\n\nSee how that works, and if it's still slow, post the EXPLAIN ANALYZE.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 12 May 2003 17:13:38 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [repost] partial index / funxtional idx or bad sql?" }, { "msg_contents": "\nhi josh.\n\ni'm using 7.3.2. 
i tried using EXISTS instead of the IN, but the same query\nnow returns in seven sceonds as opposed to four with the IN.\n\n\ncmdb=# EXPLAIN ANALYZE\ncmdb-# select c.class_id, c.areacode, c.title from classifieds c\ncmdb-# where c.class_cat_id = '1'\ncmdb-# and c.areacode IN (\ncmdb(# select areacode from cm_areacode where site_id = '10')\ncmdb-# ;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using classifieds_dual_idx on classifieds c (cost=0.00..26622.14\nrows=1837 width=39) (actual time=345.48..2305.04 rows=8460 loops=1)\n Index Cond: (class_cat_id = 1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=3.46..3.46 rows=4 width=2) (actual time=0.00..0.01\nrows=5 loops=61966)\n -> Index Scan using site_cm_areacode_idx on cm_areacode \n(cost=0.00..3.46 rows=4 width=2) (actual time=0.14..0.22 rows=5 loops=1)\n Index Cond: (site_id = 10)\n Total runtime: 2314.14 msec\n(8 rows)\n----------------------------------\n\nclassifieds_dual_idx is the btree index on (class_type_id, areacode)\nand site_cm_areacode_idx is the btree index on (site_id) only.\nthere is an index in the areacode table that has both (site_id, areacode) but\nit's apparently not being used. would it help the query to use that index\ninstead?\n\nthanks for your help.\n\n\n\n--- Josh Berkus <[email protected]> wrote:\n> Seth,\n> \n> > SELECT p.id, p.areacode, p.content \n> > FROM posts p\n> > WHERE p.type_id = ?\n> > AND p.areacode in (\n> > select areacode from areacodes\n> > where site_id = ?\n> > )\n> \n> Unless you're using 7.4 from CVS, you want to get rid of that IN:\n> \n> SELECT p.id, p.areacode, p.content \n> FROM posts p\n> WHERE p.type_id = ?\n> AND EXISTS (\n> select areacode from areacodes\n> where site_id = ?\n> and p.areacode = areacodes.areacode\n> );\n> \n> See how that works, and if it's still slow, post the EXPLAIN ANALYZE.\n\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo.\nhttp://search.yahoo.com\n\n", "msg_date": "Mon, 12 May 2003 17:47:10 -0700 (PDT)", "msg_from": "csajl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [repost] partial index / funxtional idx or bad sql?" }, { "msg_contents": "Csajl,\n\n> i'm using 7.3.2. i tried using EXISTS instead of the IN, but the same\n> query now returns in seven sceonds as opposed to four with the IN.\n<snip>\n> classifieds_dual_idx is the btree index on (class_type_id, areacode)\n> and site_cm_areacode_idx is the btree index on (site_id) only.\n> there is an index in the areacode table that has both (site_id, areacode)\n> but it's apparently not being used. would it help the query to use that\n> index instead?\n\nNo. \n>From the look of things, it's not the index scan that's taking time ... it's \nthe subplan, which is doing 61,000 loops. Which is normal for IN, but not \nfor EXISTS. You run VACUUM ANALYZE?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Mon, 12 May 2003 20:32:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [repost] partial index / funxtional idx or bad sql?" }, { "msg_contents": "\nOn Mon, 12 May 2003, csajl wrote:\n\n> i'm using 7.3.2. 
i tried using EXISTS instead of the IN, but the same query\n> now returns in seven sceonds as opposed to four with the IN.\n>\n>\n> cmdb=# EXPLAIN ANALYZE\n> cmdb-# select c.class_id, c.areacode, c.title from classifieds c\n> cmdb-# where c.class_cat_id = '1'\n> cmdb-# and c.areacode IN (\n> cmdb(# select areacode from cm_areacode where site_id = '10')\n> cmdb-# ;\n\nHow about something like:\n\nselect c.class_id, c.areacode, c.title from\n classifieds c,\n (select distinct areacode from cm_areacode where site_id='10') a\n where c.class_cat_id='1' and c.areacode=a.areacode;\n\n", "msg_date": "Mon, 12 May 2003 20:47:55 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [repost] partial index / funxtional idx or bad sql?" }, { "msg_contents": "\nhi josh.\n\nthanks for your help and time with this.\n\nran vacuum analyze, still timed in around 3seconds.\ni dropped the site_id only index on the areacodes table in favor of the dual\nsite_id and areacode index and seemingly gained 1/2 second.\n\nby using the IN, i gain another .3 of a second. (i thought EXISTS was supposed\nto be more efficient?)\n\nthe loop on the subplan (~62k) is killing me. any alternatives to what i\nthought would be a seemingly innocuous lookup? the cm_Areacode table is\nnothing more than two columns, associating each areacode into a site_id. (292\nrows if i remember correctly)\n\n\ncmdb=# EXPLAIN ANALYZE\ncmdb-# select c.class_id, c.areacode, c.title from classifieds c\ncmdb-# where c.class_cat_id = '1'\ncmdb-# and EXISTS (\ncmdb(# select areacode from cm_areacode cm where site_id = '10' and c.areacode\n= cm.areacode)\ncmdb-# ;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using classifieds_dual_idx on classifieds c (cost=0.00..493277.77\nrows=28413 width=39) (actual time=360.23..2523.08 rows=8460 loops=1)\n Index Cond: (class_cat_id = 1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using areacode_site_dual_cmareacode on cm_areacode cm \n(cost=0.00..4.96 rows=1 width=2) (actual time=0.01..0.01 rows=0 loops=61966)\n Index Cond: ((site_id = 10) AND ($0 = areacode))\n Total runtime: 2533.93 msec\n(7 rows)\n\ncmdb=#\n------------------------------------\n\ncmdb=# EXPLAIN ANALYZE\ncmdb-# select c.class_id, c.areacode, c.title from classifieds c\ncmdb-# where c.class_cat_id = '1'\ncmdb-# and c.areacode IN (\ncmdb(# select areacode from cm_areacode where site_id = '10')\ncmdb-# ;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using classifieds_dual_idx on classifieds c (cost=0.00..632183.80\nrows=28413 width=39) (actual time=344.70..2287.93 rows=8460 loops=1)\n Index Cond: (class_cat_id = 1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=7.40..7.40 rows=4 width=2) (actual time=0.00..0.00\nrows=5 loops=61966)\n -> Seq Scan on cm_areacode (cost=0.00..7.40 rows=4 width=2)\n(actual time=0.20..0.73 rows=5 loops=1)\n Filter: (site_id = 10)\n Total runtime: 2296.83 msec\n(8 rows)\n\n\n\n--- Josh Berkus <[email protected]> wrote:\n> Csajl,\n> \n> > i'm using 7.3.2. 
i tried using EXISTS instead of the IN, but the same\n> > query now returns in seven sceonds as opposed to four with the IN.\n> <snip>\n> > classifieds_dual_idx is the btree index on (class_type_id, areacode)\n> > and site_cm_areacode_idx is the btree index on (site_id) only.\n> > there is an index in the areacode table that has both (site_id, areacode)\n> > but it's apparently not being used. would it help the query to use that\n> > index instead?\n> \n> No. \n> From the look of things, it's not the index scan that's taking time ... it's \n> the subplan, which is doing 61,000 loops. Which is normal for IN, but not \n> for EXISTS. You run VACUUM ANALYZE?\n\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo.\nhttp://search.yahoo.com\n\n", "msg_date": "Mon, 12 May 2003 20:58:54 -0700 (PDT)", "msg_from": "csajl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [repost] partial index / funxtional idx or bad sql?" }, { "msg_contents": "\nwow.\n\nthat did it. so much for my knowing SQL...\n\nunbelievable - thanks much.\n\n\ncmdb=# EXPLAIN ANALYZE\ncmdb-# select c.class_id, c.areacode, c.title from classifieds c\ncmdb-# , (select distinct areacode from cm_areacode where site_id='10') a\ncmdb-# where c.class_cat_id='1' and c.areacode=a.areacode;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=7.44..1107.53 rows=279 width=41) (actual time=1.13..258.11\nrows=8460 loops=1)\n -> Subquery Scan a (cost=7.44..7.46 rows=1 width=2) (actual\ntime=0.86..0.92 rows=5 loops=1)\n -> Unique (cost=7.44..7.46 rows=1 width=2) (actual time=0.85..0.88\nrows=5 loops=1)\n -> Sort (cost=7.44..7.45 rows=4 width=2) (actual\ntime=0.85..0.86 rows=5 loops=1)\n Sort Key: areacode\n -> Seq Scan on cm_areacode (cost=0.00..7.40 rows=4\nwidth=2) (actual time=0.20..0.73 rows=5 loops=1)\n Filter: (site_id = 10)\n -> Index Scan using classifieds_dual_idx on classifieds c \n(cost=0.00..1096.59 rows=279 width=39) (actual time=0.22..44.28 rows=1692\nloops=5)\n Index Cond: ((c.class_cat_id = 1) AND (c.areacode = \"outer\".areacode))\n Total runtime: 267.71 msec\n(10 rows)\n\n\n\n\n\n--- Stephan Szabo <[email protected]> wrote:\n> \n> On Mon, 12 May 2003, csajl wrote:\n> \n> > i'm using 7.3.2. i tried using EXISTS instead of the IN, but the same\n> query\n> > now returns in seven sceonds as opposed to four with the IN.\n> >\n> >\n> > cmdb=# EXPLAIN ANALYZE\n> > cmdb-# select c.class_id, c.areacode, c.title from classifieds c\n> > cmdb-# where c.class_cat_id = '1'\n> > cmdb-# and c.areacode IN (\n> > cmdb(# select areacode from cm_areacode where site_id = '10')\n> > cmdb-# ;\n> \n> How about something like:\n> \n> select c.class_id, c.areacode, c.title from\n> classifieds c,\n> (select distinct areacode from cm_areacode where site_id='10') a\n> where c.class_cat_id='1' and c.areacode=a.areacode;\n> \n\n\n__________________________________\nDo you Yahoo!?\nThe New Yahoo! Search - Faster. Easier. Bingo.\nhttp://search.yahoo.com\n\n", "msg_date": "Mon, 12 May 2003 21:03:38 -0700 (PDT)", "msg_from": "csajl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [repost] partial index / funxtional idx or bad sql?" }, { "msg_contents": "On Mon, May 12, 2003 at 09:03:38PM -0700, csajl wrote:\n> \n> wow.\n> \n> that did it. 
so much for my knowing SQL...\n> > \n> > How about something like:\n> > \n> > select c.class_id, c.areacode, c.title from\n> > classifieds c,\n> > (select distinct areacode from cm_areacode where site_id='10') a\n> > where c.class_cat_id='1' and c.areacode=a.areacode;\n> > \n\nWow, I'll have to keep that in mind. Shouldn't the optimizer be able to\nhandle that? Could this get added to the TODO?\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n", "msg_date": "Tue, 13 May 2003 05:58:17 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [repost] partial index / funxtional idx or bad sql?" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Wow, I'll have to keep that in mind. Shouldn't the optimizer be able to\n> handle that? Could this get added to the TODO?\n\nNo, 'cause it's done (in CVS tip).\n\nI'm actually a bit hesitant now to recommend that people do such things,\nbecause the 7.4 optimizer is likely to produce a better plan from the\nunmodified IN query than it will from any explicitly \"improved\" version.\nThe 7.4 code knows several ways to do IN efficiently, but when you\nhand-transform the query you are forcing the choice; perhaps wrongly.\n\nAn example from CVS tip and the regression database in which hand\ntransformation forces a less efficient plan choice:\n\nregression=# explain analyze select * from tenk1 a where unique1 in (select ten from tenk1);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=483.17..484.91 rows=10 width=248) (actual time=407.14..409.16 rows=10 loops=1)\n Merge Cond: (\"outer\".unique1 = \"inner\".ten)\n -> Index Scan using tenk1_unique1 on tenk1 a (cost=0.00..1571.97 rows=10000 width=244) (actual time=0.41..1.60 rows=11 loops=1)\n -> Sort (cost=483.17..483.19 rows=10 width=4) (actual time=406.57..406.65 rows=10 loops=1)\n Sort Key: tenk1.ten\n -> HashAggregate (cost=483.00..483.00 rows=10 width=4) (actual time=406.08..406.26 rows=10 loops=1)\n -> Seq Scan on tenk1 (cost=0.00..458.00 rows=10000 width=4) (actual time=0.19..261.84 rows=10000 loops=1)\n Total runtime: 410.74 msec\n(8 rows)\n\nregression=# explain analyze select * from tenk1 a, (select distinct ten from tenk1) b\nregression-# where a.unique1 = b.ten;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1122.39..1232.59 rows=10 width=248) (actual time=476.67..666.02 rows=10 loops=1)\n -> Subquery Scan b (cost=1122.39..1172.39 rows=10 width=4) (actual time=475.94..662.00 rows=10 loops=1)\n -> Unique (cost=1122.39..1172.39 rows=10 width=4) (actual time=475.89..661.65 rows=10 loops=1)\n -> Sort (cost=1122.39..1147.39 rows=10000 width=4) (actual time=475.85..559.27 rows=10000 loops=1)\n Sort Key: ten\n -> Seq Scan on tenk1 (cost=0.00..458.00 rows=10000 width=4) (actual time=0.37..274.87 rows=10000 loops=1)\n -> Index Scan using tenk1_unique1 on tenk1 a (cost=0.00..6.01 rows=1 width=244) (actual time=0.27..0.31 rows=1 loops=10)\n Index Cond: (a.unique1 = \"outer\".ten)\n Total runtime: 687.53 msec\n(9 rows)\n\nSo, for now, make the 
transformation ... but keep a note about the IN\nversion to try whenever you update to 7.4.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Tue, 13 May 2003 10:46:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [repost] partial index / funxtional idx or bad sql? " } ]
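The rewrite that fixed the classifieds query above is a general 7.2/7.3-era pattern, worth stating once on its own. A minimal sketch of the transformation, using hypothetical table and column names (orders, flagged_customers, customer_id) rather than the poster's schema:

-- Original form: on 7.2/7.3 the planner may execute the subquery once per outer row.
SELECT o.id, o.payload
FROM orders o
WHERE o.customer_id IN (SELECT customer_id
                        FROM flagged_customers
                        WHERE region = 'EU');

-- Hand-transformed form: join against a de-duplicated subquery instead.
-- The SELECT DISTINCT is what keeps the join from multiplying outer rows.
SELECT o.id, o.payload
FROM orders o,
     (SELECT DISTINCT customer_id
      FROM flagged_customers
      WHERE region = 'EU') f
WHERE o.customer_id = f.customer_id;

As Tom Lane points out above, 7.4 can usually pick the better plan for the unmodified IN by itself, so the hand-rewritten form is worth re-checking after an upgrade.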
[ { "msg_contents": "Alfranio Junior,\n\n99% likely: You ran the second query after the first\nand the 4 result rows where already stored in memory. \nThe first execution took longer because the database\nhad to go to the disk after looking up in the index\nwhat rows to get. I further assume that the index was\nalready in memory for both queries since you most\nlikely just build it.\n\nOf course you also need to vaccuum on a regular basis\nin order to have up to date statstics.\n\nRegards,\nNikolaus Dilger\n\n\nOn Mon, 12 May 2003 12:35:24 -0700, \"Alfranio Junior\"\nwrote:\n\n> \n> Hello,\n> \n> I'm a new PostgresSql user and I do not know so much\n> about the\n> performance mechanisms currently implemented and\n> available.\n> \n> So, as a dummy user I think that something strange is\n> happening with me.\n> When I run the following command:\n> \n> explain analyze select * from customer\n> where c_last = 'ROUGHTATION' and\n> c_w_id = 1 and\n> c_d_id = 1\n> order by c_w_id, c_d_id, c_last, c_first limit\n1;\n> \n> I receive the following results:\n> \n> (Customer table with 60.000 rows) -\n> \n \n> QUERY PLAN\n> \n---------------------------------------------------------------------------\n>\n-----------------------------------------------------------\n> Limit (cost=4.84..4.84 rows=1 width=283) (actual\n> time=213.13..213.13\n> rows=0 loops=1)\n> -> Sort (cost=4.84..4.84 rows=1 width=283)\n> (actual\n> time=213.13..213.13 rows=0 loops=1)\n> Sort Key: c_w_id, c_d_id, c_last, c_first\n> -> Index Scan using pk_customer on\ncustomer\n> (cost=0.00..4.83\n> rows=1 width=283) (actual time=211.93..211.93 rows=0\n> loops=1)\n> Index Cond: ((c_w_id = 1) AND (c_d_id\n> = 1))\n> Filter: (c_last =\n> 'ROUGHTATION'::bpchar)\n> Total runtime: 213.29 msec\n> (7 rows)\n> \n> \n> (Customer table with 360.000 rows) -\n> \n \n> QUERY PLAN\n> \n---------------------------------------------------------------------------\n>\n-------------------------------------------------------------\n> Limit (cost=11100.99..11101.00 rows=1 width=638)\n> (actual\n> time=20.82..20.82 rows=0 loops=1)\n> -> Sort (cost=11100.99..11101.00 rows=4\n> width=638) (actual\n> time=20.81..20.81 rows=0 loops=1)\n> Sort Key: c_w_id, c_d_id, c_last, c_first\n> -> Index Scan using pk_customer on\ncustomer\n> (cost=0.00..11100.95 rows=4 width=638) (actual\n> time=20.40..20.40 rows=0\n> loops=1)\n> Index Cond: ((c_w_id = 1) AND (c_d_id\n> = 1))\n> Filter: (c_last =\n> 'ROUGHTATION'::bpchar)\n> Total runtime: 21.11 msec\n> (7 rows)\n> \n> Increasing the number of rows the total runtime\n> decreases.\n> The customer table has the following structure:\n> CREATE TABLE customer\n> (\n> c_id int NOT NULL ,\n> c_d_id int4 NOT NULL ,\n> c_w_id int4 NOT NULL ,\n> c_first char (16) NULL ,\n> c_middle char (2) NULL ,\n> c_last char (16) NULL ,\n> c_street_1 char (20) NULL ,\n> c_street_2 char (20) NULL ,\n> c_city char (20) NULL ,\n> c_state char (2) NULL ,\n> c_zip char (9) NULL ,\n> c_phone char (16) NULL ,\n> c_since timestamp NULL ,\n> c_credit char (2) NULL ,\n> c_credit_lim numeric(12, 2) NULL ,\n> c_discount numeric(4, 4) NULL ,\n> c_balance numeric(12, 2) NULL ,\n> c_ytd_payment numeric(12, 2) NULL ,\n> c_payment_cnt int4 NULL ,\n> c_delivery_cnt int4 NULL ,\n> c_data text NULL\n> );\n> \n> ALTER TABLE customer ADD\n> CONSTRAINT PK_customer PRIMARY KEY\n> (\n> c_w_id,\n> c_d_id,\n> c_id\n> );\n> \n> Does anybody know what is happening ?\n> \n> \n> Thanks !!!!\n> \n> Alfranio Junior\n> \n> \n> ---------------------------(end of\n> 
broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n\n", "msg_date": "Mon, 12 May 2003 19:14:31 -0700 (PDT)", "msg_from": "\"Nikolaus Dilger\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PERFORMANCE and SIZE" } ]
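A quick way to separate the caching effect described in this thread from a planning problem, using the customer table and query from the thread itself; this is only a sketch, and the timings will of course differ per system:

-- Refresh planner statistics so row estimates reflect the real table size.
VACUUM ANALYZE customer;

-- Then run the same statement twice in a row. If the second run is much
-- faster with an identical plan, the difference was disk caching, not the plan.
EXPLAIN ANALYZE
SELECT * FROM customer
WHERE c_last = 'ROUGHTATION'
  AND c_w_id = 1
  AND c_d_id = 1
ORDER BY c_w_id, c_d_id, c_last, c_first
LIMIT 1;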
[ { "msg_contents": "Jamie Lawrence wrote:\n\n>How do I join pg_class and pg_database to determine OID/tablename\n>pairs for a given database? I can't find anything to join on in those\n>tables. I'm just grepping output right now, but I'd like to do more\n>complicated things in the future, thus my question.\n> \n>\nJamie,\n\nyou don't need to join since pg_class is specific for a database. You \ncannot see any classes from a different database in pg_class. In \ncontrast, pg_database is server-wide mirrored.\n\nRegards,\nAndreas\n\n", "msg_date": "Tue, 13 May 2003 23:52:29 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding filenames for tables" }, { "msg_contents": "\nI must be having a bad search-engine day.\n\nHow do I join pg_class and pg_database to determine OID/tablename\npairs for a given database? I can't find anything to join on in those\ntables. I'm just grepping output right now, but I'd like to do more\ncomplicated things in the future, thus my question.\n\n-j\n\n-- \nJamie Lawrence [email protected]\n\n", "msg_date": "Tue, 13 May 2003 17:30:56 -0500", "msg_from": "Jamie Lawrence <[email protected]>", "msg_from_op": false, "msg_subject": "Finding filenames for tables" }, { "msg_contents": "\nOn Tue, 13 May 2003, Andreas Pflug wrote:\n\n> you don't need to join since pg_class is specific for a database. You \n> cannot see any classes from a different database in pg_class. In \n> contrast, pg_database is server-wide mirrored.\n\nThank you! I had sufficiently confused myself that I didn't even think\nto wonder if this was the case or not.\n\nThanks again.\n\n-j\n\n-- \nJamie Lawrence [email protected]\n\"It only takes 20 years for a liberal to become a conservative \nwithout changing a single idea.\" \n - Robert Anton Wilson \n\n", "msg_date": "Tue, 13 May 2003 19:11:17 -0500", "msg_from": "Jamie Lawrence <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding filenames for tables" } ]
[ { "msg_contents": "Hello.\n\nI'm using PostgreSQL 7.3.1 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66.\n\nHere is topic. Table transactions:\n\n=> \\d transactions\n Table \"public.transactions\"\n Column | Type | Modifiers\n-------------+--------------+-----------\n trxn_id | integer | not null\n trxn_ret | integer |\n trxn_for | integer |\n status | numeric(2,0) | not null\n auth_status | numeric(2,0) | not null\nIndexes: transactions_pkey primary key btree (trxn_id)\nForeign Key constraints: trxns_id FOREIGN KEY (trxn_id) REFERENCES connections(conn_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n trxns_ret FOREIGN KEY (trxn_ret) REFERENCES transactions(trxn_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n trxns_for FOREIGN KEY (trxn_for) REFERENCES transactions(trxn_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\nAs you can see, trxns_ret and trxns_for constraints references to the same table they come from.\n\nMaintenance of system includes the following step:\ndelete from transactions where transactions.trxn_id = uneeded_trxns.trxn_id;\ntransactions volume is about 10K-20K rows.\nuneeded_trxns volume is about 3K-5K rows.\n\n\nProblem: It takes to MUCH time. EXPLAIN says:\n=> explain delete from transactions where transactions.trxn_id = balance_delete_data.conn_id;\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Hash Join (cost=86.47..966.66 rows=5238 width=14)\n Hash Cond: (\"outer\".trxn_id = \"inner\".conn_id)\n -> Seq Scan on transactions (cost=0.00..503.76 rows=24876 width=10)\n -> Hash (cost=73.38..73.38 rows=5238 width=4)\n -> Seq Scan on balance_delete_data (cost=0.00..73.38 rows=5238 width=4)\n(5 rows)\n\nI was waiting for about 30 minutes and then hit ^C.\n\nAfter some time spent dropping indexes and constraints, I've found out, that problem was in\nthose 2 \"cyclic\" constraints. After drop, query passed in some seconds (that is suitable).\n\nQuestion: why so?\nThanks in advance.\n\n-- \n\nVictor Yegorov\n", "msg_date": "Thu, 15 May 2003 02:11:33 +0300", "msg_from": "\"Victor Yegorov\" <[email protected]>", "msg_from_op": true, "msg_subject": "constraint with reference to the same table" }, { "msg_contents": "On Thu, 15 May 2003, Victor Yegorov wrote:\n\n> I'm using PostgreSQL 7.3.1 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66.\n>\n> Here is topic. Table transactions:\n>\n> => \\d transactions\n> Table \"public.transactions\"\n> Column | Type | Modifiers\n> -------------+--------------+-----------\n> trxn_id | integer | not null\n> trxn_ret | integer |\n> trxn_for | integer |\n> status | numeric(2,0) | not null\n> auth_status | numeric(2,0) | not null\n> Indexes: transactions_pkey primary key btree (trxn_id)\n> Foreign Key constraints: trxns_id FOREIGN KEY (trxn_id) REFERENCES connections(conn_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> trxns_ret FOREIGN KEY (trxn_ret) REFERENCES transactions(trxn_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n> trxns_for FOREIGN KEY (trxn_for) REFERENCES transactions(trxn_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n>\n> As you can see, trxns_ret and trxns_for constraints references to the same table they come from.\n>\n> Maintenance of system includes the following step:\n> delete from transactions where transactions.trxn_id = uneeded_trxns.trxn_id;\n> transactions volume is about 10K-20K rows.\n> uneeded_trxns volume is about 3K-5K rows.\n>\n>\n> Problem: It takes to MUCH time. 
EXPLAIN says:\n>\n> I was waiting for about 30 minutes and then hit ^C.\n>\n> After some time spent dropping indexes and constraints, I've found out, that problem was in\n> those 2 \"cyclic\" constraints. After drop, query passed in some seconds (that is suitable).\n>\n> Question: why so?\n\nFor each row dropped it's making sure that no row has either a trxn_ret or\ntrxn_for that pointed to that row. If those columns aren't indexed it's\ngoing to be amazingly slow (if they are indexed it'll probably only be\nnormally slow ;) ).\n\n\n", "msg_date": "Wed, 14 May 2003 16:28:55 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "Hi,\n\nCan I confirm what this means then ..\n\nFor large table's each column with ref. inegritry I should create an \nindex on those columns ?\n\nSo if I create a table like this :\nCREATE TABLE business_businesstype\n(\nb_bt_id serial PRIMARY KEY,\nb_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE NOT \nNULL,\nbt_id integer REFERENCES businesstype ON UPDATE CASCADE ON DELETE \nCASCADE NOT NULL\n);\n\nI should then create 2 index's\n\nCREATE INDEX business_idx ON business_businesstype (business);\nCREATE INDEX businesstype_idx ON business_businesstype (businesstype);\n\nThanks\nRegards\nRudi.\n\n\n\nStephan Szabo wrote:\n\n>On Thu, 15 May 2003, Victor Yegorov wrote:\n>\n> \n>\n>>I'm using PostgreSQL 7.3.1 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66.\n>>\n>>Here is topic. Table transactions:\n>>\n>>=> \\d transactions\n>> Table \"public.transactions\"\n>> Column | Type | Modifiers\n>>-------------+--------------+-----------\n>> trxn_id | integer | not null\n>> trxn_ret | integer |\n>> trxn_for | integer |\n>> status | numeric(2,0) | not null\n>> auth_status | numeric(2,0) | not null\n>>Indexes: transactions_pkey primary key btree (trxn_id)\n>>Foreign Key constraints: trxns_id FOREIGN KEY (trxn_id) REFERENCES connections(conn_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n>> trxns_ret FOREIGN KEY (trxn_ret) REFERENCES transactions(trxn_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n>> trxns_for FOREIGN KEY (trxn_for) REFERENCES transactions(trxn_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n>>\n>>As you can see, trxns_ret and trxns_for constraints references to the same table they come from.\n>>\n>>Maintenance of system includes the following step:\n>>delete from transactions where transactions.trxn_id = uneeded_trxns.trxn_id;\n>>transactions volume is about 10K-20K rows.\n>>uneeded_trxns volume is about 3K-5K rows.\n>>\n>>\n>>Problem: It takes to MUCH time. EXPLAIN says:\n>>\n>>I was waiting for about 30 minutes and then hit ^C.\n>>\n>>After some time spent dropping indexes and constraints, I've found out, that problem was in\n>>those 2 \"cyclic\" constraints. After drop, query passed in some seconds (that is suitable).\n>>\n>>Question: why so?\n>> \n>>\n>\n>For each row dropped it's making sure that no row has either a trxn_ret or\n>trxn_for that pointed to that row. If those columns aren't indexed it's\n>going to be amazingly slow (if they are indexed it'll probably only be\n>normally slow ;) ).\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n>\n> \n>\n\n\n\n\n\n\n\n\nHi,\n\nCan I confirm what this means then ..\n\nFor large table's each column with ref. 
inegritry I should create an\nindex on those columns ?\n\nSo if I create a table like this :\nCREATE TABLE business_businesstype\n(\nb_bt_id serial PRIMARY KEY,\nb_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE\nNOT NULL,\nbt_id integer REFERENCES businesstype ON UPDATE CASCADE ON DELETE\nCASCADE NOT NULL\n);\n\nI should then create 2 index's\n\nCREATE  INDEX business_idx ON  business_businesstype (business);\nCREATE  INDEX businesstype_idx ON  business_businesstype (businesstype);\n\nThanks\nRegards\nRudi.\n\n\n\nStephan Szabo wrote:\n\nOn Thu, 15 May 2003, Victor Yegorov wrote:\n\n \n\nI'm using PostgreSQL 7.3.1 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66.\n\nHere is topic. Table transactions:\n\n=> \\d transactions\n Table \"public.transactions\"\n Column | Type | Modifiers\n-------------+--------------+-----------\n trxn_id | integer | not null\n trxn_ret | integer |\n trxn_for | integer |\n status | numeric(2,0) | not null\n auth_status | numeric(2,0) | not null\nIndexes: transactions_pkey primary key btree (trxn_id)\nForeign Key constraints: trxns_id FOREIGN KEY (trxn_id) REFERENCES connections(conn_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n trxns_ret FOREIGN KEY (trxn_ret) REFERENCES transactions(trxn_id) ON UPDATE NO ACTION ON DELETE NO ACTION,\n trxns_for FOREIGN KEY (trxn_for) REFERENCES transactions(trxn_id) ON UPDATE NO ACTION ON DELETE NO ACTION\n\nAs you can see, trxns_ret and trxns_for constraints references to the same table they come from.\n\nMaintenance of system includes the following step:\ndelete from transactions where transactions.trxn_id = uneeded_trxns.trxn_id;\ntransactions volume is about 10K-20K rows.\nuneeded_trxns volume is about 3K-5K rows.\n\n\nProblem: It takes to MUCH time. EXPLAIN says:\n\nI was waiting for about 30 minutes and then hit ^C.\n\nAfter some time spent dropping indexes and constraints, I've found out, that problem was in\nthose 2 \"cyclic\" constraints. After drop, query passed in some seconds (that is suitable).\n\nQuestion: why so?\n \n\n\nFor each row dropped it's making sure that no row has either a trxn_ret or\ntrxn_for that pointed to that row. 
If those columns aren't indexed it's\ngoing to be amazingly slow (if they are indexed it'll probably only be\nnormally slow ;) ).\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org", "msg_date": "Thu, 15 May 2003 09:57:09 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": " Hi,\n\nOops - sorry I made a typo on those 2 index's.\n\nWrong:\nCREATE INDEX business_idx ON business_businesstype (business);\nCREATE INDEX businesstype_idx ON business_businesstype (businesstype);\n\nRight:\nCREATE INDEX business_idx ON business_businesstype (b_id);\nCREATE INDEX businesstype_idx ON business_businesstype (bt_id);\n\nThe table:\nCREATE TABLE business_businesstype\n(\nb_bt_id serial PRIMARY KEY,\nb_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE NOT \nNULL,\nbt_id integer REFERENCES businesstype ON UPDATE CASCADE ON DELETE \nCASCADE NOT NULL\n);\n\nThanks\nRegards\nRudi.\n\n\n", "msg_date": "Thu, 15 May 2003 10:07:34 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "* Rudi Starcevic <[email protected]> [15.05.2003 02:59]:\n> Hi,\n> \n> Can I confirm what this means then ..\n> \n> For large table's each column with ref. inegritry I should create an \n> index on those columns ?\n\nI think, that indicies are needed only at delete stage to decrease search\ntime of possible referencing rows.\nNot only, of course, but when we speak about\nINSERT/UPDATE/DELETE data it is so.\n\nOn the other side, indicies increases total query runtime, because for\neach row deleted/updated/inserted it'll be necessary to update each index.\n\nIn my case, I at first drop \"cyclic\" constraints, do the job and then\nrestore them.\n\n\n-- \n\nVictor Yegorov\n", "msg_date": "Thu, 15 May 2003 03:12:39 +0300", "msg_from": "\"Victor Yegorov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "On Thu, 15 May 2003, Rudi Starcevic wrote:\n\n> Can I confirm what this means then ..\n>\n> For large table's each column with ref. inegritry I should create an\n> index on those columns ?\n\nIn general, yes. There's always an additional cost with having additional\nindexes to modifications to the table, so you need to balance the costs by\nwhat sorts of queries you're doing. For example, if you're doing a\nreferences constraint to a table that is mostly there for say providing a\nnice name for something and those values aren't likely to change (and it's\nokay if a change were expensive) then you wouldn't necessarily want the\nadditional index.\n\n\n", "msg_date": "Wed, 14 May 2003 17:46:47 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "On Thu, 15 May 2003, Victor Yegorov wrote:\n\n> * Rudi Starcevic <[email protected]> [15.05.2003 02:59]:\n> > Hi,\n> >\n> > Can I confirm what this means then ..\n> >\n> > For large table's each column with ref. 
inegritry I should create an\n> > index on those columns ?\n>\n> I think, that indicies are needed only at delete stage to decrease search\n> time of possible referencing rows.\n> Not only, of course, but when we speak about\n> INSERT/UPDATE/DELETE data it is so.\n>\n> On the other side, indicies increases total query runtime, because for\n> each row deleted/updated/inserted it'll be necessary to update each index.\n>\n> In my case, I at first drop \"cyclic\" constraints, do the job and then\n> restore them.\n\nThat can be a win, but if you're actually dropping and adding the\nconstraint again it may not be on large tables since it'll still do a\nwhole bunch of index lookups to check the existing rows when the alter\ntable add constraint happens. Disabling triggers and re-enabling them is\nfaster but breaks the guarantee of the constraint.\n\n", "msg_date": "Wed, 14 May 2003 17:49:42 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "Victor,\n\nI see.\nGood point.\n\nThank you kindly.\nRegards\nRudi.\n\n\nVictor Yegorov wrote:\n\n>* Rudi Starcevic <[email protected]> [15.05.2003 02:59]:\n> \n>\n>>Hi,\n>>\n>>Can I confirm what this means then ..\n>>\n>>For large table's each column with ref. inegritry I should create an \n>>index on those columns ?\n>> \n>>\n>\n>I think, that indicies are needed only at delete stage to decrease search\n>time of possible referencing rows.\n>Not only, of course, but when we speak about\n>INSERT/UPDATE/DELETE data it is so.\n>\n>On the other side, indicies increases total query runtime, because for\n>each row deleted/updated/inserted it'll be necessary to update each index.\n>\n>In my case, I at first drop \"cyclic\" constraints, do the job and then\n>restore them.\n>\n>\n> \n>\n\n\n\n\n\n\n\n\nVictor,\n\nI see.\nGood point.\n\nThank you kindly.\nRegards\nRudi.\n\n\nVictor Yegorov wrote:\n\n* Rudi Starcevic <[email protected]> [15.05.2003 02:59]:\n \n\nHi,\n\nCan I confirm what this means then ..\n\nFor large table's each column with ref. inegritry I should create an \nindex on those columns ?\n \n\n\nI think, that indicies are needed only at delete stage to decrease search\ntime of possible referencing rows.\nNot only, of course, but when we speak about\nINSERT/UPDATE/DELETE data it is so.\n\nOn the other side, indicies increases total query runtime, because for\neach row deleted/updated/inserted it'll be necessary to update each index.\n\nIn my case, I at first drop \"cyclic\" constraints, do the job and then\nrestore them.", "msg_date": "Thu, 15 May 2003 10:53:09 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "* Stephan Szabo <[email protected]> [15.05.2003 03:54]:\n> \n> That can be a win, but if you're actually dropping and adding the\n> constraint again it may not be on large tables since it'll still do a\n> whole bunch of index lookups to check the existing rows when the alter\n> table add constraint happens. Disabling triggers and re-enabling them is\n> faster but breaks the guarantee of the constraint.\n\nYou're right. I thought of big tables after posting the reply. My solution\nis suitable for my case, i.e. not so big tables.\n\nReturning to the very first question I asked.\nMay be it is usefull to implicitly create index on foreign key columns?\nActually, untill you had pointed on seq. 
scans, I thought Postgres is\nusing internal indicies - don't ask me why.\n\n\n-- \n\nVictor Yegorov\n", "msg_date": "Thu, 15 May 2003 04:03:41 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "Stephan,\n\nThanks also - I'm actually building a new database as I write this so \nthis topic is perfect timing for me.\n\nI'm using ref. integrity right now mostly for many-to-many type situations.\n\nFor example.\nI create a table of People,\nthen a table of Business's,\nthen I need to relate many people to many business's.\n\nSo I create a business_people table *with* index's to the referred to tables\nEg:\nCREATE TABLE business_people\n(\nb_p_id serial PRIMARY KEY,\nb_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE NOT \nNULL,\np_id integer REFERENCES people ON UPDATE CASCADE ON DELETE CASCADE NOT \nNULL\n);\nCREATE INDEX b_p_b_id_idx ON business_people (b_id);\nCREATE INDEX b_p_p_id_idx ON business_people (p_id);\n\nThe b_id and p_id are primary key's in other table's so they have an \nindex too.\n\nSo far I think I've done every thing right.\nCan I ask if you'd agree or not ?\n\nAs a side note when I build my PG database's I do it 100% by hand in text.\nThat is I write Create table statements, save them to file then \ncut'n'paste them into phpPgAdmin or use PSQL.\nSo the code I have below is the same code I use build the DB.\nI wonder if this is OK or would make other PG user's gasp.\nI'm sure most database people out there, not sure about PG people, would \nuse some sort of GUI.\n\nThanks kindly\nI appreciate your time guy's.\nRegards\nRudi.\n\n\n\n\n\n\n\n\nStephan Szabo wrote:\n\n>On Thu, 15 May 2003, Rudi Starcevic wrote:\n>\n> \n>\n>>Can I confirm what this means then ..\n>>\n>>For large table's each column with ref. inegritry I should create an\n>>index on those columns ?\n>> \n>>\n>\n>In general, yes. There's always an additional cost with having additional\n>indexes to modifications to the table, so you need to balance the costs by\n>what sorts of queries you're doing. For example, if you're doing a\n>references constraint to a table that is mostly there for say providing a\n>nice name for something and those values aren't likely to change (and it's\n>okay if a change were expensive) then you wouldn't necessarily want the\n>additional index.\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to [email protected] so that your\n>message can get through to the mailing list cleanly\n>\n> \n>\n\n\n\n\n\n\n\n\nStephan,\n\nThanks also - I'm actually building a new database as I write this so\nthis topic is perfect timing for me.\n\nI'm using ref. 
integrity right now mostly for many-to-many type\nsituations.\n\nFor example.\nI create a table of People,\nthen a table of Business's,\nthen I need to relate many people to many business's.\n\nSo I create a business_people table *with* index's to the referred to\ntables\nEg:\nCREATE TABLE business_people \n( \nb_p_id serial PRIMARY KEY, \nb_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE\nNOT NULL, \np_id integer REFERENCES people   ON UPDATE CASCADE ON DELETE CASCADE\nNOT NULL \n); \nCREATE  INDEX b_p_b_id_idx ON  business_people (b_id); \nCREATE  INDEX b_p_p_id_idx ON  business_people (p_id); \n\nThe b_id and p_id are primary key's in other table's so they have an\nindex too.\n\nSo far I think I've done every thing right.\nCan I ask if you'd agree or not ?\n\nAs a side note when I build my PG database's I do it 100% by hand in\ntext.\nThat is I write Create table statements, save them to file then\ncut'n'paste them into phpPgAdmin or use PSQL.\nSo the code I have below is the same code I use build the DB.\nI wonder if this is OK or would make other PG user's gasp.\nI'm sure most database people out there, not sure about PG people,\nwould use some sort of GUI.\n\nThanks kindly\nI appreciate your time guy's.\nRegards\nRudi.\n\n\n\n\n\n\n\n\nStephan Szabo wrote:\n\nOn Thu, 15 May 2003, Rudi Starcevic wrote:\n\n \n\nCan I confirm what this means then ..\n\nFor large table's each column with ref. inegritry I should create an\nindex on those columns ?\n \n\n\nIn general, yes. There's always an additional cost with having additional\nindexes to modifications to the table, so you need to balance the costs by\nwhat sorts of queries you're doing. For example, if you're doing a\nreferences constraint to a table that is mostly there for say providing a\nnice name for something and those values aren't likely to change (and it's\nokay if a change were expensive) then you wouldn't necessarily want the\nadditional index.\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to [email protected] so that your\nmessage can get through to the mailing list cleanly", "msg_date": "Thu, 15 May 2003 11:09:12 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "\nOn Thu, 15 May 2003, Rudi Starcevic wrote:\n\n> I'm using ref. 
integrity right now mostly for many-to-many type situations.\n>\n> For example.\n> I create a table of People,\n> then a table of Business's,\n> then I need to relate many people to many business's.\n>\n> So I create a business_people table *with* index's to the referred to tables\n> Eg:\n> CREATE TABLE business_people\n> (\n> b_p_id serial PRIMARY KEY,\n> b_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE NOT\n> NULL,\n> p_id integer REFERENCES people ON UPDATE CASCADE ON DELETE CASCADE NOT\n> NULL\n> );\n> CREATE INDEX b_p_b_id_idx ON business_people (b_id);\n> CREATE INDEX b_p_p_id_idx ON business_people (p_id);\n>\n> The b_id and p_id are primary key's in other table's so they have an\n> index too.\n>\n> So far I think I've done every thing right.\n> Can I ask if you'd agree or not ?\n\nGenerally, yes, I'd agree with something like that, although I might not\nhave given a separate serial and instead made the primary key the two id\nintegers (since I'm not sure having the same reference twice makes sense\nand I'm not sure that you'll need to reference the relationship itself\nseparately). If you weren't likely to be doing your own lookups on b_id\nand p_id I'd have to consider the indexes more carefully, since I'd expect\nthat inserts/updates to business_people are much much more likely than\ndeletes or key updates to business or people.\n\n> As a side note when I build my PG database's I do it 100% by hand in text.\n> That is I write Create table statements, save them to file then\n> cut'n'paste them into phpPgAdmin or use PSQL.\n> So the code I have below is the same code I use build the DB.\n> I wonder if this is OK or would make other PG user's gasp.\n> I'm sure most database people out there, not sure about PG people, would\n> use some sort of GUI.\n\nI generally do something like the above, or make the tables, get them to\nwhat I want and schema dump them.\n\n", "msg_date": "Wed, 14 May 2003 18:23:27 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "\nOn Thu, 15 May 2003, Victor Yegorov wrote:\n\n> * Stephan Szabo <[email protected]> [15.05.2003 03:54]:\n> >\n> > That can be a win, but if you're actually dropping and adding the\n> > constraint again it may not be on large tables since it'll still do a\n> > whole bunch of index lookups to check the existing rows when the alter\n> > table add constraint happens. Disabling triggers and re-enabling them is\n> > faster but breaks the guarantee of the constraint.\n>\n> You're right. I thought of big tables after posting the reply. My solution\n> is suitable for my case, i.e. not so big tables.\n\nThis may become slightly a higher point of balance if we change the alter\ntable time check to a single query rather than repeated checks as well.\n\n> Returning to the very first question I asked.\n> May be it is usefull to implicitly create index on foreign key columns?\n\nMaybe, it seems to me that we've been trying to move away from such\nimplicit behavior (such as serial columns no longer implicitly being\nunique) in general. 
I don't personally have a strong feeling on the\nsubject.\n\n", "msg_date": "Wed, 14 May 2003 18:24:32 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "Stephen,\n\n\n>> although I might not\n>> have given a separate serial and instead made the primary key the two id\n>> integers (since I'm not sure having the same reference twice makes sense\n>> and I'm not sure that you'll need to reference the relationship itself\n>> separately). \n\nYes I see.\nThat's a very good point.\nIf I make the primary key across both the business and person instead of using\na new primary key/serial then that will prevent the same business to person\nrelationship being entered twice.\n\nIf I did it that way would this be OK:\n\nNew:\nCREATE TABLE business_person\n(\nb_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE NOT NULL,\npn_id integer REFERENCES person ON UPDATE CASCADE ON DELETE CASCADE NOT NULL\nPRIMARY KEY(b_id,pn_id);\n);\nCREATE INDEX b_pn_b_id_idx ON business_person (b_id);\nCREATE INDEX b_pn_pn_id_idx ON business_person (pn_id);\n\n\nOld:\nCREATE TABLE business_person\n(\nb_pn_id serial PRIMARY KEY,\nb_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE NOT NULL,\npn_id integer REFERENCES person ON UPDATE CASCADE ON DELETE CASCADE NOT NULL\n);\nCREATE INDEX b_pn_b_id_idx ON business_person (b_id);\nCREATE INDEX b_pn_pn_id_idx ON business_person (pn_id);\n\nAs I'd like to sometime's look up business's, sometime's look up people and sometimes\nlook up both I think I should keep the Index's.\n\nCheers\nRudi.\n\n\n", "msg_date": "Thu, 15 May 2003 11:48:20 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "* Rudi Starcevic <[email protected]> [15.05.2003 04:46]:\n> Stephen,\n> \n> \n> New:\n> CREATE TABLE business_person\n> (\n> b_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE NOT \n> NULL,\n> pn_id integer REFERENCES person ON UPDATE CASCADE ON DELETE CASCADE NOT NULL\n> PRIMARY KEY(b_id,pn_id);\n> );\n> CREATE INDEX b_pn_b_id_idx ON business_person (b_id);\n> CREATE INDEX b_pn_pn_id_idx ON business_person (pn_id);\n\nMay be it's better to name indexes a bit more clearer? No impact on overall\nperformance, but you'll ease your life, if you project will grow to hundreds\nof tables and thousands of indicies.\n\n> As I'd like to sometime's look up business's, sometime's look up people and \n> sometimes\n> look up both I think I should keep the Index's.\n\nIf your lookups are part of business logic, than it's ok. Also, if your\nsystem generates reports using several table joins that may speed up the\nthings.\n\nOtherwise, for curiosity cases, it's better to wait some time for the result\nof one-time queries.\n\n-- \n\nVictor Yegorov\n", "msg_date": "Thu, 15 May 2003 04:58:11 +0300", "msg_from": "\"Victor Yegorov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "Victor,\n\n>> May be it's better to name indexes a bit more clearer? 
No impact on overall\n>> performance, but you'll ease your life, if you project will grow to hundreds\n>> of tables and thousands of indicies.\n\nVery true.\nInstead of: b_pn_b_id_idx,\nI think better would be: busines_person_b_id_idx\n\nThanks\nRudi.\n\n\n\n\nVictor Yegorov wrote:\n\n>* Rudi Starcevic <[email protected]> [15.05.2003 04:46]:\n> \n>\n>>Stephen,\n>>\n>>\n>>New:\n>>CREATE TABLE business_person\n>>(\n>>b_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE NOT \n>>NULL,\n>>pn_id integer REFERENCES person ON UPDATE CASCADE ON DELETE CASCADE NOT NULL\n>>PRIMARY KEY(b_id,pn_id);\n>>);\n>>CREATE INDEX b_pn_b_id_idx ON business_person (b_id);\n>>CREATE INDEX b_pn_pn_id_idx ON business_person (pn_id);\n>> \n>>\n>\n>May be it's better to name indexes a bit more clearer? No impact on overall\n>performance, but you'll ease your life, if you project will grow to hundreds\n>of tables and thousands of indicies.\n>\n> \n>\n>>As I'd like to sometime's look up business's, sometime's look up people and \n>>sometimes\n>>look up both I think I should keep the Index's.\n>> \n>>\n>\n>If your lookups are part of business logic, than it's ok. Also, if your\n>system generates reports using several table joins that may speed up the\n>things.\n>\n>Otherwise, for curiosity cases, it's better to wait some time for the result\n>of one-time queries.\n>\n> \n>\n\n\n\n\n\n\n\n\nVictor,\n\n>> May be it's better to name indexes a bit more clearer? No impact on overall\n>> performance, but you'll ease your life, if you project will grow to hundreds\n>> of tables and thousands of indicies.\n\nVery true.\nInstead of: b_pn_b_id_idx,\nI think better would be: busines_person_b_id_idx\n\nThanks\nRudi.\n\n\n\n\nVictor Yegorov wrote:\n\n* Rudi Starcevic <[email protected]> [15.05.2003 04:46]:\n \n\nStephen,\n\n\nNew:\nCREATE TABLE business_person\n(\nb_id integer REFERENCES business ON UPDATE CASCADE ON DELETE CASCADE NOT \nNULL,\npn_id integer REFERENCES person ON UPDATE CASCADE ON DELETE CASCADE NOT NULL\nPRIMARY KEY(b_id,pn_id);\n);\nCREATE INDEX b_pn_b_id_idx ON business_person (b_id);\nCREATE INDEX b_pn_pn_id_idx ON business_person (pn_id);\n \n\n\nMay be it's better to name indexes a bit more clearer? No impact on overall\nperformance, but you'll ease your life, if you project will grow to hundreds\nof tables and thousands of indicies.\n\n \n\nAs I'd like to sometime's look up business's, sometime's look up people and \nsometimes\nlook up both I think I should keep the Index's.\n \n\n\nIf your lookups are part of business logic, than it's ok. 
Also, if your\nsystem generates reports using several table joins that may speed up the\nthings.\n\nOtherwise, for curiosity cases, it's better to wait some time for the result\nof one-time queries.", "msg_date": "Thu, 15 May 2003 12:07:27 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "Perhaps I also need a 3rd Index ?\n\nOne for Business's\nOne for People and\nOne for Business_People.\n\nI think I may need the 3rd Index for query's like\n\nSelect b_id\n From business_people\nwhere b_id = 1 and pn_id = 2;\n\nI think this way I have an Index for 3 type's of queries.\n\nWhen I looking for data on just the business,\nwhen I'm looking for data on just people and\nwhen I'm looking for data on business people relationships.\n\nCheers\nRudi.\n\n", "msg_date": "Thu, 15 May 2003 12:14:51 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "* Rudi Starcevic <[email protected]> [15.05.2003 05:15]:\n> Perhaps I also need a 3rd Index ?\n> \n> One for Business's\n> One for People and\n> One for Business_People.\n> \n> I think I may need the 3rd Index for query's like\n\nYou don't need it. Primary key on that 2 columns will create a unique index\non them. Of course, if you left things unchanged - you'll need to create\nbusiness_people index yourself.\n\nexecute:\n\n=> \\d business_people\n\nand take a glance on a line, describing primary key.\n\n-- \n\nVictor Yegorov\n", "msg_date": "Thu, 15 May 2003 05:21:35 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" }, { "msg_contents": "Victor,\n\n>> You don't need it. Primary key on that 2 columns will create a unique index\n>> on them. Of course, if you left things unchanged - you'll need to create\n>> business_people index yourself.\n\nAhh of course ..\n\n\"I see said the blind man !\" ..\n\nThanks heaps.\nI think now it's pretty clear to me.\nI feel I have pretty much optimised my code / sql schema.\n\nThank you both,\nit's a tremendous help - one learns something every day with this list.\n\nKind regards\nRudi.\n\nVictor Yegorov wrote:\n\n>* Rudi Starcevic <[email protected]> [15.05.2003 05:15]:\n> \n>\n>>Perhaps I also need a 3rd Index ?\n>>\n>>One for Business's\n>>One for People and\n>>One for Business_People.\n>>\n>>I think I may need the 3rd Index for query's like\n>> \n>>\n>\n>You don't need it. Primary key on that 2 columns will create a unique index\n>on them. Of course, if you left things unchanged - you'll need to create\n>business_people index yourself.\n>\n>execute:\n>\n>=> \\d business_people\n>\n>and take a glance on a line, describing primary key.\n>\n> \n>\n\n\n\n\n\n\n\n\nVictor,\n\n>> You don't need it. Primary key on that 2 columns will create a unique index\n>> on them. 
Of course, if you left things unchanged - you'll need to create\n>> business_people index yourself.\n\nAhh of course ..\n\n\"I see said the blind man !\" ..\n\nThanks heaps.\nI think now it's pretty clear to me.\nI feel I have pretty much optimised my code / sql schema.\n\nThank you both,\nit's a tremendous help - one learns something every day with this list.\n\nKind regards\nRudi.\n\nVictor Yegorov wrote:\n\n* Rudi Starcevic <[email protected]> [15.05.2003 05:15]:\n \n\nPerhaps I also need a 3rd Index ?\n\nOne for Business's\nOne for People and\nOne for Business_People.\n\nI think I may need the 3rd Index for query's like\n \n\n\nYou don't need it. Primary key on that 2 columns will create a unique index\non them. Of course, if you left things unchanged - you'll need to create\nbusiness_people index yourself.\n\nexecute:\n\n=> \\d business_people\n\nand take a glance on a line, describing primary key.", "msg_date": "Thu, 15 May 2003 12:29:22 +1000", "msg_from": "Rudi Starcevic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: constraint with reference to the same table" } ]
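Tying the thread back to Victor's original DELETE: the time went into the referential integrity triggers scanning transactions once per deleted row, looking for trxn_ret and trxn_for references. A sketch of the supporting indexes discussed above; the index names are illustrative:

-- Let the foreign key triggers answer "does anything still reference this
-- id?" with an index scan instead of a sequential scan per deleted row.
CREATE INDEX transactions_trxn_ret_idx ON transactions (trxn_ret);
CREATE INDEX transactions_trxn_for_idx ON transactions (trxn_for);

The per-row trigger lookups do not appear in EXPLAIN output for the DELETE itself, so the effect is easiest to see by simply timing the maintenance delete before and after adding the indexes.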
[ { "msg_contents": "Hello,\n\nI've just had a quick search through the archives but couldn't find\nwhat I wanted, so I've joined this list to continue my search for an\nanswer.\n\nIs there any rule of thumb about how much more efficient an INNER JOIN\nis compare to a regular WHERE statement?\n\nI have a couple of queries in PostgreSQL that use a variety of tables\n(four or five) linked together by key/foreign key conditions all ANDed\ntogether. My co-worker re-wrote one of them using the INNER JOIN\napproach and I wanted to find out if this would empirically improve the\nperformance.\n\nI have not tried to do an EXPLAIN ANALYZE yet but I will try that.\nThanks for your responses.\n\nAlex\n\n\n\n", "msg_date": "Wed, 14 May 2003 21:45:04 -0400", "msg_from": "\"T. Alex Beamish\" <[email protected]>", "msg_from_op": true, "msg_subject": "INNER JOIN vs WHERE" }, { "msg_contents": "\"T. Alex Beamish\" <[email protected]> writes:\n> Is there any rule of thumb about how much more efficient an INNER JOIN\n> is compare to a regular WHERE statement?\n\nIdeally there is no difference. If there are only two tables involved,\nthere definitely is no difference.\n\nIf there is a difference, it's because there are more than two tables,\nand the JOIN syntax forced a particular join order. Read \nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=explicit-joins.html\n\nTypically I'd expect JOIN syntax to be a loss because the odds are good\nthat you're forcing an inefficient join order. It could be a win only\nif the planner chooses a poor join order when given a free hand, or if\nyou have so many tables that you need to suppress the planner's search\nfor a good join order.\n\n> I have not tried to do an EXPLAIN ANALYZE yet but I will try that.\n\nIf you have not bothered to do any EXPLAINs yet then you are really\nwasting people's time on this list.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 May 2003 22:25:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: INNER JOIN vs WHERE " } ]
[ { "msg_contents": "* amol <[email protected]> [15.05.2003 06:47]:\n> Hi everybody,\n> I am new to this mailing list, so please let me know if I am not posting\n> queries the way you are expecting.\n> \n> - We are porting a web based application from MSSQL to postgres as a\n> backend.\n> This is a database intensive application. I am facing a problem in some\n> queries like this :\n> \n> select distinct attached_info.id, ownerid ,attached_info.modified_date from\n> attached_info where attached_info.id in ( select distinct\n> attached_tag_list.id from attached_tag_list where attached_tag_list.id in\n> select attached_info.id from attached_info where\n> ttached_info.deleted='0' ) and attached_tag_list.id in ( select id from\n> attached_tag_list where attached_tag = 262 ) and\n> attached_tag_list.attached_tag in ( select tags.id from tags where tags.id\n> in ( select tag_id from tag_classifier, tag_classifier_association where\n> classifier_tag_id in ( 261, 4467, 1894, 1045, 1087, 1355, 72, 1786, 1179,\n> 3090, 871, 3571, 3565, 3569, 3567, 1043, 2535, 1080, 3315, 87, 1041, 2343,\n> 2345, 1869, 3088, 3872, 2651, 2923, 2302, 1681, 3636, 3964, 2778, 2694,\n> 1371, 2532, 2527, 3742, 3740, 1761, 4530, 4671, 4503, 4512, 3700 ) and\n> association_id='1566' and\n> tag_classifier.uid=tag_classifier_association.uid ) and\n> tags.isdeleted='0' ) ) order by attached_info.modified_date desc,\n> attached_info.id desc;\n\nIN () constructs isn't a good part of postgres (from the performance point\nof view). Try to rewrite your query using joins or EXISTS/NOT EXISTS\nconstructs.\n\nSearch archives for more details, there were a discussion of this topic\nlately.\n\n-- \n\nVictor Yegorov\n", "msg_date": "Thu, 15 May 2003 06:51:46 +0300", "msg_from": "\"Victor Yegorov\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: nested select query failing" }, { "msg_contents": "Hi everybody,\nI am new to this mailing list, so please let me know if I am not posting\nqueries the way you are expecting.\n\n- We are porting a web based application from MSSQL to postgres as a\nbackend.\nThis is a database intensive application. 
I am facing a problem in some\nqueries like this :\n\nselect distinct attached_info.id, ownerid ,attached_info.modified_date from\nattached_info where attached_info.id in ( select distinct\nattached_tag_list.id from attached_tag_list where attached_tag_list.id in\n select attached_info.id from attached_info where\nttached_info.deleted='0' ) and attached_tag_list.id in ( select id from\nattached_tag_list where attached_tag = 262 ) and\nattached_tag_list.attached_tag in ( select tags.id from tags where tags.id\nin ( select tag_id from tag_classifier, tag_classifier_association where\nclassifier_tag_id in ( 261, 4467, 1894, 1045, 1087, 1355, 72, 1786, 1179,\n3090, 871, 3571, 3565, 3569, 3567, 1043, 2535, 1080, 3315, 87, 1041, 2343,\n2345, 1869, 3088, 3872, 2651, 2923, 2302, 1681, 3636, 3964, 2778, 2694,\n1371, 2532, 2527, 3742, 3740, 1761, 4530, 4671, 4503, 4512, 3700 ) and\nassociation_id='1566' and\ntag_classifier.uid=tag_classifier_association.uid ) and\ntags.isdeleted='0' ) ) order by attached_info.modified_date desc,\nattached_info.id desc;\n\nWhen I fire this query in psql, it does not return back.\n\n- top command shows postgres above 95+% cpu usage consistantly\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n 1550 postgres 25 0 20268 19M 18904 R 95.3 5.2 6:31 postmaster\n\n- I am using RedHat 8 with following postgres rpms\npostgresql-libs-7.2.2-1\npostgresql-7.2.2-1\npostgresql-server-7.2.2-1\n\n- RAM size is 384 MB, SWAP size is 384 MB, but top shows that memory is free\n\n- I have done following changes after searching for performance realted\ninformation on the internet and postgres site\n - in /etc/rc.d/rc.local added following lines\n echo \"32768\" >/proc/sys/fs/file-max\n echo \"98304\" >/proc/sys/fs/inode-max\n - in /etc/init.d/postgresql file a pg_ctl call is changed to :\n su -l postgres -s /bin/sh -c \"/usr/bin/pg_ctl -D $PGDATA -o\n'-i -N 1024 -B 2048 -d 5' -p /usr/bin/postmaster start >>\n/var/log/pgsql.log 2>&1\" < /dev/null\n\n- pgsql log shows :\n......\n}) :lefttree <> :righttree <> :extprm () :locprm () :initplan <> :nprm 0\n:scanrelid 1 } :righttree <> :extprm () :locprm () :initplan <> :nprm 0\n:keycount 3 } :righttree <> :extprm () :locprm () :initplan <> :nprm 0\n:numCols 3 :uniqColIdx 3 1 2 }\nDEBUG: ProcessQuery\n*********\nlog stops here for 5/6 minuts\n*********\nDEBUG: proc_exit(0)\nDEBUG: shmem_exit(0)\nDEBUG: exit(0)\nDEBUG: reaping dead processes\nDEBUG: child process (pid 1595) exited with exit code 0\nDEBUG: proc_exit(0)\nDEBUG: shmem_exit(0)\nDEBUG: exit(0)\nDEBUG: reaping dead processes\nDEBUG: child process (pid 1598) exited with exit code 0\nDEBUG: proc_exit(0)\nDEBUG: shmem_exit(0)\nDEBUG: exit(0)\nDEBUG: reaping dead processes\nDEBUG: child process (pid 1599) exited with exit code 0\nDEBUG: proc_exit(0)\nDEBUG: shmem_exit(0)\nDEBUG: exit(0)\nDEBUG: reaping dead processes\nDEBUG: child process (pid 1600) exited with exit code 0\n\n\n- What should I do to get such queries working?\n- Is there any limit on query size?\n- Is there anything left in tuning postgres which is causing this problem ?\n\nIf you want me to try anything, please let me know.\n\nthanks\nAmol\n\n\n\n\n\n", "msg_date": "Thu, 15 May 2003 09:27:33 +0530", "msg_from": "\"amol\" <[email protected]>", "msg_from_op": false, "msg_subject": "nested select query failing" }, { "msg_contents": "Please post the EXPLAIN ANALYZE of that query...\n\nChris\n\n----- Original Message ----- \nFrom: \"amol\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, May 15, 2003 
11:57 AM\nSubject: [PERFORM] nested select query failing\n\n\n> Hi everybody,\n> I am new to this mailing list, so please let me know if I am not posting\n> queries the way you are expecting.\n>\n> - We are porting a web based application from MSSQL to postgres as a\n> backend.\n> This is a database intensive application. I am facing a problem in some\n> queries like this :\n>\n> select distinct attached_info.id, ownerid ,attached_info.modified_date\nfrom\n> attached_info where attached_info.id in ( select distinct\n> attached_tag_list.id from attached_tag_list where attached_tag_list.id in\n> select attached_info.id from attached_info where\n> ttached_info.deleted='0' ) and attached_tag_list.id in ( select id from\n> attached_tag_list where attached_tag = 262 ) and\n> attached_tag_list.attached_tag in ( select tags.id from tags where tags.id\n> in ( select tag_id from tag_classifier, tag_classifier_association where\n> classifier_tag_id in ( 261, 4467, 1894, 1045, 1087, 1355, 72, 1786, 1179,\n> 3090, 871, 3571, 3565, 3569, 3567, 1043, 2535, 1080, 3315, 87, 1041, 2343,\n> 2345, 1869, 3088, 3872, 2651, 2923, 2302, 1681, 3636, 3964, 2778, 2694,\n> 1371, 2532, 2527, 3742, 3740, 1761, 4530, 4671, 4503, 4512, 3700 ) and\n> association_id='1566' and\n> tag_classifier.uid=tag_classifier_association.uid ) and\n> tags.isdeleted='0' ) ) order by attached_info.modified_date desc,\n> attached_info.id desc;\n>\n> When I fire this query in psql, it does not return back.\n>\n> - top command shows postgres above 95+% cpu usage consistantly\n> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n> 1550 postgres 25 0 20268 19M 18904 R 95.3 5.2 6:31 postmaster\n>\n> - I am using RedHat 8 with following postgres rpms\n> postgresql-libs-7.2.2-1\n> postgresql-7.2.2-1\n> postgresql-server-7.2.2-1\n>\n> - RAM size is 384 MB, SWAP size is 384 MB, but top shows that memory is\nfree\n>\n> - I have done following changes after searching for performance realted\n> information on the internet and postgres site\n> - in /etc/rc.d/rc.local added following lines\n> echo \"32768\" >/proc/sys/fs/file-max\n> echo \"98304\" >/proc/sys/fs/inode-max\n> - in /etc/init.d/postgresql file a pg_ctl call is changed to :\n> su -l postgres -s /bin/sh -c \"/usr/bin/pg_ctl -D\n$PGDATA -o\n> '-i -N 1024 -B 2048 -d 5' -p /usr/bin/postmaster start >>\n> /var/log/pgsql.log 2>&1\" < /dev/null\n>\n> - pgsql log shows :\n> ......\n> }) :lefttree <> :righttree <> :extprm () :locprm () :initplan <> :nprm 0\n> :scanrelid 1 } :righttree <> :extprm () :locprm () :initplan <> :nprm 0\n> :keycount 3 } :righttree <> :extprm () :locprm () :initplan <> :nprm 0\n> :numCols 3 :uniqColIdx 3 1 2 }\n> DEBUG: ProcessQuery\n> *********\n> log stops here for 5/6 minuts\n> *********\n> DEBUG: proc_exit(0)\n> DEBUG: shmem_exit(0)\n> DEBUG: exit(0)\n> DEBUG: reaping dead processes\n> DEBUG: child process (pid 1595) exited with exit code 0\n> DEBUG: proc_exit(0)\n> DEBUG: shmem_exit(0)\n> DEBUG: exit(0)\n> DEBUG: reaping dead processes\n> DEBUG: child process (pid 1598) exited with exit code 0\n> DEBUG: proc_exit(0)\n> DEBUG: shmem_exit(0)\n> DEBUG: exit(0)\n> DEBUG: reaping dead processes\n> DEBUG: child process (pid 1599) exited with exit code 0\n> DEBUG: proc_exit(0)\n> DEBUG: shmem_exit(0)\n> DEBUG: exit(0)\n> DEBUG: reaping dead processes\n> DEBUG: child process (pid 1600) exited with exit code 0\n>\n>\n> - What should I do to get such queries working?\n> - Is there any limit on query size?\n> - Is there anything left in tuning postgres which is causing 
this problem\n?\n>\n> If you want me to try anything, please let me know.\n>\n> thanks\n> Amol\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Thu, 15 May 2003 12:24:08 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: nested select query failing" }, { "msg_contents": "It's a rather nasty query format, but wrapped it to readable form.\nLooks like you could make a good join from all these IN's.\n\nAnother question: does EXPLAIN (without ANALYZE) work for this query?\nCould you send its output, and table defs? maybe a minimal dump in private\nemail?\n\n\nQUESTION TO PRO'S:\n\nBasically, is it true that IN's can be converted to RIGHT JOIN's quite\nsimply? Is it always worth?\n\nG.\n--\nwhile (!asleep()) sheep++;\n\n---------------------------- cut here ------------------------------\n----- Original Message -----\nFrom: \"amol\" <[email protected]>\nSent: Thursday, May 15, 2003 5:57 AM\n\n\n> Hi everybody,\n> I am new to this mailing list, so please let me know if I am not posting\n> queries the way you are expecting.\n>\n> - We are porting a web based application from MSSQL to postgres as a\n> backend.\n> This is a database intensive application. I am facing a problem in some\n> queries like this :\n>\n> select distinct\n> attached_info.id, ownerid ,attached_info.modified_date\n> from attached_info\n> where\n> attached_info.id in\n> (select distinct attached_tag_list.id from attached_tag_list\n> where\n> attached_tag_list.id in\n> (select attached_info.id from attached_info\n> where attached_info.deleted='0') and\n> attached_tag_list.id in\n> (select id from attached_tag_list\n> where attached_tag = 262) and\n> attached_tag_list.attached_tag in\n> (select tags.id from tags\n> where\n> tags.id in\n> (select tag_id\n> from tag_classifier, tag_classifier_association\n> where\n> classifier_tag_id in\n> (261, 4467, 1894, 1045, 1087, 1355, 72, 1786, 1179,\n> 3090, 871, 3571, 3565, 3569, 3567, 1043, 2535, 1080,\n> 3315, 87, 1041, 2343, 2345, 1869, 3088, 3872, 2651,\n> 2923, 2302, 1681, 3636, 3964, 2778, 2694, 1371, 2532,\n> 2527, 3742, 3740, 1761, 4530, 4671, 4503, 4512, 3700)\n> and\n> association_id='1566' and\n> tag_classifier.uid=tag_classifier_association.uid\n> ) and\n> tags.isdeleted='0'\n> )\n> )\n> order by attached_info.modified_date desc, attached_info.id desc;\n\n", "msg_date": "Thu, 15 May 2003 13:56:10 +0200", "msg_from": "=?iso-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: nested select query failing" }, { "msg_contents": "On Thu, 15 May 2003, [iso-8859-1] SZUCS G�bor wrote:\n\n> Basically, is it true that IN's can be converted to RIGHT JOIN's quite\n> simply? Is it always worth?\n\nI'm not sure you want to convert to an outer join (since you want to throw\naway the rows on either side that don't match in an IN). You also have to\nbe careful not to get duplicate entries from what was the subquery.\n\nAs for whether it's worth doing, in 7.3 and earlier, almost\ncertainly, in 7.4 almost certainly not. :)\n\n\n\n", "msg_date": "Thu, 15 May 2003 07:56:49 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: nested select query failing" }, { "msg_contents": "thanks allot everybody for your mails,\n\n- It helped and now I have got down the query execution time allot. 
But I am\nfacing problem in following query\n-----------\nexplain analyze select attached_info.id from attached_tag_list,\nattached_info\nwhere\n attached_tag_list.attached_tag = 265\n and\n attached_tag_list.id = attached_info.id\n----------\n\n- it's result is\n----------\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..165349.50 rows=114 width=16) (actual\ntime=117.14..8994.60 rows=15 loops=1)\n -> Index Scan using ix_attached_tag_list_id on attached_tag_list\n(cost=0.00..111.13 rows=96 width=12) (actual time=0.12..0.66 rows=15\nloops=1)\n -> Seq Scan on attached_info (cost=0.00..1211.53 rows=33553 width=4)\n(actual time=3.67..197.98 rows=33553 loops=15)\nTotal runtime: 8994.92 msec\n\nEXPLAIN\n---------\n\n- I have already indexed attached_info on id using following query\n------\nCREATE INDEX attached_info_Index_1 ON attached_info(id) ;\n------\n\n- But I am wondering why there is \"->Seq Scan on attached_info.\"\n After reading various documentation on the internet I am assuming it\nshould have been an index scan. BTW I have done vaccume analyze also.\n\nAm I right?\n\nthanks,\nAmol\n\n\n\n----- Original Message -----\nFrom: \"Stephan Szabo\" <[email protected]>\nTo: \"SZUCS G�bor\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, May 15, 2003 8:26 PM\nSubject: Re: [PERFORM] nested select query failing\n\n\nOn Thu, 15 May 2003, [iso-8859-1] SZUCS G�bor wrote:\n\n> Basically, is it true that IN's can be converted to RIGHT JOIN's quite\n> simply? Is it always worth?\n\nI'm not sure you want to convert to an outer join (since you want to throw\naway the rows on either side that don't match in an IN). You also have to\nbe careful not to get duplicate entries from what was the subquery.\n\nAs for whether it's worth doing, in 7.3 and earlier, almost\ncertainly, in 7.4 almost certainly not. :)\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n\n", "msg_date": "Tue, 20 May 2003 12:56:16 +0530", "msg_from": "\"amol\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: nested select query failing" }, { "msg_contents": "\"amol\" <[email protected]> writes:\n> explain analyze select attached_info.id from attached_tag_list,\n> attached_info\n> where\n> attached_tag_list.attached_tag = 265\n> and\n> attached_tag_list.id = attached_info.id\n\n> NOTICE: QUERY PLAN:\n\n> Nested Loop (cost=0.00..165349.50 rows=114 width=16) (actual\n> time=117.14..8994.60 rows=15 loops=1)\n> -> Index Scan using ix_attached_tag_list_id on attached_tag_list\n> (cost=0.00..111.13 rows=96 width=12) (actual time=0.12..0.66 rows=15\n> loops=1)\n> -> Seq Scan on attached_info (cost=0.00..1211.53 rows=33553 width=4)\n> (actual time=3.67..197.98 rows=33553 loops=15)\n> Total runtime: 8994.92 msec\n\n> - I have already indexed attached_info on id using following query\n> CREATE INDEX attached_info_Index_1 ON attached_info(id) ;\n\nHm. I'd have expected an index scan too. Maybe the two id columns are\nnot of the same datatype?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 May 2003 11:18:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: nested select query failing " }, { "msg_contents": "Hi Tom,\nU are great. As you have said, one item was numeric and another serial\n integer ) so it was applying seq scan. 
Thank you very much for your help\neverybody.\n\nregards,\nAmol\n\n----- Original Message -----\nFrom: \"Tom Lane\" <[email protected]>\nTo: \"amol\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, May 20, 2003 8:48 PM\nSubject: Re: [PERFORM] nested select query failing\n\n\n> \"amol\" <[email protected]> writes:\n> > explain analyze select attached_info.id from attached_tag_list,\n> > attached_info\n> > where\n> > attached_tag_list.attached_tag = 265\n> > and\n> > attached_tag_list.id = attached_info.id\n>\n> > NOTICE: QUERY PLAN:\n>\n> > Nested Loop (cost=0.00..165349.50 rows=114 width=16) (actual\n> > time=117.14..8994.60 rows=15 loops=1)\n> > -> Index Scan using ix_attached_tag_list_id on attached_tag_list\n> > (cost=0.00..111.13 rows=96 width=12) (actual time=0.12..0.66 rows=15\n> > loops=1)\n> > -> Seq Scan on attached_info (cost=0.00..1211.53 rows=33553 width=4)\n> > (actual time=3.67..197.98 rows=33553 loops=15)\n> > Total runtime: 8994.92 msec\n>\n> > - I have already indexed attached_info on id using following query\n> > CREATE INDEX attached_info_Index_1 ON attached_info(id) ;\n>\n> Hm. I'd have expected an index scan too. Maybe the two id columns are\n> not of the same datatype?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n\n", "msg_date": "Wed, 21 May 2003 15:46:00 +0530", "msg_from": "\"amol\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: nested select query failing " } ]
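The fix this thread converges on (the two id columns had different types, numeric on one side and serial/integer on the other, so the planner kept sequentially scanning attached_info) can be sketched in SQL. The thread does not say which table actually held the numeric column, so attached_tag_list.id is assumed here purely for illustration, and the statements below are a hedged sketch rather than the poster's actual change:

-- Option 1 (assumption: attached_tag_list.id is numeric, attached_info.id is
-- serial/integer): cast inside the join so the index on attached_info.id
-- becomes usable.
select attached_info.id
from attached_tag_list, attached_info
where attached_tag_list.attached_tag = 265
  and attached_info.id = attached_tag_list.id::integer;

-- Option 2: make the column types agree permanently. These releases have no
-- ALTER COLUMN ... TYPE, so the column is rebuilt by hand (DROP COLUMN needs
-- 7.3; any index on the old column must be recreated afterwards).
ALTER TABLE attached_tag_list ADD COLUMN id_int integer;
UPDATE attached_tag_list SET id_int = id;
ALTER TABLE attached_tag_list DROP COLUMN id;
ALTER TABLE attached_tag_list RENAME COLUMN id_int TO id;

With matching types, EXPLAIN ANALYZE of the query above should show an index scan on attached_info in place of the repeated sequential scan.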
[ { "msg_contents": "-----Original Message-----\nFrom: Anagha Joshi \nSent: Wednesday, May 07, 2003 3:58 PM\nTo: [email protected]\nSubject: [ADMIN] Out of disk space- error code\n \nHi,\n \nI'm using PostgreSQL 7.1.2 on Sun Solaries 8.\n \nMy Server was kept for overnight test and I observed that on my server\nhostmachine I could not switch user (su) to 'postgres' to see the\npostgres pid. It looked as if the postgres process was tied up some how.\n\n \nThe server pids were available but postgress was not. This looked like a\npotential database crash although there are no postgres core files or\nlogs that give clues as to what may have happned \nThe zipped server logs display lots of error trying to locate the\npostgres PID\n \nThe snapshot of the Postgres log is like this:\nERROR: cannot extend trap: No space left on device.\n Check free disk space.\nERROR: cannot extend trap: No space left on device.\n Check free disk space.\nERROR: cannot extend trap: No space left on device.\n Check free disk space.\nERROR: cannot extend trap: No space left on device.\n \nThat was clear from the logs that disk is full. But is there anyway that\nI can check this programatically? i.e. when I call 'PgDatabase::Exec()'\nto execute query from C++ API, I get error code reflecting 'low on the\ndisk'? I know I can get corresponding error message in C++ from Postgres\nby\n'errorMessage()' method. I want specific error code for this.\n \nPls. help ASAP.\nThanks in advance,\n \nAnagha Joshi\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n-----Original Message-----\nFrom: Anagha Joshi \nSent: Wednesday, May 07, 2003 3:58 PM\nTo: [email protected]\nSubject: [ADMIN] Out of disk\nspace- error code\n \nHi,\n \nI�m using PostgreSQL 7.1.2 on Sun Solaries 8. My Server was kept for overnight test and I observed that on my server hostmachine I could not switch user (su) to 'postgres' to see the postgres pid. It looked as if the postgres process was tied up some how.  The server pids were available but postgress was not. This looked like a potential database crash although there are no postgres core files or logs that give clues as to what may have happned The zipped server logs display lots of error trying to locate the postgres PID \nThe snapshot of the Postgres log is like this:\nERROR:� cannot extend\ntrap: No space left on device.\n����������� Check free\ndisk space.\nERROR:� cannot extend\ntrap: No space left on device.\n����������� Check free\ndisk space.\nERROR:� cannot extend\ntrap: No space left on device.\n����������� Check free\ndisk space.\nERROR:� cannot extend\ntrap: No space left on device.\n That was clear from the logs that disk is full. But is there anyway that I can check this programatically? i.e. when I call �PgDatabase::Exec()� to execute query from C++ API, I get error code reflecting �low on the disk�? I know I can get corresponding error message in C++ from Postgres by�errorMessage()� method. I want specific error code for this. Pls. help ASAP.Thanks in advance, Anagha Joshi", "msg_date": "Thu, 15 May 2003 16:09:44 +0530", "msg_from": "\"Anagha Joshi\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: [ADMIN] Out of disk space- error code" }, { "msg_contents": "On Thu, May 15, 2003 at 04:09:44PM +0530, Anagha Joshi wrote:\n> The snapshot of the Postgres log is like this:\n> ERROR: cannot extend trap: No space left on device.\n> Check free disk space.\n\n[. . .]\n\n> \n> That was clear from the logs that disk is full. But is there anyway that\n> I can check this programatically? 
i.e. when I call 'PgDatabase::Exec()'\n> to execute query from C++ API, I get error code reflecting 'low on the\n> disk'? I know I can get corresponding error message in C++ from Postgres\n> by\n> 'errorMessage()' method. I want specific error code for this.\n\nPostgres relies on your filesystem and OS. It doesn't know you're\nlow on disk until it gets the error from the OS. Use your OS to warn\nyou if you're going to run out of space.\n\nA \n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 15 May 2003 08:38:22 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Out of disk space- error code" } ]
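As Andrew notes, the server only learns about a full disk from the operating system, and 7.1 exposes no separate error code for it, so monitoring belongs at the OS level. If a rough in-database view of space consumption is also wanted, the system catalogs can give an estimate; a sketch (relpages counts 8 kB blocks and is only refreshed by VACUUM or ANALYZE, so the figures are approximate):

-- ten largest relations by on-disk pages (approximate)
SELECT relname, relpages
FROM pg_class
ORDER BY relpages DESC
LIMIT 10;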
[ { "msg_contents": "This a relatively simple nested query that we try to use, but it finish in a \"seq scan\" with a \ntoo high cost, so we had to use a little orthodox solution creating a temporal table into the \nterminal and scanning this table row's one by one making individual querys for each one.\n\nAny body knows how to make the query work in \"index scan\" mode ?\n \n________________________________________________________\n \nexplain select w.*,b.nombre from (select nro_insc,cod_estab,cuitempre,impuesto,sum(monto_impo)\n as totret,sum(monto_rete) as suma_rete,tipodoc,documento from detadj where nro_insc=390009\n and cod_estab=0 and ano=2003 and mes=4 and per=2 and sec=0 group by nro_insc,cod_estab,\n cuitempre,impuesto,tipodoc,documento) w LEFT OUTER JOIN retper b on (w.tipodoc=b.tipodoc \n and btrim(w.documento) like btrim(b.documento) and btrim(w.cuitempre) like btrim(b.cuitempre) \n and w.nro_insc=b.nro_insc and w.cod_estab=b.cod_estab) \n \n_______________________________________________\n \nTABLES STRUCTURE:\n \n Table \"retper\" ( 180.000 rows )\n\n Column | Type | Modifiers\n-----------+---------------+-----------\n tipodoc | integer |\n documento | character(20) |\n nombre | character(40) |\n domicilio | character(40) |\n puerta | integer |\n localidad | character(15) |\n provincia | character(15) |\n ningbru | character(20) |\n c_postal | character(8) |\n cuitempre | character(20) |\n nro_insc | integer |\n cod_estab | integer |\n graba | date |\n hora | character(4) |\n opera | integer |\n puesto | integer |\n crc | character(4) |\nIndexes: \n cuitemp_btrim,\n docu_btrim,\n retper_cod_estab,\n retper_cuitempre,\n retper_documento,\n retper_nombre,\n retper_nro_insc,\n retper_tipodoc\n \n________________________________________________\n\nTable \"detadj\" ( 18.500.000 rows )\n\n Column | Type | Modifiers\n------------+-----------------------+-----------\n cuitempre | character varying(20) |\n sec | numeric(10,0) |\n per | numeric(10,0) |\n mes | numeric(10,0) |\n ano | numeric(10,0) |\n nro_insc | numeric(10,0) |\n cod_estab | numeric(10,0) |\n nobli | character varying(20) |\n cod_act | character varying(20) |\n tipo_agen | character varying(1) |\n monto_impo | double precision |\n alicuota | double precision |\n monto_rete | double precision |\n tipodoc | numeric(10,0) |\n documento | character varying(20) |\n impuesto | numeric(10,0) |\n tipo_dato | numeric(10,0) |\n id | character varying(11) |\n tipo_comp | numeric(10,0) |\n letra | character varying(1) |\n terminal | numeric(10,0) |\n numero | character varying(20) |\n fecha | date |\n ningbru | character varying(20) |\n graba | date |\n hora | character varying(4) |\n opera | numeric(10,0) |\n puesto | numeric(10,0) |\nIndexes: \n ano_detadj,\n ano_mes_per,\n cod_estab,\n cuitempre,\n cuitempre_btrim,\n documento_btrim,\n impue,\n mes_detadj,\n nro_insc_detadj,\n per_detadj,\n sec\n \n________________________________________\n \nQUERY:\n \n# explain select w.*,b.nombre from (select nro_insc,cod_estab,cuitempre,impuesto,sum(monto_impo) as totret,sum(monto_rete) as suma_rete,tipodoc,documento from detadj where nro_insc=390009 and cod_estab=0 and ano=2003 and mes=4 and per=2 and sec=0 group by nro_insc,cod_estab,cuitempre,impuesto,tipodoc,documento) w LEFT OUTER JOIN retper b on (w.tipodoc=b.tipodoc and btrim(w.documento) like btrim(b.documento) and btrim(w.cuitempre) like btrim(b.cuitempre) );\n\n\nRESULTS:\n \nNOTICE: QUERY PLAN:\n \nNested Loop (cost=4999.30..21256.26 rows=1 width=220)\n -> Subquery Scan w 
(cost=4999.30..4999.34 rows=1 width=106)\n -> Aggregate (cost=4999.30..4999.34 rows=1 width=106)\n -> Group (cost=4999.30..4999.33 rows=2 width=106)\n -> Sort (cost=4999.30..4999.30 rows=2 width=106)\n -> Index Scan using ano_mes_per on detadj (cost=0.00..4999.29 rows=2 width=106)\n-> Seq Scan on retper b (cost=0.00..9821.23 rows=214523 width=96)\n \n________________________________________\n \nE. Caillava", "msg_date": "Thu, 15 May 2003 10:20:01 -0300", "msg_from": "=?Windows-1252?Q?Sub_Director_-_Sistemas_Inform=E1ticos?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "nested query too expensive" }, { "msg_contents": "\n[Moving to -performance, since it's more on topic there]\n\nOn Thu, 15 May 2003, [Windows-1252] Sub Director - Sistemas Inform�ticos wrote:\n\n> This a relatively simple nested query that we try to use, but it finish in a \"seq scan\" with a\n> too high cost, so we had to use a little orthodox solution creating a temporal table into the\n> terminal and scanning this table row's one by one making individual querys for each one.\n>\n> Any body knows how to make the query work in \"index scan\" mode ?\n\n> explain select w.*,b.nombre from (select nro_insc,cod_estab,cuitempre,impuesto,sum(monto_impo)\n> as totret,sum(monto_rete) as suma_rete,tipodoc,documento from detadj where nro_insc=390009\n> and cod_estab=0 and ano=2003 and mes=4 and per=2 and sec=0 group by nro_insc,cod_estab,\n> cuitempre,impuesto,tipodoc,documento) w LEFT OUTER JOIN retper b on (w.tipodoc=b.tipodoc\n> and btrim(w.documento) like btrim(b.documento) and btrim(w.cuitempre) like btrim(b.cuitempre)\n> and w.nro_insc=b.nro_insc and w.cod_estab=b.cod_estab)\n\nIf you're doing a condition on a bunch of columns, you might want a\nmulti-column index, since postgres is only going to use one of the\nindexes below I believe and it may not be considered selective enough\non just one of those conditions. And you're doing cross datatype\ncomparisons, which is likely to screw it up as well (why is tipodoc an\ninteger in one and a numeric in the other for example?) I'd also say you\nmight want to consider upgrading to 7.3.x since the explain format looks\nlike that from 7.2 or earlier. 
Also explain analyze output would tell us\nwhat is actually taking the time and could be useful as well.\n\n\n> Indexes:\n> cuitemp_btrim,\n> docu_btrim,\n> retper_cod_estab,\n> retper_cuitempre,\n> retper_documento,\n> retper_nombre,\n> retper_nro_insc,\n> retper_tipodoc\n>\n> ________________________________________________\n>\n> Table \"detadj\" ( 18.500.000 rows )\n>\n> Column | Type | Modifiers\n> ------------+-----------------------+-----------\n> cuitempre | character varying(20) |\n> sec | numeric(10,0) |\n> per | numeric(10,0) |\n> mes | numeric(10,0) |\n> ano | numeric(10,0) |\n> nro_insc | numeric(10,0) |\n> cod_estab | numeric(10,0) |\n> nobli | character varying(20) |\n> cod_act | character varying(20) |\n> tipo_agen | character varying(1) |\n> monto_impo | double precision |\n> alicuota | double precision |\n> monto_rete | double precision |\n> tipodoc | numeric(10,0) |\n> documento | character varying(20) |\n> impuesto | numeric(10,0) |\n> tipo_dato | numeric(10,0) |\n> id | character varying(11) |\n> tipo_comp | numeric(10,0) |\n> letra | character varying(1) |\n> terminal | numeric(10,0) |\n> numero | character varying(20) |\n> fecha | date |\n> ningbru | character varying(20) |\n> graba | date |\n> hora | character varying(4) |\n> opera | numeric(10,0) |\n> puesto | numeric(10,0) |\n> Indexes:\n> ano_detadj,\n> ano_mes_per,\n> cod_estab,\n> cuitempre,\n> cuitempre_btrim,\n> documento_btrim,\n> impue,\n> mes_detadj,\n> nro_insc_detadj,\n> per_detadj,\n> sec\n\n", "msg_date": "Thu, 15 May 2003 08:08:52 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] nested query too expensive" } ]
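Stephan's advice above translates roughly into the statement below. The composite index mirrors the constant conditions of the query; the column order and the index name are guesses and should be checked with EXPLAIN ANALYZE against the real data. Separately, the join columns compared across types (tipodoc is integer in retper but numeric in detadj) would still need a common type, or an explicit cast, before the retper indexes can help the outer join.

-- a single composite index on detadj covering the constant WHERE conditions,
-- rather than relying on any one of the existing single-column indexes
CREATE INDEX detadj_insc_estab_period_idx
    ON detadj (nro_insc, cod_estab, ano, mes, per, sec);
VACUUM ANALYZE detadj;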
[ { "msg_contents": "Has anyone run postgres on a beowulf system? \n\nI'm shopping for a new server. One candidate would be a\nquad opteron (64-bit AMD \"hammer\") machine. Another approach might\nbe a beowulf of single or dual opterons. I imagine the beowulf\nwould be a bit cheaper, and much more expandable, but what about\nthe shared memory used by the postgres backends? I gather that\npostgres uses shared memory to coordinate (locks?) between backends?\n\nI have a smallish DB (pgdump|bzip2 -> 10MB), with ~45 users logged in\nusing local X(python/gtk) postgres client apps. \n\nWill the much slower shared memory access between beowulf nodes be\na performance bottleneck? \n\n[Next question is: has anyone used postgres on an opteron at all??]\n\n-- George\n-- \n I cannot think why the whole bed of the ocean is\n not one solid mass of oysters, so prolific they seem. Ah,\n I am wandering! Strange how the brain controls the brain!\n\t-- Sherlock Holmes in \"The Dying Detective\"\n", "msg_date": "Mon, 19 May 2003 13:28:32 -0400", "msg_from": "george young <[email protected]>", "msg_from_op": true, "msg_subject": "postgres on a beowulf? (AMD)opteron?" }, { "msg_contents": "On Monday 19 May 2003 22:58, george young wrote:\n> Has anyone run postgres on a beowulf system?\n>\n> I'm shopping for a new server. One candidate would be a\n> quad opteron (64-bit AMD \"hammer\") machine. Another approach might\n> be a beowulf of single or dual opterons. I imagine the beowulf\n> would be a bit cheaper, and much more expandable, but what about\n> the shared memory used by the postgres backends? I gather that\n> postgres uses shared memory to coordinate (locks?) between backends?\n\nPostgresql will not run on beowulf at all since it is an MPI system and \npostgresql can no span a single database across machines (yet). Further it \nwon't even run on mosix because mosix does not support shared memory across \nmachines.\n\n>\n> I have a smallish DB (pgdump|bzip2 -> 10MB), with ~45 users logged in\n> using local X(python/gtk) postgres client apps.\n\nWell, you haven't put how many transactions you do, but in general for that \nsort of DB, a P-IV/512MB RAM and SCSI disk would be more than enough unless \nyou are doing really exotic things with data..\n\n> [Next question is: has anyone used postgres on an opteron at all??]\n\nWell, if it runs linux as good as anything else, postgresql will run as good \nas anything else..:-)\n\nHTH\n\n Shridhar\n", "msg_date": "Tue, 20 May 2003 12:14:03 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres on a beowulf? (AMD)opteron?" }, { "msg_contents": "george young kirjutas E, 19.05.2003 kell 20:28:\n> Has anyone run postgres on a beowulf system? \n\nI don't think that postgresql will easyly port to beowulf clusters.\n\n> I'm shopping for a new server. One candidate would be a\n> quad opteron (64-bit AMD \"hammer\") machine. Another approach might\n> be a beowulf of single or dual opterons. I imagine the beowulf\n> would be a bit cheaper, and much more expandable, but what about\n> the shared memory used by the postgres backends? I gather that\n> postgres uses shared memory to coordinate (locks?) between backends?\n> \n> I have a smallish DB (pgdump|bzip2 -> 10MB), with ~45 users logged in\n> using local X(python/gtk) postgres client apps. 
\n\nWhy do you want such a monster machine for this smallish DB ?\n\nAre there any special performance requirements you are not telling us\nabout ?\n\n> Will the much slower shared memory access between beowulf nodes be\n> a performance bottleneck? \n\nI guess that it will not run at all ;(\n\n--------------\nHannu\n\n", "msg_date": "20 May 2003 11:29:34 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres on a beowulf? (AMD)opteron?" }, { "msg_contents": "On Mon, May 19, 2003 at 01:28:32PM -0400, george young wrote:\n> Has anyone run postgres on a beowulf system? \n\nCan't be done. None of the cluster systems support cross-machne\nshared memory (this is actually a problem for Postgres and NUMA, as\nwell, BTW).\n\n> I have a smallish DB (pgdump|bzip2 -> 10MB), with ~45 users logged in\n> using local X(python/gtk) postgres client apps. \n\nSeems like you want a sledgehammer to kill a fly here. That's tiny. \nWhy do you want the complications of a cluster for this?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 20 May 2003 07:27:39 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres on a beowulf? (AMD)opteron?" }, { "msg_contents": "On Mon, 19 May 2003, george young wrote:\n\n> Has anyone run postgres on a beowulf system? \n> \n> I'm shopping for a new server. One candidate would be a\n> quad opteron (64-bit AMD \"hammer\") machine. Another approach might\n> be a beowulf of single or dual opterons. I imagine the beowulf\n> would be a bit cheaper, and much more expandable, but what about\n> the shared memory used by the postgres backends? I gather that\n> postgres uses shared memory to coordinate (locks?) between backends?\n> \n> I have a smallish DB (pgdump|bzip2 -> 10MB), with ~45 users logged in\n> using local X(python/gtk) postgres client apps. \n> \n> Will the much slower shared memory access between beowulf nodes be\n> a performance bottleneck? \n\nSave yourself some money on the big boxes and get a fast drive subsystem \nand lots of memory, those are more important than raw horsepower, and any\ndual Opteron / Itanium2 / USparc III / PPC / Xeon machine has plenty of \nCPU ponies to handle the load.\n\nWe use dual PIII's for most of our serving, and while our front end web \nservers need to grow a bit to handle all the PHP we're throwing at them, \nthe postgresql database on the dual PIII-750 is still plenty fast. I.e. \nour bottlenecks are elsewhere than pgsql.\n\nI don't know anyone off the top of my head that's running postgresql on an \nOpteron, by the way, but I expect it should work fine. You're more likely \nto have problems finding a distribution that works well on top of an \nOpteron than to have problems with pgsql.\n\n", "msg_date": "Tue, 20 May 2003 09:37:03 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres on a beowulf? (AMD)opteron?" }, { "msg_contents": "On Tue, 2003-05-20 at 06:27, Andrew Sullivan wrote:\n> On Mon, May 19, 2003 at 01:28:32PM -0400, george young wrote:\n> > Has anyone run postgres on a beowulf system? \n> \n> Can't be done. 
None of the cluster systems support cross-machne\n> shared memory (this is actually a problem for Postgres and NUMA, as\n> well, BTW).\n\nVMSclusters and Tru64 clusters do, but, of course, it has to be\nprogrammed for.\n\nAnd the licensing costs are pretty steep...\n\n-- \n+-----------------------------------------------------------+\n| Ron Johnson, Jr. Home: [email protected] |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| Regarding war zones: \"There's nothing sacrosanct about a |\n| hotel with a bunch of journalists in it.\" |\n| Marine Lt. Gen. Bernard E. Trainor (Retired) |\n+-----------------------------------------------------------+\n\n", "msg_date": "21 May 2003 09:54:34 -0500", "msg_from": "Ron Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres on a beowulf? (AMD)opteron?" }, { "msg_contents": "On 21 May 2003, Ron Johnson wrote:\n\n> On Tue, 2003-05-20 at 06:27, Andrew Sullivan wrote:\n> > On Mon, May 19, 2003 at 01:28:32PM -0400, george young wrote:\n> > > Has anyone run postgres on a beowulf system? \n> > \n> > Can't be done. None of the cluster systems support cross-machne\n> > shared memory (this is actually a problem for Postgres and NUMA, as\n> > well, BTW).\n> \n> VMSclusters and Tru64 clusters do, but, of course, it has to be\n> programmed for.\n> \n> And the licensing costs are pretty steep...\n\nsomeone was on the list last year and had gotten postgresql (sorta) \nworking on mosix clusters. The performance, I recall, was less than \nstellar.\n\n", "msg_date": "Wed, 21 May 2003 09:45:50 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres on a beowulf? (AMD)opteron?" }, { "msg_contents": "scott.marlowe wrote:\n> On 21 May 2003, Ron Johnson wrote:\n> \n> > On Tue, 2003-05-20 at 06:27, Andrew Sullivan wrote:\n> > > On Mon, May 19, 2003 at 01:28:32PM -0400, george young wrote:\n> > > > Has anyone run postgres on a beowulf system? \n> > > \n> > > Can't be done. None of the cluster systems support cross-machne\n> > > shared memory (this is actually a problem for Postgres and NUMA, as\n> > > well, BTW).\n> > \n> > VMSclusters and Tru64 clusters do, but, of course, it has to be\n> > programmed for.\n> > \n> > And the licensing costs are pretty steep...\n> \n> someone was on the list last year and had gotten postgresql (sorta) \n> working on mosix clusters. The performance, I recall, was less than \n> stellar.\n\nI think they copied and locked the shared memory for each machine that\nneeded it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 22 May 2003 17:23:41 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres on a beowulf? (AMD)opteron?" }, { "msg_contents": "On Tue, 20 May 2003, Shridhar Daithankar wrote:\n\n> On Monday 19 May 2003 22:58, george young wrote:\n> > Has anyone run postgres on a beowulf system?\n> >\n> > I'm shopping for a new server. One candidate would be a\n> > quad opteron (64-bit AMD \"hammer\") machine. Another approach might\n> > be a beowulf of single or dual opterons. I imagine the beowulf\n> > would be a bit cheaper, and much more expandable, but what about\n> > the shared memory used by the postgres backends? I gather that\n> > postgres uses shared memory to coordinate (locks?) 
between backends?\n> \n> Postgresql will not run on beowulf at all since it is an MPI system and \n> postgresql can no span a single database across machines (yet). Further it \n> won't even run on mosix because mosix does not support shared memory across \n> machines.\n> \n> >\n> > I have a smallish DB (pgdump|bzip2 -> 10MB), with ~45 users logged in\n> > using local X(python/gtk) postgres client apps.\n> \n> Well, you haven't put how many transactions you do, but in general for that \n> sort of DB, a P-IV/512MB RAM and SCSI disk would be more than enough unless \n> you are doing really exotic things with data..\n> \n> > [Next question is: has anyone used postgres on an opteron at all??]\n> \n> Well, if it runs linux as good as anything else, postgresql will run as good \n> as anything else..:-)\n\nKeep in mind, if what you're doing is very memory intensive, then the PIV \nwith it's faster memory bandwidth may be the best bet. If it's CPU \nprocessing intensive (GIS calculations) or I/O intensive the AMD's should \nbe competitive, but for memory I/O speed bound apps, the P IV is still \nfaster.\n\nYou'll not know which is faster until you've benchmarked it yourself under \nyour own load though. :-)\n\n", "msg_date": "Fri, 23 May 2003 10:13:47 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres on a beowulf? (AMD)opteron?" } ]
[ { "msg_contents": "Hello.\nI'm just going to upgrade from 7.1.2 to 7.3.2 and found that some\nof my queries performing on 7.3.2 much slower than on 7.1.2.\nFor example, pretty complex query that takes 3-4 seconds on 7.1.2\nnow takes about 1 minute on 7.3.2.\nEXPLAIN shows the pretty same total query cost (49000 on 7.1.2 vs\n56000 vs 7.3.2, but 7.1.2 didn't calculate some subqueries).\nWhat I did: make the dump from 7.1.2, load dump into 7.3.2,\ntune postgresql.conf parameters like in 7.1.2, vacuum analyze.\nWhy is it take so long ?\nP.S. Of course, I can show the query.\n-- \nEugene Fokin\nSOLVO Ltd. Company\n", "msg_date": "Tue, 20 May 2003 16:28:38 +0400", "msg_from": "Eugene Fokin <[email protected]>", "msg_from_op": true, "msg_subject": "7.3.2 vs 7.1.2" }, { "msg_contents": "On Tue, 2003-05-20 at 08:28, Eugene Fokin wrote:\n> Hello.\n> I'm just going to upgrade from 7.1.2 to 7.3.2 and found that some\n> of my queries performing on 7.3.2 much slower than on 7.1.2.\n\nThis generally indicates misconfiguration of some kind or required\ntweaking.\n\n> P.S. Of course, I can show the query.\n\nPlease show EXPLAIN ANALYZE output along with the query.\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "20 May 2003 09:08:59 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "* Eugene Fokin <[email protected]> [20.05.2003 15:52]:\n> Why is it take so long ?\n> P.S. Of course, I can show the query.\n\nPlease, attach both: query and explain analyze results.\nResults of:\n\n=> select version();\n\nare welcomed too.\n\n-- \n\nVictor Yegorov\n", "msg_date": "Tue, 20 May 2003 16:09:13 +0300", "msg_from": "\"Victor Yegorov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "On Tue, May 20, 2003 at 04:09:13PM +0300, Victor Yegorov wrote:\n \n> Please, attach both: query and explain analyze results.\n> Results of:\n> \n> => select version();\n> \n> are welcomed too.\n\nOk.\nbtw, it works on 7.2.1 fine to me too (as 7.1.2).\n\n\\d loadview:\n\n View \"public.loadview\"\n Column | Type | Modifiers \n-------------------+--------------------------+-----------\n id | integer | \n parent_load_id | integer | \n name | character varying(10) | \n code_id | integer | \n rcn_id | integer | \n loc_id | integer | \n real_loc_id | integer | \n dest_id | integer | \n order_id | integer | \n last_comment | character varying | \n label | character varying(20) | \n type | character varying(1) | \n qty | integer | \n qty_type | character varying(1) | \n units | integer | \n assigned | integer | \n visible | boolean | \n status | character varying(1) | \n sort | integer | \n dest_status | character varying(1) | \n date_pour | date | \n akciz_name | text | \n is_ub | boolean | \n is_toll | boolean | \n has_receiving | boolean | \n has_ub | boolean | \n has_custom | boolean | \n has_akciz | boolean | \n owner_id | integer | \n receive_type | character varying(1) | \n region_units | integer | \n msk_units | integer | \n town_units | integer | \n date_last_counted | timestamp with time zone | \n counted_by | character varying(32) | \n date_last_access | timestamp with time zone | \n accessed_by | character varying(32) | \n created | timestamp with time zone | \n created_by | character varying(32) | \n sku_name | character varying | \n real_loc | integer | \n loc_type | character varying | \nView definition: SELECT l.id, l.parent_load_id, l.name, 
l.code_id, l.rcn_id, l.loc_id, l.real_loc_id, l.dest_id, CASE WHEN (EXISTS (SELECT orders.id FROM orders WHERE (orders.id = l.order_id))) THEN l.order_id ELSE 0 END AS order_id, (SELECT lc.\"comment\" FROM load_comments lc WHERE (lc.id = l.last_comment_id)) AS last_comment, l.label, l.\"type\", l.qty, l.qty_type, l.units, l.assigned, l.visible, l.status, l.sort, l.dest_status, r.date_pour, ad.name AS akciz_name, l.is_ub, l.is_toll, l.has_receiving, l.has_ub, l.has_custom, l.has_akciz, l.owner_id, l.receive_type, l.region_units, l.msk_units, l.town_units, l.date_last_counted, l.counted_by, l.date_last_access, l.accessed_by, l.created, l.created_by, (SELECT s.name FROM sku s, code_info c WHERE ((s.id = c.sku_id) AND (c.id = l.code_id))) AS sku_name, l.real_loc_id AS real_loc, (SELECT loc.\"type\" FROM \"location\" loc WHERE (loc.id = l.real_loc_id)) AS loc_type FROM (((loads l JOIN (SELECT rcn_details.id, rcn_details.date_pour FROM rcn_details) r ON ((r.id = l.rcn_id))) LEFT JOIN (SELECT min(akciz.id) AS id, akciz.rcn_id FROM akciz GROUP BY akciz.rcn_id) ah ON ((ah.rcn_id = l.rcn_id))) LEFT JOIN (SELECT max((akciz_details.name)::text) AS name, akciz_details.akciz_id FROM akciz_details GROUP BY akciz_details.akciz_id) ad ON ((ad.akciz_id = ah.id)));\n\n7.2.1:\nselect version ():\n \"PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96\"\n\nexplain analyze select count(*) from loadview:\n\nNOTICE: QUERY PLAN:\n\nAggregate (cost=49464.29..49464.29 rows=1 width=20) (actual time=4823.05..4823.05 rows=1 loops=1)\n -> Merge Join (cost=36149.36..47306.99 rows=862919 width=20) (actual time=4081.67..4699.48 rows=147281 loops=1)\n -> Sort (cost=35013.94..35013.94 rows=147281 width=16) (actual time=3851.65..3919.07 rows=147281 loops=1)\n -> Merge Join (cost=1098.11..22371.18 rows=147281 width=16) (actual time=196.80..3001.89 rows=147281 loops=1)\n -> Merge Join (cost=0.00..19885.60 rows=147281 width=8) (actual time=0.08..2059.89 rows=147281 loops=1)\n -> Index Scan using load_rcn_id_idx on loads l (cost=0.00..17026.36 rows=147281 width=4) (actual time=0.04..786.13 rows=147281 loops=1)\n -> Index Scan using rcn_detail_idx on rcn_details (cost=0.00..618.30 rows=12692 width=4) (actual time=0.03..510.13 rows=151332 loops=1)\n -> Sort (cost=1098.11..1098.11 rows=1161 width=8) (actual time=196.68..273.26 rows=140535 loops=1)\n -> Subquery Scan ah (cost=980.95..1039.00 rows=1161 width=8) (actual time=73.79..167.89 rows=11497 loops=1)\n -> Aggregate (cost=980.95..1039.00 rows=1161 width=8) (actual time=73.78..145.90 rows=11497 loops=1)\n -> Group (cost=980.95..1009.98 rows=11610 width=8) (actual time=73.76..115.53 rows=11610 loops=1)\n -> Sort (cost=980.95..980.95 rows=11610 width=8) (actual time=73.75..78.99 rows=11610 loops=1)\n -> Seq Scan on akciz (cost=0.00..197.10 rows=11610 width=8) (actual time=0.01..26.24 rows=11610 loops=1)\n -> Sort (cost=1135.43..1135.43 rows=1172 width=15) (actual time=229.97..308.41 rows=140648 loops=1)\n -> Subquery Scan ad (cost=1017.11..1075.70 rows=1172 width=15) (actual time=94.52..200.64 rows=11610 loops=1)\n -> Aggregate (cost=1017.11..1075.70 rows=1172 width=15) (actual time=94.51..179.57 rows=11610 loops=1)\n -> Group (cost=1017.11..1046.40 rows=11718 width=15) (actual time=94.49..135.00 rows=11718 loops=1)\n -> Sort (cost=1017.11..1017.11 rows=11718 width=15) (actual time=94.47..101.80 rows=11718 loops=1)\n -> Seq Scan on akciz_details (cost=0.00..225.18 rows=11718 width=15) (actual time=0.03..30.11 rows=11718 loops=1)\nTotal runtime: 4878.56 
msec\n\n7.3.2:\nselect version():\n \"PostgreSQL 7.3.2 on i386-redhat-linux-gnu, compiled by GCC i386-redhat-linux-gcc (GCC) 3.2.2 20030213 (Red Hat Linux 8.0 3.2.2-1)\"\n\n Also, I've tried 7.3.2 version binaries from PostgreSQL site for RH73.\n And I've got the same result.\n\nexplain analyze select count(*) from loadview:\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=57642.48..57642.48 rows=1 width=233) (actual time=43799.03..43799.03 rows=1 loops=1)\n -> Subquery Scan loadview (cost=43956.42..55485.18 rows=862919 width=233) (actual time=28013.35..43638.75 rows=147281 loops=1)\n -> Merge Join (cost=43956.42..55485.18 rows=862919 width=233) (actual time=28013.35..43409.03 rows=147281 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".akciz_id)\n -> Sort (cost=42797.70..43165.90 rows=147281 width=197) (actual time=27785.80..28126.86 rows=147281 loops=1)\n Sort Key: ah.id\n -> Merge Join (cost=1115.13..22038.07 rows=147281 width=197) (actual time=133.98..14205.66 rows=147281 loops=1)\n Merge Cond: (\"outer\".rcn_id = \"inner\".rcn_id)\n -> Merge Join (cost=0.00..19524.78 rows=147281 width=189) (actual time=0.14..9419.68 rows=147281 loops=1)\n Merge Cond: (\"outer\".rcn_id = \"inner\".id)\n -> Index Scan using load_rcn_id_idx on loads l (cost=0.00..16659.18 rows=147281 width=181) (actual time=0.07..4486.76 rows=147281 loops=1)\n -> Index Scan using rcn_detail_idx on rcn_details (cost=0.00..624.96 rows=12692 width=8) (actual time=0.02..587.84 rows=151332 loops=1)\n -> Sort (cost=1115.13..1118.03 rows=1161 width=8) (actual time=133.74..214.17 rows=140535 loops=1)\n Sort Key: ah.rcn_id\n -> Subquery Scan ah (cost=968.95..1056.03 rows=1161 width=8) (actual time=46.03..115.21 rows=11497 loops=1)\n -> Aggregate (cost=968.95..1056.03 rows=1161 width=8) (actual time=46.02..100.01 rows=11497 loops=1)\n -> Group (cost=968.95..1027.00 rows=11610 width=8) (actual time=46.00..76.80 rows=11610 loops=1)\n -> Sort (cost=968.95..997.98 rows=11610 width=8) (actual time=45.99..50.45 rows=11610 loops=1)\n Sort Key: rcn_id\n -> Seq Scan on akciz (cost=0.00..185.10 rows=11610 width=8) (actual time=0.01..19.09 rows=11610 loops=1)\n -> Sort (cost=1158.72..1161.65 rows=1172 width=15) (actual time=227.16..332.79 rows=140648 loops=1)\n Sort Key: ad.akciz_id\n -> Subquery Scan ad (cost=1011.11..1098.99 rows=1172 width=15) (actual time=80.77..188.32 rows=11610 loops=1)\n -> Aggregate (cost=1011.11..1098.99 rows=1172 width=15) (actual time=80.76..158.60 rows=11610 loops=1)\n -> Group (cost=1011.11..1069.70 rows=11718 width=15) (actual time=80.73..124.73 rows=11718 loops=1)\n -> Sort (cost=1011.11..1040.40 rows=11718 width=15) (actual time=80.71..88.88 rows=11718 loops=1)\n Sort Key: akciz_id\n -> Seq Scan on akciz_details (cost=0.00..219.18 rows=11718 width=15) (actual time=0.03..28.57 rows=11718 loops=1)\n SubPlan\n -> Index Scan using orders_id_idx on orders (cost=0.00..5.92 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=147281)\n Index Cond: (id = $0)\n -> Index Scan using load_comments_id_idx on load_comments lc (cost=0.00..5.90 rows=1 width=10) (actual time=0.01..0.01 rows=0 loops=147281)\n Index Cond: (id = $1)\n -> Nested Loop (cost=0.00..11.08 rows=1 width=59) (actual time=0.02..0.03 rows=1 loops=147281)\n -> Index Scan using code_id_idx on code_info c (cost=0.00..5.07 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=147281)\n Index Cond: (id 
= $2)\n -> Index Scan using sku_id_idx on sku s (cost=0.00..6.00 rows=1 width=55) (actual time=0.01..0.01 rows=1 loops=147281)\n Index Cond: (s.id = \"outer\".sku_id)\n -> Index Scan using loc_g_id_idx on \"location\" loc (cost=0.00..5.98 rows=1 width=5) (actual time=0.01..0.01 rows=1 loops=147281)\n Index Cond: (id = $3)\n Total runtime: 43825.44 msec\n(41 rows)\n\n-- \nEugene Fokin\nSOLVO Ltd. Company\n", "msg_date": "Tue, 20 May 2003 17:28:42 +0400", "msg_from": "Eugene Fokin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "* Eugene Fokin <[email protected]> [20.05.2003 16:33]:\n> 7.2.1:\n> -> Merge Join (cost=0.00..19885.60 rows=147281 width=8) (actual time=0.08..2059.89 rows=147281 loops=1)\n> -> Index Scan using load_rcn_id_idx on loads l (cost=0.00..17026.36 rows=147281 width=4) (actual time=0.04..786.13 rows=147281 loops=1)\n> -> Index Scan using rcn_detail_idx on rcn_details (cost=0.00..618.30 rows=12692 width=4) (actual time=0.03..510.13 rows=151332 loops=1)\n\nsnip \n\n> 7.3.2:\n> -> Merge Join (cost=0.00..19524.78 rows=147281 width=189) (actual time=0.14..9419.68 rows=147281 loops=1)\n> Merge Cond: (\"outer\".rcn_id = \"inner\".id)\n> -> Index Scan using load_rcn_id_idx on loads l (cost=0.00..16659.18 rows=147281 width=181) (actual time=0.07..4486.76 rows=147281 loops=1)\n> -> Index Scan using rcn_detail_idx on rcn_details (cost=0.00..624.96 rows=12692 width=8) (actual time=0.02..587.84 rows=151332 loops=1)\n\nAs you can see, in 7.2.1 index scan on loads (load_rcn_id_idx) takes 0.04..786.13,\nbut in 7.3.2 - 0.07..4486.76.\n\nAlso, note the difference in the:\n\n7.2.1 \"... rows=147281 width=4) ...\"\n7.3.2 \"... rows=147281 width=181) ...\"\n\nMy guesses:\n\n1. Check your index.\n2. Do vacuum analyze again.\n3. This part:\n(loads l JOIN (SELECT rcn_details.id, rcn_details.date_pour FROM rcn_details) r ON ((r.id = l.rcn_id)))\n\nWhy do you use subselect here? It seems to me, that you can simply join\nwhole table, can't you?\n\nMay be somebody else will point to some other details.\nGood luck!\n\n-- \n\nVictor Yegorov\n", "msg_date": "Tue, 20 May 2003 17:07:13 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "On Tue, May 20, 2003 at 05:07:13PM +0300, Victor Yegorov wrote:\n> \n> As you can see, in 7.2.1 index scan on loads (load_rcn_id_idx) takes 0.04..786.13,\n> but in 7.3.2 - 0.07..4486.76.\n> \n> Also, note the difference in the:\n> \n> 7.2.1 \"... rows=147281 width=4) ...\"\n> 7.3.2 \"... rows=147281 width=181) ...\"\n> \n> My guesses:\n> \n> 1. Check your index.\n> 2. Do vacuum analyze again.\n> 3. This part:\n> (loads l JOIN (SELECT rcn_details.id, rcn_details.date_pour FROM rcn_details) r ON ((r.id = l.rcn_id)))\n> \n\nI've tried to simplify it, nothing happens -- the same effect !\nThe question is -- why it scans my indexes so long ?\nI've created \"clean\" experiment: the same machine, the same dump.\nAnd perform the same procedure for each DB (btw, now it's 7.2.4 vs 7.3.1 :-)):\n\n1. Install DB.\n2. Init DB.\n3. Fix postgresql.conf\n4. Load dump.\n5. vacuum analyze.\n6. query !\n\nDifference is the same like 5 seconds vs 50 seconds...\n\n-- \nEugene Fokin\nSOLVO Ltd. 
Company\n", "msg_date": "Tue, 20 May 2003 19:33:48 +0400", "msg_from": "Eugene Fokin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "* Eugene Fokin <[email protected]> [20.05.2003 18:38]:\n> > (loads l JOIN (SELECT rcn_details.id, rcn_details.date_pour FROM rcn_details) r ON ((r.id = l.rcn_id)))\n> > \n\nTry changing the join above to:\n\nloads l JOIN rcn_details r ON r.id = l.rcn_id\n\nAlso, give the full description of fields, involved in your\nload_rcn_id_idx and rcn_detail_idx indicies.\n\n-- \n\nVictor Yegorov\n", "msg_date": "Tue, 20 May 2003 18:58:01 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "Eugene,\n\n> 7.2.1 \"... rows=147281 width=4) ...\"\n> 7.3.2 \"... rows=147281 width=181) ...\"\n\nUnless we have a serious bug here, Victor changed the definition of his index, \nor of his table, between versions. Or mabye type casting during upgade \nchanged it?\n\nPlease post the definition of \"loads\" and \" load_rcn_id_idx\" in each system, \nVictor. Intentionally or not, you changed it between systems.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 20 May 2003 09:30:20 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "Eugene, Victor,\n\nSorry! I got the two of you mixed up ... who was asking, who was answering. \nPlease transpose your two names in my last post!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 20 May 2003 09:31:32 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3.2 vs 7.1.2 -- Ooops" }, { "msg_contents": "Eugene,\n\nAnother question ... given that 7.3.2's estimates are very similar to 7.2.1's \nestimates, but the real execution time is much slower, is it possible that \nthe 7.3.2 copy of the database is being loaded on a much slower disk?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 20 May 2003 09:35:25 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "Victor Yegorov <[email protected]> writes:\n> As you can see, in 7.2.1 index scan on loads (load_rcn_id_idx) takes\n> 0.04..786.13, but in 7.3.2 - 0.07..4486.76.\n\nThat seems very odd, doesn't it? Is it reproducible? I'm wondering if\nthe 7.2 table was clustered on the index while the 7.3 wasn't.\n\nMost of the cost differential, however, is coming from the fact that 7.3\ndoesn't flatten the view and thus fails to realize that it doesn't need\nto evaluate any of the view's targetlist expressions. Note the lack of\nany \"SubPlans\" in the 7.2 plan, whereas they're accounting for a good\ndeal of time in the 7.3 plan.\n\nThe reason that the view isn't flattened is that pulling up targetlists\ncontaining sub-selects turned out to break some obscure cases involving\njoin alias variables, and by the time we discovered this (after 7.3\nrelease) there was no practical way to fix it except to disable the\noptimization. 
It's fixed properly in CVS tip (7.4 branch) but 7.3.* is\njust going to be slow on such cases; the fix is much too complex to risk\nback-porting.\n\nI'm not sure whether selecting the count from this view is really all\nthat important to Eugene, but if it is he could make an alternate view\nthat has the same FROM clause and nothing interesting in its select\nlist, and then count that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 May 2003 12:40:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3.2 vs 7.1.2 " }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Another question ... given that 7.3.2's estimates are very similar to 7.2.1's\n> estimates, but the real execution time is much slower, is it possible that \n> the 7.3.2 copy of the database is being loaded on a much slower disk?\n\nNah, the big reason for the discrepancy is that 7.3 is unable to prune\naway evaluation of all those sub-selects in the view's target list, per\nmy previous response.\n\nUp till just recently (post-7.3) the planner didn't bother to charge any\nevaluation cost for targetlist expressions, and so the estimated costs\ndon't reflect the difference.\n\n(The reasoning behind that behavior was that the planner couldn't affect\nthe evaluation costs of targetlist expressions by choosing a different\nplan, since the number of rows they'll be computed for will be the same\nin every correct plan. But now that we allow arbitrarily complex stuff\nin sub-selects, that reasoning doesn't hold water anymore --- it's\nimportant to propagate a good estimate of the cost up to the enclosing\nplan. So as of 7.4 we expend the cycles to add in tlist execution time\nestimates.)\n\nI am still interested in the apparent difference in the time taken for\nthat bottom indexscan, though. The width difference that you noticed is\nbecause the unflattened view needs to fetch many more columns of the\ntable than the flattened query needs. But the same number of rows get\nfetched, and approximately the same number of disk blocks ought to get\nread, so it's hard to see why there'd be such a large difference.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 May 2003 13:33:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3.2 vs 7.1.2 " }, { "msg_contents": "On Tue, May 20, 2003 at 06:58:01PM +0300, Victor Yegorov wrote:\n> \n> Try changing the join above to:\n> \n> loads l JOIN rcn_details r ON r.id = l.rcn_id\n> \n\nNothing, I mean - the same result.\n\n>\n> Also, give the full description of fields, involved in your\n> load_rcn_id_idx and rcn_detail_idx indicies.\n>\n\n\\d load_rcn_id_idx:\n\n Index \"public.load_rcn_id_idx\"\n Column | Type\n --------+---------\n rcn_id | integer\n btree, for table \"public.loads\"\n\n\\d loads:\n\n ...\n rcn_id | integer | not null default 0\n ...\n\n\\d rcn_detail_idx:\n\n Index \"public.rcn_detail_idx\"\n Column | Type\n --------+---------\n id | integer\n unique, btree, for table \"public.rcn_details\"\n\n\\d rcn_details:\n\n ...\n id | integer | default nextval('rcn_details_id'::text)\n ...\n\n\\d rcn_details_id\n\n Sequence \"public.rcn_details_id\"\n Column | Type\n ---------------+---------\n sequence_name | name\n last_value | bigint\n increment_by | bigint\n max_value | bigint\n min_value | bigint\n cache_value | bigint\n log_cnt | bigint\n is_cycled | boolean\n is_called | boolean\n\n-- \nEugene Fokin\nSOLVO Ltd. 
Company\n", "msg_date": "Wed, 21 May 2003 11:08:16 +0400", "msg_from": "Eugene Fokin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "On Tue, May 20, 2003 at 09:30:20AM -0700, Josh Berkus wrote:\n> \n> > 7.2.1 \"... rows=147281 width=4) ...\"\n> > 7.3.2 \"... rows=147281 width=181) ...\"\n> \n> Unless we have a serious bug here, Victor changed the definition of his index, \n> or of his table, between versions. Or mabye type casting during upgade \n> changed it?\n> \n> Please post the definition of \"loads\" and \" load_rcn_id_idx\" in each system, \n> Victor. Intentionally or not, you changed it between systems.\n\nNothing changed ! Believe me :-)\nSame dump file, same platform, even the same machine.\nI've post already necessary definitions.\n\n-- \nEugene Fokin\nSOLVO Ltd. Company\n", "msg_date": "Wed, 21 May 2003 11:11:25 +0400", "msg_from": "Eugene Fokin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.3.2 vs 7.1.2" }, { "msg_contents": "On Tue, May 20, 2003 at 12:40:21PM -0400, Tom Lane wrote:\n>\n> [...skipped...]\n> \n> I'm not sure whether selecting the count from this view is really all\n> that important to Eugene, but if it is he could make an alternate view\n> that has the same FROM clause and nothing interesting in its select\n> list, and then count that.\n> \n\nThis is the one sample from the working system for which I'm trying\nto upgrade the database.\n\nAnd you're right. I've removed all subqueries from select list and I've\ngot original 5 seconds ! After that I tried to add one subquery back (the\nsimpliest one) and got the same 40-50 seconds again !\n\n-- \nEugene Fokin\nSOLVO Ltd. Company\n", "msg_date": "Wed, 21 May 2003 11:19:03 +0400", "msg_from": "Eugene Fokin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.3.2 vs 7.1.2" } ]
[ { "msg_contents": "Dear Gurus,\n\nThis is a rather nasty query, built up from several parameters, and it\nproved to be 7--15 times slower in 7.3 than in 7.2. This particular query\ntakes more than 3.5 minutes (4 after vacuum full analyze! (henceforth VFA))\nthat is unacceptable in an interactive client application.\n\nIf you have courage and will to please have a look at the query and/or the\nexplains, you might point out something I can't see at this level of\ncomplexity.\n\nAs for trivial questions:\n\n* The databases were identical a couple of weeks ago, deviated slightly\n since then, but I don't think it may be a cause.\n* The 5% difference in the result set doesn't seem to explain this huge\n diff in performance either.\n* The query has been run on the same server (Linux RedHat 6.1 -- \n historical, isn't it?) with the same load (this one postmaster took >90%\n CPU all the time, in all three cases)\n* Since this query involves quite a large part of the database, I'm not\n willing to post a dump on the list. If a schema-only dump helps, I may\n be able to send it in private email; I approximate it to be ~500k,\n zipped.\n* Also checked a \"lighter\" version of this query (at least, fewer rows). It\n took 223msec on 7.2 and 3658 on 7.3 (VFA). (15x slower) However, it got\n down to 400-500msec (still double of 7.2) when re-queried\n\nFiles are zipped, since 7.3 exp-ana's are over 40k each.\n\nslow.sql: the query.\n72.ana: explain analyze in 7.2\n73.ana: explain analyze in 7.3, before VFA\n73.ana2: explain analyze in 7.3, after VFA\n\nI just hope someone helps me; any little help may prove really useful!\nTIA,\nG.\n------------------------------- cut here -------------------------------", "msg_date": "Thu, 22 May 2003 16:25:54 +0200", "msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "ugly query slower in 7.3, even slower after vacuum full analyze" }, { "msg_contents": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]> writes:\n> This is a rather nasty query, built up from several parameters, and it\n> proved to be 7--15 times slower in 7.3 than in 7.2.\n\nI think you are running into the same subselect-in-targetlist\nshortcoming as Eugene Fokin did:\nhttp://archives.postgresql.org/pgsql-performance/2003-05/msg00204.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 May 2003 12:02:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ugly query slower in 7.3, even slower after vacuum full analyze " }, { "msg_contents": "Szucs,\n\n> This is a rather nasty query, built up from several parameters, and it\n> proved to be 7--15 times slower in 7.3 than in 7.2. This particular query\n> takes more than 3.5 minutes (4 after vacuum full analyze! (henceforth VFA))\n> that is unacceptable in an interactive client application.\n\nPlease read the list archives for the last 3-4 days. Another user reported a \n\"slow query\" problem with 7.3.2; please see if it sounds like yours.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 22 May 2003 09:31:06 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ugly query slower in 7.3, even slower after vacuum full analyze" }, { "msg_contents": "Dear Tom, (or anyone who followed the belowmentioned thread)\n\nI read that thread (more-or-less), but couldn't have noticed the same\nsymptoms in my analyze output. 
So, to summarize my reading on this (please\nconfirm or fix):\n\n* The symptom is the differing width in 7.2 and 7.3\n\n* This causes more hdd work, that takes lots of time (indeed, the hdd was\ngoing crazy)\n\n* The query is probably good as it is; it's 7.3 that's slow (but more\nreliable than 7.2) and 7.4 will most likely fix the problem.\n\nIf all these are correct, that's enough info to me. Hopefully it'll move\nfrom a Cel333 (the developers' server) to an IBM 2x2.4 Xeon with 5-HDD SCSI\nRaid (the business server).\n\nG.\n------------------------------- cut here -------------------------------\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nSent: Thursday, May 22, 2003 6:02 PM\nSubject: Re: [PERFORM] ugly query slower in 7.3, even slower after vacuum\nfull analyze\n\n\n> \"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]> writes:\n> > This is a rather nasty query, built up from several parameters, and it\n> > proved to be 7--15 times slower in 7.3 than in 7.2.\n>\n> I think you are running into the same subselect-in-targetlist\n> shortcoming as Eugene Fokin did:\n> http://archives.postgresql.org/pgsql-performance/2003-05/msg00204.php\n>\n> regards, tom lane\n>\n\n", "msg_date": "Thu, 22 May 2003 18:32:48 +0200", "msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ugly query slower in 7.3, even slower after vacuum full analyze" } ]
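The practical 7.3 workaround, both for this query and for the loadview case in the earlier thread, is to keep per-row sub-selects out of the select list whenever the surrounding query does not need them, for example when only a count is wanted. A sketch using the loadview names from that thread (illustrative only; it assumes the grouped outer joins cannot multiply rows, which holds for the original view definition):

-- same FROM clause as the full view, but nothing expensive in the target
-- list, so 7.3 has no per-row sub-selects to evaluate
CREATE VIEW loadview_count AS
    SELECT l.id
    FROM loads l
         JOIN rcn_details r ON r.id = l.rcn_id
         LEFT JOIN (SELECT min(id) AS id, rcn_id
                    FROM akciz GROUP BY rcn_id) ah ON ah.rcn_id = l.rcn_id
         LEFT JOIN (SELECT max(name::text) AS name, akciz_id
                    FROM akciz_details GROUP BY akciz_id) ad ON ad.akciz_id = ah.id;

SELECT count(*) FROM loadview_count;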
[ { "msg_contents": "Hi all,\n\nsorry for reposting this to the lists, but I feel I posted this at the\nwrong time of day, since now a lot more of you gurus are reading, and I\nreally need some knowledgeable input... thanks for consideration :)\n\n\nI have a question concerning table/key layout.\n\nI need to store an ID value that consists of three numerical elements:\n - ident1 char(5)\n - ident2 char(5)\n - nodeid int4\n\nI need an index on these columns. Insert, delete, and lookup operations\nthis in this need to be as fast as possible. Now I have two options:\n\n(a) creating an index on all three columns, or\n(b) create a single varchar column combining all three components into a\nsingle string, like \"ident1:ident2:nodeid\" and indexing this column only.\n\nThere will be a couple of million rows in this table, the values in\nquestion are not unique.\n\nWhich would be faster in your opinion? (a) or (b)?\n\nThanks for any insight,\n\n\n-- \n >O Ernest E. Vogelsinger\n (\\) ICQ #13394035\n ^ http://www.vogelsinger.at/\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to [email protected] so that your\nmessage can get through to the mailing list cleanly \n\n-- \n >O Ernest E. Vogelsinger\n (\\) ICQ #13394035\n ^ http://www.vogelsinger.at/\n\n\n", "msg_date": "Thu, 22 May 2003 22:41:22 +0200", "msg_from": "Ernest E Vogelsinger <[email protected]>", "msg_from_op": true, "msg_subject": "Q: Structured index - which one runs faster?" }, { "msg_contents": "On Thu, 22 May 2003, Ernest E Vogelsinger wrote:\n\n> Hi all,\n> \n> sorry for reposting this to the lists, but I feel I posted this at the\n> wrong time of day, since now a lot more of you gurus are reading, and I\n> really need some knowledgeable input... thanks for consideration :)\n> \n> \n> I have a question concerning table/key layout.\n> \n> I need to store an ID value that consists of three numerical elements:\n> - ident1 char(5)\n> - ident2 char(5)\n> - nodeid int4\n> \n> I need an index on these columns. Insert, delete, and lookup operations\n> this in this need to be as fast as possible. Now I have two options:\n> \n> (a) creating an index on all three columns, or\n> (b) create a single varchar column combining all three components into a\n> single string, like \"ident1:ident2:nodeid\" and indexing this column only.\n> \n> There will be a couple of million rows in this table, the values in\n> question are not unique.\n> \n> Which would be faster in your opinion? (a) or (b)?\n\nGenerally speaking, b should be faster, but a should be more versatile.\n\n", "msg_date": "Thu, 22 May 2003 16:23:44 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Q: Structured index - which one runs faster?" }, { "msg_contents": "Ernest E Vogelsinger <[email protected]> writes:\n> (a) creating an index on all three columns, or\n> (b) create a single varchar column combining all three components into a\n> single string, like \"ident1:ident2:nodeid\" and indexing this column only.\n\nI can't imagine that (b) is a good idea ... 
it's dubious that you are\nsaving anything on the indexing, and you're sure adding a lot of space\nto the table, not to mention maintenance effort, potential for bugs,\netc.\n\nIt might be worth creating the index so that the \"least non-unique\"\ncolumn is mentioned first, if there's a clear winner in those terms.\nThat would minimize the number of times that comparisons have to look at\nthe additional columns.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 May 2003 18:53:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q: Structured index - which one runs faster? " }, { "msg_contents": "On Thu, 22 May 2003, Ernest E Vogelsinger wrote:\n\n[response only to -performance]\n\n> sorry for reposting this to the lists, but I feel I posted this at the\n> wrong time of day, since now a lot more of you gurus are reading, and I\n> really need some knowledgeable input... thanks for consideration :)\n\nIt just takes time. :)\n\n> I have a question concerning table/key layout.\n>\n> I need to store an ID value that consists of three numerical elements:\n> - ident1 char(5)\n> - ident2 char(5)\n> - nodeid int4\n\nThis seems like a somewhat odd key layout, why char(5) for the first\ntwo parts if they're numeric as well?\n\n> I need an index on these columns. Insert, delete, and lookup operations\n> this in this need to be as fast as possible. Now I have two options:\n>\n> (a) creating an index on all three columns, or\n> (b) create a single varchar column combining all three components into a\n> single string, like \"ident1:ident2:nodeid\" and indexing this column only.\n>\n> There will be a couple of million rows in this table, the values in\n> question are not unique.\n>\n> Which would be faster in your opinion? (a) or (b)?\n\nGenerally, you're probably better off with an index on the three columns.\nOtherwise either your clients need to composite the value for the varchar\ncolumn or the system does in triggers for insert/update.\n\nAlso, what kinds of lookups are you going to be doing? Only lookups based\non all three parts of the key or will you ever be searching based on parts\nof the keys?\n\n", "msg_date": "Thu, 22 May 2003 16:00:54 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Q: Structured index - which one runs faster?" }, { "msg_contents": "Thanks for replying :)\n\nAt 01:00 23.05.2003, Stephan Szabo said:\n--------------------[snip]--------------------\n>On Thu, 22 May 2003, Ernest E Vogelsinger wrote:\n>\n>> I need to store an ID value that consists of three numerical elements:\n>> - ident1 char(5)\n>> - ident2 char(5)\n>> - nodeid int4\n>\n>This seems like a somewhat odd key layout, why char(5) for the first\n>two parts if they're numeric as well?\n\nIt's not odd - ident1 and ident2 are in fact logical identifiers that _are_\ncharacter values, no numbers.\n\n>Generally, you're probably better off with an index on the three columns.\n>Otherwise either your clients need to composite the value for the varchar\n>column or the system does in triggers for insert/update.\n\nThis table will be used by a PHP library accessing it - no direct client\nintervention (except the developers and they should know what they're doing ;-)\n\n>Also, what kinds of lookups are you going to be doing? Only lookups based\n>on all three parts of the key or will you ever be searching based on parts\n>of the keys?\n\nHmm. 
Yes, lookups on parts of the keys will be possible, but only from left\nto right, ident1 having the highest precedence, followed by ident2 and\nfinally by nodeid.\n\nThese columns will never be modified once inserted. The only operations\nthese columns will be affected are insert and delete, and lookup of course.\nI'm not so concerned with delete since this will not happen too often, but\ninserts will, and lookups of course permanently, and both operations must\nbe as fast as possible, even with gazillions of rows...\n\n\n\n-- \n >O Ernest E. Vogelsinger\n (\\) ICQ #13394035\n ^ http://www.vogelsinger.at/\n\n\n", "msg_date": "Fri, 23 May 2003 01:36:14 +0200", "msg_from": "Ernest E Vogelsinger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [ADMIN] Q: Structured index - which one runs faster?" }, { "msg_contents": "At 00:53 23.05.2003, Tom Lane said:\n--------------------[snip]--------------------\n>Ernest E Vogelsinger <[email protected]> writes:\n>> (a) creating an index on all three columns, or\n>> (b) create a single varchar column combining all three components into a\n>> single string, like \"ident1:ident2:nodeid\" and indexing this column only.\n>\n>I can't imagine that (b) is a good idea ... it's dubious that you are\n>saving anything on the indexing, and you're sure adding a lot of space\n>to the table, not to mention maintenance effort, potential for bugs,\n>etc.\n>\n>It might be worth creating the index so that the \"least non-unique\"\n>column is mentioned first, if there's a clear winner in those terms.\n>That would minimize the number of times that comparisons have to look at\n>the additional columns.\n--------------------[snip]-------------------- \n\nThanks for replying :)\n\nDo you know if there's a general performance difference between numeric\n(int4) and character (fixed-size char[5]) columns? The ident1 and ident2\ncolumns are planned to be char[5], only the third column (with least\nprecedence) will be numeric.\n\nThe application is still in the design phase, so I still could fiddle\naround that and make that char[5] numeric with an additional mapping\n(@runtime, not in the DB) if this will increase performance.\n\nThanks,\n\n-- \n >O Ernest E. Vogelsinger\n (\\) ICQ #13394035\n ^ http://www.vogelsinger.at/\n\n\n", "msg_date": "Fri, 23 May 2003 01:43:06 +0200", "msg_from": "Ernest E Vogelsinger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Q: Structured index - which one runs faster? " }, { "msg_contents": "Ernest E Vogelsinger <[email protected]> writes:\n> Do you know if there's a general performance difference between numeric\n> (int4) and character (fixed-size char[5]) columns? The ident1 and ident2\n> columns are planned to be char[5], only the third column (with least\n> precedence) will be numeric.\n\nint4 is certainly faster to compare than char(n), but I wouldn't contort\nyour database design on that basis... if the idents aren't naturally\nintegers, don't force them to be.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 May 2003 20:00:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Q: Structured index - which one runs faster? 
" }, { "msg_contents": "\nOn Fri, 23 May 2003, Ernest E Vogelsinger wrote:\n\n> Thanks for replying :)\n>\n> At 01:00 23.05.2003, Stephan Szabo said:\n> --------------------[snip]--------------------\n> >On Thu, 22 May 2003, Ernest E Vogelsinger wrote:\n> >\n> >> I need to store an ID value that consists of three numerical elements:\n> >> - ident1 char(5)\n> >> - ident2 char(5)\n> >> - nodeid int4\n> >\n> >This seems like a somewhat odd key layout, why char(5) for the first\n> >two parts if they're numeric as well?\n>\n> It's not odd - ident1 and ident2 are in fact logical identifiers that _are_\n> character values, no numbers.\n\nThe reason I mentioned it is that the original said, \"three numerical\nelements\" ;)\n\n> >Also, what kinds of lookups are you going to be doing? Only lookups based\n> >on all three parts of the key or will you ever be searching based on parts\n> >of the keys?\n>\n> Hmm. Yes, lookups on parts of the keys will be possible, but only from left\n> to right, ident1 having the highest precedence, followed by ident2 and\n> finally by nodeid.\n\nThe multi-column index helps for those as well, as long as you put the\ncolumns in the precedence order. If they're ordered ident1,ident2,nodeid\nthen it'll potentially use it for searches on ident1 or ident1 and ident2\nif it thinks that the condition is selective enough.\n\n", "msg_date": "Thu, 22 May 2003 23:42:36 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Q: Structured index - which one runs faster?" }, { "msg_contents": "A related question:\n\nAre any of these indexes redundant:\n\n CREATE UNIQUE INDEX user_list_id_email ON user_list (owner_id,user_email);\n CREATE INDEX user_list_owner_id ON user_list (owner_id);\n CREATE INDEX user_list_oid_created ON user_list (owner_id,user_created);\n\nIn particular, is user_list_owner_id redundant to\nuser_list_oid_created? Will the latter be used for queries such as\n\n SELECT user_fname from user_list where owner_id=34\n\nIf so, I can drop the owner_id index. the _id columns are integers,\ncreated is a datetime, and email is a string. owner_id is also a\nforeign key into the owners table (via REFERENCES), if that matters.\n\nI'd try it out by dropping the index, but reindexing it takes a *LONG*\ntime which I cannot afford to be unavailable.\n\nThanks.\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-240-453-8497\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "23 May 2003 11:09:00 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Q: Structured index - which one runs faster?" }, { "msg_contents": "Ernest-\n\n> (a) creating an index on all three columns, or\n> (b) create a single varchar column combining all three components into a\n> single string, like \"ident1:ident2:nodeid\" and indexing this column only.\n> \n> There will be a couple of million rows in this table, the values in\n> question are not unique.\n\nI'd go with (a). (b) is not very flexible (e.g., lookup by ident2\nonly), and any speed advantage will require knowing in advance the\noptimal key order (i1:i2:n v. n:i2:i1 v. ...). 
I'd expect it would be\ncomparable to a multi-column index for speed.\n\n(a) can really be implemented in 3 ways:\n(a1) an index of all 3 columns\n(a2) an index on /each/ of 3 columns\n(a3) a multi-column index AND separate indices on the others.\n e.g., index (i1,i2,n), and index (i2) and index (n)\n\nThe choice of which is fastest depends a lot on the distribution of keys\nin each column and whether you need to do lookups on only one or two\ncolumns. Again, once you choose (b), you're kinda stuck with treating\nthe compound key as a single entity (without incurring a big performance\nhit); (a) will allow you to experiment with optimal indexing without\naffecting code.\n\nSince it sounds like you've already got the data loaded, I (probably\nothers) would be interested in any timing runs you do.\n\n-Reece\n\n-- \nReece Hart, Ph.D. [email protected], http://www.gene.com/\nGenentech, Inc. 650/225-6133 (voice), -5389 (fax)\nBioinformatics and Protein Engineering\n1 DNA Way, MS-93 http://www.in-machina.com/~reece/\nSouth San Francisco, CA 94080-4990 [email protected], GPG: 0x25EC91A0\n\n", "msg_date": "23 May 2003 09:46:25 -0700", "msg_from": "Reece Hart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Q: Structured index - which one runs faster?" }, { "msg_contents": "On Fri, May 23, 2003 at 11:09:00 -0400,\n Vivek Khera <[email protected]> wrote:\n> A related question:\n> \n> Are any of these indexes redundant:\n> \n> CREATE UNIQUE INDEX user_list_id_email ON user_list (owner_id,user_email);\n> CREATE INDEX user_list_owner_id ON user_list (owner_id);\n> CREATE INDEX user_list_oid_created ON user_list (owner_id,user_created);\n> \n> In particular, is user_list_owner_id redundant to\n> user_list_oid_created? Will the latter be used for queries such as\n\nYes. Any prefix of a multicolumn index can be used for queries. They\n(prefixes) won't be usable by foreign key references because even if the\nindex as a whole is unique, the prefixes won't necessarily be.\n", "msg_date": "Fri, 23 May 2003 11:50:20 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Q: Structured index - which one runs faster?" }, { "msg_contents": "Vivek Khera <[email protected]> writes:\n> Are any of these indexes redundant:\n\n> CREATE UNIQUE INDEX user_list_id_email ON user_list (owner_id,user_email);\n> CREATE INDEX user_list_owner_id ON user_list (owner_id);\n> CREATE INDEX user_list_oid_created ON user_list (owner_id,user_created);\n\n> In particular, is user_list_owner_id redundant to\n> user_list_oid_created?\n\nAny of the three indexes can be used for a search on owner_id alone, so\nyeah, user_list_owner_id is redundant. It would be marginally faster to\nuse user_list_owner_id for such a search, just because it's physically\nsmaller than the other two indexes, but against that you have to balance\nthe extra update cost of maintaining the additional index.\n\nAlso, I can imagine scenarios where even a pure SELECT query load could\nfind the extra index to be a net loss: if you have a mix of queries that\nuse two or all three indexes, and the indexes don't fit in kernel disk\ncache but just one or two would, then you'll lose on extra I/O as the\nindexes compete for cache space. 
Not sure how likely that scenario is,\nbut it's something to think about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 May 2003 13:38:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Q: Structured index - which one runs faster? " }, { "msg_contents": ">>>>> \"TL\" == Tom Lane <[email protected]> writes:\n\n>> In particular, is user_list_owner_id redundant to\n>> user_list_oid_created?\n\nTL> Any of the three indexes can be used for a search on owner_id alone, so\nTL> yeah, user_list_owner_id is redundant. It would be marginally faster to\nTL> use user_list_owner_id for such a search, just because it's physically\nTL> smaller than the other two indexes, but against that you have to balance\nTL> the extra update cost of maintaining the additional index.\n\nThis is great info. That extra index is gonna be nuked in about 37.23\nseconds... It takes up a lot of space and is wasting time with\nupdates and inserts, which happen a *lot* on that table (nearly 10\nmillion rows).\n\nThanks!\n\n", "msg_date": "Fri, 23 May 2003 14:04:28 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Q: Structured index - which one runs faster? " }, { "msg_contents": "On 23 May 2003 11:09:00 -0400, Vivek Khera <[email protected]> wrote:\n> CREATE UNIQUE INDEX user_list_id_email ON user_list (owner_id,user_email);\n> CREATE INDEX user_list_owner_id ON user_list (owner_id);\n> CREATE INDEX user_list_oid_created ON user_list (owner_id,user_created);\n>\n>In particular, is user_list_owner_id redundant to\n>user_list_oid_created?\n\nIn theory yes, but in practice it depends ...\n\n> Will the latter be used for queries such as\n>\n> SELECT user_fname from user_list where owner_id=34\n\nAll other things being equal, the planner tends to estimate higher\ncosts for the multi column index. This has to do with its attempt to\nadjust correlation for the additional index columns. So unless the\nphysical order of tuples is totally unrelated to owner_id, I'd expect\nit to choose the single column index.\n\n>If so, I can drop the owner_id index.\n\nIf the planner estimates the cost for an user_list_id_email or\nuser_list_oid_created index scan lower than for a seq scan, you will\nnotice no difference.\n\nBut under unfortunate circumstances it might choose a seq scan ...\n\nServus\n Manfred\n", "msg_date": "Fri, 23 May 2003 20:30:03 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Q: Structured index - which one runs faster?" } ]
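To make the indexing advice in this thread concrete, here is a minimal sketch of option (a) as recommended above; the table name is invented, the column names are the ones from the original post:

    CREATE TABLE nodes (
        ident1  char(5) NOT NULL,
        ident2  char(5) NOT NULL,
        nodeid  int4    NOT NULL
    );

    -- one multi-column index, columns ordered by lookup precedence
    CREATE INDEX nodes_ident1_ident2_nodeid ON nodes (ident1, ident2, nodeid);

    -- any left-to-right prefix of the key can use that same index:
    SELECT * FROM nodes WHERE ident1 = 'AAAAA';
    SELECT * FROM nodes WHERE ident1 = 'AAAAA' AND ident2 = 'BBBBB';
    SELECT * FROM nodes WHERE ident1 = 'AAAAA' AND ident2 = 'BBBBB' AND nodeid = 42;

Whether the planner actually picks the index for the shorter prefixes still depends on how selective it estimates the condition to be, as noted above.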
[ { "msg_contents": "Hello,\n\nI have a database with the following layout:\n\n searchterms=# \\d+ searches_2002\n\t\tTable \"public.searches_2002\"\n Column | Type | Modifiers | Description \n -----------+------------------------+-----------+-------------\n srchdate | date | not null | \n srchtime | time without time zone | not null | \n client_ip | inet | not null | \n srchquery | character varying(50) | not null | \n fhvalue | smallint | | \n Indexes: searches_2002_client_ip btree (client_ip),\n\t searches_2002_srchdate btree (srchdate),\n\t searches_2002_srchdatetime btree (srchdate, srchtime),\n\t searches_2002_srchquery btree (srchquery),\n\t searches_2002_srchquery_lcase btree (lower(srchquery)),\n\t searches_2002_srchquery_withfh btree (srchquery, fhvalue),\n\t searches_2002_srchtime btree (srchtime)\n\nThere are no uniqueness properties that would make it possible for this table\nto have a primary key, as it is a list of searches performed on a search\nengine and the users' behaviour is, well... umm, odd, to be mild. :)\n\nAlso, do note that this is a test table, so nevermind the excessive amount of\nindexes - performance is not an issue here, I am still evaluating the need and\nbenefits of having various indexes on those columns.\n\nThe particular case I am interested is this: when executing queries involving\npattern searches using various operators on srchquery, none of the indexes are\nused in certain cases, namely those LIKE and regexp filters that start with\na wildcard.\n\nThis makes perfect sense, because wildcard pattern searches that start with a\nwildcard, can not really benefit from an index scan, because a sequential scan\nis probably going to be faster: we are only going to benefit from scanning an\nindex in those special cases where the wildcard evaluates to a zero-length\nstring.\n\nOne example of a query plan:\n\n searchterms=# explain select count(*)\n\t\t from searches_2002\n\t\t\twhere srchquery like '%curriculum%';\n\t\t\t\t QUERY PLAN \n --------------------------------------------------------------------------\n Aggregate (cost=4583.26..4583.26 rows=1 width=0)\n -> Seq Scan on searches_2002 (cost=0.00..4583.26 rows=1 width=0)\n\t Filter: (srchquery ~~ '%curriculum%'::text)\n\nThere is 211061 records in this test table, but the real-life tables would\ncontain a much much larger amount of data, more like 50+ million rows.\n\nThis promise of a hell on earth trying to optimize performance makes me wonder:\nwould there be a sensible way/reason for avoiding sequential scans on queries\nthat start with a wildcard, and would avoiding sequential scans even be\nfeasible in such cases? Or in other words, can I somehow optimize LIKE and\nregexp queries that start with a wildcard?\n\nTIA,\n-- \n Grega Bremec\n System Administration & Development Support\n grega.bremec-at-noviforum.si\n http://najdi.si/\n http://www.noviforum.si/\n", "msg_date": "Tue, 27 May 2003 21:09:08 +0200", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": true, "msg_subject": "Wildcard searches & performance question" }, { "msg_contents": "On Tue, May 27, 2003 at 09:09:08PM +0200, Grega Bremec wrote:\n> that start with a wildcard, and would avoiding sequential scans even be\n> feasible in such cases? Or in other words, can I somehow optimize LIKE and\n> regexp queries that start with a wildcard?\n\nNot really. But it sounds like you might be a candidate for full\ntext indexing. 
See contrib/tsearch for one implementation.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 27 May 2003 15:37:00 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wildcard searches & performance question" }, { "msg_contents": "What you want is full text searching. To see it in action go here:\n\nfts.postgresql.org\n\nTo download it go here:\n\nhttp://openfts.sourceforge.net/\n\nThere is also the older, and slightly slower full text indexing engine, \nincluded in the /contrib/fulltextindex directory. It's a little more \nwrung out, but also not as likely to get maintenance in the future.\n\nBasically, full text indexing does exactly what you're asking for by \nindexing each row inserted by each word in it (that isn't a noise word \nlike \"the\" or \"a\") and then uses the indexes created for its searches.\n\nOn Tue, 27 May 2003, Grega Bremec wrote:\n\n> Hello,\n> \n> I have a database with the following layout:\n> \n> searchterms=# \\d+ searches_2002\n> \t\tTable \"public.searches_2002\"\n> Column | Type | Modifiers | Description \n> -----------+------------------------+-----------+-------------\n> srchdate | date | not null | \n> srchtime | time without time zone | not null | \n> client_ip | inet | not null | \n> srchquery | character varying(50) | not null | \n> fhvalue | smallint | | \n> Indexes: searches_2002_client_ip btree (client_ip),\n> \t searches_2002_srchdate btree (srchdate),\n> \t searches_2002_srchdatetime btree (srchdate, srchtime),\n> \t searches_2002_srchquery btree (srchquery),\n> \t searches_2002_srchquery_lcase btree (lower(srchquery)),\n> \t searches_2002_srchquery_withfh btree (srchquery, fhvalue),\n> \t searches_2002_srchtime btree (srchtime)\n> \n> There are no uniqueness properties that would make it possible for this table\n> to have a primary key, as it is a list of searches performed on a search\n> engine and the users' behaviour is, well... umm, odd, to be mild. 
:)\n> \n> Also, do note that this is a test table, so nevermind the excessive amount of\n> indexes - performance is not an issue here, I am still evaluating the need and\n> benefits of having various indexes on those columns.\n> \n> The particular case I am interested is this: when executing queries involving\n> pattern searches using various operators on srchquery, none of the indexes are\n> used in certain cases, namely those LIKE and regexp filters that start with\n> a wildcard.\n> \n> This makes perfect sense, because wildcard pattern searches that start with a\n> wildcard, can not really benefit from an index scan, because a sequential scan\n> is probably going to be faster: we are only going to benefit from scanning an\n> index in those special cases where the wildcard evaluates to a zero-length\n> string.\n> \n> One example of a query plan:\n> \n> searchterms=# explain select count(*)\n> \t\t from searches_2002\n> \t\t\twhere srchquery like '%curriculum%';\n> \t\t\t\t QUERY PLAN \n> --------------------------------------------------------------------------\n> Aggregate (cost=4583.26..4583.26 rows=1 width=0)\n> -> Seq Scan on searches_2002 (cost=0.00..4583.26 rows=1 width=0)\n> \t Filter: (srchquery ~~ '%curriculum%'::text)\n> \n> There is 211061 records in this test table, but the real-life tables would\n> contain a much much larger amount of data, more like 50+ million rows.\n> \n> This promise of a hell on earth trying to optimize performance makes me wonder:\n> would there be a sensible way/reason for avoiding sequential scans on queries\n> that start with a wildcard, and would avoiding sequential scans even be\n> feasible in such cases? Or in other words, can I somehow optimize LIKE and\n> regexp queries that start with a wildcard?\n> \n> TIA,\n> \n\n", "msg_date": "Tue, 27 May 2003 13:48:39 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wildcard searches & performance question" }, { "msg_contents": "Grega,\n\nSee www.openfts.org for a tool to do what you need. There's also a simpler \none in /contrib, as Andrew mentioned.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 27 May 2003 12:55:46 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wildcard searches & performance question" }, { "msg_contents": "> feasible in such cases? Or in other words, can I somehow optimize LIKE and\n> regexp queries that start with a wildcard?\n\nIf they start with a wildcard, but end in character data you could\nreverse the string and index that...\n\nIf they start and end with a wildcard, your best bet is a full text\nindexing solution (various contrib modules).\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "27 May 2003 16:06:26 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wildcard searches & performance question" }, { "msg_contents": "Thank you very much for all your suggestions.\n\nI am planning on investing some time into trying out a couple FT indexes. 
Not\nminding the fact most of the queries are going to be exact phrase substring\nsearches, performance will most probably benefit from it, so at least some of\nwhat I'm after is achieved that way.\n\nI shall be getting back with reports, in case anyone is interested.\n\nCheers,\n-- \n Grega Bremec\n System Administration & Development Support\n grega.bremec-at-noviforum.si\n http://najdi.si/\n http://www.noviforum.si/\n", "msg_date": "Wed, 28 May 2003 22:52:56 +0200", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wildcard searches & performance question" }, { "msg_contents": "On Wed, 28 May 2003, Grega Bremec wrote:\n\n> Thank you very much for all your suggestions.\n> \n> I am planning on investing some time into trying out a couple FT indexes. Not\n> minding the fact most of the queries are going to be exact phrase substring\n> searches, performance will most probably benefit from it, so at least some of\n> what I'm after is achieved that way.\n> \n> I shall be getting back with reports, in case anyone is interested.\n\nBe sure to look at OpenFTS http://sourceforge.net/projects/openfts/\n\n", "msg_date": "Wed, 28 May 2003 16:46:26 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wildcard searches & performance question" } ]
[ { "msg_contents": "Subject: Re: [PERFORM] [SQL] Unanswered Questions WAS: An unresolved performance\n\n* scott.marlowe ([email protected]) wrote:\n\n\n> On Thu, 8 May 2003, johnnnnnn wrote:\n> \n> > > I hate to point this out, but \"TIP 4\" is getting a bit old and the 6\n> \n> Also, some tips might well cross over, like say, vacuum and analyze \n> regularly. Hmmm. Sounds like a job for a relational database :-)\n\nDid this get carried any further?\n\nI'd love to see something come out of it. But I'd really like to see more\nof the kind of tricks and tips, like how to dump all databases using a\nscript of 3 lines(?) I'm sure I've seen it before. The extra lines I\ndon't think would be a major pain given the added value.\n\nEven if it's too much, how about somewhere on techdocs for the script and\na link in the tips 'n tricks line?\n\n\nPete\n:wq\n", "msg_date": "Wed, 28 May 2003 21:28:15 +1000", "msg_from": "Peter Lavender <[email protected]>", "msg_from_op": true, "msg_subject": "None" } ]
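On the three-line dump-all-databases script mentioned above: pg_dumpall already produces one combined dump of every database, and a per-database variant is roughly three lines of shell. This is a sketch only, assuming simple database names, an existing backup directory, and a user that may connect to every database:

    #!/bin/sh
    for db in `psql -t -c "SELECT datname FROM pg_database WHERE NOT datistemplate" template1`; do
        pg_dump "$db" > /var/backups/pgsql/"$db".dump
    done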
[ { "msg_contents": "Having grepped the web, it's clear that this isn't the first or last \ntime this issue will be raised.\n\nMy application relies heavily on IN lists. The lists are primarily \nconstant integers, so queries look like:\n\nSELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n\nPerformance is critical, and the size of these lists depends a lot on \nhow the larger 3-tier applicaiton is used,\nbut it wouldn't be out of the question to retrieve 3000-10000 items.\n\nPostgreSQL 7.3.2 seems to have a lot of trouble with large lists. \n\nI ran an experiment that ran queries on a table of two integers (ID, \nVAL), where ID is a primary key and the subject\nof IN list predicates. The test used a table with one million rows ID \nis appropriately indexed,\nand I have VACUUMED/analyzed the database after table load.\n\nI ran tests on in-lists from about 100 to 100,000 entries.\n\nI also ran tests where I picked the rows out one-by-one using \nparameterized statements, e.g.\n\nSELECT val FROM table WHERE id = ?\n\nI'm trying to decide how much I should use parameterized statements and \nwhen to work around buffer size limitations\nin JDBC transport, general query processing, etc.\n\nSo here are my questions as a prelude to the data noted below:\n\n1) What is the acceptable limit of jdbc Statement buffer sizes for \nexecuteQuery()?\n (Presumably it's not really a JDBC question, but a PostgreSQL \nquery/buffering/transport question).\n\n2) What is the expected acceptable limit for the number of items in an \nIN list predicate such as\n those used here. (List of constants, not subselects).\n\n3) What should I expect for performance capabilities/limitations from \nPostgreSQL on this type of problem?\n (Set my expectations, perhaps they're unrealistic).\n \n4) What are my alternatives for picking specific rows for thousands of \nelements with sub-second response times\n if not IN lists? (This is crucial!)\n\n---------------------------------------------\n\nHere is a summary of my observations of query times, and I've attached \nthe test program (more on that below).\n\n1) PostgreSQL exhibits worse-than-linear performance behavior with \nrespect to IN list size.\n This is bad.\n\n2) Parameterized statements exhibit the expected linear performance \ncharacteristics.\n\n3) The break-even point for using IN lists vs. parameterized statements \nin my environment\n\n (RedHat Linux 9.0, PostgreSQL 7.3.2, 512MB memory, 7200RPM 100UDMA \nIDE disks, AMD1600Mhz)\n\n is about 700 items in the IN list. Beyond that, IN the IN list \nscalability curve makes it impractical.\n\n4) God help you if you haven't vacuum/analyzed that the newly loaded \ntable. Without this,\n IN list processing is even worse!\n\n For just 10 elements in the IN list:\n\n *Without* VACUUMDB, IN lists suck beyond measure:\n Elapsed time for 10 IN list elements: 2638 ms\n Elapsed time for 10 parameterized elements: 9 ms\n\n *With* VACUUMDB: IN lists recover a bit:\n Elapsed time for 10 IN list elements: 10 ms\n Elapsed time for 10 parameterized elements: 24 ms\n\n However it's VERY interesting to note that parameterized statements \nworked well. That implies\n probable disparity in plan generation. 
It's worth noting that it \ndidn't *feel* like I was getting the same\n delay when I ran the query from the 'psql' client, but since it \ndoesn't report times I can't be sure.\n\n5) Rest of my results are vacuumed, (and listed in the attached program \nin detail).\n \n The interesting points are:\n\n For an IN list of 700 elements:\n\n MySQL 3.23.56 (with INNODB tables) takes 19ms, 73ms with \nparameterized statements.\n PostgreSQL takes 269ms, 263ms with parameterized statements.\n\n For larger lists, MySQL happily processed a 90,000 element IN list \nin 1449ms,\n 9283 ms using parameterized statements.\n\n PostgreSQL craps out trying to process 8000 elements with the error:\n out of free buffers: time to abort! \n \n PostgreSQL takes 45,566ms for 7000 elements in an IN list (ouch!) \n, and 2027ms for a parameterized statement.\n MySQL easily beats that with 10 times the data.\n\n6) Using a remote client on the lan (10/100) to run the client piece on \na separate machine from the database\n server yielded little change int he results. Prepared statements \nworked pretty well even with actual wire latency,\n surprise! (Of course it's very little latency on my lan, not like, \nsay, the database server running in a different city\n which customers have been known to do).\n \nThe MySQL and PostgreSQL installations are the default RedHat 9.0 \ndistribution packages,\nI haven't tweaked either's installation parameters. (though MySQL was \nupdated by RedHat from 3.23.54a to 3.23.56\nas part of an automatic upgrade).\n\nMy goal here isn't to say \"why aren't you like MySQL\". I've been using \nPostgreSQL for a year as the development database of choice\nwith satisfactory results. But I won't be able to support customers who \nwant to use PostgreSQL on large deployments of my products\nif I can't selectively retrieve data for several thousand elements in \nsub-second times.\n\n(PostgreSQL devos, if you want a feel good bullet, note that I don't \nsupport MySQL at all since lack of MVCC transactions\nis a showstopper from a multi-user performance standpoint).\n\nSo I'm looking for (a) solutions, answers to my questions above, and (b) \na statement of \"we're on this\" or \"you're stuck with it\" from\nPostgreSQL devos who know.\n\n----------------------------------------------------------------\nOn the attached program. (a Java JDBC program) Sorry, you can't just run \nit \"as is\". Search for formfeeds (^L) if you want\nto skip all the result data I logged. Compilation is straightforward, \nsimply supply the location of your JDBC jar file for compiling and running\n(mine is noted in the program).\n\nFirst, move the \"if (false)\" element below the table creation statements \nand run the program to create the table.\nThen VACUUM/analyze your database (suuuuure would be nice not to have to \nvacuum).\nThen move the \"if (false)\" element above the table creation so you won't \nhave to do it every time.\nThen move past the formfeed and adjust the 'tryAmount' for loop to test \nthe amounts you're interested.\n100 to 1000 by 100's is a good starting point.\n\nIgnore the section (part of the if (false) logic) that attempts to \nfigure out what the largest query size is. 
\nUnless you want to reproduce a hung postmaster in a CPU loop, which is \nwhat I got when I tried to run that logic,\nthough whan I ran it I was missing the closing ')' in my IN list, which \nI've since added to that code.\n\nThanks for any help!\n\nDave", "msg_date": "Wed, 28 May 2003 08:51:49 -0400", "msg_from": "Dave Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "IN list processing performance (yet again)" }, { "msg_contents": "On Wednesday 28 May 2003 18:21, Dave Tenny wrote:\n> Having grepped the web, it's clear that this isn't the first or last\n> time this issue will be raised.\n>\n> My application relies heavily on IN lists. The lists are primarily\n> constant integers, so queries look like:\n>\n> SELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n\nHow do you derive this list of number? If it is from same database, can you \nrewrite the query using a join statement?\n\nHTH\n\n Shridhar\n\n", "msg_date": "Wed, 28 May 2003 18:46:38 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "Dave Tenny wrote:\n\n> Having grepped the web, it's clear that this isn't the first or last \n> time this issue will be raised.\n>\n> My application relies heavily on IN lists. The lists are primarily \n> constant integers, so queries look like:\n>\n> SELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n>\n> Performance is critical, and the size of these lists depends a lot on \n> how the larger 3-tier applicaiton is used,\n> but it wouldn't be out of the question to retrieve 3000-10000 items.\n>\n> PostgreSQL 7.3.2 seems to have a lot of trouble with large lists.\n> I ran an experiment that ran queries on a table of two integers (ID, \n> VAL), where ID is a primary key and the subject\n> of IN list predicates. The test used a table with one million rows \n> ID is appropriately indexed,\n> and I have VACUUMED/analyzed the database after table load.\n>\n> I ran tests on in-lists from about 100 to 100,000 entries. \n\nHi Dave,\n\nit sounds as if that IN-list is created by the application. I wonder if \nthere are really so many variances and combinations of it or whether you \ncould invent an additional column, which groups all those individual \nvalues. If possible, you could reduce your IN list to much fewer values, \nand probably would get better performance (using an index on that col, \nof course).\n\nRegards,\n\nAndreas\n\n\n\n", "msg_date": "Wed, 28 May 2003 15:19:22 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": ">\n> My application relies heavily on IN lists. 
The lists are primarily\n> constant integers, so queries look like:\n>\n> SELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n>\n> Performance is critical, and the size of these lists depends a lot on\n> how the larger 3-tier applicaiton is used,\n> but it wouldn't be out of the question to retrieve 3000-10000 items.\n>\n> PostgreSQL 7.3.2 seems to have a lot of trouble with large lists.\n\nyou should rewrite your query if the query is created from an applition:\n\nSELECT val\n FROM table\n WHERE id between 43 and 100002\n AND id IN (43, 49, 1001, 100002, ...)\n\nwhere 43 is the min and 100002 the max of all values.\n\nI had this case with postgresql 7.2 and the planner made much smarter\nchoices in my case.\n\nRegards,\n Mario Weilguni\n\n\n", "msg_date": "Wed, 28 May 2003 16:31:37 +0200", "msg_from": "\"Mario Weilguni\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "\nOn Wed, 28 May 2003, Dave Tenny wrote:\n\n> Having grepped the web, it's clear that this isn't the first or last\n> time this issue will be raised.\n>\n> My application relies heavily on IN lists. The lists are primarily\n> constant integers, so queries look like:\n>\n> SELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n>\n> Performance is critical, and the size of these lists depends a lot on\n> how the larger 3-tier applicaiton is used,\n> but it wouldn't be out of the question to retrieve 3000-10000 items.\n>\n> PostgreSQL 7.3.2 seems to have a lot of trouble with large lists.\n\nIt gets converted into a sequence like col=list[0] or col=list[1] and it\nseems the planner/optimizer is taking at least a large amount of time for\nme given that explain takes just over 80 seconds for a 9900 item list on\nmy machine (I don't have a data filled table to run the actual query\nagainst).\n\nThe best plan may be generated right now from making a temporary\ntable, copying the values into it, and joining.\n\n> 2) What is the expected acceptable limit for the number of items in an\n> IN list predicate such as\n> those used here. (List of constants, not subselects).\n\nAs a note, 7.4 by default seems to limit it to 10000 unless you up\nmax_expr_depth afaics.\n\n\n\n", "msg_date": "Wed, 28 May 2003 07:54:26 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "A join isn't an option, these elements come a a selection of entity ID's \nthat are specific to some client context.\nSome other people suggested joins too. \n\nConsider it something like this, say there's a database that represents \nthe entire file system content\nof a set of machines, hundreds of thousands of files. A single user \nwants to do something\nrelated to the ID's of 3000 files. The requests for those 3000 files \ncan be built up in a number of ways,\nnot all of which rely on data in the database. So I need to retrieve \ndata on those 3000 files using IN lists or some alternative.\n\nDave\n\nShridhar Daithankar wrote:\n\n>On Wednesday 28 May 2003 18:21, Dave Tenny wrote:\n> \n>\n>>Having grepped the web, it's clear that this isn't the first or last\n>>time this issue will be raised.\n>>\n>>My application relies heavily on IN lists. The lists are primarily\n>>constant integers, so queries look like:\n>>\n>>SELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n>> \n>>\n>\n>How do you derive this list of number? 
If it is from same database, can you \n>rewrite the query using a join statement?\n>\n>HTH\n>\n> Shridhar\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n>\n> \n>\n\n\n\n\n\n\n\nA join isn't an option, these elements come a a selection of entity\nID's that are specific to some client context.\nSome other people suggested joins too.  \n\nConsider it something like this, say there's a database that represents\nthe entire file system content\nof a set of machines, hundreds of thousands of files.  A single user\nwants to do something\nrelated to the ID's of 3000 files.  The requests for those 3000 files\ncan be built up in a number of ways,\nnot all of which rely on data in the database.  So I need to retrieve\ndata on those 3000 files using IN lists or some alternative.\n\nDave\n\nShridhar Daithankar wrote:\n\nOn Wednesday 28 May 2003 18:21, Dave Tenny wrote:\n \n\nHaving grepped the web, it's clear that this isn't the first or last\ntime this issue will be raised.\n\nMy application relies heavily on IN lists. The lists are primarily\nconstant integers, so queries look like:\n\nSELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n \n\n\nHow do you derive this list of number? If it is from same database, can you \nrewrite the query using a join statement?\n\nHTH\n\n Shridhar\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster", "msg_date": "Wed, 28 May 2003 13:58:14 -0400", "msg_from": "Dave Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "Andreas Pflug wrote:\n\n> Dave Tenny wrote:\n>\n>> Having grepped the web, it's clear that this isn't the first or last \n>> time this issue will be raised.\n>>\n>> My application relies heavily on IN lists. The lists are primarily \n>> constant integers, so queries look like:\n>>\n>> SELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n>>\n>> Performance is critical, and the size of these lists depends a lot on \n>> how the larger 3-tier applicaiton is used,\n>> but it wouldn't be out of the question to retrieve 3000-10000 items.\n>>\n>> PostgreSQL 7.3.2 seems to have a lot of trouble with large lists.\n>> I ran an experiment that ran queries on a table of two integers (ID, \n>> VAL), where ID is a primary key and the subject\n>> of IN list predicates. The test used a table with one million rows \n>> ID is appropriately indexed,\n>> and I have VACUUMED/analyzed the database after table load.\n>>\n>> I ran tests on in-lists from about 100 to 100,000 entries. \n>\n>\n> Hi Dave,\n>\n> it sounds as if that IN-list is created by the application. I wonder \n> if there are really so many variances and combinations of it or \n> whether you could invent an additional column, which groups all those \n> individual values. If possible, you could reduce your IN list to much \n> fewer values, and probably would get better performance (using an \n> index on that col, of course).\n\n\nThere are over 50 tables in the schema,\nand dozens of client commands that manipulate the schema in a \npersistent-checkout kind of way over time, as well as spurious reporting \nrequests\nthat require incredibly complex filtering and combination of data from \nmany tables.\nI'm pretty much up to my keister data (and am resisting impulses for \ndenormalization), so this approach\nprobably isn't viable for me. 
Now I *could* create a temporary table \nwith the group of values, but I suspect the cost of that\nsubstantially outweighs the negative performance of current IN lists or \nparameterized statements.\n\nI'm reminded to relay to the PostgreSQL devos that I might be able to do \nmore in the join or subquery department if\nPostgreSQL had better performing MAX functions and a FIRST function for \nselecting rows from groups.\n(\"Performing\" being the operative word here, since the extensible \narchitecture of PostgreSQL currently makes for poorly\nperforming MAX capabilities and presumably similar user defined \naggregate functions).\n\n>\n> Regards,\n>\n> Andreas\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Wed, 28 May 2003 14:08:02 -0400", "msg_from": "Dave Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "Mario Weilguni wrote:\n\n>>My application relies heavily on IN lists. The lists are primarily\n>>constant integers, so queries look like:\n>>\n>>SELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n>>\n>>Performance is critical, and the size of these lists depends a lot on\n>>how the larger 3-tier applicaiton is used,\n>>but it wouldn't be out of the question to retrieve 3000-10000 items.\n>>\n>>PostgreSQL 7.3.2 seems to have a lot of trouble with large lists.\n>> \n>>\n>\n>you should rewrite your query if the query is created from an applition:\n>\n>SELECT val\n> FROM table\n> WHERE id between 43 and 100002\n> AND id IN (43, 49, 1001, 100002, ...)\n>\n>where 43 is the min and 100002 the max of all values.\n>\n>I had this case with postgresql 7.2 and the planner made much smarter\n>choices in my case.\n>\n>Regards,\n> Mario Weilguni\n> \n>\nVery interesting! I tried it out, but it didn't appreciably change the \nthresholds in my results for going by for IN list\nsizes 100 - 1000. It's also likely to be of use only if the range for \nthe between is fairly restricted,\nwhich isn't necessarily characteristic of my data.\n\nDave\n\n\n\n\n\n\n\nMario Weilguni wrote:\n\n\nMy application relies heavily on IN lists. The lists are primarily\nconstant integers, so queries look like:\n\nSELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n\nPerformance is critical, and the size of these lists depends a lot on\nhow the larger 3-tier applicaiton is used,\nbut it wouldn't be out of the question to retrieve 3000-10000 items.\n\nPostgreSQL 7.3.2 seems to have a lot of trouble with large lists.\n \n\n\nyou should rewrite your query if the query is created from an applition:\n\nSELECT val\n FROM table\n WHERE id between 43 and 100002\n AND id IN (43, 49, 1001, 100002, ...)\n\nwhere 43 is the min and 100002 the max of all values.\n\nI had this case with postgresql 7.2 and the planner made much smarter\nchoices in my case.\n\nRegards,\n Mario Weilguni\n \n\nVery interesting!  I tried it out, but it didn't appreciably change the\nthresholds in my results for going by for IN list\nsizes 100 - 1000.  
It's also likely to be of use only if the range for\nthe between is fairly restricted,\nwhich isn't necessarily characteristic of my data.\n\nDave", "msg_date": "Wed, 28 May 2003 14:17:21 -0400", "msg_from": "Dave Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "> I'm reminded to relay to the PostgreSQL devos that I might be able to do\n> more in the join or subquery department if\n> PostgreSQL had better performing MAX functions and a FIRST function for\n> selecting rows from groups.\n> (\"Performing\" being the operative word here, since the extensible\n> architecture of PostgreSQL currently makes for poorly\n> performing MAX capabilities and presumably similar user defined\n> aggregate functions).\n\nMIN/MAX is almost in every case replaceable:\nselect bar\n from foo\n order by bar limit 1;\n\ninstead of\nselect max(bar) from foo;\n\n\n", "msg_date": "Wed, 28 May 2003 20:29:43 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "On Wed, May 28, 2003 at 14:08:02 -0400,\n Dave Tenny <[email protected]> wrote:\n> Andreas Pflug wrote:\n> \n> I'm reminded to relay to the PostgreSQL devos that I might be able to do \n> more in the join or subquery department if\n> PostgreSQL had better performing MAX functions and a FIRST function for \n> selecting rows from groups.\n> (\"Performing\" being the operative word here, since the extensible \n> architecture of PostgreSQL currently makes for poorly\n> performing MAX capabilities and presumably similar user defined \n> aggregate functions).\n\nHave you tried replacing max with a subselect that uses order by and limit?\n", "msg_date": "Wed, 28 May 2003 13:39:14 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "On Wed, May 28, 2003 at 13:58:14 -0400,\n Dave Tenny <[email protected]> wrote:\n> A join isn't an option, these elements come a a selection of entity ID's \n> that are specific to some client context.\n> Some other people suggested joins too. \n\nYou can union the values together and then join (or use where exists) with the\nresult. This may not be faster and you may not be able to union several\nthousand selects together in a single statement. 
But it shouldn't be too\nmuch work to test it out.\n", "msg_date": "Wed, 28 May 2003 13:41:50 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "Mario Weilguni wrote:\n\n>>I'm reminded to relay to the PostgreSQL devos that I might be able to do\n>>more in the join or subquery department if\n>>PostgreSQL had better performing MAX functions and a FIRST function for\n>>selecting rows from groups.\n>>(\"Performing\" being the operative word here, since the extensible\n>>architecture of PostgreSQL currently makes for poorly\n>>performing MAX capabilities and presumably similar user defined\n>>aggregate functions).\n>> \n>>\n>\n>MIN/MAX is almost in every case replaceable:\n>select bar\n> from foo\n> order by bar limit 1;\n>\n>instead of\n>select max(bar) from foo;\n> \n>\nYup, been there, done that, but thanks, it's a good tidbit for the \npostgresql unaware.\n\nThere are some places however where it doesn't work well in query logic, \nthough I don't have an example off the top of my head\nsince I've worked around it in all my queries.\n\n\n\n\n\n\n\n\n\nMario Weilguni wrote:\n\n\nI'm reminded to relay to the PostgreSQL devos that I might be able to do\nmore in the join or subquery department if\nPostgreSQL had better performing MAX functions and a FIRST function for\nselecting rows from groups.\n(\"Performing\" being the operative word here, since the extensible\narchitecture of PostgreSQL currently makes for poorly\nperforming MAX capabilities and presumably similar user defined\naggregate functions).\n \n\n\nMIN/MAX is almost in every case replaceable:\nselect bar\n from foo\n order by bar limit 1;\n\ninstead of\nselect max(bar) from foo;\n \n\nYup, been there, done that, but thanks, it's a good tidbit for the\npostgresql unaware.\n\nThere are some places however where it doesn't work well in query\nlogic, though I don't have an example off the top of my head\nsince I've worked around it in all my queries.", "msg_date": "Wed, 28 May 2003 15:57:22 -0400", "msg_from": "Dave Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "Bruno Wolff III wrote:\n\n>On Wed, May 28, 2003 at 14:08:02 -0400,\n> Dave Tenny <[email protected]> wrote:\n> \n>\n>>Andreas Pflug wrote:\n>>\n>>I'm reminded to relay to the PostgreSQL devos that I might be able to do \n>>more in the join or subquery department if\n>>PostgreSQL had better performing MAX functions and a FIRST function for \n>>selecting rows from groups.\n>>(\"Performing\" being the operative word here, since the extensible \n>>architecture of PostgreSQL currently makes for poorly\n>>performing MAX capabilities and presumably similar user defined \n>>aggregate functions).\n>> \n>>\n>\n>Have you tried replacing max with a subselect that uses order by and limit?\n> \n>\n\nI'm uncertain how that would work, since somewhere in there I still need \nto elaborate on the\n1000 items I want, and they're not necessarily in any particular range, \nnor do they bear any\ncontiguous group nature.\n\nAlso, IN (subquery) is a known performance problem in PGSQL, at least if \nthe subquery is going to return many rows.\nIt's too bad, since I'm rather fond of subqueries, but I avoid them like \nthe plague in PostgreSQL.\n\nPerhaps I don't understand what you had in mind.\n\n\n\n\n\n\n\n\nBruno Wolff III wrote:\n\nOn Wed, May 28, 2003 at 14:08:02 -0400,\n Dave Tenny <[email 
protected]> wrote:\n \n\nAndreas Pflug wrote:\n\nI'm reminded to relay to the PostgreSQL devos that I might be able to do \nmore in the join or subquery department if\nPostgreSQL had better performing MAX functions and a FIRST function for \nselecting rows from groups.\n(\"Performing\" being the operative word here, since the extensible \narchitecture of PostgreSQL currently makes for poorly\nperforming MAX capabilities and presumably similar user defined \naggregate functions).\n \n\n\nHave you tried replacing max with a subselect that uses order by and limit?\n \n\n\nI'm uncertain how that would work, since somewhere in there I still\nneed to elaborate on the\n1000 items I want, and they're not necessarily in any particular range,\nnor do they bear any \ncontiguous group nature.\n\nAlso, IN (subquery) is a known performance problem in PGSQL, at least\nif the subquery is going to return many rows.\nIt's too bad, since I'm rather fond of subqueries, but I avoid them\nlike the plague in PostgreSQL.\n\nPerhaps I don't understand what you had in mind.", "msg_date": "Wed, 28 May 2003 16:01:34 -0400", "msg_from": "Dave Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "Bruno Wolff III wrote:\n\n>On Wed, May 28, 2003 at 13:58:14 -0400,\n> Dave Tenny <[email protected]> wrote:\n> \n>\n>>A join isn't an option, these elements come a a selection of entity ID's \n>>that are specific to some client context.\n>>Some other people suggested joins too. \n>> \n>>\n>\n>You can union the values together and then join (or use where exists) with the\n>result. This may not be faster and you may not be able to union several\n>thousand selects together in a single statement. But it shouldn't be too\n>much work to test it out.\n> \n>\nI assume you mean something like:\n\ntest=# select million.id, million.val from million, (select 10000 as a \nunion select 20000 as a) t2 where million.id = t2.a;\n id | val\n-------+-------\n 10000 | 0\n 20000 | 10000\n(2 rows)\n\nOuch! That's deviant. Haven't tried it yet and I cringe at the \nthought of it, but I might take a run at it. However that's going to\nrun up the buffer space quickly. That was one of my as yet unsnaswered \nquestions, what is the pragmatic buffer size limit\nfor queries?\n\nI'm /really/ hoping we'll come up with something better, like an \nunderstanding of why IN lists are non-linear in the first\nplace when the column is indexed, and whether it's fixable through some \nother means or whether it's a bug that should be fixed.\n\nAfter all, I'm trying to support multiple databases, and other databases \nkick butt on this. It's just postgresql that's\nhaving difficulty.\n\n(Btw, I've also tried statement batching, but that's a lose for now, at \nleast with the current JDBC drivers and 7.3.2).\n\n\n\n\n\n\n\n\nBruno Wolff III wrote:\n\nOn Wed, May 28, 2003 at 13:58:14 -0400,\n Dave Tenny <[email protected]> wrote:\n \n\nA join isn't an option, these elements come a a selection of entity ID's \nthat are specific to some client context.\nSome other people suggested joins too. \n \n\n\nYou can union the values together and then join (or use where exists) with the\nresult. This may not be faster and you may not be able to union several\nthousand selects together in a single statement. 
But it shouldn't be too\nmuch work to test it out.\n \n\nI assume you mean something like:\n\ntest=# select million.id, million.val from million, (select 10000 as a\nunion select 20000 as a) t2 where million.id = t2.a;\n  id   |  val\n-------+-------\n 10000 |     0\n 20000 | 10000\n(2 rows)\n\nOuch!  That's deviant.   Haven't tried it yet and I cringe at the\nthought of it, but I might take a run at it.  However that's going to\nrun up the buffer space quickly.  That was one of my as yet unsnaswered\nquestions, what is the pragmatic buffer size limit\nfor queries?\n\nI'm really hoping we'll come up with something better, like an\nunderstanding of why IN lists are non-linear in the first\nplace when the column is indexed, and whether it's fixable through some\nother means or whether it's a bug that should be fixed.\n\nAfter all, I'm trying to support multiple databases, and other\ndatabases kick butt on this.  It's just postgresql that's\nhaving difficulty.\n\n(Btw, I've also tried statement batching, but that's a lose for now, at\nleast with the current JDBC drivers and 7.3.2).", "msg_date": "Wed, 28 May 2003 16:13:17 -0400", "msg_from": "Dave Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "On Wed, May 28, 2003 at 16:01:34 -0400,\n Dave Tenny <[email protected]> wrote:\n> \n> Perhaps I don't understand what you had in mind.\n> \n\nI was refering to your comment about max causing problems. But it seems\nyou are aware of the standard work around.\n", "msg_date": "Wed, 28 May 2003 15:24:18 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "On Wed, May 28, 2003 at 16:13:17 -0400,\n Dave Tenny <[email protected]> wrote:\n> Bruno Wolff III wrote:\n> \n> I assume you mean something like:\n> \n> test=# select million.id, million.val from million, (select 10000 as a \n> union select 20000 as a) t2 where million.id = t2.a;\n> id | val\n> -------+-------\n> 10000 | 0\n> 20000 | 10000\n> (2 rows)\n> \n> Ouch! That's deviant. Haven't tried it yet and I cringe at the \n> thought of it, but I might take a run at it. However that's going to\n> run up the buffer space quickly. That was one of my as yet unsnaswered \n> questions, what is the pragmatic buffer size limit\n> for queries?\n\nThat is what I was referring to. I have used this in some cases where\nI knew the list was small and I wanted to do a set difference without\nloading a temporary table. Or to do an insert of multiple rows with\none insert statement.\n\n> I'm /really/ hoping we'll come up with something better, like an \n> understanding of why IN lists are non-linear in the first\n> place when the column is indexed, and whether it's fixable through some \n> other means or whether it's a bug that should be fixed.\n\nIt also might be worth seeing if the development version is going to\nspeed things up for you. Beta is one month away. My guess is that the\nproduction release will be in September.\n", "msg_date": "Wed, 28 May 2003 15:29:05 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "Dave Tenny <[email protected]> writes:\n> My application relies heavily on IN lists. 
The lists are primarily \n> constant integers, so queries look like:\n> SELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n\n> 1) PostgreSQL exhibits worse-than-linear performance behavior with \n> respect to IN list size.\n\nYeah. There are a couple of places in the planner that have O(N^2)\nbehavior on sufficiently large WHERE clauses, due to building lists\nin a naive way (repeated lappend() operations). The inner loop is\nvery tight, but nonetheless when you do it tens of millions of times\nit adds up :-(\n\nI have just committed some fixes into CVS tip for this --- I see about\na 10x speedup in planning time on test cases involving 10000-OR-item\nWHERE clauses. We looked at this once before; the test cases I'm using\nactually date back to Jan 2000. But it seems some slowness has crept\nin due to subsequent planning improvements.\n\n\n> 4) God help you if you haven't vacuum/analyzed that the newly loaded \n> table.\n\nWithout knowledge that the id field is unique, the planner is likely to\ntilt away from an indexscan with not too many IN items. I don't\nconsider this a bug.\n\n\n> PostgreSQL craps out trying to process 8000 elements with the error:\n> out of free buffers: time to abort! \n\nThis is a known bug in 7.3.2; it's fixed in 7.3.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 May 2003 19:44:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again) " }, { "msg_contents": "Tom Lane wrote:\n\n>Dave Tenny <[email protected]> writes:\n> \n>\n>>My application relies heavily on IN lists. The lists are primarily \n>>constant integers, so queries look like:\n>>SELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n>> \n>>\n>\n> \n>\n>>1) PostgreSQL exhibits worse-than-linear performance behavior with \n>>respect to IN list size.\n>> \n>>\n>\n>Yeah. There are a couple of places in the planner that have O(N^2)\n>behavior on sufficiently large WHERE clauses, due to building lists\n>in a naive way (repeated lappend() operations). The inner loop is\n>very tight, but nonetheless when you do it tens of millions of times\n>it adds up :-(\n>\n> \n>\nIt was also showing up very clearly in the timings I attached for just a \ncouple hundred IN list entries too.\nDoes that mean that the time for the O(1) portions of the logic are \nperhaps also in need of a tuneup?\n(I would think O(N^2) for trivial operations on a fast machine wouldn't \nmanifest quite so much\nfor smallish N).\n\n>I have just committed some fixes into CVS tip for this --- I see about\n>a 10x speedup in planning time on test cases involving 10000-OR-item\n>WHERE clauses. We looked at this once before; the test cases I'm using\n>actually date back to Jan 2000. But it seems some slowness has crept\n>in due to subsequent planning improvements.\n>\n> \n>\nSo what version might that equate to down the road, so I can be sure to \ncheck it out?\nI'm afraid I'm not up to testing CVS tips.\n\n> \n>\n>>4) God help you if you haven't vacuum/analyzed that the newly loaded \n>>table.\n>> \n>>\n>\n>Without knowledge that the id field is unique, the planner is likely to\n>tilt away from an indexscan with not too many IN items. I don't\n>consider this a bug.\n>\n> \n>\nThere is one very interesting thing in my test case though. It \ncertainly /seemed/ as if the\nparameterized statements were successfully using the index of the \nfreshly-created-but-unanalyzed table,\nor else the times on those queries would have been terrible too. 
It was \nonly the IN list form\nof query that wasn't making correct use of the index. How can the \nplanner recognize uniqueness for\none case but not the other? (Since I ran both forms of query on the same \ntable with and without vacuuming).\n\n> \n>\n>> PostgreSQL craps out trying to process 8000 elements with the error:\n>> out of free buffers: time to abort! \n>> \n>>\n>\n>This is a known bug in 7.3.2; it's fixed in 7.3.3.\n> \n>\nThanks, that's good to know.\n\nDave\n\n\n\n\n\n\n\nTom Lane wrote:\n\nDave Tenny <[email protected]> writes:\n \n\nMy application relies heavily on IN lists. The lists are primarily \nconstant integers, so queries look like:\nSELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n \n\n\n \n\n1) PostgreSQL exhibits worse-than-linear performance behavior with \nrespect to IN list size.\n \n\n\nYeah. There are a couple of places in the planner that have O(N^2)\nbehavior on sufficiently large WHERE clauses, due to building lists\nin a naive way (repeated lappend() operations). The inner loop is\nvery tight, but nonetheless when you do it tens of millions of times\nit adds up :-(\n\n \n\nIt was also showing up very clearly in the timings I attached for just\na couple hundred IN list entries too.\nDoes that mean that the time for the O(1) portions of the logic are\nperhaps also in need of a tuneup?\n(I would think O(N^2) for trivial operations on a fast machine wouldn't\nmanifest quite so much\nfor smallish N).\n\nI have just committed some fixes into CVS tip for this --- I see about\na 10x speedup in planning time on test cases involving 10000-OR-item\nWHERE clauses. We looked at this once before; the test cases I'm using\nactually date back to Jan 2000. But it seems some slowness has crept\nin due to subsequent planning improvements.\n\n \n\nSo what version might that equate to down the road, so I can be sure to\ncheck it out?\nI'm afraid I'm not up to testing CVS tips.\n\n\n \n\n4) God help you if you haven't vacuum/analyzed that the newly loaded \ntable.\n \n\n\nWithout knowledge that the id field is unique, the planner is likely to\ntilt away from an indexscan with not too many IN items. I don't\nconsider this a bug.\n\n \n\nThere is one very interesting thing in my test case though.  It\ncertainly seemed as if the\nparameterized statements were successfully using the index of the\nfreshly-created-but-unanalyzed table,\nor else the times on those queries would have been terrible too.  It\nwas only the IN list form\nof query that wasn't making correct use of the index.  How can the\nplanner recognize uniqueness for\none case but not the other? (Since I ran both forms of query on the\nsame table with and without vacuuming).\n\n\n \n\n PostgreSQL craps out trying to process 8000 elements with the error:\n out of free buffers: time to abort! 
\n \n\n\nThis is a known bug in 7.3.2; it's fixed in 7.3.3.\n \n\nThanks, that's good to know.\n\nDave", "msg_date": "Wed, 28 May 2003 21:19:56 -0400", "msg_from": "Dave Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "Dave Tenny <[email protected]> writes:\n> </blockquote>\n> There is one very interesting thing in my test case though.&nbsp; It\n> certainly <i>seemed</i> as if the<br>\n> parameterized statements were successfully using the index of the\n> freshly-created-but-unanalyzed table,<br>\n> or else the times on those queries would have been terrible too.&nbsp; It\n> was only the IN list form<br>\n> of query that wasn't making correct use of the index.&nbsp; How can the\n> planner recognize uniqueness for<br>\n> one case but not the other?\n\nThe question is whether a seqscan will be faster than an indexscan; at\nsome point there's less I/O involved to just scan the table once. If\nthe planner doesn't know the index is unique then it's going to estimate\na higher cost for the indexscan (due to more rows fetched) and there is\nsome number of rows at which it will flip over to a seqscan. The same\nwill happen even if it *does* know the index is unique, it's just that\nit will take more IN elements to make it happen. This is reasonable\nbehavior IMHO, although whether the flip-over point is anywhere near\nthe actual breakeven point on your hardware is anyone's guess. The cost\nestimates are often far enough off that it's not very close.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 May 2003 21:46:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again) " }, { "msg_contents": "The IN list processing has been fixed in 7.4CVS. It now uses a hash based lookup rather than a list, so it's vastly faster.\n\nChris\n ----- Original Message ----- \n From: Dave Tenny \n To: Shridhar Daithankar \n Cc: [email protected] \n Sent: Thursday, May 29, 2003 1:58 AM\n Subject: Re: [PERFORM] IN list processing performance (yet again)\n\n\n A join isn't an option, these elements come a a selection of entity ID's that are specific to some client context.\n Some other people suggested joins too. \n\n Consider it something like this, say there's a database that represents the entire file system content\n of a set of machines, hundreds of thousands of files. A single user wants to do something\n related to the ID's of 3000 files. The requests for those 3000 files can be built up in a number of ways,\n not all of which rely on data in the database. So I need to retrieve data on those 3000 files using IN lists or some alternative.\n\n Dave\n\n Shridhar Daithankar wrote:\n\nOn Wednesday 28 May 2003 18:21, Dave Tenny wrote:\n Having grepped the web, it's clear that this isn't the first or last\ntime this issue will be raised.\n\nMy application relies heavily on IN lists. The lists are primarily\nconstant integers, so queries look like:\n\nSELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n \nHow do you derive this list of number? If it is from same database, can you \nrewrite the query using a join statement?\n\nHTH\n\n Shridhar\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n \n\n\n\n\n\n\nThe IN list processing has been fixed in \n7.4CVS.  
It now uses a hash based lookup rather than a list, so it's vastly \nfaster.\n \nChris\n\n----- Original Message ----- \nFrom:\nDave Tenny \nTo: Shridhar Daithankar\n\nCc: [email protected]\n\nSent: Thursday, May 29, 2003 1:58 \nAM\nSubject: Re: [PERFORM] IN list processing \n performance (yet again)\nA join isn't an option, these elements come a a selection of \n entity ID's that are specific to some client context.Some other people \n suggested joins too.  Consider it something like this, say \n there's a database that represents the entire file system contentof a set \n of machines, hundreds of thousands of files.  A single user wants to do \n somethingrelated to the ID's of 3000 files.  The requests for those \n 3000 files can be built up in a number of ways,not all of which rely on \n data in the database.  So I need to retrieve data on those 3000 files \n using IN lists or some alternative.DaveShridhar Daithankar \n wrote:\nOn Wednesday 28 May 2003 18:21, Dave Tenny wrote:\n \nHaving grepped the web, it's clear that this isn't the first or last\ntime this issue will be raised.\n\nMy application relies heavily on IN lists. The lists are primarily\nconstant integers, so queries look like:\n\nSELECT val FROM table WHERE id IN (43, 49, 1001, 100002, ...)\n \nHow do you derive this list of number? If it is from same database, can you \nrewrite the query using a join statement?\n\nHTH\n\n Shridhar\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster", "msg_date": "Thu, 29 May 2003 09:47:53 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "\n\n> Also, IN (subquery) is a known performance problem in PGSQL, at least if\nthe subquery is going to return > many rows.\n> It's too bad, since I'm rather fond of subqueries, but I avoid them like\nthe plague in PostgreSQL.\n\nYou're not really using a subquery - really just a long list of integers.\nSubqueries are lightning fast, so long as you conver to the EXISTS form:\n\nSELECT * FROM tab WHERE id IN (SELECT id2 FROM tab2);\n\nconverts to:\n\nSELECT * FROM tab WHERE EXISTS (SELECT id2 FROM tab2 WHERE id2=id);\n\nChris\n\n\n\n", "msg_date": "Thu, 29 May 2003 09:53:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "On Wed, May 28, 2003 at 21:19:56 -0400,\n Dave Tenny <[email protected]> wrote:\n> So what version might that equate to down the road, so I can be sure to \n> check it out?\n> I'm afraid I'm not up to testing CVS tips.\n\n7.4\n\n7.4 is suppsoed to go into beta July 1.\n\nIf you are used to installing from source tarballs, you can grab a\ndevelopment snapshot tarball. 
There is probably a day or two delay\ndepending on if you get it from a mirror or the primary site.\n", "msg_date": "Thu, 29 May 2003 05:55:31 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN list processing performance (yet again)" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n\n> \n>\n>>Also, IN (subquery) is a known performance problem in PGSQL, at least if\n>> \n>>\n>the subquery is going to return > many rows.\n> \n>\n>>It's too bad, since I'm rather fond of subqueries, but I avoid them like\n>> \n>>\n>the plague in PostgreSQL.\n>\n>You're not really using a subquery - really just a long list of integers.\n>\n\nOops, you got that out of context, it was a different piece of \nconversation about subqueries in IN predicate,\nnot the scalar forms that was my overall discussion point. You're \nright, I'm using lists of integers,\nsomeone else was suggesting using subqueries in some context and I was \nresponding to that.\n\n>Subqueries are lightning fast, so long as you conver to the EXISTS form:\n>\n>SELECT * FROM tab WHERE id IN (SELECT id2 FROM tab2);\n>\n>converts to:\n>\n>SELECT * FROM tab WHERE EXISTS (SELECT id2 FROM tab2 WHERE id2=id);\n>\n>Chris\n> \n>\nI hadn't thought of that, it's an excellent tip. I'll have to remember \nit next time I want to use subqueries.\n(Again, it's a side topic, my primary concern is scalar-form IN lists.)\n\nThanks,\n\nDave\n\n\n\n\n\n\n\nChristopher Kings-Lynne wrote:\n\n\n \n\nAlso, IN (subquery) is a known performance problem in PGSQL, at least if\n \n\nthe subquery is going to return > many rows.\n \n\nIt's too bad, since I'm rather fond of subqueries, but I avoid them like\n \n\nthe plague in PostgreSQL.\n\nYou're not really using a subquery - really just a long list of integers.\n\n\nOops, you got that out of context, it was a different piece of\nconversation about subqueries in IN predicate,\nnot the scalar forms that was my overall discussion point.  You're\nright, I'm using lists of integers,\nsomeone else was suggesting using subqueries in some context and I was\nresponding to that.\n\n\n\nSubqueries are lightning fast, so long as you conver to the EXISTS form:\n\nSELECT * FROM tab WHERE id IN (SELECT id2 FROM tab2);\n\nconverts to:\n\nSELECT * FROM tab WHERE EXISTS (SELECT id2 FROM tab2 WHERE id2=id);\n\nChris\n \n\nI hadn't thought of that, it's an excellent tip.  I'll have to remember\nit next time I want to use subqueries.\n(Again, it's a side topic, my primary concern is scalar-form IN lists.)\n\nThanks,\n\nDave", "msg_date": "Thu, 29 May 2003 08:48:41 -0400", "msg_from": "Dave Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN list processing performance (yet again)" } ]
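Beyond rewriting IN (subquery) as EXISTS, the long literal IN lists discussed in this thread can also be avoided by loading the identifiers into a temporary table and joining against it. The sketch below was not benchmarked in the thread; it reuses the million(id, val) test table mentioned above, wanted_ids is an invented name, and it assumes an index exists on million.id so the join can be driven by index lookups.

  CREATE TEMP TABLE wanted_ids (id integer);
  -- load the few thousand ids here, e.g. with COPY or batched INSERTs
  INSERT INTO wanted_ids VALUES (43);
  INSERT INTO wanted_ids VALUES (49);
  ANALYZE wanted_ids;   -- temp tables are not analyzed automatically
  SELECT m.id, m.val
    FROM million m, wanted_ids w
   WHERE m.id = w.id;

This keeps the statement itself tiny no matter how many identifiers are involved, at the cost of one extra load step (a COPY or a batch of inserts) per request.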
[ { "msg_contents": "I have, what appears to be a big problem.\n\nMachine specs\nAMD 2100+,\n1 GIG SDRam,\n3 WD HD's\n 1 - 20 Gig -15 Gig system and 5 Gig Swap\n mounted as /\n 2 - 80 Gig (8 M Cache) in Redhat software RAID 1 (mirror) using Adaptec\n1200 as an IDE Controller\n mounted as /usr/local/pgsql\nRedhat 8 w/ latest kernel and all updates.\n\nI have a much slower machine that has been running my database. We are\ntrying to upgrade to the above machine to make things a bit faster.\n\nI followed \"Tips for upgrading PostgreSQL from 6.5.3 to 7.0.3\" by Mark\nStosberg with only a few changes\n\n[postgres@sqlsrv root]# pg_dump -cs mydbtable >sqlschema.sql\n[postgres@sqlsrv root]# pg_dump -a mydbtable > sqldump.sql\n\nsqlschema.sql = 900K\nsqldump.sql = 2.4G\n\n[sftp files to aforementioned machine]\n\n[postgres@newsqlsrv root]# psql -e mydbtable <sqlschema.sql 2>&1 | tee\nschema-full-results.txt; grep ERROR schema-full-results.txt\n>schema-err-results.txt\n\nAll this works perfectly, quite fast but when I ran....\n\n[postgres@newsqlsrv root]# psql -e <sqldump.sql 2>&1 | tee\ninserts-full-results.txt; grep ERROR inserts-full-results.txt\n>inserts-err-results.txt\n\nIt started off quick, but it got to the first table w/ any real data in it\n(only about 30k records) and acted like it was frozen. I left it running\nall night, it finished that table and started on others but it hasnt even\ngotten to the big tables (2 @ about 9 million records). At this pace it\nwill take several days to finish the restore.\n\nI hope this is something easy/stupid that I have missed. I know that w/\nmirroring my write times are not improved, but they are DEFINATLY not this\nbad.\n\nI hope that I havent missed any information.\nThank you in advance for any direction.\n\nChad\n\n", "msg_date": "Wed, 28 May 2003 09:12:23 -0600", "msg_from": "\"Chad Thompson\" <[email protected]>", "msg_from_op": true, "msg_subject": ">24 hour restore" }, { "msg_contents": "Have a look through the log files for both postgresql and the kernel. \n\nYou could be having issues like SCSI time outs, or a failed disk in a \nRAID, or there could be some hints in the postgresql logs about what's \nhappening.\n\nWhat does top show? high CPU load, low?\n\niostat ?\n\nvmstat ?\n\nOn Wed, 28 May 2003, Chad Thompson wrote:\n\n> I have, what appears to be a big problem.\n> \n> Machine specs\n> AMD 2100+,\n> 1 GIG SDRam,\n> 3 WD HD's\n> 1 - 20 Gig -15 Gig system and 5 Gig Swap\n> mounted as /\n> 2 - 80 Gig (8 M Cache) in Redhat software RAID 1 (mirror) using Adaptec\n> 1200 as an IDE Controller\n> mounted as /usr/local/pgsql\n> Redhat 8 w/ latest kernel and all updates.\n> \n> I have a much slower machine that has been running my database. 
We are\n> trying to upgrade to the above machine to make things a bit faster.\n> \n> I followed \"Tips for upgrading PostgreSQL from 6.5.3 to 7.0.3\" by Mark\n> Stosberg with only a few changes\n> \n> [postgres@sqlsrv root]# pg_dump -cs mydbtable >sqlschema.sql\n> [postgres@sqlsrv root]# pg_dump -a mydbtable > sqldump.sql\n> \n> sqlschema.sql = 900K\n> sqldump.sql = 2.4G\n> \n> [sftp files to aforementioned machine]\n> \n> [postgres@newsqlsrv root]# psql -e mydbtable <sqlschema.sql 2>&1 | tee\n> schema-full-results.txt; grep ERROR schema-full-results.txt\n> >schema-err-results.txt\n> \n> All this works perfectly, quite fast but when I ran....\n> \n> [postgres@newsqlsrv root]# psql -e <sqldump.sql 2>&1 | tee\n> inserts-full-results.txt; grep ERROR inserts-full-results.txt\n> >inserts-err-results.txt\n> \n> It started off quick, but it got to the first table w/ any real data in it\n> (only about 30k records) and acted like it was frozen. I left it running\n> all night, it finished that table and started on others but it hasnt even\n> gotten to the big tables (2 @ about 9 million records). At this pace it\n> will take several days to finish the restore.\n> \n> I hope this is something easy/stupid that I have missed. I know that w/\n> mirroring my write times are not improved, but they are DEFINATLY not this\n> bad.\n> \n> I hope that I havent missed any information.\n> Thank you in advance for any direction.\n> \n> Chad\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n", "msg_date": "Wed, 28 May 2003 09:55:51 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: >24 hour restore" }, { "msg_contents": "On Wed, May 28, 2003 at 09:12:23AM -0600, Chad Thompson wrote:\n> \n> It started off quick, but it got to the first table w/ any real data in it\n> (only about 30k records) and acted like it was frozen. I left it running\n> all night, it finished that table and started on others but it hasnt even\n> gotten to the big tables (2 @ about 9 million records). At this pace it\n> will take several days to finish the restore.\n\nThis makes me think you have a trigger problem. You don't say what\nversion you're running, but my guess is that you need to disable all\nyour triggers, and remove all your indices, before you start loading\nthe data. Re-enable them afterwards.\n\nBy building the schema first, then loading the data, you're spending\ncycles running triggers &c.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 28 May 2003 12:24:55 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: >24 hour restore" }, { "msg_contents": "\n\n> On Wed, May 28, 2003 at 09:12:23AM -0600, Chad Thompson wrote:\n> >\n> > It started off quick, but it got to the first table w/ any real data in\nit\n> > (only about 30k records) and acted like it was frozen. I left it\nrunning\n> > all night, it finished that table and started on others but it hasnt\neven\n> > gotten to the big tables (2 @ about 9 million records). At this pace it\n> > will take several days to finish the restore.\n>\n> This makes me think you have a trigger problem. 
You don't say what\n> version you're running, but my guess is that you need to disable all\n> your triggers, and remove all your indices, before you start loading\n> the data. Re-enable them afterwards.\n>\n> By building the schema first, then loading the data, you're spending\n> cycles running triggers &c.\n>\n\nThis was my first thought. After about an hour of running, I stopped the\nprocess, edited the schema file to remove all the foreign keys and triggers.\nI then started it again. So there SHOULD be no triggers right now.\n\nUPDATE: I stopped the restore, before it was stopped, top showed postmaster\nusing 17% CPU. After stopping I noticed that it DID fill my largest table\n(1.16 M tuples) over night. So I am editing the dump file to continue where\nit left off. ( vi is the only thing that is not choking on the 2.4 gig file)\nThat is good news because that means it wont take 7-10 days to import, just\n1-2.\n\nAs for version (oops) my old version was 7.3.1 and I am moving to 7.3.2\n\nAny other ideas?\n\nTIA\nChad\n\nOh, a bit off topic... I remember that I wanted to move the WAL files off of\nthe raid but forgot to do it on start up. Can I do that now that the system\nis setup? Where would I find docs to tell me about that?\n\n", "msg_date": "Wed, 28 May 2003 11:59:49 -0600", "msg_from": "\"Chad Thompson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: >24 hour restore" }, { "msg_contents": "On Wed, May 28, 2003 at 11:59:49AM -0600, Chad Thompson wrote:\n\n> This was my first thought. After about an hour of running, I stopped the\n> process, edited the schema file to remove all the foreign keys and triggers.\n> I then started it again. So there SHOULD be no triggers right now.\n\nHmm.\n\n> UPDATE: I stopped the restore, before it was stopped, top showed postmaster\n> using 17% CPU. After stopping I noticed that it DID fill my largest table\n> (1.16 M tuples) over night. So I am editing the dump file to continue where\n> it left off. ( vi is the only thing that is not choking on the 2.4 gig file)\n> That is good news because that means it wont take 7-10 days to import, just\n> 1-2.\n\nSounds like you have an I/O problem. \n\n> As for version (oops) my old version was 7.3.1 and I am moving to 7.3.2\n\nWhy don't you just shut down your 7.3.1 postmaster and start 7.3.2? \nThis requires no initdb. If you're changing machines (ISTR you are),\nthen copy the tree, assuming the same OS.\n\n> Oh, a bit off topic... I remember that I wanted to move the WAL files off of\n> the raid but forgot to do it on start up. Can I do that now that the system\n> is setup? Where would I find docs to tell me about that?\n\nSure. Stop the postmaster, copy the pg_xlog directory to the target\nlocation, then make a soft link. (I usually use cp and move the old\ndir out of the way temporarily to start with, just in case.)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 28 May 2003 14:14:03 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: >24 hour restore" }, { "msg_contents": "* Chad Thompson <[email protected]> [28.05.2003 19:08]:\n> I hope this is something easy/stupid that I have missed. 
I know that w/\n> mirroring my write times are not improved, but they are DEFINATLY not this\n> bad.\n\nWell, I have had something similar to your case, except for size - it's was\nabout 1 Gb.\n\nI've dropped all foreign keys, triggers and, also, all indexes. As I've\nfound, each index takes additional time for inserts/updates/deletes,\nso it's recommended to create indexes after data manipulations.\n\nIf this will not help, I don't know. May be hardware problems...\n\n-- \n\nVictor Yegorov\n", "msg_date": "Thu, 29 May 2003 09:18:18 +0300", "msg_from": "\"Victor Yegorov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: >24 hour restore" } ]
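The pg_xlog relocation described above, put into concrete commands: these are run as the postgres user with the postmaster stopped, and both the data directory and the target disk path are placeholders rather than paths taken from this thread.

  pg_ctl -D /usr/local/pgsql/data stop
  cp -rp /usr/local/pgsql/data/pg_xlog /disk2/pg_xlog
  mv /usr/local/pgsql/data/pg_xlog /usr/local/pgsql/data/pg_xlog.old
  ln -s /disk2/pg_xlog /usr/local/pgsql/data/pg_xlog
  pg_ctl -D /usr/local/pgsql/data start
  # remove pg_xlog.old only once the server is confirmed to start and run cleanly

Keeping the old directory around until the server is known to be healthy mirrors the "move the old dir out of the way temporarily" precaution mentioned above.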
[ { "msg_contents": "Working on my first set returning function... So far the examples from\nhttp://techdocs.postgresql.org/guides/SetReturningFunctions have worked well\nfor me...\n\nI'd like to see what kind of performance I get from a particularly slow\npiece of code by replacing it with a recursive srf (right now, I do the\nrecursion in php).\n\nSo, here's my working example, I haven't bench marked it yet, but if someone\nwould look at it and tell me if there's any improvements that can be made,\nI'd appreciate it. My first impression is that it's fast, because it\nappeared to have returned instantaneously. I really don't understand the\n\"explain analyze\" output, but I'm including it as well.\n\nI'd love to get some feedback on this (did I say that already?).\n\nImagine this:\nCREATE TYPE nav_list AS (id int8, accountid varchar(12), \n\t\t...snip... , parent int8, subfolders int8);\n\nsubfolders is the count() of records that have their parent set to this\nrecord's id. I want to take a list of something like this:\nhome\n - item 1\n - item 2\n - sub item 1\n - item 3\nand return it so that it comes out in this order\nhome\nitem1\nitem2\nsub item 1\nitem 3\n\ncreate or replace function nav_srf(varchar(12), int8) returns setof nav_list\nas '\nDECLARE \n\tr nav_list%rowtype;\n\tdepth int8;\n\tlast_id int8;\n\trecords RECORD;\nBEGIN\n\tFOR r IN SELECT * FROM navigation WHERE accountid = $1 AND parent =\n$2 ORDER BY dsply_order LOOP\n\t\tdepth := r.subfolders;\n\t\tlast_id := r.id;\n\t\tRETURN NEXT r;\n\t\tIF depth > 0 THEN\n\t\t\tFOR records IN SELECT * FROM nav_srf($1, last_id)\nLOOOP\n\t\t\t\tRETURN NEXT records;\n\t\t\tEND LOOP;\n\t\tEND IF;\n\tEND LOOP;\n\tRETURN;\nEND\n' LANGUAGE 'plpgsql';\n\n\n# EXPLAIN ANALYZE SELECT * FROM nav_srf('GOTDNS000000', 0);\nQUERY PLAN\nFunction Scan on nav_srf (cost=0.00..12.50 rows=1000 width=134) (actual\ntime=85.78..86.19 rows=22 loops=1)\nTotal runtime: 86.37 msec\n(2 rows)\n\nI then ran it again a moment later and got:\n# EXPLAIN ANALYZE SELECT * FROM nav_srf('GOTDNS000000', 0);\nQUERY PLAN\nFunction Scan on nav_srf (cost=0.00..12.50 rows=1000 width=134) (actual\ntime=23.54..23.97 rows=22 loops=1)\nTotal runtime: 24.15 msec\n(2 rows)\n\nBTW, this started out as a question about how to do it, but in the process\nof thinking my question out, the answer came to me. ;-)\n\nMatthew Nuzum\nwww.bearfruit.org\[email protected]\n\n\n", "msg_date": "Wed, 28 May 2003 17:30:01 -0400", "msg_from": "\"Matthew Nuzum\" <[email protected]>", "msg_from_op": true, "msg_subject": "recursive srf" } ]
[ { "msg_contents": "Hello,\n I'm running a simple query on a table and I'm getting a very long\nresponse time. The table has 56,000 rows in it. It has a full text field,\nbut it is not being referenced in this query. The query I'm running is\n\nselect row_key, column1, column2, column3, column4, column5 from table1\nwhere column6 = 1 order by column3 desc limit 21;\n\nThere is an index on the table\n\nmessage_index btree (column6, column3, column7)\n\nColumn 3 is a date type, column 6 is an integer and column 7 is unused in\nthis query.\n\nThe total query time is 6 seconds, but I can bring that down to 4.5 if I\nappend \"offset 0\" to the end of the query. By checking query using \"explain\nanalyze\" it shows that it is using the index.\n\nIf anyone has any ideas as to why the query is taking so long and what I can\ndo to make it more efficient I would love to know.\n\nThanks\nKevin\n\n", "msg_date": "Thu, 29 May 2003 08:58:07 -0500", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Select query takes long to execute" }, { "msg_contents": "On 29 May 2003 at 8:58, Kevin Schroeder wrote:\n> If anyone has any ideas as to why the query is taking so long and what I can\n> do to make it more efficient I would love to know.\n\nCheck yor shared buffers setting and effective OS cache setting. If these are \nappropriately tuned, then it should be fast enough.\n\nIs the table vacuumed? Is index taking too much space? Then try reindexing. It \nmight help as vacuum does not reclaim wasted space in index.\n\nHTH\n\nBye\n Shridhar\n\n--\nWait! You have not been prepared!\t\t-- Mr. Atoz, \"Tomorrow is Yesterday\", \nstardate 3113.2\n\n", "msg_date": "Thu, 29 May 2003 19:54:43 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select query takes long to execute" }, { "msg_contents": "See if lowering random_page_cost to 1.5 or so helps here.\n\nThat and effective_cache_size are two of the more important values the \nplanner uses to decide between seq scans and index scans.\n\nOn Thu, 29 May 2003, Kevin Schroeder wrote:\n\n> Hello,\n> I'm running a simple query on a table and I'm getting a very long\n> response time. The table has 56,000 rows in it. It has a full text field,\n> but it is not being referenced in this query. The query I'm running is\n> \n> select row_key, column1, column2, column3, column4, column5 from table1\n> where column6 = 1 order by column3 desc limit 21;\n> \n> There is an index on the table\n> \n> message_index btree (column6, column3, column7)\n> \n> Column 3 is a date type, column 6 is an integer and column 7 is unused in\n> this query.\n> \n> The total query time is 6 seconds, but I can bring that down to 4.5 if I\n> append \"offset 0\" to the end of the query. By checking query using \"explain\n> analyze\" it shows that it is using the index.\n> \n> If anyone has any ideas as to why the query is taking so long and what I can\n> do to make it more efficient I would love to know.\n> \n> Thanks\n> Kevin\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n", "msg_date": "Thu, 29 May 2003 10:01:16 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select query takes long to execute" } ]
[ { "msg_contents": "I figured out how to make the query faster. There should be a mailing list\nset up for wasted questions since I always seem to figure out the problem\nafter I've bugged everyone for help.\n\nIn the query\n\nselect row_key, column1, column2, column3, column4, column5 from table1\nwhere column6 = 1 order by column3 desc limit 21;\n\nI changed the index to\n\nmessage_index btree (column3, column6)\n\nrather than\n\nmessage_index btree (column6, column3, column7)\n\nSince the data was being ordered by column3 it seems to have sped the query\nup to 1 ms from 6000ms by making column 3 the first part of the index rather\nthan the second.\n\nKevin\n\n", "msg_date": "Thu, 29 May 2003 09:09:23 -0500", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query problem fixed" }, { "msg_contents": "The thing I can't really understand why can't the planner find out something\nlike this:\n\n1. Index scan using column6\n2. Backward search on subset using column3\n\nAny guru to explain?\n\nG.\n------------------------------- cut here -------------------------------\n----- Original Message ----- \nFrom: \"Kevin Schroeder\" <[email protected]>\nSent: Thursday, May 29, 2003 4:09 PM\n\n\n> I figured out how to make the query faster. There should be a mailing\nlist\n> set up for wasted questions since I always seem to figure out the problem\n> after I've bugged everyone for help.\n>\n> In the query\n>\n> select row_key, column1, column2, column3, column4, column5 from table1\n> where column6 = 1 order by column3 desc limit 21;\n>\n> I changed the index to\n>\n> message_index btree (column3, column6)\n>\n> rather than\n>\n> message_index btree (column6, column3, column7)\n>\n> Since the data was being ordered by column3 it seems to have sped the\nquery\n> up to 1 ms from 6000ms by making column 3 the first part of the index\nrather\n> than the second.\n>\n> Kevin\n\n", "msg_date": "Thu, 29 May 2003 16:27:10 +0200", "msg_from": "=?ISO-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query problem fixed" }, { "msg_contents": "\"Kevin Schroeder\" <[email protected]> writes:\n> select row_key, column1, column2, column3, column4, column5 from table1\n> where column6 = 1 order by column3 desc limit 21;\n\n> I changed the index to\n\n> message_index btree (column3, column6)\n\n> rather than\n\n> message_index btree (column6, column3, column7)\n\nThat's probably not the best solution. It would be better to leave the\nindex with column6 first and write the query as\n\n... where column6 = 1 order by column6 desc, column3 desc limit 21\n\nThis doesn't change the results (since there's only one value of column6\nin the output), but it does cause the planner to realize that a\nbackwards scan of the index would produce what you want with no sort\nstep. The results should be essentially instantaneous if you can get\nthe query plan down to Index Scan Backward + Limit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 May 2003 10:49:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query problem fixed " } ]
[ { "msg_contents": "Hello everybody,\n\nI'm facing a simple yet gravely problem with postgresql 7.3.2 on x86 Linux.\nMy db is used to store IP accounting statistics for about 30 C's. There are\na couple truly trivial tables such as the one below:\n\nCREATE TABLE stats_min\n(\n\tip\tinet\t\tNOT NULL,\n\tstart\ttimestamp\tNOT NULL default CURRENT_TIMESTAMP(0),\n\tintlen\tint4\t\tNOT NULL default 60,\n\td_in\tint8\t\tNOT NULL,\n\td_out\tint8\t\tNOT NULL,\n\n\tconstraint \"stats_min_pkey\" PRIMARY KEY (\"ip\", \"start\")\n);\nCREATE INDEX stats_min_start ON stats_min (start);\n\nA typical transaction committed on these tables looks like this:\n\nBEGIN WORK\n\tDELETE ...\n\tUPDATE/INSERT ...\nCOMMIT WORK\n\nTrouble is, as the rows in the tables get deleted/inserted/updated\n(the frequency being a couple thousand rows per minute), the database\nis growing out of proportion in size. After about a week, I have\nto redump the db by hand so as to get query times back to sensible\nfigures. A transaction that takes ~50 seconds before the redump will\nthen complete in under 5 seconds (the corresponding data/base/ dir having\nshrunk from ~2 GB to ~0.6GB).\n\nA nightly VACCUM ANALYZE is no use.\n\nA VACUUM FULL is no use.\n\nA VACUUM FULL followed by REINDEX is no use.\n\nIt seems that only a full redump involving \"pg_dump olddb | \\\npsql newdb\" is capable of restoring the system to its working\nglory.\n\nPlease accept my apologies if I've overlooked a relevant piece of\ninformation in the docs. I'm in an urgent need of getting this\nproblem resolved.\n\n-- \nTomas Szepe <[email protected]>\n", "msg_date": "Thu, 29 May 2003 18:32:39 +0200", "msg_from": "Tomas Szepe <[email protected]>", "msg_from_op": true, "msg_subject": "db growing out of proportion" }, { "msg_contents": "On Thu, 29 May 2003, Tomas Szepe wrote:\n\n> Hello everybody,\n>\n> I'm facing a simple yet gravely problem with postgresql 7.3.2 on x86 Linux.\n> My db is used to store IP accounting statistics for about 30 C's. There are\n> a couple truly trivial tables such as the one below:\n>\n> CREATE TABLE stats_min\n> (\n> \tip\tinet\t\tNOT NULL,\n> \tstart\ttimestamp\tNOT NULL default CURRENT_TIMESTAMP(0),\n> \tintlen\tint4\t\tNOT NULL default 60,\n> \td_in\tint8\t\tNOT NULL,\n> \td_out\tint8\t\tNOT NULL,\n>\n> \tconstraint \"stats_min_pkey\" PRIMARY KEY (\"ip\", \"start\")\n> );\n> CREATE INDEX stats_min_start ON stats_min (start);\n>\n> A typical transaction committed on these tables looks like this:\n>\n> BEGIN WORK\n> \tDELETE ...\n> \tUPDATE/INSERT ...\n> COMMIT WORK\n>\n> Trouble is, as the rows in the tables get deleted/inserted/updated\n> (the frequency being a couple thousand rows per minute), the database\n> is growing out of proportion in size. After about a week, I have\n> to redump the db by hand so as to get query times back to sensible\n> figures. 
A transaction that takes ~50 seconds before the redump will\n> then complete in under 5 seconds (the corresponding data/base/ dir having\n> shrunk from ~2 GB to ~0.6GB).\n>\n> A nightly VACCUM ANALYZE is no use.\n>\n> A VACUUM FULL is no use.\n>\n> A VACUUM FULL followed by REINDEX is no use.\n\nIs the space being taken up by stats_min, this index, some other object?\nI'm not 100% sure, but after vacuums maybe\nselect * from pg_class order by relpages desc limit 10;\nwill give a good idea.\n\nWhat does VACUUM FULL VERBOSE stats_min; give you?\n\n", "msg_date": "Thu, 29 May 2003 10:37:38 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "> [[email protected]]\n> \n> > Trouble is, as the rows in the tables get deleted/inserted/updated\n> > (the frequency being a couple thousand rows per minute), the database\n> > is growing out of proportion in size. After about a week, I have\n> > to redump the db by hand so as to get query times back to sensible\n> > figures. A transaction that takes ~50 seconds before the redump will\n> > then complete in under 5 seconds (the corresponding data/base/ dir having\n> > shrunk from ~2 GB to ~0.6GB).\n> >\n> > A nightly VACCUM ANALYZE is no use.\n> >\n> > A VACUUM FULL is no use.\n> >\n> > A VACUUM FULL followed by REINDEX is no use.\n> \n> Is the space being taken up by stats_min, this index, some other object?\n\n relname | relkind | relpages | reltuples \n---------------------------------+---------+----------+-------------\n stats_hr | r | 61221 | 3.01881e+06\n stats_hr_pkey | i | 26414 | 3.02239e+06\n stats_min_pkey | i | 20849 | 953635\n stats_hr_start | i | 17218 | 3.02142e+06\n stats_min_start | i | 15284 | 949788\n stats_min | r | 10885 | 948792\n authinfo_pkey | i | 1630 | 1342\n authinfo | r | 1004 | 1342\n contract_ips | r | 865 | 565\n contract_ips_pkey | i | 605 | 565\n\n> What does VACUUM FULL VERBOSE stats_min; give you?\n\nSorry, I can't run a VACUUM FULL at this time.\nWe're in production use.\n\n-- \nTomas Szepe <[email protected]>\n", "msg_date": "Fri, 30 May 2003 09:24:42 +0200", "msg_from": "Tomas Szepe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "On Fri, 30 May 2003, Tomas Szepe wrote:\n\n> > [[email protected]]\n> > \n> > > Trouble is, as the rows in the tables get deleted/inserted/updated\n> > > (the frequency being a couple thousand rows per minute), the database\n> > > is growing out of proportion in size. After about a week, I have\n> > > to redump the db by hand so as to get query times back to sensible\n> > > figures. 
A transaction that takes ~50 seconds before the redump will\n> > > then complete in under 5 seconds (the corresponding data/base/ dir having\n> > > shrunk from ~2 GB to ~0.6GB).\n> > >\n> > > A nightly VACCUM ANALYZE is no use.\n> > >\n> > > A VACUUM FULL is no use.\n> > >\n> > > A VACUUM FULL followed by REINDEX is no use.\n> > \n> > Is the space being taken up by stats_min, this index, some other object?\n> \n> relname | relkind | relpages | reltuples \n> ---------------------------------+---------+----------+-------------\n> stats_hr | r | 61221 | 3.01881e+06\n> stats_hr_pkey | i | 26414 | 3.02239e+06\n> stats_min_pkey | i | 20849 | 953635\n> stats_hr_start | i | 17218 | 3.02142e+06\n> stats_min_start | i | 15284 | 949788\n> stats_min | r | 10885 | 948792\n> authinfo_pkey | i | 1630 | 1342\n> authinfo | r | 1004 | 1342\n> contract_ips | r | 865 | 565\n> contract_ips_pkey | i | 605 | 565\n> \n> > What does VACUUM FULL VERBOSE stats_min; give you?\n> \n> Sorry, I can't run a VACUUM FULL at this time.\n> We're in production use.\n> \n> \n\n\tWould more regular vacuum help. I think a vaccum every hour may do \nthe job. perhaps with an analyse every day. (I presume the statistics \ndon't change too much) \n\tWhile I don't surgest doing a vacuum more than twice an hour as \nthis would slow down the system with little gain more than once a day may \nimprove the speed and space usage.\n\tJust an idea.\n\nPeter \n\n", "msg_date": "Fri, 30 May 2003 10:21:51 +0100 (BST)", "msg_from": "Peter Childs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "Peter Childs <[email protected]> writes:\n> On Fri, 30 May 2003, Tomas Szepe wrote:\n>> Trouble is, as the rows in the tables get deleted/inserted/updated\n>> (the frequency being a couple thousand rows per minute), the database\n>> is growing out of proportion in size.\n\n> \tWould more regular vacuum help. I think a vaccum every hour may do \n> the job.\n\nAlso note that no amount of vacuuming will save you if the FSM is not\nlarge enough to keep track of all the free space. The default FSM\nsettings, like all the other default settings in Postgres, are set up\nfor a small installation. You'd probably need to raise them by at least\na factor of 10 for this installation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 May 2003 09:11:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db growing out of proportion " }, { "msg_contents": "I have a database with similar performance constraints. Our best \nestimates put the turnover on our most active table at 350k tuples/day. \nThe hardware is a 4x1.4GHz Xeon w/ a RAID 1 disk setup, and the DB \nfloats around 500MB of disk space taken. Here is what we do to maintain \noperations:\n\n1) Cron job @ 4:00AM that runs a full vacuum analyze on the DB, and \nreindex on the major tables. (Reindex is to maintain index files in SHM) \nAn alerting feature pages the administrator if the job does not complete \nwithin a reasonable amount of time.\n\n2) Every 15 minutes, a cron job runs a vacuum analyze on our five \nlargest tables. An alert is emailed to the administrator if a second \nvacuum attempts to start before the previous completes.\n\n3) Every week, we review the disk usage numbers from daily peaks. This \ndetermines if we need to increase our shmmax & shared buffers.\n\nAdditionally, you may want to take a look at your query performance. Are \nmost of your queries doing sequential scans? 
In my system, the crucial \ncolumns of the primary tables are int8 and float8 fields. I have those \nindexed, and I get a serious performance boost by making sure all \nSELECT/UPDATE/DELETE queries that use those columns in the WHERE have an \nexplicit ::int8 or ::float8 (Explain analyze is your friend). During \npeak usage, there is an order of magnitude difference (usually 10 to \n15x) between queries doing sequential scans on the table, and queries \ndoing index scans. Might be worth investigating if your queries are \ntaking 5 seconds when your DB is fresh. HTH.\n\n\n\nTomas Szepe wrote:\n> Hello everybody,\n> \n> I'm facing a simple yet gravely problem with postgresql 7.3.2 on x86 Linux.\n> My db is used to store IP accounting statistics for about 30 C's. There are\n> a couple truly trivial tables such as the one below:\n> \n> CREATE TABLE stats_min\n> (\n> \tip\tinet\t\tNOT NULL,\n> \tstart\ttimestamp\tNOT NULL default CURRENT_TIMESTAMP(0),\n> \tintlen\tint4\t\tNOT NULL default 60,\n> \td_in\tint8\t\tNOT NULL,\n> \td_out\tint8\t\tNOT NULL,\n> \n> \tconstraint \"stats_min_pkey\" PRIMARY KEY (\"ip\", \"start\")\n> );\n> CREATE INDEX stats_min_start ON stats_min (start);\n> \n> A typical transaction committed on these tables looks like this:\n> \n> BEGIN WORK\n> \tDELETE ...\n> \tUPDATE/INSERT ...\n> COMMIT WORK\n> \n> Trouble is, as the rows in the tables get deleted/inserted/updated\n> (the frequency being a couple thousand rows per minute), the database\n> is growing out of proportion in size. After about a week, I have\n> to redump the db by hand so as to get query times back to sensible\n> figures. A transaction that takes ~50 seconds before the redump will\n> then complete in under 5 seconds (the corresponding data/base/ dir having\n> shrunk from ~2 GB to ~0.6GB).\n> \n> A nightly VACCUM ANALYZE is no use.\n> \n> A VACUUM FULL is no use.\n> \n> A VACUUM FULL followed by REINDEX is no use.\n> \n> It seems that only a full redump involving \"pg_dump olddb | \\\n> psql newdb\" is capable of restoring the system to its working\n> glory.\n> \n> Please accept my apologies if I've overlooked a relevant piece of\n> information in the docs. I'm in an urgent need of getting this\n> problem resolved.\n> \n\n", "msg_date": "Fri, 30 May 2003 10:25:42 -0400", "msg_from": "Todd Nemanich <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "\nOn Fri, 30 May 2003, Tomas Szepe wrote:\n\n> > [[email protected]]\n> >\n> > > Trouble is, as the rows in the tables get deleted/inserted/updated\n> > > (the frequency being a couple thousand rows per minute), the database\n> > > is growing out of proportion in size. After about a week, I have\n> > > to redump the db by hand so as to get query times back to sensible\n> > > figures. 
A transaction that takes ~50 seconds before the redump will\n> > > then complete in under 5 seconds (the corresponding data/base/ dir having\n> > > shrunk from ~2 GB to ~0.6GB).\n> > >\n> > > A nightly VACCUM ANALYZE is no use.\n> > >\n> > > A VACUUM FULL is no use.\n> > >\n> > > A VACUUM FULL followed by REINDEX is no use.\n> >\n> > Is the space being taken up by stats_min, this index, some other object?\n>\n> relname | relkind | relpages | reltuples\n> ---------------------------------+---------+----------+-------------\n> stats_hr | r | 61221 | 3.01881e+06\n> stats_hr_pkey | i | 26414 | 3.02239e+06\n> stats_min_pkey | i | 20849 | 953635\n> stats_hr_start | i | 17218 | 3.02142e+06\n> stats_min_start | i | 15284 | 949788\n> stats_min | r | 10885 | 948792\n> authinfo_pkey | i | 1630 | 1342\n> authinfo | r | 1004 | 1342\n> contract_ips | r | 865 | 565\n> contract_ips_pkey | i | 605 | 565\n>\n> > What does VACUUM FULL VERBOSE stats_min; give you?\n>\n> Sorry, I can't run a VACUUM FULL at this time.\n> We're in production use.\n\nAs Tom said, you probably need higher FSM settings, but also, do you have\nany long lived transactions (say from some kind of persistent connection\nsystem) that might be preventing vacuum from removing rows?\n\n\n", "msg_date": "Fri, 30 May 2003 08:40:43 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "> [[email protected]]\n> \n> Peter Childs <[email protected]> writes:\n> > On Fri, 30 May 2003, Tomas Szepe wrote:\n> >> Trouble is, as the rows in the tables get deleted/inserted/updated\n> >> (the frequency being a couple thousand rows per minute), the database\n> >> is growing out of proportion in size.\n> \n> > \tWould more regular vacuum help. I think a vaccum every hour may do \n> > the job.\n> \n> Also note that no amount of vacuuming will save you if the FSM is not\n> large enough to keep track of all the free space. The default FSM\n> settings, like all the other default settings in Postgres, are set up\n> for a small installation. You'd probably need to raise them by at least\n> a factor of 10 for this installation.\n\nThanks, I'll try to tweak those settings and will let the list know how\nthings went.\n\n-- \nTomas Szepe <[email protected]>\n", "msg_date": "Sat, 31 May 2003 00:59:39 +0200", "msg_from": "Tomas Szepe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "> As Tom said, you probably need higher FSM settings, but also, do you have\n> any long lived transactions (say from some kind of persistent connection\n> system) that might be preventing vacuum from removing rows?\n\nNo, not at all.\n\n-- \nTomas Szepe <[email protected]>\n", "msg_date": "Sat, 31 May 2003 01:00:50 +0200", "msg_from": "Tomas Szepe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "> [[email protected]]\n> \n> Additionally, you may want to take a look at your query performance. Are \n> most of your queries doing sequential scans? In my system, the crucial \n> columns of the primary tables are int8 and float8 fields. I have those \n> indexed, and I get a serious performance boost by making sure all \n> SELECT/UPDATE/DELETE queries that use those columns in the WHERE have an \n> explicit ::int8 or ::float8 (Explain analyze is your friend). 
During \n> peak usage, there is an order of magnitude difference (usually 10 to \n> 15x) between queries doing sequential scans on the table, and queries \n> doing index scans. Might be worth investigating if your queries are \n> taking 5 seconds when your DB is fresh. HTH.\n\nYes, I have taken special care to fine-tune all queries on authentic\ndata. The db setup works as expected in whatever respect with the\nexception of query times deterioration that apparently corelates to\nthe db's on-disk size growth.\n\nThanks for your suggestions,\n\n-- \nTomas Szepe <[email protected]>\n", "msg_date": "Sat, 31 May 2003 01:08:21 +0200", "msg_from": "Tomas Szepe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "On Fri, 30 May 2003 09:11:39 -0400\nTom Lane <[email protected]> said something like:\n\n> \n> Also note that no amount of vacuuming will save you if the FSM is not\n> large enough to keep track of all the free space. The default FSM\n> settings, like all the other default settings in Postgres, are set up\n> for a small installation. You'd probably need to raise them by at least\n> a factor of 10 for this installation.\n> \n\nTom,\n\nThanks for the hint. I just upped my shared_buffers to 8192, fsm_relations to 10000, fsm_pages to 100000, sort_mem to 64000, and an UPDATE which was taking over 2 hours dropped down to 1 to 2 minutes!\n\nNice...\n\nThanks,\nRob", "msg_date": "Fri, 30 May 2003 21:21:01 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> Thanks for the hint. I just upped my shared_buffers to 8192, fsm_relations to 10000, fsm_pages to 100000, sort_mem to 64000, and an UPDATE which was taking over 2 hours dropped down to 1 to 2 minutes!\n\nCool ... but it's not immediately obvious which of these changes did the\ntrick for you. What settings were you at before? And what's the\ndetails of the problem query?\n\nThe first three settings you mention all seem like reasonable choices,\nbut I'd be hesitant to recommend 64M sort_mem for general use (it won't\ntake very many concurrent sorts to drive you into the ground...). So\nI'm interested to narrow down exactly what was the issue here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 May 2003 00:11:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db growing out of proportion " }, { "msg_contents": "On Sat, 31 May 2003 00:11:26 -0400\nTom Lane <[email protected]> said something like:\n> \n> Cool ... but it's not immediately obvious which of these changes did the\n> trick for you. What settings were you at before? And what's the\n> details of the problem query?\n> \n> The first three settings you mention all seem like reasonable choices,\n> but I'd be hesitant to recommend 64M sort_mem for general use (it won't\n> take very many concurrent sorts to drive you into the ground...). So\n> I'm interested to narrow down exactly what was the issue here.\n> \n> \t\t\tregards, tom lane\n\nshared_buffers was 1024, now 8192\nmax_fsm_relations was 1000, now 10000\nmax_fsm_pages was 20000, now 100000\nwal_buffers was 8, now 16\nsort_mem was 1024, now 64000\nvacuum_mem was 1024, now 64000\neffective_cache_size was 1000, now 100000\n\nI am in the process of reloading the dB, but obs_v and obs_i contain ~750000 records each. 
I'd be happy to play around with the settings if you would like to see the timing results. I'll also be able to get some explain analyze results tomorrow when finished reloading. Suggestions as to what values to change first?\n\nThere is a 'C' language trigger on the obs_v and obs_i tables which essentially combines the data from the the obs_? tables and updates the catalog table when the obs_? records are updated.\n\nThe query is:\n\nUPDATE obs_v\nSET mag = obs_v.imag + zp.zero_v + cg.color_v * (obs_v.imag - i.imag),\n use = true\nFROM color_group AS cg, zero_pair AS zp, obs_i AS i, files AS f\nWHERE obs_v.star_id = i.star_id\n AND obs_v.file_id = f.file_id\n AND cg.group_id = f.group_id\n AND f.group_id = $group_id\n AND zp.pair_id = f.pair_id\n\nwhich is called from a perl script (DBD::Pg - which sets $group_id), and the relevant tables are:\n\n Table \"public.obs_v\"\n Column | Type | Modifiers \n---------+---------+------------------------------------------------\n x | real | not null\n y | real | not null\n imag | real | not null\n smag | real | not null\n ra | real | not null\n dec | real | not null\n obs_id | integer | not null default nextval('\"obs_id_seq\"'::text)\n file_id | integer | \n use | boolean | default false\n solve | boolean | default false\n star_id | integer | \n mag | real | \nIndexes: obs_v_file_id_index btree (file_id),\n obs_v_loc_index btree (ra, \"dec\"),\n obs_v_obs_id_index btree (obs_id),\n obs_v_star_id_index btree (star_id),\n obs_v_use_index btree (use)\nForeign Key constraints: obs_v_files_constraint FOREIGN KEY (file_id) REFERENCES files(file_id) ON UPDATE NO ACTION ON DELETE CASCADE\nTriggers: obs_v_trig\n\nwith obs_i being identical (inherited from same root table)\n\n Table \"public.color_group\"\n Column | Type | Modifiers \n----------+---------+-----------\n group_id | integer | \n color_u | real | default 0\n color_b | real | default 0\n color_v | real | default 0\n color_r | real | default 0\n color_i | real | default 0\nIndexes: color_group_group_id_index btree (group_id)\nForeign Key constraints: $1 FOREIGN KEY (group_id) REFERENCES groups(group_id) ON UPDATE NO ACTION ON DELETE CASCADE\n\n Table \"public.zero_pair\"\n Column | Type | Modifiers \n---------+---------+-----------\n pair_id | integer | not null\n zero_u | real | default 0\n zero_b | real | default 0\n zero_v | real | default 0\n zero_r | real | default 0\n zero_i | real | default 0\nIndexes: zero_pair_pkey primary key btree (pair_id),\n zero_pair_pair_id_index btree (pair_id)\nForeign Key constraints: $1 FOREIGN KEY (pair_id) REFERENCES pairs(pair_id) ON UPDATE NO ACTION ON DELETE CASCADE\n\n Table \"public.files\"\n Column | Type | Modifiers \n----------+--------------------------+-------------------------------------------------------\n file_id | integer | not null default nextval('\"files_file_id_seq\"'::text)\n group_id | integer | \n pair_id | integer | \n date | timestamp with time zone | not null\n name | character varying | not null\n ra_min | real | default 0\n ra_max | real | default 0\n dec_min | real | default 0\n dec_max | real | default 0\nIndexes: files_pkey primary key btree (file_id),\n files_name_key unique btree (name),\n files_id_index btree (file_id, group_id, pair_id),\n files_range_index btree (ra_min, ra_max, dec_min, dec_max),\n imported__file_id_idex btree (file_id)\nForeign Key constraints: $1 FOREIGN KEY (group_id) REFERENCES groups(group_id) ON UPDATE NO ACTION ON DELETE CASCADE,\n $2 FOREIGN KEY (pair_id) REFERENCES pairs(pair_id) ON UPDATE NO ACTION 
ON DELETE CASCADE\n\n Table \"public.catalog\"\n Column | Type | Modifiers \n------------------+------------------+-------------------------------------------------\n star_id | integer | not null default nextval('\"star_id_seq\"'::text)\n loc_count | integer | default 0\n ra | real | not null\n ra_sum | double precision | default 0\n ra_sigma | real | default 0\n ra_sum_square | double precision | default 0\n dec | real | not null\n dec_sum | double precision | default 0\n dec_sigma | real | default 0\n dec_sum_square | double precision | default 0\n mag_u_count | integer | default 0\n mag_u | real | default 99\n mag_u_sum | double precision | default 0\n mag_u_sigma | real | default 0\n mag_u_sum_square | double precision | default 0\n mag_b_count | integer | default 0\n mag_b | real | default 99\n mag_b_sum | double precision | default 0\n mag_b_sigma | real | default 0\n mag_b_sum_square | double precision | default 0\n mag_v_count | integer | default 0\n mag_v | real | default 99\n mag_v_sum | double precision | default 0\n mag_v_sigma | real | default 0\n mag_v_sum_square | double precision | default 0\n mag_r_count | integer | default 0\n mag_r | real | default 99\n mag_r_sum | double precision | default 0\n mag_r_sigma | real | default 0\n mag_r_sum_square | double precision | default 0\n mag_i_count | integer | default 0\n mag_i | real | default 99\n mag_i_sum | double precision | default 0\n mag_i_sigma | real | default 0\n mag_i_sum_square | double precision | default 0\nIndexes: catalog_pkey primary key btree (star_id),\n catalog_ra_decl_index btree (ra, \"dec\"),\n catalog_star_id_index btree (star_id)\n\n\n\n-- \nO_", "msg_date": "Fri, 30 May 2003 22:50:02 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db growing out of proportion" }, { "msg_contents": "Robert Creager <[email protected]> writes:\n>> I'm interested to narrow down exactly what was the issue here.\n\n> shared_buffers was 1024, now 8192\n> max_fsm_relations was 1000, now 10000\n> max_fsm_pages was 20000, now 100000\n> wal_buffers was 8, now 16\n> sort_mem was 1024, now 64000\n> vacuum_mem was 1024, now 64000\n> effective_cache_size was 1000, now 100000\n\n> The query is:\n\n> UPDATE obs_v\n> SET mag = obs_v.imag + zp.zero_v + cg.color_v * (obs_v.imag - i.imag),\n> use = true\n> FROM color_group AS cg, zero_pair AS zp, obs_i AS i, files AS f\n> WHERE obs_v.star_id = i.star_id\n> AND obs_v.file_id = f.file_id\n> AND cg.group_id = f.group_id\n> AND f.group_id = $group_id\n> AND zp.pair_id = f.pair_id\n\nHm. My best guess is that the increase in sort_mem allowed this query\nto use a more efficient join plan. Perhaps the planner switched from\nmerge to hash join once it thought the hash table would fit in sort_mem;\nor maybe the plan didn't change but the executor was able to keep\neverything in memory instead of using temp files. The other changes you\nmention seem good as general housekeeping, but I doubt they'd have much\ndirect effect on this query's speed. It'd be interesting to look at\nEXPLAIN ANALYZE results for the same query at several different sort_mem\nvalues.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 May 2003 12:13:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db growing out of proportion " }, { "msg_contents": "Hey Tom,\n\nSorry for the long delay. 
I'd been having mail trouble, and your and\npostgresql mail servers were bouncing me because Starband (my ISP)\ndoesn't setup a full DNS entry for their clients. I'm now relaying\nthrough another host.\n\nI'm posting to the performance list, as it seems more appropriate there.\n\nThe results were not as clear cut as I would of thought. If either\nfsm_relations, fsm_pages or sort_mem were dropped to their original\nvalues, the queries went from 3 hours to not completing 9/15 sets after\n13 hours. When the shared buffers were reverted, the set completed in 12\nhours.\n\nI didn't capture any explains for the problem settings, but will be\nhappy to do so if you would like to see some of the results (if they are\ndifferent). I'm almost caught up with importing new data (too much rain\naround here to take new data), and can explain away this weekend.\n\nCheers,\nRob\n\nOn Fri, 30 May 2003 22:50:02 -0600\nRobert Creager <[email protected]> said something like:\n\n> On Sat, 31 May 2003 00:11:26 -0400\n> Tom Lane <[email protected]> said something like:\n> > \n> > Cool ... but it's not immediately obvious which of these changes did\n> > the trick for you. What settings were you at before? And what's\n> > the details of the problem query?\n> > \n> > The first three settings you mention all seem like reasonable\n> > choices, but I'd be hesitant to recommend 64M sort_mem for general\n> > use (it won't take very many concurrent sorts to drive you into the\n> > ground...). So I'm interested to narrow down exactly what was the\n> > issue here.\n> > \n> > \t\t\tregards, tom lane\n> \n> shared_buffers was 1024, now 8192\n> max_fsm_relations was 1000, now 10000\n> max_fsm_pages was 20000, now 100000\n> wal_buffers was 8, now 16\n> sort_mem was 1024, now 64000\n> vacuum_mem was 1024, now 64000\n> effective_cache_size was 1000, now 100000\n> \n> I am in the process of reloading the dB, but obs_v and obs_i contain\n> ~750000 records each. I'd be happy to play around with the settings\n> if you would like to see the timing results. I'll also be able to get\n> some explain analyze results tomorrow when finished reloading. \n> Suggestions as to what values to change first?\n> \n> There is a 'C' language trigger on the obs_v and obs_i tables which\n> essentially combines the data from the the obs_? tables and updates\n> the catalog table when the obs_? 
records are updated.\n> \n> The query is:\n> \n> UPDATE obs_v\n> SET mag = obs_v.imag + zp.zero_v + cg.color_v * (obs_v.imag - i.imag),\n> use = true\n> FROM color_group AS cg, zero_pair AS zp, obs_i AS i, files AS f\n> WHERE obs_v.star_id = i.star_id\n> AND obs_v.file_id = f.file_id\n> AND cg.group_id = f.group_id\n> AND f.group_id = $group_id\n> AND zp.pair_id = f.pair_id\n> \n> which is called from a perl script (DBD::Pg - which sets $group_id),\n> and the relevant tables are:\n> \n> Table \"public.obs_v\"\n> Column | Type | Modifiers \n> ---------+---------+------------------------------------------------\n> x | real | not null\n> y | real | not null\n> imag | real | not null\n> smag | real | not null\n> ra | real | not null\n> dec | real | not null\n> obs_id | integer | not null default nextval('\"obs_id_seq\"'::text)\n> file_id | integer | \n> use | boolean | default false\n> solve | boolean | default false\n> star_id | integer | \n> mag | real | \n> Indexes: obs_v_file_id_index btree (file_id),\n> obs_v_loc_index btree (ra, \"dec\"),\n> obs_v_obs_id_index btree (obs_id),\n> obs_v_star_id_index btree (star_id),\n> obs_v_use_index btree (use)\n> Foreign Key constraints: obs_v_files_constraint FOREIGN KEY (file_id)\n> REFERENCES files(file_id) ON UPDATE NO ACTION ON DELETE CASCADE\n> Triggers: obs_v_trig\n> \n> with obs_i being identical (inherited from same root table)\n> \n> Table \"public.color_group\"\n> Column | Type | Modifiers \n> ----------+---------+-----------\n> group_id | integer | \n> color_u | real | default 0\n> color_b | real | default 0\n> color_v | real | default 0\n> color_r | real | default 0\n> color_i | real | default 0\n> Indexes: color_group_group_id_index btree (group_id)\n> Foreign Key constraints: $1 FOREIGN KEY (group_id) REFERENCES\n> groups(group_id) ON UPDATE NO ACTION ON DELETE CASCADE\n> \n> Table \"public.zero_pair\"\n> Column | Type | Modifiers \n> ---------+---------+-----------\n> pair_id | integer | not null\n> zero_u | real | default 0\n> zero_b | real | default 0\n> zero_v | real | default 0\n> zero_r | real | default 0\n> zero_i | real | default 0\n> Indexes: zero_pair_pkey primary key btree (pair_id),\n> zero_pair_pair_id_index btree (pair_id)\n> Foreign Key constraints: $1 FOREIGN KEY (pair_id) REFERENCES\n> pairs(pair_id) ON UPDATE NO ACTION ON DELETE CASCADE\n> \n> Table \"public.files\"\n> Column | Type | Modifiers\n> \n> ----------+--------------------------+-------------------------------\n> ------------------------\n> file_id | integer | not null default\n> nextval('\"files_file_id_seq\"'::text) group_id | integer \n> | \n> pair_id | integer | \n> date | timestamp with time zone | not null\n> name | character varying | not null\n> ra_min | real | default 0\n> ra_max | real | default 0\n> dec_min | real | default 0\n> dec_max | real | default 0\n> Indexes: files_pkey primary key btree (file_id),\n> files_name_key unique btree (name),\n> files_id_index btree (file_id, group_id, pair_id),\n> files_range_index btree (ra_min, ra_max, dec_min, dec_max),\n> imported__file_id_idex btree (file_id)\n> Foreign Key constraints: $1 FOREIGN KEY (group_id) REFERENCES\n> groups(group_id) ON UPDATE NO ACTION ON DELETE CASCADE,\n> $2 FOREIGN KEY (pair_id) REFERENCES\n> pairs(pair_id) ON UPDATE NO ACTION ON DELETE\n> CASCADE\n> \n> Table \"public.catalog\"\n> Column | Type | Modifiers \n> \n> ------------------+------------------+-------------------------------\n> ------------------\n> star_id | integer | not null default\n> 
nextval('\"star_id_seq\"'::text) loc_count | integer |\n> default 0 ra | real | not null\n> ra_sum | double precision | default 0\n> ra_sigma | real | default 0\n> ra_sum_square | double precision | default 0\n> dec | real | not null\n> dec_sum | double precision | default 0\n> dec_sigma | real | default 0\n> dec_sum_square | double precision | default 0\n> mag_u_count | integer | default 0\n> mag_u | real | default 99\n> mag_u_sum | double precision | default 0\n> mag_u_sigma | real | default 0\n> mag_u_sum_square | double precision | default 0\n> mag_b_count | integer | default 0\n> mag_b | real | default 99\n> mag_b_sum | double precision | default 0\n> mag_b_sigma | real | default 0\n> mag_b_sum_square | double precision | default 0\n> mag_v_count | integer | default 0\n> mag_v | real | default 99\n> mag_v_sum | double precision | default 0\n> mag_v_sigma | real | default 0\n> mag_v_sum_square | double precision | default 0\n> mag_r_count | integer | default 0\n> mag_r | real | default 99\n> mag_r_sum | double precision | default 0\n> mag_r_sigma | real | default 0\n> mag_r_sum_square | double precision | default 0\n> mag_i_count | integer | default 0\n> mag_i | real | default 99\n> mag_i_sum | double precision | default 0\n> mag_i_sigma | real | default 0\n> mag_i_sum_square | double precision | default 0\n> Indexes: catalog_pkey primary key btree (star_id),\n> catalog_ra_decl_index btree (ra, \"dec\"),\n> catalog_star_id_index btree (star_id)\n> \n> \n> \n> -- \n> O_\n> \n\n\n-- \nO_", "msg_date": "Thu, 12 Jun 2003 21:49:34 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Adjusting fsm values was Re: [BUGS] db growing out of proportion" }, { "msg_contents": "> [[email protected]]\n> \n> > Peter Childs <[email protected]> writes:\n> > > On Fri, 30 May 2003, Tomas Szepe wrote:\n> > >> Trouble is, as the rows in the tables get deleted/inserted/updated\n> > >> (the frequency being a couple thousand rows per minute), the database\n> > >> is growing out of proportion in size.\n> > \n> > > \tWould more regular vacuum help. I think a vaccum every hour may do \n> > > the job.\n> > \n> > Also note that no amount of vacuuming will save you if the FSM is not\n> > large enough to keep track of all the free space. The default FSM\n> > settings, like all the other default settings in Postgres, are set up\n> > for a small installation. You'd probably need to raise them by at least\n> > a factor of 10 for this installation.\n> \n> Thanks, I'll try to tweak those settings and will let the list know how\n> things went.\n\nWell, raising max_fsm_pages to 500000 seems to have solved the problem\nentirely. My thanks go to everyone who've offered their help.\n\n-- \nTomas Szepe <[email protected]>\n", "msg_date": "Fri, 13 Jun 2003 07:34:59 +0200", "msg_from": "Tomas Szepe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: db growing out of proportion" } ]
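Tying off the thread above: a minimal sketch of how the free space map settings under discussion can be checked and raised, assuming a server of roughly this era. The figures are only illustrative (they echo the values people reported here), and the max_fsm_* and shared_buffers settings only take effect after a postmaster restart.

-- a rough lower bound for max_fsm_pages is the number of pages with
-- reclaimable space that VACUUM VERBOSE reports for the busy tables
VACUUM VERBOSE;

-- confirm what the server is actually running with
SHOW max_fsm_relations;
SHOW max_fsm_pages;

# postgresql.conf (illustrative values; restart required)
max_fsm_relations = 10000
max_fsm_pages     = 500000
shared_buffers    = 8192        # 8 kB buffers, so roughly 64 MB

If the FSM stays too small, space reclaimed by vacuum keeps leaking and the tables grow anyway, which is exactly the symptom reported at the start of the thread.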
[ { "msg_contents": "Kevin,\n\nHow about creating a new index just on column6?\nThat should be much more effective than the multicolumn\nindex.\n\nRegards,\nNikolaus\n\nOn Thu, 29 May 2003 08:58:07 -0500, \"Kevin Schroeder\"\nwrote:\n\n> \n> Hello,\n> I'm running a simple query on a table and I'm\n> getting a very long\n> response time. The table has 56,000 rows in it. It\n> has a full text field,\n> but it is not being referenced in this query. The\n> query I'm running is\n> \n> select row_key, column1, column2, column3, column4,\n> column5 from table1\n> where column6 = 1 order by column3 desc limit 21;\n> \n> There is an index on the table\n> \n> message_index btree (column6, column3, column7)\n> \n> Column 3 is a date type, column 6 is an integer and\n> column 7 is unused in\n> this query.\n> \n> The total query time is 6 seconds, but I can bring\nthat\n> down to 4.5 if I\n> append \"offset 0\" to the end of the query. By\nchecking\n> query using \"explain\n> analyze\" it shows that it is using the index.\n> \n> If anyone has any ideas as to why the query is taking\n> so long and what I can\n> do to make it more efficient I would love to know.\n> \n> Thanks\n> Kevin\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n", "msg_date": "Thu, 29 May 2003 19:44:50 -0700 (PDT)", "msg_from": "\"Nikolaus Dilger\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Select query takes long to execute" }, { "msg_contents": "Nikolaus,\n\nI think that shouldn't be any more effective. As I experienced, it's\nirrelevant how many cols an index has as long as you only use the first\ncolumn. And, after that, if you use another column, how could a missing\nsecond column be any better?\n\nG.\n------------------------------- cut here -------------------------------\n----- Original Message ----- \nFrom: \"Nikolaus Dilger\" <[email protected]>\nSent: Friday, May 30, 2003 4:44 AM\n\n\n> Kevin,\n>\n> How about creating a new index just on column6?\n> That should be much more effective than the multicolumn\n> index.\n>\n> Regards,\n> Nikolaus\n\n\n", "msg_date": "Fri, 30 May 2003 10:57:33 +0200", "msg_from": "=?iso-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select query takes long to execute" }, { "msg_contents": "\nWhat are the advantages to having a database relational? I am currently \ndiscussing the design of a database with some people at work and they \nreckon it is best to create one table with and index and all the data \ninstead of normalizing the database. I think that they think that joining \ntables will slow down retrieval, is this true?\n\nThanks\nJeandre\n\n", "msg_date": "Fri, 30 May 2003 11:23:10 +0200 (SAST)", "msg_from": "Jeandre du Toit <[email protected]>", "msg_from_op": false, "msg_subject": "Table Relationships" }, { "msg_contents": "* Jeandre du Toit <[email protected]> [30.05.2003 12:57]:\n> \n> What are the advantages to having a database relational? I am currently \n> discussing the design of a database with some people at work and they \n> reckon it is best to create one table with and index and all the data \n> instead of normalizing the database. I think that they think that joining \n> tables will slow down retrieval, is this true?\n> \n\nTake a look at situation from another side.\n\nLet's say: You own a store and have 3 customers and 5 products on your\nstore. 
All you going to keep in DB is track of all purchases.\n\nSo, each time a customer will by a product, an new record will be added.\nWhat this means:\n\n1. Customer's name will be repeated as many times, as many purchases he had\n made. The same for each of products. In real world, you'll have about\n 10,000 customers and about 100,000 products. Do you have enoght space on\n your disks to store all that stuff?\n\n2. Some of your customers decided to change it's name. What you're going to\n do? If you're going to insert new purchases of that customer with he's new\n name, then in all turnover reports you'll have to specify both:\n old name and new one. If he will hange his name again - again, all\n reports are to be updated.\n\nThere is much more stuff to read about Relational Data Model in books.\n\nAbout slowing down retrieval of data: all efforts today are put to speed up\nthings. You should think about your convenience in data manipulation.\n\n\nI suggest you should try both: one huge table, and a set of normalized\ntables and compare, what is quicker and what is easier to use.\n\n-- \n\nVictor Yegorov\n", "msg_date": "Fri, 30 May 2003 13:10:09 +0300", "msg_from": "\"Victor Yegorov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "On Fri, May 30, 2003 at 11:23:10 +0200,\n Jeandre du Toit <[email protected]> wrote:\n> \n\nDon't reply to existing threads to start a new thread.\n\n> What are the advantages to having a database relational? I am currently \n> discussing the design of a database with some people at work and they \n> reckon it is best to create one table with and index and all the data \n> instead of normalizing the database. I think that they think that joining \n> tables will slow down retrieval, is this true?\n\nYou might want to read some books on relational database theory.\n\nDate and Pascal are two noted authors of books on relational database theory.\n\n> \n> Thanks\n> Jeandre\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Fri, 30 May 2003 07:11:01 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "On Fri, May 30, 2003 at 10:57:33 +0200,\n SZUCS G�bor <[email protected]> wrote:\n> Nikolaus,\n> \n> I think that shouldn't be any more effective. As I experienced, it's\n> irrelevant how many cols an index has as long as you only use the first\n> column. And, after that, if you use another column, how could a missing\n> second column be any better?\n\nBecause the index will be more compact and reside on less disk blocks.\nThe planner also makes different guesses for the selectivity whne using\nthe first column of a multicolumn index as opposed to a single column\nindex.\n", "msg_date": "Fri, 30 May 2003 07:13:11 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select query takes long to execute" }, { "msg_contents": "On Fri, 30 May 2003, Bruno Wolff III wrote:\n\n> On Fri, May 30, 2003 at 11:23:10 +0200,\n> Jeandre du Toit <[email protected]> wrote:\n> > \n> \n> Don't reply to existing threads to start a new thread.\n\nSorry about that, I did something screwy in Pine. 
I thought that it would \ncreate a new mail.\n\n> \n> > What are the advantages to having a database relational? I am currently \n> > discussing the design of a database with some people at work and they \n> > reckon it is best to create one table with and index and all the data \n> > instead of normalizing the database. I think that they think that joining \n> > tables will slow down retrieval, is this true?\n> \n> You might want to read some books on relational database theory.\n> \n> Date and Pascal are two noted authors of books on relational database theory.\n\nThanks, I will look at these books.\n\n> \n> > \n> > Thanks\n> > Jeandre\n> > \n> > \n\n", "msg_date": "Fri, 30 May 2003 15:34:36 +0200 (SAST)", "msg_from": "Jeandre du Toit <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "Jeandre,\n\n> instead of normalizing the database. I think that they think that joining\n> tables will slow down retrieval, is this true?\n\nNo, it's not. I'm afraid that your co-workers learned their computer \nknowledge 10 years ago and have not kept up to date. They may need \nretraining.\n\nModern database systems, especially PostgreSQL, are much faster with a proper \nrelational schema than with an inadequate flat-file table, due to the \nefficient storage of data ... i.e., no redundancy. \n\nI highly suggest that you take a look at the book \"Database Design for Mere \nMortals\"; if you're asking the question you posted, you are nowhere near \nready to build a production database application.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 30 May 2003 09:06:43 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "On Fri, 30 May 2003, Josh Berkus wrote:\n\n> Jeandre,\n> \n> > instead of normalizing the database. I think that they think that joining\n> > tables will slow down retrieval, is this true?\n> \n> No, it's not. I'm afraid that your co-workers learned their computer \n> knowledge 10 years ago and have not kept up to date. They may need \n> retraining.\n\nThought as much\n\n> \n> Modern database systems, especially PostgreSQL, are much faster with a proper \n> relational schema than with an inadequate flat-file table, due to the \n> efficient storage of data ... i.e., no redundancy. \n\nThat is what I thought, but since they out rank me at work I needed the \nextra conformation. Now at least I can show them that I am not the only \nperson that thinks a flat table structure is stone age design. I know for a \nfact it is better on Sybase, but I wasn't to sure about postgres and since \nthey have been working on it for longer than I have, I am expected to \nfollow their lead.\n\n> \n> I highly suggest that you take a look at the book \"Database Design for Mere \n> Mortals\"; if you're asking the question you posted, you are nowhere near \n> ready to build a production database application.\n> \n\nThanks, I will have a look at that book. You are right, I am only first \nyear Bsc, but I had a feeling that the facts they are giving me can't be \nright, it just didn't make any sense. They way I figured it, is that \nhaving a relational database, makes the database smaller because there is \nno duplicate data, which should make it faster.\n\nThanks for your help. 
I will approach my managers.\nJeandre\n \n\n", "msg_date": "Fri, 30 May 2003 18:19:48 +0200 (SAST)", "msg_from": "Jeandre du Toit <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "On Fri, May 30, 2003 at 09:06:43AM -0700, Josh Berkus wrote:\n> Modern database systems, especially PostgreSQL, are much faster with a proper \n> relational schema than with an inadequate flat-file table, due to the \n> efficient storage of data ... i.e., no redundancy. \n\nAre you sure you want to say it that strongly? After all, if you\nhave a data set which needs always to be returned in the same static\nformat, why not just denormalise it? It's sure faster that way in\nevery system I've ever encountered.\n\nIt's only when you actually have relations to cope with that it\nceases to be an advantage. So, as usual, it depends on what you're\ntrying to do.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 30 May 2003 12:39:00 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "Andrew,\n\n> Are you sure you want to say it that strongly? After all, if you\n> have a data set which needs always to be returned in the same static\n> format, why not just denormalise it? It's sure faster that way in\n> every system I've ever encountered.\n>\n> It's only when you actually have relations to cope with that it\n> ceases to be an advantage. So, as usual, it depends on what you're\n> trying to do.\n\nYeah, I suppose so ... if all they're doing is reporting on a static set of \ndata which is not transactional ... sure. If it's a disposable, \nlimited-time-use application.\n\nHowever, I have yet to see in my professional experience any application that \nwas really this way and stayed this way once it was in use ... relations have \na way of creeping in, and planning for them is less messy than refactoring.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 30 May 2003 10:03:19 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "On Fri, 30 May 2003, Josh Berkus wrote:\n\n> Andrew,\n> \n> > Are you sure you want to say it that strongly? After all, if you\n> > have a data set which needs always to be returned in the same static\n> > format, why not just denormalise it? It's sure faster that way in\n> > every system I've ever encountered.\n> >\n> > It's only when you actually have relations to cope with that it\n> > ceases to be an advantage. So, as usual, it depends on what you're\n> > trying to do.\n> \n> Yeah, I suppose so ... if all they're doing is reporting on a static set of \n> data which is not transactional ... sure. If it's a disposable, \n> limited-time-use application.\n> \n> However, I have yet to see in my professional experience any application that \n> was really this way and stayed this way once it was in use ... 
relations have \n> a way of creeping in, and planning for them is less messy than refactoring.\n\nMy philosophy has been you store the data normalized, and denormalize it \nfor performance down the line.\n\nbut denormalizing for storage is usually a bad idea, as it allows your \ndata to get filled with inconsistencies.\n\nIt's funny how people start worrying about performance of flat versus \nnormalized before really looking at the difference between the two first. \nOn Postgresql and most other databases, there are far more important \nconcerns to worry about when it comes to performance than whether or not \nyou're joining a couple tables.\n\n", "msg_date": "Fri, 30 May 2003 11:20:33 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "On Fri, May 30, 2003 at 11:20:33AM -0600, scott.marlowe wrote:\n\n> but denormalizing for storage is usually a bad idea, as it allows your \n> data to get filled with inconsistencies.\n\nSure, but if performance is an important goal for certain kinds of\nSELECTs, using a trigger at insert or update to do denormalising is\nperhaps an acceptable approach. It's obvious that in most cases,\ndenormalising instead of optimising your normalisation is silly. But\nif you need something to return in, say, 2ms most of the time, and it\nrequires a wide variety of data, denormalising is a good idea.\n\nIt is, of course, contrary to the RDBMS-y mind to denormalise. But\nthere are (rare) times when it's a good idea, and I hate to see it\nrejected out of hand in such cases.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 30 May 2003 14:18:54 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "Andrew,\n\n> Sure, but if performance is an important goal for certain kinds of\n> SELECTs, using a trigger at insert or update to do denormalising is\n> perhaps an acceptable approach. It's obvious that in most cases,\n> denormalising instead of optimising your normalisation is silly. But\n> if you need something to return in, say, 2ms most of the time, and it\n> requires a wide variety of data, denormalising is a good idea.\n\nI've done this plenty of times ... but what you're talking about is more of a \n\"materialized view\" than denormalized data. The data is still stored in \nnormal form; it is just distilled for a particular view and saved on disk for \nquick reference. This is often a good approach with performance-sensitive, \ncomplex databases.\n\n> It is, of course, contrary to the RDBMS-y mind to denormalise. But\n> there are (rare) times when it's a good idea, and I hate to see it\n> rejected out of hand in such cases.\n\nThere is a big difference between denormalizing normalized data and storing \nyour data in denormalized (basically flat file) form in the first place.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 30 May 2003 12:54:26 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" }, { "msg_contents": "On Fri, 2003-05-30 at 12:10, Victor Yegorov wrote:\n> Take a look at situation from another side.\n> \n> Let's say: You own a store and have 3 customers and 5 products on your\n> store. 
All you going to keep in DB is track of all purchases.\n> \n> So, each time a customer will by a product, an new record will be added.\n> What this means:\n> \n> 1. Customer's name will be repeated as many times, as many purchases he had\n> made. The same for each of products. In real world, you'll have about\n> 10,000 customers and about 100,000 products. Do you have enoght space on\n> your disks to store all that stuff?\nWell, to play the devil's advocate, to do it correctly, you should\nprobably store the customer data duplicate (one in the main record, and\nonce in the purchase order). If you do not, you'll get an ERP system\nthat is incapable to reproduce work done, which is basically a BAD\nTHING(tm) :)\n\n\n> 2. Some of your customers decided to change it's name. What you're going to\n> do? If you're going to insert new purchases of that customer with he's new\n> name, then in all turnover reports you'll have to specify both:\n> old name and new one. If he will hange his name again - again, all\n> reports are to be updated.\nWell, again, a purchase order should keep records -> it shouldn't\nmagically change the name or address of the customer, just because the\ncustomer moved.\n\nAndreas", "msg_date": "12 Jun 2003 11:32:09 +0200", "msg_from": "Andreas Kostyrka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Table Relationships" } ]
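Since the last few messages settle on "store it normalized, denormalize for reads with a trigger," here is a small, hypothetical sketch of that pattern. Every name in it is invented, it assumes the plpgsql language has been installed (createlang plpgsql <db>), and a production version would also have to handle UPDATE and DELETE on the base table.

-- normalized system of record
CREATE TABLE customer (
    customer_id serial PRIMARY KEY,
    name        text NOT NULL
);

CREATE TABLE purchase (
    purchase_id serial PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customer,
    amount      numeric(10,2) NOT NULL,
    purchased   timestamp with time zone NOT NULL DEFAULT now()
);

-- flat rollup kept only to make reporting reads cheap
CREATE TABLE customer_totals (
    customer_id integer PRIMARY KEY REFERENCES customer,
    n_purchases integer NOT NULL DEFAULT 0,
    total       numeric(12,2) NOT NULL DEFAULT 0
);

CREATE FUNCTION purchase_rollup() RETURNS trigger AS '
BEGIN
    UPDATE customer_totals
       SET n_purchases = n_purchases + 1,
           total       = total + NEW.amount
     WHERE customer_id = NEW.customer_id;
    IF NOT FOUND THEN
        INSERT INTO customer_totals (customer_id, n_purchases, total)
        VALUES (NEW.customer_id, 1, NEW.amount);
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER purchase_rollup_trig
    AFTER INSERT ON purchase
    FOR EACH ROW EXECUTE PROCEDURE purchase_rollup();

The point of the split is that customer_totals can always be rebuilt from the normalized tables if it ever drifts, which is what makes this safer than storing the data flat in the first place.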
[ { "msg_contents": "Hi,\n\nI am in the process of pricing up boxes for our database, and I was\nwondering if anyone had any recommendations or comments.\n\nThe database itself will have around 100-150 users mostly accessing through\na PHP/apache interface. I don't expect lots of simultaneous activity,\nhowever users will often be doing multiple table joins (can be up to 10-15\ntables in one query). Also they will often be pulling out on the order of\n250,000 rows (5-10 numeric fields per row), processing the data (I may split\nthis to a second box) and then writing back ~20,000 rows of data (2-3\nnumeric fields per row).\n\nEstimating total amount of data is quite tricky, but it could grow to\n100-250Gb over the next 3 years.\n\nI have priced one box from the Dell web site as follows\n\nSingle Intel Xeon 2.8GHz with 512kb L2 cache\n2GB RAM\n\n36Gb 10,000rpm Ultra 3 160 SCSI\n36Gb 10,000rpm Ultra 3 160 SCSI\n146Gb 10,000rpm U320 SCSI\n146Gb 10,000rpm U320 SCSI\n146Gb 10,000rpm U320 SCSI\n\nPERC 3/DC RAID Controller (128MB Cache)\n\nRAID1 for 2x 36Gb drives\nRAID5 for 3x 146Gb drives\n\nRunning RedHat Linux 8.0\n\nThis configuration would be pretty much the top of our budget (~ £5k).\n\nI was planning on having the RAID1 setup for the OS and then the RAID5 for\nthe db files.\n\nWould it be better to have a dual 2.4GHz setup rather than a single 2.8GHz\nor would it not make much difference?\n\nDoes the RAID setup look ok, or would anyone forsee problems in this\ncontext? (This machine can take a maximum of 5 internal drives).\n\nAm I overdoing any particular component at the expense of another?\n\nAny other comments would be most welcome.\n\nThanks for any help\n\nAdam\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n", "msg_date": "Fri, 30 May 2003 15:23:28 +0100", "msg_from": "Adam Witney <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware advice" }, { "msg_contents": "On Fri, May 30, 2003 at 03:23:28PM +0100, Adam Witney wrote:\n> RAID5 for 3x 146Gb drives\n\nI find the RAID5 on the PERC to be painfully slow. It's _really_ bad\nif you don't put WAL on its own drive.\n\nAlso, you don't mention it, but check to make sure you're getting ECC\nmemory on these boxes. Random memory errors which go undetected will\nmake you very unhappy. ECC lowers (but doesn't eliminate,\napparently) your chances.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 30 May 2003 10:44:30 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware advice" }, { "msg_contents": "On Fri, 30 May 2003, Adam Witney wrote:\n\n> 250,000 rows (5-10 numeric fields per row), processing the data (I may split\n> this to a second box) and then writing back ~20,000 rows of data (2-3\n> numeric fields per row).\n\nMake sure and vacuum often and crank up your fsm values to be able to \nreclaim lost disk space.\n\n> 36Gb 10,000rpm Ultra 3 160 SCSI\n> 36Gb 10,000rpm Ultra 3 160 SCSI\n> 146Gb 10,000rpm U320 SCSI\n> 146Gb 10,000rpm U320 SCSI\n> 146Gb 10,000rpm U320 SCSI\n> \n> PERC 3/DC RAID Controller (128MB Cache)\n\nIf that box has a built in U320 controller or you can bypass the Perc, \ngive the Linux kernel level RAID1 and RAID5 drivers a try. On a dual CPU \nbox of that speed, they may well outrun many hardware controllers. 
\nContrary to popular opinion, software RAID is not slow in Linux. \n\n> RAID1 for 2x 36Gb drives\n> RAID5 for 3x 146Gb drives\n\nYou might wanna do something like go to all 146 gig drives, put a mirror \nset on the first 20 or so gigs for the OS, and then use the remainder \n(5x120gig or so ) to make your RAID5. The more drives in a RAID5 the \nbetter, generally, up to about 8 or 12 as the optimal for most setups.\n\nBut that setup of a RAID1 and RAID5 set is fine as is.\n\nBy running software RAID you may be able to afford to upgrade the 36 gig \ndrives...\n\n> Would it be better to have a dual 2.4GHz setup rather than a single 2.8GHz\n> or would it not make much difference?\n\nYes it would. Linux servers running databases are much more responsive \nwith dual CPUs.\n\n> Am I overdoing any particular component at the expense of another?\n\nMaybe the RAID controller cost versus having more big hard drives.\n\n\n", "msg_date": "Fri, 30 May 2003 09:25:38 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware advice" }, { "msg_contents": "Hi scott, \n\nThanks for the info\n\n> You might wanna do something like go to all 146 gig drives, put a mirror\n> set on the first 20 or so gigs for the OS, and then use the remainder\n> (5x120gig or so ) to make your RAID5. The more drives in a RAID5 the\n> better, generally, up to about 8 or 12 as the optimal for most setups.\n\nI am not quite sure I understand what you mean here... Do you mean take 20Gb\nfrom each of the 5 drives to setup a 20Gb RAID 1 device? Or just from the\nfirst 2 drives?\n\nThanks again for your help\n\nadam\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n", "msg_date": "Fri, 30 May 2003 17:55:40 +0100", "msg_from": "Adam Witney <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware advice" }, { "msg_contents": "On Fri, 30 May 2003, Adam Witney wrote:\n\n> Hi scott, \n> \n> Thanks for the info\n> \n> > You might wanna do something like go to all 146 gig drives, put a mirror\n> > set on the first 20 or so gigs for the OS, and then use the remainder\n> > (5x120gig or so ) to make your RAID5. The more drives in a RAID5 the\n> > better, generally, up to about 8 or 12 as the optimal for most setups.\n> \n> I am not quite sure I understand what you mean here... Do you mean take 20Gb\n> from each of the 5 drives to setup a 20Gb RAID 1 device? Or just from the\n> first 2 drives?\n\nYou could do it either way, since the linux kernel supports more than 2 \ndrives in a mirror. But, this costs on writes, so don't do it for things \nlike /var or the pg_xlog directory.\n\nThere are a few ways you could arrange 5 146 gig drives.\n\nOne might be to make the first 20 gig on each drive part of a mirror set \nwhere the first two drives are the live mirror, and the next three are hot \nspares. Then you could setup your RAID5 to have 4 live drives and 1 hot \nspare.\n\nHot spares are nice to have because they provide for the shortest period \nof time during which your machine is running with a degraded RAID array.\n\nnote that in linux you can set the kernel parameter \ndev.raid.speed_limit_max and dev.raid.speed_limit_min to control the \nrebuild bandwidth used so that when a disk dies you can set a compromise \nbetween fast rebuilds, and lowering the demands on the I/O subsystem \nduring a rebuild. The max limit default is 100k / second, which is quite \nslow. 
On a machine with Ultra320 gear, you could set that to 10 ot 20 \nmegs a second and still not saturate your SCSI buss.\n\nNow that I think of it, you could probably set it up so that you have a \nmirror set for the OS, one for pg_xlog, and then use the rest of the \ndrives as RAID5. Then grab space on the fifth drive to make a hot spare \nfor both the pg_xlog and the OS drive.\n\nDrive 0\n[OS RAID1 20 Gig D0][big data drive RAID5 106 Gig D0]\nDrive 1\n[OS RAID1 20 Gig D1][big data drive RAID5 106 Gig D1]\nDrive 2\n[pg_xlog RAID1 20 gig D0][big data drive RAID5 106 Gig D2]\nDrive 3\n[pg_xlog RAID1 20 gig D1][big data drive RAID5 106 Gig D3]\nDrive 4\n[OS hot spare 20 gig][g_clog hot spare 20 gig][big data drive RAID5 106 \nGig hot spare]\n\nThat would give you ~ 300 gigs storage.\n\nOf course, there will likely be slightly less performance than you might \nget from dedicated RAID arrays for each RAID1/RAID5 set, but my guess is \nthat by having 4 (or 5 if you don't want a hot spare) drives in the RAID5 \nit'll still be faster than a dedicated 3 drive RAID array.\n\n", "msg_date": "Fri, 30 May 2003 11:17:39 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware advice" }, { "msg_contents": "On Fri, 2003-05-30 at 07:44, Andrew Sullivan wrote:\n> On Fri, May 30, 2003 at 03:23:28PM +0100, Adam Witney wrote:\n> > RAID5 for 3x 146Gb drives\n> \n> I find the RAID5 on the PERC to be painfully slow. It's _really_ bad\n> if you don't put WAL on its own drive.\n\nThis seems to be an issue with the dell firmware. The megaraid devel\nlist has been tracking this issue on and off for some time now. People\nhave had good luck with a couple of different fixes. The PERC cards\n-can- be made not to suck and the LSI cards simply don't have the\nproblem. ( Since they are effectively the same card its the opinion that\nits the firmware )\n\n\n\n> Also, you don't mention it, but check to make sure you're getting ECC\n> memory on these boxes. Random memory errors which go undetected will\n> make you very unhappy. ECC lowers (but doesn't eliminate,\n> apparently) your chances.\n\n100% agree with this note.\n\n> A\n> \n> -- \n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <[email protected]> M2P 2A8\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])", "msg_date": "30 May 2003 12:14:57 -0700", "msg_from": "Will LaShell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware advice" }, { "msg_contents": "On 30 May 2003, Will LaShell wrote:\n\n> On Fri, 2003-05-30 at 07:44, Andrew Sullivan wrote:\n> > On Fri, May 30, 2003 at 03:23:28PM +0100, Adam Witney wrote:\n> > > RAID5 for 3x 146Gb drives\n> > \n> > I find the RAID5 on the PERC to be painfully slow. It's _really_ bad\n> > if you don't put WAL on its own drive.\n> \n> This seems to be an issue with the dell firmware. The megaraid devel\n> list has been tracking this issue on and off for some time now. People\n> have had good luck with a couple of different fixes. The PERC cards\n> -can- be made not to suck and the LSI cards simply don't have the\n> problem. ( Since they are effectively the same card its the opinion that\n> its the firmware )\n\nI've used the LSI/MegaRAID cards in the past. They're not super fast, but \nthey're not slow either. 
Very solid operation. Sometimes the firmware \nmakes you feel like you're wearing handcuffs compared to the relative \nfreedom in the kernel sw drivers (i.e. you can force the kernel to take \nback a failed drive, the megaraid just won't take it back until it's been \nformatted, that kind of thing).\n\nThe LSI plain scsi cards in general are great cards, I got an UWSCSI card \nby them with gigabit ethernet thrown in off ebay a couple years back and \nit's VERY fast and stable.\n\nAlso, if you're getting cache memory on the megaraid/perc card, make sure \nyou get the battery backup module.\n\n", "msg_date": "Fri, 30 May 2003 13:33:19 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware advice" }, { "msg_contents": "On 30/5/03 6:17 pm, \"scott.marlowe\" <[email protected]> wrote:\n\n> On Fri, 30 May 2003, Adam Witney wrote:\n> \n>> Hi scott, \n>> \n>> Thanks for the info\n>> \n>>> You might wanna do something like go to all 146 gig drives, put a mirror\n>>> set on the first 20 or so gigs for the OS, and then use the remainder\n>>> (5x120gig or so ) to make your RAID5. The more drives in a RAID5 the\n>>> better, generally, up to about 8 or 12 as the optimal for most setups.\n>> \n>> I am not quite sure I understand what you mean here... Do you mean take 20Gb\n>> from each of the 5 drives to setup a 20Gb RAID 1 device? Or just from the\n>> first 2 drives?\n> \n> You could do it either way, since the linux kernel supports more than 2\n> drives in a mirror. But, this costs on writes, so don't do it for things\n> like /var or the pg_xlog directory.\n> \n> There are a few ways you could arrange 5 146 gig drives.\n> \n> One might be to make the first 20 gig on each drive part of a mirror set\n> where the first two drives are the live mirror, and the next three are hot\n> spares. Then you could setup your RAID5 to have 4 live drives and 1 hot\n> spare.\n> \n> Hot spares are nice to have because they provide for the shortest period\n> of time during which your machine is running with a degraded RAID array.\n> \n> note that in linux you can set the kernel parameter\n> dev.raid.speed_limit_max and dev.raid.speed_limit_min to control the\n> rebuild bandwidth used so that when a disk dies you can set a compromise\n> between fast rebuilds, and lowering the demands on the I/O subsystem\n> during a rebuild. The max limit default is 100k / second, which is quite\n> slow. On a machine with Ultra320 gear, you could set that to 10 ot 20\n> megs a second and still not saturate your SCSI buss.\n> \n> Now that I think of it, you could probably set it up so that you have a\n> mirror set for the OS, one for pg_xlog, and then use the rest of the\n> drives as RAID5. 
Then grab space on the fifth drive to make a hot spare\n> for both the pg_xlog and the OS drive.\n> \n> Drive 0\n> [OS RAID1 20 Gig D0][big data drive RAID5 106 Gig D0]\n> Drive 1\n> [OS RAID1 20 Gig D1][big data drive RAID5 106 Gig D1]\n> Drive 2\n> [pg_xlog RAID1 20 gig D0][big data drive RAID5 106 Gig D2]\n> Drive 3\n> [pg_xlog RAID1 20 gig D1][big data drive RAID5 106 Gig D3]\n> Drive 4\n> [OS hot spare 20 gig][g_clog hot spare 20 gig][big data drive RAID5 106\n> Gig hot spare]\n> \n> That would give you ~ 300 gigs storage.\n> \n> Of course, there will likely be slightly less performance than you might\n> get from dedicated RAID arrays for each RAID1/RAID5 set, but my guess is\n> that by having 4 (or 5 if you don't want a hot spare) drives in the RAID5\n> it'll still be faster than a dedicated 3 drive RAID array.\n> \n\nHi Scott,\n\nJust following up a post from a few months back... I have now purchased the\nhardware, do you have a recommended/preferred Linux distro that is easy to\nconfigure for software RAID?\n\nThanks again\n\nAdam\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n", "msg_date": "Fri, 21 Nov 2003 10:08:42 +0000", "msg_from": "Adam Witney <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware advice" } ]
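A follow-up shell sketch of the rebuild-throttling knobs Scott mentions above. The numbers are purely illustrative; the right ceiling depends on how much of the SCSI bus can be given up during a rebuild, and the same keys can go in /etc/sysctl.conf to survive a reboot.

# units are KB/s
sysctl -w dev.raid.speed_limit_min=1000
sysctl -w dev.raid.speed_limit_max=20000    # roughly the 20 MB/s suggested above

# equivalent direct write
echo 20000 > /proc/sys/dev/raid/speed_limit_max

# watch a resync/rebuild in progress
cat /proc/mdstat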
[ { "msg_contents": "\nI was unable to find the address of a human who manages these lists,\nso I apologize for the off-topic email (email about abuse of the\npsql-performance mailing list instead of email about psql\nperformance).\n\nI received, today, spam sent to addresses subscribed to\npsql-performance. One address was ONLY used for this specific mailing\nlist mail, so there are only two ways the address could have been\ndiscovered:\n\n* Someone found a way to retrieve the list of email addresses subscribed\n to this mailing list.\n\n* Someone harvested the email addresses from the archive.\n\nThe evidence supports the latter assertion, since other single-use\nemail addresses subscribed to other psql lists received no spam.\n\nThus, I suggest that the archives be munged to translate all email\naddresses to human (but not simple machine) readable form.\n\n -Seth Robertson\n [email protected]\n\nOffsending (and offensive) spam included below\n----------------------------------------------------------------------\nReturn-Path: [email protected]\nDelivery-Date: Fri May 30 05:41:09 2003\nDelivery-Date: Fri, 30 May 2003 05:41:09 -0400\nReceived: from martin.sysdetect.com (martin.sysdetect.com [172.16.1.254])\n\tby winwood.sysdetect.com (8.11.6/8.11.6) with ESMTP id h4U9f9927804;\n\tFri, 30 May 2003 05:41:09 -0400\nReceived: (from mail@localhost)\n\tby martin.sysdetect.com (8.11.4/8.11.3) id h4U9f7B02542;\n\tFri, 30 May 2003 09:41:07 GMT\nReceived: from user-0cal2q3.cable.mindspring.com(24.170.139.67)\n via SMTP by mail.sysdetect.com, id smtpdgM8099; Fri May 30 09:40:57 2003\nReceived: from zfd.8ot39.net [195.6.117.117] by user-0cal2q3.cable.mindspring.com with SMTP for <[email protected]>; Fri, 30 May 2003 06:40:57 -0400\nMessage-ID: <b-s-z05$9jk---yr8-ts$-s2v$i3@5r30ftk52>\nFrom: \"Adele Dickson\" <[email protected]>\nTo: <[email protected]>, <[email protected]>\nSubject: KARMEN'S pics x iroiaqypverhm tud\nDate: Fri, 30 May 03 06:40:57 GMT\nX-Priority: 3\nX-MSMail-Priority: Normal\nX-Mailer: Microsoft Outlook, Build 10.0.2616\nMIME-Version: 1.0\nContent-Type: multipart/alternative;\n\tboundary=\"C_FC_.F69E_\"\n\nThis is a multi-part message in MIME format.\n\n--C_FC_.F69E_\nContent-Type: text/html\nContent-Transfer-Encoding: quoted-printable\n\n<p>I hope this is [email protected] ... Here are the snapshot from my c@m last ni=\nght\n<a href=3D\"http://[email protected]\"></p>\n<p><img src=3D\"http://[email protected]/byot/tn4790/sierra.jpg=\n?combustion\"> </a></p>\n<br>\n<br>\n<br>This will piss off my dink BF!!\n<br>I hope I got your address right\n<br>XOXOXOXOXOXOXOX\n<br>\n<a href=3D\"http://[email protected]/r.php\">beam me off scotty</a>=\n</font></td>\n\n\nieawib fh tahy\namt\nq\n\ni akfrfddmhafjk hvyvcc\nprz qzmuaz\n--C_FC_.F69E_--\n\n----------------------------------------------------------------------\n", "msg_date": "Fri, 30 May 2003 12:30:11 -0400", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Spam sent to addresses subscribed to psql-performance" } ]
[ { "msg_contents": "Based on what you've said, I would guess you are considering the Dell PowerEdge 2650 since it has 5 drive bays. If you could afford the rackspace and just a bit more money, I'd get the tower configuration 2600 with 6 drive bays (and rack rails if needed - Dell even gives you a special rackmount faceplate if you order a tower with rack rails). This would allow you to have this configuration, which I think would be about ideal for the price range you are looking at:\r\n \r\n* Linux kernel RAID\r\n* Dual processors - better than a single faster processor, especially with concurrent user load and software RAID on top of that\r\n* 2x36GB in RAID-1 (for OS and WAL)\r\n* 4x146GB in RAID-10 (for data) (alternative: 4-disk RAID-5)\r\n \r\nThe RAID-10 array gives you the same amount of space you would have with a 3-disk RAID-5 and improved fault tolerance. Although I'm pretty sure your drives won't be hot-swappable with the software RAID - I've never actually had to do it.\r\n \r\nI can't say I like Scott's idea much because the WAL and OS are competing for disk time with the data since they are on the same physical disk. In a database that is mainly reads with few writes, this wouldn't be such a problem though.\r\n \r\nJust my inexpert opinion,\r\n \r\nRoman\r\n \r\n\r\n\t-----Original Message----- \r\n\tFrom: Adam Witney [mailto:[email protected]] \r\n\tSent: Fri 5/30/2003 9:55 AM \r\n\tTo: scott.marlowe; Adam Witney \r\n\tCc: pgsql-performance \r\n\tSubject: Re: [PERFORM] Hardware advice\r\n\t\r\n\t\r\n\r\n\tHi scott,\r\n\t\r\n\tThanks for the info\r\n\t\r\n\t> You might wanna do something like go to all 146 gig drives, put a mirror\r\n\t> set on the first 20 or so gigs for the OS, and then use the remainder\r\n\t> (5x120gig or so ) to make your RAID5. The more drives in a RAID5 the\r\n\t> better, generally, up to about 8 or 12 as the optimal for most setups.\r\n\t\r\n\tI am not quite sure I understand what you mean here... Do you mean take 20Gb\r\n\tfrom each of the 5 drives to setup a 20Gb RAID 1 device? Or just from the\r\n\tfirst 2 drives?\r\n\t\r\n\tThanks again for your help\r\n\t\r\n\tadam\r\n\t\r\n\t\r\n\t--\r\n\tThis message has been scanned for viruses and\r\n\tdangerous content by MailScanner, and is\r\n\tbelieved to be clean.\r\n\t\r\n\t\r\n\t---------------------------(end of broadcast)---------------------------\r\n\tTIP 1: subscribe and unsubscribe commands go to [email protected]\r\n\t\r\n\r\n", "msg_date": "Fri, 30 May 2003 10:59:51 -0700", "msg_from": "\"Roman Fail\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware advice" }, { "msg_contents": "On Fri, 30 May 2003, Roman Fail wrote:\n\n> Based on what you've said, I would guess you are considering the Dell PowerEdge 2650 since it has 5 drive bays. If you could afford the rackspace and just a bit more money, I'd get the tower configuration 2600 with 6 drive bays (and rack rails if needed - Dell even gives you a special rackmount faceplate if you order a tower with rack rails). This would allow you to have this configuration, which I think would be about ideal for the price range you are looking at:\n> \n> * Linux kernel RAID\n\nActually, I think he was looking at hardware RAID, but I was recommending \nsoftware RAID as at least an option. I've found that on modern hardware \nwith late model kernels, Linux is pretty fast with straight RAID, but not \nas good with layering it, fyi. 
I haven't tested since 2.4.9 though, so \nthings may well have changed, and hopefully for the better, in relation to \nrunning fast in layered RAID.\n\nthey both would likely work well, but going with a sub par HW raid card \nwill make the system slower than the kernel sw RAID.\n\n> * Dual processors - better than a single faster processor, especially \n> with concurrent user load and software RAID on top of that\n> * 2x36GB in RAID-1 (for OS and WAL)\n> * 4x146GB in RAID-10 (for data) (alternative: 4-disk RAID-5)\n> \n> The RAID-10 array gives you the same amount of space you would have \n> with a 3-disk RAID-5 and improved fault tolerance. Although I'm pretty \n> sure your drives won't be hot-swappable with the software RAID - I've \n> never actually had to do it.\n\nI agree that 6 drives makes this a much better option.\n\nActually, the hot swappable issue can only be accomplished in linux kernel \nsw raid by using multiple controllers. It's not really \"hot swap\" because \nyou have to basically reset that card and it's information about which \ndrives are on it. Using two controllers, where one runs one RAID0 set, \nand the other runs another RAID0 set, and you run a RAID1 on top, you can \nthen use hot swap shoes and replace failed drives.\n\nThe improved fault tolerance of the RAID 1+0 is minimal over the RAID5 if \nthe RAID5 has a hot spare, but it is there.\n\nI've removed and added drives to running arrays, and the raidhotadd \nprogram to do it is quite easy to drive. It all seemed to work quite \nwell. The biggest problem you'll note when a drive fails is that the \nkernel / scsi driver will keep resetting the bus and timing out the \ndevice, so with a failed device, linux kernel RAID can be a bit doggish \nuntil you restart the SCSI driver so it KNOWs the drive's not there and \nquits asking for it over and over.\n\n> I can't say I like Scott's idea much because the WAL and OS are \n> competing for disk time with the data since they are on the same \n> physical disk. In a database that is mainly reads with few writes, \n> this wouldn't be such a problem though.\n\nYou'd be surprised how often this is a non-issue. If you're writing \n20,000 records every 10 minutes or so, the location of the WAL file is not \nthat important. The machine will lug for a few seconds, insert, and be \ndone. The speed increase averaged out over time is almost nothing.\n\nNow, transactional systems are a whole nother enchilada. I got the \nfeeling from the original post this was more a batch processing kinda \nthing.\n\nI knew the solution I was giving was suboptimal on performance (I might \nhave even alluded to that...). I was going more for maximizing use of \nrack space and getting the most storage. I think the user said that this \nproject might well grow to 250 or 300 gig, so size is probably more or as \nimportant as speed for this system.\n\nRAID5 is pretty much the compromise RAID set. It's not necessarily the \nfastest, it certainly isn't the sexiest, but it provides a lot of storage \nfor very little redundancy cost, and with a hot spare it's pretty much \n24/7 with a couple days off a year for scheduled maintenance. Combine \nthat with having n-1 number of platters for each read to be spread across \nmake it a nice choice for data warehousing or report serving.\n\nWhatever he does, he should make sure he turns off atime on the data \npartition. 
That can utterly kill a postgresql / linux box by a factor of \nright at two for someone doing small reads.\n\n\n", "msg_date": "Fri, 30 May 2003 13:26:31 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware advice" } ]
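A minimal sketch of the noatime change recommended at the end of this thread, assuming the PostgreSQL data directory sits on its own ext3 partition; the device name and mount point below are placeholders, not details taken from the posts:

# /etc/fstab -- add noatime to the data partition's mount options
/dev/sdb1   /var/lib/pgsql   ext3   defaults,noatime   1 2

# apply it to the running system without a reboot
mount -o remount,noatime /var/lib/pgsql

With noatime set, a read no longer forces an inode write just to record the access time, which is where the roughly two-fold penalty on small reads mentioned above comes from.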
[ { "msg_contents": "In the application, that I'm working on, I have a query that'll be a lot \n60% faster if I disable sequential scan forcing it to you my index.\n\nIs it bad practice to disable sequential scan ( set \nenable_seqscan=false), run my query then enable sequential scan, \nwhenever I'm running this query? Why?\n\nThanks in advance\n\n- David Wendy\n\n", "msg_date": "Fri, 30 May 2003 16:33:07 -0400", "msg_from": "Yusuf <[email protected]>", "msg_from_op": true, "msg_subject": "Enabling and Disabling Sequencial Scan" }, { "msg_contents": "On Fri, 30 May 2003, Yusuf wrote:\n\n> In the application, that I'm working on, I have a query that'll be a lot \n> 60% faster if I disable sequential scan forcing it to you my index.\n> \n> Is it bad practice to disable sequential scan ( set \n> enable_seqscan=false), run my query then enable sequential scan, \n> whenever I'm running this query? Why?\n\nsetting seqscan to off is more of a troubleshooting tool than a tuning \ntool, albeit sometimes it's the only tuning tool that MIGHT work.\n\nOnce you've determined that the database is picking the wrong plan when \nyou turn seqscan back on, you need to figure out how to convince the \ndatabase to use the right plan more often.\n\nThe best parameters to change and see how they affect this are the \n*cost* parameters and the effective cache size.\n\nshow all; will show them to you, the ones we're interested in are these:\n\nNOTICE: effective_cache_size is 100000\nNOTICE: random_page_cost is 1\nNOTICE: cpu_tuple_cost is 0.01\nNOTICE: cpu_index_tuple_cost is 0.0001\nNOTICE: cpu_operator_cost is 0.0025\n\nTo change them for one session, just use the set command. To make the \nchanges the permanent default, edit the $PGDATA/postgresql.conf file.\n\neffective_cache_size tells the planner about how big the kernel's file \nlevel cache is. On my machine it's about 800 meg. It's measured in 8k \nblocks, so 100,000 * 8k ~ 800 meg. The smaller this is, the more likely \nthe database will have to access the hard drive, and therefore the more \nlikely it will pick a seqscan if the other numbers point to it.\n\nrandom_page_cost tells the planner how much more a random page access \ncosts. The default is 4. Most systems seem to work well with numbers \nfrom 1 to 2.\n\nlowering the cpu_index_tuple_cost also favors index scans.\n\n\n", "msg_date": "Fri, 30 May 2003 14:46:12 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enabling and Disabling Sequencial Scan" }, { "msg_contents": "On Fri, 30 May 2003 14:46:12 -0600 (MDT)\n\"scott.marlowe\" <[email protected]> said something like:\n\n> \n> level cache is. On my machine it's about 800 meg. It's measured in 8k > blocks, so 100,000 * 8k ~ 800 meg. The smaller this is, the more \n\nAny thoughts on how to figure this out (disk buffer size)? For some reason, my system (2xAMD 2800+, 2Gb RAM 2.4.21 - /proc/meminfo) only shows a usage of 88kb of 'Buffers' usage, and that never changes. My 'Cached' usage is 1.7Gb. I've hit the kernel mailing list, and the one response I got said don't worry about it :-(\n\nCheers,\nRob\n\n-- \nO_", "msg_date": "Fri, 30 May 2003 21:28:46 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enabling and Disabling Sequencial Scan" }, { "msg_contents": "On Fri, 30 May 2003, Robert Creager wrote:\n\n> On Fri, 30 May 2003 14:46:12 -0600 (MDT)\n> \"scott.marlowe\" <[email protected]> said something like:\n> \n> > \n> > level cache is. 
On my machine it's about 800 meg. It's measured in 8k > blocks, so 100,000 * 8k ~ 800 meg. The smaller this is, the more \n> \n> Any thoughts on how to figure this out (disk buffer size)? For some \n> reason, my system (2xAMD 2800+, 2Gb RAM 2.4.21 - /proc/meminfo) only \n> shows a usage of 88kb of 'Buffers' usage, and that never changes. My \n> 'Cached' usage is 1.7Gb. I've hit the kernel mailing list, and the one \n> response I got said don't worry about it :-(\n\nAre you sure that's not 88213Kb or so of buffers? 88kb is awfully small.\n\nIt's normal to have a cache size many times larger than the buffer size. \nBuffers are assigned to individual disks, and sit under the larger single \npool that is the cache.\n\nI just take the approximate size of the cache under load and use that for \nthe effective_cache_size. Since it's pretty much a \"fudge factor\" \nvariable anyway.\n\nP.s. My use of the term fudge factor here is in no way meant to be \nderogatory. It's just that as long as the effective cache size is within \nsome reasonable range of the actual cache/buffer in the machine, it'll be \nclose enough to push the query planner in the right direction.\n\nNote that you can adopt two philosophies on the planner. One is that the \nplanner will always make certain mistakes, and you've got to fool it in \norder to get the right query plan.\n\nThe other philosophy is that you give the query planner all the variables \nyou can reasonably give it to let it decide the proper course of action, \nand you fine tune each one so that eventually it makes the right choice \nfor all the querys you throw at it.\n\nWhile the first philosophy provides for the fastest functional solutions \non a static platform (i.e. we're running postgresql 7.0.2 and aren't going \nto upgrade.) but it kind of supports the idea that the query planner can \nnever be given the right information and programmed with the right code to \nmake the right decision 99% of the time, and when it makes the wrong \ndecision, it's only a little wrong.\n\nThe second choice will require you to spend more time fine tuning all the \nparameters fed to the query planner with your own queries using explain \nanalyze and repeated testing with different settings.\n\nWhat I look for are the corner cases. I.e. if I do some select that \nreturns 500 records with a seq scan, and it takes 5 seconds, and with 450 \nrecords it switches to index scan and takes 1 second, then likely the \nplanner is choosing to switch to seq scans too quickly when I raise the \nresult size from 450 to 500. \n\nAt this point use the set seq_scan option to test the database \nperformance with it on and off and increasing set size.\n\nSomewhere around 2,000 or so in this scenario, we'll notice that the seq \nscan has now the same speed as the index scan, and as we raise the number \nof rows we are getting, the index scan would now be slower than the seq \nscan.\n\nAssuming we set effective_cache_size right at the beginning, we now can \nturn seq_scan back on, and adjust the default cost options until the \nplanner chooses a seq scan at the break point we found (in our imaginary \ncase of 2000). It doesn't have to be perfect, since the performance at or \naround the break point is similar for index and seq scans alike.\n\nThen, throw the next query at it and see how it does.\n\nI've found that on fast machines, it's good to lower the cpu costs, \nespecially the index one. I usually drop these by a divisor of 2 to 10. 
\nFor the random_page_cost, settings of 1.x to 2.x seem a good choice for \nfast I/O subsystems.\n\n", "msg_date": "Mon, 2 Jun 2003 09:20:28 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enabling and Disabling Sequencial Scan" }, { "msg_contents": "On Fri, 30 May 2003, Robert Creager wrote:\n\n> On Fri, 30 May 2003 14:46:12 -0600 (MDT)\n> \"scott.marlowe\" <[email protected]> said something like:\n> \n> > \n> > level cache is. On my machine it's about 800 meg. It's measured in 8k \n> > blocks, so 100,000 * 8k ~ 800 meg. The smaller this is, the more \n> \n> My 'Cached' usage is 1.7Gb. I've hit the kernel mailing list, and the \n> one response I got said don't worry about it :-(\n\nOh, yeah, just a bit on that. as far as the kernel developers are \nconcerned, the buffer / cache is working perfectly, and they're right, it \nis. What they probably don't understand if your need to tell postgresql \nhow much cache/buffer is allocated to it. \n\nso don't worry about the kernel, the linux kernel really is pretty good at \ncaching disk access.\n\n", "msg_date": "Mon, 2 Jun 2003 09:23:11 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enabling and Disabling Sequencial Scan" } ]
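A minimal sketch of the break-point testing described above; the table, index column, and cutoff value are placeholders standing in for whichever query is actually misbehaving:

-- compare the forced index plan against the planner's own choice
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id < 2000;
SET enable_seqscan = on;
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id < 2000;

-- if the index plan wins, nudge the cost model instead of leaving seqscans disabled
SET effective_cache_size = 100000;   -- 8k pages, roughly 800 MB of kernel cache
SET random_page_cost = 2;
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id < 2000;

Once the planner picks the fast plan with every enable_* setting back at its default, move the winning cost values into postgresql.conf so they apply to all sessions.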
[ { "msg_contents": "David,\n\nI say go ahead and use it since you get a significant\nperformance gain. This is a special case where you\nknow more about your data than the planer does with\ngeneral system wide settings. In Oracle you could use\n\"hints\". Since there are no hints in PostgreSQL\ndisabling and reenabling an option just before and\nafter a query has the same effect.\n\nRegards,\nNikolaus\n\nOn Fri, 30 May 2003 16:33:07 -0400, Yusuf wrote:\n\n> \n> In the application, that I'm working on, I have a\nquery\n> that'll be a lot \n> 60% faster if I disable sequential scan forcing it to\n> you my index.\n> \n> Is it bad practice to disable sequential scan ( set \n> enable_seqscan=false), run my query then enable\n> sequential scan, \n> whenever I'm running this query? Why?\n> \n> Thanks in advance\n> \n> - David Wendy\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\n> [email protected]\n", "msg_date": "Sat, 31 May 2003 08:07:15 -0700 (PDT)", "msg_from": "\"Nikolaus Dilger\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Enabling and Disabling Sequencial Scan" }, { "msg_contents": "I have a simple table with a dozen integer fields and a primary key.\n\nWhen I say \"explain select * from Patient where Patient_primary_key = 100\"\n\nI get sequential scan.\n\nI've just converted my application from MySQL and am seeing everything run\nabout 3X slower. What do I have to do to get postgres to use indexes?\n\nBrian Tarbox\n\n", "msg_date": "Sat, 31 May 2003 12:30:40 -0400", "msg_from": "\"Brian Tarbox\" <[email protected]>", "msg_from_op": false, "msg_subject": "why Sequencial Scan when selecting on primary key of table?" }, { "msg_contents": "On Sat, May 31, 2003 at 12:30:40PM -0400, Brian Tarbox wrote:\n> I have a simple table with a dozen integer fields and a primary key.\n> \n> When I say \"explain select * from Patient where Patient_primary_key = 100\"\n> \n> I get sequential scan.\n> \n> I've just converted my application from MySQL and am seeing everything run\n> about 3X slower. What do I have to do to get postgres to use indexes?\n\nUsual questions: have you vacuumed? EXPLAIN ANALYSE output, schema,\n&c.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 31 May 2003 13:02:08 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why Sequencial Scan when selecting on primary key of table?" }, { "msg_contents": "\"Brian Tarbox\" <[email protected]> writes:\n> When I say \"explain select * from Patient where Patient_primary_key = 100\"\n> I get sequential scan.\n\nPerhaps Patient_primary_key is not an integer field? If not, you need\nto cast the constant 100 to the right type. Or write '100' with\nsingle quotes around it, which leaves Postgres to choose the constant's\ndatatype. (Yeah, I know, it's a pain in the neck. We've had a lot of\ndiscussions about how to fix this without breaking datatype extensibility;\nno luck so far.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 May 2003 13:13:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why Sequencial Scan when selecting on primary key of table? 
" }, { "msg_contents": "The primary key field is an integer and I have performed vacuum analyse but\nthat does not seem to change anything.\n\nI've also heard that postgres will not indexes when JOINing tables. Can\nthat really be true??\n\nBrian\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Tom Lane\nSent: Saturday, May 31, 2003 1:14 PM\nTo: Brian Tarbox\nCc: [email protected]\nSubject: Re: [PERFORM] why Sequencial Scan when selecting on primary key\nof table?\n\n\n\"Brian Tarbox\" <[email protected]> writes:\n> When I say \"explain select * from Patient where Patient_primary_key = 100\"\n> I get sequential scan.\n\nPerhaps Patient_primary_key is not an integer field? If not, you need\nto cast the constant 100 to the right type. Or write '100' with\nsingle quotes around it, which leaves Postgres to choose the constant's\ndatatype. (Yeah, I know, it's a pain in the neck. We've had a lot of\ndiscussions about how to fix this without breaking datatype extensibility;\nno luck so far.)\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n\n", "msg_date": "Sat, 31 May 2003 13:45:50 -0400", "msg_from": "\"Brian Tarbox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why Sequencial Scan when selecting on primary key of table? " }, { "msg_contents": "On Sat, 2003-05-31 at 13:13, Tom Lane wrote:\n> \"Brian Tarbox\" <[email protected]> writes:\n> > When I say \"explain select * from Patient where Patient_primary_key = 100\"\n> > I get sequential scan.\n> \n> Perhaps Patient_primary_key is not an integer field? If not, you need\n> to cast the constant 100 to the right type. Or write '100' with\n> single quotes around it, which leaves Postgres to choose the constant's\n> datatype.\n\nOut of curiosity, why don't we confirm the unquoted value is an integer,\nnumeric, etc, then change it into type 'unknown'? From that point\nforward it would be treated like it's quoted counterpart.\n\nIs this noticeably slower or am I missing something?\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "31 May 2003 13:51:11 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why Sequencial Scan when selecting on primary key of table?" }, { "msg_contents": "\"Brian Tarbox\" <[email protected]> writes:\n> The primary key field is an integer and I have performed vacuum analyse but\n> that does not seem to change anything.\n\nHm. So how big is the table, exactly? On small tables a seqscan will\nbe preferred because the extra I/O to examine the index costs more than\nthe CPU to examine all the tuples on a disk page.\n\n> I've also heard that postgres will not indexes when JOINing tables. Can\n> that really be true??\n\nWe have some join methods that like indexes and we have some that find\nno benefit in 'em. Again, testing on toy-size tables is not a good\nguide to what will happen on larger tables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 May 2003 13:55:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why Sequencial Scan when selecting on primary key of table? " }, { "msg_contents": "On Sat, May 31, 2003 at 01:45:50PM -0400, Brian Tarbox wrote:\n> The primary key field is an integer and I have performed vacuum analyse but\n> that does not seem to change anything.\n\nint4? int8? int2? 
Makes a difference. Please post the results of\nEXPLAIN ANALYSE on the query you're having trouble with, and someone\nmay be able to help you. (You'll need to show us the table, too.)\n\n> I've also heard that postgres will not indexes when JOINing tables. Can\n> that really be true??\n\nNo. \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 31 May 2003 14:02:35 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why Sequencial Scan when selecting on primary key of table?" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n> Out of curiosity, why don't we confirm the unquoted value is an integer,\n> numeric, etc, then change it into type 'unknown'?\n\nUNKNOWNNUMERIC is one of the ideas that's been proposed, but it's not\nclear to me that it is better than other alternatives. In particular,\nI don't like losing knowledge of the form and size of the constant.\nSomething like \"WHERE int4col = 4.8\" should yield FALSE, not \"ERROR:\npg_atoi: unable to parse '4.8'\" which is what you're likely to get with\na naive \"unknown numeric type\" approach. A perhaps-more-realistic\nobjection is that it only solves the problem for trivial \"var = const\"\ncases. As soon as you look at even slightly more complicated\nexpressions, it stops doing much good.\n\nI'm still of the opinion that the best solution in the long run is to\nget rid of most of the cross-datatype numeric operators, but there are\npitfalls there too. See last thread on the issue:\nhttp://archives.postgresql.org/pgsql-hackers/2002-11/msg00468.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 May 2003 14:12:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why Sequencial Scan when selecting on primary key of table? " }, { "msg_contents": "I am working with a highly normalized database. Most of my meaningful\nqueries involve joins from a primary table to 4-6 secondary tables.\n\nWould my query performance be significantly faster/slower using a View as\nopposed to a prepared query using join?\n\n(Assume all join fields are ints, say 10,000 records in main table and a few\ndozen records in each of the secondary tables).\n\nThank you.\n\nBrian Tarbox\n\n", "msg_date": "Sat, 31 May 2003 22:55:18 -0400", "msg_from": "\"Brian Tarbox\" <[email protected]>", "msg_from_op": false, "msg_subject": "are views typically any faster/slower than equivilent joins?" }, { "msg_contents": "On Sat, 2003-05-31 at 22:55, Brian Tarbox wrote:\n> I am working with a highly normalized database. 
Most of my meaningful\n> queries involve joins from a primary table to 4-6 secondary tables.\n> \n> Would my query performance be significantly faster/slower using a View as\n> opposed to a prepared query using join?\n\nThere are some corner cases where a view would be slower than a standard\nquery in 7.3 (bug fix / disabled optimization -- fixed right in 7.4),\nbut generally you can assume it will be about the same speed.\n\n\nSome views such as unions will not be as fast as you would like, but\nthats a general issue with PostgreSQLs inability to throw away selects\nwhen it won't find results on one side of a union.\n\nCREATE VIEW sales AS SELECT * FROM sales_archive_2002 UNION ALL SELECT *\nFROM sales_current;\n\n\nSELECT * FROM sales WHERE timestamp => CURRENT_TIMESTAMP - INTERVAL '1\nday';\n\nThe above query would not be so quick.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "01 Jun 2003 00:02:39 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: are views typically any faster/slower than equivilent joins?" }, { "msg_contents": "> Would my query performance be significantly faster/slower using a View as\n> opposed to a prepared query using join?\n\nI missed this part. Views and prepared queries are not the same time. \nUse of a view still needs to be optimized.\n\nPrepared queries will run the optimization portion on the entire query\nincluding the view segments of it. Think of a view as a MACRO. \nDepending on the context of what surrounds it, the view may be executed\nvery differently.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "01 Jun 2003 00:05:56 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: are views typically any faster/slower than equivilent joins?" }, { "msg_contents": "On Sun, Jun 01, 2003 at 00:02:39 -0400,\n Rod Taylor <[email protected]> wrote:\n> \n> Some views such as unions will not be as fast as you would like, but\n> thats a general issue with PostgreSQLs inability to throw away selects\n> when it won't find results on one side of a union.\n> \n> CREATE VIEW sales AS SELECT * FROM sales_archive_2002 UNION ALL SELECT *\n> FROM sales_current;\n> \n> \n> SELECT * FROM sales WHERE timestamp => CURRENT_TIMESTAMP - INTERVAL '1\n> day';\n> \n> The above query would not be so quick.\n\nI thought some work had been done on pushing where conditions down into\nunions? If so the above wouldn't be too bad. It would still look at\nthe archive table, but it should return no rows relatively quickly\nassuming an appropiate index exists.\n", "msg_date": "Sun, 1 Jun 2003 00:43:37 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: are views typically any faster/slower than equivilent joins?" 
}, { "msg_contents": "On Sun, 2003-06-01 at 01:43, Bruno Wolff III wrote:\n> On Sun, Jun 01, 2003 at 00:02:39 -0400,\n> Rod Taylor <[email protected]> wrote:\n> > \n> > Some views such as unions will not be as fast as you would like, but\n> > thats a general issue with PostgreSQLs inability to throw away selects\n> > when it won't find results on one side of a union.\n> > \n> > CREATE VIEW sales AS SELECT * FROM sales_archive_2002 UNION ALL SELECT *\n> > FROM sales_current;\n> > \n> > \n> > SELECT * FROM sales WHERE timestamp => CURRENT_TIMESTAMP - INTERVAL '1\n> > day';\n> > \n> > The above query would not be so quick.\n> \n> I thought some work had been done on pushing where conditions down into\n> unions? If so the above wouldn't be too bad. It would still look at\n> the archive table, but it should return no rows relatively quickly\n> assuming an appropiate index exists.\n\nCertainly, if the index exists it won't be so bad (for any single\narchive table). It's when the index doesn't exist or there are several\nhundred archive tables then it starts to get a little worse.\n\nAnyway, anyone doing the above in PostgreSQL should probably be looking\nat partial indexes and merging the information back into a single table\nagain.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "01 Jun 2003 07:56:15 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: are views typically any faster/slower than" } ]
[ { "msg_contents": "Tom Lane Writes:\n\n>Bruno Wolff III <[email protected]> writes:\n>> It probably has one visible row in it. If it can changed a lot, there\n>> may be lots of deleted tuples in a row. That would explain why an\n>> index scan speeds things up.\n\n>Right, every UPDATE on unique_ids generates a dead row, and a seqscan\n>has no alternative but to wade through them all. When a unique index is\n>present, the indexscan code knows that after it's fetched one live tuple\n...\n>More-frequent vacuums would be a much more reliable solution,\n\n\nThe index I created wasn't unique (though it should have been), but \nperhaps much of the same reasoning still applies.\n\nAlso, I could have swore I tried a vacuum, and it didn't make a \ndifference, although experimenting just now, it did. The data collection \nrate is considerably slower at the moment though, so perhaps last time \nthe table simply quickly got \"inefficient\" very quickly again \nduring/immediately after the vacuum (or I wasn't where I thought I was \nwhen I vacuumed). I'll have to experiment with this a bit more, when the \ndata generation is high again.\n\n(ok, experimented a bit more just now)\nHm, it appears that degredation occurs with the index as well, I guess \nat the time I created the index, it just initially did better because it \ngot to skip all the already dead rows at creation time: but this is \ndisturbing, I do a vacuum, and the access times are better, but still \nhorrible:\n\nexplain analyze select next_id from bigint_unique_ids where \ntable_name='CONNECTION_DATA';\n\n Index Scan using bigint_unique_ids__table_name on bigint_unique_ids \n(cost=0.00..8.01 rows=1 width=8) (actual time=13.77..844.14 rows=1 loops=1)\n Index Cond: (table_name = 'CONNECTION_DATA'::text)\n Total runtime: 844.36 msec\n(3 rows)\n\nvacuum; -- takes about 10 minutes\nVACUUM\n\nexplain analyze select next_id from bigint_unique_ids where \ntable_name='CONNECTION_DATA';\nIndex Scan using bigint_unique_ids__table_name on bigint_unique_ids \n(cost=0.00..84.01 rows=1 width=8) (actual time=0.17..99.94 rows=1 loops=1)\n Index Cond: (table_name = 'CONNECTION_DATA'::text)\n Total runtime: 100.09 msec\n\nvacuum; --takes about 2 minutes\nIndex Scan using bigint_unique_ids__table_name on bigint_unique_ids \n(cost=0.00..179.01 rows=1 width=8) (actual time=0.45..219.05 rows=1 loops=1)\n Index Cond: (table_name = 'CONNECTION_DATA'::text)\n Total runtime: 219.20 msec\n\n--ACK, worse, ran twice more, got 212.5 ms, and 394.39\n\nvacuum bigint_unique_ids; -- try specific table only, takes about 5 seconds\nIndex Scan using bigint_unique_ids__table_name on bigint_unique_ids \n(cost=0.00..163.01 rows=1 width=8) (actual time=0.23..143.59 rows=1 loops=1)\n Index Cond: (table_name = 'CONNECTION_DATA'::text)\n Total runtime: 143.72 msec\n\nvacuum full bigint_unique_ids; -- try full, takes about 3 seconds.\nSeq Scan on bigint_unique_ids (cost=0.00..1.02 rows=1 width=8) (actual \ntime=0.10..0.10 rows=1 loops=1)\n Filter: (table_name = 'CONNECTION_DATA'::text)\n Total runtime: 0.25 msec\n\n-- ah, much much much, better.\n\nSo apparently vacuum by itself isn't going to be sufficent, i'm going to \nneed vacuum fulls? Or if I do vacuum's often enough (that should allow \nold rows to be overwritten?) will that do it? 
I'm a bit hazy on why \nvacuum isn't doing just as well as vacuum full, I thought the only \ndifference was that full released space back to the operating system \n(and presumably defragments existing data, but for one row, this \nshouldn't matter?).\n\nwait several minutes:\n\nSeq Scan on bigint_unique_ids (cost=0.00..39.01 rows=1 width=8) (actual \ntime=2.97..2.98 rows=1 loops=1)\n Filter: (table_name = 'CONNECTION_DATA'::text)\n Total runtime: 3.13 msec\nreindex index bigint_unique_ids__table_name;\nREINDEX\n\nIndex Scan using bigint_unique_ids__table_name on bigint_unique_ids \n(cost=0.00..5.97 rows=1 width=8) (actual time=0.11..0.20 rows=1 loops=1)\n Index Cond: (table_name = 'CONNECTION_DATA'::text)\n Total runtime: 0.30 msec\n\nIt appears reindex has the same speed up effect. (and in this case made \nit switch back from seq_scan to index scan).\n\nLet me throw in this too, if its helpful:\n\nvacuum verbose bigint_unique_ids;\nINFO: --Relation public.bigint_unique_ids--\nINFO: Index bigint_unique_ids__table_name: Pages 29; Tuples 1: Deleted \n5354.\n CPU 0.01s/0.04u sec elapsed 0.05 sec.\nINFO: Removed 11348 tuples in 79 pages.\n CPU 0.00s/0.02u sec elapsed 0.02 sec.\nINFO: Pages 79: Changed 1, Empty 0; Tup 1: Vac 11348, Keep 0, UnUsed 0.\n Total CPU 0.03s/0.06u sec elapsed 0.14 sec.\nINFO: --Relation pg_toast.pg_toast_21592--\nINFO: Pages 0: Changed 0, Empty 0; Tup 0: Vac 0, Keep 0, UnUsed 0.\n Total CPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\nvacuum full verbose bigint_unique_ids;\nINFO: --Relation public.bigint_unique_ids--\nINFO: Pages 79: Changed 1, reaped 79, Empty 0, New 0; Tup 1: Vac 297, \nKeep/VTL 0/0, UnUsed 11157, MinLen 52, MaxLen 52; Re-using: Free/Avail. \nSpace 599716/22724; EndEmpty/Avail. Pages 76/3.\n CPU 0.01s/0.00u sec elapsed 0.01 sec.\nINFO: Index bigint_unique_ids__table_name: Pages 29; Tuples 1: Deleted 297.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Rel bigint_unique_ids: Pages: 79 --> 1; Tuple(s) moved: 1.\n CPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: Index bigint_unique_ids__table_name: Pages 29; Tuples 1: Deleted 1.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: --Relation pg_toast.pg_toast_21592--\nINFO: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, \nKeep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space \n0/0; EndEmpty/Avail. Pages 0/0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Index pg_toast_21592_index: Pages 1; Tuples 0.\n CPU 0.00s/0.00u sec elapsed 0.01 sec.\nVACUUM\n\n\n", "msg_date": "Sat, 31 May 2003 16:56:56 -0600", "msg_from": "Dave E Martin XXIII <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index speeds up one row table (why)?" }, { "msg_contents": "> vacuum verbose bigint_unique_ids;\n> INFO: --Relation public.bigint_unique_ids--\n> INFO: Index bigint_unique_ids__table_name: Pages 29; Tuples 1: Deleted \n> 5354.\n> CPU 0.01s/0.04u sec elapsed 0.05 sec.\n> INFO: Removed 11348 tuples in 79 pages.\n> CPU 0.00s/0.02u sec elapsed 0.02 sec.\n> INFO: Pages 79: Changed 1, Empty 0; Tup 1: Vac 11348, Keep 0, UnUsed 0.\n> Total CPU 0.03s/0.06u sec elapsed 0.14 sec.\n\nVacuum (regular, not full) frequently enough that the 'Pages' value\ndoesn't increase past 1 and you'll be fine. A sequential scan on a very\nsmall table is what you want to have.\n\nIn this particular case, vacuum removed over 11000 dead versions of the\ntuple.\n\nAn 8 k page will hold approx 140 tuples based on your structure. 
So,\nfor every ~100 updates you'll want to run vacuum (regular, not full) on\nthe table.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "31 May 2003 22:29:33 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index speeds up one row table (why)?" }, { "msg_contents": "On Sat, May 31, 2003 at 16:56:56 -0600,\n Dave E Martin XXIII <[email protected]> wrote:\n> \n> (ok, experimented a bit more just now)\n> Hm, it appears that degredation occurs with the index as well, I guess \n> at the time I created the index, it just initially did better because it \n> got to skip all the already dead rows at creation time: but this is \n> disturbing, I do a vacuum, and the access times are better, but still \n> horrible:\n\nYou really don't want to use an index, so this probably doesn't matter\nfor the current application. The problem is that when data is inserted\ninto an index that just increases (or decreases) in value space from\ndeleted entries doesn't get reused. I believe this is fixed in 7.4.\nThis case would apply to indexes based on counters, dates or times\nwhere new values are added and old values get deleted.\n", "msg_date": "Sun, 1 Jun 2003 00:33:39 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index speeds up one row table (why)?" }, { "msg_contents": "Rod Taylor wrote:\n\n >An 8 k page will hold approx 140 tuples based on your structure. So,\n >for every ~100 updates you'll want to run vacuum (regular, not full) on\n >the table\n\nAlas, for this application, that means a vacuum once every 5 seconds or \nso. I'll see if I can set up a separate little task to do that (I assume \nat this rate, its better to just keep a connection open, than \nsetup/teardown). I don't suppose there is a way to get a trigger to do a \nvacuum (which doesn't want to be in a transaction) (thinking it could \ncheck for id mod 100=0 or something)? I also assume a few pages isn't \ngoing to be that bad (just don't let it get to 11000 8).\n\n\n", "msg_date": "Sun, 01 Jun 2003 01:20:03 -0600", "msg_from": "Dave E Martin XXIII <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index speeds up one row table (why)?" }, { "msg_contents": "sending thread to -performance from -bugs\n\nOn Sun, 2003-06-01 at 03:20, Dave E Martin XXIII wrote:\n> Rod Taylor wrote:\n> \n> >An 8 k page will hold approx 140 tuples based on your structure. So,\n> >for every ~100 updates you'll want to run vacuum (regular, not full) on\n> >the table\n> \n> Alas, for this application, that means a vacuum once every 5 seconds or \n> so. I'll see if I can set up a separate little task to do that (I assume \n> at this rate, its better to just keep a connection open, than \n> setup/teardown). I don't suppose there is a way to get a trigger to do a \n> vacuum (which doesn't want to be in a transaction) (thinking it could \n> check for id mod 100=0 or something)? I also assume a few pages isn't \n> going to be that bad (just don't let it get to 11000 8).\n\nSorry... Vacuum cannot be triggered -- nor would you want it to be. \nThere really isn't anything wrong with vacuuming once every 5 seconds or\nso, as it'll take a very short time if there is only a page or so to\ndeal with.\n\nSetup a script to connect, issue a vacuum, count to 5, issue a vacuum,\ncount to 5, etc.\n\nMore than one page and you will want an index. 
Having an index is only\ngoing to slow things down in the long run as indexes will not shrink\nwith a vacuum in 7.3 (7.4 puts an effort towards correcting this). This\nmeans you'll be running REINDEX every couple of minutes, which of course\nlocks the table, where standard vacuum does not.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "01 Jun 2003 08:14:15 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index speeds up one row table (why)?" }, { "msg_contents": "On Sun, Jun 01, 2003 at 01:20:03 -0600,\n Dave E Martin XXIII <[email protected]> wrote:\n> Rod Taylor wrote:\n> \n> >An 8 k page will hold approx 140 tuples based on your structure. So,\n> >for every ~100 updates you'll want to run vacuum (regular, not full) on\n> >the table\n> \n> Alas, for this application, that means a vacuum once every 5 seconds or \n> so. I'll see if I can set up a separate little task to do that (I assume \n> at this rate, its better to just keep a connection open, than \n> setup/teardown). I don't suppose there is a way to get a trigger to do a \n> vacuum (which doesn't want to be in a transaction) (thinking it could \n> check for id mod 100=0 or something)? I also assume a few pages isn't \n> going to be that bad (just don't let it get to 11000 8).\n\nMaybe you should reconsider how badly you want the app to be totally database\nagnostic? Using a sequence might be less of a contortion than using vacuum\na few times a minute. You are likely to have similar performance issues\nwith other databases, so this section of code may not turn out to be very\nportable in any case.\n", "msg_date": "Sun, 1 Jun 2003 14:04:09 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index speeds up one row table (why)?" }, { "msg_contents": "Bruno Wolff III wrote:\n\n>Maybe you should reconsider how badly you want the app to be totally database\n>agnostic? Using a sequence might be less of a contortion than using vacuum\n>a few times a minute. You are likely to have similar performance issues\n>with other databases, so this section of code may not turn out to be very\n>portable in any case.\n> \n>\nMaybe I can further abstract out the generate unique-id portion, Since \nunique-id generation does seem to be a pretty common database extension \n(for some reason...), and then provide a generic schema definition, and \na postgresql specific one (along with whatever others I can drum up). \nThe generic one will rely on the software to come up with the unique id \nin the fashion I'm currently doing.\n\nSpeaking of which, is there a better way than what i'm currently doing \n(when the database doesn't have any such support)? I've heard of one \nmethod based on something like \"select max(id)+1 from table\" but this \nseems error prone, at the very least, you'd have to have a unique index, \nand be prepared to redo on failure, which could get messy if its a big \ntransaction, and frequent if there is a lot of concurrent inserting \ngoing on.\n\n\n", "msg_date": "Mon, 02 Jun 2003 01:14:30 -0600", "msg_from": "Dave E Martin XXIII <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index speeds up one row table (why)?" }, { "msg_contents": "On Mon, 2 Jun 2003, Dave E Martin XXIII wrote:\n\n> Bruno Wolff III wrote:\n> \n> >Maybe you should reconsider how badly you want the app to be totally database\n> >agnostic? 
Using a sequence might be less of a contortion than using vacuum\n> >a few times a minute. You are likely to have similar performance issues\n> >with other databases, so this section of code may not turn out to be very\n> >portable in any case.\n> > \n> >\n> Maybe I can further abstract out the generate unique-id portion, Since \n> unique-id generation does seem to be a pretty common database extension \n> (for some reason...), and then provide a generic schema definition, and \n> a postgresql specific one (along with whatever others I can drum up). \n> The generic one will rely on the software to come up with the unique id \n> in the fashion I'm currently doing.\n> \n> Speaking of which, is there a better way than what i'm currently doing \n> (when the database doesn't have any such support)? I've heard of one \n> method based on something like \"select max(id)+1 from table\" but this \n> seems error prone, at the very least, you'd have to have a unique index, \n> and be prepared to redo on failure, which could get messy if its a big \n> transaction, and frequent if there is a lot of concurrent inserting \n> going on.\n> \n\tFor a generic solution you could have an extra table that fed you \nids and update it every time you took a value. (Maybe a trigger could be \nused?) Due to table locking during transactions no two concurrent \nrequested would get the same answer. Implementation could be \ninteresting but relatively simple. \n\nBEGIN;\nSELECT id from unqid where name='seq_name';\nUPDATE unqid set id=id+1 where name='seq_name'; \nCOMMIT;\n\nPeter Childs\n\n", "msg_date": "Mon, 2 Jun 2003 08:51:49 +0100 (BST)", "msg_from": "Peter Childs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index speeds up one row table (why)?" } ]
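A minimal sketch of the two id-generation approaches discussed above: the portable counter table (with FOR UPDATE added so concurrent takers queue on the row instead of racing), and the native sequence Bruno suggests. Table and sequence names are placeholders:

-- portable counter table
CREATE TABLE unqid (name text PRIMARY KEY, id bigint NOT NULL);
INSERT INTO unqid VALUES ('connection_data', 1);

BEGIN;
SELECT id FROM unqid WHERE name = 'connection_data' FOR UPDATE;  -- lock the row first
UPDATE unqid SET id = id + 1 WHERE name = 'connection_data';
COMMIT;

-- PostgreSQL-native alternative: no dead row per allocation, so no vacuum pressure
CREATE SEQUENCE connection_data_seq;
SELECT nextval('connection_data_seq');

Note that the counter table still leaves one dead tuple behind for every id handed out, so it inherits the frequent-vacuum requirement that started this thread; the sequence avoids that entirely at the cost of being PostgreSQL-specific.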
[ { "msg_contents": "Hi,\n I am running Postgresql 7.2.3 on a linux Qube with 256 RAM. I have a tabe \nwith about 100,000 records. The table has a postgis geometry column. I have \na GIST index on the table on the geometry column. Here are my questions:\n1)When I do a spatial select query on the geometry column in this table it \ntakes a few seconds. What more can I do in terms of the Postgresql \nconfiguration or query tuning besides adding the GIST index?\n\nHere's a sample query I make:\n\nSelect [column] from [table_name] where [spatial_column] && [the geometry \nobject];\n\n2)Also, I execute this query over the web. If there are mltiple select \nqueries then I have to execute one get its reultset and then send the other \none. Is there a faster way to execute multiple select queries over the web?\n\nThanks in advance.\n\nRiyaz\n\n_________________________________________________________________\nAdd photos to your messages with MSN 8. Get 2 months FREE*. \nhttp://join.msn.com/?page=features/featuredemail\n\n", "msg_date": "Sun, 01 Jun 2003 21:12:54 -0400", "msg_from": "\"Ricky Prasla\" <[email protected]>", "msg_from_op": true, "msg_subject": "Select Query Performance" }, { "msg_contents": "> 2)Also, I execute this query over the web. If there are mltiple select \n> queries then I have to execute one get its reultset and then send the other \n> one. Is there a faster way to execute multiple select queries over the web?\n\nGoing with the assumption the result-set is not used to generate further\nqueries, you might look into the use of Asynchronous connections.\n\nIt'll enable you to easily establish several connections to the database\nfor parallel work to be done.\n\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=libpq-connect.html\n\n\nEven PHP will let you use the Asynchronous query mechanism within\nPostgreSQL.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "01 Jun 2003 21:48:31 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select Query Performance" } ]
[ { "msg_contents": "\n> Hi All, \n> \n> We have a performance problem with a specific query, where just getting\n> the QUERY PLAN (_not_ getting the results of the query itself) on this\n> query is taking up to 10 seconds, and spinning the CPU, and basically\n> blocking other access to the db. This same query on a dump/restore on\n> another server (different kernel, same postgres version, much less\n> powerful box) generates practically the same query plan, but the\n> generation of the query plan takes orders of magnitude less (almost\n> immmediately generates it).\n> \n> We noticed this significant performance loss when we upgraded from\n> Postgres 7.2.1 -> 7.2.3 over the weekend, we also upgraded our RedHat\n> kernel from \"2.4.18-4smp i686\" to \"2.4.20-13.7smp i686\", plus upgraded to\n> latest glibc at the same time (probably shouldn't have mixed all those\n> upgrades, but there you go).\n> \n> We did not do a dump restore as part of the postgres/kernal upgrade on our\n> production box (docs say upgrade is fine without it). We vacuumed every\n> which way possible. Several times. Vacuum full analyze. the lot. We\n> dropped the indexes and recreated them. We used REINDEX.\n> \n> This is a UNICODE database, and this table does contain some unicode\n> character sequences.\n> \n> The offending explain statement is: \n> \n> EXPLAIN SELECT tblUser.id, tblUser.first_name, tblUser.last_name,\n> tblUser.login, tblUser.comments, tblUser.title_id, tblUser.bh_phone,\n> tblUser.ah_phone, tblUser.mobile, tblUser.fax, tblUser.address,\n> tblUser.city, tblUser.state_id, tblUser.country_id, tblUser.postcode,\n> tblUser.plain_text_email, tblUser.email_freq_id,\n> tblUser.email_freq_day_id, tblUser.privilege, tblUser.secure_id,\n> tblUser.activeyn, tblUser.login_attempts, tblUser.hashed_password,\n> tblUser.last_password_change, tblUser.forwarding_user_id,\n> tblUser.role_name FROM tblUser WHERE tblUser.id IN\n> ('102','103','104','105','106','107','108','109','110','111','112','113','\n> 114','115','116','117','118','119','120','121','122','123','124','125','12\n> 6','127','128','129','130','131','132','133','134','135','136','137','138'\n> ,'139','140','141','142','143','144','145','146','147','148','149','150','\n> 151','152','153','154','155','156','157','158','159','160','161','162','16\n> 3','164','165','166','167','168','169','170','171','172','173','174','175'\n> ,'176','177','178','179','180','181','182','183','184','185','186','187','\n> 188','189','190','191','192','193','194','195','196','197','198','199','20\n> 0','201','202','203','204','205','206','207','208','209','210','211','212'\n> ,'213','215','216','217','218','219','220','221','222','223','224','225','\n> 226','227','228','229','230','231','233','235','236','237','238','239','24\n> 0','241','242','243','244','245','246','247','249','250','251','252','253'\n> ,'254','255','256','257','258','259','260','261','262','263','264','265','\n> 266','267','268','269','270','271','272','273','274','275','276','277','27\n> 8','279','280','281','282','283','284','285','286','287','288','289','290'\n> ,'291','292','293','294','295','296','297','298','299','300','301','302','\n> 303','304','305','306','307','308','309','310','311','312','313','315','31\n> 6','317','318','319','320','321','322','323','324','325','326','327','328'\n> ,'329','331','333','334','335','336','337','338','339','340','341','342','\n> 343','344','345','346','347','348','349','350','351','352','353','354','35\n> 5','356','357','358');\n> \nNOTICE: QUERY PLAN:\n\nSeq Scan on 
tbluser (cost=0.00..670.82 rows=221 width=3726)\n\n> (this is what our App server generates as part of the query, I KNOW the\n> IN() is not the most efficient, but it's working fine on a number of other\n> machines, the tbluser table is only 1000 rows, and with an index on the id\n> column).\n> \n> My suspicion is that a dump/restore on our production box may fix this\n> problem, but I'd rather know some more about this issue. Can anyone help\n> explain this issue?\n> \n> regards,\n> \n> _________________________\n> Paul Smith \n> Lawlex Compliance Solutions\n> phone: +61 3 9278 1511\n> email: [email protected]\n> \n> \n", "msg_date": "Mon, 2 Jun 2003 14:56:18 +1000 ", "msg_from": "George Papastamatopoulos <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Query Plan problem" }, { "msg_contents": "George Papastamatopoulos <[email protected]> writes:\n>> ... WHERE tblUser.id IN\n>> ('102','103','104','105','106','107','108','109','110','111','112','113','\n>> 114','115','116','117','118','119','120','121','122','123','124','125','12\n> ...\n\nWhat's the datatype of tblUser.id? What indexes do you have on the\ntable?\n\nAlso, are both databases built with the same locale/encoding support\nand initdb-time choices? What are they?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Jun 2003 01:31:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Query Plan problem " } ]
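A small sketch of answering the two questions asked here from psql, plus a check of whether the planner has any statistics to work from; tbluser is the table from the post, the rest is generic catalog querying:

\d tbluser

SELECT attname, null_frac, n_distinct
FROM pg_stats
WHERE tablename = 'tbluser';

An empty result from pg_stats means the table has never been analyzed on that server; in that case a plain VACUUM ANALYZE tbluser; is worth trying before digging further into plan differences between the two boxes.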
[ { "msg_contents": "\n Hello,\n\n I have table with slowly degrading performance. Table is special is\nsuch way that all its rows are updated every 5 minutes (routers interfaces).\nvacuum does not help. vacuum full does but I'd like to avoid it.\n\n Below I added explain analyze output before and after vacuum full. How\ncould I make that table not to grow?\n\n PostgreSQL 7.3.2 on Redhat Linux 7.1.\nmax_fsm_pages=10000 max_fsm_relations=1000.\n\n Mindaugas\n\nrouter_db=# explain analyze select * from ifdata;\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------\n Seq Scan on ifdata (cost=0.00..4894.76 rows=776 width=133) (actual\ntime=31.65..1006.76 rows=776 loops=1)\n Total runtime: 1007.72 msec\n(2 rows)\n\nrouter_db=# VACUUM full verbose ifdata;\nINFO: --Relation public.ifdata--\nINFO: Pages 4887: Changed 0, reaped 4883, Empty 0, New 0; Tup 776: Vac\n46029, Keep/VTL 0/0, UnUsed 186348, MinLen 130, MaxLen 216; Re-using:\nFree/Avail. Space 38871060/15072128; EndEmpty/Avail. Pages 2981/1895.\n CPU 0.33s/0.04u sec elapsed 0.45 sec.\nINFO: Index ifdata_clientid_key: Pages 2825; Tuples 776: Deleted 46029.\n CPU 0.23s/0.32u sec elapsed 1.98 sec.\nINFO: Rel ifdata: Pages: 4887 --> 17; Tuple(s) moved: 776.\n CPU 0.30s/0.35u sec elapsed 1.65 sec.\nINFO: Index ifdata_clientid_key: Pages 2825; Tuples 776: Deleted 776.\n CPU 0.21s/0.04u sec elapsed 0.29 sec.\nVACUUM\nrouter_db=# explain analyze select * from ifdata;\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------\n Seq Scan on ifdata (cost=0.00..24.76 rows=776 width=133) (actual\ntime=0.03..7.53 rows=776 loops=1)\n Total runtime: 8.17 msec\n(2 rows)\n\n\n", "msg_date": "Mon, 2 Jun 2003 08:33:38 +0300", "msg_from": "\"Mindaugas Riauba\" <[email protected]>", "msg_from_op": true, "msg_subject": "Degrading performance" }, { "msg_contents": "\"Mindaugas Riauba\" <[email protected]> writes:\n> I have table with slowly degrading performance. Table is special is\n> such way that all its rows are updated every 5 minutes (routers interfaces).\n> vacuum does not help. vacuum full does but I'd like to avoid it.\n\nVACUUM will do the trick, you just need to do it every five minutes or\nso. I suggest a cron job to vacuum just the one table.\n\n> INFO: Rel ifdata: Pages: 4887 --> 17; Tuple(s) moved: 776.\n> CPU 0.30s/0.35u sec elapsed 1.65 sec.\n\nThat says you waited way too long to vacuum --- over two hundred update\ncycles, evidently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Jun 2003 09:36:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Degrading performance " }, { "msg_contents": "On Mon, 2 Jun 2003, Tom Lane wrote:\n\n> \"Mindaugas Riauba\" <[email protected]> writes:\n> > I have table with slowly degrading performance. Table is special is\n> > such way that all its rows are updated every 5 minutes (routers interfaces).\n> > vacuum does not help. vacuum full does but I'd like to avoid it.\n> \n> VACUUM will do the trick, you just need to do it every five minutes or\n> so. 
I suggest a cron job to vacuum just the one table.\n> \n> > INFO: Rel ifdata: Pages: 4887 --> 17; Tuple(s) moved: 776.\n> > CPU 0.30s/0.35u sec elapsed 1.65 sec.\n> \n> That says you waited way too long to vacuum --- over two hundred update\n> cycles, evidently.\n\nDon't forget to crank up your fsm settings in $PGDATA/postgresql.conf as \nwell.\n\n", "msg_date": "Mon, 2 Jun 2003 10:34:43 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Degrading performance " }, { "msg_contents": "\"scott.marlowe\" <[email protected]> writes:\n> On Mon, 2 Jun 2003, Tom Lane wrote:\n>>> INFO: Rel ifdata: Pages: 4887 --> 17; Tuple(s) moved: 776.\n>>> CPU 0.30s/0.35u sec elapsed 1.65 sec.\n>> \n>> That says you waited way too long to vacuum --- over two hundred update\n>> cycles, evidently.\n\n> Don't forget to crank up your fsm settings in $PGDATA/postgresql.conf as \n> well.\n\nThe table's not very big though. As long as he keeps after it with\nsufficiently-frequent vacuuming, it won't need much FSM space.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Jun 2003 13:25:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Degrading performance " }, { "msg_contents": "On Mon, 2 Jun 2003, Tom Lane wrote:\n\n> \"scott.marlowe\" <[email protected]> writes:\n> > On Mon, 2 Jun 2003, Tom Lane wrote:\n> >>> INFO: Rel ifdata: Pages: 4887 --> 17; Tuple(s) moved: 776.\n> >>> CPU 0.30s/0.35u sec elapsed 1.65 sec.\n> >> \n> >> That says you waited way too long to vacuum --- over two hundred update\n> >> cycles, evidently.\n> \n> > Don't forget to crank up your fsm settings in $PGDATA/postgresql.conf as \n> > well.\n> \n> The table's not very big though. As long as he keeps after it with\n> sufficiently-frequent vacuuming, it won't need much FSM space.\n\nYeah, but I got the feeling he was updating like 40 rows a second or \nsomething. Sufficiently frequent for him may well be constant. :-)\n\n", "msg_date": "Mon, 2 Jun 2003 11:42:48 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Degrading performance " } ]
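A minimal sketch of the cron-driven vacuum being suggested, using the database and table names that appear in the posts; the five-minute interval matches the update cycle described, and the entry assumes a crontab owned by a user allowed to connect to router_db:

# vacuum just the hot table every five minutes
*/5 * * * *  psql -d router_db -c 'VACUUM ifdata;' >/dev/null 2>&1

With only 776 live rows, each of these vacuums finishes in a fraction of a second, and the table stays at a handful of pages instead of creeping back toward the several thousand pages seen before the VACUUM FULL.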
[ { "msg_contents": "Thanks Tom for the reply (if you could reply all, as I'm not currently\nsubscribed just yet).\n\n[Since our post, we've down an explicit vacuum on the tbluser.id column, and\nthings are looking much much better, there were 0 rows in the pg_stats table\nfor that table...]\n\nIncidently, tbluser.Id is a bigint (hence the '' wrapped around the in\nclause, otherwise the infamous postgres issue crops up not matching the Int\nliteral number with the bigint index, and reverts to nasty table scan).\n\nBoth our production and our dump/restore servers are UNICODE. \n\nIncidently, if I do a VACUUM Analyze on this table:\n\ncomptoolkit=# VACUUM analyze tbluser;\nERROR: Invalid UNICODE character sequence found (0xf8335c)\n\nMe thinks somehow there is a hashed_password with some dodgy characters, but\nI'm not sure how we'll find that row, or what we'll do with that when we\nfind it. (Any thoughts?). Could be why statistics getting removed?\n\nANy thoughts along this would be good, we're over the performance hump, but\nit's always nice to know more...\n\ncheers,\n\nPaul Smith\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Monday, 2 June 2003 3:31 PM\n> To: George Papastamatopoulos\n> Cc: [email protected]; Paul Smith\n> Subject: Re: [PERFORM] FW: Query Plan problem \n> \n> \n> George Papastamatopoulos \n> <[email protected]> writes:\n> >> ... WHERE tblUser.id IN\n> >> \n> ('102','103','104','105','106','107','108','109','110','111','\n> 112','113','\n> >> \n> 114','115','116','117','118','119','120','121','122','123','12\n> 4','125','12\n> > ...\n> \n> What's the datatype of tblUser.id? What indexes do you have on the\n> table?\n> \n> Also, are both databases built with the same locale/encoding support\n> and initdb-time choices? What are they?\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Mon, 2 Jun 2003 15:35:25 +1000 ", "msg_from": "Paul Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: Query Plan problem " } ]
[ { "msg_contents": "I have discovered that I could optimize queries by adjusting the \nfollowing parameters such as enable_seqscan, enable_hashjoin, \nenable_mergejoin and enable_nestloop.\n\nIs it a good idea, to temporarily adjust those values before running a \nquery to spend up the execution time? I've searched online and wasn't \nable to find articles about it.\n\nI need to speed up an enterprise application that I'm working on, and I \nwouldn't want to screw things up.\n\nMy plan is for every query that could be optimized by adjusting \nparameters: I'll enable parameters that'll speed it up, run the query, \nthen set the parameters back to their default values.\n\nThanks in advance.\n\n\n", "msg_date": "Thu, 05 Jun 2003 11:35:22 -0400", "msg_from": "Yusuf <[email protected]>", "msg_from_op": true, "msg_subject": "Enabling and disabling run time configuration parameters." }, { "msg_contents": "> My plan is for every query that could be optimized by adjusting \n> parameters: I'll enable parameters that'll speed it up, run the query, \n> then set the parameters back to their default values.\n\nUnless you intend to regularly test these, or have static data this may\ncause you more problems than it fixes.\n\nAny change in the data may make the plan you have forced a non-optimal\none.\n\n\nA much better approach is to tweek the cost values that cause the\nplanner to chose that particular plan. The random_page_cost will\nprobably have the most effect on the plan chosen.\n\n#effective_cache_size = 1000 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch\ncost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "05 Jun 2003 12:14:23 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enabling and disabling run time configuration parameters." }, { "msg_contents": "On Thu, Jun 05, 2003 at 11:35:22 -0400,\n Yusuf <[email protected]> wrote:\n> I have discovered that I could optimize queries by adjusting the \n> following parameters such as enable_seqscan, enable_hashjoin, \n> enable_mergejoin and enable_nestloop.\n> \n> Is it a good idea, to temporarily adjust those values before running a \n> query to spend up the execution time? I've searched online and wasn't \n> able to find articles about it.\n\nThat is a reasonable thing to do. However you should also look into\nadjusting some of the costs used by the planner so that it gets the\nright plan more often. If you manually hack how the query is done,\nthen you have to worry about whether the hack is still right if the\nthe data changes significantly.\n\n> I need to speed up an enterprise application that I'm working on, and I \n> wouldn't want to screw things up.\n\nThere worst that would happen is that the plan you forced it to use\nwas slower than what the planner would have used.\n\n> My plan is for every query that could be optimized by adjusting \n> parameters: I'll enable parameters that'll speed it up, run the query, \n> then set the parameters back to their default values.\n\nThey only apply to the current backend session. You can also set them\nfor just the current transaction which is safer if you are using persistant\nbackend connections. 
(So that if you make a mistake the setting doesn't\napply for a very long time.)\n", "msg_date": "Thu, 5 Jun 2003 11:15:02 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enabling and disabling run time configuration parameters." }, { "msg_contents": "Yusuf,\n\n> Is it a good idea, to temporarily adjust those values before running a\n> query to spend up the execution time? I've searched online and wasn't\n> able to find articles about it.\n\nNo. The \"enable_%\" vars are intended as settings for *testing*, to tell you \nif you have a problem with your query structure or indexing, cost variables, \nor are in need of a VACUUM. Using them in a production capacity is a bad \nidea, because you haven't addressed the problem that was causing the query to \nbe slow in the first place, and as your database changes over time your \nqueries will become slow again.\n\nAdhjusting the *cost* variables is a good idea. Find you need \nENABLE_SEQSCAN=FALSE a lot? Raise your cache_size and lower your \nrandom_tuple_cost variables, among other adjustments.\n\nFor further adjustments, post some of your \"bad queries\" to this list. Be \nsure to include *all* of the following:\n\n1) VACUUM FULL ANALYZE before testing.\n2) Include the full query.\n3) Include the EXPLAIN ANALYZE results of the query.\n4) Include (possibly as a text attachment) the schema of relevant tables, \nincluding (especially!) indexes. \n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 5 Jun 2003 09:19:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enabling and disabling run time configuration parameters." }, { "msg_contents": "On Thu, Jun 05, 2003 at 11:35:22AM -0400, Yusuf wrote:\n> I have discovered that I could optimize queries by adjusting the \n> following parameters such as enable_seqscan, enable_hashjoin, \n> enable_mergejoin and enable_nestloop.\n> \n> Is it a good idea, to temporarily adjust those values before running a \n> query to spend up the execution time? I've searched online and wasn't \n> able to find articles about it.\n\nIt sounds like you need more general tuning. If the planner is\nmaking mistakes, it'd be nice to know about it. Could you post some\ndetails?\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<[email protected]> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 5 Jun 2003 12:39:34 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enabling and disabling run time configuration parameters." }, { "msg_contents": "On Thu, 5 Jun 2003, Yusuf wrote:\n\n> I have discovered that I could optimize queries by adjusting the \n> following parameters such as enable_seqscan, enable_hashjoin, \n> enable_mergejoin and enable_nestloop.\n\nSetting those to get a fast query is the brute force method. It works, \nbut at some cost of flexibility.\n\nHave you run vacuum full and analyze? If not, the planner has no clue how \nto decide on which plans to choose. \n\n> Is it a good idea, to temporarily adjust those values before running a \n> query to spend up the execution time? I've searched online and wasn't \n> able to find articles about it.\n\nYes, it's a great idea to do that in testing. No, it's a bad idea to rely \non them in production. 
\n\n> I need to speed up an enterprise application that I'm working on, and I \n> wouldn't want to screw things up.\n\nThen you'll want to tune your databases cost estimates so it makes the \nright decision.\n\n> My plan is for every query that could be optimized by adjusting \n> parameters: I'll enable parameters that'll speed it up, run the query, \n> then set the parameters back to their default values.\n\nThat's a good plan as long as you go the extra step of making the changes \nto the cost parameters so that the planner chooses correctly between the \ndifferent options it has.\n\nEvery server has different performance characteristics. A machine with 1 \ngig of RAM and 18 drives in a large RAID 1+0 is going to handle random \npage access a lot better than a machine with 256 Meg ram and a single IDE \nhard drive.\n\nThe values you need to look at are these:\n\nrandom_page_cost\ncpu_index_tuple_cost\ncpu_operator_cost\ncpu_tuple_cost\neffective_cache_size\n\nThey are covered in detail in the docs here:\n\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=runtime-config.html\n\nI'm gonna go offline and write a quick tutorial on tuning your database to \nyour server. Look for a preliminary version today or tomorrow.\n\nSet effective cache size to approximately the size of all kernel cache \nbuffer/pagesize (8192 for most pgsql setups).\n\nThen tune the *_cost options so the planner picks the right plan each \ntime.\n\n\n", "msg_date": "Thu, 5 Jun 2003 10:42:01 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Enabling and disabling run time configuration parameters." } ]
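A minimal sketch of the two approaches discussed above. The per-transaction form matters with persistent or pooled connections, because a plain SET lingers for the life of the backend; SET LOCAL (available from PostgreSQL 7.3 on) reverts automatically at COMMIT or ROLLBACK. The table and column names below are placeholders, not objects from this discussion:

BEGIN;
SET LOCAL enable_seqscan = off;    -- override lasts only until COMMIT/ROLLBACK
EXPLAIN ANALYZE
 SELECT count(*) FROM some_table WHERE some_column = 42;
COMMIT;

The longer-term fix the other replies point to is adjusting the cost constants in postgresql.conf instead, for example (values purely illustrative and machine-dependent):

effective_cache_size = 50000    # 8 KB pages the OS is likely to be caching (~400 MB)
random_page_cost = 2            # below the default of 4 when data is mostly cached
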
[ { "msg_contents": "--------- THE QUERY ----------------\n\nselect sum(item.charge) as currentCharges\n, sum(item.gst) as gst\n, sum(item.pst) as pst\n, sum(item.hst) as hst\n, sum(item.qst) as qst\n, sum(item.federaltax) as federalTax\n, sum(item.statetax) as stateTax\n, sum(item.localtax) as localTax\n, sum(item.othertax) as otherTax\n, consaccount.latePaymentCharge as latePaymentCharges\n, consaccount.PreviousBalance as balanceForward\n, consaccount.dateinserted as dateInserted\n, consaccount.userDateInserted as dateEntered\n, consaccount.issueDate as invoiceDate\n, consaccount.dueDate as dateDue\n, consaccount.consAccount_Id as consolidatedAccountId\n, consaccount.invoiceNumber as invoiceNumber\n, consaccountinfo.name as consolidatedAccountNumber\n, consaccount.vendor_Id as vendorId\n, consaccount.client_Id as clientId\n, consaccount.ponumber as ponumber\n, consaccount.ismanualentry as isManualEntry\n, consaccount_approvedby_user.approvedby_user_id as approved\n, consaccount_allocatedby_user.allocatedby_user_id as allocated\n, consaccount_paidby_user.paidby_user_id as paid\n, consaccountinfo.consaccountinfo_id as consAccountInfoId\n, consaccount_paidby_user.amountpaid as amountPaid\nfrom consaccount\ninner join consaccountinfo on consaccount.consAccountInfo_Id = consaccountinfo.ConsAccountInfo_Id\nleft join consaccount_allocatedby_user on consaccount.consaccount_id = consaccount_allocatedby_user.consaccount_id\nleft join consaccount_approvedby_user on consaccount.consaccount_id = consaccount_approvedby_user.consaccount_id\nleft join consaccount_paidby_user on consaccount.consaccount_id = consaccount_paidby_user.consaccount_id\ninner join account on consaccount.consAccount_Id = account.ConsAccount_Id\ninner join phone on account.account_Id = phone.Account_Id\ninner join item on phone.phone_Id = item.Phone_Id\nwhere consaccount.consaccount_id in (36,37,38,40,41,42,43,44,45,48,16,49,50,15,14)\ngroup by consaccountinfo.name\n, consaccountinfo.consaccountinfo_id\n, consaccount.invoicenumber\n, consaccount.consaccount_id\n, consaccount.dateinserted\n, consaccount.userDateInserted\n, consaccount.duedate\n, consaccount.issuedate\n, consaccount.previousbalance\n, consaccount.latepaymentcharge\n, consaccount.vendor_id\n, consaccount.client_id\n, consaccount.ponumber\n, consaccount.ismanualentry\n, consaccount_approvedby_user.approvedby_user_id\n, consaccount_allocatedby_user.allocatedby_user_id\n, consaccount_paidby_user.paidby_user_id\n, consaccount.isManualEntry\n, consaccount_paidby_user.amountpaid\norder by consaccount.invoicenumber asc;\n\n----------- THE QUERY PLAN -----------\n QUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=5938.90..5939.30 rows=161 width=256) (actual time=45187.21..45187.30 rows=15 loops=1)\n Sort Key: consaccount.invoicenumber\n -> Aggregate (cost=5820.52..5933.01 rows=161 width=256) (actual time=42859.84..45186.91 rows=15 loops=1)\n -> Group (cost=5820.52..5896.85 rows=1607 width=256) (actual time=42729.90..44465.25 
rows=39078 loops=1)\n -> Sort (cost=5820.52..5824.54 rows=1607 width=256) (actual time=42729.85..43018.41 rows=39078 loops=1)\n Sort Key: consaccountinfo.name, consaccountinfo.consaccountinfo_id, consaccount.invoicenumber,\nconsaccount.consaccount_id, consaccount.dateinserted, consaccount.userdateinserted, consaccount.duedate,\nconsaccount.issuedate, consaccount.previousbalance, consaccount.latepaymentcharge, consaccount.vendor_id,\nconsaccount.client_id, consaccount.ponumber, consaccount.ismanualentry, consaccount_approvedby_user.approvedby_user_id,\nconsaccount_allocatedby_user.allocatedby_user_id, consaccount_paidby_user.paidby_user_id, consaccount_paidby_user.amountpaid\n -> Hash Join (cost=3208.20..5734.94 rows=1607 width=256) (actual time=7787.49..38027.69\nrows=39078 loops=1)\n Hash Cond: (\"outer\".phone_id = \"inner\".phone_id)\n -> Seq Scan on item (cost=0.00..2140.77 rows=73177 width=95) (actual time=0.07..977.20\nrows=73177 loops=1)\n -> Hash (cost=3200.10..3200.10 rows=3239 width=161) (actual time=7785.54..7785.54 rows=0\nloops=1)\n -> Hash Join (cost=149.32..3200.10 rows=3239 width=161) (actual time=156.50..6589.78\nrows=139977 loops=1)\n Hash Cond: (\"outer\".account_id = \"inner\".account_id)\n -> Seq Scan on phone (cost=0.00..2272.86 rows=147486 width=8) (actual\ntime=0.12..1211.95 rows=147486 loops=1)\n -> Hash (cost=149.07..149.07 rows=103 width=153) (actual time=156.29..156.29\nrows=0 loops=1)\n -> Hash Join (cost=51.60..149.07 rows=103 width=153) (actual\ntime=13.62..128.92 rows=3412 loops=1)\n Hash Cond: (\"outer\".consaccount_id = \"inner\".consaccount_id)\n -> Seq Scan on account (cost=0.00..72.79 rows=4679 width=8)\n(actual time=0.02..36.21 rows=4679 loops=1)\n -> Hash (cost=51.56..51.56 rows=15 width=145) (actual\ntime=7.27..7.27 rows=0 loops=1)\n -> Hash Join (cost=44.42..51.56 rows=15 width=145) (actual\ntime=5.80..7.15 rows=15 loops=1)\n Hash Cond: (\"outer\".consaccount_id = \n\"inner\".consaccount_id)\n -> Hash Join (cost=44.41..51.48 rows=15 width=117)\n(actual time=5.71..6.76 rows=15 loops=1)\n Hash Cond: (\"outer\".consaccount_id =\n\"inner\".consaccount_id)\n -> Hash Join (cost=44.41..51.41 rows=15\nwidth=109) (actual time=5.60..6.35 rows=15 loops=1)\n Hash Cond: (\"outer\".consaccount_id =\n\"inner\".consaccount_id)\n -> Merge Join (cost=43.40..50.32 rows=15\nwidth=101) (actual time=5.37..5.82 rows=15 loops=1)\n Merge Cond:\n(\"outer\".consaccountinfo_id = \"inner\".consaccountinfo_id)\n -> Index Scan using\nconsaccountinfo_pkey on consaccountinfo (cost=0.00..6.17 rows=197 width=18) (actual time=0.20..0.27 rows=7 loops=1)\n -> Sort (cost=43.40..43.44 rows=15\nwidth=83) (actual time=5.06..5.15 rows=15 loops=1)\n Sort Key:\nconsaccount.consaccountinfo_id\n -> Seq Scan on consaccount\n(cost=0.00..43.11 rows=15 width=83) (actual time=0.09..4.87 rows=15 loops=1)\n Filter: ((consaccount_id =\n36) OR (consaccount_id = 37) OR (consaccount_id = 38) OR (consaccount_id = 40) OR (consaccount_id = 41) OR\n(consaccount_id = 42) OR (consaccount_id = 43) OR (consaccount_id = 44) OR (consaccount_id = 45) OR (consaccount_id =\n48) OR (consaccount_id = 16) OR (consaccount_id = 49) OR (consaccount_id = 50) OR (consaccount_id = 15) OR\n(consaccount_id = 14))\n -> Hash (cost=1.01..1.01 rows=1 width=8)\n(actual time=0.13..0.13 rows=0 loops=1)\n -> Seq Scan on\nconsaccount_allocatedby_user (cost=0.00..1.01 rows=1 width=8) (actual time=0.09..0.10 rows=1 loops=1)\n -> Hash (cost=0.00..0.00 rows=1 width=8) (actual\ntime=0.02..0.02 rows=0 loops=1)\n -> Seq Scan on 
consaccount_approvedby_user\n (cost=0.00..0.00 rows=1 width=8) (actual time=0.01..0.01 rows=0 loops=1)\n -> Hash (cost=0.00..0.00 rows=1 width=28) (actual\ntime=0.02..0.02 rows=0 loops=1)\n -> Seq Scan on consaccount_paidby_user\n(cost=0.00..0.00 rows=1 width=28) (actual time=0.01..0.01 rows=0 loops=1)\n Total runtime: 45189.45 msec\n(38 rows)\n\nThe total time when hashjoin=off and mergejoin=off is ~ 13.2 seconds\n\n\n----------- ABOUT MY MACHINE -----------\nThe size of the database when I check PGDATA\\base is about 400 MB\n\nFreeBSD\nMem: 26M Active, 1695M Inact, 155M Wired, 52M Cache, 199M Buf, 82M Free\nSwap: 4080M Total, 8K Used, 4080M Free\n\n----------- MY POSTGRES CONFIGURATION -----------\n cpu_index_tuple_cost | 0.001\n cpu_operator_cost | 0.0025\n cpu_tuple_cost | 0.01\n effective_cache_size | 1000\n enable_hashjoin | on\n enable_indexscan | on\n enable_mergejoin | on\n enable_nestloop | on\n enable_seqscan | on\n enable_sort | on\n max_connections | 40\n shared_buffers | 500\n sort_mem | 1024\n random_page_cost | 4\n\n\n What should I set the config parameters to be, to improve performance? I've attached my schema.", "msg_date": "Fri, 06 Jun 2003 14:53:22 -0400", "msg_from": "Yusuf <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: Enabling and disabling run time configuration parameters.]" } ]
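With roughly 2 GB of RAM on the box, a ~400 MB database, and the near-default settings shown above (500 shared buffers is only 4 MB), a hedged starting point, rather than a definitive answer, is to raise the memory-related settings and then re-check the same EXPLAIN ANALYZE. Every number below is an assumption to be tested against this workload, not a recommendation taken from the messages above:

shared_buffers = 4000            # ~32 MB; changing this needs a postmaster restart
sort_mem = 8192                  # 8 MB per sort (units are KB); the plan sorts ~39000 wide rows
effective_cache_size = 150000    # ~1.2 GB of likely OS cache, expressed in 8 KB pages
random_page_cost = 2             # the whole 400 MB data set can stay cached

sort_mem, effective_cache_size and random_page_cost can also be tried per session with SET before committing them to postgresql.conf.
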
[ { "msg_contents": "\n\n\n> -----Original Message-----\n> From:\tLending, Rune [SMTP:[email protected]]\n> Sent:\t05 June 2003 10:11\n> To:\t'[email protected]'\n> Subject:\t[ADMIN] Shared_buffers and kernel parameters, tuning\n> \n> After days of searching and testing I have come up with this way of\n> configuring our postgresql 7.2 db.\n> I have not yet increased my shared_buffer as high as suggested below on\n> our\n> prod machine (24-7 high traffic), but after testing on our dev machines\n> this allows at least the databse to start up. It is very difficult to test\n> the actual performance since there is a hugh difference in traffic on dev\n> and prod.\n> Here is what we have:\n> \n> We have a high traffic system with a database described as followed:\n> \n> 4 pentium 3 633 cpu's\n> 3753456 kB RAM (3.5 Gb)\n> Red Hat Linux 7.2 \n> postgresql 7.2 \n> \n> \n> What I like to do is:\n> \n> /proc/sys/kernel/sem=250 32000 100 500 (after advise from forum/docs)\n> /proc/sys/kernel/shmmax=1921769472 (RAM / 2 * 1024 - this\n> piece of math I got from some of oracle's support pages (ooopps) actually\n> )\n> \n> /proc/sys/kernel/shmall=1921769472 (RAM / 2 * 1024)\n> \n> in postgresql.conf:\n> \n> shared_buffers = 117248 (shmmax / 2 / 1024 / 8 ) This I got from this\n> forum.\n> \n> \n> Does this sound right or am I totally out of bounds here? I have, as said\n> before done this on our dev macine ( a lot smaller machine ), but it would\n> be nice with some feedback .. \n> \n> Thanx in advance for response.\n> \n> /rune\n> \n> \n", "msg_date": "Mon, 9 Jun 2003 11:18:47 +0200 ", "msg_from": "Howard Oblowitz <[email protected]>", "msg_from_op": true, "msg_subject": "FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "On 9 Jun 2003 at 11:18, Howard Oblowitz wrote:\n> > in postgresql.conf:\n> > \n> > shared_buffers = 117248 (shmmax / 2 / 1024 / 8 ) This I got from this\n> > forum.\n> > \n> > \n> > Does this sound right or am I totally out of bounds here? I have, as said\n> > before done this on our dev macine ( a lot smaller machine ), but it would\n> > be nice with some feedback .. \n\nWith that kind of RAM and that kind of shared buffers setting, you must set \neffective OS cache size so that postgresql can calculate when to flush buffers.\n\nWhile tuning database, it always help to pin down the target first and then try \nto reach it. If you could let us know what performance you are expecting out of \nthis machine and for what kind of load in terms of concurrent users, database \nsize and usage pattern etc., that would help.\n\nHTH\n\nBye\n Shridhar\n\n--\nQOTD:\t\"I'm just a boy named 'su'...\"\n\n", "msg_date": "Tue, 10 Jun 2003 10:45:25 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "Rune,\n\n> > shared_buffers = 117248 (shmmax / 2 / 1024 / 8 ) This I got from this\n> > forum.\n> > Does this sound right or am I totally out of bounds here? I have, as said\n\nOut of bounds, through no fault of your own .... I'm still working on \ndocumentation for this. However, let me qoute the upcoming supplimentary \ndocs:\n\nSHARED_BUFFERS\nSets the size of Postgres' memory buffer where queries are held before being \nfed into the Kernel buffer of the host system. It's very important to \nremember that this is only a holding area, and not the total memory available \nfor the server. 
As such, resist the urge to set this number to a large \nportion of your RAM, as this will actually degrade performance on many OSes. \nMembers of the pgsql-performance mailing list have found useful values in the \nrange of 1000-6000, depending on available RAM, database size, and number of \nconcurrent queries. No one has yet reported positive results for any number \nover 6000.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 10 Jun 2003 08:46:21 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "On 10 Jun 2003 at 8:46, Josh Berkus wrote:\n\n> SHARED_BUFFERS\n> Sets the size of Postgres' memory buffer where queries are held before being \n> fed into the Kernel buffer of the host system. It's very important to \n> remember that this is only a holding area, and not the total memory available \n> for the server. As such, resist the urge to set this number to a large \n> portion of your RAM, as this will actually degrade performance on many OSes. \n> Members of the pgsql-performance mailing list have found useful values in the \n> range of 1000-6000, depending on available RAM, database size, and number of \n> concurrent queries. No one has yet reported positive results for any number \n> over 6000.\n\nI was planning to document postgresql.conf with little hints, enough to get one \nstarted, drawing inspiration from lilo.conf of debian, which is beautiful to \nsay the least..\n\nI haven't find enough time to do that. But I will do it.. But I don't know all \nthe parameters enough. Of course I will post a starter but any input would be \nwelcome.\n\nPoint is we should be able to say RTFC rather than RTFA as that would get a DBA \nsingle place to look at. I agree that no amount of simplicity is enough but \nstill..:-)\n\n\nBye\n Shridhar\n\n--\nBrooke's Law:\tWhenever a system becomes completely defined, some damn fool\t\ndiscovers something which either abolishes the system or\texpands it beyond \nrecognition.\n\n", "msg_date": "Tue, 10 Jun 2003 21:26:49 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "Shridhar,\n\n> I was planning to document postgresql.conf with little hints, enough to get\n> one started, drawing inspiration from lilo.conf of debian, which is\n> beautiful to say the least..\n\nThis week, I am:\n\n1) Submiting a patch to re-organize postgresql.conf.sample and \"Run-Time \nConfiguration\" docs in a more logical order.\n\n2) Finishing up a massive OpenOffice.org spreadsheet full information on each \npostgresql.conf option, including anecdotal advice from this list.\n\nNext week, I will try to turn the spreadsheet into a series of HTML pages for \nTechdocs.\n\nI would be thrilled to have your help on:\na) editing + augmenting the spreadsheet contents\nb) transforming it into HTML pages.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 10 Jun 2003 09:03:50 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "On Tue, Jun 10, 2003 at 21:26:49 +0530,\n Shridhar Daithankar <[email protected]> wrote:\n> \n> Point is we should be able to say RTFC rather than RTFA as that would get a DBA \n> single place to look at. 
I agree that no amount of simplicity is enough but \n> still..:-)\n\nI believe there was discussion a couple of months ago that came to a\ndifferent conclusion. There was concern about having documenation that\nwasn't in the documentation and have to versions of essentially the\nsame information that both need to be maintained.\n", "msg_date": "Tue, 10 Jun 2003 12:21:42 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "On 2003-06-10 08:46:21 -0700, Josh Berkus wrote:\n> Rune,\n> \n> > > shared_buffers = 117248 (shmmax / 2 / 1024 / 8 ) This I got from this\n> > > forum.\n> > > Does this sound right or am I totally out of bounds here? I have, as said\n> \n> Out of bounds, through no fault of your own .... I'm still working on \n> documentation for this. However, let me qoute the upcoming supplimentary \n> docs:\n> \n> SHARED_BUFFERS\n> Sets the size of Postgres' memory buffer where queries are held before being \n> fed into the Kernel buffer of the host system. It's very important to \n> remember that this is only a holding area, and not the total memory available \n> for the server. As such, resist the urge to set this number to a large \n> portion of your RAM, as this will actually degrade performance on many OSes. \n> Members of the pgsql-performance mailing list have found useful values in the \n> range of 1000-6000, depending on available RAM, database size, and number of \n> concurrent queries. No one has yet reported positive results for any number \n> over 6000.\n> \n\nWe run a dual P3 1GHz server, running Debian Linux (stable), kernel 2.4.20,\nwith a 5-disk (10K rpm) RAID 5 array (ICP Vortex controller) and 4GB RAM, most\nof which is used for filesystem cache. This server runs Postgresql 7.3.2\nexclusively, with a database of roughly 7GB. This database is used for a very\nbusy community website, running an enormous amount of small and simple\nselect/update/insert queries and a number of complex select queries, to search\nthrough all kinds of data.\n\nThis server isn't running postgres that long, and we're still trying to figure\nout the best configuration parameters for the highest possible performance.\nShared_buffers was one of the first things we looked at. We've tested with\nshared_buffers at 1024, 8192, 32768 and 131072. So far, performance with\nshared_buffers set at 32768 was the best we could attain. 8192 and 131072 came\nout roughly equal. 1024 was miserable.\n\n(yay, 3 lines in a row starting with the word 'shared_buffers'! ;))\n\nAlso, there was a very strong relation between the shared_buffers setting and\nthe amount of cpu time spent in kernelland. Currently, the server spends\nroughly 20% of it's time in kernelspace (according to vmstat). When\nshared_buffers was 8192, this went up to about 30%.\n\nI don't have any hard performance statistics, we just threw the site live with\ndifferent settings and watched the load on all servers, and the amount of\nrequests/second our webservers could generate (the bottleneck is the\npostgresql server, not the webservers).\n\n\nI'm really eager for any useful tips regarding the various cost settings. 
I've\nbeen following this list for months and read through a large portion of the\narchives, but noone has been able to do more than handwaving around certain\nnumbers, which are close to the defaults anyway.\n\nCurrently, we have the following settings:\nshared_buffers = 32768\nmax_fsm_relations = 100\nmax_fsm_pages = 100000\nsort_mem = 16384\nvacuum_mem = 131072\neffective_cache_size = 327680\nrandom_page_cost = 1.5\ncpu_tuple_cost = 0.005\n#cpu_index_tuple_cost = 0.001 (default)\n#cpu_operator_cost = 0.0025 (default)\n\nHalving the cpu_tuple_cost has given a very impressive performance boost\n(performance roughly doubled). I'm not sure why, because the plans of the\nlarge queries I was checking haven't changed as far as I can see, but maybe\nsome smaller queries I didn't bother to check are using a different plan now.\nAlthough I was quite sure those smaller queries were all using the correct\nindexes etc before the change anyway.\n\nJust to be absolutely sure: all *_cost parameters only influence the chosen\nplan, right? There is absolutely nothing else influenced which doesn't show up\nin an EXPLAIN ANALYZE, right?\n\n\nRegards,\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n", "msg_date": "Tue, 10 Jun 2003 20:12:48 +0200", "msg_from": "Vincent van Leeuwen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "Vincent,\n\n> This server isn't running postgres that long, and we're still trying to \nfigure\n> out the best configuration parameters for the highest possible performance.\n> Shared_buffers was one of the first things we looked at. We've tested with\n> shared_buffers at 1024, 8192, 32768 and 131072. So far, performance with\n> shared_buffers set at 32768 was the best we could attain. 8192 and 131072 \ncame\n> out roughly equal. 1024 was miserable.\n\nCool! This is the first report we've had of a successful higher setting for \nshared_buffers. I'll need to revise the text. What do people think of:\n\nSHARED_BUFFERS\nSets the size of Postgres' memory buffer where queries are held before being\nfed into the Kernel buffer of the host system. It's very important to\nremember that this is only a holding area, and not the total memory available\nfor the server. As such, resist the urge to set this number to a large\nportion of your RAM, as this will actually degrade performance on many OSes.\nMembers of the pgsql-performance mailing list have mostly found useful values\nin the range of 1000-6000, depending on available RAM, database size, and\nnumber of concurrent queries.\nThis can go up slightly for servers with a great deal of RAM; the useful\nmaximum on Linux seems to be 6% to 10% of available RAM, with performance\ndegrading at higher settings. Information on other OSes is not yet posted.\nOn multi-purpose servers, of course, the setting should be lowered. \n\n> Also, there was a very strong relation between the shared_buffers setting \nand\n> the amount of cpu time spent in kernelland. Currently, the server spends\n> roughly 20% of it's time in kernelspace (according to vmstat). When\n> shared_buffers was 8192, this went up to about 30%.\n\nThis makes perfect sense ... 
less shared_buffers = more kernel_buffers, and \nvice-versa.\n\n> Currently, we have the following settings:\n> shared_buffers = 32768\n> max_fsm_relations = 100\n\nYou might wanna increase this; current recommended is 300 just to make sure \nthat you have one for every table.\n\n> Just to be absolutely sure: all *_cost parameters only influence the chosen\n> plan, right? There is absolutely nothing else influenced which doesn't show \nup\n> in an EXPLAIN ANALYZE, right?\n\nYes, AFAIK.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 10 Jun 2003 11:28:00 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "Josh Berkus wrote:\n> Vincent,\n> \n> > This server isn't running postgres that long, and we're still trying to \n> figure\n> > out the best configuration parameters for the highest possible performance.\n> > Shared_buffers was one of the first things we looked at. We've tested with\n> > shared_buffers at 1024, 8192, 32768 and 131072. So far, performance with\n> > shared_buffers set at 32768 was the best we could attain. 8192 and 131072 \n> came\n> > out roughly equal. 1024 was miserable.\n> \n> Cool! This is the first report we've had of a successful higher setting for \n> shared_buffers. I'll need to revise the text. What do people think of:\n\nI have been thinking about shared_buffers, and it seems it is the\nage-old issue of working set.\n\nTraditionally Unix doesn't use working set (though a few do). It just\nallocates memory proportionally among all processes, with unreferenced\npages being paged out first.\n\nFor PostgreSQL, if your working set is X, if you set your shared buffers\nto X, you will get optimal performance (assuming there is no memory\npressure). If set allocate X/2, you will probably get worse\nperformance. If you allocate X*2, you will also probably get slightly\nworse performance.\n\nNow, let's suppose you can't allocate X shared buffers, because of\nmemory pressure. Suppose you can allocate X/2 shared buffers, and that\nwill leave X/2 kernel buffers. It would be better to allocate X/4\nshared buffers, and leave X*3/4 kernel buffers. If you can only\nallocate X/5 shared buffers, you might be better with X/10 shared\nbuffers because you are going to be doing a lot of I/O, and you need\nlots of kernel buffers for that.\n\nI think that is what people are seeing when modifying shared buffers:\n\n\tX shared buffers is best\n\t>X shared buffers is too much overhead and starves kernel\n\t<X might be better by not maximizing shared buffers and have\n\t more kernel buffers\n\nAdd to this that it is very hard to estimate working set.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 10 Jun 2003 15:08:22 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "Vincent van Leeuwen <[email protected]> writes:\n> Halving the cpu_tuple_cost has given a very impressive performance boost\n> (performance roughly doubled). 
I'm not sure why, because the plans of the\n> large queries I was checking haven't changed as far as I can see, but maybe\n> some smaller queries I didn't bother to check are using a different plan now.\n\nThat's very curious; I'd expect that parameter to have only marginal\neffect in the first place (unless you make huge changes in it, of course).\nIt must have changed some plan that you didn't take note of. If you can\nfind it I'd be interested to know.\n\n> Just to be absolutely sure: all *_cost parameters only influence the\n> chosen plan, right? There is absolutely nothing else influenced which\n> doesn't show up in an EXPLAIN ANALYZE, right?\n\nAFAIR, the only one of these parameters that the executor pays any\nattention to is SORT_MEM; that will determine how soon the runtime code\nstarts to spill tuples to disk in sorts, hash tables, etc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jun 2003 15:43:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning " }, { "msg_contents": "On 2003-06-10 15:08:22 -0400, Bruce Momjian wrote:\n> For PostgreSQL, if your working set is X, if you set your shared buffers\n> to X, you will get optimal performance (assuming there is no memory\n> pressure). If set allocate X/2, you will probably get worse\n> performance. If you allocate X*2, you will also probably get slightly\n> worse performance.\n> \n> Now, let's suppose you can't allocate X shared buffers, because of\n> memory pressure. Suppose you can allocate X/2 shared buffers, and that\n> will leave X/2 kernel buffers. It would be better to allocate X/4\n> shared buffers, and leave X*3/4 kernel buffers. If you can only\n> allocate X/5 shared buffers, you might be better with X/10 shared\n> buffers because you are going to be doing a lot of I/O, and you need\n> lots of kernel buffers for that.\n> \n> I think that is what people are seeing when modifying shared buffers:\n> \n> \tX shared buffers is best\n> \t>X shared buffers is too much overhead and starves kernel\n> \t<X might be better by not maximizing shared buffers and have\n> \t more kernel buffers\n> \n> Add to this that it is very hard to estimate working set.\n> \n\nMakes a lot of sense to me. We're doing a lot of I/O on a small part of that\n7GB, and the rest is accessed in a more or less random fashion, so 256MB of\nshared buffers sounds about right. I'll play more with this in the future to\nsee at what setting it performs best.\n\nIs there any information available in the system tables or statistics\ncollector that can help determine X? Could PostgreSQL be easily modified to\nprovide more information in this area?\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n", "msg_date": "Wed, 11 Jun 2003 01:10:36 +0200", "msg_from": "Vincent van Leeuwen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "On 2003-06-10 15:43:47 -0400, Tom Lane wrote:\n> Vincent van Leeuwen <[email protected]> writes:\n> > Halving the cpu_tuple_cost has given a very impressive performance boost\n> > (performance roughly doubled). 
I'm not sure why, because the plans of the\n> > large queries I was checking haven't changed as far as I can see, but maybe\n> > some smaller queries I didn't bother to check are using a different plan now.\n> \n> That's very curious; I'd expect that parameter to have only marginal\n> effect in the first place (unless you make huge changes in it, of course).\n> It must have changed some plan that you didn't take note of. If you can\n> find it I'd be interested to know.\n> \n\nUnfortunately, we're not exactly in the best position to test a lot of things.\nOur website has been running on MySQL and PHP for the last 3 years, and I've\nbeen wanting to switch to PostgreSQL for about the last 2 years. A lot of\npreparation went in to the change, but once we switched our live site to use\nPostgreSQL as it's main database we were utterly dissapointed in our own\npreparations. I knew our website was somewhat optimized for MySQL usage, but\nlooking back I am totally amazed that we were able to squeeze so much\nperformance out of a database that locks entire tables for every update (yes,\nwe used the MyISAM table format). One of the most surprising things we learned\nwas that MySQL was totally bottlenecking on I/O, with a large chunk of CPU\nunused, and with PostgreSQL it's the other way around.\n\nThe last couple of weeks have been a nice collection of whacky antics and\nperformance tuning all over the place. The first week everything performed\nabysmal, and another week later we're close to our original performance again.\nOfcourse, the goal is to exceed MySQL's performance by a comfortable margin,\nbut we're not there yet :)\n\nSo, basically, this server is pushed far harder than it should be. Average\nsystem load is at about 4, and there are always 50-200 postgresql threads\nrunning during daytime. A new server that will replace this one and which is\nroughly 2-3 times as fast will be put live in a few weeks, and until that's\nhere this box will have to bear the burden on it's own.\n\n> > Just to be absolutely sure: all *_cost parameters only influence the\n> > chosen plan, right? There is absolutely nothing else influenced which\n> > doesn't show up in an EXPLAIN ANALYZE, right?\n> \n> AFAIR, the only one of these parameters that the executor pays any\n> attention to is SORT_MEM; that will determine how soon the runtime code\n> starts to spill tuples to disk in sorts, hash tables, etc.\n> \n\nCurrent sort_mem setting is based on monitoring the pgsql_tmp directory and\nconcluding that sort_mem needed to be doubled to avoid swapping to disk. 
It's\nnot as if this box doesn't have enough RAM :)\n\nBut this means I'll have to look more closely at my query plans, more things\nare changing than I'm noticing when I tweak various settings.\n\nOne of the hardest parts is that some queries which should use sequential\nscans are using indexes and some queries which should use indexes are using\nsequential scans :) We're currently using some ugly 'set enable_seqscan to\noff;' hacks in a few places, until everything is tweaked right, but I hope we\ncan remove those as soon as possible.\n\n\nRegards,\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n", "msg_date": "Wed, 11 Jun 2003 01:25:38 +0200", "msg_from": "Vincent van Leeuwen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" }, { "msg_contents": "Vincent van Leeuwen wrote:\n> On 2003-06-10 15:08:22 -0400, Bruce Momjian wrote:\n> > For PostgreSQL, if your working set is X, if you set your shared buffers\n> > to X, you will get optimal performance (assuming there is no memory\n> > pressure). If set allocate X/2, you will probably get worse\n> > performance. If you allocate X*2, you will also probably get slightly\n> > worse performance.\n> > \n> > Now, let's suppose you can't allocate X shared buffers, because of\n> > memory pressure. Suppose you can allocate X/2 shared buffers, and that\n> > will leave X/2 kernel buffers. It would be better to allocate X/4\n> > shared buffers, and leave X*3/4 kernel buffers. If you can only\n> > allocate X/5 shared buffers, you might be better with X/10 shared\n> > buffers because you are going to be doing a lot of I/O, and you need\n> > lots of kernel buffers for that.\n> > \n> > I think that is what people are seeing when modifying shared buffers:\n> > \n> > \tX shared buffers is best\n> > \t>X shared buffers is too much overhead and starves kernel\n> > \t<X might be better by not maximizing shared buffers and have\n> > \t more kernel buffers\n> > \n> > Add to this that it is very hard to estimate working set.\n> > \n> \n> Makes a lot of sense to me. We're doing a lot of I/O on a small part of that\n> 7GB, and the rest is accessed in a more or less random fashion, so 256MB of\n> shared buffers sounds about right. I'll play more with this in the future to\n> see at what setting it performs best.\n> \n> Is there any information available in the system tables or statistics\n> collector that can help determine X? Could PostgreSQL be easily modified to\n> provide more information in this area?\n\nEstimatinge working set is an old problem. You can look at pgsql_tmp\nunder each database directory for sort mem, but for shared buffers, I am\nnot sure how to know the proper size.\n\nAnyone else have an idea?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 10 Jun 2003 23:52:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: [ADMIN] Shared_buffers and kernel parameters, tuning" } ]
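Two concrete footnotes to the above. The kernel side is plain arithmetic: shared_buffers = 32768 means 32768 x 8 KB = 256 MB (268435456 bytes) of shared memory plus a small fixed overhead, so shmmax only has to sit a little above that; it does not need to be half of RAM. For the working-set question, the block-level statistics already collected by the server give at least a rough signal of which tables keep being fetched from outside the shared buffers (possibly still from the kernel cache). This assumes stats_start_collector and stats_block_level are on, and the ratio is only an approximation, not a measurement of the true working set:

SELECT relname, heap_blks_read, heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC;

Tables with a large heap_blks_read and a low hit_ratio are candidates for not fitting in shared_buffers at its current setting.
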
[ { "msg_contents": "I have a query that's cauing pgsql choose either a hash or merge join\ndepending on how I mess with the stats variables, but it won't choose an\nnested loop, even though it's the fastest.\n\nThe estimate for the nested loop index scans always seems to be way high\non the high end. Note that it's 0-3 in one case and 0-2 in the other,\nbut the actual time is very low in both cases. Why is this? I haven't\nbeen able to make much of a difference by changing the optimizer\nvariables.\n\nThis is on a solaris machine, if that matters. Tinput_data, locality,\nand postal code have 1300, 28000 and 43000 rows, respectively, and\nlocality and postal code are very narrow tables (full definition below).\n\nusps=# explain analyze SELECT key, pc.locality_id, l.state_code::varchar FROM Tinput_data i, postal_code pc, locality l WHERE i.zip = pc.postal_code AND l.locality_id = pc.locality_id;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=940.20..1417.94 rows=1380 width=36) (actual time=1727.30..2363.91 rows=1380 loops=1)\n Merge Cond: (\"outer\".locality_id = \"inner\".locality_id)\n -> Index Scan using locality_pkey on locality l (cost=0.00..455.99 rows=27789 width=10) (actual time=0.62..495.39 rows=27632 loops=1)\n -> Sort (cost=940.20..940.55 rows=1380 width=26) (actual time=1725.53..1726.71 rows=1380 loops=1)\n Sort Key: pc.locality_id\n -> Merge Join (cost=42.00..933.00 rows=1380 width=26) (actual time=56.27..1684.67 rows=1380 loops=1)\n Merge Cond: (\"outer\".postal_code = \"inner\".zip)\n -> Index Scan using postal_code_postal_code_key on postal_code pc (cost=0.00..869.31 rows=42704 width=13) (actual time=10.05..1396.11 rows=42418 loops=1)\n -> Sort (cost=42.00..42.34 rows=1380 width=13) (actual time=39.63..40.97 rows=1380 loops=1)\n Sort Key: i.zip\n -> Seq Scan on tinput_data i (cost=0.00..34.80 rows=1380 width=13) (actual time=0.02..12.13 rows=1380 loops=1)\n Total runtime: 2367.50 msec\n(12 rows)\n\nusps=# set enable_mergejoin=0;\nSET\nusps=# set enable_hashjoin=0;\nSET\nusps=# explain analyze SELECT key, pc.locality_id, l.state_code::varchar FROM Tinput_data i, postal_code pc, locality l WHERE i.zip = pc.postal_code AND l.locality_id = pc.locality_id;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..6991.66 rows=1380 width=36) (actual time=0.22..231.00 rows=1380 loops=1)\n -> Nested Loop (cost=0.00..4203.23 rows=1380 width=26) (actual time=0.14..132.70 rows=1380 loops=1)\n -> Seq Scan on tinput_data i (cost=0.00..34.80 rows=1380 width=13) (actual time=0.02..17.41 rows=1380 loops=1)\n -> Index Scan using postal_code_postal_code_key on postal_code pc (cost=0.00..3.01 rows=1 width=13) (actual time=0.06..0.06 rows=1 loops=1380)\n Index Cond: (\"outer\".zip = pc.postal_code)\n -> Index Scan using locality_pkey on locality l (cost=0.00..2.01 rows=1 width=10) (actual time=0.05..0.05 rows=1 loops=1380)\n Index Cond: (l.locality_id = \"outer\".locality_id)\n Total runtime: 233.60 msec\n(8 rows)\n\n Table \"pg_temp_1.tinput_data\"\n Column | Type | Modifiers\n------------------+-----------------------+-----------\n key | integer | not null\n firm | character varying(40) |\n address | integer |\n address_v | character varying(10) |\n odd_even | character(1) |\n street_name | character 
varying(40) |\n street_metaphone | character varying(4) |\n apartment | integer |\n apartment_v | character varying(10) |\n apartment_label | character varying(5) |\n city | character varying(40) |\n city_metaphone | character varying(4) |\n state | character varying(40) |\n zip | character varying(5) |\nIndexes: tinput_data_pkey primary key btree (\"key\")\n\nusps=# \\d postal_code\n Table \"public.postal_code\"\n Column | Type | Modifiers\n----------------+-----------------------+-------------------------------------------------------------------------\n postal_code_id | integer | not null default nextval('public.postal_code_postal_code_id_seq'::text)\n postal_code | character varying(10) | not null\n locality_id | integer | not null\nIndexes: postal_code_pkey primary key btree (postal_code_id),\n postal_code_postal_code_key unique btree (postal_code)\n\nusps=# \\d locality\n Table \"public.locality\"\n Column | Type | Modifiers\n-------------+-----------------------+-------------------------------------------------------------------\n locality_id | integer | not null default nextval('public.locality_locality_id_seq'::text)\n locality | character varying(10) | not null\n state_code | character(2) | not null\nIndexes: locality_pkey primary key btree (locality_id)\nForeign Key constraints: $1 FOREIGN KEY (state_code) REFERENCES state(state_code) ON UPDATE NO ACTION ON DELETE NO ACTION\n\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 9 Jun 2003 15:40:09 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Hash or merge join instead of inner loop" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> I have a query that's cauing pgsql choose either a hash or merge join\n> depending on how I mess with the stats variables, but it won't choose an\n> nested loop, even though it's the fastest.\n\nThere's been some discussion about that before; you could check the\narchives (now that they're up again ;-)). I believe that the planner\noverestimates the cost of a nestloop with inner indexscan, because it\ncosts the indexscans as though each one is an independent ab-initio\nindex search. In reality, most of the upper btree levels will no doubt\nstay in memory during such a query, and so this estimate charges many\nmore reads than really occur. Fixing this is on the todo list, but no\none's got to it yet. (It's not clear to me how to put the consideration\ninto the planner's cost algorithms in a clean way.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jun 2003 02:15:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash or merge join instead of inner loop " }, { "msg_contents": "On 10 Jun 2003 at 2:15, Tom Lane wrote:\n\n> There's been some discussion about that before; you could check the\n> archives (now that they're up again ;-)). I believe that the planner\n> overestimates the cost of a nestloop with inner indexscan, because it\n> costs the indexscans as though each one is an independent ab-initio\n> index search. In reality, most of the upper btree levels will no doubt\n> stay in memory during such a query, and so this estimate charges many\n> more reads than really occur. 
Fixing this is on the todo list, but no\n> one's got to it yet. (It's not clear to me how to put the consideration\n> into the planner's cost algorithms in a clean way.)\n\nJust being na�ve here, but if planner and executor account for shared \nbuffers+effective OS cache, even a boolean choice could be a start.\n\nSay a query needs 100MB of data according to estimates so if shared \nbuffers+effective OS cache covers that, we can lower the cost.\n\nMay be we should have two config. parameters for tuple cost? Disk read tuple \ncost and memory read tuple cost. Later being 1/10th of former?\n\nBye\n Shridhar\n\n--\nAll new:\tParts not interchangeable with previous model.\n\n", "msg_date": "Tue, 10 Jun 2003 14:26:17 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash or merge join instead of inner loop " }, { "msg_contents": "On Tue, Jun 10, 2003 at 02:15:11AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > I have a query that's cauing pgsql choose either a hash or merge join\n> > depending on how I mess with the stats variables, but it won't choose an\n> > nested loop, even though it's the fastest.\n> \n> There's been some discussion about that before; you could check the\n> archives (now that they're up again ;-)). I believe that the planner\n> overestimates the cost of a nestloop with inner indexscan, because it\n> costs the indexscans as though each one is an independent ab-initio\n> index search. In reality, most of the upper btree levels will no doubt\n> stay in memory during such a query, and so this estimate charges many\n> more reads than really occur. Fixing this is on the todo list, but no\n> one's got to it yet. (It's not clear to me how to put the consideration\n> into the planner's cost algorithms in a clean way.)\n \nWhat about just ignoring all but the leaf pages? Unless you have a\nreally, really big index, I think this would probably work well, or at\nleast better than what we have right now.\n\nI can't think of an elegant way to figure out hit percentages either.\nMaybe as a ratio of how often an individual page at a given level of the\nbtree is to be hit? IE: the root page will always be hit (only one\npage); if the next level up has 10 pages, each one is 10% likely to be\nin cache, and so-on. Or maybe a better way to look at it is how many\npages sit underneath each page. So if we figure there's a 0.1% chance that\na leaf page is in cache and each page in the layer above/below that has\ntuples for 100 leaf pages, then the odds of a page in that layer being\nin the cache is 10%\n\nIt might also be worth giving index pages a higher priority in the\ninternal buffer than table pages.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 10 Jun 2003 14:42:07 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hash or merge join instead of inner loop" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Tue, Jun 10, 2003 at 02:15:11AM -0400, Tom Lane wrote:\n>> ... In reality, most of the upper btree levels will no doubt\n>> stay in memory during such a query, and so this estimate charges many\n>> more reads than really occur. 
Fixing this is on the todo list, but no\n>> one's got to it yet. (It's not clear to me how to put the consideration\n>> into the planner's cost algorithms in a clean way.)\n \n> What about just ignoring all but the leaf pages?\n\nIIRC, we already know what cost model we want to use. The problem is\nthat the planner's code structure makes it difficult for the indexscan\ncoster to know that the indexscan will be applied repeatedly rather than\njust once. That's what has to be solved.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jun 2003 16:56:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hash or merge join instead of inner loop " } ]
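Until that cost-model change happens, the practical options are the blunt session toggles used at the top of this exchange, or nudging the cost constants so the nested loop wins on its own. A hedged sketch of the second approach, re-using the query from the first message; 1.5 is only an illustrative value and should be sanity-checked against the rest of the workload:

SET random_page_cost = 1.5;
EXPLAIN ANALYZE
SELECT key, pc.locality_id, l.state_code::varchar
FROM Tinput_data i, postal_code pc, locality l
WHERE i.zip = pc.postal_code
  AND l.locality_id = pc.locality_id;
RESET random_page_cost;

If the planner still refuses the nested loop, the enable_mergejoin/enable_hashjoin switches shown above remain the fallback; issuing them as SET LOCAL inside a transaction keeps the override from leaking into later queries on the same connection.
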
[ { "msg_contents": "Folks,\n\nWe've been discussing this for a while on HACKERS. However, I haven't been \ngetting much feedback on the specific order proposed.\n\nAttached is an outline of my proposed re-ordering of postgresql.conf.sample. \nPlease send me comments. I need to submit a patch by Thursday, so don't take \ntoo long.\n\nThis is an effort to make the order of run-time params in \npostgresql.conf.sample and in the docs more logical and less baffling to the \nnew DBA.\n\nQuestions:\n1) Should \"enable_implicit_from\" go in the \"Version/Platform Compatibility\" \nsection where I have it now, or in \"CLIENT CONNECTIONS-Statement Behavior\", \nor somewhere else?\n\n2) Where should \"preload_libraries\" go? I'm very reluctant to start a \n\"Misc.\" section. Perhaps I should start a \"LIBRARIES\" section?\n\n3) I have re-ordered each subsection somewhat. The fixed ordering is based \non:\n a) My guess at the frequency with which that option will be changed, \nwith more common options toward the top of the subsection;\n b) Grouping for tightly related options and for options that cascade;\n c) where (a) and (b) are unclear, alpha order.\nDoes this order make sense looking at the file?\n\n3) Should we use indenting in PostgreSQL.conf.sample? I tend to think it \nwould make the file easier to read, but I'm not sure what effect it would \nhave, if any, on parsing the file and whether other people would find it easy \nto read.\n\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco", "msg_date": "Tue, 10 Jun 2003 11:01:46 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re-ordering .CONF params ... questions for this list" }, { "msg_contents": "On Tuesday 10 Jun 2003 7:01 pm, Josh Berkus wrote:\n> Folks,\n>\n> We've been discussing this for a while on HACKERS. However, I haven't been\n> getting much feedback on the specific order proposed.\n>\n> Attached is an outline of my proposed re-ordering of\n> postgresql.conf.sample. Please send me comments. I need to submit a patch\n> by Thursday, so don't take too long.\n>\n> This is an effort to make the order of run-time params in\n> postgresql.conf.sample and in the docs more logical and less baffling to\n> the new DBA.\n>\n> Questions:\n> 1) Should \"enable_implicit_from\" go in the \"Version/Platform Compatibility\"\n> section where I have it now, or in \"CLIENT CONNECTIONS-Statement Behavior\",\n> or somewhere else?\n\nVersion compatibility I'd vote for (hesitantly)\n\n> 2) Where should \"preload_libraries\" go? I'm very reluctant to start a\n> \"Misc.\" section. Perhaps I should start a \"LIBRARIES\" section?\n\nNo useful ideas - sorry.\n\n> 3) I have re-ordered each subsection somewhat. The fixed ordering is\n> based on:\n> a) My guess at the frequency with which that option will be\n> changed, with more common options toward the top of the subsection;\n> b) Grouping for tightly related options and for options that\n> cascade; c) where (a) and (b) are unclear, alpha order.\n> Does this order make sense looking at the file?\n\nLooks good, I'd suggest the following perhaps:\n\nLogging & Debugging\nI'd like this near the top, but then I use syslogging. With a new install I go \nin and check tcpip_socket etc, fix the logging and just see if everything is \nworking. Then I go in and do a little tuning.\nActually, maybe the syslog sub-section should go above the others - say where \nyou'll log to, and then what you'll log. 
Of course, I'm biased since I use \nsyslog.\n\nClient Connection Defaults/Other/password_encryption\nThis should probably go in the security section. Actually, looking at it \n\"dynamic_librar_path\" is in the wrong place too - cut & past error?\n\nQuery Tuning/Planner Method Enabling\nI'm in two minds here - obviously it is more \"basic\" than the \"cost \nconstraints\" section, but that's the one people will be tinkering with first. \nNope - thinking about it, you've got it right.\n\n> 3) Should we use indenting in PostgreSQL.conf.sample? I tend to think it\n> would make the file easier to read, but I'm not sure what effect it would\n> have, if any, on parsing the file and whether other people would find it\n> easy to read.\n\nNot sure it would help that much - the comments need a URL to the relevant \npage in the online docs though. A couple more lines of comments too:\n\n# Syslog\n# To log to syslog, use something like\n# syslog = 2, syslog_facility = 'LOCAL0', syslog_ident = 'postgres'\n# Don't forget to update your syslog.conf then too.\n#\n...etc\n\nOtherwise, looks good to me.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 10 Jun 2003 20:05:11 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-ordering .CONF params ... questions for this list" }, { "msg_contents": "Richard Huxton wrote:\n>>2) Where should \"preload_libraries\" go? I'm very reluctant to start a\n>>\"Misc.\" section. Perhaps I should start a \"LIBRARIES\" section?\n> \n> No useful ideas - sorry.\n\nSorry, I missed this earlier. This is a performance tuning option.\n\nJoe\n\n", "msg_date": "Tue, 10 Jun 2003 13:05:14 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-ordering .CONF params ... questions for this list" }, { "msg_contents": "Richard,\n\n> Logging & Debugging\n> I'd like this near the top, but then I use syslogging. With a new install I \ngo \n> in and check tcpip_socket etc, fix the logging and just see if everything is \n> working. Then I go in and do a little tuning.\n> Actually, maybe the syslog sub-section should go above the others - say \nwhere \n> you'll log to, and then what you'll log. Of course, I'm biased since I use \n> syslog.\n\nI have no objection to moving the syslog section. Any other opinions?\n\n> Client Connection Defaults/Other/password_encryption\n> This should probably go in the security section. Actually, looking at it \n> \"dynamic_librar_path\" is in the wrong place too - cut & past error?\n\nNot the way I read the docs; according to the docs:\n\npassword_encryption is whether or not the statement \"ALTER USER joe_schmoe \nWITH PASSWORD 'xxxyyy'\" is encrypted by default even if you don't use the \n\"WITH ENCRYPTION\" option. And it is SET-able on each client connection, by \nregular users. So it goes in \"CLIENT CONNECTION SETTINGS\".\n\n\"dynamic_library_path\", while less obvious, is also SETable on each client \nconnection. I'd be happy to revise this if someone understands/uses this \noption and has a better idea where to put it.\n\n> Not sure it would help that much - the comments need a URL to the relevant \n> page in the online docs though. A couple more lines of comments too:\n\nGiven that we're running out of time, I wasn't going to touch any of the \ncomments in PostgreSQL.conf.sample. Instead, I was going to leave the \ncomments as-is, and post extensive comments on Techdocs before 7.4 beta. 
\nThen, in 7.5 or 8.0 we can re-comment .conf.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 10 Jun 2003 13:57:24 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-ordering .CONF params ... questions for this list" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Given that we're running out of time, I wasn't going to touch any of the \n> comments in PostgreSQL.conf.sample. Instead, I was going to leave the \n> comments as-is, and post extensive comments on Techdocs before 7.4\n> beta. \n\nI doubt anyone would object to improving the comments during beta; so\nyou don't need to consider that part something that has to be done\nbefore feature freeze.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jun 2003 17:28:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-ordering .CONF params ... questions for this list " }, { "msg_contents": "Tom,\n\n> I doubt anyone would object to improving the comments during beta; so\n> you don't need to consider that part something that has to be done\n> before feature freeze.\n\nOh, cool. OK, then ... the hard part is just deciding on what comments to \ninclude. We'll work on that in this list.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 10 Jun 2003 14:29:42 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-ordering .CONF params ... questions for this list" }, { "msg_contents": "Folks,\n\nRevised ordering of options, based on information and suggestions received \nhere on both mailing lists.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco", "msg_date": "Tue, 10 Jun 2003 14:45:13 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-ordering .CONF params ... questions for this list" }, { "msg_contents": "Josh- I took a quick look at your proposal on conf ordering-\n\nThe groupings are great.\n\nWithout a clear notion of dependencies, and only based on what I think\npeople are likely to tweak the most, I'd suggest promoting the \"client\nconnection defaults\", \"version/platform compatibility\" & \"logging/debugging\"\ngroups to positions 2,3 & 4 respectively.\n\nHere's the thinking-\n\nYou'd have all of the options that a neophyte might need to set to perform a\nparticular task in a given environment in the first three groups. Problems\nencountered while setting these up might require the adventurous beginner to\ndip into logging/debugging to gather basic diagnostic info.\n\nWith this ordering, everything you might have to touch in order to get a\nbasic system up & running lives in the top 4 groups. (This also helps soften\nthe dilemma of where enable_implicit_from should go by putting the two\npossible groups next to one another.)\n\nBelow the top four groups are the tuning parameters best not messed with\nuntil one passes from neophyte to DB-Geek level. (And probably not worth\nmessing with even then.) 
These are only needed when you've passed over from\ngetting it running to needing it to run better.\n\n-Nick\n\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Josh Berkus\n> Sent: Tuesday, June 10, 2003 1:02 PM\n> To: [email protected]\n> Subject: [PERFORM] Re-ordering .CONF params ... questions for this list\n>\n>\n> Folks,\n>\n> We've been discussing this for a while on HACKERS. However, I\n> haven't been\n> getting much feedback on the specific order proposed.\n>\n> Attached is an outline of my proposed re-ordering of\n> postgresql.conf.sample.\n> Please send me comments. I need to submit a patch by Thursday,\n> so don't take\n> too long.\n>\n> This is an effort to make the order of run-time params in\n> postgresql.conf.sample and in the docs more logical and less\n> baffling to the\n> new DBA.\n>\n> Questions:\n> 1) Should \"enable_implicit_from\" go in the \"Version/Platform\n> Compatibility\"\n> section where I have it now, or in \"CLIENT CONNECTIONS-Statement\n> Behavior\",\n> or somewhere else?\n>\n> 2) Where should \"preload_libraries\" go? I'm very reluctant to start a\n> \"Misc.\" section. Perhaps I should start a \"LIBRARIES\" section?\n>\n> 3) I have re-ordered each subsection somewhat. The fixed\n> ordering is based\n> on:\n> a) My guess at the frequency with which that option will\n> be changed,\n> with more common options toward the top of the subsection;\n> b) Grouping for tightly related options and for options\n> that cascade;\n> c) where (a) and (b) are unclear, alpha order.\n> Does this order make sense looking at the file?\n>\n> 3) Should we use indenting in PostgreSQL.conf.sample? I tend to\n> think it\n> would make the file easier to read, but I'm not sure what effect it would\n> have, if any, on parsing the file and whether other people would\n> find it easy\n> to read.\n>\n>\n>\n> --\n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n\n", "msg_date": "Fri, 13 Jun 2003 17:31:54 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re-ordering .CONF params ... questions for this list" }, { "msg_contents": "Nick,\n\n> Without a clear notion of dependencies, and only based on what I think\n> people are likely to tweak the most, I'd suggest promoting the \"client\n> connection defaults\", \"version/platform compatibility\" & \"logging/debugging\"\n> groups to positions 2,3 & 4 respectively.\n\nI like your ideas, but there's two problems with them:\n\n1) I mess around with postgresql.conf constantly, and seldom touch anything in \nthe \"client connection defaults\" section. I do, however, mess with the stuff \nin the \"resource usage\" section, as to most of the people on this list.\n\n2) I just spent 4.5 hours re-arranging the Runtime-config docs page last \nnight, and am very reluctant to do it again.\n\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \[email protected]\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n", "msg_date": "Fri, 13 Jun 2003 15:51:14 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re-ordering .CONF params ... 
questions for this list" }, { "msg_contents": "> 2) I just spent 4.5 hours re-arranging the Runtime-config docs page last\n> night, and am very reluctant to do it again.\n\nI like this reason... I think you've already done a great service by\ncreating functional groups. The newbies won't be hurt by the need to scroll\ndown a bit, and the functional groupings already serve to eliminate the\nconfusion about what the params are for (which *does* hurt them). What\nyou've done is a great improvement. My additional comments below are offered\nin the spirit of support for what you've already done along with thoughts to\nconsider for future revisions.\n\n\n\n\n\n> 1) I mess around with postgresql.conf constantly, and seldom\n> touch anything in\n> the \"client connection defaults\" section. I do, however, mess\n> with the stuff\n> in the \"resource usage\" section, as to most of the people on this list.\n\nI agree... but are we the folks that the conf file needs to be made more\nintuitive for?\n\nIf the intent is to make it easier for experienced folks like ourselves who\nare working with large or unusual databases to deal with PostgreSQL, then\ncertainly the resource usage and tuning settings should go to the top. We'll\nset the other params once & never touch them again.\n\nOn the other hand, I suspect that the majority of postgresql users play with\nthe other params a bit during install to get their systems working and never\ntouch the resource usage or tuning params ever. (And this is as it should\nbe, given that the defaults are reasonable for most systems.)\n\nPart of my motivation in offering this advice is our sibling rivalry with\nMySQL- once we look under the hood, we usually find that PostgreSQL is the\nway to go, but all of us mechanics spend a silly amount of time wondering\naloud why the many people who don't enjoy looking under the hood don't get\nit. If we want the legions of MySQL followers to get it, we need to put only\nthe necessary instruments on the dashboard and not force non-mechanics to\nlook under the hood. (And to stretch the metaphor a bit further- The hood\nlatch still needs to be near the dashboard for the folks who are ready for\nthe next step.)\n\n\nI'll cross-post this to advocacy because I'm tottering off on that tangent.\nI think the comments may be useful in this forum as well because the\nadvocacy folks need to pass thoughts to the active developers & documenters\nin much the same way that marketing folks need to communicate well with\nengineers in the commercial world.\n\n\nRegards,\n -Nick\n\n", "msg_date": "Sat, 14 Jun 2003 09:40:30 -0500", "msg_from": "\"Nick Fankhauser\" <[email protected]>", "msg_from_op": false, "msg_subject": "A bit OT- RE: [PERFORM] Re-ordering .CONF params ... questions for\n\tthis list" }, { "msg_contents": "Nick,\n\n> I agree... but are we the folks that the conf file needs to be made more\n> intuitive for?\n>\n> If the intent is to make it easier for experienced folks like ourselves who\n> are working with large or unusual databases to deal with PostgreSQL, then\n> certainly the resource usage and tuning settings should go to the top.\n> We'll set the other params once & never touch them again.\n>\n> On the other hand, I suspect that the majority of postgresql users play\n> with the other params a bit during install to get their systems working and\n> never touch the resource usage or tuning params ever. (And this is as it\n> should be, given that the defaults are reasonable for most systems.)\n\nThis is a good argument. 
Though if you pursue it, surely you're advocating a \nGUI tool for PostgreSQL.conf, not that that's a bad idea ...\n\nHow do other people feel about this? What options in PostgreSQL.conf do you \ntweak most frequently?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 16 Jun 2003 09:28:07 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: A bit OT- RE: [PERFORM] Re-ordering .CONF params ... questions\n\tfor this list" } ]
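As a rough illustration of the grouped, commented layout under discussion (this is not Josh's actual patch), a logging subsection of postgresql.conf.sample along the lines Richard sketched might read:

#---------------------------------------------------------------------------
# LOGGING AND DEBUGGING
# Docs: see the runtime configuration chapter of the online documentation
#---------------------------------------------------------------------------
# - Syslog -
# To log to syslog, use something like:
#syslog = 2                      # 0 = stdout only, 2 = syslog only
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'       # and update syslog.conf accordingly
# - What to log -
#log_connections = false
#log_duration = false
#log_statement = false

The parameter names above are the 7.3-era ones; the point is only to show the subsection header, the pointer to the docs, and the short how-to comment that were being proposed.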
[ { "msg_contents": "Hi,\nI am using pg 7.3.1 on a dual r.h. 7.3 box.\n\nI have a big problem with pg left join performance.\n\nMy plan is:\n=# explain analyze select D.IDS AS DIDS ,D.IDS_SKLAD, D.IDS_KO AS\nDIDSKO,KL.MNAME AS KLNAME, D.NOMER AS DNOMER,D.DATE_OP, S.MED AS\nMEDNAME, NOM.MNAME AS NOMNAME,S.IDS_NUM, S.KOL,\nS.CENA,S.VAL,S.TOT,S.DTO,S.PTO ,M.OTN AS MOTN FROM A_KLIENTI KL ,\nA_NOMEN NOM, A_DOC D,A_SKLA\nD S left outer join A_MESKLAD M ON(S.IDS=M.IDS) WHERE D.OP=4 AND\nD.IDS=S.IDS_DOC AND D.IDS_KO=KL.IDS AND S.IDS_NUM=NOM.IDS AND KL.IDS_G\nRUPA = 'SOF_112' ;\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Nested Loop (cost=460869.55..470785.29 rows=20 width=1034) (actual\ntime=50139.27..57565.34 rows=12990 loops=1)\n -> Hash Join (cost=460869.55..470662.48 rows=20 width=862) (actual\ntime=50139.02..57246.35 rows=12990 loops=1)\n Hash Cond: (\"outer\".ids_doc = \"inner\".ids)\n -> Merge Join (cost=457324.89..463038.60 rows=815792\nwidth=356) (actual time=48128.32..53430.02 rows=815926 loops=1)\n Merge Cond: (\"outer\".ids = \"inner\".ids)\n -> Index Scan using a_mesklad_pkey on a_mesklad m\n(cost=0.00..1395.47 rows=15952 width=72) (actual time=0.21..109.19\nrows=15952 loops=1)\n -> Sort (cost=457324.89..459364.37 rows=815792\nwidth=284) (actual time=48128.05..49380.06 rows=815926 loops=1)\n Sort Key: s.ids\n -> Seq Scan on a_sklad s (cost=0.00..74502.92\nrows=815792 width=284) (actual time=4.32..16777.16 rows=815926 loops=1)\n -> Hash (cost=3544.65..3544.65 rows=3 width=506) (actual\ntime=1104.34..1104.34 rows=0 loops=1)\n -> Hash Join (cost=905.35..3544.65 rows=3 width=506)\n(actual time=428.32..1098.52 rows=1966 loops=1)\n Hash Cond: (\"outer\".ids_ko = \"inner\".ids)\n -> Index Scan using i_doc_op on a_doc d\n(cost=0.00..2625.71 rows=677 width=244) (actual time=29.27..690.86\nrows=1981 loops=1)\n Index Cond: (op = 4)\n -> Hash (cost=905.19..905.19 rows=65 width=262)\n(actual time=398.97..398.97 rows=0 loops=1)\n -> Seq Scan on a_klienti kl\n(cost=0.00..905.19 rows=65 width=262) (actual time=396.68..398.93 rows=7\nloops=1)\n Filter: (ids_grupa = 'SOF_112'::name)\n -> Index Scan using a_nomen_pkey on a_nomen nom (cost=0.00..6.01\nrows=1 width=172) (actual time=0.01..0.02 rows=1 loops=12990)\n Index Cond: (\"outer\".ids_num = nom.ids)\n Total runtime: 57749.24 msec\n(20 rows)\n\n\n\nIf I remove the join ( I know it is not very correct and I receive 19\nrows as answer) it is working very fast.\nThe plan is:\n\n explain analyze select D.IDS AS DIDS ,D.IDS_SKLAD, D.IDS_KO AS\nDIDSKO,KL.MNAME AS KLNAME, D.NOMER AS DNOMER,D.DATE_OP, S.MED AS\nMEDNAME, NOM.MNAME AS NOMNAME,S.IDS_NUM, S.KOL,\nS.CENA,S.VAL,S.TOT,S.DTO,S.PTO ,M.OTN AS MOTN FROM A_KLIENTI KL ,\nA_NOMEN NOM, A_DOC D,A_SKLAD S ,A_MESKLAD M WHERE S.IDS=M.IDS AND\nD.OP=4 AND D.IDS=S.IDS_DOC AND D.IDS_KO=KL.IDS AND S.IDS_NUM=NOM.IDS\nAND D.NOMER like '%0905' AND KL.IDS_GRUPA = 'SOF_112' ORDER BY\nD.IDS,S.IDS_NUM,S.ORDER_NUM ;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Sort (cost=18897.33..18897.33 rows=1 width=1038) (actual\ntime=36.33..36.35 rows=48 loops=1)\n Sort Key: d.ids, s.ids_num, s.order_num\n -> Nested Loop (cost=0.00..18897.32 rows=1 width=1038) (actual\ntime=30.90..35.93 rows=48 loops=1)\n -> Nested Loop (cost=0.00..18891.29 rows=1 width=866) 
(actual\ntime=30.70..33.34 rows=48 loops=1)\n -> Nested Loop (cost=0.00..18885.28 rows=1 width=794)\n(actual time=30.44..31.98 rows=48 loops=1)\n -> Nested Loop (cost=0.00..2633.93 rows=1\nwidth=506) (actual time=30.18..30.62 rows=1 loops=1)\n -> Index Scan using i_doc_op on a_doc d\n(cost=0.00..2627.40 rows=1 width=244) (actual time=29.93..30.36 rows=1\nloops=1)\n Index Cond: (op = 4)\n Filter: (nomer ~~ '%0905'::text)\n -> Index Scan using a_klienti_pkey on\na_klienti kl (cost=0.00..6.01 rows=1 width=262) (actual time=0.23..0.23\nrows=1 loops=1)\n Index Cond: (\"outer\".ids_ko = kl.ids)\n Filter: (ids_grupa = 'SOF_112'::name)\n -> Index Scan using i_sklad_ids_doc on a_sklad s\n(cost=0.00..16200.36 rows=4079 width=288) (actual time=0.24..0.95\nrows=48 loops=1)\n Index Cond: (\"outer\".ids = s.ids_doc)\n -> Index Scan using a_mesklad_pkey on a_mesklad m\n(cost=0.00..6.01 rows=1 width=72) (actual time=0.02..0.02 rows=1\nloops=48)\n Index Cond: (\"outer\".ids = m.ids)\n -> Index Scan using a_nomen_pkey on a_nomen nom\n(cost=0.00..6.01 rows=1 width=172) (actual time=0.04..0.04 rows=1\nloops=48)\n Index Cond: (\"outer\".ids_num = nom.ids)\n Total runtime: 36.98 msec\n(19 rows)\n\n\nAlso S.IDS and M.IDS are name and primary key's.\nI can not find my problem.\nAny idea will help.\nOf cours I can make the query with two selects and will work fast, but I\nthink it is not good solution.\nregards,\nivan.\n\n", "msg_date": "Wed, 11 Jun 2003 17:35:10 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "left join performance problem" }, { "msg_contents": "pginfo <[email protected]> writes:\n> I have a big problem with pg left join performance.\n\nI think the problem is that the LEFT JOIN clause is forcing the\nplanner to join A_SKLAD to A_MESKLAD before anything else, whereas\na good plan would do some of the other joins first to eliminate\nas many rows as possible. You will need to revise the query to\nlet the LEFT JOIN happen later. For discussion see\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=explicit-joins.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jun 2003 22:57:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: left join performance problem " }, { "msg_contents": "Many thanks Tom,\nthe doc do not contain solution for this case, but the idea to\nchange the join order was excelent and all is working fine at the moment.\n\nregards,\nivan.\n\nTom Lane wrote:\n\n> pginfo <[email protected]> writes:\n> > I have a big problem with pg left join performance.\n>\n> I think the problem is that the LEFT JOIN clause is forcing the\n> planner to join A_SKLAD to A_MESKLAD before anything else, whereas\n> a good plan would do some of the other joins first to eliminate\n> as many rows as possible. You will need to revise the query to\n> let the LEFT JOIN happen later. 
For discussion see\n> http://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=explicit-joins.html\n>\n> regards, tom lane\n\n\n\n\nMany thanks Tom,\nthe doc do not contain solution for this case, but the idea to\nchange the join order was excelent and all is working fine at the moment.\n\nregards,\nivan.\n\nTom Lane wrote:\npginfo <[email protected]> writes:\n> I have a big problem with pg left join performance.\n\nI think the problem is that the LEFT JOIN clause is forcing the\nplanner to join A_SKLAD to A_MESKLAD before anything else, whereas\na good plan would do some of the other joins first to eliminate\nas many rows as possible.  You will need to revise the query to\nlet the LEFT JOIN happen later.  For discussion see\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=explicit-joins.html\n                       \nregards, tom lane", "msg_date": "Thu, 12 Jun 2003 06:48:27 +0200", "msg_from": "pginfo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: left join performance problem" }, { "msg_contents": "On Thu, Jun 12, 2003 at 06:48:27AM +0200, pginfo wrote:\n> Many thanks Tom,\n> the doc do not contain solution for this case, but the idea to\n> change the join order was excelent and all is working fine at the moment.\n \nAny chance of getting a TODO added that would provide the option of\nhaving the optimizer pick join order when you're using the ANSI join\nsyntax?\n\nIMHO I think it's bad that using the ANSI syntax forces join order; it\nwould be much better to come up with a custom syntax for this like\neveryone else does. But I'm sure people won't want to change the\nexisting behavior, so special syntax to do the opposite is almost as\ngood.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 16 Jun 2003 00:10:28 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: left join performance problem" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Any chance of getting a TODO added that would provide the option of\n> having the optimizer pick join order when you're using the ANSI join\n> syntax?\n\nNo ... because it's already DONE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jun 2003 01:28:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: left join performance problem " }, { "msg_contents": "On Mon, Jun 16, 2003 at 01:28:26AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Any chance of getting a TODO added that would provide the option of\n> > having the optimizer pick join order when you're using the ANSI join\n> > syntax?\n> \n> No ... because it's already DONE.\n \nDOH, I forgot about the subselect trick. Nevermind.\n\n*wipes egg off face*\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 16 Jun 2003 00:36:29 -0500", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: left join performance problem" }, { "msg_contents": "On Mon, Jun 16, 2003 at 00:36:29 -0500,\n \"Jim C. Nasby\" <[email protected]> wrote:\n> On Mon, Jun 16, 2003 at 01:28:26AM -0400, Tom Lane wrote:\n> > \"Jim C. Nasby\" <[email protected]> writes:\n> > > Any chance of getting a TODO added that would provide the option of\n> > > having the optimizer pick join order when you're using the ANSI join\n> > > syntax?\n> > \n> > No ... because it's already DONE.\n> \n> DOH, I forgot about the subselect trick. Nevermind.\n\nIn 7.4 there is a GUC setting to control this. I believe the default\nis to not constrain the join order any more than is necessary to\npreserve semantics.\n", "msg_date": "Mon, 16 Jun 2003 03:07:18 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: left join performance problem" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Mon, Jun 16, 2003 at 01:28:26AM -0400, Tom Lane wrote:\n>> \"Jim C. Nasby\" <[email protected]> writes:\n> Any chance of getting a TODO added that would provide the option of\n> having the optimizer pick join order when you're using the ANSI join\n> syntax?\n>> \n>> No ... because it's already DONE.\n \n> DOH, I forgot about the subselect trick. Nevermind.\n\nNo, I wasn't talking about that. See\nhttp://developer.postgresql.org/docs/postgres/explicit-joins.html\nfor the way it works in CVS tip.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jun 2003 09:26:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: left join performance problem " }, { "msg_contents": "On Mon, Jun 16, 2003 at 09:26:13AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > On Mon, Jun 16, 2003 at 01:28:26AM -0400, Tom Lane wrote:\n> >> \"Jim C. Nasby\" <[email protected]> writes:\n> > Any chance of getting a TODO added that would provide the option of\n> > having the optimizer pick join order when you're using the ANSI join\n> > syntax?\n> >> \n> >> No ... because it's already DONE.\n> \n> > DOH, I forgot about the subselect trick. Nevermind.\n> \n> No, I wasn't talking about that. See\n> http://developer.postgresql.org/docs/postgres/explicit-joins.html\n> for the way it works in CVS tip.\n \nAhh, cool. BTW, I think it should be prominently mentioned in\nhttp://developer.postgresql.org/docs/postgres/sql-select.html#SQL-FROM\nthat ANSI join syntax can force join order, since most DBA's unfamiliar\nwith pgsql probably won't be expecting that.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 16 Jun 2003 17:41:05 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: left join performance problem" } ]
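To make the fix concrete: since 7.2 and 7.3 treat explicit JOIN syntax as a join-order constraint, writing the restricting inner joins first and the LEFT JOIN last lets the outer join happen after the selective conditions have been applied. The sketch below is only an illustration (column aliases dropped); the exact rewrite ivan settled on is not shown in the thread.

SELECT d.ids, d.ids_sklad, d.ids_ko, kl.mname, d.nomer, d.date_op,
       s.med, nom.mname, s.ids_num, s.kol, s.cena, s.val, s.tot,
       s.dto, s.pto, m.otn
  FROM a_klienti kl
  JOIN a_doc d      ON d.ids_ko  = kl.ids
  JOIN a_sklad s    ON s.ids_doc = d.ids
  JOIN a_nomen nom  ON nom.ids   = s.ids_num
  LEFT JOIN a_mesklad m ON m.ids = s.ids
 WHERE d.op = 4
   AND kl.ids_grupa = 'SOF_112';

Because the LEFT JOIN now appears after the joins that filter on d.op and kl.ids_grupa, the planner only probes a_mesklad for the rows that survive those restrictions instead of building the 815k-row outer join first.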
[ { "msg_contents": "Hi,\n\nI'm running PG 7.3.2 on a dual P3 1 GHz, 4GB RAM, 5-disk RAID 5 (hardware) on\nDebian Linux, kernel 2.4.21-rc3.\n\nI'm unable to tweak the various _cost settings in such a way that attached\nquery will use the right plan. Attachment contains relevant config file\nsettings, table defenitions and explain analyze output with some enable_*\nsettings turned off.\n\nCould anyone tell me what I'm doing wrong in the query itself or tell me what\nI should do with which config file setting to let the planner choose the\nfastest plan by itself?\n\n\nRegards,\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/", "msg_date": "Wed, 11 Jun 2003 21:33:14 +0200", "msg_from": "Vincent van Leeuwen <[email protected]>", "msg_from_op": true, "msg_subject": "tweaking costs to favor nestloop" }, { "msg_contents": "Vincent van Leeuwen <[email protected]> writes:\n> I'm unable to tweak the various _cost settings in such a way that attached\n> query will use the right plan.\n\nYou aren't going to be able to. You've already overshot a reasonable\nrandom_page_cost setting --- to judge by the relative actual costs of\nthe merge and hash join, a value somewhere around 3 is appropriate for\nyour setup. (Assuming I did the math right --- if you set it to 3,\ndo you get a ratio of merge and hash estimated costs that agrees with\nthe ratio of actual runtimes?)\n\nThe problem here is that the costing of the repeated inner index scans\nisn't realistic: 35417 probes into \"auth\" are clearly taking much less\nthan 35417 times what a single probe could be expected to take. We\ntalked about how repeated scans would win from caching of the upper\nbtree levels, but I think there's more to it than that. It occurs to me\nthat the probes you are making are probably not random and uncorrelated.\nThey are driven by the values of reportuser.idreporter ... is it fair\nto guess that most of the reportuser rows link to just a small fraction\nof the total auth population? If so, the caching could be eliminating\nmost of the reads, not just the upper btree levels, because we're\nmostly hitting only small parts of the index and auth tables.\n\nI'm beginning to think that the only reasonable way to model this is to\ncost the entire nestloop join as a unit, so that we have access to\nstatistics about the outer table as well as the indexed table. That\nwould give us a shot at estimating how much of the index is likely to\nget touched.\n\nAs of 7.3 I think all you can do is force nestloop by disabling the\nother two join types.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jun 2003 16:17:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tweaking costs to favor nestloop " }, { "msg_contents": "On 2003-06-11 16:17:53 -0400, Tom Lane wrote:\n> Vincent van Leeuwen <[email protected]> writes:\n> > I'm unable to tweak the various _cost settings in such a way that attached\n> > query will use the right plan.\n> \n> You aren't going to be able to. You've already overshot a reasonable\n> random_page_cost setting --- to judge by the relative actual costs of\n> the merge and hash join, a value somewhere around 3 is appropriate for\n> your setup. (Assuming I did the math right --- if you set it to 3,\n> do you get a ratio of merge and hash estimated costs that agrees with\n> the ratio of actual runtimes?)\n> \nWell, random_page_cost is where it is right now because for a number of other\nqueries it seems to give the best result. 
Specifically, 1.25 seems to be the\nsweet spot where a number of queries that were using seqscans but should use\nindexscans started to use indexscans. Tweaking the cpu_index_tuple_cost by\nrather large margins didn't seem to have any effect on the calculated costs.\nGoing back to a setting of 3 will hurt overall performance, unless we can\nstill get those other queries to use the right plan by tweaking other config\nparameters.\n\nHow did you calculate the value of 3?\n\nAnother problem we've noticed is that on an idle database certain queries are\nbetter off using an indexscan than a seqscan, something which the planner\nalready wanted to do. But when the load on the database gets a lot higher,\nindexscans are consistently slower than seqscans (same query, same\nparameters). So we had to dick around a bit to favor seqscans more for those\nqueries (we set cpu_operator_cost a lot lower to favor a seqscan+sort over a\n(reverse? dunno anymore) indexscan).\n\n> The problem here is that the costing of the repeated inner index scans\n> isn't realistic: 35417 probes into \"auth\" are clearly taking much less\n> than 35417 times what a single probe could be expected to take. We\n> talked about how repeated scans would win from caching of the upper\n> btree levels, but I think there's more to it than that. It occurs to me\n> that the probes you are making are probably not random and uncorrelated.\n> They are driven by the values of reportuser.idreporter ... is it fair\n> to guess that most of the reportuser rows link to just a small fraction\n> of the total auth population? If so, the caching could be eliminating\n> most of the reads, not just the upper btree levels, because we're\n> mostly hitting only small parts of the index and auth tables.\n> \nExactly. I think the 'auth' table is already completely in kernel\nfilesystemcache to begin with, and probably largely in shared_buffers too,\nsince it's a small table that gets hit a lot. Especially on it's primary key,\nwhich we use here.\n\n> I'm beginning to think that the only reasonable way to model this is to\n> cost the entire nestloop join as a unit, so that we have access to\n> statistics about the outer table as well as the indexed table. That\n> would give us a shot at estimating how much of the index is likely to\n> get touched.\n> \n> As of 7.3 I think all you can do is force nestloop by disabling the\n> other two join types.\n> \n\nDoes 7.4 already have changes in this area that will affect this query?\n\n\nVincent van Leeuwen\nMedia Design - http://www.mediadesign.nl/\n", "msg_date": "Fri, 13 Jun 2003 14:39:22 +0200", "msg_from": "Vincent van Leeuwen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tweaking costs to favor nestloop" }, { "msg_contents": "Vincent van Leeuwen <[email protected]> writes:\n> How did you calculate the value of 3?\n\nEstimated cost of an indexscan is approximately proportional to\nrandom_page_cost, but cost of a seqscan isn't affected by it.\nYou had a hash join plan that used two seqscans (so its estimated\ncost is unaffected by random_page_cost) plus a merge join plan\nthat had one indexscan input. I just extrapolated the change in\nthe indexscan cost needed to make the ratio of total costs agree with\nreality. This is a pretty rough calculation of course, but I don't\nbelieve small values of random_page_cost except for situations where all\nyour data is buffered in RAM. 
It's real easy to get led down the garden\npath by small test cases that get fully buffered (especially when you\nrepeat them over and over), and pick cost values that will not reflect\nreality in a production environment. I can't say whether that actually\nhappened to you, but it's something to be on your guard about.\n\n> Another problem we've noticed is that on an idle database certain queries are\n> better off using an indexscan than a seqscan, something which the planner\n> already wanted to do. But when the load on the database gets a lot higher,\n> indexscans are consistently slower than seqscans (same query, same\n> parameters).\n\nSee above. Increasing load reduces the chances that any one query will\nfind its data already buffered, since there's more competition for the\navailable buffer space.\n\n> Does 7.4 already have changes in this area that will affect this query?\n\nNo.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Jun 2003 10:07:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tweaking costs to favor nestloop " }, { "msg_contents": "Tom-\nFWIW, these are the same kind of numbers I'm seeing for the project I'm\nworking on.. ie: nested loop estimates at 0.00-3.01 but reality is much\ncloser to 0.2. I agrees that it probably makes sense to take the\ncorrelation of both tables into account for nested-loop joins.\n\nOn Wed, Jun 11, 2003 at 04:17:53PM -0400, Tom Lane wrote:\n> Vincent van Leeuwen <[email protected]> writes:\n> > I'm unable to tweak the various _cost settings in such a way that attached\n> > query will use the right plan.\n> \n> You aren't going to be able to. You've already overshot a reasonable\n> random_page_cost setting --- to judge by the relative actual costs of\n> the merge and hash join, a value somewhere around 3 is appropriate for\n> your setup. (Assuming I did the math right --- if you set it to 3,\n> do you get a ratio of merge and hash estimated costs that agrees with\n> the ratio of actual runtimes?)\n> \n> The problem here is that the costing of the repeated inner index scans\n> isn't realistic: 35417 probes into \"auth\" are clearly taking much less\n> than 35417 times what a single probe could be expected to take. We\n> talked about how repeated scans would win from caching of the upper\n> btree levels, but I think there's more to it than that. It occurs to me\n> that the probes you are making are probably not random and uncorrelated.\n> They are driven by the values of reportuser.idreporter ... is it fair\n> to guess that most of the reportuser rows link to just a small fraction\n> of the total auth population? If so, the caching could be eliminating\n> most of the reads, not just the upper btree levels, because we're\n> mostly hitting only small parts of the index and auth tables.\n> \n> I'm beginning to think that the only reasonable way to model this is to\n> cost the entire nestloop join as a unit, so that we have access to\n> statistics about the outer table as well as the indexed table. That\n> would give us a shot at estimating how much of the index is likely to\n> get touched.\n> \n> As of 7.3 I think all you can do is force nestloop by disabling the\n> other two join types.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n-- \nJim C. Nasby (aka Decibel!) 
[email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 16 Jun 2003 00:17:40 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tweaking costs to favor nestloop" } ]
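To make Tom's 7.3 workaround concrete (disable the other two join types so the planner is left with the nested loop), a minimal per-query sketch looks like this; SET LOCAL keeps the change from leaking into the rest of the session:

BEGIN;
SET LOCAL enable_hashjoin  = off;
SET LOCAL enable_mergejoin = off;
-- run the problem query here; with hash and merge joins disabled,
-- only the nested-loop plan remains available to the planner
COMMIT;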
[ { "msg_contents": "Howdy folks. We just received a new monster box, and I would like to\nfield suggestions on setting up the Conf file for the best performance\n\n**************BOX INFO***************\nCompaq\n2 x 2GHZ XEON processors\n7 gigs RAM\n100 gigs HD\n*****************************************\n\n**********Expected number of Users and connection type:\n200 users with 80% connection via VB front end. 20% Web (PHP)\n\n\n**********Dbase Usage:\nThis is mostly a reporting database, so there will be heavy usage of\naggregates (sum, avg, etc)\n\n*********Data Loading\n95% of database is truncated and refreshed nightly from multiple\ndatasources.\nI have a concern with one particular summary table that takes 20 mins to do\nan insert/update from 4 individual tables on our current system.\n\nNot sure what other information I should provide.\nTIA\n-patrick\n\n\nPatrick Hatcher\nMacys.Com\n\n\n", "msg_date": "Thu, 12 Jun 2003 09:33:34 -0700", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": true, "msg_subject": "new monster box CONF suggestion please" } ]
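This request drew no reply in this archive, so the snippet below is purely an illustrative 7.3-era starting point for a box of this size; every number is an assumption to be checked against real load, not advice from the list:

max_connections = 250            # 200 expected users plus some headroom
shared_buffers = 32768           # 256 MB in 8 KB pages; assumption, tune from here
sort_mem = 16384                 # KB; generous for reporting aggregates, but note
                                 #  it is allocated per sort, per backend
vacuum_mem = 65536
effective_cache_size = 700000    # roughly 5.5 GB of expected OS cache, in 8 KB pages
checkpoint_segments = 16         # eases the nightly truncate-and-reload
wal_buffers = 16

The slow 20-minute summary-table rebuild is more likely to be helped by dropping and recreating indexes around the nightly load, and by raising sort_mem/checkpoint_segments for that session, than by any single global setting.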
[ { "msg_contents": "Hi.\n\nI have a problem with performance after upgrading from 7.2 to 7.3. Let's\nsee two simple tables:\n\nCREATE TABLE a (\n id integer,\n parent_id integer\n);\n\nwith 1632 records, and\n\nCREATE TABLE b (\n id integer\n);\n\nwith 5281 records, and a litle more complex view:\n\nCREATE VIEW v_c AS\n SELECT t1.id, \n (SELECT count(*) AS count FROM a t3 WHERE (t3.parent_id = t2.id)) AS children_count \n FROM (b t1 LEFT JOIN a t2 ON ((t1.id = t2.id)));\n\n\nNow see the query run under explain analyze:\n\nPostgresql 7.2:\n\nsiaco=# explain analyze select count(*) from v_c;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=219.66..219.66 rows=1 width=8) (actual time=162.75..162.75 rows=1 loops=1)\n -> Merge Join (cost=139.66..207.16 rows=5000 width=8) (actual time=95.07..151.46 rows=5281 loops=1)\n -> Sort (cost=69.83..69.83 rows=1000 width=4) (actual time=76.18..82.37 rows=5281 loops=1)\n -> Seq Scan on b t1 (cost=0.00..20.00 rows=1000 width=4) (actual time=0.02..22.02 rows=5281 loops=1)\n -> Sort (cost=69.83..69.83 rows=1000 width=4) (actual time=18.86..25.38 rows=5281 loops=1)\n -> Seq Scan on a t2 (cost=0.00..20.00 rows=1000 width=4) (actual time=0.02..6.70 rows=1632 loops=1)\nTotal runtime: 164.34 msec\nEXPLAIN\n\n\nPostgresql 7.3:\n\nsiaco=# explain analyze select count(*) from v_c;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=224.66..224.66 rows=1 width=8) (actual time=5691.77..5691.77 rows=1 loops=1)\n -> Subquery Scan v_c (cost=139.66..212.16 rows=5000 width=8) (actual time=24.72..5687.77 rows=5281 loops=1)\n -> Merge Join (cost=139.66..212.16 rows=5000 width=8) (actual time=24.72..5681.55 rows=5281 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".id)\n -> Sort (cost=69.83..72.33 rows=1000 width=4) (actual time=18.82..21.09 rows=5281 loops=1)\n Sort Key: t1.id\n -> Seq Scan on b t1 (cost=0.00..20.00 rows=1000 width=4) (actual time=0.01..7.28 rows=5281 loops=1)\n -> Sort (cost=69.83..72.33 rows=1000 width=4) (actual time=4.74..7.15 rows=5281 loops=1)\n Sort Key: t2.id\n -> Seq Scan on a t2 (cost=0.00..20.00 rows=1000 width=4) (actual time=0.02..2.13 rows=1632 loops=1)\n SubPlan\n -> Aggregate (cost=22.51..22.51 rows=1 width=0) (actual time=1.07..1.07 rows=1 loops=5281)\n -> Seq Scan on a t3 (cost=0.00..22.50 rows=5 width=0) (actual time=0.80..1.06 rows=1 loops=5281)\n Filter: (parent_id = $0)\n Total runtime: 5693.62 msec\n(15 rows)\n\n\n\nI can't understand where comes the big difference in query plan from, and\n(that's more important) - how to force postgres 7.3 to execute it more\nefficient? \n\nNotice, that both databases on both machines are identical and machine with\npostgres 7.3 is even faster than the other one.\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Fri, 13 Jun 2003 20:45:06 +0200", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": true, "msg_subject": "7.3 vs 7.2 - different query plan, bad performance" }, { "msg_contents": "On Fri, Jun 13, 2003 at 20:45:06 +0200,\n Ryszard Lach <[email protected]> wrote:\n\n> I can't understand where comes the big difference in query plan from, and\n> (that's more important) - how to force postgres 7.3 to execute it more\n> efficient? \n\nI am guessing that your are really using 7.3.x and not 7.3. There was\na bug in 7.3 that was fixed in 7.3.1 or 7.3.2 with subselects. 
However\nthis fix was made with safety in mind (as it was a point release)\nand resulted in some queries running slower. A complete fix was made for\n7.4. To test to see if this is really the problem, you could try a 7.4\nsnapshot or 7.3 to see if you get improved plans.\n", "msg_date": "Fri, 13 Jun 2003 13:54:05 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3 vs 7.2 - different query plan, bad performance" }, { "msg_contents": "On Fri, 13 Jun 2003 20:45:06 +0200, Ryszard Lach <[email protected]>\nwrote:\n>I have a problem with performance after upgrading from 7.2 to 7.3.\n\nTry\n\tVACUUM ANALYSE;\n\nand then re-run your query. If it is still slow, post the new EXPLAIN\nANALYSE output here.\n\nServus\n Manfred\n", "msg_date": "Sun, 15 Jun 2003 21:48:08 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3 vs 7.2 - different query plan, bad performance" }, { "msg_contents": "On Sun, Jun 15, 2003 at 09:48:08PM +0200, Manfred Koizar wrote:\n> On Fri, 13 Jun 2003 20:45:06 +0200, Ryszard Lach <[email protected]>\n> wrote:\n> >I have a problem with performance after upgrading from 7.2 to 7.3.\n> \n> Try\n> \tVACUUM ANALYSE;\n> \n> and then re-run your query. If it is still slow, post the new EXPLAIN\n> ANALYSE output here.\n> \n\nHm. I've tried it too. I don't see a big difference:\n\nsiaco=# explain analyze select count(*) from v_c;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=210.83..210.83 rows=1 width=8) (actual time=5418.09..5418.09 rows=1 loops=1)\n -> Subquery Scan v_c (cost=28.40..197.63 rows=5281 width=8) (actual time=4.59..5414.13 rows=5281 loops=1)\n -> Hash Join (cost=28.40..197.63 rows=5281 width=8) (actual time=4.58..5407.73 rows=5281 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Seq Scan on b t1 (cost=0.00..76.81 rows=5281 width=4) (actual time=0.01..9.68 rows=5281 loops=1)\n -> Hash (cost=24.32..24.32 rows=1632 width=4) (actual time=3.29..3.29 rows=0 loops=1)\n -> Seq Scan on a t2 (cost=0.00..24.32 rows=1632 width=4) (actual time=0.01..1.88 rows=1632 loops=1)\n SubPlan\n -> Aggregate (cost=28.41..28.41 rows=1 width=0) (actual time=1.02..1.02 rows=1 loops=5281)\n -> Seq Scan on a t3 (cost=0.00..28.40 rows=3 width=0) (actual time=0.76..1.01 rows=1 loops=5281)\n Filter: (parent_id = $0)\n Total runtime: 5433.65 msec\n\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Mon, 16 Jun 2003 08:38:50 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: 7.3 vs 7.2 - different query plan, bad performance" }, { "msg_contents": "On Mon, 16 Jun 2003 08:38:50 +0200, [email protected] wrote:\n>[After VACUUM ANALYSE ...] 
I don't see a big difference:\n>\n>siaco=# explain analyze select count(*) from v_c;\n> QUERY PLAN\n>---------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=210.83..210.83 rows=1 width=8) (actual time=5418.09..5418.09 rows=1 loops=1)\n> -> Subquery Scan v_c (cost=28.40..197.63 rows=5281 width=8) (actual time=4.59..5414.13 rows=5281 loops=1)\n> -> Hash Join (cost=28.40..197.63 rows=5281 width=8) (actual time=4.58..5407.73 rows=5281 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".id)\n> -> Seq Scan on b t1 (cost=0.00..76.81 rows=5281 width=4) (actual time=0.01..9.68 rows=5281 loops=1)\n> -> Hash (cost=24.32..24.32 rows=1632 width=4) (actual time=3.29..3.29 rows=0 loops=1)\n> -> Seq Scan on a t2 (cost=0.00..24.32 rows=1632 width=4) (actual time=0.01..1.88 rows=1632 loops=1)\n> SubPlan\n> -> Aggregate (cost=28.41..28.41 rows=1 width=0) (actual time=1.02..1.02 rows=1 loops=5281)\n> -> Seq Scan on a t3 (cost=0.00..28.40 rows=3 width=0) (actual time=0.76..1.01 rows=1 loops=5281)\n> Filter: (parent_id = $0)\n> Total runtime: 5433.65 msec\n\nOk, now we have something to work on.\n\n.) I guess you are not really interested in\n\n\tSELECT count(*) FROM v_c;\n\nIf you were, you would simply\n\n\tSELECT count(*) from b;\n\nTry\n\n\tEXPLAIN ANALYSE SELECT * FROM v_c;\n\nand you will see that 7.2 produces a plan that is almost equal to that\nproduced by 7.3.\n\n.) Without any index a seq scan is the best you can get. A scan of a\ntakes only 1 ms, but doing it 5000 times gives 5 seconds. Try\n\n\tCREATE INDEX a_parent ON a(parent_id);\n\n.) Wouldn't\n\nCREATE VIEW v_c AS\nSELECT t1.id, count(t3.id) AS children_count\n FROM (b t1 LEFT JOIN a t2 ON (t1.id = t2.id))\n LEFT JOIN a t3 ON (t3.parent_id = t2.id)\n GROUP BY t1.id;\n\ngive the same results as your view definition with the subselect? And\nunder some assumptions about your data even\n\nCREATE VIEW v_c AS\nSELECT b.id, count(a.id) AS children_count\n FROM b\n LEFT JOIN a ON (a.parent_id = b.id)\n GROUP BY b.id;\n\nmight work. But I think I don't understand your requirements. Why\nare you not interested in the children_count for an id that doesn't\nhave a parent itself?\n\n.) To answer your original question: The difference seems to be that\n7.2 does not evaluate the subselect in the SELECT list, when you are\nonly asking for count(*).\n\nServus\n Manfred\n", "msg_date": "Mon, 16 Jun 2003 12:31:08 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3 vs 7.2 - different query plan, bad performance" }, { "msg_contents": "On Mon, Jun 16, 2003 at 12:31:08PM +0200, Manfred Koizar wrote:\n> \n> Ok, now we have something to work on.\n> \n> .) 
I guess you are not really interested in\n> \n> \tSELECT count(*) FROM v_c;\n> \n> If you were, you would simply\n> \n> \tSELECT count(*) from b;\n> \n\nThat's right.\n\n> Try\n> \n> \tEXPLAIN ANALYSE SELECT * FROM v_c;\n> \n> and you will see that 7.2 produces a plan that is almost equal to that\n> produced by 7.3.\n\nThat is not.\n\nI'm, pasting query plan from 7.2 once again (after vacuum analyze):\n\nsiaco=# explain analyze select count(*) from v_c;\nNOTICE: QUERY PLAN:\nAggregate (cost=213.83..213.83 rows=1 width=8) (actual time=90.43..90.43 rows=1 loops=1)\n -> Hash Join (cost=29.40..200.63 rows=5281 width=8) (actual time=11.14..78.48 rows=5281 loops=1)\n -> Seq Scan on b t1 (cost=0.00..78.81 rows=5281 width=4) (actual time=0.01..26.40 rows=5281 loops=1)\n -> Hash (cost=25.32..25.32 rows=1632 width=4) (actual time=10.99..10.99 rows=0 loops=1)\n -> Seq Scan on a t2 (cost=0.00..25.32 rows=1632 width=4) (actual time=0.02..6.30 rows=1632 loops=1)\nTotal runtime: 90.74 msec\nEXPLAIN\n\n> might work. But I think I don't understand your requirements. Why\n> are you not interested in the children_count for an id that doesn't\n> have a parent itself?\n\nThe point is, that my tables (and queries) are a 'little' bit more complicated\nand I wanted to give as simple example as I could. I think that problem is that\nsubselects are _much_slower_ executed in 7.3 than in 7.2, just as someone\nalready wrote here. \n\n\n> .) To answer your original question: The difference seems to be that\n> 7.2 does not evaluate the subselect in the SELECT list, when you are\n> only asking for count(*).\n\nThat looks reasonably.\n\nThanks for all your help,\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Mon, 16 Jun 2003 13:41:47 +0200", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.3 vs 7.2 - different query plan, bad performance" }, { "msg_contents": "On Mon, 16 Jun 2003 13:41:47 +0200, Ryszard Lach <[email protected]>\nwrote:\n>On Mon, Jun 16, 2003 at 12:31:08PM +0200, Manfred Koizar wrote:\n>> \tEXPLAIN ANALYSE SELECT * FROM v_c;\n\n>siaco=# explain analyze select count(*) from v_c;\n ^^^^^^ ^\nSee the difference? I bet if you\n \tEXPLAIN ANALYSE SELECT * FROM v_c;\nyou get a much longer runtime.\n\nBTW, did the index on a.parent_id help? In my test it improved\nruntime from 59449.71 msec to 1203.26 msec (SELECT * with Postgres\n7.2).\n\nServus\n Manfred\n", "msg_date": "Mon, 16 Jun 2003 15:24:52 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3 vs 7.2 - different query plan, bad performance" }, { "msg_contents": "On Mon, Jun 16, 2003 at 03:24:52PM +0200, Manfred Koizar wrote:\n> On Mon, 16 Jun 2003 13:41:47 +0200, Ryszard Lach <[email protected]>\n> wrote:\n> >On Mon, Jun 16, 2003 at 12:31:08PM +0200, Manfred Koizar wrote:\n> >> \tEXPLAIN ANALYSE SELECT * FROM v_c;\n> \n> >siaco=# explain analyze select count(*) from v_c;\n> ^^^^^^ ^\n> See the difference? I bet if you\n> \tEXPLAIN ANALYSE SELECT * FROM v_c;\n> you get a much longer runtime.\n\nYes, indeed.\n\n> BTW, did the index on a.parent_id help? In my test it improved\n> runtime from 59449.71 msec to 1203.26 msec (SELECT * with Postgres\n> 7.2).\n\nOh yeah... Thanks a lot once more.\n\nRichard.\n\n-- \n\"First they ignore you. Then they laugh at you. Then they\nfight you. 
Then you win.\" - Mohandas Gandhi.\n", "msg_date": "Mon, 16 Jun 2003 15:55:21 +0200", "msg_from": "Ryszard Lach <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.3 vs 7.2 - different query plan, bad performance" }, { "msg_contents": "Ryszard Lach <[email protected]> writes:\n> The point is, that my tables (and queries) are a 'little' bit more\n> complicated and I wanted to give as simple example as I could. I think\n> that problem is that subselects are _much_slower_ executed in 7.3 than\n> in 7.2, just as someone already wrote here.\n\nNo, the problem is that 7.3 fails to notice that it doesn't really need\nto execute the subselect at all. This is the price we paid for being\nsure that a post-release bug fix wouldn't break anything more serious.\nThere is a better fix in place for 7.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jun 2003 10:17:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.3 vs 7.2 - different query plan, bad performance " } ]
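For anyone hitting the same 7.3.x behaviour (the real fix is in 7.4), the workarounds that came out of this thread boil down to:

-- 1. Give the correlated subselect an index to probe instead of a seq scan:
CREATE INDEX a_parent ON a (parent_id);

-- 2. When only the row count is wanted, bypass the view's subselect entirely:
SELECT count(*) FROM b;

The first change took the SELECT * case from roughly a minute to about a second in Manfred's test; the second avoids paying for children_count at all when it is not used.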
[ { "msg_contents": "Similar question was \nhttp://archives.postgresql.org/pgsql-admin/2002-05/msg00148.php, but google \ndid not have answer for it.\n\nHere is the structure:\n\n Column | Type | Modifiers\n-------------+--------------------------+----------------------\n id | integer | not null default '0'\n datestamp | timestamp with time zone | not null\n thread | integer | not null default '0'\n parent | integer | not null default '0'\n author | character(37) | not null default ''\n subject | character(255) | not null default ''\n email | character(200) | not null default ''\n attachment | character(64) | default ''\n host | character(50) | not null default ''\n email_reply | character(1) | not null default 'N'\n approved | character(1) | not null default 'N'\n msgid | character(100) | not null default ''\n modifystamp | integer | not null default '0'\n userid | integer | not null default '0'\n closed | smallint | default '0'\nIndexes: tjavendanpri_key primary key btree (id),\n tjavendan_approved btree (approved),\n tjavendan_author btree (author),\n tjavendan_datestamp btree (datestamp),\n tjavendan_modifystamp btree (modifystamp),\n tjavendan_msgid btree (msgid),\n tjavendan_parent btree (parent),\n tjavendan_subject btree (subject),\n tjavendan_thread btree (thread),\n tjavendan_userid btree (userid)\n\nHere is the query:\nSELECT thread, modifystamp, count(id) AS tcount, abstime(modifystamp) AS \nlatest, max(id) as maxid FROM tjavendan WHERE approved='Y' GROUP BY \nthread, modifystamp ORDER BY modifystamp desc, thread desc limit 40\n\nand explain analyze for it:\n\nkrtjavendan34=> EXPLAIN ANALYZE SELECT thread, modifystamp, count(id) AS \ntcount, abstime(modifystamp) AS latest, max(id) as maxid FROM tjavendan \nWHERE approved='Y' GROUP BY thread, modifystamp ORDER BY modifystamp desc, \nthread desc limit 40;\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=18419.78..18419.88 rows=40 width=12) (actual \ntime=6735.06..6735.69 rows=40 loops=1)\n -> Sort (cost=18419.78..18441.34 rows=8626 width=12) (actual \ntime=6735.04..6735.25 rows=41 loops=1)\n Sort Key: modifystamp, thread\n -> Aggregate (cost=16777.53..17855.84 rows=8626 width=12) \n(actual time=4605.01..6711.27 rows=2938 loops=1)\n -> Group (cost=16777.53..17424.52 rows=86265 width=12) \n(actual time=4604.85..6164.29 rows=86265 loops=1)\n -> Sort (cost=16777.53..16993.19 rows=86265 \nwidth=12) (actual time=4604.82..5130.14 rows=86265 loops=1)\n Sort Key: thread, modifystamp\n -> Seq Scan on tjavendan (cost=0.00..9705.31 \nrows=86265 width=12) (actual time=0.13..3369.28 rows=86265 loops=1)\n Filter: (approved = 'Y'::bpchar)\n Total runtime: 6741.12 msec\n(10 rows)\n\nThis is on 7.3.3.\n\nHaving backwards reading of index would really help here.\n\nThanks in advance.\n\nTomaz\n\n\n", "msg_date": "Sun, 15 Jun 2003 16:26:36 +0200", "msg_from": "Tomaz Borstnar <[email protected]>", "msg_from_op": true, "msg_subject": "any way to use indexscan to get last X values with \"order by Y\n\tlimit X\" clause?" 
}, { "msg_contents": "On 15 Jun 2003 at 16:26, Tomaz Borstnar wrote:\n\n> \n> Here is the structure:\n<snip>\n> approved | character(1) | not null default 'N'\n> msgid | character(100) | not null default ''\n> modifystamp | integer | not null default '0'\n> userid | integer | not null default '0'\n> closed | smallint | default '0'\n> Indexes: tjavendanpri_key primary key btree (id),\n> tjavendan_approved btree (approved),\n\n<snip>\n\n> Here is the query:\n> SELECT thread, modifystamp, count(id) AS tcount, abstime(modifystamp) AS \n> latest, max(id) as maxid FROM tjavendan WHERE approved='Y' GROUP BY \n> thread, modifystamp ORDER BY modifystamp desc, thread desc limit 40\n\nQuestion. The field approved seems to have boolean values. If probability of \nhaving either of value is 50%, I doubt planner will use index anyway.\n\nEven assuming all possible values of a char variable, the choice isn't too \nmuch, say if you have 1M row.\n\nCorrect me if I am wrong.\n\nBye\n Shridhar\n\n--\nEither one of us, by himself, is expendable. Both of us are not.\t\t-- Kirk, \n\"The Devil in the Dark\", stardate 3196.1\n\n", "msg_date": "Sun, 15 Jun 2003 20:01:38 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: any way to use indexscan to get last X values with \"order by Y\n\tlimit X\" clause?" }, { "msg_contents": "At 16:31 15.6.2003, Shridhar Daithankar wrote:\n\n>Question. The field approved seems to have boolean values. If probability of\n>having either of value is 50%, I doubt planner will use index anyway.\n\nTrue. It has Y or N only so index on approved is useless. But using index \non ORDER BY part would help a lot since it knows to fetch last X ordered \nvalues.\n\n>Correct me if I am wrong.\nUnfortunately you are very right.\n\nI am not sure how to stuff modifystamp and thread into WHERE clause to make \nit use indexes on thread and/or modifystamp. 
So far I believe this would be \nthe only way to use them, right?\n\nTomaz \n\n\n", "msg_date": "Sun, 15 Jun 2003 17:17:04 +0200", "msg_from": "Tomaz Borstnar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: any way to use indexscan to get last X values" }, { "msg_contents": "On Sun, 15 Jun 2003, Tomaz Borstnar wrote:\n\n> Similar question was\n> http://archives.postgresql.org/pgsql-admin/2002-05/msg00148.php, but google\n> did not have answer for it.\n>\n> Here is the structure:\n>\n> Column | Type | Modifiers\n> -------------+--------------------------+----------------------\n> id | integer | not null default '0'\n> datestamp | timestamp with time zone | not null\n> thread | integer | not null default '0'\n> parent | integer | not null default '0'\n> author | character(37) | not null default ''\n> subject | character(255) | not null default ''\n> email | character(200) | not null default ''\n> attachment | character(64) | default ''\n> host | character(50) | not null default ''\n> email_reply | character(1) | not null default 'N'\n> approved | character(1) | not null default 'N'\n> msgid | character(100) | not null default ''\n> modifystamp | integer | not null default '0'\n> userid | integer | not null default '0'\n> closed | smallint | default '0'\n> Indexes: tjavendanpri_key primary key btree (id),\n> tjavendan_approved btree (approved),\n> tjavendan_author btree (author),\n> tjavendan_datestamp btree (datestamp),\n> tjavendan_modifystamp btree (modifystamp),\n> tjavendan_msgid btree (msgid),\n> tjavendan_parent btree (parent),\n> tjavendan_subject btree (subject),\n> tjavendan_thread btree (thread),\n> tjavendan_userid btree (userid)\n>\n> Here is the query:\n> SELECT thread, modifystamp, count(id) AS tcount, abstime(modifystamp) AS\n> latest, max(id) as maxid FROM tjavendan WHERE approved='Y' GROUP BY\n> thread, modifystamp ORDER BY modifystamp desc, thread desc limit 40\n\nI'm not sure that it'd help since I don't think it'd realize that it\ndoesn't actually need to completely do the group by due to the order by,\nbut in any case, in the above, the sort orders are different for the group\nby and the order by and you'd really want a two column index on (probably)\n(modifystamp, thread) in order to get the best results on replacing a\nscan + sort.\n\n\n", "msg_date": "Sun, 15 Jun 2003 09:33:45 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: any way to use indexscan to get last X values with" }, { "msg_contents": "Tomaz Borstnar <[email protected]> writes:\n> SELECT thread, modifystamp, count(id) AS tcount, abstime(modifystamp) AS \n> latest, max(id) as maxid FROM tjavendan WHERE approved='Y' GROUP BY \n> thread, modifystamp ORDER BY modifystamp desc, thread desc limit 40\n\n> Having backwards reading of index would really help here.\n\nThe only way that a fast-start plan is useful is if there is a way to do\nit with no explicit sort steps at all. A sort step must read its entire\ninput before it can produce any output, so you completely blow the\nchance of not reading the whole table as soon as there's any sorting.\n\nThere are a couple of reasons why this query can't be done using only an\ninitial indexscan to sort the data:\n\n1. You don't have a suitable index. Neither an index on modifystamp\nalone nor an index on thread alone is of any use to produce a two-column\nordering; you need a two-column index on (modifystamp, thread).\n\n2. 
The GROUP BY and ORDER BY steps require different sort orders, and so\neven if an index satisfied one, there'd still be a sort needed for the\nother. This is partly your fault (writing the columns in different\norders) and partly the system's fault: it's implicitly taking the GROUP\nBY entries to be equivalent to ORDER BY ASC, which is overspecification.\n\nI've applied the attached patch to CVS tip to cure the latter problem.\nWith this, a two-column index, and compatible column ordering in ORDER\nBY and GROUP BY, I get a reasonable-looking fast-start plan. The patch\nwill not apply exactly against 7.3 because there's a renamed function\ncall in there, but you could make it work with a little effort.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/parser/analyze.c.orig\tFri Jun 6 11:04:02 2003\n--- src/backend/parser/analyze.c\tSun Jun 15 12:05:34 2003\n***************\n*** 1787,1799 ****\n \t */\n \tqry->havingQual = transformWhereClause(pstate, stmt->havingClause);\n \n! \tqry->groupClause = transformGroupClause(pstate,\n! \t\t\t\t\t\t\t\t\t\t\tstmt->groupClause,\n! \t\t\t\t\t\t\t\t\t\t\tqry->targetList);\n! \n \tqry->sortClause = transformSortClause(pstate,\n \t\t\t\t\t\t\t\t\t\t stmt->sortClause,\n \t\t\t\t\t\t\t\t\t\t qry->targetList);\n \n \tqry->distinctClause = transformDistinctClause(pstate,\n \t\t\t\t\t\t\t\t\t\t\t\t stmt->distinctClause,\n--- 1787,1804 ----\n \t */\n \tqry->havingQual = transformWhereClause(pstate, stmt->havingClause);\n \n! \t/*\n! \t * Transform sorting/grouping stuff. Do ORDER BY first because both\n! \t * transformGroupClause and transformDistinctClause need the results.\n! \t */\n \tqry->sortClause = transformSortClause(pstate,\n \t\t\t\t\t\t\t\t\t\t stmt->sortClause,\n \t\t\t\t\t\t\t\t\t\t qry->targetList);\n+ \n+ \tqry->groupClause = transformGroupClause(pstate,\n+ \t\t\t\t\t\t\t\t\t\t\tstmt->groupClause,\n+ \t\t\t\t\t\t\t\t\t\t\tqry->targetList,\n+ \t\t\t\t\t\t\t\t\t\t\tqry->sortClause);\n \n \tqry->distinctClause = transformDistinctClause(pstate,\n \t\t\t\t\t\t\t\t\t\t\t\t stmt->distinctClause,\n*** src/backend/parser/parse_clause.c.orig\tFri Jun 6 11:04:02 2003\n--- src/backend/parser/parse_clause.c\tSun Jun 15 12:19:14 2003\n***************\n*** 1124,1130 ****\n *\t transform a GROUP BY clause\n */\n List *\n! transformGroupClause(ParseState *pstate, List *grouplist, List *targetlist)\n {\n \tList\t *glist = NIL,\n \t\t\t *gl;\n--- 1124,1131 ----\n *\t transform a GROUP BY clause\n */\n List *\n! transformGroupClause(ParseState *pstate, List *grouplist,\n! \t\t\t\t\t List *targetlist, List *sortClause)\n {\n \tList\t *glist = NIL,\n \t\t\t *gl;\n***************\n*** 1132,1152 ****\n \tforeach(gl, grouplist)\n \t{\n \t\tTargetEntry *tle;\n \n \t\ttle = findTargetlistEntry(pstate, lfirst(gl),\n \t\t\t\t\t\t\t\t targetlist, GROUP_CLAUSE);\n \n \t\t/* avoid making duplicate grouplist entries */\n! \t\tif (!targetIsInSortList(tle, glist))\n! \t\t{\n! \t\t\tGroupClause *grpcl = makeNode(GroupClause);\n! \n! \t\t\tgrpcl->tleSortGroupRef = assignSortGroupRef(tle, targetlist);\n \n! \t\t\tgrpcl->sortop = ordering_oper_opid(tle->resdom->restype);\n! \n! \t\t\tglist = lappend(glist, grpcl);\n \t\t}\n \t}\n \n \treturn glist;\n--- 1133,1173 ----\n \tforeach(gl, grouplist)\n \t{\n \t\tTargetEntry *tle;\n+ \t\tOid\t\t\tordering_op;\n+ \t\tGroupClause *grpcl;\n \n \t\ttle = findTargetlistEntry(pstate, lfirst(gl),\n \t\t\t\t\t\t\t\t targetlist, GROUP_CLAUSE);\n \n \t\t/* avoid making duplicate grouplist entries */\n! \t\tif (targetIsInSortList(tle, glist))\n! 
\t\t\tcontinue;\n \n! \t\t/*\n! \t\t * If the GROUP BY clause matches the ORDER BY clause, we want to\n! \t\t * adopt the ordering operators from the latter rather than using\n! \t\t * the default ops. This allows \"GROUP BY foo ORDER BY foo DESC\" to\n! \t\t * be done with only one sort step. Note we are assuming that any\n! \t\t * user-supplied ordering operator will bring equal values together,\n! \t\t * which is all that GROUP BY needs.\n! \t\t */\n! \t\tif (sortClause &&\n! \t\t\t((SortClause *) lfirst(sortClause))->tleSortGroupRef ==\n! \t\t\ttle->resdom->ressortgroupref)\n! \t\t{\n! \t\t\tordering_op = ((SortClause *) lfirst(sortClause))->sortop;\n! \t\t\tsortClause = lnext(sortClause);\n \t\t}\n+ \t\telse\n+ \t\t{\n+ \t\t\tordering_op = ordering_oper_opid(tle->resdom->restype);\n+ \t\t\tsortClause = NIL;\t/* disregard ORDER BY once match fails */\n+ \t\t}\n+ \n+ \t\tgrpcl = makeNode(GroupClause);\n+ \t\tgrpcl->tleSortGroupRef = assignSortGroupRef(tle, targetlist);\n+ \t\tgrpcl->sortop = ordering_op;\n+ \t\tglist = lappend(glist, grpcl);\n \t}\n \n \treturn glist;\n*** src/include/parser/parse_clause.h.orig\tFri Mar 21 20:49:38 2003\n--- src/include/parser/parse_clause.h\tSun Jun 15 12:03:13 2003\n***************\n*** 22,28 ****\n extern bool interpretInhOption(InhOption inhOpt);\n extern Node *transformWhereClause(ParseState *pstate, Node *where);\n extern List *transformGroupClause(ParseState *pstate, List *grouplist,\n! \t\t\t\t\t List *targetlist);\n extern List *transformSortClause(ParseState *pstate, List *orderlist,\n \t\t\t\t\tList *targetlist);\n extern List *transformDistinctClause(ParseState *pstate, List *distinctlist,\n--- 22,28 ----\n extern bool interpretInhOption(InhOption inhOpt);\n extern Node *transformWhereClause(ParseState *pstate, Node *where);\n extern List *transformGroupClause(ParseState *pstate, List *grouplist,\n! \t\t\t\t\t List *targetlist, List *sortClause);\n extern List *transformSortClause(ParseState *pstate, List *orderlist,\n \t\t\t\t\tList *targetlist);\n extern List *transformDistinctClause(ParseState *pstate, List *distinctlist,\n", "msg_date": "Sun, 15 Jun 2003 12:53:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: any way to use indexscan to get last X values with \"order by Y\n\tlimit X\" clause?" }, { "msg_contents": "At 18:53 15.6.2003, you wrote:\n>I've applied the attached patch to CVS tip to cure the latter problem.\n>With this, a two-column index, and compatible column ordering in ORDER\n>BY and GROUP BY, I get a reasonable-looking fast-start plan. 
The patch\n>will not apply exactly against 7.3 because there's a renamed function\n>call in there, but you could make it work with a little effort.\n\nYou mean this:\n/*\n * ordering_oper_opid - convenience routine for oprid(ordering_oper())\n *\n * This was formerly called any_ordering_op()\n */\n\nA little later...\n\nWOW!\n\n100 to 130 times faster on same dataset and additional index on \n(modifystamp,thread) which was not really useful before this patch!\n\n\n\nkrtjavendan34=> EXPLAIN ANALYZE SELECT thread, modifystamp, count(id) AS \ntcount,abstime(modifystamp) AS latest, max(id) as maxid FROM tjavendan \nWHERE approved='Y' GROUP BY modifystamp, thread ORDER BY modifystamp desc, \nthread desc limit 40;\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..97.13 rows=40 width=12) (actual time=1.07..48.71 \nrows=40 loops=1)\n -> Aggregate (cost=0.00..20947.38 rows=8626 width=12) (actual \ntime=1.05..48.23 rows=41 loops=1)\n -> Group (cost=0.00..20516.06 rows=86265 width=12) (actual \ntime=0.35..42.25 rows=843 loops=1)\n -> Index Scan Backward using tjavendan_modstamp_thrd on \ntjavendan (cost=0.00..20084.73 rows=86265 width=12) (actual \ntime=0.34..31.29 rows=844 loops=1)\n Filter: (approved = 'Y'::bpchar)\n Total runtime: 50.20 msec\n(6 rows)\n\nUsed to be between 5800 and 6741 msec before this patch!\n\nThanks!\n\n\n\n\n", "msg_date": "Mon, 16 Jun 2003 00:37:30 +0200", "msg_from": "Tomaz Borstnar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: any way to use indexscan to get last X values" }, { "msg_contents": "Here is the 7.3.3 patch as it might help others too...\n\ndiff -rcN postgresql-7.3.3/src/backend/parser/analyze.c \npostgresql-7.3.3-grouporderby/src/backend/parser/analyze.c\n*** postgresql-7.3.3/src/backend/parser/analyze.c\tThu Feb 13 23:50:09 2003\n--- postgresql-7.3.3-grouporderby/src/backend/parser/analyze.c\tMon Jun 16 \n00:13:05 2003\n***************\n*** 1667,1679 ****\n \t */\n \tqry->havingQual = transformWhereClause(pstate, stmt->havingClause);\n\n! \tqry->groupClause = transformGroupClause(pstate,\n! \t\t\t\t\t\t\t\t\t\t\tstmt->groupClause,\n! \t\t\t\t\t\t\t\t\t\t\tqry->targetList);\n\n \tqry->sortClause = transformSortClause(pstate,\n \t\t\t\t\t\t\t\t\t\t stmt->sortClause,\n \t\t\t\t\t\t\t\t\t\t qry->targetList);\n\n \tqry->distinctClause = transformDistinctClause(pstate,\n \t\t\t\t\t\t\t\t\t\t\t\t stmt->distinctClause,\n--- 1667,1682 ----\n \t */\n \tqry->havingQual = transformWhereClause(pstate, stmt->havingClause);\n\n! /*\n! * Transform sorting/grouping stuff. Do ORDER BY first because both\n! * transformGroupClause and transformDistinctClause need the results.\n! */\n\n \tqry->sortClause = transformSortClause(pstate,\n \t\t\t\t\t\t\t\t\t\t stmt->sortClause,\n \t\t\t\t\t\t\t\t\t\t qry->targetList);\n+\n+ qry->groupClause = transformGroupClause(pstate, stmt->groupClause, \nqry->targetList, qry->sortClause);\n\n \tqry->distinctClause = transformDistinctClause(pstate,\n \t\t\t\t\t\t\t\t\t\t\t\t stmt->distinctClause,\ndiff -rcN postgresql-7.3.3/src/backend/parser/parse_clause.c \npostgresql-7.3.3-grouporderby/src/backend/parser/parse_clause.c\n*** postgresql-7.3.3/src/backend/parser/parse_clause.c\tMon Dec 16 19:39:56 2002\n--- postgresql-7.3.3-grouporderby/src/backend/parser/parse_clause.c\tMon Jun \n16 00:24:58 2003\n***************\n*** 1145,1151 ****\n *\n */\n List *\n! 
transformGroupClause(ParseState *pstate, List *grouplist, List *targetlist)\n {\n \tList\t *glist = NIL,\n \t\t\t *gl;\n--- 1145,1151 ----\n *\n */\n List *\n! transformGroupClause(ParseState *pstate, List *grouplist, List \n*targetlist, List *sortClause)\n {\n \tList\t *glist = NIL,\n \t\t\t *gl;\n***************\n*** 1153,1173 ****\n \tforeach(gl, grouplist)\n \t{\n \t\tTargetEntry *tle;\n\n \t\ttle = findTargetlistEntry(pstate, lfirst(gl),\n \t\t\t\t\t\t\t\t targetlist, GROUP_CLAUSE);\n\n \t\t/* avoid making duplicate grouplist entries */\n! \t\tif (!targetIsInSortList(tle, glist))\n! \t\t{\n! \t\t\tGroupClause *grpcl = makeNode(GroupClause);\n!\n! \t\t\tgrpcl->tleSortGroupRef = assignSortGroupRef(tle, targetlist);\n!\n! \t\t\tgrpcl->sortop = any_ordering_op(tle->resdom->restype);\n!\n! \t\t\tglist = lappend(glist, grpcl);\n! \t\t}\n \t}\n\n \treturn glist;\n--- 1153,1193 ----\n \tforeach(gl, grouplist)\n \t{\n \t\tTargetEntry *tle;\n+ Oid ordering_op;\n+ GroupClause *grpcl;\n\n \t\ttle = findTargetlistEntry(pstate, lfirst(gl),\n \t\t\t\t\t\t\t\t targetlist, GROUP_CLAUSE);\n\n \t\t/* avoid making duplicate grouplist entries */\n! if (targetIsInSortList(tle, glist))\n! continue;\n!\n! /*\n! * If the GROUP BY clause matches the ORDER BY clause, we \nwant to\n! * adopt the ordering operators from the latter rather \nthan using\n! * the default ops. This allows \"GROUP BY foo ORDER BY \nfoo DESC\" to\n! * be done with only one sort step. Note we are assuming \nthat any\n! * user-supplied ordering operator will bring equal values \ntogether,\n! * which is all that GROUP BY needs.\n! */\n! if (sortClause &&\n! ((SortClause *) \nlfirst(sortClause))->tleSortGroupRef ==\n! tle->resdom->ressortgroupref)\n! {\n! ordering_op = ((SortClause *) \nlfirst(sortClause))->sortop;\n! sortClause = lnext(sortClause);\n! }\n! else\n! {\n! ordering_op = any_ordering_op(tle->resdom->restype);\n! sortClause = NIL; /* disregard ORDER BY once \nmatch fails */\n! }\n!\n! grpcl = makeNode(GroupClause);\n! grpcl->tleSortGroupRef = assignSortGroupRef(tle, targetlist);\n! grpcl->sortop = ordering_op;\n! glist = lappend(glist, grpcl);\n \t}\n\n \treturn glist;\ndiff -rcN postgresql-7.3.3/src/include/parser/parse_clause.h \npostgresql-7.3.3-grouporderby/src/include/parser/parse_clause.h\n*** postgresql-7.3.3/src/include/parser/parse_clause.h\tThu Jun 20 22:29:51 2002\n--- postgresql-7.3.3-grouporderby/src/include/parser/parse_clause.h\tMon Jun \n16 00:08:43 2003\n***************\n*** 22,28 ****\n extern bool interpretInhOption(InhOption inhOpt);\n extern Node *transformWhereClause(ParseState *pstate, Node *where);\n extern List *transformGroupClause(ParseState *pstate, List *grouplist,\n! \t\t\t\t\t List *targetlist);\n extern List *transformSortClause(ParseState *pstate, List *orderlist,\n \t\t\t\t\tList *targetlist);\n extern List *transformDistinctClause(ParseState *pstate, List *distinctlist,\n--- 22,28 ----\n extern bool interpretInhOption(InhOption inhOpt);\n extern Node *transformWhereClause(ParseState *pstate, Node *where);\n extern List *transformGroupClause(ParseState *pstate, List *grouplist,\n! 
\t\t\t\t\t List *targetlist, List *sortClause);\n extern List *transformSortClause(ParseState *pstate, List *orderlist,\n \t\t\t\t\tList *targetlist);\n extern List *transformDistinctClause(ParseState *pstate, List *distinctlist,\n\n\n", "msg_date": "Mon, 16 Jun 2003 01:21:34 +0200", "msg_from": "Tomaz Borstnar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: any way to use indexscan to get last X values" }, { "msg_contents": "On 16 Jun 2003 at 0:37, Tomaz Borstnar wrote:\n> \n> krtjavendan34=> EXPLAIN ANALYZE SELECT thread, modifystamp, count(id) AS \n> tcount,abstime(modifystamp) AS latest, max(id) as maxid FROM tjavendan \n> WHERE approved='Y' GROUP BY modifystamp, thread ORDER BY modifystamp desc, \n> thread desc limit 40;\n> QUERY \n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..97.13 rows=40 width=12) (actual time=1.07..48.71 \n> rows=40 loops=1)\n> -> Aggregate (cost=0.00..20947.38 rows=8626 width=12) (actual \n> time=1.05..48.23 rows=41 loops=1)\n> -> Group (cost=0.00..20516.06 rows=86265 width=12) (actual \n> time=0.35..42.25 rows=843 loops=1)\n> -> Index Scan Backward using tjavendan_modstamp_thrd on \n> tjavendan (cost=0.00..20084.73 rows=86265 width=12) (actual \n> time=0.34..31.29 rows=844 loops=1)\n> Filter: (approved = 'Y'::bpchar)\n> Total runtime: 50.20 msec\n> (6 rows)\n> \n> Used to be between 5800 and 6741 msec before this patch!\n\nGood that the patch works for you. But as I see there is an improvement in \nplan. Not nitpicking but what does actual performance difference between system \nbefore patch and after patch?\n\nBye\n Shridhar\n\n--\nQOTD:\t\"In the shopping mall of the mind, he's in the toy department.\"\n\n", "msg_date": "Mon, 16 Jun 2003 11:45:52 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: any way to use indexscan to get last X values" }, { "msg_contents": "At 08:15 16.6.2003, Shridhar Daithankar wrote:\n> > Total runtime: 50.20 msec\n> > Used to be between 5800 and 6741 msec before this patch!\n>\n>Good that the patch works for you. But as I see there is an improvement in\n>plan. Not nitpicking but what does actual performance difference between \n>system\n>before patch and after patch?\n\nA lot since this is query to get list of last active threads sorted by last \nmodified date. With times less than 300ms you mostly do not notice slower \nquery as there could be other factors to affect the speed like network \ndelays and such. But people on fast links will notice that it takes a bit \nlong to display list of threads - especially when the system is using PHP \naccelerator and compression.\n\nSo this really means major increase of performance for real situation - \nforum with over 85 000 messages where you get rid of full scan and 2 full \nsorts to display list of msgs which happens a lot. 
You can always use some \nquery/page caching things, but then people start to post duplicates, \nbecause they think the message did not make it into the database.\n\nTomaz \n\n\n", "msg_date": "Mon, 16 Jun 2003 09:08:49 +0200", "msg_from": "Tomaz Borstnar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: any way to use indexscan to get last X values" }, { "msg_contents": "On 16 Jun 2003 at 9:08, Tomaz Borstnar wrote:\n> So this really means major increase of performance for real situation - \n> forum with over 85 000 messages where you get rid of full scan and 2 full \n> sorts to display list of msgs which happens a lot. You can always use some \n> query/page caching things, but then people start to post duplicates, \n> because they think the message did not make it into the database.\n\nOTOH, I was thinking of your original problem. If you could have two identical \ntables, one to store incoming posts and other to store approved posts, that \nshould be lot more simpler.\n\nOf course you need to vacuum much more if you are deleting from in queue but \nthe kind of database you are handling, me need not tell you about vacuum..\n\nJust a though..\n\n\nBye\n Shridhar\n\n--\nDrew's Law of Highway Biology:\tThe first bug to hit a clean windshield lands \ndirectly in front\tof your eyes.\n\n", "msg_date": "Mon, 16 Jun 2003 12:52:00 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: any way to use indexscan to get last X values" } ]
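[Editor's sketch: recap of the fix discussed in the thread above. The two ingredients are a two-column index and a GROUP BY written in the same column order and direction as the ORDER BY; the index name is the one visible in Tomaz's EXPLAIN ANALYZE output and the query is his original one. The single-sort, fast-start behaviour assumes a server carrying Tom's patch (CVS tip, or the 7.3.3 backport posted in the thread).]

    CREATE INDEX tjavendan_modstamp_thrd ON tjavendan (modifystamp, thread);

    -- GROUP BY and ORDER BY name the same leading columns, so one backward
    -- index scan satisfies both and the LIMIT stops after 40 groups.
    SELECT thread, modifystamp, count(id) AS tcount,
           abstime(modifystamp) AS latest, max(id) AS maxid
      FROM tjavendan
     WHERE approved = 'Y'
     GROUP BY modifystamp, thread
     ORDER BY modifystamp DESC, thread DESC
     LIMIT 40;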
[ { "msg_contents": "I have the following index:\nstreet_range__street_locality_high_low_v btree (street_name_id,\n locality_id, addr_high_v, addr_low_v) WHERE (addr_high_v IS NOT NULL)\n\nThe query has a where clause like this:\n FROM street_range s, input i\n WHERE 1=1\n AND i.address_v IS NOT NULL\n\n AND s.locality_id = i.locality_id\n AND s.street_name_id = i.street_name_id\n\n AND s.addr_low_v <= i.address_v\n AND s.addr_high_v >= i.address_v\n\nAs-is, it won't use the index. i.address_v IS NOT NULL AND s.addr_high_v\n >= i.address_v should mandate that s.addr_high_v must be not-null, if\nI'm remembering how nulls work correctly. (Actually, having any kind of\ncomparison on s.addr_high_v should mandate NOT NULL since NULL != NULL,\nright?) Therefore the optimizer should be able to deduce that it can use\nthe index.\n\nAdding AND s.addr_high_v IS NOT NULL to the where clause makes\neverything work fine, so there is a work-around. Just seems like a minor\nitem to add to the TODO.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 16 Jun 2003 00:31:18 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Partial index where clause not filtering through" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> As-is, it won't use the index. i.address_v IS NOT NULL AND s.addr_high_v\n> >= i.address_v should mandate that s.addr_high_v must be not-null,\n\nActually, if the >= operator is strict then it implies both NOT NULL\nconditions. But I am not excited about putting some kind of theorem\nprover into the partial-index logic. That is a recipe for chewing up\nhuge numbers of cycles trying (and, likely, failing) to prove that\na partial index is safe to use with the current query.\n\nInference rules that are limited to strict operators and NOT NULL\nclauses wouldn't cost as much as a general theorem prover, but they'd\nnot find useful improvements as often, either. So the question is\nstill whether the game is worth the candle. How often do you think\nthis would win, and is that worth the planner cycles expended on every\nquery to find out if it wins?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jun 2003 01:43:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index where clause not filtering through " }, { "msg_contents": "On Mon, Jun 16, 2003 at 01:43:34AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > As-is, it won't use the index. i.address_v IS NOT NULL AND s.addr_high_v\n> > >= i.address_v should mandate that s.addr_high_v must be not-null,\n> \n> Actually, if the >= operator is strict then it implies both NOT NULL\n> conditions. But I am not excited about putting some kind of theorem\n> prover into the partial-index logic. That is a recipe for chewing up\n> huge numbers of cycles trying (and, likely, failing) to prove that\n> a partial index is safe to use with the current query.\n> \n> Inference rules that are limited to strict operators and NOT NULL\n> clauses wouldn't cost as much as a general theorem prover, but they'd\n> not find useful improvements as often, either. So the question is\n> still whether the game is worth the candle. 
How often do you think\n> this would win, and is that worth the planner cycles expended on every\n> query to find out if it wins?\n \nWell, it would only need to make the checks if the table had partial\nindexes. Even then, it probably makes sense to only do the check if\nother query planning steps decide it would be useful to use the partial\nindex. So that means that for a lot of general use cases, performance\nwon't be impacted.\n\nWhen you get to the cases that would be impacted, the planner should\nprobably look for key clauses first; so if you were worried about\nplanning time, you would put an explicit clause in the query (I'm in the\nhabit of doing this for joins when joining three tables on the same\nkey... FROM a, b, c WHERE a.f1=b.f1 and b.f1=c.f1 and a.f1=c.f1. I would\nhope the planner would figure out that a.f1 must = c.f1, but some\ndon't). In many cases, planning time isn't a big deal; either the query\nis run often enough that it should stay in the plan cache (pgsql does\ncache plans, right?), or it's run infrequently enough that it's not a\nbig deal.\n\nOf course, this might extend well beyond just partial indexes, as my a,\nb, c example shows.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 16 Jun 2003 01:24:56 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partial index where clause not filtering through" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Well, it would only need to make the checks if the table had partial\n> indexes. Even then, it probably makes sense to only do the check if\n> other query planning steps decide it would be useful to use the partial\n> index.\n\nYou have that backwards. Planning is bottom-up, so we have to determine\nthe relevant indexes *first*. Accordingly, a partial index is a\nperformance drag on every query that uses its table, as we check to\nsee if the partial index qual is satisfied by the query's WHERE clause.\nThat's why I don't want it to be any slower than it is ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jun 2003 10:11:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index where clause not filtering through " }, { "msg_contents": "On Mon, Jun 16, 2003 at 10:11:00AM -0400, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Well, it would only need to make the checks if the table had partial\n> > indexes. Even then, it probably makes sense to only do the check if\n> > other query planning steps decide it would be useful to use the partial\n> > index.\n> \n> You have that backwards. Planning is bottom-up, so we have to determine\n> the relevant indexes *first*. Accordingly, a partial index is a\n> performance drag on every query that uses its table, as we check to\n> see if the partial index qual is satisfied by the query's WHERE clause.\n> That's why I don't want it to be any slower than it is ...\n \nWell, could it assume the index was valid until we got to the point\nwhere we had to decide what index to use? In other words, don't do the\ntest unless the index appears to be the most attractive one. 
Also, as I\nmentioned, if query parsing performance is that important, you can\nexplicitly add whatever clause will show the planner that the index is\nvalid.\n\nAlso, I just read that there's no statement plan caching, which makes me\na bit confused by this todo:\n\nFlush cached query plans when their underlying catalog data changes\n\nDoes that only apply to pl/pgsql? Are there plans to add a statement\ncache?\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 16 Jun 2003 17:32:09 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Partial index where clause not filtering through" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Also, I just read that there's no statement plan caching, which makes me\n> a bit confused by this todo:\n> Flush cached query plans when their underlying catalog data changes\n> Does that only apply to pl/pgsql?\n\nThat and PREPARE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jun 2003 18:39:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partial index where clause not filtering through " } ]
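[Editor's sketch: the partial index and the workaround Jim describes, written out. The table, column, and index names are the ones from his message; the select list is illustrative. The planner of this era only uses a partial index when the query's WHERE clause directly implies the index predicate, so the IS NOT NULL condition has to be restated even though the >= comparison on a strict operator already guarantees it.]

    CREATE INDEX street_range__street_locality_high_low_v
        ON street_range (street_name_id, locality_id, addr_high_v, addr_low_v)
        WHERE addr_high_v IS NOT NULL;

    SELECT s.*
      FROM street_range s, input i
     WHERE i.address_v IS NOT NULL
       AND s.locality_id = i.locality_id
       AND s.street_name_id = i.street_name_id
       AND s.addr_low_v <= i.address_v
       AND s.addr_high_v >= i.address_v
       AND s.addr_high_v IS NOT NULL;  -- logically redundant, but lets the planner match the index predicate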
[ { "msg_contents": "\n\nPostgreSQL Version: 7.2.3\nOS : Red Hat 7.3 with Kernel 2.4.18-5 and SGI_XFS\n\nI currently have two processes which create several persistent\nconnections to the database. One process primarily does inserts and the\nother primarily does selects. Both processes run 24/7.\n\nMy problem is that the memory used by the connections appears to grow\nover time, especially when the amount of data entering the system is\nincreased. The connections sometimes take up wards of 450 MB of memory\ncausing other applications on the system to swap.\n\nIs there anyway to limit the amount of memory used by a given connection\nor is there something I may be doing that is requiring the connection to\nneed more memory?\n\n-Dawn\n\n", "msg_date": "16 Jun 2003 08:32:17 +0000", "msg_from": "Dawn Hollingsworth <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Connections Requiring Large Amounts of Memory" }, { "msg_contents": "\nWe have just recently hired a database consultant familiar with Postgres\nand just on his cursory glance we are not doing anything really crazy.\n\nThere are two things which might be considered off the beaten path\nthough:\n\n1. We have tables that have over 500 columns which we continually insert\ninto and select from.\n\n2. Our stored procedures take more than 16 parameters so in the file\nconfig.h the value INDEX_MAX_KEYS was increased to 100.\n\n-Dawn\n\nOn Mon, 2003-06-16 at 20:45, Tom Lane wrote:\n> Dawn Hollingsworth <[email protected]> writes:\n> > PostgreSQL Version: 7.2.3\n> \n> > My problem is that the memory used by the connections appears to grow\n> > over time, especially when the amount of data entering the system is\n> > increased.\n> \n> We have fixed memory-leak problems in the past, and I wouldn't be\n> surprised if some remain, but you'll have to give a lot more detail\n> about what you're doing if you want help. A leak that persists across\n> transaction boundaries is fairly surprising --- I think I can safely\n> say that there are none in the normal code paths. I'm guessing you must\n> be using some off-the-beaten-path feature.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n\n", "msg_date": "16 Jun 2003 08:57:35 +0000", "msg_from": "Dawn Hollingsworth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory" }, { "msg_contents": "Dawn Hollingsworth <[email protected]> writes:\n> PostgreSQL Version: 7.2.3\n\n> My problem is that the memory used by the connections appears to grow\n> over time, especially when the amount of data entering the system is\n> increased.\n\nWe have fixed memory-leak problems in the past, and I wouldn't be\nsurprised if some remain, but you'll have to give a lot more detail\nabout what you're doing if you want help. A leak that persists across\ntransaction boundaries is fairly surprising --- I think I can safely\nsay that there are none in the normal code paths. I'm guessing you must\nbe using some off-the-beaten-path feature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jun 2003 16:45:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory " }, { "msg_contents": "Dawn Hollingsworth <[email protected]> writes:\n> There are two things which might be considered off the beaten path\n> though:\n> 1. 
We have tables that have over 500 columns which we continually insert\n> into and select from.\n> 2. Our stored procedures take more than 16 parameters so in the file\n> config.h the value INDEX_MAX_KEYS was increased to 100.\n\nNeither of those raises a red flag with me.\n\nWhat would be useful to try to narrow things down is to look at the\noutput of \"MemoryContextStats(TopMemoryContext)\" in a backend that's\ngrown to a large size. This is a fairly primitive routine\nunfortunately; there is no built-in way to invoke it other than by\ncalling it manually with a debugger, and it is only bright enough\nto write to stderr, not syslog. If you have stderr going somewhere\nuseful (not /dev/null) and you built with debugging symbols, then you\ncould attach to a running backend right now with gdb and get some useful\ninfo. If you don't have debugging symbols then you'll need to either\nrebuild with 'em, or create some other way to call the function.\n(There is a bit of stub code marked #ifdef SHOW_MEMORY_STATS in\npostgres.c that might be worth enabling, but I think it's only a sketch\nand won't compile as-is, since I don't see a ShowStats variable\nanywhere.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jun 2003 17:34:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory " }, { "msg_contents": "I installed postgres with debug compiled in and ran the same tests.\n\nI attached gdb to a connection using just over 400MB( according to top)\nand ran \"MemoryContextStats(TopMemoryContext)\"\n\nHere's the output:\n\nTopMemoryContext: 49176 total in 6 blocks; 16272 free (44 chunks); 32904\nused\nTopTransactionContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16\nused\nTransactionCommandContext: 8192 total in 1 blocks; 8176 free (0 chunks);\n16 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 3072 total in 2 blocks; 864 free (0 chunks); 2208 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 3072 total in 2 blocks; 720 free (0 chunks); 2352 used\nSPI Plan: 1024 total in 1 blocks; 184 free (0 chunks); 840 used\nSPI Plan: 261120 total in 8 blocks; 20416 free (0 chunks); 240704 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 261120 total in 8 blocks; 18456 free (0 chunks); 242664 used\nSPI Plan: 7168 total in 3 blocks; 4016 free (5 chunks); 3152 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 31744 total in 5 blocks; 15024 free (0 chunks); 16720 used\nSPI Plan: 1024 total in 1 blocks; 184 free (0 chunks); 840 used\nSPI Plan: 523264 total in 9 blocks; 80504 free (0 chunks); 442760 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 523264 total in 9 blocks; 79992 free (0 chunks); 443272 used\nSPI Plan: 7168 total in 3 blocks; 4016 free (5 chunks); 3152 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 7168 total in 3 blocks; 1816 free (0 chunks); 5352 used\nSPI Plan: 1024 total in 1 blocks; 184 free (0 chunks); 840 
used\nSPI Plan: 261120 total in 8 blocks; 130824 free (3 chunks); 130296 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 261120 total in 8 blocks; 130032 free (0 chunks); 131088 used\nSPI Plan: 7168 total in 3 blocks; 4016 free (5 chunks); 3152 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 130048 total in 7 blocks; 36512 free (0 chunks); 93536 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 1024 total in 1 blocks; 184 free (0 chunks); 840 used\nSPI Plan: 130048 total in 7 blocks; 37976 free (0 chunks); 92072 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 130048 total in 7 blocks; 34688 free (0 chunks); 95360 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 3072 total in 2 blocks; 864 free (0 chunks); 2208 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 3072 total in 2 blocks; 536 free (0 chunks); 2536 used\nSPI Plan: 7168 total in 3 blocks; 4016 free (5 chunks); 3152 used\nSPI Plan: 7168 total in 3 blocks; 4016 free (5 chunks); 3152 used\nSPI Plan: 7168 total in 3 blocks; 4016 free (5 chunks); 3152 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 31744 total in 5 blocks; 15024 free (0 chunks); 16720 used\nSPI Plan: 1024 total in 1 blocks; 184 free (0 chunks); 840 used\nSPI Plan: 130048 total in 7 blocks; 26120 free (0 chunks); 103928 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 130048 total in 7 blocks; 24232 free (0 chunks); 105816 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 3072 total in 2 blocks; 128 free (0 chunks); 2944 used\nSPI Plan: 7168 total in 3 blocks; 4016 free (5 chunks); 3152 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 7168 total in 3 blocks; 1816 free (0 chunks); 5352 used\nSPI Plan: 7168 total in 3 blocks; 1816 free (0 chunks); 5352 used\nSPI Plan: 7168 total in 3 blocks; 1816 free (0 chunks); 5352 used\nSPI Plan: 1024 total in 1 blocks; 184 free (0 chunks); 840 used\nSPI Plan: 64512 total in 6 blocks; 24688 free (0 chunks); 39824 used\nSPI Plan: 64512 total in 6 blocks; 23792 free (0 chunks); 40720 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 64512 total in 6 blocks; 23464 free (0 chunks); 41048 used\nSPI Plan: 31744 total in 5 blocks; 10088 free (0 chunks); 21656 used\nSPI Plan: 7168 total in 3 blocks; 3904 free (4 chunks); 3264 used\nSPI Plan: 31744 total in 5 blocks; 8576 free (0 chunks); 23168 used\nSPI Plan: 7168 total in 3 blocks; 4016 free (5 chunks); 3152 used\nSPI Plan: 3096 total in 2 blocks; 8 free (0 chunks); 3088 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nSPI Plan: 1024 total in 1 blocks; 184 free (0 chunks); 840 used\nSPI Plan: 31744 total in 5 blocks; 6672 free (0 chunks); 25072 used\nSPI Plan: 3072 total in 2 blocks; 1840 free (0 chunks); 1232 used\nSPI Plan: 1024 total in 1 blocks; 256 free (0 chunks); 768 used\nQueryContext: 24576 total in 2 blocks; 15304 free (56 chunks); 9272 used\nDeferredTriggerSession: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nPortalMemory: 8192 total in 1 
blocks; 8176 free (0 chunks); 16 used\nCacheMemoryContext: 2108952 total in 10 blocks; 1070136 free (3338\nchunks); 1038816 used\nbss_pkey: 1024 total in 1 blocks; 680 free (0 chunks); 344 used\nstation_epoch_sensor_10_idx: 1024 total in 1 blocks; 680 free (0\nchunks); 344 used\nstation_epoch_sensor_9_idx: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\nstation_epoch_sensor_8_idx: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\nstation_epoch_sensor_7_idx: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\nstation_epoch_sensor_6_idx: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\nstation_epoch_sensor_5_idx: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\nstation_epoch_sensor_4_idx: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\nstation_epoch_sensor_3_idx: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\nstation_epoch_sensor_2_idx: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\nstation_epoch_sensor_1_idx: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\nstation_epoch_bss_id_idx: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\nstation_epochint_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\nstation_epoch_pkey: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\nstation_sensor_10_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_sensor_9_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_sensor_8_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_sensor_7_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_sensor_6_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_sensor_5_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_sensor_4_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_sensor_3_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_sensor_2_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_sensor_1_idx: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_lastseenint_idx: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\nstation_lastseen_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\nstation_firstseenint_idx: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\nstation_firstseen_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632\nused\nstation_pkey_idx: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nstation_cfg_view_pkey: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nbss_cfg_view_pkey: 1024 total in 1 blocks; 680 free (0 chunks); 344 used\nsensor_cfg_view_pk: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\nstation_epoch_sum_pk_idx: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_index_indrelid_index: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\npg_relcheck_rcrelid_index: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\npg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 392 free (0\nchunks); 632 used\npg_shadow_usesysid_index: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\npg_trigger_tgrelid_index: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\npg_language_oid_index: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\npg_proc_oid_index: 1024 total in 1 blocks; 680 free (0 chunks); 344 used\npg_aggregate_name_type_index: 1024 total in 1 blocks; 392 free (0\nchunks); 632 used\npg_type_oid_index: 1024 total in 1 blocks; 680 free (0 chunks); 344 used\npg_proc_proname_narg_type_index: 1024 total in 1 blocks; 392 free 
(0\nchunks); 632 used\npg_amop_opc_strategy_index: 1024 total in 1 blocks; 392 free (0 chunks);\n632 used\npg_operator_oid_index: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\npg_amproc_opc_procnum_index: 1024 total in 1 blocks; 392 free (0\nchunks); 632 used\npg_index_indexrelid_index: 1024 total in 1 blocks; 680 free (0 chunks);\n344 used\npg_type_typname_index: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\npg_class_oid_index: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\npg_class_relname_index: 1024 total in 1 blocks; 680 free (0 chunks); 344\nused\npg_attribute_relid_attnum_index: 1024 total in 1 blocks; 392 free (0\nchunks); 632 used\nMdSmgr: 8192 total in 1 blocks; 4072 free (1 chunks); 4120 used\nDynaHash: 8192 total in 1 blocks; 6944 free (0 chunks); 1248 used\nDynaHashTable: 8192 total in 1 blocks; 5080 free (0 chunks); 3112 used\nDynaHashTable: 42008 total in 2 blocks; 6112 free (0 chunks); 35896 used\nDynaHashTable: 8192 total in 1 blocks; 6112 free (0 chunks); 2080 used\nDynaHashTable: 8192 total in 1 blocks; 3000 free (0 chunks); 5192 used\nDynaHashTable: 8192 total in 1 blocks; 3000 free (0 chunks); 5192 used\nDynaHashTable: 24576 total in 2 blocks; 13224 free (4 chunks); 11352\nused\nDynaHashTable: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\nDynaHashTable: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\nDynaHashTable: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\nDynaHashTable: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\nDynaHashTable: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\nErrorContext: 8192 total in 1 blocks; 8176 free (1 chunks); 16 used\n\nIs there any other information I could provide that would be useful? I'm\ngoing to try to enable SHOW_MEMORY_STATS next.\n\n-Dawn \n\nOn Mon, 2003-06-16 at 21:34, Tom Lane wrote:\n\n\n> What would be useful to try to narrow things down is to look at the\n> output of \"MemoryContextStats(TopMemoryContext)\" in a backend that's\n> grown to a large size. This is a fairly primitive routine\n> unfortunately; there is no built-in way to invoke it other than by\n> calling it manually with a debugger, and it is only bright enough\n> to write to stderr, not syslog. If you have stderr going somewhere\n> useful (not /dev/null) and you built with debugging symbols, then you\n> could attach to a running backend right now with gdb and get some useful\n> info. 
If you don't have debugging symbols then you'll need to either\n> rebuild with 'em, or create some other way to call the function.\n> (There is a bit of stub code marked #ifdef SHOW_MEMORY_STATS in\n> postgres.c that might be worth enabling, but I think it's only a sketch\n> and won't compile as-is, since I don't see a ShowStats variable\n> anywhere.)\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster", "msg_date": "17 Jun 2003 06:46:48 +0000", "msg_from": "Dawn Hollingsworth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory" }, { "msg_contents": "The database is used to store information for a network management\napplication. Almost all the Primary Keys are MACADDR or\nMACADDR,TIMSTAMPTZ and the Foreign Keys are almost always on one MACADDR\ncolumn with \"ON UPDATE CASCADE ON DELETE CASCADE\". It's not very\ncomplicated. I have not written any triggers of my own.\n\nThe connection I was looking at only does inserts and updates, no\ndeletes. All database access is made through stored procedures using\nplpgsql. The stored procedures all work like:\ntable1( id MACADDR, ... Primary Key(id) )\ntable2( id MACADDR, mytime TIMESTAMPTZ, .... Primary Key(id, mytime),\nFOREIGN KEY(id) REFERENCES table1 ON UPDATE CASCADE ON DELETE CASCADE)\n\nUpdate table1\nif update row count = 0 then\n insert into table1\nend if\n\ninsert into table 2\n\nI'm not starting any of my own transactions and I'm not calling stored\nprocedures from withing stored procedures. The stored procedures do have\nlarge parameters lists, up to 100. The tables are from 300 to 500\ncolumns. 
90% of the columns are either INT4 or INT8. Some of these\ntables are inherited. Could that be causing problems?\n\n\n- Dawn\n\n> Hmm. This only seems to account for about 5 meg of space, which means\n> either that lots of space is being used and released, or that the leak\n> is coming from direct malloc calls rather than palloc. I doubt the\n> latter though; we don't use too many direct malloc calls.\n> \n> On the former theory, could it be something like updating a large\n> number of tuples in one transaction in a table with foreign keys?\n> The pending-triggers list could have swelled up and then gone away\n> again.\n> \n> The large number of SPI Plan contexts seems a tad fishy, and even more\n> so the fact that some of them are rather large. They still only account\n> for a couple of meg, so they aren't directly the problem, but perhaps\n> they are related to the problem. I presume these came from either\n> foreign-key triggers or something you've written in PL functions. Can\n> you tell us more about what you use in that line?\n> \n> \t\t\tregards, tom lane", "msg_date": "17 Jun 2003 09:42:07 +0000", "msg_from": "Dawn Hollingsworth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory" }, { "msg_contents": "Each stored procedure only updates one row and inserts one row. \n\nI just connected the user interface to the database. 
It only does\nselects on startup. It's connection jumped to a memory usage of 256M. \nIt's not getting any larger but it's not getting any smaller either.\n\nI'm going to compile postgres with the SHOW_MEMORY_STATS. I'm assuming I\ncan just set ShowStats equal to 1. I'll also pare down the application\nto only use one of the stored procedures for less noise and maybe I can\ntrack where memory might be going. And in the meantime I'll get a test\ngoing with Postgres 7.3 to see if I get the same behavior.\n\nAny other suggestions?\n\n-Dawn\n\nOn Tue, 2003-06-17 at 22:03, Tom Lane wrote:\n\n\n> The only theory I can come up with is that the deferred trigger list is\n> getting out of hand. Since you have foreign keys in all the tables,\n> each insert or update is going to add a trigger event to the list of\n> stuff to check at commit. The event entries aren't real large but they\n> could add up if you insert or update a lot of stuff in a single\n> transaction. How many rows do you process per transaction?\n> \n> \t\t\tregards, tom lane", "msg_date": "17 Jun 2003 11:03:28 +0000", "msg_from": "Dawn Hollingsworth <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory" }, { "msg_contents": "Dawn Hollingsworth <[email protected]> writes:\n> I attached gdb to a connection using just over 400MB( according to top)\n> and ran \"MemoryContextStats(TopMemoryContext)\"\n\nHmm. This only seems to account for about 5 meg of space, which means\neither that lots of space is being used and released, or that the leak\nis coming from direct malloc calls rather than palloc. I doubt the\nlatter though; we don't use too many direct malloc calls.\n\nOn the former theory, could it be something like updating a large\nnumber of tuples in one transaction in a table with foreign keys?\nThe pending-triggers list could have swelled up and then gone away\nagain.\n\nThe large number of SPI Plan contexts seems a tad fishy, and even more\nso the fact that some of them are rather large. They still only account\nfor a couple of meg, so they aren't directly the problem, but perhaps\nthey are related to the problem. I presume these came from either\nforeign-key triggers or something you've written in PL functions. 
Can\nyou tell us more about what you use in that line?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jun 2003 15:38:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory " }, { "msg_contents": "Dawn Hollingsworth <[email protected]> writes:\n> The database is used to store information for a network management\n> application. Almost all the Primary Keys are MACADDR or\n> MACADDR,TIMSTAMPTZ and the Foreign Keys are almost always on one MACADDR\n> column with \"ON UPDATE CASCADE ON DELETE CASCADE\". It's not very\n> complicated. I have not written any triggers of my own.\n\n> The connection I was looking at only does inserts and updates, no\n> deletes. All database access is made through stored procedures using\n> plpgsql. The stored procedures all work like:\n> table1( id MACADDR, ... Primary Key(id) )\n> table2( id MACADDR, mytime TIMESTAMPTZ, .... Primary Key(id, mytime),\n> FOREIGN KEY(id) REFERENCES table1 ON UPDATE CASCADE ON DELETE CASCADE)\n\n> Update table1\n> if update row count = 0 then\n> insert into table1\n> end if\n\n> insert into table 2\n\n> I'm not starting any of my own transactions and I'm not calling stored\n> procedures from withing stored procedures. The stored procedures do have\n> large parameters lists, up to 100. The tables are from 300 to 500\n> columns. 90% of the columns are either INT4 or INT8. Some of these\n> tables are inherited. Could that be causing problems?\n\nThe only theory I can come up with is that the deferred trigger list is\ngetting out of hand. Since you have foreign keys in all the tables,\neach insert or update is going to add a trigger event to the list of\nstuff to check at commit. The event entries aren't real large but they\ncould add up if you insert or update a lot of stuff in a single\ntransaction. How many rows do you process per transaction?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jun 2003 18:03:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory " }, { "msg_contents": "Dawn Hollingsworth <[email protected]> writes:\n> I just connected the user interface to the database. It only does\n> selects on startup. It's connection jumped to a memory usage of 256M. \n> It's not getting any larger but it's not getting any smaller either.\n\nUm, are you sure that's actual memory usage? On some platforms \"top\"\nseems to count the Postgres shared memory block as part of the address\nspace of each backend. How big is your shared memory block? (ipcs may\nhelp here)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jun 2003 19:21:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory " }, { "msg_contents": "----- Original Message ----- \nFrom: \"Dawn Hollingsworth\" <[email protected]>\nSent: Tuesday, June 17, 2003 11:42 AM\n\n\n> I'm not starting any of my own transactions and I'm not calling stored\n> procedures from withing stored procedures. The stored procedures do have\n> large parameters lists, up to 100. The tables are from 300 to 500\n\nGeez! I don't think it'll help you find the memory leak (if any), but\ncouldn't you normalize the tables to smaller ones? That may be a pain when\nupdating (views and rules), but I think it'd worth in resources (time and\nmemory, but maybe not disk space). 
I wonder what is the maximum number of\nupdated cols and the minimum correlation between their semantics in a\nsingle transaction (i.e. one func call), since there are \"only\" 100 params\nfor a proc.\n\n> columns. 90% of the columns are either INT4 or INT8. Some of these\n> tables are inherited. Could that be causing problems?\n\nHuh. It's still 30-50 columns (a size of a fairly large table for me) of\nother types :)\n\nG.\n------------------------------- cut here -------------------------------\n\n", "msg_date": "Wed, 18 Jun 2003 09:11:25 +0200", "msg_from": "\"SZŰCS Gábor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Connections Requiring Large Amounts of Memory" } ]
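A minimal plpgsql sketch of the update-then-insert pattern Dawn describes, purely for illustration: table1, table2, id and mytime come from her description, while the function name and columns such as last_seen are invented here, and GET DIAGNOSTICS is used to read the row count of the UPDATE, which is what "if update row count = 0" amounts to.

CREATE FUNCTION upsert_device(macaddr, timestamptz) RETURNS integer AS '
DECLARE
    p_id   ALIAS FOR $1;
    p_seen ALIAS FOR $2;
    n      integer;
BEGIN
    -- try the update first
    UPDATE table1 SET last_seen = p_seen WHERE id = p_id;
    GET DIAGNOSTICS n = ROW_COUNT;
    IF n = 0 THEN
        -- no existing row, so insert one instead
        INSERT INTO table1 (id, last_seen) VALUES (p_id, p_seen);
    END IF;
    -- always record the per-timestamp detail row
    INSERT INTO table2 (id, mytime) VALUES (p_id, p_seen);
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

Called as a single SELECT upsert_device(...), each invocation runs in its own transaction, so every update or insert also queues one of the foreign-key trigger events Tom describes.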
[ { "msg_contents": "Hello!\n\n\tHow much does planner take into consideration index size? Can one help \nplanner use indexes by having several functional indexes which should be \nsmaller instead of one bigger index which covers whole range of values per \nfield(s)?\n\nThanks in advance.\n\n\n", "msg_date": "Mon, 16 Jun 2003 19:38:19 +0200", "msg_from": "Tomaz Borstnar <[email protected]>", "msg_from_op": true, "msg_subject": "functional indexes instead of regular index on field(s)?" }, { "msg_contents": "On Mon, Jun 16, 2003 at 19:38:19 +0200,\n Tomaz Borstnar <[email protected]> wrote:\n> Hello!\n> \n> \tHow much does planner take into consideration index size? Can one \n> \thelp planner use indexes by having several functional indexes which should \n> be smaller instead of one bigger index which covers whole range of values \n> per field(s)?\n\nThis sounds more like partial indexes than functional indexes.\n", "msg_date": "Mon, 16 Jun 2003 12:58:48 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: functional indexes instead of regular index on field(s)?" } ]
[ { "msg_contents": "Hi\nRecently I was wondering about tables difficult to index. Example - \nqueries with \"ilike\" where clauses. Without additional contrib modules \nthe only way to search such tables is sequential scan (am I right?)\n\nThe point is too keep these tables as small as possible. We can do this \nby denormalizing tables. Let's say we have table \"users\" which we split \ninto 1:1 relation \"users_header\" and \"users_data\". We put searchable \ncolumns into users_header and rest of them into users_data. users_data \nhave some integer foreign key referencing to users_header.\n\nWhat do you think about it? Does the Postgres use advantages of small \ntable users_header? Sequential scan on memory cached table should speed \nup queries, the rest columns are in integer-indexed table which \nshouldn't slow it down.\n\nThese example above is ony an idea, I don't have currently any example \nfor it.\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Mon, 16 Jun 2003 21:49:40 +0200", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": true, "msg_subject": "sequential scans on few columns tables" }, { "msg_contents": "On Mon, Jun 16, 2003 at 21:49:40 +0200,\n Tomasz Myrta <[email protected]> wrote:\n> Hi\n> Recently I was wondering about tables difficult to index. Example - \n> queries with \"ilike\" where clauses. Without additional contrib modules \n> the only way to search such tables is sequential scan (am I right?)\n\nYou might be able to use a functional index depending on exactly what your\nsearch patterns are like.\n", "msg_date": "Mon, 16 Jun 2003 14:55:51 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequential scans on few columns tables" }, { "msg_contents": "Dnia 2003-06-16 21:55, Uďż˝ytkownik Bruno Wolff III napisaďż˝:\n> On Mon, Jun 16, 2003 at 21:49:40 +0200,\n> Tomasz Myrta <[email protected]> wrote:\n> \n>>Hi\n>>Recently I was wondering about tables difficult to index. Example - \n>>queries with \"ilike\" where clauses. Without additional contrib modules \n>>the only way to search such tables is sequential scan (am I right?)\n> \n> \n> You might be able to use a functional index depending on exactly what your\n> search patterns are like.\nProbably functional indexes won't be helpful to find _substrings_.\nTomasz\n\n", "msg_date": "Mon, 16 Jun 2003 22:32:32 +0200", "msg_from": "Tomasz Myrta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sequential scans on few columns tables" }, { "msg_contents": "On Mon, Jun 16, 2003 at 09:49:40PM +0200, Tomasz Myrta wrote:\n> by denormalizing tables. Let's say we have table \"users\" which we split \n> into 1:1 relation \"users_header\" and \"users_data\". We put searchable \n> columns into users_header and rest of them into users_data. users_data \n> have some integer foreign key referencing to users_header.\n> \n> What do you think about it? Does the Postgres use advantages of small \n> table users_header? Sequential scan on memory cached table should speed \n> up queries, the rest columns are in integer-indexed table which \n> shouldn't slow it down.\n \nKeep in mind that pgsql has a pretty heafty per-row overhead of 23\nbytes. If your data table has a bunch of big varchars then it might be\nworth it, otherwise it might not be.\n-- \nJim C. Nasby (aka Decibel!) [email protected]\nMember: Triangle Fraternity, Sports Car Club of America\nGive your computer some brain candy! 
www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 16 Jun 2003 17:45:27 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequential scans on few columns tables" } ]
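A sketch of the header/data split Tomasz proposes, using his table names but an invented column list, and keeping Jim's caveat in mind that every row in the second table carries its own couple dozen bytes of overhead:

-- narrow, frequently-scanned table: only the columns searched with ILIKE
CREATE TABLE users_header (
    user_id  serial PRIMARY KEY,
    login    varchar(32),
    fullname varchar(100)
);

-- wide table with everything else, joined 1:1 over an integer key
CREATE TABLE users_data (
    user_id integer PRIMARY KEY REFERENCES users_header (user_id),
    address text,
    notes   text
    -- ... the remaining, rarely-searched columns
);

-- the sequential ILIKE scan now reads only the small table; matching rows
-- are joined back through the integer primary key
SELECT h.user_id, h.fullname, d.address
  FROM users_header h
  JOIN users_data d ON d.user_id = h.user_id
 WHERE h.fullname ILIKE '%smith%';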
[ { "msg_contents": "Hi,\n\nI am researching some interesting inconsistent query timing and hope some\nof the gurus hanging out here might help me shed a light on this...\n\nThe table:\n Column | Type | Modifiers\n \n--------+--------------------------+----------------------------------------\n------------\n rid | integer | not null default\nnextval('rv2_mdata_id_seq'::text)\n pid | integer | \n owid | integer | \n ioid | integer | \n dcid | character varying | \n dsid | character varying | \n drid | integer | \n usg | integer | \n idx | character varying | \n env | integer | \n nxid | integer | \n ci | integer | \n cd | numeric(21,6) | \n cr | real | \n cts | timestamp with time zone | \n cst | character varying | \n ctx | text | \n cbl | oid | \n acl | text | \nIndexes: id_mdata_dictid,\n id_mdata_dictid_dec,\n id_mdata_dictid_int,\n id_mdata_dictid_real,\n id_mdata_dictid_string,\n id_mdata_dictid_text,\n id_mdata_dictid_timestamp,\n id_mdata_dowid,\n id_mdata_ioid,\n id_mdata_owid\nPrimary key: rv2_mdata_pkey\n\nIndex \"id_mdata_dictid_string\"\n Column | Type \n--------+-------------------\n dcid | character varying\n dsid | character varying\n drid | integer\n nxid | integer\n cst | character varying\nbtree\nIndex predicate: ((usg & 16) = 16)\n\n\n\nThe query:\nexplain analyze verbose\nselect distinct t1.owid\n from rv2_mdata t1\n where t1.dcid='ADDR' and t1.dsid='AUXDICT' and t1.drid=110 and\nt1.usg & 16 = 16\n and t1.nxid = 0\n and t1.cst ilike '%redist%'\n and t1.owid > 10\n;\n\nFor the first time run it executes in 1.5 - 2 seconds. From the second\ntime, only 10 msec are needed for the same result:\n\nUnique (cost=3.84..3.84 rows=1 width=4) (actual time=1569.36..1569.39\nrows=11 loops=1)\n -> Sort (cost=3.84..3.84 rows=1 width=4) (actual time=1569.36..1569.37\nrows=11 loops=1)\n -> Index Scan using id_mdata_dictid_string on rv2_mdata t1\n(cost=0.00..3.83 rows=1 width=4) (actual time=17.02..1569.22 rows=11 loops=1)\nTotal runtime: 1569.50 msec\n\n\nUnique (cost=3.84..3.84 rows=1 width=4) (actual time=10.51..10.53 rows=11\nloops=1)\n -> Sort (cost=3.84..3.84 rows=1 width=4) (actual time=10.51..10.51\nrows=11 loops=1)\n -> Index Scan using id_mdata_dictid_string on rv2_mdata t1\n(cost=0.00..3.83 rows=1 width=4) (actual time=0.60..10.43 rows=11 loops=1)\nTotal runtime: 10.64 msec\n\nIf any of the \"dcid\", \"dsid\", or \"drid\" constraint values are altered, the\nquery starts again at 1.5 - 2 secs, then drops to 10.5 msec again.\n\nEven after restarting PostgreSQL, the number is lower (~50 msec) than when\nrunning for the first time.\n\nI really would like to get a consistent timing here (the lower the better\nof course) since these queries will happen quite often within our\napplication, and I need a consistent and predictable timing (this being a\ncore component).\n\nThis is postgresql 7.2.1 on RH72.\n\nAny clues? Thanks for insights,\n\n\n-- \n >O Ernest E. Vogelsinger\n (\\) ICQ #13394035\n ^ http://www.vogelsinger.at/\n\n\n", "msg_date": "Tue, 17 Jun 2003 00:46:54 +0200", "msg_from": "Ernest E Vogelsinger <[email protected]>", "msg_from_op": true, "msg_subject": "Interesting incosistent query timing" }, { "msg_contents": "On Tue, 17 Jun 2003 00:46:54 +0200, Ernest E Vogelsinger\n<[email protected]> wrote:\n>For the first time run it executes in 1.5 - 2 seconds. From the second\n>time, only 10 msec are needed for the same result\n\nI'd call it inconsistent, if it were the other way round :-) I guess\nyou are seeing the effects of disk caching. 
Watch the drive LED\nduring the first run ...\n\nServus\n Manfred\n", "msg_date": "Tue, 17 Jun 2003 01:45:58 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Interesting incosistent query timing" } ]
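Manfred's point is easy to confirm from psql: run the statement twice back to back and compare the actual times. Using the query from this thread:

-- first run: index and heap pages have to be read from disk
EXPLAIN ANALYZE
SELECT DISTINCT t1.owid
  FROM rv2_mdata t1
 WHERE t1.dcid = 'ADDR' AND t1.dsid = 'AUXDICT' AND t1.drid = 110
   AND t1.usg & 16 = 16 AND t1.nxid = 0
   AND t1.cst ILIKE '%redist%' AND t1.owid > 10;

-- second run, immediately afterwards: the same pages come from the kernel
-- buffer cache and shared_buffers, so mostly CPU time remains
EXPLAIN ANALYZE
SELECT DISTINCT t1.owid
  FROM rv2_mdata t1
 WHERE t1.dcid = 'ADDR' AND t1.dsid = 'AUXDICT' AND t1.drid = 110
   AND t1.usg & 16 = 16 AND t1.nxid = 0
   AND t1.cst ILIKE '%redist%' AND t1.owid > 10;

If the second run is an order of magnitude faster, the difference was disk I/O rather than planning or execution cost.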
[ { "msg_contents": "Ernest,\n\nMy guess is that the second execution of the query is\nshorter since the data blocks are cached in memory. \nWhen you modify the data then it needs to be read again\nfrom disk which is much slower than from memory. The\nshort execution after restarting PostgreSQL seems to\nindicate that your data is cached in the Linux buffer\ncache. \n\nThe only strange thing seems to be that you have so few\nrows. Are you getting the data from a remote machine? \nHow many bytes does a single row have? Are they really\nlarge???\n\nRegards,\nNikolaus\n\nOn Tue, 17 Jun 2003 00:46:54 +0200, Ernest E\nVogelsinger wrote:\n\n> \n> Hi,\n> \n> I am researching some interesting inconsistent query\n> timing and hope some\n> of the gurus hanging out here might help me shed a\n> light on this...\n> \n> The table:\n> Column | Type | \n \n> Modifiers\n> \n>\n--------+--------------------------+----------------------------------------\n> ------------\n> rid | integer | not null default\n> nextval('rv2_mdata_id_seq'::text)\n> pid | integer | \n> owid | integer | \n> ioid | integer | \n> dcid | character varying | \n> dsid | character varying | \n> drid | integer | \n> usg | integer | \n> idx | character varying | \n> env | integer | \n> nxid | integer | \n> ci | integer | \n> cd | numeric(21,6) | \n> cr | real | \n> cts | timestamp with time zone | \n> cst | character varying | \n> ctx | text | \n> cbl | oid | \n> acl | text | \n> Indexes: id_mdata_dictid,\n> id_mdata_dictid_dec,\n> id_mdata_dictid_int,\n> id_mdata_dictid_real,\n> id_mdata_dictid_string,\n> id_mdata_dictid_text,\n> id_mdata_dictid_timestamp,\n> id_mdata_dowid,\n> id_mdata_ioid,\n> id_mdata_owid\n> Primary key: rv2_mdata_pkey\n> \n> Index \"id_mdata_dictid_string\"\n> Column | Type \n> --------+-------------------\n> dcid | character varying\n> dsid | character varying\n> drid | integer\n> nxid | integer\n> cst | character varying\n> btree\n> Index predicate: ((usg & 16) = 16)\n> \n> \n> \n> The query:\n> explain analyze verbose\n> select distinct t1.owid\n> from rv2_mdata t1\n> where t1.dcid='ADDR' and t1.dsid='AUXDICT' and\n> t1.drid=110 and\n> t1.usg & 16 = 16\n> and t1.nxid = 0\n> and t1.cst ilike '%redist%'\n> and t1.owid > 10\n> ;\n> \n> For the first time run it executes in 1.5 - 2 seconds.\n> From the second\n> time, only 10 msec are needed for the same result:\n> \n> Unique (cost=3.84..3.84 rows=1 width=4) (actual\n> time=1569.36..1569.39\n> rows=11 loops=1)\n> -> Sort (cost=3.84..3.84 rows=1 width=4) (actual\n> time=1569.36..1569.37\n> rows=11 loops=1)\n> -> Index Scan using id_mdata_dictid_string on\n> rv2_mdata t1\n> (cost=0.00..3.83 rows=1 width=4) (actual\n> time=17.02..1569.22 rows=11 loops=1)\n> Total runtime: 1569.50 msec\n> \n> \n> Unique (cost=3.84..3.84 rows=1 width=4) (actual\n> time=10.51..10.53 rows=11\n> loops=1)\n> -> Sort (cost=3.84..3.84 rows=1 width=4) (actual\n> time=10.51..10.51\n> rows=11 loops=1)\n> -> Index Scan using id_mdata_dictid_string on\n> rv2_mdata t1\n> (cost=0.00..3.83 rows=1 width=4) (actual\n> time=0.60..10.43 rows=11 loops=1)\n> Total runtime: 10.64 msec\n> \n> If any of the \"dcid\", \"dsid\", or \"drid\" constraint\n> values are altered, the\n> query starts again at 1.5 - 2 secs, then drops to 10.5\n> msec again.\n> \n> Even after restarting PostgreSQL, the number is lower\n> (~50 msec) than when\n> running for the first time.\n> \n> I really would like to get a consistent timing here\n> (the lower the better\n> of course) since these queries will happen quite often\n> within 
our\n> application, and I need a consistent and predictable\n> timing (this being a\n> core component).\n> \n> This is postgresql 7.2.1 on RH72.\n> \n> Any clues? Thanks for insights,\n> \n> \n> -- \n> >O Ernest E. Vogelsinger\n> (\\) ICQ #13394035\n> ^ http://www.vogelsinger.at/\n> \n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map\n> settings\n", "msg_date": "Mon, 16 Jun 2003 19:20:57 -0700 (PDT)", "msg_from": "\"Nikolaus Dilger\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Interesting incosistent query timing" } ]
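Nikolaus's question about row width can be answered approximately from the system catalogs, assuming the table has been VACUUM ANALYZEd recently (so relpages and reltuples are current and non-zero) and the default 8 KB block size:

-- rough average bytes per row, including per-row overhead and padding
SELECT relname,
       relpages,
       reltuples,
       (relpages * 8192) / reltuples AS approx_bytes_per_row
  FROM pg_class
 WHERE relname = 'rv2_mdata';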
[ { "msg_contents": "At 04:20 17.06.2003, Nikolaus Dilger said:\n--------------------[snip]--------------------\n>My guess is that the second execution of the query is\n>shorter since the data blocks are cached in memory. \n>When you modify the data then it needs to be read again\n>from disk which is much slower than from memory. The\n>short execution after restarting PostgreSQL seems to\n>indicate that your data is cached in the Linux buffer\n>cache. \n>\n>The only strange thing seems to be that you have so few\n>rows. Are you getting the data from a remote machine? \n>How many bytes does a single row have? Are they really\n>large???\n--------------------[snip]-------------------- \n\nWhat exactly do you mean? This table is quite filled (2.3 million rows),\nbut the query results are correct.\n\n\n-- \n >O Ernest E. Vogelsinger\n (\\) ICQ #13394035\n ^ http://www.vogelsinger.at/\n\n\n", "msg_date": "Tue, 17 Jun 2003 04:54:56 +0200", "msg_from": "Ernest E Vogelsinger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Interesting incosistent query timing" } ]
[ { "msg_contents": "approve xec5mm unsubscribe pgsql-performance [email protected]\n", "msg_date": "Tue, 17 Jun 2003 09:43:39 +0200", "msg_from": "=?iso-8859-1?Q?Jordi_Gim=E9nez?= <[email protected]>", "msg_from_op": true, "msg_subject": "approve xec5mm unsubscribe pgsql-performance [email protected]" } ]
[ { "msg_contents": "Is there a way to limit the amount of memory that postgres\nwill use?\n\n", "msg_date": "Tue, 17 Jun 2003 11:53:14 +0200", "msg_from": "Howard Oblowitz <[email protected]>", "msg_from_op": true, "msg_subject": "Limiting Postgres memory usage" }, { "msg_contents": "On 17 Jun 2003 at 11:53, Howard Oblowitz wrote:\n\n> Is there a way to limit the amount of memory that postgres\n> will use?\n\nPostgresql will use memory as specified by settings in postgresql.conf. The \nconfig file is pretty well documented in itself. Go thr. it..\n\nHTH\n\nBye\n Shridhar\n\n--\nPaprika Measure:\t2 dashes == 1smidgen\t2 smidgens == 1 pinch\t3 pinches \n== 1 soupcon\t2 soupcons == 2 much paprika\n\n", "msg_date": "Tue, 17 Jun 2003 15:55:22 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Limiting Postgres memory usage" } ]
[ { "msg_contents": "Ernest,\n\nThanks for providing the additional information that\nthe table has 2.3 million rows.\n\nSee during the first execution you spend most of the\ntime scanning the index id_mdata_dictid_string. And\nsince that one is quite large it takes 1500 msec to\nread the index from disk into memory.\n\nFor the second execution you read the large index from\nmemory. Therfore it takes only 10 msec.\n\nOnce you change the data you need to read from disk\nagain and the query takes a long time.\n\nRegards,\nNikolaus\n\n> For the first time run it executes in 1.5 - 2 seconds.\n> From the second\n> time, only 10 msec are needed for the same result:\n> \n> Unique (cost=3.84..3.84 rows=1 width=4) (actual\n> time=1569.36..1569.39\n> rows=11 loops=1)\n> -> Sort (cost=3.84..3.84 rows=1 width=4) (actual\n> time=1569.36..1569.37\n> rows=11 loops=1)\n> -> Index Scan using id_mdata_dictid_string on\n> rv2_mdata t1\n> (cost=0.00..3.83 rows=1 width=4) (actual\n> time=17.02..1569.22 rows=11 loops=1)\n> Total runtime: 1569.50 msec\n> \n> \n> Unique (cost=3.84..3.84 rows=1 width=4) (actual\n> time=10.51..10.53 rows=11\n> loops=1)\n> -> Sort (cost=3.84..3.84 rows=1 width=4) (actual\n> time=10.51..10.51\n> rows=11 loops=1)\n> -> Index Scan using id_mdata_dictid_string on\n> rv2_mdata t1\n> (cost=0.00..3.83 rows=1 width=4) (actual\n> time=0.60..10.43 rows=11 loops=1)\n> Total runtime: 10.64 msec\n\n\nOn Tue, 17 Jun 2003 04:54:56 +0200, Ernest E\nVogelsinger wrote:\n\n> \n> At 04:20 17.06.2003, Nikolaus Dilger said:\n> --------------------[snip]--------------------\n> >My guess is that the second execution of the query is\n> >shorter since the data blocks are cached in memory. \n> >When you modify the data then it needs to be read\nagain\n> >from disk which is much slower than from memory. The\n> >short execution after restarting PostgreSQL seems to\n> >indicate that your data is cached in the Linux buffer\n> >cache. \n> >\n> >The only strange thing seems to be that you have so\nfew\n> >rows. Are you getting the data from a remote\nmachine? \n> >How many bytes does a single row have? Are they\nreally\n> >large???\n> --------------------[snip]-------------------- \n> \n> What exactly do you mean? This table is quite filled\n> (2.3 million rows),\n> but the query results are correct.\n> \n> \n> -- \n> >O Ernest E. Vogelsinger\n> (\\) ICQ #13394035\n> ^ http://www.vogelsinger.at/\n> \n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n", "msg_date": "Tue, 17 Jun 2003 15:45:38 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Interesting incosistent query timing" } ]
[ { "msg_contents": "At 00:45 18.06.2003, [email protected] said:\n--------------------[snip]--------------------\n>Thanks for providing the additional information that\n>the table has 2.3 million rows.\n>\n>See during the first execution you spend most of the\n>time scanning the index id_mdata_dictid_string. And\n>since that one is quite large it takes 1500 msec to\n>read the index from disk into memory.\n>\n>For the second execution you read the large index from\n>memory. Therfore it takes only 10 msec.\n>\n>Once you change the data you need to read from disk\n>again and the query takes a long time.\n--------------------[snip]-------------------- \n\nI came to the same conclusion - I installed a cron script that performs a\nselect against that index on a regular basis (3 minutes). After that even\nthe most complex queries against this huge table go like whoosssshhh ;-)\n\nWould be interesting what one could do to _not_ have to take this basically\nclumsy approach...\n\n\n-- \n >O Ernest E. Vogelsinger\n (\\) ICQ #13394035\n ^ http://www.vogelsinger.at/\n\n\n", "msg_date": "Wed, 18 Jun 2003 01:01:09 +0200", "msg_from": "Ernest E Vogelsinger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Interesting incosistent query timing" }, { "msg_contents": "Ernest E Vogelsinger <[email protected]> writes:\n> I came to the same conclusion - I installed a cron script that performs a\n> select against that index on a regular basis (3 minutes). After that even\n> the most complex queries against this huge table go like whoosssshhh ;-)\n> Would be interesting what one could do to _not_ have to take this basically\n> clumsy approach...\n\nSeems like your kernel is falling down on the job: if those files are\nthe most heavily used ones on the machine, it should be keeping them in\ndisk cache without such prompting.\n\nIf they are not all that heavily used, then you are basically slowing\neverything else down in order to speed up these queries (because you're\nstarving everything else for disk cache). Which may be a reasonable\ntradeoff in your situation, but be aware of what you're doing.\n\nThe best compromise may be to buy more RAM ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jun 2003 20:12:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Interesting incosistent query timing " } ]
[ { "msg_contents": "The query below was running in a bit under 300ms on a version of 7.4\nfrom less than a week ago until I updated to the version from last night.\nNow it takes about 800ms using a significantly different plan.\nThe query is:\nexplain analyze\n select count(1)\n from\n (select distinct on (areaid) touched\n from crate\n order by areaid desc, touched desc)\n as current\n where touched >= localtimestamp + '10 year ago'\n group by touched >= localtimestamp + '2 year ago'\n order by touched >= localtimestamp + '2 year ago' desc;\n\nI don't have the earlier version of 7.4 around, but I get the better plan\nin 7.3.3.\n version \n------------------------------------------------------------------------\n PostgreSQL 7.4devel on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1756.33..1756.50 rows=67 width=19) (actual time=795.64..795.65 rows=2 loops=1)\n Sort Key: (touched >= (('now'::text)::timestamp(6) without time zone + '-2 years'::interval))\n -> HashAggregate (cost=1753.46..1754.30 rows=67 width=19) (actual time=795.48..795.48 rows=2 loops=1)\n -> Subquery Scan current (cost=1624.62..1737.38 rows=3216 width=19) (actual time=631.84..784.75 rows=5339 loops=1)\n Filter: (touched >= (('now'::text)::timestamp(6) without time zone + '-10 years'::interval))\n -> Unique (cost=1624.62..1705.22 rows=3216 width=19) (actual time=631.72..713.66 rows=5364 loops=1)\n -> Sort (cost=1624.62..1664.92 rows=16119 width=19) (actual time=631.72..639.77 rows=16119 loops=1)\n Sort Key: areaid, touched\n -> Seq Scan on crate (cost=0.00..498.19 rows=16119 width=19) (actual time=0.02..48.85 rows=16119 loops=1)\n Total runtime: 800.88 msec\n(10 rows)\n\n Table \"public.crate\"\n Column | Type | Modifiers \n---------+-----------------------------+------------------------\n areaid | text | not null\n gameid | text | not null\n rate | integer | not null default 5000\n frq | integer | not null default 0\n opp | integer | not null default 0\n rmp | integer | not null default 0\n trn | integer | not null default 0\n rp | text | \n gm | text | \n touched | timestamp without time zone | not null default 'now'\nIndexes:\n \"crate_pkey\" PRIMARY KEY btree (areaid, gameid),\n \"crate_game\" btree (gameid, areaid),\n \"crate_touched\" btree (areaid, touched)\nCheck Constraints:\n \"rate_nonnegative\" CHECK (rate >= 0),\n \"rate_other_interested\" CHECK ((frq > 0) OR (rate = 5000)),\n \"frq_nonnegative\" CHECK (frq >= 0),\n \"opp_nonnegative\" CHECK (opp >= 0),\n \"rmp_nonnegative\" CHECK (rmp >= 0),\n \"trn_nonnegative\" CHECK (trn >= 0)\nForeign Key Constraints:\n \"bad_areaid\" FOREIGN KEY (areaid) REFERENCES cname(areaid),\n \"bad_gameid\" FOREIGN KEY (gameid) REFERENCES games(gameid)\n\n version \n---------------------------------------------------------------------\n PostgreSQL 7.3.3 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1094.46..1094.87 rows=161 width=19) (actual time=274.17..274.18 rows=2 loops=1)\n Sort Key: (touched >= (('now'::text)::timestamp(6) without time zone + '-2 years'::interval))\n -> Aggregate (cost=1076.46..1088.55 rows=161 width=19) (actual time=263.78..274.09 rows=2 loops=1)\n -> Group 
(cost=1076.46..1084.52 rows=1612 width=19) (actual time=255.12..269.69 rows=5339 loops=1)\n -> Sort (cost=1076.46..1080.49 rows=1612 width=19) (actual time=255.11..258.09 rows=5339 loops=1)\n Sort Key: (touched >= (('now'::text)::timestamp(6) without time zone + '-2 years'::interval))\n -> Subquery Scan current (cost=0.00..990.59 rows=1612 width=19) (actual time=0.12..240.81 rows=5339 loops=1)\n Filter: (touched >= (('now'::text)::timestamp(6) without time zone + '-10 years'::interval))\n -> Unique (cost=0.00..990.59 rows=1612 width=19) (actual time=0.04..159.11 rows=5364 loops=1)\n -> Index Scan Backward using crate_touched on crate (cost=0.00..950.30 rows=16119 width=19) (actual time=0.04..82.15 rows=16119 loops=1)\n Total runtime: 275.32 msec\n(11 rows)\n\n Table \"public.crate\"\n Column | Type | Modifiers \n---------+-----------------------------+------------------------\n areaid | text | not null\n gameid | text | not null\n rate | integer | not null default 5000\n frq | integer | not null default 0\n opp | integer | not null default 0\n rmp | integer | not null default 0\n trn | integer | not null default 0\n rp | text | \n gm | text | \n touched | timestamp without time zone | not null default 'now'\nIndexes: crate_pkey primary key btree (areaid, gameid),\n crate_game btree (gameid, areaid),\n crate_touched btree (areaid, touched)\nCheck constraints: \"trn_nonnegative\" (trn >= 0)\n \"rmp_nonnegative\" (rmp >= 0)\n \"opp_nonnegative\" (opp >= 0)\n \"frq_nonnegative\" (frq >= 0)\n \"rate_other_interested\" ((frq > 0) OR (rate = 5000))\n \"rate_nonnegative\" (rate >= 0)\nForeign Key constraints: bad_gameid FOREIGN KEY (gameid) REFERENCES games(gameid) ON UPDATE NO ACTION ON DELETE NO ACTION,\n bad_areaid FOREIGN KEY (areaid) REFERENCES cname(areaid) ON UPDATE NO ACTION ON DELETE NO ACTION\n\n", "msg_date": "Wed, 18 Jun 2003 10:02:10 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": true, "msg_subject": "Recent 7.4 change slowed down a query by a factor of 3" }, { "msg_contents": "Bruno Wolff III <[email protected]> writes:\n> The query below was running in a bit under 300ms on a version of 7.4\n> from less than a week ago until I updated to the version from last night.\n> Now it takes about 800ms using a significantly different plan.\n\nSomething fishy here. Will it use the right plan if you set\nenable_seqscan off?\n\nI did\n\nbogus=# create table crate(areaid text, touched timestamp);\nCREATE TABLE\nbogus=# create index crate_touched on crate(areaid, touched);\nCREATE INDEX\n\nand then explained your query:\n\n GroupAggregate (cost=64.14..66.48 rows=67 width=40)\n -> Sort (cost=64.14..64.64 rows=200 width=40)\n Sort Key: (touched >= (('now'::text)::timestamp(6) without time zone + '-2 years'::interval))\n -> Subquery Scan current (cost=0.00..56.50 rows=200 width=40)\n Filter: (touched >= (('now'::text)::timestamp(6) without time zone + '-10 years'::interval))\n -> Unique (cost=0.00..54.50 rows=200 width=40)\n -> Index Scan Backward using crate_touched on crate (cost=0.00..52.00 rows=1000 width=40)\n\nwhich looks perfectly reasonable. Obviously, with no data or statistics\nthe estimates are not to be trusted, but it sure looks to me like CVS\ntip should still be able to generate the right plan. 
Did you do a full\n'make clean' and rebuild when you updated?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Jun 2003 11:18:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 7.4 change slowed down a query by a factor of 3 " }, { "msg_contents": "On Wed, Jun 18, 2003 at 11:18:39 -0400,\n Tom Lane <[email protected]> wrote:\n> Bruno Wolff III <[email protected]> writes:\n> > The query below was running in a bit under 300ms on a version of 7.4\n> > from less than a week ago until I updated to the version from last night.\n> > Now it takes about 800ms using a significantly different plan.\n> \n> Something fishy here. Will it use the right plan if you set\n> enable_seqscan off?\n> \n> I did\n> \n> bogus=# create table crate(areaid text, touched timestamp);\n> CREATE TABLE\n> bogus=# create index crate_touched on crate(areaid, touched);\n> CREATE INDEX\n> \n> and then explained your query:\n> \n> GroupAggregate (cost=64.14..66.48 rows=67 width=40)\n> -> Sort (cost=64.14..64.64 rows=200 width=40)\n> Sort Key: (touched >= (('now'::text)::timestamp(6) without time zone + '-2 years'::interval))\n> -> Subquery Scan current (cost=0.00..56.50 rows=200 width=40)\n> Filter: (touched >= (('now'::text)::timestamp(6) without time zone + '-10 years'::interval))\n> -> Unique (cost=0.00..54.50 rows=200 width=40)\n> -> Index Scan Backward using crate_touched on crate (cost=0.00..52.00 rows=1000 width=40)\n> \n> which looks perfectly reasonable. Obviously, with no data or statistics\n> the estimates are not to be trusted, but it sure looks to me like CVS\n> tip should still be able to generate the right plan. Did you do a full\n> 'make clean' and rebuild when you updated?\n\nI did a make distclean. I didn't do an initdb as I was able to restart\nthe database without a problem. I also tried a simpler query just doing\nthe distinct on without a where clause and the backwards index scan still\nwasn't used.\n\nI will try an initdb and then if that doesn't change things I will fetch\na new copy of the code from CVS, do another initdb and see what happens.\n", "msg_date": "Wed, 18 Jun 2003 10:43:32 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recent 7.4 change slowed down a query by a factor of 3" }, { "msg_contents": "On Wed, Jun 18, 2003 at 11:18:39 -0400,\n Tom Lane <[email protected]> wrote:\n> Bruno Wolff III <[email protected]> writes:\n> > The query below was running in a bit under 300ms on a version of 7.4\n> > from less than a week ago until I updated to the version from last night.\n> > Now it takes about 800ms using a significantly different plan.\n> \n> Something fishy here. Will it use the right plan if you set\n> enable_seqscan off?\n\nThis got it to use the backward index scan.\n", "msg_date": "Wed, 18 Jun 2003 10:45:41 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recent 7.4 change slowed down a query by a factor of 3" }, { "msg_contents": "On Wed, Jun 18, 2003 at 11:18:39 -0400,\n Tom Lane <[email protected]> wrote:\n> Bruno Wolff III <[email protected]> writes:\n> > The query below was running in a bit under 300ms on a version of 7.4\n> > from less than a week ago until I updated to the version from last night.\n> > Now it takes about 800ms using a significantly different plan.\n> \n> Something fishy here. 
Will it use the right plan if you set\n> enable_seqscan off?\n\nAfter doing an initdb I got the expected plan.\n\nThis is a static db used for dynamic web pages and the script that loads\nthe db does a vacumm analyze at the end. So the db should have had the\ninformation needed to pick the correct plan. Possibly something changed\nthat affected the information needed for planning, but the value used\nto indicate an initdb was needed wasn't changed.\n\nIn the future if I see odd stuff I will try doing an initdb before reporting\na potential problem. Thanks for your help.\n", "msg_date": "Wed, 18 Jun 2003 10:53:40 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Recent 7.4 change slowed down a query by a factor of 3" }, { "msg_contents": "Bruno Wolff III <[email protected]> writes:\n> After doing an initdb I got the expected plan.\n\nHm. I'm not sure what happened there --- I don't recall that we made\nany initdb-needing changes in the past week or so. (If we did, we\nshould have forced initdb by incrementing catversion, but sometimes\npeople forget to do that.) The only change I can think of that's\nrelated at all is that the outer query's \"group by foo order by foo desc\"\nshould now only require one sort step not two (which is probably why\nmy test went for the Sort/GroupAggregate plan not the HashAgg/Sort\nplan you showed). But that shouldn't have affected the plan for the\ninner SELECT DISTINCT query, AFAICS. Odd.\n\nProbably not worth spending time on though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Jun 2003 12:21:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Recent 7.4 change slowed down a query by a factor of 3 " } ]
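Tom's enable_seqscan suggestion is the quickest way to see both plans the optimizer is choosing between and what it estimates for each:

-- the plan the optimizer prefers by default
EXPLAIN ANALYZE
SELECT DISTINCT ON (areaid) touched
  FROM crate
 ORDER BY areaid DESC, touched DESC;

-- strongly discourage sequential scans for this session only, then compare
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT DISTINCT ON (areaid) touched
  FROM crate
 ORDER BY areaid DESC, touched DESC;

-- restore the default
SET enable_seqscan = on;

If the discouraged-seqscan plan runs faster yet is estimated as more expensive, the statistics or cost settings are what need attention.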
[ { "msg_contents": "Why would the following query take soo long to run? What does 28.12 msec represent, since the total running time is \n16801.86 ms.\n\nThe table phoneinfo has a primary key called phoneinfo_id and the table has 400 000 records.\n\nmydb=#explain analyze delete from phoneinfo where phoneinfo_id = 85723;\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------\n-----\n Index Scan using phoneinfo_pkey on phoneinfo (cost=0.00..3.81 rows=1 width=6) (actual time=27.93..27.94 rows=1 loop\ns=1)\n Index Cond: (phoneinfo_id = 85723)\n Total runtime: 28.12 msec\n(3 rows)\n\nTime: 16801.86 ms\n\nBTW, I have \\timing on.\n\n", "msg_date": "Fri, 20 Jun 2003 11:53:28 -0400", "msg_from": "Yusuf <[email protected]>", "msg_from_op": true, "msg_subject": "Deleting one record from a table taking 17s." }, { "msg_contents": "On Fri, 2003-06-20 at 15:53, Yusuf wrote:\n> Why would the following query take soo long to run? What does 28.12 msec represent, since the total running time is \n> 16801.86 ms.\n\nI'd hazard to guess that you have a whole slew of foreign keys cascading\nto delete, update, or check many rows from other tables.\n\nThose are not represented in the explains at the moment.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "20 Jun 2003 16:06:03 +0000", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deleting one record from a table taking 17s." }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n> On Fri, 2003-06-20 at 15:53, Yusuf wrote:\n>> Why would the following query take soo long to run? What does 28.12 msec =\n> represent, since the total running time is=20\n>> 16801.86 ms.\n\n> I'd hazard to guess that you have a whole slew of foreign keys cascading\n> to delete, update, or check many rows from other tables.\n\nEither that or some other AFTER trigger(s) that are taking lots of time.\nThose fire after the end of the statement, so EXPLAIN's measurement of\nruntime fails to include them.\n\nGiven that this query appears to have deleted only one row, though, you\nsure seem to have a mighty slow trigger. If it's an FK, perhaps you are\nmissing an index on the referencing column? The system doesn't force\nyou to have an index on that side of an FK, but it's generally a good\nidea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jun 2003 12:59:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deleting one record from a table taking 17s. " }, { "msg_contents": "\n\[email protected] wrote:\n> On Fri, 2003-06-20 at 15:53, Yusuf wrote:\n> \n>>Why would the following query take soo long to run? What does 28.12 msec represent, since the total running time is \n>>16801.86 ms.\n> \n> \n> I'd hazard to guess that you have a whole slew of foreign keys cascading\n> to delete, update, or check many rows from other tables.\n> \n> Those are not represented in the explains at the moment.\n> \n\nThat's what I thought at first, so I dropped the foreign key constraints. The table is referenced by 2 tables, one of \nwhich has around 200 000 records and the other has 0 records.\n\n", "msg_date": "Fri, 20 Jun 2003 13:06:42 -0400", "msg_from": "Yusuf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Deleting one record from a table taking 17s." 
}, { "msg_contents": "On Fri, 2003-06-20 at 13:06, Yusuf wrote:\n> [email protected] wrote:\n> > On Fri, 2003-06-20 at 15:53, Yusuf wrote:\n> > \n> >>Why would the following query take soo long to run? What does 28.12 msec represent, since the total running time is \n> >>16801.86 ms.\n> > \n> > \n> > I'd hazard to guess that you have a whole slew of foreign keys cascading\n> > to delete, update, or check many rows from other tables.\n> > \n> > Those are not represented in the explains at the moment.\n> > \n> \n> That's what I thought at first, so I dropped the foreign key constraints. The table is referenced by 2 tables, one of \n> which has around 200 000 records and the other has 0 records.\n\nHmm... EXPLAIN ANALYZE your select again, but join both of those\nreferenced tables to the appropriate columns.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "20 Jun 2003 18:04:56 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Deleting one record from a table taking 17s." } ]
[ { "msg_contents": "I'd like to get some feedback on my setup to see if I can optimize my\ndatabase performance. My application has two separate applications:\n\nThe first application connects to websites and records the statistics in the\ndatabase. Websites are monitored every 5 or 10 minutes (depends on client),\nthere are 900 monitors which comes out to 7,800 monitorings per hour. The\nmonitor table has columns \"nextdate\" and \"status\" which are updated with\nevery monitoring, and also a row is inserted into the status table and the\nstatus item table. For my performance testing (we're just about to go live)\nI've loaded the database with a month of data (we don't plan to keep data\nlonger than 1 month). So my status table has 6 million records and my\nstatus item table has 6 million records as well. One key is that the system\nis multithreaded so up to 32 processes are accessing the database at the\nsame time, updating the \"nextdate\" before the monitoring and inserting the\nstatus and status item records after. There is a serious performance\nconstraint here because unlike a webserver, this application cannot slow\ndown. If it slows down, we won't be able to monitor our sites at 5 minute\nintervals which will make our customers unhappy.\n\nThe second application is a web app (tomcat) which lets customers check\ntheir status. Both of these applications are deployed on the same server, a\n4 CPU (Xeon) with 1.5 gigs of RAM. The OS (RedHat Linux 7.3) and servers\nare running on 18gig 10,000 RPM SCSI disk that is mirrored to a 2nd disk.\nThe database data directory is on a separate 36 gig 10,000 RPM SCSI disk\n(we're trying to buy a 2nd disk to mirror it). I'm using Postgres 7.3.2.\n\nIssue #1 - Vacuum => Overall the system runs pretty well and seems stable.\nLast night I did a \"vacuum full analyze\" and then ran my app overnight and\nfirst thing in the morning I did a \"vacuum analyze\", which took 35 minutes.\nI'm not sure if this is normal for a database this size (there are 15,000\nupdates per hour). During the vacuum my application does slow down quite a\nbit and afterwards is slow speeds back up. I've attached the vacuum output\nto this mail. I'm using Java Data Objects (JDO) so if table/column names\nlook weird it's because the schema is automatically generated.\n\nIssue #2 - postgres.conf => I'd love to get some feedback on these settings.\nI've read the archives and no one seems to agree I know, but with the above\ndescription of my app I hope someone can at least point me in the right\ndirection:\n\nmax_connections = 200\n\n#\n# Shared Memory Size\n#\nshared_buffers = 3072 # min max_connections*2 or 16, 8KB each\n#max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\n#max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\n#max_locks_per_transaction = 64 # min 10\n#wal_buffers = 8 # min 4, typically 8KB each\n\n#\n# Non-shared Memory Sizes\n#\nsort_mem = 8192 # min 64, size in KB\nvacuum_mem = 24576 # min 1024, size in KB\n\nThe rest are left uncommented (using the defaults).\n\nIssue #3 - server hardware =>\n\n- Is there anything I can do with the hardware to increase performance?\n\n- Should I increase the ram to 2 gigs? top shows that it is using the swap\na bit (about 100k only).\n\n- I have at my disposal one other server which has 2 Xeons, 10,000 RPM SCSI\ndrive. Would it make sense to put Postgres on it and leave my apps running\non the more powerful 4 CPU server?\n\n- Would a RAID setup make the disk faster? 
Because top rarely shows the\nCPUs above 50%, I suspect maybe the disk is the bottleneck.\n\nI'm thrilled to be able to use Postgres instead of a commercial database and\nI'm looking forward to putting this into production. Any help with the\nabove questions would be greatly appreciated.\n\nMichael Mattox", "msg_date": "Tue, 24 Jun 2003 09:39:32 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance advice" }, { "msg_contents": "On 24 Jun 2003 at 9:39, Michael Mattox wrote:\n\n> I'd like to get some feedback on my setup to see if I can optimize my\n> database performance. My application has two separate applications:\n> \n> The first application connects to websites and records the statistics in the\n> database. Websites are monitored every 5 or 10 minutes (depends on client),\n> there are 900 monitors which comes out to 7,800 monitorings per hour. The\n> monitor table has columns \"nextdate\" and \"status\" which are updated with\n> every monitoring, and also a row is inserted into the status table and the\n> status item table. For my performance testing (we're just about to go live)\n> I've loaded the database with a month of data (we don't plan to keep data\n> longer than 1 month). So my status table has 6 million records and my\n> status item table has 6 million records as well. One key is that the system\n> is multithreaded so up to 32 processes are accessing the database at the\n> same time, updating the \"nextdate\" before the monitoring and inserting the\n> status and status item records after. There is a serious performance\n> constraint here because unlike a webserver, this application cannot slow\n> down. If it slows down, we won't be able to monitor our sites at 5 minute\n> intervals which will make our customers unhappy.\n> \n> The second application is a web app (tomcat) which lets customers check\n> their status. Both of these applications are deployed on the same server, a\n> 4 CPU (Xeon) with 1.5 gigs of RAM. The OS (RedHat Linux 7.3) and servers\n> are running on 18gig 10,000 RPM SCSI disk that is mirrored to a 2nd disk.\n> The database data directory is on a separate 36 gig 10,000 RPM SCSI disk\n> (we're trying to buy a 2nd disk to mirror it). I'm using Postgres 7.3.2.\n\nI recommend that you use a latest kernel with, pre-empt+low latency + O(1) \npatches. First two are said to affect desktop only, but I believe a loaded \nserver need it as well.\n\nI suggest you get latest kernel from kernel.org and apply con kolivas's patches \nfrom http://members.optusnet.com.au/ckolivas/kernel/. That is the easiest way \naround.\n\nFurthermore if I/O throghput is an issue and you aer ready to experiment at \nthis stage, try freeBSD. Many out here believe that it has superior IO \nscheduling and of course VM. If you move off your database server to another \nmachine, you might get a chance to play with it.\n \n> Issue #1 - Vacuum => Overall the system runs pretty well and seems stable.\n> Last night I did a \"vacuum full analyze\" and then ran my app overnight and\n> first thing in the morning I did a \"vacuum analyze\", which took 35 minutes.\n> I'm not sure if this is normal for a database this size (there are 15,000\n> updates per hour). During the vacuum my application does slow down quite a\n> bit and afterwards is slow speeds back up. I've attached the vacuum output\n> to this mail. 
I'm using Java Data Objects (JDO) so if table/column names\n> look weird it's because the schema is automatically generated.\n\nThat is expected given how much data you have inserted overnight. The changes \nin status and status item table would need some time to come back.\n\nVacuum is IO intensive process. In case of freeBSD, if you lower the nice \npriority, IO priority is also lowered. That mean a vacuum process with lower \npriority will not hog disk bandwidth on freeBSD. Unfortunately not so on linux. \nSo the slowdown you are seeing is probably due to disk bandwidth congestion.\n\nClearly with a load like this, you can not rely upon scheduled vacuums. I \nrecommend you use pgavd in contrib directory in postgresql CVS tree. That would \nvacuum the database whenever needed. It's much better than scheduled vacuum.\n\nIf you can not use it immediately, do a hourly vacuum analyze, may be even more \nfrequent. Nightly vacuum would simply not do.\n\n> \n> Issue #2 - postgres.conf => I'd love to get some feedback on these settings.\n> I've read the archives and no one seems to agree I know, but with the above\n> description of my app I hope someone can at least point me in the right\n> direction:\n> \n> max_connections = 200\n> \n> #\n> # Shared Memory Size\n> #\n> shared_buffers = 3072 # min max_connections*2 or 16, 8KB each\n\nI would say of the order of 10K would be good. You need to experiment a bit to \nfind out what works best for you.\n\n> #max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\n> #max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\n\nYou may bump these two as well. See past discussions for reference. Doubling \nthem would be a good start.\n\n> #max_locks_per_transaction = 64 # min 10\n> #wal_buffers = 8 # min 4, typically 8KB each\n> \n> #\n> # Non-shared Memory Sizes\n> #\n> sort_mem = 8192 # min 64, size in KB\n> vacuum_mem = 24576 # min 1024, size in KB\n> \n> The rest are left uncommented (using the defaults).\n\nNot good. You need to tune effective_cache_size so that postgresql accounts for \n1.5GB RAM your machine has. I would say set it up around 800MB.\n\nSecondly with SCSI in place, lower random_tuple_cost. Default is 4. 1 might be \ntoo agrressive. 2 might be OK. Experiment and decide.\n\n> \n> Issue #3 - server hardware =>\n> \n> - Is there anything I can do with the hardware to increase performance?\n> \n> - Should I increase the ram to 2 gigs? top shows that it is using the swap\n> a bit (about 100k only).\n\nMeans it does not need swap almost at all. Linux has habit to touch swap just \nfor no reason. So memory is not the bottleneck.\n\n \n> - I have at my disposal one other server which has 2 Xeons, 10,000 RPM SCSI\n> drive. Would it make sense to put Postgres on it and leave my apps running\n> on the more powerful 4 CPU server?\n> \n> - Would a RAID setup make the disk faster? Because top rarely shows the\n> CPUs above 50%, I suspect maybe the disk is the bottleneck.\n\nYes it is. You need to move WAL to a different disk. Even if it is IDE. (OK \nthat was over exaggeration but you got the point). 
If your data directories and \nWAL logs are on physically different disks, that should bump up performance \nplenty.\n\n\nHTH\n\nBye\n Shridhar\n\n--\nAmbidextrous, adj.:\tAble to pick with equal skill a right-hand pocket or a \nleft.\t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n\n", "msg_date": "Tue, 24 Jun 2003 13:29:08 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "On 24 Jun 2003 at 13:29, Shridhar Daithankar wrote:\n> > - I have at my disposal one other server which has 2 Xeons, 10,000 RPM SCSI\n> > drive. Would it make sense to put Postgres on it and leave my apps running\n> > on the more powerful 4 CPU server?\n\nArgh.. Forgot it first time. \n\nWith java runnning on same machine, I would not trust that machine for having \nfree RAM all the time, no matter how much RAM you have put into it.\n\nSecondly you are running linux which is known to have weird behaviour problems \nwhen it runs low on memory.\n\nFor both these reasons, I suggest you put your database on another machine. A \ndual CPU machine is more than enough. Put good deal RAM, around a GB and two \nSCSI disks, one for data and another for WAL. If you get RAID for data, great. \nBut that should suffice otherwise as well.\n\n> > \n> > - Would a RAID setup make the disk faster? Because top rarely shows the\n> > CPUs above 50%, I suspect maybe the disk is the bottleneck.\n> \n> Yes it is. You need to move WAL to a different disk. Even if it is IDE. (OK \n> that was over exaggeration but you got the point). If your data directories and \n> WAL logs are on physically different disks, that should bump up performance \n> plenty.\n\nIn addition to that, on linux, it matters a lot as in what filesystem you use. \nIMO ext3 is strict no-no. Go for either reiserfs or XFS.\n\nThere is no agreement as in which file system is best on linux. so you need to \nexperiment if you need every ounce of performance.\n\nAnd for that you got to try freeBSD. That would gave you plenty of idea about \nperformance differences. ( Especially I love man hier and man tuning on \nfreeBSD. Nothing on linux comes anywhere near to that)\n\nBye\n Shridhar\n\n--\n\"Who is General Failure and why is he reading my hard disk ?\"Microsoft spel \nchekar vor sail, worgs grate !!(By [email protected], Felix von Leitner)\n\n", "msg_date": "Tue, 24 Jun 2003 13:49:54 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "On 24 Jun 2003 at 12:10, Achilleus Mantzios wrote:\n\n> On Tue, 24 Jun 2003, Shridhar Daithankar wrote:\n> > With java runnning on same machine, I would not trust that machine for having \n> > free RAM all the time, no matter how much RAM you have put into it.\n> \n> There are always the -Xmx, -Xss, -Xms jvm switches,\n> to control stack (per thread) and heap sizes.\n\nOK. I am not familiar with any of them. Are they related to java? Have never \nworked on java myself.\n\nI was talking about OOM killer behaviour, which was beaten to death for last \nfew days..\n\n> > For both these reasons, I suggest you put your database on another machine. A \n> > dual CPU machine is more than enough. Put good deal RAM, around a GB and two \n> > SCSI disks, one for data and another for WAL. If you get RAID for data, great. 
\n> > But that should suffice otherwise as well.\n> > \n> \n> I think the DB on another machine could be from something helpfull,\n> to an overkill, to a leg self shooting.\n> Depending on the type of the majority of queries and the network speed\n> someone should give an extra time to think about it.\n\nI agree. but with the input provided, I think that remains as viable option.\n\n> > And for that you got to try freeBSD. That would gave you plenty of idea about \n> > performance differences. ( Especially I love man hier and man tuning on \n> > freeBSD. Nothing on linux comes anywhere near to that)\n> > \n> \n> Its like comparing Mazda with VVT-i.\n\nWhat are they? My guess is they are cars., Anyway, I drive a tiny utility bike \nin far country like India..:-)\n\n> Whould you expect to find the furniture fabric\n> specs in the main engine manual?\n\nWell, I agree they are different but not that much..:-) And besides man tuning \nis much more helpful w.r.t. tuning a box. I still think it is relevant. and \nthat was just one example why freeBSD is better server OS, out of the box, \ncompared to linux.\n\nNo flame wars.. Peace..\n\n\nBye\n Shridhar\n\n--\nLieberman's Law:\tEverybody lies, but it doesn't matter since nobody listens.\n\n", "msg_date": "Tue, 24 Jun 2003 14:32:46 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "On Tuesday 24 Jun 2003 8:39 am, Michael Mattox wrote:\n> I'd like to get some feedback on my setup to see if I can optimize my\n> database performance. My application has two separate applications:\n>\n> The first application connects to websites and records the statistics in\n> the database. Websites are monitored every 5 or 10 minutes (depends on\n> client), there are 900 monitors which comes out to 7,800 monitorings per\n> hour. \n[snip]\n> There is a serious\n> performance constraint here because unlike a webserver, this application\n> cannot slow down. If it slows down, we won't be able to monitor our sites\n> at 5 minute intervals which will make our customers unhappy.\n\nOthers are discussing the performance/tuning stuff, but can I make one \nsuggestion?\n\nDon't log your monitoring info directly into the database, log straight to one \nor more text-files and sync them every few seconds. Rotate the files once a \nminute (or whatever seems suitable). Then have a separate process that reads \n\"old\" files and processes them into the database.\n\nThe big advantage - you can take the database down for a short period and the \nmonitoring goes on. Useful for those small maintenance tasks.\n-- \n Richard Huxton\n", "msg_date": "Tue, 24 Jun 2003 12:33:42 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "> Don't log your monitoring info directly into the database, log\n> straight to one\n> or more text-files and sync them every few seconds. Rotate the\n> files once a\n> minute (or whatever seems suitable). Then have a separate process\n> that reads\n> \"old\" files and processes them into the database.\n>\n> The big advantage - you can take the database down for a short\n> period and the\n> monitoring goes on. Useful for those small maintenance tasks.\n\nThis is a good idea but it'd take a bit of redesign to make it work. 
here's\nmy algorithm now:\n\n- Every 10 seconds I get a list of monitors who have nextdate >= current\ntime\n- I put the id numbers of the monitors into a queue\n- A thread from a thread pool (32 active threads) retrieves the monitor from\nthe database from its id, updates the nextdate timestamp, executes the\nmonitor, and stores the status in the database\n\nSo I have two transactions, one to update the monitor's nextdate and another\nto update its status. Now that I wrote that I see a possibility to\nsteamline the last step. I can wait until I update the status to update the\nnextdate. That would cut the number of transactions in two. Only problem\nis I have to be sure not to add a monitor to the queue when it's currently\nexecuting. This shouldn't be hard, I have a hashtable containing all the\nactive monitors.\n\nThanks for the suggestion, I'm definitely going to give this some more\nthought.\n\nMichael\n\n\n\n\n\n\n", "msg_date": "Tue, 24 Jun 2003 14:16:09 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance advice" }, { "msg_contents": "On Tue, 24 Jun 2003, Shridhar Daithankar wrote:\n\n> On 24 Jun 2003 at 13:29, Shridhar Daithankar wrote:\n> > > - I have at my disposal one other server which has 2 Xeons, 10,000 RPM SCSI\n> > > drive. Would it make sense to put Postgres on it and leave my apps running\n> > > on the more powerful 4 CPU server?\n> \n> Argh.. Forgot it first time. \n> \n> With java runnning on same machine, I would not trust that machine for having \n> free RAM all the time, no matter how much RAM you have put into it.\n\nThere are always the -Xmx, -Xss, -Xms jvm switches,\nto control stack (per thread) and heap sizes.\n\n> \n> Secondly you are running linux which is known to have weird behaviour problems \n> when it runs low on memory.\n> \n> For both these reasons, I suggest you put your database on another machine. A \n> dual CPU machine is more than enough. Put good deal RAM, around a GB and two \n> SCSI disks, one for data and another for WAL. If you get RAID for data, great. \n> But that should suffice otherwise as well.\n> \n\nI think the DB on another machine could be from something helpfull,\nto an overkill, to a leg self shooting.\nDepending on the type of the majority of queries and the network speed\nsomeone should give an extra time to think about it.\n\n> > > \n> > > - Would a RAID setup make the disk faster? Because top rarely shows the\n> > > CPUs above 50%, I suspect maybe the disk is the bottleneck.\n> > \n> > Yes it is. You need to move WAL to a different disk. Even if it is IDE. (OK \n> > that was over exaggeration but you got the point). If your data directories and \n> > WAL logs are on physically different disks, that should bump up performance \n> > plenty.\n> \n> In addition to that, on linux, it matters a lot as in what filesystem you use. \n> IMO ext3 is strict no-no. Go for either reiserfs or XFS.\n> \n> There is no agreement as in which file system is best on linux. so you need to \n> experiment if you need every ounce of performance.\n> \n> And for that you got to try freeBSD. That would gave you plenty of idea about \n> performance differences. ( Especially I love man hier and man tuning on \n> freeBSD. 
Nothing on linux comes anywhere near to that)\n> \n\nIts like comparing Mazda with VVT-i.\nWhould you expect to find the furniture fabric\nspecs in the main engine manual?\n\nBesides all that, i must note that jdk1.4.1 runs pretty\nnice on FreeBSD, and some efforts to run java\nover the KSE libs have been done with success.\n\n\n> Bye\n> Shridhar\n> \n> --\n> \"Who is General Failure and why is he reading my hard disk ?\"Microsoft spel \n> chekar vor sail, worgs grate !!(By [email protected], Felix von Leitner)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: achill at matrix dot gatewaynet dot com\n mantzios at softlab dot ece dot ntua dot gr\n\n", "msg_date": "Tue, 24 Jun 2003 12:10:48 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "\"Shridhar Daithankar\" <[email protected]> writes:\n>> - Would a RAID setup make the disk faster? Because top rarely shows the\n>> CPUs above 50%, I suspect maybe the disk is the bottleneck.\n\n> Yes it is. You need to move WAL to a different disk.\n\nFor an update-intensive setup, putting WAL on its own disk is definitely\nyour biggest win. You might then find it rewarding to fool with the\nwal_sync_method and perhaps to bump up wal_buffers a little. A small\nnumber of people have had luck with putting a nonzero commit_delay but\nI have little faith in that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Jun 2003 10:18:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice " }, { "msg_contents": "I want to thank everyone for their help and post a status update. I've made\nquite a bit of improvements. Here's what all I did:\n\nI refactored my algorithm, instead of updating the timestamp, monitoring the\nwebsite, and then updating the status (two transactions), I wait and update\nthe timestamp and status at the same time (one transaction). This required\nusing a hashtable to contain active monitors so that I don't add a monitor\nto the queue while it's executing (I check to make sure it's not in the\nqueue and not executing before adding it to the queue). This cut down my\ntransactions by a factor of 2.\n\nI changed the postgres.conf settings as suggested by several people. I've\nattached it to this email, please let me know if you see anything else I can\ntweak. top still says I have plenty of ram, so should I increase the\nbuffers and/or effective_cache even more?\n\nMem: 1547572K av, 1537212K used, 10360K free, 0K shrd, 107028K\nbuff\nSwap: 1044216K av, 14552K used, 1029664K free 1192280K\ncached\n\nI moved the WAL (pg_xlog directory) to another drive. There are two drives\nin the system, so one has the OS, servers, all files, and the WAL and the\nother has nothing but the data. I think it'd be best to put the WAL on a\nseparate drive from the OS but I don't know if I can get another drive added\njust for that due to our limited budget.\n\nI learned that I only need to vacuum tables that are changed frequently. My\napp doesn't do any deletes, and only one table changes, the monitor table.\nseveral times a second. 
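In case it is useful to anyone else, the whole fix is a one-liner that cron runs against just that table (the 15 minute interval is a first guess that I still need to tune against the VACUUM VERBOSE output):\n\n-- scheduled every 15 minutes, only on the hot table\nVACUUM ANALYZE monitorx;\n\n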
So I only need to vacuum that table. Vacuuming the\nentire database is slow and unecessary. If I only do the monitor table, it\ntakes only a few seconds. Much better than the 35 minutes for the entire\ndatabase that it was taking this morning.\n\nResult of all this? Before a monitor operation (update timestamp, download\nwebpage, update status) was taking 5-6 seconds each, and up to a minute\nduring a vacuum. Now it takes less than 1 second. Part of this is because\nI can run 8 threads instead of 32 due to the other optimizations.\n\nI want to thank everyone for their input. I've heard Postgres is slow and\ndoesn't scale, but now I do it's really just a matter of learning to\nconfigure it properly and trial & error. I do think the documentation could\nbe enhanced a bit here, but I'm sure there are some users who don't make\nthis effort and end up switching to another database, which is bad for\nPostgres' image. Anyway, I hope my summary can help others who may find\nthis email in the archives.\n\nRegards,\nMichael\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Michael\n> Mattox\n> Sent: Tuesday, June 24, 2003 9:40 AM\n> To: [email protected]\n> Subject: [PERFORM] Performance advice\n>\n>\n> I'd like to get some feedback on my setup to see if I can optimize my\n> database performance. My application has two separate applications:\n>\n> The first application connects to websites and records the\n> statistics in the\n> database. Websites are monitored every 5 or 10 minutes (depends\n> on client),\n> there are 900 monitors which comes out to 7,800 monitorings per hour. The\n> monitor table has columns \"nextdate\" and \"status\" which are updated with\n> every monitoring, and also a row is inserted into the status table and the\n> status item table. For my performance testing (we're just about\n> to go live)\n> I've loaded the database with a month of data (we don't plan to keep data\n> longer than 1 month). So my status table has 6 million records and my\n> status item table has 6 million records as well. One key is that\n> the system\n> is multithreaded so up to 32 processes are accessing the database at the\n> same time, updating the \"nextdate\" before the monitoring and inserting the\n> status and status item records after. There is a serious performance\n> constraint here because unlike a webserver, this application cannot slow\n> down. If it slows down, we won't be able to monitor our sites at 5 minute\n> intervals which will make our customers unhappy.\n>\n> The second application is a web app (tomcat) which lets customers check\n> their status. Both of these applications are deployed on the\n> same server, a\n> 4 CPU (Xeon) with 1.5 gigs of RAM. The OS (RedHat Linux 7.3) and servers\n> are running on 18gig 10,000 RPM SCSI disk that is mirrored to a 2nd disk.\n> The database data directory is on a separate 36 gig 10,000 RPM SCSI disk\n> (we're trying to buy a 2nd disk to mirror it). I'm using Postgres 7.3.2.\n>\n> Issue #1 - Vacuum => Overall the system runs pretty well and seems stable.\n> Last night I did a \"vacuum full analyze\" and then ran my app overnight and\n> first thing in the morning I did a \"vacuum analyze\", which took\n> 35 minutes.\n> I'm not sure if this is normal for a database this size (there are 15,000\n> updates per hour). During the vacuum my application does slow\n> down quite a\n> bit and afterwards is slow speeds back up. I've attached the\n> vacuum output\n> to this mail. 
I'm using Java Data Objects (JDO) so if table/column names\n> look weird it's because the schema is automatically generated.\n>\n> Issue #2 - postgres.conf => I'd love to get some feedback on\n> these settings.\n> I've read the archives and no one seems to agree I know, but with\n> the above\n> description of my app I hope someone can at least point me in the right\n> direction:\n>\n> max_connections = 200\n>\n> #\n> # Shared Memory Size\n> #\n> shared_buffers = 3072 # min max_connections*2 or 16, 8KB each\n> #max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\n> #max_fsm_pages = 10000 # min 1000, fsm is free space\n> map, ~6 bytes\n> #max_locks_per_transaction = 64 # min 10\n> #wal_buffers = 8 # min 4, typically 8KB each\n>\n> #\n> # Non-shared Memory Sizes\n> #\n> sort_mem = 8192 # min 64, size in KB\n> vacuum_mem = 24576 # min 1024, size in KB\n>\n> The rest are left uncommented (using the defaults).\n>\n> Issue #3 - server hardware =>\n>\n> - Is there anything I can do with the hardware to increase performance?\n>\n> - Should I increase the ram to 2 gigs? top shows that it is\n> using the swap\n> a bit (about 100k only).\n>\n> - I have at my disposal one other server which has 2 Xeons,\n> 10,000 RPM SCSI\n> drive. Would it make sense to put Postgres on it and leave my\n> apps running\n> on the more powerful 4 CPU server?\n>\n> - Would a RAID setup make the disk faster? Because top rarely shows the\n> CPUs above 50%, I suspect maybe the disk is the bottleneck.\n>\n> I'm thrilled to be able to use Postgres instead of a commercial\n> database and\n> I'm looking forward to putting this into production. Any help with the\n> above questions would be greatly appreciated.\n>\n> Michael Mattox\n>\n>", "msg_date": "Tue, 24 Jun 2003 17:47:38 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance advice" }, { "msg_contents": "Micheal,\n\n> I changed the postgres.conf settings as suggested by several people. I've\n> attached it to this email, please let me know if you see anything else I\n> can tweak. top still says I have plenty of ram, so should I increase the\n> buffers and/or effective_cache even more?\n\nEffective cache, yes. Buffers, no. Even if you have RAM available, \nincreasing buffers beyond an optimal but hard to locate point decreases \nperformance. I'd advise you to start playing with buffers only after you are \ndone playing with other memory-eating params.\n\nI would suggest, though, increasing FSM_relations even more, until your daily \nVACUUM FULL does almost no work. This will improve index usage and speed \nqueries.\n\n> I moved the WAL (pg_xlog directory) to another drive. There are two drives\n> in the system, so one has the OS, servers, all files, and the WAL and the\n> other has nothing but the data. I think it'd be best to put the WAL on a\n> separate drive from the OS but I don't know if I can get another drive\n> added just for that due to our limited budget.\n\nA high-speed IDE drive might be adequate for WAL, except that Linux has \nbooting issues with a mix of IDE & SCSI and many motherboards.\n\n> I learned that I only need to vacuum tables that are changed frequently. \n> My app doesn't do any deletes, and only one table changes, the monitor\n> table. several times a second. So I only need to vacuum that table. \n> Vacuuming the entire database is slow and unecessary. If I only do the\n> monitor table, it takes only a few seconds. 
Much better than the 35\n> minutes for the entire database that it was taking this morning.\n\nIncreasing FSM_relations will also make vacuums more efficient.\n\n> I want to thank everyone for their input. I've heard Postgres is slow and\n> doesn't scale, but now I do it's really just a matter of learning to\n> configure it properly and trial & error. I do think the documentation\n> could be enhanced a bit here, but I'm sure there are some users who don't\n\nAbsolutely. I'm working on it. Look to Techdocs next week.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 24 Jun 2003 09:04:35 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "> configure it properly and trial & error. I do think the documentation could\n> be enhanced a bit here, but I'm sure there are some users who don't make\n\nDo you have any specific thoughts about documentation? Areas of\nconfusion? Was it difficult to find the information in question, or was\nit simply unavailable?\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "24 Jun 2003 13:00:11 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "> > configure it properly and trial & error. I do think the\n> documentation could\n> > be enhanced a bit here, but I'm sure there are some users who don't make\n>\n> Do you have any specific thoughts about documentation? Areas of\n> confusion? Was it difficult to find the information in question, or was\n> it simply unavailable?\n\nI think the biggest area of confusion for me was that the various parameters\nare very briefly described and no context is given for their parameters.\nFor example, from:\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=1&file=runtime-conf\nig.html\n\nMAX_FSM_RELATIONS (integer)\nSets the maximum number of relations (tables) for which free space will be\ntracked in the shared free-space map. The default is 100. This option can\nonly be set at server start.\n\nThere's not enough information there to properly tune postgres. A few\npeople suggested increasing this so I set mine to 4000. I don't have much\nidea if that's too high, too low, just right. What would be nice if these\nwere put into context. Maybe come up with a matrix, with the settings and\nvarious server configs. We could come up with the 5-10 most common server\nconfigurations. So a user with 256k of ram and a single IDE disk will have\ndifferent range from a user with 2 gigs of ram and a SCSI RAID.\n\nThe next thing that really needs improving is the optimization section of\nthe FAQ (http://www.postgresql.org/docs/faqs/FAQ.html#3.6). This is a very\nimportant section of the documentation and it's pretty empty. One thing\nthat was suggested to me is to move the WAL directory to another drive.\nThat could be in this FAQ section. effective_cache isn't mentioned either.\nIt'd be great to talk about server hardware as well, such as memory, whether\nto put postgres on a dedicated server or keep it on the same server as the\napps/webapps.\n\nPlease don't misunderstand, the Postgres documentation is excellent. 
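To give one concrete example of the kind of annotation I mean (the values are only illustrations -- 4000 is what I happened to set, the second number is just a placeholder):\n\nmax_fsm_relations = 4000   # should cover every table that gets regular updates or deletes\nmax_fsm_pages = 20000      # very roughly, the pages freed up between two vacuum runs\n\n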
Some\nimprovements to the performance sections of the documentation would make a\nhuge difference.\n\nRegards,\nMichael\n\n\n\n", "msg_date": "Wed, 25 Jun 2003 08:48:13 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance advice" }, { "msg_contents": "[ This has been written offline yesterday. Now I see that most of it\nhas already been covered. I send it anyway ... ]\n\nOn Tue, 24 Jun 2003 09:39:32 +0200, \"Michael Mattox\"\n<[email protected]> wrote:\n>Websites are monitored every 5 or 10 minutes (depends on client),\n>there are 900 monitors which comes out to 7,800 monitorings per hour.\n\nSo your server load - at least INSERT, UPDATE, DELETE - is absolutely\npredictable. This is good. It enables you to design a cron-driven\nVACUUM strategy.\n\n|INFO: --Relation public.jdo_sequencex--\n|INFO: Pages 28: Changed 1, Empty 0; Tup 1: Vac 5124, Keep 0, UnUsed 0.\n ^ ^^^^\nThis table could stand more frequent VACUUMs, every 15 minutes or so.\n\nBTW, from the name of this table and from the fact that there is only\none live tuple I guess that you are using it to keep track of a\nsequence number. By using a real sequence you could get what you need\nwith less contention; and you don't have to VACUUM a sequence.\n\n|INFO: --Relation public.monitorx--\n|INFO: Removed 170055 tuples in 6036 pages.\n| CPU 0.52s/0.81u sec elapsed 206.26 sec.\n|INFO: Pages 6076: Changed 0, Empty 0; Tup 2057: Vac 170055, Keep 568, UnUsed 356.\n| Total CPU 6.28s/13.23u sec elapsed 486.07 sec.\n\nThe Vac : Tup ratio for this table is more than 80. You have to\nVACUUM this table more often. How long is \"overnight\"? Divide this\nby 80 and use the result as the interval between\n\tVACUUM [VERBOSE] [ANALYSE] public.monitorx;\n\nThus you'd have approximately as many dead tuples as live tuples and\nthe table size should not grow far beyond 150 pages (after an initial\nVACUUM FULL, of course). Then VACUUM of this table should take no\nmore than 20 seconds.\n\nCaveat: Frequent ANALYSEs might trigger the need to VACUUM\npg_catalog.pg_statistic.\n\n> The\n>monitor table has columns \"nextdate\" and \"status\" which are updated with\n>every monitoring, [...]\n> updating the \"nextdate\" before the monitoring and inserting the\n>status and status item records after.\n\nDo you mean updating monitor.nextdate before the monitoring and\nmonitor.status after the monitoring? Can you combine these two\nUPDATEs into one?\n\n> During the vacuum my application does slow down quite a bit\n\nYes, because VACUUM does lots of I/O.\n\n> and afterwards is slow speeds back up.\n\n... because the working set is slowly fetched into the cache after\nhaving been flushed out by VACUUM. Your five largest relations are\nmonitorstatus_statusitemsx, monitorstatusitemlistd8ea58a5x,\nmonitorstatusitemlistx, monitorstatusitemx, and monitorstatusx. The\nheap relations alone (without indexes) account for 468701 pages,\nalmost 4GB. VACUUMing these five relations takes 23 minutes for\nfreeing less than 200 out of 6 million tuples for each relation. This\nisn't worth it. Unless always the same tuples are updated over and\nover, scheduling a VACUUM for half a million deletions/updates should\nbe sufficient.\n\n>shared_buffers = 3072 # min max_connections*2 or 16, 8KB each\n>sort_mem = 8192 # min 64, size in KB\n>vacuum_mem = 24576 # min 1024, size in KB\n>\n>The rest are left uncommented (using the defaults).\n\nAs has already been said, don't forget effective_cache_size. 
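As a rough illustration only (plug in what your box really shows as cached): with about 1 GB sitting in the OS buffer cache and 8 kB pages,\n\n# ca. 1 GB / 8 kB = ca. 130000 pages\neffective_cache_size = 130000\n\ntells the planner how much of the data it can expect to find already cached.\n\n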
I'm not\nso sure about random_page_cost. Try to find out which queries are too\nslow. EXPLAIN ANALYSE is your friend.\n\nOne more thing: I see 2 or 3 UPDATEs and 5 INSERTs per monitoring.\nAre these changes wrapped into a single transaction?\n\nServus\n Manfred\n", "msg_date": "Wed, 25 Jun 2003 08:51:08 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "PM4JI but from my point of view this has been a most useful thread. I too have found it difficult to find the right bit of documentation on performance. I *think* what is needed is some sort of a route map, Poor Performance - start here. Then some questions with sections of the documentation you should go to.\n\nHilary\n\nAt 13:00 24/06/2003 -0400, you wrote:\n>> configure it properly and trial & error. I do think the documentation could\n>> be enhanced a bit here, but I'm sure there are some users who don't make\n>\n>Do you have any specific thoughts about documentation? Areas of\n>confusion? Was it difficult to find the information in question, or was\n>it simply unavailable?\n>\n>-- \n>Rod Taylor <[email protected]>\n>\n>PGP Key: http://www.rbt.ca/rbtpub.asc\n\n\nHilary Forbes\n-------------\nDMR Computer Limited: http://www.dmr.co.uk/\nDirect line: 01689 889950\nSwitchboard: (44) 1689 860000 Fax: (44) 1689 860330\nE-mail: [email protected]\n\n**********************************************************\n\n", "msg_date": "Wed, 25 Jun 2003 09:12:24 +0100", "msg_from": "Hilary Forbes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "> [ This has been written offline yesterday. Now I see that most of it\n> has already been covered. I send it anyway ... ]\n\nStill great advice with slightly different explanations, very useful.\n\n> |INFO: --Relation public.jdo_sequencex--\n> |INFO: Pages 28: Changed 1, Empty 0; Tup 1: Vac 5124, Keep 0, UnUsed 0.\n> ^ ^^^^\n> This table could stand more frequent VACUUMs, every 15 minutes or so.\n\nCan you explain what the \"Vac\" is and how you knew that it should be\nvacuumed more often? I'd like to understand how to interpret my vacuum log.\nI looked in the vacuum section of the docs and there's nothing about the\nvacuum output <hint>.\n\n> BTW, from the name of this table and from the fact that there is only\n> one live tuple I guess that you are using it to keep track of a\n> sequence number. By using a real sequence you could get what you need\n> with less contention; and you don't have to VACUUM a sequence.\n\nI'm using Java Data Objects (JDO) which is an O/R mapper. It generated the\nschema from my object model by default it used a table for a sequence. I\njust got finished configuring it to use a real postgres sequence. With the\nway they have it designed, it opens and closes a connection each time it\nretrieves a sequence. Would I get a performance increase if I modify their\ncode to retrieve multiple sequence numbers in one connection? For example I\ncould have it grab 50 at a time, which would replace 50 connections with 1.\n\n> > The\n> >monitor table has columns \"nextdate\" and \"status\" which are updated with\n> >every monitoring, [...]\n> > updating the \"nextdate\" before the monitoring and inserting the\n> >status and status item records after.\n>\n> Do you mean updating monitor.nextdate before the monitoring and\n> monitor.status after the monitoring? 
Can you combine these two\n> UPDATEs into one?\n\nI was doing this to prevent the monitor from being added to the queue while\nit was executing. But I fixed this, effectively reducing my transactions by\n1/2.\n\n> >shared_buffers = 3072 # min max_connections*2 or 16, 8KB each\n> >sort_mem = 8192 # min 64, size in KB\n> >vacuum_mem = 24576 # min 1024, size in KB\n> >\n> >The rest are left uncommented (using the defaults).\n>\n> As has already been said, don't forget effective_cache_size. I'm not\n> so sure about random_page_cost. Try to find out which queries are too\n> slow. EXPLAIN ANALYSE is your friend.\n>\n> One more thing: I see 2 or 3 UPDATEs and 5 INSERTs per monitoring.\n> Are these changes wrapped into a single transaction?\n\nThese were in 2 transactions but now I have it into a single transaction.\n\nThanks,\nMichael\n\n\n", "msg_date": "Wed, 25 Jun 2003 11:47:48 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance advice" }, { "msg_contents": "On 25 Jun 2003 at 11:47, Michael Mattox wrote:\n> I'm using Java Data Objects (JDO) which is an O/R mapper. It generated the\n> schema from my object model by default it used a table for a sequence. I\n> just got finished configuring it to use a real postgres sequence. With the\n> way they have it designed, it opens and closes a connection each time it\n> retrieves a sequence. Would I get a performance increase if I modify their\n> code to retrieve multiple sequence numbers in one connection? For example I\n> could have it grab 50 at a time, which would replace 50 connections with 1.\n\nYou need to use sequence functions like setval, curval, nextval. May be you can \nwrite your own wrapper function to \"grab\" as many sequence values as you want \nbut it would be good if you design/maintain locking around it as appropriate.\n\nSee\n\nhttp://developer.postgresql.org/docs/postgres/sql-createsequence.html\nhttp://developer.postgresql.org/docs/postgres/functions-sequence.html\n\nHTH\n\nBye\n Shridhar\n\n--\nVelilind's Laws of Experimentation:\t(1) If reproducibility may be a problem, \nconduct the test only once.\t(2) If a straight line fit is required, obtain only \ntwo data points.\n\n", "msg_date": "Wed, 25 Jun 2003 15:28:58 +0530", "msg_from": "\"Shridhar Daithankar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "On Wed, 2003-06-25 at 04:12, Hilary Forbes wrote:\n> PM4JI but from my point of view this has been a most useful thread. I too have found it difficult to find the right bit of documentation on performance. I *think* what is needed is some sort of a route map, Poor Performance - start here. Then some questions with sections of the documentation you should go to.\n\nDo you have any examples where this has worked well (for reference)?\n\nThe only real example I have is MS's help which never gave me the right\nanswer.\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "25 Jun 2003 07:06:22 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "> I think the biggest area of confusion for me was that the various parameters\n> are very briefly described and no context is given for their parameters.\n\n> improvements to the performance sections of the documentation would make a\n> huge difference.\n\nAgreed.. 
Josh has done some work recently re-arranging things to make\nthem easier to find, but the content hasn't changed much.\n\nThanks for your thoughts!\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "25 Jun 2003 07:09:27 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "\nOn 25/06/2003 10:47 Michael Mattox wrote:\n> I'm using Java Data Objects (JDO) which is an O/R mapper. It generated\n> the\n> schema from my object model by default it used a table for a sequence. I\n> just got finished configuring it to use a real postgres sequence. With\n> the\n> way they have it designed, it opens and closes a connection each time it\n> retrieves a sequence. Would I get a performance increase if I modify\n> their\n> code to retrieve multiple sequence numbers in one connection? For\n> example I\n> could have it grab 50 at a time, which would replace 50 connections with\n> 1.\n\nFor best performance, you really should consider using a connection pool \nas it removes the overhead of creating and closing connections.\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for the Smaller \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n", "msg_date": "Wed, 25 Jun 2003 12:52:21 +0100", "msg_from": "Paul Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "On Wed, 25 Jun 2003, Achilleus Mantzios wrote:\n\n> What i think would be ideal (helpful/feasible)\n> is some kind of documentation of the algorithms involved\n> in the planner/optimizer, along with some pointers\n> to postgresql.conf parameters where applicable.\n> \n> This way we will know\n> - Why something is happening\n> - If it is the best plan\n> - What tuning is possible\n\nI agree. In combination with this, I would find case studies very useful. \nHave the documentation team solicit a few volunteers with different setups\n(w/r/t db size, db traffic, and hardware). Perhaps these folks are\nrunning with the default postgresql.conf or have done little tuning. Via\nthe performance list, work through the tuning process with each volunteer:\n\n1. Gathering information about your setup that affects tuning.\n2. Measuring initial performance as a baseline.\n3. Making initial adjustments based on your setup.\n4. Identifying poorly-written SQL.\n5. Identifying poorly-indexed tables.\n6. Measuring effects of each adjustment, and tuning accordingly.\n\n(Note: I am certainly no performance expert -- these steps are meant to be \nexamples only.)\n\nSolicit a list member to monitor the discussion and document each case\nstudy in a consistent fashion. Run completed case studies by the \nperformance and docs lists for review.\n\nI would be happy to join the docs team to work on such a project.\n\nmichael\n\np.s. 
Should this discussion be moved to psgql-docs?\n\n", "msg_date": "Wed, 25 Jun 2003 09:04:31 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "\nI agree that a \"directed graph\"-like performance map\nwould be difficult to be written or understood.\n\nWhat i think would be ideal (helpful/feasible)\nis some kind of documentation of the algorithms involved\nin the planner/optimizer, along with some pointers\nto postgresql.conf parameters where applicable.\n\nThis way we will know\n- Why something is happening\n- If it is the best plan\n- What tuning is possible\n\n\n\nOn 25 Jun 2003, Rod Taylor wrote:\n\n> \n> > I think the biggest area of confusion for me was that the various parameters\n> > are very briefly described and no context is given for their parameters.\n> \n> > improvements to the performance sections of the documentation would make a\n> > huge difference.\n> \n> Agreed.. Josh has done some work recently re-arranging things to make\n> them easier to find, but the content hasn't changed much.\n> \n> Thanks for your thoughts!\n> \n> \n\n-- \n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-210-8981112\nfax: +30-210-8981877\nemail: achill at matrix dot gatewaynet dot com\n mantzios at softlab dot ece dot ntua dot gr\n\n", "msg_date": "Wed, 25 Jun 2003 14:32:43 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" }, { "msg_contents": "On Wed, 25 Jun 2003 11:47:48 +0200, \"Michael Mattox\"\n<[email protected]> wrote:\n>> |INFO: --Relation public.jdo_sequencex--\n>> |INFO: Pages 28: Changed 1, Empty 0; Tup 1: Vac 5124, Keep 0, UnUsed 0.\n>> ^ ^^^^\n>> This table could stand more frequent VACUUMs, every 15 minutes or so.\n>\n>Can you explain what the \"Vac\" is\n\nThat's a long story, where shall I start? Search for MVCC in the docs\nand in the list archives. So you know that every DELETE and every\nUPDATE leaves behind old versions of tuples. The space occupied by\nthese cannot be used immediately. VACUUM is responsible for finding\ndead tuples, which are so old that there is no active transaction that\ncould be interested in their contents, and reclaiming the space. The\nnumber of such tuples is reported as \"Vac\".\n\n> and how you knew that it should be vacuumed more often?\n\njdo_sequencex stores (5000 old versions and 1 active version of) a\nsingle row in 28 pages. Depending on when you did ANALYSE it and\ndepending on the SQL statement, the planner might think that a\nsequential scan is the most efficient way to access this single row.\nA seq scan has to read 28 pages instead of a single page. Well,\nprobably all 28 pages are in the OS cache or even in PG's shared\nbuffers, but 27 pages are just wasted and push out pages you could\nmake better use of. And processing those 28 pages does not come at no\nCPU cost. If you VACUUM frequently enough, this relation never grows\nbeyond one page.\n\n>I'm using Java Data Objects (JDO) which is an O/R mapper. It generated the\n>schema from my object model by default it used a table for a sequence. I\n>just got finished configuring it to use a real postgres sequence. With the\n>way they have it designed, it opens and closes a connection each time it\n>retrieves a sequence. 
Would I get a performance increase if I modify their\n>code to retrieve multiple sequence numbers in one connection? For example I\n>could have it grab 50 at a time, which would replace 50 connections with 1.\n\nBetter yet you modify the code to use the normal access functions for\nsequences.\n\nServus\n Manfred\n", "msg_date": "Thu, 26 Jun 2003 14:49:54 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance advice" } ]
[ { "msg_contents": "I agree a route map would really help.\n\n> -----Original Message-----\n> From:\tHilary Forbes [SMTP:[email protected]]\n> Sent:\t25 June 2003 10:12\n> To:\tRod Taylor\n> Cc:\[email protected]\n> Subject:\tRe: [PERFORM] Performance advice\n> \n> PM4JI but from my point of view this has been a most useful thread. I too\n> have found it difficult to find the right bit of documentation on\n> performance. I *think* what is needed is some sort of a route map, Poor\n> Performance - start here. Then some questions with sections of the\n> documentation you should go to.\n> \n> Hilary\n> \n> At 13:00 24/06/2003 -0400, you wrote:\n> >> configure it properly and trial & error. I do think the documentation\n> could\n> >> be enhanced a bit here, but I'm sure there are some users who don't\n> make\n> >\n> >Do you have any specific thoughts about documentation? Areas of\n> >confusion? Was it difficult to find the information in question, or was\n> >it simply unavailable?\n> >\n> >-- \n> >Rod Taylor <[email protected]>\n> >\n> >PGP Key: http://www.rbt.ca/rbtpub.asc\n> \n> \n> Hilary Forbes\n> -------------\n> DMR Computer Limited: http://www.dmr.co.uk/\n> Direct line: 01689 889950\n> Switchboard: (44) 1689 860000 Fax: (44) 1689 860330\n> E-mail: [email protected]\n> \n> **********************************************************\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n", "msg_date": "Wed, 25 Jun 2003 10:58:18 +0200", "msg_from": "Howard Oblowitz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance advice" } ]
[ { "msg_contents": "I've used indexes to speed up my queries but this query escapes me. I'm\ncurious if someone can suggest an index or a way to modify the query to use\nthe index. The query is:\n\nselect ms.averageconnecttimex as ms_averageconnecttime, ms.averagedurationx\nas ms_averageduration, ms.datex as ms_date, ms.idx as ms_id,\nms.statusstringx as ms_statusstring, ms.statusx as ms_status,\nmsi.actualcontentx as msi_actualcontent, msi.connecttimex as\nmsi_connecttime, msi.correctcontentx as msi_correctcontent, msi.datex as\nmsi_date, msi.descriptionx as msi_description, msi.durationx as\nmsi_duration, msi.errorcontentx as msi_errorcontent, msi.idx as msi_id,\nmsi.monitorlocationx as msi_monitorlocation, msi.statusstringx as\nmsi_statusstring, msi.statusx as msi_status from monitorstatusx ms,\nmonitorstatusitemx msi where monitorx.idx =\n'M-TEST_1444-TEST_00_10560561260561463219352' AND monitorx.jdoidx =\nms.monitorx AND ms.datex >= '2003-06-20 08:57:21.36' AND ms.datex <=\n'2003-06-29 08:57:21.36' AND ms.jdoidx = monitorstatus_statusitemsx.jdoidx\nAND monitorstatus_statusitemsx.statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx AND\nmonitorstatusitemlistd8ea58a5x.statusitemlistx = msi.jdoidx ORDER BY ms_date\nDESC;\n\nHere is the result of explain:\n\n Sort (cost=9498.85..9500.16 rows=525 width=788)\n Sort Key: ms.datex\n -> Nested Loop (cost=0.00..9475.15 rows=525 width=788)\n -> Nested Loop (cost=0.00..7887.59 rows=525 width=123)\n -> Nested Loop (cost=0.00..6300.03 rows=525 width=107)\n -> Nested Loop (cost=0.00..4712.02 rows=525 width=91)\n -> Index Scan using monitorx_id_index on\nmonitorx (cost=0.00..5.37 rows=1 width=8)\n Index Cond: (idx =\n'M-TEST_1444-TEST_00_10560561260561463219352'::character varying)\n -> Index Scan using monitorstatusxmonitori on\nmonitorstatusx ms (cost=0.00..4695.65 rows=880 width=83)\n Index Cond: (\"outer\".jdoidx = ms.monitorx)\n Filter: ((datex >= '2003-06-20\n08:57:21.36'::timestamp without time zone) AND (datex <= '2003-06-29\n08:57:21.36'::timestamp without time zone))\n -> Index Scan using monitorstatus_stjdoidb742c9b3i on\nmonitorstatus_statusitemsx (cost=0.00..3.01 rows=1 width=16)\n Index Cond: (\"outer\".jdoidx =\nmonitorstatus_statusitemsx.jdoidx)\n -> Index Scan using monitorstatusitejdoid7db0befci on\nmonitorstatusitemlistd8ea58a5x (cost=0.00..3.01 rows=1 width=16)\n Index Cond: (\"outer\".statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx)\n -> Index Scan using monitorstatusitemx_pkey on monitorstatusitemx\nmsi (cost=0.00..3.01 rows=1 width=665)\n Index Cond: (\"outer\".statusitemlistx = msi.jdoidx)\n(17 rows)\n\nAs you can see, it's doing a sort on ms.datex. I created an index on the\nmonitorstatusx (ms) table for the datex, but it doesn't use it. Is it\npossible to create an index to prevent this sort?\n\nThanks,\nMichael\n\n\nMichael Mattox\[email protected] / http://www.advweb.com/michael\n\n\n\n", "msg_date": "Wed, 25 Jun 2003 13:46:48 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to optimize monstrous query, sorts instead of using index" }, { "msg_contents": "Is this 7.3.x? Can we see explain analyze output for the query?\n\nOn Wed, 2003-06-25 at 07:46, Michael Mattox wrote:\n> I've used indexes to speed up my queries but this query escapes me. I'm\n> curious if someone can suggest an index or a way to modify the query to use\n> the index. 
The query is:\n> \n> select ms.averageconnecttimex as ms_averageconnecttime, ms.averagedurationx\n> as ms_averageduration, ms.datex as ms_date, ms.idx as ms_id,\n> ms.statusstringx as ms_statusstring, ms.statusx as ms_status,\n> msi.actualcontentx as msi_actualcontent, msi.connecttimex as\n> msi_connecttime, msi.correctcontentx as msi_correctcontent, msi.datex as\n> msi_date, msi.descriptionx as msi_description, msi.durationx as\n> msi_duration, msi.errorcontentx as msi_errorcontent, msi.idx as msi_id,\n> msi.monitorlocationx as msi_monitorlocation, msi.statusstringx as\n> msi_statusstring, msi.statusx as msi_status from monitorstatusx ms,\n> monitorstatusitemx msi where monitorx.idx =\n> 'M-TEST_1444-TEST_00_10560561260561463219352' AND monitorx.jdoidx =\n> ms.monitorx AND ms.datex >= '2003-06-20 08:57:21.36' AND ms.datex <=\n> '2003-06-29 08:57:21.36' AND ms.jdoidx = monitorstatus_statusitemsx.jdoidx\n> AND monitorstatus_statusitemsx.statusitemsx =\n> monitorstatusitemlistd8ea58a5x.jdoidx AND\n> monitorstatusitemlistd8ea58a5x.statusitemlistx = msi.jdoidx ORDER BY ms_date\n> DESC;\n> \n> Here is the result of explain:\n> \n> Sort (cost=9498.85..9500.16 rows=525 width=788)\n> Sort Key: ms.datex\n> -> Nested Loop (cost=0.00..9475.15 rows=525 width=788)\n> -> Nested Loop (cost=0.00..7887.59 rows=525 width=123)\n> -> Nested Loop (cost=0.00..6300.03 rows=525 width=107)\n> -> Nested Loop (cost=0.00..4712.02 rows=525 width=91)\n> -> Index Scan using monitorx_id_index on\n> monitorx (cost=0.00..5.37 rows=1 width=8)\n> Index Cond: (idx =\n> 'M-TEST_1444-TEST_00_10560561260561463219352'::character varying)\n> -> Index Scan using monitorstatusxmonitori on\n> monitorstatusx ms (cost=0.00..4695.65 rows=880 width=83)\n> Index Cond: (\"outer\".jdoidx = ms.monitorx)\n> Filter: ((datex >= '2003-06-20\n> 08:57:21.36'::timestamp without time zone) AND (datex <= '2003-06-29\n> 08:57:21.36'::timestamp without time zone))\n> -> Index Scan using monitorstatus_stjdoidb742c9b3i on\n> monitorstatus_statusitemsx (cost=0.00..3.01 rows=1 width=16)\n> Index Cond: (\"outer\".jdoidx =\n> monitorstatus_statusitemsx.jdoidx)\n> -> Index Scan using monitorstatusitejdoid7db0befci on\n> monitorstatusitemlistd8ea58a5x (cost=0.00..3.01 rows=1 width=16)\n> Index Cond: (\"outer\".statusitemsx =\n> monitorstatusitemlistd8ea58a5x.jdoidx)\n> -> Index Scan using monitorstatusitemx_pkey on monitorstatusitemx\n> msi (cost=0.00..3.01 rows=1 width=665)\n> Index Cond: (\"outer\".statusitemlistx = msi.jdoidx)\n> (17 rows)\n> \n> As you can see, it's doing a sort on ms.datex. I created an index on the\n> monitorstatusx (ms) table for the datex, but it doesn't use it. Is it\n> possible to create an index to prevent this sort?\n> \n> Thanks,\n> Michael\n> \n> \n> Michael Mattox\n> [email protected] / http://www.advweb.com/michael\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "25 Jun 2003 07:54:20 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of using index" }, { "msg_contents": "Sorry, I neglected to say the version, yes I'm using Postgres 7.3.2 on\nLinux.\n\nHere's the output of explain analyze. 
The query typically takes 0-4 seconds\ndepending on the time frame. It's run very frequently especially to process\nthe nightly reports.\n\nveriguard=# explain analyze select ms.averageconnecttimex as\nms_averageconnecttime, ms.averagedurationx as ms_averageduration, ms.datex\nas ms_date, ms.idx as ms_id, ms.statusstringx as ms_statusstring, ms.statusx\nas ms_status, msi.actualcontentx as msi_actualcontent, msi.connecttimex as\nmsi_connecttime, msi.correctcontentx as msi_correctcontent, msi.datex as\nmsi_date, msi.descriptionx as msi_description, msi.durationx as\nmsi_duration, msi.errorcontentx as msi_errorcontent, msi.idx as msi_id,\nmsi.monitorlocationx as msi_monitorlocation, msi.statusstringx as\nmsi_statusstring, msi.statusx as msi_status from monitorstatusx ms,\nmonitorstatusitemx msi where monitorx.idx =\n'M-TEST_1444-TEST_00_10560561260561463219352' AND monitorx.jdoidx =\nms.monitorx AND ms.datex >= '2003-06-20 08:57:21.36' AND ms.datex <=\n'2003-06-29 08:57:21.36' AND ms.jdoidx = monitorstatus_statusitemsx.jdoidx\nAND monitorstatus_statusitemsx.statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx AND\nmonitorstatusitemlistd8ea58a5x.statusitemlistx = msi.jdoidx ORDER BY ms_date\nDESC;\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------\n Sort (cost=9498.96..9500.27 rows=525 width=788) (actual\ntime=6720.91..6721.44 rows=623 loops=1)\n Sort Key: ms.datex\n -> Nested Loop (cost=0.00..9475.26 rows=525 width=788) (actual\ntime=145.16..6718.65 rows=623 loops=1)\n -> Nested Loop (cost=0.00..7887.69 rows=525 width=123) (actual\ntime=126.84..4528.85 rows=623 loops=1)\n -> Nested Loop (cost=0.00..6300.13 rows=525 width=107)\n(actual time=95.37..3470.55 rows=623 loops=1)\n -> Nested Loop (cost=0.00..4712.13 rows=525 width=91)\n(actual time=40.44..1892.06 rows=625 loops=1)\n -> Index Scan using monitorx_id_index on\nmonitorx (cost=0.00..5.48 rows=1 width=8) (actual time=0.25..19.90 rows=1\nloops=1)\n Index Cond: (idx =\n'M-TEST_1444-TEST_00_10560561260561463219352'::character varying)\n -> Index Scan using monitorstatusxmonitori on\nmonitorstatusx ms (cost=0.00..4695.65 rows=880 width=83) (actual\ntime=40.17..1868.12 rows=625 loops=1)\n Index Cond: (\"outer\".jdoidx = ms.monitorx)\n Filter: ((datex >= '2003-06-20\n08:57:21.36'::timestamp without time zone) AND (datex <= '2003-06-29\n08:57:21.36'::timestamp without time zone))\n -> Index Scan using monitorstatus_stjdoidb742c9b3i on\nmonitorstatus_statusitemsx (cost=0.00..3.01 rows=1 width=16) (actual\ntime=2.51..2.51 rows=1 loops=625)\n Index Cond: (\"outer\".jdoidx =\nmonitorstatus_statusitemsx.jdoidx)\n -> Index Scan using monitorstatusitejdoid7db0befci on\nmonitorstatusitemlistd8ea58a5x (cost=0.00..3.01 rows=1 width=16) (actual\ntime=1.68..1.69 rows=1 loops=623)\n Index Cond: (\"outer\".statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx)\n -> Index Scan using monitorstatusitemx_pkey on monitorstatusitemx\nmsi (cost=0.00..3.01 rows=1 width=665) (actual time=3.50..3.50 rows=1\nloops=623)\n Index Cond: (\"outer\".statusitemlistx = msi.jdoidx)\n Total runtime: 6722.43 msec\n(18 rows)\n\n\n\n", "msg_date": "Wed, 25 Jun 2003 14:00:39 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to optimize monstrous query, sorts instead of using index" }, { "msg_contents": "> Here's the output of explain analyze. 
The query typically takes 0-4 seconds\n> depending on the time frame. It's run very frequently especially to process\n> the nightly reports.\n\nThe plan picked seems reasonable (estimated costs / tuples is close to\nactual).\n\nI think the biggest hit is this index scan. Thats a substantial cost to\npull out less than a thousand lines:\n\n -> Index Scan using monitorstatusxmonitori\non\nmonitorstatusx ms (cost=0.00..4695.65 rows=880 width=83) (actual\ntime=40.17..1868.12 rows=625 loops=1)\n Index Cond: (\"outer\".jdoidx =\nms.monitorx)\n Filter: ((datex >= '2003-06-20\n08:57:21.36'::timestamp without time zone) AND (datex <= '2003-06-29\n08:57:21.36'::timestamp without time zone))\n\n\nAre jdoidx and monitorx integers?\n\nYou might try a multi-column index on (ms.monitorx, ms.datex).\n\nAre monitorx assigned roughly ordered by date? It must be, otherwise\nthe sort step would not be so cheap (hardly any impact on the query --\nsee actual cost number). The multi-column index above should give you a\nbit of a boost.\n\nDepending on the data in the table, the index (ms.datex, monitorx) may\ngive better results along with a single index on (ms.monitorx) as you\ncurrently have. It's not very likely though.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "25 Jun 2003 08:12:09 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "> Are jdoidx and monitorx integers?\n\nYes both are integers:\n\n-- Table: public.monitorstatusx\nCREATE TABLE public.monitorstatusx (\n averageconnecttimex numeric(65535, 65532),\n averagedurationx numeric(65535, 65532),\n datex timestamp,\n idx varchar(255),\n jdoclassx varchar(255),\n jdoidx int8 NOT NULL,\n jdolockx int4,\n monitorx int8,\n statusstringx varchar(255),\n statusx varchar(255),\n CONSTRAINT monitorstatusx_pkey PRIMARY KEY (jdoidx)\n) WITH OIDS;\n\n\n> You might try a multi-column index on (ms.monitorx, ms.datex).\n\nJust tried it, it didn't prevent the sort. 
But it sounds like the sort\nisn't the problem, correct?\n\n-- Index: public.monitorstatusx_datex_monitorx_index\nCREATE INDEX monitorstatusx_datex_monitorx_index ON monitorstatusx USING\nbtree (monitorx, datex);\n\nveriguard=# explain analyze select ms.averageconnecttimex as\nms_averageconnecttime, ms.averagedurationx as ms_averageduration, ms.datex\nas ms_date, ms.idx as ms_id, ms.statusstringx as ms_statusstring, ms.statusx\nas ms_status, msi.actualcontentx as msi_actualcontent, msi.connecttimex as\nmsi_connecttime, msi.correctcontentx as msi_correctcontent, msi.datex as\nmsi_date, msi.descriptionx as msi_description, msi.durationx as\nmsi_duration, msi.errorcontentx as msi_errorcontent, msi.idx as msi_id,\nmsi.monitorlocationx as msi_monitorlocation, msi.statusstringx as\nmsi_statusstring, msi.statusx as msi_status from monitorstatusx ms,\nmonitorstatusitemx msi where monitorx.idx =\n'M-TEST_1444-TEST_00_10560561260561463219352' AND monitorx.jdoidx =\nms.monitorx AND ms.datex >= '2003-06-20 08:57:21.36' AND ms.datex <=\n'2003-06-29 08:57:21.36' AND ms.jdoidx = monitorstatus_statusitemsx.jdoidx\nAND monitorstatus_statusitemsx.statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx AND\nmonitorstatusitemlistd8ea58a5x.statusitemlistx = msi.jdoidx ORDER BY ms_date\nDESC;\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------\n Sort (cost=6014.53..6015.86 rows=529 width=788) (actual\ntime=4286.35..4286.88 rows=626 loops=1)\n Sort Key: ms.datex\n -> Nested Loop (cost=0.00..5990.59 rows=529 width=788) (actual\ntime=131.57..4283.76 rows=626 loops=1)\n -> Nested Loop (cost=0.00..4388.44 rows=529 width=123) (actual\ntime=106.23..3398.54 rows=626 loops=1)\n -> Nested Loop (cost=0.00..2786.29 rows=529 width=107)\n(actual time=90.29..2518.20 rows=626 loops=1)\n -> Nested Loop (cost=0.00..1175.81 rows=532 width=91)\n(actual time=55.15..1345.88 rows=628 loops=1)\n -> Index Scan using monitorx_id_index on\nmonitorx (cost=0.00..5.36 rows=1 width=8) (actual time=54.94..55.03 rows=1\nloops=1)\n Index Cond: (idx =\n'M-TEST_1444-TEST_00_10560561260561463219352'::character varying)\n -> Index Scan using\nmonitorstatusx_datex_monitorx_index on monitorstatusx ms\n(cost=0.00..1159.33 rows=890 width=83) (actual time=0.19..1287.02 rows=628\nloops=1)\n Index Cond: ((\"outer\".jdoidx = ms.monitorx)\nAND (ms.datex >= '2003-06-20 08:57:21.36'::timestamp without time zone) AND\n(ms.datex <= '2003-06-29 08:57:21.36'::timestamp without time zone))\n -> Index Scan using monitorstatus_stjdoidb742c9b3i on\nmonitorstatus_statusitemsx (cost=0.00..3.01 rows=1 width=16) (actual\ntime=1.85..1.86 rows=1 loops=628)\n Index Cond: (\"outer\".jdoidx =\nmonitorstatus_statusitemsx.jdoidx)\n -> Index Scan using monitorstatusitejdoid7db0befci on\nmonitorstatusitemlistd8ea58a5x (cost=0.00..3.01 rows=1 width=16) (actual\ntime=1.39..1.39 rows=1 loops=626)\n Index Cond: (\"outer\".statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx)\n -> Index Scan using monitorstatusitemx_pkey on monitorstatusitemx\nmsi (cost=0.00..3.01 rows=1 width=665) (actual time=1.40..1.40 rows=1\nloops=626)\n Index Cond: (\"outer\".statusitemlistx = msi.jdoidx)\n Total runtime: 4288.71 msec\n(17 rows)\n\nveriguard=#\n\n\n> Are monitorx assigned roughly ordered by date? 
It must be, otherwise\n> the sort step would not be so cheap (hardly any impact on the query --\n> see actual cost number). The multi-column index above should give you a\n> bit of a boost.\n\nmonitorx is a foreign key to the monitorx table.\n\nIf the query can't be optimized it's OK, I can live it the speed. I just\ncouldn't figure out why it'd sort on datex if I had an index on datex.\n\nThanks,\nMichael\n\n\n", "msg_date": "Wed, 25 Jun 2003 14:48:15 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "> > You might try a multi-column index on (ms.monitorx, ms.datex).\n> \n> Just tried it, it didn't prevent the sort. But it sounds like the sort\n> isn't the problem, correct?\n\nThe sort isn't actually doing any sorting, so it's virtually free. The\nsort is taking less than 3ms as the data is already 99% sorted due to\nthe correlation between datex and monitorx.\n\nFor similar reasons, the datex index will not be used, as it has no\nadvantage to being used.\n\n> -> Index Scan using\n> monitorstatusx_datex_monitorx_index on monitorstatusx ms\n> (cost=0.00..1159.33 rows=890 width=83) (actual time=0.19..1287.02 rows=628\n> loops=1)\n> Index Cond: ((\"outer\".jdoidx = ms.monitorx)\n> AND (ms.datex >= '2003-06-20 08:57:21.36'::timestamp without time zone) AND\n> (ms.datex <= '2003-06-29 08:57:21.36'::timestamp without time zone))\n\nYou can see that it used the new multi-key index for both items, rather\nthan finding for monitorx, then filtering out unwanted results by datex.\n\nIt doesn't appear to have made much difference (looks like data was\npartially cached for this new run), but it changed a bit for the better.\n\nI'm afraid thats the best I can do on the query itself I think.\n\n\nOh, and using tables in your where clause that aren't in the from clause\nis non-portable and often hides bugs:\n\n from monitorstatusx ms\n , monitorstatusitemx msi\nwhere monitorx.idx = 'M-TEST_1444-TEST_00_10560561260561463219352'\n\nAre you sure you sure you don't have any duplicated constraints by\npulling information in from other tables that you don't need to? \nRemoving some of those nested loops would make a significant impact to\nthe results.\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "25 Jun 2003 09:36:06 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "> Oh, and using tables in your where clause that aren't in the from clause\n> is non-portable and often hides bugs:\n>\n> from monitorstatusx ms\n> , monitorstatusitemx msi\n> where monitorx.idx = 'M-TEST_1444-TEST_00_10560561260561463219352'\n>\n> Are you sure you sure you don't have any duplicated constraints by\n> pulling information in from other tables that you don't need to?\n> Removing some of those nested loops would make a significant impact to\n> the results.\n\nI didn't notice that before, thanks for pointing that out. I just tried\nadding monitorx.idx to the select and it ended up making my query take\nseveral minutes long. 
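For what it is worth, spelled out with explicit joins (aliases and join keys are taken from your query and plan, so treat this as a sketch and double-check it) the same thing looks roughly like\n\nSELECT ms.datex, ms.statusx, msi.*      -- select list trimmed for readability\n  FROM monitorx m\n  JOIN monitorstatusx ms ON ms.monitorx = m.jdoidx\n  JOIN monitorstatus_statusitemsx mss ON mss.jdoidx = ms.jdoidx\n  JOIN monitorstatusitemlistd8ea58a5x lst ON lst.jdoidx = mss.statusitemsx\n  JOIN monitorstatusitemx msi ON msi.jdoidx = lst.statusitemlistx\n WHERE m.idx = 'M-TEST_1444-TEST_00_10560561260561463219352'\n   AND ms.datex BETWEEN '2003-06-20 08:57:21.36' AND '2003-06-29 08:57:21.36'\n ORDER BY ms.datex DESC;\n\nNote that on 7.3 the explicit JOIN syntax also forces that join order, which here happens to match the fast plan you already get.\n\n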
Any ideas how I can fix this and keep my performance?\n\nnew query:\n\nveriguard=# explain select m.idx, ms.averageconnecttimex as\nms_averageconnecttime, ms.averagedurationx as ms_averageduration, ms.datex\nas ms_date, ms.idx as ms_id, ms.statusstringx as ms_statusstring, ms.statusx\nas ms_status, msi.actualcontentx as msi_actualcontent, msi.connecttimex as\nmsi_connecttime, msi.correctcontentx as msi_correctcontent, msi.datex as\nmsi_date, msi.descriptionx as msi_description, msi.durationx as\nmsi_duration, msi.errorcontentx as msi_errorcontent, msi.idx as msi_id,\nmsi.monitorlocationx as msi_monitorlocation, msi.statusstringx as\nmsi_statusstring, msi.statusx as msi_status from monitorx m, monitorstatusx\nms, monitorstatusitemx msi where m.idx =\n'M-TEST_1444-TEST_00_10560561260561463219352' AND monitorx.jdoidx =\nms.monitorx AND ms.datex >= '2003-06-20 08:57:21.36' AND ms.datex <=\n'2003-06-29 08:57:21.36' AND ms.jdoidx = monitorstatus_statusitemsx.jdoidx\nAND monitorstatus_statusitemsx.statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx AND\nmonitorstatusitemlistd8ea58a5x.statusitemlistx = msi.jdoidx ORDER BY ms_date\nDESC;\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------------\n Sort (cost=1653384.42..1655402.97 rows=807418 width=826)\n Sort Key: ms.datex\n -> Hash Join (cost=820308.66..1112670.42 rows=807418 width=826)\n Hash Cond: (\"outer\".monitorx = \"inner\".jdoidx)\n -> Merge Join (cost=820132.71..1098364.65 rows=807418 width=780)\n Merge Cond: (\"outer\".jdoidx = \"inner\".statusitemlistx)\n -> Index Scan using monitorstatusitemx_pkey on\nmonitorstatusitemx msi (cost=0.00..247616.27 rows=6596084 width=665)\n -> Sort (cost=820132.71..822151.59 rows=807554 width=115)\n Sort Key:\nmonitorstatusitemlistd8ea58a5x.statusitemlistx\n -> Hash Join (cost=461310.87..685820.13 rows=807554\nwidth=115)\n Hash Cond: (\"outer\".jdoidx =\n\"inner\".statusitemsx)\n -> Seq Scan on monitorstatusitemlistd8ea58a5x\n(cost=0.00..104778.90 rows=6597190 width=16)\n -> Hash (cost=447067.98..447067.98 rows=807554\nwidth=99)\n -> Merge Join (cost=0.00..447067.98\nrows=807554 width=99)\n Merge Cond: (\"outer\".jdoidx =\n\"inner\".jdoidx)\n -> Index Scan using\nmonitorstatusx_pkey on monitorstatusx ms (cost=0.00..272308.56 rows=811754\nwidth=83)\n Filter: ((datex >= '2003-06-20\n08:57:21.36'::timestamp without time zone) AND (datex <= '2003-06-29\n08:57:21.36'::timestamp without time zone))\n -> Index Scan using\nmonitorstatus_stjdoidb742c9b3i on monitorstatus_statusitemsx\n(cost=0.00..146215.58 rows=6596680 width=16)\n -> Hash (cost=172.22..172.22 rows=1493 width=46)\n -> Nested Loop (cost=0.00..172.22 rows=1493 width=46)\n -> Index Scan using monitorx_id_index on monitorx m\n(cost=0.00..5.36 rows=1 width=38)\n Index Cond: (idx =\n'M-TEST_1444-TEST_00_10560561260561463219352'::character varying)\n -> Seq Scan on monitorx (cost=0.00..151.93 rows=1493\nwidth=8)\n(23 rows)\n\nold query:\n\nveriguard=# explain select ms.averageconnecttimex as ms_averageconnecttime,\nms.averagedurationx as ms_averageduration, ms.datex as ms_date, ms.idx as\nms_id, ms.statusstringx as ms_statusstring, ms.statusx as ms_status,\nmsi.actualcontentx as msi_actualcontent, msi.connecttimex as\nmsi_connecttime, msi.correctcontentx as msi_correctcontent, msi.datex as\nmsi_date, msi.descriptionx as msi_description, msi.durationx as\nmsi_duration, msi.errorcontentx as 
msi_errorcontent, msi.idx as msi_id,\nmsi.monitorlocationx as msi_monitorlocation, msi.statusstringx as\nmsi_statusstring, msi.statusx as msi_status from monitorstatusx ms,\nmonitorstatusitemx msi where monitorx.idx =\n'M-TEST_1444-TEST_00_10560561260561463219352' AND monitorx.jdoidx =\nms.monitorx AND ms.datex >= '2003-06-20 08:57:21.36' AND ms.datex <=\n'2003-06-29 08:57:21.36' AND ms.jdoidx = monitorstatus_statusitemsx.jdoidx\nAND monitorstatus_statusitemsx.statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx AND\nmonitorstatusitemlistd8ea58a5x.statusitemlistx = msi.jdoidx ORDER BY ms_date\nDESC;\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------\n Sort (cost=9590.52..9591.87 rows=541 width=788)\n Sort Key: ms.datex\n -> Nested Loop (cost=0.00..9565.97 rows=541 width=788)\n -> Nested Loop (cost=0.00..7929.22 rows=541 width=123)\n -> Nested Loop (cost=0.00..6292.48 rows=541 width=107)\n -> Nested Loop (cost=0.00..4647.22 rows=544 width=91)\n -> Index Scan using monitorx_id_index on\nmonitorx (cost=0.00..5.36 rows=1 width=8)\n Index Cond: (idx =\n'M-TEST_1444-TEST_00_10560561260561463219352'::character varying)\n -> Index Scan using monitorstatusxmonitori on\nmonitorstatusx ms (cost=0.00..4630.29 rows=926 width=83)\n Index Cond: (\"outer\".jdoidx = ms.monitorx)\n Filter: ((datex >= '2003-06-20\n08:57:21.36'::timestamp without time zone) AND (datex <= '2003-06-29\n08:57:21.36'::timestamp without time zone))\n -> Index Scan using monitorstatus_stjdoidb742c9b3i on\nmonitorstatus_statusitemsx (cost=0.00..3.01 rows=1 width=16)\n Index Cond: (\"outer\".jdoidx =\nmonitorstatus_statusitemsx.jdoidx)\n -> Index Scan using monitorstatusitejdoid7db0befci on\nmonitorstatusitemlistd8ea58a5x (cost=0.00..3.01 rows=1 width=16)\n Index Cond: (\"outer\".statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx)\n -> Index Scan using monitorstatusitemx_pkey on monitorstatusitemx\nmsi (cost=0.00..3.01 rows=1 width=665)\n Index Cond: (\"outer\".statusitemlistx = msi.jdoidx)\n(17 rows)\n\nveriguard=#\n\n\n", "msg_date": "Wed, 25 Jun 2003 16:09:59 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "Michael,\n\nThis whole query looks like a mess to me. 
Since I don't know the exact model\nand the table stats, I don't even try to rewrite your query, however, here\nare the weak points I can think of:\n\n* as Rod pointed out, there are more tables in WHERE that aren't in FROM.\nThis can be a bug, but the very least, it makes the query far less readable.\nThese are:\n\n monitorx\n monitorstatus_statusitemsx.jdoidx\n monitorstatusitemlistd8ea58a5x.jdoidx\n\n* there are 3 index scans that basically steal your time.\nThey are 1.6..3.5 ms x 625 ~ 1..2 sec each (or I'm reading exp ana wrong,\nI'm not an expert indeed):\n\n - Index Scan using monitorstatus_stjdoidb742c9b3i on\n monitorstatus_statusitemsx\n (cost=0.00..3.01 rows=1 width=16)\n (actual time=2.51..2.51 rows=1 loops=625)\n Index Cond: (\"outer\".jdoidx = monitorstatus_statusitemsx.jdoidx)\n - Index Scan using monitorstatusitejdoid7db0befci on\n monitorstatusitemlistd8ea58a5x\n (cost=0.00..3.01 rows=1 width=16)\n (actual time=1.68..1.69 rows=1 loops=623)\n Index Cond: (\"outer\".statusitemsx =\nmonitorstatusitemlistd8ea58a5x.jdoidx)\n - Index Scan using monitorstatusitemx_pkey on monitorstatusitemx msi\n (cost=0.00..3.01 rows=1 width=665)\n (actual time=3.50..3.50 rows=1 loops=623)\n Index Cond: (\"outer\".statusitemlistx = msi.jdoidx)\n\n* another killer index: I think this one takes about the rest of the time\n(i.e. 3-4 secs):\n\n -> Index Scan using monitorstatusxmonitori on monitorstatusx ms\n (cost=0.00..4695.65 rows=880 width=83)\n (actual time=40.17..1868.12 rows=625 loops=1)\n Index Cond: (\"outer\".jdoidx = ms.monitorx)\n Filter: ((datex >= '2003-06-20 08:57:21.36'::timestamp without time\nzone)\n AND (datex <= '2003-06-29 08:57:21.36'::timestamp without time\nzone))\n\nSince the number of rows probably can't be reduced (as I read it, the query\nactually returned that many rows), I'd think about clever joins in the FROM\npart and fewer tables, to use fewer index scans.\n\nFinally, decided to do an ad-hoc adjustment. 
Try this, or (wild guess) try\nto completely eliminate the WHERE part by subselects on ms and monitorx.\n\nThis may be faster, slower, or even give different results, based on whether\nI guessed the 1:N relationships right or not.\n\nG.\n------------------------------- cut here -------------------------------\nselect\n ms.averageconnecttimex as ms_averageconnecttime,\n ms.averagedurationx as ms_averageduration,\n ms.datex as ms_date, ms.idx as ms_id,\n ms.statusstringx as ms_statusstring, ms.statusx as ms_status,\n\n msi.actualcontentx as msi_actualcontent, msi.connecttimex as\n msi_connecttime, msi.correctcontentx as msi_correctcontent, msi.datex as\n msi_date, msi.descriptionx as msi_description, msi.durationx as\n msi_duration, msi.errorcontentx as msi_errorcontent, msi.idx as msi_id,\n msi.monitorlocationx as msi_monitorlocation, msi.statusstringx as\n msi_statusstring, msi.statusx as msi_status\n\nfrom monitorstatusx ms\n LEFT JOIN monitorx ON (monitorx.jdoidx = ms.monitorx)\n LEFT JOIN monitorstatus_statusitemsx ms_si ON (ms.jdoidx =\nms_si.jdoidx)\n LEFT JOIN monitorstatusitemlistd8ea58a5x msil ON\n (ms_si.statusitemsx = msil.jdoidx)\n LEFT JOIN monitorstatusitemx msi ON (msil.statusitemlistx = msi.jdoidx)\nwhere\n monitorx.idx = 'M-TEST_1444-TEST_00_10560561260561463219352'\n ms.datex >= '2003-06-20 08:57:21.36' AND ms.datex <= '2003-06-29\n08:57:21.36'\n------------------------------- cut here -------------------------------\n\n", "msg_date": "Wed, 25 Jun 2003 16:15:53 +0200", "msg_from": "=?ISO-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of using index" }, { "msg_contents": "Michael,\n\nActually, you missed an alias :) the select now returned 800k rows!\n(according to explain)\n\npointed it out below. See my prev mail for more.\n\nIf it's possible, try your query on a backend and look for notices like\n\"Adding missing FROM clause for table ...\"\n\nG.\n------------------------------- cut here -------------------------------\n----- Original Message ----- \nFrom: \"Michael Mattox\" <[email protected]>\nCc: \"Postgresql Performance\" <[email protected]>\nSent: Wednesday, June 25, 2003 4:09 PM\n\n\n> from monitorx m, monitorstatusx ms, monitorstatusitemx msi\n> where m.idx = 'M-TEST_1444-TEST_00_10560561260561463219352' AND\n> monitorx.jdoidx = ms.monitorx AND\n ^^^^^^^^\n substitute the same alias \"m\" here.\n\n\n", "msg_date": "Wed, 25 Jun 2003 16:20:32 +0200", "msg_from": "=?ISO-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "> I didn't notice that before, thanks for pointing that out. I just tried\n> adding monitorx.idx to the select and it ended up making my query take\n> several minutes long. 
Any ideas how I can fix this and keep my performance?\n\nBy using it aliased and non-aliased (2 different references to the same\ntable) you've caused it to join itself.\n\nTry this:\n\nSELECT m.idx\n , ms.averageconnecttimex AS ms_averageconnecttime\n , ms.averagedurationx AS ms_averageduration\n , ms.datex AS ms_date\n , ms.idx AS ms_id\n , ms.statusstringx AS ms_statusstring\n , ms.statusx AS ms_status\n , msi.actualcontentx AS msi_actualcontent\n , msi.connecttimex AS msi_connecttime\n , msi.correctcontentx AS msi_correctcontent\n , msi.datex AS msi_date\n , msi.descriptionx AS msi_description\n , msi.durationx AS msi_duration\n , msi.errorcontentx AS msi_errorcontent\n , msi.idx AS msi_id\n , msi.monitorlocationx AS msi_monitorlocation\n , msi.statusstringx AS msi_statusstring\n , msi.statusx AS msi_status\n\n FROM monitorstatusx AS ms\n , monitorstatusitemx AS msi\n\n , monitorx AS mx\n , monitorstatus_statusitemsx AS mssisx\n , monitorstatusitemlistd8ea58a5x AS litem\n\n WHERE ms.jdoidx = mssisx.jdoidx\n AND mssisx.statusitemsx = litem.jdoidx\n AND litem.statusitemlistx = msi.jdoidx\n AND mx.jdoidx = ms.monitorx\n AND ms.datex BETWEEN '2003-06-20 08:57:21.36'\n AND '2003-06-29 08:57:21.36'\n AND m.idx = 'M-TEST_1444-TEST_00_10560561260561463219352'\n\nORDER BY ms.datex DESC;\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "25 Jun 2003 10:28:25 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "> Finally, decided to do an ad-hoc adjustment. Try this, or (wild guess) try\n> to completely eliminate the WHERE part by subselects on ms and monitorx.\n>\n> This may be faster, slower, or even give different results, based\n> on whether\n> I guessed the 1:N relationships right or not.\n\nIt's much slower but I appreciate you taking the time to try. I'm pretty\nnew to SQL so I must admin this query is very confusing for me. 
I'm using\nJava Data Objects (JDO, an O/R mapping framework) but the implementation I'm\nusing (Kodo) isn't smart enough to do all the joins efficiently, which is\nwhy I had to rewrite this query by hand.\n\nHere's the output:\n\nveriguard=# explain select ms.averageconnecttimex as ms_averageconnecttime,\nms.averagedurationx as ms_averageduration, ms.datex as ms_date, ms.idx as\nms_id, ms.statusstringx as ms_statusstring, ms.statusx as ms_status,\nmsi.actualcontentx as msi_actualcontent, msi.connecttimex as\nmsi_connecttime, msi.correctcontentx as msi_correctcontent, msi.datex as\nmsi_date, msi.descriptionx as msi_description, msi.durationx as\nmsi_duration, msi.errorcontentx as msi_errorcontent, msi.idx as msi_id,\nmsi.monitorlocationx as msi_monitorlocation, msi.statusstringx as\nmsi_statusstring, msi.statusx as msi_status from monitorstatusx ms LEFT JOIN\nmonitorx ON (monitorx.jdoidx = ms.monitorx) LEFT JOIN\nmonitorstatus_statusitemsx ms_si ON (ms.jdoidx = ms_si.jdoidx) LEFT JOIN\nmonitorstatusitemlistd8ea58a5x msil ON (ms_si.statusitemsx = msil.jdoidx)\nLEFT JOIN monitorstatusitemx msi ON (msil.statusitemlistx = msi.jdoidx)\nwhere monitorx.idx = 'M-TEST_1444-TEST_00_10560561260561463219352' AND\nms.datex >= '2003-06-20 08:57:21.36' AND ms.datex <= '2003-06-29\n08:57:21.36';\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------------------------------------\n Merge Join (cost=1006209.47..1283529.68 rows=751715 width=826)\n Merge Cond: (\"outer\".jdoidx = \"inner\".statusitemlistx)\n -> Index Scan using monitorstatusitemx_pkey on monitorstatusitemx msi\n(cost=0.00..247679.64 rows=6595427 width=665)\n -> Sort (cost=1006209.47..1008088.76 rows=751715 width=161)\n Sort Key: msil.statusitemlistx\n -> Merge Join (cost=697910.17..864079.59 rows=751715 width=161)\n Merge Cond: (\"outer\".jdoidx = \"inner\".statusitemsx)\n -> Index Scan using monitorstatusitejdoid7db0befci on\nmonitorstatusitemlistd8ea58a5x msil (cost=0.00..136564.80 rows=6595427\nwidth=16)\n -> Sort (cost=697910.17..699789.46 rows=751715 width=145)\n Sort Key: ms_si.statusitemsx\n -> Merge Join (cost=385727.49..561594.96 rows=751715\nwidth=145)\n Merge Cond: (\"outer\".jdoidx = \"inner\".jdoidx)\n -> Index Scan using\nmonitorstatus_stjdoidb742c9b3i on monitorstatus_statusitemsx ms_si\n(cost=0.00..146268.80 rows=6595427 width=16)\n -> Sort (cost=385727.49..387606.78 rows=751715\nwidth=129)\n Sort Key: ms.jdoidx\n -> Hash Join (cost=155.66..255240.65\nrows=751715 width=129)\n Hash Cond: (\"outer\".monitorx =\n\"inner\".jdoidx)\n Filter: (\"inner\".idx =\n'M-TEST_1444-TEST_00_10560561260561463219352'::character varying)\n -> Seq Scan on monitorstatusx ms\n(cost=0.00..240050.69 rows=751715 width=83)\n Filter: ((datex >= '2003-06-20\n08:57:21.36'::timestamp without time zone) AND (datex <= '2003-06-29\n08:57:21.36'::timestamp without time zone))\n -> Hash (cost=151.93..151.93\nrows=1493 width=46)\n -> Seq Scan on monitorx\n(cost=0.00..151.93 rows=1493 width=46)\n(22 rows)\n\nveriguard=#\n\n\n\n", "msg_date": "Wed, 25 Jun 2003 16:48:46 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to optimize monstrous query, sorts instead of using index" }, { "msg_contents": "With a slight correction (you had m & mx so I changed them to be all mx, I\nhope this is what you intended) this query works. 
It's exactly the same\nspeed, but it doesn't give me the warnings I was getting:\n\nNOTICE: Adding missing FROM-clause entry for table \"monitorx\"\nNOTICE: Adding missing FROM-clause entry for table\n\"monitorstatus_statusitemsx\"\nNOTICE: Adding missing FROM-clause entry for table\n\"monitorstatusitemlistd8ea58a5x\"\n\nI never knew what those were from, I even searched Google trying to find out\nand I couldn't understand it so I gave up. Thanks for pointing this out for\nme, and thanks for fixing my query.\n\nMichael\n\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Rod Taylor\n> Sent: Wednesday, June 25, 2003 4:28 PM\n> To: [email protected]\n> Cc: Postgresql Performance\n> Subject: Re: [PERFORM] How to optimize monstrous query, sorts instead of\n>\n>\n> > I didn't notice that before, thanks for pointing that out. I just tried\n> > adding monitorx.idx to the select and it ended up making my query take\n> > several minutes long. Any ideas how I can fix this and keep my\n> performance?\n>\n> By using it aliased and non-aliased (2 different references to the same\n> table) you've caused it to join itself.\n>\n> Try this:\n>\n> SELECT m.idx\n> , ms.averageconnecttimex AS ms_averageconnecttime\n> , ms.averagedurationx AS ms_averageduration\n> , ms.datex AS ms_date\n> , ms.idx AS ms_id\n> , ms.statusstringx AS ms_statusstring\n> , ms.statusx AS ms_status\n> , msi.actualcontentx AS msi_actualcontent\n> , msi.connecttimex AS msi_connecttime\n> , msi.correctcontentx AS msi_correctcontent\n> , msi.datex AS msi_date\n> , msi.descriptionx AS msi_description\n> , msi.durationx AS msi_duration\n> , msi.errorcontentx AS msi_errorcontent\n> , msi.idx AS msi_id\n> , msi.monitorlocationx AS msi_monitorlocation\n> , msi.statusstringx AS msi_statusstring\n> , msi.statusx AS msi_status\n>\n> FROM monitorstatusx AS ms\n> , monitorstatusitemx AS msi\n>\n> , monitorx AS mx\n> , monitorstatus_statusitemsx AS mssisx\n> , monitorstatusitemlistd8ea58a5x AS litem\n>\n> WHERE ms.jdoidx = mssisx.jdoidx\n> AND mssisx.statusitemsx = litem.jdoidx\n> AND litem.statusitemlistx = msi.jdoidx\n> AND mx.jdoidx = ms.monitorx\n> AND ms.datex BETWEEN '2003-06-20 08:57:21.36'\n> AND '2003-06-29 08:57:21.36'\n> AND m.idx = 'M-TEST_1444-TEST_00_10560561260561463219352'\n>\n> ORDER BY ms.datex DESC;\n>\n> --\n> Rod Taylor <[email protected]>\n>\n> PGP Key: http://www.rbt.ca/rbtpub.asc\n>\n\n\n", "msg_date": "Wed, 25 Jun 2003 17:09:21 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n>> monitorstatusx_datex_monitorx_index on monitorstatusx ms\n>> (cost=3D0.00..1159.33 rows=3D890 width=3D83) (actual time=3D0.19..1287.02=\n> rows=3D628\n>> loops=3D1)\n>> Index Cond: ((\"outer\".jdoidx =3D ms.moni=\n> torx)\n>> AND (ms.datex >=3D '2003-06-20 08:57:21.36'::timestamp without time zone)=\n> AND\n>> (ms.datex <=3D '2003-06-29 08:57:21.36'::timestamp without time zone))\n\n> You can see that it used the new multi-key index for both items, rather\n> than finding for monitorx, then filtering out unwanted results by datex.\n\nWhat is the column ordering of the combined index? Unless datex is the\nfirst column, there is no chance of using it to create the required sort\norder anyway. 
I think this index condition is suggesting that monitorx\nis the first column.\n\nHowever, I agree with Rod's point that \"avoid the sort\" is not the\nmindset to use to optimize this query. The joins are the problem.\nYou might try forcing different join types (see enable_nestloop and\nfriends) to get an idea of whether a different plan is likely to help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Jun 2003 11:55:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of " }, { "msg_contents": "\"Michael Mattox\" <[email protected]> writes:\n> It's much slower but I appreciate you taking the time to try. I'm pretty\n> new to SQL so I must admin this query is very confusing for me. I'm using\n> Java Data Objects (JDO, an O/R mapping framework) but the implementation I'm\n> using (Kodo) isn't smart enough to do all the joins efficiently, which is\n> why I had to rewrite this query by hand.\n\nIt wasn't till I read that :-( that I noticed that you were doing nested\nleft joins. Fooling with the join order may be your best route to a\nsolution --- have you read\nhttp://www.postgresql.org/docs/view.php?version=7.3&idoc=0&file=explicit-joins.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Jun 2003 12:39:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of using index " }, { "msg_contents": "> Try this:\n\nRod, you improved my query last week (thank you very much) but I'm not sure\nwhy but my performance is getting worse. I think I know what happened, when\nI did my load testing I created data that all had the same date, so sorting\non the date was very fast. But now I've been running the system for a few\nweeks I've got a range of dates and now the sort is very costly. I'm\ncurious if it's possible to optimize this with an index? 
I've tried\ncreating some indexes but they're never used.\n\nexplain analyze SELECT mx.idx , ms.averageconnecttimex AS\nms_averageconnecttime , ms.averagedurationx AS ms_averageduration , ms.datex\nAS ms_date , ms.idx AS ms_id , ms.statusstringx AS ms_statusstring ,\nms.statusx AS ms_status , msi.actualcontentx AS msi_actualcontent ,\nmsi.connecttimex AS msi_connecttime , msi.correctcontentx AS\nmsi_correctcontent , msi.datex AS msi_date , msi.descriptionx AS\nmsi_description , msi.durationx AS msi_duration , msi.errorcontentx AS\nmsi_errorcontent , msi.idx AS msi_id , msi.monitorlocationx AS\nmsi_monitorlocation , msi.statusstringx AS msi_statusstring , msi.statusx AS\nmsi_status FROM monitorstatusx AS ms , monitorstatusitemx AS msi , monitorx\nAS mx , monitorstatus_statusitemsx AS mssisx ,\nmonitorstatusitemlistd8ea58a5x AS litem WHERE ms.jdoidx = mssisx.jdoidx AND\nmssisx.statusitemsx = litem.jdoidx AND litem.statusitemlistx = msi.jdoidx\nAND mx.jdoidx = ms.monitorx AND ms.datex BETWEEN '2003-07-01\n00:00:00.000000+01' AND '2003-07-01 23:59:59.000000+01' AND mx.idx =\n'M-TEST_150-TEST_01_10560776551771895174239' ORDER BY ms.datex DESC;\n\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-----------------------------------\n Sort (cost=6882.84..6883.08 rows=97 width=827) (actual\ntime=16712.46..16712.65 rows=225 loops=1)\n Sort Key: ms.datex\n -> Nested Loop (cost=0.00..6879.66 rows=97 width=827) (actual\ntime=4413.12..16711.62 rows=225 loops=1)\n -> Nested Loop (cost=0.00..6587.53 rows=97 width=162) (actual\ntime=4406.06..15941.16 rows=225 loops=1)\n -> Nested Loop (cost=0.00..6295.38 rows=97 width=146)\n(actual time=4383.59..15424.96 rows=225 loops=1)\n -> Nested Loop (cost=0.00..6003.22 rows=97 width=130)\n(actual time=4383.53..14938.02 rows=225 loops=1)\n -> Index Scan using monitorx_id_index on\nmonitorx mx (cost=0.00..5.01 rows=1 width=46) (actual time=0.13..0.21\nrows=1 loops=1)\n Index Cond: (idx =\n'M-TEST_150-TEST_01_10560776551771895174239'::character varying)\n -> Index Scan using monitorstatusxmonitori on\nmonitorstatusx ms (cost=0.00..5996.18 rows=163 width=84) (actual\ntime=4383.38..14936.39 rows=225 loops=1)\n Index Cond: (\"outer\".jdoidx = ms.monitorx)\n Filter: ((datex >= '2003-07-01\n00:00:00'::timestamp without time zone) AND (datex <= '2003-07-01\n23:59:59'::timestamp without time zone))\n -> Index Scan using monitorstatus_stjdoidb742c9b3i on\nmonitorstatus_statusitemsx mssisx (cost=0.00..3.01 rows=1 width=16) (actual\ntime=2.15..2.15 rows=1 loops=225)\n Index Cond: (\"outer\".jdoidx = mssisx.jdoidx)\n -> Index Scan using monitorstatusitejdoid7db0befci on\nmonitorstatusitemlistd8ea58a5x litem (cost=0.00..3.01 rows=1 width=16)\n(actual time=2.28..2.28 rows=1 loops=225)\n Index Cond: (\"outer\".statusitemsx = litem.jdoidx)\n -> Index Scan using monitorstatusitemx_pkey on monitorstatusitemx\nmsi (cost=0.00..3.01 rows=1 width=665) (actual time=3.41..3.41 rows=1\nloops=225)\n Index Cond: (\"outer\".statusitemlistx = msi.jdoidx)\n Total runtime: 16713.25 msec\n(18 rows)\n\nAs you can see it takes 16 seconds to return only 18 rows. The\nmonitorstatusx table has over 7 million rows, and for each monitor status\nthere's one row in each of the monitorstatusitemx and the join tables. So I\nthink the size of the database is just too high for this sort. 
I run my\nreports offline, but what I'm finding is that at 16 seconds per report, the\nreports aren't finished by morning. My postgresql.conf is attached in case\nI have it configured incorrectly.\n\nThanks,\nMichael\n\n\n", "msg_date": "Wed, 2 Jul 2003 12:24:23 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "> My postgresql.conf is attached in case I have it configured incorrectly.\n\nForgot my postgres.conf..", "msg_date": "Wed, 2 Jul 2003 12:28:10 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "On Wed, 2003-07-02 at 10:28, Michael Mattox wrote:\n> > My postgresql.conf is attached in case I have it configured incorrectly.\n> \n> Forgot my postgres.conf..\n\nShared buffers is probably too high. How much memory in this machine? \nIs there anything else running aside from PostgreSQL? What does top say\nabout cached / buffered data (number)\n\nI see you reduced the random_page_cost to 1.5. Why did you do this (how\nis your disk subsystem configured)?\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "02 Jul 2003 12:45:23 +0000", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "On Wed, 2003-07-02 at 10:24, Michael Mattox wrote:\n> > Try this:\n> \n> Rod, you improved my query last week (thank you very much) but I'm not sure\n> why but my performance is getting worse. I think I know what happened, when\n> I did my load testing I created data that all had the same date, so sorting\n> on the date was very fast. But now I've been running the system for a few\n> weeks I've got a range of dates and now the sort is very costly. I'm\n> curious if it's possible to optimize this with an index? I've tried\n> creating some indexes but they're never used.\n\nStandard questions, did you VACUUM? Regularly? Want to try again and\nsend us the output from VACUUM VERBOSE?\n\nSounds like you created a ton of test data, then removed a bunch? Did\nyou REINDEX that table?\n\nDuring normal use, what is your query spread like? Mostly selects with\nsome inserts? Any updates or deletes? How often to updates or deletes\ncome in, and how many rows do they effect?\n\n> -> Index Scan using monitorstatusxmonitori on\n> monitorstatusx ms (cost=0.00..5996.18 rows=163 width=84) (actual\n> time=4383.38..14936.39 rows=225 loops=1)\n> Index Cond: (\"outer\".jdoidx = ms.monitorx)\n> Filter: ((datex >= '2003-07-01\n> 00:00:00'::timestamp without time zone) AND (datex <= '2003-07-01\n> 23:59:59'::timestamp without time zone))\n\nThe above index scan is taking a vast majority of the time (nearly 15\nseconds of the 16 second total -- stop thinking about sorts!).. What\nhappened to the index on monitorx and datex?\n\n-- \nRod Taylor <[email protected]>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "02 Jul 2003 12:51:51 +0000", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" }, { "msg_contents": "> Shared buffers is probably too high. How much memory in this machine?\n> Is there anything else running aside from PostgreSQL? 
What does top say\n> about cached / buffered data (number)\n\nI was using the 25% of RAM guideline posted recently. The machine has\n1.5gig but it also has a couple other java applications running on it\nincluding tomcat.\n\n 1:56pm up 6 days, 2:58, 6 users, load average: 2.60, 2.07, 1.78\n193 processes: 191 sleeping, 2 running, 0 zombie, 0 stopped\nCPU0 states: 14.0% user, 9.0% system, 0.0% nice, 75.1% idle\nCPU1 states: 31.0% user, 0.1% system, 0.0% nice, 67.0% idle\nCPU2 states: 5.0% user, 0.1% system, 0.1% nice, 93.0% idle\nCPU3 states: 0.0% user, 0.1% system, 0.1% nice, 98.0% idle\nMem: 1547572K av, 1537848K used, 9724K free, 0K shrd, 25104K\nbuff\nSwap: 1044216K av, 51352K used, 992864K free 1245460K\ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n10184 veriguar 9 0 94760 83M 2612 S 36.5 5.5 13:29 java\n 8990 postgres 17 0 54864 53M 53096 R 11.5 3.5 0:00 postmaster\n 8988 veriguar 11 0 1164 1164 836 R 1.9 0.0 0:00 top\n10161 veriguar 13 5 69504 60M 2600 S N 0.9 3.9 13:11 java\n10206 veriguar 13 5 27952 23M 2580 S N 0.9 1.5 7:21 java\n10699 postgres 9 0 31656 30M 30396 S 0.9 2.0 0:02 postmaster\n\n total used free shared buffers cached\nMem: 1547572 1532024 15548 0 23820 1239024\n-/+ buffers/cache: 269180 1278392\nSwap: 1044216 51368 992848\n\n> I see you reduced the random_page_cost to 1.5. Why did you do this (how\n> is your disk subsystem configured)?\n\nSomeone suggested I lower it to 1.5 or 1.0, not sure what the reasoning was.\nThe disks are both SCSI 10,000 RPM. My data directory is on one disk by\nitself, and the pg_xlog is on the other disk as well as the operating system\nand everything else. I was told it's best to have them on seperate disks,\nhowever I'm wondering because my system only has two disks and the one with\nthe operating system isn't big enough to hold my database therefore I must\nput my DB on the 2nd disk and if pg_xlog is to be separate, it has to be\nwith the OS & Java apps.\n\n> Standard questions, did you VACUUM? Regularly? Want to try again and\n> send us the output from VACUUM VERBOSE?\n\nI vacuum the monitor table every 5 minutes and I do a vacuum full analyze\nevery night at midnight (cron job).\nI just did a vacuum verbose, output is attached.\n\n> Sounds like you created a ton of test data, then removed a bunch? Did\n> you REINDEX that table?\n\nI haven't deleted any of the data, I've been continuously adding new data.\nI added about 6 million rows at once, and they all had the same date. Since\nthen my application has been stress testing over about 2 weeks now so\nthere's now 7693057 rows in monitorstatusx and monitorstatusitemx as well as\nthe necessary rows for the join tables.\n\n> During normal use, what is your query spread like? Mostly selects with\n> some inserts? Any updates or deletes? How often to updates or deletes\n> come in, and how many rows do they effect?\n\nThere is a query on monitorx by datex every 10 seconds (monitors are updated\nevery 5 minutes, so every 10 seconds I get the monitors that are due for an\nupdate). Each monitor is then saved with its status field modified, and a\nnew status item is inserted. This happens every 5 minutes. There are 8-16\nmonitors being run in parallel, although typically it's 8 or less. This is\nthe main application. The reporting application does a few queries but\nnothing major except the query that is the subject of this email. It's the\nreporting app that is slow due to this one big query. 
Finally the web app\nexecutes the same query as the reporting app, except there is a lot less\ndata to be returned since it's only for the current day.\n\n> > -> Index Scan using\n> monitorstatusxmonitori on\n> > monitorstatusx ms (cost=0.00..5996.18 rows=163 width=84) (actual\n> > time=4383.38..14936.39 rows=225 loops=1)\n> > Index Cond: (\"outer\".jdoidx =\n> ms.monitorx)\n> > Filter: ((datex >= '2003-07-01\n> > 00:00:00'::timestamp without time zone) AND (datex <= '2003-07-01\n> > 23:59:59'::timestamp without time zone))\n>\n> The above index scan is taking a vast majority of the time (nearly 15\n> seconds of the 16 second total -- stop thinking about sorts!).. What\n> happened to the index on monitorx and datex?\n\nI just did\n\nreindex table monitorstatux;\n\nwhich didn't help, in fact query times went up. I then did\n\ncreate index monitorstatus_monitor_date_i on monitorstatusx(monitorx,\ndatex);\n\nand this seemed to help a little:\n\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------\n Sort (cost=1133.13..1133.38 rows=98 width=827) (actual\ntime=9754.06..9754.25 rows=226 loops=1)\n Sort Key: ms.datex\n -> Nested Loop (cost=0.00..1129.90 rows=98 width=827) (actual\ntime=50.81..9753.17 rows=226 loops=1)\n -> Nested Loop (cost=0.00..833.47 rows=98 width=162) (actual\ntime=50.74..7149.28 rows=226 loops=1)\n -> Nested Loop (cost=0.00..537.04 rows=98 width=146)\n(actual time=50.67..4774.45 rows=226 loops=1)\n -> Nested Loop (cost=0.00..240.44 rows=98 width=130)\n(actual time=50.61..1515.10 rows=226 loops=1)\n -> Index Scan using monitorx_id_index on\nmonitorx mx (cost=0.00..3.45 rows=1 width=46) (actual time=0.09..0.11\nrows=1 loops=1)\n Index Cond: (idx =\n'M-TEST_170-TEST_00_10560857890510173779233'::character varying)\n -> Index Scan using monitorstatus_monitor_date_i\non monitorstatusx ms (cost=0.00..234.93 rows=165 width=84) (actual\ntime=50.51..1513.21 rows=226 loops=1)\n Index Cond: ((\"outer\".jdoidx = ms.monitorx)\nAND (ms.datex >= '2003-07-01 00:00:00'::timestamp without time zone) AND\n(ms.datex <= '2003-07-01 23:59:59'::timestamp without time zone))\n -> Index Scan using monitorstatus_stjdoidb742c9b3i on\nmonitorstatus_statusitemsx mssisx (cost=0.00..3.01 rows=1 width=16) (actual\ntime=14.40..14.41 rows=1 loops=226)\n Index Cond: (\"outer\".jdoidx = mssisx.jdoidx)\n -> Index Scan using monitorstatusitejdoid7db0befci on\nmonitorstatusitemlistd8ea58a5x litem (cost=0.00..3.01 rows=1 width=16)\n(actual time=10.49..10.49 rows=1 loops=226)\n Index Cond: (\"outer\".statusitemsx = litem.jdoidx)\n -> Index Scan using monitorstatusitemx_pkey on monitorstatusitemx\nmsi (cost=0.00..3.01 rows=1 width=665) (actual time=11.50..11.50 rows=1\nloops=226)\n Index Cond: (\"outer\".statusitemlistx = msi.jdoidx)\n Total runtime: 9754.64 msec\n(17 rows)\n\nBefore I guess the index with monitorx,datex didn't do much because all the\ndata had the same date. But now that I have over 2 weeks of real data, it\nmakes a difference.\n\nThanks,\nMichael", "msg_date": "Wed, 2 Jul 2003 15:46:36 +0200", "msg_from": "\"Michael Mattox\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to optimize monstrous query, sorts instead of" } ]
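To recap the thread above: the change that finally paid off was a composite index covering both the join column and the date range, plus keeping every reference to monitorx on a single alias so the planner stops joining the table to itself. A minimal sketch using only names that appear in the messages above (the monitor id value is a placeholder; run ANALYZE afterwards so the planner sees fresh statistics):

    -- equality column first, range column second
    CREATE INDEX monitorstatus_monitor_date_i
        ON monitorstatusx (monitorx, datex);
    ANALYZE monitorstatusx;

    -- re-check the plan: the index scan should now carry both conditions
    EXPLAIN ANALYZE
    SELECT ms.datex, ms.statusx
      FROM monitorx mx, monitorstatusx ms
     WHERE mx.idx = 'M-TEST_...'        -- placeholder monitor id
       AND ms.monitorx = mx.jdoidx
       AND ms.datex BETWEEN '2003-07-01 00:00:00' AND '2003-07-01 23:59:59'
     ORDER BY ms.datex DESC;

Referencing monitorx both through an alias and by its bare table name in one statement is what produced the "Adding missing FROM-clause entry" notices and the accidental self-join earlier in the thread.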
[ { "msg_contents": "Hi all!\n\nI have a strange behavior with this query:\n\nSELECT c.id_contenido,p.fecha_publicacion,c.titulo_esp,c.activo,c.activo,s.label_esp as label_sbc,p.orden\n,p.tapa_spc,p.tapa_cat,p.tapa_principal,p.id_publicacion,ca.label_esp as label_cat,sp.label_esp as label_spc\nFROM cont_contenido c ,cont_publicacion p ,cont_sbc s ,cont_cat ca ,cont_spc sp\n WHERE c.id_instalacion = 2\nAND s.id_instalacion = 2\nAND p.id_instalacion = 2\nAND c.id_contenido = p.id_contenido\nAND c.id_sbc = s.id_sbc\n--AND (c.activo = 'S' or c.activo = 's')\n--AND (s.activo = 'S' or s.activo = 's')\nAND upper(c.activo) = 'S'\nAND upper(s.activo) = 'S'\nAND ca.id_instalacion = 2\nAND sp.id_instalacion = 2\nAND ca.id_cat = s.id_cat\nAND sp.id_spc = ca.id_spc\nORDER BY sp.label_esp ,ca.label_esp ,p.orden\n\nThis is the execution plan:\nSort (cost=128.81..128.83 rows=5 width=189)\n Sort Key: sp.label_esp, ca.label_esp, p.orden\n -> Nested Loop (cost=0.00..128.76 rows=5 width=189)\n Join Filter: (\"outer\".id_contenido = \"inner\".id_contenido)\n -> Nested Loop (cost=0.00..24.70 rows=1 width=134)\n Join Filter: (\"inner\".id_spc = \"outer\".id_spc)\n -> Nested Loop (cost=0.00..22.46 rows=1 width=111)\n -> Nested Loop (cost=0.00..6.89 rows=1 width=68)\n Join Filter: (\"inner\".id_cat = \"outer\".id_cat)\n -> Seq Scan on cont_sbc s (cost=0.00..4.44 rows=1 width=35)\n Filter: ((id_instalacion = 2::numeric) AND (upper((activo)::text) = 'S'::text))\n -> Seq Scan on cont_cat ca (cost=0.00..2.31 rows=11 width=33)\n Filter: (id_instalacion = 2::numeric)\n -> Index Scan using cont_cont_cont_sbc_fk_i on cont_contenido c (cost=0.00..15.56 rows=1 width=43)\n Index Cond: ((c.id_instalacion = 2::numeric) AND (c.id_sbc = \"outer\".id_sbc))\n Filter: (upper((activo)::text) = 'S'::text)\n -> Seq Scan on cont_spc sp (cost=0.00..2.16 rows=6 width=23)\n Filter: (id_instalacion = 2::numeric)\n -> Seq Scan on cont_publicacion p (cost=0.00..98.54 rows=442 width=55)\n Filter: (id_instalacion = 2::numeric)\n\nIf I replace both \"uppers\" with \"...= 'S' or ...= 's'\":\n\nSELECT c.id_contenido,p.fecha_publicacion,c.titulo_esp,c.activo,c.activo,s.label_esp as label_sbc,p.orden\n,p.tapa_spc,p.tapa_cat,p.tapa_principal,p.id_publicacion,ca.label_esp as label_cat,sp.label_esp as label_spc\nFROM cont_contenido c ,cont_publicacion p ,cont_sbc s ,cont_cat ca ,cont_spc sp\n WHERE c.id_instalacion = 2\nAND s.id_instalacion = 2\nAND p.id_instalacion = 2\nAND c.id_contenido = p.id_contenido\nAND c.id_sbc = s.id_sbc\nAND (c.activo = 'S' or c.activo = 's')\nAND (s.activo = 'S' or s.activo = 's')\nAND ca.id_instalacion = 2\nAND sp.id_instalacion = 2\nAND ca.id_cat = s.id_cat\nAND sp.id_spc = ca.id_spc\nORDER BY sp.label_esp ,ca.label_esp ,p.orden\n\nThis is the Execution plan:\n\nSort (cost=193.98..194.62 rows=256 width=189)\n Sort Key: sp.label_esp, ca.label_esp, p.orden\n -> Merge Join (cost=178.07..183.75 rows=256 width=189)\n Merge Cond: (\"outer\".id_contenido = \"inner\".id_contenido)\n -> Sort (cost=60.11..60.25 rows=56 width=134)\n Sort Key: c.id_contenido\n -> Merge Join (cost=57.31..58.50 rows=56 width=134)\n Merge Cond: (\"outer\".id_sbc = \"inner\".id_sbc)\n -> Sort (cost=10.60..10.64 rows=15 width=91)\n Sort Key: s.id_sbc\n -> Merge Join (cost=10.00..10.32 rows=15 width=91)\n Merge Cond: (\"outer\".id_cat = \"inner\".id_cat)\n -> Sort (cost=5.10..5.12 rows=10 width=56)\n Sort Key: ca.id_cat\n -> Merge Join (cost=4.74..4.94 rows=10 width=56)\n Merge Cond: (\"outer\".id_spc = \"inner\".id_spc)\n -> Sort (cost=2.50..2.53 rows=11 
width=33)\n Sort Key: ca.id_spc\n -> Seq Scan on cont_cat ca (cost=0.00..2.31 rows=11 width=33)\n Filter: (id_instalacion = 2::numeric)\n -> Sort (cost=2.24..2.26 rows=6 width=23)\n Sort Key: sp.id_spc\n -> Seq Scan on cont_spc sp (cost=0.00..2.16 rows=6 width=23)\n Filter: (id_instalacion = 2::numeric)\n -> Sort (cost=4.90..4.96 rows=21 width=35)\n Sort Key: s.id_cat\n -> Seq Scan on cont_sbc s (cost=0.00..4.44 rows=21 width=35)\n Filter: ((id_instalacion = 2::numeric) AND ((activo = 'S'::character varying) OR (activo = 's'::character varying)))\n -> Sort (cost=46.70..46.94 rows=93 width=43)\n Sort Key: c.id_sbc\n -> Seq Scan on cont_contenido c (cost=0.00..43.66 rows=93 width=43)\n Filter: ((id_instalacion = 2::numeric) AND ((activo = 'S'::character varying) OR (activo = 's'::character varying)))\n -> Sort (cost=117.96..119.06 rows=442 width=55)\n Sort Key: p.id_contenido\n -> Seq Scan on cont_publicacion p (cost=0.00..98.54 rows=442 width=55)\n Filter: (id_instalacion = 2::numeric)\n\n\nThe question is, why the query with the worst execution plan (most expensive, the second) runs faster the query with the better execution plan?\nFirst Query: 10 runs, avg: 8 sec.\nSecond Query: 10 runs, avg: 1.8 sec.\n\nI see a fail on the \"best\" exec plan, the rows I get are around 430, so the first EP expect only 5 rows and the second EP expect 256.\n\nI run 7.3.2 over Solaris.\nI did \"vacuum full analyze\" before \n\nThanks in advance!\n\n\nFernando.-\n", "msg_date": "Wed, 25 Jun 2003 16:25:44 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Similar querys, better execution time on worst execution plan" }, { "msg_contents": "Fernando,\n\n1. Try EXPLAIN ANALYZE. Cost alone isn't an absolute measure. I think it's\nonly to see which parts of the query are expected to be slowest. However,\nEXP ANA will give you exact times in msec (which effectively means it\nexecutes the query).\n\n2. I think calling upper() for each row costs more than direct comparison,\nbut not sure\n\n3. Notice that there are seq scans with filter conditions like\n \"id_instalacion = 2::numeric\"\n Do you have indices on id_instalacion, which seems to be a numeric field?\nif so, try casting the constant expressions in the query to numeric so that\npostgresql may find the index. If you don't have such indices, it may be\nworth to create them. (I guess you only have it on the table aliased with c,\nsince it does an index scan there.\n\n4. 
another guess may be indices on (id_instalacion, activo), or, if activo\nhas few possible values (for example, it may be only one of three letters,\nsay, 'S', 'A' or 'K'), partial indices like:\n\nCREATE INDEX cont_sbc_id_ins_S ON cont_sbc (id_instalacion)\n WHERE activo in ('S', 's');\nCREATE INDEX cont_sbc_id_ins_A ON cont_sbc (id_instalacion)\n WHERE activo in ('A', 'a');\nCREATE INDEX cont_sbc_id_ins_K ON cont_sbc (id_instalacion)\n WHERE activo in ('K', 'k');\n\nG.\n------------------------------- cut here -------------------------------\n WHERE c.id_instalacion = 2\nAND s.id_instalacion = 2\nAND p.id_instalacion = 2\n...\n\n -> Seq Scan on cont_sbc s (cost=0.00..4.44 rows=1 width=35)\n Filter: ((id_instalacion = 2::numeric)\n AND (upper((activo)::text) = 'S'::text))\n -> Index Scan using cont_cont_cont_sbc_fk_i on cont_contenido c\n (cost=0.00..15.56 rows=1 width=43)\n Index Cond: ((c.id_instalacion = 2::numeric)\n AND (c.id_sbc = \"outer\".id_sbc))\n Filter: (upper((activo)::text) = 'S'::text)\n -> Seq Scan on cont_publicacion p (cost=0.00..98.54 rows=442 width=55)\n Filter: (id_instalacion = 2::numeric)\n\n\n", "msg_date": "Thu, 26 Jun 2003 12:30:59 +0200", "msg_from": "=?iso-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Similar querys, better execution time on worst execution plan" } ]
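The concrete follow-ups to the advice above are indexes that match the filters as written. A rough sketch along the lines of points 3 and 4; the index names are invented for illustration, whether they pay off on such small tables has to be confirmed with EXPLAIN ANALYZE, and the partial indexes are only usable if the query keeps the (activo = 'S' OR activo = 's') form, because the index predicate must match the WHERE clause:

    -- point 3: index the numeric key columns that are currently seq-scanned
    CREATE INDEX cont_publicacion_inst_i ON cont_publicacion (id_instalacion);
    CREATE INDEX cont_cat_inst_i         ON cont_cat (id_instalacion);
    CREATE INDEX cont_spc_inst_i         ON cont_spc (id_instalacion);

    -- point 4: partial indexes for the active-row filters
    CREATE INDEX cont_sbc_inst_activo_i
        ON cont_sbc (id_instalacion) WHERE activo IN ('S', 's');
    CREATE INDEX cont_contenido_inst_activo_i
        ON cont_contenido (id_instalacion) WHERE activo IN ('S', 's');

    ANALYZE;

Writing the constant as 2::numeric in the query (or keeping the column and the literal the same type) also helps the planner match these indexes, since id_instalacion is numeric.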
[ { "msg_contents": "We are evaluating PostgreSQL for a typical data warehouse application. I\nhave 3 tables below that are part of a Star schema design. The query listed\nbelow runs in 16 seconds on Oracle 9.2 and 3+ minutes on PostgreSQL 7.3.3\nHere are the details.\n\nI'm wondering what else can be done to tune this type of query. Is 3\nminutes reasonable given the amount of data that is loaded into the 3\ntables? Is there anyone else who has made comparisons between Oracle and\nPostgreSQL?\n\n----------------------------------------------------------------------------\n------------------------------------------------------------------------\nOracle 9.2 is running on a windows/2000 server, 600MHz PIII, 512MB ram\n\tShared Pool\t48MB\n\tBuffer Cache\t98MB\n\tLarge Pool\t 8MB\n\tJava Pool\t32MB\n\t\t\t=========\n\tTotal SGA\t186MB\n\n----------------------------------------------------------------------------\n------------------------------------------------------------------------\nPostgreSQL is running on Redhat Linux 7.2, 733MHz PIII processor, 383MB ram.\n\tshared_buffers = 12384\t\t(96 MB)\n\tsort_mem = 16384\n\n----------------------------------------------------------------------------\n------------------------------------------------------------------------\n\nexplain analyze\nselect fiscalyearquarter, description, sum(amount_quantity)\nfrom time t, revenue r, statistic s\nWhere t.fiscalyear = 2002\nand r.timekey = t.timekey\nand r.statisticskey = s.statisticskey\ngroup by fiscalyearquarter, description;\n\n QUERY\nPLAN \n----------------------------------------------------------------------------\n------------------------------------------------------------------------\n Aggregate (cost=124685.74..127078.87 rows=23931 width=48) (actual\ntime=170682.53..189640.85 rows=8 loops=1)\n -> Group (cost=124685.74..126480.59 rows=239313 width=48) (actual\ntime=169508.49..185478.90 rows=1082454 loops=1)\n -> Sort (cost=124685.74..125284.02 rows=239313 width=48) (actual\ntime=169508.47..171853.03 rows=1082454 loops=1)\n Sort Key: t.fiscalyearquarter, s.description\n -> Hash Join (cost=6.46..94784.90 rows=239313 width=48)\n(actual time=140.20..47685.46 rows=1082454 loops=1)\n Hash Cond: (\"outer\".statisticskey =\n\"inner\".statisticskey)\n -> Hash Join (cost=5.43..90595.90 rows=239313\nwidth=32) (actual time=139.96..39672.76 rows=1082454 loops=1)\n Hash Cond: (\"outer\".timekey = \"inner\".timekey)\n -> Seq Scan on revenue r (cost=0.00..68454.04\nrows=3829004 width=17) (actual time=0.01..26336.95 rows=3829004 loops=1)\n -> Hash (cost=5.40..5.40 rows=12 width=15)\n(actual time=0.79..0.79 rows=0 loops=1)\n -> Seq Scan on \"time\" t (cost=0.00..5.40\nrows=12 width=15) (actual time=0.36..0.75 rows=12 loops=1)\n Filter: (fiscalyear = 2002::numeric)\n -> Hash (cost=1.02..1.02 rows=2 width=16) (actual\ntime=0.04..0.04 rows=0 loops=1)\n -> Seq Scan on statistic s (cost=0.00..1.02\nrows=2 width=16) (actual time=0.02..0.03 rows=2 loops=1)\n Total runtime: 195409.79 msec\n\n\nThis gives you an idea of the size of each table in the query\n----------------------------------------------------------------------------\n------------------------------------------------------------------------\n\npubnet=# vacuum analyze verbose revenue;\nINFO: --Relation dw.revenue--\nINFO: Pages 30164: Changed 0, Empty 0; Tup 3829004: Vac 0, Keep 0, UnUsed\n17.\n Total CPU 1.87s/0.73u sec elapsed 9.97 sec.\nINFO: Analyzing dw.revenue\nVACUUM\npubnet=# vacuum analyze verbose statistic;\nINFO: --Relation dw.statistic--\nINFO: 
Pages 1: Changed 0, Empty 0; Tup 2: Vac 0, Keep 0, UnUsed 1.\n Total CPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: Analyzing dw.statistic\nVACUUM\npubnet=# vacuum analyze verbose time;\nINFO: --Relation dw.time--\nINFO: Pages 3: Changed 0, Empty 0; Tup 192: Vac 0, Keep 0, UnUsed 33.\n Total CPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: Analyzing dw.time\nVACUUM\npubnet=# \n\n\nI tried to disable the use of hash join to see what might happen. This\ncauses the optimizer to use a merge join. The timings are worse.\n\nHere is the plan for that\n\n \nQUERY PLAN\n\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------\n Aggregate (cost=665570.44..667963.57 rows=23931 width=48) (actual\ntime=362121.97..381081.18 rows=8 loops=1)\n -> Group (cost=665570.44..667365.29 rows=239313 width=48) (actual\ntime=360948.51..376904.14 rows=1082454 loops=1)\n -> Sort (cost=665570.44..666168.72 rows=239313 width=48) (actual\ntime=360948.48..363285.85 rows=1082454 loops=1)\n Sort Key: t.fiscalyearquarter, s.description\n -> Merge Join (cost=631481.61..635669.60 rows=239313\nwidth=48) (actual time=263257.77..276625.27 rows=1082454 loops=1)\n Merge Cond: (\"outer\".statisticskey =\n\"inner\".statisticskey)\n -> Sort (cost=631480.58..632078.86 rows=239313\nwidth=32) (actual time=260561.38..264151.04 rows=1082454 loops=1)\n Sort Key: r.statisticskey\n -> Merge Join (cost=587963.25..610099.74\nrows=239313 width=32) (actual time=217380.88..231958.36 rows=1082454\nloops=1)\n Merge Cond: (\"outer\".timekey =\n\"inner\".timekey)\n -> Sort (cost=5.62..5.65 rows=12\nwidth=15) (actual time=14.90..14.92 rows=12 loops=1)\n Sort Key: t.timekey\n -> Seq Scan on \"time\" t\n(cost=0.00..5.40 rows=12 width=15) (actual time=13.47..14.83 rows=12\nloops=1)\n Filter: (fiscalyear =\n2002::numeric)\n -> Sort (cost=587957.63..597530.14\nrows=3829004 width=17) (actual time=214776.92..224634.94 rows=1455997\nloops=1)\n Sort Key: r.timekey\n -> Seq Scan on revenue r\n(cost=0.00..68454.04 rows=3829004 width=17) (actual time=1.33..31014.95\nrows=3829004 loops=1)\n -> Sort (cost=1.03..1.03 rows=2 width=16) (actual\ntime=2696.35..3765.93 rows=541228 loops=1)\n Sort Key: s.statisticskey\n -> Seq Scan on statistic s (cost=0.00..1.02\nrows=2 width=16) (actual time=19.50..19.52 rows=2 loops=1)\n Total runtime: 385939.85 msec\n\n\nThe Query plan in Oracle looks like this...\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n--------\nSORT GROUP BY\n\tHASH JOIN\n\t\tMERGE JOIN CARTESIAN\n\t\t\tTABLE ACCESS FULL\t\t\tDBA_ADMIN\nSTATISTIC\n\t\t\tBUFFER SORT\n\t\t\t\tTABLE ACCESS FULL\t\tDBA_ADMIN\nTIME\n\t\tTABLE ACCESS FULL\t\t\t\tDBA_ADMIN\nREVENUE\n\n", "msg_date": "Wed, 25 Jun 2003 16:33:16 -0500", "msg_from": "\"Sailer, Denis (YBUSA-CDR)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query running slower than same on Oracle" }, { "msg_contents": "\"Sailer, Denis (YBUSA-CDR)\" <[email protected]> writes:\n> We are evaluating PostgreSQL for a typical data warehouse application. I\n> have 3 tables below that are part of a Star schema design. The query listed\n> below runs in 16 seconds on Oracle 9.2 and 3+ minutes on PostgreSQL 7.3.3\n> Here are the details.\n\nThe majority of the runtime seems to be going into the sort step. 
There\nis not much to be done about this in 7.3, but 7.4 should use a hashed\naggregation approach for this query, which'd eliminate the sort step and\nhopefully reduce the time a great deal. Since you're only doing\nevaluation at this point, it might be worth your while to try out CVS\ntip ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Jun 2003 17:50:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query running slower than same on Oracle " }, { "msg_contents": "Denis,\n\n> I'm wondering what else can be done to tune this type of query. Is 3\n> minutes reasonable given the amount of data that is loaded into the 3\n> tables? Is there anyone else who has made comparisons between Oracle and\n> PostgreSQL?\n\nWe will probably be a bit slower on aggregates than Oracle is, for reasons \ndiscussed on this list ad nauseum. However, it also looks from the queries \nlike you forgot to index your foriegn keys.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 25 Jun 2003 14:51:33 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query running slower than same on Oracle" } ]
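Two concrete follow-ups fall out of the replies above: the foreign-key indexes Josh mentions, and giving the sort more working memory until the hashed-aggregation plan in 7.4 makes it go away. A sketch assuming the key columns are named as in the query (index names here are invented); whether the planner actually prefers index scans over the hash joins it already chose depends on selectivity, so re-check with EXPLAIN ANALYZE:

    -- foreign-key columns on the fact table
    CREATE INDEX revenue_timekey_i       ON revenue (timekey);
    CREATE INDEX revenue_statisticskey_i ON revenue (statisticskey);
    ANALYZE revenue;

    -- more room for the sort feeding the GROUP BY, for this session only
    -- (value in KB; keep it modest on a 383 MB machine)
    SET sort_mem = 32768;

    EXPLAIN ANALYZE
    SELECT t.fiscalyearquarter, s.description, sum(r.amount_quantity)
      FROM "time" t, revenue r, statistic s
     WHERE t.fiscalyear = 2002
       AND r.timekey = t.timekey
       AND r.statisticskey = s.statisticskey
     GROUP BY t.fiscalyearquarter, s.description;

On 7.4 (CVS tip at the time of the thread) the same query should show a HashAggregate node instead of the Sort/Group pair, which is where most of the three minutes is going.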
[ { "msg_contents": "Hi,\n\ni think i need a little help with a problem with pg_statistic.\nLets say i have a table to collect traffic-data.\nThe table has a column time_stamp of type timesamptz.\nThe table has a single-column index on time_stamp.\nThe table has around 5 million records.\n\nIf i delete all statistical data from pg_statistic and do a\nexplain analyze i got this result.\n\n-------------------------------------------------------------------------\nexplain analyze select * from tbl_traffic where tbl_traffic.time_stamp >= '2003-05-01' and tbl_traffic.time_stamp < '2003-06-01';\nNOTICE: QUERY PLAN:\n\nIndex Scan using idx_ts on tbl_traffic (cost=0.00..97005.57 rows=24586 width=72) (actual time=0.19..7532.63 rows=1231474 loops=1)\nTotal runtime: 8179.08 msec\n\nEXPLAIN\n-------------------------------------------------------------------------\n\nafter i do a vacuum full verbose analyze i got the following result.\n\n-------------------------------------------------------------------------\nexplain analyze select * from tbl_traffic where tbl_traffic.time_stamp >= '2003-05-01' and tbl_traffic.time_stamp < '2003-06-01';\nNOTICE: QUERY PLAN:\n\nSeq Scan on tbl_traffic (cost=0.00..127224.24 rows=1197331 width=52) (actual time=0.03..14934.70 rows=1231474 loops=1)\nTotal runtime: 15548.35 msec\n\nEXPLAIN\n-------------------------------------------------------------------------\n\nnow i disable seqscans with set enable_seqscan to off\nand i got the following.\n\n-------------------------------------------------------------------------\nexplain analyze select * from tbl_traffic where tbl_traffic.time_stamp >= '2003-05-01' and tbl_traffic.time_stamp < '2003-06-01';\nNOTICE: QUERY PLAN:\n\nIndex Scan using idx_ts on tbl_traffic (cost=0.00..3340294.11 rows=1197331 width=52) (actual time=0.21..7646.29 rows=1231474 loops=1)\nTotal runtime: 8285.92 msec\n\nEXPLAIN\n-------------------------------------------------------------------------\n\nCould anybody explain or give some hint why the index is not used\nalthough it is faster than a sequence-scan ?\nBTW:\n version \n-----------------------------------------------------------\n PostgreSQL 7.2 on i686-pc-linux-gnu, compiled by GCC 2.96\n\nThanks in advance, as\n", "msg_date": "Thu, 26 Jun 2003 15:15:15 +0200", "msg_from": "Andre Schubert <[email protected]>", "msg_from_op": true, "msg_subject": "problem with pg_statistics" }, { "msg_contents": "Andre Schubert <[email protected]> writes:\n> i think i need a little help with a problem with pg_statistic.\n\nTry reducing random_page_cost --- although you'd be foolish to set it on\nthe basis of just a single test query. Experiment with a few different\ntables, and keep in mind that repeated tests will be affected by caching.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Jun 2003 10:08:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with pg_statistics " }, { "msg_contents": "On Thu, 26 Jun 2003 10:08:05 -0400, Tom Lane <[email protected]>\nwrote:\n>Andre Schubert <[email protected]> writes:\n>> i think i need a little help with a problem with pg_statistic.\n>\n>Try reducing random_page_cost\n\nWith index scan cost being more than 25 * seq scan cost, I guess that\n- all other things held equal - even random_page_cost = 1 wouldn't\nhelp.\n\nAndre might also want to experiment with effective_cache_size and with\nALTER TABLE ... SET STATISTICS.\n\nOr there's something wrong with correlation?\n\nAndre, what hardware is this running on? 
What are the values of\nshared_buffers, random_page_cost, effective_cache_size, ... ? Could\nyou show us the result of\n\n\tSELECT * FROM pg_stats\n\t WHERE tablename = \"tbl_traffic\" AND attname = \"time_stamp\";\n\nServus\n Manfred\n", "msg_date": "Thu, 26 Jun 2003 17:51:56 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with pg_statistics " }, { "msg_contents": "Manfred Koizar <[email protected]> writes:\n> On Thu, 26 Jun 2003 10:08:05 -0400, Tom Lane <[email protected]>\n> wrote:\n>> Try reducing random_page_cost\n\n> With index scan cost being more than 25 * seq scan cost, I guess that\n> - all other things held equal - even random_page_cost = 1 wouldn't\n> help.\n\nOh, you're right, I was comparing the wrong estimated costs. Yeah,\nchanging random_page_cost won't fix it.\n\n> Or there's something wrong with correlation?\n\nThat seems like a good bet. Andre, is this table likely to be\nphysically ordered by time_stamp, or nearly so? If so, do you\nexpect that condition to persist, or is it just an artifact of\na test setup?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Jun 2003 12:03:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with pg_statistics " }, { "msg_contents": "On Thu, 26 Jun 2003 12:03:52 -0400\nTom Lane <[email protected]> wrote:\n\n> Manfred Koizar <[email protected]> writes:\n> > On Thu, 26 Jun 2003 10:08:05 -0400, Tom Lane <[email protected]>\n> > wrote:\n> >> Try reducing random_page_cost\n> \n> > With index scan cost being more than 25 * seq scan cost, I guess that\n> > - all other things held equal - even random_page_cost = 1 wouldn't\n> > help.\n> \n> Oh, you're right, I was comparing the wrong estimated costs. Yeah,\n> changing random_page_cost won't fix it.\n> \n> > Or there's something wrong with correlation?\n> \n> That seems like a good bet. Andre, is this table likely to be\n> physically ordered by time_stamp, or nearly so? If so, do you\n> expect that condition to persist, or is it just an artifact of\n> a test setup?\n> \n\nFirst of all thanks for the quick response.\n\nWe have three servers at different places, all servers are running\nwith athlon processors and have ram between 512M up to 1024M,\nand a frequency between 700 and 1400Mhz.\nAll servers running under Linux 7.2 Kernel 2.4.20.\nWe use this table to collect traffic of our clients.\nTraffic data are inserted every 5 minutes with the actual datetime\nof the transaction, thatswhy the table should be physically order by time_stamp.\nAll servers are running in production and i could reproduce the problem on\nall three servers.\n\nTo answer Manfreds questions:\n> Andre, what hardware is this running on? What are the values of\n> shared_buffers, random_page_cost, effective_cache_size, ... ? 
Could\n> you show us the result of\n> \n> \tSELECT * FROM pg_stats\n> WHERE tablename = \"tbl_traffic\" AND attname = \"time_stamp\";\n\nThe only changes we have made are\n\nsort_mem = 32000\nshared_buffers = 13000\n\nAll other values are commented out and should be set to default\nby postgres itself.\n\n#max_fsm_relations = 100 # min 10, fsm is free space map\n#max_fsm_pages = 10000 # min 1000, fsm is free space map\n\n\n#effective_cache_size = 1000 # default in 8k pages\n#random_page_cost = 4\n#cpu_tuple_cost = 0.01\n#cpu_index_tuple_cost = 0.001\n#cpu_operator_cost = 0.0025\n\nHope this help ...\n\nThanks, as\n", "msg_date": "Fri, 27 Jun 2003 08:07:35 +0200", "msg_from": "Andre Schubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with pg_statistics" }, { "msg_contents": "On Thu, 26 Jun 2003 12:03:52 -0400\nTom Lane <[email protected]> wrote:\n\n> Manfred Koizar <[email protected]> writes:\n> > On Thu, 26 Jun 2003 10:08:05 -0400, Tom Lane <[email protected]>\n> > wrote:\n> >> Try reducing random_page_cost\n> \n> > With index scan cost being more than 25 * seq scan cost, I guess that\n> > - all other things held equal - even random_page_cost = 1 wouldn't\n> > help.\n> \n> Oh, you're right, I was comparing the wrong estimated costs. Yeah,\n> changing random_page_cost won't fix it.\n> \n> > Or there's something wrong with correlation?\n> \n> That seems like a good bet. Andre, is this table likely to be\n> physically ordered by time_stamp, or nearly so? If so, do you\n> expect that condition to persist, or is it just an artifact of\n> a test setup?\n> \n\nSorry forgot the pg_stat query...\n\n\nSELECT * FROM pg_stats where tablename = 'tbl_traffic' and attname = 'time_stamp';\n tablename | attname | null_frac | avg_width | n_distinct | \n most_common_vals \n | most_common_freqs | \n histogram_bounds \n | correlation \n-------------+------------+-----------+-----------+------------+-------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------+--------------------------------------------------------------------+---------------------------------------------\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------------------------------------------------------------------------------+-------------\n tbl_traffic | time_stamp | 0 | 8 | 104009 | {\"2003-06-03 19:12:01.059625+02\",\"2003-02-03 19:52:06.666296+01\",\"2003-02-13 09:59:45.415763+01\",\"2003\n-02-28 18:10:28.536399+01\",\"2003-04-11 18:09:42.30363+02\",\"2003-04-26 20:35:50.110235+02\",\"2003-05-03 11:09:32.991507+02\",\"2003-05-20 09:53:51.271853+02\",\"2003-05-21 2\n0:55:59.155387+02\",\"2003-06-02 02:38:28.823182+02\"} | {0.00133333,0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.001,0.001} | {\"2002-07-01 00:00:00+02\",\"2003-02-21 01:59:\n46.107696+01\",\"2003-03-11 15:00:37.418521+01\",\"2003-03-26 18:14:50.028972+01\",\"2003-04-10 13:43:20.75909+02\",\"2003-04-27 09:03:19.592213+02\",\"2003-05-08 22:35:41.99761\n6+02\",\"2003-05-22 15:34:42.932958+02\",\"2003-06-03 00:53:05.870782+02\",\"2003-06-15 08:45:41.154875+02\",\"2003-06-27 07:18:30.265868+02\"} | -0.479749\n(1 
row)\n\nThanks, as\n", "msg_date": "Fri, 27 Jun 2003 08:13:06 +0200", "msg_from": "Andre Schubert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with pg_statistics" }, { "msg_contents": "On Fri, 27 Jun 2003 08:07:35 +0200, Andre Schubert\n<[email protected]> wrote:\n>Traffic data are inserted every 5 minutes with the actual datetime\n>of the transaction, thatswhy the table should be physically order by time_stamp.\n\nSo I'd expect a correlation of nearly 1. Why do your statistics show\na value of -0.479749? A negative correlation is a sign of descending\nsort order, and correlation values closer to 0 indicate poor\ncorrespondence between column values and tuple positions.\n\nCould this be the effect of initial data loading? Are there any\nupdates or deletions in your traffic table?\n\n>To answer Manfreds questions:\n>> Andre, what hardware is this running on? What are the values of\n>> shared_buffers, random_page_cost, effective_cache_size, ... ? Could\n>> you show us the result of\n>> \n>> \tSELECT * FROM pg_stats\n>> WHERE tablename = \"tbl_traffic\" AND attname = \"time_stamp\";\n ^ ^ ^ ^\nOops, these should have been single quotes. It's too hot here these\ndays :-)\n\n>sort_mem = 32000\n>shared_buffers = 13000\n\nPersonally I would set them to lower values, but if you have good\nreasons ...\n\n>#effective_cache_size = 1000 # default in 8k pages\n\nThis is definitely too low. With 512MB or more I tend to set this to\nca. 80% of available RAM. Use top and free to find hints for good\nvalues.\n\nServus\n Manfred\n", "msg_date": "Fri, 27 Jun 2003 10:43:01 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with pg_statistics" }, { "msg_contents": "On Fri, 27 Jun 2003 10:43:01 +0200\nManfred Koizar <[email protected]> wrote:\n\n> On Fri, 27 Jun 2003 08:07:35 +0200, Andre Schubert\n> <[email protected]> wrote:\n> >Traffic data are inserted every 5 minutes with the actual datetime\n> >of the transaction, thatswhy the table should be physically order by time_stamp.\n> \n> So I'd expect a correlation of nearly 1. Why do your statistics show\n> a value of -0.479749? A negative correlation is a sign of descending\n> sort order, and correlation values closer to 0 indicate poor\n> correspondence between column values and tuple positions.\n> \n> Could this be the effect of initial data loading? Are there any\n> updates or deletions in your traffic table?\n> \n\nWe dont make updates the traffic table.\nOnce a month we delete the all data of the oldest month.\nAnd after that a vacuum full verbose analyze is performed.\nCould this cause reordering of the data ?\nAnd should i do a cluster idx_ts tbl_traffic ?\n\n> >To answer Manfreds questions:\n> >> Andre, what hardware is this running on? What are the values of\n> >> shared_buffers, random_page_cost, effective_cache_size, ... ? Could\n> >> you show us the result of\n> >> \n> >> \tSELECT * FROM pg_stats\n> >> WHERE tablename = \"tbl_traffic\" AND attname = \"time_stamp\";\n> ^ ^ ^ ^\n> Oops, these should have been single quotes. It's too hot here these\n> days :-)\n> \n\nYou are so right ... :)\n\n\n> >#effective_cache_size = 1000 # default in 8k pages\n> \n> This is definitely too low. With 512MB or more I tend to set this to\n> ca. 80% of available RAM. 
Use top and free to find hints for good\n> values.\n> \n\nOk, i will talk with my coworker ( he is the sysadmin of our machine )\nand look if can use such amount of RAM, because there are several other\nprocesses that are running on these machines.\nBut i will test and report ...\n\nThanks, as\n", "msg_date": "Fri, 27 Jun 2003 11:10:58 +0200", "msg_from": "Andre Schubert <[email protected]>", "msg_from_op": true, "msg_subject": "Re: problem with pg_statistics" }, { "msg_contents": "On Fri, 27 Jun 2003 11:10:58 +0200, Andre Schubert <[email protected]>\nwrote:\n>Once a month we delete the all data of the oldest month.\n>And after that a vacuum full verbose analyze is performed.\n>Could this cause reordering of the data ?\n\nI may be wrong, but I think VACUUM FULL starts taking tuples from the\nend of the relation and puts them into pages at the beginning until\nread and write position meet somewhere in the middle. This explains\nthe bad correlation.\n\n>And should i do a cluster idx_ts tbl_traffic ?\n\nI think so.\n\n>> >#effective_cache_size = 1000 # default in 8k pages\n>> \n>> This is definitely too low. With 512MB or more I tend to set this to\n>> ca. 80% of available RAM. Use top and free to find hints for good\n>> values.\n>> \n>\n>Ok, i will talk with my coworker ( he is the sysadmin of our machine )\n>and look if can use such amount of RAM, because there are several other\n>processes that are running on these machines.\n>But i will test and report ...\n\neffective_cache_size does not *control* resource consumption, it just\n*reports* it as a hint to the planner.\n\nServus\n Manfred\n", "msg_date": "Fri, 27 Jun 2003 12:05:14 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: problem with pg_statistics" } ]
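To make the fix discussed in this thread concrete, here is a minimal sketch, assuming the index is named idx_ts as Andre mentions and using the CLUSTER syntax of that era (CLUSTER indexname ON tablename). CLUSTER rewrites the table in index order, which should bring the time_stamp correlation back toward 1 and let the planner trust the index scan for narrow date ranges again:

    -- Rewrite tbl_traffic in time_stamp order; note that CLUSTER takes an
    -- exclusive lock and rewrites the whole table, so schedule it accordingly.
    CLUSTER idx_ts ON tbl_traffic;

    -- Refresh statistics and confirm the correlation has recovered:
    ANALYZE tbl_traffic;
    SELECT attname, correlation
      FROM pg_stats
     WHERE tablename = 'tbl_traffic' AND attname = 'time_stamp';

Since the monthly DELETE plus VACUUM FULL is what scrambled the physical ordering, the CLUSTER may need to become part of that monthly routine.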
[ { "msg_contents": "\n> -----Mensaje original-----\n> De: SZUCS Gábor [mailto:[email protected]] \n> Enviado el: jueves, 26 de junio de 2003 7:31\n> Para: [email protected]\n> Asunto: Re: [PERFORM] Similar querys, better execution time \n> on worst execution plan\n> \n> \n> Fernando,\n> \n> 1. Try EXPLAIN ANALYZE. Cost alone isn't an absolute measure. \n> I think it's only to see which parts of the query are \n> expected to be slowest. However, EXP ANA will give you exact \n> times in msec (which effectively means it executes the query).\n\nOk, yes, I did only explay because I run several times the query and get avg. run time. but it's true, it's better to do EXP ANA.\n \n> 2. I think calling upper() for each row costs more than \n> direct comparison, but not sure\n\nIt's the only answer than I can found... maybe do a lot of uppers and then compare will be too much than compare with 2 conditions...\n \n> 3. Notice that there are seq scans with filter conditions like\n> \"id_instalacion = 2::numeric\"\n> Do you have indices on id_instalacion, which seems to be a \n> numeric field? if so, try casting the constant expressions in \n> the query to numeric so that postgresql may find the index. \n> If you don't have such indices, it may be worth to create \n> them. (I guess you only have it on the table aliased with c, \n> since it does an index scan there.\n\nYes, we have index on id_instalacion, but now we have only one instalation, so the content of these field, in the 99% of the rows, it's 2. I think in this case it's ok to choose seq scan.\n \n> 4. another guess may be indices on (id_instalacion, activo), \n> or, if activo has few possible values (for example, it may be \n> only one of three letters, say, 'S', 'A' or 'K'), partial \n> indices like:\n> \n> CREATE INDEX cont_sbc_id_ins_S ON cont_sbc (id_instalacion)\n> WHERE activo in ('S', 's');\n> CREATE INDEX cont_sbc_id_ins_A ON cont_sbc (id_instalacion)\n> WHERE activo in ('A', 'a');\n> CREATE INDEX cont_sbc_id_ins_K ON cont_sbc (id_instalacion)\n> WHERE activo in ('K', 'k');\n> \n\nI need to recheck about the \"quality\" of \"active\" field. Really I don't know if I found a lot of 'S', a lot of 'N', maybe we will have 50%/50% of 'S' or 'N'. This will be important to define index.\n\nThanks for your answer.\n", "msg_date": "Thu, 26 Jun 2003 10:33:38 -0300", "msg_from": "\"Fernando Papa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Similar querys, better execution time on worst execution plan" }, { "msg_contents": "*happy* :)))\n\nG.\n------------------------------- cut here -------------------------------\n----- Original Message ----- \nFrom: \"Fernando Papa\" <[email protected]>\nSent: Thursday, June 26, 2003 3:33 PM\n\n\nI need to recheck about the \"quality\" of \"active\" field. Really I don't know\nif I found a lot of 'S', a lot of 'N', maybe we will have 50%/50% of 'S' or\n'N'. This will be important to define index.\n\nThanks for your answer.\n\n", "msg_date": "Thu, 26 Jun 2003 17:06:32 +0200", "msg_from": "=?iso-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Similar querys, better execution time on worst execution plan" } ]
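Following up on the partial index idea from this thread, here is a sketch for settling the question about how activo is distributed, using the table and column names from the thread (the index name itself is made up). The partial index is only worth having if the value it covers is a small fraction of the table; with a 50/50 split, the sequential scan the planner already picks is probably right:

    -- How skewed is the flag, really?
    SELECT activo, count(*) AS n
      FROM cont_sbc
     GROUP BY activo;

    -- Worth creating only if the 'S' rows are rare:
    CREATE INDEX cont_sbc_id_ins_s ON cont_sbc (id_instalacion)
     WHERE activo IN ('S', 's');
    ANALYZE cont_sbc;

EXPLAIN ANALYZE on the real query, before and after, is the final arbiter.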
[ { "msg_contents": "Hello,\n\nI've a performance question that I would like to ask you :\n\nI have to design a DB that will manage products, and I'm adding the\nproduct's options management.\n\nA box can be red or yellow, or with black rubber or with white rubber,\nfor example.\nSo I have a product (the box) and two options groups (the box color and\nthe rubber color) and four options (red,yellow,black,white).\n\nHere's my tables :\n\n/* PRODUCTS OPTIONS : */\n/* ------------------ */\n\nCREATE SEQUENCE seq_id_product_option START 1 MINVALUE 1;\n\nCREATE TABLE products_options\n(\n\tpk_prdopt_id INT4 DEFAULT NEXTVAL('seq_id_product_option') NOT\nNULL,\n\tfk_prd_id INT4 NOT NULL,\n\tname VARCHAR(100) NOT NULL,\n\tdescription TEXT,\n\tprice DOUBLE PRECISION NOT NULL,\n\tvat_rate NUMERIC(5,2) NOT NULL,\n\tinternal_notes TEXT,\n\tCONSTRAINT products_options_pk PRIMARY KEY (pk_prdopt_id),\n\tCONSTRAINT products_options_fk_prdid FOREIGN KEY (fk_prd_id)\nREFERENCES products (pk_prd_id),\n\tCONSTRAINT products_options_vatrate_value CHECK (vat_rate\nBETWEEN 0 AND 100)\n);\n\n/* PRODUCTS OPTIONS GROUP NAMES : */\n/* ------------------------------ */\n\nCREATE SEQUENCE seq_id_product_option_group START 1 MINVALUE 1;\n\nCREATE TABLE products_options_groups\n(\n\tpk_prdoptgrp_id INT4 DEFAULT\nNEXTVAL('seq_id_product_option_group') NOT NULL,\n\tprdoptgrp_name VARCHAR(100) NOT NULL,\n\tprdoptgrp_description TEXT NOT NULL,\n\tprdoptgrp_internal_notes TEXT,\n\tCONSTRAINT products_options_groups_pk PRIMARY\nKEY(pk_prdoptgrp_id)\n);\n\n/* PRODUCTS OPTIONS CLASSIFICATION : */\n/* ------------------------------ */\n\nCREATE TABLE products_options_classification\n(\n\tfk_prdoptgrp_id INT4 NOT NULL,\n\tfk_prdopt_id INT4 NOT NULL,\n\tCONSTRAINT products_options_classification_pk PRIMARY\nKEY(fk_prdoptgrp_id,fk_prdopt_id),\n\tCONSTRAINT products_options_classification_fk_prdoptgrp FOREIGN\nKEY (fk_prdoptgrp_id) REFERENCES products_options_groups\n(pk_prdoptgrp_id),\n\tCONSTRAINT products_options_classification_fk_prdopt FOREIGN KEY\n(fk_prdopt_id) REFERENCES products_options (pk_prdopt_id)\n);\n\n\nI'm worrying about the performances of the queries that will the most\noften dones, especially the select of the available options groups\n('Rubber color','Box color' in my example) on one product (The box).\n\nSELECT products_options_groups.pk_prdoptgrp_id,\nproducts_options_groups.prdoptgrp_name\nFROM products_options_groups\nWHERE EXISTS\n(\n\tSELECT *\n\tFROM products_options_classification\n\tWHERE products_options_classification =\nproducts_options_groups.pk_prdoptgrp_id\n\tAND EXISTS\n\t(\n\t\tSELECT *\n\t\tFROM products_options\n\t\tWHERE products_options.pk_prdopt_id =\nproducts_options_classification.fk_prdopt_id\n\t\tAND products_options.fk_prd_id = [A PRODUCT ID WRITTEN\nHERE BY MY APP]\n\t)\n)\nORDER BY products_options_groups.prdoptgrp_name;\n\n\nI will have to manage more or less 10.000 products with more or less 2-3\noptions by products and more or less 40 options-groups.\n\nDo you think that this query will be hard for PostgreSQL (currently\n7.2.1 but I will migrate to 7.3.2 when going in production environment)\n?\nHow can I improve that query to be faster ?\n\nThanks really much for your advices about this ! 
:-)\n\n---------------------------------------\nBruno BAGUETTE - [email protected] \n\n", "msg_date": "Fri, 27 Jun 2003 16:32:21 +0200", "msg_from": "\"Bruno BAGUETTE\" <[email protected]>", "msg_from_op": true, "msg_subject": "Large querie with several EXISTS which will be often runned" }, { "msg_contents": "Bruno,\n\n> I will have to manage more or less 10.000 products with more or less 2-3\n> options by products and more or less 40 options-groups.\n>\n> Do you think that this query will be hard for PostgreSQL (currently\n> 7.2.1 but I will migrate to 7.3.2 when going in production environment)\n> ?\n> How can I improve that query to be faster ?\n\nCollapse the inner EXISTS into a straight join in the outer EXISTS. Since you \nare merely checking for existence, there is no reason for the subquery \nnesting.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 27 Jun 2003 08:41:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large querie with several EXISTS which will be often runned" }, { "msg_contents": "Hello Josh,\n\n> > I will have to manage more or less 10.000 products with \n> more or less \n> > 2-3 options by products and more or less 40 options-groups.\n> >\n> > Do you think that this query will be hard for PostgreSQL (currently \n> > 7.2.1 but I will migrate to 7.3.2 when going in production \n> > environment) ? How can I improve that query to be faster ?\n> \n> Collapse the inner EXISTS into a straight join in the outer \n> EXISTS. Since you \n> are merely checking for existence, there is no reason for the \n> subquery \n> nesting.\n\nDo you mean this query ?\n\nSELECT\nproducts_options_groups.pk_prdoptgrp_id,products_options_groups.prdoptgr\np_name\nFROM products_options_groups\nWHERE EXISTS\n(\n\tSELECT *\n\tFROM products_options_classification\n\tINNER JOIN products_options ON products_options.pk_prdopt_id =\nproducts_options_classification.fk_prdopt_id\n\tWHERE products_options_classification =\nproducts_options_groups.pk_prdoptgrp_id\n\tAND products_options.fk_prd_id = [A PRODUCT ID WRITTEN HERE BY\nMY APP]\n)\nORDER BY products_options_groups.prdoptgrp_name;\n\n\nAn other question, do you think that my tables are OK or is there some\nthings I could change in order to have as much performance as possible\n(without de-normalize it because I want to avoid redundancy in my\ntables).\n\nThanks very much for your tips ! 
:-)\n\n---------------------------------------\nBruno BAGUETTE - [email protected] \n\n", "msg_date": "Sat, 28 Jun 2003 11:17:42 +0200", "msg_from": "\"Bruno BAGUETTE\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE : Large querie with several EXISTS which will be often runned" }, { "msg_contents": "On Saturday 28 June 2003 14:47, Bruno BAGUETTE wrote:\n> Do you mean this query ?\n>\n> SELECT\n> products_options_groups.pk_prdoptgrp_id,products_options_groups.prdoptgr\n> p_name\n> FROM products_options_groups\n> WHERE EXISTS\n> (\n> \tSELECT *\n> \tFROM products_options_classification\n> \tINNER JOIN products_options ON products_options.pk_prdopt_id =\n> products_options_classification.fk_prdopt_id\n> \tWHERE products_options_classification =\n> products_options_groups.pk_prdoptgrp_id\n> \tAND products_options.fk_prd_id = [A PRODUCT ID WRITTEN HERE BY\n> MY APP]\n> )\n> ORDER BY products_options_groups.prdoptgrp_name;\n\nYou can try \n\n SELECT\n products_options_groups.pk_prdoptgrp_id,products_options_groups.prdoptgr\n p_name\n FROM products_options_groups\n WHERE \n (\n \tSELECT count(*)\n \tFROM products_options_classification\n \tINNER JOIN products_options ON products_options.pk_prdopt_id =\n products_options_classification.fk_prdopt_id\n \tWHERE products_options_classification =\n products_options_groups.pk_prdoptgrp_id\n \tAND products_options.fk_prd_id = [A PRODUCT ID WRITTEN HERE BY\n MY APP]\n )>0\n ORDER BY products_options_groups.prdoptgrp_name;\n\nThe count(*) trick will make it just another subquery and hopefully any \nperformance issues with exists/in does not figure. Some of those issues are \nfixed in 7.4/CVS head though.\n\n HTH\n\n Shridhar\n\n", "msg_date": "Sat, 28 Jun 2003 15:05:00 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large querie with several EXISTS which will be often runned" } ]
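For completeness, a sketch of the single-EXISTS form Josh describes, since the version quoted above compares the table name products_options_classification directly against pk_prdoptgrp_id where the column fk_prdoptgrp_id was presumably intended. The literal 1234 is only a stand-in for the product id the application supplies:

    SELECT g.pk_prdoptgrp_id, g.prdoptgrp_name
      FROM products_options_groups g
     WHERE EXISTS (
             SELECT 1
               FROM products_options_classification c
               JOIN products_options o
                 ON o.pk_prdopt_id = c.fk_prdopt_id
              WHERE c.fk_prdoptgrp_id = g.pk_prdoptgrp_id
                AND o.fk_prd_id = 1234   -- hypothetical product id
           )
     ORDER BY g.prdoptgrp_name;

Timing this and Shridhar's count(*) variant with EXPLAIN ANALYZE on a representative product id will show which one the planner handles better at the 10,000-product scale described.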
[ { "msg_contents": "Sorry for posting an obvious Linux question, but have any of you\nencountered this and how have you fixed it.\nI have 6gig Ram box. I've set my shmmax to 3072000000. The database\nstarts up fine without any issues. As soon as a query is ran\nor a FTP process to the server is done, the used memory shoots up and\nappears to never be released.\nMy fear is that this may cause problems for my database if this number\ncontinues to grow. Below is my TOP after running a query, and shutting\ndown PgAdmin. While not low now, the amount of free memory has dropped to\naround 11mg. I'll admit I'm not that Linux savvy, but am I reading this\ncorrect?\n\n--TOP\n\n45 processes: 44 sleeping, 1 running, 0 zombie, 0 stopped\nCPU0 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU1 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU2 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU3 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nMem: 6711564K av, 6517776K used, 193788K free, 0K shrd, 25168K\nbuff\nSwap: 2044056K av, 0K used, 2044056K free 6257620K\ncached\n\nPatrick Hatcher\n\n\n", "msg_date": "Fri, 27 Jun 2003 12:09:50 -0700", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Memory question" }, { "msg_contents": "This is actually normal. Look at the amount cached: 6257620K. That's \n6.2Gig of cache. Linux is using only 6517776k - 6257620k of memory, the \nrest is just acting as kernel cache. If anything tries to allocate a bit \nof memory, linux will flush enough cache to give the memory to the \napplication that needs it.\n\nNote that you're only showing linux and all its applications using about \n256Meg.\n\nOn Fri, 27 Jun 2003, Patrick Hatcher wrote:\n\n> Sorry for posting an obvious Linux question, but have any of you\n> encountered this and how have you fixed it.\n> I have 6gig Ram box. I've set my shmmax to 3072000000. The database\n> starts up fine without any issues. As soon as a query is ran\n> or a FTP process to the server is done, the used memory shoots up and\n> appears to never be released.\n> My fear is that this may cause problems for my database if this number\n> continues to grow. Below is my TOP after running a query, and shutting\n> down PgAdmin. While not low now, the amount of free memory has dropped to\n> around 11mg. I'll admit I'm not that Linux savvy, but am I reading this\n> correct?\n> \n> --TOP\n> \n> 45 processes: 44 sleeping, 1 running, 0 zombie, 0 stopped\n> CPU0 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\n> CPU1 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\n> CPU2 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\n> CPU3 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\n> Mem: 6711564K av, 6517776K used, 193788K free, 0K shrd, 25168K\n> buff\n> Swap: 2044056K av, 0K used, 2044056K free 6257620K\n> cached\n> \n> Patrick Hatcher\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n", "msg_date": "Fri, 27 Jun 2003 13:44:38 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory question" }, { "msg_contents": "Patrick,\n\n> Sorry for posting an obvious Linux question, but have any of you\n> encountered this and how have you fixed it.\n> I have 6gig Ram box. I've set my shmmax to 3072000000. The database\n> starts up fine without any issues. 
As soon as a query is ran\n> or a FTP process to the server is done, the used memory shoots up and\n> appears to never be released.\n\nWhat's you shared_buffers set to after our talk? Do you actually need 3gb of \nshmmax? \n\n> My fear is that this may cause problems for my database if this number\n> continues to grow. Below is my TOP after running a query, and shutting\n> down PgAdmin. While not low now, the amount of free memory has dropped to\n> around 11mg. I'll admit I'm not that Linux savvy, but am I reading this\n> correct?\n\nNo. \n\n> Mem: 6711564K av, 6517776K used, 193788K free, 0K shrd, 25168K\n\nThe \"used\" figure in Top doesn't really tell you anything, since it includes \nthe kernel buffer which tries to take up all available memory. If you \nactually look at the list of processes, I think you'll find that you're only \nusing 1-2% of memory for applications.\n\nI'm not sure what app would show your \"real\" free memory.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 27 Jun 2003 12:54:51 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory question" }, { "msg_contents": "\nShared buffer is now set to 20,000 as suggested. So far so good.\nAs far as shmmax, it really is my ignorance of Linux. We are going to play\naround with this number. Is there a suggested amount since I have\nmy effective_cache_size = 625000 (or does one have nothing to do with the\nother)\nThanks again\n\nPatrick Hatcher\n\n\n\n\n \n Josh Berkus \n <josh@agliodbs To: \"Patrick Hatcher\" <[email protected]>, [email protected] \n .com> cc: \n Subject: Re: [PERFORM] Memory question \n 06/27/2003 \n 12:54 PM \n Please respond \n to josh \n \n\n\n\n\nPatrick,\n\n> Sorry for posting an obvious Linux question, but have any of you\n> encountered this and how have you fixed it.\n> I have 6gig Ram box. I've set my shmmax to 3072000000. The database\n> starts up fine without any issues. As soon as a query is ran\n> or a FTP process to the server is done, the used memory shoots up and\n> appears to never be released.\n\nWhat's you shared_buffers set to after our talk? Do you actually need 3gb\nof\nshmmax?\n\n> My fear is that this may cause problems for my database if this number\n> continues to grow. Below is my TOP after running a query, and shutting\n> down PgAdmin. While not low now, the amount of free memory has dropped\nto\n> around 11mg. I'll admit I'm not that Linux savvy, but am I reading this\n> correct?\n\nNo.\n\n> Mem: 6711564K av, 6517776K used, 193788K free, 0K shrd, 25168K\n\nThe \"used\" figure in Top doesn't really tell you anything, since it\nincludes\nthe kernel buffer which tries to take up all available memory. If you\nactually look at the list of processes, I think you'll find that you're\nonly\nusing 1-2% of memory for applications.\n\nI'm not sure what app would show your \"real\" free memory.\n\n--\n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n\n\n\n", "msg_date": "Fri, 27 Jun 2003 12:58:22 -0700", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory question" }, { "msg_contents": "\nThank you\n\nPatrick Hatcher\n\n\n\n \n \"scott.marlowe \n \" To: Patrick Hatcher <[email protected]> \n <scott.marlowe cc: <[email protected]> \n @ihs.com> Subject: Re: [PERFORM] Memory question \n \n 06/27/2003 \n 12:44 PM \n \n\n\n\n\nThis is actually normal. Look at the amount cached: 6257620K. That's\n6.2Gig of cache. 
Linux is using only 6517776k - 6257620k of memory, the\nrest is just acting as kernel cache. If anything tries to allocate a bit\nof memory, linux will flush enough cache to give the memory to the\napplication that needs it.\n\nNote that you're only showing linux and all its applications using about\n256Meg.\n\nOn Fri, 27 Jun 2003, Patrick Hatcher wrote:\n\n> Sorry for posting an obvious Linux question, but have any of you\n> encountered this and how have you fixed it.\n> I have 6gig Ram box. I've set my shmmax to 3072000000. The database\n> starts up fine without any issues. As soon as a query is ran\n> or a FTP process to the server is done, the used memory shoots up and\n> appears to never be released.\n> My fear is that this may cause problems for my database if this number\n> continues to grow. Below is my TOP after running a query, and shutting\n> down PgAdmin. While not low now, the amount of free memory has dropped\nto\n> around 11mg. I'll admit I'm not that Linux savvy, but am I reading this\n> correct?\n>\n> --TOP\n>\n> 45 processes: 44 sleeping, 1 running, 0 zombie, 0 stopped\n> CPU0 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\n> CPU1 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\n> CPU2 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\n> CPU3 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\n> Mem: 6711564K av, 6517776K used, 193788K free, 0K shrd, 25168K\n> buff\n> Swap: 2044056K av, 0K used, 2044056K free 6257620K\n> cached\n>\n> Patrick Hatcher\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n\n\n\n", "msg_date": "Fri, 27 Jun 2003 12:59:23 -0700", "msg_from": "\"Patrick Hatcher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Memory question" }, { "msg_contents": "On Fri, 2003-06-27 at 12:09, Patrick Hatcher wrote:\n\n\n\n> I have 6gig Ram box. I've set my shmmax to 3072000000. The database\n> starts up fine without any issues. As soon as a query is ran\n> or a FTP process to the server is done, the used memory shoots up and\n> appears to never be released.\n\nIn my experience Linux likes to allocate almost all available RAM. I've\nnever had any trouble with that. I'm looking at the memory meter on my\nRH9 development workstation and it is at 95%. Performance is good, so I\njust trust that the kernel knows what it is doing.\n \n\n\n> Mem: 6711564K av, 6517776K used, 193788K free, 0K shrd, 25168K\n> buff\n> Swap: 2044056K av, 0K used, 2044056K free 6257620K\n> cached\n\nI've heard anecdotally that Linux has troubles if the swap space is less\nthan the RAM size. I note that you have 6G of RAM, but only 2G of swap. \n\nI'm sure others on the list will have more definitive opinions.\n\n\n-- \nJord Tanner <[email protected]>\n\n", "msg_date": "27 Jun 2003 13:17:04 -0700", "msg_from": "Jord Tanner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory question" }, { "msg_contents": "On Friday, June 27, 2003, at 01:17 PM, Jord Tanner wrote:\n\n> On Fri, 2003-06-27 at 12:09, Patrick Hatcher wrote:\n>\n>\n>\n>> I have 6gig Ram box. I've set my shmmax to 3072000000. The database\n>> starts up fine without any issues. As soon as a query is ran\n>> or a FTP process to the server is done, the used memory shoots up and\n>> appears to never be released.\n>\n> In my experience Linux likes to allocate almost all available RAM. I've\n> never had any trouble with that. 
I'm looking at the memory meter on my\n> RH9 development workstation and it is at 95%. Performance is good, so I\n> just trust that the kernel knows what it is doing.\n>\n>\n>\n>> Mem: 6711564K av, 6517776K used, 193788K free, 0K shrd, \n>> 25168K\n>> buff\n>> Swap: 2044056K av, 0K used, 2044056K free \n>> 6257620K\n>> cached\n>\n> I've heard anecdotally that Linux has troubles if the swap space is \n> less\n> than the RAM size. I note that you have 6G of RAM, but only 2G of swap.\n\nI've heard that too, but it doesn't seem to make much sense to me. If \nyou get to the point where your machine is _needing_ 2GB of swap then \nsomething has gone horribly wrong (or you just need more RAM in the \nmachine) and it will just crawl until the kernel kills off whatever \nprocess causes the swap space to be exceeded. Seems to me that you \nshould only have that much swap if you can't afford more RAM or you've \ntapped out your machine's capacity, and your application needs that \nmuch memory.\n -M@\n\n", "msg_date": "Fri, 27 Jun 2003 13:30:28 -0700", "msg_from": "Matthew Hixson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory question" }, { "msg_contents": "> The \"used\" figure in Top doesn't really tell you anything, \n> since it includes \n> the kernel buffer which tries to take up all available \n> memory. If you \n> actually look at the list of processes, I think you'll find \n> that you're only \n> using 1-2% of memory for applications.\n> \n> I'm not sure what app would show your \"real\" free memory.\n\nThe command 'free' shows what you like to know:\n$ free\n total used free shared buffers\ncached\nMem: 1551480 1505656 45824 0 101400\n1015540\n-/+ buffers/cache: 388716 1162764\nSwap: 524264 23088 501176\n\nThe used/free amounts on the second line are the interesting ones in\nthis case.\n\nArjen\n\n\n\n", "msg_date": "Fri, 27 Jun 2003 22:49:17 +0200", "msg_from": "\"Arjen van der Meijden\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory question" }, { "msg_contents": "On Fri, 27 Jun 2003, Matthew Hixson wrote:\n\n> On Friday, June 27, 2003, at 01:17 PM, Jord Tanner wrote:\n> > I've heard anecdotally that Linux has troubles if the swap space is \n> > less\n> > than the RAM size. I note that you have 6G of RAM, but only 2G of swap.\n> \n> I've heard that too, but it doesn't seem to make much sense to me. If \n> you get to the point where your machine is _needing_ 2GB of swap then \n> something has gone horribly wrong (or you just need more RAM in the \n> machine) and it will just crawl until the kernel kills off whatever \n> process causes the swap space to be exceeded. Seems to me that you \n> should only have that much swap if you can't afford more RAM or you've \n> tapped out your machine's capacity, and your application needs that \n> much memory.\n\nThis was an artifact in older kernels where the swap code didn't work \nright unless it had as much swap as memory. I'm pretty sure that was \nfixed long ago in the 2.4 series.\n\n", "msg_date": "Fri, 27 Jun 2003 14:51:25 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory question" }, { "msg_contents": "> I've heard that too, but it doesn't seem to make much sense \n> to me. 
If \n> you get to the point where your machine is _needing_ 2GB of swap then \n> something has gone horribly wrong (or you just need more RAM in the \n> machine) and it will just crawl until the kernel kills off whatever \n> process causes the swap space to be exceeded. Seems to me that you \n> should only have that much swap if you can't afford more RAM \n> or you've \n> tapped out your machine's capacity, and your application needs that \n> much memory.\n> -M@\nI've heard the same, the reason behind it was that there needs to be\none-to-one copy of the memory to be able to swap out everything and to\nhave a gain in the total \"memory\", you'd need twice as much swap as\nmemory to have a doubling of your memory.\n\nBut afaik this behaviour has been adjusted since the 2.4.5 kernel and\nisn't a real issue anymore.\n\nPlease keep in mind that I'm no expert at all on linux, so if you want\nto be sure, you'd better mail to the kernel-mailinglist orso :)\n\nAnyway, I manage a few machines with 1GB++ memory and none of them has\nmore than 1G of swap and none of them uses that swap for more than a few\nMB unless something was terribly wrong, so the actual 'risk' probably\ndoesn't have a high chance to occur.\n\nArjen\n\n\n\n", "msg_date": "Fri, 27 Jun 2003 22:55:40 +0200", "msg_from": "\"Arjen van der Meijden\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory question" }, { "msg_contents": "Arjen van der Meijden wrote:\n\n>>I've heard that too, but it doesn't seem to make much sense \n>>to me. If \n>>you get to the point where your machine is _needing_ 2GB of swap then \n>>something has gone horribly wrong (or you just need more RAM in the \n>>machine) and it will just crawl until the kernel kills off whatever \n>>process causes the swap space to be exceeded. Seems to me that you \n>>should only have that much swap if you can't afford more RAM \n>>or you've \n>>tapped out your machine's capacity, and your application needs that \n>>much memory.\n>> -M@\n>> \n>>\n>I've heard the same, the reason behind it was that there needs to be\n>one-to-one copy of the memory to be able to swap out everything and to\n>have a gain in the total \"memory\", you'd need twice as much swap as\n>memory to have a doubling of your memory.\n>\n>But afaik this behaviour has been adjusted since the 2.4.5 kernel and\n>isn't a real issue anymore.\n>\nIt may be different in vendor released kernels as the default overcommit \nbehavior of the Linux kernel may vary. More detailed discussions can be \nfound on the LKML, or you can find some useful summaries by searching \nthrough the last couple \"Kernel Traffic\" issues <http://kt.zork.net> .. 
\nI had some unexpected problems on one system, an older RH distribution, \nuntil I actually set the swap to be double the 2GB of ram on the system: \n4GB.\n\n>Please keep in mind that I'm no expert at all on linux, so if you want\n>to be sure, you'd better mail to the kernel-mailinglist orso :)\n>\n>Anyway, I manage a few machines with 1GB++ memory and none of them has\n>more than 1G of swap and none of them uses that swap for more than a few\n>MB unless something was terribly wrong, so the actual 'risk' probably\n>doesn't have a high chance to occur.\n>\n>Arjen\n>\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n>\n\n\n", "msg_date": "Mon, 30 Jun 2003 00:49:39 -0500", "msg_from": "Thomas Swan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory question" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Friday 27 June 2003 12:44, scott.marlowe wrote:\n> This is actually normal. Look at the amount cached: 6257620K. That's\n> 6.2Gig of cache. Linux is using only 6517776k - 6257620k of memory, the\n> rest is just acting as kernel cache. If anything tries to allocate a bit\n> of memory, linux will flush enough cache to give the memory to the\n> application that needs it.\n>\n\nI think it is appropriate to add that the Linux kernel does this in an \nextremely innovative and intelligent way. The more room you give your kernel \nto cache, the more responsive it is going to be.\n\n- -- \nJonathan Gardner <[email protected]>\n(was [email protected])\nLive Free, Use Linux!\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.1 (GNU/Linux)\n\niD8DBQE/AF8EWgwF3QvpWNwRArmVAJwK5C2ExmS8Rayrne33UJ0KZZM4UgCgq7b5\n3J1LGtofgtnKq/bPtF75lNI=\n=4Not\n-----END PGP SIGNATURE-----\n", "msg_date": "Mon, 30 Jun 2003 09:02:12 -0700", "msg_from": "Jonathan Gardner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory question" } ]
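To tie the memory discussion back to the planner: the "cached" figure that top and free report is what effective_cache_size is meant to describe, measured in 8 kB pages, and the setting only feeds cost estimates and never allocates anything itself. Patrick's existing value of 625000 pages (about 5 GB) is already plausible for a 6 GB box; the statements below are just a sketch of how to inspect it and experiment per session, with the number illustrative rather than a recommendation:

    SHOW effective_cache_size;

    SET effective_cache_size = 625000;   -- 625000 * 8 kB, roughly 5 GB
    -- re-run EXPLAIN on the queries of interest here, then:
    RESET effective_cache_size;

A permanent change belongs in postgresql.conf (effective_cache_size = 625000) followed by a reload.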
[ { "msg_contents": "I'm wondering how to speed up sorting which is slowing a query run regularly\non a linux postgresql 7.3.3 system.\n\nThe box is a dual PIII with 1Gb ram. The database is located on a 20Gb SCSI\ndisk, with WAL on a separate disk.\n\nThe only changes I've made to postgresql.conf so far are:\nshared_buffers=4000\nsort_mem=4000\n\nThe results of explain analyze are below. The largest tables in the join,\ngenotypes and ebv, contain 440681 and 3060781 rows respectively, the others\ncontain perhaps a couple of hundred each.\n\nI've reduced the runtime of the query to 2 minutes down from 10\nminutes by eliminating a left join and ORDER BY, but it would be nice to\nspeed it up further.\nThe time goes up to nearly 6 minutes if I put the ORDER BY back into the\nquery.\n\nAny suggestions gratefully received.\n\nRegards,\nChris Hutchinson\n\n----------------------------------------------------------------------\n\n'fast query' without sort\n\nexplain analyze\nSELECT A.genid,A.name,B.familyid,B.name,B.mumid,B.mumtype,B.dadid,\n\tB.dadtype,CBA.title,CA.sctrait,C.nvalue,C.accuracy\nFROM genotypes A , ebv C , sctraitdefinition CA ,\n\ttpr CB , tpsystems CBA , geneticfamily B\nWHERE (C.genid=A.genid) and ((A.speciesid='2') and (B.familyid=A.familyid))\nand\n\t((C.runid='72') and (CA.sctraitid=C.sctraitid)\n\tand (CB.runid=C.runid) and (CBA.systemid=CB.systemid));\n\nQUERY PLAN\n-------------------------------------------------------\n Hash Join (cost=18114.84..287310.15 rows=678223 width=130) (actual\ntime=11824.24..139737.82 rows=1460290 loops=1)\n Hash Cond: (\"outer\".sctraitid = \"inner\".sctraitid)\n -> Hash Join (cost=18113.32..273744.17 rows=678223 width=110) (actual\ntime=11813.27..124934.74 rows=1460290 loops=1)\n Hash Cond: (\"outer\".runid = \"inner\".runid)\n -> Hash Join (cost=18106.69..260173.07 rows=678223 width=73)\n(actual time=11782.12..107005.61 rows=1460290 loops=1)\n Hash Cond: (\"outer\".genid = \"inner\".genid)\n -> Seq Scan on ebv c (cost=0.00..195640.76 rows=1392655\nwidth=20) (actual time=4684.71..68728.26 rows=1460290 loops=1)\n Filter: (runid = 72)\n -> Hash (cost=15474.16..15474.16 rows=214612 width=53)\n(actual time=7089.57..7089.57 rows=0 loops=1)\n -> Hash Join (cost=147.83..15474.16 rows=214612\nwidth=53) (actual time=58.74..6597.01 rows=226561 loops=1)\n Hash Cond: (\"outer\".familyid = \"inner\".familyid)\n -> Seq Scan on genotypes a (cost=0.00..9424.51\nrows=214612 width=17) (actual time=10.47..3914.89 rows=226561 loops=1)\n Filter: (speciesid = 2)\n -> Hash (cost=132.46..132.46 rows=6146\nwidth=36) (actual time=48.06..48.06 rows=0 loops=1)\n -> Seq Scan on geneticfamily b\n(cost=0.00..132.46 rows=6146 width=36) (actual time=2.90..36.74 rows=6146\nloops=1)\n -> Hash (cost=6.60..6.60 rows=13 width=37) (actual\ntime=31.06..31.06 rows=0 loops=1)\n -> Hash Join (cost=5.16..6.60 rows=13 width=37) (actual\ntime=30.93..31.04 rows=13 loops=1)\n Hash Cond: (\"outer\".systemid = \"inner\".systemid)\n -> Seq Scan on tpsystems cba (cost=0.00..1.16 rows=16\nwidth=29) (actual time=6.68..6.72 rows=16 loops=1)\n -> Hash (cost=5.13..5.13 rows=13 width=8) (actual\ntime=24.17..24.17 rows=0 loops=1)\n -> Seq Scan on tpr cb (cost=0.00..5.13 rows=13\nwidth=8) (actual time=4.75..24.15 rows=13 loops=1)\n -> Hash (cost=1.41..1.41 rows=41 width=20) (actual time=10.90..10.90\nrows=0 loops=1)\n -> Seq Scan on sctraitdefinition ca (cost=0.00..1.41 rows=41\nwidth=20) (actual time=10.72..10.82 rows=41 loops=1)\n Total runtime: 140736.98 msec\n(24 rows)\n\n\n'slow query' with 
sort\n\nexplain analyze\nSELECT A.genid,A.name,B.familyid,B.name,B.mumid,B.mumtype,B.dadid,\n B.dadtype,CBA.title,CA.sctrait,C.nvalue,C.accuracy\nFROM genotypes A , ebv C , sctraitdefinition CA ,\n tpr CB , tpsystems CBA , geneticfamily B\nWHERE (C.genid=A.genid) and ((A.speciesid='2') and (B.familyid=A.familyid))\nand\n ((C.runid='72') and (CA.sctraitid=C.sctraitid)\n and (CB.runid=C.runid) and (CBA.systemid=CB.systemid))\nORDER BY A.genid ASC,B.familyid ASC,CA.sctrait ASC;\n\nQUERY PLAN\n-------------------------------------------------------\n Sort (cost=540740.81..542436.37 rows=678223 width=130) (actual\ntime=322602.06..346710.43 rows=1460290 loops=1)\n Sort Key: a.genid, b.familyid, ca.sctrait\n -> Hash Join (cost=18114.84..287310.15 rows=678223 width=130) (actual\ntime=10398.55..144991.62 rows=1460290 loops=1)\n Hash Cond: (\"outer\".sctraitid = \"inner\".sctraitid)\n -> Hash Join (cost=18113.32..273744.17 rows=678223 width=110)\n(actual time=10384.84..129637.23 rows=1460290 loops=1)\n Hash Cond: (\"outer\".runid = \"inner\".runid)\n -> Hash Join (cost=18106.69..260173.07 rows=678223\nwidth=73) (actual time=10353.69..111239.94 rows=1460290 loops=1)\n Hash Cond: (\"outer\".genid = \"inner\".genid)\n -> Seq Scan on ebv c (cost=0.00..195640.76\nrows=1392655 width=20) (actual time=4499.94..74509.34 rows=1460290 loops=1)\n Filter: (runid = 72)\n -> Hash (cost=15474.16..15474.16 rows=214612\nwidth=53) (actual time=5845.85..5845.85 rows=0 loops=1)\n -> Hash Join (cost=147.83..15474.16 rows=214612\nwidth=53) (actual time=58.75..5346.04 rows=226561 loops=1)\n Hash Cond: (\"outer\".familyid =\n\"inner\".familyid)\n -> Seq Scan on genotypes a\n(cost=0.00..9424.51 rows=214612 width=17) (actual time=7.00..2799.43\nrows=226561 loops=1)\n Filter: (speciesid = 2)\n -> Hash (cost=132.46..132.46 rows=6146\nwidth=36) (actual time=51.54..51.54 rows=0 loops=1)\n -> Seq Scan on geneticfamily b\n(cost=0.00..132.46 rows=6146 width=36) (actual time=2.88..39.66 rows=6146\nloops=1)\n -> Hash (cost=6.60..6.60 rows=13 width=37) (actual\ntime=31.05..31.05 rows=0 loops=1)\n -> Hash Join (cost=5.16..6.60 rows=13 width=37)\n(actual time=30.92..31.03 rows=13 loops=1)\n Hash Cond: (\"outer\".systemid = \"inner\".systemid)\n -> Seq Scan on tpsystems cba (cost=0.00..1.16\nrows=16 width=29) (actual time=6.67..6.72 rows=16 loops=1)\n -> Hash (cost=5.13..5.13 rows=13 width=8)\n(actual time=24.17..24.17 rows=0 loops=1)\n -> Seq Scan on tpr cb (cost=0.00..5.13\nrows=13 width=8) (actual time=4.72..24.14 rows=13 loops=1)\n -> Hash (cost=1.41..1.41 rows=41 width=20) (actual\ntime=13.62..13.62 rows=0 loops=1)\n -> Seq Scan on sctraitdefinition ca (cost=0.00..1.41\nrows=41 width=20) (actual time=13.43..13.54 rows=41 loops=1)\n Total runtime: 347780.04 msec\n(26 rows)\n\nThe tables look like this:\n\nTable \"public.ebv\"\n Column | Type | Modifiers\n-----------+---------+-----------\n runid | integer | not null\n genid | integer | not null\n sctraitid | integer | not null\n nvalue | real | not null\n accuracy | real |\nIndexes: idx_ebg btree (genid),\n idx_ebv btree (runid),\n idx_ebvr btree (runid)\n\nTable \"public.genotypes\"\n Column | Type |\nModifiers\n----------------+--------------------------+--------------------------------\n-------------------------\n genid | integer | not null default\nnextval('\"genotypes_genid_seq\"'::text)\n speciesid | integer | not null\n familyid | integer | not null\n created | timestamp with time zone | not null\n batchid | integer | not null\n lastmodifiedby | integer | not null\n 
lastmodified | timestamp with time zone | not null\n name | character varying(100) |\nIndexes: genotypes_pkey primary key btree (genid),\n idx_gnm unique btree (speciesid, name),\n idx_gb btree (batchid),\n idxc btree (created),\n idxgf btree (familyid),\n idxgsp btree (speciesid)\n\nTable \"public.tpr\"\n Column | Type |\nModifiers\n---------------+--------------------------+---------------------------------\n--------------------------\n runid | integer | not null default\nnextval('\"tpr_runid_seq\"'::text)\n description | text | not null\n systemid | integer | not null\n runlog | text |\n ranby | integer | not null\n whenrun | timestamp with time zone | not null\n valid_p | character(1) | not null default 'f'\n whenvalidated | timestamp with time zone |\n whovalidated | integer |\n notes | text |\nIndexes: tpr_pkey primary key btree (runid),\n idx_tprv btree (valid_p)\n\nTable \"public.geneticfamily\"\n Column | Type |\nModifiers\n----------------+--------------------------+--------------------------------\n--------------------------------\n familyid | integer | not null default\nnextval('\"geneticfamily_familyid_seq\"'::text)\n speciesid | integer | not null\n mumid | integer | not null\n mumtype | character(1) | not null\n dadid | integer | not null\n dadtype | character(1) | not null\n batchid | integer | not null\n lastmodifiedby | integer | not null\n lastmodified | timestamp with time zone |\n name | character varying(100) |\nIndexes: geneticfamily_pkey primary key btree (familyid),\n idu_gfam unique btree (mumid, dadid, mumtype, dadtype),\n idx_gfnm unique btree (speciesid, name),\n idx_gfb btree (batchid),\n idx_gfun btree (upper(name)),\n idxgfd btree (dadid),\n idxgff btree (mumid, dadid),\n idxgfm btree (mumid),\n idxpsp btree (speciesid)\n\nTable \"public.tpsystems\"\n Column | Type |\nModifiers\n------------------+--------------------------+------------------------------\n------------------------------\n systemid | integer | not null default\nnextval('\"tpsystems_systemid_seq\"'::text)\n speciesid | integer | not null\n title | character varying(250) | not null\n description | text |\n lastmodifiedby | integer | not null\n lastmodified | timestamp with time zone | not null\n variancechecksum | character varying(32) |\nIndexes: tpsystems_pkey primary key btree (systemid)\n\nTable \"public.sctraitdefinition\"\n Column | Type |\nModifiers\n--------------------+--------------------------+----------------------------\n-----------------------------------------\n sctraitid | integer | not null default\nnextval('\"sctraitdefinition_sctraitid_seq\"'::text)\n speciesid | integer | not null\n sctrait | character varying(32) | not null\n lastmodifiedby | integer | not null\n lastmodified | timestamp with time zone | not null\n datatype | character varying(32) |\n datatransformation | character varying(32) |\nIndexes: sctraitdefinition_pkey primary key btree (sctraitid),\n idu_sctd unique btree (speciesid, sctrait)\n\n\n--------------------------------------------------------------------------\n'top' looks like this most of the time.. 
(except when queries are running)\n\n 8:40pm up 54 days, 7:45, 2 users, load average: 0.00, 0.19, 0.32\n 44 processes: 43 sleeping, 1 running, 0 zombie, 0 stopped\n CPU0 states: 0.1% user, 0.0% system, 0.0% nice, 99.4% idle\n CPU1 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\n Mem: 1028428K av, 846776K used, 181652K free, 0K shrd, 1580K\nbuff\n Swap: 530104K av, 53540K used, 476564K free 804148K\ncached\n\n\n", "msg_date": "Sat, 28 Jun 2003 21:31:48 +1000", "msg_from": "\"Chris Hutchinson\" <[email protected]>", "msg_from_op": true, "msg_subject": "'best practises' to speed up sorting? tuning postgresql.conf" }, { "msg_contents": "On Saturday 28 June 2003 17:01, Chris Hutchinson wrote:\n> I'm wondering how to speed up sorting which is slowing a query run\n> regularly on a linux postgresql 7.3.3 system.\n>\n> The box is a dual PIII with 1Gb ram. The database is located on a 20Gb SCSI\n> disk, with WAL on a separate disk.\n>\n> The only changes I've made to postgresql.conf so far are:\n> shared_buffers=4000\n> sort_mem=4000\n\nsort_mem is in kbs. So you are setting it roughyl 4MB. Given that you don't \nseem to have any shortage of RAM, how about 32/64/128MB RAM till things work \nas expected?\n\nOf course, setting it so in postgresql.conf would be a bad idea. Just set for \nthe session that makes this query and reset it back.\n\nKeep us posted on updates..\n\n Shridhar\n\n", "msg_date": "Sat, 28 Jun 2003 17:07:52 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'best practises' to speed up sorting? tuning postgresql.conf" }, { "msg_contents": "> Chris Hutchinson wrote:\n\n> I'm wondering how to speed up sorting which is slowing a \n> query run regularly on a linux postgresql 7.3.3 system.\nI see a lot of seq scans in your explain and there are no index scans,\nhave you done a 'vacuum analyze' lately?\n\nArjen\n\n\n\n", "msg_date": "Sat, 28 Jun 2003 13:58:09 +0200", "msg_from": "\"Arjen van der Meijden\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 'best practises' to speed up sorting? tuning postgresql.conf" } ]
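A concrete version of Shridhar's advice from this thread: raise sort_mem only in the session that runs the big report instead of globally. sort_mem is in kilobytes, so the value below means 64 MB per sort; that figure is only an illustration and should be weighed against the roughly 1.5 million 130-byte rows the slow plan sorts and the 1 GB of RAM in the box:

    SET sort_mem = 65536;          -- 64 MB, this session only

    -- run the big join with its ORDER BY here (EXPLAIN ANALYZE it to see
    -- whether the Sort step gets cheaper), then restore the default:

    RESET sort_mem;

Arjen's question about VACUUM ANALYZE is worth answering first, since stale statistics would skew every row estimate in these plans.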
[ { "msg_contents": "I have a number of very common queries that the optimizer plans a very inefficient plan. I vacuum hourly. I'm wondering what I can do to make the queries faster.\n\nHere are the relevant tables:\n\ncreate table image(\n imageid integer not null, /* The image's ID */\n containerid integer not null, /* The container that owns it */\n name varchar(120) not null, /* Its name */\n state bigint not null default 0, /* Its state */\n primary key (imageid),\n unique (containerid, name) /* All images in a container must be uniquely named */\n);\n\ncreate table ancestry(\n containerid integer not null, /* The container that has an ancestor */\n ancestorid integer not null, /* The ancestor of the container */\n unique (containerid, ancestorid),\n unique (ancestorid, containerid)\n);\n\nI have somewhere around 3M rows in the image table, and 37K rows in the ancestry table. The following is representative of some of the common queries I issue:\n\nselect * from image natural join ancestry where ancestorid=1000000 and (state & 7::bigint) = 0::bigint;\n\nWhen I ask postgres to EXPLAIN it, I get the following:\n\nMerge Join (cost=81858.22..81900.60 rows=124 width=49)\n -> Sort (cost=81693.15..81693.15 rows=16288 width=41)\n -> Seq Scan on image (cost=0.00..80279.17 rows=16288 width=41)\n -> Sort (cost=165.06..165.06 rows=45 width=8)\n -> Index Scan using ancestry_ancestorid_key on ancestry (cost=0.00..163.83 rows=45 width=8)\n\nIt appears to me that the query executes as follows:\n\n1. Scan every row in the image table to find those where (state & 7::bigint) = 0::bigint\n2. Sort the results\n3. Use an index on ancestry to find rows where ancestorid=1000000\n4. Sort the results\n5. Join the two\n\nIt seems to me that if this query is going to return a small percentage of the rows (which is the common case), it could be done much faster by first joining (all columns involved in the join are indexed), and then by applying the (state & 7::bigint) = 0::bigint constraint to the results.\n\nSimilarly, when I update, I get the following:\n\nexplain update image set state=0 from ancestry where ancestorid=1000000 and ancestry.containerid=image.containerid and (state & 7::bigint) = 0::bigint;\n\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=81841.92..81884.30 rows=124 width=43)\n -> Sort (cost=81676.74..81676.74 rows=16288 width=39)\n -> Seq Scan on image (cost=0.00..80279.17 rows=16288 width=39)\n -> Sort (cost=165.19..165.19 rows=45 width=4)\n -> Index Scan using ancestry_ancestorid_key on ancestry (cost=0.00..163.95 rows=45 width=4)\n\nHow can I avoid the sequential scan of the entire image table (i.e. how can I get it to perform the join first)?\n\nThanks in advance.\n\nRobert Wille\n\n\n\n\n\n\n\n\nI have a number of very common queries that the \noptimizer plans a very inefficient plan. I vacuum hourly. 
I'm wondering what I \ncan do to make the queries faster.\n \nHere are the relevant tables:\n \ncreate table image(    imageid \ninteger not null,     /* The image's ID */\n    containerid integer not \nnull,     /* The container that owns it \n*/    name varchar(120) not \nnull,     /* Its name */\n    state bigint not null default \n0,    /* Its state */    primary key \n(imageid),    unique (containerid, \nname)     /* All images in a container must be uniquely \nnamed */);\n \ncreate table ancestry(    \ncontainerid integer not null,     /* The container that \nhas an ancestor */    ancestorid integer not \nnull,     /* The ancestor of the container \n*/    unique (containerid, ancestorid),    \nunique (ancestorid, containerid));\n \nI have somewhere around 3M rows in the image table, \nand 37K rows in the ancestry table. The following is representative of some of \nthe common queries I issue:\n \nselect * from image natural join ancestry where \nancestorid=1000000 and (state & 7::bigint) = 0::bigint;\n \nWhen I ask postgres to EXPLAIN it, I get the \nfollowing:\n \nMerge Join  (cost=81858.22..81900.60 rows=124 \nwidth=49)  ->  Sort  (cost=81693.15..81693.15 rows=16288 \nwidth=41)        ->  Seq Scan on \nimage  (cost=0.00..80279.17 rows=16288 width=41)  ->  \nSort  (cost=165.06..165.06 rows=45 \nwidth=8)        ->  Index Scan \nusing ancestry_ancestorid_key on ancestry  (cost=0.00..163.83 rows=45 \nwidth=8)\n \nIt appears to me that the query executes as \nfollows:\n \n1. Scan every row in the image table to find those \nwhere (state & 7::bigint) = 0::bigint\n2. Sort the results\n3. Use an index on ancestry to find rows where \nancestorid=1000000\n4. Sort the results\n5. Join the two\n \nIt seems to me that if this query is going to \nreturn a small percentage of the rows (which is the common case), it could be \ndone much faster by first joining (all columns involved in the join are \nindexed), and then by applying the (state & 7::bigint) = 0::bigint \nconstraint to the results.\n \nSimilarly, when I update, I get the \nfollowing:\n \nexplain update image set state=0 from ancestry \nwhere ancestorid=1000000 and ancestry.containerid=image.containerid and (state \n& 7::bigint) = 0::bigint;\n \nNOTICE:  QUERY PLAN:\n \nMerge Join  (cost=81841.92..81884.30 rows=124 \nwidth=43)  ->  Sort  (cost=81676.74..81676.74 rows=16288 \nwidth=39)        ->  Seq Scan on \nimage  (cost=0.00..80279.17 rows=16288 width=39)  ->  \nSort  (cost=165.19..165.19 rows=45 \nwidth=4)        ->  Index Scan \nusing ancestry_ancestorid_key on ancestry  (cost=0.00..163.95 rows=45 \nwidth=4)\n \nHow can I avoid the sequential scan of the entire \nimage table (i.e. 
how can I get it to perform the join first)?\nThanks in advance.\n \nRobert Wille", "msg_date": "Mon, 30 Jun 2003 11:57:20 -0600", "msg_from": "\"Robert Wille\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query planner plans very inefficient plans" }, { "msg_contents": "\"Robert Wille\" <[email protected]> writes:\n> select * from image natural join ancestry where ancestorid=1000000 and\n> (state & 7::bigint) = 0::bigint;\n\nThe planner is not going to have any statistics that allow it to predict\nthe number of rows satisfying that &-condition, and so it's unsurprising\nif its off-the-cuff guess has little to do with reality.\n\nI'd recommend skipping any cute tricks with bit-packing, and storing the\nstate (and any other values you query frequently) as its own column, so\nthat the query looks like \n\nselect * from image natural join ancestry where ancestorid=1000000 and\nstate = 0;\n\nANALYZE should be able to do a reasonable job with a column that has 8\nor fewer distinct values ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jun 2003 15:05:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner plans very inefficient plans " }, { "msg_contents": "> I have somewhere around 3M rows in the image table, and 37K rows in the\n> ancestry table. The following is representative of some of the common\n> queries I issue:\n> \n> select * from image natural join ancestry where ancestorid=1000000 and\n> (state & 7::bigint) = 0::bigint;\n> \n> When I ask postgres to EXPLAIN it, I get the following:\n> \n> Merge Join (cost=81858.22..81900.60 rows=124 width=49)\n> -> Sort (cost=81693.15..81693.15 rows=16288 width=41)\n> -> Seq Scan on image (cost=0.00..80279.17 rows=16288 width=41)\n> -> Sort (cost=165.06..165.06 rows=45 width=8)\n> -> Index Scan using ancestry_ancestorid_key on ancestry \n> (cost=0.00..163.83 rows=45 width=8)\n> \n> It appears to me that the query executes as follows:\n> \n> 1. Scan every row in the image table to find those where (state &\n> 7::bigint) = 0::bigint\n> 2. Sort the results\n> 3. Use an index on ancestry to find rows where ancestorid=1000000\n> 4. Sort the results\n> 5. Join the two\n\nFWIW, I use INTs as bit vectors for options in various applications\nand have run into this in a few cases. In the database, I only care\nabout a few bits in the options INT, so what I did was create a\nfunction for each of the bits that I care about and then a function\nindex. Between the two, I've managed to solve my performance\nproblems.\n\nCREATE FUNCTION app_option_foo_is_set(INT)\n RETURNS BOOL\n IMMUTABLE\n AS '\nBEGIN\n IF $1 & 7::INT THEN\n RETURN TRUE;\n ELSE\n RETURN FALSE;\n END IF;\nEND;\n' LANGUAGE 'plpgsql';\nCREATE INDEX app_option_foo_fidx ON app_option_tbl (app_option_foo_is_set(options));\nVACUUM ANALYZE;\n\nJust make sure that you set your function to be IMMUTABLE. -sc\n\n\nPS It'd be slick if PostgreSQL would collapse adjacent booleans into a\n bit in a byte: it'd save some apps a chunk of space. 32 options ==\n 32 bytes with the type BOOL, but if adjacent BOOLs were collapsed,\n it'd only be 4 bytes on disk and maybe some page header data.\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 30 Jun 2003 14:13:36 -0700", "msg_from": "Sean Chittenden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner plans very inefficient plans" } ]
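One more option in the same spirit as Sean's function index, offered as a hedged sketch rather than a definite fix: releases of that era also accept a partial index whose predicate is the exact bit test the query uses, which gives the planner an index path into image through containerid for just the qualifying rows. The index name here is made up, and the predicate has to match the query's WHERE clause exactly for the older planner to prove it applies:

    CREATE INDEX image_container_state0_idx
        ON image (containerid)
     WHERE (state & 7::bigint) = 0::bigint;

    ANALYZE image;

    -- The query keeps its existing form:
    --   ... WHERE ancestorid = 1000000
    --         AND (state & 7::bigint) = 0::bigint
    -- and can then reach image via containerid instead of a full scan.

Tom's suggestion of storing the frequently tested flags in a plain column (so the condition becomes state = 0) is still the simplest route, because ANALYZE can then build statistics the planner can actually use; the partial index is a workaround for when the bit-packed layout has to stay.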