[ { "msg_contents": "> Actually, you can if you assume you can \"temporarily materialize\" that\n> view.\n>\n> Then, you use a join on my_query to pull the bits you want:\n> \n> select [big table.details] from [big table],\n> [select * from my_query order by [something] offset 280 limit 20]\n> where [join criteria between my_query and big table]\n> order by [something];\n> \n\nI think that's a pretty reasonable compromise between a true\nmaterialized solution and brute force limit/offset. Still, the\nperformance of a snapshot materialized view indexed around your query\nsimply can't be beat, although you have to pay a hefty price in\ncomplexity, maintenance, and coherency.\n\nMerlin\n", "msg_date": "Thu, 27 Jan 2005 09:35:09 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
[ { "msg_contents": "Good morning,\n\nI have a table that links two tables and I need to flatten one.\n(Please, if I'm just not in the good forum for this, tell me. This is\na performance issue for me, but you might consider this as an SQL\nquestion. Feel free to direct me at the good mailling-list.)\n\ndesign.products ---> design.product_department_time <--- design.departments\n\nThis allows us to fixe a given required time by department for a given product.\n- Departments are defined by the user\n- Products also\n- Time is input for a department (0 and NULL are impossible).\n\nHere a normal listing of design.product_department_time:\n product_id | department_id | req_time\n------------+---------------+----------\n 906 | A | 3000\n 906 | C | 3000\n 906 | D | 1935\n 907 | A | 1500\n 907 | C | 1500\n 907 | D | 4575\n 924 | A | 6000\n 924 | C | 1575\n\nI need to JOIN this data with the product listing we have to produce\nand multiply the quantity with this time by departments, and all that\nin a row. So departments entries become columns.\n\nI did the following (I formated the query to help out):\n\nSELECT\n product_id,\n sum(CASE WHEN department_id = 'A' THEN req_time END) AS a,\n sum(CASE WHEN department_id = 'C' THEN req_time END) AS c,\n sum(CASE WHEN department_id = 'D' THEN req_time END) AS d\nFROM design.product_department_time\nGROUP BY product_id;\n\n product_id | a | c | d\n------------+------+------+------\n 924 | 6000 | 1575 |\n 907 | 1500 | 1500 | 4575\n 906 | 3000 | 3000 | 1935\n\nNow in my software I know all the departments, so programatically I\nbuild a query with a CASE for each department (just like the above).\nThis is nice, this is working, there is less than 10 departements for\nnow and about 250 jobs actives in the system. So PostgeSQL will not\ndie. (My example is more simple because this was an hard-coded test\ncase, but I would create a case entry for each department.)\n\nBut I am wondering what is the most efficient way to do what I need?\n\nAfter that I need to link (LEFT JOIN) this data with the jobs in the\nsystem. Each job has a product_id related to it, so USING (product_id)\nand I multiply the time of each department with the quantity there is\nto product. 
So someone can know how much work time there is to do by\ndepartments.\n\n\nThanks for any input, comments, tips, help, positive criticism to\nlearn more, etc.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Thu, 27 Jan 2005 10:23:34 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Thu, 27 Jan 2005 10:23:34 -0500, Alexandre Leclerc\n<[email protected]> wrote:\n> Here a normal listing of design.product_department_time:\n> product_id | department_id | req_time\n> ------------+---------------+----------\n> 906 | A | 3000\n> 906 | C | 3000\n> 906 | D | 1935\n> 907 | A | 1500\n> 907 | C | 1500\n> 907 | D | 4575\n> 924 | A | 6000\n> 924 | C | 1575\n\nWell, I did something like this recently; it can be done though\nmaybe not very efficiently...\n\nUnfortunately we will need a rowtype with all the departaments:\nCREATE DOMAIN departaments AS (a int, b int, c int, d int, ...);\n\nA function aggregate for this type:\nCREATE FUNCTION dep_agg(ds departaments, args text[]) RETURNS departaments AS $$\n BEGIN\n IF args[1] = 'A' THEN ds.a = args[2]; -- I think it is not\npossible to do ds.$args[1] = args[2] equivalent.\n ELSIF args[1] = 'B' THEN ds.b = args[2];\n ELSIF args[1] = 'C' THEN ds.c = args[2];\n ELSIF args[1] = 'D' THEN ds.d = args[2];\n END IF;\n RETURN ds;\n END;\n$$ LANUGAGE plpgsql;\n\nTHEN an aggregate:\nCREATE AGGREGATE dep_aggregate (basetype = text[], stype =\ndepartaments, sfunc =dep_agg);\n\nAND then a view for sugar:\n\nCREATE VIEW prod_dep_time VIEW AS\n SELECT product_id, (dep_aggregate(ARRAY[departament_id, req_time]::text[])).*\n FROM product_department_time GROUP BY product_id;\n\nAnd voila. :)\nCouple of comments:\n -- aggregate takes array[] since making \"multicolumn\" aggregates is\nnot possible, as far as I know.\n -- I did not check the code, yet I did manage to make it work some time before.\n You may need to use \"ROWS\" or something in the function definition; I\ndon't remember and can't check it right now.\n -- comments welcome. :)\n\n Regards,\n Dawid\n", "msg_date": "Thu, 27 Jan 2005 17:27:40 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "> Unfortunately we will need a rowtype with all the departaments:\n> CREATE DOMAIN departaments AS (a int, b int, c int, d int, ...);\n\nI think you mean CREATE TYPE departments...\n\nChris\n", "msg_date": "Thu, 27 Jan 2005 16:41:15 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Thu, 27 Jan 2005 17:27:40 +0100, Dawid Kuroczko <[email protected]> wrote:\n> On Thu, 27 Jan 2005 10:23:34 -0500, Alexandre Leclerc\n> <[email protected]> wrote:\n> > Here a normal listing of design.product_department_time:\n> > product_id | department_id | req_time\n> > ------------+---------------+----------\n> > 906 | A | 3000\n> > 906 | C | 3000\n> > 906 | D | 1935\n> > 907 | A | 1500\n> > 907 | C | 1500\n> > 907 | D | 4575\n> > 924 | A | 6000\n> > 924 | C | 1575\n> \n> Well, I did something like this recently; it can be done though\n> maybe not very efficiently...\n> \n> Unfortunately we will need a rowtype with all the departaments:\n> CREATE DOMAIN departaments AS (a int, b int, c int, d int, ...);\n\n\nThank you for this help Dawid, I'll have to take some time to look at\nthis suggestion. 
If I must create a domain with all the departments\nI'll have a problem because the user is creating and deleting\ndepartments as it pleases him.\n\nAny counter-ideas?\n\nRegards.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Thu, 27 Jan 2005 12:43:56 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Thu, 27 Jan 2005 12:43:56 -0500, Alexandre Leclerc\n<[email protected]> wrote:\n> On Thu, 27 Jan 2005 17:27:40 +0100, Dawid Kuroczko <[email protected]> wrote:\n> > On Thu, 27 Jan 2005 10:23:34 -0500, Alexandre Leclerc\n> > <[email protected]> wrote:\n> > > Here a normal listing of design.product_department_time:\n> > > product_id | department_id | req_time\n> > > ------------+---------------+----------\n> > > 906 | A | 3000\n> > > 906 | C | 3000\n> > > 906 | D | 1935\n> > > 907 | A | 1500\n> > > 907 | C | 1500\n> > > 907 | D | 4575\n> > > 924 | A | 6000\n> > > 924 | C | 1575\n> >\n> > Well, I did something like this recently; it can be done though\n> > maybe not very efficiently...\n> >\n> > Unfortunately we will need a rowtype with all the departaments:\n> > CREATE DOMAIN departaments AS (a int, b int, c int, d int, ...);\n> Thank you for this help Dawid, I'll have to take some time to look at\n> this suggestion. If I must create a domain with all the departments\n> I'll have a problem because the user is creating and deleting\n> departments as it pleases him.\n> \n> Any counter-ideas?\n\nI have exactly the same problem with my proposal [1]\nI just wish there would be some \"native\" rows-to-columns\naggregate.\n\nThe other approach I used was something like this:\nSELECT product_id, a, b, c FROM\n (SELECT product_id, a FROM pdt) AS a FULL OUTER JOIN USING(product_id)\n (SELECT product_id, b FROM pdt) AS b FULL OUTER JOIN USING(product_id)\n (SELECT product_id, c FROM pdt) AS c;\n...or similar (I'm typing from memory ;)). Anyway it was good for getting\nwhole table, but performance well, wasn't the gratest. ;)).\n\n Regards,\n Dawid\n\n[1]: I was thinking about a trigger on a \"departaments\" table,\nand then recreating the aggregate and view as needed, but\nit isn't the kind of dynamic I had in mind. ;)\n", "msg_date": "Fri, 28 Jan 2005 09:07:59 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Fri, 28 Jan 2005 09:07:59 +0100, Dawid Kuroczko <[email protected]> wrote:\n> On Thu, 27 Jan 2005 12:43:56 -0500, Alexandre Leclerc\n> <[email protected]> wrote:\n> > On Thu, 27 Jan 2005 17:27:40 +0100, Dawid Kuroczko <[email protected]> wrote:\n> > > On Thu, 27 Jan 2005 10:23:34 -0500, Alexandre Leclerc\n> > > <[email protected]> wrote:\n> > > > Here a normal listing of design.product_department_time:\n> > > > product_id | department_id | req_time\n> > > > ------------+---------------+----------\n> > > > 906 | A | 3000\n> > > > 906 | C | 3000\n> > > > 906 | D | 1935\n> > > > 907 | A | 1500\n> > > > 907 | C | 1500\n> > > > 907 | D | 4575\n> > > > 924 | A | 6000\n> > > > 924 | C | 1575\n> > >\n> > > Well, I did something like this recently; it can be done though\n> > > maybe not very efficiently...\n> > >\n> > > Unfortunately we will need a rowtype with all the departaments:\n> > > CREATE DOMAIN departaments AS (a int, b int, c int, d int, ...);\n> > Thank you for this help Dawid, I'll have to take some time to look at\n> > this suggestion. 
If I must create a domain with all the departments\n> > I'll have a problem because the user is creating and deleting\n> > departments as it pleases him.\n> >\n> > Any counter-ideas?\n> \n> I have exactly the same problem with my proposal [1]\n> I just wish there would be some \"native\" rows-to-columns\n> aggregate.\n> \n> [1]: I was thinking about a trigger on a \"departaments\" table,\n> and then recreating the aggregate and view as needed, but\n> it isn't the kind of dynamic I had in mind. ;)\n\nYep, this is the only thing I also tought: a trigger to add / remove\ncolumns when the user add or remove a department... but this is not\nexactly what I wanted (this is not a very nice db design, from my\nperspective).\n\nThank you for you help.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Fri, 28 Jan 2005 09:25:55 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "Alexandre Leclerc wrote:\n>>>>>Here a normal listing of design.product_department_time:\n>>>>> product_id | department_id | req_time\n>>>>>------------+---------------+----------\n>>>>> 906 | A | 3000\n>>>>> 906 | C | 3000\n>>>>> 906 | D | 1935\n>>>>> 907 | A | 1500\n>>>>> 907 | C | 1500\n>>>>> 907 | D | 4575\n>>>>> 924 | A | 6000\n>>>>> 924 | C | 1575\n\nSorry for jumping in on this thread so late -- I haven't been able to \nkeep up with the lists lately.\n\nIf I understand what you want correctly, you should be able to use \ncrosstab from contrib/tablefunc:\n\ncreate table product_department_time(product_id int, department_id text, \nreq_time int);\ninsert into product_department_time values(906, 'A', 3000);\ninsert into product_department_time values(906, 'C', 3000);\ninsert into product_department_time values(906, 'D', 1935);\ninsert into product_department_time values(907, 'A', 1500);\ninsert into product_department_time values(907, 'C', 1500);\ninsert into product_department_time values(907, 'D', 4575);\ninsert into product_department_time values(924, 'A', 6000);\ninsert into product_department_time values(924, 'C', 1575);\n\nselect * from crosstab(\n 'select product_id, department_id, req_time\n from product_department_time order by 1',\n 'select ''A'' union all select ''C'' union all select ''D'''\n) as (product_id int, a int, c int, d int);\n\n product_id | a | c | d\n------------+------+------+------\n 906 | 3000 | 3000 | 1935\n 907 | 1500 | 1500 | 4575\n 924 | 6000 | 1575 |\n(3 rows)\n\nYou could make this dynamic for new values of department_id by wrapping \nit with a PL/pgSQL function.\n\nHTH,\n\nJoe\n", "msg_date": "Fri, 28 Jan 2005 08:34:27 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Fri, 28 Jan 2005 08:34:27 -0800, Joe Conway <[email protected]> wrote:\n> Alexandre Leclerc wrote:\n> >>>>>Here a normal listing of design.product_department_time:\n> >>>>> product_id | department_id | req_time\n> >>>>>------------+---------------+----------\n> >>>>> 906 | A | 3000\n> >>>>> 906 | C | 3000\n> >>>>> 906 | D | 1935\n> >>>>> 907 | A | 1500\n> >>>>> 907 | C | 1500\n> >>>>> 907 | D | 4575\n> >>>>> 924 | A | 6000\n> >>>>> 924 | C | 1575\n> \n> Sorry for jumping in on this thread so late -- I haven't been able to\n> keep up with the lists lately.\n> \n> If I understand what you want correctly, you should be able to use\n> crosstab from contrib/tablefunc:\n\nI'm a little bit confused on how to install this 
contirb. I know my\ncontrib package is installed, but I don't know how to make it work in\npostgresql. (Using 7.4.5-1mdk on Mandrake Linux.)\n\n-- \nAlexandre Leclerc\n", "msg_date": "Fri, 28 Jan 2005 13:15:44 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "Alexandre Leclerc wrote:\n> I'm a little bit confused on how to install this contirb. I know my\n> contrib package is installed, but I don't know how to make it work in\n> postgresql. (Using 7.4.5-1mdk on Mandrake Linux.)\n> \n\nFind the file tablefunc.sql and redirect it into your database, e.g.\n\npsql mydatabase < /path/to/contrib/scripts/tablefunc.sql\n\nI have no idea where that would be on Mandrake, but you could probably do:\n\nlocate tablefunc.sql\n\nOn Fedora Core 1 I find it here:\n/usr/share/pgsql/contrib/tablefunc.sql\n\nAlso find and read README.tablefunc.\n\nHTH,\n\nJoe\n", "msg_date": "Fri, 28 Jan 2005 10:24:37 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Fri, 28 Jan 2005 10:24:37 -0800, Joe Conway <[email protected]> wrote:\n> Alexandre Leclerc wrote:\n> > I'm a little bit confused on how to install this contirb. I know my\n> > contrib package is installed, but I don't know how to make it work in\n> > postgresql. (Using 7.4.5-1mdk on Mandrake Linux.)\n> >\n> locate tablefunc.sql\n\nThank you. The RPM was not installing, but I manage to extract it's\ncontact and grap the .sql file in the contrib. So I installed the\nfunction manually.\n\nNow it's time to evaluate performance of this! Thanks for your help!\n\n-- \nAlexandre Leclerc\n", "msg_date": "Fri, 28 Jan 2005 14:26:57 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Fri, 28 Jan 2005 10:24:37 -0800, Joe Conway <[email protected]> wrote:\n> Alexandre Leclerc wrote:\n> > I'm a little bit confused on how to install this contirb. I know my\n> > contrib package is installed, but I don't know how to make it work in\n> > postgresql. (Using 7.4.5-1mdk on Mandrake Linux.)\n> >\n> \n> Find the file tablefunc.sql and redirect it into your database, e.g.\n> \n> psql mydatabase < /path/to/contrib/scripts/tablefunc.sql\n> \n> I have no idea where that would be on Mandrake, but you could probably do:\n> \n> locate tablefunc.sql\n> \n> On Fedora Core 1 I find it here:\n> /usr/share/pgsql/contrib/tablefunc.sql\n> \n> Also find and read README.tablefunc.\n> \n> HTH,\n\nWHOA! Yess! Exactly the thing! Amazing! :)))\n\n Regards,\n Dawid\n", "msg_date": "Fri, 28 Jan 2005 20:59:05 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" } ]
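Joe's closing remark about wrapping crosstab in PL/pgSQL to cope with new departments is not spelled out in the thread. One possible sketch, using the design/prod schemas shown earlier: because crosstab's output column list has to be known at call time, the function below only assembles the full crosstab statement, including the AS (...) list, from the live prod.departments table, and the application then executes the returned text. The function name is made up, and dollar-quoting is 8.0 syntax (on 7.4 the body would need to be single-quoted).

    CREATE OR REPLACE FUNCTION design.crosstab_sql() RETURNS text AS $$
    DECLARE
        dep  record;
        cats text := '';
        cols text := 'product_id int';
    BEGIN
        -- Build the category query and the output column list from the
        -- current department list, so new departments need no code change.
        FOR dep IN SELECT department_id FROM prod.departments
                   ORDER BY department_id LOOP
            IF cats <> '' THEN
                cats := cats || ' UNION ALL ';
            END IF;
            cats := cats || 'SELECT ' || quote_literal(dep.department_id);
            cols := cols || ', ' || quote_ident(lower(dep.department_id)) || ' int';
        END LOOP;
        RETURN 'SELECT * FROM crosstab('
            || quote_literal('SELECT product_id, department_id, req_time '
                             || 'FROM design.product_department_time ORDER BY 1')
            || ', ' || quote_literal(cats)
            || ') AS (' || cols || ')';
    END;
    $$ LANGUAGE plpgsql;

A trigger on prod.departments could regenerate a view from this text whenever departments change, which is roughly the idea Dawid mentioned in his footnote.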
[ { "msg_contents": "Alexandre wrote:\n> Here a normal listing of design.product_department_time:\n> product_id | department_id | req_time\n> ------------+---------------+----------\n> 906 | A | 3000\n> 906 | C | 3000\n> 906 | D | 1935\n> 907 | A | 1500\n> 907 | C | 1500\n> 907 | D | 4575\n> 924 | A | 6000\n> 924 | C | 1575\n> product_id | a | c | d\n> ------------+------+------+------\n> 924 | 6000 | 1575 |\n> 907 | 1500 | 1500 | 4575\n> 906 | 3000 | 3000 | 1935\n\nok, you have a couple of different options here. The first thing that\njumps out at me is to use arrays to cheat using arrays.\n Let's start with the normalized result set.\n\nselect product_id, department_id, sum(req_time) group by product_id,\ndepartment_id \n\nproduct_id | department_id | sum \n924 a 6000\n924 c 1575\n907 a 1500\n[...]\n\nThis should be no slower (in fact faster) then your original query and\ndoes not have to be re-coded when you add new departments (you have a\ndepartment table, right?).\n\nIf you absolutely must have 1 record/product, you can cheat using\narrays:\n\nselect q.product_id, \n array_accum(q.department_id) as depts,\n array_accum(q.req_time) as times\n\t from \n (\n select product_id, department_id, sum(req_time) as req_time\ngroup by product_id, department_id\n ) q\n\tgroup by q.product_id;\n\t\n\nselect product_id, array_accum(department_id) sum(req_time) group by\nproduct_id\n\nproduct_id | department_id | sum \n924 {a, c} {1500, 1575}\n [...]\n\ndisclaimer 1: I never checked syntax\ndisclaimer 2: you may have to add array_accum to pg (check docs)\nMerlin\n", "msg_date": "Thu, 27 Jan 2005 10:44:45 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Thu, 27 Jan 2005 10:44:45 -0500, Merlin Moncure\n<[email protected]> wrote:\n> Alexandre wrote:\n> > Here a normal listing of design.product_department_time:\n> > product_id | department_id | req_time\n> > ------------+---------------+----------\n> > 906 | A | 3000\n> > 906 | C | 3000\n> > 906 | D | 1935\n> > 907 | A | 1500\n> > 907 | C | 1500\n> > 907 | D | 4575\n> > 924 | A | 6000\n> > 924 | C | 1575\n> > product_id | a | c | d\n> > ------------+------+------+------\n> > 924 | 6000 | 1575 |\n> > 907 | 1500 | 1500 | 4575\n> > 906 | 3000 | 3000 | 1935\n> \n> ok, you have a couple of different options here. The first thing that\n> jumps out at me is to use arrays to cheat using arrays.\n> Let's start with the normalized result set.\n> \n> select product_id, department_id, sum(req_time) group by product_id,\n> department_id\n> \n> product_id | department_id | sum\n> 924 a 6000\n> 924 c 1575\n> 907 a 1500\n> [...]\n\nHello Merlin,\n\nFirst of all, thanks for your time. Yes this is exactly what I'm doing\nright now (if I understand well your point here). All records in\ndesign.product_department_time are unique for each (product_id,\nreq_time) combo and 0-null values are not possible. 
This is the first\nlisting you have.\n\nIn my query I added the sum() and GROUP BY stuff to avoid having such a listing:\n\n product_id | a | c | d\n------------+------+------+------\n 906 | 3000 | |\n 906 | | 3000 |\n 906 | | | 1935\n 907 | 1500 | |\n 907 | | 1500 |\n 907 | | | 4575\n 924 | 6000 | |\n 924 | | 1575 |\n\nSo that for a given product_id I have all the times by departments in\na single row (second listing I posted).\n\n> If you absolutely must have 1 record/product, you can cheat using\n> arrays:\n> \n> select q.product_id,\n> array_accum(q.department_id) as depts,\n> array_accum(q.req_time) as times\n> from\n> (\n> select product_id, department_id, sum(req_time) as req_time\n> group by product_id, department_id\n> ) q\n> group by q.product_id;\n> \n> select product_id, array_accum(department_id) sum(req_time) group by\n> product_id\n> \n> product_id | department_id | sum\n> 924 {a, c} {1500, 1575}\n> [...]\n\nI did not used arrays because I didn't think about it, but I don't\nknow if this is still the most efficient way. My software will have to\nwork out the data, unless the array expands in good columns. But I'm\nnot an expert at all. I try to do good DB design, but sometimes this\nis more complicated to work with the data.\n\nHere is the table definition if it can help:\n\ndesign.products (product_id serial PRIMARY KEY, ...);\nprod.departments (department_id varchar(3) PRIMARY KEY, ...);\n\ndesign.product_department_time (\nproduct_id integer REFERENCES design.products ON DELETE\nCASCADE ON UPDATE CASCADE,\ndepartment_id varchar(3) REFERENCES prod.departments ON DELETE\nCASCADE ON UPDATE CASCADE,\nreq_time integer NOT NULL DEFAULT 0 CHECK (req_time >= 0),\nCONSTRAINT product_department_time_pkey PRIMARY KEY (product_id, department_id)\n);\n\nAnd i also have a jobs table which has one product_id attached to one\njob with the required quantity to produce. So I must shouw the user\nhow much time this will require by departments for each jobs. :) This\nis a nice report, but I don't want to kill the database each time the\nuser want to see it.\n\nThanks for your contrib so far, this will help me looking for other\nways doing it. I'm always ready to hear more!\n\nRegards.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Thu, 27 Jan 2005 12:43:25 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" } ]
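Merlin flags his sketch as unchecked; as written, the inner SELECTs are missing their FROM clauses, and array_accum is not a built-in. A checked version of the same idea (the aggregate definition is the one from the CREATE AGGREGATE examples in the docs, and reappears later in the thread):

    CREATE AGGREGATE array_accum (
        sfunc    = array_append,
        basetype = anyelement,
        stype    = anyarray,
        initcond = '{}'
    );

    SELECT q.product_id,
           array_accum(q.department_id) AS depts,
           array_accum(q.req_time)      AS times
    FROM (SELECT product_id, department_id, sum(req_time) AS req_time
          FROM design.product_department_time
          GROUP BY product_id, department_id) AS q
    GROUP BY q.product_id;

Since (product_id, department_id) is the table's primary key, the inner GROUP BY is redundant here, but it keeps the shape of Merlin's original sketch.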
[ { "msg_contents": "Steve wrote:\n> Okay. Darn. While I don't write the queries for the application, I do\n> interact with the company frequently. Their considering moving the\n> queries into the database with PL/pgSQL. Currently their queries are\n> done through ProvIV development using ODBC. Will context switching be\n> minimized here by using PL/pgSQL?\n\nYes, yes, yes! :-)\nOr maybe, depending on what you are doing. Moving application code into\nthe database has the potential to supercharge your system depending on\nhow it is structured.\n\nOur company has done very detailed performance measurements on the\nsubject. We converted our COBOL based ERP to PostgreSQL by writing a\nlibpq wrapper to allow our COBOL runtime to read/write queries to the\ndatabase. If you don't know much about COBOL, let's just say it has a\n'one record at a time' mentality. (read a record...do something...read a\nrecord...do something...). It is these cases that really want to be\nmoved into the server.\n\nHere are some rough performance numbers, but they are a pretty good\nreflection why pl/pgsql is so good. The procedure in question here will\nbuild a bill of materials for a fairly complex product assembly in an\norder entry system. Since all users hate waiting for things, this is a\nperformance sensitive operation.\n\nThe baseline time is the COBOL app's pre-conversion-to-sql time to build\nthe BOM. \nBOM-ISAM: 8 seconds\n\nUsing SQL queries instead of ISAM statements, our time suddenly leaps to\nBOM-SQL: 20 seocnds.\n\nA long, long, time ago, we implemented prepared statements into our\ndriver using the parameterized interface.\nBOM-SQL (prepared): 10 seconds\n\nWe converted the COBOL code to pl/pgsql. The logic is the same, however\neasy record aggregations were taken via refcursors were made where\npossible.\nBOM-PL/PGSQL: 1 second\n\nEven the commercial COBOL vendor's file system driver can't beat that\ntime when the application is running on the server. Also, pl/pgsql\nroutines are not latency sensitive, so they can be run over the internet\netc. In addition, having the server execute the business logic actually\n*reduced* the cpu load on the server by greatly reducing the time the\nserver spent switching back and forth from network/processing.\n\nOf course, ours is an extreme case but IMO, the benefits are real.\n\nMerlin\n\n\n\n\n\n\n\n", "msg_date": "Thu, 27 Jan 2005 12:35:32 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" } ]
[ { "msg_contents": "Alexandre wrote:\n> On Thu, 27 Jan 2005 10:44:45 -0500, Merlin Moncure\n> <[email protected]> wrote:\n> > Alexandre wrote:\n> > ok, you have a couple of different options here. The first thing\nthat\n> > jumps out at me is to use arrays to cheat using arrays.\n> > Let's start with the normalized result set.\n> >\n> > select product_id, department_id, sum(req_time) group by\nproduct_id,\n> > department_id\n> >\n> > product_id | department_id | sum\n> > 924 a 6000\n> > 924 c 1575\n> > 907 a 1500\n> > [...]\n> \n> Hello Merlin,\n> \n> First of all, thanks for your time. Yes this is exactly what I'm doing\n> right now (if I understand well your point here). All records in\n> design.product_department_time are unique for each (product_id,\n> req_time) combo and 0-null values are not possible. This is the first\n> listing you have.\n\nRight. I expanding departments into columns is basically a dead end.\nFirst of all, SQL is not really designed to do this, and second of all\n(comments continued below)\n\n> product_id | a | c | d\n> ------------+------+------+------\n> 906 | 3000 | |\n> 906 | | 3000 |\n> 906 | | | 1935\n> 907 | 1500 | |\n> 907 | | 1500 |\n> 907 | | | 4575\n> 924 | 6000 | |\n> 924 | | 1575 |\n\nthe above table is more expensive to group than the normalized version\nabove because it is much, much longer. This will get worse and worse as\nyou add more departments. So, whatever you end up doing, I'd advise\nagainst expanding rows from a table into columns of a result except for\nvery, very special cases. This is not one of those cases.\n\n> I did not used arrays because I didn't think about it, but I don't\n> know if this is still the most efficient way. My software will have to\n> work out the data, unless the array expands in good columns. But I'm\n> not an expert at all. I try to do good DB design, but sometimes this\n> is more complicated to work with the data.\n\nArrays are a quick'n'dirty way to de-normalize a result set. According\nto me, de-normalization is o.k. for result sets *only*. Generally, it\nis inappropriate to de-normalize any persistent object in the database,\nsuch as a view (or especially) a table. de-normalizing sets can\nsometimes simplify client-side coding issues or provide a performance\nbenefit at the query stage (or slow it down, so be careful!)\n\n> And i also have a jobs table which has one product_id attached to one\n> job with the required quantity to produce. So I must shouw the user\n> how much time this will require by departments for each jobs. :) This\n> is a nice report, but I don't want to kill the database each time the\n> user want to see it.\n\nYou always have the option to do this in code. 
This basically means\nordering the result set and writing a nested loop to pass over the data.\nIf you happen to be using a report engine (and it sounds like you are),\nsome engines can simplify this via a grouping criteria, some can't.\n\nIf parsing an array string is a pain I happen to have a C++ class handy\nthat can compose/decompose a postgresql array string if:\na: no more than 1 dimension and \nb: array bounds are known\n\nLet me know if you need it and I'll send it over.\nMerlin\n", "msg_date": "Thu, 27 Jan 2005 13:02:48 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Thu, 27 Jan 2005 13:02:48 -0500, Merlin Moncure\n<[email protected]> wrote:\n> Alexandre wrote:\n> > On Thu, 27 Jan 2005 10:44:45 -0500, Merlin Moncure\n> > <[email protected]> wrote:\n> > > Alexandre wrote:\n> > > Let's start with the normalized result set.\n> > >\n> > > product_id | department_id | sum\n> > > 924 a 6000\n> > > 924 c 1575\n> > > 907 a 1500\n> > > [...]\n> >\n> Right. I expanding departments into columns is basically a dead end.\n> First of all, SQL is not really designed to do this, and second of all\n> (comments continued below)\n\nOk, I got it. The basic message is to avoid making columns out of rows\nlike I'm doing right now, that \"de-normalizing\" in an array is the way\nto go. So I should query and get the results in an array then after my\napplication will parse the array into the good columns. (I'm\ndevelopping a software.)\n\nIf I still got it wrong, this is because the 'geek' section of my\nbrain is in vacation: leave a message and when it'll come back, it'll\nexplain all this to me! :)\n\nSo I found the array_accum function in the doc, so I did create it.\n\nCREATE AGGREGATE array_accum (\n sfunc = array_append,\n basetype = anyelement,\n stype = anyarray,\n initcond = '{}'\n);\n\nThen I created this new select:\nSELECT \n product_id, \n array_accum(department_id) as a_department_id,\n array_accum(req_time) as a_req_time\nFROM (SELECT * FROM design.product_department_time) AS tmp \nGROUP BY product_id;\n\nIt gives:\n product_id | a_department_id | a_req_time\n------------+-----------------+------------------\n 924 | {A,C} | {6000,1575}\n 907 | {A,C,D} | {1500,1500,4575}\n 906 | {A,C,D} | {3000,3000,1935}\n\nSo, the performance should be much better using this agregate approach?\n\nNo I thing I'll merge the results in my software, unless you think\nthat at this point doing a LEFT JOIN with my jobs table is the way to\ngo, beacuse the performance will be good. (Personally I don't know the\nanswer of this one.)\n\n> If parsing an array string is a pain I happen to have a C++ class handy\n> that can compose/decompose a postgresql array string if:\n> a: no more than 1 dimension and\n> b: array bounds are known\n> \n> Let me know if you need it and I'll send it over.\n\nThank you for your offer. I think parsing an array is the easiest\nthing to do for me in all this. :) If I encounter any problem, I'll\ndrop you a mail.\n\nRegards.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Thu, 27 Jan 2005 15:02:48 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" } ]
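For the jobs report Alexandre describes, the multiplication can stay in the normalized form and be grouped per job. A sketch, assuming a hypothetical jobs(job_id, product_id, quantity) layout since the real column names are not shown in the thread:

    SELECT j.job_id,
           p.department_id,
           sum(p.req_time * j.quantity) AS total_time
    FROM jobs j
    LEFT JOIN design.product_department_time p USING (product_id)
    GROUP BY j.job_id, p.department_id
    ORDER BY j.job_id, p.department_id;

Those rows can then be collapsed to one row per job with the array_accum aggregate, or flattened in the application, exactly as discussed for the per-product case.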
[ { "msg_contents": "Greg Stark wrote:\n \n> test=> create or replace function array_push (anyarray, anyelement)\n> returns anyarray as 'select $1 || $2' language sql immutable strict;\n> CREATE FUNCTION\n> test=> create aggregate array_aggregate (basetype=anyelement,\n> sfunc=array_push, stype=anyarray, initcond = '{}');\n> CREATE AGGREGATE\n\nwhat about \nCREATE AGGREGATE array_accum (\n sfunc = array_append,\n basetype = anyelement,\n stype = anyarray,\n initcond = '{}'\n);\n?\nMerlin\n", "msg_date": "Thu, 27 Jan 2005 14:13:02 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\n\"Merlin Moncure\" <[email protected]> writes:\n\n> what about \n> CREATE AGGREGATE array_accum (\n> sfunc = array_append,\n> basetype = anyelement,\n> stype = anyarray,\n> initcond = '{}'\n> );\n\nhuh, that is faster. It's only 14x slower than the C implementation.\n\nFor completeness, here are the fastest times I get after repeating a few times\neach:\n\n 13.97 ms \tcontrib/intagg C implementation\n194.76 ms \taggregate using array_append\n723.15 ms \taggregate with SQL state function\n\n-- \ngreg\n\n", "msg_date": "27 Jan 2005 19:14:46 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
[ { "msg_contents": "Alexandre wrote:\n> > >\n> > Right. I expanding departments into columns is basically a dead\nend.\n> > First of all, SQL is not really designed to do this, and second of\nall\n> > (comments continued below)\n> \n> Ok, I got it. The basic message is to avoid making columns out of rows\n\nyes. This is wrong.\n\n> like I'm doing right now, that \"de-normalizing\" in an array is the way\n> to go. \n\nOnly sometimes. Looping application code is another tactic. There may\nbe other things to do as well that don't involve arrays or application\ncode. Consider arrays a (very postgresql specific) tool in your\nexpanding toolchest.\n\nDe-normalization is a loaded term because we are only presenting queried\ndata in an alternate format (normalization normally applying to data\nstructured within the database). There are many people on this list who\nwill tell you not to de-normalize anything, ever (and most of the time,\nyou shouldn't!). \n\nMerlin\n", "msg_date": "Thu, 27 Jan 2005 16:05:09 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Thu, 27 Jan 2005 16:05:09 -0500, Merlin Moncure\n<[email protected]> wrote:\n> Alexandre wrote:\n> > like I'm doing right now, that \"de-normalizing\" in an array is the way\n> > to go.\n> \n> Only sometimes. Looping application code is another tactic. There may\n> be other things to do as well that don't involve arrays or application\n> code. Consider arrays a (very postgresql specific) tool in your\n> expanding toolchest.\n\nI take good notes of that. All this opened to me other ways for\nsolutions, so I'm glad of that. I'll take more time to think about all\nthat.\n\n> De-normalization is a loaded term because we are only presenting queried\n> data in an alternate format (normalization normally applying to data\n> structured within the database). There are many people on this list who\n> will tell you not to de-normalize anything, ever (and most of the time,\n> you shouldn't!).\n\nThank you for all you help and time for this.\n\nBest regards.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Thu, 27 Jan 2005 17:07:19 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" } ]
[ { "msg_contents": "quote from manual:\n--\nUnfortunately, there is no similarly trivial query\nthat can be used to improve the performance of count()\nwhen applied to the entire table\n--\n\ndoes count(1) also cause a sequential scan of the\nentire table? It should be able to just use the\nprimary keys.\n\n-Zavier\n\n=====\n---\nzavier.net - Internet Solutions\n---\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nYahoo! Mail - now with 250MB free storage. Learn more.\nhttp://info.mail.yahoo.com/mail_250\n", "msg_date": "Thu, 27 Jan 2005 21:17:56 -0800 (PST)", "msg_from": "Zavier Sheran <[email protected]>", "msg_from_op": true, "msg_subject": "slow count()" }, { "msg_contents": "On Thu, Jan 27, 2005 at 21:17:56 -0800,\n Zavier Sheran <[email protected]> wrote:\n> quote from manual:\n> --\n> Unfortunately, there is no similarly trivial query\n> that can be used to improve the performance of count()\n> when applied to the entire table\n> --\n> \n> does count(1) also cause a sequential scan of the\n> entire table? It should be able to just use the\n> primary keys.\n\nNo it can't just use the index file, so that an index scan will be slower\nthan the sequential scan unless there is a where clause restricting the\nnumber of rows to a small fraction (about 5%) of the table.\n\nSearch the archives for if you want to read more about this.\n", "msg_date": "Thu, 27 Jan 2005 23:38:34 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow count()" } ]
[ { "msg_contents": "I'm involved in an implementation of doing trigger-based counting as a \nsubstitute for count( * ) in real time in an application. My \ntrigger-based counts seem to be working fine and dramatically improve \nthe performance of the display of the counts in the application layer.\n\nThe problem comes in importing new data into the tables for which the \ncounts are maintained. The current import process does some \npreprocessing and then does a COPY from the filesystem to one of the \ntables on which counts are maintained. This means that for each row \nbeing inserted by COPY, a trigger is fired. This didn't seem like a big \ndeal to me until testing began on realistic data sets.\n\nFor a 5,000-record import, preprocessing plus the COPY took about 5 \nminutes. Once the triggers used for maintaining the counts were added, \nthis grew to 25 minutes. While I knew there would be a slowdown per row \naffected, I expected something closer to 2x than to 5x.\n\nIt's not unrealistic for this system to require data imports on the \norder of 100,000 records. Whereas this would've taken at most an hour \nand a half before (preprocessing takes a couple of minutes, so the \nactual original COPY takes closer to 2-3 minutes, or just over 1500 \nrows per minute), the new version is likely to take more than 7 hours, \nwhich seems unreasonable to me. Additionally, the process is fairly CPU \nintensive.\n\nI've examined the plans, and, as far as I can tell, the trigger \nfunctions are being prepared and using the indexes on the involved \ntables, which are hundreds of thousands of rows in the worst cases. The \nbasic structure of the functions is a status lookup SELECT (to \ndetermine whether a count needs to be updated and which one) and one or \ntwo UPDATE statements (depending on whether both an increment and a \ndecrement need to be performed). As I said, it looks like this basic \nformat is using indexes appropriately.\n\nIs there anything I could be overlooking that would tweak some more \nperformance out of this scenario?\n\nWould it be absurd to drop the triggers during import and recreate them \nafterward and update the counts in a summary update based on \ninformation from the import process?\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\n", "msg_date": "Thu, 27 Jan 2005 23:22:33 -0600", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": true, "msg_subject": "Triggers During COPY" }, { "msg_contents": "Thomas,\n\n> Would it be absurd to drop the triggers during import and recreate them\n> afterward and update the counts in a summ> ary update based on \n> information from the import process?\n\nThat's what I'd do.\n\nAlso, might I suggest storing the counts in memcached (see the pgmemached \nproject on pgFoundry) rather than in a table?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 27 Jan 2005 21:41:49 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Triggers During COPY" }, { "msg_contents": "Thomas F.O'Connell wrote:\n> \n> The problem comes in importing new data into the tables for which the \n> counts are maintained. The current import process does some \n> preprocessing and then does a COPY from the filesystem to one of the \n> tables on which counts are maintained. 
This means that for each row \n> being inserted by COPY, a trigger is fired. This didn't seem like a big \n> deal to me until testing began on realistic data sets.\n> \n> For a 5,000-record import, preprocessing plus the COPY took about 5 \n> minutes. Once the triggers used for maintaining the counts were added, \n> this grew to 25 minutes. While I knew there would be a slowdown per row \n> affected, I expected something closer to 2x than to 5x.\n> rformance out of this scenario?\n> \nHave been seeing similar behavior whilst testing sample code for the 8.0\ndocs (summary table plpgsql trigger example).\n\nI think the nub of the problem is dead tuples bloat in the summary /\ncount table, so each additional triggered update becomes more and more\nexpensive as time goes on. I suspect the performance decrease is\nexponential with the no of rows to be processed.\n\n\n> Would it be absurd to drop the triggers during import and recreate them \n> afterward and update the counts in a summary update based on information \n> from the import process?\n> \n> \nThat's the conclusion I came to :-)\n\nregards\n\nMark\n\n", "msg_date": "Fri, 28 Jan 2005 18:55:37 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Triggers During COPY" }, { "msg_contents": "I forgot to mention that I'm running 7.4.6. The README includes the \ncaveat that pgmemcache is designed for use with 8.0. My instinct is to \nbe hesitant using something like that in a production environment \nwithout some confidence that people have done so with good and reliable \nsuccess or without more extensive testing than I'm likely to have time \nfor primarily because support for 7.4.x is never likely to increase. \nThanks for the tip, though.\n\nFor the time being, it sounds like I'll probably try to implement the \ndrop/create trigger setup during import.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Jan 27, 2005, at 11:41 PM, Josh Berkus wrote:\n\n> Thomas,\n>\n>> Would it be absurd to drop the triggers during import and recreate \n>> them\n>> afterward and update the counts in a summ> ary update based on\n>> information from the import process?\n>\n> That's what I'd do.\n>\n> Also, might I suggest storing the counts in memcached (see the \n> pgmemached\n> project on pgFoundry) rather than in a table?\n>\n> -- \n> --Josh\n>\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\n", "msg_date": "Fri, 28 Jan 2005 00:28:58 -0600", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Triggers During COPY" }, { "msg_contents": "As far as dropping/recreating triggers, there seem to be two strategies:\n\n1. Perform the drop-import-create operation in a transaction, thereby \nguaranteeing the accuracy of the counts but presumably locking the \ntable during the operation, which could take many minutes (up to an \nhour or two) in extreme cases.\n\n2. Drop the triggers, import, create the triggers, and update with the \nimport count, recognizing that other updates could've occurred without \naccumulating updates during the import process, then later (nightly, \nmaybe?) do a full update to recalibrate the counts. In this case the \ncount( * ) involved could also lock the table for a bit pending the \nsequential scan(s) if the update is performed in a transaction. 
\nOtherwise, again, there is a realistic possibility of inaccurate counts \noccurring and persisting between calibrations.\n\nIs there a best practice anywhere here?\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Jan 27, 2005, at 11:41 PM, Josh Berkus wrote:\n\n> Thomas,\n>\n>> Would it be absurd to drop the triggers during import and recreate \n>> them\n>> afterward and update the counts in a summ> ary update based on\n>> information from the import process?\n>\n> That's what I'd do.\n>\n> -- \n> --Josh\n>\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\n", "msg_date": "Fri, 28 Jan 2005 11:17:23 -0600", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Triggers During COPY" }, { "msg_contents": "Thomas,\n\n> I forgot to mention that I'm running 7.4.6. The README includes the\n> caveat that pgmemcache is designed for use with 8.0.\n\nWell, you could always hire Sean to backport it.\n\n> 1. Perform the drop-import-create operation in a transaction, thereby\n> guaranteeing the accuracy of the counts but presumably locking the\n> table during the operation, which could take many minutes (up to an\n> hour or two) in extreme cases.\n\nWhat other operations are ocurring on the table concurrent with the COPY? \nCopy isn't really intended to be run in parallel with regular insert/update \non the same table, AFAIK.\n\n> 2. Drop the triggers, import, create the triggers, and update with the\n> import count, recognizing that other updates could've occurred without\n> accumulating updates during the import process, then later (nightly,\n> maybe?) do a full update to recalibrate the counts. In this case the\n> count( * ) involved could also lock the table for a bit pending the\n> sequential scan(s) if the update is performed in a transaction.\n> Otherwise, again, there is a realistic possibility of inaccurate counts\n> occurring and persisting between calibrations.\n\nAlternately: bulk load the new rows into a \"holding\" table. Do counts on \nthat table. Then, as one transaction, drop the triggers, merge the holding \ntable with the live table and update the counts, and restore the triggers.\n\nAlternately: Move the copy out of triggers into middleware where you can deal \nwith it more flexibly.\n\nAlternately: Resign yourself to the idea that keeping running statistics is \nincompatible with doing a fast bulk load, and buy faster/better hardware.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 28 Jan 2005 09:48:31 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Triggers During COPY" } ]
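A sketch of the holding-table variant Josh suggests, with hypothetical names throughout (mytable, row_counts, mytable_count_trg, maintain_row_counts(), and the file path). Run as one transaction it is option 1 above: the counts stay correct, at the price of the locks taken on mytable while the merge runs.

    BEGIN;

    -- 1. Bulk load into a trigger-free holding table (or use psql's \copy).
    CREATE TEMP TABLE import_holding AS SELECT * FROM mytable WHERE false;
    COPY import_holding FROM '/tmp/import.dat';

    -- 2. Take the per-row trigger out of the way for the merge.
    DROP TRIGGER mytable_count_trg ON mytable;
    INSERT INTO mytable SELECT * FROM import_holding;

    -- 3. Apply the count once, in bulk, instead of once per inserted row.
    UPDATE row_counts
       SET n = n + (SELECT count(*) FROM import_holding)
     WHERE tablename = 'mytable';

    -- 4. Put the trigger back before committing.
    CREATE TRIGGER mytable_count_trg
        AFTER INSERT OR DELETE ON mytable
        FOR EACH ROW EXECUTE PROCEDURE maintain_row_counts();

    COMMIT;

Doing steps 2-4 outside a transaction is option 2: faster and less intrusive, but inserts or deletes that slip in while the trigger is gone make the counts drift until the next full recalibration.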
[ { "msg_contents": "Hi Folks ,\n\n I am running this query on postgres 8 beta version and it is not \nusing the right index, where as if i run the same query on postgres 7.4 \nversion it uses the right index . Here are the explain analyze output \nfor both the versions. can anyone explain this ?\n\n\ntks.\n\n\n\ntables: attribute table has 200k records, string table has 190 records\n\n\\d common.attribute\n Table \"common.attribute\"\n Column | Type | \nModifiers\n----------------+-----------------------------+-------------------------------------------------------\n attributeid | integer | not null default \nnextval('COMMON.ATTRIBUTESEQ'::text)\n fknamestringid | integer | not null\n stringvalue | text |\n integervalue | integer |\n numericvalue | numeric(14,2) |\n datevalue | timestamp without time zone |\n booleanvalue | boolean |\n bigstringvalue | text |\nIndexes:\n \"pk_attribute_attributeid\" primary key, btree (attributeid)\n \"uk_attribute_fkstringid_stringvalue_integervalue_numericvalue_d\" \nunique, btree (fknamestringid, stringvalue, integervalue, numericvalue, \ndatevalue)\n \"idx_attribute_fknamestringid\" btree (fknamestringid)\nForeign-key constraints:\n \"fk_attribute_string\" FOREIGN KEY (fknamestringid) REFERENCES \ncommon.string(stringid)\n\n\n\n\\d common.string\n Table \"common.string\"\n Column | Type | Modifiers\n----------+---------+----------------------------------------------------\n stringid | integer | not null default nextval('COMMON.STRINGSEQ'::text)\n value | text |\nIndexes:\n \"pk_string_stringid\" primary key, btree (stringid)\n\n\nQuery\n\nselect attribute0_.attributeid as attribut1_, attribute0_.stringvalue as \nstringva2_,\n attribute0_.bigStringvalue as bigStrin3_, attribute0_.integervalue \nas integerv4_,\n attribute0_.numericvalue as numericv5_, attribute0_.datevalue as \ndatevalue,\n attribute0_.booleanvalue as booleanv7_, attribute0_.fknamestringid \nas fknamest8_\nfrom common.attribute attribute0_, common.string text1_\nwhere (text1_.value='squareFeet' and \nattribute0_.fknamestringid=text1_.stringid)\nand (numericValue='775.0')\n\n\nExplain Analyze from 7.4\n\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..501.96 rows=1 width=100) (actual \ntime=127.420..135.914 rows=1 loops=1)\n -> Seq Scan on string text1_ (cost=0.00..12.31 rows=2 width=4) \n(actual time=68.421..68.466 rows=1 loops=1)\n Filter: (value = 'squareFeet'::text)\n -> Index Scan using idx_attribute_fknamestringid on attribute \nattribute0_ (cost=0.00..244.81 rows=1 width=100) (actual \ntime=58.963..67.406 rows=1 loops=1)\n Index Cond: (attribute0_.fknamestringid = \"outer\".stringid)\n Filter: (numericvalue = 775.0)\n Total runtime: 136.056 ms\n\nExplain Analyze from 8 beta\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..5440.85 rows=1 width=109) (actual \ntime=27.313..440.469 rows=1 loops=1)\n -> Seq Scan on attribute attribute0_ (cost=0.00..5437.82 rows=1 \nwidth=109) (actual time=26.987..440.053 rows=2 loops=1)\n Filter: (numericvalue = 775.0)\n -> Index Scan using pk_string_stringid on string text1_ \n(cost=0.00..3.02 rows=1 width=4) (actual time=0.169..0.172 rows=0 loops=2)\n Index Cond: (\"outer\".fknamestringid = text1_.stringid)\n Filter: (value = 'squareFeet'::text)\n Total runtime: 440.648 
ms\n\n\n", "msg_date": "Fri, 28 Jan 2005 10:15:50 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Poor Performance on Postgres 8.0" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n> I am running this query on postgres 8 beta version and it is not \n> using the right index, where as if i run the same query on postgres 7.4 \n> version it uses the right index .\n\n1. Beta which, exactly?\n\n2. Have you ANALYZEd both tables lately?\n\n3. If so, try this to see what it thinks the cost of the reverse plan\nis:\n\n\tbegin;\n\talter table common.string drop constraint pk_string_stringid;\n\texplain analyze ... same query ...\n\trollback;\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Jan 2005 11:23:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on Postgres 8.0 " }, { "msg_contents": "Tom Lane wrote:\n\n>Pallav Kalva <[email protected]> writes:\n> \n>\n>> I am running this query on postgres 8 beta version and it is not \n>>using the right index, where as if i run the same query on postgres 7.4 \n>>version it uses the right index .\n>> \n>>\n>\n>1. Beta which, exactly?\n>\n\nBeta 4\n\n>\n>2. Have you ANALYZEd both tables lately?\n>\nYes\n\n>\n>3. If so, try this to see what it thinks the cost of the reverse plan\n>is:\n>\n>\tbegin;\n>\talter table common.string drop constraint pk_string_stringid;\n>\texplain analyze ... same query ...\n>\trollback;\n>\n what do u mean by rollback exactly ? i can drop the pk constraint \nand run explain analyze and see how it behaves.\n\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n\n", "msg_date": "Fri, 28 Jan 2005 11:58:17 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Performance on Postgres 8.0" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n>> begin;\n>> alter table common.string drop constraint pk_string_stringid;\n>> explain analyze ... same query ...\n>> rollback;\n>> \n> what do u mean by rollback exactly ? i can drop the pk constraint \n> and run explain analyze and see how it behaves.\n\nThe point of the rollback is that you don't really make the pk\nconstraint go away. It is gone from the perspective of the EXPLAIN,\nbut after you rollback it's back again. 
Easier than rebuilding it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Jan 2005 12:02:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on Postgres 8.0 " }, { "msg_contents": "Hi Tom,\n\n I dropped the primary key constraint and ran the explain analyze on \nthe same query and here is what i get seq scans on both the tables , \nstill doesnt make use of the index on common.attribute table .\n\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..5609.19 rows=1 width=104) (actual \ntime=11.875..319.358 rows=1 loops=1)\n Join Filter: (\"outer\".fknamestringid = \"inner\".stringid)\n -> Seq Scan on attribute attribute0_ (cost=0.00..5604.76 rows=1 \nwidth=104) (actual time=11.541..318.649 rows=2 loops=1)\n Filter: (numericvalue = 775.0)\n -> Seq Scan on string text1_ (cost=0.00..4.41 rows=1 width=4) \n(actual time=0.277..0.319 rows=1 loops=2)\n Filter: (value = 'squareFeet'::text)\n Total runtime: 319.496 ms\n\n\nTom Lane wrote:\n\n>Pallav Kalva <[email protected]> writes:\n> \n>\n>>>begin;\n>>>alter table common.string drop constraint pk_string_stringid;\n>>>explain analyze ... same query ...\n>>>rollback;\n>>>\n>>> \n>>>\n>> what do u mean by rollback exactly ? i can drop the pk constraint \n>>and run explain analyze and see how it behaves.\n>> \n>>\n>\n>The point of the rollback is that you don't really make the pk\n>constraint go away. It is gone from the perspective of the EXPLAIN,\n>but after you rollback it's back again. Easier than rebuilding it...\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n\n", "msg_date": "Fri, 28 Jan 2005 13:38:15 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Performance on Postgres 8.0" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n> still doesnt make use of the index on common.attribute table .\n\nWhat do you get from just plain\n\nexplain analyze select * from common.string text1_\nwhere text1_.value='squareFeet';\n\nI get the impression that it must think this will yield a lot of rows.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Jan 2005 14:48:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on Postgres 8.0 " }, { "msg_contents": "explain analyze select * from common.string text1_\nwhere text1_.value='squareFeet';\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------\n Seq Scan on string text1_ (cost=0.00..4.41 rows=1 width=21) (actual \ntime=0.283..0.322 rows=1 loops=1)\n Filter: (value = 'squareFeet'::text)\n Total runtime: 0.492 ms\n\n\nI am not worried about this table as common.string has only 190 records, \nwhere as the other table common.attribute which is very big (200k \nrecords) i want it to use index scan on it . The matching column in \ncommon.attribute table has only 175 distinct records in common.attribute \ntable , do you think that's the problem ? 
here is the full query again\n\nselect attribute0_.attributeid as attribut1_, attribute0_.stringvalue as \nstringva2_,\n attribute0_.bigStringvalue as bigStrin3_, attribute0_.integervalue \nas integerv4_,\n attribute0_.numericvalue as numericv5_, attribute0_.datevalue as \ndatevalue,\n attribute0_.booleanvalue as booleanv7_, attribute0_.fknamestringid \nas fknamest8_\nfrom common.attribute attribute0_, common.string text1_\nwhere (text1_.value='squareFeet' and \nattribute0_.fknamestringid=text1_.stringid)\nand (numericValue='775.0')\n\n\nTom Lane wrote:\n\n>Pallav Kalva <[email protected]> writes:\n> \n>\n>>still doesnt make use of the index on common.attribute table .\n>> \n>>\n>\n>What do you get from just plain\n>\n>explain analyze select * from common.string text1_\n>where text1_.value='squareFeet';\n>\n>I get the impression that it must think this will yield a lot of rows.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n\n", "msg_date": "Fri, 28 Jan 2005 14:57:31 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Performance on Postgres 8.0" }, { "msg_contents": "I was able to duplicate this behavior with dummy data that had only a\nfew distinct values for fknamestringid --- the planner then thinks that\nthe index probe into attribute will match a lot of rows and hence take a\nlong time. Could we see your pg_stats row for fknamestringid, ie\n\nselect * from pg_stats\nwhere tablename = 'attribute' and attname = 'fknamestringid';\n\nIt would be interesting to see the same for your 7.4 installation too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Jan 2005 15:50:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on Postgres 8.0 " }, { "msg_contents": "On 7.4 I get\n\nselect * from pg_stats\n where tablename = 'attribute' and attname = 'fknamestringid';\n\n schemaname | tablename | attname | null_frac | avg_width | \nn_distinct | most_common_vals \n| \nmost_common_freqs | \nhistogram_bounds | correlation\n------------+-----------+----------------+-----------+-----------+------------+-----------------------------------------------------+-------------------------------------------------------------------------------------+----------------------------------------------------------+-------------\n common | attribute | fknamestringid | 0 | 4 \n| 124 | {2524,2434,2523,2599,2595,2592,2596,2528,2586,2446} | \n{0.132333,0.13,0.0766667,0.0373333,0.0366667,0.0333333,0.031,0.029,0.0263333,0.019} \n| {2433,2441,2455,2462,2473,2479,2484,2492,2505,2574,2598} | -0.22864\n(1 row)\n\nOn 8\n\nselect * from pg_stats\nwhere tablename = 'attribute' and attname = 'fknamestringid';\n\n schemaname | tablename | attname | null_frac | avg_width | \nn_distinct | most_common_vals \n| \nmost_common_freqs | \nhistogram_bounds | correlation\n------------+-----------+----------------+-----------+-----------+------------+-----------------------------------------------------+-----------------------------------------------------------------------------------+----------------------------------------------------------+-------------\n common | attribute | fknamestringid | 0 | 4 \n| 80 | {2524,2434,2530,2522,2525,2523,2527,2526,2574,2531} | \n{0.219333,0.199333,0.076,0.0643333,0.0616667,0.05,0.0453333,0.042,0.04,0.0286667} \n| {2437,2528,2529,2538,2539,2540,2554,2562,2575,2584,2637} | 0.0274016\n\n\nTom Lane wrote:\n\n>I was able to duplicate this behavior with dummy data that had only a\n>few distinct values for 
fknamestringid --- the planner then thinks that\n>the index probe into attribute will match a lot of rows and hence take a\n>long time. Could we see your pg_stats row for fknamestringid, ie\n>\n>select * from pg_stats\n>where tablename = 'attribute' and attname = 'fknamestringid';\n>\n>It would be interesting to see the same for your 7.4 installation too.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n\n", "msg_date": "Fri, 28 Jan 2005 15:58:19 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Performance on Postgres 8.0" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n> On 8\n> common | attribute | fknamestringid | 0 | 4 \n> | 80 | {2524,2434,2530,2522,2525,2523,2527,2526,2574,2531} | \n> {0.219333,0.199333,0.076,0.0643333,0.0616667,0.05,0.0453333,0.042,0.04,0.0286667} \n> | {2437,2528,2529,2538,2539,2540,2554,2562,2575,2584,2637} | 0.0274016\n\nGiven those stats, the planner is going to estimate that about 1/80th of\nthe attribute table matches any particular fknamestringid, and that's\nwhat's driving it away from using the indexscan. I cannot tell whether\nthere are indeed a couple of thousand rows joining to the 'squareFeet'\nstring row (in which case the condition numericValue='775.0' must be\nreally selective) or whether this is an outlier case that joins to just\na few attribute rows.\n\nThe slightly different stats values for 7.4 would have given it a\nslightly lower value for the cost of an indexscan by\nidx_attribute_fknamestringid, but certainly not as low as your original\nmessage shows. Perhaps you have some difference in parameter settings\nin your 7.4 installation --- most likely a lower random_page_cost.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Jan 2005 16:34:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor Performance on Postgres 8.0 " }, { "msg_contents": "The random_page_cost value is same on both the versions, the only thing \ndifference between 7.4 version and 8 version is that 7.4 ver has 100k \nless records. For, now i created index on numericvalue column on \nattribute table and it used that index and it is much faster that way. \nit came down to 24msec. \n\nAlso, i tried to see the matching id for squarefeet in attribute table \nthere are 800 some records in attribute table for 8 version and 700 \nsomething in 7.4 version.\n\n\nTom Lane wrote:\n\n>Pallav Kalva <[email protected]> writes:\n> \n>\n>>On 8\n>> common | attribute | fknamestringid | 0 | 4 \n>>| 80 | {2524,2434,2530,2522,2525,2523,2527,2526,2574,2531} | \n>>{0.219333,0.199333,0.076,0.0643333,0.0616667,0.05,0.0453333,0.042,0.04,0.0286667} \n>>| {2437,2528,2529,2538,2539,2540,2554,2562,2575,2584,2637} | 0.0274016\n>> \n>>\n>\n>Given those stats, the planner is going to estimate that about 1/80th of\n>the attribute table matches any particular fknamestringid, and that's\n>what's driving it away from using the indexscan. I cannot tell whether\n>there are indeed a couple of thousand rows joining to the 'squareFeet'\n>string row (in which case the condition numericValue='775.0' must be\n>really selective) or whether this is an outlier case that joins to just\n>a few attribute rows.\n>\n>The slightly different stats values for 7.4 would have given it a\n>slightly lower value for the cost of an indexscan by\n>idx_attribute_fknamestringid, but certainly not as low as your original\n>message shows. 
Perhaps you have some difference in parameter settings\n>in your 7.4 installation --- most likely a lower random_page_cost.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n> \n>\n\n\n", "msg_date": "Fri, 28 Jan 2005 16:57:32 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor Performance on Postgres 8.0" } ]
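The thread above turns on the planner's row estimate for fknamestringid: with roughly 80 distinct values in the statistics, it assumes any single value matches about 1/80th of the 200k-row table and walks away from the index. One standard knob for that, offered here as a sketch rather than something the posters ran, is to raise the per-column statistics target and re-analyze so the most-common-values list and histogram get finer; the index on numericvalue is the one the poster said he eventually created, though its name below is invented.

ALTER TABLE common.attribute ALTER COLUMN fknamestringid SET STATISTICS 200;  -- default target in this era is 10
ANALYZE common.attribute;

-- check what the planner now believes about the column
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE schemaname = 'common' AND tablename = 'attribute' AND attname = 'fknamestringid';

-- the workaround the poster settled on (index name is illustrative)
CREATE INDEX idx_attribute_numericvalue ON common.attribute (numericvalue);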
[ { "msg_contents": "I was wondering about index types. Oracle has an index type called a\n'bitmap' index. They describe this as an index for low cardinality\nfields, where only the cardinal values are indexed in a b-tree, and\nthen it uses a bitmap below that to describe rows. They say that this\ntype of index is very fast when combined with queries that used the\nindexed row in 'AND' clauses in a sql statement as the index can\n'mask' the results very fast. I have not been able to benchmark the\nactual effectiveness of this kind of index, but I was wondering if\nanyone has had experience with this an believes it might be a useful\nfeature for postgres?\n\nYes I have a vested interest in this because alot of my searches are\nmasked against low cardinality fields 'Y' or 'N' type things where\nthis could potentialy benefit me...\n\nAlex Turner\nNetEconomist\n", "msg_date": "Fri, 28 Jan 2005 10:39:52 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap indexes" }, { "msg_contents": "\n\tcontrib/intarray has an index type which could be what you need.\n\n\n> I was wondering about index types. Oracle has an index type called a\n> 'bitmap' index. They describe this as an index for low cardinality\n> fields, where only the cardinal values are indexed in a b-tree, and\n> then it uses a bitmap below that to describe rows. They say that this\n> type of index is very fast when combined with queries that used the\n> indexed row in 'AND' clauses in a sql statement as the index can\n> 'mask' the results very fast. I have not been able to benchmark the\n> actual effectiveness of this kind of index, but I was wondering if\n> anyone has had experience with this an believes it might be a useful\n> feature for postgres?\n>\n> Yes I have a vested interest in this because alot of my searches are\n> masked against low cardinality fields 'Y' or 'N' type things where\n> this could potentialy benefit me...\n>\n> Alex Turner\n> NetEconomist\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n", "msg_date": "Fri, 28 Jan 2005 17:04:48 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap indexes" }, { "msg_contents": "Alex Turner <[email protected]> writes:\n> I was wondering about index types. Oracle has an index type called a\n> 'bitmap' index.\n\nThere's a great deal about this in the list archives (probably more in\npgsql-hackers than in -performance). Most of the current interest has\nto do with building in-memory bitmaps on the fly, as a way of decoupling\nindex and heap scan processing. Which is not quite what you're talking\nabout but should be pretty effective for low-cardinality cases. In\nparticular it'd allow AND and OR combination of multiple indexes, which\nwe do poorly or not at all at the moment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Jan 2005 11:13:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap indexes " }, { "msg_contents": "> There's a great deal about this in the list archives (probably more in\n> pgsql-hackers than in -performance). Most of the current interest has\n> to do with building in-memory bitmaps on the fly, as a way of decoupling\n> index and heap scan processing. Which is not quite what you're talking\n> about but should be pretty effective for low-cardinality cases. 
In\n> particular it'd allow AND and OR combination of multiple indexes, which\n> we do poorly or not at all at the moment.\n\n\tIs this called a star join ?\n\n\tIt would also allow to access the data pages in a more sequential order \nif the rows are not required to be retrieved in index order, which would \npotentially be a large speedup for index scans concerning more than the \nusual very small percentage of rows in a table : if several rows to be \nretrieved are on the same page, it would visit this page only once.\n", "msg_date": "Fri, 28 Jan 2005 18:14:14 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap indexes " }, { "msg_contents": "[email protected] (Alex Turner) writes:\n\n> I was wondering about index types. Oracle has an index type called a\n> 'bitmap' index. They describe this as an index for low cardinality\n> fields, where only the cardinal values are indexed in a b-tree, and\n> then it uses a bitmap below that to describe rows. They say that this\n> type of index is very fast when combined with queries that used the\n> indexed row in 'AND' clauses in a sql statement as the index can\n> 'mask' the results very fast. I have not been able to benchmark the\n> actual effectiveness of this kind of index, but I was wondering if\n> anyone has had experience with this an believes it might be a useful\n> feature for postgres?\n>\n> Yes I have a vested interest in this because alot of my searches are\n> masked against low cardinality fields 'Y' or 'N' type things where\n> this could potentialy benefit me...\n\nThere are some ideas on this; nothing likely to be implemented in the\nvery short term.\n\nIf you do a lot of queries on this sort of basis, there's something in\nPostgreSQL known as a \"partial index\" that could be used to improve\nsome queries.\n\nWhat you might do is something like:\n\n create index partial_y_for_field_a on some_table (id_column)\n where field_a = 'Y';\n create index partial_n_for_field_a on some_table (id_column)\n where field_a = 'N';\n\nThat could provide speedup for queries that might do joins on\nid_column where your query has the qualifiers \"where field_a = 'Y'\" or\n\"where field_a = 'N'\".\n\nThat's not going to provide a generalized answer to \"star queries,\"\nbut it is an immediate answer for some cases.\n-- \n\"cbbrowne\",\"@\",\"ca.afilias.info\"\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 673-4124 (land)\n", "msg_date": "Fri, 28 Jan 2005 14:45:17 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap indexes" }, { "msg_contents": "PFC wrote:\n> > There's a great deal about this in the list archives (probably more in\n> > pgsql-hackers than in -performance). Most of the current interest has\n> > to do with building in-memory bitmaps on the fly, as a way of decoupling\n> > index and heap scan processing. Which is not quite what you're talking\n> > about but should be pretty effective for low-cardinality cases. 
In\n> > particular it'd allow AND and OR combination of multiple indexes, which\n> > we do poorly or not at all at the moment.\n> \n> \tIs this called a star join ?\n> \n> \tIt would also allow to access the data pages in a more sequential order \n> if the rows are not required to be retrieved in index order, which would \n> potentially be a large speedup for index scans concerning more than the \n> usual very small percentage of rows in a table : if several rows to be \n> retrieved are on the same page, it would visit this page only once.\n\nPlease see the TODO list for a summary of previous discussions and\ndirections.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Feb 2005 11:08:09 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap indexes" }, { "msg_contents": "PFC wrote:\n\n>\n> contrib/intarray has an index type which could be what you need.\n>\n\nI've used intarray for a site that requires that I match multiple low\ncardinality attributes with multiple search criteria. Here's an\n(abridged) example:\n\nThe table:\n\n\\d person_attributes\n Table \"dm.person_attributes\"\n Column | Type | Modifiers\n----------------+--------------------------+--------------------\n attributes | integer[] | not null\n personid | integer | not null\nIndexes:\n \"person_attributes_pk\" PRIMARY KEY, btree (personid)\n \"person_attributes_gist_attributes_index\" gist (attributes)\n\nThis table has about 1.1 million rows.\n\nThe index:\n\ncreate index person_attributes_gist_attributes_index on\nperson_attributes using gist ((attributes) gist__int_ops);\n\nThe query:\n\nselect personid\nfrom person_attributes\nwhere attributes @@\n'(1|3)&(900)&(902)&(1002)&(9002)&(11003)&(12002|12003)&(13003|13004|13005|13006|13007|13008|13009|13010)'::query_int\n\nThe explain analyze:\n\nIndex Scan using person_attributes_gist_search_index on\nperson_attributes pa (cost=0.00..1221.26 rows=602 width=4) (actual\ntime=0.725..628.994 rows=1659 loops=1)\n Index Cond: (search @@ '( 1 | 3 ) & 900 & 902 & 1002 & 9002 & 11003 &\n( 12002 | 12003 ) & ( ( ( ( ( ( ( 13003 | 13004 ) | 13005 ) | 13006 ) |\n13007 ) | 13008 ) | 13009 ) | 13010 )'::query_int)\nTotal runtime: 431.843 ms\n\nThe query_int and what each number means:\n\n1|3 means, only gather the people in site id 1 or 3.\n900 is an arbitrary flag that means they are searchable.\n902 is another arbitrary flag that means they have photos.\n1002 is the flag for \"don't drink\".\n9002 is the flag for \"don't smoke\".\n11003 is the flag for \"female\".\n12002|12003 are the flags for straight|bisexual.\n13003 through 13010 represent the age range 18 through 25.\n\nIn plain English: select all females who are straight or bisexual,\nbetween the ages of 18 and 25 inclusive, that don't drink, that don't\nsmoke, who are searchable, who have photos, and belong to sites 1 or 3.\n\nAs you can see by the explain, this query is relatively fast, given the\nnumber of criteria and data that has to be searched.\n\nThis site's predecessor used oracle, and I used bitmap indexes for\nperforming these searches in oracle. 
This intarray method is the closest\nI've come to being able to reproduce the same functionality at the\nrequired speed in postgres.\n\nThe only problems I've run into with this method are: the non-concurrent\nnature of gist indexes, which makes doing any sort of bulk DML on them\nextremely time consuming (I usually have to drop the index, perform the\nbulk DML, then re-create the index), dealing with intarray methods to\nselect particular attributes so I can then order by them, and dealing\nwith intarray methods for updating the attributes column. All of these\nmethods are detailed in the intarray README.\n\nI'm happy with the performance in production so far. I've yet to see any\ngist concurrency issues affect performance with normal rates of DML.\n\nDaniel\n\n-- \n\nDaniel Ceregatti - Programmer\nOmnis Network, LLC\n\nReal Programmers don't eat quiche. They eat Twinkies and Szechwan food.\n\n\n", "msg_date": "Wed, 02 Feb 2005 15:18:37 -0800", "msg_from": "Daniel Ceregatti <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap indexes" } ]
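Since the thread ends up recommending partial indexes for the low-cardinality 'Y'/'N' case, here is a minimal self-contained sketch of that idea; the table and column names are invented for illustration and are not from any poster's schema. The EXPLAIN at the end simply confirms the partial index can be considered once the query repeats the index predicate.

-- hypothetical table with a low-cardinality flag column
CREATE TABLE demo_items (
    id        integer PRIMARY KEY,
    is_active char(1) NOT NULL,   -- 'Y' or 'N'
    added_on  date    NOT NULL
);

-- index only the rows the common queries actually touch
CREATE INDEX demo_items_active_idx ON demo_items (added_on) WHERE is_active = 'Y';

ANALYZE demo_items;

-- the predicate must appear in the query for the partial index to be usable
EXPLAIN SELECT id FROM demo_items
WHERE is_active = 'Y' AND added_on >= DATE '2005-01-01';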
[ { "msg_contents": "> With the right configuration you can get very serious throughput. The\n> new system is processing over 2500 insert transactions per second. We\n> don't need more RAM with this config. The disks are fast enough.\n> 2500 transaction/second is pretty damn fast.\n\nfsync on/off?\n\nMerlin\n", "msg_date": "Fri, 28 Jan 2005 11:19:44 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "fsync on.\n\nAlex Turner\nNetEconomist\n\n\nOn Fri, 28 Jan 2005 11:19:44 -0500, Merlin Moncure\n<[email protected]> wrote:\n> > With the right configuration you can get very serious throughput. The\n> > new system is processing over 2500 insert transactions per second. We\n> > don't need more RAM with this config. The disks are fast enough.\n> > 2500 transaction/second is pretty damn fast.\n> \n> fsync on/off?\n> \n> Merlin\n> \n>\n", "msg_date": "Sun, 30 Jan 2005 23:15:07 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" } ]
[ { "msg_contents": "Hi\n\n i am running a High availability Postgresql server on redhat\nlinux 9. I am using NFS mount of data directory from a shared storage. The server was running without problems for last two \nmonths. The server is connected to a dialin router where all my company units dialin and update the database. \nConcurrently i have some 25 to 50 connections at peak hours. \n\nFollowing are the problems i am facing \n\n1) When 3 or 4 clients connect to this server, the pids are created and\nthose pids are not killed even after the client disconnects.\nafter sometimes some 10 to 20 pids gets created and postgres reject client connections. \n\n2) After one or two concurrent connections, the server slows down.\nThe postmaster occupies more than 90% of the memory.\n\n3) Even when restarting server, after one or two connection from clients, the pids start increasing automatically to 10 or 13. This again slows down the server.\n\nPlease help me to sort out this issue\n\nThanks in advance .\n\n\nRegards\n\nN S\n\n  \nHi\n\n  i am running a High availability Postgresql server on redhat\nlinux 9. I am using NFS mount of data directory from a shared storage. The server was running without problems for last two \nmonths. The server is connected to a dialin router where all my company units dialin and update the database. \nConcurrently i have some 25 to 50 connections at peak hours. \n\nFollowing are the problems i am facing \n\n1) When 3 or 4 clients connect to this server, the pids are created and\nthose pids are not killed even after the client disconnects.\nafter sometimes some 10 to 20 pids gets created and postgres reject client connections. \n\n2) After one or two concurrent connections, the server slows down.\nThe postmaster occupies more than 90% of the memory.\n\n3) Even when restarting server, after one or two connection from clients, the pids start increasing automatically to 10 or 13. This again slows down the server.\n\nPlease help me to sort out this issue\n\nThanks in advance .\n\n\nRegards\n\nN S", "msg_date": "29 Jan 2005 14:43:21 -0000", "msg_from": "\"Narayanan Subramaniam Iyer\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres server getting slow!!" }, { "msg_contents": "\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n> 1) When 3 or 4 clients connect to this server, the pids are created and\n> those pids are not killed even after the client disconnects.\n\nIn that case your clients are not really disconnecting. Take a closer\nlook at your client-side software.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 29 Jan 2005 10:51:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres server getting slow!! " } ]
[ { "msg_contents": "You don't mention if you have run VACUUM or VACUUM ANALYZE lately.\nThat's generally one of the first things that folks will suggest. If you\nhave a lot of updates then VACUUM will clean up dead tuples; if you have\na lot of inserts then VACUUM ANALYZE will update statistics so that the\nplanner can make better decisions (as I understand it).\n \nAnother data point people will ask for in helping you will be EXPLAIN\nANALYZE output from running the queries you think are slowing down.\n \n- DAP\n\n\n________________________________\n\n\tFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Ken\nEgervari\n\tSent: Wednesday, January 26, 2005 9:17 PM\n\tTo: [email protected]\n\tSubject: [PERFORM] Performance problem with semi-large tables\n\t\n\t\n\tHi everyone.\n\t \n\tI'm new to this forum and was wondering if anyone would be kind\nenough to help me out with a pretty severe performance issue. I believe\nthe problem to be rather generic, so I'll put it in generic terms.\nSince I'm at home and not a work (but this is really bugging me), I\ncan't post any specifics. However, I think my explaination will\nsuffice.\n\t \n\tI have a 2 tables that are are getting large and will only get\nlarger with time (expoentially as more users sign on to the system).\nRight the now, a table called 'shipment' contains about 16,000 rows and\n'shipment_status' contains about 32,500 rows. These aren't massive rows\n(I keep reading about tables with millions), but they will definately\nget into 6 digits by next year and query performance is quite poor.\n\t \n\tNow, from what I can understand about tuning, you want to\nspecify good filters, provide good indexes on the driving filter as well\nas any referencial keys that are used while joining. This has helped me\nsolve performance problems many times in the past (for example, changing\na query speed from 2 seconds to 21 milliseconds). \n\t \n\tHowever, I am now tuning queries that operate on these two\ntables and the filters aren't very good (the best is a filter ratio of\n0.125) and the number of rows returned is very large (not taking into\nconsideration limits).\n\t \n\tFor example, consider something like this query that takes ~1\nsecond to finish:\n\t \n\tselect s.*, ss.*\n\tfrom shipment s, shipment_status ss, release_code r\n\twhere s.current_status_id = ss.id\n\t and ss.release_code_id = r.id\n\t and r.filtered_column = '5'\n\torder by ss.date desc\n\tlimit 100;\n\t \n\tRelease code is just a very small table of 8 rows by looking at\nthe production data, hence the 0.125 filter ratio. However, the data\ndistribution is not normal since the filtered column actually pulls out\nabout 54% of the rows in shipment_status when it joins. Postgres seems\nto be doing a sequencial scan to pull out all of these rows. Next, it\njoins approx 17550 rows to shipment. Since this query has a limit, it\nonly returns the first 100, which seems like a waste.\n\t \n\tNow, for this query, I know I can filter out the date instead to\nspeed it up. For example, I can probably search for all the shipments\nin the last 3 days instead of limiting it to 100. 
But since this isn't\na real production query, I only wanted to show it as an example since\nmany times I cannot do a filter by the date (and the sort may be date or\nsomething else irrelavant).\n\t \n\tI'm just stressed out how I can make queries like this more\nefficient since all I see is a bunch of hash joins and sequencial scans\ntaking all kinds of time.\n\t \n\tI guess here are my 2 questions:\n\t \n\t1. Should I just change beg to change the requirements so that I\ncan make more specific queries and more screens to access those?\n\t2. Can you recommend ways so that postgres acts on big tables\nmore efficiently? I'm not really interested in this specific case (I\njust made it up). I'm more interested in general solutions to this\ngeneral problem of big table sizes with bad filters and where join\norders don't seem to help much.\n\t \n\tThank you very much for your help.\n\t \n\tBest Regards,\n\tKen Egervari\n\n\n\n\n\n\n\n\nYou don't mention if you have run VACUUM or VACUUM ANALYZE \nlately. That's generally one of the first things that folks will suggest. If you \nhave a lot of updates then VACUUM will clean up dead tuples; if you have a lot \nof inserts then VACUUM ANALYZE will update statistics so that the planner can \nmake better decisions (as I understand it).\n \nAnother data point people will ask for in helping you will \nbe EXPLAIN ANALYZE output from running the queries you think are slowing \ndown.\n \n- DAP\n\n\n\nFrom: [email protected] \n [mailto:[email protected]] On Behalf Of Ken \n EgervariSent: Wednesday, January 26, 2005 9:17 PMTo: \n [email protected]: [PERFORM] Performance \n problem with semi-large tables\n\nHi everyone.\n \nI'm new to this forum and was wondering if anyone \n would be kind enough to help me out with a pretty severe performance \n issue.  I believe the problem to be rather generic, so I'll put it in \n generic terms.  Since I'm at home and not a work (but this is really \n bugging me), I can't post any specifics.  However, I think my \n explaination will suffice.\n \nI have a 2 tables that are are getting large and \n will only get larger with time (expoentially as more users sign on to the \n system).  Right the now, a table called 'shipment' contains about 16,000 \n rows and 'shipment_status' contains about 32,500 rows.  These aren't \n massive rows (I keep reading about tables with millions), but they will \n definately get into 6 digits by next year and query performance is quite \n poor.\n \nNow, from what I can understand about tuning, you \n want to specify good filters, provide good indexes on the driving filter as \n well as any referencial keys that are used while joining.  This has \n helped me solve performance problems many times in the past (for example, \n changing a query speed from 2 seconds to 21 milliseconds).  \n \nHowever, I am now tuning queries that operate on \n these two tables and the filters aren't very good (the best is a filter ratio \n of 0.125) and the number of rows returned is very large (not taking into \n consideration limits).\n \nFor example, consider something like this \n query that takes ~1 second to finish:\n \nselect s.*, ss.*\nfrom shipment s, shipment_status ss, release_code \n r\nwhere s.current_status_id = ss.id\n   and ss.release_code_id = \n r.id\n   and r.filtered_column = \n '5'\norder by ss.date desc\nlimit 100;\n \nRelease code is just a very small table of 8 rows \n by looking at the production data, hence the 0.125 filter ratio.  
\n However, the data distribution is not normal since the filtered column \n actually pulls out about 54% of the rows in shipment_status when it \n joins.  Postgres seems to be doing a sequencial scan to pull out all of \n these rows.  Next, it joins approx 17550 rows to shipment.  Since \n this query has a limit, it only returns the first 100, which seems like a \n waste.\n \nNow, for this query, I know I can filter out the \n date instead to speed it up.  For example, I can probably search for all \n the shipments in the last 3 days instead of limiting it to 100.  But \n since this isn't a real production query, I only wanted to show it as an \n example since many times I cannot do a filter by the date (and the sort may be \n date or something else irrelavant).\n \nI'm just stressed out how I can make queries like \n this more efficient since all I see is a bunch of hash joins and sequencial \n scans taking all kinds of time.\n \nI guess here are my 2 questions:\n \n1. Should I just change beg to change the \n requirements so that I can make more specific queries and more screens to \n access those?\n2. Can you recommend ways so that postgres acts \n on big tables more efficiently?  I'm not really interested in this \n specific case (I just made it up).  I'm more interested in general \n solutions to this general problem of big table sizes with bad filters and \n where join orders don't seem to help much.\n \nThank you very much for your \n help.\n \nBest Regards,\nKen \nEgervari", "msg_date": "Sat, 29 Jan 2005 17:04:26 -0500", "msg_from": "\"David Parker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem with semi-large tables" }, { "msg_contents": "Yes, I'm very well aware of VACUUM and VACUUM ANALYZE. I've even clusted the date index and so on to ensure faster performance.\n ----- Original Message ----- \n From: David Parker \n To: Ken Egervari ; [email protected] \n Sent: Saturday, January 29, 2005 5:04 PM\n Subject: Re: [PERFORM] Performance problem with semi-large tables\n\n\n You don't mention if you have run VACUUM or VACUUM ANALYZE lately. That's generally one of the first things that folks will suggest. If you have a lot of updates then VACUUM will clean up dead tuples; if you have a lot of inserts then VACUUM ANALYZE will update statistics so that the planner can make better decisions (as I understand it).\n\n Another data point people will ask for in helping you will be EXPLAIN ANALYZE output from running the queries you think are slowing down.\n\n - DAP\n\n\n\n----------------------------------------------------------------------------\n From: [email protected] [mailto:[email protected]] On Behalf Of Ken Egervari\n Sent: Wednesday, January 26, 2005 9:17 PM\n To: [email protected]\n Subject: [PERFORM] Performance problem with semi-large tables\n\n\n Hi everyone.\n\n I'm new to this forum and was wondering if anyone would be kind enough to help me out with a pretty severe performance issue. I believe the problem to be rather generic, so I'll put it in generic terms. Since I'm at home and not a work (but this is really bugging me), I can't post any specifics. However, I think my explaination will suffice.\n\n I have a 2 tables that are are getting large and will only get larger with time (expoentially as more users sign on to the system). Right the now, a table called 'shipment' contains about 16,000 rows and 'shipment_status' contains about 32,500 rows. 
These aren't massive rows (I keep reading about tables with millions), but they will definately get into 6 digits by next year and query performance is quite poor.\n\n Now, from what I can understand about tuning, you want to specify good filters, provide good indexes on the driving filter as well as any referencial keys that are used while joining. This has helped me solve performance problems many times in the past (for example, changing a query speed from 2 seconds to 21 milliseconds). \n\n However, I am now tuning queries that operate on these two tables and the filters aren't very good (the best is a filter ratio of 0.125) and the number of rows returned is very large (not taking into consideration limits).\n\n For example, consider something like this query that takes ~1 second to finish:\n\n select s.*, ss.*\n from shipment s, shipment_status ss, release_code r\n where s.current_status_id = ss.id\n and ss.release_code_id = r.id\n and r.filtered_column = '5'\n order by ss.date desc\n limit 100;\n\n Release code is just a very small table of 8 rows by looking at the production data, hence the 0.125 filter ratio. However, the data distribution is not normal since the filtered column actually pulls out about 54% of the rows in shipment_status when it joins. Postgres seems to be doing a sequencial scan to pull out all of these rows. Next, it joins approx 17550 rows to shipment. Since this query has a limit, it only returns the first 100, which seems like a waste.\n\n Now, for this query, I know I can filter out the date instead to speed it up. For example, I can probably search for all the shipments in the last 3 days instead of limiting it to 100. But since this isn't a real production query, I only wanted to show it as an example since many times I cannot do a filter by the date (and the sort may be date or something else irrelavant).\n\n I'm just stressed out how I can make queries like this more efficient since all I see is a bunch of hash joins and sequencial scans taking all kinds of time.\n\n I guess here are my 2 questions:\n\n 1. Should I just change beg to change the requirements so that I can make more specific queries and more screens to access those?\n 2. Can you recommend ways so that postgres acts on big tables more efficiently? I'm not really interested in this specific case (I just made it up). I'm more interested in general solutions to this general problem of big table sizes with bad filters and where join orders don't seem to help much.\n\n Thank you very much for your help.\n\n Best Regards,\n Ken Egervari\n\n\n\n\n\n\nYes, I'm very well aware of VACUUM and VACUUM \nANALYZE.  I've even clusted the date index and so on to ensure faster \nperformance.\n\n----- Original Message ----- \nFrom:\nDavid \n Parker \nTo: Ken Egervari ; [email protected]\n\nSent: Saturday, January 29, 2005 5:04 \n PM\nSubject: Re: [PERFORM] Performance \n problem with semi-large tables\n\nYou don't mention if you have run VACUUM or VACUUM \n ANALYZE lately. That's generally one of the first things that folks will \n suggest. 
If you have a lot of updates then VACUUM will clean up dead tuples; \n if you have a lot of inserts then VACUUM ANALYZE will update statistics so \n that the planner can make better decisions (as I understand \n it).\n \nAnother data point people will ask for in helping you \n will be EXPLAIN ANALYZE output from running the queries you think are slowing \n down.\n \n- DAP\n\n\n\nFrom: [email protected] \n [mailto:[email protected]] On Behalf Of Ken \n EgervariSent: Wednesday, January 26, 2005 9:17 PMTo: \n [email protected]: [PERFORM] Performance \n problem with semi-large tables\n\nHi everyone.\n \nI'm new to this forum and was wondering if \n anyone would be kind enough to help me out with a pretty severe performance \n issue.  I believe the problem to be rather generic, so I'll put it in \n generic terms.  Since I'm at home and not a work (but this is really \n bugging me), I can't post any specifics.  However, I think my \n explaination will suffice.\n \nI have a 2 tables that are are getting large \n and will only get larger with time (expoentially as more users sign on to \n the system).  Right the now, a table called 'shipment' contains about \n 16,000 rows and 'shipment_status' contains about 32,500 rows.  These \n aren't massive rows (I keep reading about tables with millions), but they \n will definately get into 6 digits by next year and query performance is \n quite poor.\n \nNow, from what I can understand about tuning, \n you want to specify good filters, provide good indexes on the driving filter \n as well as any referencial keys that are used while joining.  This has \n helped me solve performance problems many times in the past (for example, \n changing a query speed from 2 seconds to 21 milliseconds).  \n \n \nHowever, I am now tuning queries that operate \n on these two tables and the filters aren't very good (the best is a filter \n ratio of 0.125) and the number of rows returned is very large (not taking \n into consideration limits).\n \nFor example, consider something like this \n query that takes ~1 second to finish:\n \nselect s.*, ss.*\nfrom shipment s, shipment_status ss, \n release_code r\nwhere s.current_status_id = ss.id\n   and ss.release_code_id = \n r.id\n   and r.filtered_column = \n '5'\norder by ss.date desc\nlimit 100;\n \nRelease code is just a very small table of 8 \n rows by looking at the production data, hence the 0.125 filter ratio.  \n However, the data distribution is not normal since the filtered column \n actually pulls out about 54% of the rows in shipment_status when it \n joins.  Postgres seems to be doing a sequencial scan to pull out all of \n these rows.  Next, it joins approx 17550 rows to shipment.  Since \n this query has a limit, it only returns the first 100, which seems like a \n waste.\n \nNow, for this query, I know I can filter out \n the date instead to speed it up.  For example, I can probably search \n for all the shipments in the last 3 days instead of limiting it to \n 100.  But since this isn't a real production query, I only wanted to \n show it as an example since many times I cannot do a filter by the date (and \n the sort may be date or something else irrelavant).\n \nI'm just stressed out how I can make queries \n like this more efficient since all I see is a bunch of hash joins and \n sequencial scans taking all kinds of time.\n \nI guess here are my 2 questions:\n \n1. Should I just change beg to change the \n requirements so that I can make more specific queries and more screens to \n access those?\n2. 
Can you recommend ways so that postgres acts \n on big tables more efficiently?  I'm not really interested in this \n specific case (I just made it up).  I'm more interested in general \n solutions to this general problem of big table sizes with bad filters and \n where join orders don't seem to help much.\n \nThank you very much for your \n help.\n \nBest Regards,\nKen \nEgervari", "msg_date": "Sat, 29 Jan 2005 17:40:11 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with semi-large tables" } ]
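One frequently suggested experiment for a query shaped like the one in this thread (a join followed by ORDER BY ... LIMIT) is to give the planner an index matching the filter and/or the sort column and re-check the plan. This is a sketch only: the index names are invented, and whether either index actually wins depends on the distribution the posters describe (the 54% match rate on the filtered column may well keep the seqscan cheaper).

-- candidate indexes (names illustrative)
CREATE INDEX shipment_status_date_idx ON shipment_status (date);
CREATE INDEX shipment_status_release_date_idx ON shipment_status (release_code_id, date);
ANALYZE shipment_status;

EXPLAIN ANALYZE
SELECT s.*, ss.*
FROM shipment s, shipment_status ss, release_code r
WHERE s.current_status_id = ss.id
  AND ss.release_code_id = r.id
  AND r.filtered_column = '5'
ORDER BY ss.date DESC
LIMIT 100;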
[ { "msg_contents": "Thanks tom. I checked the client side software. The software closes connection when connected locally. But when connected through dialup,\nthis problem comes. I will check the ppp connection also.\nIs there any method of killing old pids. And also any performance tuning to be done on postgresql.conf file.\n\nThe database now contains 20K records. Will that cause a problem?\n\nRegds\n\nNarayanan\n\nOn Sat, 29 Jan 2005 Tom Lane wrote :\n>\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n> > 1) When 3 or 4 clients connect to this server, the pids are created and\n> > those pids are not killed even after the client disconnects.\n>\n>In that case your clients are not really disconnecting. Take a closer\n>look at your client-side software.\n>\n> \t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n  \n\nThanks tom. I checked the client side software. The software closes connection when connected locally. But when connected through dialup,\nthis problem comes. I will check the ppp connection also.\nIs there any method of killing old pids. And also any performance tuning to be done on postgresql.conf file.\n\nThe database now contains 20K records. Will that cause  a problem?\n\nRegds\n\nNarayanan\n\nOn Sat, 29 Jan 2005 Tom Lane wrote :\n>\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n> > 1) When 3 or 4 clients connect to this server, the pids are created and\n> > those pids are not killed even after the client disconnects.\n>\n>In that case your clients are not really disconnecting.  Take a closer\n>look at your client-side software.\n>\n>                regards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>      subscribe-nomail command to [email protected] so that your\n>      message can get through to the mailing list cleanly", "msg_date": "30 Jan 2005 03:19:34 -0000", "msg_from": "\"N S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres server getting slow!!" } ]
[ { "msg_contents": "I checked to find out the cause of the problem, ppp is disconnecting properly and the user session is also closed smoothely.\nBut when a report query is run on the table containing 32500 records,\nthe memory shoots up from 50 MB to 500 MB(Total memory is 512 MB RAM).\nAfter that the memory usage never comes down .When some 4 or 5 user \nconnects, the remaining memory is utilised in a very little way, and finally the 6th or 7th user is denied with database access.The server now becomes slow. \n\nWill running vacuum help to solve the problem?\n\nThe total database dump is 50 MB and the /var/lib/pgsql/data contains\n700 MB of data.\n\n Which all paramters are required to be increased in postgresq.conf.\n\n\nRegds\n\nN S \n\nOn Sun, 30 Jan 2005 N S wrote :\n>\n>\n>Thanks tom. I checked the client side software. The software closes connection when connected locally. But when connected through dialup,\n>this problem comes. I will check the ppp connection also.\n>Is there any method of killing old pids. And also any performance tuning to be done on postgresql.conf file.\n>\n>The database now contains 20K records. Will that cause a problem?\n>\n>Regds\n>\n>Narayanan\n>\n>On Sat, 29 Jan 2005 Tom Lane wrote :\n> >\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n> > > 1) When 3 or 4 clients connect to this server, the pids are created and\n> > > those pids are not killed even after the client disconnects.\n> >\n> >In that case your clients are not really disconnecting. Take a closer\n> >look at your client-side software.\n> >\n> > \t\t\tregards, tom lane\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n\n\nI checked to find out the cause of the problem, ppp is disconnecting properly and the user session is also closed smoothely.\nBut when a report query is run on the table containing 32500 records,\nthe memory shoots up from 50 MB to 500 MB(Total memory is 512 MB RAM).\nAfter that the memory usage never comes down .When some 4 or 5 user \nconnects, the remaining memory is utilised in a very little way, and finally the 6th or 7th user is denied with database access.The server now becomes slow. \n\nWill running vacuum help to solve the problem?\n\nThe total database dump is 50 MB and the /var/lib/pgsql/data contains\n700 MB of data.\n\n Which all paramters are required to be increased in postgresq.conf.\n\n\nRegds\n\nN S \n\nOn Sun, 30 Jan 2005 N S wrote :\n>\n>\n>Thanks tom. I checked the client side software. The software closes connection when connected locally. But when connected through dialup,\n>this problem comes. I will check the ppp connection also.\n>Is there any method of killing old pids. And also any performance tuning to be done on postgresql.conf file.\n>\n>The database now contains 20K records. Will that cause  a problem?\n>\n>Regds\n>\n>Narayanan\n>\n>On Sat, 29 Jan 2005 Tom Lane wrote :\n> >\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n> > > 1) When 3 or 4 clients connect to this server, the pids are created and\n> > > those pids are not killed even after the client disconnects.\n> >\n> >In that case your clients are not really disconnecting.  
Take a closer\n> >look at your client-side software.\n> >\n> >                regards, tom lane\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 3: if posting/reading through Usenet, please send an appropriate\n> >      subscribe-nomail command to [email protected] so that your\n> >      message can get through to the mailing list cleanly", "msg_date": "30 Jan 2005 18:04:10 -0000", "msg_from": "\"N S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres server getting slow!!" }, { "msg_contents": "N S wrote:\n\n> I checked to find out the cause of the problem, ppp is disconnecting \n> properly and the user session is also closed smoothely.\n> But when a report query is run on the table containing 32500 records,\n> the memory shoots up from 50 MB to 500 MB(Total memory is 512 MB RAM).\n> After that the memory usage never comes down .When some 4 or 5 user\n> connects, the remaining memory is utilised in a very little way, and \n> finally the 6th or 7th user is denied with database access.The server \n> now becomes slow.\n>\n> Will running vacuum help to solve the problem?\n>\nSounds like you need to run vacuum and analyze. It also sounds like you\nmay need to run vacuum full the first time.\n\nvacuum needs to be run regularly as does analyze.\n\nSincerely,\n\nJoshua D. Drake\n\n\n>\n> The total database dump is 50 MB and the /var/lib/pgsql/data contains\n> 700 MB of data.\n>\n> Which all paramters are required to be increased in postgresq.conf.\n>\n>\n> Regds\n>\n> N S\n>\n> On Sun, 30 Jan 2005 N S wrote :\n> >\n> >\n> >Thanks tom. I checked the client side software. The software closes \n> connection when connected locally. But when connected through dialup,\n> >this problem comes. I will check the ppp connection also.\n> >Is there any method of killing old pids. And also any performance \n> tuning to be done on postgresql.conf file.\n> >\n> >The database now contains 20K records. Will that cause a problem?\n> >\n> >Regds\n> >\n> >Narayanan\n> >\n> >On Sat, 29 Jan 2005 Tom Lane wrote :\n> > >\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n> > > > 1) When 3 or 4 clients connect to this server, the pids are \n> created and\n> > > > those pids are not killed even after the client disconnects.\n> > >\n> > >In that case your clients are not really disconnecting. Take a closer\n> > >look at your client-side software.\n> > >\n> > > regards, tom lane\n> > >\n> > >---------------------------(end of \n> broadcast)---------------------------\n> > >TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to [email protected] so that your\n> > > message can get through to the mailing list cleanly\n>\n>\n>\n> <http://clients.rediff.com/signature/track_sig.asp> \n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Sun, 30 Jan 2005 12:26:42 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres server getting slow!!" } ]
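The advice above ("vacuum full the first time, then regular vacuum and analyze") translates into the commands below; this is a sketch with no database name filled in, run from psql as a superuser or the table owner. Note that VACUUM FULL takes exclusive locks, so it belongs in a maintenance window.

-- one-time compaction of accumulated dead space
VACUUM FULL ANALYZE;

-- routine maintenance afterwards; VERBOSE reports pages and tuples per table
VACUUM VERBOSE ANALYZE;

-- the shell equivalent, convenient for cron:
--   vacuumdb --all --analyze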
[ { "msg_contents": "Thanks joshua\n\n i tried running vacuum command, \nvacuum database as well as vacuum <indvidual table names>\n\nbut even after that querying the database , the memory shoots up\nas i mentioned in the previous mail and never comes down.\nAlso the old pids of connections established remains even after the\nconnection is closed.\n\nWill backing up the complete database, dropping and recreating can\nmake any difference. \n\n\nKindly suggest\n\nThanks in advance\n\nregards\n\nN S\n\n\nOn Mon, 31 Jan 2005 Joshua D.Drake wrote :\n>N S wrote:\n>\n>>I checked to find out the cause of the problem, ppp is disconnecting properly and the user session is also closed smoothely.\n>>But when a report query is run on the table containing 32500 records,\n>>the memory shoots up from 50 MB to 500 MB(Total memory is 512 MB RAM).\n>>After that the memory usage never comes down .When some 4 or 5 user\n>>connects, the remaining memory is utilised in a very little way, and finally the 6th or 7th user is denied with database access.The server now becomes slow.\n>>\n>>Will running vacuum help to solve the problem?\n>>\n>Sounds like you need to run vacuum and analyze. It also sounds like you\n>may need to run vacuum full the first time.\n>\n>vacuum needs to be run regularly as does analyze.\n>\n>Sincerely,\n>\n>Joshua D. Drake\n>\n>\n>>\n>>The total database dump is 50 MB and the /var/lib/pgsql/data contains\n>>700 MB of data.\n>>\n>>Which all paramters are required to be increased in postgresq.conf.\n>>\n>>\n>>Regds\n>>\n>>N S\n>>\n>>On Sun, 30 Jan 2005 N S wrote :\n>> >\n>> >\n>> >Thanks tom. I checked the client side software. The software closes connection when connected locally. But when connected through dialup,\n>> >this problem comes. I will check the ppp connection also.\n>> >Is there any method of killing old pids. And also any performance tuning to be done on postgresql.conf file.\n>> >\n>> >The database now contains 20K records. Will that cause a problem?\n>> >\n>> >Regds\n>> >\n>> >Narayanan\n>> >\n>> >On Sat, 29 Jan 2005 Tom Lane wrote :\n>> > >\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n>> > > > 1) When 3 or 4 clients connect to this server, the pids are created and\n>> > > > those pids are not killed even after the client disconnects.\n>> > >\n>> > >In that case your clients are not really disconnecting. Take a closer\n>> > >look at your client-side software.\n>> > >\n>> > > regards, tom lane\n>> > >\n>> > >---------------------------(end of broadcast)---------------------------\n>> > >TIP 3: if posting/reading through Usenet, please send an appropriate\n>> > > subscribe-nomail command to [email protected] so that your\n>> > > message can get through to the mailing list cleanly\n>>\n>>\n>>\n>><http://clients.rediff.com/signature/track_sig.asp>\n>\n>\n>\n>-- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\n>Postgresql support, programming shared hosting and dedicated hosting.\n>+1-503-667-4564 - [email protected] - http://www.commandprompt.com\n>PostgreSQL Replicator -- production quality replication for PostgreSQL\n>\n\n\n Thanks joshua\n\n  i tried running vacuum command, \nvacuum database as well as vacuum <indvidual table names>\n\nbut even after that querying the database , the memory shoots up\nas i mentioned in the previous mail and never comes down.\nAlso the old pids of connections established remains even after the\nconnection is closed.\n\nWill backing up the complete database, dropping and recreating can\nmake any difference. 
\n\n\nKindly suggest\n\nThanks in advance\n\nregards\n\nN S\n\n\nOn Mon, 31 Jan 2005 Joshua D.Drake wrote :\n>N S wrote:\n>\n>>I checked to find out the cause of the problem, ppp is disconnecting properly and the user session is also closed smoothely.\n>>But when a report query is run on the table containing 32500 records,\n>>the memory shoots up from 50 MB to 500 MB(Total memory is 512 MB RAM).\n>>After that the memory usage never comes down .When some 4 or 5 user\n>>connects, the remaining memory is utilised in a very little way, and finally the 6th or 7th user is denied with database access.The server now becomes slow.\n>>\n>>Will running vacuum help to solve the problem?\n>>\n>Sounds like you need to run vacuum and analyze. It also sounds like you\n>may need to run vacuum full the first time.\n>\n>vacuum needs to be run regularly as does analyze.\n>\n>Sincerely,\n>\n>Joshua D. Drake\n>\n>\n>>\n>>The total database dump is 50 MB and the /var/lib/pgsql/data contains\n>>700 MB of data.\n>>\n>>Which all paramters are required to be increased in postgresq.conf.\n>>\n>>\n>>Regds\n>>\n>>N S\n>>\n>>On Sun, 30 Jan 2005 N S wrote :\n>> >\n>> >\n>> >Thanks tom. I checked the client side software. The software closes connection when connected locally. But when connected through dialup,\n>> >this problem comes. I will check the ppp connection also.\n>> >Is there any method of killing old pids. And also any performance tuning to be done on postgresql.conf file.\n>> >\n>> >The database now contains 20K records. Will that cause  a problem?\n>> >\n>> >Regds\n>> >\n>> >Narayanan\n>> >\n>> >On Sat, 29 Jan 2005 Tom Lane wrote :\n>> > >\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n>> > > > 1) When 3 or 4 clients connect to this server, the pids are created and\n>> > > > those pids are not killed even after the client disconnects.\n>> > >\n>> > >In that case your clients are not really disconnecting.  Take a closer\n>> > >look at your client-side software.\n>> > >\n>> > >                regards, tom lane\n>> > >\n>> > >---------------------------(end of broadcast)---------------------------\n>> > >TIP 3: if posting/reading through Usenet, please send an appropriate\n>> > >      subscribe-nomail command to [email protected] so that your\n>> > >      message can get through to the mailing list cleanly\n>>\n>>\n>>\n>><http://clients.rediff.com/signature/track_sig.asp>\n>\n>\n>\n>-- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\n>Postgresql support, programming shared hosting and dedicated hosting.\n>+1-503-667-4564 - [email protected] - http://www.commandprompt.com\n>PostgreSQL Replicator -- production quality replication for PostgreSQL\n>", "msg_date": "31 Jan 2005 05:47:37 -0000", "msg_from": "\"N S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres server getting slow!!" } ]
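Given the gap the poster reports between a 50 MB dump and a 700 MB data directory, one cheap check (a sketch, not something from the thread) is to look at the size estimates in pg_class after an ANALYZE; relpages is counted in 8 kB blocks, and a table or index whose on-disk size is far larger than its live-row count justifies is the usual vacuum or reindex candidate.

ANALYZE;

SELECT relname, relkind, relpages, reltuples
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY relpages DESC
LIMIT 15;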
[ { "msg_contents": "Thanks joshua\n\n i tried running vacuum command,\nvacuum database as well as vacuum <indvidual table names>\n\nbut even after that querying the database , the memory shoots up\nas i mentioned in the previous mail and never comes down.\nAlso the old pids of connections established remains even after the\nconnection is closed.\n\nWill backing up the complete database, dropping and recreating can\nmake any difference.\n\n\nKindly suggest\n\nThanks in advance\n\nregards\n\nN S\n\n\n\tN S wrote:\n\n> I checked to find out the cause of the problem, ppp is disconnecting\n> properly and the user session is also closed smoothely.\n> But when a report query is run on the table containing 32500 records,\n> the memory shoots up from 50 MB to 500 MB(Total memory is 512 MB RAM).\n> After that the memory usage never comes down .When some 4 or 5 user\n> connects, the remaining memory is utilised in a very little way, and\n> finally the 6th or 7th user is denied with database access.The server\n> now becomes slow.\n>\n> Will running vacuum help to solve the problem?\n>\nSounds like you need to run vacuum and analyze. It also sounds like you\nmay need to run vacuum full the first time.\n\nvacuum needs to be run regularly as does analyze.\n\nSincerely,\n\nJoshua D. Drake\n\n\n>\n> The total database dump is 50 MB and the /var/lib/pgsql/data contains\n> 700 MB of data.\n>\n> Which all paramters are required to be increased in postgresq.conf.\n>\n>\n> Regds\n>\n> N S\n>\n> On Sun, 30 Jan 2005 N S wrote :\n> >\n> >\n> >Thanks tom. I checked the client side software. The software closes\n> connection when connected locally. But when connected through dialup,\n> >this problem comes. I will check the ppp connection also.\n> >Is there any method of killing old pids. And also any performance\n> tuning to be done on postgresql.conf file.\n> >\n> >The database now contains 20K records. Will that cause a problem?\n> >\n> >Regds\n> >\n> >Narayanan\n> >\n> >On Sat, 29 Jan 2005 Tom Lane wrote :\n> > >\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n> > > > 1) When 3 or 4 clients connect to this server, the pids are\n> created and\n> > > > those pids are not killed even after the client disconnects.\n> > >\n> > >In that case your clients are not really disconnecting. 
Take a closer\n> > >look at your client-side software.\n> > >\n> > > regards, tom lane\n> > >\n> > >---------------------------(end of\n> broadcast)---------------------------\n> > >TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to [email protected] so that your\n> > > message can get through to the mailing list cleanly\n>\n>\n>\n> <http://clients.rediff.com/signature/track_sig.asp> \n\n  \nThanks joshua\n\n  i tried running vacuum command,\nvacuum database as well as vacuum <indvidual table names>\n\nbut even after that querying the database , the memory shoots up\nas i mentioned in the previous mail and never comes down.\nAlso the old pids of connections established remains even after the\nconnection is closed.\n\nWill backing up the complete database, dropping and recreating can\nmake any difference.\n\n\nKindly suggest\n\nThanks in advance\n\nregards\n\nN S\n\n\n     N S wrote:\n\n> I checked to find out the cause of the problem, ppp is disconnecting\n> properly and the user session is also closed smoothely.\n> But when a report query is run on the table containing 32500 records,\n> the memory shoots up from 50 MB to 500 MB(Total memory is 512 MB RAM).\n> After that the memory usage never comes down .When some 4 or 5 user\n> connects, the remaining memory is utilised in a very little way, and\n> finally the 6th or 7th user is denied with database access.The server\n> now becomes slow.\n>\n> Will running vacuum help to solve the problem?\n>\nSounds like you need to run vacuum and analyze. It also sounds like you\nmay need to run vacuum full the first time.\n\nvacuum needs to be run regularly as does analyze.\n\nSincerely,\n\nJoshua D. Drake\n\n\n>\n> The total database dump is 50 MB and the /var/lib/pgsql/data contains\n> 700 MB of data.\n>\n> Which all paramters are required to be increased in postgresq.conf.\n>\n>\n> Regds\n>\n> N S\n>\n> On Sun, 30 Jan 2005 N S wrote :\n> >\n> >\n> >Thanks tom. I checked the client side software. The software closes\n> connection when connected locally. But when connected through dialup,\n> >this problem comes. I will check the ppp connection also.\n> >Is there any method of killing old pids. And also any performance\n> tuning to be done on postgresql.conf file.\n> >\n> >The database now contains 20K records. Will that cause  a problem?\n> >\n> >Regds\n> >\n> >Narayanan\n> >\n> >On Sat, 29 Jan 2005 Tom Lane wrote :\n> > >\"Narayanan Subramaniam Iyer\" <[email protected]> writes:\n> > > > 1) When 3 or 4 clients connect to this server, the pids are\n> created and\n> > > > those pids are not killed even after the client disconnects.\n> > >\n> > >In that case your clients are not really disconnecting.  Take a closer\n> > >look at your client-side software.\n> > >\n> > >                regards, tom lane\n> > >\n> > >---------------------------(end of\n> broadcast)---------------------------\n> > >TIP 3: if posting/reading through Usenet, please send an appropriate\n> > >      subscribe-nomail command to [email protected] so that your\n> > >      message can get through to the mailing list cleanly\n>\n>\n>\n> <http://clients.rediff.com/signature/track_sig.asp>", "msg_date": "31 Jan 2005 09:38:36 -0000", "msg_from": "\"N S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres server getting slow!!" } ]
[ { "msg_contents": "Hello, \n\nClient is seeing continual performance degradation on\nupdates and queries from a large database. Any help\nappreciated.\n\nClient is using PostgreSQL 7.4.2 on Sparcv9 650MHZ\ncpu, 2GB Ram, running Solaris.\n\nWe have the following tables:\n\nEVENT_TBL\nevt_id bigserial, unique\nd1 numeric(13)\nobj_id numeric(6)\nd2 numeric(13)\nval varchar(22)\ncorrection numeric(1)\ndelta numeric(13)\n\nCONTROL_TBL\nobj_id numeric(6), unique\nname varchar(22), unique\ndtype numeric(2)\ndfreq numeric(2)\n\nIndexes:\nEVENT_TBL.d1 (non-clustered)\nEVENT_TBL.obj_id (non-clustered)\nCONTROL_TBL.obj_id (non-clustered)\nCONTROL_TBL.name (clustered)\n\nUpdate processes run continually throughout the day in\nwhich rows are inserted but none deleted. The\nEVENT_TBL is currently very big, w/ over 5 million\nrows. The CONTROL_TBL is fairly small w/ around 4000\nrows. We're doing a \"VACUUM ANALYZE\" on each table\nafter each update has been completed and changes\ncommitted. Each night we drop all the indexes and\nrecreate them. \n\nDo I understand correctly, however, that when you\ncreate a unique SERIAL column an index is\nautomatically created on that column? If so, does\nthat sound like a possible culprit? We are not doing\nany reindexing on that index at all. Could it be\nsuffering from index bloat? Do we need to\nperiodically explicity run the command:\n\nreindex index event_tbl_evt_id_key;\n\n???\n\nEven seemingly simple commands are taking forever. \nFor example:\n\nselect evt_id from event_tbl where evt_id=1;\n\ntakes over a minute to complete.\n\n\nHere is a slightly more complicated example along with\nits explain output:\n\nselect events.evt_id, ctrl.name, events.d1,\nevents.val, events.d2, events.correction, ctrl.type,\nctrl.freq from event_tbl events, control_tbl ctrl\nwhere events.obj_id = ctrl.obj_id and events.evt_id >\n3690000 order by events.evt_id limit 2000;\n\n QUERY PLAN\n-----------------------------------------------------------------\n Limit (cost=0.00..6248.56 rows=2000 width=118)\n -> Nested Loop (cost=0.00..7540780.32\nrows=2413606 width=118)\n -> Index Scan using event_tbl_evt_id_key on\nevent_tbl events (cost=0.00..237208.57 rows=2413606\nwidth=63)\n Filter: (evt_id > 3690000)\n -> Index Scan using control_tbl_obj_id_idx\non control_tbl ctrl (cost=0.00..3.01 rows=1 width=75)\n Index Cond: (\"outer\".obj_id =\nctrl.obj_id)\n(6 rows)\n\nThis takes minutes to return 2000 rows.\n\nThank you in advance.\n\nBill\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nAll your favorites on one personal page ��� Try My Yahoo!\nhttp://my.yahoo.com \n", "msg_date": "Mon, 31 Jan 2005 09:19:18 -0800 (PST)", "msg_from": "Bill Chandler <[email protected]>", "msg_from_op": true, "msg_subject": "Performance degredation at client site" }, { "msg_contents": "Bill Chandler <[email protected]> writes:\n> Update processes run continually throughout the day in\n> which rows are inserted but none deleted.\n\nWhat about row updates?\n\n> Even seemingly simple commands are taking forever. \n> For example:\n> select evt_id from event_tbl where evt_id=1;\n> takes over a minute to complete.\n\nSince evt_id is a bigint, you need to write that as\n\nselect evt_id from event_tbl where evt_id=1::bigint;\n\nor various other locutions that have the same effect. 
What you have is\na bigint-vs-int comparison, which is not indexable in releases before 8.0.\n\nThe same problem is occurring in your other example.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Jan 2005 12:46:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degredation at client site " }, { "msg_contents": "\n> Do I understand correctly, however, that when you\n> create a unique SERIAL column an index is\n> automatically created on that column? If so, does\n> that sound like a possible culprit? We are not doing\n> any reindexing on that index at all. Could it be\n> suffering from index bloat? Do we need to\n> periodically explicity run the command:\n\n\tSERIAL creates a sequence, not an index.\n\tUNIQUE and PRIMARY KEY do create indexes.\n\n\n\tRegards.\n", "msg_date": "Mon, 31 Jan 2005 19:14:48 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance degredation at client site" }, { "msg_contents": "Tom,\n\nThank you! I will have the client try that. What\nabout the event_tbl_evt_id_key index question. Could\nthat also be causing me difficulties? Should I\nperiodically reindex it? \n\nthanks,\n\nBill\n\n--- Tom Lane <[email protected]> wrote:\n\n> Bill Chandler <[email protected]> writes:\n> > Update processes run continually throughout the\n> day in\n> > which rows are inserted but none deleted.\n> \n> What about row updates?\n> \n> > Even seemingly simple commands are taking forever.\n> \n> > For example:\n> > select evt_id from event_tbl where evt_id=1;\n> > takes over a minute to complete.\n> \n> Since evt_id is a bigint, you need to write that as\n> \n> select evt_id from event_tbl where evt_id=1::bigint;\n> \n> or various other locutions that have the same\n> effect. What you have is\n> a bigint-vs-int comparison, which is not indexable\n> in releases before 8.0.\n> \n> The same problem is occurring in your other example.\n> \n> \t\t\tregards, tom lane\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Mon, 31 Jan 2005 10:32:00 -0800 (PST)", "msg_from": "Bill Chandler <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance degredation at client site " } ]
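Tom's diagnosis in the thread above is easy to confirm from psql: before 8.0 the planner cannot use an index on a bigint column when the comparison value is typed as plain int. A quick sketch against the table described there (the unique index on evt_id comes from the UNIQUE constraint on the bigserial column):

-- Pre-8.0: int8 column compared with an int4 literal -> sequential scan.
EXPLAIN SELECT evt_id FROM event_tbl WHERE evt_id = 1;

-- Casting or quoting the literal lets the index on evt_id be used.
EXPLAIN SELECT evt_id FROM event_tbl WHERE evt_id = 1::bigint;
EXPLAIN SELECT evt_id FROM event_tbl WHERE evt_id = '1';

The same treatment applied to the 3690000 literal in the larger query should turn its Filter line into an Index Cond, so rows below the cutoff are never visited at all.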
[ { "msg_contents": "Hi all,\n\nI've been following this list for nearly a year now.\nI've always managed to get PostgreSQL 7.1.x right for the job,\nwhich in my case is a large and complex oltp system,\nrun under Pg for 6 years now.\n\nWe were already planning the switch from 7.1 to 7.4 (or even 8.0).\nThe last project we're facing with has a transaction volume that is\nsomething we've never dealt with. By \"transaction\" I mean\nsomething involving 10 to 10,000 (and more) sql queries\n(a complex mix of insert/ update/ delete/ select).\n\nI'd like to ask:\n\n1) What kind of performance gain can I expect switching from\n 7.1 to 7.4 (or 8.0)? Obviously I'm doing my own testing,\n but I'm not very impressed by 8.0 speed, may be I'm doing\n testing on a low end server...\n\n2) The goal is to make the db handle 100 tps (something like\n 100 users). What kind of server and storage should I provide?\n\n The actual servers our application runs on normally have\n 2 Intel Xeon processors, 2-4 Gb RAM, RAID 0/1/5 SCSI\n disk storage with hard drives @ 10,000 rpm\n\n3) Highest I/O throughput SCSI adapters? Adaptec?\n\n4) Is it correct to suppose that multiple RAID 1 arrays\n can provide the fastest I/O ?\n I usually reserve one RAID1 array to db data directory,\n one RAID1 array to pg_xlog directory and one RAID1 array\n for os and application needs.\n\n5) OS and Pg specific tuning?\n Usually I modify shared memory settings and most of postgresql.conf\n available settings for 7.1, like `effective_cache', `shared_buffers',\n `wal_buffers', `wal_files', and so on.\n\n-- \nCosimo\n\n", "msg_date": "Mon, 31 Jan 2005 21:41:32 +0100", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": true, "msg_subject": "High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Cosimo Streppone <[email protected]> writes:\n> 1) What kind of performance gain can I expect switching from\n> 7.1 to 7.4 (or 8.0)? Obviously I'm doing my own testing,\n> but I'm not very impressed by 8.0 speed, may be I'm doing\n> testing on a low end server...\n\nMost people report a noticeable speedup in each new release; we hit\ndifferent things in different releases, but usually at least one\nperformance gain is useful to any one person. For a jump as far as\nfrom 7.1 to 8.0 I'm surprised that you're not seeing any gain at all.\nWhat was your test case exactly? Have you perhaps tuned your app\nso specifically to 7.1 that you need to detune it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 31 Jan 2005 16:35:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system " }, { "msg_contents": "On Mon, Jan 31, 2005 at 09:41:32PM +0100, Cosimo Streppone wrote:\n> 2) The goal is to make the db handle 100 tps (something like\n> 100 users). What kind of server and storage should I provide?\n> \n> The actual servers our application runs on normally have\n> 2 Intel Xeon processors, 2-4 Gb RAM, RAID 0/1/5 SCSI\n> disk storage with hard drives @ 10,000 rpm\n\nYou might look at Opteron's, which theoretically have a higher data\nbandwidth. If you're doing anything data intensive, like a sort in\nmemory, this could make a difference.\n\n> 4) Is it correct to suppose that multiple RAID 1 arrays\n> can provide the fastest I/O ?\n> I usually reserve one RAID1 array to db data directory,\n> one RAID1 array to pg_xlog directory and one RAID1 array\n> for os and application needs.\n\nRAID10 will be faster than RAID1. 
The key factor to a high performance\ndatabase is a high performance I/O system. If you look in the archives\nyou'll find people running postgresql on 30 and 40 drive arrays.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 31 Jan 2005 22:56:45 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Tom Lane wrote:\n\n> Cosimo writes:\n> \n>>1) What kind of performance gain can I expect switching from\n>> 7.1 to 7.4 (or 8.0)? Obviously I'm doing my own testing,\n>> but I'm not very impressed by 8.0 speed, may be I'm doing\n>> testing on a low end server...\n> \n> Most people report a noticeable speedup in each new release\n > [...]\n> I'm surprised that you're not seeing any gain at all.\n> What was your test case exactly? Have you perhaps tuned your app\n> so specifically to 7.1 that you need to detune it?\n\nWe tend to use the lowest common SQL features that will allow\nus to work with any db, so probably the problem is the opposite,\nthere is no pg-specific overtuning.\n\nAlso, the real pg load, that should be my ideal test case,\nis somewhat difficult to reproduce (~ 50 users with handhelds\nand browser clients).\n\nAnother good test is a particular procedure that opens\nseveral (~1000) subsequent transactions, composed of many\nrepeated selection queries with massive write loads on 6/7\ndifferent tables, as big as 300/400k tuples.\nEvery transaction ends with either commit or rollback state\n\nIndexing here should be ok, for I've analyzed every single query\nalso under database \"stress\".\n\nProbably one big issue is that I need to vacuum/reindex too often\nto keep db performances at a good(tm) level. I realize that this\nhas been addressed in several ways with newer PGs.\n\nHowever, I need to do a lot of application and performance\ntests and do them more seriously. Then I'll report the results here.\n\n-- \nCosimo\n\n", "msg_date": "Tue, 01 Feb 2005 07:26:53 +0100", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Jim C. Nasby wrote:\n\n> On Mon, Jan 31, 2005 at 09:41:32PM +0100, Cosimo wrote:\n> \n> >2) The goal is to make the db handle 100 tps (something like\n> > 100 users). What kind of server and storage should I provide?\n> \n> You might look at Opteron's, which theoretically have a higher data\n> bandwidth. If you're doing anything data intensive, like a sort in\n> memory, this could make a difference.\n\nWould Opteron systems need 64-bit postgresql (and os, gcc, ...)\nbuild to have that advantage?\n\n> >4) Is it correct to suppose that multiple RAID 1 arrays\n> > can provide the fastest I/O ?\n> > I usually reserve one RAID1 array to db data directory,\n> > one RAID1 array to pg_xlog directory and one RAID1 array\n> > for os and application needs.\n>\n> RAID10 will be faster than RAID1.\n\nSorry Jim, by RAID10 you mean several raid1 arrays mounted on\ndifferent linux partitions? Or several raid1 arrays that\nbuild up a raid0 array? In the latter case, who decides which\ndata goes in which raid1 array? 
Raid Adapter?\n\n > The key factor to a high performance database is a high\n > performance I/O system. If you look in the archives\n> you'll find people running postgresql on 30 and 40\n > drive arrays.\n\nI'll do a search, thank you.\n\n-- \nCosimo\n\n", "msg_date": "Tue, 01 Feb 2005 07:35:35 +0100", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "On Tue, Feb 01, 2005 at 07:35:35AM +0100, Cosimo Streppone wrote:\n> >You might look at Opteron's, which theoretically have a higher data\n> >bandwidth. If you're doing anything data intensive, like a sort in\n> >memory, this could make a difference.\n> \n> Would Opteron systems need 64-bit postgresql (and os, gcc, ...)\n> build to have that advantage?\n \nWell, that would give you the most benefit, but the memory bandwidth is\nstill greater than on a Xeon. There's really no issue with 64 bit if\nyou're using open source software; it all compiles for 64 bits and\nyou're good to go. http://stats.distributed.net runs on a dual opteron\nbox running FreeBSD and I've had no issues.\n\n> >RAID10 will be faster than RAID1.\n> \n> Sorry Jim, by RAID10 you mean several raid1 arrays mounted on\n> different linux partitions? Or several raid1 arrays that\n> build up a raid0 array? In the latter case, who decides which\n> data goes in which raid1 array? Raid Adapter?\n\nYou should take a look around online for a description of raid types.\n\nThere's technically RAID0+1 and RAID1+0; one is a stripe of mirrored\ndrives (a RAID 0 built out of RAID 1s), the other is a mirror of two\nRAID 0s. The former is much better; if you're lucky you can lose half\nyour drives without any data loss (if each dead drive is part of a\ndifferent mirror). Recovery is also faster.\n\nYou'll almost certainly be much happier with hardware raid instead of\nsoftware raid. stats.distributed.net runs a 3ware controller and SATA\ndrives.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 1 Feb 2005 05:27:27 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "To be honest I've used compaq, dell and LSI SCSI RAID controllers and\ngot pretty pathetic benchmarks from all of them. The best system I\nhave is the one I just built:\n\n2xOpteron 242, Tyan S2885 MoBo, 4GB Ram, 14xSATA WD Raptor drives:\n2xRaid 1, 1x4 disk Raid 10, 1x6 drive Raid 10. 2x3ware (now AMCC)\nEscalade 9500S-8MI.\n\nThis system with fsync on has managed 2500 insert transactions/sec\n(granted they are simple transactions, but still).\n\nRAID 10 is a stripe of mirrors. RAID 10 give you the best read and\nwrite performance combined. RAID 5 gives very bad write perfomance,\nbut good read performance. With RAID 5 you can only loose a single\ndrive and rebuild times are slow. RAID 10 can loose up to have the\narray depending on which drives without loosing data.\n\nI would be interested in starting a site listing RAID benchmarks under\nlinux. If anyone is interested let me know. 
I would be interested in\nat least some bonnie++ benchmarks, and perhaps other if people would\nlike.\n\nAlex Turner\nNetEconomist\n\n\nOn Tue, 1 Feb 2005 05:27:27 -0600, Jim C. Nasby <[email protected]> wrote:\n> On Tue, Feb 01, 2005 at 07:35:35AM +0100, Cosimo Streppone wrote:\n> > >You might look at Opteron's, which theoretically have a higher data\n> > >bandwidth. If you're doing anything data intensive, like a sort in\n> > >memory, this could make a difference.\n> >\n> > Would Opteron systems need 64-bit postgresql (and os, gcc, ...)\n> > build to have that advantage?\n> \n> Well, that would give you the most benefit, but the memory bandwidth is\n> still greater than on a Xeon. There's really no issue with 64 bit if\n> you're using open source software; it all compiles for 64 bits and\n> you're good to go. http://stats.distributed.net runs on a dual opteron\n> box running FreeBSD and I've had no issues.\n> \n> > >RAID10 will be faster than RAID1.\n> >\n> > Sorry Jim, by RAID10 you mean several raid1 arrays mounted on\n> > different linux partitions? Or several raid1 arrays that\n> > build up a raid0 array? In the latter case, who decides which\n> > data goes in which raid1 array? Raid Adapter?\n> \n> You should take a look around online for a description of raid types.\n> \n> There's technically RAID0+1 and RAID1+0; one is a stripe of mirrored\n> drives (a RAID 0 built out of RAID 1s), the other is a mirror of two\n> RAID 0s. The former is much better; if you're lucky you can lose half\n> your drives without any data loss (if each dead drive is part of a\n> different mirror). Recovery is also faster.\n> \n> You'll almost certainly be much happier with hardware raid instead of\n> software raid. stats.distributed.net runs a 3ware controller and SATA\n> drives.\n> --\n> Jim C. Nasby, Database Consultant [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n> \n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n", "msg_date": "Tue, 1 Feb 2005 08:58:34 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Alex Turner wrote:\n\n> To be honest I've used compaq, dell and LSI SCSI RAID controllers and\n> got pretty pathetic benchmarks from all of them.\n\nI also have seen average-low results for LSI (at least the 1020 card).\n\n> 2xOpteron 242, Tyan S2885 MoBo, 4GB Ram, 14xSATA WD Raptor drives:\n> 2xRaid 1, 1x4 disk Raid 10, 1x6 drive Raid 10. 2x3ware (now AMCC)\n> Escalade 9500S-8MI.\n\nThanks, this is precious information.\n\n> I would be interested in starting a site listing RAID benchmarks under\n> linux. If anyone is interested let me know. I would be interested in\n> at least some bonnie++ benchmarks, and perhaps other if people would\n> like.\n\nI have used also tiobench [http://tiobench.sourceforge.net/]\nAny experience with it?\n\n-- \nCosimo\n\n", "msg_date": "Tue, 01 Feb 2005 22:11:30 +0100", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Jim C. 
Nasby wrote:\n> On Tue, Feb 01, 2005 at 07:35:35AM +0100, Cosimo Streppone wrote:\n> \n>>>You might look at Opteron's, which theoretically have a higher data\n>>>bandwidth. If you're doing anything data intensive, like a sort in\n>>>memory, this could make a difference.\n>>\n>>Would Opteron systems need 64-bit postgresql (and os, gcc, ...)\n>>build to have that advantage?\n> \n> \n> Well, that would give you the most benefit, but the memory bandwidth is\n> still greater than on a Xeon. There's really no issue with 64 bit if\n> you're using open source software; it all compiles for 64 bits and\n> you're good to go. http://stats.distributed.net runs on a dual opteron\n> box running FreeBSD and I've had no issues.\n\nYou can get 64-bit Xeons also but it takes hit in the I/O department due \nto the lack of a hardware I/O MMU which limits DMA transfers to \naddresses below 4GB. This has a two-fold impact:\n\n1) transfering data to >4GB require first a transfer to <4GB and then a \ncopy to the final destination.\n\n2) You must allocate real memory 2X the address space of the devices to \nact as bounce buffers. This is especially problematic for workstations \nbecause if you put a 512MB Nvidia card in your computer for graphics \nwork -- you've just lost 1GB of memory. (I dunno how much the typical \nSCSI/NIC/etc take up.)\n", "msg_date": "Tue, 01 Feb 2005 20:25:09 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "None - but I'll definately take a look..\n\nAlex Turner\nNetEconomist\n\n\nOn Tue, 01 Feb 2005 22:11:30 +0100, Cosimo Streppone\n<[email protected]> wrote:\n> Alex Turner wrote:\n> \n> > To be honest I've used compaq, dell and LSI SCSI RAID controllers and\n> > got pretty pathetic benchmarks from all of them.\n> \n> I also have seen average-low results for LSI (at least the 1020 card).\n> \n> > 2xOpteron 242, Tyan S2885 MoBo, 4GB Ram, 14xSATA WD Raptor drives:\n> > 2xRaid 1, 1x4 disk Raid 10, 1x6 drive Raid 10. 2x3ware (now AMCC)\n> > Escalade 9500S-8MI.\n> \n> Thanks, this is precious information.\n> \n> > I would be interested in starting a site listing RAID benchmarks under\n> > linux. If anyone is interested let me know. I would be interested in\n> > at least some bonnie++ benchmarks, and perhaps other if people would\n> > like.\n> \n> I have used also tiobench [http://tiobench.sourceforge.net/]\n> Any experience with it?\n> \n> --\n> Cosimo\n> \n>\n", "msg_date": "Tue, 1 Feb 2005 23:47:31 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "William Yu wrote:\n> > Well, that would give you the most benefit, but the memory bandwidth is\n> > still greater than on a Xeon. There's really no issue with 64 bit if\n> > you're using open source software; it all compiles for 64 bits and\n> > you're good to go. http://stats.distributed.net runs on a dual opteron\n> > box running FreeBSD and I've had no issues.\n> \n> You can get 64-bit Xeons also but it takes hit in the I/O department due \n> to the lack of a hardware I/O MMU which limits DMA transfers to \n> addresses below 4GB. This has a two-fold impact:\n> \n> 1) transfering data to >4GB require first a transfer to <4GB and then a \n> copy to the final destination.\n> \n> 2) You must allocate real memory 2X the address space of the devices to \n> act as bounce buffers. 
This is especially problematic for workstations \n> because if you put a 512MB Nvidia card in your computer for graphics \n> work -- you've just lost 1GB of memory. (I dunno how much the typical \n> SCSI/NIC/etc take up.)\n\nI thought Intel was copying AMD's 64-bit API. Is Intel's\nimplementation as poor as you description? Does Intel have any better\n64-bit offering other than the Itanium/Itanic?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Feb 2005 12:37:10 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": ">>You can get 64-bit Xeons also but it takes hit in the I/O department due \n>>to the lack of a hardware I/O MMU which limits DMA transfers to \n>>addresses below 4GB. This has a two-fold impact:\n>>\n>>1) transfering data to >4GB require first a transfer to <4GB and then a \n>>copy to the final destination.\n>>\n>>2) You must allocate real memory 2X the address space of the devices to \n>>act as bounce buffers. This is especially problematic for workstations \n>>because if you put a 512MB Nvidia card in your computer for graphics \n>>work -- you've just lost 1GB of memory. (I dunno how much the typical \n>>SCSI/NIC/etc take up.)\n> \n> \n> I thought Intel was copying AMD's 64-bit API. Is Intel's\n> implementation as poor as you description? Does Intel have any better\n> 64-bit offering other than the Itanium/Itanic?\n\nUnfortunately, there's no easy way for Intel to have implemented a \n64-bit IOMMU under their current restrictions. The memory controller \nresides on the chipset and to upgrade the functionality significantly, \nit would probably require changing the bus protocol. It's not that they \ncouldn't do it -- it would just require all Intel chipset/MB \nvendors/partners to go through the process of creating & validating \ntotally new products. A way lengthier process than just producing 64-bit \nCPUs that drop into current motherboards.\n", "msg_date": "Wed, 02 Feb 2005 10:18:03 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "William Yu wrote:\n> >>You can get 64-bit Xeons also but it takes hit in the I/O department due \n> >>to the lack of a hardware I/O MMU which limits DMA transfers to \n> >>addresses below 4GB. This has a two-fold impact:\n> >>\n> >>1) transfering data to >4GB require first a transfer to <4GB and then a \n> >>copy to the final destination.\n> >>\n> >>2) You must allocate real memory 2X the address space of the devices to \n> >>act as bounce buffers. This is especially problematic for workstations \n> >>because if you put a 512MB Nvidia card in your computer for graphics \n> >>work -- you've just lost 1GB of memory. (I dunno how much the typical \n> >>SCSI/NIC/etc take up.)\n\nWhen you say \"allocate real memory 2X\" are you saying that if you have\n16GB of RAM only 8GB is actually usable and the other 8GB is for\nbounce buffers, or is it just address space being used up?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Feb 2005 13:35:20 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Bruce Momjian wrote:\n> William Yu wrote:\n> \n>>>>You can get 64-bit Xeons also but it takes hit in the I/O department due \n>>>>to the lack of a hardware I/O MMU which limits DMA transfers to \n>>>>addresses below 4GB. This has a two-fold impact:\n>>>>\n>>>>1) transfering data to >4GB require first a transfer to <4GB and then a \n>>>>copy to the final destination.\n>>>>\n>>>>2) You must allocate real memory 2X the address space of the devices to \n>>>>act as bounce buffers. This is especially problematic for workstations \n>>>>because if you put a 512MB Nvidia card in your computer for graphics \n>>>>work -- you've just lost 1GB of memory. (I dunno how much the typical \n>>>>SCSI/NIC/etc take up.)\n> \n> \n> When you say \"allocate real memory 2X\" are you saying that if you have\n> 16GB of RAM only 8GB is actually usable and the other 8GB is for\n> bounce buffers, or is it just address space being used up?\n> \n\nIt's 2x the memory space of the devices. E.g. a Nvidia Graphics card w/ \n512MB of RAM would require 1GB of memory to act as bounce buffers. And \nit has to be real chunks of memory in 64-bit mode since DMA transfer \nmust drop it into real memory in order to then be copied to > 4GB.\n", "msg_date": "Wed, 02 Feb 2005 10:49:35 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "[email protected] (Bruce Momjian) wrote:\n> William Yu wrote:\n>> > Well, that would give you the most benefit, but the memory\n>> > bandwidth is still greater than on a Xeon. There's really no\n>> > issue with 64 bit if you're using open source software; it all\n>> > compiles for 64 bits and you're good to\n>> > go. http://stats.distributed.net runs on a dual opteron box\n>> > running FreeBSD and I've had no issues.\n>> \n>> You can get 64-bit Xeons also but it takes hit in the I/O\n>> department due to the lack of a hardware I/O MMU which limits DMA\n>> transfers to addresses below 4GB. This has a two-fold impact:\n>> \n>> 1) transfering data to >4GB require first a transfer to <4GB and\n>> then a copy to the final destination.\n>> \n>> 2) You must allocate real memory 2X the address space of the\n>> devices to act as bounce buffers. This is especially problematic\n>> for workstations because if you put a 512MB Nvidia card in your\n>> computer for graphics work -- you've just lost 1GB of memory. (I\n>> dunno how much the typical SCSI/NIC/etc take up.)\n>\n> I thought Intel was copying AMD's 64-bit API. Is Intel's\n> implementation as poor as you description? 
Does Intel have any better\n> 64-bit offering other than the Itanium/Itanic?\n\n From what I can see, the resulting \"copy of AMD64\" amounts to little\nmore than rushing together a project to glue a bag on the side of a\nXeon chip with some 64 bit parts in it.\n\nI see no reason to expect what is only billed as an \"extension\ntechnology\" <http://www.eweek.com/article2/0,1759,1545734,00.asp> to\nalleviate the deeply rooted memory bandwidth problems seen on Xeon.\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/advocacy.html\nQ: What does the function NULL do?\nA: The function NULL tests whether or not its argument is NIL or not. If\n its argument is NIL the value of NULL is NIL.\n-- Ken Tracton, Programmer's Guide to Lisp, page 73.\n", "msg_date": "Wed, 02 Feb 2005 22:35:19 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" } ]
[ { "msg_contents": "I have a data collector function in a PostGreSQL 7.4 DB running on Linux\nthat inserts approximately 10000 records into a table every fifteen\nminutes. The table has two macaddr columns, one varchar(50) column, two\ntimestamptz columns, five interval columns, one float8 column, and one\nint4 column. I have one multi-column B-tree index on the two macaddr\ncolumns, the varchar(50), and one of the timestamptz columns, in that\norder.\n\nThe 10000-record insert takes approximately 2 minutes, which I thought\nseemed awfully slow, so I tried removing the index, and sure enough,\nwithout the index the insert took less than two seconds. I repeated the\ninserts many times (with and without the index) and there's very little\nother activity on this server, so I'm confident of these results.\n\nThere are approximately 10000 fixed combinations of the first three\nindexed columns, and the fourth is the current time, so essentially what\nthe function is doing is inserting a set of values for each of those\n10000 fixed combinations for every fifteen minute period. I can see how\nthis might be a worst-case scenario for an index, because the inserted\nrows are alone and evenly spaced through the index. Even so, it doesn't\nseem reasonable to me that an index would slow an insert more than\n50-fold, regardless of hardware or the nature of the index. Am I wrong?\nCan anybody suggest why this would be happening and what I might be able\nto do about it? In production the table will have several million\nrecords, and the index is necessary for data retrieval from this table\nto be feasible, so leaving the index off is not an option.\n\nThanks in advance,\nTrevor Ball\n\n\n\n\n\n\nIndex Slowing Insert >50x\n\n\n\nI have a data collector function in a PostGreSQL 7.4 DB running on Linux that inserts approximately 10000 records into a table every fifteen minutes. The table has two macaddr columns, one varchar(50) column, two timestamptz columns, five interval columns, one float8 column, and one int4 column. I have one multi-column B-tree index on the two macaddr columns, the varchar(50), and one of the timestamptz columns, in that order.\nThe 10000-record insert takes approximately 2 minutes, which I thought seemed awfully slow, so I tried removing the index, and sure enough, without the index the insert took less than two seconds. I repeated the inserts many times (with and without the index) and there’s very little other activity on this server, so I’m confident of these results.\nThere are approximately 10000 fixed combinations of the first three indexed columns, and the fourth is the current time, so essentially what the function is doing is inserting a set of values for each of those 10000 fixed combinations for every fifteen minute period. I can see how this might be a worst-case scenario for an index, because the inserted rows are alone and evenly spaced through the index. Even so, it doesn’t seem reasonable to me that an index would slow an insert more than 50-fold, regardless of hardware or the nature of the index. Am I wrong? Can anybody suggest why this would be happening and what I might be able to do about it? 
In production the table will have several million records, and the index is necessary for data retrieval from this table to be feasible, so leaving the index off is not an option.\nThanks in advance,\nTrevor Ball", "msg_date": "Mon, 31 Jan 2005 16:14:39 -0500", "msg_from": "\"Trevor Ball\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index Slowing Insert >50x" }, { "msg_contents": "\"Trevor Ball\" <[email protected]> writes:\n> ... it doesn't\n> seem reasonable to me that an index would slow an insert more than\n> 50-fold, regardless of hardware or the nature of the index.\n\nSeems pretty slow to me too. Can you provide a self-contained test\ncase?\n\nOne possibility is that depending on your platform and locale setting,\nvarchar comparisons can be a whole lot slower than a normal person would\nconsider sane. If you're not using C locale, you might try C locale and\nsee if it helps.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Feb 2005 00:29:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Slowing Insert >50x " } ]
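Two cheap experiments follow from the replies in the thread above, sketched here with illustrative names since the real table and index names are not given. The locale check matters because varchar comparisons, and therefore maintenance of an index containing a varchar column, can be far slower under some non-C locales; the collation locale is fixed when the cluster is initdb'd.

-- Show the collation locale the cluster was initialized with.
SHOW lc_collate;

-- The drop-index experiment in transactional form, so the swap commits or
-- rolls back as a unit (table, column and index names are hypothetical).
BEGIN;
DROP INDEX collector_lookup_idx;
-- ... run the 10000-row insert here ...
CREATE INDEX collector_lookup_idx
    ON collector_data (mac_a, mac_b, label, sample_time);
COMMIT;

Whether rebuilding the index each batch stays cheaper than maintaining it row by row depends on how large the table eventually grows, so treat this as a measurement rather than a fix.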
[ { "msg_contents": "Hello my friends,\n\n \n\nI'd like to know (based on your experience and technical details) which OS\nis recommended for running PostgreSQL keeping in mind 3 indicators:\n\n \n\n1 - Performance (SO, Network and IO)\n\n2 - SO Stability\n\n3 - File System Integrity\n\n \n\nComparisons between Slackware, Gentoo and FreeBSD are welcome.\n\n \n\nWhich file system has the best performance and integrity: XFS (Linux) or UFS\n(FreeBSD)? \n\n*I've read that UFS is not a journaling FS. Is this right? How much this\ndifference affects performance and integrity?\n\n \n\nI don't have experience with FreeBSD so I'd like to know if it is possible\nto run XFS on FreeBSD 5.3.\n\n \n\n \n\nThank you,\n\nBruno Almeida do Lago\n\n\n\n\n\n\n\n\n\n\nHello my friends,\n \nI'd like to know (based on your experience and\ntechnical details) which OS is recommended for running PostgreSQL keeping in\nmind 3 indicators:\n \n1 - Performance (SO, Network and IO)\n2 - SO Stability\n3 - File System Integrity\n \nComparisons between Slackware, Gentoo and FreeBSD are\nwelcome.\n \nWhich file system has the best performance and integrity:\nXFS (Linux) or UFS (FreeBSD)? \n*I've read that UFS is not a journaling FS. Is this\nright? How much this difference affects performance and integrity?\n \nI don't have experience with FreeBSD so I'd like\nto know if it is possible to run XFS on FreeBSD 5.3.\n \n \nThank you,\nBruno Almeida do Lago", "msg_date": "Mon, 31 Jan 2005 23:59:46 -0200", "msg_from": "\"Lago, Bruno Almeida do\" <[email protected]>", "msg_from_op": true, "msg_subject": "Very important choice" }, { "msg_contents": "Lago, Bruno Almeida do wrote:\n> Hello my friends,\n> \n> I'd like to know (based on your experience and technical details) which OS\n> is recommended for running PostgreSQL keeping in mind 3 indicators:\n> \n> 1 - Performance (SO, Network and IO)\n> 2 - SO Stability\n> 3 - File System Integrity\n\nThe short answer is almost certainly whichever OS you are most familiar \nwith. If you have a problem, you don't want to be learning new details \nabout your OS while fixing it. That rules out FreeBSD for now.\n\nWhat hardware you want to use will affect performance and choice of OS. \nYou'll need to decide what hardware you're looking to use.\n\nAs far as file-systems are concerned, ext3 seems to be the slowest, and \nthe general feeling seems to be that XFS is perhaps the fastest. In \nterms of reliability, avoid cutting-edge releases of any file-system - \nlet others test them for you. One thing to consider is how long it takes \nto recover from a crash - you can run PostgreSQL on ext2, but checking a \nlarge disk can take hours after a crash. That's the real benefit of \njournalling for PG - speed of recovery.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 01 Feb 2005 08:50:54 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very important choice" } ]
[ { "msg_contents": "Sorry, I sent this mail message with wrong account and it has been delayed\nby the mail system, so I'm resending it. \n\n \n\nHello my friends,\n\n \n\nI'd like to know (based on your experience and technical details) which OS\nis recommended for running PostgreSQL keeping in mind 3 indicators:\n\n \n\n1 - Performance (SO, Network and IO)\n\n2 - SO Stability\n\n3 - File System Integrity\n\n \n\nComparisons between Slackware, Gentoo and FreeBSD are welcome.\n\n \n\nWhich file system has the best performance and integrity: XFS (Linux) or UFS\n(FreeBSD)? \n\n*I've read that UFS is not a journaling FS. Is this right? How much this\ndifference affects performance and integrity?\n\n \n\nI don't have experience with FreeBSD so I'd like to know if it is possible\nto run XFS on FreeBSD 5.3.\n\n \n\n \n\nThank you,\n\nBruno Almeida do Lago\n\n \n\n\n\n\n\n\n\n\n\n\nSorry, I sent this mail message with wrong account and it\nhas been delayed by the mail system, so I’m resending it. \n \nHello my friends,\n \nI'd like to know (based on your experience and technical\ndetails) which OS is recommended for running PostgreSQL keeping in mind 3\nindicators:\n \n1 - Performance (SO, Network and IO)\n2 - SO Stability\n3 - File System Integrity\n \nComparisons between Slackware, Gentoo and FreeBSD are\nwelcome.\n \nWhich file system has the best performance and integrity:\nXFS (Linux) or UFS (FreeBSD)? \n*I've read that UFS is not a journaling FS. Is this right?\nHow much this difference affects performance and integrity?\n \nI don't have experience with FreeBSD so I'd like to know if\nit is possible to run XFS on FreeBSD 5.3.\n \n \nThank you,\nBruno Almeida do Lago", "msg_date": "Tue, 1 Feb 2005 08:02:35 -0200", "msg_from": "\"Bruno Almeida do Lago\" <[email protected]>", "msg_from_op": true, "msg_subject": "Very important choice" } ]
[ { "msg_contents": "Doing some rather crude comparative performance tests\nbetween PG 8.0.1 on Windows XP and SQL Server 2000, PG\nwhips SQL Server's ass on\n\ninsert into junk (select * from junk)\n\non a one column table defined as int.\nIf we start with a 1 row table and repeatedly execute\nthis command, PG can take the table from 500K rows to\n1M rows in 20 seconds; SQL Server is at least twice as\nslow.\n\nBUT... \n\nSQL Server can do\n\nselect count(*) on junk\n\nin almost no time at all, probably because this query\ncan be optimised to go back and use catalogue\nstatistics.\n\nPG, on the other hand, appears to do a full table scan\nto answer this question, taking nearly 4 seconds to\nprocess the query.\n\nDoing an ANALYZE on the table and also VACUUM did not\nseem to affect this.\n\nCan PG find a table's row count more efficiently?.\nThis is not an unusual practice in commercial\napplications which assume that count(*) with no WHERE\nclause will be a cheap query - and use it to test if\na table is empty, for instance. (because for\nOracle/Sybase/SQL Server, count(*) is cheap).\n\n(sure, I appreciate there are other ways of doing\nthis, but I am curious about the way PG works here).\n\n\n", "msg_date": "Tue, 1 Feb 2005 04:41:43 -0800 (PST)", "msg_from": "Andrew Mayo <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of count(*) on large tables vs SQL Server" }, { "msg_contents": "On Tuesday 01 Feb 2005 6:11 pm, Andrew Mayo wrote:\n> PG, on the other hand, appears to do a full table scan\n> to answer this question, taking nearly 4 seconds to\n> process the query.\n>\n> Doing an ANALYZE on the table and also VACUUM did not\n> seem to affect this.\n>\n> Can PG find a table's row count more efficiently?.\n> This is not an unusual practice in commercial\n> applications which assume that count(*) with no WHERE\n> clause will be a cheap query - and use it to test if\n> a table is empty, for instance. (because for\n> Oracle/Sybase/SQL Server, count(*) is cheap).\n\nFirst of all, such an assumption is no good. It should hit concurrency under \nheavy load but I know people do use it.\n\nFor the specific question, after a vacuum analyze, you can use \n\nselect reltuples from pg_class where relname='Foo';\n\nRemember, you will get different results between 'analyze' and 'vacuum \nanalyze', since later actually visit every page in the table and hence is \nexpected to be more accurate.\n\n> (sure, I appreciate there are other ways of doing\n> this, but I am curious about the way PG works here).\n\nAnswer is MVCC and PG's inability use index alone. This has been a FAQ for a \nloong time.. Furthermore PG has custom aggregates to complicate the matter..\n\nMost of the pg developers/users think that unqualified select count(*) is of \nno use. You can search the archives for more details..\n\n HTH\n\n Shridhar\n", "msg_date": "Tue, 1 Feb 2005 18:32:56 +0530", "msg_from": "Shridhar Daithankar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*) on large tables vs SQL Server" }, { "msg_contents": "Hello Andrew,\n\tEverything that Shridhar says makes perfect\nsense, and, speaking from experience in dealing with\nthis type of 'problem', everything you say does as \nwell. Such is life really :)\n\n\tI would not be at -all- surprised if Sybase\nand Oracle did query re-writing behind the scene's\nto send un-defined count's to a temporary table which\nholds the row count. 
For an example of such done in\npostgreSQL (using triggers and a custom procedure)\nlook into the 'General Bits' newsletter. Specifically\nhttp://www.varlena.com/varlena/GeneralBits/49.php\n\n\tI know, giving a URL as an answer 'sucks', but,\nwell, it simply repeats my experience. Triggers and\nProcedures.\n\n\tRegards\n\tSteph\n\nOn Tue, Feb 01, 2005 at 06:32:56PM +0530, Shridhar Daithankar wrote:\n> On Tuesday 01 Feb 2005 6:11 pm, Andrew Mayo wrote:\n> > PG, on the other hand, appears to do a full table scan\n> > to answer this question, taking nearly 4 seconds to\n> > process the query.\n> >\n> > Doing an ANALYZE on the table and also VACUUM did not\n> > seem to affect this.\n> >\n> > Can PG find a table's row count more efficiently?.\n> > This is not an unusual practice in commercial\n> > applications which assume that count(*) with no WHERE\n> > clause will be a cheap query - and use it to test if\n> > a table is empty, for instance. (because for\n> > Oracle/Sybase/SQL Server, count(*) is cheap).\n> \n> First of all, such an assumption is no good. It should hit concurrency under \n> heavy load but I know people do use it.\n> \n> For the specific question, after a vacuum analyze, you can use \n> \n> select reltuples from pg_class where relname='Foo';\n> \n> Remember, you will get different results between 'analyze' and 'vacuum \n> analyze', since later actually visit every page in the table and hence is \n> expected to be more accurate.\n> \n> > (sure, I appreciate there are other ways of doing\n> > this, but I am curious about the way PG works here).\n> \n> Answer is MVCC and PG's inability use index alone. This has been a FAQ for a \n> loong time.. Furthermore PG has custom aggregates to complicate the matter..\n> \n> Most of the pg developers/users think that unqualified select count(*) is of \n> no use. You can search the archives for more details..\n> \n> HTH\n> \n> Shridhar\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>", "msg_date": "Tue, 1 Feb 2005 08:22:50 -0500", "msg_from": "Stef <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*) on large tables vs SQL Server" }, { "msg_contents": "\n\n> clause will be a cheap query - and use it to test if\n> a table is empty, for instance. (because for\n> Oracle/Sybase/SQL Server, count(*) is cheap).\n\n\tTo test if a table is empty, use a SELECT EXISTS or whatever SELECT with \na LIMIT 1...\n", "msg_date": "Tue, 01 Feb 2005 14:36:30 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of count(*) on large tables vs SQL Server" } ]
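For reference, the alternatives mentioned in the count(*) thread above, side by side. The trigger version follows the General Bits approach cited there; it uses 7.4-style quoted function bodies (dollar quoting only arrives in 8.0), assumes plpgsql is installed in the database (createlang plpgsql), and its single-row counter table serializes concurrent writers, which is the concurrency cost noted earlier.

-- Planner's estimate, refreshed by ANALYZE / VACUUM ANALYZE: fast but approximate.
SELECT reltuples FROM pg_class WHERE relname = 'junk';

-- Cheap "is the table empty?" tests that stop at the first row.
SELECT 1 FROM junk LIMIT 1;
SELECT EXISTS (SELECT 1 FROM junk);

-- Exact count maintained by triggers (counter table name is illustrative).
CREATE TABLE junk_rowcount (n bigint NOT NULL);
INSERT INTO junk_rowcount SELECT count(*) FROM junk;

CREATE FUNCTION junk_rowcount_trig() RETURNS trigger AS '
BEGIN
    IF TG_OP = ''INSERT'' THEN
        UPDATE junk_rowcount SET n = n + 1;
    ELSE
        UPDATE junk_rowcount SET n = n - 1;
    END IF;
    RETURN NULL;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER junk_rowcount_upd AFTER INSERT OR DELETE ON junk
    FOR EACH ROW EXECUTE PROCEDURE junk_rowcount_trig();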
[ { "msg_contents": "> Hi all,\n> 1) What kind of performance gain can I expect switching from\n> 7.1 to 7.4 (or 8.0)? Obviously I'm doing my own testing,\n> but I'm not very impressed by 8.0 speed, may be I'm doing\n> testing on a low end server...\n\n8.0 gives you savepoints. While this may not seem like a big deal at\nfirst, the ability to handle exceptions inside pl/pgsql functions gives\nyou much more flexibility to move code into the server. Also, recent\nversions of pl/pgsql give you more flexibility with cursors, incuding\nreturning them outside of the function.\nCorollary: use pl/pgsql. It can be 10 times or more faster than query\nby query editing.\n\nYou also have the parse/bind interface. This may not be so easily to\nimplement in your app, but if you are machine gunning your server with\nqueries, use parameterized prepared queries and reap 50% + performance,\nmeaning lower load and quicker transaction turnaround time.\n\n\nMerlin\n", "msg_date": "Tue, 1 Feb 2005 10:30:59 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Merlin Moncure wrote:\n\n> Corollary: use pl/pgsql. It can be 10 times or more faster than query\n> by query editing.\n\nMerlin, thanks for your good suggestions.\n\nBy now, our system has never used \"stored procedures\" approach,\ndue to the fact that we're staying on the minimum common SQL features\nthat are supported by most db engines.\nI realize though that it would provide an heavy performance boost.\n\n> You also have the parse/bind interface\n\nThis is something I have already engineered in our core classes\n(that use DBI + DBD::Pg), so that switching to 8.0 should\nautomatically enable the \"single-prepare, multiple-execute\" behavior,\nsaving a lot of query planner processing, if I understand correctly.\n\n-- \nCosimo\n\n", "msg_date": "Tue, 01 Feb 2005 22:00:50 +0100", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" } ]
[ { "msg_contents": "What's the effect of different encodings on database performance? \n\n \n\nWe're looking to switch encoding of our database from SQL_ASCII to UTF-8\nto better handle international data. I expect that at least 90% of our\ndata will be in the ASCII range with a few characters that need\ndouble-byte encoding. Has anyone done extensive comparison of the\nperformance of different encodings?\n\n \n\n-Igor\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nWhat’s the effect of different encodings on database\nperformance? \n \nWe’re looking to switch encoding of our database from\nSQL_ASCII to UTF-8 to better handle international data. I expect that at least\n90% of our data will be in the ASCII range with a few characters that need\ndouble-byte encoding. Has anyone done extensive comparison of the performance of\ndifferent encodings?\n \n-Igor", "msg_date": "Tue, 1 Feb 2005 10:38:13 -0600", "msg_from": "\"Igor Postelnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Effect of database encoding on performance" } ]
[ { "msg_contents": "Hi all,\n\nI have a freshly vacuumed table with 1104379 records with a index on zipcode. Can anyone explain why the queries go as they go, and why the performance differs so much (1 second versus 64 seconds, or stated differently, 10000 records per second versus 1562 records per second) and why the query plan of query 2 ignores the index?\n\nFor completeness sake I also did a select ordernumber without any ordering. That only took 98 second for 1104379 record (11222 record per second, compariable with the first query as I would have expected). \n\nQuery 1:\nselect a.ordernumer from orders a order by a.zipcode limit 10000\nExplain: \nQUERY PLAN\nLimit (cost=0.00..39019.79 rows=10000 width=14)\n -> Index Scan using orders_postcode on orders a (cost=0.00..4309264.07 rows=1104379 width=14)\nRunning time: 1 second\n\nQuery 2:\nselect a.ordernumer from orders a order by a.zipcode limit 100000\nExplain:\nQUERY PLAN\nLimit (cost=207589.75..207839.75 rows=100000 width=14)\n -> Sort (cost=207589.75..210350.70 rows=1104379 width=14)\n Sort Key: postcode\n -> Seq Scan on orders a (cost=0.00..46808.79 rows=1104379 width=14)\nRunning time: 64 seconds\n\nQuery 3:\nselect a.ordernumer from orders a\nQUERY PLAN\nSeq Scan on orders a (cost=0.00..46808.79 rows=1104379 width=4)\nRunning time: 98 seconds\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n", "msg_date": "Tue, 1 Feb 2005 19:55:01 +0100", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "Why the difference in query plan and performance pg 7.4.6?" }, { "msg_contents": "Joost Kraaijeveld wrote:\n\n>Hi all,\n>\n>I have a freshly vacuumed table with 1104379 records with a index on zipcode. Can anyone explain why the queries go as they go, and why the performance differs so much (1 second versus 64 seconds, or stated differently, 10000 records per second versus 1562 records per second) and why the query plan of query 2 ignores the index?\n>\n>\n>\n>\nIndexes are generally only faster if you are grabbing <10% of the table.\nOtherwise you have the overhead of loading the index into memory, and\nthen paging through it looking for the entries.\n\nWith 100,000 entries a sequential scan is actually likely to be faster\nthan an indexed one.\n\nIf you try:\nselect a.ordernumer from orders a order by a.zipcode\n\nhow long does it take?\n\nYou can also try disabling sequential scan to see how long Query 2 would\nbe if you used indexing. Remember, though, that because of caching, a\nrepeated index scan may seem faster, but in actual production, that\nindex may not be cached, depending on what other queries are done.\n\nJohn\n=:->\n\n>For completeness sake I also did a select ordernumber without any ordering. 
That only took 98 second for 1104379 record (11222 record per second, compariable with the first query as I would have expected).\n>\n>Query 1:\n>select a.ordernumer from orders a order by a.zipcode limit 10000\n>Explain:\n>QUERY PLAN\n>Limit (cost=0.00..39019.79 rows=10000 width=14)\n> -> Index Scan using orders_postcode on orders a (cost=0.00..4309264.07 rows=1104379 width=14)\n>Running time: 1 second\n>\n>Query 2:\n>select a.ordernumer from orders a order by a.zipcode limit 100000\n>Explain:\n>QUERY PLAN\n>Limit (cost=207589.75..207839.75 rows=100000 width=14)\n> -> Sort (cost=207589.75..210350.70 rows=1104379 width=14)\n> Sort Key: postcode\n> -> Seq Scan on orders a (cost=0.00..46808.79 rows=1104379 width=14)\n>Running time: 64 seconds\n>\n>Query 3:\n>select a.ordernumer from orders a\n>QUERY PLAN\n>Seq Scan on orders a (cost=0.00..46808.79 rows=1104379 width=4)\n>Running time: 98 seconds\n>\n>Groeten,\n>\n>Joost Kraaijeveld\n>Askesis B.V.\n>Molukkenstraat 14\n>6524NB Nijmegen\n>tel: 024-3888063 / 06-51855277\n>fax: 024-3608416\n>e-mail: [email protected]\n>web: www.askesis.nl\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n>\n>", "msg_date": "Tue, 01 Feb 2005 14:06:47 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why the difference in query plan and performance pg" } ]
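Both of John's suggestions in the thread above are one-liners, shown here against the exact query from the original post:

-- Time the full sort with no LIMIT, for comparison with the limited runs.
EXPLAIN ANALYZE SELECT a.ordernumer FROM orders a ORDER BY a.zipcode;

-- Disable sequential scans for this session only, to see what the index-scan
-- plan costs for the larger LIMIT (repeat it to see how much caching helps).
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT a.ordernumer FROM orders a ORDER BY a.zipcode LIMIT 100000;
SET enable_seqscan = on;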
[ { "msg_contents": "Hi all,\nI have a big table with ~ 10 Milion rows, and is a very\npain administer it, so after years I convinced my self\nto partition it and replace the table usage ( only for reading )\nwith a view.\n\nNow my user_logs table is splitted in 4:\n\nuser_logs\nuser_logs_2002\nuser_logs_2003\nuser_logs_2004\n\nand the view v_user_logs is builded on top of these tables:\n\n\nCREATE OR REPLACE VIEW v_user_logs AS\n SELECT * FROM user_logs\n UNION ALL\n SELECT * FROM user_logs_2002\n UNION ALL\n SELECT * FROM user_logs_2003\n UNION ALL\n SELECT * FROM user_logs_2004\n;\n\n\nthe view is performing really well:\n\n\nempdb=# explain analyze select * from v_user_logs where id_user = sp_id_user('kalman');\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan v_user_logs (cost=0.00..895.45 rows=645 width=88) (actual time=17.039..2345.388 rows=175 loops=1)\n -> Append (cost=0.00..892.23 rows=645 width=67) (actual time=17.030..2344.195 rows=175 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..120.70 rows=60 width=67) (actual time=17.028..17.036 rows=1 loops=1)\n -> Index Scan using idx_user_user_logs on user_logs (cost=0.00..120.40 rows=60 width=67) (actual time=17.012..17.018 rows=1 loops=1)\n Index Cond: (id_user = 4185)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..475.44 rows=316 width=67) (actual time=49.406..1220.400 rows=79 loops=1)\n -> Index Scan using idx_user_user_logs_2004 on user_logs_2004 (cost=0.00..473.86 rows=316 width=67) (actual time=49.388..1219.386 rows=79 loops=1)\n Index Cond: (id_user = 4185)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..204.33 rows=188 width=67) (actual time=59.375..1068.806 rows=95 loops=1)\n -> Index Scan using idx_user_user_logs_2003 on user_logs_2003 (cost=0.00..203.39 rows=188 width=67) (actual time=59.356..1067.934 rows=95 loops=1)\n Index Cond: (id_user = 4185)\n -> Subquery Scan \"*SELECT* 4\" (cost=0.00..91.75 rows=81 width=67) (actual time=37.623..37.623 rows=0 loops=1)\n -> Index Scan using idx_user_user_logs_2002 on user_logs_2002 (cost=0.00..91.35 rows=81 width=67) (actual time=37.618..37.618 rows=0 loops=1)\n Index Cond: (id_user = 4185)\n Total runtime: 2345.917 ms\n(15 rows)\n\n\n\nthe problem is now if this view is used in others views like this:\n\n\n\nCREATE OR REPLACE VIEW v_ua_user_login_logout_tmp AS\n SELECT\n u.login,\n ul.*\n FROM user_login u,\n v_user_logs ul\n WHERE\n u.id_user = ul.id_user\n;\n\n\n\nempdb=# explain analyze select * from v_ua_user_login_logout_tmp where login = 'kalman';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=4.01..228669.81 rows=173 width=100) (actual time=1544.784..116490.363 rows=175 loops=1)\n Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Subquery Scan ul (cost=0.00..193326.71 rows=7067647 width=88) (actual time=5.677..108190.096 rows=7067831 loops=1)\n -> Append (cost=0.00..157988.47 rows=7067647 width=67) (actual time=5.669..77109.995 rows=7067831 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..8158.48 rows=362548 width=67) (actual time=5.666..3379.178 rows=362862 loops=1)\n -> Seq Scan on user_logs (cost=0.00..6345.74 rows=362548 width=67) (actual time=5.645..1395.673 rows=362862 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..93663.88 rows=4191588 width=67) (actual 
time=9.149..35094.798 rows=4191580 loops=1)\n -> Seq Scan on user_logs_2004 (cost=0.00..72705.94 rows=4191588 width=67) (actual time=9.117..16531.486 rows=4191580 loops=1)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..44875.33 rows=2008233 width=67) (actual time=0.562..24017.680 rows=2008190 loops=1)\n -> Seq Scan on user_logs_2003 (cost=0.00..34834.17 rows=2008233 width=67) (actual time=0.542..13224.265 rows=2008190 loops=1)\n -> Subquery Scan \"*SELECT* 4\" (cost=0.00..11290.78 rows=505278 width=67) (actual time=7.100..3636.163 rows=505199 loops=1)\n -> Seq Scan on user_logs_2002 (cost=0.00..8764.39 rows=505278 width=67) (actual time=6.446..1474.709 rows=505199 loops=1)\n -> Hash (cost=4.00..4.00 rows=1 width=16) (actual time=0.083..0.083 rows=0 loops=1)\n -> Index Scan using user_login_login_key on user_login u (cost=0.00..4.00 rows=1 width=16) (actual time=0.064..0.066 rows=1 loops=1)\n Index Cond: ((login)::text = 'kalman'::text)\n Total runtime: 116491.056 ms\n(16 rows)\n\n\n\nas you can see the index scan is not used anymore.\nDo you see any problem on this approach ?\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 02 Feb 2005 01:16:17 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "horizontal partition" }, { "msg_contents": "Gaetano,\n\n> I have a big table with ~ 10 Milion rows, and is a very\n> pain administer it, so after years I convinced my self\n> to partition it and replace the table usage ( only for reading )\n> with a view.\n>\n> Now my user_logs table is splitted in 4:\n>\n> user_logs\n> user_logs_2002\n> user_logs_2003\n> user_logs_2004\n\nAny reason you didn't use inheritance?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 2 Feb 2005 12:09:06 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: horizontal partition" }, { "msg_contents": "Josh Berkus wrote:\n> Gaetano,\n> \n> \n>>I have a big table with ~ 10 Milion rows, and is a very\n>>pain administer it, so after years I convinced my self\n>>to partition it and replace the table usage ( only for reading )\n>>with a view.\n>>\n>>Now my user_logs table is splitted in 4:\n>>\n>>user_logs\n>>user_logs_2002\n>>user_logs_2003\n>>user_logs_2004\n> \n> \n> Any reason you didn't use inheritance?\n\nI did in that way just to not use postgresql specific feature.\nI can give it a try and I let you know, however the question remain,\nwhy the index usage is lost if used in that way ?\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 03 Feb 2005 02:10:15 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: horizontal partition" }, { "msg_contents": "Gaetano,\n\n> I did in that way just to not use postgresql specific feature.\n> I can give it a try and I let you know, however the question remain,\n> why the index usage is lost if used in that way ?\n\nBecause PostgreSQL is materializing the entire UNION data set in the \nsubselect. What Postgres version are you using? 
I thought this was fixed \nin 7.4, but maybe not ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 2 Feb 2005 19:16:08 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: horizontal partition" }, { "msg_contents": "On Thu, 03 Feb 2005 02:10:15 +0100, Gaetano Mendola <[email protected]> wrote:\n> why the index usage is lost if used in that way ?\n\nThis is how I interpret it (if anyone wants to set me straight or\nimprove on it feel free)\n\nViews are implemented as rules. \n\nRules are pretty much just a macro to the query builder. When it sees\nthe view, it replaces it with the implementation of the view.\n\nWhen you join a view to a table, it generates a subselect of the\nimplementation and joins that to the other table.\n\nSo the subselect will generate the entire set of data from the view\nbefore it can use the join to eliminate rows.\n\nI would like a way to make this work better as well. One of my views is\n32 joins of the same table (to get tree like data for reports).\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. :\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n", "msg_date": "Thu, 03 Feb 2005 14:31:35 +1100", "msg_from": "Klint Gore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: horizontal partition" }, { "msg_contents": "Klint,\n\n> This is how I interpret it (if anyone wants to set me straight or\n> improve on it feel free)\n>\n> Views are implemented as rules.\n>\n> Rules are pretty much just a macro to the query builder. When it sees\n> the view, it replaces it with the implementation of the view.\n\nRight so far.\n\n>\n> When you join a view to a table, it generates a subselect of the\n> implementation and joins that to the other table.\n\nMore or less. A join set and a subselect are not really different in the \nplanner.\n\n> So the subselect will generate the entire set of data from the view\n> before it can use the join to eliminate rows.\n\nWell, not exactly. That's what's happening in THIS query, but it doesn't \nhappen in most queries, no matter how many view levels you nest (well, up to \nthe number FROM_COLLAPSE_LIMIT, anyway).\n\nThe issue here is that the planner is capable of \"pushing down\" the WHERE \ncriteria into the first view, but not into the second, \"nested\" view, and so \npostgres materializes the UNIONed data set before perfoming the join.\n\nThing is, I seem to recall that this particular issue was something Tom fixed \na while ago. Which is why I wanted to know what version Gaetano is using.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 2 Feb 2005 22:41:57 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: horizontal partition" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> The issue here is that the planner is capable of \"pushing down\" the WHERE \n> criteria into the first view, but not into the second, \"nested\" view, and so \n> postgres materializes the UNIONed data set before perfoming the join.\n\n> Thing is, I seem to recall that this particular issue was something Tom fixed\n> a while ago. 
Which is why I wanted to know what version Gaetano is using.\n\nIt's still true that we can't generate a nestloop-with-inner-indexscan\njoin plan if the inner side is anything more complex than a single table\nscan. Since that's the only plan that gives you any chance of not\nscanning the whole partitioned table, it's rather a hindrance :-(\n\nIt might be possible to fix this by treating the nestloop's join\nconditions as \"push down-able\" criteria, instead of the present rather\nad hoc method for generating nestloop/indexscan plans. It'd be quite\na deal of work though, and I'm concerned about how slow the planner\nmight run if we did do it like that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Feb 2005 01:55:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: horizontal partition " }, { "msg_contents": "Josh Berkus wrote:\n> Gaetano,\n> \n> \n>>I did in that way just to not use postgresql specific feature.\n>>I can give it a try and I let you know, however the question remain,\n>>why the index usage is lost if used in that way ?\n> \n> \n> Because PostgreSQL is materializing the entire UNION data set in the \n> subselect. What Postgres version are you using? I thought this was fixed \n> in 7.4, but maybe not ...\n> \n\nYes, I'm using with 7.4.x, so it was not fixed...\n\n\n\nRegards\nGaetano Mendola\n\n\n", "msg_date": "Thu, 03 Feb 2005 11:40:20 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: horizontal partition" }, { "msg_contents": "Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> \n>>The issue here is that the planner is capable of \"pushing down\" the WHERE \n>>criteria into the first view, but not into the second, \"nested\" view, and so \n>>postgres materializes the UNIONed data set before perfoming the join.\n> \n> \n>>Thing is, I seem to recall that this particular issue was something Tom fixed\n>>a while ago. Which is why I wanted to know what version Gaetano is using.\n> \n> \n> It's still true that we can't generate a nestloop-with-inner-indexscan\n> join plan if the inner side is anything more complex than a single table\n> scan. Since that's the only plan that gives you any chance of not\n> scanning the whole partitioned table, it's rather a hindrance :-(\n> \n> It might be possible to fix this by treating the nestloop's join\n> conditions as \"push down-able\" criteria, instead of the present rather\n> ad hoc method for generating nestloop/indexscan plans. 
It'd be quite\n> a deal of work though, and I'm concerned about how slow the planner\n> might run if we did do it like that.\n> \n\nI don't know if this will help my attempt to perform an horizontal\npartition, if it do I think that it can solve lot of problems out there,\nI tried the inheritance technique too:\n\nThe table user_logs is the original one, I created two tables extending this one:\n\nCREATE TABLE user_logs_2003_h () inherits (user_logs);\nCREATE TABLE user_logs_2002_h () inherits (user_logs);\n\nI defined on this table the index already defined on user_logs.\n\n\nAnd this is the result:\n\nempdb=# explain analyze select * from user_logs where id_user = sp_id_user('kalman');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..426.33 rows=335 width=67) (actual time=20.891..129.218 rows=98 loops=1)\n -> Append (cost=0.00..426.33 rows=335 width=67) (actual time=20.871..128.643 rows=98 loops=1)\n -> Index Scan using idx_user_user_logs on user_logs (cost=0.00..133.11 rows=66 width=67) (actual time=20.864..44.594 rows=3 loops=1)\n Index Cond: (id_user = 4185)\n -> Index Scan using idx_user_user_logs_2003_h on user_logs_2003_h user_logs (cost=0.00..204.39 rows=189 width=67) (actual time=1.507..83.662 rows=95 loops=1)\n Index Cond: (id_user = 4185)\n -> Index Scan using idx_user_user_logs_2002_h on user_logs_2002_h user_logs (cost=0.00..88.83 rows=80 width=67) (actual time=0.206..0.206 rows=0 loops=1)\n Index Cond: (id_user = 4185)\n Total runtime: 129.500 ms\n(9 rows)\n\nthat is good, but now look what happen in a view like this one:\n\n\ncreate view to_delete AS\nSELECT v.login,\n u.*\nfrom user_login v,\n user_logs u\nwhere v.id_user = u.id_user;\n\n\n\nempdb=# explain analyze select * from to_delete where login = 'kalman';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=4.01..65421.05 rows=143 width=79) (actual time=1479.738..37121.511 rows=98 loops=1)\n Hash Cond: (\"outer\".id_user = \"inner\".id_user)\n -> Append (cost=0.00..50793.17 rows=2924633 width=67) (actual time=21.391..33987.363 rows=2927428 loops=1)\n -> Seq Scan on user_logs u (cost=0.00..7195.22 rows=411244 width=67) (actual time=21.385..5641.307 rows=414039 loops=1)\n -> Seq Scan on user_logs_2003_h u (cost=0.00..34833.95 rows=2008190 width=67) (actual time=0.024..18031.218 rows=2008190 loops=1)\n -> Seq Scan on user_logs_2002_h u (cost=0.00..8764.00 rows=505199 width=67) (actual time=0.005..5733.554 rows=505199 loops=1)\n -> Hash (cost=4.00..4.00 rows=2 width=16) (actual time=0.195..0.195 rows=0 loops=1)\n -> Index Scan using user_login_login_key on user_login v (cost=0.00..4.00 rows=2 width=16) (actual time=0.155..0.161 rows=1 loops=1)\n Index Cond: ((login)::text = 'kalman'::text)\n Total runtime: 37122.069 ms\n(10 rows)\n\n\n\nand how you can see this path is not applicable too :-(\n\nAny other suggestion ?\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n", "msg_date": "Sun, 06 Feb 2005 16:50:08 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: horizontal partition" } ]
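The first EXPLAIN in this thread shows that a qualification the planner can fold to a constant (id_user = 4185) is pushed into every branch of the UNION ALL view and reaches the per-year indexes. Until the planner limitation Tom describes is lifted, one workaround is to hand the partitioned view that constant directly instead of reaching it through the join, along these lines (a sketch only, reusing sp_id_user() from the thread; it only helps if that function folds to a constant at plan time, as it evidently does in the first EXPLAIN):

SELECT u.login, ul.*
  FROM user_login u,
       v_user_logs ul
 WHERE u.login = 'kalman'
   AND ul.id_user = sp_id_user('kalman')
   AND u.id_user = ul.id_user;

The price is that the caller supplies the restriction twice, once for user_login and once for the view, so this suits ad hoc queries better than a general-purpose joining view.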
[ { "msg_contents": "Hi,\n\nI am trying to build a function that would extend the trigger in general \ntid bits that would only track count changes for table rows.\n\nThe one i am trying to build would check which column and value should \nbe tracked.\n\ne.g. below would be the tracker.\n\nCREATE TABLE \"public\".\"aaaa\" (\n \"tables\" TEXT,\n \"columns\" TEXT[],\n \"values\" TEXT[],\n \"counts\" BIGINT\n) WITH OIDS;\n\nThe column array is the column name and the values array would be the \nvalues matching the columns array defined.\n\nif columns has {group,item} then values has {1,'basket'} so if the data \ninserted matches the values defined, it will be tracked.\n\nWhen new data is inserted, we have to use new.xxxx to access the data. \nIs there another way to access the data that would be more generic\nlike value[1] and so on? This way, the tracker is independant of any \ntables. i believe tg_argv is only for data being updated.\n\nI'm have not done any functions in C, but for those who has, it is more \npossible to do it in C? If those who has used it say i can access the \ndata easier\nin C, then i'll have to read up and learn it.\n\nI was looking through the examples in the doc on creating triggers in \nC. I believe the data i'm looking for is in tg_trigtuple but i am not \nsure how it is supposed to be accessed. Anyone knows of any links i can \nrefer to or anyones has simple codes showing how to access the data?\n\nThanks for any info.\n\nHasnul\n\n\n\n\n\n\n", "msg_date": "Wed, 02 Feb 2005 18:02:28 +0800", "msg_from": "Hasnul Fadhly bin Hasan <[email protected]>", "msg_from_op": true, "msg_subject": "Accessing insert values in triggers" }, { "msg_contents": "On Wed, Feb 02, 2005 at 06:02:28PM +0800, Hasnul Fadhly bin Hasan wrote:\n\n> When new data is inserted, we have to use new.xxxx to access the data. \n> Is there another way to access the data that would be more generic\n> like value[1] and so on? This way, the tracker is independant of any \n> tables.\n\nProcedural languages like PL/Perl, PL/Tcl, and PL/Python can access\nNEW and OLD columns without knowing the column names in advance.\n\nhttp://www.postgresql.org/docs/8.0/static/plperl-triggers.html\nhttp://www.postgresql.org/docs/8.0/static/pltcl-trigger.html\nhttp://www.postgresql.org/docs/8.0/static/plpython-trigger.html\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Thu, 3 Feb 2005 01:01:42 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Accessing insert values in triggers" } ]
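A minimal sketch of the generic access the PL/Perl trigger documentation (8.0) provides: a row-level trigger sees the inserted row as a hash keyed by column name, so nothing table-specific needs to be hard-coded. The function and table names below are placeholders, plperl is assumed to be installed, and the count-tracking logic against the "aaaa" table is left out:

CREATE OR REPLACE FUNCTION show_inserted_row() RETURNS trigger AS $$
    # $_TD->{new} is a hash of column name => value for the incoming row,
    # so the same function works on any table, with no NEW.xxxx references.
    while (my ($col, $val) = each %{ $_TD->{new} }) {
        elog(NOTICE, $_TD->{relname} . ".$col = " . (defined $val ? $val : 'NULL'));
    }
    return;    # let the INSERT proceed unchanged
$$ LANGUAGE plperl;

CREATE TRIGGER show_inserted_row_trg
    BEFORE INSERT ON some_table
    FOR EACH ROW EXECUTE PROCEDURE show_inserted_row();

From here the tracker would look up the matching columns/values arrays in aaaa and bump counts, for example through spi_exec_query().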
[ { "msg_contents": "[This mail goes as X-Post to both pgsql-perform and postgis-users\nbecause postgis users may suffer from this problem, but I would prefer\nto keep the Discussion on pgsql-performance as it is a general TOAST\nproblem and not specific to PostGIS alone.]\n\nHello,\n\nRunning PostGIS 0.8.1 under PostgreSQL 7.4.6-7 (Debian), I struggled\nover the following problem:\n\nlogigis=# explain analyze SELECT geom FROM adminbndy1 WHERE geom &&\nsetsrid('BOX3D(9.4835390946502 47.39365740740741,9.5164609053498\n47.40634259259259)'::box3d,4326);\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------\n Seq Scan on adminbndy1 (cost=0.00..4.04 rows=1 width=121) (actual\ntime=133.591..7947.546 rows=5 loops=1)\n Filter: (geom && 'SRID=4326;BOX3D(9.4835390946502 47.3936574074074\n0,9.5164609053498 47.4063425925926 0)'::geometry)\n Total runtime: 7947.669 ms\n(3 Zeilen)\n\nlogigis=# set enable_seqscan to off;\nSET\nlogigis=# explain analyze SELECT geom FROM adminbndy1 WHERE geom &&\nsetsrid('BOX3D(9.4835390946502 47.39365740740741,9.5164609053498\n47.40634259259259)'::box3d,4326);\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using adminbndy1_geom_idx on adminbndy1 (cost=0.00..4.44\nrows=1 width=121) (actual time=26.902..27.066 rows=5 loops=1)\n Index Cond: (geom && 'SRID=4326;BOX3D(9.4835390946502\n47.3936574074074 0,9.5164609053498 47.4063425925926 0)'::geometry)\n Total runtime: 27.265 ms\n(3 Zeilen)\n\nSo the query planner choses to ignore the index, although it is\nappropriate. My first idea was that the statistics, but that turned out\nnot to be the problem. 
As the above output shows, the query optimizer\nalready guesses a rowcount of 1 which is even smaller than the actual\nnumber of 5 fetched rows, so this should really make the query planner\nuse the index.\n\nSome amount of increasingly random tries later, I did the following:\n\nlogigis=# vacuum full freeze analyze verbose adminbndy1;\nINFO: vacuuming \"public.adminbndy1\"\nINFO: \"adminbndy1\": found 0 removable, 83 nonremovable row versions in\n3 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 128 to 1968 bytes long.\nThere were 1 unused item pointers.\nTotal free space (including removable row versions) is 5024 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n3 pages containing 5024 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: index \"adminbndy1_geom_idx\" now contains 83 row versions in 1 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"adminbndy1\": moved 0 row versions, truncated 3 to 3 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: vacuuming \"pg_toast.pg_toast_19369\"\nINFO: \"pg_toast_19369\": found 0 removable, 32910 nonremovable row\nversions in 8225 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 37 to 2034 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 167492 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n66 pages containing 67404 free bytes are potential move destinations.\nCPU 0.67s/0.04u sec elapsed 2.76 sec.\nINFO: index \"pg_toast_19369_index\" now contains 32910 row versions in\n127 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.14 sec.\nINFO: \"pg_toast_19369\": moved 0 row versions, truncated 8225 to 8225 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.adminbndy1\"\nINFO: \"adminbndy1\": 3 pages, 83 rows sampled, 83 estimated total rows\nVACUUM\nlogigis=#\n\nIMHO, this tells the reason. The query planner has a table size of 3\npages, which clearly is a case for a seqscan. But during the seqscan,\nthe database has to fetch an additional amount of 8225 toast pages and\n127 toast index pages, and rebuild the geometries contained therein.\n\nAnd the total number of 8355 pages = 68MB is a rather huge amount of\ndata to fetch.\n\nI think this problem bites every user that has rather large columns that\nget stored in the TOAST table, when querying on those column.\n\nAs a small workaround, I could imagine to add a small additional column\nin the table that contains the geometry's bbox, and which I use the &&\noperator against. This should avoid touching the TOAST for the skipped rows.\n\nBut the real fix should be to add the toast pages to the query planners\nestimation for the sequential scan. What do you think about it?\n\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zᅵrich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n", "msg_date": "Wed, 02 Feb 2005 18:21:40 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": true, "msg_subject": "Bad query optimizer misestimation because of TOAST tables" }, { "msg_contents": "Markus Schaber <[email protected]> writes:\n> IMHO, this tells the reason. 
The query planner has a table size of 3\n> pages, which clearly is a case for a seqscan. But during the seqscan,\n> the database has to fetch an additional amount of 8225 toast pages and\n> 127 toast index pages, and rebuild the geometries contained therein.\n\nI don't buy this analysis at all. The toasted columns are not those in\nthe index (because we don't support out-of-line-toasted index entries),\nso a WHERE clause that only touches indexed columns isn't going to need\nto fetch anything from the toast table. The only stuff it would fetch\nis in rows that passed the WHERE and need to be returned to the client\n--- and those costs are going to be the same either way.\n\nI'm not entirely sure where the time is going, but I do not think you\nhave proven your theory about it. I'd suggest building the backend\nwith -pg and getting some gprof evidence.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Feb 2005 12:44:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad query optimizer misestimation because of TOAST tables " }, { "msg_contents": "Hi, Tom,\n\nTom Lane schrieb:\n\n>>IMHO, this tells the reason. The query planner has a table size of 3\n>>pages, which clearly is a case for a seqscan. But during the seqscan,\n>>the database has to fetch an additional amount of 8225 toast pages and\n>>127 toast index pages, and rebuild the geometries contained therein.\n>\n> I don't buy this analysis at all. The toasted columns are not those in\n> the index (because we don't support out-of-line-toasted index entries),\n> so a WHERE clause that only touches indexed columns isn't going to need\n> to fetch anything from the toast table. The only stuff it would fetch\n> is in rows that passed the WHERE and need to be returned to the client\n> --- and those costs are going to be the same either way.\n>\n> I'm not entirely sure where the time is going, but I do not think you\n> have proven your theory about it. I'd suggest building the backend\n> with -pg and getting some gprof evidence.\n\nThe column is a PostGIS column, and the index was created using GIST.\nThose are lossy indices that do not store the whole geometry, but only\nthe bounding box corners of the Geometry (2 Points).\n\nWithout using the index, the && Operator (which tests for bbox\noverlapping) has to load the whole geometry from disk, and extract the\nbbox therein (as it cannot make use of partial fetch).\n\nSome little statistics:\n\nlogigis=# select max(mem_size(geom)), avg(mem_size(geom))::int,\nmax(npoints(geom)) from adminbndy1;\n max | avg | max\n----------+---------+--------\n 20998856 | 1384127 | 873657\n(1 Zeile)\n\nSo the geometries use are about 1.3 MB average size, and have a maximum\nsize of 20Mb. 
I'm pretty shure this cannot be stored without TOASTing.\n\nAdditionally, my suggested workaround using a separate bbox column\nreally works:\n\nlogigis=# alter table adminbndy1 ADD column bbox geometry;\nALTER TABLE\nlogigis=# update adminbndy1 set bbox = setsrid(box3d(geom)::geometry, 4326);\nUPDATE 83\nlogigis=# explain analyze SELECT geom FROM adminbndy1 WHERE bbox &&\nsetsrid('BOX3D(9.4835390946502 47.39365740740741,9.5164609053498\n47.40634259259259)'::box3d,4326);\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------\n Seq Scan on adminbndy1 (cost=100000000.00..100000022.50 rows=1\nwidth=32) (actual time=0.554..0.885 rows=5 loops=1)\n Filter: (bbox && 'SRID=4326;BOX3D(9.4835390946502 47.3936574074074\n0,9.5164609053498 47.4063425925926 0)'::geometry)\n Total runtime: 0.960 ms\n(3 Zeilen)\n\nHere, the seqential scan matching exactly the same 5 rows only needs\nabout 1/8000th of time, because it does not have to touch the TOAST\npages at all.\n\nlogigis=# \\o /dev/null\nlogigis=# \\timing\nZeitmessung ist an.\nlogigis=# SELECT geom FROM adminbndy1 WHERE geom &&\nsetsrid('BOX3D(9.4835390946502 47.39365740740741,9.5164609053498\n47.40634259259259)'::box3d,4326);\nZeit: 11224,185 ms\nlogigis=# SELECT geom FROM adminbndy1 WHERE bbox &&\nsetsrid('BOX3D(9.4835390946502 47.39365740740741,9.5164609053498\n47.40634259259259)'::box3d,4326);\nZeit: 7689,720 ms\n\nSo you can see that, when actually detoasting the 5 rows and\ndeserializing the geometries to WKT format (their canonical text\nrepresentation), the time relation gets better, but there's still a\nnoticeable difference.\n\nMarkus\n--\nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zᅵrich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com", "msg_date": "Wed, 02 Feb 2005 19:25:15 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad query optimizer misestimation because of TOAST" }, { "msg_contents": "Markus Schaber <[email protected]> writes:\n> Tom Lane schrieb:\n>> I don't buy this analysis at all. The toasted columns are not those in\n>> the index (because we don't support out-of-line-toasted index entries),\n>> so a WHERE clause that only touches indexed columns isn't going to need\n>> to fetch anything from the toast table.\n\n> The column is a PostGIS column, and the index was created using GIST.\n> Those are lossy indices that do not store the whole geometry, but only\n> the bounding box corners of the Geometry (2 Points).\n> Without using the index, the && Operator (which tests for bbox\n> overlapping) has to load the whole geometry from disk, and extract the\n> bbox therein (as it cannot make use of partial fetch).\n\nAh, I see; I forgot to consider the GIST \"storage\" option, which allows\nthe index contents to be something different from the represented column.\nHmm ...\n\nWhat I would be inclined to do is to extend ANALYZE to make an estimate\nof the extent of toasting of every toastable column, and then modify\ncost_qual_eval to charge a nonzero cost for evaluation of Vars that are\npotentially toasted.\n\nThis implies an initdb-forcing change in pg_statistic, which might or\nmight not be allowed for 8.1 ... we are still a bit up in the air on\nwhat our release policy will be for 8.1.\n\nMy first thought about what stat ANALYZE ought to collect is \"average\nnumber of out-of-line TOAST chunks per value\". 
Armed with that number\nand size information about the TOAST table, it'd be relatively simple\nfor costsize.c to estimate the average cost of fetching such values.\n\nI'm not sure if it's worth trying to model the cost of decompression of\ncompressed values. Surely that's a lot cheaper than fetching\nout-of-line values, so maybe we can just ignore it. If we did want to\nmodel it then we'd also need to make ANALYZE note the fraction of values\nthat require decompression, and maybe something about their sizes.\n\nThis approach would overcharge for operations that are able to work with\npartially fetched values, but it's probably not reasonable to expect the\nplanner to account for that with any accuracy.\n\nGiven this we'd have a pretty accurate computation of the true cost of\nthe seqscan alternative, but what of indexscans? The current\nimplementation charges one evaluation of the index qual(s) per\nindexscan, which is not really right because actually the index\ncomponent is never evaluated at all. This didn't matter when the index\ncomponent was a Var with zero eval cost, but if we're charging some eval\ncost it might. But ... since it's charging only one eval per scan\n... the error is probably down in the noise in practice, and it may not\nbe worth trying to get it exactly right.\n\nA bigger concern is \"what about lossy indexes\"? We currently ignore the\ncosts of rechecking qual expressions for fetched rows, but this might be\ntoo inaccurate for situations like yours. I'm hesitant to mess with it\nthough. For one thing, to get it right we'd need to understand how many\nrows will be returned by the raw index search (which is the number of\ntimes we'd need to recheck). At the moment the only info we have is the\nnumber that will pass the recheck, which could be a lot less ... and of\ncourse, even that is probably a really crude estimate when we are\ndealing with this sort of operator.\n\nSeems like a bit of a can of worms ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Feb 2005 18:49:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad query optimizer misestimation because of TOAST " }, { "msg_contents": "Hi, Tom,\n\nTom Lane schrieb:\n\n> What I would be inclined to do is to extend ANALYZE to make an estimate\n> of the extent of toasting of every toastable column, and then modify\n> cost_qual_eval to charge a nonzero cost for evaluation of Vars that are\n> potentially toasted.\n\nI currently do not have any internal knowledge of the query planner, but\nthat sounds good in my ears :-)\n\nMy (simpler) alternative would have been to simply add the number of\ntoast pages to the table size when estimating sequential scan costs.\nThis would clearly help in my case, but I now realize that it would give\nrather bad misestimations when the TOASTed columns are never touched.\n\n> This implies an initdb-forcing change in pg_statistic, which might or\n> might not be allowed for 8.1 ... we are still a bit up in the air on\n> what our release policy will be for 8.1.\n\nIs it possible to add metadata table columns to an existing database? At\nleast when the database is offline (no postmaster running on it)?\n\nYou could make the query optimizer code work with the old and new\nstatistic schema (at least during the 8.x series). Thus users could\nupgrade as normal (without dump/restore, and withut benefiting from this\nchange), and then manually change the schema to benefit (maybe using\nsome offline tool or special command). 
Of course, this should be clearly\ndocumented. ANALYZE could spit out a warning message about the missing\ncolumns.\n\nThe most convenient method might be to make ANALYZE automatically add\nthose columns, but'm somehow reluctant to accept such unexpected side\neffects (metadata schema changes) .\n\n> My first thought about what stat ANALYZE ought to collect is \"average\n> number of out-of-line TOAST chunks per value\". Armed with that number\n> and size information about the TOAST table, it'd be relatively simple\n> for costsize.c to estimate the average cost of fetching such values.\n\nThis sounds good.\n\n> I'm not sure if it's worth trying to model the cost of decompression of\n> compressed values. Surely that's a lot cheaper than fetching\n> out-of-line values, so maybe we can just ignore it. If we did want to\n> model it then we'd also need to make ANALYZE note the fraction of values\n> that require decompression, and maybe something about their sizes.\n\nWell, the first step is to generate those statistics (they may be of\ninterest for administrators and developers, too), and as we are already\nchanging the metadata schema, I would vote to add those columns, even in\ncase the query optimizer does not exploit them yet.\n\n> This approach would overcharge for operations that are able to work with\n> partially fetched values, but it's probably not reasonable to expect the\n> planner to account for that with any accuracy.\n\nI think it is impossible to give accurate statistics for this. We could\ngive some hints in \"CREATE OPERATOR\" to tell the query planner whether\nthe operator could make use of partial fetches, but this could never be\nreally accurate, as the amount of fetched data may vary wildly depending\non the value itself.\n\n> A bigger concern is \"what about lossy indexes\"? We currently ignore the\n> costs of rechecking qual expressions for fetched rows, but this might be\n> too inaccurate for situations like yours. I'm hesitant to mess with it\n> though. For one thing, to get it right we'd need to understand how many\n> rows will be returned by the raw index search (which is the number of\n> times we'd need to recheck). At the moment the only info we have is the\n> number that will pass the recheck, which could be a lot less ... and of\n> course, even that is probably a really crude estimate when we are\n> dealing with this sort of operator.\n\nI do not know whether PostGIS actually rechecks against the real\ngeometry. If the app needs the bbox check (&& operator), then the lossy\nindex contains just all the information. If a real intersection is\nneeded, PostGIS users usually use \"(column && bbox_of_reference) AND\nintersects(column, reference)\". This uses the bbox based index for\nefficient candidate selection, and then uses the rather expensive\ngeometric intersection algorithm for real decision.\n\nBut maybe there'll be a real intersection Operator in the future, that\nmakes use of the bbox index in the first stage.\n\n> Seems like a bit of a can of worms ...\n\nSorry :-)\n\nI did not really expect the problem to be so complicated when I posted\nmy problem, I merely thought about a 10-liner patch to the query\nestimator that I could backport and aply to my 7.4.6. Seems that my\npersonal problem-size estimator has some serious bugs, too...\n\nMarkus\n\n--\nmarkus schaber | dipl. 
informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zᅵrich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com", "msg_date": "Thu, 03 Feb 2005 11:30:32 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad query optimizer misestimation because of TOAST" }, { "msg_contents": "Hi, all,\n\nMarkus Schaber schrieb:\n\n> As a small workaround, I could imagine to add a small additional column\n> in the table that contains the geometry's bbox, and which I use the &&\n> operator against. This should avoid touching the TOAST for the skipped rows.\n\nFor your personal amusement: I just noticed that, after adding the\nadditional column containing the bbox and running VACUUM FULL, the table\nnow has a size of 4 pages, and that's enough for the query optimizer to\nchoose an index scan. At least that saves us from modifying and\nredeploying a bunch of applications to use the && bbox query:-)\n\nMarkus\n--\nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zᅵrich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com", "msg_date": "Thu, 03 Feb 2005 14:59:25 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [postgis-users] Bad query optimizer misestimation because of" }, { "msg_contents": "Hi, Tom,\n\nTom Lane schrieb:\n> Markus Schaber <[email protected]> writes:\n>> [Query optimizer misestimation using lossy GIST on TOASTed columns]\n>\n> What I would be inclined to do is to extend ANALYZE to make an estimate\n> of the extent of toasting of every toastable column, and then modify\n> cost_qual_eval to charge a nonzero cost for evaluation of Vars that are\n> potentially toasted.\n\nWhat to do now? To fix this issue seems to be a rather long-term job.\n\nIs it enough to document workarounds (as in PostGIS), provided that\nthere are such workarounds for other GIST users?\n\nIs there a bug tracking system we could file the problem, so it does not\nget lost?\n\nMarkus\n--\nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zᅵrich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com", "msg_date": "Mon, 07 Feb 2005 15:07:44 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad query optimizer misestimation because of TOAST" } ]
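Since the extra bbox column of the workaround has to be kept consistent with geom by hand, a small trigger along these lines can maintain it on INSERT and UPDATE (a sketch reusing the expression from the thread, written with old-style quoting so it also runs on the 7.4.x server mentioned, and assuming plpgsql is installed; the function and trigger names are made up):

CREATE OR REPLACE FUNCTION adminbndy1_set_bbox() RETURNS trigger AS '
BEGIN
    -- same expression used above to fill the column by hand
    NEW.bbox := setsrid(box3d(NEW.geom)::geometry, 4326);
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER adminbndy1_set_bbox_trg
    BEFORE INSERT OR UPDATE ON adminbndy1
    FOR EACH ROW EXECUTE PROCEDURE adminbndy1_set_bbox();

The cost is the usual trigger overhead on writes, which should be negligible for a table of 83 rows like the one in the thread.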
[ { "msg_contents": "> By now, our system has never used \"stored procedures\" approach,\n> due to the fact that we're staying on the minimum common SQL features\n> that are supported by most db engines.\n> I realize though that it would provide an heavy performance boost.\n\nI feel your pain. Well, sometimes you have to bite the bullet and do a\ncouple of implementation specific hacks in especially time sensitive\ncomponents.\n\n> > You also have the parse/bind interface\n> \n> This is something I have already engineered in our core classes\n> (that use DBI + DBD::Pg), so that switching to 8.0 should\n> automatically enable the \"single-prepare, multiple-execute\" behavior,\n> saving a lot of query planner processing, if I understand correctly.\n\nYes. You save the planning step (which adds up, even for trivial plans).\nThe 'ExexPrepared' variant of prepared statement execution also provides\nsubstantial savings (on server cpu load and execution time) because the\nstatement does not have to be parsed. Oh, and network traffic is reduced\ncorrespondingly. \n\nI know that the perl people were pushing for certain features into the\nlibpq library (describing prepared statements, IIRC). I think this\nstuff made it into 8.0...have no clue about DBD::pg.\n\nIf everything is working the way it's supposed to, 8.0 should be faster\nthan 7.1 (like, twice faster) for what you are probably trying to do.\nIf it isn't, something else is wrong and it's very likely a solvable\nproblem.\n\nIn short, in pg 8.0, statement by statement query execution is highly\noptimizeable at the driver level, much more so than 7.1. Law of\nUnintended Consequences aside, this will translate into direct benefits\ninto your app if it uses this application programming model.\n\nMerlin\n", "msg_date": "Wed, 2 Feb 2005 13:50:59 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Merlin Moncure wrote:\n\n> > [...]\n > > (...DBI + DBD::Pg), so that switching to 8.0 should\n> > automatically enable the \"single-prepare, multiple-execute\" behavior,\n> > saving a lot of query planner processing, if I understand correctly.\n>\n> [...]\n>\n> I know that the perl people were pushing for certain features into the\n> libpq library (describing prepared statements, IIRC). 
I think this\n> stuff made it into 8.0...have no clue about DBD::pg.\n\nFor the record: yes, DBD::Pg in CVS (> 1.32) has support\nfor server prepared statements.\n\n> If everything is working the way it's supposed to, 8.0 should be faster\n> than 7.1 (like, twice faster) for what you are probably trying to do.\n\nIn the next days I will be testing the entire application with the\nsame database only changing the backend from 7.1 to 8.0, so this is\na somewhat perfect condition to have a \"real-world\" benchmark\nof Pg 8.0 vs 7.1.x performances.\n\n-- \nCosimo\n\n", "msg_date": "Wed, 02 Feb 2005 22:10:00 +0100", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Cosimo Streppone wrote:\n\n> Merlin Moncure wrote:\n> \n> > If everything is working the way it's supposed to, 8.0 should be faster\n> > than 7.1 (like, twice faster) for what you are probably trying to do.\n> \n> In the next days I will be testing the entire application with the\n> same database only changing the backend from 7.1 to 8.0, so this is\n> a somewhat perfect condition to have a \"real-world\" benchmark\n> of Pg 8.0 vs 7.1.x performances.\n\nThe \"next days\" have come. I did a complete migration to Pg 8.0.1\nfrom 7.1.3. It was a *huge* jump.\nThe application is exactly the same, also the database structure\nis the same. I only dumped the entire 7.1.3 db, changed the backend\nversion, and restored the data in the 8.0.1 db.\n\nThe performance level of Pg 8 is at least *five* times higher\n(faster!) than 7.1.3 in \"query-intensive\" transactions,\nwhich is absolutely astounding.\n\nIn my experience, Pg8 handles far better non-unique indexes\nwith low cardinality built on numeric and integer types, which\nis very common in our application.\n\n-- \nCosimo\n\n", "msg_date": "Mon, 28 Feb 2005 22:47:02 +0100", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system" }, { "msg_contents": "Cosimo Streppone <[email protected]> writes:\n> The performance level of Pg 8 is at least *five* times higher\n> (faster!) than 7.1.3 in \"query-intensive\" transactions,\n> which is absolutely astounding.\n\nCool.\n\n> In my experience, Pg8 handles far better non-unique indexes\n> with low cardinality built on numeric and integer types, which\n> is very common in our application.\n\nYes, we've fixed a number of places where the btree code was inefficient\nwith large numbers of equal keys. I'm not sure that that explains a\n5x speedup all by itself, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Feb 2005 17:15:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High end server and storage for a PostgreSQL OLTP system " } ]
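For reference, the single-prepare, multiple-execute pattern credited above looks like this at the SQL level; drivers such as DBD::Pg (> 1.32, as noted in the thread) arrange the equivalent through the 8.0 protocol on the application's behalf. Statement, table and column names here are placeholders rather than anything from the benchmarked application:

-- parse and planning work is paid once ...
PREPARE recent_activity(integer) AS
    SELECT * FROM some_log_table WHERE some_id = $1;

-- ... and each further call only binds the parameter and executes
EXECUTE recent_activity(42);
EXECUTE recent_activity(43);

DEALLOCATE recent_activity;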
[ { "msg_contents": "Hi,\n\naccording to \nhttp://www.postgresql.org/docs/8.0/interactive/limitations.html , \nconcurrent access to GiST indexes isn't possible at the moment. I \nhaven't read the thesis mentioned there, but I presume that concurrent \nread access is also impossible. Is there any workaround for this, esp. \nif the index is usually only read and not written to?\n\nIt seems to be a big problem with tsearch2, when multiple clients are \nhammering the db (we have a quad opteron box here that stays 75% idle \ndespite an apachebench with concurrency 10 stressing the php script that \nuses tsearch2, with practically no disk accesses)\n\nRegards,\n Marinos\n-- \nDipl.-Ing. Marinos Yannikos, CEO\nPreisvergleich Internet Services AG\nObere Donaustra�e 63/2, A-1020 Wien\nTel./Fax: (+431) 5811609-52/-55\n", "msg_date": "Thu, 03 Feb 2005 05:55:13 +0100", "msg_from": "\"Marinos J. Yannikos\" <[email protected]>", "msg_from_op": true, "msg_subject": "GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "\"Marinos J. Yannikos\" <[email protected]> writes:\n> according to \n> http://www.postgresql.org/docs/8.0/interactive/limitations.html , \n> concurrent access to GiST indexes isn't possible at the moment. I \n> haven't read the thesis mentioned there, but I presume that concurrent \n> read access is also impossible.\n\nYou presume wrong ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Feb 2005 00:26:16 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2) " }, { "msg_contents": "> It seems to be a big problem with tsearch2, when multiple clients are \n> hammering the db (we have a quad opteron box here that stays 75% idle \n> despite an apachebench with concurrency 10 stressing the php script that \n> uses tsearch2, with practically no disk accesses)\n\nConcurrency with READs is fine - but you can only have one WRITE going \nat once.\n\nChris\n", "msg_date": "Thu, 03 Feb 2005 09:28:52 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:\n\n> Hi,\n>\n> according to http://www.postgresql.org/docs/8.0/interactive/limitations.html \n> , concurrent access to GiST indexes isn't possible at the moment. I haven't \n> read the thesis mentioned there, but I presume that concurrent read access is \n> also impossible. Is there any workaround for this, esp. 
if the index is \n> usually only read and not written to?\n\nthere are should no problem with READ access.\n\n>\n> It seems to be a big problem with tsearch2, when multiple clients are \n> hammering the db (we have a quad opteron box here that stays 75% idle despite \n> an apachebench with concurrency 10 stressing the php script that uses \n> tsearch2, with practically no disk accesses)\n\nI'm willing to see some details: version, query, explain analyze.\n\n\n\n\n>\n> Regards,\n> Marinos\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Thu, 3 Feb 2005 12:57:37 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "Oleg Bartunov wrote:\n> On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:\n>> concurrent access to GiST indexes isn't possible at the moment. I [...]\n> \n> there are should no problem with READ access.\n\nOK, thanks everyone (perhaps it would make sense to clarify this in the \nmanual).\n\n> I'm willing to see some details: version, query, explain analyze.\n\n8.0.0\n\nQuery while the box is idle:\n\nexplain analyze select count(*) from fr_offer o, fr_merchant m where \nidxfti @@ to_tsquery('ranz & mc') and eur >= 70 and m.m_id=o.m_id;\n\nAggregate (cost=2197.48..2197.48 rows=1 width=0) (actual \ntime=88.052..88.054 rows=1 loops=1)\n -> Merge Join (cost=2157.42..2196.32 rows=461 width=0) (actual \ntime=88.012..88.033 rows=3 loops=1)\n Merge Cond: (\"outer\".m_id = \"inner\".m_id)\n -> Index Scan using fr_merchant_pkey on fr_merchant m \n(cost=0.00..29.97 rows=810 width=4) (actual time=0.041..1.233 rows=523 \nloops=1)\n -> Sort (cost=2157.42..2158.57 rows=461 width=4) (actual \ntime=85.779..85.783 rows=3 loops=1)\n Sort Key: o.m_id\n -> Index Scan using idxfti_idx on fr_offer o \n(cost=0.00..2137.02 rows=461 width=4) (actual time=77.957..85.754 rows=3 \nloops=1)\n Index Cond: (idxfti @@ '\\'ranz\\' & \\'mc\\''::tsquery)\n Filter: (eur >= 70::double precision)\n\n Total runtime: 88.131 ms\n\nnow, while using apachebench (-c10), \"top\" says this:\n\nCpu0 : 15.3% us, 10.0% sy, 0.0% ni, 74.7% id, 0.0% wa, 0.0% hi, 0.0% si\nCpu1 : 13.3% us, 11.6% sy, 0.0% ni, 75.1% id, 0.0% wa, 0.0% hi, 0.0% si\nCpu2 : 16.9% us, 9.6% sy, 0.0% ni, 73.4% id, 0.0% wa, 0.0% hi, 0.0% si\nCpu3 : 18.7% us, 14.0% sy, 0.0% ni, 67.0% id, 0.0% wa, 0.0% hi, 0.3% si\n\n(this is with shared_buffers = 2000; a larger setting makes almost no \ndifference for overall performance: although according to \"top\" system \ntime goes to ~0 and user time to ~25%, the system still stays 70-75% idle)\n\nvmstat:\n\n r b swpd free buff cache si so bi bo in cs us \nsy id wa\n 2 0 0 8654316 64908 4177136 0 0 56 35 279 286 5 \n 1 94 0\n 2 0 0 8646188 64908 4177136 0 0 0 0 1156 2982 15 \n10 75 0\n 2 0 0 8658412 64908 4177136 0 0 0 0 1358 3098 19 \n11 70 0\n 1 0 0 8646508 64908 4177136 0 0 0 104 1145 2070 13 \n12 75 0\n\nso the script's execution speed is apparently not limited by the CPUs.\n\nThe query execution times go up like this while apachebench is running \n(and the system is 75% idle):\n\n Aggregate (cost=2197.48..2197.48 rows=1 width=0) (actual \ntime=952.661..952.663 rows=1 loops=1)\n -> Merge Join (cost=2157.42..2196.32 rows=461 
width=0) (actual \ntime=952.621..952.641 rows=3 loops=1)\n Merge Cond: (\"outer\".m_id = \"inner\".m_id)\n -> Index Scan using fr_merchant_pkey on fr_merchant m \n(cost=0.00..29.97 rows=810 width=4) (actual time=2.078..3.338 rows=523 \nloops=1)\n -> Sort (cost=2157.42..2158.57 rows=461 width=4) (actual \ntime=948.345..948.348 rows=3 loops=1)\n Sort Key: o.m_id\n -> Index Scan using idxfti_idx on fr_offer o \n(cost=0.00..2137.02 rows=461 width=4) (actual time=875.643..948.301 \nrows=3 loops=1)\n Index Cond: (idxfti @@ '\\'ranz\\' & \\'mc\\''::tsquery)\n Filter: (eur >= 70::double precision)\n Total runtime: 952.764 ms\n\nI can't seem to find out where the bottleneck is, but it doesn't seem to \nbe CPU or disk. \"top\" shows that postgres processes are frequently in \nthis state:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ WCHAN \nCOMMAND\n 6701 postgres 16 0 204m 58m 56m S 9.3 0.2 0:06.96 semtimedo\n ^^^^^^^^^\npostmaste\n\nAny hints are appreciated...\n\nRegards,\n Marinos\n-- \nDipl.-Ing. Marinos Yannikos, CEO\nPreisvergleich Internet Services AG\nObere Donaustra�e 63/2, A-1020 Wien\nTel./Fax: (+431) 5811609-52/-55\n", "msg_date": "Thu, 03 Feb 2005 12:04:27 +0100", "msg_from": "\"Marinos J. Yannikos\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "\n\tDo you have anything performing any updates or inserts to this table, \neven if it does not update the gist column, even if it does not update \nanything ?\n", "msg_date": "Thu, 03 Feb 2005 13:11:12 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "Marinos,\n\nwhat if you construct \"apachebench & Co\" free script and see if\nthe issue still exists. There are could be many issues doesn't\nconnected to postgresql and tsearch2.\n\nOleg\n\nOn Thu, 3 Feb 2005, Marinos J. Yannikos wrote:\n\n> Oleg Bartunov wrote:\n>> On Thu, 3 Feb 2005, Marinos J. Yannikos wrote:\n>>> concurrent access to GiST indexes isn't possible at the moment. 
I [...]\n>> \n>> there are should no problem with READ access.\n>\n> OK, thanks everyone (perhaps it would make sense to clarify this in the \n> manual).\n>\n>> I'm willing to see some details: version, query, explain analyze.\n>\n> 8.0.0\n>\n> Query while the box is idle:\n>\n> explain analyze select count(*) from fr_offer o, fr_merchant m where idxfti \n> @@ to_tsquery('ranz & mc') and eur >= 70 and m.m_id=o.m_id;\n>\n> Aggregate (cost=2197.48..2197.48 rows=1 width=0) (actual time=88.052..88.054 \n> rows=1 loops=1)\n> -> Merge Join (cost=2157.42..2196.32 rows=461 width=0) (actual \n> time=88.012..88.033 rows=3 loops=1)\n> Merge Cond: (\"outer\".m_id = \"inner\".m_id)\n> -> Index Scan using fr_merchant_pkey on fr_merchant m \n> (cost=0.00..29.97 rows=810 width=4) (actual time=0.041..1.233 rows=523 \n> loops=1)\n> -> Sort (cost=2157.42..2158.57 rows=461 width=4) (actual \n> time=85.779..85.783 rows=3 loops=1)\n> Sort Key: o.m_id\n> -> Index Scan using idxfti_idx on fr_offer o \n> (cost=0.00..2137.02 rows=461 width=4) (actual time=77.957..85.754 rows=3 \n> loops=1)\n> Index Cond: (idxfti @@ '\\'ranz\\' & \\'mc\\''::tsquery)\n> Filter: (eur >= 70::double precision)\n>\n> Total runtime: 88.131 ms\n>\n> now, while using apachebench (-c10), \"top\" says this:\n>\n> Cpu0 : 15.3% us, 10.0% sy, 0.0% ni, 74.7% id, 0.0% wa, 0.0% hi, 0.0% si\n> Cpu1 : 13.3% us, 11.6% sy, 0.0% ni, 75.1% id, 0.0% wa, 0.0% hi, 0.0% si\n> Cpu2 : 16.9% us, 9.6% sy, 0.0% ni, 73.4% id, 0.0% wa, 0.0% hi, 0.0% si\n> Cpu3 : 18.7% us, 14.0% sy, 0.0% ni, 67.0% id, 0.0% wa, 0.0% hi, 0.3% si\n>\n> (this is with shared_buffers = 2000; a larger setting makes almost no \n> difference for overall performance: although according to \"top\" system time \n> goes to ~0 and user time to ~25%, the system still stays 70-75% idle)\n>\n> vmstat:\n>\n> r b swpd free buff cache si so bi bo in cs us sy id \n> wa\n> 2 0 0 8654316 64908 4177136 0 0 56 35 279 286 5 1 94 \n> 0\n> 2 0 0 8646188 64908 4177136 0 0 0 0 1156 2982 15 10 75 \n> 0\n> 2 0 0 8658412 64908 4177136 0 0 0 0 1358 3098 19 11 70 \n> 0\n> 1 0 0 8646508 64908 4177136 0 0 0 104 1145 2070 13 12 75 \n> 0\n>\n> so the script's execution speed is apparently not limited by the CPUs.\n>\n> The query execution times go up like this while apachebench is running (and \n> the system is 75% idle):\n>\n> Aggregate (cost=2197.48..2197.48 rows=1 width=0) (actual \n> time=952.661..952.663 rows=1 loops=1)\n> -> Merge Join (cost=2157.42..2196.32 rows=461 width=0) (actual \n> time=952.621..952.641 rows=3 loops=1)\n> Merge Cond: (\"outer\".m_id = \"inner\".m_id)\n> -> Index Scan using fr_merchant_pkey on fr_merchant m \n> (cost=0.00..29.97 rows=810 width=4) (actual time=2.078..3.338 rows=523 \n> loops=1)\n> -> Sort (cost=2157.42..2158.57 rows=461 width=4) (actual \n> time=948.345..948.348 rows=3 loops=1)\n> Sort Key: o.m_id\n> -> Index Scan using idxfti_idx on fr_offer o \n> (cost=0.00..2137.02 rows=461 width=4) (actual time=875.643..948.301 rows=3 \n> loops=1)\n> Index Cond: (idxfti @@ '\\'ranz\\' & \\'mc\\''::tsquery)\n> Filter: (eur >= 70::double precision)\n> Total runtime: 952.764 ms\n>\n> I can't seem to find out where the bottleneck is, but it doesn't seem to be \n> CPU or disk. 
\"top\" shows that postgres processes are frequently in this \n> state:\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ WCHAN COMMAND\n> 6701 postgres 16 0 204m 58m 56m S 9.3 0.2 0:06.96 semtimedo\n> ^^^^^^^^^\n> postmaste\n>\n> Any hints are appreciated...\n>\n> Regards,\n> Marinos\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Thu, 3 Feb 2005 15:16:00 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "Oleg Bartunov wrote:\n> Marinos,\n> \n> what if you construct \"apachebench & Co\" free script and see if\n> the issue still exists. There are could be many issues doesn't\n> connected to postgresql and tsearch2.\n> \n\nYes, the problem persists - I wrote a small perl script that forks 10 \nchils processes and executes the same queries in parallel without any \nphp/apachebench involved:\n\n--- 8< ---\n#!/usr/bin/perl\nuse DBI;\n$n=10;\n$nq=100;\n$sql=\"select count(*) from fr_offer o, fr_merchant m where idxfti @@ \nto_tsquery('ranz & mc') and eur >= 70 and m.m_id=o.m_id;\";\n\nsub reaper { my $waitedpid = wait; $running--; $SIG{CHLD} = \\&reaper; }\n$SIG{CHLD} = \\&reaper;\n\nfor $i (1..$n)\n{\n if (fork() > 0) { $running++; }\n else\n {\n my \n$dbh=DBI->connect('dbi:Pg:host=daedalus;dbname=<censored>','root','',{\n AutoCommit => 1 }) || die \"!db\";\n for my $j (1..$nq)\n {\n my $sth=$dbh->prepare($sql);\n $r=$sth->execute() or print STDERR $dbh->errstr();\n }\n exit 0;\n }\n}\nwhile ($running > 0)\n{\n sleep 1;\n print \"Running: $running\\n\";\n}\n--- >8 ---\n\nResult (now with shared_buffers = 20000, hence less system and more user \ntime):\n\nCpu0 : 25.1% us, 0.0% sy, 0.0% ni, 74.9% id, 0.0% wa, 0.0% hi, 0.0% si\nCpu1 : 18.3% us, 0.0% sy, 0.0% ni, 81.7% id, 0.0% wa, 0.0% hi, 0.0% si\nCpu2 : 27.8% us, 0.3% sy, 0.0% ni, 71.9% id, 0.0% wa, 0.0% hi, 0.0% si\nCpu3 : 23.5% us, 0.3% sy, 0.0% ni, 75.9% id, 0.0% wa, 0.0% hi, 0.3% si\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ WCHAN \nCOMMAND\n 7571 postgres 16 0 204m 62m 61m R 10.6 0.2 0:01.97 - \npostmaste\n 7583 postgres 16 0 204m 62m 61m S 9.6 0.2 0:02.06 semtimedo \npostmaste\n 7586 postgres 16 0 204m 62m 61m S 9.6 0.2 0:02.00 semtimedo \npostmaste\n 7575 postgres 16 0 204m 62m 61m S 9.3 0.2 0:02.12 semtimedo \npostmaste\n 7578 postgres 16 0 204m 62m 61m R 9.3 0.2 0:02.05 - \npostmaste\n\ni.e., virtually no difference. With 1000 queries and 10 in parallel, the \napachebench run takes 60.674 seconds and the perl script 59.392 seconds.\n\nRegards,\n Marinos\n-- \nDipl.-Ing. Marinos Yannikos, CEO\nPreisvergleich Internet Services AG\nObere Donaustra�e 63/2, A-1020 Wien\nTel./Fax: (+431) 5811609-52/-55\n", "msg_date": "Thu, 03 Feb 2005 14:15:50 +0100", "msg_from": "\"Marinos J. Yannikos\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "\"Marinos J. Yannikos\" <[email protected]> writes:\n> I can't seem to find out where the bottleneck is, but it doesn't seem to \n> be CPU or disk. 
\"top\" shows that postgres processes are frequently in \n> this state:\n\n> 6701 postgres 16 0 204m 58m 56m S 9.3 0.2 0:06.96 semtimedo\n> ^^^^^^^^^\n\nWhat's the platform exactly (hardware and OS)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Feb 2005 11:27:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2) " }, { "msg_contents": "Tom Lane schrieb:\n> What's the platform exactly (hardware and OS)?\n\nHardware: http://www.appro.com/product/server_1142h.asp\n- SCSI version, 2 x 146GB 10k rpm disks in software RAID-1\n- 32GB RAM\n\nOS: Linux 2.6.10-rc3, x86_64, debian GNU/Linux distribution\n\n- CONFIG_K8_NUMA is currently turned off (no change, but now all CPUs \nhave ~25% load, previously one was 100% busy and the others idle)\n\n- CONFIG_GART_IOMMU=y (but no change, tried both settings)\n[other kernel options didn't seem to be relevant for tweaking at the \nmoment, mostly they're \"safe defaults\"]\n\nThe PostgreSQL data directory is on an ext2 filesystem.\n\nRegards,\n Marinos\n-- \nDipl.-Ing. Marinos Yannikos, CEO\nPreisvergleich Internet Services AG\nObere Donaustrasse 63, A-1020 Wien\nTel./Fax: (+431) 5811609-52/-55\n", "msg_date": "Thu, 03 Feb 2005 20:06:10 +0100", "msg_from": "\"Marinos J. Yannikos\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "On Thu, 3 Feb 2005, Tom Lane wrote:\n\n> \"Marinos J. Yannikos\" <[email protected]> writes:\n>> I can't seem to find out where the bottleneck is, but it doesn't seem to\n>> be CPU or disk. \"top\" shows that postgres processes are frequently in\n>> this state:\n>\n>> 6701 postgres 16 0 204m 58m 56m S 9.3 0.2 0:06.96 semtimedo\n>> ^^^^^^^^^\n>\n> What's the platform exactly (hardware and OS)?\n>\n\nit should be 'semtimedop'\n\n\n> \t\t\tregards, tom lane\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Thu, 3 Feb 2005 22:50:53 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "Oleg Bartunov schrieb:\n> Marinos,\n> \n> what if you construct \"apachebench & Co\" free script and see if\n> the issue still exists. There are could be many issues doesn't\n> connected to postgresql and tsearch2.\n> \n\nSome more things I tried:\n- data directory on ramdisk (tmpfs) - no effect\n- database connections either over Unix domain sockets or TCP - no effect\n- CLUSTER on gist index - approx. 20% faster queries, but CPU usage \nstill hovers around 25% (75% idle)\n- preemptible kernel - no effect\n\nThis is really baffling me, it looks like a kernel issue of some sort \n(I'm only guessing though). I found this old posting: \nhttp://archives.postgresql.org/pgsql-general/2001-12/msg00836.php - is \nthis still applicable? 
I don't see an unusually high number of context \nswitches, but the processes seem to be spending some time in \n\"semtimedop\" (even though the TAS assembly macros are definetely being \ncompiled-in).\n\nIf you are interested, I can probably provide an account on one of our \nidentically configured boxes by Monday afternoon (GMT+1) with the same \ndatabase and benchmarking utility.\n\nRegards,\n Marinos\n", "msg_date": "Sat, 05 Feb 2005 14:01:40 +0100", "msg_from": "Marinos Yannikos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "Marinos Yannikos <[email protected]> writes:\n> This is really baffling me, it looks like a kernel issue of some sort \n> (I'm only guessing though). I found this old posting: \n> http://archives.postgresql.org/pgsql-general/2001-12/msg00836.php - is \n> this still applicable?\n\nThat seems to be an early report of what we now recognize as the\n\"context swap storm\" problem, and no we don't have a solution yet.\nI'm not completely convinced that you're seeing the same thing,\nbut if you're seeing a whole lot of semops then it could well be.\n\nI set up a test case consisting of two backends running the same\ntsearch2 query over and over --- nothing fancy, just one of the ones\nfrom the tsearch2 regression test:\nSELECT count(*) FROM test_tsvector WHERE a @@ to_tsquery('345&qwerty');\nI used gdb to set breakpoints at PGSemaphoreLock and PGSemaphoreTryLock,\nwhich are the only two functions that can possibly block on a semop\ncall. On a single-processor machine, I saw maybe one hit every couple\nof seconds, all coming from contention for the BufMgrLock or sometimes\nthe LockMgrLock. So unless I've missed something, there's not anything\nin tsearch2 or gist per se that is causing lock conflicts. You said\nyou're testing a quad-processor machine, so it could be that you're\nseeing the same lock contention issues that we've been trying to figure\nout for the past year ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Feb 2005 14:01:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2) " }, { "msg_contents": "Marinos Yannikos <[email protected]> writes:\n> Some more things I tried:\n\nYou might try the attached patch (which I just applied to HEAD).\nIt cuts down the number of acquisitions of the BufMgrLock by merging\nadjacent bufmgr calls during a GIST index search. I'm not hugely\nhopeful that this will help, since I did something similar to btree\nlast spring without much improvement for context swap storms involving\nbtree searches ... but it seems worth trying.\n\n\t\t\tregards, tom lane\n\n*** src/backend/access/gist/gistget.c.orig\tFri Dec 31 17:45:27 2004\n--- src/backend/access/gist/gistget.c\tSat Feb 5 14:19:52 2005\n***************\n*** 60,69 ****\n \tBlockNumber blk;\n \tIndexTuple\tit;\n \n \tb = ReadBuffer(s->indexRelation, GISTP_ROOT);\n \tp = BufferGetPage(b);\n \tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n- \tso = (GISTScanOpaque) s->opaque;\n \n \tfor (;;)\n \t{\n--- 60,70 ----\n \tBlockNumber blk;\n \tIndexTuple\tit;\n \n+ \tso = (GISTScanOpaque) s->opaque;\n+ \n \tb = ReadBuffer(s->indexRelation, GISTP_ROOT);\n \tp = BufferGetPage(b);\n \tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n \n \tfor (;;)\n \t{\n***************\n*** 75,86 ****\n \n \t\twhile (n < FirstOffsetNumber || n > maxoff)\n \t\t{\n! \t\t\tReleaseBuffer(b);\n! 
\t\t\tif (so->s_stack == NULL)\n \t\t\t\treturn false;\n \n! \t\t\tstk = so->s_stack;\n! \t\t\tb = ReadBuffer(s->indexRelation, stk->gs_blk);\n \t\t\tp = BufferGetPage(b);\n \t\t\tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n \t\t\tmaxoff = PageGetMaxOffsetNumber(p);\n--- 76,89 ----\n \n \t\twhile (n < FirstOffsetNumber || n > maxoff)\n \t\t{\n! \t\t\tstk = so->s_stack;\n! \t\t\tif (stk == NULL)\n! \t\t\t{\n! \t\t\t\tReleaseBuffer(b);\n \t\t\t\treturn false;\n+ \t\t\t}\n \n! \t\t\tb = ReleaseAndReadBuffer(b, s->indexRelation, stk->gs_blk);\n \t\t\tp = BufferGetPage(b);\n \t\t\tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n \t\t\tmaxoff = PageGetMaxOffsetNumber(p);\n***************\n*** 89,94 ****\n--- 92,98 ----\n \t\t\t\tn = OffsetNumberPrev(stk->gs_child);\n \t\t\telse\n \t\t\t\tn = OffsetNumberNext(stk->gs_child);\n+ \n \t\t\tso->s_stack = stk->gs_parent;\n \t\t\tpfree(stk);\n \n***************\n*** 116,123 ****\n \t\t\tit = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));\n \t\t\tblk = ItemPointerGetBlockNumber(&(it->t_tid));\n \n! \t\t\tReleaseBuffer(b);\n! \t\t\tb = ReadBuffer(s->indexRelation, blk);\n \t\t\tp = BufferGetPage(b);\n \t\t\tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n \t\t}\n--- 120,126 ----\n \t\t\tit = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));\n \t\t\tblk = ItemPointerGetBlockNumber(&(it->t_tid));\n \n! \t\t\tb = ReleaseAndReadBuffer(b, s->indexRelation, blk);\n \t\t\tp = BufferGetPage(b);\n \t\t\tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n \t\t}\n***************\n*** 137,142 ****\n--- 140,147 ----\n \tBlockNumber blk;\n \tIndexTuple\tit;\n \n+ \tso = (GISTScanOpaque) s->opaque;\n+ \n \tblk = ItemPointerGetBlockNumber(&(s->currentItemData));\n \tn = ItemPointerGetOffsetNumber(&(s->currentItemData));\n \n***************\n*** 148,154 ****\n \tb = ReadBuffer(s->indexRelation, blk);\n \tp = BufferGetPage(b);\n \tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n- \tso = (GISTScanOpaque) s->opaque;\n \n \tfor (;;)\n \t{\n--- 153,158 ----\n***************\n*** 157,176 ****\n \n \t\twhile (n < FirstOffsetNumber || n > maxoff)\n \t\t{\n! \t\t\tReleaseBuffer(b);\n! \t\t\tif (so->s_stack == NULL)\n \t\t\t\treturn false;\n \n! \t\t\tstk = so->s_stack;\n! \t\t\tb = ReadBuffer(s->indexRelation, stk->gs_blk);\n \t\t\tp = BufferGetPage(b);\n- \t\t\tmaxoff = PageGetMaxOffsetNumber(p);\n \t\t\tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n \n \t\t\tif (ScanDirectionIsBackward(dir))\n \t\t\t\tn = OffsetNumberPrev(stk->gs_child);\n \t\t\telse\n \t\t\t\tn = OffsetNumberNext(stk->gs_child);\n \t\t\tso->s_stack = stk->gs_parent;\n \t\t\tpfree(stk);\n \n--- 161,183 ----\n \n \t\twhile (n < FirstOffsetNumber || n > maxoff)\n \t\t{\n! \t\t\tstk = so->s_stack;\n! \t\t\tif (stk == NULL)\n! \t\t\t{\n! \t\t\t\tReleaseBuffer(b);\n \t\t\t\treturn false;\n+ \t\t\t}\n \n! \t\t\tb = ReleaseAndReadBuffer(b, s->indexRelation, stk->gs_blk);\n \t\t\tp = BufferGetPage(b);\n \t\t\tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n+ \t\t\tmaxoff = PageGetMaxOffsetNumber(p);\n \n \t\t\tif (ScanDirectionIsBackward(dir))\n \t\t\t\tn = OffsetNumberPrev(stk->gs_child);\n \t\t\telse\n \t\t\t\tn = OffsetNumberNext(stk->gs_child);\n+ \n \t\t\tso->s_stack = stk->gs_parent;\n \t\t\tpfree(stk);\n \n***************\n*** 198,205 ****\n \t\t\tit = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));\n \t\t\tblk = ItemPointerGetBlockNumber(&(it->t_tid));\n \n! \t\t\tReleaseBuffer(b);\n! 
\t\t\tb = ReadBuffer(s->indexRelation, blk);\n \t\t\tp = BufferGetPage(b);\n \t\t\tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n \n--- 205,211 ----\n \t\t\tit = (IndexTuple) PageGetItem(p, PageGetItemId(p, n));\n \t\t\tblk = ItemPointerGetBlockNumber(&(it->t_tid));\n \n! \t\t\tb = ReleaseAndReadBuffer(b, s->indexRelation, blk);\n \t\t\tp = BufferGetPage(b);\n \t\t\tpo = (GISTPageOpaque) PageGetSpecialPointer(p);\n \n", "msg_date": "Sat, 05 Feb 2005 14:42:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2) " }, { "msg_contents": "Tom Lane wrote:\n> You might try the attached patch (which I just applied to HEAD).\n> It cuts down the number of acquisitions of the BufMgrLock by merging\n> adjacent bufmgr calls during a GIST index search. [...]\n\nThanks - I applied it successfully against 8.0.0, but it didn't seem to \nhave a noticeable effect. I'm still seeing more or less exactly 25% CPU \nusage by postgres processes and identical query times (measured with the \nPerl script I posted earlier).\n\nRegards,\n Marinos\n-- \nDipl.-Ing. Marinos Yannikos, CEO\nPreisvergleich Internet Services AG\nObere Donaustrasse 63, A-1020 Wien\nTel./Fax: (+431) 5811609-52/-55\n", "msg_date": "Wed, 09 Feb 2005 20:25:36 +0100", "msg_from": "\"Marinos J. Yannikos\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "Tom Lane wrote:\n> I'm not completely convinced that you're seeing the same thing,\n> but if you're seeing a whole lot of semops then it could well be.\n\nI'm seeing ~280 semops/second with spinlocks enabled and ~80k \nsemops/second (> 4 mil. for 100 queries) with --disable-spinlocks, which \nincreases total run time by ~20% only. In both cases, cpu usage stays \naround 25%, which is a bit odd.\n\n> [...]You said\n> you're testing a quad-processor machine, so it could be that you're\n> seeing the same lock contention issues that we've been trying to figure\n> out for the past year ...\n\nAre those issues specific to a particular platform (only x86/Linux?) or \nis it a problem with SMP systems in general? I guess I'll be following \nthe current discussion on -hackers closely...\n\nRegards,\n Marinos\n", "msg_date": "Thu, 10 Feb 2005 01:55:05 +0100", "msg_from": "\"Marinos J. Yannikos\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" }, { "msg_contents": "On Sat, 2005-02-05 at 14:42 -0500, Tom Lane wrote:\n> Marinos Yannikos <[email protected]> writes:\n> > Some more things I tried:\n> \n> You might try the attached patch (which I just applied to HEAD).\n> It cuts down the number of acquisitions of the BufMgrLock by merging\n> adjacent bufmgr calls during a GIST index search.\n\nI'm not sure it will help much either, but there is more low-hanging\nfruit in this area: GiST currently does a ReadBuffer() for each tuple\nproduced by the index scan, which is grossly inefficient. I recently\napplied a patch to change rtree to keep a pin on the scan's current\nbuffer in between invocations of the index scan API (which is how btree\nand hash already work), and it improved performance by about 10%\n(according to contrib/rtree_gist's benchmark). 
I've made similar changes\nfor GiST, but unfortunately it is part of a larger GiST improvement\npatch that I haven't had a chance to commit to 8.1 yet:\n\nhttp://archives.postgresql.org/pgsql-patches/2004-11/msg00144.php\n\nI'll try and get this cleaned up for application to HEAD next week.\n\n-Neil\n\n\n", "msg_date": "Thu, 10 Feb 2005 12:13:03 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexes and concurrency (tsearch2)" } ]
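A practical footnote to the thread above: Marinos reports roughly 20% faster queries after CLUSTERing on the GiST index, and Tom's test used the tsearch2 regression table. A minimal SQL sketch of that experiment, assuming a table test_tsvector with a tsvector column a (as in Tom's test); the index name here is made up:

    -- names below are illustrative assumptions, not taken from the thread
    CREATE INDEX test_tsvector_a_gist ON test_tsvector USING gist (a);
    CLUSTER test_tsvector_a_gist ON test_tsvector;   -- rewrite the heap in index order
    ANALYZE test_tsvector;                           -- refresh planner statistics
    EXPLAIN ANALYZE
    SELECT count(*) FROM test_tsvector WHERE a @@ to_tsquery('345&qwerty');

Clustering only improves the locality of heap fetches; it does not touch the lock contention discussed above, which is consistent with Marinos' observation that CPU usage stayed around 25% afterwards.
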
[ { "msg_contents": "Hello,\nI have a little time and I decided to improve the performance of my \nserver(s). I have found on google many 'tips' in tuning linux kernel and \npostgresql database ... but I can't decide wich 'how-to' is better ... :(\nSo the question is: where to find a 'easy' and complete documentation \nabout this tweaks ... ?\n\nthank you,\nAdrian Din\n\n\n-- \nUsing Opera's revolutionary e-mail client: http://www.opera.com/m2/\n", "msg_date": "Thu, 03 Feb 2005 11:55:04 +0200", "msg_from": "Din Adrian <[email protected]>", "msg_from_op": true, "msg_subject": "Tunning postgresql on linux (fedora core 3)" }, { "msg_contents": "Din Adrian wrote:\n> Hello,\n> I have a little time and I decided to improve the performance of my \n> server(s). I have found on google many 'tips' in tuning linux kernel \n> and postgresql database ... but I can't decide wich 'how-to' is better \n> ... :(\n> So the question is: where to find a 'easy' and complete documentation \n> about this tweaks ... ?\n\nTry the \"performance tuning\" article linked from this page:\n http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 03 Feb 2005 10:54:08 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tunning postgresql on linux (fedora core 3)" }, { "msg_contents": "Please CC the mailing list as well as replying to me, so that others can \nhelp too.\n\nDin Adrian wrote:\n> yes I have read this as well ...\n> \n> One question about this option:\n> fsync = true / false\n> a) I have Raid and UPS - it is safe to turn this off ... (' But be \n> very aware that any unexpected database shutdown will force you to \n> restore the database from your last backup.' - from my last backup if \n> the server goes down ??? why ? just at 'any unexpected database \n> shutdown' ? ....!!!!!!!!!!!)\n\nBecause fsync=true flushes transaction details to disk (the Write Ahead \nLog). That way if (say) the power-supply in your server fails you can \ncheck the WAL and compare it to the main database files to make sure \neverything is in a known state.\n\n> b) in docs say that after 7.2 seting this to false does'n turn off the \n> wall ...!? wich option does?\n\nThe docs don't say that, as far as I can see. It doesn't make sense to \nturn off the WAL.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 03 Feb 2005 13:56:50 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tunning postgresql on linux (fedora core 3)" }, { "msg_contents": "I'll repeat myself:\n\n\n\n\nPlease CC the mailing list as well as replying to me, so that others\ncan help too.\n\n\n\n\nDin Adrian wrote:\n> \n> On Thu, 03 Feb 2005 13:56:50 +0000, Richard Huxton <[email protected]> \n> wrote:\n> \n>> Please CC the mailing list as well as replying to me, so that others \n>> can help too.\n>>\n>>\n>>> b) in docs say that after 7.2 seting this to false does'n turn off \n>>> the wall ...!? wich option does?\n>>\n>>\n>> The docs don't say that, as far as I can see. It doesn't make sense \n>> to turn off the WAL.\n> \n> \n> hmm this is the doc about ...\n> \n> ' NOTE: Since 7.2, turning fsync off does NOT stop WAL. It does stop \n> checkpointing, however. This is a change in the notes that follow Turn \n> WAL off (fsync=false) only for a read-only database or one where the \n> database can be regenerated from external software. 
While RAID plus \n> UPSes can do a lot to protect your data, turning off fsync means that \n> you will be restoring from backup in the event of hardware or power \n> failure.'\n\nI don't know what this is, and you don't give a URL, but it DOES NOT \nappear to be in the manuals.\n\nYou should probably read the sections of the manuals regarding \"run-time \nconfiguration\" and \"write ahead logs\". The manuals are quite extensive, \nare available online at http://www.postgresql.org/ and also in most \ndistributions.\n\nThis is probably a good place to start.\nhttp://www.postgresql.org/docs/8.0/interactive/runtime-config.html#RUNTIME-CONFIG-WAL\n\n> If you turn it off you should have more speed ... !!!???\n\nBasically, as I said in my last email - fsync=true makes sure \ntransaction details are safely stored on disk. If you turn this off, the \ndatabase doesn't have to wait for the data to physically be written to \nthe disk. But, if power fails then data might be in OS or disk cache and \nso lost when you restart the machine.\n\nPlease CC the mailing list if you reply to this message.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 03 Feb 2005 14:52:04 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tunning postgresql on linux (fedora core 3)" }, { "msg_contents": "Din Adrian wrote:\n> sorry about cc ...\n> this is the site:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n> but I gues is not right ... hmm\n\nIt's not that it's incorrect, just that you should always use the \nmanuals as a starting point.\n\n> On Thu, 03 Feb 2005 14:52:04 +0000, Richard Huxton <[email protected]> \n> wrote:\n> \n>> I'll repeat myself:\n>>\n>>\n>>\n>>\n>> Please CC the mailing list as well as replying to me, so that others\n>> can help too.\n>>\n>>\n>>\n>>\n>> Din Adrian wrote:\n>>\n>>> On Thu, 03 Feb 2005 13:56:50 +0000, Richard Huxton \n>>> <[email protected]> wrote:\n>>>\n>>>> Please CC the mailing list as well as replying to me, so that \n>>>> others can help too.\n>>>>\n>>>>\n>>>>> b) in docs say that after 7.2 seting this to false does'n turn \n>>>>> off the wall ...!? wich option does?\n>>>>\n>>>>\n>>>>\n>>>> The docs don't say that, as far as I can see. It doesn't make sense \n>>>> to turn off the WAL.\n>>>\n>>> hmm this is the doc about ...\n>>> ' NOTE: Since 7.2, turning fsync off does NOT stop WAL. It does \n>>> stop checkpointing, however. This is a change in the notes that \n>>> follow Turn WAL off (fsync=false) only for a read-only database or \n>>> one where the database can be regenerated from external software. \n>>> While RAID plus UPSes can do a lot to protect your data, turning \n>>> off fsync means that you will be restoring from backup in the event \n>>> of hardware or power failure.'\n>>\n>>\n>> I don't know what this is, and you don't give a URL, but it DOES NOT \n>> appear to be in the manuals.\n>>\n>> You should probably read the sections of the manuals regarding \n>> \"run-time configuration\" and \"write ahead logs\". The manuals are \n>> quite extensive, are available online at http://www.postgresql.org/ \n>> and also in most distributions.\n>>\n>> This is probably a good place to start.\n>> http://www.postgresql.org/docs/8.0/interactive/runtime-config.html#RUNTIME-CONFIG-WAL \n>>\n>>\n>>> If you turn it off you should have more speed ... !!!???\n>>\n>>\n>> Basically, as I said in my last email - fsync=true makes sure \n>> transaction details are safely stored on disk. 
If you turn this off, \n>> the database doesn't have to wait for the data to physically be \n>> written to the disk. But, if power fails then data might be in OS or \n>> disk cache and so lost when you restart the machine.\n>>\n>> Please CC the mailing list if you reply to this message.\n>> -- \n>> Richard Huxton\n>> Archonet Ltd\n>>\n> \n> \n> \n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 03 Feb 2005 15:15:32 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Tunning postgresql on linux (fedora core 3)" }, { "msg_contents": "sorry about cc ...\nthis is the site:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\nbut I gues is not right ... hmm\n\nAdrian Din\n\n\nOn Thu, 03 Feb 2005 14:52:04 +0000, Richard Huxton <[email protected]> \nwrote:\n\n> I'll repeat myself:\n>\n>\n>\n>\n> Please CC the mailing list as well as replying to me, so that others\n> can help too.\n>\n>\n>\n>\n> Din Adrian wrote:\n>> On Thu, 03 Feb 2005 13:56:50 +0000, Richard Huxton <[email protected]> \n>> wrote:\n>>\n>>> Please CC the mailing list as well as replying to me, so that others \n>>> can help too.\n>>>\n>>>\n>>>> b) in docs say that after 7.2 seting this to false does'n turn off \n>>>> the wall ...!? wich option does?\n>>>\n>>>\n>>> The docs don't say that, as far as I can see. It doesn't make sense \n>>> to turn off the WAL.\n>> hmm this is the doc about ...\n>> ' NOTE: Since 7.2, turning fsync off does NOT stop WAL. It does stop \n>> checkpointing, however. This is a change in the notes that follow Turn \n>> WAL off (fsync=false) only for a read-only database or one where the \n>> database can be regenerated from external software. While RAID plus \n>> UPSes can do a lot to protect your data, turning off fsync means that \n>> you will be restoring from backup in the event of hardware or power \n>> failure.'\n>\n> I don't know what this is, and you don't give a URL, but it DOES NOT \n> appear to be in the manuals.\n>\n> You should probably read the sections of the manuals regarding \"run-time \n> configuration\" and \"write ahead logs\". The manuals are quite extensive, \n> are available online at http://www.postgresql.org/ and also in most \n> distributions.\n>\n> This is probably a good place to start.\n> http://www.postgresql.org/docs/8.0/interactive/runtime-config.html#RUNTIME-CONFIG-WAL\n>\n>> If you turn it off you should have more speed ... !!!???\n>\n> Basically, as I said in my last email - fsync=true makes sure \n> transaction details are safely stored on disk. If you turn this off, the \n> database doesn't have to wait for the data to physically be written to \n> the disk. But, if power fails then data might be in OS or disk cache and \n> so lost when you restart the machine.\n>\n> Please CC the mailing list if you reply to this message.\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n\n\n\n-- \nUsing Opera's revolutionary e-mail client: http://www.opera.com/m2/\n", "msg_date": "Thu, 03 Feb 2005 18:59:40 +0200", "msg_from": "Din Adrian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Tunning postgresql on linux (fedora core 3)" } ]
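One point worth adding to the fsync discussion above: much of the speed people hope to get from fsync=false on write-heavy loads can be had safely by batching statements into fewer transactions, because the WAL only has to be flushed once per COMMIT. A hedged sketch (the table import_log is made up for illustration):

    -- One WAL flush at COMMIT instead of one per auto-committed statement.
    BEGIN;
    INSERT INTO import_log (msg) VALUES ('row 1');
    INSERT INTO import_log (msg) VALUES ('row 2');
    -- ... many more rows ...
    COMMIT;

Unlike fsync=false, a crash in the middle of the batch simply rolls the whole transaction back instead of risking an inconsistent cluster.
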
[ { "msg_contents": "I'm hoping someone can shed some light on these results. The 'factor' \ncompares the ratios of cost to actual for different plans. Perhaps \nnested loops should be given a discount in the planner? The estimates \nseem to be out by one and a half orders of magnitude. :(\n\n========== QUERY ==========\n\nSELECT sum(L.Extended)\nFROM sord H\nJOIN sordln L USING (OrderNo)\n[ WHERE H.OrderDate between '2003-01-01' and '2003-03-16' ]\n[ WHERE H.OrderDate between '2003-01-01' and '2003-09-02' ]\n\n========== SUMMARY ==========\n\nJoin Cost Cache Factor Disk Factor\n------------------------------------------------------------\n 10% ROWS\nHash 40085 4.9s 1.0 12.8s 1.0\nMerge 63338 4.1s 1.9 23.1s 0.9\nHash Idx 65386 5.5s 1.5 30.7s 0.7\nNest 257108 0.8s 39.3 2.7s 30.4\n 33% ROWS\nHash 43646 5.8s 1.0 13.6s 1.0\nMerge 67153 6.0s 1.5 30.7s 0.7\nHash Idx 68946 6.5s 1.4\nNest 868642 2.8s 41.2 10.2s 26.5\n ALL ROWS\nHash 53458 8.9s 1.0 14.3s 1.0\nMerge 76156 9.4s 1.3 35.2s 0.6\nNest 2594934 9.2s 47.0 33.8s 20.5\n\n========== 10% CACHE ROWS ==========\n\nQUERY PLAN (Hash Join on <10% cache rows, indexed + sequential)\nAggregate (cost=40085.14..40085.14 rows=1 width=8) (actual \ntime=4907.000..4907.000 rows=1 loops=1)\n -> Hash Join (cost=145.11..39814.32 rows=108324 width=8) (actual \ntime=3844.000..4735.000 rows=96183 loops=1)\n Hash Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Seq Scan on sordln l (cost=0.00..33118.98 rows=1093398 \nwidth=12) (actual time=0.000..2313.000 rows=1093398 loops=1)\n -> Hash (cost=138.48..138.48 rows=2655 width=4) (actual \ntime=16.000..16.000 rows=0 loops=1)\n -> Index Scan using sord_date on sord h \n(cost=0.00..138.48 rows=2655 width=4) (actual time=0.000..0.000 \nrows=2646 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-03-16'::date))\nTotal runtime: 4907.000 ms\n\nQUERY PLAN (Merge Join on <10% cache rows, indexed only)\nAggregate (cost=63338.43..63338.43 rows=1 width=8) (actual \ntime=4141.000..4141.000 rows=1 loops=1)\n -> Merge Join (cost=289.47..63067.62 rows=108324 width=8) (actual \ntime=3000.000..3896.000 rows=96183 loops=1)\n Merge Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Index Scan using sordln_pkey on sordln l \n(cost=0.00..58419.79 rows=1093398 width=12) (actual time=0.000..2058.000 \nrows=737827 loops=1)\n -> Sort (cost=289.47..296.11 rows=2655 width=4) (actual \ntime=16.000..127.000 rows=96174 loops=1)\n Sort Key: h.orderno\n -> Index Scan using sord_date on sord h \n(cost=0.00..138.48 rows=2655 width=4) (actual time=0.000..0.000 \nrows=2646 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-03-16'::date))\nTotal runtime: 4141.000 ms\n\nQUERY PLAN (Hash Join on <10% cache rows, indexed only)\nAggregate (cost=65385.95..65385.95 rows=1 width=8) (actual \ntime=5516.000..5516.000 rows=1 loops=1)\n -> Hash Join (cost=145.11..65115.13 rows=108324 width=8) (actual \ntime=3031.000..5376.000 rows=96183 loops=1)\n Hash Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Index Scan using sordln_pkey on sordln l \n(cost=0.00..58419.79 rows=1093398 width=12) (actual time=0.000..3091.000 \nrows=1093398 loops=1)\n -> Hash (cost=138.48..138.48 rows=2655 width=4) (actual \ntime=0.000..0.000 rows=0 loops=1)\n -> Index Scan using sord_date on sord h \n(cost=0.00..138.48 rows=2655 width=4) (actual time=0.000..0.000 \nrows=2646 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-03-16'::date))\nTotal runtime: 5516.000 ms\n\nQUERY PLAN (Nested 
Loop on <10% cache rows, indexed only)\nAggregate (cost=257108.11..257108.11 rows=1 width=8) (actual \ntime=781.000..781.000 rows=1 loops=1)\n -> Nested Loop (cost=0.00..256837.30 rows=108324 width=8) (actual \ntime=0.000..610.000 rows=96183 loops=1)\n -> Index Scan using sord_date on sord h (cost=0.00..138.48 \nrows=2655 width=4) (actual time=0.000..0.000 rows=2646 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-03-16'::date))\n -> Index Scan using sordln_pkey on sordln l (cost=0.00..96.01 \nrows=54 width=12) (actual time=0.000..0.118 rows=36 loops=2646)\n Index Cond: (\"outer\".orderno = l.orderno)\nTotal runtime: 781.000 ms\n\n========== 33% CACHE ROWS ==========\n\nQUERY PLAN (Hash Join on >33% cache rows, indexed + sequential)\nAggregate (cost=43645.62..43645.62 rows=1 width=8) (actual \ntime=5828.000..5828.000 rows=1 loops=1)\n -> Hash Join (cost=484.94..42730.67 rows=365976 width=8) (actual \ntime=2391.000..5078.000 rows=352856 loops=1)\n Hash Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Seq Scan on sordln l (cost=0.00..33118.98 rows=1093398 \nwidth=12) (actual time=0.000..2234.000 rows=1093398 loops=1)\n -> Hash (cost=462.52..462.52 rows=8970 width=4) (actual \ntime=47.000..47.000 rows=0 loops=1)\n -> Index Scan using sord_date on sord h \n(cost=0.00..462.52 rows=8970 width=4) (actual time=0.000..0.000 \nrows=8934 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-09-02'::date))\nTotal runtime: 5828.000 ms\n\nQUERY PLAN (Merge Join on >33% cache rows, indexed only)\nAggregate (cost=67153.04..67153.04 rows=1 width=8) (actual \ntime=5985.000..5985.000 rows=1 loops=1)\n -> Merge Join (cost=0.00..66238.09 rows=365976 width=8) (actual \ntime=2953.000..5281.000 rows=352856 loops=1)\n Merge Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Index Scan using sord_pkey on sord h (cost=0.00..1402.78 \nrows=8970 width=4) (actual time=31.000..46.000 rows=8934 loops=1)\n Filter: ((orderdate >= '2003-01-01'::date) AND (orderdate \n<= '2003-09-02'::date))\n -> Index Scan using sordln_pkey on sordln l \n(cost=0.00..58419.79 rows=1093398 width=12) (actual time=0.000..2485.000 \nrows=994500 loops=1)\nTotal runtime: 5985.000 ms\n\nQUERY PLAN (Hash Join on >33% cache rows, indexed only)\nAggregate (cost=68946.43..68946.43 rows=1 width=8) (actual \ntime=6531.000..6531.000 rows=1 loops=1)\n -> Hash Join (cost=484.94..68031.48 rows=365976 width=8) (actual \ntime=3031.000..5765.000 rows=352856 loops=1)\n Hash Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Index Scan using sordln_pkey on sordln l \n(cost=0.00..58419.79 rows=1093398 width=12) (actual time=0.000..3075.000 \nrows=1093398 loops=1)\n -> Hash (cost=462.52..462.52 rows=8970 width=4) (actual \ntime=46.000..46.000 rows=0 loops=1)\n -> Index Scan using sord_date on sord h \n(cost=0.00..462.52 rows=8970 width=4) (actual time=0.000..16.000 \nrows=8934 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-09-02'::date))\nTotal runtime: 6531.000 ms\n\nQUERY PLAN (Nested Loop on >33% cache rows, indexed only)\nAggregate (cost=868642.40..868642.40 rows=1 width=8) (actual \ntime=2828.000..2828.000 rows=1 loops=1)\n -> Nested Loop (cost=0.00..867727.46 rows=365976 width=8) (actual \ntime=0.000..2171.000 rows=352856 loops=1)\n -> Index Scan using sord_date on sord h (cost=0.00..462.52 \nrows=8970 width=4) (actual time=0.000..0.000 rows=8934 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-09-02'::date))\n -> Index 
Scan using sordln_pkey on sordln l (cost=0.00..96.01 \nrows=54 width=12) (actual time=0.012..0.125 rows=39 loops=8934)\n Index Cond: (\"outer\".orderno = l.orderno)\nTotal runtime: 2828.000 ms\n\n========== ALL CACHE ROWS ==========\n\nQUERY PLAN (Hash Join on all cache rows, sequential only)\nAggregate (cost=53458.44..53458.44 rows=1 width=8) (actual \ntime=8906.000..8906.000 rows=1 loops=1)\n -> Hash Join (cost=1204.99..50724.94 rows=1093398 width=8) (actual \ntime=141.000..7089.000 rows=1093397 loops=1)\n Hash Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Seq Scan on sordln l (cost=0.00..33118.98 rows=1093398 \nwidth=12) (actual time=0.000..2629.000 rows=1093398 loops=1)\n -> Hash (cost=1137.99..1137.99 rows=26799 width=4) (actual \ntime=141.000..141.000 rows=0 loops=1)\n -> Seq Scan on sord h (cost=0.00..1137.99 rows=26799 \nwidth=4) (actual time=0.000..79.000 rows=26799 loops=1)\nTotal runtime: 8906.000 ms\n\nQUERY PLAN (Merge Join on all cache rows, indexed only)\nAggregate (cost=76156.45..76156.45 rows=1 width=8) (actual \ntime=9422.000..9422.000 rows=1 loops=1)\n -> Merge Join (cost=0.00..73422.95 rows=1093398 width=8) (actual \ntime=0.000..6835.000 rows=1093397 loops=1)\n Merge Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Index Scan using sord_pkey on sord h (cost=0.00..1268.79 \nrows=26799 width=4) (actual time=0.000..94.000 rows=26799 loops=1)\n -> Index Scan using sordln_pkey on sordln l \n(cost=0.00..58419.79 rows=1093398 width=12) (actual time=0.000..2773.000 \nrows=1093398 loops=1)\nTotal runtime: 9422.000 ms\n\nQUERY PLAN (Nested Loop on all cache rows, Sequential + indexed)\nAggregate (cost=2594934.26..2594934.26 rows=1 width=8) (actual \ntime=9234.000..9234.000 rows=1 loops=1)\n -> Nested Loop (cost=0.00..2592200.76 rows=1093398 width=8) (actual \ntime=0.000..6966.000 rows=1093397 loops=1)\n -> Seq Scan on sord h (cost=0.00..1137.99 rows=26799 width=4) \n(actual time=0.000..110.000 rows=26799 loops=1)\n -> Index Scan using sordln_pkey on sordln l (cost=0.00..96.01 \nrows=54 width=12) (actual time=0.011..0.104 rows=41 loops=26799)\n Index Cond: (\"outer\".orderno = l.orderno)\nTotal runtime: 9234.000 ms\n\n========== 10% DISK ROWS ==========\n\nQUERY PLAN (Hash Join on <10% disk rows, indexed + sequential)\nAggregate (cost=40085.14..40085.14 rows=1 width=8) (actual \ntime=12813.000..12813.000 rows=1 loops=1)\n -> Hash Join (cost=145.11..39814.32 rows=108324 width=8) (actual \ntime=11188.000..12592.000 rows=96183 loops=1)\n Hash Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Seq Scan on sordln l (cost=0.00..33118.98 rows=1093398 \nwidth=12) (actual time=31.000..9985.000 rows=1093398 loops=1)\n -> Hash (cost=138.48..138.48 rows=2655 width=4) (actual \ntime=172.000..172.000 rows=0 loops=1)\n -> Index Scan using sord_date on sord h \n(cost=0.00..138.48 rows=2655 width=4) (actual time=47.000..156.000 \nrows=2646 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-03-16'::date))\nTotal runtime: 12813.000 ms\n\nQUERY PLAN (Merge Join on <10% disk rows, indexed only)\nAggregate (cost=63338.43..63338.43 rows=1 width=8) (actual \ntime=23078.000..23078.000 rows=1 loops=1)\n -> Merge Join (cost=289.47..63067.62 rows=108324 width=8) (actual \ntime=20375.000..22874.000 rows=96183 loops=1)\n Merge Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Index Scan using sordln_pkey on sordln l \n(cost=0.00..58419.79 rows=1093398 width=12) (actual \ntime=63.000..20657.000 rows=737827 loops=1)\n -> Sort (cost=289.47..296.11 rows=2655 width=4) 
(actual \ntime=171.000..297.000 rows=96174 loops=1)\n Sort Key: h.orderno\n -> Index Scan using sord_date on sord h \n(cost=0.00..138.48 rows=2655 width=4) (actual time=31.000..171.000 \nrows=2646 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-03-16'::date))\nTotal runtime: 23078.000 ms\n\nQUERY PLAN (Hash Join on <10% disk rows, indexed only)\nAggregate (cost=65385.95..65385.95 rows=1 width=8) (actual \ntime=30734.000..30734.000 rows=1 loops=1)\n -> Hash Join (cost=145.11..65115.13 rows=108324 width=8) (actual \ntime=19546.000..30593.000 rows=96183 loops=1)\n Hash Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Index Scan using sordln_pkey on sordln l \n(cost=0.00..58419.79 rows=1093398 width=12) (actual \ntime=47.000..27711.000 rows=1093398 loops=1)\n -> Hash (cost=138.48..138.48 rows=2655 width=4) (actual \ntime=187.000..187.000 rows=0 loops=1)\n -> Index Scan using sord_date on sord h \n(cost=0.00..138.48 rows=2655 width=4) (actual time=46.000..171.000 \nrows=2646 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-03-16'::date))\nTotal runtime: 30734.000 ms\n\nQUERY PLAN (Nested Loop on <10% disk rows, indexed only)\nAggregate (cost=257108.11..257108.11 rows=1 width=8) (actual \ntime=2704.000..2704.000 rows=1 loops=1)\n -> Nested Loop (cost=0.00..256837.30 rows=108324 width=8) (actual \ntime=94.000..2529.000 rows=96183 loops=1)\n -> Index Scan using sord_date on sord h (cost=0.00..138.48 \nrows=2655 width=4) (actual time=32.000..93.000 rows=2646 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-03-16'::date))\n -> Index Scan using sordln_pkey on sordln l (cost=0.00..96.01 \nrows=54 width=12) (actual time=0.041..0.814 rows=36 loops=2646)\n Index Cond: (\"outer\".orderno = l.orderno)\nTotal runtime: 2704.000 ms\n\n========== 33% DISK ROWS ==========\n\nQUERY PLAN (Hash Join on >33% disk rows, indexed + sequential)\nAggregate (cost=43645.62..43645.62 rows=1 width=8) (actual \ntime=13562.000..13562.000 rows=1 loops=1)\n -> Hash Join (cost=484.94..42730.67 rows=365976 width=8) (actual \ntime=8687.000..12985.000 rows=352856 loops=1)\n Hash Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Seq Scan on sordln l (cost=0.00..33118.98 rows=1093398 \nwidth=12) (actual time=31.000..10106.000 rows=1093398 loops=1)\n -> Hash (cost=462.52..462.52 rows=8970 width=4) (actual \ntime=375.000..375.000 rows=0 loops=1)\n -> Index Scan using sord_date on sord h \n(cost=0.00..462.52 rows=8970 width=4) (actual time=47.000..375.000 \nrows=8934 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-09-02'::date))\nTotal runtime: 13562.000 ms\n\nQUERY PLAN (Merge Join on >33% disk rows, indexed only)\nAggregate (cost=67153.04..67153.04 rows=1 width=8) (actual \ntime=30672.000..30672.000 rows=1 loops=1)\n -> Merge Join (cost=0.00..66238.09 rows=365976 width=8) (actual \ntime=20297.000..29823.000 rows=352856 loops=1)\n Merge Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Index Scan using sord_pkey on sord h (cost=0.00..1402.78 \nrows=8970 width=4) (actual time=578.000..670.000 rows=8934 loops=1)\n Filter: ((orderdate >= '2003-01-01'::date) AND (orderdate \n<= '2003-09-02'::date))\n -> Index Scan using sordln_pkey on sordln l \n(cost=0.00..58419.79 rows=1093398 width=12) (actual \ntime=47.000..26509.000 rows=994500 loops=1)\nTotal runtime: 30672.000 ms\n\nQUERY PLAN (Nested Loop on >33% disk rows, indexed only)\nAggregate (cost=868642.40..868642.40 rows=1 width=8) (actual 
\ntime=10235.000..10235.000 rows=1 loops=1)\n -> Nested Loop (cost=0.00..867727.46 rows=365976 width=8) (actual \ntime=78.000..9496.000 rows=352856 loops=1)\n -> Index Scan using sord_date on sord h (cost=0.00..462.52 \nrows=8970 width=4) (actual time=32.000..126.000 rows=8934 loops=1)\n Index Cond: ((orderdate >= '2003-01-01'::date) AND \n(orderdate <= '2003-09-02'::date))\n -> Index Scan using sordln_pkey on sordln l (cost=0.00..96.01 \nrows=54 width=12) (actual time=0.035..0.912 rows=39 loops=8934)\n Index Cond: (\"outer\".orderno = l.orderno)\nTotal runtime: 10235.000 ms\n\n========== ALL DISK ROWS ==========\n\nQUERY PLAN (Hash Join on all disk rows, sequential only)\nAggregate (cost=53458.44..53458.44 rows=1 width=8) (actual \ntime=14281.000..14281.000 rows=1 loops=1)\n -> Hash Join (cost=1204.99..50724.94 rows=1093398 width=8) (actual \ntime=719.000..12096.000 rows=1093397 loops=1)\n Hash Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Seq Scan on sordln l (cost=0.00..33118.98 rows=1093398 \nwidth=12) (actual time=16.000..7389.000 rows=1093398 loops=1)\n -> Hash (cost=1137.99..1137.99 rows=26799 width=4) (actual \ntime=703.000..703.000 rows=0 loops=1)\n -> Seq Scan on sord h (cost=0.00..1137.99 rows=26799 \nwidth=4) (actual time=0.000..657.000 rows=26799 loops=1)\nTotal runtime: 14281.000 ms\n\nQUERY PLAN (Merge Join on all disk rows, indexed only)\nAggregate (cost=76156.45..76156.45 rows=1 width=8) (actual \ntime=35235.000..35235.000 rows=1 loops=1)\n -> Merge Join (cost=0.00..73422.95 rows=1093398 width=8) (actual \ntime=94.000..33050.000 rows=1093397 loops=1)\n Merge Cond: (\"outer\".orderno = \"inner\".orderno)\n -> Index Scan using sord_pkey on sord h (cost=0.00..1268.79 \nrows=26799 width=4) (actual time=47.000..141.000 rows=26799 loops=1)\n -> Index Scan using sordln_pkey on sordln l \n(cost=0.00..58419.79 rows=1093398 width=12) (actual \ntime=47.000..28250.000 rows=1093398 loops=1)\nTotal runtime: 35235.000 ms\n\nQUERY PLAN (Nested Loop on all disk rows, indexed + sequential)\nAggregate (cost=2594934.26..2594934.26 rows=1 width=8) (actual \ntime=33797.000..33797.000 rows=1 loops=1)\n -> Nested Loop (cost=0.00..2592200.76 rows=1093398 width=8) (actual \ntime=63.000..31744.000 rows=1093397 loops=1)\n -> Seq Scan on sord h (cost=0.00..1137.99 rows=26799 width=4) \n(actual time=16.000..79.000 rows=26799 loops=1)\n -> Index Scan using sordln_pkey on sordln l (cost=0.00..96.01 \nrows=54 width=12) (actual time=0.039..1.041 rows=41 loops=26799)\n Index Cond: (\"outer\".orderno = l.orderno)\nTotal runtime: 33797.000 ms\n\n========== ENVIRONMENT ==========\n\nAthlon XP2500, 768MB, 80GB ATA HDD\nPostgreSQL 8.0rc2 on Win2k\nshared_buffers = 1000\nwork_mem = 32768\nrandom_page_cost = 2\nsord = 27000 rows, 7MB, pkey = int4, Stats = 100 on OrderDate\nsordln = 1 million rows, 173MB, pkey = int4 + int2\n\n", "msg_date": "Thu, 03 Feb 2005 21:10:37 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Planner really hates nested loops" }, { "msg_contents": "David Brown <[email protected]> writes:\n> I'm hoping someone can shed some light on these results.\n\nNot without a lot more detail on how you *got* the results. What\nexactly did you do to force the various plan choices? (I see some\nridiculous choices of indexscans, for instance, suggesting improper use\nof enable_seqscan in some cases.) And what's the \"cache rows\" and \"disk\nrows\" stuff, and how do you know that what you were measuring is\nactually what you think it is? 
I have zero confidence in\nWindows-atop-ATA as a platform for measuring disk-related behaviors,\nbecause I don't think you can control or even know what caching is\ngoing on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Feb 2005 11:25:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner really hates nested loops " }, { "msg_contents": "Tom Lane wrote:\n\n>What exactly did you do to force the various plan choices? (I see some\n>ridiculous choices of indexscans, for instance, suggesting improper use\n>of enable_seqscan in some cases.)\n>\nExcept for forcing a hash with indexes (to show that increased use of \nindexes is not necessarily good), the \"ridiculous choices of indexscans\" \nare straight from the planner, i.e. I did not use enable_seqscan. \nObviously, the alternative join methods were obtained by disabling hash \njoins and merge joins.\n\n> And what's the \"cache rows\" and \"disk\n>rows\" stuff, and how do you know that what you were measuring is\n>actually what you think it is? I have zero confidence in\n>Windows-atop-ATA as a platform for measuring disk-related behaviors,\n>because I don't think you can control or even know what caching is\n>going on.\n> \n>\nThe terms are just abbreviated headings to make it easier to check what \nyou're looking at. \"Cache\" refers to repeated runs without disk I/O. \n\"Disk\" refers to a completely initialized system with no PostgreSQL data \nin the OS cache (i.e. after a reboot - this is Benchmarking 101). All \nresults were verified with *at least* two runs at different times. This \nis not to say the \"disk\" results are an accurate or absolute benchmark, \nbut they're useful as a reference when looking at the cached results.\n\nIn any case, I can get the same \"cached\" results by increasing buffers \nto take up most of the memory and thereby make them the defacto cache.\n\nWith respect, could we please focus on the point of this thread? I've \nspent a great deal of time experimenting with PostgreSQL over the last \ncouple of months, including reading every known web page regarding \ntuning and following every post in this list in that period. I'm \nconfident that my results here are what most people will experience when \ntrying PostgreSQL, and I'd like to help in a constructive way.\n", "msg_date": "Fri, 04 Feb 2005 09:22:39 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner really hates nested loops" } ]
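For readers wondering how the per-join-method timings above were obtained: David mentions disabling hash and merge joins, i.e. the usual technique of switching off the competing join methods for the session and re-running EXPLAIN ANALYZE. A sketch using the query from this thread:

    -- session-local settings only; RESET afterwards
    SET enable_hashjoin = off;
    SET enable_mergejoin = off;   -- the planner now falls back to a nested loop
    EXPLAIN ANALYZE
    SELECT sum(L.Extended)
    FROM sord H
    JOIN sordln L USING (OrderNo)
    WHERE H.OrderDate BETWEEN '2003-01-01' AND '2003-03-16';
    RESET enable_hashjoin;
    RESET enable_mergejoin;
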
[ { "msg_contents": "> > I'm hoping someone can shed some light on these results.\n> \n> Not without a lot more detail on how you *got* the results. \n> What exactly did you do to force the various plan choices? \n> (I see some ridiculous choices of indexscans, for instance, \n> suggesting improper use of enable_seqscan in some cases.) \n> And what's the \"cache rows\" and \"disk rows\" stuff, and how do \n> you know that what you were measuring is actually what you \n> think it is? I have zero confidence in Windows-atop-ATA as a \n> platform for measuring disk-related behaviors, because I \n> don't think you can control or even know what caching is going on.\n\nYou can control the writeback-cache from Device Manager->(the\ndisk)->Policies. And if that is turned off, fsync definitly should write\nthrough, just as on *nix. (write-cache is on by default, no surprise)\n\nAFAIK, you can't control what is cached for reading.\n\n//Magnus\n", "msg_date": "Thu, 3 Feb 2005 17:41:37 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner really hates nested loops " }, { "msg_contents": "Magnus Hagander wrote:\n> > > I'm hoping someone can shed some light on these results.\n> > \n> > Not without a lot more detail on how you *got* the results. \n> > What exactly did you do to force the various plan choices? \n> > (I see some ridiculous choices of indexscans, for instance, \n> > suggesting improper use of enable_seqscan in some cases.) \n> > And what's the \"cache rows\" and \"disk rows\" stuff, and how do \n> > you know that what you were measuring is actually what you \n> > think it is? I have zero confidence in Windows-atop-ATA as a \n> > platform for measuring disk-related behaviors, because I \n> > don't think you can control or even know what caching is going on.\n> \n> You can control the writeback-cache from Device Manager->(the\n> disk)->Policies. And if that is turned off, fsync definitly should write\n> through, just as on *nix. (write-cache is on by default, no surprise)\n> \n> AFAIK, you can't control what is cached for reading.\n\nAre you saying that fsync() doesn't write to the platters by default on\nWin32?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Feb 2005 23:39:44 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner really hates nested loops" } ]
[ { "msg_contents": "Magnus wrote:\n> > > I'm hoping someone can shed some light on these results.\n> >\n> > Not without a lot more detail on how you *got* the results.\n> > What exactly did you do to force the various plan choices?\n> > (I see some ridiculous choices of indexscans, for instance,\n> > suggesting improper use of enable_seqscan in some cases.)\n> > And what's the \"cache rows\" and \"disk rows\" stuff, and how do\n> > you know that what you were measuring is actually what you\n> > think it is? I have zero confidence in Windows-atop-ATA as a\n> > platform for measuring disk-related behaviors, because I\n> > don't think you can control or even know what caching is going on.\n> \n> You can control the writeback-cache from Device Manager->(the\n> disk)->Policies. And if that is turned off, fsync definitly should\nwrite\n> through, just as on *nix. (write-cache is on by default, no surprise)\n\nThere is some truth to what Tom is saying, we just can't seem to get our\ndevelopment server to *quit* syncing with fsync=on, even though we have\nthe Promise raid controller (yeah, I know) configured to cache writes.\n\nIOW, with certain configurations I just can't seem to delegate sync\nresponsibility to the raid controller. It is a matter of record that\ncertain crappy drives lie about caching but, IMO this is more of a\ndriver issue than a O/S issue. (aside: I have become quite a believer\nin Western Digital parts, lately!)\n\nMerlin\n", "msg_date": "Thu, 3 Feb 2005 12:10:12 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner really hates nested loops " } ]
[ { "msg_contents": "> Alexandre Leclerc wrote:\n> Sorry for jumping in on this thread so late -- I haven't been able to\n> select * from crosstab(\n> 'select product_id, department_id, req_time\n> from product_department_time order by 1',\n> 'select ''A'' union all select ''C'' union all select ''D'''\n> ) as (product_id int, a int, c int, d int);\n\nI forgot you could do this...This would certainly be easier than parsing\narray values returned from array_accum. It will probably be faster as\nwell...but with the array approach the query would not have to be\nmodified each time a new department was added. That said, a crosstab\nbased query could be built easily enough from a department query on the\nclient and then you have the best of both worlds.\n\nMerlin\n\n\n", "msg_date": "Fri, 4 Feb 2005 12:48:43 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Flattening a kind of 'dynamic' table" }, { "msg_contents": "On Fri, 4 Feb 2005 12:48:43 -0500, Merlin Moncure\n<[email protected]> wrote:\n> > Alexandre Leclerc wrote:\n> > Sorry for jumping in on this thread so late -- I haven't been able to\n> > select * from crosstab(\n> > 'select product_id, department_id, req_time\n> > from product_department_time order by 1',\n> > 'select ''A'' union all select ''C'' union all select ''D'''\n> > ) as (product_id int, a int, c int, d int);\n> \n> I forgot you could do this...This would certainly be easier than parsing\n> array values returned from array_accum. It will probably be faster as\n> well...but with the array approach the query would not have to be\n> modified each time a new department was added. That said, a crosstab\n> based query could be built easily enough from a department query on the\n> client and then you have the best of both worlds.\n\nHello Merlin,\n\nWell, I'm glad because with all this i've learn a lot of new things.\n\nFinally, the crosstab solution is very fast and is simple for me to\nuse. I get my super-bug-jumbo-dbkiller-query run in about 210ms\n(seeking many tables and so). I had a score of 2480ms before. (This is\na much more complex query; the cross table thing had to be included in\nthis one.) This is much better! :)\n\nIn all, thanks for your help. Regards.\n\n-- \nAlexandre Leclerc\n", "msg_date": "Fri, 4 Feb 2005 15:08:56 -0500", "msg_from": "Alexandre Leclerc <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Flattening a kind of 'dynamic' table" } ]
[ { "msg_contents": "Hi all,\n\n I am using a (MFC based) recordset to read in 25M\nrecords of a table. I use a cursor to prevent complete\nloading of all records. However, currently performance\nis limited by the number of times the odbc driver\nloads in the rows. \n\n The tuple cache is set to 5M. I am unable to\nincrease it beyond this owing to \"out of memory for\ntuple cache\". This is not because of my RAM because\nthere is atleast 1G of free RAM. Any ideas? This is on\nWindows, Postgresql 8.0.\n\n Please make sure to reply-all as I am not\nsubscribed to the list.\n\n-\nSanketh\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Fri, 4 Feb 2005 11:29:36 -0800 (PST)", "msg_from": "Sanketh Indarapu <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres odbc performance on windows" } ]
[ { "msg_contents": "\nHi,\n\nhere is a query which produces over 1G temp file in pgsql_tmp. This\nis on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB\nsort_mem and 320MB shared_mem.\n\nBelow is the query and results for EXPLAIN and EXPLAIN ANALYZE. All\ntables have been analyzed before.\n\nCan some please explain why the temp file is so huge? I understand\nthere are a lot of rows. All relevant indices seem to be used.\n\nThanks in advance,\n\nDirk\n\nEXPLAIN \nSELECT DISTINCT ON (ft.val_9, ft.created, ft.flatid) ft.docstart, ft.flatobj, bi.oid, bi.en\nFROM bi, en, df AS ft, es\nWHERE bi.rc=130170467\nAND bi.en=ft.en\nAND bi.co=117305223\nAND bi.hide=FALSE\nAND ft.en=en.oid\nAND es.en=bi.en\nAND es.co=bi.co\nAND es.spec=122293729\nAND (ft.val_2='DG' OR ft.val_2='SK')\nAND ft.docstart=1\nORDER BY ft.val_9 ASC, ft.created DESC\nLIMIT 1000 OFFSET 0;\n\n Limit (cost=8346.75..8346.78 rows=3 width=1361)\n -> Unique (cost=8346.75..8346.78 rows=3 width=1361)\n -> Sort (cost=8346.75..8346.76 rows=3 width=1361)\n Sort Key: ft.val_9, ft.created, ft.flatid\n -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361)\n -> Nested Loop (cost=0.00..5757.17 rows=17 width=51)\n -> Nested Loop (cost=0.00..5606.39 rows=30 width=42)\n -> Index Scan using es_sc_index on es (cost=0.00..847.71 rows=301 width=8)\n Index Cond: ((spec = 122293729) AND (co = 117305223::oid))\n -> Index Scan using bi_env_index on bi (cost=0.00..15.80 rows=1 width=42)\n Index Cond: (\"outer\".en = bi.en)\n Filter: ((rc = 130170467::oid) AND (co = 117305223::oid) AND (hide = false))\n -> Index Scan using en_oid_index on en (cost=0.00..5.01 rows=1 width=9)\n Index Cond: (\"outer\".en = en.oid)\n -> Index Scan using df_en on df ft (cost=0.00..151.71 rows=49 width=1322)\n Index Cond: (\"outer\".en = ft.en)\n Filter: (((val_2 = 'DG'::text) OR (val_2 = 'SK'::text)) AND (docstart = 1))\n(17 rows)\n\n\n--------------\n\nEXPLAIN ANALYZE gives:\n\n\n Limit (cost=8346.75..8346.78 rows=3 width=1361) (actual time=75357.465..75679.964 rows=1000 loops=1)\n -> Unique (cost=8346.75..8346.78 rows=3 width=1361) (actual time=75357.459..75675.371 rows=1000 loops=1)\n -> Sort (cost=8346.75..8346.76 rows=3 width=1361) (actual time=75357.448..75499.263 rows=22439 loops=1)\n Sort Key: ft.val_9, ft.created, ft.flatid\n -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361) (actual time=34.104..18016.005 rows=703677 loops=1)\n -> Nested Loop (cost=0.00..5757.17 rows=17 width=51) (actual time=0.467..3216.342 rows=48563 loops=1)\n -> Nested Loop (cost=0.00..5606.39 rows=30 width=42) (actual time=0.381..1677.014 rows=48563 loops=1)\n -> Index Scan using es_sc_index on es (cost=0.00..847.71 rows=301 width=8) (actual time=0.184..46.519 rows=5863 loops=1)\n Index Cond: ((spec = 122293729) AND (co = 117305223::oid))\n -> Index Scan using bi_env_index on bi (cost=0.00..15.80 rows=1 width=42) (actual time=0.052..0.218 rows=8 loops=5863)\n Index Cond: (\"outer\".en = bi.en)\n Filter: ((rc = 130170467::oid) AND (co = 117305223::oid) AND (hide = false))\n -> Index Scan using en_oid_index on en (cost=0.00..5.01 rows=1 width=9) (actual time=0.015..0.019 rows=1 loops=48563)\n Index Cond: (\"outer\".en = en.oid)\n -> Index Scan using df_en on df ft (cost=0.00..151.71 rows=49 width=1322) (actual time=0.038..0.148 rows=14 loops=48563)\n Index Cond: (\"outer\".en = ft.en)\n Filter: (((val_2 = 'DG'::text) OR (val_2 = 'SK'::text)) AND (docstart = 1))\n Total runtime: 81782.052 ms\n(18 rows)\n\n", "msg_date": "Sat, 5 Feb 2005 18:25:52 +0100 (CET)", "msg_from": "Dirk 
Lutzebaeck <[email protected]>", "msg_from_op": true, "msg_subject": "query produces 1 GB temp file" }, { "msg_contents": "On 2/5/05, Dirk Lutzebaeck <[email protected]> wrote:\n> here is a query which produces over 1G temp file in pgsql_tmp. This\n> is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB\n> sort_mem and 320MB shared_mem.\n>\n> Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All\n> tables have been analyzed before.\n>\n> Can some please explain why the temp file is so huge? I understand\n> there are a lot of rows. All relevant indices seem to be used.\n\nhow much memory have you set aside for sorting? also, this query will\nlikely run better in a more recent version of postgresql if thats\npossible.\n\nmerlin\n", "msg_date": "Fri, 27 Oct 2006 09:49:34 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "Merlin Moncure wrote:\n> On 2/5/05, Dirk Lutzebaeck <[email protected]> wrote:\n<snip>\nWas the original message actually from 2/5/05?\n", "msg_date": "Fri, 27 Oct 2006 07:33:57 -0700", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "He is probably using IPOT (IP Other Time) :\nhttp://kadreg.free.fr/ipot/ :-) (sorry only french page )\n\n\nOn Oct 27, 2006, at 16:33, Bricklen Anderson wrote:\n\n> Merlin Moncure wrote:\n>> On 2/5/05, Dirk Lutzebaeck <[email protected]> wrote:\n> <snip>\n> Was the original message actually from 2/5/05?\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that \n> your\n> message can get through to the mailing list cleanly\n\n\nHe is probably using IPOT (IP Other Time) : http://kadreg.free.fr/ipot/  :-) (sorry only french page ) On Oct 27, 2006, at 16:33, Bricklen Anderson wrote:Merlin Moncure wrote: On 2/5/05, Dirk Lutzebaeck <[email protected]> wrote: <snip>Was the original message actually from 2/5/05?---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate      subscribe-nomail command to [email protected] so that your      message can get through to the mailing list cleanly", "msg_date": "Fri, 27 Oct 2006 16:38:55 +0200", "msg_from": "Thomas Burdairon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "While I can't explain why PostgreSQL would use that memory, I \nrecommend looking into tweaking the work_mem parameter. This setting \nspecifies how much memory PostgreSQL on certain temporary data \nstructures (hash tables, sort vectors) until it starts using \ntemporary files. Quoting the docs:\n\n> work_mem (integer)\n> Specifies the amount of memory to be used by internal sort \n> operations and hash tables before switching to temporary disk \n> files. The value is specified in kilobytes, and defaults to 1024 \n> kilobytes (1 MB). Note that for a complex query, several sort or \n> hash operations might be running in parallel; each one will be \n> allowed to use as much memory as this value specifies before it \n> starts to put data into temporary files. Also, several running \n> sessions could be doing such operations concurrently. 
So the total \n> memory used could be many times the value of work_mem; it is \n> necessary to keep this fact in mind when choosing the value. Sort \n> operations are used for ORDER BY, DISTINCT, and merge joins. Hash \n> tables are used in hash joins, hash-based aggregation, and hash- \n> based processing of IN subqueries.\n\nAlexander.\n\nOn Feb 5, 2005, at 18:25 , Dirk Lutzebaeck wrote:\n\n> Hi,\n>\n> here is a query which produces over 1G temp file in pgsql_tmp. This\n> is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB\n> sort_mem and 320MB shared_mem.\n>\n> Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All\n> tables have been analyzed before.\n>\n> Can some please explain why the temp file is so huge? I understand\n> there are a lot of rows. All relevant indices seem to be used.\n>\n> Thanks in advance,\n>\n> Dirk\n>\n> EXPLAIN\n> SELECT DISTINCT ON (ft.val_9, ft.created, ft.flatid) ft.docstart, \n> ft.flatobj, bi.oid, bi.en\n> FROM bi, en, df AS ft, es\n> WHERE bi.rc=130170467\n> AND bi.en=ft.en\n> AND bi.co=117305223\n> AND bi.hide=FALSE\n> AND ft.en=en.oid\n> AND es.en=bi.en\n> AND es.co=bi.co\n> AND es.spec=122293729\n> AND (ft.val_2='DG' OR ft.val_2='SK')\n> AND ft.docstart=1\n> ORDER BY ft.val_9 ASC, ft.created DESC\n> LIMIT 1000 OFFSET 0;\n>\n> Limit (cost=8346.75..8346.78 rows=3 width=1361)\n> -> Unique (cost=8346.75..8346.78 rows=3 width=1361)\n> -> Sort (cost=8346.75..8346.76 rows=3 width=1361)\n> Sort Key: ft.val_9, ft.created, ft.flatid\n> -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361)\n> -> Nested Loop (cost=0.00..5757.17 rows=17 \n> width=51)\n> -> Nested Loop (cost=0.00..5606.39 \n> rows=30 width=42)\n> -> Index Scan using es_sc_index \n> on es (cost=0.00..847.71 rows=301 width=8)\n> Index Cond: ((spec = \n> 122293729) AND (co = 117305223::oid))\n> -> Index Scan using bi_env_index \n> on bi (cost=0.00..15.80 rows=1 width=42)\n> Index Cond: (\"outer\".en = \n> bi.en)\n> Filter: ((rc = \n> 130170467::oid) AND (co = 117305223::oid) AND (hide = false))\n> -> Index Scan using en_oid_index on en \n> (cost=0.00..5.01 rows=1 width=9)\n> Index Cond: (\"outer\".en = en.oid)\n> -> Index Scan using df_en on df ft \n> (cost=0.00..151.71 rows=49 width=1322)\n> Index Cond: (\"outer\".en = ft.en)\n> Filter: (((val_2 = 'DG'::text) OR (val_2 \n> = 'SK'::text)) AND (docstart = 1))\n> (17 rows)\n>\n>\n> --------------\n>\n> EXPLAIN ANALYZE gives:\n>\n>\n> Limit (cost=8346.75..8346.78 rows=3 width=1361) (actual \n> time=75357.465..75679.964 rows=1000 loops=1)\n> -> Unique (cost=8346.75..8346.78 rows=3 width=1361) (actual \n> time=75357.459..75675.371 rows=1000 loops=1)\n> -> Sort (cost=8346.75..8346.76 rows=3 width=1361) \n> (actual time=75357.448..75499.263 rows=22439 loops=1)\n> Sort Key: ft.val_9, ft.created, ft.flatid\n> -> Nested Loop (cost=0.00..8346.73 rows=3 \n> width=1361) (actual time=34.104..18016.005 rows=703677 loops=1)\n> -> Nested Loop (cost=0.00..5757.17 rows=17 \n> width=51) (actual time=0.467..3216.342 rows=48563 loops=1)\n> -> Nested Loop (cost=0.00..5606.39 \n> rows=30 width=42) (actual time=0.381..1677.014 rows=48563 loops=1)\n> -> Index Scan using es_sc_index \n> on es (cost=0.00..847.71 rows=301 width=8) (actual \n> time=0.184..46.519 rows=5863 loops=1)\n> Index Cond: ((spec = \n> 122293729) AND (co = 117305223::oid))\n> -> Index Scan using bi_env_index \n> on bi (cost=0.00..15.80 rows=1 width=42) (actual time=0.052..0.218 \n> rows=8 loops=5863)\n> Index Cond: (\"outer\".en = \n> bi.en)\n> Filter: ((rc = \n> 130170467::oid) AND 
(co = 117305223::oid) AND (hide = false))\n> -> Index Scan using en_oid_index on en \n> (cost=0.00..5.01 rows=1 width=9) (actual time=0.015..0.019 rows=1 \n> loops=48563)\n> Index Cond: (\"outer\".en = en.oid)\n> -> Index Scan using df_en on df ft \n> (cost=0.00..151.71 rows=49 width=1322) (actual time=0.038..0.148 \n> rows=14 loops=48563)\n> Index Cond: (\"outer\".en = ft.en)\n> Filter: (((val_2 = 'DG'::text) OR (val_2 \n> = 'SK'::text)) AND (docstart = 1))\n> Total runtime: 81782.052 ms\n> (18 rows)\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Fri, 27 Oct 2006 16:47:45 +0200", "msg_from": "Alexander Staubo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "Hi,\n\nI'm sorry but it look like my computer has resent older posts from me, \nsigh...\n\n\nDirk\n\nAlexander Staubo wrote:\n> While I can't explain why PostgreSQL would use that memory, I \n> recommend looking into tweaking the work_mem parameter. This setting \n> specifies how much memory PostgreSQL on certain temporary data \n> structures (hash tables, sort vectors) until it starts using temporary \n> files. Quoting the docs:\n>\n>> work_mem (integer)\n>> Specifies the amount of memory to be used by internal sort operations \n>> and hash tables before switching to temporary disk files. The value \n>> is specified in kilobytes, and defaults to 1024 kilobytes (1 MB). \n>> Note that for a complex query, several sort or hash operations might \n>> be running in parallel; each one will be allowed to use as much \n>> memory as this value specifies before it starts to put data into \n>> temporary files. Also, several running sessions could be doing such \n>> operations concurrently. So the total memory used could be many times \n>> the value of work_mem; it is necessary to keep this fact in mind when \n>> choosing the value. Sort operations are used for ORDER BY, DISTINCT, \n>> and merge joins. Hash tables are used in hash joins, hash-based \n>> aggregation, and hash-based processing of IN subqueries.\n>\n> Alexander.\n>\n> On Feb 5, 2005, at 18:25 , Dirk Lutzebaeck wrote:\n>\n>> Hi,\n>>\n>> here is a query which produces over 1G temp file in pgsql_tmp. This\n>> is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB\n>> sort_mem and 320MB shared_mem.\n>>\n>> Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All\n>> tables have been analyzed before.\n>>\n>> Can some please explain why the temp file is so huge? I understand\n>> there are a lot of rows. 
All relevant indices seem to be used.\n>>\n>> Thanks in advance,\n>>\n>> Dirk\n>>\n>> EXPLAIN\n>> SELECT DISTINCT ON (ft.val_9, ft.created, ft.flatid) ft.docstart, \n>> ft.flatobj, bi.oid, bi.en\n>> FROM bi, en, df AS ft, es\n>> WHERE bi.rc=130170467\n>> AND bi.en=ft.en\n>> AND bi.co=117305223\n>> AND bi.hide=FALSE\n>> AND ft.en=en.oid\n>> AND es.en=bi.en\n>> AND es.co=bi.co\n>> AND es.spec=122293729\n>> AND (ft.val_2='DG' OR ft.val_2='SK')\n>> AND ft.docstart=1\n>> ORDER BY ft.val_9 ASC, ft.created DESC\n>> LIMIT 1000 OFFSET 0;\n>>\n>> Limit (cost=8346.75..8346.78 rows=3 width=1361)\n>> -> Unique (cost=8346.75..8346.78 rows=3 width=1361)\n>> -> Sort (cost=8346.75..8346.76 rows=3 width=1361)\n>> Sort Key: ft.val_9, ft.created, ft.flatid\n>> -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361)\n>> -> Nested Loop (cost=0.00..5757.17 rows=17 \n>> width=51)\n>> -> Nested Loop (cost=0.00..5606.39 \n>> rows=30 width=42)\n>> -> Index Scan using es_sc_index on \n>> es (cost=0.00..847.71 rows=301 width=8)\n>> Index Cond: ((spec = \n>> 122293729) AND (co = 117305223::oid))\n>> -> Index Scan using bi_env_index on \n>> bi (cost=0.00..15.80 rows=1 width=42)\n>> Index Cond: (\"outer\".en = bi.en)\n>> Filter: ((rc = 130170467::oid) \n>> AND (co = 117305223::oid) AND (hide = false))\n>> -> Index Scan using en_oid_index on en \n>> (cost=0.00..5.01 rows=1 width=9)\n>> Index Cond: (\"outer\".en = en.oid)\n>> -> Index Scan using df_en on df ft \n>> (cost=0.00..151.71 rows=49 width=1322)\n>> Index Cond: (\"outer\".en = ft.en)\n>> Filter: (((val_2 = 'DG'::text) OR (val_2 = \n>> 'SK'::text)) AND (docstart = 1))\n>> (17 rows)\n>>\n>>\n>> --------------\n>>\n>> EXPLAIN ANALYZE gives:\n>>\n>>\n>> Limit (cost=8346.75..8346.78 rows=3 width=1361) (actual \n>> time=75357.465..75679.964 rows=1000 loops=1)\n>> -> Unique (cost=8346.75..8346.78 rows=3 width=1361) (actual \n>> time=75357.459..75675.371 rows=1000 loops=1)\n>> -> Sort (cost=8346.75..8346.76 rows=3 width=1361) (actual \n>> time=75357.448..75499.263 rows=22439 loops=1)\n>> Sort Key: ft.val_9, ft.created, ft.flatid\n>> -> Nested Loop (cost=0.00..8346.73 rows=3 \n>> width=1361) (actual time=34.104..18016.005 rows=703677 loops=1)\n>> -> Nested Loop (cost=0.00..5757.17 rows=17 \n>> width=51) (actual time=0.467..3216.342 rows=48563 loops=1)\n>> -> Nested Loop (cost=0.00..5606.39 \n>> rows=30 width=42) (actual time=0.381..1677.014 rows=48563 loops=1)\n>> -> Index Scan using es_sc_index on \n>> es (cost=0.00..847.71 rows=301 width=8) (actual time=0.184..46.519 \n>> rows=5863 loops=1)\n>> Index Cond: ((spec = \n>> 122293729) AND (co = 117305223::oid))\n>> -> Index Scan using bi_env_index on \n>> bi (cost=0.00..15.80 rows=1 width=42) (actual time=0.052..0.218 \n>> rows=8 loops=5863)\n>> Index Cond: (\"outer\".en = bi.en)\n>> Filter: ((rc = 130170467::oid) \n>> AND (co = 117305223::oid) AND (hide = false))\n>> -> Index Scan using en_oid_index on en \n>> (cost=0.00..5.01 rows=1 width=9) (actual time=0.015..0.019 rows=1 \n>> loops=48563)\n>> Index Cond: (\"outer\".en = en.oid)\n>> -> Index Scan using df_en on df ft \n>> (cost=0.00..151.71 rows=49 width=1322) (actual time=0.038..0.148 \n>> rows=14 loops=48563)\n>> Index Cond: (\"outer\".en = ft.en)\n>> Filter: (((val_2 = 'DG'::text) OR (val_2 = \n>> 'SK'::text)) AND (docstart = 1))\n>> Total runtime: 81782.052 ms\n>> (18 rows)\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>\n\n-- \n\n*Dirk Lutzeb�ck* <[email protected]> 
Tel +49.30.5362.1635 Fax .1638\nCTO AEC/communications GmbH & Co. KG <http://www.aeccom.com>, Berlin, \nGermany\n", "msg_date": "Fri, 27 Oct 2006 16:51:25 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "On Sat, 2005-02-05 at 11:25, Dirk Lutzebaeck wrote:\n> Hi,\n> \n> here is a query which produces over 1G temp file in pgsql_tmp. This\n> is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB\n> sort_mem and 320MB shared_mem.\n\nFirst step, upgrade to the latest 7.4.x version. 7.4.2 is an OLD\nversion of 7.4 I think the latest version is 7.4.13.\n\n> Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All\n> tables have been analyzed before.\n\nSNIP\n\n> EXPLAIN ANALYZE gives:\n> \n> \n> Limit (cost=8346.75..8346.78 rows=3 width=1361) (actual time=75357.465..75679.964 rows=1000 loops=1)\n> -> Unique (cost=8346.75..8346.78 rows=3 width=1361) (actual time=75357.459..75675.371 rows=1000 loops=1)\n> -> Sort (cost=8346.75..8346.76 rows=3 width=1361) (actual time=75357.448..75499.263 rows=22439 loops=1)\n> Sort Key: ft.val_9, ft.created, ft.flatid\n> -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361) (actual time=34.104..18016.005 rows=703677 loops=1)\n> -> Nested Loop (cost=0.00..5757.17 rows=17 width=51) (actual time=0.467..3216.342 rows=48563 loops=1)\n> -> Nested Loop (cost=0.00..5606.39 rows=30 width=42) (actual time=0.381..1677.014 rows=48563 loops=1)\n> -> Index Scan using es_sc_index on es (cost=0.00..847.71 rows=301 width=8) (actual time=0.184..46.519 rows=5863 loops=1)\n> Index Cond: ((spec = 122293729) AND (co = 117305223::oid))\n> -> Index Scan using bi_env_index on bi (cost=0.00..15.80 rows=1 width=42) (actual time=0.052..0.218 rows=8 loops=5863)\n> Index Cond: (\"outer\".en = bi.en)\n> Filter: ((rc = 130170467::oid) AND (co = 117305223::oid) AND (hide = false))\n> -> Index Scan using en_oid_index on en (cost=0.00..5.01 rows=1 width=9) (actual time=0.015..0.019 rows=1 loops=48563)\n> Index Cond: (\"outer\".en = en.oid)\n> -> Index Scan using df_en on df ft (cost=0.00..151.71 rows=49 width=1322) (actual time=0.038..0.148 rows=14 loops=48563)\n> Index Cond: (\"outer\".en = ft.en)\n> Filter: (((val_2 = 'DG'::text) OR (val_2 = 'SK'::text)) AND (docstart = 1))\n> Total runtime: 81782.052 ms\n> (18 rows)\n\nWhy do you have an index scan on en_oid_index that thinks it will return\n1 row when it returns 48563, and one on df_en that thinks it will return\n49 and returns 48563 as well? Is this database analyzed often? Are\noids even analyzed? I'd really recommend switching off of them as they\ncomplicate backups and restores.\n\nIf analyze doesn't help, you can try brute forcing off nested loops for\nthis query and see if that helps. 
Nested loops are really slow for large\nnumbers of rows.\n", "msg_date": "Fri, 27 Oct 2006 10:49:28 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" } ]
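A minimal way to try the two suggestions from the thread above (more per-sort memory, and checking whether the nested-loop plan is the culprit) is to change the settings for a single transaction instead of server-wide. The sketch below is illustrative only: it assumes a 7.4-era server where the parameter is still called sort_mem (renamed work_mem in 8.0), it reuses table and column names from the thread, and the memory value and the simplified query are placeholders rather than the poster's real statement.

BEGIN;
SET LOCAL sort_mem = 524288;        -- roughly 512 MB per sort, for this transaction only (illustrative value)
SET LOCAL enable_nestloop = off;    -- temporarily steer the planner away from nested loops, per the advice above
EXPLAIN ANALYZE
SELECT df.docindex, df.flatobj, bi.oid, bi.en
FROM bi
JOIN df ON bi.en = df.en
WHERE bi.rc = 130170467
  AND bi.co = 117305223
  AND bi.hide = FALSE;
ROLLBACK;                           -- SET LOCAL settings vanish when the transaction ends

Because SET LOCAL is scoped to the transaction, the experiment does not disturb other sessions; if the plan or the temp-file usage changes noticeably, that points at the row-count misestimates discussed above rather than at the sort machinery itself.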
[ { "msg_contents": "\nHi,\n\nhere is a query which produces over 1G temp file in pgsql_tmp. This\nis on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB\nsort_mem and 320MB shared_mem.\n\nBelow is the query and results for EXPLAIN and EXPLAIN ANALYZE. All\ntables have been analyzed before.\n\nCan some please explain why the temp file is so huge? I understand\nthere are a lot of rows.\n\nThanks in advance,\n\nDirk\n\nEXPLAIN \nSELECT DISTINCT ON (ft.val_9, ft.created, ft.flatid) ft.docstart, ft.docindex, ft.flatobj, bi.oid, bi.en\nFROM bi, en, df AS ft, es\nWHERE bi.rc=130170467\nAND bi.en=ft.en\nAND bi.co=117305223\nAND bi.hide=FALSE\nAND ft.en=en.oid\nAND es.en=bi.en\nAND es.co=bi.co\nAND es.spec=122293729\nAND (ft.val_2='DG' OR ft.val_2='SK')\nAND ft.docstart=1\nORDER BY ft.val_9 ASC, ft.created DESC\nLIMIT 1000 OFFSET 0;\n\n Limit (cost=8346.75..8346.78 rows=3 width=1361)\n -> Unique (cost=8346.75..8346.78 rows=3 width=1361)\n -> Sort (cost=8346.75..8346.76 rows=3 width=1361)\n Sort Key: ft.val_9, ft.created, ft.flatid\n -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361)\n -> Nested Loop (cost=0.00..5757.17 rows=17 width=51)\n -> Nested Loop (cost=0.00..5606.39 rows=30 width=42)\n -> Index Scan using es_sc_index on es (cost=0.00..847.71 rows=301 width=8)\n Index Cond: ((spec = 122293729) AND (co = 117305223::oid))\n -> Index Scan using bi_env_index on bi (cost=0.00..15.80 rows=1 width=42)\n Index Cond: (\"outer\".en = bi.en)\n Filter: ((rc = 130170467::oid) AND (co = 117305223::oid) AND (hide = false))\n -> Index Scan using en_oid_index on en (cost=0.00..5.01 rows=1 width=9)\n Index Cond: (\"outer\".en = en.oid)\n -> Index Scan using df_en on df ft (cost=0.00..151.71 rows=49 width=1322)\n Index Cond: (\"outer\".en = ft.en)\n Filter: (((val_2 = 'DG'::text) OR (val_2 = 'SK'::text)) AND (docstart = 1))\n(17 rows)\n\n\n--------------\n\nEXPLAIN ANALYZE gives:\n\n\n Limit (cost=8346.75..8346.78 rows=3 width=1361) (actual time=75357.465..75679.964 rows=1000 loops=1)\n -> Unique (cost=8346.75..8346.78 rows=3 width=1361) (actual time=75357.459..75675.371 rows=1000 loops=1)\n -> Sort (cost=8346.75..8346.76 rows=3 width=1361) (actual time=75357.448..75499.263 rows=22439 loops=1)\n Sort Key: ft.val_9, ft.created, ft.flatid\n -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361) (actual time=34.104..18016.005 rows=703677 loops=1)\n -> Nested Loop (cost=0.00..5757.17 rows=17 width=51) (actual time=0.467..3216.342 rows=48563 loops=1)\n -> Nested Loop (cost=0.00..5606.39 rows=30 width=42) (actual time=0.381..1677.014 rows=48563 loops=1)\n -> Index Scan using es_sc_index on es (cost=0.00..847.71 rows=301 width=8) (actual time=0.184..46.519 rows=5863 loops=1)\n Index Cond: ((spec = 122293729) AND (co = 117305223::oid))\n -> Index Scan using bi_env_index on bi (cost=0.00..15.80 rows=1 width=42) (actual time=0.052..0.218 rows=8 loops=5863)\n Index Cond: (\"outer\".en = bi.en)\n Filter: ((rc = 130170467::oid) AND (co = 117305223::oid) AND (hide = false))\n -> Index Scan using en_oid_index on en (cost=0.00..5.01 rows=1 width=9) (actual time=0.015..0.019 rows=1 loops=48563)\n Index Cond: (\"outer\".en = en.oid)\n -> Index Scan using df_en on df ft (cost=0.00..151.71 rows=49 width=1322) (actual time=0.038..0.148 rows=14 loops=48563)\n Index Cond: (\"outer\".en = ft.en)\n Filter: (((val_2 = 'DG'::text) OR (val_2 = 'SK'::text)) AND (docstart = 1))\n Total runtime: 81782.052 ms\n(18 rows)\n\n", "msg_date": "Sat, 5 Feb 2005 19:21:17 +0100 (CET)", "msg_from": "Dirk Lutzebaeck <[email protected]>", 
"msg_from_op": true, "msg_subject": "query produces 1 GB temp file" }, { "msg_contents": "Dirk Lutzebaeck wrote:\n\n>Hi,\n>\n>here is a query which produces over 1G temp file in pgsql_tmp. This\n>is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB\n>sort_mem and 320MB shared_mem.\n>\n>Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All\n>tables have been analyzed before.\n>\n>Can some please explain why the temp file is so huge? I understand\n>there are a lot of rows.\n>\n>Thanks in advance,\n>\n>Dirk\n> \n>\n...\n\n> -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361) (actual time=34.104..18016.005 rows=703677 loops=1)\n> \n>\nWell, there is this particular query where it thinks there will only be \n3 rows, but in fact there are 703,677 of them. And the previous line:\n\n> -> Sort (cost=8346.75..8346.76 rows=3 width=1361) (actual time=75357.448..75499.263 rows=22439 loops=1)\n> \n>\nSeem to indicate that after sorting you still have 22,439 rows, which \nthen gets pared down again down to 1000.\n\nI'm assuming that the sort you are trying to do is extremely expensive. \nYou are sorting 700k rows, which takes up too much memory (1GB), which \nforces it to create a temporary table, and write it out to disk.\n\nI didn't analyze it a lot, but you might get a lot better performance \nfrom doing a subselect, rather than the query you wrote.\n\nYou are joining 4 tables (bi, en, df AS ft, es) I don't know which \ntables are what size. In the end, though, you don't really care about \nthe en table or es tables (they aren't in your output).\n\nSo maybe one of you subselects could be:\n\nwhere bi.en = (select en from es where es.co = bi.co and es.spec=122293729);\n\nI'm pretty sure the reason you need 1GB of temp space is because at one \npoint you have 700k rows. Is it possible to rewrite the query so that it \ndoes more filtering earlier? Your distinct criteria seems to filter it \ndown to 20k rows. So maybe it's possible to do some sort of a distinct \nin part of the subselect, before you start joining against other tables.\n\nIf you have that much redundancy, you might also need to think of doing \na different normalization.\n\nJust some thoughts.\n\nAlso, I thought using the \"oid\" column wasn't really recommended, since \nin *high* volume databases they aren't even guaranteed to be unique. (I \nthink it is a 32-bit number that rolls over.) Also on a database dump \nand restore, they don't stay the same, unless you take a lot of extra \ncare that they are included in both the dump and the restore. I believe \nit is better to create your own \"id\" per table (say SERIAL or BIGSERIAL).\n\nJohn\n=:->", "msg_date": "Sat, 05 Feb 2005 13:26:09 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "Hi John,\n\nthanks very much for your analysis. I'll probably need to reorganize \nsome things.\n\nRegards,\n\nDirk\n\nJohn A Meinel wrote:\n\n> Dirk Lutzebaeck wrote:\n>\n>> Hi,\n>>\n>> here is a query which produces over 1G temp file in pgsql_tmp. This\n>> is on pgsql 7.4.2, RHEL 3.0, XEON MP machine with 32GB RAM, 300MB\n>> sort_mem and 320MB shared_mem.\n>>\n>> Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All\n>> tables have been analyzed before.\n>>\n>> Can some please explain why the temp file is so huge? 
I understand\n>> there are a lot of rows.\n>>\n>> Thanks in advance,\n>>\n>> Dirk\n>> \n>>\n> ...\n>\n>> -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361) \n>> (actual time=34.104..18016.005 rows=703677 loops=1)\n>> \n>>\n> Well, there is this particular query where it thinks there will only \n> be 3 rows, but in fact there are 703,677 of them. And the previous line:\n>\n>> -> Sort (cost=8346.75..8346.76 rows=3 width=1361) (actual \n>> time=75357.448..75499.263 rows=22439 loops=1)\n>> \n>>\n> Seem to indicate that after sorting you still have 22,439 rows, which \n> then gets pared down again down to 1000.\n>\n> I'm assuming that the sort you are trying to do is extremely \n> expensive. You are sorting 700k rows, which takes up too much memory \n> (1GB), which forces it to create a temporary table, and write it out \n> to disk.\n>\n> I didn't analyze it a lot, but you might get a lot better performance \n> from doing a subselect, rather than the query you wrote.\n>\n> You are joining 4 tables (bi, en, df AS ft, es) I don't know which \n> tables are what size. In the end, though, you don't really care about \n> the en table or es tables (they aren't in your output).\n>\n> So maybe one of you subselects could be:\n>\n> where bi.en = (select en from es where es.co = bi.co and \n> es.spec=122293729);\n>\n> I'm pretty sure the reason you need 1GB of temp space is because at \n> one point you have 700k rows. Is it possible to rewrite the query so \n> that it does more filtering earlier? Your distinct criteria seems to \n> filter it down to 20k rows. So maybe it's possible to do some sort of \n> a distinct in part of the subselect, before you start joining against \n> other tables.\n>\n> If you have that much redundancy, you might also need to think of \n> doing a different normalization.\n>\n> Just some thoughts.\n>\n> Also, I thought using the \"oid\" column wasn't really recommended, \n> since in *high* volume databases they aren't even guaranteed to be \n> unique. (I think it is a 32-bit number that rolls over.) Also on a \n> database dump and restore, they don't stay the same, unless you take a \n> lot of extra care that they are included in both the dump and the \n> restore. I believe it is better to create your own \"id\" per table (say \n> SERIAL or BIGSERIAL).\n>\n> John\n> =:->\n>\n\n", "msg_date": "Sat, 05 Feb 2005 20:46:20 +0100", "msg_from": "[email protected] (Dirk Lutzebaeck)", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "\nDirk Lutzebaeck <[email protected]> writes:\n\n> Below is the query and results for EXPLAIN and EXPLAIN ANALYZE. All\n> tables have been analyzed before.\n\nReally? A lot of the estimates are very far off. If you really just analyzed\nthese tables immediately prior to the query then perhaps you should try\nraising the statistics target on spec and co. Or is the problem that there's a\ncorrelation between those two columns?\n\n> -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361) (actual time=34.104..18016.005 rows=703677 loops=1)\n> -> Nested Loop (cost=0.00..5757.17 rows=17 width=51) (actual time=0.467..3216.342 rows=48563 loops=1)\n> -> Nested Loop (cost=0.00..5606.39 rows=30 width=42) (actual time=0.381..1677.014 rows=48563 loops=1)\n> -> Index Scan using es_sc_index on es (cost=0.00..847.71 rows=301 width=8) (actual time=0.184..46.519 rows=5863 loops=1)\n> Index Cond: ((spec = 122293729) AND (co = 117305223::oid))\n\nThe root of your problem,. The optimizer is off by a factor of 20. 
It thinks\nthese two columns are much more selective than they are.\n\n> -> Index Scan using bi_env_index on bi (cost=0.00..15.80 rows=1 width=42) (actual time=0.052..0.218 rows=8 loops=5863)\n> Index Cond: (\"outer\".en = bi.en)\n> Filter: ((rc = 130170467::oid) AND (co = 117305223::oid) AND (hide = false))\n\nIt also thinks these three columns are much more selective than they are.\n\nHow accurate are its estimates if you just do these?\n\nexplain analyze select * from es where spec = 122293729\nexplain analyze select * from es where co = 117305223::oid\nexplain analyze select * from bi where rc = 130170467::oid\nexplain analyze select * from bi where co = 117305223\nexplain analyze select * from bi where hide = false\n\nIf they're individually accurate then you've run into the familiar problem of\nneeding cross-column statistics. If they're individually inaccurate then you\nshould try raising the targets on those columns with:\n\nALTER TABLE [ ONLY ] name [ * ]\n ALTER [ COLUMN ] column SET STATISTICS integer\n\nand reanalyzing.\n\n\nDirk Lutzebaeck <[email protected]> writes:\n\n> Can some please explain why the temp file is so huge? I understand\n> there are a lot of rows.\n\nWell that I can't explain. 22k rows of width 1361 doesn't sound so big to me\neither. The temporary table does need to store three copies of the records at\na given time, but still it sounds like an awful lot.\n\n\n-- \ngreg\n\n", "msg_date": "05 Feb 2005 15:22:52 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Dirk Lutzebaeck <[email protected]> writes:\n>> Can some please explain why the temp file is so huge? I understand\n>> there are a lot of rows.\n\n> Well that I can't explain. 22k rows of width 1361 doesn't sound so big to me\n> either.\n\nIt was 700k rows to sort, not 22k. The Unique/Limit superstructure\nonly demanded 22k rows out from the sort, but we still had to sort 'em\nall to figure out which ones were the first 22k.\n\n> The temporary table does need to store three copies of the records at\n> a given time, but still it sounds like an awful lot.\n\nHuh?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Feb 2005 17:41:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> It was 700k rows to sort, not 22k. \n\nOops, missed that.\n\n> > The temporary table does need to store three copies of the records at\n> > a given time, but still it sounds like an awful lot.\n> \n> Huh?\n\nAm I wrong? I thought the disk sort algorithm was the polyphase tape sort from\nKnuth which is always reading two tapes and writing to a third.\n\n-- \ngreg\n\n", "msg_date": "05 Feb 2005 17:50:13 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> Am I wrong? 
I thought the disk sort algorithm was the polyphase tape sort from\n> Knuth which is always reading two tapes and writing to a third.\n\nIt is a polyphase sort, but we recycle the input \"tapes\" as fast as we\nuse them, so that the maximum disk space usage is about as much as the\ndata volume to sort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Feb 2005 18:01:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file " }, { "msg_contents": "Greg,\n\nThanks for your analysis. But I dont get any better after bumping \nSTATISTICS target from 10 to 200.\nexplain analyze shows that the optimizer is still way off estimating the \nrows. Is this normal? It still produces a 1 GB temp file.\nI simplified the query a bit, now only two tables are involved (bi, df). \nI also vacuumed.\n\n\nalter table bi alter rc set statistics 200;\nalter table bi alter hide set statistics 200;\nalter table bi alter co set statistics 200;\nalter table bi alter en set statistics 200;\nanalyze bi;\n\nalter table df alter en set statistics 200;\nalter table df alter val_2 set statistics 200;\nanalyze df;\n\nEXPLAIN ANALYZE\nSELECT DISTINCT ON (df.val_9, df.created, df.flatid) df.docindex, \ndf.flatobj, bi.oid, bi.en\nFROM bi,df\nWHERE bi.rc=130170467\nAND bi.en=df.en\nAND bi.co=117305223\nAND bi.hide=FALSE\nAND (df.val_2='DG' OR df.val_2='SK')\nAND df.docstart=1\nORDER BY df.val_9 ASC, df.created DESC\nLIMIT 1000 OFFSET 0\n;\n\nLimit (cost=82470.09..82480.09 rows=1000 width=646) (actual \ntime=71768.685..72084.622 rows=1000 loops=1)\n-> Unique (cost=82470.09..82643.71 rows=17362 width=646) (actual \ntime=71768.679..72079.987 rows=1000 loops=1)\n-> Sort (cost=82470.09..82513.50 rows=17362 width=646) (actual \ntime=71768.668..71905.138 rows=22439 loops=1)\nSort Key: df.val_9, df.created, df.flatid\n-> Merge Join (cost=80422.51..81247.49 rows=17362 width=646) (actual \ntime=7657.872..18486.551 rows=703677 loops=1)\nMerge Cond: (\"outer\".en = \"inner\".en)\n-> Sort (cost=55086.74..55340.18 rows=101378 width=8) (actual \ntime=5606.137..6672.630 rows=471871 loops=1)\nSort Key: bi.en\n-> Seq Scan on bi (cost=0.00..46657.47 rows=101378 width=8) (actual \ntime=0.178..3715.109 rows=472320 loops=1)\nFilter: ((rc = 130170467::oid) AND (co = 117305223::oid) AND (hide = false))\n-> Sort (cost=25335.77..25408.23 rows=28982 width=642) (actual \ntime=2048.039..3677.140 rows=706482 loops=1)\nSort Key: df.en\n-> Seq Scan on df (cost=0.00..23187.79 rows=28982 width=642) (actual \ntime=0.112..1546.580 rows=71978 loops=1)\nFilter: (((val_2 = 'DG'::text) OR (val_2 = 'SK'::text)) AND (docstart = 1))\n\n\nexplain analyze select * from bi where rc=130170467;\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\nSeq Scan on bi (cost=0.00..41078.76 rows=190960 width=53) (actual \ntime=0.157..3066.028 rows=513724 loops=1)\nFilter: (rc = 130170467::oid)\nTotal runtime: 4208.663 ms\n(3 rows)\n\n\nexplain analyze select * from bi where co=117305223;\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\nSeq Scan on bi (cost=0.00..41078.76 rows=603988 width=53) (actual \ntime=0.021..3692.238 rows=945487 loops=1)\nFilter: (co = 117305223::oid)\nTotal runtime: 5786.268 ms\n(3 rows)\n\n\n\n\n\nGreg Stark wrote:\n\n>Dirk Lutzebaeck <[email protected]> writes:\n>\n> \n>\n>>Below is the query and results for EXPLAIN and EXPLAIN 
ANALYZE. All\n>>tables have been analyzed before.\n>> \n>>\n>\n>Really? A lot of the estimates are very far off. If you really just analyzed\n>these tables immediately prior to the query then perhaps you should try\n>raising the statistics target on spec and co. Or is the problem that there's a\n>correlation between those two columns?\n>\n> \n>\n>> -> Nested Loop (cost=0.00..8346.73 rows=3 width=1361) (actual time=34.104..18016.005 rows=703677 loops=1)\n>> -> Nested Loop (cost=0.00..5757.17 rows=17 width=51) (actual time=0.467..3216.342 rows=48563 loops=1)\n>> -> Nested Loop (cost=0.00..5606.39 rows=30 width=42) (actual time=0.381..1677.014 rows=48563 loops=1)\n>> -> Index Scan using es_sc_index on es (cost=0.00..847.71 rows=301 width=8) (actual time=0.184..46.519 rows=5863 loops=1)\n>> Index Cond: ((spec = 122293729) AND (co = 117305223::oid))\n>> \n>>\n>\n>The root of your problem,. The optimizer is off by a factor of 20. It thinks\n>these two columns are much more selective than they are.\n>\n> \n>\n>> -> Index Scan using bi_env_index on bi (cost=0.00..15.80 rows=1 width=42) (actual time=0.052..0.218 rows=8 loops=5863)\n>> Index Cond: (\"outer\".en = bi.en)\n>> Filter: ((rc = 130170467::oid) AND (co = 117305223::oid) AND (hide = false))\n>> \n>>\n>\n>It also thinks these three columns are much more selective than they are.\n>\n>How accurate are its estimates if you just do these?\n>\n>explain analyze select * from es where spec = 122293729\n>explain analyze select * from es where co = 117305223::oid\n>explain analyze select * from bi where rc = 130170467::oid\n>explain analyze select * from bi where co = 117305223\n>explain analyze select * from bi where hide = false\n>\n>If they're individually accurate then you've run into the familiar problem of\n>needing cross-column statistics. If they're individually inaccurate then you\n>should try raising the targets on those columns with:\n>\n>ALTER TABLE [ ONLY ] name [ * ]\n> ALTER [ COLUMN ] column SET STATISTICS integer\n>\n>and reanalyzing.\n>\n>\n>Dirk Lutzebaeck <[email protected]> writes:\n>\n> \n>\n>>Can some please explain why the temp file is so huge? I understand\n>>there are a lot of rows.\n>> \n>>\n>\n>Well that I can't explain. 22k rows of width 1361 doesn't sound so big to me\n>either. The temporary table does need to store three copies of the records at\n>a given time, but still it sounds like an awful lot.\n>\n>\n> \n>\n\n", "msg_date": "Sun, 06 Feb 2005 14:27:17 +0100", "msg_from": "[email protected] (Dirk Lutzebaeck)", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "Dirk Lutzebaeck wrote:\n\n> Greg,\n>\n> Thanks for your analysis. But I dont get any better after bumping \n> STATISTICS target from 10 to 200.\n> explain analyze shows that the optimizer is still way off estimating \n> the rows. Is this normal? It still produces a 1 GB temp file.\n> I simplified the query a bit, now only two tables are involved (bi, \n> df). I also vacuumed.\n\n\nAre you just doing VACUUM? Or are you doing VACUUM ANALYZE? 
You might \nalso try VACUUM ANALYZE FULL (in the case that you have too many dead \ntuples in the table).\n\nVACUUM cleans up, but doesn't adjust any planner statistics without ANALYZE.\n\nJohn\n=:->", "msg_date": "Sun, 06 Feb 2005 09:19:08 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "\nI gave a bunch of \"explain analyze select\" commands to test estimates for\nindividual columns. What results do they come up with? If those are inaccurate\nthen raising the statistics target is a good route. If those are accurate\nindividually but the combination is inaccurate then you have a more difficult\nproblem.\n\n-- \ngreg\n\n", "msg_date": "06 Feb 2005 10:57:48 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "John,\n\nI'm doing VACUUM ANALYZE once a night. Before the tests I did VACUUM and \nthen ANALYZE.\n\nDirk\n\nJohn A Meinel wrote:\n\n> Dirk Lutzebaeck wrote:\n>\n>> Greg,\n>>\n>> Thanks for your analysis. But I dont get any better after bumping \n>> STATISTICS target from 10 to 200.\n>> explain analyze shows that the optimizer is still way off estimating \n>> the rows. Is this normal? It still produces a 1 GB temp file.\n>> I simplified the query a bit, now only two tables are involved (bi, \n>> df). I also vacuumed.\n>\n>\n>\n> Are you just doing VACUUM? Or are you doing VACUUM ANALYZE? You might \n> also try VACUUM ANALYZE FULL (in the case that you have too many dead \n> tuples in the table).\n>\n> VACUUM cleans up, but doesn't adjust any planner statistics without \n> ANALYZE.\n>\n> John\n> =:->\n>\n\n", "msg_date": "Sun, 06 Feb 2005 17:04:05 +0100", "msg_from": "[email protected] (Dirk Lutzebaeck)", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "Greg Stark wrote:\n\n>I gave a bunch of \"explain analyze select\" commands to test estimates for\n>individual columns. What results do they come up with? If those are inaccurate\n>then raising the statistics target is a good route. If those are accurate\n>individually but the combination is inaccurate then you have a more difficult\n>problem.\n>\n> \n>\nAfter setting the new statistics target to 200 they did slightly better \nbut not accurate. The results were attached to my last post. 
Here is a copy:\n\n\n\nexplain analyze select * from bi where rc=130170467;\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------- \n\nSeq Scan on bi (cost=0.00..41078.76 rows=190960 width=53) (actual \ntime=0.157..3066.028 rows=513724 loops=1)\nFilter: (rc = 130170467::oid)\nTotal runtime: 4208.663 ms\n(3 rows)\n\n\nexplain analyze select * from bi where co=117305223;\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------- \n\nSeq Scan on bi (cost=0.00..41078.76 rows=603988 width=53) (actual \ntime=0.021..3692.238 rows=945487 loops=1)\nFilter: (co = 117305223::oid)\nTotal runtime: 5786.268 ms\n(3 rows)\n\nHere is the distribution of the data in bi:\nselect count(*) from bi;\n\n 1841966\n\n\nselect count(*) from bi where rc=130170467::oid;\n\n 513732\n\n\nselect count(*) from bi where co=117305223::oid;\n\n 945503\n\n\n\n\n", "msg_date": "Sun, 06 Feb 2005 17:12:19 +0100", "msg_from": "[email protected] (Dirk Lutzebaeck)", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "Dirk Lutzebaeck wrote:\n\n> Greg Stark wrote:\n>\n>> I gave a bunch of \"explain analyze select\" commands to test estimates \n>> for\n>> individual columns. What results do they come up with? If those are \n>> inaccurate\n>> then raising the statistics target is a good route. If those are \n>> accurate\n>> individually but the combination is inaccurate then you have a more \n>> difficult\n>> problem.\n>>\n>> \n>>\n> After setting the new statistics target to 200 they did slightly \n> better but not accurate. The results were attached to my last post. \n> Here is a copy:\n>\n>\nIt does seem that setting the statistics to a higher value would help. \nSince rc=130170467 seems to account for almost 1/3 of the data. Probably \nyou have other values that are much less common. So setting a high \nstatistics target would help the planner realize that this value occurs \nat a different frequency from the other ones. Can you try other numbers \nand see what the counts are?\n\nI assume you did do a vacuum analyze after adjusting the statistics target.\n\nAlso interesting that in the time it took you to place these queries, \nyou had received 26 new rows.\n\nAnd finally, what is the row count if you do\nexplain analyze select * from bi where rc=130170467::oid and \nco=117305223::oid;\n\nIf this is a lot less than say 500k, then probably you aren't going to \nbe helped a lot. The postgresql statistics engine doesn't generate cross \ncolumn statistics. It always assumes random distribution of data. So if \ntwo columns are correlated (or anti-correlated), it won't realize that.\n\nEven so, your original desire was to reduce the size of the intermediate \nstep (where you have 700k rows). So you need to try and design a \nsubselect on bi which is as restrictive as possible, so that you don't \nget all of these rows. With any luck, the planner will realize ahead of \ntime that there won't be that many rows, and can use indexes, etc. 
But \neven if it doesn't use an index scan, if you have a query that doesn't \nuse a lot of rows, then you won't need a lot of disk space.\n\nJohn\n=:->\n\n>\n> explain analyze select * from bi where rc=130170467;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------- \n>\n> Seq Scan on bi (cost=0.00..41078.76 rows=190960 width=53) (actual \n> time=0.157..3066.028 rows=513724 loops=1)\n> Filter: (rc = 130170467::oid)\n> Total runtime: 4208.663 ms\n> (3 rows)\n>\n>\n> explain analyze select * from bi where co=117305223;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------- \n>\n> Seq Scan on bi (cost=0.00..41078.76 rows=603988 width=53) (actual \n> time=0.021..3692.238 rows=945487 loops=1)\n> Filter: (co = 117305223::oid)\n> Total runtime: 5786.268 ms\n> (3 rows)\n>\n> Here is the distribution of the data in bi:\n> select count(*) from bi;\n>\n> 1841966\n>\n>\n> select count(*) from bi where rc=130170467::oid;\n>\n> 513732\n>\n>\n> select count(*) from bi where co=117305223::oid;\n>\n> 945503\n>\n>\n>", "msg_date": "Sun, 06 Feb 2005 10:46:06 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "[email protected] (Dirk Lutzebaeck) writes:\n> SELECT DISTINCT ON (df.val_9, df.created, df.flatid) df.docindex, \n> df.flatobj, bi.oid, bi.en\n> FROM bi,df\n> WHERE bi.rc=130170467\n> ...\n> ORDER BY df.val_9 ASC, df.created DESC\n> LIMIT 1000 OFFSET 0\n\nJust out of curiosity, what is this query supposed to *do* exactly?\nIt looks to me like it will give indeterminate results. Practical\nuses of DISTINCT ON generally specify more ORDER BY columns than\nthere are DISTINCT ON columns, because the extra columns determine\nwhich rows have priority to survive the DISTINCT filter. With the\nabove query, you have absolutely no idea which row will be output\nfor a given combination of val_9/created/flatid.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Feb 2005 12:16:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file " }, { "msg_contents": "John A Meinel wrote:\n\n> Dirk Lutzebaeck wrote:\n>\n>> Greg Stark wrote:\n>>\n>>> I gave a bunch of \"explain analyze select\" commands to test \n>>> estimates for\n>>> individual columns. What results do they come up with? If those are \n>>> inaccurate\n>>> then raising the statistics target is a good route. If those are \n>>> accurate\n>>> individually but the combination is inaccurate then you have a more \n>>> difficult\n>>> problem.\n>>>\n>>> \n>>>\n>> After setting the new statistics target to 200 they did slightly \n>> better but not accurate. The results were attached to my last post. \n>> Here is a copy:\n>>\n>>\n> It does seem that setting the statistics to a higher value would help. \n> Since rc=130170467 seems to account for almost 1/3 of the data. \n> Probably you have other values that are much less common. So setting a \n> high statistics target would help the planner realize that this value \n> occurs at a different frequency from the other ones. Can you try other \n> numbers and see what the counts are?\n\nThere is not much effect when increasing statistics target much higher. 
\nI guess this is because rc=130170467 takes a large portion of the column \ndistribution.\n\n> I assume you did do a vacuum analyze after adjusting the statistics \n> target.\n\nYes.\n\n> Also interesting that in the time it took you to place these queries, \n> you had received 26 new rows.\n\nYes, it's a live system...\n\n> And finally, what is the row count if you do\n> explain analyze select * from bi where rc=130170467::oid and \n> co=117305223::oid;\n\nexplain analyze select * from bi where rc=130170467::oid and \nco=117305223::oid;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Seq Scan on bi (cost=0.00..43866.19 rows=105544 width=51) (actual \ntime=0.402..3724.222 rows=513732 loops=1)\n Filter: ((rc = 130170467::oid) AND (co = 117305223::oid))\n\nWell both columns data take about 1/4 of the whole table. There is not \nmuch distributed data. So it needs to do full scans...\n\n> If this is a lot less than say 500k, then probably you aren't going to \n> be helped a lot. The postgresql statistics engine doesn't generate \n> cross column statistics. It always assumes random distribution of \n> data. So if two columns are correlated (or anti-correlated), it won't \n> realize that.\n\n105k, that seems to be may problem. No much random data. Does 8.0 \naddress this problem?\n\n> Even so, your original desire was to reduce the size of the \n> intermediate step (where you have 700k rows). So you need to try and \n> design a subselect on bi which is as restrictive as possible, so that \n> you don't get all of these rows. With any luck, the planner will \n> realize ahead of time that there won't be that many rows, and can use \n> indexes, etc. But even if it doesn't use an index scan, if you have a \n> query that doesn't use a lot of rows, then you won't need a lot of \n> disk space.\n\nI'll try that. What I have already noticed it that one of my output \ncolumn is quite large so that's why it uses so much temp space. I'll \nprobably need to sort without that output column and read it in \nafterwards using a subselect on the limted result.\n\nThanks for your help,\n\nDirk\n\n>\n> John\n> =:->\n>\n>>\n>> explain analyze select * from bi where rc=130170467;\n>> QUERY PLAN\n>> ------------------------------------------------------------------------------------------------------------------- \n>>\n>> Seq Scan on bi (cost=0.00..41078.76 rows=190960 width=53) (actual \n>> time=0.157..3066.028 rows=513724 loops=1)\n>> Filter: (rc = 130170467::oid)\n>> Total runtime: 4208.663 ms\n>> (3 rows)\n>>\n>>\n>> explain analyze select * from bi where co=117305223;\n>> QUERY PLAN\n>> ------------------------------------------------------------------------------------------------------------------- \n>>\n>> Seq Scan on bi (cost=0.00..41078.76 rows=603988 width=53) (actual \n>> time=0.021..3692.238 rows=945487 loops=1)\n>> Filter: (co = 117305223::oid)\n>> Total runtime: 5786.268 ms\n>> (3 rows)\n>>\n>> Here is the distribution of the data in bi:\n>> select count(*) from bi;\n>>\n>> 1841966\n>>\n>>\n>> select count(*) from bi where rc=130170467::oid;\n>>\n>> 513732\n>>\n>>\n>> select count(*) from bi where co=117305223::oid;\n>>\n>> 945503\n>>\n>>\n>>\n>\n\n", "msg_date": "Sun, 06 Feb 2005 18:18:35 +0100", "msg_from": "[email protected] (Dirk Lutzebaeck)", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "Tom,\n\nthe orginal query has more output columns. I reduced it for readability. 
\nSpecifically it returns a persitent object (flatobj column) which needs \nto be processed by the application as the returned result. The problem \nof the huge sort space usage seems to be that the flatobj is part of the \nrow, so it used always copied in the sort algorithm I guess. When I drop \nthe flatobj from the output columns the size of the temp space file \ndrops dramatically. So I'll probably need to read flatobj after the \nsorting from the limited return result in a subselect.\n\nRegards,\n\nDirk\n\nTom Lane wrote:\n\n>[email protected] (Dirk Lutzebaeck) writes:\n> \n>\n>>SELECT DISTINCT ON (df.val_9, df.created, df.flatid) df.docindex, \n>>df.flatobj, bi.oid, bi.en\n>>FROM bi,df\n>>WHERE bi.rc=130170467\n>>...\n>>ORDER BY df.val_9 ASC, df.created DESC\n>>LIMIT 1000 OFFSET 0\n>> \n>>\n>\n>Just out of curiosity, what is this query supposed to *do* exactly?\n>It looks to me like it will give indeterminate results. Practical\n>uses of DISTINCT ON generally specify more ORDER BY columns than\n>there are DISTINCT ON columns, because the extra columns determine\n>which rows have priority to survive the DISTINCT filter. With the\n>above query, you have absolutely no idea which row will be output\n>for a given combination of val_9/created/flatid.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n", "msg_date": "Sun, 06 Feb 2005 18:26:30 +0100", "msg_from": "[email protected] (Dirk Lutzebaeck)", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" }, { "msg_contents": "> I'm doing VACUUM ANALYZE once a night. Before the tests I did VACUUM and \n> then ANALYZE.\n\nI'd suggest once an hour on any resonably active database...\n\nChris\n", "msg_date": "Wed, 09 Feb 2005 09:14:49 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query produces 1 GB temp file" } ]
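Dirk's closing idea in the thread above, keeping the wide flatobj column out of the sort and fetching it only for the rows that survive the LIMIT, could take roughly the shape below. This is only a sketch: it assumes df has a unique key column (written here as df.id, which the thread never names), it reuses the filter values from the thread, and it keeps the same DISTINCT ON/ORDER BY combination the original query used.

SELECT d.docstart, d.docindex, d.flatobj, k.bi_oid, k.en
FROM (
    -- sort and de-duplicate on narrow columns only; the wide flatobj stays out of the sort
    SELECT DISTINCT ON (df.val_9, df.created, df.flatid)
           df.id AS df_id, df.val_9, df.created, bi.oid AS bi_oid, bi.en
    FROM bi
    JOIN df ON bi.en = df.en
    WHERE bi.rc = 130170467
      AND bi.co = 117305223
      AND bi.hide = FALSE
      AND (df.val_2 = 'DG' OR df.val_2 = 'SK')
      AND df.docstart = 1
    ORDER BY df.val_9 ASC, df.created DESC
    LIMIT 1000 OFFSET 0
) AS k
JOIN df AS d ON d.id = k.df_id      -- fetch the wide flatobj only for the (at most) 1000 kept rows
ORDER BY k.val_9 ASC, k.created DESC;

As Tom points out above, the DISTINCT ON list is longer than the ORDER BY list, so which row survives a tie is still indeterminate; in practice one would add tie-breaking columns to the ORDER BY as well.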
[ { "msg_contents": "\n\n\n\nThis is probably a very trivial question and I feel foolish in even posting\nit, but I cannot seem to get it to work.\n\nSCENARIO (abstracted):\n\nTwo tables, \"summary\" and \"detail\". The schema of summary looks like:\n\nid int serial sequential record id\ncollect_date date date the detail events were collected\n\nThe schema of detail looks like:\n\nid int serial sequential record id\nsum_id int the id of the parent record in the summary table\ndetails text a particular event's details\n\nThe relationship is obvious. If I want to extract all the detail records\nfor a particular date (2/5/05), I construct a query as follows:\n\nSELECT * FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE\ncollect_date='2005-02-05';\n\nNow... I want to *delete* all the detail records for a particular date, I\ntried:\n\nDELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE\ncollect_date='2005-02-05';\n\nBut I keep getting a parser error. Am I not allowed to use JOINs in a\nDELETE statement, or am I just fat-fingering the SQL text somewhere. If\nI'm *not* allowed to use a JOIN with a DELETE, what is the best workaround?\nI want to delete just the records in the detail table, and not its parent\nsummary record.\n\nThanks in advance for your help,\n--- Steve\n___________________________________________________________________________________\n\nSteven Rosenstein\nIT Architect/Developer | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n", "msg_date": "Sun, 6 Feb 2005 12:16:13 -0500", "msg_from": "Steven Rosenstein <[email protected]>", "msg_from_op": true, "msg_subject": "Are JOINs allowed with DELETE FROM" }, { "msg_contents": "Steven Rosenstein wrote:\n> DELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE\n> collect_date='2005-02-05';\n\nDELETE FROM detail WHERE detail.sum_id in ( select id from summary )\nAND collect_date='2005-02-05';\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n", "msg_date": "Sun, 06 Feb 2005 18:36:03 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are JOINs allowed with DELETE FROM" }, { "msg_contents": "On Sun, Feb 06, 2005 at 12:16:13PM -0500, Steven Rosenstein wrote:\n>\n> DELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE\n> collect_date='2005-02-05';\n> \n> But I keep getting a parser error. Am I not allowed to use JOINs in a\n> DELETE statement, or am I just fat-fingering the SQL text somewhere.\n\nSee the documentation for DELETE:\n\nhttp://www.postgresql.org/docs/8.0/static/sql-delete.html\n\nIf you intend to delete the date's record from the summary table,\nthen the detail table could use a foreign key constraint defined\nwith ON DELETE CASCADE. 
Deleting a record from summary would then\nautomatically delete all associated records in detail.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Sun, 6 Feb 2005 10:50:29 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are JOINs allowed with DELETE FROM" }, { "msg_contents": "Gaetano Mendola wrote:\n\n> Steven Rosenstein wrote:\n>\n>> DELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE\n>> collect_date='2005-02-05';\n>\n>\nYou have to tell it what table you are deleting from. Select * from A\njoin B is both tables. What you want to do is fix the where clause.\n\n> DELETE FROM detail WHERE detail.sum_id in ( select id from summary )\n> AND collect_date='2005-02-05';\n>\nI'm guessing this should actually be\nDELETE FROM detail WHERE detail.sum_id in ( SELECT id FROM summary WHERE\ncollect_date='2005-02-05' );\nOtherwise you wouldn't really need the join.\n\nYou have to come up with a plan that yields rows that are in the table\nyou want to delete. The rows that result from\nselect * from detail join summary, contain values from both tables.\n\nIf you want to delete from both tables, I think this has to be 2\ndeletes. Probably best to be in a transaction.\n\nBEGIN;\nDELETE FROM detail WHERE ...\nDELETE FROM summary WHERE collect_date = '2005-02-05';\nCOMMIT;\n\n>\n> Regards\n> Gaetano Mendola\n>\nJohn\n=:->", "msg_date": "Sun, 06 Feb 2005 11:58:45 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are JOINs allowed with DELETE FROM" }, { "msg_contents": "\n\n\n\nHi Michael,\n\nThank you for the link to the documentation page. I forgot to mention that\nwe're still using version 7.3. When I checked the 7.3 documentation for\nDELETE, there was no mention of being able to use fields from different\ntables in a WHERE clause. This feature must have been added in a\nsubsequent release of PostgreSQL.\n\nGaetano & John: I *did* try your suggestion. However, there were so many\nsummary ID's returned (9810 to be exact) that the DELETE seemed to be\ntaking forever. 
Here's an actual SELECT query that I ran as a test:\n\nvsa=# vacuum analyze verbose vsa.tbl_win_patch_scan; [This is the\n\"summary\" table from my abstracted example]\nINFO: --Relation vsa.tbl_win_patch_scan--\nINFO: Pages 374: Changed 0, Empty 0; Tup 10485: Vac 0, Keep 0, UnUsed 0.\n Total CPU 0.01s/0.00u sec elapsed 0.00 sec.\nINFO: --Relation pg_toast.pg_toast_39384--\nINFO: Pages 62679: Changed 0, Empty 0; Tup 254116: Vac 0, Keep 0, UnUsed\n0.\n Total CPU 0.86s/0.21u sec elapsed 13.79 sec.\nINFO: Analyzing vsa.tbl_win_patch_scan\nVACUUM\nTime: 18451.32 ms\n\nvsa=# vacuum analyze verbose vsa.tbl_win_patch_scan_item; [This is the\n\"detail\" table from my abstracted example]\nINFO: --Relation vsa.tbl_win_patch_scan_item--\nINFO: Pages 110455: Changed 0, Empty 0; Tup 752066: Vac 0, Keep 0, UnUsed\n0.\n Total CPU 2.23s/0.45u sec elapsed 42.07 sec.\nINFO: --Relation pg_toast.pg_toast_39393--\nINFO: Pages 2464: Changed 0, Empty 0; Tup 14780: Vac 0, Keep 0, UnUsed 0.\n Total CPU 0.02s/0.02u sec elapsed 2.31 sec.\nINFO: Analyzing vsa.tbl_win_patch_scan_item\nVACUUM\nTime: 62075.52 ms\n\nvsa=# explain analyze SELECT * FROM vsa.tbl_win_patch_scan_item WHERE\nwin_patch_scan_id IN (SELECT id FROM vsa.tbl_win_patch_scan WHERE\nscan_datetime < '2004-09-01 00:00:00');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tbl_win_patch_scan_item (cost=0.00..379976970.68 rows=376033\nwidth=1150) (actual time=11.50..27373.29 rows=62 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=505.06..505.11 rows=4 width=4) (actual\ntime=0.00..0.00 rows=2 loops=752066)\n -> Seq Scan on tbl_win_patch_scan (cost=0.00..505.06 rows=4\nwidth=4) (actual time=0.03..11.16 rows=2 loops=1)\n Filter: (scan_datetime < '2004-09-01 00:00:00'::timestamp\nwithout time zone)\n Total runtime: 27373.65 msec\n(7 rows)\n\nTime: 27384.12 ms\n\nI put in a very early date (2004-09-01) because I knew there would be very\nfew rows to look at (2 rows in vsa.tbl_win_patch_scan meet the date\ncriteria, and a total of 62 rows in vsa.tbl_win_patch_scan_item match\neither of the two tbl_win_patch_scan ID's returned in the WHERE subquery).\nCan anyone see a way of optimizing this so that it runs faster? The real\ndate I should be using is 2004-12-06 (~60 days retention), and when I do\nuse it, the query seems to take forever. I ran a number explan analyzes\nwith different scan_datetimes, and it seems that the execution time\nincreases exponentially with the number of rows (ID's) returned by the\nsubquery. Running top shows that the EXPLAIN is entirely CPU-bound. 
There\nis no disk I/O during any query execution:\n\nDATE=2004-09-01; SUMMARY ROWS=2; DETAIL ROWS=62; TIME=27.37 sec (Included\ninitial query cache loading effect)\nvsa=# explain analyze SELECT * FROM vsa.tbl_win_patch_scan_item WHERE\nwin_patch_scan_id IN (SELECT id FROM vsa.tbl_win_patch_scan WHERE\nscan_datetime < '2004-09-01 00:00:00');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tbl_win_patch_scan_item (cost=0.00..379976970.68 rows=376033\nwidth=1150) (actual time=11.50..27373.29 rows=62 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=505.06..505.11 rows=4 width=4) (actual\ntime=0.00..0.00 rows=2 loops=752066)\n -> Seq Scan on tbl_win_patch_scan (cost=0.00..505.06 rows=4\nwidth=4) (actual time=0.03..11.16 rows=2 loops=1)\n Filter: (scan_datetime < '2004-09-01 00:00:00'::timestamp\nwithout time zone)\n Total runtime: 27373.65 msec\n(7 rows)\n\nTime: 27384.12 ms\n\nDATE=2004-09-02; SUMMARY ROWS=2; DETAIL ROWS=62; TIME=8.26 sec\nvsa=# explain analyze SELECT * FROM vsa.tbl_win_patch_scan_item WHERE\nwin_patch_scan_id IN (SELECT id FROM vsa.tbl_win_patch_scan WHERE\nscan_datetime < '2004-09-02 00:00:00');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tbl_win_patch_scan_item (cost=0.00..380115740.80 rows=376033\nwidth=1142) (actual time=10.42..8259.79 rows=62 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=505.06..505.48 rows=41 width=4) (actual\ntime=0.00..0.00 rows=2 loops=752066)\n -> Seq Scan on tbl_win_patch_scan (cost=0.00..505.06 rows=41\nwidth=4) (actual time=0.02..10.08 rows=2 loops=1)\n Filter: (scan_datetime < '2004-09-02 00:00:00'::timestamp\nwithout time zone)\n Total runtime: 8259.91 msec\n(7 rows)\n\nTime: 8263.52 ms\n\nDATE=2004-09-05; SUMMARY ROWS=3; DETAIL ROWS=93; TIME=5.61 sec\nvsa=# explain analyze SELECT * FROM vsa.tbl_win_patch_scan_item WHERE\nwin_patch_scan_id IN (SELECT id FROM vsa.tbl_win_patch_scan WHERE\nscan_datetime < '2004-09-05 00:00:00');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tbl_win_patch_scan_item (cost=0.00..380531977.65 rows=376033\nwidth=1142) (actual time=10.11..5616.68 rows=93 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=505.06..506.58 rows=152 width=4) (actual\ntime=0.00..0.00 rows=3 loops=752066)\n -> Seq Scan on tbl_win_patch_scan (cost=0.00..505.06 rows=152\nwidth=4) (actual time=0.02..10.05 rows=3 loops=1)\n Filter: (scan_datetime < '2004-09-05 00:00:00'::timestamp\nwithout time zone)\n Total runtime: 5616.81 msec\n(7 rows)\n\nTime: 5617.87 ms\n\nDATE=2004-09-15; SUMMARY ROWS=16; DETAIL ROWS=674; TIME=18.03 sec\nvsa=# explain analyze SELECT * FROM vsa.tbl_win_patch_scan_item WHERE\nwin_patch_scan_id IN (SELECT id FROM vsa.tbl_win_patch_scan WHERE\nscan_datetime < '2004-09-15 00:00:00');\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tbl_win_patch_scan_item (cost=0.00..381919433.78 rows=376033\nwidth=1142) (actual time=10.18..18032.25 rows=674 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=505.06..510.27 rows=521 width=4) (actual\ntime=0.00..0.01 rows=16 loops=752066)\n -> Seq Scan on tbl_win_patch_scan (cost=0.00..505.06 
rows=521\nwidth=4) (actual time=0.02..10.11 rows=16 loops=1)\n Filter: (scan_datetime < '2004-09-15 00:00:00'::timestamp\nwithout time zone)\n Total runtime: 18032.72 msec\n(7 rows)\n\nTime: 18033.78 ms\n\nDATE=2004-09-16; SUMMARY ROWS=25; DETAIL ROWS=1131; TIME=26.22 sec\nvsa=# explain analyze SELECT * FROM vsa.tbl_win_patch_scan_item WHERE\nwin_patch_scan_id IN (SELECT id FROM vsa.tbl_win_patch_scan WHERE\nscan_datetime < '2004-09-16 00:00:00');\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tbl_win_patch_scan_item (cost=0.00..382058179.39 rows=376033\nwidth=1142) (actual time=6.14..26218.56 rows=1131 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=505.06..510.64 rows=558 width=4) (actual\ntime=0.00..0.01 rows=25 loops=752066)\n -> Seq Scan on tbl_win_patch_scan (cost=0.00..505.06 rows=558\nwidth=4) (actual time=0.01..6.09 rows=25 loops=1)\n Filter: (scan_datetime < '2004-09-16 00:00:00'::timestamp\nwithout time zone)\n Total runtime: 26219.24 msec\n(7 rows)\n\nTime: 26220.44 ms\n\nDATE=2004-09-17; SUMMARY ROWS=34; DETAIL ROWS=1588; TIME=34.97 sec\nvsa=# explain analyze SELECT * FROM vsa.tbl_win_patch_scan_item WHERE\nwin_patch_scan_id IN (SELECT id FROM vsa.tbl_win_patch_scan WHERE\nscan_datetime < '2004-09-17 00:00:00');\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tbl_win_patch_scan_item (cost=0.00..382196925.01 rows=376033\nwidth=1142) (actual time=10.25..34965.95 rows=1588 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=505.06..511.01 rows=595 width=4) (actual\ntime=0.00..0.02 rows=34 loops=752066)\n -> Seq Scan on tbl_win_patch_scan (cost=0.00..505.06 rows=595\nwidth=4) (actual time=0.02..10.16 rows=34 loops=1)\n Filter: (scan_datetime < '2004-09-17 00:00:00'::timestamp\nwithout time zone)\n Total runtime: 34966.90 msec\n(7 rows)\n\nTime: 34967.95 ms\n\n\nWhat I may end up doing is using the scripting language PHP to solve the\nissue by running one query just to return the summary table ID's, and then\nDELETE all the rows matching each ID individually by looping through the\nID's. I was looking for something more elegant, but this will work if its\nthe only solution.\n\nThank you all for your help with this.\n--- Steve\n\n___________________________________________________________________________________\n\nSteven Rosenstein\nIT Architect/Developer | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n\n \n Michael Fuhr \n <[email protected]> \n To \n 02/06/2005 12:50 Steven Rosenstein/New \n PM York/IBM@IBMUS \n cc \n [email protected] \n Subject \n Re: [PERFORM] Are JOINs allowed \n with DELETE FROM \n \n \n \n \n \n \n\n\n\n\nOn Sun, Feb 06, 2005 at 12:16:13PM -0500, Steven Rosenstein wrote:\n>\n> DELETE FROM detail JOIN summary ON (summary.id=detail.sum_id) WHERE\n> collect_date='2005-02-05';\n>\n> But I keep getting a parser error. 
Am I not allowed to use JOINs in a\n> DELETE statement, or am I just fat-fingering the SQL text somewhere.\n\nSee the documentation for DELETE:\n\nhttp://www.postgresql.org/docs/8.0/static/sql-delete.html\n\nIf you intend to delete the date's record from the summary table,\nthen the detail table could use a foreign key constraint defined\nwith ON DELETE CASCADE. Deleting a record from summary would then\nautomatically delete all associated records in detail.\n\n--\nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n\n\n", "msg_date": "Sun, 6 Feb 2005 14:33:16 -0500", "msg_from": "Steven Rosenstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Are JOINs allowed with DELETE FROM" }, { "msg_contents": "Steven Rosenstein <[email protected]> writes:\n> Thank you for the link to the documentation page. I forgot to mention that\n> we're still using version 7.3. When I checked the 7.3 documentation for\n> DELETE, there was no mention of being able to use fields from different\n> tables in a WHERE clause. This feature must have been added in a\n> subsequent release of PostgreSQL.\n\nNo, it's been there all along, if perhaps not well documented.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Feb 2005 14:49:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are JOINs allowed with DELETE FROM " }, { "msg_contents": "\n\n\n\nMany thanks to Gaetano Mendola and Tom Lane for the hints about using\nfields from other tables in a DELETE's WHERE clause. That was the magic\nbullet I needed, and my application is working as expected.\n\n--- Steve\n___________________________________________________________________________________\n\nSteven Rosenstein\nIT Architect/Developer | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n\n \n Tom Lane \n <[email protected] \n s> To \n Sent by: Steven Rosenstein/New \n pgsql-performance York/IBM@IBMUS \n -owner@postgresql cc \n .org [email protected] \n Subject \n Re: [PERFORM] Are JOINs allowed \n 02/06/2005 02:49 with DELETE FROM \n PM \n \n \n \n \n \n\n\n\n\nSteven Rosenstein <[email protected]> writes:\n> Thank you for the link to the documentation page. I forgot to mention\nthat\n> we're still using version 7.3. When I checked the 7.3 documentation for\n> DELETE, there was no mention of being able to use fields from different\n> tables in a WHERE clause. This feature must have been added in a\n> subsequent release of PostgreSQL.\n\nNo, it's been there all along, if perhaps not well documented.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n\n", "msg_date": "Sun, 6 Feb 2005 16:57:44 -0500", "msg_from": "Steven Rosenstein <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Are JOINs allowed with DELETE FROM" }, { "msg_contents": "Steven Rosenstein wrote:\n >\n >\n >\n > Hi Michael,\n >\n > Thank you for the link to the documentation page. I forgot to mention that\n > we're still using version 7.3. When I checked the 7.3 documentation for\n > DELETE, there was no mention of being able to use fields from different\n > tables in a WHERE clause. 
This feature must have been added in a\n > subsequent release of PostgreSQL.\n >\n > Gaetano & John: I *did* try your suggestion. However, there were so many\n > summary ID's returned (9810 to be exact) that the DELETE seemed to be\n > taking forever.\n\n7.3 is affected by bad performance if you use IN.\nTransform the IN into an EXISTS construct.\n\nIf it is an option for you, upgrade your DB engine.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 07 Feb 2005 18:22:50 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are JOINs allowed with DELETE FROM" } ]
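The advice in this thread comes down to two concrete options on 7.3. The sketch below reuses the summary/detail names from the abstracted example; the constraint name is made up for illustration, and it assumes detail.sum_id really does reference summary.id:

-- Option 1 (the foreign-key route): let the database cascade the delete.
-- Adding the constraint validates every existing detail row, so expect the
-- one-time ALTER to take a while on a large table.
ALTER TABLE detail
  ADD CONSTRAINT detail_sum_id_fkey
  FOREIGN KEY (sum_id) REFERENCES summary (id) ON DELETE CASCADE;

DELETE FROM summary WHERE collect_date = '2005-02-05';

-- Option 2 (the EXISTS route): avoid IN (subselect) on 7.3 by using a
-- correlated EXISTS, which probes summary once per detail row.
DELETE FROM detail
WHERE EXISTS (SELECT 1 FROM summary
              WHERE summary.id = detail.sum_id
                AND summary.collect_date = '2005-02-05');

With an index on summary.id (normally its primary key) the EXISTS probes become index lookups instead of the materialize-and-rescan behaviour 7.3 shows for IN above; for the cascade variant, an index on detail.sum_id is what keeps each summary deletion from scanning the whole detail table.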
[ { "msg_contents": "\n\n\n\nWhile working on a previous question I posed to this group, I ran a number\nof EXPLAIN ANALYZE's to provide as examples. After sending up my last\nemail, I ran the same query *without* EXPLAIN ANALYZE. The runtimes were\nvastly different. In the following example, I ran two identical queries\none right after the other. The runtimes for both was very close (44.77\nsec). I then immediately ran the exact same query, but without EXPLAIN\nANALYZE. The same number of rows was returned, but the runtime was only\n8.7 sec. I don't think EXPLAIN ANALYZE puts that much overhead on a query.\nDoes anyone have any idea what is going on here?\n\n--- Steve\n\nvsa=# explain analyze SELECT id,win_patch_scan_id FROM\nvsa.tbl_win_patch_scan_item WHERE win_patch_scan_id IN (SELECT id FROM\nvsa.tbl_win_patch_scan WHERE scan_datetime < '2004-09-18 00:00:00');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tbl_win_patch_scan_item (cost=0.00..382335670.62 rows=376033\nwidth=8) (actual time=10.18..44773.22 rows=2045 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=505.06..511.38 rows=632 width=4) (actual\ntime=0.00..0.02 rows=43 loops=752066)\n -> Seq Scan on tbl_win_patch_scan (cost=0.00..505.06 rows=632\nwidth=4) (actual time=0.02..10.09 rows=43 loops=1)\n Filter: (scan_datetime < '2004-09-18 00:00:00'::timestamp\nwithout time zone)\n Total runtime: 44774.49 msec\n(7 rows)\n\nTime: 44775.62 ms\n\n\nvsa=# explain analyze SELECT id,win_patch_scan_id FROM\nvsa.tbl_win_patch_scan_item WHERE win_patch_scan_id IN (SELECT id FROM\nvsa.tbl_win_patch_scan WHERE scan_datetime < '2004-09-18 00:00:00');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on tbl_win_patch_scan_item (cost=0.00..382335670.62 rows=376033\nwidth=8) (actual time=10.18..44765.36 rows=2045 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=505.06..511.38 rows=632 width=4) (actual\ntime=0.00..0.02 rows=43 loops=752066)\n -> Seq Scan on tbl_win_patch_scan (cost=0.00..505.06 rows=632\nwidth=4) (actual time=0.02..10.10 rows=43 loops=1)\n Filter: (scan_datetime < '2004-09-18 00:00:00'::timestamp\nwithout time zone)\n Total runtime: 44766.62 msec\n(7 rows)\n\nTime: 44767.71 ms\n\n\nvsa=# SELECT id,win_patch_scan_id FROM vsa.tbl_win_patch_scan_item WHERE\nwin_patch_scan_id IN (SELECT id FROM vsa.tbl_win_patch_scan WHERE\nscan_datetime < '2004-09-18 00:00:00');\n id | win_patch_scan_id\n--------+-------------------\n 1 | 1\n 2 | 1\n 3 | 1\n 4 | 1\n 5 | 1\n----------8< SNIP --------------\n 211 | 7\n 212 | 7\n 213 | 7\n 214 | 7\n 215 | 7\n 216 | 7\n 217 | 7\n 692344 | 9276\n 692345 | 9276\n 692346 | 9276\n 692347 | 9276\n 692348 | 9276\n----------8< SNIP --------------\n 694167 | 9311\n 694168 | 9311\n 694169 | 9311\n 694170 | 9311\n 694171 | 9311\n(2045 rows)\n\nTime: 8703.56 ms\nvsa=#\n___________________________________________________________________________________\n\nSteven Rosenstein\nIT Architect/Developer | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n", "msg_date": "Sun, 6 Feb 2005 14:50:56 -0500", "msg_from": "Steven 
Rosenstein <[email protected]>", "msg_from_op": true, "msg_subject": "Can the V7.3 EXPLAIN ANALYZE be trusted?" } ]
[ { "msg_contents": "\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]]On Behalf Of Steven Rosenstein\nSent: Sunday, February 06, 2005 8:51 PM\nTo: [email protected]\nSubject: [PERFORM] Can the V7.3 EXPLAIN ANALYZE be trusted?\n\n\n\n\n\n\nWhile working on a previous question I posed to this group, I ran a number\nof EXPLAIN ANALYZE's to provide as examples. After sending up my last\nemail, I ran the same query *without* EXPLAIN ANALYZE. The runtimes were\nvastly different. In the following example, I ran two identical queries\none right after the other. The runtimes for both was very close (44.77\nsec). I then immediately ran the exact same query, but without EXPLAIN\nANALYZE. The same number of rows was returned, but the runtime was only\n8.7 sec. I don't think EXPLAIN ANALYZE puts that much overhead on a query.\nDoes anyone have any idea what is going on here?\n\n--- Steve\n\n\nCaching by the OS?\n\n(Did you try to *first* run the query w/o EXPLAIN ANALYZE, and then with? What's the timing if you do that?)\n\n--Tim\n", "msg_date": "Sun, 6 Feb 2005 23:09:52 +0100", "msg_from": "\"Leeuw van der, Tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can the V7.3 EXPLAIN ANALYZE be trusted?" }, { "msg_contents": "\n\"Leeuw van der, Tim\" <[email protected]> writes:\n\n> I don't think EXPLAIN ANALYZE puts that much overhead on a query.\n\nEXPLAIN ANALYZE does indeed impose a significant overhead. What percentage of\nthe time is overhead depends heavily on how much i/o the query is doing.\n\nFor queries that are primarily cpu bound because they're processing data from\nthe cache it can be substantial. If all the data is in the shared buffers then\nthe gettimeofday calls for explain analyze can be just about the only syscalls\nbeing executed and they're executed a lot.\n\nIt would be interesting to try to subtract out the profiling overhead from the\ndata like most profilers do. But it's not an easy thing to do since the times\nare nested.\n\n-- \ngreg\n\n", "msg_date": "06 Feb 2005 17:34:47 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can the V7.3 EXPLAIN ANALYZE be trusted?" }, { "msg_contents": "> From: [email protected] [mailto:[email protected]]On Behalf Of Steven Rosenstein\n> >> I don't think EXPLAIN ANALYZE puts that much overhead on a query.\n\nI think you're being overly optimistic. The explain shows that the\nMaterialize subnode is being entered upwards of 32 million times:\n\n -> Materialize (cost=505.06..511.38 rows=632 width=4) (actual time=0.00..0.02 rows=43 loops=752066)\n\n43 * 752066 = 32338838. The instrumentation overhead is basically two\ngettimeofday() kernel calls per node entry. Doing the math shows that\nyour machine is able to do gettimeofday() in about half a microsecond,\nwhich isn't stellar but it's not all that slow for a kernel call.\n(What's the platform here, anyway?) Nonetheless it's a couple of times\nlarger than the actual time needed to pull a row from a materialized\narray ...\n\nThe real answer to your question is \"IN (subselect) sucks before PG 7.4;\nget a newer release\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Feb 2005 17:46:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can the V7.3 EXPLAIN ANALYZE be trusted? " }, { "msg_contents": "\n\n\n\nYou're probably right about my being overly optimistic about the load\nimposed by EXPLAIN ANALYZE. 
It was just that in my previous experience\nwith it, I'd never seen such a large runtime discrepancy before. I even\nallowed for a \"caching effect\" by making sure the server was all but\nquiescent, and then running the three queries as quickly after one another\nas I could.\n\nThe server itself is an IBM x345 with dual Xeon 3ghz CPU's (hyperthreading\nturned off) and 2.5gb of RAM. O/S is RHEL3 Update 4. Disks are a\nServeRAID of some flavor, I'm not sure what.\n\nThanks for the heads-up about the performance of IN in 7.3. We're looking\nto migrate to 8.0 or 8.0.1 when they become GA, but some of our databases\nare in excess of 200gb-300gb, and we need to make sure we have a good\nmigration plan in place (space to store the dump out of the 7.3 db) before\nwe start.\n___________________________________________________________________________________\n\nSteven Rosenstein\nIT Architect/Developer | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n\n \n Tom Lane \n <[email protected] \n s> To \n Steven Rosenstein/New \n 02/06/2005 05:46 York/IBM@IBMUS \n PM cc \n [email protected] \n Subject \n Re: [PERFORM] Can the V7.3 EXPLAIN \n ANALYZE be trusted? \n \n \n \n \n \n \n\n\n\n\n> From: [email protected]\n[mailto:[email protected]]On Behalf Of Steven\nRosenstein\n> >> I don't think EXPLAIN ANALYZE puts that much overhead on a query.\n\nI think you're being overly optimistic. The explain shows that the\nMaterialize subnode is being entered upwards of 32 million times:\n\n -> Materialize (cost=505.06..511.38 rows=632 width=4) (actual\ntime=0.00..0.02 rows=43 loops=752066)\n\n43 * 752066 = 32338838. The instrumentation overhead is basically two\ngettimeofday() kernel calls per node entry. Doing the math shows that\nyour machine is able to do gettimeofday() in about half a microsecond,\nwhich isn't stellar but it's not all that slow for a kernel call.\n(What's the platform here, anyway?) Nonetheless it's a couple of times\nlarger than the actual time needed to pull a row from a materialized\narray ...\n\nThe real answer to your question is \"IN (subselect) sucks before PG 7.4;\nget a newer release\".\n\n regards, tom lane\n\n\n", "msg_date": "Sun, 6 Feb 2005 21:43:09 -0500", "msg_from": "Steven Rosenstein <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can the V7.3 EXPLAIN ANALYZE be trusted?" 
}, { "msg_contents": "Hi, @all,\n\nGreg Stark schrieb:\n> \"Leeuw van der, Tim\" <[email protected]> writes:\n>\n>>I don't think EXPLAIN ANALYZE puts that much overhead on a query.\n>\n> EXPLAIN ANALYZE does indeed impose a significant overhead.\n\nAdditional note:\n\nIn some rare cases, you can experience just the opposite effect, explain\nanalyze can be quicker then the actual query.\n\nThis is the case for rather expensive send/output functions, like the\nPostGIS ones:\n\nlwgeom=# \\timing\nZeitmessung ist an.\nlwgeom=# explain analyze select setsrid(geom,4326) from adminbndy1;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------\n Seq Scan on adminbndy1 (cost=0.00..4.04 rows=83 width=89) (actual\ntime=11.793..2170.184 rows=83 loops=1)\n Total runtime: 2170.834 ms\n(2 Zeilen)\n\nZeit: 2171,688 ms\nlwgeom=# \\o /dev/null\nlwgeom=# select setsrid(geom,4326) from adminbndy1;\nZeit: 9681,001 ms\n\n\nBTW: I use the cheap setsrid(geom,4326) to force deTOASTing of the\ngeometry column. Not using it seems to ignore TOASTed columns in\nsequential scan simulation.)\n\nlwgeom=# explain analyze select geom from adminbndy1;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------\n Seq Scan on adminbndy1 (cost=0.00..3.83 rows=83 width=89) (actual\ntime=0.089..0.499 rows=83 loops=1)\n Total runtime: 0.820 ms\n(2 Zeilen)\n\n\nMarkus\n\n--\nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 z�rich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com", "msg_date": "Mon, 07 Feb 2005 15:39:15 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can the V7.3 EXPLAIN ANALYZE be trusted?" } ]
[ { "msg_contents": "Hi all,\n I am facing a strange problem when I run EXPLAIN against a table\nhaving more than 100000 records. The query have lot of OR conditions\nand when parts of the query is removed it is using index. To analyse\nit I created a table with a single column, inserted 100000\nrecords(random number) in it created index and run a query which\nreturns 1 record which have no or condition and it was using index. I\nadded an OR conditon and is using sequential scan. I set the\nenable_seqscan to off. I ran the tests again and is using index scan.\n So which one I have to use. Is this any bug in Explain.\n\nrgds\nAntony Paul.\n", "msg_date": "Mon, 7 Feb 2005 14:37:15 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Index not used with or condition" }, { "msg_contents": "On more investigation I found that index scan is not used if the query\nhave a function in it like lower() and an index exist for lower()\ncolumn.\n\nrgds\nAntony Paul\n\n\nOn Mon, 7 Feb 2005 14:37:15 +0530, Antony Paul <[email protected]> wrote:\n> Hi all,\n> I am facing a strange problem when I run EXPLAIN against a table\n> having more than 100000 records. The query have lot of OR conditions\n> and when parts of the query is removed it is using index. To analyse\n> it I created a table with a single column, inserted 100000\n> records(random number) in it created index and run a query which\n> returns 1 record which have no or condition and it was using index. I\n> added an OR conditon and is using sequential scan. I set the\n> enable_seqscan to off. I ran the tests again and is using index scan.\n> So which one I have to use. Is this any bug in Explain.\n> \n> rgds\n> Antony Paul.\n>\n", "msg_date": "Mon, 7 Feb 2005 16:44:07 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index not used with or condition" }, { "msg_contents": "On Mon, Feb 07, 2005 at 04:44:07PM +0530, Antony Paul wrote:\n> On more investigation I found that index scan is not used if the query\n> have a function in it like lower() and an index exist for lower()\n> column.\n\nWhat version are you using? 8.0 had fixes for this situation.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 7 Feb 2005 12:46:05 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used with or condition" }, { "msg_contents": "It depends on many circumstances, but, at first, simple question: Did \nyou run vacuum analyze?\nI am satisfied with functional indexes - it works in my pg 7.4.x.\n\nAntony Paul wrote:\n\n>On more investigation I found that index scan is not used if the query\n>have a function in it like lower() and an index exist for lower()\n>column.\n>\n>rgds\n>Antony Paul\n>\n>\n>On Mon, 7 Feb 2005 14:37:15 +0530, Antony Paul <[email protected]> wrote:\n> \n>\n>>Hi all,\n>> I am facing a strange problem when I run EXPLAIN against a table\n>>having more than 100000 records. The query have lot of OR conditions\n>>and when parts of the query is removed it is using index. To analyse\n>>it I created a table with a single column, inserted 100000\n>>records(random number) in it created index and run a query which\n>>returns 1 record which have no or condition and it was using index. I\n>>added an OR conditon and is using sequential scan. I set the\n>>enable_seqscan to off. I ran the tests again and is using index scan.\n>> So which one I have to use. 
Is this any bug in Explain.\n>>\n>>rgds\n>>Antony Paul.\n>>\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n> \n>\n", "msg_date": "Mon, 07 Feb 2005 12:53:30 +0100", "msg_from": "Jan Poslusny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used with or condition" }, { "msg_contents": "Sorry I forgot to mention it. I am using 7.3.3. I will try it in 8.0.0\n\nrgds\nAntony Paul\n \n\n\nOn Mon, 7 Feb 2005 12:46:05 +0100, Steinar H. Gunderson\n<[email protected]> wrote:\n> On Mon, Feb 07, 2005 at 04:44:07PM +0530, Antony Paul wrote:\n> > On more investigation I found that index scan is not used if the query\n> > have a function in it like lower() and an index exist for lower()\n> > column.\n> \n> What version are you using? 8.0 had fixes for this situation.\n> \n> /* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n", "msg_date": "Mon, 7 Feb 2005 17:27:13 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index not used with or condition" }, { "msg_contents": "I ran analyze; several times.\n\nrgds\nAntony Paul\n\n\nOn Mon, 07 Feb 2005 12:53:30 +0100, Jan Poslusny <[email protected]> wrote:\n> It depends on many circumstances, but, at first, simple question: Did\n> you run vacuum analyze?\n> I am satisfied with functional indexes - it works in my pg 7.4.x.\n> \n> Antony Paul wrote:\n> \n> >On more investigation I found that index scan is not used if the query\n> >have a function in it like lower() and an index exist for lower()\n> >column.\n> >\n> >rgds\n> >Antony Paul\n> >\n> >\n> >On Mon, 7 Feb 2005 14:37:15 +0530, Antony Paul <[email protected]> wrote:\n> >\n> >\n> >>Hi all,\n> >> I am facing a strange problem when I run EXPLAIN against a table\n> >>having more than 100000 records. The query have lot of OR conditions\n> >>and when parts of the query is removed it is using index. To analyse\n> >>it I created a table with a single column, inserted 100000\n> >>records(random number) in it created index and run a query which\n> >>returns 1 record which have no or condition and it was using index. I\n> >>added an OR conditon and is using sequential scan. I set the\n> >>enable_seqscan to off. I ran the tests again and is using index scan.\n> >> So which one I have to use. Is this any bug in Explain.\n> >>\n> >>rgds\n> >>Antony Paul.\n> >>\n> >>\n> >>\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n> >\n> >\n>\n", "msg_date": "Mon, 7 Feb 2005 17:28:14 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index not used with or condition" } ]
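For completeness, here are the two workarounds that usually helped on 7.3-era planners, sketched against a hypothetical table t with a text column col (the names are illustrative, not taken from the thread):

CREATE INDEX t_lower_col_idx ON t (lower(col));
ANALYZE t;

-- If an OR-ed predicate still falls back to a sequential scan, splitting it
-- into a UNION lets each branch use the index on its own:
SELECT * FROM t WHERE lower(col) = 'foo'
UNION ALL
SELECT * FROM t WHERE lower(col) = 'bar';

UNION ALL is only safe because the two branches cannot match the same row; use UNION (or keep the OR) when they can overlap. As the replies note, moving past 7.3 is the cleaner fix.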
[ { "msg_contents": "Hi every one,\n\n\nWhy does this take forever (each query is sub second when done seperately)? \nIs it because I cannot open two cursors in the same transaction?\n\nbegin;\n\ndeclare SQL_CUR01 cursor for \nSELECT A.ordernummer, B.klantnummer FROM \"orders\" A LEFT OUTER JOIN \"klt_alg\" B ON A.Klantnummer=B.Klantnummer ORDER BY A.klantnummer;\nfetch 100 in SQL_CUR01;\n\ndeclare SQL_CUR02 cursor for \nSELECT A.ordernummer, B.klantnummer FROM \"orders\" A LEFT OUTER JOIN \"klt_alg\" B ON A.Klantnummer=B.Klantnummer ORDER BY A.klantnummer desc;\nfetch 100 in SQL_CUR02;\n\ncommit;\n\n\nTIA\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n", "msg_date": "Mon, 7 Feb 2005 12:36:24 +0100", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "Is this possible / slow performance?" } ]
[ { "msg_contents": "Hi all,\n\nA retry of the question asked before. All tables freshly vacuumed an analized. \n\nTwo queries: one with \"set enable_seqscan = on\" , the other with \"set enable_seqscan = off\". The first query lasts 59403 ms, the second query 31 ms ( the desc order variant has the same large difference: 122494 ms vs. 1297 ms). (for the query plans see below).\n\nCan I, without changing the SQL (because it is generated by a tool) or explicitely setting \"set enable_seqscan = off\" for this query, trick PostgreSQL in taking the fast variant of the queryplan?\n\nTIA\n\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n\n\n------------------------------- Query 1\n\nbegin;\nset enable_seqscan = on;\ndeclare SQL_CUR01 cursor for \nSELECT A.ordernummer, B.klantnummer FROM \"orders\" A LEFT OUTER JOIN \"klt_alg\" B ON A.Klantnummer=B.Klantnummer \nORDER BY A.klantnummer;\nfetch 100 in SQL_CUR01;\ncommit;\n\nQUERY PLAN\nSort (cost=259968.77..262729.72 rows=1104380 width=12)\n Sort Key: a.klantnummer, a.ordernummer\n -> Hash Left Join (cost=42818.43..126847.70 rows=1104380 width=12)\n Hash Cond: (\"outer\".klantnummer = \"inner\".klantnummer)\n -> Seq Scan on orders a (cost=0.00..46530.79 rows=1104379 width=8)\n -> Hash (cost=40635.14..40635.14 rows=368914 width=4)\n -> Seq Scan on klt_alg b (cost=0.00..40635.14 rows=368914 width=4)\n\nActual running time: 59403 ms.\n\n------------------------------- Query 2\n\nbegin;\nset enable_seqscan = off;\ndeclare SQL_CUR01 cursor for \nSELECT A.ordernummer, B.klantnummer FROM \"orders\" A LEFT OUTER JOIN \"klt_alg\" B ON A.Klantnummer=B.Klantnummer \nORDER BY A.klantnummer;\nfetch 100 in SQL_CUR01;\ncommit;\n\nQUERY PLAN\nMerge Left Join (cost=0.00..2586604.86 rows=1104380 width=12)\n Merge Cond: (\"outer\".klantnummer = \"inner\".klantnummer)\n -> Index Scan using orders_klantnummer on orders a (cost=0.00..2435790.17 rows=1104379 width=8)\n -> Index Scan using klt_alg_klantnummer on klt_alg b (cost=0.00..44909.11 rows=368914 width=4)\n\nActual running time: 31 ms.\n\n\n", "msg_date": "Mon, 7 Feb 2005 17:37:53 +0100", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "Retry: Is this possible / slow performance?" }, { "msg_contents": "\"Joost Kraaijeveld\" <[email protected]> writes:\n> Two queries: one with \"set enable_seqscan = on\" , the other with \"set enable_seqscan = off\". The first query lasts 59403 ms, the second query 31 ms ( the desc order variant has the same large difference: 122494 ms vs. 1297 ms). (for the query plans see below).\n\nThe reason for the difference is that the mergejoin plan has a much\nlower startup cost than the hash plan, and since you're only fetching\n100 rows the startup cost is dominant. IIRC the planner does make some\nallowance for this effect when preparing a DECLARE CURSOR plan (ie,\nit puts some weight on startup cost rather than considering only total\ncost) ... but it's not so optimistic as to assume that you only want 100\nout of an estimated 1 million+ result rows.\n\nThe best solution is probably to put a LIMIT into the DECLARE CURSOR,\nso that the planner can see how much you intend to fetch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Feb 2005 12:03:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Retry: Is this possible / slow performance? 
" }, { "msg_contents": "\n\tDoes the planner also take into account that the Hash Join will need a \nhuge temporary space which will exist for the whole length of the cursor \nexistence (which may be quite long if he intends to fetch everything), \nwhereas the Merge Join should need very little space as it is sending the \nrows as it fetches them using the Indexes ?\n\n\n\n\nOn Mon, 07 Feb 2005 12:03:56 -0500, Tom Lane <[email protected]> wrote:\n\n> \"Joost Kraaijeveld\" <[email protected]> writes:\n>> Two queries: one with \"set enable_seqscan = on\" , the other with \"set \n>> enable_seqscan = off\". The first query lasts 59403 ms, the second query \n>> 31 ms ( the desc order variant has the same large difference: 122494 ms \n>> vs. 1297 ms). (for the query plans see below).\n>\n> The reason for the difference is that the mergejoin plan has a much\n> lower startup cost than the hash plan, and since you're only fetching\n> 100 rows the startup cost is dominant. IIRC the planner does make some\n> allowance for this effect when preparing a DECLARE CURSOR plan (ie,\n> it puts some weight on startup cost rather than considering only total\n> cost) ... but it's not so optimistic as to assume that you only want 100\n> out of an estimated 1 million+ result rows.\n>\n> The best solution is probably to put a LIMIT into the DECLARE CURSOR,\n> so that the planner can see how much you intend to fetch.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n\n\n", "msg_date": "Mon, 07 Feb 2005 19:16:47 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Retry: Is this possible / slow performance? " } ]
[ { "msg_contents": " \n>> The best solution is probably to put a LIMIT into the DECLARE CURSOR,\n>> so that the planner can see how much you intend to fetch.\nI assume that this limits the resultset to a LIMIT. That is not what I was hoping for. I was hoping for a way to scrolll throught the whole tables with orders.\n\nI have tested, and if one really wants the whole table the query with \"set enable_seqscan = on\" lasts 137 secs, the query with \"set enable_seqscan = off\" lasts 473 secs, so (alas), the planner is right. \n\nI sure would like to have ISAM like behaviour once in a while.\n\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n", "msg_date": "Mon, 7 Feb 2005 20:27:18 +0100", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Retry: Is this possible / slow performance? " } ]
[ { "msg_contents": "> >> The best solution is probably to put a LIMIT into the DECLARE\nCURSOR,\n> >> so that the planner can see how much you intend to fetch.\n> I assume that this limits the resultset to a LIMIT. That is not what I\nwas\n> hoping for. I was hoping for a way to scrolll throught the whole\ntables\n> with orders.\n> \n> I have tested, and if one really wants the whole table the query with\n\"set\n> enable_seqscan = on\" lasts 137 secs, the query with \"set\nenable_seqscan =\n> off\" lasts 473 secs, so (alas), the planner is right.\n> \n> I sure would like to have ISAM like behaviour once in a while.\n\nThen stop using cursors. A few months back I detailed the relative\nmerits of using Cursors v. Queries to provide ISAM like functionality\nand Queries win hands down. Right now I am using pg as an ISAM backend\nfor a relatively old and large COBOL ERP via a C++ ISAM driver, for\nwhich a publicly available version of the source will be available Real\nSoon Now :-).\n\nMerlin\n", "msg_date": "Mon, 7 Feb 2005 14:52:53 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Retry: Is this possible / slow performance? " } ]
[ { "msg_contents": "Hi all, we have an Sun E3500 running Solaris 9. It's got 6x336MHz CPU and\n10GB RAM.\n\nI would like to know what /etc/system and postgresql_conf values are\nrecommended to deliver as much system resource as possible to Postgres. We\nuse this Sun box solely for single user Postgres data warehousing\nworkloads.\n\nChanges made to /etc/system values are:\n\nset shmsys:shminfo_shmmax=0xffffffff\nset shmsys:shminfo_shmmni=256\nset shmsys:shminfo_shmseg=256\nset shmsys:shminfo_shmmin=1\nset semsys:seminfo_semmap=256\nset semsys:seminfo_semmni=512\nset semsys:seminfo_semmns=512\nset semsys:seminfo_semmsl=32\n\nChanges made to postgresql.conf are:\n\nshared_buffers = 500000\nsort_mem = 2097152\nvacuum_mem = 1000000\n\nThanks.\n", "msg_date": "Mon, 7 Feb 2005 22:25:17 -0000 (GMT)", "msg_from": "\"Paul Johnson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Solaris 9 tuning" }, { "msg_contents": "Paul,\n\n> I would like to know what /etc/system and postgresql_conf values are\n> recommended to deliver as much system resource as possible to Postgres. We\n> use this Sun box solely for single user Postgres data warehousing\n> workloads.\n\nWhat's your disk system?\n\n> shared_buffers = 500000\n\nThis is highly unlikely to be optimal. That's 3GB. On test linux systems \nup to 8GB, we've not seen useful values of shared buffers anywhere above \n400mb. How did you arrive at that figure?\n\n> sort_mem = 2097152\n> vacuum_mem = 1000000\n\nThese could be fine on a single-user system. sort_mem is per *sort* though, \nnot per query, so you'd need to watch out for complex queries spillling into \nswap; perhaps set it a 0.5GB or 1GB?\n\nOtherwise, start with the config guide at www.powerpostgresql.com/PerfList\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 7 Feb 2005 18:22:42 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris 9 tuning" }, { "msg_contents": "Hi Josh, there are 8 internal disks - all are 18GB@10,000 RPM, fibre\nconnected.\n\nThe O/S is on 2 mirrored disks, the Postgres cluster is on the /data1\nfilesystem that is striped across the other 6 disks.\n\nThe shared_buffers value is a semi-educated guess based on having made 4GB\nshared memory available via /etc/system, and having read all we could find\non various web sites.\n\nShould I knock it down to 400MB as you suggest?\n\nI'll check out that URL.\n\nCheers,\n\nPaul.\n\n> Paul,\n>> I would like to know what /etc/system and postgresql_conf values are\nrecommended to deliver as much system resource as possible to Postgres.\nWe\n>> use this Sun box solely for single user Postgres data warehousing\nworkloads.\n> What's your disk system?\n>> shared_buffers = 500000\n> This is highly unlikely to be optimal. That's 3GB. On test linux\nsystems\n> up to 8GB, we've not seen useful values of shared buffers anywhere above\n400mb. How did you arrive at that figure?\n>> sort_mem = 2097152\n>> vacuum_mem = 1000000\n> These could be fine on a single-user system. 
sort_mem is per *sort*\nthough,\n> not per query, so you'd need to watch out for complex queries spillling\ninto\n> swap; perhaps set it a 0.5GB or 1GB?\n> Otherwise, start with the config guide at\nwww.powerpostgresql.com/PerfList\n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n\n\n\n\n", "msg_date": "Wed, 9 Feb 2005 19:02:42 -0000 (GMT)", "msg_from": "\"Paul Johnson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Solaris 9 tuning" }, { "msg_contents": "Hi, Paul\n\nJosh helped my company with this issue -- PG doesn't use shared memory like Oracle, it depends more on the OS buffers. Making shared mem\ntoo large a fraction is disasterous and seriously impact performance. (though I find myself having to justify this to Oracle trained\nDBA's) :)\n\nWhat I found was the biggest performance improvement on the write side was to turn of file system journaling, and on the read side was\nto feed postgres as many CPU's as you can. What we found for a high use db (for example backending a web site) is that 8-400 g cpu's\noutperforms 2 or 4 fast cpus. The fast cpu's spend all of their time context switching as more connections are made.\n\nAlso make sure your txlog is on another spindle -- it might even be worth taking one out of the stripe to do this.\n\nI am running solaris 9 on an e3500 also (though my disc setup is different)\n\nHere's what I have things set to -- it's probably a pretty good starting point for you:\n\n# - Memory -\n\nshared_buffers = 65536 # min 16, at least max_connections*2, 8KB each\nsort_mem = 12000 # min 64, size in KB\nvacuum_mem = 64000 # min 1024, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 100000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 10000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n-----------------------------------------------------------------------------------\n\nand the tail end of /etc/system:\n\n* shared memory config for postgres\nset shmsys:shminfo_shmmax=0xFFFFFFFF\nset shmsys:shminfo_shmmin=1\nset shmsys:shminfo_shmmni=256\nset shmsys:shminfo_shmseg=256\nset semsys:seminfo_semmap=256\nset semsys:seminfo_semmni=512\nset semsys:seminfo_semmsl=1000\nset semsys:seminfo_semmns=512\n* end of shared memory setting\n* Set the hme card to force 100 full duplex and not to autonegotiate\n* since hme does not play well with cisco\n*\nset hme:hme_adv_autoneg_cap=0\nset hme:hme_adv_100fdx_cap=1\nset hme:hme_adv_100hdx_cap=0\nset hme:hme_adv_10fdx_cap=0\nset hme:hme_adv_10hdx_cap=0\nset hme:hme_adv_100T4_cap=0\n\n\nPaul Johnson wrote:\n\n> Hi Josh, there are 8 internal disks - all are 18GB@10,000 RPM, fibre\n> connected.\n> \n> The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1\n> filesystem that is striped across the other 6 disks.\n> \n> The shared_buffers value is a semi-educated guess based on having made 4GB\n> shared memory available via /etc/system, and having read all we could find\n> on various web sites.\n> \n> Should I knock it down to 400MB as you suggest?\n> \n> I'll check out that URL.\n> \n> Cheers,\n> \n> Paul.\n> \n> \n>>Paul,\n>>\n>>>I would like to know what /etc/system and postgresql_conf values are\n> \n> recommended to deliver as much system resource as possible to Postgres.\n> We\n> \n>>>use this Sun box solely for single user Postgres data warehousing\n> \n> workloads.\n> \n>>What's your disk system?\n>>\n>>>shared_buffers = 500000\n>>\n>>This is highly unlikely to be optimal. 
That's 3GB. On test linux\n> \n> systems\n> \n>>up to 8GB, we've not seen useful values of shared buffers anywhere above\n> \n> 400mb. How did you arrive at that figure?\n> \n>>>sort_mem = 2097152\n>>>vacuum_mem = 1000000\n>>\n>>These could be fine on a single-user system. sort_mem is per *sort*\n> \n> though,\n> \n>>not per query, so you'd need to watch out for complex queries spillling\n> \n> into\n> \n>>swap; perhaps set it a 0.5GB or 1GB?\n>>Otherwise, start with the config guide at\n> \n> www.powerpostgresql.com/PerfList\n> \n>>--\n>>Josh Berkus\n>>Aglio Database Solutions\n>>San Francisco\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n> \n", "msg_date": "Wed, 09 Feb 2005 11:23:30 -0800", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris 9 tuning" }, { "msg_contents": "... Trying again again with right email address -- list server rejected previous :)\n\nHi, Paul\n\nJosh helped my company with this issue -- PG doesn't use shared memory like Oracle, it depends more on the OS buffers. Making shared mem\ntoo large a fraction is disasterous and seriously impact performance. (though I find myself having to justify this to Oracle trained\nDBA's) :)\n\nWhat I found was the biggest performance improvement on the write side was to turn of file system journaling, and on the read side was\nto feed postgres as many CPU's as you can. What we found for a high use db (for example backending a web site) is that 8-400 g cpu's\noutperforms 2 or 4 fast cpus. The fast cpu's spend all of their time context switching as more connections are made.\n\nAlso make sure your txlog is on another spindle -- it might even be worth taking one out of the stripe to do this.\n\nI am running solaris 9 on an e3500 also (though my disc setup is different)\n\nHere's what I have things set to -- it's probably a pretty good starting point for you:\n\n# - Memory -\n\nshared_buffers = 65536 # min 16, at least max_connections*2, 8KB each\nsort_mem = 12000 # min 64, size in KB\nvacuum_mem = 64000 # min 1024, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 100000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 10000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n-----------------------------------------------------------------------------------\n\nand the tail end of /etc/system:\n\n* shared memory config for postgres\nset shmsys:shminfo_shmmax=0xFFFFFFFF\nset shmsys:shminfo_shmmin=1\nset shmsys:shminfo_shmmni=256\nset shmsys:shminfo_shmseg=256\nset semsys:seminfo_semmap=256\nset semsys:seminfo_semmni=512\nset semsys:seminfo_semmsl=1000\nset semsys:seminfo_semmns=512\n* end of shared memory setting\n* Set the hme card to force 100 full duplex and not to autonegotiate\n* since hme does not play well with cisco\n*\nset hme:hme_adv_autoneg_cap=0\nset hme:hme_adv_100fdx_cap=1\nset hme:hme_adv_100hdx_cap=0\nset hme:hme_adv_10fdx_cap=0\nset hme:hme_adv_10hdx_cap=0\nset hme:hme_adv_100T4_cap=0\n\n\nPaul Johnson wrote:\n\n> Hi Josh, there are 8 internal disks - all are 18GB@10,000 RPM, fibre\n> connected.\n> \n> The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1\n> filesystem that is striped across the other 6 disks.\n> \n> The shared_buffers value is a semi-educated guess based on having made 4GB\n> shared memory available via 
/etc/system, and having read all we could find\n> on various web sites.\n> \n> Should I knock it down to 400MB as you suggest?\n> \n> I'll check out that URL.\n> \n> Cheers,\n> \n> Paul.\n> \n> \n>>Paul,\n>>\n>>>I would like to know what /etc/system and postgresql_conf values are\n> \n> recommended to deliver as much system resource as possible to Postgres.\n> We\n> \n>>>use this Sun box solely for single user Postgres data warehousing\n> \n> workloads.\n> \n>>What's your disk system?\n>>\n>>>shared_buffers = 500000\n>>\n>>This is highly unlikely to be optimal. That's 3GB. On test linux\n> \n> systems\n> \n>>up to 8GB, we've not seen useful values of shared buffers anywhere above\n> \n> 400mb. How did you arrive at that figure?\n> \n>>>sort_mem = 2097152\n>>>vacuum_mem = 1000000\n>>\n>>These could be fine on a single-user system. sort_mem is per *sort*\n> \n> though,\n> \n>>not per query, so you'd need to watch out for complex queries spillling\n> \n> into\n> \n>>swap; perhaps set it a 0.5GB or 1GB?\n>>Otherwise, start with the config guide at\n> \n> www.powerpostgresql.com/PerfList\n> \n>>--\n>>Josh Berkus\n>>Aglio Database Solutions\n>>San Francisco\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n> \n\n\n", "msg_date": "Wed, 09 Feb 2005 11:31:56 -0800", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris 9 tuning" }, { "msg_contents": "Hi Tom, I've made changes to postgresql.conf as recommended on Josh's site\nand this seems to be working well so far.\n\nGiven your comments on shared memory, it would appear that the following\nentry in /etc/system is unnecessary:\n\nset shmsys:shminfo_shmmax=0xFFFFFFFF\n\nIronically, we both have this identical setting!\n\nGiven that most of our queries are single-user read-only, how do we take\nadvantage of the 6 CPUs? I'm guessing we can't!?!?!\n\nAlso, does this type of workload benefit from moving the txlog?\n\nI'll check our settings against yours given the Solaris 9/E3500 setup that\nwe both run.\n\nMany thanks,\n\nPaul.\n\n> Hi, Paul\n>\n> Josh helped my company with this issue -- PG doesn't use shared memory\n> like Oracle, it depends more on the OS buffers. Making shared mem\n> too large a fraction is disasterous and seriously impact performance.\n> (though I find myself having to justify this to Oracle trained\n> DBA's) :)\n>\n> What I found was the biggest performance improvement on the write side was\n> to turn of file system journaling, and on the read side was\n> to feed postgres as many CPU's as you can. What we found for a high use\n> db (for example backending a web site) is that 8-400 g cpu's\n> outperforms 2 or 4 fast cpus. 
The fast cpu's spend all of their time\n> context switching as more connections are made.\n>\n> Also make sure your txlog is on another spindle -- it might even be worth\n> taking one out of the stripe to do this.\n>\n> I am running solaris 9 on an e3500 also (though my disc setup is\n> different)\n>\n> Here's what I have things set to -- it's probably a pretty good starting\n> point for you:\n>\n> # - Memory -\n>\n> shared_buffers = 65536 # min 16, at least max_connections*2, 8KB\n> each\n> sort_mem = 12000 # min 64, size in KB\n> vacuum_mem = 64000 # min 1024, size in KB\n>\n> # - Free Space Map -\n>\n> max_fsm_pages = 100000 # min max_fsm_relations*16, 6 bytes each\n> #max_fsm_relations = 10000 # min 100, ~50 bytes each\n>\n> # - Kernel Resource Usage -\n>\n> #max_files_per_process = 1000 # min 25\n> #preload_libraries = ''\n>\n> -----------------------------------------------------------------------------------\n>\n> and the tail end of /etc/system:\n>\n> * shared memory config for postgres\n> set shmsys:shminfo_shmmax=0xFFFFFFFF\n> set shmsys:shminfo_shmmin=1\n> set shmsys:shminfo_shmmni=256\n> set shmsys:shminfo_shmseg=256\n> set semsys:seminfo_semmap=256\n> set semsys:seminfo_semmni=512\n> set semsys:seminfo_semmsl=1000\n> set semsys:seminfo_semmns=512\n> * end of shared memory setting\n> * Set the hme card to force 100 full duplex and not to autonegotiate\n> * since hme does not play well with cisco\n> *\n> set hme:hme_adv_autoneg_cap=0\n> set hme:hme_adv_100fdx_cap=1\n> set hme:hme_adv_100hdx_cap=0\n> set hme:hme_adv_10fdx_cap=0\n> set hme:hme_adv_10hdx_cap=0\n> set hme:hme_adv_100T4_cap=0\n>\n>\n> Paul Johnson wrote:\n>\n>> Hi Josh, there are 8 internal disks - all are 18GB@10,000 RPM, fibre\n>> connected.\n>>\n>> The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1\n>> filesystem that is striped across the other 6 disks.\n>>\n>> The shared_buffers value is a semi-educated guess based on having made\n>> 4GB\n>> shared memory available via /etc/system, and having read all we could\n>> find\n>> on various web sites.\n>>\n>> Should I knock it down to 400MB as you suggest?\n>>\n>> I'll check out that URL.\n>>\n>> Cheers,\n>>\n>> Paul.\n>>\n>>\n>>>Paul,\n>>>\n>>>>I would like to know what /etc/system and postgresql_conf values are\n>>\n>> recommended to deliver as much system resource as possible to Postgres.\n>> We\n>>\n>>>>use this Sun box solely for single user Postgres data warehousing\n>>\n>> workloads.\n>>\n>>>What's your disk system?\n>>>\n>>>>shared_buffers = 500000\n>>>\n>>>This is highly unlikely to be optimal. That's 3GB. On test linux\n>>\n>> systems\n>>\n>>>up to 8GB, we've not seen useful values of shared buffers anywhere above\n>>\n>> 400mb. How did you arrive at that figure?\n>>\n>>>>sort_mem = 2097152\n>>>>vacuum_mem = 1000000\n>>>\n>>>These could be fine on a single-user system. 
sort_mem is per *sort*\n>>\n>> though,\n>>\n>>>not per query, so you'd need to watch out for complex queries spillling\n>>\n>> into\n>>\n>>>swap; perhaps set it a 0.5GB or 1GB?\n>>>Otherwise, start with the config guide at\n>>\n>> www.powerpostgresql.com/PerfList\n>>\n>>>--\n>>>Josh Berkus\n>>>Aglio Database Solutions\n>>>San Francisco\n>>\n>>\n>>\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>>\n>>\n>\n\n", "msg_date": "Wed, 9 Feb 2005 20:49:43 -0000 (GMT)", "msg_from": "\"Paul Johnson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Solaris 9 tuning" }, { "msg_contents": "Yes, I agree it's unnecessary -- but you'll never have to worry about the postmaster not starting due to lack of allocatable\nmemory -- when I was testing setups, I got sick of rebooting everytime I had to make a change to /etc/system, that I threw up my\nhands and said, \"let it take all it wants\". :)\n\n\"single user read only\" -- the key is how many connections -- what's your application? Is this being driven by a front end application?\n\nIn my case, we run a website with an apache fronted, a tomcat server as middleware, and 4 applications. We may, at times, have only 1\nuser on, but the java code could be doing a half dozen queries in different threads for that one user.\n\nrun /usr/ucb/ps -auxww | grep <postgres user name> (-- we use postgres so \"grep post\" works for us) while under load and see\nhow many backends are running. if it's more than 4 or 5, then you are using the cpu's.\n\nOn the topic of shared memory, watch for the ouput of top or prstat -a -- these programs count the shared memory block towards each process\nand therefor lie about amount of memory used. Looking at vmstat, etc show that the percentage of utilization reported by top or prstat is\nway off, and if you care to examine the memory for each proces, you'll see that the shared memory block address is, well, shared by each\nprocess (by definition, eh?) but it can be reported as if it were a different block for each process.\n\n\nNot sure the e3500 is the best box for a data warehouse application\n\nPaul Johnson wrote:\n\n> Hi Tom, I've made changes to postgresql.conf as recommended on Josh's site\n> and this seems to be working well so far.\n> \n> Given your comments on shared memory, it would appear that the following\n> entry in /etc/system is unnecessary:\n> \n> set shmsys:shminfo_shmmax=0xFFFFFFFF\n> \n> Ironically, we both have this identical setting!\n> \n> Given that most of our queries are single-user read-only, how do we take\n> advantage of the 6 CPUs? I'm guessing we can't!?!?!\n> \n> Also, does this type of workload benefit from moving the txlog?\n> \n> I'll check our settings against yours given the Solaris 9/E3500 setup that\n> we both run.\n> \n> Many thanks,\n> \n> Paul.\n> \n> \n>>Hi, Paul\n>>\n>>Josh helped my company with this issue -- PG doesn't use shared memory\n>>like Oracle, it depends more on the OS buffers. Making shared mem\n>>too large a fraction is disasterous and seriously impact performance.\n>>(though I find myself having to justify this to Oracle trained\n>>DBA's) :)\n>>\n>>What I found was the biggest performance improvement on the write side was\n>>to turn of file system journaling, and on the read side was\n>>to feed postgres as many CPU's as you can. 
What we found for a high use\n>>db (for example backending a web site) is that 8-400 g cpu's\n>>outperforms 2 or 4 fast cpus. The fast cpu's spend all of their time\n>>context switching as more connections are made.\n>>\n>>Also make sure your txlog is on another spindle -- it might even be worth\n>>taking one out of the stripe to do this.\n>>\n>>I am running solaris 9 on an e3500 also (though my disc setup is\n>>different)\n>>\n>>Here's what I have things set to -- it's probably a pretty good starting\n>>point for you:\n>>\n>># - Memory -\n>>\n>>shared_buffers = 65536 # min 16, at least max_connections*2, 8KB\n>>each\n>>sort_mem = 12000 # min 64, size in KB\n>>vacuum_mem = 64000 # min 1024, size in KB\n>>\n>># - Free Space Map -\n>>\n>>max_fsm_pages = 100000 # min max_fsm_relations*16, 6 bytes each\n>>#max_fsm_relations = 10000 # min 100, ~50 bytes each\n>>\n>># - Kernel Resource Usage -\n>>\n>>#max_files_per_process = 1000 # min 25\n>>#preload_libraries = ''\n>>\n>>-----------------------------------------------------------------------------------\n>>\n>>and the tail end of /etc/system:\n>>\n>>* shared memory config for postgres\n>>set shmsys:shminfo_shmmax=0xFFFFFFFF\n>>set shmsys:shminfo_shmmin=1\n>>set shmsys:shminfo_shmmni=256\n>>set shmsys:shminfo_shmseg=256\n>>set semsys:seminfo_semmap=256\n>>set semsys:seminfo_semmni=512\n>>set semsys:seminfo_semmsl=1000\n>>set semsys:seminfo_semmns=512\n>>* end of shared memory setting\n>>* Set the hme card to force 100 full duplex and not to autonegotiate\n>>* since hme does not play well with cisco\n>>*\n>>set hme:hme_adv_autoneg_cap=0\n>>set hme:hme_adv_100fdx_cap=1\n>>set hme:hme_adv_100hdx_cap=0\n>>set hme:hme_adv_10fdx_cap=0\n>>set hme:hme_adv_10hdx_cap=0\n>>set hme:hme_adv_100T4_cap=0\n>>\n>>\n>>Paul Johnson wrote:\n>>\n>>\n>>>Hi Josh, there are 8 internal disks - all are 18GB@10,000 RPM, fibre\n>>>connected.\n>>>\n>>>The O/S is on 2 mirrored disks, the Postgres cluster is on the /data1\n>>>filesystem that is striped across the other 6 disks.\n>>>\n>>>The shared_buffers value is a semi-educated guess based on having made\n>>>4GB\n>>>shared memory available via /etc/system, and having read all we could\n>>>find\n>>>on various web sites.\n>>>\n>>>Should I knock it down to 400MB as you suggest?\n>>>\n>>>I'll check out that URL.\n>>>\n>>>Cheers,\n>>>\n>>>Paul.\n>>>\n>>>\n>>>\n>>>>Paul,\n>>>>\n>>>>\n>>>>>I would like to know what /etc/system and postgresql_conf values are\n>>>\n>>>recommended to deliver as much system resource as possible to Postgres.\n>>>We\n>>>\n>>>\n>>>>>use this Sun box solely for single user Postgres data warehousing\n>>>\n>>>workloads.\n>>>\n>>>\n>>>>What's your disk system?\n>>>>\n>>>>\n>>>>>shared_buffers = 500000\n>>>>\n>>>>This is highly unlikely to be optimal. That's 3GB. On test linux\n>>>\n>>>systems\n>>>\n>>>\n>>>>up to 8GB, we've not seen useful values of shared buffers anywhere above\n>>>\n>>>400mb. How did you arrive at that figure?\n>>>\n>>>\n>>>>>sort_mem = 2097152\n>>>>>vacuum_mem = 1000000\n>>>>\n>>>>These could be fine on a single-user system. 
sort_mem is per *sort*\n>>>\n>>>though,\n>>>\n>>>\n>>>>not per query, so you'd need to watch out for complex queries spillling\n>>>\n>>>into\n>>>\n>>>\n>>>>swap; perhaps set it a 0.5GB or 1GB?\n>>>>Otherwise, start with the config guide at\n>>>\n>>>www.powerpostgresql.com/PerfList\n>>>\n>>>\n>>>>--\n>>>>Josh Berkus\n>>>>Aglio Database Solutions\n>>>>San Francisco\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 6: Have you searched our list archives?\n>>>\n>>> http://archives.postgresql.org\n>>>\n>>>\n>>>\n>>\n> \n> \n> \n> \n", "msg_date": "Wed, 09 Feb 2005 14:08:10 -0800", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Solaris 9 tuning" } ]
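Pulling the thread's advice together for the 10 GB, single-user E3500, a starting point along these lines seems reasonable (the numbers are illustrative, not measured on that box):

shared_buffers = 50000          # about 400 MB of 8 KB buffers rather than 3 GB
sort_mem = 524288               # 512 MB per sort step; reduce it if plans run several sorts at once
vacuum_mem = 131072             # 128 MB is plenty for routine vacuums

And if the transaction log is to come off the data stripe, the usual trick of that era is to relocate pg_xlog with the cluster shut down and leave a symlink behind; /data1/pgdata and /data2 below are placeholders for the real data directory and the dedicated spindle:

mv /data1/pgdata/pg_xlog /data2/pg_xlog
ln -s /data2/pg_xlog /data1/pgdata/pg_xlog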
[ { "msg_contents": "All,\n\nWhen trying to restore my template1 database (7.3.4) I am \nexperiencing very long delays. For 600 users (pg_shadow) \nand 4 groups (pg_group) it is taking 1hr and 17 minutes to \ncomplete. All of the create user statements are processed in a \nmatter of seconds, but each alter groups statement takes \nabout 10 seconds to process. I have done a full vacuum, \nreindex (contrib) and restart before my run, but this did \nnot have an impact on load times. I have also searched \nthe archives to no avail and modified my postgresql.conf \nfile as recommended General Bits on www.varlena.com\n\nOne other thing to mention is that this restoration has \nbeen occurring on our slave server every hour for the \nlast 3-4 months and seems to be getting progressively \nworse even if new users are not created in our master server.\n\nDUMP: \tpg_dumpall -g > foo\nRESTORE: \tpsql template1 < foo\n\nCheers,\nBen Young\n", "msg_date": "Tue, 8 Feb 2005 13:23:55 -0500", "msg_from": "\"Ben Young\" <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Restoration of a template1 Database (ALTER GROUP)" }, { "msg_contents": "\"Ben Young\" <[email protected]> writes:\n> When trying to restore my template1 database (7.3.4) I am \n> experiencing very long delays. For 600 users (pg_shadow) \n> and 4 groups (pg_group) it is taking 1hr and 17 minutes to \n> complete. All of the create user statements are processed in a \n> matter of seconds, but each alter groups statement takes \n> about 10 seconds to process.\n\nI tried doing 1000 ALTER GROUP ADD USER commands in 7.3, and didn't\nsee any particular performance problem. Could we see the output\nof \"VACUUM FULL VERBOSE pg_group\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Feb 2005 13:57:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Restoration of a template1 Database (ALTER GROUP) " } ]
[ { "msg_contents": "\"Ben Young\" <[email protected]> writes:\n> template1=# VACUUM FULL VERBOSE pg_group;\n> INFO: --Relation pg_catalog.pg_group--\n> INFO: Pages 124: Changed 1, reaped 124, Empty 0, New 0; Tup 4: Vac 966, Keep/VTL 0/0, UnUsed 156, MinLen 92, MaxLen 136; Re-using: Free/Avail. Space 1008360/1008360; EndEmpty/Avail. Pages 0/124.\n> \tCPU 0.01s/0.00u sec elapsed 0.07 sec.\n> INFO: Index pg_group_name_index: Pages 19072; Tuples 4: Deleted 966.\n ^^^^^\n> \tCPU 1.51s/0.25u sec elapsed 17.19 sec.\n> INFO: Index pg_group_sysid_index: Pages 4313; Tuples 4: Deleted 966.\n ^^^^\n> \tCPU 0.48s/0.04u sec elapsed 6.06 sec.\n\nWhoa. Can you say \"index bloat\"?\n\nI think that the only way to fix this is to REINDEX pg_group, which IIRC\nin 7.3 requires stopping the postmaster and doing it in a standalone\nbackend (check the REINDEX reference page for details). Make sure the\ntoast table gets reindexed too, as its index is oversized as well.\n(Recent PG versions will automatically reindex the toast table when you\nreindex its parent table, but I forget whether 7.3 did so; you might\nhave to explicitly \"reindex pg_toast.pg_toast_1261\".)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Feb 2005 14:59:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Restoration of a template1 Database (ALTER GROUP) " } ]
[ { "msg_contents": "Tom,\n\nIs the \"index bloat\" prevented/reduced in newer versions of Postgres?\n\nIs there a way to prevent/reduce it with the current version of Postgres I'm using?\n\nMany Thanks,\nBen\n\n\"Ben Young\" <[email protected]> writes:\n> template1=# VACUUM FULL VERBOSE pg_group;\n> INFO: --Relation pg_catalog.pg_group--\n> INFO: Pages 124: Changed 1, reaped 124, Empty 0, New 0; Tup 4: Vac 966, Keep/VTL 0/0, UnUsed 156, MinLen 92, MaxLen 136; Re-using: Free/Avail. Space 1008360/1008360; EndEmpty/Avail. Pages 0/124.\n> \tCPU 0.01s/0.00u sec elapsed 0.07 sec.\n> INFO: Index pg_group_name_index: Pages 19072; Tuples 4: Deleted 966.\n ^^^^^\n> \tCPU 1.51s/0.25u sec elapsed 17.19 sec.\n> INFO: Index pg_group_sysid_index: Pages 4313; Tuples 4: Deleted 966.\n ^^^^\n> \tCPU 0.48s/0.04u sec elapsed 6.06 sec.\n\nWhoa. Can you say \"index bloat\"?\n\nI think that the only way to fix this is to REINDEX pg_group, which IIRC\nin 7.3 requires stopping the postmaster and doing it in a standalone\nbackend (check the REINDEX reference page for details). Make sure the\ntoast table gets reindexed too, as its index is oversized as well.\n(Recent PG versions will automatically reindex the toast table when you\nreindex its parent table, but I forget whether 7.3 did so; you might\nhave to explicitly \"reindex pg_toast.pg_toast_1261\".)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 8 Feb 2005 15:08:50 -0500", "msg_from": "\"Ben Young\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Restoration of a template1 Database (ALTER GROUP) " }, { "msg_contents": "\"Ben Young\" <[email protected]> writes:\n> Is the \"index bloat\" prevented/reduced in newer versions of Postgres?\n\nDepends on what's causing it. Have you been inventing alphabetically\ngreater group names and getting rid of smaller names over time? If so,\nthis is a known problem that should be fixed in 7.4. The 7.4 release\nnotes say:\n\n In previous releases, B-tree index pages that were left empty\n because of deleted rows could only be reused by rows with index\n values similar to the rows originally indexed on that page. In 7.4,\n VACUUM records empty index pages and allows them to be reused for\n any future index rows.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Feb 2005 15:32:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Restoration of a template1 Database (ALTER GROUP) " } ]
[ { "msg_contents": "Hi,\nwe just got a new dual processor machine and I wonder if there is a way \nto utilize both processors.\n\nOur DB server is basically fully dedicated to postgres. (its a dual amd \nwith 4gb mem.)\n\nI have a batch job that periodically loads about 8 million records into \na table.\nfor this I drop the indices, truncate the table, use the copy to insert \nthe data, recreate the indices (4 indices), vacuum the table.\n\nThat is all done through a perl batch job.\n\nWhile I am doing this, I noticed that only one CPU is really used.\n\nSo here are my questions:\n\nIs there a way to utilize both CPUs\n\nIs it possible to split up the import file and run 2 copy processes\n\nIs it possible to create 2 indices at the same time\n\nWould I actually gain anything from that, or is the bottleneck somewhere \nelse ?\n\n(perl is a given here for the batch job)\n\nIf anyone has some experience or ideas... any hints or help on this \nwould be appreciated.\n\nThanks\nAlex\n\n", "msg_date": "Thu, 10 Feb 2005 01:26:35 +1100", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "How can I make use of both CPUs in a dual processor machine" }, { "msg_contents": "Alex wrote:\n\n> Hi,\n> we just got a new dual processor machine and I wonder if there is a \n> way to utilize both processors.\n>\n> Our DB server is basically fully dedicated to postgres. (its a dual \n> amd with 4gb mem.)\n>\n> I have a batch job that periodically loads about 8 million records \n> into a table.\n> for this I drop the indices, truncate the table, use the copy to \n> insert the data, recreate the indices (4 indices), vacuum the table.\n>\n> That is all done through a perl batch job.\n>\n> While I am doing this, I noticed that only one CPU is really used.\n>\n> So here are my questions:\n>\n> Is there a way to utilize both CPUs\n>\nFor postgres, you get a max of 1 CPU per connection, so to use both, you \nneed 2 CPU's.\n\n> Is it possible to split up the import file and run 2 copy processes\n>\n> Is it possible to create 2 indices at the same time\n>\nYou'd want to be a little careful. Postgres uses work_mem for vacuum and \nindex creation, so if you have 2 processes doing it, just make sure you \naren't running out of RAM and going to swap.\n\n> Would I actually gain anything from that, or is the bottleneck \n> somewhere else ?\n>\nMore likely, the bottleneck would be disk I/O. Simply because it is \nalmost always disk I/O. However, without knowing your configuration, how \nmuch CPU is used during the operation, etc, it's hard to say.\n\n> (perl is a given here for the batch job)\n>\n> If anyone has some experience or ideas... any hints or help on this \n> would be appreciated.\n>\n> Thanks\n> Alex\n>\nSorry I wasn't a lot of help. You should probably post your postgres \nversion, and more information about how much CPU load there is while \nyour load is running.\n\nJohn\n=:->", "msg_date": "Wed, 09 Feb 2005 08:49:11 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I make use of both CPUs in a dual processor" }, { "msg_contents": "Thanks John.\n\nWell as I mentioned. I have a Dual AMD Opteron 64 2.4ghz, 15k rpm SCSI \nDisks, 4GB of memory.\nDisks are pretty fast and memory should be more than enough. 
Currently \nwe dont have many concurrent connections.\n\nI run PG 8.0.1 on Fedora Core 3\n\nWhen I now run the batch job, one CPU runs in the 80-90% the other in \n5-10% max.\n\n\n\n\n\n\nJohn A Meinel wrote:\n\n> Alex wrote:\n>\n>> Hi,\n>> we just got a new dual processor machine and I wonder if there is a \n>> way to utilize both processors.\n>>\n>> Our DB server is basically fully dedicated to postgres. (its a dual \n>> amd with 4gb mem.)\n>>\n>> I have a batch job that periodically loads about 8 million records \n>> into a table.\n>> for this I drop the indices, truncate the table, use the copy to \n>> insert the data, recreate the indices (4 indices), vacuum the table.\n>>\n>> That is all done through a perl batch job.\n>>\n>> While I am doing this, I noticed that only one CPU is really used.\n>>\n>> So here are my questions:\n>>\n>> Is there a way to utilize both CPUs\n>>\n> For postgres, you get a max of 1 CPU per connection, so to use both, \n> you need 2 CPU's.\n>\n>> Is it possible to split up the import file and run 2 copy processes\n>>\n>> Is it possible to create 2 indices at the same time\n>>\n> You'd want to be a little careful. Postgres uses work_mem for vacuum \n> and index creation, so if you have 2 processes doing it, just make \n> sure you aren't running out of RAM and going to swap.\n>\n>> Would I actually gain anything from that, or is the bottleneck \n>> somewhere else ?\n>>\n> More likely, the bottleneck would be disk I/O. Simply because it is \n> almost always disk I/O. However, without knowing your configuration, \n> how much CPU is used during the operation, etc, it's hard to say.\n>\n>> (perl is a given here for the batch job)\n>>\n>> If anyone has some experience or ideas... any hints or help on this \n>> would be appreciated.\n>>\n>> Thanks\n>> Alex\n>>\n> Sorry I wasn't a lot of help. You should probably post your postgres \n> version, and more information about how much CPU load there is while \n> your load is running.\n>\n> John\n> =:->\n>\n\n\n", "msg_date": "Thu, 10 Feb 2005 02:00:16 +1100", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I make use of both CPUs in a dual processor" }, { "msg_contents": "Alex wrote:\n\n> Thanks John.\n>\n> Well as I mentioned. I have a Dual AMD Opteron 64 2.4ghz, 15k rpm \n> SCSI Disks, 4GB of memory.\n> Disks are pretty fast and memory should be more than enough. Currently \n> we dont have many concurrent connections.\n>\nWell, you didn't mention Opteron before (it makes a difference against \nXeons).\nHow many disks and in what configuration?\nDo you have pg_xlog on a separate set of disks?\nAre your drives in RAID 10 (0+1) or RAID 5?\n\nIf you have enough disks the recommended configuration is at least a \nRAID1 for the OS, RAID 10 for pg_xlog (4drives), and RAID 10 (the rest \nof the drives) for the actual data.\n\nIf your dataset is read heavy, or you have more than 6 disks, you can \nget away with RAID 5 for the actual data. But since you are talking \nabout loading 8million rows at once, it certainly sounds like you are \nwrite heavy.\n\nIf you only have a few disks, it's still probably better to put pg_xlog \non it's own RAID1 (2-drive) mirror. 
pg_xlog is pretty much append only, \nso if you dedicate a disk set to it, you eliminate a lot of seek times.\n\n> I run PG 8.0.1 on Fedora Core 3\n>\n> When I now run the batch job, one CPU runs in the 80-90% the other in \n> 5-10% max.\n\n\nAnyway, it doesn't completely sound like you are CPU limited, but you \nmight be able to get a little bit more if you spawn another process. \nHave you tried dropping the index, doing the copy, and then recreating \nthe 4-indexes in separate processes?\n\nThe simple test for this is to open 3-4 psql connections, have one of \nthem drop the indexes and do the copy, in the other connections you can \nalready have typed \"CREATE INDEX ...\" so when the copy is done and \ncommitted to the database, you just go to the other terminals and hit enter.\n\nUnfortunately you'll have to use wall clock time to see if this is faster.\n\nThough I think you could do the same thing with a bash script. The \nauthentication should be in \"trust\" mode so that you don't take the time \nto type your password.\n\n#!/bin/bash\npsql -h <host> -c \"DROP INDEX ...; COPY FROM ...\"\n\npsql -h <host> -c \"CREATE INDEX ...\" &\npsql -h <host> -c \"CREATE INDEX ...\" &\npsql -h <host> -c \"CREATE INDEX ...\" &\npsql -h <host> -c \"CREATE INDEX ...\"\n\n\nNow, I don't really know how to wait for all child processes in a bash \nscript (I could give you the python for it, but you're a perl guy). But \nby not spawning the last INDEX, I'm hoping it takes longer than the \nrest. Try to put the most difficult index there.\n\nThen you could just run\n\ntime loadscript.sh\n\nI'm sure you could do the equivalent in perl. Just open multiple \nconnections to the DB, and have them ready.\n\nI'm guessing since you are on a dual processor machine, you won't get \nmuch better performance above 2 connections.\n\nYou can also try doing 2 COPYs at the same time, but it seems like you \nwould have issues. Do you have any serial columns that you expect to be \nin a certain order, or is all the information in the copy?\n\nIf the latter, try it, let us know what you get. I can't tell you the \nperl for this, since I'm not a perl guy.\n\nJohn\n=:->", "msg_date": "Wed, 09 Feb 2005 09:16:03 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I make use of both CPUs in a dual processor" }, { "msg_contents": "You can wait for processes to finish as follows:\n\n#launch 3 processes\nsh -c './build_indexes1.sh' & PID1=$!\nsh -c './build_indexes2.sh' & PID2=$!\nsh -c './build_indexes3.sh' & PID3=$!\n# then\nwait $PID1\nwait $PID2\nwait $PID3\n#continue\n\nMy feeling is that doing so should generally reduce the overall processing \ntime, but if there are contention problems then it could conceivably get \nmuch worse.\n\nregards\nIain\n----- Original Message ----- \nFrom: \"Alex\" <[email protected]>\nTo: \"John A Meinel\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, February 10, 2005 12:00 AM\nSubject: Re: [PERFORM] How can I make use of both CPUs in a dual processor\n\n\n> Thanks John.\n>\n> Well as I mentioned. I have a Dual AMD Opteron 64 2.4ghz, 15k rpm SCSI \n> Disks, 4GB of memory.\n> Disks are pretty fast and memory should be more than enough. 
Currently we \n> dont have many concurrent connections.\n>\n> I run PG 8.0.1 on Fedora Core 3\n>\n> When I now run the batch job, one CPU runs in the 80-90% the other in \n> 5-10% max.\n>\n>\n>\n>\n>\n>\n> John A Meinel wrote:\n>\n>> Alex wrote:\n>>\n>>> Hi,\n>>> we just got a new dual processor machine and I wonder if there is a way \n>>> to utilize both processors.\n>>>\n>>> Our DB server is basically fully dedicated to postgres. (its a dual amd \n>>> with 4gb mem.)\n>>>\n>>> I have a batch job that periodically loads about 8 million records into \n>>> a table.\n>>> for this I drop the indices, truncate the table, use the copy to insert \n>>> the data, recreate the indices (4 indices), vacuum the table.\n>>>\n>>> That is all done through a perl batch job.\n>>>\n>>> While I am doing this, I noticed that only one CPU is really used.\n>>>\n>>> So here are my questions:\n>>>\n>>> Is there a way to utilize both CPUs\n>>>\n>> For postgres, you get a max of 1 CPU per connection, so to use both, you \n>> need 2 CPU's.\n>>\n>>> Is it possible to split up the import file and run 2 copy processes\n>>>\n>>> Is it possible to create 2 indices at the same time\n>>>\n>> You'd want to be a little careful. Postgres uses work_mem for vacuum and \n>> index creation, so if you have 2 processes doing it, just make sure you \n>> aren't running out of RAM and going to swap.\n>>\n>>> Would I actually gain anything from that, or is the bottleneck somewhere \n>>> else ?\n>>>\n>> More likely, the bottleneck would be disk I/O. Simply because it is \n>> almost always disk I/O. However, without knowing your configuration, how \n>> much CPU is used during the operation, etc, it's hard to say.\n>>\n>>> (perl is a given here for the batch job)\n>>>\n>>> If anyone has some experience or ideas... any hints or help on this \n>>> would be appreciated.\n>>>\n>>> Thanks\n>>> Alex\n>>>\n>> Sorry I wasn't a lot of help. You should probably post your postgres \n>> version, and more information about how much CPU load there is while your \n>> load is running.\n>>\n>> John\n>> =:->\n>>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend \n\n", "msg_date": "Thu, 10 Feb 2005 10:50:16 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How can I make use of both CPUs in a dual processor" }, { "msg_contents": "Thanks for all the suggestions. It seems that creating indices, or even \nimport data using a copy is easy to implement. I also have some jobs \nthat create reports and want to try if I gain anything if i work reports \nin parallel.\n\nwill give it a try in the next week and let you know the resuls.\n\nAlex\n\nJohn A Meinel wrote:\n\n> Alex wrote:\n>\n>> Thanks John.\n>>\n>> Well as I mentioned. I have a Dual AMD Opteron 64 2.4ghz, 15k rpm \n>> SCSI Disks, 4GB of memory.\n>> Disks are pretty fast and memory should be more than enough. 
\n>> Currently we dont have many concurrent connections.\n>>\n> Well, you didn't mention Opteron before (it makes a difference against \n> Xeons).\n> How many disks and in what configuration?\n> Do you have pg_xlog on a separate set of disks?\n> Are your drives in RAID 10 (0+1) or RAID 5?\n>\n> If you have enough disks the recommended configuration is at least a \n> RAID1 for the OS, RAID 10 for pg_xlog (4drives), and RAID 10 (the rest \n> of the drives) for the actual data.\n>\n> If your dataset is read heavy, or you have more than 6 disks, you can \n> get away with RAID 5 for the actual data. But since you are talking \n> about loading 8million rows at once, it certainly sounds like you are \n> write heavy.\n>\n> If you only have a few disks, it's still probably better to put \n> pg_xlog on it's own RAID1 (2-drive) mirror. pg_xlog is pretty much \n> append only, so if you dedicate a disk set to it, you eliminate a lot \n> of seek times.\n>\n>> I run PG 8.0.1 on Fedora Core 3\n>>\n>> When I now run the batch job, one CPU runs in the 80-90% the other in \n>> 5-10% max.\n>\n>\n>\n> Anyway, it doesn't completely sound like you are CPU limited, but you \n> might be able to get a little bit more if you spawn another process. \n> Have you tried dropping the index, doing the copy, and then recreating \n> the 4-indexes in separate processes?\n>\n> The simple test for this is to open 3-4 psql connections, have one of \n> them drop the indexes and do the copy, in the other connections you \n> can already have typed \"CREATE INDEX ...\" so when the copy is done and \n> committed to the database, you just go to the other terminals and hit \n> enter.\n>\n> Unfortunately you'll have to use wall clock time to see if this is \n> faster.\n>\n> Though I think you could do the same thing with a bash script. The \n> authentication should be in \"trust\" mode so that you don't take the \n> time to type your password.\n>\n> #!/bin/bash\n> psql -h <host> -c \"DROP INDEX ...; COPY FROM ...\"\n>\n> psql -h <host> -c \"CREATE INDEX ...\" &\n> psql -h <host> -c \"CREATE INDEX ...\" &\n> psql -h <host> -c \"CREATE INDEX ...\" &\n> psql -h <host> -c \"CREATE INDEX ...\"\n>\n>\n> Now, I don't really know how to wait for all child processes in a bash \n> script (I could give you the python for it, but you're a perl guy). \n> But by not spawning the last INDEX, I'm hoping it takes longer than \n> the rest. Try to put the most difficult index there.\n>\n> Then you could just run\n>\n> time loadscript.sh\n>\n> I'm sure you could do the equivalent in perl. Just open multiple \n> connections to the DB, and have them ready.\n>\n> I'm guessing since you are on a dual processor machine, you won't get \n> much better performance above 2 connections.\n>\n> You can also try doing 2 COPYs at the same time, but it seems like you \n> would have issues. Do you have any serial columns that you expect to \n> be in a certain order, or is all the information in the copy?\n>\n> If the latter, try it, let us know what you get. I can't tell you the \n> perl for this, since I'm not a perl guy.\n>\n> John\n> =:->\n>\n\n\n", "msg_date": "Fri, 11 Feb 2005 00:18:26 +1100", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I make use of both CPUs in a dual processor" } ]
[ { "msg_contents": "> Thanks John.\n> \n> Well as I mentioned. I have a Dual AMD Opteron 64 2.4ghz, 15k rpm\nSCSI\n> Disks, 4GB of memory.\n> Disks are pretty fast and memory should be more than enough. Currently\n> we dont have many concurrent connections.\n> \n> I run PG 8.0.1 on Fedora Core 3\n> \n> When I now run the batch job, one CPU runs in the 80-90% the other in\n> 5-10% max.\n\nIf possible, split up your job into two workloads that can be run\nconcurrently. Open up two connections on the client, one for each\nworkload. To be 100% sure they get delegated to separate processors,\nlet the first connection start working before opening the second one\n(should be easy enough to test from the terminal)...this should pretty\nmuch guarantee your batch processes get delegated to different\nprocessors.\n\nThe beauty of pg is that it lets the o/s handle things that should be\nthe o/s's job...don't forget the bgwriter/stats collector processes are\nalso in the mix. Your o/s is probably already doing a pretty good job\ndelegating work already.\n\nEven with fast disks, batch data management jobs are rarely cpu bound,\nso you might not see much difference in the total run time (spitting\nyour batch might actually increase the run time, or reduce\nresponsiveness to other connections). Never hurts to test that though.\n\nMerlin\n", "msg_date": "Wed, 9 Feb 2005 10:14:16 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How can I make use of both CPUs in a dual processor" } ]
[ { "msg_contents": "Hi,\n\nis there a way to tell Postgres which index to use when a query is \nissued in 7.4.2?\n\nI have a query for which costwise a Hash-Join and no Index-Usage is the \nbest, but timewise using the index and then do a NestedLoop join is much \nbetter (3 - 4 times).\n\nI have vacuumed before I started the comparison, so Postgres does its \nbest. And I don't constantly want to switch on and off the hashjoin and \nmergejoin.\n\nRegards,\n\nSilke Trissl\n\n\n", "msg_date": "Wed, 09 Feb 2005 16:58:01 +0100", "msg_from": "Silke Trissl <[email protected]>", "msg_from_op": true, "msg_subject": "Tell postgres which index to use?" }, { "msg_contents": "Silke,\n\n> is there a way to tell Postgres which index to use when a query is\n> issued in 7.4.2?\n\nPostgreSQL adjusts usage through global parameters, statistics, and periodic \nANALYZE. Please post an EXPLAIN ANALYZE (not just EXPLAIN) for your query \nand people on this list can help you with your specific problem.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 9 Feb 2005 09:41:01 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tell postgres which index to use?" }, { "msg_contents": "Sorry,\n> \n>>is there a way to tell Postgres which index to use when a query is\n>>issued in 7.4.2?\n> \n> \n> PostgreSQL adjusts usage through global parameters, statistics, and periodic \n> ANALYZE. Please post an EXPLAIN ANALYZE (not just EXPLAIN) for your query \n> and people on this list can help you with your specific problem.\n\nhere are the plans, but still I would like to tell Postgres to use an \nindex or the join method (like HINT in ORACLE).\n\n> \n\nFirst the vacuum\n db=# vacuum full analyze;\n VACUUM\n\nThen the query for the first time with analyze\n db=# EXPLAIN ANALYZE\n db-# SELECT chain.pdb_id, chain.id FROM PDB_ENTRY, CHAIN\n WHERE PDB_ENTRY.resolution > 0.0 and PDB_ENTRY.resolution < 1.7\n AND PDB_ENTRY.id = CHAIN.pdb_id;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1426.75..5210.52 rows=7533 width=8) (actual \ntime=77.712..399.108 rows=5798 loops=1)\n Hash Cond: (\"outer\".pdb_id = \"inner\".id)\n -> Seq Scan on \"chain\" (cost=0.00..3202.11 rows=67511 width=8) \n(actual time=0.048..151.885 rows=67511 loops=1)\n -> Hash (cost=1418.68..1418.68 rows=3226 width=4) (actual \ntime=77.062..77.062 rows=0 loops=1)\n -> Seq Scan on pdb_entry (cost=0.00..1418.68 rows=3226 \nwidth=4) (actual time=0.118..71.956 rows=3329 loops=1)\n Filter: ((resolution > 0::double precision) AND \n(resolution < 1.7::double precision))\n Total runtime: 404.434 ms\n(7 rows)\n\nAnd then try to avoid the Hash Join\n\n db=# SET ENABLE_hashjoin = OFF;\n SET\n db=# EXPLAIN ANALYZE\n db-# SELECT chain.pdb_id, chain.id FROM PDB_ENTRY, CHAIN\n WHERE PDB_ENTRY.resolution > 0.0 and PDB_ENTRY.resolution < 1.7\n AND PDB_ENTRY.id = CHAIN.pdb_id;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=9163.85..11100.74 rows=7533 width=8) (actual \ntime=606.505..902.740 rows=5798 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".pdb_id)\n -> Index Scan using pdb_entry_pkey on pdb_entry \n(cost=0.00..1516.03 rows=3226 width=4) (actual time=0.440..102.912 \nrows=3329 loops=1)\n Filter: ((resolution > 0::double precision) AND (resolution < \n1.7::double 
precision))\n -> Sort (cost=9163.85..9332.63 rows=67511 width=8) (actual \ntime=605.838..694.190 rows=67501 loops=1)\n Sort Key: \"chain\".pdb_id\n -> Seq Scan on \"chain\" (cost=0.00..3202.11 rows=67511 \nwidth=8) (actual time=0.064..225.859 rows=67511 loops=1)\n Total runtime: 911.024 ms\n(8 rows)\n\nAnd finally timewise the fastest method, but not costwise. Even for \nalmost full table joins, this method is the fastest.\n\n db=# SET ENABLE_mergejoin = off;\n SET\n db=# EXPLAIN ANALYZE\n db-# SELECT chain.pdb_id, chain.id FROM PDB_ENTRY, CHAIN\n WHERE PDB_ENTRY.resolution > 0.0 and PDB_ENTRY.resolution < 1.7\n AND PDB_ENTRY.id = CHAIN.pdb_id;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..23849.81 rows=7533 width=8) (actual \ntime=0.341..198.162 rows=5798 loops=1)\n -> Seq Scan on pdb_entry (cost=0.00..1418.68 rows=3226 width=4) \n(actual time=0.145..78.177 rows=3329 loops=1)\n Filter: ((resolution > 0::double precision) AND (resolution < \n1.7::double precision))\n -> Index Scan using chain_pdb_id_ind on \"chain\" (cost=0.00..6.87 \nrows=6 width=8) (actual time=0.021..0.027 rows=2 loops=3329)\n Index Cond: (\"outer\".id = \"chain\".pdb_id)\n Total runtime: 204.105 ms\n(6 rows)\n\n", "msg_date": "Wed, 09 Feb 2005 18:58:02 +0100", "msg_from": "Silke Trissl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tell postgres which index to use?" }, { "msg_contents": "Silke Trissl wrote:\n\n> Sorry,\n>\n>>\n>>> is there a way to tell Postgres which index to use when a query is\n>>> issued in 7.4.2?\n>>\n>>\n>>\n>> PostgreSQL adjusts usage through global parameters, statistics, and\n>> periodic ANALYZE. Please post an EXPLAIN ANALYZE (not just EXPLAIN)\n>> for your query and people on this list can help you with your\n>> specific problem.\n>\n>\n> here are the plans, but still I would like to tell Postgres to use an\n> index or the join method (like HINT in ORACLE).\n>\n>>\n>\n> First the vacuum\n> db=# vacuum full analyze;\n> VACUUM\n>\n> Then the query for the first time with analyze\n> db=# EXPLAIN ANALYZE\n> db-# SELECT chain.pdb_id, chain.id FROM PDB_ENTRY, CHAIN\n> WHERE PDB_ENTRY.resolution > 0.0 and PDB_ENTRY.resolution < 1.7\n> AND PDB_ENTRY.id = CHAIN.pdb_id;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n>\n> Hash Join (cost=1426.75..5210.52 rows=7533 width=8) (actual\n> time=77.712..399.108 rows=5798 loops=1)\n> Hash Cond: (\"outer\".pdb_id = \"inner\".id)\n> -> Seq Scan on \"chain\" (cost=0.00..3202.11 rows=67511 width=8)\n> (actual time=0.048..151.885 rows=67511 loops=1)\n> -> Hash (cost=1418.68..1418.68 rows=3226 width=4) (actual\n> time=77.062..77.062 rows=0 loops=1)\n\nThis seems to be at least one of the problems. 
The planner thinks there\nare going to be 3000+ rows, but in reality there are 0.\n\n> -> Seq Scan on pdb_entry (cost=0.00..1418.68 rows=3226\n> width=4) (actual time=0.118..71.956 rows=3329 loops=1)\n> Filter: ((resolution > 0::double precision) AND\n> (resolution < 1.7::double precision))\n> Total runtime: 404.434 ms\n> (7 rows)\n>\n> And then try to avoid the Hash Join\n>\n> db=# SET ENABLE_hashjoin = OFF;\n> SET\n> db=# EXPLAIN ANALYZE\n> db-# SELECT chain.pdb_id, chain.id FROM PDB_ENTRY, CHAIN\n> WHERE PDB_ENTRY.resolution > 0.0 and PDB_ENTRY.resolution < 1.7\n> AND PDB_ENTRY.id = CHAIN.pdb_id;\n> QUERY\n> PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Merge Join (cost=9163.85..11100.74 rows=7533 width=8) (actual\n> time=606.505..902.740 rows=5798 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".pdb_id)\n> -> Index Scan using pdb_entry_pkey on pdb_entry\n> (cost=0.00..1516.03 rows=3226 width=4) (actual time=0.440..102.912\n> rows=3329 loops=1)\n> Filter: ((resolution > 0::double precision) AND (resolution <\n> 1.7::double precision))\n> -> Sort (cost=9163.85..9332.63 rows=67511 width=8) (actual\n> time=605.838..694.190 rows=67501 loops=1)\n> Sort Key: \"chain\".pdb_id\n> -> Seq Scan on \"chain\" (cost=0.00..3202.11 rows=67511\n> width=8) (actual time=0.064..225.859 rows=67511 loops=1)\n> Total runtime: 911.024 ms\n> (8 rows)\n>\n> And finally timewise the fastest method, but not costwise. Even for\n> almost full table joins, this method is the fastest.\n>\n> db=# SET ENABLE_mergejoin = off;\n> SET\n> db=# EXPLAIN ANALYZE\n> db-# SELECT chain.pdb_id, chain.id FROM PDB_ENTRY, CHAIN\n> WHERE PDB_ENTRY.resolution > 0.0 and PDB_ENTRY.resolution < 1.7\n> AND PDB_ENTRY.id = CHAIN.pdb_id;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n>\n> Nested Loop (cost=0.00..23849.81 rows=7533 width=8) (actual\n> time=0.341..198.162 rows=5798 loops=1)\n> -> Seq Scan on pdb_entry (cost=0.00..1418.68 rows=3226 width=4)\n> (actual time=0.145..78.177 rows=3329 loops=1)\n> Filter: ((resolution > 0::double precision) AND (resolution <\n> 1.7::double precision))\n> -> Index Scan using chain_pdb_id_ind on \"chain\" (cost=0.00..6.87\n> rows=6 width=8) (actual time=0.021..0.027 rows=2 loops=3329)\n> Index Cond: (\"outer\".id = \"chain\".pdb_id)\n> Total runtime: 204.105 ms\n> (6 rows)\n\nI'm guessing the filter is more selective than postgres thinks it is (0\n<> 1.7). You might try increasing the statistics of that column, you\nmight also try playing with your random_page_cost to make index scans\nrelatively cheaper (than seq scans).\nIt might be an issue that your effective_cache_size isn't quite right,\nwhich makes postgres think most things are on disk, when in reality they\nare in memory (which also makes index scans much cheaper).\n\nAlso, this query may sort itself out in time. As the tables grow, the\nrelative fraction that you desire probably decreases, which makes index\nscans more attractive.\n\nJohn\n=:->", "msg_date": "Wed, 09 Feb 2005 12:36:38 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tell postgres which index to use?" 
}, { "msg_contents": "John Arbash Meinel <[email protected]> writes:\n\n> > -> Hash (cost=1418.68..1418.68 rows=3226 width=4) (actual\n> > time=77.062..77.062 rows=0 loops=1)\n> \n> This seems to be at least one of the problems. The planner thinks there\n> are going to be 3000+ rows, but in reality there are 0.\n\nNo, that's a red herring. Hash nodes always report 0 rows. \n\n> > Nested Loop (cost=0.00..23849.81 rows=7533 width=8) (actual time=0.341..198.162 rows=5798 loops=1)\n> > -> Seq Scan on pdb_entry (cost=0.00..1418.68 rows=3226 width=4) (actual time=0.145..78.177 rows=3329 loops=1)\n> > Filter: ((resolution > 0::double precision) AND (resolution < 1.7::double precision))\n> > -> Index Scan using chain_pdb_id_ind on \"chain\" (cost=0.00..6.87 rows=6 width=8) (actual time=0.021..0.027 rows=2 loops=3329)\n> > Index Cond: (\"outer\".id = \"chain\".pdb_id)\n\nThe actual number of records is pretty close to the estimated number. And the\ndifference seems to come primarily from selectivity of the join where it\nthinks an average of 6 rows will match every row whereas in fact an average of\nabout 2 rows matches.\n\nSo it thinks it's going to read about 18,000 records out of 67,000 or about\n25%. In that case the sequential scan is almost certainly better. In fact it's\ngoing to read about 6,000 or just under 10%, in which case the sequential scan\nis probably still better but it's not so clear.\n\nI suspect the only reason you're seeing such a big difference when I would\nexpect it to be about even is because nearly all the data is cached. In that\ncase the non-sequential access pattern of the nested loop has little effect.\n\nYou might get away with lowering random_page_cost but since it thinks it's\ngoing to read 25% of the table I suspect you'll have to get very close to 1\nbefore it switches over, if it does even then. Be careful about tuning\nsettings like this based on a single query, especially to unrealistically low\nvalues.\n\nYou might also want to try raising the statistics target on pdb_entry. See if\nthat makes the estimate go down from 6 to closer to 2.\n\n-- \ngreg\n\n", "msg_date": "09 Feb 2005 15:50:06 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tell postgres which index to use?" } ]
[ { "msg_contents": "Hello All,\n\nIn contrast to what we hear from most others on this list, we find our \ndatabase servers are mostly CPU bound. We are wondering if this is because \nwe have postgres configured incorrectly in some way, or if we really need \nmore powerfull processor(s) to gain more performance from postgres. \n\nWe continue to tune our individual queries where we can, but it seems we still \nare waiting on the db a lot in our app. When we run most queries, top shows \nthe postmaster running at 90%+ constantly during the duration of the request. \nThe disks get touched occasionally, but not often. Our database on disk is \naround 2.6G and most of the working set remains cached in memory, hence the \nfew disk accesses. All this seems to point to the need for faster \nprocessors.\n\nOur question is simply this, is it better to invest in a faster processor at \nthis point, or are there configuration changes to make it faster? I've done \nsome testing with with 4x SCSI 10k and the performance didn't improve, in \nfact it actually was slower the the sata drives marginally. One of our \ndevelopers is suggesting we should compile postgres from scratch for this \nparticular processor, and we may try that. Any other ideas?\n\n-Chris\n\nOn this particular development server, we have:\n\nAthlon XP,3000 \n1.5G Mem\n4x Sata drives in Raid 0\n\nPostgresql 7.4.5 installed via RPM running on Linux kernel 2.6.8.1\n\nItems changed in the postgresql.conf:\n\ntcpip_socket = true\nmax_connections = 32\nport = 5432\nshared_buffers = 12288\t\t# min 16, at least max_connections*2, 8KB each\nsort_mem=16384\nvacuum_mem = 32768\t\t# min 1024, size in KB\nmax_fsm_pages = 60000\t\t# min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1000\t# min 100, ~50 bytes each\neffective_cache_size = 115200\t# typically 8KB each\nrandom_page_cost = 1\t\t# units are one sequential page fetch cost\n\n", "msg_date": "Wed, 9 Feb 2005 15:01:31 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Tuning" }, { "msg_contents": "On Wed, 2005-02-09 at 15:01 -0500, Chris Kratz wrote:\n> Hello All,\n> \n> In contrast to what we hear from most others on this list, we find our \n> database servers are mostly CPU bound. We are wondering if this is because \n> we have postgres configured incorrectly in some way, or if we really need \n> more powerfull processor(s) to gain more performance from postgres. \n\nNot necessarily. I had a very disk bound system, bought a bunch of\nhigher end equipment (which focuses on IO) and now have a (faster) but\nCPU bound system.\n\nIt's just the way the cookie crumbles.\n\nSome things to watch for are large calculations which are easy to move\nclient side, such as queries that sort for display purposes. Or data\ntypes which aren't really required (using numeric where an integer would\ndo).\n\n> We continue to tune our individual queries where we can, but it seems we still \n> are waiting on the db a lot in our app. When we run most queries, top shows \n> the postmaster running at 90%+ constantly during the duration of the request. \n\nIs this for the duration of a single request or 90% constantly?\n\nIf it's a single request, odds are you're going through much more\ninformation than you need to. Lots of aggregate work (max / min) perhaps\nor count(*)'s where an approximation would do?\n\n> Our question is simply this, is it better to invest in a faster processor at \n> this point, or are there configuration changes to make it faster? 
I've done \n\nIf it's for a single request, you cannot get single processors which are\nmuch faster than what you describe as having.\n\nWant to send us a few EXPLAIN ANALYZE's of your longer running queries?\n\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "Wed, 09 Feb 2005 15:27:53 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": "Chris Kratz wrote:\n\n>Hello All,\n>\n>In contrast to what we hear from most others on this list, we find our\n>database servers are mostly CPU bound. We are wondering if this is because\n>we have postgres configured incorrectly in some way, or if we really need\n>more powerfull processor(s) to gain more performance from postgres.\n>\n>\n>\nIf everything is cached in ram, it's pretty easy to be CPU bound. You\nvery easily could be at this point if your database is only 2.6G and you\ndon't touch all the tables often.\n\nI do believe that when CPU bound, the best thing to do is get faster CPUs.\n...\n\n>Our question is simply this, is it better to invest in a faster processor at\n>this point, or are there configuration changes to make it faster? I've done\n>some testing with with 4x SCSI 10k and the performance didn't improve, in\n>fact it actually was slower the the sata drives marginally. One of our\n>developers is suggesting we should compile postgres from scratch for this\n>particular processor, and we may try that. Any other ideas?\n>\n>-Chris\n>\n>On this particular development server, we have:\n>\n>Athlon XP,3000\n>1.5G Mem\n>4x Sata drives in Raid 0\n>\n>\n>\nI'm very surprised you are doing RAID 0. You realize that if 1 drive\ngoes out, your entire array is toast, right? I would recommend doing\neither RAID 10 (0+1), or even Raid 5 if you don't do a lot of writes.\n\nProbably most important, though is to look at the individual queries and\nsee what they are doing.\n\n>Postgresql 7.4.5 installed via RPM running on Linux kernel 2.6.8.1\n>\n>Items changed in the postgresql.conf:\n>\n>tcpip_socket = true\n>max_connections = 32\n>port = 5432\n>shared_buffers = 12288\t\t# min 16, at least max_connections*2, 8KB each\n>sort_mem=16384\n>vacuum_mem = 32768\t\t# min 1024, size in KB\n>max_fsm_pages = 60000\t\t# min max_fsm_relations*16, 6 bytes each\n>max_fsm_relations = 1000\t# min 100, ~50 bytes each\n>effective_cache_size = 115200\t# typically 8KB each\n>random_page_cost = 1\t\t# units are one sequential page fetch cost\n>\n>\nMost of these seem okay to me, but random page cost is *way* too low.\nThis should never be tuned below 2. I think this says \"an index scan of\n*all* rows is as cheap as a sequential scan of all rows.\" and that\nshould never be true.\n\nWhat could actually be happening is that you are getting index scans\nwhen a sequential scan would be faster.\n\nI don't know what you would see, but what does \"explain analyze select\ncount(*) from blah;\" say. If it is an index scan, you have your machine\nmistuned. select count(*) always grabs every row, and this is always\ncheaper with a sequential scan.\n\nJohn\n=:->", "msg_date": "Wed, 09 Feb 2005 14:38:22 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": "\nChris Kratz <[email protected]> writes:\n\n> We continue to tune our individual queries where we can, but it seems we still \n> are waiting on the db a lot in our app. 
When we run most queries, top shows \n> the postmaster running at 90%+ constantly during the duration of the request. \n> The disks get touched occasionally, but not often. Our database on disk is \n> around 2.6G and most of the working set remains cached in memory, hence the \n> few disk accesses. All this seems to point to the need for faster \n> processors.\n\nI would suggest looking at the top few queries that are taking the most\ncumulative time on the processor. It sounds like the queries are doing a ton\nof logical i/o on data that's cached in RAM. A few indexes might cut down on\nthe memory bandwidth needed to churn through all that data.\n\n> Items changed in the postgresql.conf:\n> ...\n> random_page_cost = 1\t\t# units are one sequential page fetch cost\n\nThis makes it nigh impossible for the server from ever making a sequential\nscan when an index would suffice. What query made you do this? What plan did\nit fix?\n\n\n-- \ngreg\n\n", "msg_date": "09 Feb 2005 15:59:50 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": "On Wednesday 09 February 2005 03:38 pm, John Arbash Meinel wrote:\n>...\n> I'm very surprised you are doing RAID 0. You realize that if 1 drive\n> goes out, your entire array is toast, right? I would recommend doing\n> either RAID 10 (0+1), or even Raid 5 if you don't do a lot of writes.\n\n<grin> Yeah, we know. This is a development server and we drop and reload \ndatabases regularly (sometimes several times a day). In this case we don't \nreally care about the integrity of the data since it's for our developers to \ntest code against. Also, the system is on a mirrored set of drives. On our \nlive servers we have hardware raid 1 at this point for the data drives. When \nI/O becomes a bottleneck, we are planning on moving to Raid 10 for the data \nand Raid 1 for the transaction log with as many drives as I can twist arms \nfor. Up to this point it has been easier just to stuff the servers full of \nmemory and let the OS cache the db in memory. We know that at some point \nthis will no longer work, but for now it is.\n\nAs a side note, I learned something very interesting for our developers here. \nWe had been doing a drop database and then a reload off a db dump from our \nlive server for test data. This takes 8-15 minutes depending on the server \n(the one above takes about 8 minutes). I learned through testing that I can \nuse create database template some_other_database and make a duplicate in \nabout 2.5 minutes. which is a huge gain for us. 
We can load a pristine copy, \nmake a duplicate, do our testing on the duplicate, drop the duplicate and \ncreate a new duplicate in less then five mintes.\n\nCool.\n\n> Probably most important, though is to look at the individual queries and\n> see what they are doing.\n>\n> >Postgresql 7.4.5 installed via RPM running on Linux kernel 2.6.8.1\n> >\n> >Items changed in the postgresql.conf:\n> >\n> >tcpip_socket = true\n> >max_connections = 32\n> >port = 5432\n> >shared_buffers = 12288\t\t# min 16, at least max_connections*2, 8KB each\n> >sort_mem=16384\n> >vacuum_mem = 32768\t\t# min 1024, size in KB\n> >max_fsm_pages = 60000\t\t# min max_fsm_relations*16, 6 bytes each\n> >max_fsm_relations = 1000\t# min 100, ~50 bytes each\n> >effective_cache_size = 115200\t# typically 8KB each\n> >random_page_cost = 1\t\t# units are one sequential page fetch cost\n>\n> Most of these seem okay to me, but random page cost is *way* too low.\n> This should never be tuned below 2. I think this says \"an index scan of\n> *all* rows is as cheap as a sequential scan of all rows.\" and that\n> should never be true.\n\nYou caught me. I actually tweaked that today after finding a page that \nsuggested doing that if the data was mostly in memory. I have been running \nit at 2, and since we didn't notice any improvement, it will be going back to \n2. \n\n> What could actually be happening is that you are getting index scans\n> when a sequential scan would be faster.\n>\n> I don't know what you would see, but what does \"explain analyze select\n> count(*) from blah;\" say. If it is an index scan, you have your machine\n> mistuned. select count(*) always grabs every row, and this is always\n> cheaper with a sequential scan.\n>\n> John\n> =:->\nWith a random_page_cost set to 1, on a larger table a select count(*) nets \nthis...\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=9848.12..9848.12 rows=1 width=0) (actual \ntime=4916.869..4916.872 rows=1 loops=1)\n -> Seq Scan on answer (cost=0.00..8561.29 rows=514729 width=0) (actual \ntime=0.011..2624.202 rows=514729 loops=1)\n Total runtime: 4916.942 ms\n(3 rows)\n\nNow here is a very curious thing. If I turn on timing and run the count \nwithout explain analyze, I get...\n\n count\n--------\n 514729\n(1 row)\n\nTime: 441.539 ms\n\nHow odd. Running the explain adds 4.5s to it. Running the explain again goes \nback to almost 5s. Now I wonder why that would be different.\n\nChanging random cpu cost back to 2 nets little difference (4991.940ms for \nexplain and 496ms) But we will leave it at that for now.\n\n-- \nChris Kratz\nSystems Analyst/Programmer\nVistaShare LLC\nwww.vistashare.com\n\n", "msg_date": "Wed, 9 Feb 2005 16:25:34 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": "On Wednesday 09 February 2005 03:27 pm, you wrote:\n---snip---\n> > We continue to tune our individual queries where we can, but it seems we\n> > still are waiting on the db a lot in our app. When we run most queries,\n> > top shows the postmaster running at 90%+ constantly during the duration\n> > of the request.\n>\n> Is this for the duration of a single request or 90% constantly?\n\nNo, this is during the processing of a request. The rest of the time, it sits \nidle. \n\nWe thought we would post our config and see if there was something obvious we \nwere missing. 
I expect the only real answer is to continue to optimise the \nsql our app generates since compexity seems to be the issue.\n\n> If it's a single request, odds are you're going through much more\n> information than you need to. Lots of aggregate work (max / min) perhaps\n> or count(*)'s where an approximation would do?\n\nYes, many of our queries heavily use common aggregates and grouping. And the \nexplains bears out that we spend most of our time in sorts related to the \ngrouping, aggregating, etc. The problem we often need to get multiple \nrecords per person, but then summarize that data per person. Our users want \nAccurate, Fast and Complex. It's hard to convince them they can only have 2 \nof the 3. :-)\n\n> > Our question is simply this, is it better to invest in a faster processor\n> > at this point, or are there configuration changes to make it faster? \n> > I've done\n>\n> If it's for a single request, you cannot get single processors which are\n> much faster than what you describe as having.\n>\n> Want to send us a few EXPLAIN ANALYZE's of your longer running queries?\n\nMany (most) of our queries are dynamic based on what the user needs. \nSearches, statistics gathering, etc are all common tasks our users do. \n\nHere is an explain from a common search giving a list of people. This runs in \nabout 4.2s (4.5s with web page generation) which is actually pretty amazing \nwhen you think about what it does. It's just that we are always looking for \nspeed in the web environment since concurrent usage can be high at times \nmaking the server feel less responsive. I'm looking at possibly moving this \ninto lazy materialized views at some point since I can't seem to make the sql \ngo much faster.\n\n Sort (cost=8165.28..8198.09 rows=13125 width=324) (actual \ntime=4116.714..4167.915 rows=13124 loops=1)\n Sort Key: system_name_id, fullname_lfm_sort\n -> GroupAggregate (cost=6840.96..7267.53 rows=13125 width=324) (actual \ntime=2547.928..4043.255 rows=13124 loops=1)\n -> Sort (cost=6840.96..6873.78 rows=13125 width=324) (actual \ntime=2547.876..2603.938 rows=14115 loops=1)\n Sort Key: system_name_id, fullname_last_first_mdl, phone, \ndaytime_phone, email_address, fullname_lfm_sort, firstname, is_business, ssn, \ninactive\n -> Subquery Scan foo (cost=5779.15..5943.21 rows=13125 \nwidth=324) (actual time=2229.877..2459.003 rows=14115 loops=1)\n -> Sort (cost=5779.15..5811.96 rows=13125 width=194) \n(actual time=2229.856..2288.350 rows=14115 loops=1)\n Sort Key: dem.nameid, dem.name_float_lfm_sort\n -> Hash Left Join (cost=2354.58..4881.40 \nrows=13125 width=194) (actual time=1280.523..2139.423 rows=14115 loops=1)\n Hash Cond: (\"outer\".relatednameid = \n\"inner\".nameid)\n -> Hash Left Join (cost=66.03..1889.92 \nrows=13125 width=178) (actual time=576.228..1245.760 rows=14115 loops=1)\n Hash Cond: (\"outer\".nameid = \n\"inner\".nameid)\n -> Merge Left Join \n(cost=0.00..1758.20 rows=13125 width=174) (actual time=543.056..1015.657 \nrows=13124 loops=1)\n Merge Cond: (\"outer\".inactive = \n\"inner\".validanswerid)\n -> Index Scan using \nnamemaster_inactive_idx on namemaster dem (cost=0.00..3714.19 rows=13125 \nwidth=163) (actual time=0.594..188.219 rows=13124 loops=1)\n Filter: (programid = 55)\n -> Index Scan using \nvalidanswerid_pk on validanswer ina (cost=0.00..1103.61 rows=46367 width=19) \n(actual time=0.009..360.218 rows=26005 loops=1)\n -> Hash (cost=65.96..65.96 rows=31 \nwidth=8) (actual time=33.053..33.053 rows=0 loops=1)\n -> Nested Loop \n(cost=0.00..65.96 rows=31 width=8) 
(actual time=0.078..25.047 rows=1874 \nloops=1)\n -> Index Scan using \nrelationship_programid on relationship s (cost=0.00..3.83 rows=1 width=4) \n(actual time=0.041..0.047 rows=1 loops=1)\n Index Cond: \n(programid = 55)\n Filter: \n(inter_agency_id = 15530)\n -> Index Scan using \n\"relationshipdetail_relatio-4\" on relationshipdetail r (cost=0.00..61.17 \nrows=77 width=12) (actual time=0.017..9.888 rows=1874 loops=1)\n Index Cond: \n(r.relationshipid = \"outer\".relationshipid)\n -> Hash (cost=2142.84..2142.84 rows=58284 \nwidth=24) (actual time=704.197..704.197 rows=0 loops=1)\n -> Seq Scan on namemaster rln155301 \n(cost=0.00..2142.84 rows=58284 width=24) (actual time=0.015..402.784 \nrows=58284 loops=1)\n Total runtime: 4228.945 ms\n", "msg_date": "Wed, 9 Feb 2005 17:15:26 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": "On Wednesday 09 February 2005 03:59 pm, Greg Stark wrote:\n> Chris Kratz <[email protected]> writes:\n> > We continue to tune our individual queries where we can, but it seems we\n> > still are waiting on the db a lot in our app. When we run most queries,\n> > top shows the postmaster running at 90%+ constantly during the duration\n> > of the request. The disks get touched occasionally, but not often. Our\n> > database on disk is around 2.6G and most of the working set remains\n> > cached in memory, hence the few disk accesses. All this seems to point\n> > to the need for faster processors.\n>\n> I would suggest looking at the top few queries that are taking the most\n> cumulative time on the processor. It sounds like the queries are doing a\n> ton of logical i/o on data that's cached in RAM. A few indexes might cut\n> down on the memory bandwidth needed to churn through all that data.\n\nHmmm, yes we continue to use indexes judiciously. I actually think we've \noverdone it in some cases since inserts are starting to slow in some critical \nareas.\n\n> > Items changed in the postgresql.conf:\n> > ...\n> > random_page_cost = 1\t\t# units are one sequential page fetch cost\n>\n> This makes it nigh impossible for the server from ever making a sequential\n> scan when an index would suffice. What query made you do this? What plan\n> did it fix?\n\nYes, it got set back to 2. I was testing various settings suggested by a \nposting in the archives and that one didn't get reset.\n", "msg_date": "Wed, 9 Feb 2005 17:17:59 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": "\n> As a side note, I learned something very interesting for our developers \n> here.\n> We had been doing a drop database and then a reload off a db dump from \n> our\n> live server for test data. This takes 8-15 minutes depending on the \n> server\n> (the one above takes about 8 minutes). I learned through testing that I \n> can\n> use create database template some_other_database and make a duplicate in\n> about 2.5 minutes. which is a huge gain for us. We can load a pristine \n> copy,\n> make a duplicate, do our testing on the duplicate, drop the duplicate and\n> create a new duplicate in less then five mintes.\n\n\tI think thats because postgres just makes a file copy from the template. 
\nThus you could make it 2x faster if you put the template in another \ntablespace on another drive.\n", "msg_date": "Thu, 10 Feb 2005 00:58:48 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": "The world rejoiced as [email protected] (PFC) wrote:\n>> As a side note, I learned something very interesting for our\n>> developers here.\n>> We had been doing a drop database and then a reload off a db dump\n>> from our\n>> live server for test data. This takes 8-15 minutes depending on the\n>> server\n>> (the one above takes about 8 minutes). I learned through testing\n>> that I can\n>> use create database template some_other_database and make a duplicate in\n>> about 2.5 minutes. which is a huge gain for us. We can load a\n>> pristine copy,\n>> make a duplicate, do our testing on the duplicate, drop the duplicate and\n>> create a new duplicate in less then five mintes.\n>\n> \tI think thats because postgres just makes a file copy from the\n> template. Thus you could make it 2x faster if you put the template\n> in another tablespace on another drive.\n\nI had some small amusement today trying this feature out in one of our\nenvironments today...\n\nWe needed to make a copy of one of the databases we're replicating for\nthe sysadmins to use for some testing.\n\nI figured using the \"template\" capability was:\n a) Usefully educational to one of the other DBAs, and\n b) Probably a quick way to copy the data over.\n\nWe shortly discovered that we had to shut off the Slony-I daemon in\norder to get exclusive access to the database; no _big_ deal.\n\nAt that point, he hit ENTER, and rather quickly saw...\nCREATE DATABASE.\n\nWe then discovered that the sysadmins wanted the test DB to be on one\nof the other servers. Oops. Oh, well, we'll have to do this on the\nother server; no big deal.\n\nEntertainment ensued... \"My, that's taking a while...\" At about the\npoint that we started thinking there might be a problem...\n\nCREATE DATABASE\n\nThe entertainment was that the first box is one of those spiffy new\n4-way Opteron boxes, whilst the \"slow\" one was a 4-way Xeon... Boy,\nthose Opterons are faster...\n-- \noutput = reverse(\"moc.liamg\" \"@\" \"enworbbc\")\nhttp://cbbrowne.com/info/rdbms.html\n\"No matter how far you have gone on the wrong road, turn back.\"\n-- Turkish proverb\n", "msg_date": "Wed, 09 Feb 2005 22:09:29 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning" } ]
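The duplicate-from-template trick Chris describes scripts down to a couple of lines; a sketch with placeholder database names follows (note that nobody may be connected to the template database while the copy is being made).

#!/bin/bash
# Sketch of the fast test-database refresh discussed above.
# "pristine" is the already-loaded reference copy and "devtest" the
# disposable working copy; both names are placeholders.
dropdb devtest 2>/dev/null
createdb -T pristine devtest
# ...run the destructive tests against devtest, then repeat as needed.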
[ { "msg_contents": "Folks,\n\nA lot of people have been pestering me for this stuff, so I've finally \nfinished it and put it up. \nhttp://www.powerpostgresql.com/\n\nHopefully this should help people as much as the last one did.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 9 Feb 2005 13:50:24 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "annotated PostgreSQL.conf now up" }, { "msg_contents": "Josh,\n\nIt would be great if you could email this list when you add new articles or\ncontent to the site- and of course let us know when it's for sale.....\n\n-Jon \n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On\nBehalf Of Josh Berkus\nSent: Wednesday, February 09, 2005 1:50 PM\nTo: [email protected]; [email protected]\nSubject: [sfpug] annotated PostgreSQL.conf now up\n\nFolks,\n\nA lot of people have been pestering me for this stuff, so I've finally \nfinished it and put it up. \nhttp://www.powerpostgresql.com/\n\nHopefully this should help people as much as the last one did.\n\n--\n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n\n\n", "msg_date": "Wed, 9 Feb 2005 15:18:28 -0800", "msg_from": "\"Jon Asher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: annotated PostgreSQL.conf now up" }, { "msg_contents": "We are currently considering the possibility of creating a warm standby \nmachine utilizing heartbeat and a network attached storage device for the DATA \ndirectory. The idea being that the warm standby machine has its postmaster \nstopped. When heartbeat detects the death of the master server, the \npostmaster is started up on the warm standby using the shared DATA directory. \nOther than the obvious problems of both postmasters inadvertently attempting \naccess at the same time, I'm curious to know if anyone has tried any similar \nsetups and what the experiences have been. Specifically is the performance of \ngigE good enough to allow postgres to perform under load with an NFS mounted \nDATA dir? Are there other problems I haven't thought about? Any input would \nbe greatly appreciated.\n\nThanks!\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Fri, 8 Apr 2005 10:01:55 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "DATA directory on network attached storage" }, { "msg_contents": "Jeff,\n\n>  Specifically is the performance of\n> gigE good enough to allow postgres to perform under load with an NFS\n> mounted DATA dir?  Are there other problems I haven't thought about?  Any\n> input would be greatly appreciated.\n\nThe big problem with NFS-mounted data is that NFS is designed to be a lossy \nprotocol; that is, sometimes bits get dropped and you just re-request the \nfile. This isn't a great idea with databases.\n\nIf we were talking SAN, then I don't see any reason why your plan wouldn't \nwork. However, what type of failure exactly are you guarding against? How \nlikely is a machine failure if its hard drives are external?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 8 Apr 2005 10:05:45 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [sfpug] DATA directory on network attached storage" }, { "msg_contents": " This message is in MIME format. 
Josh, thanks for the quick reply!\n\nOn Fri, 8 Apr 2005, Josh Berkus wrote:\n\n> Jeff,\n>\n>> Specifically is the performance of\n>> gigE good enough to allow postgres to perform under load with an NFS\n>> mounted DATA dir? Are there other problems I haven't thought about? Any\n>> input would be greatly appreciated.\n>\n> The big problem with NFS-mounted data is that NFS is designed to be a lossy\n> protocol; that is, sometimes bits get dropped and you just re-request the\n> file. This isn't a great idea with databases.\n\nThat is sort of what I was thinking, and we'll have to address this somehow.\n\n>\n> If we were talking SAN, then I don't see any reason why your plan wouldn't\n> work. However, what type of failure exactly are you guarding against? How\n> likely is a machine failure if its hard drives are external?\n\nI believe we are looking to fulfill two possibilities. First is failure, be \nit CPU fan, ram, motherboard, swap partition, kernel panic, etc. Second is \nthe ability to take the server offline for maintenance upgrades, etc. A warm \nstandby would be ideal to satisfy both conditions. In the past we have done \nthis with sloni, but sloni can be cumbersome when schema changes happen often \non the db as is the case with this one. pg-cluster is another option, but it \nappears it comes only as a patched version of postgres which would hamper our \nability to change versions as quickly as might be desired.\n\nPerhaps something shared could be done with PITR as this new install will be \npg8.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Fri, 8 Apr 2005 10:11:07 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DATA directory on network attached storage" }, { "msg_contents": "On Fri, Apr 08, 2005 at 10:01:55AM -0700, Jeff Frost wrote:\n> We are currently considering the possibility of creating a warm standby \n> machine utilizing heartbeat and a network attached storage device for the \n> DATA directory. The idea being that the warm standby machine has its \n> postmaster stopped. When heartbeat detects the death of the master server, \n> the postmaster is started up on the warm standby using the shared DATA \n> directory. Other than the obvious problems of both postmasters \n> inadvertently attempting access at the same time, I'm curious to know if \n> anyone has tried any similar setups and what the experiences have been. \n> Specifically is the performance of gigE good enough to allow postgres to \n> perform under load with an NFS mounted DATA dir? Are there other problems \n> I haven't thought about? 
Any input would be greatly appreciated.\n\nWe (Zapatec Inc) have been running lots of Pg dbs off of a Network Appliance\nfileserver (NFS TCPv3) with FreeBSD client machines for several years now with\nno problems AFAICT other than insufficient bandwidth between servers and the\nfileserver (for one application, www.fastbuzz.com, 100baseTX (over a private\nswitched network) was insufficient, but IDE-UDMA was fine, so GigE would have\nworked too, but we couldn't justify purchasing a new GigE adapter for our\nNetapp).\n\nWe have the same setup as you would like, allowing for warm standby(s),\nhowever we haven't had to use them at all.\n\nWe have not, AFAICT, had any problems with the traffic over NFS as far as\nreliability -- I'm sure there is a performance penalty, but the reliability\nand scalability gains more than offset that.\n\nFWIW, if I were to do this anew, I would probably opt for iSCSI over GigE with\na NetApp.\n\nAdi\n", "msg_date": "Fri, 8 Apr 2005 19:21:36 +0000", "msg_from": "Aditya <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [sfpug] DATA directory on network attached storage" }, { "msg_contents": "Aditya wrote:\n> We have not, AFAICT, had any problems with the traffic over NFS as far as\n> reliability -- I'm sure there is a performance penalty, but the reliability\n> and scalability gains more than offset that.\n\nMy experience agrees with yours. However we did find one gotcha -- see \nthe thread starting here for details:\nhttp://archives.postgresql.org/pgsql-hackers/2004-12/msg00479.php\n\nIn a nutshell, be careful when using an nfs mounted data directory \ncombined with an init script that creates a new data dir when it doesn't \nfind one.\n\n> FWIW, if I were to do this anew, I would probably opt for iSCSI over GigE with\n> a NetApp.\n\nAny particular reason? Our NetApp technical rep advised nfs over iSCSI, \nIIRC because of performance.\n\nJoe\n", "msg_date": "Mon, 11 Apr 2005 10:59:51 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [sfpug] DATA directory on network attached storage" }, { "msg_contents": "On Mon, Apr 11, 2005 at 10:59:51AM -0700, Joe Conway wrote:\n> >FWIW, if I were to do this anew, I would probably opt for iSCSI over GigE \n> >with\n> >a NetApp.\n> \n> Any particular reason? Our NetApp technical rep advised nfs over iSCSI, \n> IIRC because of performance.\n\nI would mount the Netapp volume(s) as a block level device on my server using\niSCSI (vs. a file-based device like NFS) so that filesystem parameters could\nbe more finely tuned and one could really make use of jumbo frames over GigE.\n\nBut that level of tuning depends on load after all and with a Netapp you can\nhave both, so maybe start with having your databases on an NFS volume on the\nNetapp, and when you have a better idea of the tuning requirements, move it\nover to a iSCSI LUN.\n\nI'm not sure I understand why NFS would perform better than iSCSI -- in any\ncase, some large Oracle dbs at my current job are moving to iSCSI on Netapp\nand in that environment both Oracle and Netapp advise iSCSI (probably because\nOracle uses the block-level device directly), so I suspend the difference in\nperformance is minimal.\n\nAdi\n", "msg_date": "Mon, 11 Apr 2005 18:20:32 +0000", "msg_from": "Aditya <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [sfpug] DATA directory on network attached storage" }, { "msg_contents": "Aditya wrote:\n> On Mon, Apr 11, 2005 at 10:59:51AM -0700, Joe Conway wrote:\n>>Any particular reason? 
Our NetApp technical rep advised nfs over iSCSI, \n>>IIRC because of performance.\n> \n> I would mount the Netapp volume(s) as a block level device on my server using\n> iSCSI (vs. a file-based device like NFS) so that filesystem parameters could\n> be more finely tuned and one could really make use of jumbo frames over GigE.\n\nActually, we're using jumbo frames over GigE with nfs too.\n\n> I'm not sure I understand why NFS would perform better than iSCSI -- in any\n> case, some large Oracle dbs at my current job are moving to iSCSI on Netapp\n> and in that environment both Oracle and Netapp advise iSCSI (probably because\n> Oracle uses the block-level device directly), so I suspend the difference in\n> performance is minimal.\n\nWe also have Oracle DBs via nfs mounted Netapp, again per the local \nguru's advice. It might be one of those things that is still being \ndebated even within Netapp's ranks (or maybe our info is dated - worth a\ncheck).\n\nThanks,\n\nJoe\n", "msg_date": "Mon, 11 Apr 2005 13:17:34 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [sfpug] DATA directory on network attached storage" } ]
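A rough sketch of the PITR direction mentioned at the end of the thread, using the 8.0 online-backup functions (the label is arbitrary, the copy step is only indicated as a comment, and WAL shipping via archive_command in postgresql.conf still has to be set up separately):

    SELECT pg_start_backup('warm_standby_base');
    -- ... copy the data directory to the standby's shared storage (tar, rsync, ...) ...
    SELECT pg_stop_backup();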
[ { "msg_contents": "\n> Hello All,\n> \n> In contrast to what we hear from most others on this list, we find our\n> database servers are mostly CPU bound. We are wondering if this is\n> because\n> we have postgres configured incorrectly in some way, or if we really\nneed\n> more powerfull processor(s) to gain more performance from postgres.\n\nYes, many apps are not I/O bound (mine isn't). Here are factors that\nare likely to make your app CPU bound:\n\n1. Your cache hit ratio is very high\n2. You have a lot of concurrency.\n3. Your queries are complex, for example, doing sorting or statistics\nanalysis\n4. Your queries are simple, but the server has to process a lot of them\n(transaction overhead becomes significant) sequentially.\n5. You have context switching problems, etc.\n\nOn the query side, you can tune things down considerably...try and keep\nsorting down to a minimum (order on keys, avoid distinct where possible,\nuse 'union all', not 'union'). Basically, reduce individual query time.\n\nOther stuff:\nFor complex queries, use views to cut out plan generation.\nFor simple but frequently run queries (select a,b,c from t where k), use\nparameterized prepared statements for a 50% cpu savings, this may not be\nan option in some client interfaces.\n\nOn the hardware side, you will get improvements by moving to Opteron,\netc.\n\nMerlin\n", "msg_date": "Wed, 9 Feb 2005 17:08:48 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": "On Wednesday 09 February 2005 05:08 pm, Merlin Moncure wrote:\n> > Hello All,\n> >\n> > In contrast to what we hear from most others on this list, we find our\n> > database servers are mostly CPU bound. We are wondering if this is\n> > because\n> > we have postgres configured incorrectly in some way, or if we really\n>\n> need\n>\n> > more powerfull processor(s) to gain more performance from postgres.\n>\n> Yes, many apps are not I/O bound (mine isn't). Here are factors that\n> are likely to make your app CPU bound:\n>\n> 1. Your cache hit ratio is very high\n> 2. You have a lot of concurrency.\n> 3. Your queries are complex, for example, doing sorting or statistics\n> analysis\n\nFor now, it's number 3. Relatively low usage, but very complex sql.\n\n> 4. Your queries are simple, but the server has to process a lot of them\n> (transaction overhead becomes significant) sequentially.\n> 5. You have context switching problems, etc.\n>\n> On the query side, you can tune things down considerably...try and keep\n> sorting down to a minimum (order on keys, avoid distinct where possible,\n> use 'union all', not 'union'). Basically, reduce individual query time.\n>\n> Other stuff:\n> For complex queries, use views to cut out plan generation.\n> For simple but frequently run queries (select a,b,c from t where k), use\n> parameterized prepared statements for a 50% cpu savings, this may not be\n> an option in some client interfaces.\n\nPrepared statements are not something we've tried yet. Perhaps we should look \ninto that in cases where it makes sense.\n\n>\n> On the hardware side, you will get improvements by moving to Opteron,\n> etc.\n>\n> Merlin\n\nWell, that's what we were looking for. 
\n\n---\n\nIt sounds like our configuration as it stands is probably about as good as we \nare going to get with the hardware we have at this point.\n\nWe are cpu bound reflecting the fact that we tend to have complex statements \ndoing aggregates, sorts and group bys.\n\nThe solutions appear to primarily be:\n1. Going to faster hardware of which probably Opterons would be about the only \nchoice. And even that probably won't be a huge difference.\n2. Moving to more materialized views and prepared statements where we can.\n3. Continue to tweak the sql behind our app.\n", "msg_date": "Wed, 9 Feb 2005 17:30:41 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": "On Wed, 9 Feb 2005 17:30:41 -0500, Chris Kratz\n<[email protected]> wrote:\n> The solutions appear to primarily be:\n> 1. Going to faster hardware of which probably Opterons would be about the only\n> choice. And even that probably won't be a huge difference.\n\nI'd beg to differ on that last part. The difference between a 3.6GHz\nXeon and a 2.8GHz Opteron is ~150% speed increase on the Opteron on my\nCPU bound app. This is because the memory bandwidth on the Opteron is\nENORMOUS compared to on the Xeon. Add to that the fact that you\nactually get to use more than about 2G of RAM directly and you've got\nthe perfect platform for a high speed database on a budget.\n\n> 2. Moving to more materialized views and prepared statements where we can.\n\nDefinitely worth investigating. I wish I could, but I can't get my\ncustomers to even consider slightly out of date stats.... :(\n\n> 3. Continue to tweak the sql behind our app.\n\nShort of an Opteron based system, this is by far your best bet.\n\n-- \nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Thu, 10 Feb 2005 01:27:16 +0000", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning" }, { "msg_contents": ">> 2. Moving to more materialized views and prepared statements where we \n>> can.\n>\n> Definitely worth investigating. I wish I could, but I can't get my\n> customers to even consider slightly out of date stats.... :(\n\n\tPut a button 'Stats updated every hour', which gives the results in 0.1 \nseconds, and a button 'stats in real time' which crunches 10 seconds \nbefore displaying the page... if 90% of the people click on the first one \nyou save a lot of CPU.\n\n\tSeems like people who hit Refresh every 10 seconds to see an earnings \ngraph creep up by half a pixel every time... but it seems it's moving !\n\n\tMore seriously, you can update your stats in near real time with a \nmaterialized view, there are two ways :\n\t- ON INSERT / ON UPDATE triggers which update the stats in real time \nbased on each modification\n\t- Have statistics computed for everything until some point in time (like \nan hour ago) and only compute and add stats on the records added or \nmodified since (but it does not work very well for deleted records...)\n", "msg_date": "Thu, 10 Feb 2005 04:38:39 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Tuning" } ]
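A minimal sketch of the server-side prepared statements mentioned above; the table and column names come from the example in the message, the parameter type is assumed. The plan is built once at PREPARE time and reused by every EXECUTE in the session:

    PREPARE get_row (integer) AS
        SELECT a, b, c FROM t WHERE k = $1;
    EXECUTE get_row(42);
    EXECUTE get_row(43);
    DEALLOCATE get_row;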
[ { "msg_contents": "Hi guys,\n\ni'm planning try to do a comparative between some DBMS\nand postgresql (informix, oracle, m$ sql server,\nfirebird and even mysql) i'm coordinating with people\nin the irc spanish postgresql channel.\n\n1) maybe can anyone give me suggestions on this?\n2) point me to a good benchmark test or script that\ncan be used?\n3) any comments?\n\nregards,\nJaime Casanova\n\n_________________________________________________________\nDo You Yahoo!?\nInformaci�n de Estados Unidos y Am�rica Latina, en Yahoo! Noticias.\nVis�tanos en http://noticias.espanol.yahoo.com\n", "msg_date": "Wed, 9 Feb 2005 23:49:10 -0600 (CST)", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "Benchmark" }, { "msg_contents": "\nOn Feb 10, 2005, at 12:49 AM, Jaime Casanova wrote:\n\n> Hi guys,\n>\n> i'm planning try to do a comparative between some DBMS\n> and postgresql (informix, oracle, m$ sql server,\n> firebird and even mysql) i'm coordinating with people\n> in the irc spanish postgresql channel.\n>\n> 2) point me to a good benchmark test or script that\n> can be used?\n\nThe TPC tests are fairly widely accepted. The thing with a benchmark \nis they are unlikely to simulate your real traffic. But it is always \nfun to look at numbers\n\n> 3) any comments?\n>\n\nIf you plan on making your results public be very careful with the \nlicense agreements on the other db's. I know Oracle forbids the \nrelease of benchmark numbers without their approval.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Thu, 10 Feb 2005 08:21:09 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "On Thu, 10 Feb 2005 08:21:09 -0500, Jeff <[email protected]> wrote:\n> \n> If you plan on making your results public be very careful with the\n> license agreements on the other db's. I know Oracle forbids the\n> release of benchmark numbers without their approval.\n\n...as all of the other commercial databases do. This may be off-topic,\nbut has anyone actually suffered any consequences of a published\nbenchmark without permission?\n\nFor example, I am a developer of Mambo, a PHP-based CMS application,\nand am porting the mysql functions to ADOdb so I can use grown-up\ndatabases ;-)\n\nWhat is keeping me from running a copy of Mambo on a donated server\nfor testing and performance measures (including the commercial\ndatabases) and then publishing the results based on Mambo's\nperformance on each?\n\nIt would be really useful to know if anyone has ever been punished for\ndoing this, as IANAL but that restriction is going to be very, VERY\ndifficult to back up in court without precedence. Is this just a\ndeterrent, or is it real?\n\n-- Mitch\n", "msg_date": "Fri, 11 Feb 2005 00:22:05 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "Mitch Pirtle <[email protected]> writes:\n> It would be really useful to know if anyone has ever been punished for\n> doing this, as IANAL but that restriction is going to be very, VERY\n> difficult to back up in court without precedence. Is this just a\n> deterrent, or is it real?\n\nIf Oracle doesn't eat your rear for lunch, it would only be because you\nhadn't annoyed them sufficiently for them to bother. Under the terms of\nthe license agreement that you presumably clicked through, you gave up\nyour rights to publish anything they don't like. 
Do a little Google\nresearch. For instance\nhttp://www.infoworld.com/articles/op/xml/01/04/16/010416opfoster.html\n\nThe impression I get is that if you are willing to spend lots of $$\nyou could *maybe* win the case, if you can still find a judge who thinks\nthat the public good outweighs private contract law (good luck, with the\nRepublicans in office). Do you have a larger budget for legal issues\nthan Oracle does? If so, step right up.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Feb 2005 01:38:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark " }, { "msg_contents": "On Fri, 11 Feb 2005 01:38:13 -0500, Tom Lane <[email protected]> wrote:\n> \n> If Oracle doesn't eat your rear for lunch, \n\nThat would be more like an appetizer at a california cuisine place.\n\n> it would only be because you\n> hadn't annoyed them sufficiently for them to bother. Under the terms of\n> the license agreement that you presumably clicked through, you gave up\n> your rights to publish anything they don't like. Do a little Google\n> research. For instance\n> http://www.infoworld.com/articles/op/xml/01/04/16/010416opfoster.html\n\nI did do the research, but couldn't find one instance where someone\nwas actually taken to task over it. So far it appears to be bluster.\nHorrifying to some, but still bluster.\n\n> The impression I get is that if you are willing to spend lots of $$\n> you could *maybe* win the case, if you can still find a judge who thinks\n> that the public good outweighs private contract law (good luck, with the\n> Republicans in office). Do you have a larger budget for legal issues\n> than Oracle does? If so, step right up.\n\nThe reason I asked is because this has a lot more to do with than just\nmoney. This is restriction of speech as well, and publishing\nbenchmarks (simply as statistical data) cannot in any way be construed\nas defamation or libel. Just because it is in the click-wrap contract\ndoesn't mean you waive certain rights, and this has been proven (and\nnow has precedence). Again, I would love to know of any instances\nwhere someone published (forbidden) benchmarks and was actually\npursued in a court of law. Well, and the result, too ;-)\n\nI ask not to cause trouble, but to learn if this is just a deterrent\nthat has never been tested (\"small pebble\") or a well-defined threat\nthat will be enforced (\"plasma cannon\").\n\n-- Mitch, thinking this is off topic but still fascinating\n", "msg_date": "Fri, 11 Feb 2005 02:04:03 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "On Fri, 11 Feb 2005 01:38:13 -0500, Tom Lane <[email protected]> wrote:\n> Mitch Pirtle <[email protected]> writes:\n> > It would be really useful to know if anyone has ever been punished for\n> > doing this, as IANAL but that restriction is going to be very, VERY\n> > difficult to back up in court without precedence. Is this just a\n> > deterrent, or is it real?\n> \n> If Oracle doesn't eat your rear for lunch, it would only be because you\n> hadn't annoyed them sufficiently for them to bother. Under the terms of\n> the license agreement that you presumably clicked through, you gave up\n> your rights to publish anything they don't like. Do a little Google\n> research. 
For instance\n> http://www.infoworld.com/articles/op/xml/01/04/16/010416opfoster.html\n> \nWhat about the free speech rigths, in USA they are in the constitution\nand cannot be denied or revoked, IANAL.\n\nAnd like stated by Mitch just numbers are not lies that can be pursued\nin a court of law.\n\nThink anout it, In USA you can speak and publish about the President\nbut cannot say anything about M$ or Oracles' DBMS?\n\nregards,\nJaime Casanova\n", "msg_date": "Fri, 11 Feb 2005 02:22:39 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "On Fri, Feb 11, 2005 at 02:22:39 -0500,\n Jaime Casanova <[email protected]> wrote:\n> What about the free speech rigths, in USA they are in the constitution\n> and cannot be denied or revoked, IANAL.\n\nYou can voluntarily give up your rights to free speech in the US.\n\n> And like stated by Mitch just numbers are not lies that can be pursued\n> in a court of law.\n\nI think part of the reason they don't want people doing this, is because\nif you don't configure their database well, you can make it look bad\nwhen it shouldn't.\n\n> Think anout it, In USA you can speak and publish about the President\n> but cannot say anything about M$ or Oracles' DBMS?\n\nNot if you signed a contract that says you can't.\n\nIf you didn't actually sign an agreement saying you wouldn't publish\nbenchmarks, then you might have a case. You might argue that a click\nthrough eula isn't a valid contract or that you are a third party\nwho isn't bound by whatever agreement the person who installed Oracle\nmade. However it probably would cost you a bundle to have a chance\nat winning.\n", "msg_date": "Fri, 11 Feb 2005 07:08:41 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Fri, Feb 11, 2005 at 02:22:39 -0500,\n> Jaime Casanova <[email protected]> wrote:\n> \n>>Think anout it, In USA you can speak and publish about the President\n>>but cannot say anything about M$ or Oracles' DBMS?\n> \n> \n> Not if you signed a contract that says you can't.\n> \n> If you didn't actually sign an agreement saying you wouldn't publish\n> benchmarks, then you might have a case. You might argue that a click\n> through eula isn't a valid contract or that you are a third party\n> who isn't bound by whatever agreement the person who installed Oracle\n> made. However it probably would cost you a bundle to have a chance\n> at winning.\n\nIANAL etc, but the key fear is more likely that Oracle merely cancel \nyour licence(s). And deny you any more. And prevent your software from \nrunning on top of Oracle. At which point, you have to sue Oracle and \nprove restraint of trade or unfair competition or similar. Don't forget \nthat you have no right to purchase Oracle licences, they are free to \nsell to whoever they choose and under whatever conditions.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 11 Feb 2005 13:16:29 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "\nOn Feb 11, 2005, at 2:04 AM, Mitch Pirtle wrote:\n>\n> I did do the research, but couldn't find one instance where someone\n> was actually taken to task over it. So far it appears to be bluster.\n> Horrifying to some, but still bluster.\n>\n\nThey may not have done that yet, but they _COULD_. 
And if they decide \nto they have more money and power than you likely have and would drive \nyou into financial ruin for the rest of your life (Even if you are \ncorrect). It is a big risk. I think that clause is in there so MS, \netc. can't say \"Use FooSQL, its 428% faster than that Oracle POS Just \nlook!\"\n\nAfter using oracle in the last few months.. I can see why they'd want \nto prevent those numbers.. Oracle really isn't that good. I had been \nunder the impression that it was holy smokes amazingly fast. It just \nisn't. At least, in my experience it isn't. but that is another \nstory.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Fri, 11 Feb 2005 08:20:05 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "Jeff <[email protected]> writes:\n\n> After using oracle in the last few months.. I can see why they'd want to\n> prevent those numbers.. Oracle really isn't that good. I had been under the\n> impression that it was holy smokes amazingly fast. It just isn't. At least,\n> in my experience it isn't. but that is another story.\n\nOracle's claim to performance comes not from tight coding and low overhead.\nFor that you use Mysql :)\n\nOracle's claim to performance comes from how you can throw it at a machine\nwith 4-16 processors and it really does get 4-16x as fast. Features like\npartitioned tables, parallel query, materialized views, etc make it possible\nto drive it further up the performance curve than Sybase/MSSQL or Postgres.\n\nIn terms of performance, Oracle is to Postgres as Postgres is to Mysql: More\ncomplexity, more overhead, more layers of abstraction, but in the long run it\npays off when you need it. (Only without the user-friendliness of either\nopen-source softwares.)\n\n-- \ngreg\n\n", "msg_date": "11 Feb 2005 10:04:08 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "I have never used Oracle myself, nor have I read its license agreement,\nbut what if you didn't name Oracle directly? ie:\n\nTPS\t\tDatabase\n-------------------------------\n112\tMySQL\n120\tPgSQL\n90\tSybase\n95\t\"Other database that *may* start with a letter after N\"\n50\t\"Other database that *may* start with a letter after L\"\n\nAs far as I know there are only a couple databases that don't allow you\nto post benchmarks, but if they remain \"unnamed\" can legal action be\ntaken? \n\nJust like all those commercials on TV where they advertise: \"Cleans 10x\nbetter then the other leading brand\".\n\n\nOn Fri, 2005-02-11 at 00:22 -0500, Mitch Pirtle wrote:\n> On Thu, 10 Feb 2005 08:21:09 -0500, Jeff <[email protected]> wrote:\n> > \n> > If you plan on making your results public be very careful with the\n> > license agreements on the other db's. I know Oracle forbids the\n> > release of benchmark numbers without their approval.\n> \n> ...as all of the other commercial databases do. 
This may be off-topic,\n> but has anyone actually suffered any consequences of a published\n> benchmark without permission?\n> \n> For example, I am a developer of Mambo, a PHP-based CMS application,\n> and am porting the mysql functions to ADOdb so I can use grown-up\n> databases ;-)\n> \n> What is keeping me from running a copy of Mambo on a donated server\n> for testing and performance measures (including the commercial\n> databases) and then publishing the results based on Mambo's\n> performance on each?\n> \n> It would be really useful to know if anyone has ever been punished for\n> doing this, as IANAL but that restriction is going to be very, VERY\n> difficult to back up in court without precedence. Is this just a\n> deterrent, or is it real?\n> \n> -- Mitch\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n-- \nMike Benoit <[email protected]>", "msg_date": "Fri, 11 Feb 2005 10:09:10 -0800", "msg_from": "Mike Benoit <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "\n> For example, I am a developer of Mambo, a PHP-based CMS application,\n> and am porting the mysql functions to ADOdb so I can use grown-up\n> databases ;-)\n\n\tJust yesterday I \"optimized\" a query for a website running MySQL. It's \nthe 'new products' type query :\n\nSELECT product_id, pd.product_name, p.price, COALESCE( s.specials_price, \np.price ) as real_price\n FROM products p, products_descriptions pd LEFT join specials s ON \n(p.product_id = s.product_id)\nWHERE p.product_id = pd.product_id\nAND pd.language_id=(constant)\nAND p.product_visible=TRUE\nAND s.is_active = TRUE\nORDER BY p.date_added DESC LIMIT 6\n\n\tWith ~100 products everything went smooth, about 0.5 ms. I decided to \ntest with 20.000 because we have a client with a large catalog coming. \nWow. It took half a second, to yield six products. Note that there are \nappropriate indexes all over the place (for getting the new products, I \nhave an index on product_visible, date_added)\n\n\tI tested with Postgres : with 100 products it takes 0.4 ms, with 20.000 \nit takes 0.6 ms...\n\n\tPostgres needs a bit of query massaging (putting an extra ORDER BY \nproduct_visible to use the index). With MySQL no amount of query rewriting \nwould do.\n\tI noted sometimes MySQL would never use a multicolumn index for an ORDER \nBY LIMIT unless one specifies a dummy condition on the missing parameter.\n\n\tSo I had to split the query in two : fetch the six product_ids, store \nthem in a PHP variable, implode(',',$ids), and SELECT ... WHERE product_id \nIN (x,y,z)\n\n\tUGLY ! And a lot slower.\n\n\tNote this is with MySQL 4.0.23 or something. Maybe 4.1 would be faster.\n\n\tHere's the URL to the site. There is a query log if you wanna look just \nfor laughs. Note that all the products boxes are active which makes a very \nlong page time... There are 42000 fictive products and about 60 real \nproducts. Don't use the search form unless you have a good book to read ! \nYou can click on \"Nouveautᅵs\" to see the old \"new products\" query in \naction, but please, only one people at a time.\n\n\thttp://pinceau-d-or.com/gros/product_info.php?products_id=164\n\tAh, you can buy stuff with the test version if you like, just don't use \nthe credit card because ... 
it works ;)\n\n\tThis is the un-messed-up version (production) :\n\thttp://pinceau-d-or.com/product_info.php?products_id=164\n\n\tIf some day I can recode this mess to use Postgres... this would be nice, \nso nice... the other day my database went apeshit and in the absence of \nforeign keys... and the absence of PHP checking anything... !\n\t\n\ntest=# CREATE TABLE suicide (id INT NOT NULL, moment TIMESTAMP NOT NULL);\nCREATE TABLE\ntest=# INSERT INTO suicide (id,moment) VALUES (0,now());\nINSERT 6145577 1\ntest=# INSERT INTO suicide (id,moment) VALUES (0,0);\nERREUR: La colonne <<moment>> est de type timestamp without time zone \nmais l'expression est de type integer\nHINT: Vous devez reecrire l'expression ou lui appliquer une \ntransformation de type.\ntest=# INSERT INTO suicide (id,moment) VALUES (NULL,1);\nERREUR: La colonne <<moment>> est de type timestamp without time zone \nmais l'expression est de type integer\nHINT: Vous devez reecrire l'expression ou lui appliquer une \ntransformation de type.\ntest=# INSERT INTO suicide (id,moment) VALUES (NULL,now());\nERREUR: Une valeur NULL dans la colonne <<id>> viole la contrainte NOT \nNULL\ntest=# SELECT * FROM suicide;\n id | moment\n----+----------------------------\n 0 | 2005-02-11 19:16:21.262359\n\nmysql> CREATE TABLE suicide (id INT NOT NULL, moment DATETIME NOT NULL);\nQuery OK, 0 rows affected (0.02 sec)\n\nmysql> INSERT INTO suicide (id,moment) VALUES (0,now());\nQuery OK, 1 row affected (0.00 sec)\n\nmysql> INSERT INTO suicide (id,moment) VALUES (0,0);\nQuery OK, 1 row affected (0.00 sec)\n\nmysql> INSERT INTO suicide (id,moment) VALUES (NULL,1);\nERROR 1048: Column 'id' cannot be null\nmysql> INSERT INTO suicide (moment) VALUES (now());\nQuery OK, 1 row affected (0.00 sec)\n\nhey, did I specify a default value ?\n\nmysql> SELECT * FROM suicide;\n+----+---------------------+\n| id | moment |\n+----+---------------------+\n| 0 | 2005-02-11 19:17:49 |\n| 0 | 0000-00-00 00:00:00 |\n| 0 | 2005-02-11 19:18:45 |\n+----+---------------------+\n3 rows in set (0.00 sec)\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 11 Feb 2005 19:22:35 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark (slightly off topic but oh well)" }, { "msg_contents": "\n> In terms of performance, Oracle is to Postgres as Postgres is to Mysql: \n> More\n> complexity, more overhead, more layers of abstraction, but in the long \n> run it\n> pays off when you need it. (Only without the user-friendliness of either\n> open-source softwares.)\n>\n\n\tI don't find postgres complex... I find it nice, well-behaved, very easy \nto use, very powerful, user-friendly... there are a lot of features but \nsomehow it's well integrated and makes a coherent set. It also has some \nvery useful secret passages (like the whole GiST family) which can save \nyou from some things at which SQL really sucks. It certainly is complex on \nthe inside but I think the devs have done a very good job at hiding that.\n\n\tIt's also polite : it will say 'I have a function with the name you said \nbut the parameter types don't match' ; mysql will just say 'syntax error, \nRTFM', or insert its favorite value of 0.\n", "msg_date": "Fri, 11 Feb 2005 19:32:40 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "Mike Benoit <[email protected]> writes:\n> I have never used Oracle myself, nor have I read its license agreement,\n> but what if you didn't name Oracle directly? 
ie:\n\n> TPS\t\tDatabase\n> -------------------------------\n> 112\tMySQL\n> 120\tPgSQL\n> 90\tSybase\n> 95\t\"Other database that *may* start with a letter after N\"\n> 50\t\"Other database that *may* start with a letter after L\"\n\nGreat Bridge did essentially that years ago, but I think we only got\naway with it because we didn't say which DBs \"Commercial Database A\"\nand \"Commercial Database B\" actually were. Even off the record, we\nwere only allowed to tell people that the commercial DBs were Oracle\nand SQL Server ... but not which was which.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Feb 2005 13:41:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark " }, { "msg_contents": "Tom Lane wrote:\n> Great Bridge did essentially that years ago, but I think we only got\n> away with it because we didn't say which DBs \"Commercial Database A\"\n> and \"Commercial Database B\" actually were. Even off the record, we\n> were only allowed to tell people that the commercial DBs were Oracle\n> and SQL Server ... but not which was which.\n\nIMHO clues like:\n\n \"What versions of the databases did you use?\n - PostgreSQL - 7.0 release version\n - Proprietary 1 - 8.1.5\n - Proprietary 2 - 7.0\n - MySQL - 3.22.32\n - Interbase - 6.0\n \"\nand\n \"PostgreSQL\" and \"Proprietary 1\" was running \"red hat linux 6.1\" and\n \"Proprietary 2\" was running Windows NT server 4 - service pack 4\"\nin articles like this one: http://www.xperts.com/news/press1.htm\n\nhelped some people narrow it down a bit. :)\n\n", "msg_date": "Sun, 13 Feb 2005 01:21:04 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" } ]
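For the 'new products' query discussed above, a hedged sketch of the massaging PFC describes, on a simplified form of the query: with a two-column index, repeating the leading column in ORDER BY can let the planner walk the index backwards instead of sorting the whole table (the index name is an assumption; the columns follow the message):

    CREATE INDEX idx_products_visible_added ON products (product_visible, date_added);
    SELECT product_id, date_added
      FROM products
     WHERE product_visible = TRUE
     ORDER BY product_visible DESC, date_added DESC
     LIMIT 6;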
[ { "msg_contents": "Does anyone have any idea why there be over a 4s difference between running \nthe statement directly and using explain analyze? Multiple runs give the \nsame result and I've tested on several servers.\n\ndb=# \\timing\nTiming is on.\ndb=# select count(*) from answer;\n count\n--------\n 530576\n(1 row)\n\nTime: 358.805 ms\ndb=# explain analyze select count(*) from answer;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=9848.12..9848.12 rows=1 width=0) (actual \ntime=4841.231..4841.235 rows=1 loops=1)\n -> Seq Scan on answer (cost=0.00..8561.29 rows=514729 width=0) (actual \ntime=0.011..2347.762 rows=530576 loops=1)\n Total runtime: 4841.412 ms\n(3 rows)\n\nTime: 4855.712 ms\n\n---\n\nPostgresql 7.4.5 running on Linux 2.6.8.1\n", "msg_date": "Thu, 10 Feb 2005 13:34:40 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Large time difference between explain analyze and normal run" }, { "msg_contents": "Chris Kratz <[email protected]> writes:\n> Does anyone have any idea why there be over a 4s difference between running \n> the statement directly and using explain analyze?\n\n> Aggregate (cost=9848.12..9848.12 rows=1 width=0) (actual \n> time=4841.231..4841.235 rows=1 loops=1)\n> -> Seq Scan on answer (cost=0.00..8561.29 rows=514729 width=0) (actual \n> time=0.011..2347.762 rows=530576 loops=1)\n> Total runtime: 4841.412 ms\n\nEXPLAIN ANALYZE's principal overhead is two gettimeofday() kernel calls\nper plan node execution, so 1061154 such calls here. I infer that\ngettimeofday takes about 4 microseconds on your hardware ... which seems\na bit slow for modern machines. What sort of box is it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Feb 2005 13:58:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large time difference between explain analyze and normal run " }, { "msg_contents": "On Thursday 10 February 2005 01:58 pm, Tom Lane wrote:\n> Chris Kratz <[email protected]> writes:\n> > Does anyone have any idea why there be over a 4s difference between\n> > running the statement directly and using explain analyze?\n> >\n> > Aggregate (cost=9848.12..9848.12 rows=1 width=0) (actual\n> > time=4841.231..4841.235 rows=1 loops=1)\n> > -> Seq Scan on answer (cost=0.00..8561.29 rows=514729 width=0)\n> > (actual time=0.011..2347.762 rows=530576 loops=1)\n> > Total runtime: 4841.412 ms\n>\n> EXPLAIN ANALYZE's principal overhead is two gettimeofday() kernel calls\n> per plan node execution, so 1061154 such calls here. I infer that\n> gettimeofday takes about 4 microseconds on your hardware ... which seems\n> a bit slow for modern machines. 
What sort of box is it?\n>\n> \t\t\tregards, tom lane\n\nOK, that makes sense.\n\nAthlon XP 3000+\n1.5G Mem\n\nIs there a way to test the gettimeofday() directly?\n", "msg_date": "Thu, 10 Feb 2005 14:05:46 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large time difference between explain analyze and normal run" }, { "msg_contents": "On February 10, 2005 10:58 am, Tom Lane wrote:\n> Chris Kratz <[email protected]> writes:\n> > Does anyone have any idea why there be over a 4s difference between\n> > running the statement directly and using explain analyze?\n> >\n> > Aggregate (cost=9848.12..9848.12 rows=1 width=0) (actual\n> > time=4841.231..4841.235 rows=1 loops=1)\n> > -> Seq Scan on answer (cost=0.00..8561.29 rows=514729 width=0)\n> > (actual time=0.011..2347.762 rows=530576 loops=1)\n> > Total runtime: 4841.412 ms\n>\n> EXPLAIN ANALYZE's principal overhead is two gettimeofday() kernel calls\n> per plan node execution, so 1061154 such calls here. I infer that\n> gettimeofday takes about 4 microseconds on your hardware ... which seems\n> a bit slow for modern machines. What sort of box is it?\n\ndvl reported the same thing on #postgresql some months back, and neilc \nwas/is/did looking into it. I belive he came up with a way to move the \nfunction call outside of the loop with no ill effects to the rest of the \nexpected behavior.\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n", "msg_date": "Thu, 10 Feb 2005 12:09:42 -0800", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large time difference between explain analyze and normal run" }, { "msg_contents": "On Thursday 10 February 2005 03:09 pm, Darcy Buskermolen wrote:\n> On February 10, 2005 10:58 am, Tom Lane wrote:\n> > Chris Kratz <[email protected]> writes:\n> > > Does anyone have any idea why there be over a 4s difference between\n> > > running the statement directly and using explain analyze?\n> > >\n> > > Aggregate (cost=9848.12..9848.12 rows=1 width=0) (actual\n> > > time=4841.231..4841.235 rows=1 loops=1)\n> > > -> Seq Scan on answer (cost=0.00..8561.29 rows=514729 width=0)\n> > > (actual time=0.011..2347.762 rows=530576 loops=1)\n> > > Total runtime: 4841.412 ms\n> >\n> > EXPLAIN ANALYZE's principal overhead is two gettimeofday() kernel calls\n> > per plan node execution, so 1061154 such calls here. I infer that\n> > gettimeofday takes about 4 microseconds on your hardware ... which seems\n> > a bit slow for modern machines. What sort of box is it?\n>\n> dvl reported the same thing on #postgresql some months back, and neilc\n> was/is/did looking into it. I belive he came up with a way to move the\n> function call outside of the loop with no ill effects to the rest of the\n> expected behavior.\n\nThat's interesting to know. It's not a big deal, we were just curious as to \nwhy the difference. Tom's explanation makes good sense. We run into the \nsame situation with using a profiler on an application, ie measuring incurs \noverhead. \n\n-Chris\n", "msg_date": "Thu, 10 Feb 2005 15:25:09 -0500", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large time difference between explain analyze and normal run" } ]
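The roughly 4 microseconds Tom infers can be back-computed from the two timings in the thread, attributing the whole difference to the 1,061,154 gettimeofday() calls, for example:

    SELECT (4841.412 - 358.805) / 1061154 AS ms_per_call;  -- about 0.0042 ms, i.e. roughly 4 microseconds per call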
[ { "msg_contents": "Hi all,\n\nA question on how to read and interpret the explain analyse statement (and what to do)\n\nI have a query \"SELECT A.ordernummer, B.klantnummer FROM orders A LEFT OUTER JOIN klt_alg B ON A.Klantnummer=B.Klantnummer ORDER BY A.klantnummer;\"\n\nBoth tables have an btree index on klantnummer (int4, the column the join is on). I have vacuumed and analyzed both tables. The explain analyse is:\n\nQUERY PLAN\nSort (cost=220539.32..223291.41 rows=1100836 width=12) (actual time=51834.128..56065.126 rows=1104380 loops=1)\n Sort Key: a.klantnummer\n -> Hash Left Join (cost=41557.43..110069.51 rows=1100836 width=12) (actual time=21263.858..42845.158 rows=1104380 loops=1)\n Hash Cond: (\"\"outer\"\".klantnummer = \"\"inner\"\".klantnummer)\n -> Seq Scan on orders a (cost=0.00..46495.36 rows=1100836 width=8) (actual time=5.986..7378.488 rows=1104380 loops=1)\n -> Hash (cost=40635.14..40635.14 rows=368914 width=4) (actual time=21256.683..21256.683 rows=0 loops=1)\n -> Seq Scan on klt_alg b (cost=0.00..40635.14 rows=368914 width=4) (actual time=8.880..18910.120 rows=368914 loops=1)\nTotal runtime: 61478.077 ms\n\n\nQuestions:\n -> Hash Left Join (cost=41557.43..110069.51 rows=1100836 width=12) (actual time=21263.858..42845.158 rows=1104380 loops=1)\n\n0. What exactly are the numbers in \"cost=41557.43..110069.51\" ( I assume for the other questions that 41557.43 is the estimated MS the query will take, what are the others)?\n\n1. I assume that (cost=41557.43..110069.51 rows=1100836 width=12) is the estimated cost and (actual time=21263.858..42845.158 rows=1104380 loops=1) the actual cost. Is the difference acceptable?\n\n2. If not, what can I do about it?\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n", "msg_date": "Fri, 11 Feb 2005 10:18:45 +0100", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to interpret this explain analyse?" }, { "msg_contents": "Joost Kraaijeveld wrote:\n> Hi all,\n> \n> A question on how to read and interpret the explain analyse statement\n> (and what to do)\n> \n> I have a query \"SELECT A.ordernummer, B.klantnummer FROM orders A\n> LEFT OUTER JOIN klt_alg B ON A.Klantnummer=B.Klantnummer ORDER BY\n> A.klantnummer;\"\n> \n> Both tables have an btree index on klantnummer (int4, the column the\n> join is on). I have vacuumed and analyzed both tables. The explain\n> analyse is:\n\nIndexes not necessarily useful here since you're fetching all rows in A \nand presumably much of B\n\nSort\n Hash Left Join\n Seq Scan on orders a\n Hash\n Seq Scan on klt_alg b\n\nI've trimmed the above from your explain output. It's sequentially \nscanning \"b\" and using a hash to join to \"a\" before sorting the results.\n\n> Questions: -> Hash Left Join (cost=41557.43..110069.51 rows=1100836\n> width=12) (actual time=21263.858..42845.158 rows=1104380 loops=1)\n> \n> 0. What exactly are the numbers in \"cost=41557.43..110069.51\" ( I\n> assume for the other questions that 41557.43 is the estimated MS the\n> query will take, what are the others)?\n\nThe cost numbers represent \"effort\" rather than time. They're only \nreally useful in that you can compare one part of the query to another. \nThere are two numbers because the first shows startup, the second final \ntime. 
So - the \"outer\" parts of the query will have increasing startup \nvalues since the \"inner\" parts will have to do their work first.\n\nThe \"actual time\" is measured in ms, but remember to multiply it by the \n\"loops\" value. Oh, and actually measuring the time slows the query down too.\n\n> 1. I assume that (cost=41557.43..110069.51 rows=1100836 width=12) is\n> the estimated cost and (actual time=21263.858..42845.158 rows=1104380\n> loops=1) the actual cost. Is the difference acceptable?\n> \n> 2. If not, what can I do about it?\n\nThe key thing to look for here is the number of rows. If PG expects say \n100 rows but there are instead 10,000 then it may choose the wrong plan. \nIn this case the estimate is 1,100,836 and the actual is 1,104,380 - \nvery close.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 11 Feb 2005 10:20:02 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to interpret this explain analyse?" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Joost Kraaijeveld wrote:\n>> 2. If not, what can I do about it?\n\n> The key thing to look for here is the number of rows. If PG expects say \n> 100 rows but there are instead 10,000 then it may choose the wrong plan. \n> In this case the estimate is 1,100,836 and the actual is 1,104,380 - \n> very close.\n\nOn the surface this looks like a reasonable plan choice. If you like\nyou can try the other two basic types of join plan by turning off\nenable_hashjoin, which will likely drive the planner to use a merge\njoin, and then also turn off enable_mergejoin to get a nested loop\n(or if it thinks nested loop is second best, turn off enable_nestloop\nto see the behavior with a merge join).\n\nWhat's important in comparing different plan alternatives is the ratios\nof estimated costs to actual elapsed times. If the planner is doing its\njob well, those ratios should be similar across all the alternatives\n(which implies of course that the cheapest-estimate plan is also the\ncheapest in reality). If not, it may be appropriate to fool with the\nplanner's cost estimate parameters to try to line up estimates and\nreality a bit better.\n\nSee\nhttp://www.postgresql.org/docs/8.0/static/performance-tips.html\nfor more detail.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Feb 2005 11:18:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to interpret this explain analyse? " } ]
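A sketch of the plan-comparison procedure Tom suggests, reusing the query from the thread; the enable_* settings affect only the current session and are reset afterwards:

    SET enable_hashjoin = off;   -- usually pushes the planner to a merge join here
    EXPLAIN ANALYZE
    SELECT a.ordernummer, b.klantnummer
      FROM orders a LEFT OUTER JOIN klt_alg b ON a.klantnummer = b.klantnummer
     ORDER BY a.klantnummer;

    SET enable_mergejoin = off;  -- with both off, a nested loop is about all that is left
    EXPLAIN ANALYZE
    SELECT a.ordernummer, b.klantnummer
      FROM orders a LEFT OUTER JOIN klt_alg b ON a.klantnummer = b.klantnummer
     ORDER BY a.klantnummer;

    RESET enable_hashjoin;
    RESET enable_mergejoin;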
[ { "msg_contents": "Hi Tom,\n\nTom Lane schreef:\n> On the surface this looks like a reasonable plan choice. If you like\n> you can try the other two basic types of join plan by turning off\n> enable_hashjoin, which will likely drive the planner to use a merge\n> join, and then also turn off enable_mergejoin to get a nested loop\n> (or if it thinks nested loop is second best, turn off enable_nestloop\n> to see the behavior with a merge join).\n\nThe problem is that the query logically requests all records ( as in \"select * from a join\") from the database but actually displays (in practise) in 97% of the time the first 1000 records and at most the first 50.000 records 99.99999999999999% of the time by scrolling (using \"page down) in the gui and an occasional \"jump to record xxxx\" through something called a locator) (both percentages tested!).\n\nIf I do the same query with a \"limit 60.000\" or if I do a \"set enable_seqscan = off\" the query returns in 0.3 secs. Otherwise it lasts for 20 secs (which is too much for the user to wait for, given the circumstances).\n\nI cannot change the query (it is geneated by a tool called Clarion) but it something like (from the psqlodbc_xxx.log):\n\"...\ndeclare SQL_CUR01 cursor for \nSELECT A.ordernummer, B.klantnummer FROM \"orders\" A LEFT OUTER JOIN \"klt_alg\" B ON A.Klantnummer=B.Klantnummer ORDER BY A.klantnummer;\nfetch 100 in SQL_CUR01;\n...\"\n\nPostgreSQL does the planning (and than executes accordingly) to the query and not the \"fetch 100\". Changing the query with a \"limit whatever\" prohibits scrolling after the size of the resultset. If Postgres should delay the planning of the actual query untill the fetch it could choose the quick solution. Another solution would be to \"advise\" PostgreSQL which index etc (whatever etc means ;-)) to use ( as in the mailing from Silke Trissl in the performance list on 09-02-05).\n\n> What's important in comparing different plan alternatives is the ratios\n> of estimated costs to actual elapsed times. If the planner is doing its\n> job well, those ratios should be similar across all the alternatives\n> (which implies of course that the cheapest-estimate plan is also the\n> cheapest in reality). If not, it may be appropriate to fool with the\n> planner's cost estimate parameters to try to line up estimates and\n> reality a bit better. \nI I really do a \"select *\" and display the result, the planner is right (tested with \"set enable_seqscan = off\" and \"set enable_seqscan = on).\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n", "msg_date": "Fri, 11 Feb 2005 20:25:11 +0100", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to interpret this explain analyse? " }, { "msg_contents": "\"Joost Kraaijeveld\" <[email protected]> writes:\n> I cannot change the query (it is geneated by a tool called Clarion) but it something like (from the psqlodbc_xxx.log):\n> \"...\n> declare SQL_CUR01 cursor for \n> SELECT A.ordernummer, B.klantnummer FROM \"orders\" A LEFT OUTER JOIN \"klt_alg\" B ON A.Klantnummer=B.Klantnummer ORDER BY A.klantnummer;\n> fetch 100 in SQL_CUR01;\n> ...\"\n\nWell, the planner does put some emphasis on startup time when dealing\nwith a DECLARE CURSOR plan; the problem you face is just that that\ncorrection isn't large enough. 
(From memory, I think it optimizes on\nthe assumption that 10% of the estimated rows will actually be fetched;\nyou evidently want a setting of 1% or even less.)\n\nWe once talked about setting up a GUC variable to control the percentage\nof a cursor that is estimated to be fetched:\nhttp://archives.postgresql.org/pgsql-hackers/2000-10/msg01108.php\nIt never got done but that seems like the most reasonable solution to\nme.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Feb 2005 14:40:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to interpret this explain analyse? " }, { "msg_contents": "Tom Lane wrote:\n> \"Joost Kraaijeveld\" <[email protected]> writes:\n> > I cannot change the query (it is geneated by a tool called Clarion) but it something like (from the psqlodbc_xxx.log):\n> > \"...\n> > declare SQL_CUR01 cursor for \n> > SELECT A.ordernummer, B.klantnummer FROM \"orders\" A LEFT OUTER JOIN \"klt_alg\" B ON A.Klantnummer=B.Klantnummer ORDER BY A.klantnummer;\n> > fetch 100 in SQL_CUR01;\n> > ...\"\n> \n> Well, the planner does put some emphasis on startup time when dealing\n> with a DECLARE CURSOR plan; the problem you face is just that that\n> correction isn't large enough. (From memory, I think it optimizes on\n> the assumption that 10% of the estimated rows will actually be fetched;\n> you evidently want a setting of 1% or even less.)\n\nOuch. Is this really a reasonable assumption? I figured the primary\nuse of a cursor was to fetch small amounts of data at a time from a\nlarge table, so 10% seems extremely high as an average fetch size. Or\nis the optimization based on the number of rows that will be fetched\nby the cursor during the cursor's lifetime (as opposed to in a single\nfetch)?\n\nAlso, one has to ask what the consequences are of assuming a value too\nlow versus too high. Which ends up being worse?\n\n> We once talked about setting up a GUC variable to control the percentage\n> of a cursor that is estimated to be fetched:\n> http://archives.postgresql.org/pgsql-hackers/2000-10/msg01108.php\n> It never got done but that seems like the most reasonable solution to\n> me.\n\nOr keep some statistics on cursor usage, and adjust the value\ndynamically based on actual cursor queries (this might be easier said\nthan done, of course).\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Mon, 14 Feb 2005 16:10:12 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to interpret this explain analyse?" }, { "msg_contents": "\nKevin Brown <[email protected]> writes:\n\n> Ouch. Is this really a reasonable assumption? I figured the primary\n> use of a cursor was to fetch small amounts of data at a time from a\n> large table, so 10% seems extremely high as an average fetch size. Or\n> is the optimization based on the number of rows that will be fetched\n> by the cursor during the cursor's lifetime (as opposed to in a single\n> fetch)?\n> \n> Also, one has to ask what the consequences are of assuming a value too\n> low versus too high. Which ends up being worse?\n\nThis is one of the things the planner really cannot know. Ultimately it's the\nkind of thing for which hints really are necessary. 
Oracle distinguishes\nbetween the \"minimize total time\" versus \"minimize startup time\" with\n/*+ ALL_ROWS */ and /*+ FIRST_ROWS */ hints, for example.\n\nI would also find it reasonable to have hints to specify a selectivity for\nexpressions the optimizer has no hope of possibly being able to estimate.\nThings like \"WHERE myfunction(col1,col2,?) /*+ 10% */\"\n\n\n-- \ngreg\n\n", "msg_date": "15 Feb 2005 02:17:49 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to interpret this explain analyse?" }, { "msg_contents": "Greg Stark wrote:\n> Kevin Brown <[email protected]> writes:\n> \n> \n>>Ouch. Is this really a reasonable assumption? I figured the primary\n>>use of a cursor was to fetch small amounts of data at a time from a\n>>large table, so 10% seems extremely high as an average fetch size. Or\n>>is the optimization based on the number of rows that will be fetched\n>>by the cursor during the cursor's lifetime (as opposed to in a single\n>>fetch)?\n>>\n>>Also, one has to ask what the consequences are of assuming a value too\n>>low versus too high. Which ends up being worse?\n> \n> \n> This is one of the things the planner really cannot know. Ultimately it's the\n> kind of thing for which hints really are necessary. Oracle distinguishes\n> between the \"minimize total time\" versus \"minimize startup time\" with\n> /*+ ALL_ROWS */ and /*+ FIRST_ROWS */ hints, for example.\n> \n> I would also find it reasonable to have hints to specify a selectivity for\n> expressions the optimizer has no hope of possibly being able to estimate.\n> Things like \"WHERE myfunction(col1,col2,?) /*+ 10% */\"\n> \n> \nNot to mention that hints would be helpful if you want to specify a particular index for a specific \nquery (case in point, testing plans and response of various indices without having to drop and \ncreate other ones). This is a bit of functionality that I'd like to see.\n", "msg_date": "Tue, 15 Feb 2005 07:11:57 -0800", "msg_from": "Bricklen Anderson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to interpret this explain analyse?" }, { "msg_contents": "Greg Stark wrote:\n> \n> Kevin Brown <[email protected]> writes:\n> > Also, one has to ask what the consequences are of assuming a value too\n> > low versus too high. Which ends up being worse?\n> \n> This is one of the things the planner really cannot know. Ultimately it's the\n> kind of thing for which hints really are necessary. Oracle distinguishes\n> between the \"minimize total time\" versus \"minimize startup time\" with\n> /*+ ALL_ROWS */ and /*+ FIRST_ROWS */ hints, for example.\n\nWell, the planner *can* know the actual value to use in this case, or\nat least a close approximation, but the system would have to gather\nsome information about cursors during fetches. At the very least, it\nwill know how many rows were actually fetched by the cursor in\nquestion, and it will also hopefully know how many rows were returned\nby the query being executed. Store the ratio of the two in a list,\nthen store the list itself into a table (or something) at backend exit\ntime.\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Wed, 16 Feb 2005 13:09:28 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to interpret this explain analyse?" } ]
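The join-plan comparison suggested in the thread above can be tried directly on the query quoted from the psqlodbc log; this is only a sketch for a psql test session (the enable_* switches are per-session and are reset at the end):

EXPLAIN ANALYZE
SELECT a.ordernummer, b.klantnummer
FROM orders a
LEFT OUTER JOIN klt_alg b ON a.klantnummer = b.klantnummer
ORDER BY a.klantnummer;

SET enable_hashjoin = off;   -- re-run the EXPLAIN ANALYZE: usually a merge join now
SET enable_mergejoin = off;  -- re-run again: nested loop
RESET enable_hashjoin;
RESET enable_mergejoin;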
[ { "msg_contents": "> I have never used Oracle myself, nor have I read its license\nagreement,\n> but what if you didn't name Oracle directly? ie:\n> \n> TPS\t\tDatabase\n> -------------------------------\n> 112\tMySQL\n> 120\tPgSQL\n> 90\tSybase\n> 95\t\"Other database that *may* start with a letter after N\"\n> 50\t\"Other database that *may* start with a letter after L\"\n> \n> As far as I know there are only a couple databases that don't allow\nyou\n> to post benchmarks, but if they remain \"unnamed\" can legal action be\n> taken?\n> \n> Just like all those commercials on TV where they advertise: \"Cleans\n10x\n> better then the other leading brand\".\n\nInstead of measuring transactions/second, let's put everything in terms\nof transactions/dollar. This will make it quite easy to determine which\ndatabase is which from the results. Since postgresql is free and would\ninvalidate our test on mathematical terms, we will sub in the $19.99\nprice of a T-Shirt (http://www.sourcewear.com/) for the price of the\ndatabase.\n\nTP$\t\tDatabase\n-------------------------------\n25 A\n.5 B\n.01 C\n.001 D\n.00001 E\n\nMerlin\n", "msg_date": "Fri, 11 Feb 2005 15:41:58 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Benchmark" }, { "msg_contents": "Oops! [email protected] (\"Merlin Moncure\") was seen spray-painting on a wall:\n> Instead of measuring transactions/second, let's put everything in terms\n> of transactions/dollar. This will make it quite easy to determine which\n> database is which from the results. Since postgresql is free and would\n> invalidate our test on mathematical terms, we will sub in the $19.99\n> price of a T-Shirt (http://www.sourcewear.com/) for the price of the\n> database.\n>\n> TP$\t\tDatabase\n> -------------------------------\n> 25 A\n> .5 B\n> .01 C\n> .001 D\n> .00001 E\n\nAh, but that's a completely wrong evaluation.\n\nThe fact that PostgreSQL is available without licensing charges does\n_not_ make a transactions/dollar ratio break down.\n\nAfter all, the cost of a computer system to run the transactions is\nlikely to be comprised of some combination of software licenses and\nhardware costs. Even if the software is free, the hardware isn't.\n\nIf you're doing a high end evaluation, you probably have a million\ndollars worth of computer hardware.\n\nIf you're running PostgreSQL, that may mean you can afford to throw\nsome extra RAM on the box, but you still need the million dollar\nserver in order to get hefty TPS counts...\n-- \n(reverse (concatenate 'string \"moc.liamg\" \"@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/linuxdistributions.html\n\"Let's face it -- ASCII text is a far richer medium than most of us\ndeserve.\" -- Scott McNealy\n", "msg_date": "Sat, 12 Feb 2005 20:34:24 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "Christopher Browne <[email protected]> writes:\n\n> After all, the cost of a computer system to run the transactions is\n> likely to be comprised of some combination of software licenses and\n> hardware costs. Even if the software is free, the hardware isn't.\n\nAnd labour costs.\n\n-- \ngreg\n\n", "msg_date": "13 Feb 2005 11:34:54 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "Dear,\n\nWe are using PostgreSQL for 4 Years now, one can say it is a blessing to \nmaintain. 
Our previous database was number one (;-), it was much harder to \nmaintain so labor is a pro for PostgreSQL ...\n\nKind Regards\n\nPatrick Meylemans\n\nIT Manager\nWTCM-CRIF\nCelestijnenlaan 300C\n3001 Helerlee\n\n\nAt 11:34 13/02/2005 -0500, Greg Stark wrote:\n>Christopher Browne <[email protected]> writes:\n>\n> > After all, the cost of a computer system to run the transactions is\n> > likely to be comprised of some combination of software licenses and\n> > hardware costs. Even if the software is free, the hardware isn't.\n>\n>And labour costs.\n>\n>--\n>greg\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n", "msg_date": "Sun, 13 Feb 2005 20:24:22 +0100", "msg_from": "Patrick Meylemans <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" }, { "msg_contents": "In article <[email protected]>,\nGreg Stark <[email protected]> writes:\n\n> Christopher Browne <[email protected]> writes:\n>> After all, the cost of a computer system to run the transactions is\n>> likely to be comprised of some combination of software licenses and\n>> hardware costs. Even if the software is free, the hardware isn't.\n\n> And labour costs.\n\nExcept that working with PostgreSQL is fun, not labour :-)\n\n", "msg_date": "14 Feb 2005 11:46:18 +0100", "msg_from": "Harald Fuchs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark" } ]
[ { "msg_contents": "Hi Tom,\n\nTom Lane schreef:\n> Well, the planner does put some emphasis on startup time when dealing\n> with a DECLARE CURSOR plan; the problem you face is just that that\n> correction isn't large enough. (From memory, I think it optimizes on\n> the assumption that 10% of the estimated rows will actually\n> be fetched; you evidently want a setting of 1% or even less.)\nI wish I had your mnemory ;-) . The tables contain 1.100.000 records by the way (that is not nearly 10 %, my math is not that good))\n\n\n> We once talked about setting up a GUC variable to control the\n> percentage of a cursor that is estimated to be fetched:\n> http://archives.postgresql.org/pgsql-hackers/2000-10/msg01108.php\n> It never got done but that seems like the most reasonable solution to\n> me. \nIf the proposal means that the cursor is not limited to ths limit in the query but is limited to the fetch than I support the proposal. A bit late I presume.\n\nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl\n", "msg_date": "Sat, 12 Feb 2005 00:12:26 +0100", "msg_from": "\"Joost Kraaijeveld\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to interpret this explain analyse? " } ]
[ { "msg_contents": "Hi,\n\nin the #postgresql-es channel someone shows me this:\n\npgsql-7.4.5 + postgis \n\n--- begin context ---\n\nCREATE TABLE calles (\n gid int4 NOT NULL DEFAULT nextval('public.callesstgo_gid_seq'::text),\n nombre varchar,\n inicio int4,\n termino int4,\n comuna varchar,\n ciudad varchar,\n region numeric,\n pais varchar,\n the_geom geometry,\n id_comuna numeric,\n CONSTRAINT callesstgo_pkey PRIMARY KEY (gid),\n CONSTRAINT enforce_geotype_the_geom CHECK (geometrytype(the_geom) =\n'MULTILINESTRING'::text OR the_geom IS NULL),\n CONSTRAINT enforce_srid_the_geom CHECK (srid(the_geom) = -1)\n) \nWITH OIDS;\n \nCREATE INDEX idx_region_comunas ON calles USING btree\n (id_comuna, region);\n\nselect count(*) from calles;\n143902\n\n--- end context ---\n \nOk . here is the problem (BTW, the database has been analyzed just\nbefore this query was execured)\n\nexplain analyze\nselect * from calles where id_comuna = 92 and region=13; \n\nQUERY PLAN Seq Scan on calles (cost=0.00..7876.53 rows=2610\nwidth=279) (actual time=182.590..454.195 rows=4612 loops=1)\n Filter: ((id_comuna = 92::numeric) AND (region = 13::numeric))\nTotal runtime: 456.876 ms\n\n\nWhy is this query using a seq scan rather than a index scan? i notice\nthe diff between the estimated rows and actual rows (almost 2000).\n\nCan this affect the query plan? i think this is a problem of\nstatistics, am i right? if so, what can be done?\n\nregards,\nJaime Casanova\n", "msg_date": "Sun, 13 Feb 2005 16:27:45 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "estimated rows vs. actual rows" }, { "msg_contents": "Jaime,\n\n> Why is this query using a seq scan rather than a index scan? \n\nBecause it thinks a seq scan will be faster.\n\n> i notice \n> the diff between the estimated rows and actual rows (almost 2000).\n\nYes, ANALYZE, and possibly increasing the column stats, should help that.\n\n> Can this affect the query plan? i think this is a problem of\n> statistics, am i right? if so, what can be done?\n\nWell, if the estimate was accurate, PG would be even *more* likely to use a \nseq scan (more rows).\n\nI think maybe you should establish whether a seq scan actually *is* faster? \nPerhaps do SET enable_seqscan = false and then re-run the query a few times?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 13 Feb 2005 13:41:09 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: estimated rows vs. actual rows" }, { "msg_contents": "On Sun, 13 Feb 2005 13:41:09 -0800, Josh Berkus <[email protected]> wrote:\n> Jaime,\n> \n> > Why is this query using a seq scan rather than a index scan?\n> \n> Because it thinks a seq scan will be faster.\n> \nI will suggest him to probe with seq scans disabled.\n\nBut, IMHO, if the table has 143902 and it thinks will retrieve 2610\n(almost 1.81% of the total). it won't be faster with an index?\n\ni know, i will suggest him to probe to be sure. just an opinion.\n\nregards,\nJaime Casanova\n", "msg_date": "Sun, 13 Feb 2005 22:18:52 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: estimated rows vs. actual rows" }, { "msg_contents": "Jaime Casanova <[email protected]> writes:\n> But, IMHO, if the table has 143902 and it thinks will retrieve 2610\n> (almost 1.81% of the total). it won't be faster with an index?\n\nThat's almost one row in fifty. 
We don't know how wide the table is,\nbut it's certainly possible that there are order-of-a-hundred rows\non each page; in which case the indexscan is likely to hit every page.\nTwice. Not in sequence. Only if the selected rows are pretty well\nclustered in a small part of the table is this going to be a win\nover a seqscan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Feb 2005 22:38:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: estimated rows vs. actual rows " }, { "msg_contents": "Jaime Casanova wrote:\n> \n> But, IMHO, if the table has 143902 and it thinks will retrieve 2610\n> (almost 1.81% of the total). it won't be faster with an index?\n> \n\nDepends on how those 2610 rows are distributed amongst the 143902. The \nworst case scenario is each one of them in its own page. In that case \nyou have to read 2610 *pages*, which is probably a significant \npercentage of the table.\n\nYou can find out this information from the pg_stats view (particularly \nthe correlation column).\n\n\nMark\n", "msg_date": "Mon, 14 Feb 2005 16:45:58 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: estimated rows vs. actual rows" }, { "msg_contents": "After takin a swig o' Arrakan spice grog, [email protected] (Jaime Casanova) belched out:\n> On Sun, 13 Feb 2005 13:41:09 -0800, Josh Berkus <[email protected]> wrote:\n>> Jaime,\n>> \n>> > Why is this query using a seq scan rather than a index scan?\n>> \n>> Because it thinks a seq scan will be faster.\n>> \n> I will suggest him to probe with seq scans disabled.\n>\n> But, IMHO, if the table has 143902 and it thinks will retrieve 2610\n> (almost 1.81% of the total). it won't be faster with an index?\n\nIf the 2610 rows are scattered widely enough, it may be cheaper to do\na seq scan.\n\nAfter all, with a seq scan, you read each block of the table's pages\nexactly once.\n\nWith an index scan, you read index pages _and_ table pages, and may do\nand redo some of the pages.\n\nIt sounds as though it's worth forcing the matter and trying it both\nways and comparing them. Don't be surprised if the seq scan is in\nfact faster...\n-- \nselect 'cbbrowne' || '@' || 'gmail.com';\nhttp://cbbrowne.com/info/emacs.html\nWhen aiming for the common denominator, be prepared for the occasional\ndivision by zero.\n", "msg_date": "Mon, 14 Feb 2005 07:41:13 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: estimated rows vs. actual rows" } ]
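The checks discussed in this thread can be combined into a short test script against the calles table (a sketch; the statistics target of 100 is just an example value):

-- Compare the seq scan and the index scan directly:
EXPLAIN ANALYZE SELECT * FROM calles WHERE id_comuna = 92 AND region = 13;
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT * FROM calles WHERE id_comuna = 92 AND region = 13;
SET enable_seqscan = on;

-- How clustered are the matching rows?  Check the planner's statistics:
SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'calles' AND attname IN ('id_comuna', 'region');

-- If the row estimate stays far from the actual count, raise the stats target:
ALTER TABLE calles ALTER COLUMN id_comuna SET STATISTICS 100;
ANALYZE calles;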
[ { "msg_contents": "Hi,\n\nI am just wondering, by default, autocommit is enabled for every client \nconnection. The documentations states that we have to use BEGIN\nand END or COMMIT so to increase performance by not using autocommit. \nMy question is, when we use the BEGIN and END statements, is autocommit \nunset/disabled automatically or we have to disable/unset it manually?\n\n\nHasnul\n\n\n", "msg_date": "Mon, 14 Feb 2005 16:01:20 +0800", "msg_from": "Hasnul Fadhly bin Hasan <[email protected]>", "msg_from_op": true, "msg_subject": "Autocommit" }, { "msg_contents": "On Mon, Feb 14, 2005 at 04:01:20PM +0800, Hasnul Fadhly bin Hasan wrote:\n> \n> I am just wondering, by default, autocommit is enabled for every client \n> connection. The documentations states that we have to use BEGIN\n> and END or COMMIT so to increase performance by not using autocommit. \n> My question is, when we use the BEGIN and END statements, is autocommit \n> unset/disabled automatically or we have to disable/unset it manually?\n\nWhat version of PostgreSQL is your server running and what client\nsoftware are you using? PostgreSQL 7.3 had a server-side autocommit\nsetting, but it caused problems with some clients so 7.4 got rid\nof it and left autocommit up to the client. How to enable or disable\nclient-side autocommit depends on the client software, but if you're\nable to execute a BEGIN (or START TRANSACTION) statement then you\nshould be inside a transaction until you execute COMMIT (or END)\nor ROLLBACK. That is, unless your client intercepts these statements\nand does whatever it wants....\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Mon, 14 Feb 2005 01:47:03 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autocommit" }, { "msg_contents": "Hi Micheal,\n\nThanks for the reply. I am using postgres 7.4.5 client. There's one \nthat is using 7.4.1 client. I'm not sure if there would be any difference.\nWhen i use psql and check the status of autocommit, it is set to \nenable. I'm not sure if libpq and psql uses the same defaults.\n\nThanks,\n\nHasnul\n\n\n\nMichael Fuhr wrote:\n\n>On Mon, Feb 14, 2005 at 04:01:20PM +0800, Hasnul Fadhly bin Hasan wrote:\n> \n>\n>>I am just wondering, by default, autocommit is enabled for every client \n>>connection. The documentations states that we have to use BEGIN\n>>and END or COMMIT so to increase performance by not using autocommit. \n>>My question is, when we use the BEGIN and END statements, is autocommit \n>>unset/disabled automatically or we have to disable/unset it manually?\n>> \n>>\n>\n>What version of PostgreSQL is your server running and what client\n>software are you using? PostgreSQL 7.3 had a server-side autocommit\n>setting, but it caused problems with some clients so 7.4 got rid\n>of it and left autocommit up to the client. How to enable or disable\n>client-side autocommit depends on the client software, but if you're\n>able to execute a BEGIN (or START TRANSACTION) statement then you\n>should be inside a transaction until you execute COMMIT (or END)\n>or ROLLBACK. That is, unless your client intercepts these statements\n>and does whatever it wants....\n>\n> \n>\n\n\n\n\n\n\n\nHi Micheal,\n\nThanks for the reply.  I am using postgres 7.4.5 client.  There's one\nthat is using 7.4.1 client.  I'm not sure if there would be any\ndifference.\nWhen i use psql and check the status of autocommit, it is set to\nenable.  
I'm not sure if libpq and psql uses the same defaults.\n\nThanks,\n\nHasnul\n\n\n\nMichael Fuhr wrote:\n\nOn Mon, Feb 14, 2005 at 04:01:20PM +0800, Hasnul Fadhly bin Hasan wrote:\n \n\nI am just wondering, by default, autocommit is enabled for every client \nconnection. The documentations states that we have to use BEGIN\nand END or COMMIT so to increase performance by not using autocommit. \nMy question is, when we use the BEGIN and END statements, is autocommit \nunset/disabled automatically or we have to disable/unset it manually?\n \n\n\nWhat version of PostgreSQL is your server running and what client\nsoftware are you using? PostgreSQL 7.3 had a server-side autocommit\nsetting, but it caused problems with some clients so 7.4 got rid\nof it and left autocommit up to the client. How to enable or disable\nclient-side autocommit depends on the client software, but if you're\nable to execute a BEGIN (or START TRANSACTION) statement then you\nshould be inside a transaction until you execute COMMIT (or END)\nor ROLLBACK. That is, unless your client intercepts these statements\nand does whatever it wants....", "msg_date": "Mon, 14 Feb 2005 16:58:31 +0800", "msg_from": "Hasnul Fadhly bin Hasan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autocommit" }, { "msg_contents": "On Mon, Feb 14, 2005 at 04:58:31PM +0800, Hasnul Fadhly bin Hasan wrote:\n\n> Thanks for the reply. I am using postgres 7.4.5 client. There's one \n> that is using 7.4.1 client. I'm not sure if there would be any difference.\n> When i use psql and check the status of autocommit, it is set to \n> enable. I'm not sure if libpq and psql uses the same defaults.\n\nAs far as I can tell, libpq doesn't have an autocommit setting --\nit just sends statements on behalf of the application. Clients\nthat allow the user to disable autocommit presumably do so by\nimplicitly sending BEGIN statements to start new transactions.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Mon, 14 Feb 2005 02:34:03 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autocommit" } ]
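A minimal illustration of the batching point raised at the start of this thread (the table is made up for the example). With client-side autocommit each INSERT is committed on its own; wrapping the batch in an explicit transaction commits once:

CREATE TABLE demo_items (name text);   -- hypothetical table, only for the example

BEGIN;
INSERT INTO demo_items (name) VALUES ('one');
INSERT INTO demo_items (name) VALUES ('two');
INSERT INTO demo_items (name) VALUES ('three');
COMMIT;                                -- one transaction instead of three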
[ { "msg_contents": "Is there a way to use indexes for queries like:\n\nselect field from table where field like 'abc%'\n\n(i.e. filter for string fields that begin with something) ?\n", "msg_date": "Mon, 14 Feb 2005 20:57:27 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "String matching" }, { "msg_contents": "\n\tnormally you shouldn't have to do anything, it should just work :\n\n> select field from table where field like 'abc%'\n\nCREATE INDEX ... ON table( field );\n\n\tthat's all\n\n\tIf it does not use the index, I saw on the mailing list that the locale \ncould be an issue.\n", "msg_date": "Mon, 14 Feb 2005 21:08:39 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: String matching" }, { "msg_contents": "PFC wrote:\n> \n> normally you shouldn't have to do anything, it should just work :\n> \n>> select field from table where field like 'abc%'\n\n> If it does not use the index, I saw on the mailing list that the \n> locale could be an issue.\n\nOh yes, I forgot about that :( I do have LC_COLLATE (on latin2)...\n\nIt's a shame PostgreSQL doesn't allow collation rules on specific fields \n- this field I'm using here will always be 7bit ASCII :(\n\n", "msg_date": "Mon, 14 Feb 2005 21:31:46 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: String matching" }, { "msg_contents": "On Mon, 14 Feb 2005, Ivan Voras wrote:\n\n> PFC wrote:\n> >\n> > normally you shouldn't have to do anything, it should just work :\n> >\n> >> select field from table where field like 'abc%'\n>\n> > If it does not use the index, I saw on the mailing list that the\n> > locale could be an issue.\n>\n> Oh yes, I forgot about that :( I do have LC_COLLATE (on latin2)...\n>\n> It's a shame PostgreSQL doesn't allow collation rules on specific fields\n> - this field I'm using here will always be 7bit ASCII :(\n\nYou can also create an index using a <typename>_pattern_ops operator\nclass which should be usable even with other collations.\n\n", "msg_date": "Mon, 14 Feb 2005 12:42:14 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: String matching" }, { "msg_contents": "Stephan Szabo wrote:\n\n> You can also create an index using a <typename>_pattern_ops operator\n> class which should be usable even with other collations.\n\nCould you give me an example for this, or point me to the relevant \ndocumentation?\n", "msg_date": "Mon, 14 Feb 2005 21:45:00 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: String matching" }, { "msg_contents": "\nOn Mon, 14 Feb 2005, Ivan Voras wrote:\n\n> Stephan Szabo wrote:\n>\n> > You can also create an index using a <typename>_pattern_ops operator\n> > class which should be usable even with other collations.\n>\n> Could you give me an example for this, or point me to the relevant\n> documentation?\n\nBasically, you could have something like:\n\ncreate table test_table(a text);\ncreate index test_index on test_table(a text_pattern_ops);\n\n------------------------------------------------------------------\n\nhttp://www.postgresql.org/docs/8.0/interactive/indexes-opclass.html\n\n", "msg_date": "Mon, 14 Feb 2005 14:12:49 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: String matching" }, { "msg_contents": "Stephan Szabo wrote:\n> On Mon, 14 Feb 2005, Ivan Voras wrote:\n\n>>Could you give me an example for this, or point 
me to the relevant\n>>documentation?\n\n> \n> http://www.postgresql.org/docs/8.0/interactive/indexes-opclass.html\n\nThanks! I didn't know this and I certainly didn't think it would be that\neasy :)\n\n\n", "msg_date": "Mon, 14 Feb 2005 23:29:15 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: String matching" }, { "msg_contents": "On Mon, 14 Feb 2005, Ivan Voras wrote:\n\n> Stephan Szabo wrote:\n> > On Mon, 14 Feb 2005, Ivan Voras wrote:\n>\n> >>Could you give me an example for this, or point me to the relevant\n> >>documentation?\n>\n> >\n> > http://www.postgresql.org/docs/8.0/interactive/indexes-opclass.html\n>\n> Thanks! I didn't know this and I certainly didn't think it would be that\n> easy :)\n\nWell, it's not perfect. It requires a separate index from one for normal\ncomparisons, so it's trading modification speed for LIKE lookup speed.\n", "msg_date": "Mon, 14 Feb 2005 14:31:46 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: String matching" } ]
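To see whether the *_pattern_ops index from Stephan's example is actually chosen for a prefix LIKE under a non-C locale, something along these lines can be used (plans will vary with row counts and statistics):

CREATE TABLE test_table (a text);
CREATE INDEX test_index ON test_table (a text_pattern_ops);  -- used for LIKE 'abc%'
CREATE INDEX test_index_plain ON test_table (a);             -- still needed for <, >, ORDER BY

-- after loading some data:
ANALYZE test_table;
EXPLAIN SELECT a FROM test_table WHERE a LIKE 'abc%';
-- with enough rows this should show an index scan on test_index even when
-- lc_collate is not C.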
[ { "msg_contents": "Hi All,\n\nI have boiled my situation down to the following simple case: (postgres \nversion 7.3)\n\n* Query 1 is doing a sequential scan over a table (courtesy of field \nILIKE 'foo%') and index joins to a few others\n* Query 2 is doing a functional index scan over the same table \n(lower(field) LIKE 'foo%') and index joins to a few others\n* neither query has an order by clause\n* for the purpose of testing, both queries are designed to return the \nsame result set\n\nObviously Q2 is faster than Q1, but if I ever run them both at the same \ntime (lets say I run two of Q1 and one of Q2 at the same time) then Q2 \nconsistently returns WORSE times than Q1 (explain analyze confirms that \nit is using the index).\n\nMy assumption is that the sequential scan is blowing the index from any \ncache it might live in, and simultaneously stealing all the disk IO \nthat is needed to access the index on disk (the table has 200,000 \nrows).\n\nIf I simplify the case to not do the index joins (ie. operate on the \none table only) the situation is not as dramatic, but similar.\n\nMy thoughts are:\n\n1) kill the sequential scan - but unfortunately I don't have direct \ncontrol over that code\n2) change the way the server allocates/prioritizes different caches - i \ndon't know enough about how postgres caches work to do this (if it's \npossible)\n3) try it on postgres 7.4 - possible, but migrating the system to 7.4 \nin production will be hard because the above code that I am not \nresponsible for has a lot of (slightly wacky) implicit date casts\n4) ask the fine people on the mailing list for other suggestions!\n-- \nMark Aufflick\n e [email protected]\n w www.pumptheory.com (work)\n w mark.aufflick.com (personal)\n p +61 438 700 647\n f +61 2 9436 4737\n\n\n========================================================================\n iBurst Wireless Broadband from $34.95/month www.platformnetworks.net\n Forward undetected SPAM to: [email protected]\n========================================================================\n\n", "msg_date": "Tue, 15 Feb 2005 10:34:35 +1100", "msg_from": "Mark Aufflick <[email protected]>", "msg_from_op": true, "msg_subject": "seq scan cache vs. index cache smackdown" }, { "msg_contents": "Hi,\n\nI think there was some discussion about seq scans messing up the cache, and \ntalk about doing something about it but I don't think it has been addressed \nyet. Maybe worth a troll through the archives.\n\nIt is certainly true that in many situations, a seq scan is preferable to \nusing an index. I have been testing a situation here on two versions of the \nsame database, one of the databases is much bigger than the other \n(artificially bloated for testing purposes). Some of the query plans change \nto use seq scans on the big database, where they used indexes on the little \ndatabase - but either way, in *single user* testing the performance is fine. \nMy concern is that this kind of testing has very little relevance to the \nreal world of multiuser processing where contention for the cache becomes an \nissue. It may be that, at least in the current situation, postgres is \ngiving too much weight to seq scans based on single user, straight line \nperformance comparisons. 
If your assumption is correct, then addressing that \nmight help, though it's bound to have it's compromises too...\n\nregards\nIain\n\n\n\n\n----- Original Message ----- \nFrom: \"Mark Aufflick\" <[email protected]>\nTo: <[email protected]>\nSent: Tuesday, February 15, 2005 8:34 AM\nSubject: [PERFORM] seq scan cache vs. index cache smackdown\n\n\n> Hi All,\n>\n> I have boiled my situation down to the following simple case: (postgres \n> version 7.3)\n>\n> * Query 1 is doing a sequential scan over a table (courtesy of field ILIKE \n> 'foo%') and index joins to a few others\n> * Query 2 is doing a functional index scan over the same table \n> (lower(field) LIKE 'foo%') and index joins to a few others\n> * neither query has an order by clause\n> * for the purpose of testing, both queries are designed to return the same \n> result set\n>\n> Obviously Q2 is faster than Q1, but if I ever run them both at the same \n> time (lets say I run two of Q1 and one of Q2 at the same time) then Q2 \n> consistently returns WORSE times than Q1 (explain analyze confirms that it \n> is using the index).\n>\n> My assumption is that the sequential scan is blowing the index from any \n> cache it might live in, and simultaneously stealing all the disk IO that \n> is needed to access the index on disk (the table has 200,000 rows).\n>\n> If I simplify the case to not do the index joins (ie. operate on the one \n> table only) the situation is not as dramatic, but similar.\n>\n> My thoughts are:\n>\n> 1) kill the sequential scan - but unfortunately I don't have direct \n> control over that code\n> 2) change the way the server allocates/prioritizes different caches - i \n> don't know enough about how postgres caches work to do this (if it's \n> possible)\n> 3) try it on postgres 7.4 - possible, but migrating the system to 7.4 in \n> production will be hard because the above code that I am not responsible \n> for has a lot of (slightly wacky) implicit date casts\n> 4) ask the fine people on the mailing list for other suggestions!\n> -- \n> Mark Aufflick\n> e [email protected]\n> w www.pumptheory.com (work)\n> w mark.aufflick.com (personal)\n> p +61 438 700 647\n> f +61 2 9436 4737\n>\n>\n> ========================================================================\n> iBurst Wireless Broadband from $34.95/month www.platformnetworks.net\n> Forward undetected SPAM to: [email protected]\n> ========================================================================\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings \n\n", "msg_date": "Tue, 15 Feb 2005 12:55:33 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "> My concern is that this kind of testing has very little relevance to the \n> real world of multiuser processing where contention for the cache becomes an \n> issue. It may be that, at least in the current situation, postgres is \n> giving too much weight to seq scans based on single user, straight line \n\nTo be fair, a large index scan can easily throw the buffers out of whack\nas well. 
An index scan on 0.1% of a table with 1 billion tuples will\nhave a similar impact to buffers as a sequential scan of a table with 1\nmillion tuples.\n\nAny solution fixing buffers should probably not take into consideration\nthe method being performed (do you really want to skip caching a\nsequential scan of a 2 tuple table because it didn't use an index) but\nthe volume of data involved as compared to the size of the cache.\n\nI've often wondered if a single 1GB toasted tuple could wipe out the\nbuffers. I would suppose that toast doesn't bypass them.\n-- \nRod Taylor <[email protected]>\n\n", "msg_date": "Mon, 14 Feb 2005 23:20:51 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "The world rejoiced as [email protected] (Mark Aufflick) wrote:\n> Hi All,\n>\n> I have boiled my situation down to the following simple case:\n> (postgres version 7.3)\n>\n> * Query 1 is doing a sequential scan over a table (courtesy of field\n> ILIKE 'foo%') and index joins to a few others\n> * Query 2 is doing a functional index scan over the same table\n> (lower(field) LIKE 'foo%') and index joins to a few others\n> * neither query has an order by clause\n> * for the purpose of testing, both queries are designed to return the\n> same result set\n>\n> Obviously Q2 is faster than Q1, but if I ever run them both at the\n> same time (lets say I run two of Q1 and one of Q2 at the same time)\n> then Q2 consistently returns WORSE times than Q1 (explain analyze\n> confirms that it is using the index).\n>\n> My assumption is that the sequential scan is blowing the index from\n> any cache it might live in, and simultaneously stealing all the disk\n> IO that is needed to access the index on disk (the table has 200,000\n> rows).\n\nThere's something to be said for that...\n\n> If I simplify the case to not do the index joins (ie. operate on the\n> one table only) the situation is not as dramatic, but similar.\n>\n> My thoughts are:\n>\n> 1) kill the sequential scan - but unfortunately I don't have direct\n> control over that code\n\nThis is a good choice, if plausible...\n\n> 2) change the way the server allocates/prioritizes different caches -\n> i don't know enough about how postgres caches work to do this (if it's\n> possible)\n\nThat's what the 8.0 cache changes did... Patent claim issues are\nleading to some changes to the prioritization, which is liable to\nchange 8.0.something and 8.1.\n\n> 3) try it on postgres 7.4 - possible, but migrating the system to 7.4\n> in production will be hard because the above code that I am not\n> responsible for has a lot of (slightly wacky) implicit date casts\n\nMoving to 7.4 wouldn't materially change the situation; you'd have to\ngo all the way to version 8.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"gmail.com\")\nhttp://linuxdatabases.info/~cbbrowne/postgresql.html\nRules of the Evil Overlord #32. \"I will not fly into a rage and kill a\nmessenger who brings me bad news just to illustrate how evil I really\nam. Good messengers are hard to come by.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Mon, 14 Feb 2005 23:54:46 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. 
index cache smackdown" }, { "msg_contents": "Hi Rod,\n \n> Any solution fixing buffers should probably not take into consideration\n> the method being performed (do you really want to skip caching a\n> sequential scan of a 2 tuple table because it didn't use an index) but\n> the volume of data involved as compared to the size of the cache.\n\nYes, in fact indexes aren't so different to tables really in that regard.\n\nIt sounds like version 8 may help out anyway.\n\nregards\nIain\n", "msg_date": "Tue, 15 Feb 2005 15:55:02 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "Mark Aufflick <[email protected]> writes:\n\n> Obviously Q2 is faster than Q1, \n\nThat's not really obvious at all. If there are lots of records being returned\nthe index might not be faster than a sequential scan.\n\n> My assumption is that the sequential scan is blowing the index from any cache\n> it might live in, and simultaneously stealing all the disk IO that is needed to\n> access the index on disk (the table has 200,000 rows).\n\nIt kind of sounds to me like you've lowered random_page_cost to reflect the\nfact that your indexes are nearly always completely cached. But when they're\nnot this unrealistic random_page_cost causes indexes to be used when they're\nno longer faster.\n\nPerhaps you should post an \"EXPLAIN ANALYZE\" of your Q1 and Q2 (the latter\npreferable with and without enable_indexscan, but since it's a join you may\nnot be able to get precisely the comparable plan without just that one index\nscan.)\n\n> 2) change the way the server allocates/prioritizes different caches - i don't\n> know enough about how postgres caches work to do this (if it's possible)\n\nPostgres keeps one set of shared buffers, not separate pools . Normally you\nonly allocate a small amount of your memory for Postgres and let the OS handle\ndisk caching.\n\nWhat is your shared_buffers set to and how much memory do you have?\n\n> 3) try it on postgres 7.4 - possible, but migrating the system to 7.4 in\n> production will be hard because the above code that I am not responsible for\n> has a lot of (slightly wacky) implicit date casts\n\nI can't think of any 7.4 changes that would affect this directly, but there\nwere certainly plenty of changes that had broad effects. you never know. \n\n8.0, on the other hand, has a new algorithm that specifically tries to protect\nagainst the shared buffers being blown out by a sequential scan. But that will\nonly help if it's the shared buffers being thrashed that's hurting you, not\nthe entire OS file system cache.\n\n-- \ngreg\n\n", "msg_date": "15 Feb 2005 02:07:07 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> 8.0, on the other hand, has a new algorithm that specifically tries to\n> protect against the shared buffers being blown out by a sequential\n> scan. But that will only help if it's the shared buffers being\n> thrashed that's hurting you, not the entire OS file system cache.\n\nSomething we ought to think about sometime: what are the performance\nimplications of the real-world situation that we have another level of\ncaching sitting underneath us? AFAIK all the theoretical studies we've\nlooked at consider only a single level of caching. 
But for example,\nif our buffer management algorithm recognizes an index page as being\nheavily hit and therefore keeps it in cache for a long time, then when\nit does fall out of cache you can be sure it's going to need to be read\nfrom disk when it's next used, because the OS-level buffer cache has not\nseen a call for that page in a long time. Contrariwise a page that we\nthink is only on the fringe of usefulness is going to stay in the OS\ncache because we repeatedly drop it and then have to ask for it again.\n\nI have no idea how to model this situation, but it seems like it needs\nsome careful thought.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Feb 2005 02:36:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > 8.0, on the other hand, has a new algorithm that specifically tries to\n> > protect against the shared buffers being blown out by a sequential\n> > scan. But that will only help if it's the shared buffers being\n> > thrashed that's hurting you, not the entire OS file system cache.\n> \n> Something we ought to think about sometime: what are the performance\n> implications of the real-world situation that we have another level of\n> caching sitting underneath us? \n\nIt seems inevitable that Postgres will eventually eliminate that redundant\nlayer of buffering. Since mmap is not workable, that means using O_DIRECT to\nread table and index data.\n\nEvery other database eventually goes this direction, and for good reason.\nHaving two layers of caching and buffering is inherently inefficient. It also\nmakes it impossible for Postgres to offer any application-specific hints to\nthe caching replacement algorithms.\n\nIn that world you would configure Postgres much like you configure Oracle,\nwith shared_buffers taking up as much of your memory as you can afford. And\nthe OS file system cache is kept entirely out of the loop.\n\n> AFAIK all the theoretical studies we've looked at consider only a single\n> level of caching. But for example, if our buffer management algorithm\n> recognizes an index page as being heavily hit and therefore keeps it in\n> cache for a long time, then when it does fall out of cache you can be sure\n> it's going to need to be read from disk when it's next used, because the\n> OS-level buffer cache has not seen a call for that page in a long time.\n> Contrariwise a page that we think is only on the fringe of usefulness is\n> going to stay in the OS cache because we repeatedly drop it and then have to\n> ask for it again.\n\nHum. Is it clear that that's bad? By the same logic it's the ones on the\nfringe that you're likely to have to read again anyways. The ones that are\nbeing heavily used are likely not to have to be read again anyways.\n\n-- \ngreg\n\n", "msg_date": "15 Feb 2005 03:10:39 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "Tom, Greg, Merlin,\n\n> But for example,\n> if our buffer management algorithm recognizes an index page as being\n> heavily hit and therefore keeps it in cache for a long time, then when\n> it does fall out of cache you can be sure it's going to need to be read\n> from disk when it's next used, because the OS-level buffer cache has not\n> seen a call for that page in a long time. 
Contrariwise a page that we\n> think is only on the fringe of usefulness is going to stay in the OS\n> cache because we repeatedly drop it and then have to ask for it again.\n\nNow you can see why other DBMSs don't use the OS disk cache. There's other \nissues as well; for example, as long as we use the OS disk cache, we can't \neliminate checkpoint spikes, at least on Linux. No matter what we do with \nthe bgwriter, fsyncing the OS disk cache causes heavy system activity.\n\n> It seems inevitable that Postgres will eventually eliminate that redundant\n> layer of buffering. Since mmap is not workable, that means using O_DIRECT\n> to read table and index data.\n\nWhy is mmap not workable? It would require far-reaching changes to our code \n-- certainly -- but I don't think it can be eliminated from consideration.\n\n> What about going the other way and simply letting the o/s do all the\n> caching?  How bad (or good) would the performance really be?  \n\nPretty bad. You can simulate this easily by turning your shared_buffers way \ndown ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 15 Feb 2005 09:22:59 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Why is mmap not workable?\n\nWe can't control write order. There are other equally bad problems,\nbut that one alone eliminates it from consideration. See past discussions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Feb 2005 12:55:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown " }, { "msg_contents": "\nJosh Berkus <[email protected]> writes:\n\n> Why is mmap not workable? It would require far-reaching changes to our code \n> -- certainly -- but I don't think it can be eliminated from consideration.\n\nFundamentally because there is no facility for being notified by the OS before\na page is written to disk. And there's no way to prevent a page from being\nwritten to disk (mlock just prevents it from being flushed from memory, not\nfrom being synced to disk). \n\nSo there's no way to guarantee the WAL will be written before the buffer is\nsynced to disk.\n\n\n\nMaybe it could be done by writing and syncing the WAL independently before the\nshared buffer is written to at all, but that would be a completely different\nmodel. And it would locking the shared buffer until the sync is done, and\nrequire a private copy of the shared buffer necessitating more copies than the\ndouble buffering in the first place.\n\n-- \ngreg\n\n", "msg_date": "15 Feb 2005 13:39:46 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "Josh Berkus wrote:\n> \n> Now you can see why other DBMSs don't use the OS disk cache. ...\n> ...as long as we use the OS disk cache, we can't \n> eliminate checkpoint spikes, at least on Linux. 
\n\nWouldn't the VM settings like the ones under /proc/sys/vm\nand/or the commit=XXX mount option if using ext3 be a good\nplace to control this?\n\nIt seems if you wanted, by setting /proc/sys/vm/dirty_background_ratio\nand /proc/sys/vm/dirty_expire_centisecs very low you'd be constantly\nflushing dirty pages.\n\n\nHas anyone experimented with these kinds of values:\n/proc/sys/vm/dirty_ratio\n /* the generator of dirty data writes back at this ratio */\n/proc/sys/vm/dirty_background_ratio\n /* start background writeback */\n/proc/sys/vm/dirty_writeback_centisecs\n /* the interval between [some style of] writebacks */\n/proc/sys/vm/dirty_expire_centisecs\n /* the number of centiseconds that data is allowed to remain dirty\n\n\nI tried these to workaround the opposite kind of problem.... on a\nlaptop running linux under vmware I wanted to avoid having it do writes\nquickly to make each individual transaction go faster; at the expense\nof a big spike in IO that the sales guy would trigger explicitly before\ntalking a while. Setting each of those very high and using a\ncommit=600 mount option made the whole demo run with very little\nIO except for the explicit sync; but I never took the time\nto understand which setting mattered to me or why.\n\n\n>>It seems inevitable that Postgres will eventually eliminate that redundant\n>>layer of buffering. Since mmap is not workable, that means using O_DIRECT\n>>to read table and index data.\n", "msg_date": "Wed, 16 Feb 2005 19:22:19 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" } ]
[ { "msg_contents": "Hi,\n\n \n\nI have 3 tables in the database with 80G of data, one of them is almost 40G\nand the remaining 2 tables has 20G each.\n\nWe use this database mainly for query and updating is done only quarterly\nand the database perform well. My problem\n\nis after updating and then run VACCUM FULL ANALYZE vacuuming the tables\ntakes days to complete. I hope someone\n\ncan help me solve my problem.\n\n \n\nThanks\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nI have 3 tables in the database with 80G of data, one of\nthem is almost 40G and the remaining 2 tables has 20G each.\nWe use this database mainly for query and updating is done only quarterly\nand the database perform well. My problem\nis after updating and then run VACCUM FULL ANALYZE  vacuuming the tables\ntakes days to complete. I hope someone\ncan help me solve my problem.\n \nThanks", "msg_date": "Tue, 15 Feb 2005 09:34:39 +0800", "msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>", "msg_from_op": true, "msg_subject": "VACCUM FULL ANALYZE PROBLEM" }, { "msg_contents": "Hi,\n\njust make sure that your freespace map is big enough and then do a vacuum \nanalyse without the full option.\n\nI can imagine that database performance might not be as good as it would be \nafter a vacuum full, though I expect that it wouldn't make much difference.\n\nregards\nIain\n ----- Original Message ----- \n From: Michael Ryan S. Puncia\n To: [email protected]\n Sent: Tuesday, February 15, 2005 10:34 AM\n Subject: [PERFORM] VACCUM FULL ANALYZE PROBLEM\n\n\n Hi,\n\n\n\n I have 3 tables in the database with 80G of data, one of them is almost \n40G and the remaining 2 tables has 20G each.\n\n We use this database mainly for query and updating is done only quarterly \nand the database perform well. My problem\n\n is after updating and then run VACCUM FULL ANALYZE vacuuming the tables \ntakes days to complete. I hope someone\n\n can help me solve my problem.\n\n\n\n Thanks\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \njust make sure that your freespace map is \nbig enough and then do a vacuum analyse without the full option. \n \nI can imagine that database performance \nmight not be as good as it would be after a vacuum full, though I expect that it \nwouldn't make much difference.\n \nregards\nIain \n\n----- Original Message ----- \nFrom:\nMichael \n Ryan S. Puncia \nTo: [email protected]\n\nSent: Tuesday, February 15, 2005 \n 10:34 AM\nSubject: [PERFORM] VACCUM FULL \n ANALYZE PROBLEM\n\n\nHi,\n \nI have 3 tables in the database \n with 80G of data, one of them is almost 40G and the remaining 2 tables has 20G \n each.\nWe use this database mainly for query and updating is \n done only quarterly and the database perform well. My \n problem\nis after updating and then run VACCUM FULL ANALYZE \n  vacuuming the tables takes days to complete. I hope \n someone\ncan help me solve my \n problem.\n \nThanks", "msg_date": "Tue, 15 Feb 2005 10:52:01 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACCUM FULL ANALYZE PROBLEM" }, { "msg_contents": "But I need to do full vacuum because I deleted some of the fields that are\nnot use anymore and I also add another fields. Is there\n\nanother way to speed up full vacuum?\n\n \n\n \n\n _____ \n\nFrom: Iain [mailto:[email protected]] \nSent: Tuesday, February 15, 2005 9:52 AM\nTo: Michael Ryan S. 
Puncia; [email protected]\nSubject: Re: [PERFORM] VACCUM FULL ANALYZE PROBLEM\n\n \n\nHi,\n\n \n\njust make sure that your freespace map is big enough and then do a vacuum\nanalyse without the full option. \n\n \n\nI can imagine that database performance might not be as good as it would be\nafter a vacuum full, though I expect that it wouldn't make much difference.\n\n \n\nregards\n\nIain \n\n----- Original Message ----- \n\nFrom: Michael Ryan <mailto:[email protected]> S. Puncia \n\nTo: [email protected] \n\nSent: Tuesday, February 15, 2005 10:34 AM\n\nSubject: [PERFORM] VACCUM FULL ANALYZE PROBLEM\n\n \n\nHi,\n\n \n\nI have 3 tables in the database with 80G of data, one of them is almost 40G\nand the remaining 2 tables has 20G each.\n\nWe use this database mainly for query and updating is done only quarterly\nand the database perform well. My problem\n\nis after updating and then run VACCUM FULL ANALYZE vacuuming the tables\ntakes days to complete. I hope someone\n\ncan help me solve my problem.\n\n \n\nThanks\n\n \n\n \n\n\n\n__________ NOD32 1.998 (20050212) Information __________\n\nThis message was checked by NOD32 Antivirus System.\nhttp://www.nod32.com\n\n\n\n\n\n\n\n\n\n\n\n\n\n \nBut I need to do full vacuum because I\ndeleted some of the fields that are not use anymore and I also add another\nfields. Is there\nanother  way  to speed up full\nvacuum?\n \n \n\n\n\n\nFrom: Iain\n[mailto:[email protected]] \nSent: Tuesday, February 15, 2005\n9:52 AM\nTo: Michael Ryan S. Puncia; [email protected]\nSubject: Re: [PERFORM] VACCUM FULL\nANALYZE PROBLEM\n\n \n\nHi,\n\n\n \n\n\njust make sure that your freespace map is\nbig enough and then do a vacuum analyse without the full option. \n\n\n \n\n\nI can imagine that database performance\nmight not be as good as it would be after a vacuum full, though I expect that\nit wouldn't make much difference.\n\n\n \n\n\nregards\n\n\nIain \n\n\n\n----- Original Message ----- \n\n\nFrom: Michael Ryan\nS. Puncia \n\n\nTo:\[email protected] \n\n\nSent:\nTuesday, February 15, 2005 10:34 AM\n\n\nSubject:\n[PERFORM] VACCUM FULL ANALYZE PROBLEM\n\n\n \n\nHi,\n \nI have 3 tables in the database with 80G of data, one of\nthem is almost 40G and the remaining 2 tables has 20G each.\nWe use this database mainly for query and updating is done only\nquarterly and the database perform well. My problem\nis after updating and then run VACCUM FULL ANALYZE  vacuuming the\ntables takes days to complete. I hope someone\ncan help me solve my problem.\n \nThanks\n \n \n\n\n\n__________ NOD32 1.998 (20050212) Information __________\n\nThis message was checked by NOD32 Antivirus System.\nhttp://www.nod32.com", "msg_date": "Tue, 15 Feb 2005 10:10:30 +0800", "msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: VACCUM FULL ANALYZE PROBLEM" }, { "msg_contents": "On Tue, 2005-02-15 at 09:34 +0800, Michael Ryan S. Puncia wrote:\n> Hi,\n> \n> \n> \n> I have 3 tables in the database with 80G of data, one of them is\n> almost 40G and the remaining 2 tables has 20G each.\n> \n> We use this database mainly for query and updating is done only\n> quarterly and the database perform well. My problem\n> \n> is after updating and then run VACCUM FULL ANALYZE vacuuming the\n> tables takes days to complete. I hope someone\n\nI suspect the VACUUM FULL is the painful part. 
Try running CLUSTER on\nthe table or changing a column type (in 8.0) instead.\n-- \n\n", "msg_date": "Mon, 14 Feb 2005 21:21:56 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACCUM FULL ANALYZE PROBLEM" }, { "msg_contents": ">> But I need to do full vacuum because I deleted some of the fields that \n>> are not use anymore and I also add another fields. Is there\n\n>> another way to speed up full vacuum?\n\n\n\nHmmm... a full vacuum may help to re-organize the structure of modified \ntables, but whether this is significant or not is another matter. I don't \nknow enough of the internals to comment on that maybe someone else who knows \nmore can.\n\n\n\nThe obvious thing is the vacuum memory setting (in postgresql.conf). \nPresumably, you could set this quite high at least just for the duration of \nthe vacuum anyway.\n\n\n\nWould the total time be reduced by dropping the indexes, then vacuuming and \nrebuilding the indexes? I havn't tried anything like this so I can't say.\n\n\n\nYou should probably say what version of the db you are using and describe \nyour system a little.\n\n\n\nRegards\n\nIain\n\n----- Original Message ----- \n\n From: Michael Ryan S. Puncia\n To: 'Iain' ; [email protected]\n Sent: Tuesday, February 15, 2005 11:10 AM\n Subject: RE: [PERFORM] VACCUM FULL ANALYZE PROBLEM\n\n\n\n\n But I need to do full vacuum because I deleted some of the fields that are \nnot use anymore and I also add another fields. Is there\n\n another way to speed up full vacuum?\n\n\n\n\n\n\n------------------------------------------------------------------------------\n\n From: Iain [mailto:[email protected]]\n Sent: Tuesday, February 15, 2005 9:52 AM\n To: Michael Ryan S. Puncia; [email protected]\n Subject: Re: [PERFORM] VACCUM FULL ANALYZE PROBLEM\n\n\n\n Hi,\n\n\n\n just make sure that your freespace map is big enough and then do a vacuum \nanalyse without the full option.\n\n\n\n I can imagine that database performance might not be as good as it would \nbe after a vacuum full, though I expect that it wouldn't make much \ndifference.\n\n\n\n regards\n\n Iain\n\n ----- Original Message ----- \n\n From: Michael Ryan S. Puncia\n\n To: [email protected]\n\n Sent: Tuesday, February 15, 2005 10:34 AM\n\n Subject: [PERFORM] VACCUM FULL ANALYZE PROBLEM\n\n\n\n Hi,\n\n\n\n I have 3 tables in the database with 80G of data, one of them is almost \n40G and the remaining 2 tables has 20G each.\n\n We use this database mainly for query and updating is done only \nquarterly and the database perform well. My problem\n\n is after updating and then run VACCUM FULL ANALYZE vacuuming the tables \ntakes days to complete. I hope someone\n\n can help me solve my problem.\n\n\n\n Thanks\n\n\n\n\n\n\n\n __________ NOD32 1.998 (20050212) Information __________\n\n This message was checked by NOD32 Antivirus System.\n http://www.nod32.com\n\n\n\n\n\n\n\n\n>> But I need to \ndo full vacuum because I deleted some of the fields that are not use anymore and \nI also add another fields. Is there\n>> another \n way  to speed up full vacuum?\n \nHmmm... a full vacuum \nmay help to re-organize the structure of modified tables, but whether this is \nsignificant or not is another matter. I don't know enough of the internals to \ncomment on that maybe someone else who knows more can.\n \nThe obvious thing is \nthe vacuum memory setting (in postgresql.conf). 
Presumably, you could set this \nquite high at least just for the duration of the vacuum \nanyway.\n \nWould the total time be \nreduced by dropping the indexes, then vacuuming and rebuilding the indexes? I \nhavn't tried anything like this so I can't say.\n \nYou should probably say \nwhat version of the db you are using and describe your system a \nlittle.\n \nRegards\nIain\n----- \nOriginal Message ----- \n\nFrom:\nMichael \n Ryan S. Puncia \nTo: 'Iain' ; [email protected]\n\nSent: Tuesday, February 15, 2005 \n 11:10 AM\nSubject: RE: [PERFORM] VACCUM FULL \n ANALYZE PROBLEM\n\n\n \nBut I need to do full \n vacuum because I deleted some of the fields that are not use anymore and I \n also add another fields. Is there\nanother  way \n  to speed up full vacuum?\n \n \n\n\n\n\nFrom: Iain \n [mailto:[email protected]] Sent: Tuesday, February 15, 2005 9:52 \n AMTo: Michael Ryan S. \n Puncia; [email protected]: Re: [PERFORM] VACCUM FULL \n ANALYZE PROBLEM\n \n\nHi,\n\n \n\njust make sure that your \n freespace map is big enough and then do a vacuum analyse without the full \n option. \n\n \n\nI can imagine that \n database performance might not be as good as it would be after a vacuum full, \n though I expect that it wouldn't make much \n difference.\n\n \n\nregards\n\nIain \n \n\n\n----- Original Message \n ----- \n\nFrom: Michael Ryan \n S. Puncia \n\nTo: [email protected]\n\n\nSent: Tuesday, February 15, \n 2005 10:34 AM\n\nSubject: [PERFORM] VACCUM FULL \n ANALYZE PROBLEM\n\n \nHi,\n \nI have 3 tables in the database \n with 80G of data, one of them is almost 40G and the remaining 2 tables has \n 20G each.\nWe use this database mainly for query and updating \n is done only quarterly and the database perform well. My \n problem\nis after updating and then run VACCUM FULL ANALYZE \n  vacuuming the tables takes days to complete. I hope \n someone\ncan help me solve my \n problem.\n \nThanks\n \n \n__________ NOD32 1.998 (20050212) Information \n __________This message was checked by NOD32 Antivirus System.http://www.nod32.com", "msg_date": "Tue, 15 Feb 2005 11:31:14 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACCUM FULL ANALYZE PROBLEM" }, { "msg_contents": "\"Iain\" <[email protected]> writes:\n>> another way to speed up full vacuum?\n\n> Hmmm... a full vacuum may help to re-organize the structure of modified \n> tables, but whether this is significant or not is another matter.\n\nActually, VACUUM FULL is designed to work nicely for the situation where\na table has say 10% wasted space and you want the wasted space all\ncompressed out. When there is a lot of wasted space, so that nearly all\nthe rows have to be moved to complete the compaction operation, VACUUM\nFULL is not a very good choice. And it simply moves rows around, it\ndoesn't modify the rows internally; so it does nothing at all to reclaim\nspace that would have been freed up by DROP COLUMN operations.\n\nCLUSTER is actually a better bet if you want to repack a table that's\nsuffered a lot of updates or deletions. In PG 8.0 you might also\nconsider one of the rewriting variants of ALTER TABLE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Feb 2005 00:30:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACCUM FULL ANALYZE PROBLEM " }, { "msg_contents": "OK, that's interesting. 
So the original assumption that vacuum full was \nneeded was completely wrong anyway.\n\nIf table re-organisation isn't required a plain vacuum would be fastest. I \nwill take a guess that the next best alternative is to do the \"create table \nnewtable as select ... order by ...\" thing and then create the indexes and \nstuff. This would reorganize the table completely. After that you have the \ncluster command, and coming in last place is vacuum full. Sound about right?\n\nMichael, you said that a vacuum that runs for 3 days is too long, but hasn't \ngiven any specific requirements or limitations. Hopefully you can find \nsomething suitable in the alternatives listed above.\n\nregards\nIain\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Iain\" <[email protected]>\nCc: \"Michael Ryan S. Puncia\" <[email protected]>; \n<[email protected]>\nSent: Tuesday, February 15, 2005 2:30 PM\nSubject: Re: [PERFORM] VACCUM FULL ANALYZE PROBLEM\n\n\n> \"Iain\" <[email protected]> writes:\n>>> another way to speed up full vacuum?\n>\n>> Hmmm... a full vacuum may help to re-organize the structure of modified\n>> tables, but whether this is significant or not is another matter.\n>\n> Actually, VACUUM FULL is designed to work nicely for the situation where\n> a table has say 10% wasted space and you want the wasted space all\n> compressed out. When there is a lot of wasted space, so that nearly all\n> the rows have to be moved to complete the compaction operation, VACUUM\n> FULL is not a very good choice. And it simply moves rows around, it\n> doesn't modify the rows internally; so it does nothing at all to reclaim\n> space that would have been freed up by DROP COLUMN operations.\n>\n> CLUSTER is actually a better bet if you want to repack a table that's\n> suffered a lot of updates or deletions. In PG 8.0 you might also\n> consider one of the rewriting variants of ALTER TABLE.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org \n\n", "msg_date": "Tue, 15 Feb 2005 14:58:16 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACCUM FULL ANALYZE PROBLEM " }, { "msg_contents": "\n\tI don't know if this would work, but if you just want to restructure your \nrows, your could do this:\n\n\tUPDATE table SET id = id WHERE id BETWEEN 0 AND 20000;\n\tVACUUM table;\n\tUPDATE table SET id = id WHERE id BETWEEN 20001 AND 40000;\n\tVACUUM table;\n\n\twash, rinse, repeat.\n\n\tThe idea is that an update rewrites the rows (in your new format) and \nthat VACUUM (not FULL) is quite fast when you just modified a part of the \ntable, and non-locking.\n\n\tWould this work ?\n\n\n> \"Iain\" <[email protected]> writes:\n>>> another way to speed up full vacuum?\n>\n>> Hmmm... a full vacuum may help to re-organize the structure of modified\n>> tables, but whether this is significant or not is another matter.\n>\n> Actually, VACUUM FULL is designed to work nicely for the situation where\n> a table has say 10% wasted space and you want the wasted space all\n> compressed out. When there is a lot of wasted space, so that nearly all\n> the rows have to be moved to complete the compaction operation, VACUUM\n> FULL is not a very good choice. 
And it simply moves rows around, it\n> doesn't modify the rows internally; so it does nothing at all to reclaim\n> space that would have been freed up by DROP COLUMN operations.\n>\n> CLUSTER is actually a better bet if you want to repack a table that's\n> suffered a lot of updates or deletions. In PG 8.0 you might also\n> consider one of the rewriting variants of ALTER TABLE.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n", "msg_date": "Tue, 15 Feb 2005 09:51:22 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACCUM FULL ANALYZE PROBLEM " } ]
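A minimal sketch of the three maintenance strategies discussed in the thread above, assuming a hypothetical table bigtable with primary-key index bigtable_pkey (names and the FSM figures are placeholders, not tuned values). The CLUSTER form is the 7.4/8.0 syntax and takes an exclusive lock while it rewrites the table:

  -- 1) Routine maintenance: plain VACUUM, which only reclaims space well if the
  --    free space map is big enough (postgresql.conf, needs a postmaster restart):
  --      max_fsm_relations = 1000
  --      max_fsm_pages     = 2000000
  VACUUM ANALYZE bigtable;

  -- 2) Repacking a heavily updated or deleted table in one pass:
  CLUSTER bigtable_pkey ON bigtable;
  ANALYZE bigtable;

  -- 3) The incremental rewrite suggested above: touch the rows in slices and run a
  --    plain, non-blocking VACUUM between slices:
  UPDATE bigtable SET id = id WHERE id BETWEEN 0 AND 20000;
  VACUUM bigtable;
  UPDATE bigtable SET id = id WHERE id BETWEEN 20001 AND 40000;
  VACUUM bigtable;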
[ { "msg_contents": "> It seems inevitable that Postgres will eventually eliminate that\nredundant\n> layer of buffering. Since mmap is not workable, that means using\nO_DIRECT\n> to\n> read table and index data.\n\nWhat about going the other way and simply letting the o/s do all the\ncaching? How bad (or good) would the performance really be? \n\nMerlin\n", "msg_date": "Tue, 15 Feb 2005 09:46:33 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "In the last exciting episode, [email protected] (\"Merlin Moncure\") wrote:\n>> It seems inevitable that Postgres will eventually eliminate that\n>> redundant layer of buffering. Since mmap is not workable, that\n>> means using O_DIRECT to read table and index data.\n>\n> What about going the other way and simply letting the o/s do all the\n> caching? How bad (or good) would the performance really be?\n\nI'm going to see about taking this story to OLS (Ottawa Linux\nSymposium) in July and will see what hearing I can get. There are\nhistorically some commonalities in the way this situation is regarded,\nin that there was _long_ opposition to the notion of having unbuffered\ndisk devices.\n\nIf there's more \"story\" that definitely needs to be taken, let me\nknow...\n-- \noutput = reverse(\"moc.enworbbc\" \"@\" \"enworbbc\")\nhttp://linuxdatabases.info/info/slony.html\nRules of the Evil Overlord #90. \"I will not design my Main Control\nRoom so that every workstation is facing away from the door.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Tue, 15 Feb 2005 14:04:49 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" } ]
[ { "msg_contents": "Josh Berkus wrote:\n> Now you can see why other DBMSs don't use the OS disk cache. There's\n> other\n> issues as well; for example, as long as we use the OS disk cache, we\ncan't\n> eliminate checkpoint spikes, at least on Linux. No matter what we do\nwith\n> the bgwriter, fsyncing the OS disk cache causes heavy system activity.\n\nMS SQL server uses the O/S disk cache...the database is very tightly\nintegrated with the O/S. Write performance is one of the few things SQL\nserver can do better than most other databases despite running on a\nmid-grade kernel and a low-grade filesystem...what does that say?\nReadFileScatter() and ReadFileGather() were added to the win32 API\nspecifically for SQL server...this is somewhat analogous to transaction\nbased writing such as in Reisfer4. I'm not arguing ms sql server is\nbetter in any way, IIRC they are still using table locks (!). \n\n> > It seems inevitable that Postgres will eventually eliminate that\n> redundant\n> > layer of buffering. Since mmap is not workable, that means using\n> O_DIRECT\n> > to read table and index data.\n\nIMO, The O_DIRECT argument makes assumptions about storage and o/s\ntechnology that are moving targets. Not sure about mmap().\n\nMerlin\n", "msg_date": "Tue, 15 Feb 2005 13:03:53 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan cache vs. index cache smackdown" } ]
[ { "msg_contents": ">Josh Berkus wrote:\n>> Now you can see why other DBMSs don't use the OS disk cache. There's\n>> other\n>> issues as well; for example, as long as we use the OS disk cache, we\n>can't\n>> eliminate checkpoint spikes, at least on Linux. No matter what we do\n>with\n>> the bgwriter, fsyncing the OS disk cache causes heavy system \n>activity.\n>\n>MS SQL server uses the O/S disk cache...\n\nNo, it doesn't. They open all files with FILE_FLAG_WRITE_THROUGH and\nFILE_FLAG_NO_BUFFERING. It scales the size of it dynamically with the\nsystem, but it uses it's own buffer cache.\n\n> the database is very tightly\n>integrated with the O/S. \n\nThat it is.\n\n\n>Write performance is one of the few things SQL\n>server can do better than most other databases despite running on a\n>mid-grade kernel and a low-grade filesystem...what does that say?\n>ReadFileScatter() and ReadFileGather() were added to the win32 API\n>specifically for SQL server...this is somewhat analogous to transaction\n>based writing such as in Reisfer4. \n\n(Those are ReadFileScatter and WriteFileGather)\n\nI don't think that's correct either. Scatter/Gather I/O is used to SQL\nServer can issue reads for several blocks from disks into it's own\nbuffer cache with a single syscall even if these buffers are not\nsequential. It did make significant performance improvements when they\nadded it, though.\n\n(For those not knowing - it's ReadFile/WriteFile where you pass an array\nof \"this many bytes to this address\" as parameters)\n\n\n> I'm not arguing ms sql server is\n>better in any way, IIRC they are still using table locks (!). \n\nNot at all. They use row level locks, escalated to page level, then\nescalated to table level. Has been since 7.0. In <= 6.5 they had page\nlevel and table level locks. I think possibly back in 4.2 (this is\n16-bit days on OS/2) they had only table level locks, but that's a long\ntime ago.\nThey don't do MVCC, though.\n\n(I'm not saying it's better either. At some things it is, at many it is\nnot)\n\n//Magnus\n", "msg_date": "Tue, 15 Feb 2005 19:41:26 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "\n\tIn the 'wishful hand waving' department :\n\n\tread index -> determine (tuple id,page) to hit in table -> for each of \nthese, tell the OS 'I'm gonna need these' via a NON BLOCKING call. Non \nblocking because you feed the information to the OS as you read the index, \nstreaming it.\n\n\tMeanwhile, the OS accumulates the requests in an internal FIFO, \nreorganizes them according to the order best suited to good disk head \nmovements, then reads them in clusters, and calls a callback inside the \napplication when it has data available. Or the application polls it once \nin a while to get a bucketload of pages. The 'I'm gonna need these()' \nsyscall would also sometimes return 'hey, I'm full, read the pages I have \nhere waiting for you before asking for new ones'.\n\n\tA flag would tell the OS if the application wanted the results in any \norder, or with order preserved.\n\tWithout order preservation, if the application has requested twice the \nsame page with different tuple id's, the OS would call the callback only \nonce, giving it a list of the tuple id's associated with that page.\n\n\tIt involves a tradeoff between memory and performance : as the size of \nthe FIFO increases, likelihood of good contiguous disk reading increases. 
\nHowever, the memory structure would only contain page numbers and tuple \nid's, so it could be pretty small.\n\n\tReturning the results in-order would also need more memory.\n\n\tIt could be made very generic if instead of 'tuple id' you read 'opaque \napplication data', and instead of 'page' you read '(offset, length)'.\n\n\tThis structure actually exists already in the Linux Kernel, it's called \nthe Elevator or something, but it works for scheduling reads between \nthreads.\n\n\tYou can also read 'internal not yet developed postgres cache manager' \ninstead of OS if you don't feel like talking kernel developers into \nimplementing this thing.\n\n\n> (Those are ReadFileScatter and WriteFileGather)\n>\n", "msg_date": "Tue, 15 Feb 2005 22:55:32 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "\nPFC <[email protected]> writes:\n\n> \tYou can also read 'internal not yet developed postgres cache manager'\n> instead of OS if you don't feel like talking kernel developers into\n> implementing this thing.\n\nIt exists already, it's called aio. \n\nBut there are a *lot* of details you skipped over. \nAnd as always, the devil is in the details.\n\n-- \ngreg\n\n", "msg_date": "15 Feb 2005 23:19:25 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "Magnus Hagander wrote:\n> I don't think that's correct either. Scatter/Gather I/O is used to SQL\n> Server can issue reads for several blocks from disks into it's own\n> buffer cache with a single syscall even if these buffers are not\n> sequential. It did make significant performance improvements when they\n> added it, though.\n> \n> (For those not knowing - it's ReadFile/WriteFile where you pass an array\n> of \"this many bytes to this address\" as parameters)\n\nIsn't that like the BSD writev()/readv() that Linux supports also? Is\nthat something we should be using on Unix if it is supported by the OS?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 18 Feb 2005 23:11:08 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "\n>> (For those not knowing - it's ReadFile/WriteFile where you pass an array\n>> of \"this many bytes to this address\" as parameters)\n>\n> Isn't that like the BSD writev()/readv() that Linux supports also? Is\n> that something we should be using on Unix if it is supported by the OS?\n\n\tNope, readv()/writev() read/write from/to the file sequentially to/from a \nlist of buffers in memory. The Windows calls read/write at random file \noffsets to/from a list of buffers.\n\n\n", "msg_date": "Tue, 01 Mar 2005 06:32:14 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" } ]
[ { "msg_contents": "With three databases running the same query, I am receiving greatly\ndiffering results from 2 of the query planners.\n\n-db2 and db3 are slonied copies of db1. The servers have identical\npostgresql.conf files but the server hardware differs.\n-all appropriate columns are indexed\n-vacuum analyze is run nightly on all dbs\n\nHere is a simplified version of the query:\n------------------------------------------------------------------------\nEXPLAIN ANALYZE\nSELECT COUNT(DISTINCT(m_object_paper.id))\n FROM m_object_paper, m_assignment, m_class,\nr_comment_rubric_user_object\n WHERE m_object_paper.assignment=m_assignment.id\n AND m_assignment.class=m_class.id\n AND m_class.account = 36123\n AND m_object_paper.id = r_comment_rubric_user_object.objectid;\n------------------------------------------------------------------------\n\ndb1 displays a concise query plan of nested loops and index scans\nexecuting in 85 ms.\nHowever, db2's query plan consists of sequential scans and takes 3500\nms to complete.\n\nThe strange part is this. Last week, db1 and db3 were in agreement and\nexecuting the more efficient plan. Now, db3 is in agreement with db2\nwith the less efficient, slower plan.\n\nAre we missing something, what could cause this disagreement?\n\nThanks\n\n", "msg_date": "15 Feb 2005 11:15:55 -0800", "msg_from": "\"lcham02\" <[email protected]>", "msg_from_op": true, "msg_subject": "disagreeing query planners" } ]
[ { "msg_contents": "Hi,\n\n\tI was wondering if there would be a proper configuration for a PG database \nused for a forum.. hmm phpBB to be exact..\n\n\tIt seems that postgres on phpBB is kinda slow..\n\n\nTIA,\n\n", "msg_date": "Thu, 17 Feb 2005 21:29:01 +0800", "msg_from": "JM <[email protected]>", "msg_from_op": true, "msg_subject": "PG proper configuation for a php forum" } ]
[ { "msg_contents": "Certain queries on my database get slower after\nrunning a VACUUM ANALYZE. Why would this happen, and\nhow can I fix it?\n\nI am running PostgreSQL 7.4.2 (I also seen this\nproblem on v. 7.3 and 8.0)\n\nHere is a sample query that exhibits this behaviour\n(here the query goes from 1 second before VACUUM\nANALYZE to 2 seconds after; there are other queries\nthat go from 20 seconds before to 800 seconds after):\n\n==================================================\n\nselect ToolRepairRequest.RequestID, (Select\ncount(ToolHistory.HistoryID) from ToolHistory where\nToolRepairRequest.RepairID=ToolHistory.RepairID) as\nCountOfTH\nfrom ((ToolRepairRequest\n LEFT JOIN (ToolRepair\n LEFT JOIN ToolHistory on (ToolRepair.RepairID =\nToolHistory.RepairID)) on (ToolRepairRequest.RepairID\n= ToolRepair.RepairID))\n LEFT JOIN ServiceOrder ON\n(ToolRepairRequest.ServiceOrderID =\nServiceOrder.ServiceOrderID))\nLEFT JOIN Tool ON (ToolRepairRequest.ToolID = Tool.ID)\nwhere (ToolRepairRequest.StationID = 1303)\n\n==================================================\n\nHere are the EXPLAIN ANALYZE results:\n\nBefore VACUUM ANALYZE:\n\n==================================================\n\n \n QUERY PLAN \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=3974.74..48055.42\nrows=79 width=8) (actual time=359.751..1136.165\nrows=1518 loops=1)\n -> Nested Loop Left Join (cost=3974.74..6175.84\nrows=78 width=12) (actual time=359.537..1023.404\nrows=1518 loops=1)\n -> Merge Right Join (cost=3974.74..5705.83\nrows=78 width=16) (actual time=359.516..991.826\nrows=1518 loops=1)\n Merge Cond: (\"outer\".repairid =\n\"inner\".repairid)\n -> Merge Left Join \n(cost=3289.68..4949.83 rows=27907 width=4) (actual\ntime=302.058..840.706 rows=28000 loops=1)\n Merge Cond: (\"outer\".repairid =\n\"inner\".repairid)\n -> Index Scan using\ntoolrepair_pkey on toolrepair (cost=0.00..1175.34\nrows=26485 width=4) (actual time=0.063..130.516\nrows=26485 loops=1)\n -> Sort (cost=3289.68..3359.44\nrows=27906 width=4) (actual time=301.965..402.228\nrows=27906 loops=1)\n Sort Key:\ntoolhistory.repairid\n -> Seq Scan on toolhistory\n (cost=0.00..1229.06 rows=27906 width=4) (actual\ntime=0.009..116.441 rows=27906 loops=1)\n -> Sort (cost=685.06..685.24 rows=74\nwidth=16) (actual time=26.490..36.454 rows=1518\nloops=1)\n Sort Key:\ntoolrepairrequest.repairid\n -> Seq Scan on toolrepairrequest\n (cost=0.00..682.76 rows=74 width=16) (actual\ntime=0.039..20.506 rows=1462 loops=1)\n Filter: (stationid = 1303)\n -> Index Scan using serviceorder_pkey on\nserviceorder (cost=0.00..6.01 rows=1 width=4) (actual\ntime=0.008..0.009 rows=0 loops=1518)\n Index Cond: (\"outer\".serviceorderid =\nserviceorder.serviceorderid)\n -> Index Scan using tool_pkey on tool \n(cost=0.00..6.01 rows=1 width=4) (actual\ntime=0.013..0.018 rows=1 loops=1518)\n Index Cond: (\"outer\".toolid = tool.id)\n SubPlan\n -> Aggregate (cost=524.17..524.17 rows=1\nwidth=4) (actual time=0.032..0.035 rows=1 loops=1518)\n -> Index Scan using th_repair_key on\ntoolhistory (cost=0.00..523.82 rows=140 width=4)\n(actual time=0.013..0.018 rows=1 loops=1518)\n Index Cond: ($0 = repairid)\n Total runtime: 1147.350 ms\n(23 rows)\n\n==================================================\n\n\nand after VACUUM ANALYZE:\n\n==================================================\n\n \n QUERY PLAN \n 
\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=18310.59..29162.44 rows=1533\nwidth=8) (actual time=1886.942..2183.774 rows=1518\nloops=1)\n Merge Cond: (\"outer\".toolid = \"inner\".id)\n -> Sort (cost=15110.46..15114.29 rows=1532\nwidth=12) (actual time=1534.319..1539.461 rows=1518\nloops=1)\n Sort Key: toolrepairrequest.toolid\n -> Nested Loop Left Join \n(cost=4050.79..15029.41 rows=1532 width=12) (actual\ntime=410.948..1527.360 rows=1518 loops=1)\n -> Merge Right Join \n(cost=4050.79..5800.48 rows=1532 width=16) (actual\ntime=410.926..1488.229 rows=1518 loops=1)\n Merge Cond: (\"outer\".repairid =\n\"inner\".repairid)\n -> Merge Left Join \n(cost=3289.68..4946.79 rows=27907 width=4) (actual\ntime=355.606..1321.320 rows=28000 loops=1)\n Merge Cond:\n(\"outer\".repairid = \"inner\".repairid)\n -> Index Scan using\ntoolrepair_pkey on toolrepair (cost=0.00..1172.67\nrows=26485 width=4) (actual time=0.108..235.096\nrows=26485 loops=1)\n -> Sort \n(cost=3289.68..3359.44 rows=27906 width=4) (actual\ntime=355.460..519.987 rows=27906 loops=1)\n Sort Key:\ntoolhistory.repairid\n -> Seq Scan on\ntoolhistory (cost=0.00..1229.06 rows=27906 width=4)\n(actual time=0.016..129.811 rows=27906 loops=1)\n -> Sort (cost=761.11..764.83\nrows=1487 width=16) (actual time=30.447..35.695\nrows=1518 loops=1)\n Sort Key:\ntoolrepairrequest.repairid\n -> Seq Scan on\ntoolrepairrequest (cost=0.00..682.76 rows=1487\nwidth=16) (actual time=0.039..23.852 rows=1462\nloops=1)\n Filter: (stationid =\n1303)\n -> Index Scan using serviceorder_pkey\non serviceorder (cost=0.00..6.01 rows=1 width=4)\n(actual time=0.009..0.010 rows=0 loops=1518)\n Index Cond:\n(\"outer\".serviceorderid = serviceorder.serviceorderid)\n -> Sort (cost=3200.13..3267.24 rows=26844\nwidth=4) (actual time=352.324..453.352 rows=24746\nloops=1)\n Sort Key: tool.id\n -> Seq Scan on tool (cost=0.00..1225.44\nrows=26844 width=4) (actual time=0.024..126.826\nrows=26844 loops=1)\n SubPlan\n -> Aggregate (cost=6.98..6.98 rows=1 width=4)\n(actual time=0.038..0.042 rows=1 loops=1518)\n -> Index Scan using th_repair_key on\ntoolhistory (cost=0.00..6.97 rows=2 width=4) (actual\ntime=0.016..0.021 rows=1 loops=1518)\n Index Cond: ($0 = repairid)\n Total runtime: 2191.401 ms\n(27 rows)\n\n==================================================\n\nThanks for any assistance.\n\nWalt\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nMeet the all-new My Yahoo! - Try it today! \nhttp://my.yahoo.com \n \n\n", "msg_date": "Thu, 17 Feb 2005 14:24:27 -0800 (PST)", "msg_from": "werner fraga <[email protected]>", "msg_from_op": true, "msg_subject": "VACUUM ANALYZE slows down query" }, { "msg_contents": "werner fraga wrote:\n\n>Certain queries on my database get slower after\n>running a VACUUM ANALYZE. Why would this happen, and\n>how can I fix it?\n>\n>I am running PostgreSQL 7.4.2 (I also seen this\n>problem on v. 
7.3 and 8.0)\n>\n>Here is a sample query that exhibits this behaviour\n>(here the query goes from 1 second before VACUUM\n>ANALYZE to 2 seconds after; there are other queries\n>that go from 20 seconds before to 800 seconds after):\n>\n>\n>\nFirst, try to attach your explain analyze as a textfile attachment,\nrather than inline to prevent wrapping and make it easier to read.\n\nSecond, the problem is that it *is* getting a more accurate estimate of\nthe number of rows that are going to be returned, compare:\n\nPlan 1:\n\n>-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=3974.74..48055.42\n>rows=79 width=8) (actual time=359.751..1136.165\n>rows=1518 loops=1)\n>\n>\nThe planner was expecting 79 rows, but was actually getting 1518.\n\nPlan 2:\n\n>-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Left Join (cost=18310.59..29162.44 rows=1533\n>width=8) (actual time=1886.942..2183.774 rows=1518\n>loops=1)\n>\n>\nIt is predicting 1533 rows, and found 1518, a pretty good guess.\n\nSo the big issue is why does the planner think that a nested loop is\ngoing to be more expensive than a merge join. That I don't really know.\nI'm guessing some parameters like random_page_cost could be tweaked, but\nI don't really know the criteria postgres uses for merge joins vs nested\nloop joins.\n\n>Thanks for any assistance.\n>\n>Walt\n>\n>\nHopefully someone can help a little better. In the mean time, you might\nwant to resend with an attachment. I know I had trouble reading your\nexplain analyze.\n\nJohn\n=:->", "msg_date": "Thu, 17 Feb 2005 16:38:37 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE slows down query" }, { "msg_contents": "John Arbash Meinel <[email protected]> writes:\n> So the big issue is why does the planner think that a nested loop is\n> going to be more expensive than a merge join. That I don't really know.\n\nWell, with the increased (and much more accurate) rowcount estimate,\nthe estimated cost of the nestloop naturally went up a lot: it's\nproportional to the number of rows involved. It appears that the\nestimated cost of the mergejoin actually went *down* quite a bit\n(else it'd have been selected the first time too). That seems odd to\nme. AFAIR the only reason that would happen is that given stats about\nthe distributions of the two join keys, the planner can recognize that\none side of the merge may not need to be run to completion --- for\nexample if one column ranges from 1..100 and the other only from 1..40,\nyou never need to look at the values 41..100 in the first table.\n\nYou can see in the explain output that this is indeed happening to some\nextent:\n\n -> Sort (cost=3200.13..3267.24 rows=26844 width=4) (actual time=352.324..453.352 rows=24746 loops=1)\n Sort Key: tool.id\n -> Seq Scan on tool (cost=0.00..1225.44 rows=26844 width=4) (actual time=0.024..126.826 rows=26844 loops=1)\n\nOnly 24746 of the 26844 tool rows ever got read from the sort node (and\neven that is probably overstating matters; if there are duplicate toolid\nvalues in the lefthand input, as seems likely, then the same rows will\nbe pulled from the sort node multiple times). 
However, when both sides\nof the merge are being explicitly sorted, as is happening here, then not\nrunning one side to completion does not save you much at all (since you\nhad to do the sort anyway). The early-out trick only really wins when\nyou can quit early on a more incremental subplan, such as an indexscan.\nSo I'm pretty surprised that the planner made this pair of choices.\nThe estimated cost of the mergejoin shouldn't have changed much with the\naddition of statistics, and so ISTM it should have been picked the first\ntime too.\n\nWalt, is there anything proprietary about the contents of these tables?\nIf you'd be willing to send me a dump off-list, I'd like to dig through\nwhat the planner is doing here. There may be a bug somewhere in the\ncost estimation code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Feb 2005 18:28:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE slows down query " }, { "msg_contents": "I wrote:\n> Well, with the increased (and much more accurate) rowcount estimate,\n> the estimated cost of the nestloop naturally went up a lot: it's\n> proportional to the number of rows involved. It appears that the\n> estimated cost of the mergejoin actually went *down* quite a bit\n> (else it'd have been selected the first time too). That seems odd to\n> me.\n\nNah, I just can't count :-(. What I forgot about was the sub-select in\nthe output list:\n\n>> select ToolRepairRequest.RequestID, (Select\n>> count(ToolHistory.HistoryID) from ToolHistory where\n>> ToolRepairRequest.RepairID=ToolHistory.RepairID) as\n>> CountOfTH\n\nwhich shows up in the (un-analyzed) EXPLAIN output here:\n\n SubPlan\n -> Aggregate (cost=524.17..524.17 rows=1 width=4) (actual time=0.032..0.035 rows=1 loops=1518)\n -> Index Scan using th_repair_key on toolhistory (cost=0.00..523.82 rows=140 width=4) (actual time=0.013..0.018 rows=1 loops=1518)\n Index Cond: ($0 = repairid)\n\nNow in this case the planner is estimating 79 rows out, so the estimated\ncost of the nestloop plan includes a charge of 79*524.17 for evaluating\nthe subplan. If we discount that then the estimated cost of the\nnestloop plan is 3974.74..6645.99 (48055.42-79*524.17).\n\nIn the ANALYZEd case the subplan is estimated to be a lot cheaper:\n\n SubPlan\n -> Aggregate (cost=6.98..6.98 rows=1 width=4) (actual time=0.038..0.042 rows=1 loops=1518)\n -> Index Scan using th_repair_key on toolhistory (cost=0.00..6.97 rows=2 width=4) (actual time=0.016..0.021 rows=1 loops=1518)\n Index Cond: ($0 = repairid)\n\nIt's estimated to be needed 1533 times, but that still adds up to less\nof a charge than before. Discounting that, the mergejoin plan was\nestimated at 18310.59..18462.10 (29162.44 - 1533*6.98). So it's not\ntrue that the estimated cost of the join went down in the ANALYZEd case.\n\nWerner sent me a data dump off-list, and trawling through the planner I\ngot these numbers for the estimated costs without the output subquery:\n\nwithout any statistics:\n\tmergejoin cost\t9436.42 .. 9571.81\n\tnestloop cost\t3977.74 .. 6700.71\n\nwith statistics:\n\tmergejoin cost\t18213.04 .. 18369.73\n\tnestloop cost\t 4054.93 .. 
24042.85\n\n(these are a bit different from his results because of different ANALYZE\nsamples etc, but close enough)\n\nSo the planner isn't going crazy: in each case it chose what seemed the\ncheapest total-cost plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Feb 2005 18:08:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: VACUUM ANALYZE slows down query " } ]
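A sketch of how to keep comparing the two plans by hand after ANALYZE, using the same ToolRepairRequest query quoted above; random_page_cost = 2 is only an illustrative value for a well-cached database, not a recommendation:

  SET enable_mergejoin = off;      -- temporarily rule out the merge join
  EXPLAIN ANALYZE SELECT ...;      -- the ToolRepairRequest query from this thread
  RESET enable_mergejoin;

  -- If the nested loop really is cheaper on this hardware, lower the random-I/O penalty:
  SET random_page_cost = 2;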
[ { "msg_contents": "Hi ALL,\n\n\tI was wondering if there is a DB performance reduction if there are a lot of \nIDLE processes.\n\n30786 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n32504 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n32596 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 1722 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 1724 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 3881 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 6332 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 6678 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 6700 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 6768 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 8544 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 8873 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 8986 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 9000 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 9010 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 9013 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 9016 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 9019 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n 9020 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n\n\nTIA,\n\n", "msg_date": "Fri, 18 Feb 2005 18:15:16 +0800", "msg_from": "JM <[email protected]>", "msg_from_op": true, "msg_subject": "Effects of IDLE processes" }, { "msg_contents": "JM <[email protected]> writes:\n> \tI was wondering if there is a DB performance reduction if there are a lot of \n> IDLE processes.\n\nThere will be some overhead, but I dunno if anyone's ever tried to\nmeasure it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Feb 2005 09:16:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of IDLE processes " }, { "msg_contents": "JM wrote:\n> Hi ALL,\n> \n> \tI was wondering if there is a DB performance reduction if there are a lot of \n> IDLE processes.\n> \n> 30786 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 32504 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 32596 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 1722 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 1724 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 3881 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 6332 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 6678 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 6700 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 6768 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 8544 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 8873 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 8986 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 9000 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 9010 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 9013 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 9016 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 9019 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> 9020 ? 
S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n> \n\nIn my experience not at all, you have to wonder if some of that are \"idle in transaction\"\nthat are really a pain in the @#$\n\n\nRegards\nGaetano Mendola\n\n\n", "msg_date": "Mon, 21 Feb 2005 01:50:18 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of IDLE processes" }, { "msg_contents": "After a long battle with technology, Gaetano Mendola <[email protected]>, an earthling, wrote:\n> JM wrote:\n>> Hi ALL,\n>> \n>> \tI was wondering if there is a DB performance reduction if\n>> there are a lot of IDLE processes.\n>> \n>> 30786 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 32504 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 32596 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 1722 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 1724 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 3881 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 6332 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 6678 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 6700 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 6768 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 8544 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 8873 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 8986 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 9000 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 9010 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 9013 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 9016 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 9019 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> 9020 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>> \n\n> In my experience not at all, you have to wonder if some of that are\n> \"idle in transaction\" that are really a pain in the @#$\n\nI'd be concerned about \"idle\" processes insofar as they are holding on\nto _some_ memory that isn't shared.\n\n\"idle in transaction\" is quite another matter; long-running\ntransactions certainly do lead to evil. When running Slony-I, for\ninstance, \"idle in transaction\" means that pg_listener entries are\nbeing held onto so they cannot be vacuumed out, and that's only one\nexample of a possible evil...\n-- \n(reverse (concatenate 'string \"moc.liamg\" \"@\" \"enworbbc\"))\nhttp://linuxdatabases.info/info/languages.html\nYou know how most packages say \"Open here\". What is the protocol if\nthe package says, \"Open somewhere else\"?\n", "msg_date": "Sun, 20 Feb 2005 22:18:31 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of IDLE processes" }, { "msg_contents": "Christopher Browne wrote:\n> After a long battle with technology, Gaetano Mendola <[email protected]>, an earthling, wrote:\n> \n>>JM wrote:\n>>\n>>>Hi ALL,\n>>>\n>>>\tI was wondering if there is a DB performance reduction if\n>>>there are a lot of IDLE processes.\n>>>\n>>>30786 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>>32504 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>>32596 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 1722 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 1724 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 3881 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 6332 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 6678 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 6700 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 6768 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 8544 ? 
S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 8873 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 8986 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 9000 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 9010 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 9013 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 9016 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 9019 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>> 9020 ? S 0:00 postgres: user1 gmadb 10.10.10.1 idle\n>>>\n> \n> \n>>In my experience not at all, you have to wonder if some of that are\n>>\"idle in transaction\" that are really a pain in the @#$\n> \n> \n> I'd be concerned about \"idle\" processes insofar as they are holding on\n> to _some_ memory that isn't shared.\n\nFor \"not at all\" I was refering the fact that the normal engine work and\nmaintenances are not affected ( at least your iron shall be able to\nsupport all these connections and processes ).\nA long transaction for example can stop the entire engine if for example\na \"Vacuum full\" remain stuck on some tables locked by that transaction\n\n\nRegards\nGaetano Mendola\n\n\n\n", "msg_date": "Mon, 21 Feb 2005 12:23:15 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Effects of IDLE processes" } ]
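A quick way to separate harmless idle backends from the problematic "idle in transaction" ones mentioned above (column names as in 7.4/8.0; stats_command_string must be enabled for current_query to be populated):

  SELECT procpid, usename, current_query
  FROM pg_stat_activity
  WHERE current_query LIKE '<IDLE>%';
  -- '<IDLE>' is a merely idle backend; '<IDLE> in transaction' is the kind that
  -- keeps VACUUM from reclaiming rows and can hold locks for a long time.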
[ { "msg_contents": "On Fri, Feb 18, 2005 at 11:54:34AM -0300, Rodrigo Moreno wrote:\n> 00 23 * * 1-5 /usr/local/pgsql/bin/psql supre -c \"vacuum analyze;\"\n>>>/dev/null 2>&1\n\nIsn't vacuum once a day a bit too little with heavy activity? You should\nprobably consider autovacuum.\n\n> 00 23 * * 6 /usr/local/pgsql/bin/psql supre -c \"reindex database supre;\"\n>>>/dev/null 2>&1\n\nREINDEX DATABASE does (AFAIK) only index the indexes on the system tables in\nthe database.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 18 Feb 2005 15:08:50 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Degradation of postgres 7.4.5 on FreeBSD/CygWin" }, { "msg_contents": "\"Rodrigo Moreno\" <[email protected]> writes:\n> After 2 months, postgres start get down the performance, and simple queries\n> that should run in 100ms now tooks about 15 secs.\n\n> Another behaviour, the data is growing to much, with no reason, just like\n> the comparision.\n\nAre you vacuuming on a regular basis? Do you have the FSM settings high\nenough to cover the database?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Feb 2005 09:32:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Degradation of postgres 7.4.5 on FreeBSD/CygWin " }, { "msg_contents": "On Fri, Feb 18, 2005 at 09:32:25AM -0500, Tom Lane wrote:\n> Are you vacuuming on a regular basis? Do you have the FSM settings high\n> enough to cover the database?\n\nHe posted his cron settings ;-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 18 Feb 2005 15:40:21 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Degradation of postgres 7.4.5 on FreeBSD/CygWin" }, { "msg_contents": "> this is only max 15 concurrent conections. And is not a heavy performance\n> database, so i think this is not necessary vacumm more than once a day.\n> \n> In another customer, has only 5 users and the database have 300mb, small\n> database, and has the same behaviour (haven't modified postgresql).\n> My first instalation was not changed anything in postgresql.conf, but in\n> this new server (FreeBSD) i have changed some parameters.\n> \n> as showed in my crontab list, i think this is enough:\n> 00 13 * * 1-5 /bin/sh /home/postgres/backup.sh >/dev/null 2>&1\n> 00 19 * * 1-5 /bin/sh /home/postgres/backup.sh >/dev/null 2>&1\n> 00 23 * * 1-5 /usr/local/pgsql/bin/psql supre -c \"vacuum analyze;\"\n\nWe just told you - it's nowhere near enough. Vacuum once an hour. Size \nof the database is not that relevant, its size of changes that is.\n\nChris\n", "msg_date": "Fri, 18 Feb 2005 14:51:42 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: Degradation of postgres 7.4.5 on FreeBSD/CygWin" }, { "msg_contents": "> 00 23 * * 1-5 /usr/local/pgsql/bin/psql supre -c \"vacuum analyze;\"\n\nAlso, this is bad - you are not vacuuming all your databases, which will \ncause you data loss one day with transaction wraparound. Use the \nvacuumdb utility that comes with PostgreSQL instead.\n\nChris\n", "msg_date": "Fri, 18 Feb 2005 14:52:48 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: Degradation of postgres 7.4.5 on FreeBSD/CygWin" }, { "msg_contents": "Hi All\n\nI'm really desparate about this. 
The problem has occurried in both of my\ncustomers first with cygwin and now with FreeBSD 5.3.\n\nAfter 2 months, postgres start get down the performance, and simple queries\nthat should run in 100ms now tooks about 15 secs.\n\nAnother behaviour, the data is growing to much, with no reason, just like\nthe comparision.\n\nSo, to solve problem, for the 5th time, a made a backup, dropped the entire\ndatabase, recreate e reimported.\n\nOne friend of mine tell me about same problem in linux and he go back to\n7.3.x, and with me 5 times.\n\nThe old data have this sizes:\n\n$ du -ks * | sort -nr\n1379872 base\n131202 pg_xlog\n390 global\n336 serverlog\n74 pg_clog\n8 postgresql.conf\n4 pg_hba.conf\n2 postmaster.opts\n2 pg_ident.conf\n2 PG_VERSION\n\nThe Reimported database has this sizes:\n$ du -ks * | sort -nr\n916496 base\n131202 pg_xlog\n134 global\n14 serverlog\n10 pg_clog\n8 postgresql.conf\n4 pg_hba.conf\n2 postmaster.pid\n2 postmaster.opts\n2 pg_ident.conf\n2 PG_VERSION\n\n\nThis Procedure took 100 ms, but before re-import it took about 15secs, in a\nprocess that have a 1000 itens its took about 4 hours to finish, and after\nre-import 5 minutes.\n\nThe bottleneck is this recursion procedure, that is a part os others\nprocedure, but it have a simple query.\n\nCREATE OR REPLACE FUNCTION Produt_Repos(numeric, double precision, integer,\ninteger) RETURNS double precision AS '\nDECLARE\n xcodpro ALIAS FOR $1;\n xPesfor ALIAS FOR $2;\n xAno ALIAS FOR $3;\n xMes ALIAS FOR $4;\n oMatpro RECORD;\n xPreRep DOUBLE PRECISION;\n nPreRep DOUBLE PRECISION;\n xQtdKgs DOUBLE PRECISION;\n xPreCus DOUBLE PRECISION;\nBEGIN\n xPreRep := 0;\n\n IF xPesFor <> 0 THEN\n FOR oMatpro IN SELECT a.qtdpro, a.codmat, b.pesfor\n FROM matpro a, produt b\n WHERE a.codpro = xCodpro\n AND b.codpro = a.codmat LOOP\n\n\t xQtdKgs := oMatpro.QtdPro / xPesFor;\n\t nPreRep := Produt_Repos( oMatpro.codmat, coalesce(oMatpro.pesfor, 0.0),\nxAno, xMes);\n xPrerep := xPrerep + (nPreRep * xQtdKgs);\n\n IF nPreRep = 0 THEN\n\t\tSELECT coalesce(PreCus, 0.0) INTO xPreCus FROM produt_fecha WHERE codpro =\noMatPro.codmat and ano = xAno and mes = xMes LIMIT 1;\n xPreRep := xPrerep + ( xPrecus * xQtdKgs );\n END IF;\n END LOOP;\n END IF;\n\n RETURN xPreRep;\nEND;\n' LANGUAGE 'plpgsql';\n\n\n\nThis are my configs:\n\nmsginfo:\n msgmax: 16384 (max characters in a message)\n msgmni: 40 (# of message queues)\n msgmnb: 2048 (max characters in a message queue)\n msgtql: 40 (max # of messages in system)\n msgssz: 8 (size of a message segment)\n msgseg: 2048 (# of message segments in system)\n\nshminfo:\n shmmax: 163840000 (max shared memory segment size)\n shmmin: 1 (min shared memory segment size)\n shmmni: 4000 (max number of shared memory identifiers)\n shmseg: 128 (max shared memory segments per process)\n shmall: 40000 (max amount of shared memory in pages)\n\nseminfo:\n semmap: 30 (# of entries in semaphore map)\n semmni: 40961 (# of semaphore identifiers)\n semmns: 16380 (# of semaphores in system)\n semmnu: 30 (# of undo structures in system)\n semmsl: 16380 (max # of semaphores per id)\n semopm: 100 (max # of operations per semop call)\n semume: 10 (max # of undo entries per process)\n semusz: 92 (size in bytes of undo structure)\n semvmx: 32767 (semaphore maximum value)\n semaem: 16384 (adjust on exit max value)\n\n\nmax_connections = 30\nshared_buffers = 8192 # min 16, at least max_connections*2, 8KB\neach\nsort_mem = 32768 # min 64, size in KB\nvacuum_mem = 32768 # min 1024, size in KB\nmax_fsm_pages = 40000 # min max_fsm_relations*16, 6 
bytes each\nmax_fsm_relations = 2000 # min 100, ~50 bytes each\n\nThese are my crontab activities:\n\n$ crontab -l\n00 13 * * 1-5 /bin/sh /home/postgres/backup.sh >/dev/null 2>&1\n00 19 * * 1-5 /bin/sh /home/postgres/backup.sh >/dev/null 2>&1\n00 23 * * 1-5 /usr/local/pgsql/bin/psql supre -c \"vacuum analyze;\"\n>>/dev/null 2>&1\n00 23 * * 6 /usr/local/pgsql/bin/psql supre -c \"reindex database supre;\"\n>>/dev/null 2>&1\n00 23 * * 7 /usr/local/pgsql/bin/psql supre -c \"vacuum full analyze;\"\n>>/dev/null 2>&1\n\n\nSo guys, i'm really desparate about this issue, and i think i'm doing\neverthing right. Please help me.\n\nIf i tell to my customer that he is having the same problem that in cygwin\nversion, after spending money to change from windows to freebsd,upgrading\nserver, etc, problably he will kill me. :)\n\nBest Regards\nRodrigo Moreno\n\n\n", "msg_date": "Fri, 18 Feb 2005 11:54:34 -0300", "msg_from": "\"Rodrigo Moreno\" <[email protected]>", "msg_from_op": false, "msg_subject": "Degradation of postgres 7.4.5 on FreeBSD/CygWin" }, { "msg_contents": "\"Rodrigo Moreno\" <[email protected]> writes:\n> max_fsm_pages = 40000\n> max_fsm_relations = 2000\n\n> But why after 2 months the database has 1.3gb and after reimport on 900mb ?\n\n40k pages = 320M bytes = 1/3rd of your database. Perhaps you need a\nlarger setting for max_fsm_pages.\n\nHowever, 30% bloat of the database doesn't particularly bother me,\nespecially when you are using infrequent vacuums. Bear in mind that,\nfor example, the steady-state fill factor of a b-tree index is usually\nestimated at less than 70%. A certain amount of wasted space is not\nonly intended, but essential for reasonable performance.\n\nWhat you need is to take a more detailed look at the behavior of that\nfunction that's getting so slow. Are the query plans changing? Is\nthe loop iterating over many more rows than before? You haven't told\nus anything that would account for 100x slowdown.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Feb 2005 09:59:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: RES: Degradation of postgres 7.4.5 on FreeBSD/CygWin " }, { "msg_contents": "Hi,\n\nthis is only max 15 concurrent conections. 
And is not a heavy performance\ndatabase, so i think this is not necessary vacumm more than once a day.\n\nIn another customer, has only 5 users and the database have 300mb, small\ndatabase, and has the same behaviour (haven't modified postgresql).\nMy first instalation was not changed anything in postgresql.conf, but in\nthis new server (FreeBSD) i have changed some parameters.\n\nas showed in my crontab list, i think this is enough:\n00 13 * * 1-5 /bin/sh /home/postgres/backup.sh >/dev/null 2>&1\n00 19 * * 1-5 /bin/sh /home/postgres/backup.sh >/dev/null 2>&1\n00 23 * * 1-5 /usr/local/pgsql/bin/psql supre -c \"vacuum analyze;\"\n>>/dev/null 2>&1\n00 23 * * 6 /usr/local/pgsql/bin/psql supre -c \"reindex database supre;\"\n>>/dev/null 2>&1\n00 23 * * 7 /usr/local/pgsql/bin/psql supre -c \"vacuum full analyze;\"\n>>/dev/null 2>&1\n\nThese my changed configs in postgresql.conf:\nmax_connections = 30\nshared_buffers = 8192\nsort_mem = 32768\nvacuum_mem = 32768\nmax_fsm_pages = 40000\nmax_fsm_relations = 2000\n\nBut why after 2 months the database has 1.3gb and after reimport on 900mb ?\n\nBoth customer are smaller databases, but one of them, has 8 years os data,\nit's the reason of size 900mb, these are too smaller database.\n\nRegards\nRodrigo Moreno\n\n", "msg_date": "Fri, 18 Feb 2005 12:43:58 -0300", "msg_from": "\"Rodrigo Moreno\" <[email protected]>", "msg_from_op": false, "msg_subject": "RES: Degradation of postgres 7.4.5 on FreeBSD/CygWin" }, { "msg_contents": "Thanks to all,\n\nat this moment, can't stop the database and put back the old database, but\nat night i will take more analyzes on old database and reimported and i put\nhere the results.\n\nThanks a lot\nRodrigo\n\n-----Mensagem original-----\nDe: Tom Lane [mailto:[email protected]]\nEnviada em: sexta-feira, 18 de fevereiro de 2005 12:00\nPara: Rodrigo Moreno\nCc: [email protected]\nAssunto: Re: RES: [PERFORM] Degradation of postgres 7.4.5 on\nFreeBSD/CygWin\n\n\n\"Rodrigo Moreno\" <[email protected]> writes:\n> max_fsm_pages = 40000\n> max_fsm_relations = 2000\n\n> But why after 2 months the database has 1.3gb and after reimport on 900mb\n?\n\n40k pages = 320M bytes = 1/3rd of your database. Perhaps you need a\nlarger setting for max_fsm_pages.\n\nHowever, 30% bloat of the database doesn't particularly bother me,\nespecially when you are using infrequent vacuums. Bear in mind that,\nfor example, the steady-state fill factor of a b-tree index is usually\nestimated at less than 70%. A certain amount of wasted space is not\nonly intended, but essential for reasonable performance.\n\nWhat you need is to take a more detailed look at the behavior of that\nfunction that's getting so slow. Are the query plans changing? Is\nthe loop iterating over many more rows than before? You haven't told\nus anything that would account for 100x slowdown.\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Fri, 18 Feb 2005 13:10:01 -0300", "msg_from": "\"Rodrigo Moreno\" <[email protected]>", "msg_from_op": false, "msg_subject": "RES: RES: Degradation of postgres 7.4.5 on FreeBSD/CygWin" }, { "msg_contents": "Hi all,\n\nI Got more improvements using vacuumdb utility and the size of my database\nwas decreasead from 1.3gb to 900mb.\n\nOnly one thing is not right yeat. My procedure perform others 7\nsubprocedures and with reimported database, it's took about 5 minutes to\ncomplete. 
With old vacuumed database, the same process took 20minutes, it's\nmuch better than the 4 hours before, but there is little diference.\n\nNow, i have scheduled the vacuumdb --analyze once a day and\nvacuumdb --analyze --all --full once a week, i think this is enough.\n\nNow i'll check for reindexes tables and i'll perform analyze in each query\nin procedure.\n\nWhen i get more results, i post here.\n\nThanks a Lot\nRodrigo Moreno\n\n", "msg_date": "Sun, 20 Feb 2005 10:02:44 -0300", "msg_from": "\"Rodrigo Moreno\" <[email protected]>", "msg_from_op": false, "msg_subject": "RES: RES: Degradation of postgres 7.4.5 on FreeBSD/CygWin" } ]
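The schedule that was eventually settled on, written out with the vacuumdb utility as suggested earlier in the thread (paths, times and day numbers copied from the original crontab; --all also covers template1 and friends against transaction-ID wraparound):

  00 23 * * 1-5 /usr/local/pgsql/bin/vacuumdb --all --analyze --quiet >/dev/null 2>&1
  00 23 * * 7   /usr/local/pgsql/bin/vacuumdb --all --analyze --full --quiet >/dev/null 2>&1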
[ { "msg_contents": "Magnus prepared a trivial patch which added the O_SYNC flag for windows\nand mapped it to FILE_FLAG_WRITE_THROUGH in win32_open.c. We pg_benched\nit and here are the results of our test on my WinXP workstation on a 10k\nraptor:\n\nSettings were pgbench -t 100 -c 10.\n\nfsync = off: \n~ 280 tps\n\nfsync on, WAL=fsync:\n~ 35 tps \n\nfsync on, WAL=open_sync write cache policy on:\n~ 240 tps\n\nfsync on, WAL=open_sync write cache policy off:\n~ 80 tps\n\n80 tps, btw, is about the results I'd expect from linux on this\nhardware. Also, the open_sync method plays much nicer with RAID\ndevices, but it would need some more rigorous testing before I'd\npersonally certify it as safe. As an aside, it doesn't look like the\nopen_sync can be trusted with write caching policy on the disk (the\ndefault), and that's worth noting. \n\nMerlin\n\n\n\n", "msg_date": "Fri, 18 Feb 2005 10:11:18 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: win32 performance - fsync question " } ]
[ { "msg_contents": "I am running postgreSQL 8.0.1 under the Windows 2000. I want to use COPY\nFROM STDIN function from Java application, but it doesn't work, it\nthrows:\n\n\"org.postgresql.util.PSQLException: Unknown Response Type G\" error.\n\nPlease help me!\n\nNote: COPY FROM filename works properly.\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nI am running postgreSQL 8.0.1 under the Windows 2000. I want\nto use COPY FROM STDIN function from Java application, but it doesn’t\nwork, it throws:\n“org.postgresql.util.PSQLException: Unknown Response\nType G”  error.\nPlease help me!\nNote: COPY FROM filename works properly.", "msg_date": "Sat, 19 Feb 2005 15:05:52 +0400", "msg_from": "\"Asatryan, Anahit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help me please !" }, { "msg_contents": "Asatryan, Anahit wrote:\n> I am running postgreSQL 8.0.1 under the Windows 2000. I want to use COPY\n> FROM STDIN function from Java application, but it doesn't work, it\n> throws:\n> \n> \"org.postgresql.util.PSQLException: Unknown Response Type G\" error.\n\nI don't think that there is a \"STDIN\" if you are executing via JDBC. The \nonly workaround I know of is to create a file and copy from that, which \nyou already have working.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 23 Feb 2005 08:29:49 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me please !" }, { "msg_contents": "Hi, Asatryan,\n\nAsatryan, Anahit schrieb:\n> I am running postgreSQL 8.0.1 under the Windows 2000. I want to use COPY\n> FROM STDIN function from Java application, but it doesn’t work, it throws:\n> \n> “org.postgresql.util.PSQLException: Unknown Response Type G” error.\n\nCurrently, there is no COPY support in the postgresql jdbc driver. There\nwere some patches enabling COPY support floating around on the\[email protected] mailing list. You can search the archive and\ntry whether one of them fits your needs.\n\nAFAIR, COPY support is on the TODO list, but they wait for some other\ndriver reworking to be finished.\n\nYou can discuss this issue on psql-jdbc list or search the archives if\nyou need more info.\n\n\nMarkus\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com", "msg_date": "Wed, 23 Feb 2005 15:42:04 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help me please !" } ]
[ { "msg_contents": "Hi all,\nI'm stuck in a select that use the hash join where should not:\n6 seconds vs 0.3 ms !!\n\nIf you need other info in order to improve the planner,\nlet me know.\n\nRegards\nGaetano Mendola\n\n\n\n\nempdb=# explain analyze SELECT id_sat_request\nempdb-# FROM sat_request sr,\nempdb-# v_sc_packages vs\nempdb-# WHERE ----- JOIN ----\nempdb-# sr.id_package = vs.id_package AND\nempdb-# ---------------\nempdb-# id_user = 29416 AND\nempdb-# id_url = 329268 AND\nempdb-# vs.estimated_start > now() AND\nempdb-# id_sat_request_status = sp_lookup_id('sat_request_status', 'Scheduled');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=272.95..276.61 rows=1 width=4) (actual time=6323.107..6323.107 rows=0 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Subquery Scan vs (cost=269.91..272.10 rows=292 width=4) (actual time=6316.534..6317.398 rows=407 loops=1)\n -> Sort (cost=269.91..270.64 rows=292 width=263) (actual time=6316.526..6316.789 rows=407 loops=1)\n Sort Key: vs.estimated_start\n -> Hash Join (cost=227.58..257.95 rows=292 width=263) (actual time=6302.480..6313.982 rows=407 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Subquery Scan vpk (cost=141.82..150.04 rows=1097 width=218) (actual time=6106.020..6113.038 rows=1104 loops=1)\n -> Sort (cost=141.82..144.56 rows=1097 width=162) (actual time=6106.006..6106.745 rows=1104 loops=1)\n Sort Key: p.id_publisher, p.name\n -> Hash Left Join (cost=15.54..86.42 rows=1097 width=162) (actual time=2.978..6087.608 rows=1104 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Seq Scan on packages p (cost=0.00..53.48 rows=1097 width=146) (actual time=0.011..7.978 rows=1104 loops=1)\n -> Hash (cost=13.69..13.69 rows=738 width=20) (actual time=2.061..2.061 rows=0 loops=1)\n -> Seq Scan on package_security ps (cost=0.00..13.69 rows=738 width=20) (actual time=0.027..1.289 rows=747 loops=1)\n -> Hash (cost=85.63..85.63 rows=54 width=49) (actual time=196.022..196.022 rows=0 loops=1)\n -> Merge Join (cost=82.83..85.63 rows=54 width=49) (actual time=192.898..195.565 rows=407 loops=1)\n Merge Cond: (\"outer\".id_program = \"inner\".id_program)\n -> Subquery Scan vs (cost=72.27..73.97 rows=226 width=16) (actual time=6.867..7.872 rows=407 loops=1)\n -> Sort (cost=72.27..72.84 rows=226 width=20) (actual time=6.851..7.114 rows=407 loops=1)\n Sort Key: sequences.id_program, sequences.internal_position\n -> Seq Scan on sequences (cost=0.00..63.44 rows=226 width=20) (actual time=0.295..3.370 rows=407 loops=1)\n Filter: ((estimated_start IS NOT NULL) AND (date_trunc('seconds'::text, estimated_start) > now()))\n -> Sort (cost=10.56..10.68 rows=47 width=37) (actual time=186.013..186.296 rows=439 loops=1)\n Sort Key: vpr.id_program\n -> Subquery Scan vpr (cost=8.90..9.25 rows=47 width=37) (actual time=185.812..185.928 rows=48 loops=1)\n -> Sort (cost=8.90..9.02 rows=47 width=61) (actual time=185.806..185.839 rows=48 loops=1)\n Sort Key: programs.id_publisher, programs.id_program\n -> Seq Scan on programs (cost=0.00..7.60 rows=47 width=61) (actual time=9.592..185.634 rows=48 loops=1)\n Filter: (id_program <> 0)\n -> Hash (cost=3.04..3.04 rows=1 width=8) (actual time=4.862..4.862 rows=0 loops=1)\n -> Index Scan using idx_id_url_sat_request on sat_request sr (cost=0.00..3.04 rows=1 width=8) (actual time=4.851..4.851 rows=0 loops=1)\n Index 
Cond: (id_url = 329268)\n Filter: ((id_user = 29416) AND (id_sat_request_status = 1))\n Total runtime: 6324.435 ms\n(35 rows)\n\nempdb=# set enable_hashjoin = false;\nSET\nempdb=# explain analyze SELECT id_sat_request\nempdb-# FROM sat_request sr,\nempdb-# v_sc_packages vs\nempdb-# WHERE ----- JOIN ----\nempdb-# sr.id_package = vs.id_package AND\nempdb-# ---------------\nempdb-# id_user = 29416 AND\nempdb-# id_url = 329268 AND\nempdb-# vs.estimated_start > now() AND\nempdb-# id_sat_request_status = sp_lookup_id('sat_request_status', 'Scheduled');\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=393.41..400.83 rows=1 width=4) (actual time=0.080..0.080 rows=0 loops=1)\n Join Filter: (\"outer\".id_package = \"inner\".id_package)\n -> Index Scan using idx_id_url_sat_request on sat_request sr (cost=0.00..3.04 rows=1 width=8) (actual time=0.078..0.078 rows=0 loops=1)\n Index Cond: (id_url = 329268)\n Filter: ((id_user = 29416) AND (id_sat_request_status = 1))\n -> Subquery Scan vs (cost=393.41..395.60 rows=292 width=4) (never executed)\n -> Sort (cost=393.41..394.14 rows=292 width=263) (never executed)\n Sort Key: vs.estimated_start\n -> Merge Join (cost=372.76..381.46 rows=292 width=263) (never executed)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Sort (cost=87.19..87.32 rows=54 width=49) (never executed)\n Sort Key: vs.id_package\n -> Merge Join (cost=82.83..85.63 rows=54 width=49) (never executed)\n Merge Cond: (\"outer\".id_program = \"inner\".id_program)\n -> Subquery Scan vs (cost=72.27..73.97 rows=226 width=16) (never executed)\n -> Sort (cost=72.27..72.84 rows=226 width=20) (never executed)\n Sort Key: sequences.id_program, sequences.internal_position\n -> Seq Scan on sequences (cost=0.00..63.44 rows=226 width=20) (never executed)\n Filter: ((estimated_start IS NOT NULL) AND (date_trunc('seconds'::text, estimated_start) > now()))\n -> Sort (cost=10.56..10.68 rows=47 width=37) (never executed)\n Sort Key: vpr.id_program\n -> Subquery Scan vpr (cost=8.90..9.25 rows=47 width=37) (never executed)\n -> Sort (cost=8.90..9.02 rows=47 width=61) (never executed)\n Sort Key: programs.id_publisher, programs.id_program\n -> Seq Scan on programs (cost=0.00..7.60 rows=47 width=61) (never executed)\n Filter: (id_program <> 0)\n -> Sort (cost=285.57..288.31 rows=1097 width=218) (never executed)\n Sort Key: vpk.id_package\n -> Subquery Scan vpk (cost=221.95..230.17 rows=1097 width=218) (never executed)\n -> Sort (cost=221.95..224.69 rows=1097 width=162) (never executed)\n Sort Key: p.id_publisher, p.name\n -> Merge Right Join (cost=108.88..166.55 rows=1097 width=162) (never executed)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Index Scan using package_security_id_package_key on package_security ps (cost=0.00..38.50 rows=738 width=20) (never executed)\n -> Sort (cost=108.88..111.62 rows=1097 width=146) (never executed)\n Sort Key: p.id_package\n -> Seq Scan on packages p (cost=0.00..53.48 rows=1097 width=146) (never executed)\n Total runtime: 0.302 ms\n(38 rows)\n", "msg_date": "Sun, 20 Feb 2005 13:20:38 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "bad performances using hashjoin" }, { "msg_contents": "Gaetano Mendola <[email protected]> writes:\n> If you need other info in order to improve the planner,\n\n... 
like, say, the PG version you are using, or the definitions of the\nviews involved? It's difficult to say much of anything about this.\n\nHowever: the reason the second plan wins is because there are zero rows\nfetched from sat_request, and so the bulk of the plan is never executed\nat all. I doubt the second plan would win if there were any matching\nsat_request rows. If this is the case you actually need to optimize,\nprobably the thing to do is to get rid of the ORDER BY clauses you\nevidently have in your views, so that there's some chance of building\na fast-start plan.\n\nBTW, I believe in 8.0 the first plan would be about as fast as the\nsecond, because we added some code to hash join to fall out without\nscanning the left input if the right input is empty.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Feb 2005 13:46:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad performances using hashjoin " }, { "msg_contents": "Tom Lane wrote:\n> Gaetano Mendola <[email protected]> writes:\n>\n>>If you need other info in order to improve the planner,\n>\n>\n> ... like, say, the PG version you are using, or the definitions of the\n> views involved? It's difficult to say much of anything about this.\n\nThat is true, sorry I forgot it :-(\nThe engine is a 7.4.5 and these are the views definitions:\n\nsat_request is just a table\n\nCREATE OR REPLACE VIEW v_sc_packages AS\n SELECT *\n FROM\n v_programs vpr,\n v_packages vpk,\n v_sequences vs\n WHERE\n ------------ JOIN -------------\n vpr.id_program = vs.id_program AND\n vpk.id_package = vs.id_package AND\n -------------------------------\n vs.estimated_start IS NOT NULL\n ORDER BY vs.estimated_start;\n\n\nCREATE OR REPLACE VIEW v_programs AS\n SELECT *\n FROM programs\n WHERE id_program<>0\n ORDER BY id_publisher, id_program\n;\n\n\n\nCREATE OR REPLACE VIEW v_packages AS\n SELECT *\n FROM packages p LEFT OUTER JOIN package_security ps USING (id_package)\n ORDER BY p.id_publisher, p.name\n;\n\nCREATE OR REPLACE VIEW v_sequences AS\n SELECT id_package AS id_package,\n id_program AS id_program,\n internal_position AS internal_position,\n estimated_start AS estimated_start\n FROM sequences\n ORDER BY id_program, internal_position\n;\n\n\n> However: the reason the second plan wins is because there are zero rows\n> fetched from sat_request, and so the bulk of the plan is never executed\n> at all. I doubt the second plan would win if there were any matching\n> sat_request rows. 
If this is the case you actually need to optimize,\n> probably the thing to do is to get rid of the ORDER BY clauses you\n> evidently have in your views, so that there's some chance of building\n> a fast-start plan.\n\nRemoved all order by from that views, this is the comparison between the two\n(orderdered and not ordered):\n\nempdb=# explain analyze SELECT id_sat_request\nempdb-# FROM sat_request sr,\nempdb-# v_sc_packages vs\nempdb-# WHERE ----- JOIN ----\nempdb-# sr.id_package = vs.id_package AND\nempdb-# ---------------\nempdb-# id_user = 29416 AND\nempdb-# id_url = 424364 AND\nempdb-# vs.estimated_start > now() AND\nempdb-# id_sat_request_status = sp_lookup_id('sat_request_status', 'Scheduled');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=280.98..284.74 rows=1 width=4) (actual time=895.344..895.344 rows=0 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Subquery Scan vs (cost=277.94..280.19 rows=301 width=4) (actual time=893.191..894.396 rows=569 loops=1)\n -> Sort (cost=277.94..278.69 rows=301 width=263) (actual time=893.184..893.546 rows=569 loops=1)\n Sort Key: vs.estimated_start\n -> Hash Join (cost=232.61..265.54 rows=301 width=263) (actual time=868.980..889.643 rows=569 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Subquery Scan vpk (cost=150.29..159.26 rows=1196 width=218) (actual time=822.281..834.063 rows=1203 loops=1)\n -> Sort (cost=150.29..153.28 rows=1196 width=159) (actual time=822.266..823.190 rows=1203 loops=1)\n Sort Key: p.id_publisher, p.name\n -> Hash Left Join (cost=16.14..89.16 rows=1196 width=159) (actual time=3.504..809.262 rows=1203 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Seq Scan on packages p (cost=0.00..53.98 rows=1196 width=143) (actual time=0.018..13.869 rows=1203 loops=1)\n -> Hash (cost=14.09..14.09 rows=818 width=20) (actual time=2.395..2.395 rows=0 loops=1)\n -> Seq Scan on package_security ps (cost=0.00..14.09 rows=818 width=20) (actual time=0.020..1.520 rows=845 loops=1)\n -> Hash (cost=82.19..82.19 rows=51 width=49) (actual time=46.402..46.402 rows=0 loops=1)\n -> Merge Join (cost=79.54..82.19 rows=51 width=49) (actual time=39.435..45.376 rows=569 loops=1)\n Merge Cond: (\"outer\".id_program = \"inner\".id_program)\n -> Subquery Scan vs (cost=70.98..72.59 rows=214 width=16) (actual time=16.834..19.102 rows=569 loops=1)\n -> Sort (cost=70.98..71.52 rows=214 width=20) (actual time=16.824..17.338 rows=569 loops=1)\n Sort Key: sequences.id_program, sequences.internal_position\n -> Seq Scan on sequences (cost=0.00..62.70 rows=214 width=20) (actual time=0.638..11.969 rows=569 loops=1)\n Filter: ((estimated_start IS NOT NULL) AND (date_trunc('seconds'::text, estimated_start) > now()))\n -> Sort (cost=8.56..8.68 rows=47 width=37) (actual time=22.580..23.123 rows=605 loops=1)\n Sort Key: vpr.id_program\n -> Subquery Scan vpr (cost=6.90..7.25 rows=47 width=37) (actual time=22.294..22.464 rows=48 loops=1)\n -> Sort (cost=6.90..7.02 rows=47 width=61) (actual time=22.287..22.332 rows=48 loops=1)\n Sort Key: programs.id_publisher, programs.id_program\n -> Seq Scan on programs (cost=0.00..5.60 rows=47 width=61) (actual time=4.356..22.068 rows=48 loops=1)\n Filter: (id_program <> 0)\n -> Hash (cost=3.04..3.04 rows=1 width=8) (actual time=0.033..0.033 rows=0 loops=1)\n -> Index Scan using idx_id_url_sat_request on sat_request sr 
(cost=0.00..3.04 rows=1 width=8) (actual time=0.031..0.031 rows=0 loops=1)\n Index Cond: (id_url = 424364)\n Filter: ((id_user = 29416) AND (id_sat_request_status = 1))\n Total runtime: 897.044 ms\n(35 rows)\n\n\n\nempdb=# explain analyze SELECT id_sat_request\nempdb-# FROM sat_request sr,\nempdb-# v_sc_packages_new vs\nempdb-# WHERE ----- JOIN ----\nempdb-# sr.id_package = vs.id_package AND\nempdb-# ---------------\nempdb-# id_user = 29416 AND\nempdb-# id_url = 424364 AND\nempdb-# vs.estimated_start > now() AND\nempdb-# id_sat_request_status = sp_lookup_id('sat_request_status', 'Scheduled');\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=19.18..96.87 rows=1 width=4) (actual time=15.576..15.576 rows=0 loops=1)\n -> Nested Loop (cost=19.18..93.04 rows=1 width=8) (actual time=15.569..15.569 rows=0 loops=1)\n -> Hash Join (cost=19.18..89.21 rows=1 width=12) (actual time=15.566..15.566 rows=0 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Hash Left Join (cost=16.14..80.19 rows=1196 width=4) (actual time=7.291..13.620 rows=1203 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Seq Scan on packages p (cost=0.00..53.98 rows=1196 width=4) (actual time=0.028..2.694 rows=1203 loops=1)\n -> Hash (cost=14.09..14.09 rows=818 width=4) (actual time=6.707..6.707 rows=0 loops=1)\n -> Seq Scan on package_security ps (cost=0.00..14.09 rows=818 width=4) (actual time=0.026..4.620 rows=845 loops=1)\n -> Hash (cost=3.04..3.04 rows=1 width=8) (actual time=0.061..0.061 rows=0 loops=1)\n -> Index Scan using idx_id_url_sat_request on sat_request sr (cost=0.00..3.04 rows=1 width=8) (actual time=0.056..0.056 rows=0 loops=1)\n Index Cond: (id_url = 424364)\n Filter: ((id_user = 29416) AND (id_sat_request_status = 1))\n -> Index Scan using idx_sequences_id_package on sequences (cost=0.00..3.82 rows=1 width=8) (never executed)\n Index Cond: (\"outer\".id_package = sequences.id_package)\n Filter: ((estimated_start IS NOT NULL) AND (date_trunc('seconds'::text, estimated_start) > now()))\n -> Index Scan using programs_pkey on programs (cost=0.00..3.83 rows=1 width=4) (never executed)\n Index Cond: (programs.id_program = \"outer\".id_program)\n Filter: (id_program <> 0)\n Total runtime: 16.279 ms\n(20 rows)\n\n\n\n\nThe second one of course is faster, this is the second select with hashjoin disabled:\n\nempdb=# set enable_hashjoin = false;\nSET\nempdb=# explain analyze SELECT id_sat_request\nempdb-# FROM sat_request sr,\nempdb-# v_sc_packages_new vs\nempdb-# WHERE ----- JOIN ----\nempdb-# sr.id_package = vs.id_package AND\nempdb-# ---------------\nempdb-# id_user = 29416 AND\nempdb-# id_url = 424364 AND\nempdb-# vs.estimated_start > now() AND\nempdb-# id_sat_request_status = sp_lookup_id('sat_request_status', 'Scheduled');\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=10.62..175.83 rows=1 width=4) (actual time=0.280..0.280 rows=0 loops=1)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Merge Left Join (cost=0.00..162.21 rows=1196 width=4) (actual time=0.188..0.188 rows=1 loops=1)\n Merge Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Index Scan using packages_pkey on packages p (cost=0.00..115.51 rows=1196 width=4) (actual time=0.085..0.085 rows=1 
loops=1)\n -> Index Scan using package_security_id_package_key on package_security ps (cost=0.00..39.06 rows=818 width=4) (actual time=0.080..0.080 rows=1 loops=1)\n -> Sort (cost=10.62..10.62 rows=1 width=12) (actual time=0.087..0.087 rows=0 loops=1)\n Sort Key: sr.id_package\n -> Nested Loop (cost=0.00..10.61 rows=1 width=12) (actual time=0.069..0.069 rows=0 loops=1)\n -> Nested Loop (cost=0.00..6.77 rows=1 width=16) (actual time=0.067..0.067 rows=0 loops=1)\n -> Index Scan using idx_id_url_sat_request on sat_request sr (cost=0.00..3.04 rows=1 width=8) (actual time=0.065..0.065 rows=0 loops=1)\n Index Cond: (id_url = 424364)\n Filter: ((id_user = 29416) AND (id_sat_request_status = 1))\n -> Index Scan using idx_sequences_id_package on sequences (cost=0.00..3.72 rows=1 width=8) (never executed)\n Index Cond: (\"outer\".id_package = sequences.id_package)\n Filter: ((estimated_start IS NOT NULL) AND (date_trunc('seconds'::text, estimated_start) > now()))\n -> Index Scan using programs_pkey on programs (cost=0.00..3.83 rows=1 width=4) (never executed)\n Index Cond: (programs.id_program = \"outer\".id_program)\n Filter: (id_program <> 0)\n Total runtime: 0.604 ms\n(20 rows)\n\nI see the problem is still here:\nHash Left Join (cost=16.14..80.19 rows=1196 width=4) (actual time=7.291..13.620 rows=1203 loops=1)\n\nBTW, at this time the executions time seen are lower because right now is not a peak hour\n\n> BTW, I believe in 8.0 the first plan would be about as fast as the\n> second, because we added some code to hash join to fall out without\n> scanning the left input if the right input is empty.\n\nI'll take it a try if you are really interested in the results.\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 21 Feb 2005 01:45:03 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad performances using hashjoin" }, { "msg_contents": "On Sun, 20 Feb 2005 13:46:10 -0500, Tom Lane <[email protected]> wrote:\n> sat_request rows. If this is the case you actually need to optimize,\n> probably the thing to do is to get rid of the ORDER BY clauses you\n> evidently have in your views, so that there's some chance of building\n> a fast-start plan.\n\nIs having an order by in a view legal? In sybase ASA, mssql it throws a\nsyntax error if there's an order by.\n\nIf so, does it do 2 sorts when you sort by something else?\n\ni.e. if view is \n create view v1 as select * from table order by 1;\nand the statment\n select * from v1 order by 2;\nis run does it sort by 1 then resort by 2?\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. :\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n", "msg_date": "Mon, 21 Feb 2005 12:30:21 +1100", "msg_from": "Klint Gore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad performances using hashjoin " }, { "msg_contents": "Klint Gore <[email protected]> writes:\n> Is having an order by in a view legal?\n\nNot according to the SQL spec, but PG has allowed it for several releases.\n(The same goes for ORDER BY in a sub-select, which is actually pretty\nmuch the same thing ...)\n\n> If so, does it do 2 sorts when you sort by something else?\n\nYup. It's something you'd only want for the topmost view in a stack,\nIMHO. 
A sort at a lower level is likely to be wasted effort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Feb 2005 21:01:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad performances using hashjoin " }, { "msg_contents": "Tom Lane wrote:\n\n>However: the reason the second plan wins is because there are zero rows\n>fetched from sat_request, and so the bulk of the plan is never executed\n>at all. I doubt the second plan would win if there were any matching\n>sat_request rows.\n>\nThat's what I thought at first, but if you look more closely, that's \nhaving very little impact on either the cost or actual time:\n\n-> Index Scan using idx_id_url_sat_request on sat_request sr (cost=0.00..3.04 rows=1 width=8) (actual time=0.031..0.031 rows=0 loops=1)\n\n\nThe real problem appears to be here:\n\n-> Hash Left Join (cost=16.14..89.16 rows=1196 width=159) (actual time=3.504..809.262 rows=1203 loops=1)\n\n\nAs Gaetano points out in his follow-up post, the problem still exists \nafter he removed the sorts:\n\n-> Hash Left Join (cost=16.14..80.19 rows=1196 width=4) (actual time=7.291..13.620 rows=1203 loops=1)\n\n\nThe planner is not breaking up the outer join in his v_packages view:\n SELECT *\n FROM packages p LEFT OUTER JOIN package_security ps USING (id_package)\n\nIt's not being selective at all with packages, despite id_package being \nthe link to sat_request.\n\nIf this is too complex for the planner, could he re-arrange his outer \njoin so that's it's processed later? If he puts it in his actual query, \nfor instance, will the planner flatten it out anyway?\n", "msg_date": "Mon, 21 Feb 2005 13:01:20 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad performances using hashjoin" }, { "msg_contents": "David Brown <[email protected]> writes:\n> The planner is not breaking up the outer join in his v_packages view:\n\nThe planner doesn't make any attempt to rearrange join order of outer\njoins. 
There are some cases where such a rearrangement is OK, but there\nare other cases where it isn't, and we don't currently have the logic\nneeded to tell which is which.\n\nIn the particular case at hand here, 8.0's hack to suppress evaluating\nthe outer side of a hash join after finding the inner side is empty\nwould eliminate the complaint.\n\nIn the original message, it did seem that the packages-to-\npackage_security join is taking a lot longer than one would expect:\n\n -> Hash Left Join (cost=15.54..86.42 rows=1097 width=162) (actual time=2.978..6087.608 rows=1104 loops=1)\n Hash Cond: (\"outer\".id_package = \"inner\".id_package)\n -> Seq Scan on packages p (cost=0.00..53.48 rows=1097 width=146) (actual time=0.011..7.978 rows=1104 loops=1)\n -> Hash (cost=13.69..13.69 rows=738 width=20) (actual time=2.061..2.061 rows=0 loops=1)\n -> Seq Scan on package_security ps (cost=0.00..13.69 rows=738 width=20) (actual time=0.027..1.289 rows=747 loops=1)\n\nbut this behavior isn't reproduced in the later message, so I wonder if\nit wasn't an artifact of something else taking a chunk of time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 20 Feb 2005 22:48:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad performances using hashjoin " }, { "msg_contents": "Tom Lane wrote:\n\n> but this behavior isn't reproduced in the later message, so I wonder if\n> it wasn't an artifact of something else taking a chunk of time.\n\nI think is due the fact that first queries were performed in peakhours.\n\n\nRegards\nGaetano Mendola\n\n\n", "msg_date": "Mon, 21 Feb 2005 12:17:39 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: bad performances using hashjoin" }, { "msg_contents": "Gaetano Mendola wrote:\n\n>I think is due the fact that first queries were performed in peakhours.\n> \n>\nA memory intensive operation takes 7.5 times longer during heavy loads.\nDoesn't this suggest that swapping of working memory is occurring?\n\n", "msg_date": "Tue, 22 Feb 2005 00:03:03 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad performances using hashjoin" }, { "msg_contents": "Quoting Tom Lane <[email protected]>:\n\n> Klint Gore <[email protected]> writes:\n> > Is having an order by in a view legal?\n> \n> Not according to the SQL spec, but PG has allowed it for several releases.\n> (The same goes for ORDER BY in a sub-select, which is actually pretty\n> much the same thing ...)\n\nUmm... if you implement LIMIT in a subselect, it becomes highly meaningful (nad\nuseful. \n\nIs this a case of one nonstandard feature being the thin edge of the wedge\nfor another?\n-- \n\"Dreams come true, not free.\"\n\n", "msg_date": "Mon, 21 Feb 2005 09:11:34 -0800", "msg_from": "Mischa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: bad performances using hashjoin " } ]
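For reference, a minimal sketch of the change Tom Lane suggests above, dropping the ORDER BY from the view definitions so the planner is free to build a fast-start plan. It is shown here for v_packages only; v_programs, v_sequences and v_sc_packages would be treated the same way. CREATE OR REPLACE works because the column list is unchanged, and callers that actually need an ordering request it themselves.

CREATE OR REPLACE VIEW v_packages AS
  SELECT *
  FROM packages p LEFT OUTER JOIN package_security ps USING (id_package);

-- any ordering is requested only at the top level, where it is really needed
SELECT * FROM v_packages ORDER BY id_publisher, name;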
[ { "msg_contents": "I newly installed the postgresql 7.4.5 and FC 3 in my server and transfer the\ndata from 7.3.2 with just a few problems. After I use the webmin 1.8 to config\nthe grant previllage to the users ,I found that there is an error in the grant\nprevilege .\npostgresql --> Grant Previlege --> error\n\nselect relname, relacl from pg_class where (relkind = 'r' OR relkind = 'S') and\nrelname !~ '^pg_' order by relname : Unknown DBI error\n\nWhat is the cause of this error and how could I handle this order?\nPlease make any comment, Thanks.\nAmrit , Thailand\n", "msg_date": "Sun, 20 Feb 2005 22:49:30 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "[email protected] wrote:\n> I newly installed the postgresql 7.4.5 and FC 3 in my server and transfer the\n> data from 7.3.2 with just a few problems. After I use the webmin 1.8 to config\n> the grant previllage to the users ,I found that there is an error in the grant\n> previlege .\n> postgresql --> Grant Previlege --> error\n> \n> select relname, relacl from pg_class where (relkind = 'r' OR relkind = 'S') and\n> relname !~ '^pg_' order by relname : Unknown DBI error\n> \n> What is the cause of this error and how could I handle this order?\n> Please make any comment, Thanks.\n> \n\nI would suspect a DBI/DBD installation issue, either perl DBI cannot \nfind DBD-Pg (not installed ?) or DBD-Pg cannot find your Pg 7.4.5.\n\nI note that FC3 comes with Pg 7.4.6 - did you installed 7.4.5 from \nsource? If so this could be why the perl database modules cannot find it \n(you may need to rebuild DBD-Pg, telling it where your Pg install is).\n\nregards\n\nMark\n", "msg_date": "Mon, 21 Feb 2005 11:58:43 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "> I would suspect a DBI/DBD installation issue, either perl DBI cannot\n> find DBD-Pg (not installed ?) or DBD-Pg cannot find your Pg 7.4.5.\n>\n> I note that FC3 comes with Pg 7.4.6 - did you installed 7.4.5 from\n> source? If so this could be why the perl database modules cannot find it\n> (you may need to rebuild DBD-Pg, telling it where your Pg install is).\n>\n> regards\n>\n> Mark\n>\n\nI installed FC3 from rpm kernel 2.6.9 which already included postgresql 7.4.5 .\nSuppose that there were some missing component , what should be the missing rpm\ncomponent which I forgot to install ?\nAmrit , Thailand\n", "msg_date": "Mon, 21 Feb 2005 08:34:45 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "[email protected] wrote:\n>>I would suspect a DBI/DBD installation issue, either perl DBI cannot\n>>find DBD-Pg (not installed ?) or DBD-Pg cannot find your Pg 7.4.5.\n>>\n>>I note that FC3 comes with Pg 7.4.6 - did you installed 7.4.5 from\n>>source? 
If so this could be why the perl database modules cannot find it\n>>(you may need to rebuild DBD-Pg, telling it where your Pg install is).\n> \n> I installed FC3 from rpm kernel 2.6.9 which already included postgresql 7.4.5 .\n> Suppose that there were some missing component , what should be the missing rpm\n> component which I forgot to install ?\n>\n\nOk - I must be looking at the *updated* FC3 distribution...\n\nI may have 'jumped the gun' a little - the situation I describe above\nwill prevent *any* access at all to Pg from webmin. If this is the case\nthen check you have (perl) DBI and (perl) DBD-Pg components installed.\n\nIf on the other hand you can do *some* Pg admin from webmin, and you are\nonly having problems with the grants then there is something it does not\nlike about the *particular* statement. The way to debug this is to do a\ntiny perl DBI program that tries to execute the statement :\n\nselect relname, relacl from pg_class where (relkind = 'r' OR relkind =\n'S') and relname !~ '^pg_' order by relname\n\nSo - sorry to confuse, but let us know which situation you have there :-)\n\nbest wishes\n\nMark\n\n", "msg_date": "Tue, 22 Feb 2005 10:16:06 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "Mark Kirkwood wrote:\n> If on the other hand you can do *some* Pg admin from webmin, and you are\n> only having problems with the grants then there is something it does not\n> like about the *particular* statement. The way to debug this is to do a\n> tiny perl DBI program that tries to execute the statement :\n> \n> select relname, relacl from pg_class where (relkind = 'r' OR relkind =\n> 'S') and relname !~ '^pg_' order by relname\n> \n\nI did a quick check of this case... seems to be no problem running this\nstatement using perl 5.8.5, DBI-1.42 and DBD-Pg-1.22. You might like to\ntry out the attached test program that does this (You may have to add a\npassword in order to connect, depending on your security settings).\n\nMark", "msg_date": "Tue, 22 Feb 2005 10:51:16 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "> Ok - I must be looking at the *updated* FC3 distribution...\n>\n> I may have 'jumped the gun' a little - the situation I describe above\n> will prevent *any* access at all to Pg from webmin. If this is the case\n> then check you have (perl) DBI and (perl) DBD-Pg components installed.\n>\n> If on the other hand you can do *some* Pg admin from webmin, and you are\n> only having problems with the grants then there is something it does not\n> like about the *particular* statement. The way to debug this is to do a\n> tiny perl DBI program that tries to execute the statement :\n>\n> select relname, relacl from pg_class where (relkind = 'r' OR relkind =\n> 'S') and relname !~ '^pg_' order by relname\n>\n> So - sorry to confuse, but let us know which situation you have there :-)\n>\n> best wishes\n>\n> Mark\n>\n\nI used you perl script and found the error =>\n[root@samba tmp]# perl relacl.pl\nDBI connect('dbname=template1;port=5432','postgres',...) 
failed: FATAL: IDENT\nauthentication failed for user \"postgres\" at relacl.pl line 21\nError in connect to DBI:Pg:dbname=template1;port=5432:\n\n\nAnd my pg_hba.conf is\n\n# IPv4-style local connections:\nhost all all 127.0.0.1 255.255.255.255 trust\nhost all all 192.168.0.0 255.255.0.0 trust\n\ntrusted for every user.\n\nWould you give me an idea what's wrong?\nThanks .\nAmrit,Thailand\n", "msg_date": "Tue, 22 Feb 2005 12:00:58 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "[email protected] wrote:\n> \n> I used you perl script and found the error =>\n> [root@samba tmp]# perl relacl.pl\n> DBI connect('dbname=template1;port=5432','postgres',...) failed: FATAL: IDENT\n> authentication failed for user \"postgres\" at relacl.pl line 21\n> Error in connect to DBI:Pg:dbname=template1;port=5432:\n> \n> \nExcellent - we know what is going on now!\n\n\n> And my pg_hba.conf is\n> \n> # IPv4-style local connections:\n> host all all 127.0.0.1 255.255.255.255 trust\n> host all all 192.168.0.0 255.255.0.0 trust\n> \n> trusted for every user.\n\nOk, what I think has happened is that there is another Pg installation\n(or another initdb'ed cluster) on this machine that you are accidentally\ntalking to. Try\n\n$ rpm -qa|grep -i postgres\n\nwhich will spot another software installation, you may just have to\nsearch for files called pg_hba.conf to find another initdb'ed cluster....\n\nThis other installation should have a pg_hba.conf that looks something\nlike :\n\nlocal all all ident\nhost all all 127.0.0.1 255.255.255.255 ident\n\nSo a bit of detective work is in order :-)\n\nMark\n\n", "msg_date": "Tue, 22 Feb 2005 18:45:23 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "> > I used you perl script and found the error =>\n> > [root@samba tmp]# perl relacl.pl\n> > DBI connect('dbname=template1;port=5432','postgres',...) failed: FATAL:\n> IDENT\n> > authentication failed for user \"postgres\" at relacl.pl line 21\n> > Error in connect to DBI:Pg:dbname=template1;port=5432:\n> >\n> >\n> Excellent - we know what is going on now!\n>\n>\n> > And my pg_hba.conf is\n> >\n> > # IPv4-style local connections:\n> > host all all 127.0.0.1 255.255.255.255 trust\n> > host all all 192.168.0.0 255.255.0.0 trust\n> >\n> > trusted for every user.\n>\n> Ok, what I think has happened is that there is another Pg installation\n> (or another initdb'ed cluster) on this machine that you are accidentally\n> talking to. 
Try\n>\n> $ rpm -qa|grep -i postgres\n>\n> which will spot another software installation, you may just have to\n> search for files called pg_hba.conf to find another initdb'ed cluster....\n>\n> This other installation should have a pg_hba.conf that looks something\n> like :\n>\n> local all all ident\n> host all all 127.0.0.1 255.255.255.255 ident\n>\n> So a bit of detective work is in order :-)\n>\n> Mark\nAfter being a detector I found that\n[root@samba ~]# rpm -qa|grep -i postgres\npostgresql-7.4.5-3.1.tlc\npostgresql-python-7.4.5-3.1.tlc\npostgresql-jdbc-7.4.5-3.1.tlc\npostgresql-tcl-7.4.5-3.1.tlc\npostgresql-server-7.4.5-3.1.tlc\npostgresql-libs-7.4.5-3.1.tlc\npostgresql-docs-7.4.5-3.1.tlc\npostgresql-odbc-7.3-8.1.tlc\npostgresql-pl-7.4.5-3.1.tlc\npostgresql-test-7.4.5-3.1.tlc\npostgresql-contrib-7.4.5-3.1.tlc\n[root@samba ~]#\n\nno other pg installation except the pgsql for windows in samba folder which I\nthink it isn't matter ,is it?\nNo other pg being run.\n[root@samba ~]# ps ax|grep postmaster\n 2228 ? S 0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data\n 3308 pts/0 S+ 0:00 grep postmaster\n[root@samba ~]#\n\nIs it possible that it is related to pg_ident.conf ?\n\nAny comment please.\nAmrit,Thailand\n\n\n\n", "msg_date": "Tue, 22 Feb 2005 22:37:19 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "[email protected] wrote:\n\n> After being a detector I found that\n> [root@samba ~]# rpm -qa|grep -i postgres\n> postgresql-7.4.5-3.1.tlc\n> postgresql-python-7.4.5-3.1.tlc\n> postgresql-jdbc-7.4.5-3.1.tlc\n> postgresql-tcl-7.4.5-3.1.tlc\n> postgresql-server-7.4.5-3.1.tlc\n> postgresql-libs-7.4.5-3.1.tlc\n> postgresql-docs-7.4.5-3.1.tlc\n> postgresql-odbc-7.3-8.1.tlc\n> postgresql-pl-7.4.5-3.1.tlc\n> postgresql-test-7.4.5-3.1.tlc\n> postgresql-contrib-7.4.5-3.1.tlc\n> [root@samba ~]#\n> \n> no other pg installation except the pgsql for windows in samba folder which I\n> think it isn't matter ,is it?\n> No other pg being run.\n> [root@samba ~]# ps ax|grep postmaster\n> 2228 ? S 0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data\n> 3308 pts/0 S+ 0:00 grep postmaster\n> [root@samba ~]#\n>\nWell, sure looks like you only have one running. Your data directory is\n/var/lib/pgsql/data so lets see the files:\n\n/var/lib/pgsql/data/pg_hba.conf\n/var/lib/pgsql/data/pg_ident.conf\n/var/lib/pgsql/data/postmaster.opts\n\nMight also be useful to know any nondefault settings in postgresql.conf too.\n\nAs I understand it, these vendor shipped rpms have ident *enabled*.\nI will download FC3 Pg and check this out... I'm a compile it from\nsource guy :-)\n\nMark\n\n\n", "msg_date": "Wed, 23 Feb 2005 09:07:16 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "> Well, sure looks like you only have one running. Your data directory is\n> /var/lib/pgsql/data so lets see the files:\n>\n> /var/lib/pgsql/data/pg_hba.conf\n> /var/lib/pgsql/data/pg_ident.conf\n> /var/lib/pgsql/data/postmaster.opts\n>\n> Might also be useful to know any nondefault settings in postgresql.conf too.\n>\n> As I understand it, these vendor shipped rpms have ident *enabled*.\n> I will download FC3 Pg and check this out... I'm a compile it from\n> source guy :-)\n>\n> Mark\n\nI got the answer that is in module config of postgresl-webmin , there is a check\nbox for\n\nUse DBI to connect if available? 
yes no the default is\nyes , but if I choosed no everything went fine.\n\nI also test it in the desktop mechine and get the same error and the same\nsolution. Could you explain what happen to the FC3 + postgresql and webmin 1.8?\nThanks\nAmrit ,Thailand\n", "msg_date": "Wed, 23 Feb 2005 21:58:29 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" }, { "msg_contents": "[email protected] wrote:\n> \n> I got the answer that is in module config of postgresl-webmin , there is a check\n> box for\n> \n> Use DBI to connect if available? yes no the default is\n> yes , but if I choosed no everything went fine.\n> \n> I also test it in the desktop mechine and get the same error and the same\n> solution. Could you explain what happen to the FC3 + postgresql and webmin 1.8?\n\nWell, given the error was coming from the postmaster, I don't believe\nthat DBI or webmin have anything to do with it. What I can believe is\nthat DBI=yes and DBI=no are using different parameters for connecting,\ntherefore hitting different parts of your old (see below) pg_hba.conf\nsettings.\n\nI concur with the other poster, and suspect that the files *were* using\nsome form of ident identification, but have been subsequently edited to\nuse trust - but the postmaster has not been restarted to know this! Try\n\n$ pg_ctl reload\n\nto get running with 'trust'.\n\n\n", "msg_date": "Thu, 24 Feb 2005 09:30:50 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" } ]
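One possibility the thread leaves implicit, offered here as an assumption rather than a confirmed diagnosis: a DBI connect string with no host parameter normally connects over the local Unix socket, so it is matched by a 'local ... ident' line in pg_hba.conf rather than by the 'host ... trust' lines quoted above, which would explain the IDENT failure even with only one installation present. A quick way to take webmin and DBI out of the picture is to force a TCP connection (for example, psql -h 127.0.0.1 -U postgres template1) and run the statement webmin was failing on, taken verbatim from the thread. Remember that edits to pg_hba.conf only take effect after a reload, as Mark points out with pg_ctl reload.

SELECT relname, relacl
FROM pg_class
WHERE (relkind = 'r' OR relkind = 'S')
  AND relname !~ '^pg_'
ORDER BY relname;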
[ { "msg_contents": ">> I don't think that's correct either. Scatter/Gather I/O is \n>used to SQL\n>> Server can issue reads for several blocks from disks into it's own\n>> buffer cache with a single syscall even if these buffers are not\n>> sequential. It did make significant performance improvements \n>when they\n>> added it, though.\n>> \n>> (For those not knowing - it's ReadFile/WriteFile where you \n>pass an array\n>> of \"this many bytes to this address\" as parameters)\n>\n>Isn't that like the BSD writev()/readv() that Linux supports also? Is\n>that something we should be using on Unix if it is supported by the OS?\n\nYes, they certainly seem very similar. The win32 functions are\nexplicitly designed for async I/O (they were after all created\nspecifically for SQL Server), so they put harder requirements on the\nparameters. Specifically, it writes complete system pages only, and each\npointer has to point to only one page.\nIn a file opened without buffering it will also write all buffers out\nand then wait for I/O completion from the device instead of one for\neach. Not sure what the writev/readv ones do (not clear from my linux\nman page).\n\n\nNow wether this is something we could make use of - I'll leave that up\nto those who know the buffer manager a lot better than I do.\n\n//Magnus\n", "msg_date": "Sun, 20 Feb 2005 19:25:33 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan cache vs. index cache smackdown" } ]
[ { "msg_contents": "> Magnus Hagander wrote:\n> > I don't think that's correct either. Scatter/Gather I/O is used to\nSQL\n> > Server can issue reads for several blocks from disks into it's own\n> > buffer cache with a single syscall even if these buffers are not\n> > sequential. It did make significant performance improvements when\nthey\n> > added it, though.\n> >\n> > (For those not knowing - it's ReadFile/WriteFile where you pass an\narray\n> > of \"this many bytes to this address\" as parameters)\n> \n> Isn't that like the BSD writev()/readv() that Linux supports also? Is\n> that something we should be using on Unix if it is supported by the\nOS?\n\nreadv and writev are in the single unix spec...and yes they are\nbasically just like the win32 versions except that that are synchronous\n(and therefore better, IMO).\n\nOn some systems they might just be implemented as a loop inside the\nlibrary, or even as a macro.\n\nhttp://www.opengroup.org/onlinepubs/007908799/xsh/sysuio.h.html\n\nOn operating systems that optimize vectored read operations, it's pretty\nreasonable to assume good or even great performance gains, in addition\nto (or instead of) recent changes to xlog.c to group writes together for\na file...it just takes things one stop further.\n\nIs there a reason why readv/writev have not been considered in the past?\n\nMerlin\n", "msg_date": "Mon, 21 Feb 2005 09:03:20 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: seq scan cache vs. index cache smackdown" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> Is there a reason why readv/writev have not been considered in the past?\n\nLack of portability, and lack of obvious usefulness that would justify\ndealing with the lack of portability.\n\nI don't think there's any value in trying to write ordinary buffers this\nway; making the buffer manager able to write multiple buffers at once\nsounds like a great deal of complexity and deadlock risk in return for\nnot much. It might be an alternative to the existing proposed patch for\nwriting multiple WAL buffers at once, but frankly I consider that patch\na waste of effort. In real scenarios you seldom get to write more than\none WAL page without a forced sync occurring because someone committed.\nEven if *your* transaction is long, any other backend committing a small\ntransaction still fsyncs. On top of that, the bgwriter will be flushing\nWAL in order to maintain the write-ahead rule any time it dumps a dirty\nbuffer. I have a personal to-do item to make the bgwriter explicitly\nresponsible for writing completed WAL pages as part of its duties, but\nI haven't done anything about it because I think that it will write lots\nof such pages without any explicit code, thanks to the bufmgr's LSN\ninterlock. Even if it doesn't get it done that way, the simplest answer\nis to add a little bit of code to make sure bgwriter generally does the\nwrites, and then we don't care.\n\nIf you want to experiment with writev, feel free, but I'll want to see\ndemonstrable performance benefits before any such code actually goes in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Feb 2005 11:27:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. 
index cache smackdown " }, { "msg_contents": "Merlin Moncure wrote:\n> \n> readv and writev are in the single unix spec...and yes ...\n> \n> On some systems they might just be implemented as a loop inside the\n> library, or even as a macro.\n\nYou sure?\n\nRequirements like this:\n http://www.opengroup.org/onlinepubs/007908799/xsh/write.html\n \"Write requests of {PIPE_BUF} bytes or less will not be\n interleaved with data from other processes doing writes\n on the same pipe.\"\nmake me think that it couldn't be just a macro; and if it\nwere a loop in the library it seems it'd still have to\nmake sure it's done with a single write system call.\n\n(yeah, I know that requirement is just for pipes; and I\nsuppose they could write a loop for normal files and a\ndifferent special case for pipes; but I'd be surprised).\n", "msg_date": "Mon, 21 Feb 2005 17:14:19 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: seq scan cache vs. index cache smackdown" } ]
[ { "msg_contents": "Hi -\n\nThis is based on a discussion I was having with neilc on IRC. He \nsuggested I post it here. Sorry for the length - I'm including \neverything he requested.\n\nI'm comparing the speeds of the following two queries. I was curious \nwhy query 1 was faster than query 2:\n\nquery 1:\nSelect layer_number\nFROM batch_report_index\nWHERE device_id = (SELECT device_id FROM device_index WHERE device_name \n='CP8M')\nAND technology_id = (SELECT technology_id FROM technology_index WHERE \ntechnology_name = 'G12');\n\nquery 2:\nSelect b.layer_number\nFROM batch_report_index b, device_index d, technology_index t\nWHERE b.device_id = d.device_id\nAND b.technology_id = t.technology_id\nAND d.device_name = 'CP8M'\nAND t.technology_name = 'G12';\n\n\nHere were my first runs:\n(query 1 explain analyze)\n Seq Scan on batch_report_index (cost=6.05..12370.66 rows=83 width=4) \n(actual time=19.274..1903.110 rows=61416 loops=1)\n Filter: ((device_id = $0) AND (technology_id = $1))\n InitPlan\n -> Index Scan using device_index_device_name_key on device_index \n(cost=0.00..4.88 rows=1 width=4) (actual time=0.310..0.320 rows=1 \nloops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Seq Scan on technology_index (cost=0.00..1.18 rows=1 width=4) \n(actual time=0.117..0.149 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n Total runtime: 1947.896 ms\n(8 rows)\n\n\n(query 2 explain analyze)\n Hash Join (cost=6.06..12380.70 rows=46 width=4) (actual \ntime=35.509..2831.685 rows=61416 loops=1)\n Hash Cond: (\"outer\".technology_id = \"inner\".technology_id)\n -> Hash Join (cost=4.88..12375.87 rows=638 width=8) (actual \ntime=34.584..2448.862 rows=61416 loops=1)\n Hash Cond: (\"outer\".device_id = \"inner\".device_id)\n -> Seq Scan on batch_report_index b (cost=0.00..10182.74 \nrows=436374 width=12) (actual time=0.100..1373.085 rows=436374 loops=1)\n -> Hash (cost=4.88..4.88 rows=1 width=4) (actual \ntime=0.635..0.635 rows=0 loops=1)\n -> Index Scan using device_index_device_name_key on \ndevice_index d (cost=0.00..4.88 rows=1 width=4) (actual \ntime=0.505..0.520 rows=1 loops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Hash (cost=1.18..1.18 rows=1 width=4) (actual time=0.348..0.348 \nrows=0 loops=1)\n -> Seq Scan on technology_index t (cost=0.00..1.18 rows=1 \nwidth=4) (actual time=0.198..0.239 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n Total runtime: 2872.252 ms\n(12 rows)\n\n\nOn neilc's suggestion, I did a vacuum analyze, then turned off hash \njoins. Here's query 2, no hash joins:\n\n(query 2 explain analyze)\n Nested Loop (cost=0.00..15651.44 rows=46 width=4) (actual \ntime=22.079..2741.103 rows=61416 loops=1)\n Join Filter: (\"inner\".technology_id = \"outer\".technology_id)\n -> Seq Scan on technology_index t (cost=0.00..1.18 rows=1 width=4) \n(actual time=0.178..0.218 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n -> Nested Loop (cost=0.00..15642.29 rows=638 width=8) (actual \ntime=21.792..2530.470 rows=61416 loops=1)\n Join Filter: (\"inner\".device_id = \"outer\".device_id)\n -> Index Scan using device_index_device_name_key on \ndevice_index d (cost=0.00..4.88 rows=1 width=4) (actual \ntime=0.331..0.346 rows=1 loops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Seq Scan on batch_report_index b (cost=0.00..10182.74 \nrows=436374 width=12) (actual time=0.070..1437.938 rows=436374 loops=1)\n Total runtime: 2782.628 ms\n(10 rows)\n\nHe then suggested I turn hash_joins back on and put an index on the \nbatch_report_table's device_id. 
Here's query 2 again:\n\n(query 2 explain analyze)\nHash Join (cost=1.18..2389.06 rows=46 width=4) (actual \ntime=1.562..2473.554 rows=61416 loops=1)\n Hash Cond: (\"outer\".technology_id = \"inner\".technology_id)\n -> Nested Loop (cost=0.00..2384.24 rows=638 width=8) (actual \ntime=0.747..2140.160 rows=61416 loops=1)\n -> Index Scan using device_index_device_name_key on \ndevice_index d (cost=0.00..4.88 rows=1 width=4) (actual \ntime=0.423..0.435 rows=1 loops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Index Scan using b_r_device_index on batch_report_index b \n(cost=0.00..2365.82 rows=1083 width=12) (actual time=0.288..1868.118 \nrows=61416 loops=1)\n Index Cond: (b.device_id = \"outer\".device_id)\n -> Hash (cost=1.18..1.18 rows=1 width=4) (actual time=0.359..0.359 \nrows=0 loops=1)\n -> Seq Scan on technology_index t (cost=0.00..1.18 rows=1 \nwidth=4) (actual time=0.198..0.237 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n Total runtime: 2515.950 ms\n(11 rows)\n\nHe then suggested I increase the statistics on batch_report_index & run \nthe query again. I \"set statistics\" for both\nthe device_id and technology_id column to 900, vacuum analyzed, and \nre-ran the query (it's still slower than query\n1 run after the same contortions ):\n\n(query 2 explain analyze)\n Hash Join (cost=1.18..1608.49 rows=46 width=4) (actual \ntime=1.437..1499.414 rows=61416 loops=1)\n Hash Cond: (\"outer\".technology_id = \"inner\".technology_id)\n -> Nested Loop (cost=0.00..1603.66 rows=638 width=8) (actual \ntime=0.613..1185.826 rows=61416 loops=1)\n -> Index Scan using device_index_device_name_key on \ndevice_index d (cost=0.00..4.88 rows=1 width=4) (actual \ntime=0.246..0.259 rows=1 loops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Index Scan using b_r_device_index on batch_report_index b \n(cost=0.00..1589.93 rows=708 width=12) (actual time=0.324..928.888 \nrows=61416 loops=1)\n Index Cond: (b.device_id = \"outer\".device_id)\n -> Hash (cost=1.18..1.18 rows=1 width=4) (actual time=0.358..0.358 \nrows=0 loops=1)\n -> Seq Scan on technology_index t (cost=0.00..1.18 rows=1 \nwidth=4) (actual time=0.196..0.238 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n Total runtime: 1538.302 ms\n\nAt this point, he said \"send it to the -perform mailing list\". 
So here \nI am.\n\nThe relevant table schemas as of right now:\nreg=# \\d batch_report_index\n Table \"public.batch_report_index\"\n Column | Type | \n Modifiers\n-----------------+------------------------ \n+---------------------------------------------------------------------\n batch_report_id | integer | not null default \nnextval('\"batch_report__batch_report__seq\"'::text)\n lot | character varying(16) |\n tool_id | integer | not null\n technology_id | integer | not null\n device_id | integer | not null\n reticle_id | integer | not null\n layer_id | integer | not null\n layer_number | integer | not null\n image_id | integer |\n start_date | date |\n start_time | time without time zone |\n stop_date | date |\n stop_time | time without time zone |\n in_system | boolean | default false\nIndexes:\n \"batch_report_index_pkey\" primary key, btree (batch_report_id)\n \"b_r_device_index\" btree (device_id)\n \"batch_report_stop_date_indexing\" btree (stop_date)\n \"batch_report_tool_id_index\" btree (tool_id)\n\nreg=# \\d technology_index\n Table \"public.technology_index\"\n Column | Type | Modifiers\n-----------------+--------- \n+---------------------------------------------------------------------\n technology_id | integer | not null default \nnextval('\"technology_in_technology_id_seq\"'::text)\n technology_name | text | not null\nIndexes:\n \"technology_index_pkey\" primary key, btree (technology_id)\n \"technology_in_technology_na_key\" unique, btree (technology_name)\n\nreg=# \\d device_index\n Table \"public.device_index\"\n Column | Type | Modifiers\n-------------+--------- \n+----------------------------------------------------------------\n device_id | integer | not null default \nnextval('\"device_index_device_id_seq\"'::text)\n device_name | text | not null\nIndexes:\n \"device_index_pkey\" primary key, btree (device_id)\n \"device_index_device_name_key\" unique, btree (device_name)\n\nThanks for the great work y'all do!\n\n", "msg_date": "Mon, 21 Feb 2005 22:47:59 -0800", "msg_from": "David Haas <[email protected]>", "msg_from_op": true, "msg_subject": "join vs. subquery" } ]
[ { "msg_contents": "Hi -\n\nThis is based on a discussion I was having with neilc on IRC. He \nsuggested I post it here. Sorry for the length - I'm including \neverything he requested\n\nI'm comparing the speeds of the following two queries on 7.4.5. I was \ncurious why query 1 was faster than query 2:\n\nquery 1:\nSelect layer_number\nFROM batch_report_index\nWHERE device_id = (SELECT device_id FROM device_index WHERE device_name \n='CP8M')\nAND technology_id = (SELECT technology_id FROM technology_index WHERE \ntechnology_name = 'G12');\n\nquery 2:\nSelect b.layer_number\nFROM batch_report_index b, device_index d, technology_index t\nWHERE b.device_id = d.device_id\nAND b.technology_id = t.technology_id\nAND d.device_name = 'CP8M'\nAND t.technology_name = 'G12';\n\n\nHere were my first runs:\n(query 1 explain analyze)\n Seq Scan on batch_report_index (cost=6.05..12370.66 rows=83 width=4) \n(actual time=19.274..1903.110 rows=61416 loops=1)\n Filter: ((device_id = $0) AND (technology_id = $1))\n InitPlan\n -> Index Scan using device_index_device_name_key on device_index \n(cost=0.00..4.88 rows=1 width=4) (actual time=0.310..0.320 rows=1 \nloops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Seq Scan on technology_index (cost=0.00..1.18 rows=1 width=4) \n(actual time=0.117..0.149 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n Total runtime: 1947.896 ms\n(8 rows)\n\n\n(query 2 explain analyze)\n Hash Join (cost=6.06..12380.70 rows=46 width=4) (actual \ntime=35.509..2831.685 rows=61416 loops=1)\n Hash Cond: (\"outer\".technology_id = \"inner\".technology_id)\n -> Hash Join (cost=4.88..12375.87 rows=638 width=8) (actual \ntime=34.584..2448.862 rows=61416 loops=1)\n Hash Cond: (\"outer\".device_id = \"inner\".device_id)\n -> Seq Scan on batch_report_index b (cost=0.00..10182.74 \nrows=436374 width=12) (actual time=0.100..1373.085 rows=436374 loops=1)\n -> Hash (cost=4.88..4.88 rows=1 width=4) (actual \ntime=0.635..0.635 rows=0 loops=1)\n -> Index Scan using device_index_device_name_key on \ndevice_index d (cost=0.00..4.88 rows=1 width=4) (actual \ntime=0.505..0.520 rows=1 loops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Hash (cost=1.18..1.18 rows=1 width=4) (actual time=0.348..0.348 \nrows=0 loops=1)\n -> Seq Scan on technology_index t (cost=0.00..1.18 rows=1 \nwidth=4) (actual time=0.198..0.239 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n Total runtime: 2872.252 ms\n(12 rows)\n\n\nOn neilc's suggestion, I did a vacuum analyze, then turned off hash \njoins. 
Here's query 2, no hash joins:\n\n(query 2 explain analyze)\n Nested Loop (cost=0.00..15651.44 rows=46 width=4) (actual \ntime=22.079..2741.103 rows=61416 loops=1)\n Join Filter: (\"inner\".technology_id = \"outer\".technology_id)\n -> Seq Scan on technology_index t (cost=0.00..1.18 rows=1 width=4) \n(actual time=0.178..0.218 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n -> Nested Loop (cost=0.00..15642.29 rows=638 width=8) (actual \ntime=21.792..2530.470 rows=61416 loops=1)\n Join Filter: (\"inner\".device_id = \"outer\".device_id)\n -> Index Scan using device_index_device_name_key on \ndevice_index d (cost=0.00..4.88 rows=1 width=4) (actual \ntime=0.331..0.346 rows=1 loops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Seq Scan on batch_report_index b (cost=0.00..10182.74 \nrows=436374 width=12) (actual time=0.070..1437.938 rows=436374 loops=1)\n Total runtime: 2782.628 ms\n(10 rows)\n\nHe then suggested I turn hash_joins back on and put an index on the \nbatch_report_table's device_id. Here's query 2 again:\n\n(query 2 explain analyze)\nHash Join (cost=1.18..2389.06 rows=46 width=4) (actual \ntime=1.562..2473.554 rows=61416 loops=1)\n Hash Cond: (\"outer\".technology_id = \"inner\".technology_id)\n -> Nested Loop (cost=0.00..2384.24 rows=638 width=8) (actual \ntime=0.747..2140.160 rows=61416 loops=1)\n -> Index Scan using device_index_device_name_key on \ndevice_index d (cost=0.00..4.88 rows=1 width=4) (actual \ntime=0.423..0.435 rows=1 loops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Index Scan using b_r_device_index on batch_report_index b \n(cost=0.00..2365.82 rows=1083 width=12) (actual time=0.288..1868.118 \nrows=61416 loops=1)\n Index Cond: (b.device_id = \"outer\".device_id)\n -> Hash (cost=1.18..1.18 rows=1 width=4) (actual time=0.359..0.359 \nrows=0 loops=1)\n -> Seq Scan on technology_index t (cost=0.00..1.18 rows=1 \nwidth=4) (actual time=0.198..0.237 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n Total runtime: 2515.950 ms\n(11 rows)\n\nHe then suggested I increase the statistics on batch_report_index & run \nthe query again. I \"set statistics\" for both\nthe device_id and technology_id column to 900, vacuum analyzed, and \nre-ran the query (it's still slower than query\n1 run after the same contortions ):\n\n(query 2 explain analyze)\n Hash Join (cost=1.18..1608.49 rows=46 width=4) (actual \ntime=1.437..1499.414 rows=61416 loops=1)\n Hash Cond: (\"outer\".technology_id = \"inner\".technology_id)\n -> Nested Loop (cost=0.00..1603.66 rows=638 width=8) (actual \ntime=0.613..1185.826 rows=61416 loops=1)\n -> Index Scan using device_index_device_name_key on \ndevice_index d (cost=0.00..4.88 rows=1 width=4) (actual \ntime=0.246..0.259 rows=1 loops=1)\n Index Cond: (device_name = 'CP8M'::text)\n -> Index Scan using b_r_device_index on batch_report_index b \n(cost=0.00..1589.93 rows=708 width=12) (actual time=0.324..928.888 \nrows=61416 loops=1)\n Index Cond: (b.device_id = \"outer\".device_id)\n -> Hash (cost=1.18..1.18 rows=1 width=4) (actual time=0.358..0.358 \nrows=0 loops=1)\n -> Seq Scan on technology_index t (cost=0.00..1.18 rows=1 \nwidth=4) (actual time=0.196..0.238 rows=1 loops=1)\n Filter: (technology_name = 'G12'::text)\n Total runtime: 1538.302 ms\n\nAt this point, he said \"send it to the -perform mailing list\". 
So here \nI am.\n\nThe relevant table schemas as of right now:\nreg=# \\d batch_report_index\n Table \"public.batch_report_index\"\n Column | Type | \n Modifiers\n-----------------+------------------------ \n+---------------------------------------------------------------------\n batch_report_id | integer | not null default \nnextval('\"batch_report__batch_report__seq\"'::text)\n lot | character varying(16) |\n tool_id | integer | not null\n technology_id | integer | not null\n device_id | integer | not null\n reticle_id | integer | not null\n layer_id | integer | not null\n layer_number | integer | not null\n image_id | integer |\n start_date | date |\n start_time | time without time zone |\n stop_date | date |\n stop_time | time without time zone |\n in_system | boolean | default false\nIndexes:\n \"batch_report_index_pkey\" primary key, btree (batch_report_id)\n \"b_r_device_index\" btree (device_id)\n \"batch_report_stop_date_indexing\" btree (stop_date)\n \"batch_report_tool_id_index\" btree (tool_id)\n\nreg=# \\d technology_index\n Table \"public.technology_index\"\n Column | Type | Modifiers\n-----------------+--------- \n+---------------------------------------------------------------------\n technology_id | integer | not null default \nnextval('\"technology_in_technology_id_seq\"'::text)\n technology_name | text | not null\nIndexes:\n \"technology_index_pkey\" primary key, btree (technology_id)\n \"technology_in_technology_na_key\" unique, btree (technology_name)\n\nreg=# \\d device_index\n Table \"public.device_index\"\n Column | Type | Modifiers\n-------------+--------- \n+----------------------------------------------------------------\n device_id | integer | not null default \nnextval('\"device_index_device_id_seq\"'::text)\n device_name | text | not null\nIndexes:\n \"device_index_pkey\" primary key, btree (device_id)\n \"device_index_device_name_key\" unique, btree (device_name)\n\nThanks for the great work y'all do!\n\n", "msg_date": "Mon, 21 Feb 2005 23:05:08 -0800", "msg_from": "David Haas <[email protected]>", "msg_from_op": true, "msg_subject": "subquery vs join on 7.4.5" }, { "msg_contents": "David Haas <[email protected]> writes:\n> I'm comparing the speeds of the following two queries on 7.4.5. I was \n> curious why query 1 was faster than query 2:\n\n> query 1:\n> Select layer_number\n> FROM batch_report_index\n> WHERE device_id = (SELECT device_id FROM device_index WHERE device_name \n> ='CP8M')\n> AND technology_id = (SELECT technology_id FROM technology_index WHERE \n> technology_name = 'G12');\n\n> query 2:\n> Select b.layer_number\n> FROM batch_report_index b, device_index d, technology_index t\n> WHERE b.device_id = d.device_id\n> AND b.technology_id = t.technology_id\n> AND d.device_name = 'CP8M'\n> AND t.technology_name = 'G12';\n\nWhy didn't you try a two-column index on batch_report_index(device_id,\ntechnology_id) ?\n\nWhether this would actually be better than a seqscan I'm not sure, given\nthe large number of matching rows. But the planner would surely try it\ngiven that it's drastically underestimating that number :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Feb 2005 11:54:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: subquery vs join on 7.4.5 " } ]
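A minimal sketch of the two-column index Tom Lane suggests, using the table and column names from the thread; the index name itself is invented here, and whether the planner actually prefers it over a seqscan still depends on how many rows match:

-- hypothetical index name; the columns are the two filter keys from the thread
CREATE INDEX b_r_device_tech_idx
    ON batch_report_index (device_id, technology_id);
ANALYZE batch_report_index;

-- re-run the join form of the query to see whether the new index is picked up
EXPLAIN ANALYZE
SELECT b.layer_number
FROM batch_report_index b, device_index d, technology_index t
WHERE b.device_id = d.device_id
  AND b.technology_id = t.technology_id
  AND d.device_name = 'CP8M'
  AND t.technology_name = 'G12';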
[ { "msg_contents": "\n\n\nHi,\n\nI've downloaded the latest release (PostgreSQL 8.0) for windows.\nInstallation was OK, but I have tried to restore a database.\nIt had more than ~100.000 records. Usually I use PostgreSQL\nunder Linux, and it used to be done under 10 minutes.\n\nUnder W2k und XP it took 3 hours(!) Why is it so slow????\n\nThe commands I used:\n\nUnder Linux: (duration: 1 minute)\n\tpg_dump -D databasename > databasename.db\n\nUnder Windows: (duration: 3 - 3.5 hours(!))\n\tpsql databasename < databasename.db >nul\n\nIt seemed to me, that only 20-30 transactions/sec were\nwriten to the database.\n\nI need to write scripts for automatic (sheduled) database\nbackup and recovery.\n\nHelp anyone?\n\nBye,\n\nVig Sándor\n\n\nThe information transmitted is intended only for the person or entity to\nwhich it is addressed and may contain confidential and/or privileged\nmaterial. Any review, retransmission, dissemination or other use of, or\ntaking of any action in reliance upon, this information by persons or\nentities other than the intended recipient is prohibited. If you received\nthis in error, please contact the sender and delete the material from any\ncomputer.\n\nThe information transmitted is intended only for the person or entity to\nwhich it is addressed and may contain confidential and/or privileged\nmaterial. Any review, retransmission, dissemination or other use of, or\ntaking of any action in reliance upon, this information by persons or\nentities other than the intended recipient is prohibited. If you received\nthis in error, please contact the sender and delete the material from any\ncomputer.\n", "msg_date": "Tue, 22 Feb 2005 16:00:59 +0100", "msg_from": "\"Vig, Sandor (G/FI-2)\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL is extremely slow on Windows" }, { "msg_contents": "On Tue, 22 Feb 2005 16:00:59 +0100, Vig, Sandor (G/FI-2)\n<[email protected]> wrote:\n> \n> \n> Hi,\n> \n> I've downloaded the latest release (PostgreSQL 8.0) for windows.\n> Installation was OK, but I have tried to restore a database.\n> It had more than ~100.000 records. Usually I use PostgreSQL\n> under Linux, and it used to be done under 10 minutes.\n> \n> Under W2k und XP it took 3 hours(!) Why is it so slow????\n\nCan you tell us your postgresql.conf configuration settings? We cannot\nhelp without some information about your environment...\n\n-- Mitch\n", "msg_date": "Tue, 22 Feb 2005 11:30:28 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" } ]
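One quick way to gather the settings Mitch asks about, without hunting through postgresql.conf by hand, is to query them from psql. The list below is only a plausible starting set for a restore-speed problem, not an exhaustive one:

-- settings most likely to matter for bulk-load / restore throughput on 8.0
SHOW fsync;
SHOW shared_buffers;
SHOW wal_buffers;
SHOW checkpoint_segments;
SHOW commit_delay;
-- or simply dump everything:
-- SHOW ALL;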
[ { "msg_contents": "\n>I've downloaded the latest release (PostgreSQL 8.0) for windows.\n>Installation was OK, but I have tried to restore a database.\n>It had more than ~100.000 records. Usually I use PostgreSQL\n>under Linux, and it used to be done under 10 minutes.\n>\n>Under W2k und XP it took 3 hours(!) Why is it so slow????\n>\n>The commands I used:\n>\n>Under Linux: (duration: 1 minute)\n>\tpg_dump -D databasename > databasename.db\n>\n>Under Windows: (duration: 3 - 3.5 hours(!))\n>\tpsql databasename < databasename.db >nul\n>\n>It seemed to me, that only 20-30 transactions/sec were\n>writen to the database.\n\n20-30 transactionsi s about what you'll get on a single disk on Windows\ntoday.\nWe have a patch in testing that will bring this up to about 80.\nYou can *never* get above 80 without using write cache, regardless of\nyour OS, if you have a single disk. You might want to look into wether\nwrite cacheing is enabled on your linux box, and disable it. (unless you\nare using RAID) A lot points towards write cache enabled on your system.\n\nIf you need the performance that equals the one with write cache on, you\ncan set fsync=off. But then you will lose the guarantee that your\nmachine will survive an unclean shutdown or crash. I would strongly\nadvice against it on a production system - same goes for running with\nwrite cache!\n\n//Magnus\n", "msg_date": "Tue, 22 Feb 2005 19:15:13 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" }, { "msg_contents": "Magnus Hagander wrote:\n> You can *never* get above 80 without using write cache, regardless of\n> your OS, if you have a single disk.\n\nWhy? Even with, say, a 15K RPM disk? Or the ability to fsync() multiple \nconcurrently-committing transactions at once?\n\n-Neil\n", "msg_date": "Wed, 23 Feb 2005 15:58:51 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" }, { "msg_contents": "Hi, Magnus & all,\n\nMagnus Hagander schrieb:\n> 20-30 transactionsi s about what you'll get on a single disk on Windows\n> today.\n> We have a patch in testing that will bring this up to about 80.\n> You can *never* get above 80 without using write cache, regardless of\n> your OS, if you have a single disk. You might want to look into wether\n> write cacheing is enabled on your linux box, and disable it. (unless you\n> are using RAID) A lot points towards write cache enabled on your system.\n\nNote that you can get higher rates for the server as a whole when using\nconcurrent transactions (that is, several independend connections\ncommitting at the same time). The commit delay settings should be tuned\naccordingly.\n\n\nMarkus\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com", "msg_date": "Wed, 23 Feb 2005 15:37:48 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" } ]
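Markus's point about concurrent commits can be experimented with directly from a session: commit_delay and commit_siblings control whether the server waits briefly so that several concurrently committing transactions share a single WAL flush. The values below are purely illustrative, and the settings only help when several connections really are committing at the same time:

SHOW commit_delay;      -- microseconds to wait before flushing the WAL at commit
SHOW commit_siblings;   -- minimum number of other active transactions before waiting

-- illustrative values only; tune against your own concurrent workload
SET commit_delay = 100;
SET commit_siblings = 5;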
[ { "msg_contents": "Hi all,\nI'm running since one week without use any vacuum full,\nI'm using ony pg_autovacuum. I expect that disk usage will reach\na steady state but is not. PG engine: 7.4.5\n\nExample:\n\nThe message table is touched by pg_autvacuum at least 2 time a day:\n\n$ cat pg_autovacuum.log | grep VACUUM | grep messages\n[2005-02-15 16:41:00 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-16 03:31:47 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-16 12:44:18 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-16 23:26:09 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-17 09:25:41 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-17 19:57:11 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-18 05:38:46 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-18 14:28:55 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-19 02:22:20 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-19 13:43:02 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-20 02:05:40 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-20 14:06:33 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-20 23:54:32 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-21 08:57:20 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-21 19:24:53 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-22 05:25:03 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-22 15:20:39 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n\n\n\nthis is what gave me the vacuum full on that table:\n\n\n# vacuum full verbose messages;\nINFO: vacuuming \"public.messages\"\nINFO: \"messages\": found 77447 removable, 1606437 nonremovable row versions in 69504 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 97 to 2033 bytes long.\nThere were 633541 unused item pointers.\nTotal free space (including removable row versions) is 52819600 bytes.\n1690 pages are or will become empty, including 0 at the end of the table.\n22217 pages containing 51144248 free bytes are potential move destinations.\nCPU 2.39s/0.55u sec elapsed 31.90 sec.\nINFO: index \"idx_type_message\" now contains 1606437 row versions in 7337 pages\nDETAIL: 77447 index row versions were removed.\n446 index pages have been deleted, 446 are currently reusable.\nCPU 0.33s/0.75u sec elapsed 16.56 sec.\nINFO: index \"messages_pkey\" now contains 1606437 row versions in 5628 pages\nDETAIL: 77447 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.15s/0.80u sec elapsed 4.22 sec.\nINFO: index \"idx_service_message\" now contains 1606437 row versions in 6867 pages\nDETAIL: 77447 index row versions were removed.\n499 index pages have been deleted, 499 are currently reusable.\nCPU 0.67s/0.99u sec elapsed 8.85 sec.\nINFO: index \"idx_service_message_expired\" now contains 135313 row versions in 3308 pages\nDETAIL: 77375 index row versions were removed.\n512 index pages have been deleted, 512 are currently reusable.\nCPU 0.21s/0.32u sec elapsed 6.88 sec.\nINFO: index \"idx_expired_messages\" now contains 1606437 row versions in 7070 pages\nDETAIL: 77447 index row versions were removed.\n448 index pages have been deleted, 448 are currently reusable.\nCPU 0.34s/1.10u sec elapsed 29.77 sec.\nINFO: index \"idx_messages_target\" now contains 1606437 row versions 
in 14480 pages\nDETAIL: 77447 index row versions were removed.\n643 index pages have been deleted, 643 are currently reusable.\nCPU 0.84s/1.61u sec elapsed 25.72 sec.\nINFO: index \"idx_messages_source\" now contains 1606437 row versions in 10635 pages\nDETAIL: 77447 index row versions were removed.\n190 index pages have been deleted, 190 are currently reusable.\nCPU 0.68s/1.04u sec elapsed 31.96 sec.\nINFO: \"messages\": moved 55221 row versions, truncated 69504 to 63307 pages\nDETAIL: CPU 5.46s/25.14u sec elapsed 280.20 sec.\nINFO: index \"idx_type_message\" now contains 1606437 row versions in 7337 pages\nDETAIL: 55221 index row versions were removed.\n2304 index pages have been deleted, 2304 are currently reusable.\nCPU 0.42s/0.49u sec elapsed 53.35 sec.\nINFO: index \"messages_pkey\" now contains 1606437 row versions in 5628 pages\nDETAIL: 55221 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.31s/0.34u sec elapsed 13.27 sec.\nINFO: index \"idx_service_message\" now contains 1606437 row versions in 6867 pages\nDETAIL: 55221 index row versions were removed.\n2024 index pages have been deleted, 2024 are currently reusable.\nCPU 0.51s/0.57u sec elapsed 16.60 sec.\nINFO: index \"idx_service_message_expired\" now contains 135313 row versions in 3308 pages\nDETAIL: 41411 index row versions were removed.\n1918 index pages have been deleted, 1918 are currently reusable.\nCPU 0.30s/0.31u sec elapsed 36.01 sec.\nINFO: index \"idx_expired_messages\" now contains 1606437 row versions in 7064 pages\nDETAIL: 55221 index row versions were removed.\n2166 index pages have been deleted, 2166 are currently reusable.\nCPU 0.94s/0.58u sec elapsed 34.97 sec.\nINFO: index \"idx_messages_target\" now contains 1606437 row versions in 14480 pages\nDETAIL: 55221 index row versions were removed.\n3404 index pages have been deleted, 3404 are currently reusable.\nCPU 0.99s/1.03u sec elapsed 50.53 sec.\nINFO: index \"idx_messages_source\" now contains 1606437 row versions in 10635 pages\nDETAIL: 55221 index row versions were removed.\n1809 index pages have been deleted, 1809 are currently reusable.\nCPU 0.84s/1.04u sec elapsed 35.44 sec.\nINFO: vacuuming \"pg_toast.pg_toast_18376\"\nINFO: \"pg_toast_18376\": found 0 removable, 1 nonremovable row versions in 1 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 1976 to 1976 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 6192 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n1 pages containing 6192 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: index \"pg_toast_18376_index\" now contains 1 row versions in 2 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.03 sec.\nINFO: \"pg_toast_18376\": moved 0 row versions, truncated 1 to 1 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\n\n\n\n\npg_class after the vacuum full for that table\n\n relfilenode | relname | relpages | reltuples\n-------------+----------+----------+-------------\n 18376 | messages | 63307 | 1.60644e+06\n\n\npg_class before the vacuum full for that table\n\n relfilenode | relname | relpages | reltuples\n-------------+----------+----------+-------------\n 18376 | messages | 69472 | 1.60644e+06\n\n\n\nhow was possible accumulate 6000 pages wasted on that table?\n\nBetween these two calls:\n[2005-02-22 05:25:03 CET] 
Performing: VACUUM ANALYZE \"public\".\"messages\"\n[2005-02-22 15:20:39 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n\n1768 rows where inserted, and I had 21578 updated for that rows ( each\nrow have a counter incremented for each update ) so that table is not\nso heavy updated\n\nI'm running autovacuum with these parameters:\npg_autovacuum -d 3 -v 300 -V 0.1 -S 0.8 -a 200 -A 0.1 -D\n\n\nshall I run it in a more aggressive way ? May be I'm missing\nsomething.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n", "msg_date": "Tue, 22 Feb 2005 20:47:27 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "is pg_autovacuum so effective ?" }, { "msg_contents": "Gaetano Mendola wrote:\n\n>pg_class after the vacuum full for that table\n>\n> relfilenode | relname | relpages | reltuples\n>-------------+----------+----------+-------------\n> 18376 | messages | 63307 | 1.60644e+06\n>\n>\n>pg_class before the vacuum full for that table\n>\n> relfilenode | relname | relpages | reltuples\n>-------------+----------+----------+-------------\n> 18376 | messages | 69472 | 1.60644e+06\n>\n>\n>\n>how was possible accumulate 6000 pages wasted on that table?\n>\n>Between these two calls:\n>[2005-02-22 05:25:03 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n>[2005-02-22 15:20:39 CET] Performing: VACUUM ANALYZE \"public\".\"messages\"\n>\n>1768 rows where inserted, and I had 21578 updated for that rows ( each\n>row have a counter incremented for each update ) so that table is not\n>so heavy updated\n>\n>I'm running autovacuum with these parameters:\n>pg_autovacuum -d 3 -v 300 -V 0.1 -S 0.8 -a 200 -A 0.1 -D\n>\n>\n>shall I run it in a more aggressive way ? May be I'm missing\n>something.\n>\n\nWell without thinking too much, I would first ask about your FSM \nsettings? If they aren't big enought that will cause bloat. Try \nbumping your FSM settings and then see if you reach steady state.\n", "msg_date": "Tue, 22 Feb 2005 15:58:01 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Gaetano Mendola <[email protected]> writes:\n> I'm using ony pg_autovacuum. I expect that disk usage will reach\n> a steady state but is not. PG engine: 7.4.5\n\nOne data point doesn't prove that you're not at a steady state.\n\n> # vacuum full verbose messages;\n> INFO: vacuuming \"public.messages\"\n> INFO: \"messages\": found 77447 removable, 1606437 nonremovable row versions in 69504 pages\n> ...\n> INFO: \"messages\": moved 55221 row versions, truncated 69504 to 63307 pages\n\n10% overhead sounds fairly reasonable to me. How does that compare to\nthe amount of updating you do on the table --- ie, do you turn over 10%\nof the table in a day?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Feb 2005 16:28:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ? " }, { "msg_contents": "Tom Lane wrote:\n> Gaetano Mendola <[email protected]> writes:\n> \n>>I'm using ony pg_autovacuum. I expect that disk usage will reach\n>>a steady state but is not. 
PG engine: 7.4.5\n> \n> \n> One data point doesn't prove that you're not at a steady state.\n\nI do a graph about my disk usage and it's a ramp since one week,\nI'll continue to wait in order to see if it will decrease.\nI was expecting the steady state at something like 4 GB\n( after a full vacuum and reindex ) + 10 % = 4.4 GB\nI'm at 4.6 GB and increasing. I'll see how it will continue.\n\n>># vacuum full verbose messages;\n>>INFO: vacuuming \"public.messages\"\n>>INFO: \"messages\": found 77447 removable, 1606437 nonremovable row versions in 69504 pages\n>>...\n>>INFO: \"messages\": moved 55221 row versions, truncated 69504 to 63307 pages\n> \n> \n> 10% overhead sounds fairly reasonable to me. How does that compare to\n> the amount of updating you do on the table --- ie, do you turn over 10%\n> of the table in a day?\n\nLess, that table have 1.6 milion rows, and I insert 2000 rows in a day\nwith almost ~ 40000 update in one day. So it's something like: 2.5 %\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n", "msg_date": "Wed, 23 Feb 2005 02:03:10 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Matthew T. O'Connor wrote:\n\n> Well without thinking too much, I would first ask about your FSM\n> settings? If they aren't big enought that will cause bloat. Try\n> bumping your FSM settings and then see if you reach steady state.\n\nFSM settings are big enough:\n\n max_fsm_pages | 2000000\n max_fsm_relations | 1000\n\nat least after a vacuum full I see that these numbers are an overkill...\n\n\n\n\nREgards\nGaetano Mendola\n\n\n\n\n\n\n\n", "msg_date": "Wed, 23 Feb 2005 02:05:03 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Gaetano Mendola <[email protected]> writes:\n\n> Tom Lane wrote:\n>> Gaetano Mendola <[email protected]> writes:\n>> \n>>>I'm using ony pg_autovacuum. I expect that disk usage will reach\n>>>a steady state but is not. PG engine: 7.4.5\n>> \n>> \n>> One data point doesn't prove that you're not at a steady state.\n>\n> I do a graph about my disk usage and it's a ramp since one week,\n> I'll continue to wait in order to see if it will decrease.\n> I was expecting the steady state at something like 4 GB\n> ( after a full vacuum and reindex ) + 10 % = 4.4 GB\n> I'm at 4.6 GB and increasing. I'll see how it will continue.\n\nYou probably want for the \"experiment\" to last more than a week.\n\nAfter all, it might actually be that with your usage patterns, that\ntable would stabilize at 15% \"overhead,\" and that might take a couple\nor three weeks.\n\nUnless it's clear that it's growing perilously quickly, just leave it\nalone so that there's actually some possibility of reaching an\nequilibrium. Any time you \"VACUUM FULL\" it, that _destroys_ any\nexperimental results or any noticeable patterns, and it guarantees\nthat you'll see \"seemingly perilous growth\" for a while.\n\nAnd if the table is _TRULY_ growing \"perilously quickly,\" then it is\nlikely that you should add in some scheduled vacuums on the table.\nNot VACUUM FULLs; just plain VACUUMs.\n\nI revised cron scripts yet again today to do hourly and \"4x/day\"\nvacuums of certain tables in some of our systems where we know they\nneed the attention. 
I didn't schedule any VACUUM FULLs; it's\nunnecessary, and would lead directly to system outages, which is\ntotally unacceptable.\n-- \n\"cbbrowne\",\"@\",\"ca.afilias.info\"\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 673-4124 (land)\n", "msg_date": "Wed, 23 Feb 2005 00:31:58 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Gaetano Mendola <[email protected]> writes:\n> Matthew T. O'Connor wrote:\n>\n>> Well without thinking too much, I would first ask about your FSM\n>> settings? If they aren't big enought that will cause bloat. Try\n>> bumping your FSM settings and then see if you reach steady state.\n>\n> FSM settings are big enough:\n>\n> max_fsm_pages | 2000000\n> max_fsm_relations | 1000\n>\n> at least after a vacuum full I see that these numbers are an overkill...\n\nWhen you do a VACUUM FULL, the FSM is made irrelevant because VACUUM\nFULL takes the time to reclaim all possible space without resorting to\n_any_ use of the FSM.\n\nIf you VACUUM FULL, then it's of little value to bother having a free\nspace map because you're obviating the need to use it.\n\nIn any case, the FSM figures you get out of a VACUUM are only really\nmeaningful if you're moving towards the \"equilibrium point\" where the\nFSM is large enough to cope with the growth between VACUUM cycles.\nVACUUM FULL pushes the system away from equilibrium, thereby making\nFSM estimates less useful.\n-- \n\"cbbrowne\",\"@\",\"ca.afilias.info\"\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 673-4124 (land)\n", "msg_date": "Wed, 23 Feb 2005 00:35:26 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Christopher Browne wrote:\n> Gaetano Mendola <[email protected]> writes:\n> \n> \n>>Tom Lane wrote:\n>>\n>>>Gaetano Mendola <[email protected]> writes:\n>>>\n>>>\n>>>>I'm using ony pg_autovacuum. I expect that disk usage will reach\n>>>>a steady state but is not. PG engine: 7.4.5\n>>>\n>>>\n>>>One data point doesn't prove that you're not at a steady state.\n>>\n>>I do a graph about my disk usage and it's a ramp since one week,\n>>I'll continue to wait in order to see if it will decrease.\n>>I was expecting the steady state at something like 4 GB\n>>( after a full vacuum and reindex ) + 10 % = 4.4 GB\n>>I'm at 4.6 GB and increasing. I'll see how it will continue.\n> \n> \n> You probably want for the \"experiment\" to last more than a week.\n> \n> After all, it might actually be that with your usage patterns, that\n> table would stabilize at 15% \"overhead,\" and that might take a couple\n> or three weeks.\n> \n> Unless it's clear that it's growing perilously quickly, just leave it\n> alone so that there's actually some possibility of reaching an\n> equilibrium. Any time you \"VACUUM FULL\" it, that _destroys_ any\n> experimental results or any noticeable patterns, and it guarantees\n> that you'll see \"seemingly perilous growth\" for a while.\n> \n> And if the table is _TRULY_ growing \"perilously quickly,\" then it is\n> likely that you should add in some scheduled vacuums on the table.\n> Not VACUUM FULLs; just plain VACUUMs.\n> \n> I revised cron scripts yet again today to do hourly and \"4x/day\"\n> vacuums of certain tables in some of our systems where we know they\n> need the attention. 
I didn't schedule any VACUUM FULLs; it's\n> unnecessary, and would lead directly to system outages, which is\n> totally unacceptable.\n\nYes, I'm in this direction too.\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n", "msg_date": "Wed, 23 Feb 2005 17:30:21 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Christopher Browne wrote:\n\n>Gaetano Mendola <[email protected]> writes:\n> \n>\n>>I do a graph about my disk usage and it's a ramp since one week,\n>>I'll continue to wait in order to see if it will decrease.\n>>I was expecting the steady state at something like 4 GB\n>>( after a full vacuum and reindex ) + 10 % = 4.4 GB\n>>I'm at 4.6 GB and increasing. I'll see how it will continue.\n>> \n>>\n>\n>You probably want for the \"experiment\" to last more than a week.\n>\n>After all, it might actually be that with your usage patterns, that\n>table would stabilize at 15% \"overhead,\" and that might take a couple\n>or three weeks.\n>\n>Unless it's clear that it's growing perilously quickly, just leave it\n>alone so that there's actually some possibility of reaching an\n>equilibrium. Any time you \"VACUUM FULL\" it, that _destroys_ any\n>experimental results or any noticeable patterns, and it guarantees\n>that you'll see \"seemingly perilous growth\" for a while.\n>\n>And if the table is _TRULY_ growing \"perilously quickly,\" then it is\n>likely that you should add in some scheduled vacuums on the table.\n>Not VACUUM FULLs; just plain VACUUMs.\n>\n>I revised cron scripts yet again today to do hourly and \"4x/day\"\n>vacuums of certain tables in some of our systems where we know they\n>need the attention. I didn't schedule any VACUUM FULLs; it's\n>unnecessary, and would lead directly to system outages, which is\n>totally unacceptable.\n> \n>\n\nChris, is this in addition to pg_autovacuum? Or do you not use \npg_autovacuum at all?, and if so why not?\n\n\n", "msg_date": "Wed, 23 Feb 2005 14:18:08 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Matthew T. O'Connor wrote:\n> Christopher Browne wrote:\n> \n>> Gaetano Mendola <[email protected]> writes:\n>> \n>>\n>>> I do a graph about my disk usage and it's a ramp since one week,\n>>> I'll continue to wait in order to see if it will decrease.\n>>> I was expecting the steady state at something like 4 GB\n>>> ( after a full vacuum and reindex ) + 10 % = 4.4 GB\n>>> I'm at 4.6 GB and increasing. I'll see how it will continue.\n>>> \n>>\n>>\n>> You probably want for the \"experiment\" to last more than a week.\n>>\n>> After all, it might actually be that with your usage patterns, that\n>> table would stabilize at 15% \"overhead,\" and that might take a couple\n>> or three weeks.\n>>\n>> Unless it's clear that it's growing perilously quickly, just leave it\n>> alone so that there's actually some possibility of reaching an\n>> equilibrium. 
Any time you \"VACUUM FULL\" it, that _destroys_ any\n>> experimental results or any noticeable patterns, and it guarantees\n>> that you'll see \"seemingly perilous growth\" for a while.\n>>\n>> And if the table is _TRULY_ growing \"perilously quickly,\" then it is\n>> likely that you should add in some scheduled vacuums on the table.\n>> Not VACUUM FULLs; just plain VACUUMs.\n>>\n>> I revised cron scripts yet again today to do hourly and \"4x/day\"\n>> vacuums of certain tables in some of our systems where we know they\n>> need the attention. I didn't schedule any VACUUM FULLs; it's\n>> unnecessary, and would lead directly to system outages, which is\n>> totally unacceptable.\n>> \n>>\n> \n> Chris, is this in addition to pg_autovacuum? Or do you not use\n> pg_autovacuum at all?, and if so why not?\n\nI have the same requirement too. Actually pg_autovacuum can not be\ninstructed \"per table\" so some time the global settings are not good\nenough. I have a table of logs with 6 milions rows ( 3 years logs )\nI insert on that page ~ 6000 rows for day. I'm running pg_autovacuum\nwith setting to ANALYZE or VACUUM table if the 10% is touched.\n\nWith this setting pg_autovacuum will analyze that table each 3 months!!!\n\nSo I need to analyze and/or vacuum it manually.\n\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 24 Feb 2005 11:58:51 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": true, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Hi, Gaetano,\n\nGaetano Mendola schrieb:\n\n> I have the same requirement too. Actually pg_autovacuum can not be\n> instructed \"per table\" so some time the global settings are not good\n> enough. I have a table of logs with 6 milions rows ( 3 years logs )\n> I insert on that page ~ 6000 rows for day. I'm running pg_autovacuum\n> with setting to ANALYZE or VACUUM table if the 10% is touched.\n> With this setting pg_autovacuum will analyze that table each 3 months!!!\n\nIf you have only inserts, and only so few on a large table, you do not\nneed to vacuum such often. Not to reclaim space, only to prevent\ntransaction ID wraparound (which is ensured by pg_autovacuum).\n\nAnd if the data distribution does not change, frequently calling ANALYZE\ndoes not help much, either.\n\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com", "msg_date": "Thu, 24 Feb 2005 13:13:09 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMarkus Schaber wrote:\n> Hi, Gaetano,\n> \n> Gaetano Mendola schrieb:\n> \n> \n>>I have the same requirement too. Actually pg_autovacuum can not be\n>>instructed \"per table\" so some time the global settings are not good\n>>enough. I have a table of logs with 6 milions rows ( 3 years logs )\n>>I insert on that page ~ 6000 rows for day. I'm running pg_autovacuum\n>>with setting to ANALYZE or VACUUM table if the 10% is touched.\n>>With this setting pg_autovacuum will analyze that table each 3 months!!!\n> \n> \n> If you have only inserts, and only so few on a large table, you do not\n> need to vacuum such often. 
Not to reclaim space, only to prevent\n> transaction ID wraparound (which is ensured by pg_autovacuum).\n> \n> And if the data distribution does not change, frequently calling ANALYZE\n> does not help much, either.\n\nYes, I'm aware about it indeed I need the analyze because usualy I do on that\ntable select regarding last 24 ours so need to analyze it in order to\ncollect the statistics for this period.\nBeside that I tried to partition that table, I used both tecnique on\nmy knowledge\n\n1) A view with UNION ALL on all tables collecting these logs\n2) Using inheritance\n\nand both cases are working in theory but in practice are not ( the index scan\nis lost as soon you use this view/table inside others views or joining them)\n\nI heard that next version of pg_autovacuum can be instructed \"per table\";\nis it true ?\n\n\nRegards\nGaetano Mendola\n\n\n\n\n\n\n\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.5 (MingW32)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFCHlVu7UpzwH2SGd4RAqQfAKCatX9qbf5fmTN7RbapWj6BgAcwQgCfRy2R\nApeFl9jezm/4YyVN/4fY3Jg=\n=wBIK\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 24 Feb 2005 23:30:06 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Hi, Gaetano,\n\nGaetano Mendola schrieb:\n\n> Yes, I'm aware about it indeed I need the analyze because usualy I do on that\n> table select regarding last 24 ours so need to analyze it in order to\n> collect the statistics for this period.\n\nIf you tend to do lots of queries for the last 24 hours, and there is\nonly a very small percentage of such young rows, partial indices could\nbe helpful.\n\nYou could include all rows that are not older than 24 hours, and\nrecreate them via cron script daily, so they grow from 24 to 48 hours\nbetween recreations. To avoid a gap in recreation, you could first\ncreate the new index, and then drop the old one, using alternating names.\n\nBTW, a small question for the gurus: does postgres make use of other\nindices when creating partial indices?\n\nHTH,\nMarkus\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com", "msg_date": "Fri, 25 Feb 2005 17:28:58 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Gaetano Mendola wrote:\n\n>Yes, I'm aware about it indeed I need the analyze because usualy I do on that\n>table select regarding last 24 ours so need to analyze it in order to\n>collect the statistics for this period.\n>Beside that I tried to partition that table, I used both tecnique on\n>my knowledge\n>\n>1) A view with UNION ALL on all tables collecting these logs\n>2) Using inheritance\n>\n>and both cases are working in theory but in practice are not ( the index scan\n>is lost as soon you use this view/table inside others views or joining them)\n>\n>I heard that next version of pg_autovacuum can be instructed \"per table\";\n>is it true ?\n>\n\nThe version of pg_autovacuum that I submitted for 8.0 could be \ninstructed \"per table\" but it didn't make the cut. Aside from moved out \nof contrib and integrated into the backend, per table autovacuum \nsettings is probably the next highest priority.\n\n", "msg_date": "Mon, 28 Feb 2005 09:30:58 -0500", "msg_from": "\"Matthew T. 
O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "Hi, Matthew,\n\nMatthew T. O'Connor schrieb:\n\n> The version of pg_autovacuum that I submitted for 8.0 could be\n> instructed \"per table\" but it didn't make the cut. Aside from moved out\n> of contrib and integrated into the backend, per table autovacuum\n> settings is probably the next highest priority.\n\nWhat was the reason for non-acceptance?\n\nIs it available as a standalone project?\n\n\nMarkus\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 z�rich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n", "msg_date": "Mon, 28 Feb 2005 16:46:34 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" }, { "msg_contents": "On Mon, Feb 28, 2005 at 16:46:34 +0100,\n Markus Schaber <[email protected]> wrote:\n> Hi, Matthew,\n> \n> Matthew T. O'Connor schrieb:\n> \n> > The version of pg_autovacuum that I submitted for 8.0 could be\n> > instructed \"per table\" but it didn't make the cut. Aside from moved out\n> > of contrib and integrated into the backend, per table autovacuum\n> > settings is probably the next highest priority.\n> \n> What was the reason for non-acceptance?\n\nIt wasn't reviewed until very close to freeze due to people who could do\nthe review being busy and then there wasn't enough time to iron some things\nout before the freeze.\n", "msg_date": "Mon, 28 Feb 2005 12:38:18 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: is pg_autovacuum so effective ?" } ]
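For the per-table scheduling Gaetano is after, the pieces discussed in this thread can be written out as plain SQL and run from cron. The log table and its timestamp column are never named in the thread, so "logs" and "logged_at" below are placeholders, and the alternating index names follow Markus's create-new-then-drop-old scheme:

-- plain (non-FULL) vacuum of just the hot table, suitable for an hourly cron job
VACUUM ANALYZE public.messages;

-- daily rotation of a partial index covering roughly the last 24-48 hours;
-- table name, column name and the date literal are placeholders
CREATE INDEX logs_recent_b ON logs (logged_at) WHERE logged_at >= '2005-02-24';
DROP INDEX logs_recent_a;   -- the partial index built the previous day
ANALYZE logs;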
[ { "msg_contents": "The following query plans both result from the very same query run on\ndifferent servers. They obviously differ drastically, but I don't why\nas one db is a slonied copy of the other with identical postgresql.conf\nfiles.\nBoth databases are vacuum analyzed nightly.\n\nHere is the query:\n------------------------------------------------------------------------\nEXPLAIN ANALYZE\nSELECT COUNT(DISTINCT(t.id)) FROM (\n SELECT m_object_paper.id\n FROM m_object_paper, m_assignment, m_class,\nr_comment_rubric_user_object\n WHERE m_object_paper.assignment=m_assignment.id\n AND m_assignment.class=m_class.id\n AND m_class.account IN (SELECT * FROM children_of(32660) as acts)\n AND m_object_paper.id = r_comment_rubric_user_object.objectid\n UNION\n SELECT m_object_paper.id\n FROM m_object_paper, m_assignment, m_class, r_quickmark_user_object\n WHERE m_object_paper.assignment=m_assignment.id\n AND m_assignment.class=m_class.id\n AND m_class.account IN (SELECT * FROM children_of(32660) acts)\n AND m_object_paper.id = r_quickmark_user_object.objectid)as t;\n------------------------------------------------------------------------\n\n-------------------\nDB1 QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=314616.49..314616.49 rows=1 width=4) (actual\ntime=853.483..853.484 rows=1 loops=1)\n -> Subquery Scan t (cost=314568.97..314609.70 rows=2715 width=4)\n(actual time=848.574..852.912 rows=354 loops=1)\n -> Unique (cost=314568.97..314582.55 rows=2715 width=4)\n(actual time=848.568..852.352 rows=354 loops=1)\n -> Sort (cost=314568.97..314575.76 rows=2715 width=4)\n(actual time=848.565..850.264 rows=2428 loops=1)\n Sort Key: id\n -> Append (cost=153181.39..314414.12 rows=2715\nwidth=4) (actual time=224.984..844.714 rows=2428 loops=1)\n -> Subquery Scan \"*SELECT* 1\"\n(cost=153181.39..159900.66 rows=2250 width=4) (actual\ntime=224.981..700.687 rows=2116 loops=1)\n -> Hash Join\n(cost=153181.39..159878.16 rows=2250 width=4) (actual\ntime=224.975..696.639 rows=2116 loops=1)\n Hash Cond: (\"outer\".objectid =\n\"inner\".id)\n -> Seq Scan on\nr_comment_rubric_user_object (cost=0.00..5144.18 rows=306018 width=4)\n(actual time=0.021..405.881 rows=306392 loops=1)\n -> Hash\n(cost=153072.40..153072.40 rows=43595 width=4) (actual\ntime=32.311..32.311 rows=0 loops=1)\n -> Nested Loop\n(cost=15.00..153072.40 rows=43595 width=4) (actual time=0.554..29.762\nrows=2033 loops=1)\n -> Nested Loop\n(cost=15.00..16071.65 rows=3412 width=4) (actual time=0.512..3.657\nrows=180 loops=1)\n -> Nested\nLoop (cost=15.00..3769.73 rows=1666 width=4) (actual time=0.452..0.943\nrows=50 loops=1)\n ->\nHashAggregate (cost=15.00..15.00 rows=200 width=4) (actual\ntime=0.388..0.394 rows=1 loops=1)\n ->\n Function Scan on children_of acts (cost=0.00..12.50 rows=1000\nwidth=4) (actual time=0.376..0.377 rows=1 loops=1)\n ->\nIndex Scan using m_class_account_idx on m_class (cost=0.00..18.67\nrows=8 width=8) (actual time=0.057..0.416 rows=50 loops=1)\n\nIndex Cond: (m_class.account = \"outer\".acts)\n -> Index Scan\nusing m_assignment_class_idx on m_assignment (cost=0.00..7.25 rows=11\nwidth=8) (actual time=0.023..0.043 rows=4 loops=50)\n Index\nCond: (m_assignment.\"class\" = \"outer\".id)\n -> Index Scan using\nm_object_paper_assignment_idx on m_object_paper (cost=0.00..39.24\nrows=73 width=8) (actual time=0.026..0.118 rows=11 loops=180)\n Index 
Cond:\n(m_object_paper.\"assignment\" = \"outer\".id)\n -> Subquery Scan \"*SELECT* 2\"\n(cost=153181.39..154513.46 rows=465 width=4) (actual\ntime=54.883..140.747 rows=312 loops=1)\n -> Hash Join\n(cost=153181.39..154508.81 rows=465 width=4) (actual\ntime=54.875..140.161 rows=312 loops=1)\n Hash Cond: (\"outer\".objectid =\n\"inner\".id)\n -> Seq Scan on\nr_quickmark_user_object (cost=0.00..1006.85 rows=63185 width=4)\n(actual time=0.007..70.446 rows=63268 loops=1)\n -> Hash\n(cost=153072.40..153072.40 rows=43595 width=4) (actual\ntime=17.633..17.633 rows=0 loops=1)\n -> Nested Loop\n(cost=15.00..153072.40 rows=43595 width=4) (actual time=0.549..15.186\nrows=2033 loops=1)\n -> Nested Loop\n(cost=15.00..16071.65 rows=3412 width=4) (actual time=0.515..2.406\nrows=180 loops=1)\n -> Nested\nLoop (cost=15.00..3769.73 rows=1666 width=4) (actual time=0.482..0.792\nrows=50 loops=1)\n ->\nHashAggregate (cost=15.00..15.00 rows=200 width=4) (actual\ntime=0.443..0.449 rows=1 loops=1)\n ->\n Function Scan on children_of acts (cost=0.00..12.50 rows=1000\nwidth=4) (actual time=0.428..0.429 rows=1 loops=1)\n ->\nIndex Scan using m_class_account_idx on m_class (cost=0.00..18.67\nrows=8 width=8) (actual time=0.029..0.219 rows=50 loops=1)\n\nIndex Cond: (m_class.account = \"outer\".acts)\n -> Index Scan\nusing m_assignment_class_idx on m_assignment (cost=0.00..7.25 rows=11\nwidth=8) (actual time=0.013..0.023 rows=4 loops=50)\n Index\nCond: (m_assignment.\"class\" = \"outer\".id)\n -> Index Scan using\nm_object_paper_assignment_idx on m_object_paper (cost=0.00..39.24\nrows=73 width=8) (actual time=0.011..0.048 rows=11 loops=180)\n Index Cond:\n(m_object_paper.\"assignment\" = \"outer\".id)\n Total runtime: 854.101 ms\n\n\n\n\n-------------------\nDB2 QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=431500.82..431500.82 rows=1 width=4) (actual\ntime=161604.563..161604.568 rows=1 loops=1)\n -> Subquery Scan t (cost=431025.16..431432.86 rows=27180 width=4)\n(actual time=161571.533..161602.095 rows=354 loops=1)\n -> Unique (cost=431025.16..431161.06 rows=27180 width=4)\n(actual time=161571.515..161598.311 rows=354 loops=1)\n -> Sort (cost=431025.16..431093.11 rows=27180 width=4)\n(actual time=161571.502..161583.783 rows=2428 loops=1)\n Sort Key: id\n -> Append (cost=203789.85..429023.32 rows=27180\nwidth=4) (actual time=79513.023..161555.122 rows=2428 loops=1)\n -> Subquery Scan \"*SELECT* 1\"\n(cost=203789.85..216527.64 rows=22528 width=4) (actual\ntime=79513.012..82516.102 rows=2116 loops=1)\n -> Hash Join\n(cost=203789.85..216302.36 rows=22528 width=4) (actual\ntime=79512.998..82493.092 rows=2116 loops=1)\n Hash Cond: (\"outer\".objectid =\n\"inner\".id)\n -> Seq Scan on\nr_comment_rubric_user_object (cost=0.00..5133.34 rows=306034 width=4)\n(actual time=0.045..1769.838 rows=306390 loops=1)\n -> Hash\n(cost=201205.35..201205.35 rows=436601 width=4) (actual\ntime=78627.830..78627.830 rows=0 loops=1)\n -> Hash Join\n(cost=25006.81..201205.35 rows=436601 width=4) (actual\ntime=9572.704..78612.859 rows=2033 loops=1)\n Hash Cond:\n(\"outer\".\"assignment\" = \"inner\".id)\n -> Seq Scan on\nm_object_paper (cost=0.00..142176.19 rows=5931219 width=8) (actual\ntime=0.085..36433.616 rows=5934777 loops=1)\n -> Hash\n(cost=24921.40..24921.40 rows=34160 width=4) (actual\ntime=8636.897..8636.897 rows=0 loops=1)\n -> Hash 
Join\n(cost=8773.25..24921.40 rows=34160 width=4) (actual\ntime=3013.277..8635.612 rows=180 loops=1)\n Hash\nCond: (\"outer\".\"class\" = \"inner\".id)\n -> Seq\nScan on m_assignment (cost=0.00..13486.37 rows=464037 width=8) (actual\ntime=0.037..2903.799 rows=464639 loops=1)\n -> Hash\n (cost=8731.55..8731.55 rows=16682 width=4) (actual\ntime=2985.051..2985.051 rows=0 loops=1)\n ->\n Hash Join (cost=15.50..8731.55 rows=16682 width=4) (actual\ntime=716.452..2984.651 rows=50 loops=1)\n\n Hash Cond: (\"outer\".account = \"inner\".acts)\n\n -> Seq Scan on m_class (cost=0.00..7416.15 rows=226615 width=8)\n(actual time=0.042..1586.784 rows=226796 loops=1)\n\n -> Hash (cost=15.00..15.00 rows=200 width=4) (actual\ntime=0.548..0.548 rows=0 loops=1)\n\n -> HashAggregate (cost=15.00..15.00 rows=200 width=4)\n(actual time=0.519..0.527 rows=1 loops=1)\n\n -> Function Scan on children_of acts (cost=0.00..12.50\nrows=1000 width=4) (actual time=0.485..0.491 rows=1 loops=1)\n -> Subquery Scan \"*SELECT* 2\"\n(cost=203789.85..212495.68 rows=4652 width=4) (actual\ntime=78431.905..79014.599 rows=312 loops=1)\n -> Hash Join\n(cost=203789.85..212449.16 rows=4652 width=4) (actual\ntime=78431.889..79011.085 rows=312 loops=1)\n Hash Cond: (\"outer\".objectid =\n\"inner\".id)\n -> Seq Scan on\nr_quickmark_user_object (cost=0.00..1006.95 rows=63195 width=4)\n(actual time=0.085..391.887 rows=63268 loops=1)\n -> Hash\n(cost=201205.35..201205.35 rows=436601 width=4) (actual\ntime=78182.649..78182.649 rows=0 loops=1)\n -> Hash Join\n(cost=25006.81..201205.35 rows=436601 width=4) (actual\ntime=9328.018..78167.922 rows=2033 loops=1)\n Hash Cond:\n(\"outer\".\"assignment\" = \"inner\".id)\n -> Seq Scan on\nm_object_paper (cost=0.00..142176.19 rows=5931219 width=8) (actual\ntime=0.052..36243.971 rows=5934777 loops=1)\n -> Hash\n(cost=24921.40..24921.40 rows=34160 width=4) (actual\ntime=8416.317..8416.317 rows=0 loops=1)\n -> Hash Join\n(cost=8773.25..24921.40 rows=34160 width=4) (actual\ntime=2801.934..8415.065 rows=180 loops=1)\n Hash\nCond: (\"outer\".\"class\" = \"inner\".id)\n -> Seq\nScan on m_assignment (cost=0.00..13486.37 rows=464037 width=8) (actual\ntime=0.121..2899.409 rows=464639 loops=1)\n -> Hash\n (cost=8731.55..8731.55 rows=16682 width=4) (actual\ntime=2772.260..2772.260 rows=0 loops=1)\n ->\n Hash Join (cost=15.50..8731.55 rows=16682 width=4) (actual\ntime=674.259..2771.886 rows=50 loops=1)\n\n Hash Cond: (\"outer\".account = \"inner\".acts)\n\n -> Seq Scan on m_class (cost=0.00..7416.15 rows=226615 width=8)\n(actual time=0.049..1430.376 rows=226796 loops=1)\n\n -> Hash (cost=15.00..15.00 rows=200 width=4) (actual\ntime=0.647..0.647 rows=0 loops=1)\n\n -> HashAggregate (cost=15.00..15.00 rows=200 width=4)\n(actual time=0.604..0.613 rows=1 loops=1)\n\n -> Function Scan on children_of acts (cost=0.00..12.50\nrows=1000 width=4) (actual time=0.568..0.574 rows=1 loops=1)\n\nTotal runtime: 161605.867\n\n--------------------------\n--------------------------\nAdditionally, we have a db3 which was originally in agreement w/ db1\nand\nexecuting the more efficient plan. 
However, now it is in agreement with\ndb2\nwith the less efficient, slower plan.\n\nWhat could be causing this?\n\nThanks for your help.\n\n", "msg_date": "22 Feb 2005 14:08:05 -0800", "msg_from": "\"Luke Chambers\" <[email protected]>", "msg_from_op": true, "msg_subject": "Inefficient Query Plans" }, { "msg_contents": "\"Luke Chambers\" <[email protected]> writes:\n> The following query plans both result from the very same query run on\n> different servers. They obviously differ drastically, but I don't why\n> as one db is a slonied copy of the other with identical postgresql.conf\n> files.\n\nThere's an order-of-magnitude difference in the estimated row counts for\nsome of the joins, so it's hardly surprising that different plans would\nbe chosen. Assuming that these are exactly the same Postgres version,\nthe only explanation would be considerably different ANALYZE statistics\nstored in the two databases.\n\n> Both databases are vacuum analyzed nightly.\n\nMaybe you should double-check that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Feb 2005 11:44:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inefficient Query Plans " } ]
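A quick way to follow up on Tom Lane's suggestion is to compare what the planner has recorded on the two servers and then refresh it by hand on the slow one; large differences in relpages/reltuples (or in the pg_stats rows for the join columns) would confirm that the nightly ANALYZE is not doing what is assumed:

-- compare on both servers
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname IN ('m_object_paper', 'm_assignment', 'm_class',
                  'r_comment_rubric_user_object', 'r_quickmark_user_object');

-- refresh statistics by hand on the server producing the bad plan
ANALYZE m_object_paper;
ANALYZE m_assignment;
ANALYZE m_class;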
[ { "msg_contents": "I've got 2 tables defined as follows:\n\nCREATE TABLE \"cluster\"\n(\n id int8 NOT NULL DEFAULT nextval('serial'::text),\n clusterid varchar(255) NOT NULL,\n ...\n CONSTRAINT pk_cluster PRIMARY KEY (id)\n) \n\nCREATE TABLE sensorreport\n(\n id int8 NOT NULL DEFAULT nextval('serial'::text),\n clusterid int8 NOT NULL,\n ...\n CONSTRAINT pk_sensorreport PRIMARY KEY (id),\n CONSTRAINT fk_sensorreport_clusterid FOREIGN KEY (clusterid) REFERENCES\n\"cluster\" (id) ON UPDATE RESTRICT ON DELETE RESTRICT\n) \n\nI've defined an Index on the clusterid field of sensorreport.\n\n\nSo I've run into 2 issues, one a SELECT, the other a DELETE;\n\nSELECT issue:\nSo the following query:\nEXPLAIN ANALYZE select * from sensorreport where clusterid = 25000114;\n\nYields:\n\"Index Scan using idx_sensorreport_clusterid on sensorreport\n(cost=0.00..2.01 rows=1 width=129) (actual time=0.000..0.000 rows=38\nloops=1)\"\n\" Index Cond: (clusterid = 25000114)\"\n\"Total runtime: 0.000 ms\"\n\nHowever, when using a join as follows (in the cluster table id=25000114\nclusterid='clusterid1'):\nEXPLAIN ANALYZE select * from sensorreport as a join cluster as c on c.id =\na.clusterid where c.clusterid = 'clusterid1';\n\nYields:\nHash Join (cost=1.18..566211.51 rows=1071429 width=287) (actual\ntime=150025.000..150025.000 rows=38 loops=1)\n Hash Cond: (\"outer\".clusterid = \"inner\".id)\n -> Seq Scan on sensorreport a (cost=0.00..480496.03 rows=15000003\nwidth=129) (actual time=10.000..126751.000 rows=15000039 loops=1)\n -> Hash (cost=1.18..1.18 rows=1 width=158) (actual time=0.000..0.000\nrows=0 loops=1)\n -> Seq Scan on \"cluster\" c (cost=0.00..1.18 rows=1 width=158)\n(actual time=0.000..0.000 rows=1 loops=1)\n Filter: ((clusterid)::text = 'clusterid1'::text)\nTotal runtime: 150025.000 ms\n\nMy question is can I get the join query to use the\nidx_sensorreport_clusterid index on the sensorreport table?\n\nDELETE issue:\nThe statement:\nEXPLAIN ANALYZE delete from cluster where clusterid='clusterid99'\n\nYields:\n Seq Scan on \"cluster\" (cost=0.00..1.18 rows=1 width=6) (actual\ntime=0.000..0.000 rows=1 loops=1)\n Filter: ((clusterid)::text = 'clusterid99'::text)\n Total runtime: 275988.000 ms\n\nI'm assuming that the length of the delete is because the \"DELETE RESTRICT\"\non the foreign key from sensortable.\nAgain, is there any way to get the delete to use the\nidx_sensorreport_clusterid index?\n", "msg_date": "Tue, 22 Feb 2005 20:45:44 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Joins, Deletes and Indexes" }, { "msg_contents": "[email protected] wrote:\n> I've got 2 tables defined as follows:\n> \n> CREATE TABLE \"cluster\"\n> (\n> id int8 NOT NULL DEFAULT nextval('serial'::text),\n> clusterid varchar(255) NOT NULL,\n> ...\n> CONSTRAINT pk_cluster PRIMARY KEY (id)\n> ) \n> \n> CREATE TABLE sensorreport\n> (\n> id int8 NOT NULL DEFAULT nextval('serial'::text),\n> clusterid int8 NOT NULL,\n> ...\n> CONSTRAINT pk_sensorreport PRIMARY KEY (id),\n> CONSTRAINT fk_sensorreport_clusterid FOREIGN KEY (clusterid) REFERENCES\n> \"cluster\" (id) ON UPDATE RESTRICT ON DELETE RESTRICT\n> ) \n> \n> I've defined an Index on the clusterid field of sensorreport.\n\nLooking further down, perhaps an index on cluster.clusterid too.\n\n> So I've run into 2 issues, one a SELECT, the other a DELETE;\n> \n> SELECT issue:\n> So the following query:\n> EXPLAIN ANALYZE select * from sensorreport where clusterid = 25000114;\n> \n> Yields:\n> \"Index Scan using idx_sensorreport_clusterid on 
sensorreport\n> (cost=0.00..2.01 rows=1 width=129) (actual time=0.000..0.000 rows=38\n> loops=1)\"\n> \" Index Cond: (clusterid = 25000114)\"\n> \"Total runtime: 0.000 ms\"\n> \n> However, when using a join as follows (in the cluster table id=25000114\n> clusterid='clusterid1'):\n> EXPLAIN ANALYZE select * from sensorreport as a join cluster as c on c.id =\n> a.clusterid where c.clusterid = 'clusterid1';\n\nYou don't say what version you're using, but older versions of PG took a \nliteral join as a request to plan a query in that order. Try rewriting \nit without the \"join\" keyword and see if the plan alters.\n\n> Yields:\n> Hash Join (cost=1.18..566211.51 rows=1071429 width=287) (actual\n> time=150025.000..150025.000 rows=38 loops=1)\n> Hash Cond: (\"outer\".clusterid = \"inner\".id)\n> -> Seq Scan on sensorreport a (cost=0.00..480496.03 rows=15000003\n> width=129) (actual time=10.000..126751.000 rows=15000039 loops=1)\n> -> Hash (cost=1.18..1.18 rows=1 width=158) (actual time=0.000..0.000\n> rows=0 loops=1)\n> -> Seq Scan on \"cluster\" c (cost=0.00..1.18 rows=1 width=158)\n> (actual time=0.000..0.000 rows=1 loops=1)\n> Filter: ((clusterid)::text = 'clusterid1'::text)\n> Total runtime: 150025.000 ms\n> \n> My question is can I get the join query to use the\n> idx_sensorreport_clusterid index on the sensorreport table?\n\nThe only reason to use the index on sensorreport is if it isn't going to \nmatch many rows. That means we want to run the restriction on \n\"clisterid1\" first, which suggests you want that index on table cluster.\n\n> DELETE issue:\n> The statement:\n> EXPLAIN ANALYZE delete from cluster where clusterid='clusterid99'\n> \n> Yields:\n> Seq Scan on \"cluster\" (cost=0.00..1.18 rows=1 width=6) (actual\n> time=0.000..0.000 rows=1 loops=1)\n> Filter: ((clusterid)::text = 'clusterid99'::text)\n> Total runtime: 275988.000 ms\n> \n> I'm assuming that the length of the delete is because the \"DELETE RESTRICT\"\n> on the foreign key from sensortable.\n> Again, is there any way to get the delete to use the\n> idx_sensorreport_clusterid index?\n\nNo, because this is the cluster table, not sensorreport :-)\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 23 Feb 2005 08:39:47 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins, Deletes and Indexes" } ]
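Richard's two suggestions written out as SQL, with an invented index name. A later message in the archive notes that cluster holds only 11 rows, so the new index may turn out to be irrelevant, and rewriting without the JOIN keyword mainly mattered on older releases where explicit join syntax constrained the plan, but both are cheap experiments:

-- index name is invented; the column is the one Richard points at
CREATE INDEX idx_cluster_clusterid ON "cluster" (clusterid);
ANALYZE "cluster";

-- the same query written without the explicit JOIN keyword
EXPLAIN ANALYZE
SELECT *
FROM sensorreport a, "cluster" c
WHERE c.id = a.clusterid
  AND c.clusterid = 'clusterid1';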
[ { "msg_contents": "Hi,\n\nI changed fsync to false. It took 8 minutes to restore the full database.\nThat is 26 times faster than before. :-/ (aprox. 200 tps)\nWith background writer it took 12 minutes. :-(\n\nThe funny thing is, I had a VMWARE emulation on the same Windows mashine,\nrunning Red Hat, with fsync turned on. It took also 8 minutes to finish.\nProbably the Linux code is better + VMWARE optimises (physical) disk\naccess.(?)\n\nIt seems to me, I need 2 types of operating modes:\n- For bulk loading (database restore) : fsync=false\n- Normal operation fsync=true\n\nAm I right? How can I do it \"elegantly\"?\n\nI Think, it should be a \"performance tuning guide\" in the docomentation.\n(not just explaning the settings) Playing with the settings could be quite\nanoying. \n\nAnyway, thanks for the tips.\n\nBye,\nVig Sándor\n\n\n\n-----Original Message-----\nFrom: Magnus Hagander [mailto:[email protected]]\nSent: Tuesday, February 22, 2005 7:15 PM\nTo: Vig, Sandor (G/FI-2); [email protected]\nSubject: RE: [PERFORM] PostgreSQL is extremely slow on Windows\n\n\n\n>I've downloaded the latest release (PostgreSQL 8.0) for windows.\n>Installation was OK, but I have tried to restore a database.\n>It had more than ~100.000 records. Usually I use PostgreSQL\n>under Linux, and it used to be done under 10 minutes.\n>\n>Under W2k und XP it took 3 hours(!) Why is it so slow????\n>\n>The commands I used:\n>\n>Under Linux: (duration: 1 minute)\n>\tpg_dump -D databasename > databasename.db\n>\n>Under Windows: (duration: 3 - 3.5 hours(!))\n>\tpsql databasename < databasename.db >nul\n>\n>It seemed to me, that only 20-30 transactions/sec were\n>writen to the database.\n\n20-30 transactionsi s about what you'll get on a single disk on Windows\ntoday.\nWe have a patch in testing that will bring this up to about 80.\nYou can *never* get above 80 without using write cache, regardless of\nyour OS, if you have a single disk. You might want to look into wether\nwrite cacheing is enabled on your linux box, and disable it. (unless you\nare using RAID) A lot points towards write cache enabled on your system.\n\nIf you need the performance that equals the one with write cache on, you\ncan set fsync=off. But then you will lose the guarantee that your\nmachine will survive an unclean shutdown or crash. I would strongly\nadvice against it on a production system - same goes for running with\nwrite cache!\n\n//Magnus\n\nThe information transmitted is intended only for the person or entity to\nwhich it is addressed and may contain confidential and/or privileged\nmaterial. Any review, retransmission, dissemination or other use of, or\ntaking of any action in reliance upon, this information by persons or\nentities other than the intended recipient is prohibited. If you received\nthis in error, please contact the sender and delete the material from any\ncomputer.\n", "msg_date": "Wed, 23 Feb 2005 11:41:32 +0100", "msg_from": "\"Vig, Sandor (G/FI-2)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" } ]
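One middle ground for the "bulk loading vs. normal operation" split Vig asks about is to leave fsync on and make the restore itself a single transaction, so the server flushes the WAL once per load instead of once per INSERT. This is only a sketch: it assumes the INSERT-style dump produced by pg_dump -D, that the dump does not manage its own transactions, and the file name is simply the one from the original post:

-- run from psql connected to the freshly created target database;
-- ON_ERROR_STOP aborts on the first error instead of committing a partial load
\set ON_ERROR_STOP on
BEGIN;
\i databasename.db
COMMIT;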
[ { "msg_contents": " \n\n> -----Original Message-----\n> From: Richard Huxton [mailto:[email protected]] \n> Sent: Wednesday, February 23, 2005 3:40 AM\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Joins, Deletes and Indexes\n> \n> [email protected] wrote:\n> > I've got 2 tables defined as follows:\n> > \n> > CREATE TABLE \"cluster\"\n> > (\n> > id int8 NOT NULL DEFAULT nextval('serial'::text),\n> > clusterid varchar(255) NOT NULL,\n> > ...\n> > CONSTRAINT pk_cluster PRIMARY KEY (id)\n> > ) \n> > \n> > CREATE TABLE sensorreport\n> > (\n> > id int8 NOT NULL DEFAULT nextval('serial'::text),\n> > clusterid int8 NOT NULL,\n> > ...\n> > CONSTRAINT pk_sensorreport PRIMARY KEY (id),\n> > CONSTRAINT fk_sensorreport_clusterid FOREIGN KEY \n> (clusterid) REFERENCES\n> > \"cluster\" (id) ON UPDATE RESTRICT ON DELETE RESTRICT\n> > ) \n> > \n> > I've defined an Index on the clusterid field of sensorreport.\n> \n> Looking further down, perhaps an index on cluster.clusterid too.\n> \n> > So I've run into 2 issues, one a SELECT, the other a DELETE;\n> > \n> > SELECT issue:\n> > So the following query:\n> > EXPLAIN ANALYZE select * from sensorreport where clusterid \n> = 25000114;\n> > \n> > Yields:\n> > \"Index Scan using idx_sensorreport_clusterid on sensorreport\n> > (cost=0.00..2.01 rows=1 width=129) (actual time=0.000..0.000 rows=38\n> > loops=1)\"\n> > \" Index Cond: (clusterid = 25000114)\"\n> > \"Total runtime: 0.000 ms\"\n> > \n> > However, when using a join as follows (in the cluster table \n> id=25000114\n> > clusterid='clusterid1'):\n> > EXPLAIN ANALYZE select * from sensorreport as a join \n> cluster as c on c.id =\n> > a.clusterid where c.clusterid = 'clusterid1';\n> \n> You don't say what version you're using, but older versions \n> of PG took a \n> literal join as a request to plan a query in that order. Try \n> rewriting \n> it without the \"join\" keyword and see if the plan alters.\n\nI'm using version 8.0 on Windows.\n\n> \n> > Yields:\n> > Hash Join (cost=1.18..566211.51 rows=1071429 width=287) (actual\n> > time=150025.000..150025.000 rows=38 loops=1)\n> > Hash Cond: (\"outer\".clusterid = \"inner\".id)\n> > -> Seq Scan on sensorreport a (cost=0.00..480496.03 \n> rows=15000003\n> > width=129) (actual time=10.000..126751.000 rows=15000039 loops=1)\n> > -> Hash (cost=1.18..1.18 rows=1 width=158) (actual \n> time=0.000..0.000\n> > rows=0 loops=1)\n> > -> Seq Scan on \"cluster\" c (cost=0.00..1.18 \n> rows=1 width=158)\n> > (actual time=0.000..0.000 rows=1 loops=1)\n> > Filter: ((clusterid)::text = 'clusterid1'::text)\n> > Total runtime: 150025.000 ms\n> > \n> > My question is can I get the join query to use the\n> > idx_sensorreport_clusterid index on the sensorreport table?\n> \n> The only reason to use the index on sensorreport is if it \n> isn't going to \n> match many rows. That means we want to run the restriction on \n> \"clisterid1\" first, which suggests you want that index on \n> table cluster.\n\nThe cluster table only has 11 rows, so I'm not sure an index would\nhelp. 
The sensorreport table has 15,000,000 rows so that's why I've\ngot the index there.\n\n> \n> > DELETE issue:\n> > The statement:\n> > EXPLAIN ANALYZE delete from cluster where clusterid='clusterid99'\n> > \n> > Yields:\n> > Seq Scan on \"cluster\" (cost=0.00..1.18 rows=1 width=6) (actual\n> > time=0.000..0.000 rows=1 loops=1)\n> > Filter: ((clusterid)::text = 'clusterid99'::text)\n> > Total runtime: 275988.000 ms\n> > \n> > I'm assuming that the length of the delete is because the \n> \"DELETE RESTRICT\"\n> > on the foreign key from sensortable.\n> > Again, is there any way to get the delete to use the\n> > idx_sensorreport_clusterid index?\n> \n> No, because this is the cluster table, not sensorreport :-)\n\nTrue, but the foreign key constraint on the sensorreport table forces\nPostgres to check if there are any sensorreport's that are currently\nusing this cluster before allowing the cluster to be deleted.\n\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n> \n\nThanks a lot for the reply.\n\nChuck Butkus\nEMC\n", "msg_date": "Wed, 23 Feb 2005 08:06:08 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Joins, Deletes and Indexes" }, { "msg_contents": "[email protected] wrote:\n> \n> The cluster table only has 11 rows, so I'm not sure an index would\n> help. The sensorreport table has 15,000,000 rows so that's why I've\n> got the index there.\n\nAh - only 11?\n\n>>>on the foreign key from sensortable.\n>>>Again, is there any way to get the delete to use the\n>>>idx_sensorreport_clusterid index?\n>>\n>>No, because this is the cluster table, not sensorreport :-)\n> \n> True, but the foreign key constraint on the sensorreport table forces\n> Postgres to check if there are any sensorreport's that are currently\n> using this cluster before allowing the cluster to be deleted.\n\nIf you only have 11 distinct values in the large table then it's \ndebatable whether it's always quicker to use the index. Since your first \nexample (clusterid = 25000114) returned so few rows, I'm guessing that \nsome other values represent a sizeable percentage of the table. That'd \nexplain the difference between PG's estimates and the actual number of \nmatching rows.\n\nYou can try \"SET enable_seqscan =false;\" before running the query and \nsee whether using the index helps things.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 23 Feb 2005 13:42:18 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Joins, Deletes and Indexes" } ]
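The test Richard suggests, spelled out (everything here comes from the thread; the setting is session-local, and RESET puts it back):

    SET enable_seqscan = false;
    EXPLAIN ANALYZE
    SELECT * FROM sensorreport AS a
    JOIN cluster AS c ON c.id = a.clusterid
    WHERE c.clusterid = 'clusterid1';
    RESET enable_seqscan;

If the index plan wins here, the usual knobs to look at next are random_page_cost and the statistics target on sensorreport.clusterid, rather than leaving enable_seqscan off globally.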
[ { "msg_contents": "> > You can *never* get above 80 without using write cache, \n> regardless of \n> > your OS, if you have a single disk.\n> \n> Why? Even with, say, a 15K RPM disk? Or the ability to \n> fsync() multiple concurrently-committing transactions at once?\n\nUh. What I meant was a single *IDE* disk. Sorry. Been too deep into\nhelping ppl with IDE disks lately to remember that SCSI can be a lot\nfaster :-) And we're talking about restore of a dump, so it's a single\nsession.\n\n(Strictly, that shuld be a 7200rpm IDE disk. I don't know if any others\nare common, though)\n\n//mha\n", "msg_date": "Wed, 23 Feb 2005 15:50:47 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" } ]
[ { "msg_contents": "> Hi,\n> \n> I changed fsync to false. It took 8 minutes to restore the \n> full database.\n> That is 26 times faster than before. :-/ (aprox. 200 tps) \n> With background writer it took 12 minutes. :-(\n\nThat seems reasonable.\n\n\n> The funny thing is, I had a VMWARE emulation on the same \n> Windows mashine, running Red Hat, with fsync turned on. It \n> took also 8 minutes to finish.\n> Probably the Linux code is better + VMWARE optimises (physical) disk\n> access.(?)\n\nVmware makes fsync() into a no-op. It will always cache the disk.\n(This is vmware workstation. Their server products behave differntly, of\ncourse)\n\n\n> It seems to me, I need 2 types of operating modes:\n> - For bulk loading (database restore) : fsync=false\n> - Normal operation fsync=true\n\nYes, fsync=false is very good for bulk loading *IFF* you can live with\ndata loss in case you get a crash during load.\n\n\n> Am I right? How can I do it \"elegantly\"?\n\nYou'll need to edit postgresql.conf and restart the server for this.\n\n\n> I Think, it should be a \"performance tuning guide\" in the \n> docomentation.\n> (not just explaning the settings) Playing with the settings \n> could be quite anoying. \n\nThere is some information on techdocs.postgresql.org you miht be\ninterested in.\n\n//Magnus\n", "msg_date": "Wed, 23 Feb 2005 15:54:50 +0100", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" }, { "msg_contents": "Magnus Hagander wrote:\n> Yes, fsync=false is very good for bulk loading *IFF* you can live with\n> data loss in case you get a crash during load.\n\nIt's not merely data loss -- you could encounter potentially \nunrecoverable database corruption.\n\nThere is a TODO item about allowing the delaying of WAL writes. If we \nmaintain the WAL invariant (that is, a WAL record describing a change \nmust hit disk before the change itself does) but simply don't flush the \nWAL at transaction commit, we should be able to get better performance \nwithout the risk of database corruption (so we would need to keep pages \nmodified by the committed transaction pinned in memory until the WAL has \nbeen flushed, which might be done on a periodic basis).\n\nNaturally, there is a risk of losing data in the period between \ntransaction commit and syncing the WAL, but no risk of database \ncorruption. This seems a reasonable approach to providing better \nperformance for people who don't need the strict guarantees provided by \nfsync=true.\n\n-Neil\n", "msg_date": "Thu, 24 Feb 2005 09:35:47 +1100", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" }, { "msg_contents": "Neil Conway <[email protected]> writes:\n> There is a TODO item about allowing the delaying of WAL writes. If we \n> maintain the WAL invariant (that is, a WAL record describing a change \n> must hit disk before the change itself does) but simply don't flush the \n> WAL at transaction commit, we should be able to get better performance \n> without the risk of database corruption (so we would need to keep pages \n> modified by the committed transaction pinned in memory until the WAL has \n> been flushed, which might be done on a periodic basis).\n\nThat interlock already exists, in the form of the bufmgr LSN logic.\n\nI think this \"feature\" might be as simple as\n\n XLogFlush(recptr);\n\nbecomes\n\n /* Must flush if we are deleting files... 
*/\n if (PerCommitFlush || nrels > 0)\n XLogFlush(recptr);\n\nin RecordTransactionCommit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Feb 2005 17:56:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is extremely slow on Windows " }, { "msg_contents": "Neil Conway wrote:\n> Magnus Hagander wrote:\n> > Yes, fsync=false is very good for bulk loading *IFF* you can live with\n> > data loss in case you get a crash during load.\n> \n> It's not merely data loss -- you could encounter potentially \n> unrecoverable database corruption.\n> \n> There is a TODO item about allowing the delaying of WAL writes. If we \n> maintain the WAL invariant (that is, a WAL record describing a change \n> must hit disk before the change itself does) but simply don't flush the \n> WAL at transaction commit, we should be able to get better performance \n> without the risk of database corruption (so we would need to keep pages \n> modified by the committed transaction pinned in memory until the WAL has \n> been flushed, which might be done on a periodic basis).\n> \n> Naturally, there is a risk of losing data in the period between \n> transaction commit and syncing the WAL, but no risk of database \n> corruption. This seems a reasonable approach to providing better \n> performance for people who don't need the strict guarantees provided by \n> fsync=true.\n\nRight. Just for clarity, you might lose the last 5 seconds of\ntransactions, but all transactsions would be completely committed or\naborted in your datbase. Right now with fsync off you can get\ntransactions partially commited in your database, which is a serious\nproblem (think moving money from one account to another).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 24 Feb 2005 16:25:34 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" }, { "msg_contents": "\nBruce Momjian <[email protected]> writes:\n\n> Right now with fsync off you can get transactions partially commited in your\n> database, which is a serious problem (think moving money from one account to\n> another).\n\nIt's worse than that. You can get a totally corrupted database. Things like\nduplicated records (the before and after image of an update). Or indexes that\nare out of sync with the table. This can cause strange inconsistent results\ndepending on the plan queries use, or outright database crashes.\n\n-- \ngreg\n\n", "msg_date": "25 Feb 2005 00:02:25 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL is extremely slow on Windows" } ]
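The trade-off being sketched in this thread (keep the WAL ordering rule, but stop waiting for the flush at every commit) is the same one that much later releases expose as a plain setting, so readers on a modern server do not need to patch RecordTransactionCommit to try it; a hedged sketch:

    -- per session (or per transaction) on later releases:
    SET synchronous_commit = off;
    -- a crash can lose a short window of recently committed transactions,
    -- but not produce the corruption that fsync = off allows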
[ { "msg_contents": "Hello All\n\n I am setting up a hardware clustering solution. My hardware is Dual \nOpteron 550 with 8GB ram. My external storage is a Kingston Fibre \nchannel Infostation. With 14 15000'k 36GB drives. The OS we are running \nis Redhat ES 3.0. Clustering using Redhat Cluster Suite. Postgres \nVersion is Postgres 7.4.7. We will be setting up about 9 databases which \nrange in size from 100MB to 2.5GB on the config. The postgres \napplication is mostly read intensive. What would be the best way to \nsetup the hardrives on this server. Currently I have it partioned with 2 \nseperate raid 5 with 1 failover per raid. I have two database clusters \nconfigured with a seperate postmaster running for each. Running two \npostmasters seems like a pain but that is the only way I knew to \nseperate the load. I am mostly concerned about disk IO and performance. \nIs my current setup a good way to accomplish the best performance or \nwould it be better to use all the drives in one huge raid five with a \ncouple of failovers. I have looked around in the archives and found some \ninfo but I would like to here about some of the configs other people are \nrunning and how they have them setup.\n\n\nThanks\n\nJohn Allgood - ESC\nSystems Admin\n", "msg_date": "Wed, 23 Feb 2005 11:39:27 -0500", "msg_from": "John Allgood <[email protected]>", "msg_from_op": true, "msg_subject": "Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "On Wed, Feb 23, 2005 at 11:39:27AM -0500, John Allgood wrote:\n> Hello All\n> \n> I am setting up a hardware clustering solution. My hardware is Dual \n> Opteron 550 with 8GB ram. My external storage is a Kingston Fibre \n> channel Infostation. With 14 15000'k 36GB drives. The OS we are running \n> is Redhat ES 3.0. Clustering using Redhat Cluster Suite. Postgres \n> Version is Postgres 7.4.7. We will be setting up about 9 databases which \n> range in size from 100MB to 2.5GB on the config. The postgres \n> application is mostly read intensive. What would be the best way to \n> setup the hardrives on this server. Currently I have it partioned with 2 \n> seperate raid 5 with 1 failover per raid. I have two database clusters \n> configured with a seperate postmaster running for each. Running two \n> postmasters seems like a pain but that is the only way I knew to \n> seperate the load. I am mostly concerned about disk IO and performance. \n> Is my current setup a good way to accomplish the best performance or \n> would it be better to use all the drives in one huge raid five with a \n> couple of failovers. I have looked around in the archives and found some \n> info but I would like to here about some of the configs other people are \n> running and how they have them setup.\n\nhttp://www.powerpostgresql.com/PerfList/\n\nConsider a separate array for pg_xlog. \n\nWith tablespaces in 8.0, you can isolate much of the IO in a single\ncluster.\n\n -Mike Adler\n", "msg_date": "Wed, 23 Feb 2005 13:03:38 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "Is there a real limit for max_connections? Here we've an Oracle server with\nup to 1200 simultaneous conections over it!\n\n\"max_connections: exactly like previous versions, this needs to be set to\nthe actual number of simultaneous connections you expect to need. High\nsettings will require more shared memory (shared_buffers). 
As the\nper-connection overhead, both from PostgreSQL and the host OS, can be quite\nhigh, it's important to use connection pooling if you need to service a\nlarge number of users. For example, 150 active connections on a medium-end\n32-bit Linux server will consume significant system resources, and 600 is\nabout the limit.\"\n\n\nC ya,\nBruno\n\n \n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Michael Adler\nSent: Wednesday, February 23, 2005 3:04 PM\nTo: John Allgood\nCc: [email protected]\nSubject: Re: [PERFORM] Peformance Tuning Opterons/ Hard Disk Layout\n\nOn Wed, Feb 23, 2005 at 11:39:27AM -0500, John Allgood wrote:\n> Hello All\n> \n> I am setting up a hardware clustering solution. My hardware is Dual \n> Opteron 550 with 8GB ram. My external storage is a Kingston Fibre \n> channel Infostation. With 14 15000'k 36GB drives. The OS we are running \n> is Redhat ES 3.0. Clustering using Redhat Cluster Suite. Postgres \n> Version is Postgres 7.4.7. We will be setting up about 9 databases which \n> range in size from 100MB to 2.5GB on the config. The postgres \n> application is mostly read intensive. What would be the best way to \n> setup the hardrives on this server. Currently I have it partioned with 2 \n> seperate raid 5 with 1 failover per raid. I have two database clusters \n> configured with a seperate postmaster running for each. Running two \n> postmasters seems like a pain but that is the only way I knew to \n> seperate the load. I am mostly concerned about disk IO and performance. \n> Is my current setup a good way to accomplish the best performance or \n> would it be better to use all the drives in one huge raid five with a \n> couple of failovers. I have looked around in the archives and found some \n> info but I would like to here about some of the configs other people are \n> running and how they have them setup.\n\nhttp://www.powerpostgresql.com/PerfList/\n\nConsider a separate array for pg_xlog. \n\nWith tablespaces in 8.0, you can isolate much of the IO in a single\ncluster.\n\n -Mike Adler\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Wed, 23 Feb 2005 15:26:18 -0300", "msg_from": "\"Bruno Almeida do Lago\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "On Wed, 2005-02-23 at 15:26 -0300, Bruno Almeida do Lago wrote:\n> Is there a real limit for max_connections? Here we've an Oracle server with\n> up to 1200 simultaneous conections over it!\n\nIf you can reduce them by using something like pgpool between PostgreSQL\nand the client, you'll save some headache. PostgreSQL did not perform as\nwell with a large number of idle connections and it does otherwise (last\ntime I tested was 7.4 though -- perhaps it's better now).\n\nThe kernel also starts to play a significant role with a high number of\nconnections. 
Some operating systems don't perform as well with a high\nnumber of processes (process handling, scheduling, file handles, etc.).\n\nI think you can do it without any technical issues, but you will\nprobably be happier with the result if you can hide idle connections\nfrom the database machine.\n-- \n\n", "msg_date": "Wed, 23 Feb 2005 13:35:03 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "\"Bruno Almeida do Lago\" <[email protected]> writes:\n> Is there a real limit for max_connections? Here we've an Oracle server with\n> up to 1200 simultaneous conections over it!\n\n[ shrug... ] If your machine has the beef to run 1200 simultaneous\nqueries, you can set max_connections to 1200.\n\nThe point of what you were quoting is that if you want to service\n1200 concurrent users but you only expect maybe 100 simultaneously\nactive queries from them (and you have a database box that can only\nservice that many) then you want to put a connection pooler in\nfront of 100 backends, not try to start a backend for every user.\n\nOracle may handle this sort of thing differently, I dunno.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Feb 2005 13:37:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout " }, { "msg_contents": "Rod Taylor <[email protected]> writes:\n> The kernel also starts to play a significant role with a high number of\n> connections. Some operating systems don't perform as well with a high\n> number of processes (process handling, scheduling, file handles, etc.).\n\nRight; the main problem with having lots more backends than you need is\nthat the idle ones still eat their share of RAM and open file handles.\n\nA connection pooler uses relatively few resources per idle connection,\nso it's a much better impedance match if you want to service lots of\nconnections that are mostly idle.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Feb 2005 13:51:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout " }, { "msg_contents": "I think maybe I didn't explain myself well enough. At most we will \nservice 200-250 connections across all the 9 databases mentioned. The \ndatabase we are building is for a trucking company. Each of the \ndatabases represents a different division. With one master database that \neverything is updated to. Most of the access to the database is by \nsimple queries. Most of the IO intensive stuff is done when revenue \nreports are generated and when we have our month/year end processing. \nAll the trucking loads that are mark as delivered are transferred to our \nmaster database during night time processing. All that will be handled \nusing custom scripts. Maybe I have given a better explanation of the \napplication. my biggest concern is how to partition the shared storage \nfor maximum performance. Is there a real benifit to having more that one \nraid5 partition or am I wasting my time.\n\nThanks\n\nJohn Allgood - ESC\n\n\nTom Lane wrote:\n>\"Bruno Almeida do Lago\" <[email protected]> writes:\n> \n>>Is there a real limit for max_connections? Here we've an Oracle server with\n>>up to 1200 simultaneous conections over it!\n>> \n>\n>[ shrug... 
] If your machine has the beef to run 1200 simultaneous\n>queries, you can set max_connections to 1200.\n>\n>The point of what you were quoting is that if you want to service\n>1200 concurrent users but you only expect maybe 100 simultaneously\n>active queries from them (and you have a database box that can only\n>service that many) then you want to put a connection pooler in\n>front of 100 backends, not try to start a backend for every user.\n>\n>Oracle may handle this sort of thing differently, I dunno.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n> \n", "msg_date": "Wed, 23 Feb 2005 14:15:52 -0500", "msg_from": "John Allgood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "John Allgood wrote:\n\n> I think maybe I didn't explain myself well enough. At most we will\n> service 200-250 connections across all the 9 databases mentioned. The\n> database we are building is for a trucking company. Each of the\n> databases represents a different division. With one master database\n> that everything is updated to. Most of the access to the database is\n> by simple queries. Most of the IO intensive stuff is done when revenue\n> reports are generated and when we have our month/year end processing.\n> All the trucking loads that are mark as delivered are transferred to\n> our master database during night time processing. All that will be\n> handled using custom scripts. Maybe I have given a better explanation\n> of the application. my biggest concern is how to partition the shared\n> storage for maximum performance. Is there a real benifit to having\n> more that one raid5 partition or am I wasting my time.\n>\n> Thanks\n>\n> John Allgood - ESC\n\nIf you read the general advice statements, it's actually better to not\nuse raid5, but to use raid10 (striping and mirroring). Simply because\nraid5 writing is quite poor.\n\nAlso, if you have the disks, the next best improvement is to move\npg_xlog onto it's own set of disks. I think that gets as much as 25%\nimprovement by itself. pg_xlog is an append process, which must complete\nbefore the actual data gets updated, so giving it it's own set of\nspindles reduces seek time, and lets the log be written quickly.\nI think there is even some benefit to making pg_xlog be a solid state\ndisk, as it doesn't have to be huge, but having high I/O rates can\nremove it as a bottleneck. (I'm not positive how large pg_xlog gets, but\nit is probably small compared with the total db size, and I think it can\nbe flushed periodically as transactions are completed.)\n\nI'm not sure what you are considering \"shared storage\". Are you thinking\nthat all the machines will be mounting a remote drive for writing the\nDB? They should all have their own local copy (multiple masters on the\nsame database is not supported).\n\nI think it is possible to get better performance by having more raid\nsystems. But it is application dependent. If you know that you have 2\ntables that are being updated often and independently, then having each\none on it's own raid would allow better concurrency.\n\nBut it sounds like in your app you get concurrency by having a bunch of\nremote databases, which then do bulk updates on the master database. 
I\nthink if you are careful to have each of the remote dbs update the\nmaster at a slightly different time, you could probably get very good\ntransaction rates.\n\nJohn\n=:->", "msg_date": "Wed, 23 Feb 2005 13:46:11 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "On Wed, Feb 23, 2005 at 02:15:52PM -0500, John Allgood wrote:\n> using custom scripts. Maybe I have given a better explanation of the \n> application. my biggest concern is how to partition the shared storage \n> for maximum performance. Is there a real benifit to having more that one \n> raid5 partition or am I wasting my time.\n\nI think the simplest and most generic solution would be to put the OS\nand pg_xlog on a RAID 1 pair and dedicate the rest of the drives to\nRAID 5 or RAID 1+0 (striped set of mirrors) array.\n\nDepending on the nature of your work, you may get better performance\nby placing individual tables/indices on dedicated spindles for\nparallel access.\n\n -Mike Adler\n", "msg_date": "Wed, 23 Feb 2005 14:50:59 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "This some good info. The type of attached storage is a Kingston 14 bay \nFibre Channel Infostation. I have 14 36GB 15,000 RPM drives. I think the \nway it is being explained that I should build a mirror with two disk for \nthe pg_xlog and the striping and mirroring the rest and put all my \ndatabases into one cluster. Also I might mention that I am running \nclustering using Redhat Clustering Suite.\n\nJohn Arbash Meinel wrote:\n> John Allgood wrote:\n>\n>> I think maybe I didn't explain myself well enough. At most we will\n>> service 200-250 connections across all the 9 databases mentioned. The\n>> database we are building is for a trucking company. Each of the\n>> databases represents a different division. With one master database\n>> that everything is updated to. Most of the access to the database is\n>> by simple queries. Most of the IO intensive stuff is done when revenue\n>> reports are generated and when we have our month/year end processing.\n>> All the trucking loads that are mark as delivered are transferred to\n>> our master database during night time processing. All that will be\n>> handled using custom scripts. Maybe I have given a better explanation\n>> of the application. my biggest concern is how to partition the shared\n>> storage for maximum performance. Is there a real benifit to having\n>> more that one raid5 partition or am I wasting my time.\n>>\n>> Thanks\n>>\n>> John Allgood - ESC\n>\n> If you read the general advice statements, it's actually better to not\n> use raid5, but to use raid10 (striping and mirroring). Simply because\n> raid5 writing is quite poor.\n>\n> Also, if you have the disks, the next best improvement is to move\n> pg_xlog onto it's own set of disks. I think that gets as much as 25%\n> improvement by itself. pg_xlog is an append process, which must complete\n> before the actual data gets updated, so giving it it's own set of\n> spindles reduces seek time, and lets the log be written quickly.\n> I think there is even some benefit to making pg_xlog be a solid state\n> disk, as it doesn't have to be huge, but having high I/O rates can\n> remove it as a bottleneck. 
(I'm not positive how large pg_xlog gets, but\n> it is probably small compared with the total db size, and I think it can\n> be flushed periodically as transactions are completed.)\n>\n> I'm not sure what you are considering \"shared storage\". Are you thinking\n> that all the machines will be mounting a remote drive for writing the\n> DB? They should all have their own local copy (multiple masters on the\n> same database is not supported).\n>\n> I think it is possible to get better performance by having more raid\n> systems. But it is application dependent. If you know that you have 2\n> tables that are being updated often and independently, then having each\n> one on it's own raid would allow better concurrency.\n>\n> But it sounds like in your app you get concurrency by having a bunch of\n> remote databases, which then do bulk updates on the master database. I\n> think if you are careful to have each of the remote dbs update the\n> master at a slightly different time, you could probably get very good\n> transaction rates.\n>\n> John\n> =:->\n>\n", "msg_date": "Wed, 23 Feb 2005 15:05:17 -0500", "msg_from": "John Allgood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "John Allgood wrote:\n\n> This some good info. The type of attached storage is a Kingston 14 bay\n> Fibre Channel Infostation. I have 14 36GB 15,000 RPM drives. I think\n> the way it is being explained that I should build a mirror with two\n> disk for the pg_xlog and the striping and mirroring the rest and put\n> all my databases into one cluster. Also I might mention that I am\n> running clustering using Redhat Clustering Suite.\n\n\nSo are these 14-disks supposed to be shared across all of your 9 databases?\nIt seems to me that you have a few architectural issues here.\n\nFirst, you can't really have 2 masters writing to the same disk array.\nI'm not sure if Redhat Clustering gets around this. But second is that\nyou can't run 2 postgres engines on the same database. Postgres doesn't\nsupport a clustered setup. There are too many issues with concurancy and\nkeeping everyone in sync.\n\nSince you seem to be okay with having a bunch of smaller localized\ndatabases, which update a master database 1/day, I would think you would\nwant hardware to go something like this.\n\n1 master server, at least dual opteron with access to lots of disks\n(likely the whole 14 if you can get away with it). Put 2 as a RAID1 for\nthe OS, 4 as a RAID10 for pg_xlog, and then the other 8 as RAID10 for\nthe rest of the database.\n\n8-9 other servers, these don't need to be as powerful, since they are\nlocal domains. Probably a 4-disk RAID10 for the OS and pg_xlog is plenty\ngood, and whatever extra disks you can get for the local database.\n\nThe master database holds all information for all domains, but the other\ndatabases only hold whatever is the local information. Every night your\nscript sequences through the domain databases one-by-one, updating the\nmaster database, and synchronizing whatever data is necesary back to the\nlocal domain. 
I would guess that this script could actually just\ncontinually run, going to each local db in turn, but you may want\nnighttime only updating depending on what kind of load they have.\n\nJohn\n=:->", "msg_date": "Wed, 23 Feb 2005 14:24:57 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "Here is a summary about the cluster suite from redhat. All 9 databases \nwill be on the primary server the secondary server I have is the \nfailover. They don't actually share the partitions at the same time. \nWhen you have some type of failure the backup server takes over. Once \nyou setup the hardware and install the clustering software. You then \nsetup a service \"ie postgres\" and then you tell it what harddrive you \nwill be using. /dev/sde1 and the clustering software takes care of \nstarting and stopping the postgres database.\n\n\n Cluster Manager\n\nThe Cluster Manager feature of Red Hat Cluster Suite provides an \napplication failover infrastructure that can be used by a wide range of \napplications, including:\n\n * Most custom and mainstream commercial applications\n * File and print serving\n * Databases and database applications\n * Messaging applications\n * Internet and open source application\n\nWith Cluster Manager, these applications can be deployed in high \navailability configurations so that they are always operational�bringing \n\"scale-out\" capabilities to enterprise Linux deployments.\n\nFor high-volume open source applications, such as NFS, Samba, and \nApache, Cluster Manager provides a complete ready-to-use failover \nsolution. For most other applications, customers can create custom \nfailover scripts using provided templates. Red Hat Professional Services \ncan provide custom Cluster Manager deployment services where required.\n\n\n Features\n\n * Support for up to eight nodes: Allows high availability to be\n provided for multiple applications simultaneously.\n * NFS/CIFS Failover: Supports highly available file serving in Unix\n and Windows environments.\n * Fully shared storage subsystem: All cluster members have access to\n the same storage.\n * Comprehensive Data Integrity guarantees: Uses the latest I/O\n barrier technology, such as programmable power switches and\n watchdog timers.\n * SCSI and Fibre Channel support: Cluster Manager configurations can\n be deployed using latest SCSI and Fibre Channel technology.\n Multi-terabyte configurations can readily be made highly available.\n * Service failover: Cluster Manager not only ensures hardware\n shutdowns or failures are detected and recovered from\n automatically, but also will monitor your applications to ensure\n they are running correctly, and will restart them automatically if\n they fail.\n\n\n\nJohn Arbash Meinel wrote:\n> John Allgood wrote:\n>\n>> This some good info. The type of attached storage is a Kingston 14 bay\n>> Fibre Channel Infostation. I have 14 36GB 15,000 RPM drives. I think\n>> the way it is being explained that I should build a mirror with two\n>> disk for the pg_xlog and the striping and mirroring the rest and put\n>> all my databases into one cluster. 
Also I might mention that I am\n>> running clustering using Redhat Clustering Suite.\n>\n>\n> So are these 14-disks supposed to be shared across all of your 9 \n> databases?\n> It seems to me that you have a few architectural issues here.\n>\n> First, you can't really have 2 masters writing to the same disk array.\n> I'm not sure if Redhat Clustering gets around this. But second is that\n> you can't run 2 postgres engines on the same database. Postgres doesn't\n> support a clustered setup. There are too many issues with concurancy and\n> keeping everyone in sync.\n>\n> Since you seem to be okay with having a bunch of smaller localized\n> databases, which update a master database 1/day, I would think you would\n> want hardware to go something like this.\n>\n> 1 master server, at least dual opteron with access to lots of disks\n> (likely the whole 14 if you can get away with it). Put 2 as a RAID1 for\n> the OS, 4 as a RAID10 for pg_xlog, and then the other 8 as RAID10 for\n> the rest of the database.\n>\n> 8-9 other servers, these don't need to be as powerful, since they are\n> local domains. Probably a 4-disk RAID10 for the OS and pg_xlog is plenty\n> good, and whatever extra disks you can get for the local database.\n>\n> The master database holds all information for all domains, but the other\n> databases only hold whatever is the local information. Every night your\n> script sequences through the domain databases one-by-one, updating the\n> master database, and synchronizing whatever data is necesary back to the\n> local domain. I would guess that this script could actually just\n> continually run, going to each local db in turn, but you may want\n> nighttime only updating depending on what kind of load they have.\n>\n> John\n> =:->\n>\n", "msg_date": "Wed, 23 Feb 2005 15:41:41 -0500", "msg_from": "John Allgood <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "John Allgood wrote:\n\n> Here is a summary about the cluster suite from redhat. All 9 databases\n> will be on the primary server the secondary server I have is the\n> failover. They don't actually share the partitions at the same time.\n> When you have some type of failure the backup server takes over. Once\n> you setup the hardware and install the clustering software. You then\n> setup a service \"ie postgres\" and then you tell it what harddrive you\n> will be using. /dev/sde1 and the clustering software takes care of\n> starting and stopping the postgres database.\n>\nOkay, I misunderstood your hardware. So you actually only have 1\nmachine, with a second machine as a potential rollover. But all\ntransactions occur on the same hardware, even if is a separate\n\"database\". I was thinking these were alternate machines.\n\nSo my first question is why are you partitioning into a separate\ndatabase, and then updating the master one at night. Since everything is\nrestricted to the same machine, why not just have everything performed\non the master?\n\nHowever, sticking with your arrangement, it would seem that you might be\nable to get some extra performance if each database is on it's own raid,\nsince you are fairly likely to have 2 transactions occuring at the same\ntime, that don't affect eachother (since you wouldn't have any foreign\nkeys, etc on 2 separate databases.)\n\nBut I think the basic OS RAID1, pg_xlog RAID10, database RAID10 is still\na good separation of disks. 
And probably would help you maximize your\nthroughput.\n\nI can't say too much about how the Cluster failover stuff will work with\npostgres. But as long as one is completely shutdown before the next is\nstarted, and they are both running binary compatible versions of\npostgres, it seems like it would be fine. Not much different from having\na second machine that is sitting turned off, which you turn on when the\nfirst one fails.\n\nJohn\n=:->", "msg_date": "Wed, 23 Feb 2005 15:01:55 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "Bruno,\n\n> For example, 150 active connections on a medium-end\n> 32-bit Linux server will consume significant system resources, and 600 is\n> about the limit.\"\n\nThat, is, \"is about the limit for a medium-end 32-bit Linux server\". Sorry \nif the implication didn't translate well. If you use beefier hardware, of \ncourse, you can manage more connections; personally I've never needed more \nthan 1000, even on a web site that gets 100,000 d.u.v.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 23 Feb 2005 13:02:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "[email protected] (\"Bruno Almeida do Lago\") wrote:\n> Is there a real limit for max_connections? Here we've an Oracle server with\n> up to 1200 simultaneous conections over it!\n>\n> \"max_connections: exactly like previous versions, this needs to be set to\n> the actual number of simultaneous connections you expect to need. High\n> settings will require more shared memory (shared_buffers). As the\n> per-connection overhead, both from PostgreSQL and the host OS, can be quite\n> high, it's important to use connection pooling if you need to service a\n> large number of users. For example, 150 active connections on a medium-end\n> 32-bit Linux server will consume significant system resources, and 600 is\n> about the limit.\"\n\nRight now, I have an Opteron box with:\n a) A load average of about 0.1, possibly less ;-), and\n b) 570 concurrent connections.\n\nHaving so connections is something of a \"fool's errand,\" as it really\nis ludicrously unnecessary, but I wouldn't be too afraid of having\n1000 connections on that box, as long as they're being used for\nrelatively small transactions.\n\nYou can, of course, kill performance on any not-outrageously-large\nsystem if a few of those users are doing big queries...\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').\nhttp://cbbrowne.com/info/slony.html\nI've had a perfectly wonderful evening. 
But this wasn't it.\n-- Groucho Marx\n", "msg_date": "Wed, 23 Feb 2005 22:24:57 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "No problems my friend :P\n\nI thought that since the beginning and just sent the e-mail to confirm if\nthere was no software limitation.\n\n\nBest Wishes,\nBruno Almeida do Lago\n\n \n\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]] \nSent: Wednesday, February 23, 2005 6:02 PM\nTo: Bruno Almeida do Lago\nCc: [email protected]\nSubject: Re: [PERFORM] Peformance Tuning Opterons/ Hard Disk Layout\n\nBruno,\n\n> For example, 150 active connections on a medium-end\n> 32-bit Linux server will consume significant system resources, and 600 is\n> about the limit.\"\n\nThat, is, \"is about the limit for a medium-end 32-bit Linux server\".\nSorry \nif the implication didn't translate well. If you use beefier hardware, of \ncourse, you can manage more connections; personally I've never needed more \nthan 1000, even on a web site that gets 100,000 d.u.v.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Thu, 24 Feb 2005 10:28:34 -0300", "msg_from": "\"Bruno Almeida do Lago\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "On Wed, Feb 23, 2005 at 01:37:28PM -0500, Tom Lane wrote:\n> \"Bruno Almeida do Lago\" <[email protected]> writes:\n> > Is there a real limit for max_connections? Here we've an Oracle server with\n> > up to 1200 simultaneous conections over it!\n> \n> [ shrug... ] If your machine has the beef to run 1200 simultaneous\n> queries, you can set max_connections to 1200.\n> \n> The point of what you were quoting is that if you want to service\n> 1200 concurrent users but you only expect maybe 100 simultaneously\n> active queries from them (and you have a database box that can only\n> service that many) then you want to put a connection pooler in\n> front of 100 backends, not try to start a backend for every user.\n> \n> Oracle may handle this sort of thing differently, I dunno.\n> \n> \t\t\tregards, tom lane\n\nOracle has some form of built-in connection pooling. I don't remember\nthe exact details of it off the top of my head, but I think it was a\n'wedge' that clients would connect to as if it was the database, and the\nwedge would then find an available database process to use.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Fri, 25 Feb 2005 10:20:05 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" } ]
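A quick way to put the connection numbers in this thread next to an actual server (assumes the statistics collector is enabled, otherwise pg_stat_activity has nothing to report):

    SHOW max_connections;
    SELECT count(*) AS backends FROM pg_stat_activity;

If the second number stays far below the first most of the time, a pooler such as pgpool in front of the database, as suggested above, is usually the cheaper fix compared with raising max_connections.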
[ { "msg_contents": "Sorry, just a fool tip, cause I haven't seen that you already done the pg_ctl stop && pg_ctl start ...\n\n(I mean, did you reload your conf settings?)\n\nRegards,\nGuido\n\n> > > I used you perl script and found the error =>\n> > > [root@samba tmp]# perl relacl.pl\n> > > DBI connect('dbname=template1;port=5432','postgres',...) failed: FATAL:\n> > IDENT\n> > > authentication failed for user \"postgres\" at relacl.pl line 21\n> > > Error in connect to DBI:Pg:dbname=template1;port=5432:\n> > >\n> > >\n> > Excellent - we know what is going on now!\n> >\n> >\n> > > And my pg_hba.conf is\n> > >\n> > > # IPv4-style local connections:\n> > > host all all 127.0.0.1 255.255.255.255 trust\n> > > host all all 192.168.0.0 255.255.0.0 trust\n> > >\n> > > trusted for every user.\n> >\n> > Ok, what I think has happened is that there is another Pg installation\n> > (or another initdb'ed cluster) on this machine that you are accidentally\n> > talking to. Try\n> >\n> > $ rpm -qa|grep -i postgres\n> >\n> > which will spot another software installation, you may just have to\n> > search for files called pg_hba.conf to find another initdb'ed cluster....\n> >\n> > This other installation should have a pg_hba.conf that looks something\n> > like :\n> >\n> > local all all ident\n> > host all all 127.0.0.1 255.255.255.255 ident\n> >\n> > So a bit of detective work is in order :-)\n> >\n> > Mark\n> After being a detector I found that\n> [root@samba ~]# rpm -qa|grep -i postgres\n> postgresql-7.4.5-3.1.tlc\n> postgresql-python-7.4.5-3.1.tlc\n> postgresql-jdbc-7.4.5-3.1.tlc\n> postgresql-tcl-7.4.5-3.1.tlc\n> postgresql-server-7.4.5-3.1.tlc\n> postgresql-libs-7.4.5-3.1.tlc\n> postgresql-docs-7.4.5-3.1.tlc\n> postgresql-odbc-7.3-8.1.tlc\n> postgresql-pl-7.4.5-3.1.tlc\n> postgresql-test-7.4.5-3.1.tlc\n> postgresql-contrib-7.4.5-3.1.tlc\n> [root@samba ~]#\n> \n> no other pg installation except the pgsql for windows in samba folder which I\n> think it isn't matter ,is it?\n> No other pg being run.\n> [root@samba ~]# ps ax|grep postmaster\n> 2228 ? S 0:00 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data\n> 3308 pts/0 S+ 0:00 grep postmaster\n> [root@samba ~]#\n> \n> Is it possible that it is related to pg_ident.conf ?\n> \n> Any comment please.\n> Amrit,Thailand\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n", "msg_date": "Wed, 23 Feb 2005 14:50:16 -0300 (GMT+3)", "msg_from": "G u i d o B a r o s i o <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with 7.4.5 and webmin 1.8 in grant function" } ]
[ { "msg_contents": "Hi,\n\nRAID1 (mirroring) and RAID1+0 (striping and mirroring) seems to\nbe a good choice. (RAID 5 is for saving money, but it doesn't have a\ngood performance) \n\nI suggest you to make a different array for:\n- Operating system\n- db logs\n- each database\n\nIt is a little bit of \"wasting\" disk storage, but it has the best\nperformance.\nForget RAID 5. If your fibre channel card and the external storage exceeds\ntheir throughput limits you should consider to implement +1 fibre channel\nand/or +1 external storage unit. (If you had such a load)\n\nBut it is only the hardware. The database structure, and the application\nlogic is the other 50% of the performance...\n\nBye\nVig Sándor\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of John Allgood\nSent: Wednesday, February 23, 2005 9:42 PM\nTo: John Arbash Meinel\nCc: [email protected]\nSubject: Re: [PERFORM] Peformance Tuning Opterons/ Hard Disk Layout\n\n\nHere is a summary about the cluster suite from redhat. All 9 databases \nwill be on the primary server the secondary server I have is the \nfailover. They don't actually share the partitions at the same time. \nWhen you have some type of failure the backup server takes over. Once \nyou setup the hardware and install the clustering software. You then \nsetup a service \"ie postgres\" and then you tell it what harddrive you \nwill be using. /dev/sde1 and the clustering software takes care of \nstarting and stopping the postgres database.\n\n\n Cluster Manager\n\nThe Cluster Manager feature of Red Hat Cluster Suite provides an \napplication failover infrastructure that can be used by a wide range of \napplications, including:\n\n * Most custom and mainstream commercial applications\n * File and print serving\n * Databases and database applications\n * Messaging applications\n * Internet and open source application\n\nWith Cluster Manager, these applications can be deployed in high \navailability configurations so that they are always operational—bringing \n\"scale-out\" capabilities to enterprise Linux deployments.\n\nFor high-volume open source applications, such as NFS, Samba, and \nApache, Cluster Manager provides a complete ready-to-use failover \nsolution. For most other applications, customers can create custom \nfailover scripts using provided templates. Red Hat Professional Services \ncan provide custom Cluster Manager deployment services where required.\n\n\n Features\n\n * Support for up to eight nodes: Allows high availability to be\n provided for multiple applications simultaneously.\n * NFS/CIFS Failover: Supports highly available file serving in Unix\n and Windows environments.\n * Fully shared storage subsystem: All cluster members have access to\n the same storage.\n * Comprehensive Data Integrity guarantees: Uses the latest I/O\n barrier technology, such as programmable power switches and\n watchdog timers.\n * SCSI and Fibre Channel support: Cluster Manager configurations can\n be deployed using latest SCSI and Fibre Channel technology.\n Multi-terabyte configurations can readily be made highly available.\n * Service failover: Cluster Manager not only ensures hardware\n shutdowns or failures are detected and recovered from\n automatically, but also will monitor your applications to ensure\n they are running correctly, and will restart them automatically if\n they fail.\n\n\n\nJohn Arbash Meinel wrote:\n> John Allgood wrote:\n>\n>> This some good info. 
The type of attached storage is a Kingston 14 bay\n>> Fibre Channel Infostation. I have 14 36GB 15,000 RPM drives. I think\n>> the way it is being explained that I should build a mirror with two\n>> disk for the pg_xlog and the striping and mirroring the rest and put\n>> all my databases into one cluster. Also I might mention that I am\n>> running clustering using Redhat Clustering Suite.\n>\n>\n> So are these 14-disks supposed to be shared across all of your 9 \n> databases?\n> It seems to me that you have a few architectural issues here.\n>\n> First, you can't really have 2 masters writing to the same disk array.\n> I'm not sure if Redhat Clustering gets around this. But second is that\n> you can't run 2 postgres engines on the same database. Postgres doesn't\n> support a clustered setup. There are too many issues with concurancy and\n> keeping everyone in sync.\n>\n> Since you seem to be okay with having a bunch of smaller localized\n> databases, which update a master database 1/day, I would think you would\n> want hardware to go something like this.\n>\n> 1 master server, at least dual opteron with access to lots of disks\n> (likely the whole 14 if you can get away with it). Put 2 as a RAID1 for\n> the OS, 4 as a RAID10 for pg_xlog, and then the other 8 as RAID10 for\n> the rest of the database.\n>\n> 8-9 other servers, these don't need to be as powerful, since they are\n> local domains. Probably a 4-disk RAID10 for the OS and pg_xlog is plenty\n> good, and whatever extra disks you can get for the local database.\n>\n> The master database holds all information for all domains, but the other\n> databases only hold whatever is the local information. Every night your\n> script sequences through the domain databases one-by-one, updating the\n> master database, and synchronizing whatever data is necesary back to the\n> local domain. I would guess that this script could actually just\n> continually run, going to each local db in turn, but you may want\n> nighttime only updating depending on what kind of load they have.\n>\n> John\n> =:->\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\nThe information transmitted is intended only for the person or entity to\nwhich it is addressed and may contain confidential and/or privileged\nmaterial. Any review, retransmission, dissemination or other use of, or\ntaking of any action in reliance upon, this information by persons or\nentities other than the intended recipient is prohibited. If you received\nthis in error, please contact the sender and delete the material from any\ncomputer.\n", "msg_date": "Thu, 24 Feb 2005 09:28:47 +0100", "msg_from": "\"Vig, Sandor (G/FI-2)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" } ]
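On a 7.4 installation like the one discussed here there are no tablespaces yet, so the usual way to act on the "separate array for the logs" advice is to move pg_xlog onto its own spindles and leave a symlink behind, roughly as follows (the mount point is a placeholder and the data directory is assumed to be the stock RPM location; the postmaster must be stopped while the directory moves):

    pg_ctl stop -D /var/lib/pgsql/data
    mv /var/lib/pgsql/data/pg_xlog /mnt/wal_array/pg_xlog
    ln -s /mnt/wal_array/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl start -D /var/lib/pgsql/data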
[ { "msg_contents": "Hello Again\n\nIn the below statement you mention putting each database on its own raid \nmirror.\n\n\"However, sticking with your arrangement, it would seem that you might be\nable to get some extra performance if each database is on it's own raid,\nsince you are fairly likely to have 2 transactions occuring at the same\ntime, that don't affect eachother (since you wouldn't have any foreign\nkeys, etc on 2 separate databases.)\"\n\nThat would take alot of disk drives to accomplish. I was thinking maybe \nputting three or four databases on each raid and dividing the heaviest \nused databases on each mirrored set. And for each of these sets have its \nown mirror for pg_xlog. My question is what is the best way to setup \npostgres databases on different disks. I have setup multiple postmasters \non this system as a test. The only problem was configuring each \ndatabases \"ie postgresql.conf, pg_hba.conf\". Is there anyway in \npostgres to have everything in one cluster and have it seperated onto \nmultiple drives. Here is a example of what is was thinking about.\n\nMIRROR1 - Database Group 1\nMIRROR2 - pg_xlog for database group 1\nMIRROR3 - Database Group 2\nMIRROR4 - pg_xlog for database group 2\nMIRROR5 - Database Group 3\nMIRROR6 - pg_xlog for database group 3\n\nThis will take about 12 disk drives. I have a 14 bay Storage Bay I can \nuse two of the drives for hotspare's.\n\n\n\n\nThanks\n\nJohn Allgood - ESC\nSystems Administrator\n", "msg_date": "Thu, 24 Feb 2005 13:31:38 -0500", "msg_from": "John Allgood <[email protected]>", "msg_from_op": true, "msg_subject": "Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "I am no expert, but have been asking them a bunch and I think your missing a\nkey concept.\n\nThe data is best on several drives.\nI could be completely off, but if I understood (I just finished doing the\nsame kind of thing minus several databases) you want your WAL on fast drives\nin raid 1 and your data (as many drives as you can use) on raid 10 (can be\nslower drives , but I saw you already have a bunch of 15k drives).\nSo you may get best performance just using one database rather then several\nsmaller ones on mirrored data drives. Keep in mind if you go with ES4 (I am\nusing AS4) and postgres 8 you can add spindles and move hard hit tables to\ntheir own spindle.\n\nAgain I am no expert; just thought I would echo what I was informed.\nI ended up using 2 15k drives in raid 1 for my WAL and 4 10k drives for my\ndata in raid 10. I ended up using links to these from the original install\nof postgres on the raid 5, 4 15k drives inside the server itself. I believe\nthis gives me three separate raid arrays for my install with logs and such\non the raid 5, data on the raid 10 and wal on the raid 1. I am in the\ntesting and conversion phase and have found it very fast. I used a 4\nprocessor Dell 6550, but think from what I have been told your computer\nwould have been a better choice (CPU wise). I am not using fibre but do have\na 14 drive powervault which I split to have the 15k's on one side and the\n10k's on the other. So I am using both channels of the controller. I have\nbeen told for me to get best performance I should add as many 10k drives to\nmy data array as I can (but this was all I had in my budget). 
I have room\nfor 3 more drives on that side of the PowerVault.\n\nBest of luck on your project.\n\nJoel\n\n", "msg_date": "Thu, 24 Feb 2005 14:12:29 -0500", "msg_from": "\"Joel Fradkin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "John Allgood wrote:\n\n> Hello Again\n>\n> In the statement below you mention putting each database on its own\n> RAID mirror:\n>\n> \"However, sticking with your arrangement, it would seem that you might be\n> able to get some extra performance if each database is on its own RAID,\n> since you are fairly likely to have 2 transactions occurring at the same\n> time that don't affect each other (since you wouldn't have any foreign\n> keys, etc. on 2 separate databases.)\"\n>\n> That would take a lot of disk drives to accomplish. I was thinking\n> maybe putting three or four databases on each RAID and dividing the\n> heaviest used databases between the mirrored sets, and for each of these\n> sets having its own mirror for pg_xlog. My question is: what is the best\n> way to set up postgres databases on different disks? I have set up\n> multiple postmasters on this system as a test. The only problem was\n> configuring each database (i.e. postgresql.conf, pg_hba.conf). Is\n> there any way in postgres to have everything in one cluster and have it\n> separated onto multiple drives? Here is an example of what I was\n> thinking about.\n>\nI think this is something that you would have to try and see what works.\nMy first feeling is that an 8-disk RAID10 is better than 4 sets of RAID1.\n\n> MIRROR1 - Database Group 1\n> MIRROR2 - pg_xlog for database group 1\n> MIRROR3 - Database Group 2\n> MIRROR4 - pg_xlog for database group 2\n> MIRROR5 - Database Group 3\n> MIRROR6 - pg_xlog for database group 3\n>\n> This will take about 12 disk drives. I have a 14-bay storage bay, so I\n> can use two of the drives for hot spares.\n>\nI would have all of them in one database cluster, which means they are all\nserved by the same postgres daemon, which I believe means that they all\nuse the same pg_xlog. That means you only need one RAID for pg_xlog,\nthough I would make it a 4-drive RAID10. (RAID1 is redundant, but\nactually slower on writes; you need the 0 to speed up reading/writing. I\ncould be wrong.)\n\nI believe you can still split each database onto its own RAID later on\nif you find that you need to.\n\nSo this is my proposal 1:\nOS RAID (sounds like this is not in the storage bay).\n4-drive RAID10 pg_xlog\n8-drive RAID10 database cluster\n2 drives hot spares / RAID1\n\nIf you feel like you want to partition your databases, you could also do\nproposal 2:\n4-drive RAID10 pg_xlog\n4-drive RAID10 databases master + 1-4\n4-drive RAID10 databases 5-9\n2 drives hot spare / RAID1\n\nIf you think partitioning is better than striping, you could do proposal 3:\n4-drive RAID10 pg_xlog\n2-drive RAID1 master database\n2-drive RAID1 databases 1,2,3\n2-drive RAID1 databases 4,5\n2-drive RAID1 databases 6,7\n2-drive RAID1 databases 8,9\n\nThere are certainly a lot of potential arrangements here, and it's not\nlike I've tried a lot of them. pg_xlog seems like a big enough\nbottleneck that it would be good to put it on its own RAID10, to make\nit as fast as possible.\n\nIt also depends a lot on whether you will be write-heavy or read-heavy,\netc. RAID5 works quite well for reading, very poorly for writing.
But if\nthe only reason to have the master database is to serve read-heavy\nqueries, and all the writing is done at night in bulk fashion with\ncareful tuning to avoid saturation, then maybe you would want to put the\nmaster database on a RAID5 so that you can get extra disk space.\nYou could do proposal 4:\n4-drive RAID10 pg_xlog\n4-drive RAID5 master db\n2-drive RAID1 dbs 1-3\n2-drive RAID1 dbs 4-6\n2-drive RAID1 dbs 7-9\n\nYou might also do some testing and find that pg_xlog doesn't deserve\nits own 4 disks, and that those drives would be better spent on the bulk\ntables.\n\nUnfortunately a lot of this would come down to performance testing on\nyour dataset, with a real data load, which isn't very easy to do.\nI personally like the simplicity of proposal 1.\n\nJohn\n=:->\n\n>\n> Thanks\n>\n> John Allgood - ESC\n> Systems Administrator", "msg_date": "Thu, 24 Feb 2005 13:40:56 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" }, { "msg_contents": "Hi, John,\n\nJohn Allgood wrote:\n> My question is: what is the best way to set up\n> postgres databases on different disks? I have set up multiple postmasters\n> on this system as a test. The only problem was configuring each\n> database (i.e. postgresql.conf, pg_hba.conf). Is there any way in\n> postgres to have everything in one cluster and have it separated onto\n> multiple drives?\n\nUsing PostgreSQL 8.0, the newly introduced \"tablespaces\" solve all of this:\nhttp://www.postgresql.org/docs/8.0/interactive/manage-ag-tablespaces.html\n\nUsing PostgreSQL 7.4, you can fairly easily create single databases on\ndifferent drives. However, separating out single tables or indices\ninvolves some black symlink magic. See Google and\nhttp://www.postgresql.org/docs/7.4/interactive/manage-ag-alternate-locs.html\n\nHTH,\nMarkus\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n", "msg_date": "Mon, 28 Feb 2005 09:10:36 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Peformance Tuning Opterons/ Hard Disk Layout" } ]
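For the one-cluster-across-many-mirrors question, the PostgreSQL 8.0 tablespaces Markus points to are enough on their own; no symlinks are needed. A minimal sketch, assuming the mirrored pairs are mounted at paths like /mnt/mirror3 and /mnt/mirror5 and that the postgres user owns empty directories there (the tablespace names and mount points below are made up for illustration):

    -- One tablespace per mirrored pair.
    CREATE TABLESPACE mirror3 LOCATION '/mnt/mirror3/pgdata';
    CREATE TABLESPACE mirror5 LOCATION '/mnt/mirror5/pgdata';

    -- Whole databases can live on their own spindles while staying in one
    -- cluster (one postmaster, one postgresql.conf, one pg_hba.conf).
    CREATE DATABASE group2_db TABLESPACE = mirror3;
    CREATE DATABASE group3_db TABLESPACE = mirror5;

    -- Individual hard-hit tables or indexes can also be placed explicitly.
    CREATE TABLE big_table (id integer, payload text) TABLESPACE mirror5;

On 7.4 the rough per-database equivalent is the alternate-locations mechanism linked above (initlocation plus CREATE DATABASE ... WITH LOCATION); anything finer-grained than a whole database does indeed come down to symlinks on that release.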
[ { "msg_contents": "Our dual Opteron has been performing well for many weeks now (after some simple tuning) when all of a sudden the queries have slowed right down!\n\nFor example:\n\nDUAL 246 OPTERON:\n\nselect count(*) from job_archieve; - Time: 107.24 ms\n\nexplain analyse select count(*) from job_archieve;\nAggregate (cost=2820.50..2820.50 rows=1 width=0) (actual time=153.53..153.53 rows=1 loops=1)\n -> Seq Scan on job_archieve (cost=0.00..2789.20 rows=12520 width=0) (actual time=1.39..132.98 rows=12520 loops=1)\n Total runtime: 153.74 msec\nTime: 156.94 ms\n\n\nCRAPPY AMD ATHLON XP 1700+:\n\nselect count(*) from job_archieve; - Time: 23.30 ms\n\nexplain analyse select count(*) from job_archieve;\nAggregate (cost=2816.50..2816.50 rows=1 width=0) (actual time=133.83..133.84 rows=1 loops=1)\n -> Seq Scan on job_archieve (cost=0.00..2785.20 rows=12520 width=0) (actual time=0.02..72.64 rows=12520 loops=1)\n Total runtime: 133.92 msec\nTime: 134.79 ms\n\n\nThe ratio of these simple query times is about accurate for most queries performed on the same database on the two machines... Any ideas what may have suddenly caused this and where to start troubleshooting??? Both dbs have already been fully vacuumed.\n\nThe Opteron is going to get an overhaul (4-port RAID going in, fresh install of FreeBSD, postgres, etc.) but it would be handy to know for future reference in case this happens again.\n\n(ps, yes I know archive is not spelt archieve ;)\n\nCheers!\nDave.\n", "msg_date": "Fri, 25 Feb 2005 15:41:58 +0800", "msg_from": "\"SpaceBallOne\" <[email protected]>", "msg_from_op": true, "msg_subject": "gah! sudden slowdown??" }, { "msg_contents": "On Fri, Feb 25, 2005 at 03:41:58PM +0800, SpaceBallOne wrote:\n> Our dual Opteron has been performing well for many weeks now (after some\n> simple tuning) when all of a sudden the queries have slowed right down!\n\nAre you running regular VACUUMs?
Looks like you have a lot of dead rows or\nsomething.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 25 Feb 2005 12:19:55 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: gah! sudden slowdown??" } ]
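To follow up Steinar's dead-rows suggestion with something concrete: on a 7.x/8.0 system, one quick way to see whether job_archieve is carrying a lot of dead row versions, and to clean them out, is roughly the following. The table name comes from the thread; the assumption that bloat is the culprit is only Steinar's guess, and the poster does say both databases had already been vacuumed, so this is a diagnostic sketch rather than a confirmed fix.

    -- Reports live vs. removable (dead) row versions and the page count.
    VACUUM VERBOSE ANALYZE job_archieve;

    -- If the table has bloated badly, plain VACUUM will not shrink it;
    -- VACUUM FULL compacts it at the cost of an exclusive lock.
    VACUUM FULL VERBOSE job_archieve;

    -- Re-check the sequential scan afterwards.
    EXPLAIN ANALYZE SELECT count(*) FROM job_archieve;

If the page count reported for the Opteron's copy of the table is far higher than the Athlon's for the same 12,520 rows, bloat (or an undersized free space map letting dead space accumulate between vacuums) is the likely story; if the page counts match, the difference has to come from somewhere else, such as I/O, caching, or host configuration.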