[ { "msg_contents": "Dear Gurus,\n\nVersion: 7.4.6\n\nI use a query on a heavily indexed table which picks a wrong index \nunexpectedly. Since this query is used in response to certain user \ninteractions thousands of times in succession (with different constants), \n500ms is not affordable for us. I can easily work around this, but I'd like \nto understand the root of the problem.\n\nBasically, there are two relevant indexes:\n- muvelet_vonalkod_muvelet btree (muvelet, ..., idopont)\n- muvelet_vonalkod_pk3 btree (idopont, ...)\n\nQuery is:\nSELECT idopont WHERE muvelet = x ORDER BY idopont LIMIT 1.\n\nI expected the planner to choose the index on muvelet, then sort by idopont.\nInstead, it took the other index. I think there is heavy correlation since \nmuvelet references to a sequenced pkey and idopont is a timestamp (both \nincrease with passing time). May that be a cause?\n\nSee full table description and explain analyze results at end of the email.\n\n\nTIA,\n--\nG.\n\n---- table :\n Table \"public.muvelet_vonalkod\"\n Column | Type | Modifiers\n------------+--------------------------+-----------------------------------\n az | integer | not null def. nextval('...')\n olvaso_nev | character varying | not null\n vonalkod | character varying | not null\n mozgasnem | integer | not null\n idopont | timestamp with time zone | not null\n muvelet | integer |\n minoseg | integer | not null\n cikk | integer |\n muszakhely | integer |\n muszakkod | integer |\n muszaknap | date |\n repre | boolean | not null default false\n hiba | integer | not null default 0\nIndexes:\n \"muvelet_vonalkod_pkey\" primary key, btree (az)\n \"muvelet_vonalkod_pk2\" unique, btree (olvaso_nev, idopont)\n \"muvelet_vonalkod_muvelet\" btree\n (muvelet, mozgasnem, vonalkod, olvaso_nev, idopont)\n \"muvelet_vonalkod_pk3\" btree (idopont, olvaso_nev)\n \"muvelet_vonalkod_vonalkod\" btree\n (vonalkod, mozgasnem, olvaso_nev, idopont)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (mozgasnem) REFERENCES mozgasnem(az)\n \"$2\" FOREIGN KEY (muvelet) REFERENCES muvelet(az)\n \"$3\" FOREIGN KEY (minoseg) REFERENCES minoseg(az)\n \"$4\" FOREIGN KEY (cikk) REFERENCES cikk(az)\n \"$5\" FOREIGN KEY (muszakhely) REFERENCES hely(az)\n \"$6\" FOREIGN KEY (muszakkod) REFERENCES muszakkod(az)\n \"muvelet_vonalkod_muszak_fk\"\n FOREIGN KEY (muszakhely, muszaknap, muszakkod)\n REFERENCES muszak(hely, nap, muszakkod)\nTriggers:\n muvelet_vonalkod_aiud AFTER INSERT OR DELETE OR UPDATE ON \nmuvelet_vonalkod FOR EACH ROW EXECUTE PROCEDURE muvelet_vonalkod_aiud()\n muvelet_vonalkod_biu BEFORE INSERT OR UPDATE ON muvelet_vonalkod FOR \nEACH ROW EXECUTE PROCEDURE muvelet_vonalkod_biu()\n muvelet_vonalkod_noty AFTER INSERT OR DELETE OR UPDATE ON \nmuvelet_vonalkod FOR EACH ROW EXECUTE PROCEDURE muvelet_vonalkod_noty()\n\n\n-- original query, limit\n# explain analyze\n select idopont from muvelet_vonalkod\n where muvelet=6859 order by idopont\n limit 1;\n QUERY PLAN \n\n----------------------------------------------------------------------------\n Limit (cost=0.00..25.71 rows=1 width=8) (actual time=579.528..579.529 \nrows=1 loops=1)\n -> Index Scan using muvelet_vonalkod_pk3 on muvelet_vonalkod \n(cost=0.00..8304.42 rows=323 width=8) (actual time=579.522..579.522 rows=1 \nloops=1)\n Filter: (muvelet = 6859)\n Total runtime: 579.606 ms\n(4 rows)\n\n-- however, if I omit the limit clause:\n# explain analyze\n select idopont from muvelet_vonalkod\n where muvelet=6859 order by idopont;\n QUERY PLAN 
\n\n---------------------------------------------------------------------------\n Sort (cost=405.41..405.73 rows=323 width=8) (actual time=1.295..1.395 \nrows=360 loops=1)\n Sort Key: idopont\n -> Index Scan using muvelet_vonalkod_muvelet on muvelet_vonalkod \n(cost=0.00..400.03 rows=323 width=8) (actual time=0.049..0.855 rows=360 loops=1)\n Index Cond: (muvelet = 6859)\n Total runtime: 1.566 ms\n(5 rows)\n\n-- workaround 1: the planner is hard to trick...\n# explain analyze\n select idopont from\n (select idopont from muvelet_vonalkod\n where muvelet=6859) foo\n order by idopont limit 1;\n QUERY PLAN \n\n---------------------------------------------------------------------------\n Limit (cost=0.00..25.71 rows=1 width=8) (actual time=584.403..584.404 \nrows=1 loops=1)\n -> Index Scan using muvelet_vonalkod_pk3 on muvelet_vonalkod \n(cost=0.00..8304.42 rows=323 width=8) (actual time=584.397..584.397 rows=1 \nloops=1)\n Filter: (muvelet = 6859)\n Total runtime: 584.482 ms\n(4 rows)\n\n-- workaround 2: quite ugly but seems to work (at least for this\n-- one test case):\n# explain analyze\n select idopont from\n (select idopont from muvelet_vonalkod\n where muvelet=6859 order by idopont) foo\n order by idopont limit 1;\n QUERY PLAN \n\n---------------------------------------------------------------------------\n Limit (cost=405.41..405.42 rows=1 width=8) (actual time=1.754..1.755 \nrows=1 loops=1)\n -> Subquery Scan foo (cost=405.41..407.35 rows=323 width=8) (actual \ntime=1.751..1.751 rows=1 loops=1)\n -> Sort (cost=405.41..405.73 rows=323 width=8) (actual \ntime=1.746..1.746 rows=1 loops=1)\n Sort Key: idopont\n -> Index Scan using muvelet_vonalkod_muvelet on \nmuvelet_vonalkod (cost=0.00..400.03 rows=323 width=8) (actual \ntime=0.377..1.359 rows=360 loops=1)\n Index Cond: (muvelet = 6859)\n Total runtime: 1.853 ms\n(7 rows)\n\n", "msg_date": "Wed, 21 Dec 2005 19:03:00 +0100", "msg_from": "=?ISO-8859-2?Q?Sz=FBcs_G=E1bor?= <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong index used when ORDER BY LIMIT 1" }, { "msg_contents": "On Wed, Dec 21, 2005 at 07:03:00PM +0100, Sz?cs Gbor wrote:\n> Version: 7.4.6\n[...]\n> Query is:\n> SELECT idopont WHERE muvelet = x ORDER BY idopont LIMIT 1.\n> \n> I expected the planner to choose the index on muvelet, then sort by idopont.\n> Instead, it took the other index.\n\nI think the planner is guessing that since you're ordering on\nidopont, scanning the idopont index will find the first matching\nrow faster than using the muvelet index would. In many cases that's\na good bet, but in this case the guess is wrong and you end up with\na suboptimal plan.\n\nI just ran some tests with 8.1.1 and it chose the better plan for\na query similar to what you're doing. One of the developers could\nprobably explain why; maybe it's because of the changes that allow\nbetter use of multicolumn indexes. 
Try 8.1.1 if you can and see\nif you get better results.\n\n> -- workaround 2: quite ugly but seems to work (at least for this\n> -- one test case):\n> # explain analyze\n> select idopont from\n> (select idopont from muvelet_vonalkod\n> where muvelet=6859 order by idopont) foo\n> order by idopont limit 1;\n\nAnother workaround is to use OFFSET 0 in the subquery.\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 21 Dec 2005 11:51:02 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong index used when ORDER BY LIMIT 1" }, { "msg_contents": "=?ISO-8859-2?Q?Sz=FBcs_G=E1bor?= <[email protected]> writes:\n> Query is:\n> SELECT idopont WHERE muvelet = x ORDER BY idopont LIMIT 1.\n\nMuch the best solution for this would be to have an index on\n\t(muvelet, idopont)\n--- perhaps you can reorder the columns of \"muvelet_vonalkod_muvelet\"\ninstead of making a whole new index --- and then say\n\n\tSELECT idopont WHERE muvelet = x ORDER BY muvelet, idopont LIMIT 1\n\nPG 8.1 can apply such an index to your original query, but older\nversions will need the help of the modified ORDER BY to recognize\nthat the index is usable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Dec 2005 14:34:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong index used when ORDER BY LIMIT 1 " }, { "msg_contents": "Dear Tom,\n\nOn 2005.12.21. 20:34, Tom Lane wrote:\n> =?ISO-8859-2?Q?Sz=FBcs_G=E1bor?= <[email protected]> writes:\n>> Query is:\n>> SELECT idopont WHERE muvelet = x ORDER BY idopont LIMIT 1.\n> \n> Much the best solution for this would be to have an index on\n> \t(muvelet, idopont)\n> --- perhaps you can reorder the columns of \"muvelet_vonalkod_muvelet\"\n> instead of making a whole new index --- and then say\n> \n> \tSELECT idopont WHERE muvelet = x ORDER BY muvelet, idopont LIMIT 1\n\nI was far too tired yesterday evening to produce such a clean solution but \nfinally came to this conclusion this morning :) Even without the new index, \nit picks the index on muvelet, which decreases time to ~1.5ms. The new index \ntakes it down to 0.1ms.\n\nHowever, this has a problem; namely, what if I don't (or can't) tell the \nexact int value in the WHERE clause? In general: will the following query:\n\n SELECT indexed_ts_field FROM table WHERE indexed_int_field IN (100,200)\n -- or even: indexed_int_field BETWEEN 100 AND 200\n ORDER BY indexed_ts_field LIMIT n\n\nalways pick the index on the timestamp field, or does it depend on something \nelse, say the limit size n and the attributes' statistics?\n\n> PG 8.1 can apply such an index to your original query, but older\n> versions will need the help of the modified ORDER BY to recognize\n> that the index is usable.\n\nSo the direct cause is that 7.x planners prefer ORDER BY to WHERE when \npicking indexes? But only when there is a LIMIT clause present?\n\nI'd like to know how much of our code should I review; if it's explicitly \nconnected to LIMIT, I'd probably have to check far less code.\n\n--\nG.\n\n", "msg_date": "Thu, 22 Dec 2005 13:52:44 +0100", "msg_from": "=?ISO-8859-2?Q?Sz=FBcs_G=E1bor?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong index used when ORDER BY LIMIT 1" } ]
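To make the advice in this thread concrete: a minimal sketch of the two fixes discussed above, using the table and column names from the original post. The index name is invented here and the plans were not re-tested against the poster's data.

    -- two-column index that matches both the filter and the sort
    CREATE INDEX muvelet_vonalkod_muvelet_idopont
        ON muvelet_vonalkod (muvelet, idopont);

    -- Tom Lane's rewrite: adding the constant column to the ORDER BY lets a
    -- pre-8.1 planner walk this index and stop after one row, instead of
    -- scanning muvelet_vonalkod_pk3 in idopont order and filtering
    SELECT idopont
      FROM muvelet_vonalkod
     WHERE muvelet = 6859
     ORDER BY muvelet, idopont
     LIMIT 1;

    -- Michael Fuhr's alternative: OFFSET 0 keeps the subquery from being
    -- flattened, so the index scan on muvelet inside it is preserved
    SELECT idopont
      FROM (SELECT idopont
              FROM muvelet_vonalkod
             WHERE muvelet = 6859
            OFFSET 0) AS foo
     ORDER BY idopont
     LIMIT 1;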
[ { "msg_contents": "Hi,\n \n We�ve a SELECT that even without ORDER BY is returning the rows in the order that we liked but when we add the ORDER BY clause the runtime and costs are much bigger.\n \n We have to use ORDER BY otherwise in some future postgresql version probably it will not return in the correct order anymore.\n \n But if we use ORDER BY it�s too much expensive... is there a way to have the same costs and runtime but with the ORDER BY clause?\n \n Why is not the planner using the access plan builded for the \"without order by\" select even if we use the order by clause? The results are both the same...\n \n Postgresql version: 8.0.3\n \n Without order by:\n explain analyze\nSELECT * FROM iparq.ARRIPT \nwhere \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO = 00\nand PARCELA >= 00 ) \nor \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO > 00 ) \nor \n(ANOCALC = 2005\nand CADASTRO > 19 ) \nor \n(ANOCALC > 2005 );\n Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript (cost=0.00..122255.35 rows=146602 width=897) (actual time=9.303..1609.987 rows=167710 loops=1)\n Index Cond: (((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric))\n Total runtime: 1712.456 ms\n(3 rows)\n \n \n With order by:\nexplain analyze\nSELECT * FROM iparq.ARRIPT \nwhere \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO = 00\nand PARCELA >= 00 ) \nor \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO > 00 ) \nor \n(ANOCALC = 2005\nand CADASTRO > 19 ) \nor \n(ANOCALC > 2005 )\norder by ANOCALC asc, CADASTRO asc, CODVENCTO asc, PARCELA asc;\n Sort (cost=201296.59..201663.10 rows=146602 width=897) (actual time=9752.555..10342.363 rows=167710 loops=1)\n Sort Key: anocalc, cadastro, codvencto, parcela\n -> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript (cost=0.00..122255.35 rows=146602 width=897) (actual time=0.402..1425.085 rows=167710 loops=1)\n Index Cond: (((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric))\n Total runtime: 10568.290 ms\n(5 rows)\n \nTable definition:\n Table \"iparq.arript\"\n Column | Type | Modifiers\n-------------------+-----------------------+-----------\n anocalc | numeric(4,0) | not null\n cadastro | numeric(8,0) | not null\n codvencto | numeric(2,0) | not null\n parcela | numeric(2,0) | not null\n inscimob | character varying(18) | not null\n codvencto2 | numeric(2,0) | not null\n parcela2 | numeric(2,0) | not null\n codpropr | numeric(10,0) | not null\n dtaven | numeric(8,0) | not null\n anocalc2 | numeric(4,0) |\n...\n...\nIndexes:\n \"pk_arript\" PRIMARY KEY, btree (anocalc, cadastro, codvencto, parcela)\n \"iarchave04\" UNIQUE, btree (cadastro, anocalc, codvencto, parcela)\n \"iarchave02\" btree (inscimob, anocalc, codvencto2, parcela2)\n \"iarchave03\" btree (codpropr, dtaven)\n \"iarchave05\" btree (anocalc, inscimob, codvencto2, parcela2)\n \n Best regards and thank you very much in advance,\n \n Carlos Benkendorf\n\n\n\t\t\n---------------------------------\n Yahoo! doce lar. Fa�a do Yahoo! 
sua homepage.", "msg_date": "Wed, 21 Dec 2005 18:16:01 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "ORDER BY costs" },
{ "msg_contents": "Carlos Benkendorf <[email protected]> writes:\n> Table \"iparq.arript\"\n> Column | Type | Modifiers\n> -------------------+-----------------------+-----------\n> anocalc | numeric(4,0) | not null\n> cadastro | numeric(8,0) | not null\n> codvencto | numeric(2,0) | not null\n> parcela | numeric(2,0) | not null\n> inscimob | character varying(18) | not null\n> codvencto2 | numeric(2,0) | not null\n> parcela2 | numeric(2,0) | not null\n> codpropr | numeric(10,0) | not null\n> dtaven | numeric(8,0) | not null\n> anocalc2 | numeric(4,0) |\n\nI suspect you'd find a significant performance improvement from changing\nthe NUMERIC columns to int or bigint as needed. Numeric comparisons are\npretty slow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Dec 2005 14:39:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY costs " },
{ "msg_contents": "I restored the table in another database and repeated the analyze again with original column definitions (numeric):\n \n With order by:\n Sort (cost=212634.30..213032.73 rows=159374 width=897) (actual time=9286.817..9865.030 rows=167710 loops=1)\n Sort Key: anocalc, cadastro, codvencto, parcela\n -> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript (cost=0.00..126604.64 rows=159374 width=897) (actual time=0.152..1062.664 rows=167710 loops=1)\n Index Cond: (((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric))\n Total runtime: 10086.884 ms\n(5 rows)\n \n Without order by:\n Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript (cost=0.00..126604.64 rows=159374 width=897) (actual time=0.154..809.566 rows=167710 loops=1)\n Index Cond: (((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric))\n Total runtime: 894.218 ms\n(3 rows)\n \n\nThen I recreated the table and changed the primary key column type definitions to smallint, integer and bigint.\n \n CREATE TABLE arript (\n anocalc smallint NOT NULL,\n cadastro integer NOT NULL,\n codvencto smallint NOT NULL,\n parcela smallint NOT NULL,\n inscimob character varying(18) NOT NULL,\n codvencto2 smallint NOT NULL,\n parcela2 smallint NOT NULL,\n codpropr bigint NOT NULL,\n dtaven integer NOT NULL,\n anocalc2 smallint,\n dtabase integer,\n vvt numeric(14,2),\n vvp numeric(14,2),\n...\n ...\n \n Now the new analyze:\n \n With order by:\n Sort (cost=180430.98..180775.10 rows=137649 width=826) (actual time=4461.524..5000.707 rows=167710 loops=1)\n Sort Key: anocalc, cadastro, codvencto, parcela\n -> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript (cost=0.00..111126.93 rows=137649 width=826) (actual time=0.142..763.255 rows=167710 loops=1)\n Index Cond: (((anocalc = 2005) AND (cadastro = 19) AND (codvencto = 0) AND (parcela >= 0)) OR ((anocalc = 2005) AND (cadastro = 19) AND (codvencto > 0)) OR ((anocalc = 2005) AND (cadastro > 19)) OR (anocalc > 2005))\n Total runtime: 5222.729 ms\n(5 rows)\n \n \n Without order by:\n Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript (cost=0.00..111126.93 rows=137649 width=826) (actual time=0.135..505.250 rows=167710 loops=1)\n Index Cond: (((anocalc = 2005) AND (cadastro = 19) AND (codvencto = 0) AND (parcela >= 0)) OR ((anocalc = 2005) AND (cadastro = 19) AND (codvencto > 0)) OR ((anocalc = 2005) AND (cadastro > 19)) OR (anocalc > 2005))\n Total runtime: 589.528 ms\n(3 rows)\n\n Total runtime summary:\n Primary key columns defined with integer/smallint/bigint and select with order by: 5222.729 ms\n Primary key columns defined with integer/smallint/bigint and select without order by: 589.528 ms\n Primary key columns defined with numeric and select with order by: 10086.884 ms\n Primary key columns defined with numeric and select without order by: 894.218 ms\n\n\n \n Using order by and integer/smallint/bigint (5222.729) is almost half the total runtime of the select over numeric columns (10086.884) but is still 6x more than the original select (without order by and numeric columns=894.218).\n \n Is there something more that could be done? Planner cost constants?\n \n Thanks very much in advance!\n \n Benkendorf\n \n \n\n\n\n\t\t\n---------------------------------\n Yahoo! doce lar. Faça do Yahoo! sua homepage.\n", "msg_date": "Thu, 22 Dec 2005 00:35:00 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY costs " },
{ "msg_contents": "I'm not sure, but I think the extra runtime of the select statement that has the ORDER BY clause is because the planner decided to sort the result set.\n \n Is the sort really necessary? Why not only scan the primary key index pages and retrieve the rows like the select without the order by clause?\n \n Aren't the rows retrieved from the index in an ordered form?\n \n Thanks in advance!\n \n Benkendorf\n \n \n \n\n\t\t\n---------------------------------\n Yahoo! doce lar. Faça do Yahoo! sua homepage.\n", "msg_date": "Thu, 22 Dec 2005 14:06:20 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY costs " } ]
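Tom Lane's suggestion above can also be applied in place instead of recreating the table. A hedged sketch, assuming PostgreSQL 8.0 or later (the poster runs 8.0.3) and that nothing else depends on the old column types; each type change rewrites the table and its indexes, so run it in a maintenance window and re-ANALYZE afterwards:

    -- convert the sort-key columns from numeric to integer types in one pass
    ALTER TABLE iparq.arript
        ALTER COLUMN anocalc   TYPE smallint,
        ALTER COLUMN cadastro  TYPE integer,
        ALTER COLUMN codvencto TYPE smallint,
        ALTER COLUMN parcela   TYPE smallint;

    ANALYZE iparq.arript;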
[ { "msg_contents": "> > It's *got* to be the network configuration on the client machine.\n> \n> We've seen gripes of this sort before --- check the list archives for\n> possible fixes. I seem to recall something about a \"QoS patch\", as\n> well as suggestions to get rid of third-party packages that might be\n> interfering with the TCP stack.\n\nI personally checked out the last report from a poster who got the issue\non win2k but not on winxp. I ran his exact dump into my 2k server with\nno problems. This is definitely some type of local issue.\n\nJosep: does your table have any large ( > 1k ) fields in it?\n\nMerlin\n", "msg_date": "Wed, 21 Dec 2005 13:40:09 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Windows performance again " } ]
[ { "msg_contents": "> On Sun, Dec 18, 2005 at 01:10:21AM -0000, Ben Trewern wrote:\n> > I know I should be writing these in C but that's a bit beyond me. I\nwas\n> > going to try PL/Python or PL/Perl or even PL/Ruby. Has anyone any\nidea\n> > which language is fastest, or is the data access going to swamp the\n> overhead\n> > of small functions?\n> \n> I'm not sure if it's what you ask for, but there _is_ a clear\ndifference\n> between the procedural languages -- I've had a 10x speed increase from\n> rewriting PL/PgSQL stuff into PL/Perl, for instance. I'm not sure\nwhich\n> ones\n> would be faster, though -- I believe Ruby is slower than Perl or\nPython\n> generally, but I don't know how it all works out in a PL/* setting.\n\nSo far, I use plpgsql for everything...queries being first class and\nall...I don't have any performance problems with it. I have cut the\noccasional C routine, but for flexibility not for speed.\n\nPL/Perl routines cannot directly execute each other, meaning you can't\npass high level objects between them like refcursors. YMMV\n\nSince most database apps are bound by the server one way or another I\nwould imagine you should be choosing a language on reasons other than\nperformance. \n\nMaybe Ben you could provide an example of what you are trying to do that\nis not fast enough?\n\nMerlin \n", "msg_date": "Wed, 21 Dec 2005 15:43:43 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed of different procedural language" } ]
[ { "msg_contents": "I am currently using a dual Opteron (248) single core system (RAM\nPC3200) and for a change I am finding that the bottleneck is not disk\nI/O but CPU/RAM (not sure which). The reason for this is that the most\nfrequently accessed tables/indexes are all held in RAM and when\nquerying the database there is almost no disk activity which is great,\nmost of the time. However, the database is growing and this database\nis supporting an OLTP system where the retrieval of the data is an\norder of magnitude more important than the insertion and general\nupkeep of the data. It supports a search engine[0] and contains a\nreverse index, lexicon and the actual data table (currently just under\n2Gb for the three tables and associated indexes).\n\nAt the moment everything is working OK but I am noticing an almost\nlinear increase in time to retrieve data from the database as the data\nset increases in size. Clustering knocks the access times down by 25%\nbut it also knocks users off the website and can take up to 30 minutes\nwhich is hardly an ideal scenario. I have also considered partitioning\nthe tables up using extendible hashing and tries to allocate the terms\nin the index to the correct table but after some testing I noticed no\nnoticeable gain using this method which surprised me a bit.\n\nThe actual size of the database is not that big (4Gb) but I am\nexpecting this to increase to at least 20Gb over the next year or so.\nThis means that search times are going to jump dramatically which also\nmeans the site becomes completely unusable. This also means that\nalthough disk access is currently low I am eventually going to run out\nof RAM and require a decent disk subsystem.\n\nDo people have any recommendations as to what hardware would alleviate\nmy current CPU/RAM problem but with a mind to the future would still\nbe able to cope with heavy disk access. My budget is about £2300/$4000\nwhich is not a lot of money when talking databases so suggestions of a\nSun Fire T2000 or similar systems will be treated with the utmost\ndisdain ;) unless you are about to give me one to keep.\n\n--\nHarry\nhttp://www.hjackson.org\nhttp://www.uklug.co.uk\n\n\nBefore anyone asks I have considered using tsearch2.\n", "msg_date": "Thu, 22 Dec 2005 01:20:16 +0000", "msg_from": "Harry Jackson <[email protected]>", "msg_from_op": true, "msg_subject": "CPU and RAM" }, { "msg_contents": "\nOn Thu, 22 Dec 2005, Harry Jackson wrote:\n\n> I am currently using a dual Opteron (248) single core system (RAM\n> PC3200) and for a change I am finding that the bottleneck is not disk\n> I/O but CPU/RAM (not sure which). The reason for this is that the most\n> frequently accessed tables/indexes are all held in RAM and when\n> querying the database there is almost no disk activity which is great,\n> most of the time.\n>\n> At the moment everything is working OK but I am noticing an almost\n> linear increase in time to retrieve data from the database as the data\n> set increases in size. Clustering knocks the access times down by 25%\n\nLet's find out what's going on first. Can you find out the most expensive\nquery. 
Also, according to you what you said: (1) execution time is linear\nto data set size (2) no disk IO - so why cluster will improve 25%?\n\nRegards,\nQingqing\n", "msg_date": "Wed, 21 Dec 2005 23:01:12 -0500", "msg_from": "Qingqing Zhou <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU and RAM" }, { "msg_contents": "Harry Jackson wrote:\n> I am currently using a dual Opteron (248) single core system (RAM\n> PC3200) and for a change I am finding that the bottleneck is not disk\n> I/O but CPU/RAM (not sure which).\n\nWell that's the first thing to find out. What is \"top\" showing for CPU \nusage and which processes?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 22 Dec 2005 09:19:51 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU and RAM" }, { "msg_contents": "Harry Jackson <[email protected]> writes:\n\n> At the moment everything is working OK but I am noticing an almost\n> linear increase in time to retrieve data from the database as the data\n> set increases in size. Clustering knocks the access times down by 25%\n> but it also knocks users off the website and can take up to 30 minutes\n> which is hardly an ideal scenario. \n\nIf the whole database is in RAM I wouldn't expect clustering to have any\neffect. Either you're doing a lot of merge joins or a few other cases where\nclustering might be helping you, or the cluster is helping you keep more of\nthe database in ram avoiding the occasional disk i/o.\n\nThat said, I would agree with the others to not assume the plans for every\nquery is ok. It's easy when the entire database fits in RAM to be fooled into\nthinking plans are ok because they're running quite fast but in fact have\nproblems.\n\nIn particular, if you have a query doing a sequential scan of some moderately\nlarge table (say a few thousand rows) then you may find the query executes\nreasonably fast when tested on its own but consumes enough cpu and memory\nbandwidth that when it's executed frequently in an OLTP setting it pegs the\ncpu at 100%.\n\n-- \ngreg\n\n", "msg_date": "22 Dec 2005 22:52:54 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU and RAM" }, { "msg_contents": "\n\"Greg Stark\" <[email protected]> wrote\n>\n> If the whole database is in RAM I wouldn't expect clustering to have any\n> effect. Either you're doing a lot of merge joins or a few other cases \n> where\n> clustering might be helping you, or the cluster is helping you keep more \n> of\n> the database in ram avoiding the occasional disk i/o.\n>\n\nHi Greg,\n\nAt first I think the same - notice that Tom has submitted a patch to scan a \nwhole page in one run, so if Harry tests against the cvs tip, he could see \nthe real benefits. For example, a index scan may touch 5000 tuples, which \ninvolves 5000 pairs of lock/unlock buffer, no matter how the tuples are \ndistributed. 
After the patch, if the tuples belong to a few pages, then a \nsignificant number of lock/unlock are avoided.\n\nRegards,\nQingqing \n\n\n", "msg_date": "Thu, 22 Dec 2005 23:29:42 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CPU and RAM" }, { "msg_contents": "On 24 Dec 2005 10:25:09 -0500, Greg Stark <[email protected]> wrote:\n>\n> Harry Jackson <[email protected]> writes:\n>\n> > I always look at the explain plans.\n> >\n> > =# explain select item_id, term_frequency from reverse_index where\n> > term_id = 22781;\n> > QUERY PLAN\n> > -----------------------------------------------------------------------------------------------\n> > Bitmap Heap Scan on reverse_index (cost=884.57..84443.35 rows=150448 width=8)\n> > Recheck Cond: (term_id = 22781)\n> > -> Bitmap Index Scan on reverse_index_term_id_idx\n> > (cost=0.00..884.57 rows=150448 width=0)\n> > Index Cond: (term_id = 22781)\n> > (4 rows)\n>\n> Can you send EXPLAIN ANALYZE for this query for a problematic term_id? Are you\n> really retrieving 150k records like it expects? In an OLTP environment that's\n> an awful lot of records to be retrieving and might explain your high CPU usage\n> all on its own.\n\nThe above is with the problematic term_id ;)\n\nThe above comes in at around 1/4 of a second which is fine for now but\nwill cause me severe problems in a few months when the size of teh\ndatabase swells.\n\n> 250ms might be as good as you'll get for 150k records. I'm not sure precaching\n> that many records will help you. You're still going to have to read them from\n> somewhere.\n\nThis is what I am thinking. I have tried various methods to reduce the\ntime. I even tried to use \"order by\" then reduce the amount of data to\n50K records to see if this would work but it came in at around the\nsame amount of time. It is faster if I use the following though but\nnot by much.\n\n=# explain select * from reverse_index where term_id = 22781 order by\nterm_frequency DESC limit 30000;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------\n Limit (cost=67337.87..67412.87 rows=30000 width=12)\n -> Sort (cost=67337.87..67565.26 rows=90956 width=12)\n Sort Key: term_frequency\n -> Index Scan using reverse_index_term_id_idx on\nreverse_index (cost=0.00..59846.33 rows=90956 width=12)\n Index Cond: (term_id = 22781)\n(5 rows)\n\nI was actually suprised by this and it shows that whatever routines\nPostgresql is using to sort the data its pretty bloody fast. The total\nsort time for 110K records is about 193ms. The its retrieval after\nthat. What also suprised me is that without the sort\n\nselect * from reverse_index where term_id = 22781;\n\nis slower than\n\nselect item_id, term_frequency from reverse_index where term_id = 22781;\n\nbut with the sort and limit added\n\nselect * from reverse_index where term_id = 22781 order by\nterm_frequency DESC limit 30000;\n\nis faster than\n\nselect item_id, term_frequency from reverse_index where term_id =\n22781 order by term_frequency DESC limit 30000;\n\n> I guess clustering on term_id might speed this up by putting all the records\n> being retrieved together. It might also let the planner use a plain index scan\n> instead of a bitmap scan and get the same benefit.\n\nYep. 
I clustered on the term_id index again before running the above\nexplain and this time we have a plain index scan.\n\n> > The next query absolutely flies but it would have been the one I would\n> > have expected to be a lot slower.\n> > ...\n> > This comes in under 10.6ms which is astounding and I am more than\n> > happy with the performance to be had from it.\n>\n> Out of curiosity it would be interesting to see the EXPLAIN ANALYZE from this\n> too.\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on item i (cost=8.01..16.18 rows=4 width=478)\n Recheck Cond: ((item_id = 20006293) OR (item_id = 20097065) OR\n(item_id = 20101014) OR (item_id = 20101015))\n -> BitmapOr (cost=8.01..8.01 rows=4 width=0)\n -> Bitmap Index Scan on item_item_id_pk (cost=0.00..2.00\nrows=1 width=0)\n Index Cond: (item_id = 20006293)\n -> Bitmap Index Scan on item_item_id_pk (cost=0.00..2.00\nrows=1 width=0)\n Index Cond: (item_id = 20097065)\n\n<snip lots of single item_id bitmap index scans>\n\n -> Bitmap Index Scan on item_item_id_pk (cost=0.00..2.00\nrows=1 width=0)\n Index Cond: (item_id = 20101014)\n -> Bitmap Index Scan on item_item_id_pk (cost=0.00..2.00\nrows=1 width=0)\n Index Cond: (item_id = 20101015)\n\n\nAnother intereting thing I noticed was the size of the tables and\nindexes after the cluster operation\n\nBEFORE:\n relname | bytes | kbytes | relkind | mb\n---------------------------+-----------+--------+---------+-----\n reverse_index | 884293632 | 863568 | r | 843\n reverse_index_pk | 548126720 | 535280 | i | 522\n reverse_index_term_id_idx | 415260672 | 405528 | i | 396\n\nAFTER:\n reverse_index | 635944960 | 621040 | r | 606\n reverse_index_pk | 322600960 | 315040 | i | 307\n reverse_index_term_id_idx | 257622016 | 251584 | i | 245\n\nThis database has autovacuum running but it looks like there is a lot\nof space in pages on disk that is not being used. Is this a trade off\nwhen using MVCC?\n\n--\nHarry\nhttp://www.uklug.co.uk\nhttp://www.hjackson.org\n", "msg_date": "Fri, 30 Dec 2005 08:38:09 +0000", "msg_from": "Harry Jackson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CPU and RAM" } ]
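The same trick Tom Lane describes earlier in this digest for muvelet_vonalkod may apply to the hot reverse_index query: an index whose order matches the ORDER BY, with the constant column added to the sort. This is a sketch only — the index name is invented, it was not run against Harry's data, and whether the planner skips the sort still depends on version and statistics:

    -- index ordered the way the hot query wants the rows back
    CREATE INDEX reverse_index_term_freq_idx
        ON reverse_index (term_id, term_frequency);

    -- term_id is constant in the WHERE clause, so adding it to the ORDER BY
    -- lets the planner walk the index backwards and stop at the LIMIT,
    -- instead of fetching all ~150k rows and sorting them
    SELECT item_id, term_frequency
      FROM reverse_index
     WHERE term_id = 22781
     ORDER BY term_id DESC, term_frequency DESC
     LIMIT 30000;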
[ { "msg_contents": "Hi all,\n\n On a user's request, I recently added MySQL support to my backup \nprogram which had been written for PostgreSQL exclusively until now. \nWhat surprises me is that MySQL is about 20%(ish) faster than PostgreSQL.\n\n Now, I love PostgreSQL and I want to continue recommending it as the \ndatabase engine of choice but it is hard to ignore a performance \ndifference like that.\n\n My program is a perl backup app that scans the content of a given \nmounted partition, 'stat's each file and then stores that data in the \ndatabase. To maintain certain data (the backup, restore and display \nvalues for each file) I first read in all the data from a given table \n(one table per partition) into a hash, drop and re-create the table, \nthen start (in PostgreSQL) a bulk 'COPY..' call through the 'psql' shell \napp.\n\n In MySQL there is no 'COPY...' equivalent so instead I generate a \nlarge 'INSERT INTO file_info_X (col1, col2, ... coln) VALUES (...), \n(blah) ... (blah);'. This doesn't support automatic quoting, obviously, \nso I manually quote my values before adding the value to the INSERT \nstatement. I suspect this might be part of the performance difference?\n\n I take the total time needed to update a partition (load old data \ninto hash + scan all files and prepare COPY/INSERT + commit new data) \nand devide by the number of seconds needed to get a score I call a \n'U.Rate). On average on my Pentium3 1GHz laptop I get U.Rate of ~4/500. \nOn MySQL though I usually get a U.Rate of ~7/800.\n\n If the performace difference comes from the 'COPY...' command being \nslower because of the automatic quoting can I somehow tell PostgreSQL \nthat the data is pre-quoted? Could the performance difference be \nsomething else?\n\n If it would help I can provide code samples. I haven't done so yet \nbecause it's a little convoluded. ^_^;\n\n Thanks as always!\n\nMadison\n\n\nWhere the big performance concern is when\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Madison Kelly (Digimer)\n TLE-BU; The Linux Experience, Back Up\nMain Project Page: http://tle-bu.org\nCommunity Forum: http://forum.tle-bu.org\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Wed, 21 Dec 2005 21:03:18 -0500", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "MySQL is faster than PgSQL but a large margin in my program... any\n\tideas why?" }, { "msg_contents": "* Madison Kelly ([email protected]) wrote:\n> If the performace difference comes from the 'COPY...' command being \n> slower because of the automatic quoting can I somehow tell PostgreSQL \n> that the data is pre-quoted? Could the performance difference be \n> something else?\n\nI doubt the issue is with the COPY command being slower than INSERTs\n(I'd expect the opposite generally, actually...). What's the table type\nof the MySQL tables? Is it MyISAM or InnoDB (I think those are the main\nalternatives)? IIRC, MyISAM doesn't do ACID and isn't transaction safe,\nand has problems with data reliability (aiui, equivilant to doing 'fsync\n= false' for Postgres). InnoDB, again iirc, is transaction safe and\nwhatnot, and more akin to the default PostgreSQL setup.\n\nI expect some others will comment along these lines too, if my response\nisn't entirely clear. :)\n\n\tStephen", "msg_date": "Wed, 21 Dec 2005 21:14:18 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in my program...\n\tany ideas why?" 
}, { "msg_contents": "On Wednesday 21 December 2005 20:14, Stephen Frost wrote:\n> * Madison Kelly ([email protected]) wrote:\n> > If the performace difference comes from the 'COPY...' command being\n> > slower because of the automatic quoting can I somehow tell PostgreSQL\n> > that the data is pre-quoted? Could the performance difference be\n> > something else?\n>\n> I doubt the issue is with the COPY command being slower than INSERTs\n> (I'd expect the opposite generally, actually...). What's the table type\n> of the MySQL tables? Is it MyISAM or InnoDB (I think those are the main\n> alternatives)? IIRC, MyISAM doesn't do ACID and isn't transaction safe,\n> and has problems with data reliability (aiui, equivilant to doing 'fsync\n> = false' for Postgres). InnoDB, again iirc, is transaction safe and\n> whatnot, and more akin to the default PostgreSQL setup.\n>\n> I expect some others will comment along these lines too, if my response\n> isn't entirely clear. :)\n\nIs fsync() on in your postgres config? If so, that's why you're slower. The \ndefault is to have it on for stability (writes are forced to disk). It is \nquite a bit slower than just allowing the write caches to do their job, but \nmore stable. MySQL does not force writes to disk.\n\n", "msg_date": "Wed, 21 Dec 2005 20:44:53 -0600", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in my program...\n\tany ideas why?" }, { "msg_contents": "Stephen Frost wrote:\n> * Madison Kelly ([email protected]) wrote:\n> \n>> If the performace difference comes from the 'COPY...' command being \n>>slower because of the automatic quoting can I somehow tell PostgreSQL \n>>that the data is pre-quoted? Could the performance difference be \n>>something else?\n> \n> \n> I doubt the issue is with the COPY command being slower than INSERTs\n> (I'd expect the opposite generally, actually...). What's the table type\n> of the MySQL tables? Is it MyISAM or InnoDB (I think those are the main\n> alternatives)? IIRC, MyISAM doesn't do ACID and isn't transaction safe,\n> and has problems with data reliability (aiui, equivilant to doing 'fsync\n> = false' for Postgres). InnoDB, again iirc, is transaction safe and\n> whatnot, and more akin to the default PostgreSQL setup.\n> \n> I expect some others will comment along these lines too, if my response\n> isn't entirely clear. :)\n> \n> \tStephen\n\nAh, that makes a lot of sense (I read about the 'fsync' issue before, \nnow that you mention it). I am not too familiar with MySQL but IIRC \nMyISAM is their open-source DB and InnoDB is their commercial one, ne? \nIf so, then I am running MyISAM.\n\n Here is the MySQL table. 
The main difference from the PostgreSQL \ntable is that the 'varchar(255)' columns are 'text' columns in PostgreSQL.\n\nmysql> DESCRIBE file_info_1;\n+-----------------+--------------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+-----------------+--------------+------+-----+---------+-------+\n| file_group_name | varchar(255) | YES | | NULL | |\n| file_group_uid | int(11) | | | 0 | |\n| file_mod_time | bigint(20) | | | 0 | |\n| file_name | varchar(255) | | | | |\n| file_parent_dir | varchar(255) | | MUL | | |\n| file_perm | int(11) | | | 0 | |\n| file_size | bigint(20) | | | 0 | |\n| file_type | char(1) | | | | |\n| file_user_name | varchar(255) | YES | | NULL | |\n| file_user_uid | int(11) | | | 0 | |\n| file_backup | char(1) | | MUL | i | |\n| file_display | char(1) | | | i | |\n| file_restore | char(1) | | | i | |\n+-----------------+--------------+------+-----+---------+-------+\n\n I will try turning off 'fsync' on my test box to see how much of a \nperformance gain I get and to see if it is close to what I am getting \nout of MySQL. If that does turn out to be the case though I will be able \nto comfortably continue recommending PostgreSQL from a stability point \nof view.\n\nThanks!!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Madison Kelly (Digimer)\n TLE-BU; The Linux Experience, Back Up\nMain Project Page: http://tle-bu.org\nCommunity Forum: http://forum.tle-bu.org\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Thu, 22 Dec 2005 01:58:51 -0500", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in my" }, { "msg_contents": "Madison,\n\nOn 12/21/05 10:58 PM, \"Madison Kelly\" <[email protected]> wrote:\n\n> Ah, that makes a lot of sense (I read about the 'fsync' issue before,\n> now that you mention it). I am not too familiar with MySQL but IIRC\n> MyISAM is their open-source DB and InnoDB is their commercial one, ne?\n> If so, then I am running MyISAM.\n\nYou can run either storage method with MySQL, I expect the default is\nMyISAM.\n\nCOPY performance with or without fsync was sped up recently nearly double in\nPostgresql. The Bizgres version (www.bizgres.org, www.greenplum.com) is the\nfastest, Postgres 8.1.1 is close, depending on how fast your disk I/O is (as\nI/O speed increases Bizgres gets faster).\n\nfsync isn't really an \"issue\" and I'd suggest you not run without it! We've\nfound that \"fdatasync\" as the wal sync method is actually a bit faster than\nfsync if you want a bit better speed.\n\nSo, I'd recommend you upgrade to either bizgres or Postgres 8.1.1 to get the\nmaximum COPY speed.\n\n> Here is the MySQL table. 
The main difference from the PostgreSQL\n> table is that the 'varchar(255)' columns are 'text' columns in PostgreSQL.\n\nShouldn't matter.\n \n> mysql> DESCRIBE file_info_1;\n> +-----------------+--------------+------+-----+---------+-------+\n> | Field | Type | Null | Key | Default | Extra |\n> +-----------------+--------------+------+-----+---------+-------+\n> | file_group_name | varchar(255) | YES | | NULL | |\n> | file_group_uid | int(11) | | | 0 | |\n> | file_mod_time | bigint(20) | | | 0 | |\n> | file_name | varchar(255) | | | | |\n> | file_parent_dir | varchar(255) | | MUL | | |\n> | file_perm | int(11) | | | 0 | |\n> | file_size | bigint(20) | | | 0 | |\n> | file_type | char(1) | | | | |\n> | file_user_name | varchar(255) | YES | | NULL | |\n> | file_user_uid | int(11) | | | 0 | |\n> | file_backup | char(1) | | MUL | i | |\n> | file_display | char(1) | | | i | |\n> | file_restore | char(1) | | | i | |\n> +-----------------+--------------+------+-----+---------+-------+\n\nWhat's a bigint(20)? Are you using \"numeric\" in Postgresql?\n \n> I will try turning off 'fsync' on my test box to see how much of a\n> performance gain I get and to see if it is close to what I am getting\n> out of MySQL. If that does turn out to be the case though I will be able\n> to comfortably continue recommending PostgreSQL from a stability point\n> of view.\n\nAgain - fsync is a small part of the performance - you will need to run\neither Postgres 8.1.1 or Bizgres to get good COPY speed.\n\n- Luke\n\n\n", "msg_date": "Wed, 21 Dec 2005 23:07:22 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in" } ]
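For the loading pattern described in this thread, the COPY path usually looks like the sketch below; the file paths are placeholders, and fsync / wal_sync_method live in postgresql.conf rather than in SQL (the advice above is to keep fsync on and try fdatasync):

    -- client-side bulk load from psql: no hand-built INSERT string and no
    -- manual quoting; COPY's text format handles escaping itself
    \copy file_info_1 FROM '/tmp/file_info_1.copy'

    -- server-side equivalent, if the file is readable by the server
    -- and the role is a superuser
    COPY file_info_1 FROM '/var/lib/postgresql/file_info_1.copy';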
[ { "msg_contents": "What version of postgres?\r\n\r\nCopy has been substantially improved in bizgres and also in 8.1.\r\n- Luke\r\n--------------------------\r\nSent from my BlackBerry Wireless Device\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] <[email protected]>\r\nTo: [email protected] <[email protected]>\r\nSent: Wed Dec 21 21:03:18 2005\r\nSubject: [PERFORM] MySQL is faster than PgSQL but a large margin in my program... any ideas why?\r\n\r\nHi all,\r\n\r\n On a user's request, I recently added MySQL support to my backup \r\nprogram which had been written for PostgreSQL exclusively until now. \r\nWhat surprises me is that MySQL is about 20%(ish) faster than PostgreSQL.\r\n\r\n Now, I love PostgreSQL and I want to continue recommending it as the \r\ndatabase engine of choice but it is hard to ignore a performance \r\ndifference like that.\r\n\r\n My program is a perl backup app that scans the content of a given \r\nmounted partition, 'stat's each file and then stores that data in the \r\ndatabase. To maintain certain data (the backup, restore and display \r\nvalues for each file) I first read in all the data from a given table \r\n(one table per partition) into a hash, drop and re-create the table, \r\nthen start (in PostgreSQL) a bulk 'COPY..' call through the 'psql' shell \r\napp.\r\n\r\n In MySQL there is no 'COPY...' equivalent so instead I generate a \r\nlarge 'INSERT INTO file_info_X (col1, col2, ... coln) VALUES (...), \r\n(blah) ... (blah);'. This doesn't support automatic quoting, obviously, \r\nso I manually quote my values before adding the value to the INSERT \r\nstatement. I suspect this might be part of the performance difference?\r\n\r\n I take the total time needed to update a partition (load old data \r\ninto hash + scan all files and prepare COPY/INSERT + commit new data) \r\nand devide by the number of seconds needed to get a score I call a \r\n'U.Rate). On average on my Pentium3 1GHz laptop I get U.Rate of ~4/500. \r\nOn MySQL though I usually get a U.Rate of ~7/800.\r\n\r\n If the performace difference comes from the 'COPY...' command being \r\nslower because of the automatic quoting can I somehow tell PostgreSQL \r\nthat the data is pre-quoted? Could the performance difference be \r\nsomething else?\r\n\r\n If it would help I can provide code samples. I haven't done so yet \r\nbecause it's a little convoluded. ^_^;\r\n\r\n Thanks as always!\r\n\r\nMadison\r\n\r\n\r\nWhere the big performance concern is when\r\n\r\n-- \r\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\r\n Madison Kelly (Digimer)\r\n TLE-BU; The Linux Experience, Back Up\r\nMain Project Page: http://tle-bu.org\r\nCommunity Forum: http://forum.tle-bu.org\r\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\r\n\r\n---------------------------(end of broadcast)---------------------------\r\nTIP 1: if posting/reading through Usenet, please send an appropriate\r\n subscribe-nomail command to [email protected] so that your\r\n message can get through to the mailing list cleanly\r\n\r\n", "msg_date": "Wed, 21 Dec 2005 22:33:14 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in" }, { "msg_contents": "Luke Lonergan wrote:\n> What version of postgres?\n> \n> Copy has been substantially improved in bizgres and also in 8.1.\n> - Luke\n\nCurrently 7.4 (what comes with Debian Sarge). I have run my program on \n8.0 but not since I have added MySQL support. 
I should run the tests on \nthe newer versions of both DBs (using v4.1 for MySQL which is also \nmature at this point).\n\nAs others mentioned though, so far the most likely explanation is the \n'fsync' being enabled on PostgreSQL.\n\nThanks for the reply!\n\nMadison\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Madison Kelly (Digimer)\n TLE-BU; The Linux Experience, Back Up\nMain Project Page: http://tle-bu.org\nCommunity Forum: http://forum.tle-bu.org\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Thu, 22 Dec 2005 02:02:47 -0500", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in" }, { "msg_contents": "Madison,\n\n\nOn 12/21/05 11:02 PM, \"Madison Kelly\" <[email protected]> wrote:\n\n> Currently 7.4 (what comes with Debian Sarge). I have run my program on\n> 8.0 but not since I have added MySQL support. I should run the tests on\n> the newer versions of both DBs (using v4.1 for MySQL which is also\n> mature at this point).\n\nYes, this is *definitely* your problem. Upgrade to Postgres 8.1.1 or\nBizgres 0_8_1 and your COPY speed could double without even changing fsync\n(depending on your disk speed). We typically get 12-14MB/s from Bizgres on\nOpteron CPUs and disk subsystems that can write at least 60MB/s. This means\nyou can load 100GB in 2 hours.\n\nNote that indexes will also slow down loading.\n \n- Luke\n\n\n", "msg_date": "Wed, 21 Dec 2005 23:10:43 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in" }, { "msg_contents": "Hi, Madison,\nHi, Luke,\n\nLuke Lonergan wrote:\n\n> Note that indexes will also slow down loading.\n\nFor large loading bunches, it often makes sense to temporarily drop the\nindices before the load, and recreate them afterwards, at least, if you\ndon't have normal users accessing the database concurrently.\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 22 Dec 2005 14:34:16 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in" }, { "msg_contents": "Agreed. I have a 13 million row table that gets a 100,000 new records every \nweek. There are six indexes on this table. Right about the time when it \nreached the 10 million row mark updating the table with new records started \nto take many hours if I left the indexes in place during the update. Indeed \nthere was even some suspicion that the indexes were starting to get corrupted \nduring the load. So I decided to fist drop the indexes when I needed to \nupdate the table. Now inserting 100,000 records into the table is nearly \ninstantaneous although it does take me a couple of hours to build the indexes \nanew. This is still big improvement since at one time it was taking almost \n12 hours to update the table with the indexes in place. 
\n\n\nJuan\n\nOn Thursday 22 December 2005 08:34, Markus Schaber wrote:\n> Hi, Madison,\n> Hi, Luke,\n>\n> Luke Lonergan wrote:\n> > Note that indexes will also slow down loading.\n>\n> For large loading bunches, it often makes sense to temporarily drop the\n> indices before the load, and recreate them afterwards, at least, if you\n> don't have normal users accessing the database concurrently.\n>\n> Markus\n", "msg_date": "Thu, 22 Dec 2005 21:44:32 -0500", "msg_from": "Juan Casero <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in" }, { "msg_contents": "\nOn Dec 22, 2005, at 9:44 PM, Juan Casero wrote:\n\n> Agreed. I have a 13 million row table that gets a 100,000 new \n> records every\n> week. There are six indexes on this table. Right about the time \n> when it\n\ni have some rather large tables that grow much faster than this (~1 \nmillion per day on a table with > 200m rows) and a few indexes. \nthere is no such slowness I see.\n\ndo you really need all those indexes?\n\n", "msg_date": "Fri, 23 Dec 2005 11:16:14 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MySQL is faster than PgSQL but a large margin in" } ]
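The pattern recommended in the thread above — drop the secondary indexes, bulk-load with COPY, rebuild the indexes, then refresh statistics — looks roughly like the sketch below. The table name, index name and column list (file_info_1, file_info_1_path_idx, etc.) are invented for illustration and are not from the posters' schemas; note that COPY ... FROM a server-side file needs superuser rights, and from a client psql's \copy form does the same job.

DROP INDEX file_info_1_path_idx;                    -- drop secondary indexes before the load
COPY file_info_1 (file_path, file_size, backup, restore, display)
    FROM '/tmp/file_info_1.copy';                   -- one bulk load instead of many INSERTs
CREATE INDEX file_info_1_path_idx ON file_info_1 (file_path);   -- rebuild afterwards
ANALYZE file_info_1;                                -- refresh planner statistics for the new data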
[ { "msg_contents": "Tom Lane wrote:\n> I'd expect plpgsql to suck at purely computational tasks, compared to\n> the other PLs, but to win at tasks involving database access. These\n\nThere you go...pl/pgsql is pretty much required learning (it's not\nhard). For classic data processing tasks, it is without peer. I would\ngeneralize that a large majority of tasks fall under this category.\npl/pgsql is quick, has a low memory profile, and you can cut sql\ndirectly in code instead of through a proxy object...I could go on and\non about how useful and important that is.\n\nmerlin\n\n\n", "msg_date": "Thu, 22 Dec 2005 08:29:53 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed of different procedural language " } ]
[ { "msg_contents": "Jan Dittmer <[email protected]> escreveu: What is your work_mem setting? I think the default is 1MB which is\nprobably too low as your trying to sort roughly 150000*100Bytes = 15MB.\n\nJan\n\n I think you would like to say 150000*896Bytes... Am I right? My default work_mem is 2048 and I changed to 200000... and pgsql_tmp directory is not used any more...but... \n \n Now the new numbers:\n \n Sort (cost=132929.22..133300.97 rows=148701 width=896) (actual time=3949.663..4029.618 rows=167710 loops=1)\n Sort Key: anocalc, cadastro, codvencto, parcela\n -> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript (cost=0.00..120154.28 rows=148701 width=896) (actual time=0.166..829.260 rows=167710 loops=1)\n Index Cond: (((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric))\n Total runtime: 4184.723 ms\n(5 rows)\n\n \n It is less than with work_mem set to 2000 but is it worthly? I�m afraind of swapping... are not those settings applied for all backends?\n \n Benkendorf\n \n \n\n\t\t\n---------------------------------\n Yahoo! doce lar. Fa�a do Yahoo! sua homepage.\nJan Dittmer <[email protected]> escreveu: What is your work_mem setting? I think the default is 1MB which isprobably too low as your trying to sort roughly 150000*100Bytes = 15MB.Jan I think you would like to say 150000*896Bytes... Am I right? My default work_mem is 2048 and I changed to 200000... and pgsql_tmp directory is not used any more...but...   Now the new numbers:   Sort  (cost=132929.22..133300.97 rows=148701 width=896) (actual time=3949.663..4029.618 rows=167710 loops=1)   Sort Key: anocalc, cadastro, codvencto, parcela   ->  Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript  (cost=0.00..120154.28 rows=148701 width=896) (actual time=0.166..829.260 rows=1677\n 10\n loops=1)         Index Cond: (((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric)) Total runtime: 4184.723 ms(5 rows)   It is less than with work_mem set to 2000 but is it worthly? I�m afraind of swapping... are not those settings applied for all backends?   Benkendorf    \n \nYahoo! doce lar. Fa�a do Yahoo! sua homepage.", "msg_date": "Thu, 22 Dec 2005 16:23:36 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY costs" } ]
[ { "msg_contents": "Hi, all.\n\n While working on algorithm of my project I came to question. Let it\nbe table like this (user+cookie pair is the primary key).\n\nINT user\nINT cookie\nINT count\n\n Periodically (with period 10 minutes) this PostgreSQL table\nupdated with my information.\n The main problem that some of pairs (user, cookie) may be already\nexists in PostgreSQL and must be updated, some not exists and must be\ninserted.\n\n My first way was to DELETE row with (user, cookie) pair which I'm\ngoing to update then INSERT new. This guarantees that there will not\nbe an error when (user, cookie) pair already exists in table. And\ncurrently it works by this way.\n But I think that it lead to highly fragmentation of table and it need\nto be VACUUMED and ANALYZED far more frequently...\n\n Second idea was to try to SELECT (user, cookie) pair and then UPDATE\nit if it exists or INSERT if not. I has thought that if UPDATE will\nrewrite same place in file with new count it may lead to more compact\ntable (file not grow and information about actual rows in file will\nnot changed). And, if actual file blocks containing (user, cookie)\npair will not be moved to new place in file, table need to be ANALYZED\nless frequently.\n But if UPDATE will actually insert new row in file, marking as 'free\nto use' previous block in file which was contain previous version of\nrow, then again, table need to be VACUUMED and ANALYZED far more\nfrequently... \n And this second idea will be completely waste of time and code.\nBecause write on C code which \"DELETE and INSERT\" is more portably\nthan \"SELECT than UPDATE if there are rows, or INSERT if there are\nnot\".\n\n\n So, can anyone explain me is the actual mechanism of UPDATE can save\nresources and tables from been highly fragmented? Or it gives same\nresults and problems and \"DELETE then INSERT\" is the best way?\n-- \nengineer\n\n", "msg_date": "Fri, 23 Dec 2005 14:02:05 +0500", "msg_from": "Anton Maksimenkov <[email protected]>", "msg_from_op": true, "msg_subject": "DELETE, INSERT vs SELECT, UPDATE || INSERT" }, { "msg_contents": "Anton Maksimenkov <[email protected]> writes:\n> Second idea was to try to SELECT (user, cookie) pair and then UPDATE\n> it if it exists or INSERT if not. I has thought that if UPDATE will\n> rewrite same place in file with new count it may lead to more compact\n> table (file not grow and information about actual rows in file will\n> not changed).\n\nYou're wasting your time, because Postgres doesn't work that way.\nUPDATE is really indistinguishable from DELETE+INSERT, and there will\nalways be a dead row afterwards, because under MVCC rules both versions\nof the row have to be left in the table for some time after your\ntransaction commits. See\nhttp://www.postgresql.org/docs/8.1/static/mvcc.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Dec 2005 10:30:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE, INSERT vs SELECT, UPDATE || INSERT " } ]
[ { "msg_contents": "Hi,\n \n We have more than 200 customers running 8.0.3 and two weeks ago started migration project to 8.1.1.After the first migration to 8.1.1 we had to return back to 8.0.3 because some applications were not working right.\n \n Our user told me that records are not returning more in the correct order, so I started logging and saw that the select clause wasn�t not used with the ORDER BY clause. It seemed a simple problem to be solved.\n \n I asked the programmers that they should add the ORDER BY clause if they need the rows in a certain order and they told me they could not do it because it will cost too much and the response time is bigger than not using ORDER BY. I disagreed with them because there was an index with the same order needed for the order by. Before starting a figth we decided to explain analyze both select types and discover who was right. For my surprise the select with order by was really more expensive than the select without the order by. I will not bet any more...;-)\n \n For some implementation reason in 8.0.3 the query is returning the rows in the correct order even without the order by but in 8.1.1 probably the implementation changed and the rows are not returning in the correct order.\n \n We need the 8.1 for other reasons but this order by behavior stopped the migration project.\n \n Some friends of the list tried to help us and I did some configuration changes like increased work_mem and changed the primary columns from numeric types to smallint/integer/bigint but even so the runtime and costs are far from the ones from the selects without the ORDER BY clause.\n \n What I can not understand is why the planner is not using the same retrieving method with the order by clause as without the order by clause. All the rows are retrieved in the correct order in both methods but one is much cheaper (without order by) than the other (with order by). 
Should not the planner choice that one?\n \n Can someone explain me why the planner is not choosing the same method used with the selects without the order by clause instead of using a sort that is much more expensive?\n \n Without order by:\nexplain analyze\nSELECT * FROM iparq.ARRIPT \nwhere \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO = 00\nand PARCELA >= 00 ) \nor \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO > 00 ) \nor \n(ANOCALC = 2005\nand CADASTRO > 19 ) \nor \n(ANOCALC > 2005 );\n Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript (cost=0.00..122255.35 rows=146602 width=897) (actual time=9.303..1609.987 rows=167710 loops=1)\n Index Cond: (((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric))\n Total runtime: 1712.456 ms\n(3 rows)\n \n \nWith order by:\nexplain analyze\nSELECT * FROM iparq.ARRIPT \nwhere \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO = 00\nand PARCELA >= 00 ) \nor \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO > 00 ) \nor \n(ANOCALC = 2005\nand CADASTRO > 19 ) \nor \n(ANOCALC > 2005 )\norder by ANOCALC asc, CADASTRO asc, CODVENCTO asc, PARCELA asc;\n Sort (cost=201296.59..201663.10 rows=146602 width=897) (actual time=9752.555..10342.363 rows=167710 loops=1)\n Sort Key: anocalc, cadastro, codvencto, parcela\n -> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript (cost=0.00..122255.35 rows=146602 width=897) (actual time=0.402..1425.085 rows=167710 loops=1)\n Index Cond: (((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric))\n Total runtime: 10568.290 ms\n(5 rows)\n \n Table definition:\n Table \"iparq.arript\"\n Column | Type | Modifiers\n-------------------+-----------------------+-----------\n anocalc | numeric(4,0) | not null\n cadastro | numeric(8,0) | not null\n codvencto | numeric(2,0) | not null\n parcela | numeric(2,0) | not null\n inscimob | character varying(18) | not null\n codvencto2 | numeric(2,0) | not null\n parcela2 | numeric(2,0) | not null\n codpropr | numeric(10,0) | not null\n dtaven | numeric(8,0) | not null\n anocalc2 | numeric(4,0) |\n...\n...\nIndexes:\n \"pk_arript\" PRIMARY KEY, btree (anocalc, cadastro, codvencto, parcela)\n \"iarchave04\" UNIQUE, btree (cadastro, anocalc, codvencto, parcela)\n \"iarchave02\" btree (inscimob, anocalc, codvencto2, parcela2)\n \"iarchave03\" btree (codpropr, dtaven)\n \"iarchave05\" btree (anocalc, inscimob, codvencto2, parcela2)\n \nBest regards and thank you very much in advance,\n \nCarlos Benkendorf\n", "msg_date": "Fri, 23 Dec 2005 12:34:39 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Order by behaviour" }, { "msg_contents": "I think whatever the reasons for the different query plans are (and if that \ncan be fixed) - you CANNOT assume that data comes in sorted order when you do \nnot use order by. Thats what every database does this way. So, use order by, \nor you'll be in trouble sooner or later.\n\nBest regards,\n\tMario Weilguni\n\n\nAm Freitag, 23. Dezember 2005 13:34 schrieb Carlos Benkendorf:\n> Hi,\n>\n> We have more than 200 customers running 8.0.3 and two weeks ago started\n> migration project to 8.1.1.After the first migration to 8.1.1 we had to\n> return back to 8.0.3 because some applications were not working right.\n>\n> Our user told me that records are not returning more in the correct\n> order, so I started logging and saw that the select clause wasn´t not used\n> with the ORDER BY clause. It seemed a simple problem to be solved.\n>\n> I asked the programmers that they should add the ORDER BY clause if they\n> need the rows in a certain order and they told me they could not do it\n> because it will cost too much and the response time is bigger than not\n> using ORDER BY. I disagreed with them because there was an index with the\n> same order needed for the order by. Before starting a figth we decided to\n> explain analyze both select types and discover who was right. For my\n> surprise the select with order by was really more expensive than the select\n> without the order by. 
I will not bet any more...;-)\n>\n> For some implementation reason in 8.0.3 the query is returning the rows\n> in the correct order even without the order by but in 8.1.1 probably the\n> implementation changed and the rows are not returning in the correct order.\n>\n> We need the 8.1 for other reasons but this order by behavior stopped the\n> migration project.\n>\n> Some friends of the list tried to help us and I did some configuration\n> changes like increased work_mem and changed the primary columns from\n> numeric types to smallint/integer/bigint but even so the runtime and costs\n> are far from the ones from the selects without the ORDER BY clause.\n>\n> What I can not understand is why the planner is not using the same\n> retrieving method with the order by clause as without the order by clause.\n> All the rows are retrieved in the correct order in both methods but one is\n> much cheaper (without order by) than the other (with order by). Should not\n> the planner choice that one?\n>\n> Can someone explain me why the planner is not choosing the same method\n> used with the selects without the order by clause instead of using a sort\n> that is much more expensive?\n>\n> Without order by:\n> explain analyze\n> SELECT * FROM iparq.ARRIPT\n> where\n> (ANOCALC = 2005\n> and CADASTRO = 19\n> and CODVENCTO = 00\n> and PARCELA >= 00 )\n> or\n> (ANOCALC = 2005\n> and CADASTRO = 19\n> and CODVENCTO > 00 )\n> or\n> (ANOCALC = 2005\n> and CADASTRO > 19 )\n> or\n> (ANOCALC > 2005 );\n> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript \n> (cost=0.00..122255.35 rows=146602 width=897) (actual time=9.303..1609.987\n> rows=167710 loops=1) Index Cond: (((anocalc = 2005::numeric) AND (cadastro\n> = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR\n> ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto >\n> 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR\n> (anocalc > 2005::numeric)) Total runtime: 1712.456 ms\n> (3 rows)\n>\n>\n> With order by:\n> explain analyze\n> SELECT * FROM iparq.ARRIPT\n> where\n> (ANOCALC = 2005\n> and CADASTRO = 19\n> and CODVENCTO = 00\n> and PARCELA >= 00 )\n> or\n> (ANOCALC = 2005\n> and CADASTRO = 19\n> and CODVENCTO > 00 )\n> or\n> (ANOCALC = 2005\n> and CADASTRO > 19 )\n> or\n> (ANOCALC > 2005 )\n> order by ANOCALC asc, CADASTRO asc, CODVENCTO asc, PARCELA asc;\n> Sort (cost=201296.59..201663.10 rows=146602 width=897) (actual\n> time=9752.555..10342.363 rows=167710 loops=1) Sort Key: anocalc, cadastro,\n> codvencto, parcela\n> -> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on\n> arript (cost=0.00..122255.35 rows=146602 width=897) (actual\n> time=0.402..1425.085 rows=167710 loops=1) Index Cond: (((anocalc =\n> 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric)\n> AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro =\n> 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric)\n> AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric)) Total runtime:\n> 10568.290 ms\n> (5 rows)\n>\n> Table definition:\n> Table \"iparq.arript\"\n> Column | Type | Modifiers\n> -------------------+-----------------------+-----------\n> anocalc | numeric(4,0) | not null\n> cadastro | numeric(8,0) | not null\n> codvencto | numeric(2,0) | not null\n> parcela | numeric(2,0) | not null\n> inscimob | character varying(18) | not null\n> codvencto2 | numeric(2,0) | not null\n> parcela2 | numeric(2,0) | not null\n> codpropr | 
numeric(10,0) | not null\n> dtaven | numeric(8,0) | not null\n> anocalc2 | numeric(4,0) |\n> ...\n> ...\n> Indexes:\n> \"pk_arript\" PRIMARY KEY, btree (anocalc, cadastro, codvencto, parcela)\n> \"iarchave04\" UNIQUE, btree (cadastro, anocalc, codvencto, parcela)\n> \"iarchave02\" btree (inscimob, anocalc, codvencto2, parcela2)\n> \"iarchave03\" btree (codpropr, dtaven)\n> \"iarchave05\" btree (anocalc, inscimob, codvencto2, parcela2)\n>\n> Best regards and thank you very much in advance,\n>\n> Carlos Benkendorf\n>\n>\n>\n> ---------------------------------\n> Yahoo! doce lar. Faça do Yahoo! sua homepage.\n", "msg_date": "Fri, 23 Dec 2005 13:51:12 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by behaviour" }, { "msg_contents": "We agree completely with you and that is what we are doing right now. \n \n But what I would like is really understand the reason for this behavior and be sure we can not do anything more to improve the runtimes.\n \n Benkendorf\n\n\nMario Weilguni <[email protected]> escreveu:\n I think whatever the reasons for the different query plans are (and if that \ncan be fixed) - you CANNOT assume that data comes in sorted order when you do \nnot use order by. Thats what every database does this way. So, use order by, \nor you'll be in trouble sooner or later.\n\nBest regards,\nMario Weilguni\n\n\nAm Freitag, 23. Dezember 2005 13:34 schrieb Carlos Benkendorf:\n> Hi,\n>\n> We have more than 200 customers running 8.0.3 and two weeks ago started\n> migration project to 8.1.1.After the first migration to 8.1.1 we had to\n> return back to 8.0.3 because some applications were not working right.\n>\n> Our user told me that records are not returning more in the correct\n> order, so I started logging and saw that the select clause wasn�t not used\n> with the ORDER BY clause. It seemed a simple problem to be solved.\n>\n> I asked the programmers that they should add the ORDER BY clause if they\n> need the rows in a certain order and they told me they could not do it\n> because it will cost too much and the response time is bigger than not\n> using ORDER BY. I disagreed with them because there was an index with the\n> same order needed for the order by. Before starting a figth we decided to\n> explain analyze both select types and discover who was right. For my\n> surprise the select with order by was really more expensive than the select\n> without the order by. I will not bet any more...;-)\n>\n> For some implementation reason in 8.0.3 the query is returning the rows\n> in the correct order even without the order by but in 8.1.1 probably the\n> implementation changed and the rows are not returning in the correct order.\n>\n> We need the 8.1 for other reasons but this order by behavior stopped the\n> migration project.\n>\n> Some friends of the list tried to help us and I did some configuration\n> changes like increased work_mem and changed the primary columns from\n> numeric types to smallint/integer/bigint but even so the runtime and costs\n> are far from the ones from the selects without the ORDER BY clause.\n>\n> What I can not understand is why the planner is not using the same\n> retrieving method with the order by clause as without the order by clause.\n> All the rows are retrieved in the correct order in both methods but one is\n> much cheaper (without order by) than the other (with order by). 
Should not\n> the planner choice that one?\n>\n> Can someone explain me why the planner is not choosing the same method\n> used with the selects without the order by clause instead of using a sort\n> that is much more expensive?\n>\n> Without order by:\n> explain analyze\n> SELECT * FROM iparq.ARRIPT\n> where\n> (ANOCALC = 2005\n> and CADASTRO = 19\n> and CODVENCTO = 00\n> and PARCELA >= 00 )\n> or\n> (ANOCALC = 2005\n> and CADASTRO = 19\n> and CODVENCTO > 00 )\n> or\n> (ANOCALC = 2005\n> and CADASTRO > 19 )\n> or\n> (ANOCALC > 2005 );\n> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript \n> (cost=0.00..122255.35 rows=146602 width=897) (actual time=9.303..1609.987\n> rows=167710 loops=1) Index Cond: (((anocalc = 2005::numeric) AND (cadastro\n> = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR\n> ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto >\n> 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR\n> (anocalc > 2005::numeric)) Total runtime: 1712.456 ms\n> (3 rows)\n>\n>\n> With order by:\n> explain analyze\n> SELECT * FROM iparq.ARRIPT\n> where\n> (ANOCALC = 2005\n> and CADASTRO = 19\n> and CODVENCTO = 00\n> and PARCELA >= 00 )\n> or\n> (ANOCALC = 2005\n> and CADASTRO = 19\n> and CODVENCTO > 00 )\n> or\n> (ANOCALC = 2005\n> and CADASTRO > 19 )\n> or\n> (ANOCALC > 2005 )\n> order by ANOCALC asc, CADASTRO asc, CODVENCTO asc, PARCELA asc;\n> Sort (cost=201296.59..201663.10 rows=146602 width=897) (actual\n> time=9752.555..10342.363 rows=167710 loops=1) Sort Key: anocalc, cadastro,\n> codvencto, parcela\n> -> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on\n> arript (cost=0.00..122255.35 rows=146602 width=897) (actual\n> time=0.402..1425.085 rows=167710 loops=1) Index Cond: (((anocalc =\n> 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric)\n> AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro =\n> 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric)\n> AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric)) Total runtime:\n> 10568.290 ms\n> (5 rows)\n>\n> Table definition:\n> Table \"iparq.arript\"\n> Column | Type | Modifiers\n> -------------------+-----------------------+-----------\n> anocalc | numeric(4,0) | not null\n> cadastro | numeric(8,0) | not null\n> codvencto | numeric(2,0) | not null\n> parcela | numeric(2,0) | not null\n> inscimob | character varying(18) | not null\n> codvencto2 | numeric(2,0) | not null\n> parcela2 | numeric(2,0) | not null\n> codpropr | numeric(10,0) | not null\n> dtaven | numeric(8,0) | not null\n> anocalc2 | numeric(4,0) |\n> ...\n> ...\n> Indexes:\n> \"pk_arript\" PRIMARY KEY, btree (anocalc, cadastro, codvencto, parcela)\n> \"iarchave04\" UNIQUE, btree (cadastro, anocalc, codvencto, parcela)\n> \"iarchave02\" btree (inscimob, anocalc, codvencto2, parcela2)\n> \"iarchave03\" btree (codpropr, dtaven)\n> \"iarchave05\" btree (anocalc, inscimob, codvencto2, parcela2)\n>\n> Best regards and thank you very much in advance,\n>\n> Carlos Benkendorf\n>\n>\n>\n> ---------------------------------\n> Yahoo! doce lar. Fa�a do Yahoo! sua homepage.\n\n\n\n\t\t\n---------------------------------\n Yahoo! doce lar. Fa�a do Yahoo! sua homepage.\n We agree completely with you and that is what we are doing right now.   But what I would like is really understand the reason for this behavior and be sure we can not do anything more to improve the runtimes.   
BenkendorfMario Weilguni <[email protected]> escreveu: I think whatever the reasons for the different query plans are (and if that can be fixed) - you CANNOT assume that data comes in sorted order when you do not use order by. Thats what every database does this way. So, use order by, or you'll be in trouble sooner or later.Best regards,Mario WeilguniAm Freitag, 23. Dezember 2005 13:34 schrieb Carlos Benkendorf:> Hi,>> We have more than 200 customers running 8.0.3 and two weeks ago started>\n migration project to 8.1.1.After the first migration to 8.1.1 we had to> return back to 8.0.3 because some applications were not working right.>> Our user told me that records are not returning more in the correct> order, so I started logging and saw that the select clause wasn�t not used> with the ORDER BY clause. It seemed a simple problem to be solved.>> I asked the programmers that they should add the ORDER BY clause if they> need the rows in a certain order and they told me they could not do it> because it will cost too much and the response time is bigger than not> using ORDER BY. I disagreed with them because there was an index with the> same order needed for the order by. Before starting a figth we decided to> explain analyze both select types and discover who was right. For my> surprise the select with order by was really more expensive than the select> without the orde\n r by. I\n will not bet any more...;-)>> For some implementation reason in 8.0.3 the query is returning the rows> in the correct order even without the order by but in 8.1.1 probably the> implementation changed and the rows are not returning in the correct order.>> We need the 8.1 for other reasons but this order by behavior stopped the> migration project.>> Some friends of the list tried to help us and I did some configuration> changes like increased work_mem and changed the primary columns from> numeric types to smallint/integer/bigint but even so the runtime and costs> are far from the ones from the selects without the ORDER BY clause.>> What I can not understand is why the planner is not using the same> retrieving method with the order by clause as without the order by clause.> All the rows are retrieved in the correct order in both methods but one is> much cheaper\n (without order by) than the other (with order by). 
Should not> the planner choice that one?>> Can someone explain me why the planner is not choosing the same method> used with the selects without the order by clause instead of using a sort> that is much more expensive?>> Without order by:> explain analyze> SELECT * FROM iparq.ARRIPT> where> (ANOCALC = 2005> and CADASTRO = 19> and CODVENCTO = 00> and PARCELA >= 00 )> or> (ANOCALC = 2005> and CADASTRO = 19> and CODVENCTO > 00 )> or> (ANOCALC = 2005> and CADASTRO > 19 )> or> (ANOCALC > 2005 );> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on arript > (cost=0.00..122255.35 rows=146602 width=897) (actual time=9.303..1609.987> rows=167710 loops=1) Index Cond: (((anocalc = 2005::numeric) AND (cadastro> = 19::numeric) AN\n D\n (codvencto = 0::numeric) AND (parcela >= 0::numeric)) OR> ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto >> 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro > 19::numeric)) OR> (anocalc > 2005::numeric)) Total runtime: 1712.456 ms> (3 rows)>>> With order by:> explain analyze> SELECT * FROM iparq.ARRIPT> where> (ANOCALC = 2005> and CADASTRO = 19> and CODVENCTO = 00> and PARCELA >= 00 )> or> (ANOCALC = 2005> and CADASTRO = 19> and CODVENCTO > 00 )> or> (ANOCALC = 2005> and CADASTRO > 19 )> or> (ANOCALC > 2005 )> order by ANOCALC asc, CADASTRO asc, CODVENCTO asc, PARCELA asc;> Sort (cost=201296.59..201663.10 rows=146602 width=897) (actual> time=9752.555..10342.363 rows=167710 loops=1) Sort Key: anocalc, cadastro,> codvencto, parcela&\n gt;\n -> Index Scan using pk_arript, pk_arript, pk_arript, pk_arript on> arript (cost=0.00..122255.35 rows=146602 width=897) (actual> time=0.402..1425.085 rows=167710 loops=1) Index Cond: (((anocalc => 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric)> AND (parcela >= 0::numeric)) OR ((anocalc = 2005::numeric) AND (cadastro => 19::numeric) AND (codvencto > 0::numeric)) OR ((anocalc = 2005::numeric)> AND (cadastro > 19::numeric)) OR (anocalc > 2005::numeric)) Total runtime:> 10568.290 ms> (5 rows)>> Table definition:> Table \"iparq.arript\"> Column | Type | Modifiers> -------------------+-----------------------+-----------> anocalc | numeric(4,0) | not null> cadastro | numeric(8,0) | not null> codvencto | numeric(2,0) | not null> parcela | numeric(2,0) | not null> inscimob | character varying(18) | not null> codvenc\n to2 |\n numeric(2,0) | not null> parcela2 | numeric(2,0) | not null> codpropr | numeric(10,0) | not null> dtaven | numeric(8,0) | not null> anocalc2 | numeric(4,0) |> ...> ...> Indexes:> \"pk_arript\" PRIMARY KEY, btree (anocalc, cadastro, codvencto, parcela)> \"iarchave04\" UNIQUE, btree (cadastro, anocalc, codvencto, parcela)> \"iarchave02\" btree (inscimob, anocalc, codvencto2, parcela2)> \"iarchave03\" btree (codpropr, dtaven)> \"iarchave05\" btree (anocalc, inscimob, codvencto2, parcela2)>> Best regards and thank you very much in advance,>> Carlos Benkendorf>>>> ---------------------------------> Yahoo! doce lar. Fa�a do Yahoo! sua homepage.\n \nYahoo! doce lar. Fa�a do Yahoo! 
sua homepage.", "msg_date": "Fri, 23 Dec 2005 13:32:35 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order by behaviour" }, { "msg_contents": "On 23.12.2005, at 13:34 Uhr, Carlos Benkendorf wrote:\n\n> For some implementation reason in 8.0.3 the query is returning the \n> rows in the correct order even without the order by but in 8.1.1 \n> probably the implementation changed and the rows are not returning \n> in the correct order.\n\nYou will never be sure to get rows in a specific order without an \n\"order by\".\n\nI don't know why PG is faster without ordering, perhaps others can \nhelp with that so you don't need a workaround like this:\n\nIf you can't force PostgreSQL to perform better on the ordered query, \nwhat about retrieving only the primary keys for the rows you want \nunordered in a subquery and using an \"where primaryKey in (...) order \nby ...\" statement with ordering the five rows?\n\nLike this:\n\nselect * from mytable where pk in (select pk from mytable where ...) \norder by ...;\n\nI don't know whether the query optimizer will flatten this query, but \nyou can try it.\n\ncug\n\n\n-- \nPharmaLine Essen, GERMANY and\nBig Nerd Ranch Europe - PostgreSQL Training, Feb. 2006, Rome, Italy\nhttp://www.bignerdranch.com/classes/postgresql.shtml", "msg_date": "Fri, 23 Dec 2005 14:34:20 +0100", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by behaviour" }, { "msg_contents": "Carlos Benkendorf wrote:\n\n> Hi,\n> \n> We have more than 200 customers running 8.0.3 and two weeks ago \n> started migration project to 8.1.1.After the first migration to 8.1.1 \n> we had to return back to 8.0.3 because some applications were not \n> working right.\n> \n> Our user told me that records are not returning more in the correct \n> order, so I started logging and saw that the select clause wasn´t not \n> used with the ORDER BY clause. It seemed a simple problem to be solved.\n> \n> I asked the programmers that they should add the ORDER BY clause if \n> they need the rows in a certain order and they told me they could not \n> do it because it will cost too much and the response time is bigger \n> than not using ORDER BY. I disagreed with them because there was an \n> index with the same order needed for the order by. Before starting a \n> figth we decided to explain analyze both select types and discover who \n> was right. For my surprise the sele ct with order by was really more \n> expensive than the select without the order by. I will not bet any \n> more...;-)\n> \n> For some implementation reason in 8.0.3 the query is returning the \n> rows in the correct order even without the order by but in 8.1.1 \n> probably the implementation changed and the rows are not returning in \n> the correct order.\n> \n> We need the 8.1 for other reasons but this order by behavior stopped \n> the migration project.\n> \n> Some friends of the list tried to help us and I did some configuration \n> changes like increased work_mem and changed the primary columns from \n> numeric types to smallint/integer/bigint but even so the runtime and \n> costs are far from the ones from the selects without the ORDER BY clause.\n> \n> What I can not understand is why the planner is not using the same \n> retrieving method with the order by clause as without the order by \n> clause. 
All the rows are retriev ed in the correct order in both \n> methods but one is much cheaper (without order by) than the other \n> (with order by). Should not the planner choice that one?\n> \n> Can someone explain me why the planner is not choosing the same method \n> used with the selects without the order by clause instead of using a \n> sort that is much more expensive?\n\nMaybe your table in the old database is clustered on an index that \ncovers all ordered columns? Then, a sequential fetch of all rows would \nprobably return them ordered. But there still should be no guarantee for \nthis because postgres might first return the rows that are already in \nmemory.\n\nJust making wild guesses.\n\n-- \nKrešimir Tonković\nZ-el d.o.o.\nIndustrijska cesta 28, 10360 Sesvete, Croatia\nTel: +385 1 2022 758\nFax: +385 1 2022 741\nWeb: www.chipoteka.hr\ne-mail: [email protected]\n\n\n\n", "msg_date": "Fri, 23 Dec 2005 15:31:15 +0100", "msg_from": "Kresimir Tonkovic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by behaviour" }, { "msg_contents": ">If you can't force PostgreSQL to perform better on the ordered query, \n>what about retrieving only the primary keys for the rows you want \n>unordered in a subquery and using an \"where primaryKey in (...) order \n>by ...\" statement with ordering the five rows?\n \n I appreciate your suggestion but I think I�m misunderstanding something, the select statement should return at about 150.000 rows, why 5 rows?\n \nGuido Neitzer <[email protected]> escreveu:\n On 23.12.2005, at 13:34 Uhr, Carlos Benkendorf wrote:\n\n> For some implementation reason in 8.0.3 the query is returning the \n> rows in the correct order even without the order by but in 8.1.1 \n> probably the implementation changed and the rows are not returning \n> in the correct order.\n\nYou will never be sure to get rows in a specific order without an \n\"order by\".\n\nI don't know why PG is faster without ordering, perhaps others can \nhelp with that so you don't need a workaround like this:\n\nIf you can't force PostgreSQL to perform better on the ordered query, \nwhat about retrieving only the primary keys for the rows you want \nunordered in a subquery and using an \"where primaryKey in (...) order \nby ...\" statement with ordering the five rows?\n \n \n\nLike this:\n\nselect * from mytable where pk in (select pk from mytable where ...) \norder by ...;\n\nI don't know whether the query optimizer will flatten this query, but \nyou can try it.\n\ncug\n\n\n-- \nPharmaLine Essen, GERMANY and\nBig Nerd Ranch Europe - PostgreSQL Training, Feb. 2006, Rome, Italy\nhttp://www.bignerdranch.com/classes/postgresql.shtml\n\n\n\n\n\n\n\t\t\n---------------------------------\n Yahoo! doce lar. Fa�a do Yahoo! sua homepage.\n>If you can't force PostgreSQL to perform better on the ordered query, >what about retrieving only the primary keys for the rows you want >unordered in a subquery and using an \"where primaryKey in (...) order >by ...\" statement with ordering the five rows?   I appreciate your suggestion but I think I�m misunderstanding something, the select statement should return at about 150.000 rows, why 5 rows? 
Guido Neitzer <[email protected]> escreveu: On 23.12.2005, at 13:34 Uhr, Carlos Benkendorf wrote:> For some implementation reason in 8.0.3 the query is returning the > rows in the correct order even without the order by but in 8.1.1 > probably the implementation changed and the rows are not returning > in the \n correct\n order.You will never be sure to get rows in a specific order without an \"order by\".I don't know why PG is faster without ordering, perhaps others can help with that so you don't need a workaround like this:If you can't force PostgreSQL to perform better on the ordered query, what about retrieving only the primary keys for the rows you want unordered in a subquery and using an \"where primaryKey in (...) order by ...\" statement with ordering the five rows?   Like this:select * from mytable where pk in (select pk from mytable where ...) order by ...;I don't know whether the query optimizer will flatten this query, but you can try it.cug-- PharmaLine Essen, GERMANY andBig Nerd Ranch Europe - PostgreSQL Training, Feb. 2006, Rome, Italyhttp://www.bignerdranch.com/classes/postgresql.shtml\n \nYahoo! doce lar. Fa�a do Yahoo! sua homepage.", "msg_date": "Fri, 23 Dec 2005 14:35:18 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order by behaviour" }, { "msg_contents": "On 23.12.2005, at 15:35 Uhr, Carlos Benkendorf wrote:\n\n> I appreciate your suggestion but I think I�m misunderstanding \n> something, the select statement should return at about 150.000 \n> rows, why 5 rows?\n\nI have looked at the wrong lines of the explain ... statement. Sorry, \nmy fault. With that many lines, I doubt that my workaround will do \nanything good ... :-/ I was just a little bit to fast ... looking at \nto many different \"explain ...\" (or similar) statements in the last \nweeks.\n\nSorry, my fault.\n\nOther idea: have you tried ordering the rows in memory? Is that \nfaster? From now looking better at the explain result, it seems to \nme, that the sorting takes most of the time:\n\nSort (cost=201296.59..201663.10 rows=146602 width=897) (actual \ntime=9752.555..10342.363 rows=167710 loops=1)\n\nHow large are the rows returned by your query? Do they fit completely \nin the memory during the sort? If PostgreSQL starts switching to temp \nfiles ... There was a discussion on that topic a few weeks ago ...\n\nPerhaps this may help:\n\n------------------------------\nwork_mem (integer)\n\n Specifies the amount of memory to be used by internal sort \noperations and hash tables before switching to temporary disk files. \nThe value is specified in kilobytes, and defaults to 1024 kilobytes \n(1 MB). Note that for a complex query, several sort or hash \noperations might be running in parallel; each one will be allowed to \nuse as much memory as this value specifies before it starts to put \ndata into temporary files. Also, several running sessions could be \ndoing such operations concurrently. So the total memory used could be \nmany times the value of work_mem; it is necessary to keep this fact \nin mind when choosing the value. Sort operations are used for ORDER \nBY, DISTINCT, and merge joins. 
Hash tables are used in hash joins, \nhash-based aggregation, and hash-based processing of IN subqueries.\n------------------------------\n\ncug", "msg_date": "Fri, 23 Dec 2005 16:03:36 +0100", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by behaviour" }, { "msg_contents": "Carlos Benkendorf <[email protected]> writes:\n> For some implementation reason in 8.0.3 the query is returning the rows in the correct order even without the order by but in 8.1.1 probably the implementation changed and the rows are not returning in the correct order.\n\nIt was pure luck that prior versions gave you the result you wanted ---\nas other people already noted, the ordering of results is never\nguaranteed unless you say ORDER BY. The way you phrased the query\ngave rise (before 8.1) to several independent index scans that just\nhappened to yield non-overlapping, individually sorted, segments of\nthe desired output, and so as long as the system executed those scans\nin the right order, you got your sorted result without explicitly asking\nfor it. But the system wasn't aware that it was giving you any such\nthing, and certainly wasn't going out of its way to do so.\n\nIn 8.1 we no longer generate that kind of plan --- OR'd index scans are\nhandled via bitmap-scan plans now, which are generally a lot faster,\nbut don't yield sorted output.\n\nYou could probably kluge around it by switching to a UNION ALL query:\n\nSELECT * FROM iparq.ARRIPT where \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO = 00\nand PARCELA >= 00 ) \nUNION ALL\nSELECT * FROM iparq.ARRIPT where \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO > 00 ) \nUNION ALL\nSELECT * FROM iparq.ARRIPT where \n(ANOCALC = 2005\nand CADASTRO > 19 ) \nUNION ALL\nSELECT * FROM iparq.ARRIPT where \n(ANOCALC > 2005 );\n\nAgain, the system has no idea that it's giving you data in any\nuseful overall order, so this technique might also break someday,\nbut it's good for the time being.\n\nOf course, all of these are ugly, klugy solutions. The correct way\nto solve your problem would be with a row comparison:\n\nSELECT * FROM iparq.ARRIPT \nwhere \n(ANOCALC, CADASTRO, CODVENCTO, PARCELA) >= (2005, 19, 00, 00)\nORDER BY ANOCALC, CADASTRO, CODVENCTO, PARCELA;\n\nPostgres doesn't currently support this (we take the syntax but don't\nimplement it per SQL spec, and don't understand the connection to an\nindex anyway :-() ... but sooner or later it'll get fixed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Dec 2005 10:54:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Order by behaviour " }, { "msg_contents": "YES.... 
it worked very nice....\n \n Using UNION with 8.0.3:\n \n Append (cost=0.00..164840.70 rows=232632 width=892) (actual time=0.350..28529.895 rows=167711 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..2.91 rows=1 width=892) (actual time=0.098..0.098 rows=0 loops=1)\n -> Index Scan using pk_arript on arript (cost=0.00..2.90 rows=1 width=892) (actual time=0.094..0.094 rows=0 loops=1)\n Index Cond: ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..14.00 rows=12 width=892) (actual time=0.249..0.425 rows=2 loops=1)\n -> Index Scan using pk_arript on arript (cost=0.00..13.88 rows=12 width=892) (actual time=0.041..0.053 rows=2 loops=1)\n Index Cond: ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric))\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..55949.61 rows=68413 width=892) (actual time=0.216..12324.475 rows=72697 loops=1)\n -> Index Scan using pk_arript on arript (cost=0.00..55265.48 rows=68413 width=892) (actual time=0.033..429.152 rows=72697 loops=1)\n Index Cond: ((anocalc = 2005::numeric) AND (cadastro > 19::numeric))\n -> Subquery Scan \"*SELECT* 4\" (cost=0.00..108874.19 rows=164206 width=892) (actual time=0.297..16054.064 rows=95012 loops=1)\n -> Index Scan using pk_arript on arript (cost=0.00..107232.13 rows=164206 width=892) (actual time=0.046..485.430 rows=95012 loops=1)\n Index Cond: (anocalc > 2005::numeric)\n Total runtime: 28621.053 ms\n(14 rows)\n \n NOT SO GOOD!\n \n But using with 8.1:\n \n Append (cost=0.00..117433.94 rows=171823 width=897) (actual time=0.126..697.004 rows=167710 loops=1)\n -> Index Scan using pk_arript on arript (cost=0.00..2.81 rows=1 width=897) (actual time=0.083..0.083 rows=0 loops=1)\n Index Cond: ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric))\n -> Index Scan using pk_arript on arript (cost=0.00..12.05 rows=11 width=897) (actual time=0.039..0.050 rows=2 loops=1)\n Index Cond: ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric))\n -> Index Scan using pk_arript on arript (cost=0.00..46950.74 rows=65125 width=897) (actual time=0.031..275.674 rows=72697 loops=1)\n Index Cond: ((anocalc = 2005::numeric) AND (cadastro > 19::numeric))\n -> Index Scan using pk_arript on arript (cost=0.00..68750.11 rows=106686 width=897) (actual time=0.042..272.257 rows=95011 loops=1)\n Index Cond: (anocalc > 2005::numeric)\n Total runtime: 786.670 ms\n\n Using 8.1 and changing NUMERIC primary key columns to INTEGERs.\n \n Append (cost=0.00..107767.19 rows=159082 width=826) (actual time=0.091..487.802 rows=167710 loops=1)\n -> Index Scan using pk_arript on arript (cost=0.00..2.81 rows=1 width=826) (actual time=0.067..0.067 rows=0 loops=1)\n Index Cond: ((anocalc = 2005) AND (cadastro = 19) AND (codvencto = 0) AND (parcela >= 0))\n -> Index Scan using pk_arript on arript (cost=0.00..11.21 rows=10 width=826) (actual time=0.020..0.026 rows=2 loops=1)\n Index Cond: ((anocalc = 2005) AND (cadastro = 19) AND (codvencto > 0))\n -> Index Scan using pk_arript on arript (cost=0.00..44454.18 rows=62866 width=826) (actual time=0.012..157.058 rows=72697 loops=1)\n Index Cond: ((anocalc = 2005) AND (cadastro > 19))\n -> Index Scan using pk_arript on arript (cost=0.00..61708.17 rows=96205 width=826) (actual time=0.044..183.768 rows=95011 loops=1)\n Index Cond: (anocalc > 2005)\n Total runtime: 571.221 ms\n(10 rows)\n \n It�s faster than our 
currently SELECT without ORDER BY (1712.456 ms)... it�s wonderful...\n \n We are aware about the risks of not using the ORDER BY clause .. but it�s a managed risk...\n \n Thank very much all the people who helped to solve this problem, especially Tom Lane!\n \n Thanks a lot!\n \n Benkendorf\n \nTom Lane <[email protected]> escreveu:\n Carlos Benkendorf writes:\n> For some implementation reason in 8.0.3 the query is returning the rows in the correct order even without the order by but in 8.1.1 probably the implementation changed and the rows are not returning in the correct order.\n\nIt was pure luck that prior versions gave you the result you wanted ---\nas other people already noted, the ordering of results is never\nguaranteed unless you say ORDER BY. The way you phrased the query\ngave rise (before 8.1) to several independent index scans that just\nhappened to yield non-overlapping, individually sorted, segments of\nthe desired output, and so as long as the system executed those scans\nin the right order, you got your sorted result without explicitly asking\nfor it. But the system wasn't aware that it was giving you any such\nthing, and certainly wasn't going out of its way to do so.\n\nIn 8.1 we no longer generate that kind of plan --- OR'd index scans are\nhandled via bitmap-scan plans now, which are generally a lot faster,\nbut don't yield sorted output.\n\nYou could probably kluge around it by switching to a UNION ALL query:\n\nSELECT * FROM iparq.ARRIPT where \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO = 00\nand PARCELA >= 00 ) \nUNION ALL\nSELECT * FROM iparq.ARRIPT where \n(ANOCALC = 2005\nand CADASTRO = 19\nand CODVENCTO > 00 ) \nUNION ALL\nSELECT * FROM iparq.ARRIPT where \n(ANOCALC = 2005\nand CADASTRO > 19 ) \nUNION ALL\nSELECT * FROM iparq.ARRIPT where \n(ANOCALC > 2005 );\n\nAgain, the system has no idea that it's giving you data in any\nuseful overall order, so this technique might also break someday,\nbut it's good for the time being.\n\nOf course, all of these are ugly, klugy solutions. The correct way\nto solve your problem would be with a row comparison:\n\nSELECT * FROM iparq.ARRIPT \nwhere \n(ANOCALC, CADASTRO, CODVENCTO, PARCELA) >= (2005, 19, 00, 00)\nORDER BY ANOCALC, CADASTRO, CODVENCTO, PARCELA;\n\nPostgres doesn't currently support this (we take the syntax but don't\nimplement it per SQL spec, and don't understand the connection to an\nindex anyway :-() ... but sooner or later it'll get fixed.\n\nregards, tom lane\n \n\n\n\t\t\n---------------------------------\n Yahoo! doce lar. Fa�a do Yahoo! sua homepage.\nYES.... it worked very nice....   
Using UNION with 8.0.3:

 Append  (cost=0.00..164840.70 rows=232632 width=892) (actual time=0.350..28529.895 rows=167711 loops=1)
   ->  Subquery Scan \"*SELECT* 1\"  (cost=0.00..2.91 rows=1 width=892) (actual time=0.098..0.098 rows=0 loops=1)
         ->  Index Scan using pk_arript on arript  (cost=0.00..2.90 rows=1 width=892) (actual time=0.094..0.094 rows=0 loops=1)
               Index Cond: ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric))
   ->  Subquery Scan \"*SELECT* 2\"  (cost=0.00..14.00 rows=12 width=892) (actual time=0.249..0.425 rows=2 loops=1)
         ->  Index Scan using pk_arript on arript  (cost=0.00..13.88 rows=12 width=892) (actual time=0.041..0.053 rows=2 loops=1)
               Index Cond: ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric))
   ->  Subquery Scan \"*SELECT* 3\"  (cost=0.00..55949.61 rows=68413 width=892) (actual time=0.216..12324.475 rows=72697 loops=1)
         ->  Index Scan using pk_arript on arript  (cost=0.00..55265.48 rows=68413 width=892) (actual time=0.033..429.152 rows=72697 loops=1)
               Index Cond: ((anocalc = 2005::numeric) AND (cadastro > 19::numeric))
   ->  Subquery Scan \"*SELECT* 4\"  (cost=0.00..108874.19 rows=164206 width=892) (actual time=0.297..16054.064 rows=95012 loops=1)
         ->  Index Scan using pk_arript on arript  (cost=0.00..107232.13 rows=164206 width=892) (actual time=0.046..485.430 rows=95012 loops=1)
               Index Cond: (anocalc > 2005::numeric)
 Total runtime: 28621.053 ms
(14 rows)

NOT SO GOOD!

But using it with 8.1:

 Append  (cost=0.00..117433.94 rows=171823 width=897) (actual time=0.126..697.004 rows=167710 loops=1)
   ->  Index Scan using pk_arript on arript  (cost=0.00..2.81 rows=1 width=897) (actual time=0.083..0.083 rows=0 loops=1)
         Index Cond: ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto = 0::numeric) AND (parcela >= 0::numeric))
   ->  Index Scan using pk_arript on arript  (cost=0.00..12.05 rows=11 width=897) (actual time=0.039..0.050 rows=2 loops=1)
         Index Cond: ((anocalc = 2005::numeric) AND (cadastro = 19::numeric) AND (codvencto > 0::numeric))
   ->  Index Scan using pk_arript on arript  (cost=0.00..46950.74 rows=65125 width=897) (actual time=0.031..275.674 rows=72697 loops=1)
         Index Cond: ((anocalc = 2005::numeric) AND (cadastro > 19::numeric))
   ->  Index Scan using pk_arript on arript  (cost=0.00..68750.11 rows=106686 width=897) (actual time=0.042..272.257 rows=95011 loops=1)
         Index Cond: (anocalc > 2005::numeric)
 Total runtime: 786.670 ms

Using 8.1 and changing NUMERIC primary key columns to INTEGERs:

 Append  (cost=0.00..107767.19 rows=159082 width=826) (actual time=0.091..487.802 rows=167710 loops=1)
   ->  Index Scan using pk_arript on arript  (cost=0.00..2.81 rows=1 width=826) (actual time=0.067..0.067 rows=0 loops=1)
         Index Cond: ((anocalc = 2005) AND (cadastro = 19) AND (codvencto = 0) AND (parcela >= 0))
   ->  Index Scan using pk_arript on arript  (cost=0.00..11.21 rows=10 width=826) (actual time=0.020..0.026 rows=2 loops=1)
         Index Cond: ((anocalc = 2005) AND (cadastro = 19) AND (codvencto > 0))
   ->  Index Scan using pk_arript on arript  (cost=0.00..44454.18 rows=62866 width=826) (actual time=0.012..157.058 rows=72697 loops=1)
         Index Cond: ((anocalc = 2005) AND (cadastro > 19))
   ->  Index Scan using pk_arript on arript  (cost=0.00..61708.17 rows=96205 width=826) (actual time=0.044..183.768 rows=95011 loops=1)
         Index Cond: (anocalc > 2005)
 Total runtime: 571.221 ms
(10 rows)

It's faster than our current SELECT without ORDER BY (1712.456 ms)... it's wonderful...

We are aware of the risks of not using the ORDER BY clause... but it's a managed risk...

Thanks very much to all the people who helped to solve this problem, especially Tom Lane!

Thanks a lot!

Benkendorf

Tom Lane <[email protected]> escreveu:

Carlos Benkendorf writes:
> For some implementation reason in 8.0.3 the query is returning the rows
> in the correct order even without the order by but in 8.1.1 probably the
> implementation changed and the rows are not returning in the correct order.

It was pure luck that prior versions gave you the result you wanted ---
as other people already noted, the ordering of results is never
guaranteed unless you say ORDER BY.  The way you phrased the query
gave rise (before 8.1) to several independent index scans that just
happened to yield non-overlapping, individually sorted, segments of
the desired output, and so as long as the system executed those scans
in the right order, you got your sorted result without explicitly asking
for it.  But the system wasn't aware that it was giving you any such
thing, and certainly wasn't going out of its way to do so.

In 8.1 we no longer generate that kind of plan --- OR'd index scans are
handled via bitmap-scan plans now, which are generally a lot faster,
but don't yield sorted output.

You could probably kluge around it by switching to a UNION ALL query:

SELECT * FROM iparq.ARRIPT
 where (ANOCALC = 2005
 and CADASTRO = 19
 and CODVENCTO = 00
 and PARCELA >= 00)
UNION ALL
SELECT * FROM iparq.ARRIPT
 where (ANOCALC = 2005
 and CADASTRO = 19
 and CODVENCTO > 00)
UNION ALL
SELECT * FROM iparq.ARRIPT
 where (ANOCALC = 2005
 and CADASTRO > 19)
UNION ALL
SELECT * FROM iparq.ARRIPT
 where (ANOCALC > 2005);

Again, the system has no idea that it's giving you data in any
useful overall order, so this technique might also break someday,
but it's good for the time being.

Of course, all of these are ugly, klugy solutions.  The correct way
to solve your problem would be with a row comparison:

SELECT * FROM iparq.ARRIPT
 where (ANOCALC, CADASTRO, CODVENCTO, PARCELA) >= (2005, 19, 00, 00)
ORDER BY ANOCALC, CADASTRO, CODVENCTO, PARCELA;

Postgres doesn't currently support this (we take the syntax but don't
implement it per SQL spec, and don't understand the connection to an
index anyway :-() ... but sooner or later it'll get fixed.

regards, tom lane
", "msg_date": "Sat, 24 Dec 2005 01:49:00 +0000 (GMT)", "msg_from": "Carlos Benkendorf <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Order by behaviour " } ]
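The row-comparison form Tom Lane recommends was not implemented per spec in 8.1, but its logical expansion can be written by hand. The sketch below is not from the thread itself; it reuses the thread's column names and constants, and the 8.1 planner will most likely answer it with a bitmap OR plus a sort rather than a single ordered scan of pk_arript, so treat it as a readability alternative to the UNION ALL rather than a promise of the same plan:

  SELECT * FROM iparq.ARRIPT
   WHERE anocalc > 2005
      OR (anocalc = 2005 AND cadastro > 19)
      OR (anocalc = 2005 AND cadastro = 19 AND codvencto > 0)
      OR (anocalc = 2005 AND cadastro = 19 AND codvencto = 0 AND parcela >= 0)
   ORDER BY anocalc, cadastro, codvencto, parcela;

Unlike the bare UNION ALL, the explicit ORDER BY makes the ordering guarantee part of the query instead of an accident of the plan.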
[ { "msg_contents": "Frank, \n\n> You definitely DO NOT want to do RAID 5 on a database server. That\n> is probably the worst setup you could have, I've seen it have lower\n> performance than just a single hard disk. \n\nI've seen that on RAID0 and RAID10 as well.\n\nThis is more about the quality and modernity of the RAID controller than\nanything else at this point, although there are some theoretical\nadvantages of RAID10 from a random seek standpoint even if the adapter\nCPU is infinitely fast at checksumming. We're using RAID5 in practice\nfor OLAP / Data Warehousing systems very successfully using the newest\nRAID cards from 3Ware (9550SX).\n\nNote that host-based SCSI raid cards from LSI, Adaptec, Intel, Dell, HP\nand others have proven to have worse performance than a single disk\ndrive in many cases, whether for RAID0 or RAID5. In most circumstances\nI've seen, people don't even notice until they write a message to a\nmailing list about \"my query runs slowly on xxx dbms\". In many cases,\nafter they run a simple sequential transfer rate test using dd, they see\nthat their RAID controller is the culprit.\n\nRecently, I helped a company named DeepData to improve their dbms\nperformance, which was a combination of moving them to software RAID50\non Linux and getting them onto Bizgres. The disk subsystem sped up on\nthe same hardware (minus the HW RAID card) by over a factor of 10. The\ndownside is that SW RAID is a pain in the neck for management - you have\nto shut down the Linux host when a disk fails to replace it.\n\n- Luke\n\n", "msg_date": "Sat, 24 Dec 2005 20:51:15 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Sat, 24 Dec 2005, Luke Lonergan wrote:\n\n> Recently, I helped a company named DeepData to improve their dbms\n> performance, which was a combination of moving them to software RAID50\n> on Linux and getting them onto Bizgres. The disk subsystem sped up on\n> the same hardware (minus the HW RAID card) by over a factor of 10. The\n> downside is that SW RAID is a pain in the neck for management - you have\n> to shut down the Linux host when a disk fails to replace it.\n\nLuke, you should not need to shut down the linux host when a disk fails.\n\nyou should be able to use mdadm to mark the drive as failed, then remove \nit from the system and replace it, then use mdadm to add the drive to the \narray.\n\nI'm fighting through a double disk failure on my system at home and when I \nhit a bad spot on a drive (failing it from the array) I can just re-add it \nwithout having to restart everything (if it's the second drive I will have \nto stop and restart the array, but that's becouse the entire array has \nfailed at that point)\n\nnow hot-swap may not be supported on all interface types, that may be what \nyou have run into, but with SCSI or SATA you should be able to hot-swap \nwith the right controller.\n\nDavid Lang\n", "msg_date": "Sat, 24 Dec 2005 19:03:20 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "David,\n\n> now hot-swap may not be supported on all interface types, that may be what \n> you have run into, but with SCSI or SATA you should be able to hot-swap \n> with the right controller.\n\nThat's actually the problem - Linux hot swap is virtually non-functional for SCSI. 
You can write into the proper places in /proc, then remove and rescan to get a new drive up, but I've found that the resulting OS state is flaky. This is true of the latest 2.6 kernels and LSI and Adaptec SCSI controllers.\n\nThe problems I've seen are with Linux, not the controllers.\n\n- Luke\n\n\n", "msg_date": "Sat, 24 Dec 2005 22:13:43 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Luke Lonergan wrote:\n> Note that host-based SCSI raid cards from LSI, Adaptec, Intel, Dell, HP\n> and others have proven to have worse performance than a single disk\n> drive in many cases, whether for RAID0 or RAID5. In most circumstances\n\nThis is my own experience. Running a LSI MegaRAID in pure passthrough \nmode + Linux software RAID10 is a ton faster than configuring the RAID \nvia the LSI card. One of the things I've noticed is that the card does \nnot seem to be able to parallel read on mirrors. While looking at iostat \nunder Linux, I can see software RAID1 reading all drives and the MD \nnumber adding up to the sum of all drives.\n\nThe ARECA SATA controller I just got though doesn't seem to exhibit \nthese problems. Performance is a few % points above Linux software RAID \nat lower CPU usage. In fact, I'm getting better single-threaded \nbandwidth on a 4x7200RPM SATA config versus a 6x15K SCSI config on the \nLSI. The drives are bigger for the SATA drive (300GB) versus 36GB for \nthe SCSI so that means the heads don't have to move any where as much \nand can stay on the fast portion of the disk. Haven't had a chance to \ntest multi-user DB between the two setup though.\n", "msg_date": "Sat, 24 Dec 2005 19:43:52 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Luke Lonergan wrote:\n\n>David,\n>\n> \n>\n>>now hot-swap may not be supported on all interface types, that may be what \n>>you have run into, but with SCSI or SATA you should be able to hot-swap \n>>with the right controller.\n>> \n>>\n>\n>That's actually the problem - Linux hot swap is virtually non-functional for SCSI. You can write into the proper places in /proc, then remove and rescan to get a new drive up, but I've found that the resulting OS state is flaky. This is true of the latest 2.6 kernels and LSI and Adaptec SCSI controllers.\n>\n>The problems I've seen are with Linux, not the controllers.\n> \n>\nInteresting, I have had zero problems with Linux and SATA with LSI \ncontrollers and hot plug. I wonder what the difference is. The LSI \ncontroller even though SATA just uses the scsi driver.\n\nJoshua D. Drake\n\n>- Luke\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n> \n>\n\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n\n", "msg_date": "Sat, 24 Dec 2005 20:18:55 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" 
}, { "msg_contents": "On Sat, 24 Dec 2005, Luke Lonergan wrote:\n\n> David,\n>\n>> now hot-swap may not be supported on all interface types, that may be what\n>> you have run into, but with SCSI or SATA you should be able to hot-swap\n>> with the right controller.\n>\n> That's actually the problem - Linux hot swap is virtually non-functional for SCSI. You can write into the proper places in /proc, then remove and rescan to get a new drive up, but I've found that the resulting OS state is flaky. This is true of the latest 2.6 kernels and LSI and Adaptec SCSI controllers.\n>\n> The problems I've seen are with Linux, not the controllers.\n\nThanks for the clarification, I knew that PATA didn't do hotswap, and I've \nseen discussions on the linux-kernel list about SATA hotswap being worked \non, but I thought that scsi handled it. how recent a kernel have you had \nproblems with?\n\nDavid Lang\n", "msg_date": "Sun, 25 Dec 2005 04:13:57 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Sun, Dec 25, 2005 at 04:13:57AM -0800, David Lang wrote:\n> Thanks for the clarification, I knew that PATA didn't do hotswap, and I've \n> seen discussions on the linux-kernel list about SATA hotswap being worked \n> on, but I thought that scsi handled it. how recent a kernel have you had \n> problems with?\n\nIs has largely worked for us, even though it's a bit hackish -- you _must_\ndisconnect the drive properly in the kernel before ejecting it physically,\nthough, or it will never reconnect. At least that's how it is with our\nAdaptec 19160.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sun, 25 Dec 2005 13:29:19 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "On Sat, Dec 24, 2005 at 22:13:43 -0500,\n Luke Lonergan <[email protected]> wrote:\n> David,\n> \n> > now hot-swap may not be supported on all interface types, that may be what \n> > you have run into, but with SCSI or SATA you should be able to hot-swap \n> > with the right controller.\n> \n> That's actually the problem - Linux hot swap is virtually non-functional for SCSI. You can write into the proper places in /proc, then remove and rescan to get a new drive up, but I've found that the resulting OS state is flaky. This is true of the latest 2.6 kernels and LSI and Adaptec SCSI controllers.\n> \n> The problems I've seen are with Linux, not the controllers.\n\nThe other option is to keep hot spares available so that you can have a failure\nor two before you have to pull drives out. This might allow you to get to a\nmaintenance window to swap out the bad drives.\n", "msg_date": "Sun, 25 Dec 2005 09:15:43 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Have you done any benchmarking of the 9550SX against a software raid \nconfiguration? \n\nLuke Lonergan wrote:\n\n>Frank, \n>\n> \n>\n>> You definitely DO NOT want to do RAID 5 on a database server. That\n>> is probably the worst setup you could have, I've seen it have lower\n>> performance than just a single hard disk. 
\n>> \n>>\n>\n>I've seen that on RAID0 and RAID10 as well.\n>\n>This is more about the quality and modernity of the RAID controller than\n>anything else at this point, although there are some theoretical\n>advantages of RAID10 from a random seek standpoint even if the adapter\n>CPU is infinitely fast at checksumming. We're using RAID5 in practice\n>for OLAP / Data Warehousing systems very successfully using the newest\n>RAID cards from 3Ware (9550SX).\n>\n>Note that host-based SCSI raid cards from LSI, Adaptec, Intel, Dell, HP\n>and others have proven to have worse performance than a single disk\n>drive in many cases, whether for RAID0 or RAID5. In most circumstances\n>I've seen, people don't even notice until they write a message to a\n>mailing list about \"my query runs slowly on xxx dbms\". In many cases,\n>after they run a simple sequential transfer rate test using dd, they see\n>that their RAID controller is the culprit.\n>\n>Recently, I helped a company named DeepData to improve their dbms\n>performance, which was a combination of moving them to software RAID50\n>on Linux and getting them onto Bizgres. The disk subsystem sped up on\n>the same hardware (minus the HW RAID card) by over a factor of 10. The\n>downside is that SW RAID is a pain in the neck for management - you have\n>to shut down the Linux host when a disk fails to replace it.\n>\n>- Luke\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: explain analyze is your friend\n> \n>\n\n-- \n*Benjamin Arai*\[email protected] <emailto:[email protected]>\nhttp://www.benjaminarai.com\n\n\n\n\n\n\nHave you done any benchmarking of the 9550SX against a software raid\nconfiguration?  \n\nLuke Lonergan wrote:\n\nFrank, \n\n \n\n You definitely DO NOT want to do RAID 5 on a database server. That\n is probably the worst setup you could have, I've seen it have lower\n performance than just a single hard disk. \n \n\n\nI've seen that on RAID0 and RAID10 as well.\n\nThis is more about the quality and modernity of the RAID controller than\nanything else at this point, although there are some theoretical\nadvantages of RAID10 from a random seek standpoint even if the adapter\nCPU is infinitely fast at checksumming. We're using RAID5 in practice\nfor OLAP / Data Warehousing systems very successfully using the newest\nRAID cards from 3Ware (9550SX).\n\nNote that host-based SCSI raid cards from LSI, Adaptec, Intel, Dell, HP\nand others have proven to have worse performance than a single disk\ndrive in many cases, whether for RAID0 or RAID5. In most circumstances\nI've seen, people don't even notice until they write a message to a\nmailing list about \"my query runs slowly on xxx dbms\". In many cases,\nafter they run a simple sequential transfer rate test using dd, they see\nthat their RAID controller is the culprit.\n\nRecently, I helped a company named DeepData to improve their dbms\nperformance, which was a combination of moving them to software RAID50\non Linux and getting them onto Bizgres. The disk subsystem sped up on\nthe same hardware (minus the HW RAID card) by over a factor of 10. 
The\ndownside is that SW RAID is a pain in the neck for management - you have\nto shut down the Linux host when a disk fails to replace it.\n\n- Luke\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n \n\n\n-- \nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com", "msg_date": "Mon, 26 Dec 2005 01:22:09 -0800", "msg_from": "Benjamin Arai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Benjamin,\n\n> Have you done any benchmarking of the 9550SX against a software raid configuration? \n\n \nInteresting - no, not on SATA, mostly because I've had awful luck with Linux drivers and SATA. The popular manufacturers of SATA to PCI bridge chipsets are Silicon Image and Highpoint, and I've not seen Linux work with them at any reasonable performance yet. I've also had problems with Adaptec's cards - I think they manufacture their own SATA to PCI chipset as well. So far, I've only had good luck with the on-chipset Intel SATA implementation. I think the problems I've had could be entirely driver-related, but in the end it doesn't matter if you can't find drivers that work for Linux.\n \nThe other problem is getting enough SATA connections for the number of disks we want. I do have two new Areca SATA RAID cards and I'm going to benchmark those against the 3Ware 9550SX with 2 x 8 = 16 disks on one host.\n \nI guess we could run the HW RAID controllers in JBOD mode to get a good driver / chipset configuration for software RAID, but frankly I prefer HW RAID if it performs well. So far the SATA host-based RAID is blowing the doors off of every other HW RAID solution I've tested.\n \n- Luke\n\n", "msg_date": "Mon, 26 Dec 2005 07:13:22 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Have you have any experience rebuilding arrays in linux using the 3Ware \nutilities? If so, did it work well?\n\nLuke Lonergan wrote:\n\n>Benjamin,\n>\n> \n>\n>>Have you done any benchmarking of the 9550SX against a software raid configuration? \n>> \n>>\n>\n> \n>Interesting - no, not on SATA, mostly because I've had awful luck with Linux drivers and SATA. The popular manufacturers of SATA to PCI bridge chipsets are Silicon Image and Highpoint, and I've not seen Linux work with them at any reasonable performance yet. I've also had problems with Adaptec's cards - I think they manufacture their own SATA to PCI chipset as well. So far, I've only had good luck with the on-chipset Intel SATA implementation. I think the problems I've had could be entirely driver-related, but in the end it doesn't matter if you can't find drivers that work for Linux.\n> \n>The other problem is getting enough SATA connections for the number of disks we want. I do have two new Areca SATA RAID cards and I'm going to benchmark those against the 3Ware 9550SX with 2 x 8 = 16 disks on one host.\n> \n>I guess we could run the HW RAID controllers in JBOD mode to get a good driver / chipset configuration for software RAID, but frankly I prefer HW RAID if it performs well. So far the SATA host-based RAID is blowing the doors off of every other HW RAID solution I've tested.\n> \n>- Luke\n>\n> \n>\n\n-- \n*Benjamin Arai*\[email protected] <emailto:[email protected]>\nhttp://www.benjaminarai.com\n\n\n\n\n\n\nHave you have any experience rebuilding arrays in linux using the 3Ware\nutilities?  
If so, did it work well?\n\nLuke Lonergan wrote:\n\nBenjamin,\n\n \n\nHave you done any benchmarking of the 9550SX against a software raid configuration? \n \n\n\n \nInteresting - no, not on SATA, mostly because I've had awful luck with Linux drivers and SATA. The popular manufacturers of SATA to PCI bridge chipsets are Silicon Image and Highpoint, and I've not seen Linux work with them at any reasonable performance yet. I've also had problems with Adaptec's cards - I think they manufacture their own SATA to PCI chipset as well. So far, I've only had good luck with the on-chipset Intel SATA implementation. I think the problems I've had could be entirely driver-related, but in the end it doesn't matter if you can't find drivers that work for Linux.\n \nThe other problem is getting enough SATA connections for the number of disks we want. I do have two new Areca SATA RAID cards and I'm going to benchmark those against the 3Ware 9550SX with 2 x 8 = 16 disks on one host.\n \nI guess we could run the HW RAID controllers in JBOD mode to get a good driver / chipset configuration for software RAID, but frankly I prefer HW RAID if it performs well. So far the SATA host-based RAID is blowing the doors off of every other HW RAID solution I've tested.\n \n- Luke\n\n \n\n\n-- \nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com", "msg_date": "Mon, 26 Dec 2005 10:21:42 -0800", "msg_from": "Benjamin Arai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Benjamin,\n\nOn 12/26/05 10:21 AM, \"Benjamin Arai\" <[email protected]> wrote:\n\n> Have you have any experience rebuilding arrays in linux using the 3Ware\n> utilities? If so, did it work well?\n\nSure we have - nowadays with disks failing as much as they do how could we\nnot? ;-)\n\n3Ware has some *nice* tools - including a web browser utility for managing\nthe RAID. Rebuilds have been super easy - and the e-mail notification is\nfine. They even have some decent migration options.\n\nWhat they don't have are tools like snapshot backup, like EMC has, or SRDF\nor any of the enterprise SAN features. We don't need them because Bizgres\nMPP takes care of the need in software, but some people have become\naccustomed to the features for other uses.\n\nWe're pretty happy with 3Ware, but their new 9550SX is, well, new. We\nmanaged to find a good enough combination of driver and firmware to make it\nwork well on CentOs 4.1 and that's good enough for us, but there are\ndefinitely some issues with some combinations now. Lastly, you do have to\nset the block device readahead to 16MB to get performance.\n\n- Luke \n\n\n", "msg_date": "Mon, 26 Dec 2005 10:50:34 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" }, { "msg_contents": "Yes - they work excellently. I have several medium and large servers\nrunning 3ware 9500S series cards with great success. We have\nrebuilding many failed RAID 10s over the course with no problems.\n\nAlex\n\nOn 12/26/05, Benjamin Arai <[email protected]> wrote:\n> Have you have any experience rebuilding arrays in linux using the 3Ware\n> utilities? If so, did it work well?\n>\n>\n> Luke Lonergan wrote:\n> Benjamin,\n>\n>\n>\n> Have you done any benchmarking of the 9550SX against a software raid\n> configuration?\n>\n>\n> Interesting - no, not on SATA, mostly because I've had awful luck with Linux\n> drivers and SATA. 
The popular manufacturers of SATA to PCI bridge chipsets\n> are Silicon Image and Highpoint, and I've not seen Linux work with them at\n> any reasonable performance yet. I've also had problems with Adaptec's cards\n> - I think they manufacture their own SATA to PCI chipset as well. So far,\n> I've only had good luck with the on-chipset Intel SATA implementation. I\n> think the problems I've had could be entirely driver-related, but in the end\n> it doesn't matter if you can't find drivers that work for Linux.\n>\n> The other problem is getting enough SATA connections for the number of disks\n> we want. I do have two new Areca SATA RAID cards and I'm going to benchmark\n> those against the 3Ware 9550SX with 2 x 8 = 16 disks on one host.\n>\n> I guess we could run the HW RAID controllers in JBOD mode to get a good\n> driver / chipset configuration for software RAID, but frankly I prefer HW\n> RAID if it performs well. So far the SATA host-based RAID is blowing the\n> doors off of every other HW RAID solution I've tested.\n>\n> - Luke\n>\n>\n>\n>\n> --\n> Benjamin Arai\n> [email protected]\n> http://www.benjaminarai.com\n", "msg_date": "Mon, 26 Dec 2005 17:54:54 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's the best hardver for PostgreSQL 8.1?" } ]
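A sketch of the hot-swap sequence David Lang describes, using hypothetical device names (/dev/md0 for the array, /dev/sdb1 for the failed member) that must be adjusted to the real configuration; whether the physical swap itself works without a reboot still depends on the controller and kernel issues discussed above:

  mdadm /dev/md0 --fail /dev/sdb1     # mark the suspect member faulty
  mdadm /dev/md0 --remove /dev/sdb1   # detach it from the array
  # physically replace the disk (controller/driver permitting)
  mdadm /dev/md0 --add /dev/sdb1      # add the replacement; the rebuild starts automatically

For the 16 MB readahead Luke mentions for the 3Ware 9550SX, one way to set it (assuming /dev/sda is the array's block device) is:

  blockdev --setra 32768 /dev/sda     # 32768 x 512-byte sectors = 16 MB readahead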
[ { "msg_contents": "Guys,\n\nGot the following ERROR when i was vacuuming the template0 database.\npostgresql server version is 7.4.5 and stats info in postgresql.conf is\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = tue\n#stats_reset_on_server_start = true\n\n============================\n\nstep 1\n\nupdate pg_database set datallowconn=TRUE where datname='template0';\n\nstep 2\nvacuum analyze verbose\n......\n.....\nINFO: vacuuming \"pg_catalog.pg_statistic\"\nERROR: could not access status of transaction 1107341112\nDETAIL: could not open file \"/home/postgres/data/pg_clog/0420\": No such\nfile or directory\n\nstep 3\npostgres@test1:~> /usr/local/pgsql/bin/psql ecommerce -c 'SELECT datname,\nage(datfrozenxid) FROM pg_database';\n datname | age\n-----------+------------\n template0 | 1112108248\n database1 | 1074511487\n template1 | 1073987669\n(3 rows)\n\nFiles in the pg_clog are:-\npostgres@test1:~/data/pg_clog> ls -lart\ntotal 417\n-rw------- 1 postgres postgres 262144 2005-12-26 02:09 0443\ndrwx------ 2 postgres postgres 96 2005-12-26 02:17 ./\ndrwx------ 6 postgres postgres 640 2005-12-26 03:22 ../\n-rw------- 1 postgres postgres 163840 2005-12-26 03:23 0444\n\nProblem: template0 is not getting vacuumed due to the above ERROR.. please\nlet me know whats the solution.\n--\nBest,\nGourish Singbal\n\n \nGuys,\n \nGot the following ERROR when i was vacuuming the template0 database. \npostgresql server version is 7.4.5 and stats info in postgresql.conf is \n\n# - Query/Index Statistics Collector -\nstats_start_collector = truestats_command_string = truestats_block_level = truestats_row_level = tue#stats_reset_on_server_start = true\n============================\nstep 1\nupdate pg_database set datallowconn=TRUE where datname='template0';\n \nstep 2\nvacuum analyze verbose\n......\n.....\nINFO:  vacuuming \"pg_catalog.pg_statistic\"ERROR:  could not access status of transaction 1107341112DETAIL:  could not open file \"/home/postgres/data/pg_clog/0420\": No such file or directory\n \nstep 3\npostgres@test1:~> /usr/local/pgsql/bin/psql ecommerce -c 'SELECT datname, age(datfrozenxid) FROM pg_database';  datname  |    age-----------+------------ template0 | 1112108248\n database1 | 1074511487 template1 | 1073987669(3 rows) \nFiles in the pg_clog are:-\npostgres@test1:~/data/pg_clog> ls -larttotal 417-rw-------  1 postgres postgres 262144 2005-12-26 02:09 0443drwx------  2 postgres postgres     96 2005-12-26 02:17 ./\ndrwx------  6 postgres postgres    640 2005-12-26 03:22 ../-rw-------  1 postgres postgres 163840 2005-12-26 03:23 0444 \nProblem: template0 is not getting vacuumed due to the above ERROR.. 
please let me know whats the solution.-- Best,Gourish Singbal", "msg_date": "Mon, 26 Dec 2005 17:04:28 +0530", "msg_from": "Gourish Singbal <[email protected]>", "msg_from_op": true, "msg_subject": "vacuuming template0 gave ERROR" }, { "msg_contents": "---------- Forwarded message ----------\nFrom: Gourish Singbal <[email protected]>\nDate: Dec 26, 2005 5:04 PM\nSubject: vacuuming template0 gave ERROR\nTo: \"[email protected]\" <[email protected]>\n\n\nGuys,\n\nGot the following ERROR when i was vacuuming the template0 database.\npostgresql server version is 7.4.5 and stats info in postgresql.conf is\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = tue\n#stats_reset_on_server_start = true\n\n============================\n\nstep 1\n\nupdate pg_database set datallowconn=TRUE where datname='template0';\n\nstep 2\n\\c template0\nvacuum analyze verbose\n......\n.....\nINFO: vacuuming \"pg_catalog.pg_statistic\"\nERROR: could not access status of transaction 1107341112\nDETAIL: could not open file \"/home/postgres/data/pg_clog/0420\": No such\nfile or directory\n\nstep 3\npostgres@test1:~> /usr/local/pgsql/bin/psql database1 -c 'SELECT datname,\nage(datfrozenxid) FROM pg_database';\n datname | age\n-----------+------------\n template0 | 1112108248\n database1 | 1074511487\n template1 | 1073987669\n(3 rows)\n\nFiles in the pg_clog are:-\npostgres@test1:~/data/pg_clog> ls -lart\ntotal 417\n-rw------- 1 postgres postgres 262144 2005-12-26 02:09 0443\ndrwx------ 2 postgres postgres 96 2005-12-26 02:17 ./\ndrwx------ 6 postgres postgres 640 2005-12-26 03:22 ../\n-rw------- 1 postgres postgres 163840 2005-12-26 03:23 0444\n\nProblem: template0 is not getting vacuumed due to the above ERROR.. please\nlet me know whats the solution.\n--\nBest,\nGourish Singbal\n\n\n--\nBest,\nGourish Singbal\n\n---------- Forwarded message ----------From: Gourish Singbal <[email protected]>Date: Dec 26, 2005 5:04 PM\nSubject: vacuuming template0 gave ERRORTo: \"[email protected]\" <[email protected]>\n\n \nGuys,\n \nGot the following ERROR when i was vacuuming the template0 database. \npostgresql server version is 7.4.5 and stats info in postgresql.conf is \n\n# - Query/Index Statistics Collector -\nstats_start_collector = truestats_command_string = truestats_block_level = truestats_row_level = tue#stats_reset_on_server_start = true\n============================\nstep 1\nupdate pg_database set datallowconn=TRUE where datname='template0';\n \nstep 2\n\\c template0\nvacuum analyze verbose\n......\n.....\nINFO:  vacuuming \"pg_catalog.pg_statistic\"ERROR:  could not access status of transaction 1107341112DETAIL:  could not open file \"/home/postgres/data/pg_clog/0420\": No such file or directory \n \nstep 3\npostgres@test1:~> /usr/local/pgsql/bin/psql database1 -c 'SELECT datname, age(datfrozenxid) FROM pg_database';\n  datname  |    age-----------+------------ template0 | 1112108248  database1 | 1074511487 template1 | 1073987669(3 rows) \nFiles in the pg_clog are:-\npostgres@test1:~/data/pg_clog> ls -larttotal 417-rw-------  1 postgres postgres 262144 2005-12-26 02:09 0443\ndrwx------  2 postgres postgres     96 2005-12-26 02:17 ./ drwx------  6 postgres postgres    640 2005-12-26 03:22 ../-rw-------  1 postgres postgres 163840 2005-12-26 03:23 0444 \nProblem: template0 is not getting vacuumed due to the above ERROR.. 
please let me know whats the solution.-- Best,Gourish Singbal -- Best,Gourish Singbal", "msg_date": "Mon, 26 Dec 2005 18:02:44 +0530", "msg_from": "Gourish Singbal <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: vacuuming template0 gave ERROR" }, { "msg_contents": "Gourish Singbal <[email protected]> writes:\n> Got the following ERROR when i was vacuuming the template0 database.\n\nWhy were you doing that in the first place? template0 shouldn't ever\nbe touched.\n\n> postgresql server version is 7.4.5\n\nThe underlying cause is likely related to this 7.4.6 bug fix:\n\n2004-10-13 18:22 tgl\n\n\t* contrib/pgstattuple/pgstattuple.c,\n\tsrc/backend/access/heap/heapam.c,\n\tsrc/backend/utils/adt/ri_triggers.c (REL7_4_STABLE): Repair\n\tpossible failure to update hint bits back to disk, per\n\thttp://archives.postgresql.org/pgsql-hackers/2004-10/msg00464.php. \n\tI plan a more permanent fix in HEAD, but for the back branches it\n\tseems best to just touch the places that actually have a problem.\n\n\n> INFO: vacuuming \"pg_catalog.pg_statistic\"\n> ERROR: could not access status of transaction 1107341112\n> DETAIL: could not open file \"/home/postgres/data/pg_clog/0420\": No such\n> file or directory\n\nFortunately for you, pg_statistic doesn't contain any irreplaceable\ndata. So you could get out of this via\n\n\tTRUNCATE pg_statistic;\n\tVACUUM ANALYZE; -- rebuild contents of pg_statistic\n\tVACUUM FREEZE; -- make sure template0 needs no further vacuuming\n\nThen reset template0's datallowconn to false, and get rid of that code\nto override it. And then update to a more recent release ;-)\n\n(I don't recall exactly what rules 7.4 uses, but likely you'll find that\nyou need to run a standalone backend with -O switch to perform\nTRUNCATE on a system catalog.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Dec 2005 11:02:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuuming template0 gave ERROR " }, { "msg_contents": "Tom,\nI got the followign Erorr when i tried to trucate the table.\n /usr/local/pgsql/bin/postgres -D data -O -o standalone_log template0\n\n2005-12-26 22:48:12 ERROR: expected one dependency record for TOAST table,\nfound 0\n2005-12-26 22:48:31 ERROR: could not access status of transaction\n1107341112\nDETAIL: could not open file \"/home/postgres/data/pg_clog/0420\": No such\nfile or directory\n2005-12-26 22:48:41 LOG: shutting down\n2005-12-26 22:48:41 LOG: database system is shut down\n\nplease suggest ?.\n\nOn 12/26/05, Tom Lane <[email protected]> wrote:\n>\n> Gourish Singbal <[email protected]> writes:\n> > Got the following ERROR when i was vacuuming the template0 database.\n>\n> Why were you doing that in the first place? 
template0 shouldn't ever\n> be touched.\n>\n> > postgresql server version is 7.4.5\n>\n> The underlying cause is likely related to this 7.4.6 bug fix:\n>\n> 2004-10-13 18:22 tgl\n>\n> * contrib/pgstattuple/pgstattuple.c,\n> src/backend/access/heap/heapam.c,\n> src/backend/utils/adt/ri_triggers.c (REL7_4_STABLE): Repair\n> possible failure to update hint bits back to disk, per\n> http://archives.postgresql.org/pgsql-hackers/2004-10/msg00464.php.\n> I plan a more permanent fix in HEAD, but for the back branches it\n> seems best to just touch the places that actually have a problem.\n>\n>\n> > INFO: vacuuming \"pg_catalog.pg_statistic\"\n> > ERROR: could not access status of transaction 1107341112\n> > DETAIL: could not open file \"/home/postgres/data/pg_clog/0420\": No such\n> > file or directory\n>\n> Fortunately for you, pg_statistic doesn't contain any irreplaceable\n> data. So you could get out of this via\n>\n> TRUNCATE pg_statistic;\n> VACUUM ANALYZE; -- rebuild contents of pg_statistic\n> VACUUM FREEZE; -- make sure template0 needs no further vacuuming\n>\n> Then reset template0's datallowconn to false, and get rid of that code\n> to override it. And then update to a more recent release ;-)\n>\n> (I don't recall exactly what rules 7.4 uses, but likely you'll find that\n> you need to run a standalone backend with -O switch to perform\n> TRUNCATE on a system catalog.)\n>\n> regards, tom lane\n>\n\n\n\n--\nBest,\nGourish Singbal\n\n \nTom,\nI got the followign Erorr when i tried to trucate the table.\n /usr/local/pgsql/bin/postgres -D data -O -o standalone_log template0 \n2005-12-26 22:48:12 ERROR:  expected one dependency record for TOAST table, found 02005-12-26 22:48:31 ERROR:  could not access status of transaction 1107341112DETAIL:  could not open file \"/home/postgres/data/pg_clog/0420\": No such file or directory\n2005-12-26 22:48:41 LOG:  shutting down2005-12-26 22:48:41 LOG:  database system is shut downplease suggest ?. \nOn 12/26/05, Tom Lane <[email protected]> wrote:\nGourish Singbal <[email protected]> writes:> Got the following ERROR when i was vacuuming the template0 database.\nWhy were you doing that in the first place?  template0 shouldn't everbe touched.> postgresql server version is 7.4.5The underlying cause is likely related to this 7.4.6 bug fix:2004-10-13 18:22  tgl\n       * contrib/pgstattuple/pgstattuple.c,       src/backend/access/heap/heapam.c,       src/backend/utils/adt/ri_triggers.c (REL7_4_STABLE): Repair       possible failure to update hint bits back to disk, per\n       http://archives.postgresql.org/pgsql-hackers/2004-10/msg00464.php.       I plan a more permanent fix in HEAD, but for the back branches it\n       seems best to just touch the places that actually have a problem.> INFO:  vacuuming \"pg_catalog.pg_statistic\"> ERROR:  could not access status of transaction 1107341112> DETAIL:  could not open file \"/home/postgres/data/pg_clog/0420\": No such\n> file or directoryFortunately for you, pg_statistic doesn't contain any irreplaceabledata.  So you could get out of this via       TRUNCATE pg_statistic;       VACUUM ANALYZE;  -- rebuild contents of pg_statistic\n       VACUUM FREEZE;   -- make sure template0 needs no further vacuumingThen reset template0's datallowconn to false, and get rid of that codeto override it.  And then update to a more recent release ;-)\n(I don't recall exactly what rules 7.4 uses, but likely you'll find thatyou need to run a standalone backend with -O switch to performTRUNCATE on a system catalog.)                       
regards, tom lane\n-- Best,Gourish Singbal", "msg_date": "Tue, 27 Dec 2005 12:35:43 +0530", "msg_from": "Gourish Singbal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuuming template0 gave ERROR" }, { "msg_contents": "Gourish Singbal <[email protected]> writes:\n> I got the followign Erorr when i tried to trucate the table.\n> /usr/local/pgsql/bin/postgres -D data -O -o standalone_log template0\n\n> 2005-12-26 22:48:12 ERROR: expected one dependency record for TOAST table,\n> found 0\n\n[ raised eyebrow... ] Probably time to pg_dump, initdb, reload. You\nseem to be suffering multiple problems. If you aren't aware of any\ncatastrophe that would explain all these holes in your DB, then it's\nalso time to start running some hardware diagnostics ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Dec 2005 02:16:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: vacuuming template0 gave ERROR " }, { "msg_contents": "Thanks a million tom,\n\nBut guess what we think alike, have taken the dump and am in to process of\nrestoring it right now.\n\nthanks for the help.\n\n\nOn 12/27/05, Tom Lane <[email protected]> wrote:\n>\n> Gourish Singbal <[email protected]> writes:\n> > I got the followign Erorr when i tried to trucate the table.\n> > /usr/local/pgsql/bin/postgres -D data -O -o standalone_log template0\n>\n> > 2005-12-26 22:48:12 ERROR: expected one dependency record for TOAST\n> table,\n> > found 0\n>\n> [ raised eyebrow... ] Probably time to pg_dump, initdb, reload. You\n> seem to be suffering multiple problems. If you aren't aware of any\n> catastrophe that would explain all these holes in your DB, then it's\n> also time to start running some hardware diagnostics ...\n>\n> regards, tom lane\n>\n\n\n\n--\nBest,\nGourish Singbal\n\n \nThanks a million tom,\n \nBut guess what we think alike, have taken the dump and am in to process of restoring it right now.\n \nthanks for the help. \nOn 12/27/05, Tom Lane <[email protected]> wrote:\nGourish Singbal <[email protected]> writes:> I got the followign Erorr when i tried to trucate the table.\n>  /usr/local/pgsql/bin/postgres -D data -O -o standalone_log template0> 2005-12-26 22:48:12 ERROR:  expected one dependency record for TOAST table,> found 0[ raised eyebrow... ]  Probably time to pg_dump, initdb, reload.  You\nseem to be suffering multiple problems.  If you aren't aware of anycatastrophe that would explain all these holes in your DB, then it'salso time to start running some hardware diagnostics ...                       regards, tom lane\n-- Best,Gourish Singbal", "msg_date": "Tue, 27 Dec 2005 12:50:10 +0530", "msg_from": "Gourish Singbal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: vacuuming template0 gave ERROR" } ]
[ { "msg_contents": "We are trying to ascertain if we are up against the limits of what \npostgres can accomplish without having the tables clustered. We would \nprefer not to have to cluster our tables because according to the \ndocumentation this is a one time operation and not maintained. Is there \nany other performance tweaks that can be done to avoid clustering? Is \nthere any way to force the cluster maintenance (even with a performance \nhit on load)? \nWe are aware that there is a minimum time that is required to resolve \nthe index values against the table to ascertain that they are live rows, \nand we believe we are circumventing that time to some extent by taking \nadvantage of the rows being in physical order with the cluster. So does \nthis lead us to the conclusion that the differences in the query times \nis how long is takes us to check on disk whether or not these rows are live?\n\nThanks for any help, thoughts, tips or suggestions.\n\n\nAll of these commands are after a vacuum full analyze and the config \nfile is attached. Different values were used for the queries so no \ncaching would confuse our stats. The box is running gentoo with \npostgres 8.1.0, has raid 0, 9 gigs of ram, 2 hyperthreaded procs, x86_64.\n\n/Three tables with row counts:/\n lookup1.count = 3,306,930\n lookup2.count = 4,189,734\n stuff.count = 3,423,994\n\n/The first attempt (after index adjustments, no hits to cached results)/\n\nexplain analyze select col2, count(*) as cnt from stuff where col1 = \n56984 group by col2\n\n HashAggregate (cost=14605.68..14605.88 rows=16 width=4) (actual \ntime=6980.752..6985.893 rows=6389 loops=1)\n -> Bitmap Heap Scan on stuff (cost=60.97..14571.44 rows=6848 \nwidth=4) (actual time=371.215..6965.742 rows=6389 loops=1)\n Recheck Cond: (col1 = 56984)\n -> Bitmap Index Scan on stuff_pair_idx (cost=0.00..60.97 \nrows=6848 width=0) (actual time=361.237..361.237 rows=6389 loops=1)\n Index Cond: (col1 = 56984)\n Total runtime: 6988.105 ms\n\n/After clustering:/\n\nexplain analyze select col2, count(*) as cnt from stuff where col1 = \n3540634 group by col2;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1399.62..1399.63 rows=1 width=4) (actual \ntime=11.376..15.282 rows=5587 loops=1)\n -> Bitmap Heap Scan on stuff (cost=4.36..1397.68 rows=389 width=4) \n(actual time=1.029..4.538 rows=5587 loops=1)\n Recheck Cond: (col1 = 3540634)\n -> Bitmap Index Scan on stuff_col1_idx (cost=0.00..4.36 \nrows=389 width=0) (actual time=1.003..1.003 rows=5587 loops=1)\n Index Cond: (col1 = 3540634)\n Total runtime: 17.113 ms\n\n\n\n/Using this in the next layer of querying:/\n\nexplain analyze SELECT col1,col2, value AS val, \ncoalesce(coalesce(lookup1.col3, lookup2.col3),0) AS dollars FROM (select \ncol1, col2, value from stuff where col1 = 95350) stuff LEFT JOIN \nlookup1 ON (stuff.col2 = lookup1.pkey) LEFT JOIN lookup2 ON (stuff.col2 \n= lookup2.pkey);\n\n Nested Loop Left Join (cost=0.00..10325.15 rows=857 width=20) (actual \ntime=84.223..9306.228 rows=2296 loops=1)\n -> Nested Loop Left Join (cost=0.00..5183.25 rows=857 width=16) \n(actual time=56.623..1710.655 rows=2296 loops=1)\n -> Index Scan using stuff_col1_idx on stuff (cost=0.00..21.57 \nrows=857 width=12) (actual time=40.531..57.160 rows=2296 loops=1)\n Index Cond: (col1 = 4528383)\n -> Index Scan using lookup2_pkey on lookup2 (cost=0.00..6.01 \nrows=1 width=8) (actual time=0.717..0.717 rows=0 loops=2296)\n 
Index Cond: (\"outer\".col2 = lookup2.pkey)\n -> Index Scan using lookup1_pkey on lookup1 (cost=0.00..5.99 rows=1 \nwidth=8) (actual time=3.304..3.305 rows=1 loops=2296)\n Index Cond: (\"outer\".col2 = lookup1.pkey)\n Total runtime: 9307.569 ms\n\n/After clustering the two left join tables (lookup1 and lookup2):/\n\nexplain analyze SELECT col1,col2, value AS val, \ncoalesce(coalesce(lookup1.col3, lookup2.col3),0) AS dollars FROM (select \ncol1, col2, value from stuff where col1 = 95350) stuff LEFT JOIN \nlookup1 ON (stuff.col2 = lookup1.pkey) LEFT JOIN lookup2 ON (stuff.col2 \n= lookup2.pkey);\n\n Nested Loop Left Join (cost=0.00..10325.15 rows=857 width=20) (actual \ntime=24.444..84.114 rows=1727 loops=1)\n -> Nested Loop Left Join (cost=0.00..5163.47 rows=857 width=16) \n(actual time=24.392..62.787 rows=1727 loops=1)\n -> Index Scan using stuff_col1_idx on stuff (cost=0.00..21.57 \nrows=857 width=12) (actual time=24.332..27.455 rows=1727 loops=1)\n Index Cond: (col1 = 95350)\n -> Index Scan using lookup1_pkey on lookup1 (cost=0.00..5.99 \nrows=1 width=8) (actual time=0.018..0.018 rows=1 loops=1727)\n Index Cond: (\"outer\".col2 = lookup1.pkey)\n -> Index Scan using lookup2_pkey on lookup2 (cost=0.00..6.01 rows=1 \nwidth=8) (actual time=0.010..0.010 rows=0 loops=1727)\n Index Cond: (\"outer\".col2 = lookup2.pkey)\n Total runtime: 84.860 ms", "msg_date": "Mon, 26 Dec 2005 13:03:34 -0800", "msg_from": "David Scott <[email protected]>", "msg_from_op": true, "msg_subject": "Performance hit on large row counts" }, { "msg_contents": "David Scott <[email protected]> writes:\n> We are trying to ascertain if we are up against the limits of what \n> postgres can accomplish without having the tables clustered. ...\n\n> We are aware that there is a minimum time that is required to resolve \n> the index values against the table to ascertain that they are live rows, \n> and we believe we are circumventing that time to some extent by taking \n> advantage of the rows being in physical order with the cluster. So does \n> this lead us to the conclusion that the differences in the query times \n> is how long is takes us to check on disk whether or not these rows are live?\n\nBoth of your initial examples are bitmap scans, which should be pretty\ninsensitive to index correlation effects --- certainly the planner\nassumes so. What I'd want to know about is why the planner is picking\ndifferent indexes for the queries. The CLUSTER may be affecting things\nin some other way, like by squeezing out dead tuples causing a\nreduction in the total table and index sizes.\n\nThe join examples use plain indexscans, which *would* be affected by\ncorrelation ... but again, why are you getting a different scan plan\nfor \"stuff\" than in the non-join case?\n\nIt's not helping you that the rowcount estimates are so far off.\nI think the different plans might be explained by the noise in the\nrowcount estimates.\n\nYou should try increasing the statistics targets on the columns you use\nin the WHERE conditions.\n\nI'm not at all sure I believe your premise that querying for a different\nkey value excludes cache effects, btw. 
On modern hardware it's likely\nthat CLUSTER would leave the *whole* of these tables sitting in kernel\ndisk cache.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Dec 2005 16:36:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance hit on large row counts " }, { "msg_contents": "Tom Lane wrote:\n\n>The CLUSTER may be affecting things in some other way, like by squeezing out dead tuples causing a\n>reduction in the total table and index sizes.\n> \n>\n I didn't mention I was the only user with transactions open on the \nsystem during this. Would cluster eliminate more rows then vacuum full \nif the only open transaction is the one running the vacuum and it is a \nclean transaction?\n\n>You should try increasing the statistics targets on the columns you use\n>in the WHERE conditions.\n> \n>\n We set it to 500 and couldn't get it to repeat the plan where it was \nusing the pair_idx, so that certainly helps.\n\n>I'm not at all sure I believe your premise that querying for a different\n>key value excludes cache effects, btw. On modern hardware it's likely\n>that CLUSTER would leave the *whole* of these tables sitting in kernel\n>disk cache.\n> \n>\n You are exactly right. After rebooting the entire box and running \nthe query the query time was 15 seconds. Rebooting the box, running \ncluster on all three tables and then executing the query was 120 ms. Is \ncalling cluster the only way to ensure that these tables get loaded into \ncache? Running select * appeared to cache some but not all.\n\n Thanks\n", "msg_date": "Mon, 26 Dec 2005 15:07:23 -0800", "msg_from": "David Scott <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance hit on large row counts" }, { "msg_contents": "David Scott <[email protected]> writes:\n> I didn't mention I was the only user with transactions open on the \n> system during this. Would cluster eliminate more rows then vacuum full \n> if the only open transaction is the one running the vacuum and it is a \n> clean transaction?\n\nIt wouldn't eliminate more rows, but it could nonetheless produce a\nsmaller table. IIRC, VACUUM FULL stops shrinking as soon as it finds\na row that there is no room for in lower-numbered table pages; so a\nlarge row near the end of the table could block squeezing-out of small\namounts of free space in earlier pages of the table. I doubt this\neffect is significant most of the time, but in a table with widely\nvarying row sizes it might be an issue.\n\nAlso, CLUSTER can definitely produce smaller *indexes* than VACUUM FULL.\nVACUUM FULL operates at a serious disadvantage when it comes to indexes,\nbecause in order to move a tuple it has to actually make extra index\nentries.\n\n>> I'm not at all sure I believe your premise that querying for a different\n>> key value excludes cache effects, btw. On modern hardware it's likely\n>> that CLUSTER would leave the *whole* of these tables sitting in kernel\n>> disk cache.\n>> \n> You are exactly right. After rebooting the entire box and running \n> the query the query time was 15 seconds. Rebooting the box, running \n> cluster on all three tables and then executing the query was 120 ms. Is \n> calling cluster the only way to ensure that these tables get loaded into \n> cache? Running select * appeared to cache some but not all.\n\nHm, I'd think that SELECT * or SELECT count(*) would cause all of a\ntable to be cached. 
It wouldn't do anything about caching the indexes\nthough, and that might explain your observations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Dec 2005 18:52:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance hit on large row counts " } ]
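A short sketch of the two remedies that came out of this thread, using the table, column, and index names from the example queries; the statistics target of 500 is the value David reported trying, and the pg_class.relpages query is just one rough way to watch the size effect of CLUSTER versus VACUUM FULL:

  ALTER TABLE stuff ALTER COLUMN col1 SET STATISTICS 500;
  ANALYZE stuff;                     -- refresh pg_statistic with the larger sample
  CLUSTER stuff_col1_idx ON stuff;   -- 8.1 syntax; rewrites the table in index order (one-time, not maintained)
  SELECT relname, relpages FROM pg_class
   WHERE relname IN ('stuff', 'stuff_col1_idx', 'lookup1', 'lookup2');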
[ { "msg_contents": "Hi!\n\nThis is not actually a question about performance, but an inquiry to help \nme understand what is going on. Below this text are two EXPLAIN ANALYZE \noutputs, before and after VACUUM ANALYZE was run. I have several questions \nabout the proposed plans (mostly the first one). There is only one table \nin the query, \"layout\", containing ~10k rows. In it, for each \"page_id\" \nthere are several (1-10) rows (i.e. there are around 10000/5 unique \npage_id values). There's an index on \"page_id\" and I've upped statistics \ncollection on it to 150 at table creation time because sometimes the \nplanner didn't use the index at all.\nThis is PostgreSQL 8.1.0.\n\n- what does \"Bitmap Heap Scan\" phase do?\n- what is \"Recheck condition\" and why is it needed?\n- why are proposed \"width\" fields in the plan different between the two\n plans?\n (actually, a nice explanation what exactly are those widths would also\n be nice :) )\n- I thought \"Bitmap Index Scan\" was only used when there are two or more\n applicable indexes in the plan, so I don't understand why is it used\n now?\n\n\ncw2=> explain analyze select * from layout where page_id=10060;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on layout (cost=2.15..64.21 rows=43 width=60) (actual \ntime=0.043..0.053 rows=4 loops=1)\n Recheck Cond: (page_id = 10060)\n -> Bitmap Index Scan on layout_page_id (cost=0.00..2.15 rows=43 \nwidth=0) (actual time=0.034..0.034 rows=4 loops=1)\n Index Cond: (page_id = 10060)\n Total runtime: 0.112 ms\n(5 rows)\n\ncw2> VACUUM ANALYZE;\n\ncw2=> explain analyze select * from layout where page_id=10060;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Index Scan using layout_page_id on layout (cost=0.00..12.14 rows=4 \nwidth=42) (actual time=0.014..0.025 rows=4 loops=1)\n Index Cond: (page_id = 10060)\n Total runtime: 0.076 ms\n(3 rows)\n\n\n\n-- \nPreserve wildlife -- pickle a squirrel today!\n\n", "msg_date": "Mon, 26 Dec 2005 22:32:13 +0100 (CET)", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap indexes etc." }, { "msg_contents": "Ivan Voras <[email protected]> writes:\n> This is PostgreSQL 8.1.0.\n\n> - what does \"Bitmap Heap Scan\" phase do?\n\nA plain indexscan fetches one tuple-pointer at a time from the index,\nand immediately visits that tuple in the table. A bitmap scan fetches\nall the tuple-pointers from the index in one go, sorts them using an\nin-memory \"bitmap\" data structure, and then visits the table tuples in\nphysical tuple-location order. The bitmap scan improves locality of\nreference to the table at the cost of more bookkeeping overhead to\nmanage the \"bitmap\" data structure --- and at the cost that the data\nis no longer retrieved in index order, which doesn't matter for your\nquery but would matter if you said ORDER BY.\n\n> - what is \"Recheck condition\" and why is it needed?\n\nIf the bitmap gets too large we convert it to \"lossy\" style, in which we\nonly remember which pages contain matching tuples instead of remembering\neach tuple individually. 
When that happens, the table-visiting phase\nhas to examine each tuple on the page and recheck the scan condition to\nsee which tuples to return.\n\n> - why are proposed \"width\" fields in the plan different between the two\n> plans?\n\nUpdated statistics about average column widths, presumably.\n\n> (actually, a nice explanation what exactly are those widths would also\n> be nice :) )\n\nSum of the average widths of the columns being fetched from the table.\n\n> - I thought \"Bitmap Index Scan\" was only used when there are two or more\n> applicable indexes in the plan, so I don't understand why is it used\n> now?\n\nTrue, we can combine multiple bitmaps via AND/OR operations to merge\nresults from multiple indexes before visiting the table ... but it's\nstill potentially worthwhile even for one index. A rule of thumb is\nthat plain indexscan wins for fetching a small number of tuples, bitmap\nscan wins for a somewhat larger number of tuples, and seqscan wins if\nyou're fetching a large percentage of the whole table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Dec 2005 16:57:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bitmap indexes etc. " }, { "msg_contents": "On Mon, 26 Dec 2005, Tom Lane wrote:\n> ...snip...\n\nThanks, it's a very good explanation!\n\n-- \nPreserve wildlife -- pickle a squirrel today!\n\n", "msg_date": "Tue, 27 Dec 2005 14:19:55 +0100 (CET)", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bitmap indexes etc. " } ]
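To see the access paths Tom describes side by side on the layout table from this thread, the planner can be steered with session-local settings (a sketch for experimentation only, not something to leave enabled):

  EXPLAIN ANALYZE SELECT * FROM layout WHERE page_id = 10060;   -- planner's own choice
  SET enable_bitmapscan = off;
  EXPLAIN ANALYZE SELECT * FROM layout WHERE page_id = 10060;   -- typically a plain index scan
  SET enable_indexscan = off;
  EXPLAIN ANALYZE SELECT * FROM layout WHERE page_id = 10060;   -- typically a sequential scan
  RESET enable_bitmapscan;
  RESET enable_indexscan;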
[ { "msg_contents": "Hello,\n\twe have a PostgreSQL for datawarehousing. As we heard of so many enhancements \nfor 8.0 and 8.1 versions we dicided to upgrade from 7.4 to 8.1. I must say \nthat the COPY FROM processes are much faster now from 27 to 17 minutes. Some \nqueries where slower, but the performance problems were solved by increasing \nwork_mem to 8192.\n\tHowever, now we have a query that is much slower with 8.1 compared to 7.4. \nThe query lasts 7minutes (all the times we try) with 8.1, keeping CPU usage \nat 93~97% while it lasts 25 seconds in 7.4 the first time going down to 4 \nseconds the following tries.\n\tWe're not experts at all but we can't see anything strange with the \ndifferences of EXPLAIN in the queries. Below I paste the query and the \nEXPLAIN output.\n\tDoes somebody have a clue of what could be the cause of this big difference \nin performance?\n\tMany thanks in advance.\n\n\nSELECT\n lpad(c.codigo,6,'0'),\n MIN(c.nombre),\n\n SUM( CASE WHEN ( res.hora_inicio >= time '00:00' AND res.hora_inicio < \ntime '16:00' )\n THEN (CASE WHEN res.importe_neto IS NOT NULL\n THEN res.importe_neto ELSE 0 END)\n ELSE 0 END ) AS p1,\n SUM( CASE WHEN ( res.hora_inicio >= time '00:00' AND res.hora_inicio < \ntime '16:00' )\n THEN (CASE WHEN res.cantidad_servida IS NOT NULL\n THEN res.cantidad_servida\n ELSE 0 END)\n ELSE 0 END ) AS p2,\n SUM( CASE WHEN ( res.hora_inicio >= time '16:00' AND res.hora_inicio < \ntime '23:59' )\n THEN (CASE WHEN res.importe_neto IS NOT NULL\n THEN res.importe_neto\n ELSE 0 END)\n ELSE 0 END ) AS p3\n SUM( CASE WHEN ( res.hora_inicio >= time '16:00' AND res.hora_inicio < \ntime '23:59' )\n THEN (CASE WHEN res.cantidad_servida IS NOT NULL THEN\n res.cantidad_servida\n ELSE 0 END)\n ELSE 0 END ) AS p4\n SUM(CASE WHEN res.importe_neto IS NOT NULL\n THEN res.importe_neto\n ELSE 0 END) AS total,\n SUM(CASE WHEN res.cantidad_servida IS NOT NULL\n THEN res.cantidad_servida\n ELSE 0 END) AS total_lineas\nFROM clientes c LEFT JOIN (\n SELECT\n la.cliente as cliente,\n es.hora_inicio as hora_inicio,\n la.albaran as albaran,\n la.cantidad_servida as cantidad_servida,\n la.importe_neto as importe_neto\n FROM lineas_albaranes la\n LEFT JOIN escaner es ON la.albaran = es.albaran\n WHERE la.fecha_albaran = '20-12-2005' AND la.empresa = 1 AND \nla.indicador_factura = 'F'\n ) AS res ON c.codigo = res.cliente, provincias p\nWHERE p.codigo = c.provincia AND p.nombre='NAME' AND EXISTS(SELECT 1 FROM \nlineas_albaranes la WHERE la.cliente=c.codigo AND la.fecha_albaran > (date \n'20-12-2005' - interval '2 month') AND la.fecha_albaran <= '20-12-2005' AND \nla.empresa=1 AND la.indicador_factura='F')\nGROUP BY c.codigo\nORDER BY nom;\n\nPostgreSQL 8.1.1:\n\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=333769.99..333769.99 rows=2 width=61)\n Sort Key: min((c.nombre)::text)\n -> GroupAggregate (cost=37317.41..333769.98 rows=2 width=61)\n -> Nested Loop (cost=37317.41..333769.83 rows=2 width=61)\n Join Filter: (\"inner\".codigo = \"outer\".provincia)\n -> Merge Left Join (cost=37315.27..333758.58 rows=405 \nwidth=65)\n Merge Cond: (\"outer\".codigo = \"inner\".cliente)\n -> Index Scan using clientes_pkey on clientes c \n(cost=0.00..296442.28 rows=405 width=40)\n Filter: (subplan)\n SubPlan\n -> Bitmap Heap Scan on lineas_albaranes la \n(cost=138.99..365.53 rows=1 width=0)\n Recheck Cond: ((cliente = $0) AND 
\n((indicador_factura)::text = 'F'::text))\n Filter: ((fecha_albaran > '2005-10-20 \n00:00:00'::timestamp without time zone) AND (fecha_albaran <= \n'2005-12-20'::date)AND (empresa = 1))\n -> BitmapAnd (cost=138.99..138.99 rows=57 \nwidth=0)\n -> Bitmap Index Scan on \nlineas_albaranes_cliente_idx (cost=0.00..65.87 rows=11392 width=0)\n Index Cond: (cliente = $0)\n -> Bitmap Index Scan on \nlineas_albaranes_indicador_factura_idx (cost=0.00..72.87 rows=11392 width=0)\n Index Cond: \n((indicador_factura)::text = 'F'::text)\n -> Sort (cost=37315.27..37315.28 rows=1 width=29)\n Sort Key: la.cliente\n -> Nested Loop Left Join (cost=72.87..37315.26 \nrows=1 width=29)\n -> Bitmap Heap Scan on lineas_albaranes la \n(cost=72.87..37309.24 rows=1 width=25)\n Recheck Cond: \n((indicador_factura)::text = 'F'::text)\n Filter: ((fecha_albaran = \n'2005-12-20'::date) AND (empresa = 1))\n -> Bitmap Index Scan on \nlineas_albaranes_indicador_factura_idx (cost=0.00..72.87 rows=11392 width=0)\n Index Cond: \n((indicador_factura)::text = 'F'::text)\n -> Index Scan using escaner_pkey on escaner \nes (cost=0.00..6.01 rows=1 width=12)\n Index Cond: (\"outer\".albaran = \nes.albaran)\n -> Materialize (cost=2.14..2.15 rows=1 width=4)\n -> Seq Scan on provincias p (cost=0.00..2.14 rows=1 \nwidth=4)\n Filter: ((nombre)::text = 'NAME'::text)\n(31 rows)\n\n\nPostgreSQL 7.4.7:\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=270300.14..270300.21 rows=29 width=61)\n Sort Key: min((c.nombre)::text)\n -> HashAggregate (cost=270298.20..270299.44 rows=29 width=61)\n -> Hash Join (cost=270222.84..270297.62 rows=29 width=61)\n Hash Cond: (\"outer\".provincia = \"inner\".codigo)\n -> Merge Left Join (cost=270220.70..270280.70 rows=2899 \nwidth=65)\n Merge Cond: (\"outer\".codigo = \"inner\".cliente)\n -> Sort (cost=10928.47..10929.48 rows=405 width=40)\n Sort Key: c.codigo\n -> Seq Scan on clientes c (cost=0.00..10910.93 \nrows=405 width=40)\n Filter: (subplan)\n SubPlan\n -> Index Scan using \nlineas_albaranes_cliente_idx on lineas_albaranes la (cost=0.00..51542.10 \nrows=3860 width=0)\n Index Cond: (cliente = $0)\n Filter: (((fecha_albaran)::timestamp \nwithout time zone > '2005-10-20 00:00:00'::timestamp without time zone) AND \n(fecha_albaran <= '2005-12-20'::date) AND (empresa = 1) AND \n((indicador_factura)::text = 'F'::text))\n -> Sort (cost=259292.23..259306.72 rows=5797 width=29)\n Sort Key: la.cliente\n -> Merge Right Join (cost=256176.76..258929.88 \nrows=5797 width=29)\n Merge Cond: (\"outer\".albaran = \n\"inner\".albaran)\n -> Index Scan using escaner_pkey on escaner \nes (cost=0.00..2582.64 rows=55604 width=12)\n -> Sort (cost=256176.76..256191.26 \nrows=5797 width=25)\n Sort Key: la.albaran\n -> Seq Scan on lineas_albaranes la \n(cost=0.00..255814.42 rows=5797 width=25)\n Filter: ((fecha_albaran = \n'2005-12-20'::date) AND (empresa = 1) AND ((indicador_factura)::text = \n'F'::text))\n -> Hash (cost=2.14..2.14 rows=2 width=4)\n -> Seq Scan on provincias p (cost=0.00..2.14 rows=2 \nwidth=4)\n Filter: ((nombre)::text = 'NAME'::text)\n(27 rows)\n", "msg_date": "Tue, 27 Dec 2005 17:09:28 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems with 8.1.1 compared to 7.4.7" }, { "msg_contents": "On Tue, Dec 27, 2005 at 
05:09:28PM +0100, Albert Cervera Areny wrote:\n> \tHowever, now we have a query that is much slower with 8.1 compared to 7.4. \n> The query lasts 7minutes (all the times we try) with 8.1, keeping CPU usage \n> at 93~97% while it lasts 25 seconds in 7.4 the first time going down to 4 \n> seconds the following tries.\n> \tWe're not experts at all but we can't see anything strange with the \n> differences of EXPLAIN in the queries. Below I paste the query and the \n> EXPLAIN output.\n\nCould you post the EXPLAIN ANALYZE output of the query on both\nsystems? That'll show how accurate the planner's estimates are.\n\nHave you run ANALYZE (or VACUUM ANALYZE) on the tables in both\nversions? The row count estimates in the 8.1.1 query differ from\nthose in the 7.4.7 query. Are the two versions using the same data\nset?\n\nAre your configuration settings the same in both versions? You\nmentioned increasing work_mem, but what about others like\neffective_cache_size, random_page_cost, and shared_buffers?\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 27 Dec 2005 10:13:43 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with 8.1.1 compared to 7.4.7" }, { "msg_contents": "\n\nA Dimarts 27 Desembre 2005 18:13, Michael Fuhr va escriure:\n> On Tue, Dec 27, 2005 at 05:09:28PM +0100, Albert Cervera Areny wrote:\n> > \tHowever, now we have a query that is much slower with 8.1 compared to\n> > 7.4. The query lasts 7minutes (all the times we try) with 8.1, keeping\n> > CPU usage at 93~97% while it lasts 25 seconds in 7.4 the first time going\n> > down to 4 seconds the following tries.\n> > \tWe're not experts at all but we can't see anything strange with the\n> > differences of EXPLAIN in the queries. Below I paste the query and the\n> > EXPLAIN output.\n>\n> Could you post the EXPLAIN ANALYZE output of the query on both\n> systems? That'll show how accurate the planner's estimates are.\n>\n> Have you run ANALYZE (or VACUUM ANALYZE) on the tables in both\n> versions? The row count estimates in the 8.1.1 query differ from\n> those in the 7.4.7 query. Are the two versions using the same data\n> set?\n>\n> Are your configuration settings the same in both versions? You\n> mentioned increasing work_mem, but what about others like\n> effective_cache_size, random_page_cost, and shared_buffers?\n\nHey, thank you for your fast response, I found what the problem was.\n\nI thought the settings were the same but work_mem was still higher in 7.4, \n30Mb, so I increased 8.1 to 30Mb and it worked faster, down to 17 seconds the \nfirst time, 2.5 seconds for the others. \n\nAre there any \"rules of thumb\" to let a begginer give reasonable values to \nthese parameters? Not only work_mem, but also random_page_cost, and so on. \nAre there any tests one can run to determine \"good\" values?\n\nThanks a lot!\n\n-- \nAlbert Cervera Areny\nDept. Informàtica Sedifa, S.L.\n\nAv. Can Bordoll, 149\n08202 - Sabadell (Barcelona)\nTel. 93 715 51 11\nFax. 93 715 51 12\n\n====================================================================\n........................ AVISO LEGAL ............................\nLa presente comunicación y sus anexos tiene como destinatario la\npersona a la que va dirigida, por lo que si usted lo recibe\npor error debe notificarlo al remitente y eliminarlo de su\nsistema, no pudiendo utilizarlo, total o parcialmente, para\nningún fin. 
Su contenido puede tener información confidencial o\nprotegida legalmente y únicamente expresa la opinión del\nremitente. El uso del correo electrónico vía Internet no\npermite asegurar ni la confidencialidad de los mensajes\nni su correcta recepción. En el caso de que el\ndestinatario no consintiera la utilización del correo electrónico,\ndeberá ponerlo en nuestro conocimiento inmediatamente.\n====================================================================\n........................... DISCLAIMER .............................\nThis message and its attachments are intended exclusively for the\nnamed addressee. If you receive this message in error, please\nimmediately delete it from your system and notify the sender. You\nmay not use this message or any part of it for any purpose.\nThe message may contain information that is confidential or\nprotected by law, and any opinions expressed are those of the\nindividual sender. Internet e-mail guarantees neither the\nconfidentiality nor the proper receipt of the message sent.\nIf the addressee of this message does not consent to the use\nof internet e-mail, please inform us inmmediately.\n====================================================================\n\n\n \n", "msg_date": "Tue, 27 Dec 2005 19:02:17 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with 8.1.1 compared to 7.4.7" }, { "msg_contents": "On Tue, 27 Dec 2005 19:02:17 +0100\nAlbert Cervera Areny <[email protected]> wrote:\n\n> Are there any \"rules of thumb\" to let a begginer give reasonable\n> values to these parameters? Not only work_mem, but also\n> random_page_cost, and so on. Are there any tests one can run to\n> determine \"good\" values?\n>\n \n Hi Albert, \n\n There are several online sites that have information related to\n tuning parameters. Here is a list of a few of them: \n\n http://revsys.com/writings/postgresql-performance.html\n\n http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n http://www.powerpostgresql.com/Docs\n\n http://www.powerpostgresql.com/PerfList\n\n Hope these help! \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Tue, 27 Dec 2005 12:56:56 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with 8.1.1 compared to 7.4.7" } ]
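
The fix in this thread came down to a configuration mismatch: the 7.4 server was running with work_mem at 30 MB while the 8.1 server was still at 8192 kB, presumably leaving too little memory for the sorts and hashes in the new plan. A minimal psql sketch of how to test such a change before touching postgresql.conf follows; the 30720 kB figure simply mirrors the 30 MB mentioned above, and note that unit suffixes such as '30MB' are only accepted from 8.2 onward, so 8.1 takes a plain number of kilobytes.

    -- Inspect the active value (8.1 reports an integer number of kB).
    SHOW work_mem;

    -- Raise it for the current session only, then re-run the slow query
    -- under EXPLAIN ANALYZE to compare plans and timings.
    SET work_mem = 30720;

    -- Once a good value is found, persist it in postgresql.conf
    -- (work_mem = 30720) and reload the server, e.g. "pg_ctl reload".

A per-session SET like this is also a cheap way to confirm whether a given query benefits at all before raising the limit globally, since work_mem is allocated per sort or hash step and a large global value adds up quickly with many concurrent sessions.
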
[ { "msg_contents": "Hi all,\n\n Which is the best way to import data to tables? I have to import \n90000 rows into a column and doing it as inserts takes ages. Would be \nfaster with copy? is there any other alternative to insert/copy?\n\nCheers!\n", "msg_date": "Thu, 29 Dec 2005 10:48:26 +0100", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "How import big amounts of data?" }, { "msg_contents": "On Thu, Dec 29, 2005 at 10:48:26AM +0100, Arnau wrote:\n> Which is the best way to import data to tables? I have to import \n> 90000 rows into a column and doing it as inserts takes ages. Would be \n> faster with copy? is there any other alternative to insert/copy?\n\nThere are multiple reasons why your INSERT might be slow:\n\n- Are you using multiple transactions instead of batching them in all or a\n few transactions? (Usually, the per-transaction cost is a lot higher than\n the per-row insertion cost.)\n- Do you have a foreign key without a matching index in the other table? (In\n newer versions of PostgreSQL, EXPLAIN ANALYZE can help with this; do a\n single insert and see where it ends up doing work. Older won't show such\n things, though.)\n- Do you have an insertion trigger taking time? (Ditto wrt. EXPLAIN ANALYZE.)\n\nCOPY will be faster than INSERT regardless, though (for more than a few rows,\nat least).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 29 Dec 2005 11:50:49 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How import big amounts of data?" }, { "msg_contents": "On Thu, 29 Dec 2005, Arnau wrote:\n\n> Which is the best way to import data to tables? I have to import \n> 90000 rows into a column and doing it as inserts takes ages. Would be \n> faster with copy? is there any other alternative to insert/copy?\n\nWrap the inserts inside a BEGIN/COMMIT block and it will be a lot faster.\nCopy is even faster, but for just 90000 rows I wouldn't bother.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Thu, 29 Dec 2005 12:39:06 +0100 (CET)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How import big amounts of data?" }, { "msg_contents": "At 04:48 AM 12/29/2005, Arnau wrote:\n>Hi all,\n>\n> Which is the best way to import data to tables? I have to import \n> 90000 rows into a column and doing it as inserts takes ages. Would \n> be faster with copy? 
is there any other alternative to insert/copy?\nCompared to some imports, 90K rows is not that large.\n\nAssuming you want the table(s) to be in some sorted order when you \nare done, the fastest way to import a large enough amount of data is:\n-put the new data into a temp table (works best if temp table fits into RAM)\n-merge the rows from the original table and the temp table into a new table\n-create the indexes you want on the new table\n-DROP the old table and its indexes\n-rename the new table and its indexes to replace the old ones.\n\nIf you _don't_ care about having the table in some sorted order,\n-put the new data into a new table\n-COPY the old data to the new table\n-create the indexes you want on the new table\n-DROP the old table and its indexes\n-rename the new table and its indexes to replace the old ones\n\nEither of these procedures will also minimize your downtime while you \nare importing.\n\nIf one doesn't want to go to all of the trouble of either of the \nabove, at least DROP your indexes, do your INSERTs in batches, and \nrebuild your indexes.\nDoing 90K individual INSERTs should usually be avoided.\n\ncheers,\nRon\n\n\n", "msg_date": "Thu, 29 Dec 2005 09:20:28 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How import big amounts of data?" }, { "msg_contents": "On Thursday 29 December 2005 10:48, Arnau wrote:\n> Which is the best way to import data to tables? I have to import\n> 90000 rows into a column and doing it as inserts takes ages. Would be\n> faster with copy? is there any other alternative to insert/copy?\n\nI am doing twice as big imports daily, and found the follwing method \nmost efficient (other than using copy):\n\n- Use plpgsql function to do the actual insert (or update/insert if \nneeded). \n\n- Inside a transaction, execute SELECT statements with maximum possible \nnumber of insert function calls in one go. This minimizes the number \nof round trips between the client and the server.\n\nTeemu\n", "msg_date": "Thu, 29 Dec 2005 15:41:05 +0100", "msg_from": "Teemu Torma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How import big amounts of data?" }, { "msg_contents": "> \n> I am doing twice as big imports daily, and found the follwing method \n> most efficient (other than using copy):\n> \n> - Use plpgsql function to do the actual insert (or update/insert if \n> needed). \n> \n> - Inside a transaction, execute SELECT statements with maximum possible \n> number of insert function calls in one go. This minimizes the number \n> of round trips between the client and the server.\n\nThanks Teemu! could you paste an example of one of those functions? ;-) \nAn example of those SELECTS also would be great, I'm not sure I have \ncompletly understood what you mean.\n\n-- \nArnau\n", "msg_date": "Thu, 29 Dec 2005 17:19:27 +0100", "msg_from": "Arnau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How import big amounts of data?" }, { "msg_contents": "On Thursday 29 December 2005 17:19, Arnau wrote:\n> > - Use plpgsql function to do the actual insert (or update/insert if\n> > needed).\n> >\n> > - Inside a transaction, execute SELECT statements with maximum\n> > possible number of insert function calls in one go.  This minimizes\n> > the number of round trips between the client and the server.\n>\n> Thanks Teemu! 
could you paste an example of one of those functions?\n> ;-) An example of those SELECTS also would be great, I'm not sure I\n> have completly understood what you mean.\n\nAn insert function like:\n\nCREATE OR REPLACE FUNCTION\ninsert_values (the_value1 numeric, the_value2 numeric)\nRETURNS void\nLANGUAGE plpgsql VOLATILE AS $$\nBEGIN\n INSERT INTO values (value1, value2)\n VALUES (the_value1, the_value2);\nRETURN;\nEND;\n$$;\n\nThen execute queries like\n\nSELECT insert_values(1,2), insert_values(2,3), insert_values(3,4);\n\nwith maximum number of insert_values calls as possible.\n\nI think the transaction (BEGIN/COMMIT) has little time benefit if you \nhave at least hundreds of calls in one SELECT.\n\nTeemu\n", "msg_date": "Thu, 29 Dec 2005 18:05:45 +0100", "msg_from": "Teemu Torma <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How import big amounts of data?" } ]
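
To make the advice in this thread concrete, here is roughly what the two recommended approaches look like from psql. The table, columns, and values are made up purely for illustration; the point is one transaction (or one COPY) per batch rather than one per row.

    -- Throwaway table for the example.
    CREATE TABLE import_demo (id integer, label text);

    -- Approach 1: plain INSERTs, but batched inside a single transaction.
    BEGIN;
    INSERT INTO import_demo (id, label) VALUES (1, 'first');
    INSERT INTO import_demo (id, label) VALUES (2, 'second');
    -- ... remaining rows ...
    COMMIT;

    -- Approach 2: COPY. Inline data follows the command and ends with \.
    -- (From psql, "\copy import_demo FROM 'rows.txt' WITH DELIMITER '|'"
    -- reads a client-side file instead, avoiding server filesystem access.)
    COPY import_demo (id, label) FROM STDIN WITH DELIMITER '|';
    1|first
    2|second
    \.

For 90,000 rows either variant is typically a large improvement over 90,000 autocommitted INSERTs, and COPY is usually the faster of the two.
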
[ { "msg_contents": "I have an instance of PG 7.4 where I would really like to execute some\nschema changes, but every schema change is blocked waiting for a process\ndoing a COPY. That query is:\n\nCOPY drill.trades (manager, sec_id, ticker, bridge_tkr, date, \"type\",\nshort, quantity, price, prin, net_money, factor) TO stdout;\n\nSo it's only involved with a single table in a single schema.\nUnfortunately, what this process is doing is opening and reading every\ntable in the database:\n\n# strace -e open,close -p 29859\nProcess 29859 attached - interrupt to quit\nopen(\"/var/lib/postgres/data/base/7932340/2442094542\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.1\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2426495316\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2426495316\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2426495316\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2426495316\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205386\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205433\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2426495316\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205441\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2426495316\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.10\", O_RDWR) = 49\nclose(49) = 
0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.16\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/2298808676/2298808939.10\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.15\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205446\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205454\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429226532\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2442094542\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.4\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.8\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205386\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205441\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205446\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205454\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429226532\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2442094542\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205386\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205433\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205441\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2426495316\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 
49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414559657\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2426495316\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2426495316\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205446\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205454\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429226532\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2414561511\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2442094542\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205386\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.7\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205441\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429205446\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.1\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.10\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.2\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023238811.18\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2429226532\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/2298808676/2361517065\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 
0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2442094542\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.9\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.10\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.1\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.10\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.3\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.2\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/358185104.5\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/2298808676/2361517065\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.6\", O_RDWR) = 49\nclose(49) = 0\nopen(\"/var/lib/postgres/data/base/7932340/2023517557.7\", O_RDWR) = 49\nclose(49) = 0\nProcess 29859 detached\n\nSeems like a somewhat unusual behavior. As you can see it's opening\nsome tables numerous times. Is there some way to avoid this?\n\n-jwb\n", "msg_date": "Thu, 29 Dec 2005 13:21:57 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Process executing COPY opens and reads every table on the system" } ]
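
No answer appears in the archive for this one, but one detail worth spelling out: the numeric file names in the strace output are relfilenodes under base/<database-oid>/, so they can be mapped back to actual relations from within the affected database. A quick sketch using a few of the numbers quoted above:

    -- Run while connected to the database whose OID is 7932340
    -- (pg_class is per-database).
    SELECT relname, relkind
    FROM pg_class
    WHERE relfilenode IN (2023517557, 358185104, 2442094542, 2414561511);

    -- The directory name itself is the database OID:
    SELECT datname FROM pg_database WHERE oid = 7932340;

Knowing whether those relations are the COPY target, its indexes, or something else entirely (for example system catalogs) would go a long way toward explaining the pattern of repeated opens.
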
[ { "msg_contents": "A few WEEKS ago, the autovacuum on my instance of pg 7.4 unilaterally\ndecided to VACUUM a table which has not been updated in over a year and\nis more than one terabyte on the disk. Because of the very high\ntransaction load on this database, this VACUUM has been ruining\nperformance for almost a month. Unfortunately is seems invulnerable to\nkilling by signals:\n\n# ps ax | grep VACUUM\n15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n# kill -HUP 15308\n# ps ax | grep VACUUM\n15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n# kill -INT 15308\n# ps ax | grep VACUUM\n15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n# kill -PIPE 15308\n# ps ax | grep VACUUM\n15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n\no/~ But the cat came back, the very next day ...\n\nI assume that if I kill this with SIGKILL, that will bring down every\nother postgres process, so that should be avoided. But surely there is\na way to interrupt this. If I had some reason to shut down the\ninstance, I'd be screwed, it seems.\n\n-jwb\n", "msg_date": "Thu, 29 Dec 2005 14:09:22 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Invulnerable VACUUM process thrashing everything" }, { "msg_contents": "Ick. Can you get users and foreign connections off that machine, \nlock them out for some period, and renice the VACUUM?\n\nShedding load and keeping it off while VACUUM runs high priority \nmight allow it to finish in a reasonable amount of time.\nOr\nShedding load and dropping the VACUUM priority might allow a kill \nsignal to get through.\n\nHope this helps,\nRon\n\n\nAt 05:09 PM 12/29/2005, Jeffrey W. Baker wrote:\n>A few WEEKS ago, the autovacuum on my instance of pg 7.4 unilaterally\n>decided to VACUUM a table which has not been updated in over a year and\n>is more than one terabyte on the disk. Because of the very high\n>transaction load on this database, this VACUUM has been ruining\n>performance for almost a month. Unfortunately is seems invulnerable to\n>killing by signals:\n>\n># ps ax | grep VACUUM\n>15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n># kill -HUP 15308\n># ps ax | grep VACUUM\n>15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n># kill -INT 15308\n># ps ax | grep VACUUM\n>15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n># kill -PIPE 15308\n># ps ax | grep VACUUM\n>15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n>\n>o/~ But the cat came back, the very next day ...\n>\n>I assume that if I kill this with SIGKILL, that will bring down every\n>other postgres process, so that should be avoided. But surely there is\n>a way to interrupt this. If I had some reason to shut down the\n>instance, I'd be screwed, it seems.\n\n\n\n", "msg_date": "Thu, 29 Dec 2005 17:42:51 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invulnerable VACUUM process thrashing everything" }, { "msg_contents": "In my experience a kill -9 has never resulted in any data loss in this \nsituation (it will cause postgres to detect that the process died, shut \ndown, then recover), and most of the time it only causes a 5-10sec \noutage. 
I'd definitely hesitate to recommend it in a production context \nthough, especially since I think there are some known race-condition \nbugs in 7.4.\n\nVACUUM *will* respond to a SIGTERM, but it doesn't check very often - \nI've often had to wait hours for it to determine that it's been killed, \nand my tables aren't anywhere near 1TB. Maybe this is a place where \nthings could be improved...\n\nIncidentally, I have to kill -9 some of our MySQL instances quite \nregularly because they do odd things. Not something you want to be \ndoing, especially when MySQL takes 30mins to recover.\n\nRuss Garrett\nLast.fm Ltd.\[email protected]\n\nRon wrote:\n\n> Ick. Can you get users and foreign connections off that machine, lock \n> them out for some period, and renice the VACUUM?\n>\n> Shedding load and keeping it off while VACUUM runs high priority might \n> allow it to finish in a reasonable amount of time.\n> Or\n> Shedding load and dropping the VACUUM priority might allow a kill \n> signal to get through.\n>\n> Hope this helps,\n> Ron\n>\n>\n> At 05:09 PM 12/29/2005, Jeffrey W. Baker wrote:\n>\n>> A few WEEKS ago, the autovacuum on my instance of pg 7.4 unilaterally\n>> decided to VACUUM a table which has not been updated in over a year and\n>> is more than one terabyte on the disk. Because of the very high\n>> transaction load on this database, this VACUUM has been ruining\n>> performance for almost a month. Unfortunately is seems invulnerable to\n>> killing by signals:\n>>\n>> # ps ax | grep VACUUM\n>> 15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n>> # kill -HUP 15308\n>> # ps ax | grep VACUUM\n>> 15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n>> # kill -INT 15308\n>> # ps ax | grep VACUUM\n>> 15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n>> # kill -PIPE 15308\n>> # ps ax | grep VACUUM\n>> 15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n>>\n>> o/~ But the cat came back, the very next day ...\n>>\n>> I assume that if I kill this with SIGKILL, that will bring down every\n>> other postgres process, so that should be avoided. But surely there is\n>> a way to interrupt this. If I had some reason to shut down the\n>> instance, I'd be screwed, it seems.\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Thu, 29 Dec 2005 22:53:11 +0000", "msg_from": "Russ Garrett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invulnerable VACUUM process thrashing everything" }, { "msg_contents": "On Thu, 2005-12-29 at 22:53 +0000, Russ Garrett wrote:\n> In my experience a kill -9 has never resulted in any data loss in this \n> situation (it will cause postgres to detect that the process died, shut \n> down, then recover), and most of the time it only causes a 5-10sec \n> outage. I'd definitely hesitate to recommend it in a production context \n> though, especially since I think there are some known race-condition \n> bugs in 7.4.\n> \n> VACUUM *will* respond to a SIGTERM, but it doesn't check very often - \n> I've often had to wait hours for it to determine that it's been killed, \n> and my tables aren't anywhere near 1TB. Maybe this is a place where \n> things could be improved...\n\nFWIW, I murdered this process with SIGKILL, and the recovery was very\nshort.\n\n\n> Incidentally, I have to kill -9 some of our MySQL instances quite \n> regularly because they do odd things. 
Not something you want to be \n> doing, especially when MySQL takes 30mins to recover.\n\nAgreed. After mysql shutdown with MyISAM, all tables must be checked\nand usually many need to be repaired. This takes a reallllllly long\ntime.\n\n-jwb\n\n> Russ Garrett\n> Last.fm Ltd.\n> [email protected]\n> \n> Ron wrote:\n> \n> > Ick. Can you get users and foreign connections off that machine, lock \n> > them out for some period, and renice the VACUUM?\n> >\n> > Shedding load and keeping it off while VACUUM runs high priority might \n> > allow it to finish in a reasonable amount of time.\n> > Or\n> > Shedding load and dropping the VACUUM priority might allow a kill \n> > signal to get through.\n> >\n> > Hope this helps,\n> > Ron\n> >\n> >\n> > At 05:09 PM 12/29/2005, Jeffrey W. Baker wrote:\n> >\n> >> A few WEEKS ago, the autovacuum on my instance of pg 7.4 unilaterally\n> >> decided to VACUUM a table which has not been updated in over a year and\n> >> is more than one terabyte on the disk. Because of the very high\n> >> transaction load on this database, this VACUUM has been ruining\n> >> performance for almost a month. Unfortunately is seems invulnerable to\n> >> killing by signals:\n> >>\n> >> # ps ax | grep VACUUM\n> >> 15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n> >> # kill -HUP 15308\n> >> # ps ax | grep VACUUM\n> >> 15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n> >> # kill -INT 15308\n> >> # ps ax | grep VACUUM\n> >> 15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n> >> # kill -PIPE 15308\n> >> # ps ax | grep VACUUM\n> >> 15308 ? D 588:00 postgres: postgres skunk [local] VACUUM\n> >>\n> >> o/~ But the cat came back, the very next day ...\n> >>\n> >> I assume that if I kill this with SIGKILL, that will bring down every\n> >> other postgres process, so that should be avoided. But surely there is\n> >> a way to interrupt this. If I had some reason to shut down the\n> >> instance, I'd be screwed, it seems.\n> >\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: Don't 'kill -9' the postmaster\n> >\n> \n", "msg_date": "Thu, 29 Dec 2005 18:16:02 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Invulnerable VACUUM process thrashing everything" }, { "msg_contents": "Russ Garrett <[email protected]> writes:\n> VACUUM *will* respond to a SIGTERM, but it doesn't check very often - \n> I've often had to wait hours for it to determine that it's been killed, \n> and my tables aren't anywhere near 1TB. Maybe this is a place where \n> things could be improved...\n\nHmm, there are CHECK_FOR_INTERRUPTS calls in all the loops that seem\nsignificant to me. Is there anything odd about your database schema?\nUnusual index types or data types maybe? Also, what PG version are\nyou using?\n\nIf you notice a VACUUM not responding to SIGTERM promptly, it'd be\nuseful to attach to the backend process with gdb and get a stack trace\nto find out what it's doing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Dec 2005 22:03:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invulnerable VACUUM process thrashing everything " }, { "msg_contents": "Hi, Jeffrey,\n\nJeffrey W. 
Baker wrote:\n> A few WEEKS ago, the autovacuum on my instance of pg 7.4 unilaterally\n> decided to VACUUM a table which has not been updated in over a year and\n> is more than one terabyte on the disk.\n\nHmm, maybe this is the Transaction ID wraparound emerging, and VACUUM is\nfreezing the rows.\n\nDid you VACUUM FREEZE the table after the last modifications?\n\n> # kill -HUP 15308\n> # kill -INT 15308\n> # kill -PIPE 15308\n\nDid you try kill -TERM?\n\nThis always cleanly ended VACUUMing backends on our machines within seconds.\n\n> I assume that if I kill this with SIGKILL, that will bring down every\n> other postgres process, so that should be avoided. But surely there is\n> a way to interrupt this. If I had some reason to shut down the\n> instance, I'd be screwed, it seems.\n\nYes, SIGKILL will make the postmaster shut down all running backend\ninstances, the same as SIGSEGV and possibly a few others.\n\nThe reason is that the postmaster assumes some internal data structure\ncorruption in the shared memory pages is possible on an \"unclean\"\nbackend abort, and thus quits immediately to minimize the possibility of\nthose corruptions to propagate to the disks.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 05 Jan 2006 12:29:59 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Invulnerable VACUUM process thrashing everything" } ]
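
Two practical notes on this thread: a plain kill -TERM of the single VACUUM backend is the signal that is meant to stop it (SIGKILL makes the postmaster reinitialize every backend, as discussed above), and if Markus's transaction-ID-wraparound theory is correct, the vacuum of the untouched table was not optional, only badly timed. Under that assumption, something along these lines lets wraparound pressure be watched and the big freeze be scheduled for a quiet window instead of whenever the daemon decides (the table name is hypothetical):

    -- How far each database is from transaction-ID wraparound (7.4 era).
    SELECT datname, age(datfrozenxid) FROM pg_database;

    -- Freezing the large, never-updated table during planned maintenance
    -- means later wraparound-driven vacuums find nothing left to rewrite.
    VACUUM FREEZE big_static_table;

This does not make the one-terabyte scan free, but it does make its timing a choice rather than a surprise.
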
[ { "msg_contents": "Is it possible to have the planner consider the second plan instead of the \nfirst?\n\nadmpostgres4=> explain analyze select * from users where id in (select \nuser_id from user2user_group where user_group_id = 769694);\n \nQUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash IN Join (cost=4.04..2302.05 rows=4 width=78) (actual \ntime=50.381..200.985 rows=2 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".user_id)\n -> Append (cost=0.00..1931.68 rows=77568 width=78) (actual \ntime=0.004..154.629 rows=76413 loops=1)\n -> Seq Scan on users (cost=0.00..1024.88 rows=44588 width=78) \n(actual time=0.004..36.220 rows=43433 loops=1)\n -> Seq Scan on person_user users (cost=0.00..906.80 rows=32980 \nwidth=78) (actual time=0.005..38.120 rows=32980 loops=1)\n -> Hash (cost=4.04..4.04 rows=2 width=4) (actual time=0.020..0.020 \nrows=2 loops=1)\n -> Index Scan using user2user_group_user_group_id_idx on \nuser2user_group (cost=0.00..4.04 rows=2 width=4) (actual time=0.011..0.014 \nrows=2 loops=1)\n Index Cond: (user_group_id = 769694)\n Total runtime: 201.070 ms\n(9 rows)\n\nadmpostgres4=> select user_id from user2user_group where user_group_id = \n769694;\n user_id\n---------\n 766541\n 766552\n(2 rows)\n\nadmpostgres4=> explain analyze select * from users where id in (766541, \n766552);\n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=4.02..33.48 rows=9 width=78) (actual time=0.055..0.087 \nrows=2 loops=1)\n -> Append (cost=4.02..33.48 rows=9 width=78) (actual time=0.051..0.082 \nrows=2 loops=1)\n -> Bitmap Heap Scan on users (cost=4.02..18.10 rows=5 width=78) \n(actual time=0.051..0.053 rows=2 loops=1)\n Recheck Cond: ((id = 766541) OR (id = 766552))\n -> BitmapOr (cost=4.02..4.02 rows=5 width=0) (actual \ntime=0.045..0.045 rows=0 loops=1)\n -> Bitmap Index Scan on users_id_idx \n(cost=0.00..2.01 rows=2 width=0) (actual time=0.034..0.034 rows=1 loops=1)\n Index Cond: (id = 766541)\n -> Bitmap Index Scan on users_id_idx \n(cost=0.00..2.01 rows=2 width=0) (actual time=0.008..0.008 rows=1 loops=1)\n Index Cond: (id = 766552)\n -> Bitmap Heap Scan on person_user users (cost=4.02..15.37 \nrows=4 width=78) (actual time=0.025..0.025 rows=0 loops=1)\n Recheck Cond: ((id = 766541) OR (id = 766552))\n -> BitmapOr (cost=4.02..4.02 rows=4 width=0) (actual \ntime=0.023..0.023 rows=0 loops=1)\n -> Bitmap Index Scan on person_user_id_idx \n(cost=0.00..2.01 rows=2 width=0) (actual time=0.017..0.017 rows=0 loops=1)\n Index Cond: (id = 766541)\n -> Bitmap Index Scan on person_user_id_idx \n(cost=0.00..2.01 rows=2 width=0) (actual time=0.004..0.004 rows=0 loops=1)\n Index Cond: (id = 766552)\n Total runtime: 0.177 ms\n(17 rows)\n\nadmpostgres4=>\n\nadmpostgres4=> \\d users;\n Table \"adm.users\"\n Column | Type | Modifiers\n------------------+-----------------------------+---------------------\n id | integer | not null\n classid | integer | not null\n revision | integer | not null\n rev_start | timestamp without time zone |\n rev_end | timestamp without time zone |\n rev_timestamp | timestamp without time zone | not null\n rev_state | integer | not null default 10\n name | character varying |\n password | character varying |\n password_expires | timestamp without time zone |\n password_period | integer |\nIndexes:\n \"users_pkey\" primary key, btree 
(revision)\n \"users_uidx\" unique, btree (revision)\n \"users_id_idx\" btree (id)\n \"users_name_idx\" btree (rev_state, rev_end, name)\n \"users_rev_end_idx\" btree (rev_end)\n \"users_rev_idx\" btree (rev_state, rev_end)\n \"users_rev_start_idx\" btree (rev_start)\n \"users_rev_state_idx\" btree (rev_state)\nInherits: revision\n\nadmpostgres4=>\\d person_user;\n Table \"adm.person_user\"\n Column | Type | Modifiers\n------------------+-----------------------------+---------------------\n id | integer | not null\n classid | integer | not null\n revision | integer | not null\n rev_start | timestamp without time zone |\n rev_end | timestamp without time zone |\n rev_timestamp | timestamp without time zone | not null\n rev_state | integer | not null default 10\n name | character varying |\n password | character varying |\n password_expires | timestamp without time zone |\n password_period | integer |\n lastname | character varying |\n description | character varying |\n vat_id | character varying |\n firstname | character varying |\n sex | integer |\n birthdate | timestamp without time zone |\n title | character varying |\nIndexes:\n \"person_user_pkey\" primary key, btree (revision)\n \"person_user_uidx\" unique, btree (revision)\n \"person_user_id_idx\" btree (id)\n \"person_user_rev_end_idx\" btree (rev_end)\n \"person_user_rev_idx\" btree (rev_state, rev_end)\n \"person_user_rev_start_idx\" btree (rev_start)\n \"person_user_rev_state_idx\" btree (rev_state)\nInherits: users\n\nadmpostgres4=>\n\nadmpostgres4=> \\d user2user_group;\n Table \"adm.user2user_group\"\n Column | Type | Modifiers\n---------------+---------+-----------\n user_id | integer | not null\n user_group_id | integer | not null\nIndexes:\n \"user2user_group_pkey\" primary key, btree (user_id, user_group_id)\n \"user2user_group_uidx\" unique, btree (user_id, user_group_id)\n \"user2user_group_user_group_id_idx\" btree (user_group_id)\n \"user2user_group_user_id_idx\" btree (user_id)\n\nadmpostgres4=>\n\nMit freundlichem Gruß\nJens Schicke\n-- \nJens Schicke\t\t [email protected]\nasco GmbH\t\t http://www.asco.de\nMittelweg 7\t\t Tel 0531/3906-127\n38106 Braunschweig\t Fax 0531/3906-400\n", "msg_date": "Tue, 03 Jan 2006 16:08:48 +0100", "msg_from": "Jens-Wolfhard Schicke <[email protected]>", "msg_from_op": true, "msg_subject": "Materialize Subplan and push into inner index conditions" }, { "msg_contents": "Jens-Wolfhard Schicke <[email protected]> writes:\n> Is it possible to have the planner consider the second plan instead of the \n> first?\n\nAt the moment, only if you get rid of the inheritance. The planner's\nnot very smart at all when faced with joining inheritance trees.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jan 2006 10:43:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Materialize Subplan and push into inner index conditions " } ]
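
Since the planner of that era will not push the IN-subquery values down into per-child index scans on an inheritance tree, the practical workaround is the two-step approach already shown above: fetch the handful of user_ids first, then query users with them as literals. Below is one hedged sketch, using the table names from the thread, of packaging that inside a set-returning plpgsql function so applications still issue a single statement; it is worth timing against the original query to confirm the per-id lookups really use the id indexes on both users and person_user.

    CREATE OR REPLACE FUNCTION users_in_group(integer)
    RETURNS SETOF users
    LANGUAGE plpgsql STABLE AS $$
    DECLARE
        m record;
        u users%ROWTYPE;
    BEGIN
        FOR m IN SELECT user_id FROM user2user_group
                 WHERE user_group_id = $1 LOOP
            FOR u IN SELECT * FROM users WHERE id = m.user_id LOOP
                RETURN NEXT u;
            END LOOP;
        END LOOP;
        RETURN;
    END;
    $$;

    SELECT * FROM users_in_group(769694);

Tom's point stands, of course: flattening the inheritance into a single table remains the clean fix.
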
[ { "msg_contents": "I have questions about how to improve the write performance of PostgreSQL for logging data from a real-time simulation. We found that MySQL 4.1.3 could log about 1480 objects/second using MyISAM tables or about 1225 objects/second using InnoDB tables, but PostgreSQL 8.0.3 could log only about 540 objects/second. (test system: quad-Itanium2, 8GB memory, SCSI RAID, GigE connection from simulation server, nothing running except system processes and database system under test)\n\nWe also found that we could improve MySQL performance significantly using MySQL's \"INSERT\" command extension allowing multiple value-list tuples in a single command; the rate for MyISAM tables improved to about 2600 objects/second. PostgreSQL doesn't support that language extension. Using the COPY command instead of INSERT might help, but since rows are being generated on the fly, I don't see how to use COPY without running a separate process that reads rows from the application and uses COPY to write to the database. The application currently has two processes: the simulation and a data collector that reads events from the sim (queued in shared memory) and writes them as rows to the database, buffering as needed to avoid lost data during periods of high activity. To use COPY I think we would have to split our data collector into two processes communicating via a pipe.\n\nQuery performance is not an issue: we found that when suitable indexes are added PostgreSQL is fast enough on the kinds of queries our users make. The crux is writing rows to the database fast enough to keep up with the simulation.\n\nAre there general guidelines for tuning the PostgreSQL server for this kind of application? The suggestions I've found include disabling fsync (done), increasing the value of wal_buffers, and moving the WAL to a different disk, but these aren't likely to produce the 3x improvement that we need. On the client side I've found only two suggestions: disable autocommit and use COPY instead of INSERT. I think I've effectively disabled autocommit by batching up to several hundred INSERT commands in each PQexec() call, and it isn�t clear that COPY is worth the effort in our application.\n\nThanks.\n\n", "msg_date": "Tue, 03 Jan 2006 16:44:28 -0700", "msg_from": "Steve Eckmann <[email protected]>", "msg_from_op": true, "msg_subject": "improving write performance for logging application" }, { "msg_contents": "Steve Eckmann <[email protected]> writes:\n> We also found that we could improve MySQL performance significantly\n> using MySQL's \"INSERT\" command extension allowing multiple value-list\n> tuples in a single command; the rate for MyISAM tables improved to\n> about 2600 objects/second. PostgreSQL doesn't support that language\n> extension. Using the COPY command instead of INSERT might help, but\n> since rows are being generated on the fly, I don't see how to use COPY\n> without running a separate process that reads rows from the\n> application and uses COPY to write to the database.\n\nCan you conveniently alter your application to batch INSERT commands\ninto transactions? Ie\n\n\tBEGIN;\n\tINSERT ...;\n\t... maybe 100 or so inserts ...\n\tCOMMIT;\n\tBEGIN;\n\t... lather, rinse, repeat ...\n\nThis cuts down the transactional overhead quite a bit. 
A downside is\nthat you lose multiple rows if any INSERT fails, but then the same would\nbe true of multiple VALUES lists per INSERT.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Jan 2006 19:00:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging application " }, { "msg_contents": "On Tue, Jan 03, 2006 at 04:44:28PM -0700, Steve Eckmann wrote:\n> Are there general guidelines for tuning the PostgreSQL server for this kind \n> of application? The suggestions I've found include disabling fsync (done),\n\nAre you sure you really want this? The results could be catastrophic in case\nof a crash.\n\n> On the client side I've found only two suggestions: disable autocommit and \n> use COPY instead of INSERT. I think I've effectively disabled autocommit by \n> batching up to several hundred INSERT commands in each PQexec() call, and \n> it isn’t clear that COPY is worth the effort in our application.\n\nI'm a bit confused here: How can you batch multiple INSERTs into large\nstatements for MySQL, but not batch multiple INSERTs into COPY statements for\nPostgreSQL?\n\nAnyhow, putting it all inside one transaction (or a few) is likely to help\nquite a lot, but of course less when you have fsync=false. Bunding multiple\nstatements in each PQexec() call won't really give you that; you'll have to\ntell the database so explicitly.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 4 Jan 2006 01:06:03 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "\n\nOn Tue, 3 Jan 2006, Tom Lane wrote:\n\n> Steve Eckmann <[email protected]> writes:\n> > We also found that we could improve MySQL performance significantly\n> > using MySQL's \"INSERT\" command extension allowing multiple value-list\n> > tuples in a single command; the rate for MyISAM tables improved to\n> > about 2600 objects/second. PostgreSQL doesn't support that language\n> > extension. Using the COPY command instead of INSERT might help, but\n> > since rows are being generated on the fly, I don't see how to use COPY\n> > without running a separate process that reads rows from the\n> > application and uses COPY to write to the database.\n>\n> Can you conveniently alter your application to batch INSERT commands\n> into transactions? Ie\n>\n> \tBEGIN;\n> \tINSERT ...;\n> \t... maybe 100 or so inserts ...\n> \tCOMMIT;\n> \tBEGIN;\n> \t... lather, rinse, repeat ...\n>\n> This cuts down the transactional overhead quite a bit. A downside is\n> that you lose multiple rows if any INSERT fails, but then the same would\n> be true of multiple VALUES lists per INSERT.\n\nSteve, you mentioned that you data collector buffers the data before\nsending it to the database, modify it so that each time it goes to send\nthings to the database you send all the data that's in the buffer as a\nsingle transaction.\n\nI am working on useing postgres to deal with log data and wrote a simple\nperl script that read in the log files a line at a time, and then wrote\nthem 1000 at a time to the database. On a dual Opteron 240 box with 2G of\nram 1x 15krpm SCSI drive (and a untuned postgress install with the compile\ntime defaults) I was getting 5000-8000 lines/sec (I think this was with\nfsync disabled, but I don't remember for sure). 
and postgres was\ncomplaining that it was overrunning it's log sizes (which limits the speed\nas it then has to pause to flush the logs)\n\nthe key thing is to send multiple lines with one transaction as tom shows\nabove.\n\nDavid Lang\n\n", "msg_date": "Tue, 3 Jan 2006 19:23:23 -0800 (PST)", "msg_from": "dlang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "We have a similar application thats doing upwards of 2B inserts\nper day. We have spent a lot of time optimizing this, and found the\nfollowing to be most beneficial:\n\n1) use COPY (BINARY if possible)\n2) don't use triggers or foreign keys\n3) put WAL and tables on different spindles (channels if possible)\n4) put as much as you can in each COPY, and put as many COPYs as\n you can in a single transaction.\n5) watch out for XID wraparound\n6) tune checkpoint* and bgwriter* parameters for your I/O system\n\nOn Tue, 2006-01-03 at 16:44 -0700, Steve Eckmann wrote:\n> I have questions about how to improve the write performance of PostgreSQL for logging data from a real-time simulation. We found that MySQL 4.1.3 could log about 1480 objects/second using MyISAM tables or about 1225 objects/second using InnoDB tables, but PostgreSQL 8.0.3 could log only about 540 objects/second. (test system: quad-Itanium2, 8GB memory, SCSI RAID, GigE connection from simulation server, nothing running except system processes and database system under test)\n> \n> We also found that we could improve MySQL performance significantly using MySQL's \"INSERT\" command extension allowing multiple value-list tuples in a single command; the rate for MyISAM tables improved to about 2600 objects/second. PostgreSQL doesn't support that language extension. Using the COPY command instead of INSERT might help, but since rows are being generated on the fly, I don't see how to use COPY without running a separate process that reads rows from the application and uses COPY to write to the database. The application currently has two processes: the simulation and a data collector that reads events from the sim (queued in shared memory) and writes them as rows to the database, buffering as needed to avoid lost data during periods of high activity. To use COPY I think we would have to split our data collector into two processes communicating via a pipe.\n> \n> Query performance is not an issue: we found that when suitable indexes are added PostgreSQL is fast enough on the kinds of queries our users make. The crux is writing rows to the database fast enough to keep up with the simulation.\n> \n> Are there general guidelines for tuning the PostgreSQL server for this kind of application? The suggestions I've found include disabling fsync (done), increasing the value of wal_buffers, and moving the WAL to a different disk, but these aren't likely to produce the 3x improvement that we need. On the client side I've found only two suggestions: disable autocommit and use COPY instead of INSERT. 
I think I've effectively disabled autocommit by batching up to several hundred INSERT commands in each PQexec() call, and it isn’t clear that COPY is worth the effort in our application.\n> \n> Thanks.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n-- \nIan Westmacott <[email protected]>\nIntellivid Corp.\n\n", "msg_date": "Wed, 04 Jan 2006 08:54:25 -0500", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "Tom Lane wrote:\n\n>Steve Eckmann <[email protected]> writes:\n> \n>\n>>We also found that we could improve MySQL performance significantly\n>>using MySQL's \"INSERT\" command extension allowing multiple value-list\n>>tuples in a single command; the rate for MyISAM tables improved to\n>>about 2600 objects/second. PostgreSQL doesn't support that language\n>>extension. Using the COPY command instead of INSERT might help, but\n>>since rows are being generated on the fly, I don't see how to use COPY\n>>without running a separate process that reads rows from the\n>>application and uses COPY to write to the database.\n>> \n>>\n>\n>Can you conveniently alter your application to batch INSERT commands\n>into transactions? Ie\n>\n>\tBEGIN;\n>\tINSERT ...;\n>\t... maybe 100 or so inserts ...\n>\tCOMMIT;\n>\tBEGIN;\n>\t... lather, rinse, repeat ...\n>\n>This cuts down the transactional overhead quite a bit. A downside is\n>that you lose multiple rows if any INSERT fails, but then the same would\n>be true of multiple VALUES lists per INSERT.\n>\n>\t\t\tregards, tom lane\n> \n>\nThanks for the suggestion, Tom. Yes, I think I could do that. But I \nthought what I was doing now was effectively the same, because the \nPostgreSQL 8.0.0 Documentation says (section 27.3.1): \"It is allowed to \ninclude multiple SQL commands (separated by semicolons) in the command \nstring. Multiple queries sent in a single PQexec call are processed in a \nsingle transaction....\" Our simulation application has nearly 400 event \ntypes, each of which is a C++ class for which we have a corresponding \ndatabase table. So every thousand events or so I issue one PQexec() call \nfor each event type that has unlogged instances, sending INSERT commands \nfor all instances. For example,\n\n PQexec(dbConn, \"INSERT INTO FlyingObjectState VALUES (...); INSERT \nINTO FlyingObjectState VALUES (...); ...\");\n\nMy thought was that this would be a good compromise between minimizing \ntransactions (one per event class per buffering interval instead of one \nper event) and minimizing disk seeking (since all queries in a single \ntransaction insert rows into the same table). Am I overlooking something \nhere? One thing I haven't tried is increasing the buffering interval \nfrom 1000 events to, say, 10,000. It turns out that 1000 is a good \nnumber for Versant, the object database system we're replacing, and for \nMySQL, so I assumed it would be a good number for PostgreSQL, too.\n\nRegards, Steve\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\nSteve Eckmann <[email protected]> writes:\n \n\nWe also found that we could improve MySQL performance significantly\nusing MySQL's \"INSERT\" command extension allowing multiple value-list\ntuples in a single command; the rate for MyISAM tables improved to\nabout 2600 objects/second. PostgreSQL doesn't support that language\nextension. 
Using the COPY command instead of INSERT might help, but\nsince rows are being generated on the fly, I don't see how to use COPY\nwithout running a separate process that reads rows from the\napplication and uses COPY to write to the database.\n \n\n\nCan you conveniently alter your application to batch INSERT commands\ninto transactions? Ie\n\n\tBEGIN;\n\tINSERT ...;\n\t... maybe 100 or so inserts ...\n\tCOMMIT;\n\tBEGIN;\n\t... lather, rinse, repeat ...\n\nThis cuts down the transactional overhead quite a bit. A downside is\nthat you lose multiple rows if any INSERT fails, but then the same would\nbe true of multiple VALUES lists per INSERT.\n\n\t\t\tregards, tom lane\n \n\nThanks for the suggestion, Tom. Yes, I think I could do that. But I\nthought what I was doing now was effectively the same, because the\nPostgreSQL 8.0.0 Documentation says (section 27.3.1): \"It is allowed to\ninclude multiple SQL commands (separated by semicolons) in the command\nstring. Multiple queries sent in a single PQexec call are processed in\na single transaction....\" Our simulation application has nearly 400\nevent types, each of which is a C++ class for which we have a\ncorresponding database table. So every thousand events or so I issue\none PQexec() call for each event type that has unlogged instances,\nsending INSERT commands for all instances. For example,\n\n    PQexec(dbConn, \"INSERT INTO FlyingObjectState VALUES (...); INSERT\nINTO FlyingObjectState VALUES (...); ...\");\n\nMy thought was that this would be a good compromise between minimizing\ntransactions (one per event class per buffering interval instead of one\nper event) and minimizing disk seeking (since all queries in a single\ntransaction insert rows into the same table). Am I overlooking\nsomething here? One thing I haven't tried is increasing the buffering\ninterval from 1000 events to, say, 10,000. It turns out that 1000 is a\ngood number for Versant, the object database system we're replacing,\nand for MySQL, so I assumed it would be a good number for PostgreSQL,\ntoo.\n\nRegards,  Steve", "msg_date": "Wed, 04 Jan 2006 07:00:12 -0700", "msg_from": "Steve Eckmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "Steinar H. Gunderson wrote:\n\n>On Tue, Jan 03, 2006 at 04:44:28PM -0700, Steve Eckmann wrote:\n> \n>\n>>Are there general guidelines for tuning the PostgreSQL server for this kind \n>>of application? The suggestions I've found include disabling fsync (done),\n>> \n>>\n>\n>Are you sure you really want this? The results could be catastrophic in case\n>of a crash.\n>\n> \n>\n>>On the client side I've found only two suggestions: disable autocommit and \n>>use COPY instead of INSERT. I think I've effectively disabled autocommit by \n>>batching up to several hundred INSERT commands in each PQexec() call, and \n>>it isn’t clear that COPY is worth the effort in our application.\n>> \n>>\n>\n>I'm a bit confused here: How can you batch multiple INSERTs into large\n>statements for MySQL, but not batch multiple INSERTs into COPY statements for\n>PostgreSQL?\n>\n>Anyhow, putting it all inside one transaction (or a few) is likely to help\n>quite a lot, but of course less when you have fsync=false. Bunding multiple\n>statements in each PQexec() call won't really give you that; you'll have to\n>tell the database so explicitly.\n>\n>/* Steinar */\n> \n>\nThanks, Steinar. 
I don't think we would really run with fsync off, but I \nneed to document the performance tradeoffs. You're right that my \nexplanation was confusing; probably because I'm confused about how to \nuse COPY! I could batch multiple INSERTS using COPY statements, I just \ndon't see how to do it without adding another process to read from \nSTDIN, since the application that is currently the database client is \nconstructing rows on the fly. I would need to get those rows into some \nprocess's STDIN stream or into a server-side file before COPY could be \nused, right?\n\nYou're comment about bundling multiple statements in each PQexec() call \nseems to disagree with a statement in 27.3.1 that I interpret as saying \neach PQexec() call corresponds to a single transaction. Are you sure my \ninterpretation is wrong?\n\nRegards, Steve\n\n\n\n\n\n\n\nSteinar H. Gunderson wrote:\n\nOn Tue, Jan 03, 2006 at 04:44:28PM -0700, Steve Eckmann wrote:\n \n\nAre there general guidelines for tuning the PostgreSQL server for this kind \nof application? The suggestions I've found include disabling fsync (done),\n \n\n\nAre you sure you really want this? The results could be catastrophic in case\nof a crash.\n\n \n\nOn the client side I've found only two suggestions: disable autocommit and \nuse COPY instead of INSERT. I think I've effectively disabled autocommit by \nbatching up to several hundred INSERT commands in each PQexec() call, and \nit isn’t clear that COPY is worth the effort in our application.\n \n\n\nI'm a bit confused here: How can you batch multiple INSERTs into large\nstatements for MySQL, but not batch multiple INSERTs into COPY statements for\nPostgreSQL?\n\nAnyhow, putting it all inside one transaction (or a few) is likely to help\nquite a lot, but of course less when you have fsync=false. Bunding multiple\nstatements in each PQexec() call won't really give you that; you'll have to\ntell the database so explicitly.\n\n/* Steinar */\n \n\nThanks, Steinar. I don't think we would really run with fsync off, but\nI need to document the performance tradeoffs. You're right that my\nexplanation was confusing; probably because I'm confused about how to\nuse COPY! I could batch multiple INSERTS using COPY statements, I just\ndon't see how to do it without adding another process to read from\nSTDIN, since the application that is currently the database client is\nconstructing rows on the fly. I would need to get those rows into some\nprocess's STDIN stream or into a server-side file before COPY could be\nused, right?\n\nYou're comment about bundling multiple statements in each PQexec() call\nseems to disagree with a statement in 27.3.1 that I interpret as saying\neach PQexec() call corresponds to a single transaction. Are you sure my\ninterpretation is wrong?\n\nRegards, Steve", "msg_date": "Wed, 04 Jan 2006 07:08:34 -0700", "msg_from": "Steve Eckmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "dlang wrote:\n\n>On Tue, 3 Jan 2006, Tom Lane wrote:\n>\n> \n>\n>>Steve Eckmann <[email protected]> writes:\n>> \n>>\n>>>We also found that we could improve MySQL performance significantly\n>>>using MySQL's \"INSERT\" command extension allowing multiple value-list\n>>>tuples in a single command; the rate for MyISAM tables improved to\n>>>about 2600 objects/second. PostgreSQL doesn't support that language\n>>>extension. 
Using the COPY command instead of INSERT might help, but\n>>>since rows are being generated on the fly, I don't see how to use COPY\n>>>without running a separate process that reads rows from the\n>>>application and uses COPY to write to the database.\n>>> \n>>>\n>>Can you conveniently alter your application to batch INSERT commands\n>>into transactions? Ie\n>>\n>>\tBEGIN;\n>>\tINSERT ...;\n>>\t... maybe 100 or so inserts ...\n>>\tCOMMIT;\n>>\tBEGIN;\n>>\t... lather, rinse, repeat ...\n>>\n>>This cuts down the transactional overhead quite a bit. A downside is\n>>that you lose multiple rows if any INSERT fails, but then the same would\n>>be true of multiple VALUES lists per INSERT.\n>> \n>>\n>\n>Steve, you mentioned that you data collector buffers the data before\n>sending it to the database, modify it so that each time it goes to send\n>things to the database you send all the data that's in the buffer as a\n>single transaction.\n>\n>I am working on useing postgres to deal with log data and wrote a simple\n>perl script that read in the log files a line at a time, and then wrote\n>them 1000 at a time to the database. On a dual Opteron 240 box with 2G of\n>ram 1x 15krpm SCSI drive (and a untuned postgress install with the compile\n>time defaults) I was getting 5000-8000 lines/sec (I think this was with\n>fsync disabled, but I don't remember for sure). and postgres was\n>complaining that it was overrunning it's log sizes (which limits the speed\n>as it then has to pause to flush the logs)\n>\n>the key thing is to send multiple lines with one transaction as tom shows\n>above.\n>\n>David Lang\n>\nThanks, David. I will look more carefully at how to batch multiple rows \nper PQexec() call. Regards, Steve.\n\n\n\n\n\n\n\ndlang wrote:\n\n\nOn Tue, 3 Jan 2006, Tom Lane wrote:\n\n \n\nSteve Eckmann <[email protected]> writes:\n \n\nWe also found that we could improve MySQL performance significantly\nusing MySQL's \"INSERT\" command extension allowing multiple value-list\ntuples in a single command; the rate for MyISAM tables improved to\nabout 2600 objects/second. PostgreSQL doesn't support that language\nextension. Using the COPY command instead of INSERT might help, but\nsince rows are being generated on the fly, I don't see how to use COPY\nwithout running a separate process that reads rows from the\napplication and uses COPY to write to the database.\n \n\nCan you conveniently alter your application to batch INSERT commands\ninto transactions? Ie\n\n\tBEGIN;\n\tINSERT ...;\n\t... maybe 100 or so inserts ...\n\tCOMMIT;\n\tBEGIN;\n\t... lather, rinse, repeat ...\n\nThis cuts down the transactional overhead quite a bit. A downside is\nthat you lose multiple rows if any INSERT fails, but then the same would\nbe true of multiple VALUES lists per INSERT.\n \n\n\nSteve, you mentioned that you data collector buffers the data before\nsending it to the database, modify it so that each time it goes to send\nthings to the database you send all the data that's in the buffer as a\nsingle transaction.\n\nI am working on useing postgres to deal with log data and wrote a simple\nperl script that read in the log files a line at a time, and then wrote\nthem 1000 at a time to the database. On a dual Opteron 240 box with 2G of\nram 1x 15krpm SCSI drive (and a untuned postgress install with the compile\ntime defaults) I was getting 5000-8000 lines/sec (I think this was with\nfsync disabled, but I don't remember for sure). 
and postgres was\ncomplaining that it was overrunning it's log sizes (which limits the speed\nas it then has to pause to flush the logs)\n\nthe key thing is to send multiple lines with one transaction as tom shows\nabove.\n\nDavid Lang\n\nThanks, David. I will look more carefully at how to batch multiple rows\nper PQexec() call.  Regards, Steve.", "msg_date": "Wed, 04 Jan 2006 07:13:13 -0700", "msg_from": "Steve Eckmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "Ian Westmacott wrote:\n\n>We have a similar application thats doing upwards of 2B inserts\n>per day. We have spent a lot of time optimizing this, and found the\n>following to be most beneficial:\n>\n>1) use COPY (BINARY if possible)\n>2) don't use triggers or foreign keys\n>3) put WAL and tables on different spindles (channels if possible)\n>4) put as much as you can in each COPY, and put as many COPYs as\n> you can in a single transaction.\n>5) watch out for XID wraparound\n>6) tune checkpoint* and bgwriter* parameters for your I/O system\n>\nThanks, Ian. I will look at how to implement your suggestions.\n\nRegards, Steve\n", "msg_date": "Wed, 04 Jan 2006 07:16:33 -0700", "msg_from": "Steve Eckmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "2B is a lot of inserts. If you had to guess, \nwhat do you think is the maximum number of inserts you could do in a day?\n\nHow large is each record being inserted?\n\nHow much can you put in a COPY and how many COPYs \ncan you put into a transactions?\n\nWhat values are you using for bgwriter* and checkpoint*?\n\nWhat HW on you running on and what kind of performance do you typically get?\n\nInquiring minds definitely want to know ;-)\nRon\n\n\nAt 08:54 AM 1/4/2006, Ian Westmacott wrote:\n>We have a similar application thats doing upwards of 2B inserts\n>per day. We have spent a lot of time optimizing this, and found the\n>following to be most beneficial:\n>\n>1) use COPY (BINARY if possible)\n>2) don't use triggers or foreign keys\n>3) put WAL and tables on different spindles (channels if possible)\n>4) put as much as you can in each COPY, and put as many COPYs as\n> you can in a single transaction.\n>5) watch out for XID wraparound\n>6) tune checkpoint* and bgwriter* parameters for your I/O system\n>\n>On Tue, 2006-01-03 at 16:44 -0700, Steve Eckmann wrote:\n> > I have questions about how to improve the \n> write performance of PostgreSQL for logging \n> data from a real-time simulation. We found that \n> MySQL 4.1.3 could log about 1480 objects/second \n> using MyISAM tables or about 1225 \n> objects/second using InnoDB tables, but \n> PostgreSQL 8.0.3 could log only about 540 \n> objects/second. (test system: quad-Itanium2, \n> 8GB memory, SCSI RAID, GigE connection from \n> simulation server, nothing running except \n> system processes and database system under test)\n> >\n> > We also found that we could improve MySQL \n> performance significantly using MySQL's \n> \"INSERT\" command extension allowing multiple \n> value-list tuples in a single command; the rate \n> for MyISAM tables improved to about 2600 \n> objects/second. PostgreSQL doesn't support that \n> language extension. 
Using the COPY command \n> instead of INSERT might help, but since rows \n> are being generated on the fly, I don't see how \n> to use COPY without running a separate process \n> that reads rows from the application and uses \n> COPY to write to the database. The application \n> currently has two processes: the simulation and \n> a data collector that reads events from the sim \n> (queued in shared memory) and writes them as \n> rows to the database, buffering as needed to \n> avoid lost data during periods of high \n> activity. To use COPY I think we would have to \n> split our data collector into two processes communicating via a pipe.\n> >\n> > Query performance is not an issue: we found \n> that when suitable indexes are added PostgreSQL \n> is fast enough on the kinds of queries our \n> users make. The crux is writing rows to the \n> database fast enough to keep up with the simulation.\n> >\n> > Are there general guidelines for tuning the \n> PostgreSQL server for this kind of application? \n> The suggestions I've found include disabling \n> fsync (done), increasing the value of \n> wal_buffers, and moving the WAL to a different \n> disk, but these aren't likely to produce the 3x \n> improvement that we need. On the client side \n> I've found only two suggestions: disable \n> autocommit and use COPY instead of INSERT. I \n> think I've effectively disabled autocommit by \n> batching up to several hundred INSERT commands \n> in each PQexec() call, and it isn’t clear \n> that COPY is worth the effort in our application.\n> >\n> > Thanks.\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: Don't 'kill -9' the postmaster\n>--\n>Ian Westmacott <[email protected]>\n>Intellivid Corp.\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n\n", "msg_date": "Wed, 04 Jan 2006 09:29:00 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging" }, { "msg_contents": "Steve Eckmann <[email protected]> writes:\n> Thanks for the suggestion, Tom. Yes, I think I could do that. But I \n> thought what I was doing now was effectively the same, because the \n> PostgreSQL 8.0.0 Documentation says (section 27.3.1): \"It is allowed to \n> include multiple SQL commands (separated by semicolons) in the command \n> string. Multiple queries sent in a single PQexec call are processed in a \n> single transaction....\" Our simulation application has nearly 400 event \n> types, each of which is a C++ class for which we have a corresponding \n> database table. So every thousand events or so I issue one PQexec() call \n> for each event type that has unlogged instances, sending INSERT commands \n> for all instances. For example,\n\n> PQexec(dbConn, \"INSERT INTO FlyingObjectState VALUES (...); INSERT \n> INTO FlyingObjectState VALUES (...); ...\");\n\nHmm. I'm not sure if that's a good idea or not. You're causing the\nserver to take 1000 times the normal amount of memory to hold the\ncommand parsetrees, and if there are any O(N^2) behaviors in parsing\nyou could be getting hurt badly by that. (I'd like to think there are\nnot, but would definitely not swear to it.) OTOH you're reducing the\nnumber of network round trips which is a good thing. Have you actually\nmeasured to see what effect this approach has? 
It might be worth\nbuilding a test server with profiling enabled to see if the use of such\nlong command strings creates any hot spots in the profile.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Jan 2006 10:39:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging application " }, { "msg_contents": "On 1/4/06, Steve Eckmann <[email protected]> wrote:\n>\n> Thanks, Steinar. I don't think we would really run with fsync off, but I\n> need to document the performance tradeoffs. You're right that my explanation\n> was confusing; probably because I'm confused about how to use COPY! I could\n> batch multiple INSERTS using COPY statements, I just don't see how to do it\n> without adding another process to read from STDIN, since the application\n> that is currently the database client is constructing rows on the fly. I\n> would need to get those rows into some process's STDIN stream or into a\n> server-side file before COPY could be used, right?\n\n\nSteve,\n\nYou can use copy without resorting to another process. See the libpq\ndocumentation for 'Functions Associated with the copy Command\". We do\nsomething like this:\n\nchar *mbuf;\n\n// allocate space and fill mbuf with appropriately formatted data somehow\n\nPQexec( conn, \"begin\" );\nPQexec( conn, \"copy mytable from stdin\" );\nPQputCopyData( conn, mbuf, strlen(mbuf) );\nPQputCopyEnd( conn, NULL );\nPQexec( conn, \"commit\" );\n\n-K", "msg_date": "Wed, 4 Jan 2006 09:40:31 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "On Wed, 2006-01-04 at 09:29 -0500, Ron wrote:\n> 2B is a lot of inserts. If you had to guess, \n> what do you think is the maximum number of inserts you could do in a day?\n\nIt seems we are pushing it there. Our intentions are to scale much\nfurther, but our plans are to distribute at this point.\n\n> How large is each record being inserted?\n\nThey are small: 32 (data) bytes.\n\n> How much can you put in a COPY and how many COPYs \n> can you put into a transactions?\n\nThese are driven by the application; we do about 60 COPYs and a couple\ndozen INSERT/UPDATE/DELETEs in a single transaction. Each COPY is\ndoing a variable number of rows, up to several hundred. 
We do 15 of\nthese transactions per second.\n\n> What values are you using for bgwriter* and checkpoint*?\n\nbgwriter is 100%/500 pages, and checkpoint is 50 segments/300 seconds.\nwal_buffers doesn't do much for us, and fsync is enabled.\n\n> What HW on you running on and what kind of performance do you typically get?\n\nThe WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3).\nThe tables are on a 10-spindle (SCSI) RAID50 with dual U320\ncontrollers (XFS). This is overkill for writing and querying the data,\nbut we need to constantly ANALYZE and VACUUM in the\nbackground without interrupting the inserts (the app is 24x7). The\ndatabases are 4TB, so these operations can be lengthy.\n\n-- \nIan Westmacott <[email protected]>\nIntellivid Corp.\n\n", "msg_date": "Wed, 04 Jan 2006 11:00:38 -0500", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging" }, { "msg_contents": "Tom Lane wrote:\n\n>Steve Eckmann <[email protected]> writes:\n> \n>\n>> <>Thanks for the suggestion, Tom. Yes, I think I could do that. But I\n>> thought what I was doing now was effectively the same, because the\n>> PostgreSQL 8.0.0 Documentation says (section 27.3.1): \"It is allowed to\n>> include multiple SQL commands (separated by semicolons) in the command\n>> string. Multiple queries sent in a single PQexec call are processed in a\n>> single transaction....\" Our simulation application has nearly 400 event\n>> types, each of which is a C++ class for which we have a corresponding\n>> database table. So every thousand events or so I issue one PQexec() call\n>> for each event type that has unlogged instances, sending INSERT commands\n>> for all instances. For example,\n>\n>> PQexec(dbConn, \"INSERT INTO FlyingObjectState VALUES (...); INSERT \n>>INTO FlyingObjectState VALUES (...); ...\");\n>> \n>>\n>\n>Hmm. I'm not sure if that's a good idea or not. You're causing the\n>server to take 1000 times the normal amount of memory to hold the\n>command parsetrees, and if there are any O(N^2) behaviors in parsing\n>you could be getting hurt badly by that. (I'd like to think there are\n>not, but would definitely not swear to it.) OTOH you're reducing the\n>number of network round trips which is a good thing. Have you actually\n>measured to see what effect this approach has? It might be worth\n>building a test server with profiling enabled to see if the use of such\n>long command strings creates any hot spots in the profile.\n>\n>\t\t\tregards, tom lane\n> \n>\nNo, I haven't measured it. I will compare this approach with others that \nhave been suggested. Thanks. -steve\n\n\n\n\n\n\n\nTom Lane wrote:\n\nSteve Eckmann <[email protected]> writes:\n \n<>Thanks for the suggestion, Tom. Yes, I\nthink I could do that. But I \nthought what I was doing now was effectively the same, because the \nPostgreSQL 8.0.0 Documentation says (section 27.3.1): \"It is allowed to\n \ninclude multiple SQL commands (separated by semicolons) in the command \nstring. Multiple queries sent in a single PQexec call are processed in\na \nsingle transaction....\" Our simulation application has nearly 400 event\n \ntypes, each of which is a C++ class for which we have a corresponding \ndatabase table. So every thousand events or so I issue one PQexec()\ncall \nfor each event type that has unlogged instances, sending INSERT\ncommands \nfor all instances. 
For example,\n >\n\n PQexec(dbConn, \"INSERT INTO FlyingObjectState VALUES (...); INSERT \nINTO FlyingObjectState VALUES (...); ...\");\n \n\n\nHmm. I'm not sure if that's a good idea or not. You're causing the\nserver to take 1000 times the normal amount of memory to hold the\ncommand parsetrees, and if there are any O(N^2) behaviors in parsing\nyou could be getting hurt badly by that. (I'd like to think there are\nnot, but would definitely not swear to it.) OTOH you're reducing the\nnumber of network round trips which is a good thing. Have you actually\nmeasured to see what effect this approach has? It might be worth\nbuilding a test server with profiling enabled to see if the use of such\nlong command strings creates any hot spots in the profile.\n\n\t\t\tregards, tom lane\n \n\nNo, I haven't measured it. I will compare this approach with others\nthat have been suggested. Thanks.  -steve", "msg_date": "Wed, 04 Jan 2006 17:16:45 -0700", "msg_from": "Steve Eckmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "Kelly Burkhart wrote:\n\n> On 1/4/06, Steve Eckmann <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Thanks, Steinar. I don't think we would really run with fsync off,\n> but I need to document the performance tradeoffs. You're right\n> that my explanation was confusing; probably because I'm confused\n> about how to use COPY! I could batch multiple INSERTS using COPY\n> statements, I just don't see how to do it without adding another\n> process to read from STDIN, since the application that is\n> currently the database client is constructing rows on the fly. I\n> would need to get those rows into some process's STDIN stream or\n> into a server-side file before COPY could be used, right?\n>\n>\n> Steve,\n>\n> You can use copy without resorting to another process. See the libpq \n> documentation for 'Functions Associated with the copy Command\". We do \n> something like this:\n>\n> char *mbuf;\n>\n> // allocate space and fill mbuf with appropriately formatted data somehow\n>\n> PQexec( conn, \"begin\" );\n> PQexec( conn, \"copy mytable from stdin\" );\n> PQputCopyData( conn, mbuf, strlen(mbuf) );\n> PQputCopyEnd( conn, NULL );\n> PQexec( conn, \"commit\" );\n>\n> -K\n\nThanks for the concrete example, Kelly. I had read the relevant libpq \ndoc but didn't put the pieces together.\n\nRegards, Steve\n\n\n\n\n\n\n\nKelly Burkhart wrote:\nOn 1/4/06, Steve Eckmann\n<[email protected]>\nwrote:\n \nThanks,\nSteinar. I don't think we would really run with fsync off, but\nI need to document the performance tradeoffs. You're right that my\nexplanation was confusing; probably because I'm confused about how to\nuse COPY! I could batch multiple INSERTS using COPY statements, I just\ndon't see how to do it without adding another process to read from\nSTDIN, since the application that is currently the database client is\nconstructing rows on the fly. I would need to get those rows into some\nprocess's STDIN stream or into a server-side file before COPY could be\nused, right?\n\nSteve,\n\nYou can use copy without resorting to another process.  See the\nlibpq documentation for 'Functions Associated with the copy\nCommand\".  
We do something like this:\n\n\n\nchar *mbuf;\n\n// allocate space and fill mbuf with appropriately formatted data\nsomehow\n\nPQexec( conn, \"begin\" );\nPQexec( conn, \"copy mytable from stdin\" );\nPQputCopyData( conn, mbuf, strlen(mbuf) );\nPQputCopyEnd( conn, NULL );\nPQexec( conn, \"commit\" );\n\n-K\n\nThanks for the concrete example, Kelly. I had read the relevant libpq\ndoc but didn't put the pieces together.\n\nRegards,  Steve", "msg_date": "Wed, 04 Jan 2006 17:19:08 -0700", "msg_from": "Steve Eckmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: improving write performance for logging application" }, { "msg_contents": "On Wed, Jan 04, 2006 at 11:00:38AM -0500, Ian Westmacott wrote:\n> The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3).\n> The tables are on a 10-spindle (SCSI) RAID50 with dual U320\n> controllers (XFS). This is overkill for writing and querying the data,\n> but we need to constantly ANALYZE and VACUUM in the\n> background without interrupting the inserts (the app is 24x7). The\n> databases are 4TB, so these operations can be lengthy.\n\nHow come you're using RAID50 instead of just RAID0? Or was WAL being on\nRAID0 a typo?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 5 Jan 2006 19:08:22 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging" }, { "msg_contents": "On Thu, 2006-01-05 at 19:08 -0600, Jim C. Nasby wrote:\n> On Wed, Jan 04, 2006 at 11:00:38AM -0500, Ian Westmacott wrote:\n> > The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3).\n> > The tables are on a 10-spindle (SCSI) RAID50 with dual U320\n> > controllers (XFS). This is overkill for writing and querying the data,\n> > but we need to constantly ANALYZE and VACUUM in the\n> > background without interrupting the inserts (the app is 24x7). The\n> > databases are 4TB, so these operations can be lengthy.\n> \n> How come you're using RAID50 instead of just RAID0? Or was WAL being on\n> RAID0 a typo?\n\nWe use RAID50 instead of RAID0 for the tables for some fault-tolerance.\nWe use RAID0 for the WAL for performance.\n\nI'm missing the implication of the question...\n\n-- \nIan Westmacott <[email protected]>\nIntellivid Corp.\n\n", "msg_date": "Fri, 06 Jan 2006 09:00:06 -0500", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging" }, { "msg_contents": "On Fri, Jan 06, 2006 at 09:00:06AM -0500, Ian Westmacott wrote:\n>We use RAID50 instead of RAID0 for the tables for some fault-tolerance.\n>We use RAID0 for the WAL for performance.\n>\n>I'm missing the implication of the question...\n\nIf you have the WAL on RAID 0 you have no fault tolerance, regardless of\nwhat level you use for the tables.\n\nMike Stone\n", "msg_date": "Fri, 06 Jan 2006 11:03:12 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging" }, { "msg_contents": "On Fri, Jan 06, 2006 at 09:00:06AM -0500, Ian Westmacott wrote:\n> On Thu, 2006-01-05 at 19:08 -0600, Jim C. 
Nasby wrote:\n> > On Wed, Jan 04, 2006 at 11:00:38AM -0500, Ian Westmacott wrote:\n> > > The WAL is a 2-spindle (SATA) RAID0 with its own controller (ext3).\n> > > The tables are on a 10-spindle (SCSI) RAID50 with dual U320\n> > > controllers (XFS). This is overkill for writing and querying the data,\n> > > but we need to constantly ANALYZE and VACUUM in the\n> > > background without interrupting the inserts (the app is 24x7). The\n> > > databases are 4TB, so these operations can be lengthy.\n> > \n> > How come you're using RAID50 instead of just RAID0? Or was WAL being on\n> > RAID0 a typo?\n> \n> We use RAID50 instead of RAID0 for the tables for some fault-tolerance.\n> We use RAID0 for the WAL for performance.\n> \n> I'm missing the implication of the question...\n\nThe problem is that if you lose WAL or the data, you've lost everything.\nSo you might as well use raid0 for the data if you're using it for WAL.\nOr switch WAL to raid1. Actually, a really good controller *might* be\nable to do a good job of raid5 for WAL. Or just use raid10.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 6 Jan 2006 10:37:32 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging" }, { "msg_contents": "On Fri, 2006-01-06 at 10:37 -0600, Jim C. Nasby wrote:\n> The problem is that if you lose WAL or the data, you've lost everything.\n> So you might as well use raid0 for the data if you're using it for WAL.\n> Or switch WAL to raid1. Actually, a really good controller *might* be\n> able to do a good job of raid5 for WAL. Or just use raid10.\n\nIf the WAL is lost, can you lose more than the data since the last\ncheckpoint?\n\n-- \nIan Westmacott <[email protected]>\nIntellivid Corp.\n\n", "msg_date": "Fri, 06 Jan 2006 14:02:57 -0500", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging" }, { "msg_contents": "Ian Westmacott <[email protected]> writes:\n> If the WAL is lost, can you lose more than the data since the last\n> checkpoint?\n\nThe problem is that you might have partially-applied actions since the\nlast checkpoint, rendering your database inconsistent; then *all* the\ndata is suspect if not actually corrupt.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jan 2006 15:27:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving write performance for logging " } ]
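For reference, the batching advice that runs through the thread above (Tom Lane's BEGIN/INSERT/COMMIT grouping and Ian Westmacott's "put as much as you can in each COPY, and as many COPYs as you can in a single transaction") looks roughly like this when written out as a psql script. This is only a sketch: the column lists are invented for illustration, since the thread never shows the table definitions, and a data collector like Steve's would normally drive the same pattern through libpq (PQputCopyData/PQputCopyEnd, as in Kelly Burkhart's example) rather than through psql.

BEGIN;
-- one COPY per event table that has buffered rows; data lines are tab-separated
COPY FlyingObjectState (sim_time, object_id, pos_x, pos_y, pos_z) FROM STDIN;
12.50	1001	100.0	200.0	300.0
12.55	1001	101.5	201.2	299.8
\.
-- hypothetical second event table, repeated for each event type in the buffer
COPY RadarContactEvent (sim_time, object_id, contact_id) FROM STDIN;
12.55	1001	2042
\.
COMMIT;

Committing once per buffering interval rather than once per row is the point of both suggestions: the per-transaction overhead (and, with fsync enabled, the WAL flush) is paid once per batch instead of once per inserted row.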
[ { "msg_contents": "I have a table which stores cumulative values\nI would like to display/chart the deltas between successive data collections\n\nIf my primary key only increments by 1, I could write a simple query\n\nselect b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\n from jam_trace_sys a, jam_trace_sys b\n where a.trace_id = 22\n and b.trace_id = a.trace_id\n and b.seq_no = a.seq_no + 1\n order by a.seq_no;\n\nHowever the difference in sequence number is variable.\nSo (in Oracle) I used to extract the next seq_no using a correlated sub-query\n\nselect b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\nfrom jam_trace_sys a, jam_trace_sys b\nwhere a.trace_id = 22\nand (b.trace_id, b.seq_no) =\n(select a.trace_id, min(c.seq_no) from jam_trace_sys c\nwhere c.trace_id = a.trace_id and c.seq_no > a.seq_no)\n order by a.seq_no;\n\nFor every row in A, The correlated sub-query from C will execute\nWith an appropriate index, it will just descend the index Btree\ngo one row to the right and return that row (min > :value)\nand join to table B\n\nSELECT STATEMENT\n SORT ORDER BY\n TABLE ACCESS BY INDEX ROWID JAM_TRACE_SYS B\n NESTED LOOPS\n TABLE ACCESS BY INDEX ROWID JAM_TRACE_SYS A\n INDEX RANGE SCAN JAM_TRACE_SYS_N1 A\n INDEX RANGE SCAN JAM_TRACE_SYS_N1 B\n SORT AGGREGATE\n INDEX RANGE SCAN JAM_TRACE_SYS_N1 C\n\nIn postgreSQL A and B are doing a cartesian product\nthen C gets executed for every row in this cartesian product\nand most of the extra rows get thrown out.\nIs there any way to force an execution plan like above where the correlated subquery runs before going to B.\nThe table is small right now, but it will grow to have millions of rows\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=124911.81..124944.84 rows=13213 width=20) (actual time=13096.754..13097.053 rows=149 loops=1)\n Sort Key: a.seq_no\n -> Nested Loop (cost=4.34..124007.40 rows=13213 width=20) (actual time=1948.300..13096.329 rows=149 loops=1)\n Join Filter: (subplan)\n -> Seq Scan on jam_trace_sys b (cost=0.00..3.75 rows=175 width=16) (actual time=0.005..0.534 rows=175 loops=1)\n -> Materialize (cost=4.34..5.85 rows=151 width=16) (actual time=0.002..0.324 rows=150 loops=175)\n -> Seq Scan on jam_trace_sys a (cost=0.00..4.19 rows=151 width=16) (actual time=0.022..0.687 rows=150 loops=1)\n Filter: (trace_id = 22)\n SubPlan\n -> Aggregate (cost=4.67..4.67 rows=1 width=4) (actual time=0.486..0.488 rows=1 loops=26250)\n -> Seq Scan on jam_trace_sys c (cost=0.00..4.62 rows=15 width=4) (actual time=0.058..0.311 rows=74 loops=26250)\n Filter: ((trace_id = $0) AND (seq_no > $1))\n Total runtime: 13097.557 ms\n(13 rows)\n\npglnx01=> \\\\d jam_trace_sys\n Table \"public.jam_trace_sys\"\n Column | Type | Modifiers\n-----------------+---------+-----------\n trace_id | integer |\n seq_no | integer |\n cpu_utilization | integer |\n gc_minor | integer |\n gc_major | integer |\n heap_used | integer |\nIndexes:\n \"jam_trace_sys_n1\" btree (trace_id, seq_no)\n\npglnx01=> select count(*) from jam_trace_Sys ;\n count\n-------\n 175\n(1 row)\n\npglnx01=> select trace_id, count(*) from jam_trace_sys group by trace_id ;\n trace_id | count\n----------+-------\n 15 | 2\n 18 | 21\n 22 | 150\n 16 | 2\n(4 rows)", "msg_date": "Tue, 3 Jan 2006 20:12:51 -0800", "msg_from": "\"Virag Saksena\" <[email protected]>", "msg_from_op": true, "msg_subject": "Avoiding cartesian 
product" }, { "msg_contents": "Dear Virag,\n\nAFAIK aggregates aren't indexed in postgres (at least not before 8.1, which \nindexes min and max, iirc).\n\nAlso, I don't think you need to exactly determine the trace_id. Try this one \n(OTOH; might be wrong):\n\nselect DISTINCT ON (a.trace_id, a.seq_no)\t-- See below\n b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\nfrom jam_trace_sys a, jam_trace_sys b\nwhere a.trace_id = 22\n and b.trace_id = a.trace_id\n and b.seq_no > a.seq_no\t\t\t-- Simply \">\" is enough\norder by a.trace_id, a.seq_no, b.seq_no;\t-- DISTINCT, see below\n\nThe trick is that DISTINCT takes the first one in each group (IIRC it is \ngranted, at least someone told me on one of these lists :) ) so if you order \nby the DISTINCT attributes and then by b.seq_no, you'll get the smallest of \nappropriate b.seq_no values for each DISTINCT values.\n\nThe idea of DISTINCTing by both columns is to make sure the planner finds \nthe index. (lately I had a similar problem: WHERE a=1 ORDER BY b LIMIT 1 \nused an index on b, instead of an (a,b) index. Using ORDER BY a,b solved it)\n\nHTH,\n\n--\nG.\n\n\nOn 2006.01.04. 5:12, Virag Saksena wrote:\n> \n> I have a table which stores cumulative values\n> I would like to display/chart the deltas between successive data \n> collections\n> \n> If my primary key only increments by 1, I could write a simple query\n> \n> select b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\n> from jam_trace_sys a, jam_trace_sys b\n> where a.trace_id = 22\n> and b.trace_id = a.trace_id\n> and b.seq_no = a.seq_no + 1\n> order by a.seq_no;\n> \n> However the difference in sequence number is variable.\n> So (in Oracle) I used to extract the next seq_no using a correlated \n> sub-query\n> \n> select b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\n> from jam_trace_sys a, jam_trace_sys b\n> where a.trace_id = 22\n> and (b.trace_id, b.seq_no) =\n> (select a.trace_id, min(c.seq_no) from jam_trace_sys c\n> where c.trace_id = a.trace_id and c.seq_no > a.seq_no)\n> order by a.seq_no;\n> \n> For every row in A, The correlated sub-query from C will execute\n> With an appropriate index, it will just descend the index Btree\n> go one row to the right and return that row (min > :value)\n> and join to table B\n> \n> SELECT STATEMENT\n> SORT ORDER BY\n> TABLE ACCESS BY INDEX ROWID JAM_TRACE_SYS B\n> NESTED LOOPS\n> TABLE ACCESS BY INDEX ROWID JAM_TRACE_SYS A\n> INDEX RANGE SCAN JAM_TRACE_SYS_N1 A\n> INDEX RANGE SCAN JAM_TRACE_SYS_N1 B\n> SORT AGGREGATE\n> INDEX RANGE SCAN JAM_TRACE_SYS_N1 C\n> \n> In postgreSQL A and B are doing a cartesian product\n> then C gets executed for every row in this cartesian product\n> and most of the extra rows get thrown out.\n> Is there any way to force an execution plan like above where the \n> correlated subquery runs before going to B.\n> The table is small right now, but it will grow to have millions of rows\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=124911.81..124944.84 rows=13213 width=20) (actual \n> time=13096.754..13097.053 rows=149 loops=1)\n> Sort Key: a.seq_no\n> -> Nested Loop (cost=4.34..124007.40 rows=13213 width=20) (actual \n> time=1948.300..13096.329 rows=149 loops=1)\n> Join Filter: (subplan)\n> -> Seq Scan on jam_trace_sys b (cost=0.00..3.75 rows=175 \n> width=16) (actual time=0.005..0.534 rows=175 loops=1)\n> -> Materialize (cost=4.34..5.85 rows=151 width=16) (actual \n> time=0.002..0.324 rows=150 
loops=175)\n> -> Seq Scan on jam_trace_sys a (cost=0.00..4.19 \n> rows=151 width=16) (actual time=0.022..0.687 rows=150 loops=1)\n> Filter: (trace_id = 22)\n> SubPlan\n> -> Aggregate (cost=4.67..4.67 rows=1 width=4) (actual \n> time=0.486..0.488 rows=1 loops=26250)\n> -> Seq Scan on jam_trace_sys c (cost=0.00..4.62 \n> rows=15 width=4) (actual time=0.058..0.311 rows=74 loops=26250)\n> Filter: ((trace_id = $0) AND (seq_no > $1))\n> Total runtime: 13097.557 ms\n> (13 rows)\n> \n> pglnx01=> \\d jam_trace_sys\n> Table \"public.jam_trace_sys\"\n> Column | Type | Modifiers\n> -----------------+---------+-----------\n> trace_id | integer |\n> seq_no | integer |\n> cpu_utilization | integer |\n> gc_minor | integer |\n> gc_major | integer |\n> heap_used | integer |\n> Indexes:\n> \"jam_trace_sys_n1\" btree (trace_id, seq_no)\n> \n> pglnx01=> select count(*) from jam_trace_Sys ;\n> count\n> -------\n> 175\n> (1 row)\n> \n> pglnx01=> select trace_id, count(*) from jam_trace_sys group by trace_id ;\n> trace_id | count\n> ----------+-------\n> 15 | 2\n> 18 | 21\n> 22 | 150\n> 16 | 2\n> (4 rows)\n", "msg_date": "Mon, 09 Jan 2006 16:59:16 +0100", "msg_from": "=?UTF-8?B?U3rFsWNzIEfDoWJvcg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Avoiding cartesian product" }, { "msg_contents": "Szűcs,\n Thanks for your suggestion, I guess there is more than one way to attack the problem.\n\nI ended up using a trick with limit to get the next row ...\n\nselect (b.gc_minor- a.gc_minor), (b.gc_major- a.gc_major)\nfrom jam_trace_sys a join jam_trace_sys b on\n(b.seq_no = (select c.seq_no from jam_trace_sys c\n where c.trace_id = a.trace_id and c.seq_no > a.seq_no\n order by c.trace_id, c.seq_no limit 1)\n and b.trace_id = a.trace_id),\njam_tracesnap s1, jam_tracesnap s2\nwhere s1.trace_id = a.trace_id\nand s1.seq_no = a.seq_no\nand s2.trace_id = b.trace_id\nand s2.seq_no = b.seq_no\nand a.trace_id = 22\norder by a.seq_no;\n\nThis gave me a nice clean execution plan (there are some extra sources listed, but that is a bug in postgresql) ...\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=11.24..11.25 rows=1 width=20) (actual time=0.040..0.040 rows=0 loops=1)\n Sort Key: a.seq_no\n ->Nested Loop (cost=0.00..11.23 rows=1 width=20) (actual time=0.028..0.028 rows=0 loops=1)\n Join Filter: (\"inner\".seq_no = \"outer\".seq_no)\n ->Nested Loop (cost=0.00..9.20 rows=1 width=32) (actual time=0.024..0.024 rows=0 loops=1)\n Join Filter: (\"inner\".seq_no = \"outer\".seq_no)\n ->Nested Loop (cost=0.00..7.17 rows=1 width=32) (actual time=0.020..0.020 rows=0 loops=1)\n Join Filter: (\"inner\".seq_no = (subplan))\n ->Index Scan using jam_trace_sys_n1 on jam_trace_sys a (cost=0.00..3.41 rows=1 width=16) (actual time=0.016..0.016 rows=0 loops=1)\n Index Cond: (trace_id = 22)\n ->Index Scan using jam_trace_sys_n1 on jam_trace_sys b (cost=0.00..3.41 rows=1 width=16) (never executed)\n Index Cond: (22 = trace_id)\n SubPlan\n ->Limit (cost=0.00..0.33 rows=1 width=8) (never executed)\n ->Index Scan using jam_trace_sys_n1 on jam_trace_sys c (cost=0.00..6.36 rows=19 width=8) (never executed)\n Index Cond: ((trace_id = $0) AND (seq_no > $1))\n ->Index Scan using jam_tracesnap_n1 on jam_tracesnap s1 (cost=0.00..2.01 rows=1 width=8) (never executed)\n Index Cond: (22 = trace_id)\n ->Index Scan using jam_tracesnap_n1 on jam_tracesnap s2 (cost=0.00..2.01 rows=1 width=8) (never executed)\n Index 
Cond: (22 = trace_id)\n\nRegards,\n\nVirag\n\n\n----- Original Message ----- \nFrom: \"Szűcs Gábor\" <[email protected]>\nTo: <[email protected]>\nSent: Monday, January 09, 2006 7:59 AM\nSubject: Re: Avoiding cartesian product\n\n\n> Dear Virag,\n> \n> AFAIK aggregates aren't indexed in postgres (at least not before 8.1, which \n> indexes min and max, iirc).\n> \n> Also, I don't think you need to exactly determine the trace_id. Try this one \n> (OTOH; might be wrong):\n> \n> select DISTINCT ON (a.trace_id, a.seq_no) -- See below\n> b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\n> from jam_trace_sys a, jam_trace_sys b\n> where a.trace_id = 22\n> and b.trace_id = a.trace_id\n> and b.seq_no > a.seq_no -- Simply \">\" is enough\n> order by a.trace_id, a.seq_no, b.seq_no; -- DISTINCT, see below\n> \n> The trick is that DISTINCT takes the first one in each group (IIRC it is \n> granted, at least someone told me on one of these lists :) ) so if you order \n> by the DISTINCT attributes and then by b.seq_no, you'll get the smallest of \n> appropriate b.seq_no values for each DISTINCT values.\n> \n> The idea of DISTINCTing by both columns is to make sure the planner finds \n> the index. (lately I had a similar problem: WHERE a=1 ORDER BY b LIMIT 1 \n> used an index on b, instead of an (a,b) index. Using ORDER BY a,b solved it)\n> \n> HTH,\n> \n> --\n> G.\n> \n> \n> On 2006.01.04. 5:12, Virag Saksena wrote:\n> > \n> > I have a table which stores cumulative values\n> > I would like to display/chart the deltas between successive data \n> > collections\n> > \n> > If my primary key only increments by 1, I could write a simple query\n> > \n> > select b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\n> > from jam_trace_sys a, jam_trace_sys b\n> > where a.trace_id = 22\n> > and b.trace_id = a.trace_id\n> > and b.seq_no = a.seq_no + 1\n> > order by a.seq_no;\n> > \n> > However the difference in sequence number is variable.\n> > So (in Oracle) I used to extract the next seq_no using a correlated \n> > sub-query\n> > \n> > select b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\n> > from jam_trace_sys a, jam_trace_sys b\n> > where a.trace_id = 22\n> > and (b.trace_id, b.seq_no) =\n> > (select a.trace_id, min(c.seq_no) from jam_trace_sys c\n> > where c.trace_id = a.trace_id and c.seq_no > a.seq_no)\n> > order by a.seq_no;\n> > \n> > For every row in A, The correlated sub-query from C will execute\n> > With an appropriate index, it will just descend the index Btree\n> > go one row to the right and return that row (min > :value)\n> > and join to table B\n> > \n> > SELECT STATEMENT\n> > SORT ORDER BY\n> > TABLE ACCESS BY INDEX ROWID JAM_TRACE_SYS B\n> > NESTED LOOPS\n> > TABLE ACCESS BY INDEX ROWID JAM_TRACE_SYS A\n> > INDEX RANGE SCAN JAM_TRACE_SYS_N1 A\n> > INDEX RANGE SCAN JAM_TRACE_SYS_N1 B\n> > SORT AGGREGATE\n> > INDEX RANGE SCAN JAM_TRACE_SYS_N1 C\n> > \n> > In postgreSQL A and B are doing a cartesian product\n> > then C gets executed for every row in this cartesian product\n> > and most of the extra rows get thrown out.\n> > Is there any way to force an execution plan like above where the \n> > correlated subquery runs before going to B.\n> > The table is small right now, but it will grow to have millions of rows\n> > QUERY PLAN\n> > -----------------------------------------------------------------------------------------------------------------------------------\n> > Sort (cost=124911.81..124944.84 rows=13213 width=20) (actual \n> > time=13096.754..13097.053 rows=149 loops=1)\n> > Sort Key: a.seq_no\n> 
> -> Nested Loop (cost=4.34..124007.40 rows=13213 width=20) (actual \n> > time=1948.300..13096.329 rows=149 loops=1)\n> > Join Filter: (subplan)\n> > -> Seq Scan on jam_trace_sys b (cost=0.00..3.75 rows=175 \n> > width=16) (actual time=0.005..0.534 rows=175 loops=1)\n> > -> Materialize (cost=4.34..5.85 rows=151 width=16) (actual \n> > time=0.002..0.324 rows=150 loops=175)\n> > -> Seq Scan on jam_trace_sys a (cost=0.00..4.19 \n> > rows=151 width=16) (actual time=0.022..0.687 rows=150 loops=1)\n> > Filter: (trace_id = 22)\n> > SubPlan\n> > -> Aggregate (cost=4.67..4.67 rows=1 width=4) (actual \n> > time=0.486..0.488 rows=1 loops=26250)\n> > -> Seq Scan on jam_trace_sys c (cost=0.00..4.62 \n> > rows=15 width=4) (actual time=0.058..0.311 rows=74 loops=26250)\n> > Filter: ((trace_id = $0) AND (seq_no > $1))\n> > Total runtime: 13097.557 ms\n> > (13 rows)\n> > \n> > pglnx01=> \\d jam_trace_sys\n> > Table \"public.jam_trace_sys\"\n> > Column | Type | Modifiers\n> > -----------------+---------+-----------\n> > trace_id | integer |\n> > seq_no | integer |\n> > cpu_utilization | integer |\n> > gc_minor | integer |\n> > gc_major | integer |\n> > heap_used | integer |\n> > Indexes:\n> > \"jam_trace_sys_n1\" btree (trace_id, seq_no)\n> > \n> > pglnx01=> select count(*) from jam_trace_Sys ;\n> > count\n> > -------\n> > 175\n> > (1 row)\n> > \n> > pglnx01=> select trace_id, count(*) from jam_trace_sys group by trace_id ;\n> > trace_id | count\n> > ----------+-------\n> > 15 | 2\n> > 18 | 21\n> > 22 | 150\n> > 16 | 2\n> > (4 rows)\n> \n\n\n\n\n\n\n\nSzűcs,\n    Thanks for your suggestion, I \nguess there is more than one way to attack the problem.\n \nI ended up using a trick with limit to get the next \nrow ...\n \nselect (b.gc_minor- a.gc_minor), (b.gc_major- \na.gc_major)from jam_trace_sys a join jam_trace_sys b on(b.seq_no = \n(select c.seq_no from jam_trace_sys c where c.trace_id = a.trace_id and \nc.seq_no > a.seq_no order by c.trace_id, c.seq_no limit \n1) and b.trace_id = a.trace_id),jam_tracesnap s1, jam_tracesnap \ns2where s1.trace_id = a.trace_idand s1.seq_no = a.seq_noand \ns2.trace_id = b.trace_idand s2.seq_no = b.seq_noand a.trace_id = \n22order by a.seq_no;\n \nThis gave me a nice clean execution plan (there are \nsome extra sources listed, but that is a bug in postgresql) ...\n \nQUERY \nPLAN-----------------------------------------------------------------------------------------------------------------------------------------------------------Sort  \n(cost=11.24..11.25 rows=1 width=20) (actual time=0.040..0.040 rows=0 \nloops=1) Sort Key: a.seq_no ->Nested Loop  \n(cost=0.00..11.23 rows=1 width=20) (actual time=0.028..0.028 rows=0 \nloops=1)    Join Filter: (\"inner\".seq_no = \n\"outer\".seq_no)    ->Nested Loop  (cost=0.00..9.20 \nrows=1 width=32) (actual time=0.024..0.024 rows=0 \nloops=1)       Join Filter: (\"inner\".seq_no = \n\"outer\".seq_no)       ->Nested Loop  \n(cost=0.00..7.17 rows=1 width=32) (actual time=0.020..0.020 rows=0 \nloops=1)          Join Filter: \n(\"inner\".seq_no = \n(subplan))          ->Index \nScan using jam_trace_sys_n1 on jam_trace_sys a  (cost=0.00..3.41 rows=1 \nwidth=16) (actual time=0.016..0.016 rows=0 \nloops=1)             \nIndex Cond: (trace_id = \n22)          ->Index Scan \nusing jam_trace_sys_n1 on jam_trace_sys b  (cost=0.00..3.41 rows=1 \nwidth=16) (never \nexecuted)             \nIndex Cond: (22 = \ntrace_id)          \nSubPlan          \n->Limit  (cost=0.00..0.33 rows=1 width=8) (never \nexecuted)            \n->Index Scan using 
jam_trace_sys_n1 on jam_trace_sys c  (cost=0.00..6.36 \nrows=19 width=8) (never \nexecuted)               \nIndex Cond: ((trace_id = $0) AND (seq_no > \n$1))       ->Index Scan using \njam_tracesnap_n1 on jam_tracesnap s1  (cost=0.00..2.01 rows=1 width=8) \n(never executed)          Index \nCond: (22 = trace_id)    ->Index Scan using \njam_tracesnap_n1 on jam_tracesnap s2  (cost=0.00..2.01 rows=1 width=8) \n(never executed)       Index Cond: (22 = \ntrace_id)\nRegards,\n \nVirag\n\n----- Original Message ----- \nFrom: \"Szűcs Gábor\" <[email protected]>\nTo: <[email protected]>\nSent: Monday, January 09, 2006 7:59 AM\nSubject: Re: Avoiding cartesian \nproduct\n> Dear Virag,> > AFAIK aggregates aren't indexed in \npostgres (at least not before 8.1, which > indexes min and max, \niirc).> > Also, I don't think you need to exactly determine the \ntrace_id. Try this one > (OTOH; might be wrong):> > select \nDISTINCT ON (a.trace_id, a.seq_no) -- See below>    b.gc_minor \n- a.gc_minor, b.gc_major - a.gc_major> from jam_trace_sys a, \njam_trace_sys b> where a.trace_id = 22>    and \nb.trace_id = a.trace_id>    and b.seq_no > a.seq_no -- \nSimply \">\" is enough> order by a.trace_id, a.seq_no, b.seq_no; -- \nDISTINCT, see below> > The trick is that DISTINCT takes the first \none in each group (IIRC it is > granted, at least someone told me on one \nof these lists :) ) so if you order > by the DISTINCT attributes and then \nby b.seq_no, you'll get the smallest of > appropriate b.seq_no values for \neach DISTINCT values.> > The idea of DISTINCTing by both columns \nis to make sure the planner finds > the index. (lately I had a similar \nproblem: WHERE a=1 ORDER BY b LIMIT 1 > used an index on b, instead of an \n(a,b) index. Using ORDER BY a,b solved it)> > HTH,> \n> --> G.> > > On 2006.01.04. 
5:12, Virag Saksena wrote:\n> >\n> > I have a table which stores cumulative values\n
> > I would like to display/chart the deltas between successive data collections\n> >\n
> > If my primary key only increments by 1, I could write a simple query\n> >\n
> > select b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\n
> >   from jam_trace_sys a, jam_trace_sys b\n
> >  where a.trace_id = 22\n
> >    and b.trace_id = a.trace_id\n
> >    and b.seq_no = a.seq_no + 1\n
> >  order by a.seq_no;\n> >\n
> > However the difference in sequence number is variable.\n
> > So (in Oracle) I used to extract the next seq_no using a correlated sub-query\n> >\n
> > select b.gc_minor - a.gc_minor, b.gc_major - a.gc_major\n
> > from jam_trace_sys a, jam_trace_sys b\n
> > where a.trace_id = 22\n
> > and (b.trace_id, b.seq_no) =\n
> > (select a.trace_id, min(c.seq_no) from jam_trace_sys c\n
> > where c.trace_id = a.trace_id and c.seq_no > a.seq_no)\n
> >  order by a.seq_no;\n> >\n
> > For every row in A, The correlated sub-query from C will execute\n
> > With an appropriate index, it will just descend the index Btree\n
> > go one row to the right and return that row (min > :value)\n
> > and join to table B\n> >\n
> > SELECT STATEMENT\n
> >   SORT ORDER BY\n
> >     TABLE ACCESS BY INDEX ROWID JAM_TRACE_SYS B\n
> >       NESTED LOOPS\n
> >         TABLE ACCESS BY INDEX ROWID JAM_TRACE_SYS A\n
> >           INDEX RANGE SCAN JAM_TRACE_SYS_N1 A\n
> >         INDEX RANGE SCAN JAM_TRACE_SYS_N1 B\n
> >           SORT AGGREGATE\n
> >             INDEX RANGE SCAN JAM_TRACE_SYS_N1 C\n> >\n
> > In postgreSQL A and B are doing a cartesian product\n
> > then C gets executed for every row in this cartesian product\n
> > and most of the extra rows get thrown out.\n
> > Is there any way to force an execution plan like above where the\n
> > correlated subquery runs before going to B.\n
> > The table is small right now, but it will grow to have millions of rows\n
> > QUERY PLAN\n
> > -----------------------------------------------------------------------------------------------------------------------------------\n
> >  Sort  (cost=124911.81..124944.84 rows=13213 width=20) (actual time=13096.754..13097.053 rows=149 loops=1)\n
> >    Sort Key: a.seq_no\n
> >    ->  Nested Loop  (cost=4.34..124007.40 rows=13213 width=20) (actual time=1948.300..13096.329 rows=149 loops=1)\n
> >          Join Filter: (subplan)\n
> >          ->  Seq Scan on jam_trace_sys b  (cost=0.00..3.75 rows=175 width=16) (actual time=0.005..0.534 rows=175 loops=1)\n
> >          ->  Materialize  (cost=4.34..5.85 rows=151 width=16) (actual time=0.002..0.324 rows=150 loops=175)\n
> >                ->  Seq Scan on jam_trace_sys a  (cost=0.00..4.19 rows=151 width=16) (actual time=0.022..0.687 rows=150 loops=1)\n
> >                      Filter: (trace_id = 22)\n
> >          SubPlan\n
> >            ->  Aggregate  (cost=4.67..4.67 rows=1 width=4) (actual time=0.486..0.488 rows=1 loops=26250)\n
> >                  ->  Seq Scan on jam_trace_sys c  (cost=0.00..4.62 rows=15 width=4) (actual time=0.058..0.311 rows=74 loops=26250)\n
> >                        Filter: ((trace_id = $0) AND (seq_no > $1))\n
> >  Total runtime: 13097.557 ms\n
> > (13 rows)\n> >\n
> > pglnx01=> \\d jam_trace_sys\n
> >      Table \"public.jam_trace_sys\"\n
> >      Column      |  Type   | Modifiers\n
> > -----------------+---------+-----------\n
> >  trace_id        | integer |\n
> >  seq_no          | integer |\n
> >  cpu_utilization | integer |\n
> >  gc_minor        | integer |\n
> >  gc_major        | integer |\n
> >  heap_used       | integer |\n
> > Indexes:\n
> >     \"jam_trace_sys_n1\" btree (trace_id, seq_no)\n> >\n
> > pglnx01=> select count(*) from jam_trace_Sys ;\n
> >  count\n
> > -------\n
> >    175\n
> > (1 row)\n> >\n
> > pglnx01=> select trace_id, count(*) from jam_trace_sys group by trace_id ;\n
> >  trace_id | count\n
> > ----------+-------\n
> >        15 |     2\n
> >        18 |    21\n
> >        22 |   150\n
> >        16 |     2\n
> > (4 rows)\n>", "msg_date": "Sun, 19 Feb 2006 22:42:23 -0800", "msg_from": "\"Virag Saksena\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Avoiding cartesian product" } ]
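A minimal SQL sketch of an alternative the thread above does not spell out (my assumption, not advice from the participants): on PostgreSQL releases with window functions (8.4 and later), the "previous sample" lookup can be done with lag() instead of a correlated subquery, so no join of A against B is needed at all. Table and column names are taken from the thread.

-- Sketch only: window-function rewrite of the delta query (assumes 8.4+).
SELECT seq_no,
       gc_minor - lag(gc_minor) OVER w AS gc_minor_delta,
       gc_major - lag(gc_major) OVER w AS gc_major_delta
FROM   jam_trace_sys
WHERE  trace_id = 22
WINDOW w AS (ORDER BY seq_no)
ORDER  BY seq_no;

The first row of each trace comes back with NULL deltas here, while the correlated-subquery version simply omits it; filter on the delta columns if that matters.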
[ { "msg_contents": "Hello,\n\nWe have a web-application running against a postgres 8.1 database, and \nbasically, every time I run a report after no other reports have been run for \nseveral hours, the report will take significantly longer (e.g. 30 seconds), \nthen if I re-run the report again, or run the report when the web-application \nhas been used recently (< 1 second).\n\nKeep in mind that our web-app might issue 30 or more individual queries to \nreturn a given report and that this behavior is not just isolated to a single \nreport-type - it basically happens for any of the reports after the web-app \nhas been inactive. Also, I can trace it back to the timing of the underlying \nqueries, which show this same behavior (e.g. it's not because of overhead in \nour web-app).\n\nSo, it appears to be some sort of caching issue. I'm not 100% clear on how \nthe shared buffer cache works, and what we might do to make sure that we \ndon't have these periods where queries take a long time. Since our users' \ntypical usage scenario is to not use the web-app for a long time and then \ncome back and use it, if reports which generally take a second are taking 30 \nseconds, we have a real problem.\n\nI have isolated a single example of one such query which is very slow when no \nother queries have been run, and then speeds up significantly on the second \nrun.\n\nFirst run, after a night of inactivity:\n\nexplain analyze SELECT average_size, end_time FROM 1min_events WHERE file_id = \n'137271' AND end_time > now() - interval '2 minutes' ORDER BY end_time DESC \nLIMIT 1;\n\n Limit (cost=47.06..47.06 rows=1 width=24) (actual time=313.585..313.585 \nrows=1 loops=1)\n -> Sort (cost=47.06..47.06 rows=1 width=24) (actual time=313.584..313.584 \nrows=1 loops=1)\n Sort Key: end_time\n -> Bitmap Heap Scan on 1min_events (cost=44.03..47.05 rows=1 \nwidth=24) (actual time=313.562..313.568 rows=2 loops=1)\n Recheck Cond: ((end_time > (now() - '00:02:00'::interval)) AND \n(file_id = 137271))\n -> BitmapAnd (cost=44.03..44.03 rows=1 width=0) (actual \ntime=313.551..313.551 rows=0 loops=1)\n -> Bitmap Index Scan on 1min_events_end_idx \n(cost=0.00..5.93 rows=551 width=0) (actual time=0.076..0.076 rows=46 loops=1)\n Index Cond: (end_time > (now() - \n'00:02:00'::interval))\n -> Bitmap Index Scan on 1min_events_file_id_begin_idx \n(cost=0.00..37.85 rows=3670 width=0) (actual time=313.468..313.468 rows=11082 \nloops=1)\n Index Cond: (file_id = 137271)\n Total runtime: 313.643 ms\n(11 rows)\n\nSecond run, after that:\n\n explain analyze SELECT average_size, end_time FROM 1min_events WHERE file_id \n= '137271' AND end_time > now() - interval '2 minutes' ORDER BY end_time DESC \nLIMIT 1;\n\n Limit (cost=47.06..47.06 rows=1 width=24) (actual time=2.209..2.209 rows=1 \nloops=1)\n -> Sort (cost=47.06..47.06 rows=1 width=24) (actual time=2.208..2.208 \nrows=1 loops=1)\n Sort Key: end_time\n -> Bitmap Heap Scan on 1min_events (cost=44.03..47.05 rows=1 \nwidth=24) (actual time=2.192..2.194 rows=2 loops=1)\n Recheck Cond: ((end_time > (now() - '00:02:00'::interval)) AND \n(file_id = 137271))\n -> BitmapAnd (cost=44.03..44.03 rows=1 width=0) (actual \ntime=2.186..2.186 rows=0 loops=1)\n -> Bitmap Index Scan on 1min_events_end_idx \n(cost=0.00..5.93 rows=551 width=0) (actual time=0.076..0.076 rows=46 loops=1)\n Index Cond: (end_time > (now() - \n'00:02:00'::interval))\n -> Bitmap Index Scan on 1min_events_file_id_begin_idx \n(cost=0.00..37.85 rows=3670 width=0) (actual time=2.106..2.106 rows=11082 \nloops=1)\n Index Cond: 
(file_id = 137271)\n Total runtime: 2.276 ms\n(11 rows)\n\nOne of the things that is perplexing about the initial slowness of this query \nis that it's accessing the most recent rows in a given table (e.g. those in \nthe last 2 minutes). So, I would expect the OS cache to be updated with \nthese new rows.\n\nSome general information about the server / db:\n\n1) The database is 25G, and has about 60 tables - some very small, but several \n> 5 MM rows.\n\n2) The table I am querying against above (1min_events) has 5.5 MM rows, but is \nindexed on end_time, as well as a compound index on file_id, begin_time\n\n3) The following are running on the server that holds the db:\n\na) A program which is reading files and making several (5-10) database calls \nper minute (these calls tend to take < 100 ms each). These calls are \ninserting 10's of rows into several of the tables.\n\nb) An apache web-server\n\nc) The 8.1 postgres DB\n\nd) we are running periodic CRON jobs (generally at 11pm, 1 am and 3am) that \ntruncate some of the older data\n\ne) we have autovacuum on with a 60 second naptime and and low scale factors \n0.2, so analyzes and vacuums happen throughout the day - vacuums are \ngenerally triggered by the truncate CRON jobs too.\n\n4) Some of our config settings:\n\nshared_buffers = 8192\nwork_mem = 8192 \n\nTotal RAM on server is 1 Gig\n\n\nBasically any advice as to what to look at to avoid this situation would be \ngreatly appreciated. Is this simply a matter of tuning the shared_buffers \nparameter? If not, is scheduling a set of queries to force the proper \nloading of the cache a logical solution?\n\nThanks in advance,\n\n\nMark\n\n", "msg_date": "Wed, 4 Jan 2006 14:49:43 -0800", "msg_from": "Mark Liberman <[email protected]>", "msg_from_op": true, "msg_subject": "Help in avoiding a query 'Warm-Up' period/shared buffer cache" }, { "msg_contents": "\n\"Mark Liberman\" <[email protected]> wrote\n>\n> First run, after a night of inactivity:\n>\n> -> Bitmap Index Scan on 1min_events_file_id_begin_idx\n> (cost=0.00..37.85 rows=3670 width=0) (actual time=313.468..313.468 \n> rows=11082\n> loops=1)\n> Index Cond: (file_id = 137271)\n> Total runtime: 313.643 ms\n>\n> Second run, after that:\n>\n> -> Bitmap Index Scan on 1min_events_file_id_begin_idx\n> (cost=0.00..37.85 rows=3670 width=0) (actual time=2.106..2.106 rows=11082\n> loops=1)\n> Index Cond: (file_id = 137271)\n> Total runtime: 2.276 ms\n\nIt is clear that the first query takes longer time because of the IO time of \nindex 1min_events_file_id_begin_idx (see 313.468 vs. 2.106). 
I am afraid \ncurrently there is no easy solution for this situation, unless you could \npredicate which part of relation/index your query will use, then you can \npreload or \"warm-up\" cache for it.\n\nRegards,\nQingqing\n\n\n\n", "msg_date": "Thu, 5 Jan 2006 18:12:13 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help in avoiding a query 'Warm-Up' period/shared buffer cache" }, { "msg_contents": "On Thursday 05 January 2006 15:12, Qingqing Zhou wrote:\n> \"Mark Liberman\" <[email protected]> wrote\n>\n> > First run, after a night of inactivity:\n> >\n> > -> Bitmap Index Scan on\n> > 1min_events_file_id_begin_idx (cost=0.00..37.85 rows=3670 width=0)\n> > (actual time=313.468..313.468 rows=11082\n> > loops=1)\n> > Index Cond: (file_id = 137271)\n> > Total runtime: 313.643 ms\n> >\n> > Second run, after that:\n> >\n> > -> Bitmap Index Scan on\n> > 1min_events_file_id_begin_idx (cost=0.00..37.85 rows=3670 width=0)\n> > (actual time=2.106..2.106 rows=11082 loops=1)\n> > Index Cond: (file_id = 137271)\n> > Total runtime: 2.276 ms\n>\n> It is clear that the first query takes longer time because of the IO time\n> of index 1min_events_file_id_begin_idx (see 313.468 vs. 2.106). I am afraid\n> currently there is no easy solution for this situation, unless you could\n> predicate which part of relation/index your query will use, then you can\n> preload or \"warm-up\" cache for it.\n>\n> Regards,\n> Qingqing\n\n\nThanks Qingqing, \n\nthis actually helped me determine that the compound index, \n1min_events_file_id_begin_idx, is not the proper index to use as it is based \non file_id and begin_time - the later of which is not involved in the where \nclause. It is only using that index to \"filter\" out the listed file_id. \n\nNow, my follow-up question / assumption. I am assuming that the IO time is \nso long on that index because it has to read the entire index (for that \nfile_id) into memory (because it cannot just scan the rows with a certain \ndate range because we are not using begin_time in the where clause). \n\nBut, if I replaced that compound index with the proper compound index of \nfile_id / end_time, it would give similar performance results to the scan on \n1min_events_end_idx (which was < 1 ms). E.g. the latest rows that were \nupdated are more likely to be in the cache - and it is smart enough to only \nread the index rows that it needs.\n\nAlternatively, I could create a single index on file_id (and rely upon the new \nbitmap scan capabilities in 1.2). But, I fear that, although this will be \nsmaller than the erroneous compound index on file_id / begin_time, it will \nstill display the same behavior in that it will need to read all rows from \nthat index for the appropriate file_id - and since the data goes back every \nminute for 60 days, that IO might be large.\n\nObviously, I will be testing this - but it might take a few days, as I haven't \nfigure out how to simulate the \"period of inactivity\" to get the data flushed \nout of the cache ... so I have to run this each morning. But, any \nconfirmation / corrections to my assumptions are greatly appreciated. E.g. 
is \nthe compound index the way to go, or the solo index on file_id?\n\nThanks,\n\nMark\n\n", "msg_date": "Thu, 5 Jan 2006 18:15:36 -0800", "msg_from": "Mark Liberman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help in avoiding a query 'Warm-Up' period/shared buffer cache" }, { "msg_contents": "On Thu, 5 Jan 2006, Mark Liberman wrote:\n\n> Obviously, I will be testing this - but it might take a few days, as I haven't\n> figure out how to simulate the \"period of inactivity\" to get the data flushed\n> out of the cache ... so I have to run this each morning.\n\ncat large_file >/dev/null\n\nwill probably do a pretty good job of this (especially if large_file is \nnoticably larger then the amount of ram you have)\n\nDavid Lang\n", "msg_date": "Thu, 5 Jan 2006 18:50:22 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help in avoiding a query 'Warm-Up' period/shared buffer" }, { "msg_contents": "\n\"Mark Liberman\" <[email protected]> wrote\n>\n> Now, my follow-up question / assumption. I am assuming that the IO time \n> is\n> so long on that index because it has to read the entire index (for that\n> file_id) into memory\n>\n> any confirmation / corrections to my assumptions are greatly appreciated. \n> E.g. is\n> the compound index the way to go, or the solo index on file_id?\n>\n\nOnly part of the index file is read. It is a btree index. Keep the index \nsmaller but sufficient to guide your search is always good because even by \nthe guidiance of the index, a heap visit to get the real data is not \navoidable.\n\nRegards,\nQingqing \n\n\n", "msg_date": "Thu, 5 Jan 2006 22:08:05 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help in avoiding a query 'Warm-Up' period/shared buffer cache" }, { "msg_contents": "On Thu, Jan 05, 2006 at 06:50:22PM -0800, David Lang wrote:\n> On Thu, 5 Jan 2006, Mark Liberman wrote:\n> \n> >Obviously, I will be testing this - but it might take a few days, as I \n> >haven't\n> >figure out how to simulate the \"period of inactivity\" to get the data \n> >flushed\n> >out of the cache ... so I have to run this each morning.\n> \n> cat large_file >/dev/null\n> \n> will probably do a pretty good job of this (especially if large_file is \n> noticably larger then the amount of ram you have)\n\nThe following C code is much faster...\n\n/*\n * $Id: clearmem.c,v 1.1 2003/06/29 20:41:33 decibel Exp $\n *\n * Utility to clear out a chunk of memory and zero it. Useful for flushing disk buffers\n */\n\nint main(int argc, char *argv[]) {\n if (!calloc(atoi(argv[1]), 1024*1024)) { printf(\"Error allocating memory.\\n\"); }\n}\n\nCompile it and then pass in the number of MB of memory to allocate on\nthe command line.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 5 Jan 2006 21:14:21 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help in avoiding a query 'Warm-Up' period/shared buffer" } ]
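A hedged sketch of the index change discussed just above (object names are from the thread; this is an untested assumption, not a confirmed fix): an index whose columns match both the WHERE clause and the ORDER BY lets the cold first probe touch only a few index pages, and a small periodic query can keep those pages in the OS cache across idle periods.

-- Sketch only: compound index matching the query's predicate and sort column.
-- The table name starts with a digit, so it needs quoting.
CREATE INDEX "1min_events_file_id_end_idx"
    ON "1min_events" (file_id, end_time);

-- Optional cron-style warm-up touching the same index after idle periods:
SELECT max(end_time) FROM "1min_events" WHERE file_id = 137271;

On much later releases the pg_prewarm extension can prime the cache directly, but nothing like it shipped with 8.1.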
[ { "msg_contents": "Hi to all, \n\nI have the following query:\n\nSELECT count(*) FROM orders o\n INNER JOIN report r ON r.id_order=o.id\n WHERE o.id_status>3\n\nExplaing analyze:\nAggregate (cost=8941.82..8941.82 rows=1 width=0) (actual time=1003.297..1003.298 rows=1 loops=1)\n -> Hash Join (cost=3946.28..8881.72 rows=24041 width=0) (actual time=211.985..951.545 rows=72121 loops=1)\n Hash Cond: (\"outer\".id_order = \"inner\".id)\n -> Seq Scan on report r (cost=0.00..2952.21 rows=72121 width=4) (actual time=0.005..73.869 rows=72121 loops=1)\n -> Hash (cost=3787.57..3787.57 rows=24682 width=4) (actual time=211.855..211.855 rows=0 loops=1)\n -> Seq Scan on orders o (cost=0.00..3787.57 rows=24682 width=4) (actual time=0.047..147.170 rows=72121 loops=1)\n Filter: (id_status > 3)\nTotal runtime: 1003.671 ms\n\n\nI could use it in the following format, because I have to the moment only the 4,6 values for the id_status.\n\nSELECT count(*) FROM orders o\n INNER JOIN report r ON r.id_order=o.id\n WHERE o.id_status IN (4,6)\n\nExplain analyze:\nAggregate (cost=5430.04..5430.04 rows=1 width=0) (actual time=1472.877..1472.877 rows=1 loops=1)\n -> Hash Join (cost=2108.22..5428.23 rows=720 width=0) (actual time=342.080..1419.775 rows=72121 loops=1)\n Hash Cond: (\"outer\".id_order = \"inner\".id)\n -> Seq Scan on report r (cost=0.00..2952.21 rows=72121 width=4) (actual time=0.036..106.217 rows=72121 loops=1)\n -> Hash (cost=2106.37..2106.37 rows=739 width=4) (actual time=342.011..342.011 rows=0 loops=1)\n -> Index Scan using orders_id_status_idx, orders_id_status_idx on orders o (cost=0.00..2106.37 rows=739 width=4) (actual time=0.131..268.397 rows=72121 loops=1)\n Index Cond: ((id_status = 4) OR (id_status = 6))\nTotal runtime: 1474.356 ms\n\nHow can I improve this query's performace?? The ideea is to count all the values that I have in the database for the following conditions. If the users puts in some other search fields on the where then the query runs faster but in this format sometimes it takes a lot lot of time(sometimes even 2,3 seconds). 
\n\nCan this be tuned somehow???\n\nRegards, \nAndy.\n", "msg_date": "Thu, 5 Jan 2006 17:16:47 +0200", "msg_from": "\"Andy\" <[email protected]>", "msg_from_op": true, "msg_subject": "Improving Inner Join Performance" }, { "msg_contents": "On Thu, 5 Jan 2006 17:16:47 +0200\n\"Andy\" <[email protected]> wrote:\n\n
> Hi to all, \n> \n> I have the following query:\n> \n
> SELECT count(*) FROM orders o\n
> INNER JOIN report r ON r.id_order=o.id\n
> WHERE o.id_status>3\n\n
> How can I improve this query's performace?? The ideea is to count all\n
> the values that I have in the database for the following conditions.\n
> If the users puts in some other search fields on the where then the\n
> query runs faster but in this format sometimes it takes a lot lot of\n
> time(sometimes even 2,3 seconds). \n> \n
> Can this be tuned somehow???\n\n
 Do you have an index on report.id_order ? Try creating an index for\n
 it if not and run a vacuum analyze on the table to see if it gets\n
 rid of the sequence scan in the plan. 
\n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Thu, 5 Jan 2006 13:20:06 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Inner Join Performance" }, { "msg_contents": "Yes I have indexes an all join fields. \nThe tables have around 30 columns each and around 100k rows. \nThe database is vacuumed every hour. \n\nAndy.\n----- Original Message ----- \nFrom: \"Frank Wiles\" <[email protected]>\nTo: \"Andy\" <[email protected]>\nCc: <[email protected]>\nSent: Thursday, January 05, 2006 9:20 PM\nSubject: Re: [PERFORM] Improving Inner Join Performance\n\n\n> On Thu, 5 Jan 2006 17:16:47 +0200\n> \"Andy\" <[email protected]> wrote:\n> \n>> Hi to all, \n>> \n>> I have the following query:\n>> \n>> SELECT count(*) FROM orders o\n>> INNER JOIN report r ON r.id_order=o.id\n>> WHERE o.id_status>3\n> \n>> How can I improve this query's performace?? The ideea is to count all\n>> the values that I have in the database for the following conditions.\n>> If the users puts in some other search fields on the where then the\n>> query runs faster but in this format sometimes it takes a lot lot of\n>> time(sometimes even 2,3 seconds). \n>> \n>> Can this be tuned somehow???\n> \n> Do you have an index on report.id_order ? Try creating an index for\n> it if not and run a vacuum analyze on the table to see if it gets\n> rid of the sequence scan in the plan. \n> \n> ---------------------------------\n> Frank Wiles <[email protected]>\n> http://www.wiles.org\n> ---------------------------------\n> \n> \n>\n", "msg_date": "Fri, 6 Jan 2006 11:21:31 +0200", "msg_from": "\"Andy\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving Inner Join Performance" }, { "msg_contents": "\nOn Jan 6, 2006, at 18:21 , Andy wrote:\n\n> Yes I have indexes an all join fields. The tables have around 30 \n> columns each and around 100k rows. The database is vacuumed every \n> hour.\n\nJust to chime in, VACUUM != VACUUM ANALYZE. ANALYZE is what updates \ndatabase statistics and affects query planning. VACUUM alone does not \ndo this.\n\n>> Do you have an index on report.id_order ? Try creating an index for\n>> it if not and run a vacuum analyze on the table to see if it gets\n>> rid of the sequence scan in the plan.\n\nMichael Glaesemann\ngrzm myrealbox com\n\n\n\n", "msg_date": "Fri, 6 Jan 2006 18:45:06 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Inner Join Performance" }, { "msg_contents": "Sorry, I had to be more specific. \nVACUUM ANALYZE is performed every hour. \n\nRegards,\nAndy.\n\n----- Original Message ----- \nFrom: \"Michael Glaesemann\" <[email protected]>\nTo: \"Andy\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, January 06, 2006 11:45 AM\nSubject: Re: [PERFORM] Improving Inner Join Performance\n\n\n> \n> On Jan 6, 2006, at 18:21 , Andy wrote:\n> \n>> Yes I have indexes an all join fields. The tables have around 30 \n>> columns each and around 100k rows. The database is vacuumed every \n>> hour.\n> \n> Just to chime in, VACUUM != VACUUM ANALYZE. ANALYZE is what updates \n> database statistics and affects query planning. VACUUM alone does not \n> do this.\n> \n>>> Do you have an index on report.id_order ? 
Try creating an index for\n>>> it if not and run a vacuum analyze on the table to see if it gets\n>>> rid of the sequence scan in the plan.\n> \n> Michael Glaesemann\n> grzm myrealbox com\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n>\n", "msg_date": "Fri, 6 Jan 2006 11:54:46 +0200", "msg_from": "\"Andy\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving Inner Join Performance" }, { "msg_contents": "> If the users puts in some other search fields on the where then the query runs faster but > in this format sometimes it takes a lot lot of time(sometimes even 2,3 seconds).\n\nCan you eloborate under what conditions which query is slower?\n\nOn 1/5/06, Andy <[email protected]> wrote:\n>\n> Hi to all,\n>\n> I have the following query:\n>\n> SELECT count(*) FROM orders o\n> INNER JOIN report r ON r.id_order=o.id\n> WHERE o.id_status>3\n>\n> Explaing analyze:\n> Aggregate (cost=8941.82..8941.82 rows=1 width=0) (actual\n> time=1003.297..1003.298 rows=1 loops=1)\n> -> Hash Join (cost=3946.28..8881.72 rows=24041 width=0) (actual\n> time=211.985..951.545 rows=72121 loops=1)\n> Hash Cond: (\"outer\".id_order = \"inner\".id)\n> -> Seq Scan on report r (cost=0.00..2952.21 rows=72121 width=4)\n> (actual time=0.005..73.869 rows=72121 loops=1)\n> -> Hash (cost=3787.57..3787.57 rows=24682 width=4) (actual\n> time=211.855..211.855 rows=0 loops=1)\n> -> Seq Scan on orders o (cost=0.00..3787.57 rows=24682\n> width=4) (actual time=0.047..147.170 rows=72121 loops=1)\n> Filter: (id_status > 3)\n> Total runtime: 1003.671 ms\n>\n>\n> I could use it in the following format, because I have to the moment only\n> the 4,6 values for the id_status.\n>\n> SELECT count(*) FROM orders o\n> INNER JOIN report r ON r.id_order=o.id\n> WHERE o.id_status IN (4,6)\n>\n> Explain analyze:\n> Aggregate (cost=5430.04..5430.04 rows=1 width=0) (actual\n> time=1472.877..1472.877 rows=1 loops=1)\n> -> Hash Join (cost=2108.22..5428.23 rows=720 width=0) (actual\n> time=342.080..1419.775 rows=72121 loops=1)\n> Hash Cond: (\"outer\".id_order = \"inner\".id)\n> -> Seq Scan on report r (cost=0.00..2952.21 rows=72121 width=4)\n> (actual time=0.036..106.217 rows=72121 loops=1)\n> -> Hash (cost=2106.37..2106.37 rows=739 width=4) (actual\n> time=342.011..342.011 rows=0 loops=1)\n> -> Index Scan using orders_id_status_idx,\n> orders_id_status_idx on orders o (cost=0.00..2106.37 rows=739 width=4)\n> (actual time=0.131..268.397 rows=72121 loops=1)\n> Index Cond: ((id_status = 4) OR (id_status = 6))\n> Total runtime: 1474.356 ms\n>\n> How can I improve this query's performace?? The ideea is to count all the\n> values that I have in the database for the following conditions. If the\n> users puts in some other search fields on the where then the query runs\n> faster but in this format sometimes it takes a lot lot of time(sometimes\n> even 2,3 seconds).\n>\n> Can this be tuned somehow???\n>\n> Regards,\n> Andy.\n>\n>\n", "msg_date": "Fri, 6 Jan 2006 15:26:31 +0530", "msg_from": "Pandurangan R S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Inner Join Performance" }, { "msg_contents": "At the moment: o.id_status>3 can have values only 4 and 6. The 6 is around\n90% from the whole table. This is why seq scan is made.\n\nNow, depending on the user input the query can have more where fields. 
For\nexample:\nSELECT count(*) FROM orders o\n INNER JOIN report r ON r.id_order=o.id\n WHERE o.id_status > 3 AND r.id_zufriden=7\n\nAggregate (cost=7317.15..7317.15 rows=1 width=0) (actual\ntime=213.418..213.419 rows=1 loops=1)\n -> Hash Join (cost=3139.00..7310.80 rows=2540 width=0) (actual\ntime=57.554..212.215 rows=1308 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".id_order)\n -> Seq Scan on orders o (cost=0.00..3785.31 rows=72216 width=4)\n(actual time=0.014..103.292 rows=72121 loops=1)\n Filter: (id_status > 3)\n -> Hash (cost=3132.51..3132.51 rows=2597 width=4) (actual\ntime=57.392..57.392 rows=0 loops=1)\n -> Seq Scan on report r (cost=0.00..3132.51 rows=2597\nwidth=4) (actual time=0.019..56.220 rows=1308 loops=1)\n Filter: (id_zufriden = 7)\nTotal runtime: 213.514 ms\n\nThese examples can go on and on.\n\nIf I run this query\nSELECT count(*) FROM orders o\nINNER JOIN report r ON r.id_order=o.id\nWHERE o.id_status>3\nunder normal system load the average response time is between 1.3 > 2.5\nseconds. Sometimes even more. If I run it rapidly a few times then it\nrespondes faster(that is normal I supose).\n\nThe ideea of this query is to count all the possible results that the user\ncan have. I use this to build pages of results.\n\nAndy.\n\n----- Original Message ----- \nFrom: \"Pandurangan R S\" <[email protected]>\nTo: \"Andy\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, January 06, 2006 11:56 AM\nSubject: Re: [PERFORM] Improving Inner Join Performance\n\n\n> If the users puts in some other search fields on the where then the query\n> runs faster but > in this format sometimes it takes a lot lot of\n> time(sometimes even 2,3 seconds).\n\nCan you eloborate under what conditions which query is slower?\n\nOn 1/5/06, Andy <[email protected]> wrote:\n>\n> Hi to all,\n>\n> I have the following query:\n>\n> SELECT count(*) FROM orders o\n> INNER JOIN report r ON r.id_order=o.id\n> WHERE o.id_status>3\n>\n> Explaing analyze:\n> Aggregate (cost=8941.82..8941.82 rows=1 width=0) (actual\n> time=1003.297..1003.298 rows=1 loops=1)\n> -> Hash Join (cost=3946.28..8881.72 rows=24041 width=0) (actual\n> time=211.985..951.545 rows=72121 loops=1)\n> Hash Cond: (\"outer\".id_order = \"inner\".id)\n> -> Seq Scan on report r (cost=0.00..2952.21 rows=72121 width=4)\n> (actual time=0.005..73.869 rows=72121 loops=1)\n> -> Hash (cost=3787.57..3787.57 rows=24682 width=4) (actual\n> time=211.855..211.855 rows=0 loops=1)\n> -> Seq Scan on orders o (cost=0.00..3787.57 rows=24682\n> width=4) (actual time=0.047..147.170 rows=72121 loops=1)\n> Filter: (id_status > 3)\n> Total runtime: 1003.671 ms\n>\n>\n> I could use it in the following format, because I have to the moment only\n> the 4,6 values for the id_status.\n>\n> SELECT count(*) FROM orders o\n> INNER JOIN report r ON r.id_order=o.id\n> WHERE o.id_status IN (4,6)\n>\n> Explain analyze:\n> Aggregate (cost=5430.04..5430.04 rows=1 width=0) (actual\n> time=1472.877..1472.877 rows=1 loops=1)\n> -> Hash Join (cost=2108.22..5428.23 rows=720 width=0) (actual\n> time=342.080..1419.775 rows=72121 loops=1)\n> Hash Cond: (\"outer\".id_order = \"inner\".id)\n> -> Seq Scan on report r (cost=0.00..2952.21 rows=72121 width=4)\n> (actual time=0.036..106.217 rows=72121 loops=1)\n> -> Hash (cost=2106.37..2106.37 rows=739 width=4) (actual\n> time=342.011..342.011 rows=0 loops=1)\n> -> Index Scan using orders_id_status_idx,\n> orders_id_status_idx on orders o (cost=0.00..2106.37 rows=739 width=4)\n> (actual time=0.131..268.397 rows=72121 loops=1)\n> Index Cond: 
((id_status = 4) OR (id_status = 6))\n> Total runtime: 1474.356 ms\n>\n
> How can I improve this query's performace?? The ideea is to count all the\n
> values that I have in the database for the following conditions. If the\n
> users puts in some other search fields on the where then the query runs\n
> faster but in this format sometimes it takes a lot lot of time(sometimes\n
> even 2,3 seconds).\n>\n> Can this be tuned somehow???\n>\n> Regards,\n> Andy.\n>\n>\n", "msg_date": "Fri, 6 Jan 2006 12:21:25 +0200", "msg_from": "\"Andy\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving Inner Join Performance" }, { "msg_contents": "On 1/6/06, Andy <[email protected]> wrote:\n
> At the moment: o.id_status>3 can have values only 4 and 6. The 6 is around\n
> 90% from the whole table. This is why seq scan is made.\n>\n
given this if you make id_status > 3 you will never use an index\n
because you will be scanning 4 and 6 the only values in this field as\n
you say, and even if there were any other value 6 is 90% of whole\n
table, so an index for this will not be used...\n\n
> Now, depending on the user input the query can have more where fields. For\n
> example:\n
> SELECT count(*) FROM orders o\n
> INNER JOIN report r ON r.id_order=o.id\n
> WHERE o.id_status > 3 AND r.id_zufriden=7\n>\n
here the planner can be more selective, and of course the query is\n
faster... if you will be loading data load it all then make tests...\n\n
but because your actual data the planner will always choose to scan\n
the entire orders table for o.id_status > 3...\n\n
--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Fri, 6 Jan 2006 10:26:43 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Inner Join Performance" }, { "msg_contents": "shared_buffers = 10240\neffective_cache_size = 64000\nRAM on server: 1Gb. \n\nAndy.\n\n----- Original Message ----- \n\n
From: \"Frank Wiles\" <[email protected]>\nTo: \"Andy\" <[email protected]>\nSent: Friday, January 06, 2006 7:12 PM\nSubject: Re: [PERFORM] Improving Inner Join Performance\n\n\n
> On Fri, 6 Jan 2006 09:59:30 +0200\n> \"Andy\" <[email protected]> wrote:\n> \n
>> Yes I have indexes an all join fields. \n
>> The tables have around 30 columns each and around 100k rows. \n
>> The database is vacuumed every hour. \n> \n
> What are you settings for: \n> \n> shared_buffers \n> effective_cache_size\n> \n
> And how much RAM do you have in the server? \n> \n
> ---------------------------------\n> Frank Wiles <[email protected]>\n> http://www.wiles.org\n> ---------------------------------\n> \n> \n>\n", "msg_date": "Mon, 9 Jan 2006 09:56:52 +0200", "msg_from": "\"Andy\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Improving Inner Join Performance" }, { "msg_contents": "Did you originally post some problem queries? The settings look OK,\nthough 1G of memory isn't very much now-a-days.\n\n
On Mon, Jan 09, 2006 at 09:56:52AM +0200, Andy wrote:\n
> shared_buffers = 10240\n> effective_cache_size = 64000\n> RAM on server: 1Gb. \n> \n> Andy.\n> \n
> ----- Original Message ----- \n> \n
> From: \"Frank Wiles\" <[email protected]>\n> To: \"Andy\" <[email protected]>\n> Sent: Friday, January 06, 2006 7:12 PM\n> Subject: Re: [PERFORM] Improving Inner Join Performance\n> \n> \n
> > On Fri, 6 Jan 2006 09:59:30 +0200\n> > \"Andy\" <[email protected]> wrote:\n> > \n
> >> Yes I have indexes an all join fields. \n
> >> The tables have around 30 columns each and around 100k rows. \n
> >> The database is vacuumed every hour. \n> > \n
> > What are you settings for: \n> > \n> > shared_buffers \n> > effective_cache_size\n> > \n
> > And how much RAM do you have in the server? \n> > \n
> > ---------------------------------\n> > Frank Wiles <[email protected]>\n> > http://www.wiles.org\n> > ---------------------------------\n> > \n> > \n> >\n\n
-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 11 Jan 2006 12:25:45 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Inner Join Performance" } ]
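One possible direction for the thread above, sketched as an assumption rather than something the participants proposed: when roughly 90% of orders pass the status filter, any plan has to visit most of both tables, so a paging total can instead be maintained in a small summary table by triggers and read back with a cheap lookup.

-- Sketch only (hypothetical table; the maintaining triggers on orders and
-- report are not shown here):
CREATE TABLE order_report_counts (
    id_status integer PRIMARY KEY,
    cnt       bigint  NOT NULL DEFAULT 0
);

-- The page header then replaces the join-and-count with:
SELECT sum(cnt) AS total
FROM   order_report_counts
WHERE  id_status IN (4, 6);

The trade-off is extra write cost on every insert into orders and report, which may or may not be acceptable for this workload.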
[ { "msg_contents": "\nPg: 7.4.9\nRH: ES v3\nQuad-Xeon\n16G ram\n\nThe following SQL takes 4+ mins to run. I have indexes on all join fields\nand I've tried rearranging the table orders but haven't had any luck. I\nhave done the usual vacuums analyze and even vacuum FULL just to make sure\nbut still the same results. The ending resultset is around 169K rows which,\nif I'm reading the analyze output, is more than double. Any suggestions?\n\nTIA\n-patrick\n\nSelect gmmid, gmmname, divid, divname, feddept, fedvend,itemnumber as\nmstyle,amc_week_id,\nsum(tran_itm_total) as net_dollars\n\nFROM\npublic.tbldetaillevel_report a2 join cdm.cdm_ddw_tran_item a1 on\na1.item_upc = a2.upc\njoin public.date_dim a3 on a3.date_dim_id = a1.cal_date\nwhere\na3.date_dim_id between '2005-10-30' and '2005-12-31'\nand\na1.appl_id in ('MCOM','NET')\nand\n a1.tran_typ_id in ('S','R')\ngroup by 1,2,3,4,5,6,7,8\norder by 1,2,3,4,5,6,7,8\n\n\nGroupAggregate (cost=1646283.56..1648297.72 rows=73242 width=65)\n -> Sort (cost=1646283.56..1646466.67 rows=73242 width=65)\n Sort Key: a2.gmmid, a2.gmmname, a2.divid, a2.divname, a2.feddept,\na2.fedvend, a2.itemnumber, a3.amc_week_id\n -> Merge Join (cost=1595839.67..1640365.47 rows=73242 width=65)\n Merge Cond: (\"outer\".upc = \"inner\".item_upc)\n -> Index Scan using report_upc_idx on tbldetaillevel_report\na2 (cost=0.00..47236.85 rows=366234 width=58)\n -> Sort (cost=1595839.67..1596022.77 rows=73242 width=23)\n Sort Key: a1.item_upc\n -> Hash Join (cost=94.25..1589921.57 rows=73242\nwidth=23)\n Hash Cond: (\"outer\".cal_date =\n\"inner\".date_dim_id)\n -> Seq Scan on cdm_ddw_tran_item a1\n(cost=0.00..1545236.00 rows=8771781 width=23)\n Filter: ((((appl_id)::text = 'MCOM'::text)\nOR ((appl_id)::text = 'NET'::text)) AND ((tran_typ_id = 'S'::bpchar) OR\n(tran_typ_id = 'R'::bpchar)))\n -> Hash (cost=94.09..94.09 rows=64 width=8)\n -> Index Scan using date_date_idx on\ndate_dim a3 (cost=0.00..94.09 rows=64 width=8)\n Index Cond: ((date_dim_id >=\n'2005-10-30'::date) AND (date_dim_id <= '2005-12-31'::date))\n\n\n\n-- Table: tbldetaillevel_report\n\n-- DROP TABLE tbldetaillevel_report;\n\nCREATE TABLE tbldetaillevel_report\n(\n pageid int4,\n feddept int4,\n fedvend int4,\n oz_description varchar(254),\n price_owned_retail float8,\n oz_color varchar(50),\n oz_size varchar(50),\n total_oh int4 DEFAULT 0,\n total_oo int4 DEFAULT 0,\n vendorname varchar(40),\n dunsnumber varchar(9),\n current_week int4,\n current_period int4,\n week_end date,\n varweek int4,\n varperiod int4,\n upc int8,\n itemnumber varchar(15),\n mkd_status int2,\n inforem_flag int2\n)\nWITH OIDS;\n\n-- DROP INDEX report_dept_vend_idx;\n\nCREATE INDEX report_dept_vend_idx\n ON tbldetaillevel_report\n USING btree\n (feddept, fedvend);\n\n-- Index: report_upc_idx\n\n-- DROP INDEX report_upc_idx;\n\nCREATE INDEX report_upc_idx\n ON tbldetaillevel_report\n USING btree\n (upc);\n\n\n\n-- Table: cdm.cdm_ddw_tran_item\n\n-- DROP TABLE cdm.cdm_ddw_tran_item;\n\nCREATE TABLE cdm.cdm_ddw_tran_item\n(\n appl_xref varchar(22),\n intr_xref varchar(13),\n tran_typ_id char(1),\n cal_date date,\n cal_time time,\n tran_itm_total numeric(15,2),\n itm_qty int4,\n itm_price numeric(8,2),\n item_id int8,\n item_upc int8,\n item_pid varchar(20),\n item_desc varchar(30),\n nrf_color_name varchar(10),\n nrf_size_name varchar(10),\n dept_id int4,\n vend_id int4,\n mkstyl int4,\n item_group varchar(20),\n appl_id varchar(20),\n cost float8 DEFAULT 0,\n onhand int4 DEFAULT 0,\n onorder int4 DEFAULT 0,\n avail int4 DEFAULT 0,\n owned float8 
DEFAULT 0,\n fill_store_loc int4,\n ddw_tran_key bigserial NOT NULL,\n price_type_id int2 DEFAULT 999,\n last_update date DEFAULT ('now'::text)::date,\n tran_id int8,\n tran_seq_nbr int4,\n CONSTRAINT ddw_tritm_pk PRIMARY KEY (ddw_tran_key)\n)\nWITHOUT OIDS;\n\n\n-- Index: cdm.cdm_ddw_tran_item_applid_idx\n\n-- DROP INDEX cdm.cdm_ddw_tran_item_applid_idx;\n\nCREATE INDEX cdm_ddw_tran_item_applid_idx\n ON cdm.cdm_ddw_tran_item\n USING btree\n (appl_id);\n\n-- Index: cdm.cdm_ddw_tran_item_cal_date\n\n-- DROP INDEX cdm.cdm_ddw_tran_item_cal_date;\n\nCREATE INDEX cdm_ddw_tran_item_cal_date\n ON cdm.cdm_ddw_tran_item\n USING btree\n (cal_date);\n\n-- Index: cdm.cdm_ddw_tran_item_trn_type\n\n-- DROP INDEX cdm.cdm_ddw_tran_item_trn_type;\n\nCREATE INDEX cdm_ddw_tran_item_trn_type\n ON cdm.cdm_ddw_tran_item\n USING btree\n (tran_typ_id);\n\n-- Index: cdm.ddw_ti_upc_idx\n\n-- DROP INDEX cdm.ddw_ti_upc_idx;\n\nCREATE INDEX ddw_ti_upc_idx\n ON cdm.cdm_ddw_tran_item\n USING btree\n (item_upc);\n\n-- Index: cdm.ddw_tran_item_dept_idx\n\n-- DROP INDEX cdm.ddw_tran_item_dept_idx;\n\nCREATE INDEX ddw_tran_item_dept_idx\n ON cdm.cdm_ddw_tran_item\n USING btree\n (dept_id);\n\n-- Index: cdm.ddw_trn_ittotal_idx\n\n-- DROP INDEX cdm.ddw_trn_ittotal_idx;\n\nCREATE INDEX ddw_trn_ittotal_idx\n ON cdm.cdm_ddw_tran_item\n USING btree\n (tran_itm_total);\n\n-- Table: date_dim\n\n-- DROP TABLE date_dim;\n\nCREATE TABLE date_dim\n(\n date_dim_id date NOT NULL,\n amc_date char(8),\n amc_day_nbr int2 NOT NULL,\n amc_week int2 NOT NULL,\n amc_period int2 NOT NULL,\n amc_quarter int2 NOT NULL,\n amc_season int2 NOT NULL,\n amc_year int4 NOT NULL,\n amc_period_id int4 NOT NULL,\n amc_week_id int4 NOT NULL,\n nbr_weeks_per_peri int2 NOT NULL,\n nbr_weeks_per_year int2 NOT NULL,\n calendar_day int2 NOT NULL,\n calendar_month int2 NOT NULL,\n julian_day int2 NOT NULL,\n CONSTRAINT date_dimph PRIMARY KEY (date_dim_id)\n)\nWITH OIDS;\n\n\n-- Index: amc_weekid_idx\n\n-- DROP INDEX amc_weekid_idx;\n\nCREATE INDEX amc_weekid_idx\n ON date_dim\n USING btree\n (amc_week_id);\n\n-- Index: date_date_idx\n\n-- DROP INDEX date_date_idx;\n\nCREATE INDEX date_date_idx\n ON date_dim\n USING btree\n (date_dim_id);\n\n\nPatrick Hatcher\nDevelopment Manager Analytics/MIO\nMacys.com\n\n", "msg_date": "Thu, 5 Jan 2006 16:38:17 -0800", "msg_from": "Patrick Hatcher <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query. Any way to speed up?" }, { "msg_contents": "Patrick Hatcher <[email protected]> writes:\n> The following SQL takes 4+ mins to run. I have indexes on all join fields\n> and I've tried rearranging the table orders but haven't had any luck.\n\nPlease show EXPLAIN ANALYZE output, not just EXPLAIN. It's impossible\nto tell whether the planner is making any wrong guesses when you can't\nsee the actual times/rowcounts ...\n\n(BTW, 7.4 is looking pretty long in the tooth.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jan 2006 00:07:57 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query. Any way to speed up? " }, { "msg_contents": "Duh sorry. 
We will eventually move to 8.x, it's just a matter of finding\nthe time:\n\nExplain analyze\nSelect gmmid, gmmname, divid, divname, feddept, fedvend,itemnumber as\nmstyle,amc_week_id,\nsum(tran_itm_total) as net_dollars\n\nFROM\npublic.tbldetaillevel_report a2 join cdm.cdm_ddw_tran_item a1 on\na1.item_upc = a2.upc\njoin public.date_dim a3 on a3.date_dim_id = a1.cal_date\nwhere\na3.date_dim_id between '2005-10-30' and '2005-12-31'\nand\na1.appl_id in ('MCOM','NET')\nand\n a1.tran_typ_id in ('S','R')\ngroup by 1,2,3,4,5,6,7,8\norder by 1,2,3,4,5,6,7,8\n\n\nGroupAggregate (cost=1648783.47..1650793.74 rows=73101 width=65) (actual\ntime=744556.289..753136.278 rows=168343 loops=1)\n -> Sort (cost=1648783.47..1648966.22 rows=73101 width=65) (actual\ntime=744556.236..746634.566 rows=1185096 loops=1)\n Sort Key: a2.gmmid, a2.gmmname, a2.divid, a2.divname, a2.feddept,\na2.fedvend, a2.itemnumber, a3.amc_week_id\n -> Merge Join (cost=1598067.59..1642877.78 rows=73101 width=65)\n(actual time=564862.772..636550.484 rows=1185096 loops=1)\n Merge Cond: (\"outer\".upc = \"inner\".item_upc)\n -> Index Scan using report_upc_idx on tbldetaillevel_report\na2 (cost=0.00..47642.36 rows=367309 width=58) (actual\ntime=82.512..65458.137 rows=365989 loops=1)\n -> Sort (cost=1598067.59..1598250.34 rows=73100 width=23)\n(actual time=564764.506..566529.796 rows=1248862 loops=1)\n Sort Key: a1.item_upc\n -> Hash Join (cost=94.25..1592161.99 rows=73100\nwidth=23) (actual time=493500.913..548924.039 rows=1248851 loops=1)\n Hash Cond: (\"outer\".cal_date =\n\"inner\".date_dim_id)\n -> Seq Scan on cdm_ddw_tran_item a1\n(cost=0.00..1547562.88 rows=8754773 width=23) (actual\ntime=14.219..535704.691 rows=10838135 loops=1)\n Filter: ((((appl_id)::text = 'MCOM'::text)\nOR ((appl_id)::text = 'NET'::text)) AND ((tran_typ_id = 'S'::bpchar) OR\n(tran_typ_id = 'R'::bpchar)))\n -> Hash (cost=94.09..94.09 rows=64 width=8)\n(actual time=362.953..362.953 rows=0 loops=1)\n -> Index Scan using date_date_idx on\ndate_dim a3 (cost=0.00..94.09 rows=64 width=8) (actual\ntime=93.710..362.802 rows=63 loops=1)\n Index Cond: ((date_dim_id >=\n'2005-10-30'::date) AND (date_dim_id <= '2005-12-31'::date))\nTotal runtime: 753467.847 ms\n\n\nPatrick Hatcher\nDevelopment Manager Analytics/MIO\nMacys.com\n415-422-1610\n\n\n\n \n Tom Lane \n <[email protected] \n s> To \n Patrick Hatcher \n 01/05/06 09:07 PM <[email protected]> \n cc \n [email protected] \n Subject \n Re: [PERFORM] Slow query. Any way \n to speed up? \n \n \n \n \n \n \n\n\n\n\nPatrick Hatcher <[email protected]> writes:\n> The following SQL takes 4+ mins to run. I have indexes on all join\nfields\n> and I've tried rearranging the table orders but haven't had any luck.\n\nPlease show EXPLAIN ANALYZE output, not just EXPLAIN. It's impossible\nto tell whether the planner is making any wrong guesses when you can't\nsee the actual times/rowcounts ...\n\n(BTW, 7.4 is looking pretty long in the tooth.)\n\n regards, tom lane\n\n\n", "msg_date": "Fri, 6 Jan 2006 10:55:28 -0800", "msg_from": "Patrick Hatcher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query. Any way to speed up?" 
}, { "msg_contents": "Patrick Hatcher <[email protected]> writes:\n> -> Seq Scan on cdm_ddw_tran_item a1\n> (cost=0.00..1547562.88 rows=8754773 width=23) (actual\n> time=14.219..535704.691 rows=10838135 loops=1)\n> Filter: ((((appl_id)::text = 'MCOM'::text)\n> OR ((appl_id)::text = 'NET'::text)) AND ((tran_typ_id = 'S'::bpchar) OR\n> (tran_typ_id = 'R'::bpchar)))\n\nThe bulk of the time is evidently going into this step. You didn't say\nhow big cdm_ddw_tran_item is, but unless it's in the billion-row range,\nan indexscan isn't going to help for pulling out 10 million rows.\nThis may be about the best you can do :-(\n\nIf it *is* in the billion-row range, PG 8.1's bitmap indexscan facility\nwould probably help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jan 2006 15:24:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query. Any way to speed up? " } ]
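A speculative follow-up to the diagnosis above (my assumption, not advice given in the thread): if the MCOM/NET sale and return rows are only a modest slice of cdm_ddw_tran_item, a partial index restricted to exactly that slice would let the planner avoid the full sequential scan; if they make up most of the table, no index will help much and the sequential scan really is the best available plan.

-- Sketch only: partial index limited to the rows this report reads.
CREATE INDEX cdm_ddw_tran_item_mcom_net_sr_idx
    ON cdm.cdm_ddw_tran_item (cal_date, item_upc)
    WHERE appl_id IN ('MCOM', 'NET')
      AND tran_typ_id IN ('S', 'R');

The report query's WHERE clause matches the index predicate exactly, which is what allows the planner to consider the partial index at all.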
[ { "msg_contents": "Howdy.\n\nI'm running into scaling problems when testing with a 16gb (data \n+indexes) database.\n\nI can run a query, and it returns in a few seconds. If I run it \nagain, it returns in a few milliseconds. I realize this is because \nduring subsequent runs, the necessary disk pages have been cached by \nthe OS.\n\nI have experimented with having all 8 disks in a single RAID0 set, a \nsingle RAID10 set, and currently 4 RAID0 sets of 2 disks each. There \nhasn't been an appreciable difference in the overall performance of \nmy test suite (which randomly generates queries like the samples \nbelow as well as a few other types. this problem manifests itself on \nother queries in the test suite as well).\n\nSo, my question is, is there anything I can do to boost performance \nwith what I've got, or am I in a position where the only 'fix' is \nmore faster disks? I can't think of any schema/index changes that \nwould help, since everything looks pretty optimal from the 'explain \nanalyze' output. I'd like to get a 10x improvement when querying from \nthe 'cold' state.\n\nThanks for any assistance. The advice from reading this list to \ngetting to where I am now has been invaluable.\n-peter\n\n\nConfiguration:\n\nPostgreSQL 8.1.1\n\nshared_buffers = 10000 # (It was higher, 50k, but didn't help any, \nso brought down to free ram for disk cache)\nwork_mem = 8196\nrandom_page_cost = 3\neffective_cache_size = 250000\n\n\nHardware:\n\nCentOS 4.2 (Linux 2.6.9-22.0.1.ELsmp)\nAreca ARC-1220 8-port PCI-E controller\n8 x Hitachi Deskstar 7K80 (SATA2) (7200rpm)\n2 x Opteron 242 @ 1.6ghz\n3gb RAM (should be 4gb, but separate Linux issue preventing us from \ngetting it to see all of it)\nTyan Thunder K8WE\n\n\nRAID Layout:\n\n4 2-disk RAID0 sets created\n\nEach raid set is a tablespace, formatted ext3. 
The majority of the \ndatabase is in the primary tablespace, and the popular object_data \ntable is in its own tablespace.\n\n\nSample 1:\n\ntriple_store=# explain analyze SELECT DISTINCT O.subject AS oid FROM \nobject_data O, object_tags T1, tags T2 WHERE O.type = 179 AND \nO.subject = T1.object_id AND T1.tag_id = T2.tag_id AND T2.tag = \n'transmitter\\'s' LIMIT 1000;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-------------------------\nLimit (cost=1245.07..1245.55 rows=97 width=4) (actual \ntime=3702.697..3704.665 rows=206 loops=1)\n -> Unique (cost=1245.07..1245.55 rows=97 width=4) (actual \ntime=3702.691..3703.900 rows=206 loops=1)\n -> Sort (cost=1245.07..1245.31 rows=97 width=4) (actual \ntime=3702.686..3703.056 rows=206 loops=1)\n Sort Key: o.subject\n -> Nested Loop (cost=2.82..1241.87 rows=97 width=4) \n(actual time=97.166..3701.970 rows=206 loops=1)\n -> Nested Loop (cost=2.82..678.57 rows=186 \nwidth=4) (actual time=59.903..1213.170 rows=446 loops=1)\n -> Index Scan using tags_tag_key on tags \nt2 (cost=0.00..5.01 rows=1 width=4) (actual time=13.139..13.143 \nrows=1 loops=1)\n Index Cond: (tag = \n'transmitter''s'::text)\n -> Bitmap Heap Scan on object_tags t1 \n(cost=2.82..670.65 rows=233 width=8) (actual time=46.751..1198.198 \nrows=446 loops=1)\n Recheck Cond: (t1.tag_id = \n\"outer\".tag_id)\n -> Bitmap Index Scan on \nobject_tags_tag_id_object_id (cost=0.00..2.82 rows=233 width=0) \n(actual time=31.571..31.571 rows=446 loops=1)\n Index Cond: (t1.tag_id = \n\"outer\".tag_id)\n -> Index Scan using object_data_pkey on \nobject_data o (cost=0.00..3.02 rows=1 width=4) (actual \ntime=5.573..5.574 rows=0 loops=446)\n Index Cond: (o.subject = \"outer\".object_id)\n Filter: (\"type\" = 179)\nTotal runtime: 3705.166 ms\n(16 rows)\n\ntriple_store=# explain analyze SELECT DISTINCT O.subject AS oid FROM \nobject_data O, object_tags T1, tags T2 WHERE O.type = 179 AND \nO.subject = T1.object_id AND T1.tag_id = T2.tag_id AND T2.tag = \n'transmitter\\'s' LIMIT 1000;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-----------------------\nLimit (cost=1245.07..1245.55 rows=97 width=4) (actual \ntime=11.037..12.923 rows=206 loops=1)\n -> Unique (cost=1245.07..1245.55 rows=97 width=4) (actual \ntime=11.031..12.190 rows=206 loops=1)\n -> Sort (cost=1245.07..1245.31 rows=97 width=4) (actual \ntime=11.027..11.396 rows=206 loops=1)\n Sort Key: o.subject\n -> Nested Loop (cost=2.82..1241.87 rows=97 width=4) \n(actual time=0.430..10.461 rows=206 loops=1)\n -> Nested Loop (cost=2.82..678.57 rows=186 \nwidth=4) (actual time=0.381..3.479 rows=446 loops=1)\n -> Index Scan using tags_tag_key on tags \nt2 (cost=0.00..5.01 rows=1 width=4) (actual time=0.058..0.061 rows=1 \nloops=1)\n Index Cond: (tag = \n'transmitter''s'::text)\n -> Bitmap Heap Scan on object_tags t1 \n(cost=2.82..670.65 rows=233 width=8) (actual time=0.310..1.730 \nrows=446 loops=1)\n Recheck Cond: (t1.tag_id = \n\"outer\".tag_id)\n -> Bitmap Index Scan on \nobject_tags_tag_id_object_id (cost=0.00..2.82 rows=233 width=0) \n(actual time=0.199..0.199 rows=446 loops=1)\n Index Cond: (t1.tag_id = \n\"outer\".tag_id)\n -> Index Scan using object_data_pkey on \nobject_data o (cost=0.00..3.02 rows=1 width=4) (actual \ntime=0.009..0.010 rows=0 loops=446)\n Index Cond: (o.subject = \"outer\".object_id)\n 
Filter: (\"type\" = 179)\nTotal runtime: 13.411 ms\n(16 rows)\n\ntriple_store=#\n\n\nSample 2:\n\ntriple_store=# explain analyze SELECT DISTINCT O.subject AS oid FROM \nobject_data O, object_tags T1, tags T2 WHERE O.type = 93 AND \nO.subject = T1.object_id AND T1.tag_id = T2.tag_id AND T2.tag = \n'current' LIMIT 1000;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-------------------------\nLimit (cost=1241.88..1241.88 rows=1 width=4) (actual \ntime=6411.409..6411.409 rows=0 loops=1)\n -> Unique (cost=1241.88..1241.88 rows=1 width=4) (actual \ntime=6411.405..6411.405 rows=0 loops=1)\n -> Sort (cost=1241.88..1241.88 rows=1 width=4) (actual \ntime=6411.400..6411.400 rows=0 loops=1)\n Sort Key: o.subject\n -> Nested Loop (cost=2.82..1241.87 rows=1 width=4) \n(actual time=6411.386..6411.386 rows=0 loops=1)\n -> Nested Loop (cost=2.82..678.57 rows=186 \nwidth=4) (actual time=46.045..2229.978 rows=446 loops=1)\n -> Index Scan using tags_tag_key on tags \nt2 (cost=0.00..5.01 rows=1 width=4) (actual time=11.798..11.802 \nrows=1 loops=1)\n Index Cond: (tag = 'current'::text)\n -> Bitmap Heap Scan on object_tags t1 \n(cost=2.82..670.65 rows=233 width=8) (actual time=34.222..2216.321 \nrows=446 loops=1)\n Recheck Cond: (t1.tag_id = \n\"outer\".tag_id)\n -> Bitmap Index Scan on \nobject_tags_tag_id_object_id (cost=0.00..2.82 rows=233 width=0) \n(actual time=25.523..25.523 rows=446 loops=1)\n Index Cond: (t1.tag_id = \n\"outer\".tag_id)\n -> Index Scan using object_data_pkey on \nobject_data o (cost=0.00..3.02 rows=1 width=4) (actual \ntime=9.370..9.370 rows=0 loops=446)\n Index Cond: (o.subject = \"outer\".object_id)\n Filter: (\"type\" = 93)\nTotal runtime: 6411.516 ms\n(16 rows)\n\ntriple_store=# explain analyze SELECT DISTINCT O.subject AS oid FROM \nobject_data O, object_tags T1, tags T2 WHERE O.type = 93 AND \nO.subject = T1.object_id AND T1.tag_id = T2.tag_id AND T2.tag = \n'current' LIMIT 1000;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-----------------------\nLimit (cost=1241.88..1241.88 rows=1 width=4) (actual \ntime=9.437..9.437 rows=0 loops=1)\n -> Unique (cost=1241.88..1241.88 rows=1 width=4) (actual \ntime=9.431..9.431 rows=0 loops=1)\n -> Sort (cost=1241.88..1241.88 rows=1 width=4) (actual \ntime=9.426..9.426 rows=0 loops=1)\n Sort Key: o.subject\n -> Nested Loop (cost=2.82..1241.87 rows=1 width=4) \n(actual time=9.414..9.414 rows=0 loops=1)\n -> Nested Loop (cost=2.82..678.57 rows=186 \nwidth=4) (actual time=0.347..3.477 rows=446 loops=1)\n -> Index Scan using tags_tag_key on tags \nt2 (cost=0.00..5.01 rows=1 width=4) (actual time=0.039..0.042 rows=1 \nloops=1)\n Index Cond: (tag = 'current'::text)\n -> Bitmap Heap Scan on object_tags t1 \n(cost=2.82..670.65 rows=233 width=8) (actual time=0.297..1.688 \nrows=446 loops=1)\n Recheck Cond: (t1.tag_id = \n\"outer\".tag_id)\n -> Bitmap Index Scan on \nobject_tags_tag_id_object_id (cost=0.00..2.82 rows=233 width=0) \n(actual time=0.185..0.185 rows=446 loops=1)\n Index Cond: (t1.tag_id = \n\"outer\".tag_id)\n -> Index Scan using object_data_pkey on \nobject_data o (cost=0.00..3.02 rows=1 width=4) (actual \ntime=0.009..0.009 rows=0 loops=446)\n Index Cond: (o.subject = \"outer\".object_id)\n Filter: (\"type\" = 93)\nTotal runtime: 9.538 ms\n(16 rows)\n\ntriple_store=#\n\n\nSchema:\n\ntriple_store=# \\d 
object_data\n Table \"public.object_data\"\n Column | Type | Modifiers\n---------------+-----------------------------+-----------\nsubject | integer | not null\ntype | integer | not null\nowned_by | integer | not null\ncreated_by | integer | not null\ncreated | timestamp without time zone | not null\nlast_modified | timestamp without time zone | not null\nlabel | text |\nIndexes:\n \"object_data_pkey\" PRIMARY KEY, btree (subject)\n \"object_data_type_created_by\" btree (\"type\", created_by)\n \"object_data_type_owned_by\" btree (\"type\", owned_by)\nForeign-key constraints:\n \"object_data_created_by_fkey\" FOREIGN KEY (created_by) \nREFERENCES objects(object_id) DEFERRABLE INITIALLY DEFERRED\n \"object_data_owned_by_fkey\" FOREIGN KEY (owned_by) REFERENCES \nobjects(object_id) DEFERRABLE INITIALLY DEFERRED\n \"object_data_type_fkey\" FOREIGN KEY (\"type\") REFERENCES objects \n(object_id) DEFERRABLE INITIALLY DEFERRED\nTablespace: \"alt_2\"\n\ntriple_store=# \\d object_tags\n Table \"public.object_tags\"\n Column | Type | Modifiers\n-----------+---------+-----------\nobject_id | integer | not null\ntag_id | integer | not null\nIndexes:\n \"object_tags_pkey\" PRIMARY KEY, btree (object_id, tag_id)\n \"object_tags_tag_id\" btree (tag_id)\n \"object_tags_tag_id_object_id\" btree (tag_id, object_id)\nForeign-key constraints:\n \"object_tags_object_id_fkey\" FOREIGN KEY (object_id) REFERENCES \nobjects(object_id) DEFERRABLE INITIALLY DEFERRED\n \"object_tags_tag_id_fkey\" FOREIGN KEY (tag_id) REFERENCES tags \n(tag_id) DEFERRABLE INITIALLY DEFERRED\n\ntriple_store=# \\d tags\n Table \"public.tags\"\nColumn | Type | Modifiers\n--------+--------- \n+-------------------------------------------------------\ntag_id | integer | not null default nextval('tags_tag_id_seq'::regclass)\ntag | text | not null\nIndexes:\n \"tags_pkey\" PRIMARY KEY, btree (tag_id)\n \"tags_tag_key\" UNIQUE, btree (tag)\n\n-- \n(peter.royal|osi)@pobox.com - http://fotap.org/~osi", "msg_date": "Fri, 6 Jan 2006 17:59:24 -0500", "msg_from": "peter royal <[email protected]>", "msg_from_op": true, "msg_subject": "help tuning queries on large database" }, { "msg_contents": "peter royal <[email protected]> writes:\n> So, my question is, is there anything I can do to boost performance \n> with what I've got, or am I in a position where the only 'fix' is \n> more faster disks? I can't think of any schema/index changes that \n> would help, since everything looks pretty optimal from the 'explain \n> analyze' output. 
I'd like to get a 10x improvement when querying from \n> the 'cold' state.\n\nI don't think you have any hope of improving the \"cold\" state much.\nThe right way to think about this is not to be in the \"cold\" state.\nCheck your kernel parameters and make sure it's not set to limit\nthe amount of memory used for cache (I'm not actually sure if there\nis such a limit on Linux, but there definitely is on some other Unixen).\nLook around and see if you can reduce the memory used by processes,\nor even better, offload non-database tasks to other machines.\n\nBasically you need to get as much of the database as you can to stay\nin disk cache.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 06 Jan 2006 18:47:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database " }, { "msg_contents": "On 1/6/06, peter royal <[email protected]> wrote:\n> PostgreSQL 8.1.1\n>\n> shared_buffers = 10000 # (It was higher, 50k, but didn't help any,\n> so brought down to free ram for disk cache)\n> work_mem = 8196\n> random_page_cost = 3\n> effective_cache_size = 250000\n\nI have played with both disk cache settings and shared buffers and I\nfound that if I increased the shared buffers above a certain value\nperformance would increase dramatically. Playing with the effective\ncache did not have the same amount of impact. I am currently running\nwith\n\nshared_buffers = 254288 # approx 2.1Gb\n\nand this is on a smaller dataset than yours.\n\n--\nHarry\nhttp://www.hjackson.org\nhttp://www.uklug.co.uk\n", "msg_date": "Sat, 7 Jan 2006 01:08:25 +0000", "msg_from": "Harry Jackson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "On Fri, 6 Jan 2006, Tom Lane wrote:\n\n> Date: Fri, 06 Jan 2006 18:47:55 -0500\n> From: Tom Lane <[email protected]>\n> To: peter royal <[email protected]>\n> Cc: [email protected]\n> Subject: Re: [PERFORM] help tuning queries on large database\n> \n> peter royal <[email protected]> writes:\n>> So, my question is, is there anything I can do to boost performance\n>> with what I've got, or am I in a position where the only 'fix' is\n>> more faster disks? I can't think of any schema/index changes that\n>> would help, since everything looks pretty optimal from the 'explain\n>> analyze' output. 
I'd like to get a 10x improvement when querying from\n>> the 'cold' state.\n>\n> I don't think you have any hope of improving the \"cold\" state much.\n> The right way to think about this is not to be in the \"cold\" state.\n> Check your kernel parameters and make sure it's not set to limit\n> the amount of memory used for cache (I'm not actually sure if there\n> is such a limit on Linux, but there definitely is on some other Unixen).\n\nLinux doesn't have any ability to limit the amount of memory used for \ncaching (there are periodicly requests for such a feature)\n\nDavid Lang\n\n> Look around and see if you can reduce the memory used by processes,\n> or even better, offload non-database tasks to other machines.\n>\n> Basically you need to get as much of the database as you can to stay\n> in disk cache.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Fri, 6 Jan 2006 17:18:48 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "Peter,\n\nOn 1/6/06 2:59 PM, \"peter royal\" <[email protected]> wrote:\n\n> I have experimented with having all 8 disks in a single RAID0 set, a\n> single RAID10 set, and currently 4 RAID0 sets of 2 disks each. There\n> hasn't been an appreciable difference in the overall performance of\n> my test suite (which randomly generates queries like the samples\n> below as well as a few other types. this problem manifests itself on\n> other queries in the test suite as well).\n\nHave you tested the underlying filesystem for it's performance? Run this:\n time bash -c 'dd if=/dev/zero of=/my_file_system/bigfile bs=8k\ncount=<your_memory_size_in_GB * 250000> && sync'\n\nThen run this:\n time dd if=/my_file_system/bigfile bs=8k of=/dev/null\n\nAnd report the times here please. With your 8 disks in any of the RAID0\nconfigurations you describe, you should be getting 480MB/s. In the RAID10\nconfiguration you should get 240.\n\nNote that ext3 will not go faster than about 300MB/s in our experience. You\nshould use xfs, which will run *much* faster.\n\nYou should also experiment with using larger readahead, which you can\nimplement like this:\n blockdev --setra 16384 /dev/<my_block_device>\n\nE.g. \"blockdev --setra 16384 /dev/sda\"\n\nThis will set the readahead of Linux block device reads to 16MB. Using\n3Ware's newest controllers we have seen 500MB/s + on 8 disk drives in RAID0\non CentOS 4.1 with xfs. Note that you will need to run the \"CentOS\nunsupported kernel\" to get xfs.\n\n> So, my question is, is there anything I can do to boost performance\n> with what I've got, or am I in a position where the only 'fix' is\n> more faster disks? I can't think of any schema/index changes that\n> would help, since everything looks pretty optimal from the 'explain\n> analyze' output. 
I'd like to get a 10x improvement when querying from\n> the 'cold' state.\n\n From what you describe, one of these is likely:\n- hardware isn't configured properly or a driver problem.\n- you need to use xfs and tune your Linux readahead\n\n- Luke\n\n\n", "msg_date": "Sun, 08 Jan 2006 10:42:31 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "I'll second all of Luke Lonergan's comments and add these.\n\nYou should be able to increase both \"cold\" and \"warm\" performance (as \nwell as data integrity. read below.) considerably.\nRon\n\nAt 05:59 PM 1/6/2006, peter royal wrote:\n>Howdy.\n>\n>I'm running into scaling problems when testing with a 16gb (data \n>+indexes) database.\n>\n>I can run a query, and it returns in a few seconds. If I run it\n>again, it returns in a few milliseconds. I realize this is because\n>during subsequent runs, the necessary disk pages have been cached by\n>the OS.\n>\n>I have experimented with having all 8 disks in a single RAID0 set, a\n>single RAID10 set, and currently 4 RAID0 sets of 2 disks each. There\n>hasn't been an appreciable difference in the overall performance of\n>my test suite (which randomly generates queries like the samples\n>below as well as a few other types. this problem manifests itself on\n>other queries in the test suite as well).\n>\n>So, my question is, is there anything I can do to boost performance\n>with what I've got, or am I in a position where the only 'fix' is\n>more faster disks? I can't think of any schema/index changes that\n>would help, since everything looks pretty optimal from the 'explain\n>analyze' output. I'd like to get a 10x improvement when querying from\n>the 'cold' state.\n>\n>Thanks for any assistance. The advice from reading this list to\n>getting to where I am now has been invaluable.\n>-peter\n>\n>\n>Configuration:\n>\n>PostgreSQL 8.1.1\n>\n>shared_buffers = 10000 # (It was higher, 50k, but didn't help any,\n>so brought down to free ram for disk cache)\n>work_mem = 8196\n>random_page_cost = 3\n>effective_cache_size = 250000\n>\n>\n>Hardware:\n>\n>CentOS 4.2 (Linux 2.6.9-22.0.1.ELsmp)\n\nUpgrade your kernel to at least 2.6.12\nThere's a known issue with earlier versions of the 2.6.x kernel and \n64b CPUs like the Opteron. See kernel.org for details.\n\n>Areca ARC-1220 8-port PCI-E controller\n\nMake sure you have 1GB or 2GB of cache. Get the battery backup and \nset the cache for write back rather than write through.\n\n>8 x Hitachi Deskstar 7K80 (SATA2) (7200rpm)\n>2 x Opteron 242 @ 1.6ghz\n>3gb RAM (should be 4gb, but separate Linux issue preventing us from\n>getting it to see all of it)\n>Tyan Thunder K8WE\nThe K8WE has 8 DIMM slots. That should be good for 16 or 32 GB of \nRAM (Depending on whether the mainboard recognizes 4GB DIMMs or \nnot. Ask Tyan about the latest K8WE firmare.). If nothing else, 1GB \nDIMMs are now so cheap that you should have no problems having 8GB on the K8WE.\n\nA 2.6.12 or later based Linux distro should have NO problems using \nmore than 4GB or RAM.\n\nAmong the other tricks having lots of RAM allows:\nIf some of your tables are Read Only or VERY rarely written to, you \ncan preload them at boot time and make them RAM resident using the \n/etc/tmpfs trick.\n\nIn addition there is at least one company making a cheap battery \nbacked PCI-X card that can hold up to 4GB of RAM and pretend to be a \nsmall HD to the OS. 
I don't remember any names at the moment, but \nthere have been posts here and at storage.review.com on such products.\n\n\n>RAID Layout:\n>\n>4 2-disk RAID0 sets created\nYou do know that a RAID 0 set provides _worse_ data protection than a \nsingle HD? Don't use RAID 0 for any data you want kept reliably.\n\nWith 8 HDs, the best config is probably\n1 2HD RAID 1 + 1 6HD RAID 10 or\n2 4HD RAID 10's\n\nIt is certainly true that once you have done everything you can with \nRAM, the next set of HW optimizations is to add HDs. The more the \nbetter up to a the limits of your available PCI-X bandwidth.\n\nIn short, a 2nd RAID fully populated controller is not unreasonable.\n\n\n", "msg_date": "Sun, 08 Jan 2006 16:35:11 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "On 1/8/06, Ron <[email protected]> wrote:\n>\n> <snip>\n> Among the other tricks having lots of RAM allows:\n> If some of your tables are Read Only or VERY rarely written to, you\n> can preload them at boot time and make them RAM resident using the\n> /etc/tmpfs trick.\n\n\nWhat is the /etc/tmpfs trick?\n\n-K\n\nOn 1/8/06, Ron <[email protected]> wrote:\n<snip>Among the other tricks having lots of RAM allows:If some of your tables are Read Only or VERY rarely written to, youcan preload them at boot time and make them RAM resident using the/etc/tmpfs trick.\n\nWhat is the /etc/tmpfs trick?\n\n-K", "msg_date": "Mon, 9 Jan 2006 08:37:04 -0600", "msg_from": "Kelly Burkhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "On 1/9/06, Kelly Burkhart <[email protected]> wrote:\n> On 1/8/06, Ron <[email protected]> wrote:\n> > <snip>\n> > Among the other tricks having lots of RAM allows:\n> > If some of your tables are Read Only or VERY rarely written to, you\n> > can preload them at boot time and make them RAM resident using the\n> > /etc/tmpfs trick.\n>\n> What is the /etc/tmpfs trick?\n\nI think he means you can create a directory that mounts and area of\nRAM. If you put the tables on it then it will be very fast. I would\nnot recommend it for anything you cannot afford to loose.\n\nI have also tried it and found that it did not produce as good as\nperformance as I expected.\n\n--\nHarry\nhttp://www.hjackson.org\nhttp://www.uklug.co.uk\n", "msg_date": "Mon, 9 Jan 2006 15:40:31 +0000", "msg_from": "Harry Jackson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "On Jan 8, 2006, at 1:42 PM, Luke Lonergan wrote:\n> Have you tested the underlying filesystem for it's performance? 
\n> Run this:\n> time bash -c 'dd if=/dev/zero of=/my_file_system/bigfile bs=8k\n> count=<your_memory_size_in_GB * 250000> && sync'\n\nThis is a 2-disk RAID0\n\n[root@bigboy /opt/alt-2]# time bash -c 'dd if=/dev/zero of=/opt/alt-2/ \nbigfile bs=8k count=1000000 && sync'\n1000000+0 records in\n1000000+0 records out\n\nreal 1m27.143s\nuser 0m0.276s\nsys 0m37.338s\n\n'iostat -x' showed writes peaking at ~100MB/s\n\n\n> Then run this:\n> time dd if=/my_file_system/bigfile bs=8k of=/dev/null\n\n[root@bigboy /opt/alt-2]# time dd if=/opt/alt-2/bigfile bs=8k of=/dev/ \nnull\n1000000+0 records in\n1000000+0 records out\n\nreal 1m9.846s\nuser 0m0.189s\nsys 0m11.099s\n\n'iostat -x' showed reads peaking at ~116MB/s\n\n\nAgain with kernel 2.6.15:\n\n[root@bigboy ~]# time bash -c 'dd if=/dev/zero of=/opt/alt-2/bigfile \nbs=8k count=1000000 && sync'\n1000000+0 records in\n1000000+0 records out\n\nreal 1m29.144s\nuser 0m0.204s\nsys 0m48.415s\n\n[root@bigboy ~]# time dd if=/opt/alt-2/bigfile bs=8k of=/dev/null\n1000000+0 records in\n1000000+0 records out\n\nreal 1m9.701s\nuser 0m0.168s\nsys 0m11.933s\n\n\n> And report the times here please. With your 8 disks in any of the \n> RAID0\n> configurations you describe, you should be getting 480MB/s. In the \n> RAID10\n> configuration you should get 240.\n\nNot anywhere near that. I'm scouring the 'net looking to see what \nneeds to be tuned at the HW level.\n\n> You should also experiment with using larger readahead, which you can\n> implement like this:\n> blockdev --setra 16384 /dev/<my_block_device>\n>\n> E.g. \"blockdev --setra 16384 /dev/sda\"\n\nwow, this helped nicely. Without using the updated kernel, it took \n28% off my testcase time.\n\n> From what you describe, one of these is likely:\n> - hardware isn't configured properly or a driver problem.\n\nUsing the latest Areca driver, looking to see if there is some \nconfiguration that was missed.\n\n> - you need to use xfs and tune your Linux readahead\n\nWill try XFS soon, concentrating on the 'dd' speed issue first.\n\n\nOn Jan 8, 2006, at 4:35 PM, Ron wrote:\n>> Areca ARC-1220 8-port PCI-E controller\n>\n> Make sure you have 1GB or 2GB of cache. Get the battery backup and \n> set the cache for write back rather than write through.\n\nThe card we've got doesn't have a SODIMM socket, since its only an 8- \nport card. My understanding was that was cache used when writing?\n\n> A 2.6.12 or later based Linux distro should have NO problems using \n> more than 4GB or RAM.\n\nUpgraded the kernel to 2.6.15, then we were able to set the BIOS \noption for the 'Memory Hole' to 'Software' and it saw all 4G (under \n2.6.11 we got a kernel panic with that set)\n\n>> RAID Layout:\n>>\n>> 4 2-disk RAID0 sets created\n> You do know that a RAID 0 set provides _worse_ data protection than \n> a single HD? Don't use RAID 0 for any data you want kept reliably.\n\nyup, aware of that. 
was planning on RAID10 for production, but just \nbroke it out into RAID0 sets for testing (from what I read, I \ngathered that the read performance of RAID0 and RAID10 were comparable)\n\n\nthanks for all the suggestions, I'll report back as I continue testing.\n\n-pete\n\n-- \n(peter.royal|osi)@pobox.com - http://fotap.org/~osi", "msg_date": "Mon, 9 Jan 2006 12:23:15 -0500", "msg_from": "peter royal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "Peter,\n\nOn 1/9/06 9:23 AM, \"peter royal\" <[email protected]> wrote:\n\n> This is a 2-disk RAID0\n\nYour 2-disk results look fine - what about your 8-disk results?\n\nGiven that you want to run in production with RAID10, the most you should\nexpect is 2x the 2-disk results using all 8 of your disks. If you want the\nbest rate for production while preserving data integrity, I recommend\nrunning your Areca in RAID5, in which case you should expect 3.5x your\n2-disk results (7 drives). You can assume you'll get that if you use XFS +\nreadahead. OTOH - I'd like to see your test results anyway :-)\n\n- Luke\n\n\n", "msg_date": "Mon, 09 Jan 2006 11:01:43 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "On Jan 9, 2006, at 2:01 PM, Luke Lonergan wrote:\n> Peter,\n>\n> On 1/9/06 9:23 AM, \"peter royal\" <[email protected]> wrote:\n>\n>> This is a 2-disk RAID0\n>\n> Your 2-disk results look fine - what about your 8-disk results?\n\nafter some further research the 2-disk RAID0 numbers are not bad.\n\nI have a single drive of the same type hooked up to the SATA2 port on \nthe motherboard to boot from, and its performance numbers are (linux \n2.6.15, ext3):\n\n[root@bigboy ~]# time bash -c 'dd if=/dev/zero of=/tmp/bigfile bs=8k \ncount=1000000 && sync'\n1000000+0 records in\n1000000+0 records out\n\nreal 4m55.032s\nuser 0m0.256s\nsys 0m47.299s\n[root@bigboy ~]# time dd if=/tmp/bigfile bs=8k of=/dev/null\n1000000+0 records in\n1000000+0 records out\n\nreal 3m27.229s\nuser 0m0.156s\nsys 0m13.377s\n\nso, there is a clear advantage to RAID over a single drive.\n\n\nnow, some stats in a 8-disk configuration:\n\n8-disk RAID0, ext3, 16k read-ahead\n\n[root@bigboy /opt/pgdata]# time bash -c 'dd if=/dev/zero of=/opt/ \npgdata/bigfile bs=8k count=1000000 && sync'\n1000000+0 records in\n1000000+0 records out\n\nreal 0m53.030s\nuser 0m0.204s\nsys 0m42.015s\n\n[root@bigboy /opt/pgdata]# time dd if=/opt/pgdata/bigfile bs=8k of=/ \ndev/null\n1000000+0 records in\n1000000+0 records out\n\nreal 0m23.232s\nuser 0m0.144s\nsys 0m13.213s\n\n\n8-disk RAID0, xfs, 16k read-ahead\n\n[root@bigboy /opt/pgdata]# time bash -c 'dd if=/dev/zero of=/opt/ \npgdata/bigfile bs=8k count=1000000 && sync'\n1000000+0 records in\n1000000+0 records out\n\nreal 0m32.177s\nuser 0m0.212s\nsys 0m21.277s\n\n[root@bigboy /opt/pgdata]# time dd if=/opt/pgdata/bigfile bs=8k of=/ \ndev/null\n1000000+0 records in\n1000000+0 records out\n\nreal 0m21.814s\nuser 0m0.172s\nsys 0m13.881s\n\n\n... WOW.. highly impressed with the XFS write speed! going to stick \nwith that!\n\nOverall, I got a 50% boost in the overall speed of my test suite by \nusing XFS and the 16k read-ahead.\n\n> Given that you want to run in production with RAID10, the most you \n> should\n> expect is 2x the 2-disk results using all 8 of your disks. 
If you \n> want the\n> best rate for production while preserving data integrity, I recommend\n> running your Areca in RAID5, in which case you should expect 3.5x your\n> 2-disk results (7 drives). You can assume you'll get that if you \n> use XFS +\n> readahead. OTOH - I'd like to see your test results anyway :-)\n\nI've been avoiding RAID5 after reading how performance drops when a \ndrive is out/rebuilding. The performance benefit will outweigh the \ncost I think.\n\nThanks for the help!\n-pete\n\n-- \n(peter.royal|osi)@pobox.com - http://fotap.org/~osi", "msg_date": "Mon, 9 Jan 2006 15:59:39 -0500", "msg_from": "peter royal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "Peter,\n\nOn 1/9/06 12:59 PM, \"peter royal\" <[email protected]> wrote:\n\n\n> Overall, I got a 50% boost in the overall speed of my test suite by\n> using XFS and the 16k read-ahead.\n\nYes, it all looks pretty good for your config, though it looks like you\nmight be adapter limited with the Areca - you should have seen a read time\nwith XFS of about 17 seconds.\n\nOTOH - with RAID5, you are probably about balanced, you should see a read\ntime of about 19 seconds and instead you'll get your 22 which isn't too big\nof a deal.\n \n> Thanks for the help!\n\nSure - no problem!\n\nBTW - I'm running tests right now with the 3Ware 9550SX controllers. Two of\nthem on one machine running simultaneously with 16 drives and we're getting\n800MB/s sustained read times. That's a 32GB file read in 40 seconds (!!)\n\nAt that rate, we're only about 3x slower than memory access (practically\nlimited at around 2GB/s even though the system bus peak is 10GB/s). So, the\npoint is, if you want to get close to your \"warm\" speed, you need to get\nyour disk I/O as close to main memory speed as you can. With parallel I/O\nyou can do that (see Bizgres MPP for more).\n\n- Luke\n\n\n", "msg_date": "Mon, 09 Jan 2006 13:29:30 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "At 12:23 PM 1/9/2006, peter royal wrote:\n\n>On Jan 8, 2006, at 4:35 PM, Ron wrote:\n>>>Areca ARC-1220 8-port PCI-E controller\n>>\n>>Make sure you have 1GB or 2GB of cache. Get the battery backup and\n>>set the cache for write back rather than write through.\n>\n>The card we've got doesn't have a SODIMM socket, since its only an \n>8- port card. My understanding was that was cache used when writing?\nTrade in your 8 port ARC-1220 that doesn't support 1-2GB of cache for \na 12, 16, or 24 port Areca one that does. It's that important.\n\nPresent generation SATA2 HDs should average ~50MBps raw ASTR. The \nIntel IOP333 DSP on the ARC's is limited to 800MBps, so that's your \nlimit per card. That's 16 SATA2 HD's operating in parallel (16HD \nRAID 0, 17 HD RAID 5, 32 HD RAID 10).\n\nNext generation 2.5\" form factor 10Krpm SAS HD's due to retail in \n2006 are supposed to average ~90MBps raw ASTR. 
8 such HDs in \nparallel per ARC-12xx will be the limit.\n\nSide Note: the PCI-Ex8 bus on the 12xx cards is good for ~1.6GBps \nRWPB, so I expect Areca is going to be upgrading this controller to \nat least 2x, if not 4x (would require replacing the x8 bus with a x16 \nbus), the bandwidth at some point.\n\nA PCI-Ex16 bus is good for ~3.2GBps RWPB, so if you have the slots 4 \nsuch populated ARC cards will max out a PCI-Ex16 bus.\n\nIn your shoes, I think I would recommend replacing your 8 port \nARC-1220 with a 12 port ARC-1230 with 1-2GB of battery backed cache \nand planning to get more of them as need arises.\n\n\n>>A 2.6.12 or later based Linux distro should have NO problems using\n>>more than 4GB or RAM.\n>\n>Upgraded the kernel to 2.6.15, then we were able to set the BIOS\n>option for the 'Memory Hole' to 'Software' and it saw all 4G (under\n>2.6.11 we got a kernel panic with that set)\nThere are some other kernel tuning params that should help memory and \nphysical IO performance. Talk to a Linux kernel guru to get the \ncorrect advice specific to your installation and application.\n\n\nIt should be noted that there are indications of some major \ninefficiencies in pg's IO layer that make it compute bound under some \ncircumstances before it becomes IO bound. These may or may not cause \ntrouble for you as you keep pushing the envelope for maximum IO performance.\n\n\nWith the kind of work you are doing and we are describing, I'm sure \nyou can have a _very_ zippy system.\n\nRon\n\n\n", "msg_date": "Tue, 10 Jan 2006 09:33:20 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "Ron,\n\nA few days back you mentioned:\n\n> Upgrade your kernel to at least 2.6.12\n> There's a known issue with earlier versions of the 2.6.x kernel and \n> 64b CPUs like the Opteron. See kernel.org for details.\n> \n\nI did some searching and couldn't find any obvious mention of this issue\n(I gave up after searching through the first few hundred instances of\n\"64\" in the 2.6.12 changelog).\n\nWould you mind being a little more specific about which issue you're\ntalking about? We're about to deploy some new 16GB RAM Opteron DB\nservers and I'd like to check and make sure RH backported whatever the\nfix was to their current RHEL4 kernel.\n\nThanks,\nMark Lewis\n", "msg_date": "Tue, 10 Jan 2006 16:28:06 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" }, { "msg_contents": "At 07:28 PM 1/10/2006, Mark Lewis wrote:\n>Ron,\n>\n>A few days back you mentioned:\n>\n> > Upgrade your kernel to at least 2.6.12\n> > There's a known issue with earlier versions of the 2.6.x kernel and\n> > 64b CPUs like the Opteron. See kernel.org for details.\n> >\n>\n>I did some searching and couldn't find any obvious mention of this issue\n>(I gave up after searching through the first few hundred instances of\n>\"64\" in the 2.6.12 changelog).\n>\n>Would you mind being a little more specific about which issue you're\n>talking about? 
We're about to deploy some new 16GB RAM Opteron DB\n>servers and I'd like to check and make sure RH backported whatever the\n>fix was to their current RHEL4 kernel.\nThere are 3 issues I know about in general:\n1= As Peter Royal noted on this list, pre 12 versions of 2.6.x have \nproblems with RAM of >= 4GB.\n\n2= Pre 12 versions on 2.6.x when running A64 or Xeon 64b SMP seem to \nbe susceptible to \"context switch storms\".\n\n3= Physical and memory IO is considerably improved in the later \nversions of 2.6.x compared to 2.6.11 or earlier.\n\nTalk to a real Linux kernel guru (I am not) for details and specifics.\nRon\n\n\n", "msg_date": "Wed, 11 Jan 2006 01:03:48 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: help tuning queries on large database" } ]
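The advice in the thread above comes down to two things: fix the raw I/O path underneath the database, and keep as much of the working set cached as possible. As a rough, purely illustrative sketch of the server-side half, the statements below show the settings being discussed and the cold-versus-warm effect from inside psql; object_data is one of the poster's tables and simply stands in for any large table, and on 8.1 the three parameters can only be changed by editing postgresql.conf and reloading.

-- Settings discussed in the thread:
SHOW shared_buffers;
SHOW effective_cache_size;
SHOW random_page_cost;

-- Same statement twice: the first pass after a quiet period reads mostly
-- from disk, the second is served largely from the OS page cache, which is
-- exactly the gap the thread is trying to close.
EXPLAIN ANALYZE SELECT count(*) FROM object_data;
EXPLAIN ANALYZE SELECT count(*) FROM object_data;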
[ { "msg_contents": "\nWe have to inserts a records(15000- 20000) into a table which also\ncontains (15000-20000) records, then after insertion, we have to delete\nthe records according to a business rule.\nAbove process is taking place in a transaction and we are using batches\nof 128 to insert records.\nEverything works fine on QA environment but somehow after inserts,\ndelete query hangs in production environment. Delete query has some\njoins with other table and a self join. There is no exception as we\nhave done enough exception handling. It simply hangs with no trace in\napplication logs.\n\nWhen I do \"ps aux\" , I see\npostgres 5294 41.3 2.4 270120 38092 pts/4 R 10:41 52:56\npostgres: nuuser nm 127.0.0.1 DELETE\n\nPostgres 7.3.4 on Linux..\n\nThanks for any help..\n\nVimal\n\n", "msg_date": "8 Jan 2006 09:34:30 -0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Hanging Query" }, { "msg_contents": "[email protected] writes:\n> We have to inserts a records(15000- 20000) into a table which also\n> contains (15000-20000) records, then after insertion, we have to delete\n> the records according to a business rule.\n> Above process is taking place in a transaction and we are using batches\n> of 128 to insert records.\n> Everything works fine on QA environment but somehow after inserts,\n> delete query hangs in production environment. Delete query has some\n> joins with other table and a self join. There is no exception as we\n> have done enough exception handling. It simply hangs with no trace in\n> application logs.\n\n> When I do \"ps aux\" , I see\n> postgres 5294 41.3 2.4 270120 38092 pts/4 R 10:41 52:56\n> postgres: nuuser nm 127.0.0.1 DELETE\n\nThat doesn't look to me like it's \"hanging\"; it's trying to process\nsome unreasonably long-running query. If I were you I'd be taking\na closer look at that DELETE command. It may contain an unconstrained\njoin (cross-product) or some such. Try EXPLAINing the command and\nlook for unexpected table scans.\n\n> Postgres 7.3.4 on Linux..\n\nThat's mighty ancient and has many known bugs. Do yourself a favor\nand update to some newer version --- at the very least, use the latest\n7.3 branch release (we're up to 7.3.13 now).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Jan 2006 13:05:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hanging Query " } ]
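The original DELETE was never posted, so the sketch below is only a hypothetical illustration of the check Tom suggests: every table and column name is invented, and the point is simply to read the plan before running the statement for real.

-- Plain EXPLAIN shows the chosen plan without executing the DELETE:
EXPLAIN
DELETE FROM staged_rows
 WHERE batch_id = 42
   AND rule_id IN (SELECT rule_id FROM business_rules WHERE is_active = true);
-- Warning signs: sequential scans over large tables where an index was
-- expected, or row estimates that multiply out to millions (the symptom of
-- an unconstrained join / accidental cross product).

-- To time the statement without keeping the deletions, EXPLAIN ANALYZE
-- (which really does execute the DELETE) can be wrapped in a transaction:
BEGIN;
EXPLAIN ANALYZE
DELETE FROM staged_rows
 WHERE batch_id = 42
   AND rule_id IN (SELECT rule_id FROM business_rules WHERE is_active = true);
ROLLBACK;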
[ { "msg_contents": "Hello\n\nwhat is the best for a char field with less than 1000 characters?\na text field or a varchar(1000)\n\nthanks\n\n", "msg_date": "Mon, 09 Jan 2006 11:58:19 +0100", "msg_from": "TNO <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORMANCE] Beetwen text and varchar field" }, { "msg_contents": "On Mon, Jan 09, 2006 at 11:58:19AM +0100, TNO wrote:\n> what is the best for a char field with less than 1000 characters?\n> a text field or a varchar(1000)\n\nThey will be equivalent. text and varchar are the same type internally -- the\nonly differences are that varchar can have a length (but does not need one),\nand that some casts are only defined for text.\n\nIf there's really a natural thousand-character limit to the data in question,\nuse varchar(1000); if not, use text or varchar, whatever you'd like.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 9 Jan 2006 12:49:40 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Beetwen text and varchar field" }, { "msg_contents": "\nSee the FAQ.\n\n---------------------------------------------------------------------------\n\nSteinar H. Gunderson wrote:\n> On Mon, Jan 09, 2006 at 11:58:19AM +0100, TNO wrote:\n> > what is the best for a char field with less than 1000 characters?\n> > a text field or a varchar(1000)\n> \n> They will be equivalent. text and varchar are the same type internally -- the\n> only differences are that varchar can have a length (but does not need one),\n> and that some casts are only defined for text.\n> \n> If there's really a natural thousand-character limit to the data in question,\n> use varchar(1000); if not, use text or varchar, whatever you'd like.\n> \n> /* Steinar */\n> -- \n> Homepage: http://www.sesse.net/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 9 Jan 2006 18:50:31 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Beetwen text and varchar field" } ]
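A small illustration of the answer above, with made-up table and column names: the two columns behave identically apart from the length check on the varchar.

CREATE TABLE note_test (
    body_text    text,
    body_varchar varchar(1000)
);

INSERT INTO note_test VALUES (repeat('x', 2000), repeat('x', 1000));  -- both accepted
INSERT INTO note_test (body_varchar)
       VALUES (repeat('x', 1001));  -- rejected: value too long for character varying(1000)

As the replies say, pick varchar(1000) only if the 1000-character limit is a genuine rule of the data; otherwise text (or unconstrained varchar) is the simplest choice and performs the same.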
[ { "msg_contents": "Hello gentlemen,\n\nAlthough this is my first post on the list, I am a fairly experienced PostgreSQL \nprogrammer. I am writing an ERP application suite using PostgreSQL as the \npreferred DBMS. Let me state that the SQL DDL is automatically generated by a \nCASE tool from an ER model. The generated schema contains the appropriate \nprimary key and foreign key constraints, as defined by the original ER model, as \nwell as \"reverse indexes\" on foreign keys, allowing (in theory) rapid backward \nnavigation of foreign keys in joins.\n\nLet me show a sample join of two tables in the database schema. The information \nprovided is quite extensive. I'm sorry for that, but I think it is better to \nprovide the list with all the relevant information.\n\nPackage: postgresql-7.4\nPriority: optional\nSection: misc\nInstalled-Size: 7860\nMaintainer: Martin Pitt <[email protected]>\nArchitecture: i386\nVersion: 1:7.4.9-2\n\n\n\n Table \"public.articolo\"\n Column | Type | \nModifiers\n-------------------------+-----------------------------+-----------------------------------------------------\n bigoid | bigint | not null default \nnextval('object_bigoid_seq'::text)\n metadata | text |\n finalized | timestamp without time zone |\n xdbs_created | timestamp without time zone | default now()\n xdbs_modified | timestamp without time zone |\n id_ente | text | not null\n barcode | text |\n tipo | text |\n id_produttore | text | not null\n id_articolo | text | not null\n venditore_id_ente | text |\n id_prodotto | text |\n aggregato_id_ente | text |\n aggregato_id_produttore | text |\n aggregato_id_articolo | text |\n descr | text |\n url | text |\n datasheet | text |\n scheda_sicurezza | text |\n peso | numeric |\n lunghezza | numeric |\n larghezza | numeric |\n altezza | numeric |\n volume | numeric |\n max_strati | numeric |\n um | text |\nIndexes:\n \"articolo_pkey\" primary key, btree (id_ente, id_produttore, id_articolo)\n \"articolo_unique_barcode_index\" unique, btree (barcode)\n \"articolo_modified_index\" btree (xdbs_modified)\nForeign-key constraints:\n \"$4\" FOREIGN KEY (um) REFERENCES um(um) DEFERRABLE INITIALLY DEFERRED\n \"$3\" FOREIGN KEY (aggregato_id_ente, aggregato_id_produttore, \naggregato_id_articolo) REFERENCES articolo(id_ente, id_produttore, id_articolo) \nDEFERRABLE INITIALLY DEFERRED\n \"$2\" FOREIGN KEY (venditore_id_ente, id_prodotto) REFERENCES \nprodotto(venditore_id_ente, id_prodotto) DEFERRABLE INITIALLY DEFERRED\n \"$1\" FOREIGN KEY (id_ente) REFERENCES ente(id_ente) DEFERRABLE INITIALLY \nDEFERRED\nRules:\n articolo_delete_rule AS ON DELETE TO articolo DO INSERT INTO articolo_trash \n(id_ente, id_produttore, id_articolo, venditore_id_ente, id_prodotto, \naggregato_id_ente, aggregato_id_produttore, aggregato_id_articolo, descr, url, \ndatasheet, scheda_sicurezza, peso, lunghezza, larghezza, altezza, volume, \nmax_strati, um, barcode, tipo, bigoid, metadata, finalized, xdbs_created, \nxdbs_modified) VALUES (old.id_ente, old.id_produttore, old.id_articolo, \nold.venditore_id_ente, old.id_prodotto, old.aggregato_id_ente, \nold.aggregato_id_produttore, old.aggregato_id_articolo, old.descr, old.url, \nold.datasheet, old.scheda_sicurezza, old.peso, old.lunghezza, old.larghezza, \nold.altezza, old.volume, old.max_strati, old.um, old.barcode, old.tipo, \nold.bigoid, old.metadata, old.finalized, old.xdbs_created, old.xdbs_modified)\n articolo_update_rule AS ON UPDATE TO articolo WHERE \n((new.xdbs_modified)::timestamp with time zone <> now()) DO INSERT INTO 
\narticolo_trash (id_ente, id_produttore, id_articolo, venditore_id_ente, \nid_prodotto, aggregato_id_ente, aggregato_id_produttore, aggregato_id_articolo, \ndescr, url, datasheet, scheda_sicurezza, peso, lunghezza, larghezza, altezza, \nvolume, max_strati, um, barcode, tipo, bigoid, metadata, finalized, \nxdbs_created, xdbs_modified) VALUES (old.id_ente, old.id_produttore, \nold.id_articolo, old.venditore_id_ente, old.id_prodotto, old.aggregato_id_ente, \nold.aggregato_id_produttore, old.aggregato_id_articolo, old.descr, old.url, \nold.datasheet, old.scheda_sicurezza, old.peso, old.lunghezza, old.larghezza, \nold.altezza, old.volume, old.max_strati, old.um, old.barcode, old.tipo, \nold.bigoid, old.metadata, old.finalized, old.xdbs_created, old.xdbs_modified)\nTriggers:\n articolo_update_trigger BEFORE UPDATE ON articolo FOR EACH ROW EXECUTE \nPROCEDURE xdbs_update_trigger()\nInherits: object,\n barcode\n\n\n Table \"public.ubicazione\"\n Column | Type | Modifiers\n---------------+-----------------------------+-----------------------------------------------------\n bigoid | bigint | not null default \nnextval('object_bigoid_seq'::text)\n metadata | text |\n finalized | timestamp without time zone |\n xdbs_created | timestamp without time zone | default now()\n xdbs_modified | timestamp without time zone |\n id_ente | text | not null\n barcode | text |\n tipo | text |\n id_magazzino | text | not null\n id_settore | text | not null\n id_area | text | not null\n id_ubicazione | text | not null\n flavor | text |\n peso_max | numeric |\n lunghezza | numeric |\n larghezza | numeric |\n altezza | numeric |\n volume_max | numeric |\n inventario | integer | default 0\n allarme | text |\n manutenzione | text |\n id_produttore | text |\n id_articolo | text |\n quantita | numeric |\n in_prelievo | numeric |\n in_deposito | numeric |\n lotto | text |\n scadenza | date |\nIndexes:\n \"ubicazione_pkey\" primary key, btree (id_ente, id_magazzino, id_settore, \nid_area, id_ubicazione)\n \"ubicazione_id_ubicazione_key\" unique, btree (id_ubicazione)\n \"ubicazione_fkey_articolo\" btree (id_ente, id_produttore, id_articolo)\n \"ubicazione_modified_index\" btree (xdbs_modified)\nForeign-key constraints:\n \"$5\" FOREIGN KEY (id_ente, id_produttore, id_articolo) REFERENCES \narticolo(id_ente, id_produttore, id_articolo) DEFERRABLE INITIALLY DEFERRED\n \"$4\" FOREIGN KEY (manutenzione) REFERENCES manutenzione(manutenzione) \nDEFERRABLE INITIALLY DEFERRED\n \"$3\" FOREIGN KEY (allarme) REFERENCES allarme(allarme) DEFERRABLE INITIALLY \nDEFERRED\n \"$2\" FOREIGN KEY (flavor) REFERENCES flavor(flavor) DEFERRABLE INITIALLY \nDEFERRED\n \"$1\" FOREIGN KEY (id_ente, id_magazzino, id_settore, id_area) REFERENCES \narea(id_ente, id_magazzino, id_settore, id_area) DEFERRABLE INITIALLY DEFERRED\nRules:\n ubicazione_delete_rule AS ON DELETE TO ubicazione DO INSERT INTO \nubicazione_trash (id_ente, id_magazzino, id_settore, id_area, id_ubicazione, \nflavor, peso_max, lunghezza, larghezza, altezza, volume_max, inventario, \nallarme, manutenzione, id_produttore, id_articolo, quantita, in_prelievo, \nin_deposito, lotto, scadenza, barcode, tipo, bigoid, metadata, finalized, \nxdbs_created, xdbs_modified) VALUES (old.id_ente, old.id_magazzino, \nold.id_settore, old.id_area, old.id_ubicazione, old.flavor, old.peso_max, \nold.lunghezza, old.larghezza, old.altezza, old.volume_max, old.inventario, \nold.allarme, old.manutenzione, old.id_produttore, old.id_articolo, old.quantita, \nold.in_prelievo, old.in_deposito, old.lotto, 
old.scadenza, old.barcode, \nold.tipo, old.bigoid, old.metadata, old.finalized, old.xdbs_created, \nold.xdbs_modified)\n ubicazione_update_rule AS ON UPDATE TO ubicazione WHERE \n((new.xdbs_modified)::timestamp with time zone <> now()) DO INSERT INTO \nubicazione_trash (id_ente, id_magazzino, id_settore, id_area, id_ubicazione, \nflavor, peso_max, lunghezza, larghezza, altezza, volume_max, inventario, \nallarme, manutenzione, id_produttore, id_articolo, quantita, in_prelievo, \nin_deposito, lotto, scadenza, barcode, tipo, bigoid, metadata, finalized, \nxdbs_created, xdbs_modified) VALUES (old.id_ente, old.id_magazzino, \nold.id_settore, old.id_area, old.id_ubicazione, old.flavor, old.peso_max, \nold.lunghezza, old.larghezza, old.altezza, old.volume_max, old.inventario, \nold.allarme, old.manutenzione, old.id_produttore, old.id_articolo, old.quantita, \nold.in_prelievo, old.in_deposito, old.lotto, old.scadenza, old.barcode, \nold.tipo, old.bigoid, old.metadata, old.finalized, old.xdbs_created, \nold.xdbs_modified)\nTriggers:\n ubicazione_update_trigger BEFORE UPDATE ON ubicazione FOR EACH ROW EXECUTE \nPROCEDURE xdbs_update_trigger()\nInherits: object,\n barcode\n\n******************************************************************************\n\nHere is the first join. This is planned correctly. Execution times are irrelevant.\n\ndmd-freerp-1-alex=# explain analyze SELECT * FROM articolo JOIN ubicazione \nUSING (id_ente, id_produttore, id_articolo) WHERE ubicazione.id_ente = 'dmd' \nAND allarme IS NULL AND manutenzione IS NULL AND ubicazione.xdbs_modified > \n'2006-01-08 18:25:00+01';\n QUERY \nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..8.73 rows=1 width=1146) (actual time=0.247..0.247 \nrows=0 loops=1)\n -> Index Scan using ubicazione_modified_index on ubicazione \n(cost=0.00..3.03 rows=1 width=536) (actual time=0.239..0.239 rows=0 loops=1)\n Index Cond: (xdbs_modified > '2006-01-08 18:25:00'::timestamp without \ntime zone)\n Filter: ((id_ente = 'dmd'::text) AND (allarme IS NULL) AND \n(manutenzione IS NULL))\n -> Index Scan using articolo_pkey on articolo (cost=0.00..5.69 rows=1 \nwidth=653) (never executed)\n Index Cond: (('dmd'::text = articolo.id_ente) AND \n(articolo.id_produttore = \"outer\".id_produttore) AND (articolo.id_articolo = \n\"outer\".id_articolo))\n Total runtime: 0.556 ms\n(7 rows)\n\n*********************************************************************\n\nHere's the second join on the same tables. This times a different set of indexes \nshould be used to perform the join, but even in this case I would expect the \nplanner to generate a nested loop of two index scans. 
Instead, this is what happens.\n\n\ndmd-freerp-1-alex=# explain analyze SELECT * FROM articolo JOIN ubicazione \nUSING (id_ente, id_produttore, id_articolo) WHERE ubicazione.id_ente = 'dmd' \nAND allarme IS NULL AND manutenzione IS NULL AND articolo.xdbs_modified > \n'2006-01-08 18:25:00+01';\n QUERY \nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..1017.15 rows=1 width=1146) (actual \ntime=258.648..258.648 rows=0 loops=1)\n -> Seq Scan on ubicazione (cost=0.00..1011.45 rows=1 width=536) (actual \ntime=0.065..51.617 rows=12036 loops=1)\n Filter: ((id_ente = 'dmd'::text) AND (allarme IS NULL) AND \n(manutenzione IS NULL))\n -> Index Scan using articolo_pkey on articolo (cost=0.00..5.69 rows=1 \nwidth=653) (actual time=0.011..0.011 rows=0 loops=12036)\n Index Cond: (('dmd'::text = articolo.id_ente) AND \n(articolo.id_produttore = \"outer\".id_produttore) AND (articolo.id_articolo = \n\"outer\".id_articolo))\n Filter: (xdbs_modified > '2006-01-08 18:25:00'::timestamp without time \nzone)\n Total runtime: 258.975 ms\n(7 rows)\n\nThis time, a sequential scan on the rightmost table is used to perform the join. \nThis is quite plainly a wrong choice, since the number of tuples in the articolo \nhaving xdbs_modified > '2006-01-08 18:25:00' is 0. I also tried increasing the \namount of collected statistics to 1000 with \"ALTER TABLE articolo ALTER COLUMN \nxdbs_modified SET STATISTICS 1000\" and subsequently vacuum-analyzed the db, so \nas to give the planner as much information as possible to realize that articolo \nought to be index-scanned with the articolo_modified_index B-tree index. The \ncorrect query plan is to perform a nested loop join with an index scan on \narticolo using xdbs_modified_index and a corresponding index scan on ubicazione \nusing ubicazione_fkey_articolo.\n\nI am currently resorting to selecting from the single tables and performing the \njoin in the application code rather than in the DB. This is currently the only \nviable alternative for me, as a 500x speed-down simply cannot be tolerated.\n\nWhat I do not understand is why the planner behaves so differently in the two \ncases. Any ideas? Would upgrading to more recent versions of postgresql make any \ndifference?\n\n\n\nAlex\n\n\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Mon, 09 Jan 2006 14:51:27 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "500x speed-down: Wrong query plan?" }, { "msg_contents": "Hi Alessandro,\n\n> Nested Loop (cost=0.00..1017.15 rows=1 width=1146) (actual \n> time=258.648..258.648 rows=0 loops=1)\n> -> Seq Scan on ubicazione (cost=0.00..1011.45 rows=1 width=536) \n> (actual time=0.065..51.617 rows=12036 loops=1)\n> Filter: ((id_ente = 'dmd'::text) AND (allarme IS NULL) AND \n> (manutenzione IS NULL))\n\nThe problem seems here. The planner expects one matching row (and that's \nwhy it chooses a nested loop), but 12036 rows are matching this condition.\n\nAre you sure that you recentrly ANALYZED the table \"ubicazione\"? 
If so, \ntry to increase statistics for the id_ente column.\n\n\nP.S.\nThere is also an italian mailing list, if you are interested :)\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n", "msg_date": "Mon, 09 Jan 2006 16:02:18 +0100", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 500x speed-down: Wrong query plan?" }, { "msg_contents": "On 1/9/06, Alessandro Baretta <[email protected]> wrote:\n> Hello gentlemen,\n>\n> Although this is my first post on the list, I am a fairly experienced PostgreSQL\n> programmer. I am writing an ERP application suite using PostgreSQL as the\n> preferred DBMS. Let me state that the SQL DDL is automatically generated by a\n> CASE tool from an ER model. The generated schema contains the appropriate\n> primary key and foreign key constraints, as defined by the original ER model, as\n> well as \"reverse indexes\" on foreign keys, allowing (in theory) rapid backward\n> navigation of foreign keys in joins.\n>\n> Let me show a sample join of two tables in the database schema. The information\n> provided is quite extensive. I'm sorry for that, but I think it is better to\n> provide the list with all the relevant information.\n>\n> Package: postgresql-7.4\n\nmaybe, because you are in developing state, you can start to think in\nupgrading to 8.1\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Mon, 9 Jan 2006 10:30:21 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 500x speed-down: Wrong query plan?" }, { "msg_contents": "Matteo Beccati wrote:\n> Hi Alessandro,\n> \n>> Nested Loop (cost=0.00..1017.15 rows=1 width=1146) (actual \n>> time=258.648..258.648 rows=0 loops=1)\n>> -> Seq Scan on ubicazione (cost=0.00..1011.45 rows=1 width=536) \n>> (actual time=0.065..51.617 rows=12036 loops=1)\n>> Filter: ((id_ente = 'dmd'::text) AND (allarme IS NULL) AND \n>> (manutenzione IS NULL))\n> \n> \n> The problem seems here. The planner expects one matching row (and that's \n> why it chooses a nested loop), but 12036 rows are matching this condition.\n> \n> Are you sure that you recentrly ANALYZED the table \"ubicazione\"? If so, \n> try to increase statistics for the id_ente column.\n\nNo, this is not the problem. I increased the amount of statistics with ALTER \nTABLE ... SET STATISTICS 1000, which is as much as I can have. The problem is \nthat the planner simply ignores the right query plan, which is orders of \nmagnitude less costly. Keep in mind that the XDBS--the CASE tool I use--makes \nheavy use of indexes, and generates all relevant indexes in relation to the join \npaths which are implicit in the ER model \"relations\". In this case, both \nubicazione and articolo have indexes on the join fields:\n\nIndexes:\n\"articolo_pkey\" primary key, btree (id_ente, id_produttore, id_articolo)\n\"ubicazione_fkey_articolo\" btree (id_ente, id_produttore, id_articolo)\n\nNotice that only the \"articolo_pkey\" is a unique index, while \n\"ubicazione_fkey_articolo\" allows duplicates. This second index is not used by \nthe planner.\n\nBoth tables also have a \"bookkeeping\" index on xdbs_modified. I am selecting \n\"recently inserted or updated\" tuples, which are usually a very small fraction \nof the table--if there are any. The index on xdbs_modified is B-tree allowing a \nvery quick index scan to find the few tuples having xdbs_modified > '[some \nrecent timestamp]'. 
Hence, the optimal plan for both queries is to perform an \nindex scan using the <table_name>_modified_index on the table upon which I \nspecify the xdbs_modified > '...' condition, and the join-fields index on the \nother table.\n\nAlex\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Mon, 09 Jan 2006 17:23:06 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 500x speed-down: Wrong query plan?" }, { "msg_contents": "Alessandro Baretta <[email protected]> writes:\n> Matteo Beccati wrote:\n>> Are you sure that you recentrly ANALYZED the table \"ubicazione\"? If so, \n>> try to increase statistics for the id_ente column.\n\n> No, this is not the problem. I increased the amount of statistics with ALTER \n> TABLE ... SET STATISTICS 1000, which is as much as I can have.\n\nWhat Matteo wanted to know is if you'd done an ANALYZE afterward. ALTER\nTABLE SET STATISTICS doesn't in itself update the statistics.\n\nWhat do you get from\n\nEXPLAIN SELECT * FROM articolo WHERE articolo.xdbs_modified > '2006-01-08 18:25:00+01';\n\nI'm curious to see how many rows the planner thinks this will produce,\nand whether it will use the index or not.\n\nAlso, I gather from the plan choices that the condition id_ente = 'dmd'\nisn't very selective ... what fraction of the rows in each table\nsatisfy that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Jan 2006 13:41:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 500x speed-down: Wrong query plan? " }, { "msg_contents": "Tom Lane wrote:\n> Alessandro Baretta <[email protected]> writes:\n> \n>>Matteo Beccati wrote:\n>>\n>>>Are you sure that you recentrly ANALYZED the table \"ubicazione\"? If so, \n>>>try to increase statistics for the id_ente column.\n> \n> \n>>No, this is not the problem. I increased the amount of statistics with ALTER \n>>TABLE ... SET STATISTICS 1000, which is as much as I can have.\n> \n> \n> What Matteo wanted to know is if you'd done an ANALYZE afterward. 
ALTER\n> TABLE SET STATISTICS doesn't in itself update the statistics.\n\nI probably forgot to mention that I have vacuum-analyze the after this \noperation, and, since I did not manage to get the index to work, I \nvacuum-analyzed several times more, just to be on the safe side.\n\n> What do you get from\n> \n> EXPLAIN SELECT * FROM articolo WHERE articolo.xdbs_modified > '2006-01-08 18:25:00+01';\n> \n> I'm curious to see how many rows the planner thinks this will produce,\n> and whether it will use the index or not.\n\ndmd-freerp-1-alex=# EXPLAIN ANALYZE SELECT * FROM articolo WHERE \narticolo.xdbs_modified > '2006-01-08 18:25:00+01';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using articolo_modified_index on articolo (cost=0.00..3914.91 \nrows=17697 width=653) (actual time=0.032..0.032 rows=0 loops=1)\n Index Cond: (xdbs_modified > '2006-01-08 18:25:00'::timestamp without time zone)\n Total runtime: 0.150 ms\n(3 rows)\n\nThe planner gets tricked only by *SOME* join queries.\n\n\n> Also, I gather from the plan choices that the condition id_ente = 'dmd'\n> isn't very selective ... what fraction of the rows in each table\n> satisfy that?\n\nIn most situation, this condition selects all the tuples. \"id_ente\" selects the \n\"owner of the data\". Since, in most situations, companies do not share a \ndatabase between them--although the application allows it--filtering according \nto 'id_ente' is like to filtering at all. Yet, this field is used in the \nindexes, so the condition ought to be specified in the queries anyhow.\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Mon, 09 Jan 2006 19:50:19 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 500x speed-down: Wrong query plan?" }, { "msg_contents": "Alessandro Baretta <[email protected]> writes:\n> Tom Lane wrote:\n>> I'm curious to see how many rows the planner thinks this will produce,\n>> and whether it will use the index or not.\n\n> dmd-freerp-1-alex=# EXPLAIN ANALYZE SELECT * FROM articolo WHERE \n> articolo.xdbs_modified > '2006-01-08 18:25:00+01';\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using articolo_modified_index on articolo (cost=0.00..3914.91 \n> rows=17697 width=653) (actual time=0.032..0.032 rows=0 loops=1)\n> Index Cond: (xdbs_modified > '2006-01-08 18:25:00'::timestamp without time zone)\n> Total runtime: 0.150 ms\n> (3 rows)\n\nWell, there's your problem: 17697 rows estimated vs 0 actual. With a\ncloser row estimate it would've probably done the right thing for the\njoin problem.\n\nHow many rows are really in the table, anyway? Could we see the\npg_stats row for articolo.xdbs_modified?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Jan 2006 13:56:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 500x speed-down: Wrong query plan? 
" }, { "msg_contents": "Tom Lane wrote:\n> Alessandro Baretta <[email protected]> writes:\n> \n>>Tom Lane wrote:\n>>\n>>>I'm curious to see how many rows the planner thinks this will produce,\n>>>and whether it will use the index or not.\n>>dmd-freerp-1-alex=# EXPLAIN ANALYZE SELECT * FROM articolo WHERE \n>>articolo.xdbs_modified > '2006-01-08 18:25:00+01';\n>> QUERY PLAN\n>>-------------------------------------------------------------------------------------------------------------------------------------------\n>> Index Scan using articolo_modified_index on articolo (cost=0.00..3914.91 \n>>rows=17697 width=653) (actual time=0.032..0.032 rows=0 loops=1)\n>> Index Cond: (xdbs_modified > '2006-01-08 18:25:00'::timestamp without time zone)\n>> Total runtime: 0.150 ms\n>>(3 rows)\n> \n> \n> Well, there's your problem: 17697 rows estimated vs 0 actual. With a\n> closer row estimate it would've probably done the right thing for the\n> join problem.\n\nHmmm, what you are telling me is very interesting, Tom. So, let me see if I got \nthis straight: the first 'rows=... in the result from EXPLAIN ANALYZE gives me \nestimates, while the second gives the actual cardinality of the selected record \nset. Correct? If this is true, two questions arise: why is the estimated number \nof rows completele wrong, and why, given such a large estimated record set does \nPostgreSQL schedule an Index Scan as opposed to a Seq Scan?\n\n\n >\n > How many rows are really in the table, anyway? Could we see the\n > pg_stats row for articolo.xdbs_modified?\n\ndmd-freerp-1-alex=# select count(*) from articolo;\n count\n-------\n 53091\n(1 row)\n\ndmd-freerp-1-alex=# explain analyze select * from articolo;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Seq Scan on articolo (cost=0.00..1439.91 rows=53091 width=653) (actual \ntime=0.013..151.189 rows=53091 loops=1)\n Total runtime: 295.152 ms\n(2 rows)\n\nNow let me get the pg_stats for xdbs_modified.\n\ndmd-freerp-1-alex=# select * from pg_stats where tablename = 'articolo' and \nattname = 'xdbs_modified';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | \n most_common_vals | most_common_freqs | histogram_bounds | correlation\n------------+-----------+---------------+-----------+-----------+------------+--------------------------------+-------------------+------------------+-------------\n public | articolo | xdbs_modified | 0 | 8 | 1 | \n{\"2006-01-10 08:12:58.605327\"} | {1} | | 1\n(1 row)\n\nFor sake of simplicity I have re-timestamped all tuples in the table with the \ncurrent timestamp, as you can see above. Now, obviously, the planner must \nestimate ~0 rows for queries posing a selection condition on xdbs_modified, for \nany value other than \"2006-01-10 08:12:58.605327\". Let me try selecting from \narticolo first.\n\ndmd-freerp-1-alex=# EXPLAIN ANALYZE SELECT * FROM articolo WHERE \narticolo.xdbs_modified > '2006-01-10 18:25:00+01';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using articolo_modified_index on articolo (cost=0.00..2.01 rows=1 \nwidth=653) (actual time=0.139..0.139 rows=0 loops=1)\n Index Cond: (xdbs_modified > '2006-01-10 18:25:00'::timestamp without time zone)\n Total runtime: 0.257 ms\n(3 rows)\n\nThe planner produces a sensible estimate of the number of rows and consequently \nchooses the appropriate query plan. 
Now, the join.\n\ndmd-freerp-1-alex=# explain analyze SELECT * FROM articolo JOIN ubicazione \nUSING (id_ente, id_produttore, id_articolo) WHERE articolo.id_ente = 'dmd' AND \nallarme IS NULL AND manutenzione IS NULL AND articolo.xdbs_modified > \n'2006-01-10 18:25:00+01';\n \nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..5.05 rows=1 width=1146) (actual time=0.043..0.043 \nrows=0 loops=1)\n -> Index Scan using articolo_modified_index on articolo (cost=0.00..2.02 \nrows=1 width=653) (actual time=0.035..0.035 rows=0 loops=1)\n Index Cond: (xdbs_modified > '2006-01-10 18:25:00'::timestamp without \ntime zone)\n Filter: (id_ente = 'dmd'::text)\n -> Index Scan using ubicazione_fkey_articolo on ubicazione \n(cost=0.00..3.02 rows=1 width=536) (never executed)\n Index Cond: (('dmd'::text = ubicazione.id_ente) AND \n(\"outer\".id_produttore = ubicazione.id_produttore) AND (\"outer\".id_articolo = \nubicazione.id_articolo))\n Filter: ((allarme IS NULL) AND (manutenzione IS NULL))\n Total runtime: 0.382 ms\n(8 rows)\n\nDear Tom, you're my hero! I have no clue as to how or why the statistics were \nwrong yesterday--as I vacuum-analyzed continuously out of lack of any better \nidea--and I was stupid enough to re-timestamp everything before selecting from \npg_stats. Supposedly, the timestamps in the table were a random sampling taken \nfrom the month of December 2005, so that any date in January would be greater \nthan all the timestamps in xdbs_modified. There must a bug in the my \nrule/trigger system, which is responsible to maintain these timestamps as \nappropriate.\n\n> \t\t\tregards, tom lane\n>\n\nThank you very much Tom and Matteo. Your help has been very precious to me. 
\nThanks to your wisdom, my application will now have a 500x speed boost on a very \ncommon class of queries.\n\nThe end result is the following query plan, allowing me to rapidly select only \nthe tuples in a join which have changed since the application last updated its \nnotion of this dataset.\n\ndmd-freerp-1-alex=# explain analyze SELECT * FROM articolo JOIN ubicazione \nUSING (id_ente, id_produttore, id_articolo) WHERE articolo.id_ente = 'dmd' AND \nallarme IS NULL AND manutenzione IS NULL AND articolo.xdbs_modified > \n'2006-01-10 18:25:00+01' UNION SELECT * FROM articolo JOIN ubicazione USING \n(id_ente, id_produttore, id_articolo) WHERE articolo.id_ente = 'dmd' AND \nallarme IS NULL AND manutenzione IS NULL AND ubicazione.xdbs_modified > \n'2006-01-10 18:25:00+01';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=11.13..11.39 rows=2 width=1146) (actual time=0.519..0.519 rows=0 \nloops=1)\n -> Sort (cost=11.13..11.14 rows=2 width=1146) (actual time=0.512..0.512 \nrows=0 loops=1)\n Sort Key: id_ente, id_produttore, id_articolo, bigoid, metadata, \nfinalized, xdbs_created, xdbs_modified, barcode, tipo, venditore_id_ente, \nid_prodotto, aggregato_id_ente, aggregato_id_produttore, aggregato_id_articolo, \ndescr, url, datasheet, scheda_sicurezza, peso, lunghezza, larghezza, altezza, \nvolume, max_strati, um, bigoid, metadata, finalized, xdbs_created, \nxdbs_modified, barcode, tipo, id_magazzino, id_settore, id_area, id_ubicazione, \nflavor, peso_max, lunghezza, larghezza, altezza, volume_max, inventario, \nallarme, manutenzione, quantita, in_prelievo, in_deposito, lotto, scadenza\n -> Append (cost=0.00..11.12 rows=2 width=1146) (actual \ntime=0.305..0.305 rows=0 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..5.06 rows=1 \nwidth=1146) (actual time=0.157..0.157 rows=0 loops=1)\n -> Nested Loop (cost=0.00..5.05 rows=1 width=1146) \n(actual time=0.149..0.149 rows=0 loops=1)\n -> Index Scan using articolo_modified_index on \narticolo (cost=0.00..2.02 rows=1 width=653) (actual time=0.142..0.142 rows=0 \nloops=1)\n Index Cond: (xdbs_modified > '2006-01-10 \n18:25:00'::timestamp without time zone)\n Filter: (id_ente = 'dmd'::text)\n -> Index Scan using ubicazione_fkey_articolo on \nubicazione (cost=0.00..3.02 rows=1 width=536) (never executed)\n Index Cond: (('dmd'::text = \nubicazione.id_ente) AND (\"outer\".id_produttore = ubicazione.id_produttore) AND \n(\"outer\".id_articolo = ubicazione.id_articolo))\n Filter: ((allarme IS NULL) AND (manutenzione \nIS NULL))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..6.06 rows=1 \nwidth=1146) (actual time=0.137..0.137 rows=0 loops=1)\n -> Nested Loop (cost=0.00..6.05 rows=1 width=1146) \n(actual time=0.131..0.131 rows=0 loops=1)\n -> Index Scan using ubicazione_modified_index on \nubicazione (cost=0.00..3.02 rows=1 width=536) (actual time=0.123..0.123 rows=0 \nloops=1)\n Index Cond: (xdbs_modified > '2006-01-10 \n18:25:00'::timestamp without time zone)\n Filter: ((allarme IS 
NULL) AND (manutenzione \nIS NULL) AND ('dmd'::text = id_ente))\n -> Index Scan using articolo_pkey on articolo \n(cost=0.00..3.02 rows=1 width=653) (never executed)\n Index Cond: ((articolo.id_ente = 'dmd'::text) \nAND (articolo.id_produttore = \"outer\".id_produttore) AND (articolo.id_articolo = \n\"outer\".id_articolo))\n Total runtime: 1.609 ms\n(20 rows)\n\nSince the previous approach used in my application was to select the whole \nrelation every time the user needed to access this data, the above result must \nbe compared with the following naive one.\n\ndmd-freerp-1-alex=# explain analyze SELECT * FROM articolo JOIN ubicazione \nUSING (id_ente, id_produttore, id_articolo) WHERE articolo.id_ente = 'dmd' AND \nallarme IS NULL AND manutenzione IS NULL; \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..1014.49 rows=1 width=1146) (actual \ntime=0.210..283.272 rows=3662 loops=1)\n -> Seq Scan on ubicazione (cost=0.00..1011.45 rows=1 width=536) (actual \ntime=0.070..51.223 rows=12036 loops=1)\n Filter: ((allarme IS NULL) AND (manutenzione IS NULL) AND ('dmd'::text \n= id_ente))\n -> Index Scan using articolo_pkey on articolo (cost=0.00..3.02 rows=1 \nwidth=653) (actual time=0.008..0.009 rows=0 loops=12036)\n Index Cond: ((articolo.id_ente = 'dmd'::text) AND \n(articolo.id_produttore = \"outer\".id_produttore) AND (articolo.id_articolo = \n\"outer\".id_articolo))\n Total runtime: 292.544 ms\n(6 rows)\n\nThis amounts to a ~200x speedup for the end user.\n\nAlex\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Tue, 10 Jan 2006 08:46:39 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 500x speed-down: Wrong statistics!" }, { "msg_contents": "Alessandro Baretta <[email protected]> writes:\n> I have no clue as to how or why the statistics were wrong\n> yesterday--as I vacuum-analyzed continuously out of lack of any better\n> idea--and I was stupid enough to re-timestamp everything before\n> selecting from pg_stats.\n\nToo bad. I would be interested to find out how, if the stats were\nup-to-date, the thing was still getting the row estimate so wrong.\nIf you manage to get the database back into its prior state please\ndo send along the pg_stats info.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jan 2006 10:22:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 500x speed-down: Wrong statistics! " }, { "msg_contents": "Tom Lane wrote:\n> Alessandro Baretta <[email protected]> writes:\n> \n>>I have no clue as to how or why the statistics were wrong\n>>yesterday--as I vacuum-analyzed continuously out of lack of any better\n>>idea--and I was stupid enough to re-timestamp everything before\n>>selecting from pg_stats.\n> \n> \n> Too bad. 
I would be interested to find out how, if the stats were\n> up-to-date, the thing was still getting the row estimate so wrong.\n> If you manage to get the database back into its prior state please\n> do send along the pg_stats info.\n\nI have some more information on this issue, which clears PostgreSQL's planner of \nall suspects. I am observing severe corruption of the bookkeeping fields managed \nby the xdbs rule/trigger \"complex\". I am unable to pinpoint the cause, right \nnow, but the effect is that after running a few hours' test on the end-user \napplication (which never interacts directly with xdbs_* fields, and thus cannot \npossibly mangle them) most tuples (the older ones, apparently) get thei \ntimestamps set to NULL. Before vacuum-analyzing the table, yesterday's \nstatistics were in effect, and the planner used the appropriate indexes. Now, \nafter vacuum-analyzing the table, the pg_stats row for the xdbs_modified field \nno longer exists (!), and the planner has reverted to the Nested Loop Seq Scan \njoin strategy. Hence, all the vacuum-analyzing I was doing when complaining \nagainst the planner was actually collecting completely screwed statistics, and \nthis is why the ALTER TABLE ... SET STATISTICS 1000 did not help at all!\n\nOk. I plead guilty and ask for the clemency of the court. I'll pay my debt with \nsociety with a long term of pl/pgsql code debugging...\n\nAlex\n", "msg_date": "Wed, 11 Jan 2006 10:42:45 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 500x speed-down: Wrong statistics!" } ]
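The statistics problem above can be checked directly from the catalogs. A minimal sketch of the kind of check Tom Lane asks for, assuming the table and column names used in the thread (articolo, xdbs_modified) and an illustrative statistics target of 100:

    ALTER TABLE articolo ALTER COLUMN xdbs_modified SET STATISTICS 100;
    ANALYZE articolo;

    SELECT null_frac, n_distinct, most_common_vals, histogram_bounds
      FROM pg_stats
     WHERE tablename = 'articolo' AND attname = 'xdbs_modified';

    EXPLAIN ANALYZE
    SELECT * FROM articolo WHERE xdbs_modified > now() - interval '1 hour';

An empty pg_stats result, or a null_frac close to 1, would have pointed straight at the mangled xdbs_modified timestamps; and if the estimated row count in the EXPLAIN ANALYZE output still diverges wildly from the actual count after ANALYZE, the column data itself is the next suspect.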
[ { "msg_contents": "Suppose a table with structure:\n\nTable \"public.t4\"\n\n Column | Type | Modifiers\n--------+---------------+-----------\n c1 | character(10) | not null\n c2 | character(6) | not null\n c3 | date | not null\n c4 | character(30) |\n c5 | numeric(10,2) | not null\nIndexes:\n \"t4_prim\" PRIMARY KEY, btree (c1, c2, c3)\n\nThen 2 queries\n\necho \"explain select * from t4 where (c1,c2,c3) >=\n('A','B','1990-01-01') order by c1,c2,c3\"|psql test\n QUERY PLAN\n\n----------------------------------------------------------------------------------\n Index Scan using t4_prim on t4 (cost=0.00..54.69 rows=740 width=75)\n Filter: (ROW(c1, c2, c3) >= ROW('A'::bpchar, 'B'::bpchar,\n'1990-01-01'::date))\n(2 rows)\n\nand\n\necho \"explain select * from t4 where (c1,c2,c3) >=\n('A','B','1990-01-01') orde>\n QUERY PLAN\n\n----------------------------------------------------------------------------------\n Index Scan using t4_prim on t4 (cost=0.00..54.69 rows=740 width=75)\n Filter: (ROW(c1, c2, c3) >= ROW('A'::bpchar, 'B'::bpchar,\n'1990-01-01'::date))\n(2 rows)\n\nSo switching from (c1,c2,c3) compare from = to >= makes the optimizer\nsee the where clause as a row filter, which is not really the case.\n\nFurther\n\necho \"explain select * from t4 where (c1,c2) = ('A','B') order by\nc1,c2,c3\"|ps>\n QUERY PLAN\n-------------------------------------------------------------------\n Index Scan using t4_prim on t4 (cost=0.00..4.83 rows=1 width=75)\n Index Cond: ((c1 = 'A'::bpchar) AND (c2 = 'B'::bpchar))\n(2 rows)\n\nhere again the index can be used (again), the row count can be greater\nthan one.\n\nbut\n\n echo \"explain select * from t4 where (c1,c2) >= ('A','B') order by\nc1,c2,c3\"|p>\n QUERY PLAN\n----------------------------------------------------------------------\n Index Scan using t4_prim on t4 (cost=0.00..52.84 rows=740 width=75)\n Filter: (ROW(c1, c2) >= ROW('A'::bpchar, 'B'::bpchar))\n(2 rows)\n\n\nSo >= (or <=) is not optimized against an index where it could be.\n\n\n\nBernard Dhooghe\n\n", "msg_date": "9 Jan 2006 09:10:02 -0800", "msg_from": "\"Bernard Dhooghe\" <[email protected]>", "msg_from_op": true, "msg_subject": ">= forces row compare and not index elements compare when possible" }, { "msg_contents": "\"Bernard Dhooghe\" <[email protected]> writes:\n> So >= (or <=) is not optimized against an index where it could be.\n\nWork in progress...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Jan 2006 13:10:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: >= forces row compare and not index elements compare when\n\tpossible" } ]
[ { "msg_contents": "Question,\n\nHow exactly is Postgres and Linux use the memory?\n\nI have serveral databases that have multi GB indexes on very large tables.\nOn our current servers, the indexes can fit into memory but not the data\n(servers have 8 - 12 GB). However, my boss is wanting to get new servers\nfor me but does not want to keep the memory requirements as high as they are\nnow (this will allow us to get more servers to spread our 200+ databases\nover).\n\nQuestion, if I have a 4GB+ index for a table on a server with 4GB ram, and I\nsubmit a query that does an index scan, does Postgres read the entire index,\nor just read the index until it finds the matching value (our extra large\nindexes are primary keys).\n\nI am looking for real number to give to my boss the say either having a\nprimary key larger than our memory is bad (and how to clearly justfify it),\nor it is ok.\n\nIf it is ok, what are the trade offs in performance?\\\n\nObviously, I want more memory, but I have to prove the need to my boss since\nit raises the cost of the servers a fair amount.\n\nThanks for any help,\n\nChris\n\nQuestion,\n\nHow exactly is Postgres and Linux use the memory?\n\nI have serveral databases that have multi GB indexes on very large\ntables.  On our current servers, the indexes can fit into memory\nbut not the data (servers have 8 - 12 GB).  However, my boss is\nwanting to get new servers for me but does not want to keep the memory\nrequirements as high as they are now (this will allow us to get more\nservers to spread our 200+ databases over).\n\nQuestion, if I have a 4GB+ index for a table on a server with 4GB ram,\nand I submit a query that does an index scan, does Postgres read the\nentire index, or just read the index until it finds the matching value\n(our extra large indexes are primary keys).\n\nI am looking for real number to give to my boss the say either having a\nprimary key larger than our memory is bad (and how to clearly justfify\nit), or it is ok.\n\nIf it is ok, what are the trade offs in performance?\\\n\nObviously, I want more memory, but I have to prove the need to my boss since it raises the cost of the servers a fair amount.\n\nThanks for any help,\n\nChris", "msg_date": "Mon, 9 Jan 2006 13:54:48 -0500", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "Memory Usage Question" }, { "msg_contents": "On Mon, Jan 09, 2006 at 01:54:48PM -0500, Chris Hoover wrote:\n> Question, if I have a 4GB+ index for a table on a server with 4GB ram, and I\n> submit a query that does an index scan, does Postgres read the entire index,\n> or just read the index until it finds the matching value (our extra large\n> indexes are primary keys).\n\nWell, the idea behind an index is that if you need a specific value from\nit, you can get there very quickly, reading a minimum of data along the\nway. So basically, PostgreSQL won't normally read an entire index.\n\n> I am looking for real number to give to my boss the say either having a\n> primary key larger than our memory is bad (and how to clearly justfify it),\n> or it is ok.\n> \n> If it is ok, what are the trade offs in performance?\\\n> \n> Obviously, I want more memory, but I have to prove the need to my boss since\n> it raises the cost of the servers a fair amount.\n\nWell, if you add a sleep to the following code, you can tie up some\namount of memory, which would allow you to simulate having less memory\navailable. 
Though over time I think the kernel might decide to page that\nmemory out, so it's not perfect.\n\nint main(int argc, char *argv[]) {\n if (!calloc(atoi(argv[1]), 1024*1024)) { printf(\"Error allocating memory.\\n\"); }\n}\n\nIn a nutshell, PostgreSQL and the OS will generally work together to\nonly cache data that is being used fairly often. In the case of a large\nPK index, if you're not actually reading a large distribution of the\nvalues in the index you probably aren't even caching the entire index\neven now. There may be some kind of linux tool that would show you what\nportion of a file is currently cached, which would help answer that\nquestion (but remember that hopefully whatever parts of the index are\ncached by PostgreSQL itself won't also be cached by the OS as well).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 9 Jan 2006 14:35:58 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Usage Question" } ]
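A rough way to answer the 'does an index scan read the whole index' question from inside the database, instead of tying up memory, is to compare the index size with the number of index blocks a single query actually touches. A sketch assuming an 8.1 server with block-level statistics collection enabled (stats_block_level = on) and a hypothetical index named big_table_pkey:

    SELECT pg_relation_size('big_table_pkey');  -- on-disk size of the index, in bytes

    SELECT indexrelname, idx_blks_read, idx_blks_hit
      FROM pg_statio_user_indexes
     WHERE indexrelname = 'big_table_pkey';

    -- run one query that probes the primary key, for example
    --   SELECT * FROM big_table WHERE id = 12345;
    -- then repeat the pg_statio_user_indexes query: the increase in
    -- idx_blks_read + idx_blks_hit is the number of 8 kB index pages the
    -- query touched, normally just the few pages of one btree descent,
    -- nowhere near the whole multi-GB index.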
[ { "msg_contents": "Hello,\n\nI've a performance problem with the planner algorithm choosen in a website.\nSee the difference between this:\n\n\thttp://klive.cpushare.com/?scheduler=cooperative\n\nand this:\n\n\thttp://klive.cpushare.com/?scheduler=preemptive\n\n(note, there's much less data to show with preemptive, so it's not because of\nthe amount of data to output)\n\nThe second takes ages to complete and it overloads the server as well at 100%\ncpu load for several seconds.\n\n\"cooperative\" runs \"WHERE kernel_version NOT LIKE '%% PREEMPT %%'\", while\n\"preempt\" runs \"WHERE kernel_version LIKE '%% PREEMPT %%'. The only difference\nis a NOT before \"LIKE\". No other differences at all.\n\nThe planner does apparently a big mistake using the nested loop in the \"LIKE\"\nquery, it should use the hash join lik in the \"NOT LIKE\" query instead.\n\nI guess I can force it somehow (I found some docs about it here:\n\n\thttp://www.postgresql.org/docs/8.1/static/runtime-config-query.html\n\n) but it looks like something could be improved in the default mode too, so I'm\nreporting the problem since it looks a performance bug to me. It just makes no\nsense to me that the planner takes a difference decision based on a \"not\". It\ncan't know if it's more likely or less likely, this is a boolean return, it's\n*exactly* the same cost to run it. Making assumption without user-provided\nhints looks a mistake. I never said to the db that \"not like\" is more or less\nlikely to return data in output than \"like\".\n\nTried ANALYZE, including VACUUM FULL ANALYZE and it doesn't make a difference.\n\nPerhaps it's analyze that suggests to use a different algorithm with \"not like\"\nbecause there's much more data to analyze with \"not like\" than with \"like\", but\nthat algorithm works much better even when there's less data to analyze.\n\nIndexes don't make any visible difference.\n\npostgres is version 8.1.2 self compiled from CVS 8.1 branch of yesterday.\n\npsandrea@opteron:~> psql -V\npsql (PostgreSQL) 8.1.2\ncontains support for command-line editing\nandrea@opteron:~> \n\nThe problem is reproducible on the shell, I only need to remove \"explain\". 
Of\ncourse explain is wrong about the cost, according to explain the first query is\ncheaper when infact it's an order of magnitude more costly.\n\nklive=> explain SELECT class, vendor, device, revision, COUNT(*) as nr FROM pci NATURAL INNER JOIN klive WHERE kernel_version LIKE '%% PREEMPT %%' GROUP BY class, vendor, device, revision;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n HashAggregate (cost=1687.82..1687.83 rows=1 width=16)\n -> Nested Loop (cost=235.86..1687.81 rows=1 width=16)\n -> Seq Scan on klive (cost=0.00..1405.30 rows=1 width=8)\n Filter: ((kernel_version)::text ~~ '%% PREEMPT %%'::text)\n -> Bitmap Heap Scan on pci (cost=235.86..282.32 rows=15 width=24)\n Recheck Cond: (pci.klive = \"outer\".klive)\n -> Bitmap Index Scan on pci_pkey (cost=0.00..235.86 rows=15 width=0)\n Index Cond: (pci.klive = \"outer\".klive)\n(8 rows)\n\nklive=> explain SELECT class, vendor, device, revision, COUNT(*) as nr FROM pci NATURAL INNER JOIN klive WHERE kernel_version NOT LIKE '%% PREEMPT %%' GROUP BY class, vendor, device, revision;\n QUERY PLAN\n--------------------------------------------------------------------------------\n HashAggregate (cost=3577.40..3612.00 rows=2768 width=16)\n -> Hash Join (cost=1569.96..3231.50 rows=27672 width=16)\n Hash Cond: (\"outer\".klive = \"inner\".klive)\n -> Seq Scan on pci (cost=0.00..480.73 rows=27673 width=24)\n -> Hash (cost=1405.30..1405.30 rows=22263 width=8)\n -> Seq Scan on klive (cost=0.00..1405.30 rows=22263 width=8)\n Filter: ((kernel_version)::text !~~ '%% PREEMPT %%'::text)\n(7 rows)\n\nklive=> \n\nHints welcome, thanks!\n\n\nPS. All the source code of the website where I'm reproducing the problem is\navailable at the above url under the GPL.\n", "msg_date": "Tue, 10 Jan 2006 02:44:47 +0100", "msg_from": "Andrea Arcangeli <[email protected]>", "msg_from_op": true, "msg_subject": "NOT LIKE much faster than LIKE?" }, { "msg_contents": "Andrea Arcangeli <[email protected]> writes:\n> It just makes no sense to me that the planner takes a difference\n> decision based on a \"not\".\n\nWhy in the world would you think that? In general a NOT will change the\nselectivity of the WHERE condition tremendously. If the planner weren't\nsensitive to that, *that* would be a bug. The only case where it's\nirrelevant is if the selectivity of the base condition is exactly 50%,\nwhich is not a very reasonable default guess for LIKE.\n\nIt sounds to me that the problem is misestimation of the selectivity\nof the LIKE pattern --- the planner is going to think that\nLIKE '%% PREEMPT %%' is fairly selective because of the rather long\nmatch text, when in reality it's probably not so selective on your\ndata. But we don't keep any statistics that would allow the actual\nnumber of matching rows to be estimated well. You might want to think\nabout changing your data representation so that the pattern-match can be\nreplaced by a boolean column, or some such, so that the existing\nstatistics code can make a more reasonable estimate how many rows are\nselected.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Jan 2006 21:04:48 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE? 
" }, { "msg_contents": "On Mon, Jan 09, 2006 at 09:04:48PM -0500, Tom Lane wrote:\n> Andrea Arcangeli <[email protected]> writes:\n> > It just makes no sense to me that the planner takes a difference\n> > decision based on a \"not\".\n> \n> Why in the world would you think that? In general a NOT will change the\n> selectivity of the WHERE condition tremendously. If the planner weren't\n> sensitive to that, *that* would be a bug. The only case where it's\n> irrelevant is if the selectivity of the base condition is exactly 50%,\n> which is not a very reasonable default guess for LIKE.\n\nHow do you know that \"LIKE\" will have a selectivity above 50% in the\nfirst place? I think 50% should be the default unless the selectively is\nmeasured at runtime against the data being queried.\n\nIf you don't know the data, I think it's a bug that LIKE is assumed to\nhave a selectivity above 50%. You can't know that, only the author of\nthe code can know that and that's why I talked about hints. It'd be fine\nto give hints like:\n\n\tUNLIKELY string LIKE '%% PREEMPT %%'\n\nor:\n\n\tLIKELY string NOT LIKE '%% PREEMPT %%'\n\nThen you could assume that very little data will be returned or a lot of\ndata will be returned. \n\nIf you don't get hints NOT LIKE or LIKE should be assumed to have the\nsame selectivity.\n\n> It sounds to me that the problem is misestimation of the selectivity\n> of the LIKE pattern --- the planner is going to think that\n> LIKE '%% PREEMPT %%' is fairly selective because of the rather long\n> match text, when in reality it's probably not so selective on your\n> data. But we don't keep any statistics that would allow the actual\n\nTrue, there's a lot of data that matches %% PREEMPT %% (even if less\nthan the NOT case).\n\n> number of matching rows to be estimated well. You might want to think\n> about changing your data representation so that the pattern-match can be\n> replaced by a boolean column, or some such, so that the existing\n> statistics code can make a more reasonable estimate how many rows are\n> selected.\n\nI see. I can certainly fix it by stopping using LIKE. But IMHO this\nremains a bug, since until the statistics about the numberof matching\nrows isn't estimated well, you should not make assumptions on LIKE/NOT\nLIKE. I think you can change the code in a way that I won't have to\ntouch anything, and this will lead to fewer surprises in the future IMHO.\n\nThanks!\n", "msg_date": "Tue, 10 Jan 2006 03:23:03 +0100", "msg_from": "Andrea Arcangeli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "> \tUNLIKELY string LIKE '%% PREEMPT %%'\n> \n> or:\n> \n> \tLIKELY string NOT LIKE '%% PREEMPT %%'\n\nYou should be using contrib/tsearch2 for an un-anchored text search perhaps?\n\n", "msg_date": "Tue, 10 Jan 2006 10:29:05 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" 
}, { "msg_contents": "On Tue, Jan 10, 2006 at 10:29:05AM +0800, Christopher Kings-Lynne wrote:\n> >\tUNLIKELY string LIKE '%% PREEMPT %%'\n> >\n> >or:\n> >\n> >\tLIKELY string NOT LIKE '%% PREEMPT %%'\n> \n> You should be using contrib/tsearch2 for an un-anchored text search perhaps?\n\nIf I wanted to get the fastest speed possible, then I think parsing it\nwith python and storing true/false in a boolean like suggested before\nwould be better and simpler as well for this specific case.\n\nHowever I don't need big performance, I need just decent performance, and it\nannoys me that there heurisics where the LIKE query assumes little data\nwill be selected. There's no way to know that until proper stats are\nrecorded on the results of the query. The default should be good enough\nto use IMHO, and there's no way to know if NOT LIKE or LIKE will return\nmore data, 50% should be assumed for both if no runtime information is\navailable IMHO.\n\nIIRC gcc in a code like if (something) {a} else {b} assumes that a is\nmore likely to be executed then b, but that's because it's forced to\nchoose something. Often one is forced to choose what is more likely\nbetween two events, but I don't think the above falls in this case. I\nguess the heuristic really wanted to speed up the runtime of LIKE, when\nit actually made it a _lot_ worse. No heuristic is better than an\nheuristic that falls apart in corner cases like the above \"LIKE\".\n", "msg_date": "Tue, 10 Jan 2006 03:45:34 +0100", "msg_from": "Andrea Arcangeli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "Andrea Arcangeli <[email protected]> writes:\n> If you don't know the data, I think it's a bug that LIKE is assumed to\n> have a selectivity above 50%.\n\nExtrapolating from the observation that the heuristics don't work well\non your data to the conclusion that they don't work for anybody is not\ngood logic. Replacing that code with a flat 50% is not going to happen\n(or if it does, I'll be sure to send the mob of unhappy users waving\ntorches and pitchforks to your door not mine ;-)).\n\nI did just think of something we could improve though. The pattern\nselectivity code doesn't make any use of the statistics about \"most\ncommon values\". For a constant pattern, we could actually apply the\npattern test with each common value and derive answers that are exact\nfor the portion of the population represented by the most-common-values\nlist. If the MCV list covers a large fraction of the population then\nthis would be a big leg up in accuracy. Dunno if that applies to your\nparticular case or not, but it seems worth doing ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 09 Jan 2006 21:54:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE? " }, { "msg_contents": "On Tue, 10 Jan 2006, Andrea Arcangeli wrote:\n\n> I see. I can certainly fix it by stopping using LIKE. But IMHO this\n> remains a bug, since until the statistics about the numberof matching\n> rows isn't estimated well, you should not make assumptions on LIKE/NOT\n> LIKE. 
I think you can change the code in a way that I won't have to\n> touch anything, and this will lead to fewer surprises in the future IMHO.\n\nI doubt it, since I would expect that this would be as large a\npessimization for a larger fraction of people than it is an optimization\nfor you.\n", "msg_date": "Mon, 9 Jan 2006 18:54:57 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "On Mon, Jan 09, 2006 at 09:54:44PM -0500, Tom Lane wrote:\n> Extrapolating from the observation that the heuristics don't work well\n> on your data to the conclusion that they don't work for anybody is not\n> good logic. Replacing that code with a flat 50% is not going to happen\n> (or if it does, I'll be sure to send the mob of unhappy users waving\n> torches and pitchforks to your door not mine ;-)).\n\nI'm not convinced but of course I cannot exclude that some people may be\ndepending on this very heuristic. But I consider this being\nbug-compatible, I've an hard time to be convinced that such heuristic\nisn't going to bite other people like it did with me.\n\n> I did just think of something we could improve though. The pattern\n> selectivity code doesn't make any use of the statistics about \"most\n> common values\". For a constant pattern, we could actually apply the\n> pattern test with each common value and derive answers that are exact\n> for the portion of the population represented by the most-common-values\n> list. If the MCV list covers a large fraction of the population then\n> this would be a big leg up in accuracy. Dunno if that applies to your\n> particular case or not, but it seems worth doing ...\n\nFixing this with proper stats would be great indeed. What would be the\nmost common value for the kernel_version? You can see samples of the\nkernel_version here http://klive.cpushare.com/2.6.15/ . That's the\nstring that is being searched against both PREEMPT and SMP.\n\nBTW, I also run a LIKE '%% SMP %%' a NOT LIKE '%% SMP %%' but that runs\nfine, probably because as you said in the first email PREEMPT is long\nbut SMP is short.\n\nThanks!\n", "msg_date": "Tue, 10 Jan 2006 04:47:21 +0100", "msg_from": "Andrea Arcangeli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "Hi,\n\n> I did just think of something we could improve though. The pattern\n> selectivity code doesn't make any use of the statistics about \"most\n> common values\". For a constant pattern, we could actually apply the\n> pattern test with each common value and derive answers that are exact\n> for the portion of the population represented by the most-common-values\n> list. If the MCV list covers a large fraction of the population then\n> this would be a big leg up in accuracy. Dunno if that applies to your\n> particular case or not, but it seems worth doing ...\n\nThis reminds me what I did in a patch which is currently on hold for the \nnext release:\n\nhttp://momjian.postgresql.org/cgi-bin/pgpatches_hold\nhttp://candle.pha.pa.us/mhonarc/patches_hold/msg00026.html\n\nThe patch was addressing a similar issue when using ltree <@ and @> \noperator on an unbalanced tree.\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n", "msg_date": "Tue, 10 Jan 2006 10:08:45 +0100", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" 
}, { "msg_contents": "\nAndrea Arcangeli <[email protected]> writes:\n\n> Fixing this with proper stats would be great indeed. What would be the\n> most common value for the kernel_version? You can see samples of the\n> kernel_version here http://klive.cpushare.com/2.6.15/ . That's the\n> string that is being searched against both PREEMPT and SMP.\n\nTry something like this where attname is the column name and tablename is,\nwell, the tablename:\n\ndb=> select most_common_vals from pg_stats where tablename = 'region' and attname = 'province';\n most_common_vals \n------------------\n {ON,NB,QC,BC}\n\nNote that there's a second column most_common_freqs and to do this would\nreally require doing a weighted average based on the frequencies.\n\n-- \ngreg\n\n", "msg_date": "10 Jan 2006 10:11:18 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "On Tue, Jan 10, 2006 at 10:11:18AM -0500, Greg Stark wrote:\n> \n> Andrea Arcangeli <[email protected]> writes:\n> \n> > Fixing this with proper stats would be great indeed. What would be the\n> > most common value for the kernel_version? You can see samples of the\n> > kernel_version here http://klive.cpushare.com/2.6.15/ . That's the\n> > string that is being searched against both PREEMPT and SMP.\n> \n> Try something like this where attname is the column name and tablename is,\n> well, the tablename:\n> \n> db=> select most_common_vals from pg_stats where tablename = 'region' and attname = 'province';\n> most_common_vals \n> ------------------\n> {ON,NB,QC,BC}\n\nThanks for the info!\n\nklive=> select most_common_vals from pg_stats where tablename = 'klive' and attname = 'kernel_version';\n most_common_vals \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n {\"#1 Tue Sep 13 14:56:15 UTC 2005\",\"#1 Fri Aug 19 11:58:59 UTC 2005\",\"#7 SMP Fri Oct 7 15:56:41 CEST 2005\",\"#1 SMP Fri Aug 19 11:58:59 UTC 2005\",\"#2 Thu Sep 22 15:58:44 CEST 2005\",\"#1 Fri Sep 23 15:32:21 GMT 2005\",\"#1 Fri Oct 21 03:46:55 EDT 2005\",\"#1 Sun Sep 4 13:45:32 CEST 2005\",\"#5 PREEMPT Mon Nov 21 17:53:59 EET 2005\",\"#1 Wed Sep 28 19:15:10 EDT 2005\"}\n(1 row)\n\nklive=> select most_common_freqs from pg_stats where tablename = 'klive' and attname = 'kernel_version';\n most_common_freqs \n-------------------------------------------------------------------------------------------\n {0.0133333,0.0116667,0.011,0.009,0.00733333,0.00666667,0.00633333,0.006,0.006,0.00566667}\n(1 row)\n\nklive=> \n\nThere's only one preempt near the end, not sure if it would work?\n", "msg_date": "Tue, 10 Jan 2006 16:27:20 +0100", "msg_from": "Andrea Arcangeli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT LIKE much faster than LIKE?" 
}, { "msg_contents": "Andrea Arcangeli <[email protected]> writes:\n> There's only one preempt near the end, not sure if it would work?\n\nNot with that data, but maybe if you increased the statistics target for\nthe column to 100 or so, you'd catch enough values to get reasonable\nresults.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jan 2006 10:46:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE? " }, { "msg_contents": "Matteo Beccati <[email protected]> writes:\n>> I did just think of something we could improve though. The pattern\n>> selectivity code doesn't make any use of the statistics about \"most\n>> common values\". For a constant pattern, we could actually apply the\n>> pattern test with each common value and derive answers that are exact\n>> for the portion of the population represented by the most-common-values\n>> list.\n\n> This reminds me what I did in a patch which is currently on hold for the \n> next release:\n\nI've applied a patch to make patternsel() compute the exact result for\nthe MCV list, and use its former heuristics only for the portion of the\ncolumn population not included in the MCV list.\n\nAfter finishing that work it occurred to me that we could go a step\nfurther: if the MCV list accounts for a substantial fraction of the\npopulation, we could assume that the MCV list is representative of the\nwhole population, and extrapolate the pattern's selectivity over the MCV\nlist to the whole population instead of using the existing heuristics at\nall. In a situation like Andreas' example this would win big, although\nyou can certainly imagine cases where it would lose too.\n\nAny thoughts about this? What would be a reasonable threshold for\n\"substantial fraction\"? It would probably make sense to have different\nthresholds depending on whether the pattern is left-anchored or not,\nsince the range heuristic only works for left-anchored patterns.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jan 2006 12:49:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE? " }, { "msg_contents": "On Tue, 2006-01-10 at 12:49 -0500, Tom Lane wrote:\n> Matteo Beccati <[email protected]> writes:\n> >> I did just think of something we could improve though. The pattern\n> >> selectivity code doesn't make any use of the statistics about \"most\n> >> common values\". For a constant pattern, we could actually apply the\n> >> pattern test with each common value and derive answers that are exact\n> >> for the portion of the population represented by the most-common-values\n> >> list.\n> \n> > This reminds me what I did in a patch which is currently on hold for the \n> > next release:\n> \n> I've applied a patch to make patternsel() compute the exact result for\n> the MCV list, and use its former heuristics only for the portion of the\n> column population not included in the MCV list.\n\nI think its OK to use the MCV, but I have a problem with the current\nheuristics: they only work for randomly generated strings, since the\nselectivity goes down geometrically with length. That doesn't match most\nlanguages where one and two syllable words are extremely common and\nlonger ones much less so. A syllable can be 1-2 chars long, so any\nsearch string of length 1-4 is probably equally likely, rather than\nreducing in selectivity based upon length. 
So I think part of the\nproblem is the geometrically reducing selectivity itself.\n\n> After finishing that work it occurred to me that we could go a step\n> further: if the MCV list accounts for a substantial fraction of the\n> population, we could assume that the MCV list is representative of the\n> whole population, and extrapolate the pattern's selectivity over the MCV\n> list to the whole population instead of using the existing heuristics at\n> all. In a situation like Andreas' example this would win big, although\n> you can certainly imagine cases where it would lose too.\n\nI don't think that can be inferred with any confidence, unless a large\nproportion of the MCV list were itself selected. Otherwise it might\nmatch only a single MCV that just happens to have a high proportion,\nthen we assume all others have the same proportion. The calculation is\nrelated to Ndistinct, in some ways.\n\n> Any thoughts about this? What would be a reasonable threshold for\n> \"substantial fraction\"? It would probably make sense to have different\n> thresholds depending on whether the pattern is left-anchored or not,\n> since the range heuristic only works for left-anchored patterns.\n\nI don't think you can do this for a low enough substantial fraction to\nmake this interesting.\n\nI would favour the idea of dynamic sampling using a block sampling\napproach; that was a natural extension of improving ANALYZE also. We can\nuse that approach for things such as LIKE, but also use it for\nmulti-column single-table and join selectivity.\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Tue, 10 Jan 2006 22:06:36 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> I think its OK to use the MCV, but I have a problem with the current\n> heuristics: they only work for randomly generated strings, since the\n> selectivity goes down geometrically with length.\n\nWe could certainly use a less aggressive curve for that. You got a\nspecific proposal?\n\n>> After finishing that work it occurred to me that we could go a step\n>> further: if the MCV list accounts for a substantial fraction of the\n>> population, we could assume that the MCV list is representative of the\n>> whole population, and extrapolate the pattern's selectivity over the MCV\n>> list to the whole population instead of using the existing heuristics at\n>> all. In a situation like Andreas' example this would win big, although\n>> you can certainly imagine cases where it would lose too.\n\n> I don't think that can be inferred with any confidence, unless a large\n> proportion of the MCV list were itself selected. Otherwise it might\n> match only a single MCV that just happens to have a high proportion,\n> then we assume all others have the same proportion.\n\nWell, of course it can't be inferred \"with confidence\". Sometimes\nyou'll win and sometimes you'll lose. The question is, is this a\nbetter heuristic than what we use otherwise? The current estimate\nfor non-anchored patterns is really pretty crummy, and even with a\nless aggressive length-vs-selectivity curve it's not going to be great.\n\nAnother possibility is to merge the two estimates somehow.\n\n> I would favour the idea of dynamic sampling using a block sampling\n> approach; that was a natural extension of improving ANALYZE also.\n\nOne thing at a time please. 
Obtaining better statistics is one issue,\nbut the one at hand here is what to do given particular statistics.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jan 2006 17:21:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE? " }, { "msg_contents": "On Tue, 2006-01-10 at 17:21 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > I think its OK to use the MCV, but I have a problem with the current\n> > heuristics: they only work for randomly generated strings, since the\n> > selectivity goes down geometrically with length.\n> \n> We could certainly use a less aggressive curve for that. You got a\n> specific proposal?\n\nI read some research not too long ago that showed a frequency curve of\nwords by syllable length. I'll dig that out tomorrow.\n\n> > I would favour the idea of dynamic sampling using a block sampling\n> > approach; that was a natural extension of improving ANALYZE also.\n> \n> One thing at a time please. Obtaining better statistics is one issue,\n> but the one at hand here is what to do given particular statistics.\n\nI meant use the same sampling approach as I was proposing for ANALYZE,\nbut do this at plan time for the query. That way we can apply the\nfunction directly to the sampled rows and estimate selectivity. \n\nI specifically didn't mention that in the Ndistinct discussion because I\ndidn't want to confuse the subject further, but the underlying block\nsampling method would be identical, so the code is already almost\nthere...we just need to eval the RestrictInfo against the sampled\ntuples.\n\nBest Regards, Simon Riggs\n\n\n\n", "msg_date": "Tue, 10 Jan 2006 23:36:45 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> I meant use the same sampling approach as I was proposing for ANALYZE,\n> but do this at plan time for the query. That way we can apply the\n> function directly to the sampled rows and estimate selectivity. \n\nI think this is so unlikely to be a win as to not even be worth spending\nany time discussing. The extra planning time across all queries will\nvastly outweigh the occasional improvement in plan choice for some\nqueries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jan 2006 22:40:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE? " }, { "msg_contents": "On Tue, Jan 10, 2006 at 10:46:53AM -0500, Tom Lane wrote:\n> Not with that data, but maybe if you increased the statistics target for\n> the column to 100 or so, you'd catch enough values to get reasonable\n> results.\n\nSorry, I'm not expert with postgresql, could you tell me how to increase\nthe statistic target?\n\nIn another email you said you applied a patch to CVS, please let me know\nif you've anything to test for me, and I'll gladly test it immediately\n(I've a sandbox so it's ok even if it corrupts the db ;).\n\nThanks!\n", "msg_date": "Wed, 11 Jan 2006 08:59:56 +0100", "msg_from": "Andrea Arcangeli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "On Tue, 2006-01-10 at 22:40 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > I meant use the same sampling approach as I was proposing for ANALYZE,\n> > but do this at plan time for the query. 
That way we can apply the\n> > function directly to the sampled rows and estimate selectivity. \n> \n> I think this is so unlikely to be a win as to not even be worth spending\n> any time discussing. The extra planning time across all queries will\n> vastly outweigh the occasional improvement in plan choice for some\n> queries.\n\nExtra planning time would be bad, so clearly we wouldn't do this when we\nalready have relevant ANALYZE statistics. \n\nI would suggest we do this only when all of these are true\n- when accessing more than one table, so the selectivity could effect a\njoin result\n- when we have either no ANALYZE statistics, or ANALYZE statistics are\nnot relevant to estimating selectivity, e.g. LIKE \n- when access against the single table in question cannot find an index\nto use from other RestrictInfo predicates\n\nI imagined that this would also be controlled by a GUC, dynamic_sampling\nwhich would be set to zero by default, and give a measure of sample size\nto use. (Or just a bool enable_sampling = off (default)).\n\nThis is mentioned now because the plan under consideration in this\nthread would be improved by this action. It also isn't a huge amount of\ncode to get it to work.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 11 Jan 2006 09:07:45 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "On Wed, Jan 11, 2006 at 09:07:45AM +0000, Simon Riggs wrote:\n> I would suggest we do this only when all of these are true\n> - when accessing more than one table, so the selectivity could effect a\n> join result\n\nFWIW my problem only happens if I join: on the main table where the\nkernel_version string is stored (without joins), everything is always\nblazing fast. So this requirement certainly sounds fine to me.\n", "msg_date": "Wed, 11 Jan 2006 10:18:41 +0100", "msg_from": "Andrea Arcangeli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "On Tue, Jan 10, 2006 at 02:44:47AM +0100, Andrea Arcangeli wrote:\n> \"cooperative\" runs \"WHERE kernel_version NOT LIKE '%% PREEMPT %%'\", while\n> \"preempt\" runs \"WHERE kernel_version LIKE '%% PREEMPT %%'. The only difference\n\nOne thing you could do is change the like to:\n\nWHERE position(' PREEMPT ' in kernel_version) != 0\n\nAnd then create a functional index on that:\n\nCREATE INDEX indexname ON tablename ( position(' PREEMPT ' in kernel_version) );\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 11 Jan 2006 12:40:32 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "On Wed, Jan 11, 2006 at 12:40:32PM -0600, Jim C. Nasby wrote:\n> On Tue, Jan 10, 2006 at 02:44:47AM +0100, Andrea Arcangeli wrote:\n> > \"cooperative\" runs \"WHERE kernel_version NOT LIKE '%% PREEMPT %%'\", while\n> > \"preempt\" runs \"WHERE kernel_version LIKE '%% PREEMPT %%'. The only difference\n> \n> One thing you could do is change the like to:\n> \n> WHERE position(' PREEMPT ' in kernel_version) != 0\n\nThat alone fixed it, with this I don't even need the index (yet). 
Thanks\na lot.\n\n> And then create a functional index on that:\n> \n> CREATE INDEX indexname ON tablename ( position(' PREEMPT ' in kernel_version) );\n\nThe index only helps the above query with = 0 and not the one with != 0,\nbut it seems not needed in practice.\n", "msg_date": "Wed, 11 Jan 2006 21:39:47 +0100", "msg_from": "Andrea Arcangeli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "On Wed, Jan 11, 2006 at 09:39:47PM +0100, Andrea Arcangeli wrote:\n> On Wed, Jan 11, 2006 at 12:40:32PM -0600, Jim C. Nasby wrote:\n> > On Tue, Jan 10, 2006 at 02:44:47AM +0100, Andrea Arcangeli wrote:\n> > > \"cooperative\" runs \"WHERE kernel_version NOT LIKE '%% PREEMPT %%'\", while\n> > > \"preempt\" runs \"WHERE kernel_version LIKE '%% PREEMPT %%'. The only difference\n> > \n> > One thing you could do is change the like to:\n> > \n> > WHERE position(' PREEMPT ' in kernel_version) != 0\n> \n> That alone fixed it, with this I don't even need the index (yet). Thanks\n> a lot.\n\nThe fix is online already w/o index:\n\n\thttp://klive.cpushare.com/?branch=all&scheduler=preemptive\n\nOf course I'm still fully available to test any fix for the previous\nLIKE query if there's interest.\n", "msg_date": "Wed, 11 Jan 2006 21:46:39 +0100", "msg_from": "Andrea Arcangeli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "On Wed, Jan 11, 2006 at 09:39:47PM +0100, Andrea Arcangeli wrote:\n> > CREATE INDEX indexname ON tablename ( position(' PREEMPT ' in kernel_version) );\n> \n> The index only helps the above query with = 0 and not the one with != 0,\n> but it seems not needed in practice.\n\nHrm. If you need indexing then, you'll probably have to do 2 indexes\nwith a WHERE clause...\n\nCREATE INDEX ... WHERE position(...) = 0;\nCREATE INDEX ... WHERE position(...) != 0;\n\nI suspect this is because of a lack of stats for functional indexes.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 11 Jan 2006 15:02:21 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Wed, Jan 11, 2006 at 09:39:47PM +0100, Andrea Arcangeli wrote:\n>> The index only helps the above query with = 0 and not the one with != 0,\n>> but it seems not needed in practice.\n\n> I suspect this is because of a lack of stats for functional indexes.\n\nNo, it's because != isn't an indexable operator.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jan 2006 16:13:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE? " }, { "msg_contents": "On Tue, 2006-01-10 at 17:21 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > I think its OK to use the MCV, but I have a problem with the current\n> > heuristics: they only work for randomly generated strings, since the\n> > selectivity goes down geometrically with length.\n> \n> We could certainly use a less aggressive curve for that. You got a\n> specific proposal?\n\nnon-left anchored LIKE is most likely going to be used with\nunstructured, variable length data - else we might use SUBSTRING\ninstead. 
My proposal would be to assume that LIKE is acting on human\nlanguage text data.\n\nI considered this a while back, but wrote it off in favour of dynamic\nsampling - but it's worth discussing this to see whether we can improve\non things without that.\n\nHere's one of the links I reviewed previously:\nhttp://www.ling.lu.se/persons/Joost/Texts/studling.pdf\nSigurd et al [2004]\n\nThis shows word frequency distribution peaks at 3 letter/2 phoneme\nwords, then tails off exponentially after that.\n\nClearly when search string > 3 then the selectivity must tail off\nexponentially also, since we couldn't find words shorter than the search\nstring itself. The search string might be a phrase, but it seems\nreasonable to assume that phrases also drop off in frequency according\nto length. It is difficult to decide what to do at len=2 or len=3, and I\nwould be open to various thoughts, but would default to keeping\nlike_selectivity as it is now.\n\nSigurd et al show that word length tails off at 0.7^Len beyond Len=3, so\nselectivity FIXED_CHAR_SEL should not be more than 0.7, but I see no\nevidence for it being as low as 0.2 (from the published results). For\nsimplicity, where Len > 3, I would make the tail off occur with factor\n0.5, rather than 0.2.\n\nWe could see a few more changes from those results, but curbing the\naggressive tail off would be a simple and easy act.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Thu, 12 Jan 2006 00:48:36 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOT LIKE much faster than LIKE?" } ]
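Since the thread ends with the position() rewrite, it is worth sketching the other fix Tom Lane suggested near the top: replacing the pattern match with a boolean column, which gives the planner ordinary boolean statistics to work with. The column name preempt and the single-% wildcards are illustrative assumptions, and keeping the flag current on INSERT/UPDATE is left to the application or a trigger:

    ALTER TABLE klive ADD COLUMN preempt boolean;
    UPDATE klive SET preempt = (kernel_version LIKE '% PREEMPT %');
    CREATE INDEX klive_preempt_index ON klive (preempt);
    ANALYZE klive;

    SELECT class, vendor, device, revision, count(*) AS nr
      FROM pci NATURAL INNER JOIN klive
     WHERE preempt                      -- or: WHERE NOT preempt
     GROUP BY class, vendor, device, revision;

With the flag analyzed, the planner sees the true fraction of PREEMPT rows and can choose between the hash join and the nested loop on its own, for both the positive and the negated form of the test.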
[ { "msg_contents": "Hey folks,\n\nI'm working with a query to get more info out with a join. The base query works great speed wise because of index usage. When the join is tossed in, the index is no longer used, so the query performance tanks.\n\nCan anyone advise on how to get the index usage back?\n\nweather=# select version();\n version \n-----------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.1.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.0.1 (4.0.1-5mdk for Mandriva Linux release 2006.0)\n(1 row)\n\nThe base query is:\n\nweather=# EXPLAIN ANALYZE\nweather-# SELECT min_reading, max_reading, avg_reading, -- doy,\nweather-# unmunge_time( time_group ) AS time\nweather-# FROM minute.\"windspeed\"\nweather-# --JOIN readings_doy ON EXTRACT( doy FROM unmunge_time( time_group ) ) = doy\nweather-# WHERE unmunge_time( time_group ) > ( now() - '24 hour'::interval )\nweather-# ORDER BY time_group;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=10995.29..11155.58 rows=64117 width=28) (actual time=4.509..4.574 rows=285 loops=1)\n Sort Key: time_group\n -> Bitmap Heap Scan on windspeed (cost=402.42..5876.05 rows=64117 width=28) (actual time=0.784..3.639 rows=285 loops=1)\n Recheck Cond: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n -> Bitmap Index Scan on minute_windspeed_index (cost=0.00..402.42 rows=64117 width=0) (actual time=0.675..0.675 rows=285 loops=1)\n Index Cond: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n Total runtime: 4.880 ms\n(7 rows)\n\nWhen I add in the join, the query tosses out the nice quick index in favor of sequence scans:\n\nweather=# EXPLAIN ANALYZE\nweather-# SELECT min_reading, max_reading, avg_reading, -- doy,\nweather-# unmunge_time( time_group ) AS time\nweather-# FROM minute.\"windspeed\"\nweather-# JOIN readings_doy ON EXTRACT( doy FROM unmunge_time( time_group ) ) = doy\nweather-# WHERE unmunge_time( time_group ) > ( now() - '24 hour'::interval )\nweather-# ORDER BY time_group;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=98239590.88..99052623.66 rows=325213113 width=28) (actual time=60136.484..61079.845 rows=1030656 loops=1)\n Sort Key: windspeed.time_group\n -> Merge Join (cost=265774.21..8396903.54 rows=325213113 width=28) (actual time=34318.334..47113.277 rows=1030656 loops=1)\n Merge Cond: (\"outer\".\"?column5?\" = \"inner\".\"?column2?\")\n -> Sort (cost=12997.68..13157.98 rows=64120 width=28) (actual time=2286.155..2286.450 rows=284 loops=1)\n Sort Key: date_part('doy'::text, unmunge_time(windspeed.time_group))\n -> Seq Scan on windspeed (cost=0.00..7878.18 rows=64120 width=28) (actual time=2279.275..2285.271 rows=284 loops=1)\n Filter: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n -> Sort (cost=252776.54..255312.51 rows=1014389 width=8) (actual time=32001.370..33473.407 rows=1051395 loops=1)\n Sort Key: date_part('doy'::text, readings.\"when\")\n -> Seq Scan on readings (cost=0.00..142650.89 rows=1014389 width=8) (actual time=0.053..13759.015 rows=1014448 loops=1)\n Total runtime: 61720.935 ms\n(12 rows)\n\nweather=# \\d minute.windspeed\n Table \"minute.windspeed\"\n Column | Type | Modifiers \n-------------+------------------+-----------\n time_group | integer | not null\n min_reading 
| double precision | not null\n max_reading | double precision | not null\n avg_reading | double precision | not null\nIndexes:\n \"windspeed_pkey\" PRIMARY KEY, btree (time_group)\n \"minute_windspeed_index\" btree (unmunge_time(time_group))\n\nCREATE OR REPLACE FUNCTION unmunge_time( integer )\nRETURNS timestamp AS '\nDECLARE\n input ALIAS FOR $1;\nBEGIN\n RETURN (''epoch''::timestamptz + input * ''1sec''::interval)::timestamp;\nEND;\n' LANGUAGE plpgsql IMMUTABLE STRICT;\n\nweather=# \\d readings\n Table \"public.readings\"\n Column | Type | Modifiers \n----------------------+-----------------------------+-------------------------------------------------------------\n when | timestamp without time zone | not null default (timeofday())::timestamp without time zone\n hour_group | integer | \n minute_group | integer | \n day_group | integer | \n week_group | integer | \n month_group | integer | \n year_group | integer | \n year_group_updated | boolean | default false\n month_group_updated | boolean | default false\n week_group_updated | boolean | default false\n day_group_updated | boolean | default false\n hour_group_updated | boolean | default false\n minute_group_updated | boolean | default false\nIndexes:\n \"readings_pkey\" PRIMARY KEY, btree (\"when\")\n \"day_group_updated_index\" btree (day_group_updated, day_group)\n \"hour_group_updated_index\" btree (hour_group_updated, hour_group)\n \"month_group_updated_index\" btree (month_group_updated, month_group)\n \"readings_doy_index\" btree (date_part('doy'::text, \"when\"))\n \"week_group_updated_index\" btree (week_group_updated, week_group)\n \"year_group_updated_index\" btree (year_group_updated, year_group)\nTriggers:\n munge_time BEFORE INSERT OR UPDATE ON readings FOR EACH ROW EXECUTE PROCEDURE munge_time()\n\nreadings_doy is a view that adds date_part('doy'::text, readings.\"when\") AS doy to the readings table.\n\nThanks,\nRob\n\n-- \n 21:15:51 up 2 days, 13:42, 9 users, load average: 3.14, 2.63, 2.62\nLinux 2.6.12-12-2 #4 SMP Tue Jan 3 19:56:19 MST 2006", "msg_date": "Mon, 9 Jan 2006 21:23:38 -0700", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Index isn't used during a join." }, { "msg_contents": "On Mon, Jan 09, 2006 at 09:23:38PM -0700, Robert Creager wrote:\n> I'm working with a query to get more info out with a join. The base\n> query works great speed wise because of index usage. When the join is\n> tossed in, the index is no longer used, so the query performance tanks.\n\nThe first query you posted returns 285 rows and the second returns\nover one million; index usage aside, that difference surely accounts\nfor a performance penalty. And as is often pointed out, index scans\naren't always faster than sequential scans: the more of a table a\nquery has to fetch, the more likely a sequential scan will be faster.\n\nHave the tables been vacuumed and analyzed? 
The planner's estimates\nfor windspeed are pretty far off, which could be affecting the query\nplan:\n\n> -> Sort (cost=12997.68..13157.98 rows=64120 width=28) (actual time=2286.155..2286.450 rows=284 loops=1)\n> Sort Key: date_part('doy'::text, unmunge_time(windspeed.time_group))\n> -> Seq Scan on windspeed (cost=0.00..7878.18 rows=64120 width=28) (actual time=2279.275..2285.271 rows=284 loops=1)\n> Filter: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n\nThat's a small amount of the total query time, however, so although\nan index scan might help it probably won't provide the big gain\nyou're looking for.\n\nHave you done any tests with enable_seqscan disabled? That'll show\nwhether an index or bitmap scan would be faster. And have you\nverified that the join condition is correct? Should the query be\nreturning over a million rows?\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 9 Jan 2006 22:58:18 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index isn't used during a join." }, { "msg_contents": "When grilled further on (Mon, 9 Jan 2006 22:58:18 -0700),\nMichael Fuhr <[email protected]> confessed:\n\n> On Mon, Jan 09, 2006 at 09:23:38PM -0700, Robert Creager wrote:\n> > I'm working with a query to get more info out with a join. The base\n> > query works great speed wise because of index usage. When the join is\n> > tossed in, the index is no longer used, so the query performance tanks.\n> \n> The first query you posted returns 285 rows and the second returns\n> over one million; index usage aside, that difference surely accounts\n> for a performance penalty. And as is often pointed out, index scans\n> aren't always faster than sequential scans: the more of a table a\n> query has to fetch, the more likely a sequential scan will be faster.\n\nThanks for pointing out the obvious that I missed. Too much data in the second query. It's supposed to match (row wise) what was returned from the first query.\n\nJust ignore me for now...\n\nThanks,\nRob\n\n-- \n 08:15:24 up 3 days, 42 min, 9 users, load average: 2.07, 2.20, 2.25\nLinux 2.6.12-12-2 #4 SMP Tue Jan 3 19:56:19 MST 2006", "msg_date": "Tue, 10 Jan 2006 08:17:05 -0700", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index isn't used during a join." }, { "msg_contents": "Ok, I'm back, and in a little better shape.\n\nThe query is now correct, but still is slow because of lack of index usage. 
I don't know how to structure the query correctly to use the index.\n\nTaken individually:\n\nweather=# explain analyze select * from doy_agg where doy = extract( doy from now() );\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=13750.67..13750.71 rows=2 width=20) (actual time=123.134..123.135 rows=1 loops=1)\n -> Bitmap Heap Scan on readings (cost=25.87..13720.96 rows=3962 width=20) (actual time=6.384..116.559 rows=4175 loops=1)\n Recheck Cond: (date_part('doy'::text, \"when\") = date_part('doy'::text, now()))\n -> Bitmap Index Scan on readings_doy_index (cost=0.00..25.87 rows=3962 width=0) (actual time=5.282..5.282 rows=4215 loops=1)\n Index Cond: (date_part('doy'::text, \"when\") = date_part('doy'::text, now()))\n Total runtime: 123.366 ms\n\nproduces the data:\n\nweather=# select * from doy_agg where doy = extract( doy from now() );\n doy | avg_windspeed | max_windspeed \n-----+------------------+---------------\n 10 | 8.53403056583666 | 59\n\nand:\n\nweather=# EXPLAIN ANALYZE\nweather-# SELECT *,\nweather-# unmunge_time( time_group ) AS time\nweather-# FROM minute.\"windspeed\"\nweather-# WHERE unmunge_time( time_group ) > ( now() - '24 hour'::interval )\nweather-# ORDER BY time_group;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=595.33..595.77 rows=176 width=28) (actual time=4.762..4.828 rows=283 loops=1)\n Sort Key: time_group\n -> Bitmap Heap Scan on windspeed (cost=2.62..588.76 rows=176 width=28) (actual time=0.901..3.834 rows=283 loops=1)\n Recheck Cond: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n -> Bitmap Index Scan on minute_windspeed_unmunge_index (cost=0.00..2.62 rows=176 width=0) (actual time=0.745..0.745 rows=284 loops=1)\n Index Cond: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n Total runtime: 5.108 ms\n\nproduces:\n\n time_group | min_reading | max_reading | avg_reading | time \n------------+-------------------+-------------+-------------------+---------------------\n 1136869500 | 0.8 | 6 | 2.62193548387097 | 2006-01-09 22:05:00\n 1136869800 | 0 | 3 | 0.406021505376343 | 2006-01-09 22:10:00\n 1136870100 | 0 | 5 | 1.68 | 2006-01-09 22:15:00\n... 
\n\nBut I want the composite of the two queries, and I'm stuck on:\n\nweather=# EXPLAIN ANALYZE\nweather-# SELECT *,\nweather-# unmunge_time( time_group ) AS time\nweather-# FROM minute.\"windspeed\"\nweather-# JOIN doy_agg ON( EXTRACT( doy FROM unmunge_time( time_group ) ) = doy )\nweather-# WHERE unmunge_time( time_group ) > ( now() - '24 hour'::interval )\nweather-# ORDER BY time_group;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=153627.67..153628.48 rows=322 width=48) (actual time=10637.681..10637.748 rows=286 loops=1)\n Sort Key: windspeed.time_group\n -> Merge Join (cost=153604.82..153614.26 rows=322 width=48) (actual time=10633.375..10636.728 rows=286 loops=1)\n Merge Cond: (\"outer\".\"?column5?\" = \"inner\".doy)\n -> Sort (cost=594.89..595.33 rows=176 width=28) (actual time=5.539..5.612 rows=286 loops=1)\n Sort Key: date_part('doy'::text, unmunge_time(windspeed.time_group))\n -> Bitmap Heap Scan on windspeed (cost=2.62..588.32 rows=176 width=28) (actual time=0.918..4.637 rows=286 loops=1)\n Recheck Cond: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n -> Bitmap Index Scan on minute_windspeed_unmunge_index (cost=0.00..2.62 rows=176 width=0) (actual time=0.739..0.739 rows=287 loops=1)\n Index Cond: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n -> Sort (cost=153009.93..153010.84 rows=366 width=20) (actual time=10627.699..10627.788 rows=295 loops=1)\n Sort Key: doy_agg.doy\n -> HashAggregate (cost=152984.28..152990.69 rows=366 width=20) (actual time=10625.649..10626.601 rows=366 loops=1)\n -> Seq Scan on readings (cost=0.00..145364.93 rows=1015914 width=20) (actual time=0.079..8901.123 rows=1015917 loops=1)\n Total runtime: 10638.298 ms\n\nWhere:\n\nweather=# \\d doy_agg\n View \"public.doy_agg\"\n Column | Type | Modifiers \n---------------+------------------+-----------\n doy | double precision | \n avg_windspeed | double precision | \n max_windspeed | integer | \nView definition:\n SELECT doy_readings.doy, avg(doy_readings.windspeedaverage1) AS avg_windspeed, max(doy_readings.windspeedmax1) AS max_windspeed\n FROM ONLY doy_readings\n GROUP BY doy_readings.doy;\n\nwhich I don't want because of the full scan on readings.\n\nI can easily do the two queries seperately in the script utilizing this data, but want to do it in the db itself. I figure I'm just not seeing how to combine the two queries effectively.\n\nThoughts?\n\nThanks,\nRob\n\n-- \n 22:08:50 up 3 days, 14:35, 9 users, load average: 2.71, 2.48, 2.51\nLinux 2.6.12-12-2 #4 SMP Tue Jan 3 19:56:19 MST 2006", "msg_date": "Tue, 10 Jan 2006 22:10:55 -0700", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index isn't used during a join." }, { "msg_contents": "On Tue, Jan 10, 2006 at 10:10:55PM -0700, Robert Creager wrote:\n> The query is now correct, but still is slow because of lack of\n> index usage. I don't know how to structure the query correctly to\n> use the index.\n\nHave you tried adding restrictions on doy in the WHERE clause?\nSomething like this, I think:\n\nWHERE ...\n AND doy >= EXTRACT(doy FROM now() - '24 hour'::interval)\n AND doy <= EXTRACT(doy FROM now())\n\nSomething else occurred to me: do you (or will you) have more than\none year of data? 
If so then matching on doy could be problematic\nunless you also check for the year, or unless you want to match\nmore than one year.\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 11 Jan 2006 00:56:55 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index isn't used during a join." }, { "msg_contents": "On Wed, Jan 11, 2006 at 12:56:55AM -0700, Michael Fuhr wrote:\n> WHERE ...\n> AND doy >= EXTRACT(doy FROM now() - '24 hour'::interval)\n> AND doy <= EXTRACT(doy FROM now())\n\nTo work on 1 Jan this should be more like\n\nWHERE ...\n AND (doy = EXTRACT(doy FROM now() - '24 hour'::interval) OR\n doy = EXTRACT(doy FROM now()))\n\nIn any case the point is to add conditions to the WHERE clause that\nwill use an index on the table for which you're currently getting\na sequential scan.\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 11 Jan 2006 02:00:08 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index isn't used during a join." }, { "msg_contents": "When grilled further on (Wed, 11 Jan 2006 00:56:55 -0700),\nMichael Fuhr <[email protected]> confessed:\n\n> On Tue, Jan 10, 2006 at 10:10:55PM -0700, Robert Creager wrote:\n> > The query is now correct, but still is slow because of lack of\n> > index usage. I don't know how to structure the query correctly to\n> > use the index.\n> \n> Have you tried adding restrictions on doy in the WHERE clause?\n> Something like this, I think:\n\nI cannot. That's what I thought I would get from the join. The query shown will always have two days involved, and only grows from there. The data is graphed at http://www.logicalchaos.org/weather/index.html, and I'm looking at adding historical data to the graphs.\n\nOpps, never mind. You hit the nail on the head:\n\nweather-# SELECT *, unmunge_time( time_group ) AS time,\nweather-# EXTRACT( doy FROM unmunge_time( time_group ) )\nweather-# FROM minute.\"windspeed\"\nweather-# JOIN doy_agg ON( EXTRACT( doy FROM unmunge_time( time_group ) ) = doy )\nweather-# WHERE unmunge_time( time_group ) > ( now() - '24 hour'::interval ) \nweather-# AND doy BETWEEN EXTRACT( doy FROM now() - '24 hour'::interval) \nweather-# AND EXTRACT( doy FROM now() )\nweather-# ORDER BY time_group;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=21914.09..21914.10 rows=1 width=48) (actual time=76.595..76.662 rows=286 loops=1)\n Sort Key: windspeed.time_group\n -> Hash Join (cost=21648.19..21914.08 rows=1 width=48) (actual time=64.656..75.562 rows=286 loops=1)\n Hash Cond: (date_part('doy'::text, unmunge_time(\"outer\".time_group)) = \"inner\".doy)\n -> Bitmap Heap Scan on windspeed (cost=2.27..267.40 rows=74 width=28) (actual time=0.585..1.111 rows=286 loops=1)\n Recheck Cond: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n -> Bitmap Index Scan on minute_windspeed_unmunge_index (cost=0.00..2.27 rows=74 width=0) (actual time=0.566..0.566 rows=287 loops=1)\n Index Cond: (unmunge_time(time_group) > (now() - '24:00:00'::interval))\n -> Hash (cost=21645.92..21645.92 rows=3 width=20) (actual time=63.849..63.849 rows=2 loops=1)\n -> HashAggregate (cost=21645.84..21645.89 rows=3 width=20) (actual time=63.832..63.834 rows=2 loops=1)\n -> Bitmap Heap Scan on readings (cost=59.21..21596.85 rows=6532 width=20) (actual time=15.174..53.249 rows=7613 loops=1)\n Recheck 
Cond: ((date_part('doy'::text, \"when\") >= date_part('doy'::text, (now() - '24:00:00'::interval))) AND (date_part('doy'::text, \"when\") <= date_part('doy'::text, now())))\n -> Bitmap Index Scan on readings_doy_index (cost=0.00..59.21 rows=6532 width=0) (actual time=12.509..12.509 rows=10530 loops=1)\n Index Cond: ((date_part('doy'::text, \"when\") >= date_part('doy'::text, (now() - '24:00:00'::interval))) AND (date_part('doy'::text, \"when\") <= date_part('doy'::text, now())))\n Total runtime: 77.177 ms\n\nWhat I had thought is that PG would (could?) be smart enough to realize that one query was restricted, and apply that restriction to the other based on the join. I know it works in other cases (using indexes on both tables using the join)...\n\n> \n> Something else occurred to me: do you (or will you) have more than\n> one year of data? If so then matching on doy could be problematic\n> unless you also check for the year, or unless you want to match\n> more than one year.\n\nYes and yes. I'm doing both aggregate by day of the year for all data, and aggregate by day of year within each year. The examples are:\n\nweather=# select * from doy_agg where doy = extract( doy from now() );\n doy | avg_windspeed | max_windspeed \n-----+------------------+---------------\n 11 | 6.14058239764748 | 69\n(1 row)\n\nweather=# select * from doy_day_agg where extract( doy from day ) = extract( doy from now() );\n day | avg_windspeed | max_windspeed \n---------------------+------------------+---------------\n 2004-01-11 00:00:00 | 5.03991313397539 | 17\n 2006-01-11 00:00:00 | 18.532050716667 | 69\n 2005-01-11 00:00:00 | 3.6106763448041 | 13\n\nThanks for your help Michael.\n\nCheers,\nRob\n\n-- \n 07:07:30 up 3 days, 23:34, 9 users, load average: 2.29, 2.44, 2.43\nLinux 2.6.12-12-2 #4 SMP Tue Jan 3 19:56:19 MST 2006", "msg_date": "Wed, 11 Jan 2006 07:26:59 -0700", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index isn't used during a join." }, { "msg_contents": "When grilled further on (Wed, 11 Jan 2006 07:26:59 -0700),\nRobert Creager <[email protected]> confessed:\n\n> \n> weather-# SELECT *, unmunge_time( time_group ) AS time,\n> weather-# EXTRACT( doy FROM unmunge_time( time_group ) )\n> weather-# FROM minute.\"windspeed\"\n> weather-# JOIN doy_agg ON( EXTRACT( doy FROM unmunge_time( time_group ) ) = doy )\n> weather-# WHERE unmunge_time( time_group ) > ( now() - '24 hour'::interval ) \n> weather-# AND doy BETWEEN EXTRACT( doy FROM now() - '24 hour'::interval) \n> weather-# AND EXTRACT( doy FROM now() )\n> weather-# ORDER BY time_group;\n\nThe more I think about it, the more I believe PG is missing an opportunity. The query is adequately constrained without the BETWEEN clause. Why doesn't PG see that? I realize I'm a hack and by db organization shows that...\n\nThe query is wrong as stated, as it won't work when the interval crosses a year boundary, but it's a stop gap for now.\n\nCheers,\nRob\n\n-- \n 07:58:30 up 4 days, 25 min, 9 users, load average: 2.13, 2.15, 2.22\nLinux 2.6.12-12-2 #4 SMP Tue Jan 3 19:56:19 MST 2006", "msg_date": "Wed, 11 Jan 2006 08:02:37 -0700", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index isn't used during a join." }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> What I had thought is that PG would (could?) be smart enough to realize tha=\n> t one query was restricted, and apply that restriction to the other based o=\n> n the join. 
I know it works in other cases (using indexes on both tables u=\n> sing the join)...\n\nThe planner understands about transitivity of equality, ie given a = b\nand b = c it can infer a = c. It doesn't do any such thing for\ninequalities though, nor does it deduce f(a) = f(b) for arbitrary\nfunctions f. The addition Michael suggested requires much more\nunderstanding of the properties of the functions in your query than\nI think would be reasonable to put into the planner.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jan 2006 10:33:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index isn't used during a join. " }, { "msg_contents": "On Wed, Jan 11, 2006 at 08:02:37AM -0700, Robert Creager wrote:\n> The query is wrong as stated, as it won't work when the interval\n> crosses a year boundary, but it's a stop gap for now.\n\nYeah, I realized that shortly after I posted the original and posted\na correction.\n\nhttp://archives.postgresql.org/pgsql-performance/2006-01/msg00104.php\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 11 Jan 2006 10:06:45 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index isn't used during a join." }, { "msg_contents": "When grilled further on (Wed, 11 Jan 2006 10:33:03 -0500),\nTom Lane <[email protected]> confessed:\n\n> The planner understands about transitivity of equality, ie given a = b\n> and b = c it can infer a = c. It doesn't do any such thing for\n> inequalities though, nor does it deduce f(a) = f(b) for arbitrary\n> functions f. The addition Michael suggested requires much more\n> understanding of the properties of the functions in your query than\n> I think would be reasonable to put into the planner.\n> \n\nOK. I think reached a point that I need to re-organize how the data is stored,\nmaybe ridding myself of the schema and switching entirely to views. At that\npoint, I likely could rid myself of the function (unmunge_time) I'm using, and\nwork with times and doy fields.\n\nThanks,\nRob\n\n-- \n 21:17:00 up 4 days, 13:43, 9 users, load average: 2.02, 2.18, 2.23\nLinux 2.6.12-12-2 #4 SMP Tue Jan 3 19:56:19 MST 2006", "msg_date": "Wed, 11 Jan 2006 21:20:46 -0700", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index isn't used during a join." } ]
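Pulling this thread together, the year-boundary-safe form of the combined query looks roughly like this. It is only a sketch built from the doy_agg view, the unmunge_time() function and the correction Michael links to above; the posters never ran it in exactly this form:

SELECT w.*, unmunge_time(w.time_group) AS time
FROM minute."windspeed" w
JOIN doy_agg a ON a.doy = EXTRACT(doy FROM unmunge_time(w.time_group))
WHERE unmunge_time(w.time_group) > now() - '24 hour'::interval
  AND (a.doy = EXTRACT(doy FROM now() - '24 hour'::interval)
       OR a.doy = EXTRACT(doy FROM now()))  -- redundant bound the planner cannot infer by itself
ORDER BY w.time_group;

The explicit doy predicate is what lets the bitmap scan on readings_doy_index be used underneath the doy_agg view, matching Tom's point that the planner propagates plain equalities but not arbitrary function results across a join.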
[ { "msg_contents": "Hello,\n\nI have to develop a companies search engine (looks like the Yellow\npages). We're using PostgreSQL at the company, and the initial DB is\n2GB large, as it\nhas companies from the entire world, with a fair amount of information.\n\nWhat reading do you suggest so that we can develop the search engine\ncore, in order that the result pages show up instantly, no matter the\nheavy load and\nthe DB size. The DB is 2GB but should grow to up to 10GB in 2 years,\nand there should be 250,000 unique visitors per month by the end of\nthe year.\n\nAre there special techniques? Maybe there's a way to sort of cache\nsearch results? We're using PHP5 + phpAccelerator.\nThanks,\n\n--\nCharles A. Landemaine.\n", "msg_date": "Tue, 10 Jan 2006 12:41:27 -0200", "msg_from": "\"Charles A. Landemaine\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to handle a large DB and simultaneous accesses?" }, { "msg_contents": "On Tue, 10 Jan 2006, Charles A. Landemaine wrote:\n\n> Hello,\n>\n> I have to develop a companies search engine (looks like the Yellow\n> pages). We're using PostgreSQL at the company, and the initial DB is\n> 2GB large, as it\n> has companies from the entire world, with a fair amount of information.\n>\n> What reading do you suggest so that we can develop the search engine\n> core, in order that the result pages show up instantly, no matter the\n> heavy load and\n> the DB size. The DB is 2GB but should grow to up to 10GB in 2 years,\n> and there should be 250,000 unique visitors per month by the end of\n> the year.\n>\n> Are there special techniques? Maybe there's a way to sort of cache\n> search results? We're using PHP5 + phpAccelerator.\n> Thanks,\n\nfrankly that is a small enough chunk of data compared to available memory \nsizes that I think your best bet is to plan to have enough ram that you \nonly do disk I/O to write and on boot.\n\na dual socket Opteron system can hold 16G with 2G memory modules (32G as \n4G modules become readily available over the next couple of years). this \nshould be enough to keep your data and indexes in ram at all times. if you \nfind that other system processes push the data out of ram consider loading \nthe data from disk to a ramfs filesystem, just make sure you don't update \nthe ram-only copy (or if you do that you have replication setup to \nreplicate from the ram copy to a copy on real disks somewhere). depending \non your load you could go with single core or dual core chips (and the \ncpu's are a small enough cost compared to this much ram that you may as \nwell go with the dual core cpu's)\n\nnow even with your data in ram you can slow down if your queries, indexes, \nand other settings are wrong, but if performance is important you should \nbe able to essentially eliminate disks for databases of this size.\n\nDavid Lang\n", "msg_date": "Tue, 10 Jan 2006 19:50:46 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to handle a large DB and simultaneous accesses?" } ]
[ { "msg_contents": "Hello,\n \nI have an inner join query that runs fast, but I when I change to a left\njoin the query runs 96 times slower. I wish I could always do an inner\njoin, but there are rare times when there isn't data in the right hand\ntable. I could expect a small performance hit, but the difference is so\nlarge I figure I must be doing something wrong. What I think is the\nstrangest is how similar the two query plans are.\n \nQuery (inner join version, just replace inner with left for other\nversion):\nselect \np.owner_trader_id, p.strategy_id, m.last, m.bid, m.ask\nfrom \nom_position p inner join om_instrument_mark m on m.instrument_id =\np.instrument_id and m.data_source_id = 5 and m.date = '2005-02-03' \nwhere p.as_of_date = '2005-02-03' and p.fund_id = 'TRIDE' and\np.owner_trader_id = 'tam4' and p.strategy_id = 'BASKET1'\n \nQuery plan for inner join:\nNested Loop (cost=0.00..176.99 rows=4 width=43) (actual\ntime=0.234..14.182 rows=193 loops=1)\n -> Index Scan using as_of_date_om_position_index on om_position p\n(cost=0.00..68.26 rows=19 width=20) (actual time=0.171..5.210 rows=193\nloops=1)\n Index Cond: (as_of_date = '2005-02-03'::date)\"\n Filter: (((fund_id)::text = 'TRIDE'::text) AND\n((owner_trader_id)::text = 'tam4'::text) AND ((strategy_id)::text =\n'BASKET1'::text))\n -> Index Scan using om_instrument_mark_pkey on om_instrument_mark m\n(cost=0.00..5.71 rows=1 width=31) (actual time=0.028..0.032 rows=1\nloops=193)\n Index Cond: ((m.instrument_id = \"outer\".instrument_id) AND\n(m.data_source_id = 5) AND (m.date = '2005-02-03'::date))\nTotal runtime: 14.890 ms\n \nQuery plan for left join:\nNested Loop Left Join (cost=0.00..7763.36 rows=19 width=43) (actual\ntime=3.005..1346.308 rows=193 loops=1)\n -> Index Scan using as_of_date_om_position_index on om_position p\n(cost=0.00..68.26 rows=19 width=20) (actual time=0.064..6.654 rows=193\nloops=1)\n Index Cond: (as_of_date = '2005-02-03'::date)\n Filter: (((fund_id)::text = 'TRIDE'::text) AND\n((owner_trader_id)::text = 'tam4'::text) AND ((strategy_id)::text =\n'BASKET1'::text))\n -> Index Scan using om_instrument_mark_pkey on om_instrument_mark m\n(cost=0.00..404.99 rows=1 width=31) (actual time=3.589..6.919 rows=1\nloops=193)\n Index Cond: (m.instrument_id = \"outer\".instrument_id)\n Filter: ((data_source_id = 5) AND (date = '2005-02-03'::date))\nTotal runtime: 1347.159 ms\n \n \nTable Definitions:\nCREATE TABLE om_position\n(\n fund_id varchar(10) NOT NULL DEFAULT ''::character varying,\n owner_trader_id varchar(10) NOT NULL DEFAULT ''::character varying,\n strategy_id varchar(30) NOT NULL DEFAULT ''::character varying,\n instrument_id int4 NOT NULL DEFAULT 0,\n as_of_date date NOT NULL DEFAULT '0001-01-01'::date,\n pos numeric(22,9) NOT NULL DEFAULT 0.000000000,\n cf_account_id int4 NOT NULL DEFAULT 0,\n cost numeric(22,9) NOT NULL DEFAULT 0.000000000,\n CONSTRAINT om_position_pkey PRIMARY KEY (fund_id, owner_trader_id,\nstrategy_id, cf_account_id, instrument_id, as_of_date),\n CONSTRAINT \"$1\" FOREIGN KEY (strategy_id)\n REFERENCES om_strategy (strategy_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$2\" FOREIGN KEY (fund_id)\n REFERENCES om_fund (fund_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$3\" FOREIGN KEY (cf_account_id)\n REFERENCES om_cf_account (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$4\" FOREIGN KEY (owner_trader_id)\n REFERENCES om_trader (trader_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO 
ACTION\n) \nWITH OIDS;\nCREATE INDEX as_of_date_om_position_index\n ON om_position\n USING btree\n (as_of_date);\n \nCREATE TABLE om_instrument_mark\n(\n instrument_id int4 NOT NULL DEFAULT 0,\n data_source_id int4 NOT NULL DEFAULT 0,\n date date NOT NULL DEFAULT '0001-01-01'::date,\n \"last\" numeric(22,9) NOT NULL DEFAULT 0.000000000,\n bid numeric(22,9) NOT NULL DEFAULT 0.000000000,\n ask numeric(22,9) NOT NULL DEFAULT 0.000000000,\n \"comment\" varchar(150) NOT NULL DEFAULT ''::character varying,\n trader_id varchar(10) NOT NULL DEFAULT 'auto'::character varying,\n CONSTRAINT om_instrument_mark_pkey PRIMARY KEY (instrument_id,\ndata_source_id, date),\n CONSTRAINT \"$1\" FOREIGN KEY (instrument_id)\n REFERENCES om_instrument (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"$2\" FOREIGN KEY (data_source_id)\n REFERENCES om_data_source (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT om_instrument_mark_trader_id_fkey FOREIGN KEY (trader_id)\n REFERENCES om_trader (trader_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n) \nWITH OIDS;\n \nThanks for any help\n \n", "msg_date": "Tue, 10 Jan 2006 20:06:08 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Left Join Performance vs Inner Join Performance" }, { "msg_contents": "\"Dave Dutcher\" <[email protected]> writes:\n> I have an inner join query that runs fast, but I when I change to a left\n> join the query runs 96 times slower.\n\nThis looks like an issue that is fixed in the latest set of releases,\nnamely that OUTER JOIN ON conditions that reference only the inner\nside of the join weren't getting pushed down into indexquals. See\nthread here:\nhttp://archives.postgresql.org/pgsql-performance/2005-12/msg00134.php\nand patches in this and the following messages:\nhttp://archives.postgresql.org/pgsql-committers/2005-12/msg00105.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 10 Jan 2006 23:38:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Left Join Performance vs Inner Join Performance " } ]
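For servers without the indexqual fix Tom refers to, one workaround is to restrict om_instrument_mark in a sub-select so the constant conditions belong to the inner relation rather than to the outer-join clause. This is a hypothetical rewrite based only on the table definitions above and was not benchmarked in the thread:

SELECT p.owner_trader_id, p.strategy_id, m.last, m.bid, m.ask
FROM om_position p
LEFT JOIN (
    SELECT im.instrument_id, im.last, im.bid, im.ask
    FROM om_instrument_mark im
    WHERE im.data_source_id = 5 AND im.date = '2005-02-03'
) m ON m.instrument_id = p.instrument_id
WHERE p.as_of_date = '2005-02-03'
  AND p.fund_id = 'TRIDE'
  AND p.owner_trader_id = 'tam4'
  AND p.strategy_id = 'BASKET1';

The outer-join semantics are unchanged because the moved conditions referenced only the inner side; whether an older planner turns the sub-select into an index scan still has to be verified with EXPLAIN ANALYZE, so upgrading to a release containing the fix remains the safer answer.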
[ { "msg_contents": "Hello!\nHas anyone got any tips for speeding up this query? It currently \ntakes hours to start.\n\nPostgreSQL v8.x on (SuSe Linux)\nThanks!\n\n\nno_people=# explain SELECT r.id AS r_id, r.firstname || ' ' || \nr.lastname AS r_name, ad.id AS ad_id, ad.type AS ad_type, ad.address \nAS ad_address, ad.postalcode AS ad_postalcode, ad.postalsite AS \nad_postalsite, ad.priority AS ad_priority, ad.position[0] AS ad_lat, \nad.position[1] AS ad_lon, ad.uncertainty AS ad_uncertainty, ad.extra \nAS ad_extra, co.id AS co_id, co.type AS co_type, co.value AS \nco_value, co.description AS co_description, co.priority AS \nco_priority, co.visible AS co_visible, co.searchable AS co_searchable\n\nFROM people r\nLEFT OUTER JOIN addresses ad ON(r.id = ad.record)\nLEFT OUTER JOIN contacts co ON(r.id = co.record)\nWHERE r.deleted = false AND r.original IS NULL AND co.deleted = \nfalse AND NOT ad.deleted\nORDER BY r.id;\n\n QUERY PLAN\n------------------------------------------------------------------------ \n-------------------------------------------------------\nSort (cost=1152540.74..1152988.20 rows=178983 width=585)\n Sort Key: r.id\n -> Hash Join (cost=313757.11..1005334.96 rows=178983 width=585)\n Hash Cond: (\"outer\".record = \"inner\".id)\n -> Seq Scan on addresses ad (cost=0.00..428541.29 \nrows=4952580 width=136)\n Filter: (NOT deleted)\n -> Hash (cost=312039.95..312039.95 rows=27664 width=457)\n -> Hash Join (cost=94815.24..312039.95 rows=27664 \nwidth=457)\n Hash Cond: (\"outer\".record = \"inner\".id)\n -> Seq Scan on contacts co \n(cost=0.00..147791.54 rows=5532523 width=430)\n Filter: (deleted = false)\n -> Hash (cost=94755.85..94755.85 rows=23755 \nwidth=27)\n -> Index Scan using \npeople_original_is_null on people r (cost=0.00..94755.85 rows=23755 \nwidth=27)\n Filter: ((deleted = false) AND \n(original IS NULL))\n(14 rows)\n\n\n\n\n\n\nno_people=# \\d contacts\n Table \"public.contacts\"\n Column | Type | \nModifiers\n-------------+------------------------ \n+----------------------------------------------------------\nid | integer | not null default nextval \n('public.contacts_id_seq'::text)\nrecord | integer |\ntype | integer |\nvalue | character varying(128) |\ndescription | character varying(255) |\npriority | integer |\nitescotype | integer |\noriginal | integer |\nsource | integer |\nreference | character varying(32) |\ndeleted | boolean | not null default false\nquality | integer |\nvisible | boolean | not null default true\nsearchable | boolean | not null default true\nIndexes:\n \"contacts_pkey\" PRIMARY KEY, btree (id)\n \"contacts_deleted_idx\" btree (deleted)\n \"contacts_record_idx\" btree (record) CLUSTER\n \"contacts_source_reference_idx\" btree (source, reference)\n\n\n\n\n\n\n\n\n\nno_people=# \\d addresses\n Table \"public.addresses\"\n Column | Type | \nModifiers\n-------------+------------------------ \n+-----------------------------------------------------------\nid | integer | not null default nextval \n('public.addresses_id_seq'::text)\nrecord | integer |\naddress | character varying(128) |\nextra | character varying(32) |\npostalcode | character varying(16) |\npostalsite | character varying(64) |\ndescription | character varying(255) |\nposition | point |\nuncertainty | integer | default 99999999\npriority | integer |\ntype | integer |\nplace | character varying(64) |\nfloor | integer |\nside | character varying(8) |\nhousename | character varying(64) |\noriginal | integer |\nsource | integer |\nreference | character varying(32) |\ndeleted | boolean 
| not null default false\nquality | integer |\nvisible | boolean | not null default true\nsearchable | boolean | not null default true\nIndexes:\n \"addresses_pkey\" PRIMARY KEY, btree (id)\n \"addresses_deleted_idx\" btree (deleted)\n \"addresses_record_idx\" btree (record) CLUSTER\n \"addresses_source_reference_idx\" btree (source, reference)\n\n\n\n\n\n\n\n\nno_people=# \\d people\n Table \"public.people\"\n Column | Type | \nModifiers\n------------+-------------------------- \n+--------------------------------------------------------\nid | integer | not null default nextval \n('public.people_id_seq'::text)\norigid | integer |\nfirstname | character varying(128) | default ''::character varying\nmiddlename | character varying(128) | default ''::character varying\nlastname | character varying(128) | default ''::character varying\nupdated | timestamp with time zone | default \n('now'::text)::timestamp(6) with time zone\nupdater | integer |\nrelevance | real | not null default 0\nphonetic | text |\nindexed | boolean | default false\nrecord | text |\noriginal | integer |\nactive | boolean | default true\ntitle | character varying(128) |\ndeleted | boolean | not null default false\nIndexes:\n \"people_pkey\" PRIMARY KEY, btree (id)\n \"people_indexed_idx\" btree (indexed)\n \"people_lower_lastname_firstname_idx\" btree (lower \n(lastname::text), lower(firstname::text))\n \"people_original_is_null\" btree (original) WHERE original IS NULL\n \"people_relevance_idx\" btree (relevance)\n \"person_updated_idx\" btree (updated)\n\nno_people=# \n", "msg_date": "Wed, 11 Jan 2006 11:59:39 +0100", "msg_from": "Bendik Rognlien Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query with joins" }, { "msg_contents": "Bendik Rognlien Johansen <[email protected]> writes:\n> Has anyone got any tips for speeding up this query? It currently \n> takes hours to start.\n\nAre the rowcount estimates close to reality? The plan doesn't look\nunreasonable to me if they are. It might help to increase work_mem\nto ensure that the hash tables don't spill to disk.\n\nIndexes:\n \"people_original_is_null\" btree (original) WHERE original IS NULL\n\nThis index seems poorly designed: the actual index entries are dead\nweight since all of them are necessarily NULL. You might as well make\nthe index carry something that you frequently test in conjunction with\n\"original IS NULL\". For instance, if this particular query is a common\ncase, you could replace this index with\n\nCREATE INDEX people_deleted_original_is_null ON people(deleted)\n WHERE original IS NULL;\n\nThis index is still perfectly usable for queries that only say \"original\nIS NULL\", but it can also filter out rows with the wrong value of\ndeleted. Now, if there are hardly any rows with deleted = true, maybe\nthis won't help much for your problem. 
But in any case you ought to\nconsider whether you can make the index entries do something useful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jan 2006 10:45:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with joins " }, { "msg_contents": "Yes, the rowcount estimates are real, however, it has been a long \ntime since the last VACUUM FULL (there is never a good time).\n\nI have clustered the tables, reindexed, analyzed, vacuumed and the \nplan now looks like this:\n\n\nno_people=# explain SELECT r.id AS r_id, r.firstname || ' ' || \nr.lastname AS r_name, ad.id AS ad_id, ad.type AS ad_type, ad.address \nAS ad_address, ad.postalcode AS ad_postalcode, ad.postalsite AS \nad_postalsite, ad.priority AS ad_priority, ad.position[0] AS ad_lat, \nad.position[1] AS ad_lon, ad.uncertainty AS ad_uncertainty, ad.extra \nAS ad_extra, ad.deleted AS ad_deleted, co.id AS co_id, co.type AS \nco_type, co.value AS co_value, co.description AS co_description, \nco.priority AS co_priority, co.visible AS co_visible, co.searchable \nAS co_searchable, co.deleted AS co_deleted FROM people r LEFT OUTER \nJOIN addresses ad ON(r.id = ad.record) LEFT OUTER JOIN contacts co ON \n(r.id = co.record) WHERE NOT r.deleted AND r.original IS NULL ORDER \nBY r.id;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------------------------\nSort (cost=182866.49..182943.12 rows=30655 width=587)\n Sort Key: r.id\n -> Nested Loop Left Join (cost=0.00..170552.10 rows=30655 \nwidth=587)\n -> Nested Loop Left Join (cost=0.00..75054.96 rows=26325 \nwidth=160)\n -> Index Scan using people_deleted_original_is_null \non people r (cost=0.00..1045.47 rows=23861 width=27)\n Filter: ((NOT deleted) AND (original IS NULL))\n -> Index Scan using addresses_record_idx on \naddresses ad (cost=0.00..3.05 rows=4 width=137)\n Index Cond: (\"outer\".id = ad.record)\n -> Index Scan using contacts_record_idx on contacts co \n(cost=0.00..3.32 rows=24 width=431)\n Index Cond: (\"outer\".id = co.record)\n(10 rows)\n\n\n\n\n\n\nLooks faster, but still very slow. I added limit 1000 and it has been \nrunning for about 25 minutes now with no output. top shows:\n\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n29994 postgres 18 0 95768 78m 68m R 17.0 7.7 0:53.27 postmaster\n\n\n\nwhich is unusual, I usually get 99.9 %cpu for just about any query, \nwhich leads me to believe this is disk related.\n\n\n\npostgresql.conf:\nshared_buffers = 8192\nwork_mem = 8192\nmaintenance_work_mem = 524288\n\n\n\n\nHardware 2x2.8GHz cpu\n1GB ram\n\nCould this be an issue related to lack of VACUUM FULL? The tables get \na lot of updates.\n\n\nThank you very much so far!\n\n\n\n\nOn Jan 11, 2006, at 4:45 PM, Tom Lane wrote:\n\n> Bendik Rognlien Johansen <[email protected]> writes:\n>> Has anyone got any tips for speeding up this query? It currently\n>> takes hours to start.\n>\n> Are the rowcount estimates close to reality? The plan doesn't look\n> unreasonable to me if they are. It might help to increase work_mem\n> to ensure that the hash tables don't spill to disk.\n>\n> Indexes:\n> \"people_original_is_null\" btree (original) WHERE original IS NULL\n>\n> This index seems poorly designed: the actual index entries are dead\n> weight since all of them are necessarily NULL. You might as well make\n> the index carry something that you frequently test in conjunction with\n> \"original IS NULL\". 
For instance, if this particular query is a \n> common\n> case, you could replace this index with\n>\n> CREATE INDEX people_deleted_original_is_null ON people(deleted)\n> WHERE original IS NULL;\n>\n> This index is still perfectly usable for queries that only say \n> \"original\n> IS NULL\", but it can also filter out rows with the wrong value of\n> deleted. Now, if there are hardly any rows with deleted = true, maybe\n> this won't help much for your problem. But in any case you ought to\n> consider whether you can make the index entries do something useful.\n>\n> \t\t\tregards, tom lane\n\n", "msg_date": "Wed, 11 Jan 2006 20:55:32 +0100", "msg_from": "Bendik Rognlien Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with joins " }, { "msg_contents": "I'd try figuring out if the join is the culprit or the sort is (by\ndropping the ORDER BY). work_mem is probably forcing the sort to spill\nto disk, and if your drives are rather busy...\n\nYou might also get a win if you re-order the joins to people, contacts,\naddresses, if you know it will have the same result.\n\nIn this case LIMIT won't have any real effect, because you have to go\nall the way through with the ORDER BY anyway.\n\nOn Wed, Jan 11, 2006 at 08:55:32PM +0100, Bendik Rognlien Johansen wrote:\n> Yes, the rowcount estimates are real, however, it has been a long \n> time since the last VACUUM FULL (there is never a good time).\n> \n> I have clustered the tables, reindexed, analyzed, vacuumed and the \n> plan now looks like this:\n> \n> \n> no_people=# explain SELECT r.id AS r_id, r.firstname || ' ' || \n> r.lastname AS r_name, ad.id AS ad_id, ad.type AS ad_type, ad.address \n> AS ad_address, ad.postalcode AS ad_postalcode, ad.postalsite AS \n> ad_postalsite, ad.priority AS ad_priority, ad.position[0] AS ad_lat, \n> ad.position[1] AS ad_lon, ad.uncertainty AS ad_uncertainty, ad.extra \n> AS ad_extra, ad.deleted AS ad_deleted, co.id AS co_id, co.type AS \n> co_type, co.value AS co_value, co.description AS co_description, \n> co.priority AS co_priority, co.visible AS co_visible, co.searchable \n> AS co_searchable, co.deleted AS co_deleted FROM people r LEFT OUTER \n> JOIN addresses ad ON(r.id = ad.record) LEFT OUTER JOIN contacts co ON \n> (r.id = co.record) WHERE NOT r.deleted AND r.original IS NULL ORDER \n> BY r.id;\n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> --------------------------------------------------\n> Sort (cost=182866.49..182943.12 rows=30655 width=587)\n> Sort Key: r.id\n> -> Nested Loop Left Join (cost=0.00..170552.10 rows=30655 \n> width=587)\n> -> Nested Loop Left Join (cost=0.00..75054.96 rows=26325 \n> width=160)\n> -> Index Scan using people_deleted_original_is_null \n> on people r (cost=0.00..1045.47 rows=23861 width=27)\n> Filter: ((NOT deleted) AND (original IS NULL))\n> -> Index Scan using addresses_record_idx on \n> addresses ad (cost=0.00..3.05 rows=4 width=137)\n> Index Cond: (\"outer\".id = ad.record)\n> -> Index Scan using contacts_record_idx on contacts co \n> (cost=0.00..3.32 rows=24 width=431)\n> Index Cond: (\"outer\".id = co.record)\n> (10 rows)\n> \n> \n> \n> \n> \n> \n> Looks faster, but still very slow. I added limit 1000 and it has been \n> running for about 25 minutes now with no output. 
top shows:\n> \n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 29994 postgres 18 0 95768 78m 68m R 17.0 7.7 0:53.27 postmaster\n> \n> \n> \n> which is unusual, I usually get 99.9 %cpu for just about any query, \n> which leads me to believe this is disk related.\n> \n> \n> \n> postgresql.conf:\n> shared_buffers = 8192\n> work_mem = 8192\n> maintenance_work_mem = 524288\n> \n> \n> \n> \n> Hardware 2x2.8GHz cpu\n> 1GB ram\n> \n> Could this be an issue related to lack of VACUUM FULL? The tables get \n> a lot of updates.\n> \n> \n> Thank you very much so far!\n> \n> \n> \n> \n> On Jan 11, 2006, at 4:45 PM, Tom Lane wrote:\n> \n> >Bendik Rognlien Johansen <[email protected]> writes:\n> >>Has anyone got any tips for speeding up this query? It currently\n> >>takes hours to start.\n> >\n> >Are the rowcount estimates close to reality? The plan doesn't look\n> >unreasonable to me if they are. It might help to increase work_mem\n> >to ensure that the hash tables don't spill to disk.\n> >\n> >Indexes:\n> > \"people_original_is_null\" btree (original) WHERE original IS NULL\n> >\n> >This index seems poorly designed: the actual index entries are dead\n> >weight since all of them are necessarily NULL. You might as well make\n> >the index carry something that you frequently test in conjunction with\n> >\"original IS NULL\". For instance, if this particular query is a \n> >common\n> >case, you could replace this index with\n> >\n> >CREATE INDEX people_deleted_original_is_null ON people(deleted)\n> > WHERE original IS NULL;\n> >\n> >This index is still perfectly usable for queries that only say \n> >\"original\n> >IS NULL\", but it can also filter out rows with the wrong value of\n> >deleted. Now, if there are hardly any rows with deleted = true, maybe\n> >this won't help much for your problem. But in any case you ought to\n> >consider whether you can make the index entries do something useful.\n> >\n> >\t\t\tregards, tom lane\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 11 Jan 2006 14:23:45 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with joins" }, { "msg_contents": "The sort is definitively the culprit. When I removed it the query was \ninstant. I tried setting work_mem = 131072 but it did not seem to \nhelp. I really don't understand this :-( Any other ideas?\n\nThanks!\n\n\nOn Jan 11, 2006, at 9:23 PM, Jim C. Nasby wrote:\n\n> I'd try figuring out if the join is the culprit or the sort is (by\n> dropping the ORDER BY). 
work_mem is probably forcing the sort to spill\n> to disk, and if your drives are rather busy...\n>\n> You might also get a win if you re-order the joins to people, \n> contacts,\n> addresses, if you know it will have the same result.\n>\n> In this case LIMIT won't have any real effect, because you have to go\n> all the way through with the ORDER BY anyway.\n>\n> On Wed, Jan 11, 2006 at 08:55:32PM +0100, Bendik Rognlien Johansen \n> wrote:\n>> Yes, the rowcount estimates are real, however, it has been a long\n>> time since the last VACUUM FULL (there is never a good time).\n>>\n>> I have clustered the tables, reindexed, analyzed, vacuumed and the\n>> plan now looks like this:\n>>\n>>\n>> no_people=# explain SELECT r.id AS r_id, r.firstname || ' ' ||\n>> r.lastname AS r_name, ad.id AS ad_id, ad.type AS ad_type, ad.address\n>> AS ad_address, ad.postalcode AS ad_postalcode, ad.postalsite AS\n>> ad_postalsite, ad.priority AS ad_priority, ad.position[0] AS ad_lat,\n>> ad.position[1] AS ad_lon, ad.uncertainty AS ad_uncertainty, ad.extra\n>> AS ad_extra, ad.deleted AS ad_deleted, co.id AS co_id, co.type AS\n>> co_type, co.value AS co_value, co.description AS co_description,\n>> co.priority AS co_priority, co.visible AS co_visible, co.searchable\n>> AS co_searchable, co.deleted AS co_deleted FROM people r LEFT OUTER\n>> JOIN addresses ad ON(r.id = ad.record) LEFT OUTER JOIN contacts co ON\n>> (r.id = co.record) WHERE NOT r.deleted AND r.original IS NULL ORDER\n>> BY r.id;\n>> QUERY PLAN\n>> --------------------------------------------------------------------- \n>> ---\n>> --------------------------------------------------\n>> Sort (cost=182866.49..182943.12 rows=30655 width=587)\n>> Sort Key: r.id\n>> -> Nested Loop Left Join (cost=0.00..170552.10 rows=30655\n>> width=587)\n>> -> Nested Loop Left Join (cost=0.00..75054.96 rows=26325\n>> width=160)\n>> -> Index Scan using people_deleted_original_is_null\n>> on people r (cost=0.00..1045.47 rows=23861 width=27)\n>> Filter: ((NOT deleted) AND (original IS NULL))\n>> -> Index Scan using addresses_record_idx on\n>> addresses ad (cost=0.00..3.05 rows=4 width=137)\n>> Index Cond: (\"outer\".id = ad.record)\n>> -> Index Scan using contacts_record_idx on contacts co\n>> (cost=0.00..3.32 rows=24 width=431)\n>> Index Cond: (\"outer\".id = co.record)\n>> (10 rows)\n>>\n>>\n>>\n>>\n>>\n>>\n>> Looks faster, but still very slow. I added limit 1000 and it has been\n>> running for about 25 minutes now with no output. top shows:\n>>\n>>\n>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>> 29994 postgres 18 0 95768 78m 68m R 17.0 7.7 0:53.27 \n>> postmaster\n>>\n>>\n>>\n>> which is unusual, I usually get 99.9 %cpu for just about any query,\n>> which leads me to believe this is disk related.\n>>\n>>\n>>\n>> postgresql.conf:\n>> shared_buffers = 8192\n>> work_mem = 8192\n>> maintenance_work_mem = 524288\n>>\n>>\n>>\n>>\n>> Hardware 2x2.8GHz cpu\n>> 1GB ram\n>>\n>> Could this be an issue related to lack of VACUUM FULL? The tables get\n>> a lot of updates.\n>>\n>>\n>> Thank you very much so far!\n>>\n>>\n>>\n>>\n>> On Jan 11, 2006, at 4:45 PM, Tom Lane wrote:\n>>\n>>> Bendik Rognlien Johansen <[email protected]> writes:\n>>>> Has anyone got any tips for speeding up this query? It currently\n>>>> takes hours to start.\n>>>\n>>> Are the rowcount estimates close to reality? The plan doesn't look\n>>> unreasonable to me if they are. 
It might help to increase work_mem\n>>> to ensure that the hash tables don't spill to disk.\n>>>\n>>> Indexes:\n>>> \"people_original_is_null\" btree (original) WHERE original IS \n>>> NULL\n>>>\n>>> This index seems poorly designed: the actual index entries are dead\n>>> weight since all of them are necessarily NULL. You might as well \n>>> make\n>>> the index carry something that you frequently test in conjunction \n>>> with\n>>> \"original IS NULL\". For instance, if this particular query is a\n>>> common\n>>> case, you could replace this index with\n>>>\n>>> CREATE INDEX people_deleted_original_is_null ON people(deleted)\n>>> WHERE original IS NULL;\n>>>\n>>> This index is still perfectly usable for queries that only say\n>>> \"original\n>>> IS NULL\", but it can also filter out rows with the wrong value of\n>>> deleted. Now, if there are hardly any rows with deleted = true, \n>>> maybe\n>>> this won't help much for your problem. But in any case you ought to\n>>> consider whether you can make the index entries do something useful.\n>>>\n>>> \t\t\tregards, tom lane\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n", "msg_date": "Wed, 11 Jan 2006 22:30:58 +0100", "msg_from": "Bendik Rognlien Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with joins" }, { "msg_contents": "On Wed, Jan 11, 2006 at 10:30:58PM +0100, Bendik Rognlien Johansen wrote:\n> The sort is definitively the culprit. When I removed it the query was \n> instant. I tried setting work_mem = 131072 but it did not seem to \n> help. I really don't understand this :-( Any other ideas?\n\nWhat's explain analyze show with the sort in?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 13 Jan 2006 17:34:54 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with joins" } ]
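A further experiment that follows from Tom's earlier advice about making the partial index entries useful: if the index driving the outer scan also returns rows in id order, the planner may be able to satisfy the ORDER BY from the nested loop itself and drop the large sort. The index below is a hypothetical suggestion, not something tried in the thread:

CREATE INDEX people_active_id_idx ON people (id)
    WHERE original IS NULL AND NOT deleted;

An index scan over this index yields ids already sorted, and a nested loop preserves the ordering of its outer input; whether the 8.x planner actually chooses such a plan, and whether work_mem is still forcing the sort to spill to disk, both need to be confirmed with EXPLAIN ANALYZE output.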
[ { "msg_contents": "Hi ,\n\n\n I am having problem optimizing this query, Postgres optimizer uses a \nplan which invloves seq-scan on a table. And when I choose a option to \ndisable seq-scan it uses index-scan and obviously the query is much faster.\n All tables are daily vacummed and analyzed as per docs.\n\n Why cant postgres use index-scan ?\n\n\nPostgres Version:8.0.2\nPlatform : Fedora\n\nHere is the explain analyze output. Let me know if any more information \nis needed. Can we make postgres use index scan for this query ?\n\nThanks!\nPallav.\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nexplain analyze\n select * from provisioning.alerts where countystate = 'FL' and countyno \n= '099' and status = 'ACTIVE' ;\n\n \nQUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=3.45..15842.17 rows=1 width=125) (actual \ntime=913.491..18992.009 rows=110 loops=1)\n -> Nested Loop (cost=3.45..15838.88 rows=1 width=86) (actual \ntime=913.127..18958.482 rows=110 loops=1)\n -> Hash Join (cost=3.45..15835.05 rows=1 width=82) (actual \ntime=913.093..18954.951 rows=110 loops=1)\n Hash Cond: (\"outer\".fkserviceinstancestatusid = \n\"inner\".serviceinstancestatusid)\n -> Hash Join (cost=2.38..15833.96 rows=2 width=74) \n(actual time=175.139..18952.830 rows=358 loops=1)\n Hash Cond: (\"outer\".fkserviceofferingid = \n\"inner\".serviceofferingid)\n -> Seq Scan on serviceinstance si \n(cost=0.00..15831.52 rows=7 width=78) (actual time=174.430..18948.210 \nrows=358 loops=1)\n Filter: (((subplan) = 'FL'::text) AND \n((subplan) = '099'::text))\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 \nwidth=0) (actual time=0.090..0.093 rows=1 loops=3923)\n -> Result (cost=0.00..0.01 rows=1 \nwidth=0) (actual time=0.058..0.061 rows=1 loops=265617)\n -> Hash (cost=2.38..2.38 rows=3 width=4) (actual \ntime=0.444..0.444 rows=0 loops=1)\n -> Hash Join (cost=1.08..2.38 rows=3 \nwidth=4) (actual time=0.312..0.428 rows=1 loops=1)\n Hash Cond: (\"outer\".fkserviceid = \n\"inner\".serviceid)\n -> Seq Scan on serviceoffering so \n(cost=0.00..1.18 rows=18 width=8) (actual time=0.005..0.068 rows=18 loops=1)\n -> Hash (cost=1.07..1.07 rows=1 \nwidth=4) (actual time=0.036..0.036 rows=0 loops=1)\n -> Seq Scan on service s \n(cost=0.00..1.07 rows=1 width=4) (actual time=0.014..0.019 rows=1 loops=1)\n Filter: (servicename = \n'alert'::text)\n -> Hash (cost=1.06..1.06 rows=1 width=16) (actual \ntime=0.044..0.044 rows=0 loops=1)\n -> Seq Scan on serviceinstancestatus sis \n(cost=0.00..1.06 rows=1 width=16) (actual time=0.017..0.024 rows=1 loops=1)\n Filter: (status = 'ACTIVE'::text)\n -> Index Scan using pk_account_accountid on account a \n(cost=0.00..3.82 rows=1 width=8) (actual time=0.012..0.016 rows=1 loops=110)\n Index Cond: (\"outer\".fkaccountid = a.accountid)\n -> Index Scan using pk_contact_contactid on contact c \n(cost=0.00..3.24 rows=1 width=47) (actual time=0.014..0.018 rows=1 \nloops=110)\n Index Cond: (\"outer\".fkcontactid = c.contactid)\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.072..0.075 rows=1 loops=110)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.079..0.082 rows=1 loops=110)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.086..0.089 rows=1 
loops=110)\n Total runtime: 18992.694 ms\n(30 rows)\n\nTime: 18996.203 ms\n\n--> As you can see the -> Seq Scan on serviceinstance si \n(cost=0.00..15831.52 rows=7 width=78) (actual time=174.430..18948.210 \nrows=358 loops=1) was taking too long .\n same query when i disable the seq-scan it uses index-scan and its \nmuch faster now\n\nset enable_seqscan=false;\nSET\nTime: 0.508 ms\nexplain analyze\nselect * from provisioning.alerts where countystate = 'FL' and countyno \n= '099' and status = 'ACTIVE' ;\n\n \nQUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=9.10..16676.10 rows=1 width=125) (actual \ntime=24.792..3898.939 rows=110 loops=1)\n -> Nested Loop (cost=9.10..16672.81 rows=1 width=86) (actual \ntime=24.383..3862.025 rows=110 loops=1)\n -> Hash Join (cost=9.10..16668.97 rows=1 width=82) (actual \ntime=24.351..3858.351 rows=110 loops=1)\n Hash Cond: (\"outer\".fkserviceofferingid = \n\"inner\".serviceofferingid)\n -> Nested Loop (cost=0.00..16659.85 rows=2 width=86) \n(actual time=8.449..3841.260 rows=110 loops=1)\n -> Index Scan using \npk_serviceinstancestatus_serviceinstancestatusid on \nserviceinstancestatus sis (cost=0.00..3.07 rows=1 width=16) (actual \ntime=3.673..3.684 rows=1 loops=1)\n Filter: (status = 'ACTIVE'::text)\n -> Index Scan using \nidx_serviceinstance_fkserviceinstancestatusid on serviceinstance si \n(cost=0.00..16656.76 rows=2 width=78) (actual time=4.755..3836.399 \nrows=110 loops=1)\n Index Cond: (si.fkserviceinstancestatusid = \n\"outer\".serviceinstancestatusid)\n Filter: (((subplan) = 'FL'::text) AND \n((subplan) = '099'::text))\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 \nwidth=0) (actual time=0.125..0.128 rows=1 loops=1283)\n -> Result (cost=0.00..0.01 rows=1 \nwidth=0) (actual time=0.083..0.086 rows=1 loops=26146)\n -> Hash (cost=9.09..9.09 rows=3 width=4) (actual \ntime=15.661..15.661 rows=0 loops=1)\n -> Nested Loop (cost=0.00..9.09 rows=3 width=4) \n(actual time=15.617..15.637 rows=1 loops=1)\n -> Index Scan using uk_service_servicename \non service s (cost=0.00..3.96 rows=1 width=4) (actual \ntime=11.231..11.236 rows=1 loops=1)\n Index Cond: (servicename = 'alert'::text)\n -> Index Scan using \nidx_serviceoffering_fkserviceid on serviceoffering so (cost=0.00..5.09 \nrows=3 width=8) (actual time=4.366..4.371 rows=1 loops=1)\n Index Cond: (\"outer\".serviceid = \nso.fkserviceid)\n -> Index Scan using pk_account_accountid on account a \n(cost=0.00..3.82 rows=1 width=8) (actual time=0.013..0.017 rows=1 loops=110)\n Index Cond: (\"outer\".fkaccountid = a.accountid)\n -> Index Scan using pk_contact_contactid on contact c \n(cost=0.00..3.24 rows=1 width=47) (actual time=0.013..0.017 rows=1 \nloops=110)\n Index Cond: (\"outer\".fkcontactid = c.contactid)\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.081..0.084 rows=1 loops=110)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.088..0.091 rows=1 loops=110)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.098..0.101 rows=1 loops=110)\n Total runtime: 3899.589 ms\n(28 rows)\n\n\nHere is the view definition\n-------------------------------\n\n View \"provisioning.alerts\"\n Column | Type | Modifiers\n-------------------+---------+-----------\n serviceinstanceid | integer |\n accountid | integer |\n firstname | text |\n lastname | text |\n email | text |\n status | text |\n affiliate 
| text |\n affiliatesub | text |\n domain | text |\n countyno | text |\n countystate | text |\n listingtype | text |\nView definition:\n SELECT si.serviceinstanceid, a.accountid, c.firstname, c.lastname, \nc.email, sis.status, si.affiliate, si.affiliatesub, si.\"domain\",\n ( SELECT get_parametervalue(si.serviceinstanceid, \n'countyNo'::text) AS get_parametervalue) AS countyno,\n ( SELECT get_parametervalue(si.serviceinstanceid, \n'countyState'::text) AS get_parametervalue) AS countystate,\n ( SELECT get_parametervalue(si.serviceinstanceid, \n'listingType'::text) AS get_parametervalue) AS listingtype\n FROM provisioning.account a, common.contact c, provisioning.service \ns, provisioning.serviceoffering so, provisioning.serviceinstance si, \nprovisioning.serviceinstancestatus sis\n WHERE si.fkserviceofferingid = so.serviceofferingid\n AND si.fkserviceinstancestatusid = sis.serviceinstancestatusid\n AND s.serviceid = so.fkserviceid\n AND a.fkcontactid = c.contactid\n AND si.fkaccountid = a.accountid\nAND s.servicename = 'alert'::text;\n\nFunction Definition\n----------------------\n\nCREATE OR REPLACE FUNCTION get_parametervalue(v_fkserviceinstanceid \ninteger, v_name text) RETURNS TEXT AS $$\nDECLARE\n v_value text;\nBEGIN\n SELECT p.value\n INTO v_value\n FROM provisioning.serviceinstanceparameter sip, \ncommon.parameter p\n WHERE fkserviceinstanceid = v_fkserviceinstanceid\n AND sip.fkparameterid = p.parameterid\n AND p.name = v_name;\n\n RETURN v_value;\n\nEND\n\nServiceinstance table stats\n-----------------------------\n\nselect relname, relpages, reltuples from pg_class where relname = \n'serviceinstance';\n relname | relpages | reltuples\n-----------------+----------+-----------\n serviceinstance | 5207 | 265613\n\n$$ language plpgsql\n\n\n\n\n", "msg_date": "Wed, 11 Jan 2006 10:03:36 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres8.0 Planner chooses WRONG plan." } ]
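
The view above computes countyno, countystate and listingtype by calling
get_parametervalue() once per candidate row, which hides the parameter
lookup inside a function the planner can neither cost nor see into. One
possible reformulation, sketched here purely for illustration from the
table and column names visible in the function body (serviceinstanceparameter,
parameter, fkserviceinstanceid, fkparameterid, name, value) and not tested
against the real schema, is to pivot the three parameters with ordinary joins:

-- Sketch only (the sip_*/p_* aliases are invented here): expose the
-- parameter lookups as joins so the planner can estimate them and use
-- the existing indexes.
SELECT si.serviceinstanceid,
       p_no.value    AS countyno,
       p_state.value AS countystate,
       p_type.value  AS listingtype
  FROM provisioning.serviceinstance si
  JOIN provisioning.serviceinstanceparameter sip_no
    ON sip_no.fkserviceinstanceid = si.serviceinstanceid
  JOIN common.parameter p_no
    ON p_no.parameterid = sip_no.fkparameterid AND p_no.name = 'countyNo'
  JOIN provisioning.serviceinstanceparameter sip_state
    ON sip_state.fkserviceinstanceid = si.serviceinstanceid
  JOIN common.parameter p_state
    ON p_state.parameterid = sip_state.fkparameterid AND p_state.name = 'countyState'
  JOIN provisioning.serviceinstanceparameter sip_type
    ON sip_type.fkserviceinstanceid = si.serviceinstanceid
  JOIN common.parameter p_type
    ON p_type.parameterid = sip_type.fkparameterid AND p_type.name = 'listingType';

With this shape, a predicate such as countystate = 'FL' becomes a plain
condition on parameter.value instead of a filter over a function result,
so it can feed an index scan and a realistic row estimate.
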
[ { "msg_contents": "Hi ,\n\n\n I am having problem optimizing this query, Postgres optimizer uses a \nplan which invloves seq-scan on a table. And when I choose a option to \ndisable seq-scan it uses index-scan and obviously the query is much faster.\n All tables are daily vacummed and analyzed as per docs.\n\n Why cant postgres use index-scan ?\n\n\nPostgres Version:8.0.2\nPlatform : Fedora\n\nHere is the explain analyze output. Let me know if any more information \nis needed. Can we make postgres use index scan for this query ?\n\nThanks!\nPallav.\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nexplain analyze\nselect * from provisioning.alerts where countystate = 'FL' and countyno \n= '099' and status = 'ACTIVE' ;\n\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------- \n\nNested Loop (cost=3.45..15842.17 rows=1 width=125) (actual \ntime=913.491..18992.009 rows=110 loops=1)\n -> Nested Loop (cost=3.45..15838.88 rows=1 width=86) (actual \ntime=913.127..18958.482 rows=110 loops=1)\n -> Hash Join (cost=3.45..15835.05 rows=1 width=82) (actual \ntime=913.093..18954.951 rows=110 loops=1)\n Hash Cond: (\"outer\".fkserviceinstancestatusid = \n\"inner\".serviceinstancestatusid)\n -> Hash Join (cost=2.38..15833.96 rows=2 width=74) \n(actual time=175.139..18952.830 rows=358 loops=1)\n Hash Cond: (\"outer\".fkserviceofferingid = \n\"inner\".serviceofferingid)\n -> Seq Scan on serviceinstance si \n(cost=0.00..15831.52 rows=7 width=78) (actual time=174.430..18948.210 \nrows=358 loops=1)\n Filter: (((subplan) = 'FL'::text) AND \n((subplan) = '099'::text))\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) \n(actual time=0.090..0.093 rows=1 loops=3923)\n -> Result (cost=0.00..0.01 rows=1 width=0) \n(actual time=0.058..0.061 rows=1 loops=265617)\n -> Hash (cost=2.38..2.38 rows=3 width=4) (actual \ntime=0.444..0.444 rows=0 loops=1)\n -> Hash Join (cost=1.08..2.38 rows=3 \nwidth=4) (actual time=0.312..0.428 rows=1 loops=1)\n Hash Cond: (\"outer\".fkserviceid = \n\"inner\".serviceid)\n -> Seq Scan on serviceoffering so \n(cost=0.00..1.18 rows=18 width=8) (actual time=0.005..0.068 rows=18 \nloops=1)\n -> Hash (cost=1.07..1.07 rows=1 \nwidth=4) (actual time=0.036..0.036 rows=0 loops=1)\n -> Seq Scan on service s \n(cost=0.00..1.07 rows=1 width=4) (actual time=0.014..0.019 rows=1 loops=1)\n Filter: (servicename = \n'alert'::text)\n -> Hash (cost=1.06..1.06 rows=1 width=16) (actual \ntime=0.044..0.044 rows=0 loops=1)\n -> Seq Scan on serviceinstancestatus sis \n(cost=0.00..1.06 rows=1 width=16) (actual time=0.017..0.024 rows=1 loops=1)\n Filter: (status = 'ACTIVE'::text)\n -> Index Scan using pk_account_accountid on account a \n(cost=0.00..3.82 rows=1 width=8) (actual time=0.012..0.016 rows=1 \nloops=110)\n Index Cond: (\"outer\".fkaccountid = a.accountid)\n -> Index Scan using pk_contact_contactid on contact c \n(cost=0.00..3.24 rows=1 width=47) (actual time=0.014..0.018 rows=1 \nloops=110)\n Index Cond: (\"outer\".fkcontactid = c.contactid)\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.072..0.075 rows=1 loops=110)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.079..0.082 rows=1 loops=110)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.086..0.089 
rows=1 loops=110)\nTotal runtime: 18992.694 ms\n(30 rows)\n\nTime: 18996.203 ms\n\n--> As you can see the -> Seq Scan on serviceinstance si \n(cost=0.00..15831.52 rows=7 width=78) (actual time=174.430..18948.210 \nrows=358 loops=1) was taking too long .\n same query when i disable the seq-scan it uses index-scan and its \nmuch faster now\n\nset enable_seqscan=false;\nSET\nTime: 0.508 ms\nexplain analyze\nselect * from provisioning.alerts where countystate = 'FL' and countyno \n= '099' and status = 'ACTIVE' ;\n\n \nQUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- \n\nNested Loop (cost=9.10..16676.10 rows=1 width=125) (actual \ntime=24.792..3898.939 rows=110 loops=1)\n -> Nested Loop (cost=9.10..16672.81 rows=1 width=86) (actual \ntime=24.383..3862.025 rows=110 loops=1)\n -> Hash Join (cost=9.10..16668.97 rows=1 width=82) (actual \ntime=24.351..3858.351 rows=110 loops=1)\n Hash Cond: (\"outer\".fkserviceofferingid = \n\"inner\".serviceofferingid)\n -> Nested Loop (cost=0.00..16659.85 rows=2 width=86) \n(actual time=8.449..3841.260 rows=110 loops=1)\n -> Index Scan using \npk_serviceinstancestatus_serviceinstancestatusid on \nserviceinstancestatus sis (cost=0.00..3.07 rows=1 width=16) (actual \ntime=3.673..3.684 rows=1 loops=1)\n Filter: (status = 'ACTIVE'::text)\n -> Index Scan using \nidx_serviceinstance_fkserviceinstancestatusid on serviceinstance si \n(cost=0.00..16656.76 rows=2 width=78) (actual time=4.755..3836.399 \nrows=110 loops=1)\n Index Cond: (si.fkserviceinstancestatusid = \n\"outer\".serviceinstancestatusid)\n Filter: (((subplan) = 'FL'::text) AND \n((subplan) = '099'::text))\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) \n(actual time=0.125..0.128 rows=1 loops=1283)\n -> Result (cost=0.00..0.01 rows=1 width=0) \n(actual time=0.083..0.086 rows=1 loops=26146)\n -> Hash (cost=9.09..9.09 rows=3 width=4) (actual \ntime=15.661..15.661 rows=0 loops=1)\n -> Nested Loop (cost=0.00..9.09 rows=3 width=4) \n(actual time=15.617..15.637 rows=1 loops=1)\n -> Index Scan using uk_service_servicename on \nservice s (cost=0.00..3.96 rows=1 width=4) (actual time=11.231..11.236 \nrows=1 loops=1)\n Index Cond: (servicename = 'alert'::text)\n -> Index Scan using \nidx_serviceoffering_fkserviceid on serviceoffering so (cost=0.00..5.09 \nrows=3 width=8) (actual time=4.366..4.371 rows=1 loops=1)\n Index Cond: (\"outer\".serviceid = \nso.fkserviceid)\n -> Index Scan using pk_account_accountid on account a \n(cost=0.00..3.82 rows=1 width=8) (actual time=0.013..0.017 rows=1 \nloops=110)\n Index Cond: (\"outer\".fkaccountid = a.accountid)\n -> Index Scan using pk_contact_contactid on contact c \n(cost=0.00..3.24 rows=1 width=47) (actual time=0.013..0.017 rows=1 \nloops=110)\n Index Cond: (\"outer\".fkcontactid = c.contactid)\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.081..0.084 rows=1 loops=110)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.088..0.091 rows=1 loops=110)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.098..0.101 rows=1 loops=110)\nTotal runtime: 3899.589 ms\n(28 rows)\n\n\nHere is the view definition\n-------------------------------\n\n View \"provisioning.alerts\"\n Column | Type | Modifiers\n-------------------+---------+-----------\nserviceinstanceid | integer |\naccountid | integer |\nfirstname | text |\nlastname | text |\nemail | text |\nstatus | text 
|\naffiliate | text |\naffiliatesub | text |\ndomain | text |\ncountyno | text |\ncountystate | text |\nlistingtype | text |\nView definition:\nSELECT si.serviceinstanceid, a.accountid, c.firstname, c.lastname, \nc.email, sis.status, si.affiliate, si.affiliatesub, si.\"domain\",\n ( SELECT get_parametervalue(si.serviceinstanceid, \n'countyNo'::text) AS get_parametervalue) AS countyno,\n ( SELECT get_parametervalue(si.serviceinstanceid, \n'countyState'::text) AS get_parametervalue) AS countystate,\n ( SELECT get_parametervalue(si.serviceinstanceid, \n'listingType'::text) AS get_parametervalue) AS listingtype\n FROM provisioning.account a, common.contact c, provisioning.service s, \nprovisioning.serviceoffering so, provisioning.serviceinstance si, \nprovisioning.serviceinstancestatus sis\n WHERE si.fkserviceofferingid = so.serviceofferingid\n AND si.fkserviceinstancestatusid = sis.serviceinstancestatusid\nAND s.serviceid = so.fkserviceid\nAND a.fkcontactid = c.contactid\nAND si.fkaccountid = a.accountid\nAND s.servicename = 'alert'::text;\n\nFunction Definition\n----------------------\n\nCREATE OR REPLACE FUNCTION get_parametervalue(v_fkserviceinstanceid \ninteger, v_name text) RETURNS TEXT AS $$\nDECLARE\n v_value text;\nBEGIN\n SELECT p.value\n INTO v_value\n FROM provisioning.serviceinstanceparameter sip, \ncommon.parameter p\n WHERE fkserviceinstanceid = v_fkserviceinstanceid\n AND sip.fkparameterid = p.parameterid\n AND p.name = v_name;\n\n RETURN v_value;\n\nEND\n\nServiceinstance table stats\n-----------------------------\n\nselect relname, relpages, reltuples from pg_class where relname = \n'serviceinstance';\n relname | relpages | reltuples\n-----------------+----------+-----------\nserviceinstance | 5207 | 265613\n\n$$ language plpgsql\n\n\n\n\n", "msg_date": "Wed, 11 Jan 2006 10:27:39 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres8.0 planner chooses WRONG plan" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n> I am having problem optimizing this query,\n\nGet rid of the un-optimizable function inside the view. You've\nconverted something that should be a join into an unreasonably large\nnumber of function calls.\n\n> -> Seq Scan on serviceinstance si \n> (cost=0.00..15831.52 rows=7 width=78) (actual time=174.430..18948.210 \n> rows=358 loops=1)\n> Filter: (((subplan) = 'FL'::text) AND \n> ((subplan) = '099'::text))\n> SubPlan\n> -> Result (cost=0.00..0.01 rows=1 width=0) \n> (actual time=0.090..0.093 rows=1 loops=3923)\n> -> Result (cost=0.00..0.01 rows=1 width=0) \n> (actual time=0.058..0.061 rows=1 loops=265617)\n\nThe bulk of the cost here is in the second subplan (0.061 * 265617 =\n16202.637 msec total runtime), and there's not a darn thing Postgres\ncan do to improve this because the work is all down inside a \"black box\"\nfunction. In fact the planner does not even know that the function call\nis expensive, else it would have preferred a plan that requires fewer\nevaluations of the function. The alternative plan you show is *not*\nfaster \"because it's an indexscan\"; it's faster because get_parametervalue\nis evaluated fewer times.\n\nThe useless sub-SELECTs atop the function calls are adding their own\nlittle increment of wasted time, too. 
I'm not sure how bad that is\nrelative to the function calls, but it's certainly not helping.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jan 2006 11:07:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres8.0 planner chooses WRONG plan " }, { "msg_contents": "Hi Tom,\n\n Thanks! for your input, the view was written first without using \nthe function but its an ugly big with all the joins and its much slower \nthat way. Below is the view without the function and its explain analzye \noutput , as you can see the it takes almost 2 min to run this query with \nthis view . Is there any way to optimize or make changes to this view ?\n\nThanks!\nPallav.\n\n\nView Definition\n-------------------\n\ncreate or replace view provisioning.alertserviceinstanceold as\nSELECT services.serviceinstanceid, a.accountid, c.firstname, c.lastname, \nc.email, services.countyno, services.countystate, services.listingtype \nAS listingtypename, services.status, services.affiliate, \nservices.affiliatesub, services.\"domain\"\n FROM provisioning.account a\n JOIN common.contact c ON a.fkcontactid = c.contactid\n JOIN ( SELECT p1.serviceinstanceid, p1.accountid, p1.countyno, \np2.countystate, p3.listingtype, p1.status, p1.affiliate, \np1.affiliatesub, p1.\"domain\"\n FROM ( SELECT si.serviceinstanceid, si.affiliate, si.affiliatesub, \nsi.\"domain\", si.fkaccountid AS accountid, p.value AS countyno, sis.status\n FROM provisioning.service s\n JOIN provisioning.serviceoffering so ON s.serviceid = \nso.fkserviceid\n JOIN provisioning.serviceinstance si ON so.serviceofferingid = \nsi.fkserviceofferingid\n JOIN provisioning.serviceinstancestatus sis ON \nsi.fkserviceinstancestatusid = sis.serviceinstancestatusid\n JOIN provisioning.serviceinstanceparameter sip ON \nsi.serviceinstanceid = sip.fkserviceinstanceid\n JOIN common.parameter p ON sip.fkparameterid = p.parameterid\n WHERE s.servicename = 'alert'::text AND p.name = 'countyNo'::text) p1\n JOIN ( SELECT si.serviceinstanceid, si.affiliate, si.affiliatesub, \nsi.\"domain\", si.fkaccountid AS accountid, p.value AS countystate, sis.status\n FROM provisioning.service s\n JOIN provisioning.serviceoffering so ON s.serviceid = \nso.fkserviceid\n JOIN provisioning.serviceinstance si ON so.serviceofferingid = \nsi.fkserviceofferingid\n JOIN provisioning.serviceinstancestatus sis ON \nsi.fkserviceinstancestatusid = sis.serviceinstancestatusid\n JOIN provisioning.serviceinstanceparameter sip ON \nsi.serviceinstanceid = sip.fkserviceinstanceid\n JOIN common.parameter p ON sip.fkparameterid = p.parameterid\n WHERE s.servicename = 'alert'::text AND p.name = 'countyState'::text) \np2 ON p1.accountid = p2.accountid AND p1.serviceinstanceid = \np2.serviceinstanceid\n JOIN ( SELECT si.serviceinstanceid, si.affiliate, si.affiliatesub, \nsi.\"domain\", si.fkaccountid AS accountid, p.value AS listingtype, sis.status\n FROM provisioning.service s\n JOIN provisioning.serviceoffering so ON s.serviceid = so.fkserviceid\n JOIN provisioning.serviceinstance si ON so.serviceofferingid = \nsi.fkserviceofferingid\n JOIN provisioning.serviceinstancestatus sis ON \nsi.fkserviceinstancestatusid = sis.serviceinstancestatusid\n JOIN provisioning.serviceinstanceparameter sip ON \nsi.serviceinstanceid = sip.fkserviceinstanceid\n JOIN common.parameter p ON sip.fkparameterid = p.parameterid\n WHERE s.servicename = 'alert'::text AND p.name = 'listingType'::text) \np3 ON p2.accountid = p3.accountid AND p2.serviceinstanceid = \np3.serviceinstanceid) 
services\nON a.accountid = services.accountid\nORDER BY services.serviceinstanceid;\n\nExplain Analyze\n------------------\nexplain analyze\nselect * from provisioning.alertserviceinstanceold where countystate = \n'FL' and countyno = '099' and status = 'ACTIVE' ;\n\n \nQUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------------------------------------------------------------------------------\n Subquery Scan alertserviceinstanceold (cost=31954.24..31954.25 rows=1 \nwidth=328) (actual time=113485.801..113487.024 rows=110 loops=1)\n -> Sort (cost=31954.24..31954.24 rows=1 width=152) (actual \ntime=113485.787..113486.123 rows=110 loops=1)\n Sort Key: si.serviceinstanceid\n -> Hash Join (cost=20636.38..31954.23 rows=1 width=152) \n(actual time=109721.688..113485.311 rows=110 loops=1)\n Hash Cond: (\"outer\".accountid = \"inner\".fkaccountid)\n -> Hash Join (cost=6595.89..16770.25 rows=228696 \nwidth=47) (actual time=1742.592..4828.396 rows=229855 loops=1)\n Hash Cond: (\"outer\".contactid = \"inner\".fkcontactid)\n -> Seq Scan on contact c (cost=0.00..4456.96 \nrows=228696 width=47) (actual time=0.006..1106.459 rows=229868 loops=1)\n -> Hash (cost=6024.11..6024.11 rows=228711 \nwidth=8) (actual time=1742.373..1742.373 rows=0 loops=1)\n -> Seq Scan on account a \n(cost=0.00..6024.11 rows=228711 width=8) (actual time=0.010..990.597 \nrows=229855 loops=1)\n -> Hash (cost=14040.49..14040.49 rows=1 width=117) \n(actual time=107911.397..107911.397 rows=0 loops=1)\n -> Nested Loop (cost=10.34..14040.49 rows=1 \nwidth=117) (actual time=1185.383..107910.738 rows=110 loops=1)\n -> Nested Loop (cost=10.34..14037.45 rows=1 \nwidth=112) (actual time=1185.278..107898.885 rows=550 loops=1)\n -> Hash Join (cost=10.34..14033.98 \nrows=1 width=124) (actual time=1185.224..107888.542 rows=110 loops=1)\n Hash Cond: \n(\"outer\".fkserviceofferingid = \"inner\".serviceofferingid)\n -> Hash Join \n(cost=7.96..14031.58 rows=1 width=128) (actual time=1184.490..107886.329 \nrows=110 loops=1)\n Hash Cond: \n(\"outer\".fkserviceinstancestatusid = \"inner\".serviceinstancestatusid)\n -> Nested Loop \n(cost=6.90..14030.50 rows=1 width=132) (actual time=1184.151..107884.302 \nrows=110 loops=1)\n Join Filter: \n(\"outer\".fkaccountid = \"inner\".fkaccountid)\n -> Nested Loop \n(cost=6.90..14025.09 rows=1 width=116) (actual time=1184.123..107880.635 \nrows=110 loops=1)\n Join Filter: \n((\"outer\".fkaccountid = \"inner\".fkaccountid) AND \n(\"outer\".serviceinstanceid = \"inner\".serviceinstanceid))\n -> Hash Join \n(cost=3.45..636.39 rows=1 width=95) (actual time=85.524..293.387 \nrows=226 loops=1)\n Hash \nCond: (\"outer\".fkserviceinstancestatusid = \"inner\".serviceinstancestatusid)\n -> Hash \nJoin (cost=2.38..635.29 rows=4 width=87) (actual time=6.894..289.000 \nrows=663 loops=1)\n \nHash Cond: (\"outer\".fkserviceofferingid = \"inner\".serviceofferingid)\n -> \nNested Loop (cost=0.00..632.75 rows=23 width=91) (actual \ntime=6.176..281.620 rows=663 loops=1)\n \n-> Nested Loop (cost=0.00..508.26 rows=23 width=13) (actual \ntime=6.138..221.590 rows=663 loops=1)\n \n-> Index Scan using idx_parameter_value on parameter p \n(cost=0.00..437.42 rows=23 width=13) (actual time=6.091..20.656 rows=663 \nloops=1)\n \nIndex Cond: (value = '099'::text)\n \nFilter: (name = 'countyNo'::text)\n \n-> Index Scan using idx_serviceinstanceparameter_fkparameterid on \nserviceinstanceparameter sip 
(cost=0.00..3.07 rows=1 width=8) (actual \ntime=0.278..0.288 rows=1 loops=663)\n \nIndex Cond: (sip.fkparameterid = \"outer\".parameterid)\n \n-> Index Scan using pk_serviceinstance_serviceinstanceid on \nserviceinstance si (cost=0.00..5.40 rows=1 width=78) (actual \ntime=0.041..0.073 rows=1 loops=663)\n \nIndex Cond: (si.serviceinstanceid = \"outer\".fkserviceinstanceid)\n -> \nHash (cost=2.38..2.38 rows=3 width=4) (actual time=0.445..0.445 rows=0 \nloops=1)\n \n-> Hash Join (cost=1.08..2.38 rows=3 width=4) (actual \ntime=0.314..0.426 rows=1 loops=1)\n \nHash Cond: (\"outer\".fkserviceid = \"inner\".serviceid)\n \n-> Seq Scan on serviceoffering so (cost=0.00..1.18 rows=18width=8) \n(actual time=0.005..0.065 rows=18 loops=1)\n \n-> Hash (cost=1.07..1.07 rows=1 width=4) (actual time=0.033..0.033 \nrows=0 loops=1)\n \n-> Seq Scan on service s (cost=0.00..1.07 rows=1 width=4) (actual \ntime=0.011..0.016 rows=1 loops=1)\n \nFilter: (servicename = 'alert'::text)\n -> Hash \n(cost=1.06..1.06 rows=1 width=16) (actual time=0.031..0.031 rows=0 loops=1)\n -> \nSeq Scan on serviceinstancestatus sis (cost=0.00..1.06 rows=1 width=16) \n(actual time=0.008..0.014 rows=1 loops=1)\n \nFilter: (status = 'ACTIVE'::text)\n -> Hash Join \n(cost=3.45..13386.23 rows=165 width=21) (actual time=0.119..461.891 \nrows=3935 loops=226)\n Hash \nCond: (\"outer\".fkserviceinstancestatusid = \"inner\".serviceinstancestatusid)\n -> Hash \nJoin (cost=2.38..13382.69 rows=165 width=25) (actual \ntime=0.110..432.555 rows=3935 loops=226)\n \nHash Cond: (\"outer\".fkserviceofferingid = \"inner\".serviceofferingid)\n -> \nNested Loop (cost=0.00..13373.71 rows=990 width=29) (actual \ntime=0.098..400.805 rows=3935 loops=226)\n \n-> Nested Loop (cost=0.00..8015.16 rows=990 width=13) (actual \ntime=0.035..267.634 rows=3935 loops=226)\n \n-> Seq Scan on parameter p (cost=0.00..4968.81 rows=989 width=13) \n(actual time=0.008..131.735 rows=3935 loops=226)\n \nFilter: ((name = 'countyState'::text) AND (value = 'FL'::text))\n \n-> Index Scan using idx_serviceinstanceparameter_fkparameterid on \nserviceinstanceparameter sip (cost=0.00..3.07 rows=1 width=8) (actual \ntime=0.015..0.020 rows=1 loops=889310)\n \nIndex Cond: (sip.fkparameterid = \"outer\".parameterid)\n \n-> Index Scan using pk_serviceinstance_serviceinstanceid on \nserviceinstance si (cost=0.00..5.40 rows=1 width=16) (actual \ntime=0.012..0.019 rows=1 loops=889310)\n \nIndex Cond: (si.serviceinstanceid = \"outer\".fkserviceinstanceid)\n -> \nHash (cost=2.38..2.38 rows=3 width=4) (actual time=0.439..0.439 rows=0 \nloops=1)\n \n-> Hash Join (cost=1.08..2.38 rows=3 width=4) (actual \ntime=0.310..0.423 rows=1 loops=1)\n \nHash Cond: (\"outer\".fkserviceid = \"inner\".serviceid)\n \n-> Seq Scan on serviceoffering so (cost=0.00..1.18 rows=18 width=8) \n(actual time=0.006..0.065 rows=18 loops=1)\n \n-> Hash (cost=1.07..1.07 rows=1 width=4) (actual time=0.035..0.035 \nrows=0 loops=1)\n \n-> Seq Scan on service s (cost=0.00..1.07 rows=1 width=4) (actual \ntime=0.013..0.018 rows=1 loops=1)\n \nFilter: (servicename = 'alert'::text)\n -> Hash \n(cost=1.05..1.05 rows=5 width=4) (actual time=0.059..0.059 rows=0 loops=1)\n -> \nSeq Scan on serviceinstancestatus sis (cost=0.00..1.05 rows=5 width=4) \n(actual time=0.010..0.029 rows=5 loops=1)\n -> Index Scan using \npk_serviceinstance_serviceinstanceid on serviceinstance si \n(cost=0.00..5.40 rows=1 width=16) (actual time=0.009..0.012 rows=1 \nloops=110)\n Index Cond: \n(si.serviceinstanceid = \"outer\".fkserviceinstanceid)\n -> Hash 
(cost=1.05..1.05 \nrows=5 width=4) (actual time=0.055..0.055 rows=0 loops=1)\n -> Seq Scan on \nserviceinstancestatus sis (cost=0.00..1.05 rows=5 width=4) (actual \ntime=0.008..0.025 rows=5 loops=1)\n -> Hash (cost=2.38..2.38 rows=3 \nwidth=4) (actual time=0.461..0.461 rows=0 loops=1)\n -> Hash Join \n(cost=1.08..2.38 rows=3 width=4) (actual time=0.325..0.445 rows=1 loops=1)\n Hash Cond: \n(\"outer\".fkserviceid = \"inner\".serviceid)\n -> Seq Scan on \nserviceoffering so (cost=0.00..1.18 rows=18 width=8) (actual \ntime=0.006..0.074 rows=18 loops=1)\n -> Hash \n(cost=1.07..1.07 rows=1 width=4) (actual time=0.044..0.044 rows=0 loops=1)\n -> Seq Scan on \nservice s (cost=0.00..1.07 rows=1 width=4) (actual time=0.022..0.027 \nrows=1 loops=1)\n Filter: \n(servicename = 'alert'::text)\n -> Index Scan using \nidx_serviceinstanceparameter_fkserviceinstanceid on \nserviceinstanceparameter sip (cost=0.00..3.41 rows=5 width=8) (actual \ntime=0.018..0.038 rows=5 loops=110)\n Index Cond: \n(sip.fkserviceinstanceid = \"outer\".fkserviceinstanceid)\n -> Index Scan using pk_parameter_parameterid \non parameter p (cost=0.00..3.02 rows=1 width=13) (actual \ntime=0.011..0.012 rows=0 loops=550)\n Index Cond: (\"outer\".fkparameterid = \np.parameterid)\n Filter: (name = 'listingType'::text)\n\n Total runtime: 113490.582 ms\n(82 rows)\n\n\n\nTom Lane wrote:\n>Pallav Kalva <[email protected]> writes:\n> \n>> I am having problem optimizing this query,\n>> \n>\n>Get rid of the un-optimizable function inside the view. You've\n>converted something that should be a join into an unreasonably large\n>number of function calls.\n>\n> \n>> -> Seq Scan on serviceinstance si \n>>(cost=0.00..15831.52 rows=7 width=78) (actual time=174.430..18948.210 \n>>rows=358 loops=1)\n>> Filter: (((subplan) = 'FL'::text) AND \n>>((subplan) = '099'::text))\n>> SubPlan\n>> -> Result (cost=0.00..0.01 rows=1 width=0) \n>>(actual time=0.090..0.093 rows=1 loops=3923)\n>> -> Result (cost=0.00..0.01 rows=1 width=0) \n>>(actual time=0.058..0.061 rows=1 loops=265617)\n>> \n>\n>The bulk of the cost here is in the second subplan (0.061 * 265617 =\n>16202.637 msec total runtime), and there's not a darn thing Postgres\n>can do to improve this because the work is all down inside a \"black box\"\n>function. In fact the planner does not even know that the function call\n>is expensive, else it would have preferred a plan that requires fewer\n>evaluations of the function. The alternative plan you show is *not*\n>faster \"because it's an indexscan\"; it's faster because get_parametervalue\n>is evaluated fewer times.\n>\n>The useless sub-SELECTs atop the function calls are adding their own\n>little increment of wasted time, too. I'm not sure how bad that is\n>relative to the function calls, but it's certainly not helping.\n>\n>\t\t\tregards, tom lane\n>\n> \n\n", "msg_date": "Wed, 11 Jan 2006 11:44:58 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres8.0 planner chooses WRONG plan" }, { "msg_contents": "On Wed, Jan 11, 2006 at 11:44:58AM -0500, Pallav Kalva wrote:\nSome view you've got there... 
you might want to break that apart into\nmultiple views that are a bit easier to manage.\nservice_instance_with_status is a likely candidate, for example.\n\n> View Definition\n> -------------------\n> \n> create or replace view provisioning.alertserviceinstanceold as\n> SELECT services.serviceinstanceid, a.accountid, c.firstname, c.lastname, \n> c.email, services.countyno, services.countystate, services.listingtype \n> AS listingtypename, services.status, services.affiliate, \n> services.affiliatesub, services.\"domain\"\n> FROM provisioning.account a\n> JOIN common.contact c ON a.fkcontactid = c.contactid\n> JOIN ( SELECT p1.serviceinstanceid, p1.accountid, p1.countyno, \n> p2.countystate, p3.listingtype, p1.status, p1.affiliate, \n> p1.affiliatesub, p1.\"domain\"\n> FROM ( SELECT si.serviceinstanceid, si.affiliate, si.affiliatesub, \n> si.\"domain\", si.fkaccountid AS accountid, p.value AS countyno, sis.status\n> FROM provisioning.service s\n> JOIN provisioning.serviceoffering so ON s.serviceid = \n> so.fkserviceid\n> JOIN provisioning.serviceinstance si ON so.serviceofferingid = \n> si.fkserviceofferingid\n> JOIN provisioning.serviceinstancestatus sis ON \n> si.fkserviceinstancestatusid = sis.serviceinstancestatusid\n> JOIN provisioning.serviceinstanceparameter sip ON \n> si.serviceinstanceid = sip.fkserviceinstanceid\n> JOIN common.parameter p ON sip.fkparameterid = p.parameterid\n> WHERE s.servicename = 'alert'::text AND p.name = 'countyNo'::text) p1\n> JOIN ( SELECT si.serviceinstanceid, si.affiliate, si.affiliatesub, \n> si.\"domain\", si.fkaccountid AS accountid, p.value AS countystate, sis.status\n> FROM provisioning.service s\n> JOIN provisioning.serviceoffering so ON s.serviceid = \n> so.fkserviceid\n> JOIN provisioning.serviceinstance si ON so.serviceofferingid = \n> si.fkserviceofferingid\n> JOIN provisioning.serviceinstancestatus sis ON \n> si.fkserviceinstancestatusid = sis.serviceinstancestatusid\n> JOIN provisioning.serviceinstanceparameter sip ON \n> si.serviceinstanceid = sip.fkserviceinstanceid\n> JOIN common.parameter p ON sip.fkparameterid = p.parameterid\n> WHERE s.servicename = 'alert'::text AND p.name = 'countyState'::text) \n> p2 ON p1.accountid = p2.accountid AND p1.serviceinstanceid = \n> p2.serviceinstanceid\n> JOIN ( SELECT si.serviceinstanceid, si.affiliate, si.affiliatesub, \n> si.\"domain\", si.fkaccountid AS accountid, p.value AS listingtype, sis.status\n> FROM provisioning.service s\n> JOIN provisioning.serviceoffering so ON s.serviceid = so.fkserviceid\n> JOIN provisioning.serviceinstance si ON so.serviceofferingid = \n> si.fkserviceofferingid\n> JOIN provisioning.serviceinstancestatus sis ON \n> si.fkserviceinstancestatusid = sis.serviceinstancestatusid\n> JOIN provisioning.serviceinstanceparameter sip ON \n> si.serviceinstanceid = sip.fkserviceinstanceid\n> JOIN common.parameter p ON sip.fkparameterid = p.parameterid\n> WHERE s.servicename = 'alert'::text AND p.name = 'listingType'::text) \n> p3 ON p2.accountid = p3.accountid AND p2.serviceinstanceid = \n> p3.serviceinstanceid) services\n> ON a.accountid = services.accountid\n> ORDER BY services.serviceinstanceid;\n> \n> Explain Analyze\n> ------------------\n> explain analyze\n> select * from provisioning.alertserviceinstanceold where countystate = \n> 'FL' and countyno = '099' and status = 'ACTIVE' ;\n> \n> \n> QUERY PLAN\n> \n> ----------------------------------------------------------------------------------------------------------------------------------------------\n> 
--------------------------------------------------------------------------------------------------------------------\n> Subquery Scan alertserviceinstanceold (cost=31954.24..31954.25 rows=1 \n> width=328) (actual time=113485.801..113487.024 rows=110 loops=1)\n> -> Sort (cost=31954.24..31954.24 rows=1 width=152) (actual \n> time=113485.787..113486.123 rows=110 loops=1)\n> Sort Key: si.serviceinstanceid\n> -> Hash Join (cost=20636.38..31954.23 rows=1 width=152) \n> (actual time=109721.688..113485.311 rows=110 loops=1)\n> Hash Cond: (\"outer\".accountid = \"inner\".fkaccountid)\n> -> Hash Join (cost=6595.89..16770.25 rows=228696 \n> width=47) (actual time=1742.592..4828.396 rows=229855 loops=1)\n> Hash Cond: (\"outer\".contactid = \"inner\".fkcontactid)\n> -> Seq Scan on contact c (cost=0.00..4456.96 \n> rows=228696 width=47) (actual time=0.006..1106.459 rows=229868 loops=1)\n> -> Hash (cost=6024.11..6024.11 rows=228711 \n> width=8) (actual time=1742.373..1742.373 rows=0 loops=1)\n> -> Seq Scan on account a \n> (cost=0.00..6024.11 rows=228711 width=8) (actual time=0.010..990.597 \n> rows=229855 loops=1)\n> -> Hash (cost=14040.49..14040.49 rows=1 width=117) \n> (actual time=107911.397..107911.397 rows=0 loops=1)\n> -> Nested Loop (cost=10.34..14040.49 rows=1 \n> width=117) (actual time=1185.383..107910.738 rows=110 loops=1)\n> -> Nested Loop (cost=10.34..14037.45 rows=1 \n> width=112) (actual time=1185.278..107898.885 rows=550 loops=1)\n> -> Hash Join (cost=10.34..14033.98 \n> rows=1 width=124) (actual time=1185.224..107888.542 rows=110 loops=1)\n> Hash Cond: \n> (\"outer\".fkserviceofferingid = \"inner\".serviceofferingid)\n> -> Hash Join \n> (cost=7.96..14031.58 rows=1 width=128) (actual time=1184.490..107886.329 \n> rows=110 loops=1)\n> Hash Cond: \n> (\"outer\".fkserviceinstancestatusid = \"inner\".serviceinstancestatusid)\n> -> Nested Loop \n> (cost=6.90..14030.50 rows=1 width=132) (actual time=1184.151..107884.302 \n> rows=110 loops=1)\n> Join Filter: \n> (\"outer\".fkaccountid = \"inner\".fkaccountid)\n\nWell, here's the step that's killing you:\n> -> Nested Loop \n> (cost=6.90..14025.09 rows=1 width=116) (actual time=1184.123..107880.635 \n> rows=110 loops=1)\n> Join Filter: \n> ((\"outer\".fkaccountid = \"inner\".fkaccountid) AND \n> (\"outer\".serviceinstanceid = \"inner\".serviceinstanceid))\n> -> Hash Join \n> (cost=3.45..636.39 rows=1 width=95) (actual time=85.524..293.387 \n> rows=226 loops=1)\n> Hash \n> Cond: (\"outer\".fkserviceinstancestatusid = \"inner\".serviceinstancestatusid)\n\nUnfortunately, the way this query plan came out it's difficult to figure\nout what the other input to that nested loop is. But before we get to\nthat, what do you have join_collapse_limit set to? If it's the default\nof 8 then the optimizer is essentially going to follow the join order\nyou specified when you wrote the view, which could be far from optimal.\nIt would be worth setting join_collapse_limit high enough so that this\nquery will get flattened and see what kind of plan it comes up with\nthen. 
Note that this could result in an unreasonably-large plan time,\nbut if it results in a fast query execution we know it's just a matter\nof re-ordering things in the query.\n\nAlso, it would be best if you could send the results of explain as an\nattachement that hasn't been word-wrapped.\n\n> -> Hash \n> Join (cost=2.38..635.29 rows=4 width=87) (actual time=6.894..289.000 \n> rows=663 loops=1)\n> \n> Hash Cond: (\"outer\".fkserviceofferingid = \"inner\".serviceofferingid)\n> -> \n> Nested Loop (cost=0.00..632.75 rows=23 width=91) (actual \n> time=6.176..281.620 rows=663 loops=1)\n> \n> -> Nested Loop (cost=0.00..508.26 rows=23 width=13) (actual \n> time=6.138..221.590 rows=663 loops=1)\n> \n> -> Index Scan using idx_parameter_value on parameter p \n> (cost=0.00..437.42 rows=23 width=13) (actual time=6.091..20.656 rows=663 \n> loops=1)\n> \n> Index Cond: (value = '099'::text)\n> \n> Filter: (name = 'countyNo'::text)\n> \n> -> Index Scan using idx_serviceinstanceparameter_fkparameterid on \n> serviceinstanceparameter sip (cost=0.00..3.07 rows=1 width=8) (actual \n> time=0.278..0.288 rows=1 loops=663)\n> \n> Index Cond: (sip.fkparameterid = \"outer\".parameterid)\n> \n> -> Index Scan using pk_serviceinstance_serviceinstanceid on \n> serviceinstance si (cost=0.00..5.40 rows=1 width=78) (actual \n> time=0.041..0.073 rows=1 loops=663)\n> \n> Index Cond: (si.serviceinstanceid = \"outer\".fkserviceinstanceid)\n> -> \n> Hash (cost=2.38..2.38 rows=3 width=4) (actual time=0.445..0.445 rows=0 \n> loops=1)\n> \n> -> Hash Join (cost=1.08..2.38 rows=3 width=4) (actual \n> time=0.314..0.426 rows=1 loops=1)\n> \n> Hash Cond: (\"outer\".fkserviceid = \"inner\".serviceid)\n> \n> -> Seq Scan on serviceoffering so (cost=0.00..1.18 rows=18width=8) \n> (actual time=0.005..0.065 rows=18 loops=1)\n> \n> -> Hash (cost=1.07..1.07 rows=1 width=4) (actual time=0.033..0.033 \n> rows=0 loops=1)\n> \n> -> Seq Scan on service s (cost=0.00..1.07 rows=1 width=4) (actual \n> time=0.011..0.016 rows=1 loops=1)\n> \n> Filter: (servicename = 'alert'::text)\n> -> Hash \n> (cost=1.06..1.06 rows=1 width=16) (actual time=0.031..0.031 rows=0 loops=1)\n> -> \n> Seq Scan on serviceinstancestatus sis (cost=0.00..1.06 rows=1 width=16) \n> (actual time=0.008..0.014 rows=1 loops=1)\n> \n> Filter: (status = 'ACTIVE'::text)\n> -> Hash Join \n> (cost=3.45..13386.23 rows=165 width=21) (actual time=0.119..461.891 \n> rows=3935 loops=226)\n> Hash \n> Cond: (\"outer\".fkserviceinstancestatusid = \"inner\".serviceinstancestatusid)\n> -> Hash \n> Join (cost=2.38..13382.69 rows=165 width=25) (actual \n> time=0.110..432.555 rows=3935 loops=226)\n> \n> Hash Cond: (\"outer\".fkserviceofferingid = \"inner\".serviceofferingid)\n> -> \n> Nested Loop (cost=0.00..13373.71 rows=990 width=29) (actual \n> time=0.098..400.805 rows=3935 loops=226)\n> \n> -> Nested Loop (cost=0.00..8015.16 rows=990 width=13) (actual \n> time=0.035..267.634 rows=3935 loops=226)\n> \n> -> Seq Scan on parameter p (cost=0.00..4968.81 rows=989 width=13) \n> (actual time=0.008..131.735 rows=3935 loops=226)\n> \n> Filter: ((name = 'countyState'::text) AND (value = 'FL'::text))\n> \n> -> Index Scan using idx_serviceinstanceparameter_fkparameterid on \n> serviceinstanceparameter sip (cost=0.00..3.07 rows=1 width=8) (actual \n> time=0.015..0.020 rows=1 loops=889310)\n> \n> Index Cond: (sip.fkparameterid = \"outer\".parameterid)\n> \n> -> Index Scan using pk_serviceinstance_serviceinstanceid on \n> serviceinstance si (cost=0.00..5.40 rows=1 width=16) (actual \n> time=0.012..0.019 rows=1 
loops=889310)\n> \n> Index Cond: (si.serviceinstanceid = \"outer\".fkserviceinstanceid)\n> -> \n> Hash (cost=2.38..2.38 rows=3 width=4) (actual time=0.439..0.439 rows=0 \n> loops=1)\n> \n> -> Hash Join (cost=1.08..2.38 rows=3 width=4) (actual \n> time=0.310..0.423 rows=1 loops=1)\n> \n> Hash Cond: (\"outer\".fkserviceid = \"inner\".serviceid)\n> \n> -> Seq Scan on serviceoffering so (cost=0.00..1.18 rows=18 width=8) \n> (actual time=0.006..0.065 rows=18 loops=1)\n> \n> -> Hash (cost=1.07..1.07 rows=1 width=4) (actual time=0.035..0.035 \n> rows=0 loops=1)\n> \n> -> Seq Scan on service s (cost=0.00..1.07 rows=1 width=4) (actual \n> time=0.013..0.018 rows=1 loops=1)\n> \n> Filter: (servicename = 'alert'::text)\n> -> Hash \n> (cost=1.05..1.05 rows=5 width=4) (actual time=0.059..0.059 rows=0 loops=1)\n> -> \n> Seq Scan on serviceinstancestatus sis (cost=0.00..1.05 rows=5 width=4) \n> (actual time=0.010..0.029 rows=5 loops=1)\n> -> Index Scan using \n> pk_serviceinstance_serviceinstanceid on serviceinstance si \n> (cost=0.00..5.40 rows=1 width=16) (actual time=0.009..0.012 rows=1 \n> loops=110)\n> Index Cond: \n> (si.serviceinstanceid = \"outer\".fkserviceinstanceid)\n> -> Hash (cost=1.05..1.05 \n> rows=5 width=4) (actual time=0.055..0.055 rows=0 loops=1)\n> -> Seq Scan on \n> serviceinstancestatus sis (cost=0.00..1.05 rows=5 width=4) (actual \n> time=0.008..0.025 rows=5 loops=1)\n> -> Hash (cost=2.38..2.38 rows=3 \n> width=4) (actual time=0.461..0.461 rows=0 loops=1)\n> -> Hash Join \n> (cost=1.08..2.38 rows=3 width=4) (actual time=0.325..0.445 rows=1 loops=1)\n> Hash Cond: \n> (\"outer\".fkserviceid = \"inner\".serviceid)\n> -> Seq Scan on \n> serviceoffering so (cost=0.00..1.18 rows=18 width=8) (actual \n> time=0.006..0.074 rows=18 loops=1)\n> -> Hash \n> (cost=1.07..1.07 rows=1 width=4) (actual time=0.044..0.044 rows=0 loops=1)\n> -> Seq Scan on \n> service s (cost=0.00..1.07 rows=1 width=4) (actual time=0.022..0.027 \n> rows=1 loops=1)\n> Filter: \n> (servicename = 'alert'::text)\n> -> Index Scan using \n> idx_serviceinstanceparameter_fkserviceinstanceid on \n> serviceinstanceparameter sip (cost=0.00..3.41 rows=5 width=8) (actual \n> time=0.018..0.038 rows=5 loops=110)\n> Index Cond: \n> (sip.fkserviceinstanceid = \"outer\".fkserviceinstanceid)\n> -> Index Scan using pk_parameter_parameterid \n> on parameter p (cost=0.00..3.02 rows=1 width=13) (actual \n> time=0.011..0.012 rows=0 loops=550)\n> Index Cond: (\"outer\".fkparameterid = \n> p.parameterid)\n> Filter: (name = 'listingType'::text)\n> \n> Total runtime: 113490.582 ms\n> (82 rows)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 11 Jan 2006 13:02:09 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres8.0 planner chooses WRONG plan" } ]
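
One low-effort way to act on the join_collapse_limit question above is to
raise the collapse limits for a single session and re-run the same EXPLAIN
ANALYZE; the values here are arbitrary examples, and note that pushing the
limits past geqo_threshold (12 by default on 8.0) hands the join search to
the genetic optimizer rather than the exhaustive one:

SHOW join_collapse_limit;      -- 8 by default
SHOW from_collapse_limit;      -- 8 by default

SET join_collapse_limit = 20;  -- let the planner re-order every join in the view
SET from_collapse_limit = 20;

EXPLAIN ANALYZE
SELECT * FROM provisioning.alertserviceinstanceold
 WHERE countystate = 'FL' AND countyno = '099' AND status = 'ACTIVE';
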
[ { "msg_contents": "\nHi,\n\nI'm running version 8.1 on a dedicated Sun v20 server (2 AMD x64's)\nwith 4Gb of RAM. I have recently noticed that the performance of\nsome more complex queries is extremely variable and irregular.\nFor example, I currently have a query that returns a small number \nof rows (5) by joining a dozen of tables. Below are the running times\nobtained by repeatedly lauching this query in psql:\n\nTime: 424.848 ms\nTime: 1615.143 ms\nTime: 15036.475 ms\nTime: 83471.683 ms\nTime: 163.224 ms\nTime: 2454.939 ms\nTime: 188.093 ms\nTime: 158.071 ms\nTime: 192.431 ms\nTime: 195.076 ms\nTime: 635.739 ms\nTime: 164549.902 ms\n\nAs you can see, the performance is most of the time pretty good (less\nthan 1 second), but every fourth of fifth time I launch the query\nthe server seems to go into orbit. For the longer running times,\nI can see from top that the server process uses almost 100% of\na CPU.\n\nThis is rather worrisome, as I cannot be confident of the overall performance\nof my application with so much variance in query response times.\n\nI suspect a configuration problem related to the cache mechanism \n(shared_buffers? effective_cache_size?), but to be honest I do not know \nwhere to start to diagnose it. \n\nAny help would be greatly appreciated.\n\nThanks in advance,\n\nJ-P\n\n", "msg_date": "Wed, 11 Jan 2006 14:29:03 -0500", "msg_from": "=?iso-8859-1?Q?Jean-Philippe_C=F4t=E9?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Extremely irregular query performance" } ]
[ { "msg_contents": "Hi,\n \nI've looked around through the docs, but can't seem to find an answer to\nthis. If I change a column's statistics with \"Alter table alter column\nset statistics n\", is there a way I can later go back and see what the\nnumber is for that column? I want to be able to tell which columns I've\nchanged the statistics on, and which ones I haven't.\n \nThanks,\n \nDave\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI’ve looked around through the docs, but can’t\nseem to find an answer to this.  If\nI change a column’s statistics with “Alter table alter column set\nstatistics n”, is there a way I can later go back and see what the number\nis for that column?  I want to be\nable to tell which columns I’ve changed the statistics on, and which ones\nI haven’t.\n \nThanks,\n \nDave", "msg_date": "Wed, 11 Jan 2006 16:05:18 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": true, "msg_subject": "Showing Column Statistics Number" }, { "msg_contents": "On Wed, Jan 11, 2006 at 04:05:18PM -0600, Dave Dutcher wrote:\n> I've looked around through the docs, but can't seem to find an answer to\n> this. If I change a column's statistics with \"Alter table alter column\n> set statistics n\", is there a way I can later go back and see what the\n> number is for that column? I want to be able to tell which columns I've\n> changed the statistics on, and which ones I haven't.\n\npg_attribute.attstattarget\n\nhttp://www.postgresql.org/docs/8.1/interactive/catalog-pg-attribute.html\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 11 Jan 2006 15:21:38 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Showing Column Statistics Number" } ]
[ { "msg_contents": "\nHi,\n\nI'm running version 8.1 on a dedicated Sun v20 server (2 AMD x64's)\nwith 4Gb of RAM. I have recently noticed that the performance of\nsome more complex queries is extremely variable and irregular.\nFor example, I currently have a query that returns a small number \nof rows (5) by joining a dozen of tables. Below are the running times\nobtained by repeatedly lauching this query in psql (nothing else\nwas running on the server at that time):\n\nTime: 424.848 ms\nTime: 1615.143 ms\nTime: 15036.475 ms\nTime: 83471.683 ms\nTime: 163.224 ms\nTime: 2454.939 ms\nTime: 188.093 ms\nTime: 158.071 ms\nTime: 192.431 ms\nTime: 195.076 ms\nTime: 635.739 ms\nTime: 164549.902 ms\n\nAs you can see, the performance is most of the time pretty good (less\nthan 1 second), but every fourth of fifth time I launch the query\nthe server seems to go into orbit. For the longer running times,\nI can see from 'top' that the server process uses almost 100% of\na CPU.\n\nThis is rather worrisome, as I cannot be confident of the overall performance\nof my application with so much variance in query response times.\n\nI suspect a configuration problem related to the cache mechanism \n(shared_buffers? effective_cache_size?), but to be honest I do not know \nwhere to start to diagnose it. I also noticed that the query plan\ncan vary when the same query is launched two times in a row (with\nno other changes to the DB in between). Is there a random aspect to\nthe query optimizer that could explain some of the observed variance\nin performance ?\n\nAny help would be greatly appreciated.\n\nThanks in advance,\n\nJ-P\n\n\n\n", "msg_date": "Wed, 11 Jan 2006 17:37:24 -0500", "msg_from": "=?iso-8859-1?Q?Jean-Philippe_C=F4t=E9?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Extremely irregular query performance" }, { "msg_contents": "=?iso-8859-1?Q?Jean-Philippe_C=F4t=E9?= <[email protected]> writes:\n> I'm running version 8.1 on a dedicated Sun v20 server (2 AMD x64's)\n> with 4Gb of RAM. I have recently noticed that the performance of\n> some more complex queries is extremely variable and irregular.\n> For example, I currently have a query that returns a small number \n> of rows (5) by joining a dozen of tables.\n\nA dozen tables? You're exceeding the geqo_threshold and getting a plan\nthat has some randomness in it. You could either increase\ngeqo_threshold if you can stand the extra planning time, or try\nincreasing geqo_effort to get it to search a little harder and hopefully\nfind a passable plan more often. See\n\nhttp://www.postgresql.org/docs/8.1/static/geqo.html\nhttp://www.postgresql.org/docs/8.1/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-GEQO\n\nI'm kinda surprised that you don't get better results with the default\nsettings. We could tinker some more with the defaults, if you can\nprovide evidence about better values ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jan 2006 18:03:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance " }, { "msg_contents": "On Wed, 2006-01-11 at 16:37, Jean-Philippe Côté wrote:\n> Hi,\n> \n> I'm running version 8.1 on a dedicated Sun v20 server (2 AMD x64's)\n> with 4Gb of RAM. I have recently noticed that the performance of\n> some more complex queries is extremely variable and irregular.\n> For example, I currently have a query that returns a small number \n> of rows (5) by joining a dozen of tables. 
Below are the running times\n> obtained by repeatedly lauching this query in psql (nothing else\n> was running on the server at that time):\n> \n> Time: 424.848 ms\n> Time: 1615.143 ms\n> Time: 15036.475 ms\n> Time: 83471.683 ms\n> Time: 163.224 ms\n> Time: 2454.939 ms\n> Time: 188.093 ms\n> Time: 158.071 ms\n> Time: 192.431 ms\n> Time: 195.076 ms\n> Time: 635.739 ms\n> Time: 164549.902 ms\n> \n> As you can see, the performance is most of the time pretty good (less\n> than 1 second), but every fourth of fifth time I launch the query\n> the server seems to go into orbit. For the longer running times,\n> I can see from 'top' that the server process uses almost 100% of\n> a CPU.\n\nAs mentioned earlier, it could be you're exceeding the GEQO threshold.\n\nIt could also be that you are doing just enough else at the time, and\nhave your shared buffers or sort mem high enough that you're initiating\na swap storm.\n\nMind posting all the parts of your postgresql.conf file you've changed\nfrom the default?\n", "msg_date": "Wed, 11 Jan 2006 17:29:14 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance" }, { "msg_contents": "\nThanks a lot for this info, I was indeed exceeding the genetic\noptimizer's threshold. Now that it is turned off, I get\na very stable response time of 435ms (more or less 5ms) for\nthe same query. It is about three times slower than the best\nI got with the genetic optimizer on, but the overall average\nis much lower.\n\nI'll also try to play with the geqo parameters and see if things\nimprove.\n\nThanks again,\n\nJ-P\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Tom Lane\nSent: January 11, 2006 6:03 PM\nTo: Jean-Philippe Côté\nCc: [email protected]\nSubject: Re: [PERFORM] Extremely irregular query performance \n\n=?iso-8859-1?Q?Jean-Philippe_C=F4t=E9?= <[email protected]> writes:\n> I'm running version 8.1 on a dedicated Sun v20 server (2 AMD x64's)\n> with 4Gb of RAM. I have recently noticed that the performance of\n> some more complex queries is extremely variable and irregular.\n> For example, I currently have a query that returns a small number \n> of rows (5) by joining a dozen of tables.\n\nA dozen tables? You're exceeding the geqo_threshold and getting a plan\nthat has some randomness in it. You could either increase\ngeqo_threshold if you can stand the extra planning time, or try\nincreasing geqo_effort to get it to search a little harder and hopefully\nfind a passable plan more often. See\n\nhttp://www.postgresql.org/docs/8.1/static/geqo.html\nhttp://www.postgresql.org/docs/8.1/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-GEQO\n\nI'm kinda surprised that you don't get better results with the default\nsettings. 
We could tinker some more with the defaults, if you can\nprovide evidence about better values ...\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n", "msg_date": "Wed, 11 Jan 2006 18:50:12 -0500", "msg_from": "=?iso-8859-1?Q?Jean-Philippe_C=F4t=E9?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Extremely irregular query performance " }, { "msg_contents": "If this is a query that will be executed more than once, you can also\navoid incurring the planning overhead multiple times by using PREPARE.\n\n-- Mark Lewis\n\nOn Wed, 2006-01-11 at 18:50 -0500, Jean-Philippe Côté wrote:\n> Thanks a lot for this info, I was indeed exceeding the genetic\n> optimizer's threshold. Now that it is turned off, I get\n> a very stable response time of 435ms (more or less 5ms) for\n> the same query. It is about three times slower than the best\n> I got with the genetic optimizer on, but the overall average\n> is much lower.\n> \n> I'll also try to play with the geqo parameters and see if things\n> improve.\n> \n> Thanks again,\n> \n> J-P\n\n", "msg_date": "Wed, 11 Jan 2006 15:50:43 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance" }, { "msg_contents": "=?iso-8859-1?Q?Jean-Philippe_C=F4t=E9?= <[email protected]> writes:\n> Thanks a lot for this info, I was indeed exceeding the genetic\n> optimizer's threshold. Now that it is turned off, I get\n> a very stable response time of 435ms (more or less 5ms) for\n> the same query. It is about three times slower than the best\n> I got with the genetic optimizer on, but the overall average\n> is much lower.\n\nHmm. It would be interesting to use EXPLAIN ANALYZE to confirm that the\nplan found this way is the same as the best plan found by GEQO, and\nthe extra couple hundred msec is the price you pay for the exhaustive\nplan search. If GEQO is managing to find a plan better than the regular\nplanner then we need to look into why ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jan 2006 22:23:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance " }, { "msg_contents": "On Wed, 2006-01-11 at 22:23 -0500, Tom Lane wrote:\n> =?iso-8859-1?Q?Jean-Philippe_C=F4t=E9?= <[email protected]> writes:\n> > Thanks a lot for this info, I was indeed exceeding the genetic\n> > optimizer's threshold. Now that it is turned off, I get\n> > a very stable response time of 435ms (more or less 5ms) for\n> > the same query. It is about three times slower than the best\n> > I got with the genetic optimizer on, but the overall average\n> > is much lower.\n> \n> Hmm. It would be interesting to use EXPLAIN ANALYZE to confirm that the\n> plan found this way is the same as the best plan found by GEQO, and\n> the extra couple hundred msec is the price you pay for the exhaustive\n> plan search. 
If GEQO is managing to find a plan better than the regular\n> planner then we need to look into why ...\n\nIt seems worth noting in the EXPLAIN whether GEQO has been used to find\nthe plan, possibly along with other factors influencing the plan such as\nenable_* settings.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Thu, 12 Jan 2006 09:48:41 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance" }, { "msg_contents": "On Thu, Jan 12, 2006 at 09:48:41AM +0000, Simon Riggs wrote:\n> On Wed, 2006-01-11 at 22:23 -0500, Tom Lane wrote:\n> > =?iso-8859-1?Q?Jean-Philippe_C=F4t=E9?= <[email protected]> writes:\n> > > Thanks a lot for this info, I was indeed exceeding the genetic\n> > > optimizer's threshold. Now that it is turned off, I get\n> > > a very stable response time of 435ms (more or less 5ms) for\n> > > the same query. It is about three times slower than the best\n> > > I got with the genetic optimizer on, but the overall average\n> > > is much lower.\n> > \n> > Hmm. It would be interesting to use EXPLAIN ANALYZE to confirm that the\n> > plan found this way is the same as the best plan found by GEQO, and\n> > the extra couple hundred msec is the price you pay for the exhaustive\n> > plan search. If GEQO is managing to find a plan better than the regular\n> > planner then we need to look into why ...\n> \n> It seems worth noting in the EXPLAIN whether GEQO has been used to find\n> the plan, possibly along with other factors influencing the plan such as\n> enable_* settings.\n> \n\nIs it the plan that is different in the fastest case with GEQO or is it\nthe time needed to plan that is causing the GEQO to beat the exhaustive\nsearch?\n\nKen\n\n", "msg_date": "Thu, 12 Jan 2006 13:16:18 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance" }, { "msg_contents": "\n\nCan I actully know whether a given plan is excuted with GEQO on ?\nIn other words, if I launch 'explain <query>', I'll get a given plan, but if I re-launch\nthe <query> (withtout the 'explain' keyword), could I get a different\nplan given that GEQO induces some randomness ?\n\n>Is it the plan that is different in the fastest case with GEQO or is it\n>the time needed to plan that is causing the GEQO to beat the exhaustive\n>search?\n\n\n", "msg_date": "Thu, 12 Jan 2006 15:23:14 -0500", "msg_from": "=?us-ascii?Q?Jean-Philippe_Cote?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance" }, { "msg_contents": "On Thu, Jan 12, 2006 at 03:23:14PM -0500, Jean-Philippe Cote wrote:\n> \n> \n> Can I actully know whether a given plan is excuted with GEQO on ?\n> In other words, if I launch 'explain <query>', I'll get a given plan, but if I re-launch\n> the <query> (withtout the 'explain' keyword), could I get a different\n> plan given that GEQO induces some randomness ?\n> \n> >Is it the plan that is different in the fastest case with GEQO or is it\n> >the time needed to plan that is causing the GEQO to beat the exhaustive\n> >search?\n> \nGEQO will be used if the number of joins is over the GEQO limit in\nthe configuration file. The GEQO process is an iterative random\nprocess to find an query plan. The EXPLAIN results are the plan for that\nquery, but not neccessarily for subsequent runs. 
GEQO's advantage is a\nmuch faster plan time than the exhaustive search method normally used.\nIf the resulting plan time is less than the exhaustive search plan time,\nfor short queries you can have the GECO run more quickly than the\nexhaustive search result. Of course, if you PREPARE the query the plan\ntime drops out.\n\nKen\n", "msg_date": "Thu, 12 Jan 2006 15:05:50 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance" }, { "msg_contents": "Jean-Philippe Cote wrote:\n> \n> \n> Can I actully know whether a given plan is excuted with GEQO on ?\n> In other words, if I launch 'explain <query>', I'll get a given plan, but if I re-launch\n> the <query> (withtout the 'explain' keyword), could I get a different\n> plan given that GEQO induces some randomness ?\n> \n> >Is it the plan that is different in the fastest case with GEQO or is it\n> >the time needed to plan that is causing the GEQO to beat the exhaustive\n> >search?\n\nYes, is it likely that when using GEQO you would get a different plan\neach time, so running it with and without EXPLAIN would produce\ndifferent plans.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 13 Jan 2006 23:23:33 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance" }, { "msg_contents": "Simon Riggs wrote:\n> On Wed, 2006-01-11 at 22:23 -0500, Tom Lane wrote:\n> > =?iso-8859-1?Q?Jean-Philippe_C=F4t=E9?= <[email protected]> writes:\n> > > Thanks a lot for this info, I was indeed exceeding the genetic\n> > > optimizer's threshold. Now that it is turned off, I get\n> > > a very stable response time of 435ms (more or less 5ms) for\n> > > the same query. It is about three times slower than the best\n> > > I got with the genetic optimizer on, but the overall average\n> > > is much lower.\n> > \n> > Hmm. It would be interesting to use EXPLAIN ANALYZE to confirm that the\n> > plan found this way is the same as the best plan found by GEQO, and\n> > the extra couple hundred msec is the price you pay for the exhaustive\n> > plan search. If GEQO is managing to find a plan better than the regular\n> > planner then we need to look into why ...\n> \n> It seems worth noting in the EXPLAIN whether GEQO has been used to find\n> the plan, possibly along with other factors influencing the plan such as\n> enable_* settings.\n\nI thought the best solution would be to replace \"QUERY PLAN\" with \"GEQO\nQUERY PLAN\" when GEQO was in use. However, looking at the code, I see\nno way to do that cleanly.\n\nInstead, I added documentation to EXPLAIN to highlight the fact the\nexecution plan will change when GEQO is in use.\n\n(I also removed a documentation mention of the pre-7.3 EXPLAIN output\nbehavior.)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/ref/explain.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/ref/explain.sgml,v\nretrieving revision 1.35\ndiff -c -c -r1.35 explain.sgml\n*** doc/src/sgml/ref/explain.sgml\t4 Jan 2005 00:39:53 -0000\t1.35\n--- doc/src/sgml/ref/explain.sgml\t20 Jan 2006 16:18:53 -0000\n***************\n*** 151,161 ****\n </para>\n \n <para>\n! Prior to <productname>PostgreSQL</productname> 7.3, the plan was\n! emitted in the form of a <literal>NOTICE</literal> message. Now it\n! appears as a query result (formatted like a table with a single\n! text column).\n </para>\n </refsect1>\n \n <refsect1>\n--- 151,162 ----\n </para>\n \n <para>\n! Genetic query optimization (<acronym>GEQO</acronym>) randomly \n! tests execution plans. Therefore, when the number of tables \n! exceeds <varname>geqo</> and genetic query optimization is in use,\n! the execution plan will change each time the statement is executed.\n </para>\n+ \n </refsect1>\n \n <refsect1>", "msg_date": "Fri, 20 Jan 2006 11:19:32 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> <para>\n> ! Genetic query optimization (<acronym>GEQO</acronym>) randomly \n> ! tests execution plans. Therefore, when the number of tables \n> ! exceeds <varname>geqo</> and genetic query optimization is in use,\n> ! the execution plan will change each time the statement is executed.\n> </para>\n\ngeqo_threshold, please --- geqo is a boolean.\n\nPossibly better wording: Therefore, when the number of tables exceeds\ngeqo_threshold causing genetic query optimization to be used, the\nexecution plan is likely to change each time the statement is executed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 11:34:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance " }, { "msg_contents": "\nDone, and paragraph added to 8.1.X. (7.3 mention retained for 8.1.X.)\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > <para>\n> > ! Genetic query optimization (<acronym>GEQO</acronym>) randomly \n> > ! tests execution plans. Therefore, when the number of tables \n> > ! exceeds <varname>geqo</> and genetic query optimization is in use,\n> > ! the execution plan will change each time the statement is executed.\n> > </para>\n> \n> geqo_threshold, please --- geqo is a boolean.\n> \n> Possibly better wording: Therefore, when the number of tables exceeds\n> geqo_threshold causing genetic query optimization to be used, the\n> execution plan is likely to change each time the statement is executed.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 20 Jan 2006 11:42:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Extremely irregular query performance" } ]
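A minimal sketch of the settings discussed in the thread above. The table and column names (t1 through t4, customer_id) are invented for illustration only, and the behaviour assumes a PostgreSQL release of roughly the same era as the thread.

SHOW geqo_threshold;        -- queries with at least this many FROM items are planned by GEQO
SET geqo = off;             -- session-level: always use the exhaustive planner
-- or, less drastically, raise the threshold past the size of the problem query:
SET geqo_threshold = 16;

-- Because GEQO plans are partly random, repeated executions can get different
-- plans; PREPARE plans the statement once, so later EXECUTEs reuse that plan
-- and also skip the per-execution planning cost measured in the thread:
PREPARE big_report(int) AS
    SELECT *
    FROM t1
    JOIN t2 USING (t1_id)
    JOIN t3 USING (t2_id)
    JOIN t4 USING (t3_id)
    WHERE t1.customer_id = $1;

EXECUTE big_report(42);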
[ { "msg_contents": "I do a load of sql joins using primary and foreign keys. What i would like\nto know if PostgreSQL creates indexes on these columns automatically (in\naddition to using them to maintain referential integrity) or do I have to\ncreate an index manually on these columns as indicated below?\n\nCREATE TABLE cities (\n city_id integer primary key,\n city_name varchar(50)\n);\n\nCREATE INDEX city_id_index ON cities(city_id);\n\nThanks for any insight.\n\nBurak\n\nI do a load of sql joins using primary and foreign keys. What i would\nlike to know if PostgreSQL creates indexes on these columns\nautomatically (in addition to using them to maintain referential\nintegrity) or do I have to create an index manually on these columns as\nindicated below?\n\nCREATE TABLE cities ( city_id integer primary key, city_name varchar(50));CREATE INDEX city_id_index ON cities(city_id);Thanks for any insight.\nBurak", "msg_date": "Wed, 11 Jan 2006 14:38:42 -0800", "msg_from": "Burak Seydioglu <[email protected]>", "msg_from_op": true, "msg_subject": "indexes on primary and foreign keys" }, { "msg_contents": "Burak Seydioglu <[email protected]> writes:\n> I do a load of sql joins using primary and foreign keys. What i would like\n> to know if PostgreSQL creates indexes on these columns automatically (in\n> addition to using them to maintain referential integrity) or do I have to\n> create an index manually on these columns as indicated below?\n\nIndexes are only automatically created where needed to enforce a UNIQUE\nconstraint. That includes primary keys, but not foreign keys.\n\nNote that you only really need an index on the referencing (non-unique)\nside of a foreign key if you are worried about performance of DELETEs\nor key changes on the referenced table. If you seldom or never do that,\nyou might want to dispense with the index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jan 2006 18:06:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexes on primary and foreign keys " }, { "msg_contents": "On Wed, Jan 11, 2006 at 02:38:42PM -0800, Burak Seydioglu wrote:\n> I do a load of sql joins using primary and foreign keys. What i would like\n> to know if PostgreSQL creates indexes on these columns automatically (in\n> addition to using them to maintain referential integrity) or do I have to\n> create an index manually on these columns as indicated below?\n> \n> CREATE TABLE cities (\n> city_id integer primary key,\n> city_name varchar(50)\n> );\n> \n> CREATE INDEX city_id_index ON cities(city_id);\n\nPostgreSQL automatically creates indexes on primary keys. If you run\nthe above CREATE TABLE statement in psql you should see a message to\nthat effect:\n\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"cities_pkey\" for table \"cities\"\n\nIf you look at the table definition you should see the primary\nkey's index:\n\ntest=> \\d cities\n Table \"public.cities\"\n Column | Type | Modifiers \n-----------+-----------------------+-----------\n city_id | integer | not null\n city_name | character varying(50) | \nIndexes:\n \"cities_pkey\" PRIMARY KEY, btree (city_id)\n\nSo you don't need to create another index on cities.city_id. 
However,\nPostgreSQL doesn't automatically create an index on the referring\ncolumn of a foreign key constraint, so if you have another table like\n\nCREATE TABLE districts (\n district_id integer PRIMARY KEY,\n district_name varchar(50),\n city_id integer REFERENCES cities\n);\n\nthen you won't automatically get an index on districts.city_id.\nIt's generally a good idea to create one; failure to do so can cause\ndeletes and updates on the referred-to table (cities) to be slow\nbecause referential integrity checks would have to do sequential\nscans on the referring table (districts). Indeed, performance\nproblems for exactly this reason occasionally come up in the mailing\nlists.\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 11 Jan 2006 16:21:57 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexes on primary and foreign keys" }, { "msg_contents": "How about the performance effect on SELECT statements joining multiple\ntables (LEFT JOINS)?\n\nI have been reading all day and here is an excerpt from one article that is\nlocated at http://pgsql.designmagick.com/tutorial.php?id=19&pid=28\n\n[quote]\n\nThe best reason to use an index is for joining multiple tables\ntogether in a single query. When two tables are joined, a record\nthat exists in both tables needs to be used to link them together. If\npossible, the column in both tables should be indexed.\n\n[/quote]\n\nRegarding similar posts, I tried to search the archives but for some reason\nthe search utility is not functioning.\nhttp://search.postgresql.org/archives.search?cs=utf-8&fm=on&st=20&dt=back&q=index\n\nThank you very much for your help.\n\nBurak\n\n\nOn 1/11/06, Michael Fuhr <[email protected]> wrote:\n>\n> On Wed, Jan 11, 2006 at 02:38:42PM -0800, Burak Seydioglu wrote:\n> > I do a load of sql joins using primary and foreign keys. What i would\n> like\n> > to know if PostgreSQL creates indexes on these columns automatically (in\n> > addition to using them to maintain referential integrity) or do I have\n> to\n> > create an index manually on these columns as indicated below?\n> >\n> > CREATE TABLE cities (\n> > city_id integer primary key,\n> > city_name varchar(50)\n> > );\n> >\n> > CREATE INDEX city_id_index ON cities(city_id);\n>\n> PostgreSQL automatically creates indexes on primary keys. If you run\n> the above CREATE TABLE statement in psql you should see a message to\n> that effect:\n>\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n> \"cities_pkey\" for table \"cities\"\n>\n> If you look at the table definition you should see the primary\n> key's index:\n>\n> test=> \\d cities\n> Table \"public.cities\"\n> Column | Type | Modifiers\n> -----------+-----------------------+-----------\n> city_id | integer | not null\n> city_name | character varying(50) |\n> Indexes:\n> \"cities_pkey\" PRIMARY KEY, btree (city_id)\n>\n> So you don't need to create another index on cities.city_id. 
However,\n> PostgreSQL doesn't automatically create an index on the referring\n> column of a foreign key constraint, so if you have another table like\n>\n> CREATE TABLE districts (\n> district_id integer PRIMARY KEY,\n> district_name varchar(50),\n> city_id integer REFERENCES cities\n> );\n>\n> then you won't automatically get an index on districts.city_id.\n> It's generally a good idea to create one; failure to do so can cause\n> deletes and updates on the referred-to table (cities) to be slow\n> because referential integrity checks would have to do sequential\n> scans on the referring table (districts). Indeed, performance\n> problems for exactly this reason occasionally come up in the mailing\n> lists.\n>\n> --\n> Michael Fuhr\n>\n\nHow about the performance effect on SELECT statements joining multiple tables (LEFT JOINS)?\n\nI have been reading all day and here is an excerpt from one article\nthat is located at\nhttp://pgsql.designmagick.com/tutorial.php?id=19&pid=28\n\n[quote]\nThe best reason to use an index is for joining multiple tables together in a single query. When two tables are joined, a recordthat exists in both tables needs to be used to link them together. If possible, the column in both tables should be indexed.\n\n[/quote]\n\nRegarding similar posts, I tried to search the archives but for some reason the search utility is not functioning. \nhttp://search.postgresql.org/archives.search?cs=utf-8&fm=on&st=20&dt=back&q=index\n\nThank you very much for your help.\n\nBurak\nOn 1/11/06, Michael Fuhr <[email protected]> wrote:\nOn Wed, Jan 11, 2006 at 02:38:42PM -0800, Burak Seydioglu wrote:> I do a load of sql joins using primary and foreign keys. What i would like> to know if PostgreSQL creates indexes on these columns automatically (in\n> addition to using them to maintain referential integrity) or do I have to> create an index manually on these columns as indicated below?>> CREATE TABLE cities (>   city_id integer primary key,\n>   city_name varchar(50)> );>> CREATE INDEX city_id_index ON cities(city_id);PostgreSQL automatically creates indexes on primary keys.  If you runthe above CREATE TABLE statement in psql you should see a message to\nthat effect:NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index \"cities_pkey\" for table \"cities\"If you look at the table definition you should see the primarykey's index:\ntest=> \\d cities             Table \"public.cities\"  Column  \n|        \nType          |\nModifiers-----------+-----------------------+----------- city_id   | integer               | not null city_name | character varying(50) |Indexes:    \"cities_pkey\" PRIMARY KEY, btree (city_id)\nSo you don't need to create another index on cities.city_id.  However,PostgreSQL doesn't automatically create an index on the referringcolumn of a foreign key constraint, so if you have another table like\nCREATE TABLE districts (  district_id    integer PRIMARY KEY,  district_name  varchar(50),  city_id        integer REFERENCES cities);then you won't automatically get an index on districts.city_id\n.It's generally a good idea to create one; failure to do so can causedeletes and updates on the referred-to table (cities) to be slowbecause referential integrity checks would have to do sequentialscans on the referring table (districts).  
Indeed, performance\nproblems for exactly this reason occasionally come up in the mailinglists.--Michael Fuhr", "msg_date": "Wed, 11 Jan 2006 15:52:40 -0800", "msg_from": "Burak Seydioglu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: indexes on primary and foreign keys" }, { "msg_contents": "At 07:21 06/01/12, Michael Fuhr wrote:\n>On Wed, Jan 11, 2006 at 02:38:42PM -0800, Burak Seydioglu wrote:\n> > I do a load of sql joins using primary and foreign keys. What i would like\n> > to know if PostgreSQL creates indexes on these columns automatically (in\n> > addition to using them to maintain referential integrity) or do I have to\n> > create an index manually on these columns as indicated below?\n> >\n> > CREATE TABLE cities (\n> > city_id integer primary key,\n> > city_name varchar(50)\n> > );\n> >\n> > CREATE INDEX city_id_index ON cities(city_id);\n>\n>PostgreSQL automatically creates indexes on primary keys. If you run\n>the above CREATE TABLE statement in psql you should see a message to\n>that effect:\n>\n>NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n>\"cities_pkey\" for table \"cities\"\n\nIs there a way to suppress this notice when I create tables in a script?\n\nBest regards,\nKC.\n\n\n", "msg_date": "Thu, 12 Jan 2006 08:36:19 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexes on primary and foreign keys" }, { "msg_contents": "\nOn Jan 12, 2006, at 9:36 , K C Lau wrote:\n\n>> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n>> \"cities_pkey\" for table \"cities\"\n>\n> Is there a way to suppress this notice when I create tables in a \n> script?\n\nSet[1] your log_min_messages to WARNING or higher[2].\n\n[1](http://www.postgresql.org/docs/current/interactive/sql-set.html)\n[2](http://www.postgresql.org/docs/current/interactive/runtime-config- \nlogging.html#RUNTIME-CONFIG-LOGGING-WHEN)\n\nMichael Glaesemann\ngrzm myrealbox com\n\n\n\n", "msg_date": "Thu, 12 Jan 2006 10:26:58 +0900", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexes on primary and foreign keys" }, { "msg_contents": "On Thu, Jan 12, 2006 at 10:26:58AM +0900, Michael Glaesemann wrote:\n> On Jan 12, 2006, at 9:36 , K C Lau wrote:\n> >>NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n> >>\"cities_pkey\" for table \"cities\"\n> >\n> >Is there a way to suppress this notice when I create tables in a \n> >script?\n> \n> Set[1] your log_min_messages to WARNING or higher[2].\n\nOr client_min_messages, depending on where you don't want to see\nthe notice.\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 11 Jan 2006 18:40:57 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexes on primary and foreign keys" }, { "msg_contents": "At 09:26 06/01/12, you wrote:\n\n>On Jan 12, 2006, at 9:36 , K C Lau wrote:\n>\n>>>NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n>>>\"cities_pkey\" for table \"cities\"\n>>\n>>Is there a way to suppress this notice when I create tables in a\n>>script?\n>\n>Set[1] your log_min_messages to WARNING or higher[2].\n>\n>[1](http://www.postgresql.org/docs/current/interactive/sql-set.html)\n>[2](http://www.postgresql.org/docs/current/interactive/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHEN)\n>\n>Michael Glaesemann\n>grzm myrealbox com\n\nThanks. 
The side effect is that it would suppress other notices which might \nbe useful.\n\nI was looking for a way to suppress the notice within the CREATE TABLE \nstatement but could not.\nI noticed that when I specify a constraint name for the primary key, it \nwould create an implicit index with the constraint name. So may be if the \noptional constraint name is specified by the user, then the notice can be \nsuppressed. Indeed the manual already says that the index will be \nautomatically created.\n\nBTW, there's an extra space in link[2] above which I have removed.\n\nBest regards,\nKC. \n\n", "msg_date": "Thu, 12 Jan 2006 11:49:16 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexes on primary and foreign keys" }, { "msg_contents": "K C Lau <[email protected]> writes:\n> Thanks. The side effect is that it would suppress other notices which might \n> be useful.\n\nThere's been some discussion of subdividing the present \"notice\"\ncategory into two subclasses, roughly defined as \"only novices wouldn't\nknow this\" and \"maybe this is interesting\". What's missing at this\npoint is a concrete proposal as to which existing NOTICE messages should\ngo into each category. If you feel like tackling the project, go for it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jan 2006 23:45:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexes on primary and foreign keys " } ]
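Putting the advice from this thread into one runnable snippet, using the same cities/districts example; the index name is an arbitrary choice.

SET client_min_messages = warning;   -- hide the "implicit index" NOTICE in creation scripts

CREATE TABLE cities (
    city_id   integer PRIMARY KEY,   -- gets the implicit index cities_pkey
    city_name varchar(50)
);

CREATE TABLE districts (
    district_id   integer PRIMARY KEY,
    district_name varchar(50),
    city_id       integer REFERENCES cities
);

-- No index is created automatically on the referencing column, so add one to
-- keep DELETE/UPDATE on cities (and joins on city_id) from seq-scanning districts:
CREATE INDEX districts_city_id_idx ON districts (city_id);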
[ { "msg_contents": "\n\n\n\n\n\nHi,\n\nI'm working on a project, whose implementation deals with PostgreSQL. A brief description of our application is given below.\n\nI'm running version 8.0 on a dedicated server 1Gb of RAM. \nmy database isn't complex, it contains just 2 simple tables.\n\nCREATE TABLE cookies (\n domain varchar(50) NOT NULL,\n path varchar(50) NOT NULL,\n name varchar(50) NOT NULL,\n principalid varchar(50) NOT NULL,\n host text NOT NULL,\n value text NOT NULL,\n secure bool NOT NULL,\n timestamp timestamp with time zone NOT NULL DEFAULT \nCURRENT_TIMESTAMP+TIME '04:00:00',\n PRIMARY KEY (domain,path,name,principalid)\n)\n\nCREATE TABLE liberty (\n principalid varchar(50) NOT NULL,\n requestid varchar(50) NOT NULL,\n spassertionurl text NOT NULL,\n libertyversion varchar(50) NOT NULL,\n relaystate varchar(50) NOT NULL,\n PRIMARY KEY (principalid)\n)\n\nI'm developping an application that uses the libpqxx to execute \npsql queries on the database and have to execute 500 requests at the same time.\n\n\nUPDATE cookies SET host='ping.icap-elios.com', value= '54E5B5491F27C0177083795F2E09162D', secure=FALSE, \ntimestamp=CURRENT_TIMESTAMP+INTERVAL '14400 SECOND' WHERE \ndomain='ping.icap-elios.com' AND path='/tfs' AND principalid='192.168.8.219' AND \nname='jsessionid'\n\nSELECT path, upper(name) AS name, value FROM cookies WHERE timestamp<CURRENT_TIMESTAMP AND principalid='192.168.8.219' AND \nsecure=FALSE AND (domain='ping.icap-elios.com' OR domain='.icap-elios.com')\n\nI have to notify that the performance of is extremely variable and irregular.\nI can also see that the server process uses almost 100% of\na CPU.\n\nI'm using the default configuration file, and i m asking if i have to change some paramters to have a good performance.\n\nAny help would be greatly appreciated.\n\nThanks,\n\n\n\n", "msg_date": "Thu, 12 Jan 2006 01:32:10 +0100", "msg_from": "Jamal Ghaffour <[email protected]>", "msg_from_op": true, "msg_subject": "Please Help: PostgreSQL performance Optimization" }, { "msg_contents": "On Thu, 12 Jan 2006 01:32:10 +0100\nJamal Ghaffour <[email protected]> wrote:\n\n> I'm using the default configuration file, and i m asking if i have to\n> change some paramters to have a good performance.\n\n In general the answer is yes. The default is a pretty good best guess\n at what sorts of values work for your \"typical system\", but if you run\n into performance problems the config file is where you should look\n first, provided you've done the simple things like adding good\n indexes, vacumm analyze, etc. \n\n You'll want to consult the following various documentation out there\n to help your properly tune your configuration: \n\n http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n http://www.powerpostgresql.com/Docs\n http://www.powerpostgresql.com/PerfList\n http://www.revsys.com/writings/postgresql-performance.html\n\n Hopefully these will help you understand how to set your configuration\n values. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 13 Jan 2006 11:41:26 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please Help: PostgreSQL performance Optimization" } ]
[ { "msg_contents": "Hi,\n\nI've got a set-returning function, defined as STABLE, that I reference twice \nwithin a single query, yet appears to be evaluated via two seperate function \nscans. I created a simple query that calls the function below and joins the \nresults to itself (Note: in case you wonder why I'd do such a query, it's \nnot my actual query, which is much more complex. I just created this simple \nquery to try to test out the 'stable' behavior).\n\n\nselect proname,provolatile from pg_proc where proname = 'get_tran_filesize';\n proname | provolatile\n----------------------------+-------------\n get_tran_filesize | s\n(1 row)\n\n\nexplain analyze\nselect * from \n get_tran_filesize('2005-12-11 00:00:00-08','2006-01-11 \n15:58:33-08','{228226,228222,228210}');\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Function Scan on get_tran_filesize (cost=0.00..12.50 rows=1000 width=40) \n(actual time=49.522..49.524 rows=3 loops=1)\n Total runtime: 49.550 ms\n(2 rows)\n\n\nexplain analyze\nselect * from \n get_tran_filesize('2005-12-11 00:00:00-08','2006-01-11 \n15:58:33-08','{228226,228222,228210}') gt,\n get_tran_filesize('2005-12-11 00:00:00-08','2006-01-11 \n15:58:33-08','{228226,228222,228210}') gt2\nwhere gt.tran_id = gt2.tran_id;\n\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=124.66..204.66 rows=5000 width=80) (actual \ntime=83.027..83.040 rows=3 loops=1)\n Merge Cond: (\"outer\".tran_id = \"inner\".tran_id)\n -> Sort (cost=62.33..64.83 rows=1000 width=40) (actual \ntime=40.250..40.251 rows=3 loops=1)\n Sort Key: gt.tran_id\n -> Function Scan on get_tran_filesize gt (cost=0.00..12.50 \nrows=1000 width=40) (actual time=40.237..40.237 rows=3 loops=1)\n -> Sort (cost=62.33..64.83 rows=1000 width=40) (actual \ntime=42.765..42.767 rows=3 loops=1)\n Sort Key: gt2.tran_id\n -> Function Scan on get_tran_filesize gt2 (cost=0.00..12.50 \nrows=1000 width=40) (actual time=42.748..42.751 rows=3 loops=1)\n Total runtime: 83.112 ms\n(9 rows)\n\n\nIf I do get this working, then my question is, if I reference this function \nwithin a single query, but within seperate subqueries within the query, will \nit be re-evaluated each time, or just once. 
Basically, I'm not clear on the \ndefinition of \"surrounding query\" in the following exerpt from the Postgreql \ndocumentation:\n\nA STABLE function cannot modify the database and is guaranteed to return the \nsame results given the same arguments for all calls within a single\nsurrounding query.\n\nThanks,\n\nMark\n", "msg_date": "Wed, 11 Jan 2006 16:41:20 -0800", "msg_from": "Mark Liberman <[email protected]>", "msg_from_op": true, "msg_subject": "Stable function being evaluated more than once in a single query" }, { "msg_contents": "Mark Liberman <[email protected]> writes:\n> I've got a set-returning function, defined as STABLE, that I reference twice\n> within a single query, yet appears to be evaluated via two seperate function \n> scans.\n\nThere is no guarantee, express or implied, that this won't be the case.\n\n(Seems like we just discussed this a couple days ago...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Jan 2006 23:33:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stable function being evaluated more than once in a single query " }, { "msg_contents": "On Wed, Jan 11, 2006 at 11:33:23PM -0500, Tom Lane wrote:\n> Mark Liberman <[email protected]> writes:\n> > I've got a set-returning function, defined as STABLE, that I reference twice\n> > within a single query, yet appears to be evaluated via two seperate function \n> > scans.\n> \n> There is no guarantee, express or implied, that this won't be the case.\n> \n> (Seems like we just discussed this a couple days ago...)\n\nWell, from 32.6:\n\n\"This category allows the optimizer to optimize away multiple calls of\nthe function within a single query.\"\n\nThat could certainly be read as indicating that if the function is used\ntwice in one query it could be optimized to one call.\n\nIs the issue that the optimizer won't combine two function calls (ie:\nSELECT foo(..) ... WHERE foo(..)), or is it that sometimes it won't make\nthe optimization (maybe depending on the query plan, for example)?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 13 Jan 2006 18:06:40 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stable function being evaluated more than once in a single query" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Is the issue that the optimizer won't combine two function calls (ie:\n> SELECT foo(..) ... WHERE foo(..)), or is it that sometimes it won't make\n> the optimization (maybe depending on the query plan, for example)?\n\nWhat the STABLE category actually does is give the planner permission to\nuse the function within an indexscan qualification, eg,\n\tWHERE indexed_column = f(42)\nSince an indexscan involves evaluating the comparison expression just\nonce and using its value to search the index, this would be incorrect\nif the expression's value might change from row to row. 
(For VOLATILE\nfunctions, we assume that the correct behavior is the naive SQL\nsemantics of actually computing the WHERE clause at each candidate row.)\n\nThere is no function cache and no checking for duplicate expressions.\nI think we do check for duplicate aggregate expressions, but not\nanything else.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Jan 2006 19:27:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stable function being evaluated more than once in a single query " }, { "msg_contents": "Adding -docs...\n\nOn Fri, Jan 13, 2006 at 07:27:28PM -0500, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > Is the issue that the optimizer won't combine two function calls (ie:\n> > SELECT foo(..) ... WHERE foo(..)), or is it that sometimes it won't make\n> > the optimization (maybe depending on the query plan, for example)?\n> \n> What the STABLE category actually does is give the planner permission to\n> use the function within an indexscan qualification, eg,\n> \tWHERE indexed_column = f(42)\n> Since an indexscan involves evaluating the comparison expression just\n> once and using its value to search the index, this would be incorrect\n> if the expression's value might change from row to row. (For VOLATILE\n> functions, we assume that the correct behavior is the naive SQL\n> semantics of actually computing the WHERE clause at each candidate row.)\n> \n> There is no function cache and no checking for duplicate expressions.\n> I think we do check for duplicate aggregate expressions, but not\n> anything else.\n \nIn that case I'd say that the sSTABLE section of 32.6 should be changed\nto read:\n\nA STABLE function cannot modify the database and is guaranteed to\nreturn the same results given the same arguments for all calls within a\nsingle surrounding query. This category gives the planner permission to\nuse the function within an indexscan qualification. (Since an indexscan\ninvolves evaluating the comparison expression just once and using its\nvalue to search the index, this would be incorrect if the expression's\nvalue might change from row to row.) There is no function cache and no\nchecking for duplicate expressions.\n\nI can provide a patch to that effect if it's easier...\n\nOn a related note, would it be difficult to recognize multiple calls of\nthe same function in one query? ISTM that would be a win for all but the\nmost trivial of functions...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n", "msg_date": "Fri, 13 Jan 2006 18:43:48 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Stable function being evaluated more than once in a\n\tsingle query" }, { "msg_contents": "Here is updated documentation for STABLE. I just changed a few words\nfor clarification.\n\n---------------------------------------------------------------------------\n\nJim C. Nasby wrote:\n> Adding -docs...\n> \n> On Fri, Jan 13, 2006 at 07:27:28PM -0500, Tom Lane wrote:\n> > \"Jim C. 
Nasby\" <[email protected]> writes:\n> > > Is the issue that the optimizer won't combine two function calls (ie:\n> > > SELECT foo(..) ... WHERE foo(..)), or is it that sometimes it won't make\n> > > the optimization (maybe depending on the query plan, for example)?\n> > \n> > What the STABLE category actually does is give the planner permission to\n> > use the function within an indexscan qualification, eg,\n> > \tWHERE indexed_column = f(42)\n> > Since an indexscan involves evaluating the comparison expression just\n> > once and using its value to search the index, this would be incorrect\n> > if the expression's value might change from row to row. (For VOLATILE\n> > functions, we assume that the correct behavior is the naive SQL\n> > semantics of actually computing the WHERE clause at each candidate row.)\n> > \n> > There is no function cache and no checking for duplicate expressions.\n> > I think we do check for duplicate aggregate expressions, but not\n> > anything else.\n> \n> In that case I'd say that the sSTABLE section of 32.6 should be changed\n> to read:\n> \n> A STABLE function cannot modify the database and is guaranteed to\n> return the same results given the same arguments for all calls within a\n> single surrounding query. This category gives the planner permission to\n> use the function within an indexscan qualification. (Since an indexscan\n> involves evaluating the comparison expression just once and using its\n> value to search the index, this would be incorrect if the expression's\n> value might change from row to row.) There is no function cache and no\n> checking for duplicate expressions.\n> \n> I can provide a patch to that effect if it's easier...\n> \n> On a related note, would it be difficult to recognize multiple calls of\n> the same function in one query? ISTM that would be a win for all but the\n> most trivial of functions...\n> -- \n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/xfunc.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/xfunc.sgml,v\nretrieving revision 1.109\ndiff -c -c -r1.109 xfunc.sgml\n*** doc/src/sgml/xfunc.sgml\t29 Nov 2005 01:46:54 -0000\t1.109\n--- doc/src/sgml/xfunc.sgml\t19 Jan 2006 22:43:58 -0000\n***************\n*** 899,911 ****\n <para>\n A <literal>STABLE</> function cannot modify the database and is\n guaranteed to return the same results given the same arguments\n! for all calls within a single surrounding query. This category\n! allows the optimizer to optimize away multiple calls of the function\n! within a single query. In particular, it is safe to use an expression\n! containing such a function in an index scan condition. (Since an\n! index scan will evaluate the comparison value only once, not once at\n! each row, it is not valid to use a <literal>VOLATILE</> function in\n! 
an index scan condition.)\n </para>\n </listitem>\n <listitem>\n--- 899,911 ----\n <para>\n A <literal>STABLE</> function cannot modify the database and is\n guaranteed to return the same results given the same arguments\n! for all rows within a single statement. This category allows the\n! optimizer to optimize multiple calls of the function to a single\n! call. In particular, it is safe to use an expression containing\n! such a function in an index scan condition. (Since an index scan\n! will evaluate the comparison value only once, not once at each\n! row, it is not valid to use a <literal>VOLATILE</> function in an\n! index scan condition.)\n </para>\n </listitem>\n <listitem>", "msg_date": "Thu, 19 Jan 2006 17:52:14 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stable function being evaluated more than once in a single" } ]
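Since STABLE only licenses index-scan use and implies no caching of duplicate calls, one common workaround for an expensive set-returning function referenced several times is to materialize its result once and reuse it. A sketch reusing the function call from this thread (the temp table name is invented):

-- Evaluate the function exactly once ...
CREATE TEMP TABLE tran_filesize AS
SELECT *
FROM get_tran_filesize('2005-12-11 00:00:00-08',
                       '2006-01-11 15:58:33-08',
                       '{228226,228222,228210}');

-- ... then join the materialized result to itself (or reference it from
-- several subqueries) without re-running the function:
SELECT *
FROM tran_filesize gt
JOIN tran_filesize gt2 ON gt.tran_id = gt2.tran_id;

DROP TABLE tran_filesize;   -- or simply let it vanish at end of session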
[ { "msg_contents": "Hi all,\n\nIs PostgreSQL able to throw unnecessary joins?\nFor example I have two tables, and I join then with their primary keys, say\ntype of bigint . In this case if I don't reference to one of the\ntables anywhere except the join condition, then the join can be eliminated.\nOr if I do a \"table1 left join table2 (table1.referer=table2.id)\" (N : 1\nrelationship), and I don't reference table2 anywhere else, then it is\nunnecessary.\nPrimary key - primary key joins are often generated by O/R mappers. These\ngenerated queries could be optimized even more by not joining if not\nnecessary.\n\nYou may say that I should not write such queries. The truth is that the O/R\nmapper is generating queries on views, and it does not use every field every\ntime, but even so the query of the view is executed with the same plan by\nPostgreSQL, although some joins are unnecessary.\n\nSo basically this all is relevant only with views.\n\nBest Regards,\nOtto\n\nHi all,\n \nIs PostgreSQL able to throw unnecessary joins?\nFor example I have two tables, and I join then with their primary keys, say type of bigint . In this case if I don't reference to one of the tables anywhere except the join condition, then the join can be eliminated. \n\nOr if I do a \"table1 left join table2 (table1.referer=table2.id)\"  (N : 1 relationship), and I don't reference table2 anywhere else, then it is unnecessary.\nPrimary key - primary key joins are often generated by O/R mappers. These generated queries could be optimized even more by not joining if not necessary.\n \nYou may say that I should not write such queries. The truth is that the O/R mapper is generating queries on views, and it does not use every field every time, but even so the query of the view is executed with the same plan by PostgreSQL, although some joins are unnecessary.\n\n \nSo basically this all is relevant only with views.\n \nBest Regards,\nOtto", "msg_date": "Thu, 12 Jan 2006 13:18:58 +0100", "msg_from": "=?ISO-8859-1?Q?Ott=F3_Havasv=F6lgyi?= <[email protected]>", "msg_from_op": true, "msg_subject": "Throwing unnecessary joins away" }, { "msg_contents": "Ott� Havasv�lgyi wrote:\n> Hi all,\n> \n> Is PostgreSQL able to throw unnecessary joins?\n> For example I have two tables, and I join then with their primary keys, \n> say type of bigint . In this case if I don't reference to one of the \n> tables anywhere except the join condition, then the join can be eliminated.\n> Or if I do a \"table1 left join table2 (table1.referer=table2.id)\" (N : \n> 1 relationship), and I don't reference table2 anywhere else, then it is \n> unnecessary.\n\nIt cannot possibly remove \"unnecessary joins\", simply because the join \ninfluences whether a tuple in the referenced table gets selected and how many times.\n\nAlex\n\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Thu, 12 Jan 2006 13:35:07 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Throwing unnecessary joins away" }, { "msg_contents": "Hi,\nAs far as I know SQL Server has some similar feature. 
It does not join\nif not necessary, more exactly: if the result would be the same if it\njoined the table.\nHere is another example:\nhttp://www.ianywhere.com/developer/product_manuals/sqlanywhere/0902/en/html/dbugen9/00000468.htm\nThis would be a fantastic feature.\nBest Regards,\nOtto\n\n\n2006/1/12, Alessandro Baretta <[email protected]>:\n> Ottó Havasvölgyi wrote:\n> > Hi all,\n> >\n> > Is PostgreSQL able to throw unnecessary joins?\n> > For example I have two tables, and I join then with their primary keys,\n> > say type of bigint . In this case if I don't reference to one of the\n> > tables anywhere except the join condition, then the join can be eliminated.\n> > Or if I do a \"table1 left join table2 (table1.referer=table2.id)\" (N :\n> > 1 relationship), and I don't reference table2 anywhere else, then it is\n> > unnecessary.\n>\n> It cannot possibly remove \"unnecessary joins\", simply because the join\n> influences whether a tuple in the referenced table gets selected and how many times.\n>\n> Alex\n>\n>\n> --\n> *********************************************************************\n> http://www.barettadeit.com/\n> Baretta DE&IT\n> A division of Baretta SRL\n>\n> tel. +39 02 370 111 55\n> fax. +39 02 370 111 54\n>\n> Our technology:\n>\n> The Application System/Xcaml (AS/Xcaml)\n> <http://www.asxcaml.org/>\n>\n> The FreerP Project\n> <http://www.freerp.org/>\n>\n", "msg_date": "Thu, 12 Jan 2006 14:41:30 +0100", "msg_from": "=?ISO-8859-1?Q?Ott=F3_Havasv=F6lgyi?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Throwing unnecessary joins away" }, { "msg_contents": "=?ISO-8859-1?Q?Ott=F3_Havasv=F6lgyi?= <[email protected]> writes:\n> As far as I know SQL Server has some similar feature. It does not join\n> if not necessary, more exactly: if the result would be the same if it\n> joined the table.\n\nI find it really really hard to believe that such cases arise often\nenough to justify having the planner spend cycles checking for them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Jan 2006 10:53:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Throwing unnecessary joins away " }, { "msg_contents": "Dear Tom,\n\nNot sure about Otto's exact problem, but he did mention views, and I'd feel \nmore comfortable if you told me that view-based queries are re-planned based \non actual conditions etc. Are they?\n\nAlso, if you find it unlikely (or very rare) then it might be a configurable \nparameter. If someone finds it drastically improving (some of) their \nqueries, it'd be possible to enable this feature in expense of extra planner \ncycles (on all queries).\n\nWhat I'd be concerned about, is whether the developers' time spent on this \nfeature would worth it. :)\n\n--\nG.\n\n\nOn 2006.01.12. 16:53, Tom Lane wrote:\n> =?ISO-8859-1?Q?Ott=F3_Havasv=F6lgyi?= <[email protected]> writes:\n>> As far as I know SQL Server has some similar feature. It does not join\n>> if not necessary, more exactly: if the result would be the same if it\n>> joined the table.\n> \n> I find it really really hard to believe that such cases arise often\n> enough to justify having the planner spend cycles checking for them.\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Thu, 12 Jan 2006 17:25:25 +0100", "msg_from": "=?ISO-8859-2?Q?Sz=FBcs_G=E1bor?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Throwing unnecessary joins away" }, { "msg_contents": "Hi,\n\nI think it would be sufficient only for views. 
In other cases the\nprogrammer can optimize himself. But a view can be a join of other\ntables, and it is not sure that all of them are always needed. It all\ndepends on what I select from the view.\nThis information could even be calculted at view creation time. Of\ncource this requires that views are handled in a bit more special way,\nnot just a view definition that is substituted into the original query\n(as far as I know the current implementation is similar to this. Sorry\nif not).\nWhat do you think about this idea? Of course it is not trivial to\nimplement, but the result is really great.\n\nPostgres could determine at creation time, if this kind of\noptimization is possible at all or not. It can happan though that not\nall information is available (I mean unique index or foreign key) at\nthat time. So this optimiztaion info could be refreshed later by a\ncommand, \"ALTER VIEW <viewname> ANALYZE\" or \"ANALYZE <view name>\"\nsimply.\nPostgres could also establish at creation time that for a given column\nin the selection list which source table(s) are required. This is\nprobably not sufficient, but I haven't thought is through thouroughly\nyet. And I am not that familiar with the current optimizer internals.\nAnd one should be able to turn off this optimization, so that view\ncreation takes not longer than now. If the optimizer finds no\noptimization info in the catalog, it behaves like now.\nI hope you see this worth.\nThis all is analogue to statistics collection.\n\nThanks for reading,\nOtto\n\n\n2006/1/12, Tom Lane <[email protected]>:\n> =?ISO-8859-1?Q?Ott=F3_Havasv=F6lgyi?= <[email protected]> writes:\n> > As far as I know SQL Server has some similar feature. It does not join\n> > if not necessary, more exactly: if the result would be the same if it\n> > joined the table.\n>\n> I find it really really hard to believe that such cases arise often\n> enough to justify having the planner spend cycles checking for them.\n>\n> regards, tom lane\n>\n", "msg_date": "Thu, 12 Jan 2006 18:00:07 +0100", "msg_from": "=?ISO-8859-1?Q?Ott=F3_Havasv=F6lgyi?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Throwing unnecessary joins away" }, { "msg_contents": "On Thu, 2006-01-12 at 11:00, Ottó Havasvölgyi wrote:\n> Hi,\n> \n> I think it would be sufficient only for views. In other cases the\n> programmer can optimize himself. But a view can be a join of other\n> tables, and it is not sure that all of them are always needed. It all\n> depends on what I select from the view.\n\nThe idea that you could throw away joins only works for outer joins. \nI.e. if you did:\n\nselect a.x, a.y, a.z from a left join b (on a.id=b.aid) \n\nthen you could throw away the join to b. But if it was a regular inner\njoin then you couldn't know whether or not you needed to join to b\nwithout actually joining to b...\n", "msg_date": "Thu, 12 Jan 2006 11:03:02 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Throwing unnecessary joins away" }, { "msg_contents": "Hi,\n\nIf the join is to a primary key or notnull unique column(s), then\ninner join is also ok. 
But of course left join is the simpler case.\nAn example:\n\ncreate table person (id serial primary key, name varchar not null);\ncreate table pet (id serial primary key, name varchar not null,\nperson_id int not null references person(id));\ncreate view v_pet_person as select pet.id as pet_id, pet.name as\npet_name, person_id as person_id, person.name as person_name from pet\njoin person (pet.person_id=person.id);\n\nAt this point we know that optimization may be possible because of the\nprimary key on person. The optimization depends on the primary key\nconstraint. Kindof internal dependency.\nWe can find out that which \"from-element\" is a given field's source as\nfar they are simple references. This can be stored.\nThen query the view:\n\nselect pet_name, person_id from v_pet_person where person_id=2;\n\nIn this case we don't need the join.\nThese queries are usually dynamically generated, the selection list\nand the where condition is the dynamic part.\n\nBest Regards,\nOtto\n\n\n2006/1/12, Scott Marlowe <[email protected]>:\n> On Thu, 2006-01-12 at 11:00, Ottó Havasvölgyi wrote:\n> > Hi,\n> >\n> > I think it would be sufficient only for views. In other cases the\n> > programmer can optimize himself. But a view can be a join of other\n> > tables, and it is not sure that all of them are always needed. It all\n> > depends on what I select from the view.\n>\n> The idea that you could throw away joins only works for outer joins.\n> I.e. if you did:\n>\n> select a.x, a.y, a.z from a left join b (on a.id=b.aid)\n>\n> then you could throw away the join to b. But if it was a regular inner\n> join then you couldn't know whether or not you needed to join to b\n> without actually joining to b...\n>\n", "msg_date": "Thu, 12 Jan 2006 19:51:22 +0100", "msg_from": "=?ISO-8859-1?Q?Ott=F3_Havasv=F6lgyi?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Throwing unnecessary joins away" }, { "msg_contents": "On Thu, Jan 12, 2006 at 01:35:07PM +0100, Alessandro Baretta wrote:\n> Ott? Havasv?lgyi wrote:\n> >Hi all,\n> > \n> >Is PostgreSQL able to throw unnecessary joins?\n> >For example I have two tables, and I join then with their primary keys, \n> >say type of bigint . In this case if I don't reference to one of the \n> >tables anywhere except the join condition, then the join can be eliminated.\n> >Or if I do a \"table1 left join table2 (table1.referer=table2.id)\" (N : \n> >1 relationship), and I don't reference table2 anywhere else, then it is \n> >unnecessary.\n> \n> It cannot possibly remove \"unnecessary joins\", simply because the join \n> influences whether a tuple in the referenced table gets selected and how \n> many times.\n\nIt can remove them if it's an appropriate outer join, or if there is\nappropriate RI that proves that the join won't change what data is\nselected.\n\nA really common example of this is creating views that pull in tables\nthat have text names to go with id's, ie:\n\nCREATE TABLE bug_status(\n bug_status_id serial PRIMARY KEY\n , bug_status_name text NOT NULL UNIQUE\n);\n\nCREATE TABLE bug(\n ...\n , bug_status_id int REFERENCES bug_status(bug_status_id)\n);\n\nCREATE VIEW bug_v AS\n SELECT b.*, bs.bug_status_name FROM bug b JOIN bug_status NATURAL\n;\n\nIf you have a bunch of cases like that and start building views on views\nit's very easy to end up in situations where you don't have any need of\nbug_status_name at all. And because of the RI, you know that removing\nthe join can't possibly change the bug.* portion of that view.\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 13 Jan 2006 18:17:58 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Throwing unnecessary joins away" }, { "msg_contents": "On Thu, Jan 12, 2006 at 07:51:22PM +0100, Ott? Havasv?lgyi wrote:\n> Hi,\n> \n> If the join is to a primary key or notnull unique column(s), then\n> inner join is also ok. But of course left join is the simpler case.\n> An example:\n\nActually, you need both the unique/pk constraint, and RI (a fact I\nmissed in the email I just sent). Nullability is another consideration\nas well. But there certainly are some pretty common cases that can be\noptimized for.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 13 Jan 2006 18:22:05 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Throwing unnecessary joins away" } ]
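For anyone who wants to experiment with the example from this thread, here is a runnable version of the schema (the view's join condition is completed with an ON clause, which the original sketch omitted), followed by the manual workaround of querying the base table when the joined columns are not needed; that workaround is my addition, not advice given in the thread.

CREATE TABLE person (
    id   serial PRIMARY KEY,
    name varchar NOT NULL
);

CREATE TABLE pet (
    id        serial PRIMARY KEY,
    name      varchar NOT NULL,
    person_id int NOT NULL REFERENCES person(id)
);

CREATE VIEW v_pet_person AS
SELECT pet.id      AS pet_id,
       pet.name    AS pet_name,
       person_id   AS person_id,
       person.name AS person_name
FROM pet
JOIN person ON pet.person_id = person.id;

-- Selecting only pet columns through the view still plans the join to person ...
EXPLAIN SELECT pet_name, person_id FROM v_pet_person WHERE person_id = 2;

-- ... so until the planner can prove such a join away, the practical route is
-- to hit the base table directly when no person columns are needed:
SELECT name AS pet_name, person_id FROM pet WHERE person_id = 2;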
[ { "msg_contents": "Porting app from 7.3 to 8.1, have hit a query that is slower. Everything\nis analyzed / vacuumed appropriately. I've managed to pare the query\ndown into something manageable that still gives me a problem, it looks\nlike this:\n\nSELECT \n * \nFROM\n (\n SELECT\n software_download.*\n FROM\n (\n SELECT\n host_id, max(mtime) as mtime\n FROM \n software_download\n JOIN software_binary USING (software_binary_id)\n WHERE\n binary_type_id IN (3,5,6) AND bds_status_id not in (6,17,18)\n GROUP BY\n host_id, software_binary_id\n ) latest_download\n JOIN software_download using (host_id,mtime)\n ) ld\n LEFT JOIN\n (\n\tSELECT\n entityid, rmsbinaryid, rmsbinaryid as software_binary_id, timestamp as downloaded, ia.host_id\n FROM\n (\n\t\tSELECT\n entityid, rmsbinaryid,max(msgid) as msgid\n FROM\n msg306u\n WHERE\n downloadstatus=1\n GROUP BY entityid,rmsbinaryid\n ) a1\n JOIN myapp_app ia on (entityid=myapp_app_id)\n JOIN (\n\t\t SELECT \n\t\t\t*\n FROM \n\t\t\tmsg306u\n WHERE\n downloadstatus != 0\n ) a2 USING(entityid,rmsbinaryid,msgid)\n ) aa USING (host_id,software_binary_id)\n\n\n\nThe problem seems to stem from 8.1's thinking that using a nested loop\nleft join is a good idea. The 7.3 explain plan looks like this:\n\n Hash Join (cost=10703.38..11791.64 rows=1 width=150) (actual time=2550.23..2713.26 rows=475 loops=1)\n Hash Cond: (\"outer\".host_id = \"inner\".host_id)\n Join Filter: (\"outer\".software_binary_id = \"inner\".rmsbinaryid)\n -> Merge Join (cost=1071.80..2160.07 rows=1 width=110) (actual time=93.16..252.12 rows=475 loops=1)\n Merge Cond: (\"outer\".host_id = \"inner\".host_id)\n Join Filter: (\"inner\".mtime = \"outer\".mtime)\n -> Index Scan using software_download_host_id on software_download (cost=0.00..973.16 rows=18513 width=98) (actual time=0.05..119.89 rows=15587 loops=1)\n -> Sort (cost=1071.80..1072.81 rows=403 width=20) (actual time=90.82..94.97 rows=7328 loops=1)\n Sort Key: latest_download.host_id\n -> Subquery Scan latest_download (cost=1014.00..1054.34 rows=403 width=20) (actual time=85.60..90.12 rows=475 loops=1)\n -> Aggregate (cost=1014.00..1054.34 rows=403 width=20) (actual time=85.59..89.27 rows=475 loops=1)\n -> Group (cost=1014.00..1044.26 rows=4034 width=20) (actual time=85.55..87.61 rows=626 loops=1)\n -> Sort (cost=1014.00..1024.09 rows=4034 width=20) (actual time=85.54..85.86 rows=626 loops=1)\n Sort Key: software_download.host_id, software_download.software_binary_id\n -> Hash Join (cost=21.64..772.38 rows=4034 width=20) (actual time=1.06..84.14 rows=626 loops=1)\n Hash Cond: (\"outer\".software_binary_id = \"inner\".software_binary_id)\n -> Seq Scan on software_download (cost=0.00..565.98 rows=17911 width=16) (actual time=0.06..67.26 rows=15364 loops=1)\n Filter: ((bds_status_id <> 6) AND (bds_status_id <> 17) AND (bds_status_id <> 18))\n -> Hash (cost=21.59..21.59 rows=20 width=4) (actual time=0.94..0.94 rows=0 loops=1)\n -> Seq Scan on software_binary (cost=0.00..21.59 rows=20 width=4) (actual time=0.32..0.91 rows=23 loops=1)\n Filter: ((binary_type_id = 3) OR (binary_type_id = 5) OR (binary_type_id = 6))\n -> Hash (cost=9631.57..9631.57 rows=1 width=40) (actual time=2457.04..2457.04 rows=0 loops=1)\n -> Merge Join (cost=9495.38..9631.57 rows=1 width=40) (actual time=2345.77..2456.74 rows=240 loops=1)\n Merge Cond: ((\"outer\".rmsbinaryid = \"inner\".rmsbinaryid) AND (\"outer\".msgid = \"inner\".msgid) AND (\"outer\".entityid = \"inner\".entityid))\n -> Sort (cost=4629.24..4691.15 rows=24761 width=20) (actual time=514.19..539.04 
rows=25544 loops=1)\n Sort Key: msg306u.rmsbinaryid, msg306u.msgid, msg306u.entityid\n -> Seq Scan on msg306u (cost=0.00..2556.22 rows=24761 width=20) (actual time=0.08..228.09 rows=25544 loops=1)\n Filter: (downloadstatus <> '0'::text)\n -> Sort (cost=4866.14..4872.33 rows=2476 width=20) (actual time=1831.55..1831.68 rows=241 loops=1)\n Sort Key: a1.rmsbinaryid, a1.msgid, a1.entityid\n -> Hash Join (cost=4429.43..4726.56 rows=2476 width=20) (actual time=1724.39..1830.63 rows=325 loops=1)\n Hash Cond: (\"outer\".entityid = \"inner\".myapp_app_id)\n -> Subquery Scan a1 (cost=4363.24..4610.85 rows=2476 width=12) (actual time=1714.04..1818.66 rows=325 loops=1)\n -> Aggregate (cost=4363.24..4610.85 rows=2476 width=12) (actual time=1714.03..1818.08 rows=325 loops=1)\n -> Group (cost=4363.24..4548.95 rows=24761 width=12) (actual time=1714.01..1796.43 rows=25544 loops=1)\n -> Sort (cost=4363.24..4425.15 rows=24761 width=12) (actual time=1714.00..1739.34 rows=25544 loops=1)\n Sort Key: entityid, rmsbinaryid\n -> Seq Scan on msg306u (cost=0.00..2556.22 rows=24761 width=12) (actual time=0.03..152.94 rows=25544 loops=1)\n Filter: (downloadstatus = '1'::text)\n -> Hash (cost=61.95..61.95 rows=1695 width=8) (actual time=10.25..10.25 rows=0 loops=1)\n -> Seq Scan on myapp_app ia (cost=0.00..61.95 rows=1695 width=8) (actual time=0.09..5.48 rows=1698 loops=1)\n Total runtime: 2716.84 msec\n\n\nCompared to the 8.1 plan:\n\n Nested Loop Left Join (cost=2610.56..6491.82 rows=1 width=112) (actual time=166.411..4468.322 rows=472 loops=1)\n Join Filter: ((\"outer\".host_id = \"inner\".host_id) AND (\"outer\".software_binary_id = \"inner\".rmsbinaryid))\n -> Merge Join (cost=616.56..1495.06 rows=1 width=96) (actual time=47.004..120.085 rows=472 loops=1)\n Merge Cond: (\"outer\".host_id = \"inner\".host_id)\n Join Filter: (\"inner\".mtime = \"outer\".mtime)\n -> Index Scan using software_download_host_id on software_download (cost=0.00..615.92 rows=13416 width=96) (actual time=0.017..35.243 rows=13372 loops=1)\n -> Sort (cost=616.56..620.45 rows=1555 width=12) (actual time=46.034..53.978 rows=6407 loops=1)\n Sort Key: latest_download.host_id\n -> Subquery Scan latest_download (cost=499.13..534.12 rows=1555 width=12) (actual time=43.137..45.058 rows=472 loops=1)\n -> HashAggregate (cost=499.13..518.57 rows=1555 width=16) (actual time=43.132..43.887 rows=472 loops=1)\n -> Hash Join (cost=5.64..477.57 rows=2875 width=16) (actual time=0.206..41.782 rows=623 loops=1)\n Hash Cond: (\"outer\".software_binary_id = \"inner\".software_binary_id)\n -> Seq Scan on software_download (cost=0.00..377.78 rows=13080 width=16) (actual time=0.007..23.679 rows=13167 loops=1)\n Filter: ((bds_status_id <> 6) AND (bds_status_id <> 17) AND (bds_status_id <> 18))\n -> Hash (cost=5.59..5.59 rows=20 width=4) (actual time=0.155..0.155 rows=22 loops=1)\n -> Seq Scan on software_binary (cost=0.00..5.59 rows=20 width=4) (actual time=0.011..0.111 rows=22 loops=1)\n Filter: ((binary_type_id = 3) OR (binary_type_id = 5) OR (binary_type_id = 6))\n -> Nested Loop (cost=1994.00..4996.74 rows=1 width=20) (actual time=0.259..8.870 rows=238 loops=472)\n -> Nested Loop (cost=1994.00..4992.28 rows=1 width=16) (actual time=0.249..5.851 rows=238 loops=472)\n Join Filter: (\"outer\".rmsbinaryid = \"inner\".rmsbinaryid)\n -> HashAggregate (cost=1994.00..2001.41 rows=593 width=12) (actual time=0.236..0.942 rows=323 loops=472)\n -> Seq Scan on msg306u (cost=0.00..1797.28 rows=26230 width=12) (actual time=0.009..69.590 rows=25542 loops=1)\n Filter: 
(downloadstatus = '1'::text)\n -> Index Scan using msg306u_entityid_msgid_idx on msg306u (cost=0.00..5.02 rows=1 width=20) (actual time=0.008..0.010 rows=1 loops=152456)\n Index Cond: ((\"outer\".entityid = msg306u.entityid) AND (\"outer\".\"?column3?\" = msg306u.msgid))\n Filter: (downloadstatus <> '0'::text)\n -> Index Scan using myapp_app_pkey on myapp_app ia (cost=0.00..4.44 rows=1 width=8) (actual time=0.006..0.007 rows=1 loops=112336)\n Index Cond: (\"outer\".entityid = ia.myapp_app_id)\n Total runtime: 4469.506 ms\n\n\nWhat is really tossing me here is I set enable_nestloop = off and got this plan:\n\n Hash Left Join (cost=7034.77..7913.29 rows=1 width=112) (actual time=483.840..551.136 rows=472 loops=1)\n Hash Cond: ((\"outer\".host_id = \"inner\".host_id) AND (\"outer\".software_binary_id = \"inner\".rmsbinaryid))\n -> Merge Join (cost=616.56..1495.06 rows=1 width=96) (actual\ntime=46.696..112.434 rows=472 loops=1)\n Merge Cond: (\"outer\".host_id = \"inner\".host_id)\n Join Filter: (\"inner\".mtime = \"outer\".mtime)\n -> Index Scan using software_download_host_id on software_download (cost=0.00..615.92 rows=13416 width=96) (actual time=0.019..30.345 rows=13372 loops=1)\n -> Sort (cost=616.56..620.45 rows=1555 width=12) (actual time=45.720..53.265 rows=6407 loops=1)\n Sort Key: latest_download.host_id\n -> Subquery Scan latest_download (cost=499.13..534.12 rows=1555 width=12) (actual time=42.867..44.763 rows=472 loops=1)\n -> HashAggregate (cost=499.13..518.57 rows=1555 width=16) (actual time=42.862..43.628 rows=472 loops=1)\n -> Hash Join (cost=5.64..477.57 rows=2875 width=16) (actual time=0.206..41.503 rows=623 loops=1)\n Hash Cond: (\"outer\".software_binary_id = \"inner\".software_binary_id)\n -> Seq Scan on software_download (cost=0.00..377.78 rows=13080 width=16) (actual time=0.007..23.494 rows=13167 loops=1)\n Filter: ((bds_status_id <> 6) AND (bds_status_id <> 17) AND (bds_status_id <> 18))\n -> Hash (cost=5.59..5.59 rows=20 width=4) (actual time=0.155..0.155 rows=22 loops=1)\n -> Seq Scan on software_binary (cost=0.00..5.59 rows=20 width=4) (actual time=0.011..0.112 rows=22 loops=1)\n Filter: ((binary_type_id = 3) OR (binary_type_id = 5) OR (binary_type_id = 6))\n -> Hash (cost=6418.20..6418.20 rows=1 width=20) (actual time=437.111..437.111 rows=238 loops=1)\n -> Merge Join (cost=6149.96..6418.20 rows=1 width=20) (actual time=367.555..436.667 rows=238 loops=1)\n Merge Cond: ((\"outer\".rmsbinaryid = \"inner\".rmsbinaryid) AND (\"outer\".msgid = \"inner\".msgid) AND (\"outer\".entityid = \"inner\".entityid))\n -> Sort (cost=2119.55..2121.03 rows=593 width=16) (actual time=117.104..117.476 rows=323 loops=1)\n Sort Key: a1.rmsbinaryid, a1.msgid, a1.entityid\n -> Hash Join (cost=2054.19..2092.23 rows=593 width=16) (actual time=114.671..116.280 rows=323 loops=1)\n Hash Cond: (\"outer\".entityid = \"inner\".myapp_app_id)\n -> HashAggregate (cost=1994.00..2001.41 rows=593 width=12) (actual time=108.909..109.486 rows=323 loops=1)\n -> Seq Scan on msg306u (cost=0.00..1797.28 rows=26230 width=12) (actual time=0.009..68.861 rows=25542 loops=1)\n Filter: (downloadstatus = '1'::text)\n -> Hash (cost=55.95..55.95 rows=1695 width=8) (actual time=5.736..5.736 rows=1695 loops=1)\n -> Seq Scan on myapp_app ia (cost=0.00..55.95 rows=1695 width=8) (actual time=0.005..2.850 rows=1695 loops=1)\n -> Sort (cost=4030.42..4095.99 rows=26230 width=20) (actual time=250.434..286.311 rows=25542 loops=1)\n Sort Key: public.msg306u.rmsbinaryid, public.msg306u.msgid, public.msg306u.entityid\n -> Seq 
Scan on msg306u (cost=0.00..1797.28 rows=26230 width=20) (actual time=0.009..80.478 rows=25542 loops=1)\n Filter: (downloadstatus <> '0'::text)\n Total runtime: 553.409 ms\n\nAh, a beautiful scheme! So given I can't run with enable_nestloop off,\nanyone have a suggestion on how to get this thing moving in the right\ndirection? I tried raising statistics estimates on some of the columns\nbut that didn't help, though maybe I was raising it on the right\ncolumns.. any suggestions there? Or perhaps a better way to write the\nquery... I'm open to suggestions. TIA,\n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "12 Jan 2006 09:48:25 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "query slower on 8.1 than 7.3" } ]
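A minimal sketch of the statistics experiment asked about at the end of this thread. The expensive 8.1 plan repeats the inner nested loop 472 times because the outer merge join is estimated at 1 row (actual 472), so raising per-column statistics targets and re-analyzing is the usual first step before anything else. Which columns actually need higher targets is a guess; the names below (msg306u and its join keys) are taken from the plans above, and the target value of 200 is only illustrative:

    ALTER TABLE msg306u ALTER COLUMN rmsbinaryid SET STATISTICS 200;
    ALTER TABLE msg306u ALTER COLUMN entityid    SET STATISTICS 200;
    ALTER TABLE msg306u ALTER COLUMN msgid       SET STATISTICS 200;
    ANALYZE msg306u;

    -- session-local diagnostic only, never a production setting:
    SET enable_nestloop = off;
    EXPLAIN ANALYZE <the problem query>;
    RESET enable_nestloop;

If better row estimates still leave the nested-loop plan in place, the enable_nestloop comparison at least narrows the problem down to join costing rather than the executor.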
[ { "msg_contents": "Jamal Ghaffour a écrit :\n\n\n\n\nHi,\n\nI'm working on a project, whose implementation deals with PostgreSQL. A brief description of our application is given below.\n\nI'm running version 8.0 on a dedicated server 1Gb of RAM. \nmy database isn't complex, it contains just 2 simple tables.\n\nCREATE TABLE cookies (\n domain varchar(50) NOT NULL,\n path varchar(50) NOT NULL,\n name varchar(50) NOT NULL,\n principalid varchar(50) NOT NULL,\n host text NOT NULL,\n value text NOT NULL,\n secure bool NOT NULL,\n timestamp timestamp with time zone NOT NULL DEFAULT \nCURRENT_TIMESTAMP+TIME '04:00:00',\n PRIMARY KEY (domain,path,name,principalid)\n)\n\nCREATE TABLE liberty (\n principalid varchar(50) NOT NULL,\n requestid varchar(50) NOT NULL,\n spassertionurl text NOT NULL,\n libertyversion varchar(50) NOT NULL,\n relaystate varchar(50) NOT NULL,\n PRIMARY KEY (principalid)\n)\n\nI'm developping an application that uses the libpqxx to execute \npsql queries on the database and have to execute 500 requests at the same time.\n\n\nUPDATE cookies SET host='ping.icap-elios.com', value= '54E5B5491F27C0177083795F2E09162D', secure=FALSE, \ntimestamp=CURRENT_TIMESTAMP+INTERVAL '14400 SECOND' WHERE \ndomain='ping.icap-elios.com' AND path='/tfs' AND principalid='192.168.8.219' AND \nname='jsessionid'\n\nSELECT path, upper(name) AS name, value FROM cookies WHERE timestamp<CURRENT_TIMESTAMP AND principalid='192.168.8.219' AND \nsecure=FALSE AND (domain='ping.icap-elios.com' OR domain='.icap-elios.com')\n\nI have to notify that the performance of is extremely variable and irregular.\nI can also see that the server process uses almost 100% of\na CPU.\n\nI'm using the default configuration file, and i m asking if i have to change some paramters to have a good performance.\n\nAny help would be greatly appreciated.\n\nThanks,\n\n\nHi,\n\nThere are some results that can give you concrete \nidea about my problem: \nwhen i 'm launching my test that executes in loop manner the  SELECT\nand UPDATE queries described above, i'm obtaining this results:\n\nUPDATE Time execution :0s: 5225 us\nSELECT Time execution  :0s: 6908 us\n\n5 minutes Later: \n\nUPDATE Time execution :0s: 6125 us\nSELECT Time execution  :0s: 10928 us\n\n5 minutes Later: \n\nUPDATE Time execution :0s: 5825 us\nSELECT Time execution  :0s: 14978 us\n\nAs you can see , the time execution of the SELECT request is growing\nrelatively to time and not the UPDATE time execution. 
\n I note that to stop the explosion of the Select time execution, i m\nusing frequently the vaccum query on the cookies table.\nSet  the  autovacuum parmaeter in the configuation file to on wasn't\nable to remplace the use of the vaccum command, and i don't know if\nthis behaivour is normal?\n\nThanks,\nJamal", "msg_date": "Thu, 12 Jan 2006 15:53:27 +0100", "msg_from": "Jamal Ghaffour <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Please Help: PostgreSQL performance Optimization" }, { "msg_contents": "Jamal Ghaffour wrote:\n\n>>CREATE TABLE cookies (\n>> domain varchar(50) NOT NULL,\n>> path varchar(50) NOT NULL,\n>> name varchar(50) NOT NULL,\n>> principalid varchar(50) NOT NULL,\n>> host text NOT NULL,\n>> value text NOT NULL,\n>> secure bool NOT NULL,\n>> timestamp timestamp with time zone NOT NULL DEFAULT \n>>CURRENT_TIMESTAMP+TIME '04:00:00',\n>> PRIMARY KEY (domain,path,name,principalid)\n>>)\n[snip]\n>>SELECT path, upper(name) AS name, value FROM cookies WHERE timestamp<CURRENT_TIMESTAMP AND principalid='192.168.8.219' AND \n>>secure=FALSE AND (domain='ping.icap-elios.com' OR domain='.icap-elios.com')\n\nI think the problem here is that the column order in the index doesn't \nmatch the columns used in the WHERE clause criteria. Try adding an index \non (domain,principalid) or (domain,principalid,timestamp). If these are \nyour only queries, you can get the same effect by re-ordering the \ncolumns in the table so that this is the column order used by the \nprimary key and its implicit index.\n\nYou should check up on EXPLAIN and EXPLAIN ANALYZE to help you debug \nslow queries.", "msg_date": "Thu, 12 Jan 2006 10:42:39 -0800", "msg_from": "Andrew Lazarus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please Help: PostgreSQL performance Optimization" }, { "msg_contents": "On 1/12/06, Jamal Ghaffour <[email protected]> wrote:\n> Jamal Ghaffour a écrit :\n> Hi,\n\nI'm working on a project, whose implementation deals with PostgreSQL. 
A\n> brief description of our application is given below.\n\nI'm running version\n> 8.0 on a dedicated server 1Gb of RAM.\nmy database isn't complex, it\n> contains just 2 simple tables.\n\nCREATE TABLE cookies (\n domain varchar(50)\n> NOT NULL,\n path varchar(50) NOT NULL,\n name varchar(50) NOT NULL,\n> principalid varchar(50) NOT NULL,\n host text NOT NULL,\n value text NOT\n> NULL,\n secure bool NOT NULL,\n timestamp timestamp with time zone NOT NULL\n> DEFAULT\nCURRENT_TIMESTAMP+TIME '04:00:00',\n PRIMARY KEY\n> (domain,path,name,principalid)\n)\n\nCREATE TABLE liberty (\n principalid\n> varchar(50) NOT NULL,\n requestid varchar(50) NOT NULL,\n spassertionurl text\n> NOT NULL,\n libertyversion varchar(50) NOT NULL,\n relaystate varchar(50) NOT\n> NULL,\n PRIMARY KEY (principalid)\n)\n\nI'm developping an application that uses\n> the libpqxx to execute\npsql queries on the database and have to execute 500\n> requests at the same time.\n\n\nUPDATE cookies SET host='ping.icap-elios.com',\n> value= '54E5B5491F27C0177083795F2E09162D', secure=FALSE,\n>\ntimestamp=CURRENT_TIMESTAMP+INTERVAL '14400 SECOND' WHERE\n>\ndomain='ping.icap-elios.com' AND path='/tfs' AND\n> principalid='192.168.8.219' AND\nname='jsessionid'\n\nSELECT path, upper(name)\n> AS name, value FROM cookies WHERE timestamp<CURRENT_TIMESTAMP AND\n> principalid='192.168.8.219' AND\nsecure=FALSE AND\n> (domain='ping.icap-elios.com' OR domain='.icap-elios.com')\n\nI have to notify\n> that the performance of is extremely variable and irregular.\nI can also see\n> that the server process uses almost 100% of\na CPU.\n\nI'm using the default\n> configuration file, and i m asking if i have to change some paramters to\n> have a good performance.\n\nAny help would be greatly appreciated.\n\nThanks,\n> Hi,\n>\n> There are some results that can give you concrete idea about my problem:\n> when i 'm launching my test that executes in loop manner the SELECT and\n> UPDATE queries described above, i'm obtaining this results:\n>\n> UPDATE Time execution :0s: 5225 us\n> SELECT Time execution :0s: 6908 us\n>\n> 5 minutes Later:\n>\n> UPDATE Time execution :0s: 6125 us\n> SELECT Time execution :0s: 10928 us\n>\n> 5 minutes Later:\n>\n> UPDATE Time execution :0s: 5825 us\n> SELECT Time execution :0s: 14978 us\n>\n> As you can see , the time execution of the SELECT request is growing\n> relatively to time and not the UPDATE time execution.\n> I note that to stop the explosion of the Select time execution, i m using\n> frequently the vaccum query on the cookies table.\n> Set the autovacuum parmaeter in the configuation file to on wasn't able to\n> remplace the use of the vaccum command, and i don't know if this behaivour\n> is normal?\n>\n> Thanks,\n> Jamal\n>\n>\n\nplease execute\n\nEXPLAIN ANALYZE <your query>\nand show the results, is the only way to know what's happening\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Thu, 12 Jan 2006 15:16:00 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Please Help: PostgreSQL performance Optimization" }, { "msg_contents": "Andrew Lazarus a �crit :\n\n> Jamal Ghaffour wrote:\n>\n>>> CREATE TABLE cookies (\n>>> domain varchar(50) NOT NULL,\n>>> path varchar(50) NOT NULL,\n>>> name varchar(50) NOT NULL,\n>>> principalid varchar(50) NOT NULL,\n>>> host text NOT NULL,\n>>> value text NOT NULL,\n>>> secure bool NOT NULL,\n>>> timestamp timestamp with time zone NOT NULL DEFAULT \n>>> CURRENT_TIMESTAMP+TIME '04:00:00',\n>>> PRIMARY 
KEY (domain,path,name,principalid)\n>>> )\n>>\n> [snip]\n>\n>>> SELECT path, upper(name) AS name, value FROM cookies WHERE \n>>> timestamp<CURRENT_TIMESTAMP AND principalid='192.168.8.219' AND \n>>> secure=FALSE AND (domain='ping.icap-elios.com' OR \n>>> domain='.icap-elios.com')\n>>\n>\n> I think the problem here is that the column order in the index doesn't \n> match the columns used in the WHERE clause criteria. Try adding an \n> index on (domain,principalid) or (domain,principalid,timestamp). If \n> these are your only queries, you can get the same effect by \n> re-ordering the columns in the table so that this is the column order \n> used by the primary key and its implicit index.\n>\n> You should check up on EXPLAIN and EXPLAIN ANALYZE to help you debug \n> slow queries.\n\nHi,\nI created an index into the cookies table\nCREATE INDEX index_cookies_select ON cookies (domain, principalid, \ntimestamp);\nand execute my UPDATE and select queries:\n\n1 - The first select quey give the following results:\n\nicap=# EXPLAIN ANALYZE SELECT path, upper(name) AS name, value FROM \ncookies WHERE timestamp>CURRENT_TIMESTAMP AND \nprincipalid='192.168.8.219' AND secure=FALSE AND \n(domain='ping.icap-elios.com' OR domain='.icap-elios.com');\n \nQUERY \nPLAN \n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on cookies (cost=4.02..8.04 rows=1 width=268) (actual \ntime=0.107..0.108 rows=1 loops=1)\n Recheck Cond: ((((\"domain\")::text = 'ping.icap-elios.com'::text) AND \n((principalid)::text = '192.168.8.219'::text) AND (\"timestamp\" > now())) \nOR (((\"domain\")::text = '.icap-elios.com'::text) AND \n((principalid)::text = '192.168.8.219'::text) AND (\"timestamp\" > now())))\n Filter: ((\"timestamp\" > now()) AND (NOT secure))\n -> BitmapOr (cost=4.02..4.02 rows=1 width=0) (actual \ntime=0.091..0.091 rows=0 loops=1)\n -> Bitmap Index Scan on index_cookies_select (cost=0.00..2.01 \nrows=1 width=0) (actual time=0.077..0.077 rows=1 loops=1)\n Index Cond: (((\"domain\")::text = \n'ping.icap-elios.com'::text) AND ((principalid)::text = \n'192.168.8.219'::text) AND (\"timestamp\" > now()))\n -> Bitmap Index Scan on index_cookies_select (cost=0.00..2.01 \nrows=1 width=0) (actual time=0.012..0.012 rows=0 loops=1)\n Index Cond: (((\"domain\")::text = '.icap-elios.com'::text) \nAND ((principalid)::text = '192.168.8.219'::text) AND (\"timestamp\" > now()))\n Total runtime: 0.155 ms\n(9 rows)\n \n2- After that, i launch my test code that execute continuely the UPDATE \nand select queries (in loop manner), after 1 minute of continuous \nexecution, i obtain the following result:\nicap=# EXPLAIN ANALYZE SELECT path, upper(name) AS name, value FROM \ncookies WHERE timestamp>CURRENT_TIMESTAMP AND \nprincipalid='192.168.8.219' AND secure=FALSE AND \n(domain='ping.icap-elios.com' OR domain='.icap-elios.com');\n \nQUERY \nPLAN \n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on cookies (cost=4.02..8.04 rows=1 width=268) (actual \ntime=39.545..39.549 rows=1 loops=1)\n Recheck Cond: ((((\"domain\")::text = 'ping.icap-elios.com'::text) 
AND \n((principalid)::text = '192.168.8.219'::text) AND (\"timestamp\" > now())) \nOR (((\"domain\")::text = '.icap-elios.com'::text) AND \n((principalid)::text = '192.168.8.219'::text) AND (\"timestamp\" > now())))\n Filter: ((\"timestamp\" > now()) AND (NOT secure))\n -> BitmapOr (cost=4.02..4.02 rows=1 width=0) (actual \ntime=39.512..39.512 rows=0 loops=1)\n -> Bitmap Index Scan on index_cookies_select (cost=0.00..2.01 \nrows=1 width=0) (actual time=39.471..39.471 rows=2 loops=1)\n Index Cond: (((\"domain\")::text = \n'ping.icap-elios.com'::text) AND ((principalid)::text = \n'192.168.8.219'::text) AND (\"timestamp\" > now()))\n -> Bitmap Index Scan on index_cookies_select (cost=0.00..2.01 \nrows=1 width=0) (actual time=0.036..0.036 rows=0 loops=1)\n Index Cond: (((\"domain\")::text = '.icap-elios.com'::text) \nAND ((principalid)::text = '192.168.8.219'::text) AND (\"timestamp\" > now()))\n Total runtime: 39.616 ms\n(9 rows)\n\nI notice that the time execution increases significantly. and i need \nthe vacuum query to obtain normal time execution:\n\n\n3- After vacuum execution:\nicap=# vacuum cookies;\nVACUUM\nicap=# EXPLAIN ANALYZE SELECT path, upper(name) AS name, value FROM \ncookies WHERE timestamp>CURRENT_TIMESTAMP AND \nprincipalid='192.168.8.219' AND secure=FALSE AND \n(domain='ping.icap-elios.com' OR domain='.icap-elios.com');\n \nQUERY \nPLAN \n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on cookies (cost=4.02..8.04 rows=1 width=268) (actual \ntime=0.111..0.112 rows=1 loops=1)\n Recheck Cond: ((((\"domain\")::text = 'ping.icap-elios.com'::text) AND \n((principalid)::text = '192.168.8.219'::text) AND (\"timestamp\" > now())) \nOR (((\"domain\")::text = '.icap-elios.com'::text) AND \n((principalid)::text = '192.168.8.219'::text) AND (\"timestamp\" > now())))\n Filter: ((\"timestamp\" > now()) AND (NOT secure))\n -> BitmapOr (cost=4.02..4.02 rows=1 width=0) (actual \ntime=0.095..0.095 rows=0 loops=1)\n -> Bitmap Index Scan on index_cookies_select (cost=0.00..2.01 \nrows=1 width=0) (actual time=0.081..0.081 rows=1 loops=1)\n Index Cond: (((\"domain\")::text = \n'ping.icap-elios.com'::text) AND ((principalid)::text = \n'192.168.8.219'::text) AND (\"timestamp\" > now()))\n -> Bitmap Index Scan on index_cookies_select (cost=0.00..2.01 \nrows=1 width=0) (actual time=0.012..0.012 rows=0 loops=1)\n Index Cond: (((\"domain\")::text = '.icap-elios.com'::text) \nAND ((principalid)::text = '192.168.8.219'::text) AND (\"timestamp\" > now()))\n Total runtime: 0.159 ms\n(9 rows)\n\n\n\nThanks,\nJamal", "msg_date": "Fri, 13 Jan 2006 10:13:37 +0100", "msg_from": "Jamal Ghaffour <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Please Help: PostgreSQL performance Optimization" } ]
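A sketch of the maintenance side of this thread, assuming the only hot table is cookies and that a cron-style job is acceptable: every UPDATE in the test loop leaves a dead row version behind, and the index scan has to wade through those versions until VACUUM reclaims them, which is why the SELECT time climbs between vacuums.

    -- run frequently against the hot table only (cheap, non-blocking):
    VACUUM VERBOSE ANALYZE cookies;  -- VERBOSE reports how many dead row versions were removed

One caveat on the autovacuum remark above: on 8.0 there is no integrated autovacuum setting; the equivalent is the contrib pg_autovacuum daemon, which has to run as a separate process, so flipping a parameter in postgresql.conf would not by itself replace the manual VACUUM (the built-in autovacuum GUC only appears in 8.1).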
[ { "msg_contents": "OIDs seem to be on their way out, and most of the time you can get a\nmore helpful result by using a serial primary key anyway, but I wonder\nif there's any extension to INSERT to help identify what unique id a\nnewly-inserted key will get? Using OIDs the insert would return the OID\nof the inserted row, which could be useful if you then want to refer to\nthat row in a subsequent operation. You could get the same result by\nmanually retrieving the next number in the sequence and using that value\nin the insert, but at the cost of additional DB operations. Are there\nplans on updating the insert API for the post-OID world?\n\nMike Stone\n", "msg_date": "Fri, 13 Jan 2006 15:10:11 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": true, "msg_subject": "insert without oids" }, { "msg_contents": "On Fri, Jan 13, 2006 at 03:10:11PM -0500, Michael Stone wrote:\n> Are there plans on updating the insert API for the post-OID world?\n\nAre you looking for this TODO item?\n\n* Allow INSERT/UPDATE ... RETURNING new.col or old.col\n\n This is useful for returning the auto-generated key for an INSERT.\n One complication is how to handle rules that run as part of the\n insert.\n\nhttp://www.postgresql.org/docs/faqs.TODO.html\n\n-- \nMichael Fuhr\n", "msg_date": "Fri, 13 Jan 2006 13:19:59 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert without oids" }, { "msg_contents": "On Fri, 2006-01-13 at 15:10 -0500, Michael Stone wrote:\n> OIDs seem to be on their way out, and most of the time you can get a\n> more helpful result by using a serial primary key anyway, but I wonder\n> if there's any extension to INSERT to help identify what unique id a\n> newly-inserted key will get? Using OIDs the insert would return the OID\n> of the inserted row, which could be useful if you then want to refer to\n> that row in a subsequent operation. You could get the same result by\n> manually retrieving the next number in the sequence and using that value\n> in the insert, but at the cost of additional DB operations.\n\nThere's really no additional operations required:\n\nINSERT INTO t1 VALUES (...);\nINSERT INTO t2 VALUES (currval('t1_id_seq'), ...);\n\nYou need a separate SELECT if you want to use the generated sequence\nvalue outside the database, although the INSERT ... RETURNING extension\nwill avoid that (there's a patch implementing this, although it is not\nyet in CVS).\n\n-Neil\n\n\n", "msg_date": "Fri, 13 Jan 2006 16:29:15 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: insert without oids" }, { "msg_contents": "On Fri, Jan 13, 2006 at 04:29:15PM -0500, Neil Conway wrote:\n>There's really no additional operations required:\n>INSERT INTO t2 VALUES (currval('t1_id_seq'), ...);\n>You need a separate SELECT if you want to use the generated sequence\n>value outside the database, \n\nThat would, of course, be the goal. IOW, if you have a table which has\ndata which is unique only for the serial column, the old syntax provided\na way to refer to the newly inserted row uniquely without any additional\noperations. \n\n>although the INSERT ... RETURNING extension will avoid that \n\nThat sounds promising. I'll have to put the TODO list on my todo list.\n:)\n\nMike Stone\n", "msg_date": "Fri, 13 Jan 2006 17:48:01 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": true, "msg_subject": "Re: insert without oids" } ]
[ { "msg_contents": "Hi!\n\nI need to write a few pages about Postgresql, to convince some suits. They\nhave a few millions of records, on a few site, but they want to know the\npractical limits of Postgresql. So i need some information about the\nbiggest (in storage space, in record number, in field number, and maybe\ntable number) postgresql databases.\n\nAdditionally, because this company develops hospital information systems,\nif someone knows about a medical institute, which uses Postgresql, and\nhappy, please send me infomation. I only now subscribed to the advocacy\nlist, and only started to browse the archives.\n\nThanks.\n\n-- \nTomka Gergely\nTudom, anyu. Sapka, sďż˝l, doksi.\n", "msg_date": "Sat, 14 Jan 2006 13:13:02 +0100 (CET)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": true, "msg_subject": "big databases & hospitals" }, { "msg_contents": " This is the homepage of a Hospital/Healthservice Information System, I'm\nnot so sure but I think it can use various DBMS, including PostgreSQL:\n\nhttp://www.care2x.org/\n\n Regards,\n\n Javier\n\n\n\nOn Sat, 14 Jan 2006 13:13:02 +0100 (CET), Tomka Gergely wrote\n> Hi!\n> \n> I need to write a few pages about Postgresql, to convince some \n> suits. They have a few millions of records, on a few site, but they \n> want to know the practical limits of Postgresql. So i need some \n> information about the biggest (in storage space, in record number, \n> in field number, and maybe table number) postgresql databases.\n> \n> Additionally, because this company develops hospital information \n> systems, if someone knows about a medical institute, which uses \n> Postgresql, and happy, please send me infomation. I only now \n> subscribed to the advocacy list, and only started to browse the archives.\n> \n> Thanks.\n> \n> -- \n> Tomka Gergely\n> Tudom, anyu. Sapka, sďż˝l, doksi.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n--\nnediam.com.mx\n\n", "msg_date": "Sat, 14 Jan 2006 08:51:46 -0600", "msg_from": "\"Javier Carlos\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big databases & hospitals" }, { "msg_contents": "\n> Additionally, because this company develops hospital information systems,\n> if someone knows about a medical institute, which uses Postgresql, and\n> happy, please send me infomation. I only now subscribed to the advocacy\n> list, and only started to browse the archives.\n\nHi,\n\nhave you seen this case study:\nhttp://www.postgresql.org/about/casestudies/shannonmedical\n\nBye, Chris.\n\n\n", "msg_date": "Sat, 14 Jan 2006 18:37:32 +0100", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big databases & hospitals" }, { "msg_contents": "2006-01-14 ragyogďż˝ napjďż˝n Chris Mair ezt ďż˝zente:\n\n>\n> > Additionally, because this company develops hospital information systems,\n> > if someone knows about a medical institute, which uses Postgresql, and\n> > happy, please send me infomation. I only now subscribed to the advocacy\n> > list, and only started to browse the archives.\n>\n> Hi,\n>\n> have you seen this case study:\n> http://www.postgresql.org/about/casestudies/shannonmedical\n\nYes, and i found this page:\n\nhttp://advocacy.daemonnews.org/viewtopic.php?t=82\n\nwhich is better than this small something on the postgres page. 
Quotes\nlike this:\n\n\"The FreeBSD/PostgreSQL combination is extremely flexible and\nrobust.\"\n\nthe sweetest music for us. But two example better than one, and i can't\nfind technical details, the size of the data, by example. I can only guess\nthe architecture (single node, single cpu, no HA or expensive RAID,\nbecause AFAIK FreeBSD in 2000 was not to good in this areas).\n\nThanks, of course!\n\n-- \nTomka Gergely\nTudom, anyu. Sapka, sďż˝l, doksi.\n", "msg_date": "Sat, 14 Jan 2006 18:55:49 +0100 (CET)", "msg_from": "Tomka Gergely <[email protected]>", "msg_from_op": true, "msg_subject": "Re: big databases & hospitals" }, { "msg_contents": "On 1/14/06, Tomka Gergely <[email protected]> wrote:\n> Hi!\n>\n> I need to write a few pages about Postgresql, to convince some suits. They\n> have a few millions of records, on a few site, but they want to know the\n> practical limits of Postgresql. So i need some information about the\n> biggest (in storage space, in record number, in field number, and maybe\n> table number) postgresql databases.\n>\n\nhere you can see some limits of postgresql:\nhttp://www.postgresql.org/about/\n\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Sat, 14 Jan 2006 15:31:43 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big databases & hospitals" }, { "msg_contents": "On Sat, Jan 14, 2006 at 01:13:02PM +0100, Tomka Gergely wrote:\n> Hi!\n> \n> I need to write a few pages about Postgresql, to convince some suits. They\n> have a few millions of records, on a few site, but they want to know the\n> practical limits of Postgresql. So i need some information about the\n> biggest (in storage space, in record number, in field number, and maybe\n> table number) postgresql databases.\n> \n> Additionally, because this company develops hospital information systems,\n> if someone knows about a medical institute, which uses Postgresql, and\n> happy, please send me infomation. I only now subscribed to the advocacy\n> list, and only started to browse the archives.\n\nWe have a customer that has around 5000 tables and hasn't had any\nserious issues from that number of tables (other than \\d sometimes not\nworking in psql, but IIRC Tom put a fix in for that already).\n\nAs for raw size, 100G databases are pretty common. If you search through\nthe list archives you can probably find some examples.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 17 Jan 2006 14:36:24 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: big databases & hospitals" } ]
[ { "msg_contents": "I have been working on optimizing a PostgreSQL server for weekly updates\nwhere data is only updated once a week then for the remaining portion of the\nweek the data is static. So far I have set fsync to off and increased the\nsegment size among other things. I need to ensure that at the end of the\nupdate each week the data is in state where a crash will not kill the\ndatabase. \n \nRight now I run \"sync\" afte the updates have finished to ensure that the\ndata is synced to disk but I am concerned about the segment data and\nanything else I am missing that PostgreSQL explicitly handles. Is there\nsomething I can do in addition to sync to tell PostgreSQL exlplicitly that\nit is time to ensure everything is stored in its final destionation and etc?\n \nBenjamin\n\n\n\n\n\nI have been working \non optimizing a PostgreSQL server for weekly updates where data is only updated \nonce a week then for the remaining portion of the week the data is static.  \nSo far I have set fsync to off and increased the segment size among other \nthings.  I need to ensure that at the end of the update each week the \ndata is in state where a crash will not kill the database.  \n\n \nRight now I run \n\"sync\" afte the updates have finished to ensure that the data is synced to \ndisk but I am concerned about the segment data and anything else I am missing \nthat PostgreSQL explicitly handles.  Is there something I can do in \naddition to sync to tell PostgreSQL exlplicitly that it is time to ensure \neverything is stored in its final destionation and etc?\n \nBenjamin", "msg_date": "Sat, 14 Jan 2006 10:13:27 -0800", "msg_from": "\"Benjamin Arai\" <[email protected]>", "msg_from_op": true, "msg_subject": "Ensuring data integrity with fsync=off" }, { "msg_contents": "\"Benjamin Arai\" <[email protected]> writes:\n> Right now I run \"sync\" afte the updates have finished to ensure that the\n> data is synced to disk but I am concerned about the segment data and\n> anything else I am missing that PostgreSQL explicitly handles. Is there\n> something I can do in addition to sync to tell PostgreSQL exlplicitly that\n> it is time to ensure everything is stored in its final destionation and etc?\n\nYou need to give PG a CHECKPOINT command to flush stuff out of its\ninternal buffers. After that finishes, a manual \"sync\" commnd will\npush everything down to disk.\n\nYou realize, of course, that a system failure while the updates are\nrunning might leave your database corrupt? As long as you are prepared\nto restore from scratch, this might be a good tradeoff ... but don't\nlet yourself get caught without an up-to-date backup ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Jan 2006 13:41:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ensuring data integrity with fsync=off " }, { "msg_contents": "On Sat, Jan 14, 2006 at 01:41:43PM -0500, Tom Lane wrote:\n> \"Benjamin Arai\" <[email protected]> writes:\n> > Right now I run \"sync\" afte the updates have finished to ensure that the\n> > data is synced to disk but I am concerned about the segment data and\n> > anything else I am missing that PostgreSQL explicitly handles. Is there\n> > something I can do in addition to sync to tell PostgreSQL exlplicitly that\n> > it is time to ensure everything is stored in its final destionation and etc?\n> \n> You need to give PG a CHECKPOINT command to flush stuff out of its\n> internal buffers. 
After that finishes, a manual \"sync\" commnd will\n> push everything down to disk.\n> \n> You realize, of course, that a system failure while the updates are\n> running might leave your database corrupt? As long as you are prepared\n> to restore from scratch, this might be a good tradeoff ... but don't\n> let yourself get caught without an up-to-date backup ...\n\nAnother alternative that may (or may not) be simpler would be to run\neverything in one transaction and just let that commit at the end. Also,\nthere is ongoing work towards allowing certain operations to occur\nwithout generating any log writes. Currently there is code submitted\nthat allows COPY into a table that was created in the same transaction\nto go un-logged, though I think it's only in HEAD. In any case, there\nshould be some features that could be very useful to you in 8.2.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 17 Jan 2006 14:40:44 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ensuring data integrity with fsync=off" } ]
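A sketch of the end-of-load flush sequence described above, as it might be scripted from psql at the close of the weekly update; the \! shell escape and the backup step are assumptions about the surrounding batch job rather than part of the thread:

    -- with fsync = off for the duration of the load:
    CHECKPOINT;   -- push PostgreSQL's dirty shared buffers out to the kernel
    \! sync       -- then force the kernel's dirty pages to disk (psql shell escape)

Since a crash during the load itself can still leave the cluster corrupt while fsync is off, a fresh backup after each successful load is the other half of the procedure.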
[ { "msg_contents": "I am aware that what I am dreaming of is already available through cursors, but \nin a web application, cursors are bad boys, and should be avoided. What I would \nlike to be able to do is to plan a query and run the plan to retreive a limited \nnumber of rows as well as the executor's state. This way, the burden of \nmaintaining the cursor \"on hold\", between activations of the web resource which \nuses it, is transferred from the DBMS to the web application server, and, most \nimportantly, the responsibility for garbage-collecting stale cursors is \nimplicitely delegated to the garbage-collector of active user sessions. Without \nthis mechanism, we are left with two equally unpleasant solutions: first, any \ntime a user instantiates a new session, a new cursor would have to be declared \nfor all relevant queries, and an ad-hoc garbage collection daemon would have to \nbe written to periodically scan the database for stale cursors to be closed; \notherwise, instead of using cursors, the web application could resort to \nOFFSET-LIMIT queries--no garbage collection issues but pathetic performance and \nserver-load.\n\nDo we have any way out?\n\nAlex\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Mon, 16 Jan 2006 11:13:00 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Suspending SELECTs " }, { "msg_contents": "Alessandro Baretta <[email protected]> writes:\n> I am aware that what I am dreaming of is already available through\n> cursors, but in a web application, cursors are bad boys, and should be\n> avoided. What I would like to be able to do is to plan a query and run\n> the plan to retreive a limited number of rows as well as the\n> executor's state. This way, the burden of maintaining the cursor \"on\n> hold\", between activations of the web resource which uses it, is\n> transferred from the DBMS to the web application server,\n\nThis is a pipe dream, I'm afraid, as the state of a cursor does not\nconsist exclusively of bits that can be sent somewhere else and then\nretrieved. There are also locks to worry about, as well as the open\ntransaction itself, and these must stay alive inside the DBMS because\nthey affect the behavior of other transactions. As an example, once\nthe cursor's originating transaction closes, there is nothing to stop\nother transactions from modifying or removing rows it would have read.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jan 2006 12:51:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs " }, { "msg_contents": "Tom Lane wrote:\n> Alessandro Baretta <[email protected]> writes:\n> > I am aware that what I am dreaming of is already available through\n> > cursors, but in a web application, cursors are bad boys, and should be\n> > avoided. What I would like to be able to do is to plan a query and run\n> > the plan to retreive a limited number of rows as well as the\n> > executor's state. 
This way, the burden of maintaining the cursor \"on\n> > hold\", between activations of the web resource which uses it, is\n> > transferred from the DBMS to the web application server,\n> \n> This is a pipe dream, I'm afraid, as the state of a cursor does not\n> consist exclusively of bits that can be sent somewhere else and then\n> retrieved.\n\nI wonder if we could have a way to \"suspend\" a transaction and restart\nit later in another backend. I think we could do something like this\nusing the 2PC machinery.\n\nNot that I'm up for coding it; just an idea that crossed my mind.\n\n-- \nAlvaro Herrera Developer, http://www.PostgreSQL.org\nOh, oh, las chicas galacianas, lo har�n por las perlas,\n�Y las de Arrakis por el agua! Pero si buscas damas\nQue se consuman como llamas, �Prueba una hija de Caladan! (Gurney Halleck)\n", "msg_date": "Mon, 16 Jan 2006 14:57:38 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> I wonder if we could have a way to \"suspend\" a transaction and restart\n> it later in another backend. I think we could do something like this\n> using the 2PC machinery.\n> Not that I'm up for coding it; just an idea that crossed my mind.\n\nIt's not impossible, perhaps, but it would require an order-of-magnitude\nexpansion of the 2PC machinery --- the amount of state associated with\nan open execution plan is daunting. I think there are discussions about\nthis in the archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jan 2006 13:10:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs " }, { "msg_contents": "On Mon, 2006-01-16 at 11:13 +0100, Alessandro Baretta wrote:\n> I am aware that what I am dreaming of is already available through cursors, but \n> in a web application, cursors are bad boys, and should be avoided. What I would \n> like to be able to do is to plan a query and run the plan to retreive a limited \n> number of rows as well as the executor's state. This way, the burden of \n> maintaining the cursor \"on hold\", between activations of the web resource which \n> uses it, is transferred from the DBMS to the web application server, and, most \n> importantly, the responsibility for garbage-collecting stale cursors is \n> implicitely delegated to the garbage-collector of active user sessions. Without \n> this mechanism, we are left with two equally unpleasant solutions: first, any \n> time a user instantiates a new session, a new cursor would have to be declared \n> for all relevant queries, and an ad-hoc garbage collection daemon would have to \n> be written to periodically scan the database for stale cursors to be closed; \n> otherwise, instead of using cursors, the web application could resort to \n> OFFSET-LIMIT queries--no garbage collection issues but pathetic performance and \n> server-load.\n> \n> Do we have any way out?\n> \n> Alex\n\nI know that Tom has pretty much ruled out any persistent cursor\nimplementation in the database, but here's an idea for a workaround in\nthe app:\n\nHave a pool of connections used for these queries. When a user runs a\nquery the first time, create a cursor and remember that this user\nsession is associated with that particular connection. When the user\ntries to view the next page of results, request that particular\nconnection from the pool and continue to use the cursor. 
Between\nrequests, this connection could of course be used to service other\nusers.\n\nThis avoids the awfulness of tying up a connection for the entire course\nof a user session, but still allows you to use cursors for\nperformance. \n\nWhen a user session is invalidated or times out, you remove the mapping\nfor this connection and close the cursor. Whenever there are no more\nmappings for a particular connection, you can use the opportunity to\nclose the current transaction (to prevent eternal transactions).\n\nIf the site is at all busy, you will need to implement a pooling policy\nsuch as 'do not open new cursors on the connection with the oldest\ntransaction', which will ensure that all transactions can be closed in a\nfinite amount of time, the upper bound on the duration of a transaction\nis (longest_session_duration * connections in pool).\n\nLimitations:\n\n1. You shouldn't do anything that acquires write locks on the database\nusing these connections, because the transactions will be long-running.\nTo mitigate this, use a separate connection pool.\n\n2. Doesn't work well if some queries take a long time to run, because\nother users may need to wait for the connection, and another connection\nwon't do.\n\n3. If this is a busy web site, you might end up with potentially many\nthousands of open cursors. I don't know if this introduces an\nunacceptable performance penalty or other bottleneck in the server?\n\n-- Mark Lewis\n", "msg_date": "Mon, 16 Jan 2006 10:45:09 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "\nAlessandro Baretta <[email protected]> writes:\n>I am aware that what I am dreaming of is already available through\n>cursors, but in a web application, cursors are bad boys, and should be\n>avoided. What I would like to be able to do is to plan a query and run\n>the plan to retreive a limited number of rows as well as the\n>executor's state. This way, the burden of maintaining the cursor \"on\n>hold\", between activations of the web resource which uses it, is\n>transferred from the DBMS to the web application server,\n\nI think you're trying to do something at the wrong layer of your architecture. This task normally goes in your middleware layer, not your database layer.\n\nThere are several technologies that allow you to keep persistent database sessions open (for example, mod_perl, mod_cgi among others). If you combine these with what's called \"session affinity\" (the ability of a load-balancing server to route a particular user back to the same persistent session object every time), then you can create a middleware layer that does exactly what you need.\n\nBasically, you create a session object that holds all of the state (such as your cursor, and anything else you need to maintain between requests), and send back a cookie to the client. Each time the client reconnects, your server finds the user's session object using the cookie, and you're ready to go.\n\nThe main trick is that you have to manage your session objects, primarily to flush the full state to the database, if too much time elapses between requests, and then be able to re-create them on demand. Enterprise Java Beans has a large fraction of its design devoted to this sort of object management.\n\nThere are solutions for this in just about every middleware technology, from Apache/perl to EJB to CORBA. 
Search for \"session affinity\" and you should find a lot of information.\n\nCraig\n", "msg_date": "Mon, 16 Jan 2006 13:19:27 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Tom Lane wrote:\n> Alessandro Baretta <[email protected]> writes:\n> \n>>I am aware that what I am dreaming of is already available through\n>>cursors, but in a web application, cursors are bad boys, and should be\n>>avoided. What I would like to be able to do is to plan a query and run\n>>the plan to retreive a limited number of rows as well as the\n>>executor's state. This way, the burden of maintaining the cursor \"on\n>>hold\", between activations of the web resource which uses it, is\n>>transferred from the DBMS to the web application server,\n> \n> \n> This is a pipe dream, I'm afraid, as the state of a cursor does not\n> consist exclusively of bits that can be sent somewhere else and then\n> retrieved. There are also locks to worry about, as well as the open\n> transaction itself, and these must stay alive inside the DBMS because\n> they affect the behavior of other transactions. As an example, once\n> the cursor's originating transaction closes, there is nothing to stop\n> other transactions from modifying or removing rows it would have read.\n\nI understand most of these issues, and expected this kind of reply. Please, \nallow me to insist that we reason on this problem and try to find a solution. My \nreason for doing so is that the future software industry is likely to see more \nand more web applications retrieving data from virtually endless databases, and \nin such contexts, it is sensible to ask the final client--the web client--to \nstore the \"cursor state\", because web interaction is intrinsically asynchronous, \nand you cannot count on users logging out when they're done, releasing resources \nallocated to them. Think of Google.\n\nLet me propose a possible solution strategy for the problem of \"client-side \ncursors\".\n* Let us admit the limitation that a \"client-side cursor\" can only be declared \nin a transaction where no inserts, updates or deletes are allowed, so that such \na transaction is virtually non-existent to other transactions. This allows the \nbackend to close the transaction and release locks as soon as the cursor is \ndeclared.\n* When the cursor state is pushed back to the backend, no new transaction is \ninstantiated, but the XID of the original transaction is reused. In the MVCC \nsystem, this allows us to achieve a perfectly consistent view of the database at \nthe instant the original transaction started, unless a VACUUM command has been \nexecuted in the meantime, in which case I would lose track of tuples which would \nhave been live in the context of the original transaction, but have been updated \nor deleted and later vacuumed; however, this does not bother me at all.\n\nIs this not a viable solution?\n\nAlex\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. 
+39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Tue, 17 Jan 2006 20:56:00 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:\n>I understand most of these issues, and expected this kind of reply. Please, \n>allow me to insist that we reason on this problem and try to find a \n>solution. My reason for doing so is that the future software industry is \n>likely to see more and more web applications retrieving data from virtually \n>endless databases, and in such contexts, it is sensible to ask the final \n>client--the web client--to store the \"cursor state\", because web \n>interaction is intrinsically asynchronous, and you cannot count on users \n>logging out when they're done, releasing resources allocated to them. Think \n>of Google.\n\nI don't understand why it is better to rework the db instead of just\nhaving the web middleware keep track of what cursors are associated with\nwhat sessions?\n\nMike Stone\n", "msg_date": "Tue, 17 Jan 2006 15:04:41 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Craig A. James wrote:\n> \n> Alessandro Baretta <[email protected]> writes:\n>\n> I think you're trying to do something at the wrong layer of your \n> architecture. This task normally goes in your middleware layer, not \n> your database layer.\n\nI am developing my applications in Objective Caml, and I have written the \nmiddleware layer myself. I could easily implement a cursor-pooling strategy, but \nthere is no perfect solution to the problem of guaranteeing that cursors be \nclosed. Remember that web applications require the user to \"open a session\" by \nconnecting the appropriate HTTP resource, but users as never required to log \nout. Hence, in order to eventually reclaim all cursors, I must use magical \n\"log-out detection\" algorithm, which is usually implemented with a simple \ntimeout. This guarantees the required property of safety (the population of \ncursors does not diverge) but does not guarantee the required property of \nliveness (a user connecting to the application, who has opened a session but has \nnot logged out, and thus possesses a session token, should have access the \nexecution context identified by his token).\n\nAlex\n\n\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Tue, 17 Jan 2006 21:06:53 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Alessandro Baretta <[email protected]> writes:\n> * When the cursor state is pushed back to the backend, no new\n> transaction is instantiated, but the XID of the original transaction\n> is reused. 
In the MVCC system, this allows us to achieve a perfectly\n> consistent view of the database at the instant the original\n> transaction started, unless a VACUUM command has been executed in the\n> meantime, in which case I would lose track of tuples which would have\n> been live in the context of the original transaction, but have been\n> updated or deleted and later vacuumed; however, this does not bother\n> me at all.\n\n> Is this not a viable solution?\n\nNo. I'm not interested in \"solutions\" that can be translated as \"you\nmay or may not get the right answer, and there's no way even to know\nwhether you did or not\". That might be acceptable for your particular\napplication but you certainly can't argue that it's of general\nusefulness.\n\nAlso, I can't accept the concept of pushing the entire execution engine\nstate out to the client and then back again; that state is large enough\nthat doing so for every few dozen rows would yield incredibly bad\nperformance. (In most scenarios I think it'd be just as efficient for\nthe client to pull the whole cursor output at the start and page through\nit for itself.) Worse yet: this would represent a security hole large\nenough to wheel West Virginia through. We'd have no reasonable way to\nvalidate the data the client sends back.\n\nLastly, you underestimate the problems associated with not holding the\nlocks the cursor is using. As an example, it's likely that a btree\nindexscan wouldn't successfully restart at all, because it couldn't find\nwhere it had been if the index page had been split or deleted meanwhile.\nSo not running VACUUM is not enough to guarantee the query will still\nwork.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jan 2006 15:19:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs " }, { "msg_contents": "On Tue, Jan 17, 2006 at 09:06:53PM +0100, Alessandro Baretta wrote:\n> Craig A. James wrote:\n> >\n> >Alessandro Baretta <[email protected]> writes:\n> >\n> >I think you're trying to do something at the wrong layer of your \n> >architecture. This task normally goes in your middleware layer, not \n> >your database layer.\n> \n> I am developing my applications in Objective Caml, and I have written the \n> middleware layer myself. I could easily implement a cursor-pooling \n> strategy, but there is no perfect solution to the problem of guaranteeing \n> that cursors be closed. Remember that web applications require the user to \n> \"open a session\" by connecting the appropriate HTTP resource, but users as \n> never required to log out. Hence, in order to eventually reclaim all \n> cursors, I must use magical \"log-out detection\" algorithm, which is usually \n> implemented with a simple timeout. This guarantees the required property of \n> safety (the population of cursors does not diverge) but does not guarantee \n> the required property of liveness (a user connecting to the application, \n> who has opened a session but has not logged out, and thus possesses a \n> session token, should have access the execution context identified by his \n> token).\n\nWith some \"AJAX magic\", it would probably be pretty easy to create an\napplication that let you know very quickly if a user left the\napplication (ie: browsed to another site, or closed the browser).\nEssentially, you should be able to set it up so that it will ping the\napplication server fairly frequently (like every 10 seconds), so you\ncould drastically reduce the timeout interval.\n-- \nJim C. 
Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 17 Jan 2006 14:58:28 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:\n> I understand most of these issues, and expected this kind of reply. Please, \n> allow me to insist that we reason on this problem and try to find a \n> solution. My reason for doing so is that the future software industry is \n> likely to see more and more web applications retrieving data from virtually \n> endless databases, and in such contexts, it is sensible to ask the final \n> client--the web client--to store the \"cursor state\", because web \n> interaction is intrinsically asynchronous, and you cannot count on users \n> logging out when they're done, releasing resources allocated to them. Think \n> of Google.\n\nWhat is wrong with LIMIT and OFFSET? I assume your results are ordered\nin some manner.\n\nEspecially with web users, who become bored if the page doesn't flicker\nin a way that appeals to them, how could one have any expectation that\nthe cursor would ever be useful at all?\n\nAs a 'general' solution, I think optimizing the case where the same\nquery is executed multiple times, with only the LIMIT and OFFSET\nparameters changing, would be a better bang for the buck. I'm thinking\nalong the lines of materialized views, for queries executed more than\na dozen times in a short length of time... :-)\n\nIn the mean time, I successfully use LIMIT and OFFSET without such an\noptimization, and things have been fine for me.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Tue, 17 Jan 2006 16:12:59 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "> I am developing my applications in Objective Caml, and I have written the \n> middleware layer myself. I could easily implement a cursor-pooling strategy, but \n> there is no perfect solution to the problem of guaranteeing that cursors be \n> closed. Remember that web applications require the user to \"open a session\" by \n> connecting the appropriate HTTP resource, but users as never required to log \n> out. Hence, in order to eventually reclaim all cursors, I must use magical \n> \"log-out detection\" algorithm, which is usually implemented with a simple \n> timeout. This guarantees the required property of safety (the population of \n> cursors does not diverge) but does not guarantee the required property of \n> liveness (a user connecting to the application, who has opened a session but has \n> not logged out, and thus possesses a session token, should have access the \n> execution context identified by his token).\n\nI fail to see the problem here. Why should \"liveness\" be a required\nproperty? 
If is it simply that you can't promptly detect when a user is\nfinished with their web session so you can free resources, then remember\nthat there is no requirement that you dedicate a connection to their\nsession in the first place. Even if you're using your own custom\nmiddleware, it isn't a very complicated or conceptually difficult thing\nto implement (see my previous post). Certainly it's simpler than\nallowing clients to pass around runtime state.\n\nAs far as implementing this sort of thing in the back-end, it would be\nreally hard with the PostgreSQL versioning model. Oracle can more\neasily (and kind of does) support cursors like you've described because\nthey implement MVCC differently than PostgreSQL, and in their\nimplementation you're guaranteed that you always have access to the most\nrecent x megabytes of historical rows, so even without an open\ntransaction to keep the required rows around you can still be relatively\nsure they'll be around for \"long enough\". In PostgreSQL, historical\nrows are kept in the tables themselves and periodically vacuumed, so\nthere is no such guarantee, which means that you would need to either\nimplement a lot of complex locking for little material gain, or just\nhold the cursors in moderately long-running transactions, which leads\nback to the solution suggested earlier.\n\n-- Mark Lewis\n\n\n", "msg_date": "Tue, 17 Jan 2006 13:40:50 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "On Tue, 17 Jan 2006 16:12:59 -0500\[email protected] wrote:\n\n> In the mean time, I successfully use LIMIT and OFFSET without such an\n> optimization, and things have been fine for me.\n\n Same here. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Tue, 17 Jan 2006 16:51:15 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Alessandro,\n\n> I understand most of these issues, and expected this kind of reply.\n> Please, allow me to insist that we reason on this problem and try to\n> find a solution. My reason for doing so is that the future software\n> industry is likely to see more and more web applications retrieving data\n> from virtually endless databases, and in such contexts, it is sensible\n> to ask the final client--the web client--to store the \"cursor state\",\n> because web interaction is intrinsically asynchronous, and you cannot\n> count on users logging out when they're done, releasing resources\n> allocated to them. Think of Google.\n\nI think you're trying to use an unreasonable difficult method to solve a \nproblem that's already been solved multiple times. What you want is \ncalled \"query caching.\" There are about 800 different ways to do this on \nthe middleware or application layer which are 1000% easier than what \nyou're proposing.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 17 Jan 2006 14:59:02 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "People:\n\nTo follow up further, what Alessandro is talking about is known as a \n\"keyset cursor\". 
Sybase and SQL Server used to support them; I beleive \nthat they were strictly read-only and had weird issues with record \nvisibility.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 17 Jan 2006 15:18:07 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "[email protected] wrote:\n> On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:\n> \n> \n> What is wrong with LIMIT and OFFSET? I assume your results are ordered\n> in some manner.\n> \n> Especially with web users, who become bored if the page doesn't flicker\n> in a way that appeals to them, how could one have any expectation that\n> the cursor would ever be useful at all?\n> \n> As a 'general' solution, I think optimizing the case where the same\n> query is executed multiple times, with only the LIMIT and OFFSET\n> parameters changing, would be a better bang for the buck. I'm thinking\n> along the lines of materialized views, for queries executed more than\n> a dozen times in a short length of time... :-)\n> \n> In the mean time, I successfully use LIMIT and OFFSET without such an\n> optimization, and things have been fine for me.\n> \n\nSecond that.\n\nI do seem to recall a case where I used a different variant of this \nmethod (possibly a database product that didn't have OFFSET, or maybe \nbecause OFFSET was expensive for the case in point), where the ORDER BY \nkey for the last record on the page was saved and the query amended to \nuse it filter for the \"next' screen - e.g:\n\n1st time in:\n\nSELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;\n\nSuppose this displays records for id 10000 -> 10020.\nWhen the user hits next, and page saves id=10020 in the session state \nand executes:\n\nSELECT ... FROM table WHERE ... AND id > 10020 ORDER BY id LIMIT 20;\n\nClearly you have to be a little careful about whether to use '>' or '>=' \ndepending on whether 'id' is unique or not (to continue using '>' in the \nnon unique case, you can just save and use all the members of the \nprimary key too).\n\nCheers\n\nMark\n", "msg_date": "Wed, 18 Jan 2006 13:26:45 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Alessandro Baretta wrote:\n>> I think you're trying to do something at the wrong layer of your \n>> architecture. This task normally goes in your middleware layer, not \n>> your database layer.\n> \n> I am developing my applications in Objective Caml, and I have written \n> the middleware layer myself. I could easily implement a cursor-pooling \n> strategy...\n\nYou're trying to solve a very hard problem, and you're rewriting a lot of stuff that's been worked on for years by teams of people. If there's any way you switch use something like JBOSS, it might save you a lot of grief and hard work.\n\nI eliminated this problem a different way, using what we call a \"hitlist\". Basically, every query becomes a \"select into\", something like this:\n\n insert into hitlist_xxxx (select id from ...)\n\nwhere \"xxxx\" is your user's id. Once you do this, it's trivial to return each page to the user almost instantly using offset/limit, or by adding a \"ROW_NUM\" column of some sort. We manage very large hitlists -- millions of rows. 
Going from page 1 to page 100,000 takes a fraction of a second.\n\nIt also has the advantage that the user can come back in a week or a month and the results are still there.\n\nThe drawback are:\n\n1. Before the user gets the first page, the entire query must complete.\n2. You need a way to clean up old hitlists.\n3. If you have tens of thousands of users, you'll have a large number of hitlists, and you have to use tablespaces to ensure that Linux filesystem directories don't get too large.\n4. It takes space to store everyone's data. (But disk space is so cheap this isn't much of an issue.)\n\nYou can eliminate #3 by a single shared hitlist with a column of UserID's. But experience shows that a big shared hitlist doesn't work very well: Inserts get slower because the UserID column must be indexed, and you can truncate individual hitlists but you have to delete from a shared hitlist.\n\nCraig\n", "msg_date": "Tue, 17 Jan 2006 19:43:54 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Mark Kirkwood <[email protected]> writes:\n> SELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;\n\n> Suppose this displays records for id 10000 -> 10020.\n> When the user hits next, and page saves id=10020 in the session state \n> and executes:\n\n> SELECT ... FROM table WHERE ... AND id > 10020 ORDER BY id LIMIT 20;\n\n> Clearly you have to be a little careful about whether to use '>' or '>=' \n> depending on whether 'id' is unique or not (to continue using '>' in the \n> non unique case, you can just save and use all the members of the \n> primary key too).\n\nThis is actually fairly painful to get right for a multi-column key\nat the moment. It'll be much easier once I finish up the\nSQL-spec-row-comparison project. See this thread for background:\nhttp://archives.postgresql.org/pgsql-performance/2004-07/msg00188.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jan 2006 23:02:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs " }, { "msg_contents": "Tom Lane wrote:\n> Mark Kirkwood <[email protected]> writes:\n> \n>>SELECT ... FROM table WHERE ... ORDER BY id LIMIT 20;\n> \n> \n>>Suppose this displays records for id 10000 -> 10020.\n>>When the user hits next, and page saves id=10020 in the session state \n>>and executes:\n> \n> \n>>SELECT ... FROM table WHERE ... AND id > 10020 ORDER BY id LIMIT 20;\n> \n> \n>>Clearly you have to be a little careful about whether to use '>' or '>=' \n>>depending on whether 'id' is unique or not (to continue using '>' in the \n>>non unique case, you can just save and use all the members of the \n>>primary key too).\n> \n> \n> This is actually fairly painful to get right for a multi-column key\n> at the moment. It'll be much easier once I finish up the\n> SQL-spec-row-comparison project. \n\nRight, I think it was actually an Oracle 7.3 based web app (err... \nshowing age here...) that I used this technique on.\n\nCheers\n\nMark\n", "msg_date": "Wed, 18 Jan 2006 18:51:58 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "[email protected] wrote:\n> On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:\n> \n>>I understand most of these issues, and expected this kind of reply. Please, \n>>allow me to insist that we reason on this problem and try to find a \n>>solution. 
My reason for doing so is that the future software industry is \n>>likely to see more and more web applications retrieving data from virtually \n>>endless databases, and in such contexts, it is sensible to ask the final \n>>client--the web client--to store the \"cursor state\", because web \n>>interaction is intrinsically asynchronous, and you cannot count on users \n>>logging out when they're done, releasing resources allocated to them. Think \n>>of Google.\n> \n> \n> What is wrong with LIMIT and OFFSET? I assume your results are ordered\n> in some manner.\n\nIt looks like this is the only possible solution at present--and in the future, \ntoo--but it has a tremendouse performance impact on queries returning thousands \nof rows.\n\nAlex\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Wed, 18 Jan 2006 09:57:50 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Alessandro Baretta schrieb:\n> [email protected] wrote:\n> \n...\n> \n> It looks like this is the only possible solution at present--and in the \n> future, too--but it has a tremendouse performance impact on queries \n> returning thousands of rows.\n> \nWell actually one of the better solutions would be persistent cursors\n(and therefore connection pooling). I bet this is easier then\nfiddling with the problems of offset/limit and inventing even more\ncompex caching in the application.\n\nJust my 0.02c\n++Tino\n", "msg_date": "Wed, 18 Jan 2006 10:08:25 +0100", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Josh Berkus wrote:\n> People:\n> \n> To follow up further, what Alessandro is talking about is known as a \n> \"keyset cursor\". Sybase and SQL Server used to support them; I beleive \n> that they were strictly read-only and had weird issues with record \n> visibility.\n\nI would like to thank everyone for sharing their ideas with me. I democratically \naccept the idea that my middleware will have to support the functionality I \nwould have liked to delegate to PostgreSQL. If I have to implement anything of \nthis sort--just like Tom--I don't want to spend time on a solution lacking \ngenerality or imposing unacceptable resource requirements under high load. The \nkeyset-cursor idea is probably the best bet--and BTW, let me specifically thank \nJosh for mentioning them.\n\nWhat I could do relatively easily is instantiate a thread to iteratively scan a \ntraditional cursor N rows at a time, retrieving only record keys, and finally \nsend them to the query-cache-manager. The application thread would then scan \nthrough the cursor results by fetching the rows associated to a given \"page\" of \nkeys. I would have to keep the full cursor keyset in the application server's \nsession state, but, hopefully, this is not nearly as bad as storing the entire \nrecordset.\n\nAlex\n\n\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. 
+39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Wed, 18 Jan 2006 10:14:49 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "On Wed, Jan 18, 2006 at 09:57:50AM +0100, Alessandro Baretta wrote:\n> [email protected] wrote:\n> >On Tue, Jan 17, 2006 at 08:56:00PM +0100, Alessandro Baretta wrote:\n> >>I understand most of these issues, and expected this kind of reply. \n> >>Please, allow me to insist that we reason on this problem and try to find \n> >>a solution. My reason for doing so is that the future software industry \n> >>is likely to see more and more web applications retrieving data from \n> >>virtually endless databases, and in such contexts, it is sensible to ask \n> >>the final client--the web client--to store the \"cursor state\", because \n> >>web interaction is intrinsically asynchronous, and you cannot count on \n> >>users logging out when they're done, releasing resources allocated to \n> >>them. Think of Google.\n> >What is wrong with LIMIT and OFFSET? I assume your results are ordered\n> >in some manner.\n> It looks like this is the only possible solution at present--and in the \n> future, too--but it has a tremendouse performance impact on queries \n> returning thousands of rows.\n\nIn the case of one web user generating one query, I don't see how it would\nhave a tremendous performance impact on large queries.\n\nYou mentioned google. I don't know how you use google - but most of the\npeople I know, rarely ever search through the pages. Generally the answer\nwe want is on the first page. If the ratio of users who search through\nmultiple pages of results, and users who always stop on the first page,\nis anything significant (better than 2:1?) LIMIT and OFFSET are the\ndesired approach. Why have complicated magic in an application, for a\nsubset of the users?\n\nI there is to be a change to PostgreSQL to optimize for this case, I\nsuggest it involve the caching of query plans, executor plans, query\nresults (materialized views?), LIMIT and OFFSET. If we had all of\nthis, you would have exactly what you want, while benefitting many\nmore people than just you. No ugly 'persistent state cursors' or\n'import/export cursor state' implementation. People would automatically\nbenefit, without changing their applications.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 18 Jan 2006 09:30:25 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "[email protected] wrote:\n> On Wed, Jan 18, 2006 at 09:57:50AM +0100, Alessandro Baretta wrote:\n> \n> I there is to be a change to PostgreSQL to optimize for this case, I\n> suggest it involve the caching of query plans, executor plans, query\n> results (materialized views?), LIMIT and OFFSET. If we had all of\n> this, you would have exactly what you want, while benefitting many\n> more people than just you. 
No ugly 'persistent state cursors' or\n> 'import/export cursor state' implementation. People would automatically\n> benefit, without changing their applications.\n\nActually, many of the features you mention (caching executor plans--that is \nimposing the use of prepared queries, caching query results and materializing \nviews) I have already implemented in my \"middleware\". Somehow, apparently, my \nintuition on the problem of determining what ought to be delegated to the DBMS \nand what to the \"middleware\" is the opposites of most people on this list. As I \nmentioned earlier, I democratically accept the position of the majority--and \nmonarchically I accept Tom's. And, scientifically, I have taken resposibility \nfor proving myself wrong: I have stated my assumptions, I have formulated the \nhypothesis, I have designed an experiment capable of disproving it, and I have \ncollected the data. Here are the steps and the results of the experiment.\n\nAssumptions: Google defines the \"best current practices\" in web applications.\n\nHypothesis: The \"best practice\" for returning large data sets is to devise an \nalgorithm (say, persistent cursors, for example) allowing subdivision of \nrecordset is pages of a fixed maximum size, in such a way that sequentially \nfetching pages of records requires the system to compute each record only once.\n\nExperiment: Given the stated assumption, record the time taken by Google to \nretrieve a sequence of pages of results, relative to the same query string. \nRepeat the experiment with different settings. Notice that Google actually \ndisplays on the results page the time taken to process the request.\n\nResults: I'll omit the numerical data, which everyone can easily obtain in only \na few minutes, repeating the experiment. I used several query strings containing \nvery common words (\"linux debian\", \"linux kernel\", \"linux tux\"), each yielding \nmillions of results. I set Google to retrieve 100 results per page. Then I ran \nthe query and paged through the data set. The obvious result is that execution \ntime is a monotonously growing function of the page number. This clearly \nindicates that Google does not use any algorithm of the proposed kind, but \nrather an OFFSET/LIMIT strategy, thus disproving the hypothesis.\n\nIt must also be noted that Google refuses to return more than 1000 results per \nquery, thus indicating that the strategy the adopted quite apparently cannot \nscale indefinitely, for on a query returning a potentially flooding dataset, a \nuser paging through the data would experience a linear slowdown on the number of \npages already fetched, and the DBMS workload would also be linear on the number \nof fetched pages.\n\nI do not like this approach, but the fact that Google came up with no better \nsolution is a clear indication that Tome et al. are more than correct.\n\nAlex\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. 
+39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Wed, 18 Jan 2006 16:12:07 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Your experiment made far too many assumptions and the data does not\nstand up to scrutiny.\n\nOn 1/18/06, Alessandro Baretta <[email protected]> wrote:\n> Results: I'll omit the numerical data, which everyone can easily obtain in only\n> a few minutes, repeating the experiment. I used several query strings containing\n> very common words (\"linux debian\", \"linux kernel\", \"linux tux\"), each yielding\n> millions of results. I set Google to retrieve 100 results per page. Then I ran\n> the query and paged through the data set. The obvious result is that execution\n> time is a monotonously growing function of the page number. This clearly\n> indicates that Google does not use any algorithm of the proposed kind, but\n> rather an OFFSET/LIMIT strategy, thus disproving the hypothesis.\n\nI just ran the same test and I got a different outcome than you. The\nlast page came back twice as fast as page 4. I noticed no trend in the\nspeed of the results from each page.\n\nOf course it is probably in cache because its such a common thing to\nbe searched on so the experiment is pointless.\n\nYou cannot jump to your conclusions based on a few searches on google.\n\n> It must also be noted that Google refuses to return more than 1000 results per\n> query, thus indicating that the strategy the adopted quite apparently cannot\n> scale indefinitely, for on a query returning a potentially flooding dataset, a\n> user paging through the data would experience a linear slowdown on the number of\n> pages already fetched, and the DBMS workload would also be linear on the number\n> of fetched pages.\n\nThere are various reason why google might want to limit the search\nresult returned ie to encourage people to narrow their search. Prevent\nscreen scrapers from hitting them really hard blah blah. Perhaps less\nthan 0.00000001% of real users (not scrapers) actually dig down to the\n10th page so whats the point.\n\nThere are numerous methods that you can use to give separate result\npages some of which include going back to the database and some don't.\nI prefer not to go back to the database if I can avoid it and if all\nyou want to do is provide a few links to further pages of results then\ngoing back to the database and using offsets is a waste of IO.\n\n--\nHarry\nhttp://www.hjackson.org\nhttp://www.uklug.co.uk\n", "msg_date": "Wed, 18 Jan 2006 15:41:57 +0000", "msg_from": "Harry Jackson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "On Wed, Jan 18, 2006 at 03:41:57PM +0000, Harry Jackson wrote:\n> There are various reason why google might want to limit the search\n> result returned ie to encourage people to narrow their search. Prevent\n> screen scrapers from hitting them really hard blah blah. 
Perhaps less\n> than 0.00000001% of real users (not scrapers) actually dig down to the\n> 10th page so whats the point.\n\nI recall a day when google crashed, apparently due to a Windows virus\nthat would use google to obtain email addresses.\n\nAs an unsubstantiated theory - this may have involved many, many clients,\nall accessing search page results beyond the first page.\n\nI don't see google optimizing for the multiple page scenario. Most\npeople (as I think you agree above), are happy with the first or\nsecond page, and they are gone. Keeping a cursor for these people as\nanything more than an offset into search criteria, would not be\nuseful.\n\nCheers,\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n", "msg_date": "Wed, 18 Jan 2006 11:50:17 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "Alessandro Baretta wrote:\n> \n> What I could do relatively easily is instantiate a thread to iteratively\n> scan a traditional cursor N rows at a time, retrieving only record keys,\n> and finally send them to the query-cache-manager. The application thread\n> would then scan through the cursor results by fetching the rows\n> associated to a given \"page\" of keys. I would have to keep the full\n> cursor keyset in the application server's session state, but, hopefully,\n> this is not nearly as bad as storing the entire recordset.\n> \n> Alex\n> \n> \n> \n\nAlessandro,\n\nI've very much enjoyed reading your thoughts and the problem your facing\nand everyone's responses.\n\nSince you control the middle layer, could you not use a cookie to keep a\ncursor open on the middle layer and tie into it on subsequent queries?\n\nIf you are concerned with too many connections open, you could timeout\nthe sessions quickly and recreate the cursor if someone came back. If\nthey waited 5 minutes to make the next query, certainly they could wait\na few extra seconds to offset and reacquire a cursor?\n\nThe hitlist idea was also a nice one if the size of the data returned is\nnot overwhelming and does not need to track the underlying db at all\n(ie, no refresh).\n\nMark had a number of good general suggestions though, and I'd like to\necho the materialized views as an option that I could see a lot of uses\nfor (and have worked around in the past with SELECT INTO's and like).\n\nInteresting stuff.\n\n- August\n", "msg_date": "Sun, 22 Jan 2006 11:04:29 -0500", "msg_from": "August Zajonc <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suspending SELECTs" }, { "msg_contents": "August Zajonc wrote:\n> Alessandro Baretta wrote:\n> \n> Alessandro,\n> \n> I've very much enjoyed reading your thoughts and the problem your facing\n> and everyone's responses.\n\nThank you for your interest, Agust.\n\n> Since you control the middle layer, could you not use a cookie to keep a\n> cursor open on the middle layer and tie into it on subsequent queries?\n\nI do. The AS/Xcaml calls it \"session key\". It is usually passed in a cookie for\nwebsites and elsewhere--query string or url--for Intranet/Extranet web\napplications. 
The session information kept by the AS/Xcaml virtual machine\nincludes cached query results and state information for servlets. I\ncould--although not terribly easily--also use the Xcaml session manager to\nhandle allocation of DB connections from the pool, thus allocating one\nconnection per active session. The session garbage collector would then also\nhave to manage the recycling of stale DB connections.\n\n> If you are concerned with too many connections open, you could timeout\n> the sessions quickly and recreate the cursor if someone came back. If\n> they waited 5 minutes to make the next query, certainly they could wait\n> a few extra seconds to offset and reacquire a cursor?\n\nYes I could. Actually, there are quite a few means of handling this issue. The\npossible strategies for garbage collecting resources allocated to a remote peer\nis are collectively called \"failure-detection algorithms\" in the theory of\ndistributed computing. In most cases an \"eventually weak failure detector\" is\nnecessary and sufficient to guarantee a number of desirable properties in\nasynchronous systems: termination of execution, bounded open connections, and\nothers.\n\nYet, it is a theorm that no asynchronous algorithm can be devised to implement\nan eventually weak failure detector. This, in turn, implies that no distributed\nasynchronous system--i.e. a web application--possesses the above mentioned\ndesirable properties. Hence, from a theoretical standpoint, we can either choose\nto relax the axioms of the system allowing synchronicity--a failure detector\nbased on a timeout explicitly requires the notion of time--or, as I would\nprefer, by eliminating the need for termination of execution--i.e. explicit\nclient logout--and bounded open connections by delegating to the client the\nresponsibility of maintaing all relevant state information. Under these\nassumptions we can live happily in a perfectly asynchronous stateless world like\nthat of HTTP.\n\nNow, neither of the two solutions is perfect. In an Intranet/Extranet context, I\nwant to store server side a significant amount of state information, including\ncached query results, thus entailing the need for a synchronous\nfailure-detector--let's call it \"implicit logout detector\"--to garbage collect\nthe stale session files generated by the application. In an open website--no\nlogin--I do not usually want to use sessions, so I prefer to implement the\napplication so that all relevant state informatoin is maintained by the client.\nThis model is perfect until we reach the issue of \"suspending SELECTs\", that is,\nlimiting the the cardinality of the record set to a configurable \"page-size\",\nallowing the user to page through a vast query result.\n\n\n> The hitlist idea was also a nice one if the size of the data returned is\n> not overwhelming and does not need to track the underlying db at all\n> (ie, no refresh).\n\nIn an open website, immediate refresh is not critical, so long as I can\nguarantee some decent property of data consistency. Full consistency cannot be\nachieved, as Tom pointed out. I cordially disagree with Tom on the commandment\nthat \"Thou shalt have no property of consistency other than me\". 
Just like we\nhave two different transaction isolation levels, guarateeing different degrees\nof ACIDity, we could, conceivably wish to formalize a weaker notion of\nconsistency and implement functionality to match with it, which would not be\npossible under the stronger definition property.\n\n> Mark had a number of good general suggestions though, and I'd like to\n> echo the materialized views as an option that I could see a lot of uses\n> for (and have worked around in the past with SELECT INTO's and like).\n\nI already use materialized views. The database design layer of the AS/Xcaml\nallows the definition of fragmented materialized views: the view is split in\nfragments, that is, equivalence classes of the record set with respect to the\noperation of projection of the view signature to a (small) subset of its\ncolumns. Yet, this actually makes the original problem worse, for materialiazed\nview fragments must be garbage collected at some point, thus offering much of\nthe same conceptual difficulties as cursor pooling strategy.\n\nAlex\n\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n\n", "msg_date": "Mon, 23 Jan 2006 11:31:23 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suspending SELECTs" } ]
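To make the two paging techniques discussed in this thread concrete, here is a minimal SQL sketch of each. All table and column names (items, owner, id, payload, hitlist_u42) are invented for the illustration; the thread only describes the ideas, so treat this as a sketch rather than anyone's actual schema.

    -- Keyset paging, along the lines of Mark Kirkwood's example: the application
    -- remembers the last id it displayed and filters on it instead of using OFFSET.
    SELECT id, payload
      FROM items
     WHERE owner = 42
     ORDER BY id
     LIMIT 20;                      -- first page

    SELECT id, payload
      FROM items
     WHERE owner = 42
       AND id > 10020               -- last id seen on the previous page
     ORDER BY id
     LIMIT 20;                      -- next page; an index on (owner, id) lets this stop after 20 rows

    -- Hitlist paging, along the lines of Craig James's description: materialize
    -- the matching keys once, then page through them by row number.
    CREATE TABLE hitlist_u42 (row_num serial, id integer);

    INSERT INTO hitlist_u42 (id)
        SELECT id FROM items WHERE owner = 42 ORDER BY id;

    SELECT i.*
      FROM hitlist_u42 h
      JOIN items i ON i.id = h.id
     WHERE h.row_num > 2000 AND h.row_num <= 2020
     ORDER BY h.row_num;            -- "page 101" without rescanning items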
[ { "msg_contents": "Hi,\n\nI've been reading an interesting article which compared different \ndatabase systems, focusing on materialized views. I was wondering how \nthe postgresql developers feel about this feature ... is it planned to \nimplement materialized views any time soon? They would greatly improve \nboth performance and readability (and thus maintainability) of my code.\n\nIn particular I'm interested in a view which materializes whenever \nqueried, and is invalidated as soon as underlying data is changed.\n\nMike\n", "msg_date": "Mon, 16 Jan 2006 15:36:53 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Materialized Views" }, { "msg_contents": "On Mon, 16 Jan 2006 15:36:53 +0100\nMichael Riess <[email protected]> wrote:\n\n> Hi,\n> \n> I've been reading an interesting article which compared different \n> database systems, focusing on materialized views. I was wondering how \n> the postgresql developers feel about this feature ... is it planned\n> to implement materialized views any time soon? They would greatly\n> improve both performance and readability (and thus maintainability)\n> of my code.\n> \n> In particular I'm interested in a view which materializes whenever \n> queried, and is invalidated as soon as underlying data is changed.\n\n You can already build materialized views in PostgreSQL, but you\n end up doing the \"heavy lifting\" yourself with triggers. You put\n insert/update/delete triggers on the underlying tables of your\n view that \"do the right thing\" in your materialized view table.\n\n I wrote a blog entry about this recently,\n http://revsys.com/blog/archive/9, where I used a very simple\n materialized view to achieve the performance I needed. It has links\n to the relevant documentation you'll need however to build triggers\n for a more complex situation. \n\n Hope this helps! \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Mon, 16 Jan 2006 09:40:36 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Materialized Views" }, { "msg_contents": "hi mike\n\n> In particular I'm interested in a view which materializes whenever \n> queried, and is invalidated as soon as underlying data is changed.\n\nfrom the german pgsql list earlier last week:\n\nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nthis seems to be pretty much what you want (except you'll have to update \neverything yourself). would be really nice if pgsql supports this \"in-house\"\n\ncheers,\nthomas \n\n\n", "msg_date": "Mon, 16 Jan 2006 17:17:47 +0100", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Materialized Views" }, { "msg_contents": "Thanks!\n\nOf course I know that I can build materialized views with triggers, but \nso far I've avoided using triggers altogether ... I would really \nappreciate something like \"create view foo (select * from b) materialize \non query\".\n\nBut I'll look into your blog entry, thanks again!\n\nMike\n> On Mon, 16 Jan 2006 15:36:53 +0100\n> Michael Riess <[email protected]> wrote:\n> \n>> Hi,\n>>\n>> I've been reading an interesting article which compared different \n>> database systems, focusing on materialized views. I was wondering how \n>> the postgresql developers feel about this feature ... is it planned\n>> to implement materialized views any time soon? 
They would greatly\n>> improve both performance and readability (and thus maintainability)\n>> of my code.\n>>\n>> In particular I'm interested in a view which materializes whenever \n>> queried, and is invalidated as soon as underlying data is changed.\n> \n> You can already build materialized views in PostgreSQL, but you\n> end up doing the \"heavy lifting\" yourself with triggers. You put\n> insert/update/delete triggers on the underlying tables of your\n> view that \"do the right thing\" in your materialized view table.\n> \n> I wrote a blog entry about this recently,\n> http://revsys.com/blog/archive/9, where I used a very simple\n> materialized view to achieve the performance I needed. It has links\n> to the relevant documentation you'll need however to build triggers\n> for a more complex situation. \n> \n> Hope this helps! \n> \n> ---------------------------------\n> Frank Wiles <[email protected]>\n> http://www.wiles.org\n> ---------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n", "msg_date": "Mon, 16 Jan 2006 17:26:59 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Materialized Views" } ]
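To give a rough idea of the trigger-based approach Frank describes, here is a minimal sketch. The tables and the aggregate are invented for the example, and it rebuilds the whole summary on every change, which is the crudest variant; the blog entry and the jonathangardner.net article linked above show how to maintain the summary incrementally instead.

    -- hypothetical detail table and its materialized summary
    CREATE TABLE orders (customer integer NOT NULL, amount numeric NOT NULL);
    CREATE TABLE order_totals (customer integer PRIMARY KEY, total numeric);

    CREATE OR REPLACE FUNCTION refresh_order_totals() RETURNS trigger AS $$
    BEGIN
        -- full refresh: throw the summary away and rebuild it from the detail table
        DELETE FROM order_totals;
        INSERT INTO order_totals
            SELECT customer, sum(amount) FROM orders GROUP BY customer;
        RETURN NULL;  -- the return value of an AFTER trigger is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_refresh_totals
        AFTER INSERT OR UPDATE OR DELETE ON orders
        FOR EACH STATEMENT EXECUTE PROCEDURE refresh_order_totals();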
[ { "msg_contents": "Hi,\n\nI have always thought that using * in SELECT hurts performance,\nmaking queries slower.\n\nBut I read in a Postgres book that it increases the speed of the\nquery.\n\nSo which is it? Which form is faster?\n\nThanks\n\n", "msg_date": "Mon, 16 Jan 2006 14:49:28 +0000", "msg_from": "Marcos <[email protected]>", "msg_from_op": true, "msg_subject": "Use of * affect the performance" }, { "msg_contents": "\n\"Marcos\" <[email protected]> wrote\n>\n> I have always thought that using * in SELECT hurts performance,\n> making queries slower.\n>\n> But I read in a Postgres book that it increases the speed of the\n> query.\n>\n> So which is it? Which form is faster?\n>\n\nIf you mean use \"*\" vs. \"explicitly name all columns of a relation\", then \nthere is almost no difference except a negligible difference in parsing. \nIf you mean you want only part of the columns of a relation but you still \nuse \"*\": yes, you will save one projection operation for each result row, but \nyou will pay for more network traffic. In the worst case, say when your \"*\" \npulls in some TOAST attributes, you simply hurt performance. Since the \nbenefit is so marginal and the downside real, I suggest staying with \n\"only retrieve the columns that you are interested in\".\n\nRegards,\nQingqing \n\n\n", "msg_date": "Mon, 16 Jan 2006 16:55:11 -0500", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of * affect the performance" } ]
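To see the effect Qingqing describes, it is enough to compare the two forms on a table with one wide column; the table here is made up for the purpose, and timing the queries from the client shows the gap grow with the size of the wide column.

    -- hypothetical table whose body column is large enough to be TOASTed
    CREATE TABLE docs (id serial PRIMARY KEY, title text, body text);

    -- fetches the wide body column as well, so it may have to be detoasted
    -- and shipped to the client even if the client never looks at it
    SELECT * FROM docs WHERE id = 1;

    -- fetches only what the application actually uses
    SELECT id, title FROM docs WHERE id = 1;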
[ { "msg_contents": "Hi,\nWe have a horribly designed postgres 8.1.0 database (not my fault!). I \nam pretty new to database design and management and have really no idea \nhow to diagnose performance problems. The db has only 25-30 tables, and \nhalf of them are only there because our codebase needs them (long story, \nagain not my fault!). Basically we have 10 tables that are being \naccessed, and only a couple of queries that join more than 3 tables. \nMost of the action takes place on two tables. One of the devs has done \nsome truly atrocious coding and is using the db as his data access \nmechanism (instead of an in-memory array, and he only needs an \narray/collection).\nIt is running on an p4 3000ish (desktop model) running early linux 2.6 \n(mdk 10.1) (512meg of ram) so that shouldn't be an issue, as we are \ntalking only about 20000 inserts a day. It probably gets queried about \n20000 times a day too (all vb6 via the pg odbc).\nSo... seeing as I didn't really do any investigation as to setting \ndefault sizes for storage and the like - I am wondering whether our \nperformance problems (a programme running 1.5x slower than two weeks \nago) might not be coming from the db (or rather, my maintaining of it). \nI have turned on stats, so as to allow autovacuuming, but have no idea \nwhether that could be related. Is it better to schedule a cron job to do \nit x times a day? I just left all the default values in postgres.conf... \ncould I do some tweaking?\nDoes anyone know of any practical resources that might guide me in \nsorting out these sorts of problems? Some stuff with pratical examples \nwould be good so I could compare with what we have.\nThanks\nAntoine\nps. I had a look with top and it didn't look like it was going much over \n15% cpu, with memory usage negligeable. There are usually about 10 open \nconnections. I couldn't find an easy way to check for disk accessings.\npps. The db is just one possible reason for our bottleneck so if you \ntell me it is very unlikely I will be most reassured!\n", "msg_date": "Mon, 16 Jan 2006 23:07:52 +0100", "msg_from": "Antoine <[email protected]>", "msg_from_op": true, "msg_subject": "new to postgres (and db management) and performance already a problem\n\t:-(" }, { "msg_contents": "On Mon, Jan 16, 2006 at 11:07:52PM +0100, Antoine wrote:\n\n> performance problems (a programme running 1.5x slower than two weeks \n> ago) might not be coming from the db (or rather, my maintaining of it). \n> I have turned on stats, so as to allow autovacuuming, but have no idea \n> whether that could be related. Is it better to schedule a cron job to do \n> it x times a day? I just left all the default values in postgres.conf... \n> could I do some tweaking?\n\nThe first thing you need to do is find out where your problem is. \nAre queries running slowly? You need to do some EXPLAIN ANALYSE\nqueries to understand that.\n\nA\n\n\n-- \nAndrew Sullivan | [email protected]\nThe whole tendency of modern prose is away from concreteness.\n\t\t--George Orwell\n", "msg_date": "Mon, 16 Jan 2006 17:42:45 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance already a\n\tproblem :-(" }, { "msg_contents": "Antoine <[email protected]> writes:\n> So... 
seeing as I didn't really do any investigation as to setting \n> default sizes for storage and the like - I am wondering whether our \n> performance problems (a programme running 1.5x slower than two weeks \n> ago) might not be coming from the db (or rather, my maintaining of it). \n\nThat does sound like a lack-of-vacuuming problem. If the performance\ngoes back where it was after VACUUM FULL, then you can be pretty sure\nof it. Note that autovacuum is not designed to fix this for you: it\nonly ever issues regular vacuum not vacuum full.\n\n> I couldn't find an easy way to check for disk accessings.\n\nWatch the output of \"vmstat 1\" or \"iostat 1\" for info about that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jan 2006 17:43:30 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance already a\n\tproblem :-(" }, { "msg_contents": "> That does sound like a lack-of-vacuuming problem. If the performance\n> goes back where it was after VACUUM FULL, then you can be pretty sure\n> of it. Note that autovacuum is not designed to fix this for you: it\n> only ever issues regular vacuum not vacuum full.\n\nin our db system (for a website), i notice performance boosts after a vacuum \nfull. but then, a VACUUM FULL takes 50min+ during which the db is not really \naccessible to web-users. is there another way to perform maintenance tasks \nAND leaving the db fully operable and accessible?\n\nthanks,\nthomas \n\n\n", "msg_date": "Mon, 16 Jan 2006 23:48:10 +0100", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance already a\n\tproblem :-(" }, { "msg_contents": "<[email protected]> writes:\n> in our db system (for a website), i notice performance boosts after a vacuum \n> full. but then, a VACUUM FULL takes 50min+ during which the db is not really \n> accessible to web-users. is there another way to perform maintenance tasks \n> AND leaving the db fully operable and accessible?\n\nYou're not doing regular vacuums often enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Jan 2006 20:08:55 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance already a\n\tproblem :-(" }, { "msg_contents": ">> in our db system (for a website), i notice performance boosts after a \n>> vacuum\n>> full. but then, a VACUUM FULL takes 50min+ during which the db is not \n>> really\n>> accessible to web-users. is there another way to perform maintenance \n>> tasks\n>> AND leaving the db fully operable and accessible?\n>\n> You're not doing regular vacuums often enough.\n\nwell, shouldn't autovacuum take care of \"regular\" vacuums? in addition to \nautovacuum, tables with data changes are vacuumed and reindexed once a day - \nstill performance seems to degrade slowly until a vacuum full is \ninitiated... could an additional daily vacuum over the entire db (even on \ntables that only get data added, never changed or removed) help?\n\n- thomas \n\n\n", "msg_date": "Tue, 17 Jan 2006 02:29:43 +0100", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance already a\n\tproblem :-(" }, { "msg_contents": ">>> in our db system (for a website), i notice performance boosts after\n>>> a vacuum\n>>> full. 
but then, a VACUUM FULL takes 50min+ during which the db is\n>>> not really\n>>> accessible to web-users. is there another way to perform\n>>> maintenance tasks\n>>> AND leaving the db fully operable and accessible?\n>>\n>> You're not doing regular vacuums often enough.\n\nBy the way, you can get that VACUUM FULL to be \"less injurious\" if you\ncollect a list of tables:\npubs=# select table_schema, table_name from information_schema.tables\nwhere table_type = 'BASE TABLE';\n\nAnd then VACUUM FULL table by table. It'll take the same 50 minutes;\nit'll be more sporadically \"unusable\" which may turn out better. But\nthat's just one step better; you want more steps :-).\n\n> well, shouldn't autovacuum take care of \"regular\" vacuums? in addition\n> to autovacuum, tables with data changes are vacuumed and reindexed\n> once a day -\n> still performance seems to degrade slowly until a vacuum full is\n> initiated... could an additional daily vacuum over the entire db (even\n> on tables that only get data added, never changed or removed) help?\n\nTables which never see updates/deletes don't need to get vacuumed very\noften. They should only need to get a periodic ANALYZE so that the\nquery optimizer gets the right stats.\n\nThere are probably many tables where pg_autovacuum is doing a fine\njob. What you need to do is to figure out which tables *aren't*\ngetting maintained well enough, and see about doing something special\nto them.\n\nWhat you may want to do is to go table by table and, for each one, do\ntwo things:\n\n1) VACUUM VERBOSE, which will report some information about how much\ndead space there is on the table.\n\n2) Contrib function pgstattuple(), which reports more detailed info\nabout space usage (alas, for just the table).\n\nYou'll find, between these, that there are some tables that have a LOT\nof dead space. At that point, there may be three answers:\n\na) PG 8.1 pg_autovacuum allows you to modify how often specific tables\nare vacuumed; upping the numbers for the offending tables may clear\nthings up\n\nb) Schedule cron jobs to periodically (hourly? several times per\nhour?) VACUUM the \"offending\" tables\n\nc) You may decide to fall back to VACUUM FULL; if you do so just for a\nsmall set of tables, the \"time of pain\" won't be the 50 minutes you're\nliving with now...\n\nTry a), b), and c) in order on the \"offending\" tables as they address\nthe problem at increasing cost...\n-- \n(reverse (concatenate 'string \"moc.liamg\" \"@\" \"enworbbc\"))\nhttp://linuxdatabases.info/info/x.html\n\"Listen, strange women, lyin' in ponds, distributin' swords, is no\nbasis for a system of government. Supreme executive power derives\nitself from a mandate from the masses, not from some farcical aquatic\nceremony.\" -- Monty Python and the Holy Grail\n", "msg_date": "Mon, 16 Jan 2006 22:57:59 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance already a\n\tproblem :-(" }, { "msg_contents": "> Try a), b), and c) in order on the \"offending\" tables as they address\n> the problem at increasing cost...\n\nthanks alot for the detailed information! the entire concept of vacuum isn't \nyet that clear to me, so your explanations and hints are very much \nappreciated. 
i'll defenitely try these steps this weekend when the next full \nvacuum was scheduled :-)\n\nbest regards,\nthomas \n\n\n", "msg_date": "Tue, 17 Jan 2006 05:13:09 +0100", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance already a\n\tproblem :-(" }, { "msg_contents": "On 17/01/06, [email protected] <[email protected]> wrote:\n>\n> > Try a), b), and c) in order on the \"offending\" tables as they address\n> > the problem at increasing cost...\n>\n> thanks alot for the detailed information! the entire concept of vacuum\n> isn't\n> yet that clear to me, so your explanations and hints are very much\n> appreciated. i'll defenitely try these steps this weekend when the next\n> full\n> vacuum was scheduled :-)\n\n\nThanks guys, that pretty much answered my question(s) too. I have a sneaking\nsuspicion that vacuuming won't do too much for us however... now that I\nthink about it - we do very little removing, pretty much only inserts and\nselects. I will give it a vacuum full and see what happens.\nCheers\nAntoine\n\n\n--\nThis is where I should put some witty comment.\n", "msg_date": "Tue, 17 Jan 2006 09:14:27 +0100", "msg_from": "Antoine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: new to postgres (and db management) and performance already a\n\tproblem :-(" }, { "msg_contents": "On Tue, Jan 17, 2006 at 09:14:27AM +0100, Antoine wrote:\n> think about it - we do very little removing, pretty much only inserts and\n> selects. I will give it a vacuum full and see what happens.\n\nUPDATES? Remember that, in Postgres, UPDATE is effectively DELETE +\nINSERT (from the point of view of storage, not the point of view of\nthe user).\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Tue, 17 Jan 2006 08:28:59 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance already a\n\tproblem :-(" }, { "msg_contents": "Hi, Tom,\n\nTom Lane wrote:\n\n>>in our db system (for a website), i notice performance boosts after a vacuum \n>>full. but then, a VACUUM FULL takes 50min+ during which the db is not really \n>>accessible to web-users. is there another way to perform maintenance tasks \n>>AND leaving the db fully operable and accessible?\n> \n> You're not doing regular vacuums often enough.\n\nIt may also help to increase the max_fsm_pages setting, so postmaster\nhas more memory to remember freed pages between VACUUMs.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU!
www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 17 Jan 2006 14:33:41 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance" }, { "msg_contents": "Hi, Thomas,\n\[email protected] wrote:\n>> Try a), b), and c) in order on the \"offending\" tables as they address\n>> the problem at increasing cost...\n> \n> thanks alot for the detailed information! the entire concept of vacuum\n> isn't yet that clear to me, so your explanations and hints are very much\n> appreciated. i'll defenitely try these steps this weekend when the next\n> full vacuum was scheduled :-)\n\nBasically, VACUUM scans the whole table and looks for pages containing\ngarbage rows (or row versions), deletes the garbage, and adds those\npages to the free space map (if there are free slots). When allocating\nnew rows / row versions, PostgreSQL first tries to fit them in pages\nfrom the free space maps before allocating new pages. This is why a high\nmax_fsm_pages setting can help when VACUUM freqency is low.\n\nVACUUM FULL additionally moves rows between pages, trying to concentrate\nall the free space at the end of the tables (aka \"defragmentation\"), so\nit can then truncate the files and release the space to the filesystem.\n\nCLUSTER basically rebuilds the tables by copying all rows into a new\ntable, in index order, and then dropping the old table, which also\nreduces fragmentation, but not as strong as VACUUM FULL might.\n\nANALYZE creates statistics about the distribution of values in a column,\nallowing the query optimizer to estimate the selectivity of query criteria.\n\n(This explanation is rather simplified, and ignores indices as well as\nthe fact that a table can consist of multiple files. Also, I believe\nthat newer PostgreSQL versions allow VACUUM to truncate files when free\npages happen to appear at the very end of the file.)\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 17 Jan 2006 14:52:58 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: new to postgres (and db management) and performance" } ]
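The maintenance and diagnostic commands discussed in this thread can all be run from psql. The table name below is a placeholder, and pgstattuple() is only available after installing the contrib/pgstattuple module.

    -- lazy vacuum: records dead space in the FSM without blocking normal
    -- reads and writes; VERBOSE reports how many dead row versions it found
    VACUUM VERBOSE mytable;

    -- detailed dead-space figures for one table (contrib/pgstattuple)
    SELECT * FROM pgstattuple('mytable');

    -- reclaim dead space and refresh planner statistics in one pass
    VACUUM ANALYZE mytable;

    -- last resort: compacts the table and returns space to the operating
    -- system, but holds an exclusive lock for the whole run
    VACUUM FULL mytable;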
[ { "msg_contents": "Hi,\n\nI already read the documentation for to use the SPI_PREPARE and\nSPI_EXEC... but sincerely I don't understand how I will use this\nresource in my statements.\n\nI looked for examples, but I din't good examples :(..\n\nSomebody can help me?\n\nThanks.\n\nMarcos.\n\n\n\n", "msg_date": "Tue, 17 Jan 2006 09:04:53 +0000", "msg_from": "Marcos <[email protected]>", "msg_from_op": true, "msg_subject": "Use of Stored Procedures and" }, { "msg_contents": "On Tue, Jan 17, 2006 at 09:04:53AM +0000, Marcos wrote:\n> I already read the documentation for to use the SPI_PREPARE and\n> SPI_EXEC... but sincerely I don't understand how I will use this\n> resource in my statements.\n\nWhat statements? What problem are you trying to solve?\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 18 Jan 2006 13:28:25 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of Stored Procedures and" }, { "msg_contents": "> What statements? \n\nSorry. Statements is my code.\n\n> What problem are you trying to solve?\n\nI want know how I make to use a prepared plan\n( http://www.postgresql.org/docs/8.1/static/sql-prepare.html ). I read\nthat I need to use the SPI_PREPARE and SPI_EXEC in my code, but I didn't\nunderstand how make it.\n\nThanks\n\n", "msg_date": "Thu, 19 Jan 2006 08:44:16 +0000", "msg_from": "Marcos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use of Stored Procedures and" }, { "msg_contents": "> Which interface are you using to connect to PostgreSQL? libpq, libpqxx,\n> pgjdbc, python-popy?\n> \n> E. G. PGJDBC handles prepared plans transparently by using the\n> PreparedStatement class.\n> \n> If you use command line PSQL, you can use the PREPARE commands.\n\nI'm using the adodb to call the stored procedure (plpgsql).\n\nThanks..\n\n", "msg_date": "Thu, 19 Jan 2006 09:42:57 +0000", "msg_from": "Marcos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Use of Stored Procedures and" }, { "msg_contents": "Hi, Marcos,\n\nMarcos wrote:\n\n>>What problem are you trying to solve?\n> \n> I want know how I make to use a prepared plan\n> ( http://www.postgresql.org/docs/8.1/static/sql-prepare.html ). I read\n> that I need to use the SPI_PREPARE and SPI_EXEC in my code, but I didn't\n> understand how make it.\n\nWhich interface are you using to connect to PostgreSQL? libpq, libpqxx,\npgjdbc, python-popy?\n\nE. G. PGJDBC handles prepared plans transparently by using the\nPreparedStatement class.\n\nIf you use command line PSQL, you can use the PREPARE commands.\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 19 Jan 2006 12:15:09 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of Stored Procedures and" }, { "msg_contents": "Hi, Marcos,\n\nMarcos wrote:\n>>Which interface are you using to connect to PostgreSQL? libpq, libpqxx,\n>>pgjdbc, python-popy?\n>>\n>>E. G. 
PGJDBC handles prepared plans transparently by using the\n>>PreparedStatement class.\n>>\n>>If you use command line PSQL, you can use the PREPARE commands.\n> \n> I'm using the adodb to call the stored procedure (plpgsql).\n\nSo your statements are inside a plpgsql stored procedure, important to\nknow that.\n\nAFAIK, plpgsql uses prepared statements internally, so it should not be\nnecessary to use them explicitly.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 19 Jan 2006 13:20:57 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Use of Stored Procedures and" } ]
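For reference, the SQL-level PREPARE that Markus mentions looks like the sketch below; the statement and table are invented for the example. Inside a plpgsql function none of this is needed, since the plan of each embedded query is prepared and cached automatically the first time it is executed in a session.

    -- plan the statement once per session...
    PREPARE get_user (integer) AS
        SELECT name, email FROM users WHERE id = $1;

    -- ...then execute it as often as needed with different parameters
    EXECUTE get_user(42);
    EXECUTE get_user(43);

    -- the prepared statement lives until the session ends or it is released
    DEALLOCATE get_user;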
[ { "msg_contents": "hi,\n\nI'm curious as to why autovacuum is not designed to do full vacuum. I \nknow that the necessity of doing full vacuums can be reduced by \nincreasing the FSM, but in my opinion that is the wrong decision for \nmany applications. My application does not continuously \ninsert/update/delete tuples at a constant rate. Basically there are long \nperiods of relatively few modifications and short burst of high \nactivity. Increasing the FSM so that even during these bursts most space \n would be reused would mean to reduce the available memory for all \nother database tasks.\n\nSo my question is: What's the use of an autovacuum daemon if I still \nhave to use a cron job to do full vacuums? wouldn't it just be a minor \njob to enhance autovacuum to be able to perform full vacuums, if one \nreally wants it to do that - even if some developers think that it's the \nwrong approach?\n\nMike\n", "msg_date": "Tue, 17 Jan 2006 11:18:59 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Autovacuum / full vacuum" }, { "msg_contents": "> So my question is: What's the use of an autovacuum daemon if I still \n> have to use a cron job to do full vacuums? wouldn't it just be a minor \n> job to enhance autovacuum to be able to perform full vacuums, if one \n> really wants it to do that - even if some developers think that it's the \n> wrong approach?\n\nYou should never have to do full vacuums...\n\nChris\n", "msg_date": "Tue, 17 Jan 2006 18:29:37 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Hi,\n\ndid you read my post? In the first part I explained why I don't want to \nincrease the FSM that much.\n\nMike\n\n>> So my question is: What's the use of an autovacuum daemon if I still \n>> have to use a cron job to do full vacuums? wouldn't it just be a minor \n>> job to enhance autovacuum to be able to perform full vacuums, if one \n>> really wants it to do that - even if some developers think that it's \n>> the wrong approach?\n> \n> You should never have to do full vacuums...\n> \n> Chris\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n", "msg_date": "Tue, 17 Jan 2006 11:33:02 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": ">> You should never have to do full vacuums...\n\nI would rather say, You should never have to do full vacuums by any\nperiodic means. It may be done on a adhoc basis, when you have figured\nout that your table is never going to grow that big again.\n\nOn 1/17/06, Christopher Kings-Lynne <[email protected]> wrote:\n> > So my question is: What's the use of an autovacuum daemon if I still\n> > have to use a cron job to do full vacuums? 
wouldn't it just be a minor\n> > job to enhance autovacuum to be able to perform full vacuums, if one\n> > really wants it to do that - even if some developers think that it's the\n> > wrong approach?\n>\n> You should never have to do full vacuums...\n>\n> Chris\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Tue, 17 Jan 2006 17:05:20 +0530", "msg_from": "Pandurangan R S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Tue, Jan 17, 2006 at 11:33:02AM +0100, Michael Riess wrote:\n>did you read my post? In the first part I explained why I don't want to \n>increase the FSM that much.\n\nSince you didn't quantify it, that wasn't much of a data point. (IOW,\nyou'd generally have to be seriously resource constrained before the FSM\nwould be a significant source of memory consumption--in which case more\nRAM would probably be a much better solution than screwing with\nautovacuum.)\n\nMike Stone\n", "msg_date": "Tue, 17 Jan 2006 07:08:05 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Michael Riess wrote:\n> hi,\n> \n> I'm curious as to why autovacuum is not designed to do full vacuum.\n\nBecause a VACUUM FULL is too invasive. Lazy vacuum is so light on the\nsystem w.r.t. locks that it's generally not a problem to start one at\nany time. On the contrary, vacuum full could be a disaster on some\nsituations.\n\nWhat's more, in general a lazy vacuum is enough to keep the dead space\nwithin manageability, given a good autovacuum configuration and good FSM\nconfiguration, so there's mostly no need for full vacuum. (This is the\ntheory at least.) For the situations where there is a need, we tell you\nto issue it manually.\n\n> So my question is: What's the use of an autovacuum daemon if I still \n> have to use a cron job to do full vacuums? wouldn't it just be a minor \n> job to enhance autovacuum to be able to perform full vacuums, if one \n> really wants it to do that - even if some developers think that it's the \n> wrong approach?\n\nYes, it is a minor job to \"enhance\" it to perform vacuum full. The\nproblem is having a good approach to determining _when_ to issue a full\nvacuum, and having a way to completely disallow it. If you want to do\nthe development work, be my guest (but let us know your design first).\nIf you don't, I guess you would have to wait until it comes high enough\non someone's to-do list, maybe because you convinced him (or her, but we\ndon't have Postgres-ladies at the moment AFAIK) monetarily or something.\n\nYou can, of course, produce a patch and use it internally. This is free\nsoftware, remember.\n\n-- \nAlvaro Herrera Developer, http://www.PostgreSQL.org\n\"God is real, unless declared as int\"\n", "msg_date": "Tue, 17 Jan 2006 10:19:58 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "> I'm curious as to why autovacuum is not designed to do full vacuum. \n\nBecause that's terribly invasive due to the locks it takes out.\n\nLazy vacuum may chew some I/O, but it does *not* block your\napplication for the duration.\n\nVACUUM FULL blocks the application. 
That is NOT something that anyone\nwants to throw into the \"activity mix\" randomly.\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in String.concat \"@\" [name;tld];;\nhttp://linuxdatabases.info/info/slony.html\nSigns of a Klingon Programmer #11: \"This machine is a piece of GAGH! I\nneed dual Pentium processors if I am to do battle with this code!\"\n", "msg_date": "Tue, 17 Jan 2006 08:59:28 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "> did you read my post? In the first part I explained why I don't want\n> to increase the FSM that much.\n\nNo, you didn't. You explained *that* you thought you didn't want to\nincrease the FSM. You didn't explain why.\n\nFSM expansion comes fairly cheap, and tends to be an effective way of\neliminating the need for VACUUM FULL. That is generally considered to\nbe a good tradeoff. In future versions, there is likely to be more of\nthis sort of thing; for instance, on the ToDo list is a \"Vacuum Space\nMap\" that would collect page IDs that need vacuuming so that\nPostgreSQL could do \"quicker\" vacuums...\n-- \noutput = reverse(\"moc.liamg\" \"@\" \"enworbbc\")\nhttp://cbbrowne.com/info/internet.html\nGiven recent events in Florida, the tourism board in Texas has\ndeveloped a new advertising campaign based on the slogan \"Ya'll come\nto Texas, where we ain't shot a tourist in a car since November 1963.\"\n", "msg_date": "Tue, 17 Jan 2006 09:04:06 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "\n> VACUUM FULL blocks the application. That is NOT something that anyone\n> wants to throw into the \"activity mix\" randomly.\n\nThere must be a way to implement a daemon which frees up space of a \nrelation without blocking it too long. It could abort after a certain \nnumber of blocks have been freed and then move to the next relation.\n", "msg_date": "Tue, 17 Jan 2006 15:05:29 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Michael Riess wrote:\n> did you read my post? In the first part I explained why I don't want to \n> increase the FSM that much.\n\nI'm sure he did, but just because you don't have enough FSM space to \ncapture all everything from your \"burst\", that doesn't mean that space \ncan't be reclaimed. The next time a regular vacuum is run, it will once \nagain try to fill the FSM with any remaining free space it finds in the \ntable. What normally happens is that your table will never bee 100% \nfree of dead space, normally it will settle at some steady state size \nthat is small percentage bigger than the table will be after a full \nvacuum. As long as that percentage is small enough, the effect on \nperformance is negligible. Have you measured to see if things are truly \nfaster after a VACUUM FULL?\n\nMatt\n", "msg_date": "Tue, 17 Jan 2006 09:09:02 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Hi, Matthew,\n\nMatthew T. O'Connor wrote:\n\n> I'm sure he did, but just because you don't have enough FSM space to\n> capture all everything from your \"burst\", that doesn't mean that space\n> can't be reclaimed. 
The next time a regular vacuum is run, it will once\n> again try to fill the FSM with any remaining free space it finds in the\n> table. What normally happens is that your table will never bee 100%\n> free of dead space, normally it will settle at some steady state size\n> that is small percentage bigger than the table will be after a full\n> vacuum. As long as that percentage is small enough, the effect on\n> performance is negligible.\n\nThis will work if you've a steady stream of inserts / updates, but not\nif you happen to have update bulks that exhaust the FSM capacity. The\nupdate first fills up all the FSM, and then allocates new pages for the\nrest. Then VACUUM comes and refills the FSM, however, the FSM does not\ncontain enough free space for the next large bulk update. The same is\nfor deletes and large bulk inserts, btw.\n\nSo your table keeps growing steadily, until VACUUM FULL or CLUSTER comes\nalong to clean up the mess.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 17 Jan 2006 15:30:14 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Hi,\n\nyes, some heavily used tables contain approx. 90% free space after a \nweek. I'll try to increase FSM even more, but I think that I will still \nhave to run a full vacuum every week. Prior to 8.1 I was using 7.4 and \nran a full vacuum every day, so the autovacuum has helped a lot.\n\nBut actually I never understood why the database system slows down at \nall when there is much unused space in the files. Are the unused pages \ncached by the system, or is there another reason for the impact on the \nperformance?\n\nMike\n\n\n > Have you measured to see if things are truly\n> faster after a VACUUM FULL?\n> \n> Matt\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n", "msg_date": "Tue, 17 Jan 2006 15:33:18 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Well,\n\nI think that the documentation is not exactly easy to understand. I \nalways wondered why there are no examples for common postgresql \nconfigurations. All I know is that the default configuration seems to be \ntoo low for production use. And while running postgres I get no hints as \nto which setting needs to be increased to improve performance. I have no \nchance to see if my FSM settings are too low other than to run vacuum \nfull verbose in psql, pipe the result to a text file and grep for some \nwords to get a somewhat comprehensive idea of how much unused space \nthere is in my system.\n\nDon't get me wrong - I really like PostgreSQL and it works well in my \napplication. But somehow I feel that it might run much better ...\n\nabout the FSM: You say that increasing the FSM is fairly cheap - how \nshould I know that?\n\n>> did you read my post? In the first part I explained why I don't want\n>> to increase the FSM that much.\n> \n> No, you didn't. You explained *that* you thought you didn't want to\n> increase the FSM. 
You didn't explain why.\n> \n> FSM expansion comes fairly cheap ...\n", "msg_date": "Tue, 17 Jan 2006 15:50:38 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:\n> hi,\n> \n> I'm curious as to why autovacuum is not designed to do full vacuum. I \n\nBecause nothing that runs automatically should ever take an exclusive\nlock on the entire database, which is what VACUUM FULL does.\n\n> activity. Increasing the FSM so that even during these bursts most space \n> would be reused would mean to reduce the available memory for all \n> other database tasks.\n\nI don't believe the hit is enough that you should even notice it. \nYou'd have to post some pretty incredible use cases to show that the\ntiny loss of memory to FSM is worth (a) an exclusive lock and (b) the\nloss of efficiency you get from having some preallocated pages in\ntables.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n", "msg_date": "Tue, 17 Jan 2006 09:56:49 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Hi,\n\n>> hi,\n>>\n>> I'm curious as to why autovacuum is not designed to do full vacuum. I \n> \n> Because nothing that runs automatically should ever take an exclusive\n> lock on the entire database, which is what VACUUM FULL does.\n\nI thought that vacuum full only locks the table which it currently \noperates on? I'm pretty sure that once a table has been vacuumed, it can \nbe accessed without any restrictions while the vacuum process works on \nthe next table.\n\n> \n>> activity. Increasing the FSM so that even during these bursts most space \n>> would be reused would mean to reduce the available memory for all \n>> other database tasks.\n> \n> I don't believe the hit is enough that you should even notice it. \n> You'd have to post some pretty incredible use cases to show that the\n> tiny loss of memory to FSM is worth (a) an exclusive lock and (b) the\n> loss of efficiency you get from having some preallocated pages in\n> tables.\n\nI have 5000 tables and a workstation with 1 GB RAM which hosts an Apache \n Web Server, Tomcat Servlet Container and PostgreSQL. RAM is not \nsomething that I have plenty of ... and the hardware is fixed and cannot \nbe changed.\n\n\n", "msg_date": "Tue, 17 Jan 2006 16:04:41 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Tue, Jan 17, 2006 at 03:50:38PM +0100, Michael Riess wrote:\n>about the FSM: You say that increasing the FSM is fairly cheap - how \n>should I know that?\n\nWhy would you assume otherwise, to the point of not considering changing\nthe setting? \n\nThe documentation explains how much memory is used for FSM entries. If\nyou look at vacuum verbose output it will tell you how much memory\nyou're currently using for the FSM.\n\nMike Stone\n", "msg_date": "Tue, 17 Jan 2006 10:07:32 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Tue, Jan 17, 2006 at 03:50:38PM +0100, Michael Riess wrote:\n> always wondered why there are no examples for common postgresql \n> configurations. \n\nYou mean like this one? 
(for 8.0):\n\n<http://www.powerpostgresql.com/Downloads/annotated_conf_80.html>\n\n\n\n> All I know is that the default configuration seems to be \n> too low for production use. \n\nDefine \"production use\". It may be too low for you.\n\n> chance to see if my FSM settings are too low other than to run vacuum \n> full verbose in psql, pipe the result to a text file and grep for some \n\nNot true. You don't need a FULL on there to figure this out.\n\n> about the FSM: You say that increasing the FSM is fairly cheap - how \n> should I know that?\n\nDo the math. The docs say this:\n\n--snip---\nmax_fsm_pages (integer)\n\n Sets the maximum number of disk pages for which free space will\nbe tracked in the shared free-space map. Six bytes of shared memory\nare consumed for each page slot. This setting must be more than 16 *\nmax_fsm_relations. The default is 20000. This option can only be set\nat server start. \n\nmax_fsm_relations (integer)\n\n Sets the maximum number of relations (tables and indexes) for\nwhich free space will be tracked in the shared free-space map.\nRoughly seventy bytes of shared memory are consumed for each slot.\nThe default is 1000. This option can only be set at server start. \n\n---snip---\n\nSo by default, you have 6 B * 20,000 = 120,000 bytes for the FSM pages.\n\nBy default, you have 70 B * 1,000 = 70,000 bytes for the FSM\nrelations.\n\nNow, there are two knobs. One of them tracks the number of\nrelations. How many relations do you have? Count the number of\nindexes and tables you have, and give yourself some headroom in case\nyou add some more, and poof, you have your number for the relations.\n\nNow all you need to do is figure out what your churn rate is on\ntables, and count up how many disk pages that's likely to be. Give\nyourself a little headroom, and the number of FSM pages is done, too.\n\nThis churn rate is often tough to estimate, though, so you may have\nto fiddle with it from time to time. \n\nA\n\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n", "msg_date": "Tue, 17 Jan 2006 10:08:00 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Tue, Jan 17, 2006 at 09:09:02AM -0500, Matthew T. O'Connor wrote:\n> vacuum. As long as that percentage is small enough, the effect on \n> performance is negligible. Have you measured to see if things are truly \n\nActually, as long as the percentage is small enough and the pages are\nreally empty, the performance effect is positive. If you have VACUUM\nFULLed table, inserts have to extend the table before inserting,\nwhereas in a table with some space reclaimed, the I/O effect of\nhaving to allocate another disk page is already done.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nWhen my information changes, I alter my conclusions. What do you do sir?\n\t\t--attr. 
John Maynard Keynes\n", "msg_date": "Tue, 17 Jan 2006 10:09:48 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On 1/17/06, Michael Riess <[email protected]> wrote:\n>\n> about the FSM: You say that increasing the FSM is fairly cheap - how\n> should I know that?\n>\n\ncomment from original postgresql.conf file seems pretty obvious:\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\n\nbasically setting max_fsm_pages to 1000000 consumes 6 megabytes. and i\ndefinitelly doubt you will ever hit that high.\n\ndepesz\n\nOn 1/17/06, Michael Riess <[email protected]> wrote:\nabout the FSM: You say that increasing the FSM is fairly cheap - howshould I know that?comment from original postgresql.conf file seems pretty obvious:#max_fsm_pages = 20000          # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000       # min 100, ~70 bytes eachbasically setting max_fsm_pages to 1000000 consumes 6 megabytes. and i definitelly doubt you will ever hit that high.depesz", "msg_date": "Tue, 17 Jan 2006 16:10:34 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Michael Riess <[email protected]> writes:\n> I'm curious as to why autovacuum is not designed to do full vacuum.\n\nLocking considerations. VACUUM FULL takes an exclusive lock, which\nblocks any foreground transactions that want to touch the table ---\nso it's really not the sort of thing you want being launched at\nunpredictable times.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jan 2006 10:13:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum " }, { "msg_contents": "On Tue, Jan 17, 2006 at 03:05:29PM +0100, Michael Riess wrote:\n> There must be a way to implement a daemon which frees up space of a \n> relation without blocking it too long. \n\nDefine \"too long\". If I have a table that needs to respond to a\nSELECT in 50ms, I don't have time for you to lock my table. If this\nwere such an easy thing to do, don't you think the folks who came up\nwit the ingenious lazy vacuum system would have done it?\n\nRemember, a vacuum full must completely lock the table, because it is\nphysically moving bits around on the disk. So a SELECT can't happen\nat the same time, because the bits might move out from under the\nSELECT while it's running. Concurrency is hard, and race conditions\nare easy, to implement.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nA certain description of men are for getting out of debt, yet are\nagainst all taxes for raising money to pay it off.\n\t\t--Alexander Hamilton\n", "msg_date": "Tue, 17 Jan 2006 10:13:57 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Tue, Jan 17, 2006 at 04:04:41PM +0100, Michael Riess wrote:\n> \n> I thought that vacuum full only locks the table which it currently \n> operates on? I'm pretty sure that once a table has been vacuumed, it can \n> be accessed without any restrictions while the vacuum process works on \n> the next table.\n\nYes, I think the way I phrased it was unfortunate. But if you issue\nVACUUM FULL you'll get an exclusive lock on everything, although not\nall at the same time. 
But of course, if your query load is like\nthis\n\nBEGIN;\nSELECT from t1, t2 where t1.col1 = t2.col2;\n[application logic]\nUPDATE t3 . . .\nCOMMIT;\n\nyou'll find yourself blocked in the first statement on both t1 and\nt2; and then on t3 as well. You sure don't want that to happen\nautomagically, in the middle of your business day. \n\n> I have 5000 tables and a workstation with 1 GB RAM which hosts an Apache \n> Web Server, Tomcat Servlet Container and PostgreSQL. RAM is not \n> something that I have plenty of ... and the hardware is fixed and cannot \n> be changed.\n\nI see. Well, I humbly submit that your problem is not the design of\nthe PostgreSQL server, then. \"The hardware is fixed and cannot be\nchanged,\" is the first optimisation I'd make. Heck, I gave away a\nbox to charity only two weeks ago that would solve your problem\nbetter than automatically issuing VACUUM FULL.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nInformation security isn't a technological problem. It's an economics\nproblem.\n\t\t--Bruce Schneier\n", "msg_date": "Tue, 17 Jan 2006 10:19:44 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Michael Riess <[email protected]> writes:\n> But actually I never understood why the database system slows down at \n> all when there is much unused space in the files.\n\nPerhaps some of your common queries are doing sequential scans? Those\nwould visit the empty pages as well as the full ones.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jan 2006 10:36:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum " }, { "msg_contents": "On Tue, 2006-01-17 at 09:08, Andrew Sullivan wrote:\n> On Tue, Jan 17, 2006 at 03:50:38PM +0100, Michael Riess wrote:\n> > always wondered why there are no examples for common postgresql \n> > configurations. \n> \n> You mean like this one? (for 8.0):\n> \n> <http://www.powerpostgresql.com/Downloads/annotated_conf_80.html>\n\nI have to admit, looking at the documentation, that we really don't\nexplain this all that well in the administration section, and I can see\nhow easily led astray beginners are.\n\nI think it's time I joined the pgsql-docs mailing list...\n", "msg_date": "Tue, 17 Jan 2006 09:59:25 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "[email protected] (Andrew Sullivan) writes:\n> On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:\n>> hi,\n>> \n>> I'm curious as to why autovacuum is not designed to do full vacuum. I \n>\n> Because nothing that runs automatically should ever take an exclusive\n> lock on the entire database, which is what VACUUM FULL does.\n\nThat's a bit more than what autovacuum would probably do...\nautovacuum does things table by table, so that what would be locked\nshould just be one table.\n\nEven so, I'd not be keen on having anything that runs automatically\ntake an exclusive lock on even as much as a table.\n\n>> activity. Increasing the FSM so that even during these bursts most\n>> space would be reused would mean to reduce the available memory for\n>> all other database tasks.\n>\n> I don't believe the hit is enough that you should even notice\n> it. 
You'd have to post some pretty incredible use cases to show that\n> the tiny loss of memory to FSM is worth (a) an exclusive lock and\n> (b) the loss of efficiency you get from having some preallocated\n> pages in tables.\n\nThere is *a* case for setting up full vacuums of *some* objects. If\nyou have a table whose tuples all get modified in the course of some\ncommon query, that will lead to a pretty conspicuous bloating of *that\ntable.*\n\nEven with a big FSM, the pattern of how updates take place will lead\nto that table having ~50% of its space being \"dead/free,\" which is way\nhigher than the desirable \"stable proportion\" of 10-15%.\n\nFor that sort of table, it may be attractive to run VACUUM FULL on a\nregular basis. Of course, it may also be attractive to try to come up\nwith an update process that won't kill the whole table's contents at\nonce ;-).\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in String.concat \"@\" [name;tld];;\nhttp://cbbrowne.com/info/x.html\n\"As long as each individual is facing the TV tube alone, formal\nfreedom poses no threat to privilege.\" --Noam Chomsky\n", "msg_date": "Tue, 17 Jan 2006 11:43:14 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Tue, Jan 17, 2006 at 09:59:25AM -0600, Scott Marlowe wrote:\n> I have to admit, looking at the documentation, that we really don't\n> explain this all that well in the administration section, and I can see\n> how easily led astray beginners are.\n\nI understand what you mean, but I suppose my reaction would be that\nwhat we really need is a place to keep these things, with a note in\nthe docs that the \"best practice\" settings for these are documented\nat <some url>, and evolve over time as people gain expertise with the\nnew features.\n\nI suspect, for instance, that nobody knows exactly the right settings\nfor any generic workload yet under 8.1 (although probably people know\nthem well enough for particular workloads).\n\nA\n\n\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. \n\t\t--Dennis Ritchie\n", "msg_date": "Tue, 17 Jan 2006 12:16:56 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Hi, Michael,\n\nMichael Riess wrote:\n\n> But actually I never understood why the database system slows down at\n> all when there is much unused space in the files. Are the unused pages\n> cached by the system, or is there another reason for the impact on the\n> performance?\n\nNo, they are not cached as such, but PostgreSQL caches whole pages, and\nyou don't have only empty pages, but also lots of partially empty pages,\nso the signal/noise ratio is worse (means PostgreSQL has to fetch more\npages to get the same data).\n\nSequential scans etc. are also slower.\n\nAnd some file systems get slower when files get bigger or there are more\nfiles, but this effect should not really be noticeable here.\n\nHTH,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! 
www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 17 Jan 2006 18:25:06 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Chris Browne wrote:\n> [email protected] (Andrew Sullivan) writes:\n> > On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:\n> >> hi,\n> >> \n> >> I'm curious as to why autovacuum is not designed to do full vacuum. I \n> >\n> > Because nothing that runs automatically should ever take an exclusive\n> > lock on the entire database, which is what VACUUM FULL does.\n> \n> That's a bit more than what autovacuum would probably do...\n> autovacuum does things table by table, so that what would be locked\n> should just be one table.\n\nEven a database-wide vacuum does not take locks on more than one table.\nThe table locks are acquired and released one by one, as the operation\nproceeds. And as you know, autovacuum (both 8.1's and contrib) does\nissue database-wide vacuums, if it finds a database close to an xid\nwraparound.\n\n-- \nAlvaro Herrera http://www.advogato.org/person/alvherre\n\"Las mujeres son como hondas: mientras m�s resistencia tienen,\n m�s lejos puedes llegar con ellas\" (Jonas Nightingale, Leap of Faith)\n", "msg_date": "Tue, 17 Jan 2006 14:30:47 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Tue, 2006-01-17 at 11:16, Andrew Sullivan wrote:\n> On Tue, Jan 17, 2006 at 09:59:25AM -0600, Scott Marlowe wrote:\n> > I have to admit, looking at the documentation, that we really don't\n> > explain this all that well in the administration section, and I can see\n> > how easily led astray beginners are.\n> \n> I understand what you mean, but I suppose my reaction would be that\n> what we really need is a place to keep these things, with a note in\n> the docs that the \"best practice\" settings for these are documented\n> at <some url>, and evolve over time as people gain expertise with the\n> new features.\n> \n> I suspect, for instance, that nobody knows exactly the right settings\n> for any generic workload yet under 8.1 (although probably people know\n> them well enough for particular workloads).\n\nBut the problem is bigger than that. The administrative docs were\nobviously evolved over time, and now they kind of jump around and around\ncovering the same subject from different angles and at different\ndepths. Even I find it hard to find what I need, and I know PostgreSQL\nadministration well enough to be pretty darned good at it.\n\nFor the beginner, it must seem much more confusing. The more I look at\nthe administration section of the docs, the more I want to reorganize\nthe whole thing, and rewrite large sections of it as well.\n", "msg_date": "Tue, 17 Jan 2006 11:31:43 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Tue, Jan 17, 2006 at 11:43:14AM -0500, Chris Browne wrote:\n> [email protected] (Andrew Sullivan) writes:\n> > Because nothing that runs automatically should ever take an exclusive\n> > lock on the entire database, \n\n> That's a bit more than what autovacuum would probably do...\n\nOr even VACUUM FULL, as I tried to make clearer in another message:\nthe way I phrased it suggests that it's a simultaneous lock on the\nentire database (when it is most certainly not). 
I didn't intend to\nmislead; my apologies.\n\nNote, though, that the actual effect for a user might look worse\nthan a lock on the entire database, though, if you conider\nstatement_timeout and certain use patterns.\n\nSuppose you want to issue occasional VACCUM FULLs, but your\napplication is prepared for this, and depends on statement_timeout to\ntell it \"sorry, too long, try again\". Now, if the exclusive lock on\nany given table takes less than statement_timeout, so that each\nstatement is able to continue in its time, the application looks like\nit's having an outage _even though_ it is actually blocked on\nvacuums. (Yes, it's poor application design. There's plenty of that\nin the world, and you can't always fix it.)\n\n> There is *a* case for setting up full vacuums of *some* objects. If\n> you have a table whose tuples all get modified in the course of some\n> common query, that will lead to a pretty conspicuous bloating of *that\n> table.*\n\nSure. And depending on your use model, that might be good. In many\ncases, though, a \"rotor table + view + truncate\" approach would be\nbetter, and would allow improved uptime. If you don't care about\nuptime, and can take long outages every day, then the discussion is\nsort of moot anyway. And _all_ of this is moot, as near as I can\ntell, given the OP's claim that the hardware is adequate and\nimmutable, even though the former claim is demonstrably false.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n", "msg_date": "Tue, 17 Jan 2006 13:13:50 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "[email protected] (Alvaro Herrera) writes:\n> Chris Browne wrote:\n>> [email protected] (Andrew Sullivan) writes:\n>> > On Tue, Jan 17, 2006 at 11:18:59AM +0100, Michael Riess wrote:\n>> >> hi,\n>> >> \n>> >> I'm curious as to why autovacuum is not designed to do full vacuum. I \n>> >\n>> > Because nothing that runs automatically should ever take an exclusive\n>> > lock on the entire database, which is what VACUUM FULL does.\n>> \n>> That's a bit more than what autovacuum would probably do...\n>> autovacuum does things table by table, so that what would be locked\n>> should just be one table.\n>\n> Even a database-wide vacuum does not take locks on more than one table.\n> The table locks are acquired and released one by one, as the operation\n> proceeds. And as you know, autovacuum (both 8.1's and contrib) does\n> issue database-wide vacuums, if it finds a database close to an xid\n> wraparound.\n\nHas that changed recently? I have always seen \"vacuumdb\" or SQL\n\"VACUUM\" (without table specifications) running as one long\ntransaction which doesn't release the locks that it is granted until\nthe end of the transaction.\n-- \n\"cbbrowne\",\"@\",\"acm.org\"\nhttp://cbbrowne.com/info/spiritual.html\n\"My nostalgia for Icon makes me forget about any of the bad things. 
I\ndon't have much nostalgia for Perl, so its faults I remember.\"\n-- Scott Gilbert comp.lang.python\n", "msg_date": "Tue, 17 Jan 2006 13:40:46 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Chris Browne <[email protected]> writes:\n> [email protected] (Alvaro Herrera) writes:\n>> Even a database-wide vacuum does not take locks on more than one table.\n>> The table locks are acquired and released one by one, as the operation\n>> proceeds.\n\n> Has that changed recently? I have always seen \"vacuumdb\" or SQL\n> \"VACUUM\" (without table specifications) running as one long\n> transaction which doesn't release the locks that it is granted until\n> the end of the transaction.\n\nYou sure? It's not supposed to, and watching a database-wide vacuum\nwith \"select * from pg_locks\" doesn't look to me like it ever has locks\non more than one table (plus the table's indexes and toast table).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jan 2006 14:32:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum " }, { "msg_contents": "On Tue, Jan 17, 2006 at 03:50:38PM +0100, Michael Riess wrote:\n> Well,\n> \n> I think that the documentation is not exactly easy to understand. I \n> always wondered why there are no examples for common postgresql \n> configurations. All I know is that the default configuration seems to be \n> too low for production use. And while running postgres I get no hints as \n> to which setting needs to be increased to improve performance. I have no \n\nThere's a number of sites that have lots of info on postgresql.conf\ntuning. Google for 'postgresql.conf tuning' or 'annotated\npostgresql.conf'.\n\n> chance to see if my FSM settings are too low other than to run vacuum \n> full verbose in psql, pipe the result to a text file and grep for some \n> words to get a somewhat comprehensive idea of how much unused space \n> there is in my system.\n> \n> Don't get me wrong - I really like PostgreSQL and it works well in my \n> application. But somehow I feel that it might run much better ...\n> \n> about the FSM: You say that increasing the FSM is fairly cheap - how \n> should I know that?\n\[email protected][16:26]/opt/local/share/postgresql8:3%grep fsm \\\npostgresql.conf.sample \n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\[email protected][16:26]/opt/local/share/postgresql8:4%\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 17 Jan 2006 16:28:12 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "> There's a number of sites that have lots of info on postgresql.conf\n> tuning. Google for 'postgresql.conf tuning' or 'annotated\n> postgresql.conf'.\n\nI know some of these sites, but who should I know if the information on \nthose pages is correct? The information on those pages should be \npublished as part of the postgres documentation. 
Doesn't have to be too \nmuch, maybe like this page:\n\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n\nBut it should be part of the documentation to show newbies that not only \nthe information is correct, but also approved of and recommended by the \npostgres team.\n", "msg_date": "Wed, 18 Jan 2006 15:09:42 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "> >> Even a database-wide vacuum does not take locks on more than one table.\n> >> The table locks are acquired and released one by one, as the operation\n> >> proceeds.\n> \n> > Has that changed recently? I have always seen \"vacuumdb\" or SQL\n> > \"VACUUM\" (without table specifications) running as one long\n> > transaction which doesn't release the locks that it is granted until\n> > the end of the transaction.\n> \n> You sure? It's not supposed to, and watching a database-wide vacuum\n> with \"select * from pg_locks\" doesn't look to me like it ever has locks\n> on more than one table (plus the table's indexes and toast table).\n\n Are there some plans to remove vacuum altogether?\n\n Mindaugas\n\n", "msg_date": "Wed, 18 Jan 2006 17:50:49 +0200", "msg_from": "\"Mindaugas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum " }, { "msg_contents": "[email protected] (\"Mindaugas\") writes:\n>> >> Even a database-wide vacuum does not take locks on more than one\n>> >> table. The table locks are acquired and released one by one, as\n>> >> the operation proceeds.\n>> \n>> > Has that changed recently? I have always seen \"vacuumdb\" or SQL\n>> > \"VACUUM\" (without table specifications) running as one long\n>> > transaction which doesn't release the locks that it is granted\n>> > until the end of the transaction.\n>> \n>> You sure? It's not supposed to, and watching a database-wide\n>> vacuum with \"select * from pg_locks\" doesn't look to me like it\n>> ever has locks on more than one table (plus the table's indexes and\n>> toast table).\n>\n> Are there some plans to remove vacuum altogether?\n\nI don't see that on the TODO list...\n\nhttp://www.postgresql.org/docs/faqs.TODO.html\n\nTo the contrary, there is a whole section on what functionality to\n*ADD* to VACUUM.\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in String.concat \"@\" [name;tld];;\nhttp://www.ntlug.org/~cbbrowne/finances.html\n\"There are two types of hackers working on Linux: those who can spell,\nand those who can't. There is a constant, pitched battle between the\ntwo camps.\" \n--Russ Nelson (Linux Kernel Summary, Ver. 1.1.75 -> 1.1.76)\n", "msg_date": "Wed, 18 Jan 2006 11:54:21 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "Mindaugas wrote:\n> > >> Even a database-wide vacuum does not take locks on more than one table.\n> > >> The table locks are acquired and released one by one, as the operation\n> > >> proceeds.\n> > \n> > > Has that changed recently? I have always seen \"vacuumdb\" or SQL\n> > > \"VACUUM\" (without table specifications) running as one long\n> > > transaction which doesn't release the locks that it is granted until\n> > > the end of the transaction.\n> > \n> > You sure? 
It's not supposed to, and watching a database-wide vacuum\n> > with \"select * from pg_locks\" doesn't look to me like it ever has locks\n> > on more than one table (plus the table's indexes and toast table).\n> \n> Are there some plans to remove vacuum altogether?\n\nNo, but there are plans to make it as automatic and unintrusive as\npossible. (User configuration will probably always be needed.)\n\n-- \nAlvaro Herrera Developer, http://www.PostgreSQL.org\nFOO MANE PADME HUM\n", "msg_date": "Wed, 18 Jan 2006 13:55:21 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "\nOn Wednesday 18 January 2006 08:54 am, Chris Browne wrote:\n> To the contrary, there is a whole section on what functionality to\n> *ADD* to VACUUM.\n\nNear but not quite off the topic of VACUUM and new features...\n\nI've been thinking about parsing the vacuum output and storing it in \nPostgresql. All the tuple, page, cpu time, etc... information would be \ninserted into a reasonably flat set of tables.\n\nThe benefits I would expect from this are:\n\n* monitoring ability - I could routinely monitor the values in the table to \nwarn when vacuum's are failing or reclaimed space has risen dramatically. I \nfind it easier to write and maintain monitoring agents that perform SQL \nqueries than ones that need to routinely parse log files and coordinate with \ncron.\n\n* historical perspective on tuple use - which a relatively small amount of \nstorage, I could use the vacuum output to get an idea of usage levels over \ntime, which is beneficial for planning additional capacity\n\n* historical information could theoretically inform the autovacuum, though I \nassume there are better alternatives planned.\n\n* it could cut down on traffic on this list if admin could see routine \nmaintenance in a historical context.\n\nAssuming this isn't a fundamentally horrible idea, it would be nice if there \nwere ways to do this without parsing the pretty-printed vacuum text (ie, \ncallbacks, triggers, guc variable).\n\nI'd like to know if anybody does this already, thinks its a bad idea, or can \nknock me on the noggin with the pg manual and say, \"it's already there!\".\n\nRegards,\n\n Michael\n\n", "msg_date": "Wed, 18 Jan 2006 11:15:51 -0800", "msg_from": "Michael Crozier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "[email protected] (Michael Crozier) writes:\n\n> On Wednesday 18 January 2006 08:54 am, Chris Browne wrote:\n>> To the contrary, there is a whole section on what functionality to\n>> *ADD* to VACUUM.\n>\n> Near but not quite off the topic of VACUUM and new features...\n>\n> I've been thinking about parsing the vacuum output and storing it in\n> Postgresql. All the tuple, page, cpu time, etc... information would\n> be inserted into a reasonably flat set of tables.\n>\n> The benefits I would expect from this are:\n>\n> * monitoring ability - I could routinely monitor the values in the\n> table to warn when vacuum's are failing or reclaimed space has risen\n> dramatically. 
I find it easier to write and maintain monitoring\n> agents that perform SQL queries than ones that need to routinely\n> parse log files and coordinate with cron.\n>\n> * historical perspective on tuple use - which a relatively small\n> amount of storage, I could use the vacuum output to get an idea of\n> usage levels over time, which is beneficial for planning additional\n> capacity\n>\n> * historical information could theoretically inform the autovacuum,\n> though I assume there are better alternatives planned.\n>\n> * it could cut down on traffic on this list if admin could see\n> routine maintenance in a historical context.\n>\n> Assuming this isn't a fundamentally horrible idea, it would be nice\n> if there were ways to do this without parsing the pretty-printed\n> vacuum text (ie, callbacks, triggers, guc variable).\n>\n> I'd like to know if anybody does this already, thinks its a bad\n> idea, or can knock me on the noggin with the pg manual and say,\n> \"it's already there!\".\n\nWe had someone working on that for a while; I don't think it got to\nthe point of being something ready to unleash on the world.\n\nI certainly agree that it would be plenty useful to have this sort of\ninformation available. Having a body of historical information could\nlead to having some \"more informed\" suggestions for heuristics.\n-- \n(reverse (concatenate 'string \"gro.mca\" \"@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/unix.html\nBad command. Bad, bad command! Sit! Stay! Staaay... \n", "msg_date": "Wed, 18 Jan 2006 17:04:51 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "On Wed, Jan 18, 2006 at 03:09:42PM +0100, Michael Riess wrote:\n> >There's a number of sites that have lots of info on postgresql.conf\n> >tuning. Google for 'postgresql.conf tuning' or 'annotated\n> >postgresql.conf'.\n> \n> I know some of these sites, but who should I know if the information on \n> those pages is correct? The information on those pages should be \n> published as part of the postgres documentation. Doesn't have to be too \n> much, maybe like this page:\n> \n> http://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n> \n> But it should be part of the documentation to show newbies that not only \n> the information is correct, but also approved of and recommended by the \n> postgres team.\n\nActually, most of what you find there is probably also found in\ntechdocs. But of course it would be better if the docs did a better job\nof explaining things...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 18 Jan 2006 16:48:56 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum" }, { "msg_contents": "On Wed, Jan 18, 2006 at 11:15:51AM -0800, Michael Crozier wrote:\n> I've been thinking about parsing the vacuum output and storing it in \n> Postgresql. All the tuple, page, cpu time, etc... 
information would be \n> inserted into a reasonably flat set of tables.\n<snip>\n> Assuming this isn't a fundamentally horrible idea, it would be nice if there \n> were ways to do this without parsing the pretty-printed vacuum text (ie, \n> callbacks, triggers, guc variable).\n\nThe best way to do this would be to modify the vacuum code itself, but\nthe issue is that vacuum (or at least lazyvacuum) doesn't handle\ntransactions like the rest of the backend does, so I suspect that there\nwould be some issues with trying to log the information from the same\nbackend that was running the vacuum.\n\n> I'd like to know if anybody does this already, thinks its a bad idea, or can \n> knock me on the noggin with the pg manual and say, \"it's already there!\".\n\nI think it's a good idea, but you should take a look at the recently\nadded functionality that allows you to investigate the contests of the\nFSM via a user function (this is either in 8.1 or in HEAD; I can't\nremember which).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 18 Jan 2006 16:52:57 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "\nOn Wednesday 18 January 2006 14:52 pm, Jim C. Nasby wrote:\n> I think it's a good idea, but you should take a look at the recently\n> added functionality that allows you to investigate the contests of the\n> FSM via a user function (this is either in 8.1 or in HEAD; I can't\n> remember which).\n\nI will look at this when time allows. Perhaps there is a combination of \ntriggers on stat tables and asynchronous notifications that would provide \nthis functionality without getting too deep into the vacuum's transaction \nlogic?\n\nWere it too integrated with the vacuum, it would likely be too much for \ncontrib/, I assume.\n\n\nthanks,\n\n Michael\n\n", "msg_date": "Wed, 18 Jan 2006 15:36:04 -0800", "msg_from": "Michael Crozier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "\n> We had someone working on that for a while; I don't think it got to\n> the point of being something ready to unleash on the world.\n\nInteresting. I will dig around the mailing list archives too see how they \nwent about it... for my own curiosity if nothing else. If you happen to \nknow offhand, I'd appreciate a link.\n\nRegards,\n\n Michael\n", "msg_date": "Wed, 18 Jan 2006 15:41:33 -0800", "msg_from": "Michael Crozier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "On Wed, Jan 18, 2006 at 03:36:04PM -0800, Michael Crozier wrote:\n> \n> On Wednesday 18 January 2006 14:52 pm, Jim C. Nasby wrote:\n> > I think it's a good idea, but you should take a look at the recently\n> > added functionality that allows you to investigate the contests of the\n> > FSM via a user function (this is either in 8.1 or in HEAD; I can't\n> > remember which).\n> \n> I will look at this when time allows. 
Perhaps there is a combination of \n> triggers on stat tables and asynchronous notifications that would provide \n> this functionality without getting too deep into the vacuum's transaction \n> logic?\n\nYou can't put triggers on system tables, at least not ones that will be\ntriggered by system operations themselves, because the backend bypasses\nnormal access methods. Also, most of the really interesting info isn't\nlogged anywhere in a system table; stuff like the amount of dead space,\ntuples removed, etc.\n\n> Were it too integrated with the vacuum, it would likely be too much for \n> contrib/, I assume.\n\nProbably.\n\nA good alternative might be allowing vacuum to output some\nmachine-friendly information (maybe into a backend-writable file?) and\nthen have code that could load that into a table (though presumably that\ncould should be as simple as just a COPY).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 18 Jan 2006 17:44:40 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "Jim C. Nasby wrote:\n\n> \n> I think it's a good idea, but you should take a look at the recently\n> added functionality that allows you to investigate the contests of the\n> FSM via a user function (this is either in 8.1 or in HEAD; I can't\n> remember which).\n\nAFAICS it is still in the patch queue for 8.2.\n\nIt's called 'pg_freespacemap' and is available for 8.1/8.0 from the \nPgfoundry 'backports' project:\n\nhttp://pgfoundry.org/projects/backports\n\nCheers\n\nMark\n\n", "msg_date": "Fri, 20 Jan 2006 12:10:32 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "\nVerified. I am working toward getting all those patches applied.\n\n---------------------------------------------------------------------------\n\nMark Kirkwood wrote:\n> Jim C. Nasby wrote:\n> \n> > \n> > I think it's a good idea, but you should take a look at the recently\n> > added functionality that allows you to investigate the contests of the\n> > FSM via a user function (this is either in 8.1 or in HEAD; I can't\n> > remember which).\n> \n> AFAICS it is still in the patch queue for 8.2.\n> \n> It's called 'pg_freespacemap' and is available for 8.1/8.0 from the \n> Pgfoundry 'backports' project:\n> \n> http://pgfoundry.org/projects/backports\n> \n> Cheers\n> \n> Mark\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 19 Jan 2006 18:12:36 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "BTW, given all the recent discussion about vacuuming and our MVCC,\nhttp://www.pervasive-postgres.com/lp/newsletters/2006/Insights_Postgres_Jan.asp#3\nshould prove interesting. :)\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 20 Jan 2006 11:12:14 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "Jim C. Nasby wrote:\n> BTW, given all the recent discussion about vacuuming and our MVCC,\n> http://www.pervasive-postgres.com/lp/newsletters/2006/Insights_Postgres_Jan.asp#3\n> should prove interesting. :)\n> \nPlease explain... what is the .asp extension. I have yet to see it \nreliable in production ;)\n\n\n\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n\n", "msg_date": "Fri, 20 Jan 2006 09:31:14 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "On Fri, Jan 20, 2006 at 09:31:14AM -0800, Joshua D. Drake wrote:\n> Jim C. Nasby wrote:\n> >BTW, given all the recent discussion about vacuuming and our MVCC,\n> >http://www.pervasive-postgres.com/lp/newsletters/2006/Insights_Postgres_Jan.asp#3\n> >should prove interesting. :)\n> > \n> Please explain... what is the .asp extension. I have yet to see it \n> reliable in production ;)\n\nI lay no claim to our infrastructure. :)\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 20 Jan 2006 11:34:06 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "\n> I lay no claim to our infrastructure. :)\n> \nCan I quote the: Pervasive Senior Engineering Consultant on that?\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n\n", "msg_date": "Fri, 20 Jan 2006 09:37:50 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "On Fri, Jan 20, 2006 at 09:37:50AM -0800, Joshua D. Drake wrote:\n> \n> >I lay no claim to our infrastructure. :)\n> > \n> Can I quote the: Pervasive Senior Engineering Consultant on that?\n\nSure... I've never been asked to consult on our stuff, and in any case,\nI don't do web front-ends (one of the nice things about working with a\nteam of other consultants). AFAIK IIS will happily talk to PostgreSQL\n(though maybe I'm wrong there...)\n\nI *have* asked what database is being used on the backend though, and\ndepending on the answer to that some folks might have some explaining to\ndo. :)\n\n*grabs big can of dog food*\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 20 Jan 2006 11:44:16 -0600", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "> Sure... I've never been asked to consult on our stuff, and in any case,\n> I don't do web front-ends (one of the nice things about working with a\n> team of other consultants). AFAIK IIS will happily talk to PostgreSQL\n> (though maybe I'm wrong there...)\n\niis (yeah, asp in a successfull productive environement hehe) & postgresql \nworks even better for us than iis & mssql :-)\n\n- thomas \n\n\n", "msg_date": "Fri, 20 Jan 2006 18:46:45 +0100", "msg_from": "<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" }, { "msg_contents": "On Fri, Jan 20, 2006 at 06:46:45PM +0100, [email protected] wrote:\n> >Sure... I've never been asked to consult on our stuff, and in any case,\n> >I don't do web front-ends (one of the nice things about working with a\n> >team of other consultants). AFAIK IIS will happily talk to PostgreSQL\n> >(though maybe I'm wrong there...)\n> \n> iis (yeah, asp in a successfull productive environement hehe) & postgresql \n> works even better for us than iis & mssql :-)\n\nJust last night I was talking to someone about different databases and\nwhat-not (he's stuck in a windows shop using MSSQL and I mentioned I'd\nheard some bad things about it's stability). I realized at some point\nthat asking about what large installs of something exist is pretty\npointless... given enough effort you can make almost anything scale. As\nan example, there's a cable company with a MySQL database that's nearly\n1TB... if that's not proof you can make anything scale, I don't know\nwhat is. ;)\n\nWhat people really need to ask about is how hard it is to make something\nwork, and how many problems you're likely to keep encountering.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 20 Jan 2006 13:02:36 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum / full vacuum (off-topic?)" } ]
[ { "msg_contents": "Michael Riess wrote:\n> Hi,\n> \n> yes, some heavily used tables contain approx. 90% free space after a\n> week. I'll try to increase FSM even more, but I think that I will\n> still have to run a full vacuum every week. Prior to 8.1 I was using\n> 7.4 and ran a full vacuum every day, so the autovacuum has helped a\n> lot. \n> \n> But actually I never understood why the database system slows down at\n> all when there is much unused space in the files. Are the unused pages\n> cached by the system, or is there another reason for the impact on the\n> performance?\n\nThe reason is that the system needs to LOOK at the pages/tuples to see\nif the tuples\nare dead or not. \n\nSo, the number of dead tuples impacts the scans.\n\nLER\n\n> \n> Mike\n-- \nLarry Rosenman\t\t\nDatabase Support Engineer\n\nPERVASIVE SOFTWARE. INC.\n12365B RIATA TRACE PKWY\n3015\nAUSTIN TX 78727-6531 \n\nTel: 512.231.6173\nFax: 512.231.6597\nEmail: [email protected]\nWeb: www.pervasive.com \n", "msg_date": "Tue, 17 Jan 2006 08:36:53 -0600", "msg_from": "\"Larry Rosenman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum / full vacuum" } ]
[ { "msg_contents": "8.1.1, everything vacuumed/analyzed. basically i have two queries that\nwhen executed individually run quite quickly, but if I try to left join\nthe second query onto the first, everything gets quite a bit slower. \n\nrms=# explain analyze\nrms-# SELECT\nrms-# software_download.*\nrms-# FROM\nrms-# (\nrms(# SELECT\nrms(# host_id, max(mtime) as mtime\nrms(# FROM\nrms(# software_download\nrms(# WHERE\nrms(# bds_status_id not in (6,17,18)\nrms(# GROUP BY\nrms(# host_id, software_binary_id\nrms(# ) latest_download\nrms-# JOIN software_download using (host_id,mtime)\nrms-# JOIN software_binary b USING (software_binary_id)\nrms-# WHERE\nrms-# binary_type_id IN (3,5,6);\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=870.00..992.56 rows=1 width=96) (actual time=90.566..125.782 rows=472 loops=1)\n Hash Cond: ((\"outer\".host_id = \"inner\".host_id) AND (\"outer\".\"?column2?\" = \"inner\".mtime))\n -> HashAggregate (cost=475.88..495.32 rows=1555 width=16) (actual time=51.300..70.761 rows=10870 loops=1)\n -> Seq Scan on software_download (cost=0.00..377.78 rows=13080 width=16) (actual time=0.010..23.700 rows=13167 loops=1)\n Filter: ((bds_status_id <> 6) AND (bds_status_id <> 17) AND (bds_status_id <> 18))\n -> Hash (cost=379.37..379.37 rows=2949 width=96) (actual time=39.167..39.167 rows=639 loops=1)\n -> Hash Join (cost=5.64..379.37 rows=2949 width=96) (actual time=0.185..37.808 rows=639 loops=1)\n Hash Cond: (\"outer\".software_binary_id = \"inner\".software_binary_id)\n -> Seq Scan on software_download (cost=0.00..277.16 rows=13416 width=96) (actual time=0.008..19.338 rows=13416 loops=1)\n -> Hash (cost=5.59..5.59 rows=20 width=4) (actual time=0.149..0.149 rows=22 loops=1)\n -> Seq Scan on software_binary b (cost=0.00..5.59 rows=20 width=4) (actual time=0.011..0.108 rows=22 loops=1)\n Filter: ((binary_type_id = 3) OR (binary_type_id = 5) OR (binary_type_id = 6))\n Total runtime: 126.704 ms\n(13 rows)\n\n\nrms=# explain analyze \nrms-# SELECT\nrms-# entityid, rmsbinaryid, rmsbinaryid as software_binary_id, timestamp as downloaded, ia.host_id\nrms-# FROM\nrms-# (SELECT\nrms(# entityid, rmsbinaryid,max(msgid) as msgid\nrms(# FROM\nrms(# msg306u\nrms(# WHERE\nrms(# downloadstatus=1\nrms(# GROUP BY entityid,rmsbinaryid\nrms(# ) a1\nrms-# JOIN myapp_app ia on (entityid=myapp_app_id)\nrms-# JOIN\nrms-# (SELECT *\nrms(# FROM msg306u\nrms(# WHERE\nrms(# downloadstatus != 0\nrms(# ) a2 USING(entityid,rmsbinaryid,msgid)\nrms-# ;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1733.79..4620.38 rows=1 width=20) (actual time=81.160..89.826 rows=238 loops=1)\n -> Nested Loop (cost=1733.79..4615.92 rows=1 width=20) (actual time=81.142..86.826 rows=238 loops=1)\n Join Filter: (\"outer\".rmsbinaryid = \"inner\".rmsbinaryid)\n -> HashAggregate (cost=1733.79..1740.92 rows=570 width=12) (actual time=81.105..81.839 rows=323 loops=1)\n -> Bitmap Heap Scan on msg306u (cost=111.75..1540.65 rows=25752 width=12) (actual time=4.490..41.233 rows=25542 loops=1)\n -> Bitmap Index Scan on rht3 (cost=0.00..111.75 rows=25752 width=0) (actual time=4.248..4.248 rows=25542 loops=1)\n -> Index Scan using msg306u_entityid_msgid_idx on msg306u (cost=0.00..5.02 rows=1 width=20) (actual time=0.008..0.010 rows=1 loops=323)\n Index Cond: ((\"outer\".entityid = 
msg306u.entityid) AND (\"outer\".\"?column3?\" = msg306u.msgid))\n Filter: (downloadstatus <> '0'::text)\n -> Index Scan using myapp_app_pkey on myapp_app ia (cost=0.00..4.44 rows=1 width=8) (actual time=0.006..0.007 rows=1 loops=238)\n Index Cond: (\"outer\".entityid = ia.myapp_app_id)\n Total runtime: 90.270 ms\n(12 rows)\n\n\nand here are the two queries left joined together. \n\nrms=# explain analyze\nrms-# select * from (\nrms(# SELECT\nrms(# software_download.*\nrms(# FROM\nrms(# (\nrms(# SELECT\nrms(# host_id, max(mtime) as mtime\nrms(# FROM\nrms(# software_download\nrms(# WHERE\nrms(# bds_status_id not in (6,17,18)\nrms(# GROUP BY\nrms(# host_id, software_binary_id\nrms(# ) latest_download\nrms(# JOIN software_download using (host_id,mtime)\nrms(# JOIN software_binary b USING (software_binary_id)\nrms(# WHERE\nrms(# binary_type_id IN (3,5,6)\nrms(# ) ld\nrms-# LEFT JOIN\nrms-# (SELECT\nrms(# entityid, rmsbinaryid, rmsbinaryid as software_binary_id, timestamp as downloaded, ia.host_id\nrms(# FROM\nrms(# (SELECT\nrms(# entityid, rmsbinaryid,max(msgid) as msgid\nrms(# FROM\nrms(# msg306u\nrms(# WHERE\nrms(# downloadstatus=1\nrms(# GROUP BY entityid,rmsbinaryid\nrms(# ) a1\nrms(# JOIN myapp_app ia on (entityid=myapp_app_id)\nrms(# JOIN\nrms(# (SELECT *\nrms(# FROM msg306u\nrms(# WHERE\nrms(# downloadstatus != 0\nrms(# ) a2 USING(entityid,rmsbinaryid,msgid)\nrms(# ) aa USING (host_id,software_binary_id);\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=2603.79..5612.95 rows=1 width=112) (actual time=181.988..4359.330 rows=472 loops=1)\n Join Filter: ((\"outer\".host_id = \"inner\".host_id) AND (\"outer\".software_binary_id = \"inner\".rmsbinaryid))\n -> Hash Join (cost=870.00..992.56 rows=1 width=96) (actual time=92.048..131.154 rows=472 loops=1)\n Hash Cond: ((\"outer\".host_id = \"inner\".host_id) AND (\"outer\".\"?column2?\" = \"inner\".mtime))\n -> HashAggregate (cost=475.88..495.32 rows=1555 width=16) (actual time=52.302..73.892 rows=10870 loops=1)\n -> Seq Scan on software_download (cost=0.00..377.78 rows=13080 width=16) (actual time=0.010..24.181 rows=13167 loops=1)\n Filter: ((bds_status_id <> 6) AND (bds_status_id <> 17) AND (bds_status_id <> 18))\n -> Hash (cost=379.37..379.37 rows=2949 width=96) (actual time=39.645..39.645 rows=639 loops=1)\n -> Hash Join (cost=5.64..379.37 rows=2949 width=96) (actual time=0.187..38.265 rows=639 loops=1)\n Hash Cond: (\"outer\".software_binary_id = \"inner\".software_binary_id)\n -> Seq Scan on software_download (cost=0.00..277.16 rows=13416 width=96) (actual time=0.008..19.905 rows=13416 loops=1)\n -> Hash (cost=5.59..5.59 rows=20 width=4) (actual time=0.151..0.151 rows=22 loops=1)\n -> Seq Scan on software_binary b (cost=0.00..5.59 rows=20 width=4) (actual time=0.011..0.109 rows=22 loops=1)\n Filter: ((binary_type_id = 3) OR (binary_type_id = 5) OR (binary_type_id = 6))\n -> Nested Loop (cost=1733.79..4620.38 rows=1 width=20) (actual time=0.196..8.620 rows=238 loops=472)\n -> Nested Loop (cost=1733.79..4615.92 rows=1 width=16) (actual time=0.186..5.702 rows=238 loops=472)\n Join Filter: (\"outer\".rmsbinaryid = \"inner\".rmsbinaryid)\n -> HashAggregate (cost=1733.79..1740.92 rows=570 width=12) (actual time=0.173..0.886 rows=323 loops=472)\n -> Bitmap Heap Scan on msg306u (cost=111.75..1540.65 rows=25752 width=12) (actual time=4.372..41.248 rows=25542 loops=1)\n -> Bitmap Index Scan on rht3 
(cost=0.00..111.75 rows=25752 width=0) (actual time=4.129..4.129 rows=25542 loops=1)\n -> Index Scan using msg306u_entityid_msgid_idx on msg306u (cost=0.00..5.02 rows=1 width=20) (actual time=0.008..0.010 rows=1 loops=152456)\n Index Cond: ((\"outer\".entityid = msg306u.entityid) AND (\"outer\".\"?column3?\" = msg306u.msgid))\n Filter: (downloadstatus <> '0'::text)\n -> Index Scan using myapp_app_pkey on myapp_app ia (cost=0.00..4.44 rows=1 width=8) (actual time=0.005..0.007 rows=1 loops=112336)\n Index Cond: (\"outer\".entityid = ia.myapp_app_id)\n Total runtime: 4360.552 ms\n(26 rows)\n\nistm this query should be able to return quite a bit faster, and setting\nenable_nestloop = off seems to back up this theory:\n\n\nrms=# explain analyze\nrms-# select * from (\nrms(# SELECT\nrms(# software_download.*\nrms(# FROM\nrms(# (\nrms(# SELECT\nrms(# host_id, max(mtime) as mtime\nrms(# FROM\nrms(# software_download\nrms(# WHERE\nrms(# bds_status_id not in (6,17,18)\nrms(# GROUP BY\nrms(# host_id, software_binary_id\nrms(# ) latest_download\nrms(# JOIN software_download using (host_id,mtime)\nrms(# JOIN software_binary b USING (software_binary_id)\nrms(# WHERE\nrms(# binary_type_id IN (3,5,6)\nrms(# ) ld\nrms-# LEFT JOIN\nrms-# (SELECT\nrms(# entityid, rmsbinaryid, rmsbinaryid as software_binary_id, timestamp as downloaded, ia.host_id\nrms(# FROM\nrms(# (SELECT\nrms(# entityid, rmsbinaryid,max(msgid) as msgid\nrms(# FROM\nrms(# msg306u\nrms(# WHERE\nrms(# downloadstatus=1\nrms(# GROUP BY entityid,rmsbinaryid\nrms(# ) a1\nrms(# JOIN myapp_app ia on (entityid=myapp_app_id)\nrms(# JOIN\nrms(# (SELECT *\nrms(# FROM msg306u\nrms(# WHERE\nrms(# downloadstatus != 0\nrms(# ) a2 USING(entityid,rmsbinaryid,msgid)\nrms(# ) aa USING (host_id,software_binary_id);\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=6976.52..7099.10 rows=1 width=112) (actual time=500.681..537.894 rows=472 loops=1)\n Hash Cond: ((\"outer\".host_id = \"inner\".host_id) AND (\"outer\".software_binary_id = \"inner\".rmsbinaryid))\n -> Hash Join (cost=870.00..992.56 rows=1 width=96) (actual time=91.738..127.423 rows=472 loops=1)\n Hash Cond: ((\"outer\".host_id = \"inner\".host_id) AND (\"outer\".\"?column2?\" = \"inner\".mtime))\n -> HashAggregate (cost=475.88..495.32 rows=1555 width=16) (actual time=52.025..71.872 rows=10870 loops=1)\n -> Seq Scan on software_download (cost=0.00..377.78 rows=13080 width=16) (actual time=0.009..23.959 rows=13167 loops=1)\n Filter: ((bds_status_id <> 6) AND (bds_status_id <> 17) AND (bds_status_id <> 18))\n -> Hash (cost=379.37..379.37 rows=2949 width=96) (actual time=39.612..39.612 rows=639 loops=1)\n -> Hash Join (cost=5.64..379.37 rows=2949 width=96) (actual time=0.183..38.220 rows=639 loops=1)\n Hash Cond: (\"outer\".software_binary_id = \"inner\".software_binary_id)\n -> Seq Scan on software_download (cost=0.00..277.16 rows=13416 width=96) (actual time=0.008..19.511 rows=13416 loops=1)\n -> Hash (cost=5.59..5.59 rows=20 width=4) (actual time=0.147..0.147 rows=22 loops=1)\n -> Seq Scan on software_binary b (cost=0.00..5.59 rows=20 width=4) (actual time=0.011..0.108 rows=22 loops=1)\n Filter: ((binary_type_id = 3) OR (binary_type_id = 5) OR (binary_type_id = 6))\n -> Hash (cost=6106.52..6106.52 rows=1 width=20) (actual time=408.915..408.915 rows=238 loops=1)\n -> Merge Join (cost=5843.29..6106.52 rows=1 width=20) (actual time=338.516..408.477 rows=238 
loops=1)\n Merge Cond: ((\"outer\".rmsbinaryid = \"inner\".rmsbinaryid) AND (\"outer\".msgid = \"inner\".msgid) AND (\"outer\".entityid = \"inner\".entityid))\n -> Sort (cost=1857.37..1858.80 rows=570 width=16) (actual time=88.816..89.179 rows=323 loops=1)\n Sort Key: a1.rmsbinaryid, a1.msgid, a1.entityid\n -> Hash Join (cost=1793.98..1831.28 rows=570 width=16) (actual time=86.452..88.074 rows=323 loops=1)\n Hash Cond: (\"outer\".entityid = \"inner\".myapp_app_id)\n -> HashAggregate (cost=1733.79..1740.92 rows=570 width=12) (actual time=80.772..81.320 rows=323 loops=1)\n -> Bitmap Heap Scan on msg306u (cost=111.75..1540.65 rows=25752 width=12) (actual time=4.515..40.984 rows=25542 loops=1)\n -> Bitmap Index Scan on rht3 (cost=0.00..111.75 rows=25752 width=0) (actual time=4.271..4.271 rows=25542 loops=1)\n -> Hash (cost=55.95..55.95 rows=1695 width=8) (actual time=5.663..5.663 rows=1695 loops=1)\n -> Seq Scan on myapp_app ia (cost=0.00..55.95 rows=1695 width=8) (actual time=0.006..2.888 rows=1695 loops=1)\n -> Sort (cost=3985.92..4050.30 rows=25752 width=20) (actual time=249.682..286.295 rows=25542 loops=1)\n Sort Key: public.msg306u.rmsbinaryid, public.msg306u.msgid, public.msg306u.entityid\n -> Seq Scan on msg306u (cost=0.00..1797.28 rows=25752 width=20) (actual time=0.010..80.572 rows=25542 loops=1)\n Filter: (downloadstatus <> '0'::text)\n Total runtime: 540.284 ms\n(31 rows)\n\ni've been banging on this one off and on for awhile now with little\nprogress, can someone explain why it is choosing the initial slower plan\nand/or how to get it to run something closer to the second faster plan?\n\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n", "msg_date": "17 Jan 2006 12:07:38 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": true, "msg_subject": "sum of left join greater than its parts" }, { "msg_contents": "Hmmm, this looks like a planner bug to me:\n\n> Hash\n> Join (cost=870.00..992.56 rows=1 width=96) (actual time=90.566..125.782\n> rows=472 loops=1) Hash Cond: ((\"outer\".host_id = \"inner\".host_id) AND\n> (\"outer\".\"?column2?\" = \"inner\".mtime)) -> HashAggregate \n> (cost=475.88..495.32 rows=1555 width=16) (actual time=51.300..70.761\n> rows=10870 loops=1)\n\n>-- Nested Loop (cost=1733.79..4620.38 rows=1 width=20) (actual\n> time=81.160..89.826 rows=238 loops=1) -> Nested Loop \n> (cost=1733.79..4615.92 rows=1 width=20) (actual time=81.142..86.826\n> rows=238 loops=1) Join Filter: (\"outer\".rmsbinaryid =\n> \"inner\".rmsbinaryid) -> HashAggregate (cost=1733.79..1740.92 rows=570\n> width=12) (actual time=81.105..81.839 rows=323 loops=1) -> Bitmap Heap\n> Scan on msg306u (cost=111.75..1540.65 rows=25752 width=12) (actual\n> time=4.490..41.233 rows=25542 loops=1)\n\nNotice that for both queries, the estimates are reasonably accurate (within \n+/- 4x) until they get to left joining the subquery, at which point the \nestimate of rows joined becomes exactly \"1\". That looks suspicios to \nme ... Tom? Neil?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 17 Jan 2006 13:09:53 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sum of left join greater than its parts" } ]
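Until the row estimates improve, the damage can usually be contained by disabling nested loops for just the problem statement instead of globally, and by giving the planner more data about the join columns. A sketch using the table and column names from the plans above (better statistics are not guaranteed to fix this particular estimate):

    -- scope the setting to one transaction so other queries are unaffected
    BEGIN;
    SET LOCAL enable_nestloop = off;
    -- run the problem query here
    COMMIT;

    -- optionally collect more detailed statistics on the join columns
    ALTER TABLE msg306u ALTER COLUMN rmsbinaryid SET STATISTICS 100;
    ALTER TABLE msg306u ALTER COLUMN entityid SET STATISTICS 100;
    ANALYZE msg306u;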
[ { "msg_contents": "Hi,\n\nI have a postges 8.1.1 table with over 29 million rows in it. The colunm \n(file_name) that I need to search on has entries like the following:\n\n MOD04_L2.A2005311.1400.004.2005312013848.hdf \n\n MYD04_L2.A2005311.0700.004.2005312013437.hdf \n\nI have an index on this column. But an index search is performance only \nwhen I give the full file_name for search:\n\ntestdbspc=# explain select file_name from catalog where file_name = \n'MOD04_L2.A2005311.1400.004.2005312013848.hdf';\nQUERY PLAN\nIndex Scan using catalog_pk_idx on catalog (cost=0.00..6.01 rows=1 \nwidth=404)\n Index Cond: (file_name = \n'MOD04_L2.A2005311.1400.004.2005312013848.hdf'::bpchar)\n(2 rows)\n\nWhat I really need to do most of the time is a multi-wildcard search on \nthis column, which is now doing a whole table scan without using the \nindex at all:\n\ntestdbspc=# explain select file_name from catalog where file_name like \n'MOD04_L2.A2005311.%.004.2005312013%.hdf';\nQUERY PLAN\nSeq Scan on catalog (cost=0.00..429.00 rows=1 width=404)\n Filter: (file_name ~~ 'MOD04_L2.A2005311.%.004.2005312013%.hdf'::text)\n(2 rows)\n\nObviously, the performance of the table scan on such a large table is \nnot acceptable.\n\nI tried full-text indexing and searching. It did NOT work on this column \nbecause all the letters and numbers are linked together with \".\" and \nconsidered one big single word by to_tsvector.\n\nAny solutions for this column to use an index search with multiple wild \ncards?\n\nThanks a lot,\nYantao Shi\n\n\n\n", "msg_date": "Tue, 17 Jan 2006 15:00:37 -0500", "msg_from": "Yantao Shi <[email protected]>", "msg_from_op": true, "msg_subject": "wildcard search performance with \"like\"" }, { "msg_contents": "Yantao Shi <[email protected]> writes:\n> testdbspc=# explain select file_name from catalog where file_name like \n> 'MOD04_L2.A2005311.%.004.2005312013%.hdf';\n> QUERY PLAN\n> Seq Scan on catalog (cost=0.00..429.00 rows=1 width=404)\n> Filter: (file_name ~~ 'MOD04_L2.A2005311.%.004.2005312013%.hdf'::text)\n> (2 rows)\n\nI'm betting you are using a non-C locale. You need either to run the\ndatabase in C locale, or to create a special index type that is\ncompatible with LIKE searches. See\nhttp://www.postgresql.org/docs/8.1/static/indexes-opclass.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jan 2006 16:01:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wildcard search performance with \"like\" " }, { "msg_contents": "As far as I know the index is only used when you do a prefix search, for \nexample\n\ncol like 'xyz%'\n\nI think that if you are looking for expressions such as 'A%B', you could \nrephrase them like this:\n\ncol like 'A%' AND col like 'A%B'\n\nSo the database could use the index to narrow down the result and then \ndo a sequential search for the second condition.\n\nMike\n\n\nYantao Shi schrieb:\n> Hi,\n> \n> I have a postges 8.1.1 table with over 29 million rows in it. The colunm \n> (file_name) that I need to search on has entries like the following:\n> \n> MOD04_L2.A2005311.1400.004.2005312013848.hdf \n> \n> MYD04_L2.A2005311.0700.004.2005312013437.hdf \n> I have an index on this column. 
But an index search is performance only \n> when I give the full file_name for search:\n> \n> testdbspc=# explain select file_name from catalog where file_name = \n> 'MOD04_L2.A2005311.1400.004.2005312013848.hdf';\n> QUERY PLAN\n> Index Scan using catalog_pk_idx on catalog (cost=0.00..6.01 rows=1 \n> width=404)\n> Index Cond: (file_name = \n> 'MOD04_L2.A2005311.1400.004.2005312013848.hdf'::bpchar)\n> (2 rows)\n> \n> What I really need to do most of the time is a multi-wildcard search on \n> this column, which is now doing a whole table scan without using the \n> index at all:\n> \n> testdbspc=# explain select file_name from catalog where file_name like \n> 'MOD04_L2.A2005311.%.004.2005312013%.hdf';\n> QUERY PLAN\n> Seq Scan on catalog (cost=0.00..429.00 rows=1 width=404)\n> Filter: (file_name ~~ 'MOD04_L2.A2005311.%.004.2005312013%.hdf'::text)\n> (2 rows)\n> \n> Obviously, the performance of the table scan on such a large table is \n> not acceptable.\n> \n> I tried full-text indexing and searching. It did NOT work on this column \n> because all the letters and numbers are linked together with \".\" and \n> considered one big single word by to_tsvector.\n> \n> Any solutions for this column to use an index search with multiple wild \n> cards?\n> \n> Thanks a lot,\n> Yantao Shi\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n", "msg_date": "Wed, 18 Jan 2006 16:06:15 +0100", "msg_from": "Michael Riess <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wildcard search performance with \"like\"" } ]
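Following the documentation pointer above, the usual fix in a non-C locale is an extra index built with the pattern-matching operator class for the column's type. A sketch against the table from this thread (the index name is illustrative; bpchar_pattern_ops assumes file_name is char(n), use varchar_pattern_ops or text_pattern_ops for varchar/text columns):

    CREATE INDEX catalog_file_name_pattern_idx
        ON catalog (file_name bpchar_pattern_ops);

    -- the planner can now turn the fixed prefix of the pattern into an
    -- index range scan and apply the remaining wildcards as a filter
    SELECT file_name
    FROM catalog
    WHERE file_name LIKE 'MOD04_L2.A2005311.%.004.2005312013%.hdf';

The index only helps because the pattern starts with a fixed prefix; a pattern beginning with a wildcard still forces a sequential scan.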
[ { "msg_contents": "I'm trying to query a table with 250,000+ rows. My query requires I provide 5 colums in my \"order by\" clause:\n\nselect\n column\nfrom table \nwhere \n column >= '2004-3-22 0:0:0'\norder by \n ds.receipt desc,\n ds.carrier_id asc,\n ds.batchnum asc,\n encounternum asc,\n ds.encounter_id ASC\nlimit 100 offset 0\n\nI have an index built for each of these columns in my order by clause. This query takes an unacceptable amount of time to execute. Here are the results of the explain:\n\nLimit (cost=229610.78..229611.03 rows=100 width=717)\n -> Sort (cost=229610.78..230132.37 rows=208636 width=717)\n Sort Key: receipt, carrier_id, batchnum, encounternum, encounter_id\n -> Seq Scan on detail_summary ds (cost=0.00..22647.13 rows=208636 width=717)\n Filter: (receipt >= '2004-03-22'::date)\n\n\nWhen I have the order by just have 1 criteria, it's fine (just ds.receipt DESC)\n\nLimit (cost=0.00..177.71 rows=100 width=717)\n -> Index Scan Backward using detail_summary_receipt_id_idx on detail_summary ds (cost=0.00..370756.84 rows=208636 width=717)\n Index Cond: (receipt >= '2004-03-22'::date)\n\nI've increased my work_mem to up to 256meg with no speed increase. I think there's something here I just don't understand.\n\nHow do I make this go fast ?\n\n\n\n\n\n\n\n\n\n\n\n\nI'm trying to query a table with 250,000+ rows. My \nquery requires I provide 5 colums in my \"order by\" clause:\n \nselect   column\nfrom table \nwhere \n column >= '2004-3-22 0:0:0'order by \n\n    ds.receipt desc,\n    ds.carrier_id asc,\n    ds.batchnum asc,\n    encounternum asc,\n    ds.encounter_id ASC\nlimit 100 offset 0\n \nI have an index built for each of these columns in \nmy order by clause. This query takes an unacceptable amount of time to execute. \nHere are the results of the explain:\n \nLimit  (cost=229610.78..229611.03 rows=100 \nwidth=717)  ->  Sort  (cost=229610.78..230132.37 \nrows=208636 width=717)        Sort Key: \nreceipt, carrier_id, batchnum, encounternum, \nencounter_id        ->  Seq Scan \non detail_summary ds  (cost=0.00..22647.13 rows=208636 \nwidth=717)              \nFilter: (receipt >= '2004-03-22'::date)\n \nWhen I have the order by just have 1 criteria, it's \nfine (just ds.receipt DESC)\n \nLimit  (cost=0.00..177.71 rows=100 \nwidth=717)  ->  Index Scan Backward using \ndetail_summary_receipt_id_idx on detail_summary ds  (cost=0.00..370756.84 \nrows=208636 width=717)        Index Cond: \n(receipt >= '2004-03-22'::date)\n \nI've increased my work_mem to up to 256meg with no \nspeed increase. I think there's something here I just don't \nunderstand.\n \nHow do I make this go fast ?", "msg_date": "Tue, 17 Jan 2006 17:03:27 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Multiple Order By Criteria" }, { "msg_contents": "J,\n\n> I have an index built for each of these columns in my order by clause.\n> This query takes an unacceptable amount of time to execute. Here are the\n> results of the explain:\n\nYou need a single index which has all five columns, in order.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 17 Jan 2006 14:25:38 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "I created the index, in order. 
Did a vacuum analyze on the table and my \nexplain still says:\n\nLimit (cost=229610.78..229611.03 rows=100 width=717)\n -> Sort (cost=229610.78..230132.37 rows=208636 width=717)\n Sort Key: receipt, carrier_id, batchnum, encounternum, encounter_id\n -> Seq Scan on detail_summary ds (cost=0.00..22647.13 rows=208636 \nwidth=717)\n Filter: (receipt >= '2004-03-22'::date)\n\n\nSo, for fun I did\nset enable_seqscan to off\n\nBut that didn't help. For some reason, the sort wants to do a seq scan and \nnot use my super new index.\n\nAm I doing something wrong ?\n\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, January 17, 2006 5:25 PM\nSubject: Re: [PERFORM] Multiple Order By Criteria\n\n\n> J,\n>\n>> I have an index built for each of these columns in my order by clause.\n>> This query takes an unacceptable amount of time to execute. Here are the\n>> results of the explain:\n>\n> You need a single index which has all five columns, in order.\n>\n\n> -- \n> --Josh\n>\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n", "msg_date": "Tue, 17 Jan 2006 17:29:04 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "\nOn Tue, 17 Jan 2006, Josh Berkus wrote:\n\n> J,\n>\n> > I have an index built for each of these columns in my order by clause.\n> > This query takes an unacceptable amount of time to execute. Here are the\n> > results of the explain:\n>\n> You need a single index which has all five columns, in order.\n\nI think he'll also need a reverse opclass for the first column in the\nindex or for the others since he's doing desc, asc, asc, asc, asc.\n", "msg_date": "Tue, 17 Jan 2006 14:40:36 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "\ntry adding the keyword 'date' before the date in your query.\nI ran into this quite a while back, but I'm not sure I remember the solution.\n\n\n > In Reply to: Tuesday January 17 2006 04:29 pm, [email protected] [email protected] \nwrote:\n> I created the index, in order. Did a vacuum analyze on the table and my\n> explain still says:\n>\n> Limit (cost=229610.78..229611.03 rows=100 width=717)\n> -> Sort (cost=229610.78..230132.37 rows=208636 width=717)\n> Sort Key: receipt, carrier_id, batchnum, encounternum, encounter_id\n> -> Seq Scan on detail_summary ds (cost=0.00..22647.13 rows=208636\n> width=717)\n> Filter: (receipt >= '2004-03-22'::date)\n>\n>\n> So, for fun I did\n> set enable_seqscan to off\n>\n> But that didn't help. For some reason, the sort wants to do a seq scan and\n> not use my super new index.\n>\n> Am I doing something wrong ?\n>\n> ----- Original Message -----\n> From: \"Josh Berkus\" <[email protected]>\n> To: <[email protected]>\n> Cc: <[email protected]>\n> Sent: Tuesday, January 17, 2006 5:25 PM\n> Subject: Re: [PERFORM] Multiple Order By Criteria\n>\n> > J,\n> >\n> >> I have an index built for each of these columns in my order by clause.\n> >> This query takes an unacceptable amount of time to execute. 
Here are the\n> >> results of the explain:\n> >\n> > You need a single index which has all five columns, in order.\n> >\n> >\n> > --\n> > --Josh\n> >\n> > Josh Berkus\n> > Aglio Database Solutions\n> > San Francisco\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n", "msg_date": "Tue, 17 Jan 2006 16:57:06 -0600", "msg_from": "Fredrick O Jackson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "I created the index like this:\n\nCREATE INDEX rcbee_idx\n ON detail_summary\n USING btree\n (receipt, carrier_id, batchnum, encounternum, encounter_id);\n\nIs this correct ?\n\nHow do I make a reverse opclass ?\n\n----- Original Message ----- \nFrom: \"Stephan Szabo\" <[email protected]>\nTo: \"Josh Berkus\" <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Tuesday, January 17, 2006 5:40 PM\nSubject: Re: [PERFORM] Multiple Order By Criteria\n\n\n>\n> On Tue, 17 Jan 2006, Josh Berkus wrote:\n>\n>> J,\n>>\n>> > I have an index built for each of these columns in my order by clause.\n>> > This query takes an unacceptable amount of time to execute. Here are \n>> > the\n>> > results of the explain:\n>>\n>> You need a single index which has all five columns, in order.\n>\n> I think he'll also need a reverse opclass for the first column in the\n> index or for the others since he's doing desc, asc, asc, asc, asc.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n", "msg_date": "Tue, 17 Jan 2006 18:03:04 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "\nOn Tue, 17 Jan 2006 [email protected] wrote:\n\n> I created the index like this:\n>\n> CREATE INDEX rcbee_idx\n> ON detail_summary\n> USING btree\n> (receipt, carrier_id, batchnum, encounternum, encounter_id);\n>\n> Is this correct ?\n\nThat would work if you were asking for all the columns ascending or\ndescending, but we don't currently use it for mixed orders.\n\n> How do I make a reverse opclass ?\n\nThere's some information at the following:\nhttp://archives.postgresql.org/pgsql-novice/2005-10/msg00254.php\nhttp://archives.postgresql.org/pgsql-general/2005-01/msg00121.php\nhttp://archives.postgresql.org/pgsql-general/2004-06/msg00565.php\n", "msg_date": "Tue, 17 Jan 2006 15:39:03 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "I've read all of this info, closely. I wish when I was searching for an\nanswer for my problem these pages came up. Oh well.\n\nI am getting an idea of what I need to do to make this work well. I was\nwondering if there is more information to read on how to implement this\nsolution in a more simple way. 
Much of what's written seems to be towards an\naudience that should understand certain things automatically.\n----- Original Message ----- \nFrom: \"Stephan Szabo\" <[email protected]>\nTo: <[email protected]>\nCc: \"Josh Berkus\" <[email protected]>; <[email protected]>\nSent: Tuesday, January 17, 2006 6:39 PM\nSubject: Re: [PERFORM] Multiple Order By Criteria\n\n\n>\n> On Tue, 17 Jan 2006 [email protected] wrote:\n>\n>> I created the index like this:\n>>\n>> CREATE INDEX rcbee_idx\n>> ON detail_summary\n>> USING btree\n>> (receipt, carrier_id, batchnum, encounternum, encounter_id);\n>>\n>> Is this correct ?\n>\n> That would work if you were asking for all the columns ascending or\n> descending, but we don't currently use it for mixed orders.\n>\n>> How do I make a reverse opclass ?\n>\n> There's some information at the following:\n> http://archives.postgresql.org/pgsql-novice/2005-10/msg00254.php\n> http://archives.postgresql.org/pgsql-general/2005-01/msg00121.php\n> http://archives.postgresql.org/pgsql-general/2004-06/msg00565.php\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n", "msg_date": "Tue, 17 Jan 2006 19:23:25 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of [email protected]\nSent: Rabu, 18 Januari 2006 07:23\nTo: Stephan Szabo\nCc: Josh Berkus; [email protected]\nSubject: Re: [PERFORM] Multiple Order By Criteria\n\nI've read all of this info, closely. I wish when I was searching for an\nanswer for my problem these pages came up. Oh well.\nWell, I think you have to know about btree index. Btree is good enough,\nalthough it's not better. It will perform best, if it doesn't index\ntoo many multiple column.\nIn your case, you have to consentrate on 2 or 3 fields that will\nuse frequently. Put the most duplicate value on the front and others\nare behind.\nEq: \nreceipt, carrier_id, batchnum is the most frequently use, \nbut the most duplicate value are: carrier_id, receipt, and batchnum\nso make btree index (carrier_id, receipt, batchnum).\nBtree will not suffer, and we also will advantage if the table\nhave relationship with other table with the same fields order. We have\nnot to make another index for that relation.\n\nBest regards,\nahmad fajar.\n\n\n> I am getting an idea of what I need to do to make this work well. I was\n> wondering if there is more information to read on how to implement this\n> solution in a more simple way. 
Much of what's written seems to be towards\n> audience that should understand certain things automatically.\n----- Original Message ----- \nFrom: \"Stephan Szabo\" <[email protected]>\nTo: <[email protected]>\nCc: \"Josh Berkus\" <[email protected]>; <[email protected]>\nSent: Tuesday, January 17, 2006 6:39 PM\nSubject: Re: [PERFORM] Multiple Order By Criteria\n\n\n>\n> On Tue, 17 Jan 2006 [email protected] wrote:\n>\n>> I created the index like this:\n>>\n>> CREATE INDEX rcbee_idx\n>> ON detail_summary\n>> USING btree\n>> (receipt, carrier_id, batchnum, encounternum, encounter_id);\n>>\n>> Is this correct ?\n>\n> That would work if you were asking for all the columns ascending or\n> descending, but we don't currently use it for mixed orders.\n>\n>> How do I make a reverse opclass ?\n>\n> There's some information at the following:\n> http://archives.postgresql.org/pgsql-novice/2005-10/msg00254.php\n> http://archives.postgresql.org/pgsql-general/2005-01/msg00121.php\n> http://archives.postgresql.org/pgsql-general/2004-06/msg00565.php\n>\n\n", "msg_date": "Wed, 18 Jan 2006 12:54:44 +0700", "msg_from": "\"Ahmad Fajar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "I have the answer I've been looking for and I'd like to share with all. \nAfter help from you guys, it appeared that the real issue was using an index \nfor my order by X DESC clauses. For some reason that doesn't make good \nsense, postgres doesn't support this, when it kinda should automatically.\n\nTake the following end of an SQL statement.\n\norder by\n col1 DESC\n col2 ASC\n col3 ASC\n\nThe first thing I learned is that you need an index that contains all these \ncolumns in it, in this order. If one of them has DESC then you have to \ncreate a function / operator class for each data type, in this case let's \nassume it's an int4. So, first thing you do is create a function that you're \ngoing to use in your operator:\n\ncreate function\n int4_revcmp(int4,int4) // --> cal the function whatever you want\n returns int4\n as 'select $2 - $1'\nlanguage sql;\n\nThen you make your operator class.\nCREATE OPERATOR CLASS int4_revop\n FOR TYPE int4 USING btree AS\n OPERATOR 1 > ,\n OPERATOR 2 >= ,\n OPERATOR 3 = ,\n OPERATOR 4 <= ,\n OPERATOR 5 < ,\n FUNCTION 1 int4_revcmp(int4, int4); // --> must be \nthe name of your function you created.\n\nThen when you make your index\n\ncreate index rev_idx on table\n using btree(\n col1 int4_revop, // --> must be name of operator class you \ndefined.\n col2,\n col3\n);\n\nWhat I don't understand is how to make this function / operator class work \nwith a text datatype. I tried interchanging the int4 with char and text and \npostgres didn't like the (as 'select $2 - $1') in the function, which I can \nkinda understand. Since I'm slighlty above my head at this point, I don't \nreally know how to do it. Does any smart people here know how ?\n\n", "msg_date": "Wed, 18 Jan 2006 09:06:05 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "On Wed, 18 Jan 2006 [email protected] wrote:\n\n> I have the answer I've been looking for and I'd like to share with all.\n> After help from you guys, it appeared that the real issue was using an index\n> for my order by X DESC clauses. 
For some reason that doesn't make good\n> sense, postgres doesn't support this, when it kinda should automatically.\n\nWell, the problem is that we do order with the index simply by through\nfollowing index order. Standard index order is going to give you a sorted\norder only in some particular order and its inverse. IIRC, there are ways\nto use an index in all ascending order to do mixed orders, but I think\nthose may involve traversing parts of the index multiple times and hasn't\nbeen implemented.\n\n> The first thing I learned is that you need an index that contains all these\n> columns in it, in this order. If one of them has DESC then you have to\n> create a function / operator class for each data type, in this case let's\n> assume it's an int4. So, first thing you do is create a function that you're\n> going to use in your operator:\n>\n> create function\n> int4_revcmp(int4,int4) // --> cal the function whatever you want\n> returns int4\n> as 'select $2 - $1'\n> language sql;\n>\n> Then you make your operator class.\n> CREATE OPERATOR CLASS int4_revop\n> FOR TYPE int4 USING btree AS\n> OPERATOR 1 > ,\n> OPERATOR 2 >= ,\n> OPERATOR 3 = ,\n> OPERATOR 4 <= ,\n> OPERATOR 5 < ,\n> FUNCTION 1 int4_revcmp(int4, int4); // --> must be\n> the name of your function you created.\n>\n> Then when you make your index\n>\n> create index rev_idx on table\n> using btree(\n> col1 int4_revop, // --> must be name of operator class you\n> defined.\n> col2,\n> col3\n> );\n>\n> What I don't understand is how to make this function / operator class work\n> with a text datatype. I tried interchanging the int4 with char and text and\n> postgres didn't like the (as 'select $2 - $1') in the function, which I can\n> kinda understand. Since I'm slighlty above my head at this point, I don't\n> really know how to do it. 
Does any smart people here know how ?\n\nI think having the function call the helper function for the normal\noperator class for the type function with the arguments in reverse order\nmay work (or negating its output).\n\nIf you have any interest, there's an outstanding call for C versions of\nthe helper functions that we could then package up in contrib with the\noperator class definitions.\n", "msg_date": "Wed, 18 Jan 2006 07:10:30 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple Order By Criteria" }, { "msg_contents": "Here's some C to use to create the operator classes, seems to work ok.\n---\n\n#include \"postgres.h\"\n#include <string.h>\n#include \"fmgr.h\"\n#include \"utils/date.h\"\n\n/* For date sorts */\n\nPG_FUNCTION_INFO_V1(ddd_date_revcmp);\n\nDatum ddd_date_revcmp(PG_FUNCTION_ARGS){\n DateADT arg1=PG_GETARG_DATEADT(0);\n DateADT arg2=PG_GETARG_DATEADT(1);\n\n PG_RETURN_INT32(arg2 - arg1);\n}\n\n/* For integer sorts */\n\nPG_FUNCTION_INFO_V1(ddd_int_revcmp);\n\nDatum ddd_int_revcmp(PG_FUNCTION_ARGS){\n int32 arg1=PG_GETARG_INT32(0);\n int32 arg2=PG_GETARG_INT32(1);\n\n PG_RETURN_INT32(arg2 - arg1);\n}\n\n/* For string sorts */\n\nPG_FUNCTION_INFO_V1(ddd_text_revcmp);\n\nDatum ddd_text_revcmp(PG_FUNCTION_ARGS){\n text* arg1=PG_GETARG_TEXT_P(0);\n text* arg2=PG_GETARG_TEXT_P(1);\n\n PG_RETURN_INT32(strcmp((char*)VARDATA(arg2),(char*)VARDATA(arg1)));\n}\n\n\n/*\ncreate function ddd_date_revcmp(date,date) returns int4 as \n'/data/postgres/contrib/cmplib.so', 'ddd_date_revcmp' LANGUAGE C STRICT;\ncreate function ddd_int_revcmp(int4,int4) returns int4 as \n'/data/postgres/contrib/cmplib.so', 'ddd_int_revcmp' LANGUAGE C STRICT;\ncreate function ddd_text_revcmp(text,text) returns int4 as \n'/data/postgres/contrib/cmplib.so', 'ddd_text_revcmp' LANGUAGE C STRICT;\n */\n\n----- Original Message ----- \nFrom: \"Stephan Szabo\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, January 18, 2006 2:24 PM\nSubject: Re: [PERFORM] Multiple Order By Criteria\n\n\n> On Wed, 18 Jan 2006 [email protected] wrote:\n>\n>> Could you explain to me how do create this operator class for a text data\n>> type ? I think it will give me more of an understanding of what's going \n>> on\n>> if I could see this example.\n>\n> Using an SQL function (mostly because I'm too lazy to look up the C call\n> syntax) I think it'd be something like:\n>\n> create function bttextrevcmp(text, text) returns int4 as\n> 'select bttextcmp($2, $1)' language 'sql';\n>\n> CREATE OPERATOR CLASS text_revop\n> FOR TYPE text USING btree AS\n> OPERATOR 1 > ,\n> OPERATOR 2 >= ,\n> OPERATOR 3 = ,\n> OPERATOR 4 <= ,\n> OPERATOR 5 < ,\n> FUNCTION 1 bttextrevcmp(text,text);\n>\n> I believe bttextcmp is the standard text btree operator class helper\n> function, so we call it with reverse arguments to try to flip its results\n> (I think -bttextcmp($1,$2) would also work).\n> \n\n", "msg_date": "Wed, 18 Jan 2006 14:36:09 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Multiple Order By Criteria" } ]
[ { "msg_contents": "Hi,\n\nI have two tables foobar and foobar2 (which inherits from foobar, no \nextra columns).\nfoobar2 has all the data (574,576 rows), foobar is empty.\nBoth foobar and foobar2 have an index on the only column 'id'. Now I \nhave a list of ids in a tmp_ids tables.\nA query on foobar2 (child table) uses the index, whereas the same query \nvia foobar (parent) doesn't.\nEven if I set seq_scan off, it still doesn't use the index on the child \ntable while queried via the parent table.\n\nDetails are given below. Any help is appreciated.\n\n# analyze foobar;\nANALYZE\n# analyze foobar2;\nANALYZE\n# explain analyze select * from foobar2 join tmp_ids using (id);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..3013.69 rows=85856 width=4) (actual \ntime=0.038..234.864 rows=44097 loops=1)\n -> Seq Scan on tmp_ids (cost=0.00..1.52 rows=52 width=4) (actual \ntime=0.008..0.102 rows=52 loops=1)\n -> Index Scan using foobar2_idx1 on foobar2 (cost=0.00..37.29 \nrows=1651 width=4) (actual time=0.007..1.785 rows=848 loops=52)\n Index Cond: (foobar2.id = \"outer\".id)\n Total runtime: 302.963 ms\n(5 rows)\n\n# explain analyze select * from foobar join tmp_ids using (id);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.65..13267.85 rows=149946 width=4) (actual \ntime=7.338..3837.060 rows=44097 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Append (cost=0.00..8883.16 rows=576716 width=4) (actual \ntime=0.012..2797.555 rows=574576 loops=1)\n -> Seq Scan on foobar (cost=0.00..31.40 rows=2140 width=4) \n(actual time=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on foobar2 foobar (cost=0.00..8851.76 rows=574576 \nwidth=4) (actual time=0.004..1027.422 rows=574576 loops=1)\n -> Hash (cost=1.52..1.52 rows=52 width=4) (actual time=0.194..0.194 \nrows=52 loops=1)\n -> Seq Scan on tmp_ids (cost=0.00..1.52 rows=52 width=4) \n(actual time=0.003..0.094 rows=52 loops=1)\n Total runtime: 3905.074 ms\n(8 rows)\n\n# select version();\n version\n--------------------------------------------------------------------------------------------\n PostgreSQL 8.1.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) \n3.3.3 (SuSE Linux)\n(1 row)\n\n# \\d foobar\n Table \"public.foobar\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\nIndexes:\n \"foobar_idx1\" btree (id)\n\n# \\d foobar2\n Table \"public.foobar2\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\nIndexes:\n \"foobar2_idx1\" btree (id)\nInherits: foobar\n\n# \\d tmp_ids\n Table \"public.tmp_ids\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\n\n\n# set enable_seqscan=off;\nSET\n# explain analyze select * from foobar join tmp_ids using (id);\n QUERY \nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=200000001.65..300013267.85 rows=149946 width=4) \n(actual time=7.352..3841.221 rows=44097 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Append (cost=100000000.00..200008883.16 rows=576716 width=4) \n(actual time=0.012..2803.547 rows=574576 loops=1)\n -> Seq Scan on foobar (cost=100000000.00..100000031.40 \nrows=2140 width=4) (actual time=0.003..0.003 rows=0 loops=1)\n -> Seq Scan on foobar2 
foobar \n(cost=100000000.00..100008851.76 rows=574576 width=4) (actual \ntime=0.005..1032.148 rows=574576 loops=1)\n -> Hash (cost=100000001.52..100000001.52 rows=52 width=4) (actual \ntime=0.194..0.194 rows=52 loops=1)\n -> Seq Scan on tmp_ids (cost=100000000.00..100000001.52 \nrows=52 width=4) (actual time=0.004..0.098 rows=52 loops=1)\n Total runtime: 3909.332 ms\n(8 rows)\n\nOutput of \"show all\" (remember I just turned off seq_scan above)\n\n enable_bitmapscan | \non | Enables the planner's use of \nbitmap-scan plans.\n enable_hashagg | \non | Enables the planner's use of \nhashed aggregation plans.\n enable_hashjoin | \non | Enables the planner's use of \nhash join plans.\n enable_indexscan | \non | Enables the planner's use of \nindex-scan plans.\n enable_mergejoin | \non | Enables the planner's use of \nmerge join plans.\n enable_nestloop | \non | Enables the planner's use of \nnested-loop join plans.\n enable_seqscan | \noff | Enables the planner's use of \nsequential-scan plans.\n enable_sort | \non | Enables the planner's use of \nexplicit sort steps.\n enable_tidscan | \non | Enables the planner's use of \nTID scan plans.\n\n", "msg_date": "Tue, 17 Jan 2006 19:29:44 -0800", "msg_from": "Hari Warrier <[email protected]>", "msg_from_op": true, "msg_subject": "Getting pg to use index on an inherited table (8.1.1)" }, { "msg_contents": "Hari Warrier <[email protected]> writes:\n> A query on foobar2 (child table) uses the index, whereas the same query \n> via foobar (parent) doesn't.\n\nA query just on foobar should be able to use the index AFAIR. The\nproblem here is that you have a join, and we are not very good about\nsituations involving joins against inheritance sets (nor joins against\nUNION ALL subqueries, which is really about the same thing).\n\nI'm hoping to get a chance to look into improving this during the 8.2\ndevelopment cycle.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Jan 2006 23:07:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting pg to use index on an inherited table (8.1.1) " } ]
[ { "msg_contents": "Hi,\n\nI have a simple question about performance using two resources.\n\nWhat's have the best performance?\n\nlower( col1 ) LIKE lower( 'myquestion%' )\n\nOR\n\ncol1 ILIKE 'myquestion%'\n\nThanks.\n\n\n", "msg_date": "Wed, 18 Jan 2006 09:10:30 +0000", "msg_from": "Marcos <[email protected]>", "msg_from_op": true, "msg_subject": "Simple Question of Performance ILIKE or Lower" }, { "msg_contents": "On Wed, Jan 18, 2006 at 09:10:30AM +0000, Marcos wrote:\n> Hi,\n> \n> I have a simple question about performance using two resources.\n> \n> What's have the best performance?\n> \n> lower( col1 ) LIKE lower( 'myquestion%' )\n> \n> OR\n> \n> col1 ILIKE 'myquestion%'\n\nIf you index lower( col1 ), then the former would likely perform better\n(if the optimizer knows it could use the index in that case). Otherwise\nI suspect they'd be the same.\n\nTry it and find out.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 18 Jan 2006 17:02:30 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Simple Question of Performance ILIKE or Lower" } ]
[ { "msg_contents": "Hi,\n \nI am currently doing large weekly updates with fsync=off. My updates\ninvolves SELECT, UPDATE, DELETE and etc. Setting fsync=off works for me\nsince I take a complete backup before the weekly update and run a \"sync\" and\n\"CHECKPOINT\" after each weekly update has completed to ensure the data is\nall written to disk. \n \nObviously, I have done this to improve write performance for the update each\nweek. My question is if I install a 3ware or similar card to replace my\ncurrent software RAID 1 configuration, am I going to see a very large\nimprovement? If so, what would be a ball park figure?\n \nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com <http://www.benjaminarai.com/> \n \n\n\n\n\n\nHi,\n \nI am currently doing \nlarge weekly updates with fsync=off.  My updates involves SELECT, UPDATE, \nDELETE and etc.  Setting fsync=off works for me since I take a complete \nbackup before the weekly update and run a \"sync\" and \"CHECKPOINT\" after each \nweekly update has completed to ensure the data is all written to disk.  \n\n \nObviously, I have \ndone this to improve write performance for the update each week.  My \nquestion is if I install a 3ware or similar card to replace my current \nsoftware RAID 1 configuration, am I going to see a very large improvement?  \nIf so, what would be a ball park figure?\n \n\nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com", "msg_date": "Wed, 18 Jan 2006 10:09:46 -0800", "msg_from": "\"Benjamin Arai\" <[email protected]>", "msg_from_op": true, "msg_subject": "3WARE Card performance boost?" }, { "msg_contents": "\n> Obviously, I have done this to improve write performance for the update \n> each week. My question is if I install a 3ware or similar card to \n> replace my current software RAID 1 configuration, am I going to see a \n> very large improvement? If so, what would be a ball park figure?\n\nWell that entirely depends on what level...\n\n1. I would suggest LSI 150-6 not 3ware\n\n Why?\n\nBecause 3ware does not make a midrange card that has a battery backed \ncache :). That is the only reason. 3ware makes good stuff.\n\nSo anyway... LSI150-6 with Battery Backed cache option. Put 6 drives\non it with a RAID 10 array, turn on write cache and you should have\na hauling drive.\n\nJoshua D. Drake\n\n\n\n> \n> *Benjamin Arai*\n> [email protected] <mailto:[email protected]>\n> http://www.benjaminarai.com <http://www.benjaminarai.com/>\n> \n\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Wed, 18 Jan 2006 10:24:17 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "My original plan was to buy a 3WARE card and put a 1GB of memory on it to\nimprove writes but I am not sure if that is actually going to help the issue\nif fsync=off.\n \nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com <http://www.benjaminarai.com/> \n \n\n\n _____ \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Benjamin Arai\nSent: Wednesday, January 18, 2006 10:10 AM\nTo: [email protected]\nSubject: [PERFORM] 3WARE Card performance boost?\n\n\nHi,\n \nI am currently doing large weekly updates with fsync=off. My updates\ninvolves SELECT, UPDATE, DELETE and etc. 
Setting fsync=off works for me\nsince I take a complete backup before the weekly update and run a \"sync\" and\n\"CHECKPOINT\" after each weekly update has completed to ensure the data is\nall written to disk. \n \nObviously, I have done this to improve write performance for the update each\nweek. My question is if I install a 3ware or similar card to replace my\ncurrent software RAID 1 configuration, am I going to see a very large\nimprovement? If so, what would be a ball park figure?\n \n\nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com <http://www.benjaminarai.com/> \n \n\n\n\n\n\n\n\nMy original plan was to buy a 3WARE card and put a 1GB of \nmemory on it to improve writes but I am not sure if that is actually going to \nhelp the issue if fsync=off.\n \n\nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com\n \n\n\n\nFrom: [email protected] \n [mailto:[email protected]] On Behalf Of Benjamin \n AraiSent: Wednesday, January 18, 2006 10:10 AMTo: \n [email protected]: [PERFORM] 3WARE Card \n performance boost?\n\nHi,\n \nI am currently \n doing large weekly updates with fsync=off.  My updates involves SELECT, \n UPDATE, DELETE and etc.  Setting fsync=off works for me since I take a \n complete backup before the weekly update and run a \"sync\" and \"CHECKPOINT\" \n after each weekly update has completed to ensure the data is all written to \n disk.  \n \nObviously, I have \n done this to improve write performance for the update each week.  My \n question is if I install a 3ware or similar card to replace my current \n software RAID 1 configuration, am I going to see a very large \n improvement?  If so, what would be a ball park \nfigure?\n \n\nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com", "msg_date": "Wed, 18 Jan 2006 10:26:37 -0800", "msg_from": "\"Benjamin Arai\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "http://www.3ware.com/products/serial_ata2-9000.asp\n\nCheck their data sheet - the cards are BBU ready - all you have to do\nis order a BBU\nwhich you can from here:\nhttp://www.newegg.com/Product/Product.asp?Item=N82E16815999601\n\n\nAlex.\n\nOn 1/18/06, Joshua D. Drake <[email protected]> wrote:\n>\n> > Obviously, I have done this to improve write performance for the update\n> > each week. My question is if I install a 3ware or similar card to\n> > replace my current software RAID 1 configuration, am I going to see a\n> > very large improvement? If so, what would be a ball park figure?\n>\n> Well that entirely depends on what level...\n>\n> 1. I would suggest LSI 150-6 not 3ware\n>\n> Why?\n>\n> Because 3ware does not make a midrange card that has a battery backed\n> cache :). That is the only reason. 3ware makes good stuff.\n>\n> So anyway... LSI150-6 with Battery Backed cache option. Put 6 drives\n> on it with a RAID 10 array, turn on write cache and you should have\n> a hauling drive.\n>\n> Joshua D. Drake\n>\n>\n>\n> >\n> > *Benjamin Arai*\n> > [email protected] <mailto:[email protected]>\n> > http://www.benjaminarai.com <http://www.benjaminarai.com/>\n> >\n>\n>\n> --\n> The PostgreSQL Company - Command Prompt, Inc. 
1.503.667.4564\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n> Managed Services, Shared and Dedicated Hosting\n> Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Wed, 18 Jan 2006 14:12:43 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "A 3ware card will re-order your writes to put them more in disk order,\nwhich will probably improve performance a bit, but just going from a\nsoftware RAID 1 to a hardware RAID 1, I would not imagine that you\nwill see much of a performance boost. Really to get better\nperformance you will need to add more drives, or faster drives. If\nyou are currently running 7200 RPM consumer drives, going to a\n10000RPM WD Raptor drive will probably increase performance by about\n30%, again not all that much.\n\nAlex\n\nOn 1/18/06, Benjamin Arai <[email protected]> wrote:\n>\n> Hi,\n>\n> I am currently doing large weekly updates with fsync=off. My updates\n> involves SELECT, UPDATE, DELETE and etc. Setting fsync=off works for me\n> since I take a complete backup before the weekly update and run a \"sync\" and\n> \"CHECKPOINT\" after each weekly update has completed to ensure the data is\n> all written to disk.\n>\n> Obviously, I have done this to improve write performance for the update each\n> week. My question is if I install a 3ware or similar card to replace my\n> current software RAID 1 configuration, am I going to see a very large\n> improvement? If so, what would be a ball park figure?\n>\n>\n> Benjamin Arai\n> [email protected]\n> http://www.benjaminarai.com\n>\n", "msg_date": "Wed, 18 Jan 2006 14:17:04 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "On Wed, 2006-01-18 at 10:26 -0800, Benjamin Arai wrote:\n> My original plan was to buy a 3WARE card and put a 1GB of memory on it\n> to improve writes but I am not sure if that is actually going to help\n> the issue if fsync=off.\nMy experience with a 3Ware 9500S-8 card are rather disappointing,\nespecially the write performance of the card, which is extremely poor.\nReading is OK.\n\n\n-- \nGroeten,\n\nJoost Kraaijeveld\nAskesis B.V.\nMolukkenstraat 14\n6524NB Nijmegen\ntel: 024-3888063 / 06-51855277\nfax: 024-3608416\ne-mail: [email protected]\nweb: www.askesis.nl \n\n\n", "msg_date": "Wed, 18 Jan 2006 20:23:25 +0100", "msg_from": "Joost Kraaijeveld <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "Benjamin Arai wrote:\n> Obviously, I have done this to improve write performance for the update \n> each week. My question is if I install a 3ware or similar card to \n> replace my current software RAID 1 configuration, am I going to see a \n> very large improvement? If so, what would be a ball park figure?\n\nThe key is getting a card with the ability to upgrade the onboard ram.\n\nOur previous setup was a LSI MegaRAID 320-1 (128MB), 4xRAID10, \nfsync=off. Replaced it with a ARC-1170 (1GB) w/ 24x7200RPM SATA2 drives \n(split into 3 8-drive RAID6 arrays) and performance for us is through \nthe ceiling.\n\nFor OLTP type updates, we've gotten about +80% increase. 
For massive \n1-statement updates, performance increase is in the +triple digits.\n", "msg_date": "Wed, 18 Jan 2006 13:58:09 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "On Wed, Jan 18, 2006 at 01:58:09PM -0800, William Yu wrote:\n> The key is getting a card with the ability to upgrade the onboard ram.\n> \n> Our previous setup was a LSI MegaRAID 320-1 (128MB), 4xRAID10, \n> fsync=off. Replaced it with a ARC-1170 (1GB) w/ 24x7200RPM SATA2 drives \n> (split into 3 8-drive RAID6 arrays) and performance for us is through \n> the ceiling.\n\nWell, the fact that you went from four to 24 disks would perhaps be a bigger\nfactor than the amount of RAM...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 18 Jan 2006 23:02:32 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "Steinar H. Gunderson wrote:\n> On Wed, Jan 18, 2006 at 01:58:09PM -0800, William Yu wrote:\n>> The key is getting a card with the ability to upgrade the onboard ram.\n>>\n>> Our previous setup was a LSI MegaRAID 320-1 (128MB), 4xRAID10, \n>> fsync=off. Replaced it with a ARC-1170 (1GB) w/ 24x7200RPM SATA2 drives \n>> (split into 3 8-drive RAID6 arrays) and performance for us is through \n>> the ceiling.\n> \n> Well, the fact that you went from four to 24 disks would perhaps be a bigger\n> factor than the amount of RAM...\n> \n> /* Steinar */\n\nActually no. Our 2xOpteron 244 server is NOT fast enough to drive an \narray this large. That's why we had to split it up into 3 different \narrays. I tried all different RAID configs and once past about 8 drives, \nI got the same performance no matter what because the CPU was pegged at \n100%. Right now, 2 of the arrays are just mirroring each other because \nwe can't seem utilize the performance right now. (Also protects against \ncabling/power supply issues as we're using 3 seperate external enclosures.)\n\nThe 1GB RAM is much bigger because it almost completely hides the write \nactivity. Looking at iostat while all our jobs are running, there's \nalmost no disk activity. If I manually type \"sync\", I see 1 big \n250MB-500MB write storm for 2 seconds but otherwise, writes just slowly \ndribble out to disk.\n", "msg_date": "Wed, 18 Jan 2006 14:15:22 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "On Wed, Jan 18, 2006 at 10:24:17AM -0800, Joshua D. Drake wrote:\n> Because 3ware does not make a midrange card that has a battery backed \n> cache :). That is the only reason. 3ware makes good stuff.\n\nWhy worry about battery-backup if he's running with fsync off?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 18 Jan 2006 17:05:32 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "\nOn Jan 18, 2006, at 1:09 PM, Benjamin Arai wrote:\n\n> Obviously, I have done this to improve write performance for the \n> update each\n> week. 
My question is if I install a 3ware or similar card to \n> replace my\n\nI'll bet that if you increase your checkpoint_segments (and \ncorresponding timeout value) to something in the 10's or higher it \nwill help you a lot.\n\n", "msg_date": "Thu, 19 Jan 2006 16:38:33 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" }, { "msg_contents": "He's talking about RAID 1 here, not a gargantuan RAID 6. Onboard RAM\non the controller card is going to make very little difference. All\nit will do is allow the card to re-order writes to a point (not all\ncards even do this).\n\nAlex.\n\nOn 1/18/06, William Yu <[email protected]> wrote:\n> Benjamin Arai wrote:\n> > Obviously, I have done this to improve write performance for the update\n> > each week. My question is if I install a 3ware or similar card to\n> > replace my current software RAID 1 configuration, am I going to see a\n> > very large improvement? If so, what would be a ball park figure?\n>\n> The key is getting a card with the ability to upgrade the onboard ram.\n>\n> Our previous setup was a LSI MegaRAID 320-1 (128MB), 4xRAID10,\n> fsync=off. Replaced it with a ARC-1170 (1GB) w/ 24x7200RPM SATA2 drives\n> (split into 3 8-drive RAID6 arrays) and performance for us is through\n> the ceiling.\n>\n> For OLTP type updates, we've gotten about +80% increase. For massive\n> 1-statement updates, performance increase is in the +triple digits.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Thu, 19 Jan 2006 17:54:32 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3WARE Card performance boost?" } ]
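
Following up on the checkpoint_segments suggestion above, a minimal sketch of how one might check and then raise the relevant settings before spending money on a controller (the values are illustrative starting points, not tuned recommendations for this particular machine):

-- from psql, see what the server is currently running with
SHOW checkpoint_segments;   -- 8.1 default is 3
SHOW checkpoint_timeout;    -- seconds between forced checkpoints, default 300

-- then raise them in postgresql.conf and reload, for example:
--   checkpoint_segments = 30
--   checkpoint_timeout  = 900

With only a few WAL segments, a big weekly update forces very frequent checkpoints, so spreading them out is a cheap experiment to run before concluding that new RAID hardware is needed.
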
[ { "msg_contents": "Hi,\n \nWill simple queries such as \"SELECT * FROM blah_table WHERE tag='x'; work\nany faster by putting them into a stored procedure?\n \nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com <http://www.benjaminarai.com/> \n \n\n\n\n\n\nHi,\n \nWill simple queries \nsuch as \"SELECT * FROM blah_table WHERE tag='x'; work any faster by putting them \ninto a stored procedure?\n \n\nBenjamin Arai\[email protected]\nhttp://www.benjaminarai.com", "msg_date": "Thu, 19 Jan 2006 09:38:01 -0800", "msg_from": "\"Benjamin Arai\" <[email protected]>", "msg_from_op": true, "msg_subject": "Stored Procedures" }, { "msg_contents": "Benjamin Arai <[email protected]> schrieb:\n\n> Hi,\n> \n> Will simple queries such as \"SELECT * FROM blah_table WHERE tag='x'; work any\n> faster by putting them into a stored procedure?\n\nIMHO no, why do you think so? You can use PREPARE instead, if you have many\nselects like this.\n\n\nHTH, Andreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Thu, 19 Jan 2006 20:34:26 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stored Procedures" } ]
[ { "msg_contents": "Hi,\n \n I am trying to perform the following type of query 'select\npatientname ... from patient were patientname LIKE 'JONES%, %' order by\npatientname asc limit 100'. There about 1.4 million rows in the table.\nOn my windows machine (2GB Ram ,3Ghz, Windows XP, 120GB Hard Drive,\npostgres 8.1beta4) it takes about 150 millisecs and the query plan is \n \n 'Limit (cost=18381.90..18384.40 rows=100 width=404)'\n' -> Unique (cost=18381.90..18418.62 rows=1469 width=404)'\n' -> Sort (cost=18381.90..18385.57 rows=1469 width=404)'\n' Sort Key: patientname, patientidentifier,\npatientvipindicator, patientconfidentiality, patientmrn,\npatientfacility, patientssn, patientsex, patientbirthdate'\n' -> Bitmap Heap Scan on patient (cost=81.08..18304.62\nrows=1469 width=404)'\n' Filter: ((patientname)::text ~~ ''BILL%,\n%''::text)'\n' -> Bitmap Index Scan on ix_patientname\n(cost=0.00..81.08 rows=7347 width=0)'\n' Index Cond: (((patientname)::text >=\n''BILL''::character varying) AND ((patientname)::text <\n''BILM''::character varying))'\n\nHowever the same query on AIX (4 1.5Ghz processors, 60GB filesystem, 4GB\nRam, postgres 8.1.2) it takes like 5 secs because the query plan just\nuses sequentials scans\n \nLimit (cost=100054251.96..100054253.41 rows=58 width=161)\n -> Unique (cost=100054251.96..100054253.41 rows=58 width=161)\n -> Sort (cost=100054251.96..100054252.11 rows=58 width=161)\n Sort Key: patientname, patientidentifier,\npatientvipindicator, patientconfidentiality, patientmrn,\npatientfacility, patientssn, patientsex, patientbirthdate\n -> Seq Scan on patient (cost=100000000.00..100054250.26\nrows=58 width=161)\n Filter: ((patientname)::text ~~ 'SMITH%,\nNA%'::text)\n\nWhy is postgres using a sequential scan and not the index what\nparameters do I need to adjust\n \nthanks\nTim Jones\nOptio Software\n\n\n\n\n\n\nHi,\n \n  I am trying \nto perform the following type of query  'select patientname ... from \npatient were patientname LIKE 'JONES%, %' order by patientname asc limit 100'. \nThere about 1.4 million rows in the table. 
On my windows machine (2GB Ram ,3Ghz, \nWindows XP, 120GB Hard Drive, postgres 8.1beta4) it takes about 150 millisecs \nand the query plan is \n \n    \n'Limit  (cost=18381.90..18384.40 rows=100 width=404)''  \n->  Unique  (cost=18381.90..18418.62 rows=1469 \nwidth=404)''        ->  \nSort  (cost=18381.90..18385.57 rows=1469 \nwidth=404)''              \nSort Key: patientname, patientidentifier, patientvipindicator, \npatientconfidentiality, patientmrn, patientfacility, patientssn, patientsex, \npatientbirthdate''              \n->  Bitmap Heap Scan on patient  (cost=81.08..18304.62 rows=1469 \nwidth=404)''                    \nFilter: ((patientname)::text ~~ ''BILL%, \n%''::text)''                    \n->  Bitmap Index Scan on ix_patientname  (cost=0.00..81.08 \nrows=7347 \nwidth=0)''                          \nIndex Cond: (((patientname)::text >= ''BILL''::character varying) AND \n((patientname)::text < ''BILM''::character varying))'\nHowever the same \nquery on AIX (4 1.5Ghz processors, 60GB filesystem, 4GB Ram, postgres 8.1.2) it \ntakes like 5 secs because the query plan just uses sequentials \nscans\n \nLimit  \n(cost=100054251.96..100054253.41 rows=58 width=161)   ->  \nUnique  (cost=100054251.96..100054253.41 rows=58 \nwidth=161)         ->  \nSort  (cost=100054251.96..100054252.11 rows=58 \nwidth=161)               \nSort Key: patientname, patientidentifier, patientvipindicator, \npatientconfidentiality, patientmrn, patientfacility, patientssn, patientsex, \npatientbirthdate               \n->  Seq Scan on patient  (cost=100000000.00..100054250.26 rows=58 \nwidth=161)                     \nFilter: ((patientname)::text ~~ 'SMITH%, NA%'::text)\nWhy is postgres \nusing a sequential scan and not the index what parameters do I need to \nadjust\n \nthanks\nTim \nJones\nOptio \nSoftware", "msg_date": "Thu, 19 Jan 2006 17:20:22 -0500", "msg_from": "\"Tim Jones\" <[email protected]>", "msg_from_op": true, "msg_subject": "query plans different for 8.1 on windows and aix" }, { "msg_contents": "\"Tim Jones\" <[email protected]> writes:\n> Why is postgres using a sequential scan and not the index what\n> parameters do I need to adjust\n\nYou probably initialized the AIX database in a non-C locale.\nSee the manual concerning LIKE index optimizations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Jan 2006 17:27:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plans different for 8.1 on windows and aix " } ]
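
For reference, the usual fix in that situation is an index built with the pattern operator class, which lets LIKE 'prefix%' use an index even when the database was initialized in a non-C locale. A sketch against the table above, assuming patientname is varchar as the casts in the plan suggest (the index name is made up; use text_pattern_ops if the column is plain text):

CREATE INDEX ix_patientname_pattern
    ON patient (patientname varchar_pattern_ops);

The ordinary index is still the one to keep for equality, range comparisons and ORDER BY; the pattern_ops index only serves the prefix-LIKE matching.
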
[ { "msg_contents": "\nThe following query took 17 seconds:\nselect count(LogSN), min(LogSN), max(LogSN) from Log where create_time < \n'2005/10/19';\n\nFiguring that getting the count will involve scanning the database, I took \nit out, but the new query took 200 seconds:\nselect min(LogSN), max(LogSN) from Log where create_time < '2005/10/19';\n\nIs it because the planner is using index pk_log instead of idx_logtime? \nAnyway to avoid that?\n\nI can get instant replies with 2 separate queries for min(LogSN) and \nmax(LogSN) using order by create_time limit 1, but I can't get both values \nwithin 1 query using the limit 1 construct. Any suggestions?\n\nI am running pg 8.1.2 on Windows 2000. The queries are done immediately \nafter a vacuum analyze.\n\nBest regards,\nKC.\n\n----------------------\n\nesdt=> \\d log;\n create_time | character varying(23) | default \n'1970/01/01~00:00:00.000'::char\nacter varying\n logsn | integer | not null\n ...\nIndexes:\n \"pk_log\" PRIMARY KEY, btree (logsn)\n \"idx_logtime\" btree (create_time, logsn)\n ...\n\nesdt=> vacuum analyze log;\nVACUUM\n\nesdt=> explain analyze select count(LogSN), min(LogSN), max(LogSN) from Log \nwhere create_time < '2005/10/19';\n\n Aggregate (cost=57817.74..57817.75 rows=1 width=4) (actual \ntime=17403.381..17403.384 rows=1 loops=1)\n -> Bitmap Heap Scan on log (cost=1458.31..57172.06 rows=86089 \nwidth=4) (actual time=180.368..17039.262 rows=106708 loops=1)\n Recheck Cond: ((create_time)::text < '2005/10/19'::text)\n -> Bitmap Index Scan on idx_logtime (cost=0.00..1458.31 \nrows=86089 width=0) (actual time=168.777..168.777 rows=106708 loops=1)\n Index Cond: ((create_time)::text < '2005/10/19'::text)\n Total runtime: 17403.787 ms\n\nesdt=> explain analyze select min(LogSN), max(LogSN) from Log where \ncreate_time < '2005/10/19';\n\n Result (cost=2.51..2.52 rows=1 width=0) (actual \ntime=200051.507..200051.510 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..1.26 rows=1 width=4) (actual \ntime=18.541..18.544 rows=1 loops=1)\n -> Index Scan using pk_log on log (cost=0.00..108047.11 \nrows=86089\nwidth=4) (actual time=18.533..18.533 rows=1 loops=1)\n Filter: (((create_time)::text < '2005/10/19'::text) AND \n(logsn IS NOT NULL))\n -> Limit (cost=0.00..1.26 rows=1 width=4) (actual \ntime=200032.928..200032.931 rows=1 loops=1)\n -> Index Scan Backward using pk_log on \nlog (cost=0.00..108047.11 rows=86089 width=4) (actual \ntime=200032.920..200032.920 rows=1 loops=1)\n Filter: (((create_time)::text < '2005/10/19'::text) AND \n(logsn IS NOT NULL))\n Total runtime: 200051.701 ms\n\nesdt=> explain analyze select LogSN from Log where create_time < \n'2005/10/19' order by create_time limit 1;\n\n Limit (cost=0.00..0.98 rows=1 width=31) (actual time=0.071..0.073 rows=1 \nloops=1)\n -> Index Scan using idx_logtime on log (cost=0.00..84649.94 \nrows=86089 width=31) (actual time=0.063..0.063 rows=1 loops=1)\n Index Cond: ((create_time)::text < '2005/10/19'::text)\n Total runtime: 0.182 ms\n\nesdt=> explain analyze select LogSN from Log where create_time < \n'2005/10/19' order by create_time desc limit 1;\n Limit (cost=0.00..0.98 rows=1 width=31) (actual time=0.058..0.061 rows=1 \nloops=1)\n -> Index Scan Backward using idx_logtime on log (cost=0.00..84649.94 \nrows=86089 width=31) (actual time=0.051..0.051 rows=1 loops=1)\n Index Cond: ((create_time)::text < '2005/10/19'::text)\n Total runtime: 0.186 ms\n\n", "msg_date": "Fri, 20 Jan 2006 12:35:36 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": true, "msg_subject": 
"SELECT MIN, MAX took longer time than SELECT COUNT, MIN, MAX" }, { "msg_contents": "On Fri, Jan 20, 2006 at 12:35:36PM +0800, K C Lau wrote:\n\nHere's the problem... the estimate for the backwards index scan is *way*\noff:\n\n> -> Limit (cost=0.00..1.26 rows=1 width=4) (actual \n> time=200032.928..200032.931 rows=1 loops=1)\n> -> Index Scan Backward using pk_log on \n> log (cost=0.00..108047.11 rows=86089 width=4) (actual \n> time=200032.920..200032.920 rows=1 loops=1)\n> Filter: (((create_time)::text < '2005/10/19'::text) AND \n> (logsn IS NOT NULL))\n> Total runtime: 200051.701 ms\n\nBTW, these queries below are meaningless; they are not equivalent to\nmin(logsn).\n\n> esdt=> explain analyze select LogSN from Log where create_time < \n> '2005/10/19' order by create_time limit 1;\n> \n> Limit (cost=0.00..0.98 rows=1 width=31) (actual time=0.071..0.073 rows=1 \n> loops=1)\n> -> Index Scan using idx_logtime on log (cost=0.00..84649.94 \n> rows=86089 width=31) (actual time=0.063..0.063 rows=1 loops=1)\n> Index Cond: ((create_time)::text < '2005/10/19'::text)\n> Total runtime: 0.182 ms\n> \n> esdt=> explain analyze select LogSN from Log where create_time < \n> '2005/10/19' order by create_time desc limit 1;\n> Limit (cost=0.00..0.98 rows=1 width=31) (actual time=0.058..0.061 rows=1 \n> loops=1)\n> -> Index Scan Backward using idx_logtime on log (cost=0.00..84649.94 \n> rows=86089 width=31) (actual time=0.051..0.051 rows=1 loops=1)\n> Index Cond: ((create_time)::text < '2005/10/19'::text)\n> Total runtime: 0.186 ms\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 20 Jan 2006 11:20:26 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT MIN, MAX took longer time than SELECT COUNT, MIN, MAX" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> On Fri, Jan 20, 2006 at 12:35:36PM +0800, K C Lau wrote:\n> Here's the problem... the estimate for the backwards index scan is *way*\n> off:\n\n>> -> Limit (cost=0.00..1.26 rows=1 width=4) (actual \n>> time=200032.928..200032.931 rows=1 loops=1)\n>> -> Index Scan Backward using pk_log on \n>> log (cost=0.00..108047.11 rows=86089 width=4) (actual \n>> time=200032.920..200032.920 rows=1 loops=1)\n>> Filter: (((create_time)::text < '2005/10/19'::text) AND \n>> (logsn IS NOT NULL))\n>> Total runtime: 200051.701 ms\n\nIt's more subtle than you think. The estimated rowcount is the\nestimated number of rows fetched if the indexscan were run to\ncompletion, which it isn't because the LIMIT cuts it off after the\nfirst returned row. That estimate is not bad (we can see from the\naggregate plan that the true value would have been 106708, assuming\nthat the \"logsn IS NOT NULL\" condition isn't filtering anything).\n\nThe real problem is that it's taking quite a long time for the scan\nto reach the first row with create_time < 2005/10/19, which is not\ntoo surprising if logsn is strongly correlated with create_time ...\nbut in the absence of any cross-column statistics the planner has\nno very good way to know that. (Hm ... but both of them probably\nalso show a strong correlation to physical order ... we could look\nat that maybe ...) 
The default assumption is that the two columns\naren't correlated and so it should not take long to hit the first such\nrow, which is why the planner likes the indexscan/limit plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 13:51:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SELECT MIN, MAX took longer time than SELECT COUNT, MIN, MAX " }, { "msg_contents": ">\n>> Hi,\n>> \n>> Will simple queries such as \"SELECT * FROM blah_table WHERE tag='x'; \n>> work any\n>> faster by putting them into a stored procedure?\n\n>\n> IMHO no, why do you think so? You can use PREPARE instead, if you have \n> many\n> selects like this.\n\n\nI tought that creating stored procedures in database means\nstoring it's execution plan (well, actually storing it like a\ncompiled object). Well, that's what I've learned couple a years\nago in colledge ;)\n\nWhat are the advantages of parsing SP functions every time it's called?\n\nMy position is that preparing stored procedures for execution solves\nmore problems, that it creates.\nAnd the most important one to be optimizing access to queries from \nmultiple connections (which is one of the most important reasons for \nusing stored procedures in the first place).\n\nBest regards,\n Rikard\n\n\n", "msg_date": "Fri, 20 Jan 2006 19:55:19 +0100", "msg_from": "Rikard Pavelic <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Stored procedures" }, { "msg_contents": "At 01:20 06/01/21, Jim C. Nasby wrote:\n\n>BTW, these queries below are meaningless; they are not equivalent to\n>min(logsn).\n>\n> > esdt=> explain analyze select LogSN from Log where create_time <\n> > '2005/10/19' order by create_time limit 1;\n\nThank you for pointing it out.\n\nIt actually returns the min(logsn), as the index is on (create_time, \nlogsn). To be more explicit, I have changed to query to:\nexplain analyze select LogSN from Log where create_time < '2005/10/19' \norder by create_time, logsn limit 1;\n\nesdt=> \\d log;\n create_time | character varying(23) | default \n'1970/01/01~00:00:00.000'::character varying\n logsn | integer | not null\n ...\nIndexes:\n \"pk_log\" PRIMARY KEY, btree (logsn)\n \"idx_logtime\" btree (create_time, logsn)\n\nBest regards,\nKC.\n\n", "msg_date": "Sat, 21 Jan 2006 21:12:47 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT MIN, MAX took longer time than SELECT" }, { "msg_contents": "I have worked round the issue by using 2 separate queries with the LIMIT \nconstruct.\n\nLogSN and create_time are indeed directly correlated, both monotonously \nincreasing, occasionally with multiple LogSN's having the same create_time.\n\nWhat puzzles me is why the query with COUNT, MIN, MAX uses idx_logtime for \nthe scan, but the query without the COUNT uses pk_log and takes much \nlonger. If it had chosen idx_logtime instead, then it should have returned \nimmediately for both MIN and MAX.\n\nBest regards,\nKC.\n\nAt 02:51 06/01/21, Tom Lane wrote:\n>\"Jim C. Nasby\" <[email protected]> writes:\n> > On Fri, Jan 20, 2006 at 12:35:36PM +0800, K C Lau wrote:\n> > Here's the problem... 
the estimate for the backwards index scan is *way*\n> > off:\n>\n> >> -> Limit (cost=0.00..1.26 rows=1 width=4) (actual\n> >> time=200032.928..200032.931 rows=1 loops=1)\n> >> -> Index Scan Backward using pk_log on\n> >> log (cost=0.00..108047.11 rows=86089 width=4) (actual\n> >> time=200032.920..200032.920 rows=1 loops=1)\n> >> Filter: (((create_time)::text < '2005/10/19'::text) AND\n> >> (logsn IS NOT NULL))\n> >> Total runtime: 200051.701 ms\n>\n>It's more subtle than you think. The estimated rowcount is the\n>estimated number of rows fetched if the indexscan were run to\n>completion, which it isn't because the LIMIT cuts it off after the\n>first returned row. That estimate is not bad (we can see from the\n>aggregate plan that the true value would have been 106708, assuming\n>that the \"logsn IS NOT NULL\" condition isn't filtering anything).\n>\n>The real problem is that it's taking quite a long time for the scan\n>to reach the first row with create_time < 2005/10/19, which is not\n>too surprising if logsn is strongly correlated with create_time ...\n>but in the absence of any cross-column statistics the planner has\n>no very good way to know that. (Hm ... but both of them probably\n>also show a strong correlation to physical order ... we could look\n>at that maybe ...) The default assumption is that the two columns\n>aren't correlated and so it should not take long to hit the first such\n>row, which is why the planner likes the indexscan/limit plan.\n>\n> regards, tom lane\n\n", "msg_date": "Sat, 21 Jan 2006 21:38:55 +0800", "msg_from": "K C Lau <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SELECT MIN, MAX took longer time than SELECT" } ]
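
For completeness, a sketch of the two-query workaround described above, leaning on the (create_time, logsn) index. As noted earlier in the thread, this only stands in for min()/max() because logsn and create_time happen to increase together, so it is not a general-purpose substitute:

SELECT logsn FROM Log
 WHERE create_time < '2005/10/19'
 ORDER BY create_time, logsn
 LIMIT 1;     -- plays the role of min(logsn) given the correlation

SELECT logsn FROM Log
 WHERE create_time < '2005/10/19'
 ORDER BY create_time DESC, logsn DESC
 LIMIT 1;     -- plays the role of max(logsn) given the correlation

Both should come back as short forward/backward scans on idx_logtime instead of the long walk over pk_log shown in the slow plan.
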
[ { "msg_contents": "Hi there,\n\nI am running a website where each page connects to the DB to retrieve and\nwrite information. Each page load uses a separate connection (rather than\njust sharing one as is the common case) because I use a lot of transactions.\n\nI am looking to speed up performance, and since each page executes a static\nset of queries where only the parameters change, I was hoping to take\nadvantage of stored procedures since I read that PostgreSQL's caches the\nexecution plans used inside stored procedures.\n\nHowever, the documentation states that this execution plan caching is done\non a per-connection basis. If each page uses a separate connection, I can\nget no performance benefit between pages.\n\nIn other words, there's no benefit to me in putting a one-shot query that is\nbasically the same for every page (e.g. \"SELECT * FROM users WHERE\nuser_name='<username>'\") inside a stored proc, since the generated execution\nplan will be thrown away once the connection is dropped.\n\nHas anyone found a way around this limitation? As I said, I can't share the\nDB connection between pages (unless someone knows of a way to do this and\nstill retain a level of separation between pages that use the same DB\nconnection).\n\nMany thanks,\n\nJames\n\nHi there,\n\nI am running a website where each page connects to the DB to retrieve\nand write information. Each page load uses a separate connection\n(rather than just sharing one as is the common case) because I use a\nlot of transactions.\n\nI am looking to speed up performance, and since each page executes a\nstatic set of queries where only the parameters change, I was hoping to\ntake advantage of stored procedures since I read that PostgreSQL's\ncaches the execution plans used inside stored procedures.\n\nHowever, the documentation states that this execution plan caching is\ndone on a per-connection basis. If each page uses a separate\nconnection, I can get no performance benefit between pages.\n\nIn other words, there's no benefit to me in putting a one-shot query\nthat is basically the same for every page (e.g. \"SELECT * FROM users\nWHERE user_name='<username>'\") inside a stored proc, since the\ngenerated execution plan will be thrown away once the connection is\ndropped.\n\nHas anyone found a way around this limitation? As I said, I can't share\nthe DB connection between pages (unless someone knows of a way to do\nthis and still retain a level of separation between pages that use the\nsame DB connection).\n\nMany thanks,\n\nJames", "msg_date": "Fri, 20 Jan 2006 18:14:15 +0900", "msg_from": "James Russell <[email protected]>", "msg_from_op": true, "msg_subject": "Retaining execution plans between connections?" }, { "msg_contents": "you could use pgpool\n\nhttp://pgpool.projects.postgresql.org/\n\n\nOn 1/20/06, James Russell <[email protected]> wrote:\n> Hi there,\n>\n> I am running a website where each page connects to the DB to retrieve and\n> write information. Each page load uses a separate connection (rather than\n> just sharing one as is the common case) because I use a lot of transactions.\n>\n> I am looking to speed up performance, and since each page executes a static\n> set of queries where only the parameters change, I was hoping to take\n> advantage of stored procedures since I read that PostgreSQL's caches the\n> execution plans used inside stored procedures.\n>\n> However, the documentation states that this execution plan caching is done\n> on a per-connection basis. 
If each page uses a separate connection, I can\n> get no performance benefit between pages.\n>\n> In other words, there's no benefit to me in putting a one-shot query that\n> is basically the same for every page (e.g. \"SELECT * FROM users WHERE\n> user_name='<username>'\") inside a stored proc, since the generated execution\n> plan will be thrown away once the connection is dropped.\n>\n> Has anyone found a way around this limitation? As I said, I can't share the\n> DB connection between pages (unless someone knows of a way to do this and\n> still retain a level of separation between pages that use the same DB\n> connection).\n>\n> Many thanks,\n>\n> James\n>\n", "msg_date": "Fri, 20 Jan 2006 15:05:49 +0530", "msg_from": "Pandurangan R S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Retaining execution plans between connections?" }, { "msg_contents": "On Fri, 2006-01-20 at 18:14 +0900, James Russell wrote:\n> I am looking to speed up performance, and since each page executes a\n> static set of queries where only the parameters change, I was hoping\n> to take advantage of stored procedures since I read that PostgreSQL's\n> caches the execution plans used inside stored procedures.\n\nNote that you can also take advantage of plan caching by using prepared\nstatements (PREPARE, EXECUTE and DEALLOCATE). These are also session\nlocal, however (i.e. you can't share prepared statements between\nconnections).\n\n> As I said, I can't share the DB connection between pages (unless\n> someone knows of a way to do this and still retain a level of\n> separation between pages that use the same DB connection).\n\nYou can't share plans among different sessions at the moment. Can you\nelaborate on why you can't use persistent or pooled database\nconnections?\n\n-Neil\n\n\n", "msg_date": "Fri, 20 Jan 2006 10:14:59 -0500", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Retaining execution plans between connections?" } ]
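
To make the session-local limitation concrete, here is a sketch of per-connection preparation using the query from the original post (the statement name is invented). The PREPARE has to be re-issued after every new connection, because the plan lives only as long as that session, which is why persistent or pooled connections are needed before any of this helps across page loads:

-- run once right after connecting, then reuse for the rest of the session
PREPARE get_user (text) AS
    SELECT * FROM users WHERE user_name = $1;

-- each page request served over the same connection can then just do
EXECUTE get_user('some_username');
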
[ { "msg_contents": "Hello!\n\nI noticed that the creation of a GIST index for tsearch2 takes very\nlong - about 20 minutes. CPU utilization is 100 %, the resulting\nindex file size is ~25 MB. Is this behaviour normal?\n\nFull text columns: title author_list\ntsearch2 word lists: fti_title fti_author_list\ntsearch2 indexes: idx_fti_title idx_fti_author_list\n\nThe table has 700,000 records. When I create a normal B-Tree index\non the same column for testing purposes, it works quite fast\n(approx. 30 seconds).\n\nThe columns that should be indexed are small, only about 10 words on\naverage.\n\nSystem specs:\nAthlon64 X2 3800+, 2 GB RAM\nPostgreSQL 8.1.2, Windows XP SP2\n\nI've never noticed this problem before, so could it probably be\nrelated to v8.1.2? Or is something in my configuration or table\ndefinition that causes this sluggishness?\n\nThanks very much in advance for your help!\n\n- Stephan\n\n\n\nThis is the table definition:\n-----------------------------------------------------------------\nCREATE TABLE publications\n(\n id int4 NOT NULL DEFAULT nextval('publications_id_seq'::regclass),\n publication_type_id int4 NOT NULL DEFAULT 0,\n keyword text NOT NULL,\n mdate date,\n \"year\" date,\n title text,\n fti_title tsvector,\n author_list text,\n fti_author_list tsvector,\n overview_timestamp timestamp,\n overview_xml text,\n CONSTRAINT publications_pkey PRIMARY KEY (keyword) USING INDEX\n TABLESPACE dblp_index,\n CONSTRAINT publications_publication_type_id_fkey FOREIGN KEY\n (publication_type_id)\n REFERENCES publication_types (id) MATCH SIMPLE\n ON UPDATE RESTRICT ON DELETE RESTRICT,\n CONSTRAINT publications_year_check CHECK (date_part('month'::text,\n\"year\") = 1::double precision AND date_part('day'::text, \"year\") =\n1::double precision)\n)\nWITHOUT OIDS;\n\nCREATE INDEX fki_publications_publication_type_id\n ON publications\n USING btree\n (publication_type_id)\n TABLESPACE dblp_index;\n\nCREATE INDEX idx_fti_author_list\n ON publications\n USING gist\n (fti_author_list)\n TABLESPACE dblp_index;\n\nCREATE INDEX idx_fti_title\n ON publications\n USING gist\n (fti_title)\n TABLESPACE dblp_index;\n\nCREATE INDEX idx_publications_year\n ON publications\n USING btree\n (\"year\")\n TABLESPACE dblp_index;\n\nCREATE INDEX idx_publications_year_part\n ON publications\n USING btree\n (date_part('year'::text, \"year\"))\n TABLESPACE dblp_index;\n\n\nCREATE TRIGGER tsvectorupdate_all\n BEFORE INSERT OR UPDATE\n ON publications\n FOR EACH ROW\n EXECUTE PROCEDURE multi_tsearch2();", "msg_date": "Fri, 20 Jan 2006 15:01:59 +0100", "msg_from": "Stephan Vollmer <[email protected]>", "msg_from_op": true, "msg_subject": "Creation of tsearch2 index is very slow" }, { "msg_contents": "PS:\n\nWhat I forgot to mention was that inserting records into the table\nis also about 2-3 times slower than before (most likely due to the\nslow index update operations).\n\nI dropped the whole database and restored the dumpfile, but the\nresult it the same. When the index is recreated after COPYing the\ndata, it takes more than 20 minutes for _each_ of both tsearch2\nindexes. So the total time to restore this table is more than 45\nminutes!\n\n- Stephan", "msg_date": "Fri, 20 Jan 2006 15:17:07 +0100", "msg_from": "Stephan Vollmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Creation of tsearch2 index is very slow" }, { "msg_contents": "Stephan Vollmer <[email protected]> writes:\n> I noticed that the creation of a GIST index for tsearch2 takes very\n> long - about 20 minutes. 
CPU utilization is 100 %, the resulting\n> index file size is ~25 MB. Is this behaviour normal?\n\nThis has been complained of before. GIST is always going to be slower\nat index-building than btree; in the btree case there's a simple optimal\nstrategy for making an index (ie, sort the keys) but for GIST we don't\nknow anything better to do than insert the keys one at a time.\n\nHowever, I'm not sure that anyone's tried to do any performance\noptimization on the GIST insert code ... there might be some low-hanging\nfruit there. It'd be interesting to look at a gprof profile of what the\nbackend is doing during the index build. Do you have the ability to do\nthat, or would you be willing to give your data to someone else to\ninvestigate with? (The behavior is very possibly data-dependent, which\nis why I want to see a profile with your specific data and not just some\nrandom dataset or other.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 10:35:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creation of tsearch2 index is very slow " }, { "msg_contents": "Tom Lane wrote:\n> Stephan Vollmer <[email protected]> writes:\n>> I noticed that the creation of a GIST index for tsearch2 takes very\n>> long - about 20 minutes. CPU utilization is 100 %, the resulting\n>> index file size is ~25 MB. Is this behaviour normal?\n> \n> This has been complained of before. GIST is always going to be slower\n> at index-building than btree; in the btree case there's a simple optimal\n> strategy for making an index (ie, sort the keys) but for GIST we don't\n> know anything better to do than insert the keys one at a time.\n\nAh, ok. That explains a lot, although I wonder why it is so much slower.\n\n\n> However, I'm not sure that anyone's tried to do any performance\n> optimization on the GIST insert code ... there might be some low-hanging\n> fruit there. It'd be interesting to look at a gprof profile of what the\n> backend is doing during the index build. Do you have the ability to do\n> that, or would you be willing to give your data to someone else to\n> investigate with?\n\nUnfortunately, I'm not able to investigate it further myself as I'm\nquite a Postgres newbie. But I could provide someone else with the\nexample table. Maybe someone else could find out why it is so slow.\n\nI dropped all unnecessary columns and trimmed the table down to\n235,000 rows. The dumped table (compressed with RAR) is 7,1 MB. I\ndon't have a website to upload it but I could send it to someone via\ne-mail.\n\nWith this 235,000 row table, index creation times are:\n- GIST 347063 ms\n- B-Tree 2515 ms\n\n\nThanks for your help!\n\n- Stephan", "msg_date": "Fri, 20 Jan 2006 17:49:53 +0100", "msg_from": "Stephan Vollmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Creation of tsearch2 index is very slow" }, { "msg_contents": "On Fri, Jan 20, 2006 at 10:35:21AM -0500, Tom Lane wrote:\n> However, I'm not sure that anyone's tried to do any performance\n> optimization on the GIST insert code ... there might be some low-hanging\n> fruit there. It'd be interesting to look at a gprof profile of what the\n> backend is doing during the index build. Do you have the ability to do\n> that, or would you be willing to give your data to someone else to\n> investigate with? 
(The behavior is very possibly data-dependent, which\n> is why I want to see a profile with your specific data and not just some\n> random dataset or other.)\n\nThe cost on inserting would generally go to either penalty, or\npicksplit. Certainly if you're inserting lots of values in a short\ninterval, I can imagine picksplit being nasty, since the algorithms for\na lot of datatypes are not really reknown for their speed.\n\nI'm wondering if you could possibly improve the process by grouping\ninto larger blocks. For example, pull out enough tuples to cover 4\npages and then call picksplit three times to split it into the four\npages. This gives you 4 entries for the level above the leaves. Keep\nreading tuples and splitting until you get enough for the next level\nand call picksplit on those. etc etc.\n\nThe thing is, you never call penalty here so it's questionable whether\nthe tree will be as efficient as just inserting. For example, if have a\ndata type representing ranges (a,b), straight inserting can produce the\nperfect tree order like a b-tree (assuming non-overlapping entries).\nThe above process will produce something close, but not quite...\n\nShould probably get out a pen-and-paper to model this. After all, if\nthe speed of the picksplit increases superlinearly to the number of\nelements, calling it will larger sets may prove to be a loss overall...\n\nPerhaps the easiest would be to allow datatypes to provide a bulkinsert\nfunction, like b-tree does? The question is, what should be its input\nand output?\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 20 Jan 2006 18:04:52 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creation of tsearch2 index is very slow" }, { "msg_contents": "Stephan Vollmer <[email protected]> writes:\n> Unfortunately, I'm not able to investigate it further myself as I'm\n> quite a Postgres newbie. But I could provide someone else with the\n> example table. Maybe someone else could find out why it is so slow.\n\nI'd be willing to take a look, if you'll send me the dump file off-list.\n\n> I dropped all unnecessary columns and trimmed the table down to\n> 235,000 rows. The dumped table (compressed with RAR) is 7,1 MB. I\n> don't have a website to upload it but I could send it to someone via\n> e-mail.\n\nDon't have RAR --- gzip or bzip2 is fine ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 12:09:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creation of tsearch2 index is very slow " }, { "msg_contents": "Martijn van Oosterhout <[email protected]> writes:\n> The cost on inserting would generally go to either penalty, or\n> picksplit. Certainly if you're inserting lots of values in a short\n> interval, I can imagine picksplit being nasty, since the algorithms for\n> a lot of datatypes are not really reknown for their speed.\n\nTut tut ... in the words of the sage, it is a capital mistake to\ntheorize in advance of the data. You may well be right, but on the\nother hand it could easily be something dumb like an O(N^2) loop over\na list structure.\n\nI'll post some gprof numbers after Stephan sends me the dump. 
We\nshould probably move the thread to someplace like pgsql-perform, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 12:14:35 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creation of tsearch2 index is very slow " }, { "msg_contents": "[ thread moved to pgsql-performance ]\n\nI've obtained a gprof profile on Stephan's sample case (many thanks for\nproviding the data, Stephan). The command is\n\tCREATE INDEX foo ON publications_test USING gist (fti_title);\nwhere fti_title is a tsvector column. There are 236984 rows in the\ntable, most with between 4 and 10 words in fti_title.\nsum(length(fti_title)) yields 1636202 ... not sure if this is a\nrelevant measure, however.\n\nUsing CVS tip with a fairly vanilla configuration (including\n--enable-cassert), here are all the hotspots down to the 1% level:\n\n % cumulative self self total \n time seconds seconds calls s/call s/call name \n 20.19 1.90 1.90 588976 0.00 0.00 gistchoose\n 19.02 3.69 1.79 683471 0.00 0.00 XLogInsert\n 5.95 4.25 0.56 3575135 0.00 0.00 LWLockAcquire\n 4.46 4.67 0.42 3579005 0.00 0.00 LWLockRelease\n 4.14 5.06 0.39 3146848 0.00 0.00 AllocSetAlloc\n 3.72 5.41 0.35 236984 0.00 0.00 gistdoinsert\n 3.40 5.73 0.32 876047 0.00 0.00 hash_search\n 2.76 5.99 0.26 3998576 0.00 0.00 LockBuffer\n 2.28 6.21 0.22 11514275 0.00 0.00 gistdentryinit\n 1.86 6.38 0.18 841757 0.00 0.00 UnpinBuffer\n 1.81 6.55 0.17 12201023 0.00 0.00 FunctionCall1\n 1.81 6.72 0.17 237044 0.00 0.00 AllocSetCheck\n 1.49 6.86 0.14 236984 0.00 0.00 gistmakedeal\n 1.49 7.00 0.14 10206985 0.00 0.00 FunctionCall3\n 1.49 7.14 0.14 1287874 0.00 0.00 MemoryContextAllocZero\n 1.28 7.26 0.12 826179 0.00 0.00 PinBuffer\n 1.17 7.37 0.11 875785 0.00 0.00 hash_any\n 1.17 7.48 0.11 1857292 0.00 0.00 MemoryContextAlloc\n 1.17 7.59 0.11 221466 0.00 0.00 PageIndexTupleDelete\n 1.06 7.69 0.10 9762101 0.00 0.00 gistpenalty\n\nClearly, one thing that would be worth doing is suppressing the WAL\ntraffic when possible, as we already do for btree builds. It seems\nthat gistchoose may have some internal ineffiency too --- I haven't\nlooked at the code yet. 
The other thing that jumps out is the very\nlarge numbers of calls to gistdentryinit, FunctionCall1, FunctionCall3.\nSome interesting parts of the calls/calledby graph are:\n\n-----------------------------------------------\n 0.35 8.07 236984/236984 gistbuildCallback [14]\n[15] 89.5 0.35 8.07 236984 gistdoinsert [15]\n 0.14 3.55 236984/236984 gistmakedeal [16]\n 1.90 0.89 588976/588976 gistchoose [17]\n 0.07 0.83 825960/841757 ReadBuffer [19]\n 0.09 0.10 825960/1287874 MemoryContextAllocZero [30]\n 0.12 0.05 1888904/3998576 LockBuffer [29]\n 0.13 0.00 825960/3575135 LWLockAcquire [21]\n 0.10 0.00 825960/3579005 LWLockRelease [26]\n 0.06 0.00 473968/3146848 AllocSetAlloc [27]\n 0.03 0.00 473968/1857292 MemoryContextAlloc [43]\n 0.02 0.00 825960/1272423 gistcheckpage [68]\n-----------------------------------------------\n 0.14 3.55 236984/236984 gistdoinsert [15]\n[16] 39.2 0.14 3.55 236984 gistmakedeal [16]\n 1.20 0.15 458450/683471 XLogInsert [18]\n 0.01 0.66 224997/224997 gistxlogInsertCompletion [20]\n 0.09 0.35 444817/444817 gistgetadjusted [23]\n 0.08 0.17 456801/456804 formUpdateRdata [32]\n 0.17 0.01 827612/841757 UnpinBuffer [35]\n 0.11 0.00 221466/221466 PageIndexTupleDelete [42]\n 0.02 0.08 456801/460102 gistfillbuffer [45]\n 0.06 0.04 1649/1649 gistSplit [46]\n 0.08 0.00 685099/3579005 LWLockRelease [26]\n 0.03 0.05 446463/446463 gistFindCorrectParent [50]\n 0.04 0.02 685099/3998576 LockBuffer [29]\n 0.04 0.00 1649/1649 gistextractbuffer [58]\n 0.03 0.00 460102/460121 write_buffer [66]\n 0.02 0.00 825960/826092 ReleaseBuffer [69]\n 0.02 0.00 221402/221402 gistadjscans [82]\n 0.00 0.00 1582/1582 gistunion [131]\n 0.00 0.00 1649/1649 formSplitRdata [147]\n 0.00 0.00 1649/1649 gistjoinvector [178]\n 0.00 0.00 3/3 gistnewroot [199]\n 0.00 0.00 458450/461748 gistnospace [418]\n 0.00 0.00 458450/458450 WriteNoReleaseBuffer [419]\n 0.00 0.00 1652/1671 WriteBuffer [433]\n-----------------------------------------------\n 1.90 0.89 588976/588976 gistdoinsert [15]\n[17] 29.7 1.90 0.89 588976 gistchoose [17]\n 0.25 0.17 9762101/10892174 FunctionCall3 <cycle 1> [38]\n 0.18 0.14 9762101/11514275 gistdentryinit [28]\n 0.10 0.00 9762101/9762101 gistpenalty [47]\n 0.04 0.02 588976/1478610 gistDeCompressAtt [39]\n-----------------------------------------------\n 0.00 0.00 1/683471 gistbuild [12]\n 0.00 0.00 1/683471 log_heap_update [273]\n 0.00 0.00 1/683471 RecordTransactionCommit [108]\n 0.00 0.00 1/683471 smgrcreate [262]\n 0.00 0.00 3/683471 gistnewroot [199]\n 0.00 0.00 5/683471 heap_insert [116]\n 0.00 0.00 12/683471 _bt_insertonpg [195]\n 0.59 0.07 224997/683471 gistxlogInsertCompletion [20]\n 1.20 0.15 458450/683471 gistmakedeal [16]\n[18] 21.4 1.79 0.22 683471 XLogInsert [18]\n 0.11 0.00 683471/3575135 LWLockAcquire [21]\n 0.08 0.00 687340/3579005 LWLockRelease [26]\n 0.03 0.00 683471/683471 GetCurrentTransactionIdIfAny [65]\n 0.01 0.00 15604/15604 AdvanceXLInsertBuffer [111]\n 0.00 0.00 3/10094 BufferGetBlockNumber [95]\n 0.00 0.00 3869/3870 XLogWrite [281]\n 0.00 0.00 3870/3871 LWLockConditionalAcquire [428]\n 0.00 0.00 3/3 BufferGetFileNode [611]\n-----------------------------------------------\n 0.00 0.00 3164/11514275 gistunion [131]\n 0.01 0.00 270400/11514275 gistSplit [46]\n 0.03 0.02 1478610/11514275 gistDeCompressAtt [39]\n 0.18 0.14 9762101/11514275 gistchoose [17]\n[28] 4.0 0.22 0.16 11514275 gistdentryinit [28]\n 0.16 0.00 11514275/12201023 FunctionCall1 [36]\n-----------------------------------------------\n 0.00 0.00 67/12201023 index_endscan <cycle 1> [167]\n 0.01 0.00 
686681/12201023 gistcentryinit [62]\n 0.16 0.00 11514275/12201023 gistdentryinit [28]\n[36] 1.8 0.17 0.00 12201023 FunctionCall1 [36]\n 0.00 0.00 67/67 btendscan [231]\n 0.00 0.00 12200956/22855929 data_start [414]\n-----------------------------------------------\n 67 index_beginscan_internal <cycle 1> [169]\n 0.01 0.01 444817/10892174 gistgetadjusted [23]\n 0.25 0.17 9762101/10892174 gistchoose [17]\n[38] 1.5 0.14 0.00 10206985 FunctionCall3 <cycle 1> [38]\n 0.00 0.00 10206918/22855929 data_start [414]\n 0.00 0.00 67/67 btbeginscan [486]\n 67 RelationGetIndexScan <cycle 1> [212]\n\nNow that we have some data, we can start to think about how to improve\nmatters ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 14:14:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow " }, { "msg_contents": "On Fri, Jan 20, 2006 at 02:14:29PM -0500, Tom Lane wrote:\n> [ thread moved to pgsql-performance ]\n> \n> I've obtained a gprof profile on Stephan's sample case (many thanks for\n> providing the data, Stephan). The command is\n\n<snip>\n\nSomething I'm missing is the calls to tsearch functions. I'm not 100%\nfamiliar with gprof, but is it possible those costs have been added\nsomewhere else because it's in a shared library? Perhaps the costs went\ninto FunctionCall1/3?\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 20 Jan 2006 21:10:36 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "Martijn van Oosterhout <[email protected]> writes:\n> Something I'm missing is the calls to tsearch functions. I'm not 100%\n> familiar with gprof, but is it possible those costs have been added\n> somewhere else because it's in a shared library? Perhaps the costs went\n> into FunctionCall1/3?\n\nI think that the tsearch functions must be the stuff charged as\n\"data_start\" (which is not actually any symbol in our source code).\nThat is showing as being called by FunctionCallN which is what you'd\nexpect.\n\nIf the totals given by gprof are correct, then it's down in the noise.\nI don't think I trust that too much ... but I don't see anything in the\ngprof manual about how to include a dynamically loaded library in the\nprofile. (I did compile tsearch2 with -pg, but that's evidently not\nenough.)\n\nI'll see if I can link tsearch2 statically to resolve this question.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 15:21:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow " }, { "msg_contents": "On Fri, Jan 20, 2006 at 03:21:45PM -0500, Tom Lane wrote:\n> If the totals given by gprof are correct, then it's down in the noise.\n> I don't think I trust that too much ... but I don't see anything in the\n> gprof manual about how to include a dynamically loaded library in the\n> profile. 
(I did compile tsearch2 with -pg, but that's evidently not\n> enough.)\n\nThere is some mention on the web of an environment variable you can\nset: LD_PROFILE=<libname>\n\nThese pages seem relevent:\nhttp://sourceware.org/ml/binutils/2002-04/msg00047.html\nhttp://www.scit.wlv.ac.uk/cgi-bin/mansec?1+gprof\n\nIt's wierd how some man pages for gprof describe this feature, but the\none on my local system doesn't mention it.\n\n> I'll see if I can link tsearch2 statically to resolve this question.\n\nThat'll work too...\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 20 Jan 2006 21:51:32 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "Well, I feel like a fool, because I failed to notice that the total\nruntime shown in that profile wasn't anywhere close to the actual wall\nclock time. gprof is indeed simply not counting the time spent in\ndynamically-linked code. With tsearch2 statically linked into the\nbackend, a more believable picture emerges:\n\n % cumulative self self total \n time seconds seconds calls Ks/call Ks/call name \n 98.96 1495.93 1495.93 33035195 0.00 0.00 hemdistsign\n 0.27 1500.01 4.08 10030581 0.00 0.00 makesign\n 0.11 1501.74 1.73 588976 0.00 0.00 gistchoose\n 0.10 1503.32 1.58 683471 0.00 0.00 XLogInsert\n 0.05 1504.15 0.83 246579 0.00 0.00 sizebitvec\n 0.05 1504.93 0.78 446399 0.00 0.00 gtsvector_union\n 0.03 1505.45 0.52 3576475 0.00 0.00 LWLockRelease\n 0.03 1505.92 0.47 1649 0.00 0.00 gtsvector_picksplit\n 0.03 1506.38 0.47 3572572 0.00 0.00 LWLockAcquire\n 0.02 1506.74 0.36 444817 0.00 0.00 gtsvector_same\n 0.02 1507.09 0.35 4077089 0.00 0.00 AllocSetAlloc\n 0.02 1507.37 0.28 236984 0.00 0.00 gistdoinsert\n 0.02 1507.63 0.26 874195 0.00 0.00 hash_search\n 0.02 1507.89 0.26 9762101 0.00 0.00 gtsvector_penalty\n 0.01 1508.08 0.19 236984 0.00 0.00 gistmakedeal\n 0.01 1508.27 0.19 841754 0.00 0.00 UnpinBuffer\n 0.01 1508.45 0.18 22985469 0.00 0.00 hemdistcache\n 0.01 1508.63 0.18 3998572 0.00 0.00 LockBuffer\n 0.01 1508.81 0.18 686681 0.00 0.00 gtsvector_compress\n 0.01 1508.98 0.17 11514275 0.00 0.00 gistdentryinit\n\nSo we gotta fix hemdistsign ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 16:19:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow " }, { "msg_contents": "On Fri, Jan 20, 2006 at 04:19:15PM -0500, Tom Lane wrote:\n> % cumulative self self total \n> time seconds seconds calls Ks/call Ks/call name \n> 98.96 1495.93 1495.93 33035195 0.00 0.00 hemdistsign\n\n<snip>\n\n> So we gotta fix hemdistsign ...\n\nlol! Yeah, I guess so. Pretty nasty loop. LOOPBIT will iterate 8*63=504\ntimes and it's going to do silly bit handling on each and every\niteration.\n\nGiven that all it's doing is counting bits, a simple fix would be to\nloop over bytes, use XOR and count ones. For extreme speedup create a\nlookup table with 256 entries to give you the answer straight away...\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. 
A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 20 Jan 2006 22:37:54 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "On Fri, Jan 20, 2006 at 10:37:54PM +0100, Martijn van Oosterhout wrote:\n> Given that all it's doing is counting bits, a simple fix would be to\n> loop over bytes, use XOR and count ones. For extreme speedup create a\n> lookup table with 256 entries to give you the answer straight away...\n\nFor extra obfscation:\n\n unsigned v = (unsigned)c;\n int num_bits = (v * 0x1001001001001ULL & 0x84210842108421ULL) % 0x1f;\n\n(More more-or-less intelligent options at\nhttp://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetNaive :-) )\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 20 Jan 2006 22:44:02 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "Martijn van Oosterhout <[email protected]> writes:\n> Given that all it's doing is counting bits, a simple fix would be to\n> loop over bytes, use XOR and count ones. For extreme speedup create a\n> lookup table with 256 entries to give you the answer straight away...\n\nYeah, I just finished doing that and got about a 20x overall speedup\n(30-some seconds to build the index instead of 10 minutes). However,\nhemdistsign is *still* 70% of the runtime after doing that. The problem\nseems to be that gtsvector_picksplit is calling it an inordinate number\nof times:\n\n 0.53 30.02 1649/1649 FunctionCall2 <cycle 2> [19]\n[20] 52.4 0.53 30.02 1649 gtsvector_picksplit [20]\n 29.74 0.00 23519673/33035195 hemdistsign [18]\n 0.14 0.00 22985469/22985469 hemdistcache [50]\n 0.12 0.00 268480/10030581 makesign [25]\n 0.02 0.00 270400/270400 fillcache [85]\n 0.00 0.00 9894/4077032 AllocSetAlloc [34]\n 0.00 0.00 9894/2787477 MemoryContextAlloc [69]\n\n(hemdistcache calls hemdistsign --- I think gprof is doing something\nfunny with tail-calls here, and showing hemdistsign as directly called\nfrom gtsvector_picksplit when control really arrives through hemdistcache.)\n\nThe bulk of the problem is clearly in this loop, which performs O(N^2)\ncomparisons to find the two entries that are furthest apart in hemdist\nterms:\n\n for (k = FirstOffsetNumber; k < maxoff; k = OffsetNumberNext(k))\n {\n for (j = OffsetNumberNext(k); j <= maxoff; j = OffsetNumberNext(j))\n {\n if (k == FirstOffsetNumber)\n fillcache(&cache[j], GETENTRY(entryvec, j));\n\n size_waste = hemdistcache(&(cache[j]), &(cache[k]));\n if (size_waste > waste)\n {\n waste = size_waste;\n seed_1 = k;\n seed_2 = j;\n }\n }\n }\n\nI wonder if there is a way to improve on that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 16:50:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow " }, { "msg_contents": "On Fri, Jan 20, 2006 at 04:50:17PM -0500, Tom Lane wrote:\n> I wonder if there is a way to improve on that.\n\nOoh, the farthest pair problem (in an N-dimensional vector space, though).\nI'm pretty sure problems like this has been studied quite extensively in the\nliterature, although perhaps not with the same norm. 
It's known under both\n\"farthest pair\" and \"diameter\", and probably others. I'm fairly sure it\nshould be solvable in at least O(n log n).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 20 Jan 2006 23:16:55 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "At 05:16 PM 1/20/2006, Steinar H. Gunderson wrote:\n>On Fri, Jan 20, 2006 at 04:50:17PM -0500, Tom Lane wrote:\n> > I wonder if there is a way to improve on that.\n>\n>Ooh, the farthest pair problem (in an N-dimensional vector space, though).\n>I'm pretty sure problems like this has been studied quite extensively in the\n>literature, although perhaps not with the same norm. It's known under both\n>\"farthest pair\" and \"diameter\", and probably others. I'm fairly sure it\n>should be solvable in at least O(n log n).\n\nIf the N-dimensional space is Euclidean (any <x, x+1> is the same \ndistance apart in dimension x), then finding the farthest pair can be \ndone in at least O(n).\n\nIf you do not want the actual distance and can create the proper data \nstructures, particularly if you can update them incrementally as you \ngenerate pairs, it is often possible to solve this problem in O(lg n) or O(1).\n\nI'll do some grinding.\nRon \n\n\n", "msg_date": "Fri, 20 Jan 2006 17:29:46 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "On Fri, Jan 20, 2006 at 04:50:17PM -0500, Tom Lane wrote:\n> (hemdistcache calls hemdistsign --- I think gprof is doing something\n> funny with tail-calls here, and showing hemdistsign as directly called\n> from gtsvector_picksplit when control really arrives through hemdistcache.)\n\nIt may be the compiler. All these functions are declared static, which\ngives the compiler quite a bit of leeway to rearrange code.\n\n> The bulk of the problem is clearly in this loop, which performs O(N^2)\n> comparisons to find the two entries that are furthest apart in hemdist\n> terms:\n\nAh. A while ago someone came onto the list asking about bit strings\nindexing[1]. If I'd known tsearch worked like this I would have pointed\nhim to it. Anyway, before he went off to implement it he mentioned\n\"Jarvis-Patrick clustering\", whatever that means.\n\nProbably more relevent was this thread[2] on -hackers a while back with\npseudo-code[3]. How well it works, I don't know, it worked for him\nevidently, he went away happy...\n\n[1] http://archives.postgresql.org/pgsql-general/2005-11/msg00473.php\n[2] http://archives.postgresql.org/pgsql-hackers/2005-11/msg01067.php\n[3] http://archives.postgresql.org/pgsql-hackers/2005-11/msg01069.php\n\nHope this helps,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. 
A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 20 Jan 2006 23:33:27 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "At 04:37 PM 1/20/2006, Martijn van Oosterhout wrote:\n>On Fri, Jan 20, 2006 at 04:19:15PM -0500, Tom Lane wrote:\n> > % cumulative self self total\n> > time seconds seconds calls Ks/call Ks/call name\n> > 98.96 1495.93 1495.93 33035195 0.00 0.00 hemdistsign\n>\n><snip>\n>\n> > So we gotta fix hemdistsign ...\n>\n>lol! Yeah, I guess so. Pretty nasty loop. LOOPBIT will iterate 8*63=504\n>times and it's going to do silly bit handling on each and every\n>iteration.\n>\n>Given that all it's doing is counting bits, a simple fix would be to\n>loop over bytes, use XOR and count ones. For extreme speedup create a\n>lookup table with 256 entries to give you the answer straight away...\nFor an even more extreme speedup, don't most modern CPUs have an asm \ninstruction that counts the bits (un)set (AKA \"population counting\") \nin various size entities (4b, 8b, 16b, 32b, 64b, and 128b for 64b \nCPUs with SWAR instructions)?\n\nRon \n\n\n", "msg_date": "Fri, 20 Jan 2006 17:46:34 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "On Fri, Jan 20, 2006 at 05:46:34PM -0500, Ron wrote:\n> For an even more extreme speedup, don't most modern CPUs have an asm \n> instruction that counts the bits (un)set (AKA \"population counting\") \n> in various size entities (4b, 8b, 16b, 32b, 64b, and 128b for 64b \n> CPUs with SWAR instructions)?\n\nNone in the x86 series that I'm aware of, at least.\n\nYou have instructions for finding the highest set bit, though.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 20 Jan 2006 23:49:15 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "Ron <[email protected]> writes:\n> For an even more extreme speedup, don't most modern CPUs have an asm \n> instruction that counts the bits (un)set (AKA \"population counting\") \n> in various size entities (4b, 8b, 16b, 32b, 64b, and 128b for 64b \n> CPUs with SWAR instructions)?\n\nYeah, but fetching from a small constant table is pretty quick too;\nI doubt it's worth getting involved in machine-specific assembly code\nfor this. I'm much more interested in the idea of improving the\nfurthest-distance algorithm in gtsvector_picksplit --- if we can do\nthat, it'll probably drop the distance calculation down to the point\nwhere it's not really worth the trouble to assembly-code it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 17:50:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow " }, { "msg_contents": "On Fri, Jan 20, 2006 at 05:46:34PM -0500, Ron wrote:\n> At 04:37 PM 1/20/2006, Martijn van Oosterhout wrote:\n> >Given that all it's doing is counting bits, a simple fix would be to\n> >loop over bytes, use XOR and count ones. 
For extreme speedup create a\n> >lookup table with 256 entries to give you the answer straight away...\n> For an even more extreme speedup, don't most modern CPUs have an asm \n> instruction that counts the bits (un)set (AKA \"population counting\") \n> in various size entities (4b, 8b, 16b, 32b, 64b, and 128b for 64b \n> CPUs with SWAR instructions)?\n\nQuite possibly, though I wouldn't have the foggiest idea how to get the\nC compiler to generate it.\n\nGiven that even a lookup table will get you pretty close to that with\nplain C coding, I think that's quite enough for a function that really\nis just a small part of a much larger system...\n\nBetter solution (as Tom points out): work out how to avoid calling it\nso much in the first place... At the moment each call to\ngtsvector_picksplit seems to call the distance function around 14262\ntimes. Getting that down by an order of magnitude will help much much\nmore.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 20 Jan 2006 23:57:20 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "On Fri, Jan 20, 2006 at 05:50:36PM -0500, Tom Lane wrote:\n> Yeah, but fetching from a small constant table is pretty quick too;\n> I doubt it's worth getting involved in machine-specific assembly code\n> for this. I'm much more interested in the idea of improving the\n> furthest-distance algorithm in gtsvector_picksplit --- if we can do\n> that, it'll probably drop the distance calculation down to the point\n> where it's not really worth the trouble to assembly-code it.\n\nFor the record: Could we do with a less-than-optimal split here? In that\ncase, an extremely simple heuristic is:\n\n best = distance(0, 1)\n best_i = 0\n best_j = 1\n\n for i = 2..last:\n if distance(best_i, i) > best:\n best = distance(best_i, i)\n\t best_j = i\n else if distance(best_j, i) > best:\n best = distance(best_j, i)\n\t best_i = i\n\nI've tested it on various data, and although it's definitely not _correct_,\nit generally gets within 10%.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 20 Jan 2006 23:57:51 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "On Fri, Jan 20, 2006 at 05:29:46PM -0500, Ron wrote:\n> If the N-dimensional space is Euclidean (any <x, x+1> is the same \n> distance apart in dimension x), then finding the farthest pair can be \n> done in at least O(n).\n\nThat sounds a bit optimistic.\n\n http://portal.acm.org/ft_gateway.cfm?id=167217&type=pdf&coll=GUIDE&dl=GUIDE&CFID=66230761&CFTOKEN=72453878\n\nis from 1993, but still it struggles with getting it down to O(n log n)\ndeterministically, for Euclidian 3-space, and our problem is not Euclidian\n(although it still satisfies the triangle inequality, which sounds important\nto me) and in a significantly higher dimension...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Fri, 20 Jan 2006 23:59:32 +0100", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "Stephan Vollmer <[email protected]> writes:\n> Tom Lane wrote:\n>> However, I'm not sure that anyone's tried to do any performance\n>> optimization on the GIST insert code ... there might be some low-hanging\n>> fruit there.\n\n> Unfortunately, I'm not able to investigate it further myself as I'm\n> quite a Postgres newbie. But I could provide someone else with the\n> example table. Maybe someone else could find out why it is so slow.\n\nThe problem seems to be mostly tsearch2's fault rather than the general\nGIST code. I've applied a partial fix to 8.1 and HEAD branches, which\nyou can find here if you're in a hurry for it:\nhttp://archives.postgresql.org/pgsql-committers/2006-01/msg00283.php\n(the gistidx.c change is all you need for tsearch2)\n\nThere is some followup discussion in the pgsql-performance list. It\nseems possible that we can get another factor of 10 or better with a\nsmarter picksplit algorithm --- but that patch will probably be too\nlarge to be considered for back-patching into the stable branches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 18:22:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Creation of tsearch2 index is very slow " }, { "msg_contents": "On Fri, Jan 20, 2006 at 04:50:17PM -0500, Tom Lane wrote:\n> I wonder if there is a way to improve on that.\n\nhttp://www.cs.uwaterloo.ca/~tmchan/slide_isaac.ps:\n\n The diameter problem has been studied extensively in the traditional model.\n Although O(n log n) algorithms have been given for d = 2 and d = 3, only\n slightly subquadratic algorithms are known for higher dimensions.\n\nIt doesn't mention a date, but has references to at least 2004-papers, so I'm\nfairly sure nothing big has happened since that.\n\nIt sounds like we either want to go for an approximation, or just accept that\nit's a lot of work to get it better than O(n^2). Or, of course, find some\nspecial property of our problem that makes it easier than the general problem\n:-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 21 Jan 2006 00:28:43 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> For the record: Could we do with a less-than-optimal split here?\n\nYeah, I was wondering the same. The code is basically choosing two\n\"seed\" values to drive the index-page split. Intuitively it seems that\n\"pretty far apart\" would be nearly as good as \"absolute furthest apart\"\nfor this purpose.\n\nThe cost of a less-than-optimal split would be paid on all subsequent\nindex accesses, though, so it's not clear how far we can afford to go in\nthis direction.\n\nIt's also worth considering that the entire approach is a heuristic,\nreally --- getting the furthest-apart pair of seeds doesn't guarantee\nan optimal split as far as I can see. 
Maybe there's some totally\ndifferent way to do it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 18:52:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow " }, { "msg_contents": "On Fri, Jan 20, 2006 at 06:52:37PM -0500, Tom Lane wrote:\n> It's also worth considering that the entire approach is a heuristic,\n> really --- getting the furthest-apart pair of seeds doesn't guarantee\n> an optimal split as far as I can see. Maybe there's some totally\n> different way to do it.\n\nFor those of us who don't know what tsearch2/gist is trying to accomplish\nhere, could you provide some pointers? :-) During my mini-literature-search\non Google, I've found various algorithms for locating clusters in\nhigh-dimensional metric spaces[1]; some of it might be useful, but I might\njust be misunderstanding what the real problem is.\n\n[1] http://ieeexplore.ieee.org/iel5/69/30435/01401892.pdf?arnumber=1401892 ,\n for instance\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 21 Jan 2006 01:05:19 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Fri, Jan 20, 2006 at 06:52:37PM -0500, Tom Lane wrote:\n>> It's also worth considering that the entire approach is a heuristic,\n>> really --- getting the furthest-apart pair of seeds doesn't guarantee\n>> an optimal split as far as I can see. Maybe there's some totally\n>> different way to do it.\n\n> For those of us who don't know what tsearch2/gist is trying to accomplish\n> here, could you provide some pointers? :-)\n\nWell, we're trying to split an index page that's gotten full into two\nindex pages, preferably with approximately equal numbers of items in\neach new page (this isn't a hard requirement though). I think the true\nfigure of merit for a split is how often will subsequent searches have\nto descend into *both* of the resulting pages as opposed to just one\n--- the less often that is true, the better. I'm not very clear on\nwhat tsearch2 is doing with these bitmaps, but it looks like an upper\npage's downlink has the union (bitwise OR) of the one-bits in the values\non the lower page, and you have to visit the lower page if this union\nhas a nonempty intersection with the set you are looking for. If that's\ncorrect, what you really want is to divide the values so that the unions\nof the two sets have minimal overlap ... which seems to me to have\nlittle to do with what the code does at present.\n\nTeodor, Oleg, can you clarify what's needed here?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Jan 2006 19:23:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow " }, { "msg_contents": "On Fri, Jan 20, 2006 at 07:23:10PM -0500, Tom Lane wrote:\n> I'm not very clear on what tsearch2 is doing with these bitmaps, but it\n> looks like an upper page's downlink has the union (bitwise OR) of the\n> one-bits in the values on the lower page, and you have to visit the lower\n> page if this union has a nonempty intersection with the set you are looking\n> for. If that's correct, what you really want is to divide the values so\n> that the unions of the two sets have minimal overlap ... 
which seems to me\n> to have little to do with what the code does at present.\n\nSort of like the vertex-cover problem? That's probably a lot harder than\nfinding the two farthest points...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Sat, 21 Jan 2006 01:36:46 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "Tom Lane wrote:\n> Well, we're trying to split an index page that's gotten full into two\n> index pages, preferably with approximately equal numbers of items in\n> each new page (this isn't a hard requirement though). ... If that's\n> correct, what you really want is to divide the values so that the unions\n> of the two sets have minimal overlap ... which seems to me to have\n> little to do with what the code does at present.\n\nThis problem has been studied extensively by chemists, and they haven't found any easy solutions.\n\nThe Jarvis Patrick clustering algorithm might give you hints about a fast approach. In theory it's K*O(N^2), but J-P is preferred for large datasets (millions of molecules) because the coefficient K can be made quite low. It starts with a \"similarity metric\" for two bit strings, the Tanimoto or Tversky coefficients:\n\n http://www.daylight.com/dayhtml/doc/theory/theory.finger.html#RTFToC83\n\nJ-P Clustering is described here:\n\n http://www.daylight.com/dayhtml/doc/cluster/cluster.a.html#cl33\n\nJ-P Clustering is probably not the best for this problem (see the illustrations in the link above to see why), but the general idea of computing N-nearest-neighbors, followed by a partitioning step, could be what's needed.\n\nCraig\n", "msg_date": "Fri, 20 Jan 2006 17:30:17 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "At 07:23 PM 1/20/2006, Tom Lane wrote:\n>\"Steinar H. Gunderson\" <[email protected]> writes:\n> > On Fri, Jan 20, 2006 at 06:52:37PM -0500, Tom Lane wrote:\n> >> It's also worth considering that the entire approach is a heuristic,\n> >> really --- getting the furthest-apart pair of seeds doesn't guarantee\n> >> an optimal split as far as I can see. Maybe there's some totally\n> >> different way to do it.\n>\n> > For those of us who don't know what tsearch2/gist is trying to accomplish\n> > here, could you provide some pointers? :-)\n>\n>Well, we're trying to split an index page that's gotten full into \n>two index pages, preferably with approximately equal numbers of items in\n>each new page (this isn't a hard requirement though).\n\nMaybe we are over thinking this. What happens if we do the obvious \nand just make a new page and move the \"last\" n/2 items on the full \npage to the new page?\n\nVarious forms of \"move the last n/2 items\" can be tested here:\n0= just split the table in half. Sometimes KISS works. O(1).\n1= the one's with the highest (or lowest) \"x\" value.\n2= the one's with the highest sum of coordinates (x+y+...= values in \nthe top/bottom n/2 of entries).\n3= split the table so that each table has entries whose size_waste \nvalues add up to approximately the same value.\n4= I'm sure there are others.\n1-5 can be done in O(n) time w/o auxiliary data. 
They can be done in \nO(1) if we've kept track of the appropriate metric as we've built the \ncurrent page.\n\n\n>I think the true figure of merit for a split is how often will \n>subsequent searches have to descend into *both* of the resulting \n>pages as opposed to just one\n>--- the less often that is true, the better. I'm not very clear on \n>what tsearch2 is doing with these bitmaps, but it looks like an \n>upper page's downlink has the union (bitwise OR) of the one-bits in \n>the values on the lower page, and you have to visit the lower page \n>if this union has a nonempty intersection with the set you are \n>looking for. If that's correct, what you really want is to divide \n>the values so that the unions of the two sets have minimal overlap \n>... which seems to me to have little to do with what the code does at present.\nI'm not sure what \"upper page\" and \"lower page\" mean here?\n\n\n>Teodor, Oleg, can you clarify what's needed here?\nDitto. Guys what is the real motivation and purpose for this code?\n\nRon \n\n\n", "msg_date": "Sat, 21 Jan 2006 07:22:32 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" }, { "msg_contents": "On Fri, Jan 20, 2006 at 05:50:36PM -0500, Tom Lane wrote:\n> Yeah, but fetching from a small constant table is pretty quick too;\n> I doubt it's worth getting involved in machine-specific assembly code\n> for this. I'm much more interested in the idea of improving the\n> furthest-distance algorithm in gtsvector_picksplit --- if we can do\n> that, it'll probably drop the distance calculation down to the point\n> where it's not really worth the trouble to assembly-code it.\n\nI've played with another algorithm. Very simple but it's only O(N). It\ndoesn't get the longest distance but it does get close. Basically you\ntake the first two elements as your starting length. Then you loop over\neach remaining string, each time finding the longest pair out of each\nset of three.\n\nI've only tried it on random strings. The maximum distance for 128\nrandom strings tends to be around 291-295. This algorithm tends to find\nlengths around 280. Pseudo code below (in Perl).\n\nHowever, IMHO, this algorithm is optimising the wrong thing. It\nshouldn't be trying to split into sets that are far apart, it should be\ntrying to split into sets that minimize the number of set bits (ie\ndistance from zero), since that's what's will speed up searching.\nThat's harder though (this algorithm does approximate it sort of)\nand I havn't come up with an algorithm yet\n\nsub MaxDistFast\n{\n my $strings = shift;\n \n my $m1 = 0;\n my $m2 = 1;\n my $dist = -1;\n\n for my $i (2..$#$strings)\n {\n my $d1 = HammDist( $strings->[$i], $strings->[$m1] );\n my $d2 = HammDist( $strings->[$i], $strings->[$m2] );\n\n my $m = ($d1 > $d2) ? $m1 : $m2;\n my $d = ($d1 > $d2) ? $d1 : $d2;\n \n if( $d > $dist )\n {\n $dist = $d;\n $m1 = $i; \n $m2 = $m;\n }\n }\n return($m1,$m2,$dist);\n}\n\nFull program available at:\nhttp://svana.org/kleptog/temp/picksplit.pl\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. 
A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Sat, 21 Jan 2006 14:08:30 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "On Sat, 21 Jan 2006, Martijn van Oosterhout wrote:\n\n> However, IMHO, this algorithm is optimising the wrong thing. It\n> shouldn't be trying to split into sets that are far apart, it should be\n> trying to split into sets that minimize the number of set bits (ie\n> distance from zero), since that's what's will speed up searching.\n\nMartijn, you're right! We want not only to split page to very\ndifferent parts, but not to increase the number of sets bits in\nresulted signatures, which are union (OR'ed) of all signatures \nin part. We need not only fast index creation (thanks, Tom !),\nbut a better index. Some information is available here\nhttp://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\nThere are should be more detailed document, but I don't remember where:)\n\n> That's harder though (this algorithm does approximate it sort of)\n> and I havn't come up with an algorithm yet\n\nDon't ask how hard we thought :)\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sat, 21 Jan 2006 16:29:13 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "On Sat, 21 Jan 2006, Ron wrote:\n\n> At 07:23 PM 1/20/2006, Tom Lane wrote:\n>> \"Steinar H. Gunderson\" <[email protected]> writes:\n>> > On Fri, Jan 20, 2006 at 06:52:37PM -0500, Tom Lane wrote:\n>> >> It's also worth considering that the entire approach is a heuristic,\n>> >> really --- getting the furthest-apart pair of seeds doesn't guarantee\n>> >> an optimal split as far as I can see. Maybe there's some totally\n>> >> different way to do it.\n>> \n>> > For those of us who don't know what tsearch2/gist is trying to accomplish\n>> > here, could you provide some pointers? :-)\n>> \n>> Well, we're trying to split an index page that's gotten full into two index \n>> pages, preferably with approximately equal numbers of items in\n>> each new page (this isn't a hard requirement though).\n>\n> Maybe we are over thinking this. What happens if we do the obvious and just \n> make a new page and move the \"last\" n/2 items on the full page to the new \n> page?\n>\n> Various forms of \"move the last n/2 items\" can be tested here:\n> 0= just split the table in half. Sometimes KISS works. O(1).\n> 1= the one's with the highest (or lowest) \"x\" value.\n> 2= the one's with the highest sum of coordinates (x+y+...= values in the \n> top/bottom n/2 of entries).\n> 3= split the table so that each table has entries whose size_waste values add \n> up to approximately the same value.\n> 4= I'm sure there are others.\n> 1-5 can be done in O(n) time w/o auxiliary data. 
They can be done in O(1) if \n> we've kept track of the appropriate metric as we've built the current page.\n>\n>\n>> I think the true figure of merit for a split is how often will subsequent \n>> searches have to descend into *both* of the resulting pages as opposed to \n>> just one\n>> --- the less often that is true, the better. I'm not very clear on what \n>> tsearch2 is doing with these bitmaps, but it looks like an upper page's \n>> downlink has the union (bitwise OR) of the one-bits in the values on the \n>> lower page, and you have to visit the lower page if this union has a \n>> nonempty intersection with the set you are looking for. If that's correct, \n>> what you really want is to divide the values so that the unions of the two \n>> sets have minimal overlap ... which seems to me to have little to do with \n>> what the code does at present.\n> I'm not sure what \"upper page\" and \"lower page\" mean here?\n>\n>\n>> Teodor, Oleg, can you clarify what's needed here?\n> Ditto. Guys what is the real motivation and purpose for this code?\n\nwe want not just split the page by two very distinct parts, but to keep\nresulted signatures which is ORed signature of all signatures in the page\nas much 'sparse' as can. \nsome information available here\nhttp://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n\nUnfortunately, we're rather busy right now and couldn't be very useful.\n\n>\n> Ron \n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sat, 21 Jan 2006 16:34:38 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" }, { "msg_contents": "On Sat, Jan 21, 2006 at 04:29:13PM +0300, Oleg Bartunov wrote:\n> Martijn, you're right! We want not only to split page to very\n> different parts, but not to increase the number of sets bits in\n> resulted signatures, which are union (OR'ed) of all signatures \n> in part. We need not only fast index creation (thanks, Tom !),\n> but a better index. Some information is available here\n> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n> There are should be more detailed document, but I don't remember where:)\n\nI see how it works, what I don't quite get is whether the \"inverted\nindex\" you refer to is what we're working with here, or just what's in\ntsearchd?\n\n> >That's harder though (this algorithm does approximate it sort of)\n> >and I havn't come up with an algorithm yet\n> \n> Don't ask how hard we thought :)\n\nWell, looking at how other people are struggling with it, it's\ndefinitly a Hard Problem. One thing though, I don't think the picksplit\nalgorithm as is really requires you to strictly have the longest\ndistance, just something reasonably long. So I think the alternate\nalgorithm I posted should produce equivalent results. No idea how to\ntest it though...\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. 
A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Sat, 21 Jan 2006 16:04:24 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "On Sat, 21 Jan 2006, Martijn van Oosterhout wrote:\n\n> On Sat, Jan 21, 2006 at 04:29:13PM +0300, Oleg Bartunov wrote:\n>> Martijn, you're right! We want not only to split page to very\n>> different parts, but not to increase the number of sets bits in\n>> resulted signatures, which are union (OR'ed) of all signatures\n>> in part. We need not only fast index creation (thanks, Tom !),\n>> but a better index. Some information is available here\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>> There are should be more detailed document, but I don't remember where:)\n>\n> I see how it works, what I don't quite get is whether the \"inverted\n> index\" you refer to is what we're working with here, or just what's in\n> tsearchd?\n\njust tsearchd. We plan to implement inverted index into PostgreSQL core\nand then adapt tsearch2 to use it as option for read-only archives.\n\n>\n>>> That's harder though (this algorithm does approximate it sort of)\n>>> and I havn't come up with an algorithm yet\n>>\n>> Don't ask how hard we thought :)\n>\n> Well, looking at how other people are struggling with it, it's\n> definitly a Hard Problem. One thing though, I don't think the picksplit\n> algorithm as is really requires you to strictly have the longest\n> distance, just something reasonably long. So I think the alternate\n> algorithm I posted should produce equivalent results. No idea how to\n> test it though...\n\nyou may try our development module 'gevel' to see how dense is a signature.\n\nwww=# \\d v_pages\n Table \"public.v_pages\"\n Column | Type | Modifiers\n-----------+-------------------+-----------\n tid | integer | not null\n path | character varying | not null\n body | character varying |\n title | character varying |\n di | integer |\n dlm | integer |\n de | integer |\n md5 | character(22) |\n fts_index | tsvector |\nIndexes:\n \"v_pages_pkey\" PRIMARY KEY, btree (tid)\n \"v_pages_path_key\" UNIQUE, btree (path)\n \"v_gist_key\" gist (fts_index)\n\n# select * from gist_print('v_gist_key') as t(level int, valid bool, a gtsvector) where level =1;\n level | valid | a\n-------+-------+--------------------------------\n 1 | t | 1698 true bits, 318 false bits\n 1 | t | 1699 true bits, 317 false bits\n 1 | t | 1701 true bits, 315 false bits\n 1 | t | 1500 true bits, 516 false bits\n 1 | t | 1517 true bits, 499 false bits\n(5 rows)\n\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sat, 21 Jan 2006 18:22:52 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "gevel is available from \nhttp://www.sai.msu.su/~megera/postgres/gist/\n\n \tOleg\nOn Sat, 21 Jan 2006, Martijn van Oosterhout wrote:\n\n> On Sat, Jan 21, 2006 at 04:29:13PM +0300, Oleg Bartunov wrote:\n>> Martijn, you're right! 
We want not only to split page to very\n>> different parts, but not to increase the number of sets bits in\n>> resulted signatures, which are union (OR'ed) of all signatures\n>> in part. We need not only fast index creation (thanks, Tom !),\n>> but a better index. Some information is available here\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>> There are should be more detailed document, but I don't remember where:)\n>\n> I see how it works, what I don't quite get is whether the \"inverted\n> index\" you refer to is what we're working with here, or just what's in\n> tsearchd?\n>\n>>> That's harder though (this algorithm does approximate it sort of)\n>>> and I havn't come up with an algorithm yet\n>>\n>> Don't ask how hard we thought :)\n>\n> Well, looking at how other people are struggling with it, it's\n> definitly a Hard Problem. One thing though, I don't think the picksplit\n> algorithm as is really requires you to strictly have the longest\n> distance, just something reasonably long. So I think the alternate\n> algorithm I posted should produce equivalent results. No idea how to\n> test it though...\n>\n> Have a nice day,\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sat, 21 Jan 2006 18:24:30 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "Perhaps a different approach to this problem is called for:\n\n_Managing Gigabytes: Compressing and Indexing Documents and Images_ 2ed\nWitten, Moffat, Bell\nISBN 1-55860-570-3\n\nThis is a VERY good book on the subject.\n\nI'd also suggest looking at the publicly available work on indexing \nand searching for search engines like Inktomi (sp?) and Google.\nRon\n\n\nAt 08:34 AM 1/21/2006, Oleg Bartunov wrote:\n>On Sat, 21 Jan 2006, Ron wrote:\n>\n>>At 07:23 PM 1/20/2006, Tom Lane wrote:\n>>>\"Steinar H. Gunderson\" <[email protected]> writes:\n>>> > On Fri, Jan 20, 2006 at 06:52:37PM -0500, Tom Lane wrote:\n>>> >> It's also worth considering that the entire approach is a heuristic,\n>>> >> really --- getting the furthest-apart pair of seeds doesn't guarantee\n>>> >> an optimal split as far as I can see. Maybe there's some totally\n>>> >> different way to do it.\n>>> > For those of us who don't know what tsearch2/gist is trying to accomplish\n>>> > here, could you provide some pointers? :-)\n>>>Well, we're trying to split an index page that's gotten full into \n>>>two index pages, preferably with approximately equal numbers of items in\n>>>each new page (this isn't a hard requirement though).\n>>\n>>Maybe we are over thinking this. What happens if we do the obvious \n>>and just make a new page and move the \"last\" n/2 items on the full \n>>page to the new page?\n>>\n>>Various forms of \"move the last n/2 items\" can be tested here:\n>>0= just split the table in half. Sometimes KISS works. 
O(1).\n>>1= the one's with the highest (or lowest) \"x\" value.\n>>2= the one's with the highest sum of coordinates (x+y+...= values \n>>in the top/bottom n/2 of entries).\n>>3= split the table so that each table has entries whose size_waste \n>>values add up to approximately the same value.\n>>4= I'm sure there are others.\n>>1-5 can be done in O(n) time w/o auxiliary data. They can be done \n>>in O(1) if we've kept track of the appropriate metric as we've \n>>built the current page.\n>>\n>>\n>>>I think the true figure of merit for a split is how often will \n>>>subsequent searches have to descend into *both* of the resulting \n>>>pages as opposed to just one\n>>>--- the less often that is true, the better. I'm not very clear \n>>>on what tsearch2 is doing with these bitmaps, but it looks like an \n>>>upper page's downlink has the union (bitwise OR) of the one-bits \n>>>in the values on the lower page, and you have to visit the lower \n>>>page if this union has a nonempty intersection with the set you \n>>>are looking for. If that's correct, what you really want is to \n>>>divide the values so that the unions of the two sets have minimal \n>>>overlap ... which seems to me to have little to do with what the \n>>>code does at present.\n>>I'm not sure what \"upper page\" and \"lower page\" mean here?\n>>\n>>\n>>>Teodor, Oleg, can you clarify what's needed here?\n>>Ditto. Guys what is the real motivation and purpose for this code?\n>\n>we want not just split the page by two very distinct parts, but to keep\n>resulted signatures which is ORed signature of all signatures in the page\n>as much 'sparse' as can. some information available here\n>http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>\n>Unfortunately, we're rather busy right now and couldn't be very useful.\n\n\n", "msg_date": "Sat, 21 Jan 2006 11:11:53 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" }, { "msg_contents": "On Sat, 21 Jan 2006, Ron wrote:\n\n> Perhaps a different approach to this problem is called for:\n>\n> _Managing Gigabytes: Compressing and Indexing Documents and Images_ 2ed\n> Witten, Moffat, Bell\n> ISBN 1-55860-570-3\n>\n> This is a VERY good book on the subject.\n>\n> I'd also suggest looking at the publicly available work on indexing and \n> searching for search engines like Inktomi (sp?) and Google.\n> Ron\n\nRon,\n\nyou completely miss the problem ! We do know MG and other SE. Actually,\nwe've implemented several search engines based on inverted index technology \n(see, for example, pgsql.ru/db/pgsearch). tsearch2 was designed for\nonline indexing, while keeping inverted index online is rather difficult\nproblem. We do have plan to implement inverted index as an option for\nlarge read-only archives, but now we discuss how to organize online\nindex and if possible to optimize current storage for signatures \nwithout breaking search performance.\n\n>\n>\n> At 08:34 AM 1/21/2006, Oleg Bartunov wrote:\n>> On Sat, 21 Jan 2006, Ron wrote:\n>> \n>>> At 07:23 PM 1/20/2006, Tom Lane wrote:\n>>>> \"Steinar H. Gunderson\" <[email protected]> writes:\n>>>> > On Fri, Jan 20, 2006 at 06:52:37PM -0500, Tom Lane wrote:\n>>>> >> It's also worth considering that the entire approach is a heuristic,\n>>>> >> really --- getting the furthest-apart pair of seeds doesn't guarantee\n>>>> >> an optimal split as far as I can see. 
Maybe there's some totally\n>>>> >> different way to do it.\n>>>> > For those of us who don't know what tsearch2/gist is trying to \n>>>> accomplish\n>>>> > here, could you provide some pointers? :-)\n>>>> Well, we're trying to split an index page that's gotten full into two \n>>>> index pages, preferably with approximately equal numbers of items in\n>>>> each new page (this isn't a hard requirement though).\n>>> \n>>> Maybe we are over thinking this. What happens if we do the obvious and \n>>> just make a new page and move the \"last\" n/2 items on the full page to the \n>>> new page?\n>>> \n>>> Various forms of \"move the last n/2 items\" can be tested here:\n>>> 0= just split the table in half. Sometimes KISS works. O(1).\n>>> 1= the one's with the highest (or lowest) \"x\" value.\n>>> 2= the one's with the highest sum of coordinates (x+y+...= values in the \n>>> top/bottom n/2 of entries).\n>>> 3= split the table so that each table has entries whose size_waste values \n>>> add up to approximately the same value.\n>>> 4= I'm sure there are others.\n>>> 1-5 can be done in O(n) time w/o auxiliary data. They can be done in O(1) \n>>> if we've kept track of the appropriate metric as we've built the current \n>>> page.\n>>> \n>>> \n>>>> I think the true figure of merit for a split is how often will subsequent \n>>>> searches have to descend into *both* of the resulting pages as opposed to \n>>>> just one\n>>>> --- the less often that is true, the better. I'm not very clear on what \n>>>> tsearch2 is doing with these bitmaps, but it looks like an upper page's \n>>>> downlink has the union (bitwise OR) of the one-bits in the values on the \n>>>> lower page, and you have to visit the lower page if this union has a \n>>>> nonempty intersection with the set you are looking for. If that's \n>>>> correct, what you really want is to divide the values so that the unions \n>>>> of the two sets have minimal overlap ... which seems to me to have little \n>>>> to do with what the code does at present.\n>>> I'm not sure what \"upper page\" and \"lower page\" mean here?\n>>> \n>>> \n>>>> Teodor, Oleg, can you clarify what's needed here?\n>>> Ditto. Guys what is the real motivation and purpose for this code?\n>> \n>> we want not just split the page by two very distinct parts, but to keep\n>> resulted signatures which is ORed signature of all signatures in the page\n>> as much 'sparse' as can. some information available here\n>> http://www.sai.msu.su/~megera/oddmuse/index.cgi/Tsearch_V2_internals\n>> \n>> Unfortunately, we're rather busy right now and couldn't be very useful.\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sat, 21 Jan 2006 19:33:27 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" }, { "msg_contents": "Ron <[email protected]> writes:\n> At 07:23 PM 1/20/2006, Tom Lane wrote:\n>> Well, we're trying to split an index page that's gotten full into \n>> two index pages, preferably with approximately equal numbers of items in\n>> each new page (this isn't a hard requirement though).\n\n> Maybe we are over thinking this. 
What happens if we do the obvious \n> and just make a new page and move the \"last\" n/2 items on the full \n> page to the new page?\n\nSearch performance will go to hell in a handbasket :-(. We have to make\nat least some effort to split the page in a way that will allow searches\nto visit only one of the two child pages rather than both.\n\nIt's certainly true though that finding the furthest pair is not a\nnecessary component of that. It's reasonable if you try to visualize\nthe problem in 2D or 3D, but I'm not sure that that geometric intuition\nholds up in such a high-dimensional space as we have here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Jan 2006 13:27:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very " }, { "msg_contents": "Tom Lane wrote:\n\n> The problem seems to be mostly tsearch2's fault rather than the general\n> GIST code. I've applied a partial fix to 8.1 and HEAD branches, which\n> you can find here if you're in a hurry for it:\n> http://archives.postgresql.org/pgsql-committers/2006-01/msg00283.php\n> (the gistidx.c change is all you need for tsearch2)\n\nThanks for all your time and work you and the other guys are\nspending on this matter! I'll look into the new version, but a.p.o\nseems to be unreachable at the moment.\n\n\n> There is some followup discussion in the pgsql-performance list. It\n> seems possible that we can get another factor of 10 or better with a\n> smarter picksplit algorithm --- but that patch will probably be too\n> large to be considered for back-patching into the stable branches.\n\nI've already been following the discussion on pgsql-perform,\nalthough I have to admit that don't understand every detail of the\ntsearch2 implementation. :-) Thus, I'm sorry that I won't be able\nto help directly on that problem. But it is interesting to read anyway.\n\nBest regards,\n\n- Stephan", "msg_date": "Sat, 21 Jan 2006 19:37:06 +0100", "msg_from": "Stephan Vollmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Creation of tsearch2 index is very slow" }, { "msg_contents": "On Sat, 21 Jan 2006, Tom Lane wrote:\n\n> Ron <[email protected]> writes:\n>> At 07:23 PM 1/20/2006, Tom Lane wrote:\n>>> Well, we're trying to split an index page that's gotten full into\n>>> two index pages, preferably with approximately equal numbers of items in\n>>> each new page (this isn't a hard requirement though).\n>\n>> Maybe we are over thinking this. What happens if we do the obvious\n>> and just make a new page and move the \"last\" n/2 items on the full\n>> page to the new page?\n>\n> Search performance will go to hell in a handbasket :-(. We have to make\n> at least some effort to split the page in a way that will allow searches\n> to visit only one of the two child pages rather than both.\n\ndoes the order of the items within a given page matter? if not this sounds \nlike a partial quicksort algorithm would work. you don't need to fully \nsort things, but you do want to make sure that everything on the first \npage is 'less' then everything on the second page so you can skip passes \nthat don't cross a page boundry\n\n> It's certainly true though that finding the furthest pair is not a\n> necessary component of that. 
It's reasonable if you try to visualize\n> the problem in 2D or 3D, but I'm not sure that that geometric intuition\n> holds up in such a high-dimensional space as we have here.\n\nI will say that I'm not understanding the problem well enough to \nunderstand themulti-dimentional nature of this problem.\n\nDavid Lang\n", "msg_date": "Sat, 21 Jan 2006 12:19:26 -0800 (PST)", "msg_from": "David Lang <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" }, { "msg_contents": "On Sat, Jan 21, 2006 at 06:22:52PM +0300, Oleg Bartunov wrote:\n> >I see how it works, what I don't quite get is whether the \"inverted\n> >index\" you refer to is what we're working with here, or just what's in\n> >tsearchd?\n> \n> just tsearchd. We plan to implement inverted index into PostgreSQL core\n> and then adapt tsearch2 to use it as option for read-only archives.\n\nHmm, had a look and think about it and I think I see what you mean by\nan inverted index. I also think your going to have a real exercise\nimplementing it in Postgres because postgres indexes work on the basis\nof one tuple, one index entry, which I think your inverted index\ndoesn't do.\n\nThat said, I think GiST could be extended to support your case without\ntoo much difficulty. Interesting project though :)\n\nBTW, given you appear to have a tsearch2 index with some real-world\ndata, would you be willing to try some alternate picksplit algorithms\nto see if your gevel module shows any difference?\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Sat, 21 Jan 2006 21:35:58 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very slow" }, { "msg_contents": "David Lang <[email protected]> writes:\n> On Sat, 21 Jan 2006, Tom Lane wrote:\n>> Ron <[email protected]> writes:\n>>> Maybe we are over thinking this. What happens if we do the obvious\n>>> and just make a new page and move the \"last\" n/2 items on the full\n>>> page to the new page?\n>> \n>> Search performance will go to hell in a handbasket :-(. We have to make\n>> at least some effort to split the page in a way that will allow searches\n>> to visit only one of the two child pages rather than both.\n\n> does the order of the items within a given page matter?\n\nAFAIK the items within a GIST index page are just stored in insertion\norder (which is exactly why Ron's suggestion above doesn't work well).\nThere's no semantic significance to it. It's only when we have to split\nthe page that we need to classify the items more finely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Jan 2006 15:55:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very " }, { "msg_contents": "At 01:27 PM 1/21/2006, Tom Lane wrote:\n>Ron <[email protected]> writes:\n> > At 07:23 PM 1/20/2006, Tom Lane wrote:\n> >> Well, we're trying to split an index page that's gotten full into\n> >> two index pages, preferably with approximately equal numbers of items in\n> >> each new page (this isn't a hard requirement though).\n>\n> > Maybe we are over thinking this. 
What happens if we do the obvious\n> > and just make a new page and move the \"last\" n/2 items on the full\n> > page to the new page?\n>\n>Search performance will go to hell in a handbasket :-(. We have to make\n>at least some effort to split the page in a way that will allow searches\n>to visit only one of the two child pages rather than both.\n>\n>It's certainly true though that finding the furthest pair is not a\n>necessary component of that. It's reasonable if you try to visualize\n>the problem in 2D or 3D, but I'm not sure that that geometric intuition\n>holds up in such a high-dimensional space as we have here.\nAfter reading the various papers available on GiST and RD trees, I \nthink I have a decent suggestion.\n\nSince RD tree keys contain the keys of their descendents/components \nin them, they are basically a representation of a depth first \nsearch. This can be useful for intra-document searches.\n\nOTOH, inter-document searches are going to be more akin to breadth \nfirst searches using RD trees.\n\nThus my suggestion is that we maintain =two= index structures for text data.\n\nThe first contains as many keys and all their descendents as \npossible. When we can no longer fit a specific complete \"path\" into \na page, we start a new one; trying to keep complete top level to leaf \nkey sets within a page. This will minimize paging during \nintra-document searches.\n\nThe second index keeps siblings within a page and avoids putting \nparents or children within a page unless the entire depth first \nsearch can be kept within the page in addition to the siblings \npresent. This will minimize paging during inter-document searches.\n\nTraditional B-tree ordering methods can be used to define the \nordering/placement of pages within each index, which will minimize \nhead seeks to find the correct page to scan.\n\nSince the criteria for putting a key within a page or starting a new \npage is simple, performance for those tasks should be O(1).\n\nThe price is the extra space used for two indexes instead of one, but \nat first glance that seems well worth it.\n\nComments?\nRon\n\n\n\n\n", "msg_date": "Thu, 26 Jan 2006 19:18:35 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very " }, { "msg_contents": "Ron wrote:\n> At 01:27 PM 1/21/2006, Tom Lane wrote:\n> >Ron <[email protected]> writes:\n> >> At 07:23 PM 1/20/2006, Tom Lane wrote:\n> >>> Well, we're trying to split an index page that's gotten full into\n> >>> two index pages, preferably with approximately equal numbers of items in\n> >>> each new page (this isn't a hard requirement though).\n\n> After reading the various papers available on GiST and RD trees, I \n> think I have a decent suggestion.\n\nI for one don't understand what does your suggestion have to do with the\nproblem at hand ... 
not that I have a better one myself.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4\n\"Siempre hay que alimentar a los dioses, aunque la tierra est� seca\" (Orual)\n", "msg_date": "Thu, 26 Jan 2006 22:00:55 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" }, { "msg_contents": "At 08:00 PM 1/26/2006, Alvaro Herrera wrote:\n>Ron wrote:\n> > At 01:27 PM 1/21/2006, Tom Lane wrote:\n> > >Ron <[email protected]> writes:\n> > >> At 07:23 PM 1/20/2006, Tom Lane wrote:\n> > >>> Well, we're trying to split an index page that's gotten full into\n> > >>> two index pages, preferably with approximately equal numbers \n> of items in\n> > >>> each new page (this isn't a hard requirement though).\n>\n> > After reading the various papers available on GiST and RD trees, I\n> > think I have a decent suggestion.\n>\n>I for one don't understand what does your suggestion have to do with the\n>problem at hand ... not that I have a better one myself.\n\nWe have two problems here.\nThe first is that the page splitting code for these indexes currently \nhas O(N^2) performance.\nThe second is that whatever solution we do use for this \nfunctionality, we still need good performance during searches that \nuse the index. It's not clear that the solutions we've discussed to \nsplitting index pages thus far will result in good performance during searches.\n\nMy suggestion is intended to address both issues.\n\nIf I'm right it helps obtain high performance during searches while \nallowing the index page splitting code to be O(1)\n\nRon. \n\n\n", "msg_date": "Thu, 26 Jan 2006 20:55:46 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" }, { "msg_contents": "Ron <[email protected]> writes:\n> We have two problems here.\n> The first is that the page splitting code for these indexes currently \n> has O(N^2) performance.\n> The second is that whatever solution we do use for this functionality, \n> we still need good performance during searches that use the index.\n\nNo, unfortunately that's not the problem that needs to be solved.\n\nThe problem is figuring out WHICH records to put in the \"left\" and \"right\" trees once you split them. If you can figure that out, then your suggestion (and perhaps other techniques) could be useful.\n\nThe problem boils down to this: You have a whole bunch of essentially random bitmaps. You have two buckets. You want to put half of the bitmaps in one bucket, and half in the other bucket, and when you get through, you want all of the bitmaps in each bucket to be maximally similar to each other, and maximally dissimilar to the ones in the other bucket.\n\nThat way, when you OR all the bitmaps in each bucket together to build the bitmap for the left and right child nodes of the tree, you'll get maximum separation -- the chances that you'll have to descend BOTH the left and right nodes of the tree are minimized.\n\nUnfortunately, this problem is very likely in the set of NP-complete problems, i.e. like the famous \"Traveling Salesman Problem,\" you can prove there's no algorithm that will give the answer in a reasonable time. 
In this case, \"reasonable\" would be measured in milliseconds to seconds, but in fact an actual \"perfect\" split of a set of bitmaps probably can't be computed in the lifetime of the universe for more than a few hundred bitmaps.\n\nThat's the problem that's being discussed: How do you decide which bitmaps go in each of the two buckets? Any solution will necessarily be imperfect, a pragmatic algorithm that gives an imperfect, but acceptable, answer. \n\nAs I mentioned earlier, chemists make extensive use of bitmaps to categorize and group molecules. They use Tanimoto or Tversky similarity metrics (Tanimoto is a special case of Tversky), because it's extremely fast to compare two bitmaps, and the score is highly correlated with the number of bits the two bitmaps have in common.\n\nBut even with a fast \"distance\" metric like Tanimoto, there's still no easy way to decide which bucket to put each bitmap into.\n\nCraig\n", "msg_date": "Thu, 26 Jan 2006 18:29:22 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" }, { "msg_contents": "You seem to have missed my point. I just gave a very clear \ndescription of how to \"decide which bitmaps go in each of the two \nbuckets\" by reformulating the question into \"decide which bitmaps go \nin each of =four= buckets\".\n\nThe intent is to have two indexes, one optimized for one common class \nof searches, and the other optimized for another common class of searches.\n\nBy decomposing the optimization problem into two simpler problems, \nthe hope is that we address all the issues reasonably simply while \nstill getting decent performance.\n\nNothing is free. The price we pay, and it is significant, is that we \nnow have two indexes where before we had only one.\n\nRon\n\n\nAt 09:29 PM 1/26/2006, Craig A. James wrote:\n>Ron <[email protected]> writes:\n>>We have two problems here.\n>>The first is that the page splitting code for these indexes \n>>currently has O(N^2) performance.\n>>The second is that whatever solution we do use for this \n>>functionality, we still need good performance during searches that \n>>use the index.\n>\n>No, unfortunately that's not the problem that needs to be solved.\n>\n>The problem is figuring out WHICH records to put in the \"left\" and \n>\"right\" trees once you split them. If you can figure that out, then \n>your suggestion (and perhaps other techniques) could be useful.\n>\n>The problem boils down to this: You have a whole bunch of \n>essentially random bitmaps. You have two buckets. You want to put \n>half of the bitmaps in one bucket, and half in the other bucket, and \n>when you get through, you want all of the bitmaps in each bucket to \n>be maximally similar to each other, and maximally dissimilar to the \n>ones in the other bucket.\n>\n>That way, when you OR all the bitmaps in each bucket together to \n>build the bitmap for the left and right child nodes of the tree, \n>you'll get maximum separation -- the chances that you'll have to \n>descend BOTH the left and right nodes of the tree are minimized.\n>\n>Unfortunately, this problem is very likely in the set of NP-complete \n>problems, i.e. like the famous \"Traveling Salesman Problem,\" you can \n>prove there's no algorithm that will give the answer in a reasonable \n>time. 
In this case, \"reasonable\" would be measured in milliseconds \n>to seconds, but in fact an actual \"perfect\" split of a set of \n>bitmaps probably can't be computed in the lifetime of the universe \n>for more than a few hundred bitmaps.\n>\n>That's the problem that's being discussed: How do you decide which \n>bitmaps go in each of the two buckets? Any solution will \n>necessarily be imperfect, a pragmatic algorithm that gives an \n>imperfect, but acceptable, answer.\n>\n>As I mentioned earlier, chemists make extensive use of bitmaps to \n>categorize and group molecules. They use Tanimoto or Tversky \n>similarity metrics (Tanimoto is a special case of Tversky), because \n>it's extremely fast to compare two bitmaps, and the score is highly \n>correlated with the number of bits the two bitmaps have in common.\n>\n>But even with a fast \"distance\" metric like Tanimoto, there's still \n>no easy way to decide which bucket to put each bitmap into.\n>\n>Craig\n\n\n", "msg_date": "Thu, 26 Jan 2006 22:33:07 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" }, { "msg_contents": "At 11:13 PM 1/26/2006, Craig A. James wrote:\n>Ron,\n>\n>I'll write to you privately, because these discussions can get messy \n>\"in public\".\n\nI'm responding to this missive publicly in an attempt to help the \ndiscussion along. It is not my usual practice to respond to private \nmessages publicly, but this seems a good case for an exception.\n\n\n>>You seem to have missed my point. I just gave a very clear \n>>description of how to \"decide which bitmaps go in each of the two \n>>buckets\" by reformulating the question into \"decide which bitmaps \n>>go in each of =four= buckets\".\n>\n>Sorry to disagree, but here's the problem. It's not whether you put \n>them into two, or four, or N buckets. The problem is, how do you \n>categorize them to begin with, so that you have some reasonable \n>criterion for which item goes in which bucket? THAT is the hard \n>problem, not whether you have two or four buckets.\n\nAgreed. ...and I've given the answer to \"how do you categorize them\" \nusing a general property of RD Trees which should result in \"a \nreasonable criterion for which item goes in which bucket\" when used \nfor text searching.\n\nThe definition of RD tree keys being either \"atomic\" (non \ndecomposable) or \"molecular\" (containing the keys of their \ndescendents) is the one source of our current problems figuring out \nhow to split them and, if I'm correct, a hint as to how to solve the \nsplitting problem in O(1) time while helping to foster high \nperformance during seearches.\n\n\n>Earlier, you wrote:\n>>Traditional B-tree ordering methods can be used to define the \n>>ordering/placement of pages within each index, which will minimize \n>>head seeks to find the correct page to scan.\n>>Since the criteria for putting a key within a page or starting a \n>>new page is simple, performance for those tasks should be O(1).\n>\n>What are the \"traditional B-tree ordering methods\"? That's the \n>issue, right there. The problem is that bitmaps HAVE NO ORDERING \n>METHOD. You can't sort them numerically or alphabetically.\nThe =bitmaps= have no ordering method. 
=Pages= of bitmaps MUST have \nan ordering method or we have no idea which page to look at when \nsearching for a key.\n\nTreating the \"root\" bitmaps (those that may have descendents but have \nno parents) as numbers and ordering the pages using B tree creation \nmethods that use those numbers as keys is a simple way to create a \nbalanced data structure with high fan out. IOW, a recipe for finding \nthe proper page(s) to scan using minimal seeks.\n\n\n>Take a crowd of people. It's easy to divide them in half by names: \n>just alphabetize and split the list in half. That's what your \n>solution would improve on.\n>\n>But imagine I gave you a group of people and told you to put them \n>into two rooms, such that when you are through, the people in each \n>room are maximally similar to each other and maximally dissimilar to \n>the people in the other room. How would you do it?\n>\n>First of all, you'd have to define what \"different\" and \"same\" \n>are. Is your measure based on skin color, hair color, age, height, \n>weight, girth, intelligence, speed, endurance, body odor, ... \n>? Suppose I tell you, \"All of those\". You have to sort the people \n>so that your two groups are separated such that the group \n>differences are maximized in this N-dimensional \n>space. Computationally, it's a nearly impossible problem.\nI'm =changing= the problem using the semantics of RD trees. Using an \nRD tree representation, we'd create and assign a key for each person \nthat ranked them compared to everyone else for each of the metrics we \ndecided to differentiate on.\n\nThen we start to form trees of keys to these people by creating \n\"parent\" keys as roots that contain the union of everyone with the \nsame or possibly similar value for some quality. By iterating this \nprocess, we end up with a bottom up construction method for an RD \ntree whose root key will the union of all the keys representing these \npeople. If this is one data structure, we end up with an efficient \nand effective way of answering not only Boolean but also ranking and \nsimilarity type queries.\n\nThe problem comes when we have to split this monolithic DS into \npieces for best performance. As many have noted, there is no \nsimple way to decide how to do such a thing.\n\nOTOH, we =do= have the knowledge of how RD trees are built and what \ntheir keys represent, and we know that queries are going to tend \nstrongly to either a) traverse the path from parent to child (depth \nfirst) or b) find all siblings with (dis)similar characteristics \n(breadth first), or c) contain a mixture of a) and b).\n\nSo I'm suggesting that conceptually we clone the original RD tree and \nwe split each clone according to two different methods.\nMethod A results in pages that contain as many complete depth first \npaths from root to leave as possible on each page.\nMethod B results in pages that contain as many siblings as possible per page.\n...and we use the appropriate index during each type of query or query part.\n\nIn return for using 2x as much space, we should have a general method \nthat is O(1) for decomposing RD trees in such a way as to support \nhigh performance during searches.\n\n\n>Here's a much simpler version of the problem. Suppose I give you \n>1000 numbers, and tell you to divide them in half so that the two \n>groups have the smallest standard deviation within each group \n>possible, and the two average values of the groups of numbers are \n>the farthest apart possible. 
Pretty easy, right?\n>\n>Now do it where \"distance\" is evaluated modulo(10), that is, 1 and 9 \n>are closer together than 1 and 3. Now try it -- you'll find that \n>this problem is almost impossible to solve.\nThis is orthogonal to the discussion at hand since the above is not \nakin to text searching nor best done with RD trees and that is \nexactly what this discussion is about. We don't have to solve a \ngeneral problem for all domains. We only have to solve it for the \nspecific domain of text search using the specific DS of RD trees.\n\n\n>The problem is that, as database designers, we're used to text and \n>numbers, which have an inherent order. Bitmaps have no ordering -- \n>you can't say one is \"greater than\" or \"less than\" the other. All \n>you can do is determine that A and B are \"more similar\" or \"less \n>similar\" than A and C.\nText and numbers only have an order because we deem them to. There \nis even a very common default order we tend to use. But even in the \ntrivial case of ranking we've all said that \"1\" is better than \"2\" in \none situation and \"2\" is better than \"1\" in another.\n\nIn the case of text searching, the bitmaps represent where to find \nlegomena, AKA specific tokens. Particularly hapax legomena, AKA \nunique tokens. Hapax Legomena are particularly important because \nthey are maximally efficient at driving the search needed to answer a query.\n\nWhile absolute hapax legomena are great for quickly pruning things \nwithin a document or document region, relative hapax legomena can do \nthe same thing when searching among multiple documents or document regions.\n\nThe two indexes I'm suggesting are designed to take advantage of this \ngeneral property of text searching.\n\n\nHopefully this clarifies things and motivates a better discussion?\nRon \n\n\n", "msg_date": "Fri, 27 Jan 2006 02:52:45 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Creation of tsearch2 index is very" } ]
[ { "msg_contents": "Hi,\nI have a query that does a left outer join. The query gets some text \nfrom a reference table where one of the query's main tables may or may \nnot have the text's tables id. It wasn't super fast, but now it simply \nwon't execute. It won't complete either through odbc or via pgadmin \n(haven't yet tried via psql). A week ago (with considerably fewer \nrecords in the main table) it executed fine, not particularly quickly, \nbut not that slowly either. Now it locks up postgres completely (if \nnothing else needs anything it takes 100% cpu), and even after an hour \ngives me nothing. I have come up with a solution that gets the text via \nanother query (possibly even a better solution), but this seems very \nstrange.\nCan anyone shed some light on the subject? I tried a full vacuum on the \ntables that needed it, and a postgres restart, all to no avail.\nCheers\nAntoine\nps. I can send the query if that will help...\npps. running a home-compiled 8.1.1 with tables in the query having 70000 \nrecords, 30000 records and 10 for the outer join. Without the left outer \njoin it runs in ~ 1 second.\n", "msg_date": "Fri, 20 Jan 2006 19:32:34 +0100", "msg_from": "Antoine <[email protected]>", "msg_from_op": true, "msg_subject": "query stopped working after tables > 50000 records" }, { "msg_contents": "Send query, output of EXPLAIN and table definitions.\n\nOn Fri, Jan 20, 2006 at 07:32:34PM +0100, Antoine wrote:\n> Hi,\n> I have a query that does a left outer join. The query gets some text \n> from a reference table where one of the query's main tables may or may \n> not have the text's tables id. It wasn't super fast, but now it simply \n> won't execute. It won't complete either through odbc or via pgadmin \n> (haven't yet tried via psql). A week ago (with considerably fewer \n> records in the main table) it executed fine, not particularly quickly, \n> but not that slowly either. Now it locks up postgres completely (if \n> nothing else needs anything it takes 100% cpu), and even after an hour \n> gives me nothing. I have come up with a solution that gets the text via \n> another query (possibly even a better solution), but this seems very \n> strange.\n> Can anyone shed some light on the subject? I tried a full vacuum on the \n> tables that needed it, and a postgres restart, all to no avail.\n> Cheers\n> Antoine\n> ps. I can send the query if that will help...\n> pps. running a home-compiled 8.1.1 with tables in the query having 70000 \n> records, 30000 records and 10 for the outer join. Without the left outer \n> join it runs in ~ 1 second.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 20 Jan 2006 13:04:37 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query stopped working after tables > 50000 records" } ]
[ { "msg_contents": ">>Hi,\n>> \n>> Will simple queries such as \"SELECT * FROM blah_table WHERE tag='x'; work any\n>> faster by putting them into a stored procedure?\n\n>\n>IMHO no, why do you think so? You can use PREPARE instead, if you have many\n>selects like this.\n\n\nI tought that creating stored procedures in database means\nstoring it's execution plan (well, actually storing it like a\ncompiled object). Well, that's what I've learned couple a years\nago in colledge ;)\n\nWhat are the advantages of parsing SP functions every time it's called?\n\nMy position is that preparing stored procedures for execution solves\nmore problems, that it creates.\nAnd the most important one to be optimizing access to queries from \nmultiple connections (which is one of the most important reasons \nfor using stored procedures in the first place).\n\nBest regards,\n\tRikard\n\n", "msg_date": "Fri, 20 Jan 2006 19:50:23 +0100", "msg_from": "Rikard Pavelic <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "On Fri, Jan 20, 2006 at 07:50:23PM +0100, Rikard Pavelic wrote:\n> >>Hi,\n> >> \n> >>Will simple queries such as \"SELECT * FROM blah_table WHERE tag='x'; work \n> >>any\n> >>faster by putting them into a stored procedure?\n> \n> >\n> >IMHO no, why do you think so? You can use PREPARE instead, if you have many\n> >selects like this.\n> \n> \n> I tought that creating stored procedures in database means\n> storing it's execution plan (well, actually storing it like a\n> compiled object). Well, that's what I've learned couple a years\n> ago in colledge ;)\n\nMy college professor said it, it must be true! ;P\n\nMy understanding is that in plpgsql, 'bare' queries get prepared and act\nlike prepared statements. IE:\n\nSELECT INTO variable\n field\n FROM table\n WHERE condition = true\n;\n\n> What are the advantages of parsing SP functions every time it's called?\n> \n> My position is that preparing stored procedures for execution solves\n> more problems, that it creates.\n> And the most important one to be optimizing access to queries from \n> multiple connections (which is one of the most important reasons \n> for using stored procedures in the first place).\n\nOk, so post some numbers then. It might be interesting to look at the\ncost of preparing a statement, although AFAIK that does not store the\nquery plan anywhere.\n\nIn most databases, query planning seems to be a pretty expensive\noperation. My experience is that that isn't the case with PostgreSQL.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 20 Jan 2006 13:10:31 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "Jim C. Nasby wrote:\n> My college professor said it, it must be true! ;P\n>\n> \nThe famous joke ;)\n> My understanding is that in plpgsql, 'bare' queries get prepared and act\n> like prepared statements. IE:\n>\n> SELECT INTO variable\n> field\n> FROM table\n> WHERE condition = true\n> ;\n>\n> \nUnfortunately I don't know enough about PostgreSQL, but from responses \nI've been reading I've\ncome to that conclusion.\n> Ok, so post some numbers then. 
It might be interesting to look at the\n> cost of preparing a statement, although AFAIK that does not store the\n> query plan anywhere.\n>\n> In most databases, query planning seems to be a pretty expensive\n> operation. My experience is that that isn't the case with PostgreSQL.\n> \n\nI didn't think about storing query plan anywhere on the disk, rather \nkeep them in memory pool.\nIt would be great if we had an option to use prepare statement for \nstored procedure so it\nwould prepare it self the first time it's called and remained prepared \nuntil server shutdown or\nmemory pool overflow.\n\nThis would solve problems with prepare which is per session, so for \nprepared function to be\noptimal one must use same connection.\n", "msg_date": "Fri, 20 Jan 2006 20:38:23 +0100", "msg_from": "Rikard Pavelic <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "On Fri, Jan 20, 2006 at 08:38:23PM +0100, Rikard Pavelic wrote:\n> This would solve problems with prepare which is per session, so for \n> prepared function to be\n> optimal one must use same connection.\n\nIf you're dealing with something that's performance critical you're not\ngoing to be constantly re-connecting anyway, so I don't see what the\nissue is.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Fri, 20 Jan 2006 15:34:55 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "Jim C. Nasby wrote:\n> If you're dealing with something that's performance critical you're not\n> going to be constantly re-connecting anyway, so I don't see what the\n> issue is.\n> \n\nI really missed your point.\nIn multi user environment where each user uses it's connection for \nidentification\npurposes, this seems like a reasonable optimization.\n\nI know there is pgpool, but it's non windows, and it's not the best \nsolution\nfor every other problem.\n\n", "msg_date": "Sat, 21 Jan 2006 08:59:28 +0100", "msg_from": "Rikard Pavelic <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "Jim C. Nasby wrote:\n> If you're dealing with something that's performance critical you're not\n> going to be constantly re-connecting anyway, so I don't see what the\n> issue is.\n> \n\nI didn't include mailing list in my second reply :( so here it is again.\nSomeone may find this interesting...\n\nhttp://archives.postgresql.org/pgsql-general/2004-04/msg00084.php\n\n From Tom Lane:\n\"EXECUTE means something different in plpgsql than it does in plain SQL,\n\nand you do not need PREPARE at all in plpgsql. plpgsql's automatic\ncaching of plans gives you the effect of PREPARE on every statement\nwithout your having to ask for it.\"\n\n", "msg_date": "Sat, 21 Jan 2006 22:06:13 +0100", "msg_from": "Rikard Pavelic <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "Em Sex, 2006-01-20 �s 15:34 -0600, Jim C. 
Nasby escreveu:\n> On Fri, Jan 20, 2006 at 08:38:23PM +0100, Rikard Pavelic wrote:\n> > This would solve problems with prepare which is per session, so for \n> > prepared function to be\n> > optimal one must use same connection.\n> \n> If you're dealing with something that's performance critical you're not\n> going to be constantly re-connecting anyway, so I don't see what the\n> issue is.\n\nThis one was my doubt, perhaps in based desktop applications this is\ntrue, but in web applications this is not the re-connecting is\nconstant :(.\n\nThen the preprare not have very advantage because your duration is per\nsession.\n\nMarcos.\n\n", "msg_date": "Mon, 23 Jan 2006 08:14:14 +0000", "msg_from": "Marcos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "Hi, Marcos,\n\nMarcos wrote:\n\n> This one was my doubt, perhaps in based desktop applications this is\n> true, but in web applications this is not the re-connecting is\n> constant :(.\n\nIf this is true, then you have a much bigger performance problem than\nquery plan preparation.\n\nYou really should consider using a connection pool (most web application\nservers provide pooling facilities) or some other means to keep the\nconnection between several http requests.\n\nWorried,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Mon, 23 Jan 2006 13:30:31 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "Hi Markus\n\n> You really should consider using a connection pool (most web application\n> servers provide pooling facilities) or some other means to keep the\n> connection between several http requests.\n\nYes. I'm finding a connection pool, I found the pgpool but yet don't\nunderstand how it's work I'm go read more about him.\n\nThanks\n\nMarcos\n\n", "msg_date": "Mon, 23 Jan 2006 13:27:25 +0000", "msg_from": "Marcos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "I don't think pgpool is what you need. If I understand pgpool\ncorrectly, pgpool lets you pool multiple postgres servers together. You\nare just looking for database connection pooling. \n\nA simple connection pool is basically just an application wide list of\nconnections. When a client needs a connection, you just request a\nconnection from the pool. If there is an unused connection in the pool,\nit is given to the client and removed from the unused pool. If there is\nno unused connection in the pool, then a new connection is opened. When\nthe client is done with it, the client releases it back into the pool.\n\nYou can google for 'database connection pool' and you should find a\nbunch of stuff. 
It's probably a good idea to find one already written.\nIf you write your own you have to make sure it can deal with things like\ndead connections, synchronization, and maximum numbers of open\nconnections.\n\nDave\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Marcos\nSent: Monday, January 23, 2006 7:27 AM\nTo: Markus Schaber\nCc: [email protected]\nSubject: Re: [PERFORM] [PERFORMANCE] Stored Procedures\n\nHi Markus\n\n> You really should consider using a connection pool (most web\napplication\n> servers provide pooling facilities) or some other means to keep the\n> connection between several http requests.\n\nYes. I'm finding a connection pool, I found the pgpool but yet don't\nunderstand how it's work I'm go read more about him.\n\nThanks\n\nMarcos\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n", "msg_date": "Mon, 23 Jan 2006 10:23:17 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" }, { "msg_contents": "On Mon, 23 Jan 2006 10:23:17 -0600\n\"Dave Dutcher\" <[email protected]> wrote:\n\n> I don't think pgpool is what you need. If I understand pgpool\n> correctly, pgpool lets you pool multiple postgres servers together.\n> You are just looking for database connection pooling. \n\n While pgpool can let you pool together multiple backend servers,\n it also functions well as just a connection pooling device with\n only one backend. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Mon, 23 Jan 2006 11:25:26 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Stored Procedures" } ]
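For reference, the per-session plan caching discussed in this thread looks like this in plain SQL, reusing the blah_table/tag example quoted at the start (the statement name get_by_tag is arbitrary):

    -- Parse and plan once for the current session:
    PREPARE get_by_tag(text) AS
        SELECT * FROM blah_table WHERE tag = $1;

    -- Reuse the stored plan with different constants:
    EXECUTE get_by_tag('x');
    EXECUTE get_by_tag('y');

    -- The prepared statement disappears with the session (or on DEALLOCATE):
    DEALLOCATE get_by_tag;

This is why the pooling question matters: the saved plan only pays off if later requests come back on the same connection, which is exactly what a connection pool arranges.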
[ { "msg_contents": "Hello; \n\nI am going through a post mortem analysis of an infrequent but\nrecurring problem on a Pg 8.0.3 installation. Application code\nconnects to Pg using J2EE pooled connections.\n\n PostgreSQL 8.0.3 on sparc-sun-solaris2.9, compiled by GCC sparc-sun-solaris2.8-gcc (GCC) 3.3.2\n\nDatabase is quite large with respect to the number of tables, some of\nwhich have up to 6 million tuples. Typical idle/busy connection ratio\nis 3/100 but occationally we'll catch 20 or more busy sessions.\n\nThe problem manifests itself and appears like a locking issue. About\nweekly throuput slows down and we notice the busy connection count\nrising minute by minute. 2, 20, 40... Before long, the app server\ndetects lack of responsiveness and fails over to another app server\n(not Pg) which in turn attempts a bunch of new connections into\nPostgres.\n\nSampling of the snapshots of pg_locks and pg_stat_activity tables\ntakes place each minute.\n\nI am wishing for a few new ideas as to what to be watching; Here's\nsome observations that I've made.\n\n1. At no time do any UN-granted locks show in pg_locks\n2. The number of exclusive locks is small 1, 4, 8\n3. Other locks type/mode are numerous but appear like normal workload.\n4. There are at least a few old '<IDLE> In Transaction' cases in\n activity view\n5. No interesting error messages or warning in Pg logs.\n6. No crash of Pg backend\n\nOther goodies includes a bounty of poor performing queries which are\nconstantly being optimized now for good measure. Aside from the heavy\nqueries, performance is generallly decent.\n\nResource related server configs have been boosted substantially but\nhave not undergone any formal R&D to verify that we're inthe safe\nunder heavy load.\n\nAn max_fsm_relations setting which is *below* our table and index\ncount was discovered by me today and will be increased this evening\nduring a maint cycle.\n\nThe slowdown and subsequent run-away app server takes place within a\nsmall 2-5 minute window and I have as of yet not been able to get into\nPsql during the event for a hands-on look.\n\nQuestions;\n\n1. Is there any type of resource lock that can unconditionally block\n another session and NOT appear as UN-granted lock?\n\n2. What in particular other runtime info would be most useful to\n sample here?\n\n3. What Solaris side runtime stats might give some clues here\n (maybe?)( and how often to sample? Assume needs to be aggressive\n due to how fast this problem crops up.\n\nAny help appreciated\n\nThank you\n\n\n-- \n-------------------------------------------------------------------------------\nJerry Sievers 305 854-3001 (home) WWW ECommerce Consultant\n 305 321-1144 (mobile\thttp://www.JerrySievers.com/\n", "msg_date": "20 Jan 2006 16:42:20 -0500", "msg_from": "Jerry Sievers <[email protected]>", "msg_from_op": true, "msg_subject": "Sudden slowdown of Pg server " }, { "msg_contents": "\nlockstat is available in Solaris 9. 
That can help you to determine if \nthere are any kernel level locks that are occuring during that time.\nSolaris 10 also has plockstat which can be used to identify userland \nlocks happening in your process.\n\nSince you have Solaris 9, try the following:\n\nYou can run (as root)\nlockstat sleep 5 \nand note the output which can be long.\n\nI guess \"prstat -am\" output, \"iostat -xczn 3\", \"vmstat 3\" outputs will \nhelp also.\n\nprstat -am has a column called \"LAT\", if the value is in double digits, \nthen you have a locking issue which will probably result in higher \"SLP\" \nvalue for the process. (Interpretation is data and workload specific \nwhich this email is too small to decode)\n\nOnce you have identified a particular process (if any) to be the source \nof the problem, get its id and you can look at the outputs of following \ncommand which (quite intrusive)\ntruss -c -p $pid 2> truss-syscount.txt\n\n(Ctrl-C after a while to stop collecting)\n\ntruss -a -e -u\":::\" -p $pid 2> trussout.txt\n\n(Ctrl-C after a while to stop collecting)\n\nRegards,\nJignesh\n\n\nJerry Sievers wrote:\n\n>Hello; \n>\n>I am going through a post mortem analysis of an infrequent but\n>recurring problem on a Pg 8.0.3 installation. Application code\n>connects to Pg using J2EE pooled connections.\n>\n> PostgreSQL 8.0.3 on sparc-sun-solaris2.9, compiled by GCC sparc-sun-solaris2.8-gcc (GCC) 3.3.2\n>\n>Database is quite large with respect to the number of tables, some of\n>which have up to 6 million tuples. Typical idle/busy connection ratio\n>is 3/100 but occationally we'll catch 20 or more busy sessions.\n>\n>The problem manifests itself and appears like a locking issue. About\n>weekly throuput slows down and we notice the busy connection count\n>rising minute by minute. 2, 20, 40... Before long, the app server\n>detects lack of responsiveness and fails over to another app server\n>(not Pg) which in turn attempts a bunch of new connections into\n>Postgres.\n>\n>Sampling of the snapshots of pg_locks and pg_stat_activity tables\n>takes place each minute.\n>\n>I am wishing for a few new ideas as to what to be watching; Here's\n>some observations that I've made.\n>\n>1. At no time do any UN-granted locks show in pg_locks\n>2. The number of exclusive locks is small 1, 4, 8\n>3. Other locks type/mode are numerous but appear like normal workload.\n>4. There are at least a few old '<IDLE> In Transaction' cases in\n> activity view\n>5. No interesting error messages or warning in Pg logs.\n>6. No crash of Pg backend\n>\n>Other goodies includes a bounty of poor performing queries which are\n>constantly being optimized now for good measure. Aside from the heavy\n>queries, performance is generallly decent.\n>\n>Resource related server configs have been boosted substantially but\n>have not undergone any formal R&D to verify that we're inthe safe\n>under heavy load.\n>\n>An max_fsm_relations setting which is *below* our table and index\n>count was discovered by me today and will be increased this evening\n>during a maint cycle.\n>\n>The slowdown and subsequent run-away app server takes place within a\n>small 2-5 minute window and I have as of yet not been able to get into\n>Psql during the event for a hands-on look.\n>\n>Questions;\n>\n>1. Is there any type of resource lock that can unconditionally block\n> another session and NOT appear as UN-granted lock?\n>\n>2. What in particular other runtime info would be most useful to\n> sample here?\n>\n>3. 
What Solaris side runtime stats might give some clues here\n> (maybe?)( and how often to sample? Assume needs to be aggressive\n> due to how fast this problem crops up.\n>\n>Any help appreciated\n>\n>Thank you\n>\n>\n> \n>\n", "msg_date": "Fri, 20 Jan 2006 17:13:58 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sudden slowdown of Pg server" } ]
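Two queries along the lines of the minute-by-minute sampling described in this thread, written against the 7.4-era catalog column names (procpid, current_query); they assume stats_command_string is enabled so that current_query is populated:

    -- Long-lived "<IDLE> in transaction" sessions; they hold locks and pin
    -- old row versions even though nothing shows up as blocked:
    SELECT procpid, usename, query_start, current_query
      FROM pg_stat_activity
     WHERE current_query ILIKE '<idle> in transaction'
     ORDER BY query_start;

    -- Lock requests that are actually waiting, with who is asking:
    SELECT l.pid, l.relation::regclass, l.mode, a.usename, a.current_query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.procpid = l.pid
     WHERE NOT l.granted;

An empty result from the second query during a slowdown is consistent with the observation above that no ungranted locks ever show up, and pushes the investigation toward the OS-level tools described in the reply.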
[ { "msg_contents": "Hello,\n\nOur application uses typical queries similar to following (very simplified):\n\nSELECT\n part_id,\n part_name,\n (SELECT\n SUM(amount) FROM part_movements M\n WHERE P.part_id = M.part_id\n ) as part_amount\nFROM parts P\nLIMIT 50\n\nThe parts table holds thousands of items. Movement table stores yearly\nmovement information of those items. We are presenting results to users page\nby page, hence the limit case.\n\nUser can sort and filter results. When sorting is introduced, query\nperformance drops significantly:\n\nSELECT\n part_id,\n part_name,\n (SELECT\n SUM(amount) FROM part_movements M\n WHERE P.part_id = M.part_id\n ) as part_amount\nFROM parts P\nORDER BY part_name\nLIMIT 50\n\nPostgres seems to compute all possible rows and then sorts the\nresults, which nearly renders the paging meaningless. A dummy WHERE\ncase dramatically improves performance:\n\nSELECT\n part_id,\n part_name,\n (SELECT\n SUM(amount) FROM part_movements M\n WHERE P.part_id = M.part_id\n ) as part_amount\nFROM parts P\nORDER BY part_name\nWHERE part_amount > -10000000\nLIMIT 50\n\nIs there a way to improve performance of these queries? Is it possible\nto instruct Postgres to first sort the rows then compute the inner\nqueries? (We have simulated this by using temporary tables and two\nstage queries, but this is not practical because most queries are\nautomatically generated).\n\nAttached is the output of real queries and their corresponding EXPLAIN\nANALYZE outputs.\n\nRegards,\nUmit Oztosun", "msg_date": "Sat, 21 Jan 2006 22:55:23 +0200", "msg_from": "=?ISO-8859-1?Q?=DCmit_=D6ztosun?= <[email protected]>", "msg_from_op": true, "msg_subject": "Slow queries consisting inner selects and order bys & hack to speed\n up" }, { "msg_contents": "=?ISO-8859-1?Q?=DCmit_=D6ztosun?= <[email protected]> writes:\n> Our application uses typical queries similar to following (very simplified):\n\n> SELECT\n> part_id,\n> part_name,\n> (SELECT\n> SUM(amount) FROM part_movements M\n> WHERE P.part_id =3D M.part_id\n> ) as part_amount\n> FROM parts P\n> ORDER BY part_name\n> LIMIT 50\n\n> Postgres seems to compute all possible rows and then sorts the\n> results, which nearly renders the paging meaningless.\n\nYeah. The general rule is that sorting happens after computing the\nSELECT values --- this is more or less required for cases where the\nORDER BY refers to a SELECT-list item. You'd probably have better\nresults by writing a sub-select:\n\nSELECT\n part_id,\n part_name,\n (SELECT\n SUM(amount) FROM part_movements M\n WHERE P.part_id = M.part_id\n ) as part_amount\nFROM\n (SELECT part_id, part_name FROM parts P\n WHERE whatever ORDER BY whatever LIMIT n) as P;\n\nThis will do the part_movements stuff only for rows that make it out of\nthe sub-select.\n\nAnother approach is to make sure the ORDER BY is always on an indexed\ncolumn; in cases where the ORDER BY is done by an indexscan instead\nof a sort, calculation of the unwanted SELECT-list items does not\nhappen. However, this only works as long as LIMIT+OFFSET is fairly\nsmall.\n\nLastly, are you on a reasonably current Postgres version (performance\ncomplaints about anything older than 8.0 will no longer be accepted\nwith much grace), and are your statistics up to date? 
The ANALYZE\nshows rowcount estimates that seem suspiciously far off:\n\n -> Seq Scan on scf_stokkart stok (cost=0.00..142622.54 rows=25 width=503) (actual time=4.726..19324.633 rows=4947 loops=1)\n Filter: (upper((durum)::text) = 'A'::text)\n\nThis is important because, if the planner realized that the SELECT-list\nitems were going to be evaluated 5000 times not 25, it might well choose\na different plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Jan 2006 14:19:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow queries consisting inner selects and order bys & hack to\n\tspeed up" } ]
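Spelled out against the parts/part_movements example from the original post, the rewrite Tom describes looks like the following (the LIMIT and OFFSET values are placeholders for whichever page is being fetched):

    SELECT p.part_id,
           p.part_name,
           (SELECT SUM(amount)
              FROM part_movements m
             WHERE m.part_id = p.part_id) AS part_amount
      FROM (SELECT part_id, part_name
              FROM parts
             ORDER BY part_name
             LIMIT 50 OFFSET 0) AS p
     ORDER BY p.part_name;

The inner query does the ordering and paging first, so the SUM sub-select runs only for the 50 rows that are actually returned instead of for every row in parts.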
[ { "msg_contents": "Greetings -\n\nI am really love Postgres and do enjoy hacking around it but I just\nmet a libpq performance issue that I would like to get your help with.\n\nI have Postgres 8.0.1 running on Linux version 2.6.10-1.771_FC2.\n\nI have an application that makes queries to Postgres. I want to keep\nthe database code in this application as simple as possible because\nthe application is doing lots of other complex things (it is an IP\nTelephony server). So for the sake of simplicity, I do not \"prepare\"\nSQL statements before submitting them to database.\n\nNow I have 2 ways to access Postgres - one through unixODBC, the \nother through libpq. I found that libpq performance is nearly 2 times\nslower than performance of unixODBC. Specifically, I have a\nmultithreaded\ntest program (using pthreads), where I can run a specified number of\nsimple queries like this:\n\nSELECT * FROM extensions WHERE \n\t(somecolumn IS NOT NULL) \n\tAND (id IN (SELECT extension_id FROM table 2))\n\nThe test program can submit the queries using unixODBC or libpq. \n\nWith 1 execution thread I have these performance results:\nODBC: does 200 queries in 2 seconds (100.000000 q/s)\nlibpq: does 200 queries in 3 seconds (66.666667 q/s)\n\nWith 2 threads the results are:\nODBC: does 200 queries in 3 seconds (66.666667 q/s)\nLibpq: does 200 queries in 6 seconds (33.333333 q/s)\n\nWith 3 threads:\nODBC: does 200 queries in 5 seconds (40.000000 q/s)\nLibpq: 200 queries in 9 seconds (22.222222 q/s)\n\nObviously libpq is slower.\n\nDo you have any ideas why libpq is so much slower than unixODBC?\nWhere do you think the libpq bottleneck is? Are there any libpq\noptions (compile time or runtime) that can make it faster?\n\nRespectfully\n\nConstantine\n\n", "msg_date": "Sat, 21 Jan 2006 15:49:52 -0800", "msg_from": "\"Constantine Filin\" <[email protected]>", "msg_from_op": true, "msg_subject": "libpq vs. unixODBC performance" }, { "msg_contents": "\"Constantine Filin\" <[email protected]> writes:\n> Do you have any ideas why libpq is so much slower than unixODBC?\n\nPerhaps ODBC is batching the queries into a transaction behind your\nback? Or preparing them for you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Jan 2006 19:40:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libpq vs. unixODBC performance " } ]
[ { "msg_contents": "Hi folks,\n\nI'm not sure if this is the right place for this but thought I'd ask. \nI'm relateively new to postgres having only used it on 3 projects and am \njust delving into the setup and admin for the second time.\n\nI decided to try tsearch2 for this project's search requirements but am \nhaving trouble attaining adequate performance. I think I've nailed it \ndown to trouble with the headline() function in tsearch2. \n\nIn short, there is a crawler that grabs HTML docs and places them in a \ndatabase. The search is done using tsearch2 pretty much installed \naccording to instructions. I have read a couple online guides suggested \nby this list for tuning the postgresql.conf file. I only made modest \nadjustments because I'm not working with top-end hardware and am still \nuncertain of the actual impact of the different paramenters.\n\nI've been learning 'explain' and over the course of reading I have done \nenough query tweaking to discover the source of my headache seems to be \nheadline().\n\nOn a query of 429 documents, of which the avg size of the stripped down \ndocument as stored is 21KB, and the max is 518KB (an anomaly), tsearch2 \nperforms exceptionally well returning most queries in about 100ms.\n\nOn the other hand, following the tsearch2 guide which suggests returning \nthat first portion as a subquery and then generating the headline() from \nthose results, I see the query increase to 4 seconds!\n\nThis seems to be directly related to document size. If I filter out \nthat 518KB doc along with some 100KB docs by returning \"substring( \nstripped_text FROM 0 FOR 50000) AS stripped_text\" I decrease the time to \n1.4 seconds, but increase the risk of not getting a headline.\n\nSeeing as how this problem is directly tied to document size, I'm \nwondering if there are any specific settings in postgresql.conf that may \nhelp, or is this just a fact of life for the headline() function? Or, \ndoes anyone know what the problem is and how to overcome it?\n", "msg_date": "Sun, 22 Jan 2006 01:46:50 -0600", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "tsearch2 headline and postgresql.conf" }, { "msg_contents": "You didn't provides us any query with explain analyze.\nJust to make sure you're fine.\n\n \tOleg\nOn Sun, 22 Jan 2006, [email protected] wrote:\n\n> Hi folks,\n>\n> I'm not sure if this is the right place for this but thought I'd ask. I'm \n> relateively new to postgres having only used it on 3 projects and am just \n> delving into the setup and admin for the second time.\n>\n> I decided to try tsearch2 for this project's search requirements but am \n> having trouble attaining adequate performance. I think I've nailed it down \n> to trouble with the headline() function in tsearch2. \n> In short, there is a crawler that grabs HTML docs and places them in a \n> database. The search is done using tsearch2 pretty much installed according \n> to instructions. I have read a couple online guides suggested by this list \n> for tuning the postgresql.conf file. 
I only made modest adjustments because \n> I'm not working with top-end hardware and am still uncertain of the actual \n> impact of the different paramenters.\n>\n> I've been learning 'explain' and over the course of reading I have done \n> enough query tweaking to discover the source of my headache seems to be \n> headline().\n>\n> On a query of 429 documents, of which the avg size of the stripped down \n> document as stored is 21KB, and the max is 518KB (an anomaly), tsearch2 \n> performs exceptionally well returning most queries in about 100ms.\n>\n> On the other hand, following the tsearch2 guide which suggests returning that \n> first portion as a subquery and then generating the headline() from those \n> results, I see the query increase to 4 seconds!\n>\n> This seems to be directly related to document size. If I filter out that \n> 518KB doc along with some 100KB docs by returning \"substring( stripped_text \n> FROM 0 FOR 50000) AS stripped_text\" I decrease the time to 1.4 seconds, but \n> increase the risk of not getting a headline.\n>\n> Seeing as how this problem is directly tied to document size, I'm wondering \n> if there are any specific settings in postgresql.conf that may help, or is \n> this just a fact of life for the headline() function? Or, does anyone know \n> what the problem is and how to overcome it?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sun, 22 Jan 2006 11:24:55 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tsearch2 headline and postgresql.conf" }, { "msg_contents": "Oleg Bartunov wrote:\n\n> You didn't provides us any query with explain analyze.\n> Just to make sure you're fine.\n>\n> Oleg\n> On Sun, 22 Jan 2006, [email protected] wrote:\n>\n>> Hi folks,\n>>\n>> I'm not sure if this is the right place for this but thought I'd \n>> ask. I'm relateively new to postgres having only used it on 3 \n>> projects and am just delving into the setup and admin for the second \n>> time.\n>>\n>> I decided to try tsearch2 for this project's search requirements but \n>> am having trouble attaining adequate performance. I think I've \n>> nailed it down to trouble with the headline() function in tsearch2. \n>> In short, there is a crawler that grabs HTML docs and places them in \n>> a database. The search is done using tsearch2 pretty much installed \n>> according to instructions. I have read a couple online guides \n>> suggested by this list for tuning the postgresql.conf file. 
I only \n>> made modest adjustments because I'm not working with top-end hardware \n>> and am still uncertain of the actual impact of the different \n>> paramenters.\n>>\n>> I've been learning 'explain' and over the course of reading I have \n>> done enough query tweaking to discover the source of my headache \n>> seems to be headline().\n>>\n>> On a query of 429 documents, of which the avg size of the stripped \n>> down document as stored is 21KB, and the max is 518KB (an anomaly), \n>> tsearch2 performs exceptionally well returning most queries in about \n>> 100ms.\n>>\n>> On the other hand, following the tsearch2 guide which suggests \n>> returning that first portion as a subquery and then generating the \n>> headline() from those results, I see the query increase to 4 seconds!\n>>\n>> This seems to be directly related to document size. If I filter out \n>> that 518KB doc along with some 100KB docs by returning \"substring( \n>> stripped_text FROM 0 FOR 50000) AS stripped_text\" I decrease the time \n>> to 1.4 seconds, but increase the risk of not getting a headline.\n>>\n>> Seeing as how this problem is directly tied to document size, I'm \n>> wondering if there are any specific settings in postgresql.conf that \n>> may help, or is this just a fact of life for the headline() \n>> function? Or, does anyone know what the problem is and how to \n>> overcome it?\n>>\n>\n> Regards,\n> Oleg\n> _____________________________________________________________\n\n\n------------------------\n\nHi Oleg,\n\nThanks for taking time to look at this. Pardon my omission, I was \nwriting that email rather late at night.\n\nThe following results from 'explain analyze' are from my\ndevlopment machine which is a dual PIII 600MHz running Debian\nLinux and Postgres 8.1.2. 512 MB RAM. The production machine\nyields similar results but it is a virtual server so the\nresources are rather unpredictable. 
It is a quad processor and\nhas a larger result set in it's DB.\n\n\nThe original query is:\nexplain analyze\nSELECT url, title, headline(stripped_text,q,\n 'MaxWords=75, MinWords=25, \nStartSel=!!!REPLACE_ME!!!,StopSel=!!!/REPLACE_ME!!!'),\n rank, to_char(timezone('CST', date_last_changed), 'DD Mon YYYY') AS \ndate_last_changed\nFROM\n( SELECT url_id, url, title, stripped_text, date_last_changed, q, \nrank(index_text, q) AS rank\n FROM (web_page w LEFT JOIN url u USING (url_id)), \nto_tsquery('big&search') AS q\n WHERE (index_text <> '') AND (index_text @@ q) AND (w.url_id NOT IN (1,2))\n AND (url NOT LIKE '%badurl.com%')\n ORDER BY rank DESC, date_last_changed DESC\n LIMIT 10 OFFSET 0\n) AS useless\n;\n\n\n...and the resultant output of EXPLAIN ANALYZE is:\n\n Subquery Scan useless (cost=8.02..8.04 rows=1 width=624) (actual \ntime=769.131..2769.320 rows=10 loops=1)\n -> Limit (cost=8.02..8.02 rows=1 width=282) (actual \ntime=566.798..566.932 rows=10 loops=1)\n -> Sort (cost=8.02..8.02 rows=1 width=282) (actual \ntime=566.792..566.870 rows=10 loops=1)\n Sort Key: rank(w.index_text, q.q), w.date_last_changed\n -> Nested Loop (cost=2.00..8.01 rows=1 width=282) \n(actual time=4.068..563.128 rows=178 loops=1)\n -> Nested Loop (cost=2.00..4.96 rows=1 width=221) \n(actual time=3.179..388.610 rows=179 loops=1)\n -> Function Scan on q (cost=0.00..0.01 \nrows=1 width=32) (actual time=0.025..0.028 rows=1 loops=1)\n -> Bitmap Heap Scan on web_page w \n(cost=2.00..4.94 rows=1 width=189) (actual time=3.123..387.547 rows=179 \nloops=1)\n Filter: ((w.index_text <> ''::tsvector) \nAND (w.url_id <> 1) AND (w.url_id <> 2) AND (w.index_text @@ \"outer\".q))\n -> Bitmap Index Scan on \nidx_index_text (cost=0.00..2.00 rows=1 width=0) (actual \ntime=1.173..1.173 rows=277 loops=1)\n Index Cond: (w.index_text @@ \n\"outer\".q)\n -> Index Scan using pk_url on url u \n(cost=0.00..3.03 rows=1 width=65) (actual time=0.044..0.049 rows=1 \nloops=179)\n Index Cond: (\"outer\".url_id = u.url_id)\n Filter: (url !~~ '%badurl.com%'::text)\n Total runtime: 2771.023 ms\n(15 rows)\n-----\nMaybe someone can help me with interpreting the ratio of cost to time \ntoo. 
Do they look appropriate?\n\nTo give some further data, I've stripped down the query to only return \nwhat's necessary for examination here.\n\n\nexplain analyze\nSELECT headline(stripped_text,q)\nFROM\n( SELECT stripped_text, q\n FROM (web_page w LEFT JOIN url u USING (url_id)), \nto_tsquery('big&search') AS q\n WHERE (index_text <> '') AND (index_text @@ q) AND (w.url_id NOT IN (1,2))\n LIMIT 10 OFFSET 0\n) AS useless\n;\n Subquery Scan useless (cost=43.25..75.50 rows=1 width=64) (actual \ntime=383.720..2066.962 rows=10 loops=1)\n -> Limit (cost=43.25..75.49 rows=1 width=129) (actual \ntime=236.814..258.150 rows=10 loops=1)\n -> Nested Loop (cost=43.25..75.49 rows=1 width=129) (actual \ntime=236.807..258.070 rows=10 loops=1)\n Join Filter: (\"inner\".index_text @@ \"outer\".q)\n -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) \n(actual time=0.033..0.033 rows=1 loops=1)\n -> Merge Right Join (cost=43.25..70.37 rows=409 \nwidth=151) (actual time=235.603..237.283 rows=31 loops=1)\n Merge Cond: (\"outer\".url_id = \"inner\".url_id)\n -> Index Scan using pk_url on url u \n(cost=0.00..806.68 rows=30418 width=4) (actual time=0.029..0.731 rows=39 \nloops=1)\n -> Sort (cost=43.25..44.27 rows=409 width=155) \n(actual time=235.523..235.654 rows=31 loops=1)\n Sort Key: w.url_id\n -> Seq Scan on web_page w (cost=0.00..25.51 \nrows=409 width=155) (actual time=0.037..230.577 rows=409 loops=1)\n Filter: ((index_text <> ''::tsvector) \nAND (url_id <> 1) AND (url_id <> 2))\n Total runtime: 2081.569 ms\n(13 rows)\n\nAs a demonstration to note the effect of document size on headline speed \nI've returned only the first 20,000 characters of each document in the \nsubquery shown below. 20K is the size of avg document from the results \nwith 518K being max (an anomaly), and a few 100K documents.\n\nexplain analyze\nSELECT headline(stripped_text,q)\nFROM\n( SELECT substring(stripped_text FROM 0 FOR 20000) AS stripped_text, q\n FROM (web_page w LEFT JOIN url u USING (url_id)), \nto_tsquery('big&search') AS q\n WHERE (index_text <> '') AND (index_text @@ q) AND (w.url_id NOT IN (1,2))\n LIMIT 10 OFFSET 0\n) AS useless\n;\n\n Subquery Scan useless (cost=43.25..75.51 rows=1 width=64) (actual \ntime=316.049..906.045 rows=10 loops=1)\n -> Limit (cost=43.25..75.49 rows=1 width=129) (actual \ntime=239.831..295.151 rows=10 loops=1)\n -> Nested Loop (cost=43.25..75.49 rows=1 width=129) (actual \ntime=239.825..295.068 rows=10 loops=1)\n Join Filter: (\"inner\".index_text @@ \"outer\".q)\n -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) \n(actual time=0.021..0.021 rows=1 loops=1)\n -> Merge Right Join (cost=43.25..70.37 rows=409 \nwidth=151) (actual time=234.711..236.265 rows=31 loops=1)\n Merge Cond: (\"outer\".url_id = \"inner\".url_id)\n -> Index Scan using pk_url on url u \n(cost=0.00..806.68 rows=30418 width=4) (actual time=0.046..0.715 rows=39 \nloops=1)\n -> Sort (cost=43.25..44.27 rows=409 width=155) \n(actual time=234.613..234.731 rows=31 loops=1)\n Sort Key: w.url_id\n -> Seq Scan on web_page w (cost=0.00..25.51 \nrows=409 width=155) (actual time=0.030..229.788 rows=409 loops=1)\n Filter: ((index_text <> ''::tsvector) \nAND (url_id <> 1) AND (url_id <> 2))\n Total runtime: 907.397 ms\n(13 rows)\n\nAnd finally, returning the whole document without the intervention of \nheadline:\n\nexplain analyze\nSELECT stripped_text\nFROM\n( SELECT stripped_text, q\n FROM (web_page w LEFT JOIN url u USING (url_id)), \nto_tsquery('big&search') AS q\n WHERE (index_text <> '') AND (index_text @@ q) AND (w.url_id 
NOT IN (1,2))\n LIMIT 10 OFFSET 0\n) AS useless\n;\n\n Subquery Scan useless (cost=43.25..75.50 rows=1 width=32) (actual \ntime=235.218..253.048 rows=10 loops=1)\n -> Limit (cost=43.25..75.49 rows=1 width=129) (actual \ntime=235.210..252.994 rows=10 loops=1)\n -> Nested Loop (cost=43.25..75.49 rows=1 width=129) (actual \ntime=235.204..252.953 rows=10 loops=1)\n Join Filter: (\"inner\".index_text @@ \"outer\".q)\n -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) \n(actual time=0.024..0.024 rows=1 loops=1)\n -> Merge Right Join (cost=43.25..70.37 rows=409 \nwidth=151) (actual time=234.008..234.766 rows=31 loops=1)\n Merge Cond: (\"outer\".url_id = \"inner\".url_id)\n -> Index Scan using pk_url on url u \n(cost=0.00..806.68 rows=30418 width=4) (actual time=0.058..0.344 rows=39 \nloops=1)\n -> Sort (cost=43.25..44.27 rows=409 width=155) \n(actual time=233.896..233.952 rows=31 loops=1)\n Sort Key: w.url_id\n -> Seq Scan on web_page w (cost=0.00..25.51 \nrows=409 width=155) (actual time=0.031..229.057 rows=409 loops=1)\n Filter: ((index_text <> ''::tsvector) \nAND (url_id <> 1) AND (url_id <> 2))\n Total runtime: 254.057 ms\n(13 rows)\n\n\nAgain, I really appreciate any help you folks can give with this.\n\n\n\n", "msg_date": "Sun, 22 Jan 2006 14:29:01 -0600", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: tsearch2 headline and postgresql.conf" } ]
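Putting the two workarounds from this thread together (LIMIT before calling headline(), and capping how much text headline() has to parse), a combined version might look like the following; the 50000-character cutoff is just an example value, with the already-noted risk that a match beyond the cutoff gets no highlighted excerpt:

    SELECT url, title,
           headline(substring(stripped_text FROM 1 FOR 50000), q,
                    'MaxWords=75, MinWords=25') AS excerpt,
           rank
      FROM ( SELECT url, title, stripped_text, q, rank(index_text, q) AS rank
               FROM (web_page w LEFT JOIN url u USING (url_id)),
                    to_tsquery('big&search') AS q
              WHERE index_text @@ q
              ORDER BY rank DESC
              LIMIT 10 OFFSET 0 ) AS hits;

This keeps headline() down to ten calls per search and bounds the size of each document it has to re-parse, which were the two costs identified above.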
[ { "msg_contents": "\n\nHi,\n\n\nFinally i found the problem of slow backup/restore, i´m only instaled de\nWindows 2000 Service Pack 4... :)\n\nThanks to all\n\n\nFranklin\n \n\n-----Mensagem original-----\nDe: Richard Huxton [mailto:[email protected]] Enviada em: quarta-feira, 30 de\nnovembro de 2005 14:28\nPara: Franklin Haut\nCc: 'Ron'; [email protected]\nAssunto: Re: RES: [PERFORM] pg_dump slow\n\nFranklin Haut wrote:\n> Hi,\n> \n> Yes, my problem is that the pg_dump takes 40 secs to complete under \n> WinXP and 50 minutes under W2K! The same database, the same hardware!, \n> only diferrent Operational Systems.\n> \n> The hardware is: \n> Pentium4 HT 3.2 GHz\n> 1024 Mb Memory\n> HD 120Gb SATA\n\nThere have been reports of very slow network performance on Win2k systems\nwith the default configuration. You'll have to check the archives for\ndetails I'm afraid. This might apply to you.\n\nIf you're happy that doesn't affect you then I'd look at the disk system\n- perhaps XP has newer drivers than Win2k.\n\nWhat do the MS performance-charts show is happening? Specifically, CPU and\ndisk I/O.\n\n--\n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Mon, 23 Jan 2006 07:46:27 -0300", "msg_from": "\"Franklin Haut\" <[email protected]>", "msg_from_op": true, "msg_subject": "ENC: RES: pg_dump slow - Solution" } ]
[ { "msg_contents": "I'm investigating a potential IO issue. We're running 7.4 on AIX 5.1. \nDuring periods of high activity (reads, writes, and vacuums), we are \nseeing iostat reporting 100% disk usage. I have a feeling that the \niostat numbers are misleading. I can make iostat usage jump from less \nthan 10% to greater than 95% by running a single vacuum against a \nmoderate sized table (no noticeable change in the other activity).\n\nDo I actually have a problem with IO? Whether I do or not, at what \npoint should I start to be concerned about IO problems? If my \nunderstanding is correct, it should be based on the wait time. Here's \nthe output of vmstat during heavy load (reads, writes, several daily \nvacuums and a nightly pg_dump). Wait times appear to be alright, but my \nunderstanding of when to start being concerned about IO starvation is \nfoggy at best.\n\nvmstat 5\nkthr memory page faults cpu\n----- ----------- ------------------------ ------------ -----------\nr b avm fre re pi po fr sr cy in sy cs us sy id wa\n2 2 1548418 67130 0 0 0 141 99 0 1295 22784 13128 11 4 71 14\n3 3 1548422 66754 0 0 0 2127 2965 0 2836 29981 25091 26 4 39 31\n2 3 1548423 66908 0 0 0 2369 3221 0 3130 34725 28424 25 7 38 30\n3 5 1548423 67029 0 0 0 2223 3097 0 2722 31885 25929 26 9 33 32\n3 3 1548423 67066 0 0 0 2366 3194 0 2824 43546 36226 30 5 35 31\n2 4 1548423 67004 0 0 0 2123 3236 0 2662 25756 21841 22 4 39 35\n2 4 1548957 66277 0 0 0 1928 10322 0 2941 36340 29906 28 6 34 33\n3 5 1549245 66024 0 0 0 2324 14291 0 2872 39413 25615 25 4 34 37\n2 6 1549282 66107 0 0 0 1930 11189 0 2832 72062 32311 26 5 32 38\n2 4 1549526 65855 0 0 0 2375 9278 0 2822 40368 32156 29 5 37 29\n2 3 1548984 66227 0 0 0 1732 5065 0 2825 39240 30788 26 5 40 30\n3 4 1549341 66027 0 0 0 2325 6453 0 2790 37567 30509 28 5 37 30\n2 4 1549377 65789 0 0 0 1633 2731 0 2648 35533 27395 20 5 39 36\n1 5 1549765 65666 0 0 0 2272 3340 0 2792 43002 34090 26 5 29 40\n2 3 1549787 65646 0 0 0 1779 2679 0 2596 37446 29184 22 5 37 36\n2 5 1548985 66263 0 0 0 2077 3086 0 2778 49579 39940 26 9 35 30\n2 4 1548985 66473 0 0 0 2078 3093 0 2682 23274 18460 22 3 41 34\n4 3 1548985 66263 0 0 0 2177 3344 0 2734 43029 35536 29 5 38 28\n1 4 1548985 66491 0 0 0 1978 3215 0 2739 28291 22672 23 4 41 32\n3 3 1548985 66422 0 0 0 1732 2469 0 2852 71865 30850 28 5 38 29\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Tue, 24 Jan 2006 11:35:02 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Investigating IO Saturation" }, { "msg_contents": "Brad Nicholson wrote:\n> I'm investigating a potential IO issue. We're running 7.4 on AIX \n> 5.1. During periods of high activity (reads, writes, and vacuums), we \n> are seeing iostat reporting 100% disk usage. I have a feeling that \n> the iostat numbers are misleading. I can make iostat usage jump from \n> less than 10% to greater than 95% by running a single vacuum against a \n> moderate sized table (no noticeable change in the other activity).\n>\nWell that isn't surprising. Vacuum is brutal especially on 7.4 as that \nis pre background writer. What type of IO do you have available (RAID, \nSCSI?)\n\n> Do I actually have a problem with IO? Whether I do or not, at what \n> point should I start to be concerned about IO problems? If my \n> understanding is correct, it should be based on the wait time. Here's \n> the output of vmstat during heavy load (reads, writes, several daily \n> vacuums and a nightly pg_dump). 
Wait times appear to be alright, but \n> my understanding of when to start being concerned about IO starvation \n> is foggy at best.\n>\n> vmstat 5\n> kthr memory page faults cpu\n> ----- ----------- ------------------------ ------------ -----------\n> r b avm fre re pi po fr sr cy in sy cs us sy id wa\n> 2 2 1548418 67130 0 0 0 141 99 0 1295 22784 13128 11 4 71 14\n> 3 3 1548422 66754 0 0 0 2127 2965 0 2836 29981 25091 26 4 39 31\n> 2 3 1548423 66908 0 0 0 2369 3221 0 3130 34725 28424 25 7 38 30\n> 3 5 1548423 67029 0 0 0 2223 3097 0 2722 31885 25929 26 9 33 32\n> 3 3 1548423 67066 0 0 0 2366 3194 0 2824 43546 36226 30 5 35 31\n> 2 4 1548423 67004 0 0 0 2123 3236 0 2662 25756 21841 22 4 39 35\n> 2 4 1548957 66277 0 0 0 1928 10322 0 2941 36340 29906 28 6 \n> 34 33\n> 3 5 1549245 66024 0 0 0 2324 14291 0 2872 39413 25615 25 4 \n> 34 37\n> 2 6 1549282 66107 0 0 0 1930 11189 0 2832 72062 32311 26 5 \n> 32 38\n> 2 4 1549526 65855 0 0 0 2375 9278 0 2822 40368 32156 29 5 37 29\n> 2 3 1548984 66227 0 0 0 1732 5065 0 2825 39240 30788 26 5 40 30\n> 3 4 1549341 66027 0 0 0 2325 6453 0 2790 37567 30509 28 5 37 30\n> 2 4 1549377 65789 0 0 0 1633 2731 0 2648 35533 27395 20 5 39 36\n> 1 5 1549765 65666 0 0 0 2272 3340 0 2792 43002 34090 26 5 29 40\n> 2 3 1549787 65646 0 0 0 1779 2679 0 2596 37446 29184 22 5 37 36\n> 2 5 1548985 66263 0 0 0 2077 3086 0 2778 49579 39940 26 9 35 30\n> 2 4 1548985 66473 0 0 0 2078 3093 0 2682 23274 18460 22 3 41 34\n> 4 3 1548985 66263 0 0 0 2177 3344 0 2734 43029 35536 29 5 38 28\n> 1 4 1548985 66491 0 0 0 1978 3215 0 2739 28291 22672 23 4 41 32\n> 3 3 1548985 66422 0 0 0 1732 2469 0 2852 71865 30850 28 5 38 29\n>\n\n", "msg_date": "Tue, 24 Jan 2006 08:41:55 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Investigating IO Saturation" }, { "msg_contents": "Joshua D. Drake wrote:\n\n> Brad Nicholson wrote:\n>\n>> I'm investigating a potential IO issue. We're running 7.4 on AIX \n>> 5.1. During periods of high activity (reads, writes, and vacuums), \n>> we are seeing iostat reporting 100% disk usage. I have a feeling \n>> that the iostat numbers are misleading. I can make iostat usage jump \n>> from less than 10% to greater than 95% by running a single vacuum \n>> against a moderate sized table (no noticeable change in the other \n>> activity).\n>>\n> Well that isn't surprising. Vacuum is brutal especially on 7.4 as that \n> is pre background writer. What type of IO do you have available (RAID, \n> SCSI?)\n>\nData LUN is RAID 10, wal LUN is RAID 1.\n\n\n-- \nBrad Nicholson 416-673-4106 [email protected]\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Tue, 24 Jan 2006 12:03:56 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Investigating IO Saturation" }, { "msg_contents": "Brad Nicholson <[email protected]> writes:\n> I'm investigating a potential IO issue. We're running 7.4 on AIX 5.1. \n> During periods of high activity (reads, writes, and vacuums), we are \n> seeing iostat reporting 100% disk usage. I have a feeling that the \n> iostat numbers are misleading. I can make iostat usage jump from less \n> than 10% to greater than 95% by running a single vacuum against a \n> moderate sized table (no noticeable change in the other activity).\n\nThat's not particularly surprising, and I see no reason to think that\niostat is lying to you.\n\nMore recent versions of PG include parameters that you can use to\n\"throttle\" vacuum's I/O demand ... 
but unthrottled, it's definitely\nan I/O hog.\n\nThe vmstat numbers suggest that vacuum is not completely killing you,\nbut you could probably get some improvement in foreground query\nperformance by throttling it back. There are other good reasons to\nconsider an update, anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Jan 2006 12:07:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Investigating IO Saturation " }, { "msg_contents": "Brad Nicholson wrote:\n> Joshua D. Drake wrote:\n>\n>> Brad Nicholson wrote:\n>>\n>>> I'm investigating a potential IO issue. We're running 7.4 on AIX \n>>> 5.1. During periods of high activity (reads, writes, and vacuums), \n>>> we are seeing iostat reporting 100% disk usage. I have a feeling \n>>> that the iostat numbers are misleading. I can make iostat usage \n>>> jump from less than 10% to greater than 95% by running a single \n>>> vacuum against a moderate sized table (no noticeable change in the \n>>> other activity).\n>>>\n>> Well that isn't surprising. Vacuum is brutal especially on 7.4 as \n>> that is pre background writer. What type of IO do you have available \n>> (RAID, SCSI?)\n>>\n> Data LUN is RAID 10, wal LUN is RAID 1.\nHow many disks?\n\n>\n>\n\n", "msg_date": "Tue, 24 Jan 2006 09:09:18 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Investigating IO Saturation" }, { "msg_contents": "[email protected] (Tom Lane) writes:\n\n> Brad Nicholson <[email protected]> writes:\n>> I'm investigating a potential IO issue. We're running 7.4 on AIX 5.1. \n>> During periods of high activity (reads, writes, and vacuums), we are \n>> seeing iostat reporting 100% disk usage. I have a feeling that the \n>> iostat numbers are misleading. I can make iostat usage jump from less \n>> than 10% to greater than 95% by running a single vacuum against a \n>> moderate sized table (no noticeable change in the other activity).\n>\n> That's not particularly surprising, and I see no reason to think that\n> iostat is lying to you.\n>\n> More recent versions of PG include parameters that you can use to\n> \"throttle\" vacuum's I/O demand ... but unthrottled, it's definitely\n> an I/O hog.\n\nI believe it's 7.4 where the cost-based vacuum parameters entered in,\nso that would, in principle, already be an option.\n\n[rummaging around...]\n\nHmm.... There was a patch for 7.4, but it's only \"standard\" as of\n8.0...\n\n> The vmstat numbers suggest that vacuum is not completely killing you,\n> but you could probably get some improvement in foreground query\n> performance by throttling it back. There are other good reasons to\n> consider an update, anyway.\n\nI'd have reservations about \"throttling it back\" because that would\nlead to VACUUMs running, and holding transactions open, for 6 hours\ninstead of 2.\n\nThat is consistent with benchmarking; there was a report of the\ndefault policy cutting I/O load by ~80% at the cost of vacuums taking\n3x as long to complete.\n\nThe \"real\" answer is to move to 8.x, where VACUUM doesn't chew up\nshared memory cache as it does in 7.4 and earlier.\n\nBut in the interim, we need to make sure we tilt over the right\nwindmills, or something of the sort :-).\n-- \noutput = reverse(\"gro.gultn\" \"@\" \"enworbbc\")\nhttp://www3.sympatico.ca/cbbrowne/linuxxian.html\n\"Women and cats will do as they please, and men and dogs should relax\nand get used to the idea.\" -- Robert A. 
Heinlein\n", "msg_date": "Tue, 24 Jan 2006 14:43:59 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Investigating IO Saturation" }, { "msg_contents": "On Tue, Jan 24, 2006 at 02:43:59PM -0500, Chris Browne wrote:\n> I believe it's 7.4 where the cost-based vacuum parameters entered in,\n> so that would, in principle, already be an option.\n> \n> [rummaging around...]\n> \n> Hmm.... There was a patch for 7.4, but it's only \"standard\" as of\n> 8.0...\n\nAnd it doesn't work very well without changes to buffering. You need\nboth pieces to get it to work.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n", "msg_date": "Tue, 24 Jan 2006 16:00:37 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Investigating IO Saturation" } ]
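The throttling discussion above boils down to the cost-based vacuum delay settings that became standard in 8.0. On 8.0 and later they can be set for a single session before running a manual VACUUM; a minimal sketch, with illustrative values only and a hypothetical table name (on 7.4 the backported patch plus the buffering changes mentioned above are needed instead):

    SET vacuum_cost_delay = 20;   -- sleep 20 ms each time the cost budget is used up; 0 disables throttling
    SET vacuum_cost_limit = 200;  -- page-cost budget accumulated between sleeps
    VACUUM ANALYZE some_large_table;

As Chris notes, the trade-off is that a throttled vacuum runs, and holds its transaction open, several times longer, so the delay has to be weighed against how long open transactions can be tolerated.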
[ { "msg_contents": "Hi all,\n\n I have a performance problem and I don't know where is my bottleneck.\nI have postgresql 7.4.2 running on a debian server with kernel\n2.4.26-1-686-smp with two Xeon(TM) at 2.80GHz and 4GB of RAM and a RAID\n5 made with SCSI disks. Maybe its not the latest hardware but I think\nit's not that bad.\n\n My problem is that the general performance is not good enough and I\ndon't know where is the bottleneck. It could be because the queries are\nnot optimized as they should be, but I also think it can be a postgresql\nconfiguration problem or hardware problem (HDs not beeing fast enough,\nnot enough RAM, ... )\n\n The configuration of postgresql is the default, I tried to tune the\npostgresql.conf and the results where disappointing, so I left again the\ndefault values.\n\nWhen I do top I get:\ntop - 19:10:24 up 452 days, 15:48, 4 users, load average: 6.31, 6.27, 6.52\nTasks: 91 total, 8 running, 83 sleeping, 0 stopped, 0 zombie\nCpu(s): 24.8% user, 15.4% system, 0.0% nice, 59.9% idle\nMem: 3748956k total, 3629252k used, 119704k free, 57604k buffers\nSwap: 2097136k total, 14188k used, 2082948k free, 3303620k cached\n\n Most of the time the idle value is even higher than 60%.\n\nI know it's a problem with a very big scope, but could you give me a\nhint about where I should look to?\n\n\nThank you very much\n-- \nArnau\n\n", "msg_date": "Tue, 24 Jan 2006 19:40:22 +0100", "msg_from": "Arnau Rebassa Villalonga <[email protected]>", "msg_from_op": true, "msg_subject": "Where is my bottleneck?" }, { "msg_contents": "Arnau Rebassa Villalonga wrote:\n> \n> The configuration of postgresql is the default, I tried to tune the\n> postgresql.conf and the results where disappointing, so I left again the\n> default values.\n\nThat's the first thing to fix. Go to the page below and read through the \n \"Performance Tuning\" article.\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 30 Jan 2006 09:25:45 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where is my bottleneck?" }, { "msg_contents": "On Tue, Jan 24, 2006 at 07:40:22PM +0100, Arnau Rebassa Villalonga wrote:\n> I have a performance problem and I don't know where is my bottleneck.\n[snip]\n> Most of the time the idle value is even higher than 60%.\n\nIt's generally a fairly safe bet that if you are running slow and your \ncpu is idle, your i/o isn't fast enough.\n\nMike Stone\n", "msg_date": "Mon, 30 Jan 2006 06:00:49 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where is my bottleneck?" }, { "msg_contents": "Arnau,\n\nOn 1/24/06 10:40 AM, \"Arnau Rebassa Villalonga\" <[email protected]> wrote:\n\n> I know it's a problem with a very big scope, but could you give me a\n> hint about where I should look to?\n\nTry this:\n\n time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=2000000 && sync\"\n time dd if=bigfile of=/dev/null bs=8k\n\nAnd report the results back here. If it takes too long to complete (more\nthan a couple of minutes), bring up another window and run \"vmstat 1\" then\nreport the values in the columns \"bi\" and \"bo\".\n\n- Luke\n\n\n", "msg_date": "Mon, 30 Jan 2006 08:49:04 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where is my bottleneck?" 
}, { "msg_contents": "On Tue, Jan 24, 2006 at 07:40:22PM +0100, Arnau Rebassa Villalonga wrote:\n> Hi all,\n> \n> I have a performance problem and I don't know where is my bottleneck.\n> I have postgresql 7.4.2 running on a debian server with kernel\n\nYou should really upgrade to the latest 7.4 version. You're probably\nvulnerable to some data-loss issues.\n\n> 2.4.26-1-686-smp with two Xeon(TM) at 2.80GHz and 4GB of RAM and a RAID\n> 5 made with SCSI disks. Maybe its not the latest hardware but I think\n\nGenerally speaking, databases (or anything else that does a lot of\nrandom writes) don't like RAID5.\n\n> My problem is that the general performance is not good enough and I\n> don't know where is the bottleneck. It could be because the queries are\n> not optimized as they should be, but I also think it can be a postgresql\n> configuration problem or hardware problem (HDs not beeing fast enough,\n> not enough RAM, ... )\n\nWhat kind of performance are you expecting? What are you actually\nseeing?\n\n> The configuration of postgresql is the default, I tried to tune the\n> postgresql.conf and the results where disappointing, so I left again the\n> default values.\n\nProbably not so good... you'll most likely want to tune shared_buffers,\nsort_mem and effective_cache_size at a minimum. Granted, that might not\nbe your current bottleneck, but it would probably be your next.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 30 Jan 2006 14:10:25 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where is my bottleneck?" }, { "msg_contents": "Hi, Michael,\n\nMichael Stone wrote:\n\n>> I have a performance problem and I don't know where is my bottleneck.\n> \n> [snip]\n> \n>> Most of the time the idle value is even higher than 60%.\n> \n> It's generally a fairly safe bet that if you are running slow and your\n> cpu is idle, your i/o isn't fast enough.\n\nOr the query is misoptimized (low work_mem, missing indices) and cause\nmuch more I/O than necessary.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 02 Feb 2006 13:14:55 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Where is my bottleneck?" } ]
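Before assuming a hardware bottleneck, it is worth confirming whether the parameters Jim lists are still at their 7.4 defaults. A small sketch that reads them back from the server (sort_mem is the 7.4 name; it became work_mem in 8.0):

    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'sort_mem', 'effective_cache_size');

shared_buffers only takes effect after a restart, while the other two can be changed per session for testing.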
[ { "msg_contents": "Hi,\nWe are running Postgresql 8.1, and getting dramatically inconsistant results\nafter running VACUUM ANALYZE. Sometimes after analyzing the database, the\nquery planner chooses a very efficient plan (15 rows, 4.744 ms), and\nsometimes a terrible one (24 rows, 3536.995 ms). Here's the abbreviated\nquery:\n\nSELECT * FROM t1 INNER JOIN (t2 INNER JOIN (t3 INNER JOIN t4 ON t3.gid =\nt4.gid) ON t3.gid = t2.gid) ON t2.eid = t1.eid WHERE ...\n\nIn the efficient plan, t2 is joined to t3 & t4 before being joined to t1.\nThe inefficient plan joins t1 to t2 before joining to the other tables.\n\nWe've experimented with different settings, such as shared_buffers &\nmax_fsm_pages, to no avail. Anybody have a suggestion for getting the\nefficient plan to execute consistantly? If you'd like to see the actual\nquery & query plans let me know.\n\nBest Regards,\nDan\n\n\n", "msg_date": "Tue, 24 Jan 2006 16:15:57 -0700", "msg_from": "\"Daniel Gish\" <[email protected]>", "msg_from_op": true, "msg_subject": "Inconsistant query plan" }, { "msg_contents": "On Tue, 2006-01-24 at 17:15, Daniel Gish wrote:\n> Hi,\n> We are running Postgresql 8.1, and getting dramatically inconsistant results\n> after running VACUUM ANALYZE. Sometimes after analyzing the database, the\n> query planner chooses a very efficient plan (15 rows, 4.744 ms), and\n> sometimes a terrible one (24 rows, 3536.995 ms). Here's the abbreviated\n> query:\n> \n> SELECT * FROM t1 INNER JOIN (t2 INNER JOIN (t3 INNER JOIN t4 ON t3.gid =\n> t4.gid) ON t3.gid = t2.gid) ON t2.eid = t1.eid WHERE ...\n> \n> In the efficient plan, t2 is joined to t3 & t4 before being joined to t1.\n> The inefficient plan joins t1 to t2 before joining to the other tables.\n> \n> We've experimented with different settings, such as shared_buffers &\n> max_fsm_pages, to no avail. Anybody have a suggestion for getting the\n> efficient plan to execute consistantly? If you'd like to see the actual\n> query & query plans let me know.\n\nHave you adjusted the stats target for that column? See \\h alter table\nin psql for the syntax for that. Then run analyze again.\n", "msg_date": "Tue, 24 Jan 2006 17:30:18 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistant query plan" }, { "msg_contents": "On Tue, Jan 24, 2006 at 04:15:57PM -0700, Daniel Gish wrote:\n> We are running Postgresql 8.1, and getting dramatically inconsistant results\n> after running VACUUM ANALYZE. Sometimes after analyzing the database, the\n> query planner chooses a very efficient plan (15 rows, 4.744 ms), and\n> sometimes a terrible one (24 rows, 3536.995 ms). Here's the abbreviated\n> query:\n> \n> SELECT * FROM t1 INNER JOIN (t2 INNER JOIN (t3 INNER JOIN t4 ON t3.gid =\n> t4.gid) ON t3.gid = t2.gid) ON t2.eid = t1.eid WHERE ...\n\nHow abbreviated is that example? Are you actually joining more\ntables than that? In another recent thread varying plans were\nattributed to exceeding geqo_threshold:\n\nhttp://archives.postgresql.org/pgsql-performance/2006-01/msg00132.php\n\nDoes your situation look similar?\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 24 Jan 2006 16:58:57 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistant query plan" }, { "msg_contents": "Hi,\nThanks for your response. The actual query is below; the joins are only 4\ndeep. 
Adjusting the stats target did help, but not dramatically.\n\n\nEFFICIENT PLAN:\n\n# explain analyze SELECT ev.eid FROM events ev INNER JOIN (events_join ej\nINNER JOIN (groups_join gj INNER JOIN groups g ON gj.gid = g.gid) ON ej.gid\n= gj.gid) ON ev.eid = ej.eid WHERE ev.status > 0 AND ej.type_id = 1 AND\ng.deleted = 'f' AND g.deactivated != 't' AND ev.type_id >= 0 AND gj.uid=3\nAND ev.timestart BETWEEN '01/23/2006'::timestamp AND '02/23/2006'::timestamp\n+ '1 day - 1 minute';\n \n QUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n---------------------------------------\n Nested Loop (cost=0.00..8370.41 rows=25 width=4) (actual time=4.510..4.510\nrows=0 loops=1)\n -> Nested Loop (cost=0.00..6124.63 rows=673 width=4) (actual\ntime=0.132..3.116 rows=92 loops=1)\n -> Nested Loop (cost=0.00..70.95 rows=8 width=8) (actual\ntime=0.080..2.226 rows=19 loops=1)\n -> Index Scan using groups_join_uid_idx on groups_join gj\n(cost=0.00..16.27 rows=11 width=4) (actual time=0.019..0.471 rows=196\nloops=1)\n Index Cond: (uid = 3)\n -> Index Scan using groups_pkey on groups g\n(cost=0.00..4.96 rows=1 width=4) (actual time=0.005..0.006 rows=0 loops=196)\n Index Cond: (\"outer\".gid = g.gid)\n Filter: ((NOT deleted) AND (deactivated <> true))\n -> Index Scan using events_join_gid_idx on events_join ej\n(cost=0.00..752.45 rows=341 width=8) (actual time=0.010..0.027 rows=5\nloops=19)\n Index Cond: (ej.gid = \"outer\".gid)\n Filter: (type_id = 1)\n -> Index Scan using events_pkey on events ev (cost=0.00..3.32 rows=1\nwidth=4) (actual time=0.012..0.012 rows=0 loops=92)\n Index Cond: (ev.eid = \"outer\".eid)\n Filter: ((status > 0) AND (type_id >= 0) AND (timestart >=\n'2006-01-23 00:00:00'::timestamp without time zone) AND (timestart <=\n'2006-02-23 23:59:00'::timestamp without time zone))\n Total runtime: 4.744 ms\n(15 rows)\n\n\nINEFFICIENT PLAN:\n\n# explain analyze SELECT ev.eid FROM events ev INNER JOIN (events_join ej\nINNER JOIN (groups_join gj INNER JOIN groups g ON gj.gid = g.gid) ON ej.gid\n= g.gid) ON ev.eid = ej.eid WHERE ev.status > 0 AND ej.type_id = 1 AND\ng.deleted = 'f' AND g.deactivated != 't' AND ev.type_id >= 0 AND gj.uid=3\nAND ev.timestart BETWEEN '01/23/2006'::timestamp AND '02/23/2006'::timestamp\n+ '1 day - 1 minute';\n \n QUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n---------------------------------------\n Nested Loop (cost=978.19..37161.81 rows=133 width=4) (actual\ntime=2511.676..2511.676 rows=0 loops=1)\n -> Merge Join (cost=978.19..22854.00 rows=4244 width=4) (actual\ntime=1718.420..2510.128 rows=92 loops=1)\n Merge Cond: (\"outer\".gid = \"inner\".gid)\n -> Index Scan using events_join_gid_idx on events_join ej\n(cost=0.00..23452.59 rows=740598 width=8) (actual time=0.014..1532.447\nrows=626651 loops=1)\n Filter: (type_id = 1)\n -> Sort (cost=978.19..978.47 rows=113 width=8) (actual\ntime=2.371..2.540 rows=101 loops=1)\n Sort Key: g.gid\n -> Nested Loop (cost=0.00..974.33 rows=113 width=8) (actual\ntime=0.078..2.305 rows=19 loops=1)\n -> Index Scan using groups_join_uid_idx on groups_join\ngj (cost=0.00..182.65 rows=159 width=4) (actual time=0.017..0.485 rows=196\nloops=1)\n Index Cond: (uid = 3)\n -> Index Scan using groups_pkey on groups g\n(cost=0.00..4.97 rows=1 width=4) (actual time=0.006..0.006 rows=0 loops=196)\n Index Cond: 
(\"outer\".gid = g.gid)\n Filter: ((NOT deleted) AND (deactivated <> true))\n -> Index Scan using events_pkey on events ev (cost=0.00..3.36 rows=1\nwidth=4) (actual time=0.013..0.013 rows=0 loops=92)\n Index Cond: (ev.eid = \"outer\".eid)\n Filter: ((status > 0) AND (type_id >= 0) AND (timestart >=\n'2006-01-23 00:00:00'::timestamp without time zone) AND (timestart <=\n'2006-02-23 23:59:00'::timestamp without time zone))\n Total runtime: 2511.920 ms\n(17 rows)\n\nRegards,\nDan\n\n\n\n", "msg_date": "Tue, 24 Jan 2006 18:04:10 -0700", "msg_from": "\"Daniel Gish\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inconsistant query plan" }, { "msg_contents": "Daniel Gish wrote:\n> Hi,\n> Thanks for your response. The actual query is below; the joins are only 4\n> deep. Adjusting the stats target did help, but not dramatically.\n> \n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> ----------------------------------------------------------------------------\n> ---------------------------------------\n> Nested Loop (cost=978.19..37161.81 rows=133 width=4) (actual\n> time=2511.676..2511.676 rows=0 loops=1)\n> -> Merge Join (cost=978.19..22854.00 rows=4244 width=4) (actual\n> time=1718.420..2510.128 rows=92 loops=1)\n > ...\n > -> Nested Loop (cost=0.00..974.33 rows=113 width=8) (actual\ntime=0.078..2.305 rows=19 loops=1)\n\nI have a similar problem recently. An importat diagnostic tool for these issues \nis the pg_stats view. Let me suggest that you post the relevant lines from \npg_stats, so that with some help you will be able to discover what data advises \nthe query planner to overestimate the cardinality of some joins and \nunderestimate others.\n\n\nAlex\n\n\n-- \n*********************************************************************\nhttp://www.barettadeit.com/\nBaretta DE&IT\nA division of Baretta SRL\n\ntel. +39 02 370 111 55\nfax. +39 02 370 111 54\n\nOur technology:\n\nThe Application System/Xcaml (AS/Xcaml)\n<http://www.asxcaml.org/>\n\nThe FreerP Project\n<http://www.freerp.org/>\n", "msg_date": "Wed, 25 Jan 2006 10:05:44 +0100", "msg_from": "Alessandro Baretta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inconsistant query plan" }, { "msg_contents": "Hi, everybody!\n\nI experience problems with backing up one of my Postgresql 8.1.2 installations.\nThe problem is that when I do DB backup, all queries begin to run very slow =(\nThe database only grows in its size (~20Gb today), and the number of transactions increases every month.\nA year ago such slow down was OK, but today it is unacceptable.\n\nI found out that pg_dump dramatically increases hdd I/O and because of this most of all\nqueries begin to run slower. 
My application using this DB server is time-critical, so\nany kind of slow down is critical.\n\nI've written a perl script to limit pg_dump output bandwidth, a simple traffic shaper,\nwhich runs as: pg_dumpall -c -U postgres | limit_bandwidth.pl | bzip2 > pgsql_dump.bz2\nThe limit_bandwidth.pl script limits pipe output at 4Mb/sec rate, which seems to be ok.\n\nIs there any other solution to avoid this problem?\n\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n", "msg_date": "Wed, 25 Jan 2006 15:01:34 +0300", "msg_from": "Evgeny Gridasov <[email protected]>", "msg_from_op": false, "msg_subject": "DB responce during DB dump" }, { "msg_contents": "Evgeny Gridasov wrote:\n> Hi, everybody!\n> \n> I experience problems with backing up one of my Postgresql 8.1.2 installations.\n> The problem is that when I do DB backup, all queries begin to run very slow =(\n> The database only grows in its size (~20Gb today), and the number of transactions increases every month.\n> A year ago such slow down was OK, but today it is unacceptable.\n> \n> I found out that pg_dump dramatically increases hdd I/O and because of this most of all\n> queries begin to run slower. My application using this DB server is time-critical, so\n> any kind of slow down is critical.\n> \n> I've written a perl script to limit pg_dump output bandwidth, a simple traffic shaper,\n> which runs as: pg_dumpall -c -U postgres | limit_bandwidth.pl | bzip2 > pgsql_dump.bz2\n> The limit_bandwidth.pl script limits pipe output at 4Mb/sec rate, which seems to be ok.\n> \n> Is there any other solution to avoid this problem?\n\nThat's an interesting solution, and I'd guess people might like to see \nit posted to the list if it's not too big.\n\nAlso, there's no reason you have to dump from the same machine, you can \ndo so over the network which should reduce activity a little bit.\n\nBasically though, it sounds like you either need more disk I/O or a \ndifferent approach.\n\nHave you looked into using PITR log-shipping or replication (e.g. slony) \nto have an off-machine backup?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 25 Jan 2006 12:44:45 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB responce during DB dump" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Evgeny Gridasov wrote:\n>> I've written a perl script to limit pg_dump output bandwidth,\n>> ...\n>> Is there any other solution to avoid this problem?\n\n> That's an interesting solution, and I'd guess people might like to see \n> it posted to the list if it's not too big.\n\nYears ago there was some experimentation with dump-rate throttling logic\ninside pg_dump itself --- there's still a comment about it in pg_dump.c.\nThe experiment didn't seem very successful, which is why it never got to\nbe a permanent feature. I'm curious to know why this perl script is\ndoing a better job than we were able to do inside pg_dump.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Jan 2006 11:21:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB responce during DB dump " }, { "msg_contents": "All I was trying to achieve is to limit I/O rate done by pg_dump.\nThe script is a very simple pipe rate limitter and nothing more:\nit reads input, but outputs data no more than at rate specified.\n\nI guess it helps because even if pg_dump outputs data at 20 mb/sec,\nthe script won't be able to read it at rate higher than output rate. 
Pipe\nbuffer is not infinitive, so pg_dump output rate and hard disk reads become\nalmost equal the input rate of my perl script.\n\nOn Wed, 25 Jan 2006 11:21:58 -0500\nTom Lane <[email protected]> wrote:\n\n> Years ago there was some experimentation with dump-rate throttling logic\n> inside pg_dump itself --- there's still a comment about it in pg_dump.c.\n> The experiment didn't seem very successful, which is why it never got to\n> be a permanent feature. I'm curious to know why this perl script is\n> doing a better job than we were able to do inside pg_dump.\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n", "msg_date": "Wed, 25 Jan 2006 19:43:09 +0300", "msg_from": "Evgeny Gridasov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB responce during DB dump" }, { "msg_contents": "Ok, It's VERY simple =) here:\nhttp://deepcore.i-free.ru/simple_shaper.pl\n\nI could dump it to a spare machine, but I don't have one.\nCurrent DB server is 2xXEON / 4GbRAM / RAID10 (4 SCSI HDD). Performance is excellent, except during backups.\n\nI wanted to set up some kind of replication but it's useless - I don't have a spare machine now, may be in future...\n\n\nOn Wed, 25 Jan 2006 12:44:45 +0000\nRichard Huxton <[email protected]> wrote:\n\n> \n> That's an interesting solution, and I'd guess people might like to see \n> it posted to the list if it's not too big.\n> \n> Also, there's no reason you have to dump from the same machine, you can \n> do so over the network which should reduce activity a little bit.\n> \n> Basically though, it sounds like you either need more disk I/O or a \n> different approach.\n> \n> Have you looked into using PITR log-shipping or replication (e.g. slony) \n> to have an off-machine backup?\n\n-- \nEvgeny Gridasov\nSoftware Engineer \nI-Free, Russia\n", "msg_date": "Wed, 25 Jan 2006 19:47:27 +0300", "msg_from": "Evgeny Gridasov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DB responce during DB dump" } ]
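For the plan-instability part of the thread above, the statistics-target change Scott suggests is applied per column and only takes effect after a fresh ANALYZE. A sketch against the join column from Daniel's plans (200 is just an example value; the 8.1 default target is 10):

    ALTER TABLE events_join ALTER COLUMN gid SET STATISTICS 200;
    ANALYZE events_join;

Raising the target gives the planner a larger sample for events_join.gid, which feeds the join estimate that goes wrong in the inefficient plan (4244 estimated rows versus 92 actual).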
[ { "msg_contents": "We recently segmented a large table into calendar month slices and were going \nto to replace the original, but we are not getting the results we think it \nshould... Everything is vacuumed, and we are using 8.0.3 on amd64.\n\nAnything anyone can suggest would be appreciated, our backs against the wall.\n\n=> explain select suck_id from sucks_new where suck_id in ( select id as \nsuck_id from saved_cart_items where \npublish_id='60160b57a1969fa228ae3470fbe7a50a' );\n \nQUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=5311290.80..7124499.86 rows=5472 width=32)\n Join Filter: (\"outer\".suck_id = \"inner\".id)\n -> Subquery Scan sucks_new (cost=5309907.40..6181642.53 rows=13947762 \nwidth=32)\n -> Unique (cost=5309907.40..6042164.91 rows=13947762 width=212)\n -> Sort (cost=5309907.40..5344776.81 rows=13947762 width=212)\n Sort Key: suck_id, sitenum\n -> Append (cost=0.00..632289.24 rows=13947762 \nwidth=212)\n -> Subquery Scan \"*SELECT* \n1\" (cost=0.00..83767.54 rows=1703577 width=209)\n -> Seq Scan on sucks_2006_01 \n(cost=0.00..66731.77 rows=1703577 width=209)\n -> Subquery Scan \"*SELECT* \n2\" (cost=0.00..93670.20 rows=2081560 width=209)\n -> Seq Scan on sucks_2005_12 \n(cost=0.00..72854.60 rows=2081560 width=209)\n -> Subquery Scan \"*SELECT* \n3\" (cost=0.00..91311.16 rows=2021958 width=210)\n -> Seq Scan on sucks_2005_11 \n(cost=0.00..71091.58 rows=2021958 width=210)\n -> Subquery Scan \"*SELECT* \n4\" (cost=0.00..85510.34 rows=1886417 width=211)\n -> Seq Scan on sucks_2005_10 \n(cost=0.00..66646.17 rows=1886417 width=211)\n -> Subquery Scan \"*SELECT* \n5\" (cost=0.00..74216.38 rows=1642719 width=210)\n -> Seq Scan on sucks_2005_09 \n(cost=0.00..57789.19 rows=1642719 width=210)\n -> Subquery Scan \"*SELECT* \n6\" (cost=0.00..64346.12 rows=1429106 width=209)\n -> Seq Scan on sucks_2005_08 \n(cost=0.00..50055.06 rows=1429106 width=209)\n -> Subquery Scan \"*SELECT* \n7\" (cost=0.00..76449.66 rows=1709283 width=209)\n -> Seq Scan on sucks_2005_07 \n(cost=0.00..59356.83 rows=1709283 width=209)\n -> Subquery Scan \"*SELECT* \n8\" (cost=0.00..63017.84 rows=1473142 width=212)\n -> Seq Scan on sucks_2005_06 \n\"local\" (cost=0.00..48286.42 rows=1473142 width=212)\n -> Materialize (cost=1383.39..1383.60 rows=20 width=12)\n -> Seq Scan on saved_cart_items (cost=0.00..1383.38 rows=20 \nwidth=12)\n Filter: (publish_id = \n'60160b57a1969fa228ae3470fbe7a50a'::bpchar)\n\nas opposed to \n\n=> explain select suck_id from sucks_new where suck_id=7136642; \nQUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan sucks_new (cost=46.22..46.72 rows=8 width=32)\n -> Unique (cost=46.22..46.64 rows=8 width=212)\n -> Sort (cost=46.22..46.24 rows=8 width=212)\n Sort Key: suck_id, sitenum\n -> Append (cost=0.00..46.10 rows=8 width=212)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..5.64 rows=1 \nwidth=209)\n -> Index Scan using sucks_2006_01_pkey on \nsucks_2006_01 (cost=0.00..5.63 rows=1 width=209)\n Index Cond: (suck_id = 7136642::numeric)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..5.95 rows=1 \nwidth=209)\n -> Index Scan using sucks_2005_12_pkey on \nsucks_2005_12 (cost=0.00..5.94 rows=1 width=209)\n Index Cond: 
(suck_id = 7136642::numeric)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..5.21 rows=1 \nwidth=210)\n -> Index Scan using sucks_2005_11_pkey on \nsucks_2005_11 (cost=0.00..5.20 rows=1 width=210)\n Index Cond: (suck_id = 7136642::numeric)\n -> Subquery Scan \"*SELECT* 4\" (cost=0.00..5.67 rows=1 \nwidth=211)\n -> Index Scan using sucks_2005_10_pkey on \nsucks_2005_10 (cost=0.00..5.66 rows=1 width=211)\n Index Cond: (suck_id = 7136642::numeric)\n -> Subquery Scan \"*SELECT* 5\" (cost=0.00..5.78 rows=1 \nwidth=210)\n -> Index Scan using sucks_2005_09_pkey on \nsucks_2005_09 (cost=0.00..5.77 rows=1 width=210)\n Index Cond: (suck_id = 7136642::numeric)\n -> Subquery Scan \"*SELECT* 6\" (cost=0.00..6.01 rows=1 \nwidth=209)\n -> Index Scan using sucks_2005_08_pkey on \nsucks_2005_08 (cost=0.00..6.00 rows=1 width=209)\n Index Cond: (suck_id = 7136642::numeric)\n -> Subquery Scan \"*SELECT* 7\" (cost=0.00..5.87 rows=1 \nwidth=209)\n -> Index Scan using sucks_2005_07_pkey on \nsucks_2005_07 (cost=0.00..5.86 rows=1 width=209)\n Index Cond: (suck_id = 7136642::numeric)\n -> Subquery Scan \"*SELECT* 8\" (cost=0.00..5.98 rows=1 \nwidth=212)\n -> Index Scan using sucks_2005_06_pkey on \nsucks_2005_06 \"local\" (cost=0.00..5.97 rows=1 width=212)\n Index Cond: (suck_id = 7136642::numeric)\n(29 rows)\n\n\n\n\ncan someone please tell me what we did wrong?\n\nTIA\n\n\n", "msg_date": "Wed, 25 Jan 2006 11:09:25 -0500", "msg_from": "Jen Sale <[email protected]>", "msg_from_op": true, "msg_subject": "Desperate: View not using indexes (very slow)" }, { "msg_contents": "Jen Sale <[email protected]> writes:\n> can someone please tell me what we did wrong?\n\nJoins against union subqueries aren't handled very well at the moment.\n(As it happens, I'm working on that exact problem right now for 8.2,\nbut that won't help you today.)\n\nThe plan indicates that you are using UNION rather than UNION ALL,\nwhich is not helping any. Do you really need duplicate elimination\nin that view?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Jan 2006 22:50:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Desperate: View not using indexes (very slow) " } ]
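Following Tom's suggestion, and assuming the monthly slices never contain the same row twice, the view can be rebuilt with UNION ALL so the planner no longer has to sort and de-duplicate the roughly 14 million appended rows before the join. An abbreviated sketch with only three of the monthly tables shown and SELECT * standing in for the real column list:

    CREATE OR REPLACE VIEW sucks_new AS
        SELECT * FROM sucks_2006_01
        UNION ALL
        SELECT * FROM sucks_2005_12
        UNION ALL
        SELECT * FROM sucks_2005_11;

This removes the Sort/Unique over the whole append; as Tom notes, join pushdown into union subqueries still has limits in this release, so it may not fully reach the per-slice index plan that the single-key query already gets.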
[ { "msg_contents": "\nWith big thanks to Josh Berkus and Devrim Gunduz, I'm happy to announce \nthat Sun has just released a Solaris distribution of PostgreSQL 8.1.2 \nwith ready-to-install packages for both Sparc and x86. These packages \nare currently in Beta, and we expect to FCS in 2 -3 weeks. The \npackages, along with an install guide, are available for download at \nhttp://pgfoundry.org/projects/solarispackages/\n\nWe have tightly integrated PostgreSQL with Solaris in a manner similar \nto the Linux distributions available on postgresql.org. In fact, the \ndirectory structures are identical. Starting with Solaris 10 Update 2, \nPostgreSQL will be distributed with every copy of Solaris, via download \nand physical media.\n\nWe welcome any and all feedback on this PostgreSQL Solaris \ndistribution. Please subscribe to the \[email protected] mailing list to give us feedback: \nhttp://pgfoundry.org/mail/?group_id=1000063\n\nBTW, I'm a senior engineer at Sun Microsystems, recently working with \nthe PostgreSQL community (namely Josh Berkus, Devrim Gunduz, and Gavin \nSherry) on the Solaris Packages Project at PgFoundry, PostgreSQL \nperformance optimization on Solaris, and leveraging Solaris 10 \ncapabilities (e.g. DTrace) specifically for PostgreSQL. I'll be posting \na Solaris performance tuning guide in a few weeks.\n\nRegards,\nRobert Lor\n\n\n", "msg_date": "Wed, 25 Jan 2006 17:46:12 -0800", "msg_from": "Robert Lor <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL Solaris packages now in beta" }, { "msg_contents": "For people installing PostgreSQL on Solaris with the new packaget, it \nwill show a greatly improved experience to get PostgreSQL up and running \nwhich was quite a inhibitor in terms of \"Love at First Sight\". This will \nnow help people familiar with Solaris have a great first impression of \nPostgreSQL and hence lower the barrier of entry to PostgreSQL. With \nmore than 3.7 million downloads of Solaris 10 already, now PostgreSQL \nhave accesss to probably a 3.7 million incremental user-base of these \nrelatively \"New\" PostgreSQL users.\n\n\nRegards,\nJignesh\n\n\n\n\nRobert Lor wrote:\n\n>\n> With big thanks to Josh Berkus and Devrim Gunduz, I'm happy to \n> announce that Sun has just released a Solaris distribution of \n> PostgreSQL 8.1.2 with ready-to-install packages for both Sparc and \n> x86. These packages are currently in Beta, and we expect to FCS in 2 \n> -3 weeks. The packages, along with an install guide, are available \n> for download at http://pgfoundry.org/projects/solarispackages/\n>\n> We have tightly integrated PostgreSQL with Solaris in a manner similar \n> to the Linux distributions available on postgresql.org. In fact, the \n> directory structures are identical. Starting with Solaris 10 Update \n> 2, PostgreSQL will be distributed with every copy of Solaris, via \n> download and physical media.\n>\n> We welcome any and all feedback on this PostgreSQL Solaris \n> distribution. Please subscribe to the \n> [email protected] mailing list to give us \n> feedback: http://pgfoundry.org/mail/?group_id=1000063\n>\n> BTW, I'm a senior engineer at Sun Microsystems, recently working with \n> the PostgreSQL community (namely Josh Berkus, Devrim Gunduz, and Gavin \n> Sherry) on the Solaris Packages Project at PgFoundry, PostgreSQL \n> performance optimization on Solaris, and leveraging Solaris 10 \n> capabilities (e.g. DTrace) specifically for PostgreSQL. 
I'll be \n> posting a Solaris performance tuning guide in a few weeks.\n>\n> Regards,\n> Robert Lor\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Thu, 26 Jan 2006 09:56:14 -0500", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Solaris packages now in beta" } ]
[ { "msg_contents": "Hi,\n\nI've created a table like this :\nCREATE TABLE tmp_A (\nc \"char\",\ni int4\n);\n\nAnd another one\nCREATE TABLE tmp_B (\ni int4,\nii int4\n);\n\nI then inerted a bit more than 19 million rows in each table (exactly the\nsame number of rows in each).\n\nThe end result is that the physical size on disk used by table tmp_A is\nexactly the same as table tmp_B (as revealed by the pg_relation_size\nfunction) ! Given that a \"char\" field is supposed to be 1 byte in size and a\nint4 4 bytes, shouldn't the tmp_A use a smaller disk space ? Or is it that\nany value, whatever the type, requires at least 4 bytes to be stored ?\n\nThanks,\nPaul\n\nHi,I've created a table like this : CREATE TABLE tmp_A (c \"char\",i int4);And another one CREATE TABLE tmp_B (i int4,ii int4);I then inerted a bit more than 19 million rows in each table (exactly the same number of rows in each). \nThe end result is that the physical size on disk used by table tmp_A is exactly the same as table tmp_B (as revealed by the pg_relation_size function) ! Given that a \"char\" field is supposed to be 1 byte in size and a int4 4 bytes, shouldn't the tmp_A use a smaller disk space ? Or is it that any value, whatever the type, requires at least 4 bytes to be stored ? \nThanks,Paul", "msg_date": "Thu, 26 Jan 2006 11:06:24 +0100", "msg_from": "Paul Mackay <[email protected]>", "msg_from_op": true, "msg_subject": "Physical column size" }, { "msg_contents": "Am Donnerstag, 26. Januar 2006 11:06 schrieb Paul Mackay:\n> Hi,\n>\n> I've created a table like this :\n> CREATE TABLE tmp_A (\n> c \"char\",\n> i int4\n> );\n>\n> And another one\n> CREATE TABLE tmp_B (\n> i int4,\n> ii int4\n> );\n>\n> I then inerted a bit more than 19 million rows in each table (exactly the\n> same number of rows in each).\n>\n> The end result is that the physical size on disk used by table tmp_A is\n> exactly the same as table tmp_B (as revealed by the pg_relation_size\n> function) ! Given that a \"char\" field is supposed to be 1 byte in size and\n> a int4 4 bytes, shouldn't the tmp_A use a smaller disk space ? Or is it\n> that any value, whatever the type, requires at least 4 bytes to be stored ?\n\nI think this is caused by alignment.\n\n", "msg_date": "Thu, 26 Jan 2006 12:22:03 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Physical column size" }, { "msg_contents": "Hey guys, how u been. This is quite a newbie question, but I need to ask it. I'm trying to wrap my mind around the syntax of join and why and when to use it. I understand the concept of making a query go faster by creating indexes, but it seems that when I want data from multiple tables that link together the query goes slow. The slow is typically due to expensive nested loops. The reason is, all my brain understands is:\n\nselect\n tablea.data\n tableb.data\n tablec.data\nfrom\n tablea\n tableb\n tablec\nwhere\n tablea.pri_key = tableb.foreign_key AND\n tableb.pri_key = tablec.foreign_key AND...\n\n From what I read, it seems you can use inner/outer right/left join on (bla) but when I see syntax examples I see that sometimes tables are omitted from the 'from' section of the query and other times, no. Sometimes I see that the join commands are nested and others, no and sometimes I see joins syntax that only applies to one table. 
From what I understand join can be used to tell the database the fast way to murge table data together to get results by specifiying the table that has the primary keys and the table that has the foreign keys.\n\nI've read all through the postgres docs on this command and I'm still left lost. Can someone please explain to me in simple language how to use these commands or provide me with a link. I need it to live right now. Thanx.\n\n \n\n\n\n\n\n\nHey guys, how u been. This is quite a newbie \nquestion, but I need to ask it. I'm trying to wrap my mind around the syntax of \njoin and why and when to use it. I understand the concept of making a query go \nfaster by creating indexes, but it seems that when I want data from multiple \ntables that link together the query goes slow. The slow is typically due to \nexpensive nested loops. The reason is, all my brain understands is:\n \nselect\n    tablea.data\n    tableb.data\n    tablec.data\nfrom\n    tablea\n    tableb\n    tablec\nwhere\n    tablea.pri_key = \ntableb.foreign_key AND\n    tableb.pri_key = \ntablec.foreign_key AND...\n \nFrom what I read, it seems you can use inner/outer \nright/left join on (bla) but when I see syntax examples I see that sometimes \ntables are omitted from the 'from' section of the query and other times, no. \nSometimes I see that the join commands are nested and others, no and sometimes I \nsee joins syntax that only applies to one table. From what I understand join can \nbe used to tell the database the fast way to murge table data together to get \nresults by specifiying the table that has the primary keys and the table that \nhas the foreign keys.\n \nI've read all through the postgres docs on this \ncommand and I'm still left lost. Can someone please explain to me in simple \nlanguage how to use these commands or provide me with a link. I need it to live \nright now. Thanx.", "msg_date": "Thu, 26 Jan 2006 10:43:12 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Query optimization with X Y JOIN" }, { "msg_contents": "First, this isn't really the right place to ask -- this forum is about performance, not SQL syntax.\n\nSecond, this isn't a question anyone can answer in a reasonable length of time. What you're asking for usually is taught in a class on relational database theory, which is typically a semester or two in college.\n\nIf you really need a crash course, dig around on the web for terms like \"SQL Tutorial\".\n\nGood luck,\nCraig\n\n\[email protected] wrote:\n> Hey guys, how u been. This is quite a newbie question, but I need to ask \n> it. I'm trying to wrap my mind around the syntax of join and why and \n> when to use it. I understand the concept of making a query go faster by \n> creating indexes, but it seems that when I want data from multiple \n> tables that link together the query goes slow. The slow is typically due \n> to expensive nested loops. The reason is, all my brain understands is:\n> \n> select\n> tablea.data\n> tableb.data\n> tablec.data\n> from\n> tablea\n> tableb\n> tablec\n> where\n> tablea.pri_key = tableb.foreign_key AND\n> tableb.pri_key = tablec.foreign_key AND...\n> \n> From what I read, it seems you can use inner/outer right/left join on \n> (bla) but when I see syntax examples I see that sometimes tables are \n> omitted from the 'from' section of the query and other times, no. \n> Sometimes I see that the join commands are nested and others, no and \n> sometimes I see joins syntax that only applies to one table. 
From what I \n> understand join can be used to tell the database the fast way to murge \n> table data together to get results by specifiying the table that has the \n> primary keys and the table that has the foreign keys.\n> \n> I've read all through the postgres docs on this command and I'm still \n> left lost. Can someone please explain to me in simple language how to \n> use these commands or provide me with a link. I need it to live right \n> now. Thanx.\n> \n> \n", "msg_date": "Thu, 26 Jan 2006 08:12:45 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimization with X Y JOIN" }, { "msg_contents": "If I want my database to go faster, due to X then I would think that the \nissue is about performance. I wasn't aware of a paticular constraint on X.\n\nI have more that a rudementary understanding of what's going on here, I was \njust hoping that someone could shed some light on the basic principal of \nthis JOIN command and its syntax. Most people I ask, don't give me straight \nanswers and what I have already read on the web is not very helpful thus \nfar.\n----- Original Message ----- \nFrom: \"Craig A. James\" <[email protected]>\nTo: <[email protected]>\nSent: Thursday, January 26, 2006 11:12 AM\nSubject: Re: [PERFORM] Query optimization with X Y JOIN\n\n\n> First, this isn't really the right place to ask -- this forum is about \n> performance, not SQL syntax.\n>\n> Second, this isn't a question anyone can answer in a reasonable length of \n> time. What you're asking for usually is taught in a class on relational \n> database theory, which is typically a semester or two in college.\n>\n> If you really need a crash course, dig around on the web for terms like \n> \"SQL Tutorial\".\n>\n> Good luck,\n> Craig\n>\n>\n> [email protected] wrote:\n>> Hey guys, how u been. This is quite a newbie question, but I need to ask \n>> it. I'm trying to wrap my mind around the syntax of join and why and when \n>> to use it. I understand the concept of making a query go faster by \n>> creating indexes, but it seems that when I want data from multiple tables \n>> that link together the query goes slow. The slow is typically due to \n>> expensive nested loops. The reason is, all my brain understands is:\n>> select\n>> tablea.data\n>> tableb.data\n>> tablec.data\n>> from\n>> tablea\n>> tableb\n>> tablec\n>> where\n>> tablea.pri_key = tableb.foreign_key AND\n>> tableb.pri_key = tablec.foreign_key AND...\n>> From what I read, it seems you can use inner/outer right/left join on \n>> (bla) but when I see syntax examples I see that sometimes tables are \n>> omitted from the 'from' section of the query and other times, no. \n>> Sometimes I see that the join commands are nested and others, no and \n>> sometimes I see joins syntax that only applies to one table. From what I \n>> understand join can be used to tell the database the fast way to murge \n>> table data together to get results by specifiying the table that has the \n>> primary keys and the table that has the foreign keys.\n>> I've read all through the postgres docs on this command and I'm still \n>> left lost. Can someone please explain to me in simple language how to use \n>> these commands or provide me with a link. I need it to live right now. 
\n>> Thanx.\n>>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n", "msg_date": "Thu, 26 Jan 2006 11:25:12 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Query optimization with X Y JOIN" }, { "msg_contents": "On 1/26/06, [email protected] <[email protected]> wrote:\n> If I want my database to go faster, due to X then I would think that the\n> issue is about performance. I wasn't aware of a paticular constraint on X.\n>\n> I have more that a rudementary understanding of what's going on here, I was\n> just hoping that someone could shed some light on the basic principal of\n> this JOIN command and its syntax. Most people I ask, don't give me straight\n> answers and what I have already read on the web is not very helpful thus\n> far.\n\nhttp://www.postgresql.org/docs/current/static/sql-select.html\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n", "msg_date": "Thu, 26 Jan 2006 11:33:34 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Query optimization with X Y JOIN" }, { "msg_contents": "[email protected] wrote:\n> If I want my database to go faster, due to X then I would think that \n> the issue is about performance. I wasn't aware of a paticular \n> constraint on X.\n>\n> I have more that a rudementary understanding of what's going on here, \n> I was just hoping that someone could shed some light on the basic \n> principal of this JOIN command and its syntax. Most people I ask, \n> don't give me straight answers and what I have already read on the web \n> is not very helpful thus far.\nWhat you are looking for is here:\n\nhttp://sqlzoo.net/\n\nIt is an excellent website that discusses in depth but at a tutorial \nstyle level how and what SQL is and how to use it. Including JOINS.\n\nFYI, a JOIN is basically a FROM with an integrated WHERE clause. That is \na very simplified description and isn't 100% accurate\nbut it is close. I strongly suggest the website I mentioned above as it \nwill resolve your question.\n\nJoshua D. Drake\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n\n", "msg_date": "Thu, 26 Jan 2006 08:34:11 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimization with X Y JOIN" }, { "msg_contents": "[email protected] wrote:\n> If I want my database to go faster, due to X then I would think that the \n> issue is about performance. I wasn't aware of a paticular constraint on X.\n\nYou haven't asked a performance question yet though.\n\n> I have more that a rudementary understanding of what's going on here, I \n> was just hoping that someone could shed some light on the basic \n> principal of this JOIN command and its syntax. Most people I ask, don't \n> give me straight answers and what I have already read on the web is not \n> very helpful thus far.\n\nOK - firstly it's not a JOIN command. It's a SELECT query that happens \nto join (in your example) three tables together. 
The syntax is specified \nin the SQL reference section of the manuals, and I don't think it's \ndifferent from the standard SQL spec here.\n\nA query that joins two or more tables (be they real base-tables, views \nor sub-query result-sets) produces the product of both. Normally you \ndon't want this so you apply constraints to that join (table_a.col1 = \ntable_b.col2).\n\nIn some cases you want all the rows from one side of a join, whether or \nnot you get a match on the other side of the join. This is called an \nouter join and results in NULLs for all the columns on the \"outside\" of \nthe join. A left-join returns all rows from the table on the left of the \njoin, a right-join from the table on the right of it.\n\nWhen planning a join, the planner will try to estimate how many matches \nit will see on each side, taking into account any extra constraints (you \nmight want only some of the rows in table_a anyway). It then decides \nwhether to use any indexes on the relevant column(s).\n\nNow, if you think the planner is making a mistake we'll need to see the \noutput of EXPLAIN ANALYSE for the query and will want to know that \nyou've vacuumed and analysed the tables in question.\n\nDoes that help at all?\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 26 Jan 2006 16:47:53 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query optimization with X Y JOIN" }, { "msg_contents": "Yes, that helps a great deal. Thank you so much.\n\n----- Original Message ----- \nFrom: \"Richard Huxton\" <[email protected]>\nTo: <[email protected]>\nCc: <[email protected]>\nSent: Thursday, January 26, 2006 11:47 AM\nSubject: Re: [PERFORM] Query optimization with X Y JOIN\n\n\n> [email protected] wrote:\n>> If I want my database to go faster, due to X then I would think that the \n>> issue is about performance. I wasn't aware of a paticular constraint on \n>> X.\n>\n> You haven't asked a performance question yet though.\n>\n>> I have more that a rudementary understanding of what's going on here, I \n>> was just hoping that someone could shed some light on the basic principal \n>> of this JOIN command and its syntax. Most people I ask, don't give me \n>> straight answers and what I have already read on the web is not very \n>> helpful thus far.\n>\n> OK - firstly it's not a JOIN command. It's a SELECT query that happens to \n> join (in your example) three tables together. The syntax is specified in \n> the SQL reference section of the manuals, and I don't think it's different \n> from the standard SQL spec here.\n>\n> A query that joins two or more tables (be they real base-tables, views or \n> sub-query result-sets) produces the product of both. Normally you don't \n> want this so you apply constraints to that join (table_a.col1 = \n> table_b.col2).\n>\n> In some cases you want all the rows from one side of a join, whether or \n> not you get a match on the other side of the join. This is called an outer \n> join and results in NULLs for all the columns on the \"outside\" of the \n> join. A left-join returns all rows from the table on the left of the join, \n> a right-join from the table on the right of it.\n>\n> When planning a join, the planner will try to estimate how many matches it \n> will see on each side, taking into account any extra constraints (you \n> might want only some of the rows in table_a anyway). 
It then decides \n> whether to use any indexes on the relevant column(s).\n>\n> Now, if you think the planner is making a mistake we'll need to see the \n> output of EXPLAIN ANALYSE for the query and will want to know that you've \n> vacuumed and analysed the tables in question.\n>\n> Does that help at all?\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Thu, 26 Jan 2006 12:13:33 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Query optimization with X Y JOIN" }, { "msg_contents": "Hi,\n\nI've created a table like this :\nCREATE TABLE tmp_A (\nc \"char\",\ni int4\n);\n\nAnd another one\nCREATE TABLE tmp_B (\ni int4,\nii int4\n);\n\nI then inserted a bit more than 19 million rows in each table (exactly the\nsame number of rows in each).\n\nThe end result is that the physical size on disk used by table tmp_A is\nexactly the same as table tmp_B (as revealed by the pg_relation_size\nfunction) ! Given that a \"char\" field is supposed to be 1 byte in size and a\nint4 4 bytes, shouldn't the tmp_A use a smaller disk space ? Or is it that\nany value, whatever the type, requires at least 4 bytes to be stored ?\n\nThanks,\nPaul\n\nHi,I've created a table like this : CREATE TABLE tmp_A (c \"char\",i int4);And another one CREATE TABLE tmp_B (i int4,\nii int4);I then inserted a bit more than 19 million rows in each table (exactly the same number of rows in each). \nThe end result is that the physical size on disk used by table tmp_A is exactly the same as table tmp_B (as revealed by the pg_relation_size function) ! Given that a \"char\" field is supposed to be 1 byte in size and a int4 4 bytes, shouldn't the tmp_A use a smaller disk space ? Or is it that any value, whatever the type, requires at least 4 bytes to be stored ? \nThanks,Paul", "msg_date": "Fri, 3 Mar 2006 11:03:24 +0100", "msg_from": "\"Paul Mackay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Physical column size" }, { "msg_contents": "Am Freitag, 3. M�rz 2006 11:03 schrieb Paul Mackay:\n> I've created a table like this :\n> CREATE TABLE tmp_A (\n> c \"char\",\n> i int4\n> );\n>\n> And another one\n> CREATE TABLE tmp_B (\n> i int4,\n> ii int4\n> );\n\n> The end result is that the physical size on disk used by table tmp_A is\n> exactly the same as table tmp_B (as revealed by the pg_relation_size\n> function) !\n\nAn int4 field is required to be aligned at a 4-byte boundary internally, so \nthere are 3 bytes wasted between tmp_A.c and tmp_A.i. If you switch the \norder of the fields you should see space savings. (Note, however, that the \nper-row overhead is about 32 bytes, so you'll probably only save about 10% \noverall, rather than the 37.5% that one might expect.)\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n", "msg_date": "Fri, 3 Mar 2006 11:23:21 +0100", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Physical column size" }, { "msg_contents": "On Fri, Mar 03, 2006 at 11:03:24AM +0100, Paul Mackay wrote:\n> The end result is that the physical size on disk used by table tmp_A is\n> exactly the same as table tmp_B (as revealed by the pg_relation_size\n> function) ! Given that a \"char\" field is supposed to be 1 byte in size and a\n> int4 4 bytes, shouldn't the tmp_A use a smaller disk space ? 
Or is it that\n> any value, whatever the type, requires at least 4 bytes to be stored ?\n\nAlignment. An int4 value must start on a multiple of 4 offset, so you\nget three bytes of padding. If you put the int4, then the char it\nshould work better. Although whole rows have alignment requirements\ntoo...\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a\n> tool for doing 5% of the work and then sitting around waiting for someone\n> else to do the other 95% so you can sue them.", "msg_date": "Fri, 3 Mar 2006 11:23:58 +0100", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Physical column size" }, { "msg_contents": "On f�s, 2006-03-03 at 11:03 +0100, Paul Mackay wrote:\n> Hi,\n> \n> I've created a table like this : \n> CREATE TABLE tmp_A (\n> c \"char\",\n> i int4\n> );\n> \n> And another one \n> CREATE TABLE tmp_B (\n> i int4, \n> ii int4\n> );\n> \n> I then inserted a bit more than 19 million rows in each table (exactly\n> the same number of rows in each). \n> \n> The end result is that the physical size on disk used by table tmp_A\n> is exactly the same as table tmp_B (as revealed by the\n> pg_relation_size function) ! Given that a \"char\" field is supposed to\n> be 1 byte in size and a int4 4 bytes, shouldn't the tmp_A use a\n> smaller disk space ? Or is it that any value, whatever the type,\n> requires at least 4 bytes to be stored ? \n\nthe int4 needs to be aligned at 4 bytes boundaries,\nmaking wasted space after the char.\n\nthis would probably be the same size:\n\nCREATE TABLE tmp_C (\n c \"char\",\n cc \"char\",\n i int4\n);\n\nand this would be smaller:\n\nCREATE TABLE tmp_D (\n c \"char\",\n cc \"char\",\n ccc \"char\",\n);\n\nP.S.: I did not actually check to\nsee if the \"char\" type needs to be aligned,\nby I assumed not.\n\n\n", "msg_date": "Fri, 03 Mar 2006 10:27:59 +0000", "msg_from": "Ragnar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Physical column size" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n> An int4 field is required to be aligned at a 4-byte boundary internally, so \n> there are 3 bytes wasted between tmp_A.c and tmp_A.i. If you switch the \n> order of the fields you should see space savings.\n\nProbably not, because the row-as-a-whole has alignment requirements too.\nIn this example you'll just move the pad bytes from one place to\nanother.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 03 Mar 2006 09:53:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Physical column size " } ]
[ { "msg_contents": "Hi All,\n\n \n\nI have seen it on occasion that the total runtime reported by explain\nanalyze was much higher than the actual time the query needed to\ncomplete. The differences in my case ranged between 20-120 seconds. I'm\njust curious if anyone else has experienced this and whether there is\nsomething that I can do to convince explain analyze to report the\nexecution time of the query itself rather than the time of its own\nexecution. Engine version is 8.1.1.\n\n \n\nThanks for the help!\n\n \n\n\n\n\n\n\n\n\n\n\nHi All,\n \nI have seen it on occasion that the total runtime reported\nby explain analyze was much higher than the actual time the query needed to\ncomplete. The differences in my case ranged between 20-120 seconds. I’m\njust curious if anyone else has experienced this and whether there is something\nthat I can do to convince explain analyze to report the execution time of the query\nitself rather than the time of its own execution. Engine version is 8.1.1.\n \nThanks for the help!", "msg_date": "Thu, 26 Jan 2006 09:50:29 -0600", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect Total runtime Reported by Explain Analyze!?" }, { "msg_contents": "On Thu, 2006-01-26 at 09:50, Jozsef Szalay wrote:\n> Hi All,\n> \n> \n> \n> I have seen it on occasion that the total runtime reported by explain\n> analyze was much higher than the actual time the query needed to\n> complete. The differences in my case ranged between 20-120 seconds.\n> I’m just curious if anyone else has experienced this and whether there\n> is something that I can do to convince explain analyze to report the\n> execution time of the query itself rather than the time of its own\n> execution. Engine version is 8.1.1.\n\nI've seen this problem before in 7.4 and 8.0 as well.\n", "msg_date": "Thu, 26 Jan 2006 10:35:43 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect Total runtime Reported by Explain Analyze!?" }, { "msg_contents": "Jozsef Szalay wrote:\n> \n> I have seen it on occasion that the total runtime reported by explain\n> analyze was much higher than the actual time the query needed to\n> complete. The differences in my case ranged between 20-120 seconds. I'm\n> just curious if anyone else has experienced this and whether there is\n> something that I can do to convince explain analyze to report the\n> execution time of the query itself rather than the time of its own\n> execution. Engine version is 8.1.1.\n\nI think it's down to all the gettime() calls that have to be made to \nmeasure how long each stage of the query takes. In some cases these can \ntake a substantial part of the overall query time. I seem to recall one \nof the BSDs was particularly bad in this respect a couple of years ago. \nDoes that sound like your problem?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 26 Jan 2006 16:49:59 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect Total runtime Reported by Explain Analyze!?" }, { "msg_contents": "On Thu, Jan 26, 2006 at 04:49:59PM +0000, Richard Huxton wrote:\n> Jozsef Szalay wrote:\n> >I have seen it on occasion that the total runtime reported by explain\n> >analyze was much higher than the actual time the query needed to\n> >complete. The differences in my case ranged between 20-120 seconds. 
I'm\n> >just curious if anyone else has experienced this and whether there is\n> >something that I can do to convince explain analyze to report the\n> >execution time of the query itself rather than the time of its own\n> >execution. Engine version is 8.1.1.\n> \n> I think it's down to all the gettime() calls that have to be made to \n> measure how long each stage of the query takes. In some cases these can \n> take a substantial part of the overall query time.\n\nAnother possibility is that the total query time was indeed that\nlong because the query was blocked waiting for a lock. For example:\n\nT1: BEGIN;\nT2: BEGIN;\nT1: SELECT * FROM foo WHERE id = 1 FOR UPDATE;\nT2: EXPLAIN ANALYZE UPDATE foo SET x = x + 1 WHERE id = 1;\nT1: (do something for a long time)\nT1: COMMIT;\n\nWhen T2's EXPLAIN ANALYZE finally returns it'll show something like\nthis:\n\ntest=> EXPLAIN ANALYZE UPDATE foo SET x = x + 1 WHERE id = 1;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..3.92 rows=1 width=14) (actual time=0.123..0.138 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 31926.304 ms\n(3 rows)\n\nSELECT queries can be blocked by operations that take an Access\nExclusive lock, such as CLUSTER, VACUUM FULL, or REINDEX. Have you\never examined pg_locks during one of these queries to look for\nungranted locks?\n\nIf this weren't 8.1 I'd ask if you had any triggers (including\nforeign key constraints), whose execution time EXPLAIN ANALYZE\ndoesn't show in earlier versions. For example:\n\n8.1.2:\ntest=> EXPLAIN ANALYZE DELETE FROM foo WHERE id = 1;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..3.92 rows=1 width=6) (actual time=0.136..0.154 rows=1 loops=1)\n Index Cond: (id = 1)\n Trigger for constraint bar_fooid_fkey: time=1538.054 calls=1\n Total runtime: 1539.732 ms\n(4 rows)\n\n8.0.6:\ntest=> EXPLAIN ANALYZE DELETE FROM foo WHERE id = 1;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..3.92 rows=1 width=6) (actual time=0.124..0.147 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 1746.173 ms\n(3 rows)\n\n-- \nMichael Fuhr\n", "msg_date": "Thu, 26 Jan 2006 10:57:23 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect Total runtime Reported by Explain Analyze!?" }, { "msg_contents": "On Thu, 2006-01-26 at 11:57, Michael Fuhr wrote:\n> On Thu, Jan 26, 2006 at 04:49:59PM +0000, Richard Huxton wrote:\n> > Jozsef Szalay wrote:\n> > >I have seen it on occasion that the total runtime reported by explain\n> > >analyze was much higher than the actual time the query needed to\n> > >complete. The differences in my case ranged between 20-120 seconds. I'm\n> > >just curious if anyone else has experienced this and whether there is\n> > >something that I can do to convince explain analyze to report the\n> > >execution time of the query itself rather than the time of its own\n> > >execution. Engine version is 8.1.1.\n> > \n> > I think it's down to all the gettime() calls that have to be made to \n> > measure how long each stage of the query takes. 
In some cases these can \n> > take a substantial part of the overall query time.\n> \n> Another possibility is that the total query time was indeed that\n> long because the query was blocked waiting for a lock. For example:\n\nCould be, but I've had this happen where the query returned in like 1\nsecond but reported 30 seconds run time.\n", "msg_date": "Thu, 26 Jan 2006 11:59:49 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect Total runtime Reported by Explain Analyze!?" } ]
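Following Michael Fuhr's suggestion to examine pg_locks while the slow statement is running, a rough sketch of such a check (column names are the 8.1 ones; current_query is only populated when stats_command_string is enabled):

SELECT l.pid, l.locktype, l.relation::regclass AS relation, l.mode, l.granted,
       a.current_query
  FROM pg_locks l
  LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE NOT l.granted;
-- Any rows returned mean the statement under EXPLAIN ANALYZE was genuinely
-- waiting on a lock, not spending the time inside the executor.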
[ { "msg_contents": "It might be. I'm running on Fedora Linux kernel 2.6.5-1.358smp, GCC\n3.3.3, glibc-2.3.3-27\n\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: Thursday, January 26, 2006 10:50 AM\nTo: Jozsef Szalay\nCc: [email protected]\nSubject: Re: [PERFORM] Incorrect Total runtime Reported by Explain\nAnalyze!?\n\nJozsef Szalay wrote:\n> \n> I have seen it on occasion that the total runtime reported by explain\n> analyze was much higher than the actual time the query needed to\n> complete. The differences in my case ranged between 20-120 seconds.\nI'm\n> just curious if anyone else has experienced this and whether there is\n> something that I can do to convince explain analyze to report the\n> execution time of the query itself rather than the time of its own\n> execution. Engine version is 8.1.1.\n\nI think it's down to all the gettime() calls that have to be made to \nmeasure how long each stage of the query takes. In some cases these can \ntake a substantial part of the overall query time. I seem to recall one \nof the BSDs was particularly bad in this respect a couple of years ago. \nDoes that sound like your problem?\n\n-- \n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Thu, 26 Jan 2006 11:35:25 -0600", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect Total runtime Reported by Explain Analyze!?" } ]
[ { "msg_contents": "Very good points thanks. In my case however, I was doing performance\ntests and therefore I had a very controlled environment with a single\nclient (me) doing strictly read-only multi-join queries.\n\n-----Original Message-----\nFrom: Michael Fuhr [mailto:[email protected]] \nSent: Thursday, January 26, 2006 11:57 AM\nTo: Richard Huxton\nCc: Jozsef Szalay; [email protected]\nSubject: Re: [PERFORM] Incorrect Total runtime Reported by Explain\nAnalyze!?\n\nOn Thu, Jan 26, 2006 at 04:49:59PM +0000, Richard Huxton wrote:\n> Jozsef Szalay wrote:\n> >I have seen it on occasion that the total runtime reported by explain\n> >analyze was much higher than the actual time the query needed to\n> >complete. The differences in my case ranged between 20-120 seconds.\nI'm\n> >just curious if anyone else has experienced this and whether there is\n> >something that I can do to convince explain analyze to report the\n> >execution time of the query itself rather than the time of its own\n> >execution. Engine version is 8.1.1.\n> \n> I think it's down to all the gettime() calls that have to be made to \n> measure how long each stage of the query takes. In some cases these\ncan \n> take a substantial part of the overall query time.\n\nAnother possibility is that the total query time was indeed that\nlong because the query was blocked waiting for a lock. For example:\n\nT1: BEGIN;\nT2: BEGIN;\nT1: SELECT * FROM foo WHERE id = 1 FOR UPDATE;\nT2: EXPLAIN ANALYZE UPDATE foo SET x = x + 1 WHERE id = 1;\nT1: (do something for a long time)\nT1: COMMIT;\n\nWhen T2's EXPLAIN ANALYZE finally returns it'll show something like\nthis:\n\ntest=> EXPLAIN ANALYZE UPDATE foo SET x = x + 1 WHERE id = 1;\n QUERY PLAN\n\n------------------------------------------------------------------------\n---------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..3.92 rows=1 width=14)\n(actual time=0.123..0.138 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 31926.304 ms\n(3 rows)\n\nSELECT queries can be blocked by operations that take an Access\nExclusive lock, such as CLUSTER, VACUUM FULL, or REINDEX. Have you\never examined pg_locks during one of these queries to look for\nungranted locks?\n\nIf this weren't 8.1 I'd ask if you had any triggers (including\nforeign key constraints), whose execution time EXPLAIN ANALYZE\ndoesn't show in earlier versions. For example:\n\n8.1.2:\ntest=> EXPLAIN ANALYZE DELETE FROM foo WHERE id = 1;\n QUERY PLAN\n\n------------------------------------------------------------------------\n--------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..3.92 rows=1 width=6)\n(actual time=0.136..0.154 rows=1 loops=1)\n Index Cond: (id = 1)\n Trigger for constraint bar_fooid_fkey: time=1538.054 calls=1\n Total runtime: 1539.732 ms\n(4 rows)\n\n8.0.6:\ntest=> EXPLAIN ANALYZE DELETE FROM foo WHERE id = 1;\n QUERY PLAN\n\n------------------------------------------------------------------------\n--------------------------------------\n Index Scan using foo_pkey on foo (cost=0.00..3.92 rows=1 width=6)\n(actual time=0.124..0.147 rows=1 loops=1)\n Index Cond: (id = 1)\n Total runtime: 1746.173 ms\n(3 rows)\n\n-- \nMichael Fuhr\n\n", "msg_date": "Thu, 26 Jan 2006 12:03:36 -0600", "msg_from": "\"Jozsef Szalay\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect Total runtime Reported by Explain Analyze!?" } ]
[ { "msg_contents": "\n\nDoes anyone have any experience with extremely large data sets?\nI'm mean hundreds of millions of rows.\n\nThe queries I need to run on my 200 million transactions are relatively\nsimple:\n\n select month, count(distinct(cardnum)) count(*), sum(amount) from\ntransactions group by month;\n\nThis query took 18 hours on PG 8.1 on a Dual Xeon, RHEL3, (2.4 Kernel) with\nRAID-10 (15K drives)\nand 12 GB Ram. I was expecting it to take about 4 hours - based on some\nexperience with a\nsimilar dataset on a different machine (RH9, PG7.3 Dual Xeon, 4GB RAM,\nRaid-5 10K drives)\n\n This machine is COMPLETELY devoted to running these relatively simple\nqueries one at a\ntime. (No multi-user support needed!) I've been tooling with the various\nperformance settings:\neffective_cache at 5GB, shared_buffers at 2 GB, workmem, sortmem at 1 GB\neach.\n( Shared buffers puzzles me a it bit - my instinct says to set it as high as\npossible,\nbut everything I read says that \"too high\" can hurt performance.)\n\n Any ideas for performance tweaking in this kind of application would be\ngreatly appreciated.\nWe've got indexes on the fields being grouped, and always vacuum analzye\nafter building them.\n\n It's difficult to just \"try\" various ideas because each attempt takes a\nfull day to test. Real\nexperience is needed here!\n\nThanks much,\n\nMike\n\n", "msg_date": "Fri, 27 Jan 2006 20:23:55 -0500", "msg_from": "\"Mike Biamonte\" <[email protected]>", "msg_from_op": true, "msg_subject": "Huge Data sets, simple queries" }, { "msg_contents": "Sounds like you are running into the limits of your disk subsystem. You are\nscanning all of the data in the transactions table, so you will be limited\nby the disk bandwidth you have ­ and using RAID-10, you should divide the\nnumber of disk drives by 2 and multiply by their indiividual bandwidth\n(around 60MB/s) and that¹s what you can expect in terms of performance. So,\nif you have 8 drives, you should expect to get 4 x 60 MB/s = 240 MB/s in\nbandwidth. That means that if you are dealing with 24,000 MB of data in the\n³transactions² table, then you will scan it in 100 seconds.\n\nWith a workload like this, you are in the realm of business intelligence /\ndata warehousing I think. You should check your disk performance, I would\nexpect you¹ll find it lacking, partly because you are running RAID10, but\nmostly because I expect you are using a hardware RAID adapter.\n\n- Luke\n\n\nOn 1/27/06 5:23 PM, \"Mike Biamonte\" <[email protected]> wrote:\n\n> \n> \n> \n> Does anyone have any experience with extremely large data sets?\n> I'm mean hundreds of millions of rows.\n> \n> The queries I need to run on my 200 million transactions are relatively\n> simple:\n> \n> select month, count(distinct(cardnum)) count(*), sum(amount) from\n> transactions group by month;\n> \n> This query took 18 hours on PG 8.1 on a Dual Xeon, RHEL3, (2.4 Kernel) with\n> RAID-10 (15K drives)\n> and 12 GB Ram. I was expecting it to take about 4 hours - based on some\n> experience with a\n> similar dataset on a different machine (RH9, PG7.3 Dual Xeon, 4GB RAM,\n> Raid-5 10K drives)\n> \n> This machine is COMPLETELY devoted to running these relatively simple\n> queries one at a\n> time. (No multi-user support needed!) 
I've been tooling with the various\n> performance settings:\n> effective_cache at 5GB, shared_buffers at 2 GB, workmem, sortmem at 1 GB\n> each.\n> ( Shared buffers puzzles me a it bit - my instinct says to set it as high as\n> possible,\n> but everything I read says that \"too high\" can hurt performance.)\n> \n> Any ideas for performance tweaking in this kind of application would be\n> greatly appreciated.\n> We've got indexes on the fields being grouped, and always vacuum analzye\n> after building them.\n> \n> It's difficult to just \"try\" various ideas because each attempt takes a\n> full day to test. Real\n> experience is needed here!\n> \n> Thanks much,\n> \n> Mike\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n\n\n\n\n\nRe: [PERFORM] Huge Data sets, simple queries\n\n\nSounds like you are running into the limits of your disk subsystem.  You are scanning all of the data in the transactions table, so you will be limited by the disk bandwidth you have – and using RAID-10, you should divide the number of disk drives by 2 and multiply by their indiividual bandwidth (around 60MB/s) and that’s what you can expect in terms of performance.  So, if you have 8 drives, you should expect to get 4 x 60 MB/s = 240 MB/s in bandwidth.  That means that if you are dealing with 24,000 MB of data in the “transactions” table, then you will scan  it in 100 seconds.\n\nWith a workload like this, you are in the realm of business intelligence / data warehousing I think.  You should check your disk performance, I would expect you’ll find it lacking, partly because you are running RAID10, but mostly because I expect you are using a hardware RAID adapter.\n\n- Luke\n\n\nOn 1/27/06 5:23 PM, \"Mike Biamonte\" <[email protected]> wrote:\n\n\n\n\nDoes anyone have any experience with extremely large data sets?\nI'm mean hundreds of millions of rows.\n\nThe queries I need to run on my 200 million transactions are relatively\nsimple:\n\n   select month, count(distinct(cardnum)) count(*), sum(amount) from\ntransactions group by month;\n\nThis query took 18 hours on PG 8.1 on a Dual Xeon, RHEL3, (2.4 Kernel) with\nRAID-10 (15K drives)\nand 12 GB Ram.  I was expecting it to take about 4 hours - based on some\nexperience with a\nsimilar dataset on a different machine (RH9, PG7.3 Dual Xeon, 4GB RAM,\nRaid-5 10K drives)\n\n  This machine is COMPLETELY devoted to running these relatively simple\nqueries one at a\ntime. (No multi-user support needed!)    I've been tooling with the various\nperformance settings:\neffective_cache at 5GB, shared_buffers at 2 GB, workmem, sortmem at 1 GB\neach.\n( Shared buffers puzzles me a it bit - my instinct says to set it as high as\npossible,\nbut everything I read says that \"too high\" can hurt performance.)\n\n   Any ideas for performance tweaking in this kind of application would be\ngreatly appreciated.\nWe've got indexes on the fields being grouped, and always vacuum analzye\nafter building them.\n\n   It's difficult to just \"try\" various ideas because each attempt takes a\nfull day to test.  
Real\nexperience is needed here!\n\nThanks much,\n\nMike\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings", "msg_date": "Fri, 27 Jan 2006 19:05:04 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Fri, 2006-01-27 at 20:23 -0500, Mike Biamonte wrote:\n> \n> Does anyone have any experience with extremely large data sets?\n> I'm mean hundreds of millions of rows.\n\nSure, I think more than a few of us do. Just today I built a summary\ntable from a 25GB primary table with ~430 million rows. This took about\n45 minutes.\n\n> The queries I need to run on my 200 million transactions are relatively\n> simple:\n> \n> select month, count(distinct(cardnum)) count(*), sum(amount) from\n> transactions group by month;\n> \n> This query took 18 hours on PG 8.1 on a Dual Xeon, RHEL3, (2.4 Kernel) with\n> RAID-10 (15K drives)\n> and 12 GB Ram. I was expecting it to take about 4 hours - based on some\n> experience with a\n> similar dataset on a different machine (RH9, PG7.3 Dual Xeon, 4GB RAM,\n> Raid-5 10K drives)\n\nPossibly the latter machine has a faster I/O subsystem. How large is\nthe table on disk?\n\n> This machine is COMPLETELY devoted to running these relatively simple\n> queries one at a\n> time. (No multi-user support needed!) I've been tooling with the various\n> performance settings:\n> effective_cache at 5GB, shared_buffers at 2 GB, workmem, sortmem at 1 GB\n> each.\n> ( Shared buffers puzzles me a it bit - my instinct says to set it as high as\n> possible,\n> but everything I read says that \"too high\" can hurt performance.)\n> \n> Any ideas for performance tweaking in this kind of application would be\n> greatly appreciated.\n> We've got indexes on the fields being grouped, \n> and always vacuum analzye\n> after building them.\n\nProbably vacuum makes no difference.\n\n> It's difficult to just \"try\" various ideas because each attempt takes a\n> full day to test. Real\n> experience is needed here!\n\nCan you send us an EXPLAIN of the query? I believe what you're seeing\nhere is probably:\n\nAggregate\n+-Sort\n +-Sequential Scan\n\nor perhaps:\n\nAggregate\n+-Index Scan\n\nI have a feeling that the latter will be much faster. If your table has\nbeen created over time, then it is probably naturally ordered by date,\nand therefore also ordered by month. You might expect a Sequential Scan\nto be the fastest, but the Sort step will be a killer. 
On the other\nhand, if your table is badly disordered by date, the Index Scan could\nalso be very slow.\n\nAnyway, send us the query plan and also perhaps a sample of vmstat\nduring the query.\n\nFor what it's worth, I have:\n\neffective_cache_size | 700000\ncpu_tuple_cost | 0.01\ncpu_index_tuple_cost | 0.001\nrandom_page_cost | 3\nshared_buffers | 50000\ntemp_buffers | 1000\nwork_mem | 1048576 <= for this query only\n\nAnd here's a few lines from vmstat during the query:\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 2 1 76 43476 94916 7655148 0 0 78800 0 1662 788 68 12 0 20\n 1 1 76 45060 91196 7658088 0 0 78028 0 1639 712 71 11 0 19\n 2 0 76 44668 87624 7662960 0 0 78924 52 1650 736 69 12 0 19\n 2 0 76 45300 83672 7667432 0 0 83536 16 1688 768 71 12 0 18\n 1 1 76 45744 80652 7670712 0 0 84052 0 1691 796 70 12 0 17\n\nThat's about 80MB/sec sequential input, for comparison purposes.\n\n-jwb\n\n", "msg_date": "Fri, 27 Jan 2006 22:50:00 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "\"Mike Biamonte\" <[email protected]> writes:\n> The queries I need to run on my 200 million transactions are relatively\n> simple:\n\n> select month, count(distinct(cardnum)) count(*), sum(amount) from\n> transactions group by month;\n\ncount(distinct) is not \"relatively simple\", and the current\nimplementation isn't especially efficient. Can you avoid that\nconstruct?\n\nAssuming that \"month\" means what it sounds like, the above would result\nin running twelve parallel sort/uniq operations, one for each month\ngrouping, to eliminate duplicates before counting. You've got sortmem\nset high enough to blow out RAM in that scenario ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Jan 2006 10:55:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries " }, { "msg_contents": "On Sat, 2006-01-28 at 10:55 -0500, Tom Lane wrote:\n> \n> Assuming that \"month\" means what it sounds like, the above would\n> result\n> in running twelve parallel sort/uniq operations, one for each month\n> grouping, to eliminate duplicates before counting. You've got sortmem\n> set high enough to blow out RAM in that scenario ...\n\nHrmm, why is it that with a similar query I get a far simpler plan than\nyou describe, and relatively snappy runtime?\n\n select date\n , count(1) as nads\n , sum(case when premium then 1 else 0 end) as npremium\n , count(distinct(keyword)) as nwords\n , count(distinct(advertiser)) as nadvertisers \n from data \ngroup by date \norder by date asc\n\n QUERY PLAN \n-----------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..14452743.09 rows=721 width=13)\n -> Index Scan using data_date_idx on data (cost=0.00..9075144.27 rows=430206752 width=13)\n(2 rows)\n\n=# show server_version;\n server_version \n----------------\n 8.1.2\n(1 row)\n\n-jwb\n\n", "msg_date": "Sat, 28 Jan 2006 09:08:53 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "\"Jeffrey W. 
Baker\" <[email protected]> writes:\n> On Sat, 2006-01-28 at 10:55 -0500, Tom Lane wrote:\n>> Assuming that \"month\" means what it sounds like, the above would result\n>> in running twelve parallel sort/uniq operations, one for each month\n>> grouping, to eliminate duplicates before counting. You've got sortmem\n>> set high enough to blow out RAM in that scenario ...\n\n> Hrmm, why is it that with a similar query I get a far simpler plan than\n> you describe, and relatively snappy runtime?\n\nYou can't see the sort operations in the plan, because they're invoked\nimplicitly by the GroupAggregate node. But they're there.\n\nAlso, a plan involving GroupAggregate is going to run the \"distinct\"\nsorts sequentially, because it's dealing with only one grouping value at\na time. In the original case, the planner probably realizes there are\nonly 12 groups and therefore prefers a HashAggregate, which will try\nto run all the sorts in parallel. Your \"group by date\" isn't a good\napproximation of the original conditions because there will be a lot\nmore groups.\n\n(We might need to tweak the planner to discourage selecting\nHashAggregate in the presence of DISTINCT aggregates --- I don't\nremember whether it accounts for the sortmem usage in deciding\nwhether the hash will fit in memory or not ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Jan 2006 12:37:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries " }, { "msg_contents": "I wrote:\n> (We might need to tweak the planner to discourage selecting\n> HashAggregate in the presence of DISTINCT aggregates --- I don't\n> remember whether it accounts for the sortmem usage in deciding\n> whether the hash will fit in memory or not ...)\n\nAh, I take that all back after checking the code: we don't use\nHashAggregate at all when there are DISTINCT aggregates, precisely\nbecause of this memory-blow-out problem.\n\nFor both your group-by-date query and the original group-by-month query,\nthe plan of attack is going to be to read the original input in grouping\norder (either via sort or indexscan, with sorting probably preferred\nunless the table is pretty well correlated with the index) and then\nsort/uniq on the DISTINCT value within each group. The OP is probably\nlosing on that step compared to your test because it's over much larger\ngroups than yours, forcing some spill to disk. And most likely he's not\ngot an index on month, so the first sort is in fact a sort and not an\nindexscan.\n\nBottom line is that he's probably doing a ton of on-disk sorting\nwhere you're not doing any. This makes me think Luke's theory about\ninadequate disk horsepower may be on the money.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Jan 2006 13:55:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries " }, { "msg_contents": "On 1/28/06, Luke Lonergan <[email protected]> wrote:\n> You should check your disk performance, I would\n> expect you'll find it lacking, partly because you are running RAID10, but\n> mostly because I expect you are using a hardware RAID adapter.\n\nhmm .. do i understand correctly that you're suggesting that using\nraid 10 and/or hardware raid adapter might hurt disc subsystem\nperformance? could you elaborate on the reasons, please? it's not that\ni'm against the idea - i'm just curious as this is very\n\"against-common-sense\". 
and i always found it interesting when\nsomebody states something that uncommon...\n\nbest regards\n\ndepesz\n", "msg_date": "Sun, 29 Jan 2006 12:25:23 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Sun, Jan 29, 2006 at 12:25:23PM +0100, hubert depesz lubaczewski wrote:\n>hmm .. do i understand correctly that you're suggesting that using\n>raid 10 and/or hardware raid adapter might hurt disc subsystem\n>performance? could you elaborate on the reasons, please?\n\nI think it's been fairly well beaten to death that the low-end hardware \nraid adapters have lousy performance. It's not until you get into the \nrange of battery-backed disk caches with 512M+ and multiple I/O channels \nthat hardware raid becomes competitive with software raid.\n\nMike Stone\n", "msg_date": "Sun, 29 Jan 2006 10:43:24 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Fri, Jan 27, 2006 at 08:23:55PM -0500, Mike Biamonte wrote:\n\n> This query took 18 hours on PG 8.1 on a Dual Xeon, RHEL3, (2.4\n> Kernel) with RAID-10 (15K drives) and 12 GB Ram. I was expecting it\n> to take about 4 hours - based on some experience with a similar\n> dataset on a different machine (RH9, PG7.3 Dual Xeon, 4GB RAM,\n> Raid-5 10K drives)\n>\n> It's difficult to just \"try\" various ideas because each attempt\n> takes a full day to test. Real experience is needed here!\n\nIt seems like you are changing multiple variables at the same time.\n\nI think you need to first compare the query plans with EXPLAIN SELECT\nto see if they are significantly different. Your upgrade from 7.3 to\n8.1 may have resulted in a less optimal plan.\n\nSecond, you should monitor your IO performance during the query\nexecution and test it independent of postgres. Then compare the stats\nbetween the two systems. \n\nAs a side note, if you have many disks and you are concerned about\nbottlenecks on read operations, RAID 5 may perform better than\nRAID 10. \n\n -Mike\n", "msg_date": "Sun, 29 Jan 2006 19:32:15 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Mike Biamonte wrote:\n> Does anyone have any experience with extremely large data sets?\n> I'm mean hundreds of millions of rows.\n> \n> The queries I need to run on my 200 million transactions are relatively\n> simple:\n> \n> select month, count(distinct(cardnum)) count(*), sum(amount) from\n> transactions group by month;\n\nThis may be heretical to post to a relational-database group, but sometimes a problem can be better solved OUTSIDE of the relational system.\n\nI had a similar problem recently: I have a set of about 100,000 distinct values, each of which occurs one to several million times in the database, with an aggregate total of several hundred million occurances in the database.\n\nSorting this into distinct lists (\"Which rows contain this value?\") proved quite time consuming (just like your case), but on reflection, I realized that it was dumb to expect a general-purpose sorting algorithm to sort a list about which I had specialized knowledge. General-purpose sorting usually takes O(N*log(N)), but if you have a small number of distinct values, you can use \"bucket sorting\" and sort in O(N) time, a huge improvement. 
In my case, it was even more specialized -- there was a very small number of the lists that contained thousands or millions of items, but about 95% of the lists only had a few items.\n\nArmed with this knowledge, it took me couple weeks to write a highly-specialized sorting system that used a combination of Postgres, in-memory and disk caching, and algorithms dredged up from Knuth. The final result ran in about four hours.\n\nThe thing to remember about relational databases is that the designers are constrained by the need for generality, reliability and SQL standards. Given any particular well-defined task where you have specialized knowledge about the data, and/or you don't care about transactional correctness, and/or you're not concerned about data loss, a good programmer can always write a faster solution.\n\nOf course, there's a huge penalty. You lose support, lose of generality, the application takes on complexity that should be in the database, and on and on. A hand-crafted solution should be avoided unless there's simply no other way.\n\nA relational database is a tool. Although powerful, like any tool it has limitations. Use the tool where it's useful, and use other tools when necessary.\n\nCraig\n", "msg_date": "Sun, 29 Jan 2006 19:19:46 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Fri, Jan 27, 2006 at 07:05:04PM -0800, Luke Lonergan wrote:\n> Sounds like you are running into the limits of your disk subsystem. You are\n> scanning all of the data in the transactions table, so you will be limited\n> by the disk bandwidth you have ? and using RAID-10, you should divide the\n> number of disk drives by 2 and multiply by their indiividual bandwidth\n> (around 60MB/s) and that?s what you can expect in terms of performance. So,\n> if you have 8 drives, you should expect to get 4 x 60 MB/s = 240 MB/s in\n> bandwidth. That means that if you are dealing with 24,000 MB of data in the\n> ?transactions? table, then you will scan it in 100 seconds.\n\nWhy divide by 2? A good raid controller should be able to send read\nrequests to both drives out of the mirrored set to fully utilize the\nbandwidth. Of course, that probably won't come into play unless the OS\ndecides that it's going to read-ahead fairly large chunks of the table\nat a time...\n\nAlso, some vmstat output would certainly help clarify where the\nbottleneck is...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 30 Jan 2006 14:25:24 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Jim,\n\nOn 1/30/06 12:25 PM, \"Jim C. Nasby\" <[email protected]> wrote:\n\n> Why divide by 2? A good raid controller should be able to send read\n> requests to both drives out of the mirrored set to fully utilize the\n> bandwidth. Of course, that probably won't come into play unless the OS\n> decides that it's going to read-ahead fairly large chunks of the table\n> at a time...\n\nI've not seen one that does, nor would it work in the general case IMO. In\nRAID1 writes are duplicated and reads come from one of the copies. 
You\ncould alternate read service requests to minimize rotational latency, but\nyou can't improve bandwidth.\n\n- Luke \n\n\n", "msg_date": "Tue, 31 Jan 2006 09:00:30 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Luke Lonergan wrote:\n> Jim,\n>\n> On 1/30/06 12:25 PM, \"Jim C. Nasby\" <[email protected]> wrote:\n>\n> \n>> Why divide by 2? A good raid controller should be able to send read\n>> requests to both drives out of the mirrored set to fully utilize the\n>> bandwidth. Of course, that probably won't come into play unless the OS\n>> decides that it's going to read-ahead fairly large chunks of the table\n>> at a time...\n>> \n>\n> I've not seen one that does, nor would it work in the general case IMO. In\n> RAID1 writes are duplicated and reads come from one of the copies. You\n> could alternate read service requests to minimize rotational latency, but\n> you can't improve bandwidth.\n>\n> - Luke \n>\n> \nFor Solaris's software raid, the default settings for raid-1 sets is: \nround-robin read, parallel write. I assumed this mean't it would give \nsimilar read performance to raid-0, but I've never benchmarked it.\n\n-Kevin\n", "msg_date": "Tue, 31 Jan 2006 10:25:39 -0700", "msg_from": "Kevin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Tue, Jan 31, 2006 at 09:00:30AM -0800, Luke Lonergan wrote:\n> Jim,\n> \n> On 1/30/06 12:25 PM, \"Jim C. Nasby\" <[email protected]> wrote:\n> \n> > Why divide by 2? A good raid controller should be able to send read\n> > requests to both drives out of the mirrored set to fully utilize the\n> > bandwidth. Of course, that probably won't come into play unless the OS\n> > decides that it's going to read-ahead fairly large chunks of the table\n> > at a time...\n> \n> I've not seen one that does, nor would it work in the general case IMO. In\n> RAID1 writes are duplicated and reads come from one of the copies. You\n> could alternate read service requests to minimize rotational latency, but\n> you can't improve bandwidth.\n\n(BTW, I did some testing that seems to confirm this)\n\nWhy couldn't you double the bandwidth? If you're doing a largish read\nyou should be able to do something like have drive a read the first\ntrack, drive b the second, etc. Of course that means that the controller\nor OS would have to be able to stitch things back together.\n\nAs for software raid, I'm wondering how well that works if you can't use\na BBU to allow write caching/re-ordering...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 31 Jan 2006 13:21:45 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Tue, 2006-01-31 at 09:00 -0800, Luke Lonergan wrote:\n> Jim,\n> \n> On 1/30/06 12:25 PM, \"Jim C. Nasby\" <[email protected]> wrote:\n> \n> > Why divide by 2? A good raid controller should be able to send read\n> > requests to both drives out of the mirrored set to fully utilize the\n> > bandwidth. Of course, that probably won't come into play unless the OS\n> > decides that it's going to read-ahead fairly large chunks of the table\n> > at a time...\n> \n> I've not seen one that does, nor would it work in the general case IMO. 
In\n> RAID1 writes are duplicated and reads come from one of the copies. You\n> could alternate read service requests to minimize rotational latency, but\n> you can't improve bandwidth.\n\nThen you've not seen Linux. Linux does balanced reads on software\nmirrors. I'm not sure why you think this can't improve bandwidth. It\ndoes improve streaming bandwidth as long as the platter STR is more than\nthe bus STR.\n\n-jwb\n", "msg_date": "Tue, 31 Jan 2006 12:03:58 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Jeffrey,\n\nOn 1/31/06 12:03 PM, \"Jeffrey W. Baker\" <[email protected]> wrote:\n\n> Then you've not seen Linux.\n\n:-D\n\n> Linux does balanced reads on software\n> mirrors. I'm not sure why you think this can't improve bandwidth. It\n> does improve streaming bandwidth as long as the platter STR is more than\n> the bus STR.\n\n... Prove it.\n\n- Luke\n\n\n", "msg_date": "Tue, 31 Jan 2006 12:47:10 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Jim,\n\nOn 1/31/06 11:21 AM, \"Jim C. Nasby\" <[email protected]> wrote:\n\n> (BTW, I did some testing that seems to confirm this)\n> \n> Why couldn't you double the bandwidth? If you're doing a largish read\n> you should be able to do something like have drive a read the first\n> track, drive b the second, etc. Of course that means that the controller\n> or OS would have to be able to stitch things back together.\n\nIt's because your alternating reads are skipping in chunks across the\nplatter. Disks work at their max internal rate when reading sequential\ndata, and the cache is often built to buffer a track-at-a-time, so\nalternating pieces that are not contiguous has the effect of halving the max\ninternal sustained bandwidth of each drive - the total is equal to one\ndrive's sustained internal bandwidth.\n\nThis works differently for RAID0, where the chunks are allocated to each\ndrive and laid down contiguously on each, so that when they're read back,\neach drive runs at it's sustained sequential throughput.\n\nThe alternating technique in mirroring might improve rotational latency for\nrandom seeking - a trick that Tandem exploited, but it won't improve\nbandwidth.\n \n> As for software raid, I'm wondering how well that works if you can't use\n> a BBU to allow write caching/re-ordering...\n\nWorks great with standard OS write caching.\n\n- Luke\n\n\n", "msg_date": "Tue, 31 Jan 2006 14:52:57 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "\n>> Linux does balanced reads on software\n>> mirrors. I'm not sure why you think this can't improve bandwidth. It\n>> does improve streaming bandwidth as long as the platter STR is more than\n>> the bus STR.\n>\n> ... Prove it.\n>\n\n\t(I have a software RAID1 on this desktop machine)\n\n\tIt's a lot faster than a single disk for random reads when more than 1 \nthread hits the disk, because it distributes reads to both disks. Thus, \napplications start faster, and the machine is more reactive even when the \ndisk is thrashing. Cron starting a \"updatedb\" is less painful. It's cool \nfor desktop use (and of course it's more reliable).\n\n\tHowever large reads (dd-style) are just the same speed as 1 drive. 
I \nguess you'd need a humongous readahead in order to read from both disks.\n", "msg_date": "Wed, 01 Feb 2006 00:11:05 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Tue, Jan 31, 2006 at 02:52:57PM -0800, Luke Lonergan wrote:\n> It's because your alternating reads are skipping in chunks across the\n> platter. Disks work at their max internal rate when reading sequential\n> data, and the cache is often built to buffer a track-at-a-time, so\n> alternating pieces that are not contiguous has the effect of halving the max\n> internal sustained bandwidth of each drive - the total is equal to one\n> drive's sustained internal bandwidth.\n> \n> This works differently for RAID0, where the chunks are allocated to each\n> drive and laid down contiguously on each, so that when they're read back,\n> each drive runs at it's sustained sequential throughput.\n> \n> The alternating technique in mirroring might improve rotational latency for\n> random seeking - a trick that Tandem exploited, but it won't improve\n> bandwidth.\n\nOr just work in multiples of tracks, which would greatly reduce the\nimpact of delays from seeking.\n\n> > As for software raid, I'm wondering how well that works if you can't use\n> > a BBU to allow write caching/re-ordering...\n> \n> Works great with standard OS write caching.\n\nWell, the only problem with that is if the machine crashes for any\nreason you risk having the database corrupted (or at best losing some\ncommitted transactions).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 31 Jan 2006 17:12:27 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "PFC,\n\nOn 1/31/06 3:11 PM, \"PFC\" <[email protected]> wrote:\n\n>> ... Prove it.\n>> \n> \n> (I have a software RAID1 on this desktop machine)\n> \n> It's a lot faster than a single disk for random reads when more than 1\n> thread hits the disk, because it distributes reads to both disks. Thus,\n> applications start faster, and the machine is more reactive even when the\n> disk is thrashing. Cron starting a \"updatedb\" is less painful. It's cool\n> for desktop use (and of course it's more reliable).\n\nExactly - improved your random seeks.\n\n> However large reads (dd-style) are just the same speed as 1 drive. I\n> guess you'd need a humongous readahead in order to read from both disks.\n\nNope - won't help.\n\n- Luke\n\n\n", "msg_date": "Tue, 31 Jan 2006 15:13:10 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Jim,\n\nOn 1/31/06 3:12 PM, \"Jim C. 
Nasby\" <[email protected]> wrote:\n\n>> The alternating technique in mirroring might improve rotational latency for\n>> random seeking - a trick that Tandem exploited, but it won't improve\n>> bandwidth.\n> \n> Or just work in multiples of tracks, which would greatly reduce the\n> impact of delays from seeking.\n\nSo, having rediscovered the facts underlying the age-old RAID10 versus RAID5\ndebate we're back to the earlier points.\n\nRAID10 is/was the best option when latency / random seek was the predominant\nproblem to be solved, RAID5/50 is best where read bandwidth is needed.\nModern developments in fast CPUs for write checksumming have made RAID5/50 a\nviable alternative to RAID10 even when there is moderate write / random seek\nworkloads and fast read is needed.\n \n>> \n>> Works great with standard OS write caching.\n> \n> Well, the only problem with that is if the machine crashes for any\n> reason you risk having the database corrupted (or at best losing some\n> committed transactions).\n\nSo, do you routinely turn off Linux write caching? If not, then there's no\ndifference.\n\n- Luke\n\n\n", "msg_date": "Tue, 31 Jan 2006 15:19:38 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Tue, Jan 31, 2006 at 12:47:10PM -0800, Luke Lonergan wrote:\n>> Linux does balanced reads on software\n>> mirrors. I'm not sure why you think this can't improve bandwidth. It\n>> does improve streaming bandwidth as long as the platter STR is more than\n>> the bus STR.\n> ... Prove it.\n\nFWIW, this is on Ultra160 disks (Seagate 10000rpm) on a dual Opteron running\nLinux 2.6.14.3:\n\ncassarossa:~# grep md1 /proc/mdstat \nmd1 : active raid1 sdf6[1] sda6[0]\ncassarossa:~# dd if=/dev/sda6 of=/dev/null bs=8k count=400000\n[system at about 35% wait for I/O and 15% system, according to top]\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 54,488154 seconds (60137842 bytes/sec)\n[system at about 45% wait for I/O and 7% system -- whoa?]\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 52,523771 seconds (62386990 bytes/sec)\n\nI'm not sure if it _refutes_ the assertion that the Linux RAID-1 driver can\ndo balancing of sequential reads, but it certainly doesn't present very much\nevidence in that direction. BTW, sda and sdf are on different channels of a\ndual-channel (onboard, connected via PCI-X) Adaptec board, so I doubt the bus\nis the limiting factor.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 1 Feb 2006 02:26:15 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Steinar,\n\nOn 1/31/06 5:26 PM, \"Steinar H. 
Gunderson\" <[email protected]> wrote:\n\n> cassarossa:~# grep md1 /proc/mdstat\n> md1 : active raid1 sdf6[1] sda6[0]\n> cassarossa:~# dd if=/dev/sda6 of=/dev/null bs=8k count=400000\n> [system at about 35% wait for I/O and 15% system, according to top]\n> 400000+0 records in\n> 400000+0 records out\n> 3276800000 bytes transferred in 54,488154 seconds (60137842 bytes/sec)\n> [system at about 45% wait for I/O and 7% system -- whoa?]\n> 400000+0 records in\n> 400000+0 records out\n> 3276800000 bytes transferred in 52,523771 seconds (62386990 bytes/sec)\n> \n> I'm not sure if it _refutes_ the assertion that the Linux RAID-1 driver can\n> do balancing of sequential reads, but it certainly doesn't present very much\n> evidence in that direction. BTW, sda and sdf are on different channels of a\n> dual-channel (onboard, connected via PCI-X) Adaptec board, so I doubt the bus\n> is the limiting factor.\n\nYep - 2MB/s is noise. Run a RAID0, you should get 120MB/s.\n\nIncidentally, before this thread took a turn to RAID10 vs. RAID5, the\nquestion of HW RAID adapter versus SW RAID was the focus. I routinely see\nnumbers like 20MB/s coming from HW RAID adapters on Linux, so it's nice to\nsee someone post a decent number using SW RAID.\n\nWe're very happy with the 3Ware HW RAID adapters, but so far they're the\nonly ones (I have two Arecas but I mistakenly ordered PCI-E so I can't test\nthem :-( \n\n- Luke\n\n\n", "msg_date": "Tue, 31 Jan 2006 17:35:32 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Tue, Jan 31, 2006 at 03:19:38PM -0800, Luke Lonergan wrote:\n> > Well, the only problem with that is if the machine crashes for any\n> > reason you risk having the database corrupted (or at best losing some\n> > committed transactions).\n> \n> So, do you routinely turn off Linux write caching? If not, then there's no\n> difference.\n\nMy thought was about fsync on WAL; if you're doing much writing then\na good raid write cache with BBU will improve performance.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 31 Jan 2006 20:47:03 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Tue, 2006-01-31 at 12:47 -0800, Luke Lonergan wrote:\n> Jeffrey,\n> \n> On 1/31/06 12:03 PM, \"Jeffrey W. Baker\" <[email protected]> wrote:\n> > Linux does balanced reads on software\n> > mirrors. I'm not sure why you think this can't improve bandwidth. It\n> > does improve streaming bandwidth as long as the platter STR is more than\n> > the bus STR.\n> \n> ... Prove it.\n\nIt's clear that Linux software RAID1, and by extension RAID10, does\nbalanced reads, and that these balanced reads double the bandwidth. 
A\nquick glance at the kernel source code, and a trivial test, proves the\npoint.\n\nIn this test, sdf and sdg are Seagate 15k.3 disks on a single channel of\nan Adaptec 39320, but the enclosure, and therefore the bus, is capable\nof only Ultra160 operation.\n\n# grep md0 /proc/mdstat \nmd0 : active raid1 sdf1[0] sdg1[1]\n\n# dd if=/dev/md0 of=/dev/null bs=8k count=400000 skip=0 & \n dd if=/dev/md0 of=/dev/null bs=8k count=400000 skip=400000\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 48.243362 seconds (67922298 bytes/sec)\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 48.375897 seconds (67736211 bytes/sec)\n\nThat's 136MB/sec, for those following along at home. With only two\ndisks in a RAID1, you can nearly max out the SCSI bus.\n\n# dd if=/dev/sdf1 of=/dev/null bs=8k count=400000 skip=0 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=400000 skip=400000\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 190.413286 seconds (17208883 bytes/sec)\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 192.096232 seconds (17058117 bytes/sec)\n\nThat, on the other hand, is only 34MB/sec. With two threads, the RAID1\nis 296% faster.\n\n# dd if=/dev/md0 of=/dev/null bs=8k count=400000 skip=0 & \n dd if=/dev/md0 of=/dev/null bs=8k count=400000 skip=400000 & \n dd if=/dev/md0 of=/dev/null bs=8k count=400000 skip=800000 & \n dd if=/dev/md0 of=/dev/null bs=8k count=400000 skip=1200000 &\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 174.276585 seconds (18802296 bytes/sec)\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 181.581893 seconds (18045852 bytes/sec)\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 183.724243 seconds (17835425 bytes/sec)\n400000+0 records in\n400000+0 records out\n3276800000 bytes transferred in 184.209018 seconds (17788489 bytes/sec)\n\nThat's 71MB/sec with 4 threads...\n\n# dd if=/dev/sdf1 of=/dev/null bs=8k count=100000 skip=0 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=100000 skip=400000 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=100000 skip=800000 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=100000 skip=1200000 &\n100000+0 records in\n100000+0 records out\n819200000 bytes transferred in 77.489210 seconds (10571794 bytes/sec)\n100000+0 records in\n100000+0 records out\n819200000 bytes transferred in 87.628000 seconds (9348610 bytes/sec)\n100000+0 records in\n100000+0 records out\n819200000 bytes transferred in 88.912989 seconds (9213502 bytes/sec)\n100000+0 records in\n100000+0 records out\n819200000 bytes transferred in 90.238705 seconds (9078144 bytes/sec)\n\nOnly 36MB/sec for the single disk. 
96% advantage for the RAID1.\n\n# dd if=/dev/md0 of=/dev/null bs=8k count=50000 skip=0 & \n dd if=/dev/md0 of=/dev/null bs=8k count=50000 skip=400000 & \n dd if=/dev/md0 of=/dev/null bs=8k count=50000 skip=800000 & \n dd if=/dev/md0 of=/dev/null bs=8k count=50000 skip=1200000 & \n dd if=/dev/md0 of=/dev/null bs=8k count=50000 skip=1600000 & \n dd if=/dev/md0 of=/dev/null bs=8k count=50000 skip=2000000 & \n dd if=/dev/md0 of=/dev/null bs=8k count=50000 skip=2400000 & \n dd if=/dev/md0 of=/dev/null bs=8k count=50000 skip=2800000 &\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 35.289648 seconds (11606803 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 42.653475 seconds (9602969 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 43.524714 seconds (9410745 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 45.151705 seconds (9071640 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 47.741845 seconds (8579476 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 48.600533 seconds (8427891 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 48.758726 seconds (8400548 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 49.679275 seconds (8244887 bytes/sec)\n\n66MB/s with 8 threads.\n\n# dd if=/dev/sdf1 of=/dev/null bs=8k count=50000 skip=0 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=50000 skip=400000 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=50000 skip=800000 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=50000 skip=1200000 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=50000 skip=1600000 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=50000 skip=2000000 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=50000 skip=2400000 & \n dd if=/dev/sdf1 of=/dev/null bs=8k count=50000 skip=2800000 &\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 73.873911 seconds (5544583 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 75.613093 seconds (5417051 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 79.988303 seconds (5120749 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 79.996440 seconds (5120228 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 84.885172 seconds (4825342 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 92.995892 seconds (4404496 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 99.180337 seconds (4129851 bytes/sec)\n50000+0 records in\n50000+0 records out\n409600000 bytes transferred in 100.144752 seconds (4090080 bytes/sec)\n\n33MB/s. RAID1 gives a 100% advantage at 8 threads.\n\nI think I've proved my point. Software RAID1 read balancing provides\n0%, 300%, 100%, and 100% speedup on 1, 2, 4, and 8 threads,\nrespectively. In the presence of random I/O, the results are even\nbetter.\n\nAnyone who thinks they have a single-threaded workload has not yet\nencountered the autovacuum daemon.\n\n-Jeff\n\n", "msg_date": "Tue, 31 Jan 2006 20:09:40 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Jeffrey,\n\nOn 1/31/06 8:09 PM, \"Jeffrey W. Baker\" <[email protected]> wrote:\n>> ... 
Prove it.\n> I think I've proved my point. Software RAID1 read balancing provides\n> 0%, 300%, 100%, and 100% speedup on 1, 2, 4, and 8 threads,\n> respectively. In the presence of random I/O, the results are even\n> better.\n> Anyone who thinks they have a single-threaded workload has not yet\n> encountered the autovacuum daemon.\n\nGood data - interesting case. I presume from your results that you had to\nmake the I/Os non-overlapping (the \"skip\" option to dd) in order to get the\nconcurrent access to work. Why the particular choice of offset - 3.2GB in\nthis case?\n\nSo - the bandwidth doubles in specific circumstances under concurrent\nworkloads - not relevant to \"Huge Data sets, simple queries\", but possibly\nhelpful for certain kinds of OLTP applications.\n\n- Luke \n\n\n", "msg_date": "Tue, 31 Jan 2006 21:53:06 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Tue, 2006-01-31 at 21:53 -0800, Luke Lonergan wrote:\n> Jeffrey,\n> \n> On 1/31/06 8:09 PM, \"Jeffrey W. Baker\" <[email protected]> wrote:\n> >> ... Prove it.\n> > I think I've proved my point. Software RAID1 read balancing provides\n> > 0%, 300%, 100%, and 100% speedup on 1, 2, 4, and 8 threads,\n> > respectively. In the presence of random I/O, the results are even\n> > better.\n> > Anyone who thinks they have a single-threaded workload has not yet\n> > encountered the autovacuum daemon.\n> \n> Good data - interesting case. I presume from your results that you had to\n> make the I/Os non-overlapping (the \"skip\" option to dd) in order to get the\n> concurrent access to work. Why the particular choice of offset - 3.2GB in\n> this case?\n\nNo particular reason. 8k x 100000 is what the last guy used upthread.\n> \n> So - the bandwidth doubles in specific circumstances under concurrent\n> workloads - not relevant to \"Huge Data sets, simple queries\", but possibly\n> helpful for certain kinds of OLTP applications.\n\nAh, but someday Pg will be able to concurrently read from two\ndatastreams to complete a single query. And that day will be glorious\nand fine, and you'll want as much disk concurrency as you can get your\nhands on.\n\n-jwb\n\n", "msg_date": "Wed, 01 Feb 2006 00:25:13 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "\n\tI did a little test on soft raid1 :\n\n\tI have two 800 Mbytes files, say A and B. (RAM is 512Mbytes).\n\n\tTest 1 :\n\t1- Read A, then read B :\n\t\t19 seconds per file\n\n\t2- Read A and B simultaneously using two threads :\n\t\t22 seconds total (reads were paralleled by the RAID)\n\n\t3- Read one block of A, then one block of B, then one block of A, etc. \nEssentially this is the same as the threaded case, except there's only one \nthread.\n\t\t53 seconds total (with heavy seeking noise from the hdd).\n\n\tI was half expecting 3 to take the same as 2. It simulates, for instance, \nscanning a table and its index, or scanning 2 sort bins. Well, maybe one \nday...\n\n\tIt would be nice if the Kernel had an API for applications to tell it \n\"I'm gonna need these blocks in the next seconds, can you read them in the \norder you like (fastest), from whatever disk you like, and cache them for \nme please; so that I can read them in the order I like, but very fast ?\"\n\n\nOn Wed, 01 Feb 2006 09:25:13 +0100, Jeffrey W. 
Baker <[email protected]> \nwrote:\n\n> On Tue, 2006-01-31 at 21:53 -0800, Luke Lonergan wrote:\n>> Jeffrey,\n>>\n>> On 1/31/06 8:09 PM, \"Jeffrey W. Baker\" <[email protected]> wrote:\n>> >> ... Prove it.\n>> > I think I've proved my point. Software RAID1 read balancing provides\n>> > 0%, 300%, 100%, and 100% speedup on 1, 2, 4, and 8 threads,\n>> > respectively. In the presence of random I/O, the results are even\n>> > better.\n>> > Anyone who thinks they have a single-threaded workload has not yet\n>> > encountered the autovacuum daemon.\n>>\n>> Good data - interesting case. I presume from your results that you had \n>> to\n>> make the I/Os non-overlapping (the \"skip\" option to dd) in order to get \n>> the\n>> concurrent access to work. Why the particular choice of offset - 3.2GB \n>> in\n>> this case?\n>\n> No particular reason. 8k x 100000 is what the last guy used upthread.\n>>\n>> So - the bandwidth doubles in specific circumstances under concurrent\n>> workloads - not relevant to \"Huge Data sets, simple queries\", but \n>> possibly\n>> helpful for certain kinds of OLTP applications.\n>\n> Ah, but someday Pg will be able to concurrently read from two\n> datastreams to complete a single query. And that day will be glorious\n> and fine, and you'll want as much disk concurrency as you can get your\n> hands on.\n>\n> -jwb\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\n", "msg_date": "Wed, 01 Feb 2006 10:01:39 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Tue, Jan 31, 2006 at 08:09:40PM -0800, Jeffrey W. Baker wrote:\n>I think I've proved my point. Software RAID1 read balancing provides\n>0%, 300%, 100%, and 100% speedup on 1, 2, 4, and 8 threads,\n>respectively. In the presence of random I/O, the results are even\n>better.\n\nUmm, the point *was* about single stream performance. I guess you did a \ngood job of proving it.\n\n>Anyone who thinks they have a single-threaded workload has not yet\n>encountered the autovacuum daemon.\n\nOn tables where my single stream performance matters you'd better \nbelieve that the autovacuum daemon isn't running.\n\nMike Stone\n", "msg_date": "Wed, 01 Feb 2006 07:15:09 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "PFC,\n\n\nOn 2/1/06 1:01 AM, \"PFC\" <[email protected]> wrote:\n\n> 3- Read one block of A, then one block of B, then one block of A, etc.\n> Essentially this is the same as the threaded case, except there's only one\n> thread.\n> 53 seconds total (with heavy seeking noise from the hdd).\n> \n> I was half expecting 3 to take the same as 2. It simulates, for\n> instance, \n> scanning a table and its index, or scanning 2 sort bins. 
Well, maybe one\n> day...\n\nThis is actually interesting overall - I think what this might be showing is\nthat the Linux SW RAID1 is alternating I/Os to the mirror disks from\ndifferent processes (LWP or HWP both maybe?), but not within one process.\n\n> It would be nice if the Kernel had an API for applications to tell it\n> \"I'm gonna need these blocks in the next seconds, can you read them in the\n> order you like (fastest), from whatever disk you like, and cache them for\n> me please; so that I can read them in the order I like, but very fast ?\"\n\nMore control is always good IMO, but for now there's I/O reordering in the\nSCSI layer and readahead tuning. There is POSIX fadvise() also to tell the\nunderlying I/O layer what the access pattern looks like.\n\n- Luke\n\n\n", "msg_date": "Wed, 01 Feb 2006 09:42:12 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Wed, Feb 01, 2006 at 09:42:12AM -0800, Luke Lonergan wrote:\n> This is actually interesting overall - I think what this might be showing is\n> that the Linux SW RAID1 is alternating I/Os to the mirror disks from\n> different processes (LWP or HWP both maybe?), but not within one process.\n\nHaving read the code, I'm fairly certain it doesn't really care what process\nanything is coming from.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 1 Feb 2006 18:55:35 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On 2/1/06, Luke Lonergan <[email protected]> wrote:\n[snip]\n> This is actually interesting overall - I think what this might be showing is\n> that the Linux SW RAID1 is alternating I/Os to the mirror disks from\n> different processes (LWP or HWP both maybe?), but not within one process.\n\nI can confirm this behavior after looking at my multipathed fibre\nchannel SAN. To the best of my knowledge, the multipathing code uses\nthe same underlying I/O code as the Linux SW RAID logic.\n\n--\nMike Rylander\[email protected]\nGPLS -- PINES Development\nDatabase Developer\nhttp://open-ils.org\n", "msg_date": "Wed, 1 Feb 2006 17:57:47 +0000", "msg_from": "Mike Rylander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Jeffrey,\n\nOn 2/1/06 12:25 AM, \"Jeffrey W. Baker\" <[email protected]> wrote:\n\n> Ah, but someday Pg will be able to concurrently read from two\n> datastreams to complete a single query. And that day will be glorious\n> and fine, and you'll want as much disk concurrency as you can get your\n> hands on.\n\nWell - so happens that we have one of those multi-processing postgres'\nhandy, so we'll test this theory out in the next couple of days. We've a\ncustomer who ordered 3 machines with 6 drives each (Dell 2850s) on two U320\nSCSI busses, and we're going to try configuring them all in a single RAID10\nand run two Bizgres MPP segments on that (along with two mirrors).\n\nWe'll try the RAID10 config and if we get full parallelism, we'll use it (if\nthe customer like it). Otherwise, we'll use two 3 disk RAID5 sets.\n\nI'll post the results here.\n\nThanks Jeffrey,\n\n- Luke\n\n\n", "msg_date": "Wed, 01 Feb 2006 20:47:09 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Jeffrey W. 
Baker wrote:\n> On Tue, 2006-01-31 at 09:00 -0800, Luke Lonergan wrote:\n> \n>> Jim,\n>>\n>> On 1/30/06 12:25 PM, \"Jim C. Nasby\" <[email protected]> wrote:\n>>\n>> \n>>> Why divide by 2? A good raid controller should be able to send read\n>>> requests to both drives out of the mirrored set to fully utilize the\n>>> bandwidth. Of course, that probably won't come into play unless the OS\n>>> decides that it's going to read-ahead fairly large chunks of the table\n>>> at a time...\n>>> \n>> I've not seen one that does, nor would it work in the general case IMO. In\n>> RAID1 writes are duplicated and reads come from one of the copies. You\n>> could alternate read service requests to minimize rotational latency, but\n>> you can't improve bandwidth.\n>> \n>\n> Then you've not seen Linux. Linux does balanced reads on software\n> mirrors. I'm not sure why you think this can't improve bandwidth. It\n> does improve streaming bandwidth as long as the platter STR is more than\n> the bus STR.\n> \nFYI: so does the Solaris Volume Manager (by default) on Solaris. One \ncan choose alternate access methods like \"First\" (if the other mirrors \nare slower than the first) or \"Geometric\". It's been doing this for a \ngood 10 years now (back when it was called DiskSuite), so it's nothing new.\n\n-- Alan\n\n\n\n\n\n\n", "msg_date": "Thu, 02 Feb 2006 14:41:19 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" } ]
[ { "msg_contents": "Depesz,\n\n> [mailto:[email protected]] On Behalf Of \n> hubert depesz lubaczewski\n> Sent: Sunday, January 29, 2006 3:25 AM\n>\n> hmm .. do i understand correctly that you're suggesting that \n> using raid 10 and/or hardware raid adapter might hurt disc \n> subsystem performance? could you elaborate on the reasons, \n> please? it's not that i'm against the idea - i'm just curious \n> as this is very \"against-common-sense\". and i always found it \n> interesting when somebody states something that uncommon...\n\nSee previous postings on this list - often when someone is reporting a\nperformance problem with large data, the answer comes back that their\nI/O setup is not performing well. Most times, people are trusting that\nwhen they buy a hardware RAID adapter and set it up, that the\nperformance will be what they expect and what is theoretically correct\nfor the number of disk drives.\n\nIn fact, in our testing of various host-based SCSI RAID adapters (LSI,\nDell PERC, Adaptec, HP SmartArray), we find that *all* of them\nunderperform, most of them severely. Some produce results slower than a\nsingle disk drive. We've found that some external SCSI RAID adapters,\nthose built into the disk chassis, often perform better. I think this\nmight be due to the better drivers and perhaps a different marketplace\nfor the higher end solutions driving performance validation.\n\nThe important lesson we've learned is to always test the I/O subsystem\nperformance - you can do so with a simple test like:\n time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync\"\n time dd if=bigfile of=/dev/null bs=8k\n\nIf the answer isn't something close to the theoretical rate, you are\nlikely limited by your RAID setup. You might be shocked to find a\nsevere performance problem. If either is true, switching to software\nRAID using a simple SCSI adapter will fix the problem.\n\nBTW - we've had very good experiences with the host-based SATA adapters\nfrom 3Ware. The Areca controllers are also respected.\n\nOh - and about RAID 10 - for large data work it's more often a waste of\ndisk performance-wise compared to RAID 5 these days. RAID5 will almost\ndouble the performance on a reasonable number of drives.\n\n- Luke\n\n", "msg_date": "Sun, 29 Jan 2006 13:44:08 -0500", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Sun, 2006-01-29 at 13:44 -0500, Luke Lonergan wrote:\n> Depesz,\n> \n> > [mailto:[email protected]] On Behalf Of \n> > hubert depesz lubaczewski\n> > Sent: Sunday, January 29, 2006 3:25 AM\n> >\n> > hmm .. do i understand correctly that you're suggesting that \n> > using raid 10 and/or hardware raid adapter might hurt disc \n> > subsystem performance? could you elaborate on the reasons, \n> > please? it's not that i'm against the idea - i'm just curious \n> > as this is very \"against-common-sense\". and i always found it \n> > interesting when somebody states something that uncommon...\n\n> Oh - and about RAID 10 - for large data work it's more often a waste of\n> disk performance-wise compared to RAID 5 these days. RAID5 will almost\n> double the performance on a reasonable number of drives.\n\nI think you might want to be more specific here. I would agree with you\nfor data warehousing, decision support, data mining, and similar\nread-mostly non-transactional loads. 
For transactional loads RAID-5 is,\ngenerally speaking, a disaster due to the read-before-write problem.\n\nWhile we're on the topic, I just installed another one of those Areca\nARC-1130 controllers with 1GB cache. It's ludicrously fast: 250MB/sec\nburst writes, CPU-limited reads. I can't recommend them highly enough.\n\n-jwb\n\nPS: Could you look into fixing your mailer? Your messages sometimes\ndon't contain In-Reply-To headers, and therefore don't thread properly.\n\n", "msg_date": "Sun, 29 Jan 2006 13:04:01 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On Sun, 29 Jan 2006, Luke Lonergan wrote:\n\n> In fact, in our testing of various host-based SCSI RAID adapters (LSI,\n> Dell PERC, Adaptec, HP SmartArray), we find that *all* of them\n> underperform, most of them severely.\n\n[snip]\n\n> The important lesson we've learned is to always test the I/O subsystem\n> performance - you can do so with a simple test like:\n> time bash -c \"dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync\"\n> time dd if=bigfile of=/dev/null bs=8k\n\nI'm curious about this since we're shopping around for something new... I \ndo want to get some kind of baseline to compare new products to. Areca \nsent me stats on their SCSI->SATA controller and it looks like it maxes \nout around 10,000 IOPS.\n\nI'd like to see how our existing stuff compares to this. I'd especially \nlike to see it in graph form such as the docs Areca sent (IOPS on one \naxis, block size on the other, etc.). Looking at the venerable Bonnie, it \ndoesn't really seem to focus so much on the number of read/write \noperations per second, but on big bulky transfers.\n\nWhat are you folks using to measure your arrays?\n\nI've been considering using some of our data and just basically \nbenchmarking postgres on various hardware with that, but I cannot compare \nthat to any manufacturer tests.\n\nSorry to meander a bit off topic, but I've been getting frustrated with \nthis little endeavour...\n\nThanks,\n\nCharles\n\n> - Luke\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n", "msg_date": "Mon, 30 Jan 2006 00:35:12 -0500 (EST)", "msg_from": "Charles Sprickman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Charles,\n\nOn 1/29/06 9:35 PM, \"Charles Sprickman\" <[email protected]> wrote:\n\n> What are you folks using to measure your arrays?\n\nBonnie++ measures random I/Os, numbers we find are typically in the 500/s\nrange, the best I've seen is 1500/s on a large Fibre Channel RAID0 (at\nhttp://www.wlug.org.nz/HarddiskBenchmarks).\n\n- Luke \n\n\n", "msg_date": "Sun, 29 Jan 2006 23:25:22 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "On 1/29/06, Luke Lonergan <[email protected]> wrote:\n> Oh - and about RAID 10 - for large data work it's more often a waste of\n> disk performance-wise compared to RAID 5 these days. 
RAID5 will almost\n> double the performance on a reasonable number of drives.\n\nhow many is reasonable?\n\ndepesz\n", "msg_date": "Mon, 30 Jan 2006 18:53:29 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" }, { "msg_contents": "Depesz,\n\nOn 1/30/06 9:53 AM, \"hubert depesz lubaczewski\" <[email protected]> wrote:\n\n>> double the performance on a reasonable number of drives.\n> \n> how many is reasonable?\n\nWhat I mean by that is: given a set of disks N, the read performance of RAID\nwill be equal to the drive read rate A times the number of drives used for\nreading by the RAID algorithm. In the case of RAID5, that number is (N-1),\nso the read rate is A x (N-1). In the case of RAID10, that number is N/2,\nso the read rate is A x (N/2). So, the ratio of read performance\nRAID5/RAID10 is (N-1)/(N/2) = 2 x (N-1)/N. For numbers of drives, this\nratio looks like this:\nN RAID5/RAID10\n3 1.33 \n6 1.67 \n8 1.75 \n14 1.86\n\nSo - I think reasonable would be 6-8, which are common disk configurations.\n\n- Luke\n\n\n", "msg_date": "Mon, 30 Jan 2006 11:48:49 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge Data sets, simple queries" } ]
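Luke's streaming-read model above reduces to the ratio 2 x (N-1) / N. A throwaway query such as the sketch below reproduces his figures for 3, 6, 8 and 14 drives; it is only an illustration of that arithmetic (it assumes generate_series is available, i.e. 8.0 or later) and says nothing about random or write-heavy loads, where Jeffrey's RAID-5 read-before-write caveat still applies.

    -- Illustration only: RAID5 vs RAID10 streaming-read ratio 2*(N-1)/N
    SELECT n AS drives,
           round(2.0 * (n - 1) / n, 2) AS raid5_over_raid10
    FROM generate_series(3, 14) AS n;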
[ { "msg_contents": "Hi everybody,\n\nI have the following problem, on a test server, if I do a fresh import\nof production data then run \n'explain analyze select count(*) from mandats;'\n\nI get this result:\n\nAggregate (cost=6487.32..6487.32 rows=1 width=0) (actual time=607.61..607.61 rows=1 loops=1)\n -> Seq Scan on mandats (cost=0.00..6373.26 rows=45626 width=0) (actual time=0.14..496.20 rows=45626 loops=1)\n Total runtime: 607.95 msec\n\n\nOn the production server, if I do the same (without other use of the server), I get:\n\nAggregate (cost=227554.33..227554.33 rows=1 width=0) (actual time=230705.79..230705.79 rows=1 loops=1)\n -> Seq Scan on mandats (cost=0.00..227440.26 rows=45626 width=0) (actual time=0.03..230616.64 rows=45760 loops=1)\n Total runtime: 230706.08 msec\n\n\n\nIs there anyone having an idea on how yo solve this poor performances? I\nthink it is caused by many delete/insert on this table every day, but\nhow to solve it, I need to run this qury each hour :(. I run\nvacuum each night, postgresql is unfortunatly 7.2.1 :( (no upgrade\nbefore 2 or 3 months).\n\n-- \nEmmanuel Lacour ------------------------------------ Easter-eggs\n44-46 rue de l'Ouest - 75014 Paris - France - M�tro Gait�\nPhone: +33 (0) 1 43 35 00 37 - Fax: +33 (0) 1 41 35 00 76\nmailto:[email protected] - http://www.easter-eggs.com\n", "msg_date": "Mon, 30 Jan 2006 23:57:11 +0100", "msg_from": "Emmanuel Lacour <[email protected]>", "msg_from_op": true, "msg_subject": "Query planner issue" }, { "msg_contents": "You have lots of dead rows. Do a vacuum full to get it under control,\nthen run VACUUM more frequently and/or increase your FSM settings to\nkeep dead rows in check. In 7.2 vacuum is pretty intrusive; it will be\nmuch better behaved once you can upgrade to a more recent version.\n\nYou really, really want to upgrade as soon as possible, and refer to the\non-line docs about what to do with your FSM settings.\n\n-- Mark Lewis\n\n\nOn Mon, 2006-01-30 at 23:57 +0100, Emmanuel Lacour wrote:\n> Hi everybody,\n> \n> I have the following problem, on a test server, if I do a fresh import\n> of production data then run \n> 'explain analyze select count(*) from mandats;'\n> \n> I get this result:\n> \n> Aggregate (cost=6487.32..6487.32 rows=1 width=0) (actual time=607.61..607.61 rows=1 loops=1)\n> -> Seq Scan on mandats (cost=0.00..6373.26 rows=45626 width=0) (actual time=0.14..496.20 rows=45626 loops=1)\n> Total runtime: 607.95 msec\n> \n> \n> On the production server, if I do the same (without other use of the server), I get:\n> \n> Aggregate (cost=227554.33..227554.33 rows=1 width=0) (actual time=230705.79..230705.79 rows=1 loops=1)\n> -> Seq Scan on mandats (cost=0.00..227440.26 rows=45626 width=0) (actual time=0.03..230616.64 rows=45760 loops=1)\n> Total runtime: 230706.08 msec\n> \n> \n> \n> Is there anyone having an idea on how yo solve this poor performances? I\n> think it is caused by many delete/insert on this table every day, but\n> how to solve it, I need to run this qury each hour :(. 
I run\n> vacuum each night, postgresql is unfortunatly 7.2.1 :( (no upgrade\n> before 2 or 3 months).\n> \n", "msg_date": "Mon, 30 Jan 2006 15:26:23 -0800", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner issue" }, { "msg_contents": "with Postgresql 7.2.1 you will need to do BOTH vacuum and reindex and with a table that gets many updates/deletes, you \nshould run vacuum more than daily.\n\nBoth issues have been solved in 8.1.\n\nJim\n \n\n---------- Original Message -----------\nFrom: Emmanuel Lacour <[email protected]>\nTo: [email protected]\nSent: Mon, 30 Jan 2006 23:57:11 +0100\nSubject: [PERFORM] Query planner issue\n\n> Hi everybody,\n> \n> I have the following problem, on a test server, if I do a fresh import\n> of production data then run \n> 'explain analyze select count(*) from mandats;'\n> \n> I get this result:\n> \n> Aggregate (cost=6487.32..6487.32 rows=1 width=0) (actual time=607.61..607.61 rows=1 loops=1)\n> -> Seq Scan on mandats (cost=0.00..6373.26 rows=45626 width=0) (actual time=0.14..496.20 rows=45626 \n> loops=1) Total runtime: 607.95 msec\n> \n> On the production server, if I do the same (without other use of the server), I get:\n> \n> Aggregate (cost=227554.33..227554.33 rows=1 width=0) (actual time=230705.79..230705.79 rows=1 loops=1)\n> -> Seq Scan on mandats (cost=0.00..227440.26 rows=45626 width=0) (actual time=0.03..230616.64 rows=45760 \n> loops=1) Total runtime: 230706.08 msec\n> \n> Is there anyone having an idea on how yo solve this poor performances? I\n> think it is caused by many delete/insert on this table every day, but\n> how to solve it, I need to run this qury each hour :(. I run\n> vacuum each night, postgresql is unfortunatly 7.2.1 :( (no upgrade\n> before 2 or 3 months).\n> \n> -- \n> Emmanuel Lacour ------------------------------------ Easter-eggs\n> 44-46 rue de l'Ouest - 75014 Paris - France - M�tro Gait�\n> Phone: +33 (0) 1 43 35 00 37 - Fax: +33 (0) 1 41 35 00 76\n> mailto:[email protected] - http://www.easter-eggs.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n------- End of Original Message -------\n\n", "msg_date": "Mon, 30 Jan 2006 18:37:22 -0500", "msg_from": "\"Jim Buttafuoco\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner issue" }, { "msg_contents": "On Mon, Jan 30, 2006 at 03:26:23PM -0800, Mark Lewis wrote:\n> You have lots of dead rows. Do a vacuum full to get it under control,\n> then run VACUUM more frequently and/or increase your FSM settings to\n> keep dead rows in check. In 7.2 vacuum is pretty intrusive; it will be\n> much better behaved once you can upgrade to a more recent version.\n> \n> You really, really want to upgrade as soon as possible, and refer to the\n> on-line docs about what to do with your FSM settings.\n> \n\nThanks! Vacuum full did it. 
I will now play with fsm settings to avoid\nrunning a full vacuum daily...\n\n\n-- \nEmmanuel Lacour ------------------------------------ Easter-eggs\n44-46 rue de l'Ouest - 75014 Paris - France - M�tro Gait�\nPhone: +33 (0) 1 43 35 00 37 - Fax: +33 (0) 1 41 35 00 76\nmailto:[email protected] - http://www.easter-eggs.com\n", "msg_date": "Tue, 31 Jan 2006 01:27:25 +0100", "msg_from": "Emmanuel Lacour <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner issue" }, { "msg_contents": "Mark Lewis <[email protected]> writes:\n> You really, really want to upgrade as soon as possible,\n\nNo, sooner than that. Show your boss the list of known\ndata-loss-causing bugs in 7.2.1, and refuse to take responsibility\nif the database eats all your data before the \"in good time\" upgrade.\n\nThe release note pages starting here:\nhttp://developer.postgresql.org/docs/postgres/release-7-2-8.html\nmention the problems we found while 7.2 was still supported. It's\nlikely that some of the 7.3 bugs found later than 2005-05-09 also\napply to 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Jan 2006 20:55:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner issue " } ]
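A minimal maintenance sketch along the lines suggested in this thread, using the table name from Emmanuel's report. The exact commands and frequency are assumptions to adapt rather than a prescription; on 7.2 both VACUUM FULL and REINDEX take exclusive locks, and raising max_fsm_pages is what keeps plain VACUUM effective between runs.

    -- One-off recovery of the bloated table (exclusive lock, best done off-hours):
    VACUUM FULL VERBOSE mandats;
    REINDEX TABLE mandats;

    -- Routine upkeep, run much more often than nightly on heavily
    -- updated tables (e.g. from cron, or via pg_autovacuum / autovacuum):
    VACUUM ANALYZE mandats;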
[ { "msg_contents": "\nCan you delete me from the mail list Please?\n\n", "msg_date": "Tue, 31 Jan 2006 17:24:30 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Delete me" }, { "msg_contents": "[email protected] wrote:\n> Can you delete me from the mail list Please?\n\nGo to the website.\nClick \"community\"\nClick \"mailing lists\"\nOn the left-hand side click \"Subscribe\"\nFill in the form, changing action to \"unsubscribe\"\n\nHTH\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 31 Jan 2006 18:33:14 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete me" }, { "msg_contents": "On Tue, Jan 31, 2006 at 06:33:14PM +0000, Richard Huxton wrote:\n> [email protected] wrote:\n> >Can you delete me from the mail list Please?\n> \n> Go to the website.\n> Click \"community\"\n> Click \"mailing lists\"\n> On the left-hand side click \"Subscribe\"\n> Fill in the form, changing action to \"unsubscribe\"\n\nOr take a look at the header that's included with every single message\nsent to the list...\n\nList-Unsubscribe: <mailto:[email protected]?body=unsub%20pgsql-performance>\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 31 Jan 2006 13:23:21 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delete me" } ]
[ { "msg_contents": "We have a large database system designed around partitioning. Our\napplication is characterized with\n \n- terabytes of data\n- billions of rows in dozens of base tables (and 100s of paritions)\n- 24x7 insert load of new data that cannot be stopped, data is time\nsensitive.\n- periodic reports that can have long running queries with query times\nmeasured in hours\n \nWe have 2 classes of \"maintenance\" activities that are causing us\nproblems:\n- periodically we need to change an insert rule on a view to point to a\ndifferent partition.\n- periodically we need to delete data that is no longer needed.\nPerformed via truncate.\n \nUnder both these circumstances (truncate and create / replace rule) the\nlocking behaviour of these commands can cause locking problems for us.\nThe scenario is best illustrated as a series of steps:\n \n\n\t1- long running report is running on view\n\t2- continuous inserters into view into a table via a rule\n\t3- truncate or rule change occurs, taking an exclusive lock.\nMust wait for #1 to finish.\n\t4- new reports and inserters must now wait for #3.\n\t5- now everyone is waiting for a single query in #1. Results\nin loss of insert data granularity (important for our application).\n\n \nWould like to understand the implications of changing postgres'\ncode/locking for rule changes and truncate to not require locking out\nselect statements? \n \nThe following is a simplified schema to help illustrate the problem.\n \n\n\tcreate table a_1\n\t(\n\t pkey int primary key\n\t);\n\tcreate table a_2\n\t(\n\t pkey int primary key\n\t);\n\t \n\tcreate view a as select * from a_1 union all select * from a_2;\n\t \n\tcreate function change_rule(int) returns void as\n\t'\n\tbegin\n\t execute ''create or replace rule insert as on insert to a do\ninstead insert into a_''||$1||''(pkey) values(NEW.pkey)'';\n\tend;\n\t' language plpgsql;\n\t \n\t-- change rule, execute something like the following\nperiodically\n\tselect change_rule(1);\n\n \nWe've looked at the code and the rule changes appear \"easy\" but we are\nconcerned about the required changes for truncate.\n \nThanks\nMarc\n\n", "msg_date": "Tue, 31 Jan 2006 18:25:01 -0500", "msg_from": "\"Marc Morin\" <[email protected]>", "msg_from_op": true, "msg_subject": "partitioning and locking problems" }, { "msg_contents": "\"Marc Morin\" <[email protected]> writes:\n> Would like to understand the implications of changing postgres'\n> code/locking for rule changes and truncate to not require locking out\n> select statements? \n\nIt won't work...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Feb 2006 00:49:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and locking problems " }, { "msg_contents": "Marc Morin wrote:\n> Under both these circumstances (truncate and create / replace rule) the\n> locking behaviour of these commands can cause locking problems for us.\n> The scenario is best illustrated as a series of steps:\n> \n> \n> \t1- long running report is running on view\n> \t2- continuous inserters into view into a table via a rule\n> \t3- truncate or rule change occurs, taking an exclusive lock.\n> Must wait for #1 to finish.\n> \t4- new reports and inserters must now wait for #3.\n> \t5- now everyone is waiting for a single query in #1. 
Results\n> in loss of insert data granularity (important for our application).\n\nHow much would you get from splitting the view into two: reporting and \ninserting?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 01 Feb 2006 09:39:05 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and locking problems" }, { "msg_contents": "Hi, Marc,\n\nMarc Morin wrote:\n\n> \t1- long running report is running on view\n> \t2- continuous inserters into view into a table via a rule\n> \t3- truncate or rule change occurs, taking an exclusive lock.\n> Must wait for #1 to finish.\n> \t4- new reports and inserters must now wait for #3.\n> \t5- now everyone is waiting for a single query in #1. Results\n> in loss of insert data granularity (important for our application).\n\nApart from having two separate views (one for report, one for insert) as\nRichard suggested:\n\nIf you have fixed times for #3, don't start any #1 that won't finish\nbefore it's time for #3.\n\nYou could also use the LOCK command on an empty lock table at the\nbeginning of each #1 or #3 transaction to prevent #3 from getting the\nview lock before #1 is finished.\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Thu, 02 Feb 2006 13:44:23 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and locking problems" } ]
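A minimal sketch of the empty lock-table idea Markus mentions, reusing the view and change_rule() names from Marc's simplified schema; the coordination table name and lock modes are assumptions. Later lock requests still queue in arrival order, so this serializes reports against maintenance rather than removing all waiting.

    -- Hypothetical coordination table; it never needs to hold rows.
    CREATE TABLE partition_maint_lock (dummy int);

    -- Step 1: each long-running report transaction
    BEGIN;
    LOCK TABLE partition_maint_lock IN SHARE MODE;   -- many reports may hold this at once
    -- ... run the report against view a ...
    COMMIT;

    -- Step 3: rule change (or truncate of an old partition)
    BEGIN;
    LOCK TABLE partition_maint_lock IN EXCLUSIVE MODE;  -- waits for running reports to finish
    SELECT change_rule(2);
    COMMIT;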
[ { "msg_contents": "I am concerned with performance issues involving the storage of DV on\na database.\n\nI though of some options, which would be the most advised for speed?\n\n1) Pack N frames inside a \"container\" and store the container to the db.\n2) Store each frame in a separate record in the table \"frames\".\n3) (type something here)\n\nThanks for the help,\n\nRodrigo\n", "msg_date": "Tue, 31 Jan 2006 16:32:56 -0800", "msg_from": "Rodrigo Madera <[email protected]>", "msg_from_op": true, "msg_subject": "Storing Digital Video" }, { "msg_contents": "On Tue, 2006-01-31 at 16:32 -0800, Rodrigo Madera wrote:\n> I am concerned with performance issues involving the storage of DV on\n> a database.\n> \n> I though of some options, which would be the most advised for speed?\n> \n> 1) Pack N frames inside a \"container\" and store the container to the db.\n> 2) Store each frame in a separate record in the table \"frames\".\n> 3) (type something here)\n\nHow about some more color? _Why_, for example, would you store video in\na relational database?\n\n-jwb\n", "msg_date": "Tue, 31 Jan 2006 16:37:51 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing Digital Video" }, { "msg_contents": "Rodrigo Madera wrote:\n\n>I am concerned with performance issues involving the storage of DV on\n>a database.\n>\n>I though of some options, which would be the most advised for speed?\n>\n>1) Pack N frames inside a \"container\" and store the container to the db.\n>2) Store each frame in a separate record in the table \"frames\".\n>3) (type something here)\n>\n>Thanks for the help,\n>\n> \n>\n\nMy experience has been that this is a very bad idea. Many people want to \nstore all sorts of data in a database such as email messages, pictures, \netc... The idea of a relational database is to perform queries against \ndata. If you are needing to just store data then store it on a disk and \nuse the database as the indexer of the data.\n\nKeep in mind the larger the database the slower some operations become.\n\nUnless you are operating on the frame data (which you either store as \nblobs or hex-encoded data) I'd recommend you store the data on a hard \ndrive and let the database store meta data about the video such as path \ninformation, run time, author, etc...\n\nWe do this on an application storing close to a million images and the \nperformance is impressive.\n 1. we don't have to do any sort of data manipulation storing the \ndata in or retrieving the data out of the database.\n 2. 
our database is compact and extremely fast - it is using the \ndatabase for what it was designed for - relational queries.\n\nMy $0.02\n\n>Rodrigo\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n>\n> \n>\n\n", "msg_date": "Tue, 31 Jan 2006 17:51:18 -0700", "msg_from": "Matt Davies | Postgresql List <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing Digital Video" }, { "msg_contents": "A Dimecres 01 Febrer 2006 01:32, Rodrigo Madera va escriure:\n> I am concerned with performance issues involving the storage of DV on\n> a database.\n>\n> I though of some options, which would be the most advised for speed?\n>\n> 1) Pack N frames inside a \"container\" and store the container to the db.\n> 2) Store each frame in a separate record in the table \"frames\".\n> 3) (type something here)\n>\n> Thanks for the help,\n\n\nWhat if you store meta data in the database and use some PL/Python/Java/Perl \nfunctions to store and retrieve video files from the server. The function \nwould store files to the files system, not a table. This avoids the need for \na file server for your application while making your relational queries fast.\n\nAny experiences/thoughts on this solution?\n\n>\n> Rodrigo\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n \n", "msg_date": "Mon, 6 Feb 2006 09:30:30 +0100", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing Digital Video" }, { "msg_contents": "I must claim some ignorance, I come from the application world... but, \nfrom a data integrity perspective, it makes a whole lot of sense to \nstore video, images, documents, whatever in the database rather than on \nthe file system external to it. Personally, I would use LOB's, but I do \nnot know the internals well enough to say LOBs or large columns. \nRegardless, there are a lot of compelling reasons ranging from software \nmaintenance, disk management, data access control, single security layer \nimplementation, and so on which justify storing data like this in the \nDB. Am I too much of an Oracle guy? I think that Postgres is more than \ncapable enough for this type of implementation. Is this confidence \nunfounded?\n\n Aside from disk utilization, what are the performance issues with \nLOB and / or large columns? Does the data on disk get too fragmented to \nallow for efficient querying? Are the performance issues significant \nenough to push parts of the data integrity responsibility to the \napplication layer?\n\nThanks,\n Nate\n\nAlbert Cervera Areny wrote:\n> A Dimecres 01 Febrer 2006 01:32, Rodrigo Madera va escriure:\n> \n>> I am concerned with performance issues involving the storage of DV on\n>> a database.\n>>\n>> I though of some options, which would be the most advised for speed?\n>>\n>> 1) Pack N frames inside a \"container\" and store the container to the db.\n>> 2) Store each frame in a separate record in the table \"frames\".\n>> 3) (type something here)\n>>\n>> Thanks for the help,\n>> \n>\n>\n> What if you store meta data in the database and use some PL/Python/Java/Perl \n> functions to store and retrieve video files from the server. 
The function \n> would store files to the files system, not a table. This avoids the need for \n> a file server for your application while making your relational queries fast.\n>\n> Any experiences/thoughts on this solution?\n>\n> \n>> Rodrigo\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>> \n>\n> \n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n> !DSPAM:43e70ada303236796316472!\n>\n> \n", "msg_date": "Thu, 09 Feb 2006 08:45:00 -0500", "msg_from": "Nate Byrnes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing Digital Video" }, { "msg_contents": "Nate Byrnes wrote:\n> I must claim some ignorance, I come from the application world... but, \n> from a data integrity perspective, it makes a whole lot of sense to \n> store video, images, documents, whatever in the database rather than on \n> the file system external to it. Personally, I would use LOB's, but I do \n> not know the internals well enough to say LOBs or large columns. \n> Regardless, there are a lot of compelling reasons ranging from software \n> maintenance, disk management, data access control, single security layer \n> implementation, and so on which justify storing data like this in the \n> DB. Am I too much of an Oracle guy?\n\nYes, you are too much of an Oracle guy ;-). Oracle got this notion that they could conquer the world, that EVERYTHING should be in an Oracle database. I think they even built a SAMBA file system on top of Oracle. It's like a hammer manufacturer telling you the hammer is also good for screws and for gluing. It just ain't so.\n\nYou can store videos in a database, but there will be a price. You're asking the database to do something that the file system is already exceptionally good at: store big files.\n\nYou make one good point about security: A database can provide a single point of access control. Storing the videos externally requires a second mechanism. That's not necessarily bad -- you probably have a middleware layer, which can ensure that it won't deliver the goods unless the user has successfully connected to the database.\n\nCraig\n", "msg_date": "Thu, 09 Feb 2006 07:18:49 -0800", "msg_from": "\"Craig A. James\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing Digital Video" }, { "msg_contents": " Thanks, until Postgres can pay my bills (hopefully soon...) I will \nhave to be an Oracle guy. Aside from the filesystem being better at \nmanaging large files (which I do agree) are there performance \nimplications for the storage in the DB?\n Where I work, the question is not can you add the security code to \nthe middleware, but how many middlewares and applications will need to \nbe updated.\n Regards,\n Nate\n\nCraig A. James wrote:\n> Nate Byrnes wrote:\n>> I must claim some ignorance, I come from the application world... \n>> but, from a data integrity perspective, it makes a whole lot of sense \n>> to store video, images, documents, whatever in the database rather \n>> than on the file system external to it. Personally, I would use \n>> LOB's, but I do not know the internals well enough to say LOBs or \n>> large columns. 
Regardless, there are a lot of compelling reasons \n>> ranging from software maintenance, disk management, data access \n>> control, single security layer implementation, and so on which \n>> justify storing data like this in the DB. Am I too much of an \n>> Oracle guy?\n>\n> Yes, you are too much of an Oracle guy ;-). Oracle got this notion \n> that they could conquer the world, that EVERYTHING should be in an \n> Oracle database. I think they even built a SAMBA file system on top \n> of Oracle. It's like a hammer manufacturer telling you the hammer is \n> also good for screws and for gluing. It just ain't so.\n>\n> You can store videos in a database, but there will be a price. You're \n> asking the database to do something that the file system is already \n> exceptionally good at: store big files.\n>\n> You make one good point about security: A database can provide a \n> single point of access control. Storing the videos externally \n> requires a second mechanism. That's not necessarily bad -- you \n> probably have a middleware layer, which can ensure that it won't \n> deliver the goods unless the user has successfully connected to the \n> database.\n>\n> Craig\n>\n> !DSPAM:43eb5e8970644042098162!\n>\n", "msg_date": "Thu, 09 Feb 2006 10:58:25 -0500", "msg_from": "Nate Byrnes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing Digital Video" }, { "msg_contents": "On Thu, Feb 09, 2006 at 07:18:49AM -0800, Craig A. James wrote:\n> Nate Byrnes wrote:\n> >I must claim some ignorance, I come from the application world... but, \n> >from a data integrity perspective, it makes a whole lot of sense to \n> >store video, images, documents, whatever in the database rather than on \n> >the file system external to it. Personally, I would use LOB's, but I do \n> >not know the internals well enough to say LOBs or large columns. \n> >Regardless, there are a lot of compelling reasons ranging from software \n> >maintenance, disk management, data access control, single security layer \n> >implementation, and so on which justify storing data like this in the \n> >DB. Am I too much of an Oracle guy?\n> \n> Yes, you are too much of an Oracle guy ;-). Oracle got this notion that \n> they could conquer the world, that EVERYTHING should be in an Oracle \n> database. I think they even built a SAMBA file system on top of Oracle. \n> It's like a hammer manufacturer telling you the hammer is also good for \n> screws and for gluing. It just ain't so.\n> \n> You can store videos in a database, but there will be a price. You're \n> asking the database to do something that the file system is already \n> exceptionally good at: store big files.\n> \n> You make one good point about security: A database can provide a single \n> point of access control. Storing the videos externally requires a second \n> mechanism. That's not necessarily bad -- you probably have a middleware \n> layer, which can ensure that it won't deliver the goods unless the user has \n> successfully connected to the database.\n\nYou're forgetting about cleanup and transactions. If you store outside\nthe database you either have to write some kind of garbage collector, or\nyou add a trigger to delete the file on disk when the row in the\ndatabase pointing at it is deleted and hope that the transaction doesn't\nrollback.\n\nOf course, someone could probably write some stand-alone code that would\nhandle all of this in a generic way... :)\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Thu, 9 Feb 2006 13:26:21 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing Digital Video" }, { "msg_contents": "In my experience, you don't want to store this stuff in the database. \nIn general, it will work fine, until you have to VACUUM the\npg_largeobject table. Unless you have a very powerful I/O subsystem,\nthis VACUUM will kill your performance.\n\n> You're forgetting about cleanup and transactions. If you store outside\n> the database you either have to write some kind of garbage collector, or\n> you add a trigger to delete the file on disk when the row in the\n> database pointing at it is deleted and hope that the transaction doesn't\n> rollback.\n\nOur solution to this problem was to have a separate table of \"external\nfiles to delete\". When you want to delete a file, you just stuff an\nentry into this table. If your transaction rolls back, so does your\ninsert into this table. You have a separate thread that periodically\nwalks this table and zaps the files from the filesystem.\n\nWe found that using a procedural language (such as pl/Perl) was fine\nfor proof of concept. We did find limitations in how data is returned\nfrom Perl functions as a string, combined with the need for binary\ndata in the files, that prevented us from using it in production. We\nhad to rewrite the functions in C.\n\n -jan-\n--\nJan L. Peterson\n<[email protected]>\n", "msg_date": "Thu, 9 Feb 2006 16:14:09 -0700", "msg_from": "Jan Peterson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing Digital Video" }, { "msg_contents": "On Thu, Feb 09, 2006 at 04:14:09PM -0700, Jan Peterson wrote:\n> In my experience, you don't want to store this stuff in the database. \n> In general, it will work fine, until you have to VACUUM the\n> pg_largeobject table. Unless you have a very powerful I/O subsystem,\n> this VACUUM will kill your performance.\n\nGood point about the vacuum issue; I haven't had to deal with vacuuming\nvery large objects.\n\n> > You're forgetting about cleanup and transactions. If you store outside\n> > the database you either have to write some kind of garbage collector, or\n> > you add a trigger to delete the file on disk when the row in the\n> > database pointing at it is deleted and hope that the transaction doesn't\n> > rollback.\n> \n> Our solution to this problem was to have a separate table of \"external\n> files to delete\". When you want to delete a file, you just stuff an\n> entry into this table. If your transaction rolls back, so does your\n> insert into this table. You have a separate thread that periodically\n> walks this table and zaps the files from the filesystem.\n\nSure, there's lots of ways around it. My point was that there *is* a\ntradeoff.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Sat, 11 Feb 2006 14:32:28 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing Digital Video" } ]
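A minimal sketch of the approach that recurs in this thread: metadata in the database, bytes on the filesystem, and Jan's transactional "files to delete" queue so that a rollback never orphans or prematurely removes a file. All table, column and path names below are invented for illustration.

    -- Metadata only; the video itself lives on the file server.
    CREATE TABLE video (
        video_id  serial PRIMARY KEY,
        path      text NOT NULL,      -- filesystem location
        run_time  interval,
        author    text
    );

    -- Deletions are queued in the same transaction as the metadata delete;
    -- if the transaction rolls back, the queue entry disappears with it.
    CREATE TABLE files_to_delete (
        path      text NOT NULL,
        queued_at timestamptz NOT NULL DEFAULT now()
    );

    BEGIN;
    DELETE FROM video WHERE video_id = 42;
    INSERT INTO files_to_delete (path) VALUES ('/data/video/42.dv');
    COMMIT;

A separate janitor process then walks files_to_delete periodically, unlinks each file from disk, and removes the queue rows it has handled.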
[ { "msg_contents": "Hi there,\n\nI'm running a simple query with 2 inner joins (say A, B and C). Each of the\njoin columns has indexes. If I run queries that join just A and B, or just B\nand C, postgres uses indexes. But if I run \"A join B join C\" then the \"B\njoin C\" part starts using a sequential scan and I can't figure out why.\n\nHere's the query, which basically retrieves all meta-data for all messages\nin a given forum. The relationship is pretty simple. Forums contain threads,\nwhich contain messages, which each have associated meta-data:\n\nSELECT message.message_id, message_meta_data.value\nFROM thread\n JOIN message USING (thread_id)\n JOIN message_meta_data ON (\nmessage.message_id=message_meta_data.message_id)\nWHERE thread.forum_id=123;\n\nExplaining:\nHash Join (cost=337.93..1267.54 rows=180 width=35)\nHash Cond: (\"outer\".message_id = \"inner\".message_id)\n-> Seq Scan on message_meta_data (cost=0.00..739.19 rows=37719 width=30)\n-> Hash (cost=337.79..337.79 rows=57 width=13)\n -> Nested Loop (cost=0.00..337.79 rows=57 width=13)\n -> Index Scan using thread_forum_id_idx on thread (cost=\n0.00..41.61 rows=13 width=4)\n Index Cond: (forum_id = 6)\n -> Index Scan using message_thread_id_idx on message (cost=\n0.00..22.72 rows=5 width=17)\n Index Cond: (\"outer\".thread_id = message.thread_id)\n\nAs you can see, the message and message_meta_data tables use a Seq Scan. The\nonly way I can think of forcing it to use the Index Scan in all cases would\nbe to use two separate nested queries: The outer query would retrieve the\nlist of messages in the forum, and the inner query would retrieve the list\nof metadata for an individual message. Obviously I want to avoid having to\ndo that if possible.\n\nAny ideas?\n\nMany thanks if you can help.\n\nJames\n\nHi there,\n\nI'm running a simple query with 2 inner joins (say A, B and C). Each of\nthe join columns has indexes. If I run queries that join just A and B,\nor just B and C, postgres uses indexes. But if I run \"A join B join C\"\nthen the \"B join C\" part starts using a sequential scan and I can't\nfigure out why.\n\nHere's the query, which basically retrieves all meta-data for all\nmessages in a given forum. The relationship is pretty simple. Forums\ncontain threads, which contain messages, which each have associated\nmeta-data:\n\nSELECT message.message_id, message_meta_data.value\nFROM thread\n    JOIN message USING (thread_id)\n    JOIN message_meta_data ON (message.message_id=message_meta_data.message_id)\nWHERE thread.forum_id=123;\n\nExplaining:\nHash Join  (cost=337.93..1267.54 rows=180 width=35)\nHash Cond: (\"outer\".message_id = \"inner\".message_id)\n->  Seq Scan on message_meta_data  (cost=0.00..739.19 rows=37719 width=30)\n->  Hash  (cost=337.79..337.79 rows=57 width=13)\n    ->  Nested Loop  (cost=0.00..337.79 rows=57 width=13)\n          -> \nIndex Scan using thread_forum_id_idx on thread (cost=0.00..41.61\nrows=13 width=4)\n                 Index Cond: (forum_id = 6)\n          -> \nIndex Scan using message_thread_id_idx on message (cost=0.00..22.72\nrows=5 width=17)\n                \nIndex Cond: (\"outer\".thread_id = message.thread_id)\n\nAs you can see, the message and message_meta_data tables use a Seq\nScan. 
The only way I can think of forcing it to use the Index Scan in\nall cases would be to use two separate nested queries: The outer query\nwould retrieve the list of messages in the forum, and the inner query\nwould retrieve the list of metadata for an individual message.\nObviously I want to avoid having to do that if possible.\n\nAny ideas?\n\nMany thanks if you can help.\n\nJames", "msg_date": "Wed, 1 Feb 2006 12:14:47 +0900", "msg_from": "James Russell <[email protected]>", "msg_from_op": true, "msg_subject": "Sequential scan being used despite indexes" }, { "msg_contents": "\n>\n> Explaining:\n> Hash Join (cost=337.93..1267.54 rows=180 width=35)\n> Hash Cond: (\"outer\".message_id = \"inner\".message_id)\n> -> Seq Scan on message_meta_data (cost=0.00..739.19 rows=37719 width=30)\n> -> Hash (cost=337.79..337.79 rows=57 width=13)\n> -> Nested Loop (cost=0.00..337.79 rows=57 width=13)\n> -> Index Scan using thread_forum_id_idx on thread \n> (cost=0.00..41.61 rows=13 width=4)\n> Index Cond: (forum_id = 6)\n> -> Index Scan using message_thread_id_idx on message \n> (cost=0.00..22.72 rows=5 width=17)\n> Index Cond: (\"outer\".thread_id = message.thread_id)\n>\n> As you can see, the message and message_meta_data tables use a Seq \n> Scan. The only way I can think of forcing it to use the Index Scan in \n> all cases would be to use two separate nested queries: The outer query \n> would retrieve the list of messages in the forum, and the inner query \n> would retrieve the list of metadata for an individual message. \n> Obviously I want to avoid having to do that if possible.\n>\n> Any ideas?\nWhat does explain analyze say?\n\nJoshua D. Drake\n\n\n>\n> Many thanks if you can help.\n>\n> James\n\n\n-- \nThe PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: PLphp, PLperl - http://www.commandprompt.com/\n\n", "msg_date": "Tue, 31 Jan 2006 19:29:51 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential scan being used despite indexes" }, { "msg_contents": "On Tue, Jan 31, 2006 at 07:29:51PM -0800, Joshua D. 
Drake wrote:\n> > Any ideas?\n>\n> What does explain analyze say?\n\nAlso, have the tables been vacuumed and analyzed?\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 31 Jan 2006 20:58:04 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential scan being used despite indexes" }, { "msg_contents": "[Sorry, my last reply didn't go to the list]\n\nReading about this issue further in the FAQ, it seems that I should ensure\nthat Postgres has adequate and accurate information about the tables in\nquestion by regularly running VACUUM ANALYZE, something I don't do\ncurrently.\n\nI disabled SeqScan as per the FAQ, and it indeed was a lot slower so\nPostgres was making the right choice in this case.\n\nMany thanks,\n\nJames\n\n[Sorry, my last reply didn't go to the list]\n\nReading about this issue further in the FAQ, it seems that I should\nensure that Postgres has adequate and accurate information about the\ntables in question by regularly running VACUUM ANALYZE, something I don't do currently.\n\n\nI disabled SeqScan as per the FAQ, and it indeed was a lot slower so Postgres was making the right choice in this case.\nMany thanks,\n\nJames", "msg_date": "Wed, 1 Feb 2006 13:33:08 +0900", "msg_from": "James Russell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sequential scan being used despite indexes" }, { "msg_contents": "> Reading about this issue further in the FAQ, it seems that I should \n> ensure that Postgres has adequate and accurate information about the \n> tables in question by regularly running VACUUM ANALYZE, something I \n> don't do currently.\n\nWell then you'll get rubbish performance always in PostgreSQL...\n\nI strongly suggest you run autovacuum if you don't really understand \nPostgreSQL vacuuming/analyzing.\n\nChris\n\n", "msg_date": "Wed, 01 Feb 2006 12:43:57 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential scan being used despite indexes" }, { "msg_contents": "On Wed, Feb 01, 2006 at 01:33:08PM +0900, James Russell wrote:\n> Reading about this issue further in the FAQ, it seems that I should ensure\n> that Postgres has adequate and accurate information about the tables in\n> question by regularly running VACUUM ANALYZE, something I don't do\n> currently.\n\nMany people use a cron job (or the equivalent) to run VACUUM ANALYZE\nat regular intervals; some also use the pg_autovacuum daemon, which\nis a contrib module in 8.0 and earlier and part of the backend as of\n8.1.\n\nHow often to vacuum/analyze depends on usage. Once per day is\ncommonly cited, but busy tables might need it more often than that.\nJust recently somebody had a table that could have used vacuuming\nevery five minutes or less (all records were updated every 30\nseconds); pg_autovacuum can be useful in such cases.\n\n> I disabled SeqScan as per the FAQ, and it indeed was a lot slower so\n> Postgres was making the right choice in this case.\n\nThe planner might be making the right choice given the statistics\nit has, but it's possible that better statistics would lead to a\ndifferent plan, perhaps one where an index scan would be faster.\n\nWhat happens if you run VACUUM ANALYZE on all the tables, then run\nthe query again with EXPLAIN ANALYZE?\n\n-- \nMichael Fuhr\n", "msg_date": "Tue, 31 Jan 2006 22:08:28 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sequential scan being used despite indexes" } ]
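Spelled out, the FAQ-style check James describes looks like the sketch below, using the tables and query from his post. enable_seqscan is a per-session testing knob only; the point is to compare the two EXPLAIN ANALYZE timings after statistics have been refreshed, not to leave sequential scans disabled.

    -- Refresh statistics first (or let pg_autovacuum / autovacuum do it):
    VACUUM ANALYZE thread;
    VACUUM ANALYZE message;
    VACUUM ANALYZE message_meta_data;

    -- Planner's default choice:
    EXPLAIN ANALYZE
    SELECT message.message_id, message_meta_data.value
      FROM thread
      JOIN message USING (thread_id)
      JOIN message_meta_data ON (message.message_id = message_meta_data.message_id)
     WHERE thread.forum_id = 123;

    -- Force the index plan for comparison, then put the setting back:
    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT message.message_id, message_meta_data.value
      FROM thread
      JOIN message USING (thread_id)
      JOIN message_meta_data ON (message.message_id = message_meta_data.message_id)
     WHERE thread.forum_id = 123;
    RESET enable_seqscan;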
[ { "msg_contents": "\nhi,\n\ni have a database storing XML documents.\nThe main table contains the nodes of the document, the other tables contain data for each node (depending on the node's type : ELE, Text, PI, ...)\n\nMy test document has 115000 nodes.\nthe export of the document(extracting all informations from database and writing XML file on disk) takes 30s with Oracle and 5mn with Postgresql.\nThe Oracle stored procedure is written in pl/sql and the Postgresql stored procedure in pl/perl (using spi_exec).\nThe export stored procedure use a SAX way algorithm : from a node, get all the children and for each child if it's an Element go in recursion else write data into a file.\n\nThe tests have been made on different systems\n- Sun systems :\n\t- solaris8 : 16 cpu and 64Gb RAM\n\t- solaris8 : 2cpu and 8Gb RAM\n- Windows Systems :\n\t- WinNT : 1 cpu(PIV) and 1Gb RAM\n\t- WinXP : 1 cpu(centrino) and 512 RAM\n\nthe times are always the same, except with the centrino for which it takes 1 min.\n\nSo i don't understand such differences.\n\nhere is my main query (on the main table for getting the children of a node) and the execution plan for PostgreSQL and Oracle :\n\n-Query :\nSELECT *\n FROM xdb_child c1\n WHERE c1.doc_id = 100\n AND c1.ele_id = 2589\n AND c1.isremoved = 0\n AND c1.evolution =\n (SELECT\n MAX (evolution)\n FROM xdb_child c2\n WHERE c2.doc_id = c1.doc_id\n AND c2.ele_id = c1.ele_id\n AND c2.evolution <= 0\n AND c2.child_id = c1.child_id\n AND c2.child_class = c1.child_class) ORDER BY c1.evolution, c1.indx\n\n-Oracle plan (cost 14):\nOperation\tObject Name\tRows\tBytes\tCost\tObject Node\tIn/Out\tPStart\tPStop\n\nSELECT STATEMENT Optimizer Mode=CHOOSE\t\t1 \t \t14 \t \t \t \n SORT ORDER BY\t\t1 \t4 K\t14 \t \t \t \n TABLE ACCESS BY INDEX ROWID\tXDB_CHILD\t1 \t4 K\t4 \t \t \t \n INDEX RANGE SCAN\tINDEX_XDB_CHILD_1\t1 \t \t3 \t \t \t \n SORT AGGREGATE\t\t1 \t65 \t \t \t \t \n FIRST ROW\t\t1 \t65 \t3 \t \t \t \n INDEX RANGE SCAN (MIN/MAX)\tINDEX_XDB_CHILD_2\t8 M\t \t3\n\n-PostgreSQL explain analyse :\n {SORT\n :startup_cost 9.65\n :total_cost 9.66\n :plan_rows 1\n :plan_width 28\n :targetlist (\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n :resno 1\n :resname child_id\n :ressortgroupref 0\n :resorigtbl 34719\n :resorigcol 1\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n :resno 2\n :resname evolution\n :ressortgroupref 1\n :resorigtbl 34719\n :resorigcol 2\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n :resno 3\n :resname isremoved\n :ressortgroupref 0\n :resorigtbl 34719\n :resorigcol 3\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n :resno 4\n :resname child_class\n :ressortgroupref 0\n :resorigtbl 34719\n :resorigcol 4\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n :resno 5\n :resname indx\n :ressortgroupref 2\n :resorigtbl 34719\n :resorigcol 5\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n :resno 6\n :resname ele_id\n :ressortgroupref 0\n 
:resorigtbl 34719\n :resorigcol 6\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n :resno 7\n :resname doc_id\n :ressortgroupref 0\n :resorigtbl 34719\n :resorigcol 7\n :resjunk false\n }\n )\n :qual <>\n :lefttree\n {INDEXSCAN\n :startup_cost 0.00\n :total_cost 9.64\n :plan_rows 1\n :plan_width 28\n :targetlist (\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n :resno 1\n :resname child_id\n :ressortgroupref 0\n :resorigtbl 34719\n :resorigcol 1\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n :resno 2\n :resname evolution\n :ressortgroupref 1\n :resorigtbl 34719\n :resorigcol 2\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n :resno 3\n :resname isremoved\n :ressortgroupref 0\n :resorigtbl 34719\n :resorigcol 3\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n :resno 4\n :resname child_class\n :ressortgroupref 0\n :resorigtbl 34719\n :resorigcol 4\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n :resno 5\n :resname indx\n :ressortgroupref 2\n :resorigtbl 34719\n :resorigcol 5\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n :resno 6\n :resname ele_id\n :ressortgroupref 0\n :resorigtbl 34719\n :resorigcol 6\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n :resno 7\n :resname doc_id\n :ressortgroupref 0\n :resorigtbl 34719\n :resorigcol 7\n :resjunk false\n }\n )\n :qual (\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n {SUBPLAN\n :subLinkType 4\n :useOr false\n :exprs <>\n :paramIds <>\n :plan\n {AGG\n :startup_cost 4.93\n :total_cost 4.94\n :plan_rows 1\n :plan_width 4\n :targetlist (\n {TARGETENTRY\n :expr\n {AGGREF\n :aggfnoid 2116\n :aggtype 23\n :target\n {VAR\n :varno 0\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n :agglevelsup 0\n :aggstar false\n :aggdistinct false\n }\n :resno 1\n :resname max\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n )\n :qual <>\n :lefttree\n {INDEXSCAN\n :startup_cost 0.00\n :total_cost 4.93\n :plan_rows 1\n :plan_width 4\n :targetlist (\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n :resno 1\n :resname <>\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n :resno 2\n :resname <>\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 3\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n 
:varnoold 1\n :varoattno 3\n }\n :resno 3\n :resname <>\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n :resno 4\n :resname <>\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 5\n }\n :resno 5\n :resname <>\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n :resno 6\n :resname <>\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n {TARGETENTRY\n :expr\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n :resno 7\n :resname <>\n :ressortgroupref 0\n :resorigtbl 0\n :resorigcol 0\n :resjunk false\n }\n )\n :qual (\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n {PARAM\n :paramkind 15\n :paramid 3\n :paramname <>\n :paramtype 23\n }\n )\n }\n )\n :lefttree <>\n :righttree <>\n :initPlan <>\n :extParam (b 0 1 2 3)\n :allParam (b 0 1 2 3)\n :nParamExec 0\n :scanrelid 1\n :indexid 34737\n :indexqual (\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n {PARAM\n :paramkind 15\n :paramid 0\n :paramname <>\n :paramtype 23\n }\n )\n }\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n {PARAM\n :paramkind 15\n :paramid 1\n :paramname <>\n :paramtype 23\n }\n )\n }\n {OPEXPR\n :opno 523\n :opfuncid 149\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 0 0 0 0 ]\n }\n )\n }\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 5\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n {PARAM\n :paramkind 15\n :paramid 2\n :paramname <>\n :paramtype 23\n }\n )\n }\n )\n :indexqualorig (\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n {PARAM\n :paramkind 15\n :paramid 0\n :paramname <>\n :paramtype 23\n }\n )\n }\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n {PARAM\n :paramkind 15\n :paramid 1\n :paramname <>\n :paramtype 23\n }\n )\n }\n {OPEXPR\n :opno 523\n :opfuncid 149\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 2\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 0 0 0 0 ]\n }\n 
)\n }\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n {PARAM\n :paramkind 15\n :paramid 2\n :paramname <>\n :paramtype 23\n }\n )\n }\n )\n :indexstrategy (i 3 3 2 3)\n :indexsubtype (o 0 0 0 0)\n :indexorderdir 1\n }\n :righttree <>\n :initPlan <>\n :extParam (b 0 1 2 3)\n :allParam (b 0 1 2 3)\n :nParamExec 0\n :aggstrategy 0\n :numCols 0\n :numGroups 0\n }\n :plan_id 1\n :rtable (\n {RTE\n :alias\n {ALIAS\n :aliasname c2\n :colnames <>\n }\n :eref\n {ALIAS\n :aliasname c2\n :colnames (\"child_id\" \"evolution\" \"isremoved\" \"child_class\"\n \"indx\" \"ele_id\" \"doc_id\")\n }\n :rtekind 0\n :relid 34719\n :inh false\n :inFromCl true\n :requiredPerms 2\n :checkAsUser 0\n }\n )\n :useHashTable false\n :unknownEqFalse false\n :setParam <>\n :parParam (i 0 1 2 3)\n :args (\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 1\n }\n {VAR\n :varno 1\n :varattno 4\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 4\n }\n )\n }\n )\n }\n )\n :lefttree <>\n :righttree <>\n :initPlan <>\n :extParam (b)\n :allParam (b)\n :nParamExec 0\n :scanrelid 1\n :indexid 34737\n :indexqual (\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 1\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 0 0 0 100 ]\n }\n )\n }\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 2\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 0 0 10 29 ]\n }\n )\n }\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 3\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 0 0 0 0 ]\n }\n )\n }\n )\n :indexqualorig (\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 7\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 7\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 0 0 0 100 ]\n }\n )\n }\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 6\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 6\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 0 0 10 29 ]\n }\n )\n }\n {OPEXPR\n :opno 96\n :opfuncid 65\n :opresulttype 16\n :opretset false\n :args (\n {VAR\n :varno 1\n :varattno 3\n :vartype 23\n :vartypmod -1\n :varlevelsup 0\n :varnoold 1\n :varoattno 3\n }\n {CONST\n :consttype 23\n :constlen 4\n :constbyval true\n :constisnull false\n :constvalue 4 [ 0 0 0 0 ]\n }\n )\n }\n )\n :indexstrategy (i 3 3 3)\n :indexsubtype (o 0 0 0)\n :indexorderdir 1\n }\n :righttree 
<>\n :initPlan <>\n :extParam (b)\n :allParam (b)\n :nParamExec 4\n :numCols 2\n :sortColIdx 2 5\n :sortOperators 97 97\n }\n\nSort (cost=9.65..9.66 rows=1 width=28) (actual time=0.163..0.164 rows=1 loops=1)\n Sort Key: evolution, indx\n -> Index Scan using index_xdb_child on xdb_child c1 (cost=0.00..9.64 rows=1 width=28) (actual time=0.133..0.135 rows=1 loops=1)\n Index Cond: ((doc_id = 100) AND (ele_id = 2589) AND (isremoved = 0))\n Filter: (evolution = (subplan))\n SubPlan\n -> Aggregate (cost=4.93..4.94 rows=1 width=4) (actual time=0.048..0.048 rows=1 loops=1)\n -> Index Scan using index_xdb_child on xdb_child c2 (cost=0.00..4.93 rows=1 width=4) (actual time=0.025..0.030 rows=1 loops=1)\n Index Cond: ((doc_id = $0) AND (ele_id = $1) AND (evolution <= 0) AND (child_id = $2))\n Filter: (child_class = $3)\nTotal runtime: 0.418 ms\n\nthe Postgresql cost is better but the query is two times slower.\n\nan other question about the procedural language : is pl/perl efficient with a such process knowing that it's just a test document : a real document contains between 1 and 3 millions of nodes.\n\n\nRegards\n\n\tWilliam\n\n\nThis e-mail is intended only for the above addressee. It may contain\nprivileged information. If you are not the addressee you must not copy,\ndistribute, disclose or use any of the information in it. If you have\nreceived it in error please delete it and immediately notify the sender.\nSecurity Notice: all e-mail, sent to or from this address, may be\naccessed by someone other than the recipient, for system management and\nsecurity reasons. This access is controlled under Regulation of\nInvestigatory Powers Act 2000, Lawful Business Practises.\n", "msg_date": "Wed, 01 Feb 2006 11:11:33 +0100", "msg_from": "\"FERREIRA, William (VALTECH)\" <[email protected]>", "msg_from_op": true, "msg_subject": "execution plan : Oracle vs PostgreSQL" }, { "msg_contents": "\"FERREIRA, William (VALTECH)\" <[email protected]> writes:\n> My test document has 115000 nodes.\n> the export of the document(extracting all informations from database and writing XML file on disk) takes 30s with Oracle and 5mn with Postgresql.\n> The Oracle stored procedure is written in pl/sql and the Postgresql stored procedure in pl/perl (using spi_exec).\n\nSo the test case involves 115000 executions of the same query via spi_exec?\nThat means the query will be re-parsed and re-planned 115000 times. If\nyou want something that's a reasonably fair comparison against Oracle,\ntry plpgsql which has query plan caching.\n\n\t\t\tregards, tom lane\n\nPS: please do NOT post EXPLAIN VERBOSE output unless someone\nspecifically asks for it. It clutters the archives and it's usually\nuseless. EXPLAIN ANALYZE is what we normally want to see for\nperformance issues.\n", "msg_date": "Wed, 01 Feb 2006 11:04:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: execution plan : Oracle vs PostgreSQL " } ]
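Tom's plan-caching point can be seen without any procedural language at all: a prepared statement is parsed and planned once and then only executed, which is essentially what PL/pgSQL does for the static SQL inside a function body. The sketch below reuses the child query from the first post; the parameter types are assumed from the column definitions.

    -- Planned once per session:
    PREPARE get_children(integer, integer) AS
      SELECT *
        FROM xdb_child c1
       WHERE c1.doc_id = $1
         AND c1.ele_id = $2
         AND c1.isremoved = 0
         AND c1.evolution = (SELECT max(c2.evolution)
                               FROM xdb_child c2
                              WHERE c2.doc_id = c1.doc_id
                                AND c2.ele_id = c1.ele_id
                                AND c2.evolution <= 0
                                AND c2.child_id = c1.child_id
                                AND c2.child_class = c1.child_class)
       ORDER BY c1.evolution, c1.indx;

    -- Executed ~115,000 times with only the parameters changing:
    EXECUTE get_children(100, 2589);

Building a fresh query string for spi_exec on every call repeats the parse and plan steps each time, which is likely where much of the extra export time goes.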
[ { "msg_contents": "Tom,\n\nDo you mean it would be impossible to change the code so that existing\nselects continue to use the pre-truncated table until they commit? Or\njust require a more extensive change?\n\nThe update/insert rule change appears to be more more doable? No? \n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Wednesday, February 01, 2006 12:50 AM\n> To: Marc Morin\n> Cc: [email protected]\n> Subject: Re: [PERFORM] partitioning and locking problems \n> \n> \"Marc Morin\" <[email protected]> writes:\n> > Would like to understand the implications of changing postgres'\n> > code/locking for rule changes and truncate to not require \n> locking out \n> > select statements?\n> \n> It won't work...\n> \n> \t\t\tregards, tom lane\n> \n> \n", "msg_date": "Wed, 1 Feb 2006 09:25:10 -0500", "msg_from": "\"Marc Morin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and locking problems " }, { "msg_contents": "\"Marc Morin\" <[email protected]> writes:\n> Do you mean it would be impossible to change the code so that existing\n> selects continue to use the pre-truncated table until they commit?\n\nYes, because that table won't exist any more (as in the file's been\nunlinked) once the TRUNCATE commits.\n\n> The update/insert rule change appears to be more more doable? No? \n\nYou've still got race conditions there: do onlooker transactions see the\nold set of rules, or the new set, or some unholy mixture? Removing the\nlock as you suggest would make it possible for the rule rewriter to pick\nup non-self-consistent data from the system catalogs, leading to\narbitrarily bad behavior ... if you're lucky, it'll just crash, if\nyou're not lucky the incorrect rule will do a fandango on your data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Feb 2006 10:20:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and locking problems " }, { "msg_contents": "On Wed, Feb 01, 2006 at 10:20:21AM -0500, Tom Lane wrote:\n> \"Marc Morin\" <[email protected]> writes:\n> > Do you mean it would be impossible to change the code so that existing\n> > selects continue to use the pre-truncated table until they commit?\n> \n> Yes, because that table won't exist any more (as in the file's been\n> unlinked) once the TRUNCATE commits.\n \nIs there a reason the truncate must happen in 'real time'? If TRUNCATE\nmarked a table as \"truncated as of tid, cid\" and created a new set of\nempty objects to be used by all transactions after that, then it should\nbe possible to truncate without waiting on existing selects.\nUnfortunately, I can't think of any way to avoid blocking new inserters,\nbut in the partitioning case that shouldn't matter.\n\n> > The update/insert rule change appears to be more more doable? No? \n> \n> You've still got race conditions there: do onlooker transactions see the\n> old set of rules, or the new set, or some unholy mixture? Removing the\n> lock as you suggest would make it possible for the rule rewriter to pick\n> up non-self-consistent data from the system catalogs, leading to\n> arbitrarily bad behavior ... if you're lucky, it'll just crash, if\n> you're not lucky the incorrect rule will do a fandango on your data.\n\nWhere can one read about why the catalogs can't/don't use MVCC (I'm\nassuming that's why this won't work...)\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 6 Feb 2006 22:11:05 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and locking problems" } ]
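For the partition-maintenance case that started the thread, the immediate consequence of the above is that a TRUNCATE queues behind any open reader and everything arriving afterwards queues behind the TRUNCATE. A generic way to at least bound the damage (not something proposed in the thread; the partition name and timeout are placeholders):

    -- Session 1: a long-running reader holds ACCESS SHARE on the partition.
    BEGIN;
    SELECT count(*) FROM part_2006_01;
    -- ... transaction still open ...

    -- Session 2: TRUNCATE needs ACCESS EXCLUSIVE, so it waits for session 1,
    -- and later statements against part_2006_01 wait behind it in turn.
    SET statement_timeout = 5000;   -- milliseconds; give up instead of stalling
    TRUNCATE part_2006_01;          -- retry later if it times out
    RESET statement_timeout;

This does nothing about the exclusive-lock requirement itself, which, as Tom explains, is inherent to how TRUNCATE unlinks the old file.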
[ { "msg_contents": "\nmy first implementation was in pl/pgsql but when i query the children of a node, i need to store them into an array because i need to iterate over all the children and for each child, I test the type of it.\nif it's a PI or a TEXT, i write it into a file, but if it's an element, i call the same function with new parameters (recursive call) and in consequence i can't use a cursor.\n\nin pl/pgsql, the result of a query is returned into a cursor, and in my implementation the only solution i found was to iterate over the cursor and to add children into an array.\ni didn't found any solution to get all the children directly into an array (like the oracle BULK COLLECT).\nSo we chose pl/perl.\n\nmaybe there is an other way to query children directly into an array and having query plan caching ?\n\n-----Message d'origine-----\nDe : Tom Lane [mailto:[email protected]]\nEnvoyé : mercredi 1 février 2006 17:05\nÀ : FERREIRA, William (VALTECH)\nCc : [email protected]\nObjet : Re: [PERFORM] execution plan : Oracle vs PostgreSQL\n\n\n\n\"FERREIRA, William (VALTECH)\" <[email protected]> writes:\n> My test document has 115000 nodes.\n> the export of the document(extracting all informations from database and writing XML file on disk) takes 30s with Oracle and 5mn with Postgresql.\n> The Oracle stored procedure is written in pl/sql and the Postgresql stored procedure in pl/perl (using spi_exec).\n\nSo the test case involves 115000 executions of the same query via spi_exec?\nThat means the query will be re-parsed and re-planned 115000 times. If\nyou want something that's a reasonably fair comparison against Oracle,\ntry plpgsql which has query plan caching.\n\n\t\t\tregards, tom lane\n\nPS: please do NOT post EXPLAIN VERBOSE output unless someone\nspecifically asks for it. It clutters the archives and it's usually\nuseless. EXPLAIN ANALYZE is what we normally want to see for\nperformance issues.\n\n\r\nThis mail has originated outside your organization,\neither from an external partner or the Global Internet.\nKeep this in mind if you answer this message.\n\n\r\nThis e-mail is intended only for the above addressee. It may contain\nprivileged information. If you are not the addressee you must not copy,\ndistribute, disclose or use any of the information in it. If you have\nreceived it in error please delete it and immediately notify the sender.\nSecurity Notice: all e-mail, sent to or from this address, may be\naccessed by someone other than the recipient, for system management and\nsecurity reasons. This access is controlled under Regulation of\nInvestigatory Powers Act 2000, Lawful Business Practises.\n", "msg_date": "Wed, 01 Feb 2006 17:33:15 +0100", "msg_from": "\"FERREIRA, William (VALTECH)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: execution plan : Oracle vs PostgreSQL" }, { "msg_contents": "Indeed there is: you can use an ARRAY constructor with SELECT. Here's \nsome PGPLSQL code I have (simplified and with the variable names shrouded).\n\n SELECT INTO m\n ARRAY(SELECT d FROM hp\n WHERE hp.ss=$1\n ORDER BY 1);\n\n\nFERREIRA, William (VALTECH) wrote:\n\n> maybe there is an other way to query children directly into an array and having query plan caching ?", "msg_date": "Mon, 06 Feb 2006 12:36:15 -0800", "msg_from": "Andrew Lazarus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: execution plan : Oracle vs PostgreSQL" } ]
[ { "msg_contents": "We're converting from a commercial database product to PostgreSQL, and\ngenerally things are going well. While the licensing agreement with the\ncommercial vendor prohibits publication of benchmarks without their\nwritten consent, I'll just say that on almost everything, PostgreSQL is\nfaster.\n\nWe do have a few queries where PostgreSQL is several orders of\nmagnitude slower. It appears that the reason it is choosing a bad plan\nis that it is reluctant to start from a subquery when there is an outer\njoin in the FROM clause. Pasted below are four logically equivalent\nqueries. The first is a much stripped down version of one of the\nproduction queries. The second turns the EXISTS expression into an IN\nexpression. (In the full query this makes very little difference; as I\npared down the query, the planner started to do better with the IN form\nbefore the EXISTS form.) The third query is the fastest, but isn't\nportable enough for our mixed environment. The fourth is the best\nworkaround I've found, but I get a bit queasy when I have to use the\nDISTINCT modifier on a query.\n\nAny other suggestions?\n\n-Kevin\n\n\nexplain analyze\nSELECT \"C\".*, \"P\".\"partyNo\"\n FROM \"Case\" \"C\"\n JOIN \"Party\" \"P\" ON (\"C\".\"caseNo\" = \"P\".\"caseNo\" AND \"C\".\"countyNo\" =\n\"P\".\"countyNo\")\n LEFT OUTER JOIN \"WccaPermCaseType\" \"WPCT\"\n ON ( \"C\".\"caseType\" = \"WPCT\".\"caseType\"\n AND \"C\".\"countyNo\" = \"WPCT\".\"countyNo\"\n AND \"WPCT\".\"profileName\" = 'PUBLIC'\n )\n WHERE ( \"WPCT\".\"profileName\" IS NOT NULL\n OR (\"C\".\"caseType\" = 'PA' AND \"C\".\"isConfidential\" = false)\n )\n AND \"C\".\"countyNo\" = 66\n AND EXISTS\n (\n SELECT *\n FROM \"DocImageMetaData\" \"D\"\n WHERE \"D\".\"isEFiling\" = true\n AND \"D\".\"insertedDate\" BETWEEN '2006-01-01' AND '2006-01-07'\n AND \"D\".\"countyNo\" = 66\n AND \"D\".\"countyNo\" = \"C\".\"countyNo\"\n AND \"D\".\"caseNo\" = \"C\".\"caseNo\"\n )\n ORDER BY \"caseNo\"\n;\n \n QUERY PLAN \n \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=786467.94..786504.40 rows=14584 width=210) (actual\ntime=7391.295..7391.418 rows=51 loops=1)\n Sort Key: \"C\".\"caseNo\"\n -> Hash Left Join (cost=49.35..785459.30 rows=14584 width=210)\n(actual time=6974.819..7390.802 rows=51 loops=1)\n Hash Cond: (((\"outer\".\"caseType\")::bpchar =\n(\"inner\".\"caseType\")::bpchar) AND ((\"outer\".\"countyNo\")::smallint =\n(\"inner\".\"countyNo\")::smallint))\n Filter: ((\"inner\".\"profileName\" IS NOT NULL) OR\n(((\"outer\".\"caseType\")::bpchar = 'PA'::bpchar) AND (NOT\n\"outer\".\"isConfidential\")))\n -> Merge Join (cost=0.00..783366.38 rows=14584 width=210)\n(actual time=6972.672..7388.329 rows=51 loops=1)\n Merge Cond: ((\"outer\".\"caseNo\")::bpchar =\n(\"inner\".\"caseNo\")::bpchar)\n -> Index Scan using \"Case_pkey\" on \"Case\" \"C\" \n(cost=0.00..624268.11 rows=65025 width=208) (actual\ntime=4539.588..4927.730 rows=22 loops=1)\n Index Cond: ((\"countyNo\")::smallint = 66)\n Filter: (subplan)\n SubPlan\n -> Index Scan using \"DocImageMetaData_pkey\" on\n\"DocImageMetaData\" \"D\" (cost=0.00..3.89 rows=1 width=212) (actual\ntime=0.012..0.012 rows=0 loops=203171)\n Index Cond: (((\"countyNo\")::smallint = 66)\nAND ((\"countyNo\")::smallint = ($0)::smallint) AND ((\"caseNo\")::bpchar =\n($1)::bpchar))\n Filter: (\"isEFiling\" AND\n((\"insertedDate\")::date >= '2006-01-01'::date) 
AND\n((\"insertedDate\")::date <= '2006-01-07'::date))\n -> Index Scan using \"Party_pkey\" on \"Party\" \"P\" \n(cost=0.00..158657.86 rows=191084 width=22) (actual time=0.769..1646.381\nrows=354058 loops=1)\n Index Cond: (66 = (\"countyNo\")::smallint)\n -> Hash (cost=49.22..49.22 rows=27 width=31) (actual\ntime=1.919..1.919 rows=28 loops=1)\n -> Bitmap Heap Scan on \"WccaPermCaseType\" \"WPCT\" \n(cost=2.16..49.22 rows=27 width=31) (actual time=0.998..1.782 rows=28\nloops=1)\n Recheck Cond: (((\"countyNo\")::smallint = 66) AND\n((\"profileName\")::text = 'PUBLIC'::text))\n -> Bitmap Index Scan on \"WccaPermCaseType_pkey\" \n(cost=0.00..2.16 rows=27 width=0) (actual time=0.684..0.684 rows=28\nloops=1)\n Index Cond: (((\"countyNo\")::smallint = 66)\nAND ((\"profileName\")::text = 'PUBLIC'::text))\n Total runtime: 7392.577 ms\n(22 rows)\n\nexplain analyze\nSELECT \"C\".*, \"P\".\"partyNo\"\n FROM \"Case\" \"C\"\n JOIN \"Party\" \"P\" ON (\"C\".\"caseNo\" = \"P\".\"caseNo\" AND \"C\".\"countyNo\" =\n\"P\".\"countyNo\")\n LEFT OUTER JOIN \"WccaPermCaseType\" \"WPCT\"\n ON ( \"C\".\"caseType\" = \"WPCT\".\"caseType\"\n AND \"C\".\"countyNo\" = \"WPCT\".\"countyNo\"\n AND \"WPCT\".\"profileName\" = 'PUBLIC'\n )\n WHERE ( \"WPCT\".\"profileName\" IS NOT NULL\n OR (\"C\".\"caseType\" = 'PA' AND \"C\".\"isConfidential\" = false)\n )\n AND \"C\".\"countyNo\" = 66\n AND \"C\".\"caseNo\" IN\n (\n SELECT \"D\".\"caseNo\"\n FROM \"DocImageMetaData\" \"D\"\n WHERE \"D\".\"isEFiling\" = true\n AND \"D\".\"insertedDate\" BETWEEN '2006-01-01' AND '2006-01-07'\n AND \"D\".\"countyNo\" = 66\n )\n ORDER BY\n \"caseNo\"\n\n;\n \n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=284708.49..284708.50 rows=1 width=210) (actual\ntime=8962.995..8963.103 rows=51 loops=1)\n Sort Key: \"C\".\"caseNo\"\n -> Hash Join (cost=2359.31..284708.48 rows=1 width=210) (actual\ntime=8401.856..8962.606 rows=51 loops=1)\n Hash Cond: ((\"outer\".\"caseNo\")::bpchar =\n(\"inner\".\"caseNo\")::bpchar)\n -> Hash Left Join (cost=49.35..282252.68 rows=29167\nwidth=228) (actual time=32.120..8184.880 rows=312718 loops=1)\n Hash Cond: (((\"outer\".\"caseType\")::bpchar =\n(\"inner\".\"caseType\")::bpchar) AND ((\"outer\".\"countyNo\")::smallint =\n(\"inner\".\"countyNo\")::smallint))\n Filter: ((\"inner\".\"profileName\" IS NOT NULL) OR\n(((\"outer\".\"caseType\")::bpchar = 'PA'::bpchar) AND (NOT\n\"outer\".\"isConfidential\")))\n -> Merge Join (cost=0.00..278116.34 rows=29167\nwidth=228) (actual time=0.596..6236.238 rows=362819 loops=1)\n Merge Cond: ((\"outer\".\"caseNo\")::bpchar =\n(\"inner\".\"caseNo\")::bpchar)\n -> Index Scan using \"Case_pkey\" on \"Case\" \"C\" \n(cost=0.00..118429.72 rows=130049 width=208) (actual\ntime=0.265..1303.409 rows=203171 loops=1)\n Index Cond: ((\"countyNo\")::smallint = 66)\n -> Index Scan using \"Party_pkey\" on \"Party\" \"P\" \n(cost=0.00..158657.86 rows=191084 width=22) (actual time=0.303..2310.735\nrows=362819 loops=1)\n Index Cond: (66 = (\"countyNo\")::smallint)\n -> Hash (cost=49.22..49.22 rows=27 width=31) (actual\ntime=31.406..31.406 rows=28 loops=1)\n -> Bitmap Heap Scan on \"WccaPermCaseType\" \"WPCT\" \n(cost=2.16..49.22 rows=27 width=31) (actual time=23.498..31.284 rows=28\nloops=1)\n Recheck Cond: (((\"countyNo\")::smallint = 66)\nAND ((\"profileName\")::text = 'PUBLIC'::text))\n -> Bitmap Index Scan 
on\n\"WccaPermCaseType_pkey\" (cost=0.00..2.16 rows=27 width=0) (actual\ntime=17.066..17.066 rows=28 loops=1)\n Index Cond: (((\"countyNo\")::smallint =\n66) AND ((\"profileName\")::text = 'PUBLIC'::text))\n -> Hash (cost=2309.95..2309.95 rows=1 width=18) (actual\ntime=24.255..24.255 rows=22 loops=1)\n -> HashAggregate (cost=2309.94..2309.95 rows=1\nwidth=18) (actual time=24.132..24.185 rows=22 loops=1)\n -> Index Scan using\n\"DocImageMetaData_CountyNoInsertedDate\" on \"DocImageMetaData\" \"D\" \n(cost=0.00..2309.93 rows=6 width=18) (actual time=7.362..23.933 rows=29\nloops=1)\n Index Cond: (((\"countyNo\")::smallint = 66)\nAND ((\"insertedDate\")::date >= '2006-01-01'::date) AND\n((\"insertedDate\")::date <= '2006-01-07'::date))\n Filter: \"isEFiling\"\n Total runtime: 8964.044 ms\n(24 rows)\n\nexplain analyze\nSELECT \"C\".*, \"P\".\"partyNo\"\n FROM \"Case\" \"C\"\n JOIN \"Party\" \"P\" ON (\"C\".\"caseNo\" = \"P\".\"caseNo\" AND \"C\".\"countyNo\" =\n\"P\".\"countyNo\")\n JOIN\n (\n SELECT \"D\".\"caseNo\"\n FROM \"DocImageMetaData\" \"D\"\n WHERE \"D\".\"isEFiling\" = true\n AND \"D\".\"insertedDate\" BETWEEN '2006-01-01' AND '2006-01-07'\n AND \"D\".\"countyNo\" = 66\n GROUP BY \"D\".\"caseNo\"\n ) \"DD\"\n ON (\"DD\".\"caseNo\" = \"C\".\"caseNo\")\n LEFT OUTER JOIN \"WccaPermCaseType\" \"WPCT\"\n ON ( \"C\".\"caseType\" = \"WPCT\".\"caseType\"\n AND \"C\".\"countyNo\" = \"WPCT\".\"countyNo\"\n AND \"WPCT\".\"profileName\" = 'PUBLIC'\n )\n WHERE ( \"WPCT\".\"profileName\" IS NOT NULL\n OR (\"C\".\"caseType\" = 'PA' AND \"C\".\"isConfidential\" = false)\n )\n AND \"C\".\"countyNo\" = 66\n ORDER BY\n \"caseNo\"\n;\n \n QUERY PLAN \n \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2321.49..2321.50 rows=1 width=210) (actual\ntime=7.753..7.859 rows=51 loops=1)\n Sort Key: \"C\".\"caseNo\"\n -> Nested Loop Left Join (cost=2309.94..2321.48 rows=1 width=210)\n(actual time=3.982..7.369 rows=51 loops=1)\n Join Filter: ((\"outer\".\"countyNo\")::smallint =\n(\"inner\".\"countyNo\")::smallint)\n Filter: ((\"inner\".\"profileName\" IS NOT NULL) OR\n(((\"outer\".\"caseType\")::bpchar = 'PA'::bpchar) AND (NOT\n\"outer\".\"isConfidential\")))\n -> Nested Loop (cost=2309.94..2317.99 rows=1 width=210)\n(actual time=3.906..5.717 rows=51 loops=1)\n -> Nested Loop (cost=2309.94..2313.51 rows=1\nwidth=240) (actual time=3.847..4.660 rows=22 loops=1)\n -> HashAggregate (cost=2309.94..2309.95 rows=1\nwidth=18) (actual time=3.775..3.830 rows=22 loops=1)\n -> Index Scan using\n\"DocImageMetaData_CountyNoInsertedDate\" on \"DocImageMetaData\" \"D\" \n(cost=0.00..2309.93 rows=6 width=18) (actual time=0.732..3.601 rows=29\nloops=1)\n Index Cond: (((\"countyNo\")::smallint =\n66) AND ((\"insertedDate\")::date >= '2006-01-01'::date) AND\n((\"insertedDate\")::date <= '2006-01-07'::date))\n Filter: \"isEFiling\"\n -> Index Scan using \"Case_pkey\" on \"Case\" \"C\" \n(cost=0.00..3.53 rows=1 width=208) (actual time=0.020..0.022 rows=1\nloops=22)\n Index Cond: (((\"C\".\"countyNo\")::smallint =\n66) AND ((\"outer\".\"caseNo\")::bpchar = (\"C\".\"caseNo\")::bpchar))\n -> Index Scan using \"Party_pkey\" on \"Party\" \"P\" \n(cost=0.00..4.46 rows=2 width=22) (actual time=0.019..0.028 rows=2\nloops=22)\n Index Cond: ((66 = (\"P\".\"countyNo\")::smallint) AND\n((\"outer\".\"caseNo\")::bpchar = (\"P\".\"caseNo\")::bpchar))\n -> Index Scan using 
\"WccaPermCaseType_ProfileName\" on\n\"WccaPermCaseType\" \"WPCT\" (cost=0.00..3.47 rows=1 width=31) (actual\ntime=0.015..0.018 rows=1 loops=51)\n Index Cond: (((\"WPCT\".\"profileName\")::text =\n'PUBLIC'::text) AND ((\"outer\".\"caseType\")::bpchar =\n(\"WPCT\".\"caseType\")::bpchar) AND ((\"WPCT\".\"countyNo\")::smallint = 66))\n Total runtime: 8.592 ms\n(18 rows)\n\nexplain analyze\nSELECT DISTINCT \"C\".*, \"P\".\"partyNo\"\n FROM \"Case\" \"C\"\n JOIN \"Party\" \"P\" ON (\"P\".\"countyNo\" = \"C\".\"countyNo\" AND \"P\".\"caseNo\"\n= \"C\".\"caseNo\")\n JOIN \"DocImageMetaData\" \"D\" ON (\"D\".\"countyNo\" = \"C\".\"countyNo\" AND\n\"D\".\"caseNo\" = \"C\".\"caseNo\")\n LEFT OUTER JOIN \"WccaPermCaseType\" \"WPCT\"\n ON ( \"C\".\"caseType\" = \"WPCT\".\"caseType\"\n AND \"C\".\"countyNo\" = \"WPCT\".\"countyNo\"\n AND \"WPCT\".\"profileName\" = 'PUBLIC'\n )\n WHERE ( \"WPCT\".\"profileName\" IS NOT NULL\n OR (\"C\".\"caseType\" = 'PA' AND \"C\".\"isConfidential\" = false)\n )\n AND \"C\".\"countyNo\" = 66\n AND \"D\".\"countyNo\" = 66\n AND \"D\".\"isEFiling\" = true\n AND \"D\".\"insertedDate\" BETWEEN '2006-01-01' AND '2006-01-07'\n ORDER BY\n \"caseNo\"\n;\n \n \n \n \n QUERY PLAN \n \n \n \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=2339.19..2339.28 rows=1 width=210) (actual\ntime=9.539..10.044 rows=51 loops=1)\n -> Sort (cost=2339.19..2339.19 rows=1 width=210) (actual\ntime=9.532..9.678 rows=68 loops=1)\n Sort Key: \"C\".\"caseNo\", \"C\".\"countyNo\", \"C\".\"caseType\",\n\"C\".\"filingDate\", \"C\".\"isConfidential\", \"C\".\"isDomesticViolence\",\n\"C\".\"isFiledWoCtofc\", \"C\".\"lastChargeSeqNo\", \"C\".\"lastCvJgSeqNo\",\n\"C\".\"lastHistSeqNo\", \"C\".\"lastPartySeqNo\", \"C\".\"lastRelSeqNo\",\n\"C\".\"statusCode\", \"C\".\"bondId\", \"C\".\"branchId\", \"C\".caption,\n\"C\".\"daCaseNo\", \"C\".\"dispCtofcNo\", \"C\".\"fileCtofcDate\",\n\"C\".\"filingCtofcNo\", \"C\".\"issAgencyNo\", \"C\".\"maintCode\",\n\"C\".\"oldCaseNo\", \"C\".\"plntfAgencyNo\", \"C\".\"previousRespCo\",\n\"C\".\"prosAgencyNo\", \"C\".\"prosAtty\", \"C\".\"respCtofcNo\",\n\"C\".\"wcisClsCode\", \"C\".\"isSeal\", \"C\".\"isExpunge\",\n\"C\".\"isElectronicFiling\", \"C\".\"isPartySeal\", \"P\".\"partyNo\"\n -> Nested Loop Left Join (cost=0.00..2339.18 rows=1\nwidth=210) (actual time=0.857..7.901 rows=68 loops=1)\n Join Filter: ((\"outer\".\"countyNo\")::smallint =\n(\"inner\".\"countyNo\")::smallint)\n Filter: ((\"inner\".\"profileName\" IS NOT NULL) OR\n(((\"outer\".\"caseType\")::bpchar = 'PA'::bpchar) AND (NOT\n\"outer\".\"isConfidential\")))\n -> Nested Loop (cost=0.00..2335.68 rows=1 width=210)\n(actual time=0.786..5.784 rows=68 loops=1)\n -> Nested Loop (cost=0.00..2331.20 rows=1\nwidth=226) (actual time=0.728..4.313 rows=29 loops=1)\n -> Index Scan using\n\"DocImageMetaData_CountyNoInsertedDate\" on \"DocImageMetaData\" \"D\" \n(cost=0.00..2309.93 rows=6 width=20) (actual 
time=0.661..3.266 rows=29\nloops=1)\n Index Cond: (((\"countyNo\")::smallint =\n66) AND ((\"insertedDate\")::date >= '2006-01-01'::date) AND\n((\"insertedDate\")::date <= '2006-01-07'::date))\n Filter: \"isEFiling\"\n -> Index Scan using \"Case_pkey\" on \"Case\"\n\"C\" (cost=0.00..3.53 rows=1 width=208) (actual time=0.018..0.021 rows=1\nloops=29)\n Index Cond:\n(((\"C\".\"countyNo\")::smallint = 66) AND ((\"outer\".\"caseNo\")::bpchar =\n(\"C\".\"caseNo\")::bpchar))\n -> Index Scan using \"Party_pkey\" on \"Party\" \"P\" \n(cost=0.00..4.46 rows=2 width=22) (actual time=0.018..0.027 rows=2\nloops=29)\n Index Cond: ((66 =\n(\"P\".\"countyNo\")::smallint) AND ((\"P\".\"caseNo\")::bpchar =\n(\"outer\".\"caseNo\")::bpchar))\n -> Index Scan using \"WccaPermCaseType_ProfileName\" on\n\"WccaPermCaseType\" \"WPCT\" (cost=0.00..3.47 rows=1 width=31) (actual\ntime=0.014..0.017 rows=1 loops=68)\n Index Cond: (((\"WPCT\".\"profileName\")::text =\n'PUBLIC'::text) AND ((\"outer\".\"caseType\")::bpchar =\n(\"WPCT\".\"caseType\")::bpchar) AND ((\"WPCT\".\"countyNo\")::smallint = 66))\n Total runtime: 10.748 ms\n(18 rows)\n \n", "msg_date": "Wed, 01 Feb 2006 12:33:02 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Planner reluctant to start from subquery" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> We do have a few queries where PostgreSQL is several orders of\n> magnitude slower. It appears that the reason it is choosing a bad plan\n> is that it is reluctant to start from a subquery when there is an outer\n> join in the FROM clause.\n\nAFAICT this case doesn't really hinge on the outer join at all. The\nproblem is that EXISTS subqueries aren't well optimized. I would have\nexpected an equivalent IN clause to work better. In fact, I'm not\nclear why the planner isn't finding the cheapest plan (which it does\nestimate as cheapest) from the IN version you posted. What PG version\nis this exactly?\n\n> ... The third query is the fastest, but isn't\n> portable enough for our mixed environment.\n\nNot really relevant to the problem, but what's wrong with it? Looks\nlike standard SQL to me.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Feb 2006 14:34:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner reluctant to start from subquery " }, { "msg_contents": ">>> On Wed, Feb 1, 2006 at 1:34 pm, in message\n<[email protected]>,\nTom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> We do have a few queries where PostgreSQL is several orders of\n>> magnitude slower. It appears that the reason it is choosing a bad\nplan\n>> is that it is reluctant to start from a subquery when there is an\nouter\n>> join in the FROM clause.\n> \n> AFAICT this case doesn't really hinge on the outer join at all. The\n> problem is that EXISTS subqueries aren't well optimized. I would\nhave\n> expected an equivalent IN clause to work better. In fact, I'm not\n> clear why the planner isn't finding the cheapest plan (which it does\n> estimate as cheapest) from the IN version you posted.\n\nAll I know is that trying various permutations, I saw it pick a good\nplan for the IN format when I eliminated the last outer join in the FROM\nclause. I know it isn't conclusive, but it was a correlation which\nsuggested a possible causality to me. 
The EXISTS never chose a\nreasonable plan on this one, although we haven't had a problem with them\nin most cases.\n\n> What PG version is this exactly?\n\nselect version() reports:\n\n PostgreSQL 8.1.2 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC)\n3.4.2 (mingw-special)\n\nHowever, this was actually built off the 8.1 stable branch as of Jan.\n13th at about 3 p.m. This build does contain the implementation of\nstandard_conforming_strings for which I recently posted a patch. The\nmake was configured with: --enable-integer-datetimes --enable-debug\n--disable-nls\n\n> \n>> ... The third query is the fastest, but isn't\n>> portable enough for our mixed environment.\n> \n> Not really relevant to the problem, but what's wrong with it? Looks\n> like standard SQL to me.\n\nIt is absolutely compliant with the standards. Unfortunately, we are\nunder a \"lowest common denominator\" portability mandate. I notice that\nsupport for this syntax has improved since we last set our limits; I'll\ntry to get this added to our allowed techniques.\n\nI can't complain about the portability mandate -- without it, we would\nundoubtedly have had product specific code for the commercial product\nwhich would have made migration to PostgreSQL much more painful.\n\n-Kevin\n\n\n", "msg_date": "Wed, 01 Feb 2006 13:59:57 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner reluctant to start from subquery" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote: \n>> ... expected an equivalent IN clause to work better. In fact, I'm not\n>> clear why the planner isn't finding the cheapest plan (which it does\n>> estimate as cheapest) from the IN version you posted.\n\n> All I know is that trying various permutations, I saw it pick a good\n> plan for the IN format when I eliminated the last outer join in the FROM\n> clause. I know it isn't conclusive, but it was a correlation which\n> suggested a possible causality to me.\n\nBut there is still an outer join in your third example (the one with the\nbest plan), so that doesn't seem to hold water. In any case, the way\nthat IN planning works these days it really should have considered the\nplan equivalent to your JOIN-against-GROUP-BY variant.\n\nI'm interested to poke at this ... are you in a position to provide a\ntest case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Feb 2006 15:14:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner reluctant to start from subquery " }, { "msg_contents": ">>> On Wed, Feb 1, 2006 at 2:14 pm, in message\n<[email protected]>,\nTom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> Tom Lane <[email protected]> wrote: \n>>> ... expected an equivalent IN clause to work better. In fact, I'm\nnot\n>>> clear why the planner isn't finding the cheapest plan (which it\ndoes\n>>> estimate as cheapest) from the IN version you posted.\n> \n>> All I know is that trying various permutations, I saw it pick a\ngood\n>> plan for the IN format when I eliminated the last outer join in the\nFROM\n>> clause. 
I know it isn't conclusive, but it was a correlation which\n>> suggested a possible causality to me.\n> \n> But there is still an outer join in your third example (the one with\nthe\n> best plan), so that doesn't seem to hold water.\n\nRight, if I moved the DocImageMetaData from a subquery in the WHERE\nclause up to the FROM clause, or I eliminated all OUTER JOINs, it chose\na good plan. Of course, this was just playing with a few dozen\npermutations, so it proves nothing -- I'm just sayin'....\n\n> In any case, the way\n> that IN planning works these days it really should have considered\nthe\n> plan equivalent to your JOIN- against- GROUP- BY variant.\n> \n> I'm interested to poke at this ... are you in a position to provide\na\n> test case?\n\nI can't supply the original data, since many of the tables have\nmillions of rows, with some of the data (related to juvenile, paternity,\nsealed, and expunged cases) protected by law. I could try to put\ntogether a self-contained example, but I'm not sure the best way to do\nthat, since the table sizes and value distributions may be significant\nhere. Any thoughts on that?\n\n-Kevin\n\n\n", "msg_date": "Wed, 01 Feb 2006 14:24:39 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner reluctant to start from subquery" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote: \n>> I'm interested to poke at this ... are you in a position to provide a\n>> test case?\n\n> I can't supply the original data, since many of the tables have\n> millions of rows, with some of the data (related to juvenile, paternity,\n> sealed, and expunged cases) protected by law. I could try to put\n> together a self-contained example, but I'm not sure the best way to do\n> that, since the table sizes and value distributions may be significant\n> here. Any thoughts on that?\n\nI think that the only aspect of the data that really matters here is the\nnumber of distinct values, which would affect decisions about whether\nHashAggregate is appropriate or not. And you could probably get the\nsame thing to happen with at most a few tens of thousands of rows.\n\nAlso, all we need to worry about is the columns used in the WHERE/JOIN\nconditions, which looks to be mostly case numbers, dates, and county\nidentification ... how much confidential info is there in that? At\nworst you could translate the case numbers to some randomly generated\nidentifiers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Feb 2006 15:36:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner reluctant to start from subquery " }, { "msg_contents": ">>> On Wed, Feb 1, 2006 at 2:36 pm, in message\n<[email protected]>,\nTom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> Tom Lane <[email protected]> wrote: \n>>> I'm interested to poke at this ... are you in a position to provide\na\n>>> test case?\n> \n>> I can't supply the original data, since many of the tables have\n>> millions of rows, with some of the data (related to juvenile,\npaternity,\n>> sealed, and expunged cases) protected by law. I could try to put\n>> together a self- contained example, but I'm not sure the best way to\ndo\n>> that, since the table sizes and value distributions may be\nsignificant\n>> here. 
Any thoughts on that?\n> \n> I think that the only aspect of the data that really matters here is\nthe\n> number of distinct values, which would affect decisions about\nwhether\n> HashAggregate is appropriate or not. And you could probably get the\n> same thing to happen with at most a few tens of thousands of rows.\n> \n> Also, all we need to worry about is the columns used in the\nWHERE/JOIN\n> conditions, which looks to be mostly case numbers, dates, and county\n> identification ... how much confidential info is there in that? At\n> worst you could translate the case numbers to some randomly\ngenerated\n> identifiers.\n\nOK, I could probably obliterate name, addresses, etc. in a copy of the\ndata (those aren't significant to the query anyway) and provide a test\ncase. However, I just found another clue.\n\nSince you were so confident it couldn't be the outer join, I went\nlooking for what else I changed at the same time. I eliminated the code\nreferencing that table, which contained an OR. I've seen ORs cause\nnasty problems with optimizers in the past. I took out the OR in the\nwhere clause, without eliminating that last outer join, and it optimized\nfine.\n\nI'll hold off a bit to see if you still need the test case. ;-)\n\n-Kevin\n\n", "msg_date": "Wed, 01 Feb 2006 14:43:01 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner reluctant to start from subquery" }, { "msg_contents": ">>> On Wed, Feb 1, 2006 at 2:43 pm, in message\n<[email protected]>, \"Kevin Grittner\"\n<[email protected]> wrote: \n> \n> I took out the OR in the\n> where clause, without eliminating that last outer join, and it\noptimized\n> fine.\n\nFYI, with both sides of the OR separated:\n\nexplain analyze\nSELECT \"C\".*, \"P\".\"partyNo\"\n FROM \"Case\" \"C\"\n JOIN \"Party\" \"P\" ON (\"C\".\"caseNo\" = \"P\".\"caseNo\" AND \"C\".\"countyNo\" =\n\"P\".\"countyNo\")\n LEFT OUTER JOIN \"WccaPermCaseType\" \"WPCT\"\n ON ( \"C\".\"caseType\" = \"WPCT\".\"caseType\"\n AND \"C\".\"countyNo\" = \"WPCT\".\"countyNo\"\n AND \"WPCT\".\"profileName\" = 'PUBLIC'\n )\n WHERE \"WPCT\".\"profileName\" IS NOT NULL\n AND \"C\".\"countyNo\" = 66\n AND \"C\".\"caseNo\" IN\n (\n SELECT \"D\".\"caseNo\"\n FROM \"DocImageMetaData\" \"D\"\n WHERE \"D\".\"isEFiling\" = true\n AND \"D\".\"insertedDate\" BETWEEN '2006-01-01' AND '2006-01-07'\n AND \"D\".\"countyNo\" = 66\n )\n ORDER BY\n \"caseNo\"\n;\n \n QUERY PLAN \n \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2321.48..2321.48 rows=1 width=210) (actual\ntime=5.908..6.001 rows=51 loops=1)\n Sort Key: \"C\".\"caseNo\"\n -> Nested Loop (cost=2309.94..2321.47 rows=1 width=210) (actual\ntime=3.407..5.605 rows=51 loops=1)\n -> Nested Loop (cost=2309.94..2316.98 rows=1 width=226)\n(actual time=3.353..4.659 rows=22 loops=1)\n -> Nested Loop (cost=2309.94..2313.50 rows=1\nwidth=226) (actual time=3.301..4.023 rows=22 loops=1)\n -> HashAggregate (cost=2309.94..2309.95 rows=1\nwidth=18) (actual time=3.251..3.300 rows=22 loops=1)\n -> Index Scan using\n\"DocImageMetaData_CountyNoInsertedDate\" on \"DocImageMetaData\" \"D\" \n(cost=0.00..2309.93 rows=6 width=18) (actual time=0.681..3.141 rows=29\nloops=1)\n Index Cond: (((\"countyNo\")::smallint =\n66) AND ((\"insertedDate\")::date >= '2006-01-01'::date) AND\n((\"insertedDate\")::date <= '2006-01-07'::date))\n Filter: 
\"isEFiling\"\n -> Index Scan using \"Case_pkey\" on \"Case\" \"C\" \n(cost=0.00..3.53 rows=1 width=208) (actual time=0.018..0.020 rows=1\nloops=22)\n Index Cond: (((\"C\".\"countyNo\")::smallint =\n66) AND ((\"C\".\"caseNo\")::bpchar = (\"outer\".\"caseNo\")::bpchar))\n -> Index Scan using \"WccaPermCaseType_ProfileName\" on\n\"WccaPermCaseType\" \"WPCT\" (cost=0.00..3.47 rows=1 width=8) (actual\ntime=0.015..0.017 rows=1 loops=22)\n Index Cond: (((\"WPCT\".\"profileName\")::text =\n'PUBLIC'::text) AND ((\"outer\".\"caseType\")::bpchar =\n(\"WPCT\".\"caseType\")::bpchar) AND (66 = (\"WPCT\".\"countyNo\")::smallint))\n Filter: (\"profileName\" IS NOT NULL)\n -> Index Scan using \"Party_pkey\" on \"Party\" \"P\" \n(cost=0.00..4.46 rows=2 width=22) (actual time=0.017..0.025 rows=2\nloops=22)\n Index Cond: ((66 = (\"P\".\"countyNo\")::smallint) AND\n((\"outer\".\"caseNo\")::bpchar = (\"P\".\"caseNo\")::bpchar))\n Total runtime: 6.511 ms\n(17 rows)\n\nexplain analyze\nSELECT \"C\".*, \"P\".\"partyNo\"\n FROM \"Case\" \"C\"\n JOIN \"Party\" \"P\" ON (\"C\".\"caseNo\" = \"P\".\"caseNo\" AND \"C\".\"countyNo\" =\n\"P\".\"countyNo\")\n LEFT OUTER JOIN \"WccaPermCaseType\" \"WPCT\"\n ON ( \"C\".\"caseType\" = \"WPCT\".\"caseType\"\n AND \"C\".\"countyNo\" = \"WPCT\".\"countyNo\"\n AND \"WPCT\".\"profileName\" = 'PUBLIC'\n )\n WHERE \"C\".\"caseType\" = 'PA' AND \"C\".\"isConfidential\" = false\n AND \"C\".\"countyNo\" = 66\n AND \"C\".\"caseNo\" IN\n (\n SELECT \"D\".\"caseNo\"\n FROM \"DocImageMetaData\" \"D\"\n WHERE \"D\".\"isEFiling\" = true\n AND \"D\".\"insertedDate\" BETWEEN '2006-01-01' AND '2006-01-07'\n AND \"D\".\"countyNo\" = 66\n )\n ORDER BY\n \"caseNo\"\n;\n \n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=11527.21..11527.21 rows=1 width=210) (actual\ntime=107.449..107.449 rows=0 loops=1)\n Sort Key: \"C\".\"caseNo\"\n -> Nested Loop IN Join (cost=3.47..11527.20 rows=1 width=210)\n(actual time=107.432..107.432 rows=0 loops=1)\n -> Hash Left Join (cost=3.47..9637.44 rows=255 width=228)\n(actual time=107.425..107.425 rows=0 loops=1)\n Hash Cond: (((\"outer\".\"caseType\")::bpchar =\n(\"inner\".\"caseType\")::bpchar) AND ((\"outer\".\"countyNo\")::smallint =\n(\"inner\".\"countyNo\")::smallint))\n -> Nested Loop (cost=0.00..9631.40 rows=255 width=228)\n(actual time=107.418..107.418 rows=0 loops=1)\n -> Index Scan using \"Case_CaseTypeStatus\" on\n\"Case\" \"C\" (cost=0.00..4536.25 rows=1136 width=208) (actual\ntime=107.412..107.412 rows=0 loops=1)\n Index Cond: (((\"caseType\")::bpchar =\n'PA'::bpchar) AND ((\"countyNo\")::smallint = 66))\n Filter: (NOT \"isConfidential\")\n -> Index Scan using \"Party_pkey\" on \"Party\" \"P\" \n(cost=0.00..4.46 rows=2 width=22) (never executed)\n Index Cond: ((66 =\n(\"P\".\"countyNo\")::smallint) AND ((\"outer\".\"caseNo\")::bpchar =\n(\"P\".\"caseNo\")::bpchar))\n -> Hash (cost=3.47..3.47 rows=1 width=8) (never\nexecuted)\n -> Index Scan using\n\"WccaPermCaseType_ProfileName\" on \"WccaPermCaseType\" \"WPCT\" \n(cost=0.00..3.47 rows=1 width=8) (never executed)\n Index Cond: (((\"profileName\")::text =\n'PUBLIC'::text) AND ((\"caseType\")::bpchar = 'PA'::bpchar) AND\n((\"countyNo\")::smallint = 66))\n -> Index Scan using \"DocImageMetaData_pkey\" on\n\"DocImageMetaData\" \"D\" (cost=0.00..7.40 rows=1 width=18) (never\nexecuted)\n Index Cond: (((\"D\".\"countyNo\")::smallint = 66) 
AND\n((\"outer\".\"caseNo\")::bpchar = (\"D\".\"caseNo\")::bpchar))\n Filter: (\"isEFiling\" AND ((\"insertedDate\")::date >=\n'2006-01-01'::date) AND ((\"insertedDate\")::date <= '2006-01-07'::date))\n Total runtime: 107.860 ms\n(18 rows)\n\n", "msg_date": "Wed, 01 Feb 2006 14:50:51 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner reluctant to start from subquery" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Since you were so confident it couldn't be the outer join, I went\n> looking for what else I changed at the same time. I eliminated the code\n> referencing that table, which contained an OR. I've seen ORs cause\n> nasty problems with optimizers in the past. I took out the OR in the\n> where clause, without eliminating that last outer join, and it optimized\n> fine.\n\nI don't think that OR is relevant either, since again it's present in\nboth the well-optimized and badly-optimized variants that you posted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Feb 2006 15:53:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner reluctant to start from subquery " }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes [offlist]:\n> Attached is a pg_dump -c file with only the required rows (none of\n> which contain confidential data), and 0.1% of the rows from the larger\n> tables. It does show the same pattern of costing and plan choice.\n\nThanks for the test case. The first thing I found out was that HEAD\ndoes generate the fast plan from the IN case, while 8.1 does not, and\nafter a bit of digging the reason became clear. The initial state\nthat the planner starts from is essentially\n\n\tSELECT ... FROM ((C JOIN P) LEFT JOIN WPCT) IN-JOIN D\n\n(IN-JOIN being a notation for the way the planner thinks about IN, which\nis that it's a join with some special runtime behavior). The problem\nwith this is that outer joins don't always commute with other joins,\nand up through 8.1 we didn't have any code to analyze whether or not\nre-ordering outer joins is safe. So we never did it at all. HEAD does\nhave such code, and so it is able to re-order the joins enough to\ngenerate the fast plan, which is essentially\n\n\tSELECT ... FROM ((C IN-JOIN D) JOIN P) LEFT JOIN WPCT\n\nThis is why eliminating the OUTER JOIN improved things for you. Your\nmanual rearrangement into a JOIN-with-GROUP-BY inside the OUTER JOIN\nessentially duplicates the IN-JOIN rearrangement that HEAD is able to\ndo for itself.\n\nBTW, the reason why getting rid of the OR improved matters is that:\n(a) with the \"WPCT\".\"profileName\" IS NOT NULL part as a top-level WHERE\nclause, the planner could prove that it could reduce the OUTER JOIN to\na JOIN (because no null-extended row would pass that qual), whereupon\nit had join order flexibility again.\n(b) with the \"C\".\"caseType\" = 'PA' AND \"C\".\"isConfidential\" = false\npart as a top-level WHERE clause, there still wasn't any join order\nflexibility, but this added restriction on C reduced the number of C\nrows enough that there wasn't a performance problem anyway.\n\nSo it's all fairly clear now what is happening. 
The good news is we\nhave this fixed for 8.2, the bad news is that that patch is much too\nlarge to consider back-patching.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Feb 2006 11:58:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner reluctant to start from subquery " } ]
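For anyone stuck on 8.1, the practical takeaway is the rewrite Kevin arrived at: lift the IN subquery into the FROM clause so it is no longer forced to be the last join. In schematic form (a, b and c are placeholders for Case, DocImageMetaData and WccaPermCaseType; b.flag stands for the date and isEFiling restrictions):

    -- 8.1 cannot reorder this around the LEFT JOIN:
    SELECT a.*
      FROM a
      LEFT JOIN c ON (c.id = a.c_id)
     WHERE a.id IN (SELECT b.a_id FROM b WHERE b.flag);

    -- Equivalent form that lets 8.1 start from the small set:
    SELECT a.*
      FROM (SELECT b.a_id FROM b WHERE b.flag GROUP BY b.a_id) AS bb
      JOIN a ON (a.id = bb.a_id)
      LEFT JOIN c ON (c.id = a.c_id);

The GROUP BY plays the role of the IN's implicit de-duplication; on 8.2 and later the reordering happens automatically, as Tom describes.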
[ { "msg_contents": "Hi,\n\nI have 2 tables both have an index on ID (both ID columns are an oid).\n\nI want to find only only rows in one and not the other.\n\nSelect ID from TableA where ID not IN ( Select ID from Table B)\n\nThis always generates sequential scans.\n\nTable A has about 250,000 rows. Table B has about 250,000 Rows.\n\nWe should get a Scan on Table B and a Index Lookup on Table A.\n\nIs there any way to force this? enable_seqscan off doesn't help at all.\n\nThe Plan is\n\nSeq Scan on tablea(cost=100000000.00..23883423070450.96 rows=119414 width=4)\n Filter: (NOT (subplan))\"\n SubPlan -> \n Seq Scan on tableb (cost=100000000.00..100004611.17 rows=242617 \nwidth=4)\n\n\nThanks\nRalph\n\n\n\n", "msg_date": "Thu, 02 Feb 2006 09:12:59 +1300", "msg_from": "Ralph Mason <[email protected]>", "msg_from_op": true, "msg_subject": "Index Usage using IN " }, { "msg_contents": "On Thu, 2006-02-02 at 09:12 +1300, Ralph Mason wrote:\n> Hi,\n> \n> I have 2 tables both have an index on ID (both ID columns are an oid).\n> \n> I want to find only only rows in one and not the other.\n> \n> Select ID from TableA where ID not IN ( Select ID from Table B)\n\nHave you considered this:\n\nSELECT ID from TableA EXCEPT Select ID from Table B\n\n?\n\n-jwb\n\n", "msg_date": "Wed, 01 Feb 2006 12:22:50 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Usage using IN" }, { "msg_contents": "On Wed, 2006-02-01 at 12:22 -0800, Jeffrey W. Baker wrote:\n> On Thu, 2006-02-02 at 09:12 +1300, Ralph Mason wrote:\n> > Hi,\n> > \n> > I have 2 tables both have an index on ID (both ID columns are an oid).\n> > \n> > I want to find only only rows in one and not the other.\n> > \n> > Select ID from TableA where ID not IN ( Select ID from Table B)\n> \n> Have you considered this:\n> \n> SELECT ID from TableA EXCEPT Select ID from Table B\n\nAlternately:\n\n SELECT a.ID \n FROM TableA AS a \nLEFT JOIN TableB AS b \n ON a.ID = b.ID \n WHERE b.ID IS NULL\n\n-jwb\n", "msg_date": "Wed, 01 Feb 2006 12:28:19 -0800", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Usage using IN" }, { "msg_contents": "\"Jeffrey W. Baker\" <[email protected]> writes:\n> On Thu, 2006-02-02 at 09:12 +1300, Ralph Mason wrote:\n>> Select ID from TableA where ID not IN ( Select ID from Table B)\n\n> Have you considered this:\n\n> SELECT ID from TableA EXCEPT Select ID from Table B\n\nAlso, increasing work_mem might persuade the planner to try a hashed\nsubplan, which'd be a lot better than what you have. Note that it's\nquite unlikely that indexes are going to help for this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Feb 2006 15:41:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Usage using IN " }, { "msg_contents": "On Thu, Feb 02, 2006 at 09:12:59 +1300,\n Ralph Mason <[email protected]> wrote:\n> Hi,\n> \n> I have 2 tables both have an index on ID (both ID columns are an oid).\n> \n> I want to find only only rows in one and not the other.\n> \n> Select ID from TableA where ID not IN ( Select ID from Table B)\n> \n> This always generates sequential scans.\n> \n> Table A has about 250,000 rows. 
Table B has about 250,000 Rows.\n> \n> We should get a Scan on Table B and a Index Lookup on Table A.\n\nI don't think that is going to work if there are NULLs in table B.\nI don't know whether or not Postgres has code to special case NULL testing\n(either for constraints ruling them out, or doing probes for them in addition\nto the key it is trying to match) for doing NOT IN. Just doing a simple\nindex probe into table A isn't going to tell you all you need to know if\nyou don't find a match.\n", "msg_date": "Wed, 1 Feb 2006 15:23:03 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Usage using IN" }, { "msg_contents": "\nSelect ID from TableA where not exists ( Select ID from Table B where ID \n= TableA.ID)\nmight give you index scan. Of course, that is only useful is TableA is \nvery small table.\nNot appropriate for 250k rows\n\non 2/1/2006 12:12 PM Ralph Mason said the following:\n> Hi,\n>\n> I have 2 tables both have an index on ID (both ID columns are an oid).\n>\n> I want to find only only rows in one and not the other.\n>\n> Select ID from TableA where ID not IN ( Select ID from Table B)\n>\n> This always generates sequential scans.\n>\n> Table A has about 250,000 rows. Table B has about 250,000 Rows.\n>\n> We should get a Scan on Table B and a Index Lookup on Table A.\n>\n> Is there any way to force this? enable_seqscan off doesn't help at all.\n>\n> The Plan is\n>\n> Seq Scan on tablea(cost=100000000.00..23883423070450.96 rows=119414 \n> width=4)\n> Filter: (NOT (subplan))\"\n> SubPlan -> Seq Scan on tableb (cost=100000000.00..100004611.17 \n> rows=242617 width=4)\n>\n>\n> Thanks\n> Ralph\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n", "msg_date": "Wed, 01 Feb 2006 14:12:54 -0800", "msg_from": "Hari Warrier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index Usage using IN" } ]
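A minimal side-by-side of the rewrites suggested in the thread above, reusing the poster's TableA / TableB / ID names; everything else is an illustrative sketch, and which form wins depends on the PostgreSQL version, the statistics and work_mem:

-- NOT IN, as in the original query: planned as a per-row subplan unless
-- work_mem is large enough for a hashed subplan, and it returns no rows
-- at all if TableB.ID contains a NULL.
SELECT ID FROM TableA WHERE ID NOT IN (SELECT ID FROM TableB);

-- Set difference; note that EXCEPT also removes duplicate IDs.
SELECT ID FROM TableA EXCEPT SELECT ID FROM TableB;

-- Anti-join spelled as an outer join, as suggested in the thread.
SELECT a.ID
  FROM TableA a
  LEFT JOIN TableB b ON b.ID = a.ID
 WHERE b.ID IS NULL;

-- Correlated NOT EXISTS; unlike NOT IN it is not tripped up by NULLs.
SELECT a.ID
  FROM TableA a
 WHERE NOT EXISTS (SELECT 1 FROM TableB b WHERE b.ID = a.ID);

On two 250,000-row tables the outer-join and EXCEPT forms can be satisfied with one pass over each table plus a hash or merge step, which is normally far cheaper than a quarter of a million subplan probes.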
[ { "msg_contents": "As I recall, the idea behind vacuum_threshold was to prevent\ntoo-frequent vacuuming of small tables. I'm beginning to question this\nreasoning:\n\nSmall tables vacuum very, very quickly, so 'extra' vacuuming is very\nunlikely to hurt system performance.\n\nSmall tables are most likely to have either very few updates (ie: a\n'lookup table') or very frequent updates (ie: a table implementing a\nqueue). In the former, even with vacuum_threshold = 0 vacuum will be a\nvery rare occurance. In the later case, a high threshold is likely to\ncause a large amount of un-nececcasry bloat.\n\nAlso, vacuum_scale_factor of 0.4 seems unreasonably large. It means\ntables will be 40% dead space, which seems excessively wasteful.\nSomething between 0.1 and 0.2 seems much better.\n\nHas anyone looked at how effective these two settings are?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Wed, 1 Feb 2006 15:16:33 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Default autovacuum settings too conservative" }, { "msg_contents": "Jim C. Nasby wrote:\n> Small tables are most likely to have either very few updates (ie: a\n> 'lookup table') or very frequent updates (ie: a table implementing a\n> queue). In the former, even with vacuum_threshold = 0 vacuum will be a\n> very rare occurance. In the later case, a high threshold is likely to\n> cause a large amount of un-nececcasry bloat.\n\nWell a threshold of 0 won't work because then a 0 tuple table will get \nvacuumed every time. Or at least autovacuum needs to special case this.\n\n> Also, vacuum_scale_factor of 0.4 seems unreasonably large. It means\n> tables will be 40% dead space, which seems excessively wasteful.\n> Something between 0.1 and 0.2 seems much better.\n\nDepends on the app and the usage patterns as to what too much slack \nspace is.\n\n> Has anyone looked at how effective these two settings are?\n\nAs far I as I know, we are still looking for real world feedback. 8.1 \nis the first release to have the integrated autovacuum. The thresholds \nin 8.1 are a good bit less conservative than the thresholds in the \ncontrib version. The contrib thresholds were universally considered WAY \nto conservative, but that was somewhat necessary since you couldn't set \nthem on a per table basis as you can in 8.1. If we continue to hear \nfrom people that the current 8.1 default thresholds are still to \nconservative we can look into lowering them.\n\nI think the default settings should be designed to minimize the impact \nautovacuum has on the system while preventing the system from ever \ngetting wildly bloated (also protect xid wraparound, but that doesn't \nhave anything to do with the thresholds).\n\nMatt\n", "msg_date": "Wed, 01 Feb 2006 16:37:07 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Default autovacuum settings too conservative" }, { "msg_contents": "\nOn Feb 1, 2006, at 4:37 PM, Matthew T. O'Connor wrote:\n\n> As far I as I know, we are still looking for real world feedback. \n> 8.1 is the first release to have the integrated autovacuum. The \n> thresholds in 8.1 are a good bit less conservative than the \n> thresholds in the contrib version. 
The contrib thresholds were \n> universally considered WAY to conservative, but that was somewhat \n> necessary since you couldn't set them on a per table basis as you \n> can in 8.1. If we continue to hear from people that the current \n> 8.1 default thresholds are still to conservative we can look into \n> lowering them.\n\nI spent the weekend researching and pondering this topic as well.\n\nFor me the per-table tuning is vital, since I have some tables that \nare very small and implement a queue (ie, update very often several \nmillion times per day and have at most 10 or so rows), some that are \nfairly stable with O(10k) rows which update occasionally, and a \ncouple of tables that are quite large: 20 million rows which updates \na few million times per day and inserts a few thousand, and another \ntable with ~275 million rows in which we insert and update roughly 3 \nmillion per day.\n\nThe 40% overhead would kill these large tables both in terms of \nperformance and disk usage. I'm pondering a global 10% and having the \nbig tables at or below 1% based on the rate of change.\n\nIs there a way to make the autovacuum process log more verbosely \nwhile leaving the rest of the logging minimal? This would help tune it.\n\n", "msg_date": "Mon, 6 Feb 2006 15:00:12 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "[email protected] (\"Matthew T. O'Connor\") writes:\n> I think the default settings should be designed to minimize the\n> impact autovacuum has on the system while preventing the system from\n> ever getting wildly bloated (also protect xid wraparound, but that\n> doesn't have anything to do with the thresholds).\n\nThat would suggest setting the \"base threshold\"\nautovacuum_vacuum_threshold relatively low, and the \"scale factor\"\nautovacuum_vacuum_scale_factor fairly high.\n-- \nlet name=\"cbbrowne\" and tld=\"ntlug.org\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/nonrdbms.html\nI think it may be possible to simplify and condense the content of\nthis thread somewhat:\n \"GX is an ex-API. It is no longer supported\" - The Rest of Us\n \"No it isn't. It's just pining for the fjords!\" - Lawson\n-- Michael Paquette\n", "msg_date": "Mon, 06 Feb 2006 17:15:16 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "On Wed, Feb 01, 2006 at 04:37:07PM -0500, Matthew T. O'Connor wrote:\n> I think the default settings should be designed to minimize the impact \n> autovacuum has on the system while preventing the system from ever \n> getting wildly bloated (also protect xid wraparound, but that doesn't \n> have anything to do with the thresholds).\n\nI don't really see the logic behind that. Problems caused by inadequate\nvacuuming seem to be much more prevalent than problems caused by vacuum\nimpacting the system. If vacuum impact is a concern I think it more\nreasonable to make the default vacuum_cost_delay non-zero instead.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 6 Feb 2006 23:05:45 -0600", "msg_from": "\"Jim C. 
Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Default autovacuum settings too conservative" }, { "msg_contents": "> On Wed, Feb 01, 2006 at 04:37:07PM -0500, Matthew T. O'Connor wrote:\n>> I think the default settings should be designed to minimize the impact \n>> autovacuum has on the system while preventing the system from ever \n>> getting wildly bloated (also protect xid wraparound, but that doesn't \n>> have anything to do with the thresholds).\n>\n> I don't really see the logic behind that. Problems caused by inadequate\n> vacuuming seem to be much more prevalent than problems caused by vacuum\n> impacting the system. If vacuum impact is a concern I think it more\n> reasonable to make the default vacuum_cost_delay non-zero instead.\n\nThat's a good point.\n\nI would not be keen, on the other hand, on having the delays terribly\nhigh.\n\nBig tables, if delayed significantly, will take plenty longer to\nvacuum, and I always get paranoid about long running transactions :-).\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in name ^ \"@\" ^ tld;;\nhttp://linuxdatabases.info/info/postgresql.html\nThis login session: $13.99\n", "msg_date": "Mon, 06 Feb 2006 22:14:53 -0800", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "On Mon, Feb 06, 2006 at 10:14:53PM -0800, Christopher Browne wrote:\n> > On Wed, Feb 01, 2006 at 04:37:07PM -0500, Matthew T. O'Connor wrote:\n> >> I think the default settings should be designed to minimize the impact \n> >> autovacuum has on the system while preventing the system from ever \n> >> getting wildly bloated (also protect xid wraparound, but that doesn't \n> >> have anything to do with the thresholds).\n> >\n> > I don't really see the logic behind that. Problems caused by inadequate\n> > vacuuming seem to be much more prevalent than problems caused by vacuum\n> > impacting the system. If vacuum impact is a concern I think it more\n> > reasonable to make the default vacuum_cost_delay non-zero instead.\n> \n> That's a good point.\n> \n> I would not be keen, on the other hand, on having the delays terribly\n> high.\n> \n> Big tables, if delayed significantly, will take plenty longer to\n> vacuum, and I always get paranoid about long running transactions :-).\n\nVery true, but I'd hope anyone running a table large enough for this to\nmake a difference would have done some tuning of their own...\n\nWhat we really need is a replacement for vacuum_delay that takes\nPostgreSQL generated IO activity into account...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 7 Feb 2006 01:26:21 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "Hi, Jim,\n\nJim C. 
Nasby wrote:\n\n> What we really need is a replacement for vacuum_delay that takes\n> PostgreSQL generated IO activity into account...\n\nThere are also other ideas which can make vacuum less painfull:\n\n- Use a \"delete\"-map (like the free space map) so vacuum can quickly\nfind the pages to look at.\n\n- Have vacuum end its transaction after a certain amount of work, and\nrestart at the same page later.\n\n- Have vacuum full search good candidates with non-stopping lock (and\nusage of delete-map and fsm), then doing {lock, recheck, move, unlock}\nin small amounts of data with delay between.\n\n- Introducing some load measurement, and a pressure measurement (number\nof deleted rows, TID wraparound etc.). Then start vacuum when load is\nlow or pressure is very high. Tune other parameters (like \"certain\namount of work\" depending on those measures.\n\nAll of them are a lot of code to hack, but although I'm not a postgresql\ncore developer, I am keen enough to invite you to send patches. :-)\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Tue, 07 Feb 2006 13:39:34 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "On Mon, Feb 06, 2006 at 11:05:45PM -0600, Jim C. Nasby wrote:\n>I don't really see the logic behind that. Problems caused by inadequate\n>vacuuming seem to be much more prevalent than problems caused by vacuum\n>impacting the system. \n\nAgreed. If your tables are large enough that a vacuum matters, you \nprobably shouldn't be blindly running autovacuum anyway.\n\nMike Stone\n", "msg_date": "Tue, 07 Feb 2006 09:20:08 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "On Tue, Feb 07, 2006 at 01:39:34PM +0100, Markus Schaber wrote:\n> Hi, Jim,\n> \n> Jim C. Nasby wrote:\n> \n> > What we really need is a replacement for vacuum_delay that takes\n> > PostgreSQL generated IO activity into account...\n> \n> There are also other ideas which can make vacuum less painfull:\n> \n> - Use a \"delete\"-map (like the free space map) so vacuum can quickly\n> find the pages to look at.\n\nAlready on TODO.\n\n> - Have vacuum end its transaction after a certain amount of work, and\n> restart at the same page later.\n\nAFAIK this isn't possible with the current way vacuum works.\n\n> - Have vacuum full search good candidates with non-stopping lock (and\n> usage of delete-map and fsm), then doing {lock, recheck, move, unlock}\n> in small amounts of data with delay between.\n\nThis isn't an issue of locks, it's an issue of long-running\ntransactions. It *might* be possible for vacuum to break work into\nsmaller transactions, but I'm pretty sure that would be a non-trivial\namount of hacking.\n\n> - Introducing some load measurement, and a pressure measurement (number\n> of deleted rows, TID wraparound etc.). Then start vacuum when load is\n> low or pressure is very high. Tune other parameters (like \"certain\n> amount of work\" depending on those measures.\n\nWhich is essentially what I was suggesting...\n\n> All of them are a lot of code to hack, but although I'm not a postgresql\n> core developer, I am keen enough to invite you to send patches. 
:-)\n\nWell, if you know C then you're already 1 step closer to being able to\nchange these kinds of things than I am.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 7 Feb 2006 19:03:39 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Tue, Feb 07, 2006 at 01:39:34PM +0100, Markus Schaber wrote:\n> \n>>Hi, Jim,\n>>\n>>Jim C. Nasby wrote:\n>>\n>>\n>>>What we really need is a replacement for vacuum_delay that takes\n>>>PostgreSQL generated IO activity into account...\n>>\n>>There are also other ideas which can make vacuum less painfull:\n>>\n>>- Use a \"delete\"-map (like the free space map) so vacuum can quickly\n>>find the pages to look at.\n> \n> \n> Already on TODO.\n> \n> \n>>- Have vacuum end its transaction after a certain amount of work, and\n>>restart at the same page later.\n> \n> \n> AFAIK this isn't possible with the current way vacuum works.\n\nThere was a patch posted for this in the 8.0 cycle, but it was said to\nbe not useful. I think it's possibly useful for large tables and with\nautovac only.\n\n> \n> \n>>- Have vacuum full search good candidates with non-stopping lock (and\n>>usage of delete-map and fsm), then doing {lock, recheck, move, unlock}\n>>in small amounts of data with delay between.\n> \n> \n> This isn't an issue of locks, it's an issue of long-running\n> transactions. It *might* be possible for vacuum to break work into\n> smaller transactions, but I'm pretty sure that would be a non-trivial\n> amount of hacking.\n\nWhen tables are tracked individually for wraparound, the longest \ntransaction required for vacuuming will be one to vacuum one table. \nWith delete-map and other functions, the time for that transaction may \nbe reduced. Partial vacuum of large tables is an option, but again \nrequires some real smarts in the autovac code to track wraparound issues.\n\n> \n> \n>>- Introducing some load measurement, and a pressure measurement (number\n>>of deleted rows, TID wraparound etc.). Then start vacuum when load is\n>>low or pressure is very high. Tune other parameters (like \"certain\n>>amount of work\" depending on those measures.\n> \n> \n> Which is essentially what I was suggesting...\n> \n> \n>>All of them are a lot of code to hack, but although I'm not a postgresql\n>>core developer, I am keen enough to invite you to send patches. :-)\n> \n> \n> Well, if you know C then you're already 1 step closer to being able to\n> change these kinds of things than I am.\n\nRegards\n\nRussell Smith\n", "msg_date": "Wed, 08 Feb 2006 12:49:54 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "> Jim C. Nasby wrote:\n>> On Tue, Feb 07, 2006 at 01:39:34PM +0100, Markus Schaber wrote:\n>>\n>>>Hi, Jim,\n>>>\n>>>Jim C. 
Nasby wrote:\n>>>\n>>>\n>>>>What we really need is a replacement for vacuum_delay that takes\n>>>>PostgreSQL generated IO activity into account...\n>>>\n>>>There are also other ideas which can make vacuum less painfull:\n>>>\n>>>- Use a \"delete\"-map (like the free space map) so vacuum can quickly\n>>>find the pages to look at.\n>> Already on TODO.\n>>\n>>>- Have vacuum end its transaction after a certain amount of work, and\n>>>restart at the same page later.\n>> AFAIK this isn't possible with the current way vacuum works.\n>\n> There was a patch posted for this in the 8.0 cycle, but it was said to\n> be not useful. I think it's possibly useful for large tables and with\n> autovac only.\n\nI could see it being useful in an autovac perspective. Work on a\ntable for a while, giving up after some period of time, but without\ngiving up on having done some work.\n\n>>>- Have vacuum full search good candidates with non-stopping lock (and\n>>>usage of delete-map and fsm), then doing {lock, recheck, move, unlock}\n>>>in small amounts of data with delay between.\n>> This isn't an issue of locks, it's an issue of long-running\n>> transactions. It *might* be possible for vacuum to break work into\n>> smaller transactions, but I'm pretty sure that would be a non-trivial\n>> amount of hacking.\n\nRight. And part of the trouble is that you lose certainty that you\nhave covered off transaction wraparound.\n\n> When tables are tracked individually for wraparound, the longest\n> transaction required for vacuuming will be one to vacuum one\n> table. With delete-map and other functions, the time for that\n> transaction may be reduced. Partial vacuum of large tables is an\n> option, but again requires some real smarts in the autovac code to\n> track wraparound issues.\n\nUnfortunately, \"delete-map\" *doesn't* help you with the wraparound\nproblem. The point of the \"delete map\" or \"vacuum space map\" is to\nallow the VACUUM to only touch the pages known to need vacuuming.\n\nAt some point, you still need to walk through the whole table (touched\nparts and untouched) in order to make sure that the old tuples are\nfrozen.\n\nTracking tables individually does indeed help by making the longest\ntransaction be the one needed for the largest table. Unfortunately,\nthat one can't lean on the \"delete map\"/\"vacuum space map\" to ignore\nparts of the table :-(.\n-- \n(reverse (concatenate 'string \"gro.gultn\" \"@\" \"enworbbc\"))\nhttp://linuxdatabases.info/info/slony.html\n\"Access to a COFF symbol table via ldtbread is even less abstract,\n really sucks in general, and should be banned from earth.\"\n -- SCSH 0.5.1 unix.c\n", "msg_date": "Tue, 07 Feb 2006 20:12:10 -0800", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "Hi, Christopher,\n\nChristopher Browne wrote:\n\n> Right. And part of the trouble is that you lose certainty that you\n> have covered off transaction wraparound.\n\nYes. Vacuum (full) serve at least four purposes:\n\n- TID wraparound prevention\n- obsolete row removal\n- table compaction\n- giving space back to the OS by truncating files\n\nWhile the first one needs full table sweeps, the others don't. And from\nmy personal experience, at least the obsolete row removal is needed much\nmore frequently than TID wraparound prevention.\n\n>>When tables are tracked individually for wraparound, the longest\n>>transaction required for vacuuming will be one to vacuum one\n>>table. 
With delete-map and other functions, the time for that\n>>transaction may be reduced. Partial vacuum of large tables is an\n>>option, but again requires some real smarts in the autovac code to\n>>track wraparound issues.\n> \n> Unfortunately, \"delete-map\" *doesn't* help you with the wraparound\n> problem. The point of the \"delete map\" or \"vacuum space map\" is to\n> allow the VACUUM to only touch the pages known to need vacuuming.\n> \n> At some point, you still need to walk through the whole table (touched\n> parts and untouched) in order to make sure that the old tuples are\n> frozen.\n\nPreventing transaction ID wraparound needs a guaranteed full table sweep\nduring a vacuum run, but not necessarily in a single transaction. It\nshould be possible to divide this full table sweep into smaller chunks,\neach of them in its own transaction.\n\nIt will certainly be necessary to block e. G. simultaneous VACUUMs,\nCLUSTERs or other maintainance commands for the whole VACUUM run, but\nnormal SELECT, INSERT and UPDATE statement should be able to interleave\nwith the VACUUM transaction.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 08 Feb 2006 12:05:10 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "Hi, Mahesh,\n\nMahesh Shinde wrote:\n\n> Does vacuum improves the performance of the database search.. As if now I\n> have a table who is having a records 70 lac and daily appx 10-15 thousand\n> rows get added. so please let me know which type of vacuum I should prefer.\n> I am accessing a data using java application which is hosted on the same\n> database server.\n\nI don't know what \"70 lac\" means.\n\nBut if you only add to the table, and never update or delete, vacuum\nbrings nothing for performance. (Although it is necessary for TID\nwraparound prevention.)\n\nHowever, if your often do range queries on an index that does not\ncorrespond to the insertion order, you may benefit from CLUSTERing on\nthat index from time to time.\n\n\n\nHth,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n", "msg_date": "Wed, 08 Feb 2006 14:38:08 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "Markus Schaber wrote:\n>>Does vacuum improves the performance of the database search.. As if now I\n>>have a table who is having a records 70 lac and daily appx 10-15 thousand\n>>rows get added. so please let me know which type of vacuum I should prefer.\n>>I am accessing a data using java application which is hosted on the same\n>>database server.\n> \n> I don't know what \"70 lac\" means.\n\nOne lac (also spelt \"lakh\") is one hundred thousand. And one crore is \nten million. 
Indians count differently from the rest of the world :-).\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen [email protected]\nProximity Pty Ltd http://www.proximity.com.au/\n", "msg_date": "Thu, 09 Feb 2006 10:31:09 +1100", "msg_from": "Tim Allen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" }, { "msg_contents": "Hi, Tim,\n\nTim Allen schrieb:\n>> I don't know what \"70 lac\" means.\n> One lac (also spelt \"lakh\") is one hundred thousand. And one crore is\n> ten million. Indians count differently from the rest of the world :-).\n\nOkay, so he talks about 7 million rows.\n\nThank you.\n\nMarkus\n", "msg_date": "Thu, 09 Feb 2006 01:21:01 +0100", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Default autovacuum settings too conservative" } ]
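To make the knobs in this thread concrete, here is a hedged sketch of a less conservative 8.1-style setup. The numeric values are examples only, the shipped defaults should be checked against the documentation for the version in use, and the pg_autovacuum column list is quoted from memory (verify it with \d pg_autovacuum before relying on it):

# postgresql.conf, global autovacuum settings (illustrative values)
autovacuum = on
autovacuum_vacuum_scale_factor = 0.1     # instead of the 0.4 discussed above
autovacuum_analyze_scale_factor = 0.05
autovacuum_vacuum_cost_delay = 10        # throttle vacuum I/O rather than vacuuming less often

-- Per-table overrides in 8.1 go through the pg_autovacuum system catalog;
-- big_table is a hypothetical name and -1 means "fall back to the global value".
INSERT INTO pg_autovacuum
       (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
        anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
VALUES ('big_table'::regclass, true, 1000, 0.01, 500, 0.005, 10, -1);

This is the shape of the per-table tuning Vivek describes: a small scale factor for the multi-million-row tables, the global defaults for everything else, and a non-zero cost delay so that the extra vacuuming is paid for in elapsed time rather than in I/O spikes.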
[ { "msg_contents": "Hi,\n\nI'm fairly new to PostgreSQL. I was trying pgbench , but could not\nunderstand the output . Can anyone help me out to understand the output of\npgbench\n\n\n----Pradeep\n\nHi,I'm fairly new to PostgreSQL. I was trying pgbench , but could not understand the output . Can anyone help me out to understand the output of pgbench----Pradeep", "msg_date": "Thu, 2 Feb 2006 12:39:59 +0530", "msg_from": "Pradeep Parmar <[email protected]>", "msg_from_op": true, "msg_subject": "pgbench output" }, { "msg_contents": "Well, it tells you how many transactions per second it was able to do.\nDo you have specific questions?\n\nOn Thu, Feb 02, 2006 at 12:39:59PM +0530, Pradeep Parmar wrote:\n> Hi,\n> \n> I'm fairly new to PostgreSQL. I was trying pgbench , but could not\n> understand the output . Can anyone help me out to understand the output of\n> pgbench\n> \n> \n> ----Pradeep\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Mon, 6 Feb 2006 23:06:26 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench output" }, { "msg_contents": "Hi All,\n\nHere are some of the results i got after performing pgbench marking between\npostgresql 7.4.5 and postgresql 8.1.2. having parameters with same values in\nthe postgresql.conf file.\n\npostgres@machine:/newdisk/postgres/data> /usr/local/pgsql7.4.5/bin/pgbench\n-c 10 -t 10000 regression\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 80.642615 (including connections establishing)\ntps = 80.650638 (excluding connections establishing)\n\npostgres@machine:/newdisk/postgres/data> /usr/local/pgsql/bin/pgbench -c 10\n-t 10000 regression\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 124.134926 (including connections establishing)\ntps = 124.148749 (excluding connections establishing)\n\nConclusion : So please correct me if i am wrong ... this result set shows\nthat the postgresql version 8.1.2 has perform better than 7.4.5 in the\nbench marking process since 8.1.2 was able to complete more transcations per\nsecond successfully .\n\n\nOn 2/7/06, Jim C. Nasby <[email protected]> wrote:\n\n> Well, it tells you how many transactions per second it was able to do.\n> Do you have specific questions?\n>\n> On Thu, Feb 02, 2006 at 12:39:59PM +0530, Pradeep Parmar wrote:\n> > Hi,\n> >\n> > I'm fairly new to PostgreSQL. I was trying pgbench , but could not\n> > understand the output . Can anyone help me out to understand the output\n> of\n> > pgbench\n> >\n> >\n> > ----Pradeep\n>\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n\n\n--\nBest,\nGourish Singbal\n\n \nHi All,\n \nHere are some of the results i got after performing pgbench marking between postgresql 7.4.5 and postgresql 8.1.2. 
having parameters with same values in the postgresql.conf file.\n \npostgres@machine:/newdisk/postgres/data> /usr/local/pgsql7.4.5/bin/pgbench -c 10 -t 10000 regressionstarting vacuum...end.transaction type: TPC-B (sort of)\nscaling factor: 10number of clients: 10number of transactions per client: 10000number of transactions actually processed: 100000/100000tps = 80.642615 (including connections establishing)tps = 80.650638\n (excluding connections establishing)\n\npostgres@machine:/newdisk/postgres/data> /usr/local/pgsql/bin/pgbench -c 10 -t 10000 regressionstarting vacuum...end.transaction type: TPC-B (sort of)\nscaling factor: 10number of clients: 10number of transactions per client: 10000number of transactions actually processed: 100000/100000tps = 124.134926 (including connections establishing)tps = 124.148749\n (excluding connections establishing)\nConclusion : So please correct me if i am wrong ... this result set shows that the postgresql version  8.1.2 has perform better than 7.4.5 in the bench marking process since 8.1.2 was able to complete more transcations per second successfully . \n\nOn 2/7/06, Jim C. Nasby <[email protected]> wrote:\n\nWell, it tells you how many transactions per second it was able to do.Do you have specific questions?\nOn Thu, Feb 02, 2006 at 12:39:59PM +0530, Pradeep Parmar wrote:> Hi,>> I'm fairly new to PostgreSQL. I was trying pgbench , but could not> understand the output . Can anyone help me out to understand the output of\n> pgbench>>> ----Pradeep--Jim C. Nasby, Sr. Engineering Consultant      [email protected] Software      \nhttp://pervasive.com    work: 512-231-6117vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster-- Best,Gourish Singbal", "msg_date": "Fri, 10 Feb 2006 21:11:00 +0530", "msg_from": "Gourish Singbal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench output" } ]
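For readers trying to reproduce or interpret runs like the ones above, a typical pgbench session first builds the test tables at a chosen scaling factor and then runs the clients; the commands below are a sketch matching the scale 10 / 10 clients / 10000 transactions shown in the results (the regression database already exists in Gourish's case):

pgbench -i -s 10 regression       # -i initializes the test tables, -s 10 gives roughly 1,000,000 rows in accounts
pgbench -c 10 -t 10000 regression

The two tps figures differ only in whether the time spent opening the ten client connections is counted. One caveat when comparing runs: each TPC-B-style transaction updates one row of the small branches table, which has only "scaling factor" rows, so running with as many clients as the scaling factor (10 and 10 here) measures row-lock contention on branches as much as raw server throughput.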
[ { "msg_contents": "Hi everyone,\n\nI have a question concerning the size of an index...\nWhat I acually did was bulid a btree index on an smallint attribute\nwithin a table with 10^8 rows. The table itself is app. 10GB large and\nwhat I would like to have the smallest possible indeces. Unfortunately\nthe current size is about 2GB per indexed column (8 columns are indexed\nin total) which is too large if the planner is supposed to choose a\nbitmap scan between all of the indices.\n\nSo what I would like to know is the following:\nIs there an easy way to tell postgres to occupy the index pages up to\n100 %?\nI am working in a decision support system so inserts/deletes etc. do\nnormally not happen at all?\n\n\nThanks,\n\nTschak\n\n", "msg_date": "2 Feb 2006 06:03:01 -0800", "msg_from": "\"tschak\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index occupancy" }, { "msg_contents": "\"tschak\" <[email protected]> writes:\n> I have a question concerning the size of an index...\n> What I acually did was bulid a btree index on an smallint attribute\n> within a table with 10^8 rows. The table itself is app. 10GB large and\n> what I would like to have the smallest possible indeces. Unfortunately\n> the current size is about 2GB per indexed column (8 columns are indexed\n> in total) which is too large if the planner is supposed to choose a\n> bitmap scan between all of the indices.\n\n> So what I would like to know is the following:\n> Is there an easy way to tell postgres to occupy the index pages up to\n> 100 %?\n\nNo, but even if there were it wouldn't make much of a difference. The\nminimum possible size of a PG index is about 16 bytes per entry, which\nwould still put you at 1.6Gb for that many rows.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 04 Feb 2006 19:49:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index occupancy " } ]
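A quick sanity check of the numbers in this thread: at Tom's figure of roughly 16 bytes per entry, 10^8 rows put a floor of about 1.6 GB under each single-column index, so the observed 2 GB leaves little room to win by packing pages tighter. On 8.1, where the size functions are available, the per-index sizes can be checked directly; big_table below is a placeholder for the real table name:

SELECT c.relname AS index_name,
       pg_size_pretty(pg_relation_size(c.oid)) AS on_disk_size
  FROM pg_index i
  JOIN pg_class c ON c.oid = i.indexrelid
 WHERE i.indrelid = 'big_table'::regclass
 ORDER BY pg_relation_size(c.oid) DESC;

If several of the eight indexed columns are usually filtered together, one multicolumn index (or simply dropping the least-used single-column indexes on this read-mostly data) may save more space than any amount of page packing.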
[ { "msg_contents": "Using a separate lock table is what we've decided to do in this\nparticular case to serialize #1 and #3. Inserters don't take this lock\nand as such will not be stalled. \n\n> -----Original Message-----\n> From: Markus Schaber [mailto:[email protected]] \n> Sent: Thursday, February 02, 2006 7:44 AM\n> To: Marc Morin\n> Cc: [email protected]\n> Subject: Re: [PERFORM] partitioning and locking problems\n> \n> Hi, Marc,\n> \n> Marc Morin wrote:\n> \n> > \t1- long running report is running on view\n> > \t2- continuous inserters into view into a table via a rule\n> > \t3- truncate or rule change occurs, taking an exclusive lock.\n> > Must wait for #1 to finish.\n> > \t4- new reports and inserters must now wait for #3.\n> > \t5- now everyone is waiting for a single query in #1. Results\n> > in loss of insert data granularity (important for our application).\n> \n> Apart from having two separate views (one for report, one for \n> insert) as Richard suggested:\n> \n> If you have fixed times for #3, don't start any #1 that won't \n> finish before it's time for #3.\n> \n> You could also use the LOCK command on an empty lock table at \n> the beginning of each #1 or #3 transaction to prevent #3 from \n> getting the view lock before #1 is finished.\n> \n> \n> HTH,\n> Markus\n> \n> --\n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n> \n> Fight against software patents in EU! www.ffii.org \n> www.nosoftwarepatents.org\n> \n> \n", "msg_date": "Thu, 2 Feb 2006 11:27:38 -0500", "msg_from": "\"Marc Morin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioning and locking problems" }, { "msg_contents": "On Thu, 2006-02-02 at 11:27 -0500, Marc Morin wrote:\n\n> > > \t1- long running report is running on view\n> > > \t2- continuous inserters into view into a table via a rule\n> > > \t3- truncate or rule change occurs, taking an exclusive lock.\n> > > Must wait for #1 to finish.\n> > > \t4- new reports and inserters must now wait for #3.\n> > > \t5- now everyone is waiting for a single query in #1. Results\n> > > in loss of insert data granularity (important for our application).\n\n> Using a separate lock table is what we've decided to do in this\n> particular case to serialize #1 and #3. Inserters don't take this lock\n> and as such will not be stalled. \n\nWould it not be simpler to have the Inserters change from one table to\nanother either upon command, on a fixed timing cycle or even better\nbased upon one of the inserted values (Logdate?) (or all 3?). (Requires\nchanges in the application layer: 3GL or db functions).\n\nThe truncates can wait until the data has stopped being used.\n\nI'd be disinclined to using the locking system as a scheduling tool.\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Tue, 07 Feb 2006 22:09:02 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and locking problems" }, { "msg_contents": "At 05:09 PM 2/7/2006, Simon Riggs wrote:\n\n>I'd be disinclined to using the locking system as a scheduling tool.\nI Agree with Simon. 
Using the locking system for scheduling feels \nlike a form of Programming by Side Effect.\n\nRon \n\n\n", "msg_date": "Tue, 07 Feb 2006 17:22:22 -0500", "msg_from": "Ron <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and locking problems" }, { "msg_contents": "On Tue, Feb 07, 2006 at 10:09:02PM +0000, Simon Riggs wrote:\n> On Thu, 2006-02-02 at 11:27 -0500, Marc Morin wrote:\n> \n> > > > \t1- long running report is running on view\n> > > > \t2- continuous inserters into view into a table via a rule\n> > > > \t3- truncate or rule change occurs, taking an exclusive lock.\n> > > > Must wait for #1 to finish.\n> > > > \t4- new reports and inserters must now wait for #3.\n> > > > \t5- now everyone is waiting for a single query in #1. Results\n> > > > in loss of insert data granularity (important for our application).\n> \n> > Using a separate lock table is what we've decided to do in this\n> > particular case to serialize #1 and #3. Inserters don't take this lock\n> > and as such will not be stalled. \n> \n> Would it not be simpler to have the Inserters change from one table to\n> another either upon command, on a fixed timing cycle or even better\n> based upon one of the inserted values (Logdate?) (or all 3?). (Requires\n> changes in the application layer: 3GL or db functions).\n\nUnfortunately, AFAIK rule changes would suffer from the exact same\nproblem, which will be a serious issue for table partitioning. If you\ntry and add a new partition while a long report is running you'll end up\nblocking everything.\n\nALso, IIRC the OP was trying *not* to have the locking system impose\nscheduling. I believe the intention is that either 1 not block 3 or 3\nnot block 4.\n\nI'm honestly somewhat surprised someone hasn't run into this problem\nwith partitioning yet; or maybe everyone who needs to do long\ntransactions just shoves those off to slony slaves...\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n", "msg_date": "Tue, 7 Feb 2006 18:59:12 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and locking problems" }, { "msg_contents": "On Tue, 2006-02-07 at 18:59 -0600, Jim C. Nasby wrote:\n\n> I'm honestly somewhat surprised someone hasn't run into this problem\n> with partitioning yet; or maybe everyone who needs to do long\n> transactions just shoves those off to slony slaves...\n\nAll DDL takes locks, on all DBMS.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 08 Feb 2006 08:30:29 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioning and locking problems" } ]
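Since the final choice in this thread was a separate lock table, here is a minimal sketch of that pattern; every object name is illustrative and the lock modes shown are just one workable combination:

-- One-off setup: a table that exists only to be locked.
CREATE TABLE report_serialize (dummy integer);

-- Long-running report (step 1): SHARE mode does not conflict with itself,
-- so any number of reports can run concurrently.
BEGIN;
LOCK TABLE report_serialize IN SHARE MODE;
-- ... run the report against the view ...
COMMIT;

-- Truncate / rule change (step 3): EXCLUSIVE conflicts with SHARE, so this
-- waits for the running reports, and reports arriving later queue behind it.
BEGIN;
LOCK TABLE report_serialize IN EXCLUSIVE MODE;
-- ... TRUNCATE the partition or swap the rule here ...
COMMIT;

Inserters never touch report_serialize, which is exactly the property Marc is after. On releases that have them (8.2 and later), advisory locks can play the same role without needing a dummy table.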